Abstract: Typically, the detection of marine debris relies on in-situ campaigns that require considerable human effort and offer limited spatial coverage. To meet the need for rapid detection of floating plastic, methods based on remote sensing data have recently been proposed. Their main limitation is the lack of a common reference for evaluating performance. Recently, the Marine Debris Archive (MARIDA) has been released as a standard dataset for developing and evaluating Machine Learning (ML) algorithms for the detection of marine plastic debris. MARIDA was created to simplify the comparison of detection solutions and to stimulate research on marine environment preservation. In this work, spectral-based solutions are assessed by evaluating their performance on the MARIDA dataset. The outcome highlights the need for a precise reference to enable fair evaluation.
Abstract: Tropical forests are a key component of the global carbon cycle. With upcoming space-borne missions such as BIOMASS planned for forest monitoring, several airborne campaigns, including TropiSAR and AfriSAR, have been successfully conducted. Typical Synthetic Aperture Radar Tomography (TomoSAR) methods involve complex models with low accuracy and high computational costs. In recent years, deep learning methods have also gained attention in the TomoSAR framework, showing promising performance. Recently, a solution based on a fully connected Tomographic Neural Network (TSNN) has demonstrated its effectiveness in accurately estimating forest and ground heights by exploiting the pixel-wise elements of the covariance matrix derived from TomoSAR data. This work goes beyond the pixel-wise approach and defines a context-aware deep learning solution named CATSNet. A convolutional neural network is used to leverage patch-based information and extract features from a neighborhood rather than from a single pixel. Training is carried out using TomoSAR data as input and Light Detection and Ranging (LiDAR) values as ground truth. The experimental results show striking advantages in both performance and generalization ability obtained by leveraging context information within Multiple Baseline (MB) TomoSAR data across different polarimetric modalities, surpassing existing techniques.
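As a rough illustration of the patch-based idea described above, the following sketch (in PyTorch, with hypothetical channel count and patch size, not the authors' CATSNet code) shows a small CNN that maps multi-channel covariance-matrix patches to a height estimate and is trained against LiDAR references.

```python
# Minimal sketch (not the authors' CATSNet): a patch-based CNN mapping
# multi-channel covariance-matrix patches to a height estimate. The channel
# count assumes real/imaginary parts of covariance elements stacked as channels.
import torch
import torch.nn as nn

class PatchHeightCNN(nn.Module):
    def __init__(self, in_channels=36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # aggregate the spatial context
        )
        self.head = nn.Linear(128, 1)       # regress one height per patch

    def forward(self, x):                   # x: (B, C, patch, patch)
        return self.head(self.features(x).flatten(1))

# Hypothetical training step: covariance patches as input, LiDAR heights as targets.
model = PatchHeightCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

patches = torch.randn(8, 36, 15, 15)    # placeholder TomoSAR covariance patches
lidar_heights = torch.randn(8, 1)       # placeholder LiDAR reference heights
optimizer.zero_grad()
loss = loss_fn(model(patches), lidar_heights)
loss.backward()
optimizer.step()
```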
Abstract: Deep learning (DL) has nowadays become an effective operational tool in remote sensing: it is widely used in applications such as change detection, image restoration, segmentation, detection, and classification. In the synthetic aperture radar (SAR) domain, the application of DL techniques is not straightforward because of the non-trivial interpretation of SAR images, mainly caused by the presence of speckle. Several deep learning solutions for SAR despeckling have been proposed in the last few years. Most of them focus on the definition of different network architectures while using similar cost functions that do not involve SAR image properties. In this paper, a convolutional neural network (CNN) with a multi-objective cost function accounting for the spatial and statistical properties of the SAR image is proposed. This is achieved through a dedicated loss function obtained as the weighted combination of three terms, each mainly devoted to one of the following SAR image characteristics: spatial details, speckle statistical properties, and strong-scatterer preservation. Their combination allows these effects to be balanced. Moreover, a specifically designed architecture is proposed to effectively extract distinctive features within the considered framework. Experiments on simulated and real SAR images show the accuracy of the proposed method compared to state-of-the-art despeckling algorithms, from both quantitative and qualitative points of view. Considering such SAR properties in the cost function is crucial for correct noise rejection and object preservation in different scenarios, such as homogeneous, heterogeneous, and extremely heterogeneous areas.
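The weighted three-term cost function can be sketched as follows; this is an illustrative assumption of how such a loss could be composed (the exact terms and weights are not those of the paper): a spatial-detail term, a speckle-statistics term on the ratio image, and a strong-scatterer preservation term.

```python
# Minimal sketch (not the paper's exact loss): a weighted three-term despeckling
# objective. Weights and term definitions are illustrative assumptions.
import torch
import torch.nn.functional as F

def composite_despeckling_loss(denoised, clean_ref, noisy,
                               w_spatial=1.0, w_stat=0.1, w_scatter=0.5,
                               eps=1e-6):
    # Spatial-detail term: pixel-wise fidelity to the (simulated) clean reference.
    l_spatial = F.mse_loss(denoised, clean_ref)

    # Statistical term: the estimated speckle (ratio image) should have unit mean,
    # as expected for fully developed multiplicative speckle.
    ratio = noisy / (denoised + eps)
    l_stat = (ratio.mean() - 1.0) ** 2

    # Strong-scatterer term: extra weight on the brightest pixels of the reference.
    mask = (clean_ref > clean_ref.mean() + 2 * clean_ref.std()).float()
    l_scatter = (mask * (denoised - clean_ref) ** 2).sum() / (mask.sum() + eps)

    return w_spatial * l_spatial + w_stat * l_stat + w_scatter * l_scatter
```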
Abstract: SAR images are affected by multiplicative noise that impairs their interpretation. In the last decades, several methods for SAR denoising have been proposed, and in recent years attention has largely moved towards deep learning based solutions. Building on our previously proposed convolutional neural network for SAR despeckling, here we investigate the effect of network complexity. More precisely, once the dataset is fixed, we analyse the network performance with respect to the number of layers and the number of features the network is composed of. Evaluations on simulated and real data are carried out. The results show that deeper networks generalize better on both simulated and real images.
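A minimal sketch of the complexity sweep, assuming a plain convolutional architecture (not necessarily the authors' exact network), where depth and feature width are the two parameters being varied:

```python
# Minimal sketch (assumed architecture): a despeckling CNN whose depth and
# number of features are constructor arguments, so the complexity analysis
# reduces to sweeping these two parameters on a fixed dataset.
import torch.nn as nn

def build_despeckling_cnn(num_layers=10, num_features=64):
    layers = [nn.Conv2d(1, num_features, 3, padding=1), nn.ReLU(inplace=True)]
    for _ in range(num_layers - 2):
        layers += [nn.Conv2d(num_features, num_features, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers += [nn.Conv2d(num_features, 1, 3, padding=1)]  # estimated clean image
    return nn.Sequential(*layers)

# Example sweep over network complexity (hypothetical settings).
configs = [(7, 32), (10, 64), (17, 64), (20, 128)]
models = {cfg: build_despeckling_cnn(*cfg) for cfg in configs}
```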
Abstract: SAR despeckling is a key tool for Earth observation. The interpretation of SAR images is impaired by speckle, a multiplicative noise related to the interference of the backscattering from the illuminated scene towards the sensor. Reducing this noise is a crucial task for the understanding of the scene. Building on the results of our previous solution, KL-DNN, in this work we define a new cost function for training a convolutional neural network for despeckling. The aim is to control edge preservation and to better filter man-made structures and urban areas, which are very challenging for KL-DNN. The results show a clear improvement on non-homogeneous areas while keeping the good results on homogeneous ones. Results on both simulated and real data are shown in the paper.
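One possible form of an edge-preservation term, given here only as an illustrative sketch (the paper's actual cost function may differ), penalizes differences between the gradients of the despeckled image and those of the reference:

```python
# Minimal sketch (illustrative, not the exact formulation of this work): an
# edge-preservation term comparing finite-difference gradients of the despeckled
# image with those of the reference, to keep man-made structures sharp.
import torch

def edge_preservation_loss(denoised, reference):
    def gradients(img):                   # simple horizontal/vertical differences
        dx = img[..., :, 1:] - img[..., :, :-1]
        dy = img[..., 1:, :] - img[..., :-1, :]
        return dx, dy
    dx_d, dy_d = gradients(denoised)
    dx_r, dy_r = gradients(reference)
    return (dx_d - dx_r).abs().mean() + (dy_d - dy_r).abs().mean()
```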
Abstract: Removing speckle noise from SAR images is still an open issue. It is well known that the interpretation of SAR images is very challenging, and despeckling algorithms are necessary to improve the ability to extract information. Urban environments make this task even harder because of the variety of structures and object scales. Following the recent spread of deep learning methods in several remote sensing applications, in this work a convolutional neural network based despeckling algorithm is proposed. The network is trained on simulated SAR data. The paper mainly focuses on the design of a cost function that takes into account both the spatial consistency of the image and the statistical properties of the noise.
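A minimal sketch of how simulated speckled training pairs can be generated, assuming fully developed multiplicative speckle drawn from a unit-mean Gamma distribution (an assumption; the paper's simulation procedure may differ):

```python
# Minimal sketch of simulated training data: multiplicative speckle with unit
# mean applied to a clean intensity image. For L-look intensity speckle the
# noise follows Gamma(shape=L, scale=1/L).
import numpy as np

def add_speckle(clean_intensity, num_looks=1, rng=None):
    rng = np.random.default_rng(rng)
    speckle = rng.gamma(shape=num_looks, scale=1.0 / num_looks,
                        size=clean_intensity.shape)
    return clean_intensity * speckle

clean = np.random.rand(256, 256)          # placeholder clean image
noisy = add_speckle(clean, num_looks=1)   # single-look simulated SAR intensity
```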
Abstract: In the SAR domain, many applications such as classification, detection, and segmentation are impaired by speckle. Hence, despeckling of SAR images is key to scene understanding. Despeckling filters usually face a trade-off between speckle suppression and information preservation. In recent years, deep learning solutions for speckle reduction have been proposed. One of the biggest issues for these methods is how to train a network given the lack of a clean reference. In this work, we propose a convolutional neural network based solution trained on simulated data, with a cost function that takes into account both spatial and statistical properties. The aim is twofold: overcoming the trade-off between speckle suppression and detail preservation, and finding a cost function suitable for despeckling with unsupervised learning. The algorithm is validated on both real and simulated data, showing promising performance.
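As an illustration of a reference-free cost combining spatial and statistical terms (an assumption, not the paper's formulation), one can match the moments of the ratio image to those expected for single-look speckle and add a total-variation term on the estimate:

```python
# Minimal sketch (illustrative only): a reference-free despeckling objective.
# The statistical term drives the ratio image (noisy / denoised) towards the
# moments of single-look multiplicative speckle (mean 1, variance 1); the
# total-variation term encourages spatial consistency of the estimate.
import torch

def unsupervised_despeckling_loss(denoised, noisy, w_tv=0.1, eps=1e-6):
    ratio = noisy / (denoised + eps)
    l_stat = (ratio.mean() - 1.0) ** 2 + (ratio.var() - 1.0) ** 2
    tv = (denoised[..., :, 1:] - denoised[..., :, :-1]).abs().mean() \
       + (denoised[..., 1:, :] - denoised[..., :-1, :]).abs().mean()
    return l_stat + w_tv * tv
```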