Abstract: Crop classification using remote sensing data has emerged as a prominent research area in recent decades. Studies have demonstrated that fusing SAR and optical images can significantly enhance classification accuracy. However, a major challenge in this field is the limited availability of training data, which adversely affects classifier performance. In agricultural regions, one or two crop types typically dominate, while other crops are scarce. Consequently, when training samples are collected to map agricultural products, samples from the dominant crops are abundant and form the majority classes, whereas samples from the other crops are scarce and form the minority classes. Traditional data generation methods have been employed to tackle this imbalance in the training data, but they remain limited in how effectively they handle the minority classes. In this research, we explore the effectiveness of the conditional tabular generative adversarial network (CTGAN), a deep-learning-based synthetic data generation method, in addressing the challenge of limited training data for minority classes in crop classification using fused SAR and optical data. Our findings demonstrate that the proposed method generates higher-quality synthetic data that can significantly increase the number of minority-class samples, leading to better performance of crop classifiers.
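As a concrete illustration of the augmentation step described above, the minimal sketch below fits CTGAN on minority-class samples and draws synthetic rows to rebalance the training table. It uses the open-source `ctgan` package; the file path, feature columns, and crop label are hypothetical placeholders, not values from this study.

```python
# Minimal sketch: minority-class augmentation with CTGAN (pip install ctgan).
import pandas as pd
from ctgan import CTGAN

# Hypothetical training table: per-sample features from fused SAR
# (e.g., VV/VH backscatter) and optical (e.g., NDVI) data plus a crop label.
train = pd.read_csv("fused_sar_optical_features.csv")  # placeholder path

# Select one hypothetical minority class to augment.
minority = train[train["crop_type"] == "canola"]

# Fit CTGAN on the minority samples; 'crop_type' is a categorical column.
model = CTGAN(epochs=300)
model.fit(minority, discrete_columns=["crop_type"])

# Generate synthetic minority samples and append them to the training set.
synthetic = model.sample(1000)
balanced_train = pd.concat([train, synthetic], ignore_index=True)
```

A classifier trained on `balanced_train` then sees many more minority-class examples than the raw, imbalanced table would provide.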
Abstract: Synergistic use of sensors for soil moisture retrieval is attracting considerable interest because different sensors offer complementary advantages. Integrating active microwave, passive microwave, and optical data could be a comprehensive solution for exploiting these advantages when preparing soil moisture maps. Typically, pixel-based methods are used for multi-sensor fusion; however, since different applications require soil moisture maps at different scales, pixel-based approaches are limited for this purpose. Object-based image analysis, which operates on image objects rather than individual pixels, can meet this need. This paper proposes a segment-based image fusion framework to evaluate the feasibility of preparing a multi-scale soil moisture map by integrating Sentinel-1, Sentinel-2, and Soil Moisture Active Passive (SMAP) data. The results confirmed that the proposed methodology improved soil moisture estimation at different scales by up to 20% compared with the pixel-based fusion approach.
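The sketch below illustrates the segment-based idea in its simplest form: derive image objects from the optical scene, then aggregate each sensor per segment rather than per pixel before any downstream estimation. It assumes co-registered, same-grid NumPy arrays for the three sensors; the array names and segmentation parameters are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: object-based (segment-level) multi-sensor aggregation,
# assuming co-registered arrays resampled to a common grid.
import numpy as np
from skimage.segmentation import slic

def segment_features(s2_rgb, s1_vv, smap_sm, n_segments=500):
    # Derive image objects from the Sentinel-2 optical scene.
    labels = slic(s2_rgb, n_segments=n_segments, compactness=10.0)
    feats = []
    for seg_id in np.unique(labels):
        mask = labels == seg_id
        # Aggregate each sensor over the segment instead of per pixel.
        feats.append([
            s2_rgb[mask].mean(),        # mean optical reflectance
            s1_vv[mask].mean(),         # mean Sentinel-1 VV backscatter
            np.nanmean(smap_sm[mask]),  # mean resampled SMAP soil moisture
        ])
    return labels, np.array(feats)
```

Varying `n_segments` changes the object size, which is what allows the fused soil moisture product to be produced at different scales from the same inputs.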