Abstract: Hyperspectral imaging, a rapidly evolving field, has witnessed the ascendancy of deep learning techniques, which have supplanted classical feature extraction and classification methods in many applications. However, many researchers employ arbitrary architectures for hyperspectral image processing, often without a rigorous analysis of the interplay between spectral and spatial information. This oversight neglects the implications of combining these two modalities on model performance. In this paper, we evaluate the performance of diverse deep learning architectures for hyperspectral image segmentation. Our analysis disentangles the impact of different architectures, spanning various spectral and spatial granularities. Specifically, we investigate the effects of spectral resolution (capturing spectral information) and spatial texture (conveying spatial details) on segmentation outcomes. Additionally, we explore the transferability of knowledge from large pre-trained image foundation models, originally designed for RGB images, to the hyperspectral domain. Results show that incorporating spatial information alongside spectral data improves segmentation performance, and that further work is needed on novel architectures combining spectral and spatial information and on the adaptation of RGB foundation models to the hyperspectral domain. Furthermore, we contribute to the field by cleaning and publicly releasing the Tecnalia WEEE Hyperspectral dataset. This dataset contains different non-ferrous fractions of Waste Electrical and Electronic Equipment (WEEE), including Copper, Brass, Aluminum, Stainless Steel, and White Copper, spanning the range of 400 to 1000 nm. We expect these conclusions to guide new researchers in the field of hyperspectral imaging.
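The abstract does not specify how the RGB foundation models are adapted to hyperspectral inputs; as background only, the sketch below shows one common adaptation strategy (inflating the pretrained first convolution to an arbitrary number of spectral bands) using a torchvision ResNet backbone. The backbone choice, band count, and weight-tiling scheme are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn as nn
from torchvision import models

def adapt_rgb_backbone(num_bands: int = 64) -> nn.Module:
    """Illustrative sketch: reuse an RGB-pretrained ResNet for hyperspectral
    input by replacing its first convolution and tiling the pretrained RGB
    kernels across the new spectral bands."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    old_conv = backbone.conv1  # Conv2d(3, 64, kernel_size=7, stride=2, padding=3)
    new_conv = nn.Conv2d(num_bands, old_conv.out_channels,
                         kernel_size=old_conv.kernel_size,
                         stride=old_conv.stride,
                         padding=old_conv.padding,
                         bias=False)
    with torch.no_grad():
        # Average the pretrained RGB filters, repeat them over all bands,
        # and rescale so activation magnitudes stay comparable to the RGB case.
        mean_w = old_conv.weight.mean(dim=1, keepdim=True)      # (64, 1, 7, 7)
        new_conv.weight.copy_(mean_w.repeat(1, num_bands, 1, 1) * 3.0 / num_bands)
    backbone.conv1 = new_conv
    return backbone

# Usage: a batch of two 64-band hyperspectral patches.
model = adapt_rgb_backbone(num_bands=64)
x = torch.randn(2, 64, 224, 224)
features = model(x)  # (2, 1000) logits from the original ImageNet head
```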
Abstract: Metric learning has become an attractive field of research in recent years. Loss functions such as contrastive loss, triplet loss, or multi-class N-pair loss have made it possible to train models capable of tackling complex scenarios with many classes and few images per class, not only for building classifiers but also for many other applications where measuring similarity is key. Deep neural networks trained via metric learning also offer the possibility of solving few-shot learning problems. State-of-the-art loss functions such as the triplet and contrastive losses still suffer from slow convergence caused by the difficulty of selecting effective training samples, an issue partially addressed by the multi-class N-pair loss, which simultaneously incorporates additional samples from the different classes. In this work, we extend the triplet and multi-class N-pair loss functions by proposing the constellation loss, in which the distances among all class combinations are learned simultaneously. We compare our constellation loss for visual class embedding and show that it outperforms the other methods, producing more compact clusters while achieving better classification results.
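As background for the loss functions named in the abstract, the sketch below implements the standard triplet loss, which the constellation loss extends; the constellation loss itself is not reproduced here. The margin, batch size, and embedding dimension are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor: torch.Tensor,
                 positive: torch.Tensor,
                 negative: torch.Tensor,
                 margin: float = 0.2) -> torch.Tensor:
    """Standard triplet loss: pull anchor-positive pairs together and push
    anchor-negative pairs apart by at least `margin` in embedding space."""
    d_pos = F.pairwise_distance(anchor, positive)  # distance to same-class sample
    d_neg = F.pairwise_distance(anchor, negative)  # distance to different-class sample
    return F.relu(d_pos - d_neg + margin).mean()

# Usage with illustrative 128-dimensional embeddings for a batch of 32 triplets.
emb = lambda: F.normalize(torch.randn(32, 128), dim=1)
loss = triplet_loss(emb(), emb(), emb())
```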