Abstract: Hyperspectral imaging systems based on multiple-beam interference (MBI), such as Fabry-Perot interferometry, are attracting interest due to their compact design, high throughput, and fine resolution. Unlike dispersive devices, which measure spectra directly, interferometric systems reconstruct the desired spectra from measured interferograms. Although the response of MBI devices is modeled by the Airy function, existing reconstruction techniques are often limited to Fourier-transform spectroscopy, which is tailored to two-beam interference (TBI). These methods impose limitations for MBI and are susceptible to non-idealities such as irregular sampling and noise, highlighting the need for an in-depth numerical framework. To fill this gap, we propose a rigorous taxonomy of TBI and MBI instrument descriptions and a unified Bayesian formulation that both embeds the descriptions found in the existing literature and accounts for real-world non-idealities of the acquisition process. Under this framework, we provide a comprehensive review of spectroscopic forward and inverse models. For the forward model, we offer a thorough analysis of the discretization of the continuous model and of the ill-posedness of the problem. For the inverse model, we extend the range of existing solutions for spectrum reconstruction, framing them as an optimization problem. Specifically, we provide a progressive comparative analysis of reconstruction methods, from more specific to more general scenarios, up to employing the proposed Bayesian framework with prior knowledge such as sparsity constraints. Experiments on simulated and real data demonstrate the framework's flexibility and noise robustness. The code is available at https://github.com/mhmdjouni/inverspyctrometry.
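To make the inverse-problem setting above concrete, the following is a minimal sketch (not the paper's released code) of sparsity-regularized spectrum reconstruction under an idealized Airy-type Fabry-Perot forward model, solved with ISTA. The discretization grids, reflectivity, and regularization weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretization grids (illustrative): wavenumbers and optical path differences.
sigmas = np.linspace(1.0, 2.5, 200)     # wavenumbers [1/um]
opds = np.linspace(0.0, 10.0, 150)      # optical path differences [um]

# Airy-type transfer matrix of an ideal Fabry-Perot etalon with reflectivity R:
# row m samples the device response at OPD delta_m over all wavenumbers.
R = 0.7
F = 4.0 * R / (1.0 - R) ** 2            # coefficient of finesse
A = 1.0 / (1.0 + F * np.sin(np.pi * np.outer(opds, sigmas)) ** 2)

# Sparse ground-truth spectrum (a few lines) and a noisy interferogram.
x_true = np.zeros(sigmas.size)
x_true[[40, 90, 155]] = [1.0, 0.6, 0.8]
y = A @ x_true + 0.01 * rng.standard_normal(opds.size)

# ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1 (sparsity prior).
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
x = np.zeros_like(x_true)
for _ in range(500):
    z = x - step * A.T @ (A @ x - y)    # gradient step on the data fidelity
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The soft-thresholding step is where a sparsity prior enters; swapping it for another proximal operator corresponds to assuming a different prior in the Bayesian formulation.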
Abstract: Hyperspectral unmixing represents mixed pixels as a set of pure materials weighted by their abundances. Spectral features alone are often insufficient, so it is common to rely on other features of the scene. Matrix models become insufficient when the hyperspectral image (HSI) is represented as a high-order tensor with additional features in a multimodal, multifeature framework. Tensor models such as canonical polyadic decomposition allow for this kind of unmixing but lack a general framework and interpretability of the results. In this article, we propose an interpretable methodological framework for low-rank multifeature hyperspectral unmixing based on tensor decomposition (MultiHU-TD) that incorporates the abundance sum-to-one constraint within an alternating optimization alternating direction method of multipliers (AO-ADMM) algorithm, and we provide in-depth mathematical, physical, and graphical interpretations as well as connections with the extended linear mixing model. As additional features, we propose to incorporate mathematical morphology and to reframe a previous work on neighborhood patches within MultiHU-TD. Experiments on real HSIs showcase the interpretability of the model and the analysis of the results. Python and MATLAB implementations are made available on GitHub.
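As a concrete illustration of how a sum-to-one constraint can be enforced inside ADMM, here is a minimal single-pixel sketch with a known endmember matrix. It is an illustrative stand-in, not the MultiHU-TD implementation; `proj_simplex` follows the standard sorting-based Euclidean projection onto the probability simplex.

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection of v onto {a : a >= 0, sum(a) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    k = np.arange(1, v.size + 1)
    rho = np.nonzero(u * k > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def fcls_admm(E, y, rho=1.0, n_iter=200):
    """Fully constrained least squares: min 0.5*||E a - y||^2, a on the simplex."""
    n = E.shape[1]
    z = np.full(n, 1.0 / n)
    u = np.zeros(n)
    # Precompute the solution operator of the quadratic a-subproblem.
    H = np.linalg.inv(E.T @ E + rho * np.eye(n))
    Ety = E.T @ y
    for _ in range(n_iter):
        a = H @ (Ety + rho * (z - u))   # quadratic data-fit subproblem
        z = proj_simplex(a + u)         # sum-to-one + nonnegativity projection
        u += a - z                      # dual update
    return z

rng = np.random.default_rng(0)
E = np.abs(rng.standard_normal((100, 4)))   # toy endmember spectra (assumed)
a_true = proj_simplex(rng.random(4))
y = E @ a_true + 0.01 * rng.standard_normal(100)
print("true:", a_true, "estimated:", fcls_admm(E, y))
```

MultiHU-TD applies the constraint within the alternating tensor-factor updates rather than per pixel, but this toy case already shows how ADMM decouples the quadratic fit from the simplex constraint.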
Abstract: In the last decade, novel hyperspectral cameras have been developed with the particularly desirable characteristics of compactness and short acquisition time, while retaining a spectral/spatial resolution competitive with that of traditional cameras. However, computational effort is required to recover an interpretable data cube. In this work, we focus on imaging spectrometers based on interferometry, for which the raw acquisition is an image whose spectral component is expressed as an interferogram. Previous works have addressed the inversion of such acquisitions on a pixel-by-pixel basis within a Bayesian framework, discarding critical information on the spatial structure of the image data cube. We address this problem by integrating a spatial regularization for image reconstruction, showing that the combination of spectral and spatial regularizers leads to enhanced performance with respect to the pixelwise case. We compare our results with Plug-and-Play techniques, whose strategy of injecting off-the-shelf denoisers can be implemented seamlessly within our physics-based formulation of the optimization problem.
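The Plug-and-Play idea mentioned above can be sketched in a few lines: alternate a gradient step on the pixelwise (spectral) data fidelity with a spatial denoiser applied band by band. The forward matrix, cube sizes, and the Gaussian filter used as a stand-in denoiser are all illustrative assumptions, not the paper's actual operators.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
H, W, B, M = 16, 16, 32, 48                  # spatial size, bands, OPD samples
A = rng.random((M, B))                       # stand-in interferometric response
x_true = np.clip(gaussian_filter(rng.random((H, W, B)), sigma=(2, 2, 0)), 0, 1)
y = np.einsum('mb,hwb->hwm', A, x_true)      # pixelwise interferogram cube
y += 0.01 * rng.standard_normal(y.shape)     # acquisition noise

# PnP-ISTA: gradient step on the spectral data fidelity, then a spatial
# denoiser applied band by band in place of a proximal operator.
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros_like(x_true)
for _ in range(100):
    residual = np.einsum('mb,hwb->hwm', A, x) - y
    x = x - step * np.einsum('mb,hwm->hwb', A, residual)  # adjoint step
    x = gaussian_filter(x, sigma=(0.8, 0.8, 0))           # plug-in denoiser

print("relative error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Replacing the Gaussian filter with a learned denoiser gives the usual PnP variants, while the physics-based data-fidelity term stays untouched.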