Department of Electrical Engineering, National Cheng Kung University; Miin Wu School of Computing, National Cheng Kung University
Abstract: Change detection (CD) is a critical remote sensing technique for identifying changes in the Earth's surface over time. The outstanding substance identifiability of hyperspectral images (HSIs) has significantly enhanced detection accuracy, making hyperspectral change detection (HCD) an essential technology. The detection accuracy can be further upgraded by leveraging the graph structure of HSIs, motivating us to adopt graph neural networks (GNNs) for solving HCD. For the first time, this work introduces the quantum deep network (QUEEN) into HCD. Unlike GNNs and CNNs, both of which extract affine-computing features, QUEEN provides fundamentally different unitary-computing features. We demonstrate that, through the unitary feature extraction procedure, QUEEN provides radically new information for deciding whether there is a change or not. Hierarchically, a graph feature learning (GFL) module exploits the graph structure of the bitemporal HSIs at the superpixel level, while a quantum feature learning (QFL) module learns quantum features at the pixel level, as a complement to GFL that preserves detailed pixel-level spatial information not retained in the superpixels. In the final classification stage, a quantum classifier is designed to cooperate with a traditional fully connected classifier. The superior HCD performance of the proposed QUEEN-empowered GNN (i.e., QUEEN-G) is experimentally demonstrated on real hyperspectral datasets.
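To make the contrast between affine-computing and unitary-computing features concrete, below is a minimal numerical sketch (our illustration, not the authors' QUEEN implementation): an ordinary affine map, as used in CNN/GNN layers, next to a unitary map parameterized by exponentiating a skew-Hermitian matrix. The dimension and the particular parameterization are assumptions made purely for illustration.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 8                                   # feature dimension (illustrative)
x = rng.standard_normal(d)              # a pixel-level feature vector

# Affine-computing feature (as in CNN/GNN layers): y = W x + b
W = rng.standard_normal((d, d))
b = rng.standard_normal(d)
y_affine = W @ x + b

# Unitary-computing feature: y = U x with U unitary, built from a
# skew-Hermitian generator A (so U = exp(A) satisfies U^H U = I)
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
A = (A - A.conj().T) / 2                # make A skew-Hermitian
U = expm(A)
y_unitary = U @ x

# Unlike a generic affine map, a unitary map preserves the vector norm
print(np.linalg.norm(x), np.linalg.norm(y_unitary))   # equal up to rounding
```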
Abstract: Multispectral unmixing (MU) is critical due to the inevitable mixed-pixel phenomenon caused by the limited spatial resolution of typical multispectral images in remote sensing. However, MU mathematically corresponds to an underdetermined blind source separation problem and is thus highly challenging, which has deterred researchers from tackling it. Previous MU works all ignore the underdetermined issue and merely consider scenarios with more bands than sources. This work attempts to resolve the underdetermined issue by further conducting a light-splitting task using a network-inspired virtual prism; as this task is itself challenging, we achieve it by incorporating advanced quantum feature extraction techniques. We emphasize that the prism is virtual (allowing us to fix the spectral response as a simple deterministic matrix), so the virtual hyperspectral image (HSI) it generates need not correspond to any real hyperspectral sensor; in other words, it is good enough as long as the virtual HSI satisfies some fundamental properties of light splitting (e.g., non-negativity and continuity). With the above virtual quantum prism, the virtual HSI is expected to possess the desired simplex structure. This allows us to adopt convex geometry to unmix the spectra, followed by downsampling the pure spectra back to the multispectral domain, thereby achieving MU. Experimental evidence shows the great potential of our MU algorithm, termed prism-inspired multispectral endmember extraction (PRIME).
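As a rough illustration of the final step described above (downsampling pure spectra from the virtual HSI domain back to the multispectral domain through a fixed, deterministic spectral response), the following sketch uses a simple band-averaging response matrix R and synthetic endmembers; both are illustrative assumptions and not the exact operators used by PRIME.

```python
import numpy as np

rng = np.random.default_rng(1)
L_hsi, L_ms, N_src = 100, 8, 5          # virtual HSI bands, MS bands, sources (illustrative)

# A fixed, deterministic spectral response R that averages groups of
# virtual HSI bands into multispectral bands (one simple choice only).
R = np.zeros((L_ms, L_hsi))
edges = np.linspace(0, L_hsi, L_ms + 1).astype(int)
for i in range(L_ms):
    R[i, edges[i]:edges[i + 1]] = 1.0 / (edges[i + 1] - edges[i])

# Suppose endmembers have been extracted in the virtual HSI domain
# (non-negative, spectrally continuous columns); here they are synthetic.
E_hsi = np.abs(np.cumsum(rng.standard_normal((L_hsi, N_src)), axis=0))

# Downsample the pure spectra back to the multispectral domain.
E_ms = R @ E_hsi                        # (L_ms x N_src) multispectral endmembers
print(E_ms.shape)
```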
Abstract: The Transformer deep learning model has achieved remarkable success in hyperspectral image (HSI) restoration tasks by leveraging Spectral and Spatial Self-Attention (SA) mechanisms. However, applying these designs to remote sensing (RS) HSI restoration, which involves far more spectral bands than typical HSIs (e.g., the ICVL dataset with 31 bands), is challenging due to the enormous computational complexity of Spectral and Spatial SA. To address this problem, we propose Hyper-Restormer, a lightweight and effective Transformer-based architecture for RS HSI restoration. First, we introduce a novel Lightweight Spectral-Spatial (LSS) Transformer Block that utilizes both Spectral and Spatial SA to capture long-range dependencies of the input feature maps. Additionally, we employ a novel Lightweight Locally-enhanced Feed-Forward Network (LLFF) to further enhance local context information. The LSS Transformer Blocks then construct a Single-stage Lightweight Spectral-Spatial Transformer (SLSST) that cleverly utilizes the low-rank property of RS HSIs to decompose the feature maps into basis and abundance components, enabling Spectral and Spatial SA at low computational cost. Finally, the proposed Hyper-Restormer cascades several SLSSTs in a stepwise manner to progressively enhance the quality of RS HSI restoration from coarse to fine. Extensive experiments on various RS HSI restoration tasks, including denoising, inpainting, and super-resolution, demonstrate that the proposed Hyper-Restormer outperforms other state-of-the-art methods.
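The following sketch illustrates, in plain NumPy, how a low-rank basis/abundance split can reduce the cost of spectral self-attention, which is the principle the SLSST exploits; the dimensions, the truncated-SVD factorization, and the single-head attention are illustrative assumptions rather than the paper's exact block design.

```python
import numpy as np

rng = np.random.default_rng(2)
B, HW, r = 172, 64 * 64, 4              # bands, spatial positions, low rank (illustrative)
X = rng.standard_normal((B, HW))        # a feature map, flattened spatially

# Low-rank split via truncated SVD: X ~ E @ A with a spectral "basis" E
# (B x r) and spatial "abundance" A (r x HW), exploiting the low-rank
# property of RS HSIs.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
E = U[:, :r] * s[:r]                    # basis component
A = Vt[:r, :]                           # abundance component

# Spectral self-attention over the r-dimensional basis instead of all
# HW positions: the B x B attention map now costs O(B^2 r), not O(B^2 HW).
Q, K, V = E, E, E
scores = Q @ K.T / np.sqrt(r)
scores -= scores.max(axis=1, keepdims=True)   # numerical stability
attn = np.exp(scores)
attn /= attn.sum(axis=1, keepdims=True)
X_out = (attn @ V) @ A                  # map the attended basis back to full resolution
print(X_out.shape)                      # (B, HW)
```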
Abstract: This paper investigates two performance metrics, namely ergodic capacity and symbol error rate, of a mmWave communication system assisted by a reconfigurable intelligent surface (RIS). We assume independent and identically distributed (i.i.d.) Rician fading on the user-RIS and RIS-access point (AP) links, with the RIS consisting of passive reflecting elements. First, we derive a new unified closed-form formula for the average symbol error probability of generalised M-QAM/M-PSK signalling over this mmWave link. We then obtain new closed-form expressions for the ergodic capacity with and without channel state information (CSI) at the AP.
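The closed-form analysis itself is not reproduced here, but the underlying system model can be illustrated with a short Monte-Carlo sketch: i.i.d. Rician fading on both hops, a passive RIS whose phases are aligned to the cascaded channel, and ergodic capacity estimated by averaging. The element count, K-factor, and SNR below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, snr_db, trials = 64, 5.0, 10.0, 10_000   # elements, Rician K-factor, SNR (dB), runs (illustrative)
snr = 10 ** (snr_db / 10)

def rician(size, K):
    """i.i.d. Rician-faded complex gains with K-factor K and unit mean power."""
    los = np.sqrt(K / (K + 1))
    nlos = np.sqrt(1 / (K + 1)) * (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)
    return los + nlos

cap = 0.0
for _ in range(trials):
    h = rician(N, K)                    # user -> RIS
    g = rician(N, K)                    # RIS -> AP
    # Passive RIS with phases aligned to the cascaded channel: the
    # effective gain is the coherent sum of per-element magnitudes.
    a = np.sum(np.abs(h) * np.abs(g))
    cap += np.log2(1 + snr * a ** 2)

print("ergodic capacity estimate (bits/s/Hz):", cap / trials)
```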
Abstract: Terahertz (THz) technology has been a strong candidate for applications including pharmaceutical analysis, chemical identification, and remote sensing and imaging, owing to its non-invasive and non-destructive properties. Among those applications, penetrating-type hyperspectral THz signals, which provide crucial material information, normally involve a noisy, complex mixture system. Additionally, the measured THz signals can be ill-conditioned due to the overlap of material absorption peaks in the measured bands. To address those issues, we consider penetrating-type signal mixtures and aim to develop a blind hyperspectral unmixing (HU) method that does not require any information from a prebuilt database. The proposed HYperspectral Penetrating-type Ellipsoidal ReconstructION (HYPERION) algorithm is unsupervised, relying neither on collecting extensive data nor on sophisticated model training. Instead, it is developed based on elegant ellipsoidal geometry under a very mild requirement on data purity, and its excellent efficacy is experimentally demonstrated.
Abstract: In this paper, we derive asymptotic expressions for the ergodic capacity of the multiple-input multiple-output (MIMO) keyhole channel at low SNR in independent and identically distributed (i.i.d.) Nakagami-$m$ fading conditions with perfect channel state information available at both the transmitter (CSI-T) and the receiver (CSI-R). We show that the low-SNR capacity of this keyhole channel scales as $\frac{\textrm{SNR}}{4} \log^2 \left(1/{\textrm{SNR}}\right)$. Further, we develop a practically appealing On-Off transmission scheme that is asymptotically capacity-achieving at low SNR; it requires only one-bit CSI-T feedback and is robust against both mild and severe Nakagami-$m$ fading over a very wide range of low-SNR values. These results also extend to the Rayleigh keyhole MIMO channel as a special case.
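In display form, the low-SNR scaling stated above reads as follows; the limit notation is our standard reading of the asymptotic claim, not a quotation from the paper.

```latex
C(\mathrm{SNR}) \;\sim\; \frac{\mathrm{SNR}}{4}\,\log^{2}\!\left(\frac{1}{\mathrm{SNR}}\right)
\quad (\mathrm{SNR}\to 0),
\qquad\text{i.e.}\qquad
\lim_{\mathrm{SNR}\to 0}\;
\frac{C(\mathrm{SNR})}{\tfrac{\mathrm{SNR}}{4}\log^{2}(1/\mathrm{SNR})} \;=\; 1 .
```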
Abstract: Deep learning-based single-image super-resolution enables very fast and high-visual-quality reconstruction. Recently, the enhanced super-resolution generative adversarial network (ESRGAN) has achieved excellent performance in terms of both the qualitative and quantitative quality of the reconstructed high-resolution image. In this paper, we propose to add one more shortcut between two dense blocks, as well as shortcuts between convolution layers inside a dense block. This simple strategy of adding more shortcuts to the network enables a faster learning process, as gradient information can be back-propagated more easily. Based on the improved ESRGAN, a dual reconstruction scheme is proposed to learn different aspects of the super-resolved image, judiciously enhancing the quality of the reconstructed image. In practice, the super-resolution model is pre-trained solely on a pixel distance, followed by fine-tuning its parameters with adversarial and perceptual losses. Finally, we fuse the two models by weighted-summing their parameters to obtain the final super-resolution model. Experimental results demonstrate that the proposed method achieves excellent performance in the real-world image super-resolution challenge. We have also verified that the proposed dual reconstruction further improves the quality of the reconstructed image in terms of both PSNR and SSIM.
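The parameter-space fusion of the two trained models amounts to a weighted sum of their state dictionaries; the sketch below shows the idea with tiny stand-in networks and an illustrative weight alpha (not the value used in the paper).

```python
import torch
import torch.nn as nn

# Two trained super-resolution models with identical architecture are
# assumed; tiny stand-in networks are used here for illustration.
def make_model():
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 3, 3, padding=1))

model_pixel = make_model()              # e.g., pre-trained with a pixel-distance loss
model_gan = make_model()                # e.g., fine-tuned with adversarial + perceptual losses

alpha = 0.8                             # fusion weight (illustrative assumption)
fused_state = {
    k: alpha * model_gan.state_dict()[k] + (1 - alpha) * model_pixel.state_dict()[k]
    for k in model_gan.state_dict()
}

fused_model = make_model()
fused_model.load_state_dict(fused_state)
print(fused_model(torch.randn(1, 3, 32, 32)).shape)
```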
Abstract: This paper reviews the AIM 2019 challenge on real-world super-resolution. It focuses on the participating methods and final results. The challenge addresses the real-world setting, where paired true high- and low-resolution images are unavailable. For training, only one set of source input images is therefore provided in the challenge. In Track 1: Source Domain, the aim is to super-resolve such images while preserving the low-level image characteristics of the source input domain. In Track 2: Target Domain, a set of high-quality images is also provided for training, which defines the output domain and desired quality of the super-resolved images. To allow for quantitative evaluation, the source input images in both tracks are constructed using artificial but realistic image degradations. The challenge is the first of its kind, aiming to advance the state of the art and provide a standard benchmark for this newly emerging task. In total, 7 teams competed in the final testing phase, demonstrating new and innovative solutions to the problem.
Abstract: Consider a structured matrix factorization model in which one factor is restricted to have its columns lying in the unit simplex. This simplex-structured matrix factorization (SSMF) model and the associated factorization techniques have spurred much interest across different areas, such as hyperspectral unmixing in remote sensing and topic discovery in machine learning, to name a few. In this paper we develop a new theoretical SSMF framework whose idea is to study a maximum volume ellipsoid inscribed in the convex hull of the data points. This maximum volume inscribed ellipsoid (MVIE) idea has not been attempted in prior literature, and we show a sufficient condition under which the MVIE framework guarantees exact recovery of the factors. The sufficient recovery condition we show for MVIE is much more relaxed than that of separable non-negative matrix factorization (or pure-pixel search); coincidentally, it is also identical to that of the minimum volume enclosing simplex, which is known to be a powerful SSMF framework for non-separable problem instances. We also show that MVIE can be practically implemented by performing facet enumeration and then solving a convex optimization problem. The potential of the MVIE framework is illustrated by numerical results.
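For reference, one standard convex formulation of the maximum volume ellipsoid inscribed in a polytope is sketched below, with the convex hull of the data written in its facet representation $\{\mathbf{x} : \mathbf{g}_i^{\top}\mathbf{x} \le h_i,\ i=1,\dots,M\}$ as obtained by facet enumeration; the notation is ours, and the paper's exact MVIE problem (e.g., after dimension reduction) may differ.

```latex
\mathcal{E} \;=\; \{\, \mathbf{F}\mathbf{u} + \mathbf{c} \;:\; \|\mathbf{u}\|_2 \le 1 \,\},
\qquad
\begin{array}{cl}
\underset{\mathbf{F}\succeq\mathbf{0},\;\mathbf{c}}{\text{maximize}} & \log\det\mathbf{F} \\[2pt]
\text{subject to} & \|\mathbf{F}\mathbf{g}_i\|_2 + \mathbf{g}_i^{\top}\mathbf{c} \;\le\; h_i, \quad i = 1,\dots,M .
\end{array}
```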
Abstract: In blind hyperspectral unmixing (HU), the pure-pixel assumption is well known to be powerful in enabling simple and effective blind HU solutions. However, the pure-pixel assumption is not always satisfied in an exact sense, especially in scenarios where pixels are heavily mixed. In the no-pure-pixel case, a good blind HU approach to consider is the minimum volume enclosing simplex (MVES). Empirical experience has suggested that MVES algorithms can perform well without pure pixels, although it has not been totally clear from a theoretical viewpoint why this is true. This paper aims to address the latter issue. We develop an analysis framework wherein the perfect endmember identifiability of MVES is studied in the noiseless case. We prove that MVES is indeed robust against the lack of pure pixels, as long as the pixels are not too heavily mixed and not too asymmetrically spread. The theoretical results are verified by numerical simulations.
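For context, the MVES criterion analyzed here can be written as the following volume-minimization problem over candidate endmembers; the notation is ours (a standard statement), with $\mathbf{x}_t$ denoting the (dimension-reduced) pixel vectors and $\mathbf{b}_1,\dots,\mathbf{b}_N$ the simplex vertices, i.e., the endmember estimates.

```latex
\underset{\mathbf{b}_1,\dots,\mathbf{b}_N}{\text{minimize}}\;\;
\mathrm{vol}\big(\mathrm{conv}\{\mathbf{b}_1,\dots,\mathbf{b}_N\}\big)
\qquad\text{subject to}\qquad
\mathbf{x}_t \in \mathrm{conv}\{\mathbf{b}_1,\dots,\mathbf{b}_N\}, \quad t = 1,\dots,T .
```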