Université de Nice Sophia-Antipolis, CNRS, Nice, France
Abstract:Personal sound zone (PSZ) systems, which aim to create listening (bright) and silent (dark) zones in neighboring regions of space, often operate in time-varying acoustic environments. Conventional adaptive methods for PSZ tasks require collecting and processing the acoustic transfer functions~(ATFs) between all the matching microphones and all the loudspeakers in a centralized manner, which results in high computational complexity and demanding accuracy requirements. This paper presents a distributed pressure-matching (PM) method relying on diffusion adaptation (DPM-D) that spreads the computational load amongst nodes in order to overcome these issues. The global PM problem is defined as a sum of local costs, and the diffusion adaptation approach is then used to derive a distributed solution that only requires local information exchanges. Simulations over multiple frequency bins and a computational complexity analysis are conducted to evaluate the properties of the algorithm and to compare it with centralized counterparts.
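As a schematic illustration (our notation, not necessarily the exact formulation of the paper), the global pressure-matching cost is split into local costs over the $K$ nodes and minimized with adapt-then-combine diffusion steps:
\[
J(\mathbf{w}) = \sum_{k=1}^{K} \big\|\mathbf{p}_k^{\mathrm{d}} - \mathbf{H}_k \mathbf{w}\big\|^2,
\qquad
\boldsymbol{\psi}_k = \mathbf{w}_k + \mu\, \mathbf{H}_k^{\mathsf{H}}\big(\mathbf{p}_k^{\mathrm{d}} - \mathbf{H}_k \mathbf{w}_k\big),
\qquad
\mathbf{w}_k \leftarrow \sum_{\ell \in \mathcal{N}_k} a_{\ell k}\, \boldsymbol{\psi}_\ell,
\]
where $\mathbf{H}_k$ stacks the ATFs from all loudspeakers to the microphones of node $k$, $\mathbf{p}_k^{\mathrm{d}}$ is the desired pressure at those microphones, $\mathbf{w}$ gathers the loudspeaker filter weights, and $a_{\ell k}$ are combination weights over the neighborhood $\mathcal{N}_k$; only the intermediate estimates $\boldsymbol{\psi}_\ell$ need to be exchanged locally.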
Abstract:Distributed Acoustic Sensing (DAS), which transforms city-wide fiber-optic cables into a large-scale strain-sensing array, has shown the potential to revolutionize urban traffic monitoring by providing a fine-grained, scalable, and low-maintenance monitoring solution. However, two challenges limit DAS's real-world usage: noise contamination and interference among closely traveling cars. To address these issues, we introduce a self-supervised U-Net model that suppresses background noise and compresses car-induced DAS signals into high-resolution pulses through spatial deconvolution. To guide the design of the approach, we investigate the fiber response to vehicles through numerical simulation and field experiments. We show that the localized and narrow outputs of our model lead to accurate and highly resolved car position and speed tracking. We evaluate the effectiveness and robustness of our method on field recordings under different traffic conditions and various driving speeds. Our results show that our method enhances the spatial-temporal resolution and better resolves closely traveling cars. The spatial deconvolution U-Net model also enables the characterization of large vehicles, identifying axle counts and estimating vehicle length. Monitoring large vehicles also benefits deep-earth imaging by leveraging the surface waves induced by the dynamic vehicle-road interaction.
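A schematic view of the underlying measurement model (our notation, under a linear, shift-invariant fiber-response assumption; an illustration rather than the paper's exact formulation):
\[
d(x,t) \;\approx\; \int h(x - x')\, s(x', t)\, \mathrm{d}x' \;+\; n(x,t),
\]
where $d$ is the strain-rate record along the fiber coordinate $x$, $h$ the fiber response to a point load, $s$ the narrow car-induced source term, and $n$ the background noise. The U-Net performs the spatial deconvolution that recovers $s$, whose localized pulses underlie the reported gains in position and speed resolution.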
Abstract:Deconvolution is a widely used strategy to mitigate the blurring and noise degradations introduced in hyperspectral images~(HSI) by the acquisition devices. This issue is usually addressed by solving an ill-posed inverse problem. While investigating proper image priors can enhance the deconvolution performance, it is not trivial to handcraft a powerful regularizer and to set the regularization parameters. To address these issues, in this paper we introduce a tuning-free Plug-and-Play (PnP) algorithm for HSI deconvolution. Specifically, we use the alternating direction method of multipliers (ADMM) to decompose the optimization problem into two iterative sub-problems. A flexible blind 3D denoising network (B3DDN) is designed to learn deep priors and to solve the denoising sub-problem at different noise levels. A measure of 3D residual whiteness is then investigated to adjust the penalty parameters when solving the quadratic sub-problems, and to define a stopping criterion. Experimental results on both simulated and real-world data with ground truth demonstrate the superiority of the proposed method.
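As a minimal sketch of the generic Plug-and-Play ADMM splitting that such an algorithm builds on (placeholder operators, not the paper's B3DDN network or its whiteness-based tuning rule; hsi_blur, hsi_blur_adjoint and denoiser are assumed to be user-supplied):

```python
import numpy as np

def pnp_admm_deconv(y, hsi_blur, hsi_blur_adjoint, denoiser, rho=1.0, n_iter=50):
    """Minimal Plug-and-Play ADMM sketch for (hyperspectral) image deconvolution.

    y                : observed blurred, noisy data cube
    hsi_blur         : function applying the blur operator H
    hsi_blur_adjoint : function applying its adjoint H^T
    denoiser         : function z = denoiser(v, sigma) playing the role of the prior
    """
    x = y.copy()             # estimate of the sharp cube
    z = y.copy()             # auxiliary (denoised) variable
    u = np.zeros_like(y)     # scaled dual variable
    for _ in range(n_iter):
        # x-update: quadratic sub-problem ||H x - y||^2 + rho ||x - (z - u)||^2,
        # approximated here by a few gradient steps (a closed-form FFT solve is common)
        for _ in range(5):
            grad = hsi_blur_adjoint(hsi_blur(x) - y) + rho * (x - (z - u))
            x = x - 0.5 / (1.0 + rho) * grad
        # z-update: denoising sub-problem handled by the plugged-in denoiser
        z = denoiser(x + u, sigma=np.sqrt(1.0 / rho))
        # dual update
        u = u + x - z
        # a whiteness measure of the residual hsi_blur(x) - y could be monitored
        # here to adapt rho and to stop the iterations, as proposed in the paper
    return x
```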
Abstract:Hyperspectral and multispectral image fusion allows us to overcome an intrinsic hardware limitation of hyperspectral imaging systems, namely their low spatial resolution. Nevertheless, existing algorithms usually fail to consider realistic image acquisition conditions. This paper presents a general imaging model that accounts for inter-image variability of data from heterogeneous sources and allows for flexible image priors. The fusion problem is stated as an optimization problem in the maximum a posteriori framework. We introduce an original image fusion method that, on the one hand, solves this optimization problem accounting for inter-image variability with an iteratively reweighted scheme and, on the other hand, leverages light-weight CNN-based networks to learn realistic image priors from data. In addition, we propose a zero-shot strategy to directly learn the image-specific prior of the latent images in an unsupervised manner. The performance of the algorithm is illustrated with real data subject to inter-image variability.
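In schematic form (our notation, an illustration rather than the paper's exact formulation), the maximum a posteriori problem with inter-image variability reads
\[
\min_{\mathbf{Z},\,\boldsymbol{\Psi}} \ \|\mathbf{Y}_{h} - \mathbf{Z}\mathbf{B}\mathbf{S}\|_F^2 \;+\; \|\mathbf{Y}_{m} - \mathbf{R}(\mathbf{Z}+\boldsymbol{\Psi})\|_F^2 \;+\; \lambda\,\phi(\mathbf{Z}) \;+\; \gamma\,\rho(\boldsymbol{\Psi}),
\]
where $\mathbf{Z}$ is the latent high-resolution image, $\mathbf{B}$ and $\mathbf{S}$ model spatial blurring and downsampling, $\mathbf{R}$ is the spectral response of the multispectral sensor, $\boldsymbol{\Psi}$ captures the inter-image variability handled by the iteratively reweighted scheme through $\rho$, and $\phi$ is the image prior learned by the light-weight CNN.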
Abstract:We propose the adaptive random Fourier features Gaussian kernel LMS (ARFF-GKLMS). Like most kernel adaptive filters based on stochastic gradient descent, this algorithm uses a preset number of random Fourier features to save computation cost. As an extra flexibility, however, it adapts the inherent kernel bandwidth of the random Fourier features in an online manner. This adaptation mechanism alleviates the problem of selecting the kernel bandwidth beforehand, to the benefit of improved tracking in non-stationary environments. Simulation results confirm that the proposed algorithm achieves a performance improvement in terms of convergence rate, steady-state error and tracking ability over other kernel adaptive filters with a preset kernel bandwidth.
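A minimal sketch of a random-Fourier-feature Gaussian-kernel LMS with an online bandwidth update (illustrative only; the actual ARFF-GKLMS adaptation rule may differ):

```python
import numpy as np

def rff_gklms(X, d, n_features=100, mu_w=0.1, mu_sigma=0.01, sigma0=1.0):
    """Random-Fourier-feature Gaussian-kernel LMS with an adaptive bandwidth.

    X : (N, D) inputs, d : (N,) desired outputs. The frequencies are drawn
    once for a unit-bandwidth kernel and rescaled by 1/sigma at each step,
    so adapting sigma adapts the kernel width online.
    """
    N, D = X.shape
    rng = np.random.default_rng(0)
    omega = rng.standard_normal((D, n_features))    # unit-bandwidth frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)   # random phases
    w = np.zeros(n_features)                        # filter weights
    sigma = sigma0                                  # kernel bandwidth
    scale = np.sqrt(2.0 / n_features)
    err = np.zeros(N)
    for n in range(N):
        proj = X[n] @ omega / sigma + b
        z = scale * np.cos(proj)                    # random Fourier features
        e = d[n] - w @ z
        err[n] = e
        w = w + mu_w * e * z                        # LMS update of the weights
        # gradient of e^2 w.r.t. sigma: d(e^2)/dsigma = -2 e * (w . dz/dsigma),
        # with dz/dsigma = scale * sin(proj) * (X[n] @ omega) / sigma^2
        dz_dsigma = scale * np.sin(proj) * (X[n] @ omega) / sigma**2
        sigma = sigma + mu_sigma * 2.0 * e * (w @ dz_dsigma)
        sigma = max(sigma, 1e-3)                    # keep the bandwidth positive
    return w, sigma, err
```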
Abstract:Spectral unmixing is one of the most important quantitative analysis tasks in hyperspectral data processing. Conventional physics-based models offer clear interpretability. However, due to complex mixture mechanisms and their limited capacity to model nonlinearity, these models may not be accurate, especially when analyzing scenes with unknown physical characteristics. Data-driven methods have developed rapidly in recent years, in particular deep learning methods, as they possess a superior capability to model complex and nonlinear systems. Simply transferring these methods as black boxes to conduct unmixing, however, may lead to low physical interpretability and poor generalization. Consequently, several contributions have been dedicated to integrating the advantages of both physics-based models and data-driven methods. In this article, we present an overview of recent advances on this topic from several aspects, including deep neural network (DNN) structure design, prior capturing and loss design, and we summarize these methods within a common mathematical optimization framework. In addition, relevant remarks and discussions are made to provide further understanding of the methods and prospective improvements. The related source codes and data are collected and made available at http://github.com/xiuheng-wang/awesome-hyperspectral-image-unmixing.
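For reference, the physics-based starting point shared by most of the surveyed methods is the linear mixing model (nonlinear models extend it):
\[
\mathbf{y} = \mathbf{M}\mathbf{a} + \mathbf{n}, \qquad \mathbf{a} \succeq \mathbf{0}, \quad \mathbf{1}^{\!\top}\mathbf{a} = 1,
\]
where $\mathbf{y}$ is an observed pixel spectrum, $\mathbf{M}$ the endmember matrix, $\mathbf{a}$ the abundance vector subject to the non-negativity and sum-to-one constraints, and $\mathbf{n}$ additive noise. The surveyed contributions differ in which parts of this model, its priors, or the associated loss are replaced or augmented by learned components.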
Abstract:To overcome inherent hardware limitations of hyperspectral imaging systems with respect to their spatial resolution, fusion-based hyperspectral image (HSI) super-resolution is attracting increasing attention. This technique aims to fuse a low-resolution (LR) HSI with a conventional high-resolution (HR) RGB image in order to obtain an HR HSI. Recently, deep learning architectures have been used to address the HSI super-resolution problem and have achieved remarkable performance. However, they ignore the degradation model, even though this model has a clear physical interpretation and may contribute to improving the performance. We address this problem by proposing a method that, on the one hand, makes use of the linear degradation model in the data-fidelity term of the objective function and, on the other hand, utilizes the output of a convolutional neural network to design a deep prior regularizer in the spectral and spatial gradient domains. Experiments show the performance improvement achieved with this strategy.
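In schematic form (our notation, not the exact objective of the paper), the method minimizes a criterion of the type
\[
\min_{\mathbf{Z}} \ \|\mathbf{Y}_{\mathrm{LR}} - \mathbf{Z}\mathbf{B}\mathbf{S}\|_F^2 \;+\; \|\mathbf{Y}_{\mathrm{RGB}} - \mathbf{R}\mathbf{Z}\|_F^2 \;+\; \lambda \,\big\|\nabla\mathbf{Z} - \nabla\tilde{\mathbf{Z}}\big\|_F^2,
\]
where $\mathbf{Z}$ is the latent HR HSI, $\mathbf{B}$, $\mathbf{S}$ and $\mathbf{R}$ are the spatial blurring, downsampling and spectral response operators of the linear degradation model, $\tilde{\mathbf{Z}}$ is the CNN output serving as a deep prior, and $\nabla$ collects the spectral and spatial gradient operators in which the regularizer is expressed.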
Abstract:The recursive least-squares algorithm with $\ell_1$-norm regularization ($\ell_1$-RLS) exhibits excellent performance in terms of convergence rate and steady-state error when identifying sparse systems. Nevertheless, few works have studied its stochastic behavior, in particular its transient performance. In this letter, we derive analytical models of the transient behavior of the $\ell_1$-RLS in the mean and mean-square sense. Simulation results illustrate the accuracy of these models.
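For context, the weight estimate whose transient behavior is analyzed minimizes the standard exponentially weighted least-squares cost with an $\ell_1$ penalty (generic formulation; the regularization weighting used in the letter may differ):
\[
\mathbf{w}(n) = \arg\min_{\mathbf{w}} \ \sum_{i=1}^{n} \lambda^{\,n-i} \big(d(i) - \mathbf{w}^{\!\top}\mathbf{x}(i)\big)^2 + \gamma \,\|\mathbf{w}\|_1,
\]
with forgetting factor $\lambda \in (0,1]$ and regularization parameter $\gamma$. The derived models track the evolution of $\mathbb{E}\{\mathbf{w}(n)\}$ and of the mean-square deviation across iterations.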
Abstract:In many areas such as computational biology, finance or the social sciences, knowledge of an underlying graph explaining the interactions between agents is of paramount importance but remains challenging to obtain. Considering that these interactions may rely on nonlinear relationships adds further complexity to the topology inference problem. Among the latest methods that respond to this need is a topology inference method previously proposed by the authors, which estimates a possibly directed adjacency matrix in an online manner. In contrast with previous approaches based on linear models, the considered model is able to explain nonlinear interactions between the agents in a network. Its novelty lies in the use of a derivative-reproducing property to enforce network sparsity, while reproducing kernels are used to model the nonlinear interactions. The aim of this paper is to present a thorough convergence analysis of this method. The analysis is carried out both in the mean and mean-square sense. In addition, stability conditions are devised to ensure the convergence of the analyzed method.
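In schematic form (our notation, an illustrative sketch rather than the exact formulation of the analyzed method), each nodal signal is modeled as a nonlinear function of the other nodes, lying in a reproducing kernel Hilbert space $\mathcal{H}$:
\[
y_i(n) \;\approx\; f_i\big(\mathbf{x}(n)\big) + v_i(n), \qquad f_i \in \mathcal{H},
\]
and an edge from node $j$ to node $i$ is declared present when the partial derivative $\partial f_i / \partial x_j$ is significantly nonzero. Penalizing the norms of these partial derivatives, evaluated in closed form through the derivative-reproducing property of the kernel, is what promotes a sparse adjacency matrix.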
Abstract:Multitemporal spectral unmixing (SU) is a powerful tool to process hyperspectral image (HI) sequences due to its ability to reveal the evolution of materials over time and space in a scene. However, significant spectral variability is often observed between the collected images due to variations in acquisition or seasonal conditions. This characteristic has to be considered in the design of SU algorithms. Because of its good performance, the multiple endmember spectral mixture analysis algorithm (MESMA) has recently been used to perform SU in the multitemporal scenarios arising in several practical applications. However, MESMA does not consider the relationship between the different HIs, and its computational complexity is extremely high for large spectral libraries. In this work, we propose an efficient multitemporal SU method that exploits the high temporal correlation between the abundances to provide more accurate results at a lower computational complexity. We propose to solve this complex, general multitemporal SU problem by separately addressing the endmember selection and the abundance estimation problems. This leads to a simpler solution without sacrificing the accuracy of the results. We also propose a strategy to detect and address abrupt abundance variations. Theoretical results characterize how the proposed method compares to MESMA in terms of quality, and how effective it is in detecting abundance changes. This analysis provides valuable insight into the conditions under which the algorithm succeeds. Simulation results show that the proposed method achieves state-of-the-art performance at a smaller computational cost.
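Schematically (our notation), each acquisition at time $t$ follows a linear mixing model
\[
\mathbf{Y}_t = \mathbf{M}_t \mathbf{A}_t + \mathbf{N}_t, \qquad t = 1, \dots, T,
\]
where $\mathbf{Y}_t$ collects the pixel spectra of the $t$-th image, $\mathbf{M}_t$ contains the endmembers selected from the library (accounting for spectral variability), $\mathbf{A}_t$ the abundance maps, and $\mathbf{N}_t$ the noise. The proposed method exploits the temporal correlation between successive $\mathbf{A}_t$, handling abrupt changes separately, instead of running an independent MESMA-type library search for each image.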