Abstract: The development of online algorithms to track time-varying systems has drawn considerable attention in recent years, in particular within the framework of online convex optimization. Meanwhile, sparse time-varying optimization has emerged as a powerful tool for widespread applications, ranging from dynamic compressed sensing to parsimonious system identification. In most of the literature on sparse time-varying problems, some prior information on the system's evolution is assumed to be available. In contrast, in this paper, we propose an online learning approach that does not rely on a given evolution model and is therefore suitable for adversarial settings. Specifically, we develop centralized and distributed algorithms, and we analyze them theoretically in terms of dynamic regret, from an online learning perspective. Furthermore, we present numerical experiments that illustrate their practical effectiveness.
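The abstract does not spell out the updates; as a minimal sketch of the model-free online setting it describes, the snippet below performs one proximal-gradient (soft-thresholding) step per time instant on a time-varying Lasso-type loss. The loss, step size, and function names are illustrative assumptions, not the paper's centralized or distributed algorithms.

```python
import numpy as np

def soft_threshold(v, tau):
    # Component-wise proximal operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def online_sparse_tracking(stream, n, lam=0.1, step=0.05):
    """One proximal-gradient step per time instant on the (assumed) loss
    f_t(x) = 0.5 * ||y_t - A_t x||^2 + lam * ||x||_1, with data revealed
    sequentially and no model of the signal's evolution."""
    x = np.zeros(n)
    estimates = []
    for A_t, y_t in stream:                  # adversarial stream of (matrix, measurement) pairs
        grad = A_t.T @ (A_t @ x - y_t)       # gradient of the smooth part at the current estimate
        x = soft_threshold(x - step * grad, step * lam)
        estimates.append(x.copy())           # dynamic regret compares these to the per-time minimizers
    return estimates
```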
Abstract: We consider the problem of recovering a k-sparse vector from compressed linear measurements when the data are corrupted by quantization noise. When the number of measurements is not sufficiently large, different k-sparse solutions may lie in the feasible set, and the classical l1 approach may fail. Motivated by this, we propose a non-convex quadratic programming method that exploits prior information on the magnitude of the non-zero components, which results in more effective support recovery. We provide sufficient conditions for successful recovery, along with numerical simulations that illustrate the practical feasibility of the proposed method.
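As a hedged illustration of how a magnitude prior can lead to a non-convex quadratic program (the paper's exact formulation may differ), assume a nonnegative signal whose non-zero entries have magnitude close to a known value m, sensing matrix A, quantized measurements y, and noise level epsilon:

```latex
% Illustrative formulation only: the concave quadratic objective pushes each
% component toward {0, m}, while the measurements constrain x within a noise tube.
\begin{aligned}
\min_{x \in [0,\,m]^n} \quad & \sum_{i=1}^{n} x_i\,(m - x_i) \\
\text{s.t.} \quad & \|A x - y\|_\infty \le \epsilon .
\end{aligned}
```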
Abstract: The recovery of signals with finite-valued components from few linear measurements is a problem with widespread applications and interesting mathematical characteristics. Within the compressed sensing framework, tailored methods have recently been proposed to deal with finite-valued sparse signals. In this work, we focus on binary sparse signals and propose a novel formulation based on polynomial optimization. This approach is analyzed and compared to state-of-the-art binary compressed sensing methods.
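For context, a standard way to cast binariness as a polynomial constraint (a sketch under assumptions; the paper's exact program and relaxation may differ) is to encode x_i in {0,1} through the equation x_i^2 - x_i = 0:

```latex
% Illustrative polynomial program for binary sparse recovery (not necessarily
% the paper's formulation): minimize the number of ones subject to the
% measurements and the polynomial binariness constraints.
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & \sum_{i=1}^{n} x_i \\
\text{s.t.} \quad & A x = y, \qquad x_i^2 - x_i = 0, \quad i = 1, \dots, n .
\end{aligned}
```

Programs of this kind are typically tackled via moment/sum-of-squares relaxations, which turn them into semidefinite programs.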
Abstract: l1 reweighting algorithms are very popular in sparse signal recovery and compressed sensing, since in practice they have been observed to outperform classical l1 methods. Nevertheless, the theoretical analysis of their convergence is a critical point, and it is generally limited to the convergence of the cost functional to a local minimum or to subsequence convergence. In this letter, we propose a new convergence analysis of a Lasso l1 reweighting method, based on the observation that the algorithm is an alternating convex search for a biconvex problem. Building on this, we prove the numerical convergence of the sequence of iterates generated by the algorithm, i.e., that the distance between successive iterates vanishes. This is weaker than convergence of the sequence itself, but it is sufficient for practical and numerical purposes. Furthermore, we propose an alternative procedure based on iterative soft thresholding, which is faster than the main algorithm.
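To make the alternating structure concrete, here is a generic reweighted-Lasso sketch in which the inner loop is iterative soft thresholding on the weighted Lasso (weights fixed) and the outer loop updates the weights in closed form. The reweighting rule, parameters, and names are assumptions for illustration, not the letter's exact functional.

```python
import numpy as np

def soft_threshold(v, tau):
    # Component-wise proximal operator of tau * ||.||_1 (tau may be a vector).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def reweighted_lasso(A, y, lam=0.1, eps=1e-3, outer=10, inner=200):
    """Alternating scheme on a biconvex functional: for fixed weights, run
    iterative soft thresholding on min_x 0.5*||Ax - y||^2 + lam * sum_i w_i|x_i|;
    then update the weights with the (assumed) rule w_i = 1/(|x_i| + eps)."""
    m, n = A.shape
    x = np.zeros(n)
    w = np.ones(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # step size from the spectral norm of A
    for _ in range(outer):
        for _ in range(inner):                     # inner ISTA loop, weights fixed
            grad = A.T @ (A @ x - y)
            x = soft_threshold(x - step * grad, step * lam * w)
        w = 1.0 / (np.abs(x) + eps)                # closed-form weight update, x fixed
    return x
```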
Abstract: Compressive sensing promises to enable bandwidth-efficient on-board compression of astronomical data by shifting the encoding complexity from the source to the receiver. The signal is recovered off-line, exploiting the parallel computation capabilities of GPUs to speed up the reconstruction process. However, inherent GPU hardware constraints limit the size of the recoverable signal and the speedup practically achievable. In this work, we design parallel algorithms that exploit the properties of circulant matrices for efficient GPU-accelerated sparse signal recovery. Our approach reduces the memory requirements, allowing us to recover very large signals with limited memory. In addition, it achieves a tenfold signal recovery speedup thanks to ad hoc parallelization of matrix-vector multiplications and matrix inversions. Finally, we demonstrate our algorithms in practice on a typical application of circulant matrices: deblurring a sparse astronomical image in the compressed domain.
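The memory saving rests on a classical property of circulant matrices: a matrix-vector product is a circular convolution, computable with FFTs from the first column alone, so the full n-by-n matrix never needs to be stored. Below is a plain NumPy illustration of that property; the paper's CUDA kernels and deblurring pipeline are not reproduced here, and the function name and test are ours.

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix defined by its first column c by x using
    FFTs: C x = ifft(fft(c) * fft(x)). Only c is stored, which is the memory
    saving exploited on the GPU (CPU illustration only)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

if __name__ == "__main__":
    # Sanity check against the explicit dense circulant matrix on a small case.
    rng = np.random.default_rng(0)
    c = rng.standard_normal(8)
    x = rng.standard_normal(8)
    C = np.column_stack([np.roll(c, k) for k in range(len(c))])  # dense circulant, first column c
    assert np.allclose(C @ x, circulant_matvec(c, x))
```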