Abstract: This paper studies spatial smoothing using sparse arrays in single-snapshot Direction of Arrival (DOA) estimation. We consider the application of automotive MIMO radar, which traditionally synthesizes a large uniform virtual array through appropriate waveform and physical array design. We explore deliberately introducing holes into this virtual array to leverage the resolution gains offered by the increased aperture. The presence of these holes requires re-thinking DOA estimation, as conventional algorithms may no longer be directly applicable and alternative techniques, such as array interpolation, may be computationally expensive. Consequently, we study sparse array geometries that permit the direct application of spatial smoothing. We show that a sparse array geometry is amenable to spatial smoothing if it can be decomposed into the sum set of two subsets of suitable cardinality. Furthermore, we demonstrate that many such decompositions may exist, and not all of them yield equal identifiability or aperture. We derive necessary and sufficient conditions to guarantee identifiability of a given number of targets, which provides insight into choosing desirable decompositions for spatial smoothing. This yields uniform recovery guarantees and enables estimating DOAs at increased resolution and reduced computational complexity.
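The sum-set structure described above can be illustrated with a short numerical sketch. All positions, subset sizes, and target angles below are hypothetical choices, not values from the paper: if the sparse array's positions form the sum set A + B, a single snapshot can be rearranged into a matrix whose rank reveals the number of targets, which is exactly the structure spatial smoothing exploits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sum-set decomposition: array positions S = A + B (element-wise sums).
A = np.array([0, 1, 2, 3])               # subset A
B = np.array([0, 5, 10])                 # subset B
S = np.unique(A[:, None] + B[None, :])   # sparse positions (holes at 4 and 9)

# Single noiseless snapshot from K far-field targets (half-wavelength units).
K = 2
doas = np.deg2rad([-10.0, 15.0])
amps = rng.standard_normal(K) + 1j * rng.standard_normal(K)
x = sum(a * np.exp(1j * np.pi * S * np.sin(t)) for a, t in zip(amps, doas))

# Rearrange the snapshot: entry (i, j) is the sample at position a_i + b_j.
idx = {p: n for n, p in enumerate(S)}
X = np.array([[x[idx[a + b]] for b in B] for a in A])

# Since x[a+b] = sum_k c_k z_k^a z_k^b, X factors as V_A diag(c) V_B^T,
# so its rank equals the number of targets (when K <= min(|A|, |B|)).
svals = np.linalg.svd(X, compute_uv=False)
num_targets = int(np.sum(svals > 1e-8 * svals[0]))
print(num_targets)  # 2
```

The subset cardinalities |A| and |B| cap how many targets remain identifiable, which is why not all decompositions of the same geometry are equally useful.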
Abstract: We study the problem of noisy sparse array interpolation, where a large virtual array is synthetically generated by interpolating missing sensors using matrix completion techniques that promote low rank. Current understanding of how the (sparse) array geometry affects the angle estimation error (post interpolation) of these methods is quite limited. In this paper, we make advances towards solidifying this understanding by revealing the role of the physical beampattern of the sparse array in the performance of low-rank matrix completion techniques. When the beampattern is analytically tractable (as for uniform linear arrays and nested arrays), our analysis provides concrete and interpretable bounds on the scaling of the angular error as a function of the number of sensors, and demonstrates the effectiveness of nested arrays in the presence of noise and a single temporal snapshot.
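As a rough illustration of why the beampattern is analytically tractable for these geometries, the sketch below (sensor counts and angular grid are illustrative assumptions, not the paper's settings) computes the unit-weight beampattern of a two-level nested array and of a ULA with the same number of sensors.

```python
import numpy as np

def beampattern(positions, theta_grid):
    # |sum_i exp(j*pi*d_i*sin(theta))| with unit weights, half-wavelength units.
    steering = np.exp(1j * np.pi * np.outer(np.sin(theta_grid), positions))
    return np.abs(steering.sum(axis=1))

# Hypothetical two-level nested array with N1 = N2 = 4:
# dense part {0,...,N1-1} plus sparse part {m*(N1+1)-1 : m = 1,...,N2}.
N1, N2 = 4, 4
nested = np.concatenate([np.arange(N1), np.arange(1, N2 + 1) * (N1 + 1) - 1])
ula = np.arange(N1 + N2)  # ULA with the same sensor count

theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)
bp_nested, bp_ula = beampattern(nested, theta), beampattern(ula, theta)

# Both patterns peak at broadside with value P (the sensor count); the nested
# array's larger aperture gives a narrower mainlobe at the cost of sidelobes.
print(bp_nested.max(), bp_ula.max())
```

The closed-form steering-vector sum is what makes the error analysis concrete for ULAs and nested arrays, while arbitrary sparse geometries lack such a tractable expression.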
Abstract: Sparse arrays have emerged as a popular alternative to the conventional uniform linear array (ULA) due to the enhanced degrees of freedom (DOF) and superior resolution they offer. In the passive setting, these advantages are realized by leveraging correlations between the received signals at different sensors. This has led to the belief that sparse arrays require a large number of temporal measurements to reliably estimate parameters of interest from these correlations, and that they may therefore not be preferred in the sample-starved regime. In this paper, we debunk this myth by performing a rigorous non-asymptotic analysis of the Coarray ESPRIT algorithm, showing that reliable estimation is possible with remarkably few snapshots. This seemingly counter-intuitive result is a consequence of the scaling of the singular values of the coarray manifold, which compensates for the potentially large covariance estimation error in the limited-snapshot regime. Specifically, we show that for a nested array operating in the regime of fewer sources than sensors ($S=O(1)$), it is possible to bound the matching distance error between the estimated and true directions of arrival (DOAs) by an arbitrarily small quantity ($\epsilon$) with high probability, provided (i) the number of temporal snapshots ($L$) scales only logarithmically with the number of sensors ($P$), i.e., $L=\Omega(\ln(P)/\epsilon^2)$, and (ii) a suitable separation condition is satisfied. Our results also formally prove the well-known empirical resolution benefits of sparse arrays, by establishing that the minimum separation between sources can be $\Omega(1/P^2)$, as opposed to the $\Omega(1/P)$ separation required by a ULA with the same number of sensors. Our sample complexity expression reveals the dependence on other key model parameters, such as the SNR and the dynamic range of the source powers. This enables us to establish the superior noise resilience of nested arrays, both theoretically and empirically.
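A minimal end-to-end sketch of coarray-based subspace estimation in this spirit is given below. It uses a simplified Toeplitz (direct-augmentation) construction, which spans the same subspace as spatial smoothing up to a squaring, and all parameters (array size, angles, snapshot count, SNR) are illustrative assumptions rather than the paper's analysis setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nested array (0-indexed): {0,...,N1-1} U {m*(N1+1)-1 : m=1,...,N2}.
N1, N2 = 3, 3
pos = np.concatenate([np.arange(N1), np.arange(1, N2 + 1) * (N1 + 1) - 1])
P = len(pos)  # 6 physical sensors, positions {0,1,2,3,7,11}

# Two sources, L snapshots, noise variance sigma2 (illustrative values).
doas_true = np.deg2rad([-20.0, 25.0])
L, sigma2 = 500, 0.1
A = np.exp(1j * np.pi * np.outer(pos, np.sin(doas_true)))
sig = (rng.standard_normal((2, L)) + 1j * rng.standard_normal((2, L))) / np.sqrt(2)
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal((P, L)) + 1j * rng.standard_normal((P, L)))
X = A @ sig + noise

# Sample covariance, then average entries sharing the same lag: the nested
# array's difference coarray is contiguous over -M,...,M with M = 11.
R = X @ X.conj().T / L
lags = pos[:, None] - pos[None, :]
M = int(lags.max())
z = np.array([R[lags == m].mean() for m in range(-M, M + 1)])

# Hermitian Toeplitz matrix of the coarray autocorrelation (direct augmentation).
Rss = np.array([[z[M + i - j] for j in range(M + 1)] for i in range(M + 1)])

# ESPRIT on its K-dimensional signal subspace (shift invariance of a virtual ULA).
K = 2
_, _, Vh = np.linalg.svd(Rss)
Us = Vh.conj().T[:, :K]
Phi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
doas_est = np.sort(np.arcsin(np.angle(np.linalg.eigvals(Phi)) / np.pi))
print(np.rad2deg(doas_est))  # approximately [-20, 25]
```

Note how 6 physical sensors yield a virtual ULA of 12 elements, the aperture-squaring effect behind the $\Omega(1/P^2)$ separation result.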
Abstract: The problem of super-resolution is concerned with the reconstruction of temporally/spatially localized events (or spikes) from samples of their convolution with a low-pass filter. Distinct from prior works, which exploit sparsity in appropriate domains to solve the resulting ill-posed problem, this paper explores the role of binary priors in super-resolution, where the spike (or source) amplitudes are assumed to be binary-valued. Our study is inspired by the problem of neural spike deconvolution, but it also applies to other applications such as symbol detection in hybrid millimeter-wave communication systems. This paper makes several theoretical and algorithmic contributions to enable binary super-resolution with very few measurements. Our results show that binary constraints offer much stronger identifiability guarantees than sparsity, allowing us to operate in "extreme compression" regimes where the number of measurements can be significantly smaller than the sparsity level of the spikes. To ensure exact recovery in this regime, it becomes necessary to design algorithms that enforce binary constraints exactly, without relaxation. To overcome the ensuing computational challenges, we consider a first-order autoregressive filter (which arises in neural spike deconvolution) and exploit its special structure. This leads to a novel formulation of super-resolution binary spike recovery as a binary search in one dimension. We present numerical experiments that validate our theory and demonstrate the benefits of binary constraints in neural spike deconvolution on real calcium imaging datasets.
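As a toy illustration of the first-order autoregressive observation model, the sketch below generates binary spikes, passes them through an AR(1) filter with noise, and recovers them by inverse filtering followed by exact binary rounding. This is not the paper's binary-search algorithm, and it uses fully sampled rather than compressed measurements; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy AR(1) observation model: y[n] = gamma*y[n-1] + s[n] + w[n],
# with binary spikes s[n] in {0, 1} (gamma, N, sigma are illustrative).
gamma, N, sigma = 0.8, 200, 0.1
s_true = (rng.random(N) < 0.15).astype(float)

y = np.zeros(N)
for n in range(N):
    y[n] = (gamma * y[n - 1] if n > 0 else 0.0) + s_true[n]
y += sigma * rng.standard_normal(N)

# Inverse filtering undoes the AR(1) convolution; rounding against a 0.5
# threshold enforces the binary prior exactly rather than via a relaxation.
resid = y - gamma * np.concatenate(([0.0], y[:-1]))
s_hat = (resid > 0.5).astype(float)

print(np.mean(s_hat == s_true))  # fraction of correctly recovered spikes
```

The gap between {0, 1} amplitudes keeps the rounding step reliable at this noise level, a small hint of why binary constraints are so much stronger than a generic sparsity prior.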