Abstract: This paper investigates the recovery of a node-domain sparse graph signal from the output of a graph filter. This problem, often referred to as the identification of the source of a diffused sparse graph signal, is seminal in the field of graph signal processing (GSP). Sparse graph signals can be used to model a variety of real-world network applications, such as social, biological, and power systems, and enable various GSP tasks, such as graph signal reconstruction, blind deconvolution, and sampling. In this paper, we assume double sparsity of both the graph signal and the graph topology, as well as a low-order graph filter. We propose three algorithms to reconstruct the support set of the input sparse graph signal from the graph filter output samples, leveraging these assumptions and the generalized information criterion (GIC). First, we describe the graph multiple GIC (GM-GIC) method, which is based on partitioning the dictionary elements (graph filter matrix columns) that capture information on the signal into smaller subsets. Then, the local GICs are computed for each subset and aggregated to make a global decision. Second, inspired by the well-known branch and bound (BNB) approach, we develop the graph-based branch and bound GIC (graph-BNB-GIC), which incorporates a new tractable heuristic bound tailored to the graph and graph filter characteristics. Finally, we propose the graph-based first-order correction (GFOC) method, which improves existing sparse recovery methods by iteratively examining potential improvements to the GIC cost function, replacing elements of the estimated support set with elements from their one-hop neighborhood. Simulations demonstrate that the proposed sparse recovery methods outperform existing methods in terms of support set recovery accuracy without incurring significant computational overhead.
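To make the role of the GIC concrete, here is a minimal numpy sketch of a BIC-like GIC cost for a candidate support, followed by a one-hop swap refinement in the spirit of the GFOC method. The function names, the penalty form, and the acceptance rule are illustrative assumptions, not the paper's exact criterion or algorithm.

```python
import numpy as np

def gic(y, A, support, penalty=2.0):
    """BIC-like generalized information criterion for a candidate support:
    least-squares residual of y on the selected dictionary columns plus a
    complexity penalty that grows with the support size (assumed form)."""
    As = A[:, list(support)]
    x_s, *_ = np.linalg.lstsq(As, y, rcond=None)
    resid = y - As @ x_s
    n = len(y)
    return n * np.log(np.sum(resid**2) / n) + penalty * len(support) * np.log(n)

def one_hop_refine(y, A, support, adjacency):
    """GFOC-style refinement: repeatedly try to swap a support element with
    a node in its one-hop neighborhood, accepting swaps that lower the GIC."""
    support = list(support)
    best = gic(y, A, support)
    improved = True
    while improved:
        improved = False
        for i in range(len(support)):
            for nbr in np.flatnonzero(adjacency[support[i]]):
                if int(nbr) in support:
                    continue
                cand = support.copy()
                cand[i] = int(nbr)      # candidate: swap node i for its neighbor
                c = gic(y, A, cand)
                if c < best:
                    support, best, improved = cand, c, True
    return support, best
```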
Abstract: THz communications are expected to play a profound role in future wireless systems. The current trend toward extremely massive multiple-input multiple-output (MIMO) antenna architectures tends to be costly and power inefficient when implementing wideband THz communications. An emerging THz antenna technology is the leaky-wave antenna (LWA), which can realize frequency-selective beamforming with a single radiating element. In this work, we explore the use of LWA technology for wideband multi-user THz communications. We propose a physically compliant model for LWA signal processing that facilitates the study of LWA-aided communication systems. Focusing on downlink systems, we propose an alternating optimization algorithm that jointly optimizes the LWA configuration and the spectral power allocation of the signal to maximize the sum-rate performance. Our numerical results show that a single LWA can generate diverse beampatterns at THz frequencies, exhibiting performance comparable to costly fully digital MIMO arrays.
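The generic structure of the alternating optimization described above can be sketched as follows: alternate between choosing the antenna configuration for the current power allocation and water-filling the power for the chosen configuration. The 1-D configuration grid search, the `gain_of_config` interface, and the single-user sum-log-rate objective are simplifying assumptions, not the paper's algorithm.

```python
import numpy as np

def water_filling(gains, p_total):
    """Classical water-filling: allocate p_total across subbands with
    channel gains `gains` to maximize sum log(1 + g * p)."""
    lo, hi = 0.0, p_total + 1.0 / gains.min()   # bracket the water level
    for _ in range(100):                        # bisection on the level mu
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / gains, 0.0)
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(lo - 1.0 / gains, 0.0)

def alternate(gain_of_config, configs, p_total, iters=10):
    """Alternate between the LWA configuration (here a grid search over a
    hypothetical scalar parameter) and the spectral power allocation."""
    cfg = configs[0]
    n_bands = len(gain_of_config(cfg))
    p = np.full(n_bands, p_total / n_bands)
    for _ in range(iters):
        # step 1: best configuration for the current power allocation
        cfg = max(configs, key=lambda c: np.sum(np.log1p(gain_of_config(c) * p)))
        # step 2: water-filling for the chosen configuration
        p = water_filling(gain_of_config(cfg), p_total)
    return cfg, p
```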
Abstract: Dynamic systems of graph signals are encountered in various applications, including social networks, power grids, and transportation. While such systems can often be described as state space (SS) models, tracking graph signals via conventional tools based on the Kalman filter (KF) and its variants is typically challenging. This is due to the nonlinearity, high dimensionality, irregularity of the domain, and complex modeling associated with real-world dynamic systems of graph signals. In this work, we study the tracking of graph signals using a hybrid model-based/data-driven approach. We develop the GSP-KalmanNet, which tracks the hidden graphical states from the graphical measurements by jointly leveraging graph signal processing (GSP) tools and deep learning (DL) techniques. The derivation of the GSP-KalmanNet is based on extending the KF to exploit the inherent graph structure via graph frequency domain filtering, which considerably simplifies the computational complexity entailed in processing high-dimensional signals and increases the robustness to small topology changes. Then, we use data to learn the Kalman gain following the recently proposed KalmanNet framework, which copes with partial and approximated modeling without forcing a specific model on the noise statistics. Our empirical results demonstrate that the proposed GSP-KalmanNet achieves enhanced accuracy and run-time performance, as well as improved robustness to model misspecification, compared with both model-based and data-driven benchmarks.
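The computational benefit of graph frequency domain filtering can be illustrated with a minimal numpy sketch: when the state-evolution and measurement operators are graph filters diagonalized by the graph Fourier transform (GFT) basis of the Laplacian, the Kalman recursion decouples into independent scalar updates, one per graph frequency. This is only the model-based skeleton that GSP-KalmanNet builds on (the paper learns the gain from data); the identity state evolution and the variable names are assumptions.

```python
import numpy as np

def gft_basis(L):
    """Graph Fourier basis: eigenvectors of the symmetric graph Laplacian."""
    eigvals, U = np.linalg.eigh(L)
    return eigvals, U

def kf_step_graph_freq(x_hat, P, y, h, q, r, U):
    """One Kalman step in the graph frequency domain, where the measurement
    filter has frequency response h (vector) and q, r are per-frequency
    process / measurement noise variances. The state x_hat and its
    per-frequency variance P are kept in the frequency domain; recover the
    node-domain estimate as U @ x_hat."""
    y_f = U.T @ y                          # measurement in the GFT domain
    x_pred, P_pred = x_hat, P + q          # identity evolution for simplicity
    k = P_pred * h / (h**2 * P_pred + r)   # scalar Kalman gains per frequency
    x_new = x_pred + k * (y_f - h * x_pred)
    P_new = (1.0 - k * h) * P_pred
    return x_new, P_new
```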
Abstract: Achieving high-resolution Direction of Arrival (DoA) recovery typically requires a high Signal-to-Noise Ratio (SNR) and a sufficiently large number of snapshots. This paper presents the NUV-DoA algorithm, which augments Bayesian sparse reconstruction with spatial filtering for super-resolution DoA estimation. By modeling each direction on the azimuth grid with the sparsity-promoting normal-with-unknown-variance (NUV) prior, the non-convex optimization problem is reduced to iteratively reweighted least squares under a Gaussian distribution, where the mean of the snapshots is a sufficient statistic. This approach not only simplifies our solution but also accurately detects the DoAs. We utilize a hierarchical approach for interference cancellation in multi-source scenarios. Empirical evaluations show the superiority of NUV-DoA, especially at low SNRs, compared to alternative DoA estimators.
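The NUV mechanism can be sketched in a few lines of numpy: each grid direction carries a zero-mean Gaussian amplitude with its own unknown variance, and an EM-style loop alternates a ridge-type Gaussian estimate with a closed-form variance update; spurious variances shrink toward zero, which promotes sparsity. The iteration count, the noise variance, and the exact update form below are assumptions, not the paper's algorithm.

```python
import numpy as np

def nuv_irls(A, y, iters=50, noise_var=1e-2):
    """Sparse recovery with a normal-with-unknown-variance (NUV) prior via
    EM-style iteratively reweighted least squares. A: (m x n) steering
    dictionary over the azimuth grid; y: averaged snapshot (length m)."""
    m, n = A.shape
    v = np.ones(n)                               # per-direction NUV variances
    for _ in range(iters):
        # covariance of y under the current Gaussian prior on the amplitudes
        S = noise_var * np.eye(m) + (A * v) @ A.conj().T
        w = np.linalg.solve(S, y)
        x = v * (A.conj().T @ w)                 # posterior mean amplitudes
        # diagonal of the posterior amplitude covariance
        g = np.einsum('ij,ji->i', A.conj().T, np.linalg.solve(S, A))
        v = np.abs(x)**2 + np.real(v - v**2 * g) # EM variance update
    return x, v                                  # peaks of v locate the DoAs
```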
Abstract: In many parameter estimation problems, the exact model is unknown and is assumed to belong to a set of candidate models. In such cases, a predetermined data-based selection rule selects a parametric model from the set of candidates before the parameter estimation. The existing framework for estimation under model misspecification does not account for the selection process that led to the misspecified model. Moreover, in post-model-selection estimation, there are multiple candidate models chosen based on the observations, making the interpretation of the assumed model in the misspecified setting non-trivial. In this work, we present three interpretations that pose the problem of non-Bayesian post-model-selection estimation as estimation under model misspecification: the naive interpretation, the normalized interpretation, and the selective inference interpretation, and we discuss their properties. For each of these interpretations, we develop the corresponding misspecified maximum likelihood estimator and the misspecified Cramér-Rao-type lower bound. The relations between the estimators and the performance bounds, as well as their properties, are discussed. Finally, we demonstrate the performance of the proposed estimators and bounds via simulations of estimation after channel selection. We show that the proposed performance bounds are more informative than the oracle Cramér-Rao bound (CRB), and that the third interpretation (selective inference) results in the lowest mean-squared error (MSE) among the estimators.
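For context, the standard misspecified Cramér-Rao bound has the well-known sandwich form below, written for a pseudo-true parameter of an assumed density; the paper derives interpretation-specific variants of this structure for the post-model-selection setting.

```latex
% Sandwich form of the misspecified Cramér-Rao bound (MCRB) for an assumed
% density f and pseudo-true parameter \theta_0, with expectations taken
% under the true distribution p (shown for background; the paper's bounds
% adapt this structure to each interpretation):
\mathrm{MCRB}(\theta_0) = \mathbf{A}^{-1}(\theta_0)\,\mathbf{B}(\theta_0)\,\mathbf{A}^{-1}(\theta_0),
\qquad
\mathbf{A}(\theta_0) = \mathrm{E}_p\!\left[\nabla_{\theta}^{2}\log f(\mathbf{x};\theta_0)\right],
\qquad
\mathbf{B}(\theta_0) = \mathrm{E}_p\!\left[\nabla_{\theta}\log f(\mathbf{x};\theta_0)\,
                      \nabla_{\theta}^{\top}\log f(\mathbf{x};\theta_0)\right].
```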
Abstract: In this paper, we investigate the problem of estimating a complex-valued Laplacian matrix from a linear Gaussian model, with a focus on its application to the estimation of admittance matrices in power systems. The proposed approach is based on a constrained maximum likelihood estimator (CMLE) of the complex-valued Laplacian, which is formulated as an optimization problem with Laplacian and sparsity constraints. The complex-valued Laplacian is a symmetric, non-Hermitian matrix that exhibits a joint sparsity pattern between its real and imaginary parts. Leveraging the $\ell_1$ relaxation and the joint sparsity, we develop two estimation algorithms for the implementation of the CMLE. The first algorithm casts the optimization problem as a semi-definite programming (SDP) problem, while the second develops an efficient augmented Lagrangian method (ALM) solution. Next, we apply the proposed SDP and ALM algorithms to the problem of estimating the admittance matrix under three commonly used measurement models that stem from Kirchhoff's and Ohm's laws, each with different assumptions and simplifications: 1) the nonlinear alternating current (AC) model; 2) the decoupled linear power flow (DLPF) model; and 3) the direct current (DC) model. The performance of the SDP and ALM algorithms is evaluated on data from the IEEE 33-bus power system under different settings. The numerical experiments demonstrate that the proposed algorithms outperform existing methods in terms of mean-squared error (MSE) and F-score, thus providing a more accurate recovery of the admittance matrix.
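A minimal cvxpy sketch of a convex relaxation in this spirit is given below: real and imaginary parts of the admittance matrix are separate symmetric variables with zero row sums (Laplacian structure), coupled through a group-$\ell_1$ penalty per edge that enforces the joint sparsity pattern, and fitted to a linear current-voltage model. The constraint set is simplified relative to the paper's CMLE, and `lam` is an assumed tuning parameter.

```python
import cvxpy as cp
import numpy as np

def estimate_admittance(V, I, lam=0.1):
    """Estimate a complex-valued Laplacian Y from voltage/current snapshots
    obeying I ~ Y V (columns are snapshots), via a group-sparse,
    Laplacian-constrained least-squares relaxation."""
    n = V.shape[0]
    Yr = cp.Variable((n, n), symmetric=True)   # real part
    Yi = cp.Variable((n, n), symmetric=True)   # imaginary part
    Vr, Vi, Ir, Ii = np.real(V), np.imag(V), np.real(I), np.imag(I)
    # complex product Y V split into real and imaginary parts
    fit = (cp.sum_squares(Yr @ Vr - Yi @ Vi - Ir)
           + cp.sum_squares(Yr @ Vi + Yi @ Vr - Ii))
    # joint (group) sparsity: one 2-norm per off-diagonal edge couples
    # the supports of the real and imaginary parts
    offdiag = [cp.norm(cp.hstack([Yr[i, j], Yi[i, j]]))
               for i in range(n) for j in range(i + 1, n)]
    constraints = [cp.sum(Yr, axis=1) == 0,    # zero row sums: Laplacian
                   cp.sum(Yi, axis=1) == 0]
    prob = cp.Problem(cp.Minimize(fit + lam * cp.sum(cp.hstack(offdiag))),
                      constraints)
    prob.solve()
    return Yr.value + 1j * Yi.value
```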
Abstract: Graph signal processing (GSP) deals with the representation, analysis, and processing of structured data, i.e., graph signals that are defined on the vertex set of a generic graph. A crucial prerequisite for applying various GSP and graph neural network (GNN) approaches is that the examined signals are smooth graph signals with respect to the underlying graph or, equivalently, have low graph total variation (TV). In this paper, we develop GSP-based approaches to verify the validity of the smoothness assumption for given signals (data) and an associated graph. The proposed approaches are based on representing network data as the output of a graph filter with a given graph topology. In particular, we develop two smoothness detectors for the graph-filter-output model: 1) the likelihood ratio test (LRT) for known model parameters; and 2) a semi-parametric detector that estimates the graph filter and then validates its smoothness. The properties of the proposed GSP-based detectors are investigated, and some special cases are discussed. The performance of the GSP-based detectors is evaluated on synthetic data and on IEEE $14$-bus power system data under different setups. The results demonstrate the effectiveness of the proposed approach and its robustness to different generating models, noise levels, and numbers of samples.
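The graph TV statistic underlying both detectors is simply the Laplacian quadratic form, as in the minimal sketch below. The normalization by the signal energy and the fixed threshold are assumptions for illustration; the paper's LRT and semi-parametric detectors use the graph-filter-output model rather than raw TV thresholding.

```python
import numpy as np

def graph_tv(x, L):
    """Laplacian quadratic form x^T L x: the graph total variation of x.
    Small values indicate a smooth signal w.r.t. the graph."""
    return float(x @ L @ x)

def smoothness_detector(X, L, threshold):
    """Declare the signals (columns of X) smooth w.r.t. the graph with
    Laplacian L if their average energy-normalized TV falls below the
    threshold. Returns the decision and the test statistic."""
    tv = np.mean([graph_tv(x, L) / (x @ x) for x in X.T])
    return tv < threshold, tv
```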
Abstract: Graph signal processing (GSP) has emerged as a powerful tool for practical network applications, including power system monitoring. By representing power system voltages as smooth graph signals, recent research has focused on developing GSP-based methods for state estimation, attack detection, and topology identification. In particular, efficient methods have been developed for detecting false data injection (FDI) attacks, which were until now perceived as non-smooth with respect to the graph Laplacian matrix. Consequently, these methods may not be effective against smooth FDI attacks. In this paper, we propose a graph FDI (GFDI) attack that minimizes the Laplacian-based graph total variation (TV) under practical constraints. In addition, we develop a low-complexity algorithm that solves the non-convex GFDI attack optimization problem using $\ell_1$-norm relaxation, the projected gradient descent (PGD) algorithm, and the alternating direction method of multipliers (ADMM). We then propose a protection scheme that identifies the minimal set of measurements necessary to constrain the GFDI output to high graph TV, thereby enabling its detection by existing GSP-based detectors. Our numerical simulations on the IEEE 57-bus test case reveal the potential threat posed by well-designed GSP-based FDI attacks. Moreover, we demonstrate that integrating the proposed protection design with GSP-based detection can lead to significant hardware cost savings compared to previous designs of protection methods against FDI attacks.
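The core of such an attack design can be sketched with a proximal-gradient (PGD-with-soft-thresholding) loop: minimize the graph TV $a^T L a$ plus an $\ell_1$ term that relaxes the sparsity requirement, while keeping the attack magnitude nonzero. The practical constraints the paper enforces via ADMM are omitted here; the step size, the renormalization, and the final hard sparsification are assumptions.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def gfdi_attack(L, n_attacked, lam=0.1, iters=500):
    """Proximal-gradient sketch of a smooth (low graph TV) FDI attack:
    minimize a^T L a + lam * ||a||_1, renormalizing to keep a nonzero
    attack, then keep the n_attacked largest-magnitude entries."""
    n = L.shape[0]
    rng = np.random.default_rng(0)
    a = rng.standard_normal(n)
    a /= np.linalg.norm(a)
    step = 1.0 / np.linalg.eigvalsh(L)[-1]   # 1 / largest eigenvalue
    for _ in range(iters):
        a = soft(a - step * (2.0 * L @ a), step * lam)  # gradient + prox
        nrm = np.linalg.norm(a)
        if nrm > 0:
            a /= nrm                         # keep a nonzero attack magnitude
    keep = np.argsort(np.abs(a))[-n_attacked:]
    mask = np.zeros(n)
    mask[keep] = 1.0
    return a * mask                          # sparse, graph-smooth attack
```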
Abstract: The Kalman filter (KF) is a widely used algorithm for tracking dynamic systems that are captured by state space (SS) models. The need to fully describe an SS model limits its applicability under complex settings, e.g., when tracking based on visual data, and the processing of high-dimensional signals often induces notable latency. These challenges can be treated by mapping the measurements into latent features obeying some postulated closed-form SS model and applying the KF in the latent space. However, the validity of this approximated SS model may constitute a limiting factor. In this work, we study tracking from high-dimensional measurements under complex settings using a hybrid model-based/data-driven approach. By gradually tackling the challenges in handling the observation model and the task, we develop Latent-KalmanNet, which implements tracking from high-dimensional measurements by leveraging data to jointly learn the KF along with the latent space mapping. Latent-KalmanNet combines a learned encoder with data-driven tracking in the latent space using the recently proposed KalmanNet, where each trainable module assists its counterpart: KalmanNet provides a suitable prior, while the encoder learns a latent representation that facilitates data-aided tracking. Our empirical results demonstrate that the proposed Latent-KalmanNet achieves improved accuracy and run-time performance over both model-based and data-driven techniques by learning a surrogate latent representation that best facilitates tracking, while operating with limited complexity and latency.
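A minimal PyTorch sketch of this structure is given below: a learned encoder maps the high-dimensional measurement to a latent observation, and a small network replaces the Kalman gain computation. Layer sizes, the known-dynamics interface `f`, and the reduced gain-network inputs are simplifying assumptions; the actual Latent-KalmanNet architecture and training differ.

```python
import torch
import torch.nn as nn

class LatentTracker(nn.Module):
    """Sketch of encoder + learned-gain tracking in a latent space."""
    def __init__(self, obs_dim, latent_dim):
        super().__init__()
        # encoder: high-dimensional measurement -> latent observation
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        # gain network: innovation -> (latent x latent) gain matrix
        self.gain_net = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                      nn.Linear(64, latent_dim * latent_dim))
        self.latent_dim = latent_dim

    def step(self, x_prev, y, f):
        """One tracking step: predict with the postulated latent dynamics f,
        encode the raw measurement, and correct with the learned gain."""
        x_pred = f(x_prev)
        innovation = self.encoder(y) - x_pred
        K = self.gain_net(innovation).view(self.latent_dim, self.latent_dim)
        return x_pred + K @ innovation
```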
Abstract: In constrained parameter estimation, the classical constrained Cramér-Rao bound (CCRB) and the recent Lehmann-unbiased CCRB (LU-CCRB) are lower bounds on the performance of mean-unbiased and Lehmann-unbiased estimators, respectively. Both the CCRB and the LU-CCRB require differentiability of the likelihood function, which can be a restrictive assumption. Additionally, these bounds are local bounds that are inappropriate for predicting the threshold phenomena of the constrained maximum likelihood (CML) estimator. The constrained Barankin-type bound (CBTB) is a nonlocal mean-squared-error (MSE) lower bound for constrained parameter estimation that does not require differentiability of the likelihood function. However, this bound requires a restrictive mean-unbiasedness condition in the constrained set. In this work, we propose the Lehmann-unbiased CBTB (LU-CBTB) on the weighted MSE (WMSE). This bound does not require differentiability of the likelihood function and assumes Lehmann-unbiasedness, which is less restrictive than the CBTB mean-unbiasedness. We show that the LU-CBTB is tighter than or equal to the LU-CCRB and coincides with the CBTB for linear constraints. For nonlinear constraints, the LU-CBTB and the CBTB differ, and the LU-CBTB can be a lower bound on the WMSE of constrained estimators in cases where the CBTB is not. In the simulations, we consider direction-of-arrival estimation of an unknown constant-modulus discrete signal. In this case, the likelihood function is not differentiable, and constrained Cramér-Rao-type bounds do not exist, while CBTBs exist. It is shown that the LU-CBTB better predicts the CML estimator performance than the CBTB, since the CML estimator is Lehmann-unbiased but not mean-unbiased.
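For background, the classical (unconstrained, mean-unbiased) Barankin bound in McAulay-Seidman form is shown below; Barankin-type bounds avoid likelihood differentiability by replacing derivatives with finite likelihood ratios at test points. The CBTB and LU-CBTB generalize this structure to constrained sets and Lehmann-unbiasedness, respectively; the scalar form here is illustrative context, not the paper's bound.

```latex
% Classical Barankin bound (McAulay-Seidman form) for a scalar parameter
% \theta, a mean-unbiased estimator \hat{\theta}, and test points
% \theta_1, \dots, \theta_K:
\mathrm{MSE}(\hat{\theta})
  \;\ge\; \sup_{K,\,\theta_1,\dots,\theta_K}
  \boldsymbol{\delta}^{\top} \mathbf{B}^{-1} \boldsymbol{\delta},
\qquad
[\mathbf{B}]_{kl} = \mathrm{E}_{\theta}\!\left[
  \frac{p(\mathbf{x};\theta_k)\, p(\mathbf{x};\theta_l)}{p^{2}(\mathbf{x};\theta)}
\right] - 1,
\qquad
\delta_k = \theta_k - \theta ,
% where only likelihood ratios appear, so no differentiability of the
% likelihood function is required.
```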