Abstract:Learning-based downlink power control in cell-free massive multiple-input multiple-output (CFmMIMO) systems offers a promising alternative to conventional iterative optimization algorithms, which are computationally intensive due to online iterative steps. Existing learning-based methods, however, often fail to exploit the intrinsic structure of channel data and neglect pilot allocation information, leading to suboptimal performance, especially in large-scale networks with many users. This paper introduces the pilot contamination-aware power control (PAPC) transformer neural network, a novel approach that integrates pilot allocation data into the network, effectively handling pilot contamination scenarios. PAPC employs the attention mechanism with a custom masking technique to utilize structural information and pilot data. The architecture includes tailored preprocessing and post-processing stages for efficient feature extraction and adherence to power constraints. Trained in an unsupervised learning framework, PAPC is evaluated against the accelerated proximal gradient (APG) algorithm, showing comparable spectral efficiency fairness performance while significantly improving computational efficiency. Simulations demonstrate PAPC's superior performance over fully connected networks (FCNs) that lack pilot information, its scalability to large-scale CFmMIMO networks, and its computational efficiency improvement over APG. Additionally, by employing padding techniques, PAPC adapts to the dynamically varying number of users without retraining.
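The pilot-aware masking idea described above can be illustrated with a minimal NumPy sketch. All shapes, the embedding layout, and the soft down-weighting rule below are assumptions for illustration, not the PAPC architecture itself:

```python
import numpy as np

def masked_attention(queries, keys, values, pilot_ids):
    """Scaled dot-product attention over per-user embeddings, with a mask
    derived from pilot assignments (hypothetical rule: user pairs sharing
    a pilot, i.e., the contaminating pairs, attend without penalty)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)              # (K, K) similarity
    same_pilot = pilot_ids[:, None] == pilot_ids[None, :]
    scores = np.where(same_pilot, scores, scores - 4.0) # soft down-weighting
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ values
```

A hard mask (setting cross-pilot scores to minus infinity) is the other common choice; the soft offset used here merely biases attention toward co-pilot users.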
Abstract:The use of one-bit analog-to-digital converters (ADCs) has been considered a viable alternative to high-resolution counterparts in realizing and commercializing massive multiple-input multiple-output (MIMO) systems. However, the loss of amplitude information caused by one-bit quantizers has to be compensated for; thus, carefully tailored methods need to be developed for one-bit channel estimation and data detection, as the conventional ones cannot be used. To address these issues, the problems of one-bit channel estimation and data detection are investigated here for a MIMO orthogonal frequency division multiplexing (OFDM) system operating over uncorrelated frequency-selective channels. We first develop channel estimators that exploit a Gaussian discriminant analysis (GDA) classifier and approximated versions of it as the so-called weak classifiers in an adaptive boosting (AdaBoost) approach. In particular, combining the approximated GDA classifiers with AdaBoost offers the benefit of scalability with a linear order of computations, which is critical in massive MIMO-OFDM systems. We then exploit the same idea to propose the data detectors. Numerical results validate the efficiency of the proposed channel estimators and data detectors: they show comparable or better performance than the state-of-the-art methods while requiring dramatically lower computational complexity and run times.
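The boosting structure underlying the estimators above can be sketched generically. The weak learner here is a user-supplied placeholder, not the paper's GDA-based classifiers; only the standard AdaBoost weight and combination updates are shown:

```python
import numpy as np

def adaboost_train(X, y, weak_fit, rounds=10):
    """Generic binary AdaBoost skeleton. weak_fit(X, y, w) must return a
    predictor mapping samples to +/-1; the paper's weak classifiers are
    GDA-based, which this sketch does not implement."""
    n = len(y)
    w = np.full(n, 1.0 / n)                 # sample weights
    learners, alphas = [], []
    for _ in range(rounds):
        h = weak_fit(X, y, w)
        pred = h(X)
        err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # learner vote strength
        w *= np.exp(-alpha * y * pred)          # up-weight mistakes
        w /= w.sum()
        learners.append(h)
        alphas.append(alpha)
    return lambda Xq: np.sign(sum(a * h(Xq) for a, h in zip(alphas, learners)))
```

The linear-order scalability claimed in the abstract comes from the weak classifiers themselves being cheap; the boosting loop only adds a constant number of rounds.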
Abstract:Most existing convolutional dictionary learning (CDL) algorithms are based on batch learning, where the dictionary filters and the convolutional sparse representations are optimized in an alternating manner using a training dataset. When large training datasets are used, batch CDL algorithms become prohibitively memory-intensive. An online-learning technique is used to reduce the memory requirements of CDL by optimizing the dictionary incrementally after finding the sparse representations of each training sample. Nevertheless, learning large dictionaries using the existing online CDL (OCDL) algorithms remains highly computationally expensive. In this paper, we present a novel approximate OCDL method that incorporates sparse decomposition of the training samples. The resulting optimization problems are addressed using the alternating direction method of multipliers. Extensive experimental evaluations using several image datasets show that the proposed method substantially reduces computational costs while preserving the effectiveness of the state-of-the-art OCDL algorithms.
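The alternating-direction splitting mentioned above can be illustrated on the simplest sparse-coding instance. This is the scalar-dictionary analogue of the sparse decomposition step, not the proposed OCDL method; the variable names and parameters are illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise shrinkage, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Generic ADMM for min_x 0.5||Ax - b||^2 + lam*||x||_1, the basic
    splitting pattern behind sparse-coding steps in (O)CDL algorithms."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    G = A.T @ A + rho * np.eye(n)   # system matrix, factorable once
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(G, Atb + rho * (z - u))  # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)         # l1 proximal step
        u += x - z                                   # scaled dual update
    return z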
Abstract:Beamforming is a signal processing technique to steer, shape, and focus an electromagnetic wave using an array of sensors toward a desired direction. It has been used in several engineering applications such as radar, sonar, acoustics, astronomy, seismology, medical imaging, and communications. With the advances in multi-antenna technologies largely for radar and communications, there has been a great interest on beamformer design mostly relying on convex/nonconvex optimization. Recently, machine learning is being leveraged for obtaining attractive solutions to more complex beamforming problems. This article captures the evolution of beamforming in the last twenty-five years from convex-to-nonconvex optimization and optimization-to-learning approaches. It provides a glimpse of this important signal processing technique into a variety of transmit-receive architectures, propagation zones, paths, and conventional/emerging applications.
Abstract:This paper considers a formulation of the robust adaptive beamforming (RAB) problem based on worst-case signal-to-interference-plus-noise ratio (SINR) maximization with a nonconvex uncertainty set for the steering vectors. The uncertainty set consists of a similarity constraint and a (nonconvex) double-sided ball constraint. The worst-case SINR maximization problem is turned into a quadratic matrix inequality (QMI) problem using the strong duality of semidefinite programming. Then a linear matrix inequality (LMI) relaxation for the QMI problem is proposed, with an additional valid linear constraint. Necessary and sufficient conditions for the tightened LMI relaxation problem to have a rank-one solution are established. When the tightened LMI relaxation problem still has a high-rank solution, the LMI relaxation problem is further restricted to become a bilinear matrix inequality (BLMI) problem. We then propose an iterative algorithm that solves the BLMI problem, yielding an optimal/suboptimal solution for the original RAB problem. To validate our results, simulation examples are presented to demonstrate the improved array output SINR of the proposed robust beamformer.
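In generic notation (the symbols below are illustrative, not copied from the paper), the worst-case SINR maximization over a steering-vector uncertainty set $\mathcal{A}$ takes the form

```latex
\max_{\mathbf{w}\neq\mathbf{0}} \;\min_{\mathbf{a}\in\mathcal{A}}
\;\frac{\sigma_s^2\,\lvert\mathbf{w}^{\mathsf H}\mathbf{a}\rvert^2}
       {\mathbf{w}^{\mathsf H}\mathbf{R}_{i+n}\,\mathbf{w}},
\qquad
\mathcal{A}=\bigl\{\mathbf{a}:\ \lVert\mathbf{a}-\mathbf{a}_0\rVert\le\varepsilon,\;
\delta_1\le\lVert\mathbf{a}\rVert^2\le\delta_2\bigr\},
```

where $\mathbf{w}$ is the beamformer, $\mathbf{R}_{i+n}$ the interference-plus-noise covariance, $\mathbf{a}_0$ the presumed steering vector, and the double-sided bound $\delta_1\le\lVert\mathbf{a}\rVert^2\le\delta_2$ is the nonconvex ball constraint; the exact set parametrization in the paper may differ.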
Abstract:Simultaneous sparse approximation (SSA) seeks to represent a set of dependent signals using sparse vectors with identical supports. The SSA model has been used in various signal and image processing applications involving multiple correlated input signals. In this paper, we propose algorithms for convolutional SSA (CSSA) based on the alternating direction method of multipliers. Specifically, we address the CSSA problem with different sparsity structures and the convolutional feature learning problem in multimodal data/signals based on the SSA model. We evaluate the proposed algorithms by applying them to multimodal and multifocus image fusion problems.
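The shared-support requirement of the SSA model is typically enforced through a row-wise (l2,1) shrinkage inside the ADMM iterations. The following is a generic building block of such splittings, not the paper's exact update:

```python
import numpy as np

def row_soft_threshold(Z, t):
    """Row-wise l2,1 shrinkage: the proximal operator that zeroes entire
    rows of the coefficient matrix Z (one column per signal), thereby
    enforcing an identical support across all signals."""
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return scale * Z
```

Because the operator acts on whole rows, an atom is either kept for every signal or discarded for every signal, which is exactly the joint-sparsity structure SSA seeks.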
Abstract:Graph convolutional networks (GCNs) can successfully learn the graph signal representation by graph convolution. The graph convolution depends on the graph filter, which contains the topological dependency of data and propagates data features. However, the estimation errors in the propagation matrix (e.g., the adjacency matrix) can have a significant impact on graph filters and GCNs. In this paper, we study the effect of a probabilistic graph error model on the performance of GCNs. We prove that the adjacency matrix under the error model is bounded by a function of graph size and error probability. We further analytically specify the upper bound of a normalized adjacency matrix with self-loops added. Finally, we illustrate the error bounds through experiments on a synthetic dataset and study the accuracy sensitivity of a simple GCN under this probabilistic error model.
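The two objects studied above, the self-loop-normalized adjacency and a probabilistic edge-error model, can be sketched in NumPy. The flip rule below is an assumed i.i.d. edge-perturbation model for illustration; the paper's error model may differ:

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops,
    A_hat = D^{-1/2} (A + I) D^{-1/2}, as used in standard GCN layers."""
    A_tilde = A + np.eye(A.shape[0])          # add self-loops
    d = A_tilde.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def flip_edges(A, p, rng):
    """Illustrative probabilistic error model: flip each off-diagonal
    entry of a 0/1 adjacency independently with probability p, keeping
    the result symmetric with no self-loops."""
    n = A.shape[0]
    F = (rng.random((n, n)) < p)
    F = np.triu(F, 1)
    F = F + F.T                               # symmetric flips
    return np.abs(A - F.astype(float))        # XOR for 0/1 entries
```

For a connected graph the spectral norm of `normalized_adjacency(A)` equals 1, which is the kind of deterministic bound the perturbation analysis builds on.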
Abstract:The robust adaptive beamforming (RAB) problem is considered via the worst-case signal-to-interference-plus-noise ratio (SINR) maximization over distributional uncertainty sets for the random interference-plus-noise covariance (INC) matrix and desired signal steering vector. The distributional uncertainty set of the INC matrix accounts for the support and the positive semidefinite (PSD) mean of the distribution, and a similarity constraint on the mean. The distributional uncertainty set for the steering vector consists of the constraints on the known first- and second-order moments. The RAB problem is formulated as a minimization of the worst-case expected value of the SINR denominator achieved by any distribution, subject to the expected value of the numerator being greater than or equal to one for each distribution. Resorting to the strong duality of linear conic programming, such a RAB problem is rewritten as a quadratic matrix inequality problem. It is then tackled by iteratively solving a sequence of linear matrix inequality relaxation problems with the penalty term on the rank-one PSD matrix constraint. To validate the results, simulation examples are presented, and they demonstrate the improved performance of the proposed robust beamformer in terms of the array output SINR.
Abstract:The problem of direction-of-arrival (DOA) estimation in the presence of nonuniform sensor noise is considered and a novel algorithm is developed. The algorithm consists of three phases. First, the diagonal nonuniform sensor noise covariance matrix is estimated using an iterative procedure that requires only a few iterations to obtain an accurate estimate. The asymptotic variance of one iteration is derived for the proposed noise covariance estimator. Second, a forward-only rooting-based DOA estimator as well as its forward-backward averaging extension are developed for DOA estimation. The DOA estimators take advantage of second-order statistics of the signal subspace perturbation in constructing the weight matrix of a properly designed generalized least squares minimization problem. Although these DOA estimators are iterative, only a few iterations are sufficient to reach accurate results. The asymptotic performance of these DOA estimators is also investigated. Third, a newly designed DOA selection strategy with reasonable computational cost is developed to select the L actual sources out of 2L candidates generated in the second phase. Numerical simulations are conducted to establish the considerable superiority of the proposed algorithm compared to the existing state-of-the-art methods in challenging scenarios, in both cases of uniform and nonuniform sensor noise.
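Forward-backward averaging, the preprocessing step behind the second estimator's extension, is a standard covariance symmetrization and can be stated compactly. This is the textbook operation, not the paper's full estimator:

```python
import numpy as np

def fb_average(R):
    """Forward-backward averaging of a (Hermitian) array covariance:
    R_fb = (R + J conj(R) J) / 2, with J the exchange (flip) matrix.
    R_fb is persymmetric, which effectively doubles the data support
    for uniform linear arrays."""
    J = np.fliplr(np.eye(R.shape[0]))
    return 0.5 * (R + J @ np.conj(R) @ J)
```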
Abstract:Convolutional sparse coding improves on the standard sparse approximation by incorporating a global shift-invariant model. The most efficient convolutional sparse coding methods are based on the alternating direction method of multipliers and the convolution theorem. The only major difference between these methods is how they approach a convolutional least-squares fitting subproblem. This letter presents a solution to this subproblem, which improves the efficiency of the state-of-the-art algorithms. We also use the same approach for developing an efficient convolutional dictionary learning method. Furthermore, we propose a novel algorithm for convolutional sparse coding with a constraint on the approximation error.
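The convolution theorem reduces circular-convolution least-squares fitting to elementwise operations in the DFT domain. The single-filter Tikhonov problem below is a simplified sketch of that principle; the multi-filter subproblem addressed by the letter couples the filters and is not implemented here:

```python
import numpy as np

def conv_tikhonov(d, s, lam=0.1):
    """Solve min_x ||d (*) x - s||^2 + lam*||x||^2 for circular
    convolution (*). By the convolution theorem the normal equations
    diagonalize in the DFT domain, giving a closed-form elementwise
    solution."""
    D = np.fft.fft(d, n=len(s))                # zero-pad filter to signal length
    S = np.fft.fft(s)
    X = np.conj(D) * S / (np.abs(D) ** 2 + lam)
    return np.real(np.fft.ifft(X))
```

Each solve costs O(N log N) via the FFT, versus cubic cost for a dense linear system, which is where the efficiency of FFT-based convolutional sparse coding comes from.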