Abstract: Direct satellite-to-device communication is a promising future direction due to its lower latency and enhanced efficiency. However, intermittent and unpredictable terrestrial interference significantly affects system reliability and performance. Continuously employing sophisticated interference mitigation techniques is practically inefficient. Motivated by the periodic idle intervals characteristic of burst-mode satellite transmissions, this paper investigates online interference detection frameworks specifically tailored to satellite-to-device scenarios. We first rigorously formulate interference detection as a binary hypothesis testing problem, leveraging the differences between the Rayleigh (no interference) and Rician (interference present) distributions. Then, we propose a cumulative sum (CUSUM)-based online detector for scenarios with known interference direction, explicitly characterizing the trade-off between detection latency and false alarm rate, and establish its asymptotic optimality. For practical scenarios with unknown interference direction, we further propose a generalized likelihood ratio (GLR)-based detection method that jointly estimates the interference direction via the Root-MUSIC algorithm. Numerical results validate our theoretical findings and demonstrate that the proposed methods achieve high detection accuracy with remarkably low latency, highlighting their practical applicability in future satellite-to-device communication systems.
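A minimal numerical sketch of the CUSUM idea described above (illustrative only, not the paper's detector: the envelope model, `nu`, `sigma`, the threshold, and the sample counts are all assumed here). Under interference the envelope is Rician with line-of-sight amplitude `nu`, so the per-sample log-likelihood ratio against the Rayleigh hypothesis reduces to `-nu^2/(2*sigma^2) + log I0(x*nu/sigma^2)`, accumulated with the usual reset-at-zero recursion:

```python
import numpy as np

def cusum_rayleigh_rice(samples, nu, sigma, threshold):
    """Online CUSUM detector on envelope samples.

    Pre-change: Rayleigh(sigma); post-change: Rice(nu, sigma).
    Returns the index of the first threshold crossing, or None.
    """
    w = 0.0
    for k, x in enumerate(samples):
        # log f_Rice(x)/f_Rayleigh(x); the common Rayleigh factor
        # x/sigma^2 * exp(-x^2/(2 sigma^2)) cancels out.
        llr = -nu**2 / (2 * sigma**2) + float(np.log(np.i0(x * nu / sigma**2)))
        w = max(0.0, w + llr)            # CUSUM recursion with reset at zero
        if w >= threshold:
            return k
    return None

rng = np.random.default_rng(0)
sigma, nu, change = 1.0, 2.0, 200
pre = rng.rayleigh(scale=sigma, size=change)               # no interference
# Rician envelope: |LOS amplitude + circularly symmetric Gaussian noise|
post = np.abs(nu + rng.normal(0.0, sigma, 300) + 1j * rng.normal(0.0, sigma, 300))
alarm = cusum_rayleigh_rice(np.concatenate([pre, post]), nu, sigma, threshold=10.0)
```

With these assumed parameters the statistic has negative pre-change drift, which keeps false alarms rare, and the alarm fires shortly after the change point, reflecting the latency/false-alarm trade-off the abstract characterizes.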
Abstract: This paper addresses the joint transceiver design, including pilot transmission, channel feature extraction and feedback, and precoding, for low-overhead downlink massive multiple-input multiple-output (MIMO) communication in frequency-division duplex (FDD) systems. Although deep learning (DL) has shown great potential in tackling this problem, existing methods often suffer from poor scalability in practical systems, as the solution obtained in the training phase works only for a fixed feedback capacity and a fixed number of users in the deployment phase. To address this limitation, we propose a novel DL-based framework composed of choreographed neural networks, which uses a single training phase to generate all the transceiver solutions needed in the deployment phase under varying feedback codebook sizes and numbers of users. The proposed framework includes a residual vector-quantized variational autoencoder (RVQ-VAE) for efficient channel feedback and an edge graph attention network (EGAT) for robust multiuser precoding. It can adapt to different feedback capacities by flexibly adjusting the RVQ codebook sizes through the hierarchical codebook structure, and scale with the number of users through a feedback module sharing scheme and the inherent scalability of EGAT. Moreover, a progressive training strategy is proposed to further enhance data transmission performance and generalization capability. Numerical results on a real-world dataset demonstrate the superior scalability and performance of our approach over existing methods.
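The variable-rate property of a hierarchical residual codebook can be seen in a plain (non-learned) residual vector quantization sketch; everything below, including the dimensions, codebook sizes, and per-stage scales, is assumed for illustration and is not the RVQ-VAE of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, stages = 4, 256, 3

# One codebook per stage; later stages are finer (smaller scale).  Each
# codebook keeps an all-zeros codeword so a stage can never increase the
# residual error.
codebooks = []
for s in range(stages):
    cb = rng.normal(scale=0.7 ** s, size=(K, d))
    cb[0] = 0.0
    codebooks.append(cb)

def rvq_encode(x, codebooks):
    """Quantize x stage by stage; each stage encodes the previous residual."""
    indices, residual = [], x.copy()
    for cb in codebooks:
        i = int(np.argmin(np.sum((cb - residual) ** 2, axis=1)))
        indices.append(i)
        residual = residual - cb[i]
    return indices

def rvq_decode(indices, codebooks):
    """Reconstruction is the sum of the selected codewords."""
    return sum(cb[i] for i, cb in zip(indices, codebooks))

x = rng.normal(size=d)
idx = rvq_encode(x, codebooks)
# Truncating the index list gives a coarser reconstruction -- the
# hierarchical property that allows the feedback rate to be varied
# without retraining.
errs = [np.linalg.norm(x - rvq_decode(idx[:s], codebooks[:s]))
        for s in range(1, stages + 1)]
```

Sending only the first stage's index is a low-rate description; appending later stages' indices refines it, so one quantizer serves multiple feedback capacities.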
Abstract: This paper considers an intelligent reflecting surface (IRS)-assisted bi-static localization architecture for the sixth-generation (6G) integrated sensing and communication (ISAC) network. The system consists of a transmit user, a receive base station (BS), an IRS, and multiple targets in either the far-field or near-field region of the IRS. In particular, we focus on the challenging scenario where the line-of-sight (LOS) paths between targets and the BS are blocked, such that the emitted orthogonal frequency division multiplexing (OFDM) signals from the user reach the BS merely via the user-target-IRS-BS path. Based on the signals received by the BS, our goal is to localize the targets by estimating their relative positions to the IRS, instead of to the BS. We show that subspace-based methods, such as the multiple signal classification (MUSIC) algorithm, can be applied to the BS's received signals to estimate the relative states from the targets to the IRS. To this end, we create a virtual signal by combining user-target-IRS-BS channels over various time slots. By applying MUSIC to such a virtual signal, we are able to detect the far-field targets and the near-field targets, and estimate the angles of arrival (AOAs) and/or ranges from the targets to the IRS. Furthermore, we theoretically verify that the proposed method can perfectly estimate the relative states from the targets to the IRS in the ideal case with infinite coherence blocks. Numerical results verify the effectiveness of our proposed IRS-assisted localization scheme. Our paper demonstrates the potential of employing passive anchors, i.e., IRSs, to improve the sensing coverage of the active anchors, i.e., BSs.
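As a toy illustration of the subspace idea invoked above, here is a standard far-field MUSIC example on a half-wavelength uniform linear array (not the paper's virtual-signal construction; the array size, snapshot count, true angle, and noise level are all assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, theta_true = 8, 200, 20.0             # ULA size, snapshots, AOA (deg)

def steer(theta_deg, n):
    """Half-wavelength ULA steering vector."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(theta_deg)))

# One far-field source plus additive noise
s = (rng.normal(size=T) + 1j * rng.normal(size=T)) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(N, T)) + 1j * rng.normal(size=(N, T)))
X = np.outer(steer(theta_true, N), s) + noise

R = X @ X.conj().T / T                      # sample covariance
eigvecs = np.linalg.eigh(R)[1]              # columns sorted by ascending eigenvalue
En = eigvecs[:, : N - 1]                    # noise subspace (one source)

# MUSIC pseudospectrum: peaks where the steering vector is orthogonal
# to the noise subspace
grid = np.arange(-90.0, 90.0, 0.1)
A = np.array([steer(t, N) for t in grid]).T
spec = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
theta_hat = float(grid[np.argmax(spec)])
```

The same orthogonality principle underlies the paper's scheme, except that the "snapshots" there are the virtual signals built from the user-target-IRS-BS channels across time slots.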
Abstract: This paper considers networked sensing in a cellular network, where multiple base stations (BSs) first compress their received echo signals from multiple targets and then forward the quantized signals to the cloud via limited-capacity backhaul links, such that the cloud can leverage all useful echo signals to perform high-resolution localization. Under this setup, we characterize the posterior Cramér-Rao bound (PCRB) for localizing all the targets as a function of the transmit covariance matrix and the compression noise covariance matrix of each BS. Then, a PCRB minimization problem subject to the transmit power constraints and the backhaul capacity constraints is formulated to jointly design the BSs' transmission and compression strategies. We propose an efficient algorithm to solve this problem based on the alternating optimization technique. Specifically, it is shown that when either the transmit covariance matrices or the compression noise covariance matrices are fixed, the successive convex approximation technique can be leveraged to locally optimize the other. Numerical results are provided to verify the effectiveness of our proposed algorithm.
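The alternating-optimization pattern used above, fixing one block of variables and optimizing the other, then swapping, can be illustrated on a toy biconvex problem (rank-1 matrix approximation via alternating least squares). This is only a sketch of the monotone-descent property, not the paper's PCRB problem, and all sizes are assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 4))                  # toy data matrix

# Minimize ||M - u v^T||_F^2 by alternating exact (least-squares) updates
u, v = rng.normal(size=5), rng.normal(size=4)
errs = []
for _ in range(200):
    u = M @ v / (v @ v)                      # optimal u with v fixed
    v = M.T @ u / (u @ u)                    # optimal v with u fixed
    errs.append(float(np.linalg.norm(M - np.outer(u, v)) ** 2))

# Best achievable rank-1 residual, from the SVD (Eckart-Young theorem)
opt = float(np.sum(np.linalg.svd(M, compute_uv=False)[1:] ** 2))
```

Because each alternating step solves its subproblem exactly, the objective is monotonically non-increasing, which is the same convergence mechanism exploited by the alternating design of the transmission and compression covariance matrices.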
Abstract: This letter considers transceiver design in frequency-division duplex (FDD) massive multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems for high-quality data transmission. We propose a novel deep learning-based framework in which pilot design, channel feedback, and hybrid beamforming are realized by carefully crafted deep neural networks. All the considered modules are jointly learned in an end-to-end manner, and a graph neural network is adopted to effectively capture the interactions between beamformers based on the built graphical representation. Numerical results validate the effectiveness of our method.
Abstract: Fast and precise beam alignment is crucial for high-quality data transmission in millimeter-wave (mmWave) communication systems, where large-scale antenna arrays are utilized to overcome the severe propagation loss. To tackle this challenging problem, we propose a novel deep learning-based hierarchical beam alignment method for both multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) systems, which learns two tiers of probing codebooks (PCs) and uses their measurements to predict the optimal beam in a coarse-to-fine search manner. Specifically, a hierarchical beam alignment network (HBAN) is developed for MISO systems, which first performs coarse channel measurement using a tier-1 PC, then selects a tier-2 PC for fine channel measurement, and finally predicts the optimal beam based on both coarse and fine measurements. The proposed HBAN is trained in two steps: the tier-1 PC and the tier-2 PC selector are first trained jointly, followed by the joint training of all the tier-2 PCs and beam predictors. Furthermore, an HBAN for MIMO systems is proposed to directly predict the optimal beam pair without performing beam alignment individually at the transmitter and receiver. Numerical results demonstrate that the proposed HBANs are superior to the state-of-the-art methods in both alignment accuracy and signaling overhead reduction.
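A stripped-down, noiseless version of coarse-to-fine beam search (fixed DFT codebooks rather than the learned probing codebooks of the abstract, with all sizes assumed): tier-1 beams use only a subset of antennas, so their wide mainlobes partition the fine codebook into sectors, and tier-2 searches only the selected sector:

```python
import numpy as np

N, N_coarse = 32, 8                          # antennas; active antennas per wide beam
psi = 2 * np.pi * 0.3                        # spatial frequency of a LOS channel
h = np.exp(1j * psi * np.arange(N))          # channel (noiseless, for clarity)

# Fine codebook: 32 DFT beams over the full array.  Tier-1 wide beams use
# only 8 antennas, which broadens each mainlobe to cover ~8 fine beams.
fine = np.exp(1j * 2 * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
coarse = np.zeros((N, 4), dtype=complex)
coarse[:N_coarse] = np.exp(
    1j * 2 * np.pi * np.outer(np.arange(N_coarse), np.arange(4)) / 4
) / np.sqrt(N_coarse)

k = int(np.argmax(np.abs(h.conj() @ coarse)))       # tier 1: pick best sector
sector = (8 * k + np.arange(-4, 5)) % N             # tier 2: 9 fine beams around it
m = int(sector[np.argmax(np.abs(h.conj() @ fine[:, sector]))])
hier_gain = float(np.abs(h.conj() @ fine[:, m]))    # 4 + 9 = 13 measurements
exh_gain = float(np.abs(h.conj() @ fine).max())     # exhaustive: 32 measurements
```

Here the two-stage search measures 13 beams instead of 32 yet recovers the same beamforming gain as the exhaustive sweep, which is the overhead saving that motivates hierarchical alignment.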
Abstract: This paper investigates the problem of activity detection and channel estimation in cooperative multi-cell massive access systems with temporally correlated activity, where all access points (APs) are connected to a central unit via fronthaul links. We propose to perform user-centric AP cooperation to alleviate the computation burden and introduce a generalized sliding-window detection strategy to fully exploit the temporal correlation in activity. By establishing the probabilistic model associated with the factor graph representation, we propose a scalable Dynamic Compressed Sensing-based Multiple Measurement Vector Generalized Approximate Message Passing (DCS-MMV-GAMP) algorithm from the perspective of Bayesian inference. Therein, the activity likelihood is refined by performing standard message passing among the activities in the spatial-temporal domain, and GAMP is employed for efficient channel estimation. Furthermore, we develop two schemes of quantize-and-forward (QF) and detect-and-forward (DF) based on DCS-MMV-GAMP for the finite-fronthaul-capacity scenario, which are extensively evaluated under various system limits. Numerical results verify the significant superiority of the proposed approach over the benchmarks. Moreover, it is revealed that QF usually realizes superior performance when the number of antennas is small, whereas DF becomes preferable under limited fronthaul capacity when large-scale antenna arrays are deployed.
Abstract: This paper studies the temporally-correlated massive access system where a large number of users communicate with the base station sporadically and, once active, are likely to continue transmitting data in the following frames. To exploit both the sparsity and the temporal correlations in the user activities, we formulate the joint user activity detection and channel estimation problem in multiple consecutive frames as a dynamic compressed sensing (DCS) problem. In particular, the problem is solved under the Bayesian inference framework to fully utilize the channel statistics and the activity evolution process. The hybrid generalized approximate message passing (HyGAMP) framework is leveraged to design a HyGAMP-DCS algorithm, which can nearly achieve the Bayesian optimality with efficient computations. Specifically, the proposed algorithm comprises a GAMP part for channel estimation and an MP part for activity likelihood update, with extrinsic information exchanged between them for performance enhancement. Moreover, we develop the expectation maximization HyGAMP-DCS (EM-HyGAMP-DCS) algorithm to adaptively learn the hyperparameters during the estimation procedure when the system statistics are unavailable. In particular, the analytical tool of state evolution is provided to find the appropriate hyperparameter initialization that ensures EM-HyGAMP-DCS achieves satisfactory performance and fast convergence. Simulation results validate that our proposed algorithm significantly outperforms the existing methods.
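A minimal single-measurement-vector AMP sketch with a soft-thresholding denoiser conveys the message-passing skeleton that HyGAMP-DCS builds on (the actual algorithm is far richer, with a Bayesian denoiser and temporal activity messages; the dimensions, sparsity level, and threshold rule `alpha = 2` below are assumptions for illustration):

```python
import numpy as np

def amp_sparse(y, A, iters=50, alpha=2.0):
    """Minimal soft-thresholding AMP for y = A x + w with sparse x.

    The threshold tracks the residual level each iteration, and the
    Onsager correction keeps the effective noise approximately Gaussian,
    which is what lets a simple scalar denoiser perform well.
    """
    M, N = A.shape
    x, z = np.zeros(N), y.copy()
    for _ in range(iters):
        r = x + A.T @ z                                     # pseudo-data
        tau = alpha * np.linalg.norm(z) / np.sqrt(M)        # threshold level
        x = np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)   # soft threshold
        z = y - A @ x + z * (np.count_nonzero(x) / M)       # Onsager term
    return x

rng = np.random.default_rng(0)
M, N, k = 250, 500, 25
A = rng.normal(size=(M, N)) / np.sqrt(M)                    # sensing matrix
x0 = np.zeros(N)
support = rng.choice(N, k, replace=False)                   # active "users"
x0[support] = rng.normal(size=k)
xhat = amp_sparse(A @ x0, A)                                # noiseless sketch
rel_err = float(np.linalg.norm(xhat - x0) / np.linalg.norm(x0))
```

Replacing the soft threshold with a posterior-mean denoiser under the activity prior, and coupling the activity likelihoods across frames, is conceptually how the GAMP and MP parts of the proposed algorithm interact.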
Abstract: Fast and precise beam alignment is crucial to support high-quality data transmission in millimeter wave (mmWave) communication systems. In this work, we propose a novel deep learning-based hierarchical beam alignment method that learns two tiers of probing codebooks (PCs) and uses their measurements to predict the optimal beam in a coarse-to-fine search manner. Specifically, the proposed method first performs coarse channel measurement using the tier-1 PC, then selects a tier-2 PC for fine channel measurement, and finally predicts the optimal beam based on both coarse and fine measurements. The proposed deep neural network (DNN) architecture is trained in two steps. First, the tier-1 PC and the tier-2 PC selector are trained jointly. After that, all the tier-2 PCs together with the optimal beam predictors are trained jointly. The learned hierarchical PCs can capture the features of the propagation environment. Numerical results based on realistic ray-tracing datasets demonstrate that the proposed method is superior to the state-of-the-art beam alignment methods in both alignment accuracy and sweeping overhead.
Abstract: This letter considers temporally-correlated massive access, where each device, once activated, is likely to transmit continuously over several consecutive frames. Motivated by the fact that the device activity in each frame is correlated with both its previous and its next frame, we propose a double-sided information (DSI)-aided joint activity detection and channel estimation algorithm based on the approximate message passing (AMP) framework. The DSI is extracted from the estimation results in a sliding window that contains the target detection frame together with its previous and next frames. The proposed algorithm demonstrates superior performance over the state-of-the-art methods.