Abstract: In this paper, we propose a low-complexity and fast hybrid beamforming design for joint communications and sensing (JCAS) based on deep unfolding. We first derive closed-form expressions for the gradients of the communications sum rate and the sensing beampattern error with respect to the analog and digital precoders. Building on this, we develop a deep neural network as an unfolded version of the projected gradient ascent (PGA) algorithm, which we refer to as UPGANet. This approach efficiently optimizes the communication-sensing performance tradeoff with fast convergence, enabled by the learned step sizes. UPGANet preserves the interpretability and flexibility of the conventional PGA optimizer while enhancing performance through data training. Our simulations show that UPGANet achieves up to a 33.5% higher communications sum rate and 2.5 dB lower beampattern error compared to conventional designs based on successive convex approximation and Riemannian manifold optimization. Additionally, it reduces runtime and computational complexity by up to 65% compared to PGA without unfolding.
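As a rough illustration of the unfolded PGA layer described above, the following sketch shows one projected gradient ascent step over the analog and digital precoders with learnable step sizes; the gradient function, variable names, and projections are illustrative assumptions, not the authors' implementation, and the paper's closed-form gradients would be plugged in as grad_fn.

```python
import numpy as np

def pga_layer(F_rf, F_bb, grad_fn, mu_rf, mu_bb, P_max):
    """One unfolded PGA layer: ascent step with learned step sizes, then projection."""
    g_rf, g_bb = grad_fn(F_rf, F_bb)            # closed-form gradients of the JCAS objective
    F_rf = F_rf + mu_rf * g_rf                  # learned step size for the analog precoder
    F_bb = F_bb + mu_bb * g_bb                  # learned step size for the digital precoder
    F_rf = np.exp(1j * np.angle(F_rf))          # project analog entries onto unit modulus
    nrm = np.linalg.norm(F_rf @ F_bb, 'fro')
    F_bb = F_bb * min(1.0, np.sqrt(P_max) / nrm)  # project onto the transmit-power budget
    return F_rf, F_bb
```

Stacking a small number of such layers, each with its own trainable (mu_rf, mu_bb), yields the unfolded network; the step sizes are the quantities adjusted by training.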
Abstract: The Kalman filter (KF) and its variants are among the most celebrated algorithms in signal processing. These methods are used for state estimation of dynamic systems by relying on mathematical representations in the form of simple state-space (SS) models, which may be crude and inaccurate descriptions of the underlying dynamics. Emerging data-centric artificial intelligence (AI) techniques tackle these tasks using deep neural networks (DNNs), which are model-agnostic. Recent developments illustrate the possibility of fusing DNNs with classic Kalman-type filtering, obtaining systems that learn to track in partially known dynamics. This article provides a tutorial-style overview of design approaches for incorporating AI to aid KF-type algorithms. We review both generic and dedicated DNN architectures suitable for state estimation, and provide a systematic presentation of techniques for fusing AI tools with KFs and for leveraging partial SS modeling and data, categorizing design approaches into task-oriented and SS model-oriented. The usefulness of each approach in preserving the individual strengths of model-based KFs and data-driven DNNs is investigated in a qualitative and quantitative study, whose code is publicly available, illustrating the gains of hybrid model-based/data-driven designs. We also discuss existing challenges and future research directions that arise from fusing AI and Kalman-type algorithms.
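For reference, the recursion that such hybrid designs build upon is the textbook linear KF; a minimal sketch is given below (standard SS model x_{t+1} = F x_t + w_t, y_t = H x_t + v_t). The AI-aided variants surveyed in the article replace or augment parts of this recursion, e.g., by learning the Kalman gain from data.

```python
import numpy as np

def kf_step(x, P, y, F, H, Q, R):
    """One linear Kalman filter recursion: model-based predict, observation-based update."""
    x_pred = F @ x                              # predict the state with the SS model
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)       # correct with the new observation
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new
```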
Abstract: Joint communications and sensing (JCAS) is expected to be a crucial technology for future wireless systems. This paper investigates beamforming design for a multi-user multi-target JCAS system to ensure fairness and balance between communications and sensing performance. We jointly optimize the transmit and receive beamformers to maximize the weighted sum of the minimum communications rate and the sensing mutual information. The formulated problem is highly challenging due to its non-smooth and non-convex nature. To overcome these challenges, we reformulate the problem into an equivalent but more tractable form. We first solve this problem by alternating optimization (AO) and then propose a machine learning algorithm based on the AO approach. Numerical results show that our algorithm scales effectively with the number of communications users and provides better performance with a shorter run time compared to conventional optimization approaches.
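A generic alternating-optimization skeleton of the kind referred to above is sketched below; the update routines and objective are placeholders for the paper's reformulated subproblems, and all names are illustrative assumptions.

```python
def alternating_optimization(W_tx, W_rx, update_tx, update_rx, objective,
                             max_iter=50, tol=1e-4):
    """Alternate between transmit and receive beamformer updates until convergence."""
    prev = objective(W_tx, W_rx)
    for _ in range(max_iter):
        W_tx = update_tx(W_tx, W_rx)   # optimize transmit beamformers, receive fixed
        W_rx = update_rx(W_tx, W_rx)   # optimize receive beamformers, transmit fixed
        cur = objective(W_tx, W_rx)    # weighted sum of min-rate and sensing MI
        if abs(cur - prev) < tol:
            break
        prev = cur
    return W_tx, W_rx
```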
Abstract: A leading family of algorithms for state estimation in dynamic systems with multiple sub-states is based on particle filters (PFs). PFs often struggle when operating under complex or approximated modelling (necessitating many particles) with low latency requirements (limiting the number of particles), as is typically the case in multi-target tracking (MTT). In this work, we introduce a deep neural network (DNN) augmentation for PFs termed learning flock (LF). LF learns to correct a set of particles and weights, which we coin a flock, based on the relationships between all sub-particles in the set itself, while disregarding the procedure by which the set was acquired. Our proposed LF, which can be readily incorporated into the flow of different PFs, is designed to facilitate rapid operation by maintaining accuracy with a reduced number of particles. We introduce a dedicated training algorithm, allowing both supervised and unsupervised training, and yielding a module that supports a varying number of sub-states and particles without necessitating re-training. We experimentally show the improvements in performance, robustness, and latency of LF augmentation for radar multi-target tracking, as well as its ability to mitigate the effect of mismatched observation modelling. We also compare and illustrate the advantages of LF over a state-of-the-art DNN-aided PF, and demonstrate that LF enhances both classic PFs and DNN-based filters.
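Conceptually, the LF correction slots into a standard PF recursion between weighting and resampling; the sketch below is a hedged illustration, with lf_correct standing in for the trained module (its interface here is an assumption, not the paper's code).

```python
import numpy as np

def pf_step_with_lf(particles, weights, y, propagate, likelihood, lf_correct):
    """Bootstrap PF step augmented by a learned correction of the particle-weight set."""
    particles = propagate(particles)                 # propagate all sub-particles
    weights = weights * likelihood(y, particles)     # reweight by the new observation
    weights = weights / weights.sum()
    # LF module: corrects the flock (particles + weights) based only on the set itself,
    # regardless of how it was produced; placeholder for the trained DNN.
    particles, weights = lf_correct(particles, weights)
    weights = weights / weights.sum()
    return particles, weights
```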
Abstract: Deep neural networks (DNNs) were shown to facilitate the operation of uplink multiple-input multiple-output (MIMO) receivers, with emerging architectures augmenting modules of classic receiver processing. Current designs consider static DNNs, whose architecture is fixed and whose weights are pre-trained. This induces a notable challenge, as the resulting MIMO receiver is suitable for a given configuration, i.e., channel distribution and number of users, while in practice these parameters change frequently with network variations and users leaving and joining the network. In this work, we tackle this core challenge of DNN-aided MIMO receivers. We build upon the concept of hypernetworks, augmenting the receiver with a pre-trained deep model whose purpose is to update the weights of the DNN-aided receiver upon instantaneous channel variations. We design our hypernetwork to augment modular deep receivers, leveraging their modularity to have the hypernetwork adapt not only the weights but also the architecture. Our modular hypernetwork leads to a DNN-aided receiver whose architecture and resulting complexity adapt to the number of users, in addition to channel variations, without retraining. Our numerical studies demonstrate superior error-rate performance of modular hypernetworks in time-varying channels compared to static pre-trained receivers, while providing rapid adaptivity and scalability to network variations.
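The flow can be pictured as a hypernetwork mapping each user's channel estimate into the weights of that user's receiver module, so the number of instantiated modules (and hence the complexity) follows the number of active users; the sketch below is an illustrative single-layer version with assumed names and shapes, not the authors' architecture.

```python
import numpy as np

def run_module(x, theta, d_in, d_out):
    """Apply a linear module whose parameters were produced by the hypernetwork."""
    W = theta[: d_in * d_out].reshape(d_out, d_in)
    b = theta[d_in * d_out:]
    return W @ x + b

def modular_receiver(y, H_est, hypernet, d_in, d_out):
    """One module per active user, with weights generated from that user's channel."""
    outputs = []
    for h_user in H_est:                 # the loop length adapts to the number of users
        theta = hypernet(h_user)         # hypernetwork output: flattened module weights
        outputs.append(run_module(y, theta, d_in, d_out))
    return np.stack(outputs)
```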
Abstract: We study iterative blind symbol detection for block-fading linear inter-symbol interference channels. Based on the factor graph framework, we design a joint channel estimation and detection scheme that combines the expectation maximization (EM) algorithm and the ubiquitous belief propagation (BP) algorithm. Interweaving the iterations of both schemes significantly reduces the EM algorithm's computational burden while retaining its excellent performance. To this end, we apply simple yet effective model-based learning methods to find a suitable parameter update schedule by introducing momentum in both the EM parameter updates and the BP message passing. Numerical simulations verify that the proposed method can learn efficient schedules that generalize well and even outperform coherent BP detection in high signal-to-noise-ratio scenarios.
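The momentum idea mentioned above can be sketched as a heavy-ball-style damping of the EM channel-estimate updates (an analogous damping can be applied to BP messages); the factor beta below would be part of the learned schedule, and all names are illustrative assumptions.

```python
def em_momentum_update(h_prev, h_em, velocity, beta):
    """Blend the raw EM re-estimate with a momentum term (learned damping factor beta)."""
    velocity = beta * velocity + (h_em - h_prev)   # remember the previous update direction
    h_new = h_prev + velocity                      # accelerated/damped parameter update
    return h_new, velocity
```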
Abstract: Multiple-input multiple-output (MIMO) systems play a key role in wireless communication technologies. A widely considered approach to realizing scalable MIMO systems involves architectures composed of multiple separate modules, each with its own beamforming capability. Such models accommodate cell-free massive MIMO and partially connected hybrid MIMO architectures. A core issue with the implementation of modular MIMO arises from the need to rapidly set the beampatterns of the modules while maintaining their power efficiency. This leads to a challenging constrained optimization problem that must be solved repeatedly within each coherence duration. In this work, we propose a power-oriented optimization algorithm for beamforming in uplink modular hybrid MIMO systems, which learns from data to operate rapidly. We derive our learned optimizer by tackling the rate maximization objective using projected gradient ascent steps with momentum. We then leverage data to tune the hyperparameters of the optimizer, allowing it to operate reliably in a fixed and small number of iterations while completely preserving its interpretable operation. We show how power-efficient beamforming can be encouraged by the learned optimizer, by promoting architectures with low-resolution phase shifts and with deactivated analog components. Numerical results show that our learn-to-optimize method notably reduces the number of iterations and the computation latency required to reliably tune modular MIMO receivers, and that it allows obtaining desirable balances between power-efficient designs and throughput.
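A minimal sketch of the learned optimizer's core loop, assuming a generic gradient and projection (e.g., onto low-resolution phase shifts or deactivated analog elements), is given below; the per-iteration step sizes mu and momentum factors beta are the data-tuned hyperparameters, and all names are assumptions.

```python
import numpy as np

def pga_with_momentum(W, grad_fn, project, mu, beta):
    """Projected gradient ascent with momentum, run for a fixed, small number of steps."""
    V = np.zeros_like(W)
    for t in range(len(mu)):
        V = beta[t] * V + grad_fn(W)   # momentum accumulation of the rate gradient
        W = project(W + mu[t] * V)     # enforce the hardware (power-oriented) constraints
    return W
```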
Abstract: Extremely large-scale antenna arrays are poised to play a pivotal role in sixth-generation (6G) networks. Utilizing such arrays often results in a near-field spherical-wave transmission environment, enabling the generation of focused beams, which introduces new degrees of freedom for wireless localization. In this paper, we consider a beam-focusing design for localizing multiple sources in the radiating near-field. Our formulation accommodates various expected implementations of large antenna arrays, including hybrid analog/digital architectures and dynamic metasurface antennas (DMAs). We consider a direct localization method that exploits the curvature of arrival of the impinging spherical wavefronts to obtain the user positions. In this regard, we adopt a two-stage approach that configures the array to optimize near-field positioning. In the first stage, we focus only on adjusting the array coefficients to minimize the estimation error. We obtain an approximate closed-form solution based on projection, as well as an improved one based on the Riemannian gradient algorithm. We then extend this approach to simultaneously localize the users and focus the beams via a sub-optimal iterative approach that does not rely on prior knowledge of the user positions. The simulation results show that near-field localization based on a hybrid array or a DMA can approach the accuracy of fully digital arrays at a lower cost, and that DMAs can attain better performance than hybrid solutions with the same aperture.
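For the array-coefficient stage, a standard ingredient is a Riemannian gradient step on the unit-modulus (complex-circle) manifold on which phase-shifter weights live; the sketch below shows the usual tangent-space projection and retraction, with the estimation-error gradient left as a placeholder (names are illustrative, not the paper's code).

```python
import numpy as np

def unit_modulus_riemannian_step(w, egrad, step):
    """One Riemannian gradient step for per-element unit-modulus array weights."""
    rgrad = egrad - np.real(egrad * np.conj(w)) * w   # project onto the tangent space at w
    w = w - step * rgrad                              # move along the manifold direction
    return w / np.abs(w)                              # retract back to |w_i| = 1
```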
Abstract: Deep learning is envisioned to facilitate the operation of wireless receivers, with emerging architectures integrating deep neural networks (DNNs) with traditional modular receiver processing. While deep receivers were shown to operate reliably in complex settings for which they were trained, the dynamic nature of wireless communications gives rise to the need to repeatedly adapt deep receivers to channel variations. However, frequent re-training is costly and ineffective, while in practice not every channel variation necessitates adaptation of the entire DNN. In this paper, we study concept drift detection for identifying when a deep receiver no longer matches the channel, enabling asynchronous adaptation, i.e., re-training only when necessary. We identify existing drift detection schemes from the machine learning literature that can be adapted for deep receivers in dynamic channels, and propose a novel soft-output detection mechanism tailored to the communication domain. Moreover, for deep receivers that preserve conventional modular receiver processing, we design modular drift detection mechanisms that simultaneously identify when re-training is needed and which sub-module to re-train. The provided numerical studies show that, even in rapidly time-varying scenarios, asynchronous adaptation via modular drift detection dramatically reduces the number of trained parameters and re-training times, with little compromise on performance.
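One simple way to instantiate the soft-output monitoring idea is to track a confidence statistic of the receiver's soft decisions over a sliding window and flag drift when it degrades relative to a post-training reference; the sketch below is an assumed illustration, not the paper's exact detector.

```python
import numpy as np

def soft_output_drift(soft_probs, ref_confidence, window=200, threshold=0.1):
    """Flag drift when recent soft-output confidence drops below its reference level."""
    recent = soft_probs[-window:]                     # (window, num_classes) soft outputs
    confidence = np.max(recent, axis=1).mean()        # average top-1 soft-decision confidence
    return (ref_confidence - confidence) > threshold  # True -> trigger (modular) re-training
```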
Abstract: Various communication technologies are expected to utilize mobile ad hoc networks (MANETs). By combining MANETs with non-orthogonal multiple access (NOMA) communications, one can support scalable, spectrally efficient, and flexible network topologies. To achieve these benefits of NOMA MANETs, one should determine the transmission protocol, particularly the superposition code. However, the latter involves lengthy optimization that has to be repeated whenever the topology changes. In this work, we propose an algorithm for rapidly optimizing superposition codes in multi-hop NOMA MANETs. To achieve reliable tuning within a few iterations, we adopt the emerging deep unfolding methodology, leveraging data to obtain reliable settings. Our superposition coding optimization algorithm utilizes a small number of projected gradient steps while learning its per-user hyperparameters to maximize the minimal rate over past channels in an unsupervised manner. The learned optimizer is designed both for settings with full channel state information and for settings in which the channel coefficients are to be estimated from pilots. We show that the combination of principled optimization and machine learning yields a scalable optimizer that, once trained, can be applied to different topologies. We cope with the non-convex nature of the optimization problem by applying parallel learned optimization with different starting points as a form of ensemble learning. Our numerical results demonstrate that the proposed method enables the rapid setting of high-rate superposition codes for various channels.
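The core of the learned optimizer described above is a handful of projected gradient steps on the superposition-code power split with learned per-user and per-iteration step sizes, repeated from several starting points; the code below is an illustrative sketch with assumed names, where min_rate_grad stands for a (sub)gradient of the minimal user rate.

```python
import numpy as np

def superposition_pga(p0, min_rate_grad, P_total, mu):
    """A few projected gradient steps on the power allocation; mu holds learned step sizes
    (each mu[t] may be a per-user vector)."""
    p = p0.copy()
    for t in range(len(mu)):
        p = p + mu[t] * min_rate_grad(p)             # ascend the minimal user rate
        p = np.clip(p, 0.0, None)                    # keep powers non-negative
        p = p * (P_total / max(p.sum(), 1e-12))      # project onto the total power budget
    return p

def ensemble_superposition(starts, min_rate, **kwargs):
    """Parallel learned optimization from different starting points (ensemble learning)."""
    candidates = [superposition_pga(p0, **kwargs) for p0 in starts]
    return max(candidates, key=min_rate)             # keep the best-performing solution
```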