Abstract: The advent of next-generation ultra-reliable and low-latency communications (xURLLC) imposes stringent and unprecedented requirements on key performance indicators (KPIs). As a disruptive technology, non-orthogonal multiple access (NOMA) harbors the potential to fulfill the stringent KPIs essential for xURLLC. However, the immaturity of research on the tail distributions of these KPIs significantly impedes the application of NOMA to xURLLC. Stochastic network calculus (SNC), as a potent methodology, is leveraged to provide dependable theoretical insights into tail distribution analysis and statistical quality-of-service (QoS) provisioning (SQP). In this article, we develop a NOMA-assisted uplink xURLLC network architecture that incorporates an SNC-based SQP theoretical framework (SNC-SQP) to support tail distribution analysis in terms of delay, age-of-information (AoI), and reliability. Based on SNC-SQP, an SQP-driven power optimization problem is formulated to minimize transmit power while guaranteeing xURLLC's KPIs on delay, AoI, reliability, and power consumption. Extensive simulations validate our proposed theoretical framework and demonstrate that the proposed power allocation scheme significantly reduces uplink transmit power and outperforms conventional schemes in terms of SQP performance.
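For intuition on the tail-distribution analysis underlying SNC-SQP, the following Python sketch computes two-user uplink NOMA rates with successive interference cancellation and an effective-capacity (Chernoff-type) bound on the delay-violation probability. The power levels, fading model, and the simplified bound are illustrative assumptions, not the paper's exact SNC derivation.

```python
import numpy as np

# Hypothetical two-user uplink NOMA toy example (not the paper's exact SNC-SQP model).
# User 1 is decoded first at the base station; user 2 is recovered after SIC.
def noma_uplink_rates(p1, p2, g1, g2, noise=1.0, bandwidth=1.0):
    """Shannon rates (bits/s/Hz) for a 2-user uplink NOMA cluster with perfect SIC."""
    r1 = bandwidth * np.log2(1.0 + p1 * g1 / (p2 * g2 + noise))  # decoded against user 2's interference
    r2 = bandwidth * np.log2(1.0 + p2 * g2 / noise)              # interference-free after SIC
    return r1, r2

def delay_violation_bound(theta, arrival_rate, service_rates):
    """Chernoff-style bound on P(delay > d) from i.i.d. per-slot service rates.

    Uses the standard effective-capacity argument: if the effective capacity at
    QoS exponent theta exceeds the arrival rate, the delay tail decays roughly as
    exp(-theta * EC(theta) * d).  This is a simplified stand-in for full SNC analysis.
    """
    ec = -np.log(np.mean(np.exp(-theta * np.asarray(service_rates)))) / theta
    if ec <= arrival_rate:
        return None  # bound is vacuous: this QoS exponent is not supportable
    return lambda d: float(np.exp(-theta * ec * d))

# Example: Rayleigh-faded service process seen by user 2 (illustrative parameters)
rng = np.random.default_rng(0)
g2 = rng.exponential(scale=1.0, size=10_000)
_, r2 = noma_uplink_rates(p1=0.5, p2=0.2, g1=1.0, g2=g2)
bound = delay_violation_bound(theta=0.5, arrival_rate=0.5, service_rates=r2)
if bound is not None:
    print("P(delay > 10 slots) <=", bound(10))
```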
Abstract: Revolutionary sixth-generation (6G) wireless communication technologies and applications, notably digital twin networks (DTN), connected autonomous vehicles (CAVs), space-air-ground integrated networks (SAGINs), zero-touch networks, Industry 5.0, and Healthcare 5.0, are driving next-generation wireless networks (NGWNs). These technologies generate massive volumes of data that require swift transmission and trillions of device connections, fueling the need for sophisticated next-generation multiple access (NGMA) schemes. NGMA is expected to enable massive connectivity in the 6G era and to optimize NGWN operations beyond what current multiple access (MA) schemes can achieve. This survey showcases non-orthogonal multiple access (NOMA) as the frontrunner for NGMA, exploring three questions: What has NOMA delivered? What is NOMA providing? What lies ahead? We present NOMA variants, fundamental operations, and applicability in multi-antenna systems, machine learning, reconfigurable intelligent surfaces (RIS), cognitive radio networks (CRN), integrated sensing and communications (ISAC), terahertz networks, and unmanned aerial vehicles (UAVs). Additionally, we explore NOMA's interplay with state-of-the-art wireless technologies, highlighting its advantages and technical challenges. Finally, we unveil NOMA research trends in the 6G era and provide design recommendations and future perspectives for NOMA as the leading NGMA solution for NGWNs.
Abstract: A Cram\'er-Rao bound (CRB) optimization framework for near-field sensing (NISE) with continuous-aperture arrays (CAPAs) is proposed. In contrast to conventional spatially discrete arrays (SPDAs), CAPAs emit electromagnetic (EM) probing signals through continuous source currents for target sensing, thereby exploiting the full spatial degrees of freedom (DoFs). A maximum likelihood estimation (MLE) method for estimating target locations in the near-field region is developed. To evaluate the NISE performance of CAPAs, the CRB for estimating target locations is derived based on the continuous transmit and receive array responses of CAPAs. Subsequently, a CRB minimization problem is formulated to optimize the continuous source current of CAPAs, which results in a non-convex, integral-based functional optimization problem. To address this challenge, the optimal structure of the source current is derived and proven to be spanned by a series of basis functions determined by the system geometry. Leveraging this optimal structure, a low-complexity subspace manifold gradient descent (SMGD) method is proposed to solve the CRB minimization problem. Our simulation results validate the effectiveness of the proposed SMGD method and further demonstrate that i)~the SMGD method solves the CRB minimization problem with reduced computational complexity, and ii)~the CAPA achieves a tenfold improvement in sensing performance over its SPDA counterpart, owing to the full exploitation of spatial DoFs.
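As a numerical illustration of how a position CRB follows from a Fisher information matrix, the sketch below approximates a continuous linear aperture by a fine discretization and evaluates the CRB for a single near-field source. The carrier frequency, aperture size, SNR, and the discretized (rather than continuous-operator) treatment are assumptions, not the paper's derivation.

```python
import numpy as np

# Toy CRB for near-field 2-D source localization with a densely discretized linear aperture.
c = 3e8
fc = 28e9                      # carrier frequency (assumption)
lam = c / fc
aperture = 0.5                 # aperture length in metres (assumption)
N = 2000                       # discretization points along the aperture
y_ap = np.linspace(-aperture / 2, aperture / 2, N)

def steering(px, py):
    """Near-field (spherical-wave) array response sampled along the aperture."""
    d = np.sqrt(px**2 + (py - y_ap) ** 2)
    return np.exp(-1j * 2 * np.pi * d / lam) / d   # phase plus free-space amplitude decay

def crb_position(px, py, snr_linear, eps=1e-6):
    """CRB on (px, py) via a numerically evaluated Fisher information matrix."""
    a0 = steering(px, py)
    da_dx = (steering(px + eps, py) - a0) / eps    # finite-difference derivatives
    da_dy = (steering(px, py + eps) - a0) / eps
    J = np.empty((2, 2))
    for i, di in enumerate((da_dx, da_dy)):
        for j, dj in enumerate((da_dx, da_dy)):
            J[i, j] = 2 * snr_linear * np.real(np.vdot(di, dj))
    return np.linalg.inv(J)                        # CRB matrix; diagonal = variance lower bounds

crb = crb_position(px=5.0, py=1.0, snr_linear=10.0)
print("localization RMSE lower bound (m):", np.sqrt(np.trace(crb)))
```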
Abstract: A DeepCAPA (Deep Learning for Continuous Aperture Array (CAPA)) framework is proposed to learn beamforming in CAPA systems. The beamforming optimization problem is first formulated, and it is mathematically proved that the optimal beamforming lies in the subspace spanned by the users' conjugate channel responses. Two challenges arise when directly applying deep neural networks (DNNs) to the formulated problem: i) both the input and output spaces are infinite-dimensional, which is not compatible with DNNs; finite-dimensional representations of the inputs and outputs are derived to address this challenge. ii) A closed-form loss function is unavailable for training the DNN; to tackle this challenge, two additional DNNs are trained to approximate the operations without closed-form expressions, thereby expediting gradient back-propagation. To improve learning performance and reduce training complexity, the permutation-equivariance properties of the mappings to be learned are mathematically proved. As a further advance, the DNNs are designed as graph neural networks to leverage these properties. Numerical results demonstrate that: i) the proposed DeepCAPA framework achieves higher spectral efficiency and lower inference complexity than matched filtering and a state-of-the-art Fourier-based discretization method, and ii) DeepCAPA approaches the performance upper bound obtained by optimizing beamforming in a spatially discrete array-based system as the number of antennas in a fixed-sized area tends toward infinity.
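A minimal sketch of the permutation-equivariance idea: a toy graph-neural-network layer whose per-user outputs permute together with the per-user inputs, mapping finite-dimensional channel features to beamforming coefficients. The layer sizes, aggregation rule, and coefficient head are illustrative and are not the DeepCAPA architecture.

```python
import torch
import torch.nn as nn

class EquivariantLayer(nn.Module):
    """Per-user update combining a user's own feature with the mean over the other users."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.f_self = nn.Linear(dim_in, dim_out)   # processes a user's own feature
        self.f_agg = nn.Linear(dim_in, dim_out)    # processes the aggregate of the other users

    def forward(self, x):                          # x: (batch, K users, dim_in)
        K = x.shape[1]
        agg = (x.sum(dim=1, keepdim=True) - x) / max(K - 1, 1)
        return torch.relu(self.f_self(x) + self.f_agg(agg))

class BeamformingGNN(nn.Module):
    """Maps per-user channel features to coefficients of a conjugate-channel basis."""
    def __init__(self, dim_feat, hidden=128):
        super().__init__()
        self.layers = nn.Sequential(
            EquivariantLayer(dim_feat, hidden),
            EquivariantLayer(hidden, hidden),
        )
        self.head = nn.Linear(hidden, dim_feat)

    def forward(self, h_feat):
        return self.head(self.layers(h_feat))

# Permuting the users permutes the outputs in the same way (equivariance check):
net = BeamformingGNN(dim_feat=16)
h = torch.randn(4, 3, 16)
perm = torch.tensor([2, 0, 1])
assert torch.allclose(net(h)[:, perm], net(h[:, perm]), atol=1e-5)
```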
Abstract: This article aims to unlock the potential of a prominent class of generative artificial intelligence (GAI) methods, namely the diffusion model (DM), for mobile communications. First, a DM-driven communication architecture is proposed, which introduces two key paradigms, i.e., conditional DM and DM-driven deep reinforcement learning (DRL), for wireless data generation and communication management, respectively. Then, we discuss the key advantages of DM-driven communication paradigms. To elaborate further, we explore DM-driven channel generation mechanisms for channel estimation, extrapolation, and feedback in multiple-input multiple-output (MIMO) systems. We showcase the numerical performance of the conditional DM using accurate DeepMIMO channel datasets, revealing its superiority in generating high-fidelity channels and mitigating unforeseen distribution shifts in sophisticated scenarios. Furthermore, several DM-driven communication management designs are conceived, which are promising for dealing with imperfect channels and task-oriented communications. To inspire future research, we highlight the potential applications and open research challenges of DM-driven communications. Code is available at https://github.com/xiaoxiaxusummer/GAI_COMM/
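To make the conditional-DM idea concrete, the sketch below implements one DDPM-style reverse (denoising) step for a flattened channel vector conditioned on pilot observations. The network, timestep embedding, and conditioning scheme are assumptions and do not reproduce the article's model.

```python
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    """Predicts the noise added to a (flattened) channel, conditioned on pilots."""
    def __init__(self, dim_channel, dim_cond, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_channel + dim_cond + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim_channel),
        )

    def forward(self, h_t, cond, t):
        t_emb = t.float().unsqueeze(-1) / 1000.0          # crude timestep embedding
        return self.net(torch.cat([h_t, cond, t_emb], dim=-1))

@torch.no_grad()
def reverse_step(model, h_t, cond, t, betas):
    """One ancestral sampling step h_t -> h_{t-1} of the reverse diffusion."""
    beta = betas[t]
    alpha = 1.0 - beta
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t]
    eps_hat = model(h_t, cond, torch.full((h_t.shape[0],), t))
    mean = (h_t - beta / torch.sqrt(1.0 - alpha_bar) * eps_hat) / torch.sqrt(alpha)
    noise = torch.randn_like(h_t) if t > 0 else torch.zeros_like(h_t)
    return mean + torch.sqrt(beta) * noise

# Example: generate one 64-dimensional (real-valued) channel vector from 16 pilot features.
T = 100
betas = torch.linspace(1e-4, 0.02, T)
model = ScoreNet(dim_channel=64, dim_cond=16)
h = torch.randn(1, 64)
pilots = torch.randn(1, 16)
for t in reversed(range(T)):
    h = reverse_step(model, h, pilots, t, betas)
```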
Abstract: The continuous aperture array (CAPA) can provide higher degrees of freedom and spatial resolution than the spatially discrete array (SDPA). However, optimizing multi-user current distributions in CAPA systems, while crucial, is challenging: it requires solving non-convex functional optimization problems without closed-form objective functions and constraints. In this paper, we propose a deep learning framework called L-CAPA to learn current distribution policies. In the framework, we find finite-dimensional representations of the channel functions and current distributions, allowing them to be input into and output from a deep neural network (DNN) that learns the policy. To address the issue that the integrals in the loss function lack closed-form expressions, which hinders training the DNN in an unsupervised manner, we design two additional DNNs to learn the integrals. The DNNs are designed as graph neural networks to incorporate the permutation properties of the mappings to be learned, thereby improving learning performance. Simulation results show that L-CAPA can achieve the performance upper bound of optimizing precoding in the SDPA system as the number of antennas approaches infinity, while maintaining low inference complexity.
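The finite-dimensional representation step in L-CAPA can be illustrated as follows: project a continuous near-field channel function over the aperture onto a truncated Fourier basis and keep only the coefficients as DNN inputs. The aperture length, wavelength, and basis choice below are illustrative assumptions.

```python
import numpy as np

L_ap = 0.5                                       # aperture length in metres (assumption)
M = 32                                           # number of retained basis functions
s = np.linspace(-L_ap / 2, L_ap / 2, 8192)       # quadrature grid over the aperture
ds = s[1] - s[0]
modes = np.arange(-M // 2, M // 2)

def basis(m, s):
    """Orthonormal Fourier basis on [-L_ap/2, L_ap/2]."""
    return np.exp(2j * np.pi * m * s / L_ap) / np.sqrt(L_ap)

def project(h_vals):
    """Coefficients c_m = integral of h(s) * conj(phi_m(s)) ds, via a Riemann sum."""
    return np.array([np.sum(h_vals * np.conj(basis(m, s))) * ds for m in modes])

def reconstruct(coeffs):
    return sum(c * basis(m, s) for c, m in zip(coeffs, modes))

# Example: spherical-wave (near-field) channel from a user 3 m in front of the aperture.
lam = 0.01
h_vals = np.exp(-1j * 2 * np.pi * np.sqrt(3.0**2 + s**2) / lam)
c = project(h_vals)
err = np.mean(np.abs(h_vals - reconstruct(c)) ** 2) / np.mean(np.abs(h_vals) ** 2)
print(f"normalized reconstruction error with {M} coefficients: {err:.3e}")
```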
Abstract: A novel accelerated mobile edge generation (MEG) framework is proposed for generating high-resolution images on mobile devices. Exploiting a large-scale latent diffusion model (LDM) distributed across the edge server (ES) and user equipment (UE), cost-efficient artificial intelligence generated content (AIGC) is achieved by transmitting low-dimensional features between the ES and UE. To reduce the overheads of both distributed computation and transmission, a dynamic diffusion and feature merging scheme is conceived. By jointly optimizing the denoising steps and the feature merging ratio, the image generation quality is maximized subject to latency and energy consumption constraints. To address this problem and tailor the LDM sub-models, a low-complexity MEG acceleration protocol is developed. In particular, a backbone meta-architecture is trained via offline distillation, and dynamic diffusion and feature merging are then determined in the online channel environment, which can be viewed as a constrained Markov decision process (MDP). A constrained variational policy optimization (CVPO) based MEG algorithm, namely MEG-CVPO, is further proposed for constraint-guaranteed learning. Numerical results verify that: 1) the proposed framework can generate 1024$\times$1024 high-quality images over noisy channels while reducing latency by over $40\%$ compared to conventional generation schemes; and 2) the developed MEG-CVPO effectively mitigates constraint violations, thus flexibly controlling the trade-off between image distortion and generation cost.
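As a rough illustration of the joint decision over denoising steps and feature-merging ratio, the sketch below runs a brute-force feasibility search under toy latency and energy models instead of the learned MEG-CVPO policy. All quality, latency, and energy models here are illustrative stand-ins, not the paper's formulation.

```python
import numpy as np

def quality(steps, merge_ratio):
    # more denoising steps and less merging -> better images, with diminishing returns (toy model)
    return (1 - np.exp(-steps / 15.0)) * (1.0 - 0.5 * merge_ratio)

def latency(steps, merge_ratio, rate_mbps):
    compute = 0.05 * steps                           # seconds of UE/ES denoising (toy model)
    feature_mb = 4.0 * (1.0 - merge_ratio)           # merged features are smaller to transmit
    return compute + feature_mb * 8.0 / rate_mbps    # plus transmission delay

def energy(steps, merge_ratio):
    return 0.3 * steps + 2.0 * (1.0 - merge_ratio)   # joules (toy model)

def best_config(rate_mbps, t_max=3.0, e_max=12.0):
    """Exhaustive search for the feasible (steps, merge_ratio) with the best quality proxy."""
    best, best_q = None, -1.0
    for steps in range(5, 51):
        for merge_ratio in np.linspace(0.0, 0.9, 10):
            feasible = (latency(steps, merge_ratio, rate_mbps) <= t_max
                        and energy(steps, merge_ratio) <= e_max)
            if feasible and quality(steps, merge_ratio) > best_q:
                best, best_q = (steps, float(merge_ratio)), quality(steps, merge_ratio)
    return best, best_q

print(best_config(rate_mbps=20.0))   # ((steps, merge_ratio), quality) for the current channel
```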
Abstract: Driven by the ever-increasing requirements of ultra-high spectral efficiency, ultra-low latency, and massive connectivity, the forefront of wireless research calls for the design of advanced next-generation multiple access schemes to facilitate the provisioning of these stringent demands. This inspires the embrace of non-orthogonal multiple access (NOMA) in future wireless communication networks. Nevertheless, supporting massive access via NOMA introduces additional security threats, due to the open nature of the air interface, the broadcast characteristic of radio propagation, and the intertwined relationship among paired NOMA users. To address this specific challenge, the superimposed transmission of NOMA can be explored as a new opportunity for security-aware design; for example, the multiuser interference inherent in NOMA can be constructively engineered to benefit communication secrecy and privacy. The purpose of this tutorial is to provide a comprehensive overview of the state-of-the-art physical layer security techniques that guarantee wireless security and privacy for NOMA networks, along with the opportunities, technical challenges, and future research trends.
Abstract: Massive interconnection has sparked the envisioning of next-generation ultra-reliable and low-latency communications (xURLLC), prompting the design of customized next-generation advanced transceivers (NGAT). Rate-splitting multiple access (RSMA) has emerged as a pivotal technology for NGAT design, given its robustness to imperfect channel state information (CSI) and its resilience in quality-of-service (QoS) provisioning. Additionally, xURLLC urgently calls for large-scale access techniques, so massive multiple-input multiple-output (mMIMO) is anticipated to be integrated with RSMA to enhance xURLLC. In this paper, we develop an innovative RSMA-assisted massive-MIMO xURLLC (RSMA-mMIMO-xURLLC) network architecture tailored to accommodate xURLLC's critical QoS constraints in the finite blocklength (FBL) regime. Leveraging uplink pilot training under imperfect CSI at the transmitter, we estimate channel gains and customize linear precoders for efficient downlink short-packet data transmission. Subsequently, we formulate a joint rate-splitting, beamforming, and transmit antenna selection optimization problem to maximize the total effective transmission rate (ETR). To address this multi-variable coupled non-convex problem, we decompose it into three corresponding subproblems and propose a low-complexity joint iterative algorithm for efficient optimization. Extensive simulations substantiate that, compared with non-orthogonal multiple access (NOMA) and space division multiple access (SDMA), the developed architecture improves the total ETR by 15.3% and 41.91%, respectively, while accommodating larger-scale access.
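The effective transmission rate in the FBL regime follows the normal approximation R ≈ C − sqrt(V/n)·Q^{-1}(ε). The sketch below evaluates it for a toy two-user RSMA downlink with given common- and private-stream SINRs, leaving out the paper's precoding and antenna-selection optimization; the SINR values, blocklength, and rate-splitting fractions are assumptions.

```python
import numpy as np
from scipy.stats import norm

def fbl_rate(sinr, blocklength, error_prob):
    """Finite-blocklength rate R ~= C - sqrt(V/n) * Q^{-1}(eps), in bits per channel use."""
    C = np.log2(1.0 + sinr)
    V = (1.0 - (1.0 + sinr) ** -2) * np.log2(np.e) ** 2   # channel dispersion
    return max(C - np.sqrt(V / blocklength) * norm.isf(error_prob), 0.0)

def rsma_etr(sinr_common, sinr_private, common_split, n=256, eps=1e-5):
    """Total effective transmission rate of a toy 2-user RSMA downlink.

    sinr_common: per-user SINRs when decoding the common stream (private streams as noise).
    sinr_private: per-user SINRs for the private streams after removing the common stream.
    common_split: fraction of the common rate allotted to each user (sums to 1).
    """
    r_common = min(fbl_rate(g, n, eps) for g in sinr_common)   # must be decodable by both users
    r_private = [fbl_rate(g, n, eps) for g in sinr_private]
    per_user = [c * r_common + rp for c, rp in zip(common_split, r_private)]
    return r_common + sum(r_private), per_user

total, per_user = rsma_etr(sinr_common=(6.0, 4.0), sinr_private=(3.0, 2.0), common_split=(0.6, 0.4))
print(f"total ETR = {total:.2f} bits/use, per-user rates = {[round(r, 2) for r in per_user]}")
```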
Abstract: Multiuser beamforming is considered for partially-connected millimeter-wave massive MIMO systems. Based on perfect channel state information (CSI), a low-complexity hybrid beamforming scheme that decouples the analog beamformer and the digital beamformer is proposed to maximize the sum-rate. The analog beamformer design is modeled as a phase alignment problem to harvest the array gain. Given the analog beamformer, the digital beamformer is designed by solving a weighted minimum mean squared error (WMMSE) problem. Then, based on imperfect CSI, an analog-only beamformer design scheme is proposed, which aims to maximize the desired signal power toward the intended user while minimizing the power leaked to the other users so as to mitigate multiuser interference. The original problem is transformed into a series of independent beam nulling subproblems, and an efficient iterative algorithm based on the majorization-minimization framework is proposed to solve them. Simulation results show that, under perfect CSI, the proposed scheme achieves almost the same sum-rate performance as existing schemes but with lower computational complexity; and under imperfect CSI, the proposed analog-only beamforming scheme can effectively mitigate multiuser interference.
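A compact sketch of the decoupled hybrid design under perfect CSI: a block-diagonal, unit-modulus analog beamformer that phase-aligns each subarray to its user, followed by a digital beamformer on the effective channel. A regularized MMSE digital stage is used here for brevity in place of the paper's WMMSE solution, and the system dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, Nrf, K = 64, 4, 4                    # antennas, RF chains (one per user here), users
sub = Nt // Nrf                          # antennas per subarray (partially connected)
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)

# Analog beamformer: block-diagonal, unit-modulus, phase-aligned to user k's channel (array gain)
F_rf = np.zeros((Nt, Nrf), dtype=complex)
for k in range(Nrf):
    idx = slice(k * sub, (k + 1) * sub)
    F_rf[idx, k] = np.exp(1j * np.angle(H[k, idx].conj())) / np.sqrt(sub)

# Digital beamformer: regularized MMSE on the effective channel H_eff = H F_rf
snr = 10.0
H_eff = H @ F_rf                                       # K x Nrf effective channel
F_bb = np.linalg.solve(H_eff.conj().T @ H_eff + (K / snr) * np.eye(Nrf), H_eff.conj().T)
F_bb /= np.linalg.norm(F_rf @ F_bb, 'fro')             # total transmit power normalization

# Per-user SINR and sum-rate
G = H @ F_rf @ F_bb
sig = np.abs(np.diag(G)) ** 2
intf = np.sum(np.abs(G) ** 2, axis=1) - sig
print("sum-rate (bits/s/Hz):", np.sum(np.log2(1 + snr * sig / (1 + snr * intf))))
```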