Abstract:A novel pinching antenna system (PASS)-enabled downlink multi-user multiple-input single-output (MISO) framework is proposed. PASS consists of multiple waveguides spanning thousands of wavelengths, which are equipped with numerous low-cost dielectric particles, named pinching antennas (PAs), to radiate signals into free space. The positions of the PAs can be reconfigured to change both the large-scale path losses and the phases of the signals, thus enabling a novel pinching beamforming design. A sum-rate maximization problem is formulated, which jointly optimizes the transmit and pinching beamforming to adaptively achieve constructive signal enhancement and destructive interference mitigation. To solve this highly coupled and nonconvex problem, both optimization-based and learning-based methods are proposed. 1) For the optimization-based method, a majorization-minimization and penalty dual decomposition (MM-PDD) algorithm is developed, which handles the nonconvex complex exponential component using a Lipschitz surrogate function and then invokes PDD for problem decoupling. 2) For the learning-based method, a novel Karush-Kuhn-Tucker (KKT)-guided dual learning (KDL) approach is proposed, which enables KKT solutions to be reconstructed in a data-driven manner by learning the dual variables. Following this idea, a KDL-Transformer algorithm is developed, which captures both inter-PA/inter-user dependencies and channel-state-information (CSI)-beamforming dependencies via attention mechanisms. Simulation results demonstrate that: i) the proposed PASS framework significantly outperforms conventional massive multiple-input multiple-output (MIMO) systems even with a few PAs; ii) the proposed KDL-Transformer improves system performance by over 30% compared with the MM-PDD algorithm, while achieving millisecond-level response on modern GPUs.
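To make the PASS signal model and the sum-rate objective concrete, the following is a minimal NumPy sketch, assuming free-space spherical-wave propagation from each PA plus an in-waveguide phase shift; the carrier frequency, guided-wavelength factor, geometry, and the random beamformer are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

# Minimal sketch of a PASS channel and the multi-user MISO sum rate (assumptions:
# free-space spherical-wave propagation from each pinching antenna (PA), an
# in-waveguide phase shift, and illustrative geometry/parameter values).

rng = np.random.default_rng(0)
c = 3e8
fc = 28e9                       # carrier frequency (assumed)
lam = c / fc                    # free-space wavelength
lam_g = lam / 1.4               # guided wavelength (assumed effective index 1.4)
eta = (lam / (4 * np.pi)) ** 2  # free-space path-loss factor

N, K = 8, 4                     # number of PAs and of single-antenna users
pa_x = np.sort(rng.uniform(0, 10, N))   # PA positions along the waveguide (m)
pa_pos = np.stack([pa_x, np.zeros(N), np.full(N, 3.0)], axis=1)
user_pos = np.stack([rng.uniform(0, 10, K), rng.uniform(0, 5, K), np.zeros(K)], axis=1)

def pass_channel(pa_pos, user_pos):
    """Channel coefficient from every PA to every user (rows: users, columns: PAs)."""
    d = np.linalg.norm(user_pos[:, None, :] - pa_pos[None, :, :], axis=2)   # (K, N)
    free_space = np.sqrt(eta) / d * np.exp(-1j * 2 * np.pi * d / lam)
    in_guide = np.exp(-1j * 2 * np.pi * pa_pos[:, 0] / lam_g)               # (N,)
    return free_space * in_guide[None, :]

def sum_rate(H, W, sigma2=1e-12):
    """Sum rate for a transmit beamforming matrix W (N x K) and noise power sigma2."""
    gains = np.abs(H.conj() @ W) ** 2          # (K, K) entries |h_k^H w_j|^2
    signal = np.diag(gains)
    interference = gains.sum(axis=1) - signal
    return np.log2(1 + signal / (interference + sigma2)).sum()

H = pass_channel(pa_pos, user_pos)
W = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
W /= np.linalg.norm(W)                         # normalize to unit total transmit power
print(f"sum rate of a random beamformer: {sum_rate(H, W):.3f} bit/s/Hz")
```

In the actual framework, both the PA positions `pa_x` and the beamformer `W` are jointly optimized (via MM-PDD or the KDL-Transformer) rather than fixed as above.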
Abstract:The potential of applying diffusion models (DMs) to multiple-antenna communications is discussed. A unified framework for applying DMs to multiple-antenna tasks is first proposed. Then, the tasks are innovatively divided into two categories, i.e., decision-making tasks and generation tasks, depending on whether an optimization of system parameters is involved. For each category, it is discussed 1) how the framework can be used for the task and 2) why DMs are superior to traditional artificial intelligence (TAI) and conventional optimization methods. It is highlighted that DMs are well suited for scenarios with strong interference and noise, excelling at modeling complex data distributions and exploring better actions. A case study of learning beamforming with a DM is then provided to demonstrate the superiority of DMs with simulation results. Finally, the applications of DMs to emerging multiple-antenna technologies and promising research directions are discussed.
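As a rough illustration of how a DM can serve a decision-making task such as beamforming, the sketch below runs a DDPM-style reverse (denoising) process conditioned on the CSI; the noise schedule, dimensions, and the stand-in predictor `eps_net` are assumptions for illustration and do not reproduce the case study's trained model.

```python
import numpy as np

# Sketch of DDPM-style reverse sampling of a (real-valued, flattened) beamforming
# vector conditioned on the CSI. `eps_net` is an untrained stand-in; in the actual
# case study it would be a trained conditional noise-prediction network.

rng = np.random.default_rng(0)
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def eps_net(w_t, csi, t):
    """Placeholder noise predictor eps_theta(w_t | csi, t); a trained model in practice."""
    return 0.1 * w_t + 0.0 * csi.mean()   # dummy linear map, for illustration only

def ddpm_sample(csi, dim):
    """Reverse diffusion: start from Gaussian noise, iteratively denoise."""
    w = rng.standard_normal(dim)
    for t in reversed(range(T)):
        eps = eps_net(w, csi, t)
        mean = (w - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(dim) if t > 0 else 0.0
        w = mean + np.sqrt(betas[t]) * noise
    return w

csi = rng.standard_normal((4, 8))          # illustrative channel matrix
w = ddpm_sample(csi, dim=2 * 4 * 8)        # real+imag parts of an 8x4 beamformer
print("sampled beamforming vector shape:", w.shape)
```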
Abstract:This article proposes a novel design for pinching antenna systems (PASS) and advocates simple yet efficient wireless communications over the `last meter'. First, the potential benefits of PASS are discussed by reviewing an existing prototype. Then, the fundamentals of PASS are introduced, including physical principles, signal models, and communication designs. In contrast to existing multi-antenna systems, PASS brings a novel concept termed \emph{pinching beamforming}, which is achieved by dynamically adjusting the positions of the pinching antennas (PAs). Based on this concept, two practical transmission architectures are proposed for employing PASS, namely the non-multiplexing and multiplexing architectures. More particularly, 1) the non-multiplexing architecture features simple baseband signal processing and relies only on pinching beamforming, while 2) the multiplexing architecture provides enhanced signal-manipulation capabilities through joint baseband and pinching beamforming and is further divided into sub-connected, fully-connected, and phase-shifter-based fully-connected schemes. Furthermore, several emerging scenarios are put forward for integrating PASS into future wireless networks. As a further advance, a few numerical case studies are presented to reveal the significant performance gain of PASS over conventional multi-antenna systems. Finally, several research opportunities and open problems of PASS are highlighted.
Abstract:This article aims to unlock the potential of a prominent class of generative artificial intelligence (GAI) methods, namely diffusion models (DMs), for mobile communications. First, a DM-driven communication architecture is proposed, which introduces two key paradigms, i.e., conditional DMs and DM-driven deep reinforcement learning (DRL), for wireless data generation and communication management, respectively. Then, we discuss the key advantages of DM-driven communication paradigms. To elaborate further, we explore DM-driven channel generation mechanisms for channel estimation, extrapolation, and feedback in multiple-input multiple-output (MIMO) systems. We showcase the numerical performance of conditional DMs using the accurate DeepMIMO channel datasets, revealing their superiority in generating high-fidelity channels and mitigating unforeseen distribution shifts in sophisticated scenarios. Furthermore, several DM-driven communication management designs are conceived, which are promising for dealing with imperfect channels and task-oriented communications. To inspire future research developments, we highlight the potential applications and open research challenges of DM-driven communications. Code is available at https://github.com/xiaoxiaxusummer/GAI_COMM/
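A minimal sketch of the conditional-DM training step behind such channel generation is given below, assuming standard epsilon-prediction (denoising score matching) with a noisy channel observation as the condition; the stand-in predictor, noise schedule, and array shapes are illustrative assumptions rather than the released code.

```python
import numpy as np

# Sketch of one conditional-DM training step for channel generation: a channel
# sample is noised at a random diffusion step and a noise predictor is trained to
# recover the noise, conditioned on an observation (e.g., noisy pilots).
# The predictor below is an untrained stand-in; the dataset and shapes are illustrative.

rng = np.random.default_rng(1)
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

def eps_net(h_t, cond, t):
    """Placeholder for the conditional noise predictor eps_theta(h_t | cond, t)."""
    return 0.05 * h_t + 0.01 * cond

def diffusion_training_loss(h0, cond):
    """Single-sample denoising score-matching (epsilon-prediction) loss."""
    t = rng.integers(T)
    eps = rng.standard_normal(h0.shape)
    h_t = np.sqrt(alpha_bars[t]) * h0 + np.sqrt(1 - alpha_bars[t]) * eps
    return np.mean((eps_net(h_t, cond, t) - eps) ** 2)

h0 = rng.standard_normal((64, 16))                 # ground-truth channel realization (real view)
cond = h0 + 0.3 * rng.standard_normal(h0.shape)    # noisy observation used as the condition
print(f"illustrative loss: {diffusion_training_loss(h0, cond):.4f}")
```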
Abstract:A novel accelerated mobile edge generation (MEG) framework is proposed for generating high-resolution images on mobile devices. Exploiting a large-scale latent diffusion model (LDM) distributed across the edge server (ES) and the user equipment (UE), cost-efficient artificial intelligence generated content (AIGC) is achieved by transmitting low-dimensional features between the ES and the UE. To reduce the overheads of both distributed computation and transmission, a dynamic diffusion and feature merging scheme is conceived. By jointly optimizing the denoising steps and the feature merging ratio, the image generation quality is maximized subject to latency and energy consumption constraints. To address this problem and tailor the LDM sub-models, a low-complexity MEG acceleration protocol is developed. Particularly, a backbone meta-architecture is trained via offline distillation. Then, dynamic diffusion and feature merging are determined in the online channel environment, which can be viewed as a constrained Markov decision process (MDP). A constrained variational policy optimization (CVPO) based MEG algorithm, namely MEG-CVPO, is further proposed for constraint-guaranteed learning. Numerical results verify that: 1) the proposed framework can generate 1024$\times$1024 high-quality images over noisy channels while reducing latency by over $40\%$ compared to conventional generation schemes; 2) the developed MEG-CVPO effectively mitigates constraint violations, thus flexibly controlling the trade-off between image distortion and generation costs.
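To illustrate the constrained-MDP view described above, the sketch below lets an action choose the number of denoising steps and the feature-merging ratio, scores a quality proxy as the reward, and tracks latency and energy as constraint costs; the cost models, budgets, and the naive dual-ascent update are illustrative stand-ins and not the MEG-CVPO algorithm itself.

```python
import numpy as np

# Hedged sketch of the constrained MDP behind dynamic diffusion and feature merging:
# an action picks (denoising steps, merging ratio); the reward proxies generation
# quality while latency/energy are tracked against budgets. All models, budgets,
# and the simple dual update are illustrative assumptions.

rng = np.random.default_rng(0)
LAT_BUDGET, ENERGY_BUDGET = 0.8, 0.35        # per-generation budgets (assumed units)

def step(n_steps, merge_ratio, channel_gain):
    quality = 1.0 - np.exp(-0.1 * n_steps) - 0.3 * merge_ratio         # quality proxy
    latency = 0.02 * n_steps + 0.5 * (1 - merge_ratio) / channel_gain  # compute + transmission
    energy = 0.01 * n_steps + 0.2 * (1 - merge_ratio)
    return quality, latency, energy

def evaluate(policy, lam_lat, lam_en, episodes=200):
    """Average Lagrangian value and constraint costs of a policy under dual variables."""
    vals, lats, ens = [], [], []
    for _ in range(episodes):
        g = rng.uniform(0.5, 2.0)                        # random channel gain
        q, l, e = step(*policy(g), g)
        vals.append(q - lam_lat * max(l - LAT_BUDGET, 0) - lam_en * max(e - ENERGY_BUDGET, 0))
        lats.append(l); ens.append(e)
    return np.mean(vals), np.mean(lats), np.mean(ens)

policy = lambda g: (int(20 + 10 * g), float(np.clip(0.8 - 0.2 * g, 0, 1)))
lam_lat = lam_en = 0.0
for _ in range(50):                                      # naive dual ascent, illustration only
    _, avg_lat, avg_en = evaluate(policy, lam_lat, lam_en)
    lam_lat = max(0.0, lam_lat + 0.1 * (avg_lat - LAT_BUDGET))
    lam_en = max(0.0, lam_en + 0.1 * (avg_en - ENERGY_BUDGET))
print(f"dual variables after ascent: lambda_lat={lam_lat:.3f}, lambda_en={lam_en:.3f}")
```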
Abstract:A novel paradigm of mobile edge generation (MEG)-enabled digital twin (DT) is proposed, which enables distributed on-device generation at mobile edge networks for real-time DT applications. First, an MEG-DT architecture is put forward to decentralize generative artificial intelligence (GAI) models onto edge servers (ESs) and user equipments (UEs), which offers the advantages of low latency, privacy preservation, and individual-level customization. Then, various single-user and multi-user generation mechanisms are conceived for MEG-DT, which strike trade-offs between generation latency, hardware costs, and device coordination. Furthermore, to perform efficient distributed generation, two operating protocols are explored for transmitting interpretable and latent features between ESs and UEs, namely sketch-based generation and seed-based generation, respectively. Based on the proposed protocols, the convergence between MEG and DT is highlighted. Considering the seed-based image generation scenario, numerical case studies are provided to reveal the superiority of MEG-DT over centralized generation. Finally, promising applications and research opportunities are identified.
Abstract:This article focuses on the application of artificial intelligence (AI) to non-orthogonal multiple access (NOMA), which aims to achieve automated, adaptive, and high-efficiency multi-user communications towards next-generation multiple access (NGMA). First, the limitations of current scenario-specific multi-antenna NOMA schemes are discussed, and the importance of AI for NGMA is highlighted. Then, to achieve the vision of NGMA, a novel cluster-free NOMA framework is proposed for providing scenario-adaptive NOMA communications, and several promising machine learning solutions are identified. To elaborate further, novel centralized and distributed machine learning paradigms are conceived for efficiently employing the proposed cluster-free NOMA framework in single-cell and multi-cell networks, and numerical results are provided to demonstrate their effectiveness. Furthermore, the interplay between the proposed cluster-free NOMA and emerging wireless techniques is presented. Finally, several open research issues of AI-enabled NGMA are discussed.
Abstract:A multi-cell cluster-free NOMA framework is proposed, in which intra-cell and inter-cell interference are jointly mitigated via flexible cluster-free successive interference cancellation (SIC) and coordinated beamforming design, respectively. The joint design problem is formulated to maximize the system sum rate while satisfying the SIC decoding requirements and users' data rate constraints. To address this highly complex and coupled non-convex mixed-integer nonlinear programming (MINLP) problem, a novel distributed auto-learning graph neural network (AutoGNN) architecture is proposed to alleviate the overwhelming information exchange burden among base stations (BSs). The proposed AutoGNN trains the GNN model weights whilst automatically learning the optimal GNN architecture, namely the GNN depth and message embedding sizes, to achieve communication-efficient distributed scheduling. Based on the proposed architecture, a bi-level AutoGNN learning algorithm is further developed to efficiently approximate the hypergradient in model training. It is theoretically proved that the proposed bi-level AutoGNN learning algorithm converges to a stationary point. Numerical results reveal that: 1) the proposed cluster-free NOMA framework outperforms the conventional cluster-based NOMA framework in the multi-cell scenario; and 2) the proposed AutoGNN architecture significantly reduces the computation and communication overheads compared to conventional convex optimization-based methods and a conventional GNN with a fixed architecture.
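The bi-level structure behind AutoGNN can be sketched on a toy problem as follows: inner variables (standing in for GNN weights) are updated on a training loss for a fixed architecture parameter, and the outer architecture update uses a one-step-unrolled hypergradient approximated numerically; the quadratic objectives and step sizes are assumptions and do not reflect the paper's actual hypergradient derivation.

```python
import numpy as np

# Toy bi-level sketch: inner weights w are trained for a fixed architecture
# parameter a (e.g., a relaxed embedding-size/depth choice), and a is updated
# with a one-step-unrolled hypergradient. Objectives and step sizes are stand-ins.

def inner_loss(w, a):   # training loss for weights w given architecture a
    return 0.5 * np.sum((w - a) ** 2) + 0.1 * np.sum(w ** 2)

def outer_loss(w, a):   # validation objective driving the architecture search
    return 0.5 * np.sum((w - 1.0) ** 2) + 0.01 * a ** 2

def grad(f, x, *args, eps=1e-5):
    """Numerical gradient of f with respect to x (finite differences, for illustration)."""
    g = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        d = np.zeros_like(x, dtype=float); d.flat[i] = eps
        g.flat[i] = (f(x + d, *args) - f(x - d, *args)) / (2 * eps)
    return g

w, a, lr_w, lr_a = np.zeros(4), np.array(0.5), 0.1, 0.05
for _ in range(100):
    # inner step: one gradient update of the weights
    w = w - lr_w * grad(inner_loss, w, a)
    # outer step: hypergradient of the validation loss w.r.t. a through the
    # one-step-unrolled mapping w(a) = w - lr_w * d(inner_loss)/dw
    def unrolled_outer(a_val, w_cur=w):
        w_new = w_cur - lr_w * grad(inner_loss, w_cur, a_val)
        return outer_loss(w_new, a_val)
    ga = (unrolled_outer(a + 1e-4) - unrolled_outer(a - 1e-4)) / 2e-4
    a = a - lr_a * ga
print(f"learned architecture parameter a = {float(a):.3f}")
```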
Abstract:A generalized downlink multi-antenna non-orthogonal multiple access (NOMA) transmission framework is proposed based on the novel concept of cluster-free successive interference cancellation (SIC). In contrast to conventional NOMA approaches, where SIC is successively carried out within the same cluster, the key idea is that SIC can be flexibly implemented between any arbitrary users to achieve efficient interference elimination. Based on the proposed framework, a sum rate maximization problem is formulated for jointly optimizing the transmit beamforming and the SIC operations between users, subject to the SIC decoding conditions and users' minimum data rate requirements. To tackle this highly coupled mixed-integer nonlinear programming problem, an alternating direction method of multipliers-successive convex approximation (ADMM-SCA) algorithm is developed. The original problem is first reformulated into a tractable biconvex augmented Lagrangian (AL) problem by handling the non-convex terms via SCA. Then, this AL problem is decomposed into two subproblems that are iteratively solved by the ADMM to obtain a stationary solution. Moreover, to reduce the computational complexity and alleviate the parameter-initialization sensitivity of ADMM-SCA, a Matching-SCA algorithm is proposed. The intractable binary SIC operations are solved through an extended many-to-many matching, which is jointly combined with an SCA process to optimize the transmit beamforming. The proposed Matching-SCA converges to an enhanced exchange-stable matching that guarantees local optimality. Numerical results demonstrate that: i) the proposed Matching-SCA algorithm achieves comparable performance and faster convergence compared to ADMM-SCA; ii) the proposed generalized framework realizes scenario-adaptive communications and outperforms traditional multi-antenna NOMA approaches in various communication regimes.
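The ADMM splitting pattern underlying ADMM-SCA can be illustrated with a toy problem: one block stands in for the SCA-convexified beamforming subproblem and the other for the relaxed binary SIC variables projected onto [0, 1]; the objective, penalty parameter, and dimensions are assumptions for illustration only.

```python
import numpy as np

# Toy illustration of the ADMM splitting in ADMM-SCA: block 1 stands in for the
# SCA-convexified beamforming subproblem (here a simple quadratic), block 2 for
# the relaxed binary SIC variables projected onto [0, 1].

rho = 1.0                           # augmented-Lagrangian penalty parameter
b = np.array([0.2, 0.9, -0.3, 1.4]) # stand-in data defining the quadratic block
x = np.zeros_like(b)                # "continuous" block (beamforming stand-in)
z = np.zeros_like(b)                # relaxed binary block (SIC-operation stand-in)
u = np.zeros_like(b)                # scaled dual variable

for _ in range(100):
    # block 1: closed-form minimizer of 0.5*||x - b||^2 + (rho/2)*||x - z + u||^2
    x = (b + rho * (z - u)) / (1 + rho)
    # block 2: projection of x + u onto the box [0, 1]
    z = np.clip(x + u, 0.0, 1.0)
    # scaled dual update enforcing the consensus constraint x = z
    u = u + x - z

print("relaxed SIC-style variables:", np.round(z, 3))
print("consensus residual:", np.round(np.linalg.norm(x - z), 4))
```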
Abstract:Millimeter wave (mmWave) communication is a promising New Radio in Unlicensed (NR-U) technology for meeting the ever-increasing data rate and connectivity requirements of future wireless networks. However, the development of NR-U networks must consider coexistence with incumbent Wireless Gigabit (WiGig) networks. In this paper, we introduce a novel multiple-input multiple-output non-orthogonal multiple access (MIMO-NOMA) based mmWave NR-U and WiGig coexistence network for uplink transmission. Our aim for the proposed coexistence network is to maximize the spectral efficiency while ensuring the strict NR-U delay requirement and the WiGig transmission performance in real-time environments. A joint user grouping, hybrid beam coordination, and power control strategy is proposed, which is formulated as a Lyapunov optimization based mixed-integer nonlinear programming (MINLP) problem with unit-modulus and nonconvex coupling constraints. Hence, we introduce a penalty dual decomposition (PDD) framework, which first transforms the formulated MINLP into a tractable augmented Lagrangian (AL) problem. Thereafter, we integrate both the convex-concave procedure (CCCP) and inexact block coordinate update (BCU) methods to approximately decompose the AL problem into multiple nested convex subproblems, which can be iteratively solved under the PDD framework. Numerical results illustrate the performance improvement achieved by the proposed strategy, as well as its effectiveness in guaranteeing the NR-U traffic delay and the WiGig network performance.
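A minimal sketch of the PDD double loop referenced above is given below on a toy equality-constrained problem: the inner loop (approximately) minimizes the augmented Lagrangian, and the outer loop performs a dual update when the constraint violation is small enough or otherwise tightens the penalty; the toy objective and thresholds are illustrative assumptions, not the paper's CCCP/BCU inner solver.

```python
import numpy as np

# Toy sketch of the PDD double loop: the inner step minimizes the augmented
# Lagrangian (here in closed form; the paper uses CCCP/inexact BCU instead),
# and the outer step either updates the dual variable or tightens the penalty,
# depending on the constraint violation.
# Toy problem: minimize 0.5*||x - c||^2  subject to  h(x) = sum(x) - 1 = 0.

c = np.array([0.7, 0.4, 0.9])
x = np.zeros_like(c)
lam, rho, tol = 0.0, 1.0, 1e-3      # dual variable, penalty parameter, violation threshold

def al_min(lam, rho):
    """Closed-form minimizer of 0.5*||x - c||^2 + lam*h(x) + h(x)^2 / (2*rho)."""
    # stationarity gives x = c - (lam + s/rho), with s = h(x); solve for the scalar s:
    s = (np.sum(c) - 1.0 - c.size * lam) / (1.0 + c.size / rho)
    return c - (lam + s / rho)

for _ in range(50):
    x = al_min(lam, rho)                 # inner loop (exact here)
    viol = abs(np.sum(x) - 1.0)
    if viol <= tol:
        lam += (np.sum(x) - 1.0) / rho   # feasible enough: dual ascent step
    else:
        rho *= 0.8                       # infeasible: tighten the penalty
    if viol < 1e-8:
        break

print(f"x = {np.round(x, 4)}, constraint violation = {abs(np.sum(x) - 1.0):.2e}")
```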