Abstract: The convergence of digital twin technology and the emerging 6G network presents both challenges and numerous research opportunities. This article explores the potential synergies between digital twins and 6G, highlighting the key challenges and proposing fundamental principles for their integration. We discuss the unique requirements and capabilities of digital twins in the context of 6G networks, such as sustainable deployment, real-time synchronization, seamless migration, predictive analytics, and closed-loop control. Furthermore, we identify research opportunities for leveraging digital twins and artificial intelligence to enhance various aspects of 6G, including network optimization, resource allocation, security, and intelligent service provisioning. This article aims to stimulate further research and innovation at the intersection of digital twins and 6G, paving the way for transformative applications and services in the future.
Abstract: Digital twins (DTs) have emerged as a promising enabler for representing the real-time states of physical worlds and realizing self-sustaining systems. In practice, DTs of physical devices, such as mobile users (MUs), are commonly deployed in multi-access edge computing (MEC) networks to reduce latency. To ensure the accuracy and fidelity of DTs, MUs must regularly synchronize their status with their DTs. However, MU mobility introduces significant challenges to DT synchronization. Firstly, MU mobility triggers DT migration, which can cause synchronization failures. Secondly, MUs require frequent synchronization with their DTs to ensure DT fidelity, whereas DT migration among MEC servers, caused by MU mobility, may occur only infrequently. Accordingly, we propose a two-timescale DT synchronization and migration framework with reliability guarantees, formulated as a non-convex stochastic optimization problem that minimizes the long-term average energy consumption of MUs. We apply Lyapunov optimization to convert the long-term reliability constraints into tractable per-slot terms and reformulate the problem as a partially observable Markov decision process (POMDP). Furthermore, we develop a heterogeneous-agent proximal policy optimization with Beta distribution (Beta-HAPPO) method to solve it. Numerical results show that the proposed Beta-HAPPO method achieves significant energy savings compared with other benchmarks.
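To illustrate the Beta-distribution policy at the core of Beta-HAPPO, the sketch below shows a per-agent actor head whose outputs parameterize a Beta distribution, so that sampled actions are naturally confined to a bounded interval (e.g., normalized transmit power), together with the standard PPO clipped surrogate loss. This is a minimal sketch assuming PyTorch; the network width, the shift that keeps the concentrations above one, and the clipping constant are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn
from torch.distributions import Beta

class BetaActor(nn.Module):
    """Actor head producing a Beta policy for bounded continuous actions.

    Sampling from Beta(alpha, beta) yields actions in (0, 1), which can be
    rescaled to a physical range such as [0, P_max]. Sizes are illustrative.
    """

    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.alpha_head = nn.Sequential(nn.Linear(hidden, act_dim), nn.Softplus())
        self.beta_head = nn.Sequential(nn.Linear(hidden, act_dim), nn.Softplus())

    def forward(self, obs):
        h = self.body(obs)
        # Adding 1 keeps both concentrations above one, so the density is
        # unimodal and mass does not pile up at the interval boundaries.
        return Beta(self.alpha_head(h) + 1.0, self.beta_head(h) + 1.0)

def ppo_clip_loss(actor, obs, actions, old_log_probs, advantages, clip=0.2):
    """PPO clipped surrogate objective for one heterogeneous agent."""
    dist = actor(obs)
    log_probs = dist.log_prob(actions).sum(-1)
    ratio = torch.exp(log_probs - old_log_probs)
    clipped = torch.clamp(ratio, 1.0 - clip, 1.0 + clip)
    return -torch.min(ratio * advantages, clipped * advantages).mean()
```

In HAPPO-style training, each agent would run this update sequentially on its own observations while the joint advantage is corrected by the ratios of previously updated agents; the Beta parameterization mainly removes the action-clipping bias that a Gaussian policy incurs on bounded controls such as transmit power.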
Abstract: Semantic communications have been envisioned as a potential technique that goes beyond the Shannon paradigm. Unlike modern communications, which provide bit-level security, the eavesdropping of semantic communications poses a significant risk of exposing the intentions of legitimate users. To address this challenge, a novel deep neural network (DNN) enabled secure semantic communication (DeepSSC) system is developed by capitalizing on physical layer security. To balance the tradeoff between security and reliability, a two-phase training method for the DNNs is devised. Specifically, Phase I aims at semantic recovery at the legitimate user, while Phase II minimizes the leakage of semantic information to eavesdroppers. The loss functions of DeepSSC in Phases I and II are designed according to the Shannon capacity and the secure channel capacity, respectively, both of which are approximated via variational inference. Moreover, we define the secure bilingual evaluation understudy (S-BLEU) metric to assess the security of semantic communications. Finally, simulation results demonstrate that DeepSSC achieves a significant boost to semantic security, particularly in the high signal-to-noise ratio (SNR) regime, at the price of a minor degradation in reliability.
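The paper defines the exact S-BLEU metric; one plausible reading, sketched below, scores the eavesdropper's decoded sentence against the transmitted one with standard BLEU, so that a lower score indicates less semantic leakage. This is only an illustrative interpretation using NLTK, not the paper's formal definition.

```python
# Hypothetical S-BLEU-style leakage score: BLEU of the eavesdropper's
# decoded sentence w.r.t. the legitimate message (lower = more secure).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def s_bleu(transmitted: str, eavesdropped: str, max_n: int = 4) -> float:
    reference = [transmitted.split()]
    hypothesis = eavesdropped.split()
    weights = tuple(1.0 / max_n for _ in range(max_n))
    return sentence_bleu(reference, hypothesis, weights=weights,
                         smoothing_function=SmoothingFunction().method1)

# A well-trained DeepSSC should drive this score toward zero at the
# eavesdropper while Phase I keeps it high at the legitimate receiver.
print(s_bleu("the cat sat on the mat", "a dog ran in the park"))
```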
Abstract: A variable-length cross-packet hybrid automatic repeat request (VL-XP-HARQ) scheme is proposed to boost the spectral efficiency (SE) and energy efficiency (EE) of communications. The SE is first derived in terms of the outage probabilities, with which the SE is proved to be upper bounded by the ergodic capacity (EC). Moreover, to facilitate the maximization of the SE, the asymptotic outage probability is obtained at high signal-to-noise ratio (SNR), with which the SE is maximized by properly choosing the number of new information bits while guaranteeing the outage requirement. By applying Dinkelbach's transform, the fractional objective function is converted into a subtractive form, which can be decomposed into multiple sub-problems through alternating optimization. Since the asymptotic outage probability is a convex function, each sub-problem can easily be relaxed to a convex problem by adopting successive convex approximation (SCA). Besides, the EE of VL-XP-HARQ is also investigated. An upper bound on the EE is found and proved to be attainable. Furthermore, to maximize the EE via power allocation while confining the outage within a certain constraint, the methods developed for SE maximization are invoked to solve the analogous fractional problem. Finally, numerical results are presented for verification.
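Dinkelbach's transform replaces a fractional objective max f(x)/g(x), with g(x) > 0, by a sequence of subtractive problems max f(x) - lambda * g(x), updating lambda to the achieved ratio until the optimal subtractive value reaches zero. The sketch below is a generic Python illustration with toy stand-ins for the SE numerator and denominator, not the paper's actual expressions or SCA sub-problems.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def dinkelbach(f, g, bounds, tol=1e-8, max_iter=50):
    """Maximize f(x)/g(x) (g > 0) via Dinkelbach's parametric iterations."""
    lam = 0.0
    for _ in range(max_iter):
        # Inner subtractive problem: maximize f(x) - lam * g(x).
        res = minimize_scalar(lambda x: -(f(x) - lam * g(x)),
                              bounds=bounds, method="bounded")
        x = res.x
        if abs(f(x) - lam * g(x)) < tol:   # F(lam) = 0 at the optimal ratio
            return x, lam
        lam = f(x) / g(x)                  # update toward the optimal ratio
    return x, lam

# Toy example: maximize log(1 + x) / (1 + 0.5 * x) over x in [0, 10].
x_star, ratio = dinkelbach(np.log1p, lambda x: 1.0 + 0.5 * x, (0.0, 10.0))
print(x_star, ratio)
```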
Abstract: In this paper, a power-constrained hybrid automatic repeat request (HARQ) transmission strategy is developed to support ultra-reliable low-latency communications (URLLC). In particular, we aim to minimize the delivery latency of HARQ schemes over time-correlated fading channels while ensuring high reliability and limited power consumption. To ease the optimization, simple asymptotic outage expressions for the HARQ schemes are adopted. Furthermore, given the non-convexity of the latency minimization problem and the intricate coupling between different HARQ rounds, a graph convolutional network (GCN) is invoked to obtain the optimal power solution, owing to its powerful ability to handle graph-structured data. The primal-dual learning method is then leveraged to train the GCN weights. Numerical results are presented for verification, together with latency and reliability comparisons among three HARQ schemes: Type-I HARQ, HARQ with chase combining (HARQ-CC), and HARQ with incremental redundancy (HARQ-IR). The results reveal that HARQ-IR offers the lowest latency while guaranteeing the demanded reliability target under a stringent power constraint, albeit at the price of high coding complexity.
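As a concrete illustration of GCN-based power allocation, the sketch below models the K HARQ rounds as nodes of a chain graph (each round linked to its temporal neighbors) and maps per-round channel features to bounded transmit powers through two GCN layers. The architecture, features, and power bound are hypothetical; in the primal-dual scheme, these weights would be trained jointly with dual variables pricing the outage and power constraints.

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One GCN propagation step: H' = act(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim, act=True):
        super().__init__()
        self.lin, self.act = nn.Linear(in_dim, out_dim), act

    def forward(self, a_hat, x):
        h = self.lin(a_hat @ x)
        return torch.relu(h) if self.act else h

def chain_adjacency(k):
    """Normalized adjacency D^-1/2 (A + I) D^-1/2 of a K-node chain."""
    a = torch.eye(k)
    idx = torch.arange(k - 1)
    a[idx, idx + 1] = a[idx + 1, idx] = 1.0
    d_inv_sqrt = a.sum(1).rsqrt().diag()
    return d_inv_sqrt @ a @ d_inv_sqrt

k, feat = 4, 3                          # 4 HARQ rounds, 3 features per round
layer1, layer2 = GCNLayer(feat, 16), GCNLayer(16, 1, act=False)
a_hat, x = chain_adjacency(k), torch.randn(k, feat)
powers = torch.sigmoid(layer2(a_hat, layer1(a_hat, x))) * 10.0  # P_max = 10
print(powers.squeeze())                 # one power level per HARQ round
```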
Abstract: To support massive connectivity and boost spectral efficiency for the Internet of Things (IoT), a downlink scheme combining virtual multiple-input multiple-output (MIMO) and non-orthogonal multiple access (NOMA) is proposed. The single-antenna IoT devices in each cluster cooperate to form a virtual MIMO entity, and multiple independent data streams are requested by each cluster. NOMA is employed to superimpose all the requested data streams, and each cluster leverages zero-forcing detection to de-multiplex its input data streams. Only statistical channel state information (CSI) is available at the base station, which avoids wasting energy and bandwidth on frequent CSI estimation. The outage probability and goodput of the virtual MIMO-NOMA system are thoroughly investigated under the Kronecker model, which embraces both transmit and receive correlations. Furthermore, the asymptotic results facilitate not only the exploration of physical insights but also goodput maximization. In particular, the asymptotic outage expressions quantify the impacts of various system parameters and enable the investigation of the diversity-multiplexing tradeoff (DMT). Moreover, power allocation coefficients and/or transmission rates can be properly chosen to achieve the maximal goodput. With the aid of the Karush-Kuhn-Tucker (KKT) conditions, the goodput maximization problems can be solved in closed form, with which the joint power and rate selection is realized via alternating optimization. Besides, the optimization algorithms tend to allocate more power to clusters under unfavorable channel conditions and to support higher transmission rates for clusters under benign channel conditions.
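The zero-forcing detection used by each cluster amounts to applying the pseudo-inverse of the effective channel, W = (H^H H)^{-1} H^H, which nulls inter-stream interference at the cost of possible noise enhancement. Below is a minimal NumPy sketch with illustrative dimensions (four cooperating single-antenna devices receiving two superimposed QPSK streams); it demonstrates the detector only, not the paper's full NOMA superposition or Kronecker correlation model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rx, n_streams = 4, 2          # 4 cooperating devices, 2 data streams
H = (rng.standard_normal((n_rx, n_streams))
     + 1j * rng.standard_normal((n_rx, n_streams))) / np.sqrt(2)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s = rng.choice(qpsk, n_streams)                 # transmitted symbols
noise = 0.05 * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
y = H @ s + noise                               # received superposition

W = np.linalg.pinv(H)           # ZF detector: (H^H H)^{-1} H^H
s_hat = W @ y                   # inter-stream interference forced to zero
print(np.round(s_hat, 2), s)
```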
Abstract: Federated learning (FL) is an emerging machine learning method that can be applied in mobile edge systems, in which a server and a host of clients collaboratively train a statistical model using the clients' data and computation resources without directly exposing their privacy-sensitive data. We show that running stochastic gradient descent (SGD) in such a setting can be viewed as adding a momentum-like term to the global aggregation process. Based on this finding, we further analyze the convergence rate of a federated learning system by accounting for the effects of parameter staleness and communication resources. These results advance the understanding of the federated SGD algorithm and forge a link between staleness analysis and federated computing systems, which can be useful for system designers.
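The momentum-like view can be made concrete with a toy sketch: the server maintains a velocity that accumulates the averaged client gradients, so the global update resembles SGD with momentum. The code below is purely illustrative of that observation on a toy quadratic objective; the paper's exact formulation and its staleness weighting may differ.

```python
import numpy as np

def server_round(w, client_grads, velocity, lr=0.1, beta=0.9):
    """One global aggregation step viewed as a momentum update."""
    avg_grad = np.mean(client_grads, axis=0)    # FedSGD aggregation
    velocity = beta * velocity + avg_grad       # momentum-like term
    return w - lr * velocity, velocity

# Toy setting: client i holds f_i(w) = 0.5 * ||w - t_i||^2.
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
w, v = np.zeros(2), np.zeros(2)
for _ in range(100):
    grads = [w - t for t in targets]            # exact local gradients
    w, v = server_round(w, grads, v)
print(w)   # approaches the mean of the client optima, [2/3, 2/3]
```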
Abstract: Non-orthogonal multiple access (NOMA) enabled fog radio access networks (NOMA-F-RANs) have been regarded as a promising enabler to relieve network congestion, reduce delivery latency, and improve the quality of service (QoS) of fog user equipments (F-UEs). Nevertheless, the effectiveness of NOMA-F-RANs relies heavily on accurately charted feature information of F-UEs (preference distributions, positions, mobility patterns, etc.) as well as effective caching, computing, and resource allocation strategies. In this article, we explore how artificial intelligence (AI) techniques can be utilized to address these formidable challenges. Specifically, we first elaborate on the NOMA-F-RANs architecture, shedding light on its key modules, namely cooperative caching and cache-aided mobile edge computing (MEC). Then, the AI-driven techniques potentially applicable to the principal issues of NOMA-F-RANs are reviewed. Through case studies, we show the efficacy of AI-enabled methods in F-UEs' latent feature extraction and cooperative caching. Finally, future trends of AI-driven NOMA-F-RANs, including open research issues and challenges, are identified.