Abstract:Federated learning (FL) can fully leverage large-scale terminal data while ensuring privacy and security, and is considered a distributed alternative to centralized machine learning. However, data heterogeneity limits FL's performance. To address this challenge, artificial intelligence-generated content (AIGC), an innovative data synthesis technique, emerges as one potential solution. In this article, we first provide an overview of the system architecture, performance metrics, and challenges associated with AIGC-assisted FL system design. We then propose the generative federated learning (GenFL) architecture and present its workflow, including the design of its aggregation and weighting policies. Finally, using the CIFAR10 and CIFAR100 datasets, we employ diffusion models to generate synthetic data and improve FL performance. Experiments conducted under various non-independent and identically distributed (non-IID) data distributions demonstrate the effectiveness of GenFL in overcoming the bottlenecks caused by data heterogeneity in FL. Open research directions in AIGC-assisted FL are also discussed.
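The non-IID settings mentioned above are commonly simulated with a Dirichlet label-skew partition of the training set across clients. A minimal sketch, assuming NumPy; the function name and the `alpha` concentration parameter are illustrative and not the article's exact protocol:

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Partition sample indices across clients with label skew drawn from a
    Dirichlet(alpha) distribution; smaller alpha means more non-IID."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Fraction of class-c samples assigned to each client
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return [np.array(ci) for ci in client_indices]

# Toy example: 10 classes, 100 samples per class, 5 clients
labels = np.repeat(np.arange(10), 100)
parts = dirichlet_partition(labels, num_clients=5, alpha=0.1)
```

With small `alpha`, most of each class lands on one or two clients, reproducing the label-skew regime under which synthetic data is most useful.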
Abstract:Constant-envelope signals are widely employed in wireless communication systems due to their hardware-friendly design, energy efficiency, and enhanced reliability. However, detecting these signals reliably using simple, power-efficient receivers remains a critical challenge. While coherent detection methods generally offer superior performance, they require complex frequency synchronization, which increases receiver complexity and power consumption. In contrast, noncoherent detection is inherently simpler since it avoids frequency synchronization. However, traditional noncoherent detection approaches still rely on In-phase and Quadrature-phase (IQ) demodulators to extract the noisy received signal magnitudes, and assume the energy detector as the test statistic according to the IQ structure, without rigorous theoretical justification. Motivated by the practical need for robust and low-complexity detection, this paper proposes a comprehensive framework for optimal signal detection using a simple bandpass-filter envelope-detector (BFED) in conjunction with a Bayesian approach and the generalized likelihood ratio test (GLRT) under unknown amplitude conditions. By leveraging approximations of the modified Bessel functions, we derive two distinct regimes of the optimal detector. In the low signal-to-noise ratio (SNR) regime, we rigorously prove that the energy detector emerges as the Bayesian-optimal solution, thereby establishing its theoretical validity for the first time. In the high SNR regime, we derive a novel amplitude-based detector that directly compares the estimated amplitude against the noise standard deviation, leading to a simple yet optimal detection strategy. Numerical simulations validate the theoretical findings, confirming that both the energy and amplitude detectors achieve the minimum error probability in their respective SNR domains.
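The role of the Bessel-function approximation in the low-SNR result can be sketched schematically. The notation below (envelope samples $r_k$, amplitude $A$, noise variance $\sigma^2$) is illustrative, not the paper's exact derivation:

```latex
% Likelihood ratio of signal-present (Rician) vs. noise-only (Rayleigh)
% envelope samples involves the modified Bessel function I_0:
\Lambda(\mathbf{r}) \propto \prod_{k} I_0\!\left(\frac{A r_k}{\sigma^2}\right).
% Small-argument approximation (low SNR): I_0(x) \approx 1 + x^2/4, hence
\log \Lambda(\mathbf{r}) \approx \mathrm{const} + \frac{A^2}{4\sigma^4}\sum_{k} r_k^2 ,
% i.e., the test statistic reduces to the energy detector \sum_k r_k^2.
```

In the opposite regime, $I_0(x) \approx e^{x}/\sqrt{2\pi x}$ for large $x$ makes the log-likelihood approximately linear in the envelope, which is consistent with the amplitude-based detector derived for high SNR.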
Abstract:Cognitive aerial-terrestrial networks (CATNs) offer a solution to spectrum scarcity by sharing spectrum between aerial and terrestrial networks. However, aerial users (AUs) experience significant interference from numerous terrestrial base stations (BSs). To alleviate such interference, we investigate a user association and coordinated beamforming (CBF) problem in a CATN, where the aerial network serves as the primary network, sharing its spectrum with the terrestrial network. Specifically, we maximize the sum rate of the secondary terrestrial users (TUs) under the interference temperature constraints of the AUs. Traditional iterative optimization schemes are impractical due to their high computational complexity and information exchange overhead. Although deep reinforcement learning (DRL) based schemes can address these challenges, their performance is sensitive to the weights of the weighted penalty terms for violating constraints in the reward function. Motivated by these issues, we propose a safe DRL-based user association and CBF scheme for CATN, eliminating the need to train multiple times to find a suitable penalty weight before actual deployment. Specifically, the CATN is modeled as a networked constrained partially observable Markov game. Each TU acts as an agent to choose its associated BS, and each BS acts as an agent to decide its beamforming vectors, aiming to maximize the reward while satisfying the safety constraints introduced by the interference constraints of the AUs. By exploiting a safe DRL algorithm, the proposed scheme incurs lower deployment expenses than the penalty-based DRL schemes since only a single training run is required before actual deployment. Simulation results show that the proposed scheme can achieve a higher sum rate of TUs than a two-stage optimization scheme while the average received interference power of the AUs is generally below the threshold.
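The contrast with penalty-based rewards can be sketched abstractly: instead of hand-tuning a fixed penalty weight over many training runs, safe-DRL methods typically adapt a Lagrange multiplier online within a single run. A generic primal-dual sketch, not the paper's specific algorithm:

```python
def dual_ascent_step(lmbda, avg_violation, lr=0.01):
    """Update the Lagrange multiplier on the interference constraint:
    it grows while the constraint is violated on average and decays
    (down to zero) once average interference falls below the threshold."""
    return max(0.0, lmbda + lr * avg_violation)

def shaped_reward(raw_reward, lmbda, violation):
    """Constraint-aware reward seen by the agents; the multiplier plays
    the role of the hand-tuned penalty weight, but adapts automatically."""
    return raw_reward - lmbda * max(0.0, violation)

# Toy trace: early violations push the multiplier up, later slack lets it decay.
lmbda = 0.0
for violation in [1.0, 1.0, -0.5, -0.5, -0.5]:
    lmbda = dual_ascent_step(lmbda, violation)
```

This adaptive behavior is what removes the pre-deployment penalty-weight search that the abstract identifies as the cost of penalty-based DRL schemes.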
Abstract:Recently, over-the-air federated learning (FL) has attracted significant attention for its ability to enhance communication efficiency. However, the performance of over-the-air FL is often constrained by device selection strategies and signal aggregation errors. In particular, neglecting straggler devices in FL can lead to a decline in the fairness of model updates and amplify the global model's bias toward certain devices' data, ultimately impacting the overall system performance. To address this issue, we propose a joint device selection and transmit power optimization framework that ensures the appropriate participation of straggler devices, maintains efficient training performance, and guarantees timely updates. First, we conduct a theoretical analysis to quantify the convergence upper bound of over-the-air FL under age-of-information (AoI)-based device selection. Our analysis further reveals that both the number of selected devices and the signal aggregation errors significantly influence the convergence upper bound. To minimize the expected weighted sum peak age of information, we calculate device priorities for each communication round using Lyapunov optimization and select the highest-priority devices via a greedy algorithm. Then, we formulate and solve a transmit power and normalizing factor optimization problem for selected devices to minimize the time-average mean squared error (MSE). Experimental results demonstrate that our proposed method offers two significant advantages: (1) it reduces MSE and improves model performance compared to baseline methods, and (2) it strikes a balance between fairness and training efficiency while maintaining satisfactory timeliness, ensuring stable model performance.
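The priority-then-greedy selection step described above can be sketched as follows. The priority expression here is a generic Lyapunov drift-plus-penalty form with illustrative names, not the paper's exact formula:

```python
import numpy as np

def select_devices(aoi, virtual_queues, weights, num_select, V=1.0):
    """Rank devices by a drift-plus-penalty style priority that combines
    AoI with a virtual-queue backlog, then greedily pick the top-k."""
    priority = virtual_queues * weights * aoi + V * aoi
    order = np.argsort(-priority)          # descending priority
    return order[:num_select]

# Toy round: 4 devices; device 3 is a straggler with stale updates (high AoI).
aoi = np.array([5.0, 1.0, 3.0, 8.0])
queues = np.array([2.0, 0.5, 1.0, 0.2])
weights = np.ones(4)
chosen = select_devices(aoi, queues, weights, num_select=2)
```

Because a large AoI inflates the priority even when the backlog is small, stragglers are eventually selected, which is the fairness mechanism the abstract refers to.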
Abstract:Backscatter communication offers a promising solution to connect massive Internet-of-Things (IoT) devices with low cost and high energy efficiency. Nevertheless, its inherently passive nature limits transmission reliability, thereby hindering improvements in communication range and data rate. To overcome these challenges, we introduce a bistatic broadband backscatter communication (BBBC) system, which equips the backscatter device (BD) with multiple antennas. In the proposed BBBC system, a radio frequency (RF) source directs a sinusoidal signal to the BD, facilitating single-carrier block transmission at the BD. Meanwhile, without requiring channel state information (CSI), cyclic delay diversity (CDD) is employed at the multi-antenna BD to enhance transmission reliability through additional cyclically delayed backscattered signals. We also propose a receiver design that includes preprocessing of the time-domain received signal, pilot-based parameter estimation, and frequency-domain equalization, enabling low-complexity detection of the backscattered signal. Leveraging the matched filter bound (MFB), we analyze the achievable diversity gains in terms of outage probability. Our analysis reveals that spatial diversity is achievable under general Rayleigh fading conditions, and both frequency and spatial diversity are attainable in scenarios where the forward link experiences a line-of-sight (LoS) channel. Simulation results validate the effectiveness of the proposed BBBC system. As the number of BD antennas increases, our results show that the proposed scheme not only enhances array gain but also improves diversity order, significantly reducing both outage probability and bit error rate (BER). Consequently, it significantly outperforms conventional schemes, which yield only minor gains.
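Cyclic delay diversity itself is simple to state: each antenna backscatters a cyclically shifted copy of the same single-carrier block, which requires no CSI at the BD. A minimal sketch; the block length and delay step are illustrative:

```python
import numpy as np

def cdd_transmit(block, num_antennas, delay_step=1):
    """Cyclic delay diversity: antenna m backscatters the block cyclically
    shifted by m * delay_step samples. At the receiver, the shifts turn
    spatial diversity into extra frequency selectivity."""
    return np.stack([np.roll(block, m * delay_step)
                     for m in range(num_antennas)])

# One single-carrier block (cyclic-prefix handling omitted for brevity)
x = np.arange(8, dtype=complex)
tx = cdd_transmit(x, num_antennas=4, delay_step=2)
```

After the cyclic prefix is stripped, the cyclic shifts appear as per-antenna phase ramps in the frequency domain, which is why one-tap frequency-domain equalization remains sufficient at the receiver.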
Abstract:In reconfigurable intelligent surface (RIS)-assisted symbiotic radio (SR), an RIS is exploited to assist the primary system and to simultaneously operate as a secondary transmitter by modulating its own information onto the incident primary signal over the air; this operation is called over-the-air modulation. The existing modulation schemes such as on-off keying and binary phase-shift keying suffer from two problems in the joint detection of the primary and secondary signals in RIS-assisted SR: detection ambiguity when the direct link is blocked, and a bit error rate (BER) error floor when the direct link is weak. To address the two problems, we propose a novel modulation scheme by dividing the phase-shift matrix into two parts: one is the assistance beamforming matrix for assisting the primary system and the other is the transmission beamforming matrix for delivering the secondary signal. To optimize the assistance and transmission beamforming matrices, we first introduce an assistance factor that describes the performance requirement of the primary system and then formulate a problem to minimize the BER of the secondary system, while guaranteeing the BER requirement of the primary system controlled by the assistance factor. To solve this non-convex problem, we resort to the successive convex approximation technique to obtain a suboptimal solution. Furthermore, to draw more insights, we propose a low-complexity assistance-transmission beamforming structure by borrowing the idea from the classical maximum ratio transmission and zero forcing techniques. Finally, simulation results reveal an interesting tradeoff between the BER performance of the primary and secondary systems by adjusting the assistance factor.
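The two classical building blocks behind the low-complexity structure can be sketched directly. The channel dimensions here are illustrative, and this is only the textbook MRT/ZF machinery, not the paper's full assistance-transmission design:

```python
import numpy as np

def mrt_beamformer(h):
    """Maximum ratio transmission: steer along the conjugate channel
    to maximize received power for a single user."""
    return h.conj() / np.linalg.norm(h)

def zf_beamformers(H):
    """Zero forcing for a K x M channel matrix: column k of the
    pseudo-inverse nulls interference toward every user j != k."""
    W = np.linalg.pinv(H)
    return W / np.linalg.norm(W, axis=0, keepdims=True)  # unit-norm beams

rng = np.random.default_rng(1)
H = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
W = zf_beamformers(H)
leak = abs(H[0] @ W[:, 1])   # user 0 sees (almost) nothing from beam 1
```

In the paper's setting, an MRT-like component serves the primary link while a ZF-like component carries the secondary signal without disturbing it; the assistance factor then trades power between the two.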
Abstract:The space-air-ground integrated network (SAGIN) is a pivotal architecture to support ubiquitous connectivity in the upcoming 6G era. Inter-operator resource and service sharing is a promising way to realize such a large-scale network, utilizing resources efficiently and reducing construction costs. Given the rationality of operators, the configuration of resources and services in SAGIN should account for both the overall system performance and the individual benefits of operators. Motivated by emerging symbiotic communication facilitating mutual benefits across different radio systems, we investigate resource and service sharing in SAGIN from a symbiotic communication perspective in this paper. In particular, we consider a SAGIN consisting of a ground network operator (GNO) and a satellite network operator (SNO). Specifically, we aim to maximize the weighted sum rate (WSR) of the whole SAGIN by jointly optimizing the user association, resource allocation, and beamforming. In addition, we introduce a sharing coefficient to characterize the revenue of operators; when the WSR alone is maximized, an individual operator may suffer revenue loss. In pursuit of mutual benefits, we propose a mutual benefit constraint (MBC) to ensure that each operator obtains revenue gains. Then, we develop a centralized algorithm based on the successive convex approximation (SCA) method. Since the centralized algorithm is difficult to implement in practice, we propose a distributed algorithm based on Lagrangian dual decomposition and the consensus alternating direction method of multipliers (ADMM). Finally, we provide extensive numerical simulations to demonstrate the effectiveness of the two proposed algorithms and show that the distributed algorithm approaches the performance of the centralized one.
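The consensus-ADMM machinery underlying the distributed algorithm can be illustrated on a toy problem: agents agreeing on the average of their local values, i.e. min Σᵢ(xᵢ−aᵢ)² s.t. xᵢ = z. This is only a sketch of the distributed mechanism, not the paper's WSR problem:

```python
import numpy as np

def consensus_admm_average(local_values, rho=1.0, iters=50):
    """Toy consensus ADMM: each agent i keeps a local copy x_i and all
    copies are driven to a common consensus variable z."""
    a = np.asarray(local_values, dtype=float)
    x = a.copy()
    z = 0.0
    u = np.zeros(len(a))                         # scaled dual variables
    for _ in range(iters):
        x = (2 * a + rho * (z - u)) / (2 + rho)  # local (per-agent) updates
        z = np.mean(x + u)                       # consensus update
        u = u + x - z                            # dual updates
    return z

z = consensus_admm_average([1.0, 2.0, 6.0])
```

Each x-update uses only local data, and only the small quantities `x + u` are exchanged to form `z`, which is why the distributed algorithm avoids the centralized algorithm's full information exchange.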
Abstract:Active reconfigurable intelligent surface (RIS) has attracted significant attention in wireless communications, due to its reflecting elements (REs) capable of reflecting incident signals with not only phase shifts but also amplitude amplifications. In this paper, we are interested in active RIS-aided interference channels in which $K$ user pairs share the same time and frequency resources with the aid of active RIS. Thanks to the promising amplitude amplification capability, activating a moderate number of REs, rather than all of them, is sufficient for the active RIS to mitigate cross-channel interference. Motivated by this, we propose a power-aware sparse reflect beamforming design for the active RIS-aided interference channels, which allows the active RIS to flexibly adjust the number of activated REs for the sake of reducing hardware and power costs. Specifically, we establish the power consumption model in which only those activated REs consume the biasing and operation power that supports the amplitude amplification, yielding an $\ell_0$-norm power consumption function. Based on the proposed model, we investigate a sum-rate maximization problem and an active RIS power minimization problem by carefully designing the sparse reflect beamforming vector. To solve these problems, we first replace the nonconvex $\ell_0$-norm function with an iterative reweighted $\ell_1$-norm function. Then, fractional programming is used to solve the sum-rate maximization, while semidefinite programming together with the difference-of-convex algorithm (DCA) is used to solve the active RIS power minimization. Numerical results show that the proposed sparse designs can notably increase the sum rate of user pairs and decrease the power consumption of active RIS in interference channels.
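The iterative reweighted ℓ1 step can be sketched on a scalar denoising toy problem, where each convex subproblem has a closed-form soft-threshold solution. This illustrates the reweighting idea only; the paper applies it inside much richer beamforming subproblems:

```python
import numpy as np

def reweighted_l1_weights(x, eps=1e-3):
    """One reweighting step: small-magnitude entries get large weights,
    pushing them toward exact zero in the next convex subproblem."""
    return 1.0 / (np.abs(x) + eps)

def soft_threshold(y, tau):
    """Closed-form solution of min_x (x - y)^2 + 2*tau*|x| (elementwise)."""
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)

# Toy: sparsify y by alternating reweighting and the convex subproblem
y = np.array([1.2, -0.05, 0.8, 0.02])
lam = 0.1
x = y.copy()
for _ in range(5):
    w = reweighted_l1_weights(x)          # surrogate for the l0 count
    x = soft_threshold(y, lam * w / 2.0)  # solve the weighted-l1 subproblem
```

Entries that shrink toward zero receive ever larger weights and are eventually switched off exactly, which is how the design deactivates REs and saves their biasing and operation power.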
Abstract:Constructing earth-fixed cells with low-earth orbit (LEO) satellites in non-terrestrial networks (NTNs) has been the most promising paradigm to enable global coverage. However, the limited computing capabilities of LEO satellites make it a critical challenge to complete resource optimization within a short duration. Although the ample computing capabilities of ground infrastructure can be utilized to assist the LEO satellite, the different time-scale control cycles and coupled decisions between the space and ground segments still obstruct a joint optimization design for computing agents at the different segments. To address these challenges, in this paper, a multi-time-scale deep reinforcement learning (DRL) scheme is developed to achieve radio resource optimization in NTNs, in which the LEO satellite and user equipment (UE) collaborate to perform individual decision-making tasks with different control cycles. Specifically, the UE updates its policy toward improving the value functions of both the satellite and the UE, while the LEO satellite only performs finite-step rollouts for decision-making based on the reference decision trajectory provided by the UE. Most importantly, rigorous analysis guaranteeing the performance convergence of the proposed scheme is provided. Comprehensive simulations justify the effectiveness of the proposed scheme in balancing transmission performance and computational complexity.
Abstract:Non-terrestrial networks (NTNs) with low-earth orbit (LEO) satellites have been regarded as a promising remedy to support global ubiquitous wireless services. Due to the rapid mobility of LEO satellites, inter-beam/satellite handovers happen frequently for a specific user equipment (UE). To tackle this issue, earth-fixed cell scenarios have been studied, in which the LEO satellite adjusts its beam direction toward a fixed area within its dwell duration to maintain stable transmission performance for the UE. This requires the LEO satellite to perform real-time resource allocation, which is, however, unaffordable for a LEO satellite with limited computing capability. To address this issue, in this paper, we propose a two-time-scale collaborative deep reinforcement learning (DRL) scheme for beam management and resource allocation in NTNs, in which the LEO satellite and UE, with different control cycles, update their decision-making policies in a sequential manner. Specifically, the UE updates its policy so as to improve the value functions of both agents. Furthermore, the LEO satellite only makes decisions through finite-step rollouts with a reference decision trajectory received from the UE. Simulation results show that the proposed scheme can effectively balance throughput performance and computational complexity compared with traditional greedy-search schemes.
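The finite-step rollout performed by the satellite can be sketched generically: score each candidate action by simulating a few steps while following a fixed reference policy, then bootstrap with a value estimate. All names and the toy environment below are illustrative, not the paper's scheme:

```python
def rollout_action(state, actions, step_fn, ref_policy, value_fn,
                   horizon=3, gamma=0.99):
    """Evaluate each candidate action by a `horizon`-step simulated rollout
    that thereafter follows the reference policy, bootstrapping the tail
    with a value estimate; return the best-scoring action."""
    best_action, best_score = None, float("-inf")
    for a in actions:
        s, ret, disc, act = state, 0.0, 1.0, a
        for _ in range(horizon):
            s, r = step_fn(s, act)
            ret += disc * r
            disc *= gamma
            act = ref_policy(s)        # follow the reference trajectory
        ret += disc * value_fn(s)      # bootstrap beyond the horizon
        if ret > best_score:
            best_action, best_score = a, ret
    return best_action

# Toy 1-D example: the state should be driven toward zero.
step = lambda s, a: (s + a, -abs(s + a))
ref = lambda s: -1 if s > 0 else 1
value = lambda s: -abs(s)
chosen = rollout_action(3, [-1, 1], step, ref, value)
```

Only a short lookahead is simulated on board, which is why the rollout fits the satellite's limited computing budget while the UE carries the heavier policy-learning workload.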