Abstract: Spectrum access is an essential problem in device-to-device (D2D) communications. However, with the recent growth in the number of mobile devices, the wireless spectrum is becoming scarce, resulting in low spectral efficiency for D2D communications. To address this problem, this paper integrates ambient backscatter communication technology into D2D devices, allowing them to backscatter ambient RF signals to transmit their data when the shared spectrum is occupied by mobile users. To obtain the optimal spectrum access policy, i.e., stay idle, perform active transmissions over the shared spectrum, or backscatter ambient RF signals, so as to maximize the average throughput for D2D users, deep reinforcement learning (DRL) can be adopted. However, DRL-based solutions may require long training times due to the curse of dimensionality as well as complex deep neural network architectures. To this end, we develop a novel quantum reinforcement learning (RL) algorithm that achieves a faster convergence rate with fewer training parameters than DRL, thanks to the principles of quantum superposition and quantum entanglement. Specifically, instead of using conventional deep neural networks, the proposed quantum RL algorithm uses a parametrized quantum circuit to approximate the optimal policy. Extensive simulations demonstrate that the proposed solution not only significantly improves the average throughput of D2D devices when the shared spectrum is busy but also achieves a much faster convergence rate and lower learning complexity than existing DRL-based methods.
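As a rough illustration of how a parametrized quantum circuit can replace a neural network as a policy approximator, the following NumPy sketch simulates a two-qubit circuit with trainable RY rotations and a CNOT entangling gate, and maps the measurement probabilities to the three spectrum access actions. The angle encoding, circuit depth, and action mapping are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

# Single-qubit RY rotation; CNOT given directly as a 4x4 two-qubit gate.
def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def pqc_policy(thetas, state_angle):
    """Return action probabilities from a 2-qubit parametrized circuit.

    thetas: four trainable rotation angles; state_angle: the observed
    spectrum state encoded as a rotation (angle encoding) -- a toy choice.
    """
    psi = np.zeros(4); psi[0] = 1.0                         # |00>
    psi = np.kron(ry(state_angle), ry(state_angle)) @ psi   # state encoding
    psi = np.kron(ry(thetas[0]), ry(thetas[1])) @ psi       # trainable layer
    psi = CNOT @ psi                                        # entanglement
    psi = np.kron(ry(thetas[2]), ry(thetas[3])) @ psi       # trainable layer
    probs = np.abs(psi) ** 2                                # Born rule
    # Map the 4 basis-state probabilities to 3 actions:
    # idle, active transmission, backscatter (last two outcomes merged).
    return np.array([probs[0], probs[1], probs[2] + probs[3]])

print(pqc_policy(np.array([0.3, -0.7, 1.1, 0.2]), state_angle=0.5))
```

In an RL loop, the angles `thetas` would be updated by a policy-gradient rule, which is where the "fewer training parameters" advantage over a deep network appears.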
Abstract: Motivated by the superior performance of deep learning in many applications, including computer vision and natural language processing, several recent studies have focused on applying deep neural networks to the design of future generations of wireless networks. However, recent works have pointed out that imperceptible, carefully designed adversarial examples (attacks) can significantly degrade classification accuracy. In this paper, we investigate a defense mechanism that combines training-time and run-time defense techniques to protect machine learning-based radio signal (modulation) classification against adversarial attacks. The training-time defense consists of adversarial training and label smoothing, while the run-time defense employs support vector machine-based neural rejection (NR). Considering a white-box scenario and real datasets, we demonstrate that our proposed techniques outperform existing state-of-the-art defenses.
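To make the two defense stages concrete, the sketch below shows label smoothing for training-time targets and an SVM-based neural rejection rule at run time, operating on stand-in penultimate-layer features. The feature dimension, class count, and rejection threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def smooth_labels(y, num_classes, eps=0.1):
    """Label smoothing: put (1 - eps) on the true class and eps/(K-1)
    elsewhere, which discourages over-confident logits."""
    Y = np.full((len(y), num_classes), eps / (num_classes - 1))
    Y[np.arange(len(y)), y] = 1.0 - eps
    return Y

# Run-time neural rejection (NR): fit an SVM on penultimate-layer features
# and reject inputs whose best decision score falls below a threshold.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 16))        # stand-in for DNN features
labels = rng.integers(0, 4, size=200)     # 4 modulation classes

svm = SVC(kernel="rbf", decision_function_shape="ovr").fit(feats, labels)

def classify_or_reject(x, threshold=0.0):
    scores = svm.decision_function(x.reshape(1, -1))[0]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else -1   # -1 = rejected

print(smooth_labels(np.array([0, 2]), num_classes=4))
print(classify_or_reject(feats[0]))
```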
Abstract: Deep learning algorithms have been shown to be powerful in many communication network design problems, including automatic modulation classification. However, they are vulnerable to carefully crafted attacks called adversarial examples, and this vulnerability poses a serious threat to the security and operation of wireless networks that rely on deep learning. In this letter, we propose for the first time a countermeasure against adversarial examples in modulation classification. Our countermeasure is based on a neural rejection technique, augmented by label smoothing and Gaussian noise injection, that detects and rejects adversarial examples with high accuracy. Our results demonstrate that the proposed countermeasure can protect deep learning-based modulation classification systems against adversarial examples.
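A minimal sketch of the two ingredients that augment neural rejection here, assuming toy I/Q frames: training-time Gaussian noise injection as data augmentation, and a confidence-based rejection rule standing in for the full NR pipeline.

```python
import numpy as np

def augment_with_noise(X, sigma=0.05, copies=2, rng=None):
    """Training-time Gaussian noise injection: append noisy copies of each
    I/Q sample so the classifier learns a smoother decision surface."""
    rng = rng or np.random.default_rng(0)
    noisy = [X + sigma * rng.standard_normal(X.shape) for _ in range(copies)]
    return np.concatenate([X] + noisy, axis=0)

def reject_low_confidence(logits, threshold=0.9):
    """Run-time rejection: return the predicted class, or -1 when the
    maximum softmax probability falls below the threshold (a likely
    adversarial or out-of-distribution input)."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return int(np.argmax(p)) if p.max() >= threshold else -1

X = np.random.default_rng(1).standard_normal((4, 128))   # toy I/Q frames
print(augment_with_noise(X).shape)        # (12, 128): original + 2 copies
print(reject_low_confidence(np.array([2.0, 0.1, -1.0])))   # rejected: -1
```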
Abstract: This paper investigates the optimization of the long-standing probabilistically robust transmit beamforming problem with channel uncertainties in the multiuser multiple-input single-output (MISO) downlink transmission. This problem poses significant analytical and computational challenges. Currently, the state-of-the-art optimization method relies on convex restrictions as tractable approximations to ensure robustness against Gaussian channel uncertainties. However, this method not only exhibits high computational complexity and suffers from the rank relaxation issue but also yields conservative solutions. In this paper, we propose an unsupervised deep learning-based approach that incorporates the sampling of channel uncertainties in the training process to optimize the probabilistic system performance. We introduce a model-driven learning approach that defines a new beamforming structure with trainable parameters to account for channel uncertainties. Additionally, we employ a graph neural network to efficiently infer the key beamforming parameters. We successfully apply this approach to the minimum rate quantile maximization problem subject to outage and total power constraints. Furthermore, we propose a bisection search method to address the more challenging power minimization problem with probabilistic rate constraints by leveraging the aforementioned approach. Numerical results confirm that our approach achieves non-conservative robust performance, higher data rates, greater power efficiency, and faster execution compared to state-of-the-art optimization methods.
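The bisection step can be sketched as follows, assuming a monotone callback `rate_quantile_at_power(P)` that returns the achievable rate quantile at total power `P`. In the paper's setting that callback would be the trained rate-maximization network evaluated over sampled channel uncertainties; here a toy rate model stands in.

```python
import numpy as np

def min_power_bisection(rate_quantile_at_power, target_rate,
                        p_lo=1e-3, p_hi=10.0, tol=1e-3):
    """Find the smallest total power whose achievable rate quantile meets
    the target, assuming the callback is non-decreasing in power."""
    if rate_quantile_at_power(p_hi) < target_rate:
        raise ValueError("target rate infeasible within the power range")
    while p_hi - p_lo > tol:
        p_mid = 0.5 * (p_lo + p_hi)
        if rate_quantile_at_power(p_mid) >= target_rate:
            p_hi = p_mid          # feasible: try less power
        else:
            p_lo = p_mid          # infeasible: need more power
    return p_hi

# Toy monotone rate model standing in for the learned mapping.
toy_rate = lambda p: np.log2(1.0 + 2.0 * p)
print(min_power_bisection(toy_rate, target_rate=2.0))   # ~1.5
```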
Abstract: Traditional physical-layer secure beamforming is achieved via precoding before signal transmission using channel state information (CSI). However, imperfect CSI compromises performance through imperfect beamforming and potential information leakage. In addition, multiple RF chains and antennas are needed to generate narrow beams, which complicates hardware implementation and is unsuitable for resource-constrained Internet-of-Things (IoT) devices. Moreover, with the advancement of hardware and artificial intelligence (AI), low-cost and intelligent eavesdropping on wireless communications is becoming increasingly detrimental. In this paper, we propose a multi-carrier, multi-band waveform-defined security (WDS) framework, independent of CSI and RF chains, to defend against AI-based eavesdropping. Ideally, continuous variation of the sub-band structure yields an infinite number of spectral features, which can potentially prevent brute-force eavesdropping. Sub-band spectral pattern information is efficiently constructed at legitimate users via a proposed chaotic sequence generator. A novel security metric, termed signal classification accuracy (SCA), is used to evaluate security robustness under AI eavesdropping. Communication error probability and complexity are also investigated to show the reliability and practical capability of the proposed framework. Finally, compared to traditional secure beamforming techniques, the proposed multi-band WDS framework reduces power consumption by up to a factor of six.
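A minimal sketch of a chaotic sub-band pattern generator based on the logistic map, where the seed plays the role of a shared secret between legitimate users. The quantization of chaotic values to candidate sub-band widths is an illustrative choice, not the paper's exact construction.

```python
import numpy as np

def logistic_subband_pattern(seed, num_subbands, widths, r=3.99, burn_in=100):
    """Derive a sub-band spectral pattern from the logistic-map chaotic
    sequence x_{k+1} = r * x_k * (1 - x_k). Legitimate users sharing the
    seed regenerate exactly the same pattern; an eavesdropper without the
    seed faces an effectively unpredictable sequence of sub-band layouts."""
    x = seed
    for _ in range(burn_in):            # discard the transient
        x = r * x * (1.0 - x)
    pattern = []
    for _ in range(num_subbands):
        x = r * x * (1.0 - x)
        pattern.append(widths[int(x * len(widths))])  # quantize to a width
    return pattern

# Both ends derive the same pattern from a shared seed in (0, 1).
widths = [0.25, 0.5, 1.0, 2.0]          # candidate sub-band widths (MHz)
print(logistic_subband_pattern(0.6180339887, num_subbands=8, widths=widths))
```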
Abstract: As an attractive enabling technology for next-generation wireless communications, network slicing supports diverse customized services under diverse resource constraints in the global space-air-ground integrated network (SAGIN). In this paper, we consider the dynamic operation of three typical classes of radio access network (RAN) slices, namely high-throughput slices, low-delay slices and wide-coverage slices, over the same underlying physical SAGIN. The throughput, the service delay and the coverage area of these three classes of RAN slices are jointly optimized in a non-scalar form by exploiting the distinct channel features and service advantages of the terrestrial, aerial and satellite components of SAGINs. A joint central and distributed multi-agent deep deterministic policy gradient (CDMADDPG) algorithm is proposed for solving this problem and obtaining the Pareto-optimal solutions. The algorithm first determines the optimal virtual unmanned aerial vehicle (vUAV) positions and the inter-slice sub-channel and power sharing by relying on a centralized unit. It then optimizes the intra-slice sub-channel and power allocation, and the virtual base station (vBS)/vUAV/virtual low earth orbit (vLEO) satellite deployment supporting the three slice classes, by three separate distributed units. Simulation results verify that the proposed method approaches the Pareto-optimal exploitation of multiple RAN slices and outperforms the benchmarks.
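The two-stage decomposition can be sketched structurally as follows, with random placeholder policies where the trained centralized and distributed DDPG actors would sit; the slice names, resource units, and equal intra-slice split are illustrative assumptions, and the learning machinery itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
SLICES = ["high_throughput", "low_delay", "wide_coverage"]

def central_unit(total_power, total_subchannels):
    """Centralized stage: decide vUAV position and inter-slice sharing.
    A random Dirichlet proposal stands in for the centralized actor."""
    shares = rng.dirichlet(np.ones(len(SLICES)))
    return {
        "uav_xy": rng.uniform(0, 1000, size=2),   # metres, toy service area
        "power": dict(zip(SLICES, total_power * shares)),
        "subch": dict(zip(SLICES, (total_subchannels * shares).astype(int))),
    }

def distributed_unit(slice_name, power, subch, num_users):
    """Distributed stage, one unit per slice: intra-slice sub-channel and
    power allocation (an equal split stands in for the per-slice actor)."""
    n = max(num_users, 1)
    return {"slice": slice_name,
            "power_per_user": power / n,
            "subch_per_user": subch // n}

plan = central_unit(total_power=40.0, total_subchannels=48)
for s in SLICES:
    print(distributed_unit(s, plan["power"][s], plan["subch"][s], num_users=4))
```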
Abstract: With growing interest in outer space, space robots have become a focus of exploration. To coordinate them for unmanned space exploration, we propose a "mother-daughter structure". In this setup, the mother spacecraft orbits the planet, while daughter probes are distributed across the surface. The mother spacecraft senses the environment, computes control commands and distributes them to the daughter probes, which act on them. Together they form indivisible sensing-communication-computing-control ($\mathbf{SC^3}$) loops. We therefore optimize the spacecraft-probe downlink within $\mathbf{SC^3}$ loops to minimize the sum linear quadratic regulator (LQR) cost, with block length and transmit power as the optimization variables. Owing to the cycle-time constraint, the spacecraft-probe downlink operates in the finite block length (FBL) regime. To solve the resulting nonlinear mixed-integer problem, we first identify the optimal block length and then transform the power allocation problem into a tractable convex one. Additionally, we derive approximate closed-form solutions for the proposed scheme as well as for the max-sum rate and max-min rate schemes, and on this basis reveal their different power allocation principles. Moreover, we find that for time-insensitive control tasks, the proposed scheme is equivalent to the max-min rate scheme. These findings are verified through simulations.
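The block-length side of the problem can be illustrated with the standard normal approximation to the finite-block-length achievable rate. The SNR, payload, and cycle-time cap below are illustrative, and the LQR cost and power allocation stages are omitted.

```python
import numpy as np
from scipy.stats import norm

def fbl_rate(snr, n, eps):
    """Normal approximation to the FBL achievable rate (bits/channel use):
    R ~ C - sqrt(V/n) * Q^{-1}(eps), with capacity C and dispersion V."""
    C = np.log2(1.0 + snr)
    V = (1.0 - 1.0 / (1.0 + snr) ** 2) * np.log2(np.e) ** 2
    return C - np.sqrt(V / n) * norm.isf(eps)   # norm.isf = Q^{-1}

def min_blocklength(snr, bits, eps, n_max):
    """Smallest block length delivering `bits` within the cycle-time cap
    n_max; a stand-in for the paper's optimal block-length identification."""
    for n in range(10, n_max + 1):
        if n * fbl_rate(snr, n, eps) >= bits:
            return n
    return None   # infeasible within the cycle time

print(min_blocklength(snr=10.0, bits=300, eps=1e-5, n_max=1000))
```

Shorter blocks pay a dispersion penalty `sqrt(V/n) * Q^{-1}(eps)`, which is why the cycle-time constraint makes the block-length choice non-trivial.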
Abstract: Reconfigurable intelligent surface (RIS) devices have emerged as an effective way to control propagation channels and enhance the end users' performance. However, RIS optimization involves configuring the radio frequency (RF) response of a large number of radiating elements, which is challenging in real-world applications due to high computational complexity. In this paper, a model-free cross-entropy (CE) algorithm is proposed to optimize the binary RIS configuration for improving the signal-to-noise ratio (SNR) at the receiver. A key advantage of the proposed method is that it only needs system performance parameters, e.g., the received SNR, without requiring channel models or channel estimation. Both simulations and experiments are conducted to evaluate the performance of the proposed CE algorithm. The results demonstrate that the CE algorithm outperforms benchmark algorithms and exhibits stronger channel hardening as the number of RIS elements increases.
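The CE loop is simple enough to sketch directly: maintain per-element Bernoulli sampling probabilities, score sampled binary configurations through a black-box SNR measurement, and pull the probabilities toward the elite samples. The toy SNR model and hyperparameters below are illustrative; in practice `measure_snr` would be a hardware measurement.

```python
import numpy as np

def ce_optimize_ris(measure_snr, num_elements, iters=50, samples=100,
                    elite_frac=0.1, smooth=0.7, rng=None):
    """Model-free cross-entropy search over binary RIS configurations.
    measure_snr(config) returns the received SNR for a 0/1 vector."""
    rng = rng or np.random.default_rng(0)
    p = np.full(num_elements, 0.5)              # Bernoulli sampling probs
    n_elite = max(1, int(elite_frac * samples))
    best, best_snr = None, -np.inf
    for _ in range(iters):
        configs = (rng.random((samples, num_elements)) < p).astype(int)
        snrs = np.array([measure_snr(c) for c in configs])
        elite = configs[np.argsort(snrs)[-n_elite:]]        # top performers
        p = smooth * elite.mean(axis=0) + (1 - smooth) * p  # smoothed update
        if snrs.max() > best_snr:
            best_snr, best = snrs.max(), configs[np.argmax(snrs)]
    return best, best_snr

# Toy SNR model: random per-element phases, coherently combined; flipping a
# bit adds a pi phase shift, mimicking a binary RIS element.
rng = np.random.default_rng(1)
phases = rng.uniform(0, 2 * np.pi, 64)
snr_model = lambda c: np.abs(np.sum(np.exp(1j * (phases + np.pi * c)))) ** 2
print(ce_optimize_ris(snr_model, num_elements=64)[1])
```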
Abstract: This paper investigates deep learning techniques for predicting transmit beamforming from historical channel data alone, without current channel information, in the multiuser multiple-input single-output downlink. This significantly reduces the channel estimation overhead and improves the spectrum efficiency, especially in high-mobility vehicular communications. Specifically, we propose a joint learning framework that incorporates channel prediction and power optimization and directly produces the transmit beamforming prediction. In addition, we propose to use an attention mechanism in long short-term memory (LSTM) recurrent neural networks to improve the accuracy of channel prediction. Simulation results using both a simple autoregressive process model and the more realistic 3GPP spatial channel model verify that the proposed predictive beamforming scheme significantly improves the effective spectrum efficiency compared to traditional channel estimation and to a method that first predicts the channel and then optimizes the beamforming separately.
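A minimal PyTorch sketch of an attention-augmented LSTM channel predictor, assuming real-valued CSI features of illustrative dimension; the power optimization and beamforming head of the full joint framework are omitted.

```python
import torch
import torch.nn as nn

class AttentiveChannelPredictor(nn.Module):
    """LSTM over historical CSI with dot-product attention across its
    hidden states, predicting the next channel vector."""
    def __init__(self, csi_dim=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(csi_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, csi_dim)

    def forward(self, x):                       # x: (batch, T, csi_dim)
        H, (h_T, _) = self.lstm(x)              # H: (batch, T, hidden)
        q = h_T[-1].unsqueeze(2)                # query = last hidden state
        scores = torch.bmm(H, q).squeeze(2)     # (batch, T) attention scores
        w = torch.softmax(scores, dim=1)
        context = (H * w.unsqueeze(2)).sum(1)   # weighted history summary
        return self.out(context)                # predicted next CSI

hist = torch.randn(16, 10, 8)                   # 10 past CSI snapshots
print(AttentiveChannelPredictor()(hist).shape)  # torch.Size([16, 8])
```

The attention weights let the predictor emphasize the most informative past snapshots rather than relying only on the final LSTM state, which is the accuracy gain the abstract refers to.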
Abstract: Simultaneous wireless information and power transfer (SWIPT) has long been proposed as a key solution for charging and communicating with low-cost, low-power devices. However, the use of radio frequency (RF) signals for information/power transfer must comply with international health and safety regulations. In this paper, we provide a complete framework for the design and analysis of far-field SWIPT under safety constraints. In particular, we deal with two RF exposure regulations, namely the specific absorption rate (SAR) and the maximum permissible exposure (MPE). The state of the art regarding SAR and MPE is outlined, together with a description of how they can be modeled in the context of communication networks. We propose a deep learning approach for the design of robust beamforming subject to specific information, energy harvesting and SAR constraints. Furthermore, we present a thorough analytical study of the performance of large-scale SWIPT systems in terms of information and energy coverage under MPE constraints. This work provides insights into the optimal SWIPT design as well as the potential of properly developed SWIPT systems under health and safety restrictions.
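As a small worked example of an MPE-style constraint, the sketch below computes the far-field power density S = PG/(4*pi*d^2) and the largest transmit power that keeps it below a limit. The 10 W/m^2 figure corresponds to a commonly cited general-public exposure level, but the applicable limit depends on frequency and jurisdiction, and the paper's beamforming design handles this jointly with SAR and harvesting constraints.

```python
import numpy as np

def mpe_power_density(p_tx, gain, distance):
    """Far-field power density S = P * G / (4 * pi * d^2) in W/m^2."""
    return p_tx * gain / (4.0 * np.pi * distance ** 2)

def max_compliant_power(gain, distance, s_limit=10.0):
    """Largest transmit power (W) keeping the density at `distance`
    below the MPE limit s_limit (W/m^2)."""
    return s_limit * 4.0 * np.pi * distance ** 2 / gain

p_max = max_compliant_power(gain=10.0, distance=1.0, s_limit=10.0)
print(p_max, mpe_power_density(p_max, 10.0, 1.0))   # density hits the limit
```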