Abstract: Radio frequency (RF) communication has been an important part of civil and military communication for decades. With the increasing complexity of wireless environments and the growing number of devices sharing the spectrum, it has become critical to efficiently manage and classify the signals that populate these frequencies. In such scenarios, the accurate classification of wireless signals is essential for effective spectrum management, signal interception, and interference mitigation. However, the classification of wireless RF signals often faces challenges due to the limited availability of labeled training data, especially under low signal-to-noise ratio (SNR) conditions. To address these challenges, this paper proposes the use of a Vector-Quantized Variational Autoencoder (VQ-VAE) to augment training data, thereby enhancing the performance of a baseline wireless classifier. The VQ-VAE model generates high-fidelity synthetic RF signals, increasing the diversity of the training dataset by capturing the complex variations inherent in RF communication signals. Our experimental results show that incorporating VQ-VAE-generated data significantly improves the classification accuracy of the baseline model, particularly in low SNR conditions. This augmentation leads to better generalization and robustness of the classifier, overcoming the constraints imposed by limited real-world data. By improving RF signal classification, the proposed approach enhances the efficacy of wireless communication in both civil and tactical settings, ensuring reliable and secure operations. This advancement supports critical decision-making and operational readiness in environments where communication fidelity is essential.
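As a rough illustration of the augmentation mechanism referred to above, the following is a minimal sketch of a VQ-VAE bottleneck for IQ data, assuming inputs shaped as two-channel (I/Q) sequences; the layer sizes, codebook dimensions, and loss weights are illustrative assumptions, not the paper's configuration.

# Minimal VQ-VAE bottleneck for IQ signal augmentation (illustrative sketch).
# Assumes inputs shaped (batch, 2, length): I and Q as two channels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VQVAE(nn.Module):
    def __init__(self, num_codes=256, code_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(2, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(64, code_dim, 4, stride=2, padding=1))
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(code_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(64, 2, 4, stride=2, padding=1))

    def forward(self, x):
        z_e = self.encoder(x)                                  # (B, D, L')
        flat = z_e.permute(0, 2, 1).reshape(-1, z_e.size(1))
        dist = torch.cdist(flat, self.codebook.weight)         # distances to all codes
        idx = dist.argmin(dim=1)                               # nearest codebook entry
        z_q = self.codebook(idx).view(z_e.size(0), -1, z_e.size(1)).permute(0, 2, 1)
        z_q_st = z_e + (z_q - z_e).detach()                    # straight-through estimator
        x_hat = self.decoder(z_q_st)
        # Reconstruction + codebook + commitment losses (0.25 is the conventional weight).
        loss = (F.mse_loss(x_hat, x)
                + F.mse_loss(z_q, z_e.detach())
                + 0.25 * F.mse_loss(z_e, z_q.detach()))
        return x_hat, loss

Once trained on the available labeled captures, synthetic signals can be decoded from sampled codebook indices and added to the training set of the baseline classifier.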
Abstract: Deep Reinforcement Learning (DRL) has been highly effective in learning from and adapting to RF environments and thus detecting and mitigating jamming effects to facilitate reliable wireless communications. However, traditional DRL methods are susceptible to catastrophic forgetting (namely forgetting old tasks when learning new ones), especially in dynamic wireless environments where jammer patterns change over time. This paper considers an anti-jamming system and addresses the challenge of catastrophic forgetting in DRL applied to jammer detection and mitigation. First, we demonstrate the impact of catastrophic forgetting in DRL when applied to jammer detection and mitigation tasks, where the network forgets previously learned jammer patterns while adapting to new ones. This catastrophic interference undermines the effectiveness of the system, particularly in scenarios where the environment is non-stationary. We present a method that enables the network to retain knowledge of old jammer patterns while learning to handle new ones. Our approach substantially reduces catastrophic forgetting, allowing the anti-jamming system to learn new tasks without compromising its ability to perform previously learned tasks effectively. Furthermore, we introduce a systematic methodology for sequentially learning tasks in the anti-jamming framework. By leveraging continual DRL techniques based on PackNet, we achieve superior anti-jamming performance compared to standard DRL methods. Our proposed approach not only addresses catastrophic forgetting but also enhances the adaptability and robustness of the system in dynamic jamming environments. We demonstrate the efficacy of our method in preserving knowledge of past jammer patterns, learning new tasks efficiently, and achieving superior anti-jamming performance compared to traditional DRL approaches.
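The PackNet mechanism this abstract builds on can be summarized with the following simplified sketch: after each task (jammer pattern), the largest-magnitude free weights are claimed by that task and the rest are released for future tasks. The function names, pruning ratio, and mask bookkeeping are illustrative assumptions; the actual DRL training loop and post-pruning retraining are omitted.

# PackNet-style continual learning sketch (illustrative, not the exact training procedure).
import torch

# Example setup (illustrative):
# free_mask = {n: torch.ones_like(p, dtype=torch.bool)
#              for n, p in model.named_parameters() if p.dim() >= 2}
# task_masks = {t: {} for t in range(num_tasks)}

def packnet_prune(model, task_id, task_masks, free_mask, keep_ratio=0.5):
    # After training task `task_id` on the still-free weights: keep the largest-magnitude
    # fraction of those weights for this task, and release the rest for future tasks.
    for name, p in model.named_parameters():
        if p.dim() < 2:                               # skip biases / normalization parameters
            continue
        free = free_mask[name]                        # weights not claimed by earlier tasks
        scores = p.detach().abs()[free]
        if scores.numel() == 0:
            continue
        k = max(1, int(keep_ratio * scores.numel()))
        thresh = scores.topk(k).values.min()
        keep = free & (p.detach().abs() >= thresh)    # weights this task keeps (frozen later)
        task_masks[task_id][name] = keep
        free_mask[name] = free & ~keep                # capacity left for future jammer patterns
        p.data[free & ~keep] = 0.0                    # pruned weights are reset before the next task

def apply_task_mask(model, task_id, task_masks, saved_state):
    # At inference for a given jammer pattern, zero every weight not claimed by that task
    # or an earlier one; reload `saved_state` afterwards to evaluate other tasks.
    model.load_state_dict(saved_state)
    for name, p in model.named_parameters():
        allowed = torch.zeros_like(p, dtype=torch.bool)
        for t in range(task_id + 1):
            if name in task_masks[t]:
                allowed |= task_masks[t][name]
        if allowed.any():
            p.data[~allowed] = 0.0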
Abstract: Integrated Sensing and Communication (ISAC) represents a transformative approach within 5G and beyond, aiming to merge wireless communication and sensing functionalities into a unified network infrastructure. This integration offers enhanced spectrum efficiency, real-time situational awareness, cost and energy reductions, and improved operational performance. ISAC provides simultaneous communication and sensing capabilities, enhancing the ability to detect, track, and respond to spectrum dynamics and potential threats in complex environments. In this paper, we introduce I-SCOUT, an innovative ISAC solution designed to uncover moving targets in NextG networks. We specifically repurpose the Positioning Reference Signal (PRS) of the 5G waveform, exploiting its distinctive autocorrelation characteristics for environment sensing. The reflected signals from moving targets are processed to estimate both the range and velocity of these targets using the cross ambiguity function (CAF). We conduct an in-depth analysis of the tradeoff between sensing and communication functionalities, focusing on the allocation of PRSs for ISAC purposes. Our study reveals that the number of PRSs dedicated to ISAC has a significant impact on the system's performance, necessitating a careful balance to optimize both sensing accuracy and communication efficiency. Our results demonstrate that I-SCOUT effectively leverages ISAC to accurately determine the range and velocity of moving targets. Moreover, I-SCOUT is capable of distinguishing between multiple targets within a group, showcasing its potential for complex scenarios. These findings underscore the viability of ISAC in enhancing the capabilities of NextG networks, for both commercial and tactical applications where precision and reliability are critical.
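To make the CAF step concrete, the following numpy sketch correlates a known reference sequence against a received echo over a delay-Doppler grid and reads range and velocity off the peak. The stand-in reference, sample rate, carrier frequency, and Doppler grid are placeholder assumptions, not the paper's PRS configuration.

# Cross ambiguity function (CAF) sketch: the peak over the delay-Doppler surface gives
# range = delay * c / 2 and velocity = doppler * c / (2 * fc).  Parameters are placeholders.
import numpy as np

def caf(ref, rx, fs, doppler_bins):
    n = np.arange(len(ref))
    surface = np.empty((len(doppler_bins), len(rx) - len(ref) + 1))
    for i, fd in enumerate(doppler_bins):
        shifted = ref * np.exp(2j * np.pi * fd * n / fs)          # Doppler-shifted replica
        surface[i] = np.abs(np.correlate(rx, shifted, mode="valid"))
    return surface

fs, fc, c = 61.44e6, 3.5e9, 3e8                          # sample rate, carrier, speed of light
doppler_bins = np.arange(-2000, 2000, 50.0)              # Hz
rng = np.random.default_rng(0)
ref = np.exp(1j * np.pi / 2 * rng.integers(0, 4, 1024))  # random QPSK stand-in for a PRS sequence
delay_true, fd_true = 200, 700.0                         # simulated echo delay (samples) and Doppler (Hz)
rx = np.zeros(4096, dtype=complex)
rx[delay_true:delay_true + 1024] = ref * np.exp(2j * np.pi * fd_true * np.arange(1024) / fs)
surf = caf(ref, rx, fs, doppler_bins)
i, j = np.unravel_index(surf.argmax(), surf.shape)
range_est = j * c / (2 * fs)                             # meters (two-way propagation)
velocity_est = doppler_bins[i] * c / (2 * fc)            # m/s
# Note: with only ~17 us of signal the Doppler resolution is very coarse; practical systems
# integrate coherently over many PRS occasions to resolve target velocity.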
Abstract: This paper presents a deep reinforcement learning (DRL) solution for power control in wireless communications, describes its embedded implementation with WiFi transceivers for a WiFi network system, and evaluates the performance with high-fidelity emulation tests. In a multi-hop wireless network, each mobile node measures its link quality and signal strength, and controls its transmit power. As a model-free solution, reinforcement learning allows nodes to adapt their actions by observing the states and to maximize their cumulative rewards over time. For each node, the state consists of transmit power, link quality, and signal strength; the action adjusts the transmit power; and the reward combines energy efficiency (throughput normalized by energy consumption) and a penalty for changing the transmit power. As the state space is large, Q-learning is hard to implement on embedded platforms with limited memory and processing power. By approximating the Q-values with a deep Q-network (DQN), DRL is implemented for the embedded platform of each node, combining an ARM processor and a WiFi transceiver for 802.11n. Controllable and repeatable emulation tests are performed by inducing realistic channel effects on RF signals. Performance comparison with benchmark schemes of fixed and myopic power allocations shows that power control with DRL provides major improvements to energy efficiency and throughput in WiFi network systems.
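The state, action, and reward described above map naturally onto a small DQN agent; a minimal sketch follows. The network size, penalty weight, discrete power steps, and environment interface are illustrative assumptions (the paper's embedded implementation details are not reproduced here).

# DQN power-control sketch: state = (tx power, link quality, signal strength),
# action = power step {down, hold, up}, reward = energy efficiency minus a penalty
# for changing the transmit power.
import random
from collections import deque
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay, gamma, eps = deque(maxlen=10000), 0.95, 0.1   # each step appends (s, a, r, s2) to replay

def reward(throughput, energy, power_changed, penalty=0.1):
    # Energy efficiency (throughput per unit energy) with a penalty for switching power.
    return throughput / max(energy, 1e-9) - penalty * float(power_changed)

def select_action(state):
    if random.random() < eps:                          # epsilon-greedy exploration
        return random.randrange(3)
    with torch.no_grad():
        return int(q_net(torch.tensor(state, dtype=torch.float32)).argmax())

def train_step(batch_size=32):
    if len(replay) < batch_size:
        return
    s, a, r, s2 = zip(*random.sample(replay, batch_size))
    s = torch.tensor(s, dtype=torch.float32)
    s2 = torch.tensor(s2, dtype=torch.float32)
    q = q_net(s).gather(1, torch.tensor(a).unsqueeze(1)).squeeze(1)
    with torch.no_grad():                              # bootstrapped TD target (no target net, for brevity)
        target = torch.tensor(r, dtype=torch.float32) + gamma * q_net(s2).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()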
Abstract: Deep learning (DL) finds rich applications in the wireless domain to improve spectrum awareness. Typically, the DL models are either randomly initialized following a statistical distribution or pretrained on tasks from other data domains such as computer vision (in the form of transfer learning) without accounting for the unique characteristics of wireless signals. Self-supervised learning enables the learning of useful representations from Radio Frequency (RF) signals themselves even when only limited training data samples with labels are available. We present the first self-supervised RF signal representation learning model and apply it to the automatic modulation recognition (AMR) task by specifically formulating a set of transformations to capture the wireless signal characteristics. We show that the sample efficiency (the number of labeled samples required to achieve a certain accuracy performance) of AMR can be significantly increased (almost an order of magnitude) by learning signal representations with self-supervised learning. This translates to substantial time and cost savings. Furthermore, self-supervised learning increases the model accuracy compared to the state-of-the-art DL methods and maintains high accuracy even when a small set of training data samples is used.
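For intuition, the sketch below shows the kind of signal-domain transformations that can generate two "views" of the same IQ capture as a positive pair for self-supervised pretraining. The specific transformation set and contrastive objective used in the paper may differ; the phase rotation, time shift, and noise levels here are illustrative.

# Sketch of RF-aware transformations for self-supervised representation learning.
import numpy as np

def random_view(iq, rng):
    x = iq.copy()
    x *= np.exp(1j * rng.uniform(0, 2 * np.pi))            # random carrier phase rotation
    x = np.roll(x, rng.integers(0, len(x)))                 # random time shift
    snr_db = rng.uniform(5, 20)                             # additive noise at a random SNR
    noise_pow = np.mean(np.abs(x) ** 2) / 10 ** (snr_db / 10)
    x += np.sqrt(noise_pow / 2) * (rng.standard_normal(len(x))
                                   + 1j * rng.standard_normal(len(x)))
    return x

rng = np.random.default_rng(0)
iq = np.exp(2j * np.pi * 0.01 * np.arange(512))             # placeholder IQ capture
view_a, view_b = random_view(iq, rng), random_view(iq, rng) # positive pair for a contrastive loss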
Abstract: Generative Adversarial Networks (GANs) are Machine Learning (ML) algorithms that have the ability to address competitive resource allocation problems together with detection and mitigation of anomalous behavior. In this paper, we investigate their use in next-generation (NextG) communications within the context of cognitive networks to address i) spectrum sharing, ii) anomaly detection, and iii) mitigation of security attacks. GANs have the following advantages. First, they can learn and synthesize field data, which can be costly, time consuming, and nonrepeatable. Second, they enable pre-training classifiers by using semi-supervised data. Third, they facilitate increased resolution. Fourth, they enable the recovery of corrupted bits in the spectrum. The paper provides the basics of GANs, a comparative discussion on different kinds of GANs, performance measures for GANs in computer vision and image processing as well as wireless applications, a number of datasets for wireless applications, performance measures for general classifiers, a survey of the literature on GANs for i)-iii) above, and future research directions. As a use case of GANs for NextG communications, we show that a GAN can be effectively applied for anomaly detection in signal classification (e.g., user authentication), outperforming another state-of-the-art ML technique, the autoencoder.
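One common way to realize the anomaly-detection use case mentioned above is to train a GAN on signals from legitimate users only and flag a test signal as anomalous when the discriminator assigns it a low "real" score; a minimal sketch follows. The architecture, feature length, and threshold are illustrative assumptions, not the paper's configuration.

# GAN-based anomaly detection sketch (discriminator score as the anomaly test).
import torch
import torch.nn as nn

discriminator = nn.Sequential(
    nn.Linear(256, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid())          # trained jointly with a generator on legitimate signals

def is_anomalous(signal, threshold=0.3):
    # signal: flattened IQ features of length 256 (placeholder representation)
    with torch.no_grad():
        score = discriminator(signal).item()  # discriminator's probability that the sample is "real"
    return score < threshold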
Abstract: An end-to-end communications system based on Orthogonal Frequency Division Multiplexing (OFDM) is modeled as an autoencoder (AE), in which the transmitter (coding and modulation) and receiver (demodulation and decoding) are represented by the encoder and decoder deep neural networks (DNNs), respectively. This AE communications approach is shown to outperform conventional communications in terms of bit error rate (BER) under practical scenarios regarding channel and interference effects as well as training data and embedded implementation constraints. A generative adversarial network (GAN) is trained to augment the training data when not enough training data is available. Also, the performance is evaluated in terms of the DNN model quantization and the corresponding memory requirements for embedded implementation. Then, interference training and randomized smoothing are introduced to train the AE communications to operate under unknown and dynamic interference (jamming) effects on potentially multiple OFDM symbols. Relative to conventional communications, up to 36 dB interference suppression for a channel reuse of four can be achieved by the AE communications with interference training and randomized smoothing. AE communications is also extended to the multiple-input multiple-output (MIMO) case, and its BER performance gain with and without interference effects is demonstrated compared to conventional MIMO communications.
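The end-to-end idea can be illustrated with a minimal sketch: an encoder DNN (transmitter) maps a block of bits to channel symbols, a differentiable AWGN layer stands in for the channel, and a decoder DNN (receiver) recovers the bits, with the whole chain trained jointly. Layer sizes, block length, and SNR are illustrative, and the paper's OFDM modulation, interference, and quantization aspects are omitted for brevity.

# End-to-end autoencoder communications sketch (AWGN channel, no OFDM specifics).
import torch
import torch.nn as nn

k, n = 8, 16                                       # bits per block, real channel uses per block
encoder = nn.Sequential(nn.Linear(k, 64), nn.ReLU(), nn.Linear(64, n))
decoder = nn.Sequential(nn.Linear(n, 64), nn.ReLU(), nn.Linear(64, k))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def awgn(x, snr_db=10.0):
    noise_pow = x.pow(2).mean() / 10 ** (snr_db / 10)
    return x + noise_pow.sqrt() * torch.randn_like(x)

for _ in range(1000):                              # joint training of transmitter and receiver
    bits = torch.randint(0, 2, (128, k)).float()
    tx = encoder(bits)
    tx = tx / tx.pow(2).mean().sqrt()              # average transmit power normalization
    logits = decoder(awgn(tx))                     # pass through the differentiable channel
    loss = nn.functional.binary_cross_entropy_with_logits(logits, bits)
    opt.zero_grad(); loss.backward(); opt.step()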
Abstract: We consider a wireless communication system that consists of a background emitter, a transmitter, and an adversary. The transmitter is equipped with a deep neural network (DNN) classifier for detecting the ongoing transmissions from the background emitter and transmits a signal if the spectrum is idle. Concurrently, the adversary trains its own DNN classifier as the surrogate model by observing the spectrum to detect the ongoing transmissions of the background emitter and to generate adversarial attacks that fool the transmitter into misclassifying the channel as idle. This surrogate model may differ from the transmitter's classifier significantly, because the adversary and the transmitter experience different channels from the background emitter and therefore their classifiers are trained with different distributions of inputs. This system model may represent a setting where the background emitter is a primary user, the transmitter is a secondary user, and the adversary is trying to fool the secondary user into transmitting even though the channel is occupied by the primary user. We consider different topologies to investigate how different surrogate models trained by the adversary (depending on the differences in channel effects experienced by the adversary) affect the performance of the adversarial attack. The simulation results show that surrogate models trained with different distributions of channel-induced inputs severely limit the attack performance, and indicate that the transferability of adversarial attacks is neither readily available nor straightforward to achieve, since surrogate models for wireless applications may differ significantly from the target model depending on channel effects.
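The transferability question above can be pictured with a short sketch: the adversary crafts a perturbation against its own surrogate classifier (FGSM is used here purely as an illustrative attack) and checks whether it also flips the decision of the target classifier, which was trained on differently channel-distorted inputs. The classifiers, feature length, and epsilon are placeholder assumptions.

# Surrogate-vs-target transferability sketch.
import torch
import torch.nn as nn

def make_classifier():
    return nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))

surrogate, target = make_classifier(), make_classifier()   # in practice, each is trained on its own channel-induced inputs

def fgsm(model, x, label, eps=0.05):
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()               # perturbation crafted on the surrogate

x = torch.randn(1, 256)                                     # received spectrum features (placeholder)
busy = torch.tensor([1])                                    # true label: channel busy
x_adv = fgsm(surrogate, x, busy)
transfers = target(x_adv).argmax(1).item() == 0             # does the target now declare "idle"?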
Abstract: We consider a wireless communication system, where a transmitter sends signals to a receiver with different modulation types while the receiver classifies the modulation types of the received signals using its deep learning-based classifier. Concurrently, an adversary transmits adversarial perturbations using its multiple antennas to fool the classifier into misclassifying the received signals. From the adversarial machine learning perspective, we show how to utilize multiple antennas at the adversary to improve the adversarial (evasion) attack performance. Two main points are considered in exploiting the multiple antennas at the adversary, namely the power allocation among antennas and the utilization of channel diversity. First, we show that multiple independent adversaries, each with a single antenna, cannot improve the attack performance compared to a single adversary with multiple antennas using the same total power. Then, we consider various ways to allocate power among the multiple antennas of a single adversary, such as allocating all power to one antenna, or allocating power proportionally or inversely proportionally to the channel gains. By utilizing channel diversity, we introduce an attack that transmits the adversarial perturbation through the channel with the largest channel gain at the symbol level. We show that this attack reduces the classifier accuracy significantly compared to other attacks under different channel conditions in terms of channel variance and channel correlation across antennas. Also, we show that the attack success improves significantly as the number of antennas at the adversary increases, since more antennas allow the adversary to better utilize channel diversity when crafting adversarial perturbations.
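The symbol-level channel-diversity idea can be sketched as follows: at each symbol, the perturbation is sent from the antenna whose instantaneous channel gain to the receiver is largest, under a total perturbation power budget. The Rayleigh channel model, antenna count, and budget are illustrative assumptions, and the perturbation itself (here random) would in practice be crafted with adversarial machine learning.

# Symbol-level antenna selection for the adversarial perturbation.
import numpy as np

rng = np.random.default_rng(0)
num_antennas, num_symbols, power_budget = 4, 128, 1.0
h = (rng.standard_normal((num_antennas, num_symbols))
     + 1j * rng.standard_normal((num_antennas, num_symbols))) / np.sqrt(2)   # Rayleigh fading per antenna/symbol
delta = rng.standard_normal(num_symbols) + 1j * rng.standard_normal(num_symbols)
delta *= np.sqrt(power_budget / np.mean(np.abs(delta) ** 2))                 # meet the total power budget

best = np.abs(h).argmax(axis=0)                            # strongest antenna for each symbol
received_pert = h[best, np.arange(num_symbols)] * delta    # perturbation as seen at the receiver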
Abstract: We consider the problem of hiding wireless communications from an eavesdropper that employs a deep learning (DL) classifier to detect whether any transmission of interest is present or not. There exists one transmitter that transmits to its receiver in the presence of an eavesdropper, while a cooperative jammer (CJ) transmits carefully crafted adversarial perturbations over the air to fool the eavesdropper into classifying the received superposition of signals as noise. The CJ puts an upper bound on the strength of the perturbation signal to limit its impact on the bit error rate (BER) at the receiver. We show that this adversarial perturbation causes the eavesdropper to misclassify the received signals as noise with high probability while increasing the BER only slightly. On the other hand, the CJ cannot fool the eavesdropper by simply transmitting Gaussian noise as in conventional jamming and instead needs to craft perturbation signals built by adversarial machine learning to enable covert communications. Our results show that signals with different modulation types and, ultimately, 5G communications can be effectively hidden from an eavesdropper even if it is equipped with a DL classifier to detect transmissions.
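As a rough illustration of the CJ's crafting step, the sketch below drives the eavesdropper's classifier toward the "noise" class with a simple gradient method and projects the perturbation onto an L2 power budget so its impact on the receiver's BER stays bounded. The classifier, class indexing, step sizes, and budget are illustrative assumptions rather than the paper's construction.

# Cooperative-jammer perturbation sketch (targeted, power-bounded).
import torch
import torch.nn as nn

eavesdropper = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))
NOISE = 0                                              # assumed class indexing: 0 = "noise", 1 = "signal"

def craft_perturbation(rx, budget=0.1, steps=20, lr=0.05):
    delta = torch.zeros_like(rx, requires_grad=True)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(eavesdropper(rx + delta),
                                           torch.tensor([NOISE]))
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad                   # push the decision toward "noise"
            delta *= min(1.0, budget / delta.norm())   # project onto the perturbation power budget
        delta.grad.zero_()
    return delta.detach()

rx = torch.randn(1, 256)                               # superposition observed by the eavesdropper (placeholder)
pert = craft_perturbation(rx)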