Abstract: Automatic modulation classification (AMC) using the Deep Neural Network (DNN) approach outperforms traditional classification techniques, even in the presence of challenging wireless channel environments. However, adversarial attacks cause a loss of accuracy for DNN-based AMC by injecting well-designed perturbations into the wireless channel. In this paper, we propose a novel generative adversarial network (GAN)-based countermeasure approach to safeguard DNN-based AMC systems against adversarial attack examples. The GAN-based defense aims to eliminate the adversarial examples before they are fed to the DNN-based classifier. Specifically, we show the resiliency of our proposed defense GAN against the Fast Gradient Sign Method (FGSM), one of the most potent attack algorithms for crafting perturbed signals. The existing defense-GAN was designed for image classification and does not work in our case, where the above-mentioned communication system is considered. Thus, our proposed countermeasure approach deploys GANs with a mixture of generators to overcome the mode collapse problem that a typical GAN faces in the radio signal classification problem. Simulation results show the effectiveness of our proposed defense GAN, which raises the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
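For context, the FGSM attack referenced above perturbs the input in the direction of the sign of the loss gradient. The following is a minimal sketch in PyTorch, assuming a hypothetical DNN classifier `model` that maps I/Q signal tensors to modulation-class logits; the names, shapes, and perturbation budget are illustrative assumptions, not the paper's implementation.

```python
# Minimal FGSM sketch (PyTorch). `model`, `x`, `y`, and `epsilon` are illustrative
# assumptions: x is a batch of received signals, y the true modulation labels.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Craft an FGSM adversarial example: x_adv = x + eps * sign(grad_x L)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # classification loss on the clean input
    loss.backward()                        # gradient of the loss w.r.t. the input
    x_adv = x + epsilon * x.grad.sign()    # one-step perturbation in the sign direction
    return x_adv.detach()
```

A defense GAN then maps such a perturbed signal back toward the clean-signal manifold before it reaches the classifier.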
Abstract: Differential privacy (DP) is considered a de-facto standard for protecting users' privacy in data analysis, machine learning, and deep learning. Existing DP-based privacy-preserving training approaches consist of adding noise to the clients' gradients before sharing them with the server. However, applying DP to the gradients is not efficient, as the privacy leakage increases with the number of synchronization training epochs due to the composition theorem. Recently, researchers were able to recover images used in the training dataset using a Generative Regression Neural Network (GRNN), even when the gradients were protected by DP. In this paper, we propose a two-layer privacy protection approach to overcome the limitations of existing DP-based approaches. The first layer reduces the dimension of the training dataset based on Hensel's Lemma. We are the first to use Hensel's Lemma for reducing the dimension of (i.e., compressing) a dataset. The new dimensionality reduction method allows reducing the dimension of a dataset without losing information, since Hensel's Lemma guarantees uniqueness. The second layer applies DP to the compressed dataset generated by the first layer. The proposed approach overcomes the problem of privacy leakage due to composition by applying DP only once before training; clients train their local models on the privacy-preserving dataset generated by the second layer. Experimental results show that the proposed approach ensures strong privacy protection while achieving good accuracy. The new dimensionality reduction method achieves an accuracy of 97% with only 25% of the original data size.
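The two-layer idea can be sketched as follows, assuming NumPy arrays. The Hensel's-Lemma-based compression is specific to the paper, so `hensel_compress` below is only a hypothetical placeholder; the second layer is illustrated with a standard Gaussian mechanism applied once to the compressed dataset rather than to per-epoch gradients, which is the point of avoiding composition-based leakage.

```python
# Illustrative sketch of the two-layer approach. `hensel_compress` is a
# hypothetical placeholder for the paper's Hensel's-Lemma-based reduction;
# the DP layer is a generic (epsilon, delta) Gaussian mechanism applied once.
import numpy as np

def hensel_compress(dataset: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the paper's dimensionality reduction (layer 1)."""
    raise NotImplementedError

def gaussian_mechanism(data: np.ndarray, sensitivity: float,
                       epsilon: float, delta: float) -> np.ndarray:
    """Layer 2: add Gaussian noise calibrated to (epsilon, delta)-DP, applied once."""
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    return data + np.random.normal(0.0, sigma, size=data.shape)

# Usage sketch (values are assumptions):
# compressed = hensel_compress(raw_dataset)                       # layer 1
# private    = gaussian_mechanism(compressed, sensitivity=1.0,    # layer 2
#                                 epsilon=1.0, delta=1e-5)
# Clients then train local models on `private`, with no further DP noise per epoch.
```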