Abstract: Federated Learning (FL) facilitates distributed model learning to protect users' privacy. In the absence of labels for a new user's data, knowledge transfer in FL allows the learned global model to adapt to the new samples quickly. Multi-source domain adaptation in FL aims to improve the model's generality in a target domain by learning domain-invariant features from different clients. In this paper, we propose Federated Knowledge Alignment (FedKA), which aligns features from different clients with those of the target task. We identify two types of negative transfer arising in multi-source domain adaptation in FL and demonstrate how FedKA alleviates them with a global feature disentangler enhanced by embedding matching. To further facilitate representation learning of the target task, we devise a federated voting mechanism that labels samples from the target domain via a consensus of queried local models, and we fine-tune the global model with these pseudo-labeled samples. Extensive experiments, including an ablation study, on an image classification task (Digit-Five) and a text sentiment classification task (Amazon Review) show that FedKA can augment existing FL algorithms to improve the generality of the learned model on a new task.
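The federated voting mechanism can be illustrated with a minimal sketch. Assuming PyTorch classifiers, local models vote on unlabeled target samples, and only high-consensus pseudo-labels are used to fine-tune the global model; the function names, the min_agreement threshold, and the training loop below are illustrative assumptions, not the paper's implementation.

    import torch
    import torch.nn.functional as F

    def federated_vote(local_models, x, min_agreement=0.8):
        # Each local model casts a vote (predicted class) for every target sample.
        with torch.no_grad():
            votes = torch.stack([m(x).argmax(dim=1) for m in local_models])  # (K, B)
        labels, _ = votes.mode(dim=0)                      # majority label per sample
        agreement = (votes == labels).float().mean(dim=0)  # fraction of agreeing clients
        keep = agreement >= min_agreement                  # keep high-consensus samples only
        return x[keep], labels[keep]

    def finetune_global(global_model, local_models, target_loader, lr=1e-3):
        # Fine-tune the global model on pseudo-labeled target samples.
        opt = torch.optim.SGD(global_model.parameters(), lr=lr)
        for x in target_loader:                            # target labels are unavailable
            x_kept, pseudo_y = federated_vote(local_models, x)
            if x_kept.shape[0] == 0:
                continue
            loss = F.cross_entropy(global_model(x_kept), pseudo_y)
            opt.zero_grad(); loss.backward(); opt.step()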
Abstract: Phishing emails that appear legitimate lure people into clicking on attached malicious links or documents. Increasingly sophisticated phishing campaigns in recent years call for a detection system more adaptive than traditional signature-based methods. In this regard, natural language processing (NLP) with deep neural networks (DNNs) is adopted to acquire knowledge from a large number of emails. However, due to escalating privacy concerns, such sensitive daily communications containing personal information are difficult to collect on a server for centralized learning in practice. To this end, we propose a decentralized phishing email detection method called Federated Phish Bowl (FPB) that leverages federated learning and long short-term memory (LSTM). FPB enables common knowledge representation and sharing among clients through the aggregation of trained models, safeguarding email security and privacy. We trained the model on a recent phishing email dataset collected from an intergovernmental organization and evaluated its performance under various assumptions about the total number of clients and the level of data heterogeneity. The comprehensive experimental results suggest that FPB is robust to a continually increasing number of clients and to various levels of data heterogeneity, retaining a detection accuracy of 0.83 while protecting the privacy of sensitive email communications.
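The aggregation step underlying FPB can be sketched as FedAvg over a shared LSTM classifier; the architecture, layer sizes, and size-proportional weighting below are illustrative assumptions rather than the paper's exact configuration.

    import copy
    import torch.nn as nn

    class EmailLSTM(nn.Module):
        # A compact LSTM classifier over tokenized email text (hypothetical sizes).
        def __init__(self, vocab_size, embed_dim=100, hidden_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, 2)   # phishing vs. legitimate

        def forward(self, token_ids):
            _, (h, _) = self.lstm(self.embed(token_ids))
            return self.head(h[-1])                # last layer's final hidden state

    def fedavg(client_states, client_sizes):
        # Server-side aggregation: average client state_dicts in proportion
        # to each client's local dataset size.
        total = sum(client_sizes)
        avg = copy.deepcopy(client_states[0])
        for key in avg:
            avg[key] = sum(s[key] * (n / total) for s, n in zip(client_states, client_sizes))
        return avg

Each round, clients fine-tune a copy of the global EmailLSTM on their local emails and return only the resulting weights; the server calls fedavg and redistributes the averaged model, so raw emails never leave a client.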
Abstract: In deep learning systems where intelligent machines collaborate to solve problems, an attack could cause a node in the network to make a mistake on a critical judgment. At the same time, the security and privacy concerns of AI have galvanized the attention of experts from multiple disciplines. In this research, we successfully mounted adversarial attacks on a federated learning (FL) environment using three different datasets. The attacks leveraged generative adversarial networks (GANs) to affect the learning process and strove to reconstruct users' private data by learning hidden features from shared local model parameters. The attacks were target-oriented, drawing data with distinct class distributions from CIFAR-10, MNIST, and Fashion-MNIST, respectively. Moreover, by measuring the Euclidean distance between the real data and the reconstructed adversarial samples, we evaluated the adversary's performance in the learning process under various scenarios. Finally, we successfully reconstructed the victim's real data from the shared global model parameters with all the applied datasets.
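The attack and its Euclidean-distance evaluation can be outlined as follows. Assuming a PyTorch setup, the adversary repurposes the shared global model as a discriminator to steer a generator toward a victim class; the step function and distance metric are a simplified sketch, not the exact attack pipeline.

    import torch
    import torch.nn.functional as F

    def gan_attack_step(generator, global_model, target_class, opt, z_dim=100, batch=64):
        # Push generated samples toward the victim's class as judged by the
        # shared global model, which stands in for the GAN discriminator.
        z = torch.randn(batch, z_dim)
        fake = generator(z)
        target = torch.full((batch,), target_class, dtype=torch.long)
        loss = F.cross_entropy(global_model(fake), target)
        opt.zero_grad(); loss.backward(); opt.step()
        return fake.detach()

    def reconstruction_distance(real, fake):
        # Mean per-sample Euclidean (L2) distance between real data and
        # reconstructed adversarial samples; lower indicates a stronger attack.
        return (real - fake).flatten(start_dim=1).norm(dim=1).mean()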