Abstract:Federated learning (FL) with integrated group fairness mechanisms allows multiple clients to collaboratively train a global model that makes unbiased decisions for different populations grouped by sensitive attributes (e.g., gender and race). Due to its distributed nature, previous studies have demonstrated that FL systems are vulnerable to model poisoning attacks. However, these studies primarily focus on perturbing accuracy, leaving a critical question unexplored: Can an attacker bypass the group fairness mechanisms in FL and manipulate the global model to be biased? The motivations for such an attack vary: an attacker might seek higher accuracy, since fairness considerations typically limit the accuracy of the global model, or might aim to cause ethical disruption. To address this question, we design a novel form of attack in FL, termed Profit-driven Fairness Attack (PFATTACK), which aims not to degrade global model accuracy but to bypass fairness mechanisms. Our fundamental insight is that group fairness seeks to weaken the dependence of outputs on input attributes related to sensitive information. In the proposed PFATTACK, an attacker can recover this dependence through local fine-tuning across various sensitive groups, thereby creating a biased yet accuracy-preserving malicious model and injecting it into FL through model replacement. Compared to attacks targeting accuracy, PFATTACK is more stealthy: the malicious model exhibits subtle parameter variations relative to the original global model, making it robust against detection and filtering by Byzantine-resilient aggregations. Extensive experiments on benchmark datasets are conducted for four fair FL frameworks and three Byzantine-resilient aggregations against model poisoning, demonstrating the effectiveness and stealth of PFATTACK in bypassing group fairness mechanisms in FL.
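The model replacement step named above can be illustrated with a minimal sketch, assuming a FedAvg-style server that weights client updates by data size and a single attacker-controlled client; the function and variable names (craft_replacement_update, n_total, n_attacker) are illustrative and not taken from the paper:

```python
import copy
import torch

def craft_replacement_update(global_model, malicious_model, n_total, n_attacker, server_lr=1.0):
    """Sketch of model replacement in FL poisoning (assumed FedAvg-style weighting).

    The attacker submits an update scaled so that, after weighted averaging,
    the aggregated global model is (approximately) the fine-tuned malicious model.
    """
    scale = n_total / (n_attacker * server_lr)
    crafted = copy.deepcopy(global_model)
    with torch.no_grad():
        for p_c, p_mal, p_glob in zip(crafted.parameters(),
                                      malicious_model.parameters(),
                                      global_model.parameters()):
            # report G_t + scale * (X - G_t); the server's n_attacker / n_total weight cancels the scale
            p_c.copy_(p_glob + scale * (p_mal - p_glob))
    return crafted
```

In this sketch, a PFATTACK-style malicious model differs only subtly from the current global model, so the crafted update remains small and is harder for Byzantine-resilient aggregations to filter.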
Abstract:Recent advances in federated learning (FL) enable collaborative training of machine learning (ML) models from large-scale and widely dispersed clients while protecting their privacy. However, when different clients' datasets are heterogeneous, traditional FL mechanisms produce a global model that does not adequately represent the poorer clients with limited data resources, resulting in lower accuracy and higher bias on their local data. According to the Matthew effect, which describes how the advantaged gain more advantage and the disadvantaged lose more over time, deploying such a global model in client applications may worsen the resource disparity among the clients and harm the principles of social welfare and fairness. To mitigate the Matthew effect, we propose Egalitarian Fairness Federated Learning (EFFL), where egalitarian fairness means that the global model learned through FL achieves: (1) equal accuracy among clients; and (2) equal decision bias among clients. Besides achieving egalitarian fairness among the clients, EFFL also aims for performance optimality, minimizing the empirical risk loss and the bias for each client; both are essential for any ML model training, whether centralized or decentralized. We formulate EFFL as a multi-constrained multi-objective optimization (MCMOO) problem, with the decision bias and egalitarian fairness as constraints and the minimization of the empirical risk losses on all clients as multiple objectives. We propose a gradient-based three-stage algorithm to obtain Pareto-optimal solutions within the constraint space. Extensive experiments demonstrate that EFFL outperforms other state-of-the-art FL algorithms in achieving a high-performance global model with enhanced egalitarian fairness among all clients.
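As a hedged illustration (the notation below is ours, not verbatim from the paper), the MCMOO formulation described above can be written with f_k denoting the empirical risk of client k, b_k its decision bias, and \epsilon_b, \epsilon_a, \epsilon_d the assumed budgets on per-client bias, accuracy disparity, and bias disparity:

```latex
\begin{aligned}
\min_{\theta}\quad & \bigl(f_1(\theta),\, f_2(\theta),\, \dots,\, f_K(\theta)\bigr) \\
\text{s.t.}\quad   & b_k(\theta) \le \epsilon_b, \qquad k = 1, \dots, K, \\
                   & \max_{i,j} \bigl| f_i(\theta) - f_j(\theta) \bigr| \le \epsilon_a, \\
                   & \max_{i,j} \bigl| b_i(\theta) - b_j(\theta) \bigr| \le \epsilon_d,
\end{aligned}
```

where the last two constraints encode egalitarian fairness (equal accuracy and equal decision bias among clients) and the objectives jointly minimize the empirical risk losses on all clients.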
Abstract:In this paper, we present DBMark, a new end-to-end digital image watermarking framework designed to deeply boost the robustness of DNN-based image watermarking. The key novelty is the synergy between Invertible Neural Networks (INNs) and effective watermark feature generation. The framework generates watermark features with redundancy and error-correction ability through message processing, and combines them with the powerful information embedding and extraction capabilities of INNs to achieve higher robustness and invisibility. Extensive experimental results demonstrate the superiority of the proposed framework over state-of-the-art methods under various distortions.
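A minimal sketch of the kind of invertible coupling block used for INN-based embedding and extraction is given below, assuming an image branch and a processed watermark-feature branch with the same number of channels; this is a generic coupling layer, and DBMark's actual architecture and message processing may differ:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Generic invertible coupling block (illustrative, not DBMark's exact design).

    forward() embeds information from the watermark branch into the image branch;
    inverse() recovers the original image branch exactly from the embedded output.
    """
    def __init__(self, channels, hidden=64):
        super().__init__()
        def subnet():
            return nn.Sequential(nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(hidden, channels, 3, padding=1))
        self.scale, self.shift = subnet(), subnet()

    def forward(self, x_img, x_wm):
        # the image branch is transformed conditioned on the watermark-feature branch
        y_img = x_img * torch.exp(self.scale(x_wm)) + self.shift(x_wm)
        return y_img, x_wm

    def inverse(self, y_img, y_wm):
        x_img = (y_img - self.shift(y_wm)) * torch.exp(-self.scale(y_wm))
        return x_img, y_wm
```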
Abstract:Pedestrian trajectory forecasting is a fundamental task in multiple application areas, such as self-driving, autonomous robots, and surveillance systems. Future trajectory forecasting is multi-modal, influenced by physical interaction with scene contexts and by intricate social interactions among pedestrians. Most existing literature learns representations of social interactions with deep networks, while explicit interaction patterns are not utilized. Different interaction patterns, such as following or collision avoidance, generate different trends in the next movement; thus, awareness of social interaction patterns is important for trajectory forecasting. Moreover, social interaction patterns raise privacy concerns or lack labels. To jointly address these issues, we present a social-dual conditional variational auto-encoder (Social-DualCVAE) for multi-modal trajectory forecasting, based on a generative model conditioned not only on the past trajectories but also on an unsupervised classification of interaction patterns. After generating the category distribution of the unlabeled social interaction patterns, DualCVAE, conditioned on the past trajectories and the social interaction pattern, is proposed for multi-modal trajectory prediction via latent-variable estimation. A variational bound is derived as the minimization objective during training. The proposed model is evaluated on widely used trajectory benchmarks and outperforms prior state-of-the-art methods.
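The variational bound mentioned above is, in essence, a conditional evidence lower bound; a sketch in our own notation (x = observed past trajectory, y = future trajectory, c = inferred interaction-pattern category, z = latent variable) is:

```latex
\log p_\theta(y \mid x, c) \;\ge\;
\mathbb{E}_{q_\phi(z \mid x, y, c)}\!\left[\log p_\theta(y \mid z, x, c)\right]
\;-\; D_{\mathrm{KL}}\!\left(q_\phi(z \mid x, y, c)\,\Vert\, p_\theta(z \mid x, c)\right),
```

and training minimizes the negative of this bound, with multi-modal forecasts typically obtained at test time by sampling z from the conditional prior.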