Abstract: Ensuring that Large Language Models (LLMs) align with diverse human preferences while preserving privacy and fairness remains a challenge. Existing methods, such as Reinforcement Learning from Human Feedback (RLHF), rely on centralized data collection, making them computationally expensive and privacy-invasive. We introduce PluralLLM, a federated learning-based approach that enables multiple user groups to collaboratively train a transformer-based preference predictor without sharing sensitive data; the predictor can also serve as a reward model for aligning LLMs. Our method leverages Federated Averaging (FedAvg) to aggregate preference updates efficiently, achieving 46% faster convergence, a 4% improvement in alignment scores, and nearly the same group fairness measure as centralized training. Evaluated on a Q/A preference alignment task, PluralLLM demonstrates that federated preference learning offers a scalable, privacy-preserving alternative for aligning LLMs with diverse human values.
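For reference, the FedAvg aggregation step the abstract mentions amounts to a sample-weighted average of client parameters (McMahan et al., 2017). The sketch below is a minimal illustration under that standard formulation; the function and variable names (`fedavg`, `client_updates`) are illustrative, and the paper's actual training loop for the transformer preference predictor is not reproduced here.

```python
# Minimal FedAvg sketch: the server averages client weights, weighting
# each client by its local sample count. Assumes each client's update
# arrives as a dict of NumPy arrays with matching shapes.
import numpy as np

def fedavg(client_updates):
    """client_updates: list of (num_samples, weights) pairs, where
    weights maps parameter names to NumPy arrays."""
    total = sum(n for n, _ in client_updates)
    keys = client_updates[0][1].keys()
    return {
        k: sum((n / total) * w[k] for n, w in client_updates)
        for k in keys
    }

# Toy usage: three user groups with different amounts of preference data.
rng = np.random.default_rng(0)
updates = [
    (100, {"linear.weight": rng.normal(size=(4, 4))}),
    (50,  {"linear.weight": rng.normal(size=(4, 4))}),
    (150, {"linear.weight": rng.normal(size=(4, 4))}),
]
global_weights = fedavg(updates)
print(global_weights["linear.weight"].shape)  # (4, 4)
```

Because only these parameter updates leave each group, raw preference data never reaches the server, which is what makes the approach privacy-preserving relative to centralized RLHF-style collection.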
Abstract: Ensuring fairness in machine learning, particularly in human-centric applications, extends beyond algorithmic bias to encompass fairness in privacy, specifically the equitable distribution of privacy risk. This is critical in federated learning (FL), where decentralized data necessitates balanced privacy preservation across clients. We introduce FinP, a framework designed to achieve fairness in privacy by mitigating disproportionate exposure to source inference attacks (SIA). FinP employs a dual approach: (1) server-side adaptive aggregation to address unfairness in client contributions to the global model, and (2) client-side regularization to reduce each client's vulnerability. This comprehensive strategy targets both the symptoms and root causes of privacy unfairness. Evaluated on the Human Activity Recognition (HAR) and CIFAR-10 datasets, FinP achieves an approximately 20% improvement in fairness in privacy on HAR with minimal impact on model utility and effectively mitigates SIA risks on CIFAR-10, demonstrating that FL systems can provide fairness in privacy without compromising performance.
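To make the dual approach concrete, one plausible instantiation is sketched below. The inverse-vulnerability reweighting on the server and the proximal-style regularizer on the client are assumptions chosen for illustration, not FinP's published formulation; `vulnerability` scores (e.g., per-client SIA success estimates) are assumed to be computed elsewhere.

```python
# Hedged sketch of a server-side adaptive aggregation plus client-side
# regularization scheme, in the spirit of FinP's dual approach.
import numpy as np

def adaptive_aggregate(client_weights, vulnerability):
    """Server side: down-weight clients whose updates are most exposed
    to source inference, so no single client's contribution dominates
    (and leaks through) the global model."""
    # Higher estimated vulnerability -> smaller aggregation coefficient.
    coeffs = 1.0 / (1e-8 + np.asarray(vulnerability, dtype=float))
    coeffs = coeffs / coeffs.sum()
    keys = client_weights[0].keys()
    return {k: sum(c * w[k] for c, w in zip(coeffs, client_weights))
            for k in keys}

def regularized_loss(task_loss, local_params, global_params, lam=0.1):
    """Client side: penalize divergence from the global model so local
    updates carry less client-identifying signal."""
    prox = sum(np.sum((local_params[k] - global_params[k]) ** 2)
               for k in local_params)
    return task_loss + lam * prox
```

The intent of the split mirrors the abstract: the server-side rule treats the symptom (uneven exposure in the aggregate), while the client-side term addresses a root cause (updates that are too distinctive to their source).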