Federated learning is a prominent framework that enables clients (e.g., mobile devices or organizations) to collaboratively train a global model under a central server's orchestration while keeping local training datasets private. However, the aggregation step in federated learning is vulnerable to adversarial attacks because the central server cannot control clients' behavior; consequently, the global model's performance and the convergence of the training process suffer under such attacks. To mitigate this vulnerability, we propose a novel robust aggregation algorithm, inspired by truth inference methods in crowdsourcing, that incorporates each worker's reliability into the aggregation. We evaluate our solution on three real-world datasets with a variety of machine learning models. Experimental results show that our solution ensures robust federated learning and is resilient to various types of attacks, including noisy data attacks, Byzantine attacks, and label flipping attacks.
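To make the idea of reliability-aware aggregation concrete, the sketch below shows one way such a scheme could look in the spirit of iterative truth-inference methods: the server alternates between aggregating client updates with the current reliability weights and re-estimating each client's reliability from how far its update lies from the aggregate. The function name, the inverse-distance weighting, and the synthetic example are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def reliability_weighted_aggregate(client_updates, num_iters=10, eps=1e-8):
    """Jointly estimate a global update and per-client reliabilities.

    client_updates: array of shape (n_clients, n_params), one flattened
    model update per client. (Hypothetical interface for illustration.)
    """
    updates = np.asarray(client_updates, dtype=float)
    n_clients = updates.shape[0]
    # Start from uniform reliabilities (equivalent to plain FedAvg).
    weights = np.full(n_clients, 1.0 / n_clients)
    for _ in range(num_iters):
        # Aggregation step: reliability-weighted mean of client updates.
        global_update = weights @ updates
        # Reliability step: clients whose updates lie far from the current
        # aggregate are deemed less reliable (inverse-distance weighting,
        # an assumption standing in for the paper's truth-inference rule).
        dists = np.linalg.norm(updates - global_update, axis=1)
        weights = 1.0 / (dists + eps)
        weights /= weights.sum()
    return global_update, weights

# Usage example: three honest clients plus one Byzantine client sending
# large random noise; the outlier should end up with a low reliability.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(3, 5))
byzantine = rng.normal(loc=10.0, scale=5.0, size=(1, 5))
agg, rel = reliability_weighted_aggregate(np.vstack([honest, byzantine]))
print("reliabilities:", rel)
```

The design intuition is that honest clients' updates cluster together while attacked or noisy updates deviate, so alternating between aggregation and reliability estimation progressively discounts adversarial contributions.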