Optimal Uncertainty Budget Allocation for Robust Federated Learning under Byzantine Attacks
Abstract
Federated learning (FL) enables decentralized training of machine learning models while safeguarding user privacy. However, the presence of Byzantine adversaries introduces significant vulnerabilities, as malicious clients can disrupt the learning process by submitting misleading updates. This paper addresses the challenge of allocating an uncertainty budget across heterogeneous clients to enhance the robustness of federated learning systems against such adversarial attacks. We introduce the Uncertainty Budget Allocation Problem (UBAP) and formulate it as a mixed-integer nonlinear program (MINLP) that optimizes the distribution of the budget across clients to improve model convergence and stability. Our framework rethinks traditional assumptions about client contributions and provides a mathematical analysis of the relationship between uncertainty allocation and adversarial strength. Extensive empirical evaluations on standard benchmarks demonstrate substantial improvements in model performance and attack resistance, showcasing the practical efficacy of our approach. These results underscore the importance of optimal uncertainty budget allocation for building resilient federated learning systems and for strengthening the security of decentralized AI applications.
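For orientation only, a budget-allocation MINLP of the kind referenced above typically takes a form like the following sketch, in which b_i denotes the uncertainty budget assigned to client i, z_i is a binary inclusion variable, B is the total budget, b_max is a per-client cap, and f is a (generally nonlinear) robustness objective; these symbols and constraints are illustrative assumptions, not the paper's actual formulation, which is not reproduced in this abstract.

\begin{align}
\max_{b,\, z} \quad & f\bigl(b_1 z_1, \dots, b_n z_n\bigr) \\
\text{s.t.} \quad & \sum_{i=1}^{n} b_i z_i \le B, \\
& 0 \le b_i \le b_{\max}, \quad z_i \in \{0, 1\}, \quad i = 1, \dots, n.
\end{align}

The coupling of continuous budgets b_i with binary selection variables z_i under a nonlinear objective is what makes such a problem a MINLP rather than a standard convex program.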