Poison-Resilient and Privacy-Preserving Federated Learning Scheme in Mobile Systems
Abstract
Mobile systems, including smartphones and IoT devices, generate massive volumes of high-value data, while conventional centralized data collection and analysis suffer from security and privacy vulnerabilities. Federated learning, an emerging machine learning paradigm, collaboratively trains a high-performance model by having participants share local model updates rather than raw data, thereby preserving the privacy of their local datasets and providing a new approach for the secure extraction of data value in mobile systems. However, malicious participants may inject carefully crafted poisoned samples into their local datasets with the intent of disrupting the convergence of the global model or inducing targeted misclassification. Consequently, identifying such malicious participants is of critical importance in federated learning. To address these challenges, this paper proposes a poison-resilient and privacy-preserving federated learning scheme for mobile systems. The scheme not only detects poisoning attacks in both vertical and horizontal federated learning, but also removes prior assumptions about client data distributions and restrictions on the proportion of adversarial participants. In addition, a convergence control parameter is introduced to regulate the model's convergence rate. The security, correctness, fairness, and robustness of the proposed scheme are formally analyzed and rigorously proven. Experimental results demonstrate that the scheme detects poisoning attacks effectively while maintaining high accuracy and model-training efficiency.
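To illustrate the federated learning workflow described above, where participants share only model updates and the server never sees raw data, the following is a minimal FedAvg-style sketch. The toy one-parameter linear model, the function names, and the learning rate are illustrative assumptions, not the paper's actual scheme.

```python
# Hypothetical sketch of federated averaging: each client trains locally
# and shares only its updated weight; the server aggregates the updates.

def local_update(weights, data, lr=0.1):
    """One gradient step of a 1-D linear model y = w*x on local data."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad  # only the updated weight leaves the client

def federated_average(updates, sizes):
    """Server-side weighted average of client updates (raw data unseen)."""
    total = sum(sizes)
    return sum(u * n for u, n in zip(updates, sizes)) / total

# Toy training run: two honest clients whose data fits y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w_global = 0.0
for _ in range(50):  # communication rounds
    updates = [local_update(w_global, d) for d in clients]
    w_global = federated_average(updates, [len(d) for d in clients])

print(round(w_global, 3))  # converges toward 2.0
```

A poisoning adversary in this setting would corrupt its local data (or update) to steer `w_global` away from 2.0; the paper's scheme aims to detect such participants without assumptions on data distributions or on how many clients are adversarial.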