An Efficient Collusion-Resistant and Dropout-Tolerant Federated Learning Secure Aggregation Scheme Based on RLWE

Abstract

Federated learning (FL), as a distributed machine learning paradigm, is of great value for protecting data privacy, but existing FL schemes based on homomorphic encryption and differential privacy cannot simultaneously achieve high efficiency, high accuracy, and collusion resistance. In this paper, we propose RLFL, an efficient privacy-preserving federated learning scheme based on ring learning with errors (RLWE), which realises a federated learning framework that resists collusion attacks and tolerates client dropout by combining the cryptographic properties of RLWE with secure multi-party computation. The specific innovations are as follows:

1. High-bit encoding is introduced to reduce the impact of encryption noise to a negligible level, yielding a 3.13% accuracy improvement over traditional LWE-based schemes on the MNIST dataset.
2. A secure aggregation protocol is designed using the additive homomorphism of Shamir's Secret Sharing (SS), combined with a \((t,k)\) threshold mechanism that ensures the gradients of honest clients cannot be recovered even when up to \(t-1\) malicious clients collude, where \(k\) is the number of participating clients.
3. Communication efficiency is improved by combining RLWE with the Number Theoretic Transform (NTT) and using coefficient encoding; in scenarios with more than 10,000 clients the communication overhead is only 26.7% of the original scheme, and training is 2.3 times faster than the FLDP scheme.

The experimental results show that the accuracy of RLFL on the MNIST, FMNIST, CIFAR-10, and SVHN datasets reaches 91.45%, 79.56%, 70.04%, and 57.04%, respectively, an improvement of 3.13%, 3.45%, 2.81%, and 2.87% over the FLDP scheme. In the 500-client scenario, the total training time of RLFL is 700.71 seconds, 47% lower than that of FLDP (1320.54 seconds), and the communication overhead is only 26.7% of that of the LWE-based scheme. The security analysis shows that the scheme resists collusion attacks under the IND-CPA security model and ensures the integrity of the data aggregation process through a verifiable secret sharing mechanism, providing an efficient and secure solution for federated learning applications in sensitive domains such as healthcare and finance.
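The abstract summarises rather than specifies the aggregation protocol, so the following is a minimal Python sketch, not the paper's implementation, illustrating the additive homomorphism of Shamir's (t, k) secret sharing on which the secure gradient aggregation relies. The prime modulus, the threshold parameters, and the scalar "gradient" values are hypothetical choices made only for this example.

import random

P = 2**61 - 1  # large prime field modulus (hypothetical choice for the sketch)

def share(secret, t, k):
    """Split `secret` into k shares; any t of them suffice to reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Share j is the degree-(t-1) polynomial evaluated at x = j, for j = 1..k.
    return [(j, sum(c * pow(j, e, P) for e, c in enumerate(coeffs)) % P)
            for j in range(1, k + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, -1, P)) % P
    return total

# Two clients secret-share their (quantised) gradient values.
t, k = 3, 5
g1, g2 = 42, 17
shares1, shares2 = share(g1, t, k), share(g2, t, k)

# Holders add the shares they received at the same evaluation point; the summed
# shares lie on a degree-(t-1) polynomial whose constant term is g1 + g2.
summed = [(x1, (y1 + y2) % P) for (x1, y1), (_, y2) in zip(shares1, shares2)]

assert reconstruct(summed[:t]) == (g1 + g2) % P  # aggregate recovered without exposing g1 or g2

Because reconstruction needs at least t summed shares, fewer than t colluding parties learn nothing about an individual client's gradient, which is the collusion-resistance property the (t, k) threshold provides in the abstract's item 2.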
