Verifiable Secure Aggregation Scheme for Privacy Protection in Federated Learning

Abstract

Federated learning enables multiple participants to construct a distributed machine learning system coordinated by a server. Most existing solutions assume a semi-honest setting, treating each participant as honest but curious, which does not match complex real-world environments: servers may be malicious and may tamper with or forge aggregation results. To verify the integrity of the server's aggregation while protecting client privacy, this paper introduces a privacy-preserving verifiable secure aggregation scheme for federated learning networks. First, we construct a functional-reuse private key ring generation algorithm that lets clients encrypt and protect their private gradients with the private key ring. Second, building on the hardness of the discrete logarithm problem, we design a commitment protocol in which clients commit to their encrypted private gradients; upon receiving the aggregation result from the server, the clients collaboratively open the commitments and thereby verify the result. Security analysis shows that our scheme effectively ensures privacy protection. We simulated consumer electronic devices on a Raspberry Pi and evaluated the scheme's performance. With 100 clients, the proof-generation and verification overheads of our scheme are 39.9% and 34.1%, respectively, of those of an existing scheme, highlighting its lightweight nature.
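The paper does not spell out the commitment construction in this abstract, but the idea of a discrete-log-based commitment whose homomorphic property lets clients check the server's claimed aggregate can be sketched with a standard Pedersen-style commitment. The toy group parameters, variable names, and helper `commit` below are illustrative assumptions, not the paper's actual protocol; real deployments use large prime-order groups and a second generator `h` whose discrete log relative to `g` is unknown.

```python
import secrets

# Toy group parameters (illustration only; real schemes use large groups
# and an h whose discrete log base g is unknown to all parties).
p = 23   # modulus
q = 11   # order of the subgroup generated by g
g = 4    # generator of the order-11 subgroup mod 23
h = 9    # second generator of the same subgroup

def commit(m: int, r: int) -> int:
    """Pedersen-style commitment C = g^m * h^r mod p (hypothetical helper)."""
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

# Each client commits to its (already encrypted) gradient contribution.
gradients = [3, 5, 7]
randomness = [secrets.randbelow(q) for _ in gradients]
commitments = [commit(m, r) for m, r in zip(gradients, randomness)]

# Homomorphic check: the product of the commitments opens to the sum of
# the committed values, so clients can verify the server's claimed sum.
claimed_sum = sum(gradients)      # value reported by the server
total_r = sum(randomness)         # clients pool their randomness to open
product = 1
for c in commitments:
    product = (product * c) % p
assert product == commit(claimed_sum, total_r)  # aggregation verified
```

The binding property rests on the discrete logarithm assumption mentioned in the abstract: a malicious server that forges a different sum cannot produce randomness that opens the product of commitments to the forged value.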
