Quantization-Based Chained Privacy-Preserving Federated Learning


Abstract

Federated Learning (FL) is a distributed machine learning framework that plays a crucial role in protecting data privacy and security. By enabling multiple participants to collaboratively train models while keeping their data local, FL effectively mitigates the risks associated with centralized storage and sharing of raw data. However, traditional FL schemes face significant challenges in communication efficiency, computational cost, and privacy preservation. In edge computing scenarios, for instance, their communication and computational overhead is often prohibitively high, hindering real-time applications. This paper proposes Q-Chain FL, a federated learning framework that integrates quantization-based compression into a chained FL architecture. In Q-Chain FL, user nodes efficiently compress and transmit model parameter differences, while the server node seamlessly decompresses and aggregates them. Experiments on several publicly available datasets, including MNIST, CIFAR-10, and CelebA, demonstrate that Q-Chain FL achieves low communication and computational overhead, fast convergence, and strong privacy protection. Compared with traditional FedAvg and Chain-PPFL, Q-Chain FL reduces communication overhead by approximately 62.5% and 44.7%, respectively. These results underscore the robustness and adaptability of Q-Chain FL across diverse datasets and real-world learning scenarios.
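The abstract does not give implementation details, so the following is only a minimal sketch of the core idea: uniformly quantizing parameter differences at the client before transmission and dequantizing and aggregating at the server. The helper names (quantize_delta, dequantize_delta, server_aggregate), the 8-bit uniform quantizer, and the simple averaging step are all illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def quantize_delta(delta, num_bits=8):
    """Uniformly quantize a parameter-difference tensor to num_bits integers.

    Returns the quantized integer array along with the (scale, offset)
    metadata needed for dequantization. Hypothetical helper, not from
    the paper.
    """
    qmax = 2 ** num_bits - 1
    lo, hi = float(delta.min()), float(delta.max())
    scale = (hi - lo) / qmax if hi > lo else 1.0
    q = np.round((delta - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize_delta(q, scale, lo):
    """Reconstruct an approximate float tensor from its quantized form."""
    return q.astype(np.float32) * scale + lo

def server_aggregate(quantized_updates, num_clients):
    """Dequantize each client's compressed parameter difference and
    average them, as a stand-in for the server-side aggregation step."""
    total = None
    for q, scale, lo in quantized_updates:
        d = dequantize_delta(q, scale, lo)
        total = d if total is None else total + d
    return total / num_clients

# Toy usage: three clients compress their local parameter deltas,
# and the server reconstructs the averaged update.
deltas = [np.random.randn(1000).astype(np.float32) for _ in range(3)]
compressed = [quantize_delta(d) for d in deltas]
avg_update = server_aggregate(compressed, num_clients=3)
```

In the chained architecture described by the abstract, such compressed differences would be passed along a chain of user nodes rather than sent directly to a central server; the sketch above shows only the compression and aggregation endpoints.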
