Privacy-Preserving and Communication-Efficient Federated Learning for Cloud-Scale Distributed Intelligence

Abstract

This study addresses privacy protection and multi-party collaborative optimization in cloud computing environments. It proposes a federated learning framework that integrates a differential privacy mechanism with a communication compression strategy. The framework adopts a layered architecture consisting of local computing nodes, a compression module, and a privacy-enhancing module, enabling global model training without exposing raw data while preserving both model performance and data security. Training uses the federated averaging algorithm for global aggregation, and a Gaussian noise perturbation mechanism is introduced to strengthen the model's resistance to inference attacks. To cope with bandwidth limitations in practical cloud computing scenarios, a lightweight communication compression strategy is designed to reduce the overhead and synchronization pressure caused by parameter exchange. The experimental design includes sensitivity analyses across multiple dimensions, including network bandwidth constraints, client count variation, and data distribution heterogeneity, which validate the adaptability and robustness of the proposed method in complex scenarios. The results show that the method outperforms existing approaches on several key metrics, including accuracy, communication rounds, and model size. The proposed approach demonstrates strong engineering deployability and system-level security, providing a novel technical path for building efficient and trustworthy distributed intelligent systems.
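To make the combination of federated averaging, Gaussian noise perturbation, and communication compression concrete, the following is a minimal sketch of one training round. It is not the authors' implementation; all names and parameters (local_update, clip_norm, noise_multiplier, topk_ratio) are illustrative assumptions, and the compression step is shown as simple top-k sparsification, which is one common lightweight strategy.

```python
# Sketch of one federated round: clients compute local updates, clip and
# perturb them with Gaussian noise (differential-privacy style), sparsify
# them to cut upload cost, and the server averages the results (FedAvg).
# Hypothetical names; not the paper's actual API.
import numpy as np


def clip_and_noise(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip the update to an L2 ball of radius clip_norm, then add Gaussian
    noise scaled to the clipping norm."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise


def topk_sparsify(update, topk_ratio=0.1):
    """Keep only the largest-magnitude entries of the update; the rest are
    zeroed so that only a small fraction needs to be transmitted."""
    k = max(1, int(topk_ratio * update.size))
    flat = update.ravel().copy()
    threshold = np.partition(np.abs(flat), -k)[-k]
    flat[np.abs(flat) < threshold] = 0.0
    return flat.reshape(update.shape)


def fedavg_round(global_weights, client_data, local_update,
                 clip_norm=1.0, noise_multiplier=1.0, topk_ratio=0.1):
    """One communication round: each client trains locally, privatizes and
    compresses its model delta, and the server averages the deltas."""
    rng = np.random.default_rng(0)
    aggregated = np.zeros_like(global_weights)
    for data in client_data:
        delta = local_update(global_weights, data) - global_weights
        delta = clip_and_noise(delta, clip_norm, noise_multiplier, rng)
        delta = topk_sparsify(delta, topk_ratio)
        aggregated += delta
    # Plain (unweighted) FedAvg; a data-size-weighted mean is also common.
    return global_weights + aggregated / len(client_data)
```

In this sketch the noise standard deviation is tied to the clipping norm, which is the usual way of calibrating Gaussian perturbation to a bounded per-client sensitivity, and the top-k step only changes what is transmitted, not how the server aggregates.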
