Advancing Privacy-Preserving AI: A Survey on Federated Learning and Its Applications

Abstract

Federated Learning (FL) has emerged as a transformative approach to distributed machine learning, enabling the collaborative training of models across decentralized and private datasets. Unlike traditional centralized learning paradigms, FL ensures data privacy by keeping raw data localized on client devices while leveraging aggregated updates to build global models. This survey explores the critical aspects of efficient federated learning, including communication reduction, robustness to system and data heterogeneity, and scalability in real-world applications. We discuss key techniques such as model compression, asynchronous updates, personalized learning, and robust aggregation to address challenges posed by resource-constrained devices, non-IID data distributions, and adversarial environments. Applications of FL across diverse domains, including healthcare, finance, smart cities, and autonomous systems, highlight its potential to transform industries while preserving privacy and compliance with regulatory frameworks. The survey also identifies open challenges in scalability, privacy guarantees, fairness, and ethical considerations, providing future research directions to address these gaps. As FL continues to evolve, it holds the promise of enabling privacy-preserving, collaborative intelligence on a global scale, fostering innovation while addressing critical societal and technical challenges.
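The aggregation mechanism described above, local training on private client data followed by server-side averaging of updates, is most commonly instantiated as Federated Averaging (FedAvg). The following is a minimal, self-contained sketch of that scheme; the linear model, client counts, and synthetic non-IID data are illustrative assumptions, not any specific system from the survey.

```python
# Minimal sketch of Federated Averaging (FedAvg): each client trains
# locally on its private data, and only model parameters -- never raw
# data -- are sent to the server for aggregation.
# (Model, data, and hyperparameters here are illustrative assumptions.)

import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local step: gradient descent on a linear model
    with squared loss. The raw (X, y) never leave the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average client models weighted by
    local dataset size."""
    total = sum(client_sizes)
    return sum(n / total * w for w, n in zip(client_weights, client_sizes))

# Simulated private client datasets (each client holds its own slice).
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

# Communication rounds: broadcast global model, collect local updates,
# aggregate. Only weight vectors cross the network.
global_w = np.zeros(2)
for _ in range(20):
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])
```

In this sketch the global model converges toward `true_w` without the server ever seeing a client's raw data; techniques the survey covers, such as model compression and robust aggregation, would modify the update payloads and the `fed_avg` step respectively.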
