A Comprehensive Survey of Federated Learning for Edge AI: Recent Trends and Future Directions

Abstract

Federated learning (FL) has emerged as a key enabler of privacy-preserving, distributed intelligence, yet most deployments and surveys remain cloud-centric. This paper presents a unified, forward-looking survey of federated learning and Edge AI beyond the centralized cloud, with an emphasis on work from roughly 2021–2025. We organize and analyze over 200 papers spanning cross-device, cross-silo, edge-only, decentralized, and split/hybrid designs. First, we introduce the foundations of cloud–fog–edge computing, multi-tier network hierarchies, and edge/on-device AI, and formalize the FL optimization problem and its main variants. We then develop a taxonomy of FL architectures for Edge AI, comparing coordination topologies, aggregation locality, trust and failure models, and communication complexity. Next, we synthesize cross-layer challenges—including statistical and system heterogeneity, stragglers, communication and energy efficiency, and privacy, security, and trust—and review enabling techniques such as personalization, client selection, compression, green FL, and secure and robust aggregation. We survey hardware accelerators, TinyML platforms, neuromorphic and processing-in-memory prototypes, and major FL/edge AI frameworks. Finally, we summarize applications in healthcare, transportation, industrial IoT, and smart cities, and outline future directions in federated continual learning, on-device adaptation of foundation models, and the convergence of FL with TinyML and emerging hardware.
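As background for the FL optimization problem the abstract says is formalized, the canonical federated objective (as popularized by FedAvg) can be sketched as follows; the notation here is illustrative and may differ from the paper's own:

```latex
% Global objective: weighted average of K clients' local empirical risks,
% where client k holds n_k samples (n = sum of all n_k).
\min_{w \in \mathbb{R}^d} \; f(w) = \sum_{k=1}^{K} \frac{n_k}{n} \, F_k(w),
\qquad
F_k(w) = \frac{1}{n_k} \sum_{i \in \mathcal{D}_k} \ell(w; x_i, y_i),
```

where \(\mathcal{D}_k\) is client \(k\)'s local dataset and \(\ell\) a per-sample loss. The variants surveyed (personalized, decentralized, split/hybrid) typically modify either the weighting, the coordination topology over which this sum is aggregated, or the assumption that all clients share a single model \(w\).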