STellar-FL: A Decentralized Federated Learning Architecture for Scalable Cross-Institution AI Under Network-Constrained Environments


Abstract

Federated learning (FL) enables collaborative model training without sharing raw data and has become an important paradigm for privacy-sensitive applications such as healthcare and other regulated domains. However, most existing FL frameworks rely on centralized coordination servers, fixed network configurations, and complex infrastructure, which limits their deployment in real-world institutional environments with strict cybersecurity and data governance constraints.

In this work, we propose STellar-FL, a decentralized federated learning architecture designed for scalable cross-institution model training in network-constrained environments. The framework adopts a microservice-based design consisting of a federated training orchestration module, a distributed communication layer, and federated execution nodes. STellar-FL enables secure model exchange through relay-assisted peer connectivity, eliminates the need for centralized servers with public IP exposure, and provides a unified workflow for model development, deployment, and validation.

Compared with conventional FL frameworks, STellar-FL reduces deployment complexity, improves system robustness by removing single points of failure, and supports flexible collaboration across heterogeneous institutional infrastructures. The proposed architecture provides a practical foundation for real-world privacy-preserving AI deployment in healthcare and other data-sensitive domains.
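To make the relay-assisted exchange pattern concrete, the following is a minimal sketch (not taken from the paper; all class and function names are illustrative assumptions). Each institution makes only outbound connections to a relay, which queues and forwards serialized model updates between peers, so no participant needs a publicly reachable server:

```python
# Illustrative sketch of relay-assisted model exchange with FedAvg-style
# aggregation. The Relay and ExecutionNode names are hypothetical, not
# the paper's actual API; the relay is simulated in-process for clarity.
from collections import defaultdict

class Relay:
    """Stand-in for the relay service: queues updates per recipient."""
    def __init__(self):
        self.inbox = defaultdict(list)

    def send(self, sender, recipient, payload):
        # In a real deployment this would be an outbound HTTPS/WebSocket
        # call from the sender to the relay; here it is a local queue.
        self.inbox[recipient].append((sender, payload))

    def poll(self, recipient):
        msgs, self.inbox[recipient] = self.inbox[recipient], []
        return msgs

class ExecutionNode:
    """Federated execution node holding a local model (a weight vector)."""
    def __init__(self, name, weights, relay):
        self.name, self.weights, self.relay = name, list(weights), relay

    def broadcast(self, peers):
        # Push the local model update to each peer via the relay.
        for p in peers:
            self.relay.send(self.name, p, list(self.weights))

    def aggregate(self):
        # Average the local model with all updates received via the relay.
        received = [w for _, w in self.relay.poll(self.name)]
        all_w = [self.weights] + received
        self.weights = [sum(ws) / len(all_w) for ws in zip(*all_w)]
        return self.weights

relay = Relay()
a = ExecutionNode("hospital_a", [1.0, 3.0], relay)
b = ExecutionNode("hospital_b", [3.0, 5.0], relay)
a.broadcast(["hospital_b"])
b.broadcast(["hospital_a"])
print(a.aggregate())  # -> [2.0, 4.0]
```

Because every node only pushes to and polls from the relay, the pattern works behind institutional firewalls and NAT, and the relay itself never needs to inspect model contents, which can be encrypted end to end.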