Adaptive Privacy-Preserving Split-Hierarchical Federated Learning for Resource-Constrained IoT Networks

Abstract

The proliferation of Internet-of-Things (IoT) devices necessitates efficient machine learning paradigms that address bandwidth constraints, privacy requirements, and computational heterogeneity. While hierarchical federated learning offers communication efficiency and split learning reduces computational burden on resource-constrained devices, existing approaches lack adaptive mechanisms for dynamic environments and formal privacy guarantees. We propose AP-SHFL (Adaptive Privacy-Preserving Split-Hierarchical Federated Learning), a novel three-tier architecture that jointly optimizes split point selection, hierarchical aggregation, and differential privacy mechanisms. Our approach employs Q-learning for per-client dynamic split point adaptation based on real-time loss and communication feedback, while implementing staleness-adaptive differential privacy that calibrates noise injection according to model freshness in asynchronous settings. Experimental results on MNIST demonstrate 99.19% test accuracy, 40-60% communication reduction compared to a FedAvg baseline, rapid convergence (>98.5% accuracy in <10 rounds), and adaptive split point evolution from an average of 1.10 to 2.75 across clients, showcasing effective per-client optimization. Ablation studies confirm the contribution of each component. Our framework achieves state-of-the-art results, outperforming HSFL (98.1% accuracy) while maintaining formal differential privacy guarantees.
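
The sketch below illustrates the two adaptive mechanisms named in the abstract, as we read them: Q-learning over candidate split points driven by loss and communication feedback, and differential-privacy noise scaled by update staleness. All class and parameter names (e.g., `SplitPointAgent`, `kappa`, the reward weights) and the exact calibration rules are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class SplitPointAgent:
    """Hypothetical per-client tabular Q-learning over candidate split points."""

    def __init__(self, n_split_points=4, lr=0.1, gamma=0.9, epsilon=0.1):
        self.q = np.zeros(n_split_points)  # single-state Q-table over split choices
        self.lr, self.gamma, self.epsilon = lr, gamma, epsilon

    def select_split(self):
        # Epsilon-greedy choice of where to cut the model between device and edge.
        if np.random.rand() < self.epsilon:
            return np.random.randint(len(self.q))
        return int(np.argmax(self.q))

    def update(self, split, loss, comm_cost, alpha=1.0, beta=0.1):
        # Reward trades off training loss against communication volume
        # (alpha/beta are assumed weighting hyperparameters).
        reward = -(alpha * loss + beta * comm_cost)
        self.q[split] += self.lr * (reward + self.gamma * self.q.max() - self.q[split])


def staleness_adaptive_noise(update, base_sigma, clip_norm, staleness, kappa=0.5):
    """Clip a client update and add Gaussian noise scaled up with staleness.

    Staler asynchronous updates receive more noise; the linear schedule in
    `kappa` is an assumed calibration rule for illustration only.
    """
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    sigma = base_sigma * (1.0 + kappa * staleness)
    return clipped + np.random.normal(0.0, sigma * clip_norm, size=update.shape)
```

Under these assumptions, each client would call `select_split()` before a round, report its loss and transmitted bytes to `update()`, and pass its model delta through `staleness_adaptive_noise()` before uploading to its edge aggregator.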
