FusionNet Lite: A Lightweight Federated Deep Learning Model for Privacy-Preserving Predictive Maintenance
Abstract
Federated learning (FL) offers a privacy-preserving strategy for industrial predictive maintenance (PdM), yet many existing models remain too large or energy-intensive for deployment on edge devices. This study proposes FusionNet Lite, an ultralight hybrid convolutional architecture with fewer than 1420 parameters, designed for low-latency federated optimisation under non-IID industrial conditions. The model integrates depthwise separable convolutions, a compact ConvMixer module, and squeeze-and-excitation (SE) attention to minimise computation and communication while preserving predictive performance. Experiments on the AI4I 2020 dataset show that FusionNet Lite matches the accuracy of larger baselines and achieves the highest energy-efficiency index across both CPU and GPU environments. The model also maintains attributional stability between centralised and federated training, with Integrated Gradients (IG) demonstrating consistent feature importance patterns. Communication overhead remains below 25 kB per round, enabling deployment in bandwidth-constrained Industrial Internet of Things (IIoT) networks. The results confirm that lightweight FL architectures can support real-time PdM while preserving data privacy in distributed industrial settings.
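The sub-1420-parameter budget is plausible given the building blocks the abstract names: a depthwise separable convolution factors a standard convolution into a per-channel filter plus a 1x1 pointwise mix, and an SE block adds only two small dense layers. The sketch below illustrates the parameter arithmetic; the channel sizes, kernel width, and reduction ratio are assumed for illustration and are not taken from the paper.

```python
# Illustrative parameter counting for the layer types named in the abstract.
# All sizes here (8 -> 16 channels, kernel 3, SE reduction r=2) are assumptions.

def standard_conv_params(c_in, c_out, k):
    """Weights of a standard 1-D convolution (bias terms omitted)."""
    return c_in * c_out * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise conv (one k-tap filter per input channel) + 1x1 pointwise conv."""
    return c_in * k + c_in * c_out

def se_params(c, r=2):
    """Squeeze-and-excitation: two dense layers with reduction ratio r."""
    return c * (c // r) + (c // r) * c

std = standard_conv_params(8, 16, 3)        # 8*16*3       = 384
dsc = depthwise_separable_params(8, 16, 3)  # 8*3 + 8*16   = 152
se  = se_params(16, r=2)                    # 16*8 + 8*16  = 256

print(std, dsc, se)  # the separable form needs ~2.5x fewer weights here
```

The same budget explains the communication figure: 1420 float32 weights occupy roughly 5.7 kB, comfortably under the reported 25 kB per federated round even with protocol overhead.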