Federated Learning for Privacy-Preserving Network Anomaly Detection: A High-Performance Convolutional Framework with Differential Privacy
Abstract
Federated learning (FL) has emerged as a promising paradigm for privacy-preserving collaborative machine learning, enabling multiple organizations to train shared models without exchanging sensitive data. This study presents a comprehensive investigation of FL for network anomaly detection on the NSL-KDD dataset, with experimental evaluations spanning centralized baselines, IID and non-IID federated settings, differential privacy mechanisms, and robust optimization strategies such as FedProx. The results show that FL is feasible and efficient for distributed cybersecurity applications but is sensitive to data heterogeneity and privacy constraints. Centralized models achieved near-perfect detection performance, whereas FL under IID conditions demonstrated competitive accuracy and stable convergence. Under label-skew and quantity-skew non-IID conditions, FedAvg performance declined, while FedProx significantly improved stability and accuracy. Differential privacy introduced predictable accuracy degradation, with moderate privacy budgets (ε = 10 and ε = 5) maintaining operational viability. System profiling revealed low communication overhead and rapid round execution, confirming practical deployability on CPU-based nodes. This work provides a rigorous experimental foundation for integrating federated learning into distributed intrusion detection systems and identifies key challenges related to privacy, heterogeneity, and model robustness that must be addressed for reliable real-world adoption.
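The abstract credits FedProx with improving stability and accuracy under label-skew and quantity-skew non-IID conditions. As an illustration only, the minimal PyTorch sketch below shows how a FedProx-style proximal term can be added to a client's local objective, which is the core idea behind that robustness; the function name, hyperparameters (mu, lr, epochs), and training loop are assumptions for exposition, not the paper's actual implementation.

```python
import torch

def fedprox_local_update(model, global_weights, loader, mu=0.01, lr=0.01, epochs=1):
    """Illustrative FedProx-style client update (assumed setup, not the paper's code).

    model          : local copy of the shared model (torch.nn.Module)
    global_weights : detached snapshot of the server model's parameters at round start
    loader         : client's local DataLoader of (features, label) batches
    mu             : proximal coefficient; mu = 0 recovers plain FedAvg local SGD
    """
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            task_loss = loss_fn(model(x), y)
            # FedProx proximal term: (mu / 2) * ||w - w_global||^2,
            # penalizing drift of local weights away from the global model.
            prox = sum(((p - g.detach()) ** 2).sum()
                       for p, g in zip(model.parameters(), global_weights))
            (task_loss + 0.5 * mu * prox).backward()
            optimizer.step()
    return model.state_dict()
```

The proximal penalty keeps heterogeneous clients from drifting too far from the current global model during local epochs, which is consistent with the stability gains the abstract reports relative to FedAvg under non-IID splits.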