G-SAFE: Generative Synthetic Augmentation for Federated Edge Security

Abstract

Federated learning (FL) has emerged as a promising decentralized machine learning paradigm for edge computing, enabling collaborative model training without sharing raw data. However, FL remains constrained by limited, non-IID local datasets and by lingering privacy risks from shared model updates. In this paper, we introduce G-SAFE (Generative Synthetic Augmentation for Federated Edge Security), a novel framework that integrates generative artificial intelligence (AI) with federated learning to enhance security applications at the network edge. G-SAFE leverages generative models (such as GANs) at each client to produce synthetic data that augment local training sets, improving model generalization and addressing data scarcity and imbalance. The synthetic samples preserve the statistical characteristics of sensitive data without exposing personal identifiers, mitigating privacy concerns. We design a two-fold methodology comprising a client-side generative augmentation strategy and a privacy-preserving federated training process. The augmented local models are periodically aggregated by the server, yielding a robust global model. We evaluate G-SAFE on a distributed intrusion detection use case with IoT edge devices. Results show that our approach accelerates model convergence and significantly improves detection performance over standard FL. G-SAFE achieves a global accuracy of ~98.3%, approaching the centralized training upper bound (≈99%), and outperforms vanilla federated learning by over 2.5 percentage points of absolute accuracy. Precision–recall metrics for minority attack classes improve substantially with synthetic augmentation (e.g., recall +35% for rare exploits). We compare G-SAFE against baseline methods and discuss its impact on privacy, showing that sharing only generative models or synthetic data further reduces information-leakage risks. This work demonstrates that generative synthetic augmentation can greatly enhance federated edge security systems by balancing data utility and privacy.
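To make the workflow described in the abstract concrete, the sketch below illustrates one plausible reading of the G-SAFE loop: each client augments its local data with samples from a locally fitted generative model, trains a local classifier on the augmented set, and the server aggregates the client weights. Everything here is an assumption for illustration only; the per-class Gaussian "generator" stands in for the paper's GAN-style models, the logistic-regression client and FedAvg-style averaging are placeholders, and all function names are invented.

```python
# Illustrative sketch only: a stand-in per-class Gaussian replaces the GAN,
# and a logistic-regression client with FedAvg-style averaging replaces the
# paper's actual models and aggregation rule.
import numpy as np

rng = np.random.default_rng(0)

def synthesize(X, y, n_per_class):
    """Stand-in generator: draw synthetic samples from per-class Gaussians
    fitted to the local data, then append them to the real training set."""
    Xs, ys = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        mu, sigma = Xc.mean(axis=0), Xc.std(axis=0) + 1e-6
        Xs.append(rng.normal(mu, sigma, size=(n_per_class, X.shape[1])))
        ys.append(np.full(n_per_class, c))
    return np.vstack([X] + Xs), np.concatenate([y] + ys)

def local_train(w, X, y, lr=0.1, epochs=20):
    """Client update: gradient descent for logistic regression on the
    (augmented) local dataset, starting from the current global weights."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def aggregate(updates, sizes):
    """Server step: sample-size-weighted average of client weight vectors."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Toy federated run: three clients with small, skewed local datasets.
d = 5
w_global = np.zeros(d)
clients = []
for _ in range(3):
    n = rng.integers(30, 60)
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    clients.append((X, y))

for rnd in range(10):                                   # communication rounds
    updates, sizes = [], []
    for X, y in clients:
        Xa, ya = synthesize(X, y, n_per_class=50)       # client-side augmentation
        updates.append(local_train(w_global.copy(), Xa, ya))
        sizes.append(len(ya))
    w_global = aggregate(updates, sizes)                # server-side aggregation
```

Only model weights (or, per the abstract, generators or synthetic data) leave the client in this scheme; raw records never do, which is the property the privacy discussion in the paper builds on.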
