Model Poisoning Attacks to Federated Learning based on Fake Clients
Abstract
The increasing use of decentralized and anonymous networks creates vast amounts of darknet traffic, offering opportunities to enhance network security by detecting threats, filtering malicious activity, and identifying anomalies through improved traffic classification. Federated Learning (FL) is a promising approach for decentralized data processing, allowing models to be trained across distributed devices while preserving data privacy. However, FL is vulnerable to poisoning attacks, in which adversarial clients degrade the performance of the global model. In this paper, we use a rich dataset of encrypted darknet traffic to develop new methods for defending against model poisoning attacks. We propose novel attack strategies based on fake clients and gradient inversion: the Model Poisoning Attack based on Fake Clients (MPAF), the Gradient Descent Inversion Attack (GDIA), and the Selective Aggregation Poisoning Attack (SAPA). Alongside these attacks, we introduce two defense strategies: Adaptive Weighting in Aggregation (AWA) and Statistical Outlier Filtering (SOF). Experimental results show that GDIA can drive global model accuracy down to 0%, while MPAF reduces accuracy to approximately 32.38%. The AWA defense notably restores accuracy under GDIA to around 80.95% and under MPAF to about 93.33%, clearly outperforming SOF. After refining the attack implementations by strengthening the base model for MPAF and reducing the intensity of GDIA, MPAF became significantly stronger, bringing accuracy down to 0%. GDIA, in contrast, exhibited more controlled degradation, with the AWA defense still stabilizing accuracy at approximately 72.06%.
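The abstract describes MPAF as a fake-client attack built around an attacker-chosen base model. As a rough illustration of that general idea (not the paper's implementation), the sketch below shows how fake clients could report amplified updates that drag a plain FedAvg aggregate toward a base model; the helper names `fedavg` and `fake_client_update`, the scaling factor, and the toy dimensions are all illustrative assumptions.

```python
import numpy as np

def fedavg(updates, weights=None):
    """Plain FedAvg: (weighted) average of the client updates."""
    updates = np.stack(updates)
    if weights is None:
        weights = np.ones(len(updates)) / len(updates)
    return np.average(updates, axis=0, weights=weights)

def fake_client_update(global_model, base_model, scale=10.0):
    """MPAF-style fake update (assumed form): report an update that pulls
    the global model toward an attacker-chosen base model, amplified by
    `scale` so it dominates the aggregate."""
    return scale * (base_model - global_model)

# Toy round: 8 benign clients, 2 attacker-controlled fake clients.
rng = np.random.default_rng(0)
dim = 16
global_model = np.zeros(dim)
base_model = rng.normal(size=dim)                         # attacker's target
benign = [0.01 * rng.normal(size=dim) for _ in range(8)]  # small honest updates
fake = [fake_client_update(global_model, base_model) for _ in range(2)]

clean = fedavg(benign)
poisoned = fedavg(benign + fake)
print("clean update norm:   ", np.linalg.norm(clean))
print("poisoned update norm:", np.linalg.norm(poisoned))
```

Even with only two fake clients, the amplified updates dominate the unweighted average, which is the basic failure mode such defenses aim to close.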
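The two defenses are only named at a high level in the abstract. The following sketch shows one plausible reading of each, assuming distance-based down-weighting for AWA and z-score norm filtering for SOF; the function names and thresholds are assumptions for illustration, not the authors' method.

```python
import numpy as np

def adaptive_weight_aggregate(updates):
    """AWA-style idea (assumed): weight each client update inversely to its
    distance from the coordinate-wise median, so outlying (likely poisoned)
    updates contribute less to the aggregate."""
    updates = np.stack(updates)
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)
    weights = 1.0 / (1.0 + dists)
    weights /= weights.sum()
    return np.average(updates, axis=0, weights=weights)

def statistical_outlier_filter(updates, z_thresh=2.0):
    """SOF-style idea (assumed): drop updates whose L2 norm deviates from
    the mean norm by more than `z_thresh` standard deviations, then average
    the remaining updates."""
    updates = np.stack(updates)
    norms = np.linalg.norm(updates, axis=1)
    z = (norms - norms.mean()) / (norms.std() + 1e-12)
    kept = updates[np.abs(z) <= z_thresh]
    return kept.mean(axis=0) if len(kept) else updates.mean(axis=0)
```

Either routine can replace the plain averaging step in the previous sketch; weighting keeps some signal from every client, while filtering discards suspected outliers entirely.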