Resisting Against Targeted Poisoning Attacks in Lightweight Privacy-Preserving Federated Learning
Abstract
Federated learning is a distributed computing paradigm designed to protect client privacy. However, its distributed nature makes it vulnerable to targeted poisoning attacks. Although existing solutions can effectively mitigate such attacks, they often struggle to handle statistical heterogeneity. Moreover, privacy attacks often coexist with targeted poisoning attacks in federated learning, further increasing the difficulty of defense. To address these challenges, this paper proposes a lightweight privacy-preserving federated learning framework, named FedSP, to defend against targeted poisoning attacks. The key idea is a protocol between two servers that detects and aggregates model updates submitted by clients in a perturbed form. Specifically, we design an adaptive clustering strategy during aggregation to mitigate the inconsistencies among model updates caused by statistical heterogeneity. Additionally, we employ dimensionality reduction to identify a plausible model update, eliminating assumptions about the proportion of malicious clients and the availability of a root dataset. Theoretical analysis demonstrates the privacy preservation and convergence of FedSP, and extensive experiments show that FedSP effectively defends against targeted poisoning attacks without compromising privacy.
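To make the aggregation idea concrete, the following is a minimal sketch, not the paper's actual FedSP protocol: client updates are projected into a low-dimensional space (here via PCA computed with an SVD), the projections are split into two clusters, and only one cluster is averaged. All function and parameter names are illustrative, and as a simplification the sketch keeps the larger cluster, whereas FedSP's dimensionality-reduction step is described as avoiding exactly this majority assumption.

```python
import numpy as np

def robust_aggregate(updates: np.ndarray, n_components: int = 2) -> np.ndarray:
    """Illustrative robust aggregation (not FedSP itself).

    updates: (n_clients, n_params) array of client model updates.
    Returns the mean of the cluster judged plausible.
    """
    # Dimensionality reduction: project centered updates onto the top
    # principal components so clustering runs on a compact representation.
    centered = updates - updates.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[:n_components].T

    # Simple 2-means on the projections: initialize one center at the
    # first point and the other at the point farthest from it.
    far = np.argmax(np.linalg.norm(proj - proj[0], axis=1))
    centers = proj[[0, far]].copy()
    for _ in range(10):
        dists = np.linalg.norm(proj[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([proj[labels == c].mean(axis=0) for c in (0, 1)])

    # Simplification: average the larger cluster. FedSP instead identifies
    # a plausible update without assuming honest clients are the majority.
    majority = np.bincount(labels).argmax()
    return updates[labels == majority].mean(axis=0)
```

With well-separated benign and poisoned updates, the poisoned minority lands in its own cluster and is excluded from the average; a real defense must also cope with the perturbation added for privacy and with heterogeneity-induced spread among benign clients, which is what the adaptive clustering strategy addresses.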