FMPA: Fragment Model Poisoning Attack in Federated Learning

Abstract

Federated learning is a distributed machine learning paradigm that enables multiple participants to jointly train a model while preserving data privacy. However, its distributed nature makes federated learning vulnerable to Byzantine attacks, which degrade model performance or prevent convergence. Existing model poisoning attacks typically perturb all dimensions of the model parameters, which makes them easier for server-side defenses to detect and limits their effectiveness. To address this, we propose a new fragment model poisoning attack, FMPA. By concentrating the attack on specific dimensions of the model parameters, FMPA evades defense methods while significantly degrading model performance. Experimental results show that FMPA effectively impairs model performance even against five different Byzantine-robust defense methods.
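
The abstract does not specify how FMPA selects which parameter dimensions to poison, so the following is only a minimal sketch of the general idea of a fragment-style attack: a malicious client perturbs a small, high-impact fragment of its update while leaving the remaining dimensions benign, keeping the poisoned update close to honest ones under distance-based defenses. The function name `fragment_poison`, the magnitude-based selection rule, and all parameters are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def fragment_poison(benign_update, fragment_ratio=0.1, scale=-3.0):
    """Poison only a fragment of an update's dimensions (illustrative sketch).

    Assumption: dimensions with the largest magnitude matter most, so only
    those are inverted and amplified; all other coordinates stay benign,
    keeping the poisoned update close to honest updates overall.
    """
    update = benign_update.copy()
    k = max(1, int(fragment_ratio * update.size))
    # Pick the k coordinates with the largest absolute value.
    idx = np.argsort(np.abs(update))[-k:]
    # Invert and amplify only that fragment.
    update[idx] *= scale
    return update

# Compare a naive full-dimension attack against the fragment attack.
rng = np.random.default_rng(0)
benign = rng.normal(scale=0.01, size=10_000)   # stand-in for a model update
full = benign * -3.0                           # naive: flip every dimension
frag = fragment_poison(benign, fragment_ratio=0.05, scale=-3.0)

print(np.linalg.norm(full - benign))   # large deviation, easy to flag
print(np.linalg.norm(frag - benign))   # far smaller, harder to detect
```

The printed norms illustrate why a concentrated attack can slip past distance- or similarity-based aggregation rules: the fragment-poisoned update deviates much less from the benign update in aggregate, even though the perturbed coordinates are the most influential ones.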