AP-PPFL: An Anti-poisoning Privacy-preserving Federated Learning Method

Abstract

Federated Learning (FL) has been widely used in Internet of Things (IoT) environments as a promising decentralized framework capable of collaborative model training without exposing local data. Despite its advantages, FL still encounters significant security challenges. In particular, semi-honest servers can potentially infer private information from the gradients shared by clients. Additionally, FL's distributed nature opens it up to adversarial behavior, where malicious clients may submit manipulated gradients to degrade the global model's accuracy or hinder its convergence. Addressing privacy and robustness simultaneously is a substantial challenge, as most privacy-preserving approaches secure gradients through encryption or noise injection, which obstructs the identification of malicious clients, an essential step in poisoning defense. To resolve this conflict, this work introduces AP-PPFL, a federated learning framework that integrates both privacy protection and poisoning defense. The proposed approach incorporates a voting-based parameter importance evaluation strategy and a cosine similarity-based mechanism to filter out harmful gradients. Furthermore, it leverages Paillier homomorphic encryption within a dual-server setup to maintain gradient confidentiality while enabling secure computation directly over encrypted data. Compared with conventional methods, AP-PPFL achieves a balanced improvement in both privacy preservation and attack resilience, and a comprehensive security analysis is provided.
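
As a rough illustration of the cosine similarity-based gradient filtering the abstract describes, the minimal sketch below scores each client update against a reference direction and drops updates that point away from it. This is not the paper's exact algorithm; the reference gradient, the threshold value, and the function names are illustrative assumptions, and AP-PPFL additionally performs this comparison over encrypted data in its dual-server setting.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened gradient vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0

def filter_client_gradients(client_grads, reference_grad, threshold=0.0):
    """Keep only gradients whose direction roughly agrees with a reference
    (e.g., the previous global update); the rest are treated as potentially
    poisoned and discarded. The threshold and choice of reference are
    illustrative assumptions, not values taken from the paper."""
    kept = [g for g in client_grads
            if cosine_similarity(g.ravel(), reference_grad.ravel()) >= threshold]
    # Fall back to all gradients if everything was filtered out.
    return kept if kept else list(client_grads)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(size=10)
    honest = [reference + 0.1 * rng.normal(size=10) for _ in range(3)]
    poisoned = [-reference]  # points in the opposite direction
    survivors = filter_client_gradients(honest + poisoned, reference)
    print(f"{len(survivors)} of {len(honest) + len(poisoned)} gradients kept")
```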