Multi-Layer Defense Strategies and Privacy Preserving Enhancements for Membership Reasoning Attacks in a Federated Learning Framework

Abstract

To enhance the privacy protection of federated learning against membership inference attacks, a multi-layer defense model integrating feature perturbation, gradient compression, and regularization control is constructed to systematically analyze how each intervention mechanism suppresses privacy leakage and affects model performance. The results show that on the CIFAR-100 and Purchase-100 datasets, attack accuracy decreases from 84.2% and 91.6% to 34.7% and 38.1%, respectively; the membership inference success rate drops by more than 50% on average; and the model's Top-1 accuracy decreases by no more than 3%. This strategy effectively improves the robustness of the model against membership privacy attacks.
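The abstract names three defense layers applied to client updates: feature/gradient perturbation, gradient compression, and regularization control. The paper's exact mechanisms are not specified in the abstract, so the following is only a minimal sketch of one common instantiation of each layer (Gaussian noise, top-k sparsification, and an L2 penalty gradient); the function name `defend_update` and all parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def defend_update(grad, noise_std=0.1, compress_ratio=0.1,
                  l2_lambda=0.01, weights=None, rng=None):
    """Apply three illustrative defense layers to a client's gradient update.

    This is a sketch of the general idea, not the paper's method.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    g = np.asarray(grad, dtype=float).copy()

    # Layer 1: perturbation -- add Gaussian noise to mask per-sample signal.
    if noise_std > 0:
        g += rng.normal(0.0, noise_std, size=g.shape)

    # Layer 2: gradient compression -- keep only the top-k entries by
    # magnitude, zeroing the rest (reduces leaked information and bandwidth).
    k = max(1, int(compress_ratio * g.size))
    threshold = np.sort(np.abs(g).ravel())[-k]
    g[np.abs(g) < threshold] = 0.0

    # Layer 3: regularization control -- add an L2 penalty gradient so the
    # model memorizes individual training points less.
    if weights is not None:
        g += l2_lambda * np.asarray(weights, dtype=float)
    return g

# Example: with noise disabled, only ~10% of entries survive compression.
raw = np.arange(100, dtype=float)
defended = defend_update(raw, noise_std=0.0, compress_ratio=0.1)
```

Each layer trades a small amount of utility for privacy, which is consistent with the reported result that attack accuracy drops sharply while Top-1 accuracy falls by no more than 3%.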
