A Bijection-backdoor-based Adversarial Examples Defense Method in Federated Learning

Abstract

With the continued expansion of the Internet of Things (IoT), data privacy has become a pressing concern. Federated learning (FL) enables a consortium of clients to train a potent global model while safeguarding sensitive client data and upholding model accuracy. Unlike conventional centralized learning paradigms, federated learning operates without requiring access to local datasets, thereby effectively mitigating data privacy concerns. Nonetheless, malicious actors can exploit this setting by introducing subtle perturbations to client-side samples, executing adversarial example (AE) attacks that disrupt model predictions. To address this challenge, we present a novel bijection-backdoor-based adversarial example defense method for federated learning, termed BAEDFL-CL. First, a bijection backdoor mechanism is transmitted from the server to each client, i.e., IoT device, neutralizing the impact of adversarial examples on model outputs while preserving performance on the primary task. In addition, to strengthen the model's defense capability in practical federated learning scenarios, we devise a representation enhancement technique grounded in supervised contrastive learning (CL), which encourages the model to learn feature representations with stronger generalization ability. Comprehensive experiments on diverse datasets spanning IID and Non-IID settings show that BAEDFL-CL significantly reduces the attack success rate to 17.66% and 24.35%, respectively, while improving main-task performance by 0.02% and 2.24% correspondingly, substantiating its efficacy in countering adversarial examples in federated learning environments.
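The abstract names supervised contrastive learning as the representation enhancement technique but does not spell out the objective. A common formulation such work builds on is the SupCon loss of Khosla et al. (2020), which pulls same-class embeddings together and pushes different-class embeddings apart. Below is a minimal PyTorch sketch of that standard loss, assuming a batch of feature embeddings and integer class labels; the function name, temperature default, and interface are illustrative and not the paper's actual code.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """Standard SupCon loss (Khosla et al., 2020); illustrative sketch,
    not the BAEDFL-CL implementation."""
    n = features.size(0)
    device = features.device

    # Work on the unit hypersphere so dot products are cosine similarities.
    features = F.normalize(features, dim=1)
    logits = features @ features.T / temperature

    # Exclude each sample's similarity with itself.
    not_self = ~torch.eye(n, dtype=torch.bool, device=device)
    # Positives: other samples in the batch sharing the same label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self

    # Log-softmax over all non-self pairs (numerically stable).
    logits = logits.masked_fill(~not_self, float('-inf'))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    # Average log-probability of positives per anchor, skipping anchors
    # with no positive in the batch.
    pos_count = pos_mask.sum(dim=1)
    valid = pos_count > 0
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return -(pos_log_prob[valid] / pos_count[valid]).mean()
```

In a federated setup of the kind described, each client would presumably apply such a loss to its encoder's representations alongside the main classification loss during local training, before model updates are aggregated at the server.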