Multi-Level Defense Strategy for Vertical Federated Learning Against Label Inference Attacks

Abstract

Vertical federated learning (VFL) is increasingly recognized as an indispensable paradigm, particularly in the medical field, where data privacy is paramount and institutions must adhere to stringent regulatory frameworks such as GDPR and HIPAA. VFL enables institutions to collaboratively predict disease risks without centralizing sensitive patient data. However, VFL remains susceptible to label inference attacks, in which malicious participants infer private labels from the exchanged intermediate results. To address this challenge, we propose the balanced noise injection strategy (BNIS), a mechanism that carefully regulates the amount of injected noise to achieve a trade-off between privacy preservation and model accuracy. To further strengthen our framework, we propose the multi-loss defense strategy (MLDS), a defense explicitly engineered to resist direct label inference attacks. Extensive evaluations on four benchmark datasets demonstrate that our approach defends against passive attacks and yields a significant accuracy improvement over the prevailing FL similar gradient (FLSG) benchmark. Furthermore, MLDS simultaneously mitigates label inference breaches and substantially improves model accuracy.
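The core idea of noise-based defenses against label inference is to perturb the intermediate gradients a VFL participant sends, with the noise magnitude balanced against model accuracy. The following minimal sketch illustrates this general pattern; the function name, the norm-proportional scaling rule, and the `noise_ratio` parameter are illustrative assumptions, not the paper's actual BNIS algorithm.

```python
import numpy as np

def balanced_noise_injection(grads, noise_ratio=0.1, rng=None):
    """Illustrative sketch (not the paper's BNIS) of balanced noise injection.

    Perturbs the intermediate gradients exchanged in VFL with Gaussian
    noise whose scale is a fixed fraction of each gradient's norm, so
    that privacy (more noise) trades off against accuracy (less noise).
    """
    rng = np.random.default_rng(rng)
    noisy = []
    for g in grads:
        # Per-element std chosen so the expected noise norm is
        # roughly noise_ratio * ||g||.
        scale = noise_ratio * np.linalg.norm(g) / np.sqrt(g.size)
        noisy.append(g + rng.normal(0.0, scale, size=g.shape))
    return noisy

# Example: perturb two gradient tensors before sending them upstream.
grads = [np.ones(4), np.full((2, 3), 2.0)]
protected = balanced_noise_injection(grads, noise_ratio=0.1, rng=0)
```

A smaller `noise_ratio` preserves accuracy but leaks more label information through the gradients; a larger one does the reverse, which is the trade-off the abstract describes.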