FedDPGu: Adaptive Prompt-tuning with Built-in Unlearning for Federated Learning

Abstract

Pre-trained Language Models (PLMs) have demonstrated impressive performance on various NLP tasks. However, traditional fine-tuning methods for adapting PLMs to downstream tasks entail significant computational overhead. Prompt-tuning has emerged as an efficient alternative: a small number of parameters are prepended to the input sequence, and only these are updated while the PLM's parameters remain frozen. However, these prompts stay fixed for all inputs, reducing the model's flexibility. Federated Learning (FL) has gained attention in recent years as a way to address growing concerns around data privacy, yet challenges such as the communication and computation limitations of clients still need to be addressed. To mitigate these challenges, this paper introduces the Federated Dynamic Prompt Generator (FedDPG), which incorporates a dynamic prompt generator network to produce context-aware prompts conditioned on the given input, ensuring flexibility and adaptability while prioritising data privacy in federated learning settings. Experiments on three NLP benchmark datasets show that FedDPG outperforms state-of-the-art parameter-efficient fine-tuning methods in global model performance against five baseline models, with only one configuration performing marginally worse, while significantly reducing computation time and the number of parameters sent through the FL network. Finally, we propose FedDPGu, a re-labelling-based method designed to handle local client unlearning. By further integrating an efficient federated unlearning method, we extend it to fast-FedDPGu, which leverages model difference estimation to enable efficient global unlearning of a target client. Together, these methods ensure that FedDPG can effectively forget sensitive client information at both the local and global levels in federated settings. Our code is available at https://github.com/gotobcn8/FedDPG.
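The dynamic-prompt idea described above can be sketched minimally: a small trainable generator network maps a pooled representation of the input sequence to a sequence of prompt vectors, which are prepended to the input embeddings before the frozen PLM processes them. The sketch below (all dimensions, weight names, and the mean-pooling choice are illustrative assumptions, not the paper's actual architecture) uses NumPy in place of a real PLM to show the data flow; in FedDPG, only the generator's parameters would be trained and exchanged over the FL network.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, prompt_len, hidden = 16, 4, 32  # illustrative sizes

# Trainable generator parameters -- hypothetically the only weights
# that clients would update and send through the FL network.
W1 = rng.normal(scale=0.1, size=(d_model, hidden))
W2 = rng.normal(scale=0.1, size=(hidden, prompt_len * d_model))

def generate_prompt(input_embeds: np.ndarray) -> np.ndarray:
    """Map the input sequence to context-aware prompt vectors."""
    pooled = input_embeds.mean(axis=0)            # (d_model,) pooled context
    h = np.tanh(pooled @ W1)                      # (hidden,) generator hidden state
    return (h @ W2).reshape(prompt_len, d_model)  # (prompt_len, d_model)

def prepend_prompt(input_embeds: np.ndarray) -> np.ndarray:
    """Prepend the generated prompt; the PLM itself stays frozen."""
    return np.concatenate([generate_prompt(input_embeds), input_embeds], axis=0)

seq = rng.normal(size=(10, d_model))  # a 10-token input sequence (embeddings)
extended = prepend_prompt(seq)
print(extended.shape)                 # (14, 16): 4 prompt tokens + 10 input tokens
```

Because the prompt is a function of the input rather than a fixed learned tensor, different inputs receive different prompts, which is the flexibility the abstract contrasts with standard prompt-tuning.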
