Soft Prompt Tuning via Differential Privacy: Balancing Accuracy and Privacy in Language Models
Abstract
Despite its effectiveness, the soft prompt tuning approach raises privacy concerns: an attacker who carefully inspects the learned prompts may be able to recover sensitive information about individuals represented in the training data. To address such privacy issues in general, differential privacy (DP) studies optimization algorithms with strong theoretical privacy guarantees. In this work, we explore how DP can be integrated into soft prompt tuning to develop privacy-preserving language models. Our goal is to strike a balance between parameter efficiency, downstream accuracy, and data privacy.
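To make the idea of DP-integrated prompt tuning concrete, here is a minimal NumPy sketch of one DP-SGD-style update applied only to the soft prompt parameters (the frozen model weights are untouched). The function name `dp_sgd_step`, the hyperparameter values, and the toy gradients are illustrative assumptions, not the paper's actual algorithm or settings.

```python
import numpy as np

def dp_sgd_step(prompt, per_example_grads, clip_norm, noise_multiplier, lr, rng):
    """One DP-SGD update restricted to the soft prompt parameters.

    Each per-example gradient is clipped to `clip_norm`, Gaussian noise with
    standard deviation `noise_multiplier * clip_norm` is added to the sum,
    and the noisy mean is applied as a plain SGD step.
    (Illustrative sketch only; hyperparameters are made up.)
    """
    batch_size = len(per_example_grads)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale each example's gradient so its L2 norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=prompt.shape)
    return prompt - lr * noisy_sum / batch_size

# Toy usage: 4 soft prompt tokens of dimension 8, random stand-in gradients.
rng = np.random.default_rng(0)
prompt = np.zeros((4, 8))
grads = [rng.normal(size=(4, 8)) for _ in range(16)]
updated = dp_sgd_step(prompt, grads, clip_norm=1.0,
                      noise_multiplier=1.1, lr=0.1, rng=rng)
```

Because only the prompt embeddings are trained and noised, the privacy cost is paid on a very small parameter set, which is one intuition for why this combination can remain parameter-efficient.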