Machine Unlearning in Large Language Models: A Survey of Challenges and Methods

Abstract

The rapid development of Large Language Models (LLMs) has made machine unlearning essential for privacy and compliance: the ability to erase specific information from a trained model without retraining it from scratch. However, the inherent complexity of LLMs makes unlearning fundamentally different from its counterpart in traditional models. To analyze these distinctions, this survey conducts a detailed comparison of machine unlearning in traditional models and in LLMs. This comparison reveals four major challenges: performance degradation, unlearning completeness, efficiency and cost, and black-box constraints. Rather than broadly categorizing algorithms, we structure our taxonomy around these core challenges, systematically evaluating how existing methods mitigate each of them, and conclude by discussing promising directions for future research.
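As a concrete illustration of the idea, one common baseline in the unlearning literature (not a method proposed by this survey) is gradient ascent on a "forget set": the model's loss on the targeted examples is maximized for a few steps, rather than retraining the whole model. The sketch below assumes a Hugging Face causal LM; the model choice, learning rate, and step count are illustrative assumptions only.

```python
# Minimal sketch of gradient-ascent unlearning on a "forget set".
# Assumptions: a Hugging Face causal LM; all hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model; any causal LM would work
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

forget_texts = ["Example sentence containing information to be erased."]
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

for step in range(3):  # a few ascent steps; too many harms the whole model
    for text in forget_texts:
        batch = tokenizer(text, return_tensors="pt")
        # Standard causal-LM loss on the forget example (labels = input_ids).
        loss = model(**batch, labels=batch["input_ids"]).loss
        # Gradient *ascent*: negate the loss so the optimizer increases it,
        # pushing the model away from reproducing the forgotten content.
        (-loss).backward()
        optimizer.step()
        optimizer.zero_grad()
```

Even this simple baseline exhibits the first challenge named above: unchecked ascent on the forget set degrades the model's general performance, which is why practical methods bound or regularize the update.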
