Revisiting Fine-Tuning: A Survey of Parameter-Efficient Techniques for Large AI Models

Abstract

Foundation models have revolutionized artificial intelligence by achieving state-of-the-art performance across a wide range of tasks. However, fine-tuning these massive models for specific applications remains computationally expensive and memory-intensive. Parameter-Efficient Fine-Tuning (PEFT) techniques have emerged as an effective alternative, allowing adaptation with significantly fewer trainable parameters while maintaining competitive performance. This survey provides a comprehensive overview of PEFT, covering its theoretical foundations, major methodologies, empirical performance across various domains, and emerging trends. We begin by exploring the motivation behind PEFT, emphasizing the prohibitive cost of full fine-tuning and the need for more efficient adaptation strategies. We then categorize and discuss key PEFT techniques, including adapters, Low-Rank Adaptation (LoRA), prefix tuning, and prompt tuning. Each method is analyzed in terms of its architectural modifications, computational efficiency, and effectiveness across different tasks. Additionally, we present the theoretical underpinnings of PEFT, such as low-rank reparameterization and the role of sparsity in fine-tuning. Empirical evaluations are examined through large-scale benchmarking studies across natural language processing, vision, and speech tasks. We highlight trade-offs between efficiency and performance, demonstrating that PEFT methods can achieve accuracy close to that of full fine-tuning with significantly reduced resource requirements. Furthermore, we discuss recent advancements in hybrid PEFT approaches, continual learning, hardware-aware optimization, and PEFT applications beyond traditional machine learning, including edge AI and scientific computing. Despite its advantages, several open challenges remain, including scalability to ultra-large models, robustness against adversarial attacks, and improved generalization across diverse tasks. We outline future research directions that aim to address these challenges and enhance the efficiency, adaptability, and security of PEFT methods. By summarizing key findings and identifying critical research gaps, this survey serves as a comprehensive resource for researchers and practitioners interested in optimizing the fine-tuning of foundation models. As PEFT continues to evolve, it holds the potential to make large-scale AI models more accessible, efficient, and widely deployable across real-world applications.
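To make the low-rank reparameterization idea concrete, the following is a minimal sketch (in NumPy, with hypothetical layer dimensions) of the LoRA-style update discussed above: the pretrained weight matrix W is frozen, and adaptation is expressed as a trainable low-rank product B·A scaled by α/r, so the number of trainable parameters scales with the rank r rather than the full matrix size.

```python
import numpy as np

# Illustrative shapes (assumptions, not from the survey):
# a single linear layer with input dim 64, output dim 32, LoRA rank 4.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 64, 32, 4, 8

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized

def lora_forward(x):
    # Adapted layer: W x + (alpha / r) * B (A x); only A and B are trained.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted layer initially reproduces
# the frozen pretrained layer exactly.
assert np.allclose(lora_forward(x), W @ x)

# Parameter count: r * (d_in + d_out) trainable values instead of
# d_in * d_out for full fine-tuning of this layer.
print(r * (d_in + d_out), "trainable vs", d_in * d_out, "for full fine-tuning")
```

The zero initialization of B is the standard choice in low-rank adaptation: it guarantees the model starts from the pretrained behavior, and the merged weight W + (α/r)·B·A can be folded back into a single matrix at inference time, adding no latency.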
