Advances in Parameter-Efficient Fine-Tuning: Optimizing Foundation Models for Scalable AI
Abstract
The unprecedented scale and capabilities of foundation models, such as large language models and vision transformers, have transformed artificial intelligence (AI) across diverse domains. However, fine-tuning these models for specific tasks remains computationally expensive and memory-intensive, posing challenges for practical deployment, especially in resource-constrained environments. Parameter-efficient fine-tuning (PEFT) methods have emerged as a promising solution, enabling efficient adaptation of large-scale models with minimal parameter updates while maintaining high performance. This survey provides a comprehensive review of PEFT techniques, categorizing existing approaches into adapter-based tuning, low-rank adaptation (LoRA), prefix and prompt tuning, BitFit, and hybrid strategies. We analyze their theoretical foundations, trade-offs between computational efficiency and expressiveness, and empirical performance across various tasks. Furthermore, we explore real-world applications of PEFT in natural language processing, computer vision, multimodal learning, and edge computing, highlighting its impact on accessibility and scalability. Beyond existing methodologies, we discuss emerging trends in PEFT, including meta-learning, dynamic fine-tuning strategies, cross-modal adaptation, and federated fine-tuning. We also address key challenges such as optimal method selection, interpretability, and deployment considerations, paving the way for future research. As foundation models continue to grow, PEFT will remain a crucial area of study, ensuring that the benefits of large-scale AI systems are broadly accessible, efficient, and sustainable.
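To make the central idea concrete, the following is a minimal sketch of low-rank adaptation (LoRA), one of the PEFT families surveyed here: the pretrained weight is frozen and only a low-rank update is trained. The class name LoRALinear, the layer sizes, and the initialization scale are illustrative assumptions, not details drawn from any specific implementation discussed in this survey.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Linear layer with a frozen base weight and a trainable low-rank update.

    The adapted forward pass computes W x + (alpha / r) * B A x, where only
    A (r x in_features) and B (out_features x r) receive gradients.
    """

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # In practice `base` would hold pretrained weights; here it stands in for them.
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        self.base.bias.requires_grad_(False)
        # Low-rank factors: B starts at zero, so training begins exactly at the base model.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage: with rank r = 8 on a 768x768 projection, only 2 * 8 * 768 = 12,288
# parameters are trained instead of the full 768 * 768 + 768.
layer = LoRALinear(768, 768, r=8, alpha=16)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")
```

The design choice that makes this parameter-efficient is the rank bottleneck: the update to the weight matrix is constrained to rank r, so the number of trainable parameters grows linearly in the layer width rather than quadratically, which is the efficiency-versus-expressiveness trade-off the survey examines across PEFT methods.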