Merging LoRA Adapters for Multi-Task Code Analysis: An Empirical Study of Linear Combination and Task Interference


Abstract

Deploying multiple code analysis capabilities, including static code analysis (SCA) and vulnerability detection (VD), typically requires maintaining separate models or running independent inference passes. We investigate whether task-specific LoRA adapters, each fine-tuned independently on Meta-Llama-3.1-8B-Instruct, can be merged via weighted linear combination into a single adapter that preserves performance on both tasks. We evaluate 19 configurations: a \(4 \times 4\) lambda grid (\(\lambda_{\text{SCA}}, \lambda_{\text{VD}} \in \{0.3, 0.5, 0.7, 1.0\}\)) plus three baselines, on synthetic SCA data (3{,}463 samples, 11 categories) and PrimeVul vulnerability data (9{,}858 expert-verified C/C++ samples). Our SCA adapter achieves F1=0.994 and our VD adapter achieves F1=0.732 (MCC=0.466) as solo adapters. The best merged configuration retains 98% of solo VD performance (F1=0.717) while gaining SCA capability, and 91% of solo SCA performance (F1=0.907) while gaining VD capability. We find that interference is asymmetric: VD is more sensitive to SCA adapter weight than vice versa. Equal high lambdas (\(\lambda_{\text{SCA}} = \lambda_{\text{VD}} = 1.0\)) cause catastrophic degradation on both tasks. Three Pareto-optimal configurations span the trade-off space for practical deployment. Our results also document that VD dataset quality, not model capacity, is the primary bottleneck: switching from BigVul (F1=0.483) to PrimeVul (F1=0.732) on the same model produced the largest improvement.
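The weighted linear combination described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes each LoRA adapter contributes a rank-\(r\) delta weight \(\Delta W_i = B_i A_i\) per target layer, and that merging forms \(\Delta W = \lambda_{\text{SCA}} \Delta W_{\text{SCA}} + \lambda_{\text{VD}} \Delta W_{\text{VD}}\); the adapter matrices here are random placeholders.

```python
import numpy as np

def merge_lora(adapters, lambdas):
    """Weighted linear combination of LoRA delta weights for one layer.

    adapters: list of (A, B) pairs, where delta_W = B @ A (rank-r LoRA).
    lambdas:  per-adapter scalar weights (e.g. lambda_SCA, lambda_VD).
    Returns the merged delta-weight matrix: sum_i lambda_i * B_i @ A_i.
    """
    return sum(lam * (B @ A) for (A, B), lam in zip(adapters, lambdas))

# Toy example: two hypothetical rank-4 adapters on a 16x16 projection.
rng = np.random.default_rng(0)
sca = (rng.normal(size=(4, 16)), rng.normal(size=(16, 4)))  # placeholder SCA adapter
vd = (rng.normal(size=(4, 16)), rng.normal(size=(16, 4)))   # placeholder VD adapter

# One lambda setting from the grid; in practice this is chosen per the Pareto analysis.
merged = merge_lora([sca, vd], lambdas=[0.7, 1.0])
assert merged.shape == (16, 16)
```

In practice the combination is applied per target module across all adapted layers; the asymmetric interference reported above corresponds to the merged \(\Delta W\) drifting further from \(\Delta W_{\text{VD}}\) than from \(\Delta W_{\text{SCA}}\) as the opposing lambda grows.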
