AbTune: Layer-wise selective fine-tuning of protein language models for antibodies

Abstract

Antibodies play crucial roles in immune defense and serve as key therapeutic agents for numerous diseases. The structural and sequence diversity of their antigen recognition loops, coupled with the scarcity of high-quality data, poses significant challenges for the development of generalizable predictive models. Here, we present a sequence-specific fine-tuning strategy for antibodies that partially bypasses the need for generalization. We evaluated this approach on three biologically relevant tasks: antibody structure prediction, zero-shot prediction of beneficial mutations in antibody-antigen complexes, and binding affinity prediction. In all three tasks, we observed substantial improvements over pLM baselines without fine-tuning, while using only a fraction of the computational and time resources required for full fine-tuning of antibody-specific pLMs. We further extended our method to layer-wise selective fine-tuning, with the aim of investigating how model size, fine-tuning duration, and fine-tuning depth collectively influence downstream performance. Fine-tuning 50–75% of LoRA layers was found to be optimal for small- to medium-sized pLMs, with the initial perplexity of each sequence providing some guidance for determining the best fine-tuning duration. Building on these insights, our approach achieves state-of-the-art performance in predicting beneficial mutations and binding affinity. These results establish layer-wise selective, sequence-specific fine-tuning as an efficient and practical strategy for antibody-related prediction tasks, providing a useful protocol for future applications in immunology.
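To make the abstract's description concrete, the sketch below illustrates what layer-wise selective, sequence-specific LoRA fine-tuning of a protein language model can look like in practice. It is not the authors' code: it assumes a HuggingFace ESM-2 checkpoint as the pLM, the `peft` library's `LoraConfig` with `layers_to_transform` for layer selectivity, and illustrative choices throughout (adapting the top 75% of layers, rank, target modules, learning rate, masking rate, and step count), with the fine-tuning duration in the paper reportedly guided by the sequence's initial perplexity rather than a fixed step budget.

```python
# Sketch of layer-wise selective, sequence-specific LoRA fine-tuning of a pLM.
# Assumptions (not from the paper): ESM-2 as the pLM, HuggingFace transformers + peft,
# adapters on the top 75% of layers, and illustrative hyperparameters.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
from peft import LoraConfig, get_peft_model

MODEL_NAME = "facebook/esm2_t12_35M_UR50D"  # 12-layer ESM-2; other sizes also work

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)

# Layer-wise selectivity: place LoRA adapters on ~75% of transformer layers
# (here the top 9 of 12; which layers to adapt is an illustrative choice).
num_layers = model.config.num_hidden_layers
selected_layers = list(range(num_layers - int(0.75 * num_layers), num_layers))

lora_config = LoraConfig(
    r=8,                                 # adapter rank (illustrative)
    lora_alpha=16,
    target_modules=["query", "value"],   # attention projections in the ESM-2 implementation
    layers_to_transform=selected_layers, # restrict adapters to the selected layers
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()       # only the selected LoRA weights are trainable

# Sequence-specific fine-tuning: continue masked-language-model training on the
# single antibody sequence of interest (truncated heavy-chain example).
sequence = "EVQLVESGGGLVQPGGSLRLSCAAS"
input_ids = tokenizer(sequence, return_tensors="pt")["input_ids"]
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for step in range(50):  # fixed budget here; the paper ties duration to initial perplexity
    # Randomly mask ~15% of residue positions, skipping the special tokens at both ends.
    mask = torch.rand(input_ids.shape) < 0.15
    mask[:, 0] = False
    mask[:, -1] = False
    masked_input = input_ids.clone()
    masked_input[mask] = tokenizer.mask_token_id
    labels = input_ids.clone()
    labels[~mask] = -100                 # compute loss only on masked positions

    loss = model(input_ids=masked_input, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

After this adaptation step, the model's per-sequence likelihoods or embeddings would feed the downstream tasks named in the abstract (structure prediction, zero-shot mutation scoring, affinity prediction); those heads are outside the scope of this sketch.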
