LoFT-TCR: A LoRA-based Fine-tuning Framework for TCR-Antigen Binding Prediction

Abstract

T cells recognize and eliminate diseased cells by binding their T cell receptors (TCRs) to short endogenous peptides (antigens) presented on the cell surface. Such interactions are central to adaptive immunity, yet current experimental approaches to identify TCR-antigen binding pairs remain labor-intensive and constrained by limited reagents. Here, we propose LoFT-TCR, a low-rank adaptation (LoRA)-based fine-tuning framework for TCR-antigen binding prediction. To capture precise and informative sequence representations, we first fine-tuned the protein large language model ESM-2 on antigen-specific TCR datasets using LoRA. We then constructed a heterogeneous interaction graph in which nodes carry the resulting sequence features and edges encode interaction relationships. By leveraging a graph learning framework, LoFT-TCR integrates sequence and topological information to strengthen prediction. Systematic experiments showed that fine-tuning ESM-2 markedly improves the extraction of discriminative sequence representations, which is critical for accurate TCR specificity prediction. Moreover, LoFT-TCR consistently outperformed state-of-the-art methods on both TCR-antigen binding prediction and TCR specificity discrimination tasks. These results demonstrate that LoFT-TCR achieves substantial improvements in predictive accuracy and holds promise for advancing personalized T cell-based immunotherapy.
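
To make the pipeline concrete, below is a minimal sketch of the two main steps described in the abstract, assuming the Hugging Face transformers and peft libraries for the LoRA step and PyTorch Geometric for the graph step. The checkpoint name, LoRA rank and target modules, mean pooling, the example CDR3β and peptide sequences, and the "binds" edge type are illustrative assumptions rather than the authors' exact configuration, and the fine-tuning loop is omitted.

    # Sketch: inject LoRA adapters into ESM-2, embed TCR/antigen sequences,
    # and assemble a small heterogeneous interaction graph.
    import torch
    from transformers import AutoTokenizer, EsmModel
    from peft import LoraConfig, get_peft_model
    from torch_geometric.data import HeteroData

    checkpoint = "facebook/esm2_t12_35M_UR50D"   # assumed checkpoint; the paper's choice may differ
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    backbone = EsmModel.from_pretrained(checkpoint)

    # Low-rank adapters on the attention projections; rank and alpha are illustrative.
    lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.1,
                          target_modules=["query", "value"])
    model = get_peft_model(backbone, lora_cfg)
    model.print_trainable_parameters()           # only the LoRA weights are trainable

    def embed(seq: str) -> torch.Tensor:
        """Mean-pooled per-sequence embedding from the (adapted) ESM-2 encoder."""
        tokens = tokenizer(seq, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**tokens).last_hidden_state   # shape (1, L, d)
        return hidden.mean(dim=1).squeeze(0)             # shape (d,)

    tcr_vec = embed("CASSLGQAYEQYF")   # example CDR3beta sequence (hypothetical)
    pep_vec = embed("GILGFVFTL")       # example antigen peptide (hypothetical)

    # Heterogeneous interaction graph: TCR and antigen node types, with "binds"
    # edges for observed pairs; a GNN would then propagate over this structure.
    graph = HeteroData()
    graph["tcr"].x = tcr_vec.unsqueeze(0)
    graph["antigen"].x = pep_vec.unsqueeze(0)
    graph["tcr", "binds", "antigen"].edge_index = torch.tensor([[0], [0]])

In such a setup, only the injected low-rank adapter weights would be updated during fine-tuning, keeping the number of trainable parameters small relative to the full ESM-2 backbone while still adapting the representations to antigen-specific TCR data.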
