A large-scale foundation model for bulk transcriptomes

Abstract

Large language models (LLMs) have emerged as powerful foundation models, driving breakthroughs in transcriptome analysis. However, current RNA-seq foundation models are pretrained exclusively on sparse single-cell RNA-seq (scRNA-seq) data, which typically detects only ∼3,000 genes per cell. This leaves a critical gap: no model is specifically designed for bulk transcriptomes, a fundamentally different modality capable of profiling ∼16,000 genes per sample. Here we propose BulkFormer, a large-scale foundation model for bulk transcriptome analysis. With 150 million parameters covering about 20,000 protein-coding genes, BulkFormer is pretrained on over 500,000 human bulk transcriptomic profiles. BulkFormer incorporates a hybrid encoder architecture, combining a graph neural network to capture explicit gene-gene interactions with a Performer module to model global expression dependencies. As a result, despite incurring much lower training costs than scRNA-seq foundation models, BulkFormer consistently outperforms them on all six downstream tasks: transcriptome imputation, disease annotation, prognosis modeling, drug response prediction, compound perturbation simulation, and gene essentiality scoring. Notably, BulkFormer not only enhances the discovery of novel clinical biomarkers but also uncovers latent disease mechanisms by imputing biologically meaningful gene expression. Collectively, these results demonstrate BulkFormer’s power as a versatile and robust framework for bulk transcriptome modeling and biomedical discovery, bridging a critical gap in the current foundation model landscape.
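
The hybrid encoder idea in the abstract (a graph layer over a known gene-gene interaction network for local structure, followed by a Performer-style layer for global dependencies) can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the GraphLayer, PerformerStyleAttention, and HybridEncoder classes, all dimensions, the random toy graph, and the simple ReLU feature maps standing in for the Performer's FAVOR+ random features are assumptions for exposition.

```python
import torch
import torch.nn as nn

class GraphLayer(nn.Module):
    """One message-passing step over a fixed gene-gene adjacency matrix (assumed)."""
    def __init__(self, dim: int, adj: torch.Tensor):
        super().__init__()
        # Row-normalize the adjacency so each gene averages its neighbors.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        self.register_buffer("adj_norm", adj / deg)
        self.lin = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, n_genes, dim); aggregate neighbor features, then transform.
        return torch.relu(self.lin(self.adj_norm @ h))

class PerformerStyleAttention(nn.Module):
    """Linear-complexity attention in the spirit of the Performer's kernelized
    attention; ReLU feature maps replace FAVOR+ random features (assumption)."""
    def __init__(self, dim: int):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.out = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        q = torch.relu(self.q(h)) + 1e-6   # positive feature maps
        k = torch.relu(self.k(h)) + 1e-6
        v = self.v(h)
        # Compute (K^T V) first: O(n * dim^2) instead of O(n^2 * dim).
        kv = torch.einsum("bnd,bne->bde", k, v)
        z = 1.0 / torch.einsum("bnd,bd->bn", q, k.sum(dim=1))
        attn = torch.einsum("bnd,bde,bn->bne", q, kv, z)
        return self.out(attn)

class HybridEncoder(nn.Module):
    """Graph layer for explicit interactions + global attention layer."""
    def __init__(self, n_genes: int, dim: int, adj: torch.Tensor):
        super().__init__()
        self.embed = nn.Linear(1, dim)   # per-gene expression value -> embedding
        self.graph = GraphLayer(dim, adj)
        self.global_attn = PerformerStyleAttention(dim)
        self.head = nn.Linear(dim, 1)    # e.g. reconstruct/impute expression

    def forward(self, expr: torch.Tensor) -> torch.Tensor:
        # expr: (batch, n_genes) bulk expression values.
        h = self.embed(expr.unsqueeze(-1))
        h = h + self.graph(h)            # local, graph-structured context
        h = h + self.global_attn(h)      # global expression dependencies
        return self.head(h).squeeze(-1)

# Toy usage with a random sparse interaction graph (illustrative only):
n_genes, dim = 2000, 64
adj = (torch.rand(n_genes, n_genes) < 0.001).float()
model = HybridEncoder(n_genes, dim, adj)
recon = model(torch.rand(8, n_genes))    # -> (8, 2000)
```

The design point this sketch illustrates is the complexity trade-off: computing (K^T V) before multiplying by Q makes attention linear rather than quadratic in sequence length, which is what makes a global layer over ∼20,000 genes per sample tractable.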
