Neuro-BOTs: Biologically-Informed Transformers for Brain Imaging Analysis
Abstract
Transformer models have reshaped natural language processing by using attention mechanisms to model relationships across sequences. Here, we adapt this architecture to brain imaging with Neurobiologically-Optimized Transformers (Neuro-BOTs), a framework that embeds prior neurobiological knowledge directly into the model's attention layers. Rather than learning attention weights solely from data, Neuro-BOTs incorporate fixed spatial filters derived from brain maps, such as neurotransmitter distributions, mitochondrial density, or anatomical connectivity. These priors guide how the model attends to functional MRI features across brain regions. We evaluate Neuro-BOTs on three classification tasks using resting-state fMRI. In a Parkinson's disease dataset, incorporating a noradrenergic filter improves classification accuracy from 71.3% to 89.7%, suggesting that early-stage noradrenergic dysfunction is a key discriminative signal. To assess specificity, we test the model on healthy ageing datasets, where no single biological system should dominate, and find no spurious performance gains across diverse filters and configurations. To assess sensitivity, we apply the model to two small datasets measuring LSD-induced brain responses, where we hypothesised that knowledge of the drug's pharmacological profile (serotonergic priors) would improve classification despite the limited sample size. These results show that embedding biologically meaningful priors into Transformer architectures enhances both accuracy and interpretability across a wide range of contexts and applications. More broadly, Neuro-BOTs provide a principled way to integrate multiscale brain knowledge into deep learning models, enabling new forms of biologically grounded inference in clinical prediction and cross-species translation.
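To make the mechanism concrete, the sketch below shows one plausible way a fixed biological map could enter an attention layer. It is an illustration under stated assumptions, not the authors' implementation: it assumes the prior is a per-region scalar map (e.g., receptor density on a parcellation), rescaled to [0, 1] and added as a log-space bias to the attention logits, which is equivalent to multiplying the attention weights by the prior before renormalisation. The class name PriorGuidedAttention, the 100-parcel map, and all dimensions are hypothetical.

```python
# Minimal sketch of prior-guided attention for region-wise fMRI features.
# Assumptions (not from the paper): input is (batch, regions, features),
# and the biological prior enters as a fixed additive bias on the logits.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PriorGuidedAttention(nn.Module):
    def __init__(self, dim: int, prior_map: torch.Tensor):
        """prior_map: (n_regions,) fixed brain map, e.g. a receptor density."""
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Rescale the map to [0, 1] and store it as a buffer: it moves with
        # the module (device/dtype) but receives no gradient updates.
        prior = (prior_map - prior_map.min()) / (prior_map.max() - prior_map.min() + 1e-8)
        self.register_buffer("prior_bias", torch.log(prior + 1e-8))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_regions, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        logits = q @ k.transpose(-2, -1) / (x.shape[-1] ** 0.5)
        # Fixed prior biases the logits over key positions (regions), steering
        # attention toward regions the biological map emphasises.
        logits = logits + self.prior_bias  # broadcasts over (batch, query, key)
        attn = F.softmax(logits, dim=-1)
        return attn @ v

# Usage with a placeholder "noradrenergic" map over 100 parcels:
prior = torch.rand(100)                       # stand-in for a real brain map
layer = PriorGuidedAttention(dim=64, prior_map=prior)
out = layer(torch.randn(8, 100, 64))          # (batch=8, regions=100, dim=64)
print(out.shape)                              # torch.Size([8, 100, 64])
```

Adding the log-prior to the logits, rather than masking features directly, keeps the operation differentiable and lets data-driven attention and the fixed biological filter combine multiplicatively after the softmax.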