Efficient Deep Learning Models for Predicting Individualized Task Activation from Resting-State Functional Connectivity


Abstract

Deep learning has shown promise in predicting task-evoked brain activation patterns from resting-state fMRI. In this study, we replicate the state-of-the-art BrainSurfCNN model using data from the Human Connectome Project, and explore biologically motivated frameworks to improve prediction performance and computational efficiency. Specifically, we evaluate two model variants: BrainSERF, which integrates a Squeeze-and-Excitation attention mechanism into the U-Net backbone, and BrainSurfGCN, a lightweight graph neural network architecture that leverages mesh topology for efficient message passing. Both models yield prediction performance comparable to that of BrainSurfCNN, with BrainSERF offering modest improvements in subject identification accuracy and BrainSurfGCN delivering substantial reductions in model size and training time. We also investigate factors contributing to interindividual variability in prediction accuracy and identify task performance and data quality as significant modulators. Our findings highlight new architectural avenues for improving the scalability of brain decoding models and underscore the need to consider individual variability when evaluating prediction fidelity.
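
The two architectural ideas named in the abstract, channel-wise Squeeze-and-Excitation attention and message passing over the cortical surface mesh, can be illustrated with minimal PyTorch modules. The sketch below is illustrative only: the class names (SqueezeExcitation, MeshGraphConv), the (batch, vertices, channels) tensor layout, and the reduction ratio are assumptions for demonstration, not the actual BrainSERF or BrainSurfGCN implementations.

```python
import torch
import torch.nn as nn


class SqueezeExcitation(nn.Module):
    """Channel attention: squeeze (pool over vertices) then excite (gated channel rescaling)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, vertices, channels) -- features on a cortical surface mesh
        s = x.mean(dim=1)                # squeeze: global average over mesh vertices
        w = self.gate(s).unsqueeze(1)    # excite: per-channel weights in (0, 1)
        return x * w                     # rescale each feature channel


class MeshGraphConv(nn.Module):
    """One round of message passing over a fixed surface-mesh adjacency."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.lin_self = nn.Linear(in_channels, out_channels)
        self.lin_neigh = nn.Linear(in_channels, out_channels)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (batch, vertices, in_channels); adj: (vertices, vertices) row-normalized adjacency
        neigh = torch.matmul(adj, x)     # aggregate features from mesh-edge neighbors
        return torch.relu(self.lin_self(x) + self.lin_neigh(neigh))


# Hypothetical usage with an fs_LR-like hemisphere mesh (~32k vertices, 64 channels):
# x = torch.randn(4, 32492, 64)
# adj = row-normalized mesh adjacency of shape (32492, 32492)
# y = MeshGraphConv(64, 64)(SqueezeExcitation(64)(x), adj)
```

Because the mesh adjacency is fixed and sparse, a message-passing layer of this kind carries far fewer parameters than a surface convolution of comparable receptive field, which is consistent with the reported reductions in model size and training time.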
