cuBNM: GPU-Accelerated Brain Network Modeling
Abstract
Brain network modeling uses computer simulations to infer latent neural properties at micro- and mesoscales by fitting brain dynamic models to empirical data of individual subjects or groups. However, the computational cost of (individualized) model fitting is a major bottleneck, limiting the practical feasibility of this approach for larger cohorts and more complex models, and highlighting the need for scalable simulation implementations. Here, we introduce cuBNM, a Python package that leverages the parallel processing capabilities of graphics processing units to massively accelerate simulations of brain network models. We show that running simulations on graphics processing units is several hundred times faster than on central processing units. We demonstrate the usage of cuBNM by optimizing group-level and individualized low- and high-dimensional models. As examples of the utility of individualized models, we investigated the test-retest reliability and heritability of simulated and empirical measures in the Human Connectome Project dataset, and found simulated features to be fairly reliable and heritable. Overall, cuBNM provides an efficient framework for large-scale simulations of brain network models, facilitating investigations of latent neural processes across larger cohorts, denser networks, and higher-dimensional models, which were previously less feasible in practice.