Modelling Discrete States and Long-Term Dynamics in Functional Brain Networks

Abstract

Functional brain network dynamics underlie fundamental aspects of human cognition and behaviour, including memory, ageing, and a range of clinical disorders. It has been shown that ongoing brain network dynamics can be reliably inferred at fast, sub-second timescales from electrophysiological data using unsupervised machine learning. However, these methods often struggle with inherent trade-offs. For example, Hidden Markov Models (HMMs) have been used to infer categorical brain network states that provide good interpretability but do not model long-range temporal structure. Recently, deep learning approaches using recurrent neural networks (e.g., Dynamic Network Modes) have been proposed to model long-range temporal dependencies, but at the expense of interpretability. In this paper, we introduce Dynamic Network States (DyNeStE) to address this problem. This new model employs amortised Bayesian inference with recurrent neural networks to model long-range temporal structure and uses a Gumbel-Softmax distribution to enforce categorical states for greater interpretability. In both simulations and real resting-state magnetoencephalography data, DyNeStE was able to recover plausible dynamic brain network states and showed superior performance over the HMM in capturing long-range temporal dependencies in network dynamics. These dynamic networks were reproducible across independent data splits and built on established HMM-based findings. Together, these results highlight DyNeStE as an interpretable and temporally informative framework, capable of representing large-scale neural activity as discrete state transitions while capturing transient and long-range brain network dynamics.
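To make the Gumbel-Softmax mechanism mentioned in the abstract concrete, the sketch below shows relaxed categorical sampling of the general kind used to enforce near-discrete states while keeping the model differentiable. This is a minimal, generic NumPy illustration, not the authors' implementation; the function name, temperature value, and array shapes are illustrative assumptions.

```python
import numpy as np

def gumbel_softmax(logits, temperature=0.5, rng=None):
    """Draw a relaxed one-hot sample over discrete states.

    Adds Gumbel(0, 1) noise to the logits and applies a
    temperature-scaled softmax. As the temperature approaches zero,
    samples approach discrete one-hot state vectors, which is what
    makes the relaxation differentiable yet near-categorical.
    """
    rng = rng or np.random.default_rng()
    y = (logits + rng.gumbel(size=logits.shape)) / temperature
    # Numerically stable softmax over the state dimension.
    y -= y.max(axis=-1, keepdims=True)
    exp_y = np.exp(y)
    return exp_y / exp_y.sum(axis=-1, keepdims=True)

# Example: soft state assignments for 3 time points and 4 states.
logits = np.log(np.array([[0.7, 0.1, 0.1, 0.1]] * 3))
print(gumbel_softmax(logits, temperature=0.1))
```

In an amortised variational model of the kind the abstract describes, the logits would be produced by a recurrent encoder at each time point, and gradients can flow through this sampling step during training because the relaxation replaces the non-differentiable categorical draw.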
