Quantifying Structure-Function Coupling in the Human Brain using Variational Graph Contrastive Learning

Abstract

This study proposes a novel method based on variational graph contrastive learning to quantify structure–function coupling at the level of individual brain regions. We obtained whole-cortex structural connectivity matrices from publicly available studies and constructed matched functional connectivity matrices using resting-state functional MRI (rs-fMRI) from the WU-Minn Human Connectome Project (HCP). The core of the model is a dual-branch variational graph convolutional network, which aligns the latent representations of the same brain region across structural and functional modalities via contrastive learning, augmented with distance constraints and regularization. We define a region's structure–function coupling (SFC) as the Gaussian-kernel similarity between its structural and functional latent representations. The results indicate that SFC is strongest in the Visual network and weakest in the Orbito-affective network, and its spatial distribution aligns with known principles of cognitive hierarchy. Further ablation and single-run experiments validate the effectiveness of the model components and the robustness of the SFC metric. This study provides a new computational framework for extracting stable multimodal coupling features from complex brain network data.
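As a minimal sketch of the coupling definition described in the abstract (not the authors' released code), the snippet below computes a Gaussian-kernel similarity between each region's structural and functional latent embeddings. The latent dimensionality, the kernel bandwidth `sigma`, and the names `z_struct` / `z_func` are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel_coupling(z_struct: np.ndarray, z_func: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Region-wise structure-function coupling as a Gaussian (RBF) kernel
    between matched structural and functional latent embeddings.

    z_struct, z_func : (n_regions, latent_dim) latent representations
    sigma            : kernel bandwidth (assumed hyperparameter)
    """
    # Squared Euclidean distance between the two embeddings of each region
    sq_dist = np.sum((z_struct - z_func) ** 2, axis=1)
    # Gaussian kernel maps distance to a similarity in (0, 1]
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

# Toy example: 360 cortical regions embedded in a 64-dimensional latent space
rng = np.random.default_rng(0)
z_s = rng.standard_normal((360, 64))
z_f = rng.standard_normal((360, 64))
sfc = gaussian_kernel_coupling(z_s, z_f)
print(sfc.shape)  # (360,): one coupling value per region
```

Under this reading, higher SFC values indicate regions whose structural and functional embeddings sit close together in the shared latent space learned by the contrastive objective.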