Decision Voting Based Multiscale Convolutional Learning of Brain Networks With Explainability
Abstract
The diagnosis of neurological disorders requires comprehensive frameworks that incorporate multimodal neuroimaging data while ensuring clinical interpretability. Recent neuroimaging research has focused on integrating brain structure and function to reveal prominent system-level alterations caused by brain disorders. This work presents a novel multiscale graph convolutional network (GCN) framework that integrates structural connectivity (SC) from diffusion tensor imaging and functional connectivity (FC) from resting-state fMRI at three anatomical scales. Our architecture employs softmax-based decision fusion for cross-modal, multiscale integration. The preprocessing pipeline improves connectivity representations through graph diffusion, topological sparsification, and noise augmentation. Evaluated on a schizophrenia classification dataset with five-fold cross-validation, our model achieves 71.59% accuracy, outperforming single-scale methods and conventional machine learning benchmarks. Explainability analysis reveals distinct dysconnectivity patterns in schizophrenia patients that overlap with biomarkers reported in the literature. The multiscale approach yields complementary insights: coarse scales capture global network changes, while finer scales identify localized subcortical disruptions. By combining diagnostic precision with biologically interpretable modeling, this work establishes a new paradigm for interpretable brain network analysis.
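The softmax-based decision fusion described above can be sketched minimally as follows. This is an illustrative assumption of how per-branch logits (e.g., one branch per anatomical scale and modality) might be combined by averaging class probabilities; the function names, branch count, and shapes are hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def decision_fusion(branch_logits):
    """Fuse predictions from several branch classifiers by averaging
    their softmax probabilities (a common decision-fusion scheme)."""
    probs = [softmax(logits) for logits in branch_logits]  # per-branch class probabilities
    fused = np.mean(probs, axis=0)                         # average across branches
    return fused, fused.argmax(axis=-1)                    # fused probabilities and predicted class

# Hypothetical example: 6 branches (3 scales x 2 modalities),
# a batch of 2 subjects, 2 classes (e.g., control vs. patient).
rng = np.random.default_rng(0)
logits = [rng.normal(size=(2, 2)) for _ in range(6)]
fused_probs, preds = decision_fusion(logits)
```

Averaging probabilities (rather than raw logits) keeps each branch's contribution on a common [0, 1] scale, so no single branch with large logit magnitudes dominates the fused decision.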