Joint representations from multi-view MRI-based learning support cognitive and functional performance domains

Abstract

Background

Multimodal MRI (sMRI, dMRI, rsfMRI) encodes complementary aspects of brain structure and function; principled joint representations promise more sensitive and interpretable markers of brain health than single-modality features.

Methods

We evaluate the Normative Neurological Health Embedding (NNHEmbed), a flexible multi-view framework that learns low-dimensional embeddings through constrained cross-modal similarity objectives. Models were trained and tested on the UK Biobank (n = 21,300) and assessed for transfer and longitudinal sensitivity in independent cohorts (Normative Neuroimaging Library, Alzheimer’s Disease Neuroimaging Initiative, Parkinson’s Progression Markers Initiative). A minimal sketch of what such an objective can look like follows.
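The sketch below is illustrative only, not the authors' implementation: it assumes a two-view setting (e.g., sMRI and dMRI feature vectors) with linear per-view encoders into a shared low-dimensional space, and a loss that trades per-view reconstruction fidelity against cross-modal alignment of paired embeddings. All names (TwoViewEmbed, joint_loss, lam) and the linear-encoder/cosine-alignment choices are assumptions standing in for whatever NNHEmbed actually uses.

```python
# Hypothetical sketch of a constrained cross-modal similarity objective
# over two views; NOT the NNHEmbed codebase.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoViewEmbed(nn.Module):
    def __init__(self, dim_a: int, dim_b: int, dim_z: int = 32):
        super().__init__()
        # Per-view linear encoder/decoder pairs mapping each modality's
        # features into (and back out of) a shared dim_z-dimensional space.
        self.enc_a, self.dec_a = nn.Linear(dim_a, dim_z), nn.Linear(dim_z, dim_a)
        self.enc_b, self.dec_b = nn.Linear(dim_b, dim_z), nn.Linear(dim_z, dim_b)

    def forward(self, xa, xb):
        za, zb = self.enc_a(xa), self.enc_b(xb)
        return za, zb, self.dec_a(za), self.dec_b(zb)

def joint_loss(xa, xb, za, zb, ra, rb, lam: float = 1.0):
    # Reconstruction fidelity for each view.
    recon = F.mse_loss(ra, xa) + F.mse_loss(rb, xb)
    # Cross-modal similarity: embeddings of the same subject from the two
    # views should align (cosine alignment as a stand-in constraint).
    sim = 1.0 - F.cosine_similarity(za, zb, dim=1).mean()
    return recon + lam * sim

# Usage on random stand-in data: 128 subjects, two feature views.
xa, xb = torch.randn(128, 200), torch.randn(128, 150)
model = TwoViewEmbed(dim_a=200, dim_b=150)
za, zb, ra, rb = model(xa, xb)
loss = joint_loss(xa, xb, za, zb, ra, rb, lam=0.5)
loss.backward()
```

Varying lam shifts the weight between reconstruction fidelity and cross-modal shared structure, the trade-off the Results refer to when describing the best configurations.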

Results

NNHEmbed produced compact, biologically interpretable components that (a) map to established neurocognitive systems (e.g., episodic memory, processing speed, sensorimotor/basal-ganglia circuits), (b) generalize across cohorts, and (c) capture within-subject change over time. The best configurations balance reconstruction fidelity against shared covariance, improving interpretability while preserving predictive utility. Case demonstrations illustrate individualized normative profiling across multiple visits.

Conclusions

NNHEmbed yields stable, transferable multimodal embeddings suitable for normative mapping and longitudinal monitoring. Software, NNHEmbed configurations, and derived bases are available for reproduction and reuse.
