OmiMRI: A Clinical-adaptive AI Framework for Format-Free Interpretation of Heterogeneous Brain MRIs

Abstract

Clinical brain MRI analysis faces a fundamental challenge: bridging the gap between oversimplified research developments and the inherent heterogeneity of real-world clinical practice. Quantifying this gap, our analysis of 26 MRI attributes across 22 clinical datasets reveals substantial heterogeneity across institutions and patients. Current AI tools typically require rigid input formats, necessitating extensive data exclusion or preprocessing that severely limits their real-world utility. Here we present OmiMRI, a unified, format-free framework designed to bridge this gap by enabling adaptive processing of arbitrary MRI combinations. Rather than defining a single standalone architecture, OmiMRI serves as a universal framework that integrates diverse pretrained 2D/3D convolutional and Transformer-based networks as feature encoders. By fusing features from variable inputs through self-attention and dynamic weighting, OmiMRI decouples clinical performance from rigid input specifications. Across 15 diverse classification, segmentation, and regression tasks, OmiMRI demonstrates robust input-scaling capabilities, yielding significant improvements over traditional fixed-input models. Notably, OmiMRI outperforms advanced medical imaging foundation models (e.g., BrainIAC and BrainSegFounder) in 94.4% of comparisons involving 2–4 input MRIs under consistent experimental conditions. Furthermore, the framework exhibits continuous performance gains through the incremental incorporation of multi-center data. In a clinically demanding, data-limited task distinguishing glioblastoma from metastasis, OmiMRI achieved diagnostic performance matching or exceeding that of senior neuroradiologists (AUROC 0.931 vs. 0.907, P > 0.05; AUPRC 0.973 vs. 0.930, P < 0.05), while providing interpretable attention maps aligned with radiological landmarks.
Together, these results establish OmiMRI as a clinically adaptive AI paradigm that transforms format-rigid modeling into flexible, expert-level systems capable of embracing the heterogeneity of real-world patient data.
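The abstract does not give implementation details of the self-attention fusion step, but the core idea of attention-pooling a variable number of per-sequence encoder outputs into one fixed-size representation can be sketched as follows. This is a minimal, hypothetical illustration (the function name, the fixed query vector, and the feature dimension are assumptions, not OmiMRI's actual design):

```python
import numpy as np

def fuse_variable_inputs(features, query):
    """Attention-pool a variable number of per-sequence feature vectors.

    features: (n, d) array, one d-dim embedding per available MRI sequence
              (n can differ from patient to patient).
    query:    (d,) vector standing in for a learned attention query.
    Returns the fused (d,) representation and the attention weights.
    """
    # Scaled dot-product scores, one per input sequence
    scores = features @ query / np.sqrt(features.shape[1])
    # Softmax turns scores into dynamic, input-dependent weights
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Weighted sum yields a fixed-size vector regardless of n
    fused = weights @ features
    return fused, weights

rng = np.random.default_rng(0)
d = 8
query = rng.standard_normal(d)
# Two patients with different numbers of available sequences (2 vs. 4):
for n in (2, 4):
    feats = rng.standard_normal((n, d))
    fused, w = fuse_variable_inputs(feats, query)
    print(n, fused.shape, float(w.sum()))
```

Because the fused vector has the same dimensionality for any number of inputs, downstream classification, segmentation, or regression heads never see a format change, which is what decouples performance from rigid input specifications.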
