Anatomy-guided, modality-agnostic segmentation of neuroimaging abnormalities

Abstract

Magnetic resonance imaging (MRI) offers multiple sequences that provide complementary views of brain anatomy and pathology. However, real-world datasets often exhibit variability in sequence availability due to clinical and logistical constraints. This variability complicates radiological interpretation and limits the generalizability of machine learning models that depend on consistent multimodal input. In this work, we propose an anatomy-guided and modality-agnostic framework for assessing disease-related abnormalities in brain MRI, leveraging structural context to enhance robustness across diverse input configurations. We introduce a novel augmentation strategy, Region ModalMix, which integrates anatomical priors during training to improve model performance when some modalities are absent or variable. We conducted extensive experiments on brain tumor segmentation using the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2020 dataset (n=369). The results demonstrate that our proposed framework outperforms state-of-the-art methods under various missing-modality conditions, most notably achieving an average 9.68 mm reduction in 95th percentile Hausdorff Distance and a 1.36% improvement in Dice Similarity Coefficient over baseline models when only one modality is available. Our method is model-agnostic, training-compatible, and broadly applicable to multi-modal neuroimaging pipelines, enabling more reliable abnormality detection in settings with heterogeneous data availability.
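The abstract does not give implementation details for Region ModalMix, but the idea it describes (mixing modalities region by region, guided by an anatomical parcellation, during training) can be sketched as follows. This is a minimal illustrative sketch only, assuming co-registered modality volumes stacked along a leading axis and an integer anatomical region map; the function name, arguments, and the uniform-random donor choice are all assumptions, not the authors' actual method.

```python
import numpy as np

def region_modal_mix(volumes, region_map, rng=None):
    """Hypothetical sketch of a Region ModalMix-style augmentation.

    volumes:    float array of shape (M, D, H, W) - M co-registered MRI modalities
    region_map: int array of shape (D, H, W)      - anatomical region labels

    For each target modality, each anatomical region is replaced by the
    same region taken from a randomly chosen donor modality, so the
    network sees anatomically coherent mixtures of the available inputs.
    """
    rng = rng or np.random.default_rng()
    n_modalities = volumes.shape[0]
    mixed = volumes.copy()
    for target in range(n_modalities):
        for region in np.unique(region_map):
            donor = rng.integers(n_modalities)   # pick a donor modality at random
            mask = region_map == region          # voxels belonging to this region
            mixed[target][mask] = volumes[donor][mask]
    return mixed
```

A training loop would apply this on the fly to each batch, optionally zeroing out modalities to simulate the missing-modality conditions evaluated in the paper.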
