GlioMODA: Robust Glioma Segmentation in Clinical Routine


Abstract

Background

Precise glioma segmentation in MRI is essential for accurate diagnosis, optimal treatment planning, and advancing clinical research. However, most deep learning approaches require complete, standardized MRI protocols that are frequently unavailable in routine clinical practice. This study presents and evaluates GlioMODA, a robust deep learning framework for automated glioma segmentation that delivers consistently high performance across varied and incomplete MRI protocols.

Methods

GlioMODA was trained and validated on the BraTS 2021 dataset (1,251 training, 219 testing cases), and its performance was systematically assessed across eleven clinically relevant MRI protocol combinations. Segmentation accuracy was evaluated using the Dice similarity coefficient (DSC) and panoptic quality metrics. Volumetric accuracy was benchmarked against manual ground truth, and statistical significance was established via Wilcoxon signed-rank tests with Benjamini–Yekutieli correction.
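The DSC reported above measures voxel-wise overlap between a predicted and a reference mask. A minimal illustrative re-implementation (not GlioMODA's own evaluation code; masks shown as flattened 0/1 voxel lists) looks like this:

```python
def dice(pred, truth):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).

    Returns 1.0 for perfect overlap; both-empty masks are
    conventionally scored as 1.0 to avoid division by zero.
    """
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0
    return 2.0 * intersection / total

# toy 6-voxel example: 2 overlapping voxels out of 3 + 3
pred = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0]
print(dice(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```

In practice the same formula is applied per tumor subregion (enhancing tumor, tumor core, whole tumor) on 3D volumes rather than toy lists.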

Results

GlioMODA demonstrated state-of-the-art segmentation accuracy across tumor subregions, maintaining robust performance with incomplete or heterogeneous MRI protocols. Protocols including both T1-weighted contrast-enhanced and T2-FLAIR sequences yielded volumetric differences versus manual ground truth that were not statistically significant for enhancing tumor (ET: median difference 55 mm³, p = 0.157) and whole tumor (WT: median difference –7 mm³, p = 1.0), and exhibited median DSC differences close to zero relative to the four-sequence reference protocol. Omitting either sequence led to substantial and significant volumetric errors.
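The reported p-values are adjusted for multiple comparisons with the Benjamini–Yekutieli procedure, which controls the false discovery rate under arbitrary dependence between tests. A pure-Python sketch of the standard step-up adjustment (an illustration of the textbook procedure, not the study's analysis script):

```python
def benjamini_yekutieli(pvals):
    """Return BY-adjusted p-values for a list of raw p-values."""
    m = len(pvals)
    # harmonic-sum penalty c(m) = sum_{k=1}^{m} 1/k distinguishes
    # BY from the plain Benjamini-Hochberg procedure
    c_m = sum(1.0 / k for k in range(1, m + 1))
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    # step-up pass: enforce monotonicity from the largest rank down
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m * c_m / rank)
        adjusted[i] = prev
    return adjusted

print(benjamini_yekutieli([0.01, 0.04, 0.03, 0.20]))
```

Equivalent adjusted values can be obtained with `statsmodels.stats.multitest.multipletests(pvals, method="fdr_by")`.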

Conclusions

GlioMODA facilitates reliable, automated glioma segmentation using a streamlined two-sequence protocol (T1-contrast + T2-FLAIR), supporting clinical workflow optimization and broader implementation of quantitative volumetry compatible with RANO 2.0 criteria. GlioMODA is published as an open-source, easy-to-use Python package at https://github.com/BrainLesion/GlioMODA/.

Key Points

  • T1-CE + T2-FLAIR maintains enhancing- and whole-tumor segmentation accuracy comparable to four-sequence MRI.

  • Consistent volumes with T1-CE + T2-FLAIR support reliable RANO 2.0 assessment.

  • Open-source GlioMODA (models + code) supports rapid integration.

Importance of the Study

Automated glioma segmentation is limited in practice by incomplete or heterogeneous MRI protocols. GlioMODA directly addresses this barrier by delivering consistent accuracy across 11 clinically relevant sequence combinations and identifying a streamlined protocol (T1-contrast and T2-FLAIR) whose enhancing- and whole-tumor volumes do not differ significantly from the expert reference. This enables shorter scans and reproducible volumetry compatible with RANO 2.0, facilitating reliable response assessment in trials and routine care. By releasing trained models and code as an easy-to-use open-source package, this work enables external validation and integration into neuro-oncology workflows.
