Survival Risk Stratification in Glioma from Multimodal MRI: An Interpretable Tool for Preoperative Treatment Planning

Abstract

Background: Glioma exhibits highly variable survival outcomes, yet current prognostic tools lack accuracy and interpretability. This study aimed to develop and validate an interpretable machine learning model for preoperative survival risk stratification in glioma using multimodal MRI.

Methods: We used the BraTS2020 survival dataset, comprising 236 patients with recorded overall survival. Patients were stratified into binary (≤365 vs. >365 days) and ternary (<300, 300–450, >450 days) groups and randomly split 7:3 into training and test sets. Radiomic features were extracted from three regions of interest: the necrotic core (T1ce), edema (FLAIR), and the whole tumor (T1ce + FLAIR) with a 3 mm peritumoral extension. We trained both single-task and multi-task CNNs for survival prediction, then combined their outputs with the radiomic features via late fusion. Model interpretability was evaluated using Grad-CAM and SHAP.

Results: Among the four radiomic models, the one based on the whole-tumor region from contrast-enhanced T1 (T1ce) MRI performed best, achieving an AUC of 0.748 and accuracy of 0.750 for binary classification, and an AUC of 0.683 and accuracy of 0.479 for ternary classification. Among the deep learning approaches, the single-task CNN outperformed the multi-task variant, with an AUC of 0.720 and accuracy of 0.688 (binary) and an AUC of 0.662 and accuracy of 0.438 (ternary). The late-fusion ensemble integrating radiomics and deep learning yielded the highest overall performance: AUC = 0.781 and accuracy = 0.792 in binary classification, and AUC = 0.709 and accuracy = 0.604 in ternary classification. Interpretability analyses with Grad-CAM and SHAP confirmed that model decisions were grounded in clinically relevant imaging patterns and key radiomic features, supporting the model's transparency and clinical plausibility.

Conclusion: Our late-fusion model combining radiomics and deep learning achieves robust survival prediction in glioma from multimodal MRI. With dual interpretability from Grad-CAM and SHAP, it offers clinically transparent insights to support preoperative risk stratification and treatment planning.
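To make the survival binning concrete, here is a minimal Python sketch of the label construction and the 7:3 split described in Methods. The toy survival values, the placeholder feature matrix, the random seed, the use of scikit-learn with class stratification, and the handling of the 450-day boundary are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical overall-survival values in days (toy data, not from the study).
survival_days = np.array([120, 410, 500, 365, 700, 280])

# Binary grouping: 0 for <=365 days, 1 for >365 days.
binary_labels = (survival_days > 365).astype(int)

# Ternary grouping: 0 for <300, 1 for 300-450, 2 for >450 days.
# np.digitize assigns exactly 450 to the upper bin; the paper does not
# state how it handles that boundary.
ternary_labels = np.digitize(survival_days, bins=[300, 450])

X = np.arange(len(survival_days)).reshape(-1, 1)  # placeholder feature matrix

# 7:3 random split; stratification and seed are assumptions for illustration.
X_train, X_test, y_train, y_test = train_test_split(
    X, binary_labels, test_size=0.3, random_state=42, stratify=binary_labels)
```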
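The 3 mm peritumoral extension of the whole-tumor mask could be implemented with a morphological dilation before feature extraction. Below is a sketch using SimpleITK and PyRadiomics with default extractor settings; the file names are hypothetical. BraTS volumes are resampled to 1 mm isotropic spacing, so a 3 mm margin corresponds to a 3-voxel dilation radius, and the segmentation labels follow the BraTS convention (1 = necrotic core, 2 = edema, 4 = enhancing tumor).

```python
import SimpleITK as sitk
from radiomics import featureextractor  # pip install pyradiomics

# Hypothetical file names following the BraTS2020 naming scheme.
image = sitk.ReadImage("BraTS20_Training_001_t1ce.nii.gz")
seg = sitk.ReadImage("BraTS20_Training_001_seg.nii.gz")

# Whole tumor = union of all tumor labels (1, 2, and 4).
whole_tumor = sitk.BinaryThreshold(seg, lowerThreshold=1, upperThreshold=4,
                                   insideValue=1, outsideValue=0)

# 3 mm peritumoral extension: with 1 mm isotropic voxels, dilate by 3 voxels.
whole_tumor_ext = sitk.BinaryDilate(whole_tumor, [3, 3, 3])

# Extract radiomic features from the extended mask (PyRadiomics defaults).
extractor = featureextractor.RadiomicsFeatureExtractor()
features = extractor.execute(image, whole_tumor_ext)
print(sum(k.startswith("original_") for k in features), "features extracted")
```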
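The abstract does not state the fusion rule, so the following assumes one common choice for a late-fusion ensemble: a weighted average of the class probabilities produced by the radiomics classifier and the CNN. The weight alpha is illustrative and would normally be tuned on the training set.

```python
import numpy as np

def late_fusion(p_radiomics, p_cnn, alpha=0.5):
    """Weighted average of two (n_samples, n_classes) probability matrices.

    alpha is the weight given to the radiomics branch; 1 - alpha goes to
    the CNN branch. The fusion rule itself is an assumption.
    """
    p_fused = alpha * np.asarray(p_radiomics) + (1.0 - alpha) * np.asarray(p_cnn)
    # A convex combination of valid probability vectors already sums to 1;
    # renormalizing simply guards against numerical drift.
    return p_fused / p_fused.sum(axis=1, keepdims=True)

# Toy example: binary survival probabilities for three test patients.
p_rad = np.array([[0.80, 0.20], [0.40, 0.60], [0.55, 0.45]])
p_cnn = np.array([[0.70, 0.30], [0.30, 0.70], [0.60, 0.40]])
print(late_fusion(p_rad, p_cnn).argmax(axis=1))  # fused class predictions
```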
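Grad-CAM localizes the regions that drive a CNN's prediction by weighting a convolutional layer's activations with spatially averaged gradients of the target class score. A generic PyTorch sketch for a 3D CNN follows; the paper's architecture is not described in the abstract, so model and target_layer are placeholders.

```python
import torch
import torch.nn.functional as F

def grad_cam_3d(model, target_layer, volume, class_idx):
    """Return a (D, H, W) Grad-CAM heatmap for `volume` of shape (1, C, D, H, W).

    `model` and `target_layer` stand in for the study's CNN and its last
    convolutional layer, neither of which is specified in the abstract.
    """
    activations, gradients = [], []
    fwd = target_layer.register_forward_hook(
        lambda mod, inp, out: activations.append(out))
    bwd = target_layer.register_full_backward_hook(
        lambda mod, gin, gout: gradients.append(gout[0]))

    model.eval()
    model.zero_grad()
    logits = model(volume)
    logits[0, class_idx].backward()
    fwd.remove()
    bwd.remove()

    act, grad = activations[0], gradients[0]           # (1, K, d, h, w)
    weights = grad.mean(dim=(2, 3, 4), keepdim=True)   # pooled gradients
    cam = F.relu((weights * act).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=volume.shape[2:],
                        mode="trilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```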
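On the radiomics side, SHAP attributes a prediction to individual features. The abstract does not name the radiomics classifier, so the sketch below substitutes a random forest on synthetic data purely to show the workflow; TreeExplainer applies only to tree-based models.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins for radiomic features and binary survival labels.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 10))
y_train = rng.integers(0, 2, size=100)

# The actual classifier used in the study is not specified in the abstract.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_train)

# shap.summary_plot(shap_values, X_train)  # per-feature impact overview
```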
