Joint Optimization of Deep Attention-Based Multimodal MRI Feature Fusion with Shape-Aware Loss Function and Monte Carlo Dropout for Segmentation of Non-Ellipsoidal Brain Tumors
Abstract
Background: Accurate segmentation of brain tumors in magnetic resonance imaging (MRI) is essential for guided surgical planning, radiotherapy targeting, tumor progression monitoring, and tumor subtype classification. However, the task remains challenging because many tumor subtypes exhibit heterogeneous, non-ellipsoidal morphology. Traditional deep learning models often assume spatial regularity and lack the ability to model uncertainty, which limits their reliability in clinical practice.

Methods: We propose a novel model that jointly optimizes deep attention-based multimodal MRI feature fusion with a shape-aware, boundary-penalizing loss and Monte Carlo dropout-based uncertainty estimation for precise segmentation of non-ellipsoidal brain tumors. The model integrates spatially aligned features from four MRI modalities (T1, T1Gd, T2, and FLAIR) through a learned, attention-driven fusion mechanism. The shape-aware loss explicitly penalizes deviations in tumor morphology, while Monte Carlo dropout is applied at inference to estimate epistemic uncertainty. Together, these components encourage the preservation of fine-grained contours, support accurate delineation of non-ellipsoidal tumor shapes, and reduce anatomically implausible segmentations. When jointly optimized, they guide the model to focus on informative, high-confidence, structurally valid regions while suppressing unreliable predictions. The model was evaluated on three publicly available datasets (BraTS 2023, TCGA-LGG, and TCIA-REMBRANDT), with the Dice score as the primary metric.

Results: The proposed model achieved state-of-the-art performance, with Dice scores of 95.2% for whole tumor (WT), 91.7% for tumor core (TC), and 93.4% for enhancing tumor (ET) on the BraTS 2023 dataset, and it consistently outperformed competitive baselines such as nnUNet, mmFormer, UNETR++, and BayesFormer. The model's uncertainty maps aligned with regions of inter-radiologist variability and anatomical ambiguity, providing clinically interpretable outputs.

Conclusions: Jointly optimizing attention-based multimodal MRI fusion, a shape-aware loss, and Monte Carlo dropout-based uncertainty modeling improves both the segmentation accuracy and the interpretability of non-ellipsoidal brain tumor delineation. The proposed model offers a robust and trustworthy tool for clinical neuro-oncology, paving the way for safer, uncertainty-aware AI systems that support high-stakes clinical decision-making.
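To make the inference-time uncertainty estimation described in the Methods concrete, the PyTorch-style sketch below shows one common way to implement Monte Carlo dropout for a segmentation network: dropout layers are kept stochastic at test time, several forward passes are averaged, and per-voxel predictive entropy serves as the uncertainty map. This is a minimal illustration under assumptions (a generic segmentation model with logits over classes in dimension 1, a sample count of 20, and entropy as the uncertainty measure), not the authors' released implementation.

```python
import torch
import torch.nn as nn


def enable_mc_dropout(model: nn.Module) -> None:
    """Put the model in eval mode but keep dropout layers stochastic."""
    model.eval()
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d, nn.Dropout3d)):
            m.train()


@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Run n_samples stochastic forward passes.

    Returns the mean softmax probabilities (the segmentation estimate) and a
    per-voxel predictive-entropy map used here as an epistemic-uncertainty proxy.
    """
    enable_mc_dropout(model)
    # Stack probabilities from each stochastic pass: (n_samples, B, C, ...)
    probs = torch.stack(
        [torch.softmax(model(x), dim=1) for _ in range(n_samples)], dim=0
    )
    mean_probs = probs.mean(dim=0)  # averaged class probabilities
    # Predictive entropy over the class dimension, per voxel
    entropy = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum(dim=1)
    return mean_probs, entropy
```

In this sketch, high-entropy voxels would correspond to the low-confidence regions that the abstract reports as aligning with inter-radiologist variability; thresholding or visualizing the entropy map is one plausible way to produce the clinically interpretable uncertainty outputs described above.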