A Unified Uncertainty-Aware Deep Learning Framework for Multi-Modal Brain Tumor Segmentation and Classification
Abstract
Brain tumor detection remains a major challenge in medical image analysis owing to the heterogeneous appearance of tumors, low contrast, and ambiguous boundaries across imaging modalities. This work presents an integrated framework for brain tumor detection, segmentation, and classification on CT, MRI, and Figshare data. Adaptive Neuro-Fuzzy Contrast Harmonization (ANFCH) provides intelligent preprocessing that enhances and sharpens contrast without losing key structural information. For accurate tumor delineation, an Uncertainty-Aware Dual Attention Tumor Segmentation Network (UADAT-Net) is proposed, combining channel attention and spatial attention with Bayesian uncertainty modeling to improve robustness and boundary accuracy. After segmentation, Hybrid Spatial-Spectral Tumor Representation Learning (HSSTRL) extracts discriminative spatial- and frequency-domain features that capture tumor morphology and texture. Classification is performed by a Self-Regularized Ensemble Capsule Network (SREC-Net), which preserves hierarchical spatial relationships and sharpens class discrimination through ensemble-based regularization. In addition, Groupers and Moray Eels-based Hyperparameter Tuning (GME-HT) selects model parameters so as to stabilize convergence and improve generalization. Large-scale experiments on the CT, MRI, and Figshare datasets demonstrate the efficacy of the proposed framework. The model achieves accuracies of 98.80% on CT, 98.90% on MRI, and 99.20% on Figshare data, with high precision, recall, specificity, and AUC compared with existing methods, including EfficientNetV2, Vision Transformer (ViT), the Multiscale Deformable Attention Module (MS-DAM), Gradient Vector Flow (GVF), and Gray-Level Co-occurrence Matrix (GLCM) features.
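The abstract does not give implementation details for UADAT-Net, so the following is only a minimal NumPy sketch of the general idea it names: a channel-attention gate, a spatial-attention gate, and Monte Carlo dropout as a common stand-in for Bayesian uncertainty modeling. All function names, shapes, and the dropout-based uncertainty estimate are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W); gate each channel by its global average response
    gap = feat.mean(axis=(1, 2))              # (C,)
    return feat * sigmoid(gap)[:, None, None]

def spatial_attention(feat):
    # gate each spatial location by the channel-mean activation
    m = sigmoid(feat.mean(axis=0))            # (H, W)
    return feat * m[None, :, :]

def mc_dropout_predict(feat, n_samples=20, p=0.5):
    # Monte Carlo dropout (a generic stand-in for the paper's Bayesian
    # modeling): average stochastic forward passes; the per-pixel variance
    # acts as an uncertainty map over the segmentation output.
    preds = []
    for _ in range(n_samples):
        mask = rng.random(feat.shape) > p
        x = channel_attention(feat * mask / (1.0 - p))
        x = spatial_attention(x)
        preds.append(sigmoid(x.mean(axis=0)))  # toy per-pixel score
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)

feat = rng.standard_normal((8, 16, 16))        # toy feature volume
mean_map, unc_map = mc_dropout_predict(feat)
print(mean_map.shape, unc_map.shape)           # (16, 16) (16, 16)
```

High-variance pixels would typically fall on ambiguous tumor boundaries, which is where the abstract claims the uncertainty modeling helps.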
Comparative and ablation studies further confirm the superior performance, lower computational complexity, and higher segmentation accuracy relative to state-of-the-art methods. These outcomes highlight the clinical utility of the proposed framework for reliable and effective brain tumor analysis.
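The GME-HT procedure itself is not specified here; as a rough illustration of what population-based hyperparameter tuning looks like in general, the sketch below uses a generic best-guided random search (not the actual Groupers and Moray Eels update rules) over a toy objective. The search ranges, the surrogate objective, and every name are hypothetical; a real run would evaluate validation loss from model training.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(params):
    # Stand-in for validation loss; minimized near lr=1e-3, dropout=0.3.
    lr, dropout = params
    return (np.log10(lr) + 3.0) ** 2 + (dropout - 0.3) ** 2

def population_search(n_agents=10, n_iters=30):
    # Initial population: log-uniform learning rate, uniform dropout rate.
    pop = np.column_stack([10.0 ** rng.uniform(-5, -1, n_agents),
                           rng.uniform(0.0, 0.8, n_agents)])
    best = min(pop, key=objective)
    for _ in range(n_iters):
        # Pull each agent toward the current best with random perturbation,
        # then clip back into the search bounds.
        pop = pop + 0.3 * (best - pop) + rng.normal(0.0, [1e-3, 0.05], pop.shape)
        pop[:, 0] = np.clip(pop[:, 0], 1e-5, 1e-1)
        pop[:, 1] = np.clip(pop[:, 1], 0.0, 0.8)
        cand = min(pop, key=objective)
        if objective(cand) < objective(best):
            best = cand
    return best

lr, dropout = population_search()
```

The claimed benefits of GME-HT, more stable convergence and better generalization, correspond here to the population contracting around a low-loss region instead of committing to a single random configuration.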