Hybrid Swin Transformer EfficientNet U-Net Model for Enhanced Brain Tumor Segmentation

Abstract

Accurate segmentation of brain tumors from MRI is essential for diagnosis, treatment planning, and cognitive preservation during neurosurgery. In this article, we present a novel hybrid deep learning architecture that integrates a Swin Transformer encoder, an EfficientNet3D lightweight feature extractor, and a U-Net-inspired decoder. The model combines the global contextual representation ability of transformers with the efficiency and spatial precision of convolutional networks. Our model was trained and evaluated on the BraTS2020 dataset using multimodal MRI inputs (T1, T1ce, T2, FLAIR) and achieved a mean Dice Similarity Coefficient (DSC) of 0.7086, with subregion-wise scores of 0.8590 (Whole Tumor), 0.6551 (Enhancing Tumor), and 0.6117 (Tumor Core). These results outperform baseline CNN-based architectures and demonstrate the strength of our approach in delineating heterogeneous tumor structures. The segmentation output not only enhances radiological assessment but also supports surgical planning near functionally critical areas of the brain, reducing the risk of cognitive impairment. This two-stream hybrid network offers an effective and robust solution for high-fidelity brain tumor segmentation, with strong potential for clinical adoption in neuro-oncologic workflows.
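The Dice Similarity Coefficient reported above measures overlap between a predicted mask A and a ground-truth mask B as DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch of how such a score is computed for binary segmentation masks (the exact evaluation protocol of the paper, e.g. per-subregion mask construction, is an assumption here):

```python
def dice_coefficient(pred, target):
    """Dice Similarity Coefficient between two binary masks.

    pred, target: equal-length sequences of 0/1 voxel labels
    (e.g. a flattened 3D segmentation mask for one tumor subregion).
    """
    # |A ∩ B|: voxels labeled 1 in both masks
    intersection = sum(p * t for p, t in zip(pred, target))
    # |A| + |B|: total foreground voxels across both masks
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total


# Toy example: 1 overlapping voxel, 3 foreground voxels in total
print(dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0]))  # → 0.666...
```

Subregion scores such as the 0.8590 Whole Tumor figure are obtained by applying this computation to the corresponding binary subregion mask for each subject, then averaging across the test set.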
