W-Attention-Residual U-Net Architecture for Massive Brain Tumor Segmentation

Abstract

Recently, there has been notable growth in the use of diverse magnetic resonance imaging (MRI) techniques for examining brain tissue; however, manually examining each generated image is a tedious task. Driven by the rapid growth of deep learning and its application to medical imaging challenges, U-Net and its variants have become state-of-the-art models for medical image segmentation and have demonstrated promising performance on MRI. Motivated by this success, this paper proposes W-AG-Res-U-Net, an architecture that links two Attention-Residual U-Nets, each integrated with a different Atrous Spatial Pyramid Pooling (ASPP) mechanism. A single atrous spatial pyramid pooling module cannot capture targets that are too large or too small when an image contains objects of varying sizes within the same class. We therefore designed an asymmetric feature extraction module, the Atrous Spatial Pyramid Module (ASPM1), which connects the encoder and decoder of the first net using dilated convolutions with low atrous rates to capture small objects in the image. In addition, ASPM2 is integrated into the second net to capture large objects by using dilated convolutions with larger atrous rates. Moreover, the residual unit in the proposed model is enhanced with a squeeze-and-excitation block that extracts adaptive features, suppresses irrelevant regions, and highlights features relevant to the segmentation task. We assessed our brain tumor segmentation model on the public Figshare dataset, achieving an accuracy of 99.8%, a DSC of 97.76%, an IoU of 97.36%, a sensitivity of 93.42%, and a precision of 95.14%. Our approach outperforms state-of-the-art techniques in segmentation outcomes.
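
To make the two building blocks described above more concrete, the sketch below shows, in PyTorch, a generic atrous spatial pyramid module with configurable dilation rates (low rates for ASPM1, higher rates for ASPM2) and a residual unit augmented with a squeeze-and-excitation block. The class names, channel counts, and dilation rates are illustrative assumptions for exposition, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ASPM(nn.Module):
    """Sketch of an atrous spatial pyramid module: parallel dilated
    convolutions whose outputs are concatenated and fused by a 1x1 conv.
    Low `rates` target small objects (ASPM1-style); higher `rates`
    target large objects (ASPM2-style). Rates here are assumptions."""
    def __init__(self, in_ch, out_ch, rates):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class SEResidualBlock(nn.Module):
    """Residual unit with a squeeze-and-excitation block that reweights
    channels, suppressing irrelevant regions and emphasising task-relevant
    features, as described in the abstract."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # squeeze: global context per channel
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                   # excitation: channel weights in [0, 1]
        )

    def forward(self, x):
        out = self.body(x)
        out = out * self.se(out)        # channel-wise recalibration
        return torch.relu(out + x)      # residual connection


# Illustrative usage: low rates for small targets, higher rates for large targets.
aspm1 = ASPM(in_ch=256, out_ch=256, rates=(1, 2, 3))
aspm2 = ASPM(in_ch=256, out_ch=256, rates=(6, 12, 18))
block = SEResidualBlock(channels=256)
x = torch.randn(1, 256, 32, 32)
print(aspm1(x).shape, aspm2(x).shape, block(x).shape)
```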
