GoLoCo-Net: Global-Local Guided Contextual Attention Network for Medical Image Segmentation

Abstract

Accurate medical image segmentation plays a vital role in assisting diagnosis with quantifiable visual evidence. Because of the complex structures and diverse patterns in medical images, it is crucial to capture both short- and long-range pixel relations. While transformers are adept at modeling long-range spatial dependencies in images, they struggle to learn local pixel relationships. To address this, we propose a deep learning network named GoLoCo-Net that incorporates a dual-decoder structure. Specifically, one decoder contains a Contextual Attention Feature Enhancement (CAFE) module that enhances features to capture broader local and global contexts, while the other uses a Global-Guide-Local Feature (GGLF) module that leverages high-level features to enrich low-level features with global context. The proposed method is evaluated on two dynamic MRI datasets and one multi-organ CT dataset. Experimental results show that the model achieves state-of-the-art performance across all three datasets. The code is available at https://github.com/Yhe9718/GoLoCoNet.
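
The abstract only outlines the dual-decoder idea, so the sketch below is a minimal, hypothetical PyTorch illustration of that structure rather than the authors' implementation (which is at the GitHub link above). The internals of the CAFE and GGLF stand-ins (a simple channel-attention block and a global-feature gate) and all layer sizes are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CAFE(nn.Module):
    """Hypothetical stand-in for the Contextual Attention Feature Enhancement
    module: a convolution followed by channel attention (the real design is
    not specified in the abstract)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = self.conv(x)
        return x * self.attn(x)


class GGLF(nn.Module):
    """Hypothetical stand-in for the Global-Guide-Local Feature module:
    upsampled high-level (global) features gate the low-level features."""
    def __init__(self, low_ch, high_ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(high_ch, low_ch, 1), nn.Sigmoid())

    def forward(self, low, high):
        high = F.interpolate(high, size=low.shape[2:], mode="bilinear",
                             align_corners=False)
        return low + low * self.gate(high)


class DualDecoderNet(nn.Module):
    """Skeleton of a dual-decoder segmentation network: a shared encoder,
    one decoder path using CAFE, one using GGLF, with their logits fused."""
    def __init__(self, in_ch=1, num_classes=2, base=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU())

        self.cafe = CAFE(base * 2)
        self.dec_a = nn.Conv2d(base * 2, num_classes, 1)

        self.gglf = GGLF(base, base * 2)
        self.dec_b = nn.Conv2d(base, num_classes, 1)

    def forward(self, x):
        low = self.enc1(x)                # low-level features
        high = self.enc2(self.down(low))  # high-level (global) features

        # Decoder A: contextual attention enhancement, upsampled to input size
        a = self.dec_a(self.cafe(high))
        a = F.interpolate(a, size=x.shape[2:], mode="bilinear", align_corners=False)

        # Decoder B: global features guide the low-level features
        b = self.dec_b(self.gglf(low, high))

        return a + b                      # fused segmentation logits


if __name__ == "__main__":
    net = DualDecoderNet()
    print(net(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 2, 64, 64])
```

Fusing the two decoder outputs by simple addition is one of several plausible choices; the paper may combine them differently.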
