SAFNet: A Spatially Adaptive Fusion Network for Dual-Domain Undersampled MRI Reconstruction
Abstract
Undersampled magnetic resonance imaging (MRI) reconstruction aims to minimize scanning time while maintaining image quality, improving patient comfort and clinical efficiency. Parallel reconstruction strategies that operate in both k-space and the image domain exploit dual-domain information to improve feature capture and reconstruction accuracy. However, most existing dual-domain fusion methods rely on straightforward techniques such as weighted fusion and cascade processing, neglecting differences in spatial image features and leaving dual-domain information underexploited. These methods also suffer from limited receptive-field scales, which restricts the network's ability to represent complex image structures. In this paper, we introduce a spatially adaptive fusion network (SAFNet) for dual-domain undersampled MRI reconstruction. SAFNet comprises two parallel reconstruction branches. A weighted shortcut module lets the network dynamically adjust its reconstruction strategy, improving its flexibility across diverse reconstruction scenarios. Spatially adaptive fusion modules, integrated into the decoder of each branch, fuse dual-domain features on a per-location basis, enhancing the extraction and use of intrinsically correlated features across the two domains. Furthermore, a dynamic perception initialization module in each branch's encoder enriches the network's receptive fields, strengthening its ability to capture useful information at different scales. Experimental results show that SAFNet achieves more accurate reconstruction and superior adaptability compared to several state-of-the-art methods. The framework presented in this paper offers valuable insights for image reconstruction and multimodal information fusion.
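To make the idea of spatially adaptive fusion concrete, the following is a minimal NumPy sketch of per-location fusion of two branch feature maps. It is an illustration only: the function name, the use of channel-wise energy as a fusion score, and the softmax weighting are assumptions, not the paper's actual module, which presumably learns its spatial weight maps with convolutional layers.

```python
import numpy as np

def spatially_adaptive_fuse(f_img, f_ksp):
    """Fuse image-branch and k-space-branch feature maps (C, H, W)
    with per-pixel weights. Hedged sketch: scores here are hand-crafted
    channel energies, whereas a learned module would predict them."""
    # Per-pixel "confidence" score for each branch: channel-wise energy.
    s_img = (f_img ** 2).sum(axis=0, keepdims=True)   # (1, H, W)
    s_ksp = (f_ksp ** 2).sum(axis=0, keepdims=True)   # (1, H, W)
    # Numerically stable two-way softmax -> spatial weight maps.
    m = np.maximum(s_img, s_ksp)
    w_img = np.exp(s_img - m)
    w_ksp = np.exp(s_ksp - m)
    z = w_img + w_ksp
    w_img, w_ksp = w_img / z, w_ksp / z
    # Convex combination at every spatial location.
    return w_img * f_img + w_ksp * f_ksp

# Toy usage: 8-channel 4x4 feature maps from the two branches.
rng = np.random.default_rng(0)
f_img = rng.standard_normal((8, 4, 4))
f_ksp = rng.standard_normal((8, 4, 4))
fused = spatially_adaptive_fuse(f_img, f_ksp)
print(fused.shape)
```

Because the weights vary per pixel, regions where one branch's features dominate (e.g. fine structure recovered in the image domain) can be emphasized locally instead of applying a single global mixing coefficient.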