Bridging Scale, Semantics, and Boundaries: A Hybrid CNN-Transformer Architecture with Bidirectional Spatial-Channel Fusion for Medical Image Segmentation
Abstract
Accurate segmentation of anatomical structures in medical images is fundamental to a wide range of clinical applications, from disease diagnosis to treatment planning. However, this task remains persistently challenging due to substantial variations in anatomical scale, ambiguous tissue boundaries, and heterogeneous image appearances. Existing approaches, whether convolutional neural networks or vision transformers, often struggle to simultaneously capture long-range dependencies, preserve fine structural details, and adapt to diverse morphological contexts. To address these limitations, we introduce BRF-Net, a hybrid CNN-Transformer framework that unifies adaptive multi-scale feature aggregation, bidirectional spatial-channel refinement, and frequency-domain detail preservation within a single architecture. Specifically, we propose an Adaptive Gated Multi-Scale (AGMS) block that dynamically selects receptive fields based on image content; a Bidirectional Refinement and Fusion (BRF) Attention Block that enforces reciprocal conditioning between spatial and semantic features; and a Patch-wise Fourier Feed-Forward Network (PF-FFN) that explicitly preserves high-frequency boundary information through learnable spectral filtering. Here we show that BRF-Net achieves state-of-the-art performance across eight diverse public benchmarks covering abdominal organs, cardiac structures, polyps, skin lesions, breast lesions, and nuclei. It surpasses the strongest competing methods by an average of 0.87 points in Dice and 1.45 points in IoU on six binary datasets, while reducing the Hausdorff distance by 4.43. On the multi-organ Synapse dataset, it improves average Dice and IoU by 3.44 and 4.05 points, respectively. These results demonstrate that explicitly coupling scale adaptivity, spatial-semantic consistency, and boundary awareness yields substantial and robust improvements in segmentation fidelity, offering a more reliable tool for clinical image analysis. 
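The spectral-filtering idea behind the PF-FFN can be illustrated with a minimal numpy sketch: each patch is mapped to the frequency domain, multiplied element-wise by a learnable per-frequency filter, and mapped back. The function name, shapes, and the use of a single complex filter are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def pf_ffn_sketch(x, spectral_weight):
    """Hypothetical sketch of patch-wise learnable spectral filtering.

    x: (num_patches, H, W) real-valued feature patches (assumed layout)
    spectral_weight: (H, W) complex filter; learned in a real model,
    fixed here for illustration.
    """
    X = np.fft.fft2(x, axes=(-2, -1))          # per-patch 2-D FFT
    X_filtered = X * spectral_weight           # element-wise spectral gating
    return np.fft.ifft2(X_filtered, axes=(-2, -1)).real

# With an all-ones filter the operation reduces to the identity,
# i.e. all frequency content (including high-frequency boundary
# detail) passes through unchanged.
patches = np.random.rand(4, 8, 8)
identity_filter = np.ones((8, 8), dtype=complex)
out = pf_ffn_sketch(patches, identity_filter)
```

In a trained network the filter weights would be optimized end-to-end, letting the module amplify or suppress specific frequency bands rather than pass everything through.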
The source code is publicly available on GitHub and archived on Zenodo (DOI: https://doi.org/10.5281/zenodo.19129179).