ADG-Net: A Lightweight Adaptive Deformable Convolution Network with Gradient-Optimized Pooling for Medical Image Segmentation


Abstract

Medical image segmentation remains challenging due to the limitations of traditional methods, such as constrained receptive fields and high computational complexity. While UNet and its transformer-based variants excel in accuracy, they suffer from insufficient sensitivity to fine anatomical structures and excessive resource consumption. To address these issues, we propose ADG-Net, a lightweight deep learning framework integrating three key innovations for efficient and precise segmentation. First, deformable convolutional kernels dynamically adjust their receptive fields via learnable offsets, significantly enhancing edge feature extraction and adaptability to irregular structures. Second, a gradient-optimized pooling mechanism replaces conventional attention, enabling efficient global context modeling by aggregating multi-scale features while avoiding high-dimensional matrix computations. Third, a self-adaptive loss function automatically balances class weights through dataset characteristic analysis, improving cross-dataset generalization and boundary detail preservation. Extensive experiments on the DRIVE and CHASE_DB1 datasets demonstrate that ADG-Net achieves state-of-the-art performance, with MIoU and Dice scores of 76.18% (+1.65% over UNet) and 86.12% (+4.7% over UNet), respectively, while reducing computational overhead by 32% compared to transformer-based counterparts. The proposed approach not only addresses the trade-off between accuracy and efficiency but also offers practical value for deployment in resource-constrained clinical environments.
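To make the first innovation concrete: a deformable convolution augments each kernel tap with a learned (dy, dx) offset, so the kernel samples the feature map at fractional positions via bilinear interpolation instead of a fixed grid. The sketch below is a minimal single-channel NumPy illustration of this sampling mechanism in the general sense of Dai et al.'s deformable convolution; the function names and the fixed 3x3 kernel are illustrative assumptions, not ADG-Net's actual implementation, and in practice the offsets would be predicted by a small convolutional branch and trained end-to-end.

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Sample feat (H, W) at fractional coords (y, x) via bilinear interpolation."""
    H, W = feat.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = y0 + 1, x0 + 1
    wy, wx = y - y0, x - x0
    def px(r, c):
        # zero-padding outside the feature map
        return feat[r, c] if 0 <= r < H and 0 <= c < W else 0.0
    return ((1 - wy) * (1 - wx) * px(y0, x0) + (1 - wy) * wx * px(y0, x1)
            + wy * (1 - wx) * px(y1, x0) + wy * wx * px(y1, x1))

def deformable_conv2d(feat, weight, offsets):
    """3x3 deformable convolution on a single-channel map (illustrative sketch).

    weight:  (3, 3) kernel.
    offsets: (H, W, 9, 2) learned (dy, dx) shifts, one per kernel tap and
             output position; zeros recover an ordinary zero-padded conv.
    """
    H, W = feat.shape
    out = np.zeros((H, W))
    taps = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    for i in range(H):
        for j in range(W):
            acc = 0.0
            for k, (dy, dx) in enumerate(taps):
                oy, ox = offsets[i, j, k]
                # sample at the grid position plus the learned offset
                acc += weight[dy + 1, dx + 1] * bilinear_sample(
                    feat, i + dy + oy, j + dx + ox)
            out[i, j] = acc
    return out
```

With all offsets zero the operation reduces to a standard convolution, which is a useful sanity check; non-zero offsets let each tap drift toward curved structures such as vessel boundaries, which is the adaptability the abstract refers to.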
