ZAP-2.5DSAM: Zero Additional Parameters Advancing 2.5D SAM Adaptation to 3D Tumor Segmentation

Abstract

The Segment Anything Model (SAM) demonstrated outstanding performance in 2D segmentation tasks, exhibiting robust generalization to natural images through its prompt-driven design. However, due to the lack of volumetric spatial modeling and the domain gap between natural and medical images, its direct application to 3D medical image segmentation is suboptimal. Existing approaches to adapting SAM for 3D segmentation typically make architectural adjustments by integrating additional components, thereby increasing the number of trainable parameters and raising GPU memory requirements during fine-tuning. Moreover, retraining the prompt encoder may degrade spatial localization, especially when annotated data is scarce. To address these limitations, we propose ZAP-2.5DSAM, a parameter-efficient fine-tuning framework that effectively extends the segmentation capacity of SAM to 3D medical images through a 2.5D decomposition scheme without introducing any additional adapter modules. Our method fine-tunes only 3.51M parameters of the original SAM, significantly reducing GPU memory requirements during training. Extensive experiments on multiple 3D tumor segmentation benchmarks demonstrate that ZAP-2.5DSAM achieves superior segmentation accuracy compared to conventional fine-tuning methods. Our code and models are available at: https://github.com/CaiGuoHS/ZAP-2.5DSAM.git.
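The abstract does not spell out the details of the 2.5D decomposition, but a common formulation of the idea is to stack each axial slice with its two neighbors into a 3-channel image, matching the RGB input expected by SAM's 2D image encoder. The sketch below illustrates this scheme only as an assumption; the function name and the replication-padding choice at the volume boundaries are hypothetical and are not taken from the paper.

```python
import torch

def volume_to_25d_slices(volume: torch.Tensor) -> torch.Tensor:
    """Decompose a 3D volume of shape (D, H, W) into 2.5D inputs (D, 3, H, W).

    Each output "image" stacks a slice with its two axial neighbors as
    channels, mimicking the 3-channel input of SAM's image encoder.
    Boundary slices are handled by replicating the first and last slice.
    NOTE: this is an illustrative sketch, not the paper's exact scheme.
    """
    d = volume.shape[0]
    # Replicate edge slices so every slice has a previous and a next neighbor.
    padded = torch.cat([volume[:1], volume, volume[-1:]], dim=0)  # (D+2, H, W)
    # Gather (previous, current, next) triplets for every original slice.
    triplets = torch.stack(
        [padded[i : i + d] for i in range(3)], dim=1
    )  # (D, 3, H, W)
    return triplets

# Example: a toy volume of 8 slices at 256x256 resolution.
vol = torch.randn(8, 256, 256)
slices_25d = volume_to_25d_slices(vol)
print(slices_25d.shape)  # torch.Size([8, 3, 256, 256])
```

Under this decomposition, per-slice predictions would then be restacked along the depth axis to reassemble a 3D segmentation mask, letting a 2D backbone see limited through-plane context without any 3D layers.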
