MedSeg-Adapt: Clinical Query-Guided Adaptive Medical Image Segmentation via Generative Data Augmentation and Benchmarking

Abstract

Medical image segmentation systems, despite their recent sophistication, often suffer substantial performance degradation when exposed to unseen imaging environments caused by differences in scanner types, acquisition protocols, or rare pathological conditions. To address this issue, we introduce MedSeg-Adapt, a framework for clinical query-guided adaptive medical image segmentation. MedSeg-Adapt features an autonomous generative data augmentation module that dynamically synthesizes environment-specific and clinically diverse training data by combining medical image diffusion models with large language models (LLMs). The module automatically generates realistic image variants, natural-language clinical queries, and pseudo-annotations, without requiring new reinforcement learning policies or manual labeling. In addition, we establish MedScanDiff, a new benchmark comprising five challenging medical imaging environments: Higher-resolution CT, Low-dose CT, Varying-field MRI, Specific Pathology Variant, and Pediatric Imaging. Extensive experiments demonstrate that fine-tuning state-of-the-art models such as MedSeg-Net, VMed-LLM, and UniMedSeg on MedSeg-Adapt-generated data significantly enhances robustness and segmentation accuracy in unseen settings, yielding consistent gains in Dice Similarity Coefficient (DSC). MedSeg-Adapt thus provides a practical and effective pathway toward self-adaptive, clinically grounded medical image segmentation.
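To make the described pipeline concrete, the sketch below shows one plausible shape of the augmentation loop: for each target environment, a diffusion model synthesizes an image variant, an LLM phrases a clinical query, and a frozen model produces a pseudo-annotation, with DSC used for evaluation. This is a minimal illustration only; the function names (generate_adaptation_set, synthesize_image, generate_query, pseudo_annotate) and the stub callables are assumptions, not the authors' released code, and real diffusion/LLM backends would replace the stand-ins.

```python
import numpy as np

def dice_similarity(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float((2.0 * inter + eps) / (pred.sum() + gt.sum() + eps))

def generate_adaptation_set(envs, synthesize_image, generate_query, pseudo_annotate, per_env=4):
    """Hypothetical MedSeg-Adapt-style loop: per environment, synthesize an image
    variant (diffusion model), a clinical query (LLM), and a pseudo-annotation."""
    samples = []
    for env in envs:
        for _ in range(per_env):
            image = synthesize_image(env)         # environment-specific image variant
            query = generate_query(env)           # natural-language clinical query
            mask = pseudo_annotate(image, query)  # pseudo-annotation, no manual labeling
            samples.append({"environment": env, "image": image, "query": query, "mask": mask})
    return samples

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    envs = ["Higher-resolution CT", "Low-dose CT", "Varying-field MRI",
            "Specific Pathology Variant", "Pediatric Imaging"]
    # Stand-in callables; a real system would plug in trained models here.
    data = generate_adaptation_set(
        envs,
        synthesize_image=lambda e: rng.random((64, 64)),
        generate_query=lambda e: f"Segment the target structure in this {e} scan.",
        pseudo_annotate=lambda img, q: img > 0.5,
    )
    print(len(data), "synthetic training samples")
    print("DSC of a mask against itself:", dice_similarity(data[0]["mask"], data[0]["mask"]))
```

The resulting (image, query, mask) triples would then serve as fine-tuning data for segmentation models such as those evaluated on MedScanDiff.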
