Dynamic Multi-Expert Diffusion Segmentation for Semi-Supervised 3D Medical Image Segmentation


Abstract

Three-dimensional medical image segmentation is critical for clinical applications, yet expert annotations are costly, driving the need for semi-supervised learning. Current semi-supervised methods struggle to robustly integrate diverse network architectures and manage pseudo-label quality, especially in complex three-dimensional scenarios. We propose Dynamic Multi-Expert Diffusion Segmentation (DMED-Seg), a novel framework for semi-supervised three-dimensional medical image segmentation. DMED-Seg leverages a Diffusion Expert for global contextual understanding and a Convolutional Expert for fine-grained local detail extraction. A key innovation is the Dynamic Fusion Module (DFM), a lightweight Transformer that adaptively integrates multi-scale features and predictions from both experts based on their confidence. Complementing this, Confidence-Aware Consistency Learning uses DFM-derived confidence to improve pseudo-label quality on unlabeled data, while Inter-expert Feature Alignment fosters synergistic learning between experts through a contrastive loss. Extensive experiments on multiple public three-dimensional medical datasets demonstrate that DMED-Seg consistently achieves superior performance across various labeled-data ratios, outperforming state-of-the-art methods. Ablation studies confirm the efficacy of each proposed component, highlighting DMED-Seg as a highly effective and practical solution for three-dimensional medical image segmentation.
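To make the confidence-based mechanisms concrete, the sketch below illustrates (a) fusing two experts' voxel-wise predictions by per-voxel confidence and (b) filtering pseudo-labels by a confidence threshold. This is a minimal NumPy illustration under simplifying assumptions: confidence is taken as the maximum softmax probability, and the fusion is a simple confidence-weighted average rather than the paper's learned Transformer-based Dynamic Fusion Module; the function names and the threshold `tau` are illustrative, not from the paper.

```python
import numpy as np

def softmax(logits, axis=0):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def confidence_weighted_fusion(logits_a, logits_b):
    """Fuse two experts' class-probability maps, voxel by voxel.

    Simplifying assumption: per-voxel confidence is the max class
    probability, and fusion weights are the normalized confidences
    (the paper's DFM is a learned Transformer, not this heuristic).
    Inputs have shape (C, D, H, W): classes x depth x height x width.
    """
    pa, pb = softmax(logits_a), softmax(logits_b)
    ca = pa.max(axis=0, keepdims=True)   # expert A's per-voxel confidence
    cb = pb.max(axis=0, keepdims=True)   # expert B's per-voxel confidence
    wa = ca / (ca + cb)                  # normalized fusion weight for A
    return wa * pa + (1.0 - wa) * pb     # still a valid probability map

def confident_pseudo_labels(fused_probs, tau=0.9):
    """Hard pseudo-labels plus a mask keeping only confident voxels.

    Voxels whose max fused probability falls below `tau` are masked
    out, so the consistency loss ignores low-confidence regions.
    """
    labels = fused_probs.argmax(axis=0)          # (D, H, W) label map
    mask = fused_probs.max(axis=0) >= tau        # boolean keep-mask
    return labels, mask
```

In a training loop, the mask would gate the unsupervised consistency loss so that only high-confidence pseudo-labeled voxels contribute gradients, which is the general idea behind confidence-aware consistency learning.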