GUDA-Net: An End-to-End Domain Adaptation Network for Multi-Modal MRI Segmentation
Abstract
Accurate segmentation of multimodal MRI is essential for brain tumor diagnosis, yet remains hindered by domain shifts across imaging modalities and the scarcity of labeled data. We introduce GUDA-Net, a novel end-to-end framework that performs task-driven unsupervised cross-modality adaptation via segmentation-guided generative learning. Unlike traditional CycleGAN-based pipelines that treat modality translation and segmentation independently, GUDA-Net establishes a bidirectional learning loop, where the segmentation network supervises the synthesis process, enforcing anatomical and semantic consistency in the generated modality. Central to our design is a modality-adaptive generator trained with dual consistency constraints—capturing both low-level visual fidelity and high-level semantic alignment. This strategy enables structure-preserving translation and enhances cross-domain generalization. Evaluated on the BraTS 2018 dataset, GUDA-Net achieves a Dice score of 89.90% in T2-to-T1 segmentation, surpassing existing state-of-the-art methods and underscoring its potential for clinically deployable multimodal MRI analysis under missing modality conditions.
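The dual consistency idea described above can be sketched as a combined objective: a low-level term that penalizes pixel differences between an image and its cycle reconstruction (visual fidelity), and a high-level term that penalizes disagreement between segmentation predictions on the real and the synthesized modality (semantic alignment). The following NumPy sketch is illustrative only; the function names and weights `lambda_vis`/`lambda_sem` are assumptions, not taken from the paper:

```python
import numpy as np

def visual_fidelity_loss(real, reconstructed):
    """Low-level consistency: mean absolute (L1) difference between
    the original image and its cycle-reconstructed version."""
    return np.mean(np.abs(real - reconstructed))

def semantic_alignment_loss(p_real, p_synth, eps=1e-8):
    """High-level consistency: symmetric cross-entropy between the
    segmentation probability maps predicted on the real image and
    on the translated (synthesized) image."""
    return -np.mean(p_real * np.log(p_synth + eps)
                    + p_synth * np.log(p_real + eps))

def dual_consistency_loss(real, reconstructed, p_real, p_synth,
                          lambda_vis=10.0, lambda_sem=1.0):
    """Total adaptation objective: weighted sum of both terms.
    The weights are placeholder hyperparameters."""
    return (lambda_vis * visual_fidelity_loss(real, reconstructed)
            + lambda_sem * semantic_alignment_loss(p_real, p_synth))
```

In this reading, the segmentation network supplies the probability maps `p_real` and `p_synth`, so minimizing the semantic term forces the generator to preserve tumor structure, which is how the segmentation network "supervises" the synthesis process.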