UITI: Unpaired image-to-image source-free domain adaptation for semantic segmentation

Abstract

Source-free domain adaptation (SFDA) assumes that source data are inaccessible during domain adaptation. Current SFDA methods commonly use a source-trained model to generate pseudolabels for unlabelled target data. SFDA for semantic segmentation has recently attracted attention, centring on challenges such as pseudolabel noise, model overfitting, and class imbalance. To address these issues, this paper proposes an unpaired image-to-image (UITI) learning framework. Specifically, we select valid pseudolabels on the basis of image-style consistency via two source-trained discriminators, reducing the pseudolabel noise caused by domain discrepancies. To prevent the source model from overfitting to the target domain, we generate augmented data as supplementary samples for the target data. These synthetic samples retain feature-level knowledge of the source data while preserving domain-invariant structural characteristics of the target data, and they enrich rare-class and key-region patches. In addition, we propose a class alignment loss to balance the appearance frequency of classes and a region alignment loss to preserve both global semantics and local details. Extensive experiments on two widely used benchmarks, GTA5 → Cityscapes and SYNTHIA → Cityscapes, show that the proposed method achieves state-of-the-art mIoU scores of 58.3% and 61.3%, respectively.
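The abstract does not spell out how the class alignment loss balances class appearance frequency. As an illustration only, a common way to instantiate such balancing is inverse-frequency reweighting computed from the pseudolabel maps; the sketch below is a hypothetical example (the function name, normalization, and ignore-index convention are assumptions, not the paper's actual formulation):

```python
import numpy as np

def class_weights(pseudolabels, num_classes, ignore_index=255):
    """Hypothetical inverse-frequency weights for balancing class appearance.

    Rare classes in the pseudolabels receive larger weights, which could then
    scale a per-pixel segmentation loss such as cross-entropy.
    """
    # Drop ignored pixels before counting class occurrences
    labels = pseudolabels[pseudolabels != ignore_index]
    counts = np.bincount(labels.ravel(), minlength=num_classes).astype(float)
    freq = counts / max(counts.sum(), 1.0)
    weights = 1.0 / np.maximum(freq, 1e-6)  # rare classes get larger weights
    return weights / weights.mean()         # normalize weights to mean 1

# Usage: a toy 4x4 pseudolabel map with 3 classes and one ignored pixel
labels = np.array([[0, 0, 0, 0],
                   [0, 0, 0, 1],
                   [1, 1, 2, 0],
                   [0, 0, 0, 255]])
w = class_weights(labels, num_classes=3)  # weight grows as class frequency shrinks
```

In practice such weights would be recomputed as pseudolabels are refreshed, so the balancing tracks the current class distribution of the target data.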
