Unsupervised Domain Adaptation for Cross-domain Remote Sensing Object Detection Via Joint Input and Feature Space

Abstract

The rapid advancement of deep learning has led to significant achievements in remote sensing object detection. However, domain shift often causes notable performance drops when a model trained on one domain is applied to real-world scenarios. Unsupervised domain adaptation (UDA) offers a solution by narrowing the gap between domains. Generative adversarial networks (GANs) are commonly used for this purpose, but they can degrade key textures and details in source images. To address this, we propose a method that integrates transformations in both the input and feature spaces. First, we standardize image dimensions across the source and target domains. Then, a Joint Color Space Transformation (JCST) module operates in the feature space to decouple and recombine color channels, preserving crucial image details while aligning data distributions. We validated our approach on a dataset containing large-, medium-, and small-scale objects, using multiple object detection models. Results show that our method boosts average detection accuracy by 2–4% on source domain images, demonstrating improved generalization and robustness in cross-domain tasks.
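
The abstract describes the JCST module only at a high level; how the color channels are decoupled and recombined is not specified here. The following is a minimal PyTorch sketch of one plausible decouple-and-recombine design, assuming per-channel convolutional encoders, a learned 1x1 recombination, and a residual connection to preserve source textures. The class name JCSTSketch and the parameter feat_dim are hypothetical illustrations, not the authors' implementation.

import torch
import torch.nn as nn

class JCSTSketch(nn.Module):
    """Illustrative sketch of a JCST-style decouple-and-recombine module.

    Each color channel is encoded separately ("decoupling"), then the
    per-channel features are mixed back to three channels with a learned
    1x1 convolution ("recombination"). This design is an assumption based
    on the abstract, not the paper's actual architecture.
    """

    def __init__(self, feat_dim: int = 16):
        super().__init__()
        # One small encoder per color channel, so channels are processed
        # independently before being recombined.
        self.decouple = nn.ModuleList(
            [nn.Conv2d(1, feat_dim, kernel_size=3, padding=1) for _ in range(3)]
        )
        # Learned 1x1 mixing from the concatenated per-channel features
        # back to a 3-channel image-like tensor.
        self.recombine = nn.Conv2d(3 * feat_dim, 3, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, 3, H, W) batch with standardized spatial dimensions.
        feats = [enc(x[:, c : c + 1]) for c, enc in enumerate(self.decouple)]
        mixed = self.recombine(torch.cat(feats, dim=1))
        # Residual connection keeps the output close to the input image,
        # in line with the goal of preserving textures and details.
        return x + mixed

if __name__ == "__main__":
    imgs = torch.rand(2, 3, 256, 256)  # e.g., resized source-domain crops
    out = JCSTSketch()(imgs)
    print(out.shape)  # torch.Size([2, 3, 256, 256])

The residual connection is one simple way to shift the color distribution toward the target domain while keeping the adapted image close to the original, consistent with the abstract's stated aim of avoiding the texture degradation seen with GAN-based translation.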