Unsupervised Camouflaged Object Segmentation as Domain Adaptation

Abstract

Deep learning for unsupervised image segmentation remains challenging due to the absence of human labels. The common approach is to train a segmentation head under the supervision of pixel-wise pseudo-labels generated from the representations of self-supervised backbones. As a result, model performance depends heavily on the distance between the distribution of the target dataset and that of the backbone's pre-training dataset (e.g., ImageNet). In this work, we investigate a new task, namely unsupervised camouflaged object segmentation (UCOS), where the target objects share a common, rarely-seen attribute, i.e., camouflage. Unsurprisingly, we find that state-of-the-art unsupervised models struggle to adapt to UCOS, due to the domain gap between the properties of generic and camouflaged objects. To this end, we formulate UCOS as a source-free unsupervised domain adaptation task (UCOS-DA), where both source labels and target labels are absent during the entire model training process. Specifically, we define a source model consisting of self-supervised vision transformers pre-trained on ImageNet. The target domain, in turn, includes a simple linear layer (i.e., our target model) and unlabeled camouflaged objects. We then design a pipeline for foreground-background-contrastive self-adversarial domain adaptation to achieve robust UCOS. As a result, our baseline model achieves superior segmentation performance compared with competing unsupervised models on the UCOS benchmark, using a training set whose scale is only one tenth that of the supervised COS counterpart. The UCOS benchmark and our baseline model are now publicly available.
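The core recipe described in the abstract (a frozen self-supervised "source" backbone producing per-pixel features, plus a lightweight linear "target" head trained only on pseudo-labels) can be sketched in miniature. The code below is an illustrative assumption, not the authors' implementation: random vectors stand in for ViT features, and pseudo-labels are faked by thresholding a projection, mimicking labels derived from backbone attention maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen self-supervised ViT features: (num_pixels, dim).
# In the real pipeline these would come from an ImageNet-pretrained backbone.
num_pixels, dim = 512, 16
features = rng.normal(size=(num_pixels, dim))

# Pseudo foreground/background labels: here we threshold a projection onto a
# fake "foreground direction", mimicking pseudo-labels without human annotation.
fg_direction = rng.normal(size=dim)
scores = features @ fg_direction
pseudo_labels = (scores > np.median(scores)).astype(float)

# Target model: a single linear layer, trained as a pixel-wise logistic
# regression against the pseudo-labels (binary cross-entropy gradient).
w = np.zeros(dim)
b = 0.0
lr = 0.1
for _ in range(200):
    logits = features @ w + b
    probs = 1.0 / (1.0 + np.exp(-logits))
    grad = probs - pseudo_labels            # dL/dlogits for BCE
    w -= lr * (features.T @ grad) / num_pixels
    b -= lr * grad.mean()

preds = (features @ w + b) > 0
accuracy = (preds == pseudo_labels.astype(bool)).mean()
```

The sketch shows why the paper's setup is attractive: with the backbone frozen, the only trainable target-side parameters are `w` and `b`, so adaptation is cheap even on a small unlabeled training set. The contrastive self-adversarial component of the actual pipeline is omitted here.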