Deep Transferable Label Propagation with Prototypical Augmentation


Abstract

Domain adaptation (DA) seeks to exploit ample labeled data from a source domain to improve the generalization of models on an unlabeled target domain with a divergent data distribution. Label Propagation (LP) has emerged as an efficient semi-supervised learning paradigm for DA, transferring labels between the source and target domains over a similarity graph. Nevertheless, existing LP-based DA methods still face significant challenges: 1) semantic insufficiency in the source domain impairs performance on classes with sparse structures, particularly minority classes; 2) the generated pseudo-labels are unreliable when feature distributions are ambiguous; 3) the two-phase architecture decouples domain-invariant feature learning from label propagation, preventing the two processes from mutually reinforcing each other; 4) sample-level graph construction incurs prohibitive computational costs and scales poorly to large datasets. To address these issues, we propose a novel DA strategy, Deep Transferable Label Propagation (DTLP), which integrates prototypical augmentation. Specifically, DTLP embeds three core modules into a unified end-to-end system: 1) prototype-guided feature augmentation, termed Prototypical Augmentation (ProAug), which enriches the semantic content of the source domain by interpolating samples with class prototypes to mitigate semantic deficiency; 2) prototype graph-based label propagation, which constructs a class-level prototypical graph rather than a sample-level one to reduce computational complexity and alleviate class imbalance; 3) domain alignment via prototypical contrastive learning, which enables dynamic mutual optimization between domain-invariant feature extraction and robust label propagation while narrowing the domain discrepancy.
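The ProAug idea above, enriching source semantics by interpolating samples with their class prototypes, can be sketched roughly as follows. The function names, the mean-based prototype definition, and the fixed mixing coefficient `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Compute one prototype per class as the mean feature vector
    of that class's samples (a common, simple choice)."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def pro_aug(features, labels, prototypes, lam=0.7):
    """Prototypical augmentation sketch: pull each sample toward its
    class prototype by linear interpolation, so sparse or minority
    classes gain semantically consistent synthetic variation."""
    return lam * features + (1.0 - lam) * prototypes[labels]
```

In practice the mixing coefficient would likely be sampled per example (e.g. from a Beta distribution, as in mixup-style methods) rather than fixed, but a constant keeps the sketch minimal.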
Comprehensive experiments on various benchmark datasets demonstrate that the proposed DTLP outperforms state-of-the-art LP-based DA methods, validating its effectiveness and generalizability.
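For intuition, the class-level label propagation module could look like the following sketch, which applies the standard closed-form propagation F = (I − αS)⁻¹Y to a cosine-similarity graph built over K class prototypes instead of N samples, which is where the complexity reduction comes from. The affinity construction, normalization, and names here are assumptions for illustration, not DTLP's actual implementation.

```python
import numpy as np

def propagate_on_prototype_graph(prototypes, init_labels, alpha=0.9):
    """Closed-form label propagation F = (I - alpha*S)^{-1} Y on a
    class-level prototype graph (K nodes rather than N samples)."""
    # Cosine-similarity affinities between prototypes, no self-loops,
    # negative similarities clipped to zero.
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    W = P @ P.T
    np.fill_diagonal(W, 0.0)
    W = np.maximum(W, 0.0)
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # Propagate: with alpha < 1 the system (I - alpha*S) is invertible.
    K = len(prototypes)
    return np.linalg.solve(np.eye(K) - alpha * S, init_labels)
```

Because the graph has only K nodes, the linear solve costs O(K³) regardless of dataset size, versus O(N³) for a sample-level graph.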
