Context-Aware Feature Adaptation for Mitigating Negative Transfer in 3D LiDAR Semantic Segmentation


Abstract

Semantic segmentation of 3D LiDAR point clouds is crucial for autonomous driving and urban modeling, but it requires extensive labeled data. Unsupervised domain adaptation from synthetic to real data offers a promising solution, yet it faces the challenge of negative transfer, particularly due to context shifts between domains. This paper introduces Context-Aware Feature Adaptation, a novel approach to mitigating negative transfer in 3D unsupervised domain adaptation. The proposed approach disentangles object-specific and context-specific features, refines source context features through cross-attention with target information, and adaptively fuses the results. We evaluate our approach on challenging synthetic-to-real adaptation scenarios, demonstrating consistent gains over state-of-the-art domain adaptation methods, with improvements of up to 7.9% on classes subject to context shift. A comprehensive domain shift analysis reveals a positive correlation between the magnitude of context shift and the performance gain. Extensive ablation studies and visualizations further validate the approach's efficacy in handling context shift for 3D semantic segmentation.
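To make the described mechanism concrete, the following is a minimal, hypothetical sketch of the core idea stated in the abstract: refining source-domain context features via cross-attention with target-domain features, followed by an adaptive (gated) fusion. The module and parameter names (`ContextCrossAttentionFusion`, `dim`, `num_heads`, the sigmoid gate) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class ContextCrossAttentionFusion(nn.Module):
    """Hypothetical sketch: refine source context features with target
    information via cross-attention, then adaptively fuse the refined
    result with the original context features using a learned gate."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Gate predicts, per feature channel, how much refined context to keep.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, src_context: torch.Tensor, tgt_features: torch.Tensor) -> torch.Tensor:
        # src_context: (B, N, dim) context features from the source (synthetic) domain
        # tgt_features: (B, M, dim) features from the target (real) domain
        refined, _ = self.cross_attn(query=src_context, key=tgt_features, value=tgt_features)
        # Adaptive fusion: convex combination controlled by a learned gate.
        g = self.gate(torch.cat([src_context, refined], dim=-1))
        return g * refined + (1.0 - g) * src_context


if __name__ == "__main__":
    fusion = ContextCrossAttentionFusion(dim=64)
    src = torch.randn(2, 1024, 64)  # per-point context features from a source scan
    tgt = torch.randn(2, 1024, 64)  # features from a target-domain scan
    out = fusion(src, tgt)
    print(out.shape)  # torch.Size([2, 1024, 64])
```

The gated residual form lets the network fall back to the unrefined source context when target information is unreliable, which is one plausible way to limit negative transfer; the paper's actual disentanglement and fusion design may differ.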
