Relation Graph Alignment for Learning with Noisy Labels
Abstract
Noisy label learning aims to improve model performance on datasets with unreliable labels, enabling models to better tolerate inaccurate annotations. Existing methods either optimize the loss function to mitigate interference from noise, or design procedures to detect potential noise and correct the resulting errors. However, their effectiveness in representation learning is often compromised because models overfit to noisy labels. To address this issue, this paper proposes a relation graph alignment framework that models inter-sample relationships via self-supervised learning and employs knowledge distillation to enhance understanding of latent associations, thereby mitigating the impact of noisy labels. Specifically, the proposed method, termed RMDNet, comprises two main modules. The relation modeling (RM) module applies contrastive learning to learn representations of all data; this unsupervised approach eliminates the interference of noisy labels with feature extraction. The relation-guided representation learning (RGRL) module uses the inter-sample relations learned by the RM module to calibrate the representation distribution of noisy samples, improving the generalization of the model in the inference phase. Notably, RMDNet is a plug-and-play framework that can be integrated with multiple existing methods. When combined with them, RMDNet yields performance improvements of 1% to 8% and learns discriminative representations for noisy data, resulting in superior performance over existing methods.
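To make the two ideas in the abstract concrete, the sketch below illustrates (a) a label-free contrastive objective of the kind an RM-style module could use, and (b) a relation-graph distillation loss that aligns a student's pairwise similarity distribution with a teacher's, in the spirit of the RGRL module. This is a minimal illustration, not the paper's implementation: the function names, temperature values, and the choice of InfoNCE plus row-wise KL alignment are assumptions for exposition.

```python
import math

def _cos(u, v):
    # Cosine similarity between two vectors (with a small epsilon for stability).
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-12)

def info_nce_loss(view_a, view_b, temperature=0.5):
    # Contrastive (InfoNCE) loss over two augmented views of a batch.
    # Sample i in view_a treats sample i in view_b as its positive and
    # all other samples as negatives -- no labels are used, so noisy
    # labels cannot interfere with this objective.
    n = len(view_a)
    total = 0.0
    for i in range(n):
        logits = [_cos(view_a[i], view_b[j]) / temperature for j in range(n)]
        m = max(logits)  # log-sum-exp stabilization
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        total += -(logits[i] - log_denom)
    return total / n

def relation_alignment_loss(student, teacher, temperature=0.5):
    # Distill the teacher's inter-sample relation graph into the student:
    # for each anchor i, compare the row-wise softmax over similarities to
    # all other samples (teacher vs. student) with a KL divergence.
    n = len(student)

    def row(emb, i):
        logits = [_cos(emb[i], emb[j]) / temperature
                  for j in range(n) if j != i]
        m = max(logits)
        exps = [math.exp(l - m) for l in logits]
        s = sum(exps)
        return [e / s for e in exps]

    total = 0.0
    for i in range(n):
        p, q = row(teacher, i), row(student, i)
        total += sum(pi * math.log(pi / max(qi, 1e-12))
                     for pi, qi in zip(p, q) if pi > 0)
    return total / n
```

As a usage intuition: when the student's embeddings already reproduce the teacher's relation graph, `relation_alignment_loss` is zero, and it grows as the pairwise-similarity structure drifts apart, which is what lets the teacher's relations calibrate the representations of noisy samples.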