Meta-HGNet: Meta-Heterogeneous Generalized Network for Visible Infrared Person Re-identification

Abstract

Visible-infrared person re-identification plays an important role in matching person identities across heterogeneous modality images; however, significant modality differences increase cross-modality matching errors. Existing cross-modality person re-identification methods exploit only the shared features that are homogeneous across modalities, ignoring the specific representations of modality heterogeneity, which hinders fast convergence and reduces cross-modality re-identification accuracy. To address these problems, this paper proposes a novel Meta-Heterogeneous Generalized Network (Meta-HGNet), which introduces a meta-learning strategy that simulates the training and testing process to improve the network's heterogeneous generalization ability. Meta-HGNet consists of two modules: the Meta-Optimization Feature module (MOF) and the Meta-Feature Coupling module (MFC). To enhance the discriminability of homogeneous and heterogeneous modality features, the MOF extracts homogeneous shared features, separates heterogeneous specific representations, and learns their contextual relationships. The MFC then reduces the discrepancy between meta-sample features and shared discriminative features: it extracts meta-sample features, fuses them with modality-shared discriminative features, and adjusts the feature fusion weights under the dual constraints of a meta-sample testing loss (MTloss) and a meta-sample coupling loss (MSCloss). As a result, the proposed model adaptively adjusts the weighting parameters without introducing additional network parameters, enabling fast convergence and further improving the generalization of visible-infrared person re-identification. The effectiveness and superiority of the proposed method are verified through experiments on three cross-modality re-identification datasets.
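The abstract describes fusing meta-sample features with modality-shared features via fusion weights adjusted under two losses (MTloss and MSCloss) without adding network parameters. The following is a minimal, illustrative sketch of that idea only, not the authors' implementation: a single scalar fusion weight is updated by a numerical gradient step on the combined dual loss. All function and variable names (`fuse`, `mt_loss`, `msc_loss`, `update_alpha`, `alpha`) are hypothetical, and the losses are stand-in mean-squared-error terms.

```python
# Illustrative sketch (NOT the paper's code): adjusting a scalar fusion
# weight under a dual-loss constraint, with no extra learnable layers.

def fuse(shared, meta, alpha):
    """Convex combination of modality-shared and meta-sample features."""
    return [alpha * s + (1.0 - alpha) * m for s, m in zip(shared, meta)]

def mt_loss(fused, target):
    """Stand-in meta-sample testing loss: MSE between fused and target features."""
    return sum((f - t) ** 2 for f, t in zip(fused, target)) / len(fused)

def msc_loss(shared, meta):
    """Stand-in meta-sample coupling loss: discrepancy between feature sets."""
    return sum((s - m) ** 2 for s, m in zip(shared, meta)) / len(shared)

def update_alpha(alpha, shared, meta, target, lr=0.1, eps=1e-4):
    """One numerical-gradient step on the combined dual loss, clipped to [0, 1]."""
    def total(a):
        return mt_loss(fuse(shared, meta, a), target) + msc_loss(shared, meta)
    grad = (total(alpha + eps) - total(alpha - eps)) / (2.0 * eps)
    return min(1.0, max(0.0, alpha - lr * grad))

# Toy usage: when the target matches the shared features, the weight
# should move toward the shared branch (alpha increases).
shared, meta, target = [1.0, 0.0], [0.0, 1.0], [1.0, 0.0]
alpha = update_alpha(0.5, shared, meta, target)
```

Here the dual loss only steers the existing fusion weight, mirroring the abstract's claim that the weighting is adapted without introducing additional network parameters; in the actual model this would be driven by the meta-learning train/test simulation rather than a single gradient step.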