Channel-level Feature Selection and Fusion Network for Visible-infrared Person Re-identification
Abstract
Visible-infrared person re-identification aims to match person images across infrared and visible cameras. Existing methods generally embed shared features and specific features into the same space directly, which may impair highly discriminative features and therefore limit recognition accuracy. To address this issue, this paper proposes a channel-level feature selection and fusion network (CFSFNet), in which the contributions of features are evaluated at the channel level for weighted fusion to enhance feature discriminability. The proposed CFSFNet consists of three main components: a feature extraction module (FEM), a feature selection module (FSM), and a channel-level feature fusion module (CFFM). Modality-shared and modality-specific features are first extracted by different multi-channel feature extractors in the FEM. The contribution of each feature to identification is then evaluated at the channel level in the FSM. According to their contributions, modality-shared and modality-specific features are combined on the selected high-response channels by a dual-channel-attention mean-weighted fusion in the CFFM. Through the collaboration between the CFFM and the FSM, the proposed method not only exploits both modality-shared properties and modality-specific characteristics but also enhances highly discriminative features. Moreover, a novel tri-directional center triplet loss is combined with other loss constraints to guide the model to pull intra-class samples together and widen inter-class gaps. Extensive experiments on the SYSU-MM01 and RegDB datasets demonstrate the superiority of the proposed method over state-of-the-art methods, and ablation studies illustrate the effectiveness of each component of the proposed network. The source code of this work is publicly available at https://github.com/CV-ReID/CFSFNet.
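The channel-level selection and mean-weighted fusion described above could be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the sigmoid channel attention, the top-k channel selection, the `keep_ratio` parameter, and the equal 0.5 weighting are all assumptions introduced here for clarity.

```python
import numpy as np

def channel_weighted_fusion(f_shared, f_specific, keep_ratio=0.5):
    # Hypothetical channel-attention scores from each branch's activations;
    # a sigmoid of the raw feature stands in for a learned attention layer.
    w_sh = 1.0 / (1.0 + np.exp(-f_shared))
    w_sp = 1.0 / (1.0 + np.exp(-f_specific))
    # Feature selection: keep only the high-response channels per branch.
    k = max(1, int(keep_ratio * f_shared.shape[-1]))
    mask_sh = w_sh >= np.sort(w_sh)[-k]
    mask_sp = w_sp >= np.sort(w_sp)[-k]
    # Mean-weighted fusion of the two branches on the selected channels.
    return 0.5 * (w_sh * mask_sh * f_shared + w_sp * mask_sp * f_specific)
```

The intended effect is that low-response channels of one branch cannot dilute the other branch's discriminative channels, since they are masked out before fusion.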
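The tri-directional center triplet loss is not specified in detail here; as background, a generic center-based triplet objective, which such losses build on, can be sketched as below. The function name, the use of class centers for both the pull and push terms, and the margin value are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def center_triplet_loss(feats, labels, margin=0.3):
    # Sketch of a center-based triplet loss: pull samples toward their own
    # class center, push different class centers at least `margin` apart.
    labels = np.asarray(labels)
    classes = np.unique(labels)
    centers = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    loss = 0.0
    for i, c in enumerate(classes):
        # Intra-class term: mean distance of samples to their own center.
        pos = np.linalg.norm(feats[labels == c] - centers[i], axis=1).mean()
        # Inter-class term: distance to the nearest other class center.
        neg = min(np.linalg.norm(centers[i] - centers[j])
                  for j in range(len(classes)) if j != i)
        loss += max(0.0, margin + pos - neg)
    return loss / len(classes)
```

When clusters are tight and centers are far apart, the hinge term is zero, which matches the stated goal of tightening intra-class samples while widening inter-class gaps.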