Bridging the Gap in ReID: A Fusion Approach to Integrating Motion Information in Static and Video Data
Abstract
Accurately identifying individuals across surveillance settings, particularly under occlusion, remains a significant challenge in person re-identification (ReID). We introduce the Motion-Aware FUsion (MAFU) network, a framework designed to exploit motion cues derived from static images to improve ReID accuracy. The network employs a dual-input mechanism that jointly processes still images and motion videos, strengthening the extraction of discriminative features. Central to our approach is a motion consistency task embedded in the motion-aware transformer, enabling precise detection and analysis of human movement dynamics. This design improves feature discrimination in occlusion-heavy environments. Extensive evaluations on occluded, holistic, and video-based ReID benchmarks demonstrate that MAFU outperforms existing models.
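To make the dual-input idea concrete, the following is a minimal, illustrative sketch (not the paper's implementation): an appearance branch summarizes a still frame while a motion branch summarizes frame-to-frame change, and the two descriptors are fused by concatenation. All function names, the toy encoders, and the frame-difference motion cue are hypothetical stand-ins for the learned branches described in the abstract.

```python
# Sketch of a dual-input (appearance + motion) fusion pipeline.
# The real MAFU network uses learned encoders and a motion-aware
# transformer; here, simple statistics stand in for both branches.

def appearance_features(frame):
    # Hypothetical appearance encoder: per-channel mean over pixels.
    n = len(frame)
    return [sum(px[c] for px in frame) / n for c in range(3)]

def motion_features(frames):
    # Crude motion cue: mean absolute per-channel change between
    # consecutive frames (stands in for a learned motion branch).
    diffs = []
    for prev, curr in zip(frames, frames[1:]):
        total = sum(abs(a[c] - b[c])
                    for a, b in zip(prev, curr) for c in range(3))
        diffs.append(total / (3 * len(prev)))
    return diffs

def fuse(frames):
    # Dual-input fusion: concatenate the appearance descriptor of the
    # last frame with the motion summary of the whole clip.
    return appearance_features(frames[-1]) + motion_features(frames)

# Toy "video": two 2-pixel RGB frames with values in [0, 1].
video = [
    [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)],
    [(0.5, 0.5, 0.5), (1.0, 1.0, 1.0)],
]
feat = fuse(video)  # appearance means followed by one motion value
```

A single-image input would simply pass a one-frame clip, in which case the motion summary is empty and the descriptor reduces to appearance alone, which is one plausible way a dual-input design degrades gracefully on static data.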