Virtual Try-On–Based Data Augmentation for Robust Person Re-Identification in Emergency Surveillance Scenarios

Abstract

Person re-identification (Re-ID) plays an important role in dynamic evacuation path planning and person tracking in emergency scenarios. However, its robustness is severely challenged under such conditions, where a person's appearance may change rapidly due to stress responses or environmental interventions. Meanwhile, privacy regulations and data-access constraints limit the availability of long-term surveillance data, hindering the generalization capability of Re-ID models. Virtual try-on technologies offer a promising means of enriching appearance diversity when data are limited. In this study, a virtual try-on–based data augmentation method for person Re-ID is developed. To address inaccurate clothing-mask extraction caused by low image resolution, occlusions, and complex backgrounds, the mask generation module used in existing virtual try-on pipelines is replaced with a composite framework integrating Grounding DINO and the Segment Anything Model (SAM). The proposed framework extracts clothing regions precisely from text-based prompts, and the extracted regions are used to generate appearance-diverse person images. Extensive comparative experiments and multi-level analyses demonstrate that the generated images exhibit high visual realism, preserve identity-related information, and introduce no systematic distribution shift. Controlled experiments on a ResNet-50 benchmark further confirm that the proposed data augmentation strategy consistently improves re-identification performance.
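To make the mask-extraction stage concrete, the sketch below chains Grounding DINO and SAM through the Hugging Face transformers API: the text prompt is grounded to a clothing bounding box, which SAM then refines into a pixel-level mask. This is an illustrative reconstruction, not the authors' exact pipeline; the checkpoint names (IDEA-Research/grounding-dino-tiny, facebook/sam-vit-base), the prompt phrase, and the file name person.jpg are assumptions.

```python
# Minimal sketch of text-prompted clothing mask extraction with
# Grounding DINO + SAM. Checkpoints and prompt are placeholder choices.
import torch
from PIL import Image
from transformers import (
    AutoProcessor,
    GroundingDinoForObjectDetection,
    SamModel,
    SamProcessor,
)

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stage 1: ground the text prompt to a clothing bounding box.
dino_processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
dino = GroundingDinoForObjectDetection.from_pretrained(
    "IDEA-Research/grounding-dino-tiny"
).to(device)

image = Image.open("person.jpg").convert("RGB")  # hypothetical surveillance crop
prompt = "upper clothes."  # Grounding DINO expects lower-case, period-terminated phrases

inputs = dino_processor(images=image, text=prompt, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = dino(**inputs)
detections = dino_processor.post_process_grounded_object_detection(
    outputs,
    inputs.input_ids,
    target_sizes=[image.size[::-1]],  # (height, width); default thresholds apply
)[0]
# Take the highest-scoring box (assumes at least one detection).
best_box = detections["boxes"][detections["scores"].argmax()].tolist()

# Stage 2: refine the box into a pixel-accurate mask with SAM.
sam_processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
sam = SamModel.from_pretrained("facebook/sam-vit-base").to(device)

sam_inputs = sam_processor(
    image, input_boxes=[[best_box]], return_tensors="pt"
).to(device)
with torch.no_grad():
    sam_outputs = sam(**sam_inputs)
masks = sam_processor.image_processor.post_process_masks(
    sam_outputs.pred_masks.cpu(),
    sam_inputs["original_sizes"].cpu(),
    sam_inputs["reshaped_input_sizes"].cpu(),
)[0]  # (num_boxes, num_candidates, H, W) boolean tensor
best = sam_outputs.iou_scores[0, 0].argmax()  # SAM's best of 3 candidate masks
clothing_mask = masks[0, best].numpy()  # binary mask to feed the try-on model
```

The two-stage design mirrors the composite framework described in the abstract: the detector handles open-vocabulary localization from text, while SAM supplies the fine-grained segmentation that a box alone cannot, which is what makes the approach robust to low resolution and cluttered backgrounds.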
