An Efficient Pedestrian Gender Recognition Method Based on Key Area Feature Extraction and Information Fusion


Abstract

Aiming to address the problems of scale uncertainty, difficult feature extraction, difficult model training, poor real-time performance, and sample imbalance in low-resolution images for gender recognition, this study proposes an efficient pedestrian gender recognition model based on key-area feature extraction and fusion. First, a discrete cosine transform (DCT) based local super-resolution preprocessing algorithm is developed for facial image gender recognition. Then, a key-area feature extraction and information fusion model is designed, using additional appearance features to assist gender recognition and improve accuracy. The proposed model preprocesses images with the DCT image fusion and super-resolution methods, dividing each pedestrian image into three regions: face, hair, and lower body (legs). Features are then extracted separately from each region. Finally, a local gender recognition classifier is designed and trained for each region, and decision-level information fusion is applied: the outputs of the three local classifiers are combined using a Bayesian computation-based fusion strategy to obtain the final gender recognition result. This study uses surveillance video data to build a dataset for experimental comparison. The experimental results show that, compared with state-of-the-art algorithms, the proposed pedestrian gender recognition model based on multi-region feature extraction and information fusion improves recognition accuracy, demonstrating high application potential for gender recognition on blurred real-world images.
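The decision-level fusion step described above can be sketched as follows. This is a minimal illustrative implementation, assuming a naive-Bayes-style fusion in which the three regions (face, hair, legs) are treated as conditionally independent given the true gender; the function name, the uniform prior, and the exact fusion rule are assumptions for illustration, not the paper's precise formulation.

```python
def bayes_fuse(region_probs, prior_male=0.5):
    """Fuse per-region P(male | region) scores into one posterior.

    region_probs: iterable of probabilities in (0, 1), one score per
    local classifier (e.g. face, hair, legs).
    Returns the fused posterior probability that the pedestrian is male.
    """
    # Start from the prior for each class, then multiply in each
    # local classifier's score as an independent likelihood term.
    p_male = prior_male
    p_female = 1.0 - prior_male
    for p in region_probs:
        p_male *= p
        p_female *= (1.0 - p)
    # Normalize so the two posteriors sum to one.
    return p_male / (p_male + p_female)

# Hypothetical scores: face classifier 0.8, hair 0.6, legs 0.7.
fused = bayes_fuse([0.8, 0.6, 0.7])
label = "male" if fused >= 0.5 else "female"
```

With the example scores, the fused posterior exceeds any single region's score, illustrating how agreement among weak local classifiers sharpens the final decision.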
