VIT-driven adaptive low-light pedestrian image enhancement algorithm

Abstract

Pedestrian detection in low-light environments has emerged as a critical challenge in computer vision, particularly with the increasing deployment of intelligent monitoring systems. This study addresses the inherent difficulties of color deviation, insufficient brightness, reduced contrast, and noise interference in low-light pedestrian images. The proposed Enhanced Low-Light Pedestrian Detection (ELLPD) method integrates Haar Wavelet Downsampling (HWD) with an adaptive parameter prediction module based on Vision Transformer (VIT), coupled with a comprehensive image enhancement module. Experiments conducted on the LLVIP and WiderPerson_dark datasets demonstrate the efficacy of ELLPD. When cascaded with YOLOv8n, ELLPD improved recall and mean Average Precision (mAP) by 3% and 1.1%, respectively, on the LLVIP dataset. Notably, on the WiderPerson_dark dataset, ELLPD achieved a substantial 7.8% increase in recall compared to YOLOv8n alone. These performance gains underscore ELLPD's effectiveness in addressing the challenges of pedestrian detection under low-light conditions. By optimizing image quality and dynamically adjusting algorithm parameters, ELLPD offers a robust solution for intelligent monitoring systems operating in challenging low-light scenarios.
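The abstract does not detail the HWD module, but the general idea behind Haar Wavelet Downsampling can be illustrated with a minimal sketch: a 2x2 orthonormal Haar decomposition splits a feature map into one approximation and three detail subbands at half resolution, which are concatenated along the channel axis so that downsampling loses no information. The function below is an illustrative assumption of that standard scheme, not the authors' implementation.

```python
import numpy as np

def haar_wavelet_downsample(x):
    """Sketch of Haar Wavelet Downsampling (HWD), assuming the standard
    2x2 orthonormal Haar transform. Input (H, W, C) -> output (H/2, W/2, 4C):
    four half-resolution subbands stacked along channels, so the spatial
    downsampling is lossless (the transform is invertible)."""
    a = x[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2.0  # low-low: approximation (local average)
    lh = (a + b - c - d) / 2.0  # horizontal detail
    hl = (a - b + c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return np.concatenate([ll, lh, hl, hh], axis=-1)

# A flat image has zero detail energy: only the LL subband is nonzero.
flat = np.ones((4, 4, 1))
out = haar_wavelet_downsample(flat)
```

In a detection backbone, such a layer would typically replace strided convolution or pooling, preserving high-frequency cues (edges, textures) that are easily destroyed by pooling in noisy low-light imagery.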
