Traffic Light Recognition Assistant for Color Vision Deficiency Using YOLO with Multilingual Audio Feedback
Abstract
Drivers with color vision deficiency (CVD) often have difficulty recognizing traffic light colors at intersections, which puts their safety and independence at risk when driving in urban environments. This study presents an assistive prototype developed in Python with a PyQt5 graphical user interface. The system applies YOLOv12, a convolutional neural network-based object detection model, together with the OpenCV Python library; the model was trained and evaluated on a comprehensive dataset covering daytime and nighttime conditions, clear and rainy weather, and varying traffic density, to recognize traffic light signals as red, yellow, or green. The color detected from a car-mounted webcam is reported to the user through offline audio feedback available in Indonesian, Mandarin, and English. In testing, the system achieved a mean average precision of 0.74 across eight challenging scenarios and a maximum detection confidence of 0.95. The system aims to improve driving safety for individuals with color vision deficiency as a supplementary assistive device, not as a replacement for standard driving regulations.
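The pipeline the abstract describes (detect a traffic light color per frame, then announce it in the selected language) can be sketched as a small piece of downstream logic. This is a minimal illustration under stated assumptions: the phrase tables, confidence threshold, and function names are hypothetical, and in the real system each `(color, confidence)` pair would come from the YOLOv12 detector reading the car webcam via OpenCV, with the phrase passed to an offline text-to-speech engine rather than collected in a list.

```python
# Hypothetical sketch of the announcement logic downstream of the detector.
# Phrase tables and the 0.5 threshold are illustrative assumptions, not the
# authors' actual configuration.

PHRASES = {
    "en": {"red": "Red light", "yellow": "Yellow light", "green": "Green light"},
    "id": {"red": "Lampu merah", "yellow": "Lampu kuning", "green": "Lampu hijau"},
    "zh": {"red": "红灯", "yellow": "黄灯", "green": "绿灯"},
}

def announcements(detections, lang="en", conf_threshold=0.5):
    """Turn a stream of (color, confidence) detections into spoken phrases.

    Low-confidence detections are ignored, and a color is only announced
    when the light state changes, so the user is not spammed with repeats.
    """
    last = None
    spoken = []
    for color, conf in detections:
        if conf < conf_threshold or color == last:
            continue
        spoken.append(PHRASES[lang][color])
        last = color
    return spoken
```

For example, a red light detected over several consecutive frames followed by a green light would yield `announcements([("red", 0.95), ("red", 0.90), ("green", 0.80)])` equal to `["Red light", "Green light"]`; announcing only on state changes keeps the audio feedback unobtrusive while driving.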