Weather-Resilient Object Detection Framework for Autonomous Vehicles Using Conditional Preprocessing and YOLOv8

Abstract

Autonomous vehicles face significant challenges in adverse weather conditions such as fog, snow, rain, and sandstorms, which degrade image quality and hinder object detection. Most existing systems fail in these conditions because they are trained on clear-weather datasets and rely on generalized image preprocessing. This study proposes a robust framework based on weather-conditional preprocessing, image enhancement, and optimized object detection. Weather classification is performed with ResNet-18, followed by weather-specific preprocessing techniques such as dehazing and denoising to enhance the images. The images are then super-resolved with ESRGAN, which preserves information crucial for detection. Finally, a YOLOv8 model tuned via Bayesian optimization performs real-time object detection with high precision and recall. Experiments reveal substantial improvements across all performance metrics investigated. The framework achieves 81.08% precision in snow, 85.64% in rain, 85.44% in fog, and 65.76% in sand. The respective mAP values are 75.95% for snow, 70.80% for rain, 75.35% for fog, and 49.50% for sand, demonstrating the system's resilience under diverse scenarios. Thus, this framework provides a weather-resilient solution for autonomous vehicles, increasing detection reliability and ensuring safe navigation in real-world environments.
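The abstract describes a four-stage pipeline: weather classification (ResNet-18), weather-conditional enhancement (dehazing/denoising), super-resolution (ESRGAN), and detection (Bayesian-optimized YOLOv8). A minimal sketch of that conditional routing is shown below; every function name, the label-to-step mapping, and the stub implementations are hypothetical placeholders, not the paper's actual code.

```python
# Illustrative sketch of the weather-conditional pipeline from the abstract.
# All names and mappings here are assumptions for demonstration: the real
# system uses a trained ResNet-18 classifier, learned enhancement filters,
# ESRGAN super-resolution, and a Bayesian-optimized YOLOv8 detector.

def classify_weather(frame):
    # Stand-in for the ResNet-18 weather classifier; here it just reads
    # a label attached to the frame dict for demonstration.
    return frame.get("weather", "clear")

def dehaze(frame):
    # Placeholder for a dehazing filter; records the step it applied.
    return {**frame, "steps": frame.get("steps", []) + ["dehaze"]}

def denoise(frame):
    # Placeholder for a rain/snow denoising filter.
    return {**frame, "steps": frame.get("steps", []) + ["denoise"]}

def super_resolve(frame):
    # Placeholder for ESRGAN super-resolution.
    return {**frame, "steps": frame.get("steps", []) + ["esrgan"]}

def detect_objects(frame):
    # Placeholder for the YOLOv8 detector; returns the enhancement trace
    # so the routing can be inspected.
    return frame.get("steps", [])

# Hypothetical weather label -> enhancement steps applied before detection.
PREPROCESSORS = {
    "fog":   [dehaze],
    "sand":  [dehaze],
    "rain":  [denoise],
    "snow":  [denoise],
    "clear": [],
}

def run_pipeline(frame):
    """Classify the weather, apply matching enhancement, super-resolve,
    then run detection on the enhanced frame."""
    for step in PREPROCESSORS[classify_weather(frame)]:
        frame = step(frame)
    frame = super_resolve(frame)
    return detect_objects(frame)
```

For example, `run_pipeline({"weather": "fog"})` routes the frame through dehazing and then super-resolution before detection, while a clear-weather frame skips enhancement entirely; the key design point is that the classifier's output selects the preprocessing branch rather than applying one generic filter to every frame.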