YOLOv11-TWCS: Enhancing Object Detection for Autonomous Vehicles in Adverse Weather Conditions Using YOLOv11 with Self-Attention

Abstract

Object detection for autonomous vehicles under adverse weather conditions (rain, fog, snow, and low light) remains a significant challenge, as severe visual distortions degrade image quality and obscure critical features. This paper presents YOLOv11-TWCS, an enhanced object detection model that integrates TransWeather, the Convolutional Block Attention Module (CBAM), and Spatial-Channel Decoupled Downsampling (SCDown) to improve feature extraction and emphasize critical features in weather-degraded scenes while maintaining real-time performance. Our approach addresses the dual challenges of weather-induced feature degradation and computational efficiency by combining adaptive attention mechanisms with an optimized network architecture. Evaluations on the DAWN, KITTI, and Udacity datasets show improved accuracy over the baseline YOLOv11 and competitive performance against other state-of-the-art methods, achieving mAP@0.5 of 59.1%, 81.9%, and 88.5%, respectively. The model reduces parameters and GFLOPs by approximately 19–21% while sustaining a high inference speed (105 FPS), making it suitable for real-time autonomous driving in challenging weather conditions.
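To make the two architectural add-ons concrete, the sketch below gives a minimal PyTorch rendering of CBAM (channel attention followed by spatial attention, after Woo et al., 2018) and of SCDown-style spatial-channel decoupled downsampling (a pointwise convolution for channel adjustment followed by a stride-2 depthwise convolution, as introduced in YOLOv10). The channel counts, reduction ratio, and kernel sizes here are illustrative assumptions, not necessarily the exact configuration used in YOLOv11-TWCS.

import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    # Squeeze global context with average- and max-pooling, then pass both
    # through a shared bottleneck MLP (reduction ratio is an assumption).
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        return torch.sigmoid(avg + mx)  # (B, C, 1, 1) channel weights


class SpatialAttention(Module := nn.Module):
    # Pool across channels, then a 7x7 conv yields a per-pixel weight map.
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class CBAM(nn.Module):
    # CBAM applies channel attention first, then spatial attention.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)


class SCDown(nn.Module):
    # Decoupled downsampling: a 1x1 pointwise conv changes the channel count,
    # then a stride-2 depthwise 3x3 conv halves the spatial resolution.
    def __init__(self, c_in, c_out):
        super().__init__()
        self.pw = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.dw = nn.Conv2d(c_out, c_out, 3, stride=2, padding=1,
                            groups=c_out, bias=False)

    def forward(self, x):
        return self.dw(self.pw(x))


# Quick shape check on a dummy backbone feature map.
x = torch.randn(1, 64, 80, 80)
y = SCDown(64, 128)(CBAM(64)(x))
print(y.shape)  # torch.Size([1, 128, 40, 40])

In the full model these blocks would sit inside the YOLOv11 backbone and neck; the shape check above only verifies the tensor plumbing, not detection accuracy.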
