Enhanced YOLOv11n for Small Object Detection in UAV Imagery: Higher Accuracy with Fewer Parameters

Abstract

Object detection in UAV imagery faces several challenges inherent to high-altitude aerial capture: targets are densely distributed, small objects make up a large proportion of instances, and onboard computing power is limited, leading to low detection accuracy and high rates of false and missed detections. To address these issues, this article proposes an improved YOLOv11 model. First, we design a Multiscale Edge-Feature Adaptive Selection (MSEAF) module in the backbone to cope with the predominance of small objects and their weak edge information. Second, we reconstruct the neck with ScalCat and Scal3DC modules and add a P2 small-object detection head, alleviating feature degradation in multiscale processing and improving the utilization of high-resolution information. Finally, we design a shared, reparameterized lightweight detection head (SRepD) to resolve the computational redundancy and insufficient feature fusion of conventional heads. Experimental results show that, compared with the YOLOv11n baseline, our model increases mAP50 and precision by 4.6% while reducing parameters by approximately 8.5%. On datasets containing extremely small object categories, our model improves mAP50 and precision by 5.5% and 5.6%, respectively, with a 7.7% reduction in parameters relative to YOLOv11n. Compared with the larger YOLOv11s, our model achieves gains of 3.8% in mAP50 and 3.2% in precision while using only 25% of its parameters, demonstrating superior performance across model scales.
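The abstract's "reparameterized" detection head (SRepD) is not specified in detail here, but structural reparameterization conventionally means training with parallel convolution branches and algebraically folding them into a single convolution for inference, so the deployed head is cheaper with identical outputs. The sketch below illustrates only that general principle on a single-channel case (a 3x3 branch plus a 1x1 branch folded into one 3x3 kernel); it is a hypothetical illustration, not the paper's actual SRepD design, and the `conv2d` helper is a hand-rolled stand-in for a framework convolution.

```python
import numpy as np

def conv2d(x, k):
    """'Same'-padded single-channel cross-correlation, stride 1
    (a minimal stand-in for a framework conv layer)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k3 = rng.standard_normal((3, 3))   # training-time 3x3 branch
k1 = rng.standard_normal((1, 1))   # training-time 1x1 branch

# Training-time structure: two parallel branches, outputs summed.
y_train = conv2d(x, k3) + conv2d(x, k1)

# Inference-time reparameterization: by linearity of convolution,
# a 1x1 kernel folds into the centre tap of the 3x3 kernel.
k_merged = k3.copy()
k_merged[1, 1] += k1[0, 0]
y_infer = conv2d(x, k_merged)

# The single merged branch reproduces the two-branch output exactly.
assert np.allclose(y_train, y_infer)
```

In a full model the same folding extends to multi-channel kernels and absorbs batch-norm statistics into the merged weights, which is what lets a reparameterized head cut inference cost without changing its predictions.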
