Lightweight apple detection method in complex environment based on YOLOv10s-Star

Abstract

To achieve accurate and fast detection of apple targets in complex orchard environments, this study proposes a lightweight target recognition method, YOLOv10s-Star. First, building on the YOLOv10s model, StarNet is adopted as the backbone network to reduce the number of parameters and the computational cost, and the SCSA attention mechanism is added to the PSA module so that spatial and channel attention are applied jointly, enhancing the model's feature extraction ability. Second, an improved BiFPN structure is used in the neck network to fully fuse the semantic information of targets in deep feature maps with their positional information in shallow feature maps, thereby improving detection accuracy. Finally, the original detection head is replaced with the DyHead detection head, which provides scale awareness, spatial awareness, and task awareness, improving both the accuracy and the efficiency of the detection task. Experimental results show that the YOLOv10s-Star model achieves an mAP of 92.4% with 5.06 M parameters, 12.9 GFLOPs of computation, and an average inference speed of 126.3 FPS. The model maintains high detection accuracy while remaining lightweight and improving detection speed, making it suitable for deployment on the embedded devices of apple-picking robots and laying a foundation for intelligent apple picking.
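The "Star" in the model name refers to the star operation at the core of StarNet-style blocks, in which two pointwise projections of a feature map are fused by element-wise multiplication. The PyTorch snippet below is a minimal, hypothetical sketch of such a block for illustration only; the block name `StarBlock`, the channel widths, the kernel sizes, and the activation are assumptions and do not reproduce the authors' exact configuration.

```python
import torch
import torch.nn as nn


class StarBlock(nn.Module):
    """Illustrative sketch of a StarNet-style block (not the paper's exact design):
    two parallel 1x1 projections of the input are fused by element-wise
    multiplication (the "star" operation), giving an implicit high-dimensional
    feature interaction at low computational cost."""

    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        hidden = channels * expansion
        self.dw = nn.Conv2d(channels, channels, 7, padding=3, groups=channels)  # depthwise spatial mixing
        self.f1 = nn.Conv2d(channels, hidden, 1)  # first pointwise branch
        self.f2 = nn.Conv2d(channels, hidden, 1)  # second pointwise branch
        self.act = nn.ReLU6()
        self.g = nn.Conv2d(hidden, channels, 1)   # project back to the input width

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.dw(x)
        x = self.act(self.f1(x)) * self.f2(x)     # the star operation: element-wise product
        return residual + self.g(x)


if __name__ == "__main__":
    block = StarBlock(64)
    out = block(torch.randn(1, 64, 80, 80))
    print(out.shape)  # torch.Size([1, 64, 80, 80])
```

Under these assumptions, stacking such blocks in place of the standard YOLOv10s backbone stages is what reduces the parameter count and FLOPs while preserving representational capacity.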
