Improved DINO Model-Based Approach for Object Detection of Sesame Seedlings and Weeds


Abstract

In the application scenario of precise pesticide spraying on sesame seedlings, existing object detection models exhibit low recognition accuracy and detection efficiency due to the complex imaging environment and the limited computational power of agricultural intelligent equipment. To address this issue, this study proposes a denoising and lightweight object detection method for sesame seedling and weed detection based on the DINO model. By introducing MobileNetV3 to replace the original ResNet-50 backbone network in DINO, the number of neural network layers is reduced by 70%, and the number of parameters is decreased from 47M to 39M. Furthermore, the study employs multi-scale feature extraction from the 6th, 9th, and 13th layers of MobileNetV3. On the COCO dataset, the model achieves a recognition accuracy of 34.3% for targets smaller than 32×32 pixels, an improvement of 2.3% over the original DINO model; overall recognition accuracy increases by 2.1%, with a 17% reduction in parameters. Additionally, the model incorporates a three-layer lightweight image-denoising preprocessing network based on the N2N strategy, achieving a peak signal-to-noise ratio of 34.52 dB under level-10 Gaussian noise. The model is then trained and tested on a custom noise-added sesame seedling and weed dataset, achieving a recognition accuracy of 81.8% and a detection speed of 24 frames per second, a 5.6% improvement over the well-established YOLOv7 object detection model. The study demonstrates that the improved DINO object detection model has significant advantages in accuracy and lightweight design.
