GSBF-YOLO: a lightweight model for tomato ripeness detection in natural environments

Abstract

Accurate tomato ripeness detection is essential for optimizing harvest timing and maximizing yield. Deep learning-based object detection has proven effective for this task. However, many existing algorithms have numerous parameters and substantial computational demands, making them unsuitable for agricultural environments with limited computational resources. In addition, detection accuracy degrades when fruits overlap, leaves occlude the targets, or the background is complex. To address these issues, this paper proposes a lightweight detection model, GSBF-YOLO. The model introduces a GSim module that reduces parameters while maintaining detection accuracy, and replacing the traditional C3 module with the C3Ghost module further reduces the parameter count. The PANet multi-scale feature fusion network in the neck is replaced with the Bi-directional Feature Pyramid Network (BiFPN), which weights input features according to their importance. Finally, a fine-tuned Focal-EIoU loss function is used to compute the bounding box regression loss, increasing the weight given to high-quality anchor boxes and improving detection of occluded targets. Experimental results show that GSBF-YOLO reduces parameters and computational load by 42% and 45%, respectively, while mean Average Precision (mAP) increases by 1.9% and 1.6% on two datasets. The model achieves 110 Frames Per Second (FPS), meeting real-time detection requirements, and offers fewer parameters and higher accuracy than models such as YOLOv8. These results indicate that the proposed lightweight model can effectively detect tomato ripeness in natural environments.
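The abstract notes that BiFPN "weights input features according to their importance." The paper's own implementation is not shown here; as a rough illustration, the sketch below implements the fast normalized weighted fusion from the original BiFPN (EfficientDet) formulation in PyTorch. The class name `WeightedFusion`, tensor shapes, and channel counts are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn


class WeightedFusion(nn.Module):
    """Fast normalized fusion of same-resolution feature maps (BiFPN-style).

    Each input feature map gets a learnable, non-negative weight; the
    weights are normalized so the fusion adapts to the importance of
    each input, which is the mechanism the abstract refers to.
    NOTE: illustrative sketch only, not the paper's released code.
    """

    def __init__(self, num_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, features):
        # features: list of tensors with identical shape (B, C, H, W)
        w = torch.relu(self.weights)       # keep learned weights non-negative
        w = w / (w.sum() + self.eps)       # fast normalized fusion
        return sum(wi * f for wi, f in zip(w, features))


# Usage: fuse a backbone feature map with a top-down feature map
# of the same resolution (shapes are hypothetical).
p4_in = torch.randn(1, 64, 40, 40)
p4_td = torch.randn(1, 64, 40, 40)
fuse = WeightedFusion(num_inputs=2)
p4_out = fuse([p4_in, p4_td])              # -> (1, 64, 40, 40)
```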