An Integrated YOLOv7–Fuzzy Reasoning Framework for Interpretable and Robust Cantaloupe (Cucumis melo) Growth-Stage Assessment
Abstract
Background
Precision agriculture increasingly relies on computer vision systems to monitor crop growth; however, most existing approaches remain limited to frame-level object detection and do not support agronomic decision-making under uncertainty. To address this limitation, this study develops an interpretable and robust framework for cantaloupe (Cucumis melo) growth-stage assessment by integrating deep learning–based visual perception with fuzzy reasoning.

Results
A YOLOv7 detector was fine-tuned to identify healthy leaves, wilted leaves, flowers, and fruits in greenhouse imagery collected across eleven cultivation cycles at three production sites. The detected class counts were temporally aggregated and used as inputs to a Mamdani-type fuzzy inference system encoding expert agronomic knowledge and growth-stage expectations. Experimental evaluation showed that YOLOv7 achieved the highest mAP@0.5 (0.771) and the most balanced precision–recall performance among the YOLO variants compared, while the fuzzy reasoning layer transformed noisy object-level outputs into consistent crop-condition states with associated confidence levels. Real-world deployment on an edge device further demonstrated the system's ability to generate actionable alerts, such as "Check Flower" and "Abnormal Condition", aligned with expected phenological trends.

Conclusions
The proposed framework advances beyond conventional detection pipelines by enabling decision-level crop assessment that is interpretable, temporally aware, and robust to visual uncertainty. It provides a practical decision-support tool for greenhouse crop monitoring and supports the broader adoption of intelligent, confidence-aware systems in precision agriculture.
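To illustrate the decision-level reasoning step described above, the following is a minimal, self-contained sketch of a Mamdani-style fuzzy mapping from aggregated detection counts to crop-condition labels. It is not the authors' implementation: the input variables (`flower_count`, `wilted_ratio`), membership-function ranges, rule base, and output anchors are all hypothetical, chosen only to show how min/max Mamdani inference can turn noisy class counts into a labeled state with a defuzzified score.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def assess(flower_count, wilted_ratio):
    """Map aggregated detections to a crop-condition label and score.
    All ranges and rules below are illustrative assumptions, not the
    paper's actual rule base."""
    # Fuzzify inputs (hypothetical ranges for a flowering-stage window)
    flowers_low = tri(flower_count, -1, 0, 6)
    flowers_ok  = tri(flower_count, 3, 10, 17)
    wilt_low    = tri(wilted_ratio, -0.1, 0.0, 0.25)
    wilt_high   = tri(wilted_ratio, 0.15, 0.5, 1.1)

    # Mamdani rules: min for AND; each firing strength clips its output set
    r_normal   = min(flowers_ok, wilt_low)    # enough flowers, little wilt
    r_check    = min(flowers_low, wilt_low)   # too few flowers for the stage
    r_abnormal = wilt_high                    # widespread wilting

    # Aggregate and defuzzify over three output anchors
    # (0.0 = Abnormal, 0.5 = Check Flower, 1.0 = Normal), centroid-style
    den = r_abnormal + r_check + r_normal
    score = (0.5 * r_check + 1.0 * r_normal) / den if den else 0.5
    label = max([("Abnormal Condition", r_abnormal),
                 ("Check Flower", r_check),
                 ("Normal", r_normal)], key=lambda kv: kv[1])[0]
    return label, score

# Example: few flowers but healthy foliage triggers a "Check Flower" alert
print(assess(1, 0.05))
```

In a Mamdani system, the crisp score doubles as the confidence level attached to the state, which is what allows the framework to report uncertainty rather than a bare detection count.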