Deep Learning-Driven Weed Detection in Lettuce Farms: Box Annotation and Post-Segmentation
Abstract
Weed infestations cause billions of dollars in annual losses and devastate natural habitats. Current weed recognition methods remain vulnerable to seasonal and environmental variations, and their performance relies on tedious manual curation. To address these limitations, we propose a straightforward framework that combines pre-trained deep learning models (including transformers) with simple box annotations and the Segment Anything Model (SAM) for precise post-processing boundary delineation. We evaluated this approach by comparing the state-of-the-art Faster R-CNN (Region-based Convolutional Neural Network) against the pioneering transformer-based DETR on lettuce-farm imagery. Of 939 annotated images, 760 (≈81%) were used for training, 92 (≈10%) for validation, and the remaining 87 (≈9%) were reserved for independent testing. Faster R-CNN achieved an overall F1 score of 95.0% (97.5% for lettuce and 92.5% for weeds), while DETR reached 87.1% overall (88.1% for lettuce and 86.1% for weeds). With both detectors, SAM produced near-perfect segmentation, even for overlapping or closely spaced objects, by focusing on a single object per bounding box. This research not only automates weed detection to boost lettuce yield, but also enables targeted weeding, reducing treatment costs and environmental impact.
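To make the two-stage workflow concrete, the sketch below shows one way box-level detections can be handed to SAM as prompts, one box at a time, so that each mask covers a single plant. It is a minimal illustration rather than the paper's implementation: the checkpoint paths (lettuce_weed_frcnn.pth, sam_vit_h_4b8939.pth), the ViT-H SAM backbone, the 0.5 confidence threshold, and the three-class detection head (background, lettuce, weed) are all assumptions.

```python
import numpy as np
import torch
import torchvision
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Stage 1 model: a Faster R-CNN detector. The paper fine-tunes on box annotations;
# here we load the torchvision architecture with a hypothetical fine-tuned checkpoint.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=3  # background, lettuce, weed (assumed class layout)
)
detector.load_state_dict(torch.load("lettuce_weed_frcnn.pth", map_location="cpu"))  # hypothetical path
detector.eval()

# Stage 2 model: SAM with a ViT-H backbone (assumed); checkpoint path is a placeholder.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = np.array(Image.open("lettuce_field.jpg").convert("RGB"))

# 1) Detect: run the image through the detector to get class labels and bounding boxes.
with torch.no_grad():
    tensor = torch.from_numpy(image).permute(2, 0, 1).float() / 255.0
    pred = detector([tensor])[0]

keep = pred["scores"] > 0.5  # confidence threshold (assumed value)
boxes = pred["boxes"][keep].cpu().numpy()
labels = pred["labels"][keep].cpu().numpy()

# 2) Segment: prompt SAM with one box per detection, so each mask is constrained to a
#    single object even when lettuce and weeds overlap or grow closely together.
predictor.set_image(image)
masks = []
for box in boxes:
    mask, score, _ = predictor.predict(box=box, multimask_output=False)
    masks.append(mask[0])  # boolean mask (H, W) for this detection
```

The per-box prompting is the key design choice implied by the abstract: instead of asking SAM to segment the whole scene, each bounding box limits it to one plant, which is what keeps closely spaced or overlapping objects cleanly separated. Swapping the detector for DETR would leave the SAM post-processing step unchanged.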