Deep Learning for Sorghum Yield Forecasting using Uncrewed Aerial Systems and Lab-Derived Imagery


Abstract

The AI revolution, advanced Graphics Processing Units (GPUs), and open-source platforms have enabled Machine Learning (ML) and Deep Learning (DL) algorithms to rapidly and accurately extract phenotypic features from Uncrewed Aerial System (UAS)-derived imagery. These advances make phenotype digitization and sorghum yield forecasting feasible. Yield analytics are critical for breeding programs to assess the genetic and breeding potential of genotypes and to enhance cultivar development. This trial followed a Randomized Complete Block Design (RCBD) with three replicates of 36 diverse sorghum genotypes in 2023 at Ashland Bottoms, Kansas. Field images were captured at 6 meters above ground level using a DJI M300 drone equipped with the P1 sensor at nadir (90 degrees) and oblique (45 degrees) angles. This research trained YOLO and Faster R-CNN (Detectron2) models to extract yield attributes from UAS field and lab images. The YOLO models outperformed the Faster R-CNN model in detecting sorghum panicles, achieving a mean average precision at 50% Intersection over Union (IoU) of 0.92 to 0.98, compared to 0.61 to 0.89. Panicle detections from field imagery correlated with ground truth at 0.86. Lab imagery analyses measured panicle size, seed counts, and seed area, with correlation coefficients of 0.71, 0.95, and 0.25, respectively. Three machine learning models, Support Vector Regression (SVR), Decision Tree Regression (DTR), and Random Forest Regression (RFR), were used to predict yield, with correlation coefficients of 0.58, 0.76, and 0.70, respectively. We observed that YOLO models are well suited for extracting yield-attributing traits from images, which can then be incorporated into ML regression models to improve yield prediction performance.
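The mean average precision at 50% IoU reported above counts a predicted panicle box as a true positive when its Intersection over Union with a ground-truth box is at least 0.5. A minimal sketch of the IoU computation for axis-aligned boxes, assuming an `(x1, y1, x2, y2)` corner convention (the function name and box format are illustrative, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp at zero: non-overlapping boxes have no intersection area.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection is a true positive at mAP@50 when iou(pred, truth) >= 0.5.
```

At the 0.5 threshold a prediction may overlap a ground-truth panicle fairly loosely and still count, which is why mAP@50 values for the YOLO models (0.92 to 0.98) can sit well above the looser trait correlations reported later in the abstract.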
