grassUNet: a U-Net model for semantic segmentation of upward-facing plant canopy images using low-cost instrumentation
Abstract
Accurate and cost-effective measurement of canopy variables such as the Plant Area Index (PAI) is crucial for agronomic decision-making and crop modeling. This paper presents grassUNet, a semantic segmentation model based on the U-Net architecture, designed to analyze upward-facing plant canopy images captured with off-the-shelf smartphone cameras. The model was trained and evaluated on a dataset of 2170 images of bread wheat (Triticum aestivum) photographed at a zenith angle of 57.5° across three plots in northern Morocco. Segmentation masks were initially generated with the CAN-EYE software, and a subset of the dataset was annotated by experts to validate segmentation quality. Despite potential noise in the semi-automatic masks, grassUNet achieves a Dice coefficient of 91.4% on the test set and 94.2% on expert-labeled images. Preliminary observations suggest generalizability to other crops such as corn, highlighting the potential of low-cost hardware and deep learning to democratize canopy monitoring in agriculture. Accompanying scripts, available on GitHub at https://github.com/sowit-labs/grassUNet/ under the GPL-3.0 License, further facilitate biophysical variable calculations, making this approach applicable to both research and practical farm management.
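For readers unfamiliar with the evaluation metric cited above, the Dice coefficient measures the overlap between a predicted binary mask and a reference mask. The following minimal NumPy sketch illustrates how it is typically computed for binary segmentation; the function name, epsilon smoothing term, and example masks are illustrative assumptions, not code taken from the grassUNet repository.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks (illustrative implementation)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # Dice = 2|A ∩ B| / (|A| + |B|); eps guards against empty masks
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example with two 2x2 masks:
pred = np.array([[1, 1], [0, 1]])
target = np.array([[1, 0], [0, 1]])
print(dice_coefficient(pred, target))  # 2*2 / (3 + 2) = 0.8
```

A Dice coefficient of 91.4%, as reported for the test set, thus indicates that predicted plant/sky masks and reference masks share roughly 91% of their pixels by this overlap measure.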