PickAMoo: LIDAR-Enhanced Mask R-CNN Segmentation for Precision Weight Estimation in Dairy Cattle Using Smartphone Imaging
Abstract
Data on body weight, together with objective measures of body condition and size, are essential for sound decision-making at the farm level, e.g. for calculating nutrient requirements, for health control and for breeding assessments. Cows with a suboptimal body condition score are at higher risk of transition diseases (e.g. metritis, subclinical ketosis, retained placenta) and lameness. Weighing dairy cattle and assessing their body condition is laborious and is therefore often performed less frequently on farms than would be desirable for best production results. Although recent research indicates strong potential for computer vision and image analysis in the automated estimation of dairy cows' weight, body condition score (BCS) and conformation, these technologies are still not widely applied in everyday practice, and most methods for BCS or weight estimation in cattle rely on stationary multi-camera setups or 3D cameras, which entail high computational costs. We propose a new, two-step, AI-based method for straightforward live-weight estimation. In the first step, a Mask R-CNN segmentation network was trained on 565 unique cow images (both left and right side) collected at distances of 1.90 m to 2.10 m, under different lighting conditions and at various angles; its final segmentation accuracy was 0.98. In the second step, weight was discretized into nine data-driven categories using a Gaussian Mixture Model (with the number of components selected by BIC). The source weight variable was then removed to prevent leakage, and a leak-safe pipeline (imputation, robust scaling, fold-internal SMOTE, Extra Trees) was trained with stratified cross-validation and evaluated on an untouched holdout set; a PyCaret implementation served as an independent cross-check.
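The second step described above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code or dataset: the image-derived features, weight values and all parameter choices below are assumptions, and the fold-internal SMOTE step (which requires imbalanced-learn's pipeline) is noted but omitted to keep the sketch scikit-learn-only.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.impute import SimpleImputer
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler

rng = np.random.default_rng(0)
n = 900

# Synthetic stand-ins: multimodal live weights (kg) and correlated shape features.
comp = rng.integers(0, 3, size=n)
weight = np.array([450.0, 550.0, 650.0])[comp] + rng.normal(scale=15.0, size=n)
X = np.column_stack(
    [(weight - 550.0) / 100.0 + rng.normal(scale=0.3, size=n) for _ in range(6)]
)

# Step 2a: discretize weight into data-driven bins via a BIC-selected GMM.
w = weight.reshape(-1, 1)
models = [GaussianMixture(n_components=k, random_state=0).fit(w) for k in range(1, 10)]
y = min(models, key=lambda m: m.bic(w)).predict(w)

# Step 2b: leak-safe pipeline -- the continuous weight never enters X, and every
# preprocessing step is fit inside each CV fold. (The paper additionally applies
# SMOTE fold-internally, which would use imblearn.pipeline.Pipeline instead.)
pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", RobustScaler()),
    ("clf", ExtraTreesClassifier(n_estimators=200, random_state=0)),
])

# Stratified CV on the development split, plus an untouched holdout.
X_dev, X_hold, y_dev, y_hold = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
cv_scores = cross_val_score(
    pipe, X_dev, y_dev,
    cv=StratifiedKFold(5, shuffle=True, random_state=0),
    scoring="f1_macro",
)
holdout_acc = pipe.fit(X_dev, y_dev).score(X_hold, y_hold)
```

The key leakage guard is that the GMM labels replace the raw weight entirely before modelling, and imputation/scaling (and SMOTE, in the full pipeline) are refit within each fold rather than on the pooled data.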
On the 216-animal holdout, the tuned Extra Trees model achieved a macro-F1 of 0.936 (95% CI 0.913–0.956), with a 4.2% error rate composed entirely of adjacent (neighbouring-bin) mistakes. These results were obtained on 1080 images collected with the developed camera app that were not used during Mask R-CNN training. We plan to further streamline the algorithm so that it can be scaled down and deployed as an open-source, on-farm smartphone application.
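The holdout evaluation reported above can be reproduced in outline as below. The predictions here are simulated (the real holdout labels are not public), and the percentile-bootstrap CI is one common way to obtain such an interval; whether the authors used this exact resampling scheme is an assumption.

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)

# Hypothetical predictions for a 216-animal holdout with ordinal weight bins 0..8:
# mostly correct, with a handful of errors forced into neighbouring bins only.
y_true = rng.integers(0, 9, size=216)
y_pred = y_true.copy()
err = rng.choice(216, size=9, replace=False)
y_pred[err] = np.clip(y_true[err] + rng.choice([-1, 1], size=9), 0, 8)

macro_f1 = f1_score(y_true, y_pred, average="macro", zero_division=0)

# Percentile bootstrap over holdout animals for a 95% CI on macro-F1.
boot = []
for _ in range(500):
    i = rng.integers(0, 216, size=216)
    boot.append(f1_score(y_true[i], y_pred[i], average="macro", zero_division=0))
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])

# Adjacent-bin property: every mistake lands in a neighbouring weight bin.
assert np.all(np.abs(y_true - y_pred) <= 1)
```

Because the bins are ordinal, reporting that all errors are adjacent is informative: a neighbouring-bin mistake corresponds to a bounded weight error, unlike a misclassification several bins away.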