Improving interpretability and applicability of welfare management decisions in dairy cows through explainable artificial intelligence
Abstract
Background
Improving animal welfare and health is a key objective in modern dairy systems. However, translating the now commonly available high-frequency data from bovine-dedicated sensor instruments into transparent, actionable decisions remains challenging, particularly when complex machine learning (ML) models are used. In this study, we used ML to predict several welfare indicators (WIs) from routinely recorded milk-related data in dairy cows, and explainable AI (XAI) to interpret and visualise the ML models for practical use.

Results
Monthly individual WIs for mastitis, subclinical acidosis, subclinical ketosis, longevity, and reproduction were predicted with random forest models trained on routinely available test-day traits in dairy cows (milk acetone, fat, lactose, protein, urea, and electrical conductivity). The individual WIs were then used to derive an overall welfare score, with both individual and overall scores classified into good (G), intermediate (I), and risk (B) classes. SHapley Additive exPlanations (SHAP) values quantified feature importance and interactions, revealing relationships largely consistent with known physiology (e.g., extreme high and low values of milk components and urea were associated with increased welfare risk) and highlighting class-specific and U-shaped effects that call for context-dependent interpretation. Counterfactual explanations were used to identify the minimal changes in milk traits required to shift predictions from the B to the G class, thereby translating model outputs into candidate management adjustments. While most counterfactuals followed biologically plausible patterns, occasional non-intuitive suggestions underscored the need for expert oversight.

Conclusions
This study illustrates how SHAP and counterfactual explanations can be layered on top of ML models to generate interpretable, customizable decision-support tools for precision dairy cattle welfare, while emphasizing the need to field-validate their usability, economic value, and ethical implications.
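The abstract does not specify the pipeline in detail, but the random-forest-plus-SHAP pattern it describes can be sketched minimally. In the sketch below, the six test-day traits are used as features and a three-class (G/I/B) welfare label as the target; all data, column names, and the label encoding are illustrative assumptions, not the authors' actual dataset or model configuration.

```python
# Minimal sketch of the random-forest + SHAP pattern described above.
# The synthetic data, feature names, and G/I/B label encoding are
# illustrative assumptions only.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
features = ["acetone", "fat", "lactose", "protein", "urea", "conductivity"]

# Synthetic test-day records: 500 cows x 6 milk traits.
X = pd.DataFrame(rng.normal(size=(500, 6)), columns=features)
# Synthetic welfare class (0 = G, 1 = I, 2 = B) for illustration only.
y = rng.integers(0, 3, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# TreeExplainer computes exact SHAP values efficiently for tree ensembles;
# for a multiclass model it returns one set of attributions per class.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Depending on the shap version, shap_values is a list with one array per
# class or a single 3-D array; here we assume the list form and inspect
# the attributions driving the risk (B) class.
shap.summary_plot(shap_values[2], X_test, feature_names=features)
```

The class-specific summary plot is what would surface the kinds of patterns the abstract reports, such as extreme trait values pushing predictions toward the risk class.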
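The counterfactual step can likewise be sketched without committing to a specific library. Published counterfactual methods typically solve a constrained optimisation over all features; the greedy single-trait scan below (reusing `model`, `X_test`, and `features` from the sketch above) is a deliberate simplification that only conveys the core idea: find a small milk-trait change that moves a prediction from B to G.

```python
# Naive counterfactual search, continuing from the sketch above (reuses
# `model`, `X_test`, and `features`). Real counterfactual generators solve
# a constrained optimisation; this single-trait scan is illustrative only.
import numpy as np

G_CLASS, B_CLASS = 0, 2  # assumed label encoding from the sketch above

def single_trait_counterfactual(model, x, step=0.1, max_steps=30):
    """Scan each trait up and down in small increments; return the smallest
    single-trait change that flips the prediction to the G class."""
    best = None
    for j, name in enumerate(features):
        for direction in (+1.0, -1.0):
            for k in range(1, max_steps + 1):
                x_cf = x.copy()
                x_cf[j] += direction * step * k
                if model.predict(x_cf.reshape(1, -1))[0] == G_CLASS:
                    change = abs(x_cf[j] - x[j])
                    if best is None or change < best[2]:
                        best = (name, x_cf[j], change)
                    break  # smallest flip for this trait/direction found
    return best  # (trait, new value, absolute change) or None

# Apply to the first cow predicted to be in the risk (B) class.
preds = model.predict(X_test)
risk_rows = np.where(preds == B_CLASS)[0]
if risk_rows.size:
    x0 = X_test.iloc[risk_rows[0]].to_numpy()
    print(single_trait_counterfactual(model, x0))
```

As the abstract cautions, a suggestion like this is only a candidate management adjustment: the returned trait change may be biologically implausible or infeasible in practice, which is why expert oversight of counterfactual outputs is needed.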