Micro-Expression–Based Facial Analysis for Automated Pain Recognition in Dairy Cattle


Abstract

Timely, objective pain recognition in dairy cattle is essential for welfare assurance, productivity, and ethical husbandry, yet it remains elusive because evolutionary pressure has made bovine distress signals brief and inconspicuous. As prey animals, cows suppress overt cues and cannot self-report, so automated vision is indispensable for on-farm triage. Earlier systems relied on whole-body posture tracking or static grimace scales; frame-level detection of facial micro-expressions has not previously been demonstrated in livestock. We translate micro-expression analytics from automotive driver monitoring to the barn, linking modern computer vision with veterinary ethology. Our two-stage pipeline first detects faces and 30 facial landmarks with a custom YOLOv8-Pose network, achieving 96.9 % mAP@0.50 for detection and 83.8 % OKS for keypoint placement. Cropped eye, ear, and muzzle patches are then encoded by a pretrained MobileNetV2, yielding 3 840-dimensional descriptors that capture millisecond-scale muscle twitches. Sequences of five consecutive frames are fed to a 128-unit long short-term memory (LSTM) classifier that outputs pain probabilities. On a held-out validation set of 1 700 frames the system records 99.65 % accuracy and an F1-score of 0.997, with only three false positives and three false negatives. Tested on 14 unseen barn videos, it attains 64.3 % clip-level accuracy and 83 % precision for the pain class, using a hybrid aggregation rule that combines a 30 % mean-probability threshold with micro-burst counting to temper false alarms. These results show that micro-expression mining can deliver scalable, non-invasive pain surveillance that is robust to variation in illumination, camera angle, background, and individual morphology. Future work will explore attention-based temporal pooling, curriculum learning over variable window lengths, domain-adaptive fine-tuning, and multimodal fusion with accelerometry to bring performance closer to clinical deployment.
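
A minimal sketch of the per-frame descriptor and temporal classifier described above, assuming an ImageNet-pretrained MobileNetV2 with global average pooling (1 280 dimensions per patch, so the three patches concatenate to the stated 3 840-dimensional descriptor); the patch size, frozen backbone, and training configuration are illustrative assumptions rather than details taken from the paper.

import tensorflow as tf
from tensorflow.keras import layers, models

PATCHES = ("eye", "ear", "muzzle")   # three facial regions cropped per frame
SEQ_LEN = 5                          # five consecutive frames per sample
FEAT_DIM = 3 * 1280                  # 3 patches x 1280-d MobileNetV2 vector = 3840

# Frozen MobileNetV2 backbone; global average pooling yields a 1280-d vector per patch.
backbone = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(96, 96, 3)
)
backbone.trainable = False

def frame_descriptor(patches):
    """patches: mapping of region name -> (96, 96, 3) float32 image, already
    preprocessed with mobilenet_v2.preprocess_input. Returns a 3840-d vector."""
    feats = [backbone(tf.expand_dims(patches[name], 0))[0] for name in PATCHES]
    return tf.concat(feats, axis=-1)

# 128-unit LSTM over five-frame descriptor sequences, emitting a pain probability.
classifier = models.Sequential([
    layers.Input(shape=(SEQ_LEN, FEAT_DIM)),
    layers.LSTM(128),
    layers.Dense(1, activation="sigmoid"),
])
classifier.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])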
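
The clip-level decision rule can be read as a mean-probability gate combined with a count of short high-probability runs; the sketch below is one plausible interpretation, in which the per-frame threshold (0.7), the minimum run length (3 frames), and the AND combination of the two criteria are assumptions, since only the 30 % mean threshold and the use of micro-burst counting are stated above.

from typing import Sequence

def count_micro_bursts(probs: Sequence[float], frame_thresh: float = 0.7,
                       min_run: int = 3) -> int:
    """Count runs of at least `min_run` consecutive frames whose pain
    probability reaches `frame_thresh` (each qualifying run counted once)."""
    bursts, run = 0, 0
    for p in probs:
        run = run + 1 if p >= frame_thresh else 0
        if run == min_run:
            bursts += 1
    return bursts

def clip_is_painful(probs: Sequence[float], mean_thresh: float = 0.30,
                    min_bursts: int = 1) -> bool:
    """Hybrid aggregation: flag a clip only if the mean pain probability reaches
    `mean_thresh` and at least `min_bursts` micro-bursts occur, tempering
    false alarms from isolated high-probability frames."""
    mean_prob = sum(probs) / max(len(probs), 1)
    return mean_prob >= mean_thresh and count_micro_bursts(probs) >= min_bursts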
