Automated Facial Landmark Analysis vs. Manual Coding: Accuracy in Dog Emotional Expression Classification


Abstract

Emotional expression in dogs is central to dog-human interactions. Reliable indicators are essential for interpreting animal emotions; however, their predictive value remains debated. Fireworks, which elicit fear in many dogs, provide a real-world context for examining affective states. We used machine learning to classify firework vs. non-firework situations from dogs’ behaviour and expressions using two approaches: (1) ethogram-based manual coding of behaviours and (2) automated analysis of facial landmarks. The Random Forest model based on manual coding achieved high accuracy (0.83) and perfect predictive validity (1.0), identifying backwards-directed ears and blinking as key fear indicators. The best automated facial landmark model reached up to 0.80 accuracy and 0.77 predictive validity. Manual coding performed better, likely owing to its richer semantic content and inclusion of full-body observations. Our findings demonstrate that machine learning can classify canine emotional states from both manually coded behaviours and automatically detected facial landmarks, offering potential for future automated welfare monitoring.
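The Random Forest workflow described above can be sketched as follows. This is a minimal illustration only: the feature names (`ears_back`, `blink_rate`), the synthetic data, and all parameter choices are assumptions for demonstration, not the study's actual ethogram features, dataset, or model configuration.

```python
# Hypothetical sketch of the classification setup: a Random Forest
# predicting firework vs. non-firework context from two illustrative
# behavioural features. Data are synthetic, not the study's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, n)  # 1 = firework context, 0 = control

# Synthetic features drawn so "firework" samples score higher on both,
# loosely mirroring the reported indicators (ears back, blinking).
ears_back = rng.normal(loc=labels * 1.5, scale=1.0)
blink_rate = rng.normal(loc=labels * 1.0, scale=1.0)
X = np.column_stack([ears_back, blink_rate])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Cross-validated accuracy and per-feature importances (via `clf.fit(X, labels).feature_importances_`) are the kind of outputs the abstract's accuracy and key-indicator claims would rest on.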

Highlights

  • Machine learning approaches classify firework-related fear in dogs reliably

  • Ear position and blinking are key indicators

  • Ethogram-based coding yielded stronger predictions than automatically detected facial landmarks

  • Automated landmark analysis shows potential for scalable welfare monitoring

Subject areas

Emotional expression · Behavioural classification · Machine learning · Facial landmarks · Facial expressions