Machine Learning Guided Video Analysis Identifies Sound-Evoked Pain Behaviors from Facial Grimace and Body Cues in Mice

Abstract

Humans can experience auditory pain in response to sound, either from extremely loud noise or in cases of pain hyperacusis, where typically tolerable sounds become painful. However, the mechanisms underlying auditory pain remain poorly understood. Developing behavioral methods to measure sound-evoked pain in animal models is critical for elucidating these mechanisms. Here, a deep learning-based approach was developed to measure auditory pain in freely moving mice by analyzing facial grimace and body position from video recordings during sound exposure. Facial grimace, a validated marker of spontaneous, ongoing pain in mice, was quantified using a deep neural network trained to extract established facial features. Postural changes, additional indicators of pain, were analyzed from the same camera angle. To validate the model, a known painful state, migraine induced by injection of the neuropeptide calcitonin gene-related peptide (CGRP), was used. This validation demonstrated that the approach can quantify a pain response distinct from baseline behavior and yielded a defined pain threshold. Sound exposure at high intensities elicited significant changes in facial grimace and body posture that surpassed the pain threshold established during migraine validation. These behavioral changes were absent in TMIE-knockout mice, which lack functional cochlear transduction. This automated, high-throughput framework enables objective and sensitive analysis of sound-evoked pain and provides a foundation for future studies investigating the peripheral and central mechanisms of auditory pain.
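The abstract does not specify the scoring pipeline in code, but a minimal sketch of how per-frame grimace features might be quantified against baseline is given below. All names (eye_openness, grimace_index), the choice of landmarks, and the z-scoring scheme are illustrative assumptions, not the authors' published model; keypoints are assumed to come from a pose-estimation network applied to each video frame.

```python
import numpy as np

# Minimal sketch (assumptions, not the published pipeline): per-frame
# facial landmarks, each an (n_frames, 2) array of (x, y) positions,
# are assumed to come from a pose-estimation network. Features loosely
# follow mouse grimace scale action units (e.g., orbital tightening).

def eye_openness(eye_top, eye_bottom, eye_left, eye_right):
    """Eye aspect ratio per frame; lower values indicate orbital
    tightening, one established grimace feature."""
    vertical = np.linalg.norm(eye_top - eye_bottom, axis=-1)
    horizontal = np.linalg.norm(eye_left - eye_right, axis=-1)
    return vertical / (horizontal + 1e-9)

def grimace_index(features, baseline_mean, baseline_std):
    """Z-score an (n_frames, n_features) array against a no-stimulus
    baseline recording and average into one score per frame."""
    z = (features - baseline_mean) / (baseline_std + 1e-9)
    return z.mean(axis=1)

# Example with synthetic data: 100 frames, 3 grimace-related features.
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 3))
score = grimace_index(features, features.mean(axis=0), features.std(axis=0))
```

Under this sketch, a sustained rise in the per-frame index during sound exposure would be the quantity compared against the migraine-derived threshold described above.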

Significance

This study introduces a quantitative framework for assessing affective pain using a single-camera setup and machine learning-guided analysis of mouse behavior. By integrating two established pain metrics, facial grimace and attenuated movement, this method enables precise, non-invasive quantification of pain-related behaviors. The approach was validated with a well-characterized pain model, migraine induced by injection of the neuropeptide CGRP, demonstrating the ability to quantify a pain response distinct from baseline behavior. Applying this framework to auditory pain, the data reveal that exposure to intense sound triggers significant nociceptive behavioral responses. These findings provide novel insights into the behavioral manifestations and neural underpinnings of auditory pain, offering a robust tool for studying the mechanisms of pain perception.
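The text states that the CGRP migraine validation yielded a defined pain threshold separating pain from baseline. One plausible way to derive such a cutoff, shown purely as an assumption (the criterion and function below are not taken from the paper), is to scan candidate thresholds over per-trial scores and keep the one that best separates the validated pain state from baseline, e.g., by maximizing Youden's J statistic.

```python
import numpy as np

def pain_threshold(baseline_scores, pain_scores):
    """Illustrative threshold choice (assumed, not from the paper):
    scan candidate cutoffs and keep the one maximizing Youden's J
    (sensitivity + specificity - 1) for separating a validated pain
    state (e.g., CGRP-induced migraine) from baseline behavior."""
    candidates = np.unique(np.concatenate([baseline_scores, pain_scores]))
    best_t, best_j = candidates[0], -np.inf
    for t in candidates:
        sensitivity = np.mean(pain_scores >= t)     # pain trials above cutoff
        specificity = np.mean(baseline_scores < t)  # baseline trials below
        j = sensitivity + specificity - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t

# A sound exposure would then be classified as painful if its grimace
# or posture score exceeds this threshold, mirroring the comparison
# against the migraine-derived threshold described in the abstract.
```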
