Feeling the Music: Measuring Enhanced Emotion in Musicians via CNN-based Sentiment Analysis
Abstract
Empathy has long been associated with musical expertise, but quantifying empathy remains a difficult task. This study investigates the correlation between musical training and emotional reactivity via webcam-based facial expression recognition. A Convolutional Neural Network (CNN) was used to analyze the facial expressions of musically trained and untrained participants as they watched emotionally intense videos, with and without audio. Musically trained individuals, defined as those with over 400 hours of total musical training, showed significantly higher emotional reactivity than those without training. The musically trained group also showed a greater difference in emotional response between videos with audio and videos without audio, suggesting heightened emotional sensitivity to auditory input. Statistical analyses using Mann-Whitney U tests and Cohen's d effect-size calculations confirmed that the differences were statistically significant and had large effect sizes, supporting the hypothesis that musical training enhances emotional processing. The findings contribute to research on the cognitive and emotional benefits of musical training, providing quantitative evidence that musicians exhibit higher emotional reactivity (a proxy for empathy). Additionally, the study highlights potential use cases for AI-based facial sentiment analysis as an objective measure in psychological research on emotion. Future research could explore the causal links between musical training and empathy, as well as applications of the technology in music education, media analytics, and therapeutic settings.
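A minimal sketch of the kind of CNN-based facial emotion scoring the abstract describes, using the open-source fer package and OpenCV as stand-ins; the paper does not name its model or libraries, so the detector choice, the webcam capture, and the single-frame read here are illustrative assumptions rather than the authors' pipeline.

import cv2
from fer import FER

detector = FER()  # pre-trained CNN emotion detector (assumed stand-in for the study's model)
cap = cv2.VideoCapture(0)  # default webcam

ret, frame = cap.read()
if ret:
    # Each detected face is returned with per-emotion probability scores.
    for face in detector.detect_emotions(frame):
        print(face["emotions"])  # e.g. {'angry': 0.02, 'happy': 0.81, ...}
cap.release()

The reported statistics can be sketched in the same way: a two-sided Mann-Whitney U test for the group difference and Cohen's d with a pooled standard deviation for the effect size. The arrays below are hypothetical placeholder reactivity scores, not the study's data.

import numpy as np
from scipy.stats import mannwhitneyu

def cohens_d(a, b):
    # Cohen's d using the pooled standard deviation of the two groups.
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

trained = np.array([0.62, 0.71, 0.58, 0.80, 0.66])    # hypothetical per-participant scores
untrained = np.array([0.41, 0.35, 0.48, 0.39, 0.44])  # hypothetical per-participant scores

# Non-parametric test: no normality assumption on the reactivity scores.
u_stat, p_value = mannwhitneyu(trained, untrained, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}, d = {cohens_d(trained, untrained):.2f}")

Pairing a rank-based test with a standardized effect size, as the abstract reports, is a common choice when sample sizes are small and score distributions cannot be assumed normal.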