Beyond Inferring Emotions: Inferring Contextual Information from Dynamic Facial Displays of Emotion
Abstract
The ability to process facial displays of emotion is a crucial aspect of everyday interpersonal interaction, yet the use of emotions as communicative cues to situational context remains underexplored. The current study addresses this gap with a novel dynamic display of targets’ reactions to a poker game, used to assess perceivers’ ability to detect contextual information from facial displays of emotion. We categorised targets’ dynamic facial displays of emotional reactions to game outcomes using the MorphCast Emotion AI HTML5 SDK, an online machine-learning tool that detects emotions in recorded videos. Additionally, we investigated the role of perceivers’ covert facial mimicry during the categorisation task using facial electromyography (fEMG). Results indicate that although perceivers achieved above-chance accuracy in inferring targets’ outcomes (win, loss, or draw) from facial displays of emotion, they struggled to distinguish between losses and draws. Further, although emotions (positive vs. negative) appear to act as cues for outcome detection, the fEMG data reveal no significant pattern of activation linked to outcome accuracy. These findings highlight facial displays of emotion as a communicative tool for contextual information and underscore the role of experience and of societal and cultural norms in constructing subjective yet socially expected reactions across individuals.