Spatiotemporal dynamics and substates underlie emotional signalling in facial movements

Abstract

From overt emotional displays to a subtle eyebrow raise during speech, facial expressions are key cues for social interaction. How these inherently dynamic facial signals encode emotion across non-verbal expression and speech remains only partially understood. In Study 1, we recorded participants’ facial movements signalling happy, sad and angry emotions in Expression-only and Emotive-speech conditions. We employed a data-driven pipeline integrating facial motion quantification, spatiotemporal classification and clustering to investigate the structure and function of facial dynamics in signalling emotion. Results reveal that a few spatiotemporal patterns reliably differentiated emotions in both non-verbal expressions and emotive-speech facial signals. Furthermore, we identified transient substates – or dynamic phases – that are diagnostic of emotion intent and condition. A perceptual validation with naïve observers (Study 2) showed that this low-dimensional spatiotemporal structure captures meaningful cues that closely predict human emotion categorisations. We discuss the theoretical implications of a low-dimensional spatiotemporal structure for the optimal transmission and perception of dynamic facial emotion signals in face-to-face interaction. This work also provides a framework for modelling dynamic social cues and offers insights for the design of expressive emotive capabilities in social agents.