Ensemble encoding of facial expressions is greater for static faces than dynamic faces


Abstract

Understanding the behaviour and emotions of other people is predicated on accurately decoding nonverbal facial cues. In the current study, we investigated whether the ensemble encoding of these facial cues differs between moving and static faces. Across two experiments, participants performed a delayed-match-to-sample expression recognition task using dynamic and static stimuli. Ensemble size was manipulated by increasing the number of faces in the target array. In Experiment 1, we presented one, two, and four target faces; in Experiment 2, we presented one, four, and eight target faces. Results demonstrated that while recognition accuracy for dynamic and static faces was comparable when a single target face was presented, task accuracy diverged as the ensemble size increased. Specifically, increasing the number of faces in the target array failed to improve task performance for the dynamic expression stimuli. In addition, the results of Experiment 2 demonstrated that when eight faces were presented in the dynamic condition, performance was impaired for all expressions (disgust, fear, and happy) as well as in the non-emotional air-puff condition. This suggests that motion in general disrupted the participants' ability to extract an accurate summary statistic as the ensemble size was increased.