Observer-Generated Maps of Diagnostic Facial Features Enable Categorization and Prediction of Emotion Expressions

Abstract

According to one prominent model, facial expressions of emotion can be categorized as depicting happiness, disgust, anger, sadness, fear, and surprise. One open question is which facial features observers use to recognize the different expressions, and whether the features indicated by observers can be used to predict which expression they saw.

We created fine-grained maps of diagnostic facial features by asking participants to use mouse clicks to highlight the parts of a face they deemed useful for recognizing its expression. We tested how well the resulting maps align with models of emotion expressions (based on Action Units) and how the maps relate to the accuracy with which observers recognize fully visible or partly masked faces.

As expected, observers focused on the eye and mouth regions in all faces. However, each expression deviated from this global pattern in a unique way, allowing us to create maps of diagnostic face regions. The Action Units considered most important for expressing an emotion were highlighted most often, indicating their psychological validity. The maps of facial features also allowed us to correctly predict which expression a participant had seen, with above-chance accuracy for all expressions. For happiness, fear, and anger, the face half that was highlighted most was also the half whose visibility led to higher recognition accuracy. The results suggest that diagnostic facial features are distributed in a unique pattern for each expression, which observers seem to intuitively extract and use when categorizing facial displays of emotion.
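The abstract does not specify the computational pipeline, but a minimal sketch of one way to aggregate mouse clicks into a diagnostic-feature map and to predict the seen expression is shown below (Python). The image size, the smoothing width, and the correlation-based template matching are illustrative assumptions, not the authors' method.

    # Illustrative sketch only: assumes clicks are (x, y) pixel coordinates
    # on spatially aligned face images; all names and parameters are
    # hypothetical and do not reproduce the paper's actual analysis.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    H, W = 128, 128  # assumed size of the aligned face images

    def click_map(clicks, sigma=4.0):
        """Aggregate mouse clicks into a smoothed, normalized density map."""
        grid = np.zeros((H, W))
        for x, y in clicks:
            # clip to the image so stray clicks cannot index out of bounds
            grid[np.clip(int(y), 0, H - 1), np.clip(int(x), 0, W - 1)] += 1.0
        grid = gaussian_filter(grid, sigma)   # spread point clicks into regions
        return grid / (grid.sum() + 1e-12)    # normalize to a probability map

    def predict_expression(observer_clicks, templates):
        """Predict the seen expression as the template map (one per
        expression) most correlated with the observer's own click map."""
        m = click_map(observer_clicks).ravel()
        scores = {expr: np.corrcoef(m, t.ravel())[0, 1]
                  for expr, t in templates.items()}
        return max(scores, key=scores.get)

In such a scheme, the per-expression templates would presumably be built from the clicks of all other observers (leave-one-out), so that prediction accuracy is not inflated by an observer's own data.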
