Can the Visual System Represent Distribution of Emotional Expressions?
Abstract
The visual system can quickly extract statistical properties of multiple objects (ensembles). Observers can explicitly access and report the distribution of low-level features such as color. Here, we investigate whether explicit access extends to distributions of high-level features, such as the emotional expressions of a group of faces. In Experiment 1, we presented observers with ensembles that had a Gaussian, uniform, or bimodal distribution of emotional expressions (or of colors, as a low-level baseline condition). The task was to report the frequency of a randomly chosen feature value. Observers’ responses closely followed the underlying distribution for colors but were much noisier for emotional expressions. Observers could distinguish all three color distributions, whereas they could discern only the most contrasting distributions of emotional expressions (Gaussian and bimodal). Additionally, modelling showed that observers’ performance in both conditions was based on the integration of global information rather than on subsampling a few objects. Experiment 2 showed that disrupting holistic face processing via face inversion did not change observers’ performance for ensembles of emotional expressions. This indicates that the visual system cannot form a distributional representation of a high-level feature per se; instead, it relies on low-level correlates (e.g., mouth curvature, brow tilt) to build a distributional representation of multiple facial expressions. Our findings help to explain how people can quickly obtain emotional information from many faces in a crowd and have a rich perceptual experience despite the severe capacity limitations of visual attention and working memory.