Taken at Face Value: Do Robots Trigger Face-Typical Processing?

Abstract

Social robots are increasingly integrated into everyday environments, yet effective and socially engaging interactions remain a challenge – particularly due to difficulties in interpreting the robots’ internal states and behaviors. A key obstacle is that robots often fail to engage the social cognitive mechanisms typically elicited by human interaction partners. Human faces, in particular, are processed holistically through specialized neural networks that support rapid inferences about identity, emotion, and intention. In contrast, robot faces vary widely in their resemblance to the human face configuration, both in the number of human features included (i.e., eyes, eyebrows, nose, mouth) and in their arrangement (e.g., eyes-over-nose-over-mouth), raising the question of whether they elicit human-level face processing. Building on previous research, we use the face inversion paradigm to examine whether and under which conditions robot faces are processed in a face-typical manner. Experiment 1 demonstrated enhanced face-typical processing when robot faces incorporated three or four versus one or two human facial features. Experiment 2 showed that this ‘number effect’ was specific to distinctly human facial features versus non-human features. Comparisons to data from previous experiments suggest that synthetic faces with a high number of human-like features engage holistic face processing comparable to that seen with real human faces. Given the link between holistic face processing and the attribution of mental states and higher-order social cognition, these results may indicate that meticulous facial design could make robots appear more social, which in turn could promote social cognition in human-robot interactions.