Mind Meld or Mismatch: A Comparison of Visual Perspective Taking Towards Humans and Robots in Face-to-Face Interactions
Abstract
Visual Perspective Taking (VPT), the ability to spontaneously represent how another sees the world, underpins human social interaction, from joint action to predicting others' future actions and mentalizing about their goals and mental states. Because their behaviours are highly customisable and repeatable, robots provide an ideal platform for investigating cognitive abilities such as VPT in tightly controlled face-to-face interactions. Here, we validate a novel experimental paradigm that robustly measures the extent to which people take a human or robot partner's visual perspective during an interaction, by measuring how much the partner's perspective is spontaneously integrated with one's own. In our study, participants are paired with either another person, an inanimate humanoid robot, or an animate humanoid robot that engages with the task alongside the participant and performs socially interactive behaviours. We show, for the first time in a face-to-face interaction, that participants generally take other people's visual perspectives but do not take the perspective of either the inanimate or the animate robot. Our study demonstrates that, unlike with 2D depictions of robots, the moderately humanlike appearance of a physically present robot is not enough to promote VPT, nor are basic socially reactive or goal-directed behaviours. We contribute an implicit, novel measure of social reasoning towards humans and robots in face-to-face interactions, a benchmark for social roboticists to aim for when creating cognitively penetrable robots, and an open challenge to the social robotics community: to create a robot, or robot behaviours, that facilitate VPT towards robots.