The Role of Mentalising and Agent Appearance on Gaze Use during Joint Attention

Abstract

Interactions with artificial agents (e.g. robots and avatars) are becoming increasingly commonplace. While research has established that user beliefs, or an artificial agent's appearance, can shape social outcomes with artificial agents, little is known about how these factors interact. We used virtual reality (VR) with eye- and motion-tracking to examine the extent to which people attend to and use an agent's gaze in a collaborative task. Participants initiated and responded to joint attention bids using hand gestures, while coordination could be implicitly facilitated by attending to the agent's eye gaze. Participants persistently used these gaze cues, reflected in coordination accuracy, face-looking frequency, and reaction times. Participants who believed their partner was human, rather than an AI system, showed no behavioural differences; however, they subjectively reported more gaze-following and more positive social experiences. When the agent looked like a human, rather than a robot, participants looked less frequently at the eyes, responded faster, and reported more negative social experiences. Our data suggest that social AI systems are approached more like humans than computers, irrespective of explicit beliefs about their humanness. Furthermore, when the artificial agent appeared in a non-human form, believing that it was human- rather than AI-controlled improved subjective experiences, highlighting how beliefs about AI systems may shape subjective social outcomes. This suggests that disclosure around the true intentional stance of artificial agents (e.g. in online VR contexts) may be important for shaping subjective social outcomes.
