The Role of Intentional Stance Beliefs and Agent Appearance on Gaze Use during Joint Attention

Abstract

Interactions with artificial agents (e.g., robots and avatars) are becoming increasingly commonplace. While research has established that user beliefs or an artificial agent’s appearance can shape social outcomes with artificial agents, little is known about how these factors interact. We used virtual reality (VR) with eye- and motion-tracking to examine the extent to which people attend to and use an agent’s gaze in a collaborative task. Participants initiated and responded to joint attention bids using hand gestures, and coordination could be implicitly facilitated by attending to the agent’s eye gaze. Participants persistently used these gaze cues, as reflected in coordination accuracy, face-looking frequency, and reaction times. Believing that the partner was human rather than an AI system did not result in any behavioural differences; however, participants who *believed* their partner was human subjectively reported more gaze-following and more positive social experiences. When the agent *looked* like a human rather than a robot, participants looked less frequently at its eyes, responded faster, and reported more negative social experiences. Our data suggest that social AI systems are approached more like humans than computers, irrespective of explicit beliefs about their humanness. Furthermore, when the artificial agent appeared in a non-human form, believing that it was human- rather than AI-controlled improved subjective experiences, highlighting how beliefs about AI systems may shape subjective social outcomes. This suggests that disclosing whether an artificial agent is human- or AI-controlled (e.g., in online VR contexts) may be important for shaping those outcomes.