Judging interactions of a Chatbot: uncertainty and anthropomorphism aspects
Abstract
Large Language Models (LLMs) have rapidly gained prominence owing to their versatile capabilities, including document processing, image generation, and even the provision of psychological support. Autonomous agents, defined as entities such as robots or LLMs that perceive and respond to environmental stimuli, are increasingly integrated into human teams. These agents can interact and collaborate with both humans and other autonomous systems to solve problems and achieve shared objectives. As human reliance on autonomous agents grows, understanding the mechanisms of trust becomes critical, given its central role in team effectiveness.

This study investigates two dimensions of trust in autonomous agents: a cognitive component, uncertainty communication, and an affective component, anthropomorphized language. It was hypothesized that both the communication of uncertainty and the use of anthropomorphized language would increase trust in the LLM. A total of 606 participants were recruited via the online platform PanelClix. Participants viewed video scenarios depicting a first responder interacting with a Chatbot that used either anthropomorphized or machine-like language. In addition, participants were exposed to one of three levels of reliability information: no reliability cue, a numerical reliability estimate (approximately 75% certainty), or a colour-coded reliability indicator (dark blue for approximately 75% certainty, light blue for 25% certainty).

The findings revealed that neither anthropomorphized language nor uncertainty communication significantly influenced trust in the Chatbot. However, trust was positively associated with the Chatbot's perceived intelligence and perceived liveliness, as well as with participants' general propensity to trust artificial intelligence. These results suggest that both cognitive and affective factors contribute to the development of trust in autonomous agents, albeit in more nuanced ways than initially hypothesized. Implications and future directions are discussed in the concluding section.