The effects of AI anthropomorphism on trust and responsibility
Abstract
AI companies often anthropomorphize their products with the aim of increasing user engagement and trust. What downstream consequences result from such anthropomorphism? One possibility is a deflection of perceived responsibility from the AI's creators to the AI itself. Across two studies, we tested this hypothesis. In Study 1 (N = 309), participants interacted with an LLM chatbot that varied in anthropomorphic language (high or low) and completed behavioral and self-reported measures of trust. Anthropomorphism increased trust across all measures, with the LLM's degree of emotional attunement fully mediating the relationship between anthropomorphism and behavioral trust. In Study 2 (N = 430), participants read six descriptions of an AI home assistant that performed a range of positive and negative actions and that varied in anthropomorphism (high, low, or none). Participants rated the responsibility of multiple actors, including the AI's creators. The data revealed, first, that anthropomorphism increased blame directed toward the AI entity itself. Second, we found strong evidence for responsibility displacement: participants who attributed more responsibility to the AI attributed less to the company that created it (r = -0.68). Together, these findings reveal perverse incentives in AI design: anthropomorphism increases user trust while simultaneously deflecting accountability from developers. These dynamics create a responsibility gap with significant implications for AI governance and institutional accountability.