Trusting the Machine: How Anthropomorphism Impacts Epistemic Trust in Generative AI
Abstract
As Generative Artificial Intelligence (GenAI) rapidly becomes more sophisticated and widespread, understanding how its perceived humanness affects trust is increasingly important. Previous work has shown that anthropomorphism increases the perceived trustworthiness of an agent as an advisor, but here we asked whether it might also affect the trustworthiness of informational communications the agent produces in a non-advisory role. In two experiments, participants read a science communication blog post attributed to a GenAI with varying levels of anthropomorphic cues and reported the trustworthiness of both the agent and the content it produced. In Experiment 1 (N = 270), anthropomorphic cues in a description of the author directly increased author trust but not content trust; nevertheless, anthropomorphism appeared positively correlated with content trust as well, an indirect association arising from the positive correlation between author and content trust. Experiment 2 (N = 144) replicated these findings with a more immersive design in which participants chatted with a GenAI chatbot presented as the author of the blog post. Partial correlations revealed that the effect of anthropomorphism on author trust explained much of the correlation between anthropomorphism and content trust in Experiment 1, but less so in Experiment 2. These results reveal a dissociation between the perceived trustworthiness of the author and that of the content, suggesting that anthropomorphic cues may directly affect the former but not the latter. Additionally, the salience of the GenAI author may modulate the cognitive mechanisms underlying these trustworthiness judgments.