Tricking into trusting? The influence of social cues of a generative AI on perceived trust

Abstract

Generative AI systems such as ChatGPT are increasingly used to assist with tasks or to obtain information. Since these systems are not perfectly reliable in the content they produce, human users must carefully gauge the degree to which they can place trust in the system (calibrated trust). However, based on media equation assumptions, it can be hypothesized that social cues displayed by the system instill more trust than is warranted. Against this background, the present study uses a 2×2 between-subjects design (N = 617) to investigate whether the social cues "typing behavior" and "personalized address" displayed by ChatGPT increase perceived trust in the system (benevolence, ability, and integrity), and whether this effect is mediated by perceived similarity and moderated by anthropomorphism inclination. The results show that the social cue "typing behavior" significantly increases trust on the benevolence dimension. Neither perceived similarity nor anthropomorphism inclination modulates this effect. However, as a side effect, the proposed mediator perceived similarity was found to significantly predict trust in ChatGPT.