When AI is (not) in your camp: The role of ideological fit for trust in AI chatbots
Abstract
As AI chatbots increasingly serve as central information sources, understanding what drives user trust is critical. In light of the growing politicization of these bots, we investigate how their perceived political ideology shapes trust. We hypothesized that user trust is driven by the ideological fit between a user's own political leanings and the chatbot's perceived ideology. Across five preregistered studies (total N = 3,293) utilizing correlational and experimental designs in U.S. and German samples, we tested this "ideological fit effect." We found robust evidence that political conservatives trust conservative AI chatbots more, while liberals trust liberal AI chatbots more (Studies 1, 2, and 4). Mediation analyses suggest that this effect is driven by perceived ideological similarity. Furthermore, directly manipulating this similarity likewise increased trust and intentions to use an AI chatbot (Studies 3a and 3b). Finally, exploratory analyses suggest that among participants unfamiliar with a specific charity, the bot's recommendations influenced real prosocial decision-making in a behavioral donation task (Study 4). These findings demonstrate that trust in AI chatbots is strongly impacted by perceived ideological fit. A deliberate or accidental politicization of AI chatbots thus risks dividing the AI landscape into ideological echo chambers, carrying profound implications for AI ethics, politics, and society.