Believing ChatGPT: perceived truthfulness of AI-generated political themed headlines and its relationship with ideology and congruence.
Abstract
Fake News is a highly relevant current phenomenon, with important implications for societies and democracies such as increased discrimination, prejudice and intolerance, worsening social relations, and loss of confidence in the media and institutions. Furthermore, Fake News today is closely linked to social media, bots, and artificial intelligence systems. Understanding how people perceive AI-generated Fake News, and not only human-generated Fake News, is therefore particularly relevant to understanding the current disinformation phenomenon. The aim of this research is to understand the influence of ideology, attitudes, and ideology-information congruence on the perceived truthfulness of AI-generated, politically themed fictitious headlines. Using a sample of 204 adults, we applied Kruskal-Wallis tests, Mann-Whitney U tests, and Spearman's correlations to test differences and relationships between perceived truthfulness, ideology, and congruence. We found that participants believed the headlines generated by artificial intelligence, and that ideology strength, ideology-information congruence, and political orientation play a key role in the headlines' perceived truthfulness. Given that a large percentage of current social media accounts are bots, understanding how artificial intelligence, Fake News, and belief in AI-generated Fake News interact with each other seems relevant for combating Fake News generation and spreading, as well as phenomena such as polarization and disinformation.