Beliefs and Sharing Intentions of Human- and AI-Generated Fake News: Evidence from 27 European Countries

Abstract

Misinformation remains a major challenge in today’s information environment, and rapid advances in AI-driven content generation risk amplifying this problem. Generative AI represents a double-edged sword: it holds great promise for improving the detection of false information, yet it also enables the rapid, large-scale production of highly persuasive fake content. Understanding how people perceive AI-generated misinformation is therefore crucial for designing effective interventions and safeguarding information integrity. To address this, we embedded a pre-registered experiment in a large-scale web survey conducted across 27 European countries. Participants were presented with eight short news headlines related to the Russo-Ukrainian war: four AI-generated and four human-generated, evenly split between real and fake news. For each headline, respondents assessed its perceived veracity and their willingness to share it. Our findings show that, regardless of authorship, fake news is less likely to be perceived as accurate and less likely to be shared, although differences between human- and AI-generated content were minimal. This pattern was remarkably consistent across the 27 countries, with some variation by individual characteristics. Importantly, our study demonstrates that GPT-4 models can generate convincing fake news on the Russo-Ukrainian war from a simple prompt, producing content perceived as equally credible as – if not more credible than – human-written news.
