When AI gets it wrong: False inference and political harm

Abstract

AI systems are increasingly active agents in political discourse, shaping reputations, narratives, and public perceptions. This commentary examines three real-world cases from Serbia in which AI chatbots, Grok and ChatGPT, asserted false claims and spread damaging narratives about political collectives or regime-critical individuals. These incidents illustrate how, under the guise of technical neutrality, AI can reinforce dominant narratives, amplify disinformation, and undermine dissent. Drawing on a recently proposed framework for AI regulation (Tomić & Štimac, 2025), we show how failures across three dimensions, decision models, data sourcing, and interface semantics, create pathways for political manipulation and reputational harm. We conclude by reflecting on the implications for political deliberation and by calling for targeted regulatory and empirical responses.