Narratives of Divide: The Polarizing Power of Large Language Models in a Turbulent World

Abstract

Large language models (LLMs) are reshaping information consumption and influencing public discourse, raising concerns about their potential to empower narrative control and amplify polarization. This study examines the embedded worldviews of four LLMs across key themes, using Wittgenstein's theory of language games to interpret meaning and narrative structures. A two-tiered methodology—Surface (-S) and Deep (-D) analyses—is applied using Natural Language Processing (NLP) to investigate the four LLMs. The -S analysis, evaluating general differences in thematic focus, semantic similarity, and sentiment patterns, found no significant variability across the four LLMs. However, the -D analysis, employing zero-shot classification across geopolitical, ideological, and philosophical dimensions, revealed alarming differences. Liberalism (H = 12.51, p = 0.006), conservatism (H = 8.76, p = 0.033), and utilitarianism (H = 8.56, p = 0.036) emerged as key points of divergence between LLMs. For example, the narratives constructed by one LLM exhibited strong pro-globalization and liberal leanings, while another generated pro-sovereignty narratives, introducing meaning through national security and state autonomy frames. Differences in philosophical perspectives further highlighted contrasting preferences for utilitarian versus deontological reasoning across justice and security themes. These findings demonstrate that LLMs, when deployed at sufficient scale and connectivity, could be employed as stealth weapons in narrative warfare.
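The cross-model comparison described above can be sketched with a Kruskal-Wallis H test over per-model classification scores. This is a minimal illustration, not the authors' code: the model names and the zero-shot confidence scores below (e.g. for a "liberalism" label) are entirely synthetic stand-ins.

```python
# Hypothetical sketch of the Deep (-D) comparison: a Kruskal-Wallis
# H test over zero-shot classification scores from four LLMs.
from scipy.stats import kruskal

# Synthetic per-response confidence scores for one ideological label.
scores = {
    "llm_a": [0.81, 0.77, 0.85, 0.79, 0.83],
    "llm_b": [0.42, 0.48, 0.39, 0.45, 0.41],
    "llm_c": [0.60, 0.58, 0.63, 0.55, 0.61],
    "llm_d": [0.52, 0.49, 0.54, 0.50, 0.47],
}

# Non-parametric test of whether the four score distributions differ.
h_stat, p_value = kruskal(*scores.values())
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```

A small p-value (below 0.05, as for the liberalism, conservatism, and utilitarianism dimensions in the study) would indicate that the models' framing scores differ significantly on that dimension.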
