Large language models and conversational counter-arguments to anti-public sector bias

Abstract

Can a good argument change an individual's mind? In two pre-registered experiments, we explore this question in the domain of public sector organizational performance. In each of these experiments, we observe human subjects as they engage in a conversation with a generative artificial intelligence (AI) programmed to persuade its interlocutors that United States federal agencies perform, on the whole, quite well. In a third pre-registered experiment, we explore the potential dark side of AI-based persuasion strategies by testing whether subjects find an AI that panders to their pre-existing beliefs about public sector performance more credible than an AI that challenges those beliefs. We develop a theory of effective argumentation to synthesize potential answers to an array of specific, practical questions of persuasion. To attempt to answer these questions, we program a large language model (LLM) to argue in one of seven distinct "styles," including an aggressive style, a didactic style, and a sycophantic style. Our results suggest that these different argumentative styles vary in their effectiveness, and that there may be a dark side to LLM-based persuasion.
