How humans process language (they think) is machine-generated

Abstract

Large language models (LLMs) produce language that is often indistinguishable from human-generated language, but little is known about how people process it. Do they attribute human-like mental states and accountability to machine-generated language? Through the lens of pragmatic theory, we explore how people process language when it is marked as generated by AI. In Experiment 1, we find that people believe the content of sentences presented as AI-generated, even when they deem the source unreliable. They also accommodate presuppositions introduced by such a source, implying that they adopt a generally cooperative stance. In both respects, participants' behavior resembled their behavior with an unreliable human interlocutor (Experiment 2), raising the possibility that people reason over machine-generated language in the same way they reason over human language.
