When Do LLMs Say “Some”? An Investigation of Scalar Implicature and Politeness Mitigation in Large Language Models


Abstract

Scalar implicatures link semantic meaning to pragmatic reasoning: although some is logically compatible with all, listeners often enrich some to "some but not all." Yet some also functions as a politeness hedge that mitigates face-threatening acts, potentially canceling the usual "not all" inference. This study investigates whether large language models (LLMs) exhibit the same context-sensitive trade-off between informativeness and politeness as humans, and whether chain-of-thought (CoT) optimization yields more human-like pragmatic flexibility. Mandarin-speaking participants completed parallel production and judgment tasks that manipulated face threat (face-threatening vs. neutral) and factual state (All vs. Most). In Experiment 1, participants completed sentences by choosing among seven quantifiers, including some; in Experiment 2, they evaluated the acceptability of under-informative some statements versus fully informative factual statements, with reaction times recorded. Three LLMs (DeepSeek-V3.2, DeepSeek-R1, GPT-4o) were tested with identical materials. Humans showed a robust informativeness–politeness reweighting: face threat increased the production and acceptance of some while sharply reducing acceptance of blunt, fully informative statements. The LLMs generally captured the face-based licensing of vagueness but did not reliably penalize socially costly maximal informativeness. The CoT-optimized model showed greater context sensitivity and more human-like distributional patterns than the non-CoT models. Together, these findings are consistent with partial alignment with human mitigation but incomplete social-norm calibration, with CoT optimization offering a modest reduction in the mismatch.