Large language models exhibit human-like sensitivity to framing manipulations
Abstract
Humans are highly susceptible to framing manipulations in intertemporal decision-making: choices that involve tradeoffs between immediate and delayed rewards. Well-known decision-making biases such as the magnitude effect, in which larger reward amounts increase patience, have been attributed to self-control, reward system activation, and other cognitive mechanisms. Here, we show that large language models (LLMs) exhibit similar sensitivities to framing manipulations, including the magnitude, sign, and hidden-zero effects. Unlike humans, LLMs discount delayed rewards exponentially rather than hyperbolically and do not exhibit a decimal effect, suggesting their behavior does not simply reflect knowledge of published decision-making phenomena learned during training. Analysis of LLM embedding spaces, which encode semantic knowledge, revealed that large monetary amounts are represented more closely to words related to delay and the future. This suggests that framing manipulations bias LLM choices through semantic proximity to words linked to the present or future. Together, these results introduce a conceptual framework whereby linguistic structure shapes decisions, suggesting that human decision-making biases may emerge, in part, from the organization of choice options in semantic space.
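The exponential-versus-hyperbolic contrast in the abstract refers to two standard discount functions from the intertemporal-choice literature, not to code from this preprint. A minimal sketch, using the textbook forms (exponential: V = A·e^(−kD); hyperbolic: V = A/(1 + kD), with discount rate k and delay D both chosen here for illustration), shows the qualitative difference: exponential discounting applies a constant per-period rate, while hyperbolic discounting falls steeply at short delays but retains more value at long ones.

```python
import math

def exponential_discount(amount: float, delay: float, k: float) -> float:
    """Exponential discounting: V = A * exp(-k * D).

    The ratio of values across a fixed extra delay is constant,
    so preferences are time-consistent.
    """
    return amount * math.exp(-k * delay)

def hyperbolic_discount(amount: float, delay: float, k: float) -> float:
    """Hyperbolic discounting: V = A / (1 + k * D).

    Value drops sharply at short delays but decays more slowly
    than the exponential form at long delays, producing the
    preference reversals typical of human choice.
    """
    return amount / (1 + k * delay)

# Illustrative parameters (assumptions, not fitted values from the paper).
A, k = 100.0, 0.1

for D in (0, 10, 50):
    print(D, round(exponential_discount(A, D, k), 2),
             round(hyperbolic_discount(A, D, k), 2))
```

At D = 0 both functions return the full amount; at long delays the hyperbolic value stays well above the exponential one, which is the behavioral signature the paper reports LLMs lack.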