Large language models accurately identify decision reasons in verbal reports
Abstract
Understanding the reasons behind human choices under risk is a central goal of the decision sciences, yet traditional methods relying on behavioral data are limited by strict invariance assumptions. Here, we introduce a scalable method using large language models (LLMs) to analyze verbal reports and identify the articulated reasons for choices between monetary lotteries. We show that a validated LLM accurately identifies predefined decision reasons in participants' free-text reports, aligning with their actual choices in over 92% of trials. Our analysis reveals that reason usage varies systematically and is driven more by the choice problem's structure than by individual differences. A predictive model based on these problem-specific reason profiles outperforms prospect theory in out-of-sample prediction. This work demonstrates that verbal reports are a rich data source and that LLMs can unlock their potential, challenging foundational invariance assumptions and paving the way for more context-aware models of human decision-making.