AI Tools Can Enhance, Not Threaten, Generalizability
Abstract
In their recent Trends in Cognitive Sciences (TiCS) opinion piece, Crockett and Messeri argue that AI surrogates perpetuate generalizability problems in cognitive science by entrenching WEIRD samples and decontextualized tasks. Their critique is correct but incomplete: it conflates two uses of large language models (LLMs), as surrogates that replace human participants and as tools that make diverse human research more tractable. The persistent lack of generalizability reflects structural barriers: historical populations are inaccessible, cross-cultural studies are costly, and marginalized communities are resistant to conventional recruitment. As methodological tools paired with human validation, LLMs lower these barriers without creating surrogates. This distinction determines whether LLMs expand or narrow the scope of cognitive science. I operationalize it through a decision framework that specifies when LLMs enhance rather than threaten generalizability, providing clear standards for responsible methodological practice.