From Prompts to Constructs: A Dual-Validity Framework for Large Language Model Research in Psychology

Abstract

Large language models (LLMs) are rapidly being adopted across psychology, serving as research tools, evaluation targets, human simulators, and cognitive models. However, the application of human measurement tools to these systems produces brittle responses, raising concerns that many findings are measurement phantoms—statistical artifacts rather than genuine psychological phenomena. We argue that building a robust science of AI–psychology requires integrating two of our field’s foundational pillars: the principles of reliable measurement and the standards for sound causal inference. We present a dual-validity framework to guide this integration, which clarifies how the evidence needed to support a claim scales with its scientific ambition. Using an LLM to classify text may require only basic accuracy checks, whereas claiming it can simulate anxiety demands far more rigorous validation. Current practice systematically fails to meet these requirements, often treating statistical pattern matching as evidence of psychological phenomena. The same model output—endorsing “I am anxious”—requires different validation strategies depending on whether researchers claim to classify, characterize, simulate, or model psychological constructs. Moving forward requires developing computational analogues of psychological constructs and establishing clear, scalable standards of evidence rather than uncritically applying human measurement tools.