AI Surrogates and Illusions of Generalizability in Cognitive Science

Abstract

Recent advances in artificial intelligence (AI) have generated enthusiasm for using AI simulations of human research participants to produce new knowledge about human cognition and behavior. This vision of ‘AI Surrogates’ promises to enhance research in cognitive science by addressing longstanding challenges to the generalizability of human subjects research. AI Surrogates are envisioned as expanding the diversity of populations and contexts that we can feasibly study with the tools of cognitive science. Here, we caution that investing in AI Surrogates risks entrenching research practices that narrow the scope of cognitive science research, perpetuating ‘illusions of generalizability’ in which we believe our findings are more generalizable than they actually are. Taking the vision of AI Surrogates seriously helps illuminate a path toward a more inclusive cognitive science.