You Must Not Fool Yourself: Feynman, Neurodiversity, and Honest AI in Digital Mental Health
Abstract
Digital mental health is undergoing rapid transformation through the deployment of artificial intelligence (AI) systems, including conversational agents, risk-prediction tools, and clinical decision-support platforms. Despite growing enthusiasm, the field lacks an operational standard for what constitutes honest, trustworthy AI in psychiatric and psychological contexts. This Perspective draws on Richard Feynman’s foundational principle of scientific integrity—articulated in his 1974 Caltech commencement address, ‘Cargo Cult Science’—and Carl Sagan’s dimensional metaphors to develop a practical, honesty-focused ethical framework for AI in digital mental health. The neurodiversity paradigm provides a critical stress test: AI tools trained on narrow behavioral and linguistic norms systematically misrepresent neurodivergent users, silently re-encoding stigma, misclassification, and inequity into clinical systems. This concern is urgent given that neurodivergent individuals experience substantially higher rates of depression, anxiety, and suicide risk, making them particularly vulnerable to algorithmic harm. Building on predictive processing theory from cognitive neuroscience and recent empirical evidence of AI bias against neurodivergent populations, this work proposes five criteria for Feynman-honest AI: scope clarity, population-limit disclosure, uncertainty communication, role integrity, and participatory co-development. The analysis concludes with a forward-looking research agenda for clinicians, developers, and policymakers, arguing that digital mental health will be judged not only by algorithmic performance but also by collective candor about the limits, blind spots, and boundaries of the systems the field builds and deploys.