Human Psychology in the Age of AI: What Changes and What Doesn’t


Abstract

As generative AI drafts emails, summarizes medical results, and shapes social feeds, psychological science faces two questions: What does AI do to human behavior, judgment, and wellbeing? What does it reveal about the theories we use to explain them? This article examines emerging findings on trust and the dehumanization of AI users, literacy-dependent fear and awe, and large-scale persuasion, advice, and delegation. Across these domains, familiar mechanisms—source credibility, mind perception, ownership, control—operate in new environments where content generation is effortless, interaction is seamless, and influence scales invisibly. AI is less a synthetic mind than a stress test on three clusters of human functioning: epistemic (how people handle evidence and uncertainty), social-evaluative (how they perceive and judge others), and motivational (how they pursue goals and maintain identity). When current theories fail to predict behavior around AI—when people simultaneously overtrust hidden AI advice and distrust visible AI users—those failures reveal where core constructs remain underspecified. This framing clarifies what changes in the age of AI (scale, speed, opacity), what does not (the underlying psychological processes), and where theory must be sharpened to accommodate minds operating in AI-saturated environments.