Do People Think ChatGPT Is Conscious? Evidence from a Large Polish Sample
Abstract
Large language models (LLMs) like ChatGPT are increasingly treated by the public as if they possess subjective experience. Colombatto and Fleming (2024) recently showed that most U.S. adults attribute at least some phenomenal consciousness to ChatGPT, despite a scientific consensus that current AI systems are not conscious. The present study provides the first large-scale measurement of this phenomenon in Poland. A nationally distributed online survey panel (N = 1,736) rated ChatGPT’s capacity for conscious experience on a 1–100 scale. Most respondents (74%) attributed at least minimal consciousness to ChatGPT, closely mirroring prior U.S. findings. Small exploratory associations suggested that older and more highly educated participants were slightly more skeptical, whereas right-leaning respondents attributed slightly more consciousness. These effects, though modest, indicate theoretically relevant variation. Interpreting these results through Epley’s three-factor anthropomorphism model, the work argues that limited mechanistic understanding of LLMs, highly human-like behavior, and unmet social needs likely contribute to widespread attribution of phenomenal consciousness. Given ethical concerns about over-trust, parasocial attachment, and misperception of responsibility, these beliefs matter for AI deployment and governance. Our findings highlight the need for evidence-based educational strategies that improve public understanding of how LLMs work while allowing beneficial use of the technology.