Mimicking the human mind: how does AI respond to ambiguous and uncertain situations?


Abstract

The Rorschach inkblot test is a classic projective method in clinical psychodiagnostics, traditionally used to assess perception, personality structure, and emotional functioning in patients. This pilot study presents a novel application of the Rorschach test to an artificial intelligence, specifically the generative language model ChatGPT (GPT-4o). GPT-4o was administered all ten standardized inkblots across four separate sessions: three under typical instructions and one with explicit suppression of prior test knowledge. In each session, the model provided interpretations for each inkblot, followed by a post-test inquiry to elaborate on its responses, mirroring standard Rorschach administration. GPT-4o produced coherent, detailed, and emotionally rich descriptions of the ambiguous stimuli. Its responses were thematically consistent across sessions and demonstrated human-like perceptual and cognitive flexibility in handling each inkblot's ambiguity. Notably, the model's interpretations remained comparably rich and coherent even when prior test knowledge was suppressed. The model also engaged in interactive dialogue with the examiner during the inquiry phase and frequently drew upon vivid cultural and fictional visual motifs in its interpretations. These findings highlight the capacity of advanced large language models to emulate certain aspects of human interpretive processes. At the same time, they raise questions about whether this performance reflects genuine perception or merely mimicry of learned patterns. Ethical implications of applying clinical psychodiagnostic tools to AI systems are also considered, particularly the risk of anthropomorphic misinterpretation.