Overconfidence without Understanding: AI Explanations Increase the Illusion of Explanatory Depth
Abstract
The Illusion of Explanatory Depth (IOED), i.e., the tendency to overestimate the coherence and depth of one's understanding, tends to increase when people search for explanations online. It is less clear, however, whether the same applies to information obtained from AI chatbots. We tested whether receiving information from a chatbot magnifies the IOED and how it affects the quality of participants' explanations. University students (N = 102) were presented with four questions and predicted how well they could explain them. The GPT group then asked a custom version of ChatGPT for explanations and received pre-controlled answers; the no-GPT group received the same texts directly; the control group received no materials. Afterward, all participants wrote their explanations and rated them. The gap between the initial prediction and the subsequent self-evaluation was largest in the GPT group. Moreover, coder evaluations and text-analysis measures, including length, lexical diversity, and semantic similarity, revealed that the GPT group produced less accurate explanations than the no-GPT group.