Trusting AI Too Much? Psychological Predictors of Overtrust and the Mitigating Role of AI Literacy

Abstract

Background: The increasing adoption of AI tools such as ChatGPT in academic contexts has introduced a new psychological risk: overtrust in AI. However, the psychological foundations and boundary conditions of AI overtrust remain insufficiently examined. This study investigated how self-efficacy, intrinsic motivation, and extrinsic motivation influence AI overtrust, and whether AI literacy can mitigate its negative consequences.

Methods: A cross-sectional survey was conducted with 300 university students in South Korea who had prior academic experience using AI tools. The questionnaire measured academic self-efficacy, intrinsic and extrinsic motivation, AI literacy, and AI overtrust. Structural equation modeling was used to test direct and mediated effects among the variables, with bootstrapping employed to evaluate indirect pathways.

Results: Self-efficacy and intrinsic motivation were positively associated with AI overtrust, suggesting that learners with higher confidence and autonomy are more prone to overtrusting AI-generated information. Extrinsic motivation showed a weaker direct effect. AI literacy had a significant negative effect on AI overtrust and mediated the relationships of both self-efficacy and intrinsic motivation with overtrust. A sequential mediation model indicated that performance expectation and AI literacy together suppressed overtrust by strengthening reflective awareness.

Conclusions: These findings demonstrate that high-performing and intrinsically motivated learners are psychologically vulnerable to AI overtrust. Although high academic self-efficacy fosters autonomy in using AI tools, it also increases the risk of overtrusting them. Targeted AI literacy education addresses this risk by enhancing learners' critical thinking, ethical awareness, and responsible AI use. Such education should be adapted to learners' psychological profiles to effectively prevent AI overtrust and support responsible use.