How AI-assisted problem solving dissociates competence and performance in higher education

Abstract

This paper introduces and quantifies the Hyperfocus Bias Index (HBI): a narrowing of attentional and strategic focus toward a subset of items, often the most advanced ones, during problem-solving tasks assisted by large language models (LLMs). We conduct a large-scale simulation of item response theory (IRT)-calibrated multiple-choice assessments, comparing a no-assistance condition with two LLM-assisted conditions of differing reliability (Qwen2, Mistral-7B), to examine how AI reshapes effort allocation, decision sequences, and, ultimately, competence assessment. The HBI is operationalized as a composite index combining temporal concentration, clustering of AI calls, reduced post-error switching, advanced-item tunneling, and post-error slowing. Results show that highly reliable assistance markedly increases the HBI: time and queries become concentrated on a few tasks, extended advanced-item sequences emerge, and post-failure flexibility decreases. Crucially, hyperfocal profiles are associated with local gains (success on difficult items) but lower overall performance, suggesting a dissociation between internalized competence and assisted performance. We discuss pedagogical safeguards, including transparency about the “optimal dose” of assistance, incentives for strategic switching, and calibration of AI support, to preserve the integrity of learning and assessment in the era of LLMs.
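
The composite index described in the abstract lends itself to a compact illustration. Below is a minimal Python sketch, assuming five per-test-taker component scores (Gini coefficients of the time and AI-call distributions as concentration measures, an inverted post-error switch rate, the fraction of the session spent in the longest advanced-item run, and a relative slowing ratio) aggregated by cohort-level z-scoring with equal weights. These operationalizations, names, and the aggregation rule are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def gini(x):
    # Gini coefficient as a concentration measure: 0 = effort spread evenly
    # across items, values near 1 = effort piled onto a few items.
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

def hbi_components(time_per_item, ai_calls_per_item,
                   post_error_switch_rate, advanced_run_fraction,
                   post_error_slowing_ratio):
    # Component scores for one test-taker; each is oriented so that
    # higher = more hyperfocal.
    return np.array([
        gini(time_per_item),           # temporal concentration
        gini(ai_calls_per_item),       # clustering of AI calls
        1.0 - post_error_switch_rate,  # reduced post-error switching
        advanced_run_fraction,         # advanced-item tunneling (longest advanced run / n items)
        post_error_slowing_ratio,      # post-error slowing (e.g. mean RT after errors / baseline RT - 1)
    ])

def hbi(component_matrix):
    # Composite HBI per test-taker: z-score each component across the
    # cohort so all five contribute on a common scale, then average.
    z = (component_matrix - component_matrix.mean(axis=0)) / component_matrix.std(axis=0)
    return z.mean(axis=1)

# Two simulated test-takers on a four-item assessment: a balanced
# strategy vs. a hyperfocal one that tunnels on two advanced items.
profiles = np.vstack([
    hbi_components([10, 10, 10, 10], [1, 1, 1, 1], 0.8, 0.25, 0.05),
    hbi_components([2, 3, 30, 25], [0, 0, 6, 5], 0.2, 0.75, 0.40),
])
print(hbi(profiles))  # the second (hyperfocal) score comes out higher

In this toy cohort the hyperfocal profile scores higher on every component, so its composite z-average is positive while the balanced profile's is negative, matching the intended reading of the index.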
