Assessment Format Matters: Evidence for Differences in Metacognitive Resolution Between Multiple-Choice and Open-Ended Exams
Abstract
Assessment format may influence not only students’ performance but also how they monitor and evaluate their own learning. This study examined how multiple-choice and open-ended questions are associated with different components of metacognitive monitoring in a real university exam context. A sample of 150 undergraduate students completed an exam including both formats and provided self-assessments (SSA) and confidence judgments (JC) for each section. Results showed that students achieved higher performance (STD), higher self-assessments, and higher confidence judgments on multiple-choice questions than on open-ended questions. Both formats showed similar levels of calibration bias, characterized by a general tendency to overestimate performance, although absolute calibration error was slightly lower in the open-ended format. The most robust difference emerged in metacognitive resolution: the relationship between self-assessment and actual performance was substantially stronger for open-ended questions than for multiple-choice questions, indicating that self-assessments were more sensitive to performance in this format. Confidence judgments showed weak and inconsistent associations with calibration accuracy. These findings suggest that assessment format shapes not only performance outcomes but also the quality of metacognitive monitoring, with generative tasks potentially providing more diagnostic cues for evaluating one’s own knowledge.