Assessing students’ DRIVE: An evidence-based framework to evaluate learning through students’ interactions with generative AI
Abstract
As generative AI (GenAI) transforms how students learn and work, higher education must rethink its assessment strategies. This paper presents a taxonomy and conceptual framework (DRIVE) to evaluate student learning from GenAI interactions (prompting strategies), focusing on cognitive engagement (Directive Reasoning Interaction) and knowledge infusion (Visible Expertise). Despite extensive research mapping students' GenAI writing behaviors, practical tools for assessing domain-specific learning remain underexplored. This paper shows how GenAI interactions inform such learning in authentic classroom contexts, moving beyond technical skills or low-stakes assignments. We conducted a multi-method analysis of GenAI interaction annotations (n=1450) from graded essays (n=70) in STEM writing courses. A strong positive correlation was found between high-quality GenAI interactions and final essay scores, validating the feasibility of this assessment approach. Furthermore, our taxonomy revealed distinct interaction profiles: high essay scores correlated with a "Targeted Improvement Partnership" focused on text refinement, while high interaction scores were linked to a "Collaborative Intellectual Partnership" centered on idea development. Conversely, below-average performance was associated with "Basic Information Retrieval" or "Passive Task Delegation". These findings demonstrate how the assessment method (output- vs. process-focused) shapes students' GenAI usage and learning depth: traditional output-focused assessment can reinforce text optimization, while process-focused evaluation may reward the exploratory partnership crucial for deeper learning. The DRIVE framework and related taxonomy offer educators a practical tool to design assessments that capture authentic learning in AI-integrated classrooms.