Participatory Co-Design and Evaluation of a Novel Approach to Generative AI-Integrated Coursework Assessment in Higher Education
Abstract
Generative AI tools offer opportunities for enhancing learning and assessment but raise concerns about equity, academic integrity, and students' ability to critically engage with AI-generated content. This study explored these issues within a psychology-oriented postgraduate programme at a UK university. We co-designed and evaluated a novel AI-integrated assessment aimed at improving critical AI literacy among students and teaching staff (pre-registration: https://osf.io/jqpce). Students were randomly allocated to two groups: the ‘honest’ group used AI tools to assist with writing a blog and critically reflected on the outputs, while the ‘cheating’ group had free rein to use AI to produce the assessment. Teaching staff, blinded to group allocation, marked the blogs using an adapted rubric. Focus groups, interviews, and workshops were conducted to assess the feasibility, acceptability, and perceived integrity of the approach. Findings suggest that, when carefully scaffolded, integrating AI into assessment can promote both technical fluency and ethical reflection. Students engaged critically with AI outputs, and staff recognised the approach's potential to support critical thinking while maintaining academic standards. The approach also answers growing calls for authentic assessment that mirrors real-world professional tasks. However, tensions between preserving academic integrity and supporting skill development must continue to be addressed in future work.