"Definitely not to be fully relied on, but useful all the same": Developing critical AI literacy through structured assessment redesign in postgraduate health management education
Abstract
The use of generative artificial intelligence has grown explosively in professional and educational settings. For health professional education, this poses a fundamental tension: early institutional messaging has often discouraged AI use, yet graduates will increasingly encounter AI in management and health policy contexts. This teaching innovation case study describes the redesign of an assessment task in a large postgraduate health management course (n=410) to require rather than prohibit generative AI use, with the explicit goal of developing critical AI literacy. The intervention drew on Bearman et al.'s concepts of evaluative judgment and critical AI literacy. It added a structured reflection component requiring students to use AI to analyse an organisational strategic plan and critically appraise its outputs. Analysis of student feedback (n=156 course evaluation responses; n=127 supplementary evaluation responses) reveals that while the approach was largely well received, important tensions remain. Some students experienced anxiety stemming from prior prohibitive messaging about generative AI use in other courses, others found the AI component a distraction from substantive course content, and many engaged with AI instrumentally rather than critically. Grade distributions improved compared with the prior year (M=87.8%, SD=7.9% in the intervention year versus M=81.7%, SD=12.8% in the prior year), though multiple factors, including enhanced scaffolding and changes in cohort composition, preclude attributing the improvement to this innovation alone. The intervention achieved its primary aim of normalising AI engagement while exposing students to its limitations, though deeper critical evaluation proved more elusive than anticipated.
Our findings suggest that developing critical AI literacy in health professional education requires careful sequencing within courses, explicit scaffolding through prompting frameworks, clear alignment with professional relevance, and ultimately program-level integration rather than isolated assessment innovations.