Decision Matrix for Prioritizing Generative AI Risks in Higher Education


Abstract

Generative AI is rapidly reshaping higher education, yet institutions lack a transparent, replicable way to prioritise risks and allocate mitigation effort. Objective. We propose an Analytic Hierarchy Process (AHP) framework to rank 24 AI-related risks across six criteria relevant to universities. Methods. Following PRISMA 2020 reporting guidance, we systematically screened ~200 records, assessed 42 full texts, and included 29 sources that informed the criteria, sub-risks, and severity scales. Pairwise comparisons yielded normalised weights, and consistency was verified (CR ≤ 0.10). We release replication files (pairwise matrices, computed weights, PRISMA flow diagram). Results. The top priorities are academic integrity (C1, 29%) and data protection/compliance (C5, 19%). Misinformation-related risks (C3, 12%) and student disengagement/critical thinking (C4, 12%) form a second tier; bias/discrimination (C2, 19%) remains structurally important in equity-sensitive contexts, while transparency/dependency (C6, 8%) completes the profile. Implications. The framework converts strategic concerns into actionable governance: (i) a Responsible-Use Charter and exam-integrity controls; (ii) data protection impact assessments (DPIAs) by design and data minimisation; (iii) model cards/datasheets and incident logging aligned with recognised AI risk-management practices; (iv) annual recalibration by discipline. Conclusion. The AHP approach offers a transparent, auditable basis for prioritising AI risks in higher education and for steering policy, investment, and assurance activities. We provide materials to facilitate adoption and adaptation to non-EU contexts.
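The AHP computation the abstract describes can be sketched as follows: priority weights are the normalised principal eigenvector of a reciprocal pairwise-comparison matrix, and the consistency ratio (CR) must not exceed 0.10. This is a minimal stdlib-only sketch using power iteration; the 3×3 matrix below is a hypothetical illustration on Saaty's 1–9 scale, not the paper's released data.

```python
def ahp_weights(A, iters=200):
    """Approximate AHP priority weights via power iteration on a
    reciprocal pairwise-comparison matrix A (list of lists)."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(v)
        w = [x / total for x in v]
    # Estimate the principal eigenvalue lambda_max as the mean of (A w)_i / w_i.
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n
    return w, lam

def consistency_ratio(lam, n):
    """CR = CI / RI, with CI = (lambda_max - n) / (n - 1) and
    Saaty's random index RI for matrix size n."""
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}
    ci = (lam - n) / (n - 1)
    return ci / RI[n] if RI[n] else 0.0

# Hypothetical pairwise judgments (not taken from the article):
# criterion 1 is moderately-to-strongly preferred over 2 and 3.
A = [
    [1.0,   3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
w, lam = ahp_weights(A)
cr = consistency_ratio(lam, len(A))
```

For a six-criterion matrix like the paper's, the same code applies with `RI[6] = 1.24`; judgments whose CR exceeds 0.10 would be returned to the panel for revision rather than used to compute weights.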
