AI-Assisted Assessment and Instruction in Higher Education: Foundations, Applications, and Implications for Exam Design

Abstract

The rapid diffusion of large language models (LLMs) such as ChatGPT challenges fundamental assumptions about assessment validity in higher education. This paper examines how AI systems capable of performing many traditional academic tasks necessitate a principled redesign of assessment practices. Drawing on the framework of constructive alignment, the paper analyzes the capabilities and limitations of LLMs and their implications for the alignment of learning objectives, instructional activities, and assessment formats. Empirical evidence indicates that LLM performance is strongest at lower cognitive levels (e.g., remembering, understanding, applying) and declines for higher-order processes such as analyzing, evaluating, and creating, suggesting that conventional text-based assessments are particularly vulnerable to AI-assisted completion. Building on this insight, the paper proposes a shift from AI-resistant to AI-robust assessment design, emphasizing process-oriented, personalized, practical, and oral formats that require authentic student competence. In addition, the paper outlines how educators can productively integrate AI into assessment workflows, including item construction, automated scoring, and formative feedback, supported by structured prompt engineering strategies and agentic AI systems. Ethical, legal, and institutional considerations are discussed, particularly regarding data protection and academic integrity. The paper concludes that effective responses to AI in education require not restriction but systematic redesign, grounded in clear educational principles and an expanded conception of academic competence.
