AI-Supported Feedback as Assessment for Learning: Learner Agency, Trust, and Ethical Sensemaking in a Postgraduate Music Education Context

Abstract

The integration of artificial intelligence (AI) into educational assessment has intensified ethical and pedagogical debates concerning learner agency, trust, and responsibility. While AI-enabled systems are frequently promoted for efficiency and personalization, their use in assessment contexts raises concerns that extend beyond technical performance into questions of judgment, power, and ethical governance. Framed within an Assessment for Learning (AfL) perspective, this qualitative interpretive study examines how postgraduate music education students make sense of an AI-based Assessment and Feedback Assistant embedded within a Master’s-level course at a Malaysian public university. Drawing on reflective forum posts produced as part of routine coursework, the study explores how learners articulate perceptions of usefulness, negotiate trust, assert agency, and establish ethical boundaries around AI-supported feedback. Data were analysed using reflexive thematic analysis, informed by concepts of AfL, feedback literacy, trust, and human-centred AI ethics. The findings indicate that students do not experience AI as an authoritative assessor but as a provisional and dialogic resource that supports reflective sense-making when embedded within an AfL-oriented pedagogical design. Trust in AI emerged as conditional and negotiated, grounded in alignment with human judgment and contextual relevance rather than in technological authority. The study argues that ethical and trustworthy AI use in assessment is enacted through pedagogical governance rather than guaranteed by system design alone. By foregrounding reflection, transparency, and learner agency, AfL-oriented environments can enable AI to function as a support for assessment literacy without displacing human responsibility. The study contributes empirical insight into human-centred approaches to AI ethics in assessment, particularly within qualitative, judgment-intensive disciplines such as music education.