A pilot implementation of Studiosity's AI-enabled Writing Feedback+ service: exploring students' experiences, approaches to utilisation and outcomes



Abstract

The emergence of AI (Artificial Intelligence) and GAI (Generative Artificial Intelligence) presents opportunities and risks for the provision of academic writing support. This paper reports an institution-wide pilot of Studiosity's WF+ (Writing Feedback+) service, a new AI-led formative feedback system. The study compares the experiences of four groups: non-Studiosity users, human-only feedback (WF – Writing Feedback), AI-only feedback (WF+), and students who experienced both WF and WF+. Drawing on data including student satisfaction, utilisation and timing, institutional attainment and retention, the paper demonstrates how to evaluate an AI-led implementation. While WF+ offers advances in scale and speed, further work is needed to better understand the nature of students' experiences. Overall satisfaction with AI feedback remained high and comparable to human feedback, and there were no major changes in students' submission patterns. Students accessing either of Studiosity's services attained improved grades and exhibited enhanced retention compared to non-users. The paper offers a replicable framework for practitioners tasked with implementing or evaluating AI-led solutions, and practitioners can use these techniques to track, develop and enhance their own implementations.
