Prediction of Performance in Standardised Assessments from Computer-Based Formative Assessment Data
Abstract
Summative assessments (SAs) and formative assessments (FAs) fulfil complementary functions in the educational endeavour. SAs measure knowledge at the end of a unit in a standardised, high-stakes setting, while FAs evaluate student performance during daily classroom activities to tailor feedback and instruction. Computer-based FA (CBFA) systems enable the collection of unprecedented amounts of data objectively and with minimal disruption to students, under conditions that more closely resemble real-life behaviour. Given concerns about the student stress and ecological validity associated with SAs, potential biases in teacher judgements, and the high burden entailed by traditional classroom assessments, we investigated whether and how well FA outcomes can predict SA outcomes. Specifically, we estimated student abilities in a large sample of children evaluated at different time points during compulsory schooling and systematically compared regression models trained to predict SA abilities from different subsets of features derived from FA abilities and auxiliary variables. A model that included mean abilities in different competence domains performed best, accounting for a considerable proportion of variance (30–48%), although this was still below that explained by past SA measures. The most predictive FA features generally corresponded to abilities from the same or a similar competence domain as the predicted SA ability. We report systematic model biases that would warrant consideration when using the models for decision-making. Our findings provide valuable insights into how learning progress connects to future achievement, which can help teachers adapt instruction earlier and inform policies to reduce reliance on high-stakes testing.
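The feature-subset comparison described above can be illustrated with a minimal sketch. The code below is not the authors' pipeline: it uses entirely synthetic data, simple one-predictor least-squares regression, and hypothetical feature names (`fa_math`, `fa_lang`, `sa_math`), purely to show the idea of ranking FA-derived features by the variance they explain in an SA outcome.

```python
# Hypothetical sketch of comparing FA features as predictors of an SA score.
# All data are synthetic; feature names are invented for illustration only.
import random

random.seed(0)
n = 200

# Synthetic FA abilities in two competence domains; the SA score depends
# mostly on the first domain, mirroring the abstract's observation that the
# most predictive FA features come from the same domain as the SA ability.
fa_math = [random.gauss(0, 1) for _ in range(n)]
fa_lang = [random.gauss(0, 1) for _ in range(n)]
sa_math = [0.7 * m + 0.1 * l + random.gauss(0, 0.6)
           for m, l in zip(fa_math, fa_lang)]

def r_squared(x, y):
    """R^2 of a simple least-squares regression of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    beta = sxy / sxx
    alpha = my - beta * mx
    ss_res = sum((b - (alpha + beta * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot

# Rank candidate FA features by explained variance in the SA outcome.
scores = {"fa_math": r_squared(fa_math, sa_math),
          "fa_lang": r_squared(fa_lang, sa_math)}
best = max(scores, key=scores.get)
print(best, {k: round(v, 2) for k, v in scores.items()})
```

In this toy setting the same-domain feature explains far more variance than the cross-domain one, which is the qualitative pattern the abstract reports; the actual study compares richer multi-feature regression models, not single predictors.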