Improving Validity and Efficiency of Digital Dyslexia Screening through Trial By Trial Feedback
Abstract
Trial-by-trial feedback is a powerful tool across scientific disciplines, offering real-time information that shapes learning, attention, and behavioral adjustment. However, its role in large-scale educational and psychological assessments remains underexplored. This study examines whether providing trial-by-trial performance feedback during a digital dyslexia screening task enhances engagement while preserving the validity and interpretability of the scores it produces. Disengagement is a critical threat to validity in assessments, particularly among young children in digital universal screening contexts. We conducted two large-scale experiments involving 6,610 students in Grades 1 through 12 in Colombia and the United States, in which students were randomly assigned to one of two conditions: (1) informative feedback (auditory cues indicating whether each response was correct or incorrect) or (2) neutral feedback (a non-informative sound). The assessment was administered in two formats: an adaptively ordered English version and a randomly ordered Spanish version. We evaluated differences in response behaviors, item response theory (IRT) model fit, and concurrent validity between conditions. Results indicate that informative feedback significantly increased compliance and reduced disengagement. Reading scores and IRT model fit were comparable across conditions. Notably, informative feedback improved the concurrent validity of scores and was associated with shorter completion times. These findings suggest that trial-by-trial feedback can enhance engagement without degrading score interpretability, thereby improving assessment efficiency and validity in a universal screening context.