The Effectiveness of Structured CEFR-Based Speaking Evaluation in Online ESL Platforms
Abstract
This study investigated whether structured speaking evaluation frameworks grounded in the Common European Framework of Reference for Languages (CEFR) improve the reliability and perceived fairness of oral proficiency assessment on online English as a Second Language (ESL) platforms. Using a mixed-methods quasi-experimental design, 214 adult ESL learners (aged 19–47) enrolled across three commercial platforms in Southeast Asia and the Middle East were tracked between September 2024 and January 2025. Participants were assigned to either a CEFR-aligned evaluation group (n = 112) receiving structured rubric-based oral feedback or a comparison group (n = 102) assessed through conventional instructor holistic ratings. Pre- and post-intervention speaking scores were gathered using a standardised elicitation protocol at Weeks 1 and 16, supplemented by 38 semi-structured interviews and 12 instructor focus groups. The CEFR-aligned group demonstrated markedly higher inter-rater reliability (ICC = .87 vs. .61, p < .001) and statistically significant gains in fluency and coherence subscale scores (d = 0.74). Learners in the structured evaluation condition reported clearer understanding of performance expectations and greater confidence in self-assessment, although several instructors noted practical challenges in adapting CEFR descriptors to the conversational tasks typical of online tutoring. These findings carry implications for platform designers and ESL programme coordinators seeking transparent, criterion-referenced approaches to online speaking assessment.
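For readers unfamiliar with the reported statistics, the following is a minimal, hypothetical sketch of how an intraclass correlation (the inter-rater reliability figure) and Cohen's d (the effect size) are typically computed; the column names, example scores, and use of the pingouin library are assumptions for illustration, not the study's actual analysis pipeline.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Long-format ratings: one row per (learner, rater) pair -- assumed layout.
ratings = pd.DataFrame({
    "learner": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "rater":   ["A", "B"] * 6,
    "score":   [4.0, 4.5, 3.0, 3.5, 5.0, 4.5, 2.5, 3.0, 4.0, 4.0, 3.5, 4.0],
})

# Inter-rater reliability: intraclass correlation coefficients
# (the abstract reports ICC = .87 vs. .61 across conditions).
icc = pg.intraclass_corr(data=ratings, targets="learner",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])

def cohens_d(treatment: np.ndarray, control: np.ndarray) -> float:
    """Cohen's d using a pooled standard deviation (d = 0.74 in the abstract)."""
    n1, n2 = len(treatment), len(control)
    pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
                         (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
    return (treatment.mean() - control.mean()) / pooled_sd

# Example: gain scores for two hypothetical groups.
gains_cefr = np.array([1.2, 0.8, 1.5, 1.0, 0.9])
gains_comparison = np.array([0.4, 0.6, 0.3, 0.7, 0.5])
print(f"Cohen's d = {cohens_d(gains_cefr, gains_comparison):.2f}")
```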