Active Grade Estimator on Short Answer Assessment
Abstract
When introducing machine learning to automated grading systems, active learning is advantageous because it lets teachers provide as little guidance as possible while still producing reliable automatic grading results. However, given the high variety of question/answer types, a grading system must account for a diverse set of possible correct answers to assign suitable grades. Moreover, students who lack self-confidence in answering exam questions often give long-winded or formless answers, which prevents automatic grading from reaching reliable performance. Together, these factors rule out conventional keyword-matching approaches. To address the difficulty, we propose a novel two-stage query strategy based on large language models (LLMs). In the first stage, we match a student's answer to a key answer from a prepared answer set. Afterward, the student's answer is summarized by an LLM into a well-structured answer. The proposed LLM-based approach can be viewed as a prototype-based learning method. We also reduce the risk of biased labeling by including multiple graders as oracles in the proposed active learning framework. The approach is demonstrated on two publicly available automated short answer grading datasets with different grading tasks. Overall, the proposed model outperforms state-of-the-art methods on similar topics in terms of both effectiveness and efficiency.
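The abstract only sketches the pipeline, but a rough illustration of the two-stage query strategy might look like the following. This is a minimal sketch under assumptions not stated in the paper: sentence-transformers embeddings for the matching stage, a hypothetical llm_summarize() standing in for the LLM call, a similarity threshold as the trigger for the second stage, and majority voting over graders as the multi-oracle step; the authors' actual method may differ.

```python
# Minimal sketch of a two-stage query strategy (assumptions noted above).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model


def llm_summarize(answer: str) -> str:
    # Hypothetical placeholder: an LLM rewrites a long-winded or
    # formless answer into a well-structured one.
    return answer


def grade(student_answer: str, key_answers: list[str], threshold: float = 0.6):
    key_vecs = model.encode(key_answers)  # prototype (key) answers

    def best_match(text: str):
        v = model.encode([text])[0]
        sims = key_vecs @ v / (np.linalg.norm(key_vecs, axis=1) * np.linalg.norm(v))
        i = int(np.argmax(sims))
        return i, float(sims[i])

    # Stage 1: match the raw answer against the prepared answer set.
    idx, sim = best_match(student_answer)
    # Stage 2 (assumed trigger): if the match is weak, summarize the
    # answer into a well-structured form and rematch.
    if sim < threshold:
        idx, sim = best_match(llm_summarize(student_answer))
    return idx, sim


def oracle_label(answer: str, graders) -> str:
    # Multiple graders as oracles: majority vote is one plausible way
    # to reduce bias from any single labeler.
    labels = [g(answer) for g in graders]
    return max(set(labels), key=labels.count)
```

In an active learning loop, answers whose best similarity stays below the threshold even after summarization would be the natural candidates to route to oracle_label for human grading.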