A Conceptual Framework for Evaluating Computer-Assisted Language Learning-Dedicated Applications

Abstract

In language teaching and learning, evaluation plays a prominent role in making progress and achievement visible. It therefore occurs continually across all aspects of teaching (materials, content, pedagogical practices, and related issues). Evaluating materials, however, remains complex owing to their distinctiveness, a complexity compounded by the widespread use of Web-based resources in teaching and learning settings to create authentic learning opportunities. Judging the suitability of such materials consequently requires guidance and practical frameworks that establish common ground for evaluation. Although technology, including Computer-Assisted Language Learning Dedicated Applications (CDAPPS), offers powerful solutions for particular learning and teaching contexts, the question of how well these applications fit specific teaching and learning contexts remains contested. The evaluation frameworks developed by Hubbard, Chapelle, and Richards and Rodgers have nevertheless paved the way for more effective evaluation of CALL resources and applications. Against this background, the present study seeks to demystify the evaluation of CDAPPS by adopting a conceptual research methodology combined with a systematic review of previous models for evaluating Computer-Assisted Language Learning Dedicated Applications, and it proposes a conceptual, principled framework entitled the Mudawe and Maslamani Framework. The proposed framework comprises four levels of analysis: learner/user fit, language professional fit, technology fit, and institutional administrator fit. Each level contains several criteria associated with its main level of analysis that can be applied through judgmental or empirical evaluation.