Pedagogical Roles of Large Language Models in Computing Education: A Systematic Literature Review
Abstract
Large Language Models (LLMs) are rapidly transforming educational practices, offering new possibilities for personalized, interactive, and adaptive learning. However, their specific pedagogical roles and impact on student learning processes in computing education remain underexplored. This systematic literature review investigates how LLM-based tools support student learning, with particular attention to pedagogical support mechanisms, student strategies, and the factors shaping tool interaction. Through a rigorous analysis of 46 articles identified from the ACM, IEEE Xplore, and Scopus databases, this study employed a thematic analysis methodology to categorize the current evidence. The results identified five primary support mechanisms: adaptive scaffolding, interactive pedagogical structures, formative feedback generation, personalization, and structured learning activities. These mechanisms facilitate deeper conceptual understanding, learning transfer, and critical thinking. Conversely, the absence of these pedagogical supports can lead to negative outcomes, including skill atrophy, dependency, cognitive offloading, and reduced self-efficacy. Furthermore, the review identifies learner characteristics, AI literacy, tool design, and socio-institutional context as critical factors determining the quality of student-LLM interactions. The study concludes that when integrated with appropriate pedagogical frameworks, LLMs act as dynamic co-learners that scaffold learning, ultimately transforming computing education into a more adaptive and reflective ecosystem.