A Survey of Techniques, Key Components, Strategies, Challenges, and Student Perspectives on Prompt Engineering for Large Language Models (LLMs) in Education

Abstract

This study presented a comprehensive investigation into prompt engineering for large language models (LLMs) within educational contexts, combining a systematic literature review with a 12-week empirical study in which primary school students used a chatbot-based tutor in a Python programming course. The research explored the breadth of prompt engineering techniques, identified essential components of effective educational prompts, examined strategic applications, highlighted key implementation challenges, and captured learner perspectives on interacting with LLMs.

Our review categorized prompt engineering techniques into foundational (e.g., zero-shot, few-shot, and direct instruction), structured reasoning (e.g., chain-of-thought, tree-of-thought, and graph-based models), hallucination reduction (e.g., retrieval-augmented generation, CoVe, and ReAct), user-centric strategies (e.g., automatic prompt engineering and active prompting), and domain-specific applications (e.g., emotion prompting, contrastive reasoning, and code generation tools such as PoT and CoC). We also examined advanced optimization methods, including prompt tuning, abstraction, and self-consistency approaches that enhanced both reasoning and factual reliability.

Key components of effective educational prompt engineering were distilled into nine categories: content knowledge, critical thinking, iterative refinement, clarity, creativity, collaboration, digital literacy, ethical reasoning, and contextual integration. These elements collectively supported both the quality of LLM outputs and the development of students’ cognitive and metacognitive skills.

Strategically, we identified ten educational prompt engineering practices essential for guiding LLM interactions aligned with pedagogical goals: contextual framing, task segmentation, prompt sequencing, role-based prompting, reflection, counterfactual exploration, constraint-based creativity, ethical consideration, interactive refinement, and comparative analysis.

We also addressed core challenges in prompt engineering for education, including ambiguity in model interpretation, balancing specificity and flexibility, ensuring consistency, mitigating hallucinations, safeguarding ethics and privacy, and maintaining student engagement. These challenges highlighted the need for explicit instructional support and adaptive prompt design in classrooms.

Empirically, our study of primary school learners revealed a surprising level of sophistication in students’ prompt construction and refinement. Students developed intuitive understandings of prompt clarity, used context to guide AI responses, adopted role-based and scenario-based prompting, applied constraints to improve learning outcomes, and created reusable prompt templates. Furthermore, they engaged in iterative refinement, developed evaluation criteria for AI responses, and differentiated between general and specific prompts based on their learning objectives. These findings underscored students’ emerging metacognitive awareness and adaptability in AI-mediated learning.
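To make the foundational techniques named above concrete, the following is a minimal sketch of how zero-shot, few-shot, and chain-of-thought prompts can be assembled as plain strings before being sent to an LLM. The function names and prompt wording are illustrative assumptions, not taken from the surveyed papers:

```python
# Illustrative templates for three foundational prompting techniques.
# All names and phrasings here are hypothetical examples, not the
# paper's own implementation.

def zero_shot(question: str) -> str:
    """Zero-shot: ask directly, with no worked examples."""
    return f"Answer the question.\nQ: {question}\nA:"

def few_shot(question: str, examples: list[tuple[str, str]]) -> str:
    """Few-shot: prepend worked question/answer pairs as demonstrations."""
    demos = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{demos}\nQ: {question}\nA:"

def chain_of_thought(question: str) -> str:
    """Chain-of-thought: elicit intermediate reasoning before the answer."""
    return f"Q: {question}\nA: Let's think step by step."

# Example: a few-shot prompt for a Python question, echoing the kind of
# reusable templates the students in the study constructed.
prompt = few_shot(
    "What does len('abc') return?",
    examples=[("What does len('hi') return?", "2")],
)
print(prompt)
```

The same string-building pattern extends to the role-based and constraint-based strategies discussed above, by prepending a persona line (e.g., "You are a patient Python tutor for beginners") or an output constraint to the prompt.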
