Responsible Integration of Large Language Models in Legal English Education: Theoretical Foundations and Research Design


Abstract

The rapid proliferation of large language models (LLMs) in higher education creates both unprecedented opportunities and significant pedagogical challenges for English for Specific Purposes (ESP) instruction. This paper presents the research design and theoretical framework of a pre-registered quasi-experimental pilot study examining the structured integration of LLM tools (specifically ChatGPT) into Legal English instruction for undergraduate law students at Kyiv National Economic University named after Vadym Hetman, Ukraine. Approximately 30–50 participants are divided into experimental and control groups over a ten-week instructional period. The experimental group engages in LLM-assisted tasks, including legal case analysis, role-based simulations of professional legal communication, legal argumentation exercises, and the drafting of clauses for international agreements, while the control group follows traditional instructional approaches. The study is grounded in TPACK, Bloom's Taxonomy, Scaffolded Learning Theory, and ESP pedagogy frameworks. Pre-test and post-test instruments measure gains in legal writing, argumentation, and professional communication competencies. The study additionally examines attitudes toward academic integrity in both groups. Expected outcomes include empirically grounded pedagogical recommendations for responsible AI integration in Legal English and broader ESP contexts. The study was pre-registered on AsPredicted (№278663) prior to data collection, ensuring full methodological transparency.
