Borderline Disaster: An Empirical Study on Student Usage of GenAI in a Law Assignment

Abstract

This empirical study examines the outcomes of integrating Generative AI (GenAI) into a law assignment. Despite receiving instruction on the importance of verifying GenAI outputs, and feedback on their attempts to use these tools effectively, a notable proportion of students included AI-generated fabricated or inaccurate information in their assignments. This overreliance on AI outputs suggests that instruction and guided practice alone may not sufficiently mitigate the risks of inappropriate GenAI use. A particularly concerning issue is the difficulty of identifying AI-generated inaccuracies in assessment tasks, which often requires considerable time and effort. Such errors may therefore go unnoticed, allowing students to bypass the development of essential skills such as critical thinking, analytical reasoning, and the ability to evaluate information independently. Addressing overreliance on GenAI will require robust strategies sustained across the entire duration of a student's university degree, so that students learn to engage with AI tools effectively and responsibly.
