Student Perceptions of AI-Assisted Writing and Academic Integrity: Ethical Concerns, Academic Misconduct, and Use of Generative AI in Higher Education

Abstract

The rise of generative AI in higher education has disrupted traditional understandings of academic integrity, shifting the focus from clear-cut infractions to evolving ethical judgment. In this study, 401 students from major U.S. universities provide insight into how beliefs, behaviors, and policy awareness intersect to shape student engagement with AI-assisted writing. The findings indicate that students' ethical beliefs, not institutional policies, are the strongest predictors of both perceived misconduct and actual AI use in writing. Policy awareness had no significant effect on ethical judgments or behavior. Instead, students who believe AI writing is cheating were substantially less likely to view it as ethical or to engage in it. These findings suggest that many students do not treat AI use in learning activities as an extension of conventional cheating (e.g., plagiarism), but rather as a distinct category of academic conduct. Rather than relying on punitive models to deter AI use, this study suggests that education about AI ethics and the risks of overreliance on AI may prove more effective at curbing unethical AI use in higher education.