Non-Nativeness in AI-Generated Writing: How Credible Is ChatGPT’s Output to ESL Assessors?

Abstract

This study examines the extent to which AI-generated English mimics the writing style of non-native speakers. To investigate this, ChatGPT was tasked with producing English compositions under 17 different conditions, and the resulting texts were evaluated by a native-speaking English instructor who was not told that they were AI-generated. For one essay, set at a relatively advanced learner level, the assessor suspected AI involvement because of the fluency of its expression. For the remaining compositions, the assessor noted that the grammatical accuracy exceeded what is typically observed in non-native writing, but judged the texts as having undergone only minor grammatical refinement, akin to that provided by software such as MS Word. The assessor attributed the non-native-like quality of the writing primarily to unnatural phrasing and awkward expressions. In the overall evaluation, the logical structure of the essays was found to be highly simplistic and unvaried. Even judged on linguistic quality alone, the essays that were not immediately recognizable as AI-generated were, at best, borderline passable or slightly above average at an intermediate level. These results suggest that producing AI-generated compositions that attain high marks without arousing suspicion of academic dishonesty remains a significant challenge.
