Writing with AI at the Margins: Student Voice and Authenticity at a Minority-Serving Institution

Abstract

The rapid rise of generative artificial intelligence programs such as ChatGPT and Claude has prompted important questions about authorship, student voice, and academic integrity. This mixed-methods study surveyed 102 students and 25 faculty at a minority-serving institution to explore perceptions of AI writing tools and their impact on writing authenticity. The surveys included 16–18 quantitative items on five-point Likert scales and five open-ended qualitative questions, gathering information on patterns of AI use, confidence, ethics, and institutional supports. Few students (21%) used AI to complete assignments, but 64% used it for revision and 43% for clarity support. Faculty and students viewed grammar support as AI's most positive use, though students expressed concerns about originality. A significant majority (76%) used AI without disclosure, constituting an academic integrity violation. Responses about ethics were split between "neither agree nor disagree" (54%) and acknowledgment of violations (36%). Multilingual students valued AI assistance with Standard Academic English grammar, viewing it as a positive addition to their learning. However, students worried these tools could diminish student voice and homogenize written perspectives across cultures. The 47-percentage-point gap between faculty estimates (68%) and student self-reports (21%) of AI use for complete drafting suggests that prohibitionist policies may address faculty concerns more than student realities. Findings support distinguishing instrumental support (grammar, mechanics) from expressive support (ideas, voice) when developing AI policies that preserve authentic student perspectives while acknowledging AI's legitimate uses.