Development and Validation of a Scale Assessing College Students’ Negative Attitudes Toward Generative AI-Assisted Academic Writing


Abstract

This study adopted a multi-stage research design to develop and validate a scale measuring university students' attitudes toward the use of generative artificial intelligence (generative AI) in academic writing. Preliminary interviews and qualitative analysis identified three core dimensions: language homogenization, thought outsourcing, and identity ambiguity, reflecting the potential impacts of AI on language style, cognitive processes, and authorship identity, respectively. A total of 2,019 participants were recruited. During scale development, item analysis and exploratory factor analysis (EFA) were conducted first, and parallel analysis confirmed the appropriateness of a three-factor structure. Subsequent confirmatory factor analysis (CFA) demonstrated good model fit as well as strong convergent and discriminant validity. Finally, multi-group confirmatory factor analysis (MG-CFA) further supported the invariance of the scale's factor structure across subgroups defined by gender, birthplace, and educational level, indicating its broad applicability. Despite its solid reliability, validity, and broad applicability, the scale has limitations; future work should include longitudinal studies, predictive validity testing, and validation with multiple methods. Overall, the scale enriches the understanding of user attitudes toward technology beyond the traditional Affective–Behavioral–Cognitive model and offers concrete educational insights for the responsible integration of generative AI in academic writing.
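
The parallel-analysis step mentioned above can be illustrated with a minimal Python sketch. This is not the authors' analysis code; it assumes item responses are stored as a respondents-by-items matrix, and the function name, the 1,000-iteration default, and the 15-item example data are illustrative choices. The sketch implements Horn's parallel analysis: a factor is retained when its observed eigenvalue exceeds the chosen percentile of eigenvalues obtained from random data of the same dimensions.

import numpy as np

def parallel_analysis(data, n_iterations=1000, percentile=95, seed=0):
    """Horn's parallel analysis: retain factors whose observed eigenvalues
    exceed the chosen percentile of eigenvalues from random data."""
    rng = np.random.default_rng(seed)
    n_obs, n_items = data.shape

    # Eigenvalues of the observed item correlation matrix, descending order
    obs_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]

    # Eigenvalues of correlation matrices computed from random normal data
    # with the same number of observations and items
    rand_eigs = np.empty((n_iterations, n_items))
    for i in range(n_iterations):
        rand = rng.standard_normal((n_obs, n_items))
        rand_eigs[i] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    threshold = np.percentile(rand_eigs, percentile, axis=0)

    # Number of factors whose observed eigenvalue exceeds the random threshold
    return int(np.sum(obs_eigs > threshold)), obs_eigs, threshold

if __name__ == "__main__":
    # Hypothetical data: 2,019 respondents, 15 items (pure noise here,
    # so few or no factors should be retained)
    rng = np.random.default_rng(1)
    X = rng.standard_normal((2019, 15))
    n_factors, obs, thr = parallel_analysis(X)
    print("Factors retained:", n_factors)

With the study's real item responses substituted for the simulated matrix, a result of three retained factors would correspond to the three-factor structure reported in the abstract.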
