Harnessing GPT for Enhanced Academic Writing: Evidence from a Field Experiment with Early-Career Researchers in the Social Sciences
Abstract
Large Language Models (LLMs) bring both risk and opportunity to scholarly communication, yet their effects on expert-level research writing remain untested. We conducted the first randomized controlled trial of GPT-4 in expert academic writing, implemented as a three-day hackathon (n=22 early-career social scientists) in collaboration with Springer Nature. Participants were randomized to unrestricted GPT access or a no-AI control. Manuscripts were evaluated pre- and post-intervention by expert faculty and postdoctoral researchers on six dimensions: clarity, coherence, originality, methodological rigor, depth of analysis, and literature relevance. AI assistance produced significant improvements in clarity (Δ=+0.49, p=0.009) and coherence (Δ=+0.43, p=0.036), with no significant change in the remaining dimensions. Principal-components analysis confirmed that AI selectively enhanced organizational features of the writing. SakanaAI, a GPT-based reviewer, aligned broadly with human assessments. A linguistic analysis of participants’ reflective journals revealed increased process-oriented language and reduced temporal references, suggesting a reallocation of cognitive resources. Although concerns about overreliance and academic integrity persist, our findings imply that delegating routine structuring tasks to AI can free scholars to focus on generating novel insights.
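The pre/post comparison reported above (e.g., Δ=+0.49 for clarity) can be sketched as a paired analysis of reviewer ratings. The snippet below is a minimal, illustrative sketch with made-up ratings, not the study data; the variable names and sample values are assumptions for illustration only.

```python
import math
import statistics

# Hypothetical pre- and post-intervention clarity ratings for one group
# (illustrative numbers only; the study's actual data are not reproduced here).
pre = [3.1, 3.4, 2.8, 3.0, 3.5, 2.9, 3.2, 3.3]
post = [3.6, 3.9, 3.2, 3.6, 4.0, 3.4, 3.7, 3.8]

# Paired differences: each manuscript is its own control.
diffs = [b - a for a, b in zip(pre, post)]

delta = statistics.mean(diffs)            # mean pre-to-post improvement (the reported Δ)
sd = statistics.stdev(diffs)              # sample std. dev. of the differences
t = delta / (sd / math.sqrt(len(diffs)))  # paired t statistic, df = n - 1

print(f"delta = {delta:.2f}, t = {t:.2f}")
```

A significance level (the reported p-values) would then follow from the t statistic and its n−1 degrees of freedom; with between-group randomization, the same logic extends to comparing the AI and no-AI arms' improvement scores.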