LLM-Era College Admissions Essays Exhibit Paradoxical Semantic Trends
Abstract
Large Language Models (LLMs) often generate outputs with limited diversity in response to similar prompts, yet these outputs are frequently perceived as more creative than those produced by humans. We propose that this phenomenon stems from a paradoxical homogenization, in which LLMs enhance surface-level lexical diversity while masking deeper conceptual convergence. In four large-scale natural experiments across four U.S. universities (N = 372,793), we examined this phenomenon in the high-stakes context of personal statement admissions essays. Comparing writing before and after the 2022 release of ChatGPT, multi-level semantic analyses revealed that post-ChatGPT essays had higher lexical diversity but lower sentence-level idea diversity and document-level thematic distinctiveness. These effects were widespread but uneven, with particularly pronounced shifts among racial and linguistic minority applicants. Mediation analyses indicated that by enhancing lexical diversity, ChatGPT’s release inflated perceived creativity, overriding the negative influence of conceptual convergence. A controlled experiment replicated this paradoxical homogenization, supporting AI’s causal role. Our findings highlight the risk of mistaking superficial diversity for genuine originality, a key component of creativity.
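To make the distinction between surface lexical diversity and deeper idea diversity concrete, here is a minimal illustrative sketch, not the study's actual pipeline: lexical diversity is approximated by a type-token ratio, and sentence-level idea diversity by the mean pairwise distance between simple bag-of-words sentence vectors (the study presumably relies on learned semantic embeddings; the bag-of-words vectors here are a toy stand-in).

```python
# Toy metrics contrasting lexical diversity with idea diversity.
# These are illustrative assumptions, not the paper's actual measures.
from collections import Counter
import math

def type_token_ratio(text: str) -> float:
    """Lexical diversity: unique tokens / total tokens."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def _bow(sentence: str) -> Counter:
    """Bag-of-words count vector for one sentence."""
    return Counter(sentence.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def idea_diversity(sentences: list[str]) -> float:
    """Mean pairwise (1 - cosine similarity) across sentence vectors:
    higher values mean the sentences express more distinct content."""
    vecs = [_bow(s) for s in sentences]
    pairs = [(i, j) for i in range(len(vecs)) for j in range(i + 1, len(vecs))]
    if not pairs:
        return 0.0
    return sum(1 - _cosine(vecs[i], vecs[j]) for i, j in pairs) / len(pairs)
```

Under these toy metrics, an essay that swaps in varied synonyms while repeating the same point can score high on `type_token_ratio` yet low on `idea_diversity`, mirroring the paradoxical homogenization the abstract describes.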