Guidance over guidelines? Unpacking the uses and concerns of generative AI in Communication Science


Abstract

The rapid adoption of generative Artificial Intelligence (genAI) tools has transformed research practices across communication science. Because genAI tools, most notably large language models, have made the generation of text, images, and other media content extremely easy, they can potentially disrupt the way science is conducted. Yet their integration into academic workflows has outpaced systematic understanding of how, when, and why researchers use them. Drawing on a quantitative survey of N = 1,138 communication scientists and four qualitative focus groups, this study provides the first comprehensive mapping of genAI use across communication research stages and contexts. Findings reveal three key dynamics: a paradox between widespread use and persistent concern; conflicting expectations among individual scholars, institutions, and journals; and the influence of cultural and linguistic contexts on adoption patterns. Together, these findings highlight a multi-level governance challenge encompassing individual, institutional, and disciplinary dimensions. With this paper, we seek to provide a starting point for an open and transparent discussion about the role of generative AI in communication research by proposing a structured set of recommendations for responsible genAI use at three levels: field-wide coordination and collaboration, institutional guidance and support, and individual continuous critical reflection.