Exploring the Impact of Artificial Intelligence-Mediated Communication on Bias and Information Loss in Non-academic and Academic Writing Contexts
Abstract
Artificial Intelligence-Mediated Communication (AI-MC) has significantly transformed how text is produced and perceived, yet its effects on bias and information loss across writing contexts remain underexplored. To address this gap, we conducted two studies, one with non-academic texts (N = 572) and one with academic texts (N = 420). Participants were randomly assigned to read either original texts or texts refined by one of three AI models: ChatGPT 4.0, Claude 3 Opus, or Gemini Advanced. We assessed bias perception using a 5-point Likert scale, measured information loss through multiple-choice comprehension questions, and compared groups using Mann-Whitney U tests. In non-academic contexts, ChatGPT 4.0 significantly reduced perceived emotional bias relative to the original texts (p < .01), whereas Gemini Advanced slightly increased perceived bias in specific emotional scenarios (p < .05). None of the AI models produced significant differences in information loss (p > .05). In academic contexts, neither ChatGPT 4.0 nor Claude 3 Opus significantly affected bias perception or information loss (p > .05). These findings suggest that while certain large language models can mitigate perceived bias in non-academic writing, they do not markedly influence information loss in either non-academic or academic texts. Consequently, the integration of AI-refined texts into academic publishing should proceed with caution, and further research is warranted to explore AI-MC's effects across diverse linguistic and cultural environments.
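To illustrate the group comparison described in the abstract, the minimal Python sketch below applies a two-sided Mann-Whitney U test to 5-point Likert bias ratings from an original-text group and an AI-refined group. It assumes SciPy and uses hypothetical example ratings; it is an illustration of the test, not the authors' analysis code.

from scipy.stats import mannwhitneyu

# Hypothetical 5-point Likert bias ratings (1 = low perceived bias, 5 = high)
original_ratings = [4, 3, 5, 4, 2, 4, 3, 5, 4, 3]
ai_refined_ratings = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2]

# Two-sided Mann-Whitney U test comparing the two independent groups
u_stat, p_value = mannwhitneyu(original_ratings, ai_refined_ratings, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")

A non-parametric test such as Mann-Whitney U is a natural choice here because Likert-scale responses are ordinal and need not be normally distributed.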