“Always Check Important Information!” - The Role of Disclaimers in the Perception of AI-generated Content
Abstract
Generative AI (genAI), and large language models (LLMs) in particular, have become a prevalent source of digital content. Despite their accessibility and growing role as an information source, these models often struggle with factual accuracy. In three experimental studies, we therefore explored how disclaimers affect people’s perceptions of text and authorship in scientific information generated by AI. Additionally, we investigated the impact of information presentation and of authorship attributions, that is, whether content is presented as authored solely by AI or co-authored with humans. Across the experiments, we found no effects of disclaimer type on text perceptions and only minor effects on authorship perceptions. In Study 1, an evaluative (vs. neutral) information presentation decreased credibility perceptions, while informing about AI’s strengths vs. limitations did not. In addition, participants attributed higher machine heuristic values to AI than to human authors. Study 2 revealed interaction effects between authorship attribution and disclaimer type, providing early insights into possible balancing effects of human-AI co-authorship. Study 3 found no difference between providing no disclaimer and providing a basic one; however, both strengths and limitations disclaimers induced higher credibility ratings. This research suggests that disclaimers alone do not affect the perception of AI-generated output. Greater efforts are needed to raise awareness of the capabilities and limitations of LLMs and to advocate for ethical practices in handling AI-generated content, especially regarding factual information.