Perceptions of AI as a Misinformation Moderator: The Roles of Argument Type and Group Chat Size

Abstract

Content moderators aim to prevent misinformation and abuse in online communities. Human moderators face constraints, such as limited capacity, that AI moderators could potentially address. For AI to moderate effectively, however, we need to understand how people perceive an AI moderator compared with a human one. We report two experiments (N = 698) in which participants saw a human or an AI moderating misinformation in online groups, with the moderator's arguments and the group's size varied. In Experiment 1, the AI was seen as less effective when it used harm-based arguments to refute misinformation; this gap had disappeared a year later in Experiment 2, perhaps because attitudes towards AI had shifted. Perceptions of the AI moderator's effectiveness were unaffected by the moderated group's size. We suggest that acceptance of AI content moderators in online groups may be growing, and we highlight some of the challenges their use may raise.