AI Can Correct but Not Convince: Epistemic Authority and Emotionalized Communication in TikTok Health Misinformation Corrections

Abstract

Short-form video platforms such as TikTok have cultivated large audiences vulnerable to the spread of online health misinformation, creating demand for scalable video-based correction strategies. However, little is known about how communicator characteristics and message style jointly shape the effectiveness of debunking videos. Drawing on epistemic trustworthiness and source credibility theory, this study examines whether communicator type (scientist vs. influencer), communicator appearance (human vs. AI), and communication style (neutral vs. emotionalized) influence the effectiveness of TikTok-style debunking videos addressing common health misinformation. In a pre-registered experiment (N = 996), participants viewed a misinformation video followed by one of sixteen debunking videos or no correction (control). We assessed perceived accuracy of the misinformation, epistemic trustworthiness of the communicator, credibility of the debunking content, and sharing intentions. Both human and AI-generated debunking videos significantly reduced belief in misinformation relative to control (∼25% reduction), with no significant difference between them. However, human communicators were rated as more credible and trustworthy across all dimensions and elicited stronger sharing intentions than AI-generated communicators did. This disadvantage for AI was moderated by individual differences: participants high in deference toward AI showed no additional preference for human communicators. Communicator type and communication style had no effect on the outcomes. These findings reveal a dissociation between belief change and source evaluation: AI-generated corrections may scale and be functionally effective, but deficits in perceived trustworthiness and sharing intentions may limit their diffusion on social media platforms.