Misinformation interventions and online sharing behavior: Lessons learned from two preregistered field studies

Abstract

The spread of misinformation on social media continues to pose challenges for researchers, policymakers, and technology companies. While prior research has shown some success in reducing susceptibility to misinformation at scale, how individual-level interventions affect the quality of the content people share on social networks, particularly over time, remains understudied. Across two preregistered longitudinal studies, we ran two Twitter/X ad campaigns targeting a total of 967,640 Twitter/X users with either a previously validated video-based “inoculation” intervention about emotional manipulation or a control video. We hypothesized that users who saw the inoculation video would engage less with negative emotional content and share less content from unreliable sources. We found no evidence for any of our hypotheses, instead observing no meaningful changes in posting, retweeting, or other sharing behavior as a result of the intervention. Our findings were most likely compromised by the “fuzzy matching” policy of Twitter/X’s ad platform, which introduced substantial noise into our data. Importantly, we demonstrate that different statistical analyses and time windows (e.g., examining the intervention’s effects over 1 hour versus 6 or 24 hours) can yield different, and sometimes entirely opposite, significant effects, highlighting the risk of interpreting noise from field studies as signal.