Misinformation interventions and online sharing behavior: Lessons learned from two preregistered field studies
Abstract
The spread of misinformation on social media continues to pose challenges. While prior research has shown some success in reducing susceptibility to misinformation at scale, how individual-level interventions affect the quality of content shared on social networks remains understudied. Across two preregistered longitudinal field studies, we ran two Twitter/X ad campaigns, targeting a total of 967,640 Twitter/X users with either a previously validated “inoculation” video about emotional manipulation or a control video. We hypothesized that users who saw the inoculation video would engage less with negative-emotional content and share less content from unreliable sources. We found no evidence for either hypothesis, observing no meaningful changes in posting or retweeting behavior post-intervention. These results were most likely compromised by Twitter/X’s “fuzzy matching” policy, which introduced substantial noise into our data: only ~7.5% of targeted individuals were actually exposed to the intervention. The observed nulls are therefore more likely a product of treatment non-compliance than of “true” null effects. Importantly, we also demonstrate that different statistical analyses and time windows (e.g., measuring the intervention’s effects over 1 hour versus 6 or 24 hours) can yield different and even opposite significant effects, highlighting the risk of interpreting noise from field studies as signal.