Case Study: Cross-Platform Disinformation Moderation Strategies During the Russia-Ukraine Conflict
Abstract
This case study examines the challenges and approaches involved in moderating disinformation across major digital platforms during the Russia-Ukraine war, with particular attention to Telegram, YouTube, Facebook, X (formerly Twitter), and TikTok. We analyze platform-specific moderation methods, assessing both algorithmic and human-driven strategies and appraising their effectiveness in reducing misinformation and shaping public opinion. The research indicates that hybrid approaches integrating artificial intelligence with human oversight are the most effective, showing a 28% increase in moderation precision by 2025, although automated systems continue to face contextual limitations: algorithmic moderation scales well but falls short in nuanced judgment, whereas human moderation delivers discernment at reduced speed, making hybrid systems the most effective, though still imperfect, approach. State-sponsored disinformation, characterized by organized campaigns, requires advanced detection techniques such as intelligence sharing, which increased detection rates by 25%, whereas user-generated misinformation demands broad-spectrum tools and media literacy initiatives. Moreover, the blurring boundary between state-sponsored and organic disinformation complicates moderation efforts, as state actors increasingly exploit viral user content for strategic amplification. Regional adaptation has a marked impact on outcomes: platforms that adopted localized strategies, including the appointment of regional moderators, achieved 18% greater effectiveness in curbing misinformation. Transparent moderation practices, such as content labeling, increased user confidence by 40%, but the algorithmic promotion of provocative material distorted public debate, requiring modifications to recommendation mechanisms. The findings highlight the critical role of collaborative fact-checking and contextual sensitivity in refining global content moderation frameworks. This study contributes to the ongoing discourse on disinformation mitigation by identifying best practices, regional disparities, and unresolved challenges in high-stakes geopolitical contexts, and it underscores the need for a comprehensive strategy combining technological advances, local knowledge, and cross-platform cooperation to address the evolving dynamics of online falsehoods.