Under what conditions do social media users perceive borderline content moderation as more legitimate?
Abstract
Content moderation navigates tensions between efficiency, consistency, and contextual understanding, yet theoretical foundations explaining user perceptions of moderation practices remain underdeveloped, particularly across diverse cultural contexts. Through a pre-registered survey experiment with TikTok users in Indonesia (n=304) and Pakistan (n=303), this study examines how moderation source, procedural contestability, and outcome alignment with user preferences individually and jointly influence the perceived legitimacy of content moderation for morally, emotionally, and politically charged content. Results reveal that outcome alignment with users’ moderation preferences most consistently and significantly enhances perceived legitimacy across both countries. Moderation source (algorithmic systems, human moderators, governmental agencies, or civil society organizations) and procedural contestability show limited direct effects, with only marginal benefits observed in Indonesia. Critically, when algorithmic moderation decisions are contestable and align with user preferences, perceived legitimacy peaks; however, when contestable algorithmic decisions contradict user preferences, legitimacy perceptions drop significantly. Individual differences in support for moral restrictions on free speech also shape user perceptions of content moderation. As algorithmic content moderation systems continue to expand globally, understanding these legitimation dynamics, especially in understudied, morally conservative contexts, is essential for designing approaches that effectively govern online speech while maintaining user trust across diverse cultural communities.