Thigh gaps and filtered snaps: a qualitative study exploring opportunities to mitigate social media harm through content moderation for people with eating disorders


Abstract

Background

The ubiquity of social media has increased exposure to idealised beauty standards that are often unrealistic and harmful. Repeated exposure has been linked to body dissatisfaction, harmful behaviours, and potentially the development of eating disorders (ED). Given the volume of content produced daily, effective harm-mitigation strategies (automated or user-driven) are essential, and these require an informed understanding of the contexts and nuances surrounding harmful content.

Objective

The study has two key aims: (1) to understand the perspectives of experts by profession and people with lived experience of ED on what makes social media content harmful in the context of body image and ED, including why and how this harm occurs; and (2) to explore how technology might help mitigate these effects.

Methods

We engaged n = 30 participants: 12 individual interviews with experts by profession (n = 2 ED support service providers and n = 10 body image and ED experts) and five focus groups with experts by lived experience (n = 18 people with lived experience of ED).

Results

Using the Framework Method guided by inductive thematic analysis, we developed six prominent themes: (1) Spectrum of harmful and ambiguous content on social media, (2) The “echo chamber” of harmful content amplified by social media algorithms, (3) Balancing safety, freedom and responsibility in social media moderation, (4) Shared responsibility and collaboration for safer social media environments, (5) The role of representation and diversity in social media recovery and support, and (6) Harnessing digital innovation to reduce harm on social media. We developed an eight-category framework of harmful social media content, offering a contextual understanding of harmful content and guidance for harm-reducing technologies.

Conclusions

Manual safeguards place significant responsibility on users. This work supports informed distinctions between harmful, ambiguous and safe content and provides design insights for classification systems and adaptable automated moderation.