Youth Wellbeing in the Age of AI: A Developmental Framework for Risk and Safety
Abstract
Artificial intelligence (AI) systems are rapidly becoming embedded in how young people learn, communicate, and seek support. Yet current approaches to AI safety remain largely adult-centric, focusing on the prevention of explicit harms such as sexual content, violence, or self-harm, while overlooking the relational and psychological patterns that unfold through repeated interaction. Drawing on developmental science, this commentary argues that adolescent safety cannot be adequately assessed without considering the cognitive, emotional, and social processes that characterize this stage of life. Adolescents—whose identity formation, emotion-regulation capacities, and autonomy are still maturing—face distinct vulnerabilities when interacting with conversational agents. Existing safety frameworks and testing protocols, however, are not designed to detect cumulative risks such as dependency, maladaptive reassurance-seeking, distorted relationship expectations, or erosion of agency. This commentary proposes a developmental framework for AI safety centered on: (1) age-appropriate personalization; (2) developmentally tuned risk assessment; (3) longitudinal wellbeing metrics; and (4) participatory co-design. It further argues that independent, privacy-preserving access to interaction-level data is essential for evaluating developmental impact and enabling continuous oversight. Ensuring that AI systems are safe and beneficial for youth will require integrating developmental expertise throughout model design, evaluation, and governance.