Not All AI Use Is Equal: Socio-Emotional Engagement, Mental Health Risks, and the Protective Role of Emotion Regulation in Adolescents
Abstract
Adolescents increasingly turn to AI chatbots for social and emotional support, yet the mental health implications of this trend remain poorly understood, particularly in real-world, naturalistic settings. The present study examined the prevalence and contextual correlates of socio-emotional AI engagement among adolescents, its associations with mental health outcomes, and whether emotion regulation ability moderates these associations. A large-scale survey was conducted with 2,500 high school adolescents (48.6% male; mean age = 15.42 years) in a less-developed region of China, where many adolescents lack adequate parental guidance and social support due to labor migration and limited institutional resources, and where AI literacy education has not yet been systematically implemented. Participants reported their AI usage across four functional domains (instrumental, informational, evaluative, and socio-emotional), their tendency to prefer AI over human interaction (AI-over-human preference), their emotion regulation ability, and three mental health outcomes: depressive symptoms, anxiety symptoms, and self-injurious thoughts and behaviors (SITBs). Results showed that AI-over-human preference and socio-emotional AI use, but not instrumental, informational, or evaluative use, were positively associated with depression, anxiety, and SITBs. Both forms of socio-emotional engagement (AI-over-human preference and socio-emotional AI use) were more common among adolescents experiencing family disadvantage and peer victimization. Critically, emotion regulation ability moderated these associations: the mental health risks of socio-emotional AI engagement were pronounced among adolescents with weaker emotion regulation skills and substantially attenuated among those with stronger skills.
These findings highlight the importance of distinguishing AI use by purpose rather than frequency, and identify emotion regulation ability as a key protective factor and potential intervention target for adolescents navigating AI engagement without adequate human support.