The Relational Amplifier: How Anthropomorphism of Generative AI Backfires for Distressed Users


Abstract

General-purpose Generative Artificial Intelligence (GenAI) is increasingly used as an unregulated source of mental health support, yet the psychological dynamics of users' interactions with these agents, and the associated risks, remain underexplored. Integrating social-cognitive models of anthropomorphism with attachment theory via the proposed Relational Amplifier framework, this study addressed two primary objectives: (1) to identify the unique predictors driving GenAI adoption for mental health support, and (2) to examine how psychological distress moderates the impact of anthropomorphism on Projected AI-anxiety. Following a pre-registered data collection protocol, a representative sample of 584 adults completed the DASS-21 (assessing psychological distress), the AIPAS (measuring anthropomorphism), and a Projected AI-Attachment scale adapted from the ECR-RS. Confirmatory factor analysis (CFA) supported the validity of this new construct and its distinction from general attachment orientations. Logistic regression confirmed the hypothesized three-way interaction: the likelihood of using GenAI for support was not driven by distress alone, but by a specific constellation of high distress, high anthropomorphism, and elevated Projected AI-anxiety. Moderation analysis revealed a critical backfire effect: while anthropomorphism reduced Projected AI-anxiety in low-distress users, it amplified it in highly distressed users. The results indicate that, for vulnerable individuals, humanizing the agent does not provide genuine emotional security but instead serves as a screen for projecting internal insecurities. Ultimately, these findings suggest that anthropomorphic design of AI agents may increase engagement while also exacerbating anxiety in vulnerable users, challenging the assumption that maximizing human-likeness necessarily benefits mental health support.