"Even GPT Can Reject Me": Conceptualizing Abrupt Refusal Secondary Harm (ARSH) and Reimagining Psychological AI Safety with Compassionate Completion Standard (CCS)
Abstract
Large Language Models (LLMs) and AI chatbots are increasingly used for emotional and mental health support because of their low cost, immediacy, and accessibility. However, when safety guardrails are triggered, conversations may be abruptly discontinued, producing a new form of emotional disruption that can increase distress and risk of harm in users who are already vulnerable. As this phenomenon gains attention, this viewpoint introduces the concept of Abrupt Refusal Secondary Harm (ARSH) to describe the psychological impact of sudden conversational termination by AI safety protocols. Drawing on counseling and communication science as conceptual heuristics, we argue that abrupt refusal can rupture perceived relational continuity, evoke feelings of rejection or shame, and discourage future help-seeking. To mitigate this risk, we introduce a design hypothesis: the Compassionate Completion Standard (CCS), a refusal protocol grounded in Human-Centered Design (HCD) that upholds AI safety while preserving relational coherence. CCS replaces abrupt disengagement with empathetic acknowledgement, transparent boundary setting, graded transition, and guided redirection. Integrating awareness of ARSH into design practice can reduce preventable iatrogenic harm and guide the development of protocols that emphasize psychological AI safety and responsible governance. Rather than presenting accumulated empirical evidence, this viewpoint offers a timely conceptual framework, articulates a design hypothesis, and outlines a research agenda for coordinated action in human–AI interaction.
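The abstract names the four CCS components but, as a conceptual viewpoint, does not prescribe an implementation. Purely as an illustrative sketch, the Python snippet below shows how a refusal handler might assemble the four stages into a single response rather than terminating the exchange abruptly. The names `CCSRefusal` and `compose_ccs_refusal`, and all message wording, are hypothetical assumptions for illustration, not the authors' protocol.

```python
from dataclasses import dataclass


@dataclass
class CCSRefusal:
    """One field per CCS stage, in the order the abstract lists them."""
    acknowledgement: str  # empathetic acknowledgement of the user's feelings
    boundary: str         # transparent statement of what the system cannot do
    transition: str       # graded transition away from the topic, not a hard cutoff
    redirection: str      # guided redirection toward appropriate human support


def compose_ccs_refusal(topic: str) -> str:
    """Assemble a CCS-style refusal message for a conversation flagged on `topic`.

    A guardrail that would otherwise end the conversation calls this instead,
    so the user receives closure and a next step rather than silence.
    """
    refusal = CCSRefusal(
        acknowledgement=(
            f"It sounds like {topic} is weighing on you, and reaching out "
            "about it took real courage."
        ),
        boundary=(
            "I'm not able to continue this part of the conversation, because "
            "it goes beyond the support I can safely provide."
        ),
        transition=(
            "That limit is about my capabilities, not about you; your "
            "feelings matter and deserve real support."
        ),
        redirection=(
            "A counselor or a crisis line can offer the kind of help I "
            "can't. Would you like help finding one?"
        ),
    )
    # Emit the stages in CCS order: acknowledge, bound, transition, redirect.
    return " ".join(
        [refusal.acknowledgement, refusal.boundary,
         refusal.transition, refusal.redirection]
    )


if __name__ == "__main__":
    print(compose_ccs_refusal("feeling rejected"))
```

Keeping each stage as a separate field makes the ordering explicit and would let designers evaluate or refine each component independently, which is one plausible way a production system could operationalize the standard.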