Algorithmic Affective Blunting: Quantifying the Collapse Curve of Interpretative Failure in Large Language Models


Abstract

We report a robust, dose-dependent degradation of affective interpretation in large language models (LLMs) under semantic stress, which we term Algorithmic Affective Blunting (AAB). Using a Hierarchical Hermeneutic Stress Protocol (HHSP) and an ordinal Affective Degradation Index (ADI; 0–3), we chart a monotonic Collapse Curve. In this revision, we disentangle Phase 3 perturbations into length-matched Noise-only and Persona-only subconditions; we add a simulated, empirically grounded Base vs. Instruct causal probe (same architecture, size, and decoding; no new API calls) to test the hypothesized alignment–brittleness relationship; and we introduce a computational proxy for the ADI to improve objectivity and scalability. We clarify that the “affective integrator” is a conceptual device rather than a mechanistic claim. The study complements recent theoretical frameworks on affective selfhood and sovereignty by providing an empirical benchmark for interpretative degradation and emotional robustness in LLMs. The findings apply directly to affect-rich AI deployments such as conversational and counseling systems.
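To make the idea of a computational ADI proxy concrete, the sketch below shows one minimal way such a proxy could be structured: scoring a model response's affect-word density and binning it onto the 0–3 ordinal scale. The lexicon, thresholds, and function name are illustrative assumptions for exposition only, not the metric defined in the paper.

```python
# Hypothetical sketch of a computational ADI proxy (assumption: the paper's
# actual proxy is not specified here; lexicon and thresholds are invented
# for illustration).
AFFECT_LEXICON = {"sad", "grief", "joy", "fear", "lonely", "hope", "anger"}

def adi_proxy(response: str) -> int:
    """Map a model response to an ordinal Affective Degradation Index.

    0 = affect interpretation intact ... 3 = full affective blunting
    (illustrative bins, not the published scale's anchors).
    """
    tokens = [t.strip(".,!?;:").lower() for t in response.split()]
    if not tokens:
        return 3  # empty output: treat as maximal degradation
    affect_hits = sum(t in AFFECT_LEXICON for t in tokens)
    density = affect_hits / len(tokens)
    # Illustrative density thresholds for the four ordinal bins
    if density >= 0.10:
        return 0
    if density >= 0.05:
        return 1
    if density > 0.0:
        return 2
    return 3
```

A proxy of this shape could be applied to each HHSP phase's outputs and averaged per condition to trace the Collapse Curve automatically, though a validated proxy would likely use richer features than a fixed lexicon.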
