It’s Not My Responsibility: How Autonomy-Restricting Algorithms Enable Ethical Disengagement and Responsibility Displacement


Abstract

This research examines how autonomy-restricting algorithms influence ethical behavior through the psychological processes of responsibility displacement and moral disengagement. Using a mixed-methods approach combining survey data (N=187), semi-structured interviews (N=42), and experimental vignettes, the study identifies three key mechanisms (responsibility displacement, diffusion of responsibility, and moral distancing) through which algorithmic systems influence ethical reasoning. Quantitative analysis reveals that perceived decision-making autonomy significantly predicts moral engagement (β = 0.47, p < .001, R² = .22), while qualitative findings demonstrate how algorithmic interfaces create "ethical buffer zones" in which responsibility becomes diffused or displaced entirely. Drawing on sociotechnical systems theory and the moral disengagement framework, the study analyzes these dynamics across financial services, healthcare, and criminal justice contexts. Results indicate that even when humans retain ultimate decision authority, algorithmic mediation can reduce ethical accountability by 32% relative to baseline conditions. Interventions, including transparent algorithm design, pre-recommendation reasoning requirements, and explicit responsibility frameworks, proved effective in enhancing ethical engagement. This research contributes to the emerging literature on algorithmic ethics by empirically validating theoretical mechanisms of responsibility displacement and by offering evidence-based strategies for developing "morally engaged algorithmic systems" that enhance rather than diminish human ethical responsibility in algorithmic decision environments.