Aligning Minds and Machines: Hierarchical Explanations Enhance Sense of Agency in AI-Assisted Decision-Making

Abstract

As artificial intelligence (AI) increasingly mediates human decision-making, there is growing concern about its impact on the human sense of agency (SoA)—the subjective experience of controlling one’s actions and their outcomes. Opacity and automation may undermine this experience, raising both ethical and cognitive challenges. In this study, we systematically examined how varying levels of automation, explanatory transparency, user engagement, and decision conflict shape explicit (Feeling of Control) and implicit (Intentional Binding) measures of agency. Participants interacted with an AI system in a simulated autonomous driving scenario, where we manipulated the degree of automation (Motor-control vs. AI-assisted condition), the presence and type of explanation (Proximal, Distal, Combined, No-Explanation), and the level of conflict between user intentions and AI choices. Automation significantly reduced explicit agency, but explanatory cues—especially when combining Proximal and Distal rationales—partially restored it. Critically, explanation effectiveness depended on user engagement and goal-outcome conflict: transparency lost some of its restorative effect when participants declared their intentions but outcomes mismatched, yet reinforced predictive integration under high cognitive involvement. These findings highlight the need for adaptive, user-sensitive transparency strategies to preserve agency in automated environments.