Split-Second Decisions: Exploring the relationship between Time Pressure, AI Assistance, Moral Decision-Making and Responsibility
Abstract
As artificial intelligence (AI) systems are increasingly integrated into high-stakes decision-making environments, understanding how humans interact with these systems under time pressure is critical for ethical and effective human-AI teaming. While previous research has shown that time pressure can impair cognitive flexibility and increase overreliance on automation, little is known about how these phenomena unfold with intelligent systems in morally sensitive contexts, where decisions have ethical consequences. This study investigates how time pressure, AI assistance, and moral responsibility attribution interact to shape moral decision-making. In an ad hoc experimental paradigm, military cadets and officers took on the role of drone operators who had to decide whether to bomb a position, with or without AI support, under varying levels of time pressure (high vs. low). Crucially, the AI suggestions were sometimes obviously wrong, allowing us to isolate the impact of time pressure on overreliance. Our results suggest that while AI assistance increased the likelihood that participants followed wrong and morally questionable recommendations, this effect did not increase under time pressure. Contrary to our expectations, time pressure alone did not significantly alter moral decision-making or increase reliance on AI, suggesting that moral decision-making is more robust to cognitive constraints than previously thought. Importantly, although subjective responsibility decreased during interaction with the AI, especially under high time pressure, participants appeared to recalibrate their sense of responsibility when the AI made erroneous recommendations. These results shed new light on the interplay between time constraints, AI influence, and moral cognition. They suggest that, in morally loaded situations, time pressure alone may not predict AI overreliance or a diminished sense of moral responsibility. Implications for the responsible integration of AI and the development of decision support systems in defence and other high-stakes domains are discussed.