Leading Judgment in the Age of AI: A Leadership-Centered Sociotechnical Canvas for Trust, Deference, and Override

Abstract

In an era of proliferating AI decision support, organizational leaders face unprecedented challenges in judgment. This paper introduces the L-Canvas, a leadership-centered sociotechnical framework that guides when to trust algorithmic outputs, when to defer to automated recommendations, and when to override them. Drawing on concepts of sensemaking, trust, psychological safety, organizational ambidexterity, and high-reliability organizing, we propose a structured “leadership playbook” for integrating human judgment with AI in high-stakes decisions. We illustrate the framework with public case examples, a pretrial bail risk algorithm and the UK’s 2020 A-level grading algorithm, highlighting how leadership actions (or inaction) shaped trust and outcomes. We present a clearly labeled contribution statement and outline a preregistration-ready evaluation plan with open metrics (e.g., override precision, fairness gaps, decision variance, procedural justice indicators), accompanied by a printable L-Canvas diagram and a sample metrics CSV workbook. By centering leadership practices in sociotechnical systems, this work offers both a practical guide for decision-makers and a foundation for scholarly replication. The findings underscore that effective “leading judgment” in the age of AI requires not only technical excellence but also cultural resilience, ethical commitment, and a willingness to learn and adapt.
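To make the open metrics concrete, the sketch below shows one way override precision and a fairness gap could be computed from a decision-log CSV of the kind the abstract's metrics workbook describes. The column names (decision_id, group, ai_recommendation, human_decision, outcome) and the treatment of "grant" as the favorable decision are illustrative assumptions, not the paper's actual schema.

```python
# Minimal sketch (not the authors' released workbook) of two of the
# abstract's open metrics, computed from a hypothetical decision-log CSV.
import csv
import io

# Hypothetical sample data standing in for the metrics CSV workbook.
SAMPLE = """decision_id,group,ai_recommendation,human_decision,outcome
1,A,deny,grant,good
2,A,deny,deny,good
3,B,grant,deny,bad
4,B,deny,grant,good
5,A,grant,grant,bad
"""

rows = list(csv.DictReader(io.StringIO(SAMPLE)))

# Override precision: of the cases where the human overrode the AI
# recommendation, what fraction ended in a good outcome?
overrides = [r for r in rows if r["human_decision"] != r["ai_recommendation"]]
override_precision = (
    sum(r["outcome"] == "good" for r in overrides) / len(overrides)
    if overrides else float("nan")
)

# Fairness gap: absolute difference in favorable-decision ("grant") rates
# between the two groups present in the log.
def grant_rate(group: str) -> float:
    members = [r for r in rows if r["group"] == group]
    return sum(r["human_decision"] == "grant" for r in members) / len(members)

fairness_gap = abs(grant_rate("A") - grant_rate("B"))

print(f"override precision: {override_precision:.2f}")
print(f"fairness gap (grant-rate difference): {fairness_gap:.2f}")
```

In this toy log the same pattern would extend to the other proposed metrics, for example decision variance across comparable cases, by grouping rows on additional columns.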
