Judging AI: Exploring Professional Identity and Attitudes Toward Artificial Intelligence in the Legal System

Abstract

Objective(s): Artificial intelligence (AI) is increasingly integrated into the legal system, raising questions about trust, fairness, and legitimacy in judicial decision-making. This study examined how professional identity, cognitive appraisals, and perceptions of fairness shape judges' openness to AI.

Research Questions: Five questions guided the study: (1) Do different message framings (role enhancement, efficiency, neutral) influence judges' trust, perceived usefulness, ease of use, and adoption intentions? (2) How do professional identity and prior experience with technology and AI shape evaluations and willingness to adopt AI? (3) How are procedural justice, legitimacy, and trust interrelated in predicting adoption? (4) Do judges differ in their levels of trust across legal contexts? (5) What concerns or reflections do judges express about AI in courtroom decision-making?

Method: A national sample of 317 judges (56.6% male, 41.1% female; M age = 60.41, SD = 10.24; 80.2% White, 8.4% Black, 3.4% Hispanic/Latinx, 2.7% Asian, 1.5% American Indian/Alaska Native, 1.9% Native Hawaiian/Pacific Islander) was recruited through the National Judicial College. Judges were randomly assigned to one of three message framings. Measures included professional identity, trust in AI across eight legal contexts, perceived usefulness, ease of use, attitudes, procedural justice, legitimacy, and adoption intentions.

Results: Role-enhancement framing significantly increased trust (η² = .02), perceived usefulness (η² = .06), and adoption intentions (η² = .04). Mediation analyses indicated that procedural justice predicted adoption indirectly through legitimacy and trust (indirect effect = .08, 95% CI [.05, .10]). Prior AI experience was the strongest predictor across outcomes (e.g., trust: β = .10, p < .001; adoption: β = .37, p < .001). Judges expressed higher trust in AI for research and educational tasks than for sentencing, bail, or parole decisions.

Conclusions: Judges' adoption of AI depends on both functional evaluations (usefulness, familiarity) and normative judgments (fairness, legitimacy, identity alignment). Findings suggest that role-congruent framing, experiential training, and governance safeguards emphasizing fairness and transparency are critical for the responsible integration of AI into courts.
