Authority Bias in Human-AI Decision Making: The Effects of AI Appraisals and Journal Cues in Abstract Screening

Abstract

Human-AI collaboration is becoming increasingly embedded in decision-making tasks, including systematic review workflows such as title and abstract screening. Yet humans, as final arbiters, remain susceptible to influence from peripheral cues, leaving these hybrid workflows vulnerable to new forms of bias. This paper examined how two authority cues influence screening judgements: AI appraisals and journal prestige. Using preregistered experiments, we investigated how these cues shape inclusion decisions in a realistic abstract-screening task. We employed a 3 × 3 mixed design. Participants were randomly assigned to receive AI recommendations, AI disapprovals, or no AI input; across trials, each participant evaluated abstracts presented with all three journal cues: prestigious labels, non-prestigious labels, and no journal information. Across three samples (Western graduate students, Asian bachelor's-degree holders, and Western professionals; total N = 977), AI appraisals functioned as a strong and consistent authority cue, systematically biasing screening decisions. Journal prestige had limited influence, emerging primarily when irrelevant abstracts were paired with prestigious journals, which increased incorrect inclusion. These findings demonstrate that AI-generated cues can introduce powerful new authority biases into human-AI collaborative screening. Implications for designing and governing (AI-aided) reviewing systems to ensure accurate, unbiased decision making are discussed.