AI Resource Allocation: On the Contribution of Distributive and Procedural Fairness

Abstract

This study investigates how users perceive AI-driven decision-making systems in resource allocation contexts based on three key factors: outcome favourability, transparency, and task type. We conducted an online experiment (N = 929) with a 2 (outcome: favourable vs. unfavourable) × 3 (transparency: low vs. balanced vs. high) mixed design across four resource allocation scenarios reflecting tasks perceived as more mechanical versus more human. Our Bayesian linear mixed-effects models revealed that outcome favourability was the strongest single predictor of perceived fairness, trust, acceptance, and behavioural intention to use the respective AI system. Task type (perceived “humanness”) further influenced user perceptions and interacted with both outcome favourability and transparency. Surprisingly, and in contrast to prior studies, we found evidence against a main effect of transparency. Our findings highlight the critical importance of AI-based resource allocation performance, that is, outcome optimization, for users’ perceptions of the corresponding AI systems. Conversely, the impact of transparency on user perceptions appears to be even more nuanced than previously thought.