AI-based Decision-Making: Not the Decision-Maker but the Outcome’s Favourability Determines the Perception of University Topic Allocations

Abstract

Artificial intelligence (AI) is increasingly used for decision-making, but how it is perceived compared to human decision-makers remains underexplored, especially in educational contexts. In this study, we investigated the influence of the decision-maker (AI vs. human) on fairness perception, trust, and emotional responses in participants (N = 329) who were allocated university course topics. While the allocation process was identical for all participants, they were told that either a human lecturer or an AI had made the decision based on the same information. Furthermore, we manipulated whether the decision-making process was explicitly communicated as fair or whether no comment was made regarding its fairness. Finally, we assessed how favourably students rated the outcome of the allocation process. Contrary to our hypotheses, Bayesian evidence indicated that neither the decision-maker nor whether the decision-making process was communicated as fair had an impact on students' fairness perception, trust, or emotional responses. Students' evaluations of the university course topic allocation process were strongly associated with the favourability of the outcome. Given that an AI can optimize allocations according to students' preferences better than human decision-makers can, these findings support a broader implementation of AI-based decision-making for allocation decisions at university.