Ethnic and gender bias in Large Language Models across contexts

Read the full article See related articles

Listed in

This article is not in any list yet, why not save it to one of your lists.
Log in to save this article

Abstract

In this study, we assessed if Large Language Models provided biased answers when prompted to assist with the evaluation of requests made by individuals with different ethnic backgrounds and gender. We emulated an experimental procedure traditionally used in correspondence studies to test discrimination in social settings. The preference given as recommendation from the language models were compared across groups revealing a significant bias against names associated with ethnic minorities, particularly in the housing domain. However, the magnitude of this ethnic bias as well as differences by gender depended on the context mentioned in the prompt to the model. Finally, directing the model to take into consideration regulatory provisions on Artificial Intelligence or potential gender and ethnic discrimination does not seem to mitigate the observed bias between groups.

Article activity feed