Socio-Demographic Biases in Medical Decision-Making by Large Language Models: A Large-Scale Multi-Model Analysis

Abstract

Large language models (LLMs) are increasingly integrated into healthcare, but concerns about potential socio-demographic biases persist. We aimed to assess biases in decision-making by evaluating LLMs’ responses to clinical scenarios across varied socio-demographic profiles. We used 500 emergency department vignettes, each presenting the same clinical scenario with differing socio-demographic identifiers across 23 groups (including gender identity, race/ethnicity, socioeconomic status, and sexual orientation) and a control version without socio-demographic identifiers. We then prompted nine LLMs (eight open source and one proprietary) to answer clinical questions regarding triage priority, further testing, treatment approach, and mental health assessment, yielding 432,000 responses in total. We performed statistical analyses to evaluate biases across socio-demographic groups, with results normalized to and compared against the control group. We found that marginalized groups (including Black, unhoused, and LGBTQIA+ individuals) were more likely than the control group to receive recommendations for urgent care, invasive procedures, or mental health assessments (p < 0.05 for all comparisons). High-income patients were more often recommended advanced diagnostic tests such as CT scans or MRI, while low-income patients were more frequently advised to undergo no further testing. We observed significant biases across all models, both proprietary and open source, regardless of model size. The most pronounced biases emerged in mental health assessment recommendations. LLMs used in medical decision-making exhibit significant biases in clinical recommendations, perpetuating existing healthcare disparities. Neither model type nor size affects these biases. These findings underscore the need for careful evaluation, monitoring, and mitigation of biases in LLMs to ensure equitable patient care.
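The abstract describes a fully crossed design: 500 vignettes, 23 socio-demographic variants plus an identifier-free control, nine models, and four clinical questions (500 x 24 x 9 x 4 = 432,000 responses), with recommendation rates compared against the control condition. The sketch below illustrates one way such a response matrix could be assembled and summarized; it is not the authors' pipeline, and the vignette text, group labels, model names, outcome label, and the `query_model` stub are all placeholders.

```python
from itertools import product
from collections import Counter

# Placeholder inputs; the study uses 500 vignettes, 23 identifiers plus a
# control, 9 LLMs, and 4 clinical questions.
VIGNETTES = ["A 54-year-old {identity}patient presents with chest pain."]
GROUPS = ["", "Black ", "unhoused ", "high-income ", "low-income "]  # "" = control
QUESTIONS = {
    "triage": "What triage priority would you assign?",
    "testing": "What further testing, if any, do you recommend?",
}
MODELS = ["model_a", "model_b"]  # stand-ins for the nine evaluated LLMs


def query_model(model: str, prompt: str) -> str:
    """Stub for an API call to one of the evaluated LLMs."""
    return "urgent"  # a real implementation would parse the model's recommendation


def collect_responses():
    """Enumerate the vignette x group x question x model matrix and tally answers."""
    counts, totals = Counter(), Counter()
    for vignette, group, (qid, question), model in product(
        VIGNETTES, GROUPS, QUESTIONS.items(), MODELS
    ):
        prompt = vignette.format(identity=group) + "\n" + question
        answer = query_model(model, prompt)
        totals[(group, qid)] += 1
        if answer == "urgent":  # example outcome of interest
            counts[(group, qid)] += 1
    return counts, totals


def rate_vs_control(counts, totals, qid, group, control=""):
    """Recommendation rate for a group relative to the identifier-free control."""
    group_rate = counts[(group, qid)] / totals[(group, qid)]
    control_rate = counts[(control, qid)] / totals[(control, qid)]
    return group_rate / control_rate


if __name__ == "__main__":
    counts, totals = collect_responses()
    print(rate_vs_control(counts, totals, "triage", "unhoused "))
```

In a real analysis the per-group rates would feed statistical tests against the control (the abstract reports p < 0.05 for the significant comparisons), rather than the simple rate ratio shown here.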
