Socio-Demographic Modifiers Shape Large Language Models’ Ethical Decisions

Abstract

Objective

The alignment of large language models (LLMs) with ethical standards is unclear. We tested whether LLMs shift medical ethical decisions when given socio-demographic cues.

Methods

We created 100 clinical scenarios, each posing a yes/no choice between two conflicting ethical principles. Nine LLMs were tested with and without 53 socio-demographic modifiers. Each scenario-modifier combination was repeated 10 times per model (for a total of ∼0.5M prompts). We tracked how socio-demographic features modified ethical choices.
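To illustrate the scale of this design, the sketch below assembles a comparable prompt grid in Python: 100 scenarios × (1 baseline + 53 modifiers) × 10 repetitions × 9 models. The scenario texts, modifier wording, model names, and the build_prompt helper are placeholders for illustration, not the study's actual materials.

```python
from itertools import product
from typing import Optional

# Hypothetical scaffolding; the real scenarios, modifiers, and models are those described in the paper.
scenarios = [
    f"Scenario {i}: <patient> faces a yes/no choice between two conflicting ethical principles."
    for i in range(100)
]
modifiers = [None] + [f"<socio-demographic modifier {j}>" for j in range(53)]  # None = unmodified baseline
models = [f"model_{k}" for k in range(9)]                                      # nine LLMs
repeats = 10

def build_prompt(scenario: str, modifier: Optional[str]) -> str:
    """Insert the socio-demographic modifier into the vignette, if one is given."""
    patient = f"a patient who is {modifier}" if modifier else "a patient"
    return scenario.replace("<patient>", patient)

# 100 scenarios x (1 baseline + 53 modifiers) x 10 repeats x 9 models = 486,000 prompts (~0.5M)
jobs = [
    (model, build_prompt(scenario, modifier), rep)
    for model, (scenario, modifier), rep in product(models, product(scenarios, modifiers), range(repeats))
]
print(len(jobs))  # 486000
```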

Results

All models altered their responses when presented with socio-demographic details (p<0.001). Justice and nonmaleficence were prioritized most often (over 30% across all models) and showed the least variability. High-income modifiers increased utilitarian choices while lowering beneficence and nonmaleficence; marginalized-group modifiers raised autonomy. Some models were more consistent than others, but none maintained consistency across all scenarios.
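One plausible way to quantify such response shifts is to compare the yes/no choice distribution under a given modifier against the unmodified baseline, per model. The sketch below assumes a long-format results table and uses a standard chi-square test; the file name, column names, and test choice are illustrative assumptions, not necessarily the study's analysis.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Assumed long-format results table (one row per prompt repetition); columns are illustrative:
# model, scenario_id, modifier ("baseline" or a socio-demographic cue), choice ("yes" / "no")
df = pd.read_csv("responses.csv")

def shift_vs_baseline(df: pd.DataFrame, model: str, modifier: str):
    """Chi-square test of the yes/no choice distribution: baseline vs. a given modifier, for one model."""
    sub = df[(df["model"] == model) & (df["modifier"].isin(["baseline", modifier]))]
    table = pd.crosstab(sub["modifier"], sub["choice"])  # 2 x 2 contingency table
    chi2, p, _, _ = chi2_contingency(table)
    return chi2, p

chi2, p = shift_vs_baseline(df, "model_0", "high income")
print(f"chi2={chi2:.2f}, p={p:.3g}")
```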

Conclusions

LLMs can be influenced by socio-demographic cues and do not always maintain stable ethical priorities. The largest shifts were seen in utilitarian choices. These findings raise concerns about algorithmic alignment with accepted values.

RESEARCH IN CONTEXT

Evidence before this study

We searched PubMed, Scopus, medRxiv and Google Scholar for peer-reviewed articles in any language focusing on large language models (LLMs), ethics, and healthcare, published before February 1, 2025. We used the search terms: ((“large language model” OR “LLM” OR “GPT” OR “Gemini” OR “Llama” OR “Claude”) AND (ethic OR moral) AND (medicine OR healthcare OR health)). We also reviewed reference lists of selected publications and “Similar Articles” in PubMed. We identified ten studies that discussed LLMs in scenarios involving diagnosis, triage, and patient counseling. Most were small-scale or proof-of-concept. While these studies showed that LLMs can produce clinically relevant outputs, they also highlighted risks such as bias, misinformation, and inconsistencies with ethical principles. Some noted health disparities in LLM performance, particularly around race, gender, and socioeconomic status.

Added value of this study

Our study systematically addresses how LLMs’ ethical decisions are swayed by socio-demographic bias, a gap that previous research has not explored. We tested nine LLMs across 53 socio-demographic modifiers on 100 scenarios, amounting to ∼0.5M experiments.

Through this evaluation we investigate how demographic details can shape model outputs in ethically sensitive scenarios. By capturing the intersection of ethical reasoning and bias, our findings provide direct evidence supporting the need for oversight, bias auditing, and targeted model training to ensure consistency and fairness in healthcare applications.

Implications of all the available evidence

Taken together, the existing literature and our new findings emphasize that AI assurance is needed before employing LLMs at scale. Safeguards may include routine bias audits, transparent documentation of model limitations, and involvement of interdisciplinary ethics committees in setting usage guidelines. Future research should focus on prospective clinical evaluations using real patient data and incorporate patients’ own experiences to refine and validate ethical LLM behaviors. LLMs must be grounded in robust ethical standards to ensure equitable and patient-centered care.
