Bias and Fairness in Medical LLMs: An Extensive Scoping Review


Abstract

Medical large language models (Med LLMs) are increasingly used in a variety of healthcare applications, yet numerous questions remain open about whether and how they maintain an unbiased and fair stance in such applications. Accordingly, a growing family of studies has aimed at evaluating or addressing patterns related to bias and fairness in Med LLMs. As this family of studies rapidly grows and diversifies, there is a critical need to identify the key steps taken and the remaining path to follow. To this end, this study presents a comprehensive survey that offers a structured bird's-eye view of trends in the evaluation and mitigation of bias and fairness patterns in Med LLMs. On the evaluation side, the survey covers the measures, frameworks, and benchmarks for bias evaluation, categorized by the source of bias that the studies target. On the mitigation side, it covers various data manipulation, prompting, and fine-tuning strategies. Informed by the identified trends in the field, a list of suggested directions for future research is presented at the end. A structured list of the studies covered in this survey is also available at: https://github.com/healthylaife/MedLLM-Bias-Fairness-Resourcses.