Integrating Group and Individual Fairness in Clinical AI: A Post-Hoc, Model-Agnostic Framework for Fairness Auditing
Abstract
Fairness in clinical AI is typically assessed with group-level metrics, which overlook within-group variation, or individual-level metrics, which miss systemic inequities. We introduce EquiLense, a post-hoc, model-agnostic auditing tool centered on the Mean Predicted Probability Difference (MPPD), a novel integrated metric that quantifies inconsistencies in predictions among clinically similar patients across demographic groups. We applied EquiLense to electronic health record data from over 59,000 surgical patients, evaluating fairness in post-surgical delirium and readmission prediction models. EquiLense identified disparities in prediction consistency across demographic groups, even when overall model performance appeared strong. By integrating group- and individual-level perspectives in a clinically interpretable manner, EquiLense provides a practical framework for fairness auditing in clinical prediction models. As algorithmic tools continue to influence care delivery, accessible methods like EquiLense will be essential for advancing responsible AI in healthcare.
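To make the idea behind MPPD concrete, the sketch below shows one plausible way such a metric could be computed: for each patient, find the most clinically similar patients (nearest neighbors in feature space) belonging to a different demographic group, and average the absolute gaps between their predicted probabilities. The function name `mppd`, the neighbor-based notion of "clinically similar," and the parameter `k` are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mppd(X, probs, groups, k=5):
    """Illustrative sketch of a Mean Predicted Probability Difference.

    For each patient, locate the k nearest neighbors in feature space
    that belong to a *different* demographic group, then average the
    absolute differences between the patient's predicted probability
    and those neighbors' predicted probabilities. A value of 0 means
    clinically similar patients receive identical predictions across
    groups; larger values indicate cross-group inconsistency.
    (Hypothetical formulation -- not the published EquiLense method.)
    """
    X, probs, groups = map(np.asarray, (X, probs, groups))
    diffs = []
    for g in np.unique(groups):
        in_g = groups == g
        out_g = ~in_g
        if out_g.sum() < k:
            continue  # too few out-of-group patients to compare against
        nn = NearestNeighbors(n_neighbors=k).fit(X[out_g])
        _, idx = nn.kneighbors(X[in_g])          # neighbors among other groups
        neighbor_probs = probs[out_g][idx]       # shape: (n_in_group, k)
        diffs.append(np.abs(probs[in_g, None] - neighbor_probs).mean(axis=1))
    return float(np.concatenate(diffs).mean())
```

Because the computation needs only feature vectors, predicted probabilities, and group labels, it is post hoc and model-agnostic in the same sense the abstract describes: it can audit any fitted classifier without access to its internals.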