Integrating Group and Individual Fairness in Clinical AI: A Post-Hoc, Model-Agnostic Framework for Fairness Auditing

Abstract

Ensuring fairness across diverse patient populations is a fundamental challenge for clinical AI systems, yet current fairness evaluation approaches create critical blind spots. Group-level metrics capture systemic disparities but miss patient-level variations, while individual fairness frameworks enforce consistency but can obscure structural biases. In this paper, we propose EquiLense, a post-hoc, model-agnostic framework that bridges these perspectives through clinical similarity matching and comprehensive fairness auditing. Our method introduces the Mean Predicted Probability Difference (MPPD), which quantifies prediction inconsistencies between clinically similar patients across demographic groups, integrating individual-level consistency with group-level equity assessment. Moreover, we provide flexible similarity matching on clinical features and visualization tools that support practical deployment in healthcare settings. Applied to electronic health record data from over 59,000 surgical patients, our framework revealed disparities in prediction consistency even when overall model performance appeared strong. EquiLense identified differences in predicted probabilities between clinically similar patients from different racial groups, disparities that were substantially reduced when sensitive attributes were excluded from model training. Our method provides a clinically relevant and interpretable approach to fairness auditing that enables healthcare practitioners to identify, understand, and address algorithmic disparities in real-world deployment settings.
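To make the metric concrete, the following is a minimal sketch of how an MPPD-style audit could be computed from a model's outputs. It assumes nearest-neighbor matching on standardized clinical features as the similarity rule; the function name `mppd`, its signature, and the matching choice are illustrative assumptions for this sketch, not the paper's reference implementation.

```python
# Illustrative MPPD-style audit (a sketch, not the authors' code):
# for each patient, find the most clinically similar patient outside
# their demographic group and average the absolute difference in
# predicted probabilities across all such matched pairs.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors

def mppd(clinical_features, predicted_probs, groups):
    """Mean Predicted Probability Difference across demographic groups.

    clinical_features : (n, d) array of clinical covariates
                        (sensitive attributes excluded)
    predicted_probs   : (n,) array of model output probabilities
    groups            : (n,) array of demographic group labels
    """
    X = StandardScaler().fit_transform(np.asarray(clinical_features))
    probs = np.asarray(predicted_probs)
    groups = np.asarray(groups)

    diffs = []
    for g in np.unique(groups):
        in_g = groups == g
        out_g = ~in_g
        if in_g.sum() == 0 or out_g.sum() == 0:
            continue
        # Match each patient in group g to their nearest clinical
        # neighbor outside the group (Euclidean distance in
        # standardized feature space).
        nn = NearestNeighbors(n_neighbors=1).fit(X[out_g])
        _, idx = nn.kneighbors(X[in_g])
        matched_probs = probs[out_g][idx.ravel()]
        diffs.append(np.abs(probs[in_g] - matched_probs))

    if not diffs:  # fewer than two groups: the metric is undefined
        return float("nan")
    return float(np.mean(np.concatenate(diffs)))
```

Under this formulation, an MPPD near zero indicates that clinically similar patients receive similar predictions regardless of group membership, while larger values flag the consistency gaps between demographic groups that the abstract describes.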
