Algorithm Fairness in Predicting Unmet Preventive Care: Evidence from 16 European Countries using SHARE


Abstract

Background

Preventive care is critical to achieving health equity but remains underutilized, particularly among socioeconomically disadvantaged populations. While machine learning (ML) models have shown promise in predicting unmet needs, their fairness and generalizability across national contexts remain poorly understood. This study evaluates the predictive performance and algorithmic fairness of ML models in identifying unmet preventive care needs across 16 European countries.

Methods

The study used cross-sectional data on 51,720 adults from Wave 9 of the Survey of Health, Ageing and Retirement in Europe (SHARE). We trained and tested ML models, including Logistic Regression, Random Forest, XGBoost, LightGBM, Gradient Boosting, DNN, and FCN, to predict five preventive care outcomes. Model performance was assessed by the area under the receiver operating characteristic curve (AUC). Fairness was evaluated using demographic parity and equalized odds across countries and socioeconomic subgroups. SHAP values quantified feature importance.
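As a rough illustration of the evaluation described above, the sketch below shows how a LightGBM classifier might be scored with AUC and how demographic parity and equalized odds gaps could be computed across country subgroups, with SHAP used for feature importance. The column names (unmet_care, country), input file, and hyperparameters are hypothetical placeholders, not the study's actual pipeline.

```python
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical preprocessed SHARE Wave 9 extract; column names are placeholders.
df = pd.read_csv("share_wave9_prepared.csv")
X = df.drop(columns=["unmet_care", "country"])
y = df["unmet_care"]
groups = df["country"]

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, groups, test_size=0.3, stratify=y, random_state=42
)

# One of several candidate models; hyperparameters shown are illustrative.
model = LGBMClassifier(n_estimators=300, learning_rate=0.05)
model.fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
pred = (proba >= 0.5).astype(int)
print("Overall AUC:", roc_auc_score(y_te, proba))

# Demographic parity difference: gap in positive prediction rates across countries.
rates = pd.Series(pred, index=g_te.values).groupby(level=0).mean()
print("Demographic parity difference:", rates.max() - rates.min())

# Equalized odds gaps: differences in true/false positive rates across countries.
res = pd.DataFrame({"y": y_te.values, "pred": pred, "g": g_te.values})
tpr = res[res.y == 1].groupby("g")["pred"].mean()
fpr = res[res.y == 0].groupby("g")["pred"].mean()
print("TPR gap:", tpr.max() - tpr.min(), "FPR gap:", fpr.max() - fpr.min())

# Feature importance via SHAP (TreeExplainer supports LightGBM models).
import shap
explainer = shap.TreeExplainer(model)
shap.summary_plot(explainer.shap_values(X_te), X_te)
```

Per-country or per-subgroup AUCs follow the same pattern: restrict y_te and proba to each group before calling roc_auc_score.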

Results

LightGBM achieved the highest AUC (0.73–0.81) but exhibited substantial variability across countries (AUC range: 0.53–0.94) and socioeconomic strata. Fairness assessments revealed pronounced disparities: demographic parity differences ranged from 0.0027 to 0.9613 across countries, and inequities were notable among high-income and highly educated subgroups. Age, income, outpatient visits, and social engagement were identified as key predictors.

Conclusion

This study provides an evaluation of algorithmic fairness in ML-based prediction of preventive care needs across multiple national contexts. Significant geographic and socioeconomic disparities in model performance highlight the need for localized model calibration and fairness-aware modeling to prevent the reinforcement of health inequities.