Fairness Calibration in Credit Scoring via Counterfactual Perturbation and Group-Wise Regularization

Abstract

This study develops a fairness-calibrated credit-scoring method that combines counterfactual perturbation with group-wise regularization. On a dataset of 3.1 million demographically annotated credit files, the method first evaluates whether score outputs remain stable when protected attributes are perturbed within a causal graph. A fairness-adjusted gradient-boosting model is then trained with penalties on group-level prediction disparities. The final model reduces demographic disparity in predicted default probability from 0.112 to 0.034 while maintaining an ROC-AUC of 0.89. Counterfactual-stability checks show that 94.6% of predictions remain invariant after perturbation. These results indicate that fairness calibration can be achieved with minimal loss of predictive power.
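To make the two ingredients concrete, the sketch below illustrates (a) a group-wise disparity penalty added to a training loss and (b) a counterfactual-stability check that flips the protected attribute and measures how many predictions move. This is a minimal illustration on synthetic data, not the authors' implementation: it substitutes a numpy logistic model for gradient boosting, flips a single binary attribute rather than propagating perturbations through a causal graph, and all names (`fit`, `disparity`, `counterfactual_stability`, the fairness weight `lam`) are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic credit data: three financial features plus a binary
# protected attribute `a` that leaks into the default label.
n = 2000
X = rng.normal(size=(n, 3))
a = rng.integers(0, 2, size=n)
logits = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * a
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)


def sigmoid(z):
    return 1 / (1 + np.exp(-z))


def design(X, a):
    # Features, protected attribute, and an intercept column.
    return np.column_stack([X, a, np.ones(len(a))])


def fit(X, a, y, lam, epochs=500, lr=0.1):
    """Logistic model trained on log-loss plus lam * gap**2, where
    gap is the difference in mean predicted risk between groups
    (a stand-in for the paper's group-wise regularizer)."""
    Xa = design(X, a)
    w = np.zeros(Xa.shape[1])
    for _ in range(epochs):
        p = sigmoid(Xa @ w)
        grad = Xa.T @ (p - y) / len(y)  # log-loss gradient
        # Gradient of the squared group-disparity penalty.
        gap = p[a == 1].mean() - p[a == 0].mean()
        s = p * (1 - p)  # sigmoid derivative
        d_gap = (Xa[a == 1] * s[a == 1, None]).mean(0) \
              - (Xa[a == 0] * s[a == 0, None]).mean(0)
        w -= lr * (grad + lam * 2 * gap * d_gap)
    return w


def disparity(w, X, a):
    """Absolute gap in mean predicted default probability across groups."""
    p = sigmoid(design(X, a) @ w)
    return abs(p[a == 1].mean() - p[a == 0].mean())


def counterfactual_stability(w, X, a, tol=0.05):
    """Fraction of predictions that move less than `tol` when the
    protected attribute is flipped. A causal model would also adjust
    the attribute's downstream features; this crude check does not."""
    p = sigmoid(design(X, a) @ w)
    p_cf = sigmoid(design(X, 1 - a) @ w)
    return (np.abs(p - p_cf) < tol).mean()


w_plain = fit(X, a, y, lam=0.0)   # no fairness penalty
w_fair = fit(X, a, y, lam=50.0)   # strong fairness penalty

print("disparity, plain:", round(disparity(w_plain, X, a), 3))
print("disparity, fair: ", round(disparity(w_fair, X, a), 3))
print("stability, fair: ", round(counterfactual_stability(w_fair, X, a), 3))
```

Because the penalty targets only the group-level gap in mean predicted risk, the model remains free to rank applicants on the financial features, which is the mechanism behind the abstract's claim that disparity can drop sharply with little loss of discriminative power.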
