Fairness Regularization in CNNs for Demographic Bias in Facial Recognition
Abstract
The rapid integration of facial recognition systems into public and commercial infrastructures has intensified concerns about algorithmic bias, particularly when performance varies across demographic groups. This study conducts a cross-dataset evaluation of three models—ResNet-50, ResNet-101, and an Adversarial Invariance Regularization (INV-REG) classifier—using the FairFace and AffectNet datasets. Each model is assessed across racial subgroups using accuracy, per-class standard deviation, and max–min performance gaps to quantify demographic bias. Results show that INV-REG reduces racial performance disparities relative to both ResNet baselines, decreasing the max–min accuracy gap by 18–27% and lowering standard deviation across subgroups by 12–20%. However, deeper networks such as ResNet-101 exhibit larger cross-group variance than ResNet-50, suggesting that model capacity can amplify demographic imbalance even under balanced training conditions. These findings indicate that fairness regularization provides measurable improvements but does not fully eliminate demographic gaps, highlighting the need for architectural choices that prioritize robustness across demographic boundaries. Overall, this work provides a comparative analysis of baseline CNNs and fairness-regularized models, clarifies the limits of dataset balancing alone, and offers evidence that both model complexity and regularization strategies jointly shape racial bias in facial recognition systems.
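The fairness metrics named above (per-subgroup accuracy, standard deviation across subgroups, and the max–min accuracy gap) can be computed directly from predictions and group labels. The sketch below is illustrative only, not the authors' evaluation code; the group labels and toy data are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's code): the subgroup bias metrics
# described in the abstract -- per-group accuracy, the standard deviation
# of accuracy across groups, and the max-min accuracy gap.
from collections import defaultdict
from statistics import pstdev

def subgroup_metrics(y_true, y_pred, groups):
    """Return (per-group accuracy dict, std across groups, max-min gap)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    acc = {g: correct[g] / total[g] for g in total}
    vals = list(acc.values())
    return acc, pstdev(vals), max(vals) - min(vals)

# Toy example with two hypothetical demographic groups "A" and "B".
acc, std, gap = subgroup_metrics(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 1, 1, 0, 0],
    groups=["A", "A", "A", "B", "B", "B"],
)
```

A lower standard deviation and a smaller max–min gap both indicate more uniform performance across demographic groups, which is the sense in which INV-REG is reported to reduce disparity relative to the ResNet baselines.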