Fairness-Aware Deep Learning for Job Application Screening

Abstract

Automated hiring systems offer scalability, yet they often reinforce historical biases, disproportionately filtering out women, minorities, and older applicants; this is a significant concern as AI becomes increasingly common in recruitment decisions. A decision-level adversarial framework is introduced to tackle this problem, enforcing algorithmic fairness without requiring access to sensitive attributes at inference time. Rather than altering latent representations, the approach trains a lightweight discriminator to infer gender solely from the predictor's output probability, while the main model is adversarially optimized to suppress this signal, decoupling hiring recommendations from demographic leakage while preserving predictive performance. On a dataset of 73,462 applicants, the proposed approach achieves an AUC of 0.875 (+0.004 over the baseline), an F1-score of 0.810, and a disparate impact ratio of 0.907, exceeding the 0.8 regulatory threshold for fairness. SHAP analysis shows that decisions are driven by job-relevant factors such as technical skills and experience rather than by proxy variables. Ablation studies indicate that mild adversarial regularization improves generalization, suggesting that fairness constraints can act as effective regularizers. These results offer a scalable, interpretable, and ethically grounded approach to fair AI-driven recruitment, demonstrating that fairness and performance can improve together rather than trading off against each other.
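The abstract does not give implementation details, but the decision-level adversarial setup it describes can be sketched as follows. This is a minimal, hedged illustration assuming a PyTorch-style training loop; the layer sizes, the weighting coefficient lam, and all names (Predictor, Discriminator, train_step) are hypothetical, not the authors' code. The key property from the abstract is preserved: the adversary sees only the predictor's scalar output probability, never its latent features.

```python
import torch
import torch.nn as nn

class Predictor(nn.Module):
    """Main hiring model: maps applicant features to a hire probability."""
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))  # P(hire | x), shape (batch, 1)

class Discriminator(nn.Module):
    """Lightweight adversary: tries to infer gender from the scalar
    hire probability alone (decision-level, not representation-level)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, 8), nn.ReLU(),
            nn.Linear(8, 1),
        )

    def forward(self, p):
        return self.net(p)  # logit for predicted gender

def train_step(pred, disc, opt_p, opt_d, x, y, g, lam=0.1):
    """One adversarial update. x: features; y: hire labels in {0,1};
    g: gender labels in {0,1} (used only during training); lam: assumed
    adversarial weight controlling the strength of the fairness penalty."""
    bce = nn.BCELoss()
    bce_logits = nn.BCEWithLogitsLoss()

    # 1) Update the discriminator on detached predictions so its
    #    gradients do not flow back into the predictor here.
    p = pred(x).detach()
    d_loss = bce_logits(disc(p), g)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Update the predictor: fit the hiring labels while making the
    #    discriminator's job harder (subtract its loss, scaled by lam).
    p = pred(x)
    task_loss = bce(p, y)
    adv_loss = bce_logits(disc(p), g)
    p_loss = task_loss - lam * adv_loss
    opt_p.zero_grad(); p_loss.backward(); opt_p.step()
    return task_loss.item(), d_loss.item()
```

Exposing only the output probability to the adversary, as the abstract describes, means the sensitive attribute is needed only for training the discriminator; at inference the predictor runs on job features alone, consistent with the claim that sensitive features are never accessed when scoring applicants.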
