Optimizing Fairness in Machine Learning: A Hyperparameter Tuning Approach

Abstract

Machine learning models are increasingly utilized in critical areas such as finance, hiring, and criminal justice, yet they often inherit or amplify societal biases, leading to unfair outcomes. Addressing algorithmic fairness is no longer optional but essential for building trustworthy systems. This paper proposes a novel framework that integrates fairness as a primary optimization goal during the hyperparameter tuning phase of model development. Using the FLASH algorithm, a fast sequential model-based optimization technique, we demonstrate that it is possible to simultaneously optimize for predictive accuracy and fairness metrics. Our experiments across multiple real-world datasets reveal that incorporating fairness constraints during model optimization significantly reduces bias without substantially compromising performance. Furthermore, the proposed approach outperforms several established bias mitigation techniques. These findings highlight the critical role of software engineers in embedding fairness into the machine learning lifecycle and present a practical, scalable path toward more equitable AI systems.
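The core idea, scoring each hyperparameter configuration on both accuracy and a fairness metric during tuning, can be sketched as follows. FLASH itself is a sequential model-based optimizer; the sketch below substitutes plain random search over a toy thresholding "model" purely to illustrate the joint objective. The dataset, the `fairness_weight` parameter, and the use of statistical parity difference as the fairness metric are all illustrative assumptions, not the paper's actual code.

```python
# Hedged sketch (assumption-laden): fairness-aware hyperparameter search.
# A real setup would tune a full model with FLASH or another SMBO method;
# here a single decision threshold stands in for the hyperparameters.
import random

# Toy dataset of (score, group, label); group 0/1 is the protected attribute.
random.seed(0)
data = [(random.random(), random.randint(0, 1), random.randint(0, 1))
        for _ in range(200)]

def evaluate(threshold):
    """Return (accuracy, statistical parity difference) for a cutoff model."""
    preds = [(1 if s >= threshold else 0, g, y) for s, g, y in data]
    acc = sum(p == y for p, _, y in preds) / len(preds)

    def positive_rate(grp):
        members = [p for p, g, _ in preds if g == grp]
        return sum(members) / max(1, len(members))

    spd = abs(positive_rate(0) - positive_rate(1))  # lower is fairer
    return acc, spd

def tune(n_trials=50, fairness_weight=0.5):
    """Pick the threshold maximizing accuracy minus weighted unfairness."""
    best_t, best_score = 0.5, float("-inf")
    for _ in range(n_trials):
        t = random.random()
        acc, spd = evaluate(t)
        score = acc - fairness_weight * spd  # joint objective
        if score > best_score:
            best_t, best_score = t, score
    return best_t

best_threshold = tune()
acc, spd = evaluate(best_threshold)
print(f"threshold={best_threshold:.2f} accuracy={acc:.2f} SPD={spd:.2f}")
```

The single scalarized objective is the simplest way to trade the two goals off; the paper's framework treats fairness as a first-class optimization target, which in practice may instead use constraints or Pareto-front selection over the two metrics.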