Fairness, Justice, and Social Inequality in Machine Learning

Abstract

As machine learning (ML) systems increasingly shape decision-making across crucial societal domains, the discourse around fairness in algorithmic systems (fairML) has intensified. Although fairML research is rapidly expanding, contributions from social science, particularly sociology, remain limited. This chapter addresses this gap by examining fairness in ML through a sociological lens, focusing on the interplay between algorithmic decision-making and social inequality. We argue that fairML frameworks must explicitly distinguish technical fairness, which concerns unbiased predictions, from normative justice, which addresses broader ethical and distributive considerations. We identify and discuss five key challenges confronting fairML today: (1) clearly separating fairness and justice, (2) developing more sophisticated measures of vulnerability and protected attributes, (3) incorporating historical disadvantage and social origin into fairness evaluations, (4) assessing unintended social consequences of algorithmic interventions, and (5) empirically investigating stakeholder preferences toward AI systems. By highlighting these sociologically informed challenges, the chapter advocates a more holistic, context-sensitive approach to algorithmic fairness. Ultimately, our analysis proposes a sociologically grounded research agenda for critically assessing fairML and strengthening its potential to alleviate, rather than perpetuate, social inequalities.