Fair or Unfair Differentiation? Reconsidering the Concept of Equality for the Regulation of Algorithmically Guided Decision-Making.


Abstract

Algorithms are increasingly relied upon to help decision-makers automate, streamline, structure and guide a variety of decision-making processes, ranging from trivial to critical, in both the public and private sector. This dissertation is concerned with a specific type of injustice that may accompany the development and deployment of algorithmically guided decision-making systems: the emergence of unjust (in)equality brought about by various instances of differentiation and differential treatment that take place within and as part of these systems. In this data-driven environment, people and groups of people are continuously classified, categorised, ranked and scored on a variety of features or attributes, such as their characteristics, interests, behaviour and preferences. For decision-subjects, these acts of differentiation can have significant consequences for their social position and life prospects: they affect the choices and options they are presented with, the interactions and relationships they hold with others and themselves, the opportunities they are given, the burdens and benefits they carry, and so forth. Yet, when applied at a large enough scale, these decisions may also initiate significant societal change. Due to the complexity of the digital environment and the distinctive characteristics algorithmically guided decisions exhibit, it has become increasingly difficult to assess whether the decisions these knowledge- and data-driven systems inform, and the (in)equalities they produce, can be justified.
As a consequence, uncertainty exists as to whether current and future regulatory efforts, and the approach to equality they (will) adhere to, have the capacity to respond appropriately to the egalitarian harms algorithms risk introducing.

In this dissertation, I reposition and operationalise the notion of equality as a practicable and interpretative lens to strengthen the evaluation and regulation of algorithmically guided decision-making practices in light of the inequalities they produce or risk producing. The dissertation begins with a definition of the algorithmic research context in which the notion of equality will be operationalised. I explore a series of characteristics that typify algorithmic systems and render the inequalities they generate distinctive in terms of their form and scope. Due to these unique characteristics, algorithmic inequalities have the potential to restructure the fabric of society along new and existing dimensions: they may not only reinforce existing social injustice, but may also introduce new forms of non-representational injustice (Chapter 1). Drawing inspiration from both European equality and non-discrimination law and political philosophical theories of justice, and informed by the (practical) functioning of algorithmic decision-making systems and the particular challenges they bring along, I propose equality as a multidimensional concept that can be specified along three (interrelated) axes. The model represents a core set of ideals commonly associated with the notion of equality as a social value: equal concern and respect (the moral dimension), equal social standing and equal social relationships (the socio-relational dimension), and/or equal access to certain justice-relevant goods (the distributive dimension) (Chapter 2).
Throughout this dissertation, this multidimensional understanding of equality is operationalised to identify, articulate and evaluate algorithmic injustice, and the response formulated thereto within a given policy, law, code or theory. In a first step, my understanding of equality is positioned against the algorithmic environment in order to articulate and identify the egalitarian harms algorithms risk imposing on decision-subjects and society at large (Part I: Identification; Chapter 2). In a second step, the multidimensional model functions as a support mechanism to uncover and evaluate the legal conceptualisation of equality found within European equality and non-discrimination law (Council of Europe and European Union). By investigating whom the law protects (equality of whom?), against which inequalities (what egalitarian ideals do they promote?), and how (what review mechanisms have been put in place when equality is positioned against competing values?), I assess whether the legal approach to equality can address the injustice algorithms risk introducing (Part II: Evaluation; Chapters 3-5). In a third step, I rely upon the model to locate and examine specific notions of equality, chosen for their correspondence with the aforementioned dimensions. These socio-relational (domination and oppression) and distributive (primary goods and capabilities) notions are examined to concretise the egalitarian harms algorithms risk producing, the conditions under which these harms may manifest in practice, and the safeguards that can be provided to protect decision-subjects and society at large against them (Part III: Navigation; Chapters 6 and 7).
Finally, based upon my findings, I formulate a set of normative recommendations aimed at repositioning the concept of equality within the algorithmic governance debate in an effort to strengthen its guiding function for the evaluation and regulation of algorithmically informed decision-making systems (Part IV: Synthesis; Chapter 8).
