The Fairness–Accuracy Frontier: Impossibility Theorems and Optimal Tradeoffs in Algorithmic Decision-Making
Abstract
This paper provides a comprehensive theoretical and empirical analysis of the fundamental tradeoffs between fairness and accuracy in algorithmic decision-making systems. We present a unified mathematical framework that characterizes the Pareto frontier of achievable fairness–accuracy combinations, establishing impossibility theorems that delineate the boundaries of what any classifier can achieve under realistic distributional assumptions. We show that when base rates differ across protected groups, no classifier can simultaneously satisfy statistical parity, equalized odds, and predictive parity except in degenerate cases. We extend existing results by deriving tight bounds on approximate fairness tradeoffs, characterizing the geometric structure of the fairness–accuracy frontier, and developing optimal post-processing, in-processing, and pre-processing mechanisms that achieve points on this frontier with provable guarantees. Empirical evaluations on standard benchmark datasets validate the theoretical predictions and illustrate the cost of imposing different fairness constraints. We further develop a welfare-theoretic framework that interprets fairness criteria as constraints on the distribution of social welfare, providing principled guidance for navigating fairness tradeoffs in high-stakes algorithmic decision-making contexts.
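The incompatibility claimed in the abstract can be illustrated with a short sketch. The snippet below is not the paper's code; it is a minimal illustration, under assumed base rates and error rates, of the standard relationship between a group's base rate, its false-positive and false-negative rates, and its positive predictive value, which is why equalized odds and predictive parity cannot both hold when base rates differ and the classifier is imperfect.

```python
# Illustrative sketch (not the paper's implementation): if two groups share the
# same FPR and FNR (equalized odds) but have different base rates, their implied
# positive predictive values differ, so predictive parity fails.

def ppv_from_rates(base_rate, fpr, fnr):
    """PPV implied by a group's base rate, false-positive rate, and false-negative rate."""
    tp = base_rate * (1 - fnr)      # true positives per unit population
    fp = (1 - base_rate) * fpr      # false positives per unit population
    return tp / (tp + fp)

# Hypothetical groups with different base rates but identical error rates.
base_rates = {"group_a": 0.3, "group_b": 0.5}
fpr, fnr = 0.1, 0.2

for group, p in base_rates.items():
    print(group, round(ppv_from_rates(p, fpr, fnr), 3))
# Prints roughly 0.774 for group_a and 0.889 for group_b: equal error rates,
# unequal PPV, consistent with the impossibility result the abstract describes.
```

Running the sketch shows the two groups' PPVs diverging whenever the base rates differ, which is the degenerate-case caveat in the abstract: only a perfect classifier (or equal base rates) avoids the conflict.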