Egalitarian Machine Learning

Res Publica 29 (2):237–264 (2023)

Abstract

Prediction-based decisions, which are often made by utilizing the tools of machine learning, influence nearly all facets of modern life. Ethical concerns about this widespread practice have given rise to the field of fair machine learning and a number of fairness measures, mathematically precise definitions of fairness that purport to determine whether a given prediction-based decision system is fair. Following Reuben Binns (2017), we take ‘fairness’ in this context to be a placeholder for a variety of normative egalitarian considerations. We explore a few fairness measures to suss out their egalitarian roots and evaluate them, both as formalizations of egalitarian ideas and as assertions of what fairness demands of predictive systems. We pay special attention to a recent and popular fairness measure, counterfactual fairness, which holds that a prediction about an individual is fair if it is the same in the actual world and any counterfactual world where the individual belongs to a different demographic group (cf. Kusner et al. (2018)).

Author Profiles

Clinton Castro
University of Wisconsin, Madison
David O'Brien
Harvard University
Ben Schwan
Case Western Reserve University
