Algorithmic Indirect Discrimination, Fairness, and Harm

AI and Ethics (2023)

Abstract

Over the past decade, scholars, institutions, and activists have voiced strong concerns about the potential of automated decision systems to indirectly discriminate against vulnerable groups. This article analyses the ethics of algorithmic indirect discrimination and argues that we can explain what is morally bad about such discrimination by reference to the fact that it causes harm. The article first sketches the relevant technical and conceptual background, including definitions of direct and indirect algorithmic differential treatment. It next introduces three prominent accounts of fairness as potential explanations of the badness of algorithmic indirect discrimination, but argues that all three are vulnerable to powerful levelling-down-style objections. Instead, the article demonstrates how proper attention to the way differences in decision scenarios affect the distribution of harms can help us account for intuitions in prominent cases. Finally, the article considers a potential objection based on the fact that certain forms of algorithmic indirect discrimination appear to distribute rather than cause harm, and notes that we can explain how such distributions cause harm by attending to differences in individual and group vulnerability.

Author's Profile

Frej Thomsen
Danish National Centre for Ethics

Analytics

Added to PP
2022-12-01
