Democratizing Algorithmic Fairness

Philosophy and Technology 33 (2):225-244 (2020)

Abstract

Algorithms can now identify patterns and correlations in (big) datasets and, using machine learning techniques, predict outcomes based on those identified patterns and correlations; decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to recognize algorithms as biased. While researchers have taken the problem of algorithmic bias seriously, the current discussion on algorithmic fairness tends to conceptualize ‘fairness’ in algorithmic fairness primarily as a technical issue and attempts to implement pre-existing ideas of ‘fairness’ in algorithms. In this paper, I show that such a view of algorithmic fairness as a technical issue is unsatisfactory for the type of problem algorithmic fairness presents. Since decisions on fairness measures and the related techniques for algorithms essentially involve choices between competing values, ‘fairness’ in algorithmic fairness should be conceptualized first and foremost as a political issue, and it should be (re)solved through democratic communication. The aim of this paper, therefore, is to reconceptualize algorithmic fairness explicitly as a political question and to suggest that the current discussion of algorithmic fairness can be strengthened by adopting the accountability for reasonableness framework.

Author's Profile

Pak-Hang Wong
Hong Kong Baptist University

Analytics

Added to PP
2019-05-18
