Democratizing Algorithmic Fairness

Philosophy and Technology 33 (2):225-244 (2020)
Algorithms can now identify patterns and correlations in (big) datasets and, using machine learning techniques, predict outcomes based on those identified patterns and correlations; decisions can then be made by algorithms themselves in accordance with the predicted outcomes. Yet algorithms can inherit questionable values from the datasets and acquire biases in the course of (machine) learning, and automated algorithmic decision-making makes it more difficult for people to recognize algorithms as biased. While researchers have taken the problem of algorithmic bias seriously, the current discussion of algorithmic fairness tends to conceptualize ‘fairness’ primarily as a technical issue and attempts to implement pre-existing ideas of ‘fairness’ in algorithms. In this paper, I show that such a view of algorithmic fairness as a technical issue is unsatisfactory for the type of problem algorithmic fairness presents. Since decisions on fairness measures and the related techniques for algorithms essentially involve choices between competing values, ‘fairness’ in algorithmic fairness should be conceptualized first and foremost as a political issue, and it should be (re)solved through democratic communication. The aim of this paper, therefore, is to explicitly reconceptualize algorithmic fairness as a political question and to suggest that the current discussion of algorithmic fairness can be strengthened by adopting the accountability for reasonableness framework.
Archival date: 2019-05-18