Liberalism and Automated Injustice

In Duncan Ivison (ed.), Research Handbook on Liberalism. Cheltenham: Edward Elgar Publishing (2024)

Abstract

Many of the benefits and burdens we might experience in our lives — from bank loans to bail terms — are increasingly decided by institutions relying on algorithms. In a sense, this is nothing new: algorithms — instructions whose steps can, in principle, be mechanically executed to solve a decision problem — are at least as old as allocative social institutions themselves. Algorithms, after all, help decision-makers to navigate the complexity and variation of whatever domains they are designed for. In another sense, however, this development is startlingly new: not only are algorithms being deployed in ever more social contexts, they are being mechanically executed not merely in principle, but pervasively in practice. Due to recent advances in computing technology, the benefits and burdens we experience in our lives are now increasingly decided by automata rather than by one another. How are we to morally assess these technologies? In this chapter, I propose a preliminary conceptual schema for identifying and locating the various injustices of automated, algorithmic social decision systems, from a broadly liberal perspective.

Author's Profile

Chad Lee-Stronach
Northeastern University
