African Reasons Why Artificial Intelligence Should Not Maximize Utility

In Beatrice Dedaa Okyere-Manu (ed.), African Values, Ethics, and Technology: Questions, Issues, and Approaches. Palgrave-Macmillan. pp. 55-72 (2021)

Abstract

Insofar as artificial intelligence is to be used to guide automated systems in their interactions with humans, the dominant view is probably that it would be appropriate to programme them to maximize (expected) utility. According to utilitarianism, which is a characteristically western conception of moral reason, machines should be programmed to do whatever they could in a given circumstance to produce in the long run the highest net balance of what is good for human beings minus what is bad for them. In this essay, I appeal to values that are characteristically African––but that will resonate with those from a variety of moral-philosophical traditions, particularly in the Global South––to cast doubt on a utilitarian approach. Drawing on norms salient in sub-Saharan ethics, I provide four reasons for thinking it would be immoral for automated systems governed by artificial intelligence to maximize utility. In catchphrases, I argue that utilitarianism cannot make adequate sense of the ways that human dignity, group rights, family first, and (surprisingly) self-sacrifice should determine the behaviour of smart machines.

Author's Profile

Thaddeus Metz
Cornell University (PhD)
