Abstract
Insofar as artificial intelligence is to be used to guide automated systems in their interactions with humans, the dominant view is probably that it would be appropriate to programme them to maximize (expected) utility. According to utilitarianism, a characteristically Western conception of moral reason, machines should be programmed to do whatever they can in a given circumstance to produce in the long run the highest net balance of what is good for human beings over what is bad for them. In this essay, I appeal to values that are characteristically African, but that will resonate with those from a variety of moral-philosophical traditions, particularly in the Global South, to cast doubt on a utilitarian approach. Drawing on norms salient in sub-Saharan ethics, I provide four reasons for thinking it would be immoral for automated systems governed by artificial intelligence to maximize utility. In catchphrases, I argue that utilitarianism cannot make adequate sense of the ways that human dignity, group rights, family first, and (surprisingly) self-sacrifice should determine the behaviour of smart machines.