The moral decision machine: a challenge for artificial moral agency based on moral deference

AI and Ethics (2024)

Abstract

Humans are responsible moral agents in part because they can competently respond to moral reasons. Several philosophers have argued that artificial agents cannot do this and therefore cannot be responsible moral agents. I present a counterexample to these arguments: the ‘Moral Decision Machine’. I argue that the ‘Moral Decision Machine’ responds to moral reasons just as competently as humans do. However, I suggest that, while this is a hopeful development, it does not warrant strong optimism about ‘artificial moral agency’. The ‘Moral Decision Machine’ (and similar agents) can only respond to moral reasons by deferring to others, and there are good reasons to think this is incompatible with responsible moral agency. While the challenge to artificial moral agency based on moral reasons-responsiveness can be satisfactorily addressed, the challenge based on moral deference remains an open question. The right way to understand the challenge, I argue, is as a route to the claim that artificial agents are unlikely to be responsible moral agents because they cannot be authentic.

Author's Profile

Zacharus Gudmunsen
Koç University
