Making moral machines: why we need artificial moral agents

AI and Society (forthcoming)
Abstract
As robots and artificial intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis of the relevant arguments for and against creating AMAs, and we argue that, all things considered, we have strong reasons to continue to responsibly develop AMAs. The key contributions of this paper are threefold. First, to provide the first comprehensive response to the important arguments made against AMAs by van Wynsberghe and Robbins (in "Critiquing the Reasons for Making Artificial Moral Agents", Science and Engineering Ethics 25, 2019) and to introduce several novel lines of argument in the process. Second, to collate and thematise for the first time the key arguments for and against AMAs in a single paper. Third, to recast the debate away from blanket arguments for or against AMAs in general, and towards a more nuanced discussion of what sorts of AMAs it is morally appropriate to use, in what sorts of contexts, and for what sorts of purposes.
PhilPapers/Archive ID: FORMMM
Archival date: 2020-11-04
Added to PP index: 2020-11-04
