Building machines that learn and think about morality

In Proceedings of the Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB 2018). Society for the Study of Artificial Intelligence and Simulation of Behaviour (2018)

Abstract

Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we identify a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of machine learning. We also discuss how work in embodied and situated cognition could provide a valuable perspective on future research.
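The abstract's central contrast is between model-free learning (cached action values, loosely analogous to fast, habitual moral responses) and model-based learning (explicit planning over a world model, loosely analogous to deliberative moral reasoning). The sketch below illustrates that distinction only; it is a hypothetical toy example, not code from the paper. The two-state MDP, the `step` function, and all parameter values are assumptions chosen for brevity.

```python
import random
from collections import defaultdict

# Hypothetical toy MDP for illustration: two states, two actions,
# the action deterministically selects the next state, and reaching
# state 1 yields reward 1. Discount factor 0.9.
STATES, ACTIONS, GAMMA = [0, 1], [0, 1], 0.9

def step(state, action):
    next_state = action  # action directly picks the next state
    reward = 1.0 if next_state == 1 else 0.0
    return next_state, reward

# Model-free: tabular Q-learning. Values are cached from experience;
# no explicit model of transitions or rewards is consulted at choice time.
def q_learning(episodes=500, alpha=0.1, epsilon=0.1):
    Q = defaultdict(float)
    for _ in range(episodes):
        s = random.choice(STATES)
        for _ in range(10):
            if random.random() < epsilon:
                a = random.choice(ACTIONS)  # occasional exploration
            else:
                a = max(ACTIONS, key=lambda a: Q[(s, a)])
            s2, r = step(s, a)
            target = r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

# Model-based: value iteration over a known transition/reward model.
# Choices are derived by planning with the model rather than from
# cached experience.
def value_iteration(iters=100):
    V = {s: 0.0 for s in STATES}
    for _ in range(iters):
        for s in STATES:
            V[s] = max(step(s, a)[1] + GAMMA * V[step(s, a)[0]]
                       for a in ACTIONS)
    return V
```

On this toy problem both methods converge on the same policy (always choose action 1, with state values near 1/(1 - 0.9) = 10), but they arrive there differently, which is the structural point the dual-process analogy trades on.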

Author Profiles

Christopher Burr
The Alan Turing Institute
Geoff Keeling
Stanford University
