Building machines that learn and think about morality

In Proceedings of the Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB 2018). Society for the Study of Artificial Intelligence and Simulation of Behaviour (2018)
Abstract
Lake et al. propose three criteria which, they argue, will bring artificial intelligence (AI) systems closer to human cognitive abilities. In this paper, we explore the application of these criteria to a particular domain of human cognition: our capacity for moral reasoning. In doing so, we explore a set of considerations relevant to the development of AI moral decision-making. Our main focus is on the relation between dual-process accounts of moral reasoning and model-free/model-based forms of machine learning. We also discuss how work in embodied and situated cognition could provide a valuable perspective on future research.
PhilPapers/Archive ID
BURBMT-2
Archival date: 2020-05-08
