Artificial moral agents are infeasible with foreseeable technologies

Ethics and Information Technology 16 (3):197-206 (2014)

Abstract

For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems would be a substantial departure from current technologies and theory, and their development is an unlikely prospect. With foreseeable technologies, an artificial agent will therefore carry zero responsibility for its behaviour, and humans will retain full responsibility.
