Artificial moral agents are infeasible with foreseeable technologies

Ethics and Information Technology 16 (3):197-206 (2014)
For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behaviour, and humans will retain full responsibility.
Archival date: 2015-05-14