When is a robot a moral agent?

International Review of Information Ethics 6 (12):23-30 (2006)

Abstract

In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. Sullins details three requirements for a robot to be seen as a moral agent. The first is met when the robot is significantly autonomous from any programmers or operators of the machine. The second is met when the robot's behavior can be analyzed or explained only by ascribing to it some predisposition or 'intention' to do good or harm. Finally, robot moral agency requires the robot to behave in a way that shows an understanding of responsibility to some other moral agent. Robots that meet all of these criteria will have moral rights as well as responsibilities, regardless of their status as persons.

Author's Profile

John P. Sullins
Sonoma State University
