Moral difference between humans and robots: paternalism and human-relative reason

AI and Society:1-11 (forthcoming)
According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. On the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency. However, such a moral difference exists: drawing on Strawson's account of participant reactive attitudes and Scanlon's relational account of blame, I argue that a distinct kind of reason available to humans—call it human-relative reason—is not available to robots. This difference in available moral reasons entails that an action can sometimes be morally permissible for humans but not for robots. Therefore, when developing moral robots, we cannot consider only what humans can or cannot do. I use examples of paternalism to illustrate my argument.
Archival date: 2021-05-25