Moral difference between humans and robots: paternalism and human-relative reason

AI and Society 37 (4):1533-1543 (2022)

Abstract

According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. On the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the _equivalence thesis_). However, such a moral difference exists: drawing on Strawson's account of participant reactive attitudes and Scanlon's relational account of blame, I argue that a distinct kind of reason available to humans, call it _human-relative reason_, is not available to robots. This difference in moral reasons entails that an action is sometimes morally permissible for humans but not for robots. Therefore, when developing moral robots, we cannot consider only what humans can or cannot do. I use examples of paternalism to illustrate my argument.

Author's Profile

Tsung-Hsing Ho (何宗興)
National Chung Cheng University
