Can humanoid robots be moral?

Ethics in Science and Environmental Politics 18:49-60 (2018)

Abstract

The concept of morality underpins moral responsibility, which depends not only on an agent's outward practices (or ‘output,’ in the case of humanoid robots) but also on the internal attitudes (‘input’) that rational, intentional, and responsible beings generate. The primary question driving this extensive debate, ‘Can humanoid robots be moral?’, stems from a normative outlook on which morality involves human conscience and a socio-linguistic background. This paper advances the thesis that the conceptions of morality and creativity are bound up with linguistic human beings rather than non-linguistic humanoid robots, since humanoid robots are docile automata that cannot be held responsible for their actions. Eradicating human ethics to make way for humanoid robot ethics exposes how moral action and adequacy hinge on creative agency and self-dependency, which a humanoid robot can scarcely express.

Author's Profile

Dr Sanjit Chakraborty
Vellore Institute of Technology-AP University
