Can Humanoid Robots be Moral?

Abstract
The concept of morality underpins moral responsibility, which depends not only on the outward practices (the 'output', in the case of humanoid robots) of agents but also on the internal attitudes ('input') that rational, responsible, intention-bearing beings generate. The central question that has initiated extensive debate, i.e. 'Can humanoid robots be moral?', stems from a normative outlook in which morality encompasses human conscience and socio-linguistic background. This paper advances the thesis that the conceptions of morality and creativity interplay with linguistic human beings rather than with non-linguistic humanoid robots, since humanoid robots are in fact docile automata that cannot be responsible for their actions. To eradicate human ethics in order to make way for humanoid-robot ethics would rest moral action and adequacy on a myth of creative agency and self-dependency, which a humanoid robot can scarcely express.
PhilPapers/Archive ID
CHACHR-3
Archival date: 2018-09-19
