Can humanoid robots be moral?

Abstract
The concept of morality underpins moral responsibility, which depends not only on the outward practices (or 'output,' in the case of humanoid robots) of agents but also on the internal attitudes ('input') that rational, responsible, intentional beings generate. The primary question that has initiated this extensive debate, 'Can humanoid robots be moral?', stems from a normative outlook in which morality encompasses human conscience and a socio-linguistic background. This paper advances the thesis that the conceptions of morality and creativity belong to linguistic human beings rather than to non-linguistic humanoid robots, since humanoid robots are docile automata that cannot be held responsible for their actions. To eradicate human ethics in order to make way for humanoid robot ethics is to rest moral action and adequacy on a myth of creative agency and self-dependency that a humanoid robot can scarcely express.
PhilPapers/Archive ID
CHACHR-2
Upload history
Archival date: 2021-03-12
Added to PP index
2018-07-02
