
Citations of:

Homo sapiens 2.0: Why we should build the better robots of our nature

In M. Anderson & S. Anderson (eds.), Machine Ethics. Cambridge University Press (2011)

  • Ethical Decision Making in Autonomous Vehicles: The AV Ethics Project. Katherine Evans, Nelson de Moura, Stéphane Chauvier, Raja Chatila & Ebru Dogan - 2020 - Science and Engineering Ethics 26 (6):3285-3312.
    The ethics of autonomous vehicles has received a great amount of attention in recent years, specifically in regard to their decisional policies in accident situations in which human harm is a likely consequence. Starting from the assumption that human harm is unavoidable, many authors have developed differing accounts of what morality requires in these situations. In this article, a strategy for AV decision-making is proposed, the Ethical Valence Theory, which paints AV decision-making as a type of claim mitigation: different road (...)
  • Critiquing the Reasons for Making Artificial Moral Agents. Aimee van Wynsberghe & Scott Robbins - 2018 - Science and Engineering Ethics:1-17.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
  • Moral Difference Between Humans and Robots: Paternalism and Human-Relative Reason. Tsung-Hsing Ho - forthcoming - AI and Society:1-11.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency. However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account of blame, I argue that a distinct kind of reason available to (...)
  • Artificial Intelligence as a Socratic Assistant for Moral Enhancement. Francisco Lara & Jan Deckers - 2020 - Neuroethics 13 (3):275-287.
    The moral enhancement of human beings is a constant theme in the history of humanity. Today, faced with the threats of a new, globalised world, concern over this matter is more pressing. For this reason, the use of biotechnology to make human beings more moral has been considered. However, this approach is dangerous and very controversial. The purpose of this article is to argue that the use of another new technology, AI, would be preferable to achieve this goal. Whilst several (...)
  • When Is a Robot a Moral Agent? John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. He details three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The second is when (...)