
Citations of:

What matters to a machine

In Michael Anderson & Susan Leigh Anderson (eds.), Machine Ethics. Cambridge University Press. pp. 88-114 (2011)

  • B-Theory and Time Biases. Sayid Bnefsi - 2019 - In Patrick Blackburn, Per Hasle & Peter Øhrstrøm (eds.), Logic and Philosophy of Time: Further Themes from Prior. Aalborg University Press. pp. 41-52.
    We care not only about what experiences we have, but when we have them too. However, on the B-theory of time, something’s timing isn’t an intrinsic way for that thing to be or become. Given B-theory, should we be rationally indifferent about the timing per se of an experience? In this paper, I argue that B-theorists can justify time-biased preferences for pains to be past rather than present and for pleasures to be present rather than past. In support of this (...)
  • Incorporating Ethics into Artificial Intelligence. Amitai Etzioni & Oren Etzioni - 2017 - The Journal of Ethics 21 (4):403-418.
    This article reviews the reasons scholars hold that driverless cars and many other AI equipped machines must be able to make ethical decisions, and the difficulties this approach faces. It then shows that cars have no moral agency, and that the term ‘autonomous’, commonly applied to these machines, is misleading, and leads to invalid conclusions about the ways these machines can be kept ethical. The article’s most important claim is that a significant part of the challenge posed by AI-equipped machines (...)
  • Artificial virtue: the machine question and perceptions of moral character in artificial moral agents. Patrick Gamez, Daniel B. Shank, Carson Arnold & Mallory North - 2020 - AI and Society 35 (4):795-809.
    Virtue ethics seems to be a promising moral theory for understanding and interpreting the development and behavior of artificial moral agents. Virtuous artificial agents would blur traditional distinctions between different sorts of moral machines and could make a claim to membership in the moral community. Accordingly, we investigate the “machine question” by studying whether virtue or vice can be attributed to artificial intelligence; that is, are people willing to judge machines as possessing moral character? An experiment describes situations where either (...)
  • Artificial Moral Agents: Moral Mentors or Sensible Tools? Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I take (...)
  • Autonomous Weapons and Distributed Responsibility. Marcus Schulzke - 2013 - Philosophy and Technology 26 (2):203-219.
    The possibility that autonomous weapons will be deployed on the battlefields of the future raises the challenge of determining who can be held responsible for how these weapons act. Robert Sparrow has argued that it would be impossible to attribute responsibility for autonomous robots' actions to their creators, their commanders, or the robots themselves. This essay reaches a much different conclusion. It argues that the problem of determining responsibility for autonomous robots can be solved by addressing it within the context (...)
  • A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents. Wendell Wallach, Stan Franklin & Colin Allen - 2010 - Topics in Cognitive Science 2 (3):454-485.
    Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational (...)
  • Superethics Instead of Superintelligence: Know Thyself, and Apply Science Accordingly. Pim Haselager & Giulio Mecacci - 2020 - American Journal of Bioethics Neuroscience 11 (2):113-119.