  • Computer systems: Moral entities but not moral agents. [REVIEW] Deborah G. Johnson - 2006 - Ethics and Information Technology 8 (4):195-204.
    After discussing the distinction between artifacts and natural entities, and the distinction between artifacts and technology, the conditions of the traditional account of moral agency are identified. While computer system behavior meets four of the five conditions, it does not and cannot meet a key condition. Computer systems do not have mental states, and even if they could be construed as having mental states, they do not have intendings to act, which arise from an agent’s freedom. On the other hand, (...)
  • On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • Mapping Value Sensitive Design onto AI for Social Good Principles. Steven Umbrello & Ibo van de Poel - 2021 - AI and Ethics 1 (3):283-296.
    Value Sensitive Design (VSD) is an established method for integrating values into technical design. It has been applied to different technologies and, more recently, to artificial intelligence (AI). We argue that AI poses a number of challenges specific to VSD that require a somewhat modified VSD approach. Machine learning (ML), in particular, poses two challenges. First, humans may not understand how an AI system learns certain things. This requires paying attention to values such as transparency, explicability, and accountability. Second, ML (...)
  • Robots and Respect: Assessing the Case Against Autonomous Weapon Systems. Robert Sparrow - 2016 - Ethics and International Affairs 30 (1):93-116.
    There is increasing speculation within military and policy circles that the future of armed conflict is likely to include extensive deployment of robots designed to identify targets and destroy them without the direct oversight of a human operator. My aim in this paper is twofold. First, I will argue that the ethical case for allowing autonomous targeting, at least in specific restricted domains, is stronger than critics have acknowledged. Second, I will attempt to uncover, explicate, and defend the intuition that (...)
  • The Future of War: The Ethical Potential of Leaving War to Lethal Autonomous Weapons. Steven Umbrello, Phil Torres & Angelo F. De Bellis - 2020 - AI and Society 35 (1):273-282.
    Lethal Autonomous Weapons (LAWs) are robotic weapons systems, primarily of value to the military, that could engage in offensive or defensive actions without human intervention. This paper assesses and engages the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Specific interest is given particularly to ethical LAWs, which are artificially intelligent weapons systems that make decisions within the bounds of their ethics-based code. To ensure that a wide, but not exhaustive, survey (...)
  • The Strategic Robot Problem: Lethal Autonomous Weapons in War. Heather M. Roff - 2014 - Journal of Military Ethics 13 (3):211-227.
    The present debate over the creation and potential deployment of lethal autonomous weapons, or ‘killer robots’, is garnering more and more attention. Much of the argument revolves around whether such machines would be able to uphold the principle of noncombatant immunity. However, much of the present debate fails to take into consideration the practical realities of contemporary armed conflict, particularly the generation of military objectives and adherence to a targeting process. This paper argues that we must look to the targeting process (...)
  • The Case for Ethical Autonomy in Unmanned Systems. Ronald C. Arkin - 2010 - Journal of Military Ethics 9 (4):332-341.
    The underlying thesis of the research in ethical autonomy for lethal autonomous unmanned systems is that they will potentially be capable of performing more ethically on the battlefield than are human soldiers. In this article this hypothesis is supported by ongoing and foreseen technological advances and perhaps equally important by an assessment of the fundamental ability of human warfighters in today's battlespace. If this goal of better-than-human performance is achieved, even if still imperfect, it can result in a reduction in (...)
  • Autonomous Military Robotics: Risk, Ethics, and Design. Patrick Lin, George Bekey & Keith Abney - unknown
  • Being-in-the-World: A Commentary on Heidegger's Being and Time, Division I. Mark Okrent & Hubert L. Dreyfus - 1993 - Philosophical Review 102 (2):290.
  • The Later Heidegger. George Pattison - 2002 - Philosophical Quarterly 52 (208):401-403.