  • Kant's Just War Theory. Steven Charles Starke - unknown
    The main thesis of my dissertation is that Kant has a just war theory, and that it is a universal just war theory, not a traditional just war theory. This is supported by first establishing the history of secular just war theory, specifically through a consideration of the work of Hugo Grotius, Rights of War and Peace. I take his approach, from a natural law perspective, as indicative of the just war theory tradition. I also offer a brief critique of this tradition, (...)
  • What’s wrong with “Death by Algorithm”? Classifying dignity-based objections to LAWS. Masakazu Matsumoto & Koki Arai - forthcoming - AI and Society:1-12.
    The rapid technological advancement of AI in the civilian sector is accompanied by accelerating attempts to apply this technology in the military sector. This study focuses on the argument that AI-equipped lethal autonomous weapons systems (LAWS) pose a threat to human dignity. However, the precise meaning of why and how LAWS violate human dignity is not always clear because the concept of human dignity itself remains ambiguous. Drawing on philosophical research on this concept, this study distinguishes the multiple meanings of (...)
  • Why Command Responsibility May (not) Be a Solution to Address Responsibility Gaps in LAWS. Ann-Katrien Oimann - 2024 - Criminal Law and Philosophy 18 (3):765-791.
    The possible future use of lethal autonomous weapons systems (LAWS) and the challenges associated with assigning moral responsibility lead to several debates. Some authors argue that the highly autonomous capability of such systems may lead to a so-called responsibility gap in situations where LAWS cause serious violations of international humanitarian law. One proposed solution is the doctrine of command responsibility. Despite the doctrine’s original development to govern human interactions on the battlefield, it is worth considering whether the doctrine of command (...)
  • Correction to: The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Ann-Katrien Oimann - 2023 - Philosophy and Technology 36 (1):1-2.
    AI has numerous applications in various fields, including the military domain. The increase in the degree of autonomy in some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in philosophical literature (...)
  • Ethical Principles for Artificial Intelligence in National Defence. Mariarosaria Taddeo, David McNeish, Alexander Blanchard & Elizabeth Edgar - 2021 - Philosophy and Technology 34 (4):1707-1729.
    Defence agencies across the globe identify artificial intelligence as a key technology to maintain an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defence domain. This article provides one such framework. It identifies five principles—justified and overridable uses, just and transparent systems and processes, human moral responsibility, meaningful human control (...)
  • Responsibility for Killer Robots. Johannes Himmelreich - 2019 - Ethical Theory and Moral Practice 22 (3):731-747.
    Future weapons will make life-or-death decisions without a human in the loop. When such weapons inflict unwarranted harm, no one appears to be responsible. There seems to be a responsibility gap. I first reconstruct the argument for such responsibility gaps to then argue that this argument is not sound. The argument assumes that commanders have no control over whether autonomous weapons inflict harm. I argue against this assumption. Although this investigation concerns a specific case of autonomous weapons systems, I take (...)
  • Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles. Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
  • People are averse to machines making moral decisions. Yochanan E. Bigman & Kurt Gray - 2018 - Cognition 181 (C):21-34.
  • The Future of War: The Ethical Potential of Leaving War to Lethal Autonomous Weapons. Steven Umbrello, Phil Torres & Angelo F. De Bellis - 2020 - AI and Society 35 (1):273-282.
    Lethal Autonomous Weapons (LAWs) are robotic weapons systems, primarily of value to the military, that could engage in offensive or defensive actions without human intervention. This paper assesses and engages the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Specific interest is given particularly to ethical LAWs, which are artificially intelligent weapons systems that make decisions within the bounds of their ethics-based code. To ensure that a wide, but not exhaustive, survey (...)
  • A Value-Sensitive Design Approach to Intelligent Agents. Steven Umbrello & Angelo Frank De Bellis - 2018 - In Roman Yampolskiy (ed.), Artificial Intelligence Safety and Security. CRC Press. pp. 395-410.
    This chapter proposes a novel design methodology, Value-Sensitive Design (VSD), and its potential application to the field of artificial intelligence research and design. It discusses the imperatives in adopting a design philosophy that embeds values into the design of artificial agents at the early stages of AI development. Because of the high stakes in the unmitigated design of artificial agents, this chapter proposes that even though VSD may turn out to be a less-than-optimal design methodology, it currently provides a (...)
  • The Problem with Killer Robots. Nathan Gabriel Wood - 2020 - Journal of Military Ethics 19 (3):220-240.
    Warfare is becoming increasingly automated, from automatic missile defense systems to micro-UAVs (WASPs) that can maneuver through urban environments with ease, and each advance brings with it ethical questions in need of resolving. Proponents of lethal autonomous weapons systems (LAWS) provide varied arguments in their favor; robots are capable of better identifying combatants and civilians, thus reducing "collateral damage"; robots need not protect themselves and so can incur more risks to protect innocents or gather more information before using deadly force; (...)
  • Robots and Respect: Assessing the Case Against Autonomous Weapon Systems. Robert Sparrow - 2016 - Ethics and International Affairs 30 (1):93-116.
    There is increasing speculation within military and policy circles that the future of armed conflict is likely to include extensive deployment of robots designed to identify targets and destroy them without the direct oversight of a human operator. My aim in this paper is twofold. First, I will argue that the ethical case for allowing autonomous targeting, at least in specific restricted domains, is stronger than critics have acknowledged. Second, I will attempt to uncover, explicate, and defend the intuition that (...)
  • Autonomous weapons systems, killer robots and human dignity. Amanda Sharkey - 2019 - Ethics and Information Technology 21 (2):75-87.
    One of the several reasons given in calls for the prohibition of autonomous weapons systems (AWS) is that they are against human dignity (Asaro, 2012; Docherty, 2014; Heyns, 2017; Ulgen, 2016). However, there have been criticisms of the reliance on human dignity in arguments against AWS (Birnbacher, 2016; Pop, 2018; Saxton, 2016). This paper critically examines the relationship between human dignity and autonomous weapons systems. Three main types of objection to AWS are identified; (i) arguments based on technology and the (...)
  • Integrating robot ethics and machine morality: the study and design of moral competence in robots. Bertram F. Malle - 2016 - Ethics and Information Technology 18 (4):243-256.
    Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like and discuss a number (...)
  • A Comparative Analysis of the Definitions of Autonomous Weapons Systems. Mariarosaria Taddeo & Alexander Blanchard - 2022 - Science and Engineering Ethics 28 (5):1-22.
    In this report we focus on the definition of autonomous weapons systems (AWS). We provide a comparative analysis of existing official definitions of AWS as provided by States and international organisations, like ICRC and NATO. The analysis highlights that the definitions focus on different aspects of AWS and hence lead to different approaches to address the ethical and legal problems of these weapons systems. This approach is detrimental both in terms of fostering an understanding of AWS and in facilitating (...)
  • A Taste of Armageddon: A Virtue Ethics Perspective on Autonomous Weapons and Moral Injury. Massimiliano Lorenzo Cappuccio, Jai Christian Galliott & Fady Shibata Alnajjar - 2022 - Journal of Military Ethics 21 (1):19-38.
    Autonomous weapon systems could in principle release military personnel from the onus of killing during combat missions, reducing the related risk of suffering a moral injury and its debilita...
  • When stigmatization does not work: over-securitization in efforts of the Campaign to Stop Killer Robots. Anzhelika Solovyeva & Nik Hynek - 2023 - AI and Society 38 (6):2547-2569.
    This article reflects on securitization efforts with respect to ‘killer robots’, known more impartially as autonomous weapons systems (AWS). Our contribution focuses, theoretically and empirically, on the Campaign to Stop Killer Robots, a transnational advocacy network vigorously pushing for a pre-emptive ban on AWS. Marking exactly a decade of its activity, there is still no international regime formally banning, or even purposefully regulating, AWS. Our objective is to understand why the Campaign has not been able to advance its disarmament agenda (...)
  • The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Ann-Katrien Oimann - 2023 - Philosophy and Technology 36 (1):1-22.
    AI has numerous applications in various fields, including the military domain. The increase in the degree of autonomy in some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in philosophical literature (...)
  • On the indignity of killer robots. Garry Young - 2021 - Ethics and Information Technology 23 (3):473-482.
    Recent discussion on the ethics of killer robots has focused on the supposed lack of respect their deployment would show to combatants targeted, thereby causing their undignified deaths. I present two rebuttals of this argument. The weak rebuttal maintains that while deploying killer robots is an affront to the dignity of combatants, their use should nevertheless be thought of as a pro tanto wrong, making deployment permissible if the affront is outweighed by some right-making feature. This rebuttal is, however, vulnerable (...)
  • Just research into killer robots. Patrick Taylor Smith - 2019 - Ethics and Information Technology 21 (4):281-293.
    This paper argues that it is permissible for computer scientists and engineers—working with advanced militaries that are making good faith efforts to follow the laws of war—to engage in the research and development of lethal autonomous weapons systems. Research and development into a new weapons system is permissible if and only if the new weapons system can plausibly generate a superior risk profile for all morally relevant classes and it is not intrinsically wrong. The paper then suggests that these conditions (...)
  • Operations of power in autonomous weapon systems: ethical conditions and socio-political prospects. Nik Hynek & Anzhelika Solovyeva - 2021 - AI and Society 36 (1):79-99.
    The purpose of this article is to provide a multi-perspective examination of one of the most important contemporary security issues: weaponized, and especially lethal, artificial intelligence. This technology is increasingly associated with the approaching dramatic change in the nature of warfare. What becomes particularly important and evermore intensely contested is how it becomes embedded with and concurrently impacts two social structures: ethics and law. While there has not been a global regime banning this technology, regulatory attempts at establishing a ban (...)