  • Why Command Responsibility May (not) Be a Solution to Address Responsibility Gaps in LAWS. Ann-Katrien Oimann - forthcoming - Criminal Law and Philosophy:1-27.
    The possible future use of lethal autonomous weapons systems (LAWS) and the challenges associated with assigning moral responsibility lead to several debates. Some authors argue that the highly autonomous capability of such systems may lead to a so-called responsibility gap in situations where LAWS cause serious violations of international humanitarian law. One proposed solution is the doctrine of command responsibility. Despite the doctrine’s original development to govern human interactions on the battlefield, it is worth considering whether the doctrine of command (...)
  • The Soldier’s Share: Considering Narrow Responsibility for Lethal Autonomous Weapons. Kevin Schieman - 2023 - Journal of Military Ethics 22 (3):228-245.
    Robert Sparrow (among others) claims that if an autonomous weapon were to commit a war crime, it would cause harm for which no one could reasonably be blamed. Since no one would bear responsibility for the soldier’s share of killing in such cases, he argues that they would necessarily violate the requirements of jus in bello, and should be prohibited by international law. I argue this view is mistaken and that our moral understanding of war is sufficient to determine blame (...)
  • Jus in bello Necessity, the Requirement of Minimal Force, and Autonomous Weapons Systems. Alexander Blanchard & Mariarosaria Taddeo - 2022 - Journal of Military Ethics 21 (3):286-303.
    In this article we focus on the jus in bello principle of necessity for guiding the use of autonomous weapons systems (AWS). We begin our analysis with an account of the principle of necessity as entailing the requirement of minimal force found in Just War Theory, before highlighting the absence of this principle in existing work on AWS. Overlooking this principle means discounting the obligations that combatants have towards one another in times of war. We argue that the requirement of (...)
  • The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Ann-Katrien Oimann - 2023 - Philosophy and Technology 36 (1):1-22.
    AI has numerous applications in various fields, including the military domain. The increase in the degree of autonomy in some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the highly autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in philosophical literature (...)
  • Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of (...)
  • Accepting Moral Responsibility for the Actions of Autonomous Weapons Systems—a Moral Gambit. Mariarosaria Taddeo & Alexander Blanchard - 2022 - Philosophy and Technology 35 (3):1-24.
    In this article, we focus on the attribution of moral responsibility for the actions of autonomous weapons systems (AWS). To do so, we suggest that the responsibility gap can be closed if human agents can take meaningful moral responsibility for the actions of AWS. This is a moral responsibility that is attributed to individuals in a justified and fair way and that individuals accept as an assessment of their own moral character. We argue that, given the unpredictability of AWS, meaningful (...)
  • Is Explainable AI Responsible AI? Isaac Taylor - forthcoming - AI and Society.
    When artificial intelligence (AI) is used to make high-stakes decisions, some worry that this will create a morally troubling responsibility gap—that is, a situation in which nobody is morally responsible for the actions and outcomes that result. Since the responsibility gap might be thought to result from individuals lacking knowledge of the future behavior of AI systems, it can be and has been suggested that deploying explainable artificial intelligence (XAI) techniques will help us to avoid it. These techniques provide humans (...)
  • Collective Responsibility and Artificial Intelligence. Isaac Taylor - 2024 - Philosophy and Technology 37 (1):1-18.
    The use of artificial intelligence (AI) to make high-stakes decisions is sometimes thought to create a troubling responsibility gap – that is, a situation where nobody can be held morally responsible for the outcomes that are brought about. However, philosophers and practitioners have recently claimed that, even though no individual can be held morally responsible, groups of individuals might be. Consequently, they think, we have less to fear from the use of AI than might appear to be the case. This (...)
  • Justice by Algorithm: The Limits of AI in Criminal Sentencing. Isaac Taylor - 2023 - Criminal Justice Ethics 42 (3):193-213.
    Criminal justice systems have traditionally relied heavily on human decision-making, but new technologies are increasingly supplementing the human role in this sector. This paper considers what general limits need to be placed on the use of algorithms in sentencing decisions. It argues that, even once we can build algorithms that equal human decision-making capacities, strict constraints need to be placed on how they are designed and developed. The act of condemnation is a valuable element of criminal sentencing, and using algorithms (...)