References
  • Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of (...)
  • The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Ann-Katrien Oimann - 2023 - Philosophy and Technology 36 (1):1-22.
    AI has numerous applications in various fields, including the military domain. The increase in the degree of autonomy in some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in philosophical literature (...)
  • How AI Systems Can Be Blameworthy. Hannah Altehenger, Leonhard Menges & Peter Schulte - 2024 - Philosophia (4):1-24.
    AI systems, like self-driving cars, healthcare robots, or Autonomous Weapon Systems, already play an increasingly important role in our lives and will do so to an even greater extent in the near future. This raises a fundamental philosophical question: who is morally responsible when such systems cause unjustified harm? In the paper, we argue for the admittedly surprising claim that some of these systems can themselves be morally responsible for their conduct in an important and everyday sense of the term—the (...)
  • Understanding Moral Responsibility in Automated Decision-Making: Responsibility Gaps and Strategies to Address Them. Andrea Berber & Jelena Mijić - 2024 - Theoria: Beograd 67 (3):177-192.
    This paper delves into the use of machine learning-based systems in decision-making processes and its implications for moral responsibility as traditionally defined. It focuses on the emergence of responsibility gaps and examines proposed strategies to address them. The paper aims to provide an introductory and comprehensive overview of the ongoing debate surrounding moral responsibility in automated decision-making. By thoroughly examining these issues, we seek to contribute to a deeper understanding of the implications of AI integration in society.
  • Revisiting the ought implies can dictum in light of disruptive medical innovation. Michiel De Proost & Seppe Segers - 2024 - Journal of Medical Ethics 50 (7):466-470.
    It is a dominant dictum in ethics that ‘ought implies can’ (OIC): if an agent morally ought to do an action, the agent must be capable of performing that action. Yet, with current technological developments, such as in direct-to-consumer genomics, big data analytics and wearable technologies, there may be reasons to reorient this ethical principle. It is our modest aim in this article to explore how the current wave of allegedly disruptive innovation calls for a renewed interest in this dictum. (...)
  • What responsibility gaps are and what they should be. Herman Veluwenkamp - 2025 - Ethics and Information Technology 27 (1):1-13.
    Responsibility gaps traditionally refer to scenarios in which no one is responsible for harm caused by artificial agents, such as autonomous machines or collective agents. By carefully examining the different ways this concept has been defined in the social ontology and ethics of technology literature, I argue that our current concept of responsibility gaps is defective. To address this conceptual flaw, I argue that the concept of responsibility gaps should be revised by distinguishing it into two more precise concepts: epistemic (...)
  • Uses and Abuses of AI Ethics. Lily E. Frank & Michal Klincewicz - 2024 - In David J. Gunkel, Handbook on the Ethics of Artificial Intelligence. Edward Elgar Publishing.
    In this chapter we take stock of some of the complexities of the sprawling field of AI ethics. We consider questions like "what is the proper scope of AI ethics?" and "who counts as an AI ethicist?" At the same time, we flag several potential uses and abuses of AI ethics. These include challenges for the AI ethicist, including what qualifications they should have; the proper place and extent of futuring and speculation in the field; and the dilemmas concerning how (...)
  • Correction to: The Responsibility Gap and LAWS: a Critical Mapping of the Debate. Ann-Katrien Oimann - 2023 - Philosophy and Technology 36 (1):1-2.
    AI has numerous applications in various fields, including the military domain. The increase in the degree of autonomy in some decision-making systems leads to discussions on the possible future use of lethal autonomous weapons systems (LAWS). A central issue in these discussions is the assignment of moral responsibility for some AI-based outcomes. Several authors claim that the high autonomous capability of such systems leads to a so-called “responsibility gap.” In recent years, there has been a surge in philosophical literature (...)
  • A way forward for responsibility in the age of AI. Dane Leigh Gogoshin - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy:1-34.
    Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what (...)
  • Designing responsible agents. Zacharus Gudmunsen - 2025 - Ethics and Information Technology 27 (1):1-11.
    Raul Hakli & Pekka Mäkelä (2016, 2019) make a popular assumption in machine ethics explicit by arguing that artificial agents cannot be responsible because they are designed. Designed agents, they think, are analogous to manipulated humans and therefore not meaningfully in control of their actions. Contrary to this, I argue that under all mainstream theories of responsibility, designed agents can be responsible. To do so, I identify the closest parallel discussion in the literature on responsibility and free will, which concerns (...)
  • When to Fill Responsibility Gaps: A Proposal. Michael Da Silva - forthcoming - Journal of Value Inquiry:1-26.
  • (1 other version) Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany. Markus Kneer & Markus Christen - manuscript
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (i) people manifest a considerable willingness (...)
  • From liability gaps to liability overlaps: shared responsibilities and fiduciary duties in AI and other complex technologies. Bart Custers, Henning Lahmann & Benjamyn I. Scott - forthcoming - AI and Society:1-16.
    Complex technologies such as Artificial Intelligence (AI) can cause harm, raising the question of who is liable for the harm caused. Research has identified multiple liability gaps (i.e., unsatisfactory outcomes when applying existing liability rules) in legal frameworks. In this paper, the concepts of shared responsibilities and fiduciary duties are explored as avenues to address liability gaps. The development, deployment and use of complex technologies are not clearly distinguishable stages, as often suggested, but are processes of cooperation and co-creation. At (...)
  • Responsibility Gap(s) Due to the Introduction of AI in Healthcare: An Ubuntu-Inspired Approach. Brandon Ferlito, Seppe Segers, Michiel De Proost & Heidi Mertes - 2024 - Science and Engineering Ethics 30 (4):1-14.
    Due to its enormous potential, artificial intelligence (AI) can transform healthcare on a seemingly infinite scale. However, as we continue to explore the immense potential of AI, it is vital to consider the ethical concerns associated with its development and deployment. One specific concern that has been flagged in the literature is the responsibility gap (RG) due to the introduction of AI in healthcare. When the use of an AI algorithm or system results in a negative outcome for a patient(s), (...)
  • (1 other version) Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany. Markus Kneer & Markus Christen - 2024 - Science and Engineering Ethics 30 (6):1-19.
    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: Situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, which was conducted with participants from the US, Japan and Germany. We find that (1) people manifest a considerable willingness (...)