  • Artificial agents: responsibility & control gaps. Herman Veluwenkamp & Frank Hindriks - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Artificial agents create significant moral opportunities and challenges. Over the last two decades, discourse has largely focused on the concept of a ‘responsibility gap.’ We argue that this concept is incoherent, misguided, and diverts attention from the core issue of ‘control gaps.’ Control gaps arise when there is a discrepancy between the causal control an agent exercises and the moral control it should possess or emulate. Such gaps present moral risks, often leading to harm or ethical violations. We propose a (...)
  • AI responsibility gap: not new, inevitable, unproblematic. Huzeyfe Demirtas - 2025 - Ethics and Information Technology 27 (1):1-10.
    Who is responsible for a harm caused by AI, or a machine or system that relies on artificial intelligence? Given that current AI is neither conscious nor sentient, it’s unclear that AI itself is responsible for it. But given that AI acts independently of its developer or user, it’s also unclear that the developer or user is responsible for the harm. This gives rise to the so-called responsibility gap: cases where AI causes a harm, but no one is responsible for (...)
  • What responsibility gaps are and what they should be. Herman Veluwenkamp - 2025 - Ethics and Information Technology 27 (1):1-13.
    Responsibility gaps traditionally refer to scenarios in which no one is responsible for harm caused by artificial agents, such as autonomous machines or collective agents. By carefully examining the different ways this concept has been defined in the social ontology and ethics of technology literature, I argue that our current concept of responsibility gaps is defective. To address this conceptual flaw, I argue that the concept of responsibility gaps should be revised by distinguishing it into two more precise concepts: epistemic (...)
  • Design for operator contestability: control over autonomous systems by introducing defeaters. Herman Veluwenkamp & Stefan Buijsman - 2025 - AI and Ethics 1.
    This paper introduces the concept of Operator Contestability in AI systems: the principle that those overseeing AI systems (operators) must have the necessary control to be accountable for the decisions made by these algorithms. We argue that designers have a duty to ensure operator contestability. We demonstrate how this duty can be fulfilled by applying the 'Design for Defeaters' framework, which provides strategies to embed tools within AI systems that enable operators to challenge decisions. Defeaters are designed to contest either the (...)
  • A way forward for responsibility in the age of AI. Dane Leigh Gogoshin - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy:1-34.
    Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what (...)
  • From liability gaps to liability overlaps: shared responsibilities and fiduciary duties in AI and other complex technologies. Bart Custers, Henning Lahmann & Benjamyn I. Scott - forthcoming - AI and Society:1-16.
    Complex technologies such as Artificial Intelligence (AI) can cause harm, raising the question of who is liable for the harm caused. Research has identified multiple liability gaps (i.e., unsatisfactory outcomes when applying existing liability rules) in legal frameworks. In this paper, the concepts of shared responsibilities and fiduciary duties are explored as avenues to address liability gaps. The development, deployment and use of complex technologies are not clearly distinguishable stages, as often suggested, but are processes of cooperation and co-creation. At (...)
  • Two Reasons for Subjecting Medical AI Systems to Lower Standards than Humans. Jakob Mainz, Jens Christian Bjerring & Lauritz Munch - 2023 - ACM Proceedings of Fairness, Accountability, and Transparency (FAccT) 2023 1 (1):44-49.
    This paper concerns the double standard debate in the ethics of AI literature. This debate essentially revolves around the question of whether we should subject AI systems to different normative standards than humans. So far, the debate has centered around the desideratum of transparency. That is, the debate has focused on whether AI systems must be more transparent than humans in their decision-making processes in order for it to be morally permissible to use such systems. Some have argued that the (...)