  • The value of responsibility gaps in algorithmic decision-making. Lauritz Munch, Jakob Mainz & Jens Christian Bjerring - 2023 - Ethics and Information Technology 25 (1):1-11.
    Many seem to think that AI-induced responsibility gaps are morally bad and therefore ought to be avoided. We argue, by contrast, that there is at least a pro tanto reason to welcome responsibility gaps. The central reason is that it can be bad for people to be responsible for wrongdoing. This, we argue, gives us one reason to prefer automated decision-making over human decision-making, especially in contexts where the risks of wrongdoing are high. While we are not the first to (...)
  • Find the Gap: AI, Responsible Agency and Vulnerability. Shannon Vallor & Tillmann Vierkant - 2024 - Minds and Machines 34 (3):1-23.
    The responsibility gap, commonly described as a core challenge for the effective governance of, and trust in, AI and autonomous systems (AI/AS), is traditionally associated with a failure of the epistemic and/or the control condition of moral responsibility: the ability to know what we are doing and exercise competent control over this doing. Yet these two conditions are a red herring when it comes to understanding the responsibility challenges presented by AI/AS, since evidence from the cognitive sciences shows that individual (...)
  • Is AI the Future of Mental Healthcare? Francesca Minerva & Alberto Giubilini - 2023 - Topoi 42 (3):809-817.
  • How AI Systems Can Be Blameworthy. Hannah Altehenger, Leonhard Menges & Peter Schulte - 2024 - Philosophia (4):1-24.
    AI systems, like self-driving cars, healthcare robots, or Autonomous Weapon Systems, already play an increasingly important role in our lives and will do so to an even greater extent in the near future. This raises a fundamental philosophical question: who is morally responsible when such systems cause unjustified harm? In the paper, we argue for the admittedly surprising claim that some of these systems can themselves be morally responsible for their conduct in an important and everyday sense of the term—the (...)
  • AI responsibility gap: not new, inevitable, unproblematic. Huzeyfe Demirtas - 2025 - Ethics and Information Technology 27 (1):1-10.
    Who is responsible for a harm caused by AI, or a machine or system that relies on artificial intelligence? Given that current AI is neither conscious nor sentient, it’s unclear that AI itself is responsible for it. But given that AI acts independently of its developer or user, it’s also unclear that the developer or user is responsible for the harm. This gives rise to the so-called responsibility gap: cases where AI causes a harm, but no one is responsible for (...)
  • When to Fill Responsibility Gaps: A Proposal. Michael Da Silva - forthcoming - Journal of Value Inquiry:1-26.
  • A Comparative Defense of Self-initiated Prospective Moral Answerability for Autonomous Robot Harm. Marc Champagne & Ryan Tonkens - 2023 - Science and Engineering Ethics 29 (4):1-26.
    As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke’s (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate—and defend—what is known as the “blank check” proposal. According to this proposal, a person activating a robot could (...)
  • Engineers on responsibility: feminist approaches to who’s responsible for ethical AI. Eleanor Drage, Kerry McInerney & Jude Browne - 2024 - Ethics and Information Technology 26 (1):1-13.
    Responsibility has become a central concept in AI ethics; however, little research has been conducted into practitioners’ personal understandings of responsibility in the context of AI, including how responsibility should be defined and who is responsible when something goes wrong. In this article, we present findings from a 2020–2021 data set of interviews with AI practitioners and tech workers at a single multinational technology company and interpret them through the lens of feminist political thought. We reimagine responsibility in the context (...)
  • Owning Decisions: AI Decision-Support and the Attributability-Gap. Jannik Zeiser - 2024 - Science and Engineering Ethics 30 (4):1-19.
    Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine’s behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today’s AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make (...)
  • The impact of digital health technologies on moral responsibility: a scoping review. E. Meier, T. Rigter, M. P. Schijven, M. van den Hoven & M. A. R. Bak - forthcoming - Medicine, Health Care and Philosophy:1-15.
    Recent publications on digital health technologies highlight the importance of ‘responsible’ use. References to the concept of responsibility are, however, frequently made without providing clear definitions of responsibility, thus leaving room for ambiguities. Addressing these uncertainties is critical since they might lead to misunderstandings, impacting the quality and safety of healthcare delivery. Therefore, this study investigates how responsibility is interpreted in the context of using digital health technologies, including artificial intelligence (AI), telemonitoring, wearables and mobile apps. We conducted a scoping (...)
  • Responsibility Internalism and Responsibility for AI. Huzeyfe Demirtas - 2023 - Dissertation, Syracuse University.
    I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or being apt for praise or blame) depends only on factors internal to agents. Employing this view, I also argue that no one is responsible for what AI does but this isn’t morally problematic in a way that counts against developing or using AI. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility condition (or consequences). I argue (...)