  • Black-box assisted medical decisions: AI power vs. ethical physician care. Berman Chan - 2023 - Medicine, Health Care and Philosophy 26 (3):285-292.
    Without doctors being able to explain medical decisions to patients, I argue their use of black box AIs would erode the effective and respectful care they provide patients. In addition, I argue that physicians should use AI black boxes only for patients in dire straits, or when physicians use AI as a “co-pilot” (analogous to a spellchecker) but can independently confirm its accuracy. I respond to A.J. London’s objection that physicians already prescribe some drugs without knowing why they work.
    (7 citations)
  • The virtues of interpretable medical AI. Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):323-332.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of interpretability (...)
    (4 citations)
  • “I’m afraid I can’t let you do that, Doctor”: meaningful disagreements with AI in medical contexts. Hendrik Kempt, Jan-Christoph Heilinger & Saskia K. Nagel - forthcoming - AI and Society:1-8.
    This paper explores the role and resolution of disagreements between physicians and their diagnostic AI-based decision support systems. With an ever-growing number of applications for these independently operating diagnostic tools, it becomes less and less clear what a physician ought to do in case their diagnosis is in faultless conflict with the results of the DSS. The consequences of such uncertainty can ultimately lead to effects detrimental to the intended purpose of such machines, e.g. by shifting the burden of proof (...)
    (7 citations)
  • Zombies in the Loop? Humans Trust Untrustworthy AI-Advisors for Ethical Decisions. Sebastian Krügel, Andreas Ostermaier & Matthias Uhl - 2022 - Philosophy and Technology 35 (1):1-37.
    Departing from the claim that AI needs to be trustworthy, we find that ethical advice from an AI-powered algorithm is trusted even when its users know nothing about its training data and when they learn information about it that warrants distrust. We conducted online experiments where the subjects took the role of decision-makers who received advice from an algorithm on how to deal with an ethical dilemma. We manipulated the information about the algorithm and studied its influence. Our findings suggest (...)
    (4 citations)
  • A riddle, wrapped in a mystery, inside an enigma: How semantic black boxes and opaque artificial intelligence confuse medical decision‐making. Robin Pierce, Sigrid Sterckx & Wim Van Biesen - 2021 - Bioethics 36 (2):113-120.
    The use of artificial intelligence (AI) in healthcare comes with opportunities but also numerous challenges. A specific challenge that remains underexplored is the lack of clear and distinct definitions of the concepts used in and/or produced by these algorithms, and how their real-world meaning is translated into machine language and, vice versa, how their output is understood by the end user. This “semantic” black box adds to the “mathematical” black box present in many AI systems in which the underlying (...)
    (2 citations)
  • Design publicity of black box algorithms: a support to the epistemic and ethical justifications of medical AI systems. Andrea Ferrario - 2022 - Journal of Medical Ethics 48 (7):492-494.
    In their article ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’, Durán and Jongsma discuss the epistemic and ethical challenges raised by black box algorithms in medical practice. The opacity of black box algorithms is an obstacle to the trustworthiness of their outcomes. Moreover, the use of opaque algorithms is not normatively justified in medical practice. The authors introduce a formalism, called computational reliabilism, which allows generating justified beliefs on the (...)
    (2 citations)
  • Limits of trust in medical AI. Joshua James Hatherley - 2020 - Journal of Medical Ethics 46 (7):478-481.
    Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI (...)
    (29 citations)
  • In AI we trust? Perceptions about automated decision-making by artificial intelligence. Theo Araujo, Natali Helberger, Sanne Kruikemeier & Claes H. de Vreese - 2020 - AI and Society 35 (3):611-623.
    Fueled by ever-growing amounts of (digital) data and advances in artificial intelligence, decision-making in contemporary societies is increasingly delegated to automated processes. Drawing from social science theories and from the emerging body of research about algorithmic appreciation and algorithmic perceptions, the current study explores the extent to which personal characteristics can be linked to perceptions of automated decision-making by AI, and the boundary conditions of these perceptions, namely the extent to which such perceptions differ across media, (public) health, and judicial (...)
    (48 citations)
  • Society-in-the-loop: programming the algorithmic social contract. Iyad Rahwan - 2018 - Ethics and Information Technology 20 (1):5-14.
    Recent rapid advances in Artificial Intelligence (AI) and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines. To (...)
    (61 citations)
  • Correction to: Excavating AI: the politics of images in machine learning training sets. Kate Crawford & Trevor Paglen - 2021 - AI and Society 36 (4):1399-1399.
    (3 citations)
  • When Does Physician Use of AI Increase Liability? Kevin Tobia, Aileen Nielsen & Alexander Stremitzer - 2021 - Journal of Nuclear Medicine 62.
    An increasing number of automated and artificially intelligent (AI) systems make medical treatment recommendations, including “personalized” recommendations, which can deviate from standard care. Legal scholars argue that following such nonstandard treatment recommendations will increase liability in medical malpractice, undermining the use of potentially beneficial medical AI. However, such liability depends in part on lay judgments by jurors: When physicians use AI systems, in which circumstances would jurors hold physicians liable? To determine potential jurors’ judgments of liability, we conducted an online (...)
    (1 citation)
  • Keeping the “Human in the Loop” in the Age of Artificial Intelligence. Fabrice Jotterand & Clara Bosco - 2020 - Science and Engineering Ethics 26 (5):2455-2460.
    The benefits of Artificial Intelligence (AI) in medicine are unquestionable and it is unlikely that the pace of its development will slow down. From better diagnosis, prognosis, and prevention to more precise surgical procedures, AI has the potential to offer unique opportunities to enhance patient care and improve clinical practice overall. However, at this stage of AI technology development it is unclear whether it will de-humanize or re-humanize medicine. Will AI allow clinicians to spend less time on administrative tasks and (...)
    (4 citations)
  • Beyond ideals: why the (medical) AI industry needs to motivate behavioural change in line with fairness and transparency values, and how it can do it. Alice Liefgreen, Netta Weinstein, Sandra Wachter & Brent Mittelstadt - 2024 - AI and Society 39 (5):2183-2199.
    Artificial intelligence (AI) is increasingly relied upon by clinicians for making diagnostic and treatment decisions, playing an important role in imaging, diagnosis, risk analysis, lifestyle monitoring, and health information management. While research has identified biases in healthcare AI systems and proposed technical solutions to address these, we argue that effective solutions require human engagement. Furthermore, there is a lack of research on how to motivate the adoption of these solutions and promote investment in designing AI systems that align with values (...)
    (1 citation)
  • Trustworthy medical AI systems need to know when they don’t know. Thomas Grote - forthcoming - Journal of Medical Ethics.
    There is much to learn from Durán and Jongsma’s paper.1 One particularly important insight concerns the relationship between epistemology and ethics in medical artificial intelligence. In clinical environments, the task of AI systems is to provide risk estimates or diagnostic decisions, which then need to be weighed by physicians. Hence, while the implementation of AI systems might give rise to ethical issues—for example, overtreatment, defensive medicine or paternalism2—the issue that lies at the heart is an epistemic problem: how can physicians (...)
    (5 citations)
  • The Philosophy of Expertise in the Age of Medical Informatics: How Healthcare Technology is Transforming Our Understanding of Expertise and Expert Knowledge? Marcin Rządeczka - 2020 - Studies in Logic, Grammar and Rhetoric 63 (1):209-225.
    The unprecedented development of medical informatics is constantly transforming the concept of expertise in medical sciences in a way that has far-reaching consequences for both the theory of knowledge and the philosophy of informatics. Deep medicine is based on the assumption that medical diagnosis should take into account the wide array of possible health factors involved in the diagnostic process, such as not only genome analysis alone, but also the metabolome (analysis of all body metabolites important for e.g. drug-drug interactions), (...)
    (1 citation)
  • Are physicians requesting a second opinion really engaging in a reason-giving dialectic? Normative questions on the standards for second opinions and AI. Benjamin H. Lang - 2022 - Journal of Medical Ethics 48 (4):234-235.
    In their article, ‘Responsibility, Second Opinions, and Peer-Disagreement—Ethical and Epistemological Challenges of Using AI in Clinical Diagnostic Contexts,’ Kempt and Nagel argue for a ‘rule of disagreement’ for the integration of diagnostic AI in healthcare contexts. The type of AI in question is a ‘decision support system’, the purpose of which is to augment human judgement and decision-making in the clinical context by automating or supplementing parts of the cognitive labor. Under the authors’ proposal, artificial decision support systems which produce (...)
    (1 citation)
  • Keeping the organization in the loop: a socio-technical extension of human-centered artificial intelligence. Thomas Herrmann & Sabine Pfeiffer - forthcoming - AI and Society:1-20.
    The human-centered AI approach posits a future in which the work done by humans and machines will become ever more interactive and integrated. This article takes human-centered AI one step further. It argues that the integration of human and machine intelligence is achievable only if human organizations—not just individual human workers—are kept “in the loop.” We support this argument with evidence from two case studies in the area of predictive maintenance, by which we show how organizational practices are needed and (...)
    (3 citations)
  • The AI doctor will see you now: assessing the framing of AI in news coverage. Mercedes Bunz & Marco Braghieri - 2022 - AI and Society 37 (1):9-22.
    One of the sectors for which Artificial Intelligence applications have been considered exceptionally promising is the healthcare sector. As a public-facing sector, the introduction of AI applications has been subject to extended news coverage. This article conducts a quantitative and qualitative data analysis of English news media articles covering AI systems that allow the automation of tasks that so far needed to be done by a medical expert, such as a doctor or a nurse, thereby redistributing their agency. We (...)
    (6 citations)