  • Medical AI: Is Trust Really the Issue?Jakob Thrane Mainz - forthcoming - Journal of Medical Ethics.
    I discuss an influential argument put forward by Joshua Hatherley. Drawing on influential philosophical accounts of inter-personal trust, Hatherley claims that medical Artificial Intelligence is capable of being reliable, but not trustworthy. Furthermore, Hatherley argues that trust generates moral obligations on behalf of the trustee. For instance, when a patient trusts a clinician, it generates certain moral obligations on behalf of the clinician for her to do what she is entrusted to do. I make three objections to Hatherley’s claims: (1) (...)
  • Artificial intelligence and identity: the rise of the statistical individual.Jens Christian Bjerring & Jacob Busch - forthcoming - AI and Society:1-13.
    Algorithms are used across a wide range of societal sectors such as banking, administration, and healthcare to make predictions that impact on our lives. While the predictions can be incredibly accurate about our present and future behavior, there is an important question about how these algorithms in fact represent human identity. In this paper, we explore this question and argue that machine learning algorithms represent human identity in terms of what we shall call the statistical individual. This statisticalized representation of (...)
  • The impact of artificial intelligence on jobs and work in New Zealand.James Maclaurin, Colin Gavaghan & Alistair Knott - 2021 - Wellington, New Zealand: New Zealand Law Foundation.
    Artificial Intelligence (AI) is a diverse technology. It is already having significant effects on many jobs and sectors of the economy and over the next ten to twenty years it will drive profound changes in the way New Zealanders live and work. Within the workplace AI will have three dominant effects. This report (funded by the New Zealand Law Foundation) addresses: Chapter 1 Defining the Technology of Interest; Chapter 2 The changing nature and value of work; Chapter 3 AI and (...)
  • Machine learning in healthcare and the methodological priority of epistemology over ethics.Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  • Should AI allocate livers for transplant? Public attitudes and ethical considerations.Max Drezga-Kleiminger, Joanna Demaree-Cotton, Julian Koplin, Julian Savulescu & Dominic Wilkinson - 2023 - BMC Medical Ethics 24 (1):1-11.
    Background: Allocation of scarce organs for transplantation is ethically challenging. Artificial intelligence (AI) has been proposed to assist in liver allocation; however, the ethics of this remains unexplored and the views of the public unknown. The aim of this paper was to assess public attitudes on whether AI should be used in liver allocation and how it should be implemented. Methods: We first introduce some potential ethical issues concerning AI in liver allocation, before analysing a pilot survey including online responses (...)
  • When Doctors and AI Interact: on Human Responsibility for Artificial Risks.Mario Verdicchio & Andrea Perin - 2022 - Philosophy and Technology 35 (1):1-28.
    A discussion concerning whether to conceive Artificial Intelligence systems as responsible moral entities, also known as “artificial moral agents”, has been going on for some time. In this regard, we argue that the notion of “moral agency” is to be attributed only to humans based on their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence and argue against fully automated systems in medicine. With (...)
  • Explanatory pragmatism: a context-sensitive framework for explainable medical AI.Diana Robinson & Rune Nyrup - 2022 - Ethics and Information Technology 24 (1).
    Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we (...)
  • Relative explainability and double standards in medical decision-making: Should medical AI be subjected to higher standards in medical decision-making than doctors?Saskia K. Nagel, Jan-Christoph Heilinger & Hendrik Kempt - 2022 - Ethics and Information Technology 24 (2).
    The increased presence of medical AI in clinical use raises the ethical question of which standard of explainability is required for an acceptable and responsible implementation of AI-based applications in medical contexts. In this paper, we elaborate on the emerging debate surrounding the standards of explainability for medical AI. For this, we first distinguish several goods explainability is usually considered to contribute to the use of AI in general, and medical AI in particular. Second, we propose to understand the value of (...)
  • Justice and the Normative Standards of Explainability in Healthcare.Saskia K. Nagel, Nils Freyer & Hendrik Kempt - 2022 - Philosophy and Technology 35 (4):1-19.
    Providing healthcare services frequently involves cognitively demanding tasks, including diagnoses and analyses as well as complex decisions about treatments and therapy. From a global perspective, ethically significant inequalities exist between regions where the expert knowledge required for these tasks is scarce or abundant. One possible strategy to diminish such inequalities and increase healthcare opportunities in expert-scarce settings is to provide healthcare solutions involving digital technologies that do not necessarily require the presence of a human expert, e.g., in the form of (...)
  • The value of responsibility gaps in algorithmic decision-making.Lauritz Munch, Jakob Mainz & Jens Christian Bjerring - 2023 - Ethics and Information Technology 25 (1):1-11.
    Many seem to think that AI-induced responsibility gaps are morally bad and therefore ought to be avoided. We argue, by contrast, that there is at least a pro tanto reason to welcome responsibility gaps. The central reason is that it can be bad for people to be responsible for wrongdoing. This, we argue, gives us one reason to prefer automated decision-making over human decision-making, especially in contexts where the risks of wrongdoing are high. While we are not the first to (...)
  • Evidence, ethics and the promise of artificial intelligence in psychiatry.Melissa McCradden, Katrina Hui & Daniel Z. Buchman - 2023 - Journal of Medical Ethics 49 (8):573-579.
    Researchers are studying how artificial intelligence (AI) can be used to better detect, prognosticate and subgroup diseases. The idea that AI might advance medicine’s understanding of biological categories of psychiatric disorders, as well as provide better treatments, is appealing given the historical challenges with prediction, diagnosis and treatment in psychiatry. Given the power of AI to analyse vast amounts of information, some clinicians may feel obligated to align their clinical judgements with the outputs of the AI system. However, a potential (...)
  • Why algorithmic speed can be more important than algorithmic accuracy.Jakob Mainz, Lauritz Munch, Jens Christian Bjerring & Sissel Godtfredsen - 2023 - Clinical Ethics 18 (2):161-164.
    Artificial Intelligence (AI) often outperforms human doctors in terms of decisional speed. For some diseases, the expected benefit of a fast but less accurate decision exceeds the benefit of a slow but more accurate one. In such cases, we argue, it is often justified to rely on a medical AI to maximise decision speed – even if the AI is less accurate than human doctors.
  • Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts.Hendrik Kempt & Saskia K. Nagel - 2022 - Journal of Medical Ethics 48 (4):222-229.
    In this paper, we first classify different types of second opinions and evaluate the ethical and epistemological implications of providing those in a clinical context. Second, we discuss the issue of how artificial intelligence could replace the human cognitive labour of providing such second opinions and find that several AI systems reach the levels of accuracy and efficiency needed to make their use an urgent ethical issue. Third, we outline the normative conditions of how AI may be used as second opinion (...)
  • Enabling Fairness in Healthcare Through Machine Learning.Geoff Keeling & Thomas Grote - 2022 - Ethics and Information Technology 24 (3):1-13.
    The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups exceeds their performance for patients in traditionally advantaged groups. This renders the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the permissible use of affirmative algorithms; (...)
  • Algorithmic legitimacy in clinical decision-making.Sune Holm - 2023 - Ethics and Information Technology 25 (3):1-10.
    Machine learning algorithms are expected to improve referral decisions. In this article I discuss the legitimacy of deferring referral decisions in primary care to recommendations from such algorithms. The standard justification for introducing algorithmic decision procedures to make referral decisions is that they are more accurate than the available practitioners. The improvement in accuracy will ensure more efficient use of scarce health resources and improve patient care. In this article I introduce a proceduralist framework for discussing the legitimacy of algorithmic (...)
  • On the Ethical and Epistemological Utility of Explicable AI in Medicine.Christian Herzog - 2022 - Philosophy and Technology 35 (2):1-31.
    In this article, I will argue in favor of both the ethical and epistemological utility of explanations in artificial intelligence-based medical technology. I will build on the notion of “explicability” due to Floridi, which considers both the intelligibility and accountability of AI systems to be important for truly delivering AI-powered services that strengthen autonomy, beneficence, and fairness. I maintain that explicable algorithms do, in fact, strengthen these ethical principles in medicine, e.g., in terms of direct patient–physician contact, as well (...)
  • Limits of trust in medical AI.Joshua James Hatherley - 2020 - Journal of Medical Ethics 46 (7):478-481.
    Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI (...)
  • Reflection Machines: Supporting Effective Human Oversight Over Medical Decision Support Systems.Pim Haselager, Hanna Schraffenberger, Serge Thill, Simon Fischer, Pablo Lanillos, Sebastiaan van de Groes & Miranda van Hooff - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    Human decisions are increasingly supported by decision support systems (DSS). Humans are required to remain “on the loop,” by monitoring and approving/rejecting machine recommendations. However, use of DSS can lead to overreliance on machines, reducing human oversight. This paper proposes “reflection machines” (RM) to increase meaningful human control. An RM provides a medical expert not with suggestions for a decision, but with questions that stimulate reflection about decisions. It can refer to data points or suggest counterarguments that are less compatible (...)
  • Uncertainty, Evidence, and the Integration of Machine Learning into Medical Practice.Thomas Grote & Philipp Berens - 2023 - Journal of Medicine and Philosophy 48 (1):84-97.
    In light of recent advances in machine learning for medical applications, the automation of medical diagnostics is imminent. That said, before machine learning algorithms find their way into clinical practice, various problems at the epistemic level need to be overcome. In this paper, we discuss different sources of uncertainty arising for clinicians trying to evaluate the trustworthiness of algorithmic evidence when making diagnostic judgments. Thereby, we examine many of the limitations of current machine learning algorithms (with deep learning in particular) (...)
  • Randomised controlled trials in medical AI: ethical considerations.Thomas Grote - 2022 - Journal of Medical Ethics 48 (11):899-906.
    In recent years, there has been a surge of high-profile publications on applications of artificial intelligence (AI) systems for medical diagnosis and prognosis. While AI provides various opportunities for medical practice, there is an emerging consensus that the existing studies show considerable deficits and are unable to establish the clinical benefit of AI systems. Hence, the view that the clinical benefit of AI systems needs to be studied in clinical trials—particularly randomised controlled trials (RCTs)—is gaining ground. However, an issue that (...)
  • On algorithmic fairness in medical practice.Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment, of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and fairness in healthcare. (...)
  • Allure of Simplicity.Thomas Grote - 2023 - Philosophy of Medicine 4 (1).
    This paper develops an account of the opacity problem in medical machine learning (ML). Guided by pragmatist assumptions, I argue that opacity in ML models is problematic insofar as it potentially undermines the achievement of two key purposes: ensuring generalizability and optimizing clinician–machine decision-making. Three opacity amelioration strategies are examined, with explainable artificial intelligence (XAI) as the predominant approach, challenged by two revisionary strategies in the form of reliabilism and the interpretability by design. Comparing the three strategies, I argue that (...)
  • Biased Face Recognition Technology Used by Government: A Problem for Liberal Democracy.Michael Gentzel - 2021 - Philosophy and Technology 34 (4):1639-1663.
    This paper presents a novel philosophical analysis of the problem of law enforcement’s use of biased face recognition technology in liberal democracies. FRT programs used by law enforcement in identifying crime suspects are substantially more error-prone on facial images depicting darker skin tones and females as compared to facial images depicting Caucasian males. This bias can lead to citizens being wrongfully investigated by police along racial and gender lines. The author develops and defends “A Liberal Argument Against Biased FRT,” which (...)
  • The Deception of Certainty: how Non-Interpretable Machine Learning Outcomes Challenge the Epistemic Authority of Physicians. A deliberative-relational Approach.Florian Funer - 2022 - Medicine, Health Care and Philosophy 25 (2):167-178.
    Developments in Machine Learning (ML) have attracted attention in a wide range of healthcare fields to improve medical practice and the benefit of patients. In particular, this should be achieved by providing more or less automated decision recommendations to the treating physician. However, some hopes placed in ML for healthcare seem to be disappointed, at least in part, by a lack of transparency or traceability. Skepticism stems primarily from the fact that the physician, as the person responsible for diagnosis, therapy, and (...)
  • Responsibility and decision-making authority in using clinical decision support systems: an empirical-ethical exploration of German prospective professionals’ preferences and concerns.Florian Funer, Wenke Liedtke, Sara Tinnemeyer, Andrea Diana Klausen, Diana Schneider, Helena U. Zacharias, Martin Langanke & Sabine Salloch - 2023 - Journal of Medical Ethics 50 (1):6-11.
    Machine learning-driven clinical decision support systems (ML-CDSSs) seem impressively promising for future routine and emergency care. However, reflection on their clinical implementation reveals a wide array of ethical challenges. The preferences, concerns and expectations of professional stakeholders remain largely unexplored. Empirical research, however, may help to clarify the conceptual debate and its aspects in terms of their relevance for clinical practice. This study explores, from an ethical point of view, future healthcare professionals’ attitudes to potential changes of responsibility and decision-making (...)
  • Accuracy and Interpretability: Struggling with the Epistemic Foundations of Machine Learning-Generated Medical Information and Their Practical Implications for the Doctor-Patient Relationship.Florian Funer - 2022 - Philosophy and Technology 35 (1):1-20.
    The initial successes in recent years in harnessing machine learning technologies to improve medical practice and benefit patients have attracted attention in a wide range of healthcare fields. In particular, this is to be achieved by providing automated decision recommendations to the treating clinician. Some hopes placed in such ML-based systems for healthcare, however, seem to be unwarranted, at least partially because of their inherent lack of transparency, although their results seem convincing in accuracy and reliability. Skepticism arises when the physician as (...)
  • Design publicity of black box algorithms: a support to the epistemic and ethical justifications of medical AI systems.Andrea Ferrario - 2022 - Journal of Medical Ethics 48 (7):492-494.
    In their article ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’, Durán and Jongsma discuss the epistemic and ethical challenges raised by black box algorithms in medical practice. The opacity of black box algorithms is an obstacle to the trustworthiness of their outcomes. Moreover, the use of opaque algorithms is not normatively justified in medical practice. The authors introduce a formalism, called computational reliabilism, which allows generating justified beliefs on the (...)
  • Black-box assisted medical decisions: AI power vs. ethical physician care.Berman Chan - 2023 - Medicine, Health Care and Philosophy 26 (3):285-292.
    Without doctors being able to explain medical decisions to patients, I argue their use of black box AIs would erode the effective and respectful care they provide patients. In addition, I argue that physicians should use AI black boxes only for patients in dire straits, or when physicians use AI as a “co-pilot” (analogous to a spellchecker) but can independently confirm its accuracy. I respond to A.J. London’s objection that physicians already prescribe some drugs without knowing why they work.
  • AI support for ethical decision-making around resuscitation: proceed with care.Nikola Biller-Andorno, Andrea Ferrario, Susanne Joebges, Tanja Krones, Federico Massini, Phyllis Barth, Georgios Arampatzis & Michael Krauthammer - 2022 - Journal of Medical Ethics 48 (3):175-183.
    Artificial intelligence (AI) systems are increasingly being used in healthcare, thanks to the high level of performance that these systems have proven to deliver. So far, clinical applications have focused on diagnosis and on prediction of outcomes. It is less clear in what way AI can or should support complex clinical decisions that crucially depend on patient preferences. In this paper, we focus on the ethical questions arising from the design, development and deployment of AI systems to support decision-making around (...)
  • AI as an Epistemic Technology.Ramón Alvarado - 2023 - Science and Engineering Ethics 29 (5):1-30.
    In this paper I argue that Artificial Intelligence and the many data science methods associated with it, such as machine learning and large language models, are first and foremost epistemic technologies. In order to establish this claim, I first argue that epistemic technologies can be conceptually and practically distinguished from other technologies in virtue of what they are designed for, what they do and how they do it. I then proceed to show that unlike other kinds of technology (including other (...)
  • Two Reasons for Subjecting Medical AI Systems to Lower Standards than Humans.Jakob Mainz, Jens Christian Bjerring & Lauritz Munch - 2023 - ACM Proceedings of Fairness, Accountability, and Transparency (FAccT) 2023 1 (1):44-49.
    This paper concerns the double standard debate in the ethics of AI literature. This debate essentially revolves around the question of whether we should subject AI systems to different normative standards than humans. So far, the debate has centered around the desideratum of transparency. That is, the debate has focused on whether AI systems must be more transparent than humans in their decision-making processes in order for it to be morally permissible to use such systems. Some have argued that the (...)
  • How does Responsible Research & Innovation apply to the concept of the Digital Self, in consideration of privacy, ownership and democracy?Sijmen van Schagen - unknown
    This master thesis studies to what degree Responsible Research & Innovation can be applied to the concept of the Digital Self. In order to examine this properly, it focuses on aspects of privacy, ownership and democracy. This work is inspired by the digital health domain, where a growing number of patients become enabled to benefit from AI-powered clinical decision support. The aim of this study is to provide insight into what cases can be considered for exploring new design requirements for (...)