  • What is morally at stake when using algorithms to make medical diagnoses? Expanding the discussion beyond risks and harms.Bas de Boer & Olya Kudina - 2021 - Theoretical Medicine and Bioethics 42 (5):245-266.
    In this paper, we examine the qualitative moral impact of machine learning-based clinical decision support systems in the process of medical diagnosis. To date, discussions about machine learning in this context have focused on problems that can be measured and assessed quantitatively, such as by estimating the extent of potential harm or calculating incurred risks. We maintain that such discussions neglect the qualitative moral impact of these technologies. Drawing on the philosophical approaches of technomoral change and technological mediation theory, which (...)
  • Explainability, Public Reason, and Medical Artificial Intelligence.Michael Da Silva - 2023 - Ethical Theory and Moral Practice 26 (5):743-762.
    The contention that medical artificial intelligence (AI) should be ‘explainable’ is widespread in contemporary philosophy and in legal and best practice documents. Yet critics argue that ‘explainability’ is not a stable concept; non-explainable AI is often more accurate; mechanisms intended to improve explainability do not improve understanding and introduce new epistemic concerns; and explainability requirements are ad hoc where human medical decision-making is often opaque. A recent ‘political response’ to these issues contends that AI used in high-stakes scenarios, including medical (...)
  • Black-box assisted medical decisions: AI power vs. ethical physician care.Berman Chan - 2023 - Medicine, Health Care and Philosophy 26 (3):285-292.
    Without doctors being able to explain medical decisions to patients, I argue their use of black box AIs would erode the effective and respectful care they provide patients. In addition, I argue that physicians should use AI black boxes only for patients in dire straits, or when physicians use AI as a “co-pilot” (analogous to a spellchecker) but can independently confirm its accuracy. I respond to A.J. London’s objection that physicians already prescribe some drugs without knowing why they work.
  • Applying a principle of explicability to AI research in Africa: should we do it?Mary Carman & Benjamin Rosman - 2020 - Ethics and Information Technology 23 (2):107-117.
    Developing and implementing artificial intelligence (AI) systems in an ethical manner faces several challenges specific to the kind of technology at hand, including ensuring that decision-making systems making use of machine learning are just, fair, and intelligible, and are aligned with our human values. Given that values vary across cultures, an additional ethical challenge is to ensure that these AI systems are not developed according to some unquestioned but questionable assumption of universal norms but are in fact compatible with the (...)
  • Responsible nudging for social good: new healthcare skills for AI-driven digital personal assistants.Marianna Capasso & Steven Umbrello - 2022 - Medicine, Health Care and Philosophy 25 (1):11-22.
    Traditional medical practices and relationships are changing given the widespread adoption of AI-driven technologies across the various domains of health and healthcare. In many cases, these new technologies are not specific to the field of healthcare. Still, they are existent, ubiquitous, and commercially available systems upskilled to integrate these novel care practices. Given the widespread adoption, coupled with the dramatic changes in practices, new ethical and social issues emerge due to how these systems nudge users into making decisions and changing (...)
  • The Coercive Potential of Digital Mental Health.Isobel Butorac & Adrian Carter - 2021 - American Journal of Bioethics 21 (7):28-30.
    Digital mental health can be understood as the in situ quantification of an individual’s data from personal devices to measure human behavior in both health and disease (Huckvale, Venkatesh and Chr...
  • Putting explainable AI in context: institutional explanations for medical AI.Jacob Browning & Mark Theunissen - 2022 - Ethics and Information Technology 24 (2).
    There is a current debate about if, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest the high accuracy and reliability of the systems is sufficient for providing epistemically justified beliefs without the need for explaining each individual decision. But, as we show, both solutions have limitations—and it (...)
  • AI, Opacity, and Personal Autonomy.Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a (...)
  • Artificial Intelligence and Patient-Centered Decision-Making.Jens Christian Bjerring & Jacob Busch - 2020 - Philosophy and Technology 34 (2):349-371.
    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, (...)
  • Hiring, Algorithms, and Choice: Why Interviews Still Matter.Vikram R. Bhargava & Pooria Assadi - 2024 - Business Ethics Quarterly 34 (2):201-230.
    Why do organizations conduct job interviews? The traditional view of interviewing holds that interviews are conducted, despite their steep costs, to predict a candidate’s future performance and fit. This view faces a twofold threat: the behavioral and algorithmic threats. Specifically, an overwhelming body of behavioral research suggests that we are bad at predicting performance and fit; furthermore, algorithms are already better than us at making these predictions in various domains. If the traditional view captures the whole story, then interviews seem (...)
  • Ethical concerns around privacy and data security in AI health monitoring for Parkinson’s disease: insights from patients, family members, and healthcare professionals.Itai Bavli, Anita Ho, Ravneet Mahal & Martin J. McKeown - forthcoming - AI and Society:1-11.
    Artificial intelligence (AI) technologies in medicine are gradually changing biomedical research and patient care. High expectations and promises from novel AI applications aiming to positively impact society raise new ethical considerations for patients and caregivers who use these technologies. Based on a qualitative content analysis of semi-structured interviews and focus groups with healthcare professionals (HCPs), patients, and family members of patients with Parkinson’s Disease (PD), the present study investigates participant views on the comparative benefits and problems of using human versus (...)
  • Error, Reliability and Health-Related Digital Autonomy in AI Diagnoses of Social Media Analysis.Ramón Alvarado & Nicolae Morar - 2021 - American Journal of Bioethics 21 (7):26-28.
    The rapid expansion of computational tools and of data science methods in healthcare has, undoubtedly, raised a whole new set of bioethical challenges. As Laacke and colleagues rightly note,...
  • ChatGPT’s Relevance for Bioethics: A Novel Challenge to the Intrinsically Relational, Critical, and Reason-Giving Aspect of Healthcare.Ramón Alvarado & Nicolae Morar - 2023 - American Journal of Bioethics 23 (10):71-73.
    The rapid development of large language models (LLMs) and of their associated interfaces such as ChatGPT has brought forth a wave of epistemic and moral concerns in a variety of domains of inquiry...
  • AI as an Epistemic Technology.Ramón Alvarado - 2023 - Science and Engineering Ethics 29 (5):1-30.
    In this paper I argue that Artificial Intelligence and the many data science methods associated with it, such as machine learning and large language models, are first and foremost epistemic technologies. In order to establish this claim, I first argue that epistemic technologies can be conceptually and practically distinguished from other technologies in virtue of what they are designed for, what they do and how they do it. I then proceed to show that unlike other kinds of technology (_including_ other (...)
  • Defending explicability as a principle for the ethics of artificial intelligence in medicine.Jonathan Adams - 2023 - Medicine, Health Care and Philosophy 26 (4):615-623.
    The difficulty of explaining the outputs of artificial intelligence (AI) models and what has led to them is a notorious ethical problem wherever these technologies are applied, including in the medical domain, and one that has no obvious solution. This paper examines the proposal, made by Luciano Floridi and colleagues, to include a new ‘principle of explicability’ alongside the traditional four principles of bioethics that make up the theory of ‘principlism’. It specifically responds to a recent set of criticisms that (...)
  • AI and the need for justification (to the patient).Anantharaman Muralidharan, Julian Savulescu & G. Owen Schaefer - 2024 - Ethics and Information Technology 26 (1):1-12.
    This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient’s values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided (...)
  • Algorithmic Transparency, Manipulation, and Two Concepts of Liberty.Ulrik Franke - 2024 - Philosophy and Technology 37 (1):1-6.
    As more decisions are made by automated algorithmic systems, the transparency of these systems has come under scrutiny. While such transparency is typically seen as beneficial, there is also a critical, Foucauldian account of it. From this perspective, worries have recently been articulated that algorithmic transparency can be used for manipulation, as part of a disciplinary power structure. Klenk (Philosophy & Technology 36, 79, 2023) recently argued that such manipulation should not be understood as exploitation of vulnerable victims, but (...)
  • Machine learning in healthcare and the methodological priority of epistemology over ethics.Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  • Machine learning in medicine: should the pursuit of enhanced interpretability be abandoned?Chang Ho Yoon, Robert Torrance & Naomi Scheinerman - 2022 - Journal of Medical Ethics 48 (9):581-585.
    We argue why interpretability should have primacy alongside empiricism for several reasons: first, if machine learning models are beginning to render some of the high-risk healthcare decisions instead of clinicians, these models pose a novel medicolegal and ethical frontier that is incompletely addressed by current methods of appraising medical interventions like pharmacological therapies; second, a number of judicial precedents underpinning medical liability and negligence are compromised when ‘autonomous’ ML recommendations are considered to be on par with human instruction in specific (...)
  • Assessing the communication gap between AI models and healthcare professionals: Explainability, utility and trust in AI-driven clinical decision-making.Oskar Wysocki, Jessica Katharine Davies, Markel Vigo, Anne Caroline Armstrong, Dónal Landers, Rebecca Lee & André Freitas - 2023 - Artificial Intelligence 316 (C):103839.
  • When Doctors and AI Interact: on Human Responsibility for Artificial Risks.Mario Verdicchio & Andrea Perin - 2022 - Philosophy and Technology 35 (1):1-28.
    A discussion concerning whether to conceive Artificial Intelligence systems as responsible moral entities, also known as “artificial moral agents”, has been going on for some time. In this regard, we argue that the notion of “moral agency” is to be attributed only to humans based on their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence and argue against fully automated systems in medicine. With (...)
  • Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach?Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem: The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
  • Mechanisms in clinical practice: use and justification.Mark R. Tonelli & Jon Williamson - 2020 - Medicine, Health Care and Philosophy 23 (1):115-124.
    While the importance of mechanisms in determining causality in medicine is currently the subject of active debate, the role of mechanistic reasoning in clinical practice has received far less attention. In this paper we look at this question in the context of the treatment of a particular individual, and argue that evidence of mechanisms is indeed key to various aspects of clinical practice, including assessing population-level research reports, diagnostic as well as therapeutic decision making, and the assessment of treatment effects. (...)
  • Technological Answerability and the Severance Problem: Staying Connected by Demanding Answers.Daniel W. Tigard - 2021 - Science and Engineering Ethics 27 (5):1-20.
    Artificial intelligence and robotic technologies have become nearly ubiquitous. In some ways, the developments have likely helped us, but in other ways sophisticated technologies set back our interests. Among the latter sort is what has been dubbed the ‘severance problem’—the idea that technologies sever our connection to the world, a connection which is necessary for us to flourish and live meaningful lives. I grant that the severance problem is a threat we should mitigate and I ask: how can we stave (...)
  • Explanation and the Right to Explanation.Elanor Taylor - 2023 - Journal of the American Philosophical Association 1:1-16.
    In response to widespread use of automated decision-making technology, some have considered a right to explanation. In this paper I draw on insights from philosophical work on explanation to present a series of challenges to this idea, showing that the normative motivations for access to such explanations ask for something difficult, if not impossible, to extract from automated systems. I consider an alternative, outcomes-focused approach to the normative evaluation of automated decision-making, and recommend it as a way to pursue the (...)
  • Towards a pragmatist dealing with algorithmic bias in medical machine learning.Georg Starke, Eva De Clercq & Bernice S. Elger - 2021 - Medicine, Health Care and Philosophy 24 (3):341-349.
    Machine Learning (ML) is on the rise in medicine, promising improved diagnostic, therapeutic and prognostic clinical tools. While these technological innovations are bound to transform health care, they also bring new ethical concerns to the forefront. One particularly elusive challenge regards discriminatory algorithmic judgements based on biases inherent in the training data. A common line of reasoning distinguishes between justified differential treatments that mirror true disparities between socially salient groups, and unjustified biases which do not, leading to misdiagnosis and erroneous (...)
  • “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation.Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because it helps (...)
  • AI and the expert; a blueprint for the ethical use of opaque AI.Amber Ross - forthcoming - AI and Society:1-12.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest that a (...)
  • Explanatory pragmatism: a context-sensitive framework for explainable medical AI.Diana Robinson & Rune Nyrup - 2022 - Ethics and Information Technology 24 (1).
    Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we (...)
  • Integrating Artificial Intelligence in Scientific Practice: Explicable AI as an Interface.Emanuele Ratti - 2022 - Philosophy and Technology 35 (3):1-5.
    A recent article by Herzog provides a much-needed integration of ethical and epistemological arguments in favor of explicable AI in medicine. In this short piece, I suggest a way in which its epistemological intuition of XAI as “explanatory interface” can be further developed to delineate the relation between AI tools and scientific research.
  • AI-Enhanced Healthcare: Not a new Paradigm for Informed Consent.M. Pruski - forthcoming - Journal of Bioethical Inquiry:1-15.
    With the increasing prevalence of artificial intelligence (AI) and other digital technologies in healthcare, the ethical debate surrounding their adoption is becoming more prominent. Here I consider the issue of gaining informed patient consent to AI-enhanced care from the vantage point of the United Kingdom’s National Health Service setting. I build my discussion around two claims from the World Health Organization: that healthcare services should not be denied to individuals who refuse AI-enhanced care and that there is no precedence to (...)
  • Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare.Giorgia Pozzi - 2023 - Ethics and Information Technology 25 (1):1-12.
    Artificial intelligence-based (AI) technologies such as machine learning (ML) systems are playing an increasingly relevant role in medicine and healthcare, bringing about novel ethical and epistemological issues that need to be timely addressed. Even though ethical questions connected to epistemic concerns have been at the center of the debate, it is going unnoticed how epistemic forms of injustice can be ML-induced, specifically in healthcare. I analyze the shortcomings of an ML system currently deployed in the USA to predict patients’ likelihood (...)
  • Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine.Christopher Poppe & Georg Starke - 2022 - Ethics and Information Technology 24 (3):1-10.
    Assistive systems based on Artificial Intelligence (AI) are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems understandable, rather (...)
  • Relative explainability and double standards in medical decision-making: Should medical AI be subjected to higher standards in medical decision-making than doctors?Saskia K. Nagel, Jan-Christoph Heilinger & Hendrik Kempt - 2022 - Ethics and Information Technology 24 (2).
    The increased presence of medical AI in clinical use raises the ethical question which standard of explainability is required for an acceptable and responsible implementation of AI-based applications in medical contexts. In this paper, we elaborate on the emerging debate surrounding the standards of explainability for medical AI. For this, we first distinguish several goods explainability is usually considered to contribute to the use of AI in general, and medical AI in specific. Second, we propose to understand the value of (...)
  • Justice and the Normative Standards of Explainability in Healthcare.Saskia K. Nagel, Nils Freyer & Hendrik Kempt - 2022 - Philosophy and Technology 35 (4):1-19.
    Providing healthcare services frequently involves cognitively demanding tasks, including diagnoses and analyses as well as complex decisions about treatments and therapy. From a global perspective, ethically significant inequalities exist between regions where the expert knowledge required for these tasks is scarce or abundant. One possible strategy to diminish such inequalities and increase healthcare opportunities in expert-scarce settings is to provide healthcare solutions involving digital technologies that do not necessarily require the presence of a human expert, e.g., in the form of (...)
  • Wrestling with Social and Behavioral Genomics: Risks, Potential Benefits, and Ethical Responsibility.Michelle N. Meyer, Paul S. Appelbaum, Daniel J. Benjamin, Shawneequa L. Callier, Nathaniel Comfort, Dalton Conley, Jeremy Freese, Nanibaa' A. Garrison, Evelynn M. Hammonds, K. Paige Harden, Sandra Soo-Jin Lee, Alicia R. Martin, Daphne Oluwaseun Martschenko, Benjamin M. Neale, Rohan H. C. Palmer, James Tabery, Eric Turkheimer, Patrick Turley & Erik Parens - 2023 - Hastings Center Report 53 (S1):2-49.
    In this consensus report by a diverse group of academics who conduct and/or are concerned about social and behavioral genomics (SBG) research, the authors recount the often‐ugly history of scientific attempts to understand the genetic contributions to human behaviors and social outcomes. They then describe what the current science—including genomewide association studies and polygenic indexes—can and cannot tell us, as well as its risks and potential benefits. They conclude with a discussion of responsible behavior in the context of SBG research. (...)
  • Evidence, ethics and the promise of artificial intelligence in psychiatry.Melissa McCradden, Katrina Hui & Daniel Z. Buchman - 2023 - Journal of Medical Ethics 49 (8):573-579.
    Researchers are studying how artificial intelligence (AI) can be used to better detect, prognosticate and subgroup diseases. The idea that AI might advance medicine’s understanding of biological categories of psychiatric disorders, as well as provide better treatments, is appealing given the historical challenges with prediction, diagnosis and treatment in psychiatry. Given the power of AI to analyse vast amounts of information, some clinicians may feel obligated to align their clinical judgements with the outputs of the AI system. However, a potential (...)
  • Rethinking explainability: toward a postphenomenology of black-box artificial intelligence in medicine.Jay R. Malone, Jordan Mason & Annie B. Friedrich - 2022 - Ethics and Information Technology 24 (1).
    In recent years, increasingly advanced artificial intelligence (AI), and in particular machine learning, has shown great promise as a tool in various healthcare contexts. Yet as machine learning in medicine has become more useful and more widely adopted, concerns have arisen about the “black-box” nature of some of these AI models, or the inability to understand—and explain—the inner workings of the technology. Some critics argue that AI algorithms must be explainable to be responsibly used in the clinical encounter, while supporters (...)
  • Ethical machine decisions and the input-selection problem.Björn Lundgren - 2021 - Synthese 199 (3-4):11423-11443.
    This article is about the role of factual uncertainty for moral decision-making as it concerns the ethics of machine decision-making. The view that is defended here is that factual uncertainties require a normative evaluation and that ethics of machine decision faces a triple-edged problem, which concerns what a machine ought to do, given its technical constraints, what decisional uncertainty is acceptable, and what trade-offs are acceptable to decrease the decisional uncertainty.
  • Machine learning applications in healthcare and the role of informed consent: Ethical and practical considerations.Giorgia Lorenzini, David Martin Shaw, Laura Arbelaez Ossa & Bernice Simone Elger - forthcoming - Clinical Ethics:147775092210944.
    Informed consent is at the core of the clinical relationship. With the introduction of machine learning in healthcare, the role of informed consent is challenged. This paper addresses the issue of whether patients must be informed about medical ML applications and asked for consent. It aims to expose the discrepancy between ethical and practical considerations, while arguing that this polarization is a false dichotomy: in reality, ethics is applied to specific contexts and situations. Bridging this gap and considering the whole (...)
  • Ethics of AI and Health Care: Towards a Substantive Human Rights Framework.S. Matthew Liao - 2023 - Topoi 42 (3):857-866.
    There is enormous interest in using artificial intelligence (AI) in health care contexts. But before AI can be used in such settings, we need to make sure that AI researchers and organizations follow appropriate ethical frameworks and guidelines when developing these technologies. In recent years, a great number of ethical frameworks for AI have been proposed. However, these frameworks have tended to be abstract and not explain what grounds and justifies their recommendations and how one should use these recommendations in (...)
  • AI models and the future of genomic research and medicine: True sons of knowledge?Harald König, Daniel Frank, Martina Baumann & Reinhard Heil - 2021 - Bioessays 43 (10):2100025.
    The increasing availability of large‐scale, complex data has made research into how human genomes determine physiology in health and disease, as well as its application to drug development and medicine, an attractive field for artificial intelligence (AI) approaches. Looking at recent developments, we explore how such approaches interconnect and may conflict with needs for and notions of causal knowledge in molecular genetics and genomic medicine. We provide reasons to suggest that—while capable of generating predictive knowledge at unprecedented pace and scale—if (...)
  • Artificial intelligence in medicine and the disclosure of risks.Maximilian Kiener - 2021 - AI and Society 36 (3):705-713.
    This paper focuses on the use of ‘black box’ AI in medicine and asks whether the physician needs to disclose to patients that even the best AI comes with the risks of cyberattacks, systematic bias, and a particular type of mismatch between AI’s implicit assumptions and an individual patient’s background situation. Pace current clinical practice, I argue that, under certain circumstances, these risks do need to be disclosed. Otherwise, the physician either vitiates a patient’s informed consent or violates a more general obligation (...)
    7 citations
  • Explanation and Agency: exploring the normative-epistemic landscape of the “Right to Explanation”.Esther Keymolen & Fleur Jongepier - 2022 - Ethics and Information Technology 24 (4):1-11.
    A large part of the explainable AI literature focuses on what explanations are in general, what algorithmic explainability is more specifically, and how to code these principles of explainability into AI systems. Much less attention has been devoted to the question of why algorithmic decisions and systems should be explainable and whether there ought to be a right to explanation and why. We therefore explore the normative landscape of the need for AI to be explainable and individuals having a right (...)
    1 citation
  • Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts.Hendrik Kempt & Saskia K. Nagel - 2022 - Journal of Medical Ethics 48 (4):222-229.
    In this paper, we first classify different types of second opinions and evaluate the ethical and epistemological implications of providing them in a clinical context. Second, we discuss how artificial intelligence could replace the human cognitive labour of providing such second opinions, and find that several AI systems reach the levels of accuracy and efficiency needed to make their use an urgent ethical issue. Third, we outline the normative conditions of how AI may be used as second opinion (...)
    15 citations
  • The ethics and epistemology of explanatory AI in medicine and healthcare.Karin Jongsma, Martin Sand & Juan M. Durán - 2022 - Ethics and Information Technology 24 (4):1-4.
    2 citations
  • Possibilities and ethical issues of entrusting nursing tasks to robots and artificial intelligence.Tomohide Ibuki, Ai Ibuki & Eisuke Nakazawa - forthcoming - Nursing Ethics.
    In recent years, research in robotics and artificial intelligence (AI) has made rapid progress. Robots and AI are expected to play a part in the field of nursing, and their role might broaden in the future. However, there are areas of nursing practice that cannot or should not be entrusted to robots and AI: nursing is a highly humane practice, and some of its practices perhaps should not be replicated by robots or AI. Therefore, (...)
  • Multi-Level Ethical Considerations of Artificial Intelligence Health Monitoring for People Living with Parkinson’s Disease.Anita Ho, Itai Bavli, Ravneet Mahal & Martin J. McKeown - forthcoming - AJOB Empirical Bioethics.
    Artificial intelligence (AI) has garnered tremendous attention in health care, and many hope that AI can enhance our health system’s ability to care for people with chronic and degenerative conditions, including Parkinson’s Disease (PD). This paper reports the themes and lessons derived from a qualitative study with people living with PD, family caregivers, and health care providers regarding the ethical dimensions of using AI to monitor, assess, and predict PD symptoms and progression. Thematic analysis identified ethical concerns at four intersecting (...)
    1 citation
  • On the Justified Use of AI Decision Support in Evidence-Based Medicine: Validity, Explainability, and Responsibility.Sune Holm - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-7.
    When is it justified to use opaque artificial intelligence (AI) output in medical decision-making? Consideration of this question is of central importance for the responsible use of opaque machine learning (ML) models, which have been shown to produce accurate and reliable diagnoses, prognoses, and treatment suggestions in medicine. In this article, I discuss the merits of two answers to the question. According to the Explanation View, clinicians must have access to an explanation of why an output was produced. According to (...)
    1 citation
  • Algorithmic legitimacy in clinical decision-making.Sune Holm - 2023 - Ethics and Information Technology 25 (3):1-10.
    Machine learning algorithms are expected to improve referral decisions. In this article I discuss the legitimacy of deferring referral decisions in primary care to recommendations from such algorithms. The standard justification for introducing algorithmic decision procedures to make referral decisions is that they are more accurate than the available practitioners. The improvement in accuracy will ensure more efficient use of scarce health resources and improve patient care. In this article I introduce a proceduralist framework for discussing the legitimacy of algorithmic (...)