Citations
  • Three Problems with Big Data and Artificial Intelligence in Medicine. Benjamin Chin-Yee & Ross Upshur - 2019 - Perspectives in Biology and Medicine 62 (2):237-256.
    We live in the Age of Big Data. In medicine, artificial intelligence and machine learning algorithms, fueled by big data, promise to change how physicians make diagnoses, determine prognoses, and develop new treatments. An exponential rise in articles on these topics is seen in the medical literature. Recent applications range from the use of deep learning neural networks to diagnose diabetic retinopathy and skin cancer from image databases, to the use of various machine learning algorithms for prognostication in cancer and (...)
  • Artificial Intelligence and Patient-Centered Decision-Making. Jens Christian Bjerring & Jacob Busch - 2020 - Philosophy and Technology 34 (2):349-371.
    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, (...)
  • Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent? Jana Sedlakova & Manuel Trachsel - 2022 - American Journal of Bioethics 23 (5):4-13.
    Conversational artificial intelligence (CAI) presents many opportunities in the psychotherapeutic landscape—such as therapeutic support for people with mental health problems and without access to care. The adoption of CAI poses many risks that need in-depth ethical scrutiny. The objective of this paper is to complement current research on the ethics of AI for mental health by proposing a holistic, ethical, and epistemic analysis of CAI adoption. First, we focus on the question of whether CAI is rather a tool or an (...)
  • Epistemic Humility and Medical Practice: Translating Epistemic Categories into Ethical Obligations. A. Schwab - 2012 - Journal of Medicine and Philosophy 37 (1):28-48.
    Physicians and other medical practitioners make untold numbers of judgments about patient care on a daily, weekly, and monthly basis. These judgments fall along a number of spectrums, from the mundane to the tragic, from the obvious to the challenging. Under the rubric of evidence-based medicine, these judgments will be informed by the robust conclusions of medical research. In the ideal circumstance, medical research makes the best decision obvious to the trained professional. Even when practice approximates this ideal, it does (...)
  • Four Responsibility Gaps with Artificial Intelligence: Why They Matter and How to Address Them. Filippo Santoni de Sio & Giulio Mecacci - 2021 - Philosophy and Technology 34 (4):1057-1084.
    The notion of “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy, and ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected (...)
  • Believing in Black Boxes: Must Machine Learning in Healthcare Be Explainable to Be Evidence-Based? Liam McCoy, Connor Brenna, Stacy Chen, Karina Vold & Sunit Das - forthcoming - Journal of Clinical Epidemiology.
    Objective: To examine the role of explainability in machine learning for healthcare (MLHC), and its necessity and significance with respect to effective and ethical MLHC application. Study Design and Setting: This commentary engages with the growing and dynamic corpus of literature on the use of MLHC and artificial intelligence (AI) in medicine, which provide the context for a focused narrative review of arguments presented in favour of and opposition to explainability in MLHC. Results: We find that concerns regarding explainability are (...)
  • A Research Ethics Framework for the Clinical Translation of Healthcare Machine Learning. Melissa D. McCradden, James A. Anderson, Elizabeth A. Stephenson, Erik Drysdale, Lauren Erdman, Anna Goldenberg & Randi Zlotnik Shaul - 2022 - American Journal of Bioethics 22 (5):8-22.
    The application of artificial intelligence and machine learning technologies in healthcare have immense potential to improve the care of patients. While there are some emerging practices surro...
  • Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability. Alex John London - 2019 - Hastings Center Report 49 (1):15-21.
    Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or (...)
  • Epistemic Injustice in Healthcare: A Philosophical Analysis. Ian James Kidd & Havi Carel - 2014 - Medicine, Health Care and Philosophy 17 (4):529-540.
    In this paper we argue that ill persons are particularly vulnerable to epistemic injustice in the sense articulated by Fricker. Ill persons are vulnerable to testimonial injustice through the presumptive attribution of characteristics like cognitive unreliability and emotional instability that downgrade the credibility of their testimonies. Ill persons are also vulnerable to hermeneutical injustice because many aspects of the experience of illness are difficult to understand and communicate and this often owes to gaps in collective hermeneutical resources. We then argue (...)
  • The Importance of Values in Evidence-Based Medicine. Michael P. Kelly, Iona Heath, Jeremy Howick & Trisha Greenhalgh - 2015 - BMC Medical Ethics 16 (1):69.
    Evidence-based medicine has always required integration of patient values with ‘best’ clinical evidence. It is widely recognized that scientific practices and discoveries, including those of EBM, are value-laden. But to date, the science of EBM has focused primarily on methods for reducing bias in the evidence, while the role of values in the different aspects of the EBM process has been almost completely ignored.
  • Trusting Experts and Epistemic Humility in Disability. Anita Ho - 2011 - International Journal of Feminist Approaches to Bioethics 4 (2):102-123.
    It is generally accepted that the therapeutic relationship between professionals and patients is one of trust. Nonetheless, some patient groups carry certain social vulnerabilities that can be exacerbated when they extend trust to health-care professionals. In exploring the epistemic and ethical implications of expert status, this paper examines how calls to trust may increase epistemic oppression and perpetuate the vulnerability of people with impairments. It critically evaluates the processes through which epistemic communities are formed or determined, and examines the institutional (...)
  • Resisting the Digital Medicine Panopticon: Toward a Bioethics of the Oppressed. Adrian Guta, Jijian Voronka & Marilou Gagnon - 2018 - American Journal of Bioethics 18 (9):62-64.
  • Does Evidence-Based Medicine Apply to Psychiatry? Mona Gupta - 2007 - Theoretical Medicine and Bioethics 28 (2):103.
    Evidence-based psychiatry (EBP) has arisen through the application of evidence-based medicine (EBM) to psychiatry. However, there may be aspects of psychiatric disorders and treatments that do not conform well to the assumptions of EBM. This paper reviews the ongoing debate about evidence-based psychiatry and investigates the applicability, to psychiatry, of two basic methodological features of EBM: prognostic homogeneity of clinical trial groups and quantification of trial outcomes. This paper argues that EBM may not be the best way to pursue psychiatric (...)
  • Machine Learning Healthcare Applications (ML-HCAs) Are No Stand-Alone Systems but Part of an Ecosystem – A Broader Ethical and Health Technology Assessment Approach is Needed. Helene Gerhards, Karsten Weber, Uta Bittner & Heiner Fangerau - 2020 - American Journal of Bioethics 20 (11):46-48.
    ML-HCAs have the potential to significantly change an entire healthcare system. It is not even necessary to presume that this will be disruptive but sufficient to assume that the mere adaptation of...
  • AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke & Effy Vayena - 2018 - Minds and Machines 28 (4):689-707.
    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other (...)