  • Should the use of adaptive machine learning systems in medicine be classified as research? Robert Sparrow, Joshua Hatherley, Justin Oakley & Chris Bain - 2024 - American Journal of Bioethics 24 (10):58-69.
    A novel advantage of the use of machine learning (ML) systems in medicine is their potential to continue learning from new data after implementation in clinical practice. To date, considerations of the ethical questions raised by the design and use of adaptive machine learning systems in medicine have, for the most part, been confined to discussion of the so-called “update problem,” which concerns how regulators should approach systems whose performance and parameters continue to change even after they have received regulatory (...)
  • The virtues of interpretable medical AI. Joshua Hatherley, Robert Sparrow & Mark Howard - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):323-332.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of interpretability (...)
  • Accuracy and Interpretability: Struggling with the Epistemic Foundations of Machine Learning-Generated Medical Information and Their Practical Implications for the Doctor-Patient Relationship. Florian Funer - 2022 - Philosophy and Technology 35 (1):1-20.
    The initial successes in recent years in harnessing machine learning technologies to improve medical practice and benefit patients have attracted attention in a wide range of healthcare fields. In particular, this is to be achieved by providing automated decision recommendations to the treating clinician. Some of the hopes placed in such ML-based systems for healthcare, however, seem unwarranted, at least partially because of their inherent lack of transparency, even though their results seem convincing in accuracy and reliability. Skepticism arises when the physician as (...)
  • Ethics of generative AI. Hazem Zohny, John McMillan & Mike King - 2023 - Journal of Medical Ethics 49 (2):79-80.
    Artificial intelligence (AI) and its introduction into clinical pathways presents an array of ethical issues that are being discussed in the JME.1–7 The development of AI technologies that can produce text that will pass plagiarism detectors8 and that are capable of appearing to be written by a human author9 presents new issues for medical ethics. One set of worries concerns authorship and whether it will now be possible to know that an author or student in fact produced submitted (...)
  • Non-empirical methods for ethics research on digital technologies in medicine, health care and public health: a systematic journal review. Frank Ursin, Regina Müller, Florian Funer, Wenke Liedtke, David Renz, Svenja Wiertz & Robert Ranisch - 2024 - Medicine, Health Care and Philosophy 27 (4):513-528.
    Bioethics has developed approaches to address ethical issues in health care, similar to how technology ethics provides guidelines for ethical research on artificial intelligence, big data, and robotic applications. As these digital technologies are increasingly used in medicine, health care and public health, it is plausible that the approaches of technology ethics have influenced bioethical research. Similar to the “empirical turn” in bioethics, which led to intense debates about appropriate moral theories, ethical frameworks and meta-ethics due to the increased (...)
  • Machine learning, healthcare resource allocation, and patient consent. Jamie Webb - forthcoming - The New Bioethics:1-22.
    The impact of machine learning in healthcare on patient informed consent is now the subject of significant inquiry in bioethics. However, the topic has predominantly been considered in the context of black box diagnostic or treatment recommendation algorithms. The impact of machine learning involved in healthcare resource allocation on patient consent remains undertheorized. This paper will establish where patient consent is relevant in healthcare resource allocation, before exploring the impact on informed consent from the introduction of black box machine learning (...)
  • Whether Designated as Research or Not, Who Resolves Ethical Considerations Emerging with Healthcare AI? Danton Char - 2024 - American Journal of Bioethics 24 (10):93-95.