  • What is Interpretability? Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2021 - Philosophy and Technology 34:833–862.
    We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: Are networks explainable, and if so, what does it mean to explain the output of a network? And what does it mean for a network to be interpretable? We argue that accounts of “explanation” tailored specifically to neural networks have ineffectively reinvented the wheel. In (...)
  • Concordance as evidence in the Watson for Oncology decision-support system. Aaro Tupasela & Ezio Di Nucci - 2020 - AI and Society 35 (4):811-818.
    Machine learning platforms have emerged as a new promissory technology that some argue will revolutionize work practices across a broad range of professions, including medical care. During the past few years, IBM has been testing its Watson for Oncology platform at several oncology departments around the world. Published reports, news stories, as well as our own empirical research show that in some cases, the levels of concordance over recommended treatment protocols between the platform and human oncologists have been quite low. (...)
  • Coming to Terms with the Black Box Problem: How to Justify AI Systems in Health Care. Ryan Marshall Felder - 2021 - Hastings Center Report 51 (4):38-45.
    The use of opaque, uninterpretable artificial intelligence systems in health care can be medically beneficial, but it is often viewed as potentially morally problematic on account of this opacity—because the systems are black boxes. Alex John London has recently argued that opacity is not generally problematic, given that many standard therapies are explanatorily opaque and that we can rely on statistical validation of the systems in deciding whether to implement them. But is statistical validation sufficient to justify implementation of these (...)
  • Primer on an ethics of AI-based decision support systems in the clinic. Matthias Braun, Patrik Hummel, Susanne Beck & Peter Dabrock - 2021 - Journal of Medical Ethics 47 (12):3-3.
    Making good decisions in extremely complex and difficult processes and situations has always been both a key task and a challenge in the clinic, and has led to a large number of clinical, legal and ethical routines, protocols and reflections intended to guarantee fair, participatory and up-to-date pathways for clinical decision-making. Nevertheless, the complexity of processes and physical phenomena, time and economic constraints, and not least further endeavours and achievements in medicine and healthcare (...)
  • Watson, autonomy and value flexibility: revisiting the debate. Jasper Debrabander & Heidi Mertes - 2022 - Journal of Medical Ethics 48 (12):1043-1047.
    Many ethical concerns have been voiced about Clinical Decision Support Systems (CDSSs). Special attention has been paid to the effect of CDSSs on autonomy, responsibility, fairness and transparency. This journal has featured a discussion between Rosalind McDougall and Ezio Di Nucci that focused on the impact of IBM’s Watson for Oncology (Watson) on autonomy. The present article elaborates on this discussion in three ways. First, using Jonathan Pugh’s account of rational autonomy we show that how Watson presents its results might (...)