• Introduction to the Topical Collection on AI and Responsibility. Niël Conradie, Hendrik Kempt & Peter Königs - 2022 - Philosophy and Technology 35 (4):1-6.
• Machine learning in healthcare and the methodological priority of epistemology over ethics. Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
This paper develops an account of how the implementation of ML models in healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for physicians and patients. These inferential problems lay the groundwork for many of the ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
• Ethics of generative AI. Hazem Zohny, John McMillan & Mike King - 2023 - Journal of Medical Ethics 49 (2):79-80.
Artificial intelligence (AI) and its introduction into clinical pathways present an array of ethical issues that are being discussed in the JME. The development of AI technologies that can produce text which passes plagiarism detectors and appears to have been written by a human author presents new issues for medical ethics. One set of worries concerns authorship and whether it will now be possible to know that an author or student in fact produced submitted (...)
• Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach? Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
Definition of the problem: The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
• “Many roads lead to Rome and the Artificial Intelligence only shows me one road”: an interview study on physician attitudes regarding the implementation of computerised clinical decision support systems. Sigrid Sterckx, Tamara Leune, Johan Decruyenaere, Wim Van Biesen & Daan Van Cauwenberge - 2022 - BMC Medical Ethics 23 (1):1-14.
Research regarding the drivers of acceptance of clinical decision support systems (CDSS) by physicians is still rather limited. The literature that does exist, however, tends to focus on problems regarding the user-friendliness of CDSS. We have performed a thematic analysis of 24 interviews with physicians concerning specific clinical case vignettes, in order to explore their underlying opinions and attitudes regarding the introduction of CDSS in clinical practice and to allow a more in-depth analysis of the factors underlying acceptance of CDSS. We identified three (...)
• Justice and the Normative Standards of Explainability in Healthcare. Saskia K. Nagel, Nils Freyer & Hendrik Kempt - 2022 - Philosophy and Technology 35 (4):1-19.
    Providing healthcare services frequently involves cognitively demanding tasks, including diagnoses and analyses as well as complex decisions about treatments and therapy. From a global perspective, ethically significant inequalities exist between regions where the expert knowledge required for these tasks is scarce or abundant. One possible strategy to diminish such inequalities and increase healthcare opportunities in expert-scarce settings is to provide healthcare solutions involving digital technologies that do not necessarily require the presence of a human expert, e.g., in the form of (...)
• Why algorithmic speed can be more important than algorithmic accuracy. Jakob Mainz, Lauritz Munch, Jens Christian Bjerring & Sissel Godtfredsen - 2023 - Clinical Ethics 18 (2):161-164.
    Artificial Intelligence (AI) often outperforms human doctors in terms of decisional speed. For some diseases, the expected benefit of a fast but less accurate decision exceeds the benefit of a slow but more accurate one. In such cases, we argue, it is often justified to rely on a medical AI to maximise decision speed – even if the AI is less accurate than human doctors.
• AI decision-support: a dystopian future of machine paternalism? David D. Luxton - 2022 - Journal of Medical Ethics 48 (4):232-233.
Physicians and other healthcare professionals are increasingly finding ways to use artificially intelligent decision support systems in their work. IBM Watson Health, for example, is a commercially available technology that provides AI decision-support services in genomics, oncology, healthcare management and more. AI’s ability to scan massive amounts of data, detect patterns, and derive solutions from data is vastly superior to that of humans. AI technology is undeniably integral to the future of healthcare and public health, and thoughtful consideration of (...)
• Are physicians requesting a second opinion really engaging in a reason-giving dialectic? Normative questions on the standards for second opinions and AI. Benjamin H. Lang - 2022 - Journal of Medical Ethics 48 (4):234-235.
    In their article, ‘Responsibility, Second Opinions, and Peer-Disagreement—Ethical and Epistemological Challenges of Using AI in Clinical Diagnostic Contexts,’ Kempt and Nagel argue for a ‘rule of disagreement’ for the integration of diagnostic AI in healthcare contexts. The type of AI in question is a ‘decision support system’, the purpose of which is to augment human judgement and decision-making in the clinical context by automating or supplementing parts of the cognitive labor. Under the authors’ proposal, artificial decision support systems which produce (...)
• Artificial intelligence and responsibility gaps: what is the problem? Peter Königs - 2022 - Ethics and Information Technology 24 (3):1-11.
    Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty of allocating responsibility for the actions of an autonomous system, especially when the autonomous system causes harm or damage. The highly autonomous behavior of such systems, for which neither the programmer, the manufacturer, nor the operator seems to be responsible, has been suspected to generate responsibility gaps. This has been the cause of (...)
• Enabling Fairness in Healthcare Through Machine Learning. Geoff Keeling & Thomas Grote - 2022 - Ethics and Information Technology 24 (3):1-13.
    The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups exceeds their performance for patients in traditionally advantaged groups. This renders the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the permissible use of affirmative algorithms; (...)
• Agree to disagree: the symmetry of burden of proof in human–AI collaboration. Karin Rolanda Jongsma & Martin Sand - 2022 - Journal of Medical Ethics 48 (4):230-231.
In their paper ‘Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts’, Kempt and Nagel discuss the use of medical AI systems and the resulting need for second opinions by human physicians when physicians and AI disagree, which they call the rule of disagreement (RoD). The authors defend RoD based on three premises: first, they argue that in cases of disagreement in medical practice, there is an increased burden of proof for the physician in (...)
• Responsibility and decision-making authority in using clinical decision support systems: an empirical-ethical exploration of German prospective professionals’ preferences and concerns. Florian Funer, Wenke Liedtke, Sara Tinnemeyer, Andrea Diana Klausen, Diana Schneider, Helena U. Zacharias, Martin Langanke & Sabine Salloch - 2023 - Journal of Medical Ethics 50 (1):6-11.
    Machine learning-driven clinical decision support systems (ML-CDSSs) seem impressively promising for future routine and emergency care. However, reflection on their clinical implementation reveals a wide array of ethical challenges. The preferences, concerns and expectations of professional stakeholders remain largely unexplored. Empirical research, however, may help to clarify the conceptual debate and its aspects in terms of their relevance for clinical practice. This study explores, from an ethical point of view, future healthcare professionals’ attitudes to potential changes of responsibility and decision-making (...)
• ‘Can I trust my patient?’ Machine Learning support for predicting patient behaviour. Florian Funer & Sabine Salloch - 2023 - Journal of Medical Ethics 49 (8):543-544.
Giorgia Pozzi’s feature article on the risks of testimonial injustice when using automated prescription drug monitoring programmes (PDMPs) turns the spotlight on a pressing and well-known clinical problem: physicians’ challenges in predicting patient behaviour, so that treatment decisions can be made based on this information, despite any fallibility. Currently, as one possible way to improve prognostic assessments of patient behaviour, Machine Learning-driven clinical decision support systems (ML-CDSS) are being developed and deployed. To make her point, Pozzi discusses ML-CDSSs that are (...)
• When the frameworks don’t work: data protection, trust and artificial intelligence. Zoë Fritz - 2022 - Journal of Medical Ethics 48 (4):213-214.
    With new technologies come new ethical challenges. Often, we can apply previously established principles, even though it may take some time to fully understand the detail of the new technology - or the questions that arise from it. The International Commission on Radiological Protection, for example, was founded in 1928 and has based its advice on balancing the radiation exposure associated with X-rays and CT scans with the diagnostic benefits of the new investigations. They have regularly updated their advice as (...)