  • Take five? A coherentist argument why medical AI does not require a new ethical principle.Seppe Segers & Michiel De Proost - 2024 - Theoretical Medicine and Bioethics 45 (5):387-400.
    With the growing application of machine learning models in medicine, principlist bioethics has been put forward as needing revision. This paper reflects on the dominant trope in AI ethics to include a new ‘principle of explicability’ alongside the traditional four principles of bioethics that make up the theory of principlism. It specifically suggests that these four principles are sufficient and challenges the relevance of explicability as a separate ethical principle by emphasizing the coherentist affinity of principlism. We argue that, through (...)
  • Defending explicability as a principle for the ethics of artificial intelligence in medicine.Jonathan Adams - 2023 - Medicine, Health Care and Philosophy 26 (4):615-623.
    The difficulty of explaining the outputs of artificial intelligence (AI) models and what has led to them is a notorious ethical problem wherever these technologies are applied, including in the medical domain, and one that has no obvious solution. This paper examines the proposal, made by Luciano Floridi and colleagues, to include a new ‘principle of explicability’ alongside the traditional four principles of bioethics that make up the theory of ‘principlism’. It specifically responds to a recent set of criticisms that (...)
  • The ethical requirement of explainability for AI-DSS in healthcare: a systematic review of reasons.Nils Freyer, Dominik Groß & Myriam Lipprandt - 2024 - BMC Medical Ethics 25 (1):1-11.
    Background Despite continuous performance improvements, especially in clinical contexts, a major challenge of Artificial Intelligence based Decision Support Systems (AI-DSS) remains their degree of epistemic opacity. The conditions of and the solutions for the justified use of the occasionally unexplainable technology in healthcare are an active field of research. In March 2024, the European Union agreed upon the Artificial Intelligence Act (AIA), requiring medical AI-DSS to be ad-hoc explainable or to use post-hoc explainability methods. The ethical debate does not seem (...)
  • Clinicians’ roles and necessary levels of understanding in the use of artificial intelligence: A qualitative interview study with German medical students.F. Funer, S. Tinnemeyer, W. Liedtke & S. Salloch - 2024 - BMC Medical Ethics 25 (1):1-13.
    Background Artificial intelligence-driven Clinical Decision Support Systems (AI-CDSS) are being increasingly introduced into various domains of health care for diagnostic, prognostic, therapeutic and other purposes. A significant part of the discourse on ethically appropriate conditions relates to the levels of understanding and explicability needed for ensuring responsible clinical decision-making when using AI-CDSS. Empirical evidence on stakeholders’ viewpoints on these issues is scarce so far. The present study complements the empirical-ethical body of research by, on the one hand, investigating the requirements (...)
  • Percentages and reasons: AI explainability and ultimate human responsibility within the medical field.Eva Winkler, Andreas Wabro & Markus Herrmann - 2024 - Ethics and Information Technology 26 (2):1-10.
    With regard to current debates on the ethical implementation of AI, especially two demands are linked: the call for explainability and for ultimate human responsibility. In the medical field, both are condensed into the role of one person: It is the physician to whom AI output should be explainable and who should thus bear ultimate responsibility for diagnostic or treatment decisions that are based on such AI output. In this article, we argue that a black box AI indeed creates a (...)
  • The Four Fundamental Components for Intelligibility and Interpretability in AI Ethics.Moto Kamiura - forthcoming - American Philosophical Quarterly.
    Intelligibility and interpretability related to artificial intelligence (AI) are crucial for enabling explicability, which is vital for establishing constructive communication and agreement among various stakeholders, including users and designers of AI. It is essential to overcome the challenges of sharing an understanding of the details of the various structures of diverse AI systems, to facilitate effective communication and collaboration. In this paper, we propose four fundamental terms: “I/O,” “Constraints,” “Objectives,” and “Architecture.” These terms help mitigate the challenges associated with intelligibility (...)
  • Between academic standards and wild innovation: assessing big data and artificial intelligence projects in research ethics committees.Andreas Brenneis, Petra Gehring & Annegret Lamadé - forthcoming - Ethik in der Medizin:1-19.
    Definition of the problem In medicine, as well as in other disciplines, computer science expertise is becoming increasingly important. This requires a culture of interdisciplinary assessment, for which medical ethics committees are not well prepared. The use of big data and artificial intelligence (AI) methods (whether developed in-house or in the form of “tools”) poses further challenges for research ethics reviews. Arguments This paper describes the problems and suggests solving them through procedural changes. Conclusion An assessment that is interdisciplinary from (...)
  • Artificial Intelligence to support ethical decision-making for incapacitated patients: a survey among German anesthesiologists and internists.Lasse Benzinger, Jelena Epping, Frank Ursin & Sabine Salloch - 2024 - BMC Medical Ethics 25 (1):1-10.
    Background Artificial intelligence (AI) has revolutionized various healthcare domains, where AI algorithms sometimes even outperform human specialists. However, the field of clinical ethics has remained largely untouched by AI advances. This study explores the attitudes of anesthesiologists and internists towards the use of AI-driven preference prediction tools to support ethical decision-making for incapacitated patients. Methods A questionnaire was developed and pretested among medical students. The questionnaire was distributed to 200 German anesthesiologists and 200 German internists, thereby focusing on physicians who (...)