  • The ethical requirement of explainability for AI-DSS in healthcare: a systematic review of reasons. Nils Freyer, Dominik Groß & Myriam Lipprandt - 2024 - BMC Medical Ethics 25 (1):1-11.
    Background Despite continuous performance improvements, especially in clinical contexts, a major challenge of Artificial Intelligence based Decision Support Systems (AI-DSS) remains their degree of epistemic opacity. The conditions of and the solutions for the justified use of the occasionally unexplainable technology in healthcare are an active field of research. In March 2024, the European Union agreed upon the Artificial Intelligence Act (AIA), requiring medical AI-DSS to be ad-hoc explainable or to use post-hoc explainability methods. The ethical debate does not seem (...)
  • Take five? A coherentist argument why medical AI does not require a new ethical principle. Seppe Segers & Michiel De Proost - 2024 - Theoretical Medicine and Bioethics 45 (5):387-400.
    With the growing application of machine learning models in medicine, principlist bioethics has been put forward as needing revision. This paper reflects on the dominant trope in AI ethics to include a new ‘principle of explicability’ alongside the traditional four principles of bioethics that make up the theory of principlism. It specifically suggests that these four principles are sufficient and challenges the relevance of explicability as a separate ethical principle by emphasizing the coherentist affinity of principlism. We argue that, through (...)
  • The Four Fundamental Components for Intelligibility and Interpretability in AI Ethics. Moto Kamiura - forthcoming - American Philosophical Quarterly.
    Intelligibility and interpretability related to artificial intelligence (AI) are crucial for enabling explicability, which is vital for establishing constructive communication and agreement among various stakeholders, including users and designers of AI. To facilitate effective communication and collaboration, it is essential to overcome the challenge of sharing an understanding of the structural details of diverse AI systems. In this paper, we propose four fundamental terms: “I/O,” “Constraints,” “Objectives,” and “Architecture.” These terms help mitigate the challenges associated with intelligibility (...)
  • What Are Humans Doing in the Loop? Co-Reasoning and Practical Judgment When Using Machine Learning-Driven Decision Aids. Sabine Salloch & Andreas Eriksen - 2024 - American Journal of Bioethics 24 (9):67-78.
    Within the ethical debate on Machine Learning-driven decision support systems (ML_CDSS), notions such as “human in the loop” or “meaningful human control” are often cited as being necessary for ethical legitimacy. In addition, ethical principles usually serve as the major point of reference in ethical guidance documents, stating that conflicts between principles need to be weighed and balanced against each other. Starting from a neo-Kantian viewpoint inspired by Onora O'Neill, this article makes a concrete suggestion of how to interpret the (...)
  • Clinicians’ roles and necessary levels of understanding in the use of artificial intelligence: A qualitative interview study with German medical students. F. Funer, S. Tinnemeyer, W. Liedtke & S. Salloch - 2024 - BMC Medical Ethics 25 (1):1-13.
    Background Artificial intelligence-driven Clinical Decision Support Systems (AI-CDSS) are being increasingly introduced into various domains of health care for diagnostic, prognostic, therapeutic and other purposes. A significant part of the discourse on ethically appropriate conditions relates to the levels of understanding and explicability needed to ensure responsible clinical decision-making when using AI-CDSS. Empirical evidence on stakeholders’ viewpoints on these issues is scarce so far. The present study complements the empirical-ethical body of research by, on the one hand, investigating the requirements (...)
  • Non-empirical methods for ethics research on digital technologies in medicine, health care and public health: a systematic journal review. Frank Ursin, Regina Müller, Florian Funer, Wenke Liedtke, David Renz, Svenja Wiertz & Robert Ranisch - 2024 - Medicine, Health Care and Philosophy 27 (4):513-528.
    Bioethics has developed approaches to address ethical issues in health care, similar to how technology ethics provides guidelines for ethical research on artificial intelligence, big data, and robotic applications. As these digital technologies are increasingly used in medicine, health care and public health, it is plausible that the approaches of technology ethics have influenced bioethical research. Similar to the “empirical turn” in bioethics, which led to intense debates about appropriate moral theories, ethical frameworks and meta-ethics due to the increased (...)