  1. Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach? Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem: The umbrella term "explicability" refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
  2. The Deception of Certainty: How Non-Interpretable Machine Learning Outcomes Challenge the Epistemic Authority of Physicians. A Deliberative-Relational Approach. Florian Funer - 2022 - Medicine, Health Care and Philosophy 25 (2):167-178.
    Developments in Machine Learning (ML) have attracted attention in a wide range of healthcare fields as a means to improve medical practice and benefit patients. In particular, this is to be achieved by providing more or less automated decision recommendations to the treating physician. However, some hopes placed in ML for healthcare seem to be disappointed, at least in part, by a lack of transparency or traceability. Skepticism stems primarily from the fact that the physician, as the person responsible for diagnosis, therapy, and (...)
  3. Clinicians’ roles and necessary levels of understanding in the use of artificial intelligence: A qualitative interview study with German medical students. F. Funer, S. Tinnemeyer, W. Liedtke & S. Salloch - 2024 - BMC Medical Ethics 25 (1):1-13.
    Background: Artificial intelligence-driven Clinical Decision Support Systems (AI-CDSS) are being increasingly introduced into various domains of health care for diagnostic, prognostic, therapeutic and other purposes. A significant part of the discourse on ethically appropriate conditions relates to the levels of understanding and explicability needed for ensuring responsible clinical decision-making when using AI-CDSS. Empirical evidence on stakeholders’ viewpoints on these issues is scarce so far. The present study complements the empirical-ethical body of research by, on the one hand, investigating the requirements (...)