  • Artificial Intelligence in medicine: reshaping the face of medical practice. Max Tretter, David Samhammer & Peter Dabrock - 2023 - Ethik in der Medizin 36 (1):7-29.
    Background The use of Artificial Intelligence (AI) has the potential to provide relief in the challenging and often stressful clinical setting for physicians. So far, however, the actual changes in work for physicians remain a prediction for the future, including new demands on the social level of medical practice. Thus, the question of how the requirements for physicians will change due to the implementation of AI is addressed. Methods The question is approached through conceptual considerations based on the potentials that (...)
  • Explainable AI and Causal Understanding: Counterfactual Approaches Considered. Sam Baron - 2023 - Minds and Machines 33 (2):347-377.
    The counterfactual approach to explainable AI (XAI) seeks to provide understanding of AI systems through the provision of counterfactual explanations. In a recent systematic review, Chou et al. (Inform Fus 81:59–83, 2022) argue that the counterfactual approach does not clearly provide causal understanding. They diagnose the problem in terms of the underlying framework within which the counterfactual approach has been developed. To date, the counterfactual approach has not been developed in concert with the approach for specifying causes developed by Pearl (...)
  • Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach? Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
  • “Just” accuracy? Procedural fairness demands explainability in AI-based medical resource allocation. Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because it helps (...)
  • Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine. Christopher Poppe & Georg Starke - 2022 - Ethics and Information Technology 24 (3):1-10.
    Assistive systems based on Artificial Intelligence (AI) are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems understandable, rather (...)
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • Spatial relation learning for explainable image classification and annotation in critical applications. Régis Pierrard, Jean-Philippe Poli & Céline Hudelot - 2021 - Artificial Intelligence 292 (C):103434.
  • Evaluating XAI: A comparison of rule-based and example-based explanations. Jasper van der Waa, Elisabeth Nieuwburg, Anita Cremers & Mark Neerincx - 2021 - Artificial Intelligence 291 (C):103404.
  • AI, Radical Ignorance, and the Institutional Approach to Consent. Etye Steinberg - 2024 - Philosophy and Technology 37 (3):1-26.
    More and more, we face AI-based products and services. Using these services often requires our explicit consent, e.g., by agreeing to the services’ Terms and Conditions clause. Current advances introduce the ability of AI to evolve and change its own modus operandi over time in such a way that we cannot know, at the moment of consent, what it is in the future to which we are now agreeing. Therefore, informed consent is impossible regarding certain kinds of AI. Call this (...)
  • Dissecting scientific explanation in AI (sXAI): A case for medicine and healthcare. Juan M. Durán - 2021 - Artificial Intelligence 297 (C):103498.
  • Embedding deep networks into visual explanations. Zhongang Qi, Saeed Khorram & Li Fuxin - 2021 - Artificial Intelligence 292:103435.
  • Kandinsky Patterns. Heimo Müller & Andreas Holzinger - 2021 - Artificial Intelligence 300 (C):103546.
  • The Psychosocial Fuzziness of Fear in the Coronavirus (COVID-19) Era and the Role of Robots. Antonella Marchetti, Cinzia Di Dio, Davide Massaro & Federico Manzi - 2020 - Frontiers in Psychology 11.