References

  • Explicability of artificial intelligence in radiology: Is a fifth bioethical principle conceptually necessary? Frank Ursin, Cristian Timmermann & Florian Steger - 2022 - Bioethics 36 (2):143-153.
    Recent years have witnessed intensive efforts to specify which requirements ethical artificial intelligence (AI) must meet. General guidelines for ethical AI consider a varying number of principles important. A frequent novel element in these guidelines, that we have bundled together under the term explicability, aims to reduce the black-box character of machine learning algorithms. The centrality of this element invites reflection on the conceptual relation between explicability and the four bioethical principles. This is important because the application of general ethical (...)
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • Artificial Intelligence and Black‐Box Medical Decisions: Accuracy versus Explainability. Alex John London - 2019 - Hastings Center Report 49 (1):15-21.
    Although decision‐making algorithms are not new to medicine, the availability of vast stores of medical data, gains in computing power, and breakthroughs in machine learning are accelerating the pace of their development, expanding the range of questions they can address, and increasing their predictive power. In many cases, however, the most powerful machine learning techniques purchase diagnostic or predictive accuracy at the expense of our ability to access “the knowledge within the machine.” Without an explanation in terms of reasons or (...)
  • Case-based reasoning and its implications for legal expert systems. Kevin D. Ashley - 1992 - Artificial Intelligence and Law 1 (2):113-208.
    Reasoners compare problems to prior cases to draw conclusions about a problem and guide decision making. All Case-Based Reasoning (CBR) employs some methods for generalizing from cases to support indexing and relevance assessment and evidences two basic inference methods: constraining search by tracing a solution from a past case or evaluating a case by comparing it to past cases. Across domains and tasks, however, humans reason with cases in subtly different ways evidencing different mixes of and mechanisms for these components. In (...)
  • Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach? Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
  • Two theses of knowledge representation: Language restrictions, taxonomic classification, and the utility of representation services. Jon Doyle & Ramesh S. Patil - 1991 - Artificial Intelligence 48 (3):261-297.
  • SALT: A knowledge acquisition language for propose-and-revise systems. Sandra Marcus & John McDermott - 1989 - Artificial Intelligence 39 (1):1-37.
  • Fundamental concepts of qualitative probabilistic networks. Michael P. Wellman - 1990 - Artificial Intelligence 44 (3):257-303.
  • Levels of explainable artificial intelligence for human-aligned conversational explanations. Richard Dazeley, Peter Vamplew, Cameron Foale, Charlotte Young, Sunil Aryal & Francisco Cruz - 2021 - Artificial Intelligence 299 (C):103525.
  • Reconstructive expert system explanation. Michael R. Wick & William B. Thompson - 1992 - Artificial Intelligence 54 (1-2):33-70.
  • Local and global explanations of agent behavior: Integrating strategy summaries with saliency maps. Tobias Huber, Katharina Weitz, Elisabeth André & Ofra Amir - 2021 - Artificial Intelligence 301 (C):103571.
  • Goal‐Based Explanation Evaluation. David B. Leake - 1991 - Cognitive Science 15 (4):509-545.
    Many theories of explanation evaluation are based on context‐independent criteria. Such theories either restrict their consideration to explanation towards a fixed goal, or assume that all valid explanations are equivalent, so that evaluation criteria can be neutral to the goals underlying the attempt to explain. However, explanation can serve a range of purposes that place widely divergent requirements on the information an explanation must provide. It is argued that understanding what determines explanations' goodness requires a dynamic theory of evaluation, based (...)
  • Knowledge-intensive systems in the social service agency: Anticipated impacts on the organisation. [REVIEW] William J. Ferns & Abbe Mowshowitz - 1995 - AI and Society 9 (2-3):161-183.
    Shrinking resources and the increasing complexity of clinical decisions are stimulating research in knowledge-intensive computer applications for the delivery of social services. The expected benefits of knowledge-intensive applications such as expert systems include improvement in both the quality and the consistency of service delivery, augmentation of institutional memory, and reduced labour costs through greater reliance on paraprofessionals. This paper analyses the likely impacts of knowledge-intensive systems on social service organisations, drawing on trends in related service-delivery fields, and on known impacts (...)