  • Notes on "Epistemology of a rule-based expert system". William J. Clancey - 1993 - Artificial Intelligence 59 (1-2):191-204.
    In the 1970s, we conceived of a rule explanation as supplying the causal and social context that justifies a rule, an objective documentation for why a rule is correct. Today we would call such descriptions post-hoc design rationales, not proving the rules' correctness, but providing a means for later interpreting why the rule was written and facilitating later improvements.
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • SOAR: An architecture for general intelligence. John E. Laird, Allen Newell & Paul S. Rosenbloom - 1987 - Artificial Intelligence 33 (1):1-64.
  • Goal-directed diagnosis—a diagnostic reasoning framework for exploratory-corrective domains. Ron Rymon - 1996 - Artificial Intelligence 84 (1-2):257-297.
  • A history of AI and Law in 50 papers: 25 years of the international conference on AI and Law. [Review] Trevor Bench-Capon, Michał Araszkiewicz, Kevin Ashley, Katie Atkinson, Floris Bex, Filipe Borges, Daniele Bourcier, Paul Bourgine, Jack G. Conrad, Enrico Francesconi, Thomas F. Gordon, Guido Governatori, Jochen L. Leidner, David D. Lewis, Ronald P. Loui, L. Thorne McCarty, Henry Prakken, Frank Schilder, Erich Schweighofer, Paul Thompson, Alex Tyrrell, Bart Verheij, Douglas N. Walton & Adam Z. Wyner - 2012 - Artificial Intelligence and Law 20 (3):215-319.
    We provide a retrospective of 25 years of the International Conference on AI and Law, which was first held in 1987. Fifty papers have been selected from the thirteen conferences and each of them is described in a short subsection individually written by one of the 24 authors. These subsections attempt to place the paper discussed in the context of the development of AI and Law, while often offering some personal reactions and reflections. As a whole, the subsections build into (...)
  • Case-based reasoning and its implications for legal expert systems. Kevin D. Ashley - 1992 - Artificial Intelligence and Law 1 (2-3):113-208.
    Reasoners compare problems to prior cases to draw conclusions about a problem and guide decision making. All Case-Based Reasoning (CBR) employs some methods for generalizing from cases to support indexing and relevance assessment and evidences two basic inference methods: constraining search by tracing a solution from a past case or evaluating a case by comparing it to past cases. Across domains and tasks, however, humans reason with cases in subtly different ways evidencing different mixes of and mechanisms for these components. In (...)
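    (A toy sketch of these two inference methods appears at the end of this list.)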
  • Model construction operators. William J. Clancey - 1992 - Artificial Intelligence 53 (1):1-115.
  • Intelligent tutoring systems. Mark Stefik - 1985 - Artificial Intelligence 26 (2):238-245.
  • Controlling recursive inference. David E. Smith, Michael R. Genesereth & Matthew L. Ginsberg - 1986 - Artificial Intelligence 30 (3):343-389.
  • AI at work: understanding its uses and consequences on work activities and organization in radiology. Tamari Gamkrelidze, Moustafa Zouinar & Flore Barcellini - forthcoming - AI and Society:1-19.
    The progressive dissemination of artificial intelligence (AI) systems in work settings raises numerous questions and concerns regarding their consequences, whether positive or negative, on work activities and organizations. This paper presents an empirical study that was designed to identify and analyze these consequences in radiology. This study focuses on two AI systems: a voice recognition dictation system for radiological reports and a system for detecting fractures on X-ray images. Based on a qualitative analysis of field observations of work activities and (...)
  • FERMI: A Flexible Expert Reasoner with Multi‐Domain Inferencing. Jill H. Larkin, Frederick Reif, Jaime Carbonell & Angela Gugliotta - 1988 - Cognitive Science 12 (1):101-138.
    Expert reasoning combines voluminous domain‐specific knowledge with more general factual and strategic knowledge. Whereas expert system builders have recognized the need for specificity and problem‐solving researchers the need for generality, few attempts have been made to develop expert reasoning engines combining different kinds of knowledge at different levels of generality. This paper reports on the FERMI project, a computer‐implemented expert reasoner in the natural sciences that encodes factual and strategic knowledge in separate semantic hierarchies. The principled decomposition of knowledge according (...)
  • Automatic knowledge base refinement for classification systems. Allen Ginsberg, Sholom M. Weiss & Peter Politakis - 1988 - Artificial Intelligence 35 (2):197-226.
  • Modeling Novice‐to‐Expert Shifts in Problem‐Solving Strategy and Knowledge Organization. Renée Elio & Peternela B. Scharf - 1990 - Cognitive Science 14 (4):579-639.
    This research presents a computer model called EUREKA that begins with novice‐like strategies and knowledge organizations for solving physics word problems and acquires features of knowledge organizations and basic approaches that characterize experts in this domain. EUREKA learns a highly interrelated network of problem‐type schemas with associated solution methodologies. Initially, superficial features of the problem statement form the basis for both the problem‐type schemas and the discriminating features that organize them in the P‐MOP (Problem Memory Organization Packet) network. As EUREKA (...)
  • Model-based reasoning about learner behaviour. Kees de Koning, Bert Bredeweg, Joost Breuker & Bob Wielinga - 2000 - Artificial Intelligence 117 (2):173-229.
  • Heuristic classification. William J. Clancey - 1985 - Artificial Intelligence 27 (3):289-350.
  • An explanation space to align user studies with the technical development of Explainable AI. Garrick Cabour, Andrés Morales-Forero, Élise Ledoux & Samuel Bassetto - 2023 - AI and Society 38 (2):869-887.
    Providing meaningful and actionable explanations for end-users is a situated problem requiring the intersection of multiple disciplines to address social, operational, and technical challenges. However, the explainable artificial intelligence community has not commonly adopted or created tangible design tools that allow interdisciplinary work to develop reliable AI-powered solutions. This paper proposes a formative architecture that defines the explanation space from a user-inspired perspective. The architecture comprises five intertwined components to outline explanation requirements for a task: (1) the end-users’ mental models, (...)
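
The Ashley (1992) entry above names two basic CBR inference methods: constraining search by tracing a solution from a past case, and evaluating a case by comparing it to past cases. Below is a minimal, hypothetical Python sketch of the retrieve-and-compare skeleton common to both; the names, the feature-overlap similarity measure, and the toy casebase are illustrative assumptions, not Ashley's formulation.

    # Hypothetical sketch; not from Ashley (1992).
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Case:
        features: frozenset  # facts present in the precedent
        outcome: str         # decision reached in the precedent

    def similarity(problem, case):
        # Crude relevance measure: count of shared features.
        return len(problem & case.features)

    def retrieve(problem, casebase):
        # Method 1: constrain search by pulling the closest precedent.
        return max(casebase, key=lambda c: similarity(problem, c))

    def evaluate(problem, casebase):
        # Method 2: assess the problem by comparison with precedents,
        # here by proposing the closest precedent's outcome.
        return retrieve(problem, casebase).outcome

    casebase = [
        Case(frozenset({"trade-secret", "disclosure"}), "plaintiff"),
        Case(frozenset({"trade-secret", "independent-development"}), "defendant"),
    ]
    print(evaluate(frozenset({"trade-secret", "disclosure", "employee"}), casebase))
    # -> plaintiff: that precedent shares the most features with the problem

Real CBR systems replace the feature-overlap count with domain-specific indexing and relevance assessment, which is precisely the component Ashley reports varying across domains and tasks.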