  • Why Do We Need to Employ Exemplars in Moral Education? Insights from Recent Advances in Research on Artificial Intelligence. Hyemin Han - forthcoming - Ethics and Behavior.
    In this paper, I examine why moral exemplars are useful and even necessary in moral education despite several critiques from researchers and educators. To support my point, I review recent AI research demonstrating that exemplar-based learning outperforms rule-based learning when training neural networks such as large language models. I particularly focus on why education aimed at promoting the development of multifaceted moral functioning can be done effectively by using exemplars, which is similar to exemplar-based learning (...)
  • Counterfactual explanations for misclassified images: How human and machine explanations differ. Eoin Delaney, Arjun Pakrashi, Derek Greene & Mark T. Keane - 2023 - Artificial Intelligence 324 (C):103995.
  • Temporal logic explanations for dynamic decision systems using anchors and Monte Carlo Tree Search. Tzu-Yi Chiu, Jerome Le Ny & Jean-Pierre David - 2023 - Artificial Intelligence 318 (C):103897.
  • Defining Explanation and Explanatory Depth in XAI. Stefan Buijsman - 2022 - Minds and Machines 32 (3):563-584.
    Explainable artificial intelligence (XAI) aims to help people understand black box algorithms, particularly their outputs. But what are these explanations, and when is one explanation better than another? The manipulationist definition of explanation from the philosophy of science offers good answers to these questions, holding that an explanation consists of a generalization that shows what happens in counterfactual cases. Furthermore, when it comes to explanatory depth this account holds that a generalization that has more abstract variables, is broader in (...)
  • Explainable Artificial Intelligence in Data Science. Joaquín Borrego-Díaz & Juan Galán-Páez - 2022 - Minds and Machines 32 (3):485-531.
    A widespread need to explain the behavior and outcomes of AI-based systems has emerged due to their ubiquitous presence, providing renewed momentum to the relatively new research area of eXplainable AI (XAI). Nowadays, the importance of XAI lies in the fact that the increasing transference of control to this kind of system for decision making (or, at least, its use for assisting executive stakeholders) already affects many sensitive realms (as in Politics, Social Sciences, or Law). The decision-making power handover to (...)
  • Explainable AI and Causal Understanding: Counterfactual Approaches Considered. Sam Baron - 2023 - Minds and Machines 33 (2):347-377.
    The counterfactual approach to explainable AI (XAI) seeks to provide understanding of AI systems through the provision of counterfactual explanations. In a recent systematic review, Chou et al. (Inform Fus 81:59–83, 2022) argue that the counterfactual approach does not clearly provide causal understanding. They diagnose the problem in terms of the underlying framework within which the counterfactual approach has been developed. To date, the counterfactual approach has not been developed in concert with the approach for specifying causes developed by Pearl (...)
  • Axe the X in XAI: A Plea for Understandable AI. Andrés Páez - forthcoming - In Juan Manuel Durán & Giorgia Pozzi (eds.), Philosophy of science for machine learning: Core issues and new perspectives. Springer.
    In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term “explanation” in explainable AI (XAI) can be solved by adopting any of four different extant accounts of explanation in the philosophy of science: the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New Mechanist models. In this chapter, I show that the authors’ claim that these accounts can be applied to deep neural networks as they would to any natural phenomenon is mistaken. I also (...)
  • Explanation Hacking: The perils of algorithmic recourse. E. Sullivan & Atoosa Kasirzadeh - forthcoming - In Juan Manuel Durán & Giorgia Pozzi (eds.), Philosophy of science for machine learning: Core issues and new perspectives. Springer.
    We argue that the trend toward providing users with feasible and actionable explanations of AI decisions—known as recourse explanations—comes with ethical downsides. Specifically, we argue that recourse explanations face several conceptual pitfalls and can lead to problematic explanation hacking, which undermines their ethical status. As an alternative, we advocate that explanations of AI decisions should aim at understanding.
  • Assessing the communication gap between AI models and healthcare professionals: Explainability, utility and trust in AI-driven clinical decision-making. Oskar Wysocki, Jessica Katharine Davies, Markel Vigo, Anne Caroline Armstrong, Dónal Landers, Rebecca Lee & André Freitas - 2023 - Artificial Intelligence 316 (C):103839.
  • Rationalizing predictions by adversarial information calibration. Lei Sha, Oana-Maria Camburu & Thomas Lukasiewicz - 2023 - Artificial Intelligence 315 (C):103828.
  • “That's (not) the output I expected!” On the role of end user expectations in creating explanations of AI systems. Maria Riveiro & Serge Thill - 2021 - Artificial Intelligence 298:103507.
  • G-LIME: Statistical learning for local interpretations of deep neural networks using global priors. Xuhong Li, Haoyi Xiong, Xingjian Li, Xiao Zhang, Ji Liu, Haiyan Jiang, Zeyu Chen & Dejing Dou - 2023 - Artificial Intelligence 314 (C):103823.
  • Human performance consequences of normative and contrastive explanations: An experiment in machine learning for reliability maintenance. Davide Gentile, Birsen Donmez & Greg A. Jamieson - 2023 - Artificial Intelligence 321 (C):103945.