  • A Misdirected Principle with a Catch: Explicability for AI. Scott Robbins - 2019 - Minds and Machines 29 (4):495-514.
    There is widespread agreement that there should be a principle requiring that artificial intelligence be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al., Minds and Machines 28:689–707, 2018). There (...)
  • The Pragmatic Turn in Explainable Artificial Intelligence. Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will (...)
  • How many kinds of reasons? Maria Alvarez - 2007 - Philosophical Explorations 12 (2):181-193.
    Reasons can play a variety of roles in a variety of contexts. For instance, reasons can motivate and guide us in our actions (and omissions), in the sense that we often act in the light of reasons. And reasons can be grounds for beliefs, desires and emotions and can be used to evaluate, and sometimes to justify, all these. In addition, reasons are used in explanations: both in explanations of human actions, beliefs, desires, emotions, etc., and in explanations of a (...)
  • Depth: An Account of Scientific Explanation. Michael Strevens - 2008 - Cambridge: Harvard University Press.
    Approaches to explanation -- Causal and explanatory relevance -- The kairetic account of difference-making -- The kairetic account of explanation -- Extending the kairetic account -- Event explanation and causal claims -- Regularity explanation -- Abstraction in regularity explanation -- Approaches to probabilistic explanation -- Kairetic explanation of frequencies -- Kairetic explanation of single outcomes -- Looking outward -- Looking inward.
  • Reasons for Action. Pamela Hieronymi - 2011 - Proceedings of the Aristotelian Society 111 (3pt3):407-427.
    Donald Davidson opens ‘Actions, Reasons, and Causes’ by asking, ‘What is the relation between a reason and an action when the reason explains the action by giving the agent's reason for doing what he did?’ His answer has generated some confusion about reasons for action and made for some difficulty in understanding the place for the agent's own reasons for acting, in the explanation of an action. I offer here a different account of the explanation of action, one that, though (...)
  • Explanation in artificial intelligence: Insights from the social sciences. Tim Miller - 2019 - Artificial Intelligence 267 (C):1-38.
  • The Intentional Stance. Daniel Clement Dennett - 1987 - MIT Press.
    Through the use of such "folk" concepts as belief, desire, intention, and expectation, Daniel Dennett asserts in this first full scale presentation of...
  • Against Interpretability: a Critical Examination of the Interpretability Problem in Machine Learning. Maya Krishnan - 2020 - Philosophy and Technology 33 (3):487-502.
    The usefulness of machine learning algorithms has led to their widespread adoption prior to the development of a conceptual framework for making sense of them. One common response to this situation is to say that machine learning suffers from a “black box problem.” That is, machine learning algorithms are “opaque” to human users, failing to be “interpretable” or “explicable” in terms that would render categorization procedures “understandable.” The purpose of this paper is to challenge the widespread agreement about the existence (...)
  • Aspects of scientific explanation. Carl G. Hempel - 1965 - In Carl Gustav Hempel (ed.), Aspects of Scientific Explanation and Other Essays in the Philosophy of Science. New York: The Free Press. pp. 504.
  • A Survey of Methods for Explaining Black Box Models. Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti & Dino Pedreschi - 2019 - ACM Computing Surveys 51 (5):1-42.
  • The Scientific Image. Bas C. van Fraassen - 1980 - Oxford: Oxford University Press. (Reviewed by William Demopoulos, Philosophical Review 91 (4):603, 1982.)
  • Scientific progress: Knowledge versus understanding. Finnur Dellsén - 2016 - Studies in History and Philosophy of Science Part A 56 (C):72-83.
    What is scientific progress? On Alexander Bird’s epistemic account of scientific progress, an episode in science is progressive precisely when there is more scientific knowledge at the end of the episode than at the beginning. Using Bird’s epistemic account as a foil, this paper develops an alternative understanding-based account on which an episode in science is progressive precisely when scientists grasp how to correctly explain or predict more aspects of the world at the end of the episode than at the (...)
  • Transparency in Complex Computational Systems. Kathleen A. Creel - 2020 - Philosophy of Science 87 (4):568-589.
    Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have s...
  • Black-box artificial intelligence: an epistemological and critical analysis. Manuel Carabantes - 2020 - AI and Society 35 (2):309-317.
    The artificial intelligence models with machine learning that exhibit the best predictive accuracy, and therefore, the most powerful ones, are, paradoxically, those with the most opaque black-box architectures. At the same time, the unstoppable computerization of advanced industrial societies demands the use of these machines in a growing number of domains. The conjunction of both phenomena gives rise to a control problem on AI that in this paper we analyze by dividing the issue into two. First, we carry out an (...)
  • How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Jenna Burrell - 2016 - Big Data and Society 3 (1):205395171562251.
    This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: opacity as intentional corporate or state (...)
  • Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). A. Adadi & M. Berrada - 2018 - IEEE Access 6.
  • Scientific Exploration and Explainable Artificial Intelligence. Carlos Zednik & Hannes Boelsen - 2022 - Minds and Machines 32 (1):219-239.
    Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future (...)
  • The explanation game: a formal framework for interpretable machine learning. David S. Watson & Luciano Floridi - 2021 - Synthese 198 (10):9211-9242.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of (...)
  • Why Attention is Not Explanation: Surgical Intervention and Causal Reasoning about Neural Models. Christopher Grimsley, Elijah Mayfield & Julia Bursten - 2020 - Proceedings of the 12th Conference on Language Resources and Evaluation.
    As the demand for explainable deep learning grows in the evaluation of language technologies, the value of a principled grounding for those explanations grows as well. Here we study the state-of-the-art in explanation for neural models for natural-language processing (NLP) tasks from the viewpoint of philosophy of science. We focus on recent evaluation work that finds brittleness in explanations obtained through attention mechanisms. We harness philosophical accounts of explanation to suggest broader conclusions from these studies. From this analysis, we assert the (...)
  • Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
  • Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In (...)
  • Understanding Scientific Understanding. Henk W. de Regt - 2017 - New York: Oxford University Press.
    Understanding is a central aim of science and highly important in present-day society. But what precisely is scientific understanding and how can it be achieved? This book answers these questions, through philosophical analysis and historical case studies, and presents a philosophical theory of scientific understanding that highlights its contextual nature.
  • The Value of Understanding. Jonathan L. Kvanvig - 2009 - In Adrian Haddock, Alan Millar & Duncan Pritchard (eds.), Epistemic Value. New York: Oxford University Press. pp. 95-112.
  • Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? John Zerilli, Alistair Knott, James Maclaurin & Colin Gavaghan - 2018 - Philosophy and Technology 32 (4):661-683.
    We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that (...)