  • Is knowledge of causes sufficient for understanding? Xingming Hu - 2019 - Canadian Journal of Philosophy 49 (3):291-313.
    ABSTRACT: According to a traditional account, understanding why X occurred is equivalent to knowing that X was caused by Y. This paper defends the account against a major objection, viz., knowing-that is not sufficient for understanding-why, for understanding-why requires a kind of grasp while knowledge-that does not. I discuss two accounts of grasp in recent literature and argue that if either is true, then knowing that X was caused by Y entails at least a rudimentary understanding of why X occurred. (...)
  • Understanding the Progress of Science. C. D. McCoy - 2022 - In Insa Lawler, Kareem Khalifa & Elay Shech (eds.), Scientific Understanding and Representation: Modeling in the Physical Sciences. New York, NY: Routledge. pp. 353-369.
    Philosophical debates on how to account for the progress of science have traditionally divided along the realism-anti-realism axis. Relatively recent developments in epistemology, however, have opened up a new knowledge-understanding axis to the debate. This chapter presents a novel understanding-based account of scientific progress that takes its motivation from problem-solving practices in science. Problem-solving is characterized as a means of measuring degree of understanding, which is argued to be the principal epistemic (or cognitive) aim of science, over and against knowledge. (...)
  • Grasp and scientific understanding: a recognition account. Michael Strevens - 2024 - Philosophical Studies 181 (4):741-762.
    To understand why a phenomenon occurs, it is not enough to possess a correct explanation of the phenomenon: you must grasp the explanation. In this formulation, “grasp” is a placeholder, standing for the psychological or epistemic relation that connects a mind to the explanatory facts in such a way as to produce understanding. This paper proposes and defends an account of the “grasping” relation according to which grasp of a property (to take one example of the sort of entity that (...)
  • Axe the X in XAI: A Plea for Understandable AI. Andrés Páez - forthcoming - In Juan Manuel Durán & Giorgia Pozzi (eds.), Philosophy of science for machine learning: Core issues and new perspectives. Springer.
    In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term “explanation” in explainable AI (XAI) can be solved by adopting any of four different extant accounts of explanation in the philosophy of science: the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New Mechanist models. In this chapter, I show that the authors’ claim that these accounts can be applied to deep neural networks as they would to any natural phenomenon is mistaken. I also (...)
  • Explainable AI and Causal Understanding: Counterfactual Approaches Considered. Sam Baron - 2023 - Minds and Machines 33 (2):347-377.
    The counterfactual approach to explainable AI (XAI) seeks to provide understanding of AI systems through the provision of counterfactual explanations. In a recent systematic review, Chou et al. (Inform Fus 81:59–83, 2022) argue that the counterfactual approach does not clearly provide causal understanding. They diagnose the problem in terms of the underlying framework within which the counterfactual approach has been developed. To date, the counterfactual approach has not been developed in concert with the approach for specifying causes developed by Pearl (...)
  • Hot-cold empathy gaps and the grounds of authenticity. Grace Helton & Christopher Register - 2023 - Synthese 202 (5):1-24.
    Hot-cold empathy gaps are a pervasive phenomenon wherein one’s predictions about others tend to skew ‘in the direction’ of one’s own current visceral states. For instance, when one predicts how hungry someone else is, one’s prediction will tend to reflect one’s own current hunger state. These gaps also obtain intrapersonally, when one attempts to predict what one oneself would do at a different time. In this paper, we do three things: We draw on empirical evidence to argue that so-called hot-cold (...)
  • Understanding, Idealization, and Explainable AI. Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  • Grounding, Understanding, and Explanation. Wes Siscoe - 2022 - Pacific Philosophical Quarterly 103 (4):791-815.
    Starting with the slogan that understanding is a ‘knowledge of causes’, Stephen Grimm and John Greco have argued that understanding comes from a knowledge of dependence relations. Grounding is the trendiest dependence relation on the market, and if Grimm and Greco are correct, then instances of grounding should also give rise to understanding. In this paper, I will show that this prediction is correct – grounding does indeed generate understanding in just the way that Grimm and Greco anticipate. However, grounding (...)
  • Group (epistemic) competence. Dani Pino - 2021 - Synthese 199 (3-4):11377-11396.
    In this paper, I present an account of group competence that is explicitly framed for cases of epistemic performances. According to it, we must consider group epistemic competence as the group agents’ capacity to produce knowledge, and not the result of the summation of its individual members’ competences to produce knowledge. Additionally, I contend that group competence must be understood in terms of group normative status. To introduce my view, I present Jesper Kallestrup’s denial that group competence involves anything over (...)
  • Recent Work in the Epistemology of Understanding. Michael Hannon - 2021 - American Philosophical Quarterly 58 (3):269-290.
    The philosophical interest in the nature, value, and varieties of human understanding has swelled in recent years. This article will provide an overview of new research in the epistemology of understanding, with a particular focus on the following questions: What is understanding and why should we care about it? Is understanding reducible to knowledge? Does it require truth, belief, or justification? Can there be lucky understanding? Does it require ‘grasping’ or some kind of ‘know-how’? This cluster of questions has largely (...)
  • Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In (...)
  • Framing the Epistemic Schism of Statistical Mechanics. Javier Anta - 2021 - Proceedings of the X Conference of the Spanish Society of Logic, Methodology and Philosophy of Science.
    In this talk I present the main results from Anta (2021), namely, that the theoretical division between Boltzmannian and Gibbsian statistical mechanics should be understood as a separation in the epistemic capabilities of this physical discipline. In particular, while from the Boltzmannian framework one can generate powerful explanations of thermal processes by appealing to their microdynamics, from the Gibbsian framework one can predict observable values in a computationally effective way. Finally, I argue that this statistical mechanical schism contradicts the Hempelian (...)
  • Explanatory pragmatism: a context-sensitive framework for explainable medical AI. Diana Robinson & Rune Nyrup - 2022 - Ethics and Information Technology 24 (1).
    Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we (...)