  • A Contextual Approach to Scientific Understanding. Henk W. de Regt & Dennis Dieks - 2005 - Synthese 144 (1):137-170.
    Achieving understanding of nature is one of the aims of science. In this paper we offer an analysis of the nature of scientific understanding that accords with actual scientific practice and accommodates the historical diversity of conceptions of understanding. Its core idea is a general criterion for the intelligibility of scientific theories that is essentially contextual: which theories conform to this criterion depends on contextual factors, and can change in the course of time. Our analysis provides a general account of (...)
  • Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Paul Humphreys - 2004 - New York, US: Oxford University Press.
    Computational methods such as computer simulations, Monte Carlo methods, and agent-based modeling have become the dominant techniques in many areas of science. Extending Ourselves contains the first systematic philosophical account of these new methods, and how they require a different approach to scientific method. Paul Humphreys draws a parallel between the ways in which such computational methods have enhanced our abilities to mathematically model the world, and the more familiar ways in which scientific instruments have expanded our access to the (...)
  • "Introduction" to his Philosophical Papers, Volume 2. D. Lewis - 1986.
  • Understanding, explanation, and unification. Victor Gijsbers - 2013 - Studies in History and Philosophy of Science Part A 44 (3):516-522.
    In this article I argue that there are two different types of understanding: the understanding we get from explanations, and the understanding we get from unification. This claim is defended by first showing that explanation and unification are not as closely related as has sometimes been thought. A critical appraisal of recent proposals for understanding without explanation leads us to discuss the example of a purely classificatory biology: it turns out that such a science can give us understanding of the (...)
  • Responses to Critics. Jonathan Kvanvig - 2009 - In Adrian Haddock, Alan Millar & Duncan Pritchard (eds.), Epistemic value. New York: Oxford University Press. pp. 339-353.
    I begin by expressing my sincere thanks to my critics for taking time from their own impressive projects in epistemology to consider mine. Often, in reading their criticisms, I had the feeling of having received more help than I really wanted! But the truth of the matter is that we learn best by making mistakes, and I appreciate the conscientious attention to my work that my critics have shown.
  • Varieties of Justification in Machine Learning. David Corfield - 2010 - Minds and Machines 20 (2):291-301.
    Forms of justification for inductive machine learning techniques are discussed and classified into four types. This is done with a view to introduce some of these techniques and their justificatory guarantees to the attention of philosophers, and to initiate a discussion as to whether they must be treated separately or rather can be viewed consistently from within a single framework.
  • Simulation and the sense of understanding. Jaakko Kuorikoski - 2011 - In Paul Humphreys & Cyrille Imbert (eds.), Models, Simulations, and Representations. New York: Routledge. pp. 168-187.
    Whether simulation models provide the right kind of understanding comparable to that of analytic models has been and remains a contentious issue. The assessment of understanding provided by simulations is often hampered by a conflation between the sense of understanding and understanding proper. This paper presents a deflationist conception of understanding and argues for the need to replace appeals to the sense of understanding with explicit criteria of explanatory relevance and for rethinking the proper way of conceptualizing the role of (...)
  • The philosophical novelty of computer simulation methods. Paul Humphreys - 2009 - Synthese 169 (3):615-626.
    Reasons are given to justify the claim that computer simulations and computational science constitute a distinctively new set of scientific methods and that these methods introduce new issues in the philosophy of science. These issues are both epistemological and methodological in kind.
  • Scientific explanation and the sense of understanding. J. D. Trout - 2002 - Philosophy of Science 69 (2):212-233.
    Scientists and laypeople alike use the sense of understanding that an explanation conveys as a cue to good or correct explanation. Although the occurrence of this sense or feeling of understanding is neither necessary nor sufficient for good explanation, it does drive judgments of the plausibility and, ultimately, the acceptability, of an explanation. This paper presents evidence that the sense of understanding is in part the routine consequence of two well-documented biases in cognitive psychology: overconfidence and hindsight. In light of (...)
  • Explanatory unification. Philip Kitcher - 1981 - Philosophy of Science 48 (4):507-531.
    The official model of explanation proposed by the logical empiricists, the covering law model, is subject to familiar objections. The goal of the present paper is to explore an unofficial view of explanation which logical empiricists have sometimes suggested, the view of explanation as unification. I try to show that this view can be developed so as to provide insight into major episodes in the history of science, and that it can overcome some of the most serious difficulties besetting the (...)
  • Studies in the logic of explanation. Carl Gustav Hempel & Paul Oppenheim - 1948 - Philosophy of Science 15 (2):135-175.
    To explain the phenomena in the world of our experience, to answer the question “why?” rather than only the question “what?”, is one of the foremost objectives of all rational inquiry; and especially, scientific research in its various branches strives to go beyond a mere description of its subject matter by providing an explanation of the phenomena it investigates. While there is rather general agreement about this chief objective of science, there exists considerable difference of opinion as to the function (...)
  • Explanation and scientific understanding. Michael Friedman - 1974 - Journal of Philosophy 71 (1):5-19.
  • Understanding and the facts. Catherine Elgin - 2007 - Philosophical Studies 132 (1):33-42.
    If understanding is factive, the propositions that express an understanding are true. I argue that a factive conception of understanding is unduly restrictive. It neither reflects our practices in ascribing understanding nor does justice to contemporary science. For science uses idealizations and models that do not mirror the facts. Strictly speaking, they are false. By appeal to exemplification, I devise a more generous, flexible conception of understanding that accommodates science, reflects our practices, and shows a sufficient but not slavish sensitivity (...)
  • Dermatologist-level classification of skin cancer with deep neural networks. Andre Esteva, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau & Sebastian Thrun - 2017 - Nature 542 (7639):115-118.
  • Understanding Deep Learning with Statistical Relevance. Tim Räz - 2022 - Philosophy of Science 89 (1):20-41.
    This paper argues that a notion of statistical explanation, based on Salmon’s statistical relevance model, can help us better understand deep neural networks. It is proved that homogeneous partitions, the core notion of Salmon’s model, are equivalent to minimal sufficient statistics, an important notion from statistical inference. This establishes a link to deep neural networks via the so-called Information Bottleneck method, an information-theoretic framework, according to which deep neural networks implicitly solve an optimization problem that generalizes minimal sufficient statistics. The (...)
  • The Right to Explanation. Kate Vredenburgh - 2021 - Journal of Political Philosophy 30 (2):209-229.
  • Two Dimensions of Opacity and the Deep Learning Predicament. Florian J. Boge - 2021 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  • Opacity thought through: on the intransparency of computer simulations. Claus Beisbart - 2021 - Synthese 199 (3-4):11643-11666.
    Computer simulations are often claimed to be opaque and thus to lack transparency. But what exactly is the opacity of simulations? This paper aims to answer that question by proposing an explication of opacity. Such an explication is needed, I argue, because the pioneering definition of opacity by P. Humphreys and a recent elaboration by Durán and Formanek are too narrow. While it is true that simulations are opaque in that they include too many computations and thus cannot be checked (...)
  • Machine learning in medicine: should the pursuit of enhanced interpretability be abandoned? Chang Ho Yoon, Robert Torrance & Naomi Scheinerman - 2022 - Journal of Medical Ethics 48 (9):581-585.
    We argue why interpretability should have primacy alongside empiricism for several reasons: first, if machine learning models are beginning to render some of the high-risk healthcare decisions instead of clinicians, these models pose a novel medicolegal and ethical frontier that is incompletely addressed by current methods of appraising medical interventions like pharmacological therapies; second, a number of judicial precedents underpinning medical liability and negligence are compromised when ‘autonomous’ ML recommendations are considered to be on par with human instruction in specific (...)
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • Explanation in artificial intelligence: Insights from the social sciences. Tim Miller - 2019 - Artificial Intelligence 267 (C):1-38.
  • Understanding climate change with statistical downscaling and machine learning. Julie Jebeile, Vincent Lam & Tim Räz - 2020 - Synthese (1-2):1-21.
    Machine learning methods have recently created high expectations in the climate modelling context in view of addressing climate change, but they are often considered as non-physics-based ‘black boxes’ that may not provide any understanding. However, in many ways, understanding seems indispensable to appropriately evaluate climate models and to build confidence in climate projections. Relying on two case studies, we compare how machine learning and standard statistical techniques affect our ability to understand the climate system. For that purpose, we put five (...)
  • What is Interpretability? Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2021 - Philosophy and Technology 34:833-862.
    We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: Are networks explainable, and if so, what does it mean to explain the output of a network? And what does it mean for a network to be interpretable? We argue that accounts of “explanation” tailored specifically to neural networks have ineffectively reinvented the wheel. In (...)
  • Computer Simulations, Machine Learning and the Laplacean Demon: Opacity in the Case of High Energy Physics. Florian J. Boge & Paul Grünke - forthcoming - In Andreas Kaminski, Michael Resch & Petra Gehring (eds.), The Science and Art of Simulation II.
    In this paper, we pursue three general aims: (I) We will define a notion of fundamental opacity and ask whether it can be found in High Energy Physics (HEP), given the involvement of machine learning (ML) and computer simulations (CS) therein. (II) We identify two kinds of non-fundamental, contingent opacity associated with CS and ML in HEP respectively, and ask whether, and if so how, they may be overcome. (III) We address the question of whether any kind of opacity, contingent (...)
  • The explanation game: a formal framework for interpretable machine learning. David S. Watson & Luciano Floridi - 2020 - Synthese 198 (10):1-32.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal (...)
  • Transparency in Complex Computational Systems. Kathleen A. Creel - 2020 - Philosophy of Science 87 (4):568-589.
    Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have s...
  • The Pragmatic Turn in Explainable Artificial Intelligence. Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will (...)
  • Scientific understanding and felicitous legitimate falsehoods. Insa Lawler - 2021 - Synthese 198 (7):6859-6887.
    Science is replete with falsehoods that epistemically facilitate understanding by virtue of being the very falsehoods they are. In view of this puzzling fact, some have relaxed the truth requirement on understanding. I offer a factive view of understanding that fully accommodates the puzzling fact in four steps: (i) I argue that the question how these falsehoods are related to the phenomenon to be understood and the question how they figure into the content of understanding it are independent. (ii) I (...)
  • Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
  • Deep learning: A philosophical introduction. Cameron Buckner - 2019 - Philosophy Compass 14 (10):e12625.
    Deep learning is currently the most prominent and widely successful method in artificial intelligence. Despite having played an active role in earlier artificial intelligence and neural network research, philosophers have been largely silent on this technology so far. This is remarkable, given that deep learning neural networks have blown past predicted upper limits on artificial intelligence performance—recognizing complex objects in natural photographs and defeating world champions in strategy games as complex as Go and chess—yet there remains no universally accepted explanation (...)
  • Against Interpretability: a Critical Examination of the Interpretability Problem in Machine Learning. Maya Krishnan - 2020 - Philosophy and Technology 33 (3):487-502.
    The usefulness of machine learning algorithms has led to their widespread adoption prior to the development of a conceptual framework for making sense of them. One common response to this situation is to say that machine learning suffers from a “black box problem.” That is, machine learning algorithms are “opaque” to human users, failing to be “interpretable” or “explicable” in terms that would render categorization procedures “understandable.” The purpose of this paper is to challenge the widespread agreement about the existence (...)
  • Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In (...)
  • Judging machines: philosophical aspects of deep learning. Arno Schubbach - 2019 - Synthese 198 (2):1807-1827.
    Although machine learning has been successful in recent years and is increasingly being deployed in the sciences, enterprises or administrations, it has rarely been discussed in philosophy beyond the philosophy of mathematics and machine learning. The present contribution addresses the resulting lack of conceptual tools for an epistemological discussion of machine learning by conceiving of deep learning networks as ‘judging machines’ and using the Kantian analysis of judgments for specifying the type of judgment they are capable of. At the center (...)
  • Grounds for Trust: Essential Epistemic Opacity and Computational Reliabilism. Juan M. Durán & Nico Formanek - 2018 - Minds and Machines 28 (4):645-666.
    Several philosophical issues in connection with computer simulations rely on the assumption that results of simulations are trustworthy. Examples of these include the debate on the experimental role of computer simulations :483–496, 2009; Morrison in Philos Stud 143:33–57, 2009), the nature of computer data Computer simulations and the changing face of scientific experimentation, Cambridge Scholars Publishing, Barcelona, 2013; Humphreys, in: Durán, Arnold Computer simulations and the changing face of scientific experimentation, Cambridge Scholars Publishing, Barcelona, 2013), and the explanatory power of (...)
  • What is Understanding? An Overview of Recent Debates in Epistemology and Philosophy of Science. Christoph Baumberger, Claus Beisbart & Georg Brun - 2017 - In Stephen Grimm, Christoph Baumberger & Sabine Ammon (eds.), Explaining Understanding: New Perspectives from Epistemology and Philosophy of Science. Routledge. pp. 1-34.
    The paper provides a systematic overview of recent debates in epistemology and philosophy of science on the nature of understanding. We explain why philosophers have turned their attention to understanding and discuss conditions for “explanatory” understanding of why something is the case and for “objectual” understanding of a whole subject matter. The most debated conditions for these types of understanding roughly resemble the three traditional conditions for knowledge: truth, justification and belief. We discuss prominent views about how to construe these (...)
  • Philosophical Papers. Graeme Forbes & David Lewis - 1985 - Philosophical Review 94 (1):108.
  • Understanding and its Relation to Knowledge. Christoph Baumberger - 2011 - In Christoph Jäger & Winfrid Löffler (eds.), Epistemology: Contexts, Values, Disagreement. Papers of the 34th International Wittgenstein Symposium. Austrian Ludwig Wittgenstein Society. pp. 16-18.
    Is understanding the same as or at least a species of knowledge? This question has to be answered with respect to each of three types of understanding and of knowledge. I argue that understanding-why and objectual understanding are not reducible to one another and neither identical with nor a species of the corresponding or any other type of knowledge. My discussion reveals important characteristics of these two types of understanding and has consequences for propositional understanding.
  • Is understanding a species of knowledge? Stephen R. Grimm - 2006 - British Journal for the Philosophy of Science 57 (3):515-535.
    Among philosophers of science there seems to be a general consensus that understanding represents a species of knowledge, but virtually every major epistemologist who has thought seriously about understanding has come to deny this claim. Against this prevailing tide in epistemology, I argue that understanding is, in fact, a species of knowledge: just like knowledge, for example, understanding is not transparent and can be Gettiered. I then consider how the psychological act of "grasping" that seems to be characteristic of understanding (...)
  • Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Cynthia Rudin - 2019 - Nature Machine Intelligence 1.
  • Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). A. Adadi & M. Berrada - 2018 - IEEE Access 6.