  • Galilean Idealization.Ernan McMullin - 1985 - Studies in History and Philosophy of Science Part A 16 (3):247.
  • Understanding: not know-how.Emily Sullivan - 2018 - Philosophical Studies 175 (1):221-240.
    There is considerable agreement among epistemologists that certain abilities are constitutive of understanding-why. These abilities include: constructing explanations, drawing conclusions, and answering questions. This agreement has led epistemologists to conclude that understanding is a kind of know-how. However, in this paper, I argue that the abilities constitutive of understanding are the same kind of cognitive abilities that we find in ordinary cases of knowledge-that and not the kind of practical abilities associated with know-how. I argue for this by disambiguating between (...)
  • What is Understanding? An Overview of Recent Debates in Epistemology and Philosophy of Science.Christoph Baumberger, Claus Beisbart & Georg Brun - 2017 - In Stephen Grimm, Christoph Baumberger & Sabine Ammon (eds.), Explaining Understanding: New Perspectives from Epistemology and Philosophy of Science. Routledge. pp. 1-34.
    The paper provides a systematic overview of recent debates in epistemology and philosophy of science on the nature of understanding. We explain why philosophers have turned their attention to understanding and discuss conditions for “explanatory” understanding of why something is the case and for “objectual” understanding of a whole subject matter. The most debated conditions for these types of understanding roughly resemble the three traditional conditions for knowledge: truth, justification and belief. We discuss prominent views about how to construe these (...)
  • Understanding and Coming to Understand.Michael Lynch - 2017 - In Stephen Robert Grimm (ed.), Making Sense of the World: New Essays on the Philosophy of Understanding. New York, NY, United States of America: Oxford University Press.
    Many philosophers take understanding to be a distinctive kind of knowledge that involves grasping dependency relations; moreover, they hold it to be particularly valuable. This paper aims to investigate and address two well-known puzzles that arise from this conception: (1) the nature of understanding itself—in particular, the nature of “grasping”; (2) the source of understanding’s distinctive value. In what follows, I’ll argue that we can shed light on both puzzles by recognizing first, the importance of the distinction between the act (...)
  • Scientific representation.Roman Frigg & James Nguyen - 2016 - Stanford Encyclopedia of Philosophy.
    Science provides us with representations of atoms, elementary particles, polymers, populations, genetic trees, economies, rational decisions, aeroplanes, earthquakes, forest fires, irrigation systems, and the world’s climate. It's through these representations that we learn about the world. This entry explores various different accounts of scientific representation, with a particular focus on how scientific models represent their target systems. As philosophers of science are increasingly acknowledging the importance, if not the primacy, of scientific models as representational units of science, it's important to (...)
  • Understanding Why.Alison Hills - 2015 - Noûs 49 (2):661-688.
    I argue that understanding why p involves a kind of intellectual know-how and differs from both knowledge that p and knowledge why p (as they are standardly understood). I argue that understanding, in this sense, is valuable.
  • A Contextual Approach to Scientific Understanding.Henk W. de Regt & Dennis Dieks - 2005 - Synthese 144 (1):137-170.
    Achieving understanding of nature is one of the aims of science. In this paper we offer an analysis of the nature of scientific understanding that accords with actual scientific practice and accommodates the historical diversity of conceptions of understanding. Its core idea is a general criterion for the intelligibility of scientific theories that is essentially contextual: which theories conform to this criterion depends on contextual factors, and can change in the course of time. Our analysis provides a general account of (...)
  • The Structure of Scientific Theories.Rasmus Grønfeldt Winther - 2015 - Stanford Encyclopedia of Philosophy.
    Scientific inquiry has led to immense explanatory and technological successes, partly as a result of the pervasiveness of scientific theories. Relativity theory, evolutionary theory, and plate tectonics were, and continue to be, wildly successful families of theories within physics, biology, and geology. Other powerful theory clusters inhabit comparatively recent disciplines such as cognitive science, climate science, molecular biology, microeconomics, and Geographic Information Science (GIS). Effective scientific theories magnify understanding, help supply legitimate explanations, and assist in formulating predictions. Moving from their (...)
  • Hypothetical Pattern Idealization and Explanatory Models.Yasha Rohwer & Collin Rice - 2013 - Philosophy of Science 80 (3):334-355.
    Highly idealized models, such as the Hawk-Dove game, are pervasive in biological theorizing. We argue that the process and motivation that leads to the introduction of various idealizations into these models is not adequately captured by Michael Weisberg’s taxonomy of three kinds of idealization. Consequently, a fourth kind of idealization is required, which we call hypothetical pattern idealization. This kind of idealization is used to construct models that aim to be explanatory but do not aim to be explanations.
  • Modeling social and evolutionary games.Angela Potochnik - 2012 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 43 (1):202-208.
    When game theory was introduced to biology, the components of classic game theory models were replaced with elements more befitting evolutionary phenomena. The actions of intelligent agents are replaced by phenotypic traits; utility is replaced by fitness; rational deliberation is replaced by natural selection. In this paper, I argue that this classic conception of comprehensive reapplication is misleading, for it overemphasizes the discontinuity between human behavior and evolved traits. Explicitly considering the representational roles of evolutionary game theory brings to attention (...)
  • No understanding without explanation.Michael Strevens - 2013 - Studies in History and Philosophy of Science Part A 44 (3):510-515.
    Scientific understanding, this paper argues, can be analyzed entirely in terms of a mental act of “grasping” and a notion of explanation. To understand why a phenomenon occurs is to grasp a correct explanation of the phenomenon. To understand a scientific theory is to be able to construct, or at least to grasp, a range of potential explanations in which that theory accounts for other phenomena. There is no route to scientific understanding, then, that does not go by way of (...)
  • Three Kinds of Idealization.Michael Weisberg - 2007 - Journal of Philosophy 104 (12):639-659.
    Philosophers of science increasingly recognize the importance of idealization: the intentional introduction of distortion into scientific theories. Yet this recognition has not yielded consensus about the nature of idealization. The literature of the past thirty years contains disparate characterizations and justifications, but little evidence of convergence towards a common position.
  • Models in Science (2nd edition).Roman Frigg & Stephan Hartmann - 2021 - The Stanford Encyclopedia of Philosophy.
    Models are of central importance in many scientific contexts. The centrality of models such as inflationary models in cosmology, general-circulation models of the global climate, the double-helix model of DNA, evolutionary models in biology, agent-based models in the social sciences, and general-equilibrium models of markets in their respective domains is a case in point (the Other Internet Resources section at the end of this entry contains links to online resources that discuss these models). Scientists spend significant amounts of time building, (...)
  • True enough.Catherine Z. Elgin - 2004 - Philosophical Issues 14 (1):113-131.
    Truth is standardly considered a requirement on epistemic acceptability. But science and philosophy deploy models, idealizations and thought experiments that prescind from truth to achieve other cognitive ends. I argue that such felicitous falsehoods function as cognitively useful fictions. They are cognitively useful because they exemplify and afford epistemic access to features they share with the relevant facts. They are falsehoods in that they diverge from the facts. Nonetheless, they are true enough to serve their epistemic purposes. Theories that contain (...)
  • The Right to Explanation.Kate Vredenburgh - 2021 - Journal of Political Philosophy 30 (2):209-229.
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research.Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • How the machine ‘thinks’: Understanding opacity in machine learning algorithms.Jenna Burrell - 2016 - Big Data and Society 3 (1):205395171562251.
    This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: opacity as intentional corporate or state (...)
  • Transparency in Complex Computational Systems.Kathleen A. Creel - 2020 - Philosophy of Science 87 (4):568-589.
    Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have s...
  • Recent Work in the Epistemology of Understanding.Michael Hannon - 2021 - American Philosophical Quarterly 58 (3):269-290.
    The philosophical interest in the nature, value, and varieties of human understanding has swelled in recent years. This article will provide an overview of new research in the epistemology of understanding, with a particular focus on the following questions: What is understanding and why should we care about it? Is understanding reducible to knowledge? Does it require truth, belief, or justification? Can there be lucky understanding? Does it require ‘grasping’ or some kind of ‘know-how’? This cluster of questions has largely (...)
  • The Pragmatic Turn in Explainable Artificial Intelligence.Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will (...)
  • Against Interpretability: a Critical Examination of the Interpretability Problem in Machine Learning.Maya Krishnan - 2020 - Philosophy and Technology 33 (3):487-502.
    The usefulness of machine learning algorithms has led to their widespread adoption prior to the development of a conceptual framework for making sense of them. One common response to this situation is to say that machine learning suffers from a “black box problem.” That is, machine learning algorithms are “opaque” to human users, failing to be “interpretable” or “explicable” in terms that would render categorization procedures “understandable.” The purpose of this paper is to challenge the widespread agreement about the existence (...)
  • Understanding from Machine Learning Models.Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In (...)
  • Idealizations and Understanding: Much Ado About Nothing?Emily Sullivan & Kareem Khalifa - 2019 - Australasian Journal of Philosophy 97 (4):673-689.
    Because idealizations frequently advance scientific understanding, many claim that falsehoods play an epistemic role. In this paper, we argue that these positions greatly overstate idealiza...
  • Idealizations and scientific understanding.Moti Mizrahi - 2012 - Philosophical Studies 160 (2):237-252.
    In this paper, I propose that the debate in epistemology concerning the nature and value of understanding can shed light on the role of scientific idealizations in producing scientific understanding. In philosophy of science, the received view seems to be that understanding is a species of knowledge. On this view, understanding is factive just as knowledge is, i.e., if S knows that p, then p is true. Epistemologists, however, distinguish between different kinds of understanding. Among epistemologists, there are those who (...)
  • Explanatory unification.Philip Kitcher - 1981 - Philosophy of Science 48 (4):507-531.
    The official model of explanation proposed by the logical empiricists, the covering law model, is subject to familiar objections. The goal of the present paper is to explore an unofficial view of explanation which logical empiricists have sometimes suggested, the view of explanation as unification. I try to show that this view can be developed so as to provide insight into major episodes in the history of science, and that it can overcome some of the most serious difficulties besetting the (...)
  • A Survey of Methods for Explaining Black Box Models.Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti & Dino Pedreschi - 2019 - ACM Computing Surveys 51 (5):1-42.
  • Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI.A. Barredo Arrieta, N. Díaz-Rodríguez, J. Ser, A. Bennetot, S. Tabik & A. Barbado - 2020 - Information Fusion 58.
  • What is Interpretability?Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2021 - Philosophy and Technology 34:833-862.
    We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: Are networks explainable, and if so, what does it mean to explain the output of a network? And what does it mean for a network to be interpretable? We argue that accounts of “explanation” tailored specifically to neural networks have ineffectively reinvented the wheel. In (...)
  • Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence.Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
  • Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead.Cynthia Rudin - 2019 - Nature Machine Intelligence 1.