  • Deflationary representation, inference, and practice. Mauricio Suárez - 2015 - Studies in History and Philosophy of Science Part A 49 (C):36-47.
    This paper defends the deflationary character of two recent views regarding scientific representation, namely RIG Hughes’ DDI model and the inferential conception. It is first argued that these views’ deflationism is akin to the homonymous position in discussions regarding the nature of truth. There, we are invited to consider the platitudes that the predicate “true” obeys at the level of practice, disregarding any deeper, or more substantive, account of its nature. More generally, for any concept X, a deflationary approach is (...)
  • How models are used to represent reality. Ronald N. Giere - 2004 - Philosophy of Science 71 (5):742-752.
    Most recent philosophical thought about the scientific representation of the world has focused on dyadic relationships between language-like entities and the world, particularly the semantic relationships of reference and truth. Drawing inspiration from diverse sources, I argue that we should focus on the pragmatic activity of representing, so that the basic representational relationship has the form: Scientists use models to represent aspects of the world for specific purposes. Leaving aside the terms "law" and "theory," I distinguish principles, specific conditions, models, (...)
  • Understanding, Idealization, and Explainable AI. Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  • The Importance of Understanding Deep Learning. Tim Räz & Claus Beisbart - 2024 - Erkenntnis 89 (5).
    Some machine learning models, in particular deep neural networks (DNNs), are not very well understood; nevertheless, they are frequently used in science. Does this lack of understanding pose a problem for using DNNs to understand empirical phenomena? Emily Sullivan has recently argued that understanding with DNNs is not limited by our lack of understanding of DNNs themselves. In the present paper, we will argue, _contra_ Sullivan, that our current lack of understanding of DNNs does limit our ability to understand with (...)
  • Inductive Risk, Understanding, and Opaque Machine Learning Models. Emily Sullivan - 2022 - Philosophy of Science 89 (5):1065-1074.
    Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this article, I argue that nonepistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal opacity (...)
  • Transparency in Complex Computational Systems. Kathleen A. Creel - 2020 - Philosophy of Science 87 (4):568-589.
    Scientists depend on complex computational systems that are often ineliminably opaque, to the detriment of our ability to give scientific explanations and detect artifacts. Some philosophers have (...)
  • Scientific understanding and felicitous legitimate falsehoods. Insa Lawler - 2021 - Synthese 198 (7):6859-6887.
    Science is replete with falsehoods that epistemically facilitate understanding by virtue of being the very falsehoods they are. In view of this puzzling fact, some have relaxed the truth requirement on understanding. I offer a factive view of understanding that fully accommodates the puzzling fact in four steps: (i) I argue that the question how these falsehoods are related to the phenomenon to be understood and the question how they figure into the content of understanding it are independent. (ii) I (...)
  • Understanding from Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
    Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In (...)
  • How could models possibly provide how-possibly explanations? Philippe Verreault-Julien - 2019 - Studies in History and Philosophy of Science Part A 73:1-12.
    One puzzle concerning highly idealized models is whether they explain. Some suggest they provide so-called ‘how-possibly explanations’. However, this raises an important question about the nature of how-possibly explanations, namely what distinguishes them from ‘normal’, or how-actually, explanations? I provide an account of how-possibly explanations that clarifies their nature in the context of solving the puzzle of model-based explanation. I argue that the modal notions of actuality and possibility provide the relevant dividing lines between how-possibly and how-actually explanations. Whereas how-possibly (...)
  • True Enough. Catherine Z. Elgin - 2017 - Cambridge: MIT Press.
    Science relies on models and idealizations that are known not to be true. Even so, science is epistemically reputable. To accommodate science, epistemology should focus on understanding rather than knowledge and should recognize that the understanding of a topic need not be factive. This requires reconfiguring the norms of epistemic acceptability. If epistemology has the resources to accommodate science, it will also have the resources to show that art too advances understanding.
  • Idealization and abstraction: refining the distinction. Arnon Levy - 2018 - Synthese 198 (Suppl 24):5855-5872.
    Idealization and abstraction are central concepts in the philosophy of science and in science itself. My goal in this paper is to suggest an account of these concepts, building on and refining an existing view due to Jones (Idealization XII: Correcting the Model. Idealization and Abstraction in the Sciences, vol 86. Rodopi, Amsterdam, pp 173–217, 2005) and Godfrey-Smith (Mapping the Future of Biology: Evolving Concepts and Theories. Springer, Berlin, 2009). On this line of thought, abstraction—which I call, for reasons to be (...)
  • Idealization and the Aims of Science. Angela Potochnik - 2017 - Chicago: University of Chicago Press.
    Science is the study of our world, as it is in its messy reality. Nonetheless, science requires idealization to function—if we are to attempt to understand the world, we have to find ways to reduce its complexity. Idealization and the Aims of Science shows just how crucial idealization is to science and why it matters. Beginning with the acknowledgment of our status as limited human agents trying to make sense of an exceedingly complex world, Angela Potochnik moves on to explain (...)
  • Deep Learning Opacity in Scientific Discovery. Eamon Duede - 2023 - Philosophy of Science 90 (5):1089-1099.
    Philosophers have recently focused on critical, epistemological challenges that arise from the opacity of deep neural networks. One might conclude from this literature that doing good science with opaque models is exceptionally challenging, if not impossible. Yet, this is hard to square with the recent boom in optimism for AI in science alongside a flood of recent scientific breakthroughs driven by AI methods. In this paper, I argue that the disconnect between philosophical pessimism and scientific optimism is driven by a (...)
  • Two Dimensions of Opacity and the Deep Learning Predicament. Florian J. Boge - 2021 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)
  • How Idealizations Provide Understanding. Michael Strevens - 2016 - In Stephen Robert Grimm, Christoph Baumberger & Sabine Ammon (eds.), Explaining Understanding: New Perspectives From Epistemology and Philosophy of Science. London: Routledge.
    How can a model that stops short of representing the whole truth about the causal production of a phenomenon help us to understand the phenomenon? I answer this question from the perspective of what I call the simple view of understanding, on which to understand a phenomenon is to grasp a correct explanation of the phenomenon. Idealizations, I have argued in previous work, flag factors that are causally relevant but explanatorily irrelevant to the phenomena to be explained. Though useful to (...)
  • Idealizations and scientific understanding. Moti Mizrahi - 2012 - Philosophical Studies 160 (2):237-252.
    In this paper, I propose that the debate in epistemology concerning the nature and value of understanding can shed light on the role of scientific idealizations in producing scientific understanding. In philosophy of science, the received view seems to be that understanding is a species of knowledge. On this view, understanding is factive just as knowledge is, i.e., if S knows that p, then p is true. Epistemologists, however, distinguish between different kinds of understanding. Among epistemologists, there are those who (...)
  • Minimal Model Explanations. Robert W. Batterman & Collin C. Rice - 2014 - Philosophy of Science 81 (3):349-376.
    This article discusses minimal model explanations, which we argue are distinct from various causal, mechanical, difference-making, and so on, strategies prominent in the philosophical literature. We contend that what accounts for the explanatory power of these models is not that they have certain features in common with real systems. Rather, the models are explanatory because of a story about why a class of systems will all display the same large-scale behavior because the details that distinguish them are irrelevant. This story (...)
  • Understanding climate phenomena with data-driven models. Benedikt Knüsel & Christoph Baumberger - 2020 - Studies in History and Philosophy of Science Part A 84 (C):46-56.
    In climate science, climate models are one of the main tools for understanding phenomena. Here, we develop a framework to assess the fitness of a climate model for providing understanding. The framework is based on three dimensions: representational accuracy, representational depth, and graspability. We show that this framework does justice to the intuition that classical process-based climate models give understanding of phenomena. While simple climate models are characterized by a larger graspability, state-of-the-art models have a higher representational accuracy and representational (...)
  • The British Journal for the Philosophy of Science. [author unknown] - 1956 - Dialectica 10 (1):94-95.