  • Reliability in Machine Learning. Thomas Grote, Konstantin Genin & Emily Sullivan - 2024 - Philosophy Compass 19 (5):e12974.
    Issues of reliability are claiming center-stage in the epistemology of machine learning. This paper unifies different branches in the literature and points to promising research directions, whilst also providing an accessible introduction to key concepts in statistics and machine learning – as far as they are concerned with reliability.
  • On the Opacity of Deep Neural Networks. Anders Søgaard - 2023 - Canadian Journal of Philosophy:1-16.
    Deep neural networks are said to be opaque, impeding the development of safe and trustworthy artificial intelligence, but where this opacity stems from is less clear. What are the sufficient properties for neural network opacity? Here, I discuss five common properties of deep neural networks and two different kinds of opacity. Which of these properties are sufficient for what type of opacity? I show how each kind of opacity stems from only one of these five properties, and then discuss to (...)
  • Machine learning in healthcare and the methodological priority of epistemology over ethics. Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  • The Explanatory Role of Machine Learning in Molecular Biology. Fridolin Gross - forthcoming - Erkenntnis:1-21.
    The philosophical debate around the impact of machine learning in science is often framed in terms of a choice between AI and classical methods as mutually exclusive alternatives involving difficult epistemological trade-offs. A common worry regarding machine learning methods specifically is that they lead to opaque models that make predictions but do not lead to explanation or understanding. Focusing on the field of molecular biology, I argue that in practice machine learning is often used with explanatory aims. More specifically, I (...)
  • Linguistic Competence and New Empiricism in Philosophy and Science. Vanja Subotić - 2023 - Dissertation, University of Belgrade.
    The topic of this dissertation is the nature of linguistic competence, the capacity to understand and produce sentences of natural language. I defend the empiricist account of linguistic competence embedded in the connectionist cognitive science. This strand of cognitive science has been opposed to the traditional symbolic cognitive science, coupled with transformational-generative grammar, which was committed to nativism due to the view that human cognition, including language capacity, should be construed in terms of symbolic representations and hardwired rules. Similarly, linguistic (...)
  • Do ML models represent their targets? Emily Sullivan - forthcoming - Philosophy of Science.
    I argue that ML models used in science function as highly idealized toy models. If we treat ML models as a type of highly idealized toy model, then we can deploy standard representational and epistemic strategies from the toy model literature to explain why ML models can still provide epistemic success despite their lack of similarity to their targets.
  • Models, Algorithms, and the Subjects of Transparency. Hajo Greif - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer. pp. 27-37.
    Concerns over epistemic opacity abound in contemporary debates on Artificial Intelligence (AI). However, it is not always clear to what extent these concerns refer to the same set of problems. We can observe, first, that the terms 'transparency' and 'opacity' are used either in reference to the computational elements of an AI model or to the models to which they pertain. Second, opacity and transparency might either be understood to refer to the properties of AI systems or to the epistemic (...)
  • Decentring the discoverer: how AI helps us rethink scientific discovery. Elinor Clark & Donal Khosrowi - 2022 - Synthese 200 (6):1-26.
    This paper investigates how intuitions about scientific discovery using artificial intelligence can be used to improve our understanding of scientific discovery more generally. Traditional accounts of discovery have been agent-centred: they place emphasis on identifying a specific agent who is responsible for conducting all, or at least the important part, of a discovery process. We argue that these accounts experience difficulties capturing scientific discovery involving AI and that similar issues arise for human discovery. We propose an alternative, collective-centred view as (...)
  • Philosophy of science at sea: Clarifying the interpretability of machine learning. Claus Beisbart & Tim Räz - 2022 - Philosophy Compass 17 (6):e12830.
    Philosophy Compass, Volume 17, Issue 6, June 2022.
  • Explaining AI through mechanistic interpretability. Lena Kästner & Barnaby Crook - 2024 - European Journal for Philosophy of Science 14 (4):1-25.
    Recent work in explainable artificial intelligence (XAI) attempts to render opaque AI systems understandable through a divide-and-conquer strategy. However, this fails to illuminate how trained AI systems work as a whole. Precisely this kind of functional understanding is needed, though, to satisfy important societal desiderata such as safety. To remedy this situation, we argue, AI researchers should seek mechanistic interpretability, viz. apply coordinated discovery strategies familiar from the life sciences to uncover the functional organisation of complex AI systems. Additionally, theorists (...)
  • Beyond transparency and explainability: on the need for adequate and contextualized user guidelines for LLM use. Kristian González Barman, Nathan Wood & Pawel Pawlowski - 2024 - Ethics and Information Technology 26 (3):1-12.
    Large language models (LLMs) such as ChatGPT present immense opportunities, but without proper training for users (and potentially oversight), they carry risks of misuse as well. We argue that current approaches focusing predominantly on transparency and explainability fall short in addressing the diverse needs and concerns of various user groups. We highlight the limitations of existing methodologies and propose a framework anchored on user-centric guidelines. In particular, we argue that LLM users should be given guidelines on what tasks LLMs can (...)
  • An Alternative to Cognitivism: Computational Phenomenology for Deep Learning. Pierre Beckmann, Guillaume Köstner & Inês Hipólito - 2023 - Minds and Machines 33 (3):397-427.
    We propose a non-representationalist framework for deep learning relying on a novel method computational phenomenology, a dialogue between the first-person perspective (relying on phenomenology) and the mechanisms of computational models. We thereby propose an alternative to the modern cognitivist interpretation of deep learning, according to which artificial neural networks encode representations of external entities. This interpretation mainly relies on neuro-representationalism, a position that combines a strong ontological commitment towards scientific theoretical entities and the idea that the brain operates on symbolic (...)
  • Evidence, computation and AI: why evidence is not just in the head. Darrell P. Rowbottom, André Curtis-Trudel & William Peden - 2023 - Asian Journal of Philosophy 2 (1):1-17.
    Can scientific evidence outstretch what scientists have mentally entertained, or could ever entertain? This article focuses on the plausibility and consequences of an affirmative answer in a special case. Specifically, it discusses how we may treat automated scientific data-gathering systems—especially AI systems used to make predictions or to generate novel theories—from the point of view of confirmation theory. It uses AlphaFold2 as a case study.
  • Understanding models understanding language. Anders Søgaard - 2022 - Synthese 200 (6):1-16.
    Landgrebe and Smith (2021: 2061–2081) present an unflattering diagnosis of recent advances in what they call language-centric artificial intelligence—perhaps more widely known as natural language processing: the models that are currently employed do not have sufficient expressivity, will not generalize, and are fundamentally unable to induce linguistic semantics, they say. The diagnosis is mainly derived from an analysis of the widely used Transformer architecture. Here I address a number of misunderstandings in their analysis, and present what I take to be (...)
  • AI as an Epistemic Technology. Ramón Alvarado - 2023 - Science and Engineering Ethics 29 (5):1-30.
    In this paper I argue that Artificial Intelligence and the many data science methods associated with it, such as machine learning and large language models, are first and foremost epistemic technologies. In order to establish this claim, I first argue that epistemic technologies can be conceptually and practically distinguished from other technologies in virtue of what they are designed for, what they do and how they do it. I then proceed to show that unlike other kinds of technology (including other (...)
  • Scientific Inference with Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena. Timo Freiesleben, Gunnar König, Christoph Molnar & Álvaro Tejero-Cantero - 2024 - Minds and Machines 34 (3):1-39.
    To learn about real world phenomena, scientists have traditionally used models with clearly interpretable elements. However, modern machine learning (ML) models, while powerful predictors, lack this direct elementwise interpretability (e.g. neural network weights). Interpretable machine learning (IML) offers a solution by analyzing models holistically to derive interpretations. Yet, current IML research is focused on auditing ML models rather than leveraging them for scientific inference. Our work bridges this gap, presenting a framework for designing IML methods—termed 'property descriptors'—that illuminate not just (...)
  • Searching for Features with Artificial Neural Networks in Science: The Problem of Non-Uniqueness. Siyu Yao & Amit Hagar - 2024 - International Studies in the Philosophy of Science 37 (1):51-67.
    Artificial neural networks and supervised learning have become an essential part of science. Beyond using them for accurate input-output mapping, there is growing attention to a new feature-oriented approach. Under the assumption that networks optimised for a task may have learned to represent and utilise important features of the target system for that task, scientists examine how those networks manipulate inputs and employ the features networks capture for scientific discovery. We analyse this approach, show its hidden caveats, and suggest its (...)
  • Predicting and explaining with machine learning models: Social science as a touchstone. Oliver Buchholz & Thomas Grote - 2023 - Studies in History and Philosophy of Science Part A 102 (C):60-69.
    Machine learning (ML) models recently led to major breakthroughs in predictive tasks in the natural sciences. Yet their benefits for the social sciences are less evident, as even high-profile studies on the prediction of life trajectories have been shown to be largely unsuccessful – at least when measured in traditional criteria of scientific success. This paper tries to shed light on this remarkable performance gap. Comparing two social science case studies to a paradigm example from the natural sciences, we argue that, (...)
  • Reliability and Interpretability in Science and Deep Learning. Luigi Scorzato - 2024 - Minds and Machines 34 (3):1-31.
    In recent years, the question of the reliability of Machine Learning (ML) methods has acquired significant importance, and the analysis of the associated uncertainties has motivated a growing amount of research. However, most of these studies have applied standard error analysis to ML models—and in particular Deep Neural Network (DNN) models—which represent a rather significant departure from standard scientific modelling. It is therefore necessary to integrate the standard error analysis with a deeper epistemological analysis of the possible differences between DNN (...)
  • Understanding via exemplification in XAI: how explaining image classification benefits from exemplars. Sara Mann - forthcoming - AI and Society:1-16.
    Artificial intelligence (AI) systems that perform image classification tasks are being used to great success in many application contexts. However, many of these systems are opaque, even to experts. This lack of understanding can be problematic for ethical, legal, or practical reasons. The research field Explainable AI (XAI) has therefore developed several approaches to explain image classifiers. The hope is to bring about understanding, e.g., regarding why certain images are classified as belonging to a particular target class. Most of these (...)
  • Experts or Authorities? The Strange Case of the Presumed Epistemic Superiority of Artificial Intelligence Systems. Andrea Ferrario, Alessandro Facchini & Alberto Termine - 2024 - Minds and Machines 34 (3):1-27.
    The high predictive accuracy of contemporary machine learning-based AI systems has led some scholars to argue that, in certain cases, we should grant them epistemic expertise and authority over humans. This approach suggests that humans would have the epistemic obligation of relying on the predictions of a highly accurate AI system. Contrary to this view, in this work we claim that it is not possible to endow AI systems with a genuine account of epistemic expertise. In fact, relying on accounts (...)