  • AI, Opacity, and Personal Autonomy. Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive a (...)
  • Deep Learning Applied to Scientific Discovery: A Hot Interface with Philosophy of Science. Louis Vervoort, Henry Shevlin, Alexey A. Melnikov & Alexander Alodjants - forthcoming - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie:1-13.
    We review publications in automated scientific discovery using deep learning, with the aim of shedding light on problems with strong connections to philosophy of science, of physics in particular. We show that core issues of philosophy of science, relating notably to the nature of scientific theories, of unification, and of causation, loom large in scientific deep learning. Therefore, advances in deep learning could, and ideally should, have an impact on philosophy of science, and vice versa. We suggest lines of (...)
  • Making AI Intelligible: Philosophical Foundations. Herman Cappelen & Josh Dever - 2021 - New York, USA: Oxford University Press.
    Can humans and artificial intelligences share concepts and communicate? Making AI Intelligible shows that philosophical work on the metaphysics of meaning can help answer these questions. Herman Cappelen and Josh Dever use the externalist tradition in philosophy to create models of how AIs and humans can understand each other. In doing so, they illustrate ways in which that philosophical tradition can be improved. The questions addressed in the book are not only theoretically interesting, but the answers have pressing practical implications. (...)
  • The explanation game: a formal framework for interpretable machine learning. David S. Watson & Luciano Floridi - 2021 - Synthese 198 (10):9211-9242.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of (...)
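The trade-off described above can be illustrated with a small computation. This is a minimal sketch, not Watson and Floridi's formal explanation game: the candidate explanations and their scores for accuracy, simplicity, and relevance are hypothetical, and the only point is how a three-dimensional Pareto frontier is identified.

```python
# Illustrative only: identify Pareto-optimal explanation candidates, where each
# candidate is scored on (accuracy, simplicity, relevance) and higher is better.
import numpy as np

def pareto_front(scores: np.ndarray) -> np.ndarray:
    """Boolean mask of rows not dominated on all three criteria by another row."""
    n = scores.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # i is dominated if some row is >= on every criterion and > on at least one
        dominated = np.any(
            np.all(scores >= scores[i], axis=1) & np.any(scores > scores[i], axis=1)
        )
        keep[i] = not dominated
    return keep

# hypothetical candidates: columns = (accuracy, simplicity, relevance)
candidates = np.array([
    [0.92, 0.40, 0.70],
    [0.85, 0.80, 0.65],
    [0.60, 0.95, 0.50],
    [0.58, 0.90, 0.45],  # dominated by the row above it
])
print(pareto_front(candidates))  # [ True  True  True False]
```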
  • Conceptual challenges for interpretable machine learning. David S. Watson - 2022 - Synthese 200 (2):1-33.
    As machine learning has gradually entered into ever more sectors of public and private life, there has been a growing demand for algorithmic explainability. How can we make the predictions of complex statistical models more intelligible to end users? A subdiscipline of computer science known as interpretable machine learning has emerged to address this urgent question. Numerous influential methods have been proposed, from local linear approximations to rule lists and counterfactuals. In this article, I highlight three conceptual challenges that are (...)
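Among the methods named in the abstract above, local linear approximation is the easiest to show in a few lines. The sketch below is a hedged, generic version of the idea (in the spirit of LIME), not the implementation of any particular library discussed by Watson: the black_box function, the sampling scale, and the proximity kernel are all illustrative assumptions.

```python
# Sketch of a local linear approximation around one prediction: perturb the
# instance, query the opaque model, and fit a proximity-weighted linear model
# whose coefficients act as a local explanation.
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # stand-in for an opaque model we can only query
    return np.tanh(2.0 * X[:, 0] - 0.5 * X[:, 1] ** 2)

def local_linear_explanation(x0, n_samples=500, scale=0.3):
    X = x0 + scale * rng.normal(size=(n_samples, x0.size))         # samples near x0
    y = black_box(X)                                               # model outputs
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * scale ** 2))  # proximity weights
    A = np.hstack([np.ones((n_samples, 1)), X - x0])               # intercept + offsets
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)  # weighted least squares
    return coef[1:]  # per-feature local effect estimates

print(local_linear_explanation(np.array([0.2, 1.0])))
```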
  • Defining the undefinable: the black box problem in healthcare artificial intelligence. Jordan Joseph Wadden - 2022 - Journal of Medical Ethics 48 (10):764-768.
    The ‘black box problem’ is a long-standing talking point in debates about artificial intelligence. This is a significant point of tension between ethicists, programmers, clinicians and anyone else working on developing AI for healthcare applications. However, the precise definition of these systems is often left undefined, vague or unclear, or is assumed to be standardised within AI circles. This leads to situations where individuals working on AI talk over each other and has been invoked in numerous debates between opaque and explainable (...)
  • Transparency and the Black Box Problem: Why We Do Not Trust AI. Warren J. von Eschenbach - 2021 - Philosophy and Technology 34 (4):1607-1622.
    As the automation of routine decisions is coupled with ever more intricate and complex information architectures operating that automation, concerns are increasing about the trustworthiness of these systems. These concerns are exacerbated by a class of artificial intelligence that uses deep learning, an algorithmic system of deep neural networks, which on the whole remain opaque or hidden from human comprehension. This situation is commonly referred to as the black box problem in AI. Without understanding how AI reaches its conclusions, it is an open (...)
  • Knowledge graphs as tools for explainable machine learning: A survey. Ilaria Tiddi & Stefan Schlobach - 2022 - Artificial Intelligence 302 (C):103627.
  • The Automated Laplacean Demon: How ML Challenges Our Views on Prediction and Explanation. Sanja Srećković, Andrea Berber & Nenad Filipović - 2022 - Minds and Machines 32 (1):159-183.
    Certain characteristics make machine learning a powerful tool for processing large amounts of data, and also particularly unsuitable for explanatory purposes. There are worries that its increasing use in science may sideline the explanatory goals of research. We analyze the key characteristics of ML that might have implications for the future directions in scientific research: epistemic opacity and ‘theory-agnostic’ modeling. These characteristics are further analyzed in a comparison of ML with traditional statistical methods, in order to demonstrate what (...)
  • Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare. Giorgia Pozzi - 2023 - Ethics and Information Technology 25 (1):1-12.
    Artificial intelligence-based (AI) technologies such as machine learning (ML) systems are playing an increasingly relevant role in medicine and healthcare, bringing about novel ethical and epistemological issues that need to be addressed in a timely manner. Even though ethical questions connected to epistemic concerns have been at the center of the debate, it has gone largely unnoticed how epistemic forms of injustice can be ML-induced, specifically in healthcare. I analyze the shortcomings of an ML system currently deployed in the USA to predict patients’ likelihood (...)
  • Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine. Christopher Poppe & Georg Starke - 2022 - Ethics and Information Technology 24 (3):1-10.
    Assistive systems based on Artificial Intelligence are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems understandable, rather than (...)
  • Sources of Understanding in Supervised Machine Learning Models. Paulo Pirozelli - 2022 - Philosophy and Technology 35 (2):1-19.
    In the last decades, supervised machine learning has seen the widespread growth of highly complex, non-interpretable models, of which deep neural networks are the most typical representative. Due to their complexity, these models have shown outstanding performance in a range of tasks, such as image recognition and machine translation. Recently, though, there has been an important discussion over whether those non-interpretable models are able to provide any sort of understanding whatsoever. For some scholars, only interpretable models can provide understanding. (...)
  • Are machines radically contextualist? Ryan M. Nefdt - forthcoming - Mind and Language.
    In this article, I describe a novel position on the semantics of artificial intelligence. I present a problem for the current artificial neural networks used in machine learning, specifically with relation to natural language tasks. I then propose that from a metasemantic level, meaning in machines can best be interpreted as radically contextualist. Finally, I consider what this might mean for human-level semantic competence from a comparative perspective.
  • The State Space of Artificial Intelligence. Holger Lyre - 2020 - Minds and Machines 30 (3):325-347.
    The goal of the paper is to develop and propose a general model of the state space of AI. Given the breathtaking progress in AI research and technologies in recent years, such conceptual work is of substantial theoretical interest. The present AI hype is mainly driven by the triumph of deep learning neural networks. As the distinguishing feature of such networks is the ability to self-learn, self-learning is identified as one important dimension of the AI state space. Another dimension is (...)
  • What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...)
  • A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. Tomáš Kliegr, Štěpán Bahník & Johannes Fürnkranz - 2021 - Artificial Intelligence 295 (C):103458.
  • The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples. Timo Freiesleben - 2022 - Minds and Machines 32 (1):77-109.
    The same method that creates adversarial examples (AEs) to fool image-classifiers can be used to generate counterfactual explanations (CEs) that explain algorithmic decisions. This observation has led researchers to consider CEs as AEs by another name. We argue that the relationship to the true label and the tolerance with respect to proximity are two properties that formally distinguish CEs and AEs. Based on these arguments, we introduce CEs, AEs, and related concepts mathematically in a common framework. Furthermore, we show connections between current (...)
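To make the shared recipe concrete: the sketch below is a hedged illustration, not Freiesleben's formal framework. A toy linear classifier (the weights, step size, and stopping rule are all assumptions made for this example) is nudged by gradient steps until its decision flips; whether the perturbed input counts as a counterfactual explanation or an adversarial example then turns on the target label's relation to the true label and on how tightly proximity is constrained.

```python
# Gradient-based perturbation of an input until a toy classifier flips its decision.
import numpy as np

w, b = np.array([1.5, -2.0]), 0.25                 # toy linear classifier
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
predict = lambda x: sigmoid(x @ w + b)             # P(class = 1 | x)

def perturb_until_flip(x, target=1.0, step=0.05, max_iter=200):
    x = x.astype(float).copy()
    for _ in range(max_iter):
        p = predict(x)
        if (p > 0.5) == (target > 0.5):            # decision already matches target
            break
        # gradient of (p - target)^2 with respect to the input, for a sigmoid-linear model
        grad = 2 * (p - target) * p * (1 - p) * w
        x -= step * grad
    return x

x0 = np.array([-1.0, 0.5])
print(predict(x0), predict(perturb_until_flip(x0)))  # probability before and after
```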
  • Understanding, Idealization, and Explainable AI. Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  • What is Interpretability? Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2021 - Philosophy and Technology 34:833-862.
    We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: Are networks explainable, and if so, what does it mean to explain the output of a network? And what does it mean for a network to be interpretable? We argue that accounts of “explanation” tailored specifically to neural networks have ineffectively reinvented the wheel. In (...)
  • Dissecting scientific explanation in AI (sXAI): A case for medicine and healthcare. Juan M. Durán - 2021 - Artificial Intelligence 297 (C):103498.
  • Philosophy of science at sea: Clarifying the interpretability of machine learning. Claus Beisbart & Tim Räz - 2022 - Philosophy Compass 17 (6):e12830.