  • Ockham’s Razors: A User’s Manual. Elliott Sober - 2015 - Cambridge: Cambridge University Press.
    Ockham's razor, the principle of parsimony, states that simpler theories are better than theories that are more complex. It has a history dating back to Aristotle and it plays an important role in current physics, biology, and psychology. The razor also gets used outside of science - in everyday life and in philosophy. This book evaluates the principle and discusses its many applications. Fascinating examples from different domains provide a rich basis for contemplating the principle's promises and perils. It is (...)
  • Making AI Meaningful Again. Jobst Landgrebe & Barry Smith - 2021 - Synthese 198 (March):2061-2081.
    Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial (...)
  • Simplicity. Alan Baker - 2008 - Stanford Encyclopedia of Philosophy.
  • The explanation game: a formal framework for interpretable machine learning. David S. Watson & Luciano Floridi - 2020 - Synthese 198 (10):1-32.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal (...)
  • Simplicity As Evidence of Truth. Richard Swinburne - 1997 - Milwaukee: Marquette University Press.
  • Reliable Reasoning: Induction and Statistical Learning Theory. Gilbert Harman & Sanjeev Kulkarni - 2007 - Cambridge, MA: MIT Press.
    In _Reliable Reasoning_, Gilbert Harman and Sanjeev Kulkarni -- a philosopher and an engineer -- argue that philosophy and cognitive science can benefit from statistical learning theory, the theory that lies behind recent advances in machine learning. The philosophical problem of induction, for example, is in part about the reliability of inductive reasoning, where the reliability of a method is measured by its statistically expected percentage of errors -- a central topic in SLT. After discussing philosophical attempts to evade the (...)
  • Progress as Approximation to the Truth: A Defence of the Verisimilitudinarian Approach. Gustavo Cevolani & Luca Tambolo - 2013 - Erkenntnis 78 (4):921-935.
    In this paper we provide a compact presentation of the verisimilitudinarian approach to scientific progress (VS, for short) and defend it against the sustained attack recently mounted by Alexander Bird (2007). Advocated by such authors as Ilkka Niiniluoto and Theo Kuipers, VS is the view that progress can be explained in terms of the increasing verisimilitude (or, equivalently, truthlikeness, or approximation to the truth) of scientific theories. According to Bird, VS overlooks the central issue of the appropriate grounding of scientific (...)
  • Scientific Progress. Ilkka Niiniluoto - 2012 - In Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy.
  • Judging machines: philosophical aspects of deep learning. Arno Schubbach - 2019 - Synthese 198 (2):1807-1827.
    Although machine learning has been successful in recent years and is increasingly being deployed in the sciences, enterprises or administrations, it has rarely been discussed in philosophy beyond the philosophy of mathematics and machine learning. The present contribution addresses the resulting lack of conceptual tools for an epistemological discussion of machine learning by conceiving of deep learning networks as ‘judging machines’ and using the Kantian analysis of judgments for specifying the type of judgment they are capable of. At the center (...)
  • Scientific progress. Ilkka Niiniluoto - 1980 - Synthese 45 (3):427-462.
  • No Free Lunch Theorem, Inductive Skepticism, and the Optimality of Meta-induction. Gerhard Schurz - 2017 - Philosophy of Science 84 (5):825-839.
    The no free lunch theorem is a radicalized version of Hume’s induction skepticism. It asserts that relative to a uniform probability distribution over all possible worlds, all computable prediction algorithms—whether ‘clever’ inductive or ‘stupid’ guessing methods—have the same expected predictive success. This theorem seems to be in conflict with results about meta-induction. According to these results, certain meta-inductive prediction strategies may dominate other methods in their predictive success. In this article this conflict is analyzed and dissolved, by means of (...)
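The core of the no-free-lunch result summarized in the entry above can be reproduced in a few lines: averaged over all Boolean target functions on a tiny domain (a uniform prior over worlds), a "clever" inductive rule and a "stupid" constant guesser score identically on the held-out point. This is a minimal illustrative sketch, not code from the paper; the 3-point domain and the two rules are assumptions chosen for the example.

```python
import itertools

# Domain of three inputs; the learner sees labels for two of them and
# must predict the third (the off-training-set point).
domain = [0, 1, 2]
train, test_point = [0, 1], 2

def ots_accuracy(predict):
    """Average accuracy on the held-out point over all 2**3 targets."""
    hits = 0
    targets = list(itertools.product([0, 1], repeat=len(domain)))
    for target in targets:
        labels = {x: target[x] for x in train}  # what the learner sees
        hits += int(predict(labels) == target[test_point])
    return hits / len(targets)

always_zero = lambda labels: 0                               # constant guesser
majority = lambda labels: int(sum(labels.values()) >= 1)     # "inductive" rule

print(ots_accuracy(always_zero))  # 0.5
print(ots_accuracy(majority))     # 0.5
```

Both rules score 0.5: any predictor fixed before seeing the held-out label is right on exactly half of the equally weighted targets, which is the theorem's radicalized Humean point.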
  • Simplicity in the philosophy of science. Simon Fitzpatrick - 2013 - Internet Encyclopedia of Philosophy.
    Encyclopedia entry on the debate over simplicity/parsimony and Ockham's Razor in the philosophy of science.
  • The Lack of A Priori Distinctions Between Learning Algorithms. David H. Wolpert - 1996 - Neural Computation 8 (7):1341-1390.
    This is the first of two papers that use off-training set (OTS) error to investigate the assumption-free relationship between learning algorithms. This first paper discusses the senses in which there are no a priori distinctions between learning algorithms. (The second paper discusses the senses in which there are such distinctions.) In this first paper it is shown, loosely speaking, that for any two algorithms A and B, there are “as many” targets (or priors over targets) for which A has lower (...)
  • The Nature of Statistical Learning Theory. Vladimir Vapnik - 1999 - New York: Springer.
    The aim of this book is to discuss the fundamental ideas which lie behind the statistical theory of learning and generalization. It considers learning as a general problem of function estimation based on empirical data. Omitting proofs and technical details, the author concentrates on discussing the main results of learning theory and their connections to fundamental problems in statistics. This second edition contains three new chapters devoted to further development of the learning theory and SVM techniques. Written in a readable (...)
  • Falsificationism and Statistical Learning Theory: Comparing the Popper and Vapnik-Chervonenkis Dimensions. David Corfield, Bernhard Schölkopf & Vladimir Vapnik - 2009 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 40 (1):51-58.
    We compare Karl Popper’s ideas concerning the falsifiability of a theory with similar notions from the part of statistical learning theory known as VC-theory. Popper’s notion of the dimension of a theory is contrasted with the apparently very similar VC-dimension. Having located some divergences, we discuss how best to view Popper’s work from the perspective of statistical learning theory, either as a precursor or as aiming to capture a different learning activity.
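The VC-dimension contrasted with Popper's dimension in the entry above can be computed by brute force for a toy class: 1-D threshold classifiers h_t(x) = 1[x >= t] shatter any single point but never two, so their VC dimension is 1. A self-contained sketch under that toy setup; the function names are illustrative, not from the paper.

```python
# Brute-force shattering check for 1-D threshold classifiers.

def labelings(points, thresholds):
    """All distinct label vectors the threshold family produces on `points`."""
    out = set()
    for t in thresholds:
        out.add(tuple(int(x >= t) for x in points))
    return out

def shattered(points):
    """True if thresholds realize all 2**n labelings of `points`."""
    pts = sorted(points)
    # One candidate threshold in each region the points carve out:
    # below all points, between consecutive points, above all points.
    cand = [pts[0] - 1.0]
    cand += [(a + b) / 2 for a, b in zip(pts, pts[1:])]
    cand += [pts[-1] + 1.0]
    return len(labelings(points, cand)) == 2 ** len(points)

print(shattered([0.0]))       # True: one point gets both labels
print(shattered([0.0, 1.0]))  # False: the labeling (1, 0) is unreachable
```

Because thresholds are monotone, the left point can never be labeled 1 while the right point is labeled 0, which is exactly the labeling that fails; no two-point set is shattered.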
  • Varieties of Justification in Machine Learning. David Corfield - 2010 - Minds and Machines 20 (2):291-301.
    Forms of justification for inductive machine learning techniques are discussed and classified into four types. This is done with a view to introduce some of these techniques and their justificatory guarantees to the attention of philosophers, and to initiate a discussion as to whether they must be treated separately or rather can be viewed consistently from within a single framework.
  • PAC Learning and Occam’s Razor: Probably Approximately Incorrect. Daniel A. Herrmann - 2020 - Philosophy of Science 87 (4):685-703.
    Computer scientists have provided a distinct justification of Occam’s Razor. Using the probably approximately correct framework, they provide a theorem that they claim demonstrates that we should favor simpler hypotheses. The argument relies on a philosophical interpretation of the theorem. I argue that the standard interpretation of the result in the literature is misguided and that a better reading does not, in fact, support Occam’s Razor at all. To this end, I state and prove a very similar theorem that, if (...)
  • Introduction: Machine learning as philosophy of science. Kevin B. Korb - 2004 - Minds and Machines 14 (4):433-440.
    I consider three aspects in which machine learning and philosophy of science can illuminate each other: methodology, inductive simplicity and theoretical terms. I examine the relations between the two subjects and conclude by claiming these relations to be very close.
  • Testability and Ockham’s Razor: How Formal and Statistical Learning Theory Converge in the New Riddle of Induction. Daniel Steel - 2009 - Journal of Philosophical Logic 38 (5):471-489.
    Nelson Goodman's new riddle of induction forcefully illustrates a challenge that must be confronted by any adequate theory of inductive inference: provide some basis for choosing among alternative hypotheses that fit past data but make divergent predictions. One response to this challenge is to distinguish among alternatives by means of some epistemically significant characteristic beyond fit with the data. Statistical learning theory takes this approach by showing how a concept similar to Popper's notion of degrees of testability is linked to (...)
  • Philosophy and machine learning. Paul Thagard - 1990 - Canadian Journal of Philosophy 20 (2):261-276.
    This article discusses the philosophical relevance of recent computational work on inductive inference being conducted in the rapidly growing branch of artificial intelligence called machine learning.
  • The Big Data razor. Ezequiel López-Rubio - 2020 - European Journal for Philosophy of Science 10 (2):1-20.
    Classic conceptions of model simplicity for machine learning are mainly based on the analysis of the structure of the model. Bayesian, Frequentist, information theoretic and expressive power concepts are the best known of them, which are reviewed in this work, along with their underlying assumptions and weaknesses. These approaches were developed before the advent of the Big Data deluge, which has overturned the importance of structural simplicity. The computational simplicity concept is presented, and it is argued that it is more (...)
  • A dynamic interaction between machine learning and the philosophy of science. Jon Williamson - 2004 - Minds and Machines 14 (4):539-549.
    The relationship between machine learning and the philosophy of science can be classed as a dynamic interaction: a mutually beneficial connection between two autonomous fields that changes direction over time. I discuss the nature of this interaction and give a case study highlighting interactions between research on Bayesian networks in machine learning and research on causality and probability in the philosophy of science.
  • Inductive logic, verisimilitude, and machine learning. Ilkka Niiniluoto - 2005 - In Petr Hájek, Luis Valdés-Villanueva & Dag Westerståhl (eds.), Logic, Methodology, and Philosophy of Science. College Publications. pp. 295-314.
    This paper starts by summarizing work that philosophers have done in the fields of inductive logic since the 1950s and truth approximation since the 1970s. It then proceeds to interpret and critically evaluate the studies on machine learning within artificial intelligence since the 1980s. Parallels are drawn between identifiability results within formal learning theory and convergence results within Hintikka’s inductive logic. Another comparison is made between the PAC-learning of concepts and the notion of probable approximate truth.
  • Statistical learning theory as a framework for the philosophy of induction. Gilbert Harman & Sanjeev Kulkarni - manuscript.
    Statistical Learning Theory (e.g., Hastie et al., 2001; Vapnik, 1998, 2000, 2006) is the basic theory behind contemporary machine learning and data-mining. We suggest that the theory provides an excellent framework for philosophical thinking about inductive inference.
  • The philosophy of science and its relation to machine learning. Jon Williamson - unknown.
    In this chapter I discuss connections between machine learning and the philosophy of science. First I consider the relationship between the two disciplines. There is a clear analogy between hypothesis choice in science and model selection in machine learning. While this analogy has been invoked to argue that the two disciplines are essentially doing the same thing and should merge, I maintain that the disciplines are distinct but related and that there is a dynamic interaction operating between the two: a (...)
  • Falsification and future performance. David Balduzzi - manuscript.
    We information-theoretically reformulate two measures of capacity from statistical learning theory: empirical VC-entropy and empirical Rademacher complexity. We show these capacity measures count the number of hypotheses about a dataset that a learning algorithm falsifies when it finds the classifier in its repertoire minimizing empirical risk. It then follows that the future performance of predictors on unseen data is controlled in part by how many hypotheses the learner falsifies. As a corollary we show that empirical VC-entropy quantifies the message (...)
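Empirical Rademacher complexity, one of the two capacity measures reformulated in the entry above, can be computed exactly for toy classes by enumerating all sign vectors: it is the expected best correlation the class achieves with random +/-1 labels on a fixed sample. A small illustrative sketch under that setup; hypotheses are represented by their +/-1 outputs on a 2-point sample, and the names are assumptions for the example.

```python
import itertools

def empirical_rademacher(hypothesis_outputs, n):
    """R_hat = E_sigma[ max_h (1/n) * sum_i sigma_i * h(x_i) ],
    averaged exactly over all 2**n sign vectors sigma."""
    total = 0.0
    for sigma in itertools.product([-1, 1], repeat=n):
        best = max(sum(s * h[i] for i, s in enumerate(sigma)) / n
                   for h in hypothesis_outputs)
        total += best
    return total / 2 ** n

# One hypothesis: no capacity to chase random labels.
single = [(1, 1)]
# A class that realizes every sign pattern on the sample (it shatters it).
shatters = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

print(empirical_rademacher(single, 2))    # 0.0
print(empirical_rademacher(shatters, 2))  # 1.0
```

The single hypothesis falsifies nothing and gets complexity 0; the shattering class can match any noise vector and gets the maximal value 1, mirroring the entry's link between capacity and how many hypotheses a learner can falsify.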