References
  • Language Acquisition Meets Language Evolution. Nick Chater & Morten H. Christiansen - 2010 - Cognitive Science 34 (7):1131-1157.
    Recent research suggests that language evolution is a process of cultural change, in which linguistic structures are shaped through repeated cycles of learning and use by domain-general mechanisms. This paper draws out the implications of this viewpoint for understanding the problem of language acquisition, which is cast in a new, and much more tractable, form. In essence, the child faces a problem of induction, where the objective is to coordinate with others (C-induction), rather than to model the structure of the (...)
  • Varieties of Justification in Machine Learning. David Corfield - 2010 - Minds and Machines 20 (2):291-301.
    Forms of justification for inductive machine learning techniques are discussed and classified into four types. This is done with a view to bringing some of these techniques and their justificatory guarantees to the attention of philosophers, and to initiating a discussion as to whether they must be treated separately or can be viewed consistently from within a single framework.
  • Hans Reichenbach. Clark Glymour - 2008 - Stanford Encyclopedia of Philosophy.
  • Common sense as evidence: Against revisionary ontology and skepticism. Thomas Kelly - 2008 - Midwest Studies in Philosophy 32 (1):53-78.
    In this age of post-Moorean modesty, many of us are inclined to doubt that philosophy is in possession of arguments that might genuinely serve to undermine what we ordinarily believe. It may perhaps be conceded that the arguments of the skeptic appear to be utterly compelling; but the Mooreans among us will hold that the very plausibility of our ordinary beliefs is reason enough for supposing that there must be something wrong in the skeptic’s arguments, even if we are unable (...)
  • Formal learning theory. Oliver Schulte - 2008 - Stanford Encyclopedia of Philosophy.
    Formal learning theory is the mathematical embodiment of a normative epistemology. It deals with the question of how an agent should use observations about her environment to arrive at correct and informative conclusions. Philosophers such as Putnam, Glymour and Kelly have developed learning theory as a normative framework for scientific reasoning and inductive inference.
  • Connectionism. James Garson & Cameron Buckner - 2019 - Stanford Encyclopedia of Philosophy.
  • Goodman e o equilíbrio reflexivo [Goodman and Reflective Equilibrium]. Eros Moreira Carvalho - 2013 - Veritas – Revista de Filosofia da PUCRS 58 (3):467-481.
    Goodman held that the mutual adjustment between particular inductive inferences and inductive principles constitutes the only justification needed for both. His characterization of this adjustment, later dubbed “reflective equilibrium”, was nevertheless superficial, which raised doubts about its adequacy. In this article, I argue that reflective equilibrium, correctly characterized, provides the only justification needed for inductive practice, and the best we can give.
  • Reliability in Machine Learning. Thomas Grote, Konstantin Genin & Emily Sullivan - 2024 - Philosophy Compass 19 (5):e12974.
    Issues of reliability are claiming center-stage in the epistemology of machine learning. This paper unifies different branches in the literature and points to promising research directions, whilst also providing an accessible introduction to key concepts in statistics and machine learning – as far as they are concerned with reliability.
  • Simple Models in Complex Worlds: Occam’s Razor and Statistical Learning Theory. Falco J. Bargagli Stoffi, Gustavo Cevolani & Giorgio Gnecco - 2022 - Minds and Machines 32 (1):13-42.
    The idea that “simplicity is a sign of truth”, and the related “Occam’s razor” principle, stating that, all other things being equal, simpler models should be preferred to more complex ones, have been long discussed in philosophy and science. We explore these ideas in the context of supervised machine learning, namely the branch of artificial intelligence that studies algorithms which balance simplicity and accuracy in order to effectively learn about the features of the underlying domain. Focusing on statistical learning theory, (...)
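An editor's illustration of the trade-off this entry describes: the sketch below fits polynomials of increasing degree to noisy data and keeps the degree with the lowest held-out error. It is a minimal sketch, not material from the paper; the synthetic quadratic data, the candidate degrees, and the 50/50 holdout split are all assumptions made for the example.

```python
# Minimal sketch: balancing simplicity (low degree) against accuracy
# by selecting a polynomial degree on held-out data. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy data: a quadratic signal plus Gaussian noise.
x = rng.uniform(-1, 1, 200)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(0.0, 0.3, size=x.shape)

# Hold out half the sample for validation.
x_train, y_train, x_val, y_val = x[:100], y[:100], x[100:], y[100:]

best_degree, best_err = None, np.inf
for degree in range(1, 10):
    coeffs = np.polyfit(x_train, y_train, degree)            # fit on one half
    err = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)  # score on the other
    if err < best_err:
        best_degree, best_err = degree, err

print(f"selected degree: {best_degree} (validation MSE {best_err:.3f})")
```

On data like these, degrees above the true one buy little or no validation accuracy, which is the "all other things being equal, prefer the simpler model" intuition in miniature.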
  • Human Induction in Machine Learning: A Survey of the Nexus. Petr Spelda & Vit Stritecky - 2021 - ACM Computing Surveys 54 (3):1-18.
    As our epistemic ambitions grow, the common and scientific endeavours are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into a training and testing set and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments. Yet (...)
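The single experimental paradigm this abstract describes is easy to show concretely. The following is an editorial sketch, not the authors' code; the synthetic two-blob data, the scikit-learn classifier, and the 80/20 split are all assumptions made for illustration.

```python
# Minimal sketch of the train/test paradigm: train on one split,
# measure generalisation to unseen samples on the other.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Assumed toy data: two Gaussian blobs, one per class.
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(2.0, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

# Hold out 20% of the data as a test set never seen during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)

# This held-out accuracy is the evidence behind the "a posteriori contract":
# if it is acceptable, the model is deemed fit for deployment.
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```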
  • The explanation game: a formal framework for interpretable machine learning. David S. Watson & Luciano Floridi - 2020 - Synthese 198 (10):1-32.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal (...)
  • Why Simpler Computer Simulation Models Can Be Epistemically Better for Informing Decisions. Casey Helgeson, Vivek Srikrishnan, Klaus Keller & Nancy Tuana - 2021 - Philosophy of Science 88 (2):213-233.
    For computer simulation models to usefully inform climate risk management, uncertainties in model projections must be explored and characterized. Because doing so requires running the model many times (...)
  • A statistical learning approach to a problem of induction. Kino Zhao - manuscript
    At its strongest, Hume's problem of induction denies the existence of any well justified assumptionless inductive inference rule. At the weakest, it challenges our ability to articulate and apply good inductive inference rules. This paper examines an analysis that is closer to the latter camp. It reviews one answer to this problem drawn from the VC theorem in statistical learning theory and argues for its inadequacy. In particular, I show that it cannot be computed, in general, whether we are in (...)
  • Machine learning, inductive reasoning, and reliability of generalisations. Petr Spelda - 2020 - AI and Society 35 (1):29-37.
    The present paper shows how statistical learning theory and machine learning models can be used to enhance understanding of AI-related epistemological issues regarding inductive reasoning and reliability of generalisations. Towards this aim, the paper proceeds as follows. First, it expounds Price’s dual image of representation in terms of the notions of e-representations and i-representations that constitute subject naturalism. For Price, this is not a strictly anti-representationalist position but rather a dualist one (e- and i-representations). Second, the paper links this debate (...)
  • Falsificationism and Statistical Learning Theory: Comparing the Popper and Vapnik-Chervonenkis Dimensions. David Corfield, Bernhard Schölkopf & Vladimir Vapnik - 2009 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 40 (1):51-58.
    We compare Karl Popper’s ideas concerning the falsifiability of a theory with similar notions from the part of statistical learning theory known as VC-theory. Popper’s notion of the dimension of a theory is contrasted with the apparently very similar VC-dimension. Having located some divergences, we discuss how best to view Popper’s work from the perspective of statistical learning theory, either as a precursor or as aiming to capture a different learning activity.
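For orientation, the VC-dimension contrasted with Popper's dimension in this entry has a compact textbook definition; the LaTeX below is an editorial addition summarising the standard notion, not material from the paper.

```latex
% A hypothesis class H of binary classifiers shatters a finite set
% S = {x_1, ..., x_n} if it realises every possible labelling of S:
\[
\mathcal{H} \text{ shatters } S
  \iff
  \bigl\{ (h(x_1), \dots, h(x_n)) : h \in \mathcal{H} \bigr\} = \{0,1\}^{n}.
\]
% The VC-dimension is the size of the largest set that H shatters
% (infinite if arbitrarily large sets are shattered):
\[
\mathrm{VCdim}(\mathcal{H}) \;=\; \sup \bigl\{ |S| : \mathcal{H} \text{ shatters } S \bigr\}.
\]
```

For example, threshold classifiers on the real line ($h_t(x) = 1$ iff $x \ge t$) shatter any single point but never two, since the labelling "left point positive, right point negative" is unrealisable; their VC-dimension is therefore 1. Finite VC-dimension is, roughly, what plays the falsifiability-like role in the comparison the paper draws.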
  • Testability and Ockham’s Razor: How Formal and Statistical Learning Theory Converge in the New Riddle of Induction. Daniel Steel - 2009 - Journal of Philosophical Logic 38 (5):471-489.
    Nelson Goodman's new riddle of induction forcefully illustrates a challenge that must be confronted by any adequate theory of inductive inference: provide some basis for choosing among alternative hypotheses that fit past data but make divergent predictions. One response to this challenge is to distinguish among alternatives by means of some epistemically significant characteristic beyond fit with the data. Statistical learning theory takes this approach by showing how a concept similar to Popper's notion of degrees of testability is linked to (...)
  • The myth of language universals: Language diversity and its importance for cognitive science. Nicholas Evans & Stephen C. Levinson - 2009 - Behavioral and Brain Sciences 32 (5):429-448.
    Talk of linguistic universals has given cognitive scientists the impression that languages are all built to a common pattern. In fact, there are vanishingly few universals of language in the direct sense that all languages exhibit them. Instead, diversity can be found at almost every level of linguistic organization. This fundamentally changes the object of enquiry from a cognitive science perspective. This target article summarizes decades of cross-linguistic work by typologists and descriptive linguists, showing just how few and unprofound the (...)
  • With diversity in mind: Freeing the language sciences from Universal Grammar. Nicholas Evans & Stephen C. Levinson - 2009 - Behavioral and Brain Sciences 32 (5):472-492.
    Our response takes advantage of the wide-ranging commentary to clarify some aspects of our original proposal and augment others. We argue against the generative critics of our coevolutionary program for the language sciences, defend the use of close-to-surface models as minimizing cross-linguistic data distortion, and stress the growing role of stochastic simulations in making generalized historical accounts testable. These methods lead the search for general principles away from idealized representations and towards selective processes. Putting cultural evolution central in understanding language (...)
  • Statistical Learning Theory: A Tutorial. Sanjeev R. Kulkarni & Gilbert Harman - 2011 - Wiley Interdisciplinary Reviews: Computational Statistics 3 (6):543-556.
    In this article, we provide a tutorial overview of some aspects of statistical learning theory, which also goes by other names such as statistical pattern recognition, nonparametric classification and estimation, and supervised learning. We focus on the problem of two-class pattern classification for various reasons. This problem is rich enough to capture many of the interesting aspects that are present in the cases of more than two classes and in the problem of estimation, and many of the results can be (...)
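Since the tutorial centres on two-class pattern classification, a bare-bones classifier may help fix ideas. The 1-nearest-neighbour rule below is an editorial sketch; the toy data and the choice of 1-NN (a classic rule in this literature) are assumptions, not anything taken from the tutorial itself.

```python
# Minimal sketch: two-class pattern classification with the
# 1-nearest-neighbour rule.
import numpy as np

def nn_classify(X_train, y_train, x):
    """Label a query point with the class of its nearest training point."""
    distances = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(distances)]

rng = np.random.default_rng(1)

# Assumed toy training sample: one Gaussian blob per class.
X_train = np.vstack([rng.normal(-1.0, 0.5, (50, 2)),
                     rng.normal(1.0, 0.5, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

# Classify two query points, one near each blob.
for x in (np.array([-0.9, -1.1]), np.array([0.9, 1.1])):
    print(x, "->", nn_classify(X_train, y_train, x))
```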
  • Ockham Efficiency Theorem for Stochastic Empirical Methods. Kevin T. Kelly & Conor Mayo-Wilson - 2010 - Journal of Philosophical Logic 39 (6):679-712.
    Ockham’s razor is the principle that, all other things being equal, scientists ought to prefer simpler theories. In recent years, philosophers have argued that simpler theories make better predictions, possess theoretical virtues like explanatory power, and have other pragmatic virtues like computational tractability. However, such arguments fail to explain how and why a preference for simplicity can help one find true theories in scientific inquiry, unless one already assumes that the truth is simple. One new solution to that problem is (...)
  • The epistemological foundations of data science: a critical review. Luciano Floridi, Mariarosaria Taddeo, Vincent Wang, David Watson & Jules Desai - 2022 - Synthese 200 (6):1-27.
    The modern abundance and prominence of data have led to the development of “data science” as a new field of enquiry, along with a body of epistemological reflections upon its foundations, methods, and consequences. This article provides a systematic analysis and critical review of significant open problems and debates in the epistemology of data science. We propose a partition of the epistemology of data science into the following five domains: (i) the constitution of data science; (ii) the kind of enquiry (...)
  • Why Cognitive Science Needs Philosophy and Vice Versa. Paul Thagard - 2009 - Topics in Cognitive Science 1 (2):237-254.
    Contrary to common views that philosophy is extraneous to cognitive science, this paper argues that philosophy has a crucial role to play in cognitive science with respect to generality and normativity. General questions include the nature of theories and explanations, the role of computer simulation in cognitive theorizing, and the relations among the different fields of cognitive science. Normative questions include whether human thinking should be Bayesian, whether decision making should maximize expected utility, and how norms should be established. These (...)
  • Statistical learning theory as a framework for the philosophy of induction. Gilbert Harman & Sanjeev Kulkarni - manuscript
    Statistical Learning Theory (e.g., Hastie et al., 2001; Vapnik, 1998, 2000, 2006) is the basic theory behind contemporary machine learning and data-mining. We suggest that the theory provides an excellent framework for philosophical thinking about inductive inference.
  • Inductive rules are no problem. Daniel Steel - manuscript
    This essay defends the view that inductive reasoning involves following inductive rules against objections that inductive rules are undesirable because they ignore background knowledge and unnecessary because Bayesianism is not an inductive rule. I propose that inductive rules be understood as sets of functions from data to hypotheses that are intended as solutions to inductive problems. According to this proposal, background knowledge is important in the application of inductive rules and Bayesianism qualifies as an inductive rule. Finally, I consider a (...)
  • Judging machines: philosophical aspects of deep learning. Arno Schubbach - 2019 - Synthese 198 (2):1807-1827.
    Although machine learning has been successful in recent years and is increasingly being deployed in the sciences, enterprises or administrations, it has rarely been discussed in philosophy beyond the philosophy of mathematics and machine learning. The present contribution addresses the resulting lack of conceptual tools for an epistemological discussion of machine learning by conceiving of deep learning networks as ‘judging machines’ and using the Kantian analysis of judgments for specifying the type of judgment they are capable of. At the center (...)
  • The Epistemology of Non-distributive Profiles. Patrick Allo - 2020 - Philosophy and Technology 33 (3):379-409.
    The distinction between distributive and non-distributive profiles figures prominently in current evaluations of the ethical and epistemological risks that are associated with automated profiling practices. The diagnosis that non-distributive profiles may coincidentally situate an individual in the wrong category is often perceived as the central shortcoming of such profiles. According to this diagnosis, most risks can be retraced to the use of non-universal generalisations and various other statistical associations. This article develops a top-down analysis of non-distributive profiles in which this (...)
  • Notes on practical reasoning. Gilbert Harman - manuscript
    In these notes, I will use the word “reasoning” to refer to something people do. The general category includes both internal reasoning, reasoning things out by oneself—inference and deliberation—and external reasoning with others—arguing, discussing and negotiating.
  • Inductive Learning in Small and Large Worlds. Simon M. Huttegger - 2017 - Philosophy and Phenomenological Research 95 (1):90-116.
  • Inference to the Best Inductive Practices. Paul Thagard - 2009 - Abstracta 5 (S3):18-26.
    Harman and Kulkarni provide a rigorous and informative discussion of reliable reasoning, drawing philosophical conclusions from the elegant formal results of statistical learning theory. They have presented a strong case that statistical learning theory is highly relevant to issues in philosophy and psychology concerning inductive inferences. Although I agree with their general thrust, I want to take issue with some of the philosophical and psychological conclusions they reach. I will first discuss the general problem of assessing norms and propose a (...)
  • Remarks on Harman and Kulkarni's "Reliable Reasoning". Michael Strevens - 2009 - Abstracta 5 (S3):27-41.
    Reliable Reasoning is a simple, accessible, beautifully explained introduction to Vapnik and Chervonenkis’s statistical learning theory. It includes a modest discussion of the application of the theory to the philosophy of induction; the purpose of these remarks is to say something more.
  • Inductive rules, background knowledge, and skepticism. Daniel Steel & S. Kedzie Hall - unknown
    This essay defends the view that inductive reasoning involves following inductive rules against objections that inductive rules are undesirable because they ignore background knowledge and unnecessary because Bayesianism is not an inductive rule. I propose that inductive rules be understood as sets of functions from data to hypotheses that are intended as solutions to inductive problems. According to this proposal, background knowledge is important in the application of inductive rules and Bayesianism qualifies as an inductive rule. Finally, I consider a (...)
  • Response to Shaffer, Thagard, Strevens and Hanson. Gilbert Harman & Sanjeev Kulkarni - 2009 - Abstracta 5 (S3):47-56.
    Like Glenn Shafer, we are nostalgic for the time when “philosophers, mathematicians, and scientists interested in probability, induction, and scientific methodology talked with each other more than they do now” [p. 10]. Shafer goes on to mention other relevant contemporary communities. He himself has been at the interface of many of these communities while at the same time making major contributions to them and this very symposium represents something of that desired discussion. We begin with a couple of general points (...)
  • Online versions of recently published work. Gilbert Harman - manuscript
    "What Is Cognitive Access?" PDF. Behavioral and Brain Sciences 30 (2007 [published 2008]): 505. Brief comments on a paper of Ned Block's. "Mechanical Mind," a review of Mind as Machine: A History of Cognitive Science by Margaret Boden. Online Published Version . From American Scientist (2008): 76-81.