References
  • Testability and Ockham’s Razor: How Formal and Statistical Learning Theory Converge in the New Riddle of Induction. Daniel Steel - 2009 - Journal of Philosophical Logic 38 (5): 471-489.
    Nelson Goodman's new riddle of induction forcefully illustrates a challenge that must be confronted by any adequate theory of inductive inference: provide some basis for choosing among alternative hypotheses that fit past data but make divergent predictions. One response to this challenge is to distinguish among alternatives by means of some epistemically significant characteristic beyond fit with the data. Statistical learning theory takes this approach by showing how a concept similar to Popper's notion of degrees of testability is linked to (...)
  • Mind Changes and Testability: How Formal and Statistical Learning Theory Converge in the New Riddle of Induction. Daniel Steel - manuscript.
    This essay demonstrates a previously unnoticed connection between formal and statistical learning theory with regard to Nelson Goodman’s new riddle of induction. Discussions of Goodman’s riddle in formal learning theory explain how conjecturing “all green” before “all grue” can enhance efficient convergence to the truth, where efficiency is understood in terms of minimizing the maximum number of retractions or “mind changes.” Vapnik-Chervonenkis (VC) dimension is a central concept in statistical learning theory and is similar to Popper’s notion of degrees of (...)
  • Efficient Convergence Implies Ockham’s Razor. Kevin Kelly - 2002 - Proceedings of the 2002 International Workshop on Computational Models of Scientific Reasoning and Applications.
    A finite data set is consistent with infinitely many alternative theories. Scientific realists recommend that we prefer the simplest one. Anti-realists ask how a fixed simplicity bias could track the truth when the truth might be complex. It is no solution to impose a prior probability distribution biased toward simplicity, for such a distribution merely embodies the bias at issue without explaining its efficacy. In this note, I argue, on the basis of computational learning theory, that a fixed simplicity bias (...)
  • What If the Principle of Induction Is Normative? Formal Learning Theory and Hume’s Problem. Daniel Steel & S. Kedzie Hall - 2010 - International Studies in the Philosophy of Science 24 (2): 171-185.
    This article argues that a successful answer to Hume's problem of induction can be developed from a sub-genre of philosophy of science known as formal learning theory. One of the central concepts of formal learning theory is logical reliability: roughly, a method is logically reliable when it is assured of eventually settling on the truth for every sequence of data that is possible given what we know. I show that the principle of induction (PI) is necessary and sufficient for logical (...)
  • No Answer to Hume. Colin Howson - 2011 - International Studies in the Philosophy of Science 25 (3): 279-284.
    In a recent article in this journal, Daniel Steel charges me with committing a fallacy in my discussion of inductive rules. I show that the charge is false, and that Steel's own attempt to validate enumerative induction in terms of formal learning theory is itself fallacious. I go on to argue that, contra Steel, formal learning theory is in principle incapable of answering Hume's famous claim that any attempt to justify induction must beg the question.