
Citations of:

Inductive learning by machines

Philosophical Studies 64 (October):37-64 (1991)

  • Choosing from competing theories in computerised learning. Abraham Meidan & Boris Levin - 2002 - Minds and Machines 12 (1):119-129.
    In this paper we refer to a machine learning method that reveals all the if–then rules in the data, and on the basis of these rules issues predictions for new cases. When issuing predictions this method faces the problem of choosing from competing theories. We dealt with this problem by calculating the probability that the rule is accidental. The lower this probability, the more the rule can be 'trusted' when issuing predictions. The method was tested empirically and found to be (...)
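The abstract does not give Meidan and Levin's exact formula, but the idea of scoring a rule by "the probability that the rule is accidental" can be sketched with a one-sided binomial tail test: how likely is the rule's observed hit rate if it had no real predictive power beyond the class base rate? This is an illustrative assumption, not the paper's actual method; the names and parameters below are hypothetical.

```python
from math import comb

def p_accidental(hits: int, trials: int, base_rate: float) -> float:
    """Probability of at least `hits` successes in `trials` cases by
    chance alone, if the rule merely guessed at the class base rate
    (one-sided binomial tail)."""
    return sum(
        comb(trials, k) * base_rate**k * (1 - base_rate) ** (trials - k)
        for k in range(hits, trials + 1)
    )

# Two competing rules that both match a new case:
rule_a = p_accidental(hits=18, trials=20, base_rate=0.5)  # well-supported rule
rule_b = p_accidental(hits=3, trials=4, base_rate=0.5)    # scant evidence

# Prefer the rule with the lower probability of being accidental.
best = min([("A", rule_a), ("B", rule_b)], key=lambda t: t[1])
```

Under this sketch, rule A's strong record makes its accidental-success probability far smaller than rule B's, so rule A's prediction would be trusted when the two rules disagree.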
  • Statistical Learning Theory and Occam’s Razor: The Core Argument. Tom F. Sterkenburg - 2024 - Minds and Machines 35 (1):1-28.
    Statistical learning theory is often associated with the principle of Occam’s razor, which recommends a simplicity preference in inductive inference. This paper distills the core argument for simplicity obtainable from statistical learning theory, built on the theory’s central learning guarantee for the method of empirical risk minimization. This core “means-ends” argument is that a simpler hypothesis class or inductive model is better because it has better learning guarantees; however, these guarantees are model-relative and so the theoretical push towards simplicity is (...)
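The abstract's "central learning guarantee" for empirical risk minimization is not spelled out there; a standard guarantee of this kind, for a finite hypothesis class $\mathcal{H}$ and an i.i.d. sample of size $n$ (Hoeffding's inequality plus a union bound), is:

$$\Pr\!\left(\forall h \in \mathcal{H}:\; R(h) \le \hat{R}_n(h) + \sqrt{\frac{\ln|\mathcal{H}| + \ln(1/\delta)}{2n}}\right) \ge 1 - \delta$$

Here $R(h)$ is the true risk and $\hat{R}_n(h)$ the empirical risk. A smaller (simpler) class $\mathcal{H}$ tightens the bound, which is the "means-ends" push toward simplicity; but the guarantee holds only relative to the chosen $\mathcal{H}$, which is the model-relativity the paper highlights.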
  • The no-free-lunch theorems of supervised learning. Tom F. Sterkenburg & Peter D. Grünwald - 2021 - Synthese 199 (3-4):9979-10015.
    The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification. But how could this leave room for a learning theory that shows that some algorithms are better than others? Drawing parallels to the philosophy of induction, we point out that the no-free-lunch results presuppose a conception of learning algorithms as purely data-driven. On this conception, every algorithm must have an inherent inductive bias that wants justification. We argue that many standard learning algorithms should rather (...)
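The no-free-lunch result the abstract discusses can be demonstrated concretely on a toy domain: averaged uniformly over every possible target function, any two learners have the same off-training-set accuracy, no matter how sensibly one of them uses the data. A minimal sketch (the domain, learners, and averaging scheme are illustrative assumptions):

```python
from itertools import product

domain = [0, 1, 2]   # tiny input space
train_x = [0, 1]     # training inputs
test_x = 2           # the single off-training-set input

def learner_majority(train_labels):
    # Predict the majority training label (ties broken toward 1).
    return 1 if sum(train_labels) * 2 >= len(train_labels) else 0

def learner_constant(train_labels):
    # Ignore the data entirely and always predict 0.
    return 0

def average_ots_accuracy(learner):
    # Average off-training-set accuracy over every possible
    # binary target function on the domain (2**3 = 8 targets).
    targets = list(product([0, 1], repeat=len(domain)))
    correct = sum(
        learner([t[x] for x in train_x]) == t[test_x] for t in targets
    )
    return correct / len(targets)
```

Both learners average exactly 0.5: the target's value at the unseen point is independent of the training labels, so any data-driven prediction rule does no better on average than a constant guess. This is the skeptical premise whose scope the paper disputes.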
  • Induction, focused sampling and the law of small numbers. Joel Pust - 1996 - Synthese 108 (1):89-104.
    Hilary Kornblith (1993) has recently offered a reliabilist defense of the use of the Law of Small Numbers in inductive inference. In this paper I argue that Kornblith's defense of this inferential rule fails for a number of reasons. First, I argue that the sort of inferences that Kornblith seeks to justify are not really inductive inferences based on small samples. Instead, they are knowledge-based deductive inferences. Second, I address Kornblith's attempt to find support in the work of Dorrit Billman (...)
  • Machine learning and the foundations of inductive inference. Francesco Bergadano - 1993 - Minds and Machines 3 (1):31-51.
    The problem of valid induction could be stated as follows: are we justified in accepting a given hypothesis on the basis of observations that frequently confirm it? The present paper argues that this question is relevant for the understanding of Machine Learning, but insufficient. Recent research in inductive reasoning has prompted another, more fundamental question: there is not just one given rule to be tested; there are a large number of possible rules, and many of these are somehow confirmed by (...)