  • On the equivalence of Goodman’s and Hempel’s paradoxes. Kenneth Boyce - 2014 - Studies in History and Philosophy of Science Part A 45:32-42.
    Historically, Nelson Goodman’s paradox involving the predicates ‘grue’ and ‘bleen’ has been taken to furnish a serious blow to Carl Hempel’s theory of confirmation in particular and to purely formal theories of confirmation in general. In this paper, I argue that Goodman’s paradox is no more serious a threat to Hempel’s theory of confirmation than is Hempel’s own paradox of the ravens. I proceed by developing a suggestion from R. D. Rosenkrantz into an argument for the conclusion that these (...)
  • Hempel's Raven paradox: A lacuna in the standard Bayesian solution. Peter B. M. Vranas - 2004 - British Journal for the Philosophy of Science 55 (3):545-560.
    According to Hempel's paradox, evidence (E) that an object is a nonblack nonraven confirms the hypothesis (H) that every raven is black. According to the standard Bayesian solution, E does confirm H but only to a minute degree. This solution relies on the almost never explicitly defended assumption that the probability of H should not be affected by evidence that an object is nonblack. I argue that this assumption is implausible, and I propose a way out for Bayesians. Introduction Hempel's (...)
  • The no-free-lunch theorems of supervised learning. Tom F. Sterkenburg & Peter D. Grünwald - 2021 - Synthese 199 (3-4):9979-10015.
    The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification. But how could this leave room for a learning theory that shows that some algorithms are better than others? Drawing parallels to the philosophy of induction, we point out that the no-free-lunch results presuppose a conception of learning algorithms as purely data-driven. On this conception, every algorithm must have an inherent inductive bias that wants justification. We argue that many standard learning algorithms should rather (...)
  • Bayesian confirmation: Paradise regained. R. D. Rosenkrantz - 1994 - British Journal for the Philosophy of Science 45 (2):467-476.
  • A New Bayesian Solution to the Paradox of the Ravens. Susanna Rinard - 2014 - Philosophy of Science 81 (1):81-100.
    The canonical Bayesian solution to the ravens paradox faces a problem: it entails that black non-ravens disconfirm the hypothesis that all ravens are black. I provide a new solution that avoids this problem. On my solution, black ravens confirm that all ravens are black, while non-black non-ravens and black non-ravens are neutral. My approach is grounded in certain relations of epistemic dependence, which, in turn, are grounded in the fact that the kind raven is more natural than the kind black. (...)
  • What did Hume really show about induction? Samir Okasha - 2001 - Philosophical Quarterly 51 (204):307-327.
    Many philosophers agree that Hume was not simply objecting to inductive inferences on the grounds of their logical invalidity and that his description of our inductive behaviour was inadequate, but none the less regard his argument against induction as irrefutable. I argue that this constellation of opinions contains a serious tension. In the light of the tension, I re-examine Hume’s actual sceptical argument and show that the argument as it stands is valid but unsound. I argue that it can only (...)
  • Is there a Bayesian justification of hypothetico‐deductive inference? Samir Okasha & Karim Thébault - 2020 - Noûs 54 (4):774-794.
    Many philosophers have claimed that Bayesianism can provide a simple justification for hypothetico-deductive inference, long regarded as a cornerstone of the scientific method. Following up a remark of van Fraassen, we analyze a problem for the putative Bayesian justification of H-D inference in the case where what we learn from observation is logically stronger than what our theory implies. Firstly, we demonstrate that in such cases the simple Bayesian justification does not necessarily apply. Secondly, we identify a set of sufficient (...)
  • Experiment, observation and the confirmation of laws. Samir Okasha - 2011 - Analysis 71 (2):222-232.
    It is customary to distinguish experimental from purely observational sciences. The former include physics and molecular biology, the latter astronomy and palaeontology. Experiments involve actively intervening in the course of nature, as opposed to observing events that would have happened anyway. When a molecular biologist inserts viral DNA into a bacterium in his laboratory, this is an experiment; but when an astronomer points his telescope at the heavens, this is an observation. Without the biologist’s handiwork the bacterium would never have (...)
  • Inductive logic and the ravens paradox. Patrick Maher - 1999 - Philosophy of Science 66 (1):50-70.
    Hempel's paradox of the ravens arises from the inconsistency of three prima facie plausible principles of confirmation. This paper uses Carnapian inductive logic to (a) identify which of the principles is false, (b) give insight into why this principle is false, and (c) identify a true principle that is sufficiently similar to the false one that failure to distinguish the two might explain why the false principle is prima facie plausible. This solution to the paradox is compared with a variety (...)
  • Reliabilism and induction. Michael Levin - 1993 - Synthese 97 (3):297-334.
  • On probabilism and induction. John Hosack - 1991 - Topoi 10 (2):227-229.
    R. C. Jeffrey has proposed probabilism as a solution to Hume's problem of justifying induction. This paper shows that the assumptions of his Estimation Theorem, used to justify induction, can be weakened to provide a more satisfactory interpretation. It is also questioned whether the use of probabilism adds significantly to our understanding (or even Hume's understanding) of the problem of induction.
  • Truth does not explain predictive success. Carsten Held - 2011 - Analysis 71 (2):232-234.
    Laudan famously argued that neither truth nor approximate truth can be part of an explanation of a scientific theory's predictive success because in the history of science there were theories that enjoyed some limited success but now are considered outright false. The power of his argument lay in the many historical examples he listed. Realists have disputed that all theories on Laudan's list can be regarded as predictively successful, but let's suppose momentarily that at least some exist that support (...)
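
Several of the entries above (Vranas 2004, Rosenkrantz 1994, Rinard 2014, Maher 1999) turn on the standard Bayesian treatment of the ravens paradox, on which a nonblack nonraven confirms "all ravens are black" (H), but only to a minute degree. Below is a minimal numerical sketch of that treatment, not taken from any of the papers listed: the object counts, the prior, and the single-nonblack-raven rival hypothesis are purely illustrative assumptions.

```python
# Minimal sketch of the standard Bayesian treatment of the ravens paradox.
# All numbers are toy assumptions, not drawn from any of the papers above.

N = 1_000_000        # objects in the toy world
black = 400_000      # black objects
prior_H = 0.5        # assumed prior for H: "all ravens are black"

# Likelihood of drawing a nonblack nonraven at random:
#  - under H, every nonblack object is a nonraven;
#  - under the rival hypothesis (assumed here for illustration), exactly one
#    raven is nonblack, so one nonblack object fails to be a nonraven.
p_E_given_H = (N - black) / N
p_E_given_notH = (N - black - 1) / N

# Bayes' theorem.
p_E = prior_H * p_E_given_H + (1 - prior_H) * p_E_given_notH
posterior_H = prior_H * p_E_given_H / p_E

print(f"P(H | nonblack nonraven) = {posterior_H:.8f}")   # ~0.50000042
```

The posterior barely exceeds the prior, which is the "minute degree" of confirmation the standard solution claims. The point pressed in the Vranas entry is that this result depends on assuming that the probability of drawing a nonblack object is (nearly) unaffected by H; relax that assumption and the numbers change.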