References
  • Karl Popper, Science and Enlightenment. Nicholas Maxwell - 2017 - London: UCL Press.
    Karl Popper is famous for having proposed that science advances by a process of conjecture and refutation. He is also famous for defending the open society against what he saw as its arch enemies – Plato and Marx. Popper’s contributions to thought are of profound importance, but they are not the last word on the subject. They need to be improved. My concern in this book is to spell out what is of greatest importance in Popper’s work, what its failings (...)
  • Probability and Inductive Logic. Antony Eagle - manuscript.
    Reasoning from inconclusive evidence, or ‘induction’, is central to science and any applications we make of it. For that reason alone it demands the attention of philosophers of science. This Element explores the prospects of using probability theory to provide an inductive logic, a framework for representing evidential support. Constraints on the ideal evaluation of hypotheses suggest that overall support for a hypothesis is represented by its probability in light of the total evidence, and incremental support, or confirmation, indicated by (...)
  • A Dilemma for Solomonoff Prediction. Sven Neth - 2023 - Philosophy of Science 90 (2):288-306.
    The framework of Solomonoff prediction assigns prior probability to hypotheses inversely proportional to their Kolmogorov complexity. There are two well-known problems. First, the Solomonoff prior is relative to a choice of Universal Turing machine. Second, the Solomonoff prior is not computable. However, there are responses to both problems. Different Solomonoff priors converge with more and more data. Further, there are computable approximations to the Solomonoff prior. I argue that there is a tension between these two responses. This is because computable (...)
  • PAC Learning and Occam’s Razor: Probably Approximately Incorrect. Daniel A. Herrmann - 2020 - Philosophy of Science 87 (4):685-703.
    Computer scientists have provided a distinct justification of Occam’s Razor. Using the probably approximately correct framework, they provide a theorem that they claim demonstrates that we should favor simpler hypotheses. The argument relies on a philosophical interpretation of the theorem. I argue that the standard interpretation of the result in the literature is misguided and that a better reading does not, in fact, support Occam’s Razor at all. To this end, I state and prove a very similar theorem that, if (...)
  • Is Behavioural Flexibility Evidence of Cognitive Complexity? How Evolution Can Inform Comparative Cognition. Irina Mikhalevich, Russell Powell & Corina Logan - 2017 - Interface Focus 7.
    Behavioural flexibility is often treated as the gold standard of evidence for more sophisticated or complex forms of animal cognition, such as planning, metacognition and mindreading. However, the evidential link between behavioural flexibility and complex cognition has not been explicitly or systematically defended. Such a defence is particularly pressing because observed flexible behaviours can frequently be explained by putatively simpler cognitive mechanisms. This leaves complex cognition hypotheses open to ‘deflationary’ challenges that are accorded greater evidential weight precisely because they offer (...)
  • Statistical Learning Theory and Occam’s Razor: The Core Argument. Tom F. Sterkenburg - 2024 - Minds and Machines 35 (1):1-28.
    Statistical learning theory is often associated with the principle of Occam’s razor, which recommends a simplicity preference in inductive inference. This paper distills the core argument for simplicity obtainable from statistical learning theory, built on the theory’s central learning guarantee for the method of empirical risk minimization. This core “means-ends” argument is that a simpler hypothesis class or inductive model is better because it has better learning guarantees; however, these guarantees are model-relative and so the theoretical push towards simplicity is (...)
  • Putnam’s Diagonal Argument and the Impossibility of a Universal Learning Machine. Tom F. Sterkenburg - 2019 - Erkenntnis 84 (3):633-656.
    Putnam construed the aim of Carnap’s program of inductive logic as the specification of a “universal learning machine,” and presented a diagonal proof against the very possibility of such a thing. Yet the ideas of Solomonoff and Levin lead to a mathematical foundation of precisely those aspects of Carnap’s program that Putnam took issue with, and in particular, resurrect the notion of a universal mechanical rule for induction. In this paper, I take up the question whether the Solomonoff–Levin proposal is (...)