  • On the computational complexity of ethics: moral tractability for minds and machines. Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative (...)
  • Simple Models in Complex Worlds: Occam’s Razor and Statistical Learning Theory. Falco J. Bargagli Stoffi, Gustavo Cevolani & Giorgio Gnecco - 2022 - Minds and Machines 32 (1):13-42.
    The idea that “simplicity is a sign of truth”, and the related “Occam’s razor” principle, stating that, all other things being equal, simpler models should be preferred to more complex ones, have long been discussed in philosophy and science. We explore these ideas in the context of supervised machine learning, namely the branch of artificial intelligence that studies algorithms which balance simplicity and accuracy in order to effectively learn about the features of the underlying domain. Focusing on statistical learning theory, (...)
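    A minimal numerical sketch of the trade-off described in the entry above (my own illustration, not code from the paper): on noisy data generated by a simple rule, a low-degree polynomial typically generalizes better to unseen inputs than a high-degree one that fits the training sample more closely. The data, degrees, and seed are arbitrary choices for illustration.

    # Occam's-razor sketch: simple vs. complex polynomial fits on noisy data.
    # Hypothetical data and model degrees; only numpy's polyfit/polyval are used.
    import numpy as np

    rng = np.random.default_rng(0)
    true_f = lambda x: 2.0 * x + 1.0                      # simple underlying rule
    x_train = np.linspace(0.0, 1.0, 10)
    y_train = true_f(x_train) + rng.normal(0.0, 0.3, 10)  # noisy training sample
    x_test = np.linspace(0.0, 1.0, 200)
    y_test = true_f(x_test)                               # noise-free test targets

    for degree in (1, 6):                                 # simpler vs. more complex model
        coeffs = np.polyfit(x_train, y_train, degree)
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(degree, round(float(train_err), 3), round(float(test_err), 3))
    # The degree-6 fit attains lower training error but usually higher test error.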
  • The no-free-lunch theorems of supervised learning. Tom F. Sterkenburg & Peter D. Grünwald - 2021 - Synthese 199 (3-4):9979-10015.
    The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification. But how could this leave room for a learning theory that shows that some algorithms are better than others? Drawing parallels to the philosophy of induction, we point out that the no-free-lunch results presuppose a conception of learning algorithms as purely data-driven. On this conception, every algorithm must have an inherent inductive bias that wants justification. We argue that many standard learning algorithms should rather (...)
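    To make the no-free-lunch result above concrete, here is a small sketch (my own illustration, not the authors' code): averaged uniformly over all possible labelings of the unseen inputs, any prediction rule's off-training-set accuracy is exactly 1/2, so an "inductive" learner and its anti-inductive mirror image come out equally well. The training sample and learners are hypothetical.

    # NFL sketch: average off-training-set accuracy over every possible "world".
    from itertools import product

    def off_training_set_accuracy(predict, train, test_inputs):
        """Accuracy of `predict` on test_inputs, averaged uniformly over all
        2^n ways the unseen labels could turn out."""
        n = len(test_inputs)
        total = 0.0
        for labels in product([0, 1], repeat=n):          # every possible labeling
            correct = sum(predict(train, x) == y
                          for x, y in zip(test_inputs, labels))
            total += correct / n
        return total / 2 ** n

    # "Induction": predict the majority training label; "anti-induction": its opposite.
    majority = lambda train, x: int(2 * sum(y for _, y in train) >= len(train))
    anti = lambda train, x: 1 - majority(train, x)

    train = [(0, 1), (1, 1), (2, 1)]                      # hypothetical training sample
    test_inputs = [3, 4, 5]
    print(off_training_set_accuracy(majority, train, test_inputs))  # 0.5
    print(off_training_set_accuracy(anti, train, test_inputs))      # 0.5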
  • Metainduction over Unboundedly Many Prediction Methods: A Reply to Arnold and Sterkenburg. Gerhard Schurz - 2021 - Philosophy of Science 88 (2):320-340.
    The universal optimality theorem for metainduction works for epistemic agents faced with a choice among finitely many prediction methods. Eckhart Arnold and Tom Sterkenburg objected that it breaks...
  • The meta-inductive justification of induction. Tom F. Sterkenburg - 2020 - Episteme 17 (4):519-541.
    I evaluate Schurz's proposed meta-inductive justification of induction, a refinement of Reichenbach's pragmatic justification that rests on results from the machine learning branch of prediction with expert advice. My conclusion is that the argument, suitably explicated, comes remarkably close to its grand aim: an actual justification of induction. This finding, however, is subject to two main qualifications, and still disregards one important challenge. The first qualification concerns the empirical success of induction. Even though, I argue, Schurz's argument does not need (...)
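    The "prediction with expert advice" machinery mentioned in the abstract above can be illustrated with a short sketch (my own, not Sterkenburg's or Schurz's code): an exponentially weighted mixture of prediction methods accumulates a squared-error loss close to that of the best method on the actual sequence, whichever method that turns out to be. The sequence, methods, and learning rate are hypothetical.

    # Meta-induction sketch: exponentially weighted mixture of prediction methods.
    import math

    def weighted_meta_forecaster(method_preds, outcomes, eta=1.0):
        """method_preds[t][i] is method i's prediction in [0, 1] at time t.
        Returns each method's cumulative squared loss and the mixture's loss."""
        n = len(method_preds[0])
        weights = [1.0] * n
        method_loss = [0.0] * n
        meta_loss = 0.0
        for preds, y in zip(method_preds, outcomes):
            total = sum(weights)
            meta_pred = sum(w * p for w, p in zip(weights, preds)) / total
            meta_loss += (meta_pred - y) ** 2
            for i, p in enumerate(preds):
                loss = (p - y) ** 2
                method_loss[i] += loss
                weights[i] *= math.exp(-eta * loss)       # downweight poor predictors
        return method_loss, meta_loss

    # Hypothetical methods on a mostly-1 sequence: "predict the last outcome" vs. "always 0".
    outcomes = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1] * 10
    preds, last = [], 0.5
    for y in outcomes:
        preds.append([last, 0.0])
        last = float(y)

    method_loss, meta_loss = weighted_meta_forecaster(preds, outcomes)
    print([round(l, 1) for l in method_loss], round(meta_loss, 1))
    # The mixture's loss stays close to the better method's, as the optimality results promise.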
  • Fool me once: Can indifference vindicate induction? Zach Barnett & Han Li - 2018 - Episteme 15 (2):202-208.
    Roger White (2015) sketches an ingenious new solution to the problem of induction. He argues from the principle of indifference for the conclusion that the world is more likely to be induction-friendly than induction-unfriendly. But there is reason to be skeptical about the proposed indifference-based vindication of induction. It can be shown that, in the crucial test cases White concentrates on, the assumption of indifference renders induction no more accurate than random guessing. After discussing this result, the paper explains (...)
  • No free theory choice from machine learning. Bruce Rushing - 2022 - Synthese 200 (5):1-21.
    Ravit Dotan argues that a No Free Lunch theorem from machine learning shows epistemic values are insufficient for deciding the truth of scientific hypotheses. She argues that NFL shows that the best-case accuracy of scientific hypotheses is no more than chance. Since accuracy underpins every epistemic value, non-epistemic values are needed to assess the truth of scientific hypotheses. However, NFL cannot be coherently applied to the problem of theory choice. The NFL theorem Dotan’s argument relies upon is a member (...)
  • Explaining the Success of Induction. Igor Douven - 2023 - British Journal for the Philosophy of Science 74 (2):381-404.
    It is undeniable that inductive reasoning has brought us much good. At least since Hume, however, philosophers have wondered how to justify our reliance on induction. In important recent work, Schurz points out that philosophers have been wrongly assuming that justifying induction is tantamount to showing induction to be reliable. According to him, to justify our reliance on induction, it is enough to show that induction is optimal. His optimality approach consists of two steps: an analytic argument for meta-induction (that (...)
  • Does Meta-induction Justify Induction: Or Maybe Something Else? J. Brian Pitts - 2023 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 54 (3):393-419.
    According to the Feigl–Reichenbach–Salmon–Schurz pragmatic justification of induction, no predictive method is guaranteed or even likely to work for predicting the future; but if anything will work, induction will work—at least when induction is employed at the meta-level of predictive methods in light of their track records. One entertains a priori all manner of esoteric prediction methods, and is said to arrive a posteriori at the conclusion, based on the actual past, that object-level induction is optimal. Schurz’s refinements largely solve (...)
  • The No Free Lunch Theorem: Bad News for (White's Account of) the Problem of Induction. Gerhard Schurz - 2021 - Episteme 18 (1):31-45.
    White (2015) proposes an a priori justification of the reliability of inductive prediction methods based on his thesis of induction-friendliness. It asserts that there are by far more induction-friendly event sequences than induction-unfriendly event sequences. In this paper I contrast White's thesis with the famous no free lunch (NFL) theorem. I explain two versions of this theorem, the strong NFL theorem applying to binary predictions and the weak NFL theorem applying to real-valued predictions. I show that both versions refute the thesis (...)
  • The evolving hierarchy of naturalized philosophy: A metaphilosophical sketch. Luca Rivelli - 2024 - Metaphilosophy 55 (3):285-300.
    Some scholars claim that epistemology of science and machine learning are actually overlapping disciplines studying induction, respectively affected by Hume's problem of induction and its formal machine-learning counterpart, the “no-free-lunch” (NFL) theorems, to which even advanced AI systems such as LLMs are not immune. Extending Kevin Korb's view, this paper envisions a hierarchy of disciplines where the lowermost is a basic science, and, recursively, the metascience at each level inductively learns which methods work best at the immediately lower level. Due (...)
  • Cognitive Success: A Consequentialist Account of Rationality in Cognition. Gerhard Schurz & Ralph Hertwig - 2019 - Topics in Cognitive Science 11 (1):7-36.
    One of the most discussed issues in psychology—presently and in the past—is how to define and measure the extent to which human cognition is rational. The rationality of human cognition is often evaluated in terms of normative standards based on a priori intuitions. Yet this approach has been challenged by two recent developments in psychology that we review in this article: ecological rationality and descriptivism. Going beyond these contributions, we consider it a good moment for psychologists and philosophers to join (...)