Contents: 19 entries found
  1. Practical foundations for probability: Prediction methods and calibration. Benedikt Höltgen - manuscript
    Although probabilistic statements are ubiquitous, probability is still poorly understood. This shows itself, for example, in the mere stipulation of policies like expected utility maximisation and in disagreements about the correct interpretation of probability. In this work, we provide an account of probabilistic predictions that explains when, how, and why they can be useful for decision-making. We demonstrate that a calibration criterion on finite sets of predictions allows one to anticipate the distribution of utilities that a given policy will yield. (...)
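    The calibration criterion itself is not spelled out in the truncated abstract above. As a generic illustration only (a standard binned calibration check, assumed for exposition and not necessarily the authors' criterion), the sketch below groups a finite set of probabilistic predictions into bins and compares each bin's average predicted probability with the observed relative frequency of the event.

      # Illustrative sketch: binned calibration check for a finite set of
      # probabilistic predictions (generic method, not necessarily the paper's criterion).
      from collections import defaultdict

      def calibration_table(predictions, outcomes, n_bins=5):
          """Per bin of predicted probability: count, mean prediction, observed frequency."""
          bins = defaultdict(list)
          for p, y in zip(predictions, outcomes):
              bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
          return [(b, len(v),
                   sum(p for p, _ in v) / len(v),
                   sum(y for _, y in v) / len(v))
                  for b, v in sorted(bins.items())]

      # Hypothetical forecasts (probabilities of an event) and 0/1 outcomes.
      preds = [0.1, 0.15, 0.7, 0.75, 0.8, 0.2, 0.9, 0.65]
      obs   = [0,   0,    1,   1,    0,   0,   1,   1]
      for b, n, mean_pred, freq in calibration_table(preds, obs):
          print(f"bin {b}: n={n}, mean prediction={mean_pred:.2f}, observed frequency={freq:.2f}")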
  2. Hume's Fallacy: Miracles, Probability, and Frequency. Paul Mayer - manuscript
    Frequency-based arguments against rational belief in a miracle occurring have been present for centuries, the most notable being David Hume's. In this essay, I will show that Hume's argument rests on an equivocation on probability: he uses the term interchangeably to refer to two different and incompatible perspectives, Bayesianism and Frequentism. Additionally, I will show that any frequentist argument against miracles relies on a view of probability that is only dubiously linked to rationality. In other words, the frequentist cannot (...)
  3. Maximum Likelihood is Likely Wrong. Paul Mayer - manuscript
    It is argued that Maximum Likelihood Estimation (MLE) is wrong, both conceptually and in terms of the results it produces (except in two very special cases, which are discussed). While the use of MLE can still be justified on the basis of its practical performance, we argue that there are better estimation methods that overcome MLE's empirical and philosophical shortcomings while retaining all of MLE's benefits.
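    For orientation, in the simplest Bernoulli setting the maximum likelihood estimate of a success probability is just the observed proportion of successes, which can be extreme on small samples (3 heads in 3 tosses gives an MLE of 1.0). The sketch below contrasts it with a Laplace-smoothed estimate; this is a generic textbook comparison, not the paper's argument or its proposed alternative.

      # Illustrative sketch: MLE vs. a smoothed estimator for a Bernoulli parameter.
      # (Generic textbook comparison; the paper's preferred alternatives may differ.)

      def mle_bernoulli(successes, trials):
          """Maximum likelihood estimate: the observed proportion of successes."""
          return successes / trials

      def laplace_bernoulli(successes, trials):
          """Laplace's rule of succession: posterior mean under a uniform prior."""
          return (successes + 1) / (trials + 2)

      for k, n in [(3, 3), (0, 5), (7, 10)]:
          print(f"{k}/{n} successes: MLE = {mle_bernoulli(k, n):.3f}, "
                f"Laplace = {laplace_bernoulli(k, n):.3f}")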
  4. Preregistration Does Not Improve the Transparent Evaluation of Severity in Popper’s Philosophy of Science or When Deviations are Allowed. Mark Rubin - manuscript
    One justification for preregistering research hypotheses, methods, and analyses is that it improves the transparent evaluation of the severity of hypothesis tests. In this article, I consider two cases in which preregistration does not improve this evaluation. First, I argue that, although preregistration can facilitate the transparent evaluation of severity in Mayo’s error statistical philosophy of science, it does not facilitate this evaluation in Popper’s theory-centric approach. To illustrate, I show that associated concerns about Type I error rate inflation are (...)
  5. The concept of probability in physics: an analytic version of von Mises’ interpretation. Louis Vervoort - manuscript
    In the following we will investigate whether von Mises’ frequency interpretation of probability can be modified to make it philosophically acceptable. We will reject certain elements of von Mises’ theory, but retain others. In the interpretation we propose, we do not use von Mises’ often criticized ‘infinite collectives’, but we retain two essential claims of his interpretation, namely that probability can only be defined for events that can be repeated in similar conditions, and that exhibit frequency stabilization. The central idea (...)
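    The 'frequency stabilization' requirement can be pictured with a toy simulation (offered only as an illustration of the informal idea, not of the author's analytic account): the running relative frequency of an outcome settles down as a repeatable experiment is run under similar conditions more and more often.

      # Illustrative sketch: running relative frequency stabilizing over repeated
      # similar trials (toy simulation with an assumed underlying tendency of 0.3).
      import random

      random.seed(0)
      p_true = 0.3
      successes = 0
      for n in range(1, 100_001):
          successes += random.random() < p_true
          if n in (10, 100, 1_000, 10_000, 100_000):
              print(f"after {n:>6} trials: relative frequency = {successes / n:.4f}")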
  6. That Does Not Compute: David Lewis on Credence and Chance. Gordon Belot - forthcoming - Philosophy of Science.
    Like Lewis, many philosophers hold reductionist accounts of chance (on which claims about chance are to be understood as claims that certain patterns of events are instantiated) and maintain that rationality requires that credence should defer to chance (in the sense that under certain circumstances one's credence in an event must coincide with the chance of that event). It is a shortcoming of an account of chance if it implies that this norm of rationality is unsatisfiable by computable agents. This (...)
    (1 citation)
  7. Type I error rates are not usually inflated. Mark Rubin - 2024 - Journal of Trial and Error 1.
    The inflation of Type I error rates is thought to be one of the causes of the replication crisis. Questionable research practices such as p-hacking are thought to inflate Type I error rates above their nominal level, leading to unexpectedly high levels of false positives in the literature and, consequently, unexpectedly low replication rates. In this article, I offer an alternative view. I argue that questionable and other research practices do not usually inflate relevant Type I error rates. I begin (...)
    (2 citations)
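    For background on the inflation claim that this article pushes back against: if a single true null hypothesis is tested on several outcome variables and only the smallest p-value is reported, the chance of at least one p < .05 exceeds .05. The simulation below is the standard multiple-testing illustration of that point, not a reconstruction of Rubin's argument about which error rates are relevant.

      # Illustrative sketch: selecting the smallest of several p-values inflates the
      # familywise Type I error rate above the nominal .05 level (standard simulation).
      import math, random, statistics

      def p_two_sided_z(sample, mu0=0.0, sigma=1.0):
          """Two-sided z-test p-value against H0: mu = mu0, with known sigma."""
          z = (statistics.fmean(sample) - mu0) / (sigma / math.sqrt(len(sample)))
          return math.erfc(abs(z) / math.sqrt(2))

      random.seed(1)
      reps, k, n, alpha = 5_000, 5, 20, 0.05
      hits = 0
      for _ in range(reps):
          # All k outcome variables are generated under the null hypothesis.
          p_values = [p_two_sided_z([random.gauss(0, 1) for _ in range(n)])
                      for _ in range(k)]
          hits += min(p_values) < alpha
      print(f"nominal alpha = {alpha}, observed familywise rate = {hits / reps:.3f}")
      # For 5 independent tests, roughly 1 - 0.95**5, i.e. about 0.23.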
  8. Hypothetical Frequencies as Approximations. Jer Steeger - 2024 - Erkenntnis 89 (4):1295-1325.
    Hájek (Erkenntnis 70(2):211–235, 2009) argues that probabilities cannot be the limits of relative frequencies in counterfactual infinite sequences. I argue for a different understanding of these limits, drawing on Norton’s (Philos Sci 79(2):207–232, 2012) distinction between approximations (inexact descriptions of a target) and idealizations (separate models that bear analogies to the target). Then, I adapt Hájek’s arguments to this new context. These arguments provide excellent reasons not to use hypothetical frequencies as idealizations, but no reason not to use them as (...)
  9. Probability and Informed Consent. Nir Ben-Moshe, Benjamin A. Levinstein & Jonathan Livengood - 2023 - Theoretical Medicine and Bioethics 44 (6):545-566.
    In this paper, we illustrate some serious difficulties involved in conveying information about uncertain risks and securing informed consent for risky interventions in a clinical setting. We argue that in order to secure informed consent for a medical intervention, physicians often need to do more than report a bare, numerical probability value. When probabilities are given, securing informed consent generally requires communicating how probability expressions are to be interpreted and communicating something about the quality and quantity of the evidence for (...)
  10. Westphal, Kenneth, Kant’s Critical Epistemology: Why Epistemology Must Consider Judgment First. [REVIEW] Ekin Erkan - 2021 - Argumenta 12:366-373.
    Book Review of Kenneth Westphal's Kant’s Critical Epistemology: Why Epistemology Must Consider Judgment First.
  11. Reviving Frequentism. Mario Hubert - 2021 - Synthese 199:5255–5284.
    Philosophers now seem to agree that frequentism is an untenable strategy to explain the meaning of probabilities. Nevertheless, I want to revive frequentism, and I will do so by grounding probabilities on typicality in the same way as the thermodynamic arrow of time can be grounded on typicality within statistical mechanics. This account, which I will call typicality frequentism, will evade the major criticisms raised against previous forms of frequentism. In this theory, probabilities arise within a physical theory from statistical (...)
    (6 citations)
  12. Laura Papish, Kant on Evil, Self-Deception, and Moral Reform. [REVIEW] Samuel Kahn - 2021 - Ethics 132 (1):266-269.
    Laura Papish’s Kant on Evil, Self-Deception, and Moral Reform is an ambitious attempt to breathe new life into old debates and a welcome contribution to a recent renaissance of interest in Kant’s theory of evil. The book has eight chapters, and these chapters fall into three main divisions. Chapters 1 and 2 focus on the psychology of nonmoral and immoral action. Chapters 3, 4, and 5 focus on self-deception, evil, and dissimulation. And chapters 6, 7, and 8 focus on self-cognition, (...)
  13. What type of Type I error? Contrasting the Neyman–Pearson and Fisherian approaches in the context of exact and direct replications. Mark Rubin - 2021 - Synthese 198 (6):5809–5834.
    The replication crisis has caused researchers to distinguish between exact replications, which duplicate all aspects of a study that could potentially affect the results, and direct replications, which duplicate only those aspects of the study that are thought to be theoretically essential to reproduce the original effect. The replication crisis has also prompted researchers to think more carefully about the possibility of making Type I errors when rejecting null hypotheses. In this context, the present article considers the utility of two (...)
    (9 citations)
  14. “Repeated sampling from the same population?” A critique of Neyman and Pearson’s responses to Fisher. Mark Rubin - 2020 - European Journal for Philosophy of Science 10 (3):1-15.
    Fisher criticised the Neyman-Pearson approach to hypothesis testing by arguing that it relies on the assumption of “repeated sampling from the same population.” The present article considers the responses to this criticism provided by Pearson and Neyman. Pearson interpreted alpha levels in relation to imaginary replications of the original test. This interpretation is appropriate when test users are sure that their replications will be equivalent to one another. However, by definition, scientific researchers do not possess sufficient knowledge about the relevant (...)
    (5 citations)
  15. How Explanation Guides Confirmation. Nevin Climenhaga - 2017 - Philosophy of Science 84 (2):359-368.
    Where E is the proposition that [If H and O were true, H would explain O], William Roche and Elliott Sober have argued that P(H|O&E) = P(H|O). In this paper I argue that not only is this equality not generally true, it is false in the very kinds of cases that Roche and Sober focus on, involving frequency data. In fact, in such cases O raises the probability of H only given that there is an explanatory connection between them.
    (33 citations)
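    The disputed equality can be made concrete with a toy joint distribution over the three propositions. The numbers below are made up solely to show how the two conditional probabilities are computed and that they can come apart; they do not reproduce Climenhaga's frequency-data cases.

      # Illustrative sketch: P(H|O) vs. P(H|O&E) in a hypothetical joint distribution
      # over worlds (H, O, E). The probabilities are invented for illustration only.
      joint = {
          (1, 1, 1): 0.20, (1, 1, 0): 0.05,
          (0, 1, 1): 0.05, (0, 1, 0): 0.20,
          (1, 0, 1): 0.05, (1, 0, 0): 0.05,
          (0, 0, 1): 0.10, (0, 0, 0): 0.30,
      }

      def prob(event):
          """Probability of the set of worlds satisfying `event`."""
          return sum(p for world, p in joint.items() if event(world))

      def conditional(event, given):
          return prob(lambda w: event(w) and given(w)) / prob(given)

      H = lambda w: w[0] == 1
      O = lambda w: w[1] == 1
      E = lambda w: w[2] == 1

      print(f"P(H|O)   = {conditional(H, O):.2f}")                        # 0.50
      print(f"P(H|O&E) = {conditional(H, lambda w: O(w) and E(w)):.2f}")  # 0.80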
  16. (2 other versions) Probability and Randomness. Antony Eagle - 2016 - In Alan Hájek & Christopher Hitchcock (eds.), The Oxford Handbook of Probability and Philosophy. Oxford: Oxford University Press. pp. 440-459.
    Early work on the frequency theory of probability made extensive use of the notion of randomness, conceived of as a property possessed by disorderly collections of outcomes. Growing out of this work, a rich mathematical literature on algorithmic randomness and Kolmogorov complexity developed through the twentieth century, but largely lost contact with the philosophical literature on physical probability. The present chapter begins with a clarification of the notions of randomness and probability, conceiving of the former as a property of a (...)
    (5 citations)
  17. A response to Prelec. Luc Bovens - 2013 - In Adam Oliver (ed.), Essays in Behavioural Public Policy. Cambridge University Press. pp. 228-33.
    At the heart of Drazen Prelec’s chapter is the distinction between outcome utility and diagnostic utility. There is a particular distinction in the literature on causal networks (Pearl 2000), namely the distinction between observing and intervening, that maps onto Prelec’s distinction between diagnostic and outcome utility. I will explore the connection between both frameworks.
    (2 citations)
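    The observing/intervening distinction Bovens appeals to can be stated numerically. In a model where a common cause C influences both X and Y, conditioning on an observed value of X is not the same as setting X by intervention, which cuts the arrow from C into X. The parameters below are hypothetical and chosen only to make the two quantities visibly different; the example is not taken from Bovens's or Prelec's chapters.

      # Illustrative sketch: observing vs. intervening in a confounded causal model
      # C -> X, C -> Y, X -> Y. All parameters are hypothetical.
      p_c = {1: 0.5, 0: 0.5}                       # P(C)
      p_x_given_c = {1: 0.8, 0: 0.2}               # P(X=1 | C)
      p_y_given_xc = {(1, 1): 0.9, (1, 0): 0.4,    # P(Y=1 | X, C)
                      (0, 1): 0.6, (0, 0): 0.1}

      def p_x1():
          return sum(p_c[c] * p_x_given_c[c] for c in (0, 1))

      def p_y1_given_x1_observed():
          """Condition on observing X=1: weight C by P(C | X=1)."""
          return sum(p_y_given_xc[(1, c)] * p_c[c] * p_x_given_c[c] for c in (0, 1)) / p_x1()

      def p_y1_do_x1():
          """Intervene to set X=1: cut the C -> X arrow and weight C by P(C)."""
          return sum(p_y_given_xc[(1, c)] * p_c[c] for c in (0, 1))

      print(f"P(Y=1 | X=1)     = {p_y1_given_x1_observed():.2f}")   # 0.80
      print(f"P(Y=1 | do(X=1)) = {p_y1_do_x1():.2f}")               # 0.65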
  18. The Undetectable Difference: An Experimental Look at the ‘Problem’ of p-Values. William M. Goodman - 2010 - Statistical Literacy Website/Papers: www.statlit.org/pdf/2010GoodmanASA.pdf.
    In the face of continuing assumptions by many scientists and journal editors that p-values provide a gold standard for inference, counter warnings are published periodically. But the core problem is not with p-values, per se. A finding that “p-value is less than α” could merely signal that a critical value has been exceeded. The question is why, when estimating a parameter, we provide a range (a confidence interval), but when testing a hypothesis about a parameter (e.g. µ = x) we (...)
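    The contrast the abstract draws between reporting a bare "p < α" verdict and reporting a range can be seen on a single dataset: the same z-statistic that yields the p-value also yields a confidence interval, and "p < .05" says no more than that the 95% interval excludes the hypothesized value. The sketch below uses a one-sample z-test with a known standard deviation and invented data purely for illustration; it is not Goodman's own experiment.

      # Illustrative sketch: a p-value and a confidence interval computed from the
      # same data, for a one-sample z-test of H0: mu = 0 with known sigma (toy data).
      import math, statistics

      data = [0.8, 1.3, -0.2, 0.9, 1.1, 0.4, 1.6, 0.7, 0.2, 1.0]   # hypothetical sample
      mu0, sigma, alpha = 0.0, 1.0, 0.05

      n = len(data)
      mean = statistics.fmean(data)
      se = sigma / math.sqrt(n)
      z = (mean - mu0) / se
      p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided p-value
      z_crit = 1.96                                       # approx. critical z for 95%
      ci = (mean - z_crit * se, mean + z_crit * se)

      print(f"mean = {mean:.2f}, z = {z:.2f}, p = {p_value:.4f}")
      print(f"95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
      print("Reject H0 at alpha=.05?", p_value < alpha,
            "| CI excludes mu0?", not (ci[0] <= mu0 <= ci[1]))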
  19. John Maynard Keynes and Ludwig von Mises on Probability. Ludwig van den Hauwe - 2010 - Journal of Libertarian Studies 22 (1):471-507.
    The economic paradigms of Ludwig von Mises on the one hand and of John Maynard Keynes on the other have been correctly recognized as antithetical at the theoretical level, and as antagonistic with respect to their practical and public policy implications. Characteristically they have also been vindicated by opposing sides of the political spectrum. Nevertheless the respective views of these authors with respect to the meaning and interpretation of probability exhibit a closer conceptual affinity than has been acknowledged in the (...)
    (1 citation)