  • Objective evidence and rules of strategy: Achinstein on method. Review symposium of Peter Achinstein, Evidence and Method: Scientific Strategies of Isaac Newton and James Clerk Maxwell (Oxford and New York: Oxford University Press, 2013, 177 pp., $24.95 HB).William L. Harper, Kent W. Staley, Henk W. de Regt & Peter Achinstein - 2014 - Metascience 23 (3):413-442.
  • Science is judgement, not only calculation: a reply to Aris Spanos’s review of The cult of statistical significance.Stephen T. Ziliak & Deirdre Nansen McCloskey - 2008 - Erasmus Journal for Philosophy and Economics 1 (1):165-170.
  • Prediction in selectionist evolutionary theory.Rasmus Grønfeldt Winther - 2009 - Philosophy of Science 76 (5):889-901.
    Selectionist evolutionary theory has often been faulted for not making novel predictions that are surprising, risky, and correct. I argue that it in fact exhibits the theoretical virtue of predictive capacity in addition to two other virtues: explanatory unification and model fitting. Two case studies show the predictive capacity of selectionist evolutionary theory: parallel evolutionary change in E. coli, and the origin of eukaryotic cells through endosymbiosis.
  • Conceptual challenges for interpretable machine learning.David S. Watson - 2022 - Synthese 200 (2):1-33.
    As machine learning has gradually entered into ever more sectors of public and private life, there has been a growing demand for algorithmic explainability. How can we make the predictions of complex statistical models more intelligible to end users? A subdiscipline of computer science known as interpretable machine learning (IML) has emerged to address this urgent question. Numerous influential methods have been proposed, from local linear approximations to rule lists and counterfactuals. In this article, I highlight three conceptual challenges that (...)
  • Evidence in biology and the conditions of success.Jacob Stegenga - 2013 - Biology and Philosophy 28 (6):981-1004.
    I describe two traditions of philosophical accounts of evidence: one characterizes the notion in terms of signs of success, the other characterizes the notion in terms of conditions of success. The best examples of the former rely on the probability calculus, and have the virtues of generality and theoretical simplicity. The best examples of the latter describe the features of evidence which scientists appeal to in practice, which include general features of methods, such as quality and relevance, and general features (...)
  • Strategies for securing evidence through model criticism.Kent W. Staley - 2012 - European Journal for Philosophy of Science 2 (1):21-43.
    Some accounts of evidence regard it as an objective relationship holding between data and hypotheses, perhaps mediated by a testing procedure. Mayo’s error-statistical theory of evidence is an example of such an approach. Such a view leaves open the question of when an epistemic agent is justified in drawing an inference from such data to a hypothesis. Using Mayo’s account as an illustration, I propose a framework for addressing the justification question via a relativized notion, which I designate security , (...)
  • Internalist and externalist aspects of justification in scientific inquiry.Kent Staley & Aaron Cobb - 2011 - Synthese 182 (3):475-492.
    While epistemic justification is a central concern for both contemporary epistemology and philosophy of science, debates in contemporary epistemology about the nature of epistemic justification have not been discussed extensively by philosophers of science. As a step toward a coherent account of scientific justification that is informed by, and sheds light on, justificatory practices in the sciences, this paper examines one of these debates—the internalist-externalist debate—from the perspective of objective accounts of scientific evidence. In particular, we focus on Deborah Mayo’s (...)
  • Early stopping of RCTs: two potential issues for error statistics.Roger Stanev - 2015 - Synthese 192 (4):1089-1116.
    Error statistics is an important methodological view in philosophy of statistics and philosophy of science that can be applied to scientific experiments such as clinical trials. In this paper, I raise two potential issues for ES when it comes to guiding, and explaining early stopping of randomized controlled trials : ES provides limited guidance in cases of early unfavorable trends due to the possibility of trend reversal; ES is silent on how to prospectively control error rates in experiments requiring multiple (...)
  • Evidence and Justification in Groups with Conflicting Background Beliefs.Kent W. Staley - 2010 - Episteme 7 (3):232-247.
    Some prominent accounts of scientific evidence treat evidence as an unrelativized concept. But whether belief in a hypothesis is justified seems relative to the epistemic situation of the believer. The issue becomes yet more complicated in the context of group epistemic agents, for then one confronts the problem of relativizing to an epistemic situation that may include conflicting beliefs. As a step toward resolution of these difficulties, an ideal of justification is here proposed that incorporates both an unrelativized evidence requirement (...)
  • The objectivity of Subjective Bayesianism.Jan Sprenger - 2018 - European Journal for Philosophy of Science 8 (3):539-558.
    Subjective Bayesianism is a major school of uncertain reasoning and statistical inference. It is often criticized for a lack of objectivity: it opens the door to the influence of values and biases, evidence judgments can vary substantially between scientists, it is not suited for informing policy decisions. My paper rebuts these concerns by connecting the debates on scientific objectivity and statistical method. First, I show that the above concerns arise equally for standard frequentist inference with null hypothesis significance tests. Second, (...)
  • Evidence and experimental design in sequential trials.Jan Sprenger - 2009 - Philosophy of Science 76 (5):637-649.
    To what extent does the design of statistical experiments, in particular sequential trials, affect their interpretation? Should postexperimental decisions depend on the observed data alone, or should they account for the used stopping rule? Bayesians and frequentists are apparently deadlocked in their controversy over these questions. To resolve the deadlock, I suggest a three‐part strategy that combines conceptual, methodological, and decision‐theoretic arguments. This approach maintains the pre‐experimental relevance of experimental design and stopping rules but vindicates their evidential, postexperimental irrelevance. †To (...)
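The stopping-rule issue debated in the two entries above can be made concrete with a short simulation. This is a sketch under illustrative assumptions (standard Normal data under the null, known unit variance, four interim looks, no multiplicity correction), not a reconstruction of any analysis in the cited papers:

```python
import math
import random

def optional_stopping_rate(n_sims=2000, looks=(10, 20, 40, 80),
                           z_crit=1.96, seed=0):
    """Fraction of null-true simulations rejected at ANY interim look."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        data = []
        for n in range(1, max(looks) + 1):
            data.append(rng.gauss(0.0, 1.0))  # data generated under H0
            if n in looks:
                z = (sum(data) / n) * math.sqrt(n)  # known sigma = 1
                if abs(z) > z_crit:
                    rejections += 1
                    break  # trial stops early at the first "significant" look
    return rejections / n_sims

# Testing at each look with an unadjusted 5% criterion pushes the overall
# Type I error rate well above the nominal 0.05.
print(optional_stopping_rate())
```

This is the frequentist's reason for caring about stopping rules: the same data yield different error rates depending on how the experiment could have ended.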
  • Who Should Be Afraid of the Jeffreys-Lindley Paradox?Aris Spanos - 2013 - Philosophy of Science 80 (1):73-93.
    The article revisits the large n problem as it relates to the Jeffreys-Lindley paradox to compare the frequentist, Bayesian, and likelihoodist approaches to inference and evidence. It is argued that what is fallacious is to interpret a rejection of the null hypothesis as providing the same evidence for a particular alternative, irrespective of n; this is an example of the fallacy of rejection. Moreover, the Bayesian and likelihoodist approaches are shown to be susceptible to the fallacy of acceptance. The key difference is that (...)
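The large-n phenomenon behind the Jeffreys–Lindley paradox can be reproduced numerically. The sketch below holds the p value fixed at 0.05 (z = 1.96) while n grows, and computes the Bayes factor for a point null against a Normal prior on the alternative; the unit-variance and unit-prior-scale choices, and the function name, are illustrative assumptions:

```python
import math

def bf01(z, n, sigma=1.0, tau=1.0):
    """Bayes factor for H0: mu = 0 vs H1: mu ~ N(0, tau^2), given a sample
    mean sitting exactly z standard errors away from 0."""
    xbar = z * sigma / math.sqrt(n)
    v0 = sigma ** 2 / n    # variance of the mean under H0
    v1 = tau ** 2 + v0     # marginal variance of the mean under H1
    # ratio of the two Normal densities evaluated at the observed mean
    return math.sqrt(v1 / v0) * math.exp(xbar ** 2 / (2 * v1)
                                         - xbar ** 2 / (2 * v0))

# At a fixed p value of 0.05, the Bayes factor swings toward H0 as n grows.
for n in (10, 1000, 100000):
    print(n, round(bf01(1.96, n), 2))
```

The exp factor tends to a constant while the sqrt factor grows like √n, so a result that is "just significant" at any n eventually counts as strong evidence *for* the null on the Bayesian reckoning.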
  • The discovery of argon: A case for learning from data?Aris Spanos - 2010 - Philosophy of Science 77 (3):359-380.
    Rayleigh and Ramsay discovered the inert gas argon in the atmospheric air in 1895 using a carefully designed sequence of experiments guided by an informal statistical analysis of the resulting data. The primary objective of this article is to revisit this remarkable historical episode in order to make a case that the error‐statistical perspective can be used to bring out and systematize (not to reconstruct) these scientists' resourceful ways and strategies for detecting and eliminating error, as well as dealing with (...)
  • Severity and Trustworthy Evidence: Foundational Problems versus Misuses of Frequentist Testing.Aris Spanos - 2022 - Philosophy of Science 89 (2):378-397.
    For model-based frequentist statistics, based on a parametric statistical model $\mathcal{M}_\theta$, the trustworthiness of the ensuing evidence depends crucially on the validity of the probabilistic assumptions comprising $\mathcal{M}_\theta$, the optimality of the inference procedures employed, and the adequateness of the sample size to learn from data by securing –. It is argued that the criticism of the postdata severity evaluation of testing results based on a small n by Rochefort-Maranda is meritless because it conflates [a] misuses (...)
  • Revisiting Haavelmo's structural econometrics: bridging the gap between theory and data.Aris Spanos - 2015 - Journal of Economic Methodology 22 (2):171-196.
    The objective of the paper is threefold. First, to argue that some of Haavelmo's methodological ideas and insights have been neglected because they are largely at odds with the traditional perspective that views empirical modeling in economics as an exercise in curve-fitting. Second, to make a case that this neglect has contributed to the unreliability of empirical evidence in economics that is largely due to statistical misspecification. The latter affects the reliability of inference by inducing discrepancies between the actual and (...)
  • Is frequentist testing vulnerable to the base-rate fallacy?Aris Spanos - 2010 - Philosophy of Science 77 (4):565-583.
    This article calls into question the charge that frequentist testing is susceptible to the base-rate fallacy. It is argued that the apparent similarity between examples like the Harvard Medical School test and frequentist testing is highly misleading. A closer scrutiny reveals that such examples have none of the basic features of a proper frequentist test, such as legitimate data, hypotheses, test statistics, and sampling distributions. Indeed, the relevant error probabilities are replaced with the false positive/negative rates that constitute deductive calculations (...)
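The Harvard Medical School example mentioned in this entry turns on the gap between error rates and post-test probabilities. A minimal calculation of the positive predictive value, assuming perfect sensitivity and a 5% false-positive rate as in the classic version of the puzzle (the numbers are illustrative, not taken from the paper):

```python
def ppv(sensitivity, specificity, prevalence):
    """Probability of disease given a positive test (Bayes' theorem)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Prevalence 1 in 1000, 5% false-positive rate: a positive result still
# leaves the post-test probability of disease at only about 2%.
print(ppv(1.0, 0.95, 0.001))
```

Spanos's point is that this calculation is a deductive application of conditional probability over a population, not a frequentist test of a statistical hypothesis, so it cannot by itself indict frequentist testing.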
  • Error statistical modeling and inference: Where methodology meets ontology.Aris Spanos & Deborah G. Mayo - 2015 - Synthese 192 (11):3533-3555.
    In empirical modeling, an important desiderata for deeming theoretical entities and processes as real is that they can be reproducible in a statistical sense. Current day crises regarding replicability in science intertwines with the question of how statistical methods link data to statistical and substantive theories and models. Different answers to this question have important methodological consequences for inference, which are intertwined with a contrast between the ontological commitments of the two types of models. The key to untangling them is (...)
  • Curve Fitting, the Reliability of Inductive Inference, and the Error‐Statistical Approach.Aris Spanos - 2007 - Philosophy of Science 74 (5):1046-1066.
    The main aim of this paper is to revisit the curve fitting problem using the reliability of inductive inference as a primary criterion for the ‘fittest' curve. Viewed from this perspective, it is argued that a crucial concern with the current framework for addressing the curve fitting problem is, on the one hand, the undue influence of the mathematical approximation perspective, and on the other, the insufficient attention paid to the statistical modeling aspects of the problem. Using goodness-of-fit as the (...)
  • Bernoulli’s golden theorem in retrospect: error probabilities and trustworthy evidence.Aris Spanos - 2021 - Synthese 199 (5-6):13949-13976.
    Bernoulli’s 1713 golden theorem is viewed retrospectively in the context of modern model-based frequentist inference that revolves around the concept of a prespecified statistical model $\mathcal{M}_{\theta}(\mathbf{x})$, defining the inductive premises of inference. It is argued that several widely-accepted claims relating to the golden theorem and frequentist inference are either misleading or erroneous: (a) Bernoulli solved the problem of inference ‘from probability to frequency’, and thus (b) the golden theorem (...)
  • A frequentist interpretation of probability for model-based inductive inference.Aris Spanos - 2013 - Synthese 190 (9):1555-1585.
    The main objective of the paper is to propose a frequentist interpretation of probability in the context of model-based induction, anchored on the Strong Law of Large Numbers (SLLN) and justifiable on empirical grounds. It is argued that the prevailing views in philosophy of science concerning induction and the frequentist interpretation of probability are unduly influenced by enumerative induction, and the von Mises rendering, both of which are at odds with frequentist model-based induction that dominates current practice. The differences between (...)
  • What type of Type I error? Contrasting the Neyman–Pearson and Fisherian approaches in the context of exact and direct replications.Mark Rubin - 2021 - Synthese 198 (6):5809–5834.
    The replication crisis has caused researchers to distinguish between exact replications, which duplicate all aspects of a study that could potentially affect the results, and direct replications, which duplicate only those aspects of the study that are thought to be theoretically essential to reproduce the original effect. The replication crisis has also prompted researchers to think more carefully about the possibility of making Type I errors when rejecting null hypotheses. In this context, the present article considers the utility of two (...)
  • Inflated effect sizes and underpowered tests: how the severity measure of evidence is affected by the winner’s curse.Guillaume Rochefort-Maranda - 2021 - Philosophical Studies 178 (1):133-145.
    My aim in this paper is to show how the problem of inflated effect sizes corrupts the severity measure of evidence. This has never been done. In fact, the Winner’s Curse is barely mentioned in the philosophical literature. Since the severity score is the predominant measure of evidence for frequentist tests in the philosophical literature, it is important to underscore its flaws. It is also crucial to bring the philosophical literature up to speed with the limits of classical testing. The (...)
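The severity score discussed in this entry can be computed directly for the simplest case: a one-sided Normal test with known standard deviation, following the Mayo–Spanos convention for post-rejection claims of the form mu > mu1. The function name and the example numbers are illustrative:

```python
import math

def phi(x):
    """Standard Normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def severity(xbar, mu1, sigma, n):
    """Severity of the claim mu > mu1, given observed mean xbar after a
    rejection in a one-sided (positive-direction) test with known sigma:
    the probability of a result less extreme than xbar if mu were mu1."""
    se = sigma / math.sqrt(n)
    return phi((xbar - mu1) / se)

# With xbar = 0.4, sigma = 1, n = 25 (standard error 0.2):
print(severity(0.4, 0.2, 1.0, 25))  # claim mu > 0.2
print(severity(0.4, 0.0, 1.0, 25))  # weaker claim mu > 0
```

Rochefort-Maranda's worry is that when n is small and the test underpowered, the rejections that do occur come with inflated xbar values (the winner's curse), so these scores are computed from systematically exaggerated estimates.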
  • What is epistemically wrong with research affected by sponsorship bias? The evidential account.Alexander Reutlinger - 2020 - European Journal for Philosophy of Science 10 (2):1-26.
    Biased research occurs frequently in the sciences. In this paper, I will focus on one particular kind of biased research: research that is subject to sponsorship bias. I will address the following epistemological question: what precisely is epistemically wrong with biased research of this kind? I will defend the evidential account of epistemic wrongness: that is, research affected by sponsorship bias is epistemically wrong if and only if the researchers in question make false claims about the evidential support of some (...)
  • Rejoinder: Reviews Symposium.Julian Reiss - 2009 - Economics and Philosophy 25 (2):210-215.
  • Rejoinder: Error in Economics. Towards a More Evidence-Based Methodology, Julian Reiss, Routledge, 2007, xxiv + 246 pages. [REVIEW]Julian Reiss - 2009 - Economics and Philosophy 25 (2):210-215.
  • How Theories of Induction Can Streamline Measurements of Scientific Performance.Slobodan Perović & Vlasta Sikimić - 2020 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 51 (2):267-291.
    We argue that inductive analysis and operational assessment of the scientific process can be justifiably and fruitfully brought together, whereby the citation metrics used in the operational analysis can effectively track the inductive dynamics and measure the research efficiency. We specify the conditions for the use of such inductive streamlining, demonstrate it in the cases of high energy physics experimentation and phylogenetic research, and propose a test of the method’s applicability.
  • Commentary: Psychological Science's Aversion to the Null.Jose D. Perezgonzalez, Dolores Frías-Navarro & Juan Pascual-Llobell - 2017 - Frontiers in Psychology 8.
  • Significance Tests: Vitiated or Vindicated by the Replication Crisis in Psychology?Deborah G. Mayo - 2020 - Review of Philosophy and Psychology 12 (1):101-120.
    The crisis of replication has led many to blame statistical significance tests for making it too easy to find impressive looking effects that do not replicate. However, the very fact it becomes difficult to replicate effects when features of the tests are tied down actually serves to vindicate statistical significance tests. While statistical significance tests, used correctly, serve to bound the probabilities of erroneous interpretations of data, this error control is nullified by data-dredging, multiple testing, and other biasing selection effects. (...)
  • Statistical significance and its critics: practicing damaging science, or damaging scientific practice?Deborah G. Mayo & David Hand - 2022 - Synthese 200 (3):1-33.
    While the common procedure of statistical significance testing and its accompanying concept of p-values have long been surrounded by controversy, renewed concern has been triggered by the replication crisis in science. Many blame statistical significance tests themselves, and some regard them as sufficiently damaging to scientific practice as to warrant being abandoned. We take a contrary position, arguing that the central criticisms arise from misunderstanding and misusing the statistical tools, and that in fact the purported remedies themselves risk damaging science. (...)
  • Some surprising facts about surprising facts.D. Mayo - 2014 - Studies in History and Philosophy of Science Part A 45:79-86.
    A common intuition about evidence is that if data x have been used to construct a hypothesis H, then x should not be used again in support of H. It is no surprise that x fits H, if H was deliberately constructed to accord with x. The question of when and why we should avoid such “double-counting” continues to be debated in philosophy and statistics. It arises as a prohibition against data mining, hunting for significance, tuning on the signal, and (...)
  • Philosophical Scrutiny of Evidence of Risks: From Bioethics to Bioevidence.Deborah G. Mayo & Aris Spanos - 2006 - Philosophy of Science 73 (5):803-816.
    We argue that a responsible analysis of today's evidence-based risk assessments and risk debates in biology demands a critical or metascientific scrutiny of the uncertainties, assumptions, and threats of error along the manifold steps in risk analysis. Without an accompanying methodological critique, neither sensitivity to social and ethical values, nor conceptual clarification alone, suffices. In this view, restricting the invitation for philosophical involvement to those wearing a "bioethicist" label precludes the vitally important role philosophers of science may be able to (...)
  • Ockham Efficiency Theorem for Stochastic Empirical Methods.Kevin T. Kelly & Conor Mayo-Wilson - 2010 - Journal of Philosophical Logic 39 (6):679-712.
    Ockham’s razor is the principle that, all other things being equal, scientists ought to prefer simpler theories. In recent years, philosophers have argued that simpler theories make better predictions, possess theoretical virtues like explanatory power, and have other pragmatic virtues like computational tractability. However, such arguments fail to explain how and why a preference for simplicity can help one find true theories in scientific inquiry, unless one already assumes that the truth is simple. One new solution to that problem is (...)
  • How to discount double-counting when it counts: Some clarifications.Deborah G. Mayo - 2008 - British Journal for the Philosophy of Science 59 (4):857-879.
    The issues of double-counting, use-constructing, and selection effects have long been the subject of debate in the philosophical as well as statistical literature. I have argued that it is the severity, stringency, or probativeness of the test—or lack of it—that should determine if a double-use of data is admissible. Hitchcock and Sober ([2004]) question whether this ‘severity criterion' can perform its intended job. I argue that their criticisms stem from a flawed interpretation of the severity criterion. Taking their criticism as (...)
  • Eight journals over eight decades: a computational topic-modeling approach to contemporary philosophy of science.Christophe Malaterre, Francis Lareau, Davide Pulizzotto & Jonathan St-Onge - 2020 - Synthese 199 (1-2):2883-2923.
    As a discipline of its own, the philosophy of science can be traced back to the founding of its academic journals, some of which go back to the first half of the twentieth century. While the discipline has been the object of many historical studies, notably focusing on specific schools or major figures of the field, little work has focused on the journals themselves. Here, we investigate contemporary philosophy of science by means of computational text-mining approaches: we apply topic-modeling algorithms (...)
  • Model change and reliability in scientific inference.Erich Kummerfeld & David Danks - 2014 - Synthese 191 (12):2673-2693.
    One persistent challenge in scientific practice is that the structure of the world can be unstable: changes in the broader context can alter which model of a phenomenon is preferred, all without any overt signal. Scientific discovery becomes much harder when we have a moving target, and the resulting incorrect understandings of relationships in the world can have significant real-world and practical consequences. In this paper, we argue that it is common (in certain sciences) to have changes of context that (...)
  • The epistemic consequences of pragmatic value-laden scientific inference.Adam P. Kubiak & Paweł Kawalec - 2021 - European Journal for Philosophy of Science 11 (2):1-26.
    In this work, we explore the epistemic import of the value-ladenness of Neyman-Pearson’s Theory of Testing Hypotheses by reconstructing and extending Daniel Steel’s argument for the legitimate influence of pragmatic values on scientific inference. We focus on how to properly understand N-P’s pragmatic value-ladenness and the epistemic reliability of N-P. We develop an account of the twofold influence of pragmatic values on N-P’s epistemic reliability and replicability. We refer to these two distinguished aspects as “direct” and “indirect”. We discuss the (...)
  • Prior Information in Frequentist Research Designs: The Case of Neyman’s Sampling Theory.Adam P. Kubiak & Paweł Kawalec - 2022 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 53 (4):381-402.
    We analyse the issue of using prior information in frequentist statistical inference. For that purpose, we scrutinise different kinds of sampling designs in Jerzy Neyman’s theory to reveal a variety of ways to explicitly and objectively engage with prior information. Further, we turn to the debate on sampling paradigms (design-based vs. model-based approaches) to argue that Neyman’s theory supports an argument for the intermediate approach in the frequentism vs. Bayesianism debate. We also demonstrate that Neyman’s theory, by allowing non-epistemic values (...)
  • Neyman-Pearson Hypothesis Testing, Epistemic Reliability and Pragmatic Value-Laden Asymmetric Error Risks.Adam P. Kubiak, Paweł Kawalec & Adam Kiersztyn - 2022 - Axiomathes 32 (4):585-604.
    We show that if among the tested hypotheses the number of true hypotheses is not equal to the number of false hypotheses, then Neyman-Pearson theory of testing hypotheses does not warrant minimal epistemic reliability. We also argue that N-P does not protect from the possible negative effects of the pragmatic value-laden unequal setting of error probabilities on N-P’s epistemic reliability. Most importantly, we argue that in the case of a negative impact no methodological adjustment is available to neutralize it, so (...)
  • Classical versus Bayesian Statistics.Eric Johannesson - 2020 - Philosophy of Science 87 (2):302-318.
    In statistics, there are two main paradigms: classical and Bayesian statistics. The purpose of this article is to investigate the extent to which classicists and Bayesians can agree. My conclusion is that, in certain situations, they cannot. The upshot is that, if we assume that the classicist is not allowed to have a higher degree of belief in a null hypothesis after he has rejected it than before, then he has to either have trivial or incoherent credences to begin with (...)
  • Logical consistency in simultaneous statistical test procedures.Rafael Izbicki & Luís Gustavo Esteves - 2015 - Logic Journal of the IGPL 23 (5):732-758.
  • On Falsifiable Statistical Hypotheses.Konstantin Genin - 2022 - Philosophies 7 (2):40.
    Popper argued that a statistical falsification required a prior methodological decision to regard sufficiently improbable events as ruled out. That suggestion has generated a number of fruitful approaches, but also a number of apparent paradoxes and ultimately, no clear consensus. It is still commonly claimed that, since random samples are logically consistent with all the statistical hypotheses on the table, falsification simply does not apply in realistic statistical settings. We claim that the situation is considerably improved if we ask a (...)
  • Pursuit and inquisitive reasons.Will Fleisher - 2022 - Studies in History and Philosophy of Science Part A 94 (C):17-30.
    Sometimes inquirers may rationally pursue a theory even when the available evidence does not favor that theory over others. Features of a theory that favor pursuing it are known as considerations of promise or pursuitworthiness. Examples of such reasons include that a theory is testable, that it has a useful associated analogy, and that it suggests new research and experiments. These reasons need not be evidence in favor of the theory. This raises the question: what kinds of reasons are provided (...)
  • Error, error-statistics and self-directed anticipative learning.R. P. Farrell & C. A. Hooker - 2008 - Foundations of Science 14 (4):249-271.
    Error is protean, ubiquitous and crucial in scientific process. In this paper it is argued that understanding scientific process requires what is currently absent: an adaptable, context-sensitive functional role for error in science that naturally harnesses error identification and avoidance to positive, success-driven, science. This paper develops a new account of scientific process of this sort, error and success driving Self-Directed Anticipative Learning (SDAL) cycling, using a recent re-analysis of ape-language research as test example. The example shows the limitations of (...)
  • Higgs Discovery and the Look Elsewhere Effect.Richard Dawid - 2015 - Philosophy of Science 82 (1):76-96.
    The discovery of the Higgs particle required a signal of 5σ significance. The rigid application of that condition is a convention that disregards more specific aspects of the given experiment. In particular, it does not account for the characteristics of the look elsewhere effect in the individual experimental context. The paper relates this aspect of data analysis to the question as to what extent theoretical reasoning should be admitted to play a role in the assessment of the significance of empirical (...)
  • Bayesian Perspectives on the Discovery of the Higgs Particle.Richard Dawid - 2017 - Synthese 194 (2):377-394.
    It is argued that the high degree of trust in the Higgs particle before its discovery raises the question of a Bayesian perspective on data analysis in high energy physics in an interesting way that differs from other suggestions regarding the deployment of Bayesian strategies in the field.
  • The Jeffreys–Lindley paradox and discovery criteria in high energy physics.Robert D. Cousins - 2017 - Synthese 194 (2):395-432.
    The Jeffreys–Lindley paradox displays how the use of a p value in a frequentist hypothesis test can lead to an inference that is radically different from that of a Bayesian hypothesis test in the form advocated by Harold Jeffreys in the 1930s and common today. The setting is the test of a well-specified null hypothesis versus a composite alternative. The p value, as well as the ratio of the likelihood under the null hypothesis to the maximized likelihood under the (...)
  • Statistical Inference and the Replication Crisis.Lincoln J. Colling & Dénes Szűcs - 2018 - Review of Philosophy and Psychology 12 (1):121-147.
    The replication crisis has prompted many to call for statistical reform within the psychological sciences. Here we examine issues within Frequentist statistics that may have led to the replication crisis, and we examine the alternative—Bayesian statistics—that many have suggested as a replacement. The Frequentist approach and the Bayesian approach offer radically different perspectives on evidence and inference with the Frequentist approach prioritising error control and the Bayesian approach offering a formal method for quantifying the relative strength of evidence for hypotheses. (...)
  • Not null enough: pseudo-null hypotheses in community ecology and comparative psychology.William Bausman & Marta Halina - 2018 - Biology and Philosophy 33 (3-4):30.
    We evaluate a common reasoning strategy used in community ecology and comparative psychology for selecting between competing hypotheses. This strategy labels one hypothesis as a “null” on the grounds of its simplicity and epistemically privileges it as accepted until rejected. We argue that this strategy is unjustified. The asymmetrical treatment of statistical null hypotheses is justified through the experimental and mathematical contexts in which they are used, but these contexts are missing in the case of the “pseudo-null hypotheses” found in (...)
  • How experimental algorithmics can benefit from Mayo’s extensions to Neyman–Pearson theory of testing.Thomas Bartz-Beielstein - 2008 - Synthese 163 (3):385 - 396.
    Although theoretical results for several algorithms in many application domains were presented during the last decades, not all algorithms can be analyzed fully theoretically. Experimentation is necessary. The analysis of algorithms should follow the same principles and standards of other empirical sciences. This article focuses on stochastic search algorithms, such as evolutionary algorithms or particle swarm optimization. Stochastic search algorithms tackle hard real-world optimization problems, e.g., problems from chemical engineering, airfoil optimization, or bio-informatics, where classical methods from mathematical optimization fail. (...)