
Citations of:

A statistical paradox

D. V. Lindley - 1957 - Biometrika 44 (1/2):187-192

  • Using Bayes to get the most out of non-significant results.Zoltan Dienes - 2014 - Frontiers in Psychology 5:85883.
    No scientific conclusion follows automatically from a statistically non-significant result, yet people routinely use non-significant results to guide conclusions about the status of theories (or the effectiveness of practices). To know whether a non-significant result counts against a theory, or if it just indicates data insensitivity, researchers must use one of: power, intervals (such as confidence or credibility intervals), or else an indicator of the relative evidence for one theory over another, such as a Bayes factor. I argue Bayes factors (...)
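A minimal sketch of the kind of Bayes factor the entry above recommends, assuming a normal likelihood with known standard error and a zero-centred Normal(0, tau^2) prior on the effect under H1; the numbers, the prior width, and the helper name bf01 are illustrative assumptions, not the paper's own analysis.

```python
# Minimal sketch: Bayes factor for H0 (effect = 0) versus H1 (effect ~ Normal(0, tau^2)),
# given an observed mean effect with a known standard error. Illustrative numbers only.
from scipy.stats import norm

def bf01(effect, se, tau):
    """Bayes factor in favour of the point null over a Normal(0, tau^2) alternative."""
    m0 = norm.pdf(effect, 0.0, se)                        # marginal likelihood under H0
    m1 = norm.pdf(effect, 0.0, (se**2 + tau**2) ** 0.5)   # marginal likelihood under H1
    return m0 / m1

# A non-significant result: the observed effect is small relative to its standard error.
print(bf01(effect=0.1, se=0.2, tau=0.5))  # values above 1 favour H0; values near 1 mean the data are insensitive
```
On this reading, the same non-significant p-value can correspond either to positive evidence for the null or to an uninformative experiment, which is the distinction the abstract draws.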
  • Why Most Published Research Findings Are False.John P. A. Ioannidis - 2005 - PLoS Med 2 (8):e124.
    Published research findings are sometimes refuted by subsequent evidence, says Ioannidis, with ensuing confusion and disappointment.
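The arithmetic behind this claim can be sketched as a positive-predictive-value calculation: the probability that a "significant" finding reflects a true effect, given the prior probability of a true effect, the significance level, and the power. The inputs below are hypothetical, not figures from the paper.

```python
# Hypothetical illustration: probability that a statistically significant finding is true,
# given the prior probability of a true effect, the significance level alpha, and the power.
def ppv(prior, alpha, power):
    true_pos = power * prior            # true effects correctly detected
    false_pos = alpha * (1 - prior)     # null effects incorrectly declared significant
    return true_pos / (true_pos + false_pos)

print(ppv(prior=0.10, alpha=0.05, power=0.80))  # ~0.64: roughly a third of the 'findings' would be false
```
With lower prior probabilities, lower power, or added bias, the share of false findings grows further, which is the paper's central point.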
  • Seeing the wood for the trees: philosophical aspects of classical, Bayesian and likelihood approaches in statistical inference and some implications for phylogenetic analysis.Daniel Barker - 2015 - Biology and Philosophy 30 (4):505-525.
    The three main approaches in statistical inference—classical statistics, Bayesian and likelihood—are in current use in phylogeny research. The three approaches are discussed and compared, with particular emphasis on theoretical properties illustrated by simple thought-experiments. The methods are problematic on axiomatic grounds, extra-mathematical grounds relating to the use of a prior or practical grounds. This essay aims to increase understanding of these limits among those with an interest in phylogeny.
  • Inference to the Best Explanation.Peter Lipton - 2005 - In Martin Curd & Stathis Psillos (eds.), The Routledge Companion to Philosophy of Science. New York: Routledge. pp. 193.
    Science depends on judgments of the bearing of evidence on theory. Scientists must judge whether an observation or the result of an experiment supports, disconfirms, or is simply irrelevant to a given hypothesis. Similarly, scientists may judge that, given all the available evidence, a hypothesis ought to be accepted as correct or nearly so, rejected as false, or neither. Occasionally, these evidential judgments can be made on deductive grounds. If an experimental result strictly contradicts a hypothesis, then the truth of (...)
  • Religion, the Nature of Ultimate Owner, and Corporate Philanthropic Giving: Evidence from China.Xingqiang Du, Wei Jian, Yingjie Du, Wentao Feng & Quan Zeng - 2014 - Journal of Business Ethics 123 (2):235-256.
    Using a sample of Chinese listed firms for the period of 2004–2010, this study examines the impact of religion on corporate philanthropic giving. Based on hand-collected data of religion and corporate philanthropic giving, we provide strong and robust evidence that religion is significantly positively associated with Chinese listed firms’ philanthropic giving. This finding is consistent with the view that religiosity has remarkable effects on individual thinking and behavior, and can serve as social norms to influence corporate philanthropy. Moreover, religion and (...)
  • Testing a precise null hypothesis: the case of Lindley’s paradox.Jan Sprenger - 2013 - Philosophy of Science 80 (5):733-744.
    The interpretation of tests of a point null hypothesis against an unspecified alternative is a classical and yet unresolved issue in statistical methodology. This paper approaches the problem from the perspective of Lindley's Paradox: the divergence of Bayesian and frequentist inference in hypothesis tests with large sample size. I contend that the standard approaches in both frameworks fail to resolve the paradox. As an alternative, I suggest the Bayesian Reference Criterion: it targets the predictive performance of the null hypothesis in (...)
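The divergence the abstract describes can be reproduced numerically under simple assumptions: a normal model with known sigma, prior probability 1/2 on the point null, and a Normal(0, tau^2) prior on the effect under the alternative. The sample mean is pinned to the two-sided p ≈ 0.04 boundary as n grows; all numbers are illustrative.

```python
# Sketch of Lindley's paradox: fixed p-value ~0.04, growing n, posterior probability of H0 rises.
from scipy.stats import norm

sigma, tau, z = 1.0, 1.0, 2.054   # z = 2.054 gives a two-sided p of roughly 0.04
for n in [10, 100, 10_000, 1_000_000]:
    se = sigma / n ** 0.5
    xbar = z * se                                        # observation held at the significance boundary
    m0 = norm.pdf(xbar, 0.0, se)                         # marginal likelihood under H0: mu = 0
    m1 = norm.pdf(xbar, 0.0, (se**2 + tau**2) ** 0.5)    # under H1: mu ~ Normal(0, tau^2)
    post_h0 = m0 / (m0 + m1)                             # prior P(H0) = P(H1) = 1/2
    print(n, round(post_h0, 3))                          # the p-value stays ~0.04, but P(H0 | data) -> 1
```
The frequentist report ("reject at the 5% level") is unchanged across rows while the Bayesian posterior swings toward the null, which is the paradox at issue.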
  • Severe testing as a basic concept in a Neyman–Pearson philosophy of induction.Deborah G. Mayo & Aris Spanos - 2006 - British Journal for the Philosophy of Science 57 (2):323-357.
    Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities (...)
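One common textbook-style rendering of the post-data reasoning the entry describes is a severity calculation for a one-sided normal test with known sigma: after a rejection, ask how probable a result less extreme than the one observed would have been if the true discrepancy were only mu1. The model, the numbers, and the function name are illustrative assumptions, not the authors' own examples.

```python
# Hedged sketch of a post-data severity calculation for a one-sided test
# H0: mu <= mu0 vs H1: mu > mu0, Normal data with known sigma (illustrative numbers).
from scipy.stats import norm

mu0, sigma, n, xbar = 0.0, 1.0, 100, 0.25   # observed mean 0.25, SE = 0.1, so the test rejects at the .05 level
se = sigma / n ** 0.5

def severity_of_claim(mu1):
    """Severity for the inferential claim 'mu > mu1' after rejecting H0 with xbar."""
    # Probability of a result less extreme than the one observed, were mu exactly mu1.
    return norm.cdf((xbar - mu1) / se)

for mu1 in [0.0, 0.1, 0.2, 0.3]:
    print(mu1, round(severity_of_claim(mu1), 3))
# The claim 'mu > 0.1' passes with high severity (~0.93); 'mu > 0.3' does not (~0.31).
```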
  • How to Tell When Simpler, More Unified, or Less Ad Hoc Theories Will Provide More Accurate Predictions.Malcolm R. Forster & Elliott Sober - 1994 - British Journal for the Philosophy of Science 45 (1):1-35.
    Traditional analyses of the curve fitting problem maintain that the data do not indicate what form the fitted curve should take. Rather, this issue is said to be settled by prior probabilities, by simplicity, or by a background theory. In this paper, we describe a result due to Akaike [1973], which shows how the data can underwrite an inference concerning the curve's form based on an estimate of how predictively accurate it will be. We argue that this approach throws light (...)
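The Akaike-style reasoning described above can be made concrete with a small synthetic curve-fitting sketch: candidate polynomial degrees are compared by AIC, which trades residual fit against the number of adjustable parameters. The data, the Gaussian-error form of AIC (up to an additive constant), and the simple parameter count are assumptions for illustration, not material from the paper.

```python
# Sketch: choosing a polynomial degree by AIC on synthetic data (Gaussian errors assumed).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 50)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=x.size)   # the true curve is linear

for degree in range(1, 6):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    rss = float(np.sum(resid ** 2))
    k = degree + 1                                        # number of fitted coefficients
    aic = x.size * np.log(rss / x.size) + 2 * k           # AIC up to an additive constant
    print(degree, round(aic, 1))
# Higher-degree fits reduce RSS a little but pay a complexity penalty; AIC typically picks degree 1 here.
```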
  • FBST Regularization and Model Selection.Julio Michael Stern & Carlos Alberto de Braganca Pereira - 2001 - In Julio Michael Stern & Carlos Alberto de Braganca Pereira (eds.), Annals of the 7th International Conference on Information Systems Analysis and Synthesis. Orlando FL: pp. 7:60-65.
    We show how the Full Bayesian Significance Test (FBST) can be used as a model selection criterion. The FBST was presented by Pereira and Stern as a coherent Bayesian significance test. Key Words: Bayesian test; Evidence; Global optimization; Information; Model selection; Numerical integration; Posterior density; Precise hypothesis; Regularization. AMS: 62A15; 62F15; 62H15.
  • Revisiting the two predominant statistical problems: the stopping-rule problem and the catch-all hypothesis problem.Yusaku Ohkubo - 2021 - Annals of the Japan Association for Philosophy of Science 30:23-41.
    The history of statistics is filled with many controversies, in which the prime focus has been the difference in the “interpretation of probability” between Frequentist and Bayesian theories. Many philosophical arguments have been elaborated to examine the problems of both theories based on this dichotomized view of statistics, including the well-known stopping-rule problem and the catch-all hypothesis problem. However, there are also several “hybrid” approaches in theory, practice, and philosophical analysis. This poses many fundamental questions. This paper (...)
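The stopping-rule problem the abstract mentions is easy to exhibit by simulation: testing after every batch of observations and stopping at the first p < .05 inflates the false-positive rate well above the nominal level even when the null is true. The batch size, number of looks, and choice of test below are illustrative, not the paper's own setup.

```python
# Simulation sketch: optional stopping inflates the type I error rate under a true null.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(1)
n_sims, looks, batch = 2000, 10, 10
false_positives = 0

for _ in range(n_sims):
    data = np.empty(0)
    for _ in range(looks):
        data = np.concatenate([data, rng.normal(size=batch)])  # the null is true: mean 0
        if ttest_1samp(data, 0.0).pvalue < 0.05:                # peek and stop at the first 'significant' result
            false_positives += 1
            break

print(false_positives / n_sims)   # substantially above the nominal 0.05
```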
  • Full Bayesian Significance Test Applied to Multivariate Normal Structure Models.Marcelo de Souza Lauretto, Carlos Alberto de Braganca Pereira, Julio Michael Stern & Shelemiahu Zacks - 2003 - Brazilian Journal of Probability and Statistics 17:147-168.
    The Full Bayesian Significance Test (FBST) for precise hypotheses is applied to a Multivariate Normal Structure (MNS) model. In the FBST we compute the evidence against the precise hypothesis. This evidence is the probability of the Highest Relative Surprise Set (HRSS) tangent to the sub-manifold (of the parameter space) that defines the null hypothesis. The MNS model we present appears when testing equivalence conditions for genetic expression measurements, using micro-array technology.
  • Unit Roots: Bayesian Significance Test.Julio Michael Stern, Marcio Alves Diniz & Carlos Alberto de Braganca Pereira - 2011 - Communications in Statistics 40 (23):4200-4213.
    The unit root problem plays a central role in empirical applications in the time series econometric literature. However, significance tests developed under the frequentist tradition present various conceptual problems that jeopardize the power of these tests, especially for small samples. Bayesian alternatives, although having interesting interpretations and being precisely defined, experience problems due to the fact that the hypothesis of interest in this case is sharp or precise. The Bayesian significance test used in this article, for the unit root (...)
  • A Weibull Wearout Test: Full Bayesian Approach.Julio Michael Stern, Telba Zalkind Irony, Marcelo de Souza Lauretto & Carlos Alberto de Braganca Pereira - 2001 - Reliability and Engineering Statistics 5:287-300.
    The Full Bayesian Significance Test (FBST) for precise hypotheses is presented, with some applications relevant to reliability theory. The FBST is an alternative to significance tests or, equivalently, to p-values. In the FBST we compute the evidence of the precise hypothesis. This evidence is the probability of the complement of a credible set "tangent" to the sub-manifold (of the parameter space) that defines the null hypothesis. We use the FBST in an application requiring a quality control of used components, based (...)
  • Bayesian Evidence Test for Precise Hypotheses.Julio Michael Stern - 2003 - Journal of Statistical Planning and Inference 117 (2):185-198.
    The full Bayesian significance test (FBST) for precise hypotheses is presented, with some illustrative applications. In the FBST we compute the evidence against the precise hypothesis. We discuss some of the theoretical properties of the FBST, and provide an invariant formulation for coordinate transformations, provided a reference density has been established. This evidence is the probability of the highest relative surprise set, “tangential” to the sub-manifold (of the parameter space) that defines the null hypothesis.
  • Can a Significance Test Be Genuinely Bayesian?Julio Michael Stern, Carlos Alberto de Braganca Pereira & Sergio Wechsler - 2008 - Bayesian Analysis 3 (1):79-100.
    The Full Bayesian Significance Test, FBST, is extensively reviewed. Its test statistic, a genuine Bayesian measure of evidence, is discussed in detail. Its behavior in some problems of statistical inference like testing for independence in contingency tables is discussed.
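A rough Monte Carlo sketch of the test statistic reviewed above, under simplifying assumptions: a Beta-Binomial model with a uniform prior, a flat reference density, and the precise null theta = 0.5. The evidence against the null is estimated as the posterior mass of the set of parameter values with higher posterior density than the null point; the data and sample sizes are made up.

```python
# Monte Carlo sketch of an FBST-style evidence value for H0: theta = 0.5
# in a Beta-Binomial model with a uniform prior (flat reference density assumed).
import numpy as np
from scipy.stats import beta

successes, failures = 62, 38                  # illustrative data
a, b = 1 + successes, 1 + failures            # Beta posterior parameters

density_at_null = beta.pdf(0.5, a, b)         # posterior density at the precise null point
samples = beta.rvs(a, b, size=200_000, random_state=2)
ev_against = np.mean(beta.pdf(samples, a, b) > density_at_null)  # mass of the set of higher-density values

print(round(ev_against, 3))                   # evidence against H0; 1 minus this supports H0
```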
  • Cosmic Bayes. Datasets and priors in the hunt for dark energy.Michela Massimi - 2021 - European Journal for Philosophy of Science 11 (1):1-21.
    Bayesian methods are ubiquitous in contemporary observational cosmology. They enter into three main tasks: cross-checking datasets for consistency; fixing constraints on cosmological parameters; and model selection. This article explores some epistemic limits of using Bayesian methods. The first limit concerns the degree of informativeness of the Bayesian priors and an ensuing methodological tension between two of these tasks. The second limit concerns the choice of wide flat priors and related tension between parameter estimation and model selection. The Dark Energy Survey and (...)
  • Classical versus Bayesian Statistics.Eric Johannesson - 2020 - Philosophy of Science 87 (2):302-318.
    In statistics, there are two main paradigms: classical and Bayesian statistics. The purpose of this article is to investigate the extent to which classicists and Bayesians can agree. My conclusion is that, in certain situations, they cannot. The upshot is that, if we assume that the classicist is not allowed to have a higher degree of belief in a null hypothesis after he has rejected it than before, then he has to either have trivial or incoherent credences to begin with (...)
  • Why is Bayesian confirmation theory rarely practiced?Robert W. P. Luk - 2019 - Science and Philosophy 7 (1):3-20.
    Bayesian confirmation theory is a leading theory to decide the confirmation/refutation of a hypothesis based on probability calculus. While it may be much discussed in philosophy of science, is it actually practiced in terms of hypothesis testing by scientists? Since the assignment of some of the probabilities in the theory is open to debate and the risk of making the wrong decision is unknown, many scientists do not use the theory in hypothesis testing. Instead, they use alternative statistical tests that (...)
  • The Alpha War.Edouard Machery - 2019 - Review of Philosophy and Psychology 12 (1):75-99.
    Benjamin et al. (Nature Human Behaviour 2, 6–10) proposed decreasing the significance level by an order of magnitude to improve the replicability of psychology. This modest, practical proposal has been widely criticized, and its prospects remain unclear. This article defends this proposal against these criticisms and highlights its virtues.
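One practical consequence of the proposal under discussion is the larger sample needed to hold power fixed at the stricter threshold. The sketch below uses a rough normal-approximation sample-size formula for a two-sided one-sample z-test of a standardized effect; the effect size and power target are illustrative assumptions.

```python
# Rough normal-approximation sample-size comparison for alpha = .05 vs alpha = .005,
# two-sided one-sample z-test of a standardized effect d, 80% power (illustrative).
from scipy.stats import norm

def n_required(d, alpha, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ((z_alpha + z_beta) / d) ** 2

for alpha in (0.05, 0.005):
    print(alpha, round(n_required(d=0.4, alpha=alpha)))
# Roughly a 70% increase in sample size when moving from .05 to .005 at the same power.
```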
  • Genuine Bayesian Multiallelic Significance Test for the Hardy-Weinberg Equilibrium Law.Julio Michael Stern, Carlos Alberto de Braganca Pereira, Fabio Nakano & Martin Ritter Whittle - 2006 - Genetics and Molecular Research 5 (4):619-631.
    Statistical tests that detect and measure deviation from the Hardy-Weinberg equilibrium (HWE) have been devised but are limited when testing for deviation at multiallelic DNA loci is attempted. Here we present the full Bayesian significance test (FBST) for the HWE. This test depends neither on asymptotic results nor on the number of possible alleles for the particular locus being evaluated. The FBST is based on the computation of an evidence index in favor of the HWE hypothesis. A great deal of (...)
  • The Jeffreys–Lindley paradox and discovery criteria in high energy physics.Robert D. Cousins - 2017 - Synthese 194 (2):395-432.
    The Jeffreys–Lindley paradox displays how the use of a p value in a frequentist hypothesis test can lead to an inference that is radically different from that of a Bayesian hypothesis test in the form advocated by Harold Jeffreys in the 1930s and common today. The setting is the test of a well-specified null hypothesis versus a composite alternative. The p value, as well as the ratio of the likelihood under the null hypothesis to the maximized likelihood under the (...)
  • Scientific consistency, two-stage priors and the true value of a parameter.Gareth Jones - 1982 - British Journal for the Philosophy of Science 33 (2):133-160.
  • How Strong is the Confirmation of a Hypothesis by Significant Data?Thomas Bartelborth - 2016 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 47 (2):277-291.
    The aim of the article is to propose a way to determine to what extent a hypothesis H is confirmed if it has successfully passed a classical significance test. Bayesians have already raised many serious objections against significance testing, but in doing so they have always had to rely on epistemic probabilities and a further Bayesian analysis, which are rejected by classical statisticians. Therefore, I will suggest a purely frequentist evaluation procedure for significance tests that should also be accepted by (...)
  • Significance Testing with No Alternative Hypothesis: A Measure of Surprise.J. V. Howard - 2009 - Erkenntnis 70 (2):253-270.
    A pure significance test would check the agreement of a statistical model with the observed data even when no alternative model was available. The paper proposes the use of a modified p-value to make such a test. The model will be rejected if something surprising is observed. It is shown that the relation between this measure of surprise and the surprise indices of Weaver and Good is similar to the relationship between a p-value, a corresponding odds-ratio, and a (...)
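As a point of reference for the surprise indices mentioned in the abstract, the sketch below computes the index usually attributed to Weaver for a discrete distribution: the expected probability of an outcome divided by the probability of the outcome actually observed. Both the formulation and the numbers are illustrative assumptions, not the paper's own proposal.

```python
# Illustrative sketch of a Weaver-style surprise index for a discrete distribution:
# expected probability of an outcome divided by the probability of the observed outcome.
def surprise_index(probs, observed):
    expected_p = sum(p * p for p in probs)     # E[p(X)] under the model
    return expected_p / probs[observed]

probs = [0.70, 0.20, 0.05, 0.04, 0.01]
print(surprise_index(probs, observed=0))   # common outcome: index below 1, not surprising
print(surprise_index(probs, observed=4))   # rare outcome: large index, surprising
```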
  • Logic with numbers.Colin Howson - 2007 - Synthese 156 (3):491-512.
    Many people regard utility theory as the only rigorous foundation for subjective probability, and even de Finetti thought the betting approach supplemented by Dutch Book arguments only good as an approximation to a utility-theoretic account. I think that there are good reasons to doubt this judgment, and I propose an alternative, in which the probability axioms are consistency constraints on distributions of fair betting quotients. The idea itself is hardly new: it is in de Finetti and also Ramsey. What is (...)
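The fair-betting-quotient framework the abstract discusses is usually motivated by Dutch Book arguments, which can be checked with a few lines of arithmetic: if an agent's quotients on two incompatible events sum to more than the quotient on their disjunction, bets the agent regards as individually fair can be combined into a guaranteed loss. The quotients and the stake below are made up.

```python
# Dutch Book sketch: quotients q(A) + q(B) > q(A or B) for incompatible A, B
# allow a combination of individually 'fair' bets with a guaranteed loss for the agent.
q_a, q_b, q_aorb, stake = 0.4, 0.5, 0.7, 100.0

def agent_payoff(a_occurs, b_occurs):
    # The agent buys unit-stake bets on A and on B (pays q*stake, wins stake if the event occurs)
    # and sells a unit-stake bet on A-or-B (receives q*stake, pays stake if it occurs).
    buy = -(q_a + q_b) * stake + (stake if a_occurs else 0) + (stake if b_occurs else 0)
    sell = q_aorb * stake - (stake if (a_occurs or b_occurs) else 0)
    return buy + sell

for outcome in [(False, False), (True, False), (False, True)]:  # A and B cannot both occur
    print(outcome, agent_payoff(*outcome))   # -20.0 in every case: a sure loss
```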
  • Neural Correlates of Subjective Awareness for Natural Scene Categorization of Color Photographs and Line-Drawings.Qiufang Fu, Yong-Jin Liu, Zoltan Dienes, Jianhui Wu, Wenfeng Chen & Xiaolan Fu - 2017 - Frontiers in Psychology 8.
  • The Support Interval.Eric-Jan Wagenmakers, Quentin F. Gronau, Fabian Dablander & Alexander Etz - 2020 - Erkenntnis 87 (2):589-601.
    A frequentist confidence interval can be constructed by inverting a hypothesis test, such that the interval contains only parameter values that would not have been rejected by the test. We show how a similar definition can be employed to construct a Bayesian support interval. Consistent with Carnap’s theory of corroboration, the support interval contains only parameter values that receive at least some minimum amount of support from the data. The support interval is not subject to Lindley’s paradox and provides an (...)
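A minimal numerical sketch of the construction described above, assuming a Beta(1,1) prior, binomial data, and a grid evaluation: the support interval collects the parameter values whose plausibility the data have raised by at least a factor k. The data and the threshold k are illustrative.

```python
# Sketch of a support interval for a binomial rate: parameter values whose plausibility
# the data have increased by at least a factor k (Beta(1,1) prior; illustrative numbers).
import numpy as np
from scipy.stats import beta

successes, failures, k = 14, 6, 3.0
grid = np.linspace(0.001, 0.999, 999)
update_factor = beta.pdf(grid, 1 + successes, 1 + failures) / beta.pdf(grid, 1, 1)

supported = grid[update_factor >= k]
print(round(supported.min(), 3), round(supported.max(), 3))   # endpoints of the k = 3 support interval
```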
  • Some Evidence for an Association Between Early Life Adversity and Decision Urgency.Johanne P. Knowles, Nathan J. Evans & Darren Burke - 2019 - Frontiers in Psychology 10.
  • History and nature of the Jeffreys–Lindley paradox.Eric-Jan Wagenmakers & Alexander Ly - 2023 - Archive for History of Exact Sciences 77 (1):25-72.
    The Jeffreys–Lindley paradox exposes a rift between Bayesian and frequentist hypothesis testing that strikes at the heart of statistical inference. Contrary to what most current literature suggests, the paradox was central to the Bayesian testing methodology developed by Sir Harold Jeffreys in the late 1930s. Jeffreys showed that the evidence for a point-null hypothesis H0 scales with √n and repeatedly argued that it would, therefore, be mistaken to set a threshold for rejecting H0 (...)
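The scaling claim in the abstract can be checked numerically under simple assumptions (normal model, Normal(0, 1) prior on the effect under the alternative): holding the z-statistic fixed at a conventional rejection threshold while n grows, the Bayes factor in favour of the point null grows roughly in proportion to √n. The numbers are illustrative.

```python
# Sketch: with the z-statistic held fixed, the Bayes factor for the point null grows like sqrt(n).
from scipy.stats import norm

sigma, tau, z = 1.0, 1.0, 1.96
for n in [100, 400, 1600, 6400]:
    se = sigma / n ** 0.5
    xbar = z * se
    bf01 = norm.pdf(xbar, 0.0, se) / norm.pdf(xbar, 0.0, (se**2 + tau**2) ** 0.5)
    print(n, round(bf01, 2), round(bf01 / n ** 0.5, 3))   # the last column is roughly constant
```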
  • Significance Testing in Theory and Practice.Daniel Greco - 2011 - British Journal for the Philosophy of Science 62 (3):607-637.
    Frequentism and Bayesianism represent very different approaches to hypothesis testing, and this presents a skeptical challenge for Bayesians. Given that most empirical research uses frequentist methods, why (if at all) should we rely on it? While it is well known that there are conditions under which Bayesian and frequentist methods agree, without some reason to think these conditions are typically met, the Bayesian hasn’t shown why we are usually safe in relying on results reported by significance testers. In this article, (...)
  • The dilemma of statistics: Rigorous mathematical methods cannot compensate messy interpretations and lousy data.Peter Schuster - 2014 - Complexity 20 (1):11-15.