Citations

  • Logic of Statistical Inference. Ian Hacking - 1965 - Cambridge, England: Cambridge University Press. (207 citations)
    One of Ian Hacking's earliest publications, this book showcases his early ideas on the central concepts and questions surrounding statistical reasoning. He explores the basic principles of statistical reasoning and tests them, both at a philosophical level and in terms of their practical consequences for statisticians. Presented in a fresh twenty-first-century series livery, and including a specially commissioned preface written by Jan-Willem Romeijn, illuminating its enduring importance and relevance to philosophical enquiry, Hacking's influential and original work has been revived for (...)
  • Toward a More Objective Understanding of the Evidence of Carcinogenic Risk. Deborah G. Mayo - 1988 - PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1988:489-503. (4 citations)
    I argue that although the judgments required to reach statistical risk assessments may reflect policy values, it does not follow that the task of evaluating whether a given risk assessment is warranted by the evidence need also be imbued with policy values. What has led many to conclude otherwise, I claim, stems from misuses of the statistical testing methods involved. I set out rules for interpreting what specific test results do and do not say about the extent of a given (...)
  • Severe testing as a basic concept in a Neyman–Pearson philosophy of induction. Deborah G. Mayo & Aris Spanos - 2006 - British Journal for the Philosophy of Science 57 (2):323-357. (63 citations)
    Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities (...)
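The pre-data error probabilities this entry refers to can be made concrete with a small calculation. The sketch below is not drawn from the paper; the test, the effect size, the sample size, and the level are illustrative assumptions. It computes the type I error cutoff, power, and type II error rate for a one-sided z-test with known variance.

```python
# A minimal sketch (not from the cited paper) of pre-data Neyman-Pearson error
# probabilities: the type I error rate (alpha), the type II error rate (beta),
# and power, for a one-sided z-test of H0: mu = mu0 against H1: mu > mu0 with
# known sigma. All numbers (mu0, mu1, sigma, n, alpha) are illustrative assumptions.
import numpy as np
from scipy.stats import norm

mu0, mu1 = 0.0, 0.5       # null and alternative means (hypothetical values)
sigma, n = 1.0, 25        # known standard deviation and sample size
alpha = 0.05              # chosen significance level (type I error rate)

z_crit = norm.ppf(1 - alpha)                 # pre-data cutoff for the test statistic
delta = (mu1 - mu0) * np.sqrt(n) / sigma     # standardized effect size under H1
power = 1 - norm.cdf(z_crit - delta)         # P(reject H0 | mu = mu1)
beta = 1 - power                             # type II error rate at mu1

print(f"critical z = {z_crit:.3f}, power at mu1 = {power:.3f}, beta = {beta:.3f}")
```

Under these illustrative numbers the power is about 0.80; varying mu1, n, or alpha in the sketch shows how the pre-data quantities trade off against one another.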
  • Experimental practice and an error statistical account of evidence. Deborah G. Mayo - 2000 - Philosophy of Science 67 (3):207. (21 citations)
    In seeking general accounts of evidence, confirmation, or inference, philosophers have looked to logical relationships between evidence and hypotheses. Such logics of evidential relationship, whether hypothetico-deductive, Bayesian, or instantiationist, fail to capture or be relevant to scientific practice. They require information that scientists do not generally have (e.g., an exhaustive set of hypotheses), while lacking slots within which to include considerations to which scientists regularly appeal (e.g., error probabilities). Building on my co-symposiasts' contributions, I suggest some directions in which a (...)
  • Error statistics and learning from error: Making a virtue of necessity. Deborah G. Mayo - 1997 - Philosophy of Science 64 (4):212. (6 citations)
    The error statistical account of testing uses statistical considerations, not to provide a measure of probability of hypotheses, but to model patterns of irregularity that are useful for controlling, distinguishing, and learning from errors. The aim of this paper is (1) to explain the main points of contrast between the error statistical and the subjective Bayesian approach and (2) to elucidate the key errors that underlie the central objection raised by Colin Howson at our PSA 96 Symposium.
  • Error and the Growth of Experimental Knowledge. Deborah G. Mayo - 1996 - Chicago: University of Chicago Press. (228 citations)
    This text provides a critique of the subjective Bayesian view of statistical inference and proposes the author's own error-statistical approach as an alternative framework for the epistemology of experiment. It seeks to address the needs of researchers who work with statistical analysis.
  • Bayesian Statistics in Radiocarbon Calibration. Daniel Steel - 2001 - Philosophy of Science 68 (S3):S153-S164. (3 citations)
    Critics of Bayesianism often assert that scientists are not Bayesians. The widespread use of Bayesian statistics in the field of radiocarbon calibration is discussed in relation to this charge. This case study illustrates the willingness of scientists to use Bayesian statistics when the approach offers some advantage, while continuing to use orthodox methods in other contexts. The case of radiocarbon calibration, therefore, suggests a picture of statistical practice in science as eclectic and pragmatic rather than rigidly adhering to any one (...)
  • Error and the growth of experimental knowledge. Deborah Mayo - 1996 - International Studies in the Philosophy of Science 15 (1):455-459. (327 citations)
  • Principles of inference and their consequences. Deborah G. Mayo & Michael Kruse - 2001 - In David Corfield & Jon Williamson (eds.), Foundations of Bayesianism. Kluwer Academic Publishers. pp. 381-403. (18 citations)
  • Modelling and simulating early stopping of RCTs: a case study of early stop due to harm. Roger Stanev - 2012 - Journal of Experimental and Theoretical Artificial Intelligence 24 (4):513-526. (1 citation)
    Despite efforts from regulatory agencies (e.g. NIH, FDA), recent systematic reviews of randomised controlled trials (RCTs) show that top medical journals continue to publish trials without requiring authors to report the details readers need to evaluate early stopping decisions carefully. This article presents a systematic way of modelling and simulating interim monitoring decisions of RCTs. By taking an approach that is both general and rigorous, the proposed framework models and evaluates early stopping decisions of RCTs based on a clear and consistent (...)
  • Stopping rules and data monitoring in clinical trials. Roger Stanev - 2012 - In H. W. De Regt (ed.), EPSA Philosophy of Science: Amsterdam 2009 (The European Philosophy of Science Association Proceedings Vol. 1). Springer. pp. 375-386. (3 citations)
    Stopping rules — rules dictating when to stop accumulating data and start analyzing it for the purposes of inferring from the experiment — divide Bayesian, likelihoodist, and classical statistical approaches to inference. Although the relationship between Bayesian philosophy of science and stopping rules can be complex (cf. Steel 2003), in general, Bayesians regard stopping rules as irrelevant to what inference should be drawn from the data. This position clashes with classical statistical accounts. For orthodox statistics, stopping rules do matter to (...)
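The claim that stopping rules matter on the orthodox view can be illustrated with a short simulation. The sketch below is not from the cited paper; the sample sizes, the number of simulated trials, and the naive "stop at the first p < 0.05" rule are illustrative assumptions. It shows that repeatedly testing accumulating data and stopping at the first nominally significant result inflates the overall type I error rate well above the nominal level, which is why error probabilities, and hence orthodox inferences, depend on the stopping rule.

```python
# A minimal simulation sketch (not from the cited paper): data are generated
# under a true null hypothesis (mu = 0), the analyst 'peeks' after every new
# observation, and the trial stops at the first nominally significant p-value.
# The resulting rate of false rejections greatly exceeds the nominal 5% level.
# Sample sizes and trial counts are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_max, n_trials, alpha = 100, 5000, 0.05
false_positives = 0

for _ in range(n_trials):
    x = rng.normal(loc=0.0, scale=1.0, size=n_max)   # data under H0: mu = 0, sigma = 1
    for n in range(10, n_max + 1):                    # peek after each new observation
        z = x[:n].mean() * np.sqrt(n)                 # z statistic with known sigma = 1
        p = 2 * (1 - norm.cdf(abs(z)))                # two-sided p-value
        if p < alpha:                                 # optional stopping at first 'significance'
            false_positives += 1
            break

print(f"type I error with optional stopping: {false_positives / n_trials:.3f} "
      f"(nominal level: {alpha})")
```

A Bayesian posterior computed from the same data is unaffected by the experimenter's intention to keep sampling until significance, which is the contrast between the approaches that the abstract draws.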
  • Statistical decisions and the interim analyses of clinical trials. Roger Stanev - 2011 - Theoretical Medicine and Bioethics 32 (1):61-74. (3 citations)
    This paper analyzes statistical decisions during the interim analyses of clinical trials. After some general remarks about the ethical and scientific demands of clinical trials, I introduce the notion of a hard-case clinical trial, explain the basic idea behind it, and provide a real example involving the interim analyses of zidovudine in asymptomatic HIV-infected patients. The example leads me to propose a decision analytic framework for handling ethical conflicts that might arise during the monitoring of hard-case clinical trials. I use (...)