  • Toward a More Objective Understanding of the Evidence of Carcinogenic Risk. Deborah G. Mayo - 1988 - PSA Proceedings of the Biennial Meeting of the Philosophy of Science Association 1988 (2):489-503.
    The field of quantified risk assessment is a new field, only about 20 years old, and already it is considered to be in a crisis. As Funtowicz and J.R. Ravetz (1985) put it: "The concept of risk in terms of probability has proved to be so elusive, and statistical inference so problematic, that many experts in the field have recently either lost hope of finding a scientific solution or lost faith in Risk Analysis as a tool for decision-making." (p. 219) Thus the ‘art’ (...)
  • A New Proof of the Likelihood Principle. Greg Gandenberger - 2015 - British Journal for the Philosophy of Science 66 (3):475-503.
    I present a new proof of the likelihood principle that avoids two responses to a well-known proof due to Birnbaum ([1962]). I also respond to arguments that Birnbaum’s proof is fallacious, which if correct could be adapted to this new proof. On the other hand, I urge caution in interpreting proofs of the likelihood principle as arguments against the use of frequentist statistical methods. 1 Introduction; 2 The New Proof; 3 How the New Proof Addresses Proposals to Restrict Birnbaum’s Premises; 4 A Response (...)
  • Severe testing as a basic concept in a Neyman–Pearson philosophy of induction. Deborah G. Mayo & Aris Spanos - 2006 - British Journal for the Philosophy of Science 57 (2):323-357.
    Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities (...)
  • Did Pearson reject the Neyman-Pearson philosophy of statistics? Deborah G. Mayo - 1992 - Synthese 90 (2):233-262.
    I document some of the main evidence showing that E. S. Pearson rejected the key features of the behavioral-decision philosophy that became associated with the Neyman-Pearson Theory of statistics (NPT). I argue that NPT principles arose not out of behavioral aims, where the concern is solely with behaving correctly sufficiently often in some long run, but out of the epistemological aim of learning about causes of experimental results (e.g., distinguishing genuine from spurious effects). The view Pearson did hold gives a (...)
  • Cartwright, Causality, and Coincidence. Deborah G. Mayo - 1986 - PSA Proceedings of the Biennial Meeting of the Philosophy of Science Association 1986 (1):42-58.
    In How the Laws of Physics Lie (1983) Cartwright argues for being a realist about theoretical entities but a non-realist about theoretical laws. Her reason for this distinction is that only the former involves causal explanation, and accepting causal explanations commits us to the existence of the causal entity invoked. “What is special about explanation by theoretical entity is that it is causal explanation, and existence is an internal characteristic of causal claims. There is nothing similar for theoretical laws.” (p. 93). (...)
  • Mathematical statistics and metastatistical analysis. Andrés Rivadulla - 1991 - Erkenntnis 34 (2):211-236.
    This paper deals with meta-statistical questions concerning frequentist statistics. In Sections 2 to 4 I analyse the dispute between Fisher and Neyman on the so-called logic of statistical inference, a polemic that has been concomitant with the development of mathematical statistics. My conclusion is that, whenever mathematical statistics makes it possible to draw inferences, it uses only deductive reasoning. I therefore reject Fisher's inductive approach to statistical estimation theory and adhere to Neyman's deductive one. On the other hand, (...)
  • Novel evidence and severe tests. Deborah G. Mayo - 1991 - Philosophy of Science 58 (4):523-552.
    While many philosophers of science have accorded special evidential significance to tests whose results are "novel facts", there continues to be disagreement over both the definition of novelty and why it should matter. The view of novelty favored by Giere, Lakatos, Worrall and many others is that of use-novelty: An accordance between evidence e and hypothesis h provides a genuine test of h only if e is not used in h's construction. I argue that what lies behind the intuition that (...)
  • A Visible Hand in the Marketplace of Ideas: Precision Measurement as Arbitrage. Philip Mirowski - 1994 - Science in Context 7 (3):563-589.
    The Argument: While there has been much attention given to experiment in modern science studies, there has been astoundingly little concern spared over the practice of quantitative measurement. Thus myths about the unreasonable effectiveness of mathematics in science still abound. This paper presents: an explicit mathematical model of the stabilization of quantitative constants in a mathematical science to rival older Bayesian and classical accounts; a framework for writing a history of practices with regard to the treatment of quantitative measurement error; resources for the comparative sociology of differing (...)
  • Duhem's problem, the Bayesian way, and error statistics, or "What's belief got to do with it?". Deborah G. Mayo - 1997 - Philosophy of Science 64 (2):222-244.
    I argue that the Bayesian Way of reconstructing Duhem's problem fails to advance a solution to the problem of which of a group of hypotheses ought to be rejected or "blamed" when experiment disagrees with prediction. But scientists do regularly tackle and often enough solve Duhemian problems. When they do, they employ a logic and methodology which may be called error statistics. I discuss the key properties of this approach which enable it to split off the task of testing auxiliary (...)