
Citations of:

Deborah G. Mayo, Error and the Growth of Experimental Knowledge

University of Chicago Press (1996)

  • Bayesian Confirmation Theory and The Likelihood Principle. Daniel Steel - 2007 - Synthese 156 (1):53-77.
    The likelihood principle (LP) is a core issue in disagreements between Bayesian and frequentist statistical theories. Yet statements of the LP are often ambiguous, while arguments for why a Bayesian must accept it rely upon unexamined implicit premises. I distinguish two propositions associated with the LP, which I label LP1 and LP2. I maintain that there is a compelling Bayesian argument for LP1, based upon strict conditionalization, standard Bayesian decision theory, and a proposition I call the practical relevance principle. In (...)
  • Probabilistic Alternatives to Bayesianism: The Case of Explanationism. Igor Douven & Jonah N. Schupbach - 2015 - Frontiers in Psychology 6.
    There has been a probabilistic turn in contemporary cognitive science. Far and away, most of the work in this vein is Bayesian, at least in name. Coinciding with this development, philosophers have increasingly promoted Bayesianism as the best normative account of how humans ought to reason. In this paper, we make a push for exploring the probabilistic terrain outside of Bayesianism. Non-Bayesian, but still probabilistic, theories provide plausible competitors both to descriptive and normative Bayesian accounts. We argue for this general (...)
  • A Pragmatist Theory of Evidence. Julian Reiss - 2015 - Philosophy of Science 82 (3):341-362.
    Two approaches to evidential reasoning compete in the biomedical and social sciences: the experimental and the pragmatist. Whereas experimentalism has received considerable philosophical analysis and support since the times of Bacon and Mill, pragmatism about evidence has been neither articulated nor defended. The overall aim is to fill this gap and develop a theory that articulates the latter. The main ideas of the theory will be illustrated and supported by a case study on the smoking/lung cancer controversy in the 1950s.
  • Feminist Philosophy of Science. Lynn Hankinson Nelson - 2002 - In Peter Machamer & Michael Silberstein (eds.), The Blackwell guide to the philosophy of science. Malden, Mass.: Blackwell. pp. 312–331.
    This chapter contains sections titled: Highlights of Past Literature; Current Work; Future Work.
  • Bias and Conditioning in Sequential Medical Trials. Cecilia Nardini & Jan Sprenger - 2013 - Philosophy of Science 80 (5):1053-1064.
    Randomized Controlled Trials are currently the gold standard within evidence-based medicine. Usually, they are conducted as sequential trials allowing for monitoring for early signs of effectiveness or harm. However, evidence from early stopped trials is often charged with being biased towards implausibly large effects. To our mind, this skeptical attitude is unfounded and caused by the failure to perform appropriate conditioning in the statistical analysis of the evidence. We contend that a shift from unconditional hypothesis tests in the style of (...)
  • Testing a precise null hypothesis: the case of Lindley’s paradox. Jan Sprenger - 2013 - Philosophy of Science 80 (5):733-744.
    The interpretation of tests of a point null hypothesis against an unspecified alternative is a classical and yet unresolved issue in statistical methodology. This paper approaches the problem from the perspective of Lindley's Paradox: the divergence of Bayesian and frequentist inference in hypothesis tests with large sample size. I contend that the standard approaches in both frameworks fail to resolve the paradox. As an alternative, I suggest the Bayesian Reference Criterion: it targets the predictive performance of the null hypothesis in (...)
  • Pragmatic norms in science: making them explicit. María Caamaño Alegre - 2013 - Synthese 190 (15):3227-3246.
    The present work constitutes an attempt to make explicit those pragmatic norms successfully operating in empirical science. I will first comment on the initial presuppositions of the discussion, in particular, on those concerning the instrumental character of scientific practice and the nature of scientific goals. Then I will depict the moderately naturalistic frame in which, from this approach, the pragmatic norms make sense. Third, I will focus on the specificity of the pragmatic norms, placing special emphasis on what I regard (...)
  • Probabilistic Logics and Probabilistic Networks. Rolf Haenni, Jan-Willem Romeijn, Gregory Wheeler & Jon Williamson - 2010 - Dordrecht, Netherlands: Synthese Library. Edited by Gregory Wheeler, Rolf Haenni, Jan-Willem Romeijn & Jon Williamson.
    Additionally, the text shows how to develop computationally feasible methods to mesh with this framework.
  • Significance Testing in Theory and Practice. Daniel Greco - 2011 - British Journal for the Philosophy of Science 62 (3):607-637.
    Frequentism and Bayesianism represent very different approaches to hypothesis testing, and this presents a skeptical challenge for Bayesians. Given that most empirical research uses frequentist methods, why (if at all) should we rely on it? While it is well known that there are conditions under which Bayesian and frequentist methods agree, without some reason to think these conditions are typically met, the Bayesian hasn’t shown why we are usually safe in relying on results reported by significance testers. In this article, (...)
  • Experimental Validity and Pragmatic Modes in Empirical Science. María Caamaño Alegre - 2009 - International Studies in the Philosophy of Science 23 (1):19-45.
    The purpose of this paper is to show how the degree of experimental validity of scientific procedures is crucially involved in determining two typical pragmatic modes in science, namely, the preservation of useful procedures and the disposal of useless ideas. The term 'pragmatic' will here be used following Schurz's characterisation of being internally pragmatic, as referring to that which proves useful for scientific or epistemic goals. The first part of the paper consists in a characterisation of the notion of experimental (...)
  • Scientific Reasoning Is Material Inference: Combining Confirmation, Discovery, and Explanation. Ingo Brigandt - 2010 - International Studies in the Philosophy of Science 24 (1):31-43.
    Whereas an inference (deductive as well as inductive) is usually viewed as being valid in virtue of its argument form, the present paper argues that scientific reasoning is material inference, i.e., justified in virtue of its content. A material inference is licensed by the empirical content embodied in the concepts contained in the premises and conclusion. Understanding scientific reasoning as material inference has the advantage of combining different aspects of scientific reasoning, such as confirmation, discovery, and explanation. This approach explains (...)
  • Two ways to rule out error: Severity and security. Kent Staley - unknown
    I contrast two modes of error-elimination relevant to evaluating evidence in accounts that emphasize frequentist reliability. The contrast corresponds to that between the use of a reliable inference procedure and the critical scrutiny of a procedure with regard to its reliability, in light of what is and is not known about the setting in which the procedure is used. I propose a notion of security as a category of evidential assessment for the latter. In statistical settings, robustness theory and (...)
  • A little survey of induction. John D. Norton - 2005 - In Peter Achinstein (ed.), Scientific Evidence: Philosophical Theories & Applications. The Johns Hopkins University Press. pp. 9-34.
    My purpose in this chapter is to survey some of the principal approaches to inductive inference in the philosophy of science literature. My first concern will be the general principles that underlie the many accounts of induction in this literature. When these accounts are considered in isolation, as is more commonly the case, it is easy to overlook that virtually all accounts depend on one of very few basic principles and that the proliferation of accounts can be understood as efforts (...)
  • The quantitative problem of old evidence. E. C. Barnes - 1999 - British Journal for the Philosophy of Science 50 (2):249-264.
    The quantitative problem of old evidence is the problem of how to measure the degree to which e confirms h for agent A at time t when A regards e as justified at t. Existing attempts to solve this problem have applied the e-difference approach, which compares A's probability for h at t with what probability A would assign h if A did not regard e as justified at t. The quantitative problem has been widely regarded as unsolvable primarily on (...)
  • Explaining disease: Correlations, causes, and mechanisms. [REVIEW] Paul Thagard - 1998 - Minds and Machines 8 (1):61-78.
    Why do people get sick? I argue that a disease explanation is best thought of as causal network instantiation, where a causal network describes the interrelations among multiple factors, and instantiation consists of observational or hypothetical assignment of factors to the patient whose disease is being explained. This paper first discusses inference from correlation to causation, integrating recent psychological discussions of causal reasoning with epidemiological approaches to understanding disease causation, particularly concerning ulcers and lung cancer. It then shows how causal (...)
  • Experimental practice and an error statistical account of evidence. Deborah G. Mayo - 2000 - Philosophy of Science 67 (3):207.
    In seeking general accounts of evidence, confirmation, or inference, philosophers have looked to logical relationships between evidence and hypotheses. Such logics of evidential relationship, whether hypothetico-deductive, Bayesian, or instantiationist, fail to capture or be relevant to scientific practice. They require information that scientists do not generally have (e.g., an exhaustive set of hypotheses), while lacking slots within which to include considerations to which scientists regularly appeal (e.g., error probabilities). Building on my co-symposiasts' contributions, I suggest some directions in which a (...)
  • Resolving Neyman's paradox. Max Albert - 2002 - British Journal for the Philosophy of Science 53 (1):69-76.
    According to Fisher, a hypothesis specifying a density function for X is falsified (at the level of significance α) if the realization of X is in the size-α region of lowest densities. However, non-linear transformations of X can map low-density into high-density regions. Apparently, then, falsifications can always be turned into corroborations (and vice versa) by looking at suitable transformations of X (Neyman's Paradox). The present paper shows that, contrary to the view taken in the literature, this provides no argument (...)
  • Kettlewell from an error statistician's point of view. David Wÿss Rudge - 2001 - Perspectives on Science 9 (1):59-77.
    Bayesians and error statisticians have relied heavily upon examples from physics in developing their accounts of scientific inference. The present essay demonstrates it is possible to analyze H.B.D. Kettlewell's classic study of natural selection from Deborah Mayo's error statistical point of view (Mayo 1996). A comparison with a previous analysis of this episode from a Bayesian perspective (Rudge 1998) reveals that the error statistical account makes better sense of investigations such as Kettlewell's because it clarifies how core elements in (...)
  • Framing the Epistemic Schism of Statistical Mechanics. Javier Anta - 2021 - Proceedings of the X Conference of the Spanish Society of Logic, Methodology and Philosophy of Science.
    In this talk I present the main results from Anta (2021), namely, that the theoretical division between Boltzmannian and Gibbsian statistical mechanics should be understood as a separation in the epistemic capabilities of this physical discipline. In particular, while from the Boltzmannian framework one can generate powerful explanations of thermal processes by appealing to their microdynamics, from the Gibbsian framework one can predict observable values in a computationally effective way. Finally, I argue that this statistical mechanical schism contradicts the Hempelian (...)
  • Perspectival Instruments. Ana-Maria Creţu - 2022 - Philosophy of Science 89 (3):521-541.
    Despite its potential implications for the objectivity of scientific knowledge, the claim that “scientific instruments are perspectival” has received little critical attention. I show that this claim is best understood as highlighting the dependence of instruments on different perspectives. When closely analyzed, instead of constituting a novel epistemic challenge, this dependence can be exploited to mount novel strategies for resolving two old epistemic problems: conceptual relativism and theory-ladenness. The novel content of this article consists in articulating and developing these strategies (...)
  • Lessons from the Large Hadron Collider for model-based experimentation: the concept of a model of data acquisition and the scope of the hierarchy of models. Koray Karaca - 2018 - Synthese 195 (12):5431-5452.
    According to the hierarchy of models (HoM) account of scientific experimentation developed by Patrick Suppes and elaborated by Deborah Mayo, theoretical considerations about the phenomena of interest are involved in an experiment through theoretical models that in turn relate to experimental data through data models, via the linkage of experimental models. In this paper, I dispute the HoM account in the context of present-day high-energy physics (HEP) experiments. I argue that even though the HoM account aims to characterize experimentation as (...)
  • From Ontological Traits to Validity Challenges in Social Science: The Cases of Economic Experiments and Research Questionnaires. María Caamaño-Alegre & José Caamaño-Alegre - 2019 - International Studies in the Philosophy of Science 32 (2):101-127.
    This article examines how problems of validity in empirical social research differ from those in natural science. Specifically, we focus on how some ontological peculiarities of the object of study in social science bear on validity requirements. We consider these issues in experimental validity as well as in test validity because, while both fields hold large intellectual traditions, research tests or questionnaires are less closely connected to natural science methodology than experiments.
  • Teaching evolutionary developmental biology: concepts, problems, and controversy. A. C. Love - 2013 - In Kostas Kampourakis (ed.), The Philosophy of Biology: a Companion for Educators. Dordrecht: Springer. pp. 323-341.
    Although sciences are often conceptualized in terms of theory confirmation and hypothesis testing, an equally important dimension of scientific reasoning is the structure of problems that guide inquiry. This problem structure is evident in several concepts central to evolutionary developmental biology (Evo-devo)—constraints, modularity, evolvability, and novelty. Because problems play an important role in biological practice, they should be included in biological pedagogy, especially when treating the issue of scientific controversy. A key feature of resolving controversy is synthesizing methodologies from different (...)
  • Bayesian Perspectives on the Discovery of the Higgs Particle. Richard Dawid - 2017 - Synthese 194 (2):377-394.
    It is argued that the high degree of trust in the Higgs particle before its discovery raises the question of a Bayesian perspective on data analysis in high energy physics in an interesting way that differs from other suggestions regarding the deployment of Bayesian strategies in the field.
  • Early stopping of RCTs: two potential issues for error statistics. Roger Stanev - 2015 - Synthese 192 (4):1089-1116.
    Error statistics is an important methodological view in philosophy of statistics and philosophy of science that can be applied to scientific experiments such as clinical trials. In this paper, I raise two potential issues for ES when it comes to guiding and explaining early stopping of randomized controlled trials: ES provides limited guidance in cases of early unfavorable trends due to the possibility of trend reversal; ES is silent on how to prospectively control error rates in experiments requiring multiple (...)
  • How to Undermine Underdetermination? Prasanta S. Bandyopadhyay, John G. Bennett & Megan D. Higgs - 2015 - Foundations of Science 20 (2):107-127.
    The underdetermination thesis poses a threat to rational choice of scientific theories. We discuss two arguments for the thesis. One draws its strength from deductivism together with the existence thesis, and the other is defended on the basis of the failure of a reliable inductive method. We adopt a partially subjective/objective pragmatic Bayesian epistemology of science framework, and reject both arguments for the thesis. Thus, in science we are able to reinstate rational choice called into question by the underdetermination thesis.
  • Towards the Methodological Turn in the Philosophy of Science. Hsiang-Ke Chao, Szu-Ting Chen & Roberta L. Millstein - 2013 - In Hsiang-Ke Chao, Szu-Ting Chen & Roberta L. Millstein (eds.), Mechanism and Causality in Biology and Economics. Dordrecht: Springer.
    This chapter provides an introduction to the study of the philosophical notions of mechanisms and causality in biology and economics. This chapter sets the stage for this volume, Mechanism and Causality in Biology and Economics, in three ways. First, it gives a broad review of the recent changes and current state of the study of mechanisms and causality in the philosophy of science. Second, consistent with a recent trend in the philosophy of science to focus on scientific practices, it in (...)
  • A Synthesis of Hempelian and Hypothetico-Deductive Confirmation. Jan Sprenger - 2013 - Erkenntnis 78 (4):727-738.
    This paper synthesizes confirmation by instances and confirmation by successful predictions, and thereby the Hempelian and the hypothetico-deductive traditions in confirmation theory. The merger of these two approaches is subsequently extended to the piecemeal confirmation of entire theories. It is then argued that this synthetic account makes a useful contribution from both a historical and a systematic perspective.
  • Discussion Note: McCain on Weak Predictivism and External World Scepticism. David William Harker - 2013 - Philosophia 41 (1):195-202.
    In a recent paper McCain (2012) argues that weak predictivism creates an important challenge for external world scepticism. McCain regards weak predictivism as uncontroversial and assumes the thesis within his argument. There is a sense in which the predictivist literature supports his conviction that weak predictivism is uncontroversial. This absence of controversy, however, is a product of significant plasticity within the thesis, which renders McCain’s argument worryingly vague. For McCain’s argument to work he either needs a stronger version of weak (...)
  • Hypothetico‐Deductive Confirmation. Jan Sprenger - 2011 - Philosophy Compass 6 (7):497-508.
    Hypothetico-deductive (H-D) confirmation builds on the idea that confirming evidence consists of successful predictions that deductively follow from the hypothesis under test. This article reviews scope, history and recent development of the venerable H-D account: First, we motivate the approach and clarify its relationship to Bayesian confirmation theory. Second, we explain and discuss the tacking paradoxes which exploit the fact that H-D confirmation gives no account of evidential relevance. Third, we review several recent proposals that aim at a sounder and (...)
  • Securing reliable evidence. Kent W. Staley - unknown
    Evidence claims depend on fallible assumptions. Three strategies for making true evidence claims in spite of this fallibility are strengthening the support for those assumptions, weakening conclusions, and using multiple independent tests to produce robust evidence. Reliability itself, understood in frequentist terms, does not explain the usefulness of all three strategies; robustness, in particular, sometimes functions in a way that is not well-characterized in terms of reliability. I argue that, in addition to reliability, the security of evidence claims is (...)
  • Charles Sanders Peirce. Robert W. Burch - 2008 - Stanford Encyclopedia of Philosophy.
  • Severe testing as a basic concept in a Neyman–Pearson philosophy of induction. Deborah G. Mayo & Aris Spanos - 2006 - British Journal for the Philosophy of Science 57 (2):323-357.
    Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities (...)
  • A new solution to the paradoxes of rational acceptability. Igor Douven - 2002 - British Journal for the Philosophy of Science 53 (3):391-410.
    The Lottery Paradox and the Preface Paradox both involve the thesis that high probability is sufficient for rational acceptability. The standard solution to these paradoxes denies that rational acceptability is deductively closed. This solution has a number of untoward consequences. The present paper suggests that a better solution to the paradoxes is to replace the thesis that high probability suffices for rational acceptability with a somewhat stricter thesis. This avoids the untoward consequences of the standard solution. The new solution will (...)
  • Ducks, Rabbits, and Normal Science: Recasting the Kuhn’s-Eye View of Popper’s Demarcation of Science. Deborah G. Mayo - 1996 - British Journal for the Philosophy of Science 47 (2):271-290.
    Kuhn maintains that what marks the transition to a science is the ability to carry out ‘normal’ science—a practice he characterizes as abandoning the kind of testing that Popper lauds as the hallmark of science. Examining Kuhn's own contrast with Popper, I propose to recast Kuhnian normal science. Thus recast, it is seen to consist of severe and reliable tests of low-level experimental hypotheses (normal tests) and is, indeed, the place to look to demarcate science. While thereby vindicating Kuhn on (...)
  • Neyman-Pearson Hypothesis Testing, Epistemic Reliability and Pragmatic Value-Laden Asymmetric Error Risks. Adam P. Kubiak, Paweł Kawalec & Adam Kiersztyn - 2022 - Axiomathes 32 (4):585-604.
    We show that if among the tested hypotheses the number of true hypotheses is not equal to the number of false hypotheses, then Neyman-Pearson theory of testing hypotheses does not warrant minimal epistemic reliability. We also argue that N-P does not protect from the possible negative effects of the pragmatic value-laden unequal setting of error probabilities on N-P’s epistemic reliability. Most importantly, we argue that in the case of a negative impact no methodological adjustment is available to neutralize it, so (...)
  • Scientific self-correction: the Bayesian way. Felipe Romero & Jan Sprenger - 2020 - Synthese (Suppl 23):1-21.
    The enduring replication crisis in many scientific disciplines casts doubt on the ability of science to estimate effect sizes accurately, and in a wider sense, to self-correct its findings and to produce reliable knowledge. We investigate the merits of a particular countermeasure—replacing null hypothesis significance testing with Bayesian inference—in the context of the meta-analytic aggregation of effect sizes. In particular, we elaborate on the advantages of this Bayesian reform proposal under conditions of publication bias and other methodological imperfections that are (...)
  • Psychopathy: Morally Incapacitated Persons. Heidi Maibom - 2017 - In Thomas Schramme & Steven Edwards (eds.), Handbook of the Philosophy of Medicine. Springer. pp. 1109-1129.
    After describing the disorder of psychopathy, I examine the theories and the evidence concerning the psychopaths’ deficient moral capacities. I first examine whether or not psychopaths can pass tests of moral knowledge. Most of the evidence suggests that they can. If there is a lack of moral understanding, then it has to be due to an incapacity that affects not their declarative knowledge of moral norms, but their deeper understanding of them. I then examine two suggestions: it is their deficient (...)
  • Frequentist statistics as a theory of inductive inference. Deborah G. Mayo & David Cox - 2009 - In Deborah G. Mayo & Aris Spanos (eds.), Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science. New York: Cambridge University Press.
    After some general remarks about the interrelation between philosophical and statistical thinking, the discussion centres largely on significance tests. These are defined as the calculation of p-values rather than as formal procedures for ‘acceptance’ and ‘rejection’. A number of types of null hypothesis are described and a principle for evidential interpretation set out governing the implications of p-values in the specific circumstances of each application, as contrasted with a long-run interpretation. A number of more complicated situations are discussed (...)
  • Objectivity in confirmation: Post hoc monsters and novel predictions. Ioannis Votsis - 2014 - Studies in History and Philosophy of Science Part A 45:70-78.
    The aim of this paper is to put in place some cornerstones in the foundations for an objective theory of confirmation by considering lessons from the failures of predictivism. Discussion begins with a widely accepted challenge, to find out what is needed in addition to the right kind of inferential–semantical relations between hypothesis and evidence to have a complete account of confirmation, one that gives a definitive answer to the question whether hypotheses branded as “post hoc monsters” can be confirmed. (...)
  • Why Frequentists and Bayesians Need Each Other. Jon Williamson - 2013 - Erkenntnis 78 (2):293-318.
    The orthodox view in statistics has it that frequentism and Bayesianism are diametrically opposed—two totally incompatible takes on the problem of statistical inference. This paper argues to the contrary that the two approaches are complementary and need to mesh if probabilistic reasoning is to be carried out correctly.
  • Philosophy of Biology. Ingo Brigandt - 2011 - In Steven French & Juha Saatsi (eds.), Continuum Companion to the Philosophy of Science. Continuum. pp. 246-267.
    This overview of philosophy of biology lays out what implications biology and recent philosophy of biology have for general philosophy of science. The following topics are addressed in five sections: natural kinds, conceptual change, discovery and confirmation, explanation and reduction, and naturalism.
  • Statistics is not enough: revisiting Ronald A. Fisher's critique (1936) of Mendel's experimental results (1866). Avital Pilpel - 2007 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 38 (3):618-626.
    This paper is concerned with the role of rational belief change theory in the philosophical understanding of experimental error. Today, philosophers seek insight about error in the investigation of specific experiments, rather than in general theories. Nevertheless, rational belief change theory adds to our understanding of just such cases: R. A. Fisher’s criticism of Mendel’s experiments being a case in point. After an historical introduction, the main part of this paper investigates Fisher’s paper from the point of view of rational (...)
  • No evidence amalgamation without evidence measurement. Veronica J. Vieland & Hasok Chang - 2019 - Synthese 196 (8):3139-3161.
    In this paper we consider the problem of how to measure the strength of statistical evidence from the perspective of evidence amalgamation operations. We begin with a fundamental measurement amalgamation principle: for any measurement, the inputs and outputs of an amalgamation procedure must be on the same scale, and this scale must have a meaningful interpretation vis-à-vis the object of measurement. Using the p value as a candidate evidence measure, we examine various commonly used approaches to amalgamation (...)
  • From Discovery to Justification: Outline of an Ideal Research Program in Empirical Psychology. Erich H. Witte & Frank Zenker - 2017 - Frontiers in Psychology 8.
  • The Climate Wars and ‘the Pause’ – Are Both Sides Wrong? Roger Jones & James Ricketts - 2016 - Victoria University, Victoria Institute of Strategic Economic Studies.
  • Simplicity, Inference and Modelling: Keeping It Sophisticatedly Simple. Arnold Zellner, Hugo A. Keuzenkamp & Michael McAleer (eds.) - 2001 - New York: Cambridge University Press.
    The idea that simplicity matters in science is as old as science itself, with the much cited example of Ockham's Razor, 'entia non sunt multiplicanda praeter necessitatem': entities are not to be multiplied beyond necessity. A problem with Ockham's razor is that nearly everybody seems to accept it, but few are able to define its exact meaning and to make it operational in a non-arbitrary way. Using a multidisciplinary perspective including philosophers, mathematicians, econometricians and economists, this 2002 monograph examines simplicity (...)
  • Variety-of-evidence reasoning about the distant past: A case study in paleoclimate reconstruction. Martin A. Vezér - 2017 - European Journal for Philosophy of Science 7 (2):257-265.
    The epistemology of studies addressing questions about historical and prehistorical phenomena is a subject of increasing discussion among philosophers of science. A related field of inquiry that has yet to be connected to this topic is the epistemology of climate science. Branching these areas of research, I show how variety-of-evidence reasoning accounts for scientific inferences about the past by detailing a case study in paleoclimate reconstruction. This analysis aims to clarify the logic of historical inquiry in general and, by focusing (...)
  • Expanding Our Grasp: Causal Knowledge and the Problem of Unconceived Alternatives. Matthias Egg - 2016 - British Journal for the Philosophy of Science 67 (1):115-141.
    I argue that scientific realism, insofar as it is only committed to those scientific posits of which we have causal knowledge, is immune to Kyle Stanford’s argument from unconceived alternatives. This causal strategy is shown not to repeat the shortcomings of previous realist responses to Stanford’s argument. Furthermore, I show that the notion of causal knowledge underlying it can be made sufficiently precise by means of conceptual tools recently introduced into the debate on scientific realism. Finally, I apply this strategy (...)
  • What’s Wrong With Our Theories of Evidence? Julian Reiss - 2014 - Theoria: Revista de Teoría, Historia y Fundamentos de la Ciencia 29 (2):283-306.
    This paper surveys and critically assesses existing theories of evidence with respect to four desiderata. A good theory of evidence should be both a theory of evidential support (i.e., be informative about what kinds of facts speak in favour of a hypothesis), and of warrant (i.e., be informative about how strongly a given set of facts speaks in favour of the hypothesis), it should apply to the non-ideal cases in which scientists typically find themselves, and it should be ‘descriptively adequate’, (...)