Citations of:

Statistical methods and scientific inference

Edinburgh: Oliver & Boyd (1955)

• Type I error rates are not usually inflated. Mark Rubin - 2024 - Journal of Trial and Error 1.
    The inflation of Type I error rates is thought to be one of the causes of the replication crisis. Questionable research practices such as p-hacking are thought to inflate Type I error rates above their nominal level, leading to unexpectedly high levels of false positives in the literature and, consequently, unexpectedly low replication rates. In this article, I offer an alternative view. I argue that questionable and other research practices do not usually inflate relevant Type I error rates. I begin (...)
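For orientation, the standard inflation argument Rubin pushes back against is easy to state numerically. The sketch below is my illustration, not an example from the paper; it computes the familywise Type I error rate for k independent tests run at a nominal alpha of .05.

```python
# Minimal sketch, assuming k independent tests each run at alpha = .05:
# the probability of at least one false positive is 1 - (1 - alpha)^k.
alpha = 0.05
for k in (1, 5, 10, 20):
    familywise = 1 - (1 - alpha) ** k
    print(f"k = {k:2d} tests -> familywise Type I error rate = {familywise:.3f}")
```

Rubin's point is that this familywise rate is not always the relevant error rate for the inference a researcher actually draws.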
• Exploratory hypothesis tests can be more compelling than confirmatory hypothesis tests. Mark Rubin & Chris Donkin - 2024 - Philosophical Psychology 37 (8):2019-2047.
    Preregistration has been proposed as a useful method for making a publicly verifiable distinction between confirmatory hypothesis tests, which involve planned tests of ante hoc hypotheses, and exploratory hypothesis tests, which involve unplanned tests of post hoc hypotheses. This distinction is thought to be important because it has been proposed that confirmatory hypothesis tests provide more compelling results (less uncertain, less tentative, less open to bias) than exploratory hypothesis tests. In this article, we challenge this proposition and argue that there (...)
• Genuine Bayesian Multiallelic Significance Test for the Hardy-Weinberg Equilibrium Law. Julio Michael Stern, Carlos Alberto de Braganca Pereira, Fabio Nakano & Martin Ritter Whittle - 2006 - Genetics and Molecular Research 5 (4):619-631.
    Statistical tests that detect and measure deviation from the Hardy-Weinberg equilibrium (HWE) have been devised but are limited when testing for deviation at multiallelic DNA loci is attempted. Here we present the full Bayesian significance test (FBST) for the HWE. This test depends neither on asymptotic results nor on the number of possible alleles for the particular locus being evaluated. The FBST is based on the computation of an evidence index in favor of the HWE hypothesis. A great deal of (...)
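The FBST itself computes an evidence value by integrating the posterior density over a region determined by the null hypothesis, which is beyond a short snippet. As a simpler point of contrast, here is the textbook biallelic HWE check that the multiallelic FBST is designed to move beyond; the genotype counts are invented for illustration.

```python
# Illustrative contrast only: the classical biallelic Hardy-Weinberg check,
# NOT the Bayesian multiallelic FBST of Stern et al. Counts are made up.
n_AA, n_Aa, n_aa = 298, 489, 213
n = n_AA + n_Aa + n_aa
p = (2 * n_AA + n_Aa) / (2 * n)          # estimated frequency of allele A
expected = (n * p ** 2, n * 2 * p * (1 - p), n * (1 - p) ** 2)
chi2 = sum((o - e) ** 2 / e
           for o, e in zip((n_AA, n_Aa, n_aa), expected))
print(f"p(A) = {p:.3f}, chi-square = {chi2:.2f} on 1 df")
```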
• Statistical Significance Testing in Economics. William Peden & Jan Sprenger - 2022 - In Conrad Heilmann & Julian Reiss (eds.), Routledge Handbook of Philosophy of Economics. Routledge.
    The origins of testing scientific models with statistical techniques go back to 18th century mathematics. However, the modern theory of statistical testing was primarily developed through the work of Sir R.A. Fisher, Jerzy Neyman, and Egon Pearson in the inter-war period. Some of Fisher's papers on testing were published in economics journals (Fisher, 1923, 1935) and exerted a notable influence on the discipline. The development of econometrics and the rise of quantitative economic models in the mid-20th century made statistical significance (...)
• Neyman-Pearson Hypothesis Testing, Epistemic Reliability and Pragmatic Value-Laden Asymmetric Error Risks. Adam P. Kubiak, Paweł Kawalec & Adam Kiersztyn - 2022 - Axiomathes 32 (4):585-604.
    We show that if among the tested hypotheses the number of true hypotheses is not equal to the number of false hypotheses, then Neyman-Pearson theory of testing hypotheses does not warrant minimal epistemic reliability (the feature of driving to true conclusions more often than to false ones). We also argue that N-P does not protect from the possible negative effects of the pragmatic value-laden unequal setting of error probabilities on N-P’s epistemic reliability. Most importantly, we argue that in the case (...)
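The base-rate mechanism behind the reliability claim can be seen in a toy calculation (mine, not the authors' formal model): count a conclusion as true when a true alternative is detected (probability = power) or a true null is retained (probability = 1 - alpha).

```python
# Hedged sketch: alpha and power are stipulated, and "true hypothesis" means
# the tested alternative is in fact true. With a badly underpowered test,
# the long-run share of true conclusions falls below 1/2 once true
# hypotheses predominate among those tested.
alpha, power = 0.05, 0.20
for true_share in (0.5, 0.9):
    correct = true_share * power + (1 - true_share) * (1 - alpha)
    print(f"true hypotheses: {true_share:.0%} -> true conclusions: {correct:.3f}")
```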
• “Repeated sampling from the same population?” A critique of Neyman and Pearson’s responses to Fisher. Mark Rubin - 2020 - European Journal for Philosophy of Science 10 (3):1-15.
    Fisher criticised the Neyman-Pearson approach to hypothesis testing by arguing that it relies on the assumption of “repeated sampling from the same population.” The present article considers the responses to this criticism provided by Pearson and Neyman. Pearson interpreted alpha levels in relation to imaginary replications of the original test. This interpretation is appropriate when test users are sure that their replications will be equivalent to one another. However, by definition, scientific researchers do not possess sufficient knowledge about the relevant (...)
• What type of Type I error? Contrasting the Neyman–Pearson and Fisherian approaches in the context of exact and direct replications. Mark Rubin - 2021 - Synthese 198 (6):5809-5834.
    The replication crisis has caused researchers to distinguish between exact replications, which duplicate all aspects of a study that could potentially affect the results, and direct replications, which duplicate only those aspects of the study that are thought to be theoretically essential to reproduce the original effect. The replication crisis has also prompted researchers to think more carefully about the possibility of making Type I errors when rejecting null hypotheses. In this context, the present article considers the utility of two (...)
• Statistics and Probability Have Always Been Value-Laden: An Historical Ontology of Quantitative Research Methods. Michael J. Zyphur & Dean C. Pierides - 2020 - Journal of Business Ethics 167 (1):1-18.
    Quantitative researchers often discuss research ethics as if specific ethical problems can be reduced to abstract normative logics (e.g., virtue ethics, utilitarianism, deontology). Such approaches overlook how values are embedded in every aspect of quantitative methods, including ‘observations,’ ‘facts,’ and notions of ‘objectivity.’ We describe how quantitative research practices, concepts, discourses, and their objects/subjects of study have always been value-laden, from the invention of statistics and probability in the 1600s to their subsequent adoption as a logic made to appear as (...)
• Context of Communication: What Philosophers can Contribute. Wayne C. Myrvold - unknown
    Once an experiment is done, the observations have been made and the data have been analyzed, what should scientists communicate to the world at large, and how should they do it? This, I will argue, is an intricate question, and one that philosophers can make a contribution to. I will illustrate these points by reference to the debate between Fisher and Neyman & Pearson in the 1950s, which I take to be, at heart, a debate about norms of scientific communication. (...)
• Examining coincidences: Towards an integrated approach. Laurence Browne - unknown
• Probability logic, logical probability, and inductive support. Isaac Levi - 2010 - Synthese 172 (1):97-118.
    This paper seeks to defend the following conclusions: The program advanced by Carnap and other necessarians for probability logic has little to recommend it except for one important point. Credal probability judgments ought to be adapted to changes in evidence or states of full belief in a principled manner in conformity with the inquirer’s confirmational commitments—except when the inquirer has good reason to modify his or her confirmational commitment. Probability logic ought to spell out the constraints on rationally coherent confirmational (...)
• Conditional Degree of Belief and Bayesian Inference. Jan Sprenger - 2020 - Philosophy of Science 87 (2):319-335.
    Why are conditional degrees of belief in an observation E, given a statistical hypothesis H, aligned with the objective probabilities expressed by H? After showing that standard replies are not satisfactory, I develop a suppositional analysis of conditional degree of belief, transferring Ramsey’s classical proposal to statistical inference. The analysis saves the alignment, explains the role of chance-credence coordination, and rebuts the charge of arbitrary assessment of evidence in Bayesian inference. Finally, I explore the implications of this analysis for Bayesian (...)
• TTB vs. Franklin's Rule in Environments of Different Redundancy. Gerhard Schurz & Paul D. Thorn - 2014 - Frontiers in Psychology 5:15-16.
    This addendum presents results that confound some commonly made claims about the sorts of environments in which the performance of TTB exceeds that of Franklin's rule, and vice versa.
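For readers unfamiliar with the two strategies, here are minimal sketches of each; the cue values, validity order, and weights are invented, and this is not the authors' simulation code.

```python
def ttb(cues_a, cues_b, validity_order):
    """Take-The-Best: decide on the first cue, in validity order,
    that discriminates between the options; None means guess."""
    for i in validity_order:
        if cues_a[i] != cues_b[i]:
            return "A" if cues_a[i] > cues_b[i] else "B"
    return None

def franklins_rule(cues_a, cues_b, weights):
    """Franklin's rule: compare weighted sums over all cues."""
    score_a = sum(w * c for w, c in zip(weights, cues_a))
    score_b = sum(w * c for w, c in zip(weights, cues_b))
    return "A" if score_a > score_b else "B" if score_b > score_a else None

a, b = (1, 0, 1), (1, 1, 0)
print(ttb(a, b, validity_order=[0, 1, 2]))            # 'B': cue 1 decides
print(franklins_rule(a, b, weights=[0.9, 0.5, 0.6]))  # 'A': 1.5 beats 1.4
```

The same pair of options can thus be classified differently by the two rules, which is what makes environment structure (such as cue redundancy) decisive for their relative performance.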
• Hypothetico-Deductive Confirmation. Jan Sprenger - 2011 - Philosophy Compass 6 (7):497-508.
    Hypothetico-deductive (H-D) confirmation builds on the idea that confirming evidence consists of successful predictions that deductively follow from the hypothesis under test. This article reviews scope, history and recent development of the venerable H-D account: First, we motivate the approach and clarify its relationship to Bayesian confirmation theory. Second, we explain and discuss the tacking paradoxes which exploit the fact that H-D confirmation gives no account of evidential relevance. Third, we review several recent proposals that aim at a sounder and (...)
• Two dogmas of strong objective Bayesianism. Prasanta S. Bandyopadhyay & Gordon Brittan - 2010 - International Studies in the Philosophy of Science 24 (1):45-65.
We introduce a distinction, unnoticed in the literature, between four varieties of objective Bayesianism. What we call 'strong objective Bayesianism' is characterized by two claims: that all scientific inference is 'logical' and that, given the same background information, two agents will ascribe a unique probability to their priors. We think that neither of these claims can be sustained; in this sense, they are 'dogmatic'. The first fails to recognize that some scientific inference, in particular that concerning evidential relations, is (...)
• Inductivism and probabilism. Roger Rosenkrantz - 1971 - Synthese 23 (2-3):167-205.
I. I set out my view that all inference is essentially deductive and pinpoint what I take to be the major shortcomings of the induction rule. II. The import of data depends on the probability model of the experiment, a dependence ignored by the induction rule. Inductivists admit background knowledge must be taken into account but never spell out how this is to be done. As I see it, that is the problem of induction. III. The induction rule, far from providing a (...)
• Mathematical statistics and metastatistical analysis. Andrés Rivadulla - 1991 - Erkenntnis 34 (2):211-236.
    This paper deals with meta-statistical questions concerning frequentist statistics. In Sections 2 to 4 I analyse the dispute between Fisher and Neyman on the so called logic of statistical inference, a polemic that has been concomitant of the development of mathematical statistics. My conclusion is that, whenever mathematical statistics makes it possible to draw inferences, it only uses deductive reasoning. Therefore I reject Fisher's inductive approach to the statistical estimation theory and adhere to Neyman's deductive one. On the other hand, (...)
• Severe testing as a basic concept in a Neyman–Pearson philosophy of induction. Deborah G. Mayo & Aris Spanos - 2006 - British Journal for the Philosophy of Science 57 (2):323-357.
    Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities (...)
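Their post-data severity assessment has a simple computable core. Below is a hedged sketch for a one-sided Normal test, following the general recipe in Mayo and Spanos; all numbers are invented. After rejecting H0: mu <= 0, one asks how severely the data probe the stronger claim mu > mu1.

```python
# Sketch of a post-data severity calculation for test T+ (H0: mu <= 0);
# n, sigma, and xbar are stipulated for illustration.
from math import erf, sqrt

def phi(x):                                 # standard Normal CDF
    return 0.5 * (1 + erf(x / sqrt(2)))

n, sigma, xbar = 100, 1.0, 0.25             # z = 2.5, so H0 is rejected
se = sigma / sqrt(n)
for mu1 in (0.0, 0.1, 0.2, 0.3):
    severity = phi((xbar - mu1) / se)       # P(X-bar <= xbar; mu = mu1)
    print(f"claim mu > {mu1:.1f}: severity = {severity:.3f}")
```

The same rejection warrants "mu > 0" with high severity but "mu > 0.3" with very low severity, which is how the account turns pre-data error probabilities into post-data inferential discriminations.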
• Evidence in medicine and evidence-based medicine. John Worrall - 2007 - Philosophy Compass 2 (6):981-1022.
    It is surely obvious that medicine, like any other rational activity, must be based on evidence. The interest is in the details: how exactly are the general principles of the logic of evidence to be applied in medicine? Focussing on the development, and current claims of the ‘Evidence-Based Medicine’ movement, this article raises a number of difficulties with the rationales that have been supplied in particular for the ‘evidence hierarchy’ and for the very special role within that hierarchy of randomized (...)
• The ethics of alpha: Reflections on statistics, evidence and values in medicine. R. E. G. Upshur - 2001 - Theoretical Medicine and Bioethics 22 (6):565-576.
    As health care embraces the tenets of evidence-based medicine it is important to ask questions about how evidence is produced and interpreted. This essay explores normative dimensions of evidence production, particularly around issues of setting the tolerable level of uncertainty of results. Four specific aspects are explored: what health care providers know about statistics, why alpha levels have been set at 0.05, the role of randomization in the generation of sufficient grounds of belief, and the role of observational studies. The (...)
• Probability: A new logico-semantical approach. [REVIEW] Christina Schneider - 1994 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 25 (1):107-124.
    This approach does not define a probability measure by syntactical structures. It reveals a link between modal logic and mathematical probability theory. This is shown (1) by adding an operator (and two further connectives and constants) to a system of lower predicate calculus and (2) regarding the models of that extended system. These models are models of the modal system S₅ (without the Barcan formula), where a usual probability measure is defined on their set of possible worlds. Mathematical probability models (...)
• What is wrong with intelligent design? Elliott Sober - 2007 - Quarterly Review of Biology 82 (1):3-8.
    This article reviews two standard criticisms of creationism/intelligent design (ID): it is unfalsifiable, and it is refuted by the many imperfect adaptations found in nature. Problems with both criticisms are discussed. A conception of testability is described that avoids the defects in Karl Popper’s falsifiability criterion. Although ID comes in multiple forms, which call for different criticisms, it emerges that ID fails to constitute a serious alternative to evolutionary theory.
• Theories of probability. Colin Howson - 1995 - British Journal for the Philosophy of Science 46 (1):1-32.
    My title is intended to recall Terence Fine's excellent survey, Theories of Probability [1973]. I shall consider some developments that have occurred in the intervening years, and try to place some of the theories he discussed in what is now a slightly longer perspective. Completeness is not something one can reasonably hope to achieve in a journal article, and any selection is bound to reflect a view of what is salient. In a subject as prone to dispute as this, there (...)
• Corroboration, explanation, evolving probability, simplicity and a sharpened razor. I. J. Good - 1968 - British Journal for the Philosophy of Science 19 (2):123-143.
• The objectivity of Subjective Bayesianism. Jan Sprenger - 2018 - European Journal for Philosophy of Science 8 (3):539-558.
Subjective Bayesianism is a major school of uncertain reasoning and statistical inference. It is often criticized for a lack of objectivity: it opens the door to the influence of values and biases; evidence judgments can vary substantially between scientists; and it is not suited for informing policy decisions. My paper rebuts these concerns by connecting the debates on scientific objectivity and statistical method. First, I show that the above concerns arise equally for standard frequentist inference with null hypothesis significance tests. Second, (...)
• Human genetic diversity: Lewontin's fallacy. Anthony W. F. Edwards - 2003 - Bioessays 25 (8):798-801.
    In popular articles that play down the genetical differences among human populations, it is often stated that about 85% of the total genetical variation is due to individual differences within populations and only 15% to differences between populations or ethnic groups. It has therefore been proposed that the division of Homo sapiens into these groups is not justified by the genetic data. This conclusion, due to R.C. Lewontin in 1972, is unwarranted because the argument ignores the fact that most of (...)
• Monitoring in clinical trials: benefit or bias? Cecilia Nardini - 2013 - Theoretical Medicine and Bioethics 34 (4):259-274.
    Monitoring ongoing clinical trials for early signs of effectiveness is an option for improving cost-effectiveness of trials that is becoming increasingly common. Alongside the obvious advantages made possible by monitoring, however, there are some downsides. In particular, there is growing concern in the medical community that trials stopped early for benefit tend to overestimate treatment effect. In this paper, I examine this problem from the point of view of statistical methodology, starting from the observation that the overestimation is caused by (...)
• Testing a precise null hypothesis: the case of Lindley’s paradox. Jan Sprenger - 2013 - Philosophy of Science 80 (5):733-744.
    The interpretation of tests of a point null hypothesis against an unspecified alternative is a classical and yet unresolved issue in statistical methodology. This paper approaches the problem from the perspective of Lindley's Paradox: the divergence of Bayesian and frequentist inference in hypothesis tests with large sample size. I contend that the standard approaches in both frameworks fail to resolve the paradox. As an alternative, I suggest the Bayesian Reference Criterion: it targets the predictive performance of the null hypothesis in (...)
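The paradox itself can be reproduced in a few lines. The numeric sketch below uses my toy numbers and is not Sprenger's Bayesian Reference Criterion: hold the test statistic fixed at z = 1.96, i.e. just significant at the .05 level, and let n grow; the Bayes factor then swings ever more strongly toward the point null.

```python
# Lindley's paradox, numerically: H0: theta = 0 vs H1: theta ~ N(0, tau^2).
# At fixed z = 1.96 the p-value stays at .05 while BF(H0) grows with n.
from math import sqrt, exp, pi

def normal_pdf(x, var):
    return exp(-x * x / (2 * var)) / sqrt(2 * pi * var)

z, sigma, tau = 1.96, 1.0, 1.0
for n in (10, 100, 10_000, 1_000_000):
    xbar = z * sigma / sqrt(n)    # sample mean sitting exactly at p = .05
    bf01 = (normal_pdf(xbar, sigma ** 2 / n)
            / normal_pdf(xbar, tau ** 2 + sigma ** 2 / n))
    print(f"n = {n:>9,} -> Bayes factor for H0 = {bf01:8.1f}")
```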
• The rule of succession. Sandy L. Zabell - 1989 - Erkenntnis 31 (2-3):283-321.
• The Neyman-Pearson theory as decision theory, and as inference theory; with a criticism of the Lindley-Savage argument for Bayesian theory. Allan Birnbaum - 1977 - Synthese 36 (1):19-49.
• On the necessity for random sampling. D. J. Johnstone - 1989 - British Journal for the Philosophy of Science 40 (4):443-457.
• Generics and mental representations. Ariel Cohen - 2004 - Linguistics and Philosophy 27 (5):529-556.
It is widely agreed that generics tolerate exceptions. It turns out, however, that exceptions are tolerated only so long as they do not violate homogeneity: when the exceptions are not concentrated in a salient “chunk” of the domain of the generic. The criterion for salience of a chunk is cognitive: it is dependent on the way in which the domain is mentally represented. Findings of psychological experiments about the ways in which different domains are represented, and the factors affecting such (...)
• Preregistration Does Not Improve the Transparent Evaluation of Severity in Popper’s Philosophy of Science or When Deviations are Allowed. Mark Rubin - manuscript
    One justification for preregistering research hypotheses, methods, and analyses is that it improves the transparent evaluation of the severity of hypothesis tests. In this article, I consider two cases in which preregistration does not improve this evaluation. First, I argue that, although preregistration can facilitate the transparent evaluation of severity in Mayo’s error statistical philosophy of science, it does not facilitate this evaluation in Popper’s theory-centric approach. To illustrate, I show that associated concerns about Type I error rate inflation are (...)
• From Discovery to Justification: Outline of an Ideal Research Program in Empirical Psychology. Erich H. Witte & Frank Zenker - 2017 - Frontiers in Psychology 8.
• The limits of probability modelling: A serendipitous tale of goldfish, transfinite numbers, and pieces of string. [REVIEW] Ranald R. Macdonald - 2000 - Mind and Society 1 (2):17-38.
    This paper is about the differences between probabilities and beliefs and why reasoning should not always conform to probability laws. Probability is defined in terms of urn models from which probability laws can be derived. This means that probabilities are expressed in rational numbers, they suppose the existence of veridical representations and, when viewed as parts of a probability model, they are determined by a restricted set of variables. Moreover, probabilities are subjective, in that they apply to classes of events (...)
• The relevance criterion of confirmation. J. L. Mackie - 1969 - British Journal for the Philosophy of Science 20 (1):27-40.
• Statistical Approach Involving Bayes' Theorem and the Estimation of the Prior Distribution. Hirosi Hudimoto - 1971 - Annals of the Japan Association for Philosophy of Science 4 (1):35-45.
• The constraint rule of the maximum entropy principle. Jos Uffink - 1996 - Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 27 (1):47-79.
    The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability distributions. In practical applications, however, the information consists of empirical data. A constraint rule is then employed to construct constraints on probability distributions out of these data. Usually one adopts the rule that equates the expectation (...)
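The constraint rule in action can be shown on Jaynes' familiar dice example; this is a sketch under the usual setup, in which an empirical mean of 4.5 is equated with a constraint on the expectation.

```python
# Maximum entropy on die faces {1..6} subject to E[X] = 4.5: the solution
# is p_i proportional to exp(lambda * i); find lambda by bisection.
from math import exp

def mean_for(lam):
    w = [exp(lam * i) for i in range(1, 7)]
    return sum(i * wi for i, wi in zip(range(1, 7), w)) / sum(w)

target, lo, hi = 4.5, -5.0, 5.0
for _ in range(100):                         # mean_for is increasing in lam
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mean_for(mid) < target else (lo, mid)
w = [exp(((lo + hi) / 2) * i) for i in range(1, 7)]
probs = [wi / sum(w) for wi in w]
print([round(p, 3) for p in probs])          # rises from ~0.05 to ~0.35
```

Uffink's question is precisely whether this step from empirical data (an observed average) to a hard constraint on probability distributions is justified.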
• Gender Issues in Corporate Leadership. Devora Shapiro & Marilea Bramer - 2013 - Handbook of the Philosophical Foundations of Business Ethics:1177-1189.
    Gender greatly impacts access to opportunities, potential, and success in corporate leadership roles. We begin with a general presentation of why such discussion is necessary for basic considerations of justice and fairness in gender equality and how the issues we raise must impact any ethical perspective on gender in the corporate workplace. We continue with a breakdown of the central categories affecting the success of women in corporate leadership roles. The first of these includes gender-influenced behavioral factors, such as the (...)
• Probabilistic Logics and Probabilistic Networks. Rolf Haenni, Jan-Willem Romeijn, Gregory Wheeler & Jon Williamson - 2010 - Dordrecht, Netherlands: Synthese Library.
    Additionally, the text shows how to develop computationally feasible methods to mesh with this framework.
• A unifying framework of probabilistic reasoning: Rolf Haenni, Jan-Willem Romeijn, Gregory Wheeler and Jon Williamson: Probabilistic logic and probabilistic networks. Dordrecht: Springer, 2011, xiii+155pp, €59.95 HB. [REVIEW] Jan Sprenger - 2011 - Metascience 21 (2):459-462.
• Die Falsifikation Statistischer Hypothesen/The falsification of statistical hypotheses. Max Albert - 1992 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 23 (1):1-32.
    It is widely held that falsification of statistical hypotheses is impossible. This view is supported by an analysis of the most important theories of statistical testing: these theories are not compatible with falsificationism. On the other hand, falsificationism yields a basically viable solution to the problems of explanation, prediction and theory testing in a deterministic context. The present paper shows how to introduce the falsificationist solution into the realm of statistics. This is done mainly by applying the concept of empirical (...)
• Johannes von Kries’s Principien: A Brief Guide for the Perplexed. Sandy Zabell - 2016 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 47 (1):131-150.
This paper has the aim of making Johannes von Kries’s masterpiece, Die Principien der Wahrscheinlichkeitsrechnung of 1886, a little more accessible to the modern reader in three modest ways: first, it discusses the historical background to the book; next, it summarizes the basic elements of von Kries’s approach; and finally, it examines the so-called “principle of cogent reason” with which von Kries’s name is often identified in the English literature.
• Statistics as Inductive Inference. Jan-Willem Romeijn - unknown
    An inductive logic is a system of inference that describes the relation between propositions on data, and propositions that extend beyond the data, such as predictions over future data, and general conclusions on all possible data. Statistics, on the other hand, is a mathematical discipline that describes procedures for deriving results about a population from sample data. These results include predictions on future samples, decisions on rejecting or accepting a hypothesis about the population, the determination of probability assignments over such (...)
• The significance test controversy. [REVIEW] Ronald N. Giere - 1972 - British Journal for the Philosophy of Science 23 (2):170-181.
• Statistical significance and its critics: practicing damaging science, or damaging scientific practice? Deborah G. Mayo & David Hand - 2022 - Synthese 200 (3):1-33.
    While the common procedure of statistical significance testing and its accompanying concept of p-values have long been surrounded by controversy, renewed concern has been triggered by the replication crisis in science. Many blame statistical significance tests themselves, and some regard them as sufficiently damaging to scientific practice as to warrant being abandoned. We take a contrary position, arguing that the central criticisms arise from misunderstanding and misusing the statistical tools, and that in fact the purported remedies themselves risk damaging science. (...)
• Looking at the Arrow of Time and Loschmidt’s Paradox Through the Magnifying Glass of Mathematical-Billiard. Mario Stefanon - 2019 - Foundations of Physics 49 (10):1231-1251.
The contrast between the past-future symmetry of mechanical theories and the time-arrow observed in the behaviour of real complex systems does not yet have a fully satisfactory explanation. If one trusts the Laplacean dream that everything is exactly and completely describable by the known mechanical differential equations, the whole experimental evidence of the irreversibility of real complex processes can only be interpreted as an illusion due to the limits of the human brain and the shortness of human history. In this work it is (...)
• Two Impossibility Results for Measures of Corroboration. Jan Sprenger - 2018 - British Journal for the Philosophy of Science 69 (1):139-159.
    According to influential accounts of scientific method, such as critical rationalism, scientific knowledge grows by repeatedly testing our best hypotheses. But despite the popularity of hypothesis tests in statistical inference and science in general, their philosophical foundations remain shaky. In particular, the interpretation of non-significant results—those that do not reject the tested hypothesis—poses a major philosophical challenge. To what extent do they corroborate the tested hypothesis, or provide a reason to accept it? Popper sought for measures of corroboration that could (...)
• Evidence and expertise. John Paley - 2006 - Nursing Inquiry 13 (2):82-93.
    This paper evaluates attempts to defend established concepts of expertise and clinical judgement against the incursions of evidence‐based practice. Two related arguments are considered. The first suggests that standard accounts of evidence‐based practice imply an overly narrow view of ‘evidence’, and that a more inclusive concept, incorporating ‘patterns of knowing’ not recognised by the familiar evidence hierarchies, should be adopted. The second suggests that statistical generalisations cannot be applied non‐problematically to individual patients in specific contexts, and points out that this (...)
• The plain man's guide to probability. [REVIEW] Colin Howson - 1972 - British Journal for the Philosophy of Science 23 (2):157-170.