Citations of:

Statistical methods and scientific inference

Edinburgh: Oliver & Boyd (1956)

  • Disciplining Qualitative Decision Exercises: Aspects of a Transempirical Protocol, I. John W. Sutherland - 1990 - Theory and Decision 28 (1):73.
  • Abductive inferences. Domenico Costantini - 1987 - Erkenntnis 26 (3):409-422.
  • Generics and mental representations. Ariel Cohen - 2004 - Linguistics and Philosophy 27 (5):529-556.
    It is widely agreed that generics tolerate exceptions. It turns out, however, that exceptions are tolerated only so long as they do not violate homogeneity: when the exceptions are not concentrated in a salient “chunk” of the domain of the generic. The criterion for salience of a chunk is cognitive: it is dependent on the way in which the domain is mentally represented. Findings of psychological experiments about the ways in which different domains are represented, and the factors affecting such (...)
  • The Neyman-Pearson theory as decision theory, and as inference theory; with a criticism of the Lindley-Savage argument for Bayesian theory. Allan Birnbaum - 1977 - Synthese 36 (1):19-49.
  • Seeing the wood for the trees: philosophical aspects of classical, Bayesian and likelihood approaches in statistical inference and some implications for phylogenetic analysis. Daniel Barker - 2015 - Biology and Philosophy 30 (4):505-525.
    The three main approaches in statistical inference—classical statistics, Bayesian and likelihood—are in current use in phylogeny research. The three approaches are discussed and compared, with particular emphasis on theoretical properties illustrated by simple thought-experiments. The methods are problematic on axiomatic grounds, on extra-mathematical grounds relating to the use of a prior, or on practical grounds. This essay aims to increase understanding of these limits among those with an interest in phylogeny.
  • Two dogmas of strong objective Bayesianism. Prasanta S. Bandyopadhyay & Gordon Brittan - 2010 - International Studies in the Philosophy of Science 24 (1):45-65.
    We introduce a distinction, unnoticed in the literature, between four varieties of objective Bayesianism. What we call 'strong objective Bayesianism' is characterized by two claims: that all scientific inference is 'logical' and that, given the same background information, two agents will ascribe a unique probability to their priors. We think that neither of these claims can be sustained; in this sense, they are 'dogmatic'. The first fails to recognize that some scientific inference, in particular that concerning evidential relations, is (...)
  • Statistics is not enough: revisiting Ronald A. Fisher's critique (1936) of Mendel's experimental results (1866). Avital Pilpel - 2007 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 38 (3):618-626.
    This paper is concerned with the role of rational belief change theory in the philosophical understanding of experimental error. Today, philosophers seek insight about error in the investigation of specific experiments, rather than in general theories. Nevertheless, rational belief change theory adds to our understanding of just such cases: R. A. Fisher’s criticism of Mendel’s experiments being a case in point. After an historical introduction, the main part of this paper investigates Fisher’s paper from the point of view of rational (...)
  • The significance test controversy. [REVIEW] Ronald N. Giere - 1972 - British Journal for the Philosophy of Science 23 (2):170-181.
  • Johannes von Kries’s Principien: A Brief Guide for the Perplexed. Sandy L. Zabell - 2016 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 47 (1):131-150.
    This paper has the aim of making Johannes von Kries’s masterpiece, Die Principien der Wahrscheinlichkeitsrechnung of 1886, a little more accessible to the modern reader in three modest ways: first, it discusses the historical background to the book; next, it summarizes the basic elements of von Kries’s approach; and finally, it examines the so-called “principle of cogent reason” with which von Kries’s name is often identified in the English literature.
  • Statistics and Probability Have Always Been Value-Laden: An Historical Ontology of Quantitative Research Methods. Michael J. Zyphur & Dean C. Pierides - 2020 - Journal of Business Ethics 167 (1):1-18.
    Quantitative researchers often discuss research ethics as if specific ethical problems can be reduced to abstract normative logics (e.g., virtue ethics, utilitarianism, deontology). Such approaches overlook how values are embedded in every aspect of quantitative methods, including ‘observations,’ ‘facts,’ and notions of ‘objectivity.’ We describe how quantitative research practices, concepts, discourses, and their objects/subjects of study have always been value-laden, from the invention of statistics and probability in the 1600s to their subsequent adoption as a logic made to appear as (...)
  • The rule of succession. Sandy L. Zabell - 1989 - Erkenntnis 31 (2-3):283-321.
  • Ramsey, truth, and probability. S. L. Zabell - 1991 - Theoria 57 (3):211-238.
  • Information Processing: The Language and Analytical Tools for Cognitive Psychology in the Information Age. Aiping Xiong & Robert W. Proctor - 2018 - Frontiers in Psychology 9:362645.
    The information age can be dated to the work of Norbert Wiener and Claude Shannon in the 1940s. Their work on cybernetics and information theory, and many subsequent developments, had a profound influence on reshaping the field of psychology from what it was prior to the 1950s. Contemporaneously, advances also occurred in experimental design and inferential statistical testing stemming from the work of Ronald Fisher, Jerzy Neyman, and Egon Pearson. These interdisciplinary advances from outside of psychology provided the conceptual and (...)
  • Evidence in medicine and evidence-based medicine. John Worrall - 2007 - Philosophy Compass 2 (6):981-1022.
    It is surely obvious that medicine, like any other rational activity, must be based on evidence. The interest is in the details: how exactly are the general principles of the logic of evidence to be applied in medicine? Focussing on the development, and current claims of the ‘Evidence-Based Medicine’ movement, this article raises a number of difficulties with the rationales that have been supplied in particular for the ‘evidence hierarchy’ and for the very special role within that hierarchy of randomized (...)
  • From Discovery to Justification: Outline of an Ideal Research Program in Empirical Psychology. Erich H. Witte & Frank Zenker - 2017 - Frontiers in Psychology 8.
  • Frequentist statistical inference without repeated sampling. Paul Vos & Don Holbert - 2022 - Synthese 200 (2):1-25.
    Frequentist inference typically is described in terms of hypothetical repeated sampling but there are advantages to an interpretation that uses a single random sample. Contemporary examples are given that indicate probabilities for random phenomena are interpreted as classical probabilities, and this interpretation of equally likely chance outcomes is applied to statistical inference using urn models. These are used to address Bayesian criticisms of frequentist methods. Recent descriptions of p-values, confidence intervals, and power are viewed through the lens of classical probability (...)
  • Chance, Variation and Shared Ancestry: Population Genetics After the Synthesis. Michel Veuille - 2019 - Journal of the History of Biology 52 (4):537-567.
    Chance has been a focus of attention ever since the beginning of population genetics, but neutrality has not, as natural selection once appeared to be the only worthwhile issue. Neutral change became a major source of interest during the neutralist–selectionist debate, 1970–1980. It retained interest beyond this period for two reasons that contributed to its becoming foundational for evolutionary reasoning. On the one hand, neutral evolution was the first mathematical prediction to emerge from Mendelian inheritance: until then evolution by natural (...)
  • The ethics of alpha: Reflections on statistics, evidence and values in medicine. R. E. G. Upshur - 2001 - Theoretical Medicine and Bioethics 22 (6):565-576.
    As health care embraces the tenets of evidence-based medicine it is important to ask questions about how evidence is produced and interpreted. This essay explores normative dimensions of evidence production, particularly around issues of setting the tolerable level of uncertainty of results. Four specific aspects are explored: what health care providers know about statistics, why alpha levels have been set at 0.05, the role of randomization in the generation of sufficient grounds of belief, and the role of observational studies. The (...)
  • The constraint rule of the maximum entropy principle. Jos Uffink - 1996 - Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 27 (1):47-79.
    The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability distributions. In practical applications, however, the information consists of empirical data. A constraint rule is then employed to construct constraints on probability distributions out of these data. Usually one adopts the rule that equates the expectation (...)
  • On some basic patterns of statistical inference. Klemens Szaniawski - 1961 - Studia Logica 11 (1):77-89.
  • Looking at the Arrow of Time and Loschmidt’s Paradox Through the Magnifying Glass of Mathematical-Billiard. Mario Stefanon - 2019 - Foundations of Physics 49 (10):1231-1251.
    The contrast between the past-future symmetry of mechanical theories and the time-arrow observed in the behaviour of real complex systems does not yet have a fully satisfactory explanation. If one trusts the Laplacean dream that everything is exactly and completely describable by the known mechanical differential equations, the experimental evidence of the irreversibility of real complex processes can only be interpreted as an illusion, due to the limits of the human brain and the shortness of human history. In this work it is (...)
  • Some aspects of Carnap's theory of inductive inference. Carl-Erik Särndal - 1968 - British Journal for the Philosophy of Science 19 (3):225-246.
  • Two Impossibility Results for Measures of Corroboration. Jan Sprenger - 2018 - British Journal for the Philosophy of Science 69 (1):139-159.
    According to influential accounts of scientific method, such as critical rationalism, scientific knowledge grows by repeatedly testing our best hypotheses. But despite the popularity of hypothesis tests in statistical inference and science in general, their philosophical foundations remain shaky. In particular, the interpretation of non-significant results—those that do not reject the tested hypothesis—poses a major philosophical challenge. To what extent do they corroborate the tested hypothesis, or provide a reason to accept it? Popper sought for measures of corroboration that could (...)
  • The objectivity of Subjective Bayesianism. Jan Sprenger - 2018 - European Journal for Philosophy of Science 8 (3):539-558.
    Subjective Bayesianism is a major school of uncertain reasoning and statistical inference. It is often criticized for a lack of objectivity: it opens the door to the influence of values and biases; evidence judgments can vary substantially between scientists; and it is not suited for informing policy decisions. My paper rebuts these concerns by connecting the debates on scientific objectivity and statistical method. First, I show that the above concerns arise equally for standard frequentist inference with null hypothesis significance tests. Second, (...)
  • Testing a precise null hypothesis: the case of Lindley’s paradox. Jan Sprenger - 2013 - Philosophy of Science 80 (5):733-744.
    The interpretation of tests of a point null hypothesis against an unspecified alternative is a classical and yet unresolved issue in statistical methodology. This paper approaches the problem from the perspective of Lindley's Paradox: the divergence of Bayesian and frequentist inference in hypothesis tests with large sample size. I contend that the standard approaches in both frameworks fail to resolve the paradox. As an alternative, I suggest the Bayesian Reference Criterion: it targets the predictive performance of the null hypothesis in (...)
  • Hypothetico‐Deductive Confirmation. Jan Sprenger - 2011 - Philosophy Compass 6 (7):497-508.
    Hypothetico-deductive (H-D) confirmation builds on the idea that confirming evidence consists of successful predictions that deductively follow from the hypothesis under test. This article reviews scope, history and recent development of the venerable H-D account: First, we motivate the approach and clarify its relationship to Bayesian confirmation theory. Second, we explain and discuss the tacking paradoxes which exploit the fact that H-D confirmation gives no account of evidential relevance. Third, we review several recent proposals that aim at a sounder and (...)
  • Conditional Degree of Belief and Bayesian Inference. Jan Sprenger - 2020 - Philosophy of Science 87 (2):319-335.
    Why are conditional degrees of belief in an observation E, given a statistical hypothesis H, aligned with the objective probabilities expressed by H? After showing that standard replies are not satisfactory, I develop a suppositional analysis of conditional degree of belief, transferring Ramsey’s classical proposal to statistical inference. The analysis saves the alignment, explains the role of chance-credence coordination, and rebuts the charge of arbitrary assessment of evidence in Bayesian inference. Finally, I explore the implications of this analysis for Bayesian (...)
  • A unifying framework of probabilistic reasoning: Rolf Haenni, Jan-Willem Romeijn, Gregory Wheeler and Jon Williamson: Probabilistic logic and probabilistic networks. Dordrecht: Springer, 2011, xiii+155pp, €59.95 HB. [REVIEW] Jan Sprenger - 2011 - Metascience 21 (2):459-462.
  • Probability: A new logico-semantical approach. [REVIEW] Christina Schneider - 1994 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 25 (1):107-124.
    This approach does not define a probability measure by syntactical structures. It reveals a link between modal logic and mathematical probability theory. This is shown (1) by adding an operator (and two further connectives and constants) to a system of lower predicate calculus and (2) regarding the models of that extended system. These models are models of the modal system S₅ (without the Barcan formula), where a usual probability measure is defined on their set of possible worlds. Mathematical probability models (...)
  • Induction: A Logical Analysis. Uwe Saint-Mont - 2022 - Foundations of Science 27 (2):455-487.
    The aim of this contribution is to provide a rather general answer to Hume’s problem. To this end, induction is treated within a straightforward formal paradigm, i.e., several connected levels of abstraction. Within this setting, many concrete models are discussed. On the one hand, models from mathematics, statistics and information science demonstrate how induction might succeed. On the other hand, standard examples from philosophy highlight fundamental difficulties. Thus it transpires that the difference between unbounded and bounded inductive steps is crucial: (...)
  • What type of Type I error? Contrasting the Neyman–Pearson and Fisherian approaches in the context of exact and direct replications. Mark Rubin - 2021 - Synthese 198 (6):5809-5834.
    The replication crisis has caused researchers to distinguish between exact replications, which duplicate all aspects of a study that could potentially affect the results, and direct replications, which duplicate only those aspects of the study that are thought to be theoretically essential to reproduce the original effect. The replication crisis has also prompted researchers to think more carefully about the possibility of making Type I errors when rejecting null hypotheses. In this context, the present article considers the utility of two (...)
  • “Repeated sampling from the same population?” A critique of Neyman and Pearson’s responses to Fisher. Mark Rubin - 2020 - European Journal for Philosophy of Science 10 (3):1-15.
    Fisher criticised the Neyman-Pearson approach to hypothesis testing by arguing that it relies on the assumption of “repeated sampling from the same population.” The present article considers the responses to this criticism provided by Pearson and Neyman. Pearson interpreted alpha levels in relation to imaginary replications of the original test. This interpretation is appropriate when test users are sure that their replications will be equivalent to one another. However, by definition, scientific researchers do not possess sufficient knowledge about the relevant (...)
  • Exploratory hypothesis tests can be more compelling than confirmatory hypothesis tests. Mark Rubin & Chris Donkin - 2022 - Philosophical Psychology.
    Preregistration has been proposed as a useful method for making a publicly verifiable distinction between confirmatory hypothesis tests, which involve planned tests of ante hoc hypotheses, and exploratory hypothesis tests, which involve unplanned tests of post hoc hypotheses. This distinction is thought to be important because it has been proposed that confirmatory hypothesis tests provide more compelling results (less uncertain, less tentative, less open to bias) than exploratory hypothesis tests. In this article, we challenge this proposition and argue that there (...)
  • The significance test controversy. R. D. Rosenkrantz - 1973 - Synthese 26 (2):304-321.
    The pre-designationist, anti-inductivist and operationalist tenor of Neyman-Pearson theory gives that theory an obvious affinity to several currently influential philosophies of science, most particularly, the Popperian. In fact, one might fairly regard Neyman-Pearson theory as the statistical embodiment of Popperian methodology. The difficulties raised in this paper have, then, wider purport, and should serve as something of a touchstone for those who would construct a theory of evidence adequate to statistics without recourse to the notion of inductive probability.
  • Support. R. D. Rosenkrantz - 1977 - Synthese 36 (2):181-193.
  • Inductivism and probabilism. Roger Rosenkrantz - 1971 - Synthese 23 (2-3):167-205.
    I. I set out my view that all inference is essentially deductive and pinpoint what I take to be the major shortcomings of the induction rule. II. The import of data depends on the probability model of the experiment, a dependence ignored by the induction rule. Inductivists admit background knowledge must be taken into account but never spell out how this is to be done. As I see it, that is the problem of induction. III. The induction rule, far from providing a (...)
  • Mathematical statistics and metastatistical analysis. Andrés Rivadulla - 1991 - Erkenntnis 34 (2):211-236.
    This paper deals with meta-statistical questions concerning frequentist statistics. In Sections 2 to 4 I analyse the dispute between Fisher and Neyman on the so-called logic of statistical inference, a polemic concomitant with the development of mathematical statistics. My conclusion is that, whenever mathematical statistics makes it possible to draw inferences, it only uses deductive reasoning. Therefore I reject Fisher's inductive approach to the statistical estimation theory and adhere to Neyman's deductive one. On the other hand, (...)
  • The limits of probability modelling: A serendipitous tale of goldfish, transfinite numbers, and pieces of string. [REVIEW] Ranald R. Macdonald - 2000 - Mind and Society 1 (2):17-38.
    This paper is about the differences between probabilities and beliefs and why reasoning should not always conform to probability laws. Probability is defined in terms of urn models from which probability laws can be derived. This means that probabilities are expressed in rational numbers, they suppose the existence of veridical representations and, when viewed as parts of a probability model, they are determined by a restricted set of variables. Moreover, probabilities are subjective, in that they apply to classes of events (...)
  • Evidence and expertise. John Paley - 2006 - Nursing Inquiry 13 (2):82-93.
    This paper evaluates attempts to defend established concepts of expertise and clinical judgement against the incursions of evidence‐based practice. Two related arguments are considered. The first suggests that standard accounts of evidence‐based practice imply an overly narrow view of ‘evidence’, and that a more inclusive concept, incorporating ‘patterns of knowing’ not recognised by the familiar evidence hierarchies, should be adopted. The second suggests that statistical generalisations cannot be applied non‐problematically to individual patients in specific contexts, and points out that this (...)
  • Monitoring in clinical trials: benefit or bias? Cecilia Nardini - 2013 - Theoretical Medicine and Bioethics 34 (4):259-274.
    Monitoring ongoing clinical trials for early signs of effectiveness is an option for improving cost-effectiveness of trials that is becoming increasingly common. Alongside the obvious advantages made possible by monitoring, however, there are some downsides. In particular, there is growing concern in the medical community that trials stopped early for benefit tend to overestimate treatment effect. In this paper, I examine this problem from the point of view of statistical methodology, starting from the observation that the overestimation is caused by (...)
  • Statistical significance and its critics: practicing damaging science, or damaging scientific practice? Deborah G. Mayo & David Hand - 2022 - Synthese 200 (3):1-33.
    While the common procedure of statistical significance testing and its accompanying concept of p-values have long been surrounded by controversy, renewed concern has been triggered by the replication crisis in science. Many blame statistical significance tests themselves, and some regard them as sufficiently damaging to scientific practice as to warrant being abandoned. We take a contrary position, arguing that the central criticisms arise from misunderstanding and misusing the statistical tools, and that in fact the purported remedies themselves risk damaging science. (...)
  • Severe testing as a basic concept in a Neyman–Pearson philosophy of induction. Deborah G. Mayo & Aris Spanos - 2006 - British Journal for the Philosophy of Science 57 (2):323-357.
    Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and long-standing problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test's (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities (...)
  • Die Falsifikation Statistischer Hypothesen / The falsification of statistical hypotheses. Max Albert - 1992 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 23 (1):1-32.
    It is widely held that falsification of statistical hypotheses is impossible. This view is supported by an analysis of the most important theories of statistical testing: these theories are not compatible with falsificationism. On the other hand, falsificationism yields a basically viable solution to the problems of explanation, prediction and theory testing in a deterministic context. The present paper shows how to introduce the falsificationist solution into the realm of statistics. This is done mainly by applying the concept of empirical (...)
  • The relevance criterion of confirmation. J. L. Mackie - 1969 - British Journal for the Philosophy of Science 20 (1):27-40.
  • Probability logic, logical probability, and inductive support. Isaac Levi - 2010 - Synthese 172 (1):97-118.
    This paper seeks to defend the following conclusions: The program advanced by Carnap and other necessarians for probability logic has little to recommend it except for one important point. Credal probability judgments ought to be adapted to changes in evidence or states of full belief in a principled manner in conformity with the inquirer’s confirmational commitments—except when the inquirer has good reason to modify his or her confirmational commitment. Probability logic ought to spell out the constraints on rationally coherent confirmational (...)
  • Neyman-Pearson Hypothesis Testing, Epistemic Reliability and Pragmatic Value-Laden Asymmetric Error Risks. Adam P. Kubiak, Paweł Kawalec & Adam Kiersztyn - 2022 - Axiomathes 32 (4):585-604.
    We show that if among the tested hypotheses the number of true hypotheses is not equal to the number of false hypotheses, then Neyman-Pearson theory of testing hypotheses does not warrant minimal epistemic reliability. We also argue that N-P does not protect from the possible negative effects of the pragmatic value-laden unequal setting of error probabilities on N-P’s epistemic reliability. Most importantly, we argue that in the case of a negative impact no methodological adjustment is available to neutralize it, so (...)
  • Hypothesis-Testing Demands Trustworthy Data—A Simulation Approach to Inferential Statistics Advocating the Research Program Strategy. Antonia Krefeld-Schwalb, Erich H. Witte & Frank Zenker - 2018 - Frontiers in Psychology 9.
  • A Cross-Sectional Analysis of Students’ Intuitions When Interpreting CIs. Pav Kalinowski, Jerry Lai & Geoff Cumming - 2018 - Frontiers in Psychology 9.
  • Tests of significance following R. A. Fisher. D. J. Johnstone - 1987 - British Journal for the Philosophy of Science 38 (4):481-499.
  • On the necessity for random sampling. D. J. Johnstone - 1989 - British Journal for the Philosophy of Science 40 (4):443-457.