References
  • Type I error rates are not usually inflated. Mark Rubin - 2024 - Journal of Trial and Error 1.
    The inflation of Type I error rates is thought to be one of the causes of the replication crisis. Questionable research practices such as p-hacking are thought to inflate Type I error rates above their nominal level, leading to unexpectedly high levels of false positives in the literature and, consequently, unexpectedly low replication rates. In this article, I offer an alternative view. I argue that questionable and other research practices do not usually inflate relevant Type I error rates. I begin (...)
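    The conventional claim that Rubin's article pushes back against — that testing several hypotheses and reporting any rejection inflates the familywise Type I error rate above the nominal alpha — can be illustrated with a small Monte Carlo sketch. All numbers (5 outcome variables, n = 30, a known-variance z-test) are illustrative assumptions, not taken from the article.

    ```python
    # Monte Carlo sketch of familywise Type I error inflation under multiple
    # testing. Assumptions (mine, for illustration): normal data with known
    # sd = 1, alpha = .05, 5 independent outcome variables, all nulls true.
    import math
    import random

    def one_sample_p(sample):
        """Two-sided p-value from a z-test of mean 0 (known sd = 1)."""
        n = len(sample)
        z = (sum(sample) / n) * math.sqrt(n)
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    def familywise_error_rate(n_tests=5, n_sims=10_000, alpha=0.05, n=30):
        random.seed(1)
        false_positives = 0
        for _ in range(n_sims):
            # All null hypotheses are true: the data are pure noise.
            ps = [one_sample_p([random.gauss(0, 1) for _ in range(n)])
                  for _ in range(n_tests)]
            if min(ps) < alpha:  # report "significant" if ANY test rejects
                false_positives += 1
        return false_positives / n_sims

    rate = familywise_error_rate()
    # Theory predicts roughly 1 - (1 - .05)**5 ≈ 0.23, well above .05.
    ```

    Whether this familywise rate is the *relevant* error rate for a given research claim is exactly the point at issue in the article.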
  • Questionable metascience practices. Mark Rubin - 2023 - Journal of Trial and Error 1.
    Metascientists have studied questionable research practices in science. The present article considers the parallel concept of questionable metascience practices (QMPs). A QMP is a research practice, assumption, or perspective that has been questioned by several commentators as being potentially problematic for metascience and/or the science reform movement. The present article reviews ten QMPs that relate to criticism, replication, bias, generalization, and the characterization of science. Specifically, the following QMPs are considered: (1) rejecting or ignoring self-criticism; (2) a fast ‘n’ bropen (...)
  • Exploratory hypothesis tests can be more compelling than confirmatory hypothesis tests. Mark Rubin & Chris Donkin - 2022 - Philosophical Psychology.
    Preregistration has been proposed as a useful method for making a publicly verifiable distinction between confirmatory hypothesis tests, which involve planned tests of ante hoc hypotheses, and exploratory hypothesis tests, which involve unplanned tests of post hoc hypotheses. This distinction is thought to be important because it has been proposed that confirmatory hypothesis tests provide more compelling results (less uncertain, less tentative, less open to bias) than exploratory hypothesis tests. In this article, we challenge this proposition and argue that there (...)
  • Preregistration Does Not Improve the Transparent Evaluation of Severity in Popper’s Philosophy of Science or When Deviations are Allowed. Mark Rubin - manuscript
    One justification for preregistering research hypotheses, methods, and analyses is that it improves the transparent evaluation of the severity of hypothesis tests. In this article, I consider two cases in which preregistration does not improve this evaluation. First, I argue that, although preregistration can facilitate the transparent evaluation of severity in Mayo’s error statistical philosophy of science, it does not facilitate this evaluation in Popper’s theory-centric approach. To illustrate, I show that associated concerns about Type I error rate inflation are (...)
  • When to adjust alpha during multiple testing: a consideration of disjunction, conjunction, and individual testing. Mark Rubin - 2021 - Synthese 199 (3-4):10969-11000.
    Scientists often adjust their significance threshold during null hypothesis significance testing in order to take into account multiple testing and multiple comparisons. This alpha adjustment has become particularly relevant in the context of the replication crisis in science. The present article considers the conditions in which this alpha adjustment is appropriate and the conditions in which it is inappropriate. A distinction is drawn between three types of multiple testing: disjunction testing, conjunction testing, and individual testing. It is argued that alpha (...)
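    The distinction the Synthese article draws can be sketched in code; this is my reading of the abstract, not the author's own material, and the p-values are hypothetical. Under disjunction testing a joint claim is supported if at least one of k tests rejects, so the per-test threshold is adjusted; under individual testing each hypothesis answers its own question, so the unadjusted alpha applies.

    ```python
    # Sketch (illustrative, hypothetical p-values) of when alpha adjustment
    # applies: disjunction testing vs. individual testing.
    def bonferroni_alpha(alpha, k):
        """Adjusted per-test threshold for a disjunction test of k hypotheses."""
        return alpha / k

    def sidak_alpha(alpha, k):
        """Exact adjustment when the k tests are independent."""
        return 1 - (1 - alpha) ** (1 / k)

    ps = [0.012, 0.030, 0.047]  # hypothetical p-values for k = 3 tests
    k, alpha = len(ps), 0.05

    # Disjunction testing: the joint claim holds if ANY test rejects,
    # so each p is compared against the adjusted threshold.
    disjunction_rejections = [p < bonferroni_alpha(alpha, k) for p in ps]

    # Individual testing: each test supports its own separate claim,
    # so the unadjusted alpha applies to each.
    individual_rejections = [p < alpha for p in ps]
    ```

    With these hypothetical p-values, only the first result survives the Bonferroni-adjusted threshold, while all three would count as individual rejections — which decision is appropriate depends on the type of claim being made.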
  • HARKing: From Misdiagnosis to Misprescription. Aydin Mohseni - unknown
    The practice of HARKing---hypothesizing after results are known---is commonly maligned as undermining the reliability of scientific findings. There are several accounts in the literature as to why HARKing undermines the reliability of findings. We argue that none of these is right and that the correct account is a Bayesian one. HARKing can indeed decrease the reliability of scientific findings, but it can also increase it; which effect HARKing produces depends on the difference of the prior odds of hypotheses characteristically selected (...)
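    The Bayesian point in the abstract — that HARKing can either lower or raise the reliability of findings depending on the prior odds of the hypotheses it characteristically selects — can be sketched with a standard positive-predictive-value calculation. The formula is the usual one; the prior, power, and alpha values are illustrative assumptions, not figures from the paper.

    ```python
    # Sketch of the Bayesian account of HARKing's effect on reliability.
    # All numeric inputs are hypothetical illustrations.
    def positive_predictive_value(prior, power=0.8, alpha=0.05):
        """P(hypothesis true | significant result), by Bayes' rule."""
        return (power * prior) / (power * prior + alpha * (1 - prior))

    # Baseline: a planned hypothesis with even prior odds.
    planned = positive_predictive_value(prior=0.5)

    # If HARKing characteristically selects hypotheses with LOWER prior
    # odds (chosen mainly because they fit the data), reliability drops...
    harked_low = positive_predictive_value(prior=0.1)

    # ...but if post hoc selection favors hypotheses with HIGHER prior
    # odds (e.g. well-supported by background theory), reliability rises.
    harked_high = positive_predictive_value(prior=0.7)
    ```

    The direction of the effect is thus not fixed by the practice itself but by the prior odds of the hypotheses selected — which, as I read it, is the abstract's central claim.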