  • Error Statistics Using the Akaike and Bayesian Information Criteria. Henrique Cheng & Beckett Sterner - forthcoming - Erkenntnis.
    Many biologists, especially in ecology and evolution, analyze their data by estimating fits to a set of candidate models and selecting the best model according to the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC). When the candidate models represent alternative hypotheses, biologists may want to limit the chance of a false positive to a specified level. Existing model selection methodology, however, allows for only indirect control over error rates by setting a threshold for the difference in AIC (...)
    (A brief code sketch of this model-comparison workflow follows this entry.)
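
As a minimal, hypothetical sketch of the workflow the abstract describes (not code from the paper): two candidate models are fit by maximum likelihood to toy data and compared by AIC/BIC; the data, candidate distributions, and ΔAIC threshold are illustrative assumptions only.

```python
# Hypothetical sketch of AIC/BIC model comparison on toy data (not from the
# paper). AIC = 2k - 2 ln L; BIC = k ln n - 2 ln L, with k free parameters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200)  # toy data, assumed for illustration
n = len(x)

def aic(loglik, k):
    return 2 * k - 2 * loglik

def bic(loglik, k):
    return k * np.log(n) - 2 * loglik

# Candidate 1: exponential model (k = 1); the MLE of the scale is the mean.
ll_exp = stats.expon.logpdf(x, scale=x.mean()).sum()
# Candidate 2: gamma model with fixed location (k = 2 free parameters).
shape, _, scale = stats.gamma.fit(x, floc=0)
ll_gam = stats.gamma.logpdf(x, shape, loc=0, scale=scale).sum()

scores = {"exponential": aic(ll_exp, 1), "gamma": aic(ll_gam, 2)}
delta = abs(scores["exponential"] - scores["gamma"])
print(scores, f"dAIC = {delta:.2f}")  # a fixed dAIC threshold (e.g. 2) is the
                                      # kind of indirect error-control rule the
                                      # abstract refers to
```
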
  • Why Experimental Balance Is Still a Reason to Randomize. Marco Martinez & David Teira - forthcoming - British Journal for the Philosophy of Science.
    Experimental balance is usually understood as the control for the value of the conditions, other than the one under study, which are liable to affect the result of a test. We discuss three different approaches to balance. ‘Millean balance’ requires identifying and equalizing ex ante the value of these conditions in order to conduct solid causal inferences. ‘Fisherian balance’ measures ex post the influence of uncontrolled conditions through the analysis of variance. In ‘efficiency balance’ the value of the antecedent conditions (...)
    (A toy simulation of randomized balancing follows this entry.)
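
As a toy numerical companion to the balance discussion above (my illustration, not the authors'): with random 1:1 assignment, even an unmeasured antecedent condition tends to be equalized across arms in expectation. The variable names and distribution are assumptions.

```python
# Toy illustration (not from the paper): random assignment tends to balance
# an uncontrolled, unmeasured condition across treatment arms.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
hidden = rng.normal(loc=50, scale=10, size=n)      # unmeasured condition
arm = rng.permutation(np.repeat([0, 1], n // 2))   # random 1:1 allocation

gap = hidden[arm == 1].mean() - hidden[arm == 0].mean()
print(f"between-arm difference in the hidden condition: {gap:.3f}")  # near 0
```
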
  • Preregistration does not improve the transparent evaluation of severity in Popper’s philosophy of science or when deviations are allowed. Mark Rubin - manuscript
    One justification for preregistering research hypotheses, methods, and analyses is that it improves the transparent evaluation of the severity of hypothesis tests. In this article, I consider two cases in which preregistration does not improve this evaluation. First, I argue that, although preregistration can facilitate the transparent evaluation of severity in Mayo’s error statistical philosophy of science, it does not facilitate this evaluation in Popper’s theory-centric approach. To illustrate, I show that associated concerns about Type I error rate inflation are (...)
    (A simulation sketch of Type I error inflation follows this entry.)
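
To make the Type I error inflation mentioned above concrete, here is a hypothetical simulation (mine, not the author's): when several unplanned analyses of null data are tried and any significant result is reported, the effective false-positive rate exceeds the nominal 5%. The specific analyses tried are illustrative assumptions.

```python
# Hypothetical illustration (not from the paper): undisclosed analytic
# flexibility inflates the Type I error rate above the nominal alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims, n, alpha = 5_000, 40, 0.05
false_positives = 0
for _ in range(n_sims):
    a = rng.normal(size=n)   # both groups drawn from the same distribution,
    b = rng.normal(size=n)   # so the null hypothesis is true by construction
    # Flexible analysis: full sample, first half, second half -- report a
    # "finding" if any of the three tests comes out significant.
    ps = [stats.ttest_ind(a, b).pvalue,
          stats.ttest_ind(a[: n // 2], b[: n // 2]).pvalue,
          stats.ttest_ind(a[n // 2 :], b[n // 2 :]).pvalue]
    false_positives += min(ps) < alpha
print(f"empirical Type I error rate: {false_positives / n_sims:.3f}")  # > 0.05
```
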
  • Placebo trials without mechanisms: How far can they go? David Teira - 2019 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 77 (C):101177.
    In this paper, I suggest that placebo effects, as we know them today, should be understood as experimental phenomena, low-level regularities whose causal structure is grasped through particular experimental designs with little theoretical guidance. Focusing on placebo interventions with needles for pain reduction (one of the few placebo regularities that seems to arise in meta-analytical studies), I discuss the extent to which it is possible to decompose the different factors at play through more fine-grained randomized clinical trials. My sceptical argument is (...)
  • Ontology & Methodology. Benjamin C. Jantzen, Deborah G. Mayo & Lydia Patton - 2015 - Synthese 192 (11):3413-3423.
    Philosophers of science have long been concerned with the question of what a given scientific theory tells us about the contents of the world, but relatively little attention has been paid to how we set out to build theories and to the bearing of pre-theoretical methodology on a theory’s interpretation. On the traditional view, the form and content of a mature theory can be separated from any tentative ontological assumptions that went into its development. For this reason, the target of (...)
  • Ontology, neural networks, and the social sciences. David Strohmaier - 2020 - Synthese 199 (1-2):4775-4794.
    The ontology of social objects and facts remains a field of continued controversy. This situation complicates the life of social scientists who seek to make predictive models of social phenomena. For the purposes of modelling a social phenomenon, we would like to avoid having to make any controversial ontological commitments. The overwhelming majority of models in the social sciences, including statistical models, are built upon ontological assumptions that can be questioned. Recently, however, artificial neural networks have made their way into (...)
    (A toy predictive-model illustration follows this entry.)
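
As a minimal illustration of the kind of model the abstract above alludes to (an assumed toy example, not the author's): a small feedforward network learns a predictive input-output mapping from raw features, with the modeller positing only the mapping itself rather than any particular social ontology. Data and architecture are arbitrary.

```python
# Toy illustration (not from the paper): a tiny feedforward neural network
# trained by gradient descent on synthetic "social" features. It commits only
# to an input-output mapping, not to any ontology of the inputs.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))                          # toy survey features
y = (X @ np.array([0.8, -0.5, 0.3, 0.0])
     + 0.1 * rng.normal(size=500) > 0).astype(float).reshape(-1, 1)

W1, b1 = rng.normal(scale=0.5, size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
lr = 0.1
for _ in range(2000):                                  # plain gradient descent
    h = np.tanh(X @ W1 + b1)                           # hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))               # predicted probability
    g = (p - y) / len(X)                               # cross-entropy gradient
    W2 -= lr * h.T @ g; b2 -= lr * g.sum(0)
    gh = (g @ W2.T) * (1 - h**2)                       # backprop through tanh
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)
print("training accuracy:", ((p > 0.5) == y).mean())
```
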
  • Bernoulli’s golden theorem in retrospect: error probabilities and trustworthy evidence. Aris Spanos - 2021 - Synthese 199 (5-6):13949-13976.
    Bernoulli’s 1713 golden theorem is viewed retrospectively in the context of modern model-based frequentist inference that revolves around the concept of a prespecified statistical model $\mathcal{M}_{\theta}(\mathbf{x})$, defining the inductive premises of inference. It is argued that several widely-accepted claims relating to the golden theorem and frequentist inference are either misleading or erroneous: (a) Bernoulli solved the problem of inference ‘from probability to frequency’, and thus (b) the golden theorem (...)
    (A short numerical illustration follows this entry.)
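
As a simple numerical companion to the golden theorem discussed above (my sketch, not the paper's): in i.i.d. Bernoulli(p) trials, the relative frequency of successes concentrates around p as n grows, which is the ‘from probability to frequency’ direction the abstract examines. The value of p and the sample sizes are arbitrary.

```python
# Sketch (not from the paper): Bernoulli's golden theorem, i.e. the weak law
# of large numbers -- relative frequency converges in probability to p.
import numpy as np

rng = np.random.default_rng(4)
p = 0.3
for n in (10, 100, 1_000, 10_000, 100_000):
    freq = rng.binomial(1, p, size=n).mean()
    print(f"n = {n:>6}: relative frequency = {freq:.4f} (true p = {p})")
```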