Error Statistics Using the Akaike and Bayesian Information Criteria

Erkenntnis (forthcoming)

Abstract

Many biologists, especially in ecology and evolution, analyze their data by estimating fits to a set of candidate models and selecting the best model according to the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC). When the candidate models represent alternative hypotheses, biologists may want to limit the chance of a false positive to a specified level. Existing model selection methodology, however, allows only indirect control over error rates, by setting a threshold for the difference in AIC scores. We present a novel theoretical framework for parametric Neyman-Pearson (NP) model selection using information criteria that does not require a pre-data null and applies to three or more non-nested models simultaneously. We apply the theoretical framework to the Error Control for Information Criteria (ECIC) procedure introduced by Cullan et al. (J Appl Stat 47: 2565–2581, 2019), and we show it shares many of the desirable properties of AIC-type methods, including false positive and false negative rates that converge to zero asymptotically. We discuss implications for the compatibility of evidentialist and severity-based approaches to evidence in philosophy of science.
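To make the threshold-based practice the abstract refers to concrete, the sketch below shows ΔAIC model selection for least-squares fits with Gaussian errors. The data, the candidate polynomial models, and the ΔAIC < 2 rule of thumb are illustrative assumptions only; this is not the paper's ECIC procedure, which adds direct error-rate control on top of this kind of comparison.

```python
import numpy as np

def gaussian_aic(rss, n, k):
    """AIC for a least-squares fit with Gaussian errors:
    n * ln(RSS/n) + 2k, where k counts estimated parameters
    (coefficients plus the error variance)."""
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)  # illustrative data

# Candidate models: polynomial fits of increasing degree.
scores = {}
for degree in (1, 2, 3):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    rss = float(resid @ resid)
    k = degree + 2  # degree+1 polynomial coefficients + error variance
    scores[degree] = gaussian_aic(rss, x.size, k)

best = min(scores, key=scores.get)
for degree, aic in sorted(scores.items()):
    delta = aic - scores[best]
    # Conventional rule of thumb: delta < 2 leaves a model "competitive".
    # Note this threshold only indirectly constrains false positive rates,
    # which is the gap the abstract describes.
    print(f"degree={degree}  AIC={aic:.2f}  dAIC={delta:.2f}")
```

The point of the sketch is that the threshold is applied to a score difference, not to an error probability: nothing in the ΔAIC rule tells the analyst how often it will favor an overly complex model by chance, which is the question the NP framework and ECIC address.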

Author's Profile

Beckett Sterner
Arizona State University
