Climate Models, Calibration, and Confirmation

British Journal for the Philosophy of Science 64 (3):609-635 (2013)

Abstract

We argue that concerns about double-counting (using the same evidence both to calibrate or tune climate models and also to confirm or verify that the models are adequate) deserve more careful scrutiny in climate modelling circles. It is widely held that double-counting is bad and that separate data must be used for calibration and confirmation. We show that this is far from obviously true, and that climate scientists may be confusing their targets. Our analysis turns on a Bayesian/relative-likelihood approach to incremental confirmation. According to this approach, double-counting is entirely proper. We go on to discuss plausible difficulties with calibrating climate models, and we distinguish more and less ambitious notions of confirmation. Strong claims of confirmation may not, in many cases, be warranted, but it would be a mistake to regard double-counting as the culprit.

Contents

1 Introduction
2 Remarks about Models and Adequacy-for-Purpose
3 Evidence for Calibration Can Also Yield Comparative Confirmation
3.1 Double-counting I
3.2 Double-counting II
4 Climate Science Examples: Comparative Confirmation in Practice
4.1 Confirmation due to better and worse best fits
4.2 Confirmation due to more and less plausible forcings values
5 Old Evidence
6 Doubts about the Relevance of Past Data
7 Non-comparative Confirmation and Catch-Alls
8 Climate Science Example: Non-comparative Confirmation and Catch-Alls in Practice
9 Concluding Remarks
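The Bayesian/relative-likelihood account of incremental confirmation that the abstract appeals to can be sketched in standard form; the notation below (evidence E, rival model hypotheses H_1 and H_2) is illustrative and not taken from the paper itself. On the relative-likelihood reading, E incrementally confirms H_1 over H_2 just in case

\Pr(E \mid H_1) > \Pr(E \mid H_2),

and, by Bayes' theorem, the posterior ratio is the prior ratio scaled by that likelihood ratio:

\frac{\Pr(H_1 \mid E)}{\Pr(H_2 \mid E)} = \frac{\Pr(E \mid H_1)}{\Pr(E \mid H_2)} \cdot \frac{\Pr(H_1)}{\Pr(H_2)}.

On this reading, the same evidence E that is used in calibration to fix a model's free parameters can also raise the probability of that calibrated model relative to a rival, which is the sense in which double-counting comes out as proper.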

Author Profiles

Katie Steele
Australian National University
Charlotte Werndl
University of Salzburg; London School of Economics
