Evaluation of Climate Models: An Argument Analysis Approach
Project description
Climate models are idealized representations of the climate system. They abstract away many features of the climate system and distort, in different ways, the processes and aspects they do represent in order to make them mathematically and computationally tractable. Which idealizations are adequate depends on the target questions we want to answer, e.g.: Under a given emission scenario, would Earth’s global mean surface temperature in 2100 be more than 2°C warmer than it is today? To what extent is the global warming of the last fifty years due to human causes? Does climate change affect extreme weather in the northern hemisphere? For this reason, the evaluation of climate models needs to be specific to the purpose they are intended to serve. But how can we argue for the claim that a climate model is adequate for a particular purpose? This question has not received sufficient attention in the climate modelling community.
The common way to evaluate a climate model is to assess quantitatively the degree of “model fit”, i.e. how well model results reproduce past and present climate and how well they agree with the results of other models or model versions. However, such assessments of the empirical accuracy and robustness of model results are largely silent about what those instances of fit or misfit to data and to the results of other models actually imply for the trustworthiness of model applications. Only three of the roughly ninety pages of the chapter on model evaluation in the most recent IPCC Working Group I report deal with the implications of model fit or misfit for climate change detection and attribution and for projections of future climate. It is often assumed, for example, that the fact that state-of-the-art climate models reproduce many important features of current and past climate reasonably well warrants increased confidence in their suitability for quantitative climate projections, particularly at continental scales and above. And the robustness of climate model projections is typically taken to warrant a further increase in confidence in the projected outcome. But the arguments for these assumptions are hardly ever made explicit. Moreover, successful instances of model fit are often uncritically interpreted as confirming the models as such. Yet model performance metrics are simply numbers that quantify the agreement between model results and observation-based data, and it is an open question what they imply for a model’s adequacy for a purpose. It seems fair to say that climate science struggles precisely with the transition from statements about quantitative measures of model fit (often termed “model performance metrics”) to hypotheses about a model’s adequacy for particular purposes, such as projecting future climate change (“model quality metrics”).
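To make this distinction concrete, the sketch below (in Python, using purely illustrative placeholder numbers rather than real model output or observations) computes two common performance metrics, root-mean-square error and mean bias, for a hypothetical simulated global mean surface temperature series against an observation-based series. The resulting numbers quantify fit; whether they say anything about the model’s adequacy for a purpose such as projecting warming to 2100 is a separate question.

```python
import numpy as np

# Hypothetical annual global mean surface temperature anomalies (°C),
# e.g. for 1961-2020. A real evaluation would use an observational product
# and model output; these arrays are purely illustrative placeholders.
years = np.arange(1961, 2021)
obs = 0.015 * (years - 1961) + np.random.default_rng(0).normal(0.0, 0.1, years.size)
model = 0.018 * (years - 1961) + np.random.default_rng(1).normal(0.0, 0.1, years.size)

# Two widely used "model performance metrics": they quantify agreement
# between model results and observation-based data, nothing more.
rmse = np.sqrt(np.mean((model - obs) ** 2))
bias = np.mean(model - obs)

print(f"RMSE: {rmse:.3f} °C, mean bias: {bias:.3f} °C")
# Whether these numbers warrant trust in the model's long-term projections
# is a further, purpose-specific question ("model quality").
```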
In this project, we provide a conceptual framework for discussing the evaluation of the adequacy of models for different purposes. In a first step, we focus on climate model projections, which are relevant for policy decisions and currently constitute a major focus of climate research. We discuss the potential and limits of arguments from model fit (i.e. arguments from the empirical accuracy of model results and arguments from the robustness of model results). We then suggest additional considerations that can be appealed to in arguments for a model’s adequacy for long-term projections: the support of a model by background knowledge and the performance of a model with respect to cognitive goals such as simplicity, scope and resolution. Empirical accuracy, robustness, support by background knowledge and performance with respect to cognitive goals do not together constitute sufficient conditions for a model’s adequacy for long-term projections. But they provide reasons that can be strengthened by additional information and can thus contribute to a complex non-deductive argument for the adequacy of a climate model for long-term climate projections. Furthermore, we discuss the implications of our analysis for climate modelling strategies, uncertainty quantification and decision-making. In a second step, we will show how the suggested framework can be adapted to purposes other than projections, e.g. explanations and the advancement of our understanding of the climate system.
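As a minimal illustration of the robustness consideration, the following sketch (again with made-up numbers for a hypothetical multi-model ensemble under a single emission scenario) summarises the spread of projected end-of-century warming. Agreement across models is one strand in such a non-deductive argument; it strengthens, but does not by itself establish, a model’s adequacy for long-term projections.

```python
import numpy as np

# Hypothetical projected warming by 2100 (°C relative to today) from a
# multi-model ensemble under one emission scenario; values are illustrative.
projections = np.array([2.1, 2.4, 2.6, 2.8, 3.0, 3.3, 2.2, 2.9])

median = np.median(projections)
low, high = np.percentile(projections, [5, 95])
agree_above_2 = np.mean(projections > 2.0)

print(f"Median projected warming: {median:.1f} °C")
print(f"5-95% ensemble range: {low:.1f} - {high:.1f} °C")
print(f"Fraction of models projecting more than 2 °C: {agree_above_2:.0%}")
# High inter-model agreement (robustness) is one reason among several;
# on its own it does not show that any model is adequate for this purpose.
```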
Publications
- Christoph Baumberger, Reto Knutti, Gertrude Hirsch Hadorn. 2016. Building Confidence in Climate Model Projections: An Analysis of Inferences from Fit. WIREs Climate Change, e454. Open Access. doi: 10.1002/wcc.454.
- Christoph Baumberger, Gertrude Hirsch Hadorn, Deborah Mühlebach. 2015. Enhancing Argumentative Skills in Environmental Science Education. GAIA 24(3), 206–208.