Climate uncertainty & risk

by Judith Curry
I’ve been invited to write an article on climate uncertainty and risk.

It’s been about five years since I’ve written a new article on this topic; this one provides my current perspectives.
This article is in draft form; I will submit it in a few weeks.  I would appreciate your suggestions and constructive comments.
———-
CLIMATE UNCERTAINTY AND RISK
Research scientists focus on the knowledge frontier, where doubt and uncertainty are inherent. Formal uncertainty quantification of computer models is less relevant to science than an assessment of whether the model helps us learn about how the system works.
However, in the context of the science-policy interface, uncertainty matters. There is a growing need for more constructive approaches to accounting for the different dimensions of uncertainty in climate change as they relate to policy making: what may happen in the future, and what actions might be appropriate now.
Risk is the probability that some undesirable event will occur; the term often describes the combination of that probability and the corresponding consequence of the event. Economists have a specific definition of risk and uncertainty that harkens back to Knight (1921). Knightian risk denotes the calculable and thus controllable part of what is unknowable, implying that robust probability information is available about future outcomes. Knightian uncertainty addresses what is incalculable and uncontrollable.
This essay on climate uncertainty and risk integrates perspectives from climate modeling, philosophy of science and decision making under uncertainty, extending previous analyses by the author (Curry and Webster, 2011; Curry, 2011).  The objective is to explore the kinds of evidence and reasoning that can help inform decision makers as to whether and how they should use climate models for different applications.
Characterizing uncertainty
There are numerous categorizations and hierarchies of risk and uncertainty, which are further complicated by different disciplines using terms in different ways (for a summary, see Spiegelhalter and Riesch, 2011). The categorization presented here discriminates among three dimensions of uncertainty in the context of model-based decision support (Walker et al., 2003): the nature, location, and level of uncertainty.
The nature of uncertainty relates to whether the uncertainty is in principle reducible, versus uncertainty that is intrinsic and hence irreducible.

  • Epistemic uncertainty is associated with imperfections of knowledge, which may be reduced by further research and empirical investigation.
  • Aleatory uncertainty is associated with inherent variability or randomness, and is by definition irreducible. Natural internal variability of the climate system contributes to aleatory uncertainty. (A toy numerical illustration of this distinction follows the list.)

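To make the reducible/irreducible distinction concrete, here is a minimal sketch in Python (my own construction, using an invented linear toy system; it is not drawn from the cited literature). Collecting more observations shrinks the spread of the estimated parameter (epistemic uncertainty), but leaves the inherent scatter (aleatory uncertainty) untouched:

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, noise_sd = 2.0, 1.0   # fixed-but-unknown slope; inherent scatter

def fit_slope(n_obs):
    """Least-squares slope from n_obs noisy observations of y = a*x + eps."""
    x = rng.uniform(0.0, 1.0, n_obs)
    y = a_true * x + rng.normal(0.0, noise_sd, n_obs)
    return np.polyfit(x, y, 1)[0]

for n in (10, 100, 10_000):
    slopes = [fit_slope(n) for _ in range(500)]
    # The spread of the slope estimate (epistemic) shrinks roughly as
    # 1/sqrt(n); the noise level (aleatory) is untouched by more data.
    print(f"n={n:>6}: spread of slope estimate = {np.std(slopes):.3f}, "
          f"irreducible noise sd = {noise_sd}")
```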
 The location of uncertainty refers to where the uncertainty manifests itself within the model complex:

  • Framing and context identifies the boundaries of the modeled system. Portions of the real world that lie outside the modeled system are a largely invisible source of additional uncertainties.
  • Model structure uncertainty is uncertainty about the conceptual modeling of the physical system, including the selection of subsystems to include, often introduced as a pragmatic compromise given limited computational resources.
  • Model technical uncertainty arises from the implementation of the model solution on a computer, including solution approximation and numerical errors.
  • Input uncertainty relates to uncertainty in model inputs that describe the system and the external forces that drive system changes.
  • Parameter uncertainty includes uncertain constants and other parameters that are largely contained in subgridscale parameterizations.
  • Model outcome uncertainty, also referred to as prediction error, arises from the propagation of the aforementioned uncertainties through the model simulation.
  • Uncertainty quantification error arises from the Monte Carlo sampling used in the uncertainty quantification procedure itself, for both epistemic and aleatory uncertainties (illustrated in the sketch after this list).

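The sampling error of the UQ procedure itself can be seen in a minimal Monte Carlo sketch (the stand-in outcome distribution below is invented, not an actual model): the estimated 95th-percentile outcome wobbles from one sample to the next, and the wobble shrinks only as the number of samples grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def model_outcome(n):
    """Stand-in for n model runs with randomly sampled inputs/parameters."""
    return rng.lognormal(mean=1.0, sigma=0.4, size=n)

# Repeat the whole UQ exercise 200 times to expose its own sampling error.
for n in (50, 500, 50_000):
    reps = [np.percentile(model_outcome(n), 95) for _ in range(200)]
    print(f"N={n:>6}: 95th-percentile estimate = {np.mean(reps):.3f} "
          f"+/- {np.std(reps):.3f} (UQ sampling error)")
```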
The level of uncertainty relates to where the model outcome uncertainty ranks in the spectrum between complete certainty and total ignorance:

  • Complete certainty: deterministic knowledge; no uncertainty
  • Statistical uncertainty (Knightian risk): outcomes are not known precisely, but precise, decision-relevant probability statements can be provided.
  • Scenario uncertainty (Knightian uncertainty or ambiguity): A range of plausible outcomes (scenarios) are enumerated but with a weak basis for ranking them in terms of likelihood.
  • Deep uncertainty (recognized ignorance): the scientific basis for developing outcomes (scenarios) is weak; future outcomes lie outside the realm of regular or quantifiable expectations.
  • Total ignorance: the deepest level of uncertainty, to the extent that we do not even know that we do not know.

If the policy making challenge is defined in the context of the response of climate to future greenhouse gas emissions, the uncertainty level is characterized as ‘scenario uncertainty.’ In this context, scenario uncertainty arises not only from uncertainty in future emissions but also from uncertainty in the equilibrium climate sensitivity to CO2 (ECS). According to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (IPCC 2013), “there is high confidence that ECS is extremely unlikely less than 1°C and medium confidence that the ECS is likely between 1.5°C and 4.5°C and very unlikely greater than 6°C.” Thus, we know a range of values within which the climate sensitivity is very likely to fall, with values better constrained on the lower end than on the high end. The AR5 further states: “No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies.” Although we know quite a bit about the value of ECS, we do not have grounds for associating a specific probability distribution with it.
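The point can be made concrete with a sketch (my own illustration with invented parameter choices; these are not IPCC products): several distributions can each satisfy ‘likely (at least 66%) between 1.5°C and 4.5°C’ while implying upper-tail probabilities that differ by roughly two orders of magnitude.

```python
from scipy import stats

# Three candidate ECS distributions, each roughly consistent with
# "likely (>=66%) between 1.5 and 4.5 C"; parameters are illustrative only.
candidates = {
    "normal":    stats.norm(loc=3.0, scale=0.9),
    "student-t": stats.t(df=3, loc=3.0, scale=0.8),
    "lognormal": stats.lognorm(s=0.5, scale=2.6),
}

for name, dist in candidates.items():
    p_likely = dist.cdf(4.5) - dist.cdf(1.5)   # mass in the 'likely' range
    p_tail = dist.sf(6.0)                      # P(ECS > 6 C): the policy-relevant tail
    print(f"{name:>9}: P(1.5<ECS<4.5) = {p_likely:.2f}, P(ECS>6) = {p_tail:.4f}")
```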
If the policy making challenge is defined in the context of the actual evolution of the 21st century climate (such as for vulnerability and impact assessments), then the uncertainty level increases to deep uncertainty. Apart from the issue of unknown future greenhouse gas emissions, we have very little basis for developing future scenarios of solar variations, volcanic eruptions and long-term internal variability. The likelihood of unanticipated outcomes (surprises) needs to be acknowledged.
Epistemology of climate models
The IPCC Fourth Assessment Report provided the following conclusion about climate models:
“There is considerable confidence that climate models provide credible quantitative estimates of future climate change, particularly at continental scales and above.” (Randall et al. 2007)
Based on expert judgment provided by the IPCC, policy makers have been assuming that climate models are adequate for purposes such as setting emissions reductions targets and developing regional climate adaptation plans.
Is this level of confidence in climate model projections justified?
The most common ways to evaluate a climate model are to assess how well model results fit observation-based data (empirical accuracy) and how well they agree with other models or model versions (robustness) (e.g. Flato et al. 2013). Parker (2011) has argued that robustness does not objectively increase confidence in simulations of future climate change.  Baumberger et al. (2017) address the challenge of building confidence in future climate model predictions through a combination of empirical accuracy, robustness and coherence with background knowledge. Baumberger et al. acknowledge that the role of coherence with background knowledge is limited because of empirical parameterizations and the epistemic opacity of complex models (Winsberg and Lenhard, 2010).
With regard to empirical adequacy, the climate modeling community is beginning to apply uncertainty quantification (UQ) concepts to climate models (Qian et al. 2016). These endeavors are focused on exploring parameter uncertainty (towards optimizing model parameter selection) and on evaluating prediction error. Additional efforts are identifying which model variables to focus on in prediction error analyses (Burrows et al., 2018) and evaluating models at shorter weather timescales and process levels.
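As a sketch of the flavor of such parameter-uncertainty studies (the surrogate function and parameter ranges below are invented; this is not an actual model component), one samples the uncertain parameterization constants, evaluates the model at each sample, and examines the induced spread in the outcome:

```python
import numpy as np
from scipy.stats import qmc

# Space-filling (Latin hypercube) sample of two uncertain parameters.
sampler = qmc.LatinHypercube(d=2, seed=7)
unit = sampler.random(n=256)
# Hypothetical plausible ranges for two parameterization constants.
params = qmc.scale(unit, l_bounds=[0.5, 0.1], u_bounds=[2.0, 0.9])

def surrogate(p1, p2):
    """Toy nonlinear response standing in for an expensive model run."""
    return 1.2 * np.log(p1) + 3.0 * p2**2

outcome = surrogate(params[:, 0], params[:, 1])
print(f"outcome mean = {outcome.mean():.2f}, sd = {outcome.std():.2f}, "
      f"5-95% range = ({np.percentile(outcome, 5):.2f}, "
      f"{np.percentile(outcome, 95):.2f})")
```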
A broader perspective on this issue is provided by recent scholarship on the epistemology of simulation, including how simulation models are confirmed. Lloyd (2009) describes how observational data are used in the evaluation of climate models and suggests new ways of viewing the significance of these model-data comparisons. However, attempts to confirm climate models by demonstrating empirical accuracy are fraught with challenges: inadequacy of data; selection of the variables to confirm, and on which time and space scales; a vast, multi-dimensional parameter space to explore; and concerns about circularity with regard to data used in both model tuning and confirmation.
Parker (2009) argues that known climate model error is too pervasive to allow climate model confirmation to be of use. Parker proposes a shift in approach from confirming climate models to confirming their ‘adequacy for purpose.’ Adequacy-for-purpose assessments estimate what the accuracy of simulations of a wide variety of observed climatic quantities implies about the correctness of uncertain model assumptions and results. Assessing adequacy-for-purpose hypotheses is a daunting task owing to the epistemic opacity of complex models, which results in confirmation holism (Winsberg and Lenhard, 2010).
Assessing the adequacy of climate models for the purpose of predicting future climate is particularly difficult. It is often assumed that if climate models reproduce current and past climates reasonably well, then we can have confidence in future predictions. However, empirical accuracy may to some degree be due to tuning rather than to the model structural form. Further, the model may lack representations of processes and feedbacks that would significantly influence future climate change. Hence, reliably reproducing past and present climate is not a sufficient condition for a model to be adequate for long-term projections, particularly for high-forcing scenarios that are well outside those previously observed in the instrumental record.
Given the above concerns, and the unaddressed concerns about uncertainty in model structural form and framing, Katzav (2014) argues that useful climate model assessment does not aim to confirm the model assumptions or prediction outcomes, but rather should aim to demonstrate that the simulations describe real possibilities.  A simulation is taken to be a real possibility if its realization is compatible with our background knowledge and that background knowledge does not exclude the realization of the simulated scenario over the target period.
Developing scenarios of climate futures
The possibilistic view regards the spread of an ensemble as a range of outcomes that cannot be ruled out. However, Stainforth et al. (2007) argue that climate models cannot be used to show that some possibilities are not real. Further, owing to structural limitations, existing climate models do not allow exploration of all the theoretical possibilities that are compatible with our knowledge of the basic way the climate system actually is. Some of these unexplored possibilities may turn out to be real ones.
Smith and Stern (2011) argue that there is value in scientific speculation on policy-relevant aspects of plausible, high-impact, scenarios even though we can neither model them realistically nor provide a precise estimate of their probability. 
A surprise occurs if a possibility that had not even been articulated becomes true.  Efforts to avoid surprises begin with ensuring there has been a fully imaginative consideration of possible future outcomes.
When background knowledge supports doing so, additional scenarios can be generated by modifying model results so as to broaden the range of possibilities they represent. Further, the possibilist view extends to scenarios other than those that are created by global climate models. Simple climate models, process models and data-driven models can also be used as the basis for generating scenarios of future climate. These alternative methods for generating future climate scenarios are particularly relevant for developing regional scenarios (for which global models are known to be inadequate) and impact variables such as sea level rise (that are not directly simulated by global climate models).
The potential problem of generating a plethora of potentially useless future scenarios is avoided if we focus on scenarios that we expect to be significant in a policy context. Smith and Stern (2011) make an argument for estimating whether a scenario outcome has a less than 1-in-200 chance, a threshold that is a focus of financial risk managers.
There is also an important role in policy making for articulating the worst-case scenario that would be genuinely catastrophic. The worst-case scenario is judged to be the most extreme scenario that cannot be falsified as impossible based upon our background knowledge (Betz, 2010).
The scientific community involved in predicting future sea level rise has expended considerable effort in articulating the worst-case scenario (e.g. LeBars 2017). Sea level predictions are only indirectly driven by global climate models, since these models do not predict the mass balance of glaciers and ice sheets, land water storage or isostatic adjustments. Hence estimates of the worst-case scenario integrate climate model simulations, process model simulations, estimates from the literature, and paleoclimatic observations.
Integrated Assessment Models
Integrated Assessment Models (IAMs) are widely used to assess impacts of climate change and various policy responses.  In assessing the social cost of carbon, IAMs couple an economic general equilibrium model to an extremely simplified climate model. According to expected utility theory, we should adopt the climate policy that maximizes expected utility — the extent to which an outcome is preferable to the alternatives.
The climate science input to IAMs is the probability density function of equilibrium climate sensitivity (ECS). The dilemma is that, with regard to ECS, we are in a situation of scenario (Knightian) uncertainty; we simply do not have grounds for formulating a precise probability distribution. Without precise probability distributions, no expected utility calculation is possible.
This problem is addressed by creating a precise probability distribution based upon the parameters provided by the IPCC assessment reports (NAS 2017). In effect, IAMs convert Knightian uncertainty in ECS into precise probabilities. Of particular concern is how the upper end of the ECS distribution is treated: either by assuming symmetry or by fitting a ‘fat tail.’ The end result is that this most important part of the distribution drives the economic costs of carbon via a statistically manufactured ‘fat tail.’
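A minimal sketch of why the tail treatment matters (an invented quadratic damage function and invented distribution parameters; this is not any actual IAM): two ECS distributions with similar central behavior but different tails yield modestly different expected damages, while the share of expected damage coming from rare high-ECS draws differs by more than an order of magnitude.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def damages(ecs):
    """Toy convex damage function; the form and coefficient are illustrative."""
    return 0.3 * ecs**2

# Same center (3 C), different tail behavior.
ecs_dists = {"thin tail (normal)": stats.norm(loc=3.0, scale=0.9),
             "fat tail (t, df=3)": stats.t(df=3, loc=3.0, scale=0.8)}

for name, dist in ecs_dists.items():
    draws = np.clip(dist.rvs(size=500_000, random_state=rng), 0.0, None)
    d = damages(draws)
    tail_share = d[draws > 6.0].sum() / d.sum()   # contribution of ECS > 6 C
    print(f"{name}: E[damage] = {d.mean():.2f}, "
          f"share from ECS>6 C draws = {tail_share:.2%}")
```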
Subjective or imprecise probabilities may be the best ones available. However, over-precise numerical expressions of risk are misleading to policy makers. Frisch (2013) argues that such applications of IAMs are dangerous because, while they purport to offer precise numbers for policy guidance, that precision is illusory and fraught with assumptions and value judgments.
Policies optimized for a ‘likely’ future may fail in the face of surprise. At best, policy makers have a range of possible future scenarios to consider.  Alternative decision-analytic frameworks that are consistent with conditions of deep uncertainty can make more scientifically defensible use of scenarios of climate futures.
For situations of deep uncertainty, precautionary and robust approaches are appropriate. Stirling (2007) has emphasized that precaution arises as part of the risk assessment, and is not a decision rule in itself. A precautionary appraisal is initiated when there is uncertainty. A robust policy is defined to be one that yields outcomes that are deemed satisfactory across a wide range of plausible future outcomes (Walker et al. 2016). As such, robust policy making interfaces well with possibilistic approaches that generate a range of possible futures. Flexible strategies are adaptive, and can be quickly adjusted to advancing scientific insights and clarification of scenarios of future outcomes.
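One concrete decision-analytic sketch of robustness is the minimax-regret rule (the payoff table below is invented for illustration; the cited references do not prescribe these numbers): score each candidate policy by its worst-case regret across scenarios, rather than by its cost under a single ‘likely’ future.

```python
import numpy as np

policies = ["aggressive mitigation", "moderate mitigation", "business as usual"]
scenarios = ["low ECS", "mid ECS", "high ECS"]

# cost[i, j]: hypothetical total cost of policy i if scenario j comes true.
cost = np.array([
    [8.0, 9.0, 10.0],   # aggressive: costly, but insensitive to the scenario
    [4.0, 8.0, 16.0],   # moderate
    [1.0, 6.0, 30.0],   # business as usual: cheap unless ECS turns out high
])

# Regret: extra cost relative to the best policy for each scenario.
regret = cost - cost.min(axis=0)
worst_regret = regret.max(axis=1)
for policy, r in zip(policies, worst_regret):
    print(f"{policy:>22}: worst-case regret = {r:.1f}")
print("minimax-regret choice:", policies[int(np.argmin(worst_regret))])
```

In this invented table, business as usual is cheapest under the low- and mid-ECS scenarios, yet it carries by far the largest worst-case regret; the robust choice sacrifices some performance in benign futures to avoid very poor outcomes in adverse ones.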
Conclusions
While climate models continue to be used by climate scientists to increase understanding about how the climate system works, most of the investment in global climate models is motivated by the needs of policy makers.
There is a gap between what climate scientists can provide and the information desired by policy makers. Spiegelhalter and Riesch (2011) state that it is important for scientists to avoid the attrition of uncertainty in the face of an inappropriate demand for certainty from policy makers. Betz (2010) reminds us that the difficulties of the problem must not serve as an excuse for scientists to simplify the epistemic situation, thereby pre-determining the complex value judgments involved.
The root of the most significant problem at the climate science-policy interface lies not in the climate models themselves but in the way they are used to guide policy making. Climate scientists have helped exacerbate this problem. Both climate scientists and policy makers need to accept the limits of probabilistic methods under the conditions of ambiguity and deep uncertainty that characterize climate change. Encouraging overconfidence in the realism of current climate model simulations, or intentionally portraying recognized ignorance as if it were statistical uncertainty (Knightian risk), can lead to undesirable policy outcomes.
Smith and Stern (2011) provide this insight into the climate science-policy interface: when asked intractable questions, the temptation is to change the question, slightly, to a tractable one that can be dealt with in terms of probability, rather than face the ambiguity of the original, policy-relevant question. Science will be of greater service to sound policy making when it handles ambiguity as well as it now handles statistical uncertainty (Knightian risk).
Does this analysis make climate science and climate modeling less relevant to policy making? Not at all, but it does raise questions as to whether the path we are currently on for developing and evaluating climate models (NRC 2012) is the best use of resources for supporting policy making.  Exploring alternative model structures is a rich and important direction for climate research, both for understanding the climate system and for supporting policy making. This analysis also emphasizes new challenges for climate scientists to develop a broader range of future scenarios, including worst-case scenarios and regional scenarios.
How climate science handles uncertainty matters.
 
References
Baumberger, C., R. Knutti, G.H. Hadorn (2017) Building confidence in climate model projections: an analysis of inferences from fit. WIREs Clim. Change, 8:e454. doi:10.1002/wcc.454
Betz, G. (2010) What’s the worst case? The method of possibilistic prediction. Analyse & Kritik, 01, 87-106.
Burrows, S.M., A. Dasgupta, S. Reehl, L. Bramer, P.L. Ma, P.J. Rasch, Y. Qian (2018) Characterizing the relative importance assigned to physical variables by climate scientists when assessing atmospheric climate model fidelity. Adv. Atmos. Sci., 35, 1101-1113.
Curry, J.A., P.J. Webster (2011) Climate science and the uncertainty monster. Bull. Amer. Meteorol. Soc., 92, 1667-1682.
Curry, J.A. (2011) Reasoning about climate uncertainty. Clim. Change, 108, 723-732. https://doi.org/10.1007/s10584-011-0180-z
Flato, G., J. Marotzke, B. Abiodun, P. Braconnot, S.C. Chou, W.J. Collins, P. Cox, et al. (2013) Evaluation of Climate Models. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, 741-866.
Frisch, M, (2013) Modeling Climate Policies: A Critical Look at Integrated Assessment Models. Philosophy & Technology, 26, 117–137.
IPCC (2013) Summary for Policymakers. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
Katzav, J. (2014) The Epistemology of Climate Models and Some of Its Implications for Climate Science and the Philosophy of Science. Studies in History and Philosophy of Modern Physics, 46, 228–238.
Knight, F.H. (1921) Risk, Uncertainty and Profit. Boston, MA: Hart, Schaffner & Marx.
LeBars, D. (2017) A high-end sea level rise probabilistic projection including rapid Antarctic ice sheet mass loss. Environ. Res. Lett., 12, 044013.
Lloyd, E. (2009) Varieties of Support and Confirmation of Climate Models. Aristotelian Society Supplementary Volume, 83, 213-232. https://doi.org/10.1111/j.1467-8349.2009.00179.x
NRC (2012) A National Strategy for Advancing Climate Modeling. National Academies Press. https://doi.org/10.17226/13430
NAS (2017) Valuing Climate Damages: Updating Estimation of the Social Cost of Carbon Dioxide. Washington, DC: The National Academies Press.  https://doi.org/10.17226/24651.
Parker, W.S. (2009) Confirmation and adequacy-for-purpose in climate modeling.  Aristotelian Society Supplementary Volume, 83, 233-249.
Parker, W.S. (2011) When Climate Models Agree: The Significance of Robust Model Predictions. Philosophy of Science, 78, 579-600.
Qian, Y., C. Jackson, F. Giorgi, B. Booth, Q. Duan, C. Forest, D. Higdon, Z.J. Hou, G. Huerta (2016) Uncertainty Quantification in Climate Modeling and Projection. Bull. Amer. Meteorol. Soc., 97, 821-824. DOI: 10.1175/BAMS-D-15-00297.1
Randall, D.A., R.A. Wood, S. Bony, R. Colman, T. Fichefet, J. Fyfe, V. Kattsov, A. Pitman, J. Shukla, J. Srinivasan, R.J. Stouffer, A. Sumi and K.E. Taylor (2007) Climate Models and Their Evaluation. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
Smith, L.A. and N. Stern (2011) Uncertainty in Science and Its Role in Climate Policy. Phil. Trans. Roy. Soc. A, 369, 4818-4841.
Spiegelhalter, D.J. and H. Riesch (2011) Don’t know, can’t know: embracing scientific uncertainty when analyzing risk. Phil. Trans. Roy. Soc. A, 369, 4730-4750.
Stainforth, D.A., M.R. Allen, E.R. Tredger, L.A. Smith (2007) Confidence, uncertainty, and decision-support relevance in climate prediction. Phil. Trans. Roy. Soc. A, 365, 2145-2161.
Stirling, A. (2007) Risk, precaution and science: toward a more constructive policy debate. EMBO Reports, 8, 309-315.
Walker, W.E., P. Harremoes, J. Rotmans, J.P. van der Sluijs, M.B.A. van Asselt, P. Janssen, M.P. Krayer von Krauss (2003) Defining Uncertainty: A conceptual basis for uncertainty management in model-based decision support. Integrated Assessment, 4, 5-17.
Walker, W.E., R.J. Lempert, J.H. Kwakkel (2016) Deep Uncertainty. In: Encyclopedia of Operations Research and Management Science, S.I. Gass and M.C. Fu, eds., Springer.
Winsberg, E., and J. Lenhard (2010) Holism and Entrenchment in Climate Model Validation. In: Science in the Context of Application: Methodological Change, Conceptual Transformation, Cultural Reorientation, M. Carrier and A. Nordmann, eds., Springer.
