Managing uncertainty in predictions of climate change and impacts

by Judith Curry
Climatic Change has a new special issue: Managing Uncertainty in Predictions of Climate Change and Impacts.

Special Issue in Climatic Change
The link to the special issue is [here]; all of the papers are thankfully open access. Here are selected papers from the table of contents:
Towards a typology for constrained climate model forecasts
A. Lopez, E. B. Suckling, F. E. L. Otto, A. Lorenz, D. Rowlands & M. R. Allen
From the Discussion and Conclusion:
The simple example above illustrates clearly that the choice of ensemble sampling strategy and goodness-of-fit metric has a strong influence on the forecast uncertainty range. As is well known, the uncertainty in projections for unrestricted ensembles is significantly different depending on the modeling strategy (CMIP3 vs climateprediction.net). When observations are used to constrain uncertainty ranges, the result depends not only on which observations (and what temporal and spatial scales) are used to construct the metric, but also on the relationships between that information and the forecasted variables.
The proliferation of approaches to uncertainty analysis of climate forecasts is clearly unsatisfactory from the perspective of forecast users. When confronted with a new forecast with a nominally smaller range of uncertainty than some alternative, it would take considerable insight to work out if the difference results from arbitrary changes in metric, or ensemble sampling, or from new information that reduces the uncertainty in the forecast.
The assumptions about the decision criterion employed in the analysis are naturally related to the assumptions underlying the generation of the climate forecast. Scenario analysis, robust control, or info-gap frameworks do not rely on probabilistic information or even ranges, but focus on the impacts of decision options and system response under a range of possible futures. However, the applicability of these types of analysis to future decisions rests on a sufficiently comprehensive coverage of the space of possible future climate states.
Climate ensembles providing a range of possible futures can be utilised in decision analysis using the MaxiMin (pessimistic), MaxiMax (optimistic) or Hurwicz (mixture) criteria, which only rely on information about the worst and/or best possible outcomes. The expected utility decision criterion is widely used in cost-benefit, cost-risk, and cost-efficiency analyses in the climate change context as the current standard of normative decision theory. It requires information about the climate forecasts in the form of probability density functions (pdfs), and naturally relates to Bayesian ensemble sampling approaches. However, among many other shortcomings, the expected utility criterion is unable to represent a situation in which the decision maker faces ambiguity about the exact pdfs representing (climate) uncertainty. One possible solution to this shortcoming is the use of imprecise probabilities, where climate information would be given not as a single pdf, but as a set of possible pdfs.
We close this discussion by remarking that, when considering climate forecasts for impacts studies, it is important to keep in mind that the possible range of climate changes might not be fully explored if the analysis relies solely on climate models’ projections. Changes other than the ones currently projected by climate models are plausible, particularly at impacts-relevant spatial scales. Therefore decision makers should use a variety of scenarios for their planning, and not restrict their analysis exclusively to model projected ranges of uncertainties.
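To make the decision criteria listed in the excerpt concrete, here is a minimal illustrative sketch (not from the paper; the options, climate futures, payoffs and probabilities are all hypothetical) of how MaxiMin, MaxiMax, Hurwicz and expected utility choices could be computed for a toy decision problem, plus a worst-case expected utility over a set of pdfs as a simple stand-in for imprecise probabilities.

```python
import numpy as np

# Hypothetical utilities of two adaptation options under three possible
# future climate states. Rows: decision options; columns: climate futures.
# All numbers are illustrative only.
utilities = np.array([
    [ 5.0,  2.0, -1.0],   # option A: modest protection
    [10.0,  0.0, -8.0],   # option B: aggressive investment
])

def maximin(u):
    """Pessimistic: choose the option whose worst outcome is best."""
    return int(np.argmax(u.min(axis=1)))

def maximax(u):
    """Optimistic: choose the option whose best outcome is best."""
    return int(np.argmax(u.max(axis=1)))

def hurwicz(u, alpha=0.5):
    """Mixture: weight best and worst outcomes by an optimism index alpha."""
    return int(np.argmax(alpha * u.max(axis=1) + (1 - alpha) * u.min(axis=1)))

def expected_utility(u, p):
    """Standard criterion: requires a single probability (pdf) over futures."""
    return int(np.argmax(u @ p))

def robust_expected_utility(u, pdf_set):
    """Imprecise probabilities: maximise the worst-case expected utility
    over a set of candidate pdfs instead of committing to a single one."""
    worst_case = np.min([u @ p for p in pdf_set], axis=0)
    return int(np.argmax(worst_case))

p_single = np.array([0.5, 0.3, 0.2])               # one assumed pdf over futures
pdf_set = [p_single, np.array([0.2, 0.3, 0.5])]    # ambiguity: two plausible pdfs

print("MaxiMin :", maximin(utilities))
print("MaxiMax :", maximax(utilities))
print("Hurwicz :", hurwicz(utilities, alpha=0.4))
print("Expected utility (single pdf)  :", expected_utility(utilities, p_single))
print("Worst case over the set of pdfs:", robust_expected_utility(utilities, pdf_set))
```

Note that the options ranked best can differ across criteria for the same payoff table, which is exactly why the choice of decision framework matters as much as the forecast itself.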
JC comment:  For context, see this previous CE post:  How should we interpret an ensemble of climate models?
Towards improving the framework for probabilistic forecast evaluation
Leonard A. Smith, Emma B. Suckling, Erica L. Thompson, Trevor Maynard & Hailiang Du
From the Conclusions:
Measures of skill play a critical role in the development, deployment and application of probability forecasts. The choice of score quite literally determines what can be seen in the forecasts, influencing not only forecast system design and model development, but also decisions on whether or not to purchase forecasts from that forecast system or invest in accordance with the probabilities from a forecast system.
The properties of some common skill scores have been discussed and illustrated. Even when the discussion is restricted to proper scores, there remains considerable variability between scores in terms of their sensitivity to outcomes in regions of low (or vanishing) probability; proper scores need not rank competing forecast systems in the same order when each forecast system is imperfect. In general, the Continuous Ranked Probability Score can define the best forecast system to be one which consistently assigns zero probability to the observed outcome, while the Ignorance score will assign an infinite penalty to an outcome which falls in a region the forecast states to be impossible; such issues should be considered when deciding which score is appropriate for a specific task. Ensemble interpretations which interpret a probability forecast as a single delta function (such as the ensemble mean), or as a collection of delta functions (reflecting, for example, the position of each ensemble member), rather than considering all the probabilistic information available, may provide misleading estimates of skill in nonlinear systems. Scores can be used for a variety of different aims, of course. The properties desired of a score for parameter selection can be rather different from those desired in evaluating an operational forecast system.
A general methodology has been applied for probabilistic forecast evaluation, contrasting the properties of several proper scores when evaluating forecast systems of decadal ensemble hindcasts of global mean temperature from the HadCM3 model. The Ignorance score was shown to best discriminate between the performance of the different models. In addition, the Ignorance score can be interpreted directly. Observations like these illustrate the advantages of scores which allow intuitive interpretation of relative forecast merits.
Enhanced use of empirical benchmark models in forecast evaluation and in deployment can motivate a deeper evaluation of simulation models. The use of empirical models as benchmarks allows the comparison of skill between forecast systems based upon state-of-the-art simulation models and those using simpler, inexpensive alternatives. As models evolve and improve, such benchmarks allow one to quantify this improvement. Such evaluations cannot be done purely through the intercomparison of an (evolving) set of state-of-the-art models. The use of task-appropriate scores can better convey the information available from near-term (decadal) forecasts to inform decision making. It can also be of use in judging limits on the likely fidelity of centennial forecasts. Ideally, identifying where the most reliable decadal information lies today, and communicating the limits in the fidelity expected from the best available probability forecasts, can both improve decision-making and strengthen the credibility of science in support of policy making.
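As a rough illustration of the score properties discussed above (an illustrative sketch, not the authors' code), the snippet below evaluates a Gaussian forecast pdf dressed on a small hypothetical ensemble with the Ignorance score and the closed-form Gaussian CRPS, and shows how differently the two scores penalise a verification that falls in the far tail of the forecast.

```python
import numpy as np
from scipy import stats

def ignorance(forecast, outcome):
    """Ignorance score: -log2 of the forecast density at the outcome.
    Diverges as the forecast probability of the outcome goes to zero."""
    return -np.log2(forecast.pdf(outcome))

def crps_gaussian(mu, sigma, outcome):
    """Closed-form CRPS for a Gaussian forecast distribution."""
    z = (outcome - mu) / sigma
    return sigma * (z * (2 * stats.norm.cdf(z) - 1)
                    + 2 * stats.norm.pdf(z) - 1 / np.sqrt(np.pi))

# Hypothetical five-member ensemble of decadal global-mean temperature
# anomalies (K), interpreted as a single Gaussian forecast pdf.
ensemble = np.array([0.42, 0.48, 0.51, 0.55, 0.60])
mu, sigma = ensemble.mean(), ensemble.std(ddof=1)
forecast = stats.norm(mu, sigma)

for outcome in (0.50, 0.90):   # one central and one far-tail verification
    print(f"outcome {outcome:.2f} K:  IGN = {ignorance(forecast, outcome):7.2f}"
          f"   CRPS = {crps_gaussian(mu, sigma, outcome):6.3f}")
```

The far-tail verification inflates the Ignorance score dramatically while the CRPS grows only modestly, which is the kind of behaviour a user should weigh when picking a score for a specific task.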
JC comment:  For context on climate model validation, see [links]
Attribution analysis of high precipitation events in summer in England and Wales over the last decade
Friederike E. L. Otto, Suzanne M. Rosier, Myles R. Allen, Neil R. Massey, Cameron J. Rye & Jara Imbers Quintana
From the Discussion:
The design of the modelling approach in much of this study aims to quantify the uncertainty in natural variability by simulating large initial condition ensembles very comprehensively; thus our conclusions are unlikely to underestimate this source of uncertainty. However, given the current general lack of skill in simulating precipitation (as opposed to, for example, temperature), the uncertainty in the model structure and parameters has been shown to be of importance for the attribution of extreme precipitation events. The addition of a perturbed parameter ensemble would enable us to quantify more accurately the uncertainty in extreme precipitation events, and this represents a promising avenue for further studies which is especially achievable with the climateprediction.net project.
The uncertainty arising from the fact that we do not know how the world might have been without anthropogenic climate change has been addressed by using two different ensembles, an ensemble of the 1960s including all observed forcings and a counterfactual ensemble of the decade 2000–2010 excluding anthropogenic greenhouse gas forcing by using SSTs with the anthropogenic warming pattern removed. These two ensembles give some quantification of this uncertainty but most likely this is not comprehensive, as discussed above. This study could also be furthered, and the uncertainty in its conclusions potentially reduced, if not only the extreme event itself were analysed, but also the weather conditions leading up to the event, and the larger-scale circulation patterns as well. A more detailed description of the flood event in the context of the weather of the spring and summer of 2007, therefore, could well lead to a more accurate analysis of the change in risk of such an event occurring.
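For readers unfamiliar with this style of event attribution, the sketch below (illustrative only; the ensemble sizes and exceedance counts are made up) shows the basic calculation: estimate the probability of exceeding an event threshold in the factual and counterfactual ensembles, form the risk ratio and fraction of attributable risk (FAR), and bootstrap the counts to express the sampling uncertainty that comes from finite ensemble sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def risk_ratio(k1, n1, k0, n0):
    """Ratio of exceedance probabilities: factual (all forcings) over
    counterfactual (anthropogenic signal removed)."""
    return (k1 / n1) / (k0 / n0)

def far(k1, n1, k0, n0):
    """Fraction of attributable risk, FAR = 1 - P0/P1."""
    return 1.0 - 1.0 / risk_ratio(k1, n1, k0, n0)

def bootstrap_far(k1, n1, k0, n0, n_boot=10_000):
    """Resample the exceedance counts to express sampling uncertainty
    from finite ensemble sizes."""
    p1 = rng.binomial(n1, k1 / n1, n_boot) / n1
    p0 = rng.binomial(n0, k0 / n0, n_boot) / n0
    p1 = np.where(p1 == 0, 0.5 / n1, p1)   # guard against division by zero
    return 1.0 - p0 / p1

# Hypothetical ensembles: 2000 factual members with 60 threshold exceedances,
# 2000 counterfactual members with 30 exceedances.
print("best-estimate FAR:", far(60, 2000, 30, 2000))
print("5-95% bootstrap range:",
      np.percentile(bootstrap_far(60, 2000, 30, 2000), [5, 95]))
```

This toy version captures only the sampling component of the uncertainty; the structural and parametric uncertainties the authors highlight sit on top of it and are not represented here.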
JC comment:  An alternative perspective on how to approach the attribution of extreme floods in the UK is provided in a previous CE post Reasoning about floods and climate change.
Tall tales and fat tails: the science and economics of extreme warming
Raphael Calel, David A. Stainforth & Simon Dietz
From the Discussion:
Uncertainty about the shape of the fat upper tail of the climate sensitivity distribution can wreak havoc with economic analysis of climate policies. However, the climate sensitivity matters only indirectly. Economic analysis is sensitive to the probability of extreme warming, and high values of the climate sensitivity are only one of the factors that lead to rapid warming. As we have shown, uncertainty about the effective heat capacity also matters a great deal for economic analysis, and this uncertainty greatly amplifies the economic consequences of uncertainty about the shape of the tail of the climate sensitivity distribution.
With results like these, it is perhaps understandable that some have concluded the risk of a climate catastrophe should be the sole determinant of climate policy. Whether one agrees with this assessment or not, it highlights the need to improve our understanding of the relevant risks. It would be valuable to place a greater emphasis on exploring uncertainty about the probability of very high transient temperature changes directly, which would entail a more inclusive discussion of the underlying physical uncertainties that accompany a rapidly warming world. A concrete example of this is carbon cycle feedbacks, which, studies suggest, are both influenced by and themselves influence the likelihood of higher or lower warming.
A secondary conclusion relates to the importance of the damage function in economic analysis. As we saw in Section 3, with one damage function the expected value of the policy was rather insensitive to the probability of extreme warming, while with another damage function the economic analysis was hypersensitive to it. This is because each damage function implicitly defines what level of warming is considered catastrophic, and uncertainty about extreme warming plays a profoundly different role in economic analysis depending on how we define ‘catastrophic’. For all of the focus on the economics of catastrophic climate change, surprisingly little attention has been paid to this issue. At a basic level, we must try to understand better the limits of human adaptation to climate change.
The fat tail of the climate sensitivity distribution has perhaps been an effective vehicle for bringing attention to the issue of extreme warming, but it is time to move beyond this convenient metaphor and build a scientific view of society in a rapidly warming world.
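As a toy illustration of the paper's point about damage functions (an illustrative sketch, not the authors' model): with the same fat-tailed warming distribution, expected damages can be only mildly sensitive or highly sensitive to the upper tail, depending on whether damages grow smoothly or jump at a level of warming implicitly treated as catastrophic. All parameters below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fat-tailed distribution of warming (K): a lognormal roughly
# centred near 3 K with a long upper tail. Parameters are arbitrary.
warming = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=200_000)

def damages_smooth(t, alpha=0.005):
    """Smooth damage function: fraction of output lost grows like T squared."""
    return np.minimum(alpha * t**2, 1.0)

def damages_catastrophic(t, threshold=6.0, base=0.005, loss=0.75):
    """Damage function with an implicit catastrophe: losses jump once warming
    exceeds a threshold standing in for the limits of adaptation."""
    return np.where(t < threshold, base * t, loss)

tail = warming > 6.0   # 'extreme warming' outcomes
for name, fn in [("smooth", damages_smooth),
                 ("catastrophic", damages_catastrophic)]:
    d = fn(warming)
    print(f"{name:>12}: expected damage = {d.mean():.3f}, "
          f"share from warming above 6 K = {d[tail].sum() / d.sum():.0%}")
```

Under the smooth function only a modest share of expected damages comes from the tail, whereas the threshold function concentrates most of the expected loss there, so the economic analysis inherits its sensitivity to tail probabilities from the implicit definition of 'catastrophic'.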
JC comment:  An early version of this paper was discussed on a previous CE post Tall tales and fat tails.
JC reflections
This is a good collection of papers that deal with the messy issues surrounding how to interpret climate model simulations, reason about their uncertainty, and use them in decision making. There are unfortunately no simple solutions or recipes for these issues. I think the issues raised here need to be confronted in the context of any application of climate model predictions/projections to impacts assessment and decision making.
