What’s the worst case? A possibilistic approach

by Judith Curry
Are all of the ‘worst-case’ climate scenarios and outcomes described in assessment reports, journal publications and the media plausible? Are some of these outcomes impossible? Conversely, are there unexplored worst-case scenarios that we have missed, and that could turn out to be real outcomes? Are there too many unknowns for us to have confidence that we have credibly identified the worst case? What threshold of plausibility or credibility should be used when assessing these extreme scenarios for policy making and risk management?

I’m working on a new paper that explores these issues by integrating climate science with perspectives from the philosophy of science and risk management. The objective is to provide a broader framing of the 21st century climate change problem in the context of how we assess and reason about worst-case scenarios. The challenge is to articulate an appropriately broad range of future outcomes, including worst-case outcomes, while acknowledging that the worst case can have different meanings for a scientist than for a decision maker.
This series will be in four parts, with the other three applying these ideas to worst-case scenarios for:

  • emissions/concentration
  • climate sensitivity
  • sea level rise

3. Possibilistic framework
In evaluating future scenarios of climate change outcomes for decision making, we need to assess the nature of the underlying uncertainties. Knight (1921) famously distinguished between the epistemic modes of certainty, risk, and uncertainty as characterizing situations where deterministic, probabilistic or possibilistic foreknowledge is available.
There are some things about climate change that we know for sure. For example, we are certain that increasing atmospheric carbon dioxide will act to warm the planet. As an example of probabilistic understanding of future climate change, we can assign meaningful probabilities to the expected increase in hurricane intensity in response to a specified increase in sea surface temperature (e.g. Knutson and Tuleya, 2013). There are also statements about the future climate to which we cannot reliably assign probabilities. For example, no attempt has been made to assign probabilities or likelihoods to the different emissions/concentration pathways for greenhouse gases in the 21st century (e.g. van Vuuren et al., 2011).
For a given emissions/concentration pathway, does the multi-model ensemble of simulations of the 21st century climate used in the IPCC assessment reports provide meaningful probabilities? Stainforth et al. (2007) provide a convincing argument that model inadequacy and an inadequate number of simulations in the ensemble preclude producing meaningful probabilities from the frequency of model outcomes of future climate states. Nevertheless, as summarized by Parker (2010), it is becoming increasingly common for results from climate model simulations to be transformed into probabilistic projections of future climate, using Bayesian and other techniques.
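As a toy sketch of how such a transformation works (the numbers and the simple Gaussian weighting below are mine, for illustration only; they do not reproduce the method of any particular study), ensemble members can be weighted by how well each model’s hindcast matches an observation, and the weighted projections read off as a ‘probability’ distribution:

```python
import numpy as np

# Toy Bayesian model weighting (all numbers hypothetical). Each model has a
# hindcast warming trend and a 21st century projection; weights come from a
# Gaussian likelihood of the observed trend under each model (uniform prior).
model_hindcasts   = np.array([0.15, 0.19, 0.22, 0.26, 0.31])  # deg C/decade
model_projections = np.array([2.1, 2.6, 3.0, 3.4, 4.2])       # deg C by 2100
observed_trend, obs_sigma = 0.20, 0.03                         # hypothetical obs

likelihood = np.exp(-0.5 * ((model_hindcasts - observed_trend) / obs_sigma) ** 2)
weights = likelihood / likelihood.sum()

print("posterior weights:", weights.round(3))
print(f"weighted projection: {np.sum(weights * model_projections):.2f} deg C")
```

The Stainforth et al. critique applies directly to such weights: they quantify spread within an inadequate ensemble, not the probability of real-world outcomes.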
Where probabilistic prediction fails, foreknowledge is possibilistic – we can judge some future events to be possible, and others to be impossible. The theory of imprecise probabilities (e.g. Levi 1980) can be considered as an intermediate mode between probabilistic and possibilistic prediction. However, imprecise probabilities require credible upper and lower bounds for the future outcomes, including the worst-case.
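As a minimal illustration (the class and bounds below are assumptions for illustration, not Levi’s formalism), an imprecise probability replaces a point probability with an interval, which is only as informative as its bounds are defensible:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImpreciseProbability:
    """A probability interval [lower, upper] instead of a point value."""
    lower: float
    upper: float

    def __post_init__(self):
        if not 0.0 <= self.lower <= self.upper <= 1.0:
            raise ValueError("require 0 <= lower <= upper <= 1")

    def is_vacuous(self) -> bool:
        # The interval [0, 1] carries no information: pure ignorance.
        return self.lower == 0.0 and self.upper == 1.0

# The intermediate mode is informative only if the analyst can defend the
# bounds -- which is precisely what is difficult for worst-case outcomes.
extreme_outcome = ImpreciseProbability(lower=0.0, upper=0.1)
print(extreme_outcome.is_vacuous())  # False, but only if the 0.1 bound is credible
```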
Possibility theory is an uncertainty theory devoted to the handling of incomplete information; it can capture partial ignorance and represent partial beliefs (for an overview, see Dubois and Prade, 2011). Analyzing uncertainty with possibility theory is most relevant when evidence about events is unreliable, or when insufficient information makes predictions or conclusions difficult to draw. Possibility theory distinguishes what is necessary and possible from what is impossible. It has been developed in two main directions, the qualitative and quantitative settings; the qualitative setting is the focus of the analysis presented here.
Possibility theory represents the state of knowledge about a state of affairs or outcome, distinguishing what is plausible from what is less plausible, the normal course of things from the abnormal, and the surprising from the expected. In possibility theory, the possibility function π(U) distinguishes an event that is possible from one that is impossible:
π(U) = 1: nothing prevents U from occurring; U is a completely possible value
π(U) = 0: U is rejected as impossible
The necessity function N(U) evaluates to what extent the event is certainly implied by the status of our knowledge:
N(U) = 1: U is necessary, certainly true; implies p(U) = 1
N(U) = 0: U is unnecessary; implies p(U) is unconstrained
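A minimal sketch of these two functions over a finite set of outcomes (the outcome labels and π values below are assumptions, purely for illustration):

```python
# Possibility distribution over a finite set of mutually exclusive future
# outcomes. For a set of outcomes A, the standard definitions are:
#   Possibility(A) = max{ pi(u) : u in A }
#   Necessity(A)   = 1 - Possibility(complement of A)
pi = {
    "moderate_warming": 1.0,   # nothing prevents it; completely possible
    "strong_warming":   0.7,
    "extreme_warming":  0.1,   # borderline impossible
    "cooling":          0.0,   # rejected as impossible
}

def possibility(event: set) -> float:
    return max(pi[u] for u in event)

def necessity(event: set) -> float:
    complement = set(pi) - event
    return 1.0 - max((pi[u] for u in complement), default=0.0)

some_warming = {"moderate_warming", "strong_warming", "extreme_warming"}
print(possibility(some_warming))  # 1.0 -- completely possible
print(necessity(some_warming))    # 1.0 -- certain, since cooling has pi = 0
```

Note the duality: an event is necessary exactly to the degree that its complement is impossible.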
Possibility theory has seen little application to climate science. Betz (2010) provided a conceptual framework that distinguishes different categories of possibility and necessity to convey our uncertain knowledge about the future, using predictions of future climate change as an example. In this context, Betz defines ‘possibility’ to mean consistency with our relevant background knowledge – referred to by Levi (1980) as ‘serious possibility.’
Betz (2010) classified possible events into two categories: (i) verified possibilities, i.e. statements which are shown to be possible, and (ii) unverified possibilities, i.e. events that are articulated, but neither shown to be possible nor impossible. The epistemic status of verified possibilities is higher than that of unverified possibilities; however, the most informative scenarios for risk management may be the unverified possibilities.
A useful strategy for categorizing ‘degrees of necessity’ is provided by the plausibility measures articulated by Friedman and Halpern (1995) and Huber (2008). Measures of plausibility incorporate the following notions of uncertainty:

  • Plausibility of an event is inversely related to the degree of surprise associated with the occurrence of the event;
  • Notions of conditional plausibility of an event A, given event B;
  • Hypotheses are confirmed incrementally for an ordered scale of events, supporting notions of partial belief.

Guided by the frameworks established by Betz (2010), Friedman and Halpern (1995) and Huber (2008), future climate outcomes are categorized here in terms of plausibility and degrees of justification (necessity). A high degree of justification (associated with a high p value) implies high robustness and relative immunity to falsification or rejection. Different classifications and associated p values can be articulated, but this categorization serves to illustrate applications of the concepts. Below is the classification of future climate outcomes used in this paper:

  • Strongly verified possibility – strongly supported by basic theoretical considerations and empirical evidence (p = 1)
  • Corroborated possibility – empirical evidence for the outcome; it has happened before under comparable conditions (0.8 ≤ p < 1)
  • Verified possibility – generally agreed to be consistent with relevant background theoretical and empirical knowledge (0.5 ≤ p < 0.8)
  • Contingent possibility – outcome is contingent on a model simulation and the plausibility of input values (0.1 ≤ p < 0.5)
  • Borderline impossible – consistency with background knowledge is disputed (0 < p < 0.1)
  • Impossible – inconsistent with relevant background knowledge (p = 0)

The contingent possibility category is related to Shackle’s (1961) notion of conditional possibility, whereby the degree of surprise of a conjunction of two events A and B is equal to the maximum of the degree of surprise of A, and of the degree of surprise of B, should A prove true.
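A minimal sketch of this scale together with Shackle’s conjunction rule (taking the degree of surprise to be 1 - p is a common possibility-theoretic convention, assumed here; the category boundaries follow the list above):

```python
def classify(p: float) -> str:
    # Category boundaries follow the classification in the text.
    if p >= 1.0: return "strongly verified possibility"
    if p >= 0.8: return "corroborated possibility"
    if p >= 0.5: return "verified possibility"
    if p >= 0.1: return "contingent possibility"
    if p > 0.0:  return "borderline impossible"
    return "impossible"

def surprise(p: float) -> float:
    # Degree of potential surprise as the complement of possibility (assumption).
    return 1.0 - p

def conjunction_surprise(s_a: float, s_b_given_a: float) -> float:
    # Shackle (1961): surprise(A and B) = max(surprise(A), surprise(B | A)).
    return max(s_a, s_b_given_a)

# A model-derived outcome (possible given its input, p = 0.9) that is
# contingent on a borderline-impossible input (p = 0.05):
p_input, p_outcome_given_input = 0.05, 0.9
s = conjunction_surprise(surprise(p_input), surprise(p_outcome_given_input))
print(classify(1.0 - s))  # "borderline impossible"
```

The conjunction rule makes the contingency explicit: a scenario is no more possible than the least possible assumption it rests on.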
This possibility scale does not map directly onto probabilities: a high value of possibility (p) does not indicate a correspondingly high probability, but only that a probable event must be possible and, conversely, that an impossible event cannot be probable.
3.1 Scenario justification
As a practical matter for considering policy-relevant outcomes (scenarios) of future climate change and its impacts, how are we to evaluate whether an outcome is possible or impossible?  In particular, how do we assess the possibility of big surprises or black swans?
If the objective is to capture the full range of policy-relevant outcomes and to broaden the perspective on the concept of scientific justification, then both confirmation and refutation strategies are relevant and complementary. The difference between confirmation and refutation can also be thought of in terms of the allocation of burdens of proof (e.g. Curry, 2011c). Consider a contentious outcome (scenario), S. For confirmation, the burden of proof falls on the party that says S is possible. By contrast, for refutation, the party denying that S is possible carries the burden of proof. Hence confirmation and refutation play complementary roles in outcome (scenario) justification.
The problem of generating a plethora of potentially useless future scenarios is avoided by assessing whether each scenario is deemed possible or impossible, based on our background knowledge. Section 2 addressed how black swan or worst-case scenarios can be created; but how do we approach refuting extreme scenarios or outcomes as impossible or implausible? Extreme scenarios and their outcomes can be evaluated based on the following criteria (criterion 1 is sketched below the list):

  1. Evaluation of the possibility of each link in the storyline used to create the scenario.
  2. Evaluation of the possibility of the outcome and/or the inferred rate of change, in light of physical or other constraints.
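One simple way to operationalize criterion 1 (the ‘weakest link’ min rule is a standard possibility-theoretic choice, assumed here rather than prescribed above; the storyline and p values are hypothetical):

```python
# Each link in an extreme-scenario storyline is assigned an assessed
# possibility p; the possibility of the whole chain is capped by its
# weakest link (min rule).
storyline = {
    "trigger event occurs":        0.6,
    "feedback amplifies response": 0.3,
    "response persists to 2100":   0.08,  # borderline impossible
}

scenario_possibility = min(storyline.values())
print(f"scenario possibility: {scenario_possibility}")  # 0.08
# A single borderline-impossible link renders the whole storyline at best
# borderline impossible, regardless of the other links.
```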

Assessing the strength of background knowledge is an essential element in assessing the possibility or impossibility of extreme scenarios. Extreme scenarios are by definition at the knowledge frontier. Hence the background knowledge against which extreme scenarios and their outcomes are evaluated is continually changing, which argues for frequent re-evaluation of worst-case scenarios and outcomes.
3.2 Scenario refutation
Scenario refutation requires expert judgment, assessed against background knowledge.
This raises several questions: Which experts and how many? By what methods is the expert judgment formulated? What biases enter into the expert judgment?
Expert judgment encompasses a wide variety of techniques, ranging from a single undocumented opinion, to preference surveys, to formal elicitation with external validation (e.g. Oppenheimer et al., 2016). Serious disagreement among experts as to whether a particular scenario (outcome) is possible or impossible justifies a scenario classification of ‘borderline impossible.’
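A minimal sketch of how that last rule might be operationalized (the function and the 20% disagreement threshold are arbitrary assumptions for illustration):

```python
def classify_from_panel(votes_possible: int, votes_impossible: int,
                        disagreement_threshold: float = 0.2) -> str:
    # 'Serious disagreement' is taken here (arbitrarily) to mean that the
    # minority view holds at least 20% of the panel.
    total = votes_possible + votes_impossible
    minority = min(votes_possible, votes_impossible) / total
    if minority >= disagreement_threshold:
        return "borderline impossible"
    return "possible" if votes_possible > votes_impossible else "impossible"

print(classify_from_panel(votes_possible=7, votes_impossible=3))   # borderline impossible
print(classify_from_panel(votes_possible=19, votes_impossible=1))  # possible
```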
3.3 Worst-case classification
On topics where there is substantial uncertainty and/or a rapidly advancing knowledge frontier, experts disagree on what outcomes they would categorize as a ‘worst case,’ even when considering the same background knowledge and the same input parameters/constraints.
For example, consider the expert elicitation on 21st century sea level rise conducted by Horton et al. (2014), which reported the results from a broad survey of 90 experts. One question concerned the 83rd percentile of sea level rise for a warming of 4.5°C, in response to RCP8.5. While overall the elicitation provided similar results to those cited by the IPCC AR5 (around 1 m), Figure 2 of Horton et al. (2014) shows that 6 of the respondents placed the 83rd percentile higher than 2.5 m, with the highest estimate exceeding 6 m.
While experts will inevitably disagree on what constitutes a worst case when the knowledge base is uncertain, a classification is presented here that is determined by the extent to which borderline impossible parameters or inputs are employed in developing the scenario. This classification is inspired by the White Queen in Through the Looking-Glass: “Why, sometimes I’ve believed as many as six impossible things before breakfast.” This scheme articulates three categories of worst-case scenarios:

  • Conceivable worst case: formulated by incorporating all worst-case parameters/inputs (above their 90th or 95th percentile) into a model; does not survive refutation efforts.
  • Possible worst case: 0 < p < 0.1 (borderline impossible). Includes multiple worst-case parameters/inputs in model-derived scenarios; survives refutation efforts.
  • Plausible worst case: p slightly above 0.1. Includes at most one borderline impossible assumption in model-derived scenarios.

A few comments are in order to avoid oversimplifying this classification in a specific application. Simply counting the number of borderline impossible parameters/inputs used in deriving a scenario can be misleading if those inputs have little influence on the scenario outcome. If the borderline impossible parameters/inputs are independent, then the necessity (and likelihood) of the scenario is reduced relative to that of each individual parameter/input. If the collection of borderline impossible parameters/inputs produces nonlinear feedbacks or cascades, it is conceivable that they partially cancel, rather than exacerbate, the extremity of the outcome. Model sensitivity tests, as sketched below, can assess to what extent a collection of borderline impossible parameters/inputs contributes to the extremity of the outcome.
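A minimal sketch of such a one-at-a-time sensitivity test (the additive ‘model’ and the sea level contributions below are stand-ins, not estimates from any study):

```python
def toy_outcome(inputs: dict) -> float:
    # Stand-in for a model run; here the contributions simply add (metres).
    return sum(inputs.values())

extreme = {"ice_sheet_collapse": 1.5, "thermal_expansion": 0.45,
           "glacier_melt": 0.35}   # worst-case inputs (hypothetical)
central = {"ice_sheet_collapse": 0.2, "thermal_expansion": 0.30,
           "glacier_melt": 0.15}   # central estimates (hypothetical)

baseline = toy_outcome(extreme)
for name in extreme:
    relaxed = {**extreme, name: central[name]}  # relax one input at a time
    print(f"{name}: contributes {baseline - toy_outcome(relaxed):.2f} m")
# Inputs with a negligible contribution should not count against the
# scenario's plausibility, even if individually borderline impossible.
```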
The conceivable worst-case scenario is of academic interest only; the plausible and possible worst-case scenarios are of greater relevance for policy and risk management. In the following three sections, these ideas about worst-case scenarios are applied to emissions/concentrations, climate sensitivity and sea level rise. Apart from their importance in climate science and policy, these three topics are selected to illustrate different types of constraints and uncertainties in assessing worst-case outcomes.
JC note:  I look forward to your comments/feedback.  The next installment will assess RCP8.5 using these criteria.
