by Judith Curry
On possibilities, known neglecteds, and the vicious positive feedback loop between scientific assessment and policy making that has created a climate Frankenstein.
I prepared a new talk, which I presented yesterday at Rand Corp. My contact at Rand is Rob Lempert, of deepuncertainty.org fame. Very nice visit and interesting discussion.
My complete presentation can be downloaded [Rand uncertainty]. This post focuses on the new material.
Scientists are saying the 1.5 degree climate report pulled punches, downplaying real risks facing humanity in the next few decades, including feedback loops that could cause ‘chaos’ beyond human control.
To my mind, if the scientists really wanted to communicate the risk from future climate change, they should at least articulate the worst possible case (heck, was anyone scared by that 4″ of extra sea level rise?). Emphasis on POSSIBLE. The possible worst case puts upper bounds on what could happen, based upon our current background knowledge. The exercise of trying to articulate the worst case illuminates many things about our understanding (or lack thereof) and the uncertainties. A side effect of such an exercise would be to lop off the ‘fat tails’ that economists/statisticians are so fond of manufacturing. And finally, the worst case does have a role in policy making (but NOT as the expected case).
My recent paper Climate uncertainty and risk assessed the epistemic status of climate models, and described their role in generating possible future scenarios. I introduced the possibilistic approach to scenario generation, including the value of scientific speculation on policy-relevant aspects of plausible, high-impact scenarios, even though we can neither model them realistically nor provide a precise estimate of their probability.
How are we to evaluate whether a scenario is possible or impossible? A series of papers by Gregor Betz provides some insights; below is my take on how to approach this for future climate scenarios, based upon my reading of Betz and other philosophers working on this problem.
I categorize climate models here as (un)verified possibilities; there is a debate on this topic in the philosophy of science literature. The argument is that some climate models may be regarded as producing verified possibilities for some variables (e.g. temperature).
Maybe I’ll accept that a few models produce useful temperature forecasts, provided that they also produce accurate ocean oscillations when initialized. But that is about as far as I would go towards claiming that climate model simulations are ‘verified’.
An interesting aside regarding the ‘tribes’ in the climate debate, in context of possibility verification:
- Lukewarmers: focus on the verified possibilities
- Consensus/IPCC types: focus on the unverified possibilities generated by climate models.
- Alarmists: focus on impossible and/or borderline impossible scenarios, treating them as ‘expected’ scenarios or as justification for precautionary avoidance of CO2 emissions.
This diagram provides a visual that distinguishes the various classes of possibilities, including the impossible and irrelevant. While verified possibilities have higher epistemic status than the unverified possibilities, all of these possibilities are potentially important for decision makers.
The orange triangle illustrates a specific vulnerability assessment, whereby only a fraction of the scenarios are relevant to the decision at hand, and the most relevant ones are unverified possibilities, and even impossible ones. Clarifying what is impossible versus what is not is important to decision makers, and the classification provides important information about uncertainty.
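To make the classification concrete, here is a toy sketch (my own illustrative construction; the example scenarios, values, and thresholds are placeholders, not assessed results) of how a possibilistic screening of scenarios might be organized for a decision maker:

```python
# Illustrative sketch only: a toy possibilistic classification of scenarios.
# The classes follow the categories in the diagram; the example scenarios and
# numbers are hypothetical placeholders, not assessed values.
from dataclasses import dataclass
from enum import Enum, auto

class PossibilityClass(Enum):
    CORROBORATED = auto()           # consistent with observations to date
    VERIFIED = auto()               # from models adequate for this variable
    UNVERIFIED = auto()             # model-generated, adequacy not established
    BORDERLINE_IMPOSSIBLE = auto()  # disputed edge of background knowledge
    IMPOSSIBLE = auto()             # contradicts background knowledge

@dataclass
class Scenario:
    name: str
    value: float                    # e.g. sea level rise by 2100 (m)
    possibility: PossibilityClass
    decision_relevant: bool         # inside the 'orange triangle'

scenarios = [
    Scenario("historical-rate extrapolation", 0.3, PossibilityClass.CORROBORATED, True),
    Scenario("thermal expansion + glaciers",  0.6, PossibilityClass.VERIFIED, True),
    Scenario("marine ice sheet collapse",     1.6, PossibilityClass.UNVERIFIED, True),
    Scenario("extreme ice sheet collapse",    3.0, PossibilityClass.BORDERLINE_IMPOSSIBLE, True),
    Scenario("beyond physical constraints",   5.0, PossibilityClass.IMPOSSIBLE, False),
]

# A vulnerability assessment focuses on the decision-relevant scenarios; the
# classification itself (including 'impossible') is part of the information
# conveyed to the decision maker.
for s in scenarios:
    if s.decision_relevant:
        print(f"{s.name}: {s.value} m ({s.possibility.name})")
```

The point is not the numbers but the bookkeeping: each scenario carries both its epistemic status and its relevance to the decision at hand.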
Let’s apply these ideas to interpreting the various estimates of equilibrium climate sensitivity. The AR5 likely range is 1.5 to 4.5 C, which hasn’t really budged since the 1979 Charney report. The most significant statement in the AR5 is relegated to a footnote in the SPM: “No best estimate for equilibrium climate sensitivity can now be given because of lack of agreement on values across assessed lines of evidence and studies.”
The big disagreement is between the CMIP5 model range (values between 2.1 and 4.7 C) and estimates based on historical observations using an energy balance model. While Lewis and Curry (2015) was not included in the AR5, it provides the most objective comparison of this approach with the CMIP5 models, since it used the same forcing and time period.
The Lewis/Curry estimates are arguably corroborated possibilities, since they are based directly on historical observational data, linked together by a simple energy balance model. It has been argued that LC underestimate values on the high end, and neglect the very slow feedbacks. True, but the same holds for the CMIP5 models, so this remains a valid comparison.
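For reference, the energy budget method behind Lewis/Curry boils down to a simple global balance (a schematic sketch in my notation; the paper itself treats the forcing and heat uptake uncertainties much more carefully):

$$ECS \approx \frac{F_{2\times CO_2}\,\Delta T}{\Delta F - \Delta Q}$$

where ΔT is the change in global mean surface temperature between a base period and a final period, ΔF is the corresponding change in radiative forcing, ΔQ is the change in the planetary heat uptake rate (mostly ocean), and F2xCO2 is the forcing from a doubling of CO2 (roughly 3.7 W/m2).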
Where to set the borderline impossible range? The IPCC AR5 put a 90% limit at 6 C. None of the ECS values cited in the AR5 extend much beyond 6 C, although in the AR4 many long tails were cited, apparently extending beyond 10 C. Hence in my diagram I put a range of 6-10 C as borderline impossible based on information from the AR4/AR5.
Now for JC’s perspective. We have an anchor on the lower bound — the no-feedback climate sensitivity, which is nominally ~1 C (sorry, skydragons). The latest Lewis/Curry values are reported here over the very likely range (5-95%). I regard this as our current best estimate of observationally based ECS values, and regard these as corroborated possibilities.
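The ~1 C no-feedback anchor follows from a back-of-envelope Planck-response calculation (my arithmetic; the exact number depends on the values adopted for the CO2 forcing and the Planck response):

$$\Delta T_0 \approx \frac{F_{2\times CO_2}}{\lambda_0} \approx \frac{3.7\ W\,m^{-2}}{3.2\ W\,m^{-2}\,K^{-1}} \approx 1.2\ K$$

where λ0 is the no-feedback (Planck) radiative response; with commonly cited values this lands at roughly 1 to 1.2 C of warming per doubling of CO2, before any feedbacks operate.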
I accept the possibility that Lewis/Curry is too low on the upper range, and agree that it could be as high as 3.5 C. And I’ll even bow to peer/consensus pressure and put the upper limit of the very likely range at 4.5 C. I think values of 6-10 C are impossible, and I would personally define the borderline impossible region as 4.5 – 6 C. Yes, we can disagree on this one, and I would like to see lots more consideration of this upper bound issue. But the defenders of the high ECS values are more focused on trying to convince us that ECS can’t be below 2 C.
But can we shake hands and agree that values above 10 C are impossible?
Now consider the perspective of economists on equilibrium climate sensitivity. The IPCC AR5 WGIII report based all of its calculations on the assumption that ECS = 3 C, a value carried over from the IPCC AR4 WGI Report. Seems like the AR5 WGI folks forgot to give WGIII the memo that there was no longer a preferred ECS value.
Subsequent to the AR5 Report, economists became more sophisticated and began using the ensemble of CMIP5 simulations. One problem is that the CMIP5 models don’t cover the bottom 30% of the IPCC AR5 likely range for ECS.
The situation didn’t get really bad until economists started creating PDFs of ECS. Based on the AR4 assessment, the US Interagency Working Group on the Social Cost of Carbon fitted a distribution that had 5% of the values greater than 7.16 C. Weitzman (2008) fitted a distribution with 0.05% of values above 11 C and 0.01% above 20 C. While these probabilities seem small, they happen to dominate the calculation of the social cost of carbon (low probability, high impact events). [see Worst case scenario versus fat tail]. These large values of ECS (nominally beyond 6 C and certainly beyond 10 C) are arguably impossible based upon our background knowledge.
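To illustrate how fitting a parametric distribution manufactures a tail, here is a toy calibration of my own (not the IWG’s Roe-Baker fit or Weitzman’s actual analysis): pin a lognormal to a median of 3 C and a 66% ‘likely’ range of 2 to 4.5 C and see what the tail does.

```python
# Toy illustration (not the IWG's or Weitzman's fit): calibrate a lognormal
# ECS distribution to a median of 3.0 C and a 66% 'likely' range of
# 2.0-4.5 C, then ask how much probability mass the fitted curve puts
# beyond 6 C and 10 C.
from scipy import stats
from scipy.optimize import brentq

MEDIAN = 3.0                     # assumed median ECS (C)
LO, HI, LIKELY = 2.0, 4.5, 0.66  # assumed 'likely' range and probability

def likely_mass_error(sigma):
    # With the median fixed, find the log-space spread that reproduces
    # the assumed 'likely' probability between LO and HI.
    dist = stats.lognorm(s=sigma, scale=MEDIAN)
    return dist.cdf(HI) - dist.cdf(LO) - LIKELY

sigma = brentq(likely_mass_error, 0.05, 2.0)
dist = stats.lognorm(s=sigma, scale=MEDIAN)
for threshold in (6.0, 10.0):
    print(f"P(ECS > {threshold:.0f} C) = {dist.sf(threshold):.4f}")
# Roughly 5% of the fitted mass sits above 6 C and about 0.2% above 10 C:
# tail probabilities that come from the choice of distribution, not from
# any observational evidence about those values.
```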
For equilibrium climate sensitivity, we have no basis for developing a PDF — no mean, and a weakly defended upper bound. Statistically manufactured ‘fat tails’, with arguably impossible values of climate sensitivity, are driving the social cost of carbon. Instead, effort should be focused on identifying the possible or plausible worst case, one that can’t be falsified based on our background knowledge. [see also Climate sensitivity: lopping off the fat tail]
————–
The issue of sea level rise provides a good illustration of how to assess the various scenarios and the challenges of identifying the possible worst case scenario. This slide summarizes expert assessments from the IPCC AR4 (2007), IPCC AR5 (2013), the US Climate Science Special Report (CSSR 2017), and the NOAA Sea Level Rise Scenarios Report (2017). Also included is a range of worst case estimates (with and without an acceleration in the rate of sea level rise).
With all these expert assessments, the issue becomes ‘which experts?’ We have the international and national assessments, each drawing on a limited number of experts selected by whatever mechanism. Then we have expert testimony from individual witnesses selected by politicians or lawyers with an agenda.
In this context, the expert elicitation reported by Horton et al. (2014), which considered expert judgement from 90 scientists publishing on the topic of sea level rise, is significant. Also, a warming of 4.5 C is arguably the worst case for 21st century temperature increase (actually I suspect this is an impossible amount of warming for the 21st century, but let’s keep it for the sake of argument here). So should we regard Horton’s ‘likely’ SLR of 0.7 to 1.2 m for 4.5 C warming as the ‘likely’ worst case scenario? The Horton paper gives 0.5 to 1.5 m as the very likely range (5 to 95%). These values are much lower than the 1.6 to 3 m range (and don’t even overlap).
There is obviously some fuzziness and different ways of thinking about the worst case scenario for SLR by 2100. Different perspectives are good, but 0.7 to 3 m is a heck of a range for the borderline worst case.
———-
And now for JC’s perspective on sea level rise circa 2100. The corroborated possibilities, from rates of sea level rise in the historical record, are 0.3 m and less.
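As a back-of-envelope check (my arithmetic, using commonly cited historical rates of global mean sea level rise of roughly 1.7 to 3 mm/yr):

$$\Delta h \approx 3\ mm\,yr^{-1} \times 80\ yr \approx 0.24\ m$$

so a simple extrapolation of the recent rate through 2100 stays at or below about 0.3 m; larger values require an acceleration.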
The values from the IPCC AR4, which were widely criticized for NOT including glacier dynamics, are actually verified possibilities (contingent on a specified temperature change) — focused on what we know, based on straightforward theoretical considerations (e.g. thermal expansion) and processes for which we have defensible empirical relations.
Once you start including ice dynamics and the potential collapse of ice sheets, we are in the land of unverified possibilities.
I regard anything beyond 3 m as impossible, with the territory between 1.6 m and 3.0 m as the disputed borderline impossible region. I would like to see another expert elicitation study along the lines of Horton that focused on the worst case scenario. I would also like to see more analysis of the different types of reasoning that are used in creation of a worst case scenario.
The worst case scenario for sea level rise has very tangible applications NOW in adaptation planning, the siting of power plants, and lawsuits. This is a hot and timely topic, not to mention an important one. A key topic in the discussion at Rand was how decision makers perceive and use ‘worst case’ scenario information. One challenge is to avoid having the worst case become anchored as the ‘expected’ case.
———
Are we framing the issue of 21st century climate change and sea level rise correctly?
I don’t think Donald Rumsfeld, in his famous taxonomy of unknowns, included the category of ‘unknown knowns’. Unknown knowns, sometimes referred to as ‘known neglecteds,’ refer to known processes or effects that are neglected for some reason.
Climate science has made a massive framing error by treating future climate change as being driven solely by CO2 emissions. The known neglecteds listed below are colored blue for an expected cooling effect over the 21st century, and red for an expected warming effect.
——-
Much effort has been expended in imagining future black swan events associated with human-caused climate change. At this point, human-caused climate change and its dire possible impacts are so ubiquitous in the literature and public discussion that I now regard human-caused climate change as a ‘white swan.’ The white swan is frankly a bit of a ‘rubber ducky’, but nevertheless so many alarming scenarios have been tossed out there that it is pretty hard to imagine a climate surprise caused by CO2 emissions that has not already been imagined.
The black swans related to climate change are associated with natural climate variability. There is much room for the unexpected to occur, especially for the ‘CO2 as climate control knob’ crowd.
Existing climate models do not allow exploration of all possibilities that are compatible with our knowledge of the basic way the climate system actually behaves. Some of these unexplored possibilities may turn out to be real ones.
Scientific speculation on plausible, high-impact scenarios is needed, particularly including the known neglecteds.
Is all this categorization of uncertainty merely academic, the equivalent of counting angels dancing on the head of a pin? The level of uncertainty, and the relevant physical processes (controllable or uncontrollable), are key elements in selecting the appropriate decision-analytic framework.
Controllability of the climate (the CO2 control knob) is something that has been implicitly assumed in all this. Perhaps on millennial time scales climate is controlled by CO2 (but on those time scales CO2 is a feedback as well as a forcing). On the time scale of the 21st century, anything feasible that we do to reduce CO2 emissions is unlikely to have much of an impact on the climate, even if you believe the climate model simulations (see Lomborg).
Optimal control and cost/benefit analysis, which are used in evaluating the social cost of carbon, assume statistical uncertainty and that the climate is controllable — two seriously unsupported assumptions.
Scenario planning, adaptive management and robustness/resilience/antifragility strategies are much better suited to conditions of scenario/deep uncertainty and a climate that is uncontrollable.
How did we land in this situation of such a serious science-policy mismatch? Well, in the early days (late 1980s – early 1990s) international policy makers put the policy cart before the scientific horse, with a focus on CO2 and dangerous climate change. This focus led climate scientists to make a serious framing error, by focusing only on CO2-driven climate change. In a drive to remain relevant to the policy process, the scientists focused on building consensus and reducing uncertainties. They also began providing probabilities — even though these were unjustified by the scientific knowledge base, there was a perception that policy makers wanted this. And this led to fat tails and cost/benefit analyses that are all but meaningless (no matter who they give Nobel prizes to).
The end result is oversimplification of both the science and policies, with positive feedback between the two that has created a climate alarm monster.
This Frankenstein has been created from framing errors, characterization of deep uncertainty with probabilities, and the statistical manufacture of fat tails.
“Monster creation” triggered a memory of a post I wrote in 2010, Heresy and the Creation of Monsters. Yikes, I was feisty back then (I’m getting mellow in my old age).