
Climate uncertainty monster: What’s the worst case?

by Judith Curry

On possibilities, known neglecteds, and the vicious positive feedback loop between scientific assessment and policy making that has created a climate Frankenstein.

I have prepared a new talk that I presented yesterday at Rand Corp. My contact at Rand is Rob Lempert, of deepuncertainty.org fame.  Very nice visit and interesting discussion.

My complete presentation can be downloaded [Rand uncertainty].  This post focuses on the new material.

Scientists are saying the 1.5 degree climate report pulled punches, downplaying the real risks facing humanity in the next few decades, including feedback loops that could cause ‘chaos’ beyond human control.

To my mind, if the scientists really wanted to communicate the risk from future climate change, they should at least articulate the worst possible case (heck, was anyone scared by that 4″ of extra sea level rise?).  Emphasis on POSSIBLE.  The possible worst case puts upper bounds on what could happen, based upon our current background knowledge.  The exercise of trying to articulate the worst case illuminates many things about our understanding (or lack thereof) and the uncertainties.  A side effect of such an exercise would be to lop off the ‘fat tails’ that economists/statisticians are so fond of manufacturing. And finally, the worst case does have a role in policy making (but NOT as the expected case).

My recent paper Climate uncertainty and risk  assessed the epistemic status of climate models, and described their role in generating possible future scenarios.  I introduced the possibilistic approach to scenario generation, including the value of scientific speculation on policy-relevant aspects of plausible, high-impact scenarios, even though we can neither model them realistically nor provide a precise estimate of their probability.

How are we to evaluate whether a scenario is possible or impossible?  A series of papers by Gregor Betz provides some insights; below is my take on how to approach this for future climate scenarios, based upon my reading of Betz and other philosophers working on this problem.

I categorize climate models here as (un)verified possibilities; there is a debate in the philosophy of science literature on this topic.  The argument is that some climate models may be regarded as producing verified possibilities for some variables (e.g. temperature).

Maybe I’ll accept that a few models produce useful temperature forecasts, provided that they also produce accurate ocean oscillations when initialized.  But that is about as far as I would go towards claiming that climate model simulations are ‘verified’.

An interesting aside regarding the ‘tribes’ in the climate debate, in context of possibility verification:

  • Lukewarmers: focus on the verified possibilities
  • Consensus/IPCC types: focus on the unverified possibilities generated by climate models.
  • Alarmists: focus on impossible and/or borderline impossible scenarios, treating them as ‘expected’ scenarios or as justifying precautionary avoidance of emitting CO2.

This diagram provides a visual that distinguishes the various classes of possibilities, including the impossible and irrelevant.  While verified possibilities have higher epistemic status than the unverified possibilities, all of these possibilities are potentially important for decision makers.

The orange triangle illustrates a specific vulnerability assessment, whereby only a fraction of the scenarios are relevant to the decision at hand, and the most relevant ones are unverified possibilities and even the impossible ones.   Clarifying what is impossible versus what is not is important to decision makers, and the classification provides important information about uncertainty.


Let’s apply these ideas to interpreting the various estimates of equilibrium climate sensitivity.  The AR5 likely range is 1.5 to 4.5 C, which hasn’t really budged since the 1979 Charney report.  The most significant statement in the AR5 is tucked into a footnote in the SPM:  “No best estimate for equilibrium climate sensitivity can now be given because of lack of agreement on values across assessed lines of evidence and studies.”

The big disagreement is between the CMIP5 model range (values between 2.1 and 4.7 C) and estimates from the historical observations using an energy balance model.  While Lewis and Curry (2015) was not included in the AR5, it provides the most objective comparison of this approach with the CMIP5 models, since it used the same forcing and time period.

The Lewis/Curry estimates are arguably corroborated possibilities, since they are based directly on historical observational data, linked together by a simple energy balance model.  It has been argued that LC underestimate values on the high end, and neglect the very slow feedbacks.  True, but the same holds for the CMIP5 models, so this remains a valid comparison.
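
For readers who want the arithmetic behind the energy balance approach: it reduces to a one-line formula, ECS ≈ F2x · ΔT / (ΔF − ΔQ).  Below is a minimal sketch in Python; the input values are illustrative round numbers of the right magnitude, not the published Lewis/Curry figures.

```python
# Energy-balance estimate of equilibrium climate sensitivity (ECS):
#   ECS ~= F2x * dT / (dF - dQ)
# dT, dF, dQ are the changes in global mean temperature, radiative
# forcing, and planetary heat uptake between a base period and a
# final period. The inputs below are illustrative round numbers,
# NOT the published Lewis & Curry values.

F2X = 3.7   # W/m^2, canonical forcing for doubled CO2
dT = 0.8    # K, observed warming between the two periods (illustrative)
dF = 2.3    # W/m^2, change in total forcing (illustrative)
dQ = 0.5    # W/m^2, change in planetary heat uptake (illustrative)

ecs = F2X * dT / (dF - dQ)
print(f"Energy-balance ECS: {ecs:.2f} K")  # prints ~1.64 K
```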


Where to set the borderline impossible range?  The IPCC AR5 put a 90% limit at 6 C.  None of the ECS values cited in the AR5 extend much beyond 6 C, although in the AR4 many long tails were cited, apparently extending beyond 10 C.  Hence in my diagram I put a range of 6-10 C as borderline impossible based on information from the AR4/AR5.


Now for JC’s perspective.  We have an anchor on the lower bound — the no-feedback climate sensitivity, which is nominally ~1 C (sorry, skydragons).  The latest Lewis/Curry values are reported here over the very likely range (5-95%).  I regard this as our current best estimate of observationally based ECS values, and regard these as corroborated possibilities.

I accept the possibility that Lewis/Curry is too low on the upper range, and agree that it could be as high as 3.5 C.  And I’ll even bow to peer/consensus pressure and put the upper limit of the very likely range at 4.5 C.  I think values of 6-10 C are impossible, and I would personally define the borderline impossible region as 4.5 – 6 C.  Yes, we can disagree on this one, and I would like to see lots more consideration of this upper bound issue.  But the defenders of the high ECS values are more focused on trying to convince us that ECS can’t be below 2 C.

But can we shake hands and agree that values above 10 C are impossible?
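
To make the taxonomy concrete, here is a minimal sketch that encodes the ECS breakpoints argued above as a toy classifier.  The boundaries are the ones from this post (JC’s perspective), not a community standard, and the class labels follow the Betz-style categories discussed earlier.

```python
from enum import Enum

class Possibility(Enum):
    """Epistemic status of an ECS value, per the discussion above."""
    CORROBORATED = "backed directly by historical observations"
    UNVERIFIED = "model-generated, not independently corroborated"
    BORDERLINE_IMPOSSIBLE = "disputed edge of background knowledge"
    IMPOSSIBLE = "contradicts background knowledge"

def classify_ecs(ecs: float) -> Possibility:
    """Toy classifier using this post's breakpoints (deg C), not a consensus."""
    if ecs < 1.0:        # below the no-feedback lower anchor
        return Possibility.IMPOSSIBLE
    if ecs <= 3.5:       # observationally based (Lewis/Curry-type) range
        return Possibility.CORROBORATED
    if ecs <= 4.5:       # conceded upper limit of the very likely range
        return Possibility.UNVERIFIED
    if ecs <= 6.0:       # disputed territory
        return Possibility.BORDERLINE_IMPOSSIBLE
    return Possibility.IMPOSSIBLE

for value in (0.5, 1.6, 4.0, 5.0, 11.0):
    print(value, classify_ecs(value).name)
```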


Now consider the perspective of economists on equilibrium climate sensitivity.  The IPCC AR5 WGIII report based all of its calculations on the assumption that ECS = 3 C, based on the IPCC AR4 WGI Report.  Seems like the AR5 WGI folks forgot to give WGIII  the memo that there was no longer a preferred ECS value.

Subsequent to the AR5 Report, economists became more sophisticated and began using the ensemble of CMIP5 simulations.  One problem is that the CMIP5 models don’t cover the bottom 30% of the IPCC AR5 likely range for ECS.


The situation didn’t get really bad until economists started creating PDFs of ECS.  Based on the AR4 assessment, the US Interagency Working Group on the Social Cost of Carbon fitted a distribution that had 5% of the values greater than 7.16 C.  Weitzman (2008) fitted a distribution with 0.05% of values above 11 C and 0.01% above 20 C.  While these probabilities seem small, they happen to dominate the calculation of the social cost of carbon (low probability, high impact events).  [see Worst case scenario versus fat tail].  These large values of ECS (nominally beyond 6 C and certainly beyond 10 C) are arguably impossible based upon our background knowledge.
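
The mechanics of why a thin tail can dominate the expectation are easy to demonstrate.  A minimal sketch, assuming a lognormal ECS distribution as a stand-in for the fitted distributions cited above (not the actual IWG or Weitzman fits) and a deliberately crude convex damage function:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in fat-tailed ECS distribution: lognormal with median 3 C and
# sigma tuned so roughly 5% of draws exceed ~7 C. Illustrative only.
mu, sigma = np.log(3.0), 0.53
ecs = rng.lognormal(mu, sigma, size=1_000_000)

# Placeholder convex damage function (cubic), chosen only to show how
# a thin tail can dominate an expectation, not a real economic model.
damage = ecs**3

tail = ecs > 10.0
print(f"P(ECS > 10 C):          {tail.mean():.2%}")
print(f"E[damage], full:        {damage.mean():.1f}")
print(f"E[damage], tail lopped: {damage[~tail].mean():.1f}")
# Here roughly 1% of the draws carry about a quarter of the expected
# damage; with the far fatter tails cited above, the tail dominates.
```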

For equilibrium climate sensitivity, we have no basis for developing a PDF: no mean, and a weakly defended upper bound. Statistically-manufactured ‘fat tails’, with arguably impossible values of climate sensitivity, are driving the social cost of carbon.  Instead, effort should be focused on identifying the possible or plausible worst case that can’t be falsified based on our background knowledge. [see also Climate sensitivity: lopping off the fat tail]

————–

The issue of sea level rise provides a good illustration of how to assess the various scenarios and the challenges of identifying the possible worst case scenario.  This slide summarizes expert assessments from the IPCC AR4 (2007), IPCC AR5 (2013), the US Climate Science Special Report (CSSR 2017), and the NOAA Sea Level Rise Scenarios Report (2017).  Also included is a range of worst case estimates (with and without acceleration of sea level rise).


With all these expert assessments, the issue becomes ‘which experts?’  We have the international and national assessments, each with a limited number of experts selected by whatever mechanism.  Then we have expert testimony from individual witnesses selected by politicians or lawyers with an agenda.

In this context, the expert elicitation reported by Horton et al. (2014) is significant; it considered expert judgment from 90 scientists publishing on the topic of sea level rise.  Also, a warming of 4.5 C is arguably the worst case for 21st century temperature increase (actually I suspect this is an impossible amount of warming for the 21st century, but let’s keep it for the sake of argument here).  So should we regard Horton’s ‘likely’ SLR of 0.7 to 1.2 m for 4.5 C warming as the ‘likely’ worst case scenario?  The Horton paper gives 0.5 to 1.5 m as the very likely range (5 to 95%).  These values are much lower than the 1.6 to 3 m worst-case range (the two ranges don’t even overlap).

There is obviously some fuzziness and different ways of thinking about the worst case scenario for SLR by 2100.  Different perspectives are good, but 0.7 to 3 m is a heck of a range for the borderline worst case.

———-

And now for JC’s perspective on sea level rise circa 2100.  The corroborated possibilities, from rates of sea level rise in the historical record, are 0.3 m and less.

The values from the IPCC AR4, which were widely criticized for NOT including glacier dynamics, are actually verified possibilities (contingent on a specified temperature change) — focused on what we know, based on straightforward theoretical considerations (e.g. thermal expansion) and processes for which we have defensible empirical relations.
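
Both possibility classes come with back-of-envelope arithmetic.  A sketch with round illustrative numbers; the historical rate, layer depth, expansion coefficient, and assumed ocean warming are order-of-magnitude placeholders, not values taken from any of the assessments cited above:

```python
# Corroborated possibility: extrapolate the observed historical rate.
rate = 3.0                # mm/yr, approx. satellite-era global mean rate
years = 2100 - 2020
print(f"Extrapolated rise by 2100: {rate * years / 1000:.2f} m")  # 0.24 m

# Verified possibility (contingent on an assumed warming): thermal
# expansion of a uniformly warming ocean layer, dh ~= alpha * dT * H.
alpha = 2.0e-4            # 1/K, expansion coeff. of seawater (order of magnitude)
dT_ocean = 1.0            # K, assumed layer warming (illustrative)
H = 700.0                 # m, depth of the warming layer (illustrative)
print(f"Thermosteric rise: {alpha * dT_ocean * H:.2f} m")         # 0.14 m
```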

Once you start including ice dynamics and the potential collapse of ice sheets, we are in the land of unverified possibilities.

I regard anything beyond 3 m as impossible, with the territory between 1.6 m and 3.0 m as the disputed borderline impossible region.  I would like to see another expert elicitation study along the lines of Horton that focused on the worst case scenario.  I would also like to see more analysis of the different types of reasoning that are used in creation of a worst case scenario.

The worst case scenario for sea level rise is having very tangible applications NOW in adaptation planning, siting of power plants, and in lawsuits.  This is a hot and timely topic, not to mention important.  A key topic in the discussion at Rand was how decision makers perceive and use ‘worst case’ scenario information.  One challenge is to avoid having the worst case become anchored as the ‘expected’ case.

———

Are we framing the issue of 21st century climate change and sea level rise correctly?

I don’t think Donald Rumsfeld, in his famous taxonomy of unknowns, included the category of ‘unknown knowns’.  Unknown knowns, sometimes referred to as ‘known neglecteds’, refer to known processes or effects that are neglected for some reason.

Climate science has made a massive framing error by framing future climate change as solely driven by CO2 emissions.  The known neglecteds listed below are colored blue for an expected cooling effect over the 21st century, and red for an expected warming effect.


——-

Much effort has been expended in imagining future black swan events associated with human caused climate change.  At this point, human caused climate change and its dire possible impacts are so ubiquitous in the literature and public discussion that I now regard human-caused climate change as a ‘white swan.’ The white swan is frankly a bit of a ‘rubber ducky’, but so many alarming scenarios have been tossed out there that it is pretty hard to imagine a climate surprise caused by CO2 emissions that has not already been imagined.

The black swans related to climate change are associated with natural climate variability.  There is much room for the unexpected to occur, especially for the ‘CO2 as climate control knob’ crowd.

 

 

Existing climate models do not allow exploration of all possibilities that are compatible with our knowledge of the basic way the climate system actually behaves. Some of these unexplored possibilities may turn out to be real ones.

Scientific speculation on plausible, high-impact scenarios is needed, particularly including the known neglecteds.

Is all this categorization of uncertainty merely academic, the equivalent of counting angels dancing on the head of a pin?  The level of uncertainty, and the relevant physical processes (controllable or uncontrollable), are key elements in selecting the appropriate decision-analytic framework.

Controllability of the climate (the CO2 control knob) is something that has been implicitly assumed in all of this.  Perhaps on millennial time scales climate is controlled by CO2 (but on those time scales CO2 is a feedback as well as a forcing).  On the time scale of the 21st century, anything feasible that we do to reduce CO2 emissions is unlikely to have much of an impact on the climate, even if you believe the climate model simulations (see Lomborg).
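
The controllability point can be made with one line of arithmetic, using the roughly linear relation between cumulative CO2 emissions and warming (TCRE).  A sketch with a mid-range TCRE value and a hypothetical avoided-emissions figure; both numbers are assumptions for illustration, not Lomborg’s:

```python
# Avoided warming ~= TCRE * avoided cumulative emissions.
# TCRE (transient climate response to cumulative emissions) is taken
# here as ~0.45 C per 1000 GtCO2, a mid-range assessed value.
TCRE = 0.45 / 1000.0          # C per GtCO2
avoided = 300.0               # GtCO2 avoided by 2100 (hypothetical policy)
print(f"Warming avoided by 2100: {TCRE * avoided:.2f} C")  # ~0.14 C
```

A few tenths of a degree at most by 2100, which is the scale of the mismatch the paragraph above is pointing at.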

Optimal control and cost/benefit analysis, which are used in evaluating the social cost of carbon, assume statistical uncertainty and that the climate is controllable — two seriously unsupported assumptions.

Scenario planning, adaptive management and robustness/resilience/antifragility strategies are much better suited to conditions of scenario/deep uncertainty and a climate that is uncontrollable.
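
To see how the decision-analytic framing changes the answer, here is a minimal sketch contrasting expected-value optimization (which requires a probability distribution over scenarios) with minimax regret (which needs only the scenario set).  The payoff numbers are invented for illustration and carry no claim about actual climate policy economics:

```python
import numpy as np

# Rows: candidate strategies; columns: low / mid / high ECS scenarios.
strategies = ["do little", "moderate hedging", "aggressive mitigation"]
payoff = np.array([
    [10.0,  8.0, -50.0],   # do little: great unless high ECS materializes
    [ 4.0,  4.0,  -5.0],   # moderate hedging: modest everywhere
    [-5.0, -2.0,   5.0],   # aggressive: costly unless high ECS materializes
])

# Expected value needs a PDF over scenarios -- exactly what deep
# uncertainty denies us. Assume a thin-tailed one anyway, for contrast:
probs = np.array([0.30, 0.65, 0.05])
print("Expected-value choice:", strategies[int(np.argmax(payoff @ probs))])
# -> do little

# Minimax regret: regret = (best payoff in that scenario) - (payoff);
# pick the strategy whose worst-case regret is smallest.
regret = payoff.max(axis=0) - payoff
print("Minimax-regret choice:", strategies[int(np.argmin(regret.max(axis=1)))])
# -> moderate hedging
```

With these numbers, the expected-value answer rides entirely on the assumed probabilities, while the regret criterion hedges against the high-ECS scenario without needing any; that, in miniature, is what robustness buys you under scenario/deep uncertainty.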

How did we land in this situation of such a serious science-policy mismatch?  Well, in the early days (late 1980s to early 1990s), international policy makers put the policy cart before the scientific horse, with a focus on CO2 and dangerous climate change.  This focus led climate scientists to make a serious framing error, by focusing only on CO2-driven climate change. In a drive to remain relevant to the policy process, the scientists focused on building consensus and reducing uncertainties.  They also began providing probabilities; even though these were unjustified by the scientific knowledge base, there was a perception that policy makers wanted this.  And this led to fat tails and cost-benefit analyses that are all but meaningless (no matter who they give Nobel prizes to).

The end result is oversimplification of both the science and policies, with positive feedback between the two that has created a climate alarm monster.

This Frankenstein has been created from framing errors, characterization of deep uncertainty with probabilities, and the statistical manufacture of fat tails.

“Monster creation” triggered a memory of a post I wrote in 2010, Heresy and the Creation of Monsters.  Yikes, I was feisty back then (getting mellow in my old age).

 
