Climate sensitivity: lopping off the fat tail

by Judith Curry
Interest is running high this week on the topic of climate sensitivity.

Nic Lewis’ analysis of the consequences of reduced aerosol forcing is creating a stir.  Plus, this week there is a Workshop on Climate Sensitivity in Germany (details at RealClimate); hashtag #ringberg15.
The topic I want to focus on in this post is what we can infer about the upper bound of climate sensitivity, and structural uncertainty in how we even approach this problem.
Tail risk
The economic value of CO2 mitigation depends sensitively on the possibility of extreme warming. This insight has been obtained through a focus on the fat upper tail of the climate sensitivity probability distribution.
In fact, some have argued that more uncertainty in the extremes means more urgency to tackle global warming.   My counter argument to this is provided in the post Uncertainty, risk and (in)action; see also Tall tales and fat tails.
In light of recent research on climate sensitivity, it is time to revisit these ‘fat tails.’ My previous post  Worst case scenario versus fat tails describes a way forward, in context of scenario falsification.
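To see concretely how a fat upper tail arises, here is a toy illustration (my own sketch, not drawn from any of the analyses discussed in this post): if the no-feedback (Planck) sensitivity is fixed and the total feedback factor f has symmetric Gaussian uncertainty, then ECS = λ0/(1−f) is strongly right-skewed, because the denominator can approach zero. The specific numbers (λ0 = 1.2C, f ~ N(0.65, 0.13)) are illustrative assumptions.

```python
import random, statistics

# Toy illustration: symmetric uncertainty in the feedback factor f
# produces a fat-tailed ECS distribution via ECS = lambda0 / (1 - f).
random.seed(0)
lambda0 = 1.2               # assumed no-feedback sensitivity, deg C
f_mean, f_sd = 0.65, 0.13   # assumed feedback factor distribution

samples = []
while len(samples) < 100_000:
    f = random.gauss(f_mean, f_sd)
    if f < 1.0:             # discard unphysical runaway cases
        samples.append(lambda0 / (1.0 - f))

median = statistics.median(samples)
mean = statistics.fmean(samples)
# The symmetric input yields a right-skewed ECS distribution:
# the mean sits well above the median.
print(f"median ECS ~ {median:.2f} C, mean ECS ~ {mean:.2f} C")
```

The design point is simply that the skew is structural: any estimate that places Gaussian uncertainty on feedbacks will produce a fat upper tail in sensitivity, regardless of the data used.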
History of IPCC sensitivity estimates
An interesting overview of the history of climate sensitivity estimates is provided by Euan Mearns.  Below is a summary of the IPCC assessments of equilibrium climate sensitivity (ECS):
FAR (1990):  The range of results from model studies is 1.9 to 5.2°C. Most results are close to 4.0°C but recent studies using a more detailed but not necessarily more accurate representation of cloud processes give results in the lower half of this range. Hence the model results do not justify altering the previously accepted range of 1.5 to 4.5°C.  Taking into account the model results, together with observational evidence over the last century which is suggestive of the climate sensitivity being in the lower half of the range,  a value of climate sensitivity of 2.5°C has been chosen as the best estimate.
SAR (1995):  No strong reasons have emerged to change [FAR] estimates of the climate sensitivity.
TAR (2001):  likely to be in the range of 1.5 to 4.5 °C.
AR4 (2007):   Equilibrium climate sensitivity  very likely is greater than 1.5 °C (2.7 °F) and likely to lie in the range 2 to 4.5 °C, with a most likely value of about 3 °C . A climate sensitivity higher than 4.5 °C cannot be ruled out.
AR5 (2013):  Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence), and very unlikely greater than 6°C (medium confidence).
The bottom line is that ECS estimates have remained within the range 1.5-4.5C for decades, with a brief excursion of the lower bound to 2.0C in the AR4.  FAR gave a best estimate of 2.5C; AR4 provided a best estimate of 3.0C; AR5 declined to provide a best estimate owing to disagreement between climate model and observational methods.  The other significant change in AR5 relative to AR4 is the statement ‘very unlikely greater than 6C (medium confidence)’, whereas AR4 declined to specify any kind of upper limit.
The ‘tails’
Let’s take a look at the sensitivity distributions provided by the AR4 and AR5, plus Nic Lewis’ recent analyses.
First, the equilibrium climate sensitivity, with the gray shading reflecting the 1.5-4.5C likely range of the AR5.
ECS from the AR4:

ECS from the AR5, instrumental estimates:

ECS from the AR5, climate models:
ECS from Nic Lewis, revised for new estimates of aerosol forcing [link].
And now, for the transient climate response (TCR), with shading from 1.0-2.5C (reflecting the AR5 ‘likely’ range).  Note, the AR5 states:   it is very unlikely that TCR is less than 1°C and very unlikely that TCR is greater than 3.5°C.
TCR from the AR4:
TCR from the AR5, instrumental and climate model estimates:

TCR from Nic Lewis, revised for new estimates of aerosol forcing.
I’ve tried to align the scales for ECS and TCR.  The new distributions of climate sensitivity calculated by Nic Lewis are strikingly lower than the AR4/AR5 estimates, particularly with regards to the upper tail.
Clarifying the fat tail: Modal falsification
On a 2010 thread (can’t find it now), I stated that I thought the ‘very likely’ range for equilibrium climate sensitivity was something like 0.5-10C (I can’t exactly recall the lower bound).  My rationale for this was the AR4 ECS figure (shown above).  I had no reason at that time to ‘reject’ any of the values shown in that figure.
Gregor Betz defines modal falsification as follows [BetzModalFalsification]:
Modal falsification:  It is scientifically shown that a certain statement about the future is possibly true as long as it is not shown that this statement is incompatible with our relevant background knowledge, i.e. as long as the possibility statement is not falsified.
Nic Lewis’ research arguably falsifies the high values of climate sensitivity determined from instrumental data, owing to problems with the statistical methodology and the forcing data.  IMO, Nic’s methodology for determining climate sensitivity from observations and an energy balance model represents the best current method.
With regards to climate model determinations, see this paper:  The upper end of climate model temperature projections is inconsistent with past warming, by Peter Stott, Peter Good, Gareth Jones, Nathan Gillett, Ed Hawkins, published in Environmental Research Letters (open access) [link].  This paper does not directly address the issue of climate sensitivity.  It is obvious from comparing climate model simulations with observations that most climate models are running too hot for the early years of the 21st century.
Some recent inferences have been made that reduce the upper likely bound for ECS.  James Annan has stated: ‘It’s increasingly difficult to reconcile a high climate sensitivity (say over 4C) with the observational evidence for the planetary energy balance over the industrial era.’
At the Ringberg Workshop, the title of Bjorn Stevens’ talk is: Some (not yet entirely convincing) reasons for 2.0 < ECS < 3.5C.  I suspect that these reasons are based on inferences from climate models.
JC assessment of climate sensitivity
Here is my assessment, based on current background knowledge.
Climate models are not fit for the purpose of determining TCR, owing to their inability to simulate the patterns and phasing of decadal to multi-decadal internal variability.
That leaves the energy balance climate models using historical observations, which seem well suited to determining TCR.  Nic Lewis’ analysis is arguably ‘best in class’; he has provided two recent estimates:

  • AR5 forcing: 1.05 – 1.8C
  • New (lower) aerosol forcing:  1.05 – 1.45C

Compare these values to the AR5 likely range for TCR of 1.0-2.5C.  I would place ‘medium confidence’ in Nic Lewis’ ranges of TCR (I’m not sure we’ve heard the last word on aerosol forcing, but I strongly suspect that the lower bound is less than what was used in the AR5).
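For reference, the energy balance method behind these TCR estimates reduces, in its simplest form, to TCR = F_2x × ΔT / ΔF, where ΔT and ΔF are the changes in temperature and total forcing between a base period and a final period. A minimal sketch with illustrative inputs (the ΔT and ΔF values below are my assumptions, not Nic Lewis’ actual inputs):

```python
# Sketch of an energy-balance TCR estimate.  F_2x is the standard
# forcing for doubled CO2; dT and dF are illustrative assumptions,
# not values from any published analysis.
F_2x = 3.71   # forcing for doubled CO2, W/m^2
dT = 0.75     # assumed observed warming between periods, deg C
dF = 2.0      # assumed change in total forcing, W/m^2

TCR = F_2x * dT / dF
print(f"TCR ~ {TCR:.2f} C")
```

Note how directly the result scales with the assumed forcing change: a lower aerosol forcing (hence larger ΔF) pushes the estimate down, which is exactly the effect of Lewis’ revision.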
The situation is much more complex and uncertain for ECS determinations.  The energy balance method is associated with uncertainties in ocean heat uptake, which is probably rather variable.  Failure to adequately simulate internal variability is much less relevant when climate models are run to equilibrium.  However, most of the climate models are using aerosol forcing that is too large, which is masking a sensitivity to greenhouse gases that is also too large.
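The corresponding energy-balance ECS estimate makes the role of ocean heat uptake explicit: ECS = F_2x × ΔT / (ΔF − ΔQ), where ΔQ is the change in the rate of system (mostly ocean) heat uptake. A sketch with illustrative numbers (all inputs, particularly ΔQ, are my assumptions) shows why uncertainty in that term propagates so strongly into ECS:

```python
# Energy-balance ECS: the heat-uptake term dQ sits in the denominator,
# so its uncertainty amplifies the spread in ECS.  All inputs here are
# illustrative assumptions, not published values.
F_2x = 3.71   # forcing for doubled CO2, W/m^2
dT = 0.75     # assumed warming between periods, deg C
dF = 2.0      # assumed forcing change, W/m^2

for dQ in (0.3, 0.5, 0.7):   # assumed heat-uptake change, W/m^2
    ECS = F_2x * dT / (dF - dQ)
    print(f"dQ = {dQ} W/m^2  ->  ECS ~ {ECS:.2f} C")
```

A modest range in the assumed heat uptake shifts the ECS estimate by several tenths of a degree, which is why the ECS determination is so much less constrained than the TCR determination.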
Using Nic Lewis’ method for determining uncertainty, meaningful pdfs can be determined (and hence a statistically meaningful ‘likely’ range).  Once climate models are in the mix, I have argued previously (Probabilistic estimates of climate sensitivity) that Bayesian analysis and expert judgment are not up to the task of providing limits to the range of expected climate sensitivity.  Simply put, the collection of climate model simulations does not comprise a meaningful pdf.
So what to do?  Betz’s modal falsification provides a way forward.  Look at simulations from each climate model (preferably an ensemble) and see if there are any reasons to reject that model.  Reasons would include using a highly unrealistic value of aerosol forcing, or simulations that do not agree with observations.  Figuring out how to actually do this in a meaningful way should be the main activity of climate model ensemble interpretation, IMO.
Where does this leave us for now, in terms of bounding ECS?

  • lower bound:  I would use Nic Lewis’ values for the lower bound:  1.05 (for the 5-95% range) and 1.2 (for the 17-83% range)
  • upper bound:  we need to look at climate models, for which there is no meaningful pdf; we can only look at discrete simulations. The highest model sensitivities seem to be slightly less than 5C.  Once the simulations are culled to reject models with very large aerosol forcing, I suspect that this upper bound will drop substantially (perhaps to Bjorn Stevens’ value of 3.5C)

Compare to the AR5’s ‘likely’ range of 1.5-4.5C.  I would place low/medium confidence on these numbers.  The most striking issue is that the observational estimates are almost completely outside the range of the climate model simulations.
There is one climate model that falls within the range of the observational estimates: INMCM4 (Russian).  I have not looked at this model, but on a previous thread RonC makes the following comments.
On a previous thread, I showed how one CMIP5 model produced historical temperature trends closely comparable to HADCRUT4. That same model, INMCM4, was also closest to Berkeley Earth and RSS series.
Curious about what makes this model different from the others, I consulted several comparative surveys of CMIP5 models. There appear to be 3 features of INMCM4 that differentiate it from the others.
1. INMCM4 has the lowest CO2 forcing response, at 4.1K for 4xCO2, which is 37% lower than the multi-model mean.
2. INMCM4 has by far the highest climate system inertia: deep ocean heat capacity in INMCM4 is 317 W yr m-2 K-1, 200% of the mean (which excluded INMCM4 because it was such an outlier).
3. INMCM4 exactly matches observed atmospheric H2O content in the lower troposphere (215 hPa), and is biased low above that. Most others are biased high.
So the model that most closely reproduces the temperature history has high inertia from ocean heat capacities, low forcing from CO2 and less water for feedback.
Definitely worth taking a closer look at this model, it seems genuinely different from the others.
Structural uncertainty and unknowability
So, how are we going to resolve this issue?  Are we running in circles and chasing our tails, with the same ‘official’ range of 1.5-4.5C since the 1979 Charney report?
Will the Ringberg Workshop this week move things forward?  Well, I’m not optimistic.  Last week I tweeted:
wkshp seems to miss discussion of structural uncertainty in ECS/TCR determination
Tamsin responded:
Er…how do you know that if it hasn’t happened yet…?
Well, the list of participants, the talk titles, and the suggested post-AR5 papers all omit what I regard to be the elephant in the room:  the confounding factor of natural internal variability. Lewis and Curry (2014) gave a nod to this by choosing baseline periods to be compatible in terms of the AMO index.  This partially addresses the issue, but there are other modes of multidecadal variability, and there are also longer-term modes; we have no idea to what extent a longer-term oscillation might be contributing to the overall warming since circa 1600.
There is an important paper What is the effect of unresolved internal variability on climate sensitivity estimates?, that was discussed on this previous thread Meta-uncertainty in the determination of climate sensitivity.  Punchline:
We demonstrate that a single realization of the internal variability can result in a sizable discrepancy between the best CS estimate and the truth. Specifically, the average discrepancy is 0.84 °C, with the feasible range up to several °C. The results open the possibility that recent climate sensitivity estimates from global observations and EMICs are systematically considerably lower or higher than the truth, since they are typically based on the same realization of climate variability. This possibility should be investigated in future work. We also find that estimation uncertainties increase at higher climate sensitivities, suggesting that a high CS might be difficult to detect.
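The paper’s point can be illustrated with a toy Monte Carlo (my own sketch, not the authors’ method): fix a ‘true’ TCR, add a plausible internal-variability contribution to the observed warming, and see how far a single-realization energy-balance estimate can stray. The noise amplitude (0.15C, 1 sigma) is an assumption for illustration only.

```python
import random, statistics

# Toy experiment: a single realization of internal variability biases
# an energy-balance sensitivity estimate.  "True" TCR is fixed; each
# realization adds multidecadal noise to the observed warming before
# the estimate is made.  All inputs are illustrative assumptions.
random.seed(1)
F_2x, dF = 3.71, 2.0            # 2xCO2 forcing and forcing change, W/m^2
true_TCR = 1.8
true_dT = true_TCR * dF / F_2x  # warming implied by the forcing alone

estimates = []
for _ in range(10_000):
    # assumed +/-0.15 C (1 sigma) internal-variability contribution
    # to the period-to-period temperature difference
    noisy_dT = true_dT + random.gauss(0.0, 0.15)
    estimates.append(F_2x * noisy_dT / dF)

spread = statistics.stdev(estimates)
print(f"true TCR = {true_TCR} C, estimate spread (1 sigma) ~ {spread:.2f} C")
```

Since every observational estimate is conditioned on the single realization of variability that the real climate happened to produce, this spread is a bias that averaging over published studies cannot remove.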
IMO, the most important climate sensitivity paper published post-AR5 is by Michael Ghil: A mathematical theory of climate sensitivity, or how to deal with both anthropogenic forcing and natural variability?  (discussed at CE here).  No sign of Ghil’s ideas at this Workshop.  See also this paper by Zaliapin and Ghil: Another look at climate sensitivity.
There are major structural uncertainties in climate models that contribute to problems with sensitivity, particularly related to the fast thermodynamic feedbacks (clouds, water vapor, lapse rate), a topic that I discussed on a previous thread Model structural uncertainty – are GCM’s the best tools?   It looks like some of these issues will be discussed at Ringberg.
There are also structural uncertainties in the energy balance model methods, in addition to the issue of ocean heat uptake.  Some of these are being addressed in a new paper by Nic Lewis that is under review (I suspect that this will be the topic of his talk at Ringberg).
Where does this leave us? Well, it is difficult to argue that we should have more than ‘medium’ (at best) confidence in the range of climate sensitivity estimates.
It seems that we are on the verge of some progress in addressing model structural uncertainties, but there does not seem to be a path forward by this group to address the meta-issue of natural internal variability as structural uncertainty.
JC conclusions
Our methods of inferring climate sensitivity – GCMs and energy balance models – are leading us to reject the highest values of climate sensitivity, which were determined using methods or models that have since been deemed erroneous.
In short, the ‘fat tail’ of climate sensitivity is getting skinnier and shorter.  Does this imply that ‘true’ climate sensitivity is relatively low?  No.  The absence of evidence and incompleteness of our knowledge (e.g. low/medium confidence) does not rule out unforeseen and unforeseeable extreme values of climate sensitivity.
So how are we to interpret our current understanding (and lack of understanding) of climate sensitivity in terms of policy?  Well, that will be the topic of a forthcoming post.  The bottom line is that the justification for high values of climate sensitivity (using global climate models and energy balance models) continues to diminish.
I hope that the Ringberg Workshop will be productive and interesting.  I will do a post on that once the talks are online.
