The Hansen forecasts 30 years later

by Ross McKitrick and John Christy
Note: this is a revised version that corrects the statement about CFCs and methane in Scenario B.
How accurate were the global warming forecasts in James Hansen’s 1988 congressional testimony and his subsequent JGR article? According to a laudatory article by AP’s Seth Borenstein, they “pretty much” came true, with other scientists quoted as calling their accuracy “astounding” and “incredible.” Pat Michaels and Ryan Maue in the Wall Street Journal, and Calvin Beisner in the Daily Caller, disputed this.


There are two problems with the debate as it has played out. First, using 2017 as the comparison date is misleading because mismatches between observed and assumed El Nino and volcanic events artificially pinched the observations and scenarios together at the end of the sample. What really matters is the trend over the forecast interval, and this is where the problems become visible. Second, applying a post-hoc bias correction to the forcing ignores the fact that converting GHG increases into forcing is an essential part of the modeling. If a correction were needed for the CO2 concentration forecast, that would be fair, but this aspect of the forecast turned out to be quite close to observations.
Let’s go through it all carefully, beginning with the CO2 forecasts. Hansen didn’t graph his CO2 concentration projections, but he described the algorithm behind them in his Appendix B. He followed observed CO2 levels from 1958 to 1981 and extrapolated from there. That means his forecast interval begins in 1982, not 1988, although he included observed stratospheric aerosols up to 1985.
From his extrapolation formulas we can compute that his projected 2017 CO2 concentrations were: Scenario A 410 ppm; Scenario B 403 ppm; and Scenario C 368 ppm. (The latter value is confirmed in the text of Appendix B). The Mauna Loa record for 2017 was 407 ppm, halfway between Scenarios A and B.
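For concreteness, here is a minimal sketch of the Scenario A extrapolation rule as we read Appendix B. The 1981 starting level and increment are assumptions chosen for illustration, and the compounding rule is our paraphrase of Hansen’s algorithm, not his exact code:

```python
# Sketch of a Hansen Scenario A-style CO2 extrapolation. The starting values
# and the 1.5%/yr increment growth are illustrative assumptions, not Hansen's
# exact algorithm.
def scenario_a_co2(co2_1981=340.0, increment_1981=1.5, growth=0.015,
                   start_year=1982, end_year=2017):
    """Extrapolate CO2 (ppm) by compounding the annual increment."""
    co2, increment = co2_1981, increment_1981
    series = {}
    for year in range(start_year, end_year + 1):
        increment *= 1 + growth  # annual increment grows ~1.5% per year
        co2 += increment         # add this year's increment
        series[year] = co2
    return series

print(round(scenario_a_co2()[2017]))  # ~412 ppm, near the 410 figure above
```

Scenarios B and C follow analogous rules, with a slower-growing increment in B and, as discussed below, CO2 held fixed after 2000 in C.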
Note that Scenarios A and B also differ in their treatment of non-CO2 forcing. Scenario A contains all non-CO2 trace gas effects, while Scenario B contains only CFCs and methane, both of which were overestimated. Consequently, there is no justification for a post-hoc dialling down of the CO2 levels; nor should we dial down the associated forcing, since that is part of the model computation. And to the extent the warming trend mismatch is attributed entirely to the overestimated levels of CFCs and methane, that implies they are very influential in the model.
Now note that Hansen did not include any effects due to El Nino events. In 2015 and 2016 there was a very strong El Nino that pushed global average temperatures up by about half a degree C, a change that is now receding as the oceans cool. Had Hansen included this El Nino spike in his scenarios, he would have overestimated 2017 temperatures by a wide margin in Scenarios A and B.
Hansen added an Agung-strength volcanic event to Scenarios B and C in 2015, which caused their temperatures to drop well below trend, with the effect persisting into 2017. This was not a forecast; it was just an arbitrary guess, and no such volcano occurred.
Thus, to make an apples-to-apples comparison, we should remove the 2015 volcanic cooling from Scenarios B and C and add the 2015/16 El Nino warming to all three scenarios. If we did that, there would be a large mismatch as of 2017 in both A and B.
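A minimal sketch of that adjustment arithmetic follows. The 0.5 C El Nino figure comes from the text above; the residual volcanic cooling is a hypothetical placeholder one would estimate from Hansen’s Figure 3, not a number from the paper:

```python
# Illustrative adjustment to put scenario and observed 2017 anomalies on the
# same footing. The El Nino figure is from the text above; the residual
# volcanic cooling is a hypothetical placeholder, not a number from the paper.
EL_NINO_2017 = 0.5     # approx. 2015/16 El Nino warming (from the text)
VOLCANO_2017 = -0.2    # hypothetical 2017 residual cooling from the assumed eruption

def adjusted_2017(scenario_anomaly, had_volcano):
    """Adjust a scenario's 2017 anomaly for El Nino and the invented volcano."""
    adj = scenario_anomaly + EL_NINO_2017  # add the El Nino warming Hansen omitted
    if had_volcano:                        # Scenarios B and C only
        adj -= VOLCANO_2017                # back out the volcanic cooling
    return adj
```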
The main forecast in Hansen’s paper was a trend, not a particular temperature level. To assess his forecasts properly we need to compare his predicted trends against subsequent observations. To do this we digitized the annual data from his Figure 3. We focus on the period from 1982 to 2017 which covers the entire CO2 forecast interval.
The 1982 to 2017 warming trends in Hansen’s forecasts, in degrees C per decade, were:

  • Scenario A: 0.34 +/- 0.08,
  • Scenario B: 0.29 +/- 0.06, and
  • Scenario C: 0.18 +/- 0.11.

We compared these trends against NASA’s GISTEMP series (the record from the Goddard Institute for Space Studies, or GISS) and the mean of the UAH and RSS MSU series from weather satellites for the lower troposphere:

  • GISTEMP: 0.19 +/- 0.04 C/decade
  • MSU: 0.17 +/- 0.05 C/decade.

(The confidence intervals are autocorrelation-robust using the Vogelsang-Franses method.)
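For readers who want to reproduce the basic calculation, here is a minimal sketch. The Vogelsang-Franses estimator is not part of standard statistical libraries, so the sketch substitutes Newey-West (HAC) standard errors; these are autocorrelation-robust but are not the method behind the intervals above. The input series stands in for the digitized Figure 3 data or either observational record:

```python
# Sketch of the trend calculation: OLS slope in degrees C per decade with an
# autocorrelation-robust (Newey-West/HAC) standard error. The paper's intervals
# use the Vogelsang-Franses method instead; this is an illustrative substitute.
import numpy as np
import statsmodels.api as sm

def decadal_trend(years, anomalies):
    """Return the OLS trend (C/decade) and its HAC-robust standard error."""
    years = np.asarray(years, dtype=float)
    X = sm.add_constant((years - years[0]) / 10.0)  # time measured in decades
    fit = sm.OLS(np.asarray(anomalies, dtype=float), X).fit(
        cov_type="HAC", cov_kwds={"maxlags": 3})
    return fit.params[1], fit.bse[1]
```

The slope is plain OLS, so the point estimates should match those above; the HAC interval will generally differ somewhat from the VF interval.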
So, the scenario that matches the observations most closely over the post-1980 interval is C. Hypothesis testing (using the VF method) shows that Scenarios A and B significantly over-predict the warming trend (even ignoring the El Nino and volcano effects). Emphasising the point: Scenario A overstates CO2 and other greenhouse gas growth and rejects against the observations; Scenario B slightly understates CO2 growth, overstates methane and CFCs, and zeroes out other greenhouse gas growth, yet it too significantly overstates the warming.
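As a rough illustration of the rejection (not the VF procedure itself), one can compare two trend estimates with a simple z-test, treating them as independent and reading the +/- values above as 95% half-widths; both are simplifications, since the VF test operates on the paired series:

```python
# Back-of-the-envelope z-test for H0: scenario trend equals observed trend.
# Treating the two estimates as independent and converting 95% half-widths to
# standard errors (half-width / 2) are simplifying assumptions; the paper uses
# the Vogelsang-Franses test instead.
from scipy import stats

def trend_equality_z(b1, se1, b2, se2):
    """Two-sided z-test for equality of two trend estimates."""
    z = (b1 - b2) / (se1 ** 2 + se2 ** 2) ** 0.5
    return z, 2 * stats.norm.sf(abs(z))

# Scenario A (0.34, se ~ 0.08/2) versus GISTEMP (0.19, se ~ 0.04/2):
z, p = trend_equality_z(0.34, 0.04, 0.19, 0.02)
print(round(z, 2), round(p, 4))  # z ~ 3.4, p < 0.01: consistent with rejection
```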
The trend in Scenario C does not reject against the observed data; in fact, the two are about equal. But this is the scenario that left out the rise of all greenhouse gases after 2000. The observed CO2 level reached 368 ppm in 1999 and continued rising thereafter to 407 ppm in 2017. The Scenario C CO2 level reached 368 ppm in 2000 but remained fixed thereafter. Yet this scenario ended up with the warming trend most like the real world.
How can this be? Here is one possibility. Suppose Hansen had offered a Scenario D, in which greenhouse gases continue to rise, but after the 1990s they have very little effect on the climate. That would play out similarly in his model to Scenario C, and it would match the data.
Climate modelers will object that this explanation doesn’t fit the theories about climate change. But those were the theories Hansen used, and they don’t fit the data. The bottom line is, climate science as encoded in the models is far from settled.
Ross McKitrick is a Professor of Economics at the University of Guelph.
John Christy is a Professor of Atmospheric Science at the University of Alabama in Huntsville.
Moderation note:  As with all guest posts, please keep your comments civil and relevant.