Hiatus controversy: show me the data

by Judith Curry
The scientific and political controversies surrounding the hiatus have continued to heat up. Let's take a look at ALL the global temperature data sets.

So, what is the ‘hiatus’ or ‘pause’ or ‘slowdown’, and why does it matter? Here are three criteria for the hiatus to matter:
1) the rate of warming over a particular period of at least 10 years is not statistically significantly different from zero (in the context of a nominal 0.1°C uncertainty). Note the IPCC AR5 cited: “As one example, the rate of warming over the past 15 years (1998–2012; 0.05 [–0.05 to +0.15] °C per decade) is smaller than the rate calculated since 1951 (1951–2012; 0.12 [0.08 to 0.14] °C per decade)”
2) the rate of warming over a particular period of at least 10 years is less than the warming projected by the IPCC AR5: “The global mean surface temperature change for the period 2016–2035 relative to 1986–2005 will likely be in the range of 0.3°C to 0.7°C (medium confidence).” Since the midpoints of those two periods are 30 years apart, this translates to 0.1°C to 0.233°C/decade. (Note the AR4 cited a warming rate of 0.2°C per decade in the early 21st century.)
3) Periods meeting the criteria of either 1) or 2) are particularly significant if they exceed 17 years, which is the threshold for very low probability of natural variability dominating over the greenhouse warming trend.
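Criteria 1) and 2) reduce to a simple calculation: fit a least-squares trend to the anomaly series over the chosen window, then compare it to zero (given its uncertainty) and to the AR5 projected lower bound. A minimal sketch of that test is below; the anomaly values are synthetic, chosen only to illustrate the mechanics, and the significance test uses a simple normal approximation with independent residuals (real analyses account for autocorrelation).

```python
# Sketch (my own illustration, not from the post): testing the two hiatus
# criteria on a series of annual global-mean temperature anomalies.
import math

def decadal_trend(years, anoms):
    """Ordinary least-squares trend in deg C per decade, plus the standard
    error of the slope (assuming independent residuals)."""
    n = len(years)
    xbar = sum(years) / n
    ybar = sum(anoms) / n
    sxx = sum((x - xbar) ** 2 for x in years)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(years, anoms))
    slope = sxy / sxx                                  # deg C per year
    resid = [y - ybar - slope * (x - xbar) for x, y in zip(years, anoms)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return slope * 10, se * 10                         # convert to per decade

years = list(range(1998, 2013))                        # 15-year window, as in AR5
anoms = [0.40, 0.25, 0.28, 0.35, 0.41, 0.42, 0.39, 0.44,
         0.40, 0.41, 0.35, 0.44, 0.47, 0.41, 0.45]     # synthetic values
trend, se = decadal_trend(years, anoms)

# Criterion 1: trend indistinguishable from zero (~90% interval, z = 1.645)
crit1 = abs(trend) < 1.645 * se
# Criterion 2: trend below the AR5 projected lower bound of 0.1 C/decade
crit2 = trend < 0.1
print(f"trend = {trend:.3f} C/decade, criterion 1: {crit1}, criterion 2: {crit2}")
```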
Conventional surface temperature data sets
A comparison of HadCRU, NASA GISS, NOAA/NCDC, Cowtan and Way, and Berkeley Earth global temperatures through 2014 was provided to me by Steve Mosher for my April House Testimony:

The various data sets show pretty close agreement on the interannual variations and the magnitude of the trends over this period. While the trends for each data set vary slightly, all of them have decadal trends sufficiently small to satisfy hiatus criteria 1) and 2).
Note: this figure was PRIOR to the new NOAA temperature dataset of Karl et al. A new document by Berkeley Earth [link] clarifies the changes in the new NOAA data set relative to HadCRU, NASA GISS, and Berkeley Earth:

The NOAA curve (red) is lower than the others in the earlier part of this record, and warmer in the most recent years of this record. Specifically with regards to hiatus significance, the SOM in the Karl et al. paper cites global trends of 0.106C/decade for the period 1998-2014, and 0.116C/decade for the period 2000-2014.
Trends exceeding 0.1°C/decade fail to pass criteria 1) and 2) above; hence the new data set does not satisfy either criterion for a hiatus. In fact, the criteria are only barely missed, with the trend greater than zero at the 90% confidence level.
JC note: The Karl paper was inappropriately criticized for ‘cherry picking’ periods (and hiatus proponents are criticized for the same). Given criterion 3) above, cherry picking is a non-issue – ANY period approaching 17 years is fair game for challenging the climate models.
Regarding the comparison among the different data sets, the Berkeley Earth report states:
The differences we see between the various approaches comes down to two factors: Differences in datasets and differences in methods. While all four records fall within the uncertainty bands, it appears as if NCDC does have an excursion outside this region; and if we look towards years end, it appears that their record shows more warmth than others.
The source of the difference between NCDC and the other data sets lies in its analysis of the ocean data, which will be the subject of a follow-on post.
Reanalysis data
Reanalysis was discussed in a previous post, reanalyses.org.
Reanalysis is a climate or weather model simulation of the past that includes data assimilation of historical observations. The rationale for climate reanalysis is given by reanalyses.org:
Reanalysis is a scientific method for developing a comprehensive record of how weather and climate are changing over time. In it, observations and a numerical model that simulates one or more aspects of the Earth system are combined objectively to generate a synthesized estimate of the state of the system. A reanalysis typically extends over several decades or longer, and covers the entire globe from the Earth’s surface to well above the stratosphere.
Data assimilation merges observations & model predictions to provide a superior state estimate. It provides a dynamically-consistent estimate of the state of the system using the best blend of past, current, and perhaps future observations.
[Using a weather prediction model] The observations are used to correct errors in the short forecast from the previous analysis time. Every 12 hours ECMWF assimilates 7–9,000,000 observations to correct the 80,000,000 variables that define the model’s virtual atmosphere. This is done by a careful 4-dimensional interpolation in space and time of the available observations; this operation takes as much computer power as the 10-day forecast.
Operational four dimensional data assimilation continually changes as methods and assimilating models improve, creating huge discontinuities in the implied climate record. Reanalysis is the retrospective analysis onto global grids using a multivariate physically consistent approach with a constant analysis system. 
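The core idea of data assimilation — blending a model's background forecast with observations, each weighted by its error — can be illustrated in one variable. The sketch below is my own toy illustration, not ECMWF's method: operational 4D-Var does this jointly, in space and time, for tens of millions of variables, but the scalar version shows the principle of the weighted blend.

```python
# Toy sketch of the assimilation step described above: combine a model
# background value with an observation, weighting each by the inverse of
# its error variance (a one-variable Kalman-style update).

def analysis_update(background, obs, var_b, var_o):
    """Blend background and observation into an analysis.
    The gain k grows when the background is less trusted than the obs."""
    k = var_b / (var_b + var_o)            # gain in [0, 1]
    analysis = background + k * (obs - background)
    var_a = (1 - k) * var_b                # analysis error variance shrinks
    return analysis, var_a

# Example: model background says 15.0 C (error variance 0.5); an observation
# reads 15.8 C (error variance 0.25). The analysis lands nearer the
# better-trusted observation, with a smaller error variance than either input.
a, v = analysis_update(15.0, 15.8, 0.5, 0.25)
print(f"analysis = {a:.3f} C, error variance = {v:.3f}")
```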
In the 1990s, we were cautioned not to use reanalyses for long-term trends, owing to discontinuities in the satellite observing system. However, the situation has improved in recent decades, and looking at trends during the most recent 20 years is reasonable.
Here is the figure on the global average surface temperature anomalies from the ECMWF reanalyses (ERAi) published by ECMWF earlier this year [link]
Note: the lighter, broader bars denote averages that exclude the polar regions, whereas the narrower, darker bars are global and include the polar regions.
ECMWF’s analysis generally agrees with the year-to-year variability seen in the conventional surface temperature datasets. Note that in their analysis, 2014 was not the warmest year, and the amplitude of 1998 is somewhat smaller. I have not done any formal trend analysis on this data set, but eyeballing the graph, it appears that since 1998 the trend would exceed 0.1°C/decade, although the trend since 2002 or 2003 appears to be less than 0.1°C/decade.
The real significance of the ECMWF analysis is their global analysis that includes the polar regions. Their method is vastly preferable to the kriging and extrapolation used by NASA GISS and Cowtan and Way. Interestingly, including the polar regions does not always produce a warmer global average temperature; notably in 2013 and 2014 it did not, largely owing to the cooling in Antarctica.
US/NOAA also produces a reanalysis, the CFSR. I have not seen a figure from NOAA plotting this data, but Ryan Maue of WeatherBell provides this plot (I’ve combined two of the plots into one) [link]:

The main features from the conventional temperature analyses are barely recognizable in the CFSR – 1998 is barely a blip and 2014 is nothing close to a warmest year.
I am not sure what to make of the differences between ECMWF and CFSR. ECMWF has the most comprehensive and best data assimilation system in the world, so I am inclined to pay serious attention to their global surface temperature analysis. In any event, I would like to see much more attention paid to interpreting the reanalysis products in terms of recent global temperature trends.
Atmospheric temperatures
The strongest evidence for the hiatus comes from the satellite (microwave) observations of bulk atmospheric temperature, pioneered by Christy and Spencer. Analyses of these data have shown a statistically significant hiatus for a period as long as 21 years.
The latest figure from Roy Spencer was given in his recent CATO talk:
Bob Tisdale has produced an interesting diagram comparing UAH, RSS and NASA LOTI:

The largest discrepancies among the three datasets are in 1998 and 2014: in 1998 LOTI is lower than the other two, and in 2014 RSS is lower than the other two. In any event, all three data sets qualify as showing a warming hiatus since 1998.
At the CATO event, John Nielsen-Gammon showed a very interesting figure that compares UAH and RSS with ERAi (the ECMWF reanalysis):

Agreement is very close, which adds to the credibility of the ECMWF reanalysis. EXCEPT for 1998, where ECMWF is substantially lower. J N-G cites the following trends (presumably since 1997): UAH: -0.01 °C/decade, RSS: -0.03 °C/decade, ERAi: 0.08 °C/decade. All three are still in hiatus territory, but the ERAi trend is much larger than the others owing to the lower value in 1998.
The low value in 1998 also appeared in the ECMWF reanalysis surface temperatures. Note: ECMWF does not directly assimilate either the UAH or RSS temperatures, but rather assimilates the microwave radiances from the satellites. Note also the relatively low value for 1998 from the NASA LOTI.
Some work clearly needs to be done to sort out the differences among the bulk atmospheric temperatures determined directly from satellites (using the 3 different methods) as well as the reanalyses.
Diverging surface thermometer and satellite temperatures
Euan Mearns has an interesting post Diverging surface thermometer and satellite temperature records.
This post is already too long so I will just point you to the site. His concluding remarks:
Satellite and surface thermometer data agree over the oceans. They used to agree better over land until HadCRUT4 supplanted HadCRUT3, ending the pause and causing land surface thermometers to diverge from the satellite data sets.
JC reflections
The uncertainties in the various global temperature data sets are substantial relative to defining the existence (or not) of a warming hiatus and in assessing whether the observed trends are significantly lower than the model projections. I have stated this several times before: I think the error bars/uncertainties on these data sets are too low, particularly given the magnitude of the adjustments that are made.
With regards to 2015, the new Berkeley Earth report cites an 85% probability of it being the warmest year, owing to the very large ocean warming, largely in the Pacific. However, Roy Spencer’s analysis of UAH shows no sign of 2015 being a warmest year.
The Berkeley Earth report concluded with the following statement:
2015 looks like it is shaping up to be an interesting year both from perspective of “records” and from the perspective of understanding how different data and different methods can result in slightly different answers. And it’s most interesting because it may lead people to understand that interpolation or infilling can lead to both warmer records and cooler records depending on the structure of warming across the globe. 
Here are the biggest uncertainties that I see:

  • sorting out what was going on in 1998, which was a year of discrepancy among the satellite atmospheric temperature data sets and the ECMWF reanalysis
  • interpretation of what is going on in the polar regions, and I think ECMWF has the best approach on that one
  • sorting out the sea surface temperature issues (note this will be the topic of a post next week)

The bottom line with regards to the hiatus is that all of the data sets except for the new NOAA/NCDC data set show a hiatus (with NASA LOTI being the other data set that comes closest to not showing a hiatus).
The real issue of importance is comparing the climate models with recent observations. Here, even the latest NOAA/NCDC analysis still places the observations at the bottom envelope of the climate model simulations.
So it is premature to declare the hiatus dead.