How long is the pause?

by Judith Curry
UPDATE:  comments on McKitrick’s paper
With 39 explanations and counting, and some climate scientists now arguing that it might last yet another decade, the IPCC has sidelined itself in irrelevance until it has something serious to say about the pause and has reflected on whether its alarmism is justified, given its reliance on computer models that predicted temperature rises that have not occurred. – Rupert Darwall

The statement by Rupert Darwall concisely states what is at stake with regard to the ‘pause.’  This seriously needs to be sorted out.  Here are two recent papers that contribute to setting us on a path to understanding the pause.
HAC-Robust Measurement of the Duration of a Trendless Subsample in a Global Climate Time Series
Ross McKitrick
Abstract. The IPCC has drawn attention to an apparent leveling-off of globally-averaged temperatures over the past 15 years or so. Measuring the duration of the hiatus has implications for determining if the underlying trend has changed, and for evaluating climate models. Here, I propose a method for estimating the duration of the hiatus that is robust to unknown forms of heteroskedasticity and autocorrelation (HAC) in the temperature series and to cherry-picking of endpoints. For the specific case of global average temperatures I also add the requirement of spatial consistency between hemispheres. The method makes use of the Vogelsang-Franses (2005) HAC-robust trend variance estimator which is valid as long as the underlying series is trend stationary, which is the case for the data used herein. Application of the method shows that there is now a trendless interval of 19 years duration at the end of the HadCRUT4 surface temperature series, and of 16 – 26 years in the lower troposphere. Use of a simple AR1 trend model suggests a shorter hiatus of 14 – 20 years but is likely unreliable.
McKitrick, R. (2014) HAC-Robust Measurement of the Duration of a Trendless Subsample in a Global Climate Time Series. Open Journal of Statistics, 4, 527-535. doi: 10.4236/ojs.2014.47050. [link] to full manuscript.
JC comment:  I find this paper to be very interesting.  I can’t personally evaluate the methods, although I understand the importance of the heteroskedasticity and autocorrelation issues.  The big issue with the length of the pause is comparison with climate model predictions; I would like to see the climate model simulations analyzed in the same way.  I would also like to see the HadCRUT4 results compared with Cowtan and Way and Berkeley Earth.  I also seem to recall reading something about UAH and RSS coming closer together; from the perspective of the pause, it seems important to sort this out.
UPDATE:  The blog Musings on Paleoecology has a post on McKitrick’s paper, Recipe for a hiatus, which critiques McKitrick’s method.  McKitrick posted a comment:
Hello Richard
Thank you for your interest in my paper. Let me make a couple of observations.
McKitrick uses a regression technique that is supposed to be robust to heteroscedasticity (unequal variance) and autocorrelation to find the trend in the temperature time series.
I use OLS to find the trend. The HAC method is used to compute the robust confidence intervals. I can’t tell if by your phrase “supposed to be” you are dubious about the robustness of the VF method but if you look at the article cited (V&F 2005), it contains all the power curves, null rejection rates and size estimates you are seeking.
What you are referring to in this post is a null distribution around Jmax. In 100 simulations assuming AR(2) noise around a positive trend, you show that a 1995 or earlier start date occurs 10% of the time. It would be helpful if you also verified in each of those simulations that all the conditions of the definition were met (that the trend CI includes zero across the entire time subsample, and that this holds in both the NH and SH). Assuming that those things are the case, and you were to get roughly the same answer in 1,000 or 10,000 simulations, what you are saying is that under the assumptions of your null, a pause of 19 years is now in the lower 10% tail of the null distribution. And by the looks of it in your figure, in another 3 years it will be in the lower 5% tail. That’s an interesting additional bit of information on the topic and I encourage you to publish it, especially if you also add in the UAH and RSS computations as well.
However, the problem with this kind of estimation (and what I expect a stats journal would point out) is that if what we really want to know is whether Jmax is significantly different from zero, you need a null that assumes it is zero and works out the corresponding distribution. And the difficulty with that is the well-known ‘Davies problem’, in which the parameter to be estimated is not identified under the null. There are simulation methods for handling this problem, which Tim Vogelsang and I briefly review in our new paper comparing models and observations in the tropical troposphere, again using HAC-robust methods (http://onlinelibrary.wiley.com/doi/10.1002/env.2294/abstract). We also outline a simple bootstrap method that gets around the simulation problem, but you would need to verify whether you need to use a block bootstrap, since you have assumed an AR(2) error structure. You might get a wider or narrower CI around Jmax than the one you drew above; it’s hard to tell, especially since it will likely be a non-standard distribution.
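To unpack the block-bootstrap remark above: a block bootstrap resamples residuals in contiguous chunks so that autocorrelation within blocks survives the resampling. The sketch below is a generic moving-block bootstrap for a trend confidence interval, assuming nothing about the specific procedure in McKitrick and Vogelsang’s paper; the block length, replication count, and confidence level are illustrative choices.

```python
# Generic moving-block bootstrap for an OLS trend confidence interval
# (illustrative sketch; block length and replication count are arbitrary).
import numpy as np

def block_bootstrap_trend_ci(y, block=8, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    t = np.arange(n)
    # OLS trend and residuals
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (intercept + slope * t)
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        # resample residuals in contiguous blocks to keep autocorrelation
        starts = rng.integers(0, n - block + 1, size=n // block + 1)
        boot_resid = np.concatenate([resid[s:s + block] for s in starts])[:n]
        y_b = intercept + slope * t + boot_resid
        slopes[b] = np.polyfit(t, y_b, 1)[0]
    lo, hi = np.quantile(slopes, [alpha / 2, 1 - alpha / 2])
    return slope, (lo, hi)
```

If the resulting interval for the trend contains zero, the subsample is “trendless” in the sense used throughout this exchange; choosing the block length longer than the error memory (e.g., an AR(2) structure) is what makes the interval honest under autocorrelation.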
 
Return periods of global climate fluctuations and the pause
Sean Lovejoy
Abstract.  An approach complementary to General Circulation Models (GCMs), using the anthropogenic CO2 radiative forcing as a linear surrogate for all anthropogenic forcings [Lovejoy, 2014], was recently developed for quantifying human impacts. Using preindustrial multiproxy series and scaling arguments, the probabilities of natural fluctuations at time lags up to 125 years were determined. The hypothesis that the industrial epoch warming was a giant natural fluctuation was rejected with 99.9% confidence. In this paper, this method is extended to the determination of event return times. Over the period 1880–2013, the largest 32 year event is expected to be 0.47 K, effectively explaining the postwar cooling (amplitude 0.42–0.47 K). Similarly, the “pause” since 1998 (0.28–0.37 K) has a return period of 20–50 years (not so unusual). It is nearly cancelled by the pre-pause warming event (1992–1998, return period 30–40 years); the pause is no more than natural variability.
Published in Geophysical Research Letters [link] to full manuscript.
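For intuition about what a “return period” means here, the following is a back-of-envelope caricature, emphatically not Lovejoy’s actual calculation (which is built on scaling statistics of preindustrial multiproxy data and non-Gaussian tails): assume natural fluctuations over a lag of dt years are Gaussian with a scaling standard deviation sigma(dt) = sigma1 * dt**H. The values of sigma1 and H below are made-up placeholders, not estimates from the paper.

```python
# Back-of-envelope return-period caricature (NOT Lovejoy's method):
# Gaussian fluctuations with a scaling standard deviation.
from math import erf, sqrt

def return_period_years(amplitude_K, lag_years, sigma1=0.08, H=0.4):
    """Rough return period of a natural fluctuation of the given size.

    sigma1 (K at 1-year lag) and H (scaling exponent) are illustrative
    placeholders, not values estimated in Lovejoy's paper.
    """
    sigma = sigma1 * lag_years ** H            # scaling std dev at this lag
    # one-sided probability of exceeding the amplitude under a Gaussian
    p_exceed = 0.5 * (1.0 - erf(amplitude_K / (sigma * sqrt(2.0))))
    # crude: treat non-overlapping windows of length lag_years as independent
    return lag_years / p_exceed
```

The qualitative point survives the crudeness: a larger fluctuation at the same lag has a longer return period, and the question Lovejoy asks is whether the pause’s amplitude sits comfortably within the range expected from natural variability alone (his answer: a 20–50 year return period, so yes).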
The conclusion states:
“Unless other approaches are explored, the AR6 may simply reiterate the AR5’s “extremely likely” assessment (and possibly even the range 1.5–4.5 K). We may still be battling the climate skeptic arguments that the models are untrustworthy and that the variability is mostly natural in origin. To be fully convincing, GCM-free approaches are needed: we must quantify the natural variability and reject the hypothesis that the warming is no more than a giant century scale fluctuation.”
JC comment:  I like Lovejoy’s general approach, but convincingly rejecting a giant centennial-scale fluctuation requires more robust paleo proxy reconstructions.  Lovejoy identifies a magnitude of the natural fluctuations of ~0.4 °C, which is the largest such estimate I’ve seen.
 
JC reflections
The climate community is in a big rut when it comes to climate change attribution – as I’ve argued in previous threads, climate models are not fit for the purpose of climate change attribution on decadal to century timescales.  Alternative methods are needed, and the two papers discussed here are steps in the right direction.
We will not be successful at sorting out attribution on these timescales until we have more robust paleo proxy data.  The paleo proxy community also seems to be in a rut, with continued reliance on tree rings and other proxies that have serious calibration issues.
The key challenge is this:  convincing attribution of ‘more than half’ of the recent warming to humans requires understanding natural variability and rejecting it as the predominant explanation both for the overall century-scale warming and for the warming in the latter half of the 20th century.  Global climate models and tree-ring-based proxy reconstructions are not fit for this purpose.
 

 Filed under: Attribution, Data and observations
