Two more degrees by 2100!

by Vaughan Pratt
An alternative perspective on 3 degrees C?

This post was originally intended as a short comment questioning certain aspects of the methodology in JC’s post of December 23, “3 degrees C?”. But every methodology is bound to have shortcomings, raising the possibility that Judith’s methodology might nevertheless be the best possible, those shortcomings notwithstanding. I was finding my arguments for a better methodology getting too long for a mere comment, whence this post. (But if actual code is more to your fancy than long-winded natural-language explanations, Figures 1 and 2a can be plotted with only 31 MATLAB commands.)
Judith’s starting point is “It is far simpler to bypass the attribution issues of 20th century warming, and start with an early 21st century baseline period — I suggest 2000-2014, between the two large El Nino events.” The tacit premise here would appear to be that those “attribution issues of 20th century warming” are harder to analyze than their 21st century counterparts.
The main basis for this premise seems to be the rate of climb of atmospheric CO2 this century. This is clearly much higher than in the 20th century and therefore should improve the signal-to-noise ratio when the signal is understood as the influence of CO2 and the noise consists of those pesky “attribution issues”. Having used this SNR argument myself in this forum a few years ago, I can appreciate its logic.
Judith also claimed that “The public looks at the 3 C number and thinks it is 3 C more warming from NOW, not since the late 19th century.  Warming from NOW is what people care about.” Having seen no evidence either for this or its contrary, I propose clarifying any such forecast by simply prepending “more” to “degrees” (as in my title) and following Judith’s suggestion to subtract 1, or something more or less equivalent.
Proposal
So what would be an “obviously alternative” methodology? Well, the most extreme alternative I can think of to 15 years of data would be to take the whole 168 years of global annual HadCRUT4 to 2017.
The data for 1850-1900 are certainly sparser than the data that follow. That alone, however, does not tell us how much the sparseness compromises the final analysis. By including that data instead of dismissing it out of hand, we have a better chance of gauging the extent of any such compromise.
Besides increasing by an order of magnitude the duration of the data constraining the priors, another modification we can make is to the target. Instead of taking the goal to be estimating climate for 2100, perhaps plus or minus a few years, I suggest estimating an average, suitably weighted, over the 75 years 2063-2137.
This widening of the window has the effect of trading off precision in time for precision in temperature, as a sort of climate counterpart to the uncertainty principle in quantum mechanics. More generally this is a tradeoff universal to the statistics of time series: the variance of the estimate of the mean tends to be inversely proportional to the sample length.
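As a minimal illustration of that statistical point (my own sketch, using white noise rather than real temperature data, so the constant of proportionality is not meant to be climate-realistic):

    % Variance of the sample mean shrinks roughly as 1/N.
    rng(1);                                   % for reproducibility
    for N = [15 75]
        m = mean(randn(N, 1e4));              % 10,000 sample means of length N
        fprintf('N = %2d: var of mean = %.4f, 1/N = %.4f\n', N, var(m), 1/N);
    end

For an autocorrelated series such as HadCRUT4 the decline with N is slower than for white noise, but the direction of the tradeoff is the same.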
This wide a window has the further benefit of averaging out much of the bothersome Atlantic Multidecadal Oscillation (AMO). And its considerable width also averages out all the faster periodic and quasiperiodic contributors to global land-sea surface temperature such as ENSO, the 11-year solar cycle, the 22-year magnetic Hale cycle, the ongoing pulses from typical volcanoes, etc.
But of what use is a prediction for 2063-2137 if we can’t use it to predict, say, the extent of sea ice in the year 2100? Well, if we can show at least that the average over that or any other period is highly likely to lie within a certain range, it becomes reasonable to infer that roughly half the years in that period will be cooler than that average and half warmer. So even though we can’t say which years those will be, we can at least expect some colder years and some warmer years relative to the average over that period. Those warmer years would then be the ones of greatest concern.
A 75-year moving average of HadCRUT4 would be the straightforward thing to do. Instead I propose applying two moving averages consecutively (the order is immaterial), of 11 and 65 years respectively, and then centering. This is numerically equivalent to a wavelet transform that convolves HadCRUT4 with a symmetric trapezoidal wavelet 75 years wide at the bottom and 55 years wide at the top. The description in terms of the composition of two moving averages makes it clearer that this particular wavelet targets the AMO and the solar cycle for near-complete removal. After much experimenting with alternative purpose-designed convolution kernels as wavelets, I settled on this one as offering a satisfactory tradeoff between simplicity of description, effectiveness of overall noise suppression, transparency of purpose, and width: a finite impulse response filter much wider than 75 years doesn't leave much signal when there are only 168 years of data. Call climate thus filtered centennial climate.
The point of centering is to align plots vertically, without which they may find themselves uselessly far apart. The centering function we use is c(d) = d – mean(d). This function merely subtracts the mean of the data d from d itself in order to make mean(c(d)) = 0. Hence c(c(d)) = c(d) (c is idempotent).
Lastly I propose CO2 itself, at 1.85 °C per doubling, as a proxy for HadCRUT4’s immediate transient climate response to all anthropogenic radiative forcings (ARFs) since 1850. The CO2 record is reconstructed from ice cores at the Law Dome site in the Australian Antarctic Territory up to 1960, and taken from the more direct measurements begun by Charles Keeling at Mauna Loa thereafter, giving the formula ARF = 1.85*log₂(CO2) for all anthropogenic radiative forcing. The proof is in the pudding: it seems to work.
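For readers who prefer code, here is a minimal MATLAB sketch of the centennial filter, the centering function, and the ARF proxy just described. It is deliberately not the curry.m script linked below; hadcrut and co2 stand for annual HadCRUT4 anomalies and annual CO2 in ppm for 1850-2017, which you would have to load yourself.

    % Centennial filter: an 11-year moving average composed with a 65-year one,
    % i.e. convolution with a trapezoid 75 years wide at the base, 55 at the top.
    % (movmean(...,'Endpoints','discard') would keep only years with a full
    % window; the default simply shrinks the window at the ends.)
    cent = @(d) movmean(movmean(d, 11), 65);

    % Centering: subtract the mean, so that mean(c(d)) == 0 and c(c(d)) == c(d).
    c = @(d) d - mean(d);

    % ARF proxy: 1.85 degrees C per doubling of CO2.
    arf = 1.85 * log2(co2);

    % Figure 1, sketched: centennial-filtered, centered HadCRUT4 vs ARF.
    yr = (1850:2017)';
    plot(yr, c(cent(hadcrut)), 'b', yr, c(cent(arf)), 'r');
    legend('centennial HadCRUT4', '1.85*log_2(CO_2)');
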
Results
Applying our centennial filter to HadCRUT4 yields the blue curve in Figure 1, while applying it to ARF (anthropogenic radiative forcing as estimated by our proxy) yields the red curve.

The two plots ostensibly covering the 30-year period 1951-1980 actually use data from the 104-year period 1914-2017; e.g. the datapoint at 1980 is a weighted average of data for the 75 years 1943-2017 while that at 1951 similarly averages 1914-1988. In this way all the data from 1850 to 2017 is used.
During 1951-1980 and 1895-1915 the two curves are essentially parallel, justifying the value 1.85 for both recent and early transient climate response (TCR). But what of the relatively poor fit during 1915-1950?
Explaining early 20th century
We could explain the departure from parallel during 1915-1950 simply as an underestimate of TCR. However the distribution of CO2 absorption lines suggests that TCR should remain fairly constant over a wide range of CO2 levels. An explanation accommodating that point might be that the Sun was warming during the first half of the century.
To see if that makes sense we could plot the residual of the above figure against solar irradiance. While there used to be several reconstructions of total solar irradiance prior to satellite-based measurements, I’m only aware of two these days, due respectively to Greg Kopp (a frequent collaborator with Judith Lean) and Leif Svalgaard. Both are based on several centuries of sunspot data collected since Galileo started recording them, along with other proxies. The following comparison uses Kopp’s reconstruction.

 
It would appear that the departure from parallel in the middle of Figure 1 can be attributed almost entirely to solar forcing SF, defined as centennial solar sensitivity times absorbed solar irradiance (ASI), the portion of total solar irradiance (TSI) received at top of atmosphere (TOA) that the Earth actually absorbs. The albedo (taken here to be 0.3) is the fraction of TSI reflected back to space as shortwave radiation; the remaining 70% is absorbed by Earth. That absorbed flux is then averaged over Earth’s surface, which at 4πr² is four times the cross section πr² intercepting the solar beam at TOA, whence the division by 4. That is, ASI = (1 – Albedo)*TSI/4. Lastly ASI (in W/m2) is converted to solar forcing SF (in °C) by multiplying by the centennial solar sensitivity CSS (1.35 °C per W/m2 as estimated with Kopp’s reconstruction).
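In MATLAB terms the conversion from TSI to solar forcing is just the following (again a sketch rather than the actual Figure 2a code; tsi is a hypothetical annual TSI series in W/m2 from Kopp's reconstruction, and cent, c, arf, yr and hadcrut are as in the earlier sketch):

    albedo = 0.3;                      % fraction of TSI reflected back to space
    css    = 1.35;                     % centennial solar sensitivity, degC per W/m2
    asi = (1 - albedo) .* tsi / 4;     % absorbed solar irradiance, W/m2
    sf  = css .* asi;                  % solar forcing, degC

    % Figure 2a, sketched: residual of Figure 1 (blue) against filtered SF (red).
    resid = c(cent(hadcrut)) - c(cent(arf));
    plot(yr, resid, 'b', yr, c(cent(sf)), 'r');
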
It is almost impossible to evaluate the goodness of this fit by looking just at Figure 1 and the red curve in Figure 2a. The residual (blue curve in 2a) needs to be plotted, and then juxtaposed with the red curve.
Any fit this good implies a high likelihood of four things.

  1. The figure of 1.85 for TCR holds not only on the right and left but in the middle as well.
  2. CO2 is a good proxy for all centennial anthropogenic radiative forcing including aerosols.
  3. The filter removes essentially everything in HadCRUT4 except the contributions of ARF and solar irradiance.
  4. The peak-to-peak influence on global mean surface temperature (GMST) of the evident 130-year oscillation in TSI is 0.07*5/3 ≈ 0.12 °C. (The centennial filter attenuates the 130-year oscillation to about 3/5 of its amplitude, compensated for here by multiplying by 5/3 to estimate the actual amplitude; see the sketch after this list.) Not only is the Sun not a big deal for climate, that 130-year oscillation makes its influence predictable several decades into the future.
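A quick sanity check on that 3/5 attenuation factor (again my own sketch, not part of curry.m) is to run a unit-amplitude 130-year sinusoid through the centennial filter and read off its amplitude away from the ends:

    t = (1:2000)';                       % long series to avoid end effects
    s = sin(2*pi*t/130);                 % unit-amplitude 130-year oscillation
    f = movmean(movmean(s, 11), 65);     % the centennial filter again
    max(abs(f(500:1500)))                % roughly 0.6, i.e. about 3/5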

As a check on Kopp’s reconstruction we can carry out the same comparison based on Leif Svalgaard’s reconstruction, leaving TCR and the residual completely unchanged.

On the one hand Svalgaard’s reconstruction appears to have assigned weights to sunspots only 70% of Kopp’s, requiring a significantly larger solar sensitivity (1.95) to bring it into agreement with the residual. On the other hand the standard deviation of the residual for Figure 2b (GMST – ARF – SF) is 2.3 mK while that for 2a is 3.7 mK, so the Svalgaard-based fit is actually the tighter of the two.
Both fits are achieved with TCR fixed at 1.85. We were able to find a tiny improvement by using 1.84 for one and 1.86 for the other, but this reduced the standard deviations of the residuals for Figures 2a and 2b by only microkelvins, demonstrating the robustness of 1.85 °C per doubling of CO2 as the coefficient of the ARF proxy.
The MATLAB script producing figures 1 and 2a,b from data sourced solely from the web at every run is in the file curry.m at http://clim8.stanford.edu/MATLAB/ClimEtc/.
I would be very interested in any software providing comparably transparent and compelling evidence for a substantially different TCR from 1.85, based on the whole of 1850-2017, and independent of any estimates of AMO and other faster-moving “attribution issues”.
Projection to 2063-2137
Regarding “Is RCP8.5 an impossible scenario?”, I prefer to think of it as a highly unlikely scenario. Not because Big Oil is on the verge of exhausting its proven reserves, however, but because of the strange compound annual growth rate (CAGR) of its CO2 concentrations, computed in MATLAB as diff(rcp)./rcp(1:end-1)*100.
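For concreteness, if rcp were a vector of the RCP8.5 annual CO2 concentrations in ppm for, say, 2000-2100 (loaded from the published scenario tables; the variable name and year range here are mine), the plotted quantity would be:

    cagr = diff(rcp) ./ rcp(1:end-1) * 100;   % year-over-year growth, percent
    plot(2001:2100, cagr);
    xlabel('year'); ylabel('CAGR of RCP8.5 CO_2 (%)');
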

 
If that had been a stock market forecast one would suspect insider trading: something is going to happen around 2065 that will cause an abrupt reversal of climbing CAGR when it hits 1.2%, but the lips of the RCP8.5 community are sealed as to what it will be. Or perhaps 2060 is when their in-house psychologists are predicting a popular revolution against Big Oil.
Well, whatever. RCP8.5 is just too implausible to be believed.
Is any projection of rising CO2 plausible? Let me make an argument for the following projection.
Define anthropogenic CO2, ACO2, as the excess of atmospheric CO2 above 280 ppm. The following graph plots log₂(ACO2) since 1970. We can think of log₂(ACO2) as the number of doublings since ACO2 was 1 ppm. However the ±5 ppm variability in CO2 over its preindustrial thousand-year history makes ACO2 = 1 ppm a rather virtual notion.

log₂(ACO2) was pretty straight during the past century, and has gotten even straighter this century. Its slope corresponds to a compound annual growth rate of ACO2 of just over 2%.
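Both the straightness and the growth rate are easy to check (a sketch, with co2 and yr the annual values used in the earlier sketches, restricted here to 1970 onward):

    k = yr >= 1970;
    p = polyfit(yr(k), log2(co2(k) - 280), 1);   % straight-line fit to log2(ACO2)
    doubling_time = 1/p(1)                       % roughly 34 years
    cagr = (2^p(1) - 1) * 100                    % roughly 2 percent per year
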
What could explain its increasing straightness?
One explanation might be that 2% is what the fossil fuel industry’s business model requires for its survival.
Will this continue?
The argument against is based on speculations about supply: the proven reserves can’t maintain 2% growth for much longer, the best efforts of the fossil fuel industry notwithstanding.
The argument for is based on speculations about demand: even if some customers stop buying fossil fuels for some reason, there will be no shortage of other customers willing to take their place, thereby maintaining sufficient demand to justify the oil companies spending whatever it takes to keep proven reserves at the requisite level for good customer service, at least to the end of the present century. Proven reserves grew throughout the 20th century and have kept growing into this one, and speculation that this growth is about to end is just that: pure speculation with no basis in fact. The predicted date for peak oil keeps getting pushed further into the future at about the same pace as the date for fusion energy break-even.
There is a really simple way to see which argument wins. Just keep monitoring log₂(ACO2) while looking for a departure from the remarkably straight trend to date. Any significant departure would signal a failure of the 2% growth to continue, and the argument against wins. But if by 2100 no such departure has been seen, the argument for wins, though few if any adults alive today will live to see it.
Today CO2 is at about 410 ppm, making ACO2 130 ppm. If the straight line continues, that is, if ACO2 continues to double every 34 years, two more doublings (multiplication of 130 by 4) bring the date to 2019 + 34*2 = 2087 and the CO2 level in 2087 to 130*4 + 280 = 800 ppm. Another 13 years is another factor of 2^(13/34) = 1.3, making the CO2 in 2100 130*4*1.3 + 280 = 956 ppm.
If the 1.85 °C per doubling of CO2 that has held up for 168 years continues for another 80 years, then we could expect a further rise in CO2 from today’s 410 ppm to 956 ppm to be accompanied by a rise in global mean surface temperature (land and sea together) of 1.85*log₂(956/410) = 2.26 °C.
This comes to an average of 2.26/8 = 0.28 °C (0.51 °F) per decade. That is merely an average over those 80 years: some decades will rise more than that, some less.
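The arithmetic of the last three paragraphs fits on one MATLAB screen (nothing here beyond the stated assumptions of a straight Figure 4 and 1.85 °C per doubling):

    tcr  = 1.85;              % degC per doubling of CO2
    td   = 34;                % assumed doubling time of ACO2, years
    aco2 = 130;               % today's 410 ppm CO2 minus 280 ppm
    co2_2100 = aco2 * 2^((2100 - 2019)/td) + 280   % about 956 ppm
    dT = tcr * log2(co2_2100 / 410)                % about 2.26 degC by 2100
    dT_per_decade = dT / 8                         % about 0.28 degC per decade
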
But what if Figure 4 bends down sooner?
I have no idea. My confidence in what will happen if it remains straight is far greater than any confidence I could muster about the effect of it bending down.
For a more mathematical answer, bending down would break analyticity, and all bets would then be off.
A real- or complex-valued function on a given domain is said to be analytic when it is representable over its whole domain as a Taylor series that converges on that domain. In order for it to remain analytic on any extension of its domain it must continue to use the same Taylor series, which must furthermore remain convergent on that larger domain. Hence any analytic extension of an analytic function to a larger domain, if it exists, is uniquely determined by its Taylor series. This is the basis for the rich subject of analytic continuation. Functions like addition, multiplication, exponentiation, and their inverses (subtraction, division, logarithm) where defined, all preserve analyticity.
Figure 4’s curve is analytic when modeled as a straight line. This would no longer remain the case if it started to bend down significantly.
The essential contributors to centennial climate since 1850 look sufficiently like analytic functions as to raise concern should CO2, the strongest of those contributors, cease to rise analytically. In particular, drawdown by vegetation seems likely to respond analytically if we ignore the impact of land use changes governed by one of the planet’s more chaotic species.
So what does all this mathematical gobbledygook mean in practice? Well, it seems highly unlikely that the vegetable kingdom has been responding to rising CO2 anywhere near as fast as we have been able to raise it. While plants may well be trying to catch up with us, their contribution to drawdown is hardly likely to have kept pace.
But presumably their growth has been following CO2’s analytic growth according to some analytic function. The problem is that we know too little about that dependence to say what plants would do if our CO2 stopped growing analytically.
Le Chatelier’s principle on the other hand entails a sufficiently simple dependence that we can expect a decrease in CO2 to result in a matching decrease in drawdown attributable to chemical processes. The much greater complexity of plants is what makes their contribution the biggest unknown here. In particular if the vegetable kingdom continued to grow at only a little less than its present pace until CO2 was down to say 330 ppm, its increasing drawdown could greatly accelerate removal of CO2 from the atmosphere.
But this is only one possibility from a wide range of such possibilities.
On the assumption that Figure 4 stays straight through 2100, and Earth doesn’t get hit in the meantime by something much worse than anything since 1850 such as a supervolcano or asteroid, I feel pretty comfortable with my “Two more degrees” forecast for the 75 years 2063-2137.
But if it bends down I would not feel comfortable making any prediction at all given the above concerns. (I made essentially this point in column 4 of my poster at AGU2018 in Washington DC, “Sources of Variation in Climate Sensitivity Estimates”, http://clim8.stanford.edu/AGU/ .)
Moderation note:  As with all guest posts, please keep your comments civil and relevant.
 
