Lorenz validated

by Kip Hansen
Some reflections on NCAR’s Large Ensemble.

In her latest Week in review – science edition, Judith Curry gave us a link to a press release from the National Center for Atmospheric Research (which is managed by the University Corporation for Atmospheric Research under the sponsorship of the National Science Foundation – often written NCAR/UCAR) titled “40 Earths: NCAR’s Large Ensemble reveals staggering climate variability”.
The highlight of the press release is this image:

[Image: thirty maps of 1963-2012 North American winter temperature trends from the CESM Large Ensemble, together with the ensemble mean (EM) and the observations (OBS).]
Original caption:
“Winter temperature trends (in degrees Celsius) for North America between 1963 and 2012 for each of 30 members of the CESM Large Ensemble. The variations in warming and cooling in the 30 members illustrate the far-reaching effects of natural variability superimposed on human-induced climate change. The ensemble mean (EM; bottom, second image from right) averages out the natural variability, leaving only the warming trend attributed to human-caused climate change. The image at bottom right (OBS) shows actual observations from the same time period. By comparing the ensemble mean to the observations, the science team was able to parse how much of the warming over North America was due to natural variability and how much was due to human-caused climate change. Read the full study in the American Meteorological Society’s Journal of Climate. (© 2016 AMS.)”
What is this? UCAR’s Large Ensemble Community Project has built a database of “30 simulations with the Community Earth System Model (CESM) at 1° latitude/longitude resolution, each of which is subject to an identical scenario of historical radiative forcing but starts from a slightly different atmospheric state.” Exactly what kind of “different atmospheric state”? How different were the starting conditions? “[T]he scientists modified the model’s starting conditions ever so slightly by adjusting the global atmospheric temperature by less than one-trillionth of one degree”.
The images, numbered 1 through 30, each represent a single run of the CESM started from one of these unique initial states differing by one-trillionth of a degree in global temperature – each a projection of North American winter temperature trends for 1963-2012. The bottom-right image, labeled OBS, shows the actual observed trends.
There is a paper from which this image is taken: Forced and Internal Components of Winter Air Temperature Trends over North America during the past 50 Years: Mechanisms and Implications, the paper representing just one of the “about 100 peer-reviewed scientific journal articles [that] have used data from the CESM Large Ensemble.” I will not comment on the paper itself, other than my comments here about the image, its caption, and the statements made in the press release.
I admit to being flummoxed – not by the fact that 30 runs of the CESM produced 30 entirely different 50-year climate projections from near-identical initial conditions. That is entirely expected. In fact, Edward Lorenz showed this with his toy weather models on his “toy” (by today’s standards) computer back in the 1960s. His discovery led to the field of study known today as Chaos Theory, the study of non-linear dynamical systems, particularly those that are highly sensitive to initial conditions. Our 30 CESM runs were initialized with a difference of what? One-trillionth of a degree in the initial global atmospheric temperature input value – an amount so small as to be literally undetectable by the modern instruments used to measure air temperatures. Running the simulations for just 50 years – from a starting point in 1963 through 2012 – gives results entirely in keeping with Lorenz’ findings: “Two states differing by imperceptible amounts may eventually evolve into two considerably different states … If, then, there is any error whatever in observing the present state—and in any real system such errors seem inevitable—an acceptable prediction of an instantaneous state in the distant future may well be impossible…. In view of the inevitable inaccuracy and incompleteness of weather observations, precise very-long-range forecasting would seem to be nonexistent.”
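To make Lorenz’ point concrete, here is a minimal sketch – using the logistic map, a textbook chaotic system, as a stand-in for a climate model – of what a one-part-in-a-trillion perturbation does to an otherwise identical run. The map, the parameter value and the step counts are my own illustrative choices; nothing here is taken from the CESM.

```python
# A minimal sketch of sensitivity to initial conditions, using the logistic map
# x -> r * x * (1 - x) as a stand-in for a chaotic climate model.  The 1e-12
# perturbation mirrors the "one-trillionth of a degree" used to start the
# ensemble members; r = 4.0 and the step counts are illustrative assumptions.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x_a = 0.4             # the "observed" initial state
x_b = x_a + 1.0e-12   # the same state, perturbed by one part in a trillion

for step in range(1, 61):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}:  run A = {x_a:.6f}   run B = {x_b:.6f}   "
              f"difference = {abs(x_a - x_b):.2e}")
```

Within a few dozen iterations the difference grows from one part in a trillion to order one: the two runs are no longer recognizably related, exactly as Lorenz described.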
What is the import of Lorenz? Literally ALL of our collective data on historic “global atmospheric temperature” are known to be inaccurate by at least +/- 0.1 degrees C. No matter what initial value the dedicated people at NCAR/UCAR enter into the CESM for global atmospheric temperature, it will differ from reality (from actuality – the number that would be correct if it were possible to produce such a number) by many, many orders of magnitude more than the one-trillionth of a degree difference used to initialize these 30 runs of the CESM Large Ensemble. Does this really matter? In my opinion, it does not. It is easy to see that the tiniest of differences, even in just one single initial value, produce 50-year projections that are as different from one another as is possible (see endnote 1). I do not know how many initial-condition values have to be entered to initialize the CESM – but certainly it is more than one. How much more different would the projections be if each of the initial values were altered, even just slightly?
What flummoxes me is the claim made in the caption: “The ensemble mean (EM; bottom, second image from right) averages out the natural variability, leaving only the warming trend attributed to human-caused climate change.”
In the paper that produced this image, the precise claim made is:
The modeling framework consists of 30 simulations with the Community Earth System Model (CESM) at 1° latitude/longitude resolution, each of which is subject to an identical scenario of historical radiative forcing but starts from a slightly different atmospheric state. Hence, any spread within the ensemble results from unpredictable internal variability superimposed upon the forced climate change signal. 
This idea is very alluring. Oh how wonderful it would be if it were true – if it were really that simple.
It is not that simple. And, in my opinion, it is almost certainly not true.
The climate system is a coupled non-linear chaotic dynamical system – the two major coupled systems being the atmosphere and the oceans. These two unevenly heated fluid systems, acting under the gravitational influence of the Earth, Moon and Sun while the planet spins in space and travels in its [uneven] orbit around the Sun, produce a combined dynamical system of incredible complexity which exhibits the types of chaotic phenomena predicted by Chaos Theory – one of which is a profound dependence on initial conditions. When such a system is modeled mathematically, it is impossible to eliminate all of the non-linearities (in fact, the more the non-linear formulas are simplified, the less valid the model).
Averaging 30 results produced by the mathematical chaotic behavior of any dynamical system model does not “average out the natural variability” in the system modeled.   It does not do anything even resembling averaging out natural variability. Averaging 30 chaotic results produces only the “average of those particular 30 chaotic results”.
Why isn’t the mean the result with natural variability averaged out? It is because there is a vast difference between random systems and chaotic systems. This difference seems to have been missed in making the above claim.
Coin flipping (with a truly fair coin) gives random results – not heads-tails-heads-tails, but results such that, once enough sample flips have been made, the randomness averages out to reveal the true ratio of the possible outcomes – 50-50 for heads and tails.
This is not true for chaotic systems. In chaotic systems, the results only appear to be random – but they are not random at all – they are entirely deterministic, each succeeding point being precisely determined by applying a specific formula to the existing value, then repeating for the next value. At each turn, the next value can be calculated exactly. What one cannot do in a chaotic system is predict what the value will be after the next 20 turns. One has to calculate through each of the prior 19 steps to get there.
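A few lines of code can illustrate the contrast – a sketch only, using a fair coin for the random system and, as before, the logistic map as a toy stand-in for a chaotic one (both are illustrative assumptions, not anything from the climate models themselves):

```python
import random

# Random system: averaging enough fair coin flips recovers the true 50/50 ratio.
flips = [random.random() < 0.5 for _ in range(100_000)]
print("fraction of heads:", sum(flips) / len(flips))   # very close to 0.5

# Chaotic system: every step is exactly determined by the previous value...
def logistic(x, r=3.9):
    return r * x * (1.0 - x)

x = 0.3
print("next value, computed exactly:", logistic(x))

# ...but to know the value 20 steps ahead you must iterate through every
# intermediate step -- there is no general shortcut.
for _ in range(20):
    x = logistic(x)
print("value after 20 steps:", x)
```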
In chaotic systems, these non-random results, though they may appear to be random, have order and structure. Each chaotic system has regimes of stability, periodicity, period doubling, ordered regimes that appear random but are constrained in value, and in some cases regimes that are highly ordered around what are called “strange attractors”. Some of these states, these regimes, are subject to statistical analysis and can be said to be “smooth” (all regions being visited statistically evenly). Others are fantastically varied, beautifully shaped in phase space, and profoundly “un-smooth” – some regions hot with activity while others are rarely visited – and can be deeply resistant to simple statistical analysis (although there is a field of study in statistics that focuses on this problem, it does not involve the type of average or mean used in this case).
The output of chaotic systems thus cannot simply be averaged to remove randomness or variability – the output is not random and is not necessarily evenly variable. (In our case, we have absolutely no idea about the evenness, the smoothness, of the real climate system.)
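As a rough illustration of that un-evenness, one can iterate a simple chaotic map for a long time and count how often its orbit visits each part of its range. Again, this is a toy stand-in (the logistic map, an illustrative assumption), not a claim about the real climate system:

```python
# Count how often a long chaotic orbit visits each tenth of the interval [0, 1].
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x = 0.2
for _ in range(1_000):          # discard an initial transient
    x = logistic(x)

counts = [0] * 10
for _ in range(200_000):
    x = logistic(x)
    counts[min(int(x * 10), 9)] += 1

for i, c in enumerate(counts):
    print(f"bin {i/10:.1f}-{(i + 1)/10:.1f}: {'#' * (c // 2000)}  ({c})")
```

The orbit is deterministic and bounded, yet it visits the ends of the interval far more often than the middle – nothing like the flat histogram a uniformly random process would produce.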
(On a more basic level, averaging only 30 outcomes would not reliably recover the true ratio even for a truly random, two-value system like a coin toss, or a six-value system such as the roll of a single die. This is obvious to the first-year statistics student or the beginner at craps. Perhaps it is the near-infinite number of possible chaotic outcomes of the climate model, appearing to be an even spread of random outcomes, that allows this beginner’s rule to be ignored.)
Had they run the model 30 or 100 more times, adjusting different initial conditions, they could have produced an entirely different set of results – maybe a new Little Ice Age in some runs – and they would have a different average, a different mean. Would this new, different average also be said to represent a result that “averages out the natural variability, leaving only the warming trend attributed to human-caused climate change”? How many different means could be produced in this way? What would they represent? I suspect that they would represent nothing other than the mean of all possible climate outputs of this model.
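The point can be sketched with the same toy model used above: build two “ensembles” of 30 runs that differ only in which trillionth-scale perturbations were applied to the starting value, and compare their ensemble means. Every number here is an illustrative assumption; nothing is taken from the CESM.

```python
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def run(x0, steps=200):
    """Iterate the toy model 'steps' times from the initial value x0."""
    x = x0
    for _ in range(steps):
        x = logistic(x)
    return x

def ensemble_mean(base, perturbations):
    """Average the final states of one run per perturbation of the base state."""
    finals = [run(base + p) for p in perturbations]
    return sum(finals) / len(finals)

base = 0.4
mean_1 = ensemble_mean(base, [k * 1.0e-12 for k in range(30)])      # "runs 1-30"
mean_2 = ensemble_mean(base, [k * 1.0e-12 for k in range(30, 60)])  # "runs 31-60"

print("ensemble mean, first  30 runs:", mean_1)
print("ensemble mean, second 30 runs:", mean_2)
```

The two means generally differ noticeably: each is simply the average of its own 30 chaotic outcomes, not an estimate of some underlying forced signal that the averaging has uncovered.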
The model does not really have any “natural variability” in it in the first place – there is no formula in the model that could be said to represent that part of the climate system which is “natural variability”. What the model has are simplified versions of the non-linear mathematical formulas that represent such things as non-equilibrium heat transfer, the dynamical flow of unevenly heated fluids, convective cooling, and fluid flows of all types (oceans and atmosphere) – dynamics known to be chaotic in the real world. The mathematically chaotic results resemble what we know as “natural variability”, a term meaning only those causes not of human origin – causes specifically coded into the model as such. Real-world chaotic climatic results – what is truly natural variation – owe their existence to the same principles, the non-linearity of dynamical systems, but in the real world natural variability includes not just the causes coded into the model as “natural” but also all of the causes that we do not understand and those we are not even aware of. It would be a mistake to consider the two different variabilities to be an identity – one and the same thing.
Thus “any spread in the ensemble” cannot be said to result from internal variability of the climate system or be taken to literally represent “natural variability” in the sense commonly used in climate science. The spread in the ensemble simply results from the mathematical chaos inherent in the formulas used in the climate model, and represents only the spread allowed by the constraining structure and parameterization of the model itself.
Remember, each of the 30 images created by the 30 runs of the CESM has been produced by identical code, identical parameters, identical forcing – and all-but-identical initial conditions. Yet none of them matches the observed climate in 2012; only one comes even approximately close. No one thinks that these represent actual climates. A full one-third of the runs produce projections that, had they come to pass by 2012, would have turned climate science on its head. How then are we to assume that averaging – finding the mean of – these 30 climates somehow magically represents the real climate with the natural variability averaged out? This really only tells us how profoundly sensitive the CESM is to initial conditions, and tells us something about the limits of the projected climates that the modeled system will allow. What we see in the ensemble mean and spread is only the mean of those exact runs and their spread over a 50-year period; it has little to do with the real-world climate.
To conflate the mathematically chaotic results of the modeled dynamical system with real-world “natural variability” – to claim that the one is the same as the other – is a hypothesis without basis, particularly when the climate effects are divided into a two-value system consisting only of “natural variability” and “human-caused climate change”.
The hypothesis that averaging 30 chaotically-produced climate projections creates an ensemble mean (EM) that has had natural variability averaged out – in such a way that comparing the EM projection to the actually observed data allows one to “parse how much of the warming over North America was due to natural variability and how much was due to human-caused climate change” – certainly does not follow from an understanding of Chaos Theory.
The two-value system (natural variability vs. human-caused) is not sufficient as we do not have adequate knowledge of what all the natural causes are (nor their true effect sizes) to be able to separate them out from all human causes – we cannot yet readily mark down effect sizes due to “natural” causes, thus cannot calculate the “human-caused” remainder. Regardless, almost certainly, comparing the ensemble mean of multiple near-identical runs of a modeled known-chaotic system to the real-world observations is not a scientifically or mathematically supportable approach in light of Chaos Theory.
There are, in the climate system, known causes and there remains the possibility of unknown causes. To be thorough, we should mention known unknowns – such as how clouds are both effects and causes, in unknown relationships – and unknown unknowns – there may be causes of climatic change that we are totally unaware of as yet, though the possibility of major “big-red-knob causes” remaining unknown decreases every year as the subject matures. Yet because the climate system is a coupled non-linear system – chaotic in its very nature – tweezing apart the coupled causes and effects is, and will remain for a long time, an ongoing project.
Consider also that, because the climate system is a constrained chaotic system by nature (see endnote 2), as this study serves to demonstrate, there may be some climate cause that, though it is as small as a “one-trillionth of a degree” change in the global atmospheric temperature, may spawn climate changes in the future far, far greater than we might imagine.
What the image produced by the NCAR/UCAR Large Ensemble Community Project does accomplish is to totally validate Edward Lorenz’ discovery that models of weather and climate systems are – must be, by their very nature – chaotic: profoundly sensitive to initial conditions, and thus resistant to, or possibly impervious to, attempts at “precise very-long-range” forecasting.
# # # # #
End Notes:

  1. Climate models are parameterized to produce expected results. For instance, if a model generally fails to produce results that resemble actual observations when used to project known data, it must be adjusted until it does. Obviously, if a model run starting in 1900 and carried forward 100 years produces a Little Ice Age by the year 2000, something must be assumed to be amiss in the model. There is nothing wrong with this as an idea, though there is increasing evidence that the practice may be a factor in the inability of models to correctly project even short-term (decadal) futures, and in their insistence on projecting continued warming in excess of observed rates.
  2. I refer to the climate system as “constrained” based only on our long-term understanding of Earth’s climate – surface temperatures remain within a relatively narrow band, and gross climate states seem restricted to Ice Ages and Interglacials. Likewise, in Chaos Theory, systems are known to be constrained by factors within the system itself – though results can be proven chaotic, they are not “just anything”, but occur within defined mathematical spaces, some of which are fantastically complicated.

# # # # #
Moderation note:  As with all blog posts, please keep your comments civil and relevant.