Model structural uncertainty – are GCMs the best tools?

by Judith Curry
Rarely are the following questions asked:  Is the approach that we are taking to climate modeling adequate?  Could other model structural forms be more useful for advancing climate science and informing policy?

Why GCMs?
Here GCM refers to the global coupled atmosphere-ocean models whose simulations, conducted under the Coupled Model Intercomparison Project (CMIP), are used by the IPCC.
The sociology of GCMs is discussed in a fascinating 1998 paper by Shackley et al., entitled Uncertainty, Complexity, and Concepts of Good Science in Climate Modelling: Are GCMs the best tools?  Stimulated by Shackley’s paper, I’ve submitted an abstract to a Workshop to be held next October; here are excerpts:
The policy-driven imperative of climate prediction has resulted in the accumulation of power and authority around GCMs, based on the promise of using GCMs to set emissions reduction targets and to make regional predictions of climate change. Complexity of model representation has become a central normative principle in evaluating climate models, good science, and policy utility. However, not only are GCMs resource-intensive and intractable, they are also characterized by overparameterization and inadequate attention to uncertainty. Apart from the divergence of climate model predictions from observations over the past two decades, which raises questions as to whether GCMs are oversensitive to CO2 forcing, the hope for useful regional predictions of climate change is unlikely to be realized on the current path of model development. The advancement of climate science is arguably being slowed by the focus of resources on this one path of climate modeling.
Philosophy of GCMs
Shackley et al. describe the underlying philosophy of GCMs:
The model building process used to formulate and construct the GCM is considered as a prime example of ‘deterministic reductionism’. By reductionism, we mean here the process of ‘reducing’ a complex system to the sum of its perceived component parts (or subsystems) and then constructing a model based on the assumed interconnection of the submodels for these many parts. This is not, of course, a process which necessarily reduces the model in size at all: on the contrary, it normally leads to more complex models, like the GCM, because most scientists feel that the apparent complexity that they see in the natural world should be reflected in a complex model: namely a myriad of ‘physically meaningful’ and interconnected subsystems, each governed by the ‘laws of nature’, applied at the microscale but allowed to define the dynamic behaviour at the macroscale, in a manner almost totally specified by the scientist’s (and usually his/her peer group’s) perception of the system.
This reductionist philosophy of the GCM model is ‘deterministic’ because the models are constructed on purely deterministic principles. The scientist may accept that the model is a representation of an uncertain reality but this is not reflected at all in the model equations: the GCM is the numerical  solution of a complex but purely deterministic set of nonlinear partial differential equations over a defined spatiotemporal grid, and no attempt is made to introduce any quantification of uncertainty into its construction. 
[T]he reductionist argument that large scale behaviour can be represented by the aggregative effects of smaller scale process has never been validated in the context of natural environmental systems and is even difficult to justify when modelling complex manmade processes in engineering.
I just came across a 2009 essay in EOS by Stephan Harrison and David Stainforth entitled Predicting Climate Change: Lessons from Reductionism, Emergence and the Past, which emphasizes this same point:
Reductionism argues that deterministic approaches to science and positivist views of causation are the appropriate methodologies for exploring complex, multivariate systems. The difficulty  is that a successful reductionist explanation need not imply the possibility of a successful constructionist approach, i.e., one where the behavior of a complex system can be deduced from the fundamental reductionist understanding. Rather, large, complex systems may be better understood, and perhaps only understood, in terms of observed, emergent behavior. The practical implication is that there exist system behaviors and structures that are not amenable to explanation or prediction by reductionist methodologies.
Model structural uncertainty
When climate modelers work to characterize uncertainties in their models, they focus on initial condition uncertainty and parametric (parameter and parameterization) uncertainty.  Apart from the issue of the fidelity of the numerical solutions to the physical equations, there is yet another uncertainty: model structural uncertainty, which is described in a paragraph from my Uncertainty Monster paper:
Model structural form is the conceptual modeling of the physical system (e.g., dynamical equations, initial and boundary conditions), including the selection of subsystems to include (e.g., stratospheric chemistry, ice sheet dynamics). In addition to insufficient understanding of the system, uncertainties in model structural form are introduced as a pragmatic compromise between numerical stability and fidelity to the underlying theories, credibility of results, and available computational resources.
The structural form of GCMs has undergone significant change in the past decade, largely by adding more atmospheric chemistry, an interactive carbon cycle, additional prognostic equations for cloud microphysical processes, and land surface models.  A few models have undergone structural changes to their dynamical core – notably, the Hadley Centre model becoming nonhydrostatic.
Structural uncertainty is rarely quantified in the context of subsequent model versions.  Continual ad hoc adjustment of GCMs (calibration) provides a means for the models to avoid being falsified – new model forms with increasing complexity are generally regarded as ‘better’.
The questions I am posing here relate not so much to these incremental changes within the current reductionist paradigm, but rather to more substantial changes to the fundamental equations of the dynamical core, or to entirely new modeling frameworks that may have greater structural adequacy than the current GCMs.  Below are some interesting ideas on new model structural forms that I’ve come across.
Multi-component multi-phase atmosphere
The biggest uncertainty related to climate sensitivity is the fast thermodynamic feedback associated with water vapor and clouds. A number of simplifying assumptions about moist thermodynamics are made in climate models, as a carryover from weather models. For the long time integrations of climate models, accumulation of model errors could produce spurious or highly amplified feedbacks.
Treating the atmosphere as a multi-component multi-phase fluid (water plus the non-condensing gases) could provide an improved framework for modeling clouds and moist convection, which remain among the most vexing aspects of current GCMs.  Peter Bannon lays out the framework for such a model in Theoretical Foundations for Models of Moist Convection.  I have long thought that this modeling framework would incorporate the water vapor/condensation-driven processes discussed by Makarieva and colleagues.
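To give a concrete flavor of what the multi-component bookkeeping involves at its very simplest, here is a minimal sketch of the density of an air parcel treated as a mixture of dry air, water vapor, and liquid condensate. This is only an illustration of the idea, not Bannon’s formulation; the constants are standard, but the function name and example numbers are assumptions for illustration.

```python
# Minimal sketch: density of an air parcel treated as a mixture of dry air,
# water vapour, and liquid condensate (illustrative only; not Bannon's
# full multi-phase formulation).

R_D = 287.04   # gas constant for dry air, J kg^-1 K^-1
R_V = 461.5    # gas constant for water vapour, J kg^-1 K^-1
EPS = R_D / R_V

def parcel_density(p, T, r_v, r_l):
    """Density of a parcel at pressure p (Pa) and temperature T (K), carrying
    water vapour mixing ratio r_v and liquid condensate mixing ratio r_l
    (kg per kg of dry air). The condensate contributes mass but, to a good
    approximation, no partial pressure, so it appears only as loading."""
    # 'Density temperature': virtual temperature generalised to include
    # condensate loading.
    T_rho = T * (1.0 + r_v / EPS) / (1.0 + r_v + r_l)
    return p / (R_D * T_rho)

# Example (assumed numbers): a warm, moist, cloudy parcel near the surface.
rho_cloudy = parcel_density(p=95000.0, T=293.0, r_v=0.014, r_l=0.001)
rho_dry    = parcel_density(p=95000.0, T=293.0, r_v=0.0,   r_l=0.0)
print(f"cloudy parcel: {rho_cloudy:.4f} kg/m^3, dry parcel: {rho_dry:.4f} kg/m^3")
```

Even this toy calculation shows why the simplifying assumptions matter: the condensate loading term changes the buoyancy of the parcel, and a framework that carries all the water components explicitly does this bookkeeping by construction rather than by approximation.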
Stochastic models
The leading proponent of stochastic parameterizations in climate models, and now of fully stochastic climate models, is Tim Palmer of Oxford (formerly of ECMWF).  The Resilient Earth has a post on this, Swapping Climate Models for a Roll of the Dice.  Excerpts:
The problem is that to halve the size of the grid divisions requires an order-of-magnitude increase in computer power. Making the grid fine enough is just not possible with today’s technology.
In light of this insurmountable problem, some researchers go so far as to demand a major overhaul, scrapping the current crop of models altogether. Taking clues from meteorology and other sciences, the model reformers say the old physics based models should be abandoned and new models, based on stochastic methods, need to be written from the ground up. Pursuing this goal, a special issue of the Philosophical Transactions of the Royal Society A will publish 14 papers setting out a framework for stochastic climate modeling. Here is a description of the topic:
This Special Issue is based on a workshop at Oriel College Oxford in 2013 that brought together, for the first time, weather and climate modellers on the one hand and computer scientists on the other, to discuss the role of inexact and stochastic computation in weather and climate prediction. The scientific basis for inexact and stochastic computing is that the closure (or parametrisation) problem for weather and climate models is inherently stochastic. Small-scale variables in the model necessarily inherit this stochasticity. As such it is wasteful to represent these small scales with excessive precision and determinism. Inexact and stochastic computing could be used to reduce the computational costs of weather and climate simulations due to savings in power consumption and an increase in computational performance without loss of accuracy. This could in turn open the door to higher resolution simulations and hence more accurate forecasts.
In one of the papers in the special issue, “Stochastic modelling and energy-efficient computing for weather and climate prediction,” Tim Palmer, Peter Düben, and Hugh McNamara state the stochastic modeler’s case:
[A] new paradigm for solving the equations of motion of weather and climate is beginning to emerge. The basis for this paradigm is the power-law structure observed in many climate variables. This power-law structure indicates that there is no natural way to delineate variables as ‘large’ or ‘small’—in other words, there is no absolute basis for the separation in numerical models between resolved and unresolved variables.
In other words, we are going to estimate what we don’t understand and hope those pesky problems of scale just go away. “A first step towards making this division less artificial in numerical models has been the generalization of the parametrization process to include inherently stochastic representations of unresolved processes,” they state. “A knowledge of scale-dependent information content will help determine the optimal numerical precision with which the variables of a weather or climate model should be represented as a function of scale.” 
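To illustrate what an “inherently stochastic representation of unresolved processes” looks like in the simplest possible setting, here is a sketch using the Lorenz ’96 toy model, a standard testbed for parameterization ideas. The unresolved-scale tendency is represented by a crude deterministic closure plus an AR(1) stochastic perturbation. The closure coefficients, noise parameters, and time step below are assumptions chosen for illustration; this is a sketch of the general idea, not the ECMWF/Palmer machinery.

```python
# Sketch of a stochastic parameterization in the Lorenz '96 toy model:
# the effect of unresolved scales is a deterministic closure plus an
# AR(1) stochastic perturbation (all coefficients are assumed values).
import numpy as np

K, F = 8, 10.0           # number of resolved variables, external forcing
PHI, SIGMA = 0.95, 0.3   # AR(1) memory and noise amplitude (assumed)
rng = np.random.default_rng(42)

def closure(x):
    """Assumed deterministic closure for the unresolved-scale tendency."""
    return -(0.3 + 1.2 * x)          # a linear fit standing in for a real one

def step(x, e, dt=0.005):
    """One Euler step of the resolved variables x, plus an AR(1) update of
    the stochastic perturbation e that stands in for unresolved variability."""
    adv = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1)   # L96 advection
    e = PHI * e + SIGMA * np.sqrt(1.0 - PHI**2) * rng.standard_normal(K)
    return x + dt * (adv - x + F + closure(x) + e), e

x, e = F + 0.1 * rng.standard_normal(K), np.zeros(K)
for _ in range(2000):
    x, e = step(x, e)
print("resolved state after 2000 steps:", np.round(x, 2))
```

Run with different random seeds, a model like this produces an ensemble whose spread reflects the acknowledged uncertainty in the closure itself, which is precisely the information a purely deterministic parameterization cannot provide.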
Dominant Mode Analysis
Shackley et al. describe Dominant Mode Analysis (DMA):
DMA seeks to analyse a given, physically based, deterministic model by identifying objectively the small number of dynamic modes which appear to dominate the model’s response to perturbations in the input variables. In contrast to the traditional reductionist modelling, this normally results in a considerable simplification of the model, which is simultaneously both reduced in order and linearised by the analysis. The DMA methodology involves perturbing the complex and usually nonlinear, physically based model about some defined operating point, using a sufficiently exciting signal, i.e., one that will unambiguously reveal all the dominant modes of behaviour. A low order, linear model, in the form of a transfer function, is then fitted to the resulting set of simulated input-output data, using special methods of statistical estimation that are particularly effective in this role. As might be expected from dynamic systems theory, a low order linear model obtained in this manner reproduces the quasi-linear behaviour of the original nonlinear model about the operating point almost exactly for small perturbations. Perhaps more surprisingly, the reduced order model can sometimes also mimic the large perturbational response of its much higher order progenitor.
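A toy version of this recipe, perturb a nonlinear model with an exciting input and then fit a low-order linear transfer function to the simulated input-output data, might look like the following sketch. The “complex” model here is a stand-in two-box system invented for illustration (its parameters and the PRBS-like input are assumptions), and the fit is an ordinary first-order ARX model estimated by least squares rather than the specialised estimators Shackley et al. refer to.

```python
# Toy Dominant Mode Analysis: perturb a nonlinear model with an exciting
# signal, then fit a low-order linear transfer function to the output.
import numpy as np

rng = np.random.default_rng(0)

def complex_model(u, dt=1.0):
    """Stand-in 'complex' nonlinear model: a two-box system with a weak
    quadratic feedback, driven by the forcing series u. Purely illustrative."""
    y = np.zeros((len(u), 2))
    for t in range(1, len(u)):
        T, D = y[t - 1]
        dT = (u[t - 1] - 1.3 * T - 0.05 * T**2 - 0.5 * (T - D)) * dt / 8.0
        dD = 0.5 * (T - D) * dt / 100.0
        y[t] = T + dT, D + dD
    return y[:, 0]                     # 'observed' output: upper-box state

# Exciting input: a random binary (PRBS-like) perturbation about the operating point
u = 0.5 * rng.choice([-1.0, 1.0], size=2000)
y = complex_model(u)

# Fit a first-order ARX model  y[t] = a*y[t-1] + b*u[t-1]  by least squares
X = np.column_stack([y[:-1], u[:-1]])
a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
print(f"dominant mode: y[t] ~ {a:.3f}*y[t-1] + {b:.3f}*u[t-1] "
      f"(e-folding time ~ {-1.0 / np.log(a):.1f} steps)")
```

The point of the exercise is that a single estimated mode (one autoregressive coefficient and one gain) captures most of the small-perturbation behaviour of a model with more state variables and a nonlinearity, which is exactly the kind of order reduction DMA aims for.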
Network-based models
There is growing interest in the use of complex networks to represent and study the climate system.  This paper by Steinhauser et al. provides some background.  My colleagues at Georgia Tech are at the forefront of this application: Annalisa Bracco and Konstantin Dovrolis, and also Yi Deng.  And of course, the stadium wave is network-based.
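For readers unfamiliar with the approach, the basic recipe is to treat grid points (or climate indices) as nodes, link pairs whose time series are strongly related, and study the structure of the resulting graph. Here is a bare-bones sketch with synthetic data; the teleconnection pattern, noise level, and correlation threshold are all assumptions for illustration, and this is not the specific methodology of Steinhauser et al.

```python
# Bare-bones climate network: nodes are 'grid points', edges link pairs of
# strongly correlated time series, and hubs emerge from the degree field.
# All data and thresholds below are synthetic/assumed, for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_time = 60, 500

# Synthetic series: a shared large-scale mode plus local noise, with the
# mode's influence varying smoothly from node to node (assumed pattern).
mode = rng.standard_normal(n_time)
weights = np.sin(np.linspace(0.0, np.pi, n_nodes))
series = weights[:, None] * mode + 0.8 * rng.standard_normal((n_nodes, n_time))

# Correlation matrix -> adjacency matrix by thresholding |r| (threshold assumed)
corr = np.corrcoef(series)
adjacency = (np.abs(corr) > 0.5) & ~np.eye(n_nodes, dtype=bool)

degree = adjacency.sum(axis=1)
print("most connected nodes (network 'hubs'):", np.argsort(degree)[-5:])
print("mean degree:", degree.mean())
```

With real gridded data, the hubs and community structure of such a network can be interpreted in terms of teleconnections and regional dynamics, which is what makes the approach attractive for regional-scale questions.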
JC summary
The numerous problems with GCMs, and concerns that these problems will not be addressed in the near future given the current development path of these models, suggest that alternative model frameworks should be explored.  Above I’ve mentioned the alternative frameworks that I’ve come across and that I think show promise for some applications.  I don’t think there is a one-size-fits-all climate model solution.  For example, stochastic models should provide much better information about prediction uncertainty, but will probably still not produce useful predictions on regional scales.  Network-based models may be the most useful for regional-scale prediction.  And we stand to learn much about the climate system by trying a multi-component multi-phase model, and also from DMA.
The concentration of resources (financial and personnel) in supporting the traditional GCM approach currently leaves insufficient resources for the alternative methods (although networks and DMA are pretty inexpensive).  Tim Palmer may be successful at marshaling sufficient resources to further develop his stochastic climate modeling ideas.  Unfortunately, I don’t know of anyone who is interested in taking on the multi-component multi-phase formulation of the atmosphere (a particular interest of mine).
I look forward to hearing from those of you who have experience in other fields that develop models of large, complex systems.  In my opinion, climate modeling is currently in a big and expensive rut.
 