Generating regional scenarios of climate change

by Judith Curry
This post is about the practical aspects of generating regional scenarios of climate variability and change for the 21st century.

The challenges of generating useful scenarios of climate variability/change for the 21st century were discussed in my presentation at the UK-US Workshop [link].  Some excerpts from my previous post:
At timescales beyond a season, available ensembles of climate models do not provide the basis for probabilistic predictions of regional climate change. Given the uncertainties, the best that can be hoped for is scenarios of future change that bound the actual change, with some sense of the likelihood of the individual scenarios. 
Scenarios are provocative and plausible accounts of how the future might unfold. The purpose is not to identify the most likely future, but to create a map of uncertainty of the forces driving us toward the unknown future. Scenarios help decision makers order and frame their thinking about the long-term while providing them with the tools and confidence to take action in the short-term.
Are GCMs the best tool? – GCMs may not be the best tool, and are certainly not the only tool, for generating scenarios of future regional climate change. Current GCMs are inadequate for simulating natural internal variability on multidecadal time scales. Computational expense precludes adequate ensemble size. GCMs currently have little skill in simulating regional climate variations. Dynamical & statistical downscaling add little value beyond accounting for local effects on surface variables. Further, the CMIP5 simulations only explore various emissions scenarios.
The challenge for identifying an upper bound for future scenarios is to identify the possible and plausible worst case scenarios. What scenarios would be genuinely catastrophic? What are possible/plausible time scales for the scenarios? Can we ‘falsify’ these scenarios for the timescale of interest based upon our background knowledge of natural plus anthropogenic climate change?
New project
With this context, my company Climate Forecast Applications Network (CFAN) has signed a new contract to develop regional climate change scenarios as part of a large, complex project.  Here is a [link] describing generally how CFAN approaches developing such scenarios.
Without disclosing the target region or the client or the specific impact issue being considered (at some point, presumably there will be a publicly issued report on the project), I will describe the relevant aspects of the project that frame my part of the project.
My team is one of four teams involved in the generation of future scenarios for the region. Our role is to use the CMIP5 simulations, observations, and other model-generated scenarios (developed by other teams) to develop scenarios of high-resolution surface forcing (temperature and precipitation) that will be used to force a detailed land surface process model.
There are two overall aspects of the project that I find appealing:

  1.  The creation of a broader range of future scenarios, beyond what is provided by the CMIP simulations
  2.  The importance of a careful uncertainty analysis.

Overview of proposed strategy
In the context of the caveats provided in my UK-US Workshop presentation, how can we approach developing a broad range of future scenarios for this region?
Previous analyses of CMIP5 simulations have identified several climate models that do a credible job of simulating the current climate of this region (particularly in terms of overall atmospheric circulation patterns and annual cycle of precipitation).  Nevertheless, even the best models show biases in both temperature and precipitation relative to observations.
This suggests starting with a really good historical baseline period (say the last 30 years), using a global reanalysis product as the base. I prefer to use a reanalysis product as the base rather than gridded observational datasets because the reanalysis provides a dynamically consistent gridded state estimate that includes assimilation of available surface and satellite observations. That said, reanalysis products can have biases in data-sparse regions, particularly in the presence of topography (which is the case for the project region). Hence satellite data and surface data can be used to adjust for biases in the reanalyses.
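To illustrate the kind of bias adjustment meant here, the minimal sketch below removes a mean bias from a reanalysis temperature field using an observation-based climatology. The array names, shapes, and the simple additive adjustment are my own assumptions for illustration, not the project's method:

```python
import numpy as np

def adjust_reanalysis_bias(reanalysis, obs_climatology, weight=1.0):
    """Illustrative additive adjustment of a reanalysis field toward an
    observation-based climatology (hypothetical inputs).

    reanalysis      : (time, lat, lon) daily reanalysis temperature
    obs_climatology : (lat, lon) long-term mean from surface/satellite obs
    weight          : 0..1, fraction of the diagnosed bias to remove
    """
    # Diagnose the mean bias of the reanalysis over the baseline period
    bias = reanalysis.mean(axis=0) - obs_climatology          # (lat, lon)
    # Remove (a fraction of) that bias uniformly in time
    return reanalysis - weight * bias[np.newaxis, :, :]
```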
The second challenge for the baseline period is downscaling to a 1 km grid. This sounds like a crazy thing to do, but the land surface model requires such inputs, and there is topography in the region that influences both temperature and precipitation. We propose a statistical downscaling approach based on a previous study that used an inverse modeling approach. The resulting high-resolution historical dataset will be used to calibrate and test the land surface model. Further, the project calls for daily forcing (at 1 km) for the land surface model. No, I am not going to defend the need for 1 km and daily resolution for this, but that requirement is in the project terms of reference.
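For readers unfamiliar with why topography matters at 1 km, here is a toy sketch of a lapse-rate adjustment of temperature onto a finer grid. To be clear, this is not the inverse-modeling scheme referenced above; it is only an illustration, and the function names, refinement factor, and default lapse rate are my assumptions:

```python
import numpy as np
from scipy.ndimage import zoom

def downscale_temperature(t_coarse, elev_coarse, elev_fine,
                          lapse_rate=-6.5e-3, factor=25):
    """Toy topographic downscaling of temperature to a finer grid.

    t_coarse    : (lat, lon) coarse-grid temperature [K]
    elev_coarse : (lat, lon) coarse-grid elevation [m]
    elev_fine   : high-resolution elevation on the target grid [m]
                  (assumed to match the shape produced by `zoom` below)
    lapse_rate  : K per metre (default -6.5 K/km)
    factor      : refinement factor between coarse and fine grids
    """
    # Interpolate the coarse fields to the fine grid (order=1: bilinear-like)
    t_interp = zoom(t_coarse, factor, order=1)
    elev_interp = zoom(elev_coarse, factor, order=1)
    # Adjust for the difference between the true fine-scale elevation and
    # the smooth interpolated elevation
    return t_interp + lapse_rate * (elev_fine - elev_interp)
```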
The historical baseline dataset can be used in two ways in developing the 21st century scenarios from climate model simulations:

  1. To calibrate the climate model (remove bias) by comparing the observed baseline period with the historical climate model simulations, and then apply this same bias correction to 21st century simulations;
  2. Use it as a baseline to which model-simulated changes (the 21st century relative to the model's own baseline period) are applied: add for temperature, multiply for precipitation (both strategies are sketched in code below).

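As a rough sketch of the difference between the two strategies, consider the following (illustrative numpy only; the array names and the use of simple climatological means for the bias and the delta are my assumptions, not the project's method):

```python
import numpy as np

def strategy_1(model_hist, model_future, obs_baseline):
    """Strategy #1 (illustrative): bias-correct the model against observations,
    then use the corrected 21st-century fields directly.

    model_hist, model_future, obs_baseline : (time, lat, lon) arrays
    """
    bias = model_hist.mean(axis=0) - obs_baseline.mean(axis=0)
    return model_future - bias            # bias-corrected future simulation

def strategy_2(model_hist, model_future, obs_baseline, multiplicative=False):
    """Strategy #2 (illustrative): compute a model-derived change ("delta")
    and apply it to the observed baseline: add for temperature, multiply for
    precipitation (near-zero denominators would need guarding in practice)."""
    if multiplicative:
        delta = model_future.mean(axis=0) / model_hist.mean(axis=0)
        return obs_baseline * delta
    delta = model_future.mean(axis=0) - model_hist.mean(axis=0)
    return obs_baseline + delta
```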
For a variety of ancillary reasons, I prefer strategy #2, since it requires that only the observed baseline be downscaled, and it is the only approach that I think will work for more creatively imagined scenarios.
At first blush, it seems that both methods should give the same result in the end. The complication arises from chaos and decadal-scale internal variability in the climate models, which was the topic of a recent post.
And then there is the issue of which/how many CMIP5 models to use, but that is a topic for another post.
Implications of chaos and internal variability
In CMIP5, some of the modeling groups provided an ensemble of simulations for the historical period and for the 21st century (varying the initial conditions). The ensemble size was not large (nominally 5 members) when compared to the Grand Ensemble of 30 members described in last week’s post.
Re strategy #1. So, if you are trying to identify bias in the climate models relative to observations (say the past 30 years), how would you do this? Average the 5 ensemble members together and calculate the bias? Calculate the bias of each ensemble member? Look for the single ensemble member whose multi-decadal variability is most in phase with the observations? Use the decadal CMIP5 simulations that initialize the ocean? The reason this choice matters is that the bias correction will be applied to the 21st century simulations, and the bias corrections are useless if you are merely correcting for multidecadal variability that is out of phase with the observations.
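To make the first two options concrete, here is a minimal numpy sketch (shapes and names are my assumptions). The spread across the per-member bias estimates gives at least a rough indication of how much of the diagnosed “bias” is really unsynchronised internal variability rather than true model bias:

```python
import numpy as np

def ensemble_bias_options(members, obs):
    """Two of the bias-estimation choices discussed above (illustrative only).

    members : (n_members, time, lat, lon) historical simulations
    obs     : (time, lat, lon) observed/reanalysis baseline
    """
    obs_mean = obs.mean(axis=0)                        # (lat, lon)
    # Option A: bias of the ensemble mean -- internal variability partly
    # averages out, but so does any phase information
    bias_ens_mean = members.mean(axis=(0, 1)) - obs_mean
    # Option B: bias of each member separately -- each estimate mixes true
    # model bias with that member's own multidecadal variability
    bias_per_member = members.mean(axis=1) - obs_mean  # (n_members, lat, lon)
    # Spread across members: a rough measure of the internal-variability
    # contamination of the bias estimate
    spread = bias_per_member.std(axis=0)
    return bias_ens_mean, bias_per_member, spread
```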
So, is strategy #2 any better? Strategy #2 bypasses the need to calibrate the climate models against observations. But the challenge is then shifted to how to calculate the delta changes, and then how to use them to create a new scenario of surface forcing that captures the spatio-temporal weather variability desired from a daily forcing dataset.
How should we proceed with calculating a delta change from 5 historical simulations (say the last 10 years or so) and 5 simulations of the 21st century? If you average the 5 historical simulations, average the 5 21st century simulations, and subtract, you will end up with averaged-out mush that has no meaningful weather or interannual or decadal variability. How about subtracting the average of the 5 historical simulations from each of the 21st century simulations? Or should we calculate a delta using each possible combination from the groups of historical and 21st century simulations? And then, how do we assess uncertainty from an insufficiently large ensemble?
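The “all combinations” option, for example, might look something like the sketch below (illustrative only; the use of simple climatological means and the array layout are my assumptions):

```python
import numpy as np

def pairwise_deltas(hist_members, future_members):
    """All combinations of (future member - historical member) deltas.

    hist_members, future_members : (n_members, time, lat, lon)
    Returns (n_future, n_hist, lat, lon); the spread of this array gives a
    crude, ensemble-limited picture of the uncertainty in the delta.
    """
    hist_clim = hist_members.mean(axis=1)       # (n_hist, lat, lon)
    future_clim = future_members.mean(axis=1)   # (n_future, lat, lon)
    deltas = future_clim[:, None, :, :] - hist_clim[None, :, :, :]
    return deltas

# e.g. deltas.mean(axis=(0, 1)) as a central estimate,
#      deltas.std(axis=(0, 1)) as a (crude) uncertainty measure
```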
Let’s assume that we solve the above problem in some satisfactory way. How do we then incorporate the delta changes with the baseline observational field to create a credible daily, high-resolution surface forcing field? Perhaps the deltas should be computed for monthly averaged (or even regionally averaged) values, and then the general sub-monthly variability of the 21st century scenario can be preserved?
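One reading of this, sketched below under my own assumptions, is to apply monthly-mean deltas to the daily observed baseline, so that the sub-monthly weather variability of the constructed 21st century forcing is carried over from the observations:

```python
import numpy as np

def apply_monthly_deltas(daily_baseline, months, monthly_delta,
                         multiplicative=False):
    """Apply monthly-mean deltas to a daily baseline series, preserving the
    observed sub-monthly (weather) variability. Illustrative only.

    daily_baseline : (time, lat, lon) daily observed/reanalysis field
    months         : (time,) calendar month index (1..12) for each day
    monthly_delta  : (12, lat, lon) model-derived change for each month
    """
    out = daily_baseline.copy()
    for m in range(1, 13):
        idx = months == m
        if multiplicative:                        # e.g. precipitation
            out[idx] = daily_baseline[idx] * monthly_delta[m - 1]
        else:                                     # e.g. temperature
            out[idx] = daily_baseline[idx] + monthly_delta[m - 1]
    return out
```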
Conclusions
Well, I don’t have any conclusions at this point, obviously; the above summarizes the issues that we are grappling with as we set this up. You may ask why we are even doing this project, given the magnitude of the challenges. Well, we would like business from this particular client, the project is scientifically interesting and important, and I would like to establish some sort of credible procedure for developing 21st century scenarios of climate variability and a methodology for assessing uncertainty. There is a cottage industry of people using one CMIP5 simulation, dynamically downscaling it using a mesoscale model, and then presenting these results to decision makers without any uncertainty assessment. Even Michael Mann agrees that this doesn’t make sense [link]. Fortunately, the client for this project appreciates the uncertainty issue, and is prepared to accept as an outcome of this project a 10% change with 200% uncertainty.
I have been crazy busy the last few months. I think I will start doing more of this sort of post, about things I am directly working on (including follow-on posts related to this project).
I look forward to your comments and suggestions on this.
Moderation note: This is a technical thread; please don’t bother to comment unless you have some technical input that you think I would be interested in. General discussion about this can be conducted on Kip Hansen’s thread from last week.
