Root Cause Analysis of the Modern Warming

by Matt Skaggs
For years, climate scientists have followed reasoning that goes from climate model simulations to expert opinion, declaring that to be sufficient. But that is not how attribution works.

The concept of attribution is important in descriptive science, and is a key part of engineering. Engineers typically use the term “root cause analysis” rather than attribution. There is nothing particularly clever about root cause methodology, and once someone is introduced to the basics, it all seems fairly obvious. It is really just a system for keeping track of what you know and what you still need to figure out.
I have been performing root cause analysis throughout my entire long career, generally in an engineering setting. The effort consists of applying well-established tools to new problems. This means that in many cases, I am not providing subject matter expertise on the problem itself, although it is always useful to understand the basics. Earlier in my career I also performed laboratory forensic work, but these days I am usually merely a facilitator. I will refer to those who are most knowledgeable about a particular problem as the “subject matter experts” (SMEs).
This essay consists of three basic sections. First I will briefly touch on root cause methodology. Next I will step through how a fault tree would be conducted for a topic such as the recent warming, including showing what the top tiers of the tree might look like. I will conclude with some remarks about the current status of the attribution effort in global warming. As is typical for a technical blog post, I will be covering a lot of ground while barely touching on most topics, but I promise that I will do my best to explain the concepts as clearly and concisely as I can.
Part 1: Established Root Cause Methodology
Definitions and Scope
Formal root cause analysis requires very clear definitions and scope to avoid chaos. It is a tool specifically for situations in which we have detected an effect with no obvious cause, but discerning the cause is valuable in some way. This means that we can only apply our methodology to events that have already occurred, since predicting the future exploits different tools. We will define an effect subject to attribution as a significant excursion from stable output in an otherwise stable system. One reason this is important is that a significant excursion from stable behavior in an otherwise stable system can be assumed to have a single root cause. Full justification of this is beyond the scope of this essay, but consider that if your car suddenly stops progressing forward while you are driving, the failure has a single root cause. After having no trouble for a year, the wheel does not fall off at the exact same instant that the fuel pump seizes. I will define a “stable” system as one in which significant excursions are so rare in time that they can safely be assumed to have a single root cause.
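To make the definition concrete, below is a minimal sketch of how an excursion might be flagged against a stable baseline. This is illustrative only; the three-sigma threshold is an arbitrary assumption, not an established criterion.

```python
import numpy as np

def significant_excursion(baseline, recent, k=3.0):
    """Flag a significant excursion: the recent mean departs from the
    stable baseline mean by more than k baseline standard deviations.
    k=3 is an arbitrary illustrative threshold."""
    mu, sigma = np.mean(baseline), np.std(baseline)
    return abs(np.mean(recent) - mu) > k * sigma

# Example: a stable series followed by a step change.
rng = np.random.default_rng(1)
stable = rng.normal(0.0, 0.1, 500)
shifted = rng.normal(0.5, 0.1, 50)
print(significant_excursion(stable, shifted))  # True
```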
Climate science is currently engaged in an attribution effort pertaining to a recent temperature excursion, which I will refer to as the “modern warming.” For purposes of defining the scope of our attribution effort, we will consider the term “modern warming” to represent the rise in global temperature since 1980. This is sufficiently precise to prevent confusion; we can always go back and tweak this date if the evidence warrants.
Choosing a Tool from the Toolbox 
There are two basic methods to conclusively attribute an effect to a cause. The short route to attribution is to recognize a unique signature in the evidence that can only be explained by a single root cause. This is familiar from daily life; the transformer in front of your house shorted and there is a dead black squirrel hanging up there. The need for a systematic approach such as a fault tree only arises when there is no black squirrel. We will return to the question of a unique signature later, after discussing what an exhaustive effort would look like.
Once we have determined that we cannot simply look at the outcome of an event and see the obvious cause, and we find no unique signature in the data, we must take a more systematic approach. The primary tools in engineering root cause analysis are the fault tree and the cause map. The fault tree is the tool of choice for when things fail (or more generally, execute an excursion), while the cause map is a better tool for when a process breaks down. The fault tree asks “how?” while the cause map asks “why?” Both tools are forms of logic trees with all logical bifurcations mapped out. Fault trees can be quite complex with various types of logic gates. The key attributes of a fault tree are accuracy, clarity, and comprehensiveness. What does it mean to be comprehensive? The tree must address all plausible root causes, even ones considered highly unlikely, but there is a limit. The limit concept here is euphemistically referred to as “comet strike” by engineers. If you are trying to figure out why a boiler blew up, you are not obligated to put “comet strike” on your fault tree unless there is some evidence of an actual comet.
Since we are looking at an excursion in a data set, we choose the fault tree as our basic tool. The fault tree approach looks like this:

  1. Verify that a significant excursion has occurred.
  2. Collect sufficient data to characterize the excursion.
  3. Assemble the SMEs and brainstorm possible root causes for the excursion.
  4. Build a formal fault tree showing all the plausible causes. If there is any dispute about plausibility, put the prospective cause on the tree anyway.
  5. Apply documented evidence to each cause. This generally consists of direct observations and experimental results. Parse the data as either supporting or refuting a root cause, and modify the fault tree accordingly.
  6. Determine where evidence is lacking and develop a plan to generate the missing evidence. Consider synthetically modeling the behavior when no better evidence is available.
  7. Execute plan to fill all evidence blocks. Continue until all plausible root causes are refuted except one, and verify that the surviving root cause is supported by robust evidence.
  8. Produce report showing all of the above, and concluding that the root cause of the excursion was the surviving cause on the fault tree.

I will be discussing these steps in more detail below.
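The logical core of steps 5 through 7 can be compressed into a toy elimination loop. This is only a sketch; the cause and finding objects are hypothetical stand-ins for whatever bookkeeping the facilitator actually uses.

```python
def resolve_fault_tree(causes, findings):
    """Steps 5-7 in miniature: apply every documented finding to every
    plausible cause, discard refuted causes, and stop only when a single
    supported cause survives. 'refutes' and 'supports' are hypothetical
    predicates that the SMEs would supply."""
    surviving = [c for c in causes
                 if not any(f.refutes(c) for f in findings)]
    if len(surviving) == 1 and any(f.supports(surviving[0]) for f in findings):
        return surviving[0]   # step 8: report the surviving root cause
    return None               # unresolved: generate more evidence (step 6)
```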
The Epistemology of Attribution Evidence
As we work through a fault tree, we inevitably must weigh the value of various forms of evidence. Remaining objective here can be a challenge, but we do have some basic guidelines to help us.
The types of evidence used to support or refute a root cause are not all equal. The differences can be expressed in terms of “fidelity.” When we examine a failed part or an excursion in a data set, our direct observations are based upon evidence that has perfect fidelity. The physical evidence corresponds exactly to the effect of the true root cause upon the system of interest. We may misinterpret the evidence, but the evidence is nevertheless a direct result of the true root cause that we seek. That is not true when we devise experiments to simulate the excursion, nor is it true when we create synthetic models.
When we cannot obtain conclusive root cause evidence by direct observation of the characteristics of the excursion, or direct analysis of performance data, the next best approach is to simulate the excursion by performing input/output (I/O) experimentation on the same or an equivalent system. This requires that we make assumptions about the input parameters, and we cannot assume that our assumptions have perfect fidelity to the excursion we are trying to simulate. Once we can analyze the results of the experiment, we find that it either reproduced our excursion of interest, or it did not. Either way, the outcome of the experiment has high fidelity with respect to the input as long as the system used in test has high fidelity to the system of interest. If the experiment based upon our best guess of the pertinent input parameters does not reproduce the directly-observed characteristics of the excursion, we do not discard the direct observations in favor of the experiment results. We may need to go back and double check our interpretation, but if the experiment does not create the same outcome as the actual event, it means we chose the wrong input parameters. The experiment serves to refute our best guess. The outcomes from experimentation obviously sit lower on an evidence hierarchy than direct observations.
The fidelity of synthetic models is limited in exactly the same way with respect to the input parameters that we plug into the model. But models have other fidelity issues as well. When we perform our experiments on the same system that had the excursion (which is ideal if it is available), or on an equivalent system, we take great care to assure that our test system responds the same way to input as the original system that had the excursion of interest. We can sometimes verify this directly. In a synthetic model, however, an algorithm is substituted for the actual system, and there will always be assumptions that go into the algorithm. This adds up to a situation in which we are unsure of the fidelity of our input parameters, and unsure of the fidelity of our algorithm. The compounded effect of this uncertainty is that we do not apply the same level of confidence to model results that we do to observations or experiment results. So in summary, and with everything else being equal, direct observation will always trump experimental results, and experimental results will always trump model output. Of course, there is no way to conduct meaningful experiments on analogous climates, so one of the best tools is not of any use to us.
Similar objective value judgments can be made about the comparison of two data sets. When we look at two curves and they both seem to show an excursion that matches in onset, duration and amplitude, we consider that to be evidence of correlation. If the wiggles also closely match, that is stronger evidence. Two curves that obviously exhibit the same onset, magnitude, and duration prior to statistical analysis will always be considered better evidence than two curves that can be shown to be similar after sophisticated statistical analysis. The less explanation needed to correlate two curves, the stronger the evidence of correlation.
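As a sketch of what “less explanation” means in practice, the three coarse features named above can be read directly off a series before any statistical massaging; the one-sigma mask below is an arbitrary illustrative choice.

```python
import numpy as np

def curve_features(x):
    """Extract the coarse features discussed above: onset index,
    duration, and amplitude of the excursion in a series. The 1-sigma
    mask is an arbitrary illustrative threshold."""
    x = np.asarray(x, dtype=float)
    above = x > x.mean() + x.std()          # crude excursion mask
    onset = int(np.argmax(above)) if above.any() else -1
    duration = int(above.sum())
    amplitude = float(x.max() - x.mean())
    return onset, duration, amplitude
```

Two curves whose features agree at this crude level constitute stronger evidence of correlation than two curves that can only be reconciled after heavier processing.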
Sometimes we need to resolve plausible root causes but lack direct evidence and cannot simulate the excursion of interest by I/O testing. Under these circumstances, model output might be considered if it meets certain objective criteria. When attribution of a past event is the goal, engineers shun innovation. In order for model output to be considered in a fault tree effort, the model requires extensive validation, which means the algorithm must be well established. There must be a historical record of input parameters and how changes in those parameters affected the output. Ideally, the model will have already been used successfully to make predictions about system behavior under specific circumstances. Models can be both sophisticated and quite trustworthy, as we see with the model of planetary motion in the solar system. Also, some very clever methods have been developed to substitute for prior knowledge. An example is the Monte Carlo method, which can sometimes tightly constrain an estimation of output without robust data on input. Similarly, if we have good input and output data, we can sometimes develop a useful empirical model of the system behavior without really knowing much about how the system works. A simple way to think of this is to consider three types of information: input data, system behavior, and output data. If you know two of the three, you have some options for approximating the third. But if you have adequate information on only one or none of them, your model approach is underspecified. Underspecified model simulations are on the frontier of knowledge and we shun their use on fault trees. To be more precise, simulations from underspecified models are insufficiently trustworthy to adequately refute root causes that are otherwise plausible.
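Here is a minimal sketch of the Monte Carlo idea: given a trusted system response but only loose bounds on the input, repeated random draws can still tightly characterize the output. The quadratic response function is invented for illustration and stands in for a well-validated system, not any climate relationship.

```python
import numpy as np

rng = np.random.default_rng(0)

def system_response(forcing):
    # Invented, well-validated system behavior (stand-in only).
    return 0.8 * forcing + 0.1 * forcing ** 2

# Input is poorly known: only a plausible range, not a measured record.
inputs = rng.uniform(low=1.0, high=3.0, size=100_000)
outputs = system_response(inputs)

# The output distribution is nevertheless tightly constrained.
print(np.percentile(outputs, [5, 50, 95]))
```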
Part 2: Established Attribution Methodology Applied to the Modern Warming
Now that we have briefly covered the basics of objective attribution and how we look at evidence, let’s apply the tools to the modern warming. Recall that attribution can only be applied to events in the past or present, so we are looking at only the modern warming, not the physics of AGW. A hockey stick shape in a data set provides a perfect opportunity, since the blade of the stick represents a significant excursion from the shaft of the stick, while the shaft represents the stable system that we need to start with.
I mentioned at the beginning that it is useful for an attribution facilitator to be familiar with the basics of the science. While I am not a climate scientist, I have put plenty of hours into keeping up with climate science, and I am capable of reading the primary literature as long as it is not theoretical physics or advanced statistics. I am familiar with the IPCC Assessment Report (AR) sections on attribution, and I have read all the posts at RealClimate.org for a number of years. I also keep up with some of the skeptical blogs, including Climate Etc., although I rarely enter the comment fray. I did a little extra reading for this essay, with some help from Dr. Curry. This is plenty of familiarity to act as a facilitator for attribution on a climate topic. Onward to the root cause analysis.
Step 1. Verify that a significant excursion has occurred.
Here we want to evaluate the evidence that the excursion of interest is truly beyond the bounds of the stability region for the system. When we look at mechanical failures, Step 1 is almost never a problem; there is typically indisputable visual evidence that something broke. In electronics, a part will sometimes seem to fail in a circuit but meet all of the manufacturer’s specifications after it is removed. When that happens we shift our analysis to the circuit, and the component originally suspected of causing the failure becomes a refuted root cause.
In looking at the modern warming, we first ask whether there are similar multi-decadal excursions in the past millennium of unknown cause. We also need to consider the entire Holocene. While most of the available literature states that the modern excursion is indeed unprecedented, this part of the attribution analysis is not a democratic process. We find that there is at least one entirely plausible temperature reconstruction for the last millennium that shows comparable excursions. Holocene reconstructions suggest that the modern warming is not particularly significant. We find no consensus as to the cause of the Younger Dryas, the Minoan, Roman, and Medieval warmings, or the Little Ice Age, all of which may constitute excursions of at least similar magnitude. I am not comfortable with this because we need to understand the mechanisms that made the system stable in the first place before we can meaningfully attribute a single excursion.
When I am confronted with a situation like this in my role as facilitator, I have a discussion with my customer as to whether they want to expend the funds to continue the root cause effort, given the magnitude of uncertainty regarding whether we even have a legitimate attribution target. I have grave doubts that we have survived Step 1 in this process, but let’s assume that the customer wants us to continue.
Step 2. Collect sufficient data to characterize the excursion.
The methodology can get a little messy here. Before we can meaningfully construct a fault tree, we need to carefully define the excursion of interest, which usually means studying both the input and output data. However, we are not really sure of what input data we need since some may be pertinent to the excursion while other data might not. We tend to rely upon common sense and prior knowledge as to what we should gather at this stage, but any omissions will be caught during the brainstorming so we need not get too worried.
The excursion of interest is in temperature data. We find that there is a general consensus that a warming excursion has occurred. The broad general agreement about trends in surface temperature indices is sufficient for our purposes.
The modern warming temperature excursion exists in the output side of the complex process known as “climate.” A fully characterized excursion would also include robust empirical input data, which for climate change would be tracking data for the climate drivers. When we look for input data at this stage, we are looking for empirical records of the climate both prior to and during the modern warming. We do not have a full list yet, but we know that greenhouse gases, aerosols, volcanoes, water vapor, and clouds are all important. Rather than continue on this topic here, I will discuss it in more detail after we construct the fault tree below. That way we can be specific about what input data we need.
Looking for a Unique Signature
Now that we have chosen to consider the excursion as anomalous and sufficiently characterized, this is a good time to look for a unique signature. Has the modern warming created a signature that is so unique that it can only be associated with a single root cause? If so, we want to know now so that we can save our customer the expense of the full fault tree that we would build in Steps 3 and 4.
Do any SMEs interpret some aspect of the temperature data as a unique signature that could not possibly be associated with more than one root cause? It turns out that some interpret the specific spatio-temporal heterogeneity pattern as being evidence that the warming was driven by the radiation absorbed by increased greenhouse gas (GHG) content in the atmosphere. Based upon what I have read, I don’t think there is anyone arguing for a different root cause creating a unique signature in the modern warming. The skeptic arguments seem to all reside under a claim that the signature is not unique, not that it is unique to something other than GHG warming. So let’s see whether we can take our shortcut to a conclusion that an increase in GHG concentration is the sole plausible root cause due to a unique data signature.
Spatial heterogeneity would be occurring up to the present day, and so can be directly measured. I have seen two spatial pattern claims about GHG warming: 1) the troposphere should warm more quickly, and 2) the poles should warm more quickly. Because this is important, I have attempted to track these claims back through time. The references mostly go back to climate modeling papers from the 1970s and 1980s. In the papers, I was unable to find a single instance where any of the feedbacks thought to enhance warming in specific locations were associated solely with CO2. Instead, some are associated with any GHG, while others, such as arctic sea ice decrease, occur due to any persistent warming. Nevertheless, the attribution chapter in IPCC AR5 contains a paragraph that seems to imply that enhanced tropospheric warming supports attribution of the modern warming to anthropogenic CO2. I cannot make the dots connect. But here is one point that cannot be overemphasized: the search for a unique signature in the modern warming is the best hope we have for resolving the attribution question.
Step 3. Assemble the SMEs and brainstorm plausible root causes for the excursion.
Without an overwhelmingly strong argument that we have a unique signature situation, we must do the heavy lifting involved with the exhaustive approach. Of course, I am not going to be offered the luxury of a room full of climate SMEs, so I will have to attempt this myself for the purposes of this essay.
Step 4. Build a Formal Fault Tree
An attribution analysis is a form of communication, and the effort is purpose-driven in that we plan to execute a corrective action if that is feasible. As a communication tool, we want our fault tree to be in a form that makes sense to those that will be the most difficult to convince, the SMEs themselves. And when we are done, we want the results to clearly point to actions we may take. With these thoughts in mind, I try to find a format that is consistent with what the SMEs already do. Also, we need to emphasize anthropogenic aspects of causality because those are the only ones we can change. So we will base our fault tree on an energy budget approach similar to a General Circulation Model (GCM), and we will take care to ensure that we separate anthropogenic effects from other effects.
GCMs universally, at least as far as I know, use what engineers call a “control volume” approach to track an energy budget. In a control volume, you can imagine an infinitely thin and weightless membrane surrounding the globe at the top of the atmosphere. Climate scientists even have an acronym for the location “top of the atmosphere,” TOA. Energy that crosses into the membrane must equal energy that crosses out of it over very long time intervals; otherwise the temperature would ramp until all the rocks melted or everything froze. In the rather unusual situation of a planet in space, the control volume is equivalent to a “control mass” formulation in which we would track the energy budget based upon a fixed mass. Our imaginary membrane defines a volume, but it also contains all of the earth/atmosphere mass. For simplicity, I will continue with the term “control volume.”
The control volume equation in GCMs is roughly equivalent to:
[heat gained] – [heat lost] = [temperature change]
This is just a conceptual equation because the terms on the left are in units of energy, while the units on the right are in degrees of temperature. The complex function between the two makes temperature an emergent property of the climate system, but we needn’t get too wrapped up in this. Regardless of the complexity hidden behind this simple equation, it is useful to keep in mind that each equation term (and later, each fault tree box) represents a single number that we would like to know.
There is a bit of housekeeping we need to do at this point. Recall that we are only considering the modern warming, but we can only be confident about the fidelity of our control volume equation when we consider very long time intervals. To account for the disparity in duration, we need to consider the concept of “capacitance.” A capacitor is a device that will store energy under certain conditions, but then discharge that energy under a different set of conditions. As an instructive example, the argument that the current hiatus in surface temperature rise is being caused by energy storage in the ocean is an invocation of capacitance. So to fit our approach to a discrete time interval, we need the following modification:
[heat gained] + [capacitance discharge] – [heat lost] – [capacitance recharge] = [modern warming]
Note that we are no longer considering the entire history of the earth; we are only considering the changes in magnitude during the modern warming interval. Our excursion direction is up, so we discard the terms for a downward excursion. Based upon the remaining terms in our control volume equation, the top tier of the tree consists of three blocks: heat gained, capacitance discharge, and heat lost.
From the control volume standpoint, we have covered heat that enters our imaginary membrane, heat that exits the membrane, and heat that may have been stashed inside the membrane and is only being released now. I should emphasize that this capacitance in the top tier refers to heat stored inside the membrane prior to the modern warming that is subsequently released to create the modern warming.
This top tier contains our first logical bifurcation. The two terms on the left, heat input and heat loss, are based upon a supposition that annual changes in forcing will manifest soon enough that the change in temperature can be considered a direct response. This can involve a lag as long as the lag does not approach the duration of the excursion. The third term, capacitance, accounts for the possibility that the modern warming was not a direct response to a forcing with an onset near the onset of our excursion. An alternative fault tree can be envisioned here with something else in the top tier, but the question of lags must be dealt with near the top of the tree because it constitutes a basic division of what type of data we need.
The next tier could be based upon basic mechanisms rooted in physics, increasing the granularity.
The heat input leg represents heat entering the control volume, plus the heat generated inside. We have a few oddball prospective causes here that rarely see the light of day. The heat generated by anthropogenic combustion and geothermal heat are a couple of them. In this case, it is my understanding that there is no dispute that any increases above prior natural background combustion (forest fires, etc.) and geothermal releases are trivial. We put these on the tree to show that we have considered them, but we need not waste time here. Under heat loss, we cover all the possibilities with the two basic mechanisms of heat transfer, radiation and conduction. Conduction is another oddball. The conduction of heat to the vacuum of space is relatively low and would be expected to change only slightly, in rough accordance with the temperature at TOA. With conduction changes crossed off, a decrease in outward radiation would be due to a decreased albedo, where albedo represents reflection across the entire electromagnetic spectrum. A control volume approach allows us to lump convection in with conduction. The last branch in our third tier is the physical mechanism by which a temperature excursion occurs due to heat being released from a reservoir, which is a form of capacitance discharge.
I normally do not start crossing off boxes until the full tree is built. However, if we cross off the oddballs here, we see that the second tier of the tree decomposes to just three mechanisms, solar irradiance increase, albedo decrease, and heat reservoir release. This comes as no revelation to climate scientists.
This is as far as I am going in terms of building the full tree, because the next tier gets big and I probably would not get it right on my own. Finishing it is an exercise left to the reader! But I will continue down the “albedo decrease” leg until we reach anthropogenic CO2-induced warming, the topic du jour. A disclaimer: I suspect that this tier could be improved by the scrutiny of actual SMEs.
The only leg shown fully expanded in the sketch below is the one related to CO2; the reader is left to envision the entire tree with each leg expanded in a similar manner. The bottom of this tree fragment shows anthropogenic CO2-induced warming in proper context. Note that we could have separated anthropogenic effects at the first tier of the tree, but then we would have two almost identical trees.
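In lieu of a drawn tree, here is a minimal sketch of the fragment just described, written as a nested structure; the labels paraphrase the tiers discussed above, and the elided legs (“...”) and dispositions are placeholders, not findings.

```python
# Sketch of the fault tree fragment described above. Only the leg
# running down to anthropogenic CO2 is expanded; sibling legs are
# elided with "...". Dispositions shown are from the discussion above.
fault_tree = {
    "modern warming": {
        "heat gained": {
            "solar irradiance increase": "...",
            "anthropogenic combustion": "refuted (trivial increase)",
            "geothermal release": "refuted (trivial increase)",
        },
        "heat loss decreased": {
            "conduction/convection decrease": "refuted (trivial change)",
            "radiation decrease (albedo decrease)": {
                "cloud changes": "...",
                "surface albedo changes": "...",
                "GHG increase": {
                    "anthropogenic CO2-induced warming": "...",
                    "other GHG changes": "...",
                },
            },
        },
        "capacitance discharge": {
            "heat reservoir release (e.g. ocean)": "...",
        },
    },
}
```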
Once every leg is completed in this manner, the next phase of adding evidence begins.
Step 5. Apply documented evidence to each cause.
Here we assess the available evidence and decide whether it supports or refutes a root cause. The actual method used is often dictated by how much evidence we are dealing with. One simple way is to make a numbered list of evidence findings. Then when a finding supports a root cause, we can add that number to the fault tree block in green. When the same finding refutes a different root cause, we can add the number to the block in red. All findings must be mapped across the entire tree.
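A sketch of that bookkeeping: each block carries the numbers of the findings that support it (the green ink) and refute it (the red ink). The finding numbers below are placeholders, not actual findings.

```python
# Hypothetical evidence map: finding numbers per fault tree block.
# "supports" plays the role of green ink, "refutes" the role of red.
evidence_map = {
    "solar irradiance increase": {"supports": [], "refutes": [2, 5]},
    "albedo decrease":           {"supports": [1], "refutes": []},
    "heat reservoir release":    {"supports": [4], "refutes": []},
}

def blocks_still_in_play(tree):
    """A block survives as long as no finding refutes it."""
    return [block for block, ev in tree.items() if not ev["refutes"]]

print(blocks_still_in_play(evidence_map))
```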
The established approach to attribution looks at the evidence based upon the evidence hierarchy and exploits any reasonable manner of simplification. The entire purpose of a control volume approach is to avoid having to understand the complex relationship that exists between variables within the control volume. For example, if you treat an engine as a control volume, you can put flow meters on the fuel and air intakes, a pressure gauge on the exhaust, and an rpm measurement on the output shaft. With those parameters monitored, and a bit of historical data on them, you can make very good predictions about the trend in rpm of the engine based upon changes in inputs without knowing very much about how the engine translates fuel into motion. This approach does not involve any form of modeling and is, as I mentioned, the rationale for using control volume in the first place.
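Here is a sketch of the engine example with invented numbers: fit an empirical relationship between a monitored input and the output, then predict the rpm trend without any model of what happens inside the engine.

```python
import numpy as np

# Invented historical control volume data for the engine:
# fuel flow in (kg/h) and shaft speed out (rpm).
fuel_flow = np.array([10.0, 12.0, 15.0, 18.0, 20.0])
rpm = np.array([1500.0, 1800.0, 2250.0, 2700.0, 3000.0])

# Purely empirical input/output relationship -- no combustion physics.
slope, intercept = np.polyfit(fuel_flow, rpm, deg=1)

# Predict the rpm trend for a new input without opening the engine.
print(slope * 16.0 + intercept)  # ~2400 rpm
```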
The first question the fault tree asks of us is captured in the first tier. Was the modern warming caused by a direct response to higher energy input, a direct response to lower energy loss, or by the release of heat stored during an earlier interval? If we consider this question in light of our control volume approach (we don’t really care how energy gets converted to surface temperature), we see that we can answer the question with simple data in units of energy, watts or joules. Envision data from, say, 1950 to 1980, in terms of energy. We might find that for the 30-year interval, heat input was x joules, heat loss was y joules, and capacitance release was z joules. Now we compare that to the same data for the modern warming interval. If any one of the latter numbers is substantially more than the corresponding earlier numbers x, y, or z, we have come a long way already in simplifying our fault tree. A big difference would mean that we can lop off the other legs. If we see big changes in more than one of our energy quantities, we might have to reconsider our assumption that the system is stable.
In order to resolve the lower tiers, we need to take our basic energy change data and break it down by year, so joules/year. If we had reasonably accurate delta joules/year data relating to the various forcings, we could wiggle match between the data and the global temperature curve. If we found a close match, we would have strong evidence that forcings have an important near-term effect, and that (presumably) only one root cause matches the trend. If no forcing has an energy curve that matches the modern warming, we must assume capacitance complicates the picture.
Let’s consider how this would work. Each group of SMEs would produce a simple empirical chart for their fault tree block estimating how much energy was added or lost during a specific year within the modern warming, ideally based upon direct measurement and historical observation. These graphs would then be the primary evidence blocks for the tree. Some curves would presumably vary around zero with no real trend, others might decline, while others might increase. The sums roll up the tree. If the difference between the “heat gained” and “heat lost” legs shows a net positive upward trend in energy gained, we consider that as direct evidence that the modern warming was driven by heat gained rather than capacitance discharge. If those two legs sum to near zero, we can assume that the warming was caused by capacitance discharge. If the capacitance SMEs (those that study El Niño, etc.) estimate that a large discharge likely occurred during the modern warming, we have robust evidence that the warming was a natural cycle.
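A sketch of how those per-leg estimates might roll up the top tier of the tree; the annual energy series would come from the SME evidence blocks, and the “near zero” threshold is an arbitrary placeholder.

```python
import numpy as np

def roll_up(heat_gained, heat_lost, capacitance_discharge, tol=0.1):
    """Roll up annual energy estimates (joules/year) for the top tier.
    tol is an arbitrary placeholder for 'near zero'."""
    net_direct = np.sum(heat_gained) - np.sum(heat_lost)
    scale = max(np.sum(np.abs(heat_gained)), 1.0)
    if net_direct > tol * scale:
        return "driven by net heat gain (direct response to forcing)"
    if np.sum(capacitance_discharge) > tol * scale:
        return "driven by capacitance discharge (natural cycle)"
    return "unresolved: evidence insufficient"
```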

Step 6. Determine where evidence is lacking and develop a plan to generate the missing evidence.

Once all the known evidence has been mapped, we look for empty blocks. We then develop a plan to fill those blocks as our top priority.
I cannot find the numbers to fill in the blocks in the AR documents. I suspect that the data does not exist for the earlier interval, and perhaps cannot even be well estimated for the modern warming interval.

Step 7. Execute plan to fill all evidence blocks.

Here we collect evidence specifically intended to address the fault tree logic. That consists of energy quantities from both before and during the modern warming. Has every effort been made to collect empirical data about planetary albedo prior to the modern warming? I suspect that this is a hopeless situation, but clever SMEs continually surprise me.
In a typical root cause analysis, we continue until we hopefully have just one unrefuted cause left. The final step is to exhaustively document the entire process. In the case of the modern warming, the final report would carefully lay out the necessary data, the missing data, and the conclusion that until and unless we can obtain the missing data, the root cause analysis will remain unresolved.
Part 3: The AGW Fault Tree, Climate Scientists, and the IPCC: A Sober Assessment of Progress to Date
I will begin this section by stating that I am unable to assess how much progress has been made towards resolving the basic fault tree shown above. That is not for lack of trying; I have read all the pertinent material in the IPCC Assessment Reports (ARs) on a few occasions. When I read these reports, I am bombarded with information concerning the CO2 box buried deep in the middle of the fault tree. But even for that box, I am not seeing a number that I could plug into the equations above. For other legs of the tree, the ARs are even more bewildering. If climate scientists are making steady progress towards being able to estimate the numbers to go in the control volume equations, I cannot see it in the AR documents.
How much evidence is required to produce a robust conclusion about attribution when the answer is not obvious? For years, climate scientists have followed reasoning that goes from climate model simulations to expert opinion, declaring that to be sufficient. But that is not how attribution works. Decomposition of a fault tree requires either a unique signature, or sufficient data to support or refute every leg of the tree (not every box on the tree, but every leg). At one end of the spectrum, we would not claim resolution if we had zero information, while at the other end, we would be very comfortable with a conclusion if we knew everything about the variables. The fault tree provides guidance on the sufficiency of the evidence when we are somewhere in between. My customers pay me to reach a conclusion, not muck about with a logic tree. But when we lack the basic data to decompose the fault tree, maintaining my credibility (and that of the SMEs as well) demands that we tell the customer that the fault tree cannot be resolved because we lack sufficient information.
The curve showing CO2 rise and the curve showing the modern global temperature rise do not look the same, and signal processing won’t help with the correlation. Instead, there is hypothesized to be a complex function involving capacitance that explains the primary discrepancy, the recent hiatus. But we still have essentially no idea how much capacitance has contributed to historical excursions. We do not know whether there is a single mode of capacitance that swamps all others, or whether there are multiple capacitance modes that go in and out of phase. Ocean capacitance has recently been invoked as perhaps the most widely endorsed explanation for the recent hiatus in global warming, and there is empirical evidence of warming in the ocean. But invoking capacitance to explain a data wiggle down on the fifth tier of a fault tree, when the general topic of capacitance remains unresolved in the first tier, suggests that climate scientists have simply lost the thread of what they were trying to prove. The sword swung in favor of invoking capacitance to explain the hiatus turns out to have two edges. If the system is capable of exhibiting sufficient capacitance to produce the recent hiatus, there is no valid argument against why it could not also have produced the entire modern warming, unless that can be disproven with empirical data or I/O test results.
Closing Comments
Most of the time when corporations experience a catastrophe such as a chemical plant explosion resulting in fatalities, they look to outside entities to conduct the attribution analysis. This may come as a surprise given the large sums of money at stake and the desire to influence the outcome, but consider the value of a report produced internally by the corporation. If the report exonerates the corporation of all culpability, it will have zero credibility. Sure, they can blame themselves to preserve their credibility, but their only hope of a credible exoneration is if it comes from an independent entity. In the real world, the objectivity of an independent study may still leave something to be desired, given the fact that the contracted investigators get their paycheck from the corporation, but the principle still holds. I can only assume when I read the AR documents that this never occurred to climate scientists.
The science of AGW will not be settled until the fault tree is resolved to the point that we can at least estimate a number for each leg based upon objective evidence. The tools available have thus far not been up to the task. With so much effort put into modeling CO2 warming while other fault tree boxes are nearly devoid of evidence, it is not even clear that the available tools are being applied efficiently.
The terms of reference for the IPCC are murky, but it is clear that it was never set up to address attribution in any established manner. There was no valid reason to not use an established method, facilitated by an entity with expertise in the process, if attribution was the true goal. The AR documents are position papers, not attribution studies, as exemplified by the fact that supporting and refuting arguments cannot be followed in any logical manner and the arguments do not roll up into any logical framework. If AGW is really the most important issue that we face, and the science is so robust, why would climate scientists not seek the added credibility that could be gained from an independent and established attribution effort?
JC comments
I don’t normally provide comments within a guest post, but I need to make an exception here.  Some big light bulbs in this essay.  I have been dancing around the issues raised by Matt Skaggs in several previous posts, including posts on tree logic.

But Matt’s essay really clarifies some things (I will do a follow-up post on this general topic). This post also clarifies the disagreement between myself and Gavin Schmidt.  The main point of relevance here is that there are different ways to frame and approach the climate change attribution problem, and the one used by the IPCC and mainstream climate scientists isn’t a very good one.
Moderation note:  This is a guest post (invited by me).  Please keep your comments civil and relevant.
