Insights from Karl Popper: how to open the deadlocked climate debate

by Larry Kummer, from the Fabius Maximus website.
Many factors have frozen the public policy debate on climate change, but none more important than the lack of interest on both sides in tests that might provide better evidence — and perhaps restart the discussion. Even worse, too little thought has been given to the criteria for validating climate science theories (aka their paradigm) and the models built upon them.

This series looks at the answers to these questions given us by generations of philosophers and scientists, which we have ignored. This post shows how Popper’s insights can help us. The clock is running for actions that might break the deadlock. Eventually the weather will give us the answers, perhaps at ruinous cost.
“Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory — an event which would have refuted the theory.”
— Karl Popper in Conjectures and Refutations: The Growth of Scientific Knowledge (1963).
“I’m considering putting “Popper” on my list of proscribed words.”
— Steve McIntyre’s reaction at Climate Audit to a mention that Popper’s work on falsification is the hallmark of science; an example of why the policy debate has gridlocked.

What test of climate models suffices for public policy action?

Climate scientists publish little about the nature of climate science theories. What exactly is a theory or a paradigm? Must theories be falsifiable, and if so, what does that mean? Scientists have their own protocols for such matters, and so usually leave these questions to philosophers and historians, or to symposiums over drinks. Yet in times of crisis — when the normal process of science fails to meet our needs — the answers to these questions provide tools that can help.
A related but distinct debate concerns the public policy response to climate change, which uses the findings produced by climate scientists and other experts. Here insights about the dynamics of the scientific process and the basis for proof can guide decision-making by putting evidence and expert opinion in a larger context.
A previous post in this series (links below) described how Thomas Kuhn’s theories explain the current state of climate science. This post looks to the work of Karl Popper (1902-1994) for advice about breaking the gridlocked public policy debate about climate change.
Popper said scientific theories must be falsifiable, and that prediction was the gold standard for their validation. Less well known is his description of what makes a compelling prediction: it should be “risky” — of an outcome contrary to what we would otherwise expect. A radical new theory that predicts that the sun will rise tomorrow is falsifiable by darkness at noon — yet watching the dawn provides little evidence for it. Contrast that with the famous 1919 test of general relativity, whose prediction was contrary to that of the then-standard theory.
How does this apply to climate science?
From NOAA’s interactive Climate At A Glance graphing page.

Predictions of warming

“The globally averaged combined land and ocean surface temperature data as calculated by a linear trend, show a warming of 0.85 [0.65 to 1.06] °C, over the period 1880 to 2012, when multiple independently produced datasets exist. …
“It is extremely likely that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. The best estimate of the human-induced contribution to warming is similar to the observed warming over this period.”
From the Summary for Policymakers of the IPCC’s Working Group I contribution to AR5.
Popper’s insight raises the bar for testing the predictions of climate models. The world has warmed since the late 19th century; anthropogenic forces became dominant only after WWII. The naive prediction is that warming will continue. This requires no knowledge of greenhouse gases or theory about anthropogenic global warming.
A risky test requires a prediction that differs from “more of the same”. Forecasts of accelerated warming late in the 21st century qualify as “risky”, but they provide no evidence today. Hindcasts — matching model projections against past observations — provide only weak evidence for the policy debate, since the past data were available to the models’ developers.
As usual in climate science, these points have been made — and ignored. For example, see “Should we assess climate model predictions in light of severe tests?” by Joel Katzav (Professor of Philosophy, Eindhoven University of Technology) in EOS (American Geophysical Union), 11 June 2011. It is worth reading in full; here is an excerpt. [JC note: see previous CE post]
The scientific community has placed little emphasis on providing assessments of CMP {climate model prediction} quality in light of performance at severe tests. Consider, by way of illustration, the influential approach adopted by Randall et al. in chapter 8 of their contribution to the fourth IPCC report. This chapter explains why there is confidence in climate models thus: “Confidence in models comes from their physical basis, and their skill in representing observed climate and past climate changes”.
…CMP quality is thus supposed to depend on simulation accuracy. However, simulation accuracy is not a measure of test severity. If, for example, a simulation’s agreement with data results from accommodation of the data, the agreement will not be unlikely, and therefore the data will not severely test the suitability of the model that generated the simulation for making any predictions.
…It appears, then, that a severe testing approach to assessing CMP quality would be novel. Should we, however, develop such an approach? Arguably, yes.
…First, as we have seen, a severe testing assessment of CMP quality does not count simulation successes that result from the accommodation of data in favor of CMPs. Thus, a severe testing assessment of CMP quality can help to address worries about relying on such successes, worries such as that these successes are not reliable guides to out-of-sample accuracy, and will provide important policy-relevant information as a result.

Conclusions

The public policy debate about climate change has gridlocked in part because many consider the evidence given insufficient to warrant massive expenditures and regulatory changes. The rebuttal has largely consisted of “trust us” and screaming “denier” at critics. Neither has produced progress; future historians will wonder why anyone expected them to do so.
This series seeks tests that both sides can accept — that might move the policy debate beyond today’s futile bickering.
The insights of Daniel Davies, Kuhn, and Popper offer a possible solution: test models from the past four Assessment Reports using observations from our past but their future. Run them with observations made after their creation rather than scenarios, so they produce predictions rather than projections, and compare those predictions with subsequent observations. This would produce better evidence than we have today, though it still might not provide the “risky” prediction necessary to warrant massive public policy action — diverting resources from other critical challenges (e.g., preparing for the return of past extreme weather events, addressing poverty, avoiding destruction of ocean ecosystems).
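The proposed test can be sketched as follows. Everything here is a hypothetical placeholder (synthetic observations, an assumed model publication year, and an invented forecast): score a model only on data recorded after its creation, and compare its out-of-sample error against the naive extrapolation of the pre-publication trend.

```python
import numpy as np

# Synthetic "observed" record (NOT real climate data), for illustration.
rng = np.random.default_rng(0)
years = np.arange(1990, 2016)
obs = 0.015 * (years - 1990) + rng.normal(0, 0.04, years.size)

model_year = 2000  # assumed publication year of the model under test
in_sample = years <= model_year    # data the developers could have seen
out_sample = years > model_year    # "our past but their future"

# Naive baseline: extrapolate the trend fitted to pre-publication data.
slope, intercept = np.polyfit(years[in_sample], obs[in_sample], 1)
baseline = slope * years[out_sample] + intercept

# Hypothetical model forecast for the same out-of-sample years.
model_forecast = 0.015 * (years[out_sample] - 1990)  # placeholder values

def rmse(pred, truth):
    """Root-mean-square error of a forecast against observations."""
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

# The model provides strong policy-relevant evidence only if it beats
# the naive extrapolation on data its developers never saw.
print(f"model RMSE:    {rmse(model_forecast, obs[out_sample]):.3f}")
print(f"baseline RMSE: {rmse(baseline, obs[out_sample]):.3f}")
```

The point of the design is that accommodation of known data cannot inflate the score: only skill on observations made after the model was frozen counts.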
The criteria for validating current theories about climate change have received too little attention; most work focuses on increasingly elaborate hindcasts (see this list of papers). Progress will come from better efforts to test the models, new insights from climate scientists, and the passage of time. But by themselves these might prove insufficient to produce timely policy action on the necessary scale. We should add to that list “developing better methods of model validation”.
JC note:  As with all guest posts, please keep your comments relevant and civil.

Filed under: Attribution, Scientific method
