NCDC responds to concerns about surface temperature data set

Our algorithm is working as designed. – NOAA NCDC

Recall that in the previous post, Skeptical of skeptics: is Steve Goddard right?, Politifact assessed Goddard’s claim as ‘Pants on fire.’
Over the weekend, I informed Politifact that this issue was still in play, and pointed to my post and Watts’ post. Today, Politifact has posted an update, After the Fact, drawing from the blog posts and also additional input from Zeke. They conclude:
In short, as one of the experts in our fact-check noted, the adjusted data set from the government is imperfect and it changes as people work on it. However, the weight of evidence says the imperfections, or errors, have little impact on the broader trends.
Anthony Watts has a new post NCDC responds to identified issues in the USHCN.  Apparently the NCDC Press Office sent an official response to Politifact, which Watts obtained:
Are the examples in Texas and Kansas prompting a deeper look at how the algorithms change the raw data?
No – our algorithm is working as designed. NCDC provides estimates for temperature values when:
1) data were originally missing, and
2) when a shift (error) is detected for a period that is too short to reliably correct. These estimates are used in applications that require a complete set of data values.
Watts wrote that NCDC and USHCN are looking into this and will issue some sort of statement. Is that accurate?
Although all estimated values are identified in the USHCN dataset, NCDC’s intent was to use a flagging system that distinguishes between the two types of estimates mentioned above. NCDC intends to fix this issue in the near future.
Did the point Heller raised, and the examples provided for Texas and Kansas, suggest that the problems are larger than government scientists expected?
No, refer to question 1.
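To make the distinction in that answer concrete: case 1 is straightforward infilling of missing monthly values, while case 2 replaces observed values across a detected shift that spans too few months to correct reliably. A minimal sketch of the classification logic, purely for illustration (this is not NCDC’s pairwise homogenization code, and the 24-month threshold is an assumed parameter):

```python
import numpy as np

# Illustrative sketch only -- NOT NCDC's actual algorithm.
# Flags each month as observed, or as one of the two estimate
# types the press office describes.

def classify_estimates(monthly, shift_start=None, shift_len=0,
                       min_correctable=24):  # 24 months is an assumed threshold
    """Flag each monthly value: 'observed'; 'infilled' (case 1: the value
    was originally missing); or 'short-shift' (case 2: a detected shift
    spans too few months to correct reliably, so it is estimated)."""
    flags = np.full(len(monthly), 'observed', dtype=object)
    flags[np.isnan(monthly)] = 'infilled'                            # case 1
    if shift_start is not None and shift_len < min_correctable:
        flags[shift_start:shift_start + shift_len] = 'short-shift'  # case 2
    return flags

# Example: 36 months of data with two missing values and a detected
# 6-month shift beginning at month 12.
temps = np.random.normal(15.0, 2.0, 36)
temps[[4, 20]] = np.nan
print(classify_estimates(temps, shift_start=12, shift_len=6))
```

The flagging issue NCDC says it intends to fix is, in effect, that the published dataset does not yet distinguish these two flag types.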
Steve Goddard has a post on this, entitled Government scientists ‘expected’ the huge problems we found.
From the comments on Watts’ thread, Rud Istvan says:
The answer is in one sense honest: “Our algorithms are working as designed.”
We designed them to maintain zombie stations. We designed them to substitute estimated for actual data. We designed them to cool the past as a ‘reaction’ to UHI.
Wayne Eskridge says:
As a practical matter they have no choice but to defend their process. They will surely lose their jobs if they allow a change that damages the political narrative because that data infects many of the analyses the administration is using to push their agenda.
Wyo Skeptic says:
The Climate at a glance portion of the NCDC website is giving nothing but wonky data right now. Choose a site and it gives you data where the min temp, avg temp and max temp are the same. Change settings to go to a statewide time series and what it does is give you made up data where the average is the same amount above min as max is above avg.
http://www.ncdc.noaa.gov/cag/
Roy Spencer noticed it first in his blog about Las Vegas. I checked it out of curiosity and it is worse than what he seemed to think. It is totally worthless right now.
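For anyone who wants to verify this for themselves, a quick sanity check of a downloaded Climate at a Glance export might look like the sketch below. The file name and column names are assumptions for illustration, not the actual CAG export format; adapt them to whatever the site produces:

```python
import csv

# Hypothetical check for the pattern Wyo Skeptic describes:
# min == avg == max, or avg sitting exactly midway between min and max.
# 'cag_export.csv' and the column names are assumed, not real CAG fields.
with open('cag_export.csv', newline='') as f:
    for row in csv.DictReader(f):
        tmin, tavg, tmax = (float(row[k]) for k in ('tmin', 'tavg', 'tmax'))
        if tmin == tavg == tmax:
            print(row['date'], 'min/avg/max identical')
        elif abs((tavg - tmin) - (tmax - tavg)) < 1e-6:
            print(row['date'], 'avg exactly midway between min and max')
```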
JC comments
As Wayne Eskridge writes, this issue is a political hot potato. I hope that the NCDC scientists are taking this more seriously than the statement from the Press Office reflects, and that NCDC comes forward with a more meaningful response to the concerns that have been raised.
I’m hoping we will see a more thorough evaluation of the impact of individual modifications to the raw data for individual stations and regions, and a detailed comparison of Berkeley Earth with the NOAA USHCN data sets. We can look forward to some posts by Zeke Hausfather on this topic.
A new paper by the NOAA group has been published, unfortunately behind a paywall: Improved Historical Temperature and Precipitation Time Series for U.S. Climate Divisions (you can read the abstract at the link). The bottom line is that the v2 results are quite different from v1. Presumably v2 is better than v1, but a difference this large reflects the underlying structural uncertainty in the methods used to produce fields of surface temperature. When the adjustments are of the same magnitude as the trend you are trying to detect, that structural uncertainty inspires little confidence in the trends.
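A toy calculation shows why this matters. The numbers below are entirely synthetic, not USHCN data: a century-long series with an underlying trend of about 0.5 C/century, to which a hypothetical 0.4 C step correction is applied.

```python
import numpy as np

# Toy illustration (synthetic data, not USHCN): when the adjustment applied
# to a series is as large as the trend being estimated, the choice of
# adjustment method dominates the estimated trend.
rng = np.random.default_rng(0)
years = np.arange(1900, 2014)
raw = 0.005 * (years - years[0]) + rng.normal(0, 0.3, years.size)  # ~0.5 C/century

adjustment = np.where(years >= 1960, 0.4, 0.0)   # assumed 0.4 C step correction
adjusted = raw + adjustment

for name, series in (('raw', raw), ('adjusted', adjusted)):
    slope = np.polyfit(years, series, 1)[0] * 100  # degrees per century
    print(f'{name}: trend = {slope:+.2f} C/century')
```

In this synthetic example the step correction alone changes the fitted trend by an amount comparable to the trend itself, which is exactly the situation in which the structural uncertainty of the adjustment method dominates the result.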
NOAA needs to clean up these data sets. Most importantly, better estimates of uncertainty in these data are needed, including the structural uncertainty associated with different methods (past and present) for producing the temperature fields.
UPDATE: Brandon Shollenberger has a very helpful post, Laying the Points Out, which clarifies the four different topics related to the USHCN data set that people are talking about/criticizing.
