by Judith Curry
Reflections on Nic Lewis’ audit of the Resplandy et al. paper.
In response to Nic Lewis’ two blog posts critiquing the Resplandy et al. paper on ocean temperatures, co-author Ralph Keeling acknowledges the paper’s errors with these statements:
Scripps news release: Note from co-author Ralph Keeling Nov. 9, 2018: I am working with my co-authors to address two problems that came to our attention since publication. These problems, related to incorrectly treating systematic errors in the O2 measurements and the use of a constant land O2:C exchange ratio of 1.1, do not invalidate the study’s methodology or the new insights into ocean biogeochemistry on which it is based. We expect the combined effect of these two corrections to have a small impact on our calculations of overall heat uptake, but with larger margins of error. We are redoing the calculations and preparing author corrections for submission to Nature.
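A general statistical aside here, not a description of the authors' actual calculation: the reason mishandling systematic errors matters is that independent random errors average down as measurements are combined, whereas a systematic error common to all the measurements does not. For $n$ measurements with independent random error $\sigma_r$ and a shared systematic error $\sigma_s$, the uncertainty of the mean is roughly

$$\sigma_{\bar{x}} \approx \sqrt{\frac{\sigma_r^2}{n} + \sigma_s^2}$$

so treating a systematic error as if it were random understates the overall uncertainty, and correcting that treatment widens the error margins — consistent with Keeling's note above about larger margins of error.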
From the Washington Post:
“Unfortunately, we made mistakes here,” said Ralph Keeling, a climate scientist at Scripps, who was a co-author of the study. “I think the main lesson is that you work as fast as you can to fix mistakes when you find them.”
“I accept responsibility for what happened because it’s my role to make sure that those kind of details got conveyed,” Keeling said.
“Maintaining the accuracy of the scientific record is of primary importance to us as publishers and we recognize our responsibility to correct errors in papers that we have published,” Nature said in a statement to The Washington Post. “Issues relating to this paper have been brought to Nature’s attention and we are looking into them carefully. We take all concerns related to papers we have published very seriously and will issue an update once further information is available.”
From the San Diego Tribune:
“When we were confronted with his insight it became immediately clear there was an issue there,” he said. “We’re grateful to have it be pointed out quickly so that we could correct it quickly.”
“Our error margins are too big now to really weigh in on the precise amount of warming that’s going on in the ocean,” Keeling said. “We really muffed the error margins.”
Ralph Keeling prepared a guest post at RealClimate, explaining the issues from their perspective.
We would like to thank Nicholas Lewis for first bringing an apparent anomaly in the trend calculation to our attention.
Ralph Keeling behaved with honesty and dignity by publicly admitting these errors and thanking Nic Lewis.
Such behavior shouldn’t be news, however; it is how all scientists should behave, always.
Imagine how the course of climate science and the public debate on climate change would be different if Michael Mann had behaved in a similar way in response to McIntyre and McKitrick’s identification of problems with the hockey stick analysis.
Hostile environment
In the WaPo article, Gavin Schmidt made the following statement:
“The key is not whether mistakes are made, but how they are dealt with — and the response from Laure and Ralph here is exemplary. No panic, but a careful reexamination of their working — despite a somewhat hostile environment,” he wrote.
“No panic.” Why would anyone panic over something like this? After a big press release, the visibility of such an error is substantially magnified. Embarrassing, sure (a risk of issuing a press release), but cause for panic? Keeling is right: best to fix it as quickly as possible.
The Climategate emails revealed a lot of ‘panic’ over criticisms of hockey team research. The motives for the panic appeared to be some combination of fear for career ambitions, potential damage to a political agenda, and basic tribal warfare against climate skeptics whom they regarded as threatening their authority.
“Hostile environment.” Exactly what is ‘hostile’ about an independent scientist auditing a published paper, politely contacting the authors for a response and then posting the critique on a blog?
Perhaps Gavin is referring to the minor media attention given to their mistake, after their big press release and substantial MSM attention? GWPF is bemoaning the lack of attention to this error in the British media [link].
Or perhaps this is a figment of Gavin’s personal sensitivities, and the general strategy of the RC wing of the climate community to circle the wagons in the context of an adversarial relationship with anyone from outside the ‘tribe’ who criticizes their science. I know how this all works, given their ‘help’ during the hurricane and global warming wars circa 2005/2006, which left me feeling rather paranoid about being criticized by the ‘fossil fuel funded deniers’ and all that.
Gavin seems to be ‘managing’ the Resplandy situation to some extent (Ralph Keeling has not hitherto posted at RealClimate), and this management does not include any cooperation with Lewis, although Keeling was gracious enough to thank Nic.
Gavin’s view of such hostilities is illustrated by his response to Nic’s critique of the Marvel et al. paper: a rather contentious blog post, with two subsequent updates that admitted Lewis was partially correct. Two errors in the Marvel et al. paper were subsequently corrected. Was Lewis thanked? No way; he is treated to classic Gavin snark:
But there has also been an ‘appraisal’ of the paper by Nic Lewis that has appeared in no fewer than three other climate blogs (you can guess which).
I should be clear that we are not claiming infallibility and with ever-closer readings of the paper we have found a few errors and typos ourselves which we will have corrected in the final printed version of the paper.
Lewis in subsequent comments has claimed without evidence that land use was not properly included in our historical runs [Update: This was indeed true for one of the forcing calculations]
When there are results that have implications for a broad swath of other papers, it’s only right that the results are examined closely. Lewis’ appraisal has indeed turned up two errors, and suggested further sensitivity experiments.
According to Nic, Gavin’s assertion that his claim regarding land use forcing in the historical runs was made without evidence was itself blatantly untrue; Nic had published a detailed statistical analysis indicating, correctly, that land use forcing had been omitted from their total historical-run forcing values.
Subsequently, the LC18 paper provided a published critique of key aspects of the Marvel et al. paper.
This blog post by Gavin gives a sense of the ‘hostile environment’ faced by independent scientists who evaluate climate science papers. Scientists should welcome discussion of their research and being pointed to any errors. Disagreement should be the spice of academic life; it is what drives science forward. However, when a political agenda and careerism enter into the equation, we have a different story. For an overview of the really hostile environment faced by McIntyre and McKitrick over the hockey stick, see Andrew Montford’s book The Hockey Stick Illusion.
So please, let’s stop whining about a ‘hostile environment’ and get on with our research in an open, honest and collegial way, giving credit where due.
Peer review
From the SD Tribune article:
While papers are peer reviewed before they’re published, new findings must always be reproduced before gaining widespread acceptance throughout the scientific community, said Gerald Meehl, a climate scientist at the National Center for Atmospheric Research in Boulder, Colorado.
“This is how the process works,” he said. “Every paper that comes out is not bulletproof or infallible. If it doesn’t stand up under scrutiny, you review the findings.”
Of course this is how things are supposed to work. This whole episode is being held up as an example of the self-correcting nature of science.
When I first saw the Resplandy paper, it didn’t pass the sniff test from my perspective: a new and inexact method was producing estimates that exceeded the ranges from analyses of in situ observations of ocean temperatures. Apparently the coauthors and Nature peer reviewers had no such concerns.
The Resplandy paper lists 9 coauthors, presumably all of whom read the entire paper and were prepared to defend it. Other than Keeling, I am not familiar with any of these coauthors, but it seems that none have any expertise in data analysis and statistics.
From my own experience, particularly when I have had a mentoring role with the first author (e.g., my postdoc or another young scientist), as a coauthor I am trying to help them get their paper published, preferably in a high-profile journal, and get some publicity for their work, so that they can advance their career and be successful with their job applications. Young scientists seem to think (probably correctly) that having a senior, well-known coauthor on their paper helps the chances for publication and publicity. I have to say that in my mentoring role as a faculty member, I ended up feeling conflicted about several papers I was a coauthor on, torn between my role as a mentor and my duty to be able to defend all aspects of the paper. At some point, I started declining to add my name as coauthor and simply donated my time to improving the paper. Career suicide, but at that point I already had one foot out the academia door.
Now for the external reviewers selected by Nature. Imagine if the Resplandy paper had identified a smaller trend than that from conventional observations – the reviewers would have been all over this. Roy Spencer writes:
If the conclusions of the paper support a more alarmist narrative on the seriousness of anthropogenic global warming, the less thorough will be the peer review. I am now totally convinced of that. If the paper is skeptical in tone, it endures levels of criticism that alarmist papers do not experience. I have had at least one paper rejected based upon a single reviewer who obviously didn’t read the paper…he criticized claims not even made in the paper.
Early in my career I spent a great deal of time reviewing papers and grant proposals, and actually put considerable effort into making constructive suggestions to help make the paper/proposal better. Why? Because I wanted to see the outcomes and learn from them, and for science to move forward. I had a cooperative and helpful attitude towards all. In the mid-1990s, my rose-colored glasses got busted, when I was working on a committee and did 90% of the work on a major document, only to end up as second author and squeezed out of the major funding. I realized that I was in competition for credit, recognition and funding, and that my ideas and hard work could effectively be stolen. This changed a lot of my attitudes, and looking back, this is when I first stopped liking my job as a professor so much.
Frank Jablonsky tweets:
Effective peer review is usually very time consuming & uncomfortable, so it isn’t often done outside of conflicts between keen adversaries.
The ‘keen adversaries’ part is key; papers supporting consensus perspectives pretty much get a free ride through the peer review process. Anything challenging the consensus gets either a rigorous review or a rejection (or is not even sent out for review), often for ancillary reasons not directly related to the substance of the paper.
At this point in my career, I respond to relatively few requests to review journal articles; since I am only publishing ~1 paper per year at this point, I figure I don’t owe the ‘system’ more than a few reviews per year. If I do accept a request to review a paper, it is probably because I know of the author and like their work; I am interested to see what they have to say and happy to help improve the paper if I can.
I probably review more papers for journalists, who send an embargoed copy of a ‘hot’ new paper and ask for my comments. I respond to as many of these requests as I have time for; they seem to come in spurts (I haven’t had any in a while, now that I am ‘retired’). I am typically sent these papers to review because the journalist is interested in an adversarial perspective. And these reviews are prepared after the horse has left the barn (i.e., the paper is already published).
Much has been written about the problems of peer review. There is an interesting new paper: In peer review we (don’t) trust: how peer review’s filtering poses a systemic risk to science.
This article describes how the filtering role played by peer review may actually be harmful rather than helpful to the quality of the scientific literature.
Well, I have to say that I don’t know what is actually accomplished by journal peer review at this point. Academic scientists don’t get any credit or kudos for reviewing, so many do a quick and shoddy job. The end result in the climate field is gatekeeping and consensus enforcement, which is detrimental to the advancement of science.
Extended peer review
Owing to the relatively free ride that consensus-supporting and more alarming climate science papers typically seem to get in the review process, particularly at high-profile journals with press embargoes, critical scrutiny is increasingly coming from technically educated individuals outside the field of professional climate science, most without any academic affiliation.
Of course, the godfather of extended peer review in the climate field is Steve McIntyre. It’s hard to imagine what the field of paleoclimatological reconstructions for the past two millennia would look like had McIntyre & McKitrick not happened onto the scene.
Regarding Nic Lewis, the extended peer reviewer du jour, he states it best himself in the WaPo article:
Lewis added that he tends “to read a large number of papers, and, having a mathematics as well as a physics background, I tend to look at them quite carefully, and see if they make sense. And where they don’t make sense — with this one, it’s fairly obvious it didn’t make sense — I look into them more deeply.”
Here is the issue. There are some academic climate scientists who have expertise in statistics comparable to Nic Lewis’s. However, I will wager that exactly none of them would have had the time or inclination to dig into the Resplandy paper in the way that Nic Lewis did. While many scientists may have reacted as I did, thinking the paper failed the sniff test, nothing would have been done about it, and people who liked the result would have cited the paper (heck, they ‘found’ Trenberth’s missing heat).
So Nic Lewis’ identification of the problem does not imply that the so-called ‘self-correcting process’ of institutional science is working. It is only working because of the highly skilled and dedicated efforts of a handful of unpaid and unaffiliated scientists auditing those papers that come to their attention and that they have time to investigate. Erroneous papers outside their fields of interest, or which do not make the necessary data available, are likely to escape detailed scrutiny. Moreover, in many cases it is impracticable to audit a paper’s results unless the computer code used to produce them has been made available, which is very often not the case. The single most important way of making institutional science more self-correcting would be for all journals to insist on turnkey code, with all necessary data, being publicly archived by the time the paper is published online by the journal.
A large number of articles have been written about this incident (very few in the MSM, though). Nic is referred to as a lukewarmer, a skeptic, a denier, a fringe scientist, etc. There is an apparent need to label Nic with an adversarial moniker, even while complimenting him for his work. The same goes for Steve McIntyre, and anyone else who criticizes a paper that feeds the consensus or alarmist narratives. McKitrick and I are in a slightly different category in terms of labeling, owing to our academic positions.
Science as a tribal activity, with adversarial tribes fighting for the dominant narrative so as to influence global climate and energy policy, is not healthy for science. Speaking for myself, and based on my impressions of Nic Lewis and Steve McIntyre over the past decade, there is no ‘activist’ motivation behind our critical evaluation of climate science in general or our auditing of particular papers, beyond a general sense that good policy is based on accurate science and an appropriate assessment of uncertainty.
An interesting comment appeared at RealClimate:
Finally, this episode demonstrates, as have many others over the last 30 years, the role of “gentleman” scientists. In the 18th century most scientists were of this type. As science got bigger, academia became the preferred profession of those aspiring to be scientists. After WWII, science became big business and scientists were often a blend of entrepreneur, public relations flack, and manager of large teams of postdocs and students, with little time left over for actual technical work. The most prolific publishers cannot even have read all the papers on which they are authors, much less checked any of the results.
Perhaps the scientific community needs to more wisely use the often free services of “gentlemen” scientists, those who are in retirement, and particularly professional statisticians. It continues to amaze me that most science outside medicine seems to avoid placing a professional statistician on the team and listening to him.
Franktoo writes in the CE comments:
IMO, the fact that auditing by Nic Lewis and Steve McIntyre has turned up so many problems (real problems as best this biased individual can tell) suggests that you and the whole climate science community should be deeply concerned about confirmation bias during peer review. However, that is another subject that can’t be publicly discussed without it reaching the conservative press and skeptical blogs.
Virtually all peer reviewers don’t have time to do even a cursory check of the work other than reading it for obvious problems and conflicts with already published papers. If peer reviewers were paid and expected to devote at least a couple of weeks to each review, the quality would be higher. The real problem here is that 90% of what is published is not worthy of the paper it’s printed on.
That’s why citizen scientists are exceptionally valuable.
The value of such analyses being conducted by independent scientists is substantial. Although the heyday of the technical climate blogs seems past, they remain the essential forum for such discussion and auditing. Efforts to institutionalize this kind of auditing with a recognized red team were thwarted by politicization of the issue, and by a failure to recognize what really drives the auditing of climate research and what makes it work. The Resplandy et al. paper seems to have revitalized the technical climate blogosphere somewhat; it has been ages since I visited RealClimate.
Fixing it – or not
Resplandy et al. are to be commended for jumping on this and addressing the problems as quickly as they can (apparently, the RealClimate post contains the essence of what they sent to Nature). It remains to be seen how Nature addresses this, particularly as Nic has identified at least one problem not dealt with by the authors in their Corrigendum (Part 3).
While on the topic of ‘fixing it’, I must mention Steve McIntyre’s latest post, PAGES2K: North American Tree Ring Proxies. I have long declared CE to be a tree-ring free zone, basically because I have not really delved into this topic and SM has done such a good job. But here is what caught my attention. PAGES is an international group of paleoclimatologists that is a partner of the World Climate Research Programme and is funded by the US National Science Foundation and the Swiss Academy of Sciences. The 2017 PAGES paper lists about 80 coauthors. After auditing this paper (and the 2013 PAGES paper) and the proxies used, McIntyre concludes the following:
- PAGES 2013 and PAGES 2017 perpetuate the use of Graybill stripbark chronologies – despite the recommendation of the 2006 NAS Panel that these problematic series be “avoided” in future reconstructions.
There is no hockey stick without the Graybill stripbark chronologies. Without having the background, or having put in the effort, to personally evaluate any of this, I’m simply asking: can anyone explain how and why the PAGES team has justified using bristlecone strip bark chronologies, given the 2006 National Academies Panel recommendation that they not be used (not to mention the MM criticisms)? If this problem is as bad as SM states, the whole field of tree ring paleoclimatology appears to be deluded (or worse).
Conclusions
By quickly admitting mistakes and giving credit where due, Ralph Keeling has done something unusual and laudable in the field of climate science. If all climate scientists behaved this way, there would be no ‘hostile environment.’
I find it a sad state of affairs when a scientist admitting mistakes gets more kudos than the scientist who actually found the mistakes. But given the state of climate science, I guess finding mistakes is a more common story than a publishing scientist actually admitting to them.
Given the importance of auditing climate research, and of independent climate scientists working outside institutional frameworks, I wish there were some way to encourage more of this. In the absence of recognition and funding, I don’t have much to suggest, other than providing a home for such analyses at Climate Etc.
My huge thanks to Nic Lewis for his efforts, to the other guest posters at CE, and to all the denizens who enrich these analyses with their comments and discussion.