Peer review: silver bullet or lead balloon?

Climate Discussion Nexus | February 26, 2020

One argument too often used to smash resistance to global warming alarmism is that such-and-such a study was, or was not, “peer reviewed”. Laypersons may assume peer review means colleagues don white coats, go into the lab and redo the experiment themselves. But it doesn’t. Peer reviewers rarely check the data and almost never try to replicate the analysis. And the system is not working: Tsuyoshi Miyakawa’s agonized piece in the latest edition of Molecular Brain shows awareness spreading across any number of scientific fields of a “reproducibility crisis” in which even peer-reviewed work cannot be replicated by independent researchers, due to dodgy statistical practices, selective reporting and, Miyakawa laments, outright “data fabrication.” This crisis needs to be addressed for the sake of science generally, not just climate science. But in the meantime, drop the “peer review” juju and engage with arguments on their merits, please.
The dodgy practices are known as “HARKing” and “p-hacking.” For those of you behind the curve, “p-hacking”, at least as Wired tells it, means fiddling with the data analysis until you appear to generate a “statistically significant” result, one with less than a 5% chance of arising by pure chance, when in fact you have not. HARKing means “Hypothesizing After the Results are Known”, a practice at which we have repeatedly taken aim in this newsletter on topics from Australian fires to slowing ocean currents. And there is also the very real problem, which Miyakawa does not discuss, of peer review turning into pal review, in which reviewers known to the authors give their friends’ work a cursory read and a thumbs-up in return for their friends doing the same for theirs.
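If you want to see the mechanism for yourself, here is a minimal sketch of the crudest form of p-hacking (ours, not Miyakawa’s or Wired’s; it assumes Python with numpy and scipy installed). It simply keeps testing pure noise until a comparison crosses the 5% threshold, which one reliably does:

```python
# Illustrative sketch of naive p-hacking: keep running comparisons of
# pure noise and report only the first one that comes out "significant".
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

tries = 0
while True:
    tries += 1
    a = rng.normal(size=30)  # both groups are drawn from the SAME
    b = rng.normal(size=30)  # distribution, so no real effect exists
    _, p = ttest_ind(a, b)
    if p < 0.05:             # a "significant" false positive
        break

print(f"'Significant' p = {p:.3f} after {tries} noise-only comparisons.")
# With a 5% false-positive rate per test, roughly 1 - 0.95**20 (about 64%)
# of runs score a "hit" within 20 tests, even though nothing is there.
```

Run enough comparisons, report only the winner, and you can publish a “finding” from random static. That, in a nutshell, is why unreported multiple testing corrodes the literature.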
As for data fabrication, Miyakawa makes the chilling assertion that “As an Editor-in-Chief of Molecular Brain, I have handled 180 manuscripts since early 2017 and have made 41 editorial decisions categorized as ‘Revise before review,’ requesting that the authors provide raw data. Surprisingly, among those 41 manuscripts, 21 were withdrawn without providing raw data, indicating that requiring raw data drove away more than half of the manuscripts. I rejected 19 out of the remaining 20 manuscripts because of insufficient raw data. Thus, more than 97% of the 41 manuscripts did not present the raw data supporting their results when requested by an editor, suggesting a possibility that the raw data did not exist from the beginning, at least in some portions of these cases.”
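To spell out the arithmetic behind that “more than 97%”: 21 manuscripts withdrawn plus 19 rejected makes 40 of the 41 flagged manuscripts that never produced adequate raw data, and 40/41 ≈ 97.6%. Exactly one of the 41 survived the simple request to show the data underlying its results.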
If it’s that bad in a field where only personal ambition for professional advancement might drive researchers to bend the truth, imagine what it’s like in climate science, where zealous attachment to some desired policy end is also thumbing the scale. Then ask Michael Mann for the key data behind his famous hockey stick and see how far you get.
