Advocacy research, incentives and the practice of science

by Judith Curry
There is a problem with the practice of science. Because of poor scientific practices and improper incentives, few papers with useful scientific findings are published in leading journals. The problem appears to be growing due to funding for advocacy research.

J. Scott Armstrong and Kesten Green have written an important new paper Guidelines for Science: Evidence and Checklists.  I encourage you to read the whole paper.
Here are some excerpts that I think are particularly important:
Advocacy Research
Funding for researchers is often provided to gain support for a favored hypothesis. Researchers are also rewarded for finding evidence that supports hypotheses favored by senior colleagues. These incentives lead to what we call “advocacy research,” an approach that is contrary to the definition of science. In addition, university researchers are typically rewarded with selection and promotion on the basis of their performance against measures that have the effect of distracting them from doing useful scientific research.
Advocacy research can be the product of a genuine belief that one’s preferred hypothesis must be true, thus blinding the researcher to alternatives. The single-minded pursuit of support for a favored hypothesis has also been referred to as “confirmation bias”. The inability to consider alternatives appears to be a common problem even for scientists. Journal reviewers often act as advocates by recommending the rejection of papers that challenge popular theories. 
Distracting incentives
Researchers in universities are typically subject to incentives that are unrelated to or detrimental to Franklin’s call for useful research. In particular, university administrators reward researchers for obtaining grants and other funding, and for publishing papers in high-status journals.
There is little reason to believe that committees of officials in governments, corporations, or foundations can and do identify projects that would lead to useful scientific findings better than individual researchers can and do. Creativity is an individual activity, and designing research projects is better left to the scientists who know best how to design research in their own areas of expertise.
Obtaining funding is an expensive exercise, and this reduces the money and time researchers have available for doing useful research. Finally, if you do succeed in obtaining funding, you are likely to lose some freedom, as we discuss below.
The number of papers published in academic journals is a poor measure of useful scientific output. Many papers address trivial problems.
Effects on science
Armstrong and Hubbard (1991) conducted a survey of editors of American Psychological Association (APA) journals that asked: “To the best of your memory, during the last two years of your tenure as editor of an APA journal, did your journal publish one or more papers that were considered to be both controversial and empirical? (That is, papers that presented empirical evidence contradicting the prevailing wisdom.)” Sixteen of the 20 editors replied: seven could recall none, four said there was one, three said there was at least one, and two said that they had published several such papers.
Fortunately, it occurs to some researchers and to some research organizations that their proper objective is to produce useful scientific findings. As a result, one can look in almost any area and find useful scientific research. Our concern in this paper is not the absence of important papers, but rather their infrequency. That concern is related to what Holub, Tappeiner, and Eberharter (1991)—referring to the field of economics—called the Iron Law of Important Papers: Rapid increases in government funding have increased the number of papers published but seem to have had little effect on the number of papers with useful scientific findings.
Operational guidelines for scientists
The authors present comprehensive operational guidelines for scientists.  Here, I select some text that I feel makes particularly important points:
The way a problem is stated limits the search for solutions. To avoid that, state the problem in many different ways prior to searching for solutions, a technique known as “problem storming.” Then search for solutions for each problem.
Skepticism drives progress in science. Unfortunately, skepticism can also annoy other researchers and thus reduce opportunities for employment, funding, publication, and citations. Researchers in universities go to considerable lengths to ensure a common core of beliefs, as witnessed by the fact that over the past half century political conservatives have become rare in social science departments at leading U.S. universities, with the consequent loss of that source of skepticism toward fashionable ideas that are at odds with established economic principles.
It does little good to try to be as objective as possible. That is too vague. The solution suggested by Francis Bacon was to consider “any contrary hypotheses that may be imagined.” What information would cause you to conclude that your favored hypothesis was inferior to other hypotheses? If you cannot think of any information that would threaten belief in your preferred hypothesis, work on a different problem.    
Chamberlin (1890) observed that the fields of science that made the most progress were those that tested all reasonable hypotheses. Assess reasonableness generously, as Sir Francis Bacon suggested. The approach fosters objectivity. 
 
Using the Guidelines for Science
Adam Smith wondered why Scotland’s relatively few academics were responsible for many scientific advances during the Industrial Revolution, while England’s larger number of academics contributed little. He concluded that because the government provided them with generous support, academics in England had little motivation to do useful research. Modern universities around the world tend to be more like those of 18th Century England than they are like those of 18th Century Scotland. Should we expect different results?
Governments are inclined to support advocacy research and to suppress the speech of scientists who challenge that research. 
There is a long history of governments, civil and religious, suppressing the speech of scientists when that speech was politically inconvenient for them. In modern times, the Soviet government’s endorsement of Lysenko’s theories led to the persecution of agricultural experimenters whose findings did not support those theories (Miller, 1996). Currently, some scientists whose findings conflict with the U.S. government’s position on the global warming alarm have been threatened, harassed, fired from government and university positions, subjected to hacking of their websites, and threatened with prosecution under racketeering (RICO) laws (see, e.g., Curry, 2015).
Peer review
According to Burnham (1990), mandatory journal peer review was not common until sometime after World War II. Burnham concluded that mandatory journal peer review has been detrimental to science. The evidence supports Burnham. Consider that reviewers fail to reliably identify errors in papers. For example, Baxt, Waeckerle, Berlin, and Callaham (1998) sent a fictitious paper with 10 major and 13 minor errors to 262 reviewers. Of that number, 199 submitted reviews. On average, the reviewers identified only 23 percent of the errors. They missed some big errors; for example, 68 percent of the reviewers did not realize that the results did not support the conclusions.
In a similar study, Schroter et al. (2008) gave journal reviewers papers containing nine major intentional errors. The typical reviewer found only 2.6 (29 percent) of the errors. But most important is the evidence presented in their paper that reviewers seldom assess whether submitted papers present useful scientific findings.
JC reflections
The essay by Armstrong and Green is provocative on numerous fronts.  I have a few overarching comments.
First, I think that the definitions of science, scientific method and forecasting that they put forward are somewhat narrow when the natural/physical sciences are considered (they may be appropriate for the social sciences).  See in particular these previous essays at CE:

I think the issue, and definition, of ‘advocacy science’ is important. It seems that far too much of climate research (including what is funded by the U.S. government) falls into this bucket.
The issue of incentives for researchers is a huge problem, which was discussed most recently at CE in this post:

I like the ‘problem storming’ idea.  I explicitly adopted the ‘multiple working hypotheses’ strategy in my paper Mixing Politics and Science in Testing the Hypothesis That Greenhouse Warming Is Causing a Global Increase in Hurricane Intensity.  Unfortunately, natural variability is treated as insignificant noise in way too many climate science papers.
With regard to the research checklist, I think there are some good ideas and important points there.  However, I find it to be overly constraining and formulaic for the natural/physical sciences, particularly with regard to Bohr’s Quadrant.
And finally, specifically with regard to climate science, I think that the coupled advocacy and incentives issues raised here are very important and need to be more widely recognized in policy making, not to mention in assessment reports.