Is federal funding biasing climate research?

by Judith Curry
Does biased funding skew research in a preferred direction, one that supports an agency mission, policy or paradigm?

There is much angst in the scientific and policy communities over Congressional Republicans’ efforts to cut NASA’s Earth Science Budget, and also the NSF Geosciences budget.  Marshall Shepherd has a WaPo editorial defending the NASA Earth Science Budget.
Congressional Republicans are being decried as ‘anti-science’.  However, since they are targeting Earth Science budgets and reallocating the funds to other areas of science, ‘anti-science’ does not seem to be an apt description.  I suspect that an element contributing to these cuts is Congressional concern about political bias being injected into the NASA and NSF geosciences and social science research programs, notably related to climate change.
There is much discussion and angst over industrial funding of climate research (see my post on the Grijalva inquisition), but there seems to have been little investigation of the potential for federal research funding to bias climate research – a source of funding that is many orders of magnitude larger than industrial funding of climate research.
New report from CATO
CATO has published a very interesting analysis by David Wojick and Pat Michaels entitled Is the Government Buying Science or Support? A Framework Analysis of Federal Funding-induced Biases.  The report is a framework for future research on funding-induced bias, so it makes no specific allegations. It includes a taxonomy of 15 kinds of bias, with a focus on federal funding and examples from climate change science.
From the Executive Summary:
Science is a complex social system and funding is a major driver. In order to facilitate research into Federal funding and bias, it is necessary to isolate specific kinds of bias. Thus the framework presented here is a taxonomy of funding-induced bias.
Whatever the reason for the present bias research focus on commercial funding, the fact remains that the Federal Government funds a lot of research, most of it directly related to agency missions, programs and paradigms. In some areas, especially regulatory science, Federal funding is by far the dominant source.
Clearly the potential for funding-induced bias exists in these cases. It should be noted that we make no specific allegations of Federal funding induced bias. We do, however, point to allegations made by others, in order to provide examples. Our goal here is simply to provide a conceptual framework for future research into scientific biases that may be induced by Federal funding.
Here is an example of how [cascading amplification of funding bias] might work.

  1. An agency receives biased funding for research from Congress.
  2. They issue multiple biased Requests for Proposals (RFPs), and
  3. Multiple biased projects are selected for each RFP.
  4. Many projects produce multiple biased articles, press releases, etc.,
  5. Many of these articles and releases generate multiple biased news stories, and
  6. The resulting amplified bias is communicated to the public on a large scale.

In the climate change debate there have been allegations of bias at each of the stages described above. Taken together this suggests the possibility that just such a large scale amplifying cascade has occurred or is occurring. Systematic research is needed to determine if this is actually the case.
The notion of cascading systemic bias, induced by government funding, does not appear to have been studied much. This may be a big gap in research on science. Moreover, if this sort of bias is indeed widespread then there are serious implications for new policies, both at the Federal level and within the scientific community itself.
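To get a feel for how such a cascade could amplify, here is a minimal back-of-the-envelope sketch; the fan-out numbers are purely hypothetical and are not taken from the report.

```python
# Hypothetical fan-out factors for each stage of the cascade described above;
# none of these numbers come from the Wojick/Michaels report.
rfps_per_program = 5       # biased RFPs issued under one biased funding line
projects_per_rfp = 10      # biased projects selected per RFP
articles_per_project = 3   # biased articles / press releases per project
stories_per_article = 4    # biased news stories per article or release

biased_stories = (rfps_per_program * projects_per_rfp
                  * articles_per_project * stories_per_article)
print(f"One biased funding decision -> {biased_stories} biased news stories")
# One biased funding decision -> 600 biased news stories
```

The point of the sketch is simply that the stages multiply, so even modest bias at each step can produce a large volume of biased material at the end of the chain.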
Potential Practices of Funding-Induced Bias
1. Funding agency programs that have a biased focus. In some cases Congress funds research programs that may be biased in their very structure. For example, by ignoring certain scientific questions that are claimed to be important, or by supporting specific hypotheses, especially those favorable to the agency’s mission or policies.
2. Agency Strategic Plans, RFPs, etc., with an agenda, not asking the right questions.  Research proposals may be shaped by agency Strategic Plans and Requests for Proposals (RFPs), also called Funding Opportunity Announcements (FOAs). These documents often specify the scientific questions that the agency deems important, hence worthy of funding. Thus the resulting research proposals may be biased, speaking to what the agency claims is important rather than what the researcher thinks is important.
3. Biased peer review of research proposals. This bias may involve rejecting ideas that appear to conflict with the established paradigm, funding agency mission, or other funding interest. See also Bias #6: Biased peer review of journal articles and conference presentations.
4. Biased selection of research proposals by the agency program. The selection of proposals is ultimately up to the agency program officers. As with the selection of peer reviewers, there is some concern that some funding agencies may be selecting research proposals specifically to further the agency’s policy agenda.
5. Preference for modeling using biased assumptions.  The use of computer modeling is now widespread in all of the sciences. There is a concern that some funding agencies may be funding the development of models that are biased in favor of outcomes that further the agency’s policy agenda.
6. Biased peer review of journal articles and conference presentations. This issue is analogous to the potential bias in peer review of proposals, as discussed above. As in that case, this bias may involve rejecting ideas that conflict with the established paradigm, agency mission, or other funding interests.
7. Biased meta-analysis of the scientific literature. Meta-analysis refers to studies that purport to summarize a number of research studies that all address the same research question. For example, meta-analysis is quite common in medical research, where the results of a number of clinical trials of the same drug are combined (a minimal numerical sketch follows this list).
8. Failure to report negative results. This topic has become the subject of considerable public debate, especially within the scientific community. Failure to report negative results can bias science by supporting research that perpetuates questionable hypotheses.
9. Manipulation of data to bias results.  Raw data often undergoes considerable adjustment before it is presented as the result of research. There is a concern that these adjustments may bias the results in ways that favor the researcher or the agency funding the research.
10. Refusing to share data with potential critics.  A researcher or their funding agency may balk at sharing data with known critics or skeptics.
11. Asserting conjectures as facts. It can be in a researcher’s, as well as their funding agency’s, interest to exaggerate their results, especially when these results support an agency policy or paradigm. One way of doing this is to assert as an established fact what is actually merely a conjecture.
12. False confidence in tentative findings.  Another way for researchers, as well as their funding agencies, to exaggerate results is to claim that they have answered an important question when the results merely suggest a possible answer. This often means giving false confidence to tentative findings.
13. Exaggeration of the importance of findings by researchers and agencies.  Researcher and agency press releases sometimes claim that results are very important when they merely suggest an important possibility, which may actually turn out to be a dead end. Such claims may tend to bias the science in question, including future funding decisions.
14. Amplification of exaggeration by the press.  The bias due to exaggeration in press releases and related documents described above is sometimes, perhaps often, amplified by overly enthusiastic press reports and headlines.
15. More funding with an agenda, building on the above, so the cycle repeats and builds. The biased practices listed above all tend to promote more incorrect science, with the result that research continues in the same misguided direction. Errors become systemic by virtue of a biased positive feedback process. The bias is systematically driven by what sells, and critical portions of the scientific method may be lost in the gold rush.
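Since meta-analysis comes up in item 7 above, here is a minimal sketch of the standard fixed-effect (inverse-variance weighted) combination such studies typically rely on; the effect sizes and standard errors are made-up illustrative numbers, not drawn from any actual study.

```python
import math

# Fixed-effect meta-analysis: inverse-variance weighted average of study effects.
# The numbers below are purely illustrative.
effects = [0.30, 0.10, 0.25]    # estimated effect size from each study
std_errs = [0.10, 0.05, 0.08]   # standard error of each estimate

weights = [1.0 / se ** 2 for se in std_errs]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f}")
```

The arithmetic itself is mechanical; the room for bias lies in which studies get included and how they are weighted.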
For each of these 15 practices, the report includes:

  • Concept analysis
  • Literature snapshot
  • Research directions and prospects for quantification
  • Climate debate examples

Based on my own experience and analysis, I find the following to be potentially the most important: 1, 2, 3, 4, 6, 7, 10, 13, 15.
Turning the tables
Christopher Monckton has sent a letter to Harvard, details at WUWT.  Excerpt:
Two of the co-authors of the commentary, Buonocore and Schwartz, are researchers at the Harvard T.H. Chan School of Public Health. Your press release quotes Buonocore thus: “If EPA sets strong carbon standards, we can expect large public health benefits from cleaner air almost immediately after the standards are implemented.” Indeed, the commentary and the press release constitute little more than thinly-disguised partisan political advocacy for costly proposed EPA regulations supported by the “Democrat” administration but opposed by the Republicans. Harvard has apparently elected to adopt a narrowly partisan, anti-scientific stance.
The commentary concludes with the words “Competing financial interests: The authors declare no competing financial interests”. Yet its co-authors have received these grants from the EPA: Driscoll $3,654,609; Levy $9,514,391; Burtraw $1,991,346; and Schwartz (Harvard) $31,176,575. The total is not far shy of $50 million.
Would the School please explain why its press release described the commentary in Nature Climate Change by co-authors including these lavishly-funded four as “the first independent, peer-reviewed paper of its kind”?
Would the School please explain why Mr Schwartz, a participant in projects grant-funded by the EPA in excess of $31 million, failed to disclose this material financial conflict of interest in the commentary?
Would the School please explain the double standard by which Harvard institutions have joined a chorus of public condemnation of Dr Soon, a climate skeptic, for having failed to disclose a conflict of interest that he did not in fact possess, while not only indulging Mr Schwartz, a climate-extremist, when he fails to declare a direct and substantial conflict of interest but also stating that the commentary he co-authored was “independent”?
Well, this is an interesting case. Is it hard to understand why Schwartz has received a lot of EPA funding? Can you imagine EPA funding Willie Soon to do any kind of research? Or me, for that matter? (Apart from the issue that I have no interest in responding to EPA’s requests for proposals.) This seems like a pretty clear example of conflict of interest, one that fits very well with the Wojick/Michaels analysis. And the $50M from EPA makes the $1.2M that Soon received over a decade seem like pocket change. Finally, while Schwartz has made public statements in support of EPA policies, I don’t recall Soon making public statements supporting Southern Company’s policies.
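For what it is worth, the grant figures quoted in the Monckton letter above do sum to just under $50 million; a quick check (figures copied from the letter):

```python
# EPA grant totals as quoted in the Monckton letter above (USD)
grants = {
    "Driscoll": 3_654_609,
    "Levy": 9_514_391,
    "Burtraw": 1_991_346,
    "Schwartz": 31_176_575,
}
total = sum(grants.values())
print(f"Total: ${total:,}")                        # Total: $46,336,921
print(f"Versus Soon: {total / 1_200_000:.0f}x")    # roughly 39x Soon's $1.2M
```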
JC reflections
In my recent essay Conflicts of interest in climate science, I completely missed this issue related to federal funding, but now this seems so obvious and resonates very much with my own experiences and observations.  Wojick and Michaels have opened up what I hope will be a very fruitful and illuminating line of inquiry.
The challenge for the federal funding agencies is this: how to fund mission-relevant ‘use-inspired research’ (e.g. Pasteur’s quadrant) without biasing the research outcomes.
Here is how $$ motivates what is going on.  ‘Success’ to individual researchers, particularly at the large state universities, pretty much equates to research dollars – big lab spaces, high salaries, institutional prestige, and career advancement (note, this is not so true at the most prestigious universities, where peer recognition is the biggest deal).  At the Program Manager level within a funding agency, ‘success’ is reflected in growing the size of your program (e.g. more $$) and having some high profile results (e.g. press releases).  At the agency level, ‘success’ is reflected in growing, or at least preserving, your budget.  Aligning yourself, your program, your agency with the political imperatives du jour is a key to ‘success’.
Perhaps the Republican distrust of the geosciences and social sciences can be repaired if the agencies, programs and scientists work to demonstrate that they are NOT biased, by funding a broader spectrum of research that challenges the politically preferred outcomes.
