by Judith Curry
Dan Kahan has an interesting blog post on scientists and motivated reasoning.
The link to Kahan’s blog post is [here]; most of it is reproduced below:
6. Professional judgment
Ordinary members of the public predictably fail to get the benefit of the best available scientific evidence when their collective deliberations are pervaded by politically motivated reasoning. But even more disturbingly, politically motivated reasoning might be thought to diminish the quality of the best scientific evidence available to citizens in a democratic society (Curry 2013).
Not only do scientists—like everyone else—have cultural identities. They are also highly proficient in the forms of System 2 reasoning known to magnify politically motivated reasoning. Logically, then, it might seem to follow that scientists’ factual beliefs about contested social risks are likely skewed by the stake they have in conforming information to the positions associated with their cultural groups.
But a contrary inference would be just as “logical.” The studies linking politically motivated reasoning with the disposition to use System 2 information processing have been conducted on general public samples, none of which would have had enough scientists in them to detect whether being one matters. Unlike nonscientists with high CRT or Numeracy scores, scientists use professional judgment when they evaluate evidence relevant to disputed policy-relevant facts. Professional judgment consists in habits of mind, acquired through training and experience and distinctively suited to specialized forms of decisionmaking. For risk experts, those habits of mind confer resistance to many cognitive biases that can distort the public’s perceptions (Margolis 1996). It is perfectly plausible to believe that one of the biases that professional judgment can protect risk experts from is “politically motivated reasoning.”
Here, too, neither values nor positions on disputed policies can help decide between these competing empirical claims. Only evidence can. To date, however, there are few studies of how scientists might be affected by politically motivated reasoning, and the inferences they support are equivocal.
Some observational studies find correlations between the positions of scientists on contested risk issues and their cultural or political orientations (Bolsen, Druckman, & Cook 2015; Carlton, Perry-Hill, Huber & Prokopy 2015). The correlations, however, are much less dramatic than those observed in general-population samples. In addition, with one exception (Slovic, Malmfors et al. 1995), these studies have not examined scientists’ perceptions of facts in their own domains of expertise.
This is an important point. Professional judgment inevitably comprises not just conscious analytical reasoning proficiencies but perceptive sensibilities that activate those proficiencies when they are needed (Bedard & Biggs 1991; Marcum 2012). Necessarily preconscious (Margolis 1996), these sensibilities reflect the assimilation of the problem at hand to an amply stocked inventory of prototypes. But because these prototypes reflect the salient features of problems distinctive of the expert’s field, the immunity from bias that professional judgment confers can’t be expected to operate reliably outside the domain of her expertise (Dane & Pratt 2007).
A study that illustrates this point examined legal professionals. In it, lawyers and judges, as well as a sample of law students and members of the public, were instructed to perform a set of statutory interpretation problems. Consistent with the PMRP design, the facts of the problems—involving behavior that benefited either illegal aliens or “border fence” construction workers, and either a pro-choice or a pro-life family counseling clinic—were manipulated in a manner designed to provoke responses consistent with identity-protective cognition in competing cultural groups. The manipulation had exactly that effect on members of the public and on law students. But it didn’t have that effect on judges and lawyers: despite the ambiguity of the statutes and the differences in their own cultural values, those study subjects converged in their responses, just as one would predict if one expected their judgments to be synchronized by the common influence of professional judgment. Nevertheless, this relative degree of resistance to identity-protective reasoning was confined to legal-reasoning tasks: the judges’ and lawyers’ respective perceptions of disputed societal risks—from climate change to marijuana legalization—reflected the same identity-protective patterns observed in the general public and student samples (Kahan, Hoffman, Evans, Lucci, Devins & Cheng in press). Extrapolating, then, we might expect to see the same effect in risk experts: politically motivated divisions on policy-relevant facts outside the boundaries of their specific field of expertise, but convergence guided by professional judgment inside of them.
Or alternatively we might expect convergence not on positions that are necessarily true but on ones so intimately bound up with a field’s own sense of identity that acceptance of them has become a marker of basic competence (and hence a precondition of recognition and status) within it. In Koehler (1993), scientists active in either defending or discrediting scientific proof of “parapsychology” were instructed to review the methods of a fictional ESP study. The result of the study was experimentally manipulated: half the scientists got one that purported to find evidence supporting ESP; the other half got one that purported to find evidence against it. The scientists’ assessments of the quality of the study’s methods turned out to be strongly correlated with the fit between the represented result and the scientists’ existing positions on the scientific validity of parapsychology—although Koehler found that this effect was in fact substantially more dramatic among the “skeptic” than the “non-skeptic” scientists.
Koehler’s study reflects the core element of the PMRP design: the outcome measure was the weight that members of opposing groups gave to one and the same piece of evidence conditional on the significance of crediting it. Because the significance was varied in relation to the subjects’ prior beliefs and not their stake in some goal independent of forming an accurate assessment, the study can be, and normally is, understood as a demonstration of confirmation bias. But obviously, the “prior beliefs” in this case were ones integral to membership in opposing groups, the identity-defining significance of which for the subjects was attested to by how much time and energy they had devoted to promoting public acceptance of their respective groups’ core tenets. Extrapolating, then, one might infer that professional judgment might indeed fail to insulate scientists whose professional identity has become strongly identified with particular factual claims from the biasing effects of identity-protective cognition.
So we are left with only competing plausible conjectures. There’s nothing at all unusual about that. Indeed, it is the occasion for empirical inquiry—which here would take the form of the PMRP design, or a design of equivalent validity, used to assess the vulnerability of scientists to politically motivated reasoning, both inside and outside the domains of their expertise, and with and without the pressure to affirm “professional-identity-defining” beliefs.
——–
Several papers from Kahan’s reference list are particularly relevant for climate science:
Curry, J. Scientists and Motivated Reasoning. Climate Etc. (Aug. 20, 2013) [link]
Koehler, J.J. The Influence of Prior Beliefs on Scientific Judgments of Evidence Quality. Org. Behavior & Human Decision Processes 56, 28-55 (1993). [link]
Abstract. This paper is concerned with the influence of scientists’ prior beliefs on their judgments of evidence quality. A laboratory experiment using advanced graduate students in the sciences (study 1) and an experimental survey of practicing scientists on opposite sides of a controversial issue (study 2) revealed agreement effects. Research reports that agreed with scientists’ prior beliefs were judged to be of higher quality than those that disagreed. In study 1, a prior belief strength × agreement interaction was found, indicating that the agreement effect was larger among scientists who held strong prior beliefs. In both studies, the agreement effect was larger for general, evaluative judgments (e.g., relevance, methodological quality, results clarity) than for more specific, analytical judgments (e.g., adequacy of randomization procedures). A Bayesian analysis indicates that the pattern of agreement effects found in these studies may be normatively defensible, although arguments against implementing a Bayesian approach to scientific judgment are also advanced.
JC reflections
I regard this as an extremely important line of research, and laboratory experiments are invaluable here. It would be WONDERFUL to see such experiments run on a spectrum of climate scientists. Too much of the social psychology surrounding climate change is the pernicious twaddle coming from Lewandowsky et al.
This bears on our recent discussion of how to debunk the 97% consensus meme. Not only do we need better surveys of climate scientists (good questions plus stratification across areas of expertise and ‘motivations’), but we also need to understand the social psychology of the consensus supporters versus the dissenters. Further, we need to understand the allegiance to the climate consensus of scientists (and professional societies) well outside the domain of climate science. Experiments conducted by social scientists who do not themselves have ‘motivations’ in the climate debate are needed. Given the lack of political diversity in the field of social psychology, as the writings of Jonathan Haidt and others at heterodoxacademy.org point out, this will not be easy to accomplish.
Here’s hoping that we will see some thoughtful studies on this in the near future.