The Denialism Frame

by Andy West
An inadequately testable and inappropriate framing.

1.  Introduction
Geoff Chambers, commenting recently in a Cliscep post, reminded me of the paper ‘Denialism: what is it and how should scientists respond?’ by Diethelm and McKee (D&M2009). Chambers calls this paper ‘the standard scientific work on Denialism’, and rightly so I think. Certainly the paper is quoted or referenced in support of many works1. Its principles also form the core of the wiki page for Denialism. Though the word ‘denialism’ existed prior to D&M2009, the paper appears to have contributed to its increasing usage4, along with academic legitimization. I found no in-depth analysis of the popular framing of ‘denialism’ as promoted by D&M2009, despite its impact on several domains, not least that of climate change. So my own analysis follows.
2.  Criteria for recognizing ‘denialism’
As noted, the wiki page for Denialism references D&M2009 in support of the assertion that denialism presents common features across topic domains, via which denialist behavior can be recognized. Wiki summarizes the same five characteristics proposed by the paper as follows5:

  1. Conspiracy theories — Dismissing the data or observation by suggesting opponents are involved in “a conspiracy to suppress the truth”.
  2. Cherry picking — Selecting an anomalous critical paper supporting their idea, or using outdated, flawed, and discredited papers in order to make their opponents look as though they base their ideas on weak research. [This is number 3 in D&M2009, and some sources point to cherry picking of data too].
  3. False experts — Paying an expert in the field, or another field, to lend supporting evidence or credibility. [This is number 2 in D&M2009].
  4. Moving the goalpost — Dismissing evidence presented in response to a specific claim by continually demanding some other (often unfulfillable) piece of evidence. [In D&M2009 this is framed more as an impossible standard of proof than a moving target, yet the essence is the same].
  5. Other logical fallacies — Usually one or more of false analogy, appeal to consequences, straw man, or red herring.

So identifying denialism is apparently as straightforward as testing the target individual or social group for the above characteristics. Yet D&M2009 provides no methodology for achieving this objectively, and there are major problems with simply attempting a direct assessment.
3.  Problem: direct assessment cannot reliably distinguish ‘realists’ from ‘denialists’
Conspiracy theories and logical fallacies often abound on both sides of a long-contested domain that has major social significance. So bullets 1 and 5 are unreliable criteria for who, overall, is ‘denying’ reality. This is because major social conflicts attract individuals with all sorts of beliefs and motivations; some of these people will generally back the evidential side, i.e. the ‘right’ side, yet for the wrong reasons and / or deploying the wrong arguments. Their impacts on the contest may be modest, e.g. as is often the case for folks with theoretical rather than emotively driven motivations2, or strong, e.g. typically from folks who’ve slipped into noble cause corruption3. In some cases a much more systemic promotion of the ‘right’ side, yet via culturally driven rather than evidentially driven arguments, will also occur because of cultural alliance effects.
So, consider a contested issue which features a largely evidential position, E, opposed mainly by religious believers. The religious side has a strong cultural alliance with a political party, X, which is hence pulled in on that side. This sparks a reaction whereby X’s political opponent, Z, weighs in on the evidential side, yet by default not with evidential arguments but instead deploying its regular range of cultural weapons, such as ‘folks who support the X party (or via association oppose E) have inferior brains’, a range which will typically include some conspiracy theory, logical fallacies and so on. Hence the ‘right’ side ends up inextricably tangled with various cultural promotion and defensive behaviors (footnote 6 illustrates this for the climate domain).
Due to these various effects (plus another immediately below), not only will conspiracy theories and logical fallacies arise on both sides of a socially contested domain, but likewise cherry picking and false experts too. The only underlying criterion that D&M2009 recommends, to which we might turn for some guidance regarding who is who within a contest featuring such mirrored behaviors, is that of a ‘dominant’ scientific consensus. The paper claims that the ‘right’ side must be the consensus side. Yet there is no acknowledgement of the difference between a scientific consensus and a social consensus, or that the latter can pose as the former7. Influence from an enforced social consensus increases the chances that scientists too will straddle the rift between sides, or maybe even end up mostly on the ‘wrong’ side. Authoritative, apparently settled science has been overturned many times8; scientists and policy makers are not magically separate from society, and like everyone else they are subject to dynamic bias patterns that evolve across their society, for instance emotional bias regarding climate issues.
4.  Problem: direct assessment cannot escape domain bias
Considering the above effects, one would expect most cherry picking to be the inadvertent result of bias, and probably subtle in nature. Yet even for more blatant cases, in a complex domain mired in claims and counter-claims to the nth degree, it can be difficult to correctly identify cherry-picked data without fairly extensive domain knowledge. Likewise the picking of ‘discredited papers’ is a subjective criterion. It depends upon believing those who did the discrediting and their reasons for doing so, which implies a prior judgment that can only be based upon reasonable domain knowledge (and/or bias). Indeed the very allegation of cherry picking could itself be a cherry pick, if for instance it only presents an unfavorable part of the original case. So the criteria that reveal evidence choices as cherry picks are themselves domain dependent, which tends to thwart objectivity.
It is likewise regarding experts. To reliably know whether an expert is ‘false’ or not requires domain knowledge. What they are paid and by whom is not on its own a definitive criterion (or even a major one; ideological bias often motivates more than money, though the two can also be aligned). Navigating the often labyrinthine funding paths within a contested domain can be almost as complex as evaluating direct domain evidence; the public certainly don’t have time for this, and interpretation of funding network influences is itself subject to bias and polarization. For a major contested domain one expects opposing networks, and there is no simple rule of thumb for interpreting them, such as: ‘scientists paid by industry are less reliable’. Via the grant funding circus, government scientists or university employees have just as much skin in the game as industry has via market influence. It’s also the case that where strong culture is present in a contested domain (absent this there wouldn’t likely be ‘denialism’ anyhow), individuals who are most domain knowledgeable, i.e. ‘experts’, are in any case even more polarized than the rest of us9. Hence the advice of these experts on, say, cherry picking, or anything else, is potentially a slave to that polarization.
So absent some novel methodology (D&M2009 does not suggest any) we have fatal recursion: correctly identifying cherry picking and false experts implies a reasonably deep and yet also unbiased domain knowledge. In turn this means already knowing, despite the confounding factor of a highly polarized environment, which side is in fact ‘speaking to truth’ and which is ‘denying’; yet this is essentially what we were meant to be finding out in the first place. Or in other words, the domain knowledge needed to investigate these characteristics brings with it domain bias, which bias may lead to erroneous judgment.
5.  The standard of proof: a more useful criterion?
So confirming some of the bullet points in section 2 cannot be done objectively, and even if it could this does not reliably confirm which side is overall a denialist side, because behaviors are frequently mirrored. Yet I haven’t thus far included bullet 4 in the discussion. Is not a stable and realistic threshold of proof, and so a ‘right’ side that promotes this threshold, objectively recognizable?
Well, D&M2009 cites four domains: HIV/AIDS, creationism, smoking/cancer, and climate change. The first seems to have a very definitive threshold; if one can independently replicate the development of AIDS from HIV, then bingo, proof achieved. And unfortunately there have been far too many inadvertent and tragic replications10. Yet for a wicked system like climate, how can a clear threshold of proof for imminent (before 2100) calamity, which is the key contested issue11a, even be established? Scientists and economists still range over either more, or less, global danger than an IPCC impact assessment that, after decades of effort, seems vague at best. Neither calamity nor net benefit is ruled out. A threshold of proof for the much contested ‘second hand smoke’ issue (properly a sub-domain, yet one cited by the authors) is dependent upon social and medical statistics. Hence this isn’t just a matter of simple replication, and bias might afflict any threshold determination. This subject is home turf for the authors, who are acknowledged experts; yet in eletter replies to D&M2009 and elsewhere there is not only robust criticism of the paper, but specific criticism of the authors’ stance on second hand smoke, from other experts. As a novice in this domain, how do I know which experts are false, or whether neither are false and the science is simply immature? There’s also complaint about the authors’ selection bias, rhetorical devices and use of defamation11b, so as usual there is defensive behavior on both sides; who is who?
Like the HIV case, proof of evolution over creationism seems like a very safe bet; familiar issues such as the increasing resistance of diseases to antibiotics allow us to actually perceive evolution in action. Yet what would this contested domain have looked like just 10 years, say, after Darwin’s publication of The Origin of Species? And what supporting evidence was available then?12 I submit that while the relevant criteria for proof may be obvious now, even to the educated elite they were not at all obvious then12a. So if the correct evidential goalpost, and hence the ‘right’ side, can only be confirmed for cases which are obvious long in retrospect, an assessment of the goalpost criteria is not particularly useful or reliable either. In the generic case, we cannot be certain of where on the timeline of science emergence we actually stand12b.
This all suggests that objective recognition of a stable and achievable standard of proof is not so simple a matter. The only definitive case (HIV) requires no consensus, being manifest via replication10. For many domains, a stable standard of proof simply reflects the maturity of the relevant science, and if the science isn’t mature (the long time taken to iteratively collect and analyze social trend, medical or climate data can impede that maturation), then standards of proof will be contested just like everything else, could legitimately move, and will not be easily and objectively pinned down.
So even the most hopeful criterion in D&M2009 fails to provide us with a reliable means of identifying an overall ‘denialist’ side. And considering that similar rhetoric and behaviors typically appear on opposing sides, do these criteria truly define ‘denialist’ activity anyhow? Can both sides be ‘denialist’? Assuming one side is indeed ‘denialist’ overall, surely many folks therein are legitimately motivated? At this point one has to question not just the D&M2009 criteria, but whether ‘denialism’ is an appropriate framing and what principles this framing is based upon.
6.  D&M2009 does not establish cause
D&M2009 has only a single short paragraph dealing with the underlying reasons for denialism. The rest of the text explores the four example domains w.r.t. the cited main behaviors, supplying references. This is disappointing; in order to deal properly with a phenomenon, one first has to understand its cause(s). And when boldly stating that we are indeed seeing a well-defined phenomenon in the first place, one should surely have a reasonable grasp (or theory) of cause. Yet the relevant paragraph simply states denialist motivations as: eccentricity, idiosyncrasy (apparently with both of these sometimes encouraged by maverick celebrity status), greed (corporate largesse from oil and tobacco is cited), and ideology13 or faith.
A major problem from a social psychology point of view is that these are very different motivators with very different power, scope, and resultant behaviors13a, which suggests the authors have barely considered cause at all, even though this is crucial. However D&M2009 cites Mark Hoofnagle’s blog (2007) as a primary source, wherein there is certainly more about cause. At this point it’s worth noting that D&M2009 is a close replication of Hoofnagle’s ideas (which already included the five main characteristics listed in section 2), merely distilling his concept of denialism and adding the references from the example domains, plus some extra nuance14 (Hoofnagle is properly cited, so nothing wrong with this). Yet Hoofnagle claims15 a very clear cause, dishonesty, which D&M2009 conspicuously drops. Hoofnagle also hints at mental illness16.
Diethelm and McKee are very wise to drop ‘dishonesty’ as a motivator17. Dishonesty is not a prime social driver and could not seriously power the behavior of, for instance, the 45% of Americans that D&M2009 cites as rejecting the evidence of evolution, or consistently produce significant minorities who exhibit similarly strong resistance in very different domains. For this, a potent universal social driver is needed, which also rules out eccentricity, idiosyncrasy and celebrity status (these can be secondary or tertiary effects, as may dishonesty), and to a large extent greed too13a (its role is domain dependent yet not usually primary). While this leaves two that happen to fall on the target, we literally have only the single words for them, i.e. ideology and faith, but nothing whatsoever regarding their profound implications.
With D&M2009 shorn of Hoofnagle’s almost passionate fingering of dishonesty, a casual list of assumed causes seems to have been substituted, which means that ‘denialism’ is not based on principles and isn’t a characterized phenomenon about which, for instance, one could make predictions. ‘Denialism’ is merely a set of observed rhetorical responses, which in the tremendously complex world of human sociality could occur for all sorts of reasons, only some being that people are inappropriately opposing known, genuine and proven scientific facts (while indeed some people theoretically championing the evidence will employ such rhetoric too).
7.  D&M2009 has little utility
A lack of underlying principles results in the fatal flaws outlined in sections 3 to 6. There is no solid phenomenon to actually test for. One cannot objectively identify all the D&M2009 criteria in a contest, and even if one could, this still cannot reliably tell us who is who. The five D&M2009 criteria are a subset of a long since categorized and much larger list of rhetorical devices18, some noted way back in Classical times. These can be deployed subconsciously (especially when passion and deep bias dominate), and if used systemically or excessively even the uninitiated can often detect their use. D&M2009 neither adds to this list, nor adds to our psychological understanding of specific devices, nor provides any means of objectively discerning the motivation behind their deployment.

8.  So does ‘denialism’ actually exist?

In attempting to answer this, we need to look at cause. ‘Ideology’13 and ‘faith’ both reflect strong cultural influence, albeit the latter word is usually used in a religious context and the former in a secular one. Much of the large behavioral spectrum of the culturally influenced is well researched; for instance, when an individual’s culture is threatened they will defend it, and the mechanisms invoked include subconscious (and often potent) bias19. If a universal phenomenon of ‘denialism’ actually exists then we should look for it in cultural defense, to which one can add that the best form of defense is attack. When a new consensus (scientific or otherwise) threatens existing cultural values, it will be fought. (See footnote 20 for alignment to Michael Specter’s approach).
So cultural defense is a plausible candidate, yet this leads to a framing which is very different to the one that Diethelm and McKee (and Hoofnagle) arrived at. Speculating on possible denialism from this cause (we’ll call it ‘proto-denialism’) we can note that:

  1. One reason cultures are so powerful is that they are not driven primarily by dishonesty; overall, belief is both passionate and honest. Hence most ‘proto-denialists’ would be truthful, defending the truth as they see it (likewise they are not mentally ill).
  2. Cultural defense is not black-and-white, exhibiting various strengths and compromises. Hence there will not only be ‘proto-denialists’ and angels, but many folks who seem to be some of both.
  3. Just as with the defense of nations, cultural defense calls upon alliances. Hence powerful and complicating alliance effects will be in play, such as described in section 3.
  4. No one is free of cultural influence, hence in theory we’ll all be ‘proto-denialists’ of something.
  5. Cultural defense is domain orientated. Folks can be hugely biased in one domain, yet perfectly objective in another. One cannot assume similar behavior over domain boundaries.
  6. Innate or instinctive skepticism is a defense against cultural overdosing, i.e. misinformation in a strong cultural context (e.g. propaganda, or systemic fear memes). Because, unaided, our instincts can’t detect whether an invader is cultural or evidential, especially if the latter is inappropriately promoted (plus, either one may threaten existing culture), we’d expect a strong overlap between genuine skeptic behavior and our ‘proto-denialist’ behavior.
  7. A (major) enforced social consensus will trigger a skeptic response, i.e. resistance to cultural encroachment. So how do we tell this from a scientific consensus triggering our ‘proto-denialist’ behavior, rooted in cultural defense?
  8. Cultural effects are many and varied.

Plus, rhetoric is an indelible part of our expression, subconsciously working for us and making it virtually impossible to avoid all persuasive devices even when attempting to be as objective as we can. We applaud excitement about scientific findings even though this may compromise objectivity; none of us are Vulcans.
It’s possible that with a lot of work, some extreme corner of the behavior spectrum could be isolated via specific criteria, which would then merit labeling as ‘denialist’. But in truth the characteristics of our ‘proto-denialists’ above are radically different to expectations from the current framing, a framing which may have tainted the term beyond redemption. Nor is this approach a great plan even without that taint, because it tends to mask uncomfortable yet crucial truths, especially those in points 6 and 7 above. So along with other errors we may end up fooling ourselves that there’s a nice clinical division between skeptics and ‘denialists’27. Via naïve assumption of cause from a basic categorization of rhetoric, this is exactly the trap I believe Diethelm and McKee have fallen into. Hoofnagle goes further, dishing out labels of ‘dishonest’ and ‘crank’, yet without proper theoretical grounds; despite his noble motives many of these are bound to stick to the wrong people. Some dishonesty and crankiness will ride any cultural wave, or backlash to such a wave, or backlash to an evidential cause that is perceived as cultural encroachment. But this does not mean that cranks and liars drive the main action; they do not. Nor can the touted methods reliably distinguish crankiness from cultural influence, or skepticism from either21.
It’s possible that ‘denialism’ could never be isolated out of cultural defense, i.e. our ‘proto-denialism’ may never be meaningfully distilled into a ‘denialism’ that’s worth the name. More constructive routes should anyhow be pursued for detecting who is who in a contested domain22.

9.  ‘Denialism’ achieves the opposite of the authors’ intent

So major failings in the concept of ‘denialism’, due to a lack of theoretical grounding, expose the authors to error and also let slip the bridle on their own bias23 as they apply their criteria to the example domains. Yet this is the least of the problems that Diethelm, McKee and Hoofnagle have created.
The three have laudably fought long against anti-science factions. Alas, due to the failings above, some of this fight defends dogma rather than science, and I fear that all of it will anyhow be hugely outweighed by unintended negative consequences. Hoofnagle stresses personal psychology rather than social psychology, and D&M2009’s vague, ill-considered causes also allow this angle to prosper, diverting attention from cultural causation. Coupled with an inability to determine who is who, this means they’ve effectively supplied academic legitimacy for any side to call out any and all opponents as psychologically flawed: as systemic liars or cranks, or as exhibiting almost any other deficiency. Memes prosper dramatically from vagueness, evolving to the worst implications without constraint because no reality-check back to an original tight definition is possible; there is no proper definition. And ‘denialism’ has indeed become a strong and negative emotive meme24, whose influence the authors have amplified.
Using ‘denialism’ to morally equate legitimate questioners with racially motivated folks who deny the Holocaust has maybe hit ‘worst’ already. Latterly Hoofnagle has partially acknowledged this problem, even while rehearsing his criteria again. Even the wiki page has some balance27 and notes this major issue with the framing. As far back as 2010, another Diethelm and McKee paper25, largely overlapping D&M2009 in content, briefly complained that denialism is used by a ‘wrong’ side. Yet I doubt these authors or anyone else will get the detrimental ‘denialist’ genie back into its bottle anytime soon26. Any otherwise good work quoting them will be devalued.
Diethelm and McKee wanted to provide health professionals with tools to fight harmful anti-science. Hoofnagle wanted a means to combat invalid emotional arguments. Yet their tools are fundamentally flawed, and promote a framing of ‘denialism’ that I believe amplifies misunderstanding, stigmatization, fear and other emotive reactions, at the expense of reason and scientific advance. Tools to do what they actually wanted must rest on an objective and cause-based methodology.
Link for Footnotes: footnotes
JC note: As with all guest posts, please keep your comments relevant and civil.