The lure of incredible certitude

by Judith Curry
“If you want people to believe what you *do* know, you need to be up front about what you *don’t* know.” – Charles Manski

Twitter is great for networking.  My recent article Climate Uncertainty and Risk engendered a tweet and email from Professor Matthew Kahn, Chairman of the Department of Economics at the University of Southern California.  Professor Kahn (and also Richard Tol) emailed me a copy of a new paper entitled The Lure of Incredible Certitude, by Charles F. Manski, Professor of Economics at Northwestern University.
The Lure of Incredible Certitude
Charles Manski
Abstract. Forthright characterization of scientific uncertainty is important in principle and serves important practical purposes. Nevertheless, economists and other researchers commonly report findings with incredible certitude, reporting point predictions and estimates. To motivate expression of incredible certitude, economists often suggest that researchers respond to incentives that make the practice tempting. This temptation is the “lure” of incredible certitude. I flesh out and appraise some of the rationales that observers may have in mind when they state that incredible certitude responds to incentives. I conclude that scientific expression of incredible certitude at most has appeal in certain limited contexts. It should not be a general practice.
Excerpts:

On principle, I consider forthright characterization of uncertainty to be a fundamental aspect of the scientific code of conduct.
I have argued that forthright characterization of uncertainty serves important practical purposes. Viewing science as a social enterprise, I have reasoned that if scientists want people to trust what we say we know, we should be up front about what we don’t know. I have suggested that inferences predicated on weak assumptions can achieve wide consensus, while ones that require strong assumptions may be subject to sharp disagreements.
I have pointed out that disregard of uncertainty when reporting research findings may harm formation of public policy. If policy makers incorrectly believe that existing analysis provides an accurate description of history and accurate predictions of policy outcomes, they will not recognize the potential value of new research aiming to improve knowledge. Nor will they appreciate the potential usefulness of decision strategies that may help society cope with uncertainty and learn, including diversification and information acquisition.

A typology of practices that contribute to incredible certitude:

  • conventional certitude: A prediction that is generally accepted as true but is not necessarily true.
  • dueling certitudes: Contradictory predictions made with alternative assumptions.
  • conflating science and advocacy: Specifying assumptions to generate a predetermined conclusion.
  • wishful extrapolation: Using untenable assumptions to extrapolate.
  • illogical certitude: Drawing an unfounded conclusion based on logical errors.
  • media overreach: Premature or exaggerated public reporting of policy analysis.

To cite some examples, over fifty years ago, Morgenstern (1963) remarked that federal statistical agencies may perceive a political incentive to express incredible certitude about the state of the economy when they publish official economic statistics:
“All offices must try to impress the public with the quality of their work. Should too many doubts be raised, financial support from Congress or other sources may not be forthcoming. More than once has it happened that Congressional appropriations were endangered when it was suspected that government statistics might not be 100 percent accurate. It is natural, therefore, that various offices will defend the quality of their work even to an unreasonable degree.”

For short, I now call this temptation the “lure” of incredible certitude.
This contention is nicely illustrated by the story that circulates about an economist’s attempt to describe uncertainty about a forecast to President Lyndon B. Johnson. The economist is said to have presented the forecast as a likely range of values for the quantity under discussion. Johnson is said to have replied “Ranges are for cattle. Give me a number.”

I first discuss a psychological argument asserting that scientific expression of incredible certitude is necessary because the public is unable to cope with uncertainty. I conclude that this argument has a weak empirical foundation. Research may support the claim that some persons are intolerant of some types of uncertainty, but it does not support the claim that this is a general problem of humanity. The reality appears to be that humans are quite heterogeneous in the ways that they deal with uncertainty.
I next discuss a bounded-rationality argument asserting that incredible certitude may be useful as a device to simplify decision making. I consider the usual formalization of decision under uncertainty in which a decision maker perceives a set of feasible states of nature and must choose an action without knowledge of the actual state. Suppose that evaluation of actions requires effort. Then it simplifies decision making to restrict attention to one state of nature and optimize as if this is truth, rather than make a choice that acknowledges uncertainty. However, the result may be degradation of decision making if the presumed certitude is not credible.
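
A toy example may help fix ideas. The sketch below is my own illustration, not Manski’s: the actions, states of nature, and payoffs are invented. It contrasts a choice that maximizes expected utility over several states with the “certitude” shortcut of optimizing as if the single most likely state were the truth.

```python
# Illustrative sketch only: a toy decision problem with invented numbers.
# Two actions, three possible "states of nature", and a utility for each pair.
payoffs = {
    "hedge":      {"low": 6,  "medium": 6, "high": 6},   # robust but unspectacular
    "bet_on_low": {"low": 10, "medium": 4, "high": 0},   # best only if "low" is the truth
}
beliefs = {"low": 0.4, "medium": 0.4, "high": 0.2}        # subjective probabilities

# Choice that acknowledges uncertainty: maximize expected utility over all states.
expected = {a: sum(beliefs[s] * u for s, u in states.items())
            for a, states in payoffs.items()}
eu_choice = max(expected, key=expected.get)               # -> "hedge" (~6.0 vs ~5.6)

# "Incredible certitude" shortcut: treat the single most likely state as the truth
# and optimize against it alone. Cheaper to evaluate, but can pick a worse action.
presumed = max(beliefs, key=beliefs.get)                  # here "low"
certitude_choice = max(payoffs, key=lambda a: payoffs[a][presumed])  # -> "bet_on_low"

print(eu_choice, certitude_choice)
```

The shortcut evaluates each action in only one state, which is exactly the simplification described above; in this toy case it selects the action with the lower expected utility.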

A third rationale arises from consideration of collective decision making. The argument is that social acceptance of conventional certitudes may be a useful coordinating device, preventing coordination failures that may occur if persons deal with uncertainty in different ways. This rationale is broadly similar to the bounded-rationality one. Both assert that incredible certitude simplifies decision making, individual or collective as the case may be.
Some colleagues assert that expression of incredible certitude is necessary because the consumers of research are psychologically unable or unwilling to cope with uncertainty. They contend that, if they were to express uncertainty, policymakers would either misinterpret findings or not listen at all.
I conclude that scientific expression of incredible certitude at most has practical appeal in certain limited contexts. On principle, characterization of uncertainty is fundamental to science. Hence, researchers should generally strive to convey uncertainty clearly.

Researchers may also express certitude with private objectives in mind. They may believe that the scientific community and the public reward researchers who assert strong findings and doubt those who express uncertainty. They may conflate science with advocacy, tailoring their analyses to generate conclusions that they prefer. These private considerations may motivate some researchers, but they do not offer reasons why society should encourage incredible certitude.

JC reflections

Manski’s article compliments the field of climate science:

Yet some fields endeavor to be forthright about uncertainty.
I particularly have in mind climate science, which has sought to predict how greenhouse gas emissions affect the trajectory of atmospheric temperature and sea level. Published articles on climate science often make considerable effort to quantify uncertainty. See, for example, Knutti et al. (2010), McGuffie and Henderson-Sellers (2005), McWilliams (2007), Parker (2006, 2013), Palmer et al. (2005), and Stainforth et al. (2007). The attention paid to uncertainty in the periodic reports of the Intergovernmental Panel on Climate Change (IPCC) is especially notable; see Mastrandrea et al. (2010).

He cites some excellent, classic articles relating to uncertainty in weather and climate modeling and prediction  – McWilliams, Parker, Palmer et al., and Stainforth et al.

With regard to the treatment of uncertainty by the IPCC, I am much less impressed. In the lexicon of the uncertainty monster, I characterized the IPCC’s treatment as ‘monster simplification’:


Monster simplification. Monster simplifiers attempt to transform the monster by subjectively quantifying and simplifying the assessment of uncertainty. Monster simplification is formalized in the IPCC TAR and AR4 by guidelines for characterizing uncertainty in a consensus approach consisting of expert judgment in the context of a subjective Bayesian analysis (Moss and Schneider 2000).

In my paper Reasoning about Climate Uncertainty, I further argued that a concerted effort by the IPCC is needed to identify better ways of framing the climate change problem, exploring and characterizing uncertainty, reasoning about uncertainty in the context of evidence-based logical hierarchies, and eliminating bias from the consensus building process itself.

Apart from these concerns, the field of economics appears to be in much worse shape than climate science with regard to dealing with uncertainty (and communicating it to policy makers). These two fields are combined in Integrated Assessment Modeling of the impacts of climate change. In my paper Climate Uncertainty and Risk, I made the following comments:

The key climate science input to IAMs is the probability density function of equilibrium climate sensitivity (ECS). The dilemma is that with regards to ECS, we are in a situation of scenario (Knightian) uncertainty—we simply do not have grounds for formulating a precise probability distribution. Other deep uncertainties in IAM inputs include the damage function (economic impact) and discount rate (discounting of future utilities with respect to the present). Without precise probability distributions, no expected utility calculation is possible.
This problem has been addressed by creating a precise probability distribution based upon the parameters provided by the IPCC assessment reports (NAS 2017). In effect, IAMs convert Knightian uncertainty in ECS into precise probabilities. Of particular concern is how the upper end of the ECS distribution is treated—typically with a fat tail. The end result is that this most important part of the distribution that drives the economic costs of carbon is based upon a statistically manufactured fat tail that has no scientific justification.
Subjective or imprecise probabilities may be the best ones available. Some decision techniques have been formulated using imprecise probabilities that do not depart too much from the appeal to expected utility. Frisch (2013) suggests that such applications of IAMs are dangerous, because while they purport to offer precise numbers to use for policy guidance, that precision is illusory and fraught with assumption and value judgments.
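
To see why the treatment of the upper tail matters so much, here is a minimal sketch; all distributions, parameters, and the damage function are invented for illustration and are not taken from any IAM, the IPCC, or NAS (2017). Two ECS distributions with roughly the same median but different upper tails yield very different expected damages, because a convex damage function weights the tail heavily.

```python
import math
import random

# Toy illustration only: every number and functional form below is invented.
random.seed(0)

def damage(ecs):
    """Stylized convex damage function: losses grow rapidly with warming."""
    return 0.002 * ecs ** 3                       # arbitrary cubic form

def expected_damage(sample_ecs, n=200_000):
    """Monte Carlo estimate of expected damage under a given ECS sampler."""
    return sum(damage(sample_ecs()) for _ in range(n)) / n

# Thin-tailed assumption: ECS roughly Normal(3.0, 0.7) degrees C, truncated at 0.
thin_tail = lambda: max(0.0, random.gauss(3.0, 0.7))

# Fat-tailed assumption: lognormal with about the same median (~3.0 degrees C)
# but far more probability mass above 4-5 degrees C.
fat_tail = lambda: random.lognormvariate(math.log(3.0), 0.5)

print(expected_damage(thin_tail))   # expected damage driven by values near 3 C
print(expected_damage(fat_tail))    # markedly larger: the upper tail dominates the mean
```

With the same median sensitivity, most of the difference in expected damage comes from draws above roughly 4 degrees C, which is precisely the part of the distribution whose shape is least constrained.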

In any event, it is good to see uncertainty being taken more seriously by economists, with some fresh insights into the motivations for ‘incredible certitude.’

And this quote (pulled from a presentation by Manski, reported via Twitter) sums things up perfectly:

“If you want people to believe what you *do* know, you need to be up front about what you *don’t* know.”

Source