Driving in the dark

by Judith Curry

Long-term strategies should be built not on “visions” of the future but instead on the premise that longer term predictions (that is, forecasts of situations years and decades out), however presently credible, will probably prove wrong. – Richard Danzig

I’ve just encountered a remarkable document, Driving in the Dark: Ten Propositions About Prediction and National Security, by Richard Danzig, who served as Secretary of the Navy under President Clinton. Excerpts:
Prediction lies at the root of all strategic thinking. However, whereas routine, short-term predictions are generally right, strategic judgments about future environments are often, one might say predictably, wrong. The common response to this shortcoming is to try to improve predictive capabilities.
I propose a different tack, namely that long-term strategies should be built not on “visions” of the future but instead on the premise that longer term predictions (that is, forecasts of situations years and decades out), however presently credible, will probably prove wrong. I attempt here to show that this premise is not sterile or disabling and instead point to five complementary strategies that will better prepare the defense community for what cannot be foreseen.
Like others, I favor efforts to improve capabilities for foresight, and I agree with the best thinkers in recognizing that foresight is not the same as prediction. Prediction implies an ability to discern a particular turn of events. Foresight identifies key variables and a range of alternatives that might better prepare for the future. Yet my concern here is not to abet this admirable effort but instead to recognize and cope with its limits.
In my view, long-term national security planning will inevitably be conducted in conditions that planners describe as “deep” or “high” uncertainty, and in these conditions, foresight will repeatedly fail. Predicting the future may be “an inescapable task for decisionmakers,” but it is not the only task and it is wrong to plan solely on predictive premises. Planners need to complement their efforts at foresight with thinking and actions that account for the high probability of predictive failure.
The report presents ten propositions regarding prediction. The first set is descriptive:

  • The propensity to make predictions – and to act on the basis of predictions – is inherently human.
  • Requirements for prediction will consistently exceed the ability to predict. 
  • Planning across a range of scenarios is good practice but will not prevent predictive failure.

The second set of propositions is prescriptive. They show how, even as they strive to improve their foresight, policymakers can better design processes, programs and equipment to account for the likelihood of predictive failure. Doing so will involve several actions:

  • Accelerating decision tempo and delaying some decisions. In a world characterized by unpredictability and increasingly frequent surprise, there are heavy penalties for ponderous decisionmaking and slow execution. The U.S. government is now designing and producing equipment on political and technological premises that are outdated by the time the equipment reaches the field. Programs must also be designed to defer some decisions into the later stages of development.
  • Increasing the agility of our production processes. A 21st-century DOD must invest in capabilities to respond rapidly to unanticipated needs.
  • Prioritizing adaptability. In the face of unpredictability, future military equipment should be adaptable and resilient rather than narrowly defined for niche requirements. 
  • Building more for the short term. Major acquisitions are now built for long-term use but would benefit from greater recognition of the unpredictability of technology development and combat environments. 
  • Nurturing diversity and creating competition. Competition and diversity produce a valuable range of potential responses when unpredicted challenges and difficulties arise.

Some insightful comments from Aaron Frank:
In a sense, Danzig’s advice is to reject the notion of optimality, arguing that there is too much uncertainty about the environment to base any significant wager with respect to policy and even the technologies that undergird military force structure. Importantly, Danzig’s concerns are not over any particular model and its precision, but regarding the act of postulating a single future, or even a range of potential futures, and then planning against them. Simply put, there is no a priori way of knowing if the future, or set of futures, being planned for are the right ones, yet decision-makers and organizations must commit resources, take action, and go about their business based on some vision of what is coming.
A second piece of Danzig’s point is his reference to planning across multiple scenarios. Such an approach has been argued as the basis of creating robust strategies and the reason should be fairly intuitive: betting the house on a single future coming to be is likely to fail, while finding things to do that perform well in a variety of different cases means that one’s strategies will endure. Danzig, however, alludes to another aspect of this problem and correctly notes that not all scenarios can be identified beforehand, and some may encourage actions that would be harmful in other cases. As a result, planning against alternative futures must be supplemented by encouraging adaptation and change as new information becomes available. Thus, Danzig sets up adaptation as a necessary complement to robustness as a crucial aspect of strategy and decision-making under uncertainty.
The notion that success breeds failure in the prediction business is an important and often unrecognized aspect of intelligence analysis. There is an extensive and excellent literature on the tensions between the intelligence community and policy-makers, which often result in intelligence that is unused, ignored, challenged, or politicized. But intelligence may also be uncritically accepted and then encouraged to go beyond what is knowable while maintaining an appearance of certainty and confidence; this may be another path to intelligence failure, one induced by positive feedback from consumers.
The so-called symphony metaphor, in which large bureaucratic military organizations follow the same sheet of music, may be replaced by the metaphor of improvisational jazz, but the players will need to work together and practice for long periods in order to understand and follow one another’s cues, and replacing parts will be more difficult if there is no script to follow.
JC reflections
While these ideas have obvious applications to the national security aspects of climate change, I think Danzig’s ideas also have broad applicability to climate change itself: how we view climate model predictions, the inadequacy of our scenarios, and the failure to factor in the possibility of genuine surprises, or Dragon Kings.
Decision making under deep uncertainty has been an important topic at CE.

The economists and statisticians who are worried about climate uncertainty and Black Swans (e.g. Nassim Taleb, Martin Weitzman) are worried only about uncertainties related to human-caused climate change (e.g. the fat tail of climate sensitivity). Genuinely unforeseen climate change (e.g. cooling) or disasters unrelated to climate sensitivity and associated with natural climate variability are ignored.
I like the distinction between foresight and prediction, something I haven’t thought about before.
The idea of intelligence failure induced by positive feedback from consumers is especially relevant for climate models and the IPCC.
And finally, I like this metaphor from Aaron Frank:
The so-called symphony metaphor, in which large bureaucratic military organizations follow the same sheet of music, may be replaced by the metaphor of improvisational jazz, but the players will need to work together and practice for long periods in order to understand and follow one another’s cues, and replacing parts will be more difficult if there is no script to follow.
The UNFCCC is trying to figure out how to enforce internationally the same sheet of music. It ain’t going to work – not the enforcement and not the hoped-for outcome. The metaphor of improvisational jazz fits a bottom-up, creative adaptation.
