Could human and machine forecasters work together to increase the intelligence agencies’ foresight?
By Alexis C. Madrigal
The Atlantic, 11 December 2012
We would like to know what the future is going to be like, so we can prepare for it. I’m not talking about building a time machine to secure the winning Powerball number ahead of time, but rather creating more accurate forecasts about what is likely to happen. Supposedly, this is what pundits and analysts do. They’re supposed to be good at commenting on whether Greece will leave the Eurozone by 2014 or whether North Korea will fire missiles during the year or whether Barack Obama will win reelection.
A body of research, however, conducted and synthesized by the University of Pennsylvania's Philip Tetlock, finds that people, pundits very much included, are not very good at predicting future events. The book he wrote on the topic, Expert Political Judgment: How Good Is It? How Can We Know?, is a touchstone for all the work that people like Nate Silver and Princeton's Sam Wang did tracking the last election.
But aside from the electorate, who else might benefit from enhanced foresight? Perhaps the people tasked with gathering information about threats in the world.
You probably have never heard of IARPA, but it’s the wild R&D wing of our nation’s intelligence services. Much like the Defense Advanced Research Projects Agency, which looks into the future of warfare for the Department of Defense, the Intelligence Advanced Research Projects Activity looks at the future of analyzing information, spying, surveillance, and the like for the CIA, FBI, and NSA.
We wrote in depth about a project they're running to better understand metaphors (yes, metaphors), and now one of their projects applies Tetlock's insights about expert judgment. While Tetlock found that most analysts were terrible, some were better than others, notably those he called foxes, who were more circumspect in their pronouncements and less wedded to a hard-and-fast worldview. The work suggested that it might be possible to improve people's judgments about the future.
His work matched up perfectly with a call for proposals that IARPA put out two years ago for a new program called ACE, for Aggregative Contingent Estimation. The agency wanted researchers to "develop and test tools to provide accurate, timely, and continuous probabilistic forecasts and early warning of global events, by aggregating the judgments of many widely dispersed analysts." Well, Tetlock thought, perhaps I can apply my research to this problem.
Phi Beta Iota: The amount of money being wasted by secretive research is overshadowed only by the amount of money being wasted on everything else. It is a very sad day when one has to realize that the US intelligence community is spending money on human cognition crap-shoots, because it has no strategic analytic model, no understanding of the human factor, zero moral engagement, and therefore cannot use history and a historically rooted understanding of cause and effect to anticipate. Collective intelligence matters, but deep multi-everything intelligence matters more. The 1976 matrix for the prediction of revolution and the 2013 whole system true cost model are better than anything IARPA is doing, and that should be shocking to anyone striving to improve the craft of intelligence.