Stephen Marrin, post-revision draft, 18 July 2011. Original draft submitted to Intelligence and National Security on 4 February 2011. Accepted for publication on 24 May 2011 pending minor revision.
Evaluating the Quality of Intelligence Analysis: By What (Mis) Measure?
Dr. Stephen Marrin is a Lecturer in the Centre for Intelligence and Security Studies at Brunel University in London. He previously served as an analyst with the Central Intelligence Agency and the US Government Accountability Office. Dr. Marrin has written about many different aspects of intelligence analysis, including new analyst training at CIA's Sherman Kent School, the similarities and differences between intelligence analysis and medical diagnosis, and the professionalization of intelligence analysis. In 2004 the National Journal profiled him as one of the ten leading US experts on intelligence reform.
Abstract: Each of the criteria most frequently used to evaluate the quality of intelligence analysis has limitations and problems. When accuracy and surprise are employed as absolute standards, their use reflects unrealistic expectations of perfection and omniscience. Scholars have adjusted by exploring the use of a relative standard consisting of the ratio of success to failure, most frequently illustrated using the batting average analogy from baseball. Unfortunately, even this relative standard is flawed in that there is no way to determine either what the batting average is or what it should be. Finally, a standard based on the decision makers' perspective is sometimes used to evaluate the analytic product's relevance and utility. But this metric, too, has significant limitations. In the end, there is no consensus as to which criterion is best for evaluating analytic quality, reflecting the lack of consensus as to what the actual purpose of intelligence analysis is or should be.
Evaluating the quality of intelligence analysis is not a simple matter. Frequently quality is defined not by its presence but rather by its absence. When what are popularly known as intelligence failures occur, sometimes attention focuses on flaws in intelligence analysis as a contributing factor to that failure.
ROBERT STEELE: Interesting, certainly worth reading, but divorced from the fundamentals and out of touch with the real masters. Any publication that fails to cite Jack Davis, the dean of analytic tradecraft in the English language, is fatally flawed. Of course it would help if one were also in touch with the “new rules for the new craft of intelligence,” but that may be too much to expect from a junior academic with limited real-world analytic experience who seems intent on citing only “approved” sources–a lack of source integrity that is also fatal. The article assumes that the four preconditions for sound analytics exist; since they do not, at least in the US and UK and most other government intelligence communities, it is necessary to spell them out. Analysts are toads absent the following:
01) All-source collection, including open source collection in 183 languages in near real-time–collection that is relevant, timely, and focused, not collection of convenience. It must also comprehend, collect, and present true cost economics with geospatial attributes at every datum point.
02) Back office and desktop analytic tools that integrate, in one single open source package, the eighteen functionalities identified by Gordon Oehler, Dennis McCormick, Diane Webb, and a handful of others in 1985 under the rubric of Computer-Aided Tools for the Analysis of Science & Technology (CATALYST). This still is not available today.
03) Deep personal knowledge of relevant history, culture, and language(s) pertinent to the matter at hand, inclusive of a broad global multinational, multiagency, multidisciplinary, multidomain information-sharing and sense-making network (M4IS2). The US security goons are nowhere near being able to comprehend the new security paradigm, which demands full-spectrum human intelligence (HUMINT) across all slices, nationalities, and socio-economic ideo-cultural strata; and
04) An analytic model that is holistic, comprehensive, and centered on the public interest, at a minimum integrating the ten high-level threats to humanity, the twelve core policies that must be integrated, and the eight demographic challengers that will define the future with or without the USA. There are at least three models that could be usefully integrated: the revolution model, the expeditionary environment model, and the strategic analytic model. All require whole systems true cost economics.
Somewhat paradoxically, since I emphasize that intelligence is about outputs rather than inputs, it bears mention that the analytic process that produces intelligence is only as good as its inputs, its environment, and its receiving decision-makers. If the receiving decision-maker lacks integrity, then Paul Pillar is absolutely right: no amount of intelligence with integrity can overcome policy without integrity, with one exception: when the intelligence with integrity is placed immediately and visibly in the public domain.
DuckDuckGo on Analytic Tradecraft
DuckDuckGo on Jack Davis Analytic Tradecraft
Graphic: Evaluating Intelligence (Decision-Support) – Four Aspects
Journal: Reflections on Integrity UPDATED + Integrity RECAP
Mini-Me: Putting TS/SCI In Perspective – Need to Lose the Cement Overcoat of Excessive Classification and Excessive Corruption
Reference: Empire of Lies & Secrecy
Robert Steele: Intelligent Management of Intelligence Agencies, and the New Craft of Intelligence
Who’s Who in Public Intelligence: Jack Davis
Who’s Who in Collective Intelligence: Robert David STEELE Vivas
2004 ANALYSIS: All-Source Analysis, Making Magic
2009 Perhaps We Should Have Shouted: A Twenty-Year Retrospective
2010: Human Intelligence (HUMINT) Trilogy Updated
2012 PREPRINT FOR COMMENT: The Craft of Intelligence
2012 OPEN SOURCE EVERYTHING: Transparency, Truth & Trust (Phi Beta Iota Page with 33 Graphics Online)