Digital Dao
Evolving Hostilities in the Global Cyber Commons
Sunday, November 24, 2013
In OSINT, All Sources Aren't Created Equal
“In evaluating open-source documents, collectors and analysts must be careful to determine the origin of the document and the possibilities of inherent biases contained within the document.”
– FM 2-22.3, Human Intelligence Collector Operations, p. I-10
“Source and information evaluation is identified as being a critical element of the analytical process and production of intelligence products. However there is concern that in reality evaluation is being carried out in a cursory fashion involving limited intellectual rigour. Poor evaluation is also thought to be a causal factor in the failure of intelligence.”
– John Joseph and Jeff Corkill, “Information Evaluation: How One Group of Intelligence Analysts Go About the Task”
The field of cyber intelligence is fairly new. Fortunately, thanks to the Software Engineering Institute at Carnegie Mellon and the work of Jay McAllister and Troy Townsend, we can take a credible look at the state of the practice in this field:
“Overall, the key findings indicate that organizations use a diverse array of approaches to perform cyber intelligence. They do not adhere to any universal standard for establishing and running a cyber intelligence program, gathering data, or training analysts to interpret the data and communicate findings and performance measures to leadership.”
– McAllister and Townsend, The Cyber Intelligence Tradecraft Project
The one thing their report doesn't cover is source validation and how it contributes to the validity or value of the intelligence received. However, they did write a follow-up white paper with Troy Mattern entitled “Implementation Framework – Collection Management (.pdf).”