The Multi-Cluster/Sector Initial Rapid Assessment (MIRA) is the methodology used by UN agencies to assess and analyze humanitarian needs within two weeks of a sudden-onset disaster. A detailed overview of the process, methodologies and tools behind MIRA is available here (PDF). These reports are particularly insightful when compared with the processes and methodologies used by digital humanitarians to carry out their rapid damage assessments (typically done within 48-72 hours of a disaster).
Take the November 2013 MIRA report for Typhoon Haiyan in the Philippines. I am really impressed by how transparent the report is vis-à-vis the very real limitations behind the assessment. For example:
- “The barangays [districts] surveyed do not constitute a representative sample of affected areas. Results are skewed towards more heavily impacted municipalities […].”
- “Key informant interviews were predominantly held with barangay captains or secretaries and they may or may not have included other informants including health workers, teachers, civil and worker group representatives among others.”
- “Barangay captains and local government staff often needed to make their best estimate on a number of questions and therefore there’s considerable risk of potential bias.”
- “Given the number of organizations involved, assessment teams were not trained in how to administrate the questionnaire and there may have been confusion on the use of terms or misrepresentation on the intent of the questions.”
- “Only in a limited number of questions did the MIRA checklist contain before and after questions. Therefore to correctly interpret the information it would need to be cross-checked with available secondary data.”
In sum:
- The data collected was not representative.
- The selection of interviewees was biased, since it relied on a convenience sample.
- Interviewees had to estimate (guesstimate?) the answers to several questions, introducing additional bias into the data.
- Because assessment teams were not trained to administer the questionnaire, inter-coder reliability is limited, which in turn limits the ability to compare survey results.
- The data still needs to be validated against secondary data.
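The first two caveats can be illustrated with a toy simulation (the figures below are hypothetical, not MIRA data): if surveyors reach heavily impacted areas more often than lightly impacted ones, the aggregate damage estimate drifts upward relative to the true average.

```python
import random

random.seed(42)

# Hypothetical damage scores (0-100) for 1,000 barangays:
# most are lightly affected, a minority heavily affected.
barangays = [random.gauss(30, 10) if i < 800 else random.gauss(80, 10)
             for i in range(1000)]

true_mean = sum(barangays) / len(barangays)

# Convenience sample: assume heavily impacted barangays are
# surveyed three times as often as lightly impacted ones.
weights = [3 if b > 60 else 1 for b in barangays]
sample = random.choices(barangays, weights=weights, k=100)
sample_mean = sum(sample) / len(sample)

print(f"true mean damage:    {true_mean:.1f}")
print(f"sampled mean damage: {sample_mean:.1f}")  # skews higher
```

The point of the sketch is simply that the bias is structural: no amount of extra interviews fixes it if the sampling weights themselves are skewed, which is why the MIRA report's call for cross-checking against secondary data matters.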
I do not share the above to criticize, only to convey what the real world of rapid assessments looks like when you peek “under the hood”. What is striking is how similar the above challenges are to those that digital humanitarians have been facing when carrying out rapid damage assessments. And yet, I distinctly recall rather pointed criticisms leveled by professional humanitarians against groups using social media and crowdsourcing for humanitarian response back in 2010 & 2011. These criticisms dismissed social media reports as being unrepresentative, unreliable, fraught with selection bias, etc. (Some myopic criticisms continue to this day.) I find it rather interesting that many of the shortcomings attributed to crowdsourcing social media reports are also true of traditional information collection methodologies like MIRA.
The fact is this: no data or methodology is perfect. The real world is messy, both off- and online. Being transparent about these limitations is important, especially for those who seek to combine both off- and online methodologies to create more robust and timely damage assessments.