My brother recently pointed me to this BBC News article on the use of drones for Search & Rescue missions in England’s Lake District, one of my favorite areas of the UK. The picture below is one I took during my most recent visit. In my earlier blog post on the use of UAVs for Search & Rescue operations, I noted that UAV imagery & video footage could be quickly analyzed using a microtasking platform (like MicroMappers, which we used following Typhoon Yolanda). As it turns out, an enterprising team at the University of Central Lancashire has been using microtasking as part of their UAV Search & Rescue exercises in the Lake District.
Every year, the Patterdale Mountain Rescue Team assists hundreds of injured and missing persons in the North of the Lake District. “The average search takes several hours and can require a large team of volunteers to set out in often poor weather conditions.” So the University of Central Lancashire teamed up with the Mountain Rescue Team to demonstrate that UAV technology coupled with crowdsourcing can reduce the time it takes to locate and rescue individuals.
I gave a talk on “The Future of Humanitarian Response” at UN OCHA’s Global Humanitarian Policy Forum (#aid2025) in New York yesterday. More here for context. A similar version of the talk is available in the video presentation below.
Some of the discussions that ensued during the Forum were frustrating, albeit an important reality check. Some policy makers still think that disaster response is about them and their international humanitarian organizations. They are still under the impression that aid does not arrive until they arrive. And yet, empirical research in the disaster literature points to the fact that the vast majority of lives saved during disasters is the result of local agency, not external intervention.
I’ve been invited to give a “very provocative talk” on what humanitarian response will look like in 2025 for the annual Global Policy Forum organized by the UN Office for the Coordination of Humanitarian Affairs (OCHA) in New York. I first explored this question in early 2012 and my colleague Andrej Verity recently wrote up this intriguing piece on the topic, which I highly recommend; intriguing because he focuses a lot on the future of the pre-deployment process, which is often overlooked.
Anyone working in digital can somewhat relate to the overuse of loosely defined marketing words – think ‘big data’ or ‘cloud computing’ (bzzzz). Growth hacking seems to be just another one of them.
In colloquial terms, growth hacking is associated with exploiting loopholes and using illegal techniques online to drive business growth. Of course, in some cases this has been the reality. When PayPal was first used on eBay, it was actually breaching the retailer’s T&Cs. Similarly, when Airbnb first started, it poached customers from Craigslist by spamming listings and inviting users to join its directory instead.
However, growth hacking can also simply be described as the ingenious use of tools, platforms and environments for business development, online AND offline – Google Campus in East London, for example, is a good case of growth hacking taking place offline, as start-ups use a shared working environment to maximise their potential. Online, growth hacking is the use of tracking and metrics tools that show us where our time is best spent, and the leveraging of platforms where target audiences and key players already are.
‘Hacking’ does not necessarily spell detrimental consequences for larger corporations either. Indeed, PayPal was later bought by eBay, and when Airbnb developed its interface it added the option to ‘post to Craigslist’.
I’m headed to the Philippines this week to collaborate with the UN Office for the Coordination of Humanitarian Affairs (OCHA) on humanitarian crowdsourcing and technology projects. I’ll be based in the OCHA Offices in Manila, working directly with colleagues Andrej Verity and Luis Hernando to support their efforts in response to Typhoon Yolanda. One project I’m exploring in this respect is a novel radio-SMS-computing initiative that my colleague Anahi Ayala (Internews) and I began drafting during ICCM 2013 in Nairobi last week. I’m sharing the approach here to solicit feedback before I land in Manila.
The “Radio + SMS + Computing” project is firmly grounded in GSMA’s official Code of Conduct for the use of SMS in Disaster Response. I have also drawn on the Bellagio Big Data Principles when writing up the ins and outs of this initiative with Anahi. The project is first and foremost a radio-based initiative that seeks to answer the information needs of disaster-affected communities.
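To make the idea a little more concrete, here is a purely hypothetical sketch (in Python) of the kind of SMS-triage step such a radio + SMS + computing workflow might include: listeners text their questions to a shortcode after a broadcast, and incoming messages are bucketed so the station can address the most common information needs on air. The categories, keywords and flow below are my own illustrative assumptions, not the project’s actual design.

```python
# Hypothetical sketch of SMS triage for a radio + SMS + computing workflow.
# The categories and keywords are illustrative assumptions only.
from collections import Counter

CATEGORIES = {
    "health":  ["clinic", "medicine", "injury", "doctor"],
    "shelter": ["tent", "roof", "evacuation", "shelter"],
    "food":    ["food", "rice", "water", "hungry"],
}

def categorize(sms: str) -> str:
    """Assign an incoming SMS to the first category whose keywords match."""
    text = sms.lower()
    for category, keywords in CATEGORIES.items():
        if any(word in text for word in keywords):
            return category
    return "other"

def top_information_needs(messages, n=3):
    """Tally categories so radio producers see the most-requested topics."""
    return Counter(categorize(m) for m in messages).most_common(n)

inbox = ["Where is the nearest clinic?",
         "No food since the storm",
         "Our roof is gone, need shelter"]
print(top_information_needs(inbox))
```

In a real deployment the tallies would of course be far noisier, which is where the microtasking and machine-learning pieces would come in; the point of the sketch is simply the loop from listener SMS back to radio programming.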
Welcome to Kenya, or as we say here, Karibu! This is a special ICCM for me. I grew up in Nairobi; in fact our school bus would pass right by the UN every day. So karibu, welcome to this beautiful country (and continent) that has taught me so much about life. Take “Crowdsourcing,” for example. Crowdsourcing is just a new term for the old African saying “It takes a village.” And it took some hard-working villagers to bring us all here. First, my outstanding organizing committee went way, way above and beyond to organize this village gathering. Second, our village of sponsors made it possible for us to invite you all to Nairobi for this Fifth Annual International Conference of CrisisMappers (ICCM).
I see many new faces, which is really super, so by way of introduction, my name is Patrick and I develop free and open source next generation humanitarian technologies with an outstanding team of scientists at the Qatar Computing Research Institute (QCRI), one of this year’s co-sponsors.
We’ve been able to process and make sense of a quarter of a million tweets in the aftermath of Typhoon Yolanda. Using both AIDR (still under development) and Twitris, we were able to collect these tweets in real-time and use automated algorithms to filter for both relevancy and uniqueness. The resulting ~55,000 tweets were then uploaded to MicroMappers (still under development). Digital volunteers from the world over used this humanitarian technology platform to tag tweets and now images from the disaster (click image below to enlarge). At one point, volunteers tagged some 1,500 tweets in just 10 minutes. In parallel, we used machine learning classifiers to automatically identify tweets referring to both urgent needs and offers of help. In sum, the response to Typhoon Yolanda is the first to make full use of advanced computing, i.e., both human computing and machine computing to make sense of Big (Crisis) Data.
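For readers curious about what filtering for relevancy and uniqueness looks like in practice, here is a minimal sketch in Python. AIDR’s real pipeline uses trained machine-learning classifiers; the keyword matching and hash-based de-duplication below are simplified stand-ins for illustration, and the sample tweets and keyword list are my own.

```python
# Minimal sketch of the two filtering steps: dropping near-duplicate tweets
# (retweets, copies) and keeping only disaster-relevant ones. A simple
# keyword filter stands in for AIDR's trained relevancy classifier.
import hashlib
import re

# Illustrative keywords only; a real classifier is trained on labeled data.
RELEVANT_TERMS = {"yolanda", "haiyan", "typhoon", "rescue", "relief",
                  "trapped", "aid", "shelter", "medical"}

def normalize(text: str) -> str:
    """Lowercase and strip URLs, mentions, RT prefixes and punctuation
    so near-copies of the same tweet collide on the same string."""
    text = re.sub(r"https?://\S+|@\w+|^rt\s+", "", text.lower())
    return re.sub(r"[^a-z0-9 ]+", "", text).strip()

def is_relevant(text: str) -> bool:
    """Stand-in for a relevancy classifier: simple keyword match."""
    return any(term in text for term in RELEVANT_TERMS)

def filter_tweets(tweets):
    """Yield tweets that are both relevant and not yet seen (unique)."""
    seen = set()
    for tweet in tweets:
        norm = normalize(tweet)
        digest = hashlib.md5(norm.encode()).hexdigest()
        if is_relevant(norm) and digest not in seen:
            seen.add(digest)
            yield tweet

stream = [
    "Families trapped on rooftops in Tacloban #Yolanda",
    "RT @relief: Families trapped on rooftops in Tacloban #Yolanda",  # duplicate
    "Great game last night!",                                         # irrelevant
]
print(list(filter_tweets(stream)))  # keeps only the first tweet
```

Run on the three sample tweets, the retweet collides with the original after normalization and the irrelevant tweet is dropped, leaving one unique, relevant message: the same winnowing that took a quarter of a million tweets down to ~55,000.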
We’ve come a long way since the tragic Haiti Earthquake. There was no way we would’ve been able to pull off the above with the Ushahidi platform. We weren’t able to keep up with even a few thousand tweets a day back then, not to mention images. (Incidentally, MicroMappers can also be used to tag SMS). Furthermore, we had no trained volunteers on standby back when the quake struck. Today, not only do we have a highly experienced network of volunteers from the Standby Volunteer Task Force (SBTF) who serve as first (digital) responders, we also have an ecosystem of volunteers from the Digital Humanitarian Network (DHN). In the case of Typhoon Yolanda, we also had a formal partner, the UN Office for the Coordination of Humanitarian Affairs (OCHA), that officially requested digital humanitarian support. In other words, our efforts are directly in response to clearly articulated information needs. In contrast, the response to Haiti was “supply based” in that we simply pushed out all information that we figured might be of use to humanitarian responders. We did not have a formal partner from the humanitarian sector going into the Haiti operation.
What this new digital humanitarian operation makes clear is that preparedness, partnerships & appropriate humanitarian technology go a long way to ensuring that our efforts as digital humanitarians add value to the field-based operations in disaster zones. The above Prezi by SBTF co-founder Anahi (click on the image to launch the presentation) gives an excellent overview of how these digital humanitarian efforts are being coordinated in response to Yolanda.
While there are many differences between the digital response to Haiti and Yolanda, several key similarities have also emerged. First, neither was perfect, meaning that we learned a lot in both deployments; taking a few steps forward, then a few steps back. Such is the path of innovation, learning by doing. Second, just as in Haiti, there is no way we could do this digital response work without Skype. Third, our operations were affected by telecommunications going offline in the hardest-hit areas. We saw an 18.7% drop in relevant tweets on Saturday compared to the day before, for example. Fourth, while the (very) new technologies we are deploying are promising, they are still under development and have a long way to go. Fifth, the biggest heroes in the response to Haiti were the volunteers—both from the Haitian Diaspora and beyond. The same is true of Yolanda, with hundreds of volunteers from the world over (including the Philippines and the Diaspora) mobilizing online to offer assistance.
A Filipino humanitarian worker in Quezon City, Philippines, for example, is volunteering her time on MicroMappers. As is a customer care advisor from Eurostar in the UK and a fire officer from Belgium who recruited his uniformed colleagues to join the clicking. We have other volunteer Clickers from Makati (Philippines), Cape Town (South Africa), Canberra & Gold Coast (Australia), Berkeley, Brooklyn, Citrus Heights & Hinesburg (US), Kamloops (Canada), Paris & Marcoussis (France), Geneva (Switzerland), Sevilla (Spain), Den Haag (Holland), Munich (Germany) and Stokkermarke (Denmark) to name just a few! So this is as much a human story as it is one about technology. This is why online communities like MicroMappers are important. So please join our list-serve if you want to be notified when humanitarian organizations need your help.
“Arguing that Big Data isn’t all it’s cracked up to be is a straw man, pure and simple—because no one should think it’s magic to begin with.” Since citing this point in my previous post on Big Data for Disaster Response: A List of Wrong Assumptions, I’ve come across more mischaracterizations of Big (Crisis) Data. Most of these fallacies originate in the Ivory Tower: social scientists who have carried out one or two studies on the use of social media during disasters and repeat their findings ad nauseam as if their conclusions were the final word on a very new area of research.
The mischaracterization of “Big Data and Sample Bias”, for example, typically arises when academics point out that marginalized communities do not have access to social media. First things first: I highly recommend reading “Big Data and Its Exclusions,” published by Stanford Law Review. While the piece does not address Big Crisis Data, it is nevertheless instructive when thinking about social media for emergency management. Secondly, identifying who “speaks” (and who does not speak) on social media during humanitarian crises is of course imperative, but that’s exactly why the argument about sample bias is such a straw man—all of my humanitarian colleagues know full well that social media reports are not representative. They live in the real world where the vast majority of data they have access to is unrepresentative and imperfect—hence the importance of drawing on as many sources as possible, including social media. Random sampling during disasters is a Quixotic luxury, which explains why humanitarian colleagues seek “good enough” data and methods.
Citing freedom and security concerns, the makers of Replicant are calling for donations, we learn from “Fundraising a Fully Free Fork of Android” at Boing Boing. The project hopes to give us all the choice to run our Android-based mobile devices entirely upon free software.
But wait, you ask, isn’t Android already open source? Well, most of it, but a few “key non-free parts” keep our Android devices tethered to proprietary programs. Such parts, they say, include the layer that communicates with hardware; yes, that would be pretty important.
Also of concern to Replicant developers are the pre-loaded applications that some of us call “bloatware,” but upon which many users have come to rely. The team plans to develop free software that provides the same functionality. (I hope they also include the option to delete applications without them returning uninvited. That would be a nice change.) Furthermore, they have set up a rival to the Google Play store: an app repository called F-Droid. That repository, the article notes, works with all Android-based systems.
The write-up summarizes:
“Mobile operating systems distributed by Apple, Microsoft, and Google all require you to use proprietary software. Even one such program in a phone’s application space is enough to threaten our freedom and security — it only takes one open backdoor to gain access. We are proud to support the Replicant project to help users escape the proprietary restrictions imposed by the current major smartphone vendors. There will still be problems remaining to solve, like the proprietary radio firmware and the common practice of locking down phones, but Replicant is a major part of the solution.”
Replicant is underpinned by copyrighted software that has been released under an assortment of free licenses, which their site links to here. This is an interesting initiative, and we have a couple of questions should it be successful: Will Google’s mobile search revenues come under increased pressure? What happens if Samsung or the Chinese mobile manufacturers jump on this variant of Android? We shall see.