My team and I at QCRI have just completed a detailed analysis of the 13,200+ tweets posted from one hour before the attacks began until two hours into the attack. The purpose of this study, which will be launched at CrisisMappers 2013 in Nairobi tomorrow, is to make sense of the Big (Crisis) Data generated during the first hours of the siege. A summary of our results is displayed below. The full results of our analysis and discussion of findings are available as a GoogleDoc and also PDF. The purpose of this public GoogleDoc is to solicit comments on our methodology so as to inform the next phase of our research. Indeed, our aim is to categorize and study the entire Westgate dataset (730,000+ tweets) in the coming months. In the meantime, sincere appreciation goes to my outstanding QCRI Research Assistants, Ms. Brittany Card and Ms. Justine MacKinnon, for their hard work on the coding and analysis of the 13,200+ tweets. Our study builds on this preliminary review.
The following 7 figures summarize the main findings of our study. These are discussed in more detail in the GoogleDoc/PDF.
Figure 1: Who Authored the Most Tweets?
Figure 2: Frequency of Tweets by Eyewitnesses Over Time
Digital humanitarian volunteers have been busy tagging images posted to social media in the aftermath of Typhoon Yolanda. More specifically, they’ve been using the new MicroMappers ImageClicker to rate the level of damage they see in each image. Thus far, they have clicked over 7,000 images. Those that are tagged as “Mild” and “Severe” damage are then geolocated by members of the Standby Volunteer Task Force (SBTF), who have partnered with GISCorps and ESRI to create this live Crisis Map of the disaster damage tagged using the ImageClicker. The map takes a few seconds to load, so please be patient.
The more pictures are clicked using the ImageClicker, the more populated this crisis map will become. So please help out if you have a few seconds to spare—that’s really all it takes to click an image. If there are no pictures left to click or the system is temporarily offline, then please come back a while later, as we’re uploading images around the clock. And feel free to join our list-serve in the meantime if you wish to be notified when humanitarian organizations need your help in the future. No prior experience or training is necessary. Anyone who knows how to use a computer mouse can become a digital humanitarian.
The SBTF, GISCorps and ESRI are members of the Digital Humanitarian Network (DHN), which my colleague Andrej Verity and I co-founded last year. The DHN serves as the official interface for direct collaboration between traditional “brick-and-mortar” humanitarian organizations and highly skilled digital volunteer networks. The SBTF Yolanda Team, spearheaded by my colleague Justine MacKinnon, for example, has also produced this map based on the triangulated results of the TweetClicker:
There’s a lot of hype around the use of new technologies and social media for disaster response. So I want to be clear that our digital humanitarian operations in the Philippines have not been perfect. This means that we’re learning (a lot) by doing (a lot). Such is the nature of innovation. We don’t have the luxury of locking ourselves up in a lab for a year to build the ultimate humanitarian technology platform. This means we have to work extra, extra hard when deploying new platforms during major disasters—because not only do we do our very best to carry out Plan A, but we often have to carry out Plans B and C in parallel just in case Plan A doesn’t pan out. Perhaps Samuel Beckett summed it up best: “Ever tried. Ever failed. No matter. Try Again. Fail again. Fail better.”
China is having a light-bulb moment. Scientists from the Shanghai Institute of Technical Physics have discovered that a one-watt LED bulb embedded with a microchip can transmit a wireless signal strong enough to provide Internet access for four computers.
The discovery, aptly named “Li-Fi,” relies on special LED light bulbs that use light as the carrier instead of traditional radio frequencies.
Data rates as fast as 150 megabits per second were achieved with the new Li-Fi connection, making it faster, cheaper and more energy efficient than traditional Wi-Fi signals.
Li-Fi apparently uses only five percent of the energy required to power Wi-Fi-emitting devices, which rely on energy-hungry cooling systems to supply Internet via cell towers and Wi-Fi stations.
Though the discovery has huge potential for the way we connect to the Internet, Li-Fi is still at a crude testing stage, since it doesn't work if the light bulb is turned off or if the light is blocked. That doesn't seem like such a huge burden, though: it just means you'll have to leave your lights on if you want to surf the Web. No more online shopping binges in the dark!
Li-Fi demonstrations will take place on November 5 in Shanghai at the International Industry Fair, where 10 kits will be tested out. A bright future seems to be in store for Li-Fi usage, which could range from using car headlights or focused light to transmit data, among many other potential applications.
We’ve been able to process and make sense of a quarter of a million tweets in the aftermath of Typhoon Yolanda. Using both AIDR (still under development) and Twitris, we were able to collect these tweets in real-time and use automated algorithms to filter for both relevancy and uniqueness. The resulting ~55,000 tweets were then uploaded to MicroMappers (still under development). Digital volunteers from the world over used this humanitarian technology platform to tag tweets and now images from the disaster (click image below to enlarge). At one point, volunteers tagged some 1,500 tweets in just 10 minutes. In parallel, we used machine learning classifiers to automatically identify tweets referring to both urgent needs and offers of help. In sum, the response to Typhoon Yolanda is the first to make full use of advanced computing, i.e., both human computing and machine computing to make sense of Big (Crisis) Data.
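To make the filtering step above concrete, here is a minimal sketch of how tweets can be screened for relevancy and uniqueness. This is purely illustrative and not AIDR's or Twitris's actual code: the keyword list stands in for a trained relevancy classifier, and near-duplicates (such as retweets) are collapsed by normalizing the text before comparison.

```python
# Illustrative sketch (not the actual AIDR/Twitris pipeline): filter a
# stream of tweets for relevancy (keyword match standing in for a trained
# classifier) and uniqueness (duplicate removal via normalized text).
import re

# Hypothetical relevancy vocabulary for this sketch.
RELEVANT_TERMS = {"yolanda", "typhoon", "damage", "rescue", "relief"}

def normalize(text):
    """Lowercase, strip URLs/mentions/"RT" so retweets collapse together."""
    text = re.sub(r"https?://\S+|@\w+|\bRT\b", "", text, flags=re.I)
    return " ".join(re.sub(r"[^a-z0-9 ]", " ", text.lower()).split())

def filter_tweets(tweets):
    seen, kept = set(), []
    for tweet in tweets:
        key = normalize(tweet)
        if not RELEVANT_TERMS & set(key.split()):
            continue  # not relevant to the disaster
        if key in seen:
            continue  # near-duplicate (e.g., a retweet)
        seen.add(key)
        kept.append(tweet)
    return kept

tweets = [
    "RT @relief: Typhoon Yolanda damage reported in Tacloban http://t.co/x",
    "Typhoon Yolanda damage reported in Tacloban",
    "Lovely sunset tonight!",
]
print(filter_tweets(tweets))  # keeps only the first, unique relevant tweet
```

In a real deployment the keyword test would be replaced by the machine learning classifiers mentioned above, but the relevancy-then-uniqueness structure of the pipeline is the same.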
We’ve come a long way since the tragic Haiti Earthquake. There was no way we would’ve been able to pull off the above with the Ushahidi platform. We weren’t able to keep up with even a few thousand tweets a day back then, not to mention images. (Incidentally, MicroMappers can also be used to tag SMS). Furthermore, we had no trained volunteers on standby back when the quake struck. Today, not only do we have a highly experienced network of volunteers from the Standby Volunteer Task Force (SBTF) who serve as first (digital) responders, we also have an ecosystem of volunteers from the Digital Humanitarian Network (DHN). In the case of Typhoon Yolanda, we also had a formal partner, the UN Office for the Coordination of Humanitarian Affairs (OCHA), that officially requested digital humanitarian support. In other words, our efforts are directly in response to clearly articulated information needs. In contrast, the response to Haiti was “supply based” in that we simply pushed out all information that we figured might be of use to humanitarian responders. We did not have a formal partner from the humanitarian sector going into the Haiti operation.
What this new digital humanitarian operation makes clear is that preparedness, partnerships & appropriate humanitarian technology go a long way to ensuring that our efforts as digital humanitarians add value to the field-based operations in disaster zones. The above Prezi by SBTF co-founder Anahi (click on the image to launch the presentation) gives an excellent overview of how these digital humanitarian efforts are being coordinated in response to Yolanda.
While there are many differences between the digital response to Haiti and Yolanda, several key similarities have also emerged. First, neither was perfect, meaning that we learned a lot in both deployments; taking a few steps forward, then a few steps back. Such is the path of innovation, learning by doing. Second, as in Haiti, there’s no way we could have done this digital response work without Skype. Third, our operations were affected by telecommunications going offline in the hardest hit areas. We saw an 18.7% drop in relevant tweets on Saturday compared to the day before, for example. Fourth, while the (very) new technologies we are deploying are promising, they are still under development and have a long way to go. Fifth, the biggest heroes in response to Haiti were the volunteers—both from the Haitian Diaspora and beyond. The same is true of Yolanda, with hundreds of volunteers from the world over (including the Philippines and the Diaspora) mobilizing online to offer assistance.
A Filipino humanitarian worker in Quezon City, Philippines, for example, is volunteering her time on MicroMappers. As is a customer care advisor from Eurostar in the UK and a fire officer from Belgium who recruited his uniformed colleagues to join the clicking. We have other volunteer Clickers from Makati (Philippines), Cape Town (South Africa), Canberra & Gold Coast (Australia), Berkeley, Brooklyn, Citrus Heights & Hinesburg (US), Kamloops (Canada), Paris & Marcoussis (France), Geneva (Switzerland), Sevilla (Spain), Den Haag (Holland), Munich (Germany) and Stokkermarke (Denmark) to name just a few! So this is as much a human story as it is one about technology. This is why online communities like MicroMappers are important. So please join our list-serve if you want to be notified when humanitarian organizations need your help.
The sharing economy movement is taking new strides in the Arab World, and many platforms have taken the initiative of implementing collaborative-economy methods. Here we dig into the factors and the potential that could see this industry grow quickly in the region, and offer some success stories.
The sharing economy in the Arab World reflects an ongoing shift from owning to accessing. Increasingly, the value of a product lies in its usage rather than in its outright ownership, as was the case under mainstream consumer models. Used products are more fashionable, thanks to the popularity of online platforms for buying and selling used goods.
People are also adopting what could be called collaborative lifestyles, depending on each other to circulate and spread whatever occupies their daily interests and concerns. We saw this in the turbulent upheavals of the Arab Spring, where the power of social media and its effect on society accelerated the rate at which relationships develop and information is shared.
The sharing economy movement in the Arab World has erupted in recent years, especially in the last one. We’re beginning to share more and more in the Arab World: boats (fishfishme), skills (Taskty), carpooling (Kartag), swapping goods (Swaphood), and selling used goods (krakeebegypt and dubizzle.com; in Morocco, the classified-ads site Avito.ma has become the second most-visited website in the country). And Takemine, the first online marketplace for peer-to-peer goods sharing in Dubai, will launch soon.
The idea is that open source Android is working like a Petri dish. Instead of growing little Googles, the Petri dish harbors a big Amazon and may soon give birth to a bigger Samsung. Here’s the point I noted:
As much as Google likes and touts that Android is open, that freedom may come with the cost of some control over the platform. Amazon may have started the first truly successful “fork” of Android, but Samsung is going after the whole place setting. Samsung kicked off its first Developers Conference on Monday and based on the keynote message, I wouldn’t be too happy if I were Google.
The point is that Android is supposed to be Google’s open source mobile platform. Others can use it, but Android is Google’s idea.
With iPhones too expensive for most mobile users and Microsoft mobile not getting the buzz Redmond hoped for, Android, it seems, is the mobile platform with legs. Amazon and Samsung have figured this out. Both companies have been moving forward with versions of Android reworked to make them less Google-y than Google may have hoped.
Amazon is a lesser problem for Google. Samsung, however, seems to be a bigger potential problem.
But my view is that the larger challenge will be from innovators in other countries who surf on Android. When I was in China, I learned about a number of mobile phones running Android that performed some interesting tricks. One taxi driver had a line of four mobile devices in his taxi. Each mobile had four SIMs. Each SIM connected to a different service providing information about pick ups.
I asked the taxi driver if the phones were running Google Android. The answer was: “I don’t know. They are cheap and do more than a high-dollar, upper-class phone. These are the future, not Apple or Google.”
Is the taxi driver correct? My view is that Google’s Android is not just fragmented. Android is enabling innovators to go in directions that may prove difficult for Google to control. Samsung may be the near term challenge for Google. Looking out over a longer time line, there may be a different set of challenges created by an open source mobile operating system, new manufacturing options, and a burgeoning demand for mobile devices that are delivering fresh, high-value functionality.
Sure, the four phones put on a light show when orders came in. My smartphone has one SIM and was woefully out of step with the Chinese taxi driver’s needs. Google has to think about Android as free and open source software that may spawn some antibiotic-resistant competitors.
There is so much attention (and hype) around the use of social media for emergency management (SMEM) that we often forget about mainstream media when it comes to next generation humanitarian technologies. The news media across the globe has become increasingly digital in recent years—and thus analyzable in real-time. Twitter added little value during the recent Pakistan Earthquake, for example. Instead, it was the Pakistani mainstream media that provided the immediate situational awareness necessary for a preliminary damage and needs assessment. This means that our humanitarian technologies need to ingest both social media and mainstream media feeds.
Now, this is hardly revolutionary. I used to work for a data mining company ten years ago that focused on analyzing Reuters Newswires in real-time using natural language processing (NLP). This was for a conflict early warning system we were developing. The added value of monitoring mainstream media for crisis mapping purposes has also been demonstrated repeatedly in recent years. In this study from 2008, I showed that a crisis map of Kenya was more complete when sources included mainstream media as well as user-generated content.
So why revisit mainstream media now? Simple: GDELT, the Global Database of Events, Language, and Tone that my colleague Kalev Leetaru launched earlier this year. GDELT is the single largest public, global event-data catalog ever developed. Digital humanitarians need no longer monitor mainstream media manually. We can simply develop a dedicated interface on top of GDELT to automatically extract situational-awareness information for disaster response purposes. We're already doing this with Twitter, so why not extend the approach to global digital mainstream media as well?
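As a rough illustration of what such an interface could look like: GDELT publishes daily event exports as tab-delimited files (at URLs like http://data.gdeltproject.org/events/YYYYMMDD.export.CSV.zip), where each row carries a CAMEO event code and geographic fields. The sketch below filters records for humanitarian-aid events in a given country. The column indices follow a GDELT 1.0-style layout and the chosen CAMEO codes are assumptions for illustration; consult the current GDELT codebook before relying on them.

```python
# Hedged sketch of a disaster-monitoring filter over GDELT daily event
# records. Column positions assume a GDELT 1.0-style layout; verify
# against the official codebook before use.
import csv
import io

# Assumed 0-based column indices (GDELT 1.0-style layout).
COL_EVENT_CODE, COL_COUNTRY, COL_SOURCE_URL = 26, 51, 57

# CAMEO 07x ("provide aid") codes as a stand-in for disaster relevance.
AID_CODES = {"070", "071", "072", "073", "074", "075"}

def disaster_events(tsv_text, country):
    """Yield (event_code, source_url) for aid events in the given country."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    for row in reader:
        if len(row) <= COL_SOURCE_URL:
            continue  # malformed / truncated record
        if row[COL_COUNTRY] == country and row[COL_EVENT_CODE] in AID_CODES:
            yield row[COL_EVENT_CODE], row[COL_SOURCE_URL]

# Tiny synthetic record for demonstration ("RP" = FIPS code, Philippines).
sample = ["0"] * 58
sample[COL_EVENT_CODE] = "073"   # provide humanitarian aid
sample[COL_COUNTRY] = "RP"
sample[COL_SOURCE_URL] = "http://example.com/story"
print(list(disaster_events("\t".join(sample), "RP")))
```

A production interface would stream the daily zip files, join the matched source URLs back to full articles, and feed them into the same relevancy classifiers used for Twitter.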