The purpose of PeaceTXT is to use mobile messaging (SMS) to catalyze behavior change vis-a-vis peace and conflict issues for the purposes of violence prevention. You can read more about our pilot project in Kenya here. We’re hoping to go live next month with some initial trials. In the meantime, we’ve been busy doing research to develop an appropriate monitoring and evaluation strategy. As is often the case with new, innovative initiatives like this, we have to look to other fields for insights, which is why my colleague Peter van der Windt recently shared this peer-reviewed study entitled: “Mobile Phone Technologies Improve Adherence to Antiretroviral Treatment in a Resource-Limited Setting: A Randomized Controlled Trial of Text Message Reminders.”
I have been writing and blogging about “information forensics” for a while now and thus relished Nieman Reports’ must-read study on “Truth in the Age of Social Media.” My applied research has specifically been on the use of social media to support humanitarian crisis response (see the multiple links at the end of this blog post). More specifically, my focus has been on crowdsourcing and automating ways to quantify veracity in the social media space. One of the Research & Development projects I am spearheading at the Qatar Computing Research Institute (QCRI) specifically focuses on this hybrid approach. I plan to blog about this research in the near future but for now wanted to share some of the gems in this superb 72-page Nieman report.
In the opening piece of the report, Craig Silverman writes that “never before in the history of journalism—or society—have more people and organizations been engaged in fact checking and verification. Never has it been so easy to expose an error, check a fact, crowdsource and bring technology to bear in service of verification.” While social media is new, traditional journalistic skills and values are still highly relevant to verification challenges in the social media space. In fact, some argue that “the business of verifying and debunking content from the public relies far more on journalistic hunches than snazzy technology.”
I disagree. This is not an either/or challenge. Social computing can help everyone, not just journalists, develop and test hunches. Indeed, it is imperative that these tools be in the reach of the general public since a “public with the ability to spot a hoax website, verify a tweet, detect a faked photo, and evaluate sources of information is a more informed public. A public more resistant to untruths and so-called rumor bombs.” This public resistance to untruths can itself be monitored and modeled to quantify veracity, as this study shows.
I was recently in New York where I met up with my colleague Fernando Diaz from Microsoft Research. We were discussing the uses of social media in humanitarian crises and the various constraints of social media platforms like Twitter vis-a-vis their Terms of Service. And then this occurred to me: we have organ donation initiatives and organ donor cards that many of us carry around in our wallets. So why not become a “Data Donor” as well in the event of an emergency? After all, it has long been recognized that access to information during a crisis is as important as access to food, water, shelter and medical aid.
Phi Beta Iota: This has very provocative and inspiring implications for redefining (or restoring) what it means to be human — the art of sharing information to help the community, the collective, survive and prosper. We are honoring Dr. Meier’s original idea by including Data Donor as a permanent search term for the Open Source Everything Highlights.
My colleague Andrea Tapia and her team at Penn State University have developed an interesting iPhone application designed to support humanitarian response. This application is part of their EMERSE project: Enhanced Messaging for the Emergency Response Sector. The other components of EMERSE include a Twitter crawler, automatic classification and machine learning.
[…]
The iPhone application developed by Penn State is designed to help humanitarian professionals collect information during a crisis. “In case of no service or Internet access, the application rolls over to local storage until access is available. However, the GPS still works via satellite and is able to geo-locate data being recorded.” The Twitter crawler component captures tweets referring to specific keywords “within a seven-day period as well as tweets that have been posted by specific users. Each API call returns at most 1000 tweets and auxiliary metadata […].” The machine translation component uses Google Language API.
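The offline “rollover” behavior described above can be sketched in a few lines. This is an illustrative Python sketch, not the actual EMERSE code: the class name, file format and `_upload` stand-in are my own assumptions about how such a buffer-and-flush design might look.

```python
import json
import os
import tempfile


class RolloverStore:
    """Buffer reports locally while offline; flush them when connectivity
    returns. Illustrative sketch of the rollover design described for the
    EMERSE iPhone app (names and structure are assumptions, not EMERSE code)."""

    def __init__(self, path):
        self.path = path
        self.uploaded = []  # stand-in for reports delivered to a remote server
        if os.path.exists(self.path):
            os.remove(self.path)  # start with a clean local buffer

    def record(self, report, online):
        if online:
            self._flush()          # first push anything buffered earlier
            self._upload(report)   # then push the current report
        else:
            # no connectivity: append the geo-located report to local storage
            with open(self.path, "a") as f:
                f.write(json.dumps(report) + "\n")

    def _flush(self):
        if not os.path.exists(self.path):
            return
        with open(self.path) as f:
            for line in f:
                self._upload(json.loads(line))
        os.remove(self.path)

    def _upload(self, report):
        self.uploaded.append(report)  # a real app would POST to an API here


# usage: one report captured offline, one captured once service returns
store = RolloverStore(os.path.join(tempfile.gettempdir(), "emerse_buffer.jsonl"))
store.record({"lat": 18.54, "lon": -72.34, "note": "bridge out"}, online=False)
store.record({"lat": 18.55, "lon": -72.33, "note": "clinic open"}, online=True)
print(len(store.uploaded))  # 2: the buffered report was flushed on reconnect
```

Note that GPS fixes come from satellites and thus keep working without cell service, which is why buffering the geo-located reports locally loses nothing but timeliness.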
This new book, Human Rights and Information Communication Technologies: Trends and Consequences of Use, promises to be a valuable resource to both practitioners and academics interested in leveraging new information & communication technologies (ICTs) in the context of human rights work. I had the distinct pleasure of co-authoring a chapter for this book with my good colleague and friend Jessica Heinzelman. We focused specifically on the use of crowdsourcing and ICTs for information collection and verification. Below is the Abstract & Introduction for our chapter.
Abstract
Accurate information is a foundational element of human rights work. Collecting and presenting factual evidence of violations is critical to the success of advocacy activities and the reputation of organizations reporting on abuses. To ensure credibility, human rights monitoring has historically been conducted through highly controlled organizational structures that face mounting challenges in terms of capacity, cost and access. The proliferation of Information and Communication Technologies (ICTs) provide new opportunities to overcome some of these challenges through crowdsourcing. At the same time, however, crowdsourcing raises new challenges of verification and information overload that have made human rights professionals skeptical of their utility. This chapter explores whether the efficiencies gained through an open call for monitoring and reporting abuses provides a net gain for human rights monitoring and analyzes the opportunities and challenges that new and traditional methods pose for verifying crowdsourced human rights reporting.
Surprising Findings: Using Mobile Phones to Predict Population Displacement After Major Disasters
Rising concerns over the consequences of mass refugee flows during several crises in the late 1970s prompted the United Nations (UN) to call, for the first time, for the establishment of early warning systems. “In 1978-79 for example, the United Nations and UNHCR were clearly overwhelmed by and unprepared for the mass influx of Indochinese refugees in South East Asia. The number of boat people washed onto the beaches there seriously challenged UNHCR’s capability to cope. One of the issues was the lack of advance information. The result was much human suffering, including many deaths. It took too long for emergency assistance by intergovernmental and non-governmental organizations to reach the sites” (Druke 2012).
Phi Beta Iota: We continue to advocate free cellular service for all humans as a foundation for creating infinite wealth. OpenBTS is ready — combine that with Open Spectrum and we have an accelerator effect.
There’s a new Crowdmap in town called DeadUshahidi. The site argues that “Mapping doesn’t equal change. Using crowdsourcing tech like Ushahidi maps without laying the strategic and programmatic ground work is likely not going to work. And while we think great work has been done with crowdsourced reporting, there is an increasing number of maps that are set up with little thought as to why, who should care, and how the map leads to any change.”
In some ways this project is stating the obvious, but the obvious sometimes needs repeating. As Ushahidi’s former Executive Director Ory Okolloh warned over two years ago: “Don’t get too jazzed up! Ushahidi is only 10% of solution.” My own doctoral research, which included a comparative analysis of Ushahidi’s use in Egypt and the Sudan, demonstrated that training, preparedness, outreach and strategic partnerships were instrumental. So I do appreciate DeadUshahidi’s constructive (and entertaining!) efforts to call attention to this issue and explain what makes a good crowdsourced map.
At the same time, I think some of the assumptions behind this initiative need questioning. According to the project, a map with at least one of the following characteristics is added to the cemetery:
No one has submitted a report to your map in the last 12 months.
For time-bound events, like elections and disasters, the number of reports is so infinitesimally small (relative to the size of the community the map is targeting) that the map never reached a point anywhere near relevance. (Our measure for elections is, for instance, # of submissions / # of registered voters > .0001).
The map was never actually started (no category descriptions, fewer than 10 reports). We call that a stillbirth.
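The election criterion above reduces to one ratio test, which can be sketched in a few lines of Python. The function name and the numbers in the usage example are illustrative, not drawn from DeadUshahidi itself:

```python
def relevant_for_election(num_reports, num_registered_voters, threshold=0.0001):
    """DeadUshahidi's example relevance test for election maps:
    submissions per registered voter must exceed the threshold."""
    return num_reports / num_registered_voters > threshold


# illustrative numbers: 120 reports against 14 million registered voters
print(relevant_for_election(120, 14_000_000))    # ratio is about 0.0000086: fails the test
print(relevant_for_election(2_500, 14_000_000))  # ratio is about 0.00018: passes
```

A ratio this coarse is, of course, exactly the kind of blanket threshold questioned below: it says nothing about what the map was actually for.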
Mapping doesn’t equal change, but why assume that every single digital map is launched to create change? Is every blog post written to create change? Is every Wikipedia article edit made to effect change? Every tweet? What was the impact of the last hard copy map you saw? Intention matters and impact cannot be measured without knowing the initial motivations behind a digital map, the intended theory of change and some kind of baseline to measure said change. Also, many digital maps are event-based and thus used for a limited period of time only. They may no longer receive new reports a year after the launch, but this doesn’t make it a “dead” map, simply a completed project. A few may even deserve to go to map heaven—how about a UshahidiHeaven crowdmap?
I’m also not entirely convinced by the argument that the number of reports per map has to cross a certain threshold for the crowdsourced map to be successful. A digital map of a neighborhood in Sydney with fewer than one hundred reports could very well have achieved the intended goal of the project. So again, without knowing or being able to reliably discern the motivations behind a digital map, it is rather farfetched to believe that one can assess whether a project was successful or not. Maybe most of the maps in the DeadUshahidi cemetery were never meant to live beyond a few days, weeks or months in the first place.
That said, I do think that one of the main challenges with Ushahidi/Crowdmap use is that the average number of reports per map is very, very low. Indeed, the vast majority of Crowdmaps are stillborn, as a forthcoming study from Internews shows. Perhaps this long-tail effect shouldn’t be a surprise though. The costs of experimenting are zero and the easier the technology gets, the more flowers will bloom—or rather the more seeds become available. Whether these free and open source seeds actually get planted and grow into flowers (let alone lush ecosystems) is another issue and one dependent on a myriad of factors such as the experience of the “gardener”, the quality of the seeds, the timing and season, the conditions of the soil and climate, and the availability of other tools used for planting and cultivation.
Or perhaps a better analogy is photography. Thanks to digital cameras, we take zillions more pictures than we did just 5 years ago because each click is virtually free. We’re no longer limited to 24 or 36 pictures per roll of film, which first required one to buy said roll and later to pay for it again to be developed. As a result of digital cameras, one could argue that there are now a lot more bad quality (dead) pictures being uploaded everywhere. So what? Big deal. There is also more excellent amateur photography out there as well. What about other technologies and media? There are countless “dead” Twitter accounts, WordPress blogs, Ning platforms, customized Google Maps, etc. Again, so what?
Neogeography is about democratizing map-making and user-generated maps. Naturally, there’s going to be learning and experimentation involved. So my blog post is not written in defense of Ushahidi/Crowdmap but rather in defense of all amateur digital mappers out there who are curious and just want to map whatever the heck they please. In sum, and to return to the gardening analogy if I may, the more important question here is why the majority of (Usha)seeds aren’t planted or don’t grow, and what can be done about this in a proactive manner. Is there something wrong with the seed? Do would-be gardeners simply need more gardening manuals? Or do they need more agile micro-tasking and data-mining tools? The upcoming Internews report goes a long way to explaining the why & what, and TechChange’s course on Ushahidi may be one way to save some future maps from ending up in the DeadUshahidi cemetery prematurely.