The Citizen Media Evidence Partnership (C-MEP) is a joint effort between Amnesty International USA’s Crisis Prevention and Response Unit and me to develop a volunteer video validation network staffed by college students in the US. This fall Scott Edwards and Christoph Koettl at AI (USA) and I are working with one of my PhD students, Scott Meachum, to develop the alpha version of the project.
In a post last week at Political Violence at a Glance I provided a basic overview of the developing role of video validation in human rights work. As I noted there, a new project is archiving validated video relevant to abuse of human rights:
WITNESS has partnered with the citizen news innovator Storyful to create the YouTube Human Rights Channel—a central hub for citizen footage of human rights issues. From the Arab Spring to the repression of a Russian punk band, the Human Rights Channel amplifies the videos that document human rights violations and struggles for justice.
Christoph Koettl explains that:
Video is highly useful for identifying abuses related to the two core issues of the laws of war: (1) the treatment of non-combatants (civilians or prisoners of war); and (2) the prohibition of indiscriminate or direct attacks against civilians.
More specifically, video can provide important evidence that augments or corroborates evidence from other sources, including eyewitness accounts, satellite imagery, news accounts, and even tweets or other text posts on social media. Video will very rarely produce a “smoking gun.” But it can provide a piece of evidence that is otherwise unavailable.
The problem is the flood of video posted from conflict areas. A recent search for “aleppo syria” at YouTube.com produced almost 800,000 hits. The Crisis Prevention and Response Unit at AI (USA) cannot begin to process the volume of material available, sift the wheat from the chaff, and then validate that the few potentially relevant videos actually contain footage from a specific location of interest. A network of volunteers, however, can be trained to do precisely that.
This fall we will spend three weeks training seven students and then put them through certification tests. Once a volunteer is certified, she will be given videos to view, and she will fill out a spreadsheet, coding whether one or more of four criteria are met by the content of the video. For each video that meets at least one criterion, she will then attempt to validate the video (i.e., establish that it really was shot at a given location, thus providing information about events there). The volunteer will then share the completed spreadsheet with the AI (USA) staff, who will conduct further, more stringent validation, and “merge” it with other evidence they have collected.
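The coding step described above amounts to one record per video; a minimal sketch of what such a record might look like in code (the criterion names and the `needs_validation` rule are hypothetical placeholders, since the four actual criteria are not listed here):

```python
from dataclasses import dataclass

@dataclass
class CodingRecord:
    """One row of a volunteer's coding spreadsheet (hypothetical fields)."""
    video_url: str
    criterion_1: bool  # placeholder -- the project's real criteria differ
    criterion_2: bool
    criterion_3: bool
    criterion_4: bool
    location_validated: bool = False  # set after the geolocation step

    def needs_validation(self) -> bool:
        # A video moves on to location validation if at least one
        # of the four coding criteria is met.
        return any((self.criterion_1, self.criterion_2,
                    self.criterion_3, self.criterion_4))

record = CodingRecord("https://example.com/clip", True, False, False, False)
print(record.needs_validation())  # True: one criterion was met
```

The point of the structure, however it is implemented, is that the volunteer's output is a uniform, filterable record that AI (USA) staff can merge with evidence from other sources.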
How does one validate the location of a video? If you would like to read a case study that depicts the process, please see this post at Brown Moses: The Process Of Video Verification – Rabaa, Egypt, August 14th 2013.
We hope to roll out the beta stage of the project in January 2015, during which we will add 3-5 teams of volunteer students at other US universities. If all goes well, the project will “go live” in September 2015, and we hope eventually to grow it to teams of volunteers at several dozen US universities, and perhaps a partnership network with chapters across the globe.
NB: Revised on Tue 24 Sept (added photo).