Over the last two years we have had endless discussions about how crowdsourced information is going to change the way we do crisis information management. Some go as far as to say that regular humanitarian information management is dead and that the time of the crowd has come. But one thing we have yet to show is that all this crowdsourced information actually provides the humanitarian response community with actionable information. We have a few anecdotes of individual reports being helpful, but no overall study of the effectiveness.
I have lately been talking to a number of colleagues from the humanitarian community, and one of the best hints at how to solve this came from Lars Peter Nissen of ACAPS. He pointed out that when they plan needs assessments, they start by defining which decisions they want the assessment to affect. Then they work backwards and design an assessment that provides the answers needed to make those decisions.
When deciding to do a crowdsourced project for a disaster or crisis response, we must do the same. We must first define what decisions we are trying to affect. Once we know that, we need to define what information we would use as the basis for making those decisions. And once we know what information we would use, we should look at the best way to visualize it to support the decision making. In the age of crowdsourcing we have focused a bit too much on the power of geospatial visualization, when often graphs, trends or tables can help us make a better decision.
Once we know what decisions we want to help facilitate and how we want to visualize the supporting information, then we can start thinking about how to get data from the crowd and, through data processing and analysis, turn that data into this information. This may lead us to ask the crowd more controlled questions, or to have our media monitoring teams watch for reports of certain data instead of trying to capture all the available data out there. We can then look at ways of either processing the data automatically or using a mechanical-turk approach, with a "crowd" doing that processing. The same applies to analyzing the processed data: this can either be automated or done by a crowd of people.
So before the next major disaster happens and we activate the digital volunteers, let's sit down, define the end product first, and then work our way back. This way we can really ensure that all this digital volunteer effort is utilized to the max.