Thursday, June 24, 2010

Responding to Haiti Earthquake - The Technology Perspective

In this post I will discuss my mission to Haiti following the January 2010 earthquake and the role technology played.

Introduction


Last January I was the team leader of the Icelandic Urban Search and Rescue team (ICE-SAR) that deployed to Haiti following the devastating earthquake that struck the island on the afternoon of January 12th. Our team is made up of 35 volunteers who normally respond to various kinds of search and rescue missions in Iceland. Our team recently went through a classification process by the United Nations, which helped us in having everything well organized and exercised. This, coupled with the speed of decision making within the Icelandic Ministry of Foreign Affairs, meant that we were the first international urban search and rescue (USAR) team to arrive in Port-Au-Prince, on the afternoon of January 13th.

The situation as we arrived was devastating and very few of us will ever forget the sights and sounds we witnessed those first days in Haiti. Thankfully we were able to rescue three live victims from a collapsed supermarket within 24 hours of landing in Haiti. Images of the rescue were broadcast live on CNN and our team became a target for media outlets calling for updates.

In this post I want to focus on how technology played a role in our operations in Haiti. One of the first things we did after we landed was to call back home to Iceland via satellite phone to let them know we had arrived safely in Port-Au-Prince. Our second phone call was to the UN office in Geneva, which coordinates international USAR teams. We let them know that the airport was open and that other teams would be able to get landing permissions via the tower at the airport, which was still semi-operational. Our next task was to get connected to the Internet via a satellite-based system called BGAN. Through it we were able to update other USAR teams on the situation via a restricted web platform used by USAR teams worldwide.

The First 48 Hours

After we had set up base camp at the airport, the rest of the team deployed into the city to perform search and rescue at a location called Caribbean Super-Market. At the base camp we were in constant contact with our headquarters back in Iceland and with the UN in Geneva via our BGAN connection. As other teams arrived and the UN set up a coordination center (in our base camp), we were able to utilize the same technology to get maps of the area that could be used to plan the search operations.

Since our team working in the field was also equipped with a BGAN, they could retrieve information specific to their location as well as send and receive information to and from base camp. This made all information processing much easier than relying simply on radio or satellite phone communications. They were also able to upload images of the first rescues, which our headquarters then distributed to the media and to our families and colleagues back home in Iceland.

The Importance of Maps

In the days that followed we searched through areas of Port-Au-Prince, and the coordination of that effort was made possible through access to satellite imagery and maps that were downloaded through the BGANs. This was a stark contrast to the earthquake in Bam, Iran in 2003, when everything was coordinated using a map of the town drawn up from a guidebook. A great NGO called MapAction deploys along with the UN Disaster Assessment and Coordination (UNDAC) team, which is responsible for the overall coordination of large disasters. The MapAction team consists of GIS experts who work daily on creating maps. When they deploy to disasters they are key to getting good reference and situational overview maps on the ground.

Leogane

Having the ability to communicate easily back to headquarters also made it easier for us to do our job. On day 4 we were given the assignment to go to the town of Leogane, which is approx. 30 km outside of Port-Au-Prince. It had been severely damaged by the earthquake since it was closer to the epicenter than the capital. We were handed that assignment late in the evening and were given security clearance to go there early the following morning. Instead of staying awake the entire night retrieving information about the town and planning how to perform operations, we were able to hand over that work to our headquarters team. They had a great Internet connection (compared to our 32 kbps connection) and were all experienced search managers. When we woke up at 5am, we had a great information package waiting in our email inbox, with maps, pictures of large buildings in the town and GPS locations of the schools and government offices. This remote planning of the search effort was made possible through the increased ability to connect to the Internet, even from disaster locations.

Hotel Montana

On our final day in Haiti, the team unanimously volunteered for a very difficult task: to go to Hotel Montana, a hotel and apartment complex where UN and NGO workers frequently stayed, many of them with their families. The hotel had already been searched for 10 days and the likelihood of live finds was very low. Instead, the team knew the main task would be body recovery. In the words of the squad leaders as they informed me of their decision to volunteer for this task, “we feel the humanitarian workers are our team members and we know our families would like to get closure”.

As the team went up to Hotel Montana from our base at the airport, I put a message on Twitter saying that I was proud of my team's decision to take on this difficult task. A few minutes later I got a message via Facebook from a relative of a family member still missing in Hotel Montana. The message pointed me towards a Facebook group focusing on the efforts at Hotel Montana. I started posting information on the group page about our efforts there and answering questions from relatives in dire need of information. There was a media blackout at the site, imposed by the UN peacekeeping team controlling it. This meant relatives were not getting any information at all and many thought nothing was being done at the site. I could provide them with detailed information about the number of teams working on the site and give them insight into the progress being made. Relatives also sent me information about where their loved ones had been, which I made sure to pass on to those coordinating the efforts at the site.

The team came back to camp around 2am after a very long and difficult day. At that time I was able to share with them some of the messages I had received from the relatives, and needless to say there was a very emotional atmosphere in the base camp. I was happy to find out later that rescue teams that came after us to Hotel Montana continued this practice of reporting back to the relatives.

Rebuilding Haiti

Haiti will never leave the minds of those who have responded there and when we meet we all admit our thoughts go back there every day. It is my hope that the current efforts to rebuild Haiti will receive the funding and support needed to help them get back on their feet.

Trusted Spaces

In this post I will discuss the need for closed collaboration groups for disaster coordination.

During a disaster the sharing of information is crucial. A large portion of that information is, and should be, publicly available to everyone. Certain information, such as information about individual beneficiaries, should however receive the appropriate privacy handling such sensitive material is entitled to.

In my previous post I described how crowds can play an increasingly important role in the information management aspect of disaster response. In this post I want to focus on a particular aspect of information management: how you can segment information into closed areas. There can be multiple reasons why you would want to do so.

One example is a particular agency or organization responding to a crisis. It may want the ability to share its internal coordination information among the field workers and headquarters staff working on that particular disaster. Another example is when members of a particular humanitarian cluster (education, health, early recovery, nutrition, etc.) want to share information that is specific to that cluster, but might not be of interest to others. Thirdly, you might want to share particular information between just two organizations, for example the UN and the IFRC.

Dealing with crises in conflict areas is probably the most complex case where information might need to be shared on a confidential basis. With the increased involvement of military organizations in humanitarian operations, there are often cases where they would like to be able to share information with NGOs without at the same time making those NGOs targets because of their interaction with the military community.

The term trusted spaces has been used to describe what is common to these examples. Within a trusted space you can invite those individuals that should have access to the information in question. These trusted spaces can include only a few members or they can span hundreds. Information shared inside a trusted space should not be accessible to those not within the trusted space and due to the sensitivity of the information it can also be argued that the information should be encrypted when sent between those participating in a trusted space.
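The core of a trusted space, stripped of encryption and transport details, is invite-only membership gating every read and write. A minimal sketch of that idea follows; the class and member names are illustrative, not an existing product, and a real implementation would additionally encrypt messages between members as argued above.

```python
class TrustedSpace:
    """Invite-only space: only members can invite, post or read.
    Illustrative sketch; real systems would add encryption and auditing."""

    def __init__(self, name, owner):
        self.name = name
        self.members = {owner}
        self.messages = []          # list of (sender, text)

    def invite(self, inviter, invitee):
        if inviter not in self.members:
            raise PermissionError("only members can invite others")
        self.members.add(invitee)

    def post(self, sender, text):
        if sender not in self.members:
            raise PermissionError("only members can post")
        self.messages.append((sender, text))

    def read(self, reader):
        if reader not in self.members:
            raise PermissionError("not a member of this trusted space")
        return list(self.messages)
```

The important property is that membership is checked on every operation, so information shared inside the space is simply invisible to anyone outside it.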

For trusted spaces to work well in the humanitarian field they must also fulfill a few more requirements. One is that members of the trusted space will not always be online. As I discussed in my previous post, humanitarian field workers are only occasionally connected. The trusted space must be able to deal with this occasional connectivity.

To make things even more complex, one needs to deal with the situation that arises when two individuals update the same piece of information at the same time. Solutions such as record locking and transactions (commonly used by databases) do not work as easily in the occasionally connected world. Trusted spaces therefore have to deal with conflict resolution. In perhaps 95% of cases automated conflict resolution can be applied, but in the remaining cases human interaction is required. That is why trusted spaces must include the capability for humans to communicate directly with each other to resolve the conflict.
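To make the automated-versus-human split concrete, here is a minimal three-way merge sketch (the function name and record shape are my own, not from any particular sync product): two offline edits are compared against the common base, fields changed by only one side merge automatically, and fields changed differently by both sides are flagged for a human.

```python
def merge_records(base, a, b):
    """Three-way merge of two offline edits (a, b) against a common base.
    Returns (merged, conflicts); keys in conflicts need human resolution."""
    merged, conflicts = {}, []
    for key in set(base) | set(a) | set(b):
        va, vb, vbase = a.get(key), b.get(key), base.get(key)
        if va == vb:
            merged[key] = va           # both sides agree
        elif va == vbase:
            merged[key] = vb           # only b changed this field
        elif vb == vbase:
            merged[key] = va           # only a changed this field
        else:
            merged[key] = vbase        # both changed it differently:
            conflicts.append(key)      # keep base, escalate to a human
    return merged, conflicts
```

For example, if one field worker updates the team count while another marks a site completed, the two edits merge cleanly; only when both touch the same field does the trusted space need to put the two people in direct contact.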

In the past, multiple approaches have been taken to address this concept. One of the most commonly used solutions in the humanitarian field has been Microsoft Groove and its concept of workspaces. Others have used password-protected web sites, sometimes built on technologies such as Microsoft SharePoint. While both have been used successfully, one could argue that in both cases Big World solutions are being used to solve issues in the Small World (see the definition of Small World-Big World in my last post), and when doing so issues often come up, such as how to deal with the limited and expensive bandwidth in the Small World.

What is needed is a way to create and maintain trusted spaces using cloud technologies while at the same time allowing for the occasionally connected nature of small worlds. Anyone interested in creating this?

Flocks

In this post I will discuss ways to streamline information management in crowded yet occasionally connected environments.

Introduction

Crowdsourcing is rapidly becoming an important tool in disaster response, as I described in my previous post. In that post I described how impromptu volunteer groups gathered to provide various forms of assistance to the people of Haiti. An interesting observation about that effort is that as CrisisCamps were held around the world, people inside each camp would divide themselves into groups focusing on a particular project or task. At the same time people in camps in another city would be doing the same.


In order for the various groups in multiple camps to be able to coordinate their efforts, wikis, phone conferences, Skype chats and various other solutions were used to bring everyone together. In some cases solutions such as mechanical turks were used to divide the tasks at hand between those working on a particular project. In most cases volunteer project leaders were appointed and made responsible for defining the process to be used and handing out the tasks.

Birds of a feather

“Birds of a feather flock together” is an old saying describing the fact that like-minded people group themselves together. In social media today there are a number of ways in which people can group themselves together. Within Twitter, users make use of hashtags to mark their messages as being about a particular topic. Users interested in that topic can then create a search that shows all messages containing that particular hashtag. Within Facebook, users have the ability to create groups, and users can either self-subscribe to these groups or membership can be by invitation only.
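The hashtag mechanism described above is essentially a text filter over a message stream. A naive sketch (my own function name, and deliberately simplistic: it matches whole whitespace-separated words only, ignoring trailing punctuation issues) shows how little machinery is involved:

```python
def by_hashtag(messages, tag):
    """Return messages containing the given hashtag, case-insensitively.
    Naive: splits on whitespace, so '#tag.' with punctuation is missed."""
    tag = tag.lower()
    return [m for m in messages
            if tag in (word.lower() for word in m.split())]
```

This simplicity is exactly why hashtag searches work so well at small volume, and why, as the next paragraphs describe, they saturate when thousands of messages per hour all carry the same tag.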

Twitter search lists and Facebook groups, however, do not scale well. A group focusing on the rescue efforts at Hotel Montana in Haiti was receiving 100-200 messages per hour, and each message drew multiple responses, often totaling over 1,000 messages in an hour.

In the same way, Twitter searches for a particular hashtag quickly become saturated, especially due to the high number of re-tweets. During the first few hours and days of the Chile earthquake there were easily over 1,000 tweets per hour and it became very hard for a human to keep track of new information coming in.

Curators

An often-used approach to dealing with this problem is the concept of curators. These are people who monitor a large number of sources and then post relevant information to their feed. People then create lists of the most active curators, letting others know they are a good source of information. On a couple of occasions I have ended up on such lists. The problem with the curator approach is that it does not scale well. When I go to sleep or stop ignoring my family, I stop posting. If you are lucky you have a few good curators on a particular topic who span the globe in such a way that 24/7 information flow can be guaranteed.

The concept of Flocks

So how can we build upon the approaches currently being applied (curators, search lists, Facebook groups) to get better information sharing? What we need is a simple mechanism for expressing interest in a particular topic and the ability to share information about that topic with all of those interested. To mirror the saying used above, those who are interested in a particular topic would join a flock. Once you join the flock you would be able to see the information already shared between its members, and you could start communicating with other members, either directly or with the entire flock. Information posted by those who are the most active (the curators) should be given priority over other information. It should also be possible to organize the information shared within the flock. As members of a flock connect to it, they should be able to see what information has been provided since the last time they were connected.
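The behaviors above (join, post, catch up on what arrived since your last visit, with the most active posters prioritized) can be sketched in a few lines. This is only an illustration of the concept, not an existing service; all names are my own, and sequence numbers stand in for timestamps.

```python
import itertools

class Flock:
    """A topic-based group: members post messages; on reconnect a member
    sees only what arrived since their last visit, with messages from the
    most active posters (the de-facto curators) listed first."""
    _clock = itertools.count()      # monotonic sequence numbers

    def __init__(self, topic):
        self.topic = topic
        self.messages = []          # (seq, author, text)
        self.last_seen = {}         # member -> highest seq already seen
        self.post_counts = {}       # author -> number of posts

    def join(self, member):
        self.last_seen.setdefault(member, -1)

    def post(self, author, text):
        self.messages.append((next(Flock._clock), author, text))
        self.post_counts[author] = self.post_counts.get(author, 0) + 1

    def catch_up(self, member):
        new = [m for m in self.messages if m[0] > self.last_seen[member]]
        if self.messages:
            self.last_seen[member] = self.messages[-1][0]
        # most active authors first, then newest first within each author
        new.sort(key=lambda m: (-self.post_counts[m[1]], -m[0]))
        return [(author, text) for _, author, text in new]
```

Note that the occasional-connectivity requirement falls out of the `last_seen` bookkeeping: a member who has been offline for hours simply gets everything above their high-water mark on reconnect.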

A solution like this could easily be built upon social media technologies that already exist. Twitter could be used to send and receive messages to/from a flock. A Twitter list could be used to coordinate the membership of the flock. A simple cloud based web site could be used to allow information management and visualization of that information.

Instead of constantly re-tweeting important information, one could envision a system through which users could tag it as key information. Such information would thereby get priority over other information.

The flocks could be either open or closed spaces where information gets collected. Their lifetime could be minutes, hours, days, months or years, all depending on what they were focusing on. By adding a bit of intelligence to the information being posted to a flock, automatic geo-tagging could be used to visualize the information on a map. Automated translation tools might also help deal with language issues.
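The simplest form of the automatic geo-tagging mentioned above is a gazetteer lookup: scan each message for known place names and attach coordinates. The sketch below uses a tiny hand-made gazetteer with approximate coordinates for three Haitian towns purely for illustration; a real system would draw on a proper GIS dataset and handle spelling variants.

```python
# A tiny illustrative gazetteer (approximate coordinates);
# a real system would use a full GIS place-name dataset.
GAZETTEER = {
    "leogane": (18.51, -72.63),
    "port-au-prince": (18.59, -72.31),
    "jacmel": (18.23, -72.53),
}

def geo_tag(message):
    """Attach coordinates if the message mentions a known place name."""
    text = message.lower()
    for place, coords in GAZETTEER.items():
        if place in text:
            return {"text": message, "place": place, "coords": coords}
    return {"text": message, "place": None, "coords": None}
```

Messages that match can then be dropped straight onto a situational awareness map; the rest fall through to human review.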

Dealing with occasionally connected environments

But how would this kind of concept work for those operating in occasionally connected environments, such as those experienced by disaster response workers? By using social media technologies such as Twitter as the transport mechanism, you can fall back on technologies such as text messages (SMS) as the delivery channel; they are one of the first communication mechanisms to get up and running. Synchronization technologies such as FeedSync are designed to operate in these environments. The blocks of information being transmitted are usually small, and conflict resolution is not very complex for this kind of information sharing. A small off-line or mobile client could easily be developed that would provide similar functionality for those in the occasionally connected environment as the cloud-based web site provides for those in the Big World.

Crowds, Clouds and Crisis

This post describes a disruption that is under way in humanitarian information management and processing, and the role that both crowds and clouds will play in it, leading to a better ability to handle crises.
“There can be no deep innovation without an initial period of deep disruption”
Introduction
The January 2010 earthquake in Haiti was a turning point for humanitarian information management and processing. As the first major natural disaster since the explosion of social media, it allowed people from around the world, for the first time, to share information in real time with each other and with organizations involved in the response. Urban Search and Rescue teams searching through the rubble for missing people would be contacted via social media with information about those missing, and in return were able to provide back to relatives and friends accurate and detailed information about the rescue efforts. At the same time citizens were reporting locations of collapsed houses, camps of displaced people and medical facilities. These locations were mapped onto a situational awareness map that allowed responders to get a better overview of the situation facing them. And all of this happened in an ad-hoc manner through social media. Volunteer groups were set up around the world to help develop, test and translate applications, while other groups mapped the streets of Port-Au-Prince and translated messages coming in from citizens in Creole to English before passing them on to aid organizations.

When dealing with disaster response you are normally faced with two opposing problems: the lack of information and the flood of information. In the initial hours following a disaster, information coming out of the affected areas is very scarce and often does not get propagated to the humanitarian response community, but instead ends up inside one organization or another. At the same time the media, and now especially social media, provide an overwhelming amount of information that is very disconnected and unorganized. This flood often forces response organizations to reject it as false or unverified. Similarly, multiple organizations will start doing assessments in the affected areas, but don't have the bandwidth to process them and are reluctant to share them with others. It is important that this paradox is addressed.

When responding to disasters, very few organizations have the luxury of deploying multiple information managers. Most of their efforts go into providing the actual on-the-ground assistance. It is however well understood and agreed that effective disaster response must be well planned and must be built on actionable information. Yet way too often we see implementation plans by organizations based on their “gut feel” or “word of mouth” about where the situation is worst. Humanitarian organizations have attempted to come up with rapid assessments for identifying where to put their efforts, but most of those “rapid” assessments are over 10 pages long and take forever to process. A small effort is under way to do joint assessments by multiple organizations, to reduce the assessment fatigue of an affected population that can find itself asked the same questions 10 times before any help arrives. It is therefore important that we rethink the way we assess the situation in the field and how we process the information we receive.

While a few years back connectivity would be lost for weeks following a natural disaster, we now see mobile phone companies get basic services such as text messaging back up and running within 24-72 hours of the initial event. At the same time the ownership of mobile phones has exploded, with well over half the population of Earth owning one. Even in some of the more remote locations you now have mobile connectivity. These people are connecting via their mobile phones to social media in ever-growing numbers. We must find ways of leveraging these people, their local know-how and their information.

Cloud-based services such as Facebook and Twitter have already made it possible for us to communicate with millions of people and to leverage our individual social networks to reach a wider audience than ever before. But right now humanitarian organizations are mainly utilizing this channel for advocacy, providing information about their activities in the hope of generating funds to sustain them. Very little effort has been made to utilize these channels for information sharing or analysis.

The Crowds
During the Haiti crisis we saw a new form of humanitarian response: the crowd response. Through a few strategic social networks, a set of volunteer crowds was established to address some of the challenging information-related issues faced by the citizens and response organizations in Haiti. One of the most successful was the collaboration between Ushahidi, InSTEDD and a few others around a solution called Project 4636. It allowed citizens in Haiti to use SMS to send in information and requests for assistance. Instead of relying on specially formatted text messages from citizens, they made a quick decision to utilize the power of the crowd to transform free-text messages into structured, geo-spatially located messages. By getting volunteer groups (all formed through social networks) to give some of their time to perform those validations, geo-spatial addressing and translations, they could provide situational information to humanitarian agencies on the ground. They had literally thousands of volunteers from around the globe performing this task.

We need to harness this power of the crowds and the willingness of people to help out during times of need to address some of the more complicated information management issues faced by the humanitarian community. People interested in participating in these kinds of efforts on a regular basis could be trained to perform certain tasks and then be called upon during times of crisis. Maybe it is time for the Internet equivalent of the Peace Corps.

The crowds can be used for more than just simple situational awareness as in the case of Haiti. The emerging field of collaborative business intelligence and analysis can easily be applied to the humanitarian space. As mentioned earlier, large amounts of data are being collected both via humanitarian response organizations and through social media. Most of that data, however, is never analyzed beyond the simple analytics that can be done with a few minutes' or hours' work in Excel. Within the field of collaborative BI, the people involved are split into three types: the producers, the collaborators and the consumers. By applying the concept of the crowd and utilizing the power of the Internet, we don't need those to be located in the same place. The producers, most of them in the field, would make the raw data available and do some basic processing on it, such as enhancing it, highlighting important information and combining different data sources. The collaborators, most of whom would be located outside the field, would remix, mash up and re-package the data into new information solutions. These collaborators could be connected to experts, for example from the academic community, who would be able to guide them. Finally, the consumers of the information would be donors, people in humanitarian HQs and of course the field workers themselves. They contextualize the results to make decisions and develop strategies for how to deal with the crisis.

It is important to understand that, especially during the initial phase of the disaster, the need for speed is greater than the need for accuracy. If you wait for all the data to come in before you make any decision, people will have died before you even start delivering any aid. For example, knowing whether we need 1,000, 10,000 or 100,000 tents is more important than knowing whether the actual number of beneficiaries is 857, 9,300 or 96,544 respectively. This allows us to apply what has been called edge-based analysis, in which multiple and possibly conflicting versions of the truth can exist. The task of the analysis is to come up with emergent prototypes of the situation and test them quickly.
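The tent example above amounts to rounding a field estimate to its nearest power of ten. A one-liner makes the speed-over-precision idea concrete (the function name is my own):

```python
import math

def order_of_magnitude(n):
    """Round a field estimate to its nearest power of ten:
    during the first phase, speed matters more than precision."""
    if n <= 0:
        return 0
    return 10 ** round(math.log10(n))
```

Fed the beneficiary counts from the text, 857, 9,300 and 96,544 collapse to 1,000, 10,000 and 100,000 respectively, which is exactly the level of fidelity an initial tent order needs.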

In 2008 Ted Okada from Microsoft Humanitarian Systems coined the term Big World-Small World to describe how solutions are built either for the Western world (including the headquarters of the humanitarian organizations) or for the field (including citizens of the affected country as well as field workers). It is important for us to understand that solutions built for one world often do not apply in the other. Social media and the growth of mobile phone ownership may provide an excellent opportunity to bridge these two worlds. Through simple means like text messaging we can get information from the small world, process it in the big world and then provide feedback to those in the small world. This feedback loop between the two worlds is important to ensure that both sides become willing participants in this endeavor.

The Cloud

As mentioned earlier, the cloud has enabled some of the advances already made in the crowdsourcing of tasks. But it is important to realize that the cloud must play an ever-increasing role if we want to make this vision a reality. One of the key aspects is that we must be able to scale work efficiently up and down as demand changes. As with most things, we must be able to handle the peaks, yet at the same time know that most of the time there will be almost no activity at all.

The use of the cloud must be threefold. First of all, the cloud must be utilized to coordinate the crowdsourcing, through solutions like turks (CrowdTurks). Secondly, it must be utilized to automate as much of the processing as possible, and finally it should be utilized to share the information back with the consumers, whether they are in the small world or in the big world. Let us look at each one of those.

In the case of Haiti, most of the effort was done by ad-hoc groups gathering in universities and other locations. A few key people led the effort and helped split the work into multiple steps that could then be performed by smaller workforces. At one point in the effort a system similar to the Mechanical Turk developed by Amazon was set up to coordinate the work of processing all the incoming text messages.

This coordination of work needs to be more automated. It needs to be easy for people to sign up to do individual tasks in the process from anywhere. There needs to be a way to create new ad-hoc processes on the fly and provide a description of each step in the process, so people can easily learn what needs to be done and then perform that step for the time they have available. This needs to be flexible and scalable in order to handle the wide variety of tasks that need to be performed and the variations in the availability of the crowd.
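The heart of such coordination is a shared task board: volunteers claim the next open task, work on it for as long as they have, and can hand it back unfinished so nothing is lost when someone drops offline. A minimal sketch (class and method names are mine, not any existing turk system):

```python
class TaskBoard:
    """Minimal ad-hoc crowdsourcing board: volunteers claim the next
    open task and either complete it or release it back to the queue."""

    def __init__(self):
        self.open, self.claimed, self.done = [], {}, []

    def add(self, description):
        self.open.append(description)

    def claim(self, worker):
        if not self.open:
            return None
        task = self.open.pop(0)
        self.claimed[task] = worker
        return task

    def release(self, task):
        # volunteer ran out of time: put the task back at the front
        self.claimed.pop(task, None)
        self.open.insert(0, task)

    def complete(self, task):
        self.claimed.pop(task, None)
        self.done.append(task)
```

Because tasks are small and self-describing, the board scales with the crowd: a volunteer with ten spare minutes claims one task, a camp of fifty claims fifty.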

Secondly, there is the automation of tasks. As information flows in through channels such as social media or text messages, the volume of raw data can be overwhelming. This information may be in multiple languages (for example Creole), yet the overwhelming majority of the people in the crowd may be English speaking. By utilizing technologies like the Microsoft translation framework, the amount of time needed to perform translations can be drastically reduced. Other automatic processing, such as geo-tagging, filtering, removal of duplicates and weighing of authenticity (as attempted by the Ushahidi Swift River project), can be extremely important to make this possible. These automation tasks need to be able to scale up and down as the flow of information rises and falls.
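Of the automated steps listed above, duplicate removal is the easiest to illustrate: normalize each message (case, accents, whitespace) so that retweets and re-sends of the same report compare equal, then keep only the first occurrence. A sketch, with my own function names:

```python
import unicodedata

def normalize(text):
    """Lower-case, strip accents and collapse whitespace so that
    near-identical reports (retweets, re-sends) compare equal."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return " ".join(text.lower().split())

def dedupe(messages):
    """Drop messages whose normalized text has been seen before,
    preserving the original wording of the first occurrence."""
    seen, unique = set(), []
    for m in messages:
        key = normalize(m)
        if key not in seen:
            seen.add(key)
            unique.append(m)
    return unique
```

Like the other automation steps, this is embarrassingly parallel, so it scales up and down in the cloud as the message flow rises and falls.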

Last but not least, it must be easy for people to consume the information being generated. This includes the ability to visualize it both geo-spatially and through other, more common business intelligence visualizations. At the same time it must also be easy for people to retrieve the information in the form of RSS feeds and spreadsheets. When providing access back to the small world (i.e. the field), it is important to realize that those users are almost always only occasionally connected to the big world (i.e. the cloud). We must therefore provide ways through which they can both contribute information and retrieve it via means that support this occasionally connected state. This can be achieved through synchronization technologies such as FeedSync or through peer-to-peer sharing products like Microsoft Groove. We must also remember that during disasters connectivity is intermittent and costly. In most cases we are therefore not talking about direct cloud access to all the visualization products. Instead we must rely on technologies like those mentioned earlier to retrieve the data and perform some of the bandwidth-intensive visualizations directly on the client.

The Way Forward

This crowd- and cloud-based information management is not something that will be done by any single company, but rather as a collaborative crowd effort. Companies that provide cloud-based services should participate by providing access to their clouds and by sharing their expertise in building cloud-based services with the crowds of developers that will have to participate in this effort. Through their corporate social responsibility efforts they will get a chance to share some of their large investments in the cloud with those in dire need of assistance.

Existing collaborative efforts in this field, such as the Random Hacks of Kindness (RHoK) driven by Microsoft, Google, Yahoo, the World Bank and the UN, should serve as a model for a collaborative effort by the private sector, the humanitarian sector and the crowds out there willing to participate, in an effort to make humanitarian response more effective and in turn save lives and reduce suffering.