
Computer Supported Cooperative Work (CSCW) (2014) 23:483–512. © Springer Science+Business Media Dordrecht 2014. DOI 10.1007/s10606-014-9206-1

Good Enough is Good Enough: Overcoming Disaster Response Organizations’ Slow Social Media Data Adoption

Andrea H. Tapia & Kathleen Moore, College of Information Sciences and Technology, Penn State University, University Park, PA 16801, USA (E-mail: [email protected])

Abstract. Organizations that respond to disasters hold unreasonable standards for data arising from technology-enabled citizen contributions. This has strong negative potential for the ability of these responding organizations to incorporate these data into appropriate decision points. We argue that the landscape of the use of social media data in crisis response is varied, with pockets of use and acceptance among organizations. In this paper we present findings from interviews conducted with representatives from large international disaster response organizations concerning their use of social media data in crisis response. We found that emergency responders already operate with less than reliable, or “good enough,” information in offline practice, and that social media data are useful to responders, but only in specific crisis situations. Also, responders do use social media, but only within their known community and extended network. This shows that trust begins first with people, not data. Lastly, we demonstrate that the barriers cited by responding organizations have moved beyond discussions of trustworthiness and data quality to more operational issues.

Key words: humanitarian, relief, NGO, disaster, crowdsourcing, trust

1. Introduction

The goal of improving information quality and use in any area of disaster relief is challenging. Response organizations operate in conditions of extreme uncertainty. The uncertainty has many sources: the sporadic nature of disasters, the lack of warning associated with some forms of disasters, and the wide array of responders who may or may not respond to any one disaster. This uncertainty increases the need for information. At the same time, research has shown that the amount of operational information flowing through an organization during a disaster can be overwhelming (Knuth, 1999). In these circumstances, appropriate information could make substantial improvements in the disaster relief process. While improved information quality and sharing are noble goals, the real aim is to improve relief services. To date it is unclear how much improvement in relief services results from improved information in the Information and Communication Technologies (ICT) realm. While anecdotal evidence of benefits exists to ignite efforts, a more systematic analysis of the ICT benefits (or lack thereof) is called for. While the uncertainty surrounding the disaster relief process creates
significant challenges for developing accurate assessments, a general model that quantifies the effects of improvements in information quality and use might enable us to provide recommendations for optimal information design that not only improve the performance of the organization itself, but also provide insight into the implications for those it serves. Data directly contributed by citizens, and data scraped from disaster bystanders, have strongly positive potential to give responders more accurate and timely information than is possible with traditional information-gathering methods. Yet organizations that respond to disasters hold unreasonable standards for these data, beyond the standards applied to the same data arising from more traditional sources. This has strong negative potential for these responding organizations’ ability to incorporate these data into decision points. We argue that social media data in crisis response already claims pockets of use and acceptance among organizations. We find that social media data are useful to responders in situations where information is limited, such as at the beginning of an emergency response effort, and when the risks of ignoring an accurate report outweigh the risks of acting on an incorrect one. In some situations, such as search and rescue operations, social media data may never meet the standards of quality required. In others, such as resource and supply management, media crowdsourcing data could be useful as long as it is appropriately verified and classified. In this paper we present findings from interviews conducted with representatives from large international disaster response organizations concerning their use of social media data in crisis response. We demonstrate how the barriers cited by responding organizations have moved beyond discussions of trustworthiness and data quality to more operational issues.
We begin the story by presenting the unvaried landscape found during an earlier stage of this research and contrasting it with more recent findings. Within a two-year span, responding organizations have changed their perceptions of the utility of social media data and have started to make use of it under certain conditions. We argue that the standards of data quality applied to social media data in the past have become as malleable as the field of disaster response itself. This trend will continue, and social media data will continue to find pockets of acceptable use. Further, we explore the nature of trust as it is applied to persons and to information in the social media stream. Lastly, we discuss the technical limitations of the emergency and crisis response communities in overcoming the technical challenges of social media adoption.

1.1. Challenges of the use of social media data in disaster response

Researchers have demonstrated the power of crowdsourcing on the diffusion of news-related information (Kwak et al., 2010; Java et al., 2007; Oricchio, 2010; Lerman and Ghosh, 2010). Media crowdsourcing has been under the lens of researchers due to its use in disasters and other high-profile events (Hui et al.,
2012). The American Red Cross and Dell have launched a new Digital Operations Center, the first social media-based operation devoted to humanitarian relief, demonstrating the growing importance of social media in emergency situations (The American Red Cross). Much has been written concerning the value of using messaging and social media data from crowds of non-professional participants during disasters. Often referred to as “crowdsourcing,” the practice of average citizens reporting on activities “on-the-ground” during a disaster is seen as increasingly valuable (Palen and Vieweg, 2008; Palen et al., 2009; Sutton et al., 2008; Terpstra, 2012; Vieweg et al., 2008). Data produced through crowdsourcing are seen by the decision makers of emergency response teams as ubiquitous, rapid and accessible (Vieweg, 2010). Crowdsourcing is believed to empower average citizens to become more aware during disasters, and to coordinate to help themselves (Palen et al., 2010). During recent disasters in developed environments, average citizens offered ground-level information, keeping outsiders informed about the reality on the ground. There are numerous challenges to the adoption of social media data, including issues of reliability, quantification of performance, deception, focus of attention, and translation of reported observations and inferences in order to respond to crises. Using social media feeds as information sources during a large-scale event is highly problematic for several reasons, including the inability to verify either the person posting or the information that the person posts (Hughes and Palen, 2009; Mendoza et al., 2010a; Starbird et al., 2010; Tapia et al., 2011; Vieweg, 2010). One problem became apparent during the earthquake in Haiti, when thousands of technical volunteers from around the world suddenly attempted to provide responders with mapping capabilities, translation services, and people and resource allocation, all via Short Messaging Service (SMS) at a distance (Portsea, 2011).
Despite the good will of field staff, their institutions’ policies and procedures were never designed to incorporate data from outside their networks, especially at such an overwhelming flow. In addition, the organizations did not have the technical staff or the analytical tools to turn the data flow into actionable knowledge (Palen et al. 2010; Portsea, 2011). Still, researchers are optimistic about the value of the information provided and believe that the issues surrounding trustworthiness can be reasonably resolved (Palen et al. 2009, 2010; Starbird et al. 2010). Temporally, the problem arises at the stage when emergency responders and relief organizations begin engaging their organizational mechanisms to respond to the crises in question (Munro, 2011). For decades, these organizations have operated with a centralized command structure, standard operating procedures, and internal vetting standards to ascertain appropriate responses to disasters. While not optimized to current expectations of speed, efficiency and knowledge, these mechanisms have been successful at bringing rescue, relief and recovery to millions (Walton et al., 2011). A central aspect of these organizational mechanisms is complete control over the internal flow of information concerning the crisis, from source to organizational decision-maker. This ensures accuracy, security, legitimacy, and, eventually, trust between the organization and the information source. Despite a
tremendous amount of research in the area, no mechanisms have been employed for harvesting social media data from the public in a manner that facilitates organizational decisions.

1.2. Trustworthiness and data quality

Many social media and mobile communication experts have looked with dismay upon crisis response organizations that have refused to make use of social media data. Much of the scholarly and para-scholarly literature concerning the reasons for this lack of uptake has focused primarily on the nature of the data quality (see Tapia et al. 2011). These experts have argued that the data available through the crowd can never meet the exacting standards of trustworthiness needed by the responding organizations. Our definition of data quality rests on the concept of fitness or appropriateness. Data quality is a perception or an assessment of data’s fitness to serve its purpose in a given context. Data are of high quality “if they are fit for their intended uses in operations, decision making and planning” (Juran, 1988). The state of completeness, validity, consistency, timeliness and accuracy makes data appropriate for a specific use. Most data quality research involves investigating and describing dimensions of data, including accuracy, correctness, currency, completeness and relevance (Wang et al., 1993). Data quality is important in that acceptable levels of data quality are crucial to decision-making, operational processes and the reliability of business analytics and intelligence. In the years 2008 through 2011, the central barrier to use found in the literature was trustworthiness. According to a United Nations study of the potential of social media for humanitarian relief, “While [social media and crowdsourcing] make available information that would not have emerged otherwise, they pose a serious challenge in terms of authentication.
Validation is a fundamental issue in the further use of social media in situations of conflict and disaster” (Coyle and Meier, 2009). As presented at the Emergency Social Data Summit: The social web is creating a fundamental shift in disaster response that is asking emergency managers, government agencies, and aid organizations to mix their time-honored expertise with real-time input from the public. As of today, most of us are not yet ready to collect, respond or react to this incoming social data in a timely manner. The use of publicly available data in times or places of crisis raises issues of authenticity, privacy, trustworthiness and ownership (Emergency Social Data Summit, 2010).

The issue of authentication is a key barrier to overcome, and the expediency with which these information flows propagate compounds this problem. There is an urgent need for the development of methods and applications for verification
of social media information (Coyle and Meier 2009). Dr. David Wild believes the most common concern from the emergency management community about the use of social media in disasters (and citizen involvement in general) “is one of trust: specifically, the risk of misinformation and rumors spreading virally, thus causing all kinds of untold complications. Officials usually carefully curate information that is given to the public through well-organized channels (e.g. a Public Information Officer at the scene)” (Wild, 2010). Integrating social media or a crowdsourcing-like capability into a public safety or emergency management environment raises unique suitability considerations based upon its use context. These considerations include security and privacy, user identity management and authentication, evidence preservation and chain of custody, and practical possession and control matters. As with any alerting mechanism, the actual credentials and permissions of the person authorized to send alerts must be carefully managed (Mazzarella, 2009). These assertions from the literature were mirrored in our own data collection in 2010 (see Tapia et al. 2011). In 2010, all subjects voiced that the data produced through crowdsourcing were untrustworthy. Equally strongly, the subjects stated that the veracity, accuracy, and legitimacy of data were the most important factors for data used in organizational decision-making. In addition, while the speed of gathering data was mentioned, it was not to be achieved at the cost of veracity. The respondents focused on the importance of the decisions to be made by the organization based on the data gathered. The decisions were framed as life-or-death decisions in which no margin of error could be tolerated. Some of those interviewed could imagine using data from citizens via crowdsourcing in the future, but only if some of the major obstacles to the trustworthiness of data were resolved.
Resolution would come with an increasing number of messages being accorded more trust. From this, we learned that the data themselves are seen as problematic by humanitarian organizations. Another distinction was made between organizations involved in immediate emergency response and those involved in post-disaster recovery. Some subjects found that the parts of their organizations that were involved in the first three days post-disaster were less likely to make use of social media data. In these crucial first days of a crisis, the organization reverted to standard operating procedures that have served the organization and its beneficiaries for decades. All found that the parts of the organizations that dealt with capacity building and recovery post-disaster might be more amenable to trying social media data because the time pressure and criticality of the data were less acute. In this earlier phase of research, we were left with the conclusion that data quality was the single most important issue in determining the use of social media data by responding organizations. Due to the perceived lack of authentication, large-scale responders have been reluctant to incorporate social media data into the process of assessing a disaster situation, and the subsequent
decision-making process of sending aid workers and supplies to disaster locations. Committing to the mobilization of valuable and time-sensitive relief supplies and personnel, based on what may turn out to be illegitimate claims, has been observed to be too great a risk. Even at this point there were cracks in this wall. The subjects saw a strong value in social media data, especially after witnessing the Haiti response, but had no mechanisms to use it. These cracks also began to show that the organizations were beginning to think in a multifaceted way—seeing social media data as useful under some circumstances, as input to certain decisions, as valuable at certain times, and not as universally unusable. This suggests a classic technology adoption issue followed by a process issue. Part of the issue, the idea that the media crowdsourcing data flow is inefficient, may be both a perception and an adoption problem. As Walton et al. (2011) showed in their research on speed in humanitarian logistics, the perception of “speed” is a subjective experience for both the relief provider and those they seek to help. Lack of control over communication may lead decision makers to perceive that logistics are slow, and lack of communication from relief organizations makes the local affected population perceive slowness as well (Walton et al., 2011). This suggests that relief organizations that more fully immerse themselves in the media crowdsourcing environment feel a greater sense of control over the flow of information, and, conversely, that increased communication back to the local affected public through media crowdsourcing increases the public’s perception of speed, even if the same relief organization does not yet utilize the medium in its internal decision trees.
If the perception of trust is hampered by the adoption of media crowdsourcing as a technology (Morris et al., 2012; Thomson et al., 2012), increased familiarity will go a long way toward increasing trust in media crowdsourcing as an information source, until such time as technology in the emergency management world catches up in processing information gleaned from the environment. Social media was, at one point, an unfamiliar technology. Research has shown that as familiarity with these new technologies rises, the potential for trust increases (Morris et al., 2012; Valenzuela et al., 2009). However, an acceptable level of trust in social media does not solve the problem of how these organizations will change their long-established operations and procedures to accommodate new, virtual data sources.

1.3. Beyond trustworthiness to good enough

Perfect knowledge of a rapidly unfolding disaster situation is impossible. Responding organizations already practice decision-making with high levels of uncertainty and risk (Muhren et al., 2010). Several modern authors, such as Palen et al. (2010) and de la Torre et al. (2012), show the saliency and longevity of this understanding. Research already suggests that socially distributed information during crisis events is likely more accurate than currently assumed (Palen, 2009);
however, even if the crowdsourcing environment is accepted and the vetting of the people contributing to it is set aside, absolute trust in the content provided is still difficult to achieve, due to challenges that include locating socially distributed information activity online and filtering and sorting content. Until such time as comprehensive automation is achieved, it may be best to shift the perspective, as Palen et al. (2010) recommended, from “accuracy” to “helpfulness.” We are interested in the malleable elements of trust calibration during complex human-machine-human interactions. Trust enables organizations and individuals to establish expectations, weigh risks, and proceed in a course of action without full knowledge. Our working definition of trust is taken from Alpern, where trust, on behalf of the trustor, means the acceptance of a certain amount of risk when lacking full knowledge and lacking the ability to fully control a situation (Alpern, 1997). If crowdsourcing were guaranteed to lead to optimal decision-making, the use of unverified data from unvetted persons would not feel like the insurmountable challenge it has been described as (Tapia et al., 2011). As Palen et al. (2010) contend, crisis responders never have complete knowledge of any given crisis, as crises, by definition, are scenarios whose conditions hinge on extreme instability. To have all available data, an event would have to exist in a bounded universe. Disasters and other crises, however, are complex by nature, with many moving parts, and are never finite. Thus, satisficing, the “good enough” principle in decision making, should also apply to the technology employed in disaster scenarios (Granger-Happ, 2008). In his article, Granger-Happ quotes some of his subjects. “When a disaster strikes,” an emergency response program director noted, “the number one objective is speed.
We don’t have time to develop the best quality solutions; we need it there now!” Following the tsunami response, a marketing director recalled, “We didn’t have time to have all the meetings, all the reviews, and all the approvals. We had to make on-the-spot decisions. The interesting thing is that nothing fell apart. Maybe we could make decisions like that every day.” Granger-Happ notes that “Each of these brief stories has a common theme: sometimes expedient, ‘good enough,’ solutions are best. In our quest for the best quality, we may in fact have the unintended consequence of having less impact. … Simply stated, sometimes the best is what works.” When the “good enough” literature in the humanitarian space is examined, it often refers to devices or services. For example, a dial-up modem connection, an old cellular flip phone, an MS Access database, or a 15-year-old handheld PDA may be enough technology to complete a task in a crisis situation in which electricity, bandwidth and newer models are unavailable or unreliable. The same principle has been applied to data. Organizations must make decisions with limited knowledge resources. They make the best decision they can with the information and resources at hand. Research on humanitarian disaster response teams has shown that, even with the incorporation of all possible data, optimal decision making is difficult to achieve (Muhren et al. 2010). Thus, the advent of crowdsourcing should not be seen as a cure-all. The basic mechanism for overcoming the optimization issue
is the establishment of trust: the expectation of a reliable outcome despite vulnerabilities and incomplete knowledge (Alpern, 1997), together with the belief that people have the ability, benevolence, and integrity to report good information. Trust begins by first being predisposed to trust: in this case, trusting that the local affected community has everyone’s best interest in mind when contributing information, and trusting the mechanism by which that information is delivered (Thomson et al., 2012; Valenzuela et al., 2009). Despite research showing the rise of the altruistic community in the local affected population during crisis events (Quarantelli, 1999), there is still an expectation that people and information should be vetted. This is not an unreasonable argument, as social media has shown a high propensity for rumor-mongering and proliferating false information (Mendoza et al., 2010a; Morris et al., 2012). Research also suggests that the anonymity of many who tweet (send out short messages via Twitter to a group of followers) is correlated with higher incidences of contributing information from less than credible sources (Thomson et al., 2012). At the same time, considering the altruistic community, crowdsourcing has been shown to have an organic and persistent self-correcting mechanism working to challenge false information (Mendoza et al., 2010b). Another way of looking at this problem of data quality is to view crowd filters, in the form of re-tweets, as an ad-hoc recommendation system for good information (Starbird and Muzny, 2012). However, reasonable trust in the good intentions and information provided by the local affected public does not translate into information presented in a usable form. Supposing that both crowdsourcing and the community contributing to it are deemed trustworthy, receiving that information in a usable form remains a challenge (Tapia et al., 2011).
Overcoming the barrier of processing information may be the biggest factor in establishing trust. Natural language processing and geo-location have made great leaps in extracting, processing and classifying crowdsourced feeds, but sufficient efficiency has not yet been achieved for real-time situations at a reasonable degree of accuracy. In the end, the pattern that emerges in the literature is that the traditional command structures of many relief and response organizations are already changing, albeit slowly, due to the technological advances of social media. Clearly, these organizations are not prepared for this information integration on either a technical or a cognitive level, which further results in these organizations feeling overwhelmed. These same organizations recognize the potential value of crowdsourced information, but greet it with ambivalence due to issues of trust, a bias likely attributable to classic technology adoption issues. Earlier research has shown that organizations already operate and make decisions on a “good enough” principle, so the questions arise: What constitutes trust? What constitutes data quality? How might that principle transfer to crowdsourced information on a technical and cognitive level? What other considerations need to be addressed to create a more comprehensive solution across the community?
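The filtering, sorting, and ranking challenges described above can be made concrete with a minimal sketch. This is not any responder’s actual pipeline: the topic keywords, message fields, and retweet threshold below are invented for illustration, and production systems would rely on trained natural language classifiers rather than keyword matching.

```python
# Minimal sketch: triage crowdsourced crisis messages by (1) keyword-based
# topic tagging and (2) retweet count as an ad-hoc credibility signal, in the
# spirit of treating re-tweets as a crowd recommendation filter.
# All keywords, field names, and thresholds are illustrative assumptions.

TOPICS = {
    "rescue": {"trapped", "help", "rescue", "injured"},
    "supplies": {"water", "food", "medicine", "shelter"},
}

def tag_topics(text):
    """Return the set of topic labels whose keywords appear in the text."""
    words = set(text.lower().split())
    return {topic for topic, keywords in TOPICS.items() if words & keywords}

def triage(messages, min_retweets=2):
    """Keep topically relevant messages, ranked by retweet count (descending)."""
    relevant = [
        (msg, tag_topics(msg["text"]))
        for msg in messages
        if tag_topics(msg["text"]) and msg.get("retweets", 0) >= min_retweets
    ]
    return sorted(relevant, key=lambda pair: pair[0]["retweets"], reverse=True)

messages = [
    {"text": "Family trapped under rubble, send help", "retweets": 14},
    {"text": "Thinking of everyone affected today", "retweets": 50},
    {"text": "Clinic is out of water and medicine", "retweets": 5},
]
for msg, topics in triage(messages):
    print(msg["retweets"], sorted(topics), msg["text"])
```

Note how the heavily re-tweeted but non-actionable message is dropped while the two operational reports are kept and ranked; in practice such heuristics would only pre-filter a stream for human verification, not replace it.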

2. Research design

In 2010, in response to the devastating earthquake in Haiti, a new collaborative research group was initiated at Penn State entitled EMERSE: Enhanced Messaging for the Emergency Response Sector. This research group drew in both computer scientists and social scientists seeking to use their research skills to help. The purpose of the research group was to find mechanisms to make citizen-originated data useful to emergency responders and humanitarian agencies. In 2011 (Tapia et al., 2011), data and findings were presented from the first round of qualitative interviews conducted with the members of the organization NetHope. NetHope is an information technology collaboration of 37 leading international nongovernmental organizations (NGOs) representing more than $30 billion (U.S.) of humanitarian development, emergency response, and conservation programs serving millions of beneficiaries in more than 180 countries. Through member collaboration and by facilitating public-private partnerships with major technology companies, foundations, and individuals, NetHope helps members use their technology investments to better serve people in the most remote areas of the world. The Board of Directors of NetHope facilitated the data-gathering phase for this research. We conducted 21 in-depth qualitative interviews with one or two members of each participating member organization listed below.

1. CARE
2. Catholic Relief Services
3. International Federation of Red Cross and Red Crescent Societies
4. Mercy Corps
5. Oxfam
6. Save the Children
7. World Vision
8. Heifer International
9. ActionAid
10. Ashoka
11. Winrock
12. International Rescue Committee

For each organization we interviewed two representatives (in most cases). We interviewed one person who could speak to the technological needs and assets of the organization, such as a highly ranked technologist, like a CIO. The second person we interviewed from each organization was actively involved in their organization’s Emergency Response Division. This person was often a manager located in the headquarters office (not in the field) who was responsible for receiving assessment data and making decisions regarding the organization’s
response. We learned from our first round of interviews that humanitarian response organizations were highly compartmentalized, or “siloed,” and that the information use of one division may not mirror that of another. Each research subject spoke for the organization as a whole, often with significant administrative, technological and operational experience. While we interviewed representatives from the same organizations as in the first round of data collection, we did not interview the same people in 80% of the cases. The interviewees changed for three reasons. First, the field of humanitarian response has a swiftly revolving door and, in a few cases, the person who filled the position had changed. Second, in our first study we were often directed to the public relations division to talk about social media use, rather than the Emergency Response Division. Third, within one year’s time, many organizations had created a new position or section devoted to social media for emergency response. Each interview lasted between 60 and 75 minutes and was audio-recorded and transcribed. Our primary methodology can be seen as ethnographic, with the use of qualitative data gathering and inductive analysis. Our goal was “to obtain an abstract analytical schema of a phenomenon that related to a particular situation” (Creswell, 1998). This work results in an exploratory and descriptive story using constructivist Grounded Theory analytics, via the work of Charmaz, to code and thematize the data (Charmaz and Mitchell, 2001; Charmaz, 2006). Our primary method of analysis is a continuous coding process. First, we developed a set of codes based on insights we had gained from the larger research, previous studies on social media and crowdsourcing use in humanitarian relief and emergency response, and the interview core questions. These codes were used deductively.
Second, we identified codes that emerged from the data—open coding leading to “refining and specifying any borrowed extant concepts” (Strauss and Corbin, 1990, 1997). Next, codes were grouped into categories and subcategories, and linked together in a form of axial and selective coding (Strauss and Corbin, 1990, 1997). To ensure a high level of credibility for our data and analysis, we employed several techniques (Gall et al., 1996). We used a strategy of long-term involvement, in that we spent four years collecting data from the same organizations, to correct for situation-specific influences. We also conducted a coding check, in which multiple researchers independently coded the data and checked for differences. We used the same interviewers and the same basic instrument over time. We conducted periodic member checks with our subjects, showing them both the raw data collected and the analyzed data to ensure faithful representation and the validity of our instruments. We believe that if the raw data and the codes were shared, the logical relationship between research questions, research procedures, raw data, and results should be such that a reasonably prudent person would arrive at the same, or similar, conclusions.
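The coding check described above, in which multiple researchers code the data independently and compare results, is commonly quantified with an inter-rater agreement statistic such as Cohen’s kappa. The paper does not report such a statistic, so the sketch below, using invented code labels rather than study data, is purely illustrative of how that comparison can be made.

```python
# Illustrative sketch: Cohen's kappa, a chance-corrected measure of agreement
# between two coders labeling the same excerpts. Labels are invented examples.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Return (observed - expected) agreement, scaled by (1 - expected)."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)
    return (observed - expected) / (1 - expected)

a = ["good_enough", "trust", "trust", "speed", "good_enough", "trust"]
b = ["good_enough", "trust", "speed", "speed", "good_enough", "trust"]
print(round(cohens_kappa(a, b), 2))  # prints 0.75
```

Values near 1 indicate agreement well beyond chance; in qualitative work such as this, disagreements flagged by the check are typically resolved through discussion before codes are grouped into categories.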

3. Findings

We present our data in two large categories, each comprising multiple codes. The first category we call “Current and Complex Information Gathering and Sharing.” In this category we present the coded data from our interviews that describe the practices used by relief organizations to gather and share information during times of crisis. These involve both traditional and new methods. The second large category is termed “Considering Future Use of New Information and Gathering Techniques.” In each section we aggregate and paraphrase statements made by the subjects. However, for each significant code we offer one illustrative quote to give a flavor of the code and the data therein. Lastly, in each section we present some of the technical challenges associated with the issues raised by the subjects.

3.1. Current and complex information gathering and sharing

3.1.1. Subcode: good enough data

The strongest category of codes to arise from these data was the concept of “good enough” data quality. We interpreted our subjects’ meaning to be that in traditional response, decisions were made based on data that were available, regardless of any previous information standards for accuracy. During the interviews we asked the subjects directly what made data “actionable” or not in a traditional sense (not via social media or crowdsourcing). The most prevalent code in this category was what we called the “impetus to act, fast.” All of the subjects stated that during an active response, especially in the first 48 h, they did not have a wealth of time to wait to make decisions. Typically, after the onset of a disaster, the Emergency Response Manager would need to decide whether to deploy, and what personnel and goods to send, within a matter of hours. Depending on the nature, severity and location of the disaster, the amount, accuracy and dimensions of the data fluctuate. For example, in the case of a hurricane or typhoon, forecast information is available for days prior to the event, allowing communities and responders to gather information and prepare. With a rapid onset disaster, such as an earthquake or mudslide, no prior data gathering is possible. In both cases, communication networks may be knocked out, so data gathering during the onset of the disaster may be limited. In addition, if the disaster should take place in an area of the developing world already under social, economic, political or environmental stress, the challenges of data gathering will be compounded. Despite this fluctuation, decisions are made.
Most of the respondents simultaneously expressed three things: regret that better data were not typically available, acceptance that this condition was part of the nature of their work, and understanding that despite the lack of complete data, hundreds of emergencies had been responded to successfully, millions of lives
had been saved and regions had been reconstructed. One subject said, “Seat of the pants. Yeah. That’s what it feels like sometimes. When there is nobody there. Nobody is answering the phone and you only have one report and it’s not that good. If it’s a big [disaster] then you just have to activate. Go with it.” The urgency of emergency response can make information collection secondary to actual response efforts. One subject commented, “We never know everything, ever…It doesn’t matter. We still have to do it.” What we learn from this is that responders are already predisposed to accept and act upon unvetted information. Additionally, while organizations may not know everything in a crisis, they do know what they need to know and what they do not have in the way of information. The technical challenge for computer-mediated decision support then lies in establishing a baseline for knowledge. Once the existence of a crisis is established, there should follow a natural progression of questions related directly to the crisis type. Once basic questions have been answered, this opens an opportunity for exploring social media to fill data gaps.

3.1.2. Subcode: making do

Our subjects suggested that they had to “make do” with the data that were available to them. In every situation, they knew they did not have a complete picture of events and circumstances from the field, but were forced to make operational decisions nonetheless. They cautioned us that they were always seeking the best information available through any means necessary, including improving their network of agents and informants, and through automated reporting. These efforts might improve the availability of information and the trusted nature of the information, but never approach complete knowledge. Several respondents discussed the challenges of getting information from local sources at the onset of a disaster, particularly when an organization did not have members already stationed at a location.
One subject said, “At the onset of an event sometimes we have no one on the ground…and even if we do, sometimes they are incapacitated or can’t get word to us. It can take several hours to get communications up and regular.” Since an organization may not have a trusted source already geographically situated in the place where a crisis occurs, it must cultivate outside, un-vetted sources. Many respondents said their organization uses whatever information is available at the time, until they can get a team on location to perform a traditional assessment. Even once an organization has representatives on location, some respondents said they still look to local or third party sources for additional information to establish a sense of context. As another subject explained, “We get reports from our own team on the ground, but we also get information from volunteers and from other teams. Sometimes there are just people in the region from a mission or program who can share information about the situation. We can build a pretty
good picture out of all that.” Accepting information from various outside sources enables organizations to establish a preliminary overview or an assessment of the emergency as it stands, even though that information has not been thoroughly vetted. This is not to suggest that responders have not previously gathered information from ad hoc sources during a crisis event; in fact, it happens as a matter of course during most emergencies. As one responder describes, there is a certain expectation of how and where the information may be coming from, and a system as to how the responder will seek information. In that familiarity of process, there is an immediate sense of trust instilled in the information received:

We are already getting stuff from everywhere. Yeah, we get official reports from our own reps on the ground, but we also get a lot more stuff that eventually makes it into the sit reps [situational reports for high level decision makers]. If we can, we call people we know who are there, whether they work for us or not. If we can’t call, we text. I’ll call people there who are in the government, working for other NGOs, in the military, ours or theirs, doesn’t matter. If they are there then we’ll try to get to them. …sometimes I don’t even know them, but somebody does.

This code explicates the reality that organizations still work in stove-piped data environments, although occasionally they reach laterally across other organizations and their networks. Personally making contact via more traditional means and ICTs takes precious time, an unaffordable luxury. Hence, the technical challenge that arises is establishing a shared online network of resources that can be readily accessed to establish basic trust.

3.1.3. Subcode: information that fits the decision

Overall, all participants expressed that their information needs changed as the disaster environment changed. At the onset of a disaster, they expressed that they needed to understand the context and the scope of an emergency, including the size and location of the affected population and the extent of the damage to basic support infrastructure. Later, they needed information about specific gaps in the availability of goods, services and other forms of aid. Then they needed information about operational coordination, i.e. who is responding with what and where. Lastly, they expressed that they needed regular updates on the security situation, the impact of the intervention, the status of the affected population, and constant inter-organizational coordination of information: “Look, there is no one-size-fits-all approach here. What I need at 8:00 from the initial flash report is not what I need at 16:00… Sure we have standard assessment forms and flash reports, but they are never everything we need. We
always ask for more from the people who are there.” The information needs of responders do not change only as a disaster progresses, but also from disaster to disaster. In the words of a respondent, “Every disaster is different. That’s why this job is so hard. What we did in Haiti is completely different than what was needed in Aceh and in Turkey. Each one is a new game.” Subjects also discussed a shift from a high quantity of easy decisions early on to fewer, but more difficult, questions as they shifted from emergency response to disaster recovery. As an interviewee discussed, “When it starts I have to make a hundred decisions in one day. Should I activate? How bad is it? Whether to deploy? Who to send? What to send?… I have to make these decisions fast…” This sentiment was echoed in another subject’s comment, “Sometimes I have to make easy decisions like whether to get the ball rolling—are we going to respond. Once we are going then I have to make harder decisions about what is really needed and who, what and where I should send our people and goods. Logistics questions are tougher.”
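The phased, shifting information needs the subjects describe could be represented in a decision support system as a simple mapping from response phase to prioritized open questions. This sketch is a hypothetical illustration; the phase names and need descriptions are paraphrased from the interview findings, not taken from any deployed system.

```python
# Hypothetical mapping of response phase to the information needs the
# subjects described; a decision support system could use it to decide
# which queries to run against incoming reports first.
PHASE_NEEDS = {
    "onset": [
        "size and location of affected population",
        "damage to basic support infrastructure",
    ],
    "response": [
        "gaps in goods, services and other aid",
        "who is responding with what, and where",
    ],
    "recovery": [
        "security situation updates",
        "impact of the intervention",
        "status of the affected population",
    ],
}

def open_questions(phase, answered):
    """Return still-unanswered needs for the current phase, in priority order."""
    return [q for q in PHASE_NEEDS[phase] if q not in answered]

print(open_questions("onset", {"damage to basic support infrastructure"}))
```

As the findings note, such a map would differ from disaster to disaster, so in practice the entries would be templates to be instantiated per event rather than a fixed list.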

The allocation and distribution of resources require prioritization to those populations with the most need (de la Torre et al. 2012). Social media data, again, can help fill gaps as decision support needs evolve, so it is necessary that any decision support system utilizing social media can temporally order relevant information in real time. However, social media data’s value as an information source is not constant, but varies as a disaster response develops. For example, during Hurricane Sandy in 2012, the Federal Communications Commission reported that 25% of cell towers were rendered inoperable due to damage from the hurricane’s subsequent floods and fires (see the report Hurricane Sandy Rebuilding, 2013). Thus, an additional technical challenge lies in a more resilient infrastructure that allows for a continual flow of social media data.

3.1.4. Subcode: trust in people, not in data

In this code, our subjects stated that data were trusted because the person or organizations that they came from were trusted. These data had multiple and varied sources. Trusted sources were developed over time through a network of field officers, local informants and government officials who were tapped for information at the time of need. In addition, representatives of different responding organizations often supplied data across organizational boundaries. Our subjects said that after working in relief for many years, one changes jobs and organizations often and develops relationships with many people. This network of relationships led to a facilitated exchange of data when required. The existing relationship with the person or organization allowed for a simplified trust
negotiation between our subjects (at headquarters) and the local agent providing the information.

3.2. Considering future use of new information and gathering techniques

3.2.1. Subcode: always on: continuous monitoring of humanitarian and response networks

A powerful code to emerge from our data is the power of social networks to shape access to, and the quality of, information. Our subjects stated that social networks predate social media networks. While they recognized that their personal and organizational networks did not cover every corner of the earth, and that inputs from affected populations were still the gold standard, many had come to rely on a widely dispersed network for operational inputs. All of our subjects stated that they followed members of the humanitarian community via social media, including media crowdsourcing and Facebook. The subjects explained that they each had a patchwork of different sources of social media data, including official accounts managed by public relations and communications divisions of response organizations, informal vocal blogs of employees of these organizations, employees of various organizations in different missions across the globe, informal vocal blogs of humanitarian-focused individuals, and family and friends. A subject stated, “I follow a lot of people. Some are just old friends, but a lot are people I know from work. When something is going on, they know.” Subjects said they were willing to trust social media data if the source was a member of another humanitarian response network. As one subject said, “Sometimes I just feel like those people know more than us, faster. It would be stupid not to listen.” All of our subjects stated that they believed that much of their personal social network was already producing actionable information.
They stated that while their network could not provide all data necessary regarding a disaster, it served as a powerful informal source of information about the response and the conditions during a disaster. One subject said, “During the Japan quake I found out how things were from a couple of people I know in Japan… I learned who was being sent.” Another subject expanded on how social media data helped them understand the scope of response in an area, “Media crowdsourcing helps a lot. I know where people are, who is deploying…I can get a handle of who is where. That is really good.” Because of the high turnover rate for employees in the humanitarian field, our subjects had changed jobs and moved often, and so had friends they followed from many organizations all over the world. Another subject said, “Yeah, I work for [large NGO] but I used to work for the UN and I know all those guys there. When they started tweeting about the flooding I knew we were going, too.” Another subject commented that “Half the people I follow I used to work with. I used to work in the [country] mission and I
know everybody who still works there. We still have some projects together. I know what is going on back there.” This frequent movement of employees helps create inter-organizational links through social networks. Having this information about other organizations is valuable both in efficiency of response and identifying gaps in response efforts. A subject commented:

…so I know that [another NGO] is already there and [a friend who works for this other NGO] is saying how things are. Now I know more about what we’re getting into. I heard about the airport being a mess first from [a friend who works for this other NGO]. I don’t think we would have gotten on those boats if we didn’t know about the airport first. It saved us tons of time and headaches.

As social networks have become more and more ubiquitous, it is understandable that the members of emergency response organizations have adopted them for personal use. Networks such as media crowdsourcing and Facebook have allowed the connections between these members to persist even after they are no longer working for the same organization. While these connections could be leveraged through more traditional means, the broadcast nature of social media data greatly reduces the effort required to gather information in this manner. The technical challenge here is two-fold: first, organizations must recognize the value of, and need for, using social media; and second, the organizations participating in social media must be made aware of each other’s presence, and be properly networked and communicating in that space.

3.2.2. Subcode: reliance on volunteer and technical communities

About a third of our respondents mentioned that they had already used or were planning to use secondary social media data; in other words, media crowdsourcing data that had been collected on a large scale and processed by outside groups other than their own member organization.
In this case they expressed more trust in the volunteer and technical communities than in the original data. Communities that were mentioned were Ushahidi, Crisis Mappers, the Standby Task Force and the Digital Humanitarian Network. Ushahidi is an open-source, interactive platform for information collection, visualization, and mapping. Crisis Mappers is an online volunteer task force that meets ad hoc mapping needs around the globe during a crisis event; the Standby Task Force is similar, but additionally provides analysis of geotagged information. Lastly, the Digital Humanitarian Network seeks to integrate both virtual and on-the-ground volunteers. These entities are early pioneers, and the most established of the many crowdsourced communities. They have developed certain levels of trust over time. Each of these communities used a combination of crowdsourcing and computational techniques to collect relevant social media data, process and categorize the data, and plot the data on a map for the
responding organization. One subject put the willingness in terms of convenience, saying, “We are a small shop compared to some of the other members; we are never going to have the resources to learn how to process Twitter and texts. But, if Ushahidi makes maps of what’s going on in the affected region, we are going to use them. They are better than anything we can make or get.” Subjects also discussed how valuable sharing processed data could be. As one subject stated, “When we were in Libya, the Standby Task Force was activated and got us unbelievable maps. We had no UN office in Libya and needed info fast. The Task Force got over 200 people to process media crowdsourcing and Flickr sites to plot what was going on. We got useable maps in just a few days.” While not every subject mentioned trusting a third party to process social media data to serve as input into their decision-making process, many did. They extended trust to expertise outside their organization, trusting the volunteers processing the data rather than the data itself. The technical challenge here is the creation of a decision support system that can integrate third-party platforms. However, if a third party is able to provide comprehensive service, the value of social media may still depend on numerous organizations utilizing the same service and maintaining access to their networks. A 2012 technology survey of emergency relief organizations and NGOs conducted by NetHope found that organizations regularly utilized highly proprietary systems to support day-to-day field operations, thus increasing the difficulty of implementing a third party solution. Breaking this cultural trend among organizations is necessary to facilitate any third party system implemented in the field.

3.2.3. Subcode: a response early warning system

Two-thirds of our subjects stated that in the case of initial awareness of a disaster, they would accept a low or unknown threshold for data quality in exchange for real-time knowledge. One subject stated, “I often look to social media to know whether I should activate and deploy. If I see a bunch of people tweeting about an earthquake in Japan, you bet I am going to get out of bed. If I see enough people [tweeting] then it gets me up and I make some calls.” Others emphasized the speed at which social media data is created and propagated, claiming that media crowdsourcing was “an excellent early warning system” that allowed them to “know immediately if something is going on.” Often, the subjects stated they first heard about a disaster as it was happening via social media, personally or via a colleague who was scanning their own feeds. This prompted the subject to seek additional information and to make a decision to deploy or not. This shows that it is not enough to simply know of various social media platforms and how they work. The technical challenge is that organizations
must also be regular participants and consumers in this space in order to stay abreast of the latest trending topics.

3.2.4. Subcode: deeper contextual information

More than half of the subjects stated that they looked to social media data during the first few days after a disaster for contextual data. Contextual data did not have the same level of data quality needs as other forms of data. The subjects said that often the data they received from official emergency assessments and reports left them with more questions than answers. They expressed that they often combined multiple sources of information to gain a more complete picture of a growing emergency situation. The subjects used data from social media sources to more fully flesh out descriptions of the emergency context to guide their response. Because the social media data was often an accompaniment to more formal sources, the standards for the trustworthiness of the data were also low. As a respondent recalled, “We had to go in and set up the communications and we needed to know what kind of power was still operational. You know cell towers, too, we heard about how bad it was from a text.” The additional context afforded by social media data also allowed early responders to enter a disaster area with an idea of the situation on the ground. “Everything is unknown. Up in the air. We used to have to send in guys half blind…We will have one guy doing the first assessment and calling it in, but it’s never enough. You never really know until you get there. If you can get some more from some other staff, you know, informally, then it can help.” At times, even major issues can be discovered from social media data sources. One subject stated, “When we went to Indonesia last year there was a big problem with the water.
We found out about it first via media crowdsourcing before [NGO leaders] told us.” The subjects felt that they received official assessments from standard procedures, but that social media data gave them “a feel for what was really going on down there.” The more critical aspect of this issue is defining what constitutes “context.” A recent report by Gralla et al. (2013) has meticulously mapped out the decision-making process in the field, so the technical challenge is now operationalizing information needs through natural language processing that scrapes this data from social media.

3.2.5. Subcode: when good enough isn’t enough

We asked the subjects directly about using data either obtained directly from a crowd via social media or data scraped from bystanders talking about a disaster. The code that arose most clearly in this category was the concept of trustworthiness. Of all the possible issues with data quality that might have arisen, such as completeness, validity, consistency, timeliness and accuracy, the most salient was trustworthiness. However, this again was not the trustworthiness of the content of the message (not accuracy). It appears that data is trusted because
of the trust in the person or organization that offers the data, not the trustworthiness of the originator. It is clear from this code that trust in the person is more important than trust in the content of the message. All subjects believed that the data produced through wider crowdsourcing was untrustworthy. One subject stated, “No. No way. The chain from the field has to be unbroken. We have to know who is sending us the information and how it gets here. There can be no doubt, no question.” Another participant said, “We only use data that we can confirm…So we’re only going to share information from others if we have had phone calls or face-to-face conversations or relationships with them well in advance. We would not just simply retweet something that’s out there.” One subject said, “But sure, I think as you say getting information in a real-time way, but again it depends on the quality of the information and who it comes from. But if it comes from a trusted source and you’re able to get something on a more immediate basis, then it would absolutely make a difference.” Another subject said, “It might be hard to trust the data. I mean, I don’t think you can make major decisions based on a couple of tweets.” Despite making decisions based on varied data, the standard for trustworthiness is much higher for information originating with crowds of unknown participants. The distance between the originator and the receiver appears to be too great. Research by Morris et al. (2012) supports evidence of this type of trust bias: respondents reported trusting re-tweets from a trusted source, a verified expert, or someone they followed on Twitter. However, despite not knowing the source of information, many organizations do extend trust through the volume of a trending topic in social media. Thousands of tweets reporting the same information have served as a “proxy” in lieu of a direct trusted source (Sakaki et al. 2010).
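The volume-as-proxy heuristic can be sketched as a corroboration count: a claim from unknown accounts is provisionally accepted only once enough distinct sources report it, while a claim from a trusted source is accepted directly. The threshold, usernames, and claims below are hypothetical illustrations, not empirically derived values.

```python
from collections import defaultdict

CORROBORATION_THRESHOLD = 50  # illustrative "tipping point", not empirical

def accept_claims(reports, trusted_sources, threshold=CORROBORATION_THRESHOLD):
    """Accept a claim if it comes from a trusted source, or once enough
    distinct unknown accounts independently report it."""
    counts = defaultdict(set)  # claim -> set of distinct reporting users
    accepted = set()
    for user, claim in reports:
        if user in trusted_sources:
            accepted.add(claim)
        else:
            counts[claim].add(user)
            if len(counts[claim]) >= threshold:
                accepted.add(claim)
    return accepted

# 60 unknown accounts corroborate one claim; one trusted source offers another.
reports = [(f"user{i}", "bridge down on route 2") for i in range(60)]
reports.append(("field_officer_7", "airport closed"))
print(accept_claims(reports, trusted_sources={"field_officer_7"}))
```

Counting distinct users rather than raw message volume offers mild protection against a single account repeating a claim, though not against coordinated accounts.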
The technical challenge in this case is then determining the “tipping point” between the type of information and the level of trust required to accept that information.

3.2.6. Subcode: security flag

Around half of the subjects stated that they would listen to social media data from any source, regardless of trustworthiness, if it spoke of a security threat to NGO field workers, supplies or camps. In addition, if these data were corroborated by tweets sharing similar information on the threat, the humanitarian responders would act to seek out additional information and take additional precautions. A respondent gave an example of this kind of security threat, “If a bunch of people are saying that there are guys with guns on the Rue de whatever, then we are gonna check it out.” The respondent later added, “Sometimes it’s like a prank call or something. Just some kids
messing with us. But do you really want to take that chance? At least we get some warning, if it is a real thing.” While Mendoza et al. (2010b) showed that social media is an excellent source for rumor control, there remains a need to create a hierarchy of decision making, with the levels of trust necessary to assess and accept social media inputs. Lacking this level of technical sophistication, a system or network with a high level of organizational participation at least allows multiple organizations to be aware of a potential threat.
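The lower trust bar for security-related messages suggests a simple flagging rule: match threat vocabulary, then escalate once several independent accounts mention a threat, regardless of source trust. The keyword list, message fields, and threshold below are hypothetical; a real system would need multilingual, context-aware classification rather than simple matching.

```python
import re

# Hypothetical threat vocabulary; illustrative only.
SECURITY_TERMS = re.compile(r"\b(gun|guns|armed|shots|looting|attack)\b", re.I)

def security_flags(messages, min_reports=2):
    """Flag messages mentioning a threat; escalate once several
    distinct accounts mention one, regardless of source trust."""
    flagged = [m for m in messages if SECURITY_TERMS.search(m["text"])]
    escalate = len({m["user"] for m in flagged}) >= min_reports
    return flagged, escalate

msgs = [
    {"user": "a", "text": "guys with guns on the Rue de la Paix"},
    {"user": "b", "text": "water point open again"},
    {"user": "c", "text": "armed group near the camp entrance"},
]
flagged, escalate = security_flags(msgs)
print(len(flagged), escalate)
```

The escalation step mirrors the subjects' behavior: a single report prompts a check, while multiple independent reports prompt precautions, accepting occasional "prank call" false positives as the cost of warning.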

3.2.7. Subcode: too high a risk

Several subjects stated that there were some types of questions, requiring a very high level of confidence, for which social media data could not be used as a key input. The subjects stated that during the search and rescue phase, social media data sources had the potential to both save and lose lives. One subject told the story, “What would happen if, after a disaster, people seeking for loved ones in piles of rubble heard that when someone sent a text or tweet about hearing life signs from a loved one, the search and rescue team came running. Every single one of them would then send a text or tweet in hope that someone would come running, in hope, even if there were no signs of life…The entire process for finding living people in the rubble would be uprooted, and those that might have been saved using the old search methods may now be left to die. On the other hand, texts have helped find people in Haiti. There just has to be a way of vetting these. Lives are on the line.” According to subjects, resources for search and rescue operations are limited during disaster relief scenarios, and current methods are already efficient at finding and rescuing victims. As the subject above said, many reports of trapped victims are not based on concrete information, but are a last attempt for someone to save a loved one. This does not mean, however, that people cannot be found and saved based on social media data. Rather, these reports should have the lowest priority in response, and only be considered after traditional methods have been exhausted. Other subjects said that many of the operational decisions about how many people to send, what goods to send and the best routes to use were decisions that required a very high level of accuracy that social media data was not able to provide.
Subjects found that in most cases, decisions had to be made about sending items that would save lives, like water, medicine and food, and mistakes made by faulty inputs could not be tolerated. This code presents the same issues as establishing context, where defining context will establish the information need; here, as de la Torre et al. (2012) suggest, further research and modeling of relief routing practices and other
activities in the disaster community would elucidate information needs, whereby social media may then be used to fill potential information gaps.
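The subjects' rule, that social-media-derived rescue reports be considered only after traditional leads are exhausted, amounts to a priority ordering over report sources. A minimal sketch, with hypothetical source labels and report texts:

```python
import heapq

# Lower number = higher priority; per the subjects, social-media-derived
# rescue reports rank below any traditionally sourced lead.
SOURCE_PRIORITY = {"field_assessment": 0, "partner_ngo": 1, "social_media": 2}

def queue_reports(reports):
    """Order search and rescue leads so traditional sources are
    exhausted before social media reports are considered."""
    heap = [(SOURCE_PRIORITY[src], i, loc) for i, (src, loc) in enumerate(reports)]
    heapq.heapify(heap)
    return [loc for _, _, loc in (heapq.heappop(heap) for _ in range(len(heap)))]

reports = [
    ("social_media", "text: voices under rubble, Rue X"),
    ("field_assessment", "collapsed school, Sector 4"),
    ("partner_ngo", "trapped family reported, Sector 2"),
]
print(queue_reports(reports))
```

The insertion index in each heap tuple keeps ordering stable within a source class, so two leads from the same kind of source are handled first-come, first-served.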

4. Discussion: a changing landscape

The implications from this research fall into both the sociological and technical realms, and are three-fold. First, we illuminate the age-old need for organizations not just to collaborate better offline, but to begin collaborating more fully online. Second, we now have a basic understanding of the needs that any decision support system must address in order to incorporate social media information. Third, we expand the conversation on the nature of trust and the next steps for future research of the concept in social media environments. We assumed that organizations would gather social media data arising from geographically significant areas, apply analytical techniques that automate classification, and use the classified information to enhance the data gathered in more traditional ways. We now see that the creation of these systems is far more difficult than first imagined. The task has been taken up not by the responding organizations, but by volunteer and technical communities that have stepped up since the Haiti response to fill this computational need. Certain levels of cooperation among organizations have always existed, in which many look to each other to fill information gaps or to use as proxies for trust of unknown persons. However, this research shows that organizations are currently using the environment of social media in much the same way as they would use traditional communication means. Until organizations interact with each other more fully in this environment by sharing assets, information, and resources, social media will remain nothing more than an advanced telephone. The informational needs of humanitarian organizations responding to a crisis vary with the response phase and with the types of decisions that need to be made. As such, the standard for the quality of that data varies accordingly.
Social media data is useful to responders in situations where information is limited, such as at the beginning of an emergency response effort, and when the risks of ignoring an accurate report outweigh the risks of acting on an incorrect one. If a decision support system were to exist that incorporated this new data source, what would it look like, and how should the technical community approach its design? First, we must have a sociological understanding, and then be able to map the decision-making process. Recent work by UN-OCHA (Milner and Verity, 2013) and ACAPS (Gralla et al., 2013) has initiated this effort in its broad form, but meticulous mapping of this process will allow automated operationalization. Next, from the technical approach, a system is designed that has the ability to collate and process social media in real time, followed by sorting and categorizing data either as contextual or as potentially action-worthy. Stepping back into the
sociological realm, determining from organizations what constitutes actionable intelligence and how it is identified linguistically forms a basis for translating data from social media into an understandable and useable form. Further exploration into the nature of online trust is required before a technical solution may be achieved. Lastly, a system should possess the flexibility to incorporate both traditionally obtained data and data derived from third party systems. The concept of trust has come out of the data as one of the most salient codes, though not as a synonym for accuracy. The concept of trust here is trust in a person, trust in an organization and trust in a network, all of which produce data that can be seen as more accurate because of the human agents involved. Trust in the network of emergency responders becomes an advance filtering system, culling and categorizing social media data. Trusting data derived from social media has also turned out to be more complex than initially thought. Organizations first needed to adopt the technology of social media; however, familiarity with its existence and basic usage is not sufficient, and effective use requires more in-depth knowledge of the language, culture, and presentation of information in social media. Organizations will extend trust to other like-organizations and third party vendors, but have yet to make the move to utilize social media as more than context. The interviews show that the field of humanitarian and emergency response already makes decisions based on incomplete data, often from second-hand sources. The inherently complex and uncertain nature of any disaster limits responders’ ability to both gather and assess the quality of information from traditional sources. An example of good enough data in a traditional sense might be a phone call from a humanitarian aid worker in Haiti a few hours after the 2010 earthquake.
The aid worker is not a disaster responder, nor does she work for the NGO that she is calling. She has placed this call because she believes that the NGO will respond to the earthquake and that she may have crucial information that could help the response. She reports that the airfield at the main Port-au-Prince airport has been damaged and that relief planes may not be able to land. The staff at the NGO headquarters do not know the aid worker and have no reason to trust this single source of data. The NGO also has no staff in country to verify the report. They trust and act on this information because, if it is true, the consequences of not believing and acting on it are dire. They find an alternate airfield and load extra fuel just in case.

Social media data can serve as an additional source of information, but are still presented in a format unfamiliar to emergency responders, which makes the information within a message suspect. In a social media version of the same example, a third-party organization would gather social media data in real time that references Haiti, the earthquake, and the airfield. It would aggregate, process, and deliver text and images to the NGO headquarters that described or showed the damage to the airfield, all contributed by bystanders in Haiti. The NGO
headquarters staff would then have additional evidence that the airfield was damaged. The data, while neither authenticated nor complete, would give enough information to complete a task or make a decision.

We propose that over time, with increasing immersion into this environment, emergency responders may continue to reevaluate their perceptions of trust and adopt social media data into decision-making. However, we acknowledge that in some situations, such as search and rescue operations, social media data may never meet the standards of quality required. In others, such as resource and supply management, crowdsourced social media data could be useful as long as they are appropriately verified and classified. Organizations should formally identify these pairings of situation and required data quality, sharing the standards with volunteer and technical communities so that those communities can contribute the appropriate data for the appropriate decision.

There have been numerous attempts by the computational community to automate the assessment of trust from a wide array of approaches. Early research suggested ranking trustworthiness through reputation based on the frequency and quality of past posting activity (Adler and De Alfaro 2006); a person's affiliation in a larger social network may also indicate trustworthiness (Palen et al. 2009; Mendoza et al. 2010). In addition, the analysis of sentiment, or the implied emotional state of a microblogger, has proven useful in political debate analysis, earthquakes, and national security incidents (Diakopoulos and Shamma 2010; Cheong and Lee 2010; Qu et al. 2011). From the information side, information in a post may be considered trustworthy when linked to a credible source (Starbird et al. 2010), or when it is corroborated through multiple sources (Giacobe et al. 2010).
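As an illustration, the reputation, network-affiliation, credible-source, and corroboration signals surveyed above could be combined into a naive credibility score. The sketch below is purely illustrative: the `Post` fields, the equal weights, and the keyword-overlap heuristic for corroboration are our own assumptions, not a method proposed by any of the cited works.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    author_post_count: int       # past posting activity (reputation proxy)
    in_known_network: bool       # author affiliated with a trusted network
    cites_credible_source: bool  # post links to a credible source

def corroboration(post, other_posts, overlap=3):
    """Count posts by other authors that share enough keywords."""
    words = set(post.text.lower().split())
    return sum(
        1 for p in other_posts
        if p.author != post.author
        and len(words & set(p.text.lower().split())) >= overlap
    )

def trust_score(post, other_posts):
    """Equal-weight combination of the four credibility signals."""
    score = 0.0
    score += min(post.author_post_count / 100.0, 1.0) * 0.25   # reputation
    score += 0.25 if post.in_known_network else 0.0            # network
    score += 0.25 if post.cites_credible_source else 0.0       # source
    score += min(corroboration(post, other_posts) / 3.0, 1.0) * 0.25
    return score  # 0.0 (no signals) .. 1.0 (all signals present)
```

A real system would replace the keyword overlap with the language-processing and network features discussed below; the point here is only that each signal contributes independently to a single ranking.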
Related to reputation, a microblogger who self-corrects information, or responds to criticism of information, may also be deemed credible and reliable (Shklovski et al. 2008). Reputation-based approaches combined with language processing are not the only approaches to solving this problem; others incorporate geolocation and time data, which increase a tweet's usability. However, our study suggests that trust lies not within the data quality itself, but in the social network of humans producing and using the data. Our subjects trust people in their network, which allows them to trust the information those people produce. Increasing the ability of any one responder to quickly trust more human participants seems to be the best solution arising from our study. In a technical sense, a multi-phase process in which the person submitting crowdsourced information is assessed before the data they submit is ever considered is already happening. In this light, a promising approach is to combine language processing with additional contextual information, such as the identity of an individual microblogger, the network in which she or he operates, and the content of the messages in relation to others in the network (Castillo et al. 2011). What this research has failed to understand is that trust as a psycho-social concept is not well understood in a social media environment, and that understanding how the fluid dynamics of trust as an offline concept manifest in this specific digital
environment is paramount. More research is required to investigate what information about a person is immediately available to organizations, what information is important to them in establishing trust, and what technical resources they are willing to accept to attain that information. Morris et al. (2012) have done the most comprehensive research to date by breaking out the attributes of microblogs, having respondents rank and weigh factors for trustworthiness, and then testing for credibility impact. However, ranking and weighing trust factors is not enough for automated assessment, and that research was not performed on a sample of professional analysts or emergency responders. Using participants from the crisis response community might reinforce the current results, or possibly produce new ones: keeping in mind conditions of safety and accountability for life and death may lead such a sample to assess the attributes of microblogs very differently. Additionally, more in-depth cluster and factor analysis of the data may provide more nuanced results on the interplay of those attributes.

Overall, these computational solutions may be the most powerful, game-changing applications in the data analytics arena, categorizing and verifying social media data so that it can be served to organizations in a form that is actionable to responders. Despite this "killer app" status, the findings presented above show it would be a mistake to consider social media data simply unused or unusable by response organizations. In 2011, Tapia mistakenly stated that "Twitter was a food that response organizations could not eat." This implied both that high levels of data quality could not be reached via computational methods and that response organizations could not alter their standards and practices to consume these data. It implied that Twitter and other microblog environments were best left for citizen-to-citizen communication.
It is now better understood to say, "Twitter is a food that has always been consumed by the response community, but in varied forms and times, which may or may not be official or formal channels." Employees and volunteers already working in the relief sector have become active social media users, playing a role in overcoming the technology adoption problem. They friend and follow colleagues and coworkers all over the world, forming a semi-bounded environment through which data about disasters and their responses flow. The participants in these networks are friends and friends of friends who are largely trusted, sharing the same cultural understandings of humanitarian response and practice. This network is largely informal and organic, and crosses organizational borders and hierarchies. This means that trust is not merely a singular interaction between two people, two groups, or even a decision maker and another group, but a chain. In this chain, trust is extended, transitioned, and extended again, with the result that anyone contributing information to the group was first admitted on the basis of being capable of providing trustworthy information, even if they do not have existing relationships with all
members of the group. The data produced in these groups serve as supplemental input to decisions made by organizational responders. We find that the more formal the information gathering and transmission process, the greater the loss of contextual information, and the greater the value of a side stream of contextual data. A decision maker may not necessarily act upon information gleaned from social media, but that is not to say that they do not view the information as useful in some way. We have seen that this form of ambient awareness system is already operational and growing in conjunction with existing decision trees. As humanitarian responders have more fully engaged with social media, they have created their own forms of broad, asynchronous, and always-on communication systems around the issues and regions that interest them.

5. Conclusion

In this paper we make five contributions. We extend scientific knowledge through empirical data on both the practice of "good enough" information gathering and the current use of social media by response organizations. We identify the organic network of responders already using social media and the importance of this network. We demonstrate that trust within this network is based on trust in people, not trust in data, and that future design should reflect this. Lastly, we show that third-party designers, developers, and communities of practice are strongly needed by response organizations if the burden of finding useful data is to be surmounted.

First, we have documented the actual current practice of "good enough" information for emergency response. While this concept had been noted before by others, it had not been supported with empirical data, nor had it included elements of social media. This is a contribution to science because this research qualitatively deconstructs theory and practice, exposing the information gaps for future research.
This is a contribution to practice because, by qualifying offline practices already in place, it provides a baseline by which standards may be established for developers to design appropriate mechanisms that reflect how emergency response already operates.

Second, we have more fully documented the varied landscape of use of social media data in crisis response, with pockets of adoption among organizations. We have demonstrated areas where social media is already strongly in use (e.g., initial deployment) and where it is not (e.g., logistics and security). Developers of technologies for humanitarian use should focus on the areas in which there is already significant use and acceptance by the community, ensuring a good organizational fit. While this addresses only one small piece of the overall problem, a good and robust system in one area likely sets the framework for future technologies addressing other areas. In looking at applying technological solutions to the overall issue, this research has
shown that information needs and consequential decision-making change throughout a crisis event. The needs surrounding initial deployment are straightforward, while needs regarding logistics and security become considerably more complicated, requiring greater levels of trust. This opens up new areas of research on trust at a temporal level throughout crisis events. It also assists practitioners in slowly adopting and integrating social media into their activities over time, without overwhelming them. Lastly, developers who focus on small and easily manageable solutions, rather than large complicated systems, can design in tandem as better understandings of this environment evolve.

Third, our research points us to the already existing network of responders using social media as part of a response. This organic network of humanitarian social media users could serve as a middle ground between traditional data sources and unfiltered crowdsourced data. The non-competitive nature of the goals of humanitarian response organizations is ideal for fostering an environment of inter-organizational information sharing. We encourage these organizations' employees to friend and follow each other and become regular users of social media. Having an informal, everyday knowledge of what other humanitarian workers are doing could lead to better organizational efficiency and coordination of response and recovery efforts. While not a scientific contribution, this is an internal solution that helps practitioners ease technology adoption issues while the research and technical communities develop new solutions.

Fourth, through our subjects, we show that in the sphere of emergency response, trust in people trumps trust in information. This leads us to support methods that improve a responder's ability to quickly trust people within a network, leading to the most useful social media data.
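One way to operationalize this people-first trust is to filter incoming posts by an author's distance in the responder's follow network, so that trust extends along the chain of friends and friends of friends described earlier. The sketch below is a simplified illustration under our own assumptions (a plain adjacency-list graph and a two-hop trust horizon); it is not a system elicited from the interviews.

```python
from collections import deque

def trusted_set(graph, responder, max_hops=2):
    """Accounts reachable within max_hops follow links inherit trust
    transitively: first friends, then friends of friends."""
    seen = {responder}
    frontier = deque([(responder, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue  # do not extend trust past the horizon
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, hops + 1))
    seen.discard(responder)
    return seen

def filter_posts(posts, graph, responder):
    """Keep only posts whose author falls inside the extended network."""
    network = trusted_set(graph, responder)
    return [post for post in posts if post["author"] in network]
```

Posts from accounts outside the extended network are not discarded in such a design; they would simply be routed to a slower verification path rather than treated as immediately trustworthy.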
This is a contribution to science because this understanding had not previously been tested offline and outside of an experimental environment. This is a contribution to practice because it gives future research a better starting point for understanding trust in this community. This is a contribution to developers because understanding that trust in social media is currently extended only to known and tertiary networks, and to other reputable or trusted persons known to those networks, allows the development of systems that clearly surface networks and affiliations to responders scanning and employing social media in their practice. While it is hoped that practitioners do not stovepipe information, but rather share it laterally, a system such as this may help reduce redundancies and overlapping aid through better communication.

Fifth, through our interviews we have found that most responding organizations do not have the staff, technologies, skills, or organizational will to create the computational solutions necessary to address the expanded use of social media. They recognize its value and importance, but cannot build the necessary tools themselves. It is common practice in the business world for
organizations to rely on third parties for information verification and analysis, reducing their financial burden. Similarly, humanitarian organizations can shift some of the burden of establishing social media data's trustworthiness and usefulness to outside volunteer organizations. Closer relationships between the two should be encouraged, so that organizations specializing in processing reliable social media data, such as Ushahidi, can encourage humanitarian response organizations to use these sources. This is a contribution to science because it places the communities of study in their proper spheres: largely unknown content providers, not consumers, drive third-party solutions. Understanding the motivations of these providers may help develop markers of trust on behalf of information consumers, which, in turn, will help developers bridge the middle ground technically, connecting the two communities.

We found some practitioners' demands concerning data quality to be "unreasonable." While this term may be harsh, we intend it as a call for clarity in this space. We use the term to mean "not guided by reason or sound judgment and not in accordance with practical realities." Essential in this definition is the idea of practical guidance. We find that emergency response organizations lack practical guidance on how to judge social media, evaluate it, categorize it, and make it useful. Because of this lack of understanding and experience, response organizations offer blanket rejections of social media. One goal of this paper is to move the discussion from a blanket rejection based on data quality issues to one that is more tractable. We intend to open up channels of communication between response organizations, software developers, and communities of practice where new techniques and devices are being tried to solve some of these issues. Most importantly, we put out a call for further development in this area.
We call for a full mapping of the key decisions made during a disaster response, and of the information needs, types, forms, and flows at those decision points, with data quality and verification standards assessed for each. We also learned that data, no matter how high the quality, will not be used by organizations if they are not provided at the time, in the form, and at the confidence level that the organization requires for each decision type. Organizational fit matters almost as much as data quality. In this paper we have just scratched that surface, with the intention of demonstrating that data quality needs are varied. Further research must address this issue of data-organizational fit by considering the level of confidence considered acceptable and actionable by the organization, and the actual physical and time constraints within which an actionable model must work to be useful. Systems to promote social media data adoption by disaster response organizations must be socio-technical in nature. The responding organizations will need to accept new data inputs into their decision-making processes and recognize their own sliding scales for accuracy. They must develop trust relationships with organizations and systems that will aggregate, process, and deliver data to them in real time.
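To make the idea of data-organizational fit concrete, the mapping we call for could take a form like the sketch below. The decision types, thresholds, and field names are hypothetical placeholders of our own, not values elicited from the interviews; a real mapping would be populated from the meticulous decision-point analysis described above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionProfile:
    """Requirements one decision type places on incoming data."""
    min_confidence: float  # confidence level required before acting
    max_age_minutes: int   # how fresh the data must be
    required_form: str     # form the organization can consume

# Hypothetical pairings of decision points with required data quality.
PROFILES = {
    "initial_deployment":  DecisionProfile(0.5, 120, "aggregated summary"),
    "resource_management": DecisionProfile(0.7, 60, "verified, classified reports"),
    "search_and_rescue":   DecisionProfile(0.95, 15, "authenticated reports"),
}

def actionable(decision_type, confidence, age_minutes):
    """Does a piece of data meet the bar for this decision type?"""
    profile = PROFILES[decision_type]
    return (confidence >= profile.min_confidence
            and age_minutes <= profile.max_age_minutes)
```

Under these placeholder thresholds, a moderately confident, half-hour-old report would be actionable for initial deployment but not for search and rescue, reflecting the sliding scale of accuracy discussed above.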
References

Adler, B., and De Alfaro, L. (2006). A Content-Driven Reputation System for the Wikipedia. Proceedings of the 16th International Conference on World Wide Web (WWW '07), pp. 261–270.

Alpern, K. (1997). What Do We Want Trust to Be? Some Distinctions on Trust. Business and Professional Ethics, vol. 16, no. 1–3, pp. 29–45.

Castillo, C., Mendoza, M., and Poblete, B. (2011). Information Credibility on Twitter. Proceedings of the 20th International Conference on World Wide Web (WWW '11), pp. 675–684.

Charmaz, K. (2006). Constructing Grounded Theory. London: Sage.

Charmaz, K., and Mitchell, R. G. (2001). Grounded theory in ethnography. In P. Atkinson, A. Coffey, S. Delamont, J. Lofland, and L. Lofland (Eds.), Handbook of Ethnography. London: Sage, pp. 160–174.

Cheong, M., and Lee, V. (2010). A microblogging-based approach to terrorism informatics: Exploration and chronicling civilian sentiment and response to terrorism events via Twitter. Information Systems Frontiers, vol. 13, no. 1, pp. 45–59.

Coyle, D., and Meier, P. (2009). New Technologies in Emergencies and Conflicts: The Role of Information and Social Networks. Washington, DC and London: UN Foundation-Vodafone Foundation Partnership.

Creswell, J. W. (1998). Qualitative inquiry and research design: Choosing among five designs. Thousand Oaks, CA: Sage.

de la Torre, L., Dolinskaya, I., and Smilowitz, K. (2012). Disaster relief routing: Integrating research and practice. Socio-Economic Planning Sciences, vol. 46, pp. 88–97.

Diakopoulos, N., and Shamma, D. (2010). Characterizing debate performance via aggregated Twitter sentiment. Proceedings of the 28th International Conference on Human Factors in Computing Systems (CHI '10). New York: ACM Press, pp. 1195–1198.

Emergency Social Data Summit (2010). The Case for Integrating Crisis Response with Social Media, August 12, 2010. http://emergencysocialdata.posterous.com/the-case-for-integrating-crisis-response-with

Gall, M. D., Borg, W. R., and Gall, J. P. (1996). Educational research: An introduction (6th ed.). White Plains, NY: Longman.

Giacobe, N., Kim, H., and Faraz, A. (2010). Mining social media in extreme events: Lessons learned from the DARPA network challenge. IEEE International Conference on Technologies for Homeland Security (HST), pp. 165–171.

Gralla, E., Goentzel, J., and Van de Walle, B. (2013). Report from the Workshop on Field Based Decision Makers' Information Needs in Sudden Onset Disasters. Digital Humanitarian Network. https://app.box.com/s/kneqlcpq99xlkh08w0d6

Granger-Happ, E. (2008). The Good Enough Principle: What we can learn about technology from the pragmatic solutions of nonprofits. Save the Children, pp. 1–25.

Hughes, A. L., and Palen, L. (2009). Twitter Adoption and Use in Mass Convergence and Emergency Events. International Journal of Emergency Management, vol. 6, no. 3/4, pp. 248–260.

Hui, C., Tyshchuk, Y., Wallace, W., Goldberg, M., and Magdon-Ismail, M. (2012). Information cascades in social media in response to a crisis: a preliminary model and a case study. Proceedings of the 21st International Conference Companion on World Wide Web (WWW '12). New York, NY, pp. 653–656.

Hurricane Sandy Rebuilding Task Force (2013). Hurricane Sandy Rebuilding Strategy. https://media.wnyc.org/media/resources/2013/Aug/19/Hurricane_Sandy_Rebuilding_Strategy.pdf

Java, A., Song, X., Finin, T., and Tseng, B. (2007). Why we twitter: understanding microblogging usage and communities. Proceedings of the 9th WebKDD and 1st SNA-KDD 2007 Workshop on Web Mining and Social Network Analysis, pp. 56–65.

Juran, J. M. (1988). Juran on Planning for Quality. New York: The Free Press.

Knuth, R. (1999). Sovereignty, globalism, and information flow in complex emergencies. The Information Society, vol. 15, pp. 11–19.
Kwak, H., Lee, C., Park, H., and Moon, S. (2010). What is Twitter, a social network or a news media? Proceedings of the 19th International Conference on World Wide Web (WWW '10). New York: ACM Press, pp. 591–600.

Lerman, K., and Ghosh, R. (2010). Information Contagion: An Empirical Study of the Spread of News on Digg and Twitter Social Networks. Fourth International AAAI Conference on Weblogs and Social Media, pp. 90–97. http://arxiv.org/abs/1003.2664

Mazzarella, J. (2009). Twitter for Public Safety & Emergency Management, April 28, 2009. http://preparednesstoday.blogspot.com/2009/04/twitter-for-public-safety-emergency.html

Mendoza, M., Poblete, B., and Castillo, C. (2010). Twitter Under Crisis: Can We Trust What We RT? Proceedings of the First Workshop on Social Media Analytics (SOMA '10). New York: ACM Press, pp. 71–79.

Milner, M. E., and Verity, A. (2013). Collaborative Innovation in Humanitarian Affairs: Organization and Governance in the Era of Digital Humanitarianism. https://app.box.com/s/oq2gdcy466j6bpdvzyxt

Morris, M. R., Counts, S., Roseway, A., Hoff, A., and Schwarz, J. (2012). Tweeting is Believing? Understanding Microblog Credibility Perceptions. Proceedings of CSCW 2012. New York: ACM Press.

Muhren, W. J., Durbic, D., and Van de Walle, B. (2010). Exploring Decision-Relevant Information Pooling by Humanitarian Disaster Response Teams. Journal of Financial Stability, vol. 1, pp. 34–47.

Munro, R. (2011). Subword and spatiotemporal models for identifying actionable information in Haitian Kreyol. Proceedings of the Fifteenth Conference on Computational Natural Language Learning (CoNLL '11), pp. 68–77.

Oricchio, R. (2010). Is Twitter a Social Network? Inc. Magazine. http://www.inc.com/tech-blog/is-twitter-a-social-network.html

Portsea, L. J. (2011). Disaster Relief 2.0: The Future of Information Sharing in Humanitarian Emergencies. UN Foundation & Vodafone Foundation Technology Partnership.

Palen, L., and Vieweg, S. (2008). The emergence of online widescale interaction in unexpected events: assistance, alliance and retreat. Proceedings of Computer Supported Cooperative Work (CSCW) 2008.

Palen, L., Vieweg, S., and Anderson, K. M. (2010). Supporting "Everyday Analysts" in Safety- and Time-Critical Situations. The Information Society, vol. 27, no. 1, pp. 52–62.

Palen, L., Vieweg, S., Liu, S. B., and Hughes, A. L. (2009). Crisis in a Networked World: Features of Computer-Mediated Communication in the April 16, 2007, Virginia Tech Event. Social Science Computer Review, vol. 27, no. 4, p. 467.

Qu, Y., Huang, C., Zhang, P., and Zhang, J. (2011). Microblogging after a major disaster in China: A case study of the 2010 Yushu earthquake. Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work (CSCW '11). New York: ACM Press, pp. 25–34.

Quarantelli, E. L. (1999). Disaster Related Social Behavior: Summary of 50 Years of Research Findings. 8th International Symposium on Natural and Technological Hazards: Hazards 2000, pp. 1–13.

Sakaki, T., Okazaki, M., and Matsuo, Y. (2010). Earthquake Shakes Twitter Users: Real-time Event Detection by Social Sensors. Proceedings of the Conference on the World Wide Web (WWW 2010), April 26–30, pp. 851–860.

Shklovski, I., Palen, L., and Sutton, J. (2008). Finding community through information and communication technology in disaster response. Proceedings of the ACM 2008 Conference on Computer Supported Cooperative Work (CSCW '08), pp. 127–136.

Starbird, K., and Muzny, G. (2012). Learning from the Crowd: Collaborative Filtering Techniques for Identifying On-the-Ground Twitterers during Mass Disruptions. In L. Rothkrantz, J. Ristvej, and Z. Franco (Eds.), Proceedings of the 9th International Conference on Information Systems for Crisis Response and Management (ISCRAM). Vancouver, Canada, April 2012.
Starbird, K., Palen, L., Hughes, A., and Vieweg, S. (2010). Chatter on the Red: What Hazards Threat Reveals About the Social Life of Microblogged Information. Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work (CSCW '10). New York: ACM Press, pp. 241–250.

Strauss, A. L., and Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage.

Strauss, A. L., and Corbin, J. (Eds.) (1997). Grounded theory in practice. Thousand Oaks, CA: Sage.

Sutton, J., Palen, L., and Shklovski, I. (2008). Backchannels on the Front Lines: Emergent Uses of Social Media in the 2007 Southern California Wildfires. In F. Fiedrich and B. Van de Walle (Eds.), Proceedings of the 5th International Conference on Information Systems for Crisis Response and Management (ISCRAM). Washington, DC, USA, May 2008.

Tapia, A. H., Bajpai, K., Jansen, B. J., and Yen, J. (2011). Seeking the Trustworthy Tweet: Can Social Media Data Fit the Information Needs of Disaster Response and Humanitarian Relief Organizations? Proceedings of the 8th International Conference on Information Systems for Crisis Response and Management (ISCRAM). Lisbon, Portugal, May 2011.

Terpstra, T. (2012). Towards a realtime Twitter analysis during crises for operational crisis management. In L. Rothkrantz, J. Ristvej, and Z. Franco (Eds.), Proceedings of the 9th International Conference on Information Systems for Crisis Response and Management (ISCRAM). Vancouver, Canada, April 2012.

Thomson, R., Ito, N., Suda, H., Lin, F., Liu, Y., Hayasaka, R., Isochi, R., and Wang, Z. (2012). Trusting Tweets: The Fukushima Disaster and Information Source Credibility on Twitter. In L. Rothkrantz, J. Ristvej, and Z. Franco (Eds.), Proceedings of the 9th International Conference on Information Systems for Crisis Response and Management (ISCRAM). Vancouver, Canada, April 2012.

Valenzuela, S., Park, N., and Kee, K. F. (2009). Is There Social Capital in a Social Network Site?: Facebook Use and College Students' Life Satisfaction, Trust, and Participation. Journal of Computer-Mediated Communication, vol. 14, no. 4, pp. 875–901.

Vieweg, S. (2010). Social Media Contributions to the Emergency Arena: Discovery, Interpretation and Implications. Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work (CSCW '10), February 6–10. New York: ACM Press, pp. 515–516.

Vieweg, S., Palen, L., Liu, S. B., Hughes, A. L., and Sutton, J. (2008). Collective intelligence in disaster: Examination of the phenomenon in the aftermath of the 2007 Virginia Tech shooting. Proceedings of the 5th International Conference on Information Systems for Crisis Response and Management (ISCRAM). Washington, DC, USA, May 2008, pp. 44–54.

Walton, R., Mays, R., and Haselkorn, M. (2011). Defining Fast: Factors Affecting the Experience of Speed in Humanitarian Logistics. Proceedings of the 8th International Conference on Information Systems for Crisis Response and Management (ISCRAM). Lisbon, Portugal, May 2011, pp. 1–10.

Wang, R., Kon, H., and Madnick, S. (1993). Data Quality Requirements Analysis and Modeling. Ninth International Conference on Data Engineering (ICDE '93), Vienna, Austria, 19–23 April 1993. IEEE.

Wild, D. (2010). Allhazards blog. School of Informatics at Indiana University, September 15, 2010. http://allhazards.blogspot.com/