
IEEE TRANSACTIONS ON ENGINEERING MANAGEMENT, VOL. 46, NO. 2, MAY 1999

Adopting a Disaster-Management-Based Contingency Model to the Problem of Ad Hoc Forecasting: Toward Information Technology-Based Strategies

InduShobha Chengalur-Smith, Salvatore Belardo, and Harold Pazer

Abstract—Ad hoc forecasts are generally unstructured forecasts that are performed infrequently. Unfortunately, there are no widely accepted formulas for performing such tasks. In this paper we draw parallels between the process of ad hoc forecasting and disaster management, using characteristics of disasters to develop analogous dimensions for ad hoc forecasts. Although a given unit within an organization may not have the opportunity to repeat a particular ad hoc forecast, there are certain similarities among ad hoc forecasts performed by different units within an organization. Mapping ad hoc forecasts along the disaster characteristics brings out these similarities and allows us to identify technology-based strategies for improving ad hoc forecasts. Just as disaster planning draws together people from different units of the organization and creates a common knowledge base, we propose the creation of a common repository of ad hoc forecasts that member organizations can draw upon. This allows organizations that find themselves in need of a particular ad hoc forecast to learn from both the disaster-management literature and their own experiences. The Internet and the World Wide Web provide the infrastructure for creating such an inter-organizational information system.

Index Terms—Ad hoc forecast, contingency model, disaster characteristics, disaster planning, technology-based strategies.

I. INTRODUCTION

FORECASTS are used extensively by both the public and private sectors to predict future events and behavior, such as the state of the economy, the potential sales for a product or service, or even the probability of a bank becoming insolvent [8]. Business forecasts are an integral part of any organization's planning and control activities and are typically based on pretested and widely accepted methodologies. Of the various features that distinguish forecasts, the one we focus on in this paper is the degree to which the forecast is planned. We focus on this feature because of the paucity of current research on unplanned forecasts. Some forecasts, such as those performed to predict sales, are planned in advance and are part of ongoing business activities. Other forecasts are ad hoc, conducted in response to some unplanned, unique event such as corporate embezzlement or industrial espionage. Fortunately, events requiring ad hoc forecasts are rare. However, because they are rare, neither the forecasting models nor the data required by the models to predict important decision variables are readily available. Ad hoc forecasts have characteristics that make estimating the future problematic: they occur infrequently, at unpredictable times, often with little forewarning, and their impact on the organization can be devastating. There is little research concerning this special category of forecasts and therefore few prescriptions concerning ways to improve the estimates required for effective decision making.

Oftentimes, when there is not sufficient experience in a given domain or with a particular decision, we search for analogous situations in other domains or disciplines. Benchmarking is an example of such an approach: a demonstrated standard of performance that represents the very best performance of a process or activity is identified, and the benchmark company need not be in the same industry [12]. Since ad hoc forecasts have many characteristics in common with disasters (e.g., speed of onset, length of forewarning, predictability), it stands to reason that the rich disaster-management literature and research base can be of value in addressing ad hoc forecasting problems. In this paper we borrow from the disaster-management literature to develop a framework with which to study the ad hoc forecasting process. The framework allows us to map the ad hoc forecasting process onto disaster profiles, and these linkages suggest response strategies. Once an organization has generated a suite of best practices in ad hoc forecasting, it can supplement or replace the disaster framework for future ad hoc forecasts.

Manuscript received August 18, 1997; revised August 31, 1998. Review of this manuscript was arranged by Editor-in-Chief D. F. Kocaoglu. The authors are with the Management Science and Information Systems Program, School of Business, State University of New York, Albany, NY 12222 USA. Publisher Item Identifier S 0018-9391(99)03066-4.
In Section II, we highlight similarities between disasters and ad hoc forecasts, and we draw upon these similarities in Section III to propose a framework for analyzing ad hoc forecasts. Sections IV and V describe response strategies for disasters and suggest corresponding strategies for ad hoc forecasts. Section VI presents case studies of ad hoc forecasts in state agencies. We analyze these cases in Section VII using the framework developed in Section III. We conclude in Section VIII with recommendations for the use of information technology to support the appropriate response strategies.

0018–9391/99$10.00  1999 IEEE


II. AD HOC FORECASTS AND DISASTER MANAGEMENT: THE SIMILARITIES

Disasters can be broadly defined as low-frequency, high-consequence events. They include not only natural disasters (e.g., earthquakes) but also man-made disasters such as oil spills. Disasters vary on several dimensions, such as frequency and predictability. In disasters such as earthquakes and oil spills, information and experience obtained through research, simulation, or observation has helped disaster planners design systems to address these crises. We recognize that the term disaster has negative connotations, although crises, particularly business crises, may have positive or negative outcomes. Similarly, ad hoc forecasts are inherently neither positive nor negative, but like business crises and natural or man-made disasters, their outcome is unknown and they are low-frequency, high-consequence events. Precisely because the outcome of an ad hoc forecast is unknown, we seek parallels in the disaster-management literature. Drawing on lessons learned from disaster management can help with the process of coping with uncertainty during ad hoc forecasting.

Just as some apparently unique disasters have common characteristics that allow them to be considered part of a general class, ad hoc forecasts can also be grouped together according to certain characteristics. Ad hoc forecasts are unique, infrequent events that demand innovative solutions. Because ad hoc forecasts are of low frequency, there is usually limited historical practice that can be drawn upon to aid decision making. Take, for example, a decision concerning an immediate change in government legislation that could affect a budget, and hence jobs in, and services provided by, an organization. In order to manage such an unplanned event, decision makers have to put together an ad hoc forecast of the potential effects (positive and/or negative) of the event.
This requires that data be pulled together from a number of sources in the organization. Typically these data are unaudited, because they are not used as part of a regularly planned forecasting effort, or they are based on estimates or imprecise measurements. The forecast can affect the entire organization, and the repercussions of an erroneous forecast can be enormous, yet the models and data stores necessary to properly analyze such events are not readily available. It is important to note that the parallel being drawn is between disasters and the ad hoc forecasting process, not the event being forecasted. While the event may have either a positive or a negative effect upon the organization, the unexpected impact on the resources of the organization due to the ad hoc forecasting process is what we are attempting to mitigate. Ad hoc forecasts are generally nonroutine and ill-structured tasks. No monitoring systems exist for such tasks since they do not appear to threaten the organization directly. Since the length of forewarning is generally short, a considerable number of ad hoc forecasts are reactive in nature. Consequently, crisis-contingency planning approaches cannot be used, and as is the case in many first-time events, the line of responsibility is not clear and the stakeholders are complex. Due to the range and unpredictability of the impact of the forecasting process, it is important that this class of forecasts be taken seriously. Although each organization may perform a particular kind of ad hoc forecast very infrequently, there are lessons to be learned for all organizations by generalizing across these forecasts.

III. A FRAMEWORK FOR A CONTINGENCY MODEL

Disasters can be described in terms of eight characteristics: exposure; destructive potential; scope of impact; duration (of the disaster); controllability; predictability (in terms of time and/or location); speed of onset; and length of forewarning [9]. The levels of these characteristics are often a factor in devising strategies for managing the disaster. What is needed, then, is a model that enables us to view ad hoc forecasting from a disaster-management perspective in order to assess the sensitivity of the organization to the forecast as well as the appropriateness of candidate strategies. To this end, we developed a model based upon disaster characteristics and disaster management. We used the same eight characteristics, with slight modifications, to describe ad hoc forecasts. For instance, the frequency of a disaster is replaced by the exposure of an organization to an ad hoc forecast. Similarly, the characteristic analogous to the destructive potential of a disaster is the impact of deficiencies in the forecasting process.

We chose a subset of disasters that corresponds to a major portion of the research concerning disaster management. The five disasters are: an earthquake; a hurricane; a flood; a nuclear accident; and an oil spill. The geographical frame of reference we are using is a state. The disasters are assumed to be typical in the sense that the nuclear accident does not refer to a Chernobyl-type accident but one more similar to the Three Mile Island accident. Similarly, floods do not include flash floods, and earthquakes do not include tremors.
We believe that these varied disasters provide robustness to generalizations. We studied ad hoc forecasts in three state government agencies: the effect of a court ruling on a financial agency; a computer migration strategy in a health service agency; and an effort to change legislation in a regulatory agency. The three cases are used to illustrate the use of the contingency model to recommend appropriate response strategies. Once an organization has built up a bank of successful ad hoc forecasts, it can set aside the external disaster framework and substitute its own database of best practices as the framework for mapping any future ad hoc forecast. Thus the system enables the organization to evolve and learn.

A. Model Development

We developed three-point scales (high, medium, and low) for each of the eight disaster characteristics. These scales are not definitive, but they aid us in explaining our methodology and simplifying our results. While for forecasting the basic unit of analysis will be the organization, for disasters the basic unit will be the state, since the state is usually the primary locale for disaster planning and response. On the briefest reflection, it is obvious that exposure to each major disaster type varies dramatically between states. This variation is greatest for earthquakes (e.g., California versus Ohio) and hurricanes (Florida versus Minnesota), but exposure also varies substantially for floods, depending on the number of rivers and flood-plain configurations; for nuclear accidents, depending on the number and type of reactors; and for oil spills, based upon the existence of coastlines and the amount of petroleum transport within each state.

Destructive potential is highest for earthquakes and hurricanes, medium for floods and nuclear accidents (we are excluding Chernobyl-type occurrences), and lowest for oil spills (the Exxon Valdez being a rare exception). Scope of impact would be highest for earthquakes and hurricanes, which could affect broad areas of a state. The scope for floods would be medium (limited to low-lying areas), as would that for nuclear accidents (limited to areas surrounding and/or downwind from nuclear facilities). The scope of an oil spill would be the lowest, generally limited to a section of a river or harbor. Duration relates to the time over which the destruction continues. It is clearly shortest for earthquakes, where it is generally measured in seconds or minutes, medium for hurricanes and nuclear accidents, and longest for floods and oil spills, where the damage may continue to occur for days or even weeks. Predictability is highest for floods, with factors such as the amount of snowpack and spring temperatures being important predictors for some states, while upstream rainfall and tributary levels may be important predictors for others. Improvements in weather forecasting and hurricane tracking now allow the construction of probability-of-impact statistics for hurricanes, which we classify as a medium level of predictability. The ability to predict the specific occurrence of an earthquake, nuclear accident, or oil spill is clearly much lower.
Controllability relates to the ability of decision makers to intervene during the occurrence of the disaster in order to mitigate its effect. Controllability is highest for a nuclear accident, since there are systems in place designed for this purpose. At the other extreme, earthquakes and hurricanes clearly defy human control. Floods and oil spills fall between these extremes, with some moderate degree of control generally available to decision makers. Since we explicitly ruled out flash floods from our analysis, the speed of onset is slowest for floods, with water levels often increasing by 1 ft/h or less. The speed of onset is fastest for earthquakes, where the major damage generally occurs during the first minute. Hurricanes, nuclear accidents, and oil spills have intermediate levels for this characteristic. Earthquakes are not only unpredictable (there is no earthquake season) and quick to reach peak severity (speed of onset), but they also strike without warning; clearly the length of forewarning is shortest for earthquakes. Due to vastly improved weather forecasting during the last half of the twentieth century, the length of forewarning for hurricanes and floods is long compared to the other disaster types. Monitoring systems in nuclear facilities and radio communications from ships and tugboats may give a medium level of forewarning for nuclear accidents and oil spills.

The first two characteristics, exposure and destructive potential, taken together measure the expected cost of the disaster. The next two characteristics, scope and duration, taken together reflect the impact of the disaster. Similarly, the characteristics controllability and predictability are paired to give an indicator of the number of options available to mitigate the disaster. The last two factors, speed of onset and length of forewarning, can be combined to measure the response horizon. The response horizon is a measure of when a decision must be made relative to when quality information is available [4].

IV. DOMINANT RESPONSE STRATEGIES FOR FIVE DISASTER TYPES

A. Earthquakes

Because earthquakes are of low predictability, low controllability, short forewarning, and fast onset, a major emphasis has been placed on mitigating their impact through strategies that emphasize a hardening of the environment. For example, in areas of high or medium exposure to earthquakes, building codes have been employed to increase the resistance of structures to damage. After each occurrence of an earthquake, information is available to help in the evaluation of these building codes (e.g., perhaps office buildings and bridges survived with little damage while many freeway overpasses collapsed). Based on this information, codes can be revised and/or structures retrofitted to further harden the environment. In such situations, information technology can help. For instance, [14] describes a comprehensive system based upon a digital map database, designed to help locate critical resources and identify potential problems. Here, geographic information systems (GIS) are employed to extract information such as the location of underground pipes, the construction materials used, and the heights of buildings so as to help identify environmental hardening priorities. In addition, an expert system is employed to estimate losses in the functionality of facilities and productivity losses. These estimates provide a means of gauging the economic and industrial impacts for the various regions.
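The three-point ratings developed in Section III, and the four paired indices derived from them, can be captured in a short sketch. The ratings below are our reading of the assessments in Section III-A (exposure is omitted from the table because it varies by state), and the 1–3 numeric mapping and the use of simple sums are our own illustrative assumptions, not a formula given in the paper:

```python
# Three-point ratings for the five reference disaster types, per our
# reading of Section III-A.  Exposure is omitted: it is state-dependent.
DISASTER_PROFILES = {
    "earthquake":       {"destructive_potential": "high",   "scope": "high",
                         "duration": "low",    "predictability": "low",
                         "controllability": "low",    "speed_of_onset": "high",
                         "forewarning": "low"},
    "hurricane":        {"destructive_potential": "high",   "scope": "high",
                         "duration": "medium", "predictability": "medium",
                         "controllability": "low",    "speed_of_onset": "medium",
                         "forewarning": "high"},
    "flood":            {"destructive_potential": "medium", "scope": "medium",
                         "duration": "high",   "predictability": "high",
                         "controllability": "medium", "speed_of_onset": "low",
                         "forewarning": "high"},
    "nuclear accident": {"destructive_potential": "medium", "scope": "medium",
                         "duration": "medium", "predictability": "low",
                         "controllability": "high",   "speed_of_onset": "medium",
                         "forewarning": "medium"},
    "oil spill":        {"destructive_potential": "low",    "scope": "low",
                         "duration": "high",   "predictability": "low",
                         "controllability": "medium", "speed_of_onset": "medium",
                         "forewarning": "medium"},
}

SCALE = {"low": 1, "medium": 2, "high": 3}

def paired_indices(profile):
    """Combine the eight ratings into the four paired indices of Section III.

    Summing the 1-3 scale values is an illustrative choice; the paper only
    says the characteristics are "taken together."  Note that a high
    response-horizon score here means fast onset plus long forewarning, so
    each index's sign convention should be interpreted with care.
    """
    return {
        "expected_cost":      SCALE[profile["exposure"]]
                              + SCALE[profile["destructive_potential"]],
        "impact":             SCALE[profile["scope"]] + SCALE[profile["duration"]],
        "mitigation_options": SCALE[profile["controllability"]]
                              + SCALE[profile["predictability"]],
        "response_horizon":   SCALE[profile["speed_of_onset"]]
                              + SCALE[profile["forewarning"]],
    }

# A California-like state facing an earthquake ("high" exposure is assumed):
quake = dict(DISASTER_PROFILES["earthquake"], exposure="high")
print(paired_indices(quake))
```

For the assumed high-exposure earthquake, the sketch yields a high expected cost (6) but few mitigation options (2), mirroring the hardening-oriented strategy discussed above.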
B. Floods

Relative to the other disaster types, floods have high predictability, medium controllability, slow speed of onset, and long forewarning. The major response strategy has been the development of extensive flood-control containment systems, designed to restrict the impact of a flood to predefined areas. While a substantial portion of such systems is permanent (e.g., dams and dikes), they may be supplemented by sandbags as required. After each flood, feedback is available on the success or failure of various portions of the containment strategy, and appropriate modifications can be made to the system. Software systems have also been employed to simulate traffic flow and estimate evacuation times for urban areas threatened by flooding. Belardo and Duchessi [2] describe the use of an object-oriented learning system that can be easily employed during a disaster. The system contains objects that represent the expert knowledge of experienced disaster managers concerning ways to respond to floods. Associated with each object are attributes and values that help decision makers better understand plans and the best way to employ resources. The system provides a frame of reference against which unfolding events can be compared. This tool is used both for planning and for training.

C. Hurricanes

Hurricanes can be characterized as being of medium predictability, low controllability, long forewarning, and medium speed of onset. The dominant pre-event response strategy can be described as monitor and evacuate. Much effort is expended on early identification and tracking of hurricanes [13]. As predictions of landfall are made, evacuation plans are activated. After each occurrence of a hurricane, feedback information is available to compare actual and projected landfall time and location, as well as actual and projected evacuation times. Information on unnecessary evacuations may also be available. This feedback loop allows for further improvements in predictions of landfall and evacuation times. Griffith [10] describes a sophisticated numerical storm-surge model called SLOSH (sea, lake, and overland surges from hurricanes), designed to help emergency managers perform vulnerability analyses and to support them in their evacuation decisions. Other technologies include decision support systems (DSS) that help develop detailed evacuation plans based on the underlying transport network and available shelters. Simulation modeling can be combined with GIS to enable emergency managers to dynamically track the distribution of population and resources.

D. Nuclear Accidents

Nuclear accidents are characterized by low predictability, high controllability, medium speed of onset, and medium forewarning. The major response has been the preconstruction of elaborate and multilayered feedback and control systems. These permit early and incremental intervention, which may include preparing evacuation plans. At each stage during the disaster, feedback may be available on the impact of this incremental intervention strategy.
Training simulations are frequently used to prepare managers to employ these multilayered feedback and control systems effectively. Belardo et al. [5] describe how an incremental intervention strategy is employed as a nuclear accident progresses from an unusual event to an alert or to the more serious general emergency. The key to determining the appropriate intervention lies in the feedback obtained through various monitoring methods. Simulation techniques can also be used to model credible interactive scenarios for emergency training exercises. The Illinois Department of Nuclear Safety has a computer-based remote system for monitoring nuclear power reactor safety parameters [11]. The system provides early warning of abnormal events, risk analysis of existing conditions, and a quantification of radiological releases.

E. Oil Spills

Oil spills are differentiated from the other disaster types by a lower expected cost, due both to a lower destructive potential and to a smaller scope of impact. As a result, the economics do not justify the development and preconstruction of extensive containment systems, as was the case for flood-control projects. The major strategy is the regional predeployment of portable containment systems and dispersal agents in support of rapid-response teams. Feedback after the occurrence of an oil spill is useful in evaluating the predeployment pattern and the speed and effectiveness of the response. Computer technology enables cross-referencing of information and enhances communication. Beroggi and Wallace [6] suggest that advanced technology, including video, robots, and satellite imagery, can provide the response team with real-time data. Belardo et al. [3] describe the use of a computer-based model to support the decisions of an emergency response planner who is faced with the task of attaining the best overall protection with existing resources, while minimizing the risk of being unprepared for politically and environmentally sensitive events.

V. PARALLEL RESPONSE STRATEGIES FOR AD HOC FORECASTING

A prerequisite to the establishment of an organizational strategy for ad hoc forecasting is the determination of the organization's exposure to the various categories of ad hoc forecasting needs. This could be accomplished by analyzing ad hoc forecasting experience, as well as unmet needs, over a period such as the preceding five years. Based on the framework developed above and the dimensions of ad hoc forecasts, we propose several alternative procedures for dealing with classes of ad hoc forecasts.

A. The Earthquake Parallel

In most organizations there will be a subset of potential applications for which the development of ad hoc forecasting response systems will not be cost effective. These will tend to be characterized by at least some of the following dimensions: low predictability; low controllability; short forewarning; and/or fast onset.
In such cases, the preferred strategy may be to harden the environment. In disasters such as earthquakes, a strategy that has been employed successfully is to increase the design margins of infrastructure in sensitive areas to more than meet government regulations. In business as well as government, there are a number of situations that can benefit from similar strategies. Take, for example, the automotive industry, which has instituted outsourcing and just-in-time manufacturing to improve efficiency and minimize costs. Such arrangements tend to create tightly coupled systems that magnify the effects of perturbations in the various value-adding efforts. For instance, a sudden cut in supply lines, because a supplier goes out of business or because of war in a developing nation that supplies raw materials, exposes the problems of processes that are too tightly coupled. One solution would be for the organization to relax its commitment to a just-in-time philosophy in situations where the production schedule is highly vulnerable to failure on the part of one or more vendors. In such situations, buffer inventories can protect the production environment against late deliveries by less reliable vendors.

B. The Flood Parallel

Ad hoc forecasting situations that share a number of characteristics with floods, such as high predictability, medium controllability, slow speed of onset, and long forewarning, would be characterized by organizational “flood plains,” which are periodically impacted by ad hoc forecasting requests. An example would be a marketing department, which may rather frequently need forecasts concerning the impact of various changes in the product mix. A reasonable strategy in such situations is to establish a “permanent” ad hoc forecasting group within marketing to build a “dike of data” that insulates the rest of the organization from constant requests to support these ad hoc forecasts. Because of the limited organizational scope of such a group, a substantial focus can be placed upon the refinement of both models and supporting information systems.

C. The Hurricane Parallel

For forecasting situations that share a number of characteristics with hurricanes, the organizational equivalent of “monitor and evacuate” would be to monitor the environment and selectively activate localized contingency plans. Such plans could be “localized” by department, product line, and/or geographical region. This strategy calls for a system that is linked to these localized ad hoc forecasting plans and that specifies access paths to data, available quantitative models (e.g., regression, simulation, game theory), available qualitative models (e.g., historical analogy, Delphi), as well as pre-identified personnel with key knowledge.

D. The Nuclear Accident Parallel

The high controllability in the nuclear scenario, coupled with medium speed of onset and medium forewarning, led to the development of an incremental intervention strategy supported by feedback/feedforward control systems.
Parallels in the organizational environment are ad hoc forecasts made in support of “PERT-type” project management approaches. Here, ad hoc forecasting can be improved by a support system created as an overlay to the project monitoring system. Such a system requires a model that correlates project activities after defining each activity in terms of basic elements. When substantial variation is observed in activities early in the network, adjustments can be made to the forecasts of correlated activities scheduled later. For example, if the element “get data from accounting” has been a bottleneck for early activities, appropriate modifications can be made to the time/cost estimates of later activities that include this element.

E. The Oil Spill Parallel

While intermediate in most other measures, oil spills were differentiated by low predictability, low destructive potential, and small scope of impact. Parallel ad hoc forecasting requirements in the organization would preclude the construction of extensive databases in anticipation of forecasting needs. Instead, a reasonable approach is the creation of a data repository through the identification of predeployed gatekeepers in each sector of the organization. These individuals would provide information sources and identify individuals with key knowledge in support of an in-house consulting group, which could be deployed in response to a forecasting need. This consulting group would have general-purpose forecasting DSS and expertise in the use of Delphi-like procedures. As an organization gains experience with ad hoc forecasts, the successful response strategies can be added to the list above. The use of information technology enables the sharing of information and facilitates learning across functional disciplines. Groupware like Lotus Notes could be used so that all employees can draw from the suite of best practices for ad hoc forecasts.

VI. CASES

We studied ad hoc forecasts at three state agencies. These cases were selected to illustrate and clarify the application of our model in a variety of contexts. The three agencies are representative of operational control (the financial agency), managerial control (the health agency), and strategic planning (the regulatory agency). Thus, not only did the agencies differ in their primary functions, but, as the following case descriptions will demonstrate, the requirements of the ad hoc forecasts also differed substantially.

A. Background

1) Case 1—Court Ruling at a Financial Agency: A state financial agency involved in assuring tax compliance had a lawsuit brought against it and lost the case. The ruling had a substantial impact on future collections and also required certain refunds. The agency needed to estimate the impact on revenue and workload. Information on the number of cases affected by the ruling, the average liability (in terms of revenue), and the collectibility of these revenues had to be gathered. The revenue implications were the primary concern.
The relevant data for this case were stored in an obsolete computer system, which did not have the data retrieval and analysis capabilities required to make such estimates. The agency had to rely on expert judgment to obtain these estimates. Naturally, the estimates varied with managers' perspectives and hence required reconciliation. The division was also undergoing reorganization at this point in time, and the new management had implemented a new computer system as part of a move toward the effective use of automation. Employees were being trained in record keeping under the new system, but even with the new system, retrieving such revenue information would not be straightforward.

2) Case 2—Computer Migration at a Health Agency: A health service agency decided to migrate from two computer platforms to a single one. Although the scale and scope of the migration had not been clearly defined, a forecast had to be made of the time required for the migration. The project involved extensive testing of migrated applications as well as training former Honeywell workers in IBM JCL. The budget for the following fiscal year was drawn up under the assumption that there would be only one vendor. Hence, working backward from the shutoff date, a schedule was drawn up. Since no one in the information systems (IS) department had prior experience with a migration of this magnitude, they could not reliably estimate the time it would take. Several applications were interdependent, so testing times had to be estimated at each level. As the migration code was developed, the application algorithms were tested and then reprogrammed as necessary. The applications on the Honeywell were revenue generating and thus were strategic systems. The migration of the code itself was not the hard part; it was system testing that was unpredictable. Furthermore, there was no documentation for some applications, and other applications that were over 20 years old had missing source code. But the biggest setback during the migration process was a government mandate that required the creation of a whole new database. Thus, primarily due to external forces, the interim deadlines had to be pushed back.

3) Case 3—Legislation at a Regulatory Agency: A state regulatory agency with historical data on sulfur and nitrogen emissions from power plants across the country documented that 25% of the lakes in the region had a pH below five. Consequently, the state passed the Clean Air Act to regulate the amount of sulfur put out by utilities. Local mandates controlled the kind of coal that was being burned in the state. However, the regulatory agency suspected that other states were contributing to the problem through westerly winds. The relationship between the sources and acid rain deposition was modeled by the regulatory agency using historical annual data. The models demonstrated the impact of reduced emissions on acid rain and were used as decision-making tools.
The agency was convinced that regional controls, rather than local controls, were required to solve the problem. The cost of moving from 3% sulfur coal to 1% was clearly known. The benefits of reduced acid rain to tourism and fishing, however, were intangible. A cost-benefit analysis was carried out to determine the impact of Federal regulation on the environment as well as on industry. The forecast of the effect of potential legislation on acid rain led the agency's effort toward making the legislation a reality.

VII. CASE ANALYSIS

For each of the three cases described above, the case characteristics were mapped onto the eight disaster characteristics, as shown in Fig. 1. A profile comparison, based on numerical scale equivalents, suggests appropriate response strategies. (This will be elaborated on further in Section VII-B.) The first column in Fig. 1 defines the lower bound for disaster (and ad hoc forecasting) scenarios. The characteristics would be as follows: low exposure; low destructive potential; narrow scope; short duration; high predictability; high controllability; slow speed of onset; and long forewarning. In fact, it is doubtful whether such an event would be widely recognized as a disaster at all. At the other extreme, the far right-hand column defines a super disaster as follows: high exposure; high destructive


potential; wide scope; long duration; low predictability; low controllability; fast onset; and short forewarning. Fortunately, nothing short of a surprise nuclear attack corresponds to this upper bound scenario.

A. Case Profiles

1) Exposure: Exposure reflects how frequently an organization faces the need for a given ad hoc forecast. The financial agency is often subject to litigation, but the regulatory agency is less frequently in the position of having to propose legislation. An organization is even less likely to migrate its computer systems to another platform. Hence the health agency (C2) was judged to have low exposure and the financial agency (C1) high exposure.

2) Destructive Potential: The destructive potential refers to the impact on the organization of deficiencies in the forecasting process. Since the liability for the financial agency (C1) was limited to refunds or foregone income rather than penalties, the destructive potential was considered small. However, the computer system at the health agency (C2) was crucial for billing and other operations, so its destructive potential was large. The impact of new regulations on the regulatory agency (C3) would fall somewhere between the two.

3) Scope: Scope is determined in the context of the organization. Thus, the scope of impact of the ruling was limited to a certain unit for the financial agency (C1). The impact of an erroneous forecast of the time taken to migrate the computers at the health agency (C2), however, would be spread throughout the organization, whereas the scope of impact of the forecast effect of legislation, for the regulatory agency (C3), fell somewhere between the two.

4) Duration: The duration of the forecasting process was long for the health agency (C2), since it was continuously updating its forecast of how long each module of the migration would take. Similarly, the financial agency (C1) planned to appeal the ruling but had to deal with the ramifications of losing the case in the interim. The forecasting and modeling for the regulatory agency (C3), however, lasted a relatively short amount of time.
5) Predictability: The financial agency (C1) could not predict the result of the appeal, nor could it have predicted losing the case, so the predictability of entering into the forecasting process was low. However, the health agency (C2) had been discussing dropping its legacy system for years, and the migration to another platform was inevitable. Again, the regulatory agency (C3) falls somewhere between the two in terms of being able to predict that it would need to model and forecast the effects of legislation on acid rain.

6) Controllability: Since the financial agency (C1) was basing its forecasts on expert judgments, essentially subjective input, it was considered to have very limited control over the forecasting process. At the other extreme, the health agency (C2) was able to redirect resources so that its forecast was actually fulfilled. The regulatory agency (C3) built the forecasting model and hence controlled the forecasting process to a certain extent.

7) Speed of Onset: Once the forecasting process began, it continued at a constant pace at both the health and regulatory


IEEE TRANSACTIONS ON ENGINEERING MANAGEMENT, VOL. 46, NO. 2, MAY 1999

Fig. 1. Characteristics for disasters and ad hoc forecasts.

agencies, but at the financial agency a relatively large amount of forecasting effort came up front. Thus the speed of onset was considered medium for the financial agency (C1) but slow for the other two agencies (C2 and C3).

8) Length of Forewarning: The length of forewarning was longest at the health agency (C2), since it had been considering the migration of applications for a long time. The financial agency (C1) had the least forewarning, whereas the regulatory agency (C3) was prepared to create forecasting models once it realized that local controls were not effective.

B. Case Analysis

As indicated earlier, the eight characteristics can be folded into a four-factor macromodel. The pairing of characteristics is shown in Fig. 2. The three-point scaling (along with the numerical equivalents) used for the characteristics is preserved in the macromodel.

Recalling that expected cost is determined by considering both exposure and destructive potential, and using the state of California as our geographical framework, only earthquakes fell into the high expected-cost category, resulting from a medium frequency of exposure coupled with a high destructive potential. At the low end of the expected-cost scale were nuclear accidents, which had an intermediate level of destructive potential but low exposure. The other three natural disasters, as well as the three ad hoc forecasting cases, all fell into the intermediate expected-cost category, but for different reasons, as indicated below.

Case 1 (C1): The financial agency, as well as oil spills, was of intermediate expected cost in spite of high exposure, because of relatively low destructive potential.

Case 2 (C2): The health agency, along with hurricanes, was of intermediate expected cost since its high destructive potential was coupled with relatively low exposure.

Case 3 (C3): The regulatory agency, and floods, were rated intermediate with respect to both component characteristics and consequently were of intermediate expected cost.



Fig. 2. The four-factor model.

The second of the macrofactors is impact. The impact of a natural disaster, or of an ad hoc forecast, is a function of its scope and its duration, where a wide scope and a long duration would yield high impact. From Fig. 2, we observe that the health agency (C2), hurricanes, and floods were all high-impact events. The remaining natural disasters and ad hoc forecasting cases were of intermediate impact.

The combination of the characteristics predictability and controllability determines the options available to the decision makers. Just as high expected cost and high impact make the decision process more challenging, so do few options. Notice that the scales are reversed here, as they were in Fig. 1, for controllability and predictability. So, once again, the upper right-hand corner in Fig. 2 corresponds to the most challenging scenario, in this case few options. The financial agency (C1), along with oil spills, earthquakes, and hurricanes, was characterized at the macrolevel by few options. The regulatory agency (C3) and nuclear accidents had an intermediate level of options, while the health agency (C2) and floods had many options.
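The pairwise folding of the eight characteristics into the four macrofactors can be sketched in code. The following is a minimal illustration, not the authors' implementation: the 1–3 numeric scores, the averaging rule, and the particular scale-reversal convention for predictability, controllability, and forewarning are assumptions for illustration; the actual assignments are given in Figs. 1 and 2.

```python
# Illustrative sketch of the four-factor macromodel aggregation.
# Scores, pairings, and the averaging rule are assumptions; the paper's
# Figs. 1 and 2 define the actual assignments.

LOW, MEDIUM, HIGH = 1, 2, 3  # three-point scale with numerical equivalents


def reverse(score: int) -> int:
    """Reverse the 1-3 scale for characteristics where a *low* value
    (e.g., low predictability, short forewarning) raises the challenge."""
    return 4 - score


def macrofactors(c: dict) -> dict:
    """Fold the eight characteristics into four macrofactors by pairwise
    averaging; higher values indicate a more challenging scenario."""
    return {
        "expected_cost": (c["exposure"] + c["destructive_potential"]) / 2,
        "impact": (c["scope"] + c["duration"]) / 2,
        # Reversed so low predictability/controllability -> few options.
        "few_options": (reverse(c["predictability"])
                        + reverse(c["controllability"])) / 2,
        # Fast onset plus short forewarning -> short (challenging) horizon.
        "short_horizon": (c["speed_of_onset"] + reverse(c["forewarning"])) / 2,
    }


# Hypothetical scoring of Case 1 (the financial agency) from the profiles
# above: high exposure, low destructive potential, narrow scope, long
# duration, low predictability and controllability, medium onset,
# short forewarning.
c1 = {"exposure": HIGH, "destructive_potential": LOW, "scope": LOW,
      "duration": HIGH, "predictability": LOW, "controllability": LOW,
      "speed_of_onset": MEDIUM, "forewarning": LOW}
print(macrofactors(c1))
# Under these assumed scores, expected_cost and impact land in the
# midrange (2.0) and few_options is 3.0, consistent with the macro
# assessment of C1 as an intermediate-cost, intermediate-impact,
# few-options scenario.
```

Note how the averaging reproduces the compensation effect discussed below for C1, where a long duration (3) offset by a narrow scope (1) yields midrange impact (2).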

For the fourth macrofactor, horizon, a scale reversal is once again in order, since it is a short horizon that increases the challenge. Such a horizon would result from short forewarning followed by fast onset. A short horizon was found for both the financial agency (C1) and earthquakes. Nuclear accidents and oil spills had an intermediate horizon, while the other cases and natural disasters all had relatively long horizons.

Fig. 3 presents a summary of the macrofactor analysis for the three ad hoc forecasting cases as well as for the five reference disasters. Recall that for each pairing of characteristics, the upper right corner of the diagram represents the most severe situation, i.e., high expected cost, high impact, few options, and short horizon. It should be noted that, as the organization gains experience in managing ad hoc forecasts, best-practice reference profiles can be added to Figs. 1–3 to supplement and perhaps eventually replace the disaster-management scenarios.

The model used to link potential ad hoc forecasting scenarios with appropriate reference disasters is fairly simple,



Fig. 3. Summary characteristics for disasters and ad hoc forecasts.

consisting of a two-stage profile comparison. At the four-factor macromodel stage, for example, after pairwise aggregation had been performed, C1 and C3 both fell into the midrange category with respect to impact (see Fig. 2). C3 fell there because it was intermediate on both of the paired components (i.e., scope and duration); C1, on the other hand, was in this category because its long duration was compensated for by a narrow scope. If there is perfect agreement on all eight characteristics, perfect agreement will also exist at the macrolevel. In all other situations, however, additional information is obtained by viewing both profiles.

For each ad hoc forecasting case, we calculate the absolute deviations of its assigned numerical values for the four macrofactors and the eight characteristics from the numerical values assigned to the five reference disasters. For example, C1 is considered to have high exposure (numerical value of three), whereas an earthquake is assigned medium exposure (numerical value of two); hence the absolute deviation of C1 from an earthquake for the characteristic exposure is one. This is done for all eight characteristics as well as for the four macrofactors for each reference disaster. These absolute deviations are then summed across the four macrofactors as well as across the eight characteristics. The disaster with the lowest sum for both the macrofactors and the eight characteristics is considered the most appropriate match for the case.

This two-stage profile analysis is applied to each of the three cases. The shaded columns in Fig. 4(a) show that, when macroprofiles are considered, the ad hoc forecasting environment of the financial agency (C1) is equally close to the profiles of an earthquake and an oil spill. Moving to the detailed profile by characteristics [the unshaded columns in Fig. 4(a)], it is seen that much greater similarity exists with the oil spill scenario.
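The matching step just described can be sketched as a small routine: sum the absolute deviations between a case profile and each reference-disaster profile, then select the reference with the smallest sum. The profile values below are hypothetical stand-ins, reduced to two dimensions for brevity; the paper's actual scores come from Figs. 1–3.

```python
# Sketch of the profile-matching step of the two-stage comparison.
# Profiles here are hypothetical toy values; real scores come from
# Figs. 1-3 of the paper.


def total_deviation(case: dict, reference: dict) -> int:
    """Sum of absolute deviations over the shared dimensions
    (the eight characteristics or the four macrofactors)."""
    return sum(abs(case[k] - reference[k]) for k in case)


def best_match(case: dict, references: dict) -> str:
    """Name of the reference disaster whose profile is closest to the case."""
    return min(references,
               key=lambda name: total_deviation(case, references[name]))


# Toy two-dimension example: C1 has high exposure (3) and narrow scope (1).
case = {"exposure": 3, "scope": 1}
references = {
    "earthquake": {"exposure": 2, "scope": 2},  # deviation |3-2| + |1-2| = 2
    "oil_spill": {"exposure": 3, "scope": 2},   # deviation |3-3| + |1-2| = 1
}
print(best_match(case, references))  # -> oil_spill
```

In the full procedure this comparison is run twice per case, once over the four macrofactors and once over the eight characteristics, and the two rankings are viewed together.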

The general prescription, as cited earlier, would be “the creation of a data repository through the identification of predeployed gatekeepers in each sector of the organization.” Since the state financial agency is subjected over time to a variety of legislative and judicial impacts on its operations, key individuals throughout the organization should be preidentified and readily available to support an ad hoc forecasting group within the agency. For each such occurrence, an appropriate subset of these key individuals would be selected to participate in a Delphi-like analysis conducted by the group to forecast the impact of such legislative or judicial decisions.

Fig. 4(b) summarizes a similar procedure and shows that both the macroprofile (shaded) and the detailed profile by characteristics (unshaded) for the ad hoc forecasting environment of the health agency (C2) are most similar to the reference disaster of a flood. For similar ad hoc forecasts, multiple options exist: monitoring and tracking personnel hours and resources, using scheduling software to refine the revised forecasts, and making contingency plans for rescheduling personnel. The general response strategy for such situations would be to establish a permanent ad hoc forecasting group within the information technology sector of the agency and to build a database that will insulate the rest of the organization from the process. This will also allow the refinement of both supporting models and information systems.

As shown in Fig. 4(c), for the regulatory agency (C3), the match is less clear. In fact, at the macrolevel, all reference disasters except earthquakes provide equally good (or poor) comparisons. At the detailed level, floods and nuclear accidents provide the best parallels, with a slightly better fit being provided by the flood scenario. Multiple options exist for such an ad hoc forecast. For instance, the agency could have several responses, such as lobbying the legislature for



Fig. 4. Agency profile analysis (absolute deviations): (a) financial agency; (b) health agency; and (c) regulatory agency.

changes in regulation based on preliminary results of the sensitivity analysis, as well as collaborating with industry through direct meetings. The regulation could be phased in slowly and, based on the observed incremental benefits and costs, adjusted accordingly. In general, the ad hoc forecasting response described in the previous section, consisting of a focused ad hoc forecasting group and database, can be supplemented by feedback/feedforward control systems.

VIII. TECHNOLOGY-BASED RECOMMENDATIONS FOR AD HOC FORECASTING

To this point, we have demonstrated how the rich disaster-management literature and research base can provide valuable insights into ways to better prepare for, and respond to, demands for ad hoc forecasts. We described, within the context of a cost-benefit framework, strategies that have been successfully employed to manage five major disasters and developed a model that enabled us to compare ad hoc forecasting with these disasters. In this way we were able to devise parallel strategies for ad hoc forecasting. In this final section, we conclude with a discussion of how to extend information

decision-based disaster-management systems to address the problems associated with ad hoc forecasting.

The presence of the Internet and the accessibility of the World Wide Web provide a platform for developing a database of ad hoc forecasts. The low frequency of ad hoc forecasts does not justify a dedicated information system, but a cooperative venture across units of an organization may be feasible. Systems currently exist that are designed to foster interorganizational collaboration and knowledge exchange, even among organizations with significantly different structures and missions. Choudhary [7] describes a multilateral interorganizational information system (IOIS) as a link that serves as an intermediary between a firm and its trading partners. Such information systems could create competitive advantages, build strategic alliances, or simply be considered a public good (for example, the airline reservation system SABRE). A public-good multilateral IOIS would be appropriate in the public sector because of the existing relationships among state agencies. Additionally, since the necessity for performing ad hoc forecasts is unpredictable, and the type of ad hoc forecast required varies greatly across time as well as across units, the creation of a public-good multilateral IOIS would be beneficial to all states.



The classification scheme described earlier allows a firm to map its ad hoc forecasting process onto a grid of possibilities and find matching profiles. Since no electronic integration is required across units, the entry and exit costs for the information system are low, and thus the member units may change over time. Rather than share information with preselected units, each unit can perform a dynamic search for a fit on every instance of an ad hoc forecast.

Andersen Consulting has employed the software system Lotus Notes to create an intranet called Knowledge Xchange. Knowledge Xchange connects tens of thousands of Andersen employees worldwide and provides features such as technical discussion databases, training memos, scheduling, personnel evaluations, and more. Employees anywhere in the world can post a question or problem and within hours receive advice and solutions from knowledgeable people. They can also query a knowledge base of best practices. With such a system, Andersen Consulting does not have to continually rediscover core knowledge [1]. Formalizing the practice of ad hoc forecasting through the support of groupware such as Lotus Notes would facilitate learning both from within (through the expertise of employees) and from the external environment (experts from other organizations). Such a practice would help to advance the creation and retention of an organizational memory.

Just as a state may be subject to an array of disasters requiring sometimes overlapping, sometimes unique planning, response, and control systems, most organizations would similarly have requirements for multifaceted information decision systems to support ad hoc forecasting. It is clear that while the ad hoc forecasting information decision systems portfolio will be organizationally specific, the components and linkages for the resulting systems can be defined through the identification of parallels with disaster-management scenarios.
While much research is still required on the effectiveness and efficiency of formalized ad hoc forecasting systems, it seems certain that appropriate system strategies can be achieved more quickly by building on the research already available in the disaster-management arena.

REFERENCES

[1] S. Belardo and A. Belardo, Trust—The Key to Change in the Information Age. Albany, NY: Sebastian Rose, 1997.
[2] S. Belardo and P. Duchessi, “An object-oriented system for pre-event crisis activities,” in Proc. 2nd Int. Conf. Emergency Planning, Lancaster, U.K., July 1993, pp. 43–50.
[3] S. Belardo, J. Harrald, W. A. Wallace, and J. Ward, “Partial covering approach to siting response resources for major maritime oil spills,” Manage. Sci., vol. 30, no. 10, pp. 1184–1196, 1984.
[4] S. Belardo and H. L. Pazer, “A framework for analyzing the information monitoring and decision support system investment trade-off dilemma: An application to crisis management,” IEEE Trans. Eng. Manag., vol. 42, pp. 352–359, Nov. 1995.
[5] S. Belardo, H. L. Pazer, W. A. Wallace, and W. Danko, “Simulation of a crisis management information network: A serendipitous evaluation,” Decision Sci., vol. 14, no. 4, pp. 588–606, Fall 1983.
[6] G. E. G. Beroggi and W. A. Wallace, “Emergency response and operational risk management,” in Proc. 1993 Int. Emergency Management and Engineering Conf., 1993, pp. 34–39.

[7] V. Choudhary, “Strategic choices in the development of inter-organizational information systems,” Inform. Syst. Res., vol. 8, no. 1, pp. 1–24, Mar. 1997.
[8] P. Duchessi, S. Belardo, and P. Leonard, “A decision support system for identifying problem banks,” in Management Quality: Recession and Crises, R. Berndt, Ed. Berlin, Germany: Springer-Verlag, 1994.
[9] R. R. Dynes, Organized Behavior in Disasters. Lexington, MA: Heath Lexington, 1970.
[10] D. A. Griffith, “Hurricane emergency management applications of the SLOSH numerical storm surge prediction model,” in Terminal Disasters: Computer Applications in Emergency Management, S. A. Marston, Ed. Boulder, CO: Univ. Colorado Inst. Behavioral Science, 1986.
[11] M. Parker, F. Niziolek, and J. Brittin, “On-line expert systems for monitoring nuclear power plant accidents,” in Proc. 1993 Int. Emergency Management and Engineering Conf., 1993, pp. 134–145.
[12] B. Render and J. Heizer, Principles of Operations Management. Needham Heights, MA: Allyn and Bacon, 1994.
[13] F. Southworth, B. N. Janson, and M. M. Venigalla, “DYMOD: Toward real-time, dynamic traffic routing during mass evacuations,” in Proc. Int. Emergency Management and Engineering Soc., Orlando, FL, 1992, pp. 84–89.
[14] R. Wilson, C. Rojahn, R. Scholl, B. Skiffington, and T. Cocozza, “Utah EQUIP: A comprehensive earthquake loss prediction model for the Wasatch fault,” in Proc. 1993 Int. Emergency Management and Engineering Conf., 1993, pp. 53–58.

InduShobha Chengalur-Smith received the Ph.D. degree from Virginia Polytechnic Institute and State University, Blacksburg. She is an Associate Professor of Management Science and Information Systems in the School of Business, State University of New York, Albany. Her research interests include information quality, decision making, and technology implementation. Her publications have appeared in various journals including the Communications of the ACM, Transportation Research, and International Journal of Production Research. She has worked on industry-sponsored projects in the areas of quality control, transportation cost models, and technology implementation.

Salvatore Belardo is an Associate Professor of Management Science and Information Systems at the State University of New York, Albany. His research interests include decision support systems, knowledge-based systems, change management, and knowledge management. He has published a number of articles in journals such as Management Science, Interfaces, The Journal of Management Information Systems, and IEEE TRANSACTIONS ON ENGINEERING MANAGEMENT. He has edited several books and has recently published a book entitled Trust: The Key to Change in the Information Age.

Harold Pazer received the MBA degree, with additional graduate work, from the University of Washington, Seattle. He is an Associate Professor of Management Science and Information Systems in the School of Business, State University of New York, Albany. His research has involved the analysis of quality in both production and information systems. In addition to coauthoring three textbooks, he has published articles in various journals, including Management Science, Decision Sciences, International Journal of Production Research, and Information Systems Research.