CRISMA project


Version 2 of Dynamic Vulnerability Functions, Systemic Vulnerability, and Social Vulnerability

Maria Polese, Giulio Zuccaro, Stefano Nardone, Salvatore La Rosa, Marco Marcolini (AMRA); Christophe Coulet, Marianne Grisel, Mehdi-Pierre Daou (AEE); Karoliina Pilli-Sihvola (FMI); Christoph Aubrecht, Klaus Steinnocher, Heinrich Humer, Hermann Huber (AIT); Kuldar Taveter, Stanislav Vassiljev (TTU); Francesco Reda, Pekka Tuomaala, Kalevi Piira, Riitta Molarius (VTT); Miguel Almeida, Luís Ribeiro, Carlos Viegas (ADAI)

5.9.2014

This project has received funding from the European Union’s Seventh Framework Programme for research, technological development and demonstration under grant agreement no 284552.


Deliverable No.: D43.2
Subproject No.: SP4
Subproject Title: Models for Multi-Sectoral Consequences
Work package No.: 43
Work package Title: Time-Dependent Vulnerability for Systems at Risk
Authors: Maria Polese, Giulio Zuccaro, Stefano Nardone, Salvatore La Rosa, Marco Marcolini (AMRA); Christophe Coulet, Marianne Grisel, Mehdi-Pierre Daou (AEE); Karoliina Pilli-Sihvola (FMI); Christoph Aubrecht, Klaus Steinnocher, Heinrich Humer, Hermann Huber (AIT); Kuldar Taveter, Stanislav Vassiljev (TTU); Francesco Reda, Pekka Tuomaala, Kalevi Piira, Riitta Molarius (VTT); Miguel Almeida, Luís Ribeiro, Carlos Viegas (ADAI)
Status (F = Final; D = Draft): F
File Name: CRISMA_D432_final.pdf
Dissemination level (PU = Public; RE = Restricted; CO = Confidential): PU
Contact: [email protected]; [email protected]
Deliverable leader: Maria Polese, Giulio Zuccaro (AMRA)
Contractual Delivery date to the EC: 31.08.2014
Actual Delivery date to the EC: 05.09.2014
Project website: www.crismaproject.eu


Disclaimer

The content of the publication herein is the sole responsibility of the publishers and it does not necessarily represent the views expressed by the European Commission or its services. While the information contained in this document is believed to be accurate, the author(s) and any other participant in the CRISMA consortium make no warranty of any kind with regard to this material, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Neither the CRISMA Consortium nor any of its members, their officers, employees or agents shall be responsible or liable, in negligence or otherwise, howsoever in respect of any inaccuracy or omission herein. Without derogating from the generality of the foregoing, neither the CRISMA Consortium nor any of its members, their officers, employees or agents shall be liable for any direct, indirect or consequential loss or damage caused by or arising from any information, advice, inaccuracy or omission herein.


Table of Contents

LIST OF FIGURES
LIST OF TABLES
GLOSSARY OF TERMS
ACRONYMS
EXECUTIVE SUMMARY
1. INTRODUCTION
   1.1. Physical vulnerability of elements at risk
   1.2. Systemic vulnerability
   1.3. Social vulnerability
2. TIME-DEPENDENT PHYSICAL VULNERABILITY (TDV)
   2.1. Vulnerability functions
   2.2. The general approach for TDV
        2.2.1. Damage Probability Matrices for seismic vulnerability of buildings
        2.2.2. Damage-dependent seismic vulnerability of buildings
        2.2.3. Damage Probability Matrices of hydraulic works with respect to coastal submersion
        2.2.4. Damage-dependent vulnerability of hydraulic works with respect to coastal submersion
        2.2.5. Damage Probability Matrices for house cooling during extreme weather conditions
        2.2.6. Time-dependent vulnerability for house cooling during extreme weather conditions
   2.3. Simulation model for TDV
        2.3.1. TDV model logic scheme
        2.3.2. Input/output specification and links with other models
        2.3.3. Example application of the TDV model for the seismic case
3. SYSTEMIC VULNERABILITY
   3.1. Introduction
   3.2. Synthesis of the review on methodologies for infrastructure and systemic vulnerability assessment
        3.2.1. Vulnerability of infrastructure
        3.2.2. Systemic vulnerability
   3.3. Road Network Vulnerability model (RNV)
        3.3.1. Road link interruption modeling in case of earthquakes
        3.3.2. Road link interruption depending on smoke
4. SOCIAL VULNERABILITY
   4.1. Introduction
   4.2. Human Exposure Modelling – the DynaPop Approach
        4.2.1. Main concepts and scientific background
        4.2.2. DynaPop introduction
        4.2.3. DynaPop model logic scheme and input/output specifications
        4.2.4. Test application
        4.2.5. DynaPop links to other models
   4.3. Casualty modeling
        4.3.1. Earthquake casualty model
        4.3.2. Human thermal model
   4.4. Evacuation modelling
        4.4.1. Basic concepts in evacuation modeling
        4.4.2. A new grid-based evacuation simulation tool for large scale assessment
        4.4.3. Life Safety Model
5. REFERENCES
APPENDIX (A) – SIMULATION MODEL FOR TDV
APPENDIX (B) – SIMULATION MODEL FOR RNV
APPENDIX (C) – SEQUENTIAL SNAPSHOTS OF THE SMOKE CONCENTRATION SIMULATION

List of Figures

Figure 1. The relationship of models and data needed for the assessment of an impact scenario.
Figure 2. Example seismic vulnerability curves relating macro-seismic intensity measure IM with Synthetic Parameter for Damage (SPD/5, in a scale from 0 to 1) for buildings of classes A (most vulnerable) to E (least vulnerable).
Figure 3. Graphical representation of the seismic DPM of building class A for intensity VIII.
Figure 4. Example seismic fragility curves for building class A featuring damage levels D3 and D5.
Figure 5. Modification of static vulnerability due to a damaging event (after Matrix, 2010).
Figure 6. The possible options for including time-dependent vulnerability in the flow of analysis.
Figure 7. Influence of a generic parameter on the SPD value for a fixed intensity.
Figure 8. Example fragility curves as a function of peak ground acceleration ag for building class B.
Figure 9. Example collapse fragility curves derived through Eq. (3) in function of RECag, and fragility curves variation depending on global ductility demand (after Polese et al. (2014)).
Figure 10. Original and modified fragility curves (in terms of D4+D5) that are obtained for building class 10 (a) and 12 (b) for ductility demand = 2, 3 and 4 (after Polese et al. (2014)).
Figure 11. The ideal elasto-plastic model and the critical damage D3 range.
Figure 12. The system to process the buildings affected by a seismic event sequence.
Figure 13. Vulnerability curves for an elapsed time of 24 hours.
Figure 14. Vulnerability curves for an elapsed time of 72 hours.
Figure 15. Example of update of fragility functions depending on damage.
Figure 16. TDV model logic scheme.
Figure 17. TDV application to seismic analysis.
Figure 18. Input/Output specification and link to other models.
Figure 19. Initial condition Building Inventory.
Figure 20. EQ first event shake map data.
Figure 21. Buildings damage distribution after first seismic event.
Figure 22. Building Inventory updated after first EQ.
Figure 23. Nodes and links.
Figure 24. An influence diagram on cascading events and mitigation.
Figure 25. Sample of model output, data associated with a specific road link.
Figure 26. Definition of a road link.
Figure 27. Selection buffer around road links.
Figure 28. Buildings selection by link buffer intersection.
Figure 29. Evaluation of building correlation to only one link.
Figure 30. Map of the probability of link interruption.
Figure 31. Table results.
Figure 32. Simplified model logic scheme.
Figure 33. RNV model flowchart.
Figure 34. RNV model, input parameters and data slots interaction.
Figure 35. Selected study zone and roads.
Figure 36. Buffer zones around road links for building selection.
Figure 37. Buildings selected around the links, classified according to vulnerability classes.
Figure 38. Probability of interruption of the links, color-coded.
Figure 39. Link feature values table showing the results of the RNV model.
Figure 40. Steps to determine the loss or reduction of functionality of a road due to smoke.
Figure 41. Simulations by FireStation Software: a) PM2.5 concentration at 2.0 m after 7 hours of forest fire; b) Linear fire intensity distribution.
Figure 42. Visibility variation with mass exhaust rate (a) and heat release rate of fire (b) for Pine, Red Oak and Douglas Fir.
Figure 43. Deceleration (a) and stopping distance (b) graphs (Limpert, 1992).
Figure 44. Maximum traffic speed according to the visibility distance.
Figure 45. Daytime vs. nighttime population distribution information for Lower Manhattan in New York City, disaggregated to a 90 m grid (LandScan USA dataset).
Figure 46. Conceptual framework for dynamic population modeling.
Figure 47. Diurnal variation of total population per administrative unit.
Figure 48. Diurnal variation of locations of population.
Figure 49. Diurnal variation of total population per locations.
Figure 50. Calculating absolute numbers for each class.
Figure 51. Density distribution for each class at a specific point in time.
Figure 52. Overview of Baden test site.
Figure 53. Night time (left) and day time (right) population per municipality for Baden test site.
Figure 54. Time use profiles for weekdays (left) and weekends (right).
Figure 55. Density grids for home, work and commuting.
Figure 56. Density grids for shopping, leisure and events.
Figure 57. Population distribution at 9:00 am on a weekday.
Figure 58. Population distribution at 9:00 am (left) and 7:00 pm (right) for the city of Baden on a weekday.
Figure 59. Population distribution at 9:00 am on a weekday (left) and on a weekend (right) for the city of Baden.
Figure 60. The logic flow of analysis for the earthquake casualty model.
Figure 61. HTM structure.
Figure 62. Simulated single house building (LVIS 2000).
Figure 63. Simulated thermal comfort.
Figure 64. Conceptual framework for evacuation modeling.
Figure 65. NetLogo simulation window.
Figure 66. NetLogo development environment.
Figure 67. Sample monitor window in NetLogo.
Figure 68. Density color scheme.
Figure 69. Situation after 12 iterations or 16 minutes.
Figure 70. Situation after 33 iterations or 44 minutes.
Figure 71. Situation after 83 iterations or approximately 110 minutes.
Figure 72. Screenshot of LSM 'in operation' for a 'dry simulation'. Colors indicate the status of the respective agent (referring to a person initially exposed).
Figure 73. Screenshot of LSM 'in operation' for a 'flooded simulation' without early warning (i.e., the first person starts to evacuate when water reaches his house; then, the information to start evacuating spreads among the population). Colors indicate the status of the respective agent (referring to a person initially exposed).
Figure 74. Schema of interface and wrapping of Simulation Model as a WPS.
Figure 75. Extract from WFS attributes table.
Figure 76. Graphical interface of the WPS client, input parameters.
Figure 77. Graphical interface of the WPS client, process result.
Figure 78. RNV model wrapped as a WPS service architecture.
Figure 79. Simulation Model Integration WPS input parameters dialog.
Figure 80. Example of access to the WFS.
Figure 81. QGIS showing the WFS layer received from the WPS after running the RNV model.
Figure 82. Evolution of PM2.5 concentration at 2.0 m during the first 7 hours of the forest fire simulated by FireStation Software. Smoke distribution for: (a) ignition time; (b) after 2 hours; (c) after 3 hours; (d) after 4 hours; (e) after 5 hours; (f) after 6 hours; (g) after 7 hours; (h) after 8 hours.


List of Tables

Table 1. The studied systems/vulnerability matrix.
Table 2. Generic scheme of damage probability Matrix for Elements of Class T.
Table 3. SPD ranges for the assignment of the vulnerability class.
Table 4. Example definition of EMS98 damage scale for masonry buildings.
Table 5. Seismic DPM for building class A.
Table 6. Seismic DPM for building class B.
Table 7. Seismic DPM for building class C.
Table 8. Seismic DPM for building class D.
Table 9. Damage Probability Matrix for a dike of good status.
Table 10. Damage Probability Matrix for a dike of medium status.
Table 11. Damage Probability Matrix for a dike of poor status.
Table 12. Table expressing the probable damage levels as function of water level and for different status of dikes.
Table 13. Dike segment description stored in the WS0.
Table 14. Damage Probability Matrix for dike segment vulnerability.
Table 15. Dike segment vulnerability classification (after the large-scale simulation).
Table 16. Dike segment behaviour – user decision.
Table 17. Dike segment description stored in the WS1.
Table 18. Updated dike segment vulnerability classification (after a small-scale simulation).
Table 19. Dike segment behaviour – user decision.
Table 20. Dike segment description stored in the WS2.
Table 21. Transmittance value of the considered building classes.
Table 22. Damage probability matrix for elapsed time of 12 hours. Apartment building class.
Table 23. Damage probability matrix for elapsed time of 24 hours. Apartment building class.
Table 24. Damage probability matrix for elapsed time of 36 hours. Apartment building class.
Table 25. Damage probability matrix for elapsed time of 48 hours. Apartment building class.
Table 26. Damage probability matrix for elapsed time of 72 hours. Apartment building class.
Table 27. Damage probability matrix for elapsed time of 12 hours. Single house class.
Table 28. Damage probability matrix for elapsed time of 24 hours. Single house class.
Table 29. Damage probability matrix for elapsed time of 36 hours. Single house class.
Table 30. Damage probability matrix for elapsed time of 48 hours. Single house class.
Table 31. Damage probability matrix for elapsed time of 72 hours. Single house class.
Table 32. Rules for class scaling in the case of earthquakes.
Table 33. Rules for class scaling in the case of dike submersion.
Table 34. Casualty percentage by damage level and building type.
Table 35. Thermal Sensation Index.
Table 36. Thermal comfort index.
Table 37. Essential attributes to be considered for the setup of an evacuation model.
Table 38. Sample information data.
Table 39. Sample population information data.

Glossary of terms

Object Of Interest: Object of Interest (OOI) is used in CRISMA to designate objects that are of interest to crisis management practitioners and therefore need to be represented and handled by a CRISMA Application. More precisely, the term is used for the IT-representation of such objects within CRISMA. In the framework of D43.2, OOI generically represent the objects exposed to a hazard, also called elements at risk or exposed assets. Since OOI instances always exist in a spatial and temporal context, OOI can be considered a specialization of the "Feature" as defined by ISO 19101 and OGC 08-126.

Physical vulnerability: Physical vulnerability expresses the propensity of an asset (or generally an element at risk, or Object Of Interest, OOI) to sustain a certain damage level in a suitably defined damage scale.

System: A system is a set of entities connected together to make a complex whole or perform a complex function (Sterling & Taveter, 2009). A system can also be defined as a complex of interacting components and relationships among them that permit the identification of a boundary-maintaining entity or process (Laszlo & Krippner, 1998).

Systemic vulnerability: The concept of systemic vulnerability measures the tendency of a territorial element to suffer damage (generally functional) due to its interconnections with other elements of the same territorial system (Pascale et al., 2010).

Social vulnerability: The inherent characteristics of a community or social system that make it susceptible to the potentially damaging effects of a hazard (adapted according to UNISDR, 2009). In the understanding of CRISMA this starts with the identification of the actual social elements at risk (population), and thus comprehends human exposure. Furthermore, inherent characteristics are considered that make certain population groups more susceptible (e.g. health status). In the context of WP43 the concept further integrates vulnerability aspects relevant for getting population out of the hazard zone (evacuation) as well as aspects geared towards potential impacts (casualty assessment).

Time-dependent vulnerability: Referring to physical vulnerability, time-dependent vulnerability is defined as the vulnerability affected by deterioration of element characteristics due to ageing and/or damage. In a broader sense, time-dependent vulnerability generally indicates the variation of vulnerability characteristics over time (in the understanding of CRISMA, this e.g. also includes spatio-temporal patterns of exposure or varying situation patterns during the process of evacuation).

World state: A particular status of the world, defined in the space of parameters describing the situation in a crisis management simulation, that represents a snapshot (situation) along the crisis evolvement. The change of world state, which may be triggered by simulation or manipulation activities of the CRISMA user, corresponds to a change of (part of) its data contents as well as of its parameters.

Acronyms

BN: Bayesian Network
DPM: Damage Probability Matrix
EMS98: European Macroseismic Scale (Grünthal, 1998)
FMI: Finnish Meteorological Institute
HEM: Human Exposure Model
HTM: Human Thermal Model
ICC: Indicators, Criteria and Costs
LSM: Life Safety Model
MEC: MEChanical approach for seismic vulnerability assessment
MU: Minimal geographical Unit of analysis
OOI: Object Of Interest
PLINIVS-LUPT: University of Naples Federico II Interdepartmental Research Center, Laboratory of Urban and Territorial Planning "Raffaele D'Ambrosio"
RC: Reinforced Concrete
REC: REsidual Capacity
SMCP: Simulation Model Control Parameters
SOAP: Simple Object Access Protocol
SPD: Synthetic Parameter of Damage
WS: World State

Executive Summary

This deliverable includes the final results of work-package WP43 "Time-dependent vulnerability for systems at risk", which is part of sub-project SP4 "Models for multi-sectoral consequences". WP43 builds upon the results of WP41 "Existing Models Harmonization" and, together with the results of WP42 "Cascade Effects on Crisis-Dependent Space-Time Scales", contributes to providing the "new models" to be integrated for simulation purposes in the CRISMA framework. The results of WP43, as well as those of WP42, feed the decision-support and simulation model developed in WP44. The new models developed within SP4 are to be integrated in the CRISMA framework in collaboration with sub-project SP3 "Integrated Crisis Modelling System" and will be tested and refined situation-specifically by pilot applications in SP5.

WP43 is organized in two time slots: the first results were reported in D43.1 (Polese et al., 2013c) and are updated and refined in D43.2, at the end of the second time slot of WP43. In particular, while the first part of the WP43 activities was mainly dedicated to conceptual developments, the second time slot was principally dedicated to the implementation of the new models and to the derivation of some of the needed features. This deliverable reports on the newly developed models and exemplifies their use with some examples. In addition, because the contents of the first deliverable D43.1 (Polese et al., 2013c) are Restricted, some parts of D43.1 are recalled here in order to facilitate the understanding of the rationale underlying the model development.

In the first chapter of this deliverable, the three core aspects of time-dependent vulnerability that were investigated are introduced, namely time- and/or damage-dependent physical vulnerability of relevant assets exposed to hazards significant for the pilot applications in CRISMA (T43.1), systemic vulnerability of critical infrastructures and systems (T43.2), and social vulnerability (T43.3). The following chapters present the newly developed models for time-dependent vulnerability (Chapter 2), systemic vulnerability (Chapter 3) and social vulnerability (Chapter 4). As stated before, the main conceptual findings of D43.1 are recalled within each chapter, and new sections are added to describe the newly developed models and features, as well as application examples. The models are also included in the online CRISMA catalogue (https://crismacat.ait.ac.at/); their use will be further described in the framework of the WP45 handbook as part of the catalogue documentation.


1. Introduction

This chapter introduces the three core aspects of time-dependent vulnerability that were investigated in the framework of the three tasks of WP43, namely time-dependent physical vulnerability (T43.1), systemic vulnerability of critical infrastructures and systems (T43.2) and social vulnerability (T43.3).

Concerning time- and/or damage-dependent physical vulnerability of relevant assets exposed to hazards significant for the pilot applications in CRISMA, after the extensive review in D41.1 and the further in-depth analysis in D43.1, Chapter 2 summarizes the main concepts and the approach followed to build a general and transferable simulation tool. Chapter 2 also describes in more detail the specific features needed for the case of damage-dependent seismic vulnerability of buildings, of vulnerability of hydraulic works due to submersion, and of time-dependent vulnerability of houses to indoor cooling in extreme weather conditions. These three cases are chosen because they shall be integrated in different pilot applications in CRISMA. Finally, the specifications of the Time-Dependent Vulnerability (TDV) simulation tool (referring to physical vulnerability) are described, while more details are given in Appendix A.

Concerning systemic vulnerability of critical infrastructures and systems, a thorough review of existing assessment approaches was presented in D43.1. In the CRISMA project, simple yet practical approaches to infrastructure vulnerability are used. As outlined in section 3.2, when referring to systemic vulnerability in CRISMA, the focus is in principle on the vulnerability of line-like components, as defined by Franchin et al. (2013). In section 3.3 the approach used to model road network vulnerability is presented and the specifications of the Road Network Vulnerability model are described, while more detailed features are specified in Appendix B.

Concerning social vulnerability, different issues related to social vulnerability and exposure aspects were elaborated in D43.1, ranging from time-dependent human exposure to evacuation and casualty modeling. These aspects are recalled in Chapter 4, which furthermore describes the main features of the DynaPop model, implemented for a test case in Austria to simulate time-dependent population distribution patterns (and, in a crisis context, consequently human exposure), as well as of the evacuation model developed in the framework of Task 43.3.

1.1. Physical vulnerability of elements at risk

The physical vulnerability of elements at risk (also called exposed objects, assets, or Objects Of Interest, OOI) is one of the main ingredients for risk and loss estimation. Knowing the territorial distribution of the elements at risk (e.g. the building inventory) and given the hazard intensity distribution on the same territorial units, it is possible to estimate, using suitable vulnerability functions (e.g. Damage Probability Matrices, DPM, or fragility curves), the potential damage distribution, i.e. the potential number of damaged elements for each vulnerability class and for each level of the damage scale; the latter information is generally available at the level of Minima territorial Units (MU)¹, but may also be aggregated and disseminated for larger territorial units.

¹ Given the probabilistic nature of the vulnerability functions for the (structural) OOIs, and also due to privacy issues, they usually cannot be referred to single objects but should be applied to an aggregated number of objects belonging to Minima territorial Units (MU), whose dimension has to be suitably established in a context-specific manner. In fact, if the MU is too large, the results for each scenario won't be detailed enough; on the other hand, choosing a too small MU would lead to considering, for each MU, too few OOIs, and therefore the generalized vulnerability functions may not be statistically representative. The latter case (too small MU) would also involve privacy issues that could conflict with ethical and security sensitivity rules (in case data on individual human beings could be derived) (excerpt from D43.1).


Figure 1 illustrates the relationship of the various models and data that are needed for the assessment of an impact scenario. Given the subdivision of the area of interest into Minima territorial Units (MU) of analysis, the hazard intensity, whose distribution is calculated by the hazard model, is given as an input for each MU (Intensity Input). The vulnerability functions (e.g. DPM or fragility curves) make it possible to calculate, for each MU, the probability of the different possible damage levels for the Objects Of Interest belonging to different classes (Probability of damage for each class of OOI).

Figure 1. The relationship of models and data needed for the assessment of an impact scenario.

Knowing the inventory (at the MU level) of the OOI (OOI Inventory), it is possible to multiply the calculated probabilities by the number of OOI belonging to each class in every MU; this way, the potential/probable territorial damage distribution is obtained. The latter already represents an impact scenario; however, together with the results of the human exposure model, it may be further used as input for the casualty model to calculate the estimated distribution of injured/deaths. Also, by applying the Indicators, Criteria and Costs (ICC) functions² (economic impact model) to the territorial distribution of damage, the territorial distribution of (direct and indirect) economic losses may be estimated.

The need to update vulnerability functions for elements at risk arises when the characteristics of such objects change and determine a varied (generally worsened) behavior with respect to the hazardous event. In chapter 2 the main concepts related to the assessment of Time-Dependent Vulnerability, already introduced in D43.1, will be recalled, and the TDV simulation tool described.

² The Indicators, Criteria and Costs (ICC) are introduced in order to support decision makers in the process of taking decisions; the ICC, which can be easily visualized and compared for the various considered scenarios, synthetically represent the status of alternative world states (more details in D44.1).
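To make the chain above concrete, the following minimal Python sketch (illustrative only, not part of the CRISMA software; the dictionary layout and function name are assumptions) multiplies a DPM row by the per-class OOI counts of one MU to obtain the expected damage distribution. The probabilities used are the class A, intensity VIII values of Table 5 in section 2.2.1:

```python
# Minimal sketch of the per-MU impact computation (illustrative only).
# DPM row for vulnerability class "A" at EMS-98 intensity VIII,
# taken from Table 5: P(D0)..P(D5).
DPM = {
    ("A", "VIII"): [0.03, 0.16, 0.32, 0.31, 0.15, 0.03],
}

def expected_damage(inventory, intensity):
    """inventory: {class: number of OOI of that class in the MU}.
    Returns the expected number of OOI per damage level D0..D5."""
    totals = [0.0] * 6
    for cls, count in inventory.items():
        for level, p in enumerate(DPM[(cls, intensity)]):
            totals[level] += count * p
    return totals

# 200 class-A buildings in one MU shaken at intensity VIII:
print(expected_damage({"A": 200}, "VIII"))
# -> [6.0, 32.0, 64.0, 62.0, 30.0, 6.0]
```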


1.2. Systemic vulnerability

Various critical elements within a territorial system are vulnerable to hazards. Systemic vulnerability measures the tendency of a territorial element to suffer damage (generally functional) due to its interconnections with other elements of the same territorial system (Pascale et al., 2010). In D43.1 a review of some of the existing methodologies for systemic vulnerability assessment was presented, with particular emphasis on road network and electric power network vulnerability with respect to extreme weather and seismic events, as well as the vulnerability of telecommunication systems with respect to extreme weather. Moreover, the modeling issues for road network seismic vulnerability and for road accidents in extreme weather conditions were described and exemplified. Section 3.2 of chapter 3 is a synthesis of the review introduced in D43.1, while in section 3.3 the Road Network Vulnerability model developed in the framework of Task 43.2 is presented in detail.

1.3. Social vulnerability

The parts of this report related to social vulnerability and exposure aspects elaborate on a set of different associated issues, on varying levels of complexity and in different stages during a crisis event and its management.

With regard to population exposure, the spatial distribution of population in general, and hence its exposure to hazards, is time-dependent, especially in metropolitan areas. Due to human activities and mobility, the distribution and density of population varies greatly in the daily cycle (Freire, 2010). Therefore, a more accurate assessment of population exposure requires going beyond residence-based census maps and figures (representing a nighttime situation) in order to be prepared for events that can occur at any time and day (Freire and Aubrecht, 2012).

Evacuation models have to consider both the physical and social aspects of a study site for their setup. In that regard, population exposure information is one of the main input factors, as it provides the basis to start with in terms of getting people out of danger. Furthermore, situational aspects such as blocked roads and other obstacles, as well as the general conditions of the route network, are essential for modelling evacuation times. In post-event emergency management applications "the need for speed" is almost always stressed, i.e. in particular fast immediate response bearing the greatest chance of saving lives (Goodchild, 2008). Temporal aspects in real-world evacuation action, as well as in its modelling, are therefore considered essential for decision makers, including accounting for varying initiation times as well as for the speed of successful evacuation in terms of continuous updates or short time intervals.

Casualty models eventually aim at estimating the number of actually affected people, thus being related to the initial starting basis of exposed population and accounting for the follow-up evacuation processes (possibly also accounting for first-impact casualties prior to evacuation). While population exposure models can often be considered largely hazard-independent (population being exposed to any kind of hazard), evacuation models and particularly casualty assessments need to be closely linked to the respective hazard situation. Casualty modeling in the case of earthquakes, for example, puts a strong focus on the location of people in a temporally seamless manner. As earthquakes can strike without any prediction or warning, it is crucial to know whether people are inside or outside of buildings (thus information on the occupancy ratios per building type) and where exactly they are within the affected area. In a further step this is then linked with physical aspects such as structural building safety (danger of collapse, etc.).

Table 1 summarizes the kind of issues that are intended to be dealt with. The final list of 'processable' items (in the CRISMA pilots) depends on data availability.

Table 1. The studied systems/vulnerability matrix.

| System | Hazard-independent population exposure | Hazard-independent social vulnerability (focus on evacuation) | Hazard-dependent social vulnerability (focus on casualties) |
|--------|----------------------------------------|---------------------------------------------------------------|--------------------------------------------------------------|
| Population | X (locational aspects) | X (inherent characteristics) | X (mostly locational aspects) |
| Affected area | x (land use, etc. to determine 'disaggregation target zones') | x (general aspects: slope, building density, etc.; situational aspects: blocked roads, fire, etc.) | x (collapsed buildings) |


2. Time-Dependent Physical Vulnerability (TDV)

Vulnerability expresses the propensity of an asset (or generally an element at risk, or Object Of Interest, OOI) to sustain a certain damage level in a suitably defined damage scale. In this sense, the physical vulnerability of individual assets, as well as of groups or classes of assets, allows the direct expression of the potential degree of loss of the single element or group of assets from external hazardous factors.

Usually, physical vulnerability to external events is considered almost stationary in time. However, vulnerability functions for elements at risk need to be updated when the characteristics of such objects change and determine a varied (generally worsened) behavior with respect to the hazardous event. A classic example is the seismic case; indeed, in many parts of the world, the repetition of medium-strong intensity earthquake ground motions at brief intervals of time has been observed, and, after a main shock has occurred, a structure in its new "damaged" state may behave very differently from the intact one. Hence, the seismic fragility of buildings impacted by aftershocks may change significantly. This applies also to other elements at risk in other hazard domains.

In the following paragraph the main concepts underlying the definition of physical vulnerability in its standard (stationary) sense are summarized. Next, section 2.2 recalls the conceptual approach to the assessment of time-dependent vulnerability, already introduced in D43.1 (Polese et al., 2013c), and provides more detailed descriptions of the assessment for the seismic case, the coastal submersion case and the extreme weather hazard case. Finally, section 2.3 describes the newly developed simulation model for TDV.

2.1. Vulnerability functions

Vulnerability in a physical sense may be usefully expressed via vulnerability functions; these functions, which are established for suitably defined classes of OOI and for selected potential damage levels in a proper damage scale, allow the probabilistic estimation of the corresponding damage for a given intensity of the hazardous event. Vulnerability, therefore, expresses the propensity of an asset (or generally an element at risk) to sustain a certain damage level in a certain hazard scenario. Although there are several approaches to derive and represent the vulnerability of exposed assets, some common logic steps are needed in order to proceed, briefly listed below.

1) The first step is to provide a suitable classification of the exposed assets, i.e. to identify the typological characteristics of those objects that homogeneously define the behaviour of a "class" or "portfolio of elements" with respect to the considered hazard.

2) Next, a (continuous or discrete) damage scale has to be established; the damage scale may be defined in terms of a graduation of phenomenological damage on the assets or also in terms of repair/reconstruction costs. In the latter case the scale may be continuous from 0 to 1, with 1 representing the object replacement cost.


3) The vulnerability has to be evaluated as the probable damage to an element (or portfolio of elements) at risk given a level of intensity of the adverse event.

In general, the main approaches for estimating the vulnerability of a class of objects may be distinguished as empirical and analytical. Empirical methods rely on observational data from past events, and in this case vulnerability functions are derived by statistical treatment of observed damage. Analytical methods, on the other hand, rely on simplified modeling of the element's behavior and on analytical evaluation of its aptitude to be damaged by a hazardous phenomenon of a given intensity. Another approach that may seldom be used to build vulnerability curves is the so-called "judgmental" method, where the functions are built exclusively on the basis of expert opinion, while a "hybrid" approach, modifying analytics-based relationships with observational data and/or experimental results, may compensate for the scarcity of observational data and for the modeling deficiencies of analytical procedures.

As explained in D41.1 (Cabal et al., 2012), vulnerability can be expressed or presented in various ways:

- vulnerability indices, which in general have no direct relation with the different hazard intensities and are mostly used for expressing social, economic and environmental vulnerability;
- vulnerability curves, which are constructed on the basis of the relation between hazard intensities and previously observed/modeled damage data, and provide a relation in the form of a curve, with an increase in damage for a higher level of hazard intensity (see Figure 2);
- Damage Probability Matrices (DPM) and fragility curves (see Figure 3).


Figure 2. Example seismic vulnerability curves relating the macro-seismic intensity measure IM with the Synthetic Parameter for Damage (SPD/5, on a scale from 0 to 1) for buildings of classes A (most vulnerable) to E (least vulnerable).

DPM express in a discrete form, for a given element class, the conditional probability of obtaining a damage level Dk due to an event of intensity I (see Table 2). DPM may be obtained via statistical treatment of observed damage data, they can be derived by numerical simulation, or they may be expert-based (applying engineering judgment). Figure 3 shows the graphical representation of the DPM of building class A for intensity VIII. It illustrates that damage level 2 is most likely at the given hazard intensity, with both less and more severe damage levels being correspondingly less likely.


Table 2. Generic scheme of a Damage Probability Matrix for elements of class T.

| Intensity | 0 | 1 | … | Dk | … | Dkmax |
|-----------|---|---|---|----|---|-------|
| …         | % | % | % | %  | % | %     |
| I         | % | % | % | P[Dk ∣ I, T] | % | % |
| …         | % | % | % | %  | % | %     |
| Imax      | % | % | % | %  | % | %     |
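Written out explicitly, the generic cell of Table 2 is a conditional probability, and each row of a DPM is a complete discrete distribution over the damage levels:

\[
P[D_k \mid I, T] = \Pr\big(\text{damage level } D_k \mid \text{intensity } I,\ \text{class } T\big),
\qquad
\sum_{k=0}^{k_{\max}} P[D_k \mid I, T] = 1 ,
\]

i.e. for a fixed intensity I and class T, the percentages of a row sum to 100%.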

Fragility curves provide the probability for a particular group of elements at risk (class) of being in or exceeding a certain damage state under a given hazard intensity. For example, Figure 4 shows the fragility curves obtained for building class A and the two damage levels D3 and D5 of the seismic damage scale (see section 2.2.1 for more detailed definitions of the adopted damage scale and vulnerability classes in the seismic domain).


Figure 3. Graphical representation of Seismic DPM of building class A for the intensity VIII.

Fragility curves are another way of representing the probabilistic information that is already expressed in the DPM.
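The link between the two representations can be written down directly: in the commonly used exceedance form, a fragility value is the cumulated upper tail of the corresponding DPM row,

\[
P[D \ge D_k \mid I, T] \;=\; \sum_{j=k}^{k_{\max}} P[D_j \mid I, T] .
\]

For example, for class A at intensity VIII (Table 5 in section 2.2.1), \(P[D \ge D_3] = 0.31 + 0.15 + 0.03 = 0.49\).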


Figure 4. Example seismic fragility curves for building class A featuring damage levels D3 and D5.

The concepts described above (referring to the aforementioned steps 1 to 3) define the logic of vulnerability assessment in terms of the propensity of an asset belonging to a given class to sustain damage on a given scale for a given intensity of the adverse event. The vulnerability functions, expressed in one of the available formats (e.g. Damage Probability Matrices or fragility curves), are used to determine the probability of attaining the various damage levels for elements belonging to certain vulnerability classes. In paragraphs 2.2.1, 2.2.3 and 2.2.5 the DPM describing the seismic vulnerability of building classes, the vulnerability of hydraulic works due to coastal submersion, and the vulnerability of houses to cooling during extreme weather phenomena are presented. Next, in paragraphs 2.2.2, 2.2.4 and 2.2.6 the rules for the variation of such basic vulnerability functions are explained. In the following paragraph the basic conceptual model for the assessment of time-dependent physical vulnerability is described.

2.2. The general approach for TDV

In most engineering systems the deterioration process is divided into progressive and sudden deterioration. Hence "structural vulnerability" is commonly considered to be affected by two categories of phenomena which may determine time-dependency: (1) continuous deterioration of material characteristics, or ageing, and (2) cumulating damage because of repeated overloading due to shocks (e.g., Sanchez-Silva et al., 2011). Ageing is a slow process caused mainly by environmental factors, and it was studied in the past by using the reliability index profile, a function that describes the change of the reliability index with time (Frangopol et al., 2004; Petcherdchoo et al., 2004; Mori and Ellingwood, 1994). Sudden deterioration, on the other hand, describes sudden changes in the structural capacity. Deterioration caused by extreme events is mostly associated with earthquakes, but the concept can also be used to model the effect of hurricanes or blasts (terrorist attacks) (Sanchez-Silva et al., 2011).

In WP43 the time-dependent vulnerability of a number of elements at risk was studied, as already described in D43.1: in particular, the time-dependent seismic vulnerability of buildings impacted by repeated earthquakes, the time-dependent vulnerability of dikes due to coastal submersion, and building vulnerability due to floods. Also, the approach that can be followed to assess the cooling of houses during extreme weather events, as well as the approach that can be followed to estimate potential losses in a forest after forest fire spreading, were explained.

Given the initial vulnerability of an element (or class) at risk, in order to determine time-dependent vulnerability the effects of time (incl. damage caused by an initial impact) have to be properly considered to allow consistent computation of time-dependent damage and/or losses. Accounting for the time-dependent (and in this context damage-dependent) variation of vulnerability makes it possible to use updated vulnerability functions; analogously to static (or latent) vulnerability functions, the time-dependent vulnerability functions may be expressed using the same representation features: vulnerability indices, vulnerability curves, Damage Probability Matrices or fragility curves. By way of example, Figure 5 shows a sketch of the time-dependent variation (due to damage from several impacts) of seismic fragility; in this case the effect of building strengthening in the reconstruction phase is also considered as a factor inducing time-dependent variation of vulnerability.


Figure 5. Modification of static vulnerability due to a damaging event (after Matrix, 2010).

When the characteristics of the elements at risk change, showing a (generally worsened) behavior with respect to the hazardous event, the need arises to update the initial vulnerability functions.

Figure 6. The possible options for including time-dependent vulnerability in the flow of analysis.

Figure 6 shows the updating of vulnerability functions as option 1 in the logic scheme of analysis. As an alternative approach (denoted as option 2 in Figure 6), instead of updating the vulnerability functions, the inventory of OOI within each MU is updated. For example, an OOI originally belonging to a class Cj may be declassified to class Cj-1 if the damage level sustained due to the first event is relatively high. In the subsequent run of analysis (after the second event) the vulnerability functions for class Cj-1 will be applied to the same object that originally belonged to class Cj. If an initial shock caused a building to collapse, the OOI inventory must be updated in any case, thus accounting for the reduced number, and changed spatial distribution, of exposed objects. A sketch of this option is given below.
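A minimal Python sketch of option 2 follows (the A–E class ordering is the one used in this deliverable, but the damage threshold and the one-class shift are illustrative assumptions; the class-scaling rules actually used are given in Tables 32 and 33):

```python
# Option 2 sketch: shift heavily damaged OOIs to a more vulnerable class.
# NOTE: the D3 threshold and the one-class shift are illustrative assumptions.
CLASSES = ["E", "D", "C", "B", "A"]   # from least (E) to most (A) vulnerable

def updated_class(cls, damage_level):
    """Return the vulnerability class to use in the next run of analysis,
    or None if the OOI collapsed (D5) and is removed from the inventory."""
    if damage_level == 5:
        return None                   # collapse: drop from the OOI inventory
    if damage_level >= 3:             # heavy damage: declassify by one class
        i = CLASSES.index(cls)
        return CLASSES[min(i + 1, len(CLASSES) - 1)]
    return cls                        # vulnerability considered unchanged

assert updated_class("C", 4) == "B"   # Cj declassified to the next class
assert updated_class("C", 1) == "C"
assert updated_class("B", 5) is None
```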


The assessment of the time-dependent (and in this context damage-dependent) vulnerability variation (updating of vulnerability functions – option 1), as well as of the rules for suitable class shifting due to ageing or damage (updating of the OOI inventory – option 2), is not trivial, and different approaches may be followed in the different hazard domains, as described in the following subchapters (2.2.2 to 2.2.6).

2.2.1. Damage Probability Matrices for seismic vulnerability of buildings

Considering the damage observed after past earthquakes, a number of authors have elaborated empirical seismic fragility curves for relevant building classes, e.g. Braga et al., 1982; Giovinazzi and Lagomarsino, 2004; Di Pasquale et al., 2005; Rossetto and Elnashai, 2003; Zuccaro et al., 2008a; Liel et al., 2012. Here, we adopt the fragility curves proposed by Zuccaro et al. (2008a), which are essentially based on damage observed after past seismic events in Italy. Those curves are built from the Damage Probability Matrices (DPM) and the corresponding binomial coefficients proposed for the relevant vulnerability classes. Building classes, ranging from A (most vulnerable) to E (least vulnerable), are defined once the vertical typology and other typological characteristics of the building are known.

As explained in D43.1 (Polese et al., 2013c), the building classification with respect to seismic vulnerability that is applied in CRISMA is based on 5 interval ranges of a Synthetic Parameter of Damage (SPD). The SPD represents the average value of the distribution of damage defined according to the classification proposed in the European Macroseismic Scale EMS98 (Grünthal, 1998), which comprises 5 levels of damage (D1 to D5) plus the null damage D0 (see, as an example, Table 4 below). For each building, the initial SPD may be evaluated by first considering the structural material of the vertical structures (SPD-V), as in the EMS98 classification (type of masonry, Reinforced Concrete RC, etc.); then, according to the proposal of Zuccaro et al. (2008a), it may be "corrected" on the basis of parameters influencing the seismic behavior (type of horizontal structures, number of storeys, construction age, etc.), as shown in Figure 7 (SPD-P).

Figure 7. Influence of a generic parameter on the SPD value for a fixed intensity.


After calibration through the statistical analysis of the data referring to the buildings grouped according to the typology of the vertical structures, 5 SPD ranges for the definition of the vulnerability classes are obtained (see Table 3); a minimal sketch of the corresponding class assignment is given after the table.

Table 3. SPD ranges for the assignment of the vulnerability class.

Class    SPD range
A        SPD >= 2.0
B        1.7 <= SPD < 2.0
C        1.4 <= SPD < 1.7
D        1.0 <= SPD < 1.4
E        SPD < 1.0
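As an illustration of how these ranges drive the classification, the following minimal Python sketch maps an SPD value to a class (the function name is ours, not part of the CRISMA software):

```python
def vulnerability_class(spd: float) -> str:
    """Map a Synthetic Parameter of Damage (SPD) value to the seismic
    vulnerability class, following the ranges of Table 3
    (A = most vulnerable, E = least vulnerable)."""
    if spd >= 2.0:
        return "A"
    if spd >= 1.7:
        return "B"
    if spd >= 1.4:
        return "C"
    if spd >= 1.0:
        return "D"
    return "E"

print(vulnerability_class(1.55))   # -> "C"
```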

The methodology, described in more detail in (Zuccaro et al., 2008a), allows the automatic assignment of the building seismic vulnerability classes, once the vertical typology and the other typological characteristics of the building are known, even in the case of an inhomogeneous level of information. The original DPMs according to (Zuccaro et al., 2008a) are determined in terms of EMS-98 macroseismic intensity, and refer to the corresponding 5+1 level damage scale (Grünthal, 1998), from no damage (D0) to collapse (D5) (see Table 4 as an example of the damage scale definition for masonry buildings).

Table 4. Example definition of the EMS98 damage scale for masonry buildings.

Classification of damage to masonry buildings:
- Grade 1: Negligible to slight damage (no structural damage, slight non-structural damage). Hair-line cracks in very few walls. Fall of small pieces of plaster only. Fall of loose stones from upper parts of buildings in very few cases.
- Grade 2: Moderate damage (slight structural damage, moderate non-structural damage). Cracks in many walls. Fall of fairly large pieces of plaster. Partial collapse of chimneys.
- Grade 3: Substantial to heavy damage (moderate structural damage, heavy non-structural damage). Large and extensive cracks in most walls. Roof tiles detach. Chimneys fracture at the roof line; failure of individual non-structural elements (partitions, gable walls).
- Grade 4: Very heavy damage (heavy structural damage, very heavy non-structural damage). Serious failure of walls; partial structural failure of roofs and floors.
- Grade 5: Destruction (very heavy structural damage). Total or near total collapse.

Table 5 to Table 8 show the DPMs obtained for the relevant building classes according to the proposal of (Angeletti et al., 2002).

Table 5. Seismic DPM for building class A.

              Damage Level – Class A
Intensity     0      1      2      3      4      5
V             0.59   0.33   0.07   0.01   0.00   0.00
VI            0.32   0.41   0.21   0.05   0.01   0.00
VII           0.12   0.32   0.33   0.18   0.05   0.00
VIII          0.03   0.16   0.32   0.31   0.15   0.03
IX            0.01   0.06   0.20   0.34   0.29   0.10
X             0.00   0.02   0.10   0.28   0.39   0.21
XI            0.00   0.01   0.04   0.19   0.41   0.35
XII           0.00   0.00   0.02   0.12   0.38   0.49

Table 6. Seismic DPM for building class B.

              Damage Level – Class B
Intensity     0      1      2      3      4      5
V             0.68   0.27   0.04   0.00   0.00   0.00
VI            0.50   0.37   0.11   0.02   0.00   0.00
VII           0.33   0.41   0.21   0.05   0.01   0.00
VIII          0.19   0.37   0.30   0.12   0.02   0.00
IX            0.10   0.29   0.34   0.20   0.06   0.01
X             0.05   0.20   0.33   0.28   0.12   0.02
XI            0.02   0.12   0.29   0.33   0.19   0.04
XII           0.01   0.07   0.22   0.35   0.27   0.08

Table 7. Seismic DPM for building class C.

              Damage Level – Class C
Intensity     0      1      2      3      4      5
V             0.87   0.13   0.01   0.00   0.00   0.00
VI            0.78   0.20   0.02   0.00   0.00   0.00
VII           0.67   0.28   0.05   0.00   0.00   0.00
VIII          0.55   0.35   0.09   0.01   0.00   0.00
IX            0.44   0.39   0.14   0.03   0.00   0.00
X             0.33   0.41   0.21   0.05   0.01   0.00
XI            0.23   0.39   0.27   0.09   0.02   0.00
XII           0.16   0.35   0.31   0.14   0.03   0.00

Table 8. Seismic DPM for building class D.

              Damage Level – Class D
Intensity     0      1      2      3      4      5
V             0.92   0.08   0.00   0.00   0.00   0.00
VI            0.85   0.14   0.01   0.00   0.00   0.00
VII           0.75   0.22   0.03   0.00   0.00   0.00
VIII          0.63   0.31   0.06   0.01   0.00   0.00
IX            0.49   0.37   0.11   0.02   0.00   0.00
X             0.36   0.41   0.18   0.04   0.00   0.00
XI            0.25   0.40   0.25   0.08   0.01   0.00
XII           0.16   0.36   0.31   0.14   0.03   0.00

As discussed in (Polese et al., 2014), the fragility curves for each of the A to E building typologies may then be built by summing up, for each intensity level, the probability values corresponding to the generic damage state. In order to represent the fragility curves in terms of an intensity parameter that may be straightforwardly used in a mechanically based framework, the intensity-based fragility curves can be expressed in terms of peak ground acceleration ag. In particular, the correlation between the intensity and ag can be derived based on existing intensity–ag relations (e.g. Guagenti and Petrini, 1989; Faccioli and Cauzzi, 2006; Faenza and Michelini, 2010; Margottini et al., 1992). For example, Figure 8 shows the fragility curves in terms of ag for vulnerability class B, obtained using the regression formula from (Guagenti and Petrini, 1989) to convert macroseismic intensity to peak ground acceleration; a sketch of this construction is given after the figure.

Figure 8. Example fragility curves as a function of peak ground acceleration ag for building class B.
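The summation described above can be sketched as follows. The DPM values are those of Table 6 (class B), while the intensity-to-ag conversion is only indicated with placeholder coefficients, since the actual regression must be taken from one of the cited relations (e.g. Guagenti and Petrini, 1989):

```python
import numpy as np

# DPM for class B (Table 6): rows = intensities V..XII, cols = damage D0..D5
dpm_b = np.array([
    [0.68, 0.27, 0.04, 0.00, 0.00, 0.00],   # V
    [0.50, 0.37, 0.11, 0.02, 0.00, 0.00],   # VI
    [0.33, 0.41, 0.21, 0.05, 0.01, 0.00],   # VII
    [0.19, 0.37, 0.30, 0.12, 0.02, 0.00],   # VIII
    [0.10, 0.29, 0.34, 0.20, 0.06, 0.01],   # IX
    [0.05, 0.20, 0.33, 0.28, 0.12, 0.02],   # X
    [0.02, 0.12, 0.29, 0.33, 0.19, 0.04],   # XI
    [0.01, 0.07, 0.22, 0.35, 0.27, 0.08],   # XII
])

# Fragility P(damage >= Dk | I): sum each row from column k upward
fragility = np.cumsum(dpm_b[:, ::-1], axis=1)[:, ::-1]

# Hypothetical intensity -> ag conversion of the form ag = a * exp(b * I);
# a and b are placeholders, NOT the published regression coefficients.
a, b = 0.002, 0.6
intensities = np.arange(5, 13)       # V ... XII
ag = a * np.exp(b * intensities)     # anchor points for curves like Figure 8
print(fragility[:, 4])               # P(D >= D4) for each intensity
```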

2.2.2. Damage-dependent seismic vulnerability of buildings

In many parts of the world, the repetition of medium–strong intensity earthquake ground motions at brief intervals of time has been observed; after a main shock has occurred, the structure in its new "damaged" state may behave very differently from the intact one. Hence, the seismic fragility of buildings impacted by aftershocks may change significantly, and a consistent assessment of seismic risk in the short to mid term implies accounting for both time-dependent hazard and time-dependent fragility curves (Polese et al., 2013a). Given the initial seismic vulnerability of a building, in order to determine the vulnerability change following initial impacts, the effects of damage have to be properly considered to allow consistent computation of time-dependent damage and/or losses. There are two possible approaches for modelling the vulnerability change following initial damage.

1st option – variation of fragility functions. This approach entails the explicit consideration of the variation of the vulnerability functions describing the propensity of the OOI to suffer damage due to a hazardous event; the change of the fragility functions may be directly determined as a function of the damage level that the generic element has suffered during the previous event.

2nd option – update of inventory. This approach, without changing the vulnerability functions, entails the reclassification of the OOI (in pre-fixed vulnerability classes) considering the worsening of their behaviour due to damage.

In the following sub-paragraphs the two options are described. In particular, the description of Option 1 is based on the work presented in (Polese et al., 2014), while the description of Option 2 is based on the work presented in (Zuccaro et al., 2008b).


The idea of updating the fragility curves based on the expected damage level in the buildings (Option 1) is very appealing and allows a consistent evaluation of damage and successive fragility updates. However, the method is fully implemented only for the Reinforced Concrete (RC) building classes, while further work is needed to extend this approach also to other building typologies (e.g. masonry buildings). Therefore, as will be described in section 2.3, the domain-independent tool for the assessment of time-dependent physical vulnerability in the CRISMA framework currently builds upon the concept of inventory updating.

Description of Option 1. Damage-dependent fragility curves for RC building classes

In Polese et al. (2013a, 2014) a MEChanism based method (MEC) for the assessment of damage-dependent collapse fragility curves of existing RC building classes is presented. The papers were prepared in the framework of the CRISMA project, and in this deliverable the main results are summarized. The seismic behaviour of damaged buildings, and their relative seismic safety, may be adequately represented by the seismic capacity modified due to damage, the so-called REsidual Capacity REC. The residual capacity RECSa is defined as the minimum spectral acceleration (at the period Teq of the equivalent Single Degree Of Freedom, SDOF, system) corresponding to building collapse. Also, considering the peak ground acceleration ag as the damaging intensity parameter, RECag is defined as the minimum anchoring peak ground acceleration determining building collapse. RECag represents the peak ground acceleration corresponding to a 50% probability of attaining the collapse damage state; hence, as proposed in (Polese et al., 2013b), it may be employed for the simple construction of collapse fragility curves:

$$P(\mathrm{col}\mid a_g)=\Phi\!\left[\frac{1}{\beta}\,\ln\!\left(\frac{a_g}{\hat a_g}\right)\right]=\Phi\!\left[\frac{1}{\beta}\,\ln\!\left(\frac{a_g}{REC_{a_g}}\right)\right]\qquad(1)$$

where Φ is the standard normal cumulative distribution function, âg = RECag, and β is the global value of dispersion, due to modelling uncertainties and the inherent randomness associated with earthquake variability (Polese et al., 2013a; ATC, 2012). Therefore the estimation, following a MEC approach, of the REC variation for given ductility demands allows assessing the relative damage-dependent variation of the collapse fragility curves (e.g. Figure 9); a numerical sketch of Eq. (1) is given after the figure.

Figure 9. Example collapse fragility curves derived through Eq. (1) as a function of RECag, and fragility curve variation depending on the global ductility demand (e.g. evaluated with CSM for the main shock) (after Polese et al. (2014)).
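A minimal numerical sketch of Eq. (1) follows; the dispersion β = 0.7 and the RECag values are illustrative placeholders, not calibrated results:

```python
from math import log
from statistics import NormalDist

def p_collapse(ag: float, rec_ag: float, beta: float = 0.7) -> float:
    """Eq. (1): lognormal collapse fragility anchored at REC_ag,
    the PGA giving a 50% collapse probability."""
    return NormalDist().cdf(log(ag / rec_ag) / beta)

# A damaged building (lower residual capacity) is more fragile:
print(p_collapse(0.4, rec_ag=0.6))    # intact state
print(p_collapse(0.4, rec_ag=0.45))   # reduced REC_ag after a main shock
```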


For brevity, the full description of the MEC approach is not reported here; the interested reader may refer to Polese et al. (2014) and Polese et al. (2013a) for further details. As observed in Polese et al. (2014), the proposed approach can be conveniently applied for the assessment of the damage- (or ductility-)dependent variation of fragility curves, while the initial collapse fragility curves (referring to the undamaged state) shall first be calibrated based on observational data. In the paper, the authors adopted as "initial" curves the fragility curves proposed in (Zuccaro et al., 2008a), which are essentially based on the damage observed after past seismic events in Italy; however, the procedure could be implemented based on any other fragility curves, provided they allow a realistic representation of the collapse probability. Figure 10 shows the fragility curves for buildings belonging to classes 10 (a) and 12 (b) for the intact case and for assigned global ductility demand values of μ = 2, 3, 4. As explained in Polese et al. (2014), the global ductility is a demand parameter that can be straightforwardly linked to the local ductility demand level of the RC elements, and therefore to the damage level within the structure. Classes 10 and 12 are formed by buildings of 3 to 4 storeys, prone to a first-storey mechanism (soft storey) and built in the periods 1946–61 and 1972–81, respectively. It may be observed that for ag = 1 g the collapse probability of buildings in class 10 rises from approximately 5% for the intact state to 10% for a ductility demand of 4, while for building class 12 it rises from approximately 2% to 3.6%.


Figure 10. Original and modified fragility curves (in terms of D4+D5) obtained for building classes 10 (a) and 12 (b) for ductility demands μ = 2, 3 and 4 (after Polese et al. (2014)).

The damage-dependent collapse fragility curves may vary significantly depending on the ductility demand level for the main-shock impacted structures. These curves may be conveniently used as a supporting tool for short- to medium-term risk assessment or for scenario-based assessment during a sequence of events.

Description of Option 2. Update of inventory³

The problem is treated as a progressive deterioration of the building's resistance characteristics, as represented by the damage level. Assuming that the structures can survive numerous moderate events as long as the "elastic threshold" (conventional for masonry buildings) is not exceeded, and assuming that this threshold corresponds to a damage level D3, a model of cumulative damage due to the sequence of events on the structures is calibrated (Zuccaro et al., 2008b). An ideal elasto-plastic model is assumed, with the damage D3 range centred on the elastic limit (see Figure 11). An important characteristic of the model is its capability to "recalibrate itself" during the event sequence, dynamically updating the building inventory and hence properly reassigning the vulnerability functions. The damage from the generic event modifies the capacity of the buildings to resist the following actions; therefore, the inventory of the buildings in the respective spatial unit of analysis changes. A routine has been developed to estimate the deterioration of the building resistance due to previous damage and to assign, proportionally to the level of the damage recorded after the event, a virtual vulnerability class that will address the choice of the DPM to be used for that building when the following event occurs.

³ Parts of this section are based on Zuccaro et al. (2008b).

Figure 11. The ideal elasto-plastic model and the critical damage D3 range.

In detail (see Figure 12), the reclassification rules are the following (a minimal code sketch is given after the list):
- If the building has not been damaged (D0), or has suffered only light damage (D1) that has not violated the integrity of the structural elements, it preserves the vulnerability class it had before the event.
- If the building has suffered damage D2 (light structural damage), the vulnerability is increased by one class.
- If the building has suffered heavy damage (D3), the vulnerability is increased by two classes.
- If the building has suffered damage D4 (partial collapse) or D5 (total collapse), it is considered "lost" and is removed from the inventory of buildings of the cell in the damage elaborations for the following events.
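A minimal sketch of this rule follows; the virtual classes AA and AAA (worse than A) anticipate the convention introduced for the TDV model in section 2.3, and the function name is ours:

```python
from typing import Optional

CLASS_ORDER = ["AAA", "AA", "A", "B", "C", "D", "E"]   # worst ... best
SHIFT = {"D0": 0, "D1": 0, "D2": 1, "D3": 2}           # classes to scale

def reclassify(vuln_class: str, damage: str) -> Optional[str]:
    """Return the (virtual) vulnerability class after an event,
    or None if the building is lost (D4/D5) and leaves the inventory."""
    if damage in ("D4", "D5"):
        return None
    i = CLASS_ORDER.index(vuln_class)
    return CLASS_ORDER[max(0, i - SHIFT[damage])]

print(reclassify("B", "D3"))   # -> "AA"
```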


Figure 12. The system to process the buildings affected by a seismic event sequence.

2.2.3. Damage Probability Matrices of hydraulic works with respect to coastal submersion

The time-dependent vulnerability of hydraulic structures should be studied at two different time-scales. During an event, dikes must withstand high external forces due to water level and waves, which can damage their structure. If the dike structures do not resist these actions, coastal submersion can occur as a consequence of the hazardous event. If the structures resist, the protection against coastal submersion for the considered event is ensured, but the dike may have sustained some damage; the resistance capacity of the dike is then reduced, impairing its functionality for the next event.

In order to assess the vulnerability of dikes, we consider the segments constituting the whole dike. Dike segments are defined so that their length does not exceed 250 meters; in the following paragraphs, when we mention a dike, we actually mean a segment of dike. Further, to diagnose flood defences, we establish the status of each dike segment depending on its type. There are four main types of structures: rock dikes, masonry dikes, earthen dikes and natural protections (dunes). Several phenomena can damage dikes, such as external erosion, internal erosion and overflow. Thus, the status of a protection segment is assessed by a geotechnical expert considering the status of the internal siding materials, of the external siding materials and of the parapet.

In case of coastal submersion, the duration of application of external forces to the protection is short (due to the tide), so internal erosion is negligible. This is not necessarily the case for river flooding, where the duration of the applied water forces can be longer (some days). External erosion by waves mainly affects earthen dikes and dunes. In such cases, a relatively small wave height (Hs ~ 1 m) can generate erosion and, after some time, a breach and/or a total failure of the dike. Of course, the probability of a breach appearing depends on the width of the dike and also on the duration of the impact by waves.


As in coastal areas the tide effect implies that the duration of wave impact on the dikes is short, the damage probabilities can be expressed as a function of the water level above the dike, for each dike status (Tables 9 to 11).

Table 9. Damage Probability Matrix for a dike of good status.

Water level above the dike    No failure    Breach    Total Failure
50 cm                         98%           2%        0%
>1 m                          10%           80%       10%

Table 10. Damage Probability Matrix for a dike of medium status.

Water level above the dike    No failure    Breach    Total Failure
50 cm                         5%            15%       80%
>1 m                          0.10%         4.90%     95%

Table 11. Damage Probability Matrix for a dike of poor status.

Water level above the dike    No failure    Breach    Total Failure
50 cm                         0.10%         4.90%     95%
>1 m                          0%            0.10%     99.90%

These DPMs can be synthesized in one composite table (see Table 12).

Table 12. Probable damage levels as a function of the water level above the dike, for the different status of dikes.

Water level          Good status            Medium status          Poor status
above the dike
50 cm                98% No failure         5% No failure          0.1% No failure
                     2% Breach              15% Breach             4.9% Breach
                     0% Total Failure       80% Total Failure      95% Total Failure
>1 m                 10% No failure         0.1% No failure        0.0% No failure
                     80% Breach             4.9% Breach            0.1% Breach
                     10% Total Failure      95% Total Failure      99.9% Total Failure

2.2.4. Damage dependent vulnerability of hydraulic works with respect to coastal submersion

In the scope of coastal submersion simulation, the dikes are considered as polyline segments with a maximum length of 250 meters. For the definition of the initial World State WS0, the dike segments are the Minimal Units (MU); each contains the initial status (Good, Medium, Poor), an indicator of the collapse of the dike, and another indicator describing the behaviour of the dike for the simulation of an event. This behaviour indicator can take three values: Resist, Breach or Fail (see Table 13). In the CRISMA marine submersion scenario, the first step is to run a large-scale simulation in order to assess which are the most exposed dikes. At this level, there are no collapsed dikes and the behaviour of all dikes is 'Resist'.

Table 13. Dike segment description stored in the WS0.

                     Status                               Behaviour
ID dike segment      Good   Medium   Poor   Collapsed     Resist   Breach   Fail
Id1                  1      0        0      0             1        0        0
Id2                  0      0        1      0             1        0        0
Id3                  0      1        0      0             1        0        0
…                    0      1        0      0             1        0        0

After the large-scale simulation, and according to its results, it is possible to find the maximum value of the water height above the dike, Hmax, for each segment of dike. This value is then used to apply the Damage Probability Matrix defined for dikes (Table 14); a minimal sketch of its application is given after the table.

Table 14. Damage Probability Matrix for dike segment vulnerability.

Water level          Good status            Medium status          Poor status
above the dike
50 cm                98% No failure         5% No failure          0.1% No failure
                     2% Breach              15% Breach             4.9% Breach
                     0% Total Failure       80% Total Failure      95% Total Failure
>1 m                 10% No failure         0.1% No failure        0.0% No failure
                     80% Breach             4.9% Breach            0.1% Breach
                     10% Total Failure      95% Total Failure      99.9% Total Failure
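A minimal sketch of how Table 14 can be applied to a segment is shown below; the banding of intermediate water heights and all names are our assumptions, not part of the CRISMA software:

```python
# DPM of Table 14: status -> water-level band -> (No failure, Breach, Total Failure)
DPM_DIKE = {
    "Good":   {"50 cm": (0.98, 0.02, 0.00),   ">1 m": (0.10, 0.80, 0.10)},
    "Medium": {"50 cm": (0.05, 0.15, 0.80),   ">1 m": (0.001, 0.049, 0.95)},
    "Poor":   {"50 cm": (0.001, 0.049, 0.95), ">1 m": (0.000, 0.001, 0.999)},
}
COLOURS = {"Resist": "green", "Breach": "orange", "Fail": "red"}

def classify_segment(status: str, h_max_m: float):
    """Pick the most probable behaviour (and map colour) for a dike segment,
    given its status and the simulated Hmax above the dike.  Heights between
    the two table rows are assigned to the '>1 m' band, which is an
    assumption of this sketch."""
    band = "50 cm" if h_max_m <= 0.5 else ">1 m"
    probs = dict(zip(("Resist", "Breach", "Fail"), DPM_DIKE[status][band]))
    behaviour = max(probs, key=probs.get)
    return behaviour, COLOURS[behaviour], probs

print(classify_segment("Medium", 0.8))   # -> ('Fail', 'red', {...})
```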

The result for each segment of dike is a probability to resist, breach or fail. The end user will be able to see the dike segments on a map with a colour:
- green, if the highest probability is to resist;
- orange, if the highest probability is to breach;
- red, if the highest probability is to totally fail.

The vulnerability classification of the dike segments in the World State is done as in the following table (Table 15). There are five columns: one for the dike segment ID, one for each vulnerability class and one for the collapsed segments. For each dike segment, its vulnerability class is defined by setting the value 1 in the respective column (for example, if a dike segment has the highest chance to resist, there is a 1 in the column "Probably Resist" and zero in the other columns).

Table 15. Dike segment vulnerability classification (after the large-scale simulation).

ID dike segment    Probably Resist    Probably Breach    Probably Fail    Already Collapsed
Id1                1                  0                  0                0
Id2                0                  0                  1                0
Id3                0                  1                  0                0
…
Id n               0                  1                  0                0
…

Depending on the associated colour and the attributed probabilities of each dike segment, the user can decide which case to test (no failure, breach or total failure). This choice is made interactively. The result of this user decision on the future behaviour of the dikes is a new table (Table 16). Id3 shows a dike segment whose highest probability was to breach, but which is considered by the user as sufficiently resistant to withstand another event simulation. If the end user wants a maximal-damage vision, more pessimistic choices can be made; e.g., all dike segments whose highest probability is to breach could be considered as Fail.

Table 16. Dike segment behaviour – user decision.

ID dike segment    Resist    Breach    Fail
Id1                1         0         0
Id2                0         0         1
Id3                1         0         0
…
Id n               0         1         0
…

These choices are automatically translated into a "new" version of the status of the dikes following these rules:
- all dike segments marked to Resist have their status unchanged;
- all dike segments marked to Breach have their status changed to Poor;
- all dike segments marked to Fail have their status changed to Collapsed.

Table 17 describes the result of this treatment and, by consequence, the new world state (WS1). In the modified cells, we indicate in parentheses the value of the previous World State, i.e. 0(1) standing for 0 in WS1, changed from 1 in WS0. A minimal sketch of this status update is given below.
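The sketch below applies the three rules, assuming statuses and decisions are stored as plain strings:

```python
def update_status(status: str, decision: str) -> str:
    """Apply the user decision to a dike segment status (WS -> next WS)."""
    if decision == "Resist":
        return status          # unchanged
    if decision == "Breach":
        return "Poor"          # damaged but still standing
    if decision == "Fail":
        return "Collapsed"     # removed from the line of defence
    raise ValueError(f"unknown decision: {decision}")

ws0 = {"Id1": "Good", "Id2": "Poor", "Id3": "Medium"}
decisions = {"Id1": "Resist", "Id2": "Fail", "Id3": "Resist"}
ws1 = {seg: update_status(st, decisions[seg]) for seg, st in ws0.items()}
print(ws1)   # {'Id1': 'Good', 'Id2': 'Collapsed', 'Id3': 'Medium'}
```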


Table 17. Dike segment description stored in the WS1.

                     Status                                  Behaviour
ID dike segment      Good   Medium   Poor    Collapsed       Resist   Breach   Fail
Id1                  1      0        0       0               1        0        0
Id2                  0      0        0 (1)   1 (0)           0        0        1
Id3                  0      1        0       0               1        0        0
…
Id n                 0      0 (1)    1 (0)   0               0        1        0
…

The second simulation, at a local scale, for the same event (and potentially the following simulation for another event) takes into account the choices of the end user described in WS1 (i.e. breach of a given dike at a given time, or total failure of a given dike at a given time). The end user will then be able to see the assessment of the consequences of such a breach or total failure. The classification of the dike vulnerability is also updated, as shown in Table 18.

Table 18. Updated dike segment vulnerability classification (after a small-scale simulation).

ID dike segment    Probably Resist    Probably Breach    Probably Fail    Already Collapsed
Id1                1                  0                  0                0
Id2                0                  0                  0                1
Id3                0                  1                  0                0
…
Id n               0                  0                  1                0
…

Then the end user can update the behaviour of the dike segments for the following events, if requested. Table 19 gives an example of such a user decision.

Table 19. Dike segment behaviour – user decision.

ID dike segment    Resist    Breach    Fail
Id1                1         0         0
Id2                0         0         1
Id3                0         1         0
…
Id n               0         0         1
…

Table 20 describes the consequence of such a decision on the new world state (WS2). Note that the behaviour in WS2 cannot be better than that in WS1, as it is the result of the simulation of an event in which some dikes breached or failed. If the user wants to modify the previous choices, it is possible to go back up the simulation chain, create another world state (WS1b) and rerun the following simulation.


Table 20. Dike segment description stored in the WS2.

                     Status                                  Behaviour
ID dike segment      Good   Medium   Poor    Collapsed       Resist   Breach   Fail
Id1                  1      0        0       0               1        0        0
Id2                  0      0        0       1               0        0        1
Id3                  0      0 (1)    1 (0)   0               0        1        0
…
Id n                 0      0        0 (1)   1 (0)           0        0        1
…

2.2.5. Damage Probability Matrices for house cooling during extreme weather conditions

In an extreme weather case, all information about the standard houses located in the area of interest should be considered. In the CRISMA framework, a deterministic approach to assess house cooling has been evaluated. Two vulnerability classes of houses were considered with respect to cooling during extreme weather: apartment building B1 and family house B2. Relevant input parameters for both classes, such as the thermal insulation represented by the transmittance value U, are reported in Table 21. It is important to notice that the highest transmittance values (A), associated with the building envelope elements, are representative of an old building, while the lowest (D3) are representative of a new building. It is worth noticing that both apartment (B1) and single house (B2) buildings present the same transmittance values along the years. However, they show different thermal behaviours, since they have different S/V (surface/volume) ratios and different internal room configurations. Moreover, each class consists of buildings with different heat capacity values of the envelope elements, such as external wall, ground floor, floor and windows. In particular, every external envelope heat capacity, light (L), medium (M) and heavy (H), has been coupled with the transmittance values indicated with the letters from A to D3 in Table 21. The two classes reflect the exposed heritage built throughout the years.

Table 21. Transmittance values of the considered building classes.

U-value [W/(m²K)]    external wall    ground floor    floor    window
A                    0.83             0.42            0.48     2.2
B                    0.47             0.29            0.48     2.2
C1                   0.28             0.22            0.36     1.6
C2                   0.25             0.16            0.25     1.4
D1                   0.17             0.09            0.16     1.0
D2                   0.14             0.08            0.12     0.9
D3                   0.08             0.07            0.09     0.7

Usually, physical vulnerability functions relate the intensity of a hazard parameter to a damage level in a suitably established damage scale. Even though the cooling of houses is not strictly connected to physical damage of buildings, it represents a serious matter for the building occupants. Thus, the vulnerability functions for buildings in case of extreme weather have been characterized in terms of indoor temperature decrement. In fact, in extreme cold weather, safety decisions in case of blackout are based on two main issues: the cooling speed of buildings (indoor temperature decrement) and the associated human comfort in the respective cooling circumstances. Therefore, the estimated vulnerability functions associate the outside temperature, which is considered as the hazard intensity parameter, with the indoor temperature after given time intervals from blackout.

In earlier sections of this deliverable it was stated that physical vulnerability can be expressed e.g. in terms of curves and matrices. Furthermore, the concept of time-dependent vulnerability is introduced in this section due to the dynamic phenomena assessed. Firstly, time-dependent vulnerability curves were built, and then damage probability matrices (DPMs) for the considered buildings per different elapsed times from blackout were created; in particular, 12, 24, 36, 48 and 72 hours are considered as elapsed times. The VTT house model has been used to evaluate the indoor temperature as a function of the outside temperature for a specific elapsed time per each building class. Consequently, vulnerability curves are created from these data. The figures below show the vulnerability curves for elapsed times of 24 and 72 hours for all the considered buildings, reflecting the performance of the different considered input parameters.

Figure 13. Vulnerability curves for an elapsed time of 24 hours.


Figure 14. Vulnerability curves for an elapsed time of 72 hours.

Basically, those vulnerability curves give, for each building type, the most probable indoor temperature at a given external temperature. The time-dependent vulnerability approach has then been extended in order to create DPMs. They represent the probability of attaining variable damage levels (i.e. the reached indoor temperature) for each intensity value (i.e. the external temperature) per each elapsed time from blackout. In this analysis a Gaussian probability distribution has been used, considering the variation of the main input parameters: transmittance and heat capacity values of the envelope elements per each class. Finally, the tables below show the DPMs for elapsed times of 12, 24, 36, 48 and 72 hours, respectively for the apartment building and single house classes.

Table 22. Damage probability matrix for elapsed time of 12 hours. Apartment building class.

h12                                        Tint [°C]
Text [°C]    5        0        -5       -10      -15      -20      -25      -30      -35
-25          0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-30          0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-40          0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-50          21.4 %   0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %

Table 23. Damage probability matrix for elapsed time of 24 hours. Apartment building class.

h24                                        Tint [°C]
Text [°C]    5        0        -5       -10      -15      -20      -25      -30      -35
-25          0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-30          7.1 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-40          28.6 %   0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-50          14.3 %   14.3 %   0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %

Table 24. Damage probability matrix for elapsed time of 36 hours. Apartment building class.

h36                                        Tint [°C]
Text [°C]    5        0        -5       -10      -15      -20      -25      -30      -35
-25          21.4 %   0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-30          28.6 %   0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-40          14.3 %   21.4 %   0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-50          21.4 %   14.3 %   14.3 %   0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %

Table 25. Damage probability matrix for elapsed time of 48 hours. Apartment building class.

h48                                        Tint [°C]
Text [°C]    5        0        -5       -10      -15      -20      -25      -30      -35
-25          21.4 %   7.1 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-30          21.4 %   14.3 %   0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-40          21.4 %   14.3 %   14.3 %   0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-50          28.6 %   7.1 %    21.4 %   7.1 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %

Table 26. Damage probability matrix for elapsed time of 72 hours. Apartment building class.

h72                                        Tint [°C]
Text [°C]    5        0        -5       -10      -15      -20      -25      -30      -35
-25          21.4 %   14.3 %   0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-30          21.4 %   14.3 %   7.1 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-40          21.4 %   14.3 %   14.3 %   7.1 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-50          21.4 %   14.3 %   14.3 %   14.3 %   7.1 %    0.0 %    0.0 %    0.0 %    0.0 %

Table 27. Damage probability matrix for elapsed time of 12 hours. Single house class.

h12                                        Tint [°C]
Text [°C]    5        0        -5       -10      -15      -20      -25      -30      -35
-25          4.8 %    4.8 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-30          9.5 %    9.5 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-40          14.3 %   4.8 %    4.8 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-50          4.8 %    14.3 %   4.8 %    4.8 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %

Table 28. Damage probability matrix for elapsed time of 24 hours. Single house class.

h24                                        Tint [°C]
Text [°C]    5        0        -5       -10      -15      -20      -25      -30      -35
-25          14.3 %   4.8 %    9.5 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-30          9.5 %    14.3 %   4.8 %    4.8 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-40          19.0 %   4.8 %    14.3 %   4.8 %    4.8 %    0.0 %    0.0 %    0.0 %    0.0 %
-50          14.3 %   14.3 %   0.0 %    14.3 %   4.8 %    0.0 %    4.8 %    0.0 %    0.0 %

Table 29. Damage probability matrix for elapsed time of 36 hours. Single house class.

h36                                        Tint [°C]
Text [°C]    5        0        -5       -10      -15      -20      -25      -30      -35
-25          23.8 %   9.5 %    9.5 %    4.8 %    0.0 %    0.0 %    0.0 %    0.0 %    0.0 %
-30          23.8 %   4.8 %    14.3 %   4.8 %    4.8 %    0.0 %    0.0 %    0.0 %    0.0 %
-40          19.0 %   14.3 %   4.8 %    14.3 %   4.8 %    4.8 %    0.0 %    0.0 %    0.0 %
-50          19.0 %   14.3 %   9.5 %    4.8 %    9.5 %    9.5 %    0.0 %    4.8 %    0.0 %

Table 30. Damage probability matrix for elapsed time of 48 hours. Single house class.

h48                                        Tint [°C]
Text [°C]    5        0        -5       -10      -15      -20      -25      -30      -35
-25          19.0 %   9.5 %    14.3 %   4.8 %    4.8 %    0.0 %    0.0 %    0.0 %    0.0 %
-30          19.0 %   14.3 %   4.8 %    14.3 %   9.5 %    0.0 %    0.0 %    0.0 %    0.0 %
-40          19.0 %   14.3 %   14.3 %   0.0 %    14.3 %   4.8 %    4.8 %    0.0 %    0.0 %
-50          19.0 %   9.5 %    14.3 %   14.3 %   0.0 %    14.3 %   4.8 %    4.8 %    0.0 %

Table 31. Damage probability matrix for elapsed time of 72 hours. Single house class.

h72                                        Tint [°C]
Text [°C]    5        0        -5       -10      -15      -20      -25      -30      -35
-25          19.0 %   14.3 %   14.3 %   9.5 %    4.8 %    0.0 %    0.0 %    0.0 %    0.0 %
-30          14.3 %   19.0 %   9.5 %    14.3 %   4.8 %    4.8 %    0.0 %    0.0 %    0.0 %
-40          19.0 %   9.5 %    19.0 %   9.5 %    9.5 %    9.5 %    4.8 %    0.0 %    0.0 %
-50          9.5 %    14.3 %   4.8 %    19.0 %   9.5 %    0.0 %    14.3 %   4.8 %    4.8 %

2.2.6. Time-dependent vulnerability for house cooling during extreme weather conditions

The key parameter for keeping the indoor temperature relatively high is the thermal mass of the envelope, with insulation coming next. In fact, the buildings with heavyweight envelope show the highest indoor temperature after an elapsed time of 72 hours. Moreover, within the same building typology, buildings with heavyweight envelope keep the indoor temperature at the same level as buildings with medium-weight envelope and lower insulation. Clearly, apartment buildings B1 show better performance due to less external surface.

The DPMs confirm this: the probability of finding low indoor temperatures is always higher in single houses than in apartment buildings for each elapsed time, since the external surfaces are responsible for the heat losses towards the outside environment. In particular, family houses B2 at poor and medium insulation status are colder than apartment buildings with the same insulation level. Obviously, with increasing elapsed time the probability that the indoor temperature will be below zero in the considered classes increases. Indeed, with regard to the most extreme outside condition (external temperature of -50 °C), after just 12 hours no apartment building will have an indoor temperature below zero (see Table 22), while for the single houses there is already a 9.6% probability (see Table 27). On the other hand, 72 hours after the blackout the probability rises to 35.7% (Table 26) for the apartment buildings and to 57.2% for the single houses (Table 31). It is worth noticing that, even after 72 hours, still no apartment building will have an indoor temperature below zero if the external temperature is -25 °C. Moreover, the probability that the indoor temperature drops below zero degrees from 12 to 72 hours after the blackout shows an average increment of 0%, 1.8%, 5.4% and 8.9% for the apartment building, respectively for external temperatures of -25 °C, -30 °C, -40 °C and -50 °C, while for the single house the values are 7.1%, 8.3%, 11.9% and 11.9%. Furthermore, a severe external temperature becomes a strong factor after 24 hours for the apartment buildings, but already after just 12 hours for the single houses.
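The figures quoted above can be cross-checked directly on the DPM rows; for example, for the single house class, 12 hours after the blackout at an external temperature of -50 °C (Table 27):

```python
# The probability that the indoor temperature is below zero is the sum of
# the DPM row entries for the Tint bins strictly below 0 °C.
tint_bins = [5, 0, -5, -10, -15, -20, -25, -30, -35]
row_h12_sh_m50 = [4.8, 14.3, 4.8, 4.8, 0.0, 0.0, 0.0, 0.0, 0.0]   # percent

p_below_zero = sum(p for t, p in zip(tint_bins, row_h12_sh_m50) if t < 0)
print(f"{p_below_zero:.1f} %")   # -> 9.6 %, as stated in the text
```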

2.3. Simulation model for TDV

In the following paragraphs, the main features of the model logic scheme introduced in D43.1 (Polese et al., 2013c) are recalled, and the basic specifications for the TDV simulation model are described (additional technical details are given in Appendix A).

2.3.1. TDV Model logic scheme

Vulnerability of elements at risk may be considered to be affected by time-dependency for the following reasons:
- continuous deterioration of material characteristics or ageing (in the long term);
- cumulating damage because of repeated overloading due to adverse events.

In addition, the inherent dependency on time of the damaging phenomenon shall be considered (as for the cooling of houses with time from blackout in the extreme weather case).

Given the initial vulnerability of an element (or class) at risk, in order to determine time-dependent vulnerability the effects of time (or of damage caused by an initial impact) have to be properly considered in order to allow consistent computation of time-dependent damage and/or losses. The TDV model allows consistent computation of time-dependent damage by the use of suitably updated vulnerability functions. The updating can be done following two approaches.

1st option – variation of vulnerability functions (see e.g. Figure 15)

This approach entails the explicit consideration of the variation of the vulnerability functions describing the propensity of the exposed assets to suffer damage due to a hazardous event; the change of vulnerability functions may be directly determined as a function of the damage level that the generic element has suffered during the previous event (in case of vulnerability variation due to damage by a previous impact) or as a function of time (in case of ageing or, as for the extreme weather example, when the phenomenon is inherently dependent on time).

Figure 15. Example of update of fragility functions depending on damage.

2nd option – update of inventory

This approach, without changing the vulnerability functions, entails the re-classification of the exposed assets (in pre-fixed vulnerability classes) considering the worsening of their behaviour due to damage.

The TDV model is "domain-independent" in the sense that the logic scheme is the same for different hazard domains (e.g. earthquake, flood, extreme weather, ...), but in order to use the model to compute time-dependent losses (in terms of damages in the established damage scale) the model has to be suitably fed for each hazard domain. The description given below refers to the implementation of Option 2 of the TDV model (update of the classification of elements at risk).

Input data

Elements vulnerability data:
- rules for inventory updating due to damage from the previous event (for application of option 2).

Other input data are given at the Minimal territorial Unit (MU) level (e.g. a grid of 500x500 m).

Inventory data:
- initial inventory: initial distribution of vulnerability classes (number and distribution of elements at risk belonging to each class).

Damage data:
- damage distribution: number of elements at risk – or Objects Of Interest, OOI – belonging to each class attaining the different damage states within each MU (for application of option 2, or of option 1 in case of damage-dependent vulnerability). The damage distribution is a result of the (stationary) Vulnerability Model, e.g. as implemented in the Building Impact Model for the case of earthquakes.


Process

The TDV model logically works in an interactive manner with the standard Vulnerability Model (VM) for a sequence of events, see Figure 16. For applying Option 2, the model needs the inventory and the damage distribution for the elements at risk (Objects Of Interest, OOI) as input (from the initial World State, WS(0)). Before the first event, at the World State initial condition (WS(0)), the damage distribution for the selected domain is empty. Damage data are derived/estimated from the application of a standard Vulnerability Model VM (e.g. the Building Impact Model) after the first event (event 1). The (initial) inventory for the elements at risk is retrieved from the WS, which contains it due to the (previously needed) application of a standard VM. The application of the TDV model with option 2 then allows updating the OOI inventory in the WS. The updated OOI inventory becomes the inventory input, together with the intensity input for the successive event, for the standard VM to be used within the sequential events analysis. A schematic sketch of this loop is given below.

Output data

Updated inventory: Updated distribution of vulnerability classes (number and distribution of elements at risk belonging to each class) after each event.
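Schematically, the loop described above can be sketched as follows; run_vm and update_inventory stand for the domain-specific impact model and the class-scaling rules, and are passed in since they are not part of this sketch:

```python
def tdv_sequence(initial_inventory, events, run_vm, update_inventory):
    """Iterate (VM -> damage -> inventory update) over a sequence of
    events; returns the OOI inventory stored in each World State
    WS(0), WS(1), ..., WS(n)."""
    states = [initial_inventory]          # WS(0): damage distribution empty
    for event in events:
        damage = run_vm(states[-1], event)                     # standard VM
        states.append(update_inventory(states[-1], damage))    # option 2
    return states

# Tiny demo with stub models (for illustration only):
demo = tdv_sequence({"B": 100}, ["EQ1", "EQ2"],
                    run_vm=lambda inv, ev: {},            # stub VM
                    update_inventory=lambda inv, dmg: inv)
```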


Figure 16. TDV model logic scheme.

The application of this logic scheme to the case of TDV analysis in the earthquake domain leads to the following diagram (Figure 17), showing the data slots, flows and model interactions:


Figure 17. TDV application to seismic analysis.

Light blue links in the figure indicate the data transfer managed by the TDV model itself. The TDV model input data identified as 'EQ Parameters' and 'Classes & rules' refer to the earthquakes' physical parameters (position, magnitude, depth, ...) for the events to use in the TDV sequence modelling, and to the definition and naming of the vulnerability classes of OOIs and the scaling rules due to damage. TDV model outputs are stored in the OOI Inventory data slot, i.e. after the simulation the model stores the updated OOI Inventory data (distribution of vulnerability classes per MU) in this container. The structure of the 'OOI Inventory' will be described hereafter. The sequential steps of the process for TDV simulation in the seismic case are identified with numbers in circles in the figure:

1. The initial distribution of the building vulnerability classes per MU fills the initial condition of the OOI Inventory.
2. The TDV model gets the needed input parameters about the sequence of earthquakes to use in the simulation.
3. The TDV model builds the needed command with parameters in order to execute the Building Impact Model for the first event of the sequence. The Building Impact Model manages all the needed data to evaluate its Building Damage output.
4. Building Damage results, together with the Building Inventory data,
5. are used to update the OOI Inventory at step 'n' and stored.
6. OOI Inventory data of the 'n-1' sequence step are used as Building Inventory input for the Building Impact Model at the next step 'n' of the sequence.


Then the TDV model repeats the elaboration from step (3) until the end of the sequence of events. At the end of the process, the 'OOI Inventory' will contain, for each step, the damage and the updated OOI Inventory results for each MU.

Rules for vulnerability updating

In subchapters 2.2.2 to 2.2.6 the rules for vulnerability updating were described for the cases of seismic building vulnerability, vulnerability of dikes to submersion, and vulnerability of houses to internal cooling after an electricity blackout. For example, in the case of seismic building vulnerability, the rules for vulnerability updating may be summarized as in Table 32, where 0 indicates no scaling, 1 and 2 indicate the scaling of one or two classes, respectively (e.g. from class B to A if the scaling order is 1, from B to AA – a worse class than A – if the scaling order is 2), and -1 indicates that the buildings are lost (i.e. removed from the inventory). A sketch of the corresponding inventory update is given after the table.

Table 32. Rules for class scaling in the case of earthquakes.

Damage level    Scaling order
D0              0
D1              0
D2              1
D3              2
D4              -1
D5              -1
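A minimal sketch of the MU-level inventory update, assuming per-class building counts and a probabilistic damage distribution from the vulnerability model (all names are ours):

```python
SCALING = {"D0": 0, "D1": 0, "D2": 1, "D3": 2, "D4": -1, "D5": -1}  # Table 32
ORDER = ["AAA", "AA", "A", "B", "C", "D", "E"]                      # worst ... best

def update_mu_inventory(counts, damage_prob):
    """counts: buildings per class in one MU; damage_prob[cls][Dk]: damage
    distribution from the vulnerability model.  Buildings reaching D4/D5
    are removed from the inventory."""
    new = {c: 0.0 for c in ORDER}
    for cls, n in counts.items():
        for dk, p in damage_prob[cls].items():
            s = SCALING[dk]
            if s == -1:
                continue                               # lost buildings
            new_cls = ORDER[max(0, ORDER.index(cls) - s)]
            new[new_cls] += n * p
    return new

inv = {"B": 100}
dmg = {"B": {"D0": 0.3, "D1": 0.3, "D2": 0.2, "D3": 0.1, "D4": 0.05, "D5": 0.05}}
print(update_mu_inventory(inv, dmg))   # 60 stay in B, 20 -> A, 10 -> AA, 10 lost
```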

In the case of vulnerability of dikes to submersion, the rules for vulnerability updating may be summarized as in Table 33, where in case of an intermediate damage level (breach) all the classes are re-classified as "Poor".

Table 33. Rules for class scaling in the case of dike submersion.

Damage level    Scaling order
Resist          0
Breach          Poor
Fail            -1

In the case of vulnerability of houses to internal cooling after an electricity blackout, the "damage level" may be assimilated to the time passed from the blackout. This way, the rule for vulnerability updating is very simple, since it is sufficient to refer to the damage probability matrices determined for the elapsed time from blackout (e.g., referring to class B1, apartment buildings, to Table 22 for 12 hours from blackout, or to Table 23 for 24 hours from blackout, etc.) in order to suitably simulate the change in vulnerability condition.

2.3.2. Input/output specification and links with other models

The input and output data are exchanged following the scheme represented in Figure 18.


In particular, the data exchange paths (blue links) and the process execution (red link) for the generalized TDV model application are represented:
- TDV Input parameters: the TDV model control parameters for the execution of the simulations;
- OOI Inventory: OOI inventory data needed for impact modelling;
- OOI Damage: output from impact modelling with the damage of the OOI;
- TDV Updated OOI: updated OOI catalogue after the impact simulation.

Figure 18. Input/Output specification and link to other models.

The working of the scheme in Figure 18 can be described as follows: the TDV model gets its input control parameters; it reads the OOI inventory (distribution of OOI vulnerability classes) and prepares the first-event impact model simulation (domain-specific impact model); it executes the impact model (which uses the internal OOI inventory, the hazard intensity distribution and the DPMs, and produces a damage distribution, OOI damage); finally, the TDV model evaluates the OOI damage obtained in the step and updates the OOI.

2.3.3. Example application of the TDV model for the seismic case

In this paragraph the application of the model logic scheme is exemplified with reference to time-dependent seismic vulnerability for buildings. The initial World State, which is discretized at the level of Minimal Units of analysis (MU), contains the basic territorial GIS information and the initial distribution of vulnerability classes (building inventory), as shown in Figure 19. The next step is the input of the hazard intensity distribution. It is hypothesized that the hazard input is derived by applying an attenuation law to the seismic event with parameters: magnitude 5.6, latitude 42.47 North and longitude 13.2 East; Figure 20 shows the hazard map in terms of Peak Ground Acceleration (PGA) in units of g. The TDV model invokes the domain-specific impact model, also called Vulnerability Model VM (in this case the Building Impact Model), with the building inventory and the hazard intensity distribution at the MU level as input. The output of the standard VM is the probable damage distribution. By way of example, Figure 21 shows the probable distribution of damage levels D4 and D5, representing the collapsed buildings in the adopted seismic damage scale.

Figure 19. Initial condition Building Inventory.

Figure 20. EQ first event shake map data.

The obtained probable damage distribution is used as input information for the updating of the building inventory. Figure 22 shows the distribution of probable damage on a map at the MU level. Note that the updated inventory also contains the building class AA, representing worse behaviour than building class A, and the building class AAA, representing worse behaviour than building class AA. Those classes have been added in order to account for the possibly worsened behaviour after the damaging of buildings, as explained in section 2.3.1; obviously, there are no buildings classified as AA or AAA in the initial inventory, but it may happen that after damage some buildings are "de-classified", and therefore those classes may not be empty in the final world state.

Figure 21. Buildings Damage distribution after first seismic event.

Figure 22. Building Inventory updated after first EQ.


3. Systemic vulnerability

3.1. Introduction

Different critical infrastructure systems, such as electricity and heat distribution, the different transportation modes, telecommunication, and water delivery and sewage, are extremely vulnerable to natural and man-made disasters. This has been witnessed in many major disasters in Europe. The vulnerability of critical infrastructures has recently received considerable attention due to their importance in sustaining the functioning of society in a crisis situation (see e.g. the EU FP7 projects SYNER-G, EWENT and WEATHER). In the scope of the CRISMA project, mainly the emergency phase of the crisis is addressed, with the aim of supporting decision makers in effective crisis management, or preparing them with training for response. The emergency phase falls in the short-term temporal range (a few days/weeks), and the typical spatial extent is at an urban/regional scale.

Systemic vulnerability is defined, in addition to physical and functional vulnerability, as a concept specifically referring to territorial systems. While physical vulnerability refers to the vulnerability of an element, or group of elements, with respect to a hazardous event, and functional vulnerability represents the tendency of an element to suffer impaired functioning due to external phenomena, the concept of systemic vulnerability measures the tendency of a territorial element to suffer damage (generally functional) due to its interconnections with other elements of the same territorial system (Pascale et al., 2010).

This chapter provides an overview of aspects of modelling infrastructure and systemic vulnerability, with special reference to road network vulnerability. Section 3.2 presents a synthesis of some of the existing methodologies for systemic vulnerability assessment. Next, the Road Network Vulnerability (RNV) model adopted in CRISMA is described in section 3.3, including a sub-section illustrating a sample application of the RNV simulation tool implemented in the framework of T43.2.

3.2. Synthesis of the review on methodologies for infrastructure and systemic vulnerability assessment

The impact of a disaster caused by a natural hazard on a system evolves in space and with the time elapsed from the event. Typically, three time frames are considered (Franchin et al., 2013):
- short-term: in the aftermath of the event the damaged Infrastructure operates in a state of emergency;
- mid-term: the Infrastructure progressively returns to a new state of normal functionality;
- long-term: the Infrastructure is upgraded/retrofitted with available resources to mitigate the risk from the next event.

Two important concepts in infrastructure network and systemic vulnerability assessments are links and nodes, which are illustrated in Figure 23. Within a single infrastructure element, nodes can represent critical points in the network, such as a bridge or a station, and links represent for instance roads and railways. A common way to assess the vulnerability of a network is to disable the functioning of one or several nodes and/or links, leading to disruptions in the flow of goods or services through the network (Murray et al., 2008).


Figure 23. Nodes and links.

In case of systemic vulnerability, the sets of nodes and links describe the whole territorial system (Pascale et al., 2010). In the Syner-G project a useful distinction among links and nodes is proposed (Franchin et al., 2013):

Nodes:
- Point-like components (critical facilities): single-site facilities whose importance for the functionality of the infrastructure makes them critical, justifying a detailed description and analysis. Examples include hospitals and power plants.
- Area-like components: a special category specifically intended to model large populations of residential, office and commercial buildings that cannot be treated individually. These buildings make up the largest proportion of the built environment and generally give the predominant contribution to the total direct loss due to physical damage.

Links:
- Line-like components (networks, lifelines): distributed systems comprising a number of vulnerable point-like sub-systems in their vertices, and strongly characterized by their flow-transmission function. Examples include electricity networks with vulnerable power plants, sub-stations, etc., or road networks with vulnerable bridges.

When referring to systemic vulnerability in CRISMA, the focus is on line-like components, whose vulnerability may be studied with one of the existing approaches described in the following.

3.2.1. Vulnerability of Infrastructure

The vulnerability of an infrastructure network can be assessed from several perspectives. Networks are characterised by different paradigms (e.g. hard, soft, critical) and foci (e.g. structural/physical, operational, environmental, social) (Murray et al., 2008). This necessitates the use of different methods for assessing the vulnerability of different systems. As concerns the general approach of analysis, network vulnerability has been assessed in several different ways in the academic literature. Murray et al. (2008) identify four methods, namely: scenario-specific, strategy-specific, simulation and mathematical modelling. The primary difference between these approaches lies in how the disruptive scenarios are assessed and understood. Each method has its benefits and limitations. In addition, Bayesian nets (BN) are also used to assess vulnerability.

Scenario-specific assessments use a specific scenario or a small set of scenarios, such as the impact of an earthquake on transportation systems (see e.g. Ham et al., 2005; Tatano & Tsuchiya, 2008), to estimate the impacts of an incident, enabling comparison between selected, important scenarios. The most significant benefit of this approach is that the scenarios can be identified by experts and, owing to the small number of scenarios, complex methods for analysing the impacts can be used (Murray et al., 2008). The small number of scenarios also creates the most significant drawback of this approach, as unrelated but important scenarios can easily be omitted from the analysis.

Strategy-specific assessments are used to analyse how vulnerable the network is with respect to a structured or coordinated loss of links or nodes. Instead of using an incident scenario, such as an earthquake, strategy-specific assessments use a hypothesized sequence or strategy of disruption. Strategy-specific approaches are more useful for assessing the vulnerability of different network configurations to identical, strategic incidents, where a strategy could be a targeted attack on the strategically most important parts of the system. As opposed to the scenario-specific approach, the strategy-specific approach uses one scenario but several network configurations. This makes it possible to assess the vulnerability of different configurations. However, the approach suffers from limitations similar to those of the scenario-specific approach, because it only considers a limited number of scenarios. In addition, the approach requires the determination of the relative importance of the different nodes and links in the network. This is a simplification for many networks, because the network components are often interrelated, decreasing the importance of single components. A third drawback is that the approach does not consider the simultaneous disruption of different components (Murray et al., 2008). Because of the structured nature of disruption in this approach, it is not necessarily relevant for assessing the vulnerability of networks to extreme weather events, given their purely random nature.

More sophisticated methods to assess vulnerability do not make any a priori assumptions regarding the network structure or the disruptive scenario. The simulation approach takes into consideration a suitable number of random scenarios, aiming to find lower and upper bounds for the impacts, thereby revealing the vulnerabilities in different networks. If the number of different scenarios is large, the simulation approach cannot cover the whole range of scenarios and impacts either, but it makes it possible to identify and "analyse the range of possible scenarios when scenario enumeration is not an option" (Murray et al., 2008). A limitation of the simulation approach is that it does not necessarily consider important scenarios or reveal the most vulnerable components in the network, because it gives the same weight of importance to all scenarios. To account for varying importance, mathematical modelling approaches have been developed.
By different modelling techniques, scenarios resulting in the greatest potential impact and revealing the most vulnerable components in the network can be assessed. Therefore, mathematical modelling allows a search for potentially important scenarios, which does not necessarily involve the most obvious, critical components, but is instead related to the functioning of the entire system. Mathematical modelling is also capable of identifying scenarios that pose the greatest threat to systemic vulnerability, thereby having a more system-oriented approach to vulnerability compared to the scenario-specific assessment. Naturally, also


Modelling complex systems is always challenging, and taking into account all necessary variables and relationships may prove impossible. Also, the worst-case scenario is not always the only scenario of importance in assessing vulnerabilities: other, less damaging scenarios may also create problems for the operational continuity of infrastructure systems. And as with all mathematical modelling, finding a feasible model size and obtaining solutions may sometimes be challenging (Murray et al., 2008).

3.2.2. Systemic vulnerability

Pascale et al. (2010) suggest a scenario-based approach to systemic vulnerability which consists of three stages: 1) topological characterization of the territorial system under study and the assessment of scenarios in terms of a hazard (Pascale et al. (2010) assess systemic vulnerability in landslide-prone areas); 2) the analysis of the direct consequences of a scenario event on the system; and 3) the definition of the model for assessing systemic vulnerability in hazard-prone areas. Another approach to systemic vulnerability is the application of Bayesian networks (BN), an intuitive graphical method of modelling probabilistic interrelations between variables. BN consist of nodes, representing variables, which are interlinked with arcs representing their causal or influential relationships. The variables can be either discrete, such as false/true or high/medium/low, or continuous. The causal relationships between variables are defined by conditional probability distributions, commonly referred to as node probability tables (NPT) or conditional probability tables (CPT). From these, it is possible to calculate the marginal (or unconditional) probability distributions of the variables. Furthermore, if evidence on some of the variables is observed, the other probabilities in the model can be updated using probability calculus and the Bayes theorem; this is referred to as propagation (Korb and Nicholson, 2010; Fenton, 2012). Influence diagrams (ID), which are generalisations of BN, contain decision and value nodes in addition to probabilistic nodes. Hence, they can be used to analyse the consequences of different decisions and interdependencies, as seen in Figure 24.

Figure 24. An influence diagram on cascading events and mitigation.


An initiating event, e.g. an earthquake, will cause some damage, but the extent of that damage can be influenced by mitigation and prevention actions. The damage (e.g. damaged buildings) may incur loss, e.g. in terms of lives or money. The damage may also trigger secondary events, i.e. the cascade effect. In the CRISMA project, simple yet practical approaches to infrastructure vulnerability are used. When referring to systemic vulnerability in CRISMA, the focus is in principle on the vulnerability of line-like components, as defined by Franchin et al. (2013). Section 3.3 presents the approach used to model road network vulnerability.
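The following sketch evaluates a toy version of such an influence diagram by direct enumeration; all states, probabilities and costs are invented for illustration and do not come from the deliverable.

# Hypothetical discrete network: Mitigation (decision) -> Damage -> Loss,
# with the initiating event (earthquake) assumed to have occurred.
P_DAMAGE = {  # P(damage level | mitigation decision)
    False: {"none": 0.2, "moderate": 0.5, "severe": 0.3},
    True:  {"none": 0.4, "moderate": 0.5, "severe": 0.1},
}
LOSS = {"none": 0.0, "moderate": 2.0, "severe": 10.0}  # monetary loss (M EUR)
MITIGATION_COST = 0.5

def expected_loss(mitigate: bool) -> float:
    """Marginalize over the damage states to get the expected loss of a decision."""
    loss = sum(p * LOSS[state] for state, p in P_DAMAGE[mitigate].items())
    return loss + (MITIGATION_COST if mitigate else 0.0)

for decision in (False, True):
    print(f"mitigate={decision}: expected loss {expected_loss(decision):.2f} M EUR")

# Bayesian updating works the same way in reverse: after observing "severe"
# damage, P(mitigated | severe) follows from Bayes' theorem given a prior
# probability on the decision node.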

3.3. Road Network Vulnerability model (RNV)

The RNV model described next was originally developed at the Plinivs centre in order to assess the safety of possible escape routes in earthquake-impacted areas. In the framework of the activities of Task 43.2, the model was rearranged and rewritten, and it is described here in the logic of integration within the CRISMA framework. Moreover, new capabilities were inserted in the model, specifically for the automatic attribution of buildings along a road link to selected vulnerability classes, as will be explained in section 3.3.1.1. Modelling road network interruption in case of earthquakes makes it possible to assess the probability of road link interruption when a seismic event of assigned intensity hits the area. The RNV model could be adapted for use in another hazard domain; for example, it can be used within the forest fire hazard domain, considering the probable interruption of road tracts due to the presence of smoke or fire. Road link interruption depending on smoke is discussed in section 3.3.2.

3.3.1. Road link interruption modeling in case of earthquakes

The assessment of road network reliability following natural disasters is a complex issue that involves several physical and functional factors (Esposito et al., 2012). Generally, in the field of seismic risk analysis, road network performance can be assessed at a level of detail based on the importance of the role played by the network itself. In particular, available studies can be assigned to the following three levels (Pinto et al., 2011):

Level I. Connectivity analysis, where the attention is focused on the functioning of the network in terms of pure connectivity. The service provided by the network after the natural disaster is evaluated, which may be of interest in identifying portions of the network that are critical with respect to the continued connectivity of the network.

Level II. Flow analysis, where the scope of the study is widened to include consideration of the network capacity to accommodate traffic flows. The damage to the network causes traffic congestion, resulting in increased travel time, which is in turn translated into monetary terms.

Level III. Full systemic approach, which aims at obtaining a realistic estimate of total loss, inclusive of direct physical damage to the built environment (residential and industrial buildings as well as network components), loss due to reduced activity in the economic sectors (industry, services), and network-related loss (increased travel time). Economic interdependencies are accounted for, such as the reduction in demand and supply of commodities (due to damaged factories, etc.), hence in the demand for travel, and due to the increased travel costs.


As described in Pinto et al. (2011), these three levels of analysis correspond to three possible levels of complexity in the assessment of the reliability of a road network, based on the importance of the role played by the network itself. At the lower end of the scale, meaningful only for transportation networks under emergency conditions, is the pure connectivity approach, whereby flow equations are not considered and the simple question of whether a residual connection exists between any two points of interest is answered. Higher-level approaches consider the actual traffic flows and attempt to measure the indirect economic loss associated with travel time increases due to congestion on the damaged network. The third level is the most comprehensive approach, representing a fully systemic study of the network, which requires important input from the economic disciplines.

According to the above classification, here we consider a "level I" model, aiming at assessing the connectivity capacity of a road network after a disruptive event. In particular, in the CRISMA pilot context, the model is based on the assumption that the vulnerability of road networks is strictly connected with building collapses (see also Goretti and Sarli, 2006); the model is consequently applied to roadways in urban areas. Indeed, it is reasonable to assume that the probability of interruption of a road is highly correlated with the seismic vulnerability of the buildings along it. Accordingly, the model allows a preliminary estimation of the probability of road "failure", in the sense that the generic link can no longer be used (e.g. for escape/evacuation purposes). The model output consists of a vector layer of the road links with the associated estimated probability of interruption, plus other useful information about the vulnerability distribution of the buildings along the link and the probable damage distribution along the link due to the selected earthquake scenario. Figure 25 shows an example of the tabular output of the model: apart from the identification cell (osm_id is the id of the analysed road link), the subsequent five columns give the number of buildings along the link belonging to each vulnerability class (a to d) and the total number of buildings (tot_ed); ndi (i = 1...5) give the percentage of buildings damaged to level i, nlost the percentage of "lost" buildings (falling within damage levels D4 and D5), and pfail the probability of failure.

osm_id           a   b   c   d   tot_ed   nd1    nd2    nd3    nd4   nd5   nlost   pfail
1372 237237201   7   3   4   3   17       5.46   4.09   2.18   0.7   0.1   0.79    0.55

Figure 25. Sample of model output, data associated with a specific road link.

The model calculation procedure is fully automated and is implemented as a PostgreSQL stored procedure. It is written in PL/pgSQL and uses the PostGIS geospatial extension for geospatial analysis of the buildings and roads layers. The model then offers a WPS interface to enable access and integration within the CRISMA framework (more details on the model interface are provided in Appendix B). As will be clarified next, the RNV model needs as input the seismic vulnerability classification of each building along the selected road links, so that a building impact evaluation can be performed and the link probability of interruption, due to partial and total collapses of nearby buildings, can be evaluated. However, this kind of information is not usually available unless a dedicated survey is performed. Therefore, the new version of the RNV model developed in CRISMA was enriched with an additional feature that allows the automatic assignment of buildings to the relevant vulnerability classes.


The road link inventory actually tested for use in the RNV model is built on free, open-source road data available from OpenStreetMap (OSM), a collaborative project to create a free and publicly editable map of the world. However, better, more detailed and exhaustive proprietary data can be acquired from commercial sources on a case-by-case basis. The minimum units of analysis (MU) for road network vulnerability are road segments (or links) that are formed by one or more "selected road segments" and are comprised between two consecutive nodes. The initial selection of the nodes and branches to be studied has to be performed by emergency planning experts. In that way, a road network grid can be built for the purpose of analysing and planning the evacuation of people or the intervention of civil protection resources on site. The grid can also be used to identify safety areas or hit places that are reachable directly or via alternate routes. Given the road link inventory, i.e. having defined the nodes and branches to be studied, the RNV model estimates the probability of interruption of each road link. Concerning seismic vulnerability, each single building along the road links needs to be classified. There are two limitations on the data normally available: (a) geospatial data representing "built element" features in GIS are often available only in aggregated form, i.e. comprising more than one single building, which leads to the processing of aggregates with uniform characteristics; (b) data on the seismic vulnerability classification of each building (or aggregate of buildings) derived from statistical census are mostly available only at the grid level (the MU of analysis of the building impact model) and not at a building-by-building level. Therefore, grid-based data need to be processed in order to obtain a suitable attribution of seismic vulnerability classes to the buildings facing each road link; to this end, a weighted random value selection is used, based on the distribution of seismic vulnerability classes in the grid cell to which the buildings belong, as explained in 3.3.1.1.
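As a hedged illustration of this weighted random selection (the class shares and the helper function below are hypothetical, not the actual PL/pgSQL procedure), each building facing a link can be assigned a class by a draw weighted with the class distribution of its grid cell:

import random

# Illustrative shares of vulnerability classes A-D in one grid cell
CELL_CLASS_SHARES = {"A": 0.40, "B": 0.20, "C": 0.25, "D": 0.15}

def assign_classes(building_ids, shares, seed=42):
    """Draw a vulnerability class for each building, weighted by the
    class distribution of the grid cell the buildings belong to."""
    rng = random.Random(seed)
    classes = list(shares)
    weights = [shares[c] for c in classes]
    return {b: rng.choices(classes, weights=weights)[0] for b in building_ids}

print(assign_classes(["bld_1", "bld_2", "bld_3"], CELL_CLASS_SHARES))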

3.3.1.1. Specification of the RNV simulation tool

As explained before, the minimum units of analysis (MU) for road network vulnerability, selected by emergency planning experts, are road segments comprised between two consecutive nodes (see Figure 26). Each road link is characterized by a unique code (the ID of the link), and the assembly of all selected links constitutes the road link inventory.



Figure 26. Definition of a road link.

The basic data needed for using the model are of two distinct types: static input data (data slots) that describe the inventory (building inventory and road link inventory); and variable input data, namely the intensity distribution and the buffer dimension d. The latter parameter defines the influence area considered on each side of a road link in order to drive the selection of the buildings that can have a potential impact on the road link itself, as described next. With the road link inventory and the building inventory available, the RNV model computes, for each link, the number of buildings belonging to each seismic vulnerability class that are built along the link (the vulnerability class distribution along the link). In order to evaluate the building class distribution along each link, a selection buffer is built around the links (shown in dotted light blue in Figure 27), based on the dimension d (an input parameter of the model): only the buildings within the buffer distance are selected as those that could potentially affect the link (i.e. possibly cause a road link interruption). The width d of the buffer can be chosen by the user.


Figure 27. Selection buffer around road links.

After calculating the selection buffer zone around the roads, the model selects the buildings that are within, or intersect, this buffer zone (Figure 28).

Figure 28. Buildings selection by link buffer intersection.

Some assumptions are made to simplify the model elaboration: each building is correlated to only one road link segment, even though in several cases the same building, or parts of it, intersects or lies within buffer zones belonging to different road segments.


For this purpose, a weighting factor is introduced for each building with reference to each buffer segment:

(2)

where Areaint is the intersection area of the building with the buffer segment, and d is the minimum distance of the building from the road segment (this distance can be null if the building itself overlaps the road segment, which sometimes happens because of small-scale inaccuracies of the road and/or building maps). The weight increases with increasing intersection area and decreasing distance to the road. Each selected building is then related to the road link for which this weighting factor is maximum (Figure 29).


Figure 29. Evaluation of building correlation to only one link.
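The geometric chain just described (buffer construction, selection of intersecting buildings, and assignment of each building to the link with the maximum weighting factor) can be sketched with Shapely as below. The link and building geometries are invented, and the weighting function is only an assumption of the qualitative form stated above (growing with intersection area, shrinking with distance to the road); the exact expression (2) is not reproduced here.

from shapely.geometry import LineString, Polygon

links = {
    "link_1": LineString([(0, 0), (50, 0)]),
    "link_2": LineString([(0, 20), (50, 20)]),
}
buildings = {"bld_1": Polygon([(10, 5), (18, 5), (18, 12), (10, 12)])}
d = 10.0  # buffer width, model input parameter

buffers = {lid: geom.buffer(d) for lid, geom in links.items()}

def weight(building, link, buf):
    """Assumed weight: grows with intersection area, shrinks with distance.
    The deliverable's expression (2) may differ in detail."""
    area = building.intersection(buf).area
    dist = building.distance(link)          # 0 if the building overlaps the road
    return area / (1.0 + dist)

for bid, bld in buildings.items():
    candidates = {lid: weight(bld, links[lid], buf)
                  for lid, buf in buffers.items() if bld.intersects(buf)}
    best = max(candidates, key=candidates.get)
    print(bid, "->", best, candidates)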

In order to evaluate the probable building damage distribution along the link, the seismic intensity has to be available as input for each MU (which, as noted before, coincides with the road links themselves). Given the seismic intensity, and after application of the seismic vulnerability model at the level of each road link, the expected number of buildings affected by partial or total collapse, NC, along each link can be computed. Note that the seismic impact evaluation in the framework of the RNV is embedded in the newly developed model and does not require invoking the building impact model as an external tool. Then, the probability of link interruption Pi can be calculated with the following expression, which is based on the assumption that the number of blockages along each road follows a Poisson distribution (Goretti and Sarli, 2006):


Pi = 1 − e^(−NC)    (3)

Given the earthquake intensity as input, the application of the seismic building vulnerability model allows the assessment of NC for each link and, using the above equation, the computation of the road link interruption probability (see Figure 30 for an illustration of the links on the map and Figure 31 for the table results).

Figure 30. Map of the probability of link interruption.

Figure 31. Table results.
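Equation (3) is straightforward to evaluate. The sketch below reproduces the sample link of Figure 25 under the assumption, which is ours, that the expected number of blocking collapses NC for that link equals the tabulated nlost value of 0.79; the assumption is made only because 1 − e^(−0.79) ≈ 0.55 matches the tabulated pfail.

import math

def interruption_probability(n_c: float) -> float:
    """Probability of road link interruption, eq. (3): blockages along the
    link are assumed to follow a Poisson distribution (Goretti and Sarli, 2006)."""
    return 1.0 - math.exp(-n_c)

print(round(interruption_probability(0.79), 2))  # -> 0.55, the pfail in Figure 25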


3.3.1.2. Model logic scheme

Figure 32 shows the simplified logic scheme of the model just described. Having chosen the area of interest, the road links to be assessed must be assigned (inventory: road links), as well as the inventory of the buildings in the study area.

Figure 32. Simplified model logic scheme.

The road link inventory and the building inventory are combined in order to build the inventory containing the number of buildings belonging to the relevant vulnerability classes for each road link. To this end, a suitable automatic procedure was implemented, as described in the previous section. Next, given the earthquake intensity distribution in the study area (and therefore for each link), the application evaluates the impact on the buildings alongside the links to determine the probable building damage and the vulnerability of the links in terms of probability of link interruption.


Figure 33. RNV model flowchart.

A detailed model flow chart is shown in Figure 33: process paths are shown in black, while I/O data are shown with dotted blue lines. Data shown in purple represent static features that do not change during a model run. Figure 34 shows the interaction of the RNV model with the input parameters (in orange) and the static input data slots (required initial data slots, in light yellow), as well as with the output data (in cyan).


Figure 34. RNV model, input parameters and data slots interaction.

In particular, the model requires the road link inventory as input. Each road link is characterized by a unique code (the ID of the link) for identification and data linkage. In order to determine the number of buildings belonging to each seismic vulnerability class that are present along the link (the building vulnerability class distribution along the link), the seismic building inventory at the grid level needs to be available. The latter, as explained before, is suitably processed in order to evaluate the building vulnerability class distribution along the link; to this end a further input parameter is required (the width d of the buffer representing the influence area on each side of the generic link). Given the seismic intensity (the model uses an evenly distributed seismic intensity over the whole study zone) and the Damage Probability Matrix data (another static input), the damage distribution for the building vulnerability classes along each link can be computed. A more detailed description (in terms of SQL Data Definition Language (DDL) listings) of the relevant data slots may be found in Appendix B, where the architecture of the RNV model is also described.

3.3.1.3. Example application of the RNV model

As an example of the application of the RNV model to the study zone of L'Aquila, a selection of the road links extracted from OpenStreetMap data is used as the road link inventory (Figure 35). The parameters used are an input earthquake intensity of VIII and a selection buffer of 10 m alongside the road links.


Figure 35. Selected study zone and roads.

The execution of the RNV model produces as a first result the evaluation of the buffer zone around the links (Figure 36) and the selection of the nearby buildings, shown classified by earthquake vulnerability class (red to green) in Figure 37. The result in terms of probable building damage distribution (number of estimated damaged buildings for each damage level along each link) and probability of interruption is mapped in Figure 38, which shows the links colour-classified according to their probability of interruption, and given in table format in Figure 39.


Figure 36. Buffer zones around road links for building selection.

Figure 37. Buildings selected around the links, classified according to vulnerability classes.


Figure 38. Probability of interruption of the links, color-coded.

Figure 39. Link feature values table showing the results of the RNV model.

3.3.2. Road link interruption depending on smoke

Besides the well-known effects of burning associated with a forest fire, the impacts caused by smoke are also of great importance. On the one hand, forest fire smoke can lead to health problems, especially for the most susceptible people, such as those with breathing difficulties, babies, and elderly people. If a village becomes immersed in smoke, staying in the village may become impossible due to potential intoxication. On the other hand, forest fire smoke can impair the use of certain facilities through potential intoxication, as previously mentioned, or by reducing visibility. The following detailed example addresses the time-dependent vulnerability of a road or a road network to forest fire smoke.


Circulation on a road or a road network is clearly affected by forest fire smoke, as visibility is reduced in situations of higher smoke concentrations. Consequently, traffic is slowed down or, during extreme concentrations, even made impossible. In this case, vulnerability is not associated with a specific damage but rather with a temporary loss of functionality. In conclusion, during a forest fire the potential use of a road depends on the smoke concentration, which changes spatially and temporally according to the smoke released and to the dispersion conditions. To better explain road link interruption due to smoke, an example derived from CRISMA Pilot D follows. In this example an earthquake triggers a forest fire threatening the population of the village of Castel del Monte, and an evacuation scenario is considered. The plan for evacuation must take into account the roads available after the earthquake, the spread of the fire and the reduction of visibility due to smoke. Here, only the matters related to the reduction of visibility are elaborated on. The time-dependent vulnerability of a road due to forest fire smoke depends on several factors (Figure 40), namely:

(1a) the release of smoke, mainly according to the fuel type, fuel cover, fuel properties (e.g. fuel moisture content or curing rate) and fire intensity;
(1b) smoke dispersion, which is a function of the smoke released, the meteorological conditions (atmospheric stability, etc.) and the wind fields (meteorological and convective);
(2) visibility, which is a function of the optical density and concentration of smoke;
(3) the maximum possible traffic speed, depending on the type of road and on the visibility.

Smoke release and dispersion (FireStation) → visibility distance (optical density method) → maximum traffic speed

Figure 40. Steps to determine the loss or reduction of functionality of a road due to smoke.

The fire behaviour prediction model FireStation incorporates a sub-model to determine the release and dispersion of smoke during a forest fire. This sub-model mainly takes into account the fuel availability in the burning area, the fire intensity and the wind field. The fuel availability is given by a fuel map, which is required as input for FireStation. The fuel moisture content is another input that can be provided by the user if the pre-existing default values are not appropriate. The fire intensity is provided by the fire behaviour prediction model and is mainly a function of topography, fuel cover and wind field. The wind field is determined by the Canyon model, which is also integrated in FireStation and takes into account the convective flows and the meteorological wind. Each smoke component represents a passive scalar with no influence on the airflow. The presence of passive scalars is quantified by their concentration sc, which represents the mass of scalar per mass of fluid. The transport of passive scalars is governed by the following partial differential equation:


∂(ρ u sc)/∂x + ∂(ρ v sc)/∂y + ∂(ρ w sc)/∂z = ∂/∂x (Γsc ∂sc/∂x) + ∂/∂y (Γsc ∂sc/∂y) + ∂/∂z (Γsc ∂sc/∂z) + Ssc    (4)

where ρ is the air density, u, v and w are the air velocity components in the x, y, z coordinate system, Ssc is the volumetric source term (mass of scalar generated per unit fluid volume per unit time) and the diffusion coefficient Γsc is given by:

Γsc = ρ Dsc + μt / Sct    (5)

In the previous equation, Dsc [m²/s] is the kinematic diffusivity for the scalar, μt is the turbulence viscosity, computed by the turbulence model, and Sct is the turbulence Schmidt number. For each smoke component, the emission per mass of burned fuel [g/kg], Dsc and Sct are specified as input data. Figure 41a shows a simulation of the smoke concentration due to the forest fire considered for CRISMA Pilot D, as previously mentioned. To simplify the illustration, only the concentration of PM2.5 is presented, and the smoke concentration is assumed to be equal to the concentration of this parameter. A simulation of the sequential smoke concentration can be found in Appendix C. Figure 41b shows the distribution of the fire intensity for the same event; it is included because visibility can be determined directly from the fire intensity.


Figure 41. Simulations by FireStation Software: a) PM2.5 concentration at 2.0m after 7 hours of forest fire; b) Linear fire intensity distribution.
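For intuition only, the sketch below integrates a transient one-dimensional analogue of equation (4) (constant density and wind, explicit finite differences); it is not the FireStation solver, and all numerical values are invented placeholders.

import numpy as np

# 1-D analogue: ds/dt + u ds/dx = D d2s/dx2 + src (constant rho and u)
nx, dx, dt = 200, 5.0, 0.5            # cells, cell size [m], time step [s]
u, D = 2.0, 1.0                       # wind speed [m/s], diffusivity [m2/s]
s = np.zeros(nx)                      # scalar (smoke) concentration
src = np.zeros(nx)
src[20] = 1e-3                        # steady source at the fire front

for _ in range(600):                  # march 300 s forward in time
    adv = -u * (s - np.roll(s, 1)) / dx                         # upwind advection
    dif = D * (np.roll(s, -1) - 2 * s + np.roll(s, 1)) / dx**2  # diffusion
    s = s + dt * (adv + dif + src)
    s[0] = s[-1] = 0.0                # crude open boundaries

print("peak concentration", s.max(), "at cell", int(s.argmax()))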

As can be seen in Figure 41a, the PM2.5 concentration changes spatially and temporally, implying a corresponding variation in visibility. Visibility during forest fires depends on many factors, ranging from plume-related factors such as particle mass concentration and the scattering and absorption coefficients of the smoke particles, to environmental factors such as time of day, sun position, sky colour, other light sources, etc. Visibility also depends on the individual's position, visual acuity and whether the eyes are "dark-" or "light-adapted" (observer sensitivity).


The visibility on a road immersed in forest fire smoke, with a concentration calculated as previously described, can be determined by the optical density method (Milke & James, 2000; Mulholland, 1995a). The visibility distance S at which an object is just visible can be obtained from the following equation:

S = (3.92 × ṁ × χΔH) / (2.303 × Dm × Q × ρair)    (6)

where ṁ is the mass exhaust rate capacity [g/s], Q is the heat release rate [kW] and ρair is the air density [g/m³]. The term Dm is the visibility fuel parameter, obtained empirically in a wide range of studies for different fuels and burning modes (Tewarson, 1995; Mulholland, 1995b). In some references, the product χΔH is referred to as the effective or chemical heat of combustion. Figure 42 shows the variation of visibility with the mass exhaust rate and the heat release rate of the fire for three different fuel types: Pine, Red Oak and Douglas Fir.

Figure 42. Visibility variation with mass exhaust rate (a) and heat release rate of fire (b) for Pine, Red Oak and Douglas Fir.

To perform an evacuation operation it is first necessary to survey the available routes and, after defining a route for evacuation, to determine the times needed for the several phases of the evacuation. The first phases relate to the alert, the preparation of people and the boarding of the transport vehicles. The trip time from the place to be evacuated to the concentration zone depends on several factors such as the type of road, the dimension of the evacuation column, etc. This time is defined by the estimated speed at which vehicles can be driven along the defined route. In evacuation operations due to forest fire, a very important parameter for defining the speed of circulation is visibility. The allowed speed of circulation must let the driver perceive an obstacle on the road and perform an emergency brake, immobilizing the vehicle safely before hitting the obstacle; in other words, the stopping distance should be smaller than the visibility distance. In parallel, there is a speed limit due to other factors not related to the smoke, and the allowed speed of circulation must be the lower of these two maximum traffic speed values. The maximum allowed traffic speed is found from the sum of the reaction time tr after perception of the obstacle, the actuation time ta and the active braking time tw, which comprises the time for the hydraulic braking system to work (the pressure build-up time ts) and the effective time to stop after the wheels are immobilized (Figure 43).


Figure 43. Deceleration (a) and stopping distance (b) graphs (Limpert, 1992).

In this kind of event, an extension of the normal reaction time due to driver fatigue, panic, etc. is normally considered; this means that a combined reaction and actuation time of about 2 seconds must be assumed. The latest version of the European legislation 2006/96/CE states that the minimum vehicle deceleration should be 4 m/s². However, this value is obtained during tests under controlled conditions and on good tarmac roads. During forest fires, some evacuation routes may not be in such good condition, and therefore a safety factor of 2 should be adopted. The following equation gives the maximum traffic speed V [m/s] as a function of the visibility distance S [m] impaired by the forest fire smoke, considering a reaction and actuation time of 2 s, a pressure build-up time of 0.5 s and a deceleration of 2 m/s²:

V = 2 × √(5.0625 + S) − 4.5    (7)

Figure 44 illustrates the maximum speed recommended according to the visibility conditions, considering the last equation.

Figure 44. Maximum traffic speed according to the visibility distance.
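Equation (7) follows from equating stopping distance and visibility distance: in our reading of the derivation, 2 s of reaction and actuation at full speed plus half of the 0.5 s pressure build-up time give a pre-braking distance of 2.25·V, and the braking phase at 2 m/s² adds V²/4, so that solving 0.25·V² + 2.25·V = S for V yields the expression above. A sketch:

import math

def max_speed(visibility_m: float) -> float:
    """Maximum traffic speed [m/s] from eq. (7): the stopping distance
    (2 s reaction + actuation, 0.5 s pressure build-up, 2 m/s2 deceleration)
    must not exceed the visibility distance S [m]."""
    return 2.0 * math.sqrt(5.0625 + visibility_m) - 4.5

for s in (10, 25, 50, 100):
    v = max_speed(s)
    print(f"S = {s:3d} m -> V = {v:4.1f} m/s ({v * 3.6:3.0f} km/h)")

The operative speed on a link is then the lower of this value and the limit imposed by non-smoke factors, as discussed next.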

The loss of functionality of a road due to a forest fire clearly depends on the fire intensity and the concentration of smoke on the road. Loss of functionality is understood here as the reduction of the conditions for circulation, translated into a reduction of traffic speed. The dynamic character of a forest fire makes this limitation temporally and geographically variable.


The reduction of traffic speed on a certain road during a forest fire is the lower of two values: 1) the maximum traffic speed due to factors other than those related to smoke, and 2) the maximum traffic speed determined considering the impairment of visibility due to the forest fire smoke.


4. Social vulnerability

4.1. Introduction

Following up on the work done in the first stage of WP43, which was documented internally in D43.1, this chapter provides a conceptual overview of aspects of modeling (1) population exposure, (2) casualties and human impact, and (3) evacuation times, under particular consideration of the spatial and temporal issues outlined in the introductory section (1.3). Further details in terms of implementation issues and aspects of the actual model logic were elaborated during the second stage of WP43, when example applications were also set up and tested.

4.2. Human Exposure Modelling – the DynaPop Approach⁴

4.2.1. Main Concepts and Scientific Background

In the context of proactive disaster risk management as well as immediate situational crisis and emergency management, and particularly for exposure and impact assessments, the quality of the available models and corresponding input data, in terms of spatial, temporal, and thematic accuracy and reliability, is among the most important factors for risk mitigation and disaster impact minimization. With regard to information on population patterns, census data, usually available in inhomogeneous spatial reference units (e.g. census tracts), are commonly considered the standard information input, e.g. for assessing potentially affected people in case of an emergency. However, as has been increasingly pointed out over the last decade or so, there is a strong demand for population data that are independent of enumeration and administrative areas. Raster representations can meet this demand but are not available at local scale for many parts of the world with sufficient spatial or thematic consistency. Re-allocating population counts from administrative areas (aggregated for privacy reasons after initial address-based compilation) to a regular grid requires areal interpolation methods such as dasymetric mapping. This technique uses ancillary data to disaggregate spatially heterogeneous population information, at a finer resolution, to the areas where the population is effectively present (Aubrecht et al., 2009). Land Use/Land Cover (LULC) maps are often used as a basis for the disaggregation process in that regard (Aubrecht et al., 2013; Eicher and Brewer, 2001; Mennis and Hultgren, 2006; Langford, 2007). As mentioned earlier, the spatial distribution of population in general, and in a disaster risk and crisis perspective its exposure to hazards, is strongly time-dependent. Due to human activities and mobility, both the distribution and the density of population vary greatly over the daily cycle (Freire, 2010), particularly in metropolitan areas (Figure 45). Therefore, a more accurate assessment of population exposure requires going beyond residence-based census data (which merely represent a nighttime situation) in order to be prepared for events that can occur at any time and day (Freire and Aubrecht, 2012).

⁴ Parts of this section are based on Aubrecht (2013), Aubrecht et al. (2014a,b,c), and Steinnocher et al. (2014).


Figure 45. Daytime vs. nighttime population distribution information for Lower Manhattan in New York City, disaggregated to a 90 m grid (LandScan USA dataset).

In an attempt to address these issues of population dynamics, the recently developed LandScan USA product represented a major improvement over previous static modeling methods. Following a multi-dimensional dasymetric modeling approach, LandScan USA contains both nighttime residential and daytime population distribution information, incorporating the movement of workers and students and resulting in a 90 m resolution output grid (Bhaduri et al., 2007). The data are, however, not openly accessible to the public or the scientific community owing to the product's formally classified initiation through the U.S. Department of Homeland Security. Aiming at refining established population models that represent 'static' residential distribution patterns, currently existing population dynamics models can be broadly categorized into (1) approaches applying a simplified binary distinction between daytime and nighttime (e.g. Freire and Aubrecht, 2012), and (2) multiple time-slice approaches that try to account for the continuous variation in human activities occurring particularly during the daytime period (e.g. Leung et al., 2010). While the first category mainly refers to commuting and work statistics to identify basic daytime patterns, the second category additionally considers statistics derived from time use surveys (TUS) showing more refined activity patterns and their evolution during the day. In fact, there is a third category of models that follows an entirely different approach, in the sense that near-real-time distribution patterns are analyzed using cell phone data logs (Loibl and Peters-Anders, 2012). Due to inherent privacy constraints and restricted data availability, the latter approach is still not considered feasible for wide-scale implementation. In this deliverable we present sample results and ongoing developments within the CRISMA framework regarding the conceptual setup and subsequent implementation logic of a seamless spatio-temporal population dynamics model – DynaPop (Aubrecht et al., 2014a,b) – which aims at serving as a basic input for social impact evaluation in crisis management.


To introduce the concept, the generalized model approach (mainly accounting for basic working and commuting patterns) is described first, and the model logic scheme is outlined in a separate subsection. Developments in CRISMA are, however, already further advanced. Current work in progress deals with the integration of comprehensive time use statistics, such as the European-level HETUS (Harmonized European Time Use Survey) database, to further enhance the DynaPop model. The test site (Baden, Austria) for which a sample implementation has been carried out is presented in subchapter 4.2.4. Looking further in terms of CRISMA pilot-specific implementations, for the Italian pilot (D), for example, time use survey data (ISTAT data collected at the national level) are available, as well as data on touristic flows (with the option to calculate a 'touristic index', referring to the ratio of touristic arrivals divided by the resident population), and integration options will be tested exemplarily. Figure 46 shows a conceptual flowchart of how such a population dynamics model can be structured and operate, as developed by Aubrecht (2013). The above-mentioned approach of applying time use statistics is individually highlighted. Time usage is thereby integrated with mobility information in order to differentiate dynamic patterns, particularly during daytime, as well as, on another temporal level, accounting for seasonal variation (e.g. in terms of tourist influence).

Figure 46. Conceptual framework for dynamic population modeling.

While activity-specific time profiles as derived from classical time use surveys have been applied in the mentioned multiple time-slice approaches to map population dynamics, it is a more recent development to analyze additional sources representing such human activity patterns. Due to the rapid increase in the volume and spatial density of volunteered geographic information (VGI) in recent years, there are now first attempts to use empirically derived 'facility occupancy curves' from location-based social network (LBSN) data such as Twitter for characterizing local-scale population dynamics (Bhaduri et al., 2014).


The idea of characterizing spatio-temporal patterns of collective user activity is not entirely new, but earlier studies often struggled with limited data densities, particularly outside the 'densely monitored regions' that are foremost in the US (Rösler and Liebig, 2013), and consequently with issues of representativeness. It has therefore proved beneficial to aggregate data tied to venue, and thus activity, categories (e.g. Noulas et al., 2011), which provides an excellent basis for linking to target zones in population disaggregation models. Within the CRISMA DynaPop development context, we are currently also analyzing the feasibility of VGI integration into the model, specifically by evaluating Foursquare user check-in data (Aubrecht et al., 2014c). Several aspects of the Foursquare data are useful for mapping population dynamics. First, relative venue category-specific time use profiles (or occupancy curves) could theoretically be applied directly in the course of the disaggregation process to account for spatio-temporal variations in human activity. Due to the clearly still existing issue of VGI representativeness (despite the overall increase in usage), another interesting option is to apply the Foursquare activity profiles to calibrate existing survey-based time use statistics. This currently seems the most likely feature to be implemented in the short term in CRISMA DynaPop. One factor that quickly becomes obvious in that regard is the characterization and representation of 'dinner time'. While classical survey-based TUS show the relative number of a population sample having dinner at a certain time, there is no indication of the spatial location. While on working days lunch is commonly taken in restaurants (near the workplace), many people prefer to have dinner at home. The Foursquare venue category-based TUS clearly show that distinction and therefore represent a significantly improved basis for the population disaggregation. Due to the inherent nature of VGI data, only relative numbers of Foursquare user activities are considered directly applicable to population dynamics mapping. It therefore seems most feasible to focus on the 'mobile' population, i.e. mainly people who are out for work and study, rather easily identifiable via commuting, work, and education statistics. In that context it is crucial to account for study area-specific characteristics in terms of regular working hours, e.g. in Portugal a 2-h lunch break between around noon and 2 pm. That kind of information can also help to understand and potentially calibrate Foursquare data-inherent temporal shifts and uncertainties that are due to irregular check-in dynamics (e.g. a user may check in at the lunch place when actually already leaving, and not check in back at work afterwards). Another promising parameter of the Foursquare venue data is the absolute number of check-ins. We are currently testing in the DynaPop test site (see chapter 4.2.4) whether those absolute patterns could provide an indication of the relative 'importance' of a venue within a certain activity category, thus potentially approximating the housing density parameter that commonly serves as a proxy for (residential) population density in dasymetric mapping approaches.

4.2.2. DynaPop introduction

In the following, the basic defining conceptual elements of the DynaPop model are presented, facilitating a general implementation of population disaggregation processes. The concept is based on the assumption that population data aggregated to a region can be redistributed within the region by means of local parameters. These local parameters are usually represented by information on land use (residential housing densities, commercial areas, transport lines, etc.). Depending on the level of detail of these proxy parameters, the actual spatial distribution of population can be estimated with greater or lesser accuracy.


The most straightforward approach to population disaggregation is the estimation of refined residential population distribution patterns. Population data available per administrative unit (census tract, municipality, etc.) are disaggregated to actual settlement areas, e.g. as identified in remote sensing imagery. This approach is based on the assumption that settlement density and, in a more refined perspective, housing density are correlated with population density. The resulting population data set still represents the nighttime population (Steinnocher et al., 2011). In order to account for population dynamics and estimate the daytime population, the model needs to be conceptually extended. First, the total population per administrative 'input' unit may change over the day due to people commuting into and out of the area. The disaggregation approach therefore requires a time-of-day-dependent variable (temporal unit) in order to consider the diurnal variation of the total population to be redistributed (e.g., hourly time steps).


Figure 47. Diurnal variation of total population per administrative unit.

In addition to the varying total population numbers, implicit information on the dynamic changes of people's locations is required. Freire and Aubrecht (2012) presented a binary approach assuming that people are at their workplaces/schools during the day and at home during the night, thus coming up with a basic nighttime/daytime population distribution model. In a slightly modified approach, that binary work-home model can be extended to include the transition periods in the morning and evening in its output. In addition to work and home, commuting is therefore included as an additional model parameter:

Work (W): the percentage of people working at a certain time unit
Commuters (C): the percentage of people commuting at a certain time unit
Home (H): the remaining percentage of people

For reasons of simplification in this introductory section, the Home class (H) here also comprises people who are out for shopping, leisure, etc. This is further refined by using comprehensive time use statistics, as outlined above and shown in the example application (chapter 4.2.4), and as implemented, for example, in the UK Population 24/7 model (Martin et al., 2009, 2010).


Information on time usage, e.g. in terms of weekly and seasonal variations, also provides a basis for deriving more accurate input for casualty modeling (Zuccaro & Cacace, 2011), as elaborated later on (referring to the issues of occupancy rates on working days vs. weekends and holidays, as well as touristic influence). Figure 48 shows an example of the simplified temporal pattern of people's locations over a day, as outlined above. These basic patterns can be derived from statistical enquiries and do not necessarily differ per administrative unit.

Figure 48. Diurnal variation of locations of population.

The next step requires the combination of the total population per temporal unit with the percentages from the location patterns, resulting in the distribution of absolute population per location. Figure 49 shows the result for the examples from Figure 47 and Figure 48.


Figure 49. Diurnal variation of total population per locations.

For the actual distribution of population, each activity pattern requires a spatial representation, i.e. a target zone. For 'Home' the residential areas are a useful proxy, for 'Commute' transportation networks are appropriate references, and for 'Work' commercial, business and industrial areas can be used. While the mere spatial distribution of these proxy data may be sufficient for a basic disaggregation, density measures significantly improve the local estimates. For the residential areas, housing densities can be used as an estimate of population density at 'Home'. For the transportation networks, traffic counts on roads or passenger counts in trains and undergrounds can provide a basis for average density measures. The estimation of population densities for workplaces is generally more challenging due to the limited availability of consistent data. At a very large scale, workplaces per company can be used, assuming that the precise locations of companies are available and the list of companies is complete. The final step is a weighted distribution of the population per location (H, C, W), accounting for the corresponding target zones and density parameters (housing, transportation, workplaces). This step can be repeated for each temporal unit; the resulting temporal granularity or resolution thus depends on the temporal units defined in the input data.

4.2.3. DynaPop model logic scheme and input/output specifications

This subchapter provides some more details on the model logic, referring to the simplified spatio-temporal population distribution, and hence exposure representation, approach outlined in subchapter 4.2.2. The more advanced and sophisticated model version, including the integration of time use statistics and the related derivation of more seamless spatio-temporal patterns, is then presented exemplarily in chapter 4.2.4. Generally, the earlier-described basic simplified conceptual model to calculate time-dependent spatial population patterns consists of two steps:

(1) Calculate the total amount of population per class (W, C, H) being present in a given area at a specific point in time (temporal domain).


(2) Use invariant distribution grids to calculate (class-dependent) population densities in the respective area (spatial domain).

Processing the temporal domain requires two input parameters. The first is the absolute number of people present in a certain source zone (administrative unit) within a given time slot. The second refers to the relative activity ratios per location, which are also time-dependent. Multiplying both parameters returns an estimate of class size in terms of absolute numbers. Figure 50 illustrates this process by means of the '12:00 h' time slot.

Figure 50. Calculating absolute numbers for each class.

To proceed with the spatial disaggregation from source to target zones, weighted density grids for each class belonging to a specific administrative unit are combined with the total amount of people present in that spatial unit (see Figure 51). The density grid is a square raster, with each grid cell representing an area with a pre-defined side length between 100 and 500 m (according to input data granularity and required output resolution). Each cell holds a relative density value d, with Σ d = 100 % over all cells of the unit. The model can be enhanced by optionally offering different spatial formats for the density measures and target zones, such as the triangular meshes sometimes used in hydrological models (e.g. flooding, runoff). Spatial disaggregation is achieved by multiplying the absolute population numbers (in the given administrative source unit at a specific time slot) by the values of d for all cells. This leads to spatial depictions representing the total number of people present in each spatial entity of the grid at a specified point in time.


Figure 51. Density distribution for each class at a specific point in time.
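Both steps can be sketched compactly as below; the population total, the activity ratios and the 3×3 density grids are invented placeholders, with each class grid summing to 1 (i.e. 100 %).

import numpy as np

total_pop = 3500                    # people present in the unit at 12:00 h
ratios = {"work": 0.45, "commute": 0.05, "home": 0.50}  # activity shares, 12:00 h

# invariant 3x3 density grids per class; each grid sums to 1 (i.e. 100 %)
grids = {
    "work":    np.array([[0.0, 0.3, 0.3], [0.0, 0.2, 0.2], [0.0, 0.0, 0.0]]),
    "commute": np.array([[0.1, 0.1, 0.1], [0.1, 0.1, 0.1], [0.2, 0.1, 0.1]]),
    "home":    np.array([[0.2, 0.0, 0.0], [0.3, 0.0, 0.0], [0.4, 0.1, 0.0]]),
}

# step 1 (temporal domain): absolute class sizes at the chosen time slot
class_totals = {c: total_pop * r for c, r in ratios.items()}

# step 2 (spatial domain): disaggregate each class and sum over classes
population_grid = sum(class_totals[c] * grids[c] for c in grids)
print(population_grid)              # people per grid cell at 12:00 h
print(population_grid.sum())        # equals total_pop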

4.2.4. Test application

The DynaPop model has been tested for the region of Baden, a city south of Vienna. This test site was chosen because reliable data are available for the area, both in terms of population and land use. Figure 52 shows the location of the test site, which covers 16 municipalities, including the city of Baden.

Figure 52. Overview of Baden test site.


In order to model the population distribution over time, three types of data sets had to be collected or derived: population data, time use profiles and land use density grids. Population counts and the numbers of in- and out-commuters per municipality were collected from Statistik Austria. Figure 53 shows the nighttime (residential) and daytime (residential plus in- minus out-commuters) population mapped to the municipality areas. These data sets are the starting point of the modelling approach.

Figure 53. Night time (left) and day time (right) population per municipality for Baden test site.

Figure 54. Time use profiles for weekdays (left) and weekends (right).

The activities of the population over a day are represented by time use profiles (TUP). Surveys conducted by Statistik Austria provide indications of how the average Austrian citizen spends his or her day in terms of different activities. From these sources, time use profiles for weekdays and weekends were derived; Figure 54 shows the respective profiles. As can be seen from the TUPs, six categories were defined to describe the activities over the day: being at home, being at work, commuting, shopping, outdoor leisure activities and going out. While the first three categories are self-explanatory, the latter ones require explanation. Shopping refers to the daily shopping activities, but also includes shopping as a leisure activity in the afternoon. The outdoor activities include walking or biking as well as sports like playing golf or horseback riding. The event category comprises all activities that involve a visit to a restaurant, a pub, a bar or a club. Comparing the weekday with the weekend TUP, the different behaviour can be clearly seen. While the weekday is dominated by working and commuting, the weekend emphasizes leisure activities and events. Shopping at the weekend is eliminated, assuming the shops are closed on Sundays (which is the case in Austria).


Thus the weekend TUP represents Sundays and holidays rather than Saturdays/Sundays. In order to distribute the population per activity, reference grids for each category need to be generated. The spatial framework is a regular grid with 100 m grid cells; for each cell, the density of each category is mapped. The residential areas (home) and working areas (work) are derived from information products of the Copernicus Land Monitoring Services⁵. The high-resolution layer on imperviousness and Corine Land Cover data were used to map residential densities as well as industrial and commercial areas. In addition, geocoded information on companies was used to refine the workplace distribution. The transportation network (commuting) is derived from OpenStreetMap (OSM) data, including the high-level road network and railways. Figure 55 shows the density grids for these three categories.

Figure 55. Density grids for home, work and commuting.

The remaining categories (shopping, leisure and events) are more difficult to localise. They refer to particular locations that are usually not available in general land use layers. An alternative information source was therefore found in Foursquare data (see the description in 4.2.1). In order to assign the grid cells, appropriate Foursquare venues were identified and their locations used as target zones. Figure 56 shows the resulting reference grids.

⁵ http://land.copernicus.eu/pan-european


Figure 56. Density grids for shopping, leisure and events.

Figure 57. Population distribution at 9:00 am on a weekday.

Applying the above-described input data to the DynaPop model results in a spatio-temporal distribution of population per hour and grid cell for weekdays and weekends. Figure 57 gives an example, showing the population distribution at 9:00 am on a weekday. The difference between 9:00 am and 7:00 pm in the city of Baden is shown in Figure 58. While in the morning the population is concentrated in the central part of the city and the industrial areas in the south, in the evening the distribution is flatter within the residential area, with a single focus on the very centre with its restaurants and bars.


The same subset is shown in Figure 59, comparing the population distribution at 9:00 am on a weekday and at the weekend. Again, the industrial and commercial areas are almost empty at the weekend and the population is distributed over the residential areas, i.e. most people are at home on a Sunday morning.

Figure 58. Population distribution at 9:00 am (left) and 7:00 pm (right) for the city of Baden on a weekday.

Figure 59. Population distribution at 9:00 am on a weekday (left) and on a weekend (right) for the city of Baden.

4.2.5. DynaPop links to other models

Going beyond the use of DynaPop for the assessment of population exposure dynamics, knowledge of spatio-temporal population patterns is essential for a set of other related aspects in disaster risk and crisis management, including evacuation and general response planning as well as casualty assessment. Evacuation models, which are commonly either grid- or agent-based, have to consider both the physical and the social aspects of a study site in their setup. In that regard, in addition to situational aspects such as blocked roads and the general condition of the route network, population exposure information is one of the main input factors, as it provides the starting basis for getting people out of danger. Temporal aspects, including continuous updates on the speed of successful evacuation, are considered essential for decision makers in a crisis situation. Accurate assessment in that regard is facilitated by considering appropriate dynamic input information as provided by DynaPop. Casualty models eventually aim at estimating the number of actually affected people, thus being related to the initial starting basis of exposed population and accounting for the follow-up evacuation processes (possibly also accounting for first-impact casualties prior to evacuation). While population exposure models can be considered to a certain extent hazard-independent (population being exposed to any kind of hazard or stressor), evacuation models and particularly casualty assessments need to be closely linked to the respective hazard situation (particularly considering the speed of onset).


Casualty modeling in the case of earthquakes (i.e., rapid onset), for example, puts a strong focus on the location of people in a temporally seamless and spatially explicit manner. As earthquakes can strike without any prediction or warning, it is crucial to know whether people are inside or outside of buildings (occupancy ratios per building type) and where exactly they are within the affected area. In a further step this is then linked with physical aspects such as structural building safety (danger of collapse, etc.).

4.3. Casualty modeling

The estimation of human casualties related to natural disasters has become a topic of vital importance for national and urban authorities responsible for emergency provision, for the development of mitigation strategies and for the development of adequate insurance schemes. Often, casualties due to natural disasters are caused by damage to impacted objects (e.g. building collapse due to an earthquake, internal cooling of houses after a power outage caused by extreme weather events, flooding of houses), while in other cases the hazardous event poses a serious danger by itself (e.g. hurricanes, tsunamis, flash floods, etc.). In this section we discuss the main features of an earthquake casualty model as one example. In addition, the main features of a model for assessing potential casualties in houses that cool down during extreme weather phenomena are described. These models were already introduced in D43.1; they are recalled here to allow a complete description of the models involved in impact assessment that were mentioned in section 1.1.

4.3.1. Earthquake casualty model

The basic features of a hybrid earthquake casualty model developed by Zuccaro and Cacace (2011) are described here, while more details on the model logic may be found in the next sub-section. The model is directly derived from the original idea of Coburn and Spence (1992), adapted to the Italian context thanks to data collected in the field regarding the percentage of victims per structural type, as well as data on the lifestyle of the population obtained from the National Institute of Statistics (ISTAT, 2008). International statistics show that seismic casualties are mainly caused by structural failure: seventy-five percent of the total human losses are in fact attributed to structural causes, especially for strong earthquakes, where victims due to building collapse predominate. The human losses deriving from non-structural causes are relatively low; they are dominant for low levels of ground shaking and are very variable and difficult to foresee. Therefore the probability of injury or death of the building occupants is generally evaluated as a function of the damage level of the building, and it can be assumed that the ratios of injured and dead are significant only for damage levels D4 and D5 (the most severe ones). Consistent with these considerations, the earthquake casualty model is logically placed (as regards the overall CRISMA model workflow) after the seismic building vulnerability model, which allows calculation of the expected damage per building vulnerability class, and it also takes into account information on human exposure (as derived from the human exposure model). Figure 1 (section 1.1) shows the position of the casualty model in the logic flow of analysis of the seismic impact, while Figure 60 extracts the parts related to the damage distribution, the human exposure model and the casualty model.


Figure 60. The logic flow of analysis for the earthquake casualty model (damage distribution for buildings → human exposure → casualty model → number of injured/deaths).

Model logic scheme for earthquake casualty model

The earthquake casualty model is based on the evaluation of four fundamental parameters, in addition to the total population exposure on the site at the time the earthquake happens:

- Mean number of inhabitants by building type (vulnerability class)
- Occupancy rate by hour of the day and day of the week, by building type
- Touristic index by town and period of the year
- Casualty percentage by building type and damage level

In that regard the casualty model relies on results from the human exposure model in several respects. The second and third parameters listed above are essentially derived from the Human Exposure Model (spatio-temporal parameters of population distribution in the short term [diurnal cycle] and medium term [seasonal aspects]). When it comes to variations associated with different building types, in particular related to casualty ratios and damage levels, the casualty model needs to further build upon the outcome of the exposure model, as outlined later on. The numbers of deaths (Nd) and injured (Ni) are determined by expressions (8) and (9):

Nd = TIc · Σ(t=1..4) Σ(j=1..5) Nt,j · NOt · QDt,j    (8)

Ni = TIc · Σ(t=1..4) Σ(j=1..5) Nt,j · NOt · QIt,j    (9)


where:

- t = building type (t = 1, …, 4)
- j = damage level (j = 1, …, 5)
- Nt,j = number of buildings of type t having damage level j
- NOt = number of occupants (at the time of the event) by building type
- TIc = touristic index by city
- QDt,j = proportion of deaths by building type and damage level
- QIt,j = proportion of injured by building type and damage level

The casualty estimation is obtained as a proportion of the occupants of the building, according to damage level, classified by vertical building structure type: Reinforced Concrete (R.C.) or Masonry (see Table 34). These factors are calibrated on the basis of previous earthquake surveys; further development of the research will pursue the definition of casualties as a percentage of the occupancy of the EMS '98 vulnerability classes (A, B, C, D) (Grünthal, 1998). The "C" class includes both masonry and R.C. buildings because strong masonry and weak R.C. may exhibit an analogous seismic response.

Table 34. Casualty percentage by damage level and building type.

| Casualty percentage | D0 | D1 | D2 | D3 | D4   | D5   | Vertical structure | Vulnerability class |
| QD                  | 0  | 0  | 0  | 0  | 0.04 | 0.15 | Masonry            | A or B or C         |
| QD                  | 0  | 0  | 0  | 0  | 0.08 | 0.30 | R.C.               | C or D              |
| QI                  | 0  | 0  | 0  | 0  | 0.14 | 0.70 | Masonry            | A or B or C         |
| QI                  | 0  | 0  | 0  | 0  | 0.12 | 0.50 | R.C.               | C or D              |
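As a worked illustration of expressions (8) and (9), the following sketch evaluates Nd and Ni for a small invented building inventory using the QD/QI factors of Table 34; the building counts and occupancies are purely hypothetical:

```python
# QD/QI factors from Table 34, keyed by (structure type, damage level);
# damage levels D0-D3 contribute nothing and are omitted.
QD = {("Masonry", 4): 0.04, ("Masonry", 5): 0.15, ("RC", 4): 0.08, ("RC", 5): 0.30}
QI = {("Masonry", 4): 0.14, ("Masonry", 5): 0.70, ("RC", 4): 0.12, ("RC", 5): 0.50}

def casualties(n_buildings, occupants, ti_c=1.0):
    """n_buildings[(t, j)]: number of buildings of type t at damage level j (Nt,j);
    occupants[t]: occupants per building of type t at event time (NOt);
    ti_c: touristic index of the city (TIc)."""
    nd = ti_c * sum(n * occupants[t] * QD.get((t, j), 0.0)
                    for (t, j), n in n_buildings.items())
    ni = ti_c * sum(n * occupants[t] * QI.get((t, j), 0.0)
                    for (t, j), n in n_buildings.items())
    return nd, ni

# Hypothetical inventory: 10 masonry buildings at D4, 2 at D5, 5 RC buildings at D4.
inventory = {("Masonry", 4): 10, ("Masonry", 5): 2, ("RC", 4): 5}
print(casualties(inventory, {"Masonry": 6.0, "RC": 12.0}))
```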

As a general trend in casualty rates, it has been observed that buildings of the same volume show significant variations in the rate of injured or deaths, strongly dependent on the number of storeys; casualty rates are therefore higher for tower blocks than for buildings with a large footprint and few storeys. The 'number of occupants' is a parameter determined by the Human Exposure Model that may be influenced by short-, mid- and long-term variations. On the other hand, to obtain useful results for the casualty model, the human exposure assessment has to be performed with an extended feature set that allows, in addition to estimating the population in the diurnal cycle in each cell, evaluating its distribution among different building types and the associated varying structural vulnerability classes.

4.3.2. Human thermal model

A temperature decrease inside buildings may cause victims among the building occupants. In case of extreme cold weather, the usability of buildings depends mainly on the indoor temperature and the cooling speed of the buildings after heating cut-offs. Indeed, the health status of human beings inside buildings, which can be described in terms of human thermal comfort, depends on the cooling time and the absolute indoor temperature, but is not expressed only as a function of these technical features.
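The HTM introduced below couples a detailed physiological model to a building simulation. As a much simpler, purely illustrative sketch of the 'cooling speed after a heating cut-off' aspect, a lumped-capacitance model with an assumed time constant can be used; all parameter values here are invented and do not come from the HTM or the VTT House tool:

```python
import math

def indoor_temperature(t_hours, t_indoor0=21.0, t_outdoor=-20.0, tau_hours=60.0):
    """Exponential cooling towards the outdoor temperature after a heating
    cut-off: T(t) = T_out + (T_in0 - T_out) * exp(-t / tau). The time constant
    tau depends on the insulation level and thermal mass of the building."""
    return t_outdoor + (t_indoor0 - t_outdoor) * math.exp(-t_hours / tau_hours)

for t in (0, 12, 24, 48, 96):
    print(f"{t:3d} h: {indoor_temperature(t):5.1f} degC")
```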


Fanger's Predicted Mean Vote (PMV) method is traditionally used for estimating thermal sensation and comfort (Fanger, 1970). The PMV method is based on a heat balance model, also referred to as a "static" or "constancy" model. By assuming that the effects of the surrounding environment are explained only by the physics of heat and mass exchanges between the body and the environment, heat balance models do not take into account the human thermoregulatory system but view the human being as a passive recipient of thermal stimuli. According to previous research results, the PMV method progressively over-estimates the mean perceived warmth of warmer environments and the coolness of cooler environments; it is therefore valid for the prediction of thermal comfort only under severely restricted conditions. To accurately estimate thermal sensation and comfort in transient conditions, the calculation method should take into account the natural tendency of people to adapt to changing conditions in their environment by means of the human thermoregulation system. Human thermal models represent the human body from a thermo-kinetic point of view and have been used for modelling the thermoregulation system. Over the last hundred years numerous human thermal models have been developed, but their utilization rate has been low due to the complexity of the models and the difficult determination of calculation variables. VTT has developed the first approach in which a human thermal model is implemented in a building simulation environment: the Human Thermal Model (HTM). HTM can be used for predicting the thermal behaviour of the human body under both steady-state and transient indoor environment conditions. It is based on the true anatomy and physiology of the human body (see Figure 61). The connection with the building simulation environment enables defining the external boundary conditions, such as surface temperatures and radiation heat transfer, more accurately than with previously available human thermal models. The thermal sensation and thermal comfort estimation methodology presented by Zhang (2003) is integrated in HTM.

Figure 61. HTM structure.

HTM tissue heat transfer, thermal sensation and thermal comfort calculations have been successfully validated under various steady-state and transient indoor environment boundary conditions by comparing the simulation results to measurements made with real human beings (Holopainen, 2012). The thermal sensations simulated with the HTM method showed a better correlation with measured values than Fanger's PMV method. As a module of the VTT House building simulation tool, HTM can be used for estimating the effects of alternative building structures, as well as building service systems, on occupants under different conditions more accurately and easily than before.


This integrated method enables quantitative analysis of the significance of both external (structure insulation level, heating/cooling system) and internal (clothing, metabolism) boundary conditions on thermal sensation and comfort. The realistic thermal comfort of the user can be used as a design parameter for designing better thermal environments in new and renovated buildings. The thermal comfort indices describe how people feel in changing temperatures and can be used to predict their activities, for example in crisis situations. The scale of human comfort can be presented by two different indices. The first one is the thermal sensation index, presented in Table 35.

Table 35. Thermal sensation index.

| Index | Thermal sensation |
| 4     | very hot          |
| 3     | hot               |
| 2     | warm              |
| 1     | slightly warm     |
| 0     | neutral           |
| -1    | slightly cool     |
| -2    | cool              |
| -3    | cold              |
| -4    | very cold         |

However, this index does not take into account, for example, the effect of cold surfaces. Therefore VTT has developed another index, the thermal comfort index, which is presented in Table 36 together with some hypothetical activities.

Table 36. Thermal comfort index.

| Index | Thermal comfort    | Hypothetical activity                   |
| 4     | very comfortable   | People enjoy their circumstances        |
| 2     | comfortable        | Normal situation in massive houses      |
| +0    | just comfortable   | Normal situation in light houses        |
| -0    | just uncomfortable | People feel a little bit uncomfortable  |
| -2    | uncomfortable      | People start to look for help           |
| -4    | very uncomfortable | Lethal circumstances                    |

To demonstrate the potential of the HTM model, some simulations are described below. HTM simulations were made to examine the effect of a power cut-off on a resident in different single-house building types. The simulated single-house building is presented as an example in Figure 62; the HTM is located in the middle of the kitchen.


Figure 62. Simulated single house building (LVIS 2000).

One example of the simulated thermal comfort is presented in Figure 63. The individual characteristics (i.e. age, gender, BMI and fitness) have an impact on human thermal sensation (Tuomaala et al., 2013):

- An increase in age decreases thermal sensation values.
- Males have higher thermal sensation index values than females with corresponding BMI and Tone Index parameters.
- High individual fitness causes a significant increase in thermal sensation index values.

Figure 63. Simulated thermal comfort.


4.4. Evacuation modelling

There are many works describing evacuation modeling (e.g. Bhaduri et al., 2008; Freire et al., 2012; 2013; Post et al., 2009; Scheer et al., 2012; Tsai et al., 2013). Most of them are specific to a concrete situation and location (context-specific case studies). In the context of task T43.3 of CRISMA, a new grid-based evacuation simulation tool suitable for large-scale assessment was developed. The main features of this model, which is built using the NetLogo simulation tool, together with an example application, are described in section 4.4.2. Some of the evacuation models are proprietary and will not be used in the CRISMA project, but for one sample model (the 'Life Safety Model', LSM) a license was acquired to test the possible integration in the CRISMA tool and subsequent implementation in one Pilot. The main features of LSM, together with an example application, are described in section 4.4.3.

4.4.1. Basic concepts in evacuation modeling

To some extent, the specificity of the above-mentioned case studies is justified: we cannot have a single model that would describe the behaviour in all possible locations and situations. However, we can extract specific attributes that belong to the location, the type of hazard and the evacuation routines. This information provides a generic concept for evacuation modeling; refined and more specific models for particular locations and hazards can then be derived from that generic approach to achieve more exact results and simulations.

Locations and hazards

First of all, we should identify the attributes defining locations and types of hazards. By manipulating those attributes we can describe almost all combinations of locations and types of threats that can occur.

Table 37. Essential attributes to be considered for the setup of an evacuation model.

| Hazard                         | Location                        |
| Propagation/extent of a hazard | Time of day                     |
| Spread of threat               | Reaction/notification time      |
| Diffuse time                   | Diffuse time                    |
| Severity                       | Population density              |
|                                | Complexity of evacuation paths  |
|                                | Weather/season                  |

Location attributes

Properties related to the location describe the world and the situation where the threat has occurred. They serve as global constraints affecting the propagation of the information needed for the population to start evacuating. In a similar fashion, terrain and weather conditions need to be defined, as they can either simplify or complicate the evacuation process.


Time of day

During the day, information about the hazard spreads faster and reaches a larger number of the people that need to evacuate. At night, more people are at home and less active, which in turn reduces the possibility to evacuate quickly.

Reaction/notification time

The time that the information about the hazard needs to reach the population. For locations and types of hazards where warning and notification systems are available, this time will be shorter. For remote locations, the time needed to notify the potentially affected population of the threat will be significantly higher.

Diffuse time

The time, specific to a certain location, that it takes to diffuse the threat. In case of a tsunami threat in a big city, most of the impact will be absorbed by the buildings near the shore, affecting the survival chances of the population in these houses. In case of a tsunami threat in the countryside, the effect of the threat will reach deeper.

Population density

The density and absolute number of the exposed population, affecting the number of people that need to evacuate and the complexity of that process. In case of high density the evacuation itself is complicated, as e.g. people move more slowly on overcrowded roads. Population density is also directly related to the above-listed parameter 'Time of day', which is accounted for in the setup of the spatio-temporal population exposure model (see section 4.2).

Complexity of evacuation paths

The terrain properties of the location. In case of steep slopes or high-density city locations the complexity of evacuation paths is high; in case of plain and open terrain the paths are easier.

Weather

Certain weather characteristics increase the complexity of evacuation paths, e.g. snow-blocked roads or extremely hot temperatures.

Hazard attributes

Properties related to the hazard describe the type of threat, its severity and its propagation. Some types of hazards might have a huge impact and a short effect time, for instance explosives. Other threats might be widely spread and have a long effect time but an overall low impact; the effects of cold weather in some regions are an example of that case.

Propagation/extent of hazard

The extent of the hazard shows how large the affected territory is. In case of a tsunami the extent is rather large, as it covers the whole shoreline; the same applies to extreme temperatures influencing wide areas. However, e.g. in case of a terrorist attack the spatial extent of the hazard is limited, as the source of the threat might be a single point location.


Spread of threat

The speed with which the threat spreads from the source across the affected area. The longer the "diffuse time" at the location, the harder it is to assess the spreading of the threat at the location.

Diffuse time

The time the threat is active and 'hazardous'. In case of extreme weather conditions or a nuclear threat the diffuse time is rather high; in case of tsunamis or explosions it is significantly lower.

Severity/magnitude

The severity or magnitude of the threat, i.e. the force the hazard applies to the affected location/area. It spreads with the speed given by "spread of the threat" and diffuses over time based on the "diffuse time".

Evacuation model

An evacuation model to be implemented in CRISMA should be applicable in general form to the several scenarios available in the project. These scenarios are very different in nature (Broas et al., 2013); it is a matter of adjusting the parameters of "location" and "hazard" to correspond to each scenario. The resulting evacuation model is intended to be agent-based, as that is considered one of the best ways to simulate this kind of problem (e.g., Di Mauro et al., 2013; Fiedrich and Burghardt, 2007; Zhan and Chen, 2008). Consequently, there will be an additional actor in the model representing the inhabitants. The agent will either correspond to the population as a whole, if we are dealing with a generalized data set, or in conceptual terms potentially describe each citizen at a certain location. In the latter case the simulation might turn out to be time-consuming and resource-demanding; furthermore, modeling individuals' actions and behavior must be done carefully, in full accordance with privacy constraints. However, at the current stage we conclude that the inhabitant agent should possess at least the following properties:

Awareness

Whether an agent is aware of the threat and ready to act on it.

Distance to the nearest shelter

The distance to the nearest shelter. If no shelter is available at the location under investigation, as in the case of people trapped under an avalanche, this variable is not set.

Information about the nearest shelter

Whether information about the nearest shelter exists and is actually available to the inhabitants.

State

Defines the state of the agent as alive, injured, dead or asleep. For example, in an earthquake scenario, a certain ratio of the population may die or get severely injured as a direct consequence of structural failures caused by the primary shock; the evacuation model must then take that updated population exposure information as its new starting basis.


Conceptual view

In the following, the conceptual view of how the evacuation model shall work is described. The simulation works in turns or epochs. During every turn the following actions are performed (a minimal sketch of this cycle is given below):

Calculate the position of the population at the location

Determine where the population currently is at the location, how many inhabitants have survived and reached safety, and how many are in danger. This basically refers to the above-described exposure model and its time-specific characteristics, as well as, in the immediate response phase, to the first-stage casualty assessment as outlined later on.

Re-position the effect of the hazard on the location

Apply the effect of the hazard on the location; accordingly assess destroyed buildings, complicated evacuation paths, etc.

Distribute the information about the hazard to the population

Spread the information about the hazard to the population, increasing awareness of the dangerous situation.

Re-position the population at or from the location

Perform the relocation of inhabitants at the location by choosing the direction of movement. If an inhabitant is in safety, he or she will not move any more.

Figure 64 below shows the simulation cycle:

Figure 64. Conceptual framework for evacuation modeling.

The simulation is stopped when the population is either 'exterminated' or the number of inhabitants that have survived and reached safety does not change for a long time.
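A minimal sketch of this turn-based cycle is given below; all class and parameter names are illustrative and do not correspond to an actual CRISMA implementation:

```python
import random

class Inhabitant:
    """One agent of the evacuation model, with the properties listed above."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.aware = False          # awareness of the threat
        self.state = "alive"        # alive / injured / dead / asleep
        self.safe = False

def turn(agents, hazard_cells, safe_x, warn_prob=0.3):
    """One epoch: hazard effect, information spread, re-positioning."""
    for a in agents:
        if a.safe or a.state == "dead":
            continue
        if (a.x, a.y) in hazard_cells:          # hazard hits this cell
            a.state = "dead"
            continue
        if not a.aware and random.random() < warn_prob:
            a.aware = True                      # information reaches the agent
        if a.aware:                             # move towards the safe zone
            a.x += 1 if a.x < safe_x else 0
            a.safe = a.x >= safe_x

def simulate(agents, hazard_cells, safe_x, patience=50):
    """Run turns until everyone is safe or dead, or progress stalls."""
    last_safe, unchanged = -1, 0
    while unchanged < patience:
        turn(agents, hazard_cells, safe_x)
        n_safe = sum(a.safe for a in agents)
        unchanged = unchanged + 1 if n_safe == last_safe else 0
        last_safe = n_safe
        if all(a.safe or a.state == "dead" for a in agents):
            break
    return last_safe

print(simulate([Inhabitant(x, 0) for x in range(5)], {(2, 0)}, safe_x=10))
```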


4.4.2. A new grid-based evacuation simulation tool for large-scale assessment

The first evacuation model introduced here was newly developed within WP43 and is a working prototype and proof of concept. The model is based on agent-based simulation: each object in the simulation is treated as an agent, regardless of whether it describes the terrain or represents the population. Each such object has a specific set of properties and is able to execute a specific set of actions. For instance, the agent that represents the population is able to move on the map, while the object that represents the map knows whether a given part of the map is within the simulation boundaries and whether it contains roads. A single agent representing the population stands for a group of people at a given place at a specific time. Each place on the map can have a different population density depending on the time of day: a business district in the center of a city will be crowded with people during office hours, while during the evening and night hours the same place will be deserted.

4.4.2.1. Specification of the evacuation simulation tool

Importing of the data

One of the main characteristics required of the simulation tool is compatibility with the other applications in the same project package. NetLogo, which is described in detail later, offers limited import options; however, it can read data from an external file, and in many cases this is sufficient. Of course, the data formats of the different applications must be interfaceable, and the validation of the proof of concept provided here demonstrated that such an operation is possible. In order to validate the data import feature, an experiment was performed with sample test data referring to MUs of one-hundred-meter squares. The first file contained information on the roads and their locations.

Table 38. Sample information data.

| Cells            | Highway | Exit | Road |
| 100mN27721E47993 | 3       |      |      |
| 100mN27721E47991 |         |      | 4    |
| 100mN27721E47992 |         |      | 4    |
| 100mN27721E47994 | 3       | 3    |      |

Table 38 shows an example of the sample test data. The first column gives the coordinates of the cell, in this example in the Gauss–Krüger coordinate system. The other columns identify whether the cell represents a road on the map; roads are used in the simulation to find the fastest path to the safe zones. The second file contains information about the population distribution.


Table 39. Sample population information data.

| Hour | R100_ID          | Population |
| 0    | 100mN27863E47947 | 0.1974     |
| 13   | 100mN27818E48006 | 17.541     |
| 19   | 100mN27827E48015 | 0.5866     |
| 23   | 100mN27820E48003 | 19.605     |

Table 39 shows the sample population data. The first column is the hour of the day. As the population density of some cells changes over the course of the day, this information might be very useful in further developments of the evacuation model. The second column contains the coordinates of the cell in the Gauss–Krüger coordinate system, and the third column is the population density at that cell. At this point the absolute population numbers are not that important; what matters is the quantitative relationship between different cells: the higher the number, the denser the population at that cell. In the current prototype we use the population data taken at 1700 hours, as this is considered the peak hour and the least convenient time to start an evacuation. The information from these two files must be merged and displayed on the same map: the terrain information describes the map and landscape, while the population information describes the population to be evacuated. Using a straightforward algorithm (sketched below), we converted the data from the two files and the Gauss–Krüger coordinate system into our own coordinate system understandable by NetLogo. In NetLogo the initial point of the graph, the point with coordinates (0; 0), is located at the very center of the map. The highest x-axis coordinate is half the width of the map, px; the smallest x-axis coordinate is therefore -px. The same applies to the coordinates on the y-axis.

NetLogo

The tool used for the simulation (NetLogo) is highly customizable. As a result, we were able to successfully test some preliminary data, simplified from the Austrian sample application described in section 4.2.4, and execute a very basic simulation.
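The conversion mentioned above can be sketched as follows; the cell-ID pattern matches the sample data in Tables 38 and 39, while the map extents used in the example are invented for illustration:

```python
import re

def parse_cell(cell_id):
    """Extract northing/easting grid indices from an ID such as '100mN27721E47993'."""
    m = re.fullmatch(r"100mN(\d+)E(\d+)", cell_id)
    north, east = int(m.group(1)), int(m.group(2))
    return east, north

def to_netlogo(east, north, e_min, e_max, n_min, n_max):
    """Shift grid indices so that the center of the map maps to NetLogo's
    origin (0, 0); coordinates then range roughly from -px to +px."""
    x = east - (e_min + e_max) / 2.0
    y = north - (n_min + n_max) / 2.0
    return x, y

east, north = parse_cell("100mN27721E47993")
print(to_netlogo(east, north, 47900, 48100, 27700, 27900))   # -> (-7.0, -79.0)
```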


Figure 65. NetLogo simulation window.

Figure 65 illustrates the simulation window in NetLogo with the imported data from the Austrian sample application. The simulation tool provides convenient controls: for example, it is possible to choose the speed of the simulation, access settings and change the behaviour of the main screen (either updating everything per tick or showing a continuous flow). The NetLogo environment supports a very basic programming language (a dialect of Logo). Nevertheless, this environment is fully object-oriented and makes it possible to operate with different objects (agents) and change their behaviour. Figure 66 shows the development environment fully integrated into NetLogo.


Figure 66. NetLogo development environment.

The NetLogo environment provides the option to track model output during every phase of the execution. We can track any property of the agents in the simulation and record the trend. For instance, Figure 67 shows the trend for evacuated agents, i.e. how fast the population is evacuated from the dangerous zone.

Figure 67. Sample monitor window in NetLogo.

The figure shows a conventional graph of the trend: the horizontal axis represents time (the number of ticks since the simulation started), while the vertical axis represents the number of population agents present in the simulation.


It is also possible to extract various configuration parameters into toggles and switches on the interface. We were able to import a map described in the Gauss–Krüger coordinate system into NetLogo. NetLogo also offers the possibility to write data to a file; that means we can save the current snapshot of the population in NetLogo and import it into some other tool for analysis, e.g. a GIS environment.

4.4.2.2. Model logic scheme

The input data provide two main types of information: information about the roads on the map and information about the population distribution. Knowing the maximum values of the x- and y-axes, the map with its roads can be drawn. In the case of the population, the input data are provided at 100 m resolution, i.e. one point on the map actually represents a 100x100 meter square, and each such square carries a population density. Unfortunately, it would be very resource-consuming to treat each person as an object in the simulation: in this case we would be talking about 2 million different objects, an immense demand on computing power, considering that all those objects would have to be relocated and re-drawn. At this stage we therefore operate with a single object representing the population at any given (inhabited) point on the map; currently there are around 7500 agents representing the population. The color of an agent indicates whether it represents a small population density (light gray) or a high population density (black).

Figure 68. Density color scheme.

Figure 68 shows the color scheme for the population density: an agent colored with color 9 is barely inhabited, while an agent colored with color 0 or 1 is highly inhabited. The simulation itself is very straightforward (it is a proof-of-concept scenario). The goal for each agent representing the population is to move off the map, which is considered the safe zone. The algorithm is currently very simple: each agent starts moving forward (as it does not know in which direction the safe zone lies). The moment an agent is outside of the map, it is saved. If the agent finds a road on the way, it will try to follow the road (as roads usually lead to safety); however, it still does not know which way to run, so it randomly chooses a direction and follows the road. If the road suddenly ends, or if the agent finds itself off the road for some reason, it will choose another direction and try to follow the road again. It is worth repeating that the simulation is deliberately simple: its purpose is to prove that this tool can be used for simulation and is applicable in this project. The behaviour of the agents is unrealistic and serves demonstration purposes only. The simulation has great potential and should definitely be improved in the future.
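The movement rule described above can be summarized in Python (the actual prototype is written in NetLogo's own language; the names and data layout here are illustrative):

```python
import random

DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def step(agent, roads, width, height):
    """agent: dict with keys x, y, heading, evacuated; roads: set of (x, y) cells."""
    nx = agent["x"] + agent["heading"][0]
    ny = agent["y"] + agent["heading"][1]
    if not (0 <= nx < width and 0 <= ny < height):
        agent["evacuated"] = True               # off the map = reached safety
        return
    on_road = (agent["x"], agent["y"]) in roads
    if on_road and (nx, ny) not in roads:
        # The road ends in this direction: pick a new random direction and retry.
        agent["heading"] = random.choice(DIRECTIONS)
        return
    agent["x"], agent["y"] = nx, ny             # keep moving forward

agent = {"x": 5, "y": 5, "heading": (1, 0), "evacuated": False}
roads = {(x, 5) for x in range(10)}
while not agent["evacuated"]:
    step(agent, roads, width=10, height=10)
print("evacuated")
```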


4.4.2.3. Future plans

There are many ways to enhance the simulation. The prototype has proven to be working and functional, and the backbone of the implementation can be reused by other models in the project. Firstly, the goal would be to make the population agents smarter in choosing the evacuation route and keeping to it until the end; the population should also be "aware" of where the safe zones are and choose the direction wisely. Secondly, the speed of the moving agents should depend on several factors: if a single population agent represents a high population density, its speed should be lower. We may also want to consider different social groups and individuals.

4.4.2.4. Example application

Here is an example of the simulation in action. As mentioned above, one cell in the simulation is a square with a side length of one hundred meters. The current simulation moves each population agent one step in the direction it is heading. Assuming an average human walking speed of about 4.5 km/h, a person covers one hundred meters in roughly 80 seconds (about 1 minute 20 seconds), which is the time assumed per iteration below. Once again, many factors are not considered here; the point of this example is only to demonstrate the model and the simulation. Figure 65 shows the initial setup, with the population in its entry-point distribution, i.e. taken from the human exposure model according to the chosen evacuation starting time (e.g. immediately after an earthquake). After the simulation is initiated, the agents start moving according to the programmed algorithm. Figure 69 shows the state of the simulation after 12 iterations, corresponding to approximately 16 minutes after the evacuation start. As is clearly visible, a certain number of agents have already been evacuated (i.e. moved out of bounds); the remaining agents have moved away from their positions in search of an evacuation path.


Figure 69. Situation after 12 iterations or 16 minutes.

Figure 70 shows the situation after 33 iterations, approximately 44 minutes after the beginning of the evacuation. The main traffic is on the roads; only a few agents "in the field" have not found a path yet. A significant number of agents have left the map and are considered safe.

Figure 70. Situation after 33 iterations or 44 minutes.

Figure 71 shows the situation after 83 iterations, approximately 110 minutes after the beginning of the evacuation. The main difference from the previous picture is that more agents have been evacuated; the rest are still trying to find the correct evacuation path. After this moment the situation does not change dramatically.


In future work, the main focus should be on making the evacuation algorithm more precise. That would require more detailed maps, containing not only roads and obstacles (buildings) but also information about the direction to safety.

Figure 71. Situation after 83 iterations or approximately 110 minutes.

4.4.3. Life Safety Model

As an example of an existing evacuation model that will be implemented in CRISMA within the framework of the French Pilot (B), the 'Life Safety Model' (LSM) is described in the following. LSM is an evacuation model developed by the company BC Hydro for flood applications. HR Wallingford (HR Wallingford, 2012) then developed an interface to use the results of the TELEMAC-2D hydrodynamic model directly as inputs to LSM. LSM is a simulation tool that performs agent-based simulations, in line with the conceptual elaborations described in section 4.4.1. It also models the interactions of moving assets (people and vehicles) and stationary assets (buildings) with a flood wave (HR Wallingford, 2012). The software allows estimating the time of evacuation of the population (at the required time steps and intervals), the number of drowned people (an integrated form of casualty model), the number of safe people, and the number of buildings collapsed due to the flood. Moreover, the traffic simulation is based on the Greenshields equation:

V = A - B*k    (10)

with V being the velocity (km/h), k the density (vehicles/km), and A and B constants determined by observations. In LSM, it is possible to change these parameters by changing the minimum vehicle spacing, the maximum traffic density kmax and the minimum value for 1 - k/kmax (HR Wallingford, 2012).
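A minimal sketch of this linear speed-density relation is given below; the free-flow speed and jam density values are illustrative, not LSM defaults:

```python
def greenshields_speed(k, v_free=50.0, k_max=120.0):
    """V = A - B*k with A = v_free and B = v_free / k_max, equivalently
    V = v_free * (1 - k / k_max); speeds in km/h, densities in vehicles/km."""
    return max(0.0, v_free * (1.0 - k / k_max))

for k in (0, 30, 60, 120):
    print(k, greenshields_speed(k))
```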


In order to run a simulation with LSM, several assets or characteristics of the area of interest should be represented, such as people (referring to the population exposure model), the road network, buildings, and warning centres (if existing). Further, it is important to know that in LSM people are represented in two ways. First, the population is represented by individuals (PARU: People At Risk Unit). Each person can have its own properties for resisting the flood, walking, etc.; for example, it is possible to assign a walking speed to each person represented in the model. Second, the population is represented by groups (or families) (PARG: People At Risk Group). Each person has to be part of a group; however, a group can also consist of a single person. The people belonging to the same group will evacuate together. Moreover, a group can be declared 'separable' or 'inseparable'; this parameter represents the possibility of a group being split up during the evacuation process. The only way for an inseparable group to be separated is for a person of that group to die (HR Wallingford, 2012). Given potential privacy constraints, and following the general setup of the spatio-temporal population distribution and exposure model in grid or mesh format, getting to the point of depicting individuals involves high uncertainty. As individuals will not be modeled as such in the exposure model, but information on individuals is required to run LSM, the assumption is made that every cell constitutes one group, referring to the above-described group concept. Further, LSM offers an option that allows running a simulation without a flood, called a 'dry' simulation. All the inputs are the same as usual, except that the hydrodynamic data are not necessary (because they are not used): the population starts to evacuate towards the safety points given as inputs, using a zero default value as water depth. It is also possible to use the option 'evacuate immediately', in which case the evacuation starts as soon as the simulation starts, without any delay (referring to the response/notification time) (HR Wallingford, 2012). A screenshot of such an exemplary dry simulation run for CRISMA testing is provided in Figure 72, while an example of a 'flooded simulation' accounting for areas of coastal submersion is illustrated in Figure 73.
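A minimal sketch of that cell-to-group assumption is shown below; the record layout and the default walking speed are invented for illustration and do not reflect LSM's actual input format:

```python
def cells_to_groups(cells, walking_speed=1.25):
    """cells: iterable of (cell_id, population); returns one inseparable
    'PARG'-style group per populated cell, with one 'PARU' per person."""
    groups = []
    for cell_id, population in cells:
        n_people = max(1, round(population))    # at least one person per cell
        groups.append({
            "group_id": cell_id,
            "separable": False,                  # the group evacuates together
            "members": [
                {"paru_id": f"{cell_id}-{i}", "walking_speed": walking_speed}
                for i in range(n_people)
            ],
        })
    return groups

groups = cells_to_groups([("100mN27818E48006", 17.541)])
print(groups[0]["group_id"], len(groups[0]["members"]))   # -> 18 members
```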


Figure 72. Screenshot of LSM ‘in operation’ for a ‘dry simulation’. Colors indicate the status of the respective agent (referring to a person initially exposed).


Figure 73. Screenshot of LSM ‘in operation’ for a ‘flooded simulation’ without early warning (i.e., the first person starts to evacuate when water reaches his house. Then, the information to start evacuating spreads among the population). Colors indicate the status of the respective agent (referring to a person initially exposed).

A continuous simulation video has also been produced, illustrating the evacuation progress in a seamless manner from start to finish. Furthermore, a "guide" was prepared to serve as a handbook for the actual implementation of LSM, elaborating on input requirements as well as file-format-related issues. The results of a simulation are summarized in a text file (number of safe people, casualties, number of destroyed buildings, etc.). The time-dependent results are stored in files that can be visualized with a special post-processor (Blue Kenue, developed by the Canadian Hydraulics Centre (CHC), or ParaView). It is also possible to make an animation of the simulation.


5. References

Angeletti, P., Baratta, A., Bernardini, A., Cecotti, C., Cherubini, A., Colozza, R., Decanini, L., Diotallevi, P., Di Pasquale, G., Dolce, M., Goretti, A., Lucantoni, A., Martinelli, A., Molin, D., Orsini, G., Papa, F., Petrini, V., Riuscetti, M., Zuccaro, G. 2002. Rapporto finale della commissione tecnico-scientifica istituita dal Capo Dipartimento della Protezione Civile per l'Aggiornamento dell'inventario e della vulnerabilità degli edifici residenziali e pubblici per la stesura di un glossario. Scheda DPC 5° - 2, obiettivo prioritario n.1. (in Italian).

ATC (Applied Technology Council). 2012. FEMA P-58 Pre-Release: Seismic Performance Assessment of Buildings, Volume 1 – Methodology.

Aubrecht, C. 2013. A geospatial perspective on population exposure and social vulnerability in disaster risk research. Demonstrating the importance of spatial and temporal scale and thematic context. Dissertation. Vienna University of Technology, Department of Geodesy and Geoinformation, Research Group Photogrammetry & Remote Sensing. In press.

Aubrecht, C., Özceylan, D., Steinnocher, K., Freire, S. 2013. Multi-level geospatial modeling of human exposure patterns and vulnerability indicators. Natural Hazards, Vol. 68, No. 1, pp. 147–163.

Aubrecht, C., Steinnocher, K., Hollaus, M., Wagner, W. 2009. Integrating earth observation and GIScience for high resolution spatial and functional modeling of urban land use. Computers, Environment and Urban Systems, Vol. 33, No. 1, pp. 15–25.

Aubrecht, C., Steinnocher, K., Humer, H., Huber, H. 2014a. DynaPop-X: A population dynamics model applied to spatio-temporal exposure assessment – Implementation aspects from the CRISMA project. EGU (European Geosciences Union) General Assembly 2014. Geophysical Research Abstracts, Vol. 16, EGU2014-1932. Vienna, Austria.

Aubrecht, C., Steinnocher, K., Huber, H. 2014b. DynaPop – Population distribution dynamics as basis for social impact evaluation in crisis management. In S.R. Hiltz, M.S. Pfaff, L. Plotnick, P.C. Shih, eds. ISCRAM 2014 Conference Proceedings, 11th International Conference on Information Systems for Crisis Response and Management (pp. 319–323). University Park, PA, USA, May 2014.

Aubrecht, C., Ungar, J., Freire, S. 2014c. Activity-specific time profiles from Foursquare check-in data: An improved basis for mapping population dynamics? In B. Bhaduri, S. Fritz, C. Aubrecht, eds. Role of Volunteered Geographic Information in Advancing Science (Workshop in conjunction with GIScience 2014). Vienna, Austria, September 2014. In press.

Bhaduri, B., Bright, E., Coleman, P., Urban, M. 2007. LandScan USA: a high-resolution geospatial and temporal modeling approach for population distribution and dynamics. GeoJournal, Vol. 69, No. 1, pp. 103–117.

Bhaduri, B., Bright, E., Rose, A.N., Cheriyadat, A. 2014. Occupancy Curves for Characterizing Population Dynamics. AAG Annual Meeting. Tampa, FL.

Bhaduri, B., Nutaro, J., Liu, C. & Zacharia, T. 2008. Ultra-Scale Computing for Emergency Evacuation. In J.G. Voeller, ed. Wiley Handbook of Science and Technology for Homeland Security. John Wiley & Sons, Inc.

Braga, F., Dolce, M., Liberatore, D. 1982. A Statistical Study on Damaged Buildings and an Ensuing Review of the MSK-76 Scale. Proceedings of the Seventh European Conference on Earthquake Engineering, Athens, Greece, pp. 431–450.


Broas, P. & CRISMA project team. 2013. CRISMA Deliverable D22.2 – Report on real and reference crisis management scenarios. Work Package Deliverable. CRISMA (Modelling crisis management for improved action and preparedness) project consortium. February 2013, 318 pp.

Cabal, A., Coulet, C., Erlich, M., Cossalter, A., David, E., Sauvaget, P., Polese, M., Zuccaro, G., Alten, K., Steinnocher, K., Aubrecht, C., Sihvonen, H., Max, M., Jähi, M., Porthin, M., Rosqvist, T., Perrels, A., Vajda, A., Pilli-Sihvola, K., Almeida, M. 2012. Existing hazard and vulnerability/losses models. Deliverable D41.1 of the Integrated project "CRISMA", Project no. FP7/2007-2013 n.o 284552, European Commission.

Coburn, A.W., Spence, R. 1992. Earthquake Protection. John Wiley & Sons Ltd, England, 355 p.

Di Mauro, M., Megawati, K., Cedillos, V., Tucker, B. 2013. Tsunami risk reduction for densely populated Southeast Asian cities: analysis of vehicular and pedestrian evacuation for the city of Padang, Indonesia, and assessment of interventions. Natural Hazards, Vol. 68, No. 2, pp. 373–404.

Di Pasquale, G., Orsini, G., Romeo, R.W. 2005. New Developments in Seismic Risk Assessment in Italy. Bulletin of Earthquake Engineering, Vol. 3, No. 1, pp. 101–128.

Eicher, C.L. & Brewer, C.A. 2001. Dasymetric Mapping and Areal Interpolation: Implementation and Evaluation. Cartography and Geographic Information Science, Vol. 28, No. 2, pp. 125–138.

Esposito, S., Iervolino, I., Elefante, L., Giovinazzi, S. 2011. Post-Earthquake Physical Damage Assessment for Gas Networks. Proceedings of the Ninth Pacific Conference on Earthquake Engineering: Building an Earthquake-Resilient Society, 14-16 April 2011, Auckland, New Zealand.

Esposito, S., AA.VV. 2012. D2.3 – Guidelines for reliability analysis of roadway network including procedures for emergency response management. Deliverable of the EU Project CLUVA: CLimate change and Urban Vulnerability in Africa, Contract N. 265137.

EWENT. ewent.vtt.fi/

Faccioli, E., Cauzzi, C. 2006. Macroseismic intensities for seismic scenarios estimated from instrumentally based correlations. In Proceedings of the First European Conference on Earthquake Engineering and Seismology (a joint event of the 13th ECEE & 30th General Assembly of the ESC), Geneva, 3-8 September 2006, CD-ROM.

Faenza, L., Michelini, A. 2010. Regression analysis of MCS intensity and ground motion parameters in Italy and its application in ShakeMap. Geophysical Journal International, 180: 1138–1152.

Fanger, P.O. 1970. Thermal Comfort: Analysis and applications in environmental engineering. McGraw-Hill.

Fenton, N. 2012. Probability Theory and Bayesian Belief Networks, web site. http://www.eecs.qmul.ac.uk/~norman/BBNs/BBNs.htm.

Franchin, P., AA.VV. 2013. D8.7 – Methodology for systemic seismic vulnerability assessment of buildings, infrastructures, networks and socio-economic impacts. Deliverable of the EU Project SYNER-G: Systemic Seismic Vulnerability and Risk Analysis for Buildings, Lifeline Networks and Infrastructures Safety Gain, Contract N. 244061.

Franchin, P., Lupoi, A. & Pinto, P.E. 2006. On the role of road networks in reducing human losses after earthquakes. Journal of Earthquake Engineering, Vol. 10, No. 2, pp. 195–206.

Frangopol, D.M., Kallen, M.J., Noortwijk, M. 2004. Probabilistic models for life-cycle performance of deteriorating structures: review and future directions. Prog. Struct. Eng. Mater., Vol. 6, pp. 197–212.


Freire, S. & Aubrecht, C. 2012. Integrating population dynamics into mapping human exposure to seismic hazard. Natural Hazards & Earth System Sciences, Vol. 12, No. 11, pp. 3533–3543.

Freire, S. 2010. Modeling of Spatiotemporal Distribution of Urban Population at High Resolution – Value for Risk Assessment and Emergency Management. In M. Konecny, S. Zlatanova, & T.L. Bandrova, eds. Geographic Information and Cartography for Risk and Crisis Management. Lecture Notes in Geoinformation and Cartography. Springer Berlin Heidelberg, pp. 53–67.

Freire, S., Aubrecht, C. & Wegscheider, S. 2012. When the Tsunami Comes to Town – Improving evacuation modeling by integrating high-resolution population exposure. In L. Rothkrantz, J. Ristvej, & Z. Franco, eds. ISCRAM 2012, 9th International Conference on Information Systems for Crisis Response and Management. Vancouver, BC, Canada.

Freire, S., Aubrecht, C. & Wegscheider, S. 2013. Advancing tsunami risk assessment by improving spatio-temporal population exposure and evacuation modeling. Natural Hazards, Vol. 68, No. 3, pp. 1311–1324.

Fiedrich, F. & Burghardt, P. 2007. Agent-based systems for disaster management. Communications of the ACM, Vol. 50, No. 3, pp. 41–42.

Giovinazzi, S., Lagomarsino, S. 2004. A Macroseismic Method for the Vulnerability Assessment of Buildings. Proceedings of the 13th World Conference on Earthquake Engineering, Vancouver, Canada, Paper No. 896 (on CD).

Goodchild, M.F. 2008. Geospatial technologies and homeland security: Challenges and opportunities. In D.Z. Sui, ed. Geospatial Technologies and Homeland Security: Research Frontiers and Future Challenges. Dordrecht, The Netherlands: Springer, pp. 345–353.

Goretti, A. & Sarli, V. 2006. Road Network and Damaged Buildings in Urban Areas: Short and Long-term Interaction. Bulletin of Earthquake Engineering, Vol. 4, No. 2.

Grünthal, G. 1998. European Macroseismic Scale. Cahiers du Centre Européen de Géodynamique et de Séismologie, Vol. 15, Luxembourg.

Guagenti, E., Petrini, V. 1989. Il caso delle vecchie costruzioni: verso una nuova legge danni-intensità. Proceedings of the 4th Italian National Conference on Earthquake Engineering, Milan (Italy), 1: 145–153.

Ham, H., Kim, T.J. & Boyce, D. 2005. Assessment of economic impacts from unexpected events with an interregional commodity flow and multimodal transportation network model. Transportation Research Part A: Policy and Practice, Vol. 39, pp. 849–860.

Holopainen, R. 2012. A human thermal model for improved thermal comfort. Doctoral Dissertation, VTT Science 23, Espoo, Finland. 141 p. http://www.vtt.fi/inf/pdf/science/2013/S23.pdf

ISO 19101. 2002. Geographic information – Reference model.

ISTAT (National Institute of Statistics). 2008. Time Use in Daily Life. A Multidisciplinary Approach to the Time Use's Analysis. N. 35, 2008.

Korb, K.B. & Nicholson, A.E. 2010. Bayesian Artificial Intelligence, 2nd ed. CRC Press.

Langford, M. 2007. Rapid facilitation of dasymetric-based population interpolation by means of raster pixel maps. Computers, Environment and Urban Systems, Vol. 31, No. 1, pp. 19–32.

Laszlo, A. and Krippner, S. 1998. Systems Theories: Their origins, foundations, and development. In: Systems Theories and A Priori Aspects of Perception (J.S. Jordan, ed.), Amsterdam: Elsevier.


Leung, S., Martin, D., Cockings, S. 2010. Linking UK Public Geospatial Data to Build 24/7 Space-Time Specific Population Surface Models. GIScience 2010, 6th International Conference on Geographic Information Science. Zurich.

Liel, A.B., Lynch, K.P. 2012. Vulnerability of Reinforced-Concrete-Frame Buildings and Their Occupants in the 2009 L'Aquila, Italy, Earthquake. Natural Hazards Review, pp. 11–23.

Limpert, R. 1992. Brake Design and Safety. Society of Automotive Engineers, ISBN 1-56091-261-8.

Loibl, W., Peters-Anders, J. 2012. Mobile phone data as source to discover spatial activity and motion patterns. In T. Jekel et al., eds. GI_Forum 2012: Geovizualisation, Society and Learning. Berlin/Offenbach: Herbert Wichmann, VDE, pp. 524–533.

Margottini, C., Molin, D., Serva, L. 1992. Intensity versus ground motion: A new approach using Italian data. Engineering Geology, Vol. 33, No. 1, pp. 45–58.

Martin, D., Cockings, S. & Leung, S. 2009. Population 24/7: building time-specific population grid models. European Forum for GeoStatistics 2009. The Hague, Netherlands, 11 pp.

Martin, D., Cockings, S. & Leung, S. 2010. Progress report: 24-hour gridded population models. European Forum for Geostatistics. Tallinn, Estonia, 9 pp.

Matrix. 2010. New Multi-HAzard and MulTi-RIsK Assessment MethodS for Europe. Project DoW, EU Project financed with grant N. 265138.

Mennis, J. & Hultgren, T. 2006. Intelligent Dasymetric Mapping and Its Application to Areal Interpolation. Cartography and Geographic Information Science, Vol. 33, No. 3, pp. 179–194.

Milke, J.A. 2000. Evaluating the early development of smoke hazard from fires in large spaces. Transactions – American Society of Heating, Refrigerating and Air-Conditioning Engineers, 106(1): 627–636.

Mori, Y., Ellingwood, B. 1994. Maintaining reliability of concrete structures. I: role of inspection repair. ASCE J. Struct. Eng., Vol. 120, No. 3, pp. 824–825.

Mulholland, G.W. 1995a. Smoke Production and Properties. In: P.J. DiNenno, ed. The SFPE Handbook of Fire Protection Engineering, 2nd ed. National Fire Protection Association, Quincy, MA, pp. 217–227.

Mulholland, G.W. 1995b. Generation of heat and chemical compounds in fires. In: P.J. DiNenno, ed. SFPE Handbook of Fire Protection Engineering, 2nd edition, NFPA.

Murray, A.T., Matisziw, T.C. & Grubesic, T.H. 2008. A Methodological Overview of Network Vulnerability Analysis. Growth and Change, Vol. 39, pp. 573–592.

Noulas, A., Scellato, S., Mascolo, C., Pontil, M. 2011. An Empirical Study of Geographic User Activity Patterns in Foursquare. ICWSM-11, 5th International AAAI Conference on Weblogs and Social Media. Barcelona.

OGC 08-126. 2009. The OpenGIS Abstract Specification Topic 5: Features. Editors: Cliff Kottman and Carl Reed, Open Geospatial Consortium, Inc.

Pascale, S., Sdao, F. & Sole, A. 2010. A model for assessing the systemic vulnerability in landslide prone areas. Nat. Hazards Earth Syst. Sci., Vol. 10, pp. 1575–1590.

Petcherdchoo, A., Kong, J.S., Frangopol, D.M., Neves, L.C. 2004. NLCADS (new life-cycle analysis of deteriorating structures) user's manual; a program to analyze the effects of multiple actions on reliability and condition profiles of groups of deteriorating structures. Structural engineering and structural mechanics research series no. CU/SR-04/3, Department of Civil, Environmental, and Architectural Engineering, University of Colorado, Boulder, Vol. 4, No. 3, p. 63.


Pinto, P.E., Cavalieri, F., Franchin, P., Lupoi, A. 2011. D2.6 – Definition of system components and the formulation of system functions to evaluate the performance of transportation infrastructures. Deliverable of the EU Project SYNER-G: Systemic Seismic Vulnerability and Risk Analysis for Buildings, Lifeline Networks and Infrastructures Safety Gain, Contract N. 244061.

Polese, M., Marcolini, M., Prota, A., Zuccaro, G. 2013a. Mechanism Based Assessment of damaged building's residual capacity. 4th ECCOMAS Thematic Conference on Computational Methods in Structural Dynamics and Earthquake Engineering (COMPDYN 2013), Kos Island, Greece.

Polese, M., Di Ludovico, M., Prota, A., Manfredi, G. 2013b. Damage-dependent vulnerability curves for existing buildings. Earthquake Engineering and Structural Dynamics, 42(6): 853–870.

Polese, M., Zuccaro, G., Nardone, S., Marcolini, M., Cabal, A., Cossalter, A., Coulet, C., Pilli-Sihvola, K., Perrels, A., Aubrecht, C., Steinnocher, K., Huber, H., Humer, H., Taveter, K., Vassiljev, S., Meriste, M., Holopainen, R., Tuomaala, P., Piira, K., Piippo, J., Rosqvist, T., Molarius, R., Almeida, M., Reva, V., Viegas, D.X. 2013c. Time-Dependent Vulnerability for Systems at Risk (V1). Deliverable D43.1 of the Integrated project "CRISMA", Project no. FP7/2007-2013 n.o 284552, European Commission.

Polese, M., Marcolini, M., Zuccaro, G., Cacace, F. 2014. Mechanism Based Assessment of Damage-Dependent Fragility curves for RC building classes. Bull Earthquake Eng, in press, DOI 10.1007/s10518-014-9663-4, available online at http://link.springer.com/article/10.1007/s10518-014-9663-4

Rösler, R., Liebig, T. 2013. Using Data from Location Based Social Networks for Urban Activity Clustering. In D. Vandenbroucke, B. Bucher, J. Crompvoets, eds. Geographic Information Science at the Heart of Europe. Lecture Notes in Geoinformation and Cartography. Springer, pp. 55–72.

Rossetto, T., Elnashai, A. 2003. Derivation of vulnerability functions for European-type RC structures based on observational data. Engineering Structures, 25: 1241–1263.

Sanchez-Silva, M., Klutke, G.A. and Rosowsky, D.V. 2011. Life-cycle performance of structures subject to multiple deterioration mechanisms. Structural Safety, Vol. 33, No. 3, pp. 206–217.

Scheer, S.J., Varela, V. & Eftychidis, G. 2012. A generic framework for tsunami evacuation planning. Physics and Chemistry of the Earth, Parts A/B/C, Vol. 49, pp. 79–91.

Steinnocher, K., Köstl, M. & Weichselbaum, J. 2011. Grid-based population and land take trend indicators – New approaches introduced by the geoland2 Core Information Service for Spatial Planning. New Techniques and Technologies for Statistics, NTTS 2011. Brussels, 9 pp.

Steinnocher, K., Aubrecht, C., Humer, H., Huber, H. 2014. Modellierung raum-zeitlicher Bevölkerungsverteilungsmuster im Katastrophenmanagementkontext [Modeling spatio-temporal population distribution patterns in the crisis management context]. In M. Schrenk, V. Popovich, P. Zeile, P. Elisei, eds. REAL CORP 2014: 19th International Conference on Urban Planning and Regional Development in the Information Society – Clever Solutions for Smart Cities. Proceedings (pp. 909–913). Vienna, Austria, May 21-23, 2014.

Sterling, L. & Taveter, K. 2009. The Art of Agent-Oriented Modeling. Cambridge, MA, and London, England: MIT Press.

SYNER-G, FP7 EU Project, http://www.vce.at/SYNER-G/

Tatano, H. & Tsuchiya, S. 2008. A framework for economic loss estimation due to seismic transportation network disruption: a spatial computable general equilibrium approach. Natural Hazards, Vol. 44, pp. 253–265.


Tewarson, A. 1995. Generation of heat and chemical compounds in fires. In: P.J. DiNenno, ed. SFPE Handbook of Fire Protection Engineering, 2nd edition, NFPA.

Tsai, J., Bowring, E., Marsella, S. & Tambe, M. 2013. Empirical evaluation of computational fear contagion models in crowd dispersions. Autonomous Agents and Multi-Agent Systems, Vol. 27, No. 2, pp. 200–217.

Tuomaala, P., Holopainen, R., Piira, K. and Airaksinen, M. 2013. Impact of individual characteristics – such as age, gender, BMI and fitness – on human thermal sensation. Presentation at Building Simulation 2013, Aug 27.

UNISDR. 2009. 2009 UNISDR Terminology on Disaster Risk Reduction. Geneva, Switzerland: United Nations, 30 pp.

Wallingford, HR. 2012. Life Safety Model 2D v2.2 – User Guide.

Zhan, F.B. & Chen, X. 2008. Agent-Based Modeling and Evacuation Planning. In D.Z. Sui, ed. Geospatial Technologies and Homeland Security. The GeoJournal Library. Springer Netherlands, pp. 189–208.

Zhang, H. 2003. Human Thermal Sensation and Comfort in Transient and Non-Uniform Thermal Environments. Ph.D. Dissertation, University of California, Berkeley, USA.

Zuccaro, G., Albanese, V., Cacace, F., Mercuri, C., Papa, F. et al. 2008a. Seismic Vulnerability Evaluations Within The Structural And Functional Survey Activities Of The COM Bases In Italy. AIP Conf. Proc. 1020, pp. 1665–1674; doi: http://dx.doi.org/10.1063/1.2963797.

Zuccaro, G., Cacace, F., Spence, R.J.S., Baxter, P.J. 2008b. Impact of explosive eruption scenarios at Vesuvius. Journal of Volcanology and Geothermal Research, Vol. 178, pp. 416–453.

Zuccaro, G. & Cacace, F. 2011. Seismic Casualty Evaluation: The Italian Model, an Application to the L'Aquila 2009 Event. In R. Spence, E. So, & C. Scawthorn, eds. Human Casualties in Earthquakes. Advances in Natural and Technological Hazards Research. Springer Netherlands, pp. 171–184.


Appendix (A) – Simulation model for TDV

A.1 Additional specifications of the TDV simulation tool

The TDV model can be accessed as a web service through the standard OGC WPS (Web Processing Service) interface. The WPS process uses some of the functions of the TDV Python package component (see https://crisma-cat.ait.ac.at/component/TDV-Python-package); for this reason, both software modules must be installed on the same server.

Figure 74. Schema of the interface and wrapping of the Simulation Model as a WPS.

The TDV Python package can be downloaded from https://crisma-cat.ait.ac.at/component/TDV-Python-package, where information on how to install and configure the module is also available. The correct operation of the TDV model requires the following configuration on the server side:

1. Web server: Apache (an implementation of an HTTP server)
2. PyWPS module (an implementation of the Web Processing Service standard of the Open Geospatial Consortium)
3. MapServer (a platform for publishing spatial data and interactive mapping applications to the web)
4. PostgreSQL/PostGIS
5. Python 2.x
6. TDV Python package
7. The external library psycopg, a Python library for accessing objects in a PostgreSQL database

The TDV Python package is compatible with Mac, Linux and Windows platforms. For more information on how to install it, see the catalogue page https://crisma-cat.ait.ac.at/component/TDV-Python-package.


The main Python module (crismatdv) that implements the WPS process requires input data that are passed to the TDV model as input and control parameters; some of them need to be stored in a DBMS residing on the same server as the other service components. The data stored in the RDBMS (PostgreSQL/PostGIS) are:

1. the OOI inventory;
2. the rules for updating the vulnerability classes.

The execution of the WPS requires the client to pass the following parameters (in the seismic application case):

1. the seismic event parameters (data);
2. the number of events to be simulated (event).

For simplicity, the seismic event parameters (EVs parameters) are passed to the process in a simple text file, "eq_parameters.txt", in CSV format with value fields separated by semicolons (;). The EVs parameters describe the seismic events of the sequence to be considered for modelling TDV. A sample "eq_parameters.txt" file is:

1;0;null;42.459;13.371;12.4;4.2
2;0;null;42.462;13.366;12.9;4.7
3;1;grid_xyz_20090622;0.0;0.0;0.0;0.0

where the fields are, in order:

ev_num: event number in the sequence
flag_map: Boolean flag indicating whether a PGA map is used as the hazard input
map_name: name of the PGA distribution input table, if available
longitude: longitude (decimal degrees)
latitude: latitude (decimal degrees)
depth: focal depth of the earthquake in km
magnitude: earthquake magnitude

The output of the model is exposed as an OGC WFS service, accessible with any desktop GIS. In the output table, in addition to the updated inventory values, the values of the estimated damage distribution are also reported.
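Returning to the input format, the sketch below shows how such a file could be read in Python; the read_eq_parameters helper is purely illustrative (it is not part of the TDV package) and simply mirrors the field list above:

import csv

def read_eq_parameters(path):
    """Parse an eq_parameters.txt sequence file (semicolon-separated values)."""
    events = []
    with open(path) as f:
        for row in csv.reader(f, delimiter=';'):
            if not row:
                continue  # skip blank lines
            events.append({
                'ev_num': int(row[0]),        # event number in the sequence
                'flag_map': row[1] == '1',    # True if a PGA map is the hazard input
                'map_name': None if row[2] == 'null' else row[2],
                'longitude': float(row[3]),   # decimal degrees
                'latitude': float(row[4]),    # decimal degrees
                'depth_km': float(row[5]),    # focal depth in km
                'magnitude': float(row[6]),
            })
    return events

if __name__ == '__main__':
    for ev in read_eq_parameters('eq_parameters.txt'):
        print("event %d: M %.1f" % (ev['ev_num'], ev['magnitude']))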

Figure 75. Extract from the WFS attribute table.

The full (OOI) output data structure is reported hereafter, in the form of the SQL table in which these data are stored. For each MU (square cell, identified by codcell), the table stores the relevant data about the event in the sequence, the distribution of vulnerability classes in the MU (fields aaa, aa, a, b, c, d), the distribution of the damage classes in the MU for each vulnerability class, and the damage class totals in the MU.


CREATE TABLE aquila.ooi (
  codcell integer,          -- MU (square cell) identifier
  eventn integer,           -- event number in the sequence
  imp integer,
  t integer,
  the_geom geometry,
  inte smallint,            -- earthquake intensity
  -- vulnerability class distribution in the MU
  aaa double precision,
  aa double precision,
  a double precision,
  b double precision,
  c double precision,
  d double precision,
  tot_cl double precision,
  pa double precision,
  pb double precision,
  pc double precision,
  pd double precision,
  -- damage class distribution (nd0..nd5) per vulnerability class
  nd0aaa double precision, nd1aaa double precision, nd2aaa double precision,
  nd3aaa double precision, nd4aaa double precision, nd5aaa double precision,
  nd0aa double precision, nd1aa double precision, nd2aa double precision,
  nd3aa double precision, nd4aa double precision, nd5aa double precision,
  nd0a double precision, nd1a double precision, nd2a double precision,
  nd3a double precision, nd4a double precision, nd5a double precision,
  nd0b double precision, nd1b double precision, nd2b double precision,
  nd3b double precision, nd4b double precision, nd5b double precision,
  nd0c double precision, nd1c double precision, nd2c double precision,
  nd3c double precision, nd4c double precision, nd5c double precision,
  nd0d double precision, nd1d double precision, nd2d double precision,
  nd3d double precision, nd4d double precision, nd5d double precision,
  -- damage class totals in the MU
  nd0 double precision, nd1 double precision, nd2 double precision,
  nd3 double precision, nd4 double precision, nd5 double precision
) WITH (
  OIDS=FALSE
);
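Once populated, the table can be queried with psycopg, the PostgreSQL adapter listed among the server requirements. The following fragment is only a usage sketch (connection parameters are placeholders): it extracts, for one event of the sequence, the ten cells with the largest share of buildings in the heaviest damage classes D4 and D5:

import psycopg2  # psycopg, as listed in the server-side configuration

conn = psycopg2.connect(host="localhost", dbname="crisma",
                        user="crisma", password="...")
cur = conn.cursor()
# Total of damage classes D4 and D5 per cell, for event 2 of the sequence
cur.execute("""
    SELECT codcell, nd4 + nd5 AS severe_damage
    FROM aquila.ooi
    WHERE eventn = %s
    ORDER BY severe_damage DESC
    LIMIT 10
""", (2,))
for codcell, severe in cur.fetchall():
    print("cell %s: %.3f" % (codcell, severe))
cur.close()
conn.close()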

A.2 Example integration of the TDV model

The WPS process can be executed using the WPS client (Simulation Model Integration BB) developed by SPACEBEL, which can be accessed, after the needed user login, at the following URL: http://crisma.spacebel.be/pilotC.

Figure 76. Graphical interface of the WPS client, input parameters.

The example considers a sequence of two events (field events). Clicking on LaunchProcess executes the WPS and runs the TDV model; the final results are shown in the next dialog box (see Figure 77). The Process result dialog reports the successful execution of the process and shows the link to the output in the form of a WFS service.


Figure 77. Graphical interface of the WPS client, process result.

As an alternative interaction method, the WPS processes can be executed with any web browser (Firefox, Chrome, etc.) or wrapped into a web application by using the URL of the WPS server. The base WPS URL is:

http://wps.plinivs.it/cgi-bin/plinivs-crisma-wps?

To access general information, a GetCapabilities request can be invoked:

http://wps.plinivs.it/cgi-bin/plinivs-crisma-wps?service=WPS&version=1.0.0&request=GetCapabilities

The GetCapabilities operation provides access to general information about a live WPS implementation and lists the operations and access methods supported by that implementation. To get information about the data inputs of a process, the DescribeProcess request can be used:

http://wps.plinivs.it/cgi-bin/plinivs-crisma-wps?service=WPS&version=1.0.0&request=DescribeProcess&Identifier=crisma-time-dependent-vulnerability-seq

The DescribeProcess operation allows WPS clients to request a full description of one or more processes that can be executed by the service. This description includes the input and output parameters and formats, and it can be used to automatically build a user interface for capturing the parameter values needed to execute a process. To execute the model, an Execute request must be invoked, with the data inputs appended to the request:

http://wps.plinivs.it/cgi-bin/plinivs-crisma-wps?service=WPS&version=1.0.0&request=Execute&datainputs=[data=http:///eq_parameters.txt;event=2]

Note that in the last example http:///eq_parameters.txt must be substituted with the real URL of the parameters text file.
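The same Execute request can also be issued programmatically. The following sketch uses only the Python standard library; the base URL and the datainputs syntax are those shown above, and the parameters-file URL remains a placeholder that must be replaced with a real one:

try:
    from urllib2 import urlopen           # Python 2.x, as in the server configuration
except ImportError:
    from urllib.request import urlopen    # Python 3 equivalent

WPS_BASE = "http://wps.plinivs.it/cgi-bin/plinivs-crisma-wps"

# Placeholder: substitute the real URL of the uploaded eq_parameters.txt
params_url = "http:///eq_parameters.txt"

request_url = (WPS_BASE
               + "?service=WPS&version=1.0.0&request=Execute"
               + "&datainputs=[data=" + params_url + ";event=2]")

# The response is the XML ExecuteResponse document containing the WFS output link
print(urlopen(request_url).read())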


Appendix (B) – Simulation model for RNV

B.1 Input-output specification

The following shows the SQL Data Definition Language (DDL) listings for the relevant data structures described above.

Roads inventory layer data structure:

CREATE TABLE aquila.roads_lines (
  gid serial NOT NULL,
  the_geom geometry(LineString,32633),
  osm_id integer,
  name character varying(254),
  highway character varying(254),
  waterway character varying(254),
  aerialway character varying(254),
  barrier character varying(254),
  man_made character varying(254),
  other_tags character varying(254),
  bridge character(1),
  CONSTRAINT roads_lines_pkey PRIMARY KEY (gid)
) WITH (
  OIDS=FALSE
);

Buildings inventory layer data structure:

CREATE TABLE aquila.edificit_utm (
  gid serial NOT NULL,
  fid_edific numeric(10,0),
  desc_ character varying(50),
  id_edficio numeric(10,0),
  edificio numeric,
  esito character varying(1),
  accurat numeric,
  datains character varying(255),
  squadra numeric,
  accorpato numeric,
  note character varying(100),
  id_sub numeric,
  id_aggrega character varying(254),
  comune character varying(254),
  nome_loc character varying(254),
  esito_1 character varying(254),
  rgpvcm_01_ numeric,
  edificionu numeric,
  f8 character varying(254),
  the_geom geometry,
  CONSTRAINT edificit_utm_pkey PRIMARY KEY (gid),
  CONSTRAINT enforce_dims_the_geom CHECK (st_ndims(the_geom) = 2),
  CONSTRAINT enforce_geotype_the_geom CHECK (geometrytype(the_geom) = 'MULTIPOLYGON'::text OR the_geom IS NULL),
  CONSTRAINT enforce_srid_the_geom CHECK (st_srid(the_geom) = 32633)
) WITH (OIDS=FALSE);

DPM (Damage Probability Matrix) data structure:

CREATE TABLE aquila.dpm (
  inte integer NOT NULL,                 -- earthquake intensity
  classe character varying(4) NOT NULL,  -- vulnerability class
  -- probabilities of damage levels D0..D5
  d0 double precision,
  d1 double precision,
  d2 double precision,
  d3 double precision,
  d4 double precision,
  d5 double precision,
  classemeno1 character varying(4),
  classemeno2 character varying(4),
  classnum integer,
  clmeno1num integer,
  clmeno2num integer,
  CONSTRAINT dpm_pkey PRIMARY KEY (inte, classe)
) WITH (
  OIDS=FALSE
);
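To illustrate how these matrices are consumed, the following purely illustrative fragment (connection parameters are placeholders; it is not part of the RNV code) retrieves the damage distribution for one intensity/vulnerability-class pair and checks that the probabilities sum to one:

import psycopg2  # the psycopg PostgreSQL adapter mentioned in Appendix A

conn = psycopg2.connect(host="localhost", dbname="crisma",
                        user="crisma", password="...")
cur = conn.cursor()
# Damage distribution d0..d5 for intensity VIII and vulnerability class 'a'
cur.execute("""
    SELECT d0, d1, d2, d3, d4, d5
    FROM aquila.dpm
    WHERE inte = %s AND classe = %s
""", (8, 'a'))
row = cur.fetchone()
if row is not None:
    print("P(D0..D5) = %s, sum = %.3f" % (list(row), sum(row)))  # sum should be ~1.0
cur.close()
conn.close()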

B.2 Architecture for the RNV model

The RNV model is implemented as a PostgreSQL stored procedure written in PL/pgSQL, with a WPS interface enabling its access and integration in the CRISMA framework. The WPS process can be executed using the WPS client (Simulation Model Integration BB) developed by SPACEBEL, accessible after user login at http://crisma.spacebel.be/pilotC. Alternatively, the model can be accessed directly from any web browser by pointing it to the URL of the WPS server, http://wps.plinivs.it/cgi-bin/plinivs-crisma-wps, followed by the needed parameters in the query string. Figure 78 shows the architecture used for interfacing and using the simulation model as a web service.

Figure 78. RNV model wrapped as a WPS service architecture.


Figure 79. Simulation Model Integration WPS input parameters dialog.

The WPS for the RNV model requires the following data inputs:

{identifier: eq_int, type: text value}
{identifier: buff, type: text value}
{identifier: new_eqv_classes, type: boolean}

where:

eq_int – earthquake intensity used to evaluate the interruption probability of the links;
buff – width of the buffer (on one side of the road links), in metres, used to select the buildings near the links;
new_eqv_classes – enables a procedure for a new evaluation of the earthquake vulnerability classes of the buildings near the links.

As an example, a run of the model for the evaluation of the road network vulnerability, with a reference earthquake intensity of VIII and a selection buffer of 12 m alongside the road links, requires entering the following URL in a browser:

http://wps.plinivs.it/cgi-bin/plinivs-crisma-wps?service=WPS&version=1.0.0&request=execute&identifier=rnv_elaboration&datainputs=[eq_int=8;buff=12;new_eqv_classes=false]
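Because the bracketed datainputs syntax is easy to get wrong when assembled by hand, a small helper such as the hypothetical Python fragment below can build the request URL from the three documented parameters:

WPS_BASE = "http://wps.plinivs.it/cgi-bin/plinivs-crisma-wps"

def rnv_execute_url(eq_int, buff, new_eqv_classes):
    """Assemble the Execute request URL for the rnv_elaboration process."""
    datainputs = "[eq_int=%d;buff=%d;new_eqv_classes=%s]" % (
        eq_int, buff, "true" if new_eqv_classes else "false")
    return (WPS_BASE + "?service=WPS&version=1.0.0&request=execute"
            + "&identifier=rnv_elaboration&datainputs=" + datainputs)

# Reproduces the example above: intensity VIII, 12 m buffer, no class re-evaluation
print(rnv_execute_url(8, 12, False))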

The result in the browser will be an XML document, a standard WPS ExecuteResponse, of which an abbreviated form is:

<wps:ExecuteResponse service="WPS" version="1.0.0" ...>
  <wps:Process>
    <ows:Identifier>rnv_elaboration</ows:Identifier>
    <ows:Title>Road Network Vulnerability Model</ows:Title>
    <ows:Abstract>Execute Road Network Vulnerability Model. The output of this process is a WFS URL.</ows:Abstract>
  </wps:Process>
  <wps:Status>
    <wps:ProcessSucceeded>PyWPS Process rnv_elaboration successfully calculated</wps:ProcessSucceeded>
  </wps:Status>
  <wps:ProcessOutputs>
    <wps:Output>
      <ows:Identifier>rnv_elaboration</ows:Identifier>
      <ows:Title>Road Network Vulnerability elaboration</ows:Title>
      <ows:Abstract>RNV model output</ows:Abstract>
      ...
    </wps:Output>
  </wps:ProcessOutputs>
</wps:ExecuteResponse>

This document includes the reference to the WFS URL of the model output, which can be used in any client able to consume WFS data:

http://143.225.105.71/cgi-bin/crisma-mapserver.cgi?map=/var/www/wps-outputs/pywps-5934ecfc-1349-11e4-bbd6-8222a4c2e5f7.map&SERVICE=WFS&REQUEST=GetFeature&VERSION=1.0.0&TYPENAME=rnv_elaboration

For example, in the Quantum GIS software the WFS can be accessed as illustrated in Figure 80:

Figure 80. Example of access to the WFS.

The output data in WFS format are downloaded and shown in the desktop GIS application (see Figure 81). Any desired style can then be applied to the layer geometries shown in the figure, in order to illustrate the characteristics associated with the features (e.g. the interruption probability of a road segment, as reported in the table in Figure 31).


Figure 81. QGIS showing the WFS layer received from the WPS after running the RNV model.


Appendix (C) – Sequential snapshots of the smoke concentration simulation

The following figure presents the sequential evolution of the smoke released and dispersed by the forest fire scenario of Pilot D. The simulation was carried out with the forest fire behaviour simulation tool FireStation, for a height of 2 m above ground.


Figure 82. Evolution of the PM2.5 concentration at 2.0 m during the first 8 hours of the forest fire simulated with the FireStation software. Smoke distribution: (a) at ignition time; (b) after 2 hours; (c) after 3 hours; (d) after 4 hours; (e) after 5 hours; (f) after 6 hours; (g) after 7 hours; (h) after 8 hours.
