Epistemic Uncertainties in Fragility Functions Derived from Post-Disaster Damage Assessments
Downloaded from ascelibrary.org by University of Illinois At Urbana on 06/20/18. Copyright ASCE. For personal use only; all rights reserved.
David B. Roueche, A.M.ASCE 1; David O. Prevatt, Ph.D., F.ASCE 2; and Franklin T. Lombardo, Ph.D., A.M.ASCE 3
Abstract: Fragility functions define the probability of meeting or exceeding some damage measure (DM) for a given level of engineering demand (e.g., base shear) or hazard intensity measure (IM; e.g., wind speed or peak ground acceleration). Empirical fragility functions refer specifically to fragility functions developed from posthazard damage assessments, and, as such, they define the performance of structures or systems as they exist in use and under true natural hazard loading. This paper describes major sources of epistemic uncertainty in empirical fragility functions for building performance under natural hazard loading, and develops and demonstrates methods for quantifying these uncertainties using Monte Carlo simulation. The uncertainties are demonstrated using a dataset of 1,241 residential structures damaged in the May 22, 2011, Joplin, Missouri, tornado. Uncertainties in the intensity measure (wind speed) estimates were the largest contributors to the overall uncertainty in the empirical fragility functions. With a sufficient number of samples, uncertainties due to potential misclassification of the observed damage levels and to sampling error were relatively small. The methods for quantifying uncertainty in empirical fragility functions are demonstrated using tornado damage observations, but are applicable to any other natural hazard as well. DOI: 10.1061/AJRUA6.0000964. © 2018 American Society of Civil Engineers.
Introduction

Fragility functions represent the conditional probability that a system or component experiences damage at or above a specified level, given a specified level of demand or intensity. For example, a fragility function may express the probability of a building collapsing for a given peak ground acceleration. By nature, fragility functions are probabilistic. However, it is not always apparent which uncertainties, or what magnitudes of them, are incorporated within a given fragility function. Fragility functions are generally expected to represent the aleatory uncertainty (natural variability) in the process (Bradley 2010). For example, the stochastic properties of the building materials and components used in a building introduce aleatory uncertainty in the resistance capacities of components and systems, and thus in the fragility functions representing those systems. There also exists epistemic uncertainty, which is scientific uncertainty in the model of the process because of limited knowledge, stemming from incomplete data or modeling assumptions (Padgett and DesRoches 2007); examples include the wind speed estimates at different locations in a tornado damage path based on wind field models, or the classification of damage into discrete damage measures. The distinction between aleatory and epistemic uncertainty, though convenient, can be blurred, particularly in reliability engineering.
1 Assistant Professor, Dept. of Civil Engineering, Auburn Univ., Auburn, AL 36849 (corresponding author). ORCID: https://orcid.org/0000-0002-4329-6759. E-mail: [email protected]
2 Associate Professor, Dept. of Civil and Coastal Engineering, Univ. of Florida, Gainesville, FL 32611. E-mail: [email protected]
3 Assistant Professor, Dept. of Civil and Environmental Engineering, Univ. of Illinois at Urbana-Champaign, Urbana, IL 61801. E-mail: [email protected]
Note. This manuscript was submitted on August 25, 2016; approved on November 2, 2017; published online on March 16, 2018. Discussion period open until August 16, 2018; separate discussions must be submitted for individual papers. This paper is part of the ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part A: Civil Engineering, © ASCE, ISSN 2376-7642.
Der Kiureghian and Ditlevsen (2009) made the simplifying distinction that epistemic uncertainty is the variability that can be reduced by the analyst through more or better-quality data and models. This distinction is generally adopted for this study, but the authors recognize that arguments could be made as to whether all or any of the uncertainties addressed in this paper are wholly epistemic. The inclusion of epistemic uncertainty in fragility functions has long been a topic of interest in seismic engineering. Kennedy et al. (1980) noted that fragility functions and their uncertainty can be represented by three properties: (1) the median capacity defined in terms of the intensity measure (IM), e.g., wind speed; (2) the underlying randomness of capacity about the median (i.e., the aleatory uncertainty); and (3) the uncertainty in the median capacity (i.e., the epistemic uncertainty). Padgett and DesRoches (2007) evaluated the sensitivity of fragility functions for retrofitted bridges to uncertainties in earthquake ground motions, structure geometry, and modeling parameters; ground motion uncertainty was found to have the greatest influence on the resulting fragility functions. Bradley (2010) evaluated epistemic uncertainties in component fragility functions under earthquake loading, specifically finite sampling, loading protocol, and host-to-target uncertainty. Other studies have addressed ground motion intensity uncertainty in detail, proposing kernel smoothing functions (Lallemant et al. 2015; Noh et al. 2015) and Bayesian methods (Yazgan 2015) for directly incorporating uncertainty of the IM into the development of fragility functions. Within wind engineering, epistemic uncertainties have been acknowledged but generally have not been addressed quantitatively. Ellingwood et al.
(2004) identified several sources of epistemic uncertainty in analytically based wind reliability assessments, including simplified models of the structures, resistance capacities of structural members based on small data samples, and wind exposures of buildings. Li and Ellingwood (2006) explicitly considered uncertainties in wind speed probabilities (i.e., return periods) and structural resistance databases in analytical reliability assessments of residential structures and observed that the choice of wind model had the most significant effect on reliability assessments of low-rise
ASCE-ASME J. Risk Uncertainty Eng. Syst., Part A: Civ. Eng., 2018, 4(2): 04018015
residential construction to hurricanes. Ciampoli et al. (2011) noted several epistemic uncertainties in wind reliability assessments, specifically roughness length, aerodynamic loading coefficients, and aeroelastic wind load models. Quantification of uncertainties in empirically derived fragility functions for wind engineering is lacking; indeed, to the authors' knowledge, no empirical fragility functions existed for wind engineering prior to those developed by the authors after the 2011 Joplin, Missouri, tornado (Roueche et al. 2017). This is despite their significant potential both for validating analytically derived fragility functions and for assessing building performance under hazard conditions for which limited engineering knowledge is available to develop accurate analytical models. This study summarizes the primary sources of epistemic uncertainty in empirical fragility functions and describes a methodology for quantifying these uncertainties. The method is demonstrated using damage observations and tornado intensity estimates for 1,241 residential structures damaged in the May 22, 2011, Joplin, Missouri, tornado. The methodology, though demonstrated for a tornado, is equally appropriate for fragility functions developed from posthurricane damage data or data from any other natural hazard event. The section "Development of Empirical Fragility Functions" describes the development of empirical fragility functions. Then, common sources of epistemic uncertainty are qualitatively introduced. Next, the sources of epistemic uncertainty in empirical tornado fragility functions are quantified using the 2011 Joplin, Missouri, tornado. Finally, the findings are related to future post-disaster damage assessments and the main conclusions of the study are summarized.
Development of Empirical Fragility Functions

Empirical fragility functions are developed from damage observations and hazard intensity measurements or estimates at multiple damage sites. At each site, the maximum hazard intensity level experienced and the final damage measure of the structure or system must be known or estimated. The damage observations are typically obtained through field or remote investigations, whereas hazard intensity measures (IMs) are often estimated from numerical models conditioned to the specific hazard event (e.g., a tornado with estimated damage width, peak wind speed, and core radius). Then, for a given damage measure of interest, for example, building collapse, each site is classified as to whether the damage measure of interest was met or exceeded. If the damage measure was met or exceeded, the damage state (DS) for the site is given a value of 1, indicating failure. If the damage measure did not occur at the site, it is given a value of 0. This transforms the discrete damage measure classifications into a dichotomous failure dataset for a given damage measure (DM). Each damage observation site then consists of the dichotomous damage state (DS, either 0 or 1) and the intensity measure (IM) estimated or observed at the site, which could be wind speed, peak ground acceleration, wave height, or any other measure. A continuous function, such as a lognormal cumulative distribution function (CDF), can then be fit to the data pairs using maximum likelihood estimation or another optimization routine to obtain the best-fit function parameters. The maximum likelihood formulation for the lognormal CDF is given as

$$\{\hat{\mu}, \hat{\sigma}\} = \underset{\mu,\sigma}{\arg\max} \sum_{i=1}^{m} \left[ n_i \ln \Phi\!\left(\frac{\ln(IM_i) - \mu}{\sigma}\right) + (N_i - n_i) \ln\!\left(1 - \Phi\!\left(\frac{\ln(IM_i) - \mu}{\sigma}\right)\right) \right] \quad (1)$$
Fig. 1. Illustration of the development of an empirical fragility curve including the aleatory (within model) and the epistemic (between models) uncertainty
where μ̂ and σ̂ = lognormal mean and standard deviation that maximize the likelihood function; IM_i = hazard intensity measure at a given level i; Φ = standard normal CDF; and n_i and N_i = number of failures and total number of observations, respectively, at IM_i. For typical systems with stochastic qualities, there will not be perfect separation between the failures and nonfailures: failures and nonfailures will be mixed, even among sites experiencing the same or nearly the same hazard intensity. This represents the aleatory uncertainty. Epistemic uncertainty can be illustrated by considering the case in which there are, say, three different models for estimating the hazard intensity at each site. This would result in three separate fragility functions, and the separation between them illustrates the epistemic uncertainty, as demonstrated in Fig. 1.
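As a concrete illustration of Eq. (1), the sketch below fits the lognormal fragility parameters by maximum likelihood. The binned damage counts are hypothetical, and the SciPy-based Nelder-Mead optimization is one of several valid routines, not the specific implementation used in the study:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_lognormal_fragility(im, n_fail, n_total):
    """Fit lognormal-CDF fragility parameters (mu, sigma) by maximum
    likelihood per Eq. (1). `im` holds the intensity-measure levels,
    `n_fail` the failures and `n_total` the observations at each level."""
    im, n_fail, n_total = map(np.asarray, (im, n_fail, n_total))

    def neg_log_like(theta):
        mu, sigma = theta
        if sigma <= 0:
            return np.inf
        p = norm.cdf((np.log(im) - mu) / sigma)
        p = np.clip(p, 1e-12, 1 - 1e-12)  # guard against log(0)
        return -np.sum(n_fail * np.log(p) + (n_total - n_fail) * np.log(1 - p))

    # Initial guess: log of the median IM level, moderate dispersion
    res = minimize(neg_log_like, x0=[np.log(np.median(im)), 0.3],
                   method="Nelder-Mead")
    return res.x  # (mu_hat, sigma_hat)

# Hypothetical binned damage data: wind speeds (m/s), failures, totals
mu_hat, sigma_hat = fit_lognormal_fragility(
    im=[30, 40, 50, 60, 70], n_fail=[1, 5, 12, 18, 20],
    n_total=[20, 20, 20, 20, 20])
median_capacity = np.exp(mu_hat)  # Pf50: IM with 50% failure probability
```

Exponentiating the fitted lognormal mean recovers the median failure capacity (Pf50) used throughout the uncertainty analysis.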
Sources of Uncertainty in Empirical Fragility Functions

The degree of uncertainty in empirical fragility functions can vary significantly from one posthazard damage assessment to another, depending upon the attention to detail in the preparation and conduct of the assessment, the quality and quantity of the data available, and the choice of methods for obtaining the fragility functions from the empirical damage data. Several common sources of uncertainty can compound within the fragility functions. These are described in the following sections and broadly summarized in Fig. 2.

Uncertainty in Intensity Measure

The estimation of empirical fragility functions is explicitly linked to the estimation of the IM. As such, any uncertainty in the IM estimation will propagate into the development of the empirical fragility functions. This is often a major source of uncertainty in empirical fragility functions because IM observations are typically not directly measured at more than a couple of points, if any, within the region of affected structures. Observation stations, such as the Automated Surface Observing System (ASOS) or Advanced
Fig. 2. Sources of uncertainty in empirical fragility assessments for wind hazards; sources in bold text are considered in this study (adapted from Ioannou et al. 2012)
National Seismic System (ANSS), do not have sufficient spatial resolution to measure local, spatial, and temporal variations in IM for a given hazard event. Even when present, they are often damaged or destroyed (Blanchard 2013; Blessing and Masters 2005), which makes relying upon current networks of sensors or observation stations for IM estimates problematic. Thus, the development of empirical fragility functions relies upon numerical models of the IM distribution throughout the affected region. For earthquakes, the IM distribution is often estimated using various ground motion prediction equations (GMPEs), which predict the IM based upon explanatory parameters such as the earthquake source and distance from it, wave propagation path, and local site conditions (Stewart et al. 2015). For hurricane events, the IM distribution can be estimated from a number of wind field models (Holland 1980; Vickery and Twisdale 1995) that predict wind speed based upon such factors as the size of the hurricane, minimum pressure, translation speed, and terrain and ground observations [e.g., Balderrama et al. (2011)]. For tornadoes, the wind field is typically modeled based on input parameters such as the translation speed and size of the vortex, ratio of tangential to radial wind velocity components, and peak wind velocity (Holland et al. 2006; Lombardo et al. 2015; Twisdale and Dunn 1983). In all of these models, uncertainty is present in the model parameters, which are often based upon judgment and general observations rather than direct measurements. Thus, for a given hazard event, there often exists an ensemble of IM distributions that are plausible, which contributes to the uncertainty in empirical fragility functions. A few studies have directly addressed IM uncertainty in the development of empirical fragility functions. Noh et al. 
(2015) proposed the use of weighted kernel density functions to estimate the nonparametric relationship between IM and the probability of meeting or exceeding a specific damage measure. Sites with less IM uncertainty are more heavily weighted. Lallemant et al. (2015) considered the uncertainty in peak ground acceleration estimates in the development of empirical fragility functions under earthquake loading. They recommended estimating the fragility function parameters using the ensemble median IM of the models at each spatial point, inversely weighted by the lognormal variance of the
IM estimates at the same point. Similar to the Noh et al. (2015) method, this weighting method serves to give larger influence to points where the IM is more certain, devaluing points where the IM uncertainty is larger. This method assumes that each ensemble model is equally likely; i.e., there is no rank to the models, hence the use of the median IM. Yazgan (2015) proposed a framework for incorporating IM uncertainty in empirical fragility functions using Bayesian posterior likelihood analysis. The outcome of this method is a single most-likely fragility function for each damage measure of interest, with the variance of the fragility function increasing with increased uncertainty in the IM simulations. In both the Lallemant et al. (2015) and Yazgan (2015) studies, it was noted that uncertainty in the IM estimates could be reduced by conditioning the simulations on site observations where available. Ideally, there would be several reliable IM measurements throughout the damage area. However, in lieu of these, any other independent estimates are viable, such as well-documented damage to engineered structures or tree-fall patterns. These independent estimates will also have associated uncertainties, but can serve to narrow the spread of the ensembles.

Uncertainty in Damage Measure Classification

Empirical fragility functions are developed from damage data gathered from field surveys after hazard events. For ease of classification, the damage is typically constrained into discrete damage measures (or categories) that are progressive in nature. Such damage measures include the ATC-13 nomenclature of none, slight, light, moderate, heavy, major, and destroyed for earthquakes (Rojahn and Sharpe 1985). Examples of damage measures for wind events include the degrees of damage (DODs) described in the Enhanced Fujita (EF) scale for tornadoes (McDonald et al.
2006), or the Hazus–Multi-Hazard (HAZUS-MH) damage measures of no damage; minor, moderate, or severe damage; and destruction for hurricanes (Vickery et al. 2006). Classifying damage data into discrete damage measures can be performed subjectively through human judgment, as is done in most post-disaster assessments (Marshall 2002; Marshall et al. 2008; McDonald et al. 2006; Prevatt et al. 2012a), or programmatically, using damage detection algorithms, as is the case with many
automated remote sensing applications (Kashani et al. 2014; Kashani and Graettinger 2015). With either method, there is a risk of misclassification. The potential for misclassification in human-assigned damage measures largely depends upon the following factors: (1) the skill and experience level of the assessor with regard to past assessments; (2) the quality of the data from which classifications are made; (3) the precision of the damage scale; and (4) the consistency in the use of the damage scale by the survey team. Any one or a combination of these factors can increase the uncertainties in the resulting fragility functions.

Uncertainty from Finite Sampling

In the damage data collection process, it is often not feasible to assess the damage level of every affected structure. As a result, assessment teams often must choose certain areas of the affected region to survey in detail. This can lead to potential biases in the data collection process (e.g., survey teams gravitate toward more heavily damaged areas, resulting in sparse data for certain damage measures or biased damage ratios for certain IM values) if care is not taken to collect a sufficiently representative sample set (Porter et al. 2007). Even with representative sample sets, fragility functions developed from a sample dataset will not be fully representative of the population; thus, another random sample from the same population would lead to different results.

Uncertainty in Fitting of Fragility Functions

The uncertainty in the fitting of fragility functions can be classified as the uncertainty between candidate probability distributions (e.g., lognormal and gamma) and the uncertainty within a given probability distribution. The former is considered epistemic uncertainty in this study. The development of fragility functions from empirical damage data requires assumptions about the underlying distribution of the conditional probability of exceedance.
Generally speaking, the primary requirement of these distributions is that they be bounded on the interval zero to one, because the conditional probabilities of exceedance are bounded by the same interval. The lognormal cumulative distribution function is most often used (Porter et al. 2007), but other distributions, such as the normal and gamma CDFs, are also valid. Fitting the chosen CDF to the empirical data is typically performed through an optimization routine such as maximum likelihood, the method of moments, or Bayesian methods (Lallemant et al. 2015).
Quantifying Epistemic Uncertainty Using Bootstrapping

The influence of epistemic uncertainty in fragility functions can be assessed using methods broadly classified into two categories: analytical and simulation (Bradley 2010). Simulation techniques are better suited for data whose underlying distributions are not well known, and are used in this study. One of the most versatile resampling methods is bootstrapping (Efron 1979), which randomly samples with replacement from the finite dataset. The desired parameters (e.g., the lognormal mean and standard deviation for a fragility function based on the lognormal CDF) can then be calculated from the new sample set. This is performed N times such that there are N estimates of the sample parameters, from which the mean, variance, correlations, and biases can be evaluated to make nonparametric inferences about the population. This method has been commonly used to assess various sources of uncertainty in empirical fragility functions for earthquakes (Bradley 2010; Ioannou et al. 2012; Noh et al. 2015; Rota et al. 2008; Yazgan 2015) and is adopted for this study. The uncertainties quantified in the following sections are generally presented in terms of a 95% confidence interval in the median failure capacity, such that

$$CI = e^{\mu_{0.975}} - e^{\mu_{0.025}} \quad (2)$$

where μ_{0.975} and μ_{0.025} = 97.5th and 2.5th percentiles of the N estimates of the lognormal mean parameter, μ. The median failure capacity, abbreviated Pf50, represents the IM at which there is a 50% probability of failure.
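The bootstrap procedure and the confidence-interval width of Eq. (2) can be sketched as follows. The per-site damage-state data here are synthetic (true median capacity of 50 m/s), and the replicate count is kept small for illustration; the study itself used far more simulations:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

def fit_mu_sigma(im, ds):
    """MLE of lognormal fragility parameters for dichotomous site data
    (ds = 1 for failure, 0 otherwise)."""
    def nll(theta):
        mu, sigma = theta
        if sigma <= 0:
            return np.inf
        p = np.clip(norm.cdf((np.log(im) - mu) / sigma), 1e-12, 1 - 1e-12)
        return -np.sum(ds * np.log(p) + (1 - ds) * np.log(1 - p))
    return minimize(nll, x0=[np.log(np.median(im)), 0.3],
                    method="Nelder-Mead").x

# Synthetic observations: capacities lognormal(median 50 m/s, beta 0.15)
im = rng.uniform(30, 80, size=500)
ds = (rng.lognormal(np.log(50), 0.15, size=500) < im).astype(float)

# Bootstrap: resample sites with replacement, refit each replicate
mus = []
for _ in range(200):
    idx = rng.integers(0, len(im), size=len(im))
    mu_b, _ = fit_mu_sigma(im[idx], ds[idx])
    mus.append(mu_b)

lo, hi = np.percentile(mus, [2.5, 97.5])
ci_width = np.exp(hi) - np.exp(lo)  # Eq. (2): CI width on Pf50
```

The spread of the bootstrapped μ estimates, mapped through the exponential, gives the confidence interval on the median capacity that the following sections report.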
Epistemic Uncertainty in Fragility Functions from the 2011 Joplin, Missouri, Tornado

The tornado that struck Joplin, Missouri, on May 22, 2011, was one of the most powerful and deadly tornadoes to strike the United States. Peak 3-s gust wind speeds at 10 m height were estimated to be between 78 and 90 m/s, and the tornado caused 158 direct fatalities and damaged or destroyed 7,500 structures (Kuligowski et al. 2014). Prevatt et al. (2012b) conducted a post-tornado assessment of the damage, documenting damage to 1,349 single-family residential structures using geotagged photographs. The damage to these homes was classified using the 10 damage measures for one-story and two-story residential structures described in the EF scale (McDonald et al. 2006) and provided in Table 1. The use of geotagged photographs enabled each damaged
Table 1. Degrees of Damage (DOD) and Associated Wind Speed Ranges for One- and Two-Story Family Residences from Revision 2 of the EF Scale (Data from McDonald et al. 2006)

| DOD | Damage description | EXP (m/s) | LB^a (m/s) | UB^a (m/s) |
|-----|--------------------|-----------|------------|------------|
| 1 | Threshold of visible damage | 29.1 | 23.7 | 35.8 |
| 2 | Loss of roof covering material (<20%) | 35.3 | 28.2 | 43.4 |
| 3 | Broken glass in doors and windows | 42.9 | 35.3 | 51.0 |
| 4 | Uplift of roof deck and loss of significant roof covering (>20%); collapse of chimney; garage doors collapse inward; failure of porch or carport | 43.4 | 36.2 | 51.9 |
| 5 | Entire house shifts off foundation | 54.1 | 46.0 | 63.0 |
| 6 | Large sections of roof structure removed; most walls remain standing | 54.5 | 46.5 | 63.5 |
| 7 | Exterior walls collapsed | 59.0 | 50.5 | 68.4 |
| 8 | Most walls collapsed in bottom floor, except small interior rooms | 67.9 | 56.8 | 79.6 |
| 9 | All walls collapsed | 76.0 | 63.5 | 88.5 |
| 10 | Destruction of engineered or well-constructed residence; slab swept clean | 89.4 | 73.8 | 98.3 |

^a EXP, LB, and UB represent the expected, lower-bound, and upper-bound wind speeds, respectively, associated with each DOD. The range of values is used to adjust the wind speed estimate based on influencing factors such as quality of construction or noticeable deterioration.
Fig. 3. Locations of damaged structures in Joplin, Missouri, assessed by Prevatt et al. (2012b), with respect to the wind speeds estimated using the tree-fall-conditioned model of Lombardo et al. (2015)
structure and its associated damage measure to be located spatially within the tornado path. Kuligowski et al. (2014) also conducted a post-tornado assessment for the Joplin tornado and developed a wind field model conditioned on the tree-fall patterns observed in over 5,000 felled trees. Of the 1,349 structures assessed by Prevatt et al. (2012b), 1,241 were within the bounds of the tree-fall-conditioned wind field model, as shown in Fig. 3. These damage observations and wind speed estimates were used to develop empirical tornado fragility functions for one-story and two-story residential structures (Roueche et al. 2017). The IM for the fragility functions was taken as the 3-s gust wind speed at 10 m height above ground level in light suburban exposure. A complete description of the dataset and the source of the IM estimates is provided in Roueche et al. (2017) and Lombardo et al. (2015). Uncertainties in empirical fragility functions are demonstrated in the following sections using this dataset of 1,241 homes.

Uncertainty in the Joplin Tornado Wind Speeds

The spatial distribution of peak 3-s gust wind speeds at 10 m height in the 2011 Joplin, Missouri, tornado was estimated throughout the damage path using a tornado wind field model conditioned to tree-fall patterns (Kuligowski et al. 2014; Lombardo et al. 2015). The input parameters for this model include the translation speed, the ratio of radial and tangential velocity components, the rate at which
wind speeds decay from the edge of the tornado vortex, the width of the tornado vortex, the ratio between the maximum rotational wind speed within the tornado and the translation speed of the tornado, and the critical wind speed at which trees are expected to fall. The model outputs the damage width (i.e., the width of the swath of felled trees), the ratio of damage widths on either side of the tornado, and the expected direction of tree-fall for each grid point within the tornado path. The model outputs were compared to the observed tree-fall patterns at 10 cross-sections along the tornado path. A range of values was chosen for each input parameter, and every combination of parameters was evaluated to determine the input parameters that best matched the observed tree-fall patterns. Thus, for each assessed site within the tornado damage path, there exists an ensemble of wind speed estimates, defining the IM uncertainty. The tree-fall patterns provide one independent source for wind speed estimates, but this methodology is still susceptible to biases and uncertainties. Recognizing this, Roueche et al. (2017) additionally conditioned the wind speed models on the structural damage and developed a consensus model fit to both the tree-fall and structural damage patterns. Between the tree-fall-based models and the consensus model, there exists an ensemble of 82 wind field models for the Joplin tornado representing the uncertainty in the IM estimate. The consensus wind field model has a peak wind speed of 78 m/s (175 mph), which is lower than the National Weather
Service estimate of 89 m/s (200 mph). Further detailed discussion of the consensus model can be found in Roueche et al. (2017). The IM uncertainty is quantified through Monte Carlo simulation so that the 2.5th and 97.5th percentile median wind speed capacities (Pf50) can be obtained to evaluate the width of the 95% confidence interval. Weighted kernel density functions (Gisbert 2003) are fit to the distribution of wind speed estimates formed at each site by the 82 wind field models, such that the value of the density function f(·) at a given site j for a given wind speed im is given as in Eq. (3)

$$f(im)_j = \frac{1}{h} \sum_{i=1}^{n} \omega_{i,j} K\!\left(\frac{im - IM_{i,j}}{h}\right) \quad (3)$$

where ω_{i,j} = weight assigned to the wind speed estimate IM_{i,j} from model i at site j; K(·) = Gaussian kernel function; and h = bandwidth that determines the smoothness of the kernel density function. Here, h is taken as Silverman's optimum bandwidth for the Gaussian kernel (Silverman 1986)

$$h = 1.06\,\hat{\sigma}\,n^{-1/5} \quad (4)$$

where σ̂ = sample standard deviation; and n = number of samples. Gaussian weights are assigned based upon the departure of each of the 82 wind field models from the consensus model, relative to the variability at the given site, as shown in Eq. (5)

$$\omega_{i,j} = \varphi\!\left(\frac{|IM_{i,j} - IM_{cons,j}|}{\sigma_j}\right) \quad (5)$$

where IM_{i,j} = IM estimate from wind field model i = 1:82 at site j; IM_{cons,j} = consensus model IM estimate at site j (considered the best model); σ_j = standard deviation of the 82 IM estimates at site j; and φ(·) = standard normal probability density function. Using Monte Carlo methods with 10,000 simulations, the inverse kernel CDF (Frey and Rhodes 1998) is randomly sampled for each simulation, and the obtained probability of nonexceedance is applied consistently at each site for the given simulation to select the wind speed at the site. For example, if a probability of nonexceedance of 0.59 is randomly drawn at the beginning of a
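The weighted kernel CDF of Eqs. (3)-(5) and the consistent inverse-CDF sampling can be sketched as follows. The ensemble values, consensus estimates, and two-site setup are hypothetical (the study used 82 models at 1,241 sites), and the inverse CDF is approximated by interpolation on a wind speed grid:

```python
import numpy as np
from scipy.stats import norm

def weighted_kernel_cdf(im_models, im_consensus, grid):
    """Weighted Gaussian kernel CDF of the ensemble wind speed estimates
    at one site, per Eqs. (3)-(5). `im_models` are the ensemble estimates
    and `im_consensus` is the consensus-model estimate at the same site."""
    im_models = np.asarray(im_models, dtype=float)
    n = len(im_models)
    sigma_j = im_models.std(ddof=1)
    h = 1.06 * sigma_j * n ** (-1 / 5)  # Silverman's bandwidth, Eq. (4)
    w = norm.pdf(np.abs(im_models - im_consensus) / sigma_j)  # Eq. (5)
    w = w / w.sum()  # normalize so the mixture density integrates to 1
    # Weighted mixture of Gaussian kernels, evaluated as a CDF on the grid
    return np.sum(w * norm.cdf((grid[:, None] - im_models[None, :]) / h),
                  axis=1)

# One simulation: draw a single probability of nonexceedance and apply it
# consistently at every site through each site's inverse kernel CDF
grid = np.linspace(10, 120, 2000)
sites = [([45, 52, 49, 60, 55], 51),  # (ensemble estimates, consensus)
         ([70, 68, 75, 80, 72], 73)]
p = np.random.default_rng(1).uniform()
winds = []
for models, cons in sites:
    cdf = weighted_kernel_cdf(models, cons, grid)
    winds.append(float(np.interp(p, cdf, grid)))  # inverse CDF lookup
```

Drawing one probability of nonexceedance per simulation, rather than one per site, preserves the spatial correlation of the wind field across sites, which is the point of the consistent sampling described above.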
simulation, then for every site the estimated wind speed is taken as the 59th percentile wind speed from the kernel density CDF of wind speed estimates at the site. In essence, this expands the 82 wind field models to an infinite number of wind field models based on the kernel functions established at each site. The method is illustrated in Fig. 4, in which the top row of plots shows the distribution of wind speed estimates at three random sites along with the weighted kernel density function of the wind speed estimates at each site, with the "x" indicating the wind speed estimate of the consensus model. The bottom row of plots shows the inverse CDF of the kernel density functions and the wind speed value selected at each site for a randomly selected probability of nonexceedance of 0.59. Repeating this process for all sites, randomly drawing a new probability of nonexceedance between 0 and 1 for each of 10,000 simulations, gives 10,000 wind speed estimates at each site, from which 10,000 estimates of the fragility parameters for each limit state can be obtained. The median and 95% confidence interval of the fragility parameters are obtained from the distribution of fragility parameter estimates. Fig. 5 plots the 95% confidence interval of the fragility functions, representing the uncertainty in the intensity measure, as quantified using the methodology described previously. Following the convention of previous studies, e.g., Shinozuka et al. (2000), the 95% confidence interval curves differ only in median wind speed because the same log-standard deviation is used for all three curves. The widths of the 95% confidence intervals of the median wind speed for all DODs range between 8.0 m/s (18 mph) and 10.8 m/s (24 mph).

Uncertainty in Damage Measure Classification in the Joplin Tornado

Damage measures in the May 22, 2011, Joplin, Missouri, tornado were classified using the 10 degrees of damage for one-story and two-story residential structures in the EF scale.
The damage measures are reasonably descriptive compared to other damage scales, and with 10 damage levels, each describes a specific level of damage. The field survey that assigned the EF ratings, described in more detail in Prevatt et al. (2012b), was conducted by a team of engineering faculty, students, local engineers, and wood scientists. Damage was documented primarily in one of two ways. Some teams used handheld digital cameras with GPS capabilities, which
Fig. 4. Illustration of method for introducing random bias into IM selection for three sites using the inverse kernel CDF
ASCE-ASME J. Risk Uncertainty Eng. Syst., Part A: Civ. Eng., 2018, 4(2): 04018015
Fig. 5. Uncertainty bounds in fragility curves due to IM uncertainty
enabled the team members on foot to take pictures from all around the structure, capturing the full extent of damage. One team used a camera mounted to a vehicle, oriented 90° to the road and capturing images at regularly spaced intervals. Between the two methods, not every home rated from photographs had a full 360° view of the damaged structure; if a home had more extensive damage on a side of the house that was not captured by the cameras, that damage would not have informed the rating, increasing the chance of an erroneous DOD rating being assigned. Once the images had been captured, teams of two parsed the images and assigned DOD ratings to individual structures. A discrete probability distribution is subjectively defined to reflect the uncertainty in the damage measure classification. For example, the probability distribution can be defined such that there is a 70% probability that the assigned DOD rating is correct, a 20% probability that the rating is in error by 1 DOD, and a 10% probability that it is in error by 2 DODs. The resulting
probability matrix is provided in Table 2 and can be adjusted as needed to reflect the expected uncertainty. The probability matrix can also be estimated experimentally during or after a field survey. Multiple groups could rate the same structures to examine the uncertainty in the assigned DOD rating as a function of the number of reviewers in each group, the experience level of the reviewers, or any other relevant factors. The results presented in this study, however, are based on the subjective probability matrix shown in Table 2. Through Monte Carlo simulation, the probability matrix from Table 2 is used to generate a set of simulated observations, O = {DM, IM}, where DM_ij is the DOD rating randomly sampled from the probability matrix based on the DOD assigned by the survey team at site i = 1, ..., 1,241 for simulation j = 1, ..., 10,000, and IM_i is the estimated wind speed at site i from the best wind field model. Fragility parameters based on the lognormal CDF are estimated from each simulated set of observations, giving 10,000 estimates of μ and σ. This entire process is then repeated for multiple
Table 2. Probability Matrix for Uncertainty in Damage Measures (DOD Ratings)

Assigned DOD |                      Probability of true DOD rating
rating       | D0    D1    D2    D3    D4    D5    D6    D7    D8    D9    D10
D0           | 0.70  0.20  0.10  0     0     0     0     0     0     0     0
D1           | 0.10  0.70  0.10  0.10  0     0     0     0     0     0     0
D2           | 0.05  0.10  0.70  0.10  0.05  0     0     0     0     0     0
D3           | 0     0.05  0.10  0.70  0.10  0.05  0     0     0     0     0
D4           | 0     0     0.05  0.10  0.70  0.10  0.05  0     0     0     0
D5           | 0     0     0     0.05  0.10  0.70  0.10  0.05  0     0     0
D6           | 0     0     0     0     0.05  0.10  0.70  0.10  0.05  0     0
D7           | 0     0     0     0     0     0.05  0.10  0.70  0.10  0.05  0
D8           | 0     0     0     0     0     0     0.05  0.10  0.70  0.15  0
D9           | 0     0     0     0     0     0     0     0.10  0.10  0.70  0.10
Fig. 6. Change in width of the 95% confidence intervals of the median wind speed capacity with increasing probability that the correct DM was assigned
variations of the DM probability matrix (i.e., revisions to Table 2), with the probability of correct DM assignment (P_{i=j}) ranging from 0.5 to 1. Fig. 6 demonstrates the effect of the probability of correctly assigning the DM (i.e., the DOD) on the width of the 95% confidence intervals of the median wind speed. As expected, with a probability of 1, meaning every DM is correctly assigned, the confidence interval widths are 0 because there is no uncertainty. With the probability of a correctly assigned DM at 0.5, meaning a 50% probability that the team correctly identified the true DOD, the widths of the 95% confidence intervals of the median capacity vary between 1.6 m/s for DOD4 and 6.4 m/s for DOD9.
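The Monte Carlo resampling of damage measures can be sketched as follows. The matrix-building helper is hypothetical: a simplified confusion matrix with error mass split between ±1 and ±2 DOD, standing in for the subjective matrix of Table 2:

```python
import numpy as np

def dm_probability_matrix(n_dod=11, p_correct=0.7, p_err1=0.2, p_err2=0.1):
    """Schematic stand-in for Table 2: row a gives the distribution of the
    true DOD when DOD a was assigned; error mass is split between +/-1 and
    +/-2 DOD, and rows are renormalized at the scale boundaries."""
    P = np.zeros((n_dod, n_dod))
    for a in range(n_dod):
        P[a, a] = p_correct
        for off, mass in ((1, p_err1 / 2), (2, p_err2 / 2)):
            for t in (a - off, a + off):
                if 0 <= t < n_dod:
                    P[a, t] += mass
        P[a] /= P[a].sum()
    return P

rng = np.random.default_rng(1)
assigned = rng.integers(0, 11, size=1241)   # stand-in for the surveyed DOD ratings
P = dm_probability_matrix(p_correct=0.7)
n_sim = 200                                  # 10,000 in the study
# Each simulation redraws every site's "true" DOD from its assigned row of P
sims = np.array([[rng.choice(11, p=P[a]) for a in assigned] for _ in range(n_sim)])
```

Each row of `sims` is one perturbed set of damage measures; refitting the lognormal fragility to each row yields the distribution of μ and σ from which the confidence intervals are read.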
Finite Sample Uncertainty

In the Joplin, Missouri, tornado survey, a total of 1,349 structures were surveyed, 1,241 of which fell within the bounds of the treefall-based wind field model, with 1,115 of the 1,241 experiencing some level of damage (DOD > 1). An estimated 7,500 homes were damaged or destroyed in Joplin by the tornado (City of Joplin 2015), meaning this data set represents approximately 15% of the total population. The size of the sample set is well beyond the minimum of 25 per damage measure recommended by Porter et al. (2007). However, the fragility functions developed from the 1,241 surveyed structures are still only a sample, and a different random sample of structures from the same population would result in different fragility functions. This is the finite sample, or out-of-sample, error. The finite sample uncertainty is assessed by bootstrapping (Efron 1979), in which the full data set of 1,241 homes is resampled with replacement 10,000 times, resulting in 10,000 estimates of the fragility parameters μ and σ for each damage measure. From the distribution of μ, the 95% confidence interval is obtained to represent the finite sample uncertainty. Fig. 7 shows the uncertainty due to finite sampling using bootstrapping for resample sizes of 100, 300, 750, and the full sample size of 1,241. The uncertainty is presented as the width of the 95% confidence interval in the median capacity wind speed for each DOD limit state. The results demonstrate that the widths of the 95% confidence intervals in the median capacity wind speeds are less than 5 m/s for all but the highest damage measure, DOD9. The increase in uncertainty in the higher damage measures has been noted in other studies (Bradley 2010; Porter et al. 2007) and is at least
Fig. 7. 95% confidence intervals in the median failure wind speed for each DOD limit state, representing uncertainty due to finite sampling
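The bootstrap assessment of finite sample uncertainty can be sketched as below. The survey data are simulated here, and the per-resample "fit" (the exponential of the mean log wind speed of failures) is a simple stand-in for the paper's full maximum-likelihood lognormal fit:

```python
import numpy as np

def bootstrap_median_ci_width(im, failed, n_boot=2000, rng=None):
    """Width of the bootstrap 95% CI on the fragility median capacity."""
    rng = rng or np.random.default_rng(2)
    n = len(im)
    medians = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)     # resample sites with replacement
        f = failed[idx]
        # Stand-in refit: median capacity from log-moments of failure IMs
        medians[b] = np.exp(np.log(im[idx][f]).mean()) if f.any() else np.nan
    lo, hi = np.nanpercentile(medians, [2.5, 97.5])
    return hi - lo

# Hypothetical survey: capacities lognormal (median 55 m/s); failure if IM >= capacity
rng = np.random.default_rng(3)
im = rng.uniform(30.0, 90.0, 1241)                      # site wind speeds (m/s)
capacity = np.exp(rng.normal(np.log(55.0), 0.2, 1241))
failed = im >= capacity
width = bootstrap_median_ci_width(im, failed)
```

Repeating this for subsample sizes of 100, 300, and 750 (drawing `idx` of that length instead of `n`) reproduces the kind of comparison shown in Fig. 7.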
Table 3. Rankings Indicating the Best Distribution Fit for Each DOD Based on Maximum Likelihoods

DOD rating |                    Model rank (1 = best)
           | 1          2        3        4          5
DOD1       | Lognormal  Gamma    Normal   Weibull    Cauchy
DOD2       | Lognormal  Gamma    Normal   Cauchy     Weibull
DOD3       | Lognormal  Gamma    Normal   Weibull    Cauchy
DOD4       | Lognormal  Gamma    Normal   Cauchy     Weibull
DOD5       | Lognormal  Gamma    Normal   Weibull    Cauchy
DOD6       | Lognormal  Gamma    Normal   Weibull    Cauchy
DOD7       | Lognormal  Gamma    Normal   Weibull    Cauchy
DOD8       | Lognormal  Gamma    Normal   Weibull    Cauchy
DOD9       | Weibull    Normal   Gamma    Lognormal  Cauchy
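Rankings like those in Table 3 come from maximizing the log-likelihood of Eq. (6) for each candidate CDF. A sketch of that fit-and-rank procedure follows, using synthetic binary failure data; the parameterizations, starting values, and the Nelder-Mead optimizer are illustrative assumptions (the Cauchy shape is omitted for brevity):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, weibull_min, gamma as gamma_dist

# Candidate fragility shapes P(failure | im); each uses two parameters (m, s)
shapes = {
    "lognormal": lambda im, m, s: norm.cdf((np.log(im) - m) / s),
    "normal":    lambda im, m, s: norm.cdf((im - m) / s),
    "weibull":   lambda im, m, s: weibull_min.cdf(im, c=s, scale=m),
    "gamma":     lambda im, m, s: gamma_dist.cdf(im, a=s, scale=m / s),
}

def neg_loglik(theta, im, y, cdf):
    # Eq. (6) with one observation per site: n_i in {0, 1}, N_i = 1
    p = cdf(im, *theta)
    if not np.all(np.isfinite(p)):
        return 1e10                       # penalize invalid parameter regions
    p = np.clip(p, 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def rank_models(im, y):
    x0s = {"lognormal": (np.log(np.median(im)), 0.3), "normal": (im.mean(), 15.0),
           "weibull": (im.mean(), 5.0), "gamma": (im.mean(), 5.0)}
    scores = {name: minimize(neg_loglik, x0s[name], args=(im, y, cdf),
                             method="Nelder-Mead").fun
              for name, cdf in shapes.items()}
    return sorted(scores, key=scores.get)  # lowest negative log-likelihood first

# Hypothetical binary failure data generated from a lognormal fragility
rng = np.random.default_rng(4)
im = rng.uniform(30.0, 100.0, 800)
y = (rng.random(800) < norm.cdf((np.log(im) - np.log(55.0)) / 0.25)).astype(float)
ranking = rank_models(im, y)
```

Because every candidate has two parameters, ranking by maximized log-likelihood is equivalent to ranking by AIC, as noted in the text.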
partially a by-product of the methodology for developing empirical fragility functions. For illustration, consider that with a limit state of DOD6, any sites with DOD6, DOD7, DOD8, or DOD9 are considered failures and are given a value of 1, while any sites with a DOD less than 6 are not failures and are given a value of 0. Considering DOD9 as the limit state, only sites with DOD9 would be considered failures, because no higher damage measures were observed in the Joplin dataset. Thus, assuming all DODs are sampled equally, there would be four times fewer failure sites with DOD9 as the limit state than with DOD6. The lower number of failure samples results in greater uncertainty in the estimates because there are fewer "1" points with which to condition the fragility function. It could be argued, then, that oversampling the highest damage measures relative to the other damage measures in post-disaster field surveys would lead to more robust fragility estimates in the higher damage states.

Uncertainty in Fitting of Fragility Functions in the Joplin Tornado
Uncertainty between the different probability distributions used to fit the fragility functions can be assessed by evaluating the goodness of fit. For the Joplin tornado, five distributions are evaluated: the normal, lognormal, Weibull, gamma, and Cauchy distributions. Each CDF is fit to the observed data using maximum likelihood, and the models are ranked by comparing the maximized log-likelihood function for each distribution [Eq. (6)]

\hat{\mu}, \hat{\beta} = \arg\max \sum_{i=1}^{N} \left[ n_i \ln(p_i) + (N_i - n_i) \ln(1 - p_i) \right]    (6)

where \hat{\mu} and \hat{\beta} = fit parameters for a given distribution that maximize the likelihood function; n_i and N_i = number of failures and number of observations at site i; and p_i = probability of failure for the IM at site i, P(ds_i ≥ DS | IM_i), for the given distribution. Because each distribution is defined by two parameters, this method is essentially equivalent to Akaike's information criterion (AIC) (Agresti 2013). Using this criterion, the lognormal is indeed the best model for this data set, as shown in Table 3, achieving the maximum log-likelihood for all but DOD9. The normal, gamma, and Weibull distributions are all reasonable fits, however, and for the lower and middle damage measures are generally in good agreement, as illustrated in Fig. 8. For the higher damage measures, significant differences between the distributions are observed when extrapolating beyond the observations. This reinforces the notion that applying fragility functions beyond the limits of the observations should be avoided (Porter et al. 2007). Because the results demonstrate that the lognormal distribution is the best model for all but DOD9, the epistemic uncertainty stemming from the assumption of the underlying distribution of the fragility functions (i.e., lognormal, Weibull, normal, etc.) is deemed negligible and is not considered in evaluating the combined epistemic uncertainty.

Combined Uncertainty in Empirical Fragility Functions
The uncertainties in the intensity measure, damage measure, and finite sampling were evaluated jointly using 10,000 Monte Carlo simulations in which all of the considered uncertainties were included in each simulation. They were also combined using the square root of the sum of the squares, such that
Fig. 8. Agreement between common distributions used to fit the fragility curves for DOD6 and DOD9; observations are binned as failure rates in 5 m/s increments for visual comparison to the fitted models
Table 4. Width of the 95% Confidence Interval in the Median Capacity (Pf50) of Each DOD, Relative to the Median Capacity, for Each Source of Epistemic Uncertainty Individually and Combined

DOD | Pf50 (m/s) | Intensity measure (%) | Damage measure (%) | Finite sampling (%) | Combined (Monte Carlo) (%) | Combined (RSS^a) (%) | Shinozuka^b (%)
1   | 33.9       | 24.2                  | 7.6                | 5.4                 | 21.5                       | 25.9                 | 5.2
2   | 39.3       | 24.3                  | 4.0                | 4.6                 | 23.1                       | 25.0                 | 8.0
3   | 43.8       | 20.6                  | 2.4                | 3.7                 | 20.5                       | 21.1                 | 5.9
4   | 50.4       | 18.9                  | 1.9                | 3.6                 | 20.1                       | 19.4                 | 6.2
5   | 54.7       | 17.5                  | 1.7                | 3.8                 | 19.0                       | 18.0                 | 5.9
6   | 54.9       | 17.4                  | 2.1                | 3.7                 | 19.1                       | 18.0                 | 5.7
7   | 66.7       | 13.7                  | 2.4                | 4.1                 | 15.3                       | 14.5                 | 4.9
8   | 74.3       | 12.3                  | 3.6                | 5.3                 | 13.3                       | 13.8                 | 5.1
9   | 84.6       | 11.1                  | 6.9                | 9.4                 | 13.9                       | 16.1                 | 5.0

^a Combined uncertainty evaluated using the square root of the sum of the squares of the individual sources of epistemic uncertainty.
^b Uncertainty estimate using the confidence interval method from Shinozuka et al. (2000).
CI_{SS} = \sqrt{CI_{IM}^2 + CI_{DM}^2 + CI_{FS}^2}    (7)
where CI_SS = combined width of the 95% confidence interval in the median capacity; and CI_IM, CI_DM, and CI_FS = individual widths of the 95% confidence intervals in the median capacity due to uncertainties in the intensity measure (IM), damage measure (DM) classification, and finite sampling (FS), respectively. The widths of the confidence intervals for the individual and combined uncertainties are provided in Table 4 as a percentage of the median failure capacity, estimated using the consensus wind field model with no consideration of epistemic uncertainty. Fig. 9 plots the median failure capacity from the consensus wind field model for each DOD with the epistemic uncertainty in the median failure capacity [CI_SS, calculated as shown in Eq. (7)].

As an alternative to the specific sources of uncertainty quantified in this study, Shinozuka et al. (2000) presented a general method to estimate confidence intervals in fragility functions. The lognormal CDF parameters of the fragility function, μ_e and σ_e, for each limit state are estimated from the data set of damage observations and corresponding hazard intensities. A new data set is then simulated by randomly drawing x_i, representing the dichotomous failure state (i.e., 0 or 1), for each im_i based on the probabilities implied by μ_e and σ_e. New parameters, μ_j and σ_j, are then estimated for this new data set. This process is repeated in a Monte Carlo framework with 10,000 simulations until a family of parameters is estimated. The 2.5th and 97.5th percentiles of the distribution of μ realizations are taken to estimate the uncertainty in the fit. This methodology is applied for all nine damage measures observed in the Joplin tornado, and the widths of the resulting confidence intervals are also provided in Table 4. From the combined results, several observations can be made:
1. The widths of the 95% confidence intervals in the median capacity wind speed considering combined epistemic uncertainties are between 8 and 13 m/s;
2. Uncertainty in the intensity measure is the largest source of uncertainty in the fragility functions for all DODs;
3. Uncertainty increases with increasing DOD for damage measure and finite sampling uncertainty, but is similar across all DODs for intensity measure uncertainty;
4. Combining the uncertainties using Monte Carlo simulations or the root sum of squares produces similar results; and
5. The widths of the confidence intervals from Shinozuka et al. (2000) are between 20 and 40% of the confidence interval widths developed from explicitly considering epistemic sources of uncertainty. The Shinozuka et al. (2000) method does, however, reasonably capture the finite sampling uncertainty for all but DOD9.
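The Shinozuka et al. (2000) parametric simulation described above can be sketched as follows. The hazard intensities are hypothetical, and the per-simulation refit (log-moments of the simulated failure IMs) is a simplified stand-in for re-estimating μ_j by full maximum likelihood:

```python
import numpy as np
from scipy.stats import norm

def shinozuka_ci_width(im, mu_e, sigma_e, n_sim=1000, rng=None):
    """Simulate 0/1 failure states from the fitted lognormal fragility,
    refit the median capacity each time, and return the 2.5th-97.5th
    percentile spread of exp(mu) across simulations."""
    rng = rng or np.random.default_rng(5)
    p = norm.cdf((np.log(im) - mu_e) / sigma_e)   # failure probability at each site
    medians = np.empty(n_sim)
    for j in range(n_sim):
        x = rng.random(im.size) < p               # simulated dichotomous states
        # Stand-in refit of the median capacity for this simulated data set
        medians[j] = np.exp(np.log(im[x]).mean()) if x.any() else np.nan
    lo, hi = np.nanpercentile(medians, [2.5, 97.5])
    return hi - lo

rng = np.random.default_rng(6)
im = rng.uniform(30.0, 100.0, 1241)               # hypothetical site intensities (m/s)
width = shinozuka_ci_width(im, mu_e=np.log(55.0), sigma_e=0.3)
```

Because the simulated data are drawn from the fitted model itself, this procedure captures only the statistical (fitting) uncertainty, which is consistent with observation 5: it tracks the finite sampling component but not the intensity or damage measure uncertainties.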
Implications for Future Post-Disaster Damage Assessments
Fig. 9. Median wind speed capacity for each DOD limit state with the epistemic uncertainty about the median wind speed capacity
The focus of this study has been on epistemic uncertainty in fragility functions fit to post-disaster data, which here is broadly classified as uncertainty that can be reduced with more or better-quality data or models. As such, portions of the uncertainty described here could be reduced with careful planning and execution of the assessment itself. This is particularly true for the uncertainties due to finite sampling and damage classification. The results from this study suggest that, if the objective is the development of fragility functions, collecting a sufficiently large sample of damaged and undamaged structures has a larger effect on reducing uncertainties than precise classification of the damage. Further, it is more beneficial to sample the damage extremes (i.e., no damage, DOD1 and DOD8, DOD9, DOD10 for a tornado damage assessment) than to sample equally from all damage measure classifications. That the most significant source of uncertainty is the intensity measure estimates is not surprising. This will likely continue to be the primary source of uncertainty for empirical tornado fragility functions for the foreseeable future because near-surface wind
measurements for tornadoes are still difficult to obtain. The tree-fall method used in this study is promising, but additional research is needed to reduce the spread of possible models for a given event. With regard to post-disaster damage assessments, the prominence of intensity measure uncertainty emphasizes the need to obtain reliable hazard intensity indicators throughout the damage path that can be used to condition the hazard intensity models. For tornadoes and other wind events, structures such as freestanding signs, poles, and engineered structures are best suited for such estimates. Special efforts should be made during post-disaster damage assessments to thoroughly document failures of these more reliable hazard intensity indicators so that the spatial hazard intensity models can be better conditioned.
Conclusions

This study evaluates the uncertainty in empirical fragility functions using data gathered in the 2011 Joplin, Missouri, tornado. Four common sources of uncertainty are identified and quantified: uncertainty in the intensity measure estimates, in damage classification, in finite sampling, and in the fitting of the fragility functions. The total uncertainty is evaluated as the width of the 95% confidence interval in the median wind speed of the tornado fragility functions for each considered damage measure. Total uncertainties vary between 10 and 13 m/s. Uncertainties in the intensity measure contribute the majority of the total uncertainty, and reducing this uncertainty should continue to be a primary focus of future research. The methodology presented here is demonstrated using tornado data but is applicable to any hazard and is flexible enough for a variety of assessment types. It is crucial, however, that the methods presented here, or similar ones, be used when evaluating fragility functions based on empirical data, because the uncertainties can be large. Comparisons between fragility functions for different target structures or events, or between empirical and analytical results, can lead to erroneous conclusions if the uncertainties are not recognized and properly addressed.
Acknowledgments

The first author gratefully acknowledges the support provided by the National Science Foundation Graduate Research Fellowship Program under Grant No. GMO2432. The second author gratefully acknowledges the support provided by the National Science Foundation under research Grant No. 1150975.
References

Agresti, A. (2013). Categorical data analysis, Wiley-Interscience, Hoboken, NJ.
Balderrama, J. A., et al. (2011). "The Florida coastal monitoring program (FCMP): A review." J. Wind Eng. Ind. Aerodyn., 99(9), 979–995.
Blanchard, D. O. (2013). "A comparison of wind speed and forest damage associated with tornadoes in Northern Arizona." Weather Forecasting, 28(2), 408–417.
Blessing, C., and Masters, F. (2005). "Attrition of ground weather observations during hurricane landfall." 10th Americas Conf. on Wind Engineering, American Association for Wind Engineering, Baton Rouge, LA.
Bradley, B. A. (2010). "Epistemic uncertainties in component fragility functions." Earthquake Spectra, 26(1), 41–62.
Ciampoli, M., Petrini, F., and Augusti, G. (2011). "Performance-based wind engineering: Towards a general procedure." Struct. Saf., 33(6), 367–378.
City of Joplin. (2015). "Fact sheet: City of Joplin May 22, 2011 EF5 tornado." 〈http://www.joplinmo.org/DocumentCenter/View/1985〉 (Mar. 15, 2016).
Der Kiureghian, A., and Ditlevsen, O. (2009). "Aleatory or epistemic? Does it matter?" Struct. Saf., 31(2), 105–112.
Efron, B. (1979). "Bootstrap methods: Another look at the jackknife." Ann. Stat., 7(1), 1–26.
Ellingwood, B., Rosowsky, D., Li, Y., and Kim, J. (2004). "Fragility assessment of light-frame wood construction subjected to wind and earthquake hazards." J. Struct. Eng., 10.1061/(ASCE)0733-9445(2004)130:12(1921), 1921–1930.
Frey, H. C., and Rhodes, D. S. (1998). "Characterization and simulation of uncertain frequency distributions: Effects of distribution choice, variability, uncertainty, and parameter dependence." Hum. Ecol. Risk Assess. Int. J., 4(2), 423–468.
Gisbert, F. J. G. (2003). "Weighted samples, kernel density estimators and convergence." Empirical Econ., 28(2), 335–351.
Holland, A. P., Riordan, A. J., and Franklin, E. C. (2006). "A simple model for simulating tornado damage in forests." J. Appl. Meteorol. Climatol., 45(12), 1597–1611.
Holland, G. J. (1980). "An analytic model of the wind and pressure profiles in hurricanes." Mon. Weather Rev., 108(8), 1212–1218.
Ioannou, I., Rossetto, T., and Grant, D. (2012). "Use of regression analysis for the construction of empirical fragility functions." 15th World Conf. on Earthquake Engineering, Lisbon, Portugal.
Kashani, A. G., Crawford, P., Biswas, S., Graettinger, A., and Grau, D. (2014). "Automated tornado damage assessment and wind speed estimation based on terrestrial laser scanning." J. Comput. Civ. Eng., 10.1061/(ASCE)CP.1943-5487.0000389, 04014051.
Kashani, A. G., and Graettinger, A. J. (2015). "Cluster-based roof covering damage detection in ground-based LiDAR data." Autom. Constr., 58, 19–27.
Kennedy, R. P., Cornell, C. A., Campbell, R. D., Kaplan, S., and Perla, H. F. (1980). "Probabilistic seismic safety study of an existing nuclear power plant." Nucl. Eng. Des., 59(2), 315–338.
Kuligowski, E. D., Lombardo, F. T., Phan, L. T., Levitan, M. L., and Jorgensen, D. P. (2014). "Technical investigation of the May 22, 2011, tornado in Joplin, Missouri." National Institute of Standards and Technology, Gaithersburg, MD.
Lallemant, D., Kiremidjian, A., and Burton, H. (2015). "Statistical procedures for developing earthquake damage fragility functions." Earthquake Eng. Struct. Dyn., 44(9), 1373–1389.
Li, Y., and Ellingwood, B. R. (2006). "Hurricane damage to residential construction in the US: Importance of uncertainty modeling in risk assessment." Eng. Struct., 28(7), 1009–1018.
Lombardo, F. T., Roueche, D. B., and Prevatt, D. O. (2015). "Comparison of two methods of near-surface wind speed estimation in the 22 May, 2011 Joplin, Missouri tornado." J. Wind Eng. Ind. Aerodyn., 138, 87–97.
Marshall, T. P. (2002). "Tornado damage survey at Moore, Oklahoma." Weather Forecasting, 17(3), 582–598.
Marshall, T. P., et al. (2008). "Damage survey of the Greensburg, KS tornado." 24th Conf. on Severe Local Storms, American Meteorological Society, Savannah, GA.
McDonald, J. R., Mehta, K. C., and Mani, S. (2006). "A recommendation for an enhanced Fujita scale (EF-scale), revision 2." Wind science and engineering, Texas Tech Univ., Lubbock, TX, 111.
Noh, H. Y., Lallemant, D., and Kiremidjian, A. S. (2015). "Development of empirical and analytical fragility functions using kernel smoothing methods." Earthquake Eng. Struct. Dyn., 44(8), 1163–1180.
Padgett, J., and DesRoches, R. (2007). "Sensitivity of seismic response and fragility to parameter uncertainty." J. Struct. Eng., 10.1061/(ASCE)0733-9445(2007)133:12(1710), 1710–1718.
Porter, K., Kennedy, R., and Bachman, R. (2007). "Creating fragility functions for performance-based earthquake engineering." Earthquake Spectra, 23(2), 471–489.
Prevatt, D. O., et al. (2012a). "Making the case for improved structural design: Tornado outbreaks of 2011." Leadersh. Manage. Eng., 10.1061/(ASCE)LM.1943-5630.0000192, 254–270.
Prevatt, D. O., Coulbourne, W., Graettinger, A. J., Pei, S., Gupta, R., and Grau, D. (2012b). Joplin, Missouri, tornado of May 22, 2011: Structural damage survey and case for tornado-resilient building codes, ASCE, Reston, VA.
Rojahn, C., and Sharpe, R. (1985). "Earthquake damage evaluation data for California." ATC Rep. ATC-13, Redwood City, CA.
Rota, M., Penna, A., and Strobbia, C. L. (2008). "Processing Italian damage data to derive typological fragility functions." Soil Dyn. Earthquake Eng., 28(10–11), 933–947.
Roueche, D. B., Lombardo, F. T., and Prevatt, D. O. (2017). "Empirical approach to evaluating the tornado fragility of residential structures." J. Struct. Eng., 10.1061/(ASCE)ST.1943-541X.0001854, 04017123.
Shinozuka, M., Feng, M., Lee, J., and Naganuma, T. (2000). "Statistical analysis of fragility functions." J. Eng. Mech., 10.1061/(ASCE)0733-9399(2000)126:12(1224), 1224–1231.
Silverman, B. W. (1986). Density estimation for statistics and data analysis, Chapman & Hall/CRC Press, New York.
Stewart, J. P., et al. (2015). "Selection of ground motion prediction equations for the global earthquake model." Earthquake Spectra, 31(1), 19–45.
Twisdale, L. A., and Dunn, W. L. (1983). "Probabilistic analysis of tornado wind risks." J. Struct. Eng., 10.1061/(ASCE)0733-9445(1983)109:2(468), 468–488.
Vickery, P., and Twisdale, L. (1995). "Wind-field and filling models for hurricane wind-speed predictions." J. Struct. Eng., 10.1061/(ASCE)0733-9445(1995)121:11(1700), 1700–1709.
Vickery, P. J., Skerlj, P. F., Lin, J., Twisdale, L. A., Jr., Young, M. A., and Lavelle, F. M. (2006). "HAZUS-MH hurricane model methodology. II: Damage and loss estimation." Nat. Hazards Rev., 10.1061/(ASCE)1527-6988(2006)7:2(94), 94–103.
Yazgan, U. (2015). "Empirical seismic fragility assessment with explicit modeling of spatial ground motion variability." Eng. Struct., 100, 479–489.