Struct Multidisc Optim (2011) 43:443–458 DOI 10.1007/s00158-011-0620-4
RESEARCH PAPER
Reliability-based design optimization with confidence level under input model uncertainty due to limited test data

Yoojeong Noh · K. K. Choi · Ikjin Lee · David Gorsich · David Lamb
Received: 8 May 2010 / Revised: 18 November 2010 / Accepted: 1 January 2011 / Published online: 27 January 2011
© Springer-Verlag 2011
Abstract  For obtaining a correct reliability-based optimum design, the input statistical model, which includes the marginal and joint distributions of the input random variables, needs to be accurately estimated. However, in most engineering applications, only limited data on the input variables are available due to expensive testing costs. An input statistical model estimated from insufficient data will be inaccurate, which leads to an unreliable optimum design. In this paper, reliability-based design optimization (RBDO) with a confidence level for input normal random variables is proposed to offset the inaccurate estimation of the input statistical model by using an adjusted standard deviation and correlation coefficient that include the effect of the inaccurate estimation of the mean, standard deviation, and correlation coefficient.

Keywords  Reliability-based design optimization · Input model uncertainty · Confidence level · Confidence interval · Limited data · Adjusted parameters
Y. Noh · K. K. Choi (B) · I. Lee
Department of Mechanical & Industrial Engineering, College of Engineering, The University of Iowa, Iowa City, IA 52242, USA
K. K. Choi e-mail: [email protected]
Y. Noh e-mail: [email protected]
I. Lee e-mail: [email protected]

D. Gorsich · D. Lamb
US Army RDECOM/TARDEC, Warren, MI 48397-5000, USA
D. Gorsich e-mail: [email protected]
D. Lamb e-mail: [email protected]
1 Introduction

For the purpose of structural reliability analysis, three types of uncertainty need to be considered: physical uncertainty, statistical uncertainty, and simulation model uncertainty (Toft-Christensen and Murotsu 1986). Physical uncertainty is the inherent randomness in physical observations, which can be described in terms of probability distributions. This physical uncertainty can be quantified only by examining sample data, and in practice very large samples are required to quantify the distribution parameters. However, because sample sizes are limited for practical and economic reasons, some uncertainty remains. This uncertainty is termed statistical uncertainty and arises only due to the lack of statistical information. Simulation model uncertainty occurs as a result of the mathematical modeling of the actual behavior of a system; it is beyond the scope of this paper and thus will not be considered here.

To deal with problems under physical and statistical uncertainties, possibility-based design optimization (PBDO), which uses fuzzy sets to quantify the uncertain quantities, has been proposed (Du and Choi 2008; Du et al. 2006). In PBDO, the uncertainties are modeled using membership functions. However, PBDO does not consider the data size in quantifying the uncertainties and thus could yield an overly conservative design (Lee et al. 2009). Likewise, the interval-based method, which quantifies the uncertainties as intervals, also does not use the data size to quantify the uncertainties, so it could miss information about the input random variables (Rao and Cao 2002; Parkinson 1995; Penmetsa and Grandhi 2002). To solve problems under a mixture of finite samples and PDFs of input random variables, Bayesian reliability-based design optimization methods have been proposed (Gunawan and Papalambros 2006;
Youn and Wang 2008). These methods use a Bayesian approach to calculate the reliability of constraints for achieving confidence in RBDO. However, it is still difficult for the Bayesian approaches to achieve the target confidence level for very limited data, and even when sufficient data are available, they show very slow convergence to the true optimum. Furthermore, the methods assume that the PDFs of the input random variables are known, which is not common in real applications. One of the main problems in existing methods is that it is not clear how many sample data are required to declare whether the uncertainty is aleatory or epistemic. Thus, it is necessary to develop a method that is applicable to problems with both types of uncertainty.

In this paper, an input statistical model with a confidence level on the distribution parameters is proposed for RBDO, instead of using distribution parameters directly estimated from the given sample data. In the proposed method, the input statistical model is estimated using the Bayesian method, which has been shown through statistical studies to correctly identify the distribution types (Noh et al. 2008, 2010). To verify the proposed method, it is important to see how much confidence the RBDO results have. However, it is difficult to quantify the output confidence level directly because it is problem dependent. Thus, the confidence level of the input model is first assessed, and then the output confidence level is assessed through two numerical examples: a two-dimensional mathematical example and a coil spring problem with correlated random variables (Socie 2003; Annis 2004; Nikolaidis et al. 2004; Pham 2006). For the simplicity of the method, only normal random variables are considered in this paper.
2 βt-contour in inverse reliability analysis

The RBDO problem is formulated as

$$\begin{aligned}\min_{\mathbf d}\quad &\mathrm{Cost}(\mathbf d)\\ \text{s.t.}\quad &P\big(G_j(\mathbf X)>0\big)\le P_{F_j}^{\mathrm{Tar}},\quad j=1,\cdots,nc\\ &\mathbf d=\boldsymbol\mu(\mathbf X),\quad \mathbf d^{L}\le\mathbf d\le\mathbf d^{U},\quad \mathbf d\in\mathbb R^{ndv}\ \text{and}\ \mathbf X\in\mathbb R^{n}\end{aligned}\tag{1}$$

where X is the vector of random variables; d is the vector of design variables, which are the mean values of the random variables; G_j(X) represents the jth constraint function; P_{F_j}^{Tar} is the given target probability of failure for the jth constraint; and nc, ndv, and n are the numbers of probabilistic constraints, design variables, and random variables, respectively.
Using the performance measure approach (PMA) (Youn et al. 2005), the jth constraint in (1) can be rewritten as

$$P\big(G_j(\mathbf X)>0\big)-P_{F_j}^{\mathrm{Tar}}\le 0\ \Rightarrow\ G_j(\mathbf x^{*})\le 0\tag{2}$$

where G_j(x*) is evaluated at the most probable point (MPP) x* in X-space, which can be obtained by solving the inverse reliability analysis problem

$$\underset{\mathbf u}{\text{maximize}}\quad g_j(\mathbf u)\qquad\text{subject to}\quad \|\mathbf u\|=\beta_{t_j}\tag{3}$$
where g_j(u) is the jth constraint function in U-space, i.e., g_j(u) ≡ G_j(x(u)) = G_j(x), and β_{t_j} is the target reliability index such that P_{F_j}^{Tar} = Φ(−β_{t_j}). After finding the MPP using (3), the probability of failure can be estimated using the first-order reliability method (FORM) or the second-order reliability method (SORM). The FORM is easy to use, but it is not accurate for highly nonlinear functions, which can occur when the input variables are correlated with non-Gaussian distributions (Noh et al. 2010). The SORM is more accurate than the FORM, but it is expensive to use because it requires the Hessian matrix. Thus, the MPP-based dimension reduction method (DRM), which achieves both the efficiency of the FORM and the accuracy of the SORM, is used in this paper (Lee et al. 2008; Noh et al. 2009).

The hypersphere in (3), ‖u‖ = β_{t_j}, is called the βt-contour after it is transformed to X-space, and the MPP search of the inverse reliability analysis in (3) is carried out on the βt-contour. Since it acts as a safety barrier that keeps the mean-value optimum design point within the feasible region, away from the constraint boundaries, with the target probability of failure in X-space, the larger the βt-contour is, the more reliable the optimum design is.

Fig. 1 βt-contour fully covering true βt-contour. a βt-contours, b Case 1, c Case 2

Consider two βt-contours obtained from an estimated input model and a true input model, respectively. Section 4 explains in detail how to generate the estimated βt-contour. Figure 1a shows an estimated βt-contour (dashed line) that fully covers the true βt-contour (solid line) in X-space. For different locations of the constraint functions, as shown in Fig. 1b and c, the optimum designs using the estimated βt-contour and the true βt-contour are obtained as the triangular mark and the circular dot, respectively, where the gray region indicates failure, i.e., G(x) > 0. In these cases, since the estimated βt-contour is larger than the true βt-contour, the probabilities of failure evaluated at the optimum designs obtained using the estimated βt-contour are smaller than those at the true optimum, which means that the estimated βt-contour yields more reliable optimum designs. However, when the estimated βt-contour does not fully cover the true βt-contour, as shown in Fig. 2a, it may or may
not yield a reliable optimum design. In Fig. 2b, the optimum design for the estimated βt-contour is reliable because it is farther away from the constraint boundary than the true optimum. However, in Fig. 2c, the optimum design for the estimated βt-contour is not reliable because it is closer to the constraint boundary than the true optimum.

Fig. 2 βt-contour partially covering true βt-contour. a βt-contours, b Case 1, c Case 2

Consequently, to ensure that the estimated βt-contour covers the true βt-contour, which is not known in real engineering applications, it is important to know how the input distribution parameters, such as the mean, standard deviation, and correlation coefficient, affect the βt-contour shapes and sizes. The mean determines the location of the βt-contour, and the correlation coefficient determines the width and length of the βt-contour, as illustrated in Fig. 2, so it cannot be predicted whether a larger or smaller mean or correlation coefficient yields a reliable design. On the other hand, a larger standard deviation always yields a larger βt-contour, so it can be used for obtaining a more reliable design. However, the prediction error in the mean and correlation coefficient still exists in RBDO problems when the available data are insufficient. Therefore, instead of using the estimated mean and correlation coefficient, an adjusted standard deviation and an adjusted correlation coefficient based on their confidence intervals are proposed to ensure that the estimated βt-contour with the adjusted parameters covers the true βt-contour, which will lead to the desired confidence level for the input model. Section 4 explains in detail how to obtain the adjusted parameters.
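To make the MPP search of (3) concrete, the following is a minimal sketch for a two-dimensional problem that simply scans the βt-contour and keeps the point maximizing the constraint. It assumes independent normal inputs (so the X-to-U transform is x_i = μ_i + σ_i u_i), which is a simplification; the paper treats correlated inputs via copulas, and the example constraint is borrowed from (34) in Section 5.

```python
import numpy as np

# Inverse reliability analysis (3): maximize g(u) subject to ||u|| = beta_t,
# sketched by scanning the beta_t-contour (a circle in 2-D U-space).

def G(x1, x2):
    # Illustrative constraint from (34); the failure region is G > 0.
    return 1.0 - x1**2 * x2 / 20.0

mu = np.array([5.0, 5.0])      # current design d = mu(X)
sigma = np.array([0.3, 0.3])   # input standard deviations (assumed independent)
beta_t = 2.0                   # target reliability index, Phi(-2) = 2.275%

phi = np.linspace(0.0, 2.0 * np.pi, 3601)
u1, u2 = beta_t * np.cos(phi), beta_t * np.sin(phi)     # points on ||u|| = beta_t
x1, x2 = mu[0] + sigma[0] * u1, mu[1] + sigma[1] * u2   # transform to X-space
g = G(x1, x2)

k = np.argmax(g)               # the MPP maximizes the constraint on the contour
print("MPP (U-space):", (u1[k], u2[k]), " MPP (X-space):", (x1[k], x2[k]))
print("g at MPP:", g[k], "(<= 0 means the PMA constraint (2) is satisfied)")
```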
3 Estimation of input statistical model

Before carrying out RBDO, the marginal and joint CDF types need to be correctly identified and their distribution parameters need to be accurately quantified using sample data. Identification of the joint CDFs and marginal CDFs is presented in Noh et al. (2008, 2010). This section provides a brief review of the identification method used in the literature, along with the quantification of the input distribution parameters used in this paper. To introduce the adjusted distribution parameters for the input model with confidence level, the confidence intervals of the mean, standard deviation, and correlation coefficient are explained.

3.1 Identification of input statistical model

The two most representative methods for determining a marginal and joint distribution type for given sample data are the goodness-of-fit (GOF) test (Cirrone et al. 2004; Genest and Rémillard 2005; Genest and Favre 2007) and the Bayesian method (Noh et al. 2008, 2010; Huard et al. 2006). The GOF test has been widely used; however, since it relies on the parameters estimated from the samples, if the parameters are incorrectly estimated, the wrong marginal and joint distributions might be selected. On the other hand, since the Bayesian method calculates weights to identify the marginal and joint distributions by integrating the likelihood function over the domain of the parameter, it is less dependent on the
choice of the parameter. Thus, the Bayesian method is preferred over the GOF test (Noh et al. 2008, 2010; Huard et al. 2006). A numerical comparison of the GOF test and the Bayesian method is presented by Noh et al. (2008, 2010).

3.1.1 Copula to represent joint CDFs of correlated input variables

Copulas are multivariate distribution functions whose one-dimensional margins are uniform on the interval [0, 1]. If the random variables X = [X_1, ⋯, X_n]^T have marginal distributions F_{X_1}(x_1), ⋯, F_{X_n}(x_n), then according to Sklar's theorem, there exists an n-dimensional copula C such that

$$F_{X_1,\ldots,X_n}(x_1,\ldots,x_n)=C\big(F_{X_1}(x_1),\ldots,F_{X_n}(x_n)\,\big|\,\boldsymbol\theta\big)\tag{4}$$

where F_{X_1,...,X_n}(x_1, ..., x_n) is the joint CDF of X and θ is the matrix of correlation parameters between X_1, ..., X_n. If the marginal distributions are all continuous, then C is unique. Conversely, if C is an n-dimensional copula and the marginal CDFs are given, then the joint distribution is an n-dimensional function of the marginal CDFs (Nelsen 1999). By taking the derivative of (4), the joint probability density function (PDF) is obtained as

$$f(x_1,\cdots,x_n)=c\big(F_{X_1}(x_1),\cdots,F_{X_n}(x_n)\,\big|\,\boldsymbol\theta\big)\prod_{i=1}^{n}f_{X_i}(x_i)\tag{5}$$

where $c(z_1,\cdots,z_n)=\dfrac{\partial^n C(z_1,\cdots,z_n)}{\partial z_1\cdots\partial z_n}$ is the copula density function with z_i = F_{X_i}(x_i), and f_{X_i}(x_i) is the marginal PDF for i = 1, ⋯, n. Since the joint distribution is expressed as a copula function of the marginal CDFs as shown in (4), it is easy to model the joint CDF using marginal CDFs and correlation parameters that can be obtained from the sample data. The dimension of the correlation matrix θ used in different copulas may or may not depend on the number of correlated random variables. For example, Archimedean copulas have only one correlation coefficient even if n variables are correlated. In many applications, it has often been observed that two variables are correlated to each other and there can be more than one correlated pair (Socie 2003; Annis 2004; Nikolaidis et al. 2004; Pham 2006). Thus, bivariate copulas, which are functions of two random variables only, are considered in this paper. Representative bivariate copula functions used in this paper and the domains of their parameters are shown in Table 1. Other copula functions (bivariate or multivariate) are presented by Noh et al. (2008, 2010).
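As an illustration of (4) and (5), the sketch below assembles a bivariate joint PDF from two normal marginals and the Clayton copula density c(u, v | θ) = (1 + θ)(uv)^(−(θ+1))(u^(−θ) + v^(−θ) − 1)^(−(2+1/θ)), which follows from differentiating the Clayton copula in Table 1. The marginal parameters and θ are arbitrary illustrative values, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def clayton_density(u, v, theta):
    # Copula density obtained by differentiating the Clayton copula
    # C(u, v | theta) = (u^-theta + v^-theta - 1)^(-1/theta).
    return ((1.0 + theta) * (u * v) ** (-(theta + 1.0))
            * (u ** -theta + v ** -theta - 1.0) ** (-(2.0 + 1.0 / theta)))

def joint_pdf(x1, x2, mu1, s1, mu2, s2, theta):
    # Sklar's theorem (5): f = c(F1(x1), F2(x2) | theta) * f1(x1) * f2(x2).
    u, v = norm.cdf(x1, mu1, s1), norm.cdf(x2, mu2, s2)
    return clayton_density(u, v, theta) * norm.pdf(x1, mu1, s1) * norm.pdf(x2, mu2, s2)

# Illustrative (hypothetical) values: N(5, 0.3^2) marginals, theta = 2.
print(joint_pdf(5.1, 4.9, 5.0, 0.3, 5.0, 0.3, theta=2.0))
```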
Table 1 Copula functions

Copula      C(u, v | θ)                                                              θ ∈ Ω_θ
Clayton     (u^(−θ) + v^(−θ) − 1)^(−1/θ)                                             (0, ∞)
Frank       −(1/θ) ln[1 + (e^(−θu) − 1)(e^(−θv) − 1)/(e^(−θ) − 1)]                   (−∞, ∞)
Gaussian    ∫_{−∞}^{Φ⁻¹(u)} ∫_{−∞}^{Φ⁻¹(v)} exp[(2θsw − s² − w²)/(2(1 − θ²))]        [−1, 1]
            / (2π√(1 − θ²)) ds dw
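A direct transcription of the Table 1 copula functions might look as follows; the Gaussian copula CDF is evaluated with SciPy's bivariate normal CDF rather than by coding the double integral. This is a sketch for checking values only (no handling of the boundary cases u, v ∈ {0, 1}).

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def clayton_cdf(u, v, theta):      # theta in (0, inf)
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def frank_cdf(u, v, theta):        # theta in (-inf, inf), theta != 0
    num = (np.exp(-theta * u) - 1.0) * (np.exp(-theta * v) - 1.0)
    return -np.log(1.0 + num / (np.exp(-theta) - 1.0)) / theta

def gaussian_cdf(u, v, theta):     # theta in [-1, 1]; equals the double integral
    mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, theta], [theta, 1.0]])
    return mvn.cdf([norm.ppf(u), norm.ppf(v)])

print(clayton_cdf(0.5, 0.5, 2.0), frank_cdf(0.5, 0.5, 5.0), gaussian_cdf(0.5, 0.5, 0.8))
```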
To model the joint CDF using the copula, the matrix of correlation parameters θ needs to be obtained from the sample data. Since various types of copulas have their own correlation parameters, it is desirable to have a common correlation measure from which the correlation parameters can be obtained from the experimental data. There are two commonly used correlation coefficients, Pearson's rho and Kendall's tau. The maximum likelihood estimation, nearest neighbor estimation, and expectation–maximization (E–M) algorithm estimate Pearson's rho to represent the correlation, but Pearson's rho measures only linear correlation between random variables and accordingly cannot be used for joint non-elliptical distributions with nonlinear correlation. On the other hand, Kendall's tau (Kendall 1938; Kruskal 1958), which is used in this paper, is mathematically associated with copula functions and thus can be used for various types of copulas. The population version of Kendall's tau is expressed as

$$\tau=4\int_{I^2}C(u,v\,|\,\theta)\,dC(u,v\,|\,\theta)-1\tag{6}$$

where the unit square I² is the product I × I = [0, 1] × [0, 1] of the domains of the marginal CDFs of X and Y, u = F_X(x), and v = F_Y(y). Equation (6) can be rewritten as (Nelsen 1999)

$$\begin{aligned}\tau&=4\int_{I^2}C(u,v\,|\,\theta)\,dC(u,v\,|\,\theta)-1\\&=2\int_{I^2}P(X_2<x,\ Y_2<y)\,dC(u,v\,|\,\theta)+2\int_{I^2}P(X_2>x,\ Y_2>y)\,dC(u,v\,|\,\theta)-1\\&=2P(X_1>X_2,\ Y_1>Y_2)+2P(X_1<X_2,\ Y_1<Y_2)-1\\&=2P\big((X_1-X_2)(Y_1-Y_2)>0\big)-1\\&=P\big((X_1-X_2)(Y_1-Y_2)>0\big)-\big[1-P\big((X_1-X_2)(Y_1-Y_2)>0\big)\big]\\&=P\big((X_1-X_2)(Y_1-Y_2)>0\big)-P\big((X_1-X_2)(Y_1-Y_2)<0\big)\end{aligned}\tag{7}$$
Thus, the sample version of Kendall's tau can be written as

$$t=\frac{c-d}{c+d}=\frac{2(c-d)}{ns(ns-1)}\tag{8}$$

where c is the number of concordant pairs, d is the number of discordant pairs, and ns is the number of samples. As shown in (7), when a pair of two-variable data sets (X_1, Y_1) and (X_2, Y_2) satisfies (X_1 − X_2)(Y_1 − Y_2) > 0, the pair is called concordant; otherwise, the pair is called discordant. Once Kendall's tau is obtained from the sample data using (8), the correlation parameter θ of an Archimedean copula can be obtained from (6) through the generator function φ_θ, which yields (Nelsen 1999)

$$\tau=1+4\int_{0}^{1}\frac{\varphi_\theta(t)}{\varphi_\theta'(t)}\,dt\tag{9}$$

For example, the generator function for the Clayton copula, which is a family of Archimedean copulas, is

$$\varphi_\theta(t)=\frac{t^{-\theta}-1}{\theta}\tag{10}$$
thus, Kendall’s tau for the Clayton copula is written as 1 θ+1 2 t −t dt = 1 − (11) τ =1+4 θ 2+θ 0 which is identical, as shown in Table 2. If (6) cannot be expressed as explicit functions like (11), the correlation parameter needs to be implicitly calculated using (6). 3.1.2 Identif ication of input model using Bayesian method
where Pr (D |h k , I ) is the likelihood function, Pr (h k |I ) is the prior on the candidate, and Pr (D |I ) is the normalization constant with any relevant additional knowledge I . For the marginal CDF, any distribution parameter such as mean, standard deviation, or shifting parameter can be used to integrate (12). However, based on our test (Noh et al. 2010), there is no significant difference between distribution parameters. Hence, mean is chosen as a parameter γ for integration. Integrating (12) over the mean γ , the weight for the marginal CDF can be obtained as (Noh et al. 2010) ns 1 Wk = f k (xi |γ )dγ (13) λ ( γ ) γk ∩ γ i=1
λ ( γ )
where is the length of the domain γ of the paramγ eter that the user might know and k is the domain of the parameter that a candidate Mk provides. In (13), the multiplication of marginal PDF values at all samples xi for i = 1, · · · , ns is the likelihood function for Mk ; 1/λ ( γ ) γ and k ∩ γ are related to the priors on the candidate in (12) where the prior is defined as the uniform distribution because the prior is usually unknown and the use of the uniform distribution does not affect the identification of distribution type comparing with other particular distribution types. For the bivariate copula of the random variables X and Y , choose Kendall’s tau as the parameter γ . Integrating (12) over the Kendall’s tau γ , the weight for the copula can be obtained as (Noh et al. 2008, 2010; Huard et al. 2006) ns
1
−1 Wk = c , v g u dγ (14) (γ )
k i i k λ ( γ ) γk ∩ γ i=1
Suppose a hypothesis h k that the given data D come from candidates Mk (marginal CDFs or copulas) for k = 1, · · · , q where q is the number of candidates. To identify a marginal CDF or copula that best describes the data among candidates, the Bayesian method calculates the probability of each hypothesis h k given data D as (Noh et al. 2008, 2010; Huard et al. 2006)
Pr D h k , I Pr h k I (12) Pr h k D, I = Pr D I Table 2 Kendall’s tau and its domain Copula
τ = g(θ)
τ ∈ τ
Clayton
1−
2 2+θ t 4 1 θ 1− 1− dt θ θ 0 et − 1
(0, 1]
2 arcsin θ π
[−1, 1]
Frank Gaussian
[−1, 1]\{0}
where the product of the copula density function values at the marginal CDF values, u_i = F_X(x_i) and v_i = F_Y(y_i), for i = 1, ⋯, ns corresponds to the likelihood function for M_k. In (14), g_k^{-1}(γ) can be obtained from the explicit functions in Table 2. After normalizing (13) and (14), the given sample data are identified as the candidate marginal CDF or copula that has the largest normalized weight. If the number of sample data is sufficient, the Bayesian method correctly identifies the distribution types. When there is identification error due to limited data, the adjusted parameters, which will be explained in Section 4, are needed to compensate for the identification error. Similarly, the maximum entropy method, which selects the distribution with the largest entropy among all candidates, can be used to identify the joint distribution; alternatively, the distribution function can be approximated using the Pearson function, saddle point approximation, or extended lambda distribution for the given data instead of identifying the distribution type. Comparison studies between the Bayesian method and these other methods are beyond the scope of this paper and will be carried out in the future.
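As a concrete illustration of (8), (11), and (14), the sketch below estimates Kendall's tau from paired samples, converts it to the Clayton parameter via τ = 1 − 2/(2 + θ), and computes normalized Bayesian weights for two candidate copulas (Clayton and Gaussian) by numerically integrating the likelihood over τ. The empirical-rank pseudo-observations, the uniform integration grid, and the synthetic data are simplifications for illustration; the paper integrates over the domains of Table 2 using the estimated marginal CDFs.

```python
import numpy as np
from scipy.stats import kendalltau, norm

rng = np.random.default_rng(0)

# Hypothetical paired samples standing in for the given test data.
x = rng.normal(5.0, 0.3, 30)
y = 0.8 * x + rng.normal(0.0, 0.2, 30)
ns = len(x)

# Sample Kendall's tau (8) and the Clayton parameter from (11): theta = 2t/(1 - t).
t_hat, _ = kendalltau(x, y)
print("sample tau:", t_hat, " Clayton theta:", 2.0 * t_hat / (1.0 - t_hat))

# Pseudo-observations u_i, v_i from empirical ranks (the paper would use
# the identified marginal CDFs instead).
u = (np.argsort(np.argsort(x)) + 1.0) / (ns + 1.0)
v = (np.argsort(np.argsort(y)) + 1.0) / (ns + 1.0)

def clayton_loglik(th):
    # log of prod_i c_k(u_i, v_i | theta) for the Clayton copula density.
    return np.sum(np.log1p(th) - (th + 1.0) * np.log(u * v)
                  - (2.0 + 1.0 / th) * np.log(u**-th + v**-th - 1.0))

def gauss_loglik(rho):
    # log-likelihood under the Gaussian copula density.
    a, b = norm.ppf(u), norm.ppf(v)
    return np.sum(-0.5 * np.log(1.0 - rho**2)
                  + (2.0 * rho * a * b - rho**2 * (a**2 + b**2)) / (2.0 * (1.0 - rho**2)))

# Weights (14): integrate each candidate's likelihood over tau; the common
# prior factor 1/lambda(Omega_gamma) cancels after normalization.
taus = np.linspace(0.01, 0.99, 99)
W_c = np.trapz([np.exp(clayton_loglik(2.0 * g / (1.0 - g))) for g in taus], taus)
W_g = np.trapz([np.exp(gauss_loglik(np.sin(0.5 * np.pi * g))) for g in taus], taus)
print("normalized weights  Clayton:", W_c / (W_c + W_g), " Gaussian:", W_g / (W_c + W_g))
```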
3.2 Quantification of input model

Once the marginal and joint CDF types are identified using the Bayesian method, it is necessary to evaluate their parameters, such as the mean, standard deviation, and correlation coefficient, based on the given sample data. The mean and variance estimated from the given sample data are called the sample mean μ̃ and sample variance σ̃² and are calculated as

$$\tilde\mu=\frac{1}{ns}\sum_{i=1}^{ns}x_i\tag{15}$$

and

$$\tilde\sigma^2=\frac{1}{ns-1}\sum_{i=1}^{ns}\big(x_i-\tilde\mu\big)^2\tag{16}$$

respectively. The sample correlation parameter θ̃ for the identified copula is obtained using (6) or (9). However, since there is prediction error in the distribution parameters due to limited data, a range of distribution parameters within which the true parameters are located, called a confidence interval, needs to be estimated.

Suppose that X is a Gaussian random variable with population mean μ and standard deviation σ. A random sample of size ns is generated, i.e., x = [x_1, ⋯, x_ns], and the sample mean can be calculated using (15). It is assumed that each sample x_1, x_2, ⋯, x_ns comes from a set of random variables X_1, X_2, ⋯, X_ns, where each X_i follows the same Gaussian distribution as X. Since X_i is a Gaussian random variable, it can be stated that

$$\tilde M=\frac{1}{ns}\sum_{i=1}^{ns}X_i\tag{17}$$

is also a Gaussian random variable, and μ̃ in (15) is the realization of M̃. The expectation of the sample mean is the population mean,

$$E\big(\tilde M\big)=\mu\tag{18}$$

and the variance of the sample mean is calculated as

$$\operatorname{Var}\big(\tilde M\big)=\frac{1}{ns^2}\operatorname{Var}\left(\sum_{i=1}^{ns}X_i\right)=\frac{\sigma^2}{ns}\tag{19}$$

When the population standard deviation is unknown, which is the usual case, $\sqrt{ns}\,\big(\tilde M-\mu\big)/\tilde\Sigma$ follows a Student's t-distribution (Haldar and Mahadevan 2000). Thus, for the sample mean and standard deviation, the confidence interval at the 100 × (1 − α)% level is obtained as

$$\Pr\left(\tilde\mu-t_{\alpha/2,ns-1}\frac{\tilde\sigma}{\sqrt{ns}}\le\mu\le\tilde\mu+t_{\alpha/2,ns-1}\frac{\tilde\sigma}{\sqrt{ns}}\right)=1-\alpha\tag{20}$$

where t_{α/2,ns−1} is the value of a Student's t-distribution with (ns − 1) degrees of freedom evaluated at the probability of α/2, and α indicates the significance level, which is the probability that the true mean is not within the confidence interval in (20). For example, for the 95% confidence level, α = 0.05. Thus, the lower and upper bounds of the confidence interval of the mean, μ̃^L and μ̃^U, are

$$\tilde\mu^{L}=\tilde\mu-t_{0.025,ns-1}\frac{\tilde\sigma}{\sqrt{ns}}\quad\text{and}\quad\tilde\mu^{U}=\tilde\mu+t_{0.025,ns-1}\frac{\tilde\sigma}{\sqrt{ns}}\tag{21}$$

respectively. Note that as the number of samples increases, the lower and upper bounds of the confidence interval of the mean approach the true mean.

Likewise, to estimate the population variance σ², which is an unknown constant, it is assumed that the samples come from ns uncorrelated Gaussian random variables X_1, X_2, ⋯, X_ns. Using (16), the sample variance can be calculated as

$$\tilde\Sigma^2=\frac{1}{ns-1}\sum_{i=1}^{ns}\big(X_i-\tilde M\big)^2\tag{22}$$

Multiplying both sides of (22) by (ns − 1) and dividing by σ², (22) can be rewritten as

$$\frac{(ns-1)\tilde\Sigma^2}{\sigma^2}=\sum_{i=1}^{ns}\left(\frac{X_i-\mu}{\sigma}\right)^2-\left(\frac{\tilde M-\mu}{\sigma/\sqrt{ns}}\right)^2\tag{23}$$

Since μ and σ² are constants, and the first term on the right side of (23) is a sum of squares of ns uncorrelated Gaussian variables, it has a chi-square distribution with ns degrees of freedom (Ang and Tang 1984), denoted as χ²_{ns}. The second term on the right side has only one squared Gaussian variable and thus has a chi-square distribution with one degree of freedom. Since the sum of two chi-square distributions with i and j degrees of freedom is also a chi-square distribution with (i + j) degrees of freedom (Hoel 1962), the left side of (23) has a chi-square distribution with (ns − 1) degrees of freedom, denoted as χ²_{ns−1}. Therefore, the two-sided (1 − α) confidence interval of the population variance σ² is given as (Haldar and Mahadevan 2000)

$$\Pr\left(\frac{(ns-1)\tilde\sigma^2}{c_{1-\alpha/2,ns-1}}\le\sigma^2\le\frac{(ns-1)\tilde\sigma^2}{c_{\alpha/2,ns-1}}\right)=1-\alpha\tag{24}$$

where c_{α/2,ns−1} and c_{1−α/2,ns−1} are the critical values of the chi-square distribution evaluated at the probability levels of α/2 and (1 − α/2) with (ns − 1) degrees of freedom, respectively, and σ̃² is the realization of Σ̃². Equation (24) indicates that the true standard deviation σ is, with 100 × (1 − α)% confidence, within the interval calculated from the given ns
samples for a significance level α. For the two-sided (1 − α) confidence level, the lower and upper bounds of the confidence interval of the standard deviation, σ̃^L and σ̃^U, respectively, are obtained from (24) as

$$\tilde\sigma^{L}=\sqrt{\frac{(ns-1)\tilde\sigma^2}{c_{1-\alpha/2,ns-1}}}\quad\text{and}\quad\tilde\sigma^{U}=\sqrt{\frac{(ns-1)\tilde\sigma^2}{c_{\alpha/2,ns-1}}}\tag{25}$$
As the number of samples increases, the ratios of (ns − 1) to the critical values c_{1−α/2,ns−1} and c_{α/2,ns−1} increase and decrease toward one, respectively, which leads to a small confidence interval. If the number of samples is infinite, the lower and upper bounds of the confidence interval of the standard deviation converge to the true standard deviation.

To calculate the confidence interval of the correlation coefficient, Proposition 3.1 of Genest and Rivest (1993) is used:

$$\frac{\sqrt{ns}\,\big(\tilde\tau-\tau\big)}{4\tilde w}\sim N(0,1)\tag{26}$$

where τ is the population correlation coefficient,

$$\tilde w^2=\frac{1}{ns}\sum_{i=1}^{ns}\big(w_i^2+\bar w^2-2w_i\bar w\big),\qquad w_i=\frac{1}{ns}\sum_{j=1}^{ns}I_{ji},\qquad \bar w=\frac{1}{ns}\sum_{i=1}^{ns}w_i$$

and I_{ji} = 1 if (x_{1j} − x_{1i})(x_{2j} − x_{2i}) > 0, that is, if the two samples are concordant; otherwise, I_{ji} = 0. It is known that as ns approaches infinity, the sample correlation parameter follows a Gaussian distribution (Genest and Favre 2007):

$$\tilde\theta\sim N\!\left(\theta,\ \left(\frac{4\tilde w}{\sqrt{ns}}\,\frac{dg^{-1}(\tilde\tau)}{d\tilde\tau}\right)^{2}\right)\tag{27}$$

Thus, the confidence interval of the correlation parameter for the 100 × (1 − α)% confidence level is obtained as (Genest and Favre 2007)

$$\tilde\theta-z_{\alpha/2}\,\frac{4\tilde w}{\sqrt{ns}}\,\frac{dg^{-1}(\tilde\tau)}{d\tilde\tau}\ \le\ \theta\ \le\ \tilde\theta+z_{\alpha/2}\,\frac{4\tilde w}{\sqrt{ns}}\,\frac{dg^{-1}(\tilde\tau)}{d\tilde\tau}\tag{28}$$

where z_{α/2} is the critical value of the standard Gaussian distribution corresponding to an upper-tail probability of α/2.
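The three confidence intervals of this section map directly onto SciPy: (21) uses the Student's t quantile, (25) the chi-square quantiles, and (28) the Gaussian quantile with the w̃ statistic of (26). The sketch below assumes a Clayton copula for g⁻¹ (θ = 2τ/(1 − τ), so dg⁻¹/dτ = 2/(1 − τ)²); the data are placeholders.

```python
import numpy as np
from scipy.stats import t as student_t, chi2, norm, kendalltau

def mean_std_ci(x, alpha=0.05):
    ns = len(x)
    mu, sig = x.mean(), x.std(ddof=1)
    tq = student_t.ppf(1.0 - alpha / 2.0, ns - 1)
    mu_lo, mu_hi = mu - tq * sig / np.sqrt(ns), mu + tq * sig / np.sqrt(ns)    # (21)
    sig_lo = np.sqrt((ns - 1) * sig**2 / chi2.ppf(1.0 - alpha / 2.0, ns - 1))  # (25)
    sig_hi = np.sqrt((ns - 1) * sig**2 / chi2.ppf(alpha / 2.0, ns - 1))
    return (mu_lo, mu_hi), (sig_lo, sig_hi)

def kendall_ci_clayton(x, y, alpha=0.05):
    ns = len(x)
    tau, _ = kendalltau(x, y)
    # w-tilde of (26): I_ji = 1 for concordant pairs (x_j - x_i)(y_j - y_i) > 0.
    I = ((x[:, None] - x[None, :]) * (y[:, None] - y[None, :]) > 0).astype(float)
    w = I.mean(axis=0)                       # w_i = (1/ns) sum_j I_ji
    w_tilde = np.sqrt(np.mean((w - w.mean()) ** 2))
    theta = 2.0 * tau / (1.0 - tau)          # g^-1 for the Clayton copula
    dg_inv = 2.0 / (1.0 - tau) ** 2          # d g^-1 / d tau
    half = norm.ppf(1.0 - alpha / 2.0) * 4.0 * w_tilde / np.sqrt(ns) * dg_inv  # (28)
    return theta - half, theta + half

rng = np.random.default_rng(1)
x = rng.normal(5.0, 0.3, 30)
y = x + rng.normal(0.0, 0.15, 30)
print(mean_std_ci(x))
print(kendall_ci_clayton(x, y))
```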
4 Assessment of confidence level of input model for RBDO

To have an input model with a target confidence level, confidence intervals of the input parameters that offset the prediction error of the estimated input parameters need
to be considered. In Section 4.1, the adjusted parameters obtained from the confidence intervals of the input parameters are used to obtain a desirable input model that assures the confidence level. Section 4.2 shows numerical test results for the confidence levels of input models obtained using the proposed method.

4.1 Adjusted parameters using confidence intervals of input parameters

As explained in Section 2, the upper bound of the confidence interval of the standard deviation provides a larger βt-contour, which leads to a more reliable design, whereas the upper and lower bounds of the mean and correlation coefficient do not. Thus, unlike the confidence interval of the standard deviation, the confidence intervals of the mean and correlation coefficient cannot be used directly to generate an input model with the confidence level. To resolve this problem, an adjusted standard deviation that integrates the confidence interval of the mean and the upper bound of the confidence interval of the standard deviation is proposed, so that it compensates for the prediction errors of both the mean and the standard deviation in the construction of the βt-contour. Likewise, an adjusted correlation coefficient is calculated using the confidence interval of the correlation coefficient. The adjusted correlation coefficient combined with the adjusted standard deviation yields a βt-contour that assures more reliable RBDO optimum designs.

4.1.1 Adjusted standard deviation using confidence intervals of mean and standard deviation

A small standard deviation indicates that the sample data tend to be close to the mean, while a large standard deviation indicates that the sample data are spread out over a large range of sample values. To observe how the standard deviation affects the estimation of the mean values, samples of three different sample sizes, ns = 30, 100, and 300, are generated from an assumed true input model 1,000 times. Table 3 shows the expected values of the sample mean (μ̃) and of the lower (μ̃^L) and upper (μ̃^U) bounds of the confidence interval of the mean obtained from the 1,000 randomly generated data sets of 30, 100, and 300 samples, where the true distribution is Gaussian with true mean μ = 5.0. To see the effect of the standard deviation on the estimation of the mean, true distributions with a small and a large standard deviation, 0.3 and 2.5, are tested.

Table 3 Mean values of sample mean and its bounds

True mean and std.    ns     E(μ̃^L)    E(μ̃)     E(μ̃^U)
μ = 5, σ = 0.3        30     4.887      4.999     5.112
                      100    4.937      4.996     5.056
                      300    4.965      4.999     5.033
μ = 5, σ = 2.5        30     4.070      5.003     5.935
                      100    4.506      5.002     5.499
                      300    4.714      4.998     5.282

E(·) is the expected value over 1,000 data sets generated from the true input model

As displayed in Table 3, the small standard deviation (σ = 0.3) yields a small confidence interval of the mean because most of the samples center on the true mean. On the other hand, the large standard deviation (σ = 2.5) yields a large confidence interval of the mean because the samples are widely spread over a large domain of the input variable. Thus, for a large standard deviation, it is more challenging to predict the confidence interval in order to achieve a desirable confidence level of the input model.

Let Δσ be the change in the sample standard deviation caused by the change in the sample mean Δμ, where Δμ = μ̃^U − μ̃ = μ̃ − μ̃^L due to the symmetry of normal distributions. If the ratio of σ̃ to μ̃ (the COV) is small, it can be expected that the prediction error of the mean Δμ is also small. On the other hand, if the COV is large, the prediction error of the mean is large, which leads to a large Δσ. From this relationship between the COV, Δμ, and Δσ, it is proposed that the ratio of Δσ to Δμ be approximated by the ratio of the sample standard deviation σ̃ to the sample mean μ̃ as

$$\frac{\Delta\sigma}{\Delta\mu}\approx\frac{\tilde\sigma}{\tilde\mu}\tag{29}$$

Adding the Δσ obtained from (29) to the upper bound of the confidence interval of the standard deviation σ̃^U obtained from (25), the adjusted standard deviation can be heuristically approximated as

$$\tilde\sigma^{A}=\tilde\sigma^{U}+\Delta\sigma\approx\tilde\sigma^{U}+\frac{\tilde\sigma}{\tilde\mu}\times\Delta\mu\tag{30}$$

In (30), the ratio of σ̃ to μ̃ functions as a scale factor such that the effect of the confidence interval of the mean on the adjusted standard deviation is proportional to the COV. However, it should be noted that (30) applies only to random parameters that do not change during the design optimization. For random variables, since their mean values are design variables, they are controllable during the design optimization; thus, Δσ in (30) is removed for random variables.
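A direct transcription of (30) for a random parameter, where the Δσ term is retained, might look as follows; the input sample is a placeholder, and the 95% two-sided level matches the paper's tests.

```python
import numpy as np
from scipy.stats import t as student_t, chi2

def adjusted_std(x, alpha=0.05, random_parameter=True):
    """Adjusted standard deviation (30): sigma_U from (25) plus
    (sigma/mu) * delta_mu from (29); the latter applies to random
    parameters only, not to random variables."""
    ns = len(x)
    mu, sig = x.mean(), x.std(ddof=1)
    sig_u = np.sqrt((ns - 1) * sig**2 / chi2.ppf(alpha / 2.0, ns - 1))      # (25)
    if not random_parameter:
        return sig_u          # for random variables the mean is a design variable
    d_mu = student_t.ppf(1.0 - alpha / 2.0, ns - 1) * sig / np.sqrt(ns)     # (21)
    return sig_u + (sig / mu) * d_mu                                        # (30)

rng = np.random.default_rng(2)
print(adjusted_std(rng.normal(5.0, 0.3, 30)))
```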
4.1.2 Adjusted correlation coefficient using confidence interval of correlation coefficient

As explained in Section 2, the βt-contour with either the lower or the upper bound of the confidence interval of the correlation coefficient does not always yield a reliable design. Figure 3a and b illustrate the effect of an underestimated or overestimated correlation coefficient on the βt-contour using a small (τ = 0.2) and a large (τ = 0.8) correlation coefficient, respectively. The βt-contours using underestimated and overestimated correlation coefficients are indicated by the dotted and dashed contours in the figures, respectively. For τ = 0.2 in Fig. 3a, if the adjusted standard deviation is 0.33, the obtained βt-contour covers the true βt-contour for both the underestimated (τ̃ = 0.1) and the overestimated (τ̃ = 0.3) correlation coefficient, which guarantees a more reliable optimum design. However, for τ = 0.8 in Fig. 3b, even though the same adjusted standard deviation is used, the βt-contour with the underestimated correlation coefficient (τ̃ = 0.7) covers the true βt-contour, whereas the one with the overestimated correlation coefficient (τ̃ = 0.9) does not. Thus, the correlation coefficient needs to be adjusted based on the magnitude of the estimated correlation coefficient such that the obtained βt-contour covers the true βt-contour.

Fig. 3 βt-contour using underestimated and overestimated correlation coefficient. a τ = 0.2, b τ = 0.8
Since an underestimated correlation coefficient yields a reliable design, the lower bound of the confidence interval of the correlation coefficient, τ̃^L, seems to be adequate. However, when the correlation coefficient is small and positive, for example τ = 0.2, its lower bound could become a negative correlation coefficient. In that case, the obtained βt-contour shape will be very different from the true βt-contour, which will lead to an inaccurate optimum design. Therefore, when the estimated correlation coefficient is small, the sample correlation coefficient can be used to obtain the desirable input confidence level, because both the underestimated and the overestimated correlation coefficient combined with the adjusted standard deviation σ̃^A cover the true βt-contour well, as shown in Fig. 3a. However, Fig. 3a also shows that the smaller correlation coefficient covers the true βt-contour better, so the adjusted correlation coefficient τ̃^A needs to be slightly smaller than the sample correlation coefficient, as shown in Fig. 4a. On the other hand, when the estimated correlation coefficient is large, the closer the adjusted correlation coefficient τ̃^A is to the lower bound of the correlation coefficient, the better the estimated βt-contour covers the true one, as shown in Fig. 4b. Consequently, the adjusted correlation coefficient τ̃^A is selected such that the ratio of τ̃ − τ̃^A to max(τ̃^U − τ̃, τ̃ − τ̃^L) equals the estimated correlation coefficient τ̃:

$$\tilde\tau=\frac{\tilde\tau-\tilde\tau^{A}}{\max\big(\tilde\tau^{U}-\tilde\tau,\ \tilde\tau-\tilde\tau^{L}\big)}\tag{31}$$

Thus, using (31), the adjusted correlation coefficient τ̃^A can be obtained as

$$\tilde\tau^{A}=\tilde\tau-\tilde\tau\times\max\big(\tilde\tau^{U}-\tilde\tau,\ \tilde\tau-\tilde\tau^{L}\big)\tag{32}$$

Fig. 4 Adjusted correlation coefficient based on its magnitude. a τ = 0.2, b τ = 0.8

Table 4 shows the mean values of the confidence intervals of the correlation coefficient and of the adjusted correlation coefficient using 1,000 data sets with 30, 100, and 300 samples obtained from the Frank copula.

Table 4 Mean values of confidence intervals of correlation coefficient and adjusted correlation coefficient

True Kendall's tau    ns     E(τ̃^L)    E(τ̃)     E(τ̃^U)    E(τ̃^A)
0.2                   30     0.024      0.137     0.226      0.102
                      100    0.071      0.179     0.275      0.155
                      300    0.127      0.201     0.268      0.186
0.8                   30     0.639      0.800     0.866      0.669
                      100    0.747      0.799     0.833      0.758
                      300    0.775      0.799     0.819      0.780

E(·) is the expected value over 1,000 data sets from the true input model

When the true correlation coefficient is τ = 0.2 and ns = 30, the mean of the sample correlation coefficients is only 0.137, which is smaller than the true value, and thus the adjusted correlation coefficient is close to the sample correlation coefficient. As the number of samples increases, both the mean value of the sample correlation coefficient and the adjusted correlation coefficient approach the true correlation coefficient. When the correlation coefficient is large (τ = 0.8), the adjusted correlation coefficient is close to the lower bound of the confidence interval of the correlation coefficient, as shown in (32) and Fig. 4. As the number of samples increases, the adjusted correlation coefficient approaches the true correlation coefficient, 0.8, since the lower bound of the correlation coefficient approaches the true correlation coefficient. This adjusted correlation coefficient, along with the adjusted standard deviation, is used for the input statistical model with confidence level for RBDO. In the next section, it is tested how the proposed input statistical model with adjusted distribution parameters achieves predefined confidence levels.
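Equation (32) is a one-liner once the sample value and its confidence bounds are available; a sketch using the Table 4 mean bounds for true τ = 0.8 and ns = 30 follows, which reproduces an adjusted value near the reported 0.669.

```python
def adjusted_tau(tau, tau_lo, tau_hi):
    """Adjusted correlation coefficient per (32)."""
    return tau - tau * max(tau_hi - tau, tau - tau_lo)

# Mean sample value and bounds for true tau = 0.8, ns = 30, from Table 4.
print(adjusted_tau(0.800, 0.639, 0.866))   # -> 0.6712, close to E(tau_A) = 0.669
```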
4.2 Assessment of confidence level of input model

Figure 5 shows a flowchart for the assessment of the confidence level of the input model. Data sets of three different sample sizes, ns = 30, 100, and 300, are generated from a true input model (100 data sets for each sample size); the Frank copula with Gaussian marginal CDFs is the true joint CDF in this test. After generating the data sets, the mean and standard deviation for each data set are quantified, and the marginal distribution types of the two input random variables are identified using the Bayesian method explained in Section 3.1.2. Using the estimated marginal distributions, the copula that best describes the given data set is identified, and the correlation coefficient is estimated. Finally, using (30) and (32), the adjusted parameters with the given confidence level are obtained.

Fig. 5 Flowchart of assessment of input confidence level: start with the true input model, X1 and X2 ~ N(5, 0.3²) with the Frank copula (τ = 0.2, 0.5, and 0.8); generate 100 sets each of ns = 30, 100, and 300; identify the marginal CDF types; quantify the mean and standard deviation; identify the copula type; quantify the correlation coefficient; and compare the βt-contours with quantified parameters to the one with the true input model
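The coverage counting of this assessment can be sketched for the simplest case of two independent N(μ, σ²) inputs with a single sample reused for both coordinates, where both the true and the estimated βt-contours are circles in X-space and full coverage reduces to a radius-plus-center-distance check. Correlated inputs, copula identification, and the adjusted correlation coefficient of the actual test are omitted here; this is a simplified illustration only.

```python
import numpy as np
from scipy.stats import t as student_t, chi2

rng = np.random.default_rng(3)
mu_true, sig_true, beta_t, alpha, ns = 5.0, 0.3, 2.0, 0.05, 30

covered, n_sets = 0, 1000
for _ in range(n_sets):
    x = rng.normal(mu_true, sig_true, ns)
    mu, sig = x.mean(), x.std(ddof=1)
    # Adjusted standard deviation (30); Delta-sigma term kept for illustration.
    sig_u = np.sqrt((ns - 1) * sig**2 / chi2.ppf(alpha / 2, ns - 1))
    d_mu = student_t.ppf(1 - alpha / 2, ns - 1) * sig / np.sqrt(ns)
    sig_a = sig_u + (sig / mu) * d_mu
    # With independent, equal-variance inputs both contours are circles:
    # the estimated contour (center (mu, mu), radius beta_t * sig_a) covers
    # the true one (center (5, 5), radius beta_t * sig_true) iff the center
    # distance plus the true radius does not exceed the estimated radius.
    offset = np.sqrt(2.0) * abs(mu - mu_true)
    if offset + beta_t * sig_true <= beta_t * sig_a:
        covered += 1

print("input confidence level: %.1f%%" % (100.0 * covered / n_sets))
```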
with the βt -contour obtained from the true input model. In this study, the confidence level of the input model is assessed in a conservative way by counting numbers out of 100 data sets that the βt -contour obtained from the input model with adjusted parameters fully covers the true βt contour. Since 95% for the two-sided target confidence level corresponds to 97.5% for the upper side of the confidence interval, the corresponding target confidence level is 97.5% in this paper. Figure 6 shows the true βt -contour and βt -contours obtained using the adjusted parameters drawn with a solid
Table 5 Confidence levels using adjusted parameters ns
τ = 0.2
τ = 0.5
τ = 0.8
30
89
90
88
100
92
92
92
300
96
95
95
line and a dotted line, respectively. In Fig. 6, it can be seen that the βt-contours obtained using the adjusted parameters mostly cover the true βt-contour. For example, in Fig. 6a, 89 βt-contours out of 100 fully cover the true βt-contour, yielding an 89% confidence level as shown in Table 5. The obtained confidence level is smaller than the target confidence level because there is identification error for a small number of samples. Since the number of samples is small (ns = 30), the βt-contours obtained using the adjusted parameters are rather larger than the true one, which could yield an overly conservative RBDO optimum design. However, to attain the target confidence level of the input model for a small sample size, we need to accept this conservativeness, which is reduced significantly as the number of samples increases, as shown in Fig. 6. As shown in Table 5, as the number of samples increases, the confidence level of the input model approaches the target confidence level of 97.5%.

Fig. 6 βt-contours using adjusted parameters

If we do not consider the prediction error of the mean and correlation coefficient, that is, if only the upper bound of the standard deviation is used for the input model estimation, the input confidence level is much lower than the target confidence level, as shown in Table 6, especially when the correlation coefficient is large and the number of samples is small. This is because the overestimated correlation coefficients often make the obtained βt-contour narrow, as shown in Fig. 3, so that it does not cover the true βt-contour. Hence, this test verifies why the adjusted parameters in (30) and (32) are required to assure the confidence level.

Table 6 Confidence levels using upper bound of standard deviation only

ns     τ = 0.2    τ = 0.5    τ = 0.8
30     88         77         52
100    92         83         75
300    95         94         93

Even though the input confidence levels are tested only for two Gaussian variables with the Frank copula, the confidence levels obtained from different input models will be similar to the results in Table 5 because the effect of the input parameters on the shape of the βt-contour is the same. It should be noted that even if the confidence level of an input model is less than the target confidence level, the confidence level of RBDO optimum designs meeting the
target reliability should be higher than the input confidence level, because the input confidence level is measured conservatively. However, as stated in Section 1, the output confidence level is problem dependent. Hence, in Section 5, it is shown how the confidence level of RBDO optimum designs meeting the target reliability is obtained through two numerical examples.

5 Numerical examples

In this section, a mathematical example and a coil spring example with correlated input variables are used to show how the input models with and without the confidence level yield RBDO results. For the tests of RBDO with the confidence level, true input distributions are first given to generate 100 sample data sets for each of three sample sizes, ns = 30, 100, and 300; thus, a total of 300 data sets are generated. It should be noted that the true input distributions are used only for sample data generation and are assumed to be unknown. Given the generated sample data, the distribution parameters are estimated and adjusted by (30) and (32). For comparison purposes, one input model with estimated parameters (without considering the confidence level) and another input model with adjusted parameters (with confidence levels for all input random variables) are used as inputs for RBDO. Using the input models with the estimated and adjusted distribution parameters, respectively, optimum designs are obtained through the MPP-based DRM to minimize the reliability analysis error. Output confidence levels are assessed by counting the number of failed events, that is, cases in which the probability of failure at the optimum design estimated by Monte Carlo simulation (MCS) is larger than the target probability of failure, which is 2.275% for both examples.

5.1 Mathematical example

Suppose that the true input random variables have Gaussian distributions, that is, X_1 and X_2 ~ N(d_i, 0.3²) for i = 1, 2, and are correlated with the Frank copula with τ = 0.8. At the initial design, d_1 = 3.0 and d_2 = 3.0. An RBDO formulation for the mathematical example is defined as

$$\begin{aligned}\text{minimize}\quad &\mathrm{cost}(\mathbf d)=-d_1+d_2\\ \text{subject to}\quad &P\big(G_j(\mathbf X)>0\big)\le P_{F_j}^{\mathrm{Tar}}\,(=2.275\%),\quad j=1,2,3\\ &\mathbf d_{\text{initial}}=[5,5]^{T},\quad \mathbf d^{L}=[0,0]^{T}\ \text{and}\ \mathbf d^{U}=[10,10]^{T}\end{aligned}\tag{33}$$

where the three constraints are given as

$$\begin{aligned}G_1(\mathbf X)&=1-\frac{X_1^2X_2}{20}\\ G_2(\mathbf X)&=1-\frac{(X_1+X_2-5)^2}{30}-\frac{(X_1-X_2-12)^2}{120}\\ G_3(\mathbf X)&=1-\frac{80}{X_1^2+8X_2+5}\end{aligned}\tag{34}$$
and are shown in Fig. 7. It is assumed that the input standard deviations and correlation coefficients do not change during the design optimization once they are estimated from the given sample data.

Fig. 7 Constraint shapes of (34)

Table 7 Probabilities of failure and output confidence levels

RBDO with            ns     PF2, %                               PF3, %                               Average cost
                            Min.    Med.    Max.     C.L.ᵃ       Min.    Med.    Max.     C.L.        at optimum
Estimated parameter  30     0.233   2.511   13.388   46          0.039   2.462   14.608   47          −3.010
                     100    0.441   2.349   5.498    54          0.692   2.309   6.345    51          −2.980
                     300    0.699   2.209   3.895    58          0.936   2.152   5.038    57          −2.961
Adjusted parameter   30     0.008   0.465   7.126    94          0.000   0.285   8.493    92          −2.326
                     100    0.119   0.969   3.352    97          0.154   0.851   3.551    98          −2.617
                     300    0.537   1.177   2.687    96          0.473   1.204   3.481    98          −2.744

ᵃ The confidence level
Fig. 8 Box plot for statistical test results using estimated parameters. a For G2(X), b For G3(X)
Table 7 shows the minimum, median, and maximum values of the probabilities of failure for two active constraints, PF2 and PF3 , and the output confidence levels of optimum designs, which are obtained from 100 data sets, and Figs. 8 and 9 show box plots of the statistical test results in Table 7. As shown in Table 7, when the input model with the estimated parameters is used for ns = 30, the median values of PF2 and PF3 (2.511% and 2.462%, respectively) are slightly larger than the target probability of failure of 2.275%. Furthermore, 54 and 53 out of 100 data sets for G2 and G3 , respectively, failed to achieve the target probability of failure, yielding 46% and 47% output confidence levels for each constraint. Thus, the confidence levels of the
output performance are significantly lower than the target confidence level 97.5% as shown in Table 7, which means that the input model with the estimated parameters yields unreliable designs in more than half the cases. This phenomenon does not change even when the number of samples in each data set increases. As shown in Fig. 8, as the number of sample data increases, boxes whose edges indicate the 25th and 75th percentiles become smaller, which means that the input estimation becomes accurate. However, even with 300 samples, the output confidence levels for G2 and G3 are still 58% and 57%, respectively, as shown in Table 7, which is still far less than the target confidence level (97.5%).
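The probabilities of failure in Table 7 come from MCS at each obtained optimum; the sketch below estimates P(G2 > 0) of (34) at a hypothetical design d = [4.6, 1.6], with a Gaussian copula standing in for the paper's Frank copula (its parameter ρ = sin(πτ/2) matches Kendall's tau τ = 0.8). Both the design point and the copula swap are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def G2(x1, x2):
    # Second constraint of (34); failure when G2 > 0.
    return 1.0 - (x1 + x2 - 5.0) ** 2 / 30.0 - (x1 - x2 - 12.0) ** 2 / 120.0

d = np.array([4.6, 1.6])          # hypothetical design, not from the paper
tau = 0.8
rho = np.sin(np.pi * tau / 2.0)   # Gaussian-copula parameter matching Kendall's tau

# Correlated standard normals -> correlated inputs X_i ~ N(d_i, 0.3^2).
n = 1_000_000
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
x = d + 0.3 * z

pf = np.mean(G2(x[:, 0], x[:, 1]) > 0.0)
print("P(G2 > 0) = %.4f  (target 0.02275)" % pf)
```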
Fig. 9 Box plot for statistical test results using adjusted parameters. a For G2(X), b For G3(X)

On the other hand, when the input model with the adjusted parameters is used, the obtained output confidence levels become much closer to the target confidence level. It should be noted that Δσ is not used in (30) to obtain the adjusted standard deviation, since both random variables are design variables in this example. When ns = 30, even though we consider the adjusted distribution parameters, some error still exists in the estimated output confidence levels (94% and 92% for G2 and G3, respectively), as shown in Table 7, which are slightly smaller than the target confidence level (97.5%). This is because of the incorrect identification of the marginal distributions and copula caused by the small number of samples, ns = 30. In addition, the reliability analysis error still exists even if the MPP-based DRM is used. However, as the number of samples increases, the minimum, median, and maximum values of PF2 and PF3 approach the target probability of failure from the reliable side, as shown in Fig. 9. Moreover, as the number of samples increases, the average cost value at the optimum design obtained from 100 data sets decreases, which means that we can obtain a better optimum design if we can spend more on accurate input estimation.

5.2 Coil spring example

The design objective of the coil spring is to minimize the volume of the spring to carry a given axial load such that the design satisfies the minimum deflection and allowable shear stress requirements and the surge wave frequency is above the lower limit (Arora 2004). Thus, the RBDO formulation is defined as

$$\begin{aligned}\text{minimize}\quad &\mathrm{cost}(\mathbf d)=25{,}000\,(d_1+Q)\,\pi^{2}(d_2+d_3)\,d_3^{2}\\ \text{subject to}\quad &P\big(G_j(\mathbf X)>0\big)\le P_{F_j}^{\mathrm{Tar}}\,(=2.275\%),\quad j=1,2,3\end{aligned}\tag{35}$$

where the three constraints are given as

$$\begin{aligned}G_1(\mathbf X)&=1.0-\frac{8P(X_2+X_3)^{3}X_1}{\Delta\,X_3^{4}X_5}\\ G_2(\mathbf X)&=-1.0+\frac{8P(X_2+X_3)}{\pi X_3^{3}\tau_a}\times\left(\frac{4X_2+3X_3}{4X_2}+\frac{0.615X_3}{X_2+X_3}\right)\\ G_3(\mathbf X)&=1.0-\frac{X_3}{2\pi X_1(X_2+X_3)^{2}\,w_0}\sqrt{\frac{X_5}{2X_4}}\end{aligned}\tag{36}$$

where Δ is the minimum spring deflection, 0.5 in.; τa is the allowable shear stress, 80,000 lb/in.²; w0 is the lower limit of the surge wave frequency, 100 Hz; Q is the number of inactive coils, 2; and P is the applied load, 10 lb. In this example, the number of active coils (X_1), the mean inner diameter (X_2), and the wire diameter (X_3) are selected as random variables, and the mass density of the material (X_4) and the shear modulus (X_5) are selected as random parameters. Since X_1, X_2, and X_3 are random variables, Δσ in (30) is not used to obtain their adjusted standard deviations; instead, Δσ is considered in obtaining the adjusted standard deviations of the random parameters X_4 and X_5. Table 8 shows the statistical information of the random variables and parameters and their design bounds. It is assumed that the three design variables are Gaussian, X_i ~ N(d_i, σ_i²) for i = 1, 2, 3, and that the two random parameters are also Gaussian, X_j ~ N(μ_j, σ_j²) for j = 4, 5. The COVs for the two material properties are taken from Schuëller (2007). In the manufacturing process, the coil inner diameter (X_2) and the wire diameter (X_3) are correlated, and thus the correlation coefficient between these two variables is assumed to be τ = 0.7, where the Clayton copula is used to model the joint distribution of X_2 and X_3.

Table 8 Properties of random variables and parameters

                       d^L      d          d^U      Std.       COV (%)
Random variables  X1   1.0      10.0       100.0    0.5        5
                  X2   0.1      1.0        5.0      0.05       5
                  X3   0.01     0.1        0.5      0.005      5
Parameters        X4   –        7.38E−4    –        2.95E−5    4
                  X5   –        1.15E+7    –        1.38E+6    12
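For reference, the constraints (36) can be evaluated directly; at the mean initial design of Table 8 all three come out negative (feasible). The constant values follow the text below (36), and treating the random parameters at their means is just for this deterministic check.

```python
import numpy as np

# Constants from the text below (36).
DELTA, TAU_A, W0, Q, P = 0.5, 80_000.0, 100.0, 2.0, 10.0

def spring_constraints(x1, x2, x3, x4, x5):
    # (36): deflection, shear stress (with Wahl-type factor), and surge frequency.
    g1 = 1.0 - 8.0 * P * (x2 + x3) ** 3 * x1 / (DELTA * x3 ** 4 * x5)
    g2 = (-1.0 + 8.0 * P * (x2 + x3) / (np.pi * x3 ** 3 * TAU_A)
          * ((4.0 * x2 + 3.0 * x3) / (4.0 * x2) + 0.615 * x3 / (x2 + x3)))
    g3 = (1.0 - x3 / (2.0 * np.pi * x1 * (x2 + x3) ** 2 * W0)
          * np.sqrt(x5 / (2.0 * x4)))
    return g1, g2, g3

# Mean values of the design variables and random parameters from Table 8.
print(spring_constraints(10.0, 1.0, 0.1, 7.38e-4, 1.15e7))
```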
Table 9 Probabilities of failure and output confidence levels for coil spring problem

RBDO with            ns     PF2, %                               PF3, %                               Average cost
                            Min.    Med.    Max.     C.L.        Min.    Med.    Max.     C.L.        at optimum
Estimated parameter  30     0.553   2.433   15.339   46          0.073   3.045   14.080   39          5.619
                     100    1.177   2.047   5.367    61          0.122   2.417   5.614    45          5.725
                     300    1.515   2.059   3.095    72          1.077   2.276   4.674    50          5.716
Adjusted parameter   30     0.024   0.207   1.408    100         0.000   0.435   4.225    91          8.531
                     100    0.388   0.687   1.678    100         0.023   0.998   3.003    94          6.899
                     300    0.817   1.174   2.119    100         0.562   1.476   3.237    96          6.297
Fig. 10 Box plot for statistical test results using estimated parameters. a For G2(X), b For G3(X)
Table 9 shows the minimum, median, and maximum values of the probabilities of failure estimated at the optimum designs using MCS, and the statistical results in Table 9 are plotted in Figs. 10 and 11. When the estimated parameters are used, the obtained output confidence levels are much lower than the target confidence level of 97.5%. Even if the number of samples increases, the output confidence levels do not satisfy the target. On the other hand, when the input model with adjusted parameters is used, the estimated probabilities of failure approach the target probability of failure as the number of samples increases, thus almost satisfying the target confidence level. Again, the error in the output confidence level is due to the reliability analysis error.

Fig. 11 Box plot for statistical test results using adjusted parameters. a For G2(X), b For G3(X)

In summary, from the two numerical examples, we can see that the input model with the adjusted parameters shows desirable confidence levels of the output performances compared with the input model with the estimated parameters. When the input model with the estimated parameters is used, the median of the estimated probability of failure seems to be very close to the target. However, it could still be dangerous to use the estimated parameters, since the
output confidence level is much less than the target. Hence, to offset the inaccurate estimation of input models due to the lack of statistical information, it is indeed necessary to use the adjusted parameters to obtain desirable output confidence levels. When the number of sample data is small, the optimum designs using the input model with the adjusted parameters may look conservative. However, even in those cases, the output confidence level does satisfy the target confidence level, which means that the optimum designs are not conservative at all in terms of the confidence level. Furthermore, by increasing the number of sample data, we can achieve better optimum designs with smaller cost values while satisfying the target confidence level.
6 Conclusion

It is important to obtain an accurate input model from given sample data in order to obtain reliable RBDO results. The input model can be generated by identifying the marginal CDFs and copula using the Bayesian method and by quantifying the marginal and correlation parameters based on the given sample data. However, in practical applications, sample data are often insufficient, and thus it is difficult to obtain an accurate input model. To offset the inaccurate estimation of the input model, an adjusted standard deviation and an adjusted correlation coefficient involving confidence intervals for all input parameters (mean, standard deviation, and correlation coefficient) are proposed such that they compensate for the inaccurate estimation of the input parameters. To check whether use of the adjusted parameters provides the target confidence level of the output performance, the βt-contour is used to measure the confidence level of the input model. Numerical results show that the input model without the confidence level does not yield desirable confidence levels for either the input model or the output performances of the RBDO results. On the other hand, the input model with the adjusted parameters yields desirable input confidence levels, and the obtained RBDO results are considerably more reliable, which leads to desirable confidence levels of the output performances.

For future research, the input confidence level for non-Gaussian distributions will be tested using the bootstrap method, and a real engineering problem with non-Gaussian distributions will be tested to observe how the optimum changes when the confidence level is implemented in RBDO.

Acknowledgments  Research is jointly supported by the Automotive Research Center, which is sponsored by the U.S. Army TARDEC, and by ARO Project W911NF-09-1-0250. This support is greatly appreciated.
References

Ang AH-S, Tang WH (1984) Probability concepts in engineering design, vol I: decision, risk and reliability. Wiley, New York
Annis C (2004) Probabilistic life prediction isn't as easy as it looks. JAI 1(2):3–14
Arora JS (2004) Introduction to optimum design. Elsevier, Amsterdam
Cirrone GAP, Donadio S, Guatelli S et al (2004) A goodness-of-fit statistical toolkit. IEEE Trans Nucl Sci 51(5):2056–2063
Du L, Choi KK (2008) An inverse analysis method for design optimization with both statistical and fuzzy uncertainties. Struct Multidisc Optim 37(2):107–119
Du L, Choi KK, Young BD (2006) An inverse possibility analysis method for possibility-based design optimization. AIAA J 44(11):2682–2690
Genest C, Favre A-C (2007) Everything you always wanted to know about copula modeling but were afraid to ask. J Hydrol Eng 12(4):347–368
Genest C, Rémillard B (2005) Validity of the parametric bootstrap for goodness-of-fit testing in semiparametric models. Technical Rep G-2005-51, Group d'Études et de Recherche en Analyse des Décisions
Genest C, Rivest L-P (1993) Statistical inference procedures for bivariate Archimedean copulas. J Am Stat Assoc 88(3):1034–1043
Gunawan S, Papalambros PY (2006) A Bayesian approach to reliability based optimization with incomplete information. J Mech Des 128:909–918
Haldar A, Mahadevan S (2000) Probability, reliability, and statistical methods in engineering design. Wiley, New York
Hoel PG (1962) Introduction to mathematical statistics, 3rd edn. Wiley, New York
Huard D, Évin G, Favre A-C (2006) Bayesian copula selection. Comput Stat Data Anal 51:809–822
Kendall M (1938) A new measure of rank correlation. Biometrika 30:81–89
Kruskal WH (1958) Ordinal measures of association. J Am Stat Assoc 53(284):814–861
Lee I, Choi KK, Du L, Gorsich D (2008) Inverse analysis method using MPP-based dimension reduction for reliability-based design optimization of nonlinear and multi-dimensional systems. Comput Methods Appl Mech Eng 198(1):14–27
Lee I, Choi KK, Noh Y (2009) Comparison study between probabilistic and possibilistic approach for problems with correlated input and lack of input statistical information. In: 35th ASME design automation conference, San Diego, CA, August 30–September 2, 2009
Nelsen RB (1999) An introduction to copulas. Springer, New York
Nikolaidis E, Ghiocel D, Singhal S (2004) Engineering design reliability handbook. CRC, New York
Noh Y, Choi KK, Du L (2008) Selection of copulas to generate joint distributions of correlated input variables for RBDO. In: ASME 34th design automation conference, New York, NY, August 3–6
Noh Y, Choi KK, Lee I (2009) Reduction of ordering effect in reliability-based design optimization using dimension reduction method. AIAA J 47(4):994–1004
Noh Y, Choi KK, Lee I (2010) Identification of marginal and joint CDFs using the Bayesian method for RBDO. Struct Multidisc Optim 40(1):35–51
Parkinson A (1995) Robust mechanical design using engineering models. J Vib Acoust 117(1):48–54
Penmetsa RC, Grandhi RV (2002) Efficient estimation of structural reliability for problems with uncertain intervals. Comput Struct 80(12):1103–1112
Pham H (2006) Springer handbook of engineering statistics. Springer, London
Rao SS, Cao L (2002) Optimum design of mechanical systems involving interval parameters. J Mech Des 124(3):465–472
Schuëller GI (2007) On the treatment of uncertainties in structural mechanics and analysis. Comput Struct 85:235–243
Socie DF (2003) Seminar notes: probabilistic aspects of fatigue. http://www.fatiguecaculator.com. Cited 8 May 2008
Toft-Christensen P, Murotsu Y (1986) Application of structural systems reliability theory. Springer, Berlin
Youn BD, Wang P (2008) Bayesian reliability-based design optimization using eigenvector dimension reduction (EDR) method. Struct Multidisc Optim 36(2):107–123
Youn BD, Choi KK, Du L (2005) Enriched performance measure approach for reliability-based design optimization. AIAA J 43(4):874–884