KSCE Journal of Civil Engineering (2013) 17(1):210-215 DOI 10.1007/s12205-013-1779-6
A New Adaptive Importance Sampling Monte Carlo Method for Structural Reliability
Ehsan Jahani*, Mohsen A. Shayanfar**, and Mohammad A. Barkhordari***
Received October 28, 2011 / Accepted March 24, 2012
Abstract
Monte Carlo simulation is a useful method for reliability analysis, but a large number of simulations is required to assess small failure probabilities. Many methods, such as importance sampling, have been proposed to reduce the computational time. In this paper, a new importance sampling Monte Carlo method is proposed that reduces the number of evaluations of the limit state function. Moreover, the proposed algorithm does not need knowledge of the position of the design point or of the shape of the limit state function. The key idea of the proposed algorithm is that the mean of the sampling density function is changed throughout the simulation: during the random point generation process, each point with a lower absolute value of the limit state function and a smaller distance from the center of the space becomes the mean of the sampling density function. In this way, the sampling is concentrated on the important region.

Keywords: structural reliability, Monte Carlo simulation, importance sampling, limit state function
1. Introduction

The failure probability of structures is one of the principal topics in structural engineering, and it can be studied using reliability theory. The performance of a structure can be expressed by a function of its basic random variables, called the limit state function; its boundary G(X) = 0 separates the safe state (G(X) > 0) from failure (G(X) ≤ 0).

2. Importance Sampling

As mentioned earlier, the failure probability of a structure is the integral of the joint probability density function f(X) of all the input random variables X over the failure domain, written as Eq. (1). In basic Monte Carlo simulation, the sample points are generated using f(X). Since basic Monte Carlo generates sample points over the whole random variable space without any focus, it requires many sample points. Many methods have been proposed to reduce the number of required sample points; because they reduce the variance of the obtained response, they are called variance reduction techniques. The importance sampling method originally proposed by Harbitz (1986) is one of these reduction methods, and Importance Sampling (IS) is generally recognized as the most efficient reduction technique (Yonezawa et al., 1998, 2009; Frangopol, 1984; Bucher, 1988). In importance sampling, the sampling process is focused on the failure region, which speeds up convergence to the true failure probability. For importance sampling, Eq. (1) can be rewritten as:

Pf = ∫_{g(X)<0} [fx(X)/hx(X)] hx(X) dX    (3)

where hx(X) is the importance sampling density. Failure of a sample point xj is recorded by the indicator function

I(xj) = 0 if G(xj) > 0;  I(xj) = 1 if G(xj) ≤ 0    (5)

A successful choice of hx(x) yields reliable results and significantly reduces the number of simulations, whereas an inappropriate choice produces inaccurate results. The main problem in importance sampling is how to choose an importance sampling density function that reduces the required number of sample points. The key idea of this technique is to obtain a non-negative sampling density located in the neighborhood of the most probable failure point (Papadrakakis and Lagaros, 2002). Many importance sampling methods (Melchers, 1990; Maes et al., 1992; Bucher, 1988) require the design point or features of the limit state function to obtain an adequate sampling density, while in many practical problems the design point or the shape of the limit state function is not known. An alternative strategy is to gather knowledge about the failure domain, and thus the limit state(s), during sampling and to use this knowledge to steer the sampling towards the most important regions. This is called an adaptive method, e.g., (Cao and Wei, 2011; Grooteman, 2008). In this paper, a new adaptive importance sampling method is proposed that does not need the design point.
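To make the estimator concrete, the following is a minimal Python sketch of the sample-average form implied by Eq. (3) together with the indicator of Eq. (5). The function name importance_sampling_pf and the generic pdf_f / pdf_h / sample_h interfaces are illustrative assumptions, not part of the paper.

```python
import numpy as np

def importance_sampling_pf(g, pdf_f, sample_h, pdf_h, n=10000, rng=None):
    """Sample-average form of Eq. (3): Pf ~ (1/n) * sum I(x_j) * f(x_j) / h(x_j).

    g        : limit state function G(x)
    pdf_f    : joint PDF of the basic random variables, f_X
    sample_h : callable drawing one sample point from the importance density h_X
    pdf_h    : joint PDF of the importance density, h_X
    """
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(n):
        x = sample_h(rng)
        if g(x) <= 0.0:                    # indicator I(x_j) of Eq. (5)
            total += pdf_f(x) / pdf_h(x)   # importance sampling weight
    return total / n
```

Any concrete choice of the importance density, for example a normal density centered at an assumed design point, can be supplied through sample_h and pdf_h.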
3. A New Importance Sampling

As shown above, the importance sampling method that uses the design point as the mean of the sampling probability density function is efficient, but it has an essential shortcoming: the design point must be known before the problem is solved. In this paper, a new adaptive sampling method is proposed that removes this shortcoming; in the proposed algorithm, knowledge of the design point and of the shape of the limit state function is not required. Based on the definition of Hasofer and Lind (1974), the reliability index is the shortest distance between the limit state function and the center of the standard normal space. In the proposed algorithm, the mean of the sampling density function approaches the design point during the simulation. The type of the sampling density function is taken equal to the density function of the random variables, but its mean is changed during the sample generation process. First, a point is generated from the probability density function of the random variables. Then the value of the limit state function at this point and the distance between this point and the center of the coordinate system are calculated. In the next step, a new random point is generated from the probability density function of the random variables but with the mean obtained from
the previous step. Then, as in the previous step, the limit state function value and the distance are calculated. If both of these values are lower than the previously obtained values, this new point becomes the mean of the sampling density function; otherwise the previous point is kept as the mean. These steps are repeated as many times as the desired accuracy requires. The standard deviation of the sampling function is another issue that must be considered in this algorithm. It can be the same as the standard deviation of the PDF of the random variables, but in problems with a high reliability index, where the distance between the design point and the coordinate center is large, random points must be generated far from the center, which requires a large standard deviation. Conversely, the smaller this distance, the closer the random points should be to the mean of the sampling density function, which means a smaller standard deviation. Based on this, the proposed algorithm uses a standard deviation that varies during the analysis: a function is adopted that gives a larger value at the beginning of the analysis and gradually reduces the standard deviation. For example, the following function can be considered:
σ = C((n − i)/n) + σinitial    (6)
where n is the total number of samples, i is the index of the sampling point at the time the standard deviation is changed, σinitial is the standard deviation of the random variables, and C is a constant that depends on the value of the reliability index; C is taken larger for higher reliability indices. The standard deviation is updated whenever the mean of the sampling function is changed. The proposed algorithm can be presented as the following steps (a minimal code sketch is given after the list):
1. Form the limit state function and the probability density functions of the random variables.
2. Generate a random point from the random variable probability density type, with the mean of the random variables and the standard deviation obtained from Eq. (6), as the sampling function.
3. Calculate the limit state function value and the distance between the random point and the center of the space, and take them as the "min limit state" and "min distance", respectively.
4. Generate a random point from the random variable density type with the mean equal to the random point obtained in the previous steps and the standard deviation from Eq. (6).
5. Calculate the limit state function value and the distance between the random point and the center of the variable space.
6. If the limit state function is negative, add the value PDFrandom variables / PDFsampling to the failure number (failure number = failure number + PDFrandom variables / PDFsampling).
7. If the absolute value of the limit state function and the distance are lower than the min limit state and the min distance, respectively, set the min limit state and the min distance to the new values, take this point as the new sampling mean, and recalculate the standard deviation from Eq. (6); otherwise keep the previous values.
8. Repeat steps 4 to 7 until an acceptable accuracy is reached.
9. Finally, the failure probability is obtained by dividing the failure number by the total number of samples (Pf = failure number / total number).
Figure 1 shows a flowchart of the proposed algorithm.
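As a reading aid, the following is a minimal Python sketch of steps 1-9 under the assumption of independent normal random variables, with the distance of steps 3 and 5 measured in standardized coordinates (an interpretation suggested by the Hasofer-Lind definition cited above, not stated explicitly in the step list). The names adaptive_is, sampling_std, and normal_pdf are illustrative, not the authors' code.

```python
import numpy as np

def normal_pdf(x, mean, std):
    """Joint PDF of independent normal variables evaluated at point x."""
    z = (np.asarray(x) - mean) / std
    return float(np.prod(np.exp(-0.5 * z**2) / (std * np.sqrt(2.0 * np.pi))))

def adaptive_is(g, mu, sigma, n=100, C=3.0, rng=None):
    """Adaptive importance sampling following steps 1-9 (sketch).

    g     : limit state function, takes a 1-D array of variable values
    mu    : means of the (assumed independent normal) basic variables
    sigma : standard deviations of the basic variables
    n     : total number of sampling points
    C     : constant of Eq. (6); larger for higher reliability indices
    """
    rng = np.random.default_rng() if rng is None else rng
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)

    def sampling_std(i):
        # Eq. (6): starts near C + sigma_initial and decays toward sigma_initial
        return C * (n - i) / n + sigma

    # Steps 2-3: initial point centred on the variable means
    std_s = sampling_std(0)
    x = rng.normal(mu, std_s)
    min_g = abs(g(x))                           # "min limit state"
    min_d = np.linalg.norm((x - mu) / sigma)    # "min distance" (standardized space, assumption)
    mean_s = x                                  # current mean of the sampling density
    failure_number = 0.0

    for i in range(1, n + 1):
        # Step 4: new point from the sampling density
        x = rng.normal(mean_s, std_s)
        # Step 5: limit state value and distance from the centre
        gx = g(x)
        d = np.linalg.norm((x - mu) / sigma)
        # Step 6: accumulate the importance sampling weight on failure
        if gx <= 0.0:
            failure_number += normal_pdf(x, mu, sigma) / normal_pdf(x, mean_s, std_s)
        # Step 7: move the sampling mean only if both criteria improve
        if abs(gx) < min_g and d < min_d:
            min_g, min_d = abs(gx), d
            mean_s = x
            std_s = sampling_std(i)
    # Step 9: estimated failure probability
    return failure_number / n
```

Whether the distance should be measured in the original or the standardized variable space is not fully specified by the step list; the standardized form is used here because of the Hasofer-Lind definition quoted above.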
Fig. 1. Flowchart of the Proposed Algorithm
4. Numerical Examples

In this section, some relevant examples drawn from the literature are used to assess the efficiency of the proposed algorithm.

4.1 Example 1
In the first example, a simple limit state function with two random variables is considered as follows:

G(X1, X2) = X1 − X2    (7)
where X1 and X2 are normal random variables with means of 10 and 4 and standard deviations of 1 and 0.4, respectively. The proposed algorithm with C = 3 and 100 sampling points gives a reliability index and corresponding failure probability of β = 5.0705
and Pf = 1.9836×10⁻⁷, respectively. In this example, with C = 0 some of the runs give Pf = 0, which indicates the important role of the parameter C. To calculate the reliability index for this simple example by basic Monte Carlo simulation with a 95% confidence interval, about 10⁹ samples would be required, whereas the proposed algorithm needs only 100 sampling points, which indicates its efficiency and robustness.
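Assuming the adaptive_is sketch given in Section 3, Example 1 could be run roughly as follows; since this is a sketch and not the authors' code, the returned estimate will not reproduce the reported values exactly.

```python
# Example 1: G(X1, X2) = X1 - X2 with X1 ~ N(10, 1), X2 ~ N(4, 0.4), C = 3, 100 samples
pf = adaptive_is(lambda x: x[0] - x[1], mu=[10.0, 4.0], sigma=[1.0, 0.4], n=100, C=3.0)
print(pf)
```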
4.2 Example 2
In order to study the effect of the reliability index on the new adaptive importance sampling method, the following mathematical function is considered as a limit state function:

G(X) = β√n − Σ_{i=1..n} Xi    (8)

where the Xi are standard normal random variables, n is the number of random variables, and β is the reliability index. First, the effect of the reliability index on C, an important parameter of the proposed algorithm, is studied. The larger the reliability index, the larger C should be: a large reliability index means that the design point lies far from the center of the coordinate system, so reaching the design point quickly requires a larger initial standard deviation, that is, random points generated farther from the center, which means a larger C. Likewise, when fewer samples are used in the analysis, the random variable space must be covered by fewer points, so the samples should be generated farther from the coordinate center; this again means a larger standard deviation of the sampling PDF and therefore a larger C. However, a very large C leads to wasteful sample generation and divergence of the algorithm. In this example, β = 5 and n = 10 are used.

A further example involves two limit state functions:

G1(X1, X2) = 4 − X1 and G2(X1, X2) = 4 − X2    (9)

where X1 and X2 are independent standard normal variables. Obviously there are two design points (cf. Fig. 2), both equally probable. The proposed approach can handle multiple limit state functions: since the sampling mean changes during the sampling process, the important regions of the problem space corresponding to the different limit state functions can all be covered. Table 2 compares the results of the new adaptive sampling with those of importance sampling (with the density shifted to one of the design points) and FORM; it is seen that conventional importance sampling and FORM neglect one of the failure modes.
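For illustration, both of these limit states could be passed to the adaptive_is sketch from Section 3. Treating the two limit states of Eq. (9) as a series system through their minimum is one plausible way of doing so; the sample sizes and C values below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

# Eq. (8): beta = 5 with ten standard normal variables (sqrt(n) scaling as in Eq. (8))
beta, nv = 5.0, 10
g_sum = lambda x: beta * np.sqrt(nv) - np.sum(x)
pf_sum = adaptive_is(g_sum, mu=np.zeros(nv), sigma=np.ones(nv), n=500, C=5.0)

# Eq. (9): series system, failure if either limit state is violated
g_sys = lambda x: min(4.0 - x[0], 4.0 - x[1])
pf_sys = adaptive_is(g_sys, mu=np.zeros(2), sigma=np.ones(2), n=500, C=3.0)
```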