Author's personal copy
Computers & Industrial Engineering 66 (2013) 301–310
Optimizing reliability-based robust design model using multi-objective genetic algorithm

Vijay Rathod a, Om Prakash Yadav b,⇑, Ajay Rathore a, Rakesh Jain a

a Department of Mechanical Engineering, Malaviya National Institute of Technology, Jaipur 302 017, India
b Department of Industrial and Manufacturing Engineering, North Dakota State University, Fargo, ND 58105, USA
Article info

Article history: Received 31 August 2012; Received in revised form 17 June 2013; Accepted 21 June 2013; Available online 12 July 2013

Keywords: Reliability-based design optimization; Robust design optimization; Multi-objective genetic algorithms
Abstract

Reliability-based robust design optimization (RBRDO) is one of the most important tools developed in recent years to improve both the quality and reliability of products at an early design stage. This paper presents a comparative study of different formulation approaches for RBRDO models and their performance. The paper also applies an evolutionary multi-objective genetic algorithm (MOGA) to one of the promising hybrid quality loss functions (HQLF)-based RBRDO models. The enhanced effectiveness of the HQLF-based RBRDO model is demonstrated by optimizing suitable examples.

Published by Elsevier Ltd.
1. Introduction

The globally competitive market has forced manufacturers to design and manufacture highly reliable products at competitive prices to fulfill customers’ expectations. Design engineers are constantly making strenuous efforts to establish new and effective tools and techniques to design reliable and durable products. Reliability-based robust design optimization (RBRDO) is one of the most useful tools developed in recent years to improve both the quality and reliability of products simultaneously. It optimizes the design for reliability and robustness in the presence of variability and uncertainty. Essentially, RBRDO is an integration of reliability-based design optimization (RBDO) (Kapur & Lamberson, 1977; Melchers, 1999; Rao, 1992) and robust design (RD) (Phadke, 1989; Taguchi, 1987) in the multi-objective optimization domain (Lee, Choi, Du, & Gorsich, 2008; Mourelatos & Liang, 2006; Yadav, Bhamare, & Rathore, 2010; Youn, Choi, & Yi, 2005; Zhuang & Pan, 2010). The RBDO approach optimizes the design for higher reliability by characterizing uncertainty in all the design variables and failure modes. On the other hand, RD improves product quality and robustness by minimizing the variation in product performance. To achieve the desired robustness, the RD approach exploits the nonlinearity in the performance functions and finds a blend of the design variables (parameters) that gives the smallest performance variations (Phadke, 1989). However, a separate implementation of RBDO can only provide a reliable design, but it rarely accounts for performance variation. Similarly,

⇑ Corresponding author. Tel.: +1 701 231 7285; fax: +1 701 231 7286. E-mail address: [email protected] (O.P. Yadav).
0360-8352/$ - see front matter. Published by Elsevier Ltd. http://dx.doi.org/10.1016/j.cie.2013.06.018
the RD optimization alone can minimize the performance variation but may not guarantee the desired reliability (Yadav et al., 2010; Yang, 2007). Therefore, it is quite natural that the integration of these two techniques can complement each other to provide an optimal design with high quality and reliability. This has inspired researchers to formulate an integrated RBRDO model and capture the merits of both the RBDO and RD methods to ensure the quality and reliability of products (Du, Sudjianto, & Chen, 2004; Koch, 2002; Lee et al., 2008; Mourelatos & Liang, 2006; Ueno, 1997; Wang & Wu, 1994; Yadav et al., 2010; Youn et al., 2005; Zhuang & Pan, 2010). In order to achieve the desired robustness in the product design, objective functions based on Taguchi’s quality loss concept are widely used, treating performance characteristics as smaller-the-better type (S-Type), larger-the-better type (L-Type), and nominal-the-best type (N-Type) (Chandra, 2001; Yadav et al., 2010; Youn et al., 2005). The required reliability targets are enforced by modeling them as probabilistic constraints in the optimization model. These probabilistic constraints are characterized using the first and second statistical moments of a performance function: the mean and variance (Chandra, 2001). However, these statistical moments are analytically expressed using multi-dimensional integrals and are therefore practically impossible to calculate exactly. Consequently, various approximation methods have been proposed in the literature to estimate these moments precisely and efficiently. Experimental design (Taguchi, 1987; Cox & Reid, 2000), response surface methodology (Montgomery, 2009), first-order Taylor series expansion (Buranathiti, Cao, & Chen, 2004; Kalsi, Hacker, & Lewis, 2001; Su & Renaud, 1997), Monte Carlo simulation, and, very recently, a univariate dimension reduction method (Lee et al., 2008; Xu &
Rahman, 2004) and a performance moment integration method (Youn et al., 2005) are reported in the literature. Experimental design and response surface methodologies are used chiefly when mathematical models are not readily available for the given quality or performance functions. These methods require a large amount of computation when a large number of design variables is involved (Lee et al., 2008). The first-order Taylor series expansion method is widely used to estimate the moments due to its simplicity, despite its well-known drawbacks (especially when the random variables have large variations). Monte Carlo simulation may provide precise estimates but can be very costly from a computational perspective. However, the recently developed univariate dimension reduction and performance moment integration methods have shown potential to estimate the statistical moments precisely and efficiently. A comparative study of these recently developed methods can be found in Lee et al. (2008). Usually, in RBRDO approaches the performance measures (i.e. the expected performance value and the performance variation) are traded off while maintaining the feasibility imposed by the probabilistic constraints. The literature shows that the weighted sum (WS) method is widely used to form a composite objective function for achieving the trade-off among the performance measures. The simplicity of the WS method makes it a favorite, in spite of its inability to generate points in a non-convex trade-off region (Das & Dennis, 1997; Messac, 1996). To overcome the limitations of the WS method, Mourelatos and Liang (2006) have advocated a preference aggregation method to choose the best solution of a multi-objective optimization problem considering the designer’s preferences. However, both methods depend on the user’s preferences and hence are subjective in nature.
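The WS limitation noted above can be made concrete with a small numerical sketch (an illustrative bi-objective problem constructed for this purpose, not an example from the cited works). On the concave trade-off front f2 = sqrt(1 − f1²), every weight pair drives the weighted-sum minimum to one of the two endpoints, so the interior Pareto points can never be generated:

```python
import math

def ws_minimizer(w1, w2, steps=10_000):
    """Minimize w1*f1 + w2*f2 over the concave front
    f1 = t, f2 = sqrt(1 - t^2), t in [0, 1], by dense enumeration."""
    best_t, best_z = 0.0, float("inf")
    for i in range(steps + 1):
        t = i / steps
        z = w1 * t + w2 * math.sqrt(max(0.0, 1.0 - t * t))
        if z < best_z:
            best_t, best_z = t, z
    return best_t

# Sweeping the weights never yields an interior Pareto point:
for w1 in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(w1, ws_minimizer(w1, 1.0 - w1))  # always 0.0 or 1.0
```

The stationary point of the weighted sum on this front is a maximum, not a minimum, which is exactly the Das and Dennis (1997) argument: the WS method can only reach points where the front is supported by a hyperplane.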
Further, the probabilistic constraints involved in RBRDO models require a double-loop optimization approach, wherein the outer design optimization loop repeatedly calls inner reliability assessment loops. The outer loop optimizes the design like any normal optimization approach, and the inner reliability analysis loop uses the performance measure or reliability index approach to locate the most probable point for achieving the desired reliability level (Du & Chen, 2002; Liang, Mourelatos, & Tu, 2008). Double-loop optimization methods have proven to be computationally expensive. To minimize the computational burden, researchers have proposed single-loop and decoupled-loop methods. The basic idea behind the single-loop approach is to convert probabilistic constraints into deterministic constraints or to combine the design optimization and reliability analysis tasks (Liang et al., 2008; Shan & Wang, 2008). Recently, a computationally efficient decoupled method called sequential optimization and reliability assessment (SORA) was proposed by Du and Chen (2002). SORA uses a conventional gradient-based optimization method and the performance measure approach to determine the most probable point. Additionally, Youn et al. (2005) have suggested physical programming for optimization of the RBRDO model, which was intended to obtain a Pareto front. Yadav et al. (2010) have used sequential quadratic programming, and Zhuang and Pan (2010) have recommended a memetic algorithm for optimization of RBRDO models. The memetic algorithm is a hybrid search method that combines a global optimization technique such as genetic algorithms (GA) for global search with a gradient-based technique for local search. These methods use gradient-based techniques in one form or another; such techniques are local in scope and highly dependent on the initial conditions and the user’s preferences.
In addition, they also demand continuity in the objective functions and design space, and therefore face difficulties in exploring the entire Pareto region. The objective of this paper is to perform a comparative study of different formulations of RBRDO models in order to identify promising RBRDO model(s). The paper further explores a solution approach that can give multiple solutions covering the entire Pareto region without user intervention, which has been the prime motivation behind this study. Finally, an evolutionary multi-objective genetic algorithm (MOGA)-based solution approach is proposed to enhance the effectiveness of the HQLF-based RBRDO model. The suitability of the proposed solution approach is demonstrated by considering two different examples. The rest of the paper is organized as follows: Section 2 discusses different RBRDO model formulation approaches and performs a comparative study. The MOGA-based solution approach is presented in Section 3, and Section 4 concludes the paper with future research directions.

2. RBRDO models formulation approaches

Engineering product designs are plagued with uncertainties such as variability in manufacturing processes, material properties, and uncertainties in users’ behavior and operating conditions (Du & Chen, 2000). Therefore, while designing the product for the desired quality and reliability, it is important to combine robustness and reliability considerations. This requires that the product’s expected functional performance and performance variability be simultaneously optimized to deal with these uncertainties. This requirement has led to the formulation of several RBRDO models using different concepts and approaches. The existing work on RBRDO can be grouped under three different formulation approaches, as given below:

i. the moment-based RBRDO model reported in Youn et al. (2005),
ii. the percentile difference-based RBRDO model reported in Du et al. (2004), Mourelatos and Liang (2006), Lee et al. (2008), and Zhuang and Pan (2010), and
iii. the hybrid quality loss functions (HQLF)-based RBRDO model reported in Yadav et al. (2010).

2.1. Moment-based RBRDO model

The moment-based RBRDO model (Youn et al., 2005) uses three different types of robustness objectives, treating performance characteristics as N-Type, S-Type, and L-Type.
The formulations of the robustness objectives are derived using Taguchi’s quality loss function to capture the robustness requirements. Taguchi, Elsayed, and Hsiang (1989) have defined a quality loss in terms of deviation from the target value that can be approximated as:

$$C_{ql}(H) = k'(H - \theta_t)^2 \qquad (1)$$

where $k'$ is a constant called the quality loss coefficient and $\theta_t$ is the target value of the response or performance characteristic $H$. Using Taguchi’s quality loss concept, Youn et al. (2005) have derived three different robustness objectives. For the N-Type performance characteristic $H$ with target value $\theta_t$, the expected quality loss function can be expressed as:

$$C_{ql}(H) = k'\left[(\mu_H - \theta_t)^2 + \sigma_H^2\right] \qquad (2)$$

where $\mu_H$ and $\sigma_H$ are the mean and the standard deviation of the performance characteristic $H$. The quality loss function for the S-Type performance characteristic can be defined as:

$$C_{ql}(H) = k'\left[\mu_H^2 + \sigma_H^2\right] \qquad (3)$$

Similarly, the quality loss function for the L-Type performance characteristic is defined by taking the reciprocal of $H$ as:

$$C_{ql}(H) = k'\left[(1/\mu_H^2) + \sigma_{1/H}^2\right] \qquad (4)$$
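For concreteness, Eqs. (2)–(4) can be sketched directly (a minimal illustration; the quality loss coefficient k' defaults to 1 here, and the standard deviation of the reciprocal 1/H must be supplied for the L-Type case):

```python
def n_type_loss(mu, sigma, target, k=1.0):
    """Eq. (2): expected loss for a nominal-the-best characteristic."""
    return k * ((mu - target) ** 2 + sigma ** 2)

def s_type_loss(mu, sigma, k=1.0):
    """Eq. (3): expected loss for a smaller-the-better characteristic."""
    return k * (mu ** 2 + sigma ** 2)

def l_type_loss(mu, sigma_recip, k=1.0):
    """Eq. (4): loss for a larger-the-better characteristic, formed on the
    reciprocal 1/H (sigma_recip is the std. dev. of 1/H)."""
    return k * (1.0 / mu ** 2 + sigma_recip ** 2)

# An on-target N-Type characteristic is penalized only for its variance:
print(n_type_loss(mu=10.0, sigma=0.5, target=10.0))  # 0.25
```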
Using the above quality loss functions, a generic RBRDO model was formulated as (Youn et al., 2005):

$$\begin{aligned}
\text{Minimize } Z ={}& \sum_{i=1}^{n_n} w_{ni}\left[\left(\frac{\mu_H - \theta_t}{\mu_{H_o} - \theta_t}\right)^2 + \left(\frac{\sigma_H}{\sigma_{H_o}}\right)^2\right] + \sum_{j=1}^{n_s} w_{sj}\left[\operatorname{sgn}(\mu_H)\left(\frac{\mu_H}{\mu_{H_o}}\right)^2 + \left(\frac{\sigma_H}{\sigma_{H_o}}\right)^2\right] \\
&+ \sum_{k=1}^{n_l} w_{lk}\left[\operatorname{sgn}(\mu_H)\left(\frac{\mu_{H_o}}{\mu_H}\right)^2 + \left(\frac{\sigma_{1/H}}{\sigma_{1/H_o}}\right)^2\right] \\
\text{Subject to } & \operatorname{prob}(G_m(d, X, P) \le 0) \ge R_m, \quad m = 1, 2, \ldots, mc \\
& x^L \le x \le x^U, \qquad d^L \le d \le d^U
\end{aligned} \qquad (5)$$

where $n_n$, $n_s$, and $n_l$ are the numbers of N-Type, S-Type, and L-Type responses or performance characteristics, and $w_{ni}$, $w_{sj}$, and $w_{lk}$ are the weights assigned to each category of performance characteristics. $\mu_H$, $\theta_t$, and $\sigma_H$ are the expected value, target value, and standard deviation of the performance characteristic $H$. The objective functions are normalized by initial design values subscripted by ‘o’. $R_m$ represents the reliability target for the mth probabilistic constraint. $x^L$ and $x^U$ are the lower and upper limits of the random design variable vector $X$; similarly, $d^L$ and $d^U$ are the lower and upper limits of the deterministic design variable vector $d$. $P$ is the vector of random design parameters. $\operatorname{sgn}(\mu_H)$ is the signum function multiplied with the S-Type and L-Type objective functions to properly minimize or maximize the objective in case of a negative robust response. This model contains a WS-based composite objective function formulated using N-Type, S-Type, and L-Type robustness objectives subject to probabilistic constraint(s). A WS-based objective function can face difficulties in exploring a non-convex trade-off region. Further, since this model requires user-defined weights assigned to each functional characteristic, it is subjective to the user's preferences.

2.2. Percentile difference-based RBRDO model

The percentile difference-based RBRDO model achieves robustness through an objective function wherein the performance variation is approximately evaluated using the percentile performance difference between the right and left tails of the performance distribution (see Fig. 1). The use of the percentile difference method (PDM) in the formulation of the RBRDO model was
Fig. 1. Percentile performance difference.
originally proposed by Du et al. (2004) and then used by other researchers such as Mourelatos and Liang (2006) and Zhuang and Pan (2010). In the model, the performance variation measure $V_f$ can be expressed as:

$$V_f(x) = \Delta f_{a_1}^{a_2} = f^{a_2} - f^{a_1} \qquad (8)$$

where $a_1$ and $a_2$ are the low and high percentiles of the performance function $f$, respectively. Fig. 1 shows the percentile performance difference as the distance between the $a_1$ percentile performance and the $a_2$ percentile performance. Shrinking the distance between the low and high percentile performance is equivalent to minimizing the performance variation (Mourelatos & Liang, 2006; Zhuang & Pan, 2010). The use of the percentile difference over the standard deviation for assessing variability of the performance function provides certain advantages: (1) since the PDM relates to the probability at the tail areas of the performance distribution, it can be used even if performance distributions are skewed and do not follow the normal distribution, whereas the standard deviation only captures the dispersion around the mean value and therefore requires a symmetric distribution (Du et al., 2004); and (2) the PDM also gives the confidence level easily without much calculation. The generic formulation of the RBRDO model using the PDM approach is given as (Du et al., 2004; Lee et al., 2008; Mourelatos & Liang, 2006; Zhuang & Pan, 2010):
$$\begin{aligned}
& \text{Minimize } f(d, x, p) \\
& \text{Minimize } V_f(d, x, p) = f^{a_2} - f^{a_1} \\
& \text{Subject to } \operatorname{prob}(G_m(d, X, P) \le 0) \ge R_m, \quad m = 1, 2, \ldots, mc \\
& x^L \le x \le x^U, \qquad d^L \le d \le d^U
\end{aligned} \qquad (9)$$
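The variation measure in Eq. (9) lends itself to a simple Monte Carlo sketch (illustrative only; the performance function, percentile levels, and sample size below are arbitrary choices, not values from the cited studies):

```python
import random

def percentile(sorted_vals, q):
    """Empirical q-quantile by nearest rank on a pre-sorted sample."""
    idx = min(int(q * len(sorted_vals)), len(sorted_vals) - 1)
    return sorted_vals[idx]

def percentile_difference(f, sample_x, n=50_000, a1=0.05, a2=0.95, seed=7):
    """V_f = f^{a2} - f^{a1}, estimated from n random draws of the inputs."""
    rng = random.Random(seed)
    vals = sorted(f(sample_x(rng)) for _ in range(n))
    return percentile(vals, a2) - percentile(vals, a1)

# Hypothetical performance function of two independent normal inputs:
f = lambda x: x[0] + 2.0 * x[1]
sample_x = lambda rng: (rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))
print(percentile_difference(f, sample_x))
```

For this linear f the output is normal with standard deviation sqrt(5), so the 5th–95th percentile difference should come out near 2 × 1.645 × sqrt(5) ≈ 7.36, which is one way to sanity-check the estimator.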
The idea behind the PDM is straightforward and applicable to both symmetric and asymmetric distributions, but it has some serious shortcomings. In the absence of monotonic performance functions, the percentile difference is not a useful measure of robustness. In addition, there is no defined percentile range that can be used in the PDM to identify the global minimum in the presence of more than one minimum. Further, this model does not treat robustness objectives as S-Type, L-Type, and N-Type, which is considered a prime requirement for achieving robustness. Moreover, it needs subjective input in terms of the percentile range.

2.3. Hybrid quality loss functions (HQLF)-based RBRDO model

The HQLF-based RBRDO model, proposed by Yadav et al. (2010), is formulated utilizing the concepts of goal programming and Taguchi’s quality loss function. This model attempts to find trade-off solutions by satisfying several constraints (goals) to the extent possible. This feature makes the model more appealing to designers as compared to the other models. In the HQLF-based model, an objective function is formulated using deviational variables that are classified as desirable and undesirable deviational variables (δ). These deviational variables are simply the difference between the targeted value and the actual value obtained by the model for any given performance characteristic. Since the model classifies all performance characteristics into N-Type, S-Type, and L-Type categories, it treats the deviational variables associated with each of these categories as desirable or undesirable. For example, for an S-Type performance characteristic $f_s(x)$ with target value $T$, the model constraint can be represented as $f_s(x) + \delta^- - \delta^+ = T$. Here $\delta^+$ represents the overachievement of the target value, and hence for S-Type characteristics it is classified
as undesirable, while the underachievement $\delta^-$ is classified as a desirable deviational variable. This model uses both the desirable and undesirable deviational variables to form robustness objective functions and hence is called the hybrid quality loss functions based model. Further, the deviational variables used to form the HQLF-based objective function are transformed into exponential form to ensure that the objective function is continuous and determinate even at zero deviational variable values. The model uses appropriate robustness objectives derived using Taguchi’s quality loss function for the N-Type, S-Type, and L-Type performance characteristics as discussed below. For the N-Type performance characteristic, both the underachievement and overachievement (δ) are undesirable and need to be minimized. Hence the formulation of the objective function can be given as:

$$L(\delta) = k'\left[(\exp(\delta^-))^2 + (\exp(\delta^+))^2\right] \qquad (10)$$

For the S-Type performance characteristic, overachievement ($\delta^+$) is always undesirable, whereas underachievement ($\delta^-$) is desirable. Accordingly, the formulation of the S-Type quality loss function can be given as:

$$L(\delta) = k'\left[(\exp(-\delta^-))^2 + (\exp(\delta^+))^2\right] \qquad (11)$$

Similarly, for the L-Type performance characteristic, overachievement ($\delta^+$) is always desirable, whereas underachievement ($\delta^-$) is undesirable. The formulation of the L-Type quality loss function can be given as:

$$L(\delta) = k'\left[(\exp(\delta^-))^2 + (\exp(-\delta^+))^2\right] \qquad (12)$$
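The exponential transformation in Eqs. (10)–(12) can be sketched as follows (a minimal illustration with k' = 1, assuming desirable deviations enter with negated exponents so that increasing them is rewarded; at zero deviations each loss is finite and equal to 2k', which is the continuity property the text emphasizes):

```python
import math

def hqlf_n_type(d_minus, d_plus, k=1.0):
    """Eq. (10): both deviations are undesirable for N-Type."""
    return k * (math.exp(d_minus) ** 2 + math.exp(d_plus) ** 2)

def hqlf_s_type(d_minus, d_plus, k=1.0):
    """Eq. (11): overachievement d+ undesirable, underachievement d- desirable."""
    return k * (math.exp(-d_minus) ** 2 + math.exp(d_plus) ** 2)

def hqlf_l_type(d_minus, d_plus, k=1.0):
    """Eq. (12): underachievement d- undesirable, overachievement d+ desirable."""
    return k * (math.exp(d_minus) ** 2 + math.exp(-d_plus) ** 2)

# The loss stays defined (and equal to 2k) at zero deviations:
print(hqlf_n_type(0.0, 0.0))  # 2.0
```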
A generic form of the HQLF-based RBRDO model is given as (Yadav et al., 2010):

$$\begin{aligned}
\text{Minimize } Z ={}& \sum_{i=1}^{n_1} w_{1i}\left[(\exp(\delta_{1i}^{+}))^2 + (\exp(\delta_{1i}^{-}))^2\right] + \sum_{i=1}^{n_2} w_{2i}\left[(\exp(\delta_{2i}^{+}))^2 + (\exp(-\delta_{2i}^{-}))^2\right] \\
&+ \sum_{l=1}^{n_3} w_{3l}\left[(\exp(-\delta_{3l}^{+}))^2 + (\exp(\delta_{3l}^{-}))^2\right] \\
\text{Subject to } & f_i(x_j) + \delta_{1i}^{-} - \delta_{1i}^{+} = T_{1i}, \quad i = 1, 2, \ldots, n \\
& V_i(x_j) + \delta_{2i}^{-} - \delta_{2i}^{+} = T_{2i}, \quad i = 1, 2, \ldots, n \\
& \operatorname{prob}\!\left(\frac{\mu_{gl}}{\sigma_{gl}}\right) + \delta_{3l}^{-} - \delta_{3l}^{+} = R_{gl}, \quad l = 1, 2, \ldots, m \\
& x_j^{l} \le x_j \le x_j^{u}, \quad j = 1, 2, \ldots, k \\
& x_j, \delta_i \ge 0
\end{aligned} \qquad (13)$$

where

$$\mu_g = S - L, \qquad \sigma_g = \sqrt{\sigma_S^2 + \sigma_L^2}$$

and $w_{1i}$, $w_{2i}$, and $w_{3l}$ are the weights assigned to the expected performance, performance variance, and reliability constraints/goals of each performance characteristic. $\delta_{1i}^{+}$ is the overachievement and $\delta_{1i}^{-}$ the underachievement from the target value of the ith response; $\delta_{2i}^{+}$ is the overachievement and $\delta_{2i}^{-}$ the underachievement from the variance value of the ith response; similarly, $\delta_{3l}^{+}$ represents the overachievement and $\delta_{3l}^{-}$ the underachievement from the reliability target. $f_i(x)$ and $V_i(x)$ represent the expected functional value and the variance in the performance of the ith characteristic, respectively. $T_{1i}$ is the target value and $T_{2i}$ the variance goal value for the ith characteristic. $R_{gl}$ is the target reliability, and $x_j^l$ and $x_j^u$ are the lower and upper limits of the jth design variable. $S$ represents the strength or capacity and $L$ represents a measure of the load or stress acting on the system; $\mu_g$ and $\sigma_g$ are the mean and standard deviation of the limit state function. In this generic model, the expected-value constraints are treated as N-Type, the variance constraints as S-Type, and the reliability constraints as L-Type. The major weakness of the original model was that the deviational variables used to form the composite objective function may not be of the same magnitude, thereby making some constraints difficult to satisfy due to the dominance of higher-magnitude deviational variables (Pedersen & Goldberg, 2004). However, this weakness can be addressed by normalizing or scaling the constraints. Further, like the moment-based RBRDO model, this model also uses the WS method to form a composite objective function and therefore can face difficulties in exploring the non-convex trade-off region. Since the model requires user-defined weights, it is also subjective to the user's preferences. A comparative review of these three RBRDO models is summarized in Table 1. The following section further investigates the performance capabilities of these models considering a cantilever beam example.

2.4. Comparative study

The performance capabilities of the RBRDO models can be compared by solving an appropriate design problem on an identical platform, i.e. by using similar methods for estimating performance characteristics and a similar optimization technique. It is worthwhile to mention here that Youn et al. (2005) have used a performance moment integration method to estimate the mean and variance of the performance characteristic. However, for comparison purposes, a commonly used second-order Taylor series expansion method is used to estimate the mean and variance for all the models. The mean $\bar{f}(x_i)$ and variance $V(x_i)$ of each performance characteristic are calculated using Eqs. (14) and (15) as shown below:

$$\bar{f}(x_i) = f(\bar{x}_i) + \frac{1}{2}\sum_{x_i}\frac{\partial^2 f(x_i)}{\partial x_i^2}\,\sigma_{x_i}^2 \qquad (14)$$

$$V(x_i) = \sum_{x_i}\left(\frac{\partial f(x_i)}{\partial x_i}\right)^2 \sigma_{x_i}^2 \qquad (15)$$
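Eqs. (14) and (15) translate directly into code (a generic sketch for independent random inputs; finite-difference derivatives are used here in place of analytic ones):

```python
def taylor_moments(f, mu, sigma, eps=1e-5):
    """Second-order Taylor mean (Eq. 14) and first-order variance (Eq. 15)
    of f(x) for independent inputs with means mu and std. devs. sigma."""
    mean = f(mu)
    var = 0.0
    for i in range(len(mu)):
        up = list(mu); up[i] += eps
        dn = list(mu); dn[i] -= eps
        d1 = (f(up) - f(dn)) / (2 * eps)              # first partial derivative
        d2 = (f(up) - 2 * f(mu) + f(dn)) / eps ** 2   # second partial derivative
        mean += 0.5 * d2 * sigma[i] ** 2
        var += (d1 * sigma[i]) ** 2
    return mean, var

# Sanity check on a quadratic, where the mean estimate is exact:
# f = x^2 with x ~ N(3, 1) gives E[f] = 9 + 1 = 10 and a linearized variance of (2*3)^2 = 36
print(taylor_moments(lambda x: x[0] ** 2, [3.0], [1.0]))
```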
This is done essentially to keep uniformity and to compare the different formulation approaches on the same platform. A sequential optimization approach has been used to obtain the optimal solutions. A cantilever beam example has been widely used in the literature to demonstrate the applicability of probabilistic design optimization methods (Du & Chen, 2000; Mourelatos & Liang, 2006; Ramu, Qu, Youn, Haftka, & Choi, 2006; Shan & Wang, 2008; Zhuang & Pan, 2010) and therefore provides a good case for comparing the performance capabilities of the RBRDO models. Fig. 2 shows a cantilever beam diagram adopted from Du and Chen (2000). The cantilever beam is designed against yielding due to bending stress while the cross-sectional area is to be kept at a minimum. The ratio h/b should be less than or equal to 2. The length of the beam (L) is considered to be 200 mm. This problem has two design variables, x = [x1, x2]T = [b, h]T, and four design parameters, P = [P1, P2, P3, P4]T = [R, Q, E, D]T. The design variables b and h are the dimensions of the cross-section as shown in Fig. 2, while R is the allowable stress (strength), Q is the vertical force, E is the elastic modulus, and D is the allowable deflection of the beam due to the force Q. All the design variables/parameters are considered to be normally distributed with the distribution parameters given in Table 2. However, if these parameters/variables
Table 1
Comparison of the RBRDO models.

Moment-based RBRDO model (Youn et al., 2005; Lee et al., 2008):
- Uses N-Type, S-Type, and L-Type robustness objectives derived from Taguchi’s quality loss function to achieve robustness
- Weighted-sum method is used to form the composite objective
- Normalization of the composite objective function is achieved using initial design values
- Inadequate to explore a non-convex trade-off region
- Allows process bias

Percentile difference-based RBRDO model (Du et al., 2004; Mourelatos and Liang, 2006; Lee et al., 2008; Zhuang and Pan, 2010):
- The percentile performance difference between the right and left tails of the performance distribution is used to achieve robustness
- Does not classify robustness objectives as N-Type, S-Type, and L-Type
- Weighted-sum method is used to form the composite objective, and it needs appropriate normalization
- Inadequate to explore a non-convex trade-off region
- Difficult to decide suitable percentile difference levels or range
- Allows process bias

Hybrid quality loss functions (HQLF)-based RBRDO model (Yadav et al., 2010):
- Classifies the performance characteristics as N-Type, S-Type, and L-Type to achieve robustness and uses deviational-variable-based robustness objectives derived from Taguchi’s quality loss function
- Weighted-sum method is used to form the composite objective function using deviational variables
- Appropriate normalization or scaling of constraints/goals is needed
- Inadequate to explore a non-convex trade-off region
- Allows process bias
follow a non-normal distribution, an appropriate approximate transformation technique suggested by Cruse (1997) can be utilized to satisfy the normality assumption.

Fig. 2. Cantilever beam.

Table 2
Random variables/parameters for the cantilever beam; distributions are Normal(μ, σ).

R (MPa): Normal(200, 20)
Q (kN): Normal(200, 20)
E (MPa): Normal(5300, 500)
D (mm): Normal(9, 1)
b (mm): Normal(b, 0.05)
h (mm): Normal(h, 0.05)

The example represents a dual-objective optimization problem, i.e. reduce the cross-sectional area ($\mu_A$) of the beam and its variance ($\sigma_A^2$). The material density and the length of the beam are considered constant and known. The optimal design should satisfy the strength, allowable deflection, and height-to-width ratio requirements. The strength and deflection requirements can be defined as:

$$R - S \ge 0 \qquad (16)$$

$$D - d \ge 0 \qquad (17)$$

where $S$ represents the stress and $d$ denotes the deflection developed in the beam. The strength and maximum allowable deflection requirements are treated as probabilistic constraints and therefore need to satisfy the required reliability target (0.99). The height-to-width ratio should be less than or equal to 2 and is considered a deterministic constraint. The appropriate performance measures related to the cross-sectional area, stress, and deflection of the beam are calculated using the following equations:

$$\mu_A = bh + \frac{1}{2}\left(h\sigma_h^2 + b\sigma_b^2\right) \qquad (18)$$

$$\sigma_A^2 = b^2\sigma_b^2 + h^2\sigma_h^2 \qquad (19)$$

$$\mu_s = \frac{6QL}{bh^2} + \frac{1}{2}\left(\frac{12QL}{b^3h^2}\,\sigma_b^2 + \frac{36QL}{bh^4}\,\sigma_h^2\right) \qquad (20)$$

$$\sigma_s^2 = \left(\frac{6L}{bh^2}\right)^2\sigma_Q^2 + \left(\frac{6QL}{b^2h^2}\right)^2\sigma_b^2 + \left(\frac{12QL}{bh^3}\right)^2\sigma_h^2 \qquad (21)$$

$$\mu_d = \frac{4QL^3}{Ebh^3} + \frac{1}{2}\left(\frac{8QL^3}{E^3bh^3}\,\sigma_E^2 + \frac{8QL^3}{Eb^3h^3}\,\sigma_b^2 + \frac{48QL^3}{Ebh^5}\,\sigma_h^2\right) \qquad (22)$$

$$\sigma_d^2 = \left(\frac{4QL^3}{E^2bh^3}\right)^2\sigma_E^2 + \left(\frac{4QL^3}{Eb^2h^3}\right)^2\sigma_b^2 + \left(\frac{12QL^3}{Ebh^4}\right)^2\sigma_h^2 + \left(\frac{4L^3}{Ebh^3}\right)^2\sigma_Q^2 \qquad (23)$$
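Eqs. (18)–(23) can be collected into a single routine (a sketch; the numeric defaults are the Table 2 values as printed, whose units are an assumption carried over from the extracted table):

```python
def cantilever_measures(b, h, Q=200.0, E=5300.0, L=200.0,
                        sb=0.05, sh=0.05, sQ=20.0, sE=500.0):
    """Approximate moments of area, stress, and deflection, Eqs. (18)-(23)."""
    mu_A = b * h + 0.5 * (h * sh ** 2 + b * sb ** 2)                      # (18)
    var_A = b ** 2 * sb ** 2 + h ** 2 * sh ** 2                          # (19)
    mu_s = (6 * Q * L / (b * h ** 2)
            + 0.5 * (12 * Q * L / (b ** 3 * h ** 2) * sb ** 2
                     + 36 * Q * L / (b * h ** 4) * sh ** 2))             # (20)
    var_s = ((6 * L / (b * h ** 2)) ** 2 * sQ ** 2
             + (6 * Q * L / (b ** 2 * h ** 2)) ** 2 * sb ** 2
             + (12 * Q * L / (b * h ** 3)) ** 2 * sh ** 2)               # (21)
    mu_d = (4 * Q * L ** 3 / (E * b * h ** 3)
            + 0.5 * (8 * Q * L ** 3 / (E ** 3 * b * h ** 3) * sE ** 2
                     + 8 * Q * L ** 3 / (E * b ** 3 * h ** 3) * sb ** 2
                     + 48 * Q * L ** 3 / (E * b * h ** 5) * sh ** 2))    # (22)
    var_d = ((4 * Q * L ** 3 / (E ** 2 * b * h ** 3)) ** 2 * sE ** 2
             + (4 * Q * L ** 3 / (E * b ** 2 * h ** 3)) ** 2 * sb ** 2
             + (12 * Q * L ** 3 / (E * b * h ** 4)) ** 2 * sh ** 2
             + (4 * L ** 3 / (E * b * h ** 3)) ** 2 * sQ ** 2)           # (23)
    return mu_A, var_A, mu_s, var_s, mu_d, var_d

# A design near the reported optima (b ~ 40 mm, h ~ 75 mm) gives an
# expected area of about 3000 mm^2 with variance about 18, matching
# the scale of the goals used in the HQLF formulation below:
mu_A, var_A, *_ = cantilever_measures(b=40.0, h=75.0)
print(round(mu_A, 2), round(var_A, 4))
```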
Considering the above requirements and constraints, the moment-based RBRDO model is formulated for the cantilever beam example as given below:

$$\begin{aligned}
\text{Minimize } Z ={}& w_1\,\operatorname{sgn}(\mu_A)\left(\frac{\mu_A}{\mu_{A_o}}\right)^2 + w_2\left(\frac{\sigma_A}{\sigma_{A_o}}\right)^2 \\
\text{Subject to } & \operatorname{prob}(R - S \ge 0) \ge 0.99 \\
& \operatorname{prob}(D - d \ge 0) \ge 0.99 \\
& 2 - h/b \ge 0 \\
& 30 \le b \le 45, \quad 60 \le h \le 80 \\
& \text{where } w_1 + w_2 = 1
\end{aligned} \qquad (24)$$

where $\mu_{A_o}$ and $\sigma_{A_o}$ are the initial design values used to normalize the objective functions. The percentile difference-based RBRDO model is formulated as:

$$\begin{aligned}
\text{Minimize } Z ={}& w_1(\mu_A) + w_2\left(f_A^{a_2} - f_A^{a_1}\right) \\
\text{Subject to } & \operatorname{prob}(R - S \ge 0) \ge 0.99 \\
& \operatorname{prob}(D - d \ge 0) \ge 0.99 \\
& 2 - h/b \ge 0 \\
& 30 \le b \le 45, \quad 60 \le h \le 80 \\
& \text{where } w_1 + w_2 = 1
\end{aligned} \qquad (25)$$
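The probabilistic constraints prob(R − S ≥ 0) ≥ 0.99 in Eqs. (24) and (25) can be evaluated in closed form when the limit state g = capacity − demand is normal, using the reliability index β = μ_g/σ_g (a standard first-order relation, not code from the cited papers; the numeric arguments below are illustrative):

```python
import math

def reliability_index(mu_cap, sd_cap, mu_dem, sd_dem):
    """beta = mu_g / sigma_g for the normal limit state g = capacity - demand."""
    return (mu_cap - mu_dem) / math.sqrt(sd_cap ** 2 + sd_dem ** 2)

def reliability(beta):
    """prob(g >= 0) = Phi(beta), via the error function."""
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

# beta = 3 corresponds to roughly 0.9987 reliability, which is why the
# rows in Tables 3-5 with beta_D = 3.00 satisfy the 0.99 target:
beta = reliability_index(200.0, 20.0, 100.0, 10.0)
print(round(beta, 3), reliability(3.0) >= 0.99)
```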
In the case of the HQLF-based model, it is important to make all the constraints dimensionless to avoid domination by higher-magnitude deviational variables. To achieve this, it is necessary to scale the constraints in the following manner:

$$\frac{\text{Constraint}}{\text{Expected value of constraint}} + \delta^{-} - \delta^{+} = 1 \qquad (26)$$

In the percentile difference-based model, the objective functions are appropriately normalized using initial design values. Further, $X_{a_1}$ and $X_{a_2}$ represent the coordinates considered for the low percentile ($a_1 = 0.05$) on the left tail and the high percentile ($a_2 = 0.95$) on the right tail:

$$X_{a_1} = \left[\mu_b - \frac{a_1\sigma\mu_h}{\sqrt{\mu_b^2 + \mu_h^2}},\;\; \mu_h - \frac{a_1\sigma\mu_b}{\sqrt{\mu_b^2 + \mu_h^2}}\right], \qquad
X_{a_2} = \left[\mu_b + \frac{a_2\sigma\mu_h}{\sqrt{\mu_b^2 + \mu_h^2}},\;\; \mu_h + \frac{a_2\sigma\mu_b}{\sqrt{\mu_b^2 + \mu_h^2}}\right] \qquad (27)$$

The HQLF-based RBRDO model is formulated as given below:

$$\begin{aligned}
\text{Minimize } Z ={}& w_1\big[(\exp(\delta_1^{+}))^2 + (\exp(\delta_1^{-}))^2 + (\exp(\delta_3^{+}))^2 + (\exp(-\delta_3^{-}))^2 + (\exp(\delta_4^{+}))^2 + (\exp(-\delta_4^{-}))^2\big] \\
&+ w_2\big[(\exp(\delta_2^{+}))^2 + (\exp(-\delta_2^{-}))^2 + (\exp(\delta_7^{+}))^2 + (\exp(-\delta_7^{-}))^2\big] \\
&+ w_3\big[(\exp(-\delta_5^{+}))^2 + (\exp(\delta_5^{-}))^2 + (\exp(-\delta_6^{+}))^2 + (\exp(\delta_6^{-}))^2\big] \\
\text{Subject to } & bh + \frac{1}{2}\left(h\sigma_h^2 + b\sigma_b^2\right) + \delta_1^{-} - \delta_1^{+} = 3000.00 \\
& \left(b^2\sigma_b^2 + h^2\sigma_h^2\right) + \delta_2^{-} - \delta_2^{+} = 18.00 \\
& \frac{6QL}{bh^2} + \frac{1}{2}\left(\frac{12QL}{b^3h^2}\sigma_b^2 + \frac{36QL}{bh^4}\sigma_h^2\right) + \delta_3^{-} - \delta_3^{+} = 200 \\
& \left(\frac{4QL^3}{Ebh^3} + \frac{1}{2}\left(\frac{8QL^3}{E^3bh^3}\sigma_E^2 + \frac{8QL^3}{Eb^3h^3}\sigma_b^2 + \frac{48QL^3}{Ebh^5}\sigma_h^2\right)\right) + \delta_4^{-} - \delta_4^{+} = 1 \\
& \operatorname{prob}(R - S) + \delta_5^{-} - \delta_5^{+} = 0.99 \\
& \operatorname{prob}(D - d) + \delta_6^{-} - \delta_6^{+} = 0.99 \\
& h/b + \delta_7^{-} - \delta_7^{+} = 2 \\
& 30 \le b \le 45, \quad 60 \le h \le 80 \\
& \text{where } w_1 + w_2 + w_3 = 1
\end{aligned} \qquad (28)$$

Table 3
Results of the moment-based model.

Weight factors (w1, w2) | Objectives (μ_A, σ_A²) | Design variables (b, h) | Strength & deflection reliability index (β_S, β_D) | Iterations (functional evaluations) k(n)
1.0, 0.0 | 3044.4020, 19.0266 | 39.0145, 78.0289 | 4.41, 3.00 | 7(39)
0.1, 0.9 | 3147.0809, 18.9281 | 41.0047, 76.7455 | 4.50, 3.00 | 9(54)
0.0, 1.0 | 3271.4279, 18.8867 | 43.4588, 75.2728 | 4.60, 3.00 | 8(47)

Table 4
Results of the percentile difference-based model (a1 = 0.05, a2 = 0.95).

Weight factors (w1, w2) | Objectives (μ_A, σ_A²) | Design variables (b, h) | Strength & deflection reliability index (β_S, β_D) | Iterations (functional evaluations) k(n)
1.0, 0.0 | 3044.4020, 19.0266 | 39.0145, 78.0289 | 4.41, 3.00 | 8(45)
0.1, 0.9 | 3256.8804, 18.8872 | 43.1692, 75.4407 | 4.59, 3.00 | 3(19)
0.0, 1.0 | 3257.0624, 18.8872 | 43.4588, 75.2728 | 4.59, 3.00 | 3(19)
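The deviational variables that drive the HQLF formulation above can be recovered from any candidate design by splitting each constraint's gap from its goal (a small illustrative helper, not part of the original formulation):

```python
def deviations(value, target):
    """Split value vs. goal into (underachievement d-, overachievement d+);
    at most one of the pair is nonzero, as goal programming requires."""
    return max(target - value, 0.0), max(value - target, 0.0)

# e.g. a candidate design with mean area 3044.4 against the 3000.0 area goal:
d_minus, d_plus = deviations(3044.4, 3000.0)
print(d_minus, d_plus)  # d- is 0, d+ carries the overshoot
```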
These three models were solved using a sequential optimization approach on the MATLAB platform, and the optimization results are provided in Tables 3–5. The optimal results clearly show that changing the weight values assigned to different objectives results in different sets of optimal solutions. For example, assigning more weight to the objective of minimizing the cross-sectional area provides the optimal value μ_A = 3044.4020, while assigning a higher weight to the variance minimization (robustness) objective gives the optimal value σ_A² = 18.8867. The model sensitivity to subjective input (weights assigned to different objectives) is explored by varying the weights assigned to different objectives in increments of 0.1 while providing similar initial conditions. The results show that all the models are sensitive to subjective input and therefore provide different optimal solutions for different sets of weights assigned to the objectives (see Tables 3–5). Weight factors that yielded repeated solutions are not shown in the tables. Judging by the objective function values, all the models demonstrate a similar pattern, with small variation in the actual values obtained. Comparing the performance in meeting the probabilistic requirements (reliability constraints), for both the moment-based and percentile difference-based models the optimal value of the reliability index for strength varies in the range β_S = 4.41–4.60 and the deflection reliability index value is β_D = 3.00. On the other hand, the HQLF-based model achieves a larger span of the reliability index for both the strength (β_S = 4.41–5.37) and deflection (β_D = 3.00–5.17) requirements. The reason is that the HQLF-based model allows decision makers to assign a weight (preference) to the reliability requirements as well.
As shown in Table 5, assigning the weight w3 = 1.0 means that the decision makers consider the reliability requirement the most important, which yields an optimal solution with higher reliability, though at the cost of the other objectives. This essentially gives decision makers more options for selecting a desired optimal solution that offers a better trade-off among their requirements.

Table 5
Results of the HQLF-based model.

Weight factors (w1, w2, w3)   Objectives (μA, σ²A)    Design variables (b, h)   Reliability indices (βS, βD)   Iterations & functional evaluations (k, n)
0.2, 0.6, 0.2                 3044.4028, 19.0266      39.0145, 78.0289          4.41, 3.00                     20 (357)
0.1, 0.7, 0.2                 3111.9877, 18.9551      40.3208, 77.1770          4.47, 3.00                     24 (425)
0.0, 0.8, 0.2                 3271.4611, 18.8867      43.4594, 75.2724          4.67, 3.00                     26 (461)
0.0, 0.0, 1.0                 3596.1502, 21.0512      44.9498, 80.0000          5.37, 5.17                     17 (305)

The optimization results of these models are further compared on the basis of the number of iterations and functional evaluations required to reach the optimal solutions. Here, an iteration refers to a repetition in which the output of one cycle is fed as input to the next, and a functional evaluation refers to the intermediate calculations required to evaluate the objective function and constraints during each cycle or iteration. As shown in Tables 3–5, compared with the other two models the percentile difference-based model proves to be faster, since it requires fewer iterations and functional evaluations. The HQLF-based model, on the other hand, proves to be more computationally expensive, requiring more iterations and functional evaluations to reach the optimal solution, which makes it slower. The reason may be the relatively larger number of constraints, which results in a larger composite objective function.

It is also worth noting that, during optimization, the HQLF-based RBRDO model showed a tendency to yield non-optimal solutions when appropriate preferences were not given to the reliability-related constraints, whereas the percentile difference-based model tends to yield different optimal design solutions for different percentile ranges. The percentile difference-based model also requires additional subjective input in the form of a suitable percentile range as well as the weights assigned to the different requirements. The tendency to yield different optimal solutions for different percentile ranges therefore makes a design engineer's job difficult. A similar observation on the performance of this model has been reported by Lee et al. (2008). The moment-based model requires careful selection of the initial design values used for normalization; the utopia points proved to be a good choice as initial design values. The inability of these models to explore multiple solutions over the entire Pareto region can be attributed to the use of the WS method to form the composite objective function, which restricts them from exploring the entire trade-off region. The HQLF-based model, however, provides a few additional features compared with the other two models, even though it is computationally expensive. For instance, in the moment-based and percentile difference-based models the designer cannot assign preferences to the reliability requirements, because these requirements are not part of the objective function; the designer can change the desired reliability target by assigning different goals to the probabilistic constraints, but cannot assign any weight or preference to them. The HQLF-based model, on the other hand, gives the designer the opportunity and flexibility to prioritize the reliability and robustness requirements.
In addition, the use of the goal programming concept gives this model another appealing angle. Therefore, to further enhance the effectiveness of this model, we propose using a multi-objective evolutionary approach such as MOGA for optimization. Since MOGA cannot directly handle probabilistic constraints, a different approach is required to deal with such constraints when solving the HQLF-based RBRDO model. There are two possible ways to address this issue. One is to convert the probabilistic constraints into deterministic constraints, as suggested by Shan and Wang (2008), and then apply the MOGA approach. A similar evolutionary approach was proposed by Zhuang and Pan (2010), who used a multi-objective Memetic algorithm (MOMA) to optimize the percentile difference-based RBRDO model. However, the use of a gradient-based element for local search requires continuity in the objective function and needs subjective input, such as preferences, to obtain an accurate Pareto front. Further, in the presence of a larger number of objectives (as in the HQLF-based model), a MOMA-like approach can be computationally very expensive. The second possible way is to integrate a reliability assessment loop, based on the performance measure or reliability index approach, with MOGA, as suggested by Deb, Gupta, Daum, and Branke (2009). However, integrating a reliability assessment loop with MOGA adds extra computational burden. It is therefore proposed to convert the probabilistic constraints into deterministic constraints and then use a standard MOGA for optimization, so that the additional burden of the reliability assessment loop and the demerits of gradient-based search are both avoided. In the next section, a MOGA-based solution approach is proposed for the HQLF-based RBRDO model to enhance its capability to explore the entire trade-off region.
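As an illustration of the first option, a probabilistic constraint of the form prob(g(x) ≥ 0) ≥ R_t can be replaced, under a first-order normality assumption on g, by the deterministic condition μ_g − Φ⁻¹(R_t)·σ_g ≥ 0. The sketch below is ours (the normal assumption and the example numbers are illustrative, not taken from the paper):

```python
from statistics import NormalDist

def deterministic_form(mu_g, sigma_g, target_reliability):
    """Replace prob(g >= 0) >= R_t by the deterministic check
    mu_g - beta*sigma_g >= 0, with beta = Phi^-1(R_t), assuming g is
    (approximately) normally distributed."""
    beta = NormalDist().inv_cdf(target_reliability)
    margin = mu_g - beta * sigma_g
    return margin >= 0.0, beta, margin

# Illustrative limit state: mean margin 50, standard deviation 15,
# required reliability 0.99 (so beta = Phi^-1(0.99) ~ 2.326).
ok, beta, margin = deterministic_form(mu_g=50.0, sigma_g=15.0,
                                      target_reliability=0.99)
```

Once in this form, the constraint can be fed to MOGA like any other deterministic requirement.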
Table 6
Conversion of constraints into equivalent objective functions.

Type of characteristic   Type of constraint/goal   Objective function
S-Type                   \(f_i(x) \le T_i\)        Minimize \(\langle f_i(x) - T_i \rangle\)
L-Type                   \(f_i(x) \ge T_i\)        Minimize \(\langle T_i - f_i(x) \rangle\)
N-Type                   \(f_i(x) = T_i\)          Minimize \(|f_i(x) - T_i|\)
3. MOGA for the HQLF-based RBRDO model

The HQLF-based RBRDO model is based on a nonlinear goal programming approach. To improve the capability of this model for exploring trade-off solutions over the entire Pareto region, we propose to use MOGA, an evolutionary optimization technique suggested by Deb (1999). MOGA is based on natural selection and biological evolution and has been very effective in handling nonlinear multi-objective optimization problems. It uses three operators at each step to create the next generation from the current population: selection, crossover, and mutation. Further, MOGA imposes no continuity or smoothness demands on the objective function, nor is it deterred by discontinuities in the feasible solution space or the decision variables involved. It puts together dissimilar blocks of a solution until a combination satisfies the imposed requirement (Chen, 2008). Since MOGA yields Pareto-optimal solutions, the suggested approach can also find multiple solutions (Deb, 1999). The technique is very effective in solving problems with non-convex trade-off regions, which are otherwise difficult to solve using conventional optimization methods. It also eliminates the need for user-defined weight factors to obtain trade-offs between a set of objectives, taking a significant burden off the decision makers. To use MOGA for optimization, the HQLF-based RBRDO model needs to be reformulated by converting each constraint into an equivalent objective function. For details of the constraint-conversion procedure, refer to Deb (1999). The HQLF-based model categorizes the performance characteristics (constraints/goals) into three basic categories: S-Type, L-Type, and N-Type. Depending on their category, these constraints can be converted into equivalent objective functions as shown in Table 6.
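The three operators mentioned above can be sketched in a generic real-coded GA loop (a minimal single-objective illustration of ours, not the multi-objective MATLAB implementation used in this paper; the population size, tournament size, and mutation parameters are arbitrary choices):

```python
import random

def tournament_select(pop, fitness, k=2):
    # Selection: keep the best of k randomly chosen individuals (minimization).
    contenders = random.sample(pop, k)
    return min(contenders, key=fitness)

def blend_crossover(a, b):
    # Crossover: each child gene is a random convex combination of parent genes.
    return [u + random.random() * (v - u) for u, v in zip(a, b)]

def gaussian_mutation(x, scale=0.1, rate=0.2):
    # Mutation: perturb each gene with small Gaussian noise at the given rate.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in x]

def next_generation(pop, fitness):
    children = []
    while len(children) < len(pop):
        p1 = tournament_select(pop, fitness)
        p2 = tournament_select(pop, fitness)
        children.append(gaussian_mutation(blend_crossover(p1, p2)))
    return children

# Smoke test of the loop: minimize the sphere function in three dimensions.
random.seed(0)
sphere = lambda x: sum(g * g for g in x)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(40)]
for _ in range(60):
    pop = next_generation(pop, sphere)
best = min(pop, key=sphere)
```

A multi-objective variant additionally ranks the population by Pareto dominance rather than by a single fitness value.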
However, before the probabilistic constraints are converted into equivalent objective functions, they should first be converted into deterministic form as suggested by Shan and Wang (2008). Here \(f_i(x)\) represents a performance characteristic and \(T_i\) is the target to be achieved. The bracket operator \(\langle \cdot \rangle\) in the equivalent objective function returns the value of the operand if the operand is positive, and zero otherwise. The operator \(|\cdot|\) returns the absolute value of the operand. The benefits of this formulation are that (i) it does not require any additional constraints, and (ii) it yields multiple optimal solutions, corresponding to different sets of weight factors, simultaneously and without any subjective input of weight factors (Deb, 1999). Since the HQLF-based model uses a goal programming approach, the MOGA may yield some solutions that do not strictly satisfy some of the constraints. In particular, the probabilistic constraints, which characterize the reliability of a design, must be strictly satisfied to ensure design acceptability. Deb (1999) suggested using a bracket-operator penalty function whenever must-satisfy (hard) constraints are present; however, the penalty parameters may unduly elevate the dominance of the hard constraints. Therefore, we propose a simple additional external filter that sorts the optimal solutions, blocking those that do not satisfy the hard constraints.
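The bracket-operator conversions of Table 6 and the proposed external hard-constraint filter can be sketched as follows (the function names and the example constraints are ours, for illustration):

```python
def bracket(v):
    # <v> returns v if the operand is positive, otherwise 0.
    return v if v > 0 else 0.0

# Equivalent objective functions for the three constraint types of Table 6.
def s_type(f, T):   # constraint f(x) <= T
    return lambda x: bracket(f(x) - T)

def l_type(f, T):   # constraint f(x) >= T
    return lambda x: bracket(T - f(x))

def n_type(f, T):   # goal f(x) = T
    return lambda x: abs(f(x) - T)

def hard_filter(solutions, hard_constraints):
    """Keep only solutions that strictly satisfy every hard constraint
    (each constraint callable returns True when satisfied)."""
    return [x for x in solutions if all(c(x) for c in hard_constraints)]
```

Each converted objective is zero exactly when its underlying constraint or goal is met, so minimizing it drives the search toward feasibility; the filter then removes any survivors that still violate a hard (e.g. reliability) requirement.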
As mentioned earlier, the advantage of the proposed method is that it generates multiple solutions and provides several options for design engineers to select the best-fit solution. To select the best-fit solution, design engineers can follow two possible approaches:

1. Look for a solution that satisfies the predetermined requirement level of each criterion.
2. For each optimal solution, calculate how much relative importance (preference) is given to each criterion and select the one that matches the design engineer's predetermined preference levels.

The relative importance (preference) for each criterion can be estimated using the following equation (Romero, 1991):
\[
w_i = \frac{\left|f_i(x_i) - T_i\right| / \left|T_i\right|}{\sum_{i=0}^{M} \left|f_i(x_i) - T_i\right| / \left|T_i\right|} \tag{29}
\]
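Eq. (29) can be applied directly to a candidate solution's objective values. The sketch below normalizes the relative deviations from the targets (the sample values and targets are illustrative, taken loosely from the Example 1 setting):

```python
def relative_importance(values, targets):
    """Relative importance of each objective per Eq. (29): the deviation of
    each objective from its target, normalized by the target, then by the sum
    over all objectives."""
    devs = [abs(v - t) / abs(t) for v, t in zip(values, targets)]
    total = sum(devs)
    return [d / total for d in devs]

# e.g. achieved distance 94.3040 vs target 110, achieved std. dev. 4.9411 vs target 2
w = relative_importance([94.3040, 4.9411], [110.0, 2.0])
```

The resulting weights sum to one, so they can be compared directly against a designer's stated preference levels.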
3.1. Implementation of MOGA

The following steps are carried out (and suggested) for optimization using the MOGA-related function available in MATLAB:

1. Select an appropriate population size, so that a fair number of optimal solutions can be generated.
2. Set the initial population range strictly between the lower and upper bounds of the design variables.
3. Use a high Pareto fraction (80%), so that an appropriate population on the best Pareto frontier is kept during the optimization.
4. Tune the other parameters to suit the problem at hand, so that the desired set of optimal solutions can be obtained.
5. Terminate the optimization if the improvement in the fitness value is less than 0.00001 or the number of generations exceeds 500.
6. Use the additional external filter to retain only those optimal solutions that strictly satisfy the hard constraints.

A probable limitation of the suggested approach is the high computational demand of a heavily populated MOGA; the time required for optimization may not favor the suggested approach. However, the availability of faster computers and parallel computing can help overcome this limitation. To demonstrate the conversion procedure and the enhanced capability of the HQLF-based model to yield the entire Pareto front using the proposed optimization approach, we first consider a dual-objective RD optimization problem adopted from Kim and Cho (2002), and then revisit the cantilever beam problem.

Example 1. A Roman-style catapult experiment is studied in Kim and Cho (2002), wherein the control factors arm length (x1), stop angle (x2), and pivot height (x3) are used to predict the distance from the base of the catapult to the point where a projectile lands. The experiment is performed using a central composite design with three replicates at each design point. The estimated response functions of the process mean and standard deviation are:

\[
\mu_d = 84.88 + 15.29x_1 + 0.24x_2 + 18.80x_3 - 0.52x_1^2 - 11.80x_2^2 + 0.39x_3^2 + 0.22x_1x_2 + 3.60x_1x_3 - 4.4x_2x_3 \tag{30}
\]
\[
\sigma_d = 4.53 + 1.84x_1 + 4.28x_2 + 3.73x_3 + 1.16x_1^2 + 4.40x_2^2 + 0.94x_3^2 + 1.20x_1x_2 + 0.73x_1x_3 + 3.49x_2x_3 \tag{31}
\]

Considering the expected distance target \(\mu_d = 110\) and the standard deviation target \(\sigma_d = 2\), the expected-distance and variability constraints are converted into equivalent objective functions. For the conversion, the expected-distance constraint is treated as N-Type and the standard-deviation constraint as S-Type. The final formulation of the optimization problem is:

\[
\begin{aligned}
\text{Min } f_1(x_i) &= \left|\left(84.88 + 15.29x_1 + 0.24x_2 + 18.80x_3 - 0.52x_1^2 - 11.80x_2^2 + 0.39x_3^2 + 0.22x_1x_2 + 3.60x_1x_3 - 4.4x_2x_3\right) - 110\right| \\
\text{Min } f_2(x_i) &= \left\langle\left(4.53 + 1.84x_1 + 4.28x_2 + 3.73x_3 + 1.16x_1^2 + 4.40x_2^2 + 0.94x_3^2 + 1.20x_1x_2 + 0.73x_1x_3 + 3.49x_2x_3\right) - 2.00\right\rangle \\
\text{Subject to } & -1.682 \le x_j \le 1.682, \quad j = 1, 2, 3
\end{aligned} \tag{32}
\]
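The two equivalent objectives of Eq. (32) can be coded directly. The sketch below pairs them with a simple non-dominated filter over random samples, a crude stand-in for MOGA that shows the shape of the trade-off rather than the algorithm actually used here:

```python
import random

def f1(x1, x2, x3):
    # Eq. (30) mean-distance model under the N-Type target 110 (Eq. (32)).
    mu = (84.88 + 15.29*x1 + 0.24*x2 + 18.80*x3 - 0.52*x1**2
          - 11.80*x2**2 + 0.39*x3**2 + 0.22*x1*x2 + 3.60*x1*x3 - 4.4*x2*x3)
    return abs(mu - 110)

def f2(x1, x2, x3):
    # Eq. (31) standard-deviation model under the S-Type bracket operator.
    sd = (4.53 + 1.84*x1 + 4.28*x2 + 3.73*x3 + 1.16*x1**2
          + 4.40*x2**2 + 0.94*x3**2 + 1.20*x1*x2 + 0.73*x1*x3 + 3.49*x2*x3)
    return max(sd - 2.00, 0.0)

def pareto_front(points):
    """Keep points not weakly dominated by any other point (minimization)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

random.seed(1)
samples = [tuple(random.uniform(-1.682, 1.682) for _ in range(3))
           for _ in range(2000)]
objs = [(f1(*x), f2(*x)) for x in samples]
front = pareto_front(objs)
```

The surviving points trace the same mean-versus-standard-deviation trade-off that the MOGA explores far more efficiently.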
The MOGA-based solution approach yields multiple sets of optimal solutions that provide suitable trade-offs for the conflicting objectives (see Fig. 3). Each optimal solution reflects a different set of priorities assigned to the objective functions. It is important to note that these multiple solutions are generated without assigning priorities to the different objectives; there was no external intervention to create different scenarios for generating multiple optimal solutions. The Pareto front obtained using a population size of 60 is shown in Fig. 3. A comparison of the optimal solutions obtained using the proposed MOGA with those reported in Kim and Cho (2002) clearly shows that the proposed MOGA-based solution approach yields multiple solutions along the entire Pareto front. It is worth mentioning that Kim and Cho (2002) also obtained multiple optimal solutions, but only by varying priority (weight) factors, whereas the proposed MOGA-based solution approach provides the entire Pareto front without any weight assignment. This demonstrates the advantage of the MOGA-based solution approach in obtaining the entire Pareto front, which would otherwise be difficult to achieve with a gradient-based optimization method without assigning different combinations of weight factors.

Fig. 3. Mean versus standard deviation.

The decision makers can select the most suitable solution from the sets of optimal solutions either by picking the solution that meets the predetermined level of each objective, or by calculating the relative importance assigned to each objective in the solution. Table 7 provides the relative importance, calculated using Eq. (29), for each objective of three randomly selected solutions. This allows design engineers to select the solution that matches the predetermined importance (weight) assigned to each criterion or objective. For example, if the design engineer considers the expected distance very important (assigning it high priority), the first solution, which assigns high importance (almost 90%) to the expected-distance criterion, will be the choice (see Table 7). On the other hand, if variance reduction is more important, the third solution could be the obvious choice.

Table 7
Randomly selected solutions and their relative importance.

f1(xi)     f2(xi)    Weight factors (w1, w2)
94.3040    4.9411    0.91163, 0.08837
86.8621    4.0051    0.82657, 0.17342
73.2012    2.4159    0.38332, 0.61667

Example 2. We now revisit the cantilever beam problem for optimization using the proposed approach. To solve the HQLF-based model using the proposed MOGA approach, the probabilistic constraints first need to be converted into deterministic constraints (see Appendix A for a sample conversion). Thereafter, all the constraints are converted into equivalent objectives. The reformulated HQLF-based RBRDO model is then solved using the proposed solution approach. While reformulating the HQLF-based model, all the reliability-related constraints are considered L-Type, and all other constraints are treated as S-Type. Moreover, the reliability targets and the height-to-width ratio constraint must be satisfied to ensure that the design is acceptable. The modified HQLF-based RBRDO model for the proposed MOGA approach is given as:

\[
\begin{aligned}
\text{Minimize } f_1(b,h) &= \left\langle bh + \tfrac{1}{2}\left(h\sigma_h^2 + b\sigma_b^2\right) - 3000.00 \right\rangle \\
\text{Minimize } f_2(b,h) &= \left\langle \left(b^2\sigma_b^2 + h^2\sigma_h^2\right) - 18.00 \right\rangle \\
\text{Minimize } f_3(b,h) &= \left\langle \frac{6QL}{bh^2} + \frac{1}{2}\left(\frac{12QL}{b^3h^2}\sigma_b^2 + \frac{36QL}{bh^4}\sigma_h^2\right) - 200 \right\rangle \\
\text{Minimize } f_4(b,h) &= \left\langle \frac{4QL^3}{Ebh^3} + \frac{1}{2}\left(\frac{8QL^3}{E^3bh^3}\sigma_E^2 + \frac{8QL^3}{Eb^3h^3}\sigma_b^2 + \frac{48QL^3}{Ebh^5}\sigma_h^2 + \frac{4L^3}{Ebh^3}\sigma_Q^2\right) - 1 \right\rangle \\
\text{Minimize } f_5(b,h) &= \left\langle 0.99 - \mathrm{prob}(R \ge \mu_S) \right\rangle \\
\text{Minimize } f_6(b,h) &= \left\langle 0.99 - \mathrm{prob}(D \le \mu_D) \right\rangle \\
\text{Minimize } f_7(b,h) &= \left\langle h/b - 2 \right\rangle \\
\text{Subject to } & 30 \le b \le 45, \quad 60 \le h \le 80
\end{aligned} \tag{33}
\]

Fig. 4. Optimization results (Pareto front) for the cantilever beam; panels (A) and (B) show the front from different perspectives.

Table 8
Randomly selected solutions and their relative importance.

f1(xi)       f2(xi)     f3(xi)    Weight factors (w1, w2, w3)
3445.9790    20.3563    5.1150    0.1899, 0.2156, 0.5945
3358.5230    20.2216    5.0116    0.2660, 0.2578, 0.4764
3138.6940    19.1276    4.5453    0.4905, 0.3619, 0.1476

The Pareto front for the cantilever beam, obtained using an initial population size of 120, is shown in Fig. 4(A) and (B). Since this problem involves multiple objectives, it is difficult to represent the Pareto front graphically; therefore, to present the Pareto front in 3D, only three objectives (cross-sectional area, variance, and strength reliability) are plotted, shown from different perspectives. The optimization results show the ability of the MOGA-based solution approach to cover the entire Pareto front. It is important to note that the proposed approach gives better and more robust design solutions than the sequential optimization approach. Design engineers can select an optimal design solution that meets the predetermined requirements. For instance, if design engineers consider the reliability requirement most important, the optimal design solution for the higher-reliability option is (βS = 5.376, μA = 3595, σ²A = 21.04). Similarly, if they consider functional performance or variance reduction more important and assign a higher priority to either of them, the optimal design solution can be (βS = 4.416, μA = 3045, σ²A = 19.03) or (βS = 4.588, μA = 3248, σ²A = 18.89), respectively. Alternatively, design engineers can calculate the relative importance given to each objective for all optimal solutions and select the one that provides the better trade-off and matches the predetermined importance assigned to each objective. Table 8 shows the relative importance calculated for the objectives of three randomly selected solutions, which can help designers select an appropriate solution. Note that, while calculating the relative importance, only three objectives are considered, to demonstrate the idea of selecting the best-fit solution from the multiple solutions generated.

4. Conclusion

The paper provides a comparative study of different RBRDO model formulations that clearly shows the pros and cons associated with each model. The tendency of the percentile difference-based RBRDO model to give different design solutions for different percentile ranges may confuse a decision maker. The perfectly normalized objective function and simplicity of the moment-based RBRDO model make it a good choice for design optimization, but its inability to explore the Pareto region and the critical selection of initial design values for normalization remain challenging. The HQLF-based RBRDO model provides more flexibility to design engineers by giving them the opportunity to prioritize the reliability
target along with other constraints, thus making this model more appealing to the design community. Hence, a MOGA-based solution approach has been proposed to optimize and enhance the effectiveness of the HQLF-based RBRDO model. The proposed solution approach provides multiple solutions from the entire Pareto region, so that the decision maker can select a specific design of his or her choice from the several alternative design solutions available. However, the additional external filter used to block solutions that do not satisfy the probabilistic constraints is undesirable from a computational cost and time perspective. Therefore, our future efforts are directed toward developing a new approach for handling the probabilistic constraints.

Appendix A

Sample deterministic conversion of the strength-related probabilistic constraint \(\mathrm{prob}(R \ge S) \ge 0.99\), where \(R = \mu_R - [\beta\sigma_R^2]/\sqrt{m}\) and
\[
S = \frac{6QL}{bh^2} + \frac{1}{2}\left(\frac{12QL}{b^3h^2}\sigma_b^2 + \frac{36QL}{bh^4}\sigma_h^2\right)
\]

with

\[
Q = \mu_Q - \frac{\beta\sigma_Q^2}{\sqrt{m}}\left(\frac{6L}{bh^2} + \frac{6L}{b^3h^2}\sigma_b^2 + \frac{18L}{bh^4}\sigma_h^2\right)
\]
\[
L = \mu_L - \frac{\beta\sigma_L^2}{\sqrt{m}}\left(\frac{6Q}{bh^2} + \frac{6Q}{b^3h^2}\sigma_b^2 + \frac{18Q}{bh^4}\sigma_h^2\right)
\]
\[
b = \mu_b - \frac{\beta\sigma_b^2}{\sqrt{m}}\left(\frac{6QL}{b^2h^2} + \frac{18QL}{b^4h^2}\sigma_b^2 + \frac{18QL}{b^2h^4}\sigma_h^2\right)
\]
\[
h = \mu_h - \frac{\beta\sigma_h^2}{\sqrt{m}}\left(\frac{12QL}{bh^3} + \frac{12QL}{b^3h^3}\sigma_b^2 + \frac{72QL}{bh^5}\sigma_h^2\right)
\]
\[
\begin{aligned}
m = (\sigma_R)^2 &+ \left(\frac{6L}{bh^2} + \frac{6L}{b^3h^2}\sigma_b^2 + \frac{18L}{bh^4}\sigma_h^2\right)^2 \sigma_Q^2 + \left(\frac{6Q}{bh^2} + \frac{6Q}{b^3h^2}\sigma_b^2 + \frac{18Q}{bh^4}\sigma_h^2\right)^2 \sigma_L^2 \\
&+ \left(\frac{6QL}{b^2h^2} + \frac{18QL}{b^4h^2}\sigma_b^2 + \frac{18QL}{b^2h^4}\sigma_h^2\right)^2 \sigma_b^2 + \left(\frac{12QL}{bh^3} + \frac{12QL}{b^3h^3}\sigma_b^2 + \frac{72QL}{bh^5}\sigma_h^2\right)^2 \sigma_h^2
\end{aligned}
\]
References

Buranathiti, T., Cao, J., & Chen, W. (2004). A weighted three-point-based strategy for variance estimation. In Proceedings of the ASME Design Engineering Technical Conferences (DETC), Paper DETC2004-57363.
Chandra, M. J. (2001). Statistical quality control. Boca Raton, FL: CRC Press.
Chen, L. (2008). Integrated robust design using response surface methodology and constrained optimization. Unpublished PhD thesis, University of Waterloo, Ontario, Canada.
Cox, D. R., & Reid, N. (2000). The theory of the design of experiments. London: Chapman and Hall/CRC.
Cruse, T. R. (1997). Reliability-based mechanical design. New York: Marcel Dekker.
Das, I., & Dennis, J. E. (1997). A closer look at drawbacks of minimizing weighted sums of objectives for Pareto set generation in multi-criteria optimization problems. Structural Optimization, 14, 63–69.
Deb, K. (1999). Solving goal programming problems using multi-objective genetic algorithms. In Proceedings of the 1999 Congress on Evolutionary Computation (CEC 99). doi:10.1109/CEC.1999.781910.
Deb, K., Gupta, S., Daum, D., & Branke, J. (2009). Reliability-based optimization using evolutionary algorithms. IEEE Transactions on Evolutionary Computation, 13(5), 1054–1074.
Du, X., & Chen, W. (2000). Towards a better understanding of modeling feasibility robustness in engineering design. Journal of Mechanical Design, 122, 385–394.
Du, X., & Chen, W. (2002). Sequential optimization and reliability assessment method for efficient probabilistic design. In ASME Design Engineering Technical Conferences, DETC-DAC34127, Montreal, Canada.
Du, X., Sudjianto, A., & Chen, W. (2004). An integrated framework for optimization under uncertainty using inverse reliability strategy. Journal of Mechanical Design, 126, 562–570.
Kalsi, M., Hacker, K., & Lewis, K. (2001). A comprehensive robust design approach for decision trade-offs in complex systems design. ASME Journal of Mechanical Design, 123(1), 1–10.
Kapur, K. C., & Lamberson, L. R. (1977). Reliability in engineering design. New York: John Wiley & Sons.
Kim, J. Y., & Cho, B. R. (2002). Development of priority-based robust design. Quality Engineering, 14(3), 355–363.
Koch, P. N. (2002). Probabilistic design: Optimizing for six sigma quality. In 43rd AIAA/ASME/ASCE/AHS Structures, Structural Dynamics, and Materials Conference, Denver, Colorado, AIAA-2002-1471.
Lee, I., Choi, K. K., Du, L., & Gorsich, D. (2008). Dimension reduction method for reliability-based robust design optimization. Computers and Structures, 86(13–14), 1550–1562.
Liang, J., Mourelatos, Z. P., & Tu, J. (2008). A single-loop method for reliability-based design optimization. International Journal of Product Development, 5(1–2), 76–92.
Melchers, R. E. (1999). Structural reliability analysis and prediction (2nd ed.). New York: Wiley.
Messac, A. (1996). Physical programming: Effective optimization for computational design. AIAA Journal, 34, 149–158.
Montgomery, D. C. (2009). Design and analysis of experiments (7th ed.). New York: John Wiley & Sons.
Mourelatos, Z. P., & Liang, J. (2006). A methodology for trading-off performance and robustness under uncertainty. Journal of Mechanical Design, 128, 856–863.
Pedersen, G., & Goldberg, D. E. (2004). Dynamic uniform scaling for multi-objective genetic algorithms. IlliGAL Report No. 2004008 (pp. 1–12).
Phadke, M. S. (1989). Quality engineering using robust design. Englewood Cliffs, NJ: Prentice-Hall.
Ramu, P., Qu, X., Youn, B. D., Haftka, R. T., & Choi, K. K. (2006). Inverse reliability measures and reliability-based design optimization. International Journal of Reliability and Safety, 1(1/2), 187–205.
Rao, S. S. (1992). Reliability based design. New York: McGraw-Hill.
Romero, C. (1991). Handbook of critical issues in goal programming. Oxford, UK: Pergamon Press.
Shan, S., & Wang, G. G. (2008). Reliable design space and complete single-loop reliability-based design optimization. Reliability Engineering and System Safety, 93, 1218–1230.
Su, J., & Renaud, J. E. (1997). Automatic differentiation in robust optimization. AIAA Journal, 35(6), 1072–1079.
Taguchi, G. (1987). System of experimental design. New York: Unipub/Kraus International.
Taguchi, G., Elsayed, E., & Hsiang, T. (1989). Quality engineering in production systems. New York: McGraw-Hill.
Ueno, K. (1997). Company-wide implementations of robust-technology development. New York: ASME Press.
Wang, W., & Wu, J. (1994). Reliability-based robust design. In Proceedings of the 39th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, Long Beach, CA, 20–23 April 1994 (pp. 2956–2964).
Xu, H., & Rahman, S. (2004). A generalized dimension-reduction method for multidimensional integration in stochastic mechanics. International Journal for Numerical Methods in Engineering, 61(12), 1992–2019.
Yadav, O. P., Bhamare, S. S., & Rathore, A. (2010). Reliability-based robust design optimization: A multi-objective framework using hybrid quality loss function. Quality and Reliability Engineering International, 26, 27–41.
Yang, G. (2007). Life cycle reliability engineering. NJ: Wiley.
Youn, B. D., Choi, K. K., & Yi, K. (2005). Performance Moment Integration (PMI) method for quality assessment in reliability-based robust design optimization. Mechanics Based Design of Structures and Machines, 33, 185–213.
Zhuang, X., & Pan, R. (2010). A multi-objective memetic algorithm for RBDO and robust design. In Proceedings of the 2010 Annual Reliability and Maintainability Symposium (RAMS).