Struct Multidisc Optim DOI 10.1007/s00158-011-0672-5

RESEARCH PAPER

Diffuse response surface model based on moving Latin hypercube patterns for reliability-based design optimization of ultrahigh strength steel NC milling parameters Peipei Zhang · Piotr Breitkopf · Catherine Knopf-Lenoir · Weihong Zhang

Received: 21 June 2010 / Revised: 24 November 2010 / Accepted: 5 May 2011
© Springer-Verlag 2011

Abstract We focus here on a Response Surface Methodology adapted to Reliability-Based Design Optimization (RBDO). The Diffuse Approximation, a version of the Moving Least Squares (MLS) approximation based on a progressive sampling pattern, is used within a variant of the First Order Reliability Method (FORM). The proposed method uses simultaneously the points in the standard normal space (U-space) and in the physical space (X-space). The two grids form a "virtual design of experiments" defined by two sets of points in both design spaces, which are evaluated only when needed, in order to minimize the number of the "exact", thus supposedly costly, function evaluations. At each new iteration, the pattern of points is updated with points appropriately selected from the virtual design in order to perform the approximation. As an original contribution, we introduce the concept of "Advancing LHS", which extends the idea of Latin Hypercube Sampling (LHS) toward the maximal reuse of already computed points while adding, at each step, a minimal number of new neighboring points necessary for the approximation in the vicinity of the current design. We propose panning, expanding and shrinking Latin patterns of sampling points and we analyze the influence of this specific kind of pattern on the quality of the approximation. We then analyze the minimal number of data points required to obtain well-conditioned approximation systems. In the application part of this work, we investigate the case of optimizing the process parameters of numerically controlled (NC) milling of ultrahigh strength steel.

Keywords Response surface method · Diffuse approximation · FORM · RBDO · Adaptive LHS · NC machining

P. Zhang · W. Zhang
School of Mechanical Engineering, Northwestern Polytechnical University, 710072 Xi'an, Shaanxi, China

P. Zhang · P. Breitkopf (B) · C. Knopf-Lenoir
Laboratoire Roberval, UMR 6253 UTC-CNRS, Université de Technologie de Compiègne, BP 20529, 60205 Compiègne Cedex, France
e-mail: [email protected]

1 Introduction

Variances in the input variables of an engineering system cause subsequent variances in its performance, and deterministic optimum designs obtained without taking uncertainties into consideration may lead to unreliable results. However, Reliability-Based Design Optimization (RBDO) methods require prohibitive computational time and have a slow convergence rate, especially when a double-loop iteration procedure is required, where every optimization iteration includes the evaluation of the probabilistic quantities. To formulate an RBDO problem, the first step is to determine the limit state surfaces that will be involved as constraints in terms of failure probabilities. First and Second Order Reliability Methods, i.e. FORM/SORM (Haldar and Mahadevan 2000), provide classical solutions allowing the evaluation of the failure probability in terms of a safety index, which can easily be taken into account in the mathematical statement of the optimization problem. A safety index is associated with each limit state function; its value depends on the Most Probable Point (MPP), whose determination needs in turn evaluations of several limit state


functions. The total number of response evaluations during the RBDO process is therefore very high, and approximate models are valuable in this context. Due to the cost of computations, the use of RBDO is still limited in design practice. It is therefore necessary to develop methods able to substantially reduce the number of actual simulations. Response Surface Methodology (RSM; Box and Wilson 1951) provides a framework for building analytical surrogate models based on the Design of Experiments (DOE). Various regression models have been proposed in the literature. The Moving Least Squares (MLS) method based on selective interaction sampling was applied in the RBDO context by Choi et al. (2001). Lee and Jung (2008) used a Kriging model with constraint boundary sampling to enhance the accuracy and efficiency of RBDO. Wang and Grandhi (1996) used the concept of intervening variables to calculate the safety index for structural reliability analysis. To further reduce the approximation error and computing time, RSM and gradient-based approaches may also be combined (Youn and Choi 2004; Kim and Choi 2008). In the current work, we adopt the Diffuse Approximation (a variant of MLS; Breitkopf et al. 1998) for its ability to maintain the local character of the approximation and to control its continuity as the design point evolves in the design space. In a previous paper (Breitkopf et al. 2005), we developed an optimization strategy with moving patterns and diffuse approximation; however, we did not give a method for creating economical patterns that reduce the number of data points. The current article fills this gap and proposes the concept of the 'adaptive LHS pattern'. Even with RSM integrated within the RBDO process, the number of evaluations still grows exponentially for high-dimensional spaces.
LHS (Latin Hypercube Sampling; Huntington and Lyrintzis 1998; Keramat and Kielbasa 1999) can save more than 50% of the computing effort compared with Monte Carlo simulation for structural reliability analysis (Olsson et al. 2003). The idea of LHS was therefore widely explored within RBDO by Jurecka (2008), Liefvendahl and Stocki (2006) and others. Park (1994) proposed an optimal LHS sampling maximizing the entropy. Ye et al. (2000) introduced a symmetric LHS offering a compromise between computing effort and design optimality. Bates and Langley (2003) developed an Audze–Eglais Uniform LHS. Still in the domain of modified LHS, Stocki (2005) and Stocki et al. (2007) used the optimal LHS in the stochastic simulation of structural systems to improve design reliability. A global response surface generated using Stepwise Regression (SR) coupled with an optimal LHS DOE is treated as a substitute for the original RBDO problem by Youn and Choi (2003). Choi et al. (2008) used LHS along with the Kriging metamodel and Constraint Boundary Sampling to form a new RBDO method.

There have been several attempts to use LHS along with the Moving Least Squares approximation in the RBDO context. For example, Song and Lee (2011) combined MLS with a central composite design (CCD) and optimal LHS for the RBDO of a knuckle component. Kang et al. (2010) used axial experimental points and a moving least squares approximation for structural reliability analysis; the response surface function is updated successively by adding one new most probable failure point (MPFP) at a time to the previous set of experimental points. A new RSM was introduced by Youn and Choi (2004) using the MLS method with a new DOE (utilizing axial star and selective interaction samplings) developed specifically for effective reliability analysis. The idea of combining the probabilistic and the design spaces in the RBDO problem is also explored in the available literature. Liping et al. (1996) introduce the intervening variables concept, in which the standard normal variables (u in U-space) and a ratio (mean values/standard deviations) are combined in an adaptive approximation with the HLRF algorithm in the U-space. Youn and Choi (2004) use a new Design of Experiments (DOE) where axial samples are taken by default and interaction samples are taken in a selective manner; the size of the experimental block (domain of influence) is controlled by a parameter. A hybrid method based on the simultaneous solution of the reliability and the optimization problems has been developed and successfully applied by Kharmanda et al. (2004), combining in the same function the objective function (associated with the X-space) and the reliability index (associated with the U-space), but the notion of a DOE common to the two spaces is not present. These techniques effectively reduce the computations in RBDO. However, during the optimization process, repeated calculations are still performed in the vicinity of already computed points.
The purpose of our work is to develop a consistent framework controlling the density of effectively sampled points, based on the concept of advancing LHS patterns and the Diffuse Approximation, in order to decrease the number of function evaluations as much as possible. In the application to NC milling of ultrahigh strength steel, the success of a machining operation greatly depends on the values of machining parameters such as the feed rate, cutting speed, and axial and radial depths of cut. Larger parameter values reduce machining time, but they can cause serious problems such as chatter, tool breakage, and poor surface finish. Conversely, smaller parameter values do not fully exploit the machine capacity. As the traditional approach for selecting machining parameters, based on accumulated experience, is not effective, several approaches for selecting the machining parameters have been developed. A memetic algorithm has been used to optimize the machining parameters of multi-tool milling operations (Baskar et al. 2006).


Cutting conditions have been optimized using artificial neural networks (Cus 2006). The Tribes algorithm was adopted to optimize multi-pass face milling operations (Onwubolu 2006). Cutting parameter optimization has been studied to minimize production time in high speed turning (Oktem 2005; Bouzid 2005). Parallel genetic algorithms and parallel genetic simulated annealing have also been used to optimize multi-pass milling (Wang 2005). Genetic algorithms have been applied to determine the optimal machining parameters in the conversion of a cylindrical bar stock into a continuous finished profile (Amiolemhen 2004), along with other studies (Li 1996; Shunmugam et al. 2000; Baek et al. 2001; Tandon 2002; Davim 2003; Milfelner and Cus 2003). The cited works focus on obtaining deterministic sets of machining parameters for common materials such as aluminum alloys. However, the ultrahigh strength steel 40CrNi2Si2MoVA, an attractive material due to its high tensile strength (1960 MPa), has received little attention because of its poor machinability, due to its elevated hardness (53 HRC). As the optimal point is usually close to the limit state surface, it is not effective, or even dangerous, to adopt directly the parameters provided by the mathematical approach without taking the tolerance region into account. This motivates our interest in the study of the RBDO approach for NC machining operations of ultrahigh strength steel. The paper is organized as follows. The second section gives a brief description of the general RBDO formulation and a reminder of the FORM approach, stressing the need for an approximation providing both the function value and its gradient. In the third section, we discuss the RSM model based on the Diffuse Approximation (DA) as a generalization of the Taylor formula, for any number of design variables, with tensor product weight functions. In the fourth section, we address the fundamental issue of the data point pattern underlying the DA.
The ideas of a virtual Design of Experiments (virtual DOE) and of adaptive Latin patterns are introduced along with a discussion of their numerical properties. We then link these techniques to the RBDO approach. The fifth section describes the model of the NC milling process obtained from actual experimental data; two optimization test problems are solved to illustrate the numerical performance of the proposed RBDO technique.

2 General definition of the RBDO model

In system parameter design, RBDO problems are generally defined as

min or max f(x)   subject to   β_i(u) ≥ β_i^t,   i = 1, 2, ..., n    (1)

where β_i^t are the prescribed target safety indices, x are the random variables in the design space (X-space) and u is the associated variable in the standard normal space (U-space) obtained using the Rosenblatt (1952), Nataf (Der Kiureghian and Liu 1986) or Box–Cox (Box and Cox 1964) transformation. The transformation T between the two random spaces is defined as (Youn 2001)

u = T(x),   x = T^{-1}(u),   g(x) = g(T^{-1}(u)) = h(u)

Note that the transformation T requires a complete description of the probability distributions of the random parameters, such as F_{x_i}(x_i), the cumulative distribution functions (CDF) of the components of the random vector x. The individual points may then be transformed in the following way

u_i = Φ^{-1}( F_{x_i}(x_i) ),   i = 1, ..., n

Two reliability approaches are often used: the performance measure approach (PMA) and the reliability index approach (RIA); the latter is selected in this work. The safety index β is related to the probability of failure by

β = −Φ^{-1}(P_f)    (2)

where P_f is the probability of failure and Φ is the CDF of the standard normal variable. The safety index approach to reliability calculation is a mathematical optimization problem: find the point u* on the failure surface h(u) = 0 with the shortest distance from the origin to the limit state surface. During the RBDO process, the transformation between the X-space and the U-space at any design point must be performed iteratively to estimate the probabilistic constraint and the safety index β = ||u*||. In engineering practice, each variable may be on a different quantitative scale, so a normalization should be done before the optimization. In this paper, all the variables are normalized within the [0, 1] interval (Fig. 1).

Fig. 1 Normalized X-space
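For independent variables, the transformation above reduces to the componentwise mapping u_i = Φ^{-1}(F_{x_i}(x_i)). The sketch below (Python standard library only) illustrates this round trip and the safety index (2); the lognormal variable is an illustrative assumption, not a distribution taken from the paper.

```python
# Sketch of the X-space <-> U-space mapping u = Phi^{-1}(F_x(x)) for an
# independent variable, plus the safety index beta = -Phi^{-1}(P_f), eq. (2).
# The lognormal example variable is illustrative, not from the paper.
from statistics import NormalDist
import math

std_normal = NormalDist()  # provides Phi (cdf) and Phi^{-1} (inv_cdf)

def to_u(x, cdf):
    """Map a physical value x to the standard normal space via its CDF."""
    return std_normal.inv_cdf(cdf(x))

def to_x(u, inv_cdf):
    """Inverse map: a standard normal value u back to the physical space."""
    return inv_cdf(std_normal.cdf(u))

# Illustrative lognormal variable with underlying mean mu and std sigma.
mu, sigma = 1.0, 0.2
lognorm_cdf = lambda x: std_normal.cdf((math.log(x) - mu) / sigma)
lognorm_inv = lambda p: math.exp(mu + sigma * std_normal.inv_cdf(p))

u = to_u(3.0, lognorm_cdf)                 # physical value 3.0 -> U-space
x_back = to_x(u, lognorm_inv)              # and back, x_back ~= 3.0

# Safety index from a failure probability of 1e-3, eq. (2).
beta = -std_normal.inv_cdf(1e-3)
```

`statistics.NormalDist` (Python 3.8+) keeps the sketch dependency-free; a reliability library would normally supply these transformations.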


Fig. 2 Standard normal U-space

Figures 1 and 2 illustrate the two-level RBDO approach, where the nominal values of the design variables are optimized in the normalized X-space and the reliability constraints are computed in the standard normal U-space. The FORM estimation of the safety index β is based on the Hasofer–Lind–Rackwitz–Fiessler (HLRF) algorithm (Haldar and Mahadevan 2000)

u^{t+1} = [ ( ∇h(u^t)^T u^t − h(u^t) ) / ( ∇h(u^t)^T ∇h(u^t) ) ] ∇h(u^t)    (3)

where ∇h(u^t) is the gradient vector of the limit state function h(u) at iteration t. The application of (3) therefore needs an RSM approximating both the value of the function and its gradient.

3 Diffuse approximation (DA) in higher order spaces

The Diffuse Approximation (Breitkopf et al. 2000) permits us to estimate the value of a function g: R^n → R and its derivatives based on a certain number k of samples g(x_i), i = 1, 2, ..., k (Fig. 3). DA is usually presented in one- to three-dimensional spaces, which is generally enough when applying DA as an alternative to the Finite Element approximation, but not for the optimization purpose. The design spaces of interest here are of higher dimension, and specific issues of sampling and weighting have to be addressed for a proper implementation. We therefore provide a scalable DA framework for an arbitrary number of design variables. The function value at a data point x_i may be formulated using the first-order Taylor expansion in terms of the function and gradient values at the evaluation point x and of the distances Δx_i = x_i − x, i = 1, ..., k:

g(x_i) = g(x) + ∇^T g(x) Δx_i + ε_i    (4)

The approximation errors for all the samples may be grouped in an error vector ε = (ε_1 ... ε_k)^T:

ε = g_n − P^T [ g(x) ; ∇g(x) ]    (5)

where g_n stands for the sample value vector g_n = (g_1 ... g_k)^T and

P^T = [ 1  Δx_1^1 ... Δx_1^n ;  ... ;  1  Δx_k^1 ... Δx_k^n ] = [ 1  Δx_1^T ; ... ; 1  Δx_k^T ]    (6)

where Δx_i = (Δx_i^1 ... Δx_i^n)^T. The approximate function g̃(x) and the approximate gradient ∇̃g(x) values at the evaluation point x may then be found by minimizing the weighted squared error

J( g̃(x), ∇̃g(x) ) = (1/2) ε^T W ε    (7)

For the second-order Taylor expansion, the matrix P has additional lines corresponding to the n + (n² − n)/2 second-order terms Δx_i^k Δx_i^l, k, l = 1, ..., n, and one readily obtains the criterion for the approximation of the function g̃(x), of the gradient ∇̃g(x) and of the Hessian H̃(x). The diagonal weight matrix

W = diag( w(x_1, x), ..., w(x_k, x) )    (8)

Fig. 3 Circular domain of influence in 2D

involves the weight functions w(x_i, x), which translate the influence of the i-th sample point at the evaluation point x, as illustrated in Fig. 3. The influence of a data point decreases with the relative distance d, according to the reference weight function


w_ref(d), the choice of which is critical for the quality of the approximation. The weight function provides three features to the approximation: locality, continuity and the interpolation capacity. Locality is obtained when w_ref vanishes outside the unit region. C^m continuity is governed by the vanishing of the m-th order derivative of w_ref at the boundary. The interpolation property is obtained when w(x_i, x_j) = δ_ij (Kronecker delta). A common choice for w_ref, which satisfies the first two properties, is the spline function (Breitkopf et al. 2000)

w_ref(d) = { 1 − 3d² + 2d³,  0 ≤ d < 1 ;  0,  d ≥ 1 }    (9)

The weight function w(x_i, x), built from the reference weight function w_ref(d), serves implicitly as a selection tool determining the domain of influence of the given point. An obvious and common choice is

w(x_i, x) = w_ref( √(Δx_i^T Δx_i) / R )    (10)

where R is the given size of the domain of influence, which results in an n-spherical domain. In the current work, we develop a sampling scheme based on hypercube grids. Therefore, a tensor product of the reference weight functions (9), calculated separately for each component of Δx_i, is better suited for our approach

w(x_i, x) = w_ref( |Δx_i^1| / R_1 ) × ... × w_ref( |Δx_i^n| / R_n )    (11)

which results in n-cubically shaped domains and permits adjusting every R_j, j = 1, ..., n, according to the required resolution in individual directions (Breitkopf et al. 2000; Fig. 4).

Once the weight matrix is defined, one obtains the approximation of the function and of the gradient by minimizing the criterion (7):

[ g̃(x) ; ∇̃g(x) ] = ( P W P^T )^{-1} P W g_n    (12)

The condition number of the P W P^T matrix determines the quality of the approximation and depends on the pattern and its size. The choice of the pattern, avoiding degenerate situations (Breitkopf et al. 1998), is the subject of the next paragraph. In order to make the system independent of the pattern size, we define the diagonal scaling matrix

D = diag( 1, R_1, ..., R_n )^{-1}    (13)

where we can use the same radii of influence R_j as for the weight function computation. Finally, we obtain the system

[ g̃(x) ; ∇̃g(x) ] = D [ (DP) W (DP)^T ]^{-1} [ (DP) W ] g_n = D A^{-1} B g_n    (14)

with A = (DP) W (DP)^T and B = (DP) W, in which every term A_ij ∈ (0, 1) is independent of the scale of the pattern. The difficulty in the use of formulation (14) is the control of the number of neighboring points k when the dimensionality n of the design space increases. This is the subject of the next paragraph.
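As an illustrative sketch of the spline weight (9), the tensor-product weight (11) and the diffuse solve (12) for n = 2 with a linear basis: the data pattern, the radii R and the test function below are our own choices, not the paper's. A linear function must be reproduced exactly, together with its gradient.

```python
# Sketch of the diffuse approximation (12) with tensor-product weights
# (9)/(11) and a linear basis, n = 2. Data pattern, radii and test
# function are illustrative. A tiny Gauss-Jordan solve avoids libraries.

def w_ref(d):                       # cubic spline weight, eq. (9)
    return 1 - 3 * d * d + 2 * d ** 3 if 0 <= d < 1 else 0.0

def weight(xi, x, R):               # tensor product over components, eq. (11)
    w = 1.0
    for a, b, r in zip(xi, x, R):
        w *= w_ref(abs(a - b) / r)
    return w

def solve(M, b):                    # Gauss-Jordan elimination, partial pivot
    n = len(M)
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c and A[r][c]:
                f = A[r][c] / A[c][c]
                A[r] = [ar - f * ac for ar, ac in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

def diffuse_approx(points, values, x, R):
    """Return [g~(x), dg/dx1, dg/dx2] minimizing the weighted criterion (7)."""
    m = 3                                           # 1 + n unknowns
    A = [[0.0] * m for _ in range(m)]
    b = [0.0] * m
    for xi, gi in zip(points, values):
        p = [1.0, xi[0] - x[0], xi[1] - x[1]]       # linear basis in dx
        wi = weight(xi, x, R)
        for r in range(m):
            for c in range(m):
                A[r][c] += wi * p[r] * p[c]
            b[r] += wi * p[r] * gi
    return solve(A, b)

# A linear function must be reproduced exactly (value and gradient).
f = lambda x1, x2: 2.0 + 3.0 * x1 - 1.5 * x2
pts = [(0.1, 0.1), (0.3, 0.6), (0.6, 0.3), (0.8, 0.8), (0.5, 0.5)]
vals = [f(*p) for p in pts]
g, dg1, dg2 = diffuse_approx(pts, vals, (0.45, 0.45), R=(1.0, 1.0))
```

Here k = 5 > n + 1 = 3, so the weighted system is overdetermined and well-conditioned as long as the points are not aligned.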

4 LHS diffuse patterns

Fig. 4 Rectangular domain of influence in 2D

The DA scheme needs an efficient and scalable DOE, limiting the number of "exact" function evaluations. In n dimensions, DA requires k > n + 1 data points with a linear basis P (for k = n + 1, the matrix P becomes square and the approximation degenerates to a simple fit, independent of the weights: [PWP^T]^{-1} PW g_n = P^{-T} W^{-1} P^{-1} P W g_n = P^{-T} g_n) and k > 1 + 2n + (n² − n)/2 in the quadratic case. Latin Hypercube Sampling (McKay et al. 1979) helps us keep k proportional to n. In Fig. 5(a) we present a square grid (gray region), subdivided into a set of tiles of equal probability, with the same number of subdivisions in every direction. In each tile, we draw one point, noted by a hollow circle, where a numerical experiment may


Fig. 5 a n = 2, k = 4, virtual design (16 hollow circles). b n = 2, k = 4, LHS (four filled circles)

be performed; we will call it a "virtual design of experiments" or a "virtual DOE". We have to choose a subset of k = 4 points (the number of lines and columns in the gray square) for performing actual computations. This subset forms a Latin square if (and only if) there is only one sample in each row and each column, as in Fig. 5(b). A Latin hypercube is the generalization of this concept to an arbitrary number of dimensions n, with a single sample in every hyper-plane aligned with the axes. When sampling a function g(x) in R^n, the k intervals are chosen so as to obtain equally probable sub-domains, and the k sample points are chosen to satisfy the Latin hypercube requirements. In the practical implementation, we obtain the usual Diffuse Approximation convergence rates. Figure 6 illustrates the convergence properties for a simple test function in R^5 sampled with a uniform LHS. We progressively decrease the grid interval h of the LHS pattern (h is assumed constant in every direction) towards zero and we test different choices of the number of neighboring points k. We obtain a quadratic convergence rate for the function and a linear rate for its gradient. We observe that the rates are only slightly affected by varying k.

Fig. 6 Rates of convergence for n = 5 and k = 7, 10 and 20 (panels a, b, c) at x = [13; −12; 1; 1; 1]; f(x) = 1.5(x1 − 15)² + 2(x2 + 10)² + 2(x3 − 10)² + 3(x4 − 12)² + (x5 − 14)²

4.1 LHS diffuse approximation quality

The quality of the LHS approximation depends primarily on the conditioning of the linear system in (14). The terms of the matrix A depend only on the matrices P and W, which are determined by the coordinates of the data points; in other words, A depends only on the underlying DOE pattern. In this part, we study the condition number of the A matrix and the global approximation error for increasing pattern sizes (k is the number of data points), for two different DOE patterns: uniform random sampling and LHS patterns. Figure 7 shows the condition number, defined as the ratio of the biggest to the smallest eigenvalue of the A matrix, averaged over 10⁴ trials. The curves correspond to different dimensions n of the design space. The LHS pattern is always chosen in the way to minimize the correlation

between the variables (Stocki 2005). We see that the LHS curve lies below the random curve, which means that the LHS pattern gives a better condition number for any pattern size and any space dimension. Then, in Fig. 8, we compare the minimal number of points required for a fixed level of condition number. We can see that, for the same fixed level of condition number and


Fig. 7 Comparison of condition number for random and LHS samplings (linear basis P, k is the number of data points, n = 2,5,8 is the dimension of the design space)

space dimension n, the number of points needed for LHS is lower than for uniform random sampling. For instance, for n = 2 and a prescribed condition number of 50, more than k = 20 random points are needed, whereas k = 10 points are sufficient for LHS. For higher n and bigger condition numbers the LHS advantage diminishes, but it always saves several data points. Finally, we also compare the relative L2 norm of the error for random sampling and LHS patterns at x = [0.5 0.5]. Figure 9 illustrates the impact of the size of the pattern on the approximation quality of a simple test function for n = 2 and varying k. We observe that the LHS diffuse pattern gives a stable error level, significantly lower than that of random sampling, especially for the lower k values. The conclusion of this part of our study is that when choosing data points in the design spaces (X and U spaces), one has to bear in mind the resulting pattern in order to preserve the approximation stability and quality. The LHS patterns behave in a sound manner according to

Fig. 9 Approximation error for n = 2 and varying k, f(x) = x1 + x2 + 0.2x1 x2

these requirements. In the following section, we explain our technique for preserving the LHS pattern quality when advancing, expanding and shrinking the domain of interest around the current design point.

4.2 Advancing LHS patterns

The overall idea of pattern adaptation is shown in Fig. 10. The diffuse approximation is used here to approximate the failure surface and its gradient in the U-space. The pattern at iteration t surrounds the current approximation of the MPP. Then, formula (3) is used to estimate the new MPP approximation. At this point the pattern has to be updated in the vicinity of MPP^{t+1}. Our goal is to minimize the number of "exact" function evaluations. The idea is thus to reuse as many points as possible from previous computations and to add new points, selected such that together with the previous points they form a new LHS.
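The basic Latin hypercube construction described above can be sketched with one random permutation per dimension; this is the plain variant, whereas the patterns used in this work additionally minimize the correlation between variables.

```python
# Sketch of plain Latin Hypercube Sampling: one random permutation per
# dimension places exactly one sample in each of the k equal-probability
# slices along every axis.
import random

def lhs(n, k, seed=0):
    """k points in [0,1]^n; coordinate j of the points runs through a
    permutation of the k tile midpoints, so every axis-aligned slice
    holds exactly one point (the Latin property)."""
    rng = random.Random(seed)
    cols = []
    for _ in range(n):
        perm = list(range(k))
        rng.shuffle(perm)
        cols.append([(c + 0.5) / k for c in perm])   # midpoint of each tile
    return list(zip(*cols))

pts = lhs(n=2, k=4)
# Latin property check: each row index and column index is used once.
rows = sorted(int(p[0] * 4) for p in pts)
cols = sorted(int(p[1] * 4) for p in pts)
```

Sampling the tile midpoints keeps the check deterministic; drawing a uniform point inside each tile is the usual stochastic variant.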

Fig. 8 Minimal pattern sizes for fixed condition number of the A matrix

Fig. 10 Advancing LHS pattern applied in FORM


Fig. 11 a Panning domain of influence, virtual DOE (seven hollow triangles). b Panning domain of influence, adapted LHS (two filled triangles)

The possible adaptations of the pattern are translating, zooming in and zooming out. As the current evaluation point moves during the optimization iterations, three cases are possible:

– the point stays inside the pattern but the convergence stalls: we use the 'shrink' operator, say by halving the grid interval to refine the pattern;
– the point goes out of the local pattern: we use the 'pan' operator in order to translate the pattern;
– subsequent iteration points fall outside of the pattern, meaning that the size of the pattern is too small: this is the situation where we use the 'expand' operator.
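A minimal sketch of the three operators acting on an axis-aligned region of interest follows; the data structure (center and half-widths per dimension) and the halving/doubling factors are our illustrative choices.

```python
# Sketch of the three pattern operators on a region of interest
# [center_j - half_j, center_j + half_j] per dimension. The halving and
# doubling factors are illustrative choices, not prescribed by the paper.
def shrink(center, half):            # convergence stalls inside the pattern
    return center, [h / 2 for h in half]

def pan(center, half, new_point):    # iterate leaves the pattern: recenter
    return list(new_point), half

def expand(center, half):            # successive iterates keep escaping
    return center, [2 * h for h in half]

c, h = [0.5, 0.5], [0.2, 0.2]
c, h = pan(c, h, (0.9, 0.5))         # recenter on the new iterate
c, h = shrink(c, h)                  # refine once convergence stalls
```

In the actual method the operators also decide which virtual DOE points must be newly evaluated so that the updated pattern remains a Latin hypercube.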

In Fig. 11(a), the region of interest from Fig. 5 is translated, say due to an optimization iteration increment. We have to choose two new sampling points among the virtual DOE, noted now by seven hollow triangles. The goal is to select the points so as to obtain a new Latin square. Figure 11(b) shows the two filled triangles chosen so as to obtain an equilibrated Latin square together with the two points from the former square denoted by circles. When the optimization process converges, the size of the domain of interest decreases. In Fig. 12(a), the gray domain corresponds to the reduced domain of interest from


Fig. 13 a Expanding domain, virtual DOE (LHS points denoted by filled triangles). b Expanding domain, adapted LHS (filled triangle No 2 is chosen)

Fig. 5(b). The hollow triangles represent again the virtual DOE points, among which we now have to choose one additional point (the size of the square decreases from 4 × 4 to 3 × 3) so as to preserve the Latin square. Figure 12(b) shows the chosen point as a filled triangle. The last situation happens when the domain of interest expands. In Fig. 13(a) the gray square expands the original 4 × 4 domain. The virtual DOE is once again denoted by triangles and we have to choose one of them. The four potential points verifying the LHS criterion for the updated pattern are denoted by filled triangles in Fig. 13(a). Table 1 gives the value of the correlation coefficient, the usual criterion for LHS, for the four possible choices of the candidate point. According to the correlation criterion, the best choice is filled triangle No 4. But as, in our work, the current design point advances across the pattern, for instance towards the upper left, the choice of the fourth point would give a potentially unbalanced pattern by adding points in the direction opposite to the descent. This happens because the correlation coefficient does not depend on the position of the evaluation point. To compensate for this effect, we use a criterion based on the condition number of the approximation at the current point x. The corresponding line in Table 1 gives the condition numbers of the matrix A(x) at a specific evaluation point (hollow circle in the upper left part of Fig. 13(a)) for the four possible choices. We select the pattern giving the minimal value of the condition number.

Fig. 12 a Shrinking domain, virtual DOE (seven hollow triangles). b Shrinking domain, adapted LHS (filled triangle)

Table 1 Comparison of condition number and correlation coefficient for four possible points

Candidate point               1           2           3           4
Correlation coefficient    0.7796     −0.2717      0.7820     −0.2640
κ = λmax/λmin            132.8759      4.0642    126.7684     66.9423
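The correlation line of Table 1 is the Pearson coefficient between the two coordinates of the pattern points. The sketch below selects the candidate minimizing |ρ|; the condition-number criterion of matrix A, which the method finally prefers, is omitted here, and the candidate coordinates are invented, not those of Fig. 13.

```python
# Sketch of the correlation criterion of Table 1: Pearson coefficient
# between the coordinates of the pattern points. The base pattern and
# candidate points below are illustrative, not the paper's Fig. 13 data.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def corr_of(pattern):
    """|rho| of a 2D pattern; lower means a better-spread LHS candidate."""
    xs, ys = zip(*pattern)
    return abs(pearson(list(xs), list(ys)))

base = [(0.1, 0.3), (0.3, 0.9), (0.5, 0.1), (0.7, 0.7)]   # current pattern
candidates = [(0.9, 0.5), (0.9, 0.9)]                      # virtual DOE points

best = min(candidates, key=lambda c: corr_of(base + [c]))
```

As the text notes, |ρ| alone ignores where the evaluation point sits, which is why the condition number of A(x) is used as the final tie-breaker in the method.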


Therefore, the filled triangle No 2 (left, top) is selected as the new point forming a new 5 × 5 Latin square together with the four filled circle points from the previous iteration (Fig. 13(b)). This choice results in a well-balanced pattern of sampling points, surrounding the current evaluation point and evolving in the direction of the current RBDO iteration. The interest of the adaptive moving LHS pattern approach is that it extends linearly to n dimensions, avoiding the curse of dimensionality. Coupled with the Diffuse Approximation with tensor product weight functions, this technique gives a consistent and numerically stable support for function approximation.

Fig. 14 The RSM with DA model for RBDO

4.3 Coupling moving LHS patterns with reliability computations

We use the classical two-level RBDO approach. The left and right sides of Fig. 14 illustrate the process in the X-space and the U-space, respectively. At each optimization iteration, the possibly correlated and non-normally distributed design variables from the X-space are transformed into the independent standard normal U-space. The reliability constraints are then evaluated in the U-space, which involves an iterative process based on (3) and the generation of new data points as the current estimate of the Most Probable Point (MPP) advances. The new points are generated according to the moving LHS approach described in the former

Fig. 15 Proposed model with advancing LHS pattern for RBDO


subsection. In Fig. 14, in the U-space, both the solid round points (generated in the U-space) and the solid square points (transformed from the X-space) are used together to calculate the Diffuse Approximation of the gradient and function terms involved in the FORM iteration (3). At convergence, the most probable point (MPP) on the limit state surface is located. All the points evaluated in the U-space are then transformed into the X-space using the inverse transformation x = T^{-1}(u) and are denoted by small round points in the left side of Fig. 14. In the X-space, the next design point is determined by standard optimization, again using approximated values and gradients based on the incremented LHS pattern. In both spaces, the points in the vicinity of the current design point are used to calculate approximate values of functions and gradients using the Diffuse Approximation (12), and the optimization process continues until an optimal point satisfying the prescribed safety indices β is found. The potential benefit of the proposed method is obvious when the design point x approaches the limit surface and the points from both spaces (X-space and U-space) are mixed. Mixing makes the points denser around x_i or u_i and provides a more precise local approximation and therefore a faster convergence, reducing the number of function evaluations. An example of this process involving an advancing LHS pattern, taken from an actual computation, is given in Fig. 15, with n = 2 and k = 5. For the five iterations, we compute a total of only 10 'exact' function values needed to support the approximation, rather than the 20 values (4 data points × 5 iterations) required if a new DOE were centered around each subsequent iteration point.
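The U-space inner loop described above, iterating the HLRF update (3) until the MPP stabilizes and taking β = ||u*||, can be sketched on an analytic limit state; the limit state below is our illustrative choice, whereas in the proposed method h and ∇h are supplied by the Diffuse Approximation.

```python
# Minimal sketch of the HLRF update (3) on an analytic limit state
# h(u) = u1 + u2 - 3 (illustrative choice; in the paper h and its
# gradient come from the Diffuse Approximation, not closed forms).
import math

def h(u):
    return u[0] + u[1] - 3.0

def grad_h(u):
    return [1.0, 1.0]

u = [0.0, 0.0]                       # start at the origin of the U-space
for _ in range(50):
    g = grad_h(u)
    gg = sum(gi * gi for gi in g)                     # grad^T grad
    c = (sum(gi * ui for gi, ui in zip(g, u)) - h(u)) / gg
    u_new = [c * gi for gi in g]                      # eq. (3)
    if max(abs(a - b) for a, b in zip(u_new, u)) < 1e-10:
        u = u_new
        break                        # MPP has stabilized
    u = u_new

beta = math.sqrt(sum(ui * ui for ui in u))            # beta = ||u*||
```

For this linear limit state the iteration converges in one step to the exact MPP, with β = 3/√2; nonlinear limit states need several updates.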

5 Test problems: RBDO analysis of NC machining process

In this part of our work, we apply the proposed methodology to a test problem of the NC machining process of ultrahigh strength steel. The test problem is modeled by a set of analytical functions with coefficients adjusted to fit the experimental data. Formulations are provided for the machining time, the material removal rate, the tool life estimate, etc. These quantities serve alternatively as cost functions and reliability constraints in the two numerical examples provided in the second part of this section.

5.1 NC milling model

Figure 16 shows the sketch and the photo of the experiment used for measuring the cutting force. The tool advances in the y direction with a feed rate f (mm/min) and rotates around the z-axis with the cutting speed N (rpm). First, an exponential relationship between the three components of the cutting force F and the cutting parameters is established (Yan 2007)

$$F_{i,\max} = C_F \, d_p^{b_{i1}} N^{b_{i2}} f^{b_{i3}} d_e^{b_{i4}}, \quad i = x, y, z, \qquad (15)$$

where d_p is the axial depth of cut (mm) and d_e is the radial depth of cut (mm). The coefficients C_F, b_{i1}, b_{i2}, b_{i3}, b_{i4} are identified by least squares from the four-level orthogonal design of experiments shown in Table 2, resulting in the following models:

$$\begin{cases} F_{x,\max} = 767.1848 \, d_p^{0.7153} N^{-0.2420} f^{0.2269} d_e^{0.0606} \\ F_{y,\max} = 35.6615 \, d_p^{0.8264} N^{-0.5024} f^{0.5993} d_e^{1.0831} \\ F_{z,\max} = 9.8560 \, d_p^{0.7388} N^{-0.1749} f^{0.5151} d_e^{0.5358} \end{cases} \qquad (16)$$
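As a quick plausibility check, the fitted models (16) can be evaluated directly; the function name below is ours, and only coefficients quoted in the text are used. Run no. 1 of Table 2 is reproduced to within a few newtons.

```python
def cutting_forces(dp, N, f, de):
    """Peak cutting-force components (N) from the fitted models (16);
    dp, de in mm, N in rpm, f in mm/min."""
    Fx = 767.1848 * dp**0.7153 * N**-0.2420 * f**0.2269 * de**0.0606
    Fy = 35.6615 * dp**0.8264 * N**-0.5024 * f**0.5993 * de**1.0831
    Fz = 9.8560 * dp**0.7388 * N**-0.1749 * f**0.5151 * de**0.5358
    return Fx, Fy, Fz

# Run no. 1 of Table 2: dp=2, N=800, f=300, de=4
# (measured: Fx=991.3, Fy=301.4, Fz=202.7)
Fx, Fy, Fz = cutting_forces(2, 800, 300, 4)
```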

Fig. 16 Experiment for measuring the cutting force. a Sketch of the experiment. b Photo of the experiment

Similar experiments were performed for the tool life model T_tl (min):

$$T_{tl} = \left( \frac{19650}{d_p^{0.15} d_e^{0.1} N f^{0.25}} \right)^5 \qquad (17)$$

The spindle power P (kW) developed by the cutter of diameter D depends on the tangential component of the cutting force produced in the x direction and on the longitudinal component y, but does not depend on the vertical component z, as there is no displacement in this direction:

$$P = \frac{F_{x,\max} \, \pi D N + F_{y,\max} \, f}{6 \times 10^4} \qquad (18)$$

The machining time T (min) depends on the feed rate and on the length of cut L (mm):

$$T = \frac{L}{f} \qquad (19)$$

The material removal rate MRR (mm³/min) is given directly by the product of the axial depth of cut, the radial depth of cut and the feed rate:

$$MRR = f \times d_p \times d_e \qquad (20)$$

Finally, the torque M (Nm) is given by

$$M = \frac{F_{x,\max} \, D}{2 \times 10^3} \qquad (21)$$
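The process models (17)–(21) translate directly into code. The helper names below are ours, not the paper's; this is a sketch for readers who want to reproduce the later tables.

```python
import math

def tool_life(dp, de, N, f):
    """Tool life T_tl (min), Eq. (17)."""
    return (19650.0 / (dp**0.15 * de**0.1 * N * f**0.25)) ** 5

def spindle_power(Fx, Fy, D, N, f):
    """Spindle power P (kW), Eq. (18)."""
    return (Fx * math.pi * D * N + Fy * f) / 6.0e4

def machining_time(L, f):
    """Machining time T (min), Eq. (19)."""
    return L / f

def mrr(dp, de, f):
    """Material removal rate (mm^3/min), Eq. (20)."""
    return f * dp * de

def torque(Fx, D):
    """Torque M (Nm), Eq. (21)."""
    return Fx * D / 2.0e3
```

With the model data used later (L = 84000 mm) and the case 1 optimum of Table 4 ([2, 11.90476, 336, 800]), these functions reproduce the active constraints: the machining time is exactly 250 min and the material removal rate is 8000 mm³/min to within rounding.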

We use these models in the following section, where we define the RBDO problem.

5.2 Optimization problem statement

As shown in Section 2, the statistical nature of the design variables and functions considered in the RBDO process is taken into account through constraints (limit state functions) involving bounds on the safety indices β.

Table 2 Design of experiments and corresponding cutting force

| No. | dp | N | f | de | Fxmax | Fymax | Fzmax | F |
|---|---|---|---|---|---|---|---|---|
| 1 | 2 | 800 | 300 | 4 | 991.3 | 301.4 | 202.7 | 1055.8 |
| 2 | 2 | 1000 | 400 | 6 | 1027.5 | 496.6 | 281.0 | 1175.3 |
| 3 | 2 | 1200 | 500 | 8 | 1052.4 | 707.3 | 356.2 | 1317.1 |
| 4 | 2 | 1400 | 600 | 10 | 1071.1 | 929.8 | 429.2 | 1481.9 |
| 5 | 3 | 800 | 400 | 8 | 1474.9 | 1060.6 | 459.9 | 1873.9 |
| 6 | 3 | 1000 | 300 | 10 | 1326.9 | 1016.1 | 429.8 | 1725.7 |
| 7 | 3 | 1200 | 600 | 4 | 1405.6 | 520.7 | 364.1 | 1542.6 |
| 8 | 3 | 1400 | 500 | 6 | 1331.6 | 670.2 | 401.0 | 1543.7 |
| 9 | 4 | 800 | 500 | 10 | 1932.0 | 1958.0 | 719.1 | 2843.2 |
| 10 | 4 | 1000 | 600 | 8 | 1882.1 | 1533.3 | 674.1 | 2519.5 |
| 11 | 4 | 1200 | 300 | 6 | 1512.2 | 676.3 | 391.6 | 1702.2 |
| 12 | 4 | 1400 | 400 | 4 | 1517.3 | 479.3 | 355.8 | 1630.5 |
| 13 | 5 | 800 | 600 | 6 | 2290.0 | 1510.4 | 708.4 | 2833.3 |
| 14 | 5 | 1000 | 500 | 4 | 2031.2 | 780.2 | 499.1 | 2232.4 |
| 15 | 5 | 1200 | 400 | 10 | 1953.1 | 1680.2 | 704.2 | 2670.8 |
| 16 | 5 | 1400 | 300 | 8 | 1739.0 | 1027.8 | 524.4 | 2087.0 |
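The least-squares identification mentioned above can be sketched by fitting the log-linear form of (15) to the sixteen F_x,max runs of Table 2. The paper does not give its fitting code, so this is an illustrative reconstruction with our own variable names; taking logarithms turns the power-law model into an ordinary linear regression ln F = ln C_F + b1 ln dp + b2 ln N + b3 ln f + b4 ln de.

```python
import numpy as np

# Columns: dp, N, f, de, Fx_max (the 16 runs of Table 2)
doe = np.array([
    [2, 800, 300, 4, 991.3],   [2, 1000, 400, 6, 1027.5],
    [2, 1200, 500, 8, 1052.4], [2, 1400, 600, 10, 1071.1],
    [3, 800, 400, 8, 1474.9],  [3, 1000, 300, 10, 1326.9],
    [3, 1200, 600, 4, 1405.6], [3, 1400, 500, 6, 1331.6],
    [4, 800, 500, 10, 1932.0], [4, 1000, 600, 8, 1882.1],
    [4, 1200, 300, 6, 1512.2], [4, 1400, 400, 4, 1517.3],
    [5, 800, 600, 6, 2290.0],  [5, 1000, 500, 4, 2031.2],
    [5, 1200, 400, 10, 1953.1],[5, 1400, 300, 8, 1739.0],
])
# Regression matrix [1, ln dp, ln N, ln f, ln de] and response ln Fx
X = np.hstack([np.ones((16, 1)), np.log(doe[:, :4])])
y = np.log(doe[:, 4])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
C_F, b = np.exp(coef[0]), coef[1:]
# b should land near the published exponents (0.7153, -0.2420, 0.2269, 0.0606)
```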


The main parameters which control the quality and the efficiency of the NC machining process are the axial depth of cut d_p (mm), the radial depth of cut d_e (mm), the feed rate f (mm/min) and the cutting speed around the z-axis N (rpm). The actual ranges of variation of these four variables are:

$$d_p \in [2, 5], \quad d_e \in [4, 15], \quad f \in [300, 600], \quad N \in [800, 1400].$$

The machining parameters are supposed to obey normal distributions; their variances σ are affected by both the capability of the machine tools and the experience of the workers. In this study, the variance of the axial depth of cut d_p is issued from an experiment performed in a factory; the variances of the other machining parameters are obtained from the experience of workers. Finally, we set σ to [0.005; 0.005; 2; 5]. A sequential quadratic programming (SQP) algorithm will be used to find the optimal and reliable parameters we look for. From experimental results, the following expressions of the possible limit state functions have been identified:

- Machining time: g_T = 250 − T = 0
- Material removal rate: g_MRR = MRR − 8000 = 0
- Torque: g_torque = 35 − M = 0
- Cutting force: g_force = 3500 − F = 0
- Tool life: g_toollife = T_tl − 60 = 0

The failure domain is defined by g < 0. Other parameters of the model are: D = 32 (mm) and L = 84000 (mm). Based on the possible limit state functions mentioned above and on engineering practice, two cases of NC machining are considered in this work (Fig. 17).

Fig. 17 Process of optimization

5.3 Test cases

In all the test cases the number of points used in the LHS pattern is k = 8. The X-space is normalized so that each design variable lies in the [0, 1] interval. The virtual grid intervals are h = 0.01 in the X-space and h = 0.001 in the U-space. The radius of influence is R = 2.5h. The stopping criterion (absolute value of error) is 10^−2 (X-space) and 0.5 × 10^−3 (U-space). The initial point is [3.5; 7; 450; 1000].

5.3.1 Case 1

Tool life is one of the most sought-after targets in practical machining, so in case 1 the tool life (T_tl) is considered as the objective to be maximized. Minimum values of the safety indices on the machining time (T) and the material removal rate (MRR) are taken as constraints:

$$\max \; f_{toollife}(d_p, d_e, f, N) = \left( \frac{19650}{d_p^{0.15} d_e^{0.1} N f^{0.25}} \right)^5$$

subject to β_T and β_MRR being no smaller than their prescribed target values, with

$$g_T = 250 - \frac{L}{f} \ge 0, \qquad g_{MRR} = f \times d_p \times d_e - 8000 \ge 0.$$

The limit values of the safety indices must correspond to acceptable failure probabilities. They depend on the limitations of practical machining, and may differ between limit state surfaces. For case 1, three situations are considered:

- β_T ≥ 0, β_MRR ≥ 0: this case is equivalent to deterministic optimization,
- β_T ≥ 2, β_MRR ≥ 2: corresponding to a probability of failure P_f ≤ 2.275 × 10^−2,
- β_T ≥ 3, β_MRR ≥ 4: P_f ≤ 1.35 × 10^−3 and P_f ≤ 3.17 × 10^−5 respectively.

The detailed results of each iteration (number of newly created points, optimal point x, objective function f_toollife, two reliability constraints) in the X-space for the first situation
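The first safety index reported in Table 3 can be checked by hand. This is a sketch, not the paper's FORM code: the machining-time constraint g_T = 250 − L/f depends on the feed rate f alone, so its limit state in the standard normal space is the single point u* = (f* − μ_f)/σ_f with f* = L/250, and β is simply |u*|.

```python
# Data quoted in Section 5.2 / 5.3 of the text:
L_cut = 84000.0          # length of cut L (mm)
sigma_f = 2.0            # standard deviation of the feed rate f
mu_f = 450.0             # feed rate at the initial design [3.5; 7; 450; 1000]

f_star = L_cut / 250.0   # feed rate on the limit state g_T = 0
beta_T = (mu_f - f_star) / sigma_f
# beta_T = 57.0, the value reported for iteration 1 of Table 3
```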

Table 3 Results of each iteration in X-space for test case 1 for βT ≥ 0 and βMRR ≥ 0

| It. n° | Nb of new points | x | −f_toollife | βT | βMRR |
|---|---|---|---|---|---|
| 1 | 8 | [3.5, 7, 450, 1000] | −208.784010 | 57.000000 | 59.655566 |
| 2 | 8 | [2, 7.479764, 449.7383, 998.9519] | −309.153158 | 56.869172 | −35.468890 |
| 3 | 8 | [2, 8.74186, 447.9655, 994.6622] | −293.632990 | 55.982739 | −4.146687 |
| 4 | 8 | [2, 8.925221, 448.009, 993.5693] | −292.167382 | 56.004483 | −0.069051 |
| 5 | 8 | [2, 8.891009, 449.8863, 990.5713] | −295.636464 | 56.943141 | −0.002770 |
| 6 | 8 | [2, 6.526084, 569.58, 800] | −747.860864 | 116.789999 | −17.074853 |
| 7 | 8 | [2, 7.018623, 567.442, 800] | −724.541133 | 115.721020 | −0.993031 |
| 8 | 8 | [2, 7.055865, 566.8924, 800] | −723.501844 | 115.446217 | −0.004762 |
| 9 | 8 | [2, 7.089607, 564.1934, 800] | −726.096996 | 114.096712 | −0.005142 |
| 10 | 8 | [2, 9.954485, 336.2161, 800] | −1170.328884 | 0.108140 | −29.382826 |
| 11 | 8 | [2, 11.6258, 336.1989, 800] | −1083.013832 | 0.099484 | −3.605565 |
| 12 | 7 | [2, 11.89959, 335.986, 800] | −1071.329326 | −0.006992 | −0.073595 |
| 13 | 4 | [2, 11.90477, 335.9998, 800] | −1071.041726 | 0.000000 | 0.000013 |
| 14 | 4 | [2, 11.90529, 335.9852, 800] | −1071.076394 | −0.007405 | 0.000000 |
| 15 | 4 | [2, 11.90476, 336, 800] | −1071.040924 | 0.000000 | 0.000018 |
| 16 | 4 | [2, 11.90515, 335.989, 800] | −1071.067252 | −0.005514 | 0.000000 |
| 17 | 0 | [2, 11.90476, 336, 800] | −1071.041345 | 0.000000 | 0.000000 |

mentioned above are shown in Table 3 (βT ≥ 0, βMRR ≥ 0). The detailed results of each iteration in the U-space and for the other two situations are not presented here due to limited space. Table 3 shows that the number of newly created points decreases to zero (from 8 to 7, 4 and 0) at the end of the iterations, thanks to the adaptation of the moving LHS patterns.

Table 4 shows the results obtained by a deterministic formulation and by a reliability approach performed with three levels of reliability. We can observe that the same solution is reached by the proposed method (with advancing LHS patterns) and by the reference one (without advancing LHS patterns), but due to the computation of the safety indices, the number of function evaluations increases from 13 in the deterministic approach to several hundreds in probabilistic design. The use of approximations capable of reusing previously calculated points then becomes necessary. In Table 4, a larger radial depth of cut (de) and a smaller feed rate (f), with minimum dp and N, yield a longer tool life. More rigorous requirements on the T and MRR constraints (βT ≥ 3 and βMRR ≥ 4) imply increasing the axial depth of cut dp and the feed rate f. Both constraints are active

Table 4 Numerical results for test case 1

| | Deterministic optimization | βT ≥ 0, βMRR ≥ 0 (Reference) | (Proposed) | βT ≥ 2, βMRR ≥ 2 (Reference) | (Proposed) | βT ≥ 3, βMRR ≥ 4 (Reference) | (Proposed) |
|---|---|---|---|---|---|---|---|
| Num of gT evaluations | 13 | 488 | 456 | 440 | 350 | 640 | 450 |
| Num of gMRR evaluations | 13 | 352 | 280 | 336 | 207 | 424 | 312 |
| Num of f_toollife evaluations | 13 | 136 | 111 | 104 | 96 | 128 | 120 |

The reference and the proposed method converge to the same optimal solution, reported once per reliability level:

| | Deterministic optimization | βT ≥ 0, βMRR ≥ 0 | βT ≥ 2, βMRR ≥ 2 | βT ≥ 3, βMRR ≥ 4 |
|---|---|---|---|---|
| dp | 2 | 2 | 2 | 2 |
| de | 11.90476 | 11.90476 | 11.91711 | 12.00082 |
| f | 336 | 336 | 340 | 342 |
| N | 800 | 800 | 800 | 800 |
| Value of f_toollife | −1071 | −1071.0417 | −1054.767 | −1043.404 |
| Value of gT | 0 | 1.2158 × 10−4 | −2.9412 | −4.3859 |
| Value of gMRR | 0.00 | −6.7471 × 10−4 | −103.6368 | −208.5614 |
| βT | 0 | 0.0 | 2.0 | 3.0 |
| βMRR | 0 | 0.0 | 2.0 | 4.0 |

Table 5 Numerical results for test case 2

| | Deterministic optimization | βF ≥ 0, βtoollife ≥ 0 (Reference) | (Proposed) | βF ≥ 2, βtoollife ≥ 2 (Reference) | (Proposed) | βF ≥ 3, βtoollife ≥ 4 (Reference) | (Proposed) |
|---|---|---|---|---|---|---|---|
| Num of gF evaluations | 10 | 488 | 382 | 224 | 162 | 280 | 198 |
| Num of gtoollife evaluations | 10 | 584 | 408 | 232 | 188 | 352 | 244 |
| Num of f_MRR evaluations | 10 | 176 | 164 | 56 | 48 | 72 | 63 |

The reference and the proposed method converge to the same optimal solution, reported once per reliability level:

| | Deterministic optimization | βF ≥ 0, βtoollife ≥ 0 | βF ≥ 2, βtoollife ≥ 2 | βF ≥ 3, βtoollife ≥ 4 |
|---|---|---|---|---|
| dp | 3.8629756 | 3.8629756 | 3.821322 | 3.7918197 |
| de | 15 | 15 | 15 | 15 |
| f | 600 | 600 | 600 | 600 |
| N | 1090.3027 | 1090.3027 | 1081.9036 | 1073.0002 |
| Value of f_MRR | 34766.7806 | 34766.7807 | 34391.8983 | 34126.3773 |
| Value of gF | 0 | −1.8592 × 10−7 | −18.8852 | −28.4646 |
| Value of gtoollife | 0 | 2.1319 × 10−7 | −2.8746 | −5.9088 |
| βF | 0 | 0.0 | 2.0 | 3.0 |
| βtoollife | 0 | 0.0 | 2.0 | 4.0 |

at the optimum solution. The other important information in this table is the number of 'exact' function evaluations. Comparing the results with the reference solution, we can see that the proposed method leads to a significant reduction of the number of 'exact' function evaluations: for the constraint function gT, this number is decreased by 32, 90 and 190 respectively for the three situations. For the constraint function gMRR, it is reduced by 20%, 38% and 27% when using the DA. We also notice that the total number of objective function evaluations is decreased by 25, 8 and 8: this is not directly related to the use of mixed X-space and U-space points, because no reliability index is computed for the objective function, but it can be interpreted as the effect of the high quality of the gradient values due to the DA, which allows a faster convergence of the SQP.

5.3.2 Case 2

In case 2, the material removal rate (MRR) is taken as the objective: in rough cutting, we want to remove material rapidly without considering the quality of the cut, which will be achieved in the fine cutting stage. Constraints are imposed on the safety indices of the cutting force (F) and the tool life:

$$\max \; f_{MRR}(d_p, d_e, N, f) = d_p \times d_e \times f$$

subject to β_F and β_toollife being no smaller than their prescribed target values, with

$$g_F = 3500 - \sqrt{F_{x,\max}^2 + F_{y,\max}^2 + F_{z,\max}^2} \ge 0, \qquad g_{toollife} = \left( \frac{19650}{d_p^{0.15} d_e^{0.1} N f^{0.25}} \right)^5 - 60 \ge 0.$$
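The case 2 objective and limit state functions above transcribe directly into code (the function names are ours; the force components would come from the fitted models (16)). Evaluating them at the deterministic optimum of Table 5 confirms that the tool-life constraint is active there.

```python
import math

def f_mrr(dp, de, N, f):
    """Case 2 objective: material removal rate (mm^3/min)."""
    return dp * de * f

def g_force(Fx, Fy, Fz):
    """Cutting-force limit state: positive in the safe domain."""
    return 3500.0 - math.sqrt(Fx**2 + Fy**2 + Fz**2)

def g_toollife(dp, de, N, f):
    """Tool-life limit state, Eq. (17) minus the 60 min threshold."""
    return (19650.0 / (dp**0.15 * de**0.1 * N * f**0.25)) ** 5 - 60.0

# Deterministic optimum of Table 5: [3.8629756, 15, 600, 1090.3027]
obj = f_mrr(3.8629756, 15, 1090.3027, 600)   # ~34766.78, as in Table 5
gtl = g_toollife(3.8629756, 15, 1090.3027, 600)   # ~0: constraint active
```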

For case 2, the target values of the safety indices of the cutting force (F) and the tool life are set to 0, 2, 3 and 4. As in case 1, three situations are considered:

- β_F ≥ 0, β_toollife ≥ 0: this case is equivalent to deterministic optimization,
- β_F ≥ 2, β_toollife ≥ 2: corresponding to a probability of failure P_f ≤ 2.275 × 10^−2,
- β_F ≥ 3, β_toollife ≥ 4: P_f ≤ 1.35 × 10^−3 and P_f ≤ 3.17 × 10^−5 respectively.

The numerical results of the reliability optimization of case 2 are shown in Table 5. The two constraints are active at the optimum for all three situations. In Table 5, we find that the radial depth of cut de and the feed rate f are almost at their maximum values regardless of the target values of the safety indices. This means that we may adjust the axial depth of cut dp, with maximum de and f, to reach a larger material removal rate (MRR) without violating the constraints on the cutting force and the tool life. In this case, the axial depth of cut dp should not exceed 4 mm and the cutting speed N is around 1000 rpm. As in the first case, the number of 'exact' function evaluations is significantly reduced by the proposed approximation approach (moving LHS patterns): the number of 'exact' cutting force evaluations is reduced by 23%, 28% and 30% respectively for the three situations; the number of 'exact' tool life constraint evaluations is reduced by 30%, 19% and 31%. As the convergence of the optimization is also faster, the number of objective function (MRR) evaluations is decreased by 6%, 14% and 13% respectively. These percentages show that the moving LHS patterns work better for the constraints, evaluated in the U-space, than in the X-space, due to the higher density of data points in the U-space.
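The tool-life constraint savings quoted above can be recomputed directly from the evaluation counts of Table 5 (reference versus proposed, for the three situations):

```python
# 'Exact' g_toollife evaluation counts from Table 5
ref = [584, 232, 352]
prop = [408, 188, 244]
pct = [round(100 * (1 - p / r)) for r, p in zip(ref, prop)]
# pct == [30, 19, 31], matching the 30%, 19% and 31% stated in the text
```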


From these examples, we conclude that mixing the points from both the X-space and the U-space improves the convergence speed by reducing the number of 'exact' function evaluations in two ways: better use of all previously calculated points by developing a response surface in the two spaces simultaneously, and more precise gradient evaluations due to the DA.

6 Conclusions and perspectives

In this study, we introduce a new response surface methodology for RBDO and demonstrate its efficiency in the optimization of machining process parameters. The latter is based on an empirical NC model fitted to actual measurements performed during the industrial process. Our approach is based on a progressive LHS design of experiments taking into account the sample points from both the X-space and the U-space to calculate the safety indices β using the DA within the FORM approach. For two cases of NC machining, the obtained results show that the new method can effectively decrease the number of 'exact' function evaluations needed and thus reduce the computing time. Subsequent work will concern coupling the proposed approach with full-scale modeling of the cutting process, based on finite element simulation.

Acknowledgments The authors gratefully acknowledge the financial support from the project Eiffel of the French Ministry of Foreign Affairs and from the Embassy of France in China. This work has also been supported by the French National Research Agency (ANR) through the COSINUS program (project OMD2 no. ANR-08-COSI-007).

References

Amiolemhen PE (2004) Application of genetic algorithms: determination of the optimal machining parameters in the conversion of a cylindrical bar stock into a continuous finished profile. Int J Mach Tools Manuf 44:1403–1412
Baek DK, Ko TJ, Kim HS (2001) Optimization of feedrate in a face milling operation using a surface roughness model. Int J Mach Tools Manuf 41:451–462
Baskar N, Asokan P, Saravanan R (2006) Selection of optimal machining parameters for multi-tool milling operations using a memetic algorithm. J Mater Process Technol 174:239–249
Bates SJ, Langley DS (2003) Formulation of the Audze-Eglais Uniform Latin Hypercube design of experiments. Adv Eng Softw 34:493–506
Bouzid W (2005) Cutting parameter optimization to minimize production time in high speed turning. J Mater Process Technol 161:388–395
Box GEP, Cox DR (1964) An analysis of transformations. J R Stat Soc B 26:211–252
Box GEP, Wilson KB (1951) On the experimental attainment of optimum conditions. J R Stat Soc Ser B Meth 13:1–45
Breitkopf P, Touzot G, Villon P (1998) Consistency approach and diffuse derivation in element free methods based on moving least squares approximation. Comput Assist Mech Eng Sci 5:479–501
Breitkopf P, Rassineux A, Touzot G, Villon P (2000) Explicit form and efficient computation of MLS shape functions and their derivatives. Int J Numer Methods Eng 48(3):451–466
Breitkopf P, Naceur H, Rassineux A (2005) Moving least squares response surface approximation: formulation and metal forming applications. Comput Struct 83:1411–1428
Choi KK, Youn BD, Yang RJ (2001) Moving least square method for reliability-based design optimization. 4th World Congress of Structural and Multidisciplinary Optimization, Dalian, China
Choi K, Lee G et al (2008) A sampling-based reliability-based design optimization using Kriging metamodel with constraint boundary sampling. In: 12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, 10–12 September, Victoria, British Columbia, Canada
Cus F (2006) Approach to optimization of cutting conditions by using artificial neural networks. J Mater Process Technol 173:281–290
Davim JP (2003) Design of optimization of cutting parameters for turning metal matrix composites based on the orthogonal arrays. J Mater Process Technol 132:340–344
Der Kiureghian A, Liu PL (1986) Structural reliability under incomplete probability information. J Eng Mech ASCE 112(1):85–104
Haldar A, Mahadevan S (2000) Probability, reliability and statistical methods in engineering design
Huntington DE, Lyrintzis CS (1998) Improvements to and limitations of Latin hypercube sampling. Probab Eng Mech 13:245–253
Jurecka F (2008) Robust design and reliability in structural optimization. FE-DESIGN GmbH, Haid-und-Neu-Str. 7, D-76131 Karlsruhe
Kang SC, Koh HM, Choo JF (2010) An efficient response surface method using moving least squares approximation for structural reliability analysis. Probab Eng Mech 25:365–371
Keramat M, Kielbasa R (1999) Modified Latin hypercube sampling Monte Carlo (MLHSMC) estimation for average quality index. Analog Integr Circuits Signal Process 19:87–98
Kharmanda G, Olhoff N, Mohamed A, Lemaire M (2004) Reliability-based topology optimization. Struct Multidiscip Optim 26(5):295–307. doi:10.1007/s00158-003-0322-7
Kim C, Choi KK (2008) Reliability-based design optimization using response surface method with prediction interval estimation. J Mech Des 130:121401-1–121401-12
Lee TH, Jung JJ (2008) A sampling technique enhancing accuracy and efficiency of metamodel-based RBDO: constraint boundary sampling. Comput Struct 86:1463–1476
Li H (1996) Intelligent rough machining of sculptured parts. Dissertation, University of Victoria
Liefvendahl M, Stocki R (2006) A study on algorithms for optimization of Latin hypercubes. J Stat Plan Inference 136:3231–3247
McKay MD, Beckman RJ, Conover WJ (1979) A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21(2):239–245
Milfelner M, Cus F (2003) Simulation of cutting forces in ball-end milling. Robot Comput-Integr Manuf 19:99–106
Oktem H (2005) Application of response surface methodology in the optimization of cutting conditions for surface roughness. J Mater Process Technol 170:11–16
Olsson A, Sandberg G, Dahlblom O (2003) On Latin hypercube sampling for structural reliability analysis. Struct Saf 25:47–68
Onwubolu CG (2006) Performance-based optimization of multi-pass face milling operations using Tribes. Int J Mach Tools Manuf 46:717–727
Park JS (1994) Optimal Latin-hypercube designs for computer experiments. J Stat Plan Inference 39:95–111
Rosenblatt M (1952) Remarks on a multivariate transformation. Ann Math Stat 23(3):470–472
Shunmugam MS, Bhaskara Reddy SV, Narendran TT (2000) Selection of optimal conditions in multi-pass face milling using a genetic algorithm. Int J Mach Tools Manuf 40:401–414
Song CY, Lee JS (2011) Reliability-based design optimization of knuckle component using conservative method of moving least squares meta-models. Probab Eng Mech 26(2):364–379. doi:10.1016/j.probengmech.2010.09.004
Stocki R (2005) A method to improve design reliability using optimal Latin hypercube sampling. Comput Assist Mech Eng Sci 12:87–105
Stocki R, Tauzowski P, Kleiber M (2007) Efficient sampling techniques for stochastic simulation of structural systems. Comput Assist Mech Eng Sci 14:127–140
Tandon V (2002) NC end milling optimization using evolutionary computation. Int J Mach Tools Manuf 42:595–605
Wang ZG (2005) Optimization of multi-pass milling using parallel genetic algorithm and parallel genetic simulated annealing. Int J Mach Tools Manuf 45:1726–1734
Wang L, Grandhi RV (1996) Safety index calculation using intervening variables for structural reliability analysis. Comput Struct 59(6):1139–1148
Yan X (2007) The research on cutting force in high speed milling process of difficult-to-machine materials. Dissertation, Northwestern Polytechnical University
Ye KQ, Li W, Sudjianto A (2000) Algorithmic construction of optimal symmetric Latin hypercube designs. J Stat Plan Inference 90:145–159
Youn BD (2001) Advances in reliability-based design optimization and probability analysis. Dissertation, The University of Iowa
Youn BD, Choi KK (2003) Reliability-based design optimization for crashworthiness of vehicle side impact. Struct Multidisc Optim 25:1–12
Youn BD, Choi KK (2004) A new response surface methodology for reliability-based design optimization. Comput Struct 82:241–256
