
CHAPTER 4 MATHEMATICAL MODELLING AND OPTIMIZATION PROCESSES

4.1 INTRODUCTION

A mathematical model is a description of a system using mathematical concepts and language; the process of developing such a model is termed mathematical modelling. A model helps to explain a system, to study the effects of its different components and to make predictions about its behaviour. Regression analysis is the study of the relationship between a set of independent variables and a dependent variable. Independent variables are characteristics that can be measured directly. In this work, cutting speed, feed and depth of cut are the independent variables; they are measured directly and used as the input parameters for the machining of Inconel 718. These variables are also called predictor or explanatory variables, because they predict the dependent variable and explain its behaviour. The dependent variables in this work are surface roughness, tool wear, tool life, material removal rate, cutting force and power consumption. Regression analysis is used to develop a mathematical model that explains the variance in a dependent variable in terms of the values of the independent variables. The objectives are defined first when building the regression models, and the possible independent variables are then identified.

Regression coefficients are estimated and tested to check that all the coefficients are statistically significant. Once the validation process is over, the model is implemented in the decision support system, and its limitations and constraints are outlined. The final performance of the model is monitored, and the model is refined and extended if necessary. Probability plots are used to visualize the relationship between any two variables.

4.2 DEVELOPMENT OF MATHEMATICAL MODELS USING REGRESSION ANALYSIS FOR MACHINING INCONEL 718

Regression describes the relationship between two quantitative variables (y, x), where y is the dependent variable and x is the independent variable. The relationship between y and x is assumed to be basically linear but inexact: besides its dependence on x, y has a random component u, called the disturbance or error. If i indexes the observations on the data pairs (x, y), the simple linear regression model formalizes these ideas as

    y_i = β0 + β1 x_i + u_i                                          (4.1)

The parameters β0 and β1 represent the y-intercept and the slope of the relationship, respectively. The basic idea of nonlinear regression is the same as that of linear regression; nonlinear regression is characterized by the fact that the prediction equation depends non-linearly on one or more unknown parameters. A nonlinear regression model is of the form

    y_i = f(x_i, θ) + ε_i,    i = 1, ..., n                           (4.2)

where y_i are the responses, f is a known function of the covariate vector x_i = (x_i1, ..., x_ik) and the parameter vector θ = (θ1, ..., θp), and ε_i are random errors. The ε_i are usually assumed to be uncorrelated, with zero mean and constant variance.
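As a brief illustration of equation (4.1), the ordinary least-squares estimates of β0 and β1 can be computed directly as in the sketch below; the sample values are hypothetical and are not data from this work.

```python
import numpy as np

# Hypothetical (x, y) observations for the simple linear model y = b0 + b1*x + u
x = np.array([25.0, 30.0, 35.0, 40.0, 45.0])
y = np.array([0.49, 0.52, 0.58, 0.63, 0.67])

# Closed-form ordinary least-squares estimates of the intercept and slope
b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()

residuals = y - (b0 + b1 * x)   # estimates of the disturbances u_i
print(f"b0 = {b0:.4f}, b1 = {b1:.4f}")
```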

4.3 DEVELOPMENT OF NON-LINEAR REGRESSION MODELS

The empirical relationships between the response variables (cutting force, surface roughness, tool life, tool wear, power consumption and material removal rate) and the machining parameters are obtained from the experimental data shown in Table 3.4. These values are analyzed using the statistical package SPSS to obtain the non-linear regression equations. The proposed model for the various responses in this work has the form

    Response = C × v^x1 × f^x2 × a^x3                                 (4.3)

where C is a constant, 'v' is the cutting speed, 'f' is the feed, 'a' is the depth of cut, and x1, x2, x3 are the estimated coefficients (exponents) of the regression model. The software estimates the parameters of the nonlinear models using the Levenberg-Marquardt nonlinear least-squares algorithm. The empirical relations for the individual responses are given below.

    Cutting force          Fc  = 5964.109 × v^x1 × f^x2 × a^x3       (4.4)

    Tool life              TL  = 22.048 × v^x1 × f^x2 × a^x3         (4.5)

    Power consumption      P   = 1.083 × v^x1 × f^x2 × a^x3          (4.6)

    Material removal rate  MRR = 1000 × v × f × a                    (4.7)

    Tool wear              Vb  = 0.007 × v^x1 × f^x2 × a^x3          (4.8)

    Surface roughness      Ra  = 2.383 × v^x1 × f^x2 × a^x3          (4.9)

In equations (4.4) to (4.9) the exponents x1, x2 and x3 denote the regression coefficients estimated separately for each response.
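As an illustration of how models of the form (4.3) can be fitted with the Levenberg-Marquardt algorithm mentioned above, the sketch below uses SciPy's curve_fit; the observations and starting values are hypothetical and do not reproduce the SPSS fits of equations (4.4) to (4.9).

```python
import numpy as np
from scipy.optimize import curve_fit

# Power-law model of equation (4.3): Response = C * v^x1 * f^x2 * a^x3
def power_law(X, C, x1, x2, x3):
    v, f, a = X
    return C * v**x1 * f**x2 * a**x3

# Hypothetical observations (cutting speed m/min, feed mm/rev, depth of cut mm, response)
v = np.array([25, 25, 35, 35, 45, 45], dtype=float)
f = np.array([0.10, 0.20, 0.10, 0.20, 0.15, 0.15])
a = np.array([1.00, 1.50, 1.25, 1.00, 1.50, 1.25])
y = np.array([410.0, 920.0, 530.0, 760.0, 880.0, 700.0])   # e.g. cutting force, N

# method='lm' selects the Levenberg-Marquardt nonlinear least-squares algorithm
params, cov = curve_fit(power_law, (v, f, a), y, p0=[1000.0, -0.1, 0.5, 0.8], method="lm")
C, x1, x2, x3 = params
print(f"Fc ~ {C:.3f} * v^{x1:.3f} * f^{x2:.3f} * a^{x3:.3f}")
```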

The fit of the proposed models to the experimental data in Table 3.4 is checked by computing the R-squared coefficient for each model. R-squared compares the residual sum of squares of the fitted model with that of a trivial model consisting of a constant only, and it quantifies how well the regression equation reproduces the observed data. Its value ranges from 0 to 1 (0 to 100 per cent); if the model fits the observed dependent variable perfectly, R-squared equals 1.0.
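The following minimal sketch shows how the R-squared measure described above can be computed from observed and predicted responses; the numerical values are hypothetical.

```python
import numpy as np

def r_squared(y_obs, y_pred):
    """R^2 = 1 - SS_res / SS_tot, comparing the fitted model with a constant-only model."""
    ss_res = np.sum((y_obs - y_pred) ** 2)          # residual sum of squares of the fitted model
    ss_tot = np.sum((y_obs - np.mean(y_obs)) ** 2)  # residual sum of squares of the constant-only model
    return 1.0 - ss_res / ss_tot

# Hypothetical observed and model-predicted responses
y_obs = np.array([410.0, 920.0, 530.0, 760.0])
y_pred = np.array([400.0, 905.0, 545.0, 770.0])
print(round(r_squared(y_obs, y_pred), 4))
```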

4.3.1 Cutting Force Model

The cutting force model obtained by the non-linear regression analysis is shown in equation (4.4). The R-squared value for this model is 0.9981, which is high and close to 1, indicating that the model is suitable for predicting cutting forces. A probability plot of the cutting force model is shown in Figure 4.1. The middle line of the probability plot represents the expected probabilities from the distribution based on maximum-likelihood parameter estimates; if the distribution is a good fit for the data, the points form a straight line. In the cutting force model all the points fall close to the middle line and within the confidence intervals, which shows that the model is adequate for predicting cutting forces.

Figure 4.1 Probability plot for the cutting force model (Normal, 95% CI)
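A normal probability plot such as Figure 4.1 can be generated as in the sketch below; the cutting-force values are hypothetical, and scipy.stats.probplot stands in for the statistical package used in this work.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Hypothetical cutting-force measurements, N
forces = np.array([420.0, 560.0, 610.0, 730.0, 890.0, 1040.0, 1250.0, 1480.0, 1720.0])

# Normal probability plot: ordered data against theoretical normal quantiles
(osm, osr), (slope, intercept, r) = stats.probplot(forces, dist="norm")

plt.plot(osm, osr, "o", label="observed")
plt.plot(osm, slope * osm + intercept, "-", label="fitted normal line")
plt.xlabel("Theoretical quantiles")
plt.ylabel("Cutting force, N")
plt.legend()
plt.show()
```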

4.3.2 Tool Life Model

The tool life model obtained by the non-linear regression analysis is shown in equation (4.5). The R-squared value for this model is 0.9913, which is high and close to 1, indicating that the model is suitable for predicting the tool life of the cutting tool. A probability plot of the tool life model is shown in Figure 4.2; all the points fall close to the middle line and within the confidence intervals, which shows that the model is adequate for predicting tool life.

Figure 4.2 Probability plot for the tool life model (Normal, 95% CI)

4.3.3 Power Consumption Model

The power consumption model obtained by the non-linear regression analysis is shown in equation (4.6). The R-squared value for this model is 0.988, which is high and close to 1, indicating that the model is suitable for predicting power consumption. A probability plot of the power consumption model is shown in Figure 4.3; all the points fall close to the middle line and within the confidence intervals, which shows that the model is adequate for predicting power consumption.

Figure 4.3 Probability plot for the power consumption model (Normal, 95% CI)

4.3.4 Material Removal Rate Model

The material removal rate model obtained by the non-linear regression analysis is shown in equation (4.7). The R-squared value for this model is 0.9993, which is high and close to 1, indicating that the model is suitable for predicting the material removal rate. A probability plot of the material removal rate model is shown in Figure 4.4; all the points fall close to the middle line and within the confidence intervals, which shows that the model is adequate for predicting the material removal rate.

Figure 4.4 Probability plot for the material removal rate model (Normal, 95% CI)

4.3.5 Tool Wear Model

The tool wear model obtained by the non-linear regression analysis is shown in equation (4.8). The R-squared value for this model is 0.9896, which is high and close to 1, indicating that the model is suitable for predicting tool wear. A probability plot of the tool wear model is shown in Figure 4.5; all the points fall close to the middle line and within the confidence intervals, which shows that the model is adequate for predicting tool wear.

Figure 4.5 Probability plot for the tool wear model (Normal, 95% CI)

4.3.6 Surface Roughness Model

The surface roughness model obtained by the non-linear regression analysis is shown in equation (4.9). The R-squared value for this model is 0.9543, which is high and close to 1, indicating that the model is suitable for predicting surface roughness. A probability plot of the surface roughness model is shown in Figure 4.6; all the points fall close to the middle line and within the confidence intervals, which shows that the model is adequate for predicting surface roughness.

Figure 4.6 Probability plot for the surface roughness model (Normal, 95% CI)

4.4 OPTIMIZATION OF MACHINING PROCESSES

Optimization is a mathematical discipline concerned with finding the minima and maxima of functions subject to constraints. Today, optimization comprises a wide variety of techniques from operations research, machining and artificial intelligence, and it is used to improve processes in practically all industries. Discrete optimization problems arise when the variables in the objective function can take only a finite number of discrete values; discrete optimization aims at taking these decisions so that a given function is maximized or minimized subject to the constraints.

4.4.1 Single Objective Optimization

Local optimization algorithms such as Sequential Quadratic Programming (SQP), the Generalized Reduced Gradient (GRG) method and the Adaptive Region Method typically converge quickly to a local optimum. These algorithms require the definition of a single objective function (minimize, maximize or reach a target value), the bounds on the input parameters and, if needed, a set of equality or inequality constraints on the outputs. They are useful for solving general constrained optimization problems. The flowchart for the optimization process of a single objective function is given in Figure 4.7. Global optimization algorithms include Differential Evolution (DE), Self-adaptive Evolution (SE), Simulated Annealing (SA) and Efficient Global Optimization (EGO); all of these also solve general constrained optimization problems. Whereas local optimization algorithms are likely to find a local optimum, global optimization algorithms have a high probability of finding a global optimum, because they search many locations of the design space simultaneously. In a single objective optimization algorithm the process terminates once the optimum solution is reached, whereas a multi-objective problem can involve a set of optimal solutions.

Figure 4.7 Optimization process of a single objective function: designing a series of experiments → machining and measuring experimental values → developing mathematical models → checking the accuracy of the models → optimization of the machining process within the bounds → obtaining the machining conditions for the required objective functions → validation and verification against the experimental values.
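As an illustration of the "optimization of the machining process within the bounds" step in Figure 4.7, the sketch below minimizes a surface-roughness model of the form (4.3) with an SQP-type solver (SciPy's SLSQP); the exponents in ra_model and the MRR constraint threshold are hypothetical values, not the fitted coefficients of this work.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical surface-roughness model of the form (4.3); exponents are illustrative only.
def ra_model(x):
    v, f, a = x
    return 2.383 * v**-0.5 * f**0.6 * a**0.2

# Bounds follow the experimental ranges in Table 4.1
bounds = [(25.0, 45.0),    # cutting speed, m/min
          (0.10, 0.20),    # feed, mm/rev
          (1.00, 1.50)]    # depth of cut, mm

# Optional inequality constraint, e.g. require MRR = 1000*v*f*a >= 5000 mm^3/min (illustrative)
cons = [{"type": "ineq", "fun": lambda x: 1000.0 * x[0] * x[1] * x[2] - 5000.0}]

# SLSQP is an SQP-type local algorithm, in the spirit of the methods listed above
res = minimize(ra_model, x0=[35.0, 0.15, 1.25], method="SLSQP", bounds=bounds, constraints=cons)
print(res.x, res.fun)
```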

4.4.2 Multi Objective Optimization

In many cases there are multiple objectives instead of a single one, and some of the objectives may conflict for the same choice of parameters. Hence a trade-off between the criteria is needed to ensure a satisfactory design. Multi-objective optimization algorithms allow optimizations that take several objectives into account simultaneously, where each objective can be a minimization or a maximization of an output. Multi-objective optimization constructs the so-called Pareto front of the problem. If a large number of design options are evaluated and their performance is plotted in the objective-function space, the outer boundary of this collection of points defines the borderline beyond which the design cannot be further improved; this Pareto line separates the feasible and infeasible regions. In multi-objective optimization the Pareto front is therefore defined as the border between the region of feasible points, for which all constraints are satisfied, and the region of infeasible points.
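A minimal sketch of extracting the non-dominated (Pareto) points from a set of evaluated designs is given below, assuming both objectives are to be minimized; the objective pairs are illustrative values only.

```python
import numpy as np

def pareto_front(points):
    """Return a boolean mask of non-dominated rows; all objectives are minimized."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # j dominates i if it is no worse in all objectives and better in at least one
            if i != j and np.all(points[j] <= points[i]) and np.any(points[j] < points[i]):
                mask[i] = False
                break
    return mask

# Hypothetical (surface roughness, flank wear) pairs
objs = [(0.55, 0.090), (0.22, 0.136), (0.25, 0.110), (0.33, 0.158), (0.89, 0.112)]
print(np.array(objs)[pareto_front(objs)])
```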

4.4.3 Optimization Using Response Surface Methodology

Response surface methodology (RSM) is one of the most informative methods for analyzing the results of a factorial experiment. The concept of a response surface involves a dependent variable y, called the response variable, and several independent variables x1, x2, ..., xk; the response is then given as

    y = φ(x1, x2, ..., xk)                                            (4.10)

The goal is to optimize the response variable y. It is assumed that the independent variables x1, x2, ..., xk are continuous and can be controlled by the experimenter with negligible error, while the response is treated as a random variable. In general, a suitable combination of cutting speed (v), feed (f) and depth of cut (a) optimizes tool life, tool wear, power consumption, surface roughness, cutting force and material removal rate for lathe machining operations. The observed response y as a function of the cutting speed, feed and depth of cut can be written as

    y = φ(v, f, a) + ε                                                (4.11)

where ε is a random error. In the present work, cutting speed, feed and depth of cut are considered as the process parameters, and surface roughness and flank wear are taken as the response variables. The selected design plan comprises 27 experiments: a three-factor, three-level central composite rotatable design consisting of 27 sets of coded conditions. The generalized steps of the Response Surface Methodology optimization process are given in Figure 4.8.

Figure 4.8 Response Surface Methodology process: select the variables and determine their initial ranges → conduct the experiments → carry out the factorial design and fit the objective function → perform the statistical tests → construct the response surface model → determine the path of steepest ascent and find the optimum result.
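The sketch below generates the 27 coded parameter combinations of a three-level, three-factor plan; a full 3^3 grid is used here as a simple stand-in for the coded design described above, with the level values taken from Table 4.1.

```python
from itertools import product

# Factor levels taken from Table 4.1 (cutting speed m/min, feed mm/rev, depth of cut mm)
speeds = [25, 35, 45]
feeds = [0.10, 0.15, 0.20]
depths = [1.00, 1.25, 1.50]

# 3^3 = 27 parameter combinations, printed with coded levels (-1, 0, +1)
code = {0: -1, 1: 0, 2: +1}
for run, (i, j, k) in enumerate(product(range(3), repeat=3), start=1):
    print(run, code[i], code[j], code[k], speeds[i], feeds[j], depths[k])
```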

To analyze the Response Surface optimization process, a case study involving two objective functions, minimization of surface roughness and minimization of flank wear while machining Inconel 718 with a carbide cutting tool, is carried out. To investigate the surface roughness and flank wear of Inconel 718, single-pass turning is performed under dry conditions. The experiments are conducted on an L16 ACE Designers lathe with the following specifications: 7.5 kW motor drive, speed range 0 – 3,500 rpm and feed range 0.1 – 1,000 mm/rev, with constant-speed capability. The test specimens were prepared from 38 mm cylindrical bar stock. The cutting inserts are K10-type uncoated carbide inserts conforming to the ISO specification. Three levels were specified for each of the machining parameters (cutting speed, feed and depth of cut). The L27 orthogonal array was chosen, which consists of 27 rows corresponding to the parameter combinations at three levels. Experiments were conducted for these combinations, and the surface roughness and flank wear were measured. Surface roughness was measured at four locations on the machined surface, with two repetitions at each location, and the average values were reported. Flank wear was measured at the end of machining using a Scanning Electron Microscope (SEM). The experimental conditions and results for the analysis of surface roughness and flank wear in turning of Inconel 718 are shown in Table 4.1.

Table 4.1 Experimental data for performance analysis of surface roughness and flank wear in turning of Inconel 718

Expt.   Cutting speed   Feed 'f'    Depth of cut   Surface roughness   Flank wear
No.     'v' (m/min)     (mm/rev)    'a' (mm)       'Ra' (µm)           'Vb' (mm)
 1      35              0.2         1.5            0.55                0.09
 2      25              0.2         1              0.85                0.068
 3      25              0.15        1              0.56                0.086
 4      35              0.1         1              0.63                0.142
 5      25              0.1         1              0.49                0.083
 6      35              0.15        1              0.64                0.138
 7      35              0.2         1.25           0.6                 0.166
 8      45              0.15        1              0.56                0.187
 9      45              0.1         1              0.67                0.19
10      25              0.15        1.25           0.59                0.153
11      25              0.2         1.5            0.433               0.125
12      35              0.1         1.25           0.53                0.182
13      25              0.1         1.25           0.37                0.128
14      35              0.2         1.25           0.5                 0.162
15      45              0.2         1.25           0.33                0.158
16      45              0.1         1.25           0.81                0.182
17      45              0.2         1              0.22                0.136
18      25              0.15        1.5            0.33                0.103
19      25              0.1         1.5            0.25                0.11
20      35              0.2         1              0.579               0.132
21      45              0.2         1.5            0.55                0.075
22      45              0.1         1.5            0.89                0.112
23      35              0.15        1.5            0.53                0.137
24      45              0.15        1.5            0.57                0.11
25      35              0.15        1.25           0.56                0.168
26      45              0.15        1.25           0.58                0.17
27      35              0.1         1.5            0.57                0.101

The optimization using Response Surface Methodology is accomplished by:

i)   obtaining the individual desirability (d) for each response,
ii)  combining the individual desirabilities to obtain the composite desirability (D), and
iii) maximizing the composite desirability and identifying the optimal input parameter settings.

The estimated regression coefficients, the analysis of variance (ANOVA) for surface roughness and flank wear, and the contour plots for flank wear and surface roughness are obtained. The optimization of the machining process is carried out using the response optimizer in the MINITAB software, which generates the optimization plot based on the individual and composite desirability values. The optimization results are discussed in detail in the Results and Discussion chapter.
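A minimal sketch of the desirability calculation described above is given below for two smaller-is-better responses; the target and upper limits are taken from the range of Table 4.1 for illustration and are not the settings used in the MINITAB response optimizer.

```python
import numpy as np

def d_smaller_is_better(y, target, upper):
    """Individual desirability for a response to be minimized: 1 at the target, 0 above the upper limit."""
    if y <= target:
        return 1.0
    if y >= upper:
        return 0.0
    return (upper - y) / (upper - target)

# Hypothetical predicted responses at one parameter setting
ra, vb = 0.45, 0.12                      # surface roughness (µm), flank wear (mm)
d_ra = d_smaller_is_better(ra, target=0.22, upper=0.89)
d_vb = d_smaller_is_better(vb, target=0.068, upper=0.19)

# Composite desirability: geometric mean of the individual desirabilities
D = np.sqrt(d_ra * d_vb)
print(round(d_ra, 3), round(d_vb, 3), round(D, 3))
```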

4.4.4 Taguchi's Optimization Technique

Design of experiments refers to the process of planning an experiment so that appropriate data can be collected and analyzed by statistical methods, resulting in valid and objective conclusions. It is a powerful tool for the modelling and analysis of the influence of process variables on a response that is an unknown function of those variables. In the Taguchi design of experiments, the signal-to-noise (S/N) ratio represents the quality characteristic of the observed data; for surface roughness, lower values are desirable. Traditional experimental design methods are complicated and difficult to use, and they require a large number of experiments when the number of process parameters increases. The generalized steps carried out in the analysis of the machining parameters using Taguchi's technique are given in Figure 4.9.

Figure 4.9 Taguchi's technique optimization process: define the objective → determine the design parameters → select the orthogonal array → conduct the experiments → perform ANOVA and S/N analysis → determine the optimum parameter combination → confirm with experiments (select new settings if required) → stop.

MINITAB software is used in this work for the machinability assessment of Inconel 718 using the Taguchi technique. The software calculates response tables and linear model results, and generates main effects and interaction plots for:

•   signal-to-noise ratios (S/N ratios, which provide a measure of robustness) versus the control factors
•   means (static design) or slopes (dynamic design) versus the control factors
•   standard deviations versus the control factors
•   natural log of the standard deviations versus the control factors

There are three types of S/N ratio, smaller the better, larger the better and nominal the best, and the appropriate one is selected according to the response function and its characteristics. In the turning operation the desired responses are minimum surface roughness and minimum tool wear, so the smaller-the-better S/N ratio is selected. The S/N ratios for the different characteristics in Taguchi's technique are presented in Table 4.2.

Table 4.2 Characteristics of Taguchi's optimization technique

Larger is better:    S/N = −10·log10[(1/n)·Σ(1/yi²)]
                     Desired goal: maximize the response. Data type: positive.

Nominal is best:     S/N = 10·log10(ȳ²/s²)
                     Desired goal: target the response, basing the S/N ratio on the mean and standard deviation.
                     Data type: non-negative with an "absolute zero", where the standard deviation is zero when the mean is zero.

Smaller is better:   S/N = −10·log10[(1/n)·Σyi²]
                     Desired goal: minimize the response. Data type: non-negative with a target value of zero.

where yi is the observed data in the i-th trial, ȳ is the mean, s is the standard deviation and n is the number of trials. The experimental results for surface roughness and flank wear in turning of Inconel 718 given in Table 4.1 are analyzed using Taguchi's optimization process; the surface roughness and flank wear are analyzed individually using the statistical software MINITAB.

The mean S/N ratio for each level of the machining parameters is calculated, the optimal performance of the machining process is determined, and the parameters are ranked on the basis of the difference (delta) in the mean S/N ratios. The main effects plots for surface roughness and flank wear are plotted using the MINITAB software. The results of the Taguchi optimization analysis are discussed in detail in the Results and Discussion chapter.
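The smaller-the-better S/N ratio of Table 4.2 and the mean S/N ratio per factor level can be computed as in the sketch below; the (cutting speed, Ra) pairs are a subset of Table 4.1, and the grouping by level illustrates the ranking step rather than reproducing the MINITAB output.

```python
import numpy as np

# Smaller-the-better S/N ratio from Table 4.2: S/N = -10*log10((1/n)*sum(y^2))
def sn_smaller_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# (cutting speed, surface roughness) pairs taken from Table 4.1
speed = np.array([35, 25, 25, 35, 25, 35, 45, 45, 45])
ra    = np.array([0.55, 0.85, 0.56, 0.63, 0.49, 0.64, 0.56, 0.67, 0.22])

# Per-run S/N (single observation per run) and its mean at each cutting-speed level;
# the level with the highest mean S/N is preferred, and factors are ranked by the
# spread (delta) of these level means.
sn = np.array([sn_smaller_is_better([y]) for y in ra])
for level in (25, 35, 45):
    print(level, round(sn[speed == level].mean(), 3))
```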

4.4.5 Genetic Algorithm based Multi Objective Optimization

In a single objective optimization algorithm the process terminates once the optimum solution is reached, whereas a multi-objective problem involves a set of optimal solutions. The Non-dominated Sorting Genetic Algorithm (NSGA-II), which builds on Goldberg's non-dominated sorting concept, obtains these non-dominated solutions; the algorithm is based on classifying the individuals into several layers. To overcome the drawbacks of conventional optimization approaches, the non-traditional Non-dominated Sorting Genetic Algorithm (NSGA-II) is used in this work. Non-traditional algorithms have proven successful in handling many real-world multi-objective concurrent engineering problems. NSGA-II is a generic multi-objective evolutionary algorithm that does not rely on explicit gradient information. The advantages of NSGA-II are:

i)   it is a population-based search technique, so a global optimal solution is possible,
ii)  it does not need any auxiliary information such as gradients or derivatives, and
iii) it is easy to program and implement.

This algorithm uses an elite-preserving operator, which favours the elites of a population by giving them an opportunity to be carried over directly to the next generation. After two offspring are created using the crossover and mutation operators, they are compared with both of their parents, and the two best of the four parent-offspring solutions are selected. Multi-objective optimization problems give rise to a set of Pareto-optimal solutions, none of which can be said to be better than any other in all objectives. In any interesting multi-objective optimization problem there exist a number of such solutions, which are of interest to designers and practitioners. Since no solution in the Pareto-optimal set is better than any other, it is also a goal of multi-objective optimization to find as many Pareto-optimal solutions as possible. Unlike most classical search and optimization methods, genetic algorithms work with a population of solutions and are therefore likely candidates for finding multiple Pareto-optimal solutions simultaneously. Deb and Datta (2012) demonstrated the efficiency of evolutionary multi-objective optimization for optimizing the machining parameters in turning operations with two case studies, one having two objectives and the other having three objective functions. The flowchart of the Non-dominated Sorting Genetic Algorithm II (NSGA-II) is shown in Figure 4.10. According to this flowchart, the algorithm starts with a random initial generation. First, the parents and offspring are combined to form a single population. When the objective functions of all the strings in a generation have been calculated, the solutions are classified into non-dominated fronts. The crowded tournament selection operator compares two solutions and returns the winner of the tournament according to two attributes: (1) the non-dominated front in the population and (2) the local crowding distance.

The first condition ensures that the chosen solution lies on a better non-dominated front, and the second condition ensures a better spread among the solutions.

Figure 4.10 Flowchart of the NSGA-II algorithm: initialize the population (gen = 0) → identify the non-dominated individuals and classify the population into fronts → crowded tournament selection → crossover and mutation → form the new population → increment the generation counter and repeat until the generation limit is reached.
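A minimal sketch of the crowding-distance computation used by the crowded tournament selection described above is given below, under the assumption that all objectives are minimized; it is an illustration, not the implementation used in this work.

```python
import numpy as np

def crowding_distance(front):
    """Crowding distance for the solutions of one non-dominated front.

    front: array of shape (n, m) with the m objective values of n solutions.
    Boundary solutions of each objective receive an infinite distance, so they
    are always preferred, which preserves the spread of the front.
    """
    front = np.asarray(front, dtype=float)
    n, m = front.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(front[:, k])
        f_min, f_max = front[order[0], k], front[order[-1], k]
        dist[order[0]] = dist[order[-1]] = np.inf
        if f_max > f_min:
            for idx in range(1, n - 1):
                i = order[idx]
                dist[i] += (front[order[idx + 1], k] - front[order[idx - 1], k]) / (f_max - f_min)
    return dist

# Hypothetical two-objective front (surface roughness, flank wear)
print(crowding_distance([(0.22, 0.136), (0.25, 0.110), (0.33, 0.103), (0.55, 0.090)]))
```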