J Intell Manuf DOI 10.1007/s10845-013-0828-9
A multi-performance prediction model based on ANFIS and new modified-GA for machining processes Arezoo Sarkheyli · Azlan Mohd Zain · Safian Sharif
Received: 4 February 2013 / Accepted: 14 August 2013 © Springer Science+Business Media New York 2013
Abstract In recent years, there has been increasing interest in comprehensive modeling techniques for predicting machining performances in different processes. Accordingly, this paper proposes a new hybrid technique based on the adaptive network-based fuzzy inference system (ANFIS) and a modified genetic algorithm (MGA) to model the relationship between machining parameters and multiple performances. The MGA, which employs a new type of population, is applied as the training algorithm to optimize the modeling parameters, finding appropriate fuzzy rules and membership functions for the model. In the proposed MGA, a randomly generated list of parameters is treated as the single solution, and a collection of experiences in optimizing that solution is utilized as the population. To show the effectiveness of the presented model, it is applied to the wire electrical discharge machining (WEDM) process for predicting material removal rate and surface roughness. The prediction results are compared with the most common prediction modeling techniques, based on ANN and ANFIS–GA. The statistical evaluation reveals that ANFIS–MGA considerably enhances both the accuracy of the optimal solution and the convergence rate.

Keywords Prediction model · Adaptive network-based fuzzy inference systems (ANFIS) · Modified genetic algorithm (MGA) · Population · Machining process
A. Sarkheyli (B) · A. M. Zain
Soft Computing Research Group, Faculty of Computing, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia
e-mail: [email protected]

S. Sharif
Department of Manufacturing and Industrial Engineering, Faculty of Mechanical Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor, Malaysia
Introduction

Comprehensive qualitative and quantitative study of machining processes, and the development of effective, highly accurate prediction models for machining performances, are important for understanding the processes, parametric optimization, process simulation, parametric analysis, and verification of experimental results (Rao et al. 2009). WEDM, a non-traditional material removal process broadly applied to shape work-pieces for the aerospace, nuclear, and automotive industries (Ho et al. 2004), particularly needs an accurate prediction model. The process, which is effectively employed to machine difficult-to-machine materials (Abbas et al. 2012), has therefore been considered by many researchers seeking an appropriate modeling technique. Material removal rate, cutting velocity, and surface roughness, which measure the production efficiency and quality of WEDM, are important output performances of the process (Gauri and Chakraborty 2010). WEDM performance is significantly influenced by parameters such as servo feed setting, pulse on-time, pulse off-time, peak current, and wire tension. Because the parameter values vary and the WEDM process is complex, researchers are concerned with discovering the best relationship between the output performances and the machining parameters by modeling WEDM with suitable mathematical techniques (Rao 2011).
Previous works on prediction models

In recent years, different modeling techniques for predicting machining performances have been proposed for various processes, considering different machining input parameters and output performances.
Although different theoretical models have been proposed as prediction techniques for determining machining performances, such as regression models (Kovac et al. 2012) and finite element analysis and thermal-based models (Salonitis et al. 2009), the artificial neural network (ANN) is the most common computational technique applied for modeling by previous researchers, e.g. Zain et al. (2010) and Markopoulos et al. (2008). Neural network models are employed as predictors in various physical systems with nonlinear input–output relationships (Chryssolouris et al. 1990; Luong and Spedding 1995; Gologlu and Arslan 2009). On the other hand, fuzzy logic (FL), whose rules come from empirical knowledge (Brinksmeier et al. 1998), can capture input–output and in-process parameter relationships (Mukherjee and Ray 2006; Chen et al. 2012), representing human reasoning and solving problems in the absence of an exact mathematical model (Nayak et al. 2004; Kovac et al. 2012). The integration of ANN and FL, the adaptive network-based fuzzy inference system (ANFIS), is considered an effective modeling technique for predicting machining performances. ANFIS is a kind of feed-forward neural network with supervised learning ability that can generate fuzzy rules (Sharma et al. 2008). Ho et al. (2009) observed that ANFIS is capable of generating input–output mappings based on expert knowledge (fuzzy if–then rules) and is an effective model for complex machining processes. In recent years, ANFIS has been widely employed to generate non-linear models of machining processes; Cus et al. (2009), Mukherjee and Ray (2006), and Dong and Wang (2011) have indicated that ANFIS is an effective way to model them. Although ANFIS is a robust modeling technique in various applications, it needs an effective training algorithm to work successfully (Shoorehdeli et al. 2006; Cpalka et al. 2008). Different algorithms can be employed to obtain an optimal set of rules, an important part of ANFIS network training. Some efforts have been made to find optimal values for the modeling parameters (premise and consequent parameters) in order to reduce the training error and increase the modeling accuracy, such as Li et al. (2008). GA is an optimization technique effectively employed for machining processes (Yusup et al. 2012), and it has been used considerably by previous researchers to determine optimal values of the premise and consequent parameters in the model (Lee and Lin 2004). This optimization is done by directly maximizing the training accuracy (Jiang et al. 2012). Krishna et al. (2009) and Wang et al. (2003) employed GA as a training algorithm to increase the accuracy of ANFIS and reduce the error through training and testing of the network. Likewise, Ho et al. (2009) utilized the same hybrid system, ANFIS–GA, for predicting machining performances in a milling process.
For the WEDM process, most previous studies on modeling with non-conventional techniques are based on ANN. Chen et al. (2010) and Krishnan and Samuel (2012) employed ANN to model the process. Yang et al. (2011) used ANN together with the Taguchi method to improve model accuracy. Tzeng et al. (2011) and Ozturk and Ozturk (2004) applied GA as the training algorithm of ANN to improve training and find the optimal ANN model for the WEDM and EDM processes, respectively. Yan and Fang (2007) proposed a technique based on GA and FL for modeling the process. In addition, some attempts were made to employ ANFIS in modeling the process, such as Çaydaş et al. (2009), using least-squares estimation and gradient-descent training. In this paper, a hybrid modeling technique based on ANFIS and a new modified GA is employed to model material removal rate and surface roughness in the WEDM process. In the proposed technique, MGA is employed as the training algorithm of ANFIS to estimate the optimal modeling parameters and find the best ANFIS model.
Prediction model based on ANFIS

ANFIS is a hybrid technique based on ANN and FL, which integrates the learning ability of ANN with a collection of fuzzy if–then rules and suitable membership functions to map inputs to outputs with a high degree of accuracy (Jang 1993; Mascioli et al. 1997). The basic architecture of ANFIS consists of five layers (Jeon and Rahman 2008), shown in Fig. 1. The layers, nodes, and functions for a network with two inputs and one output are summarized below. The symbol Oi,j in the figure denotes the output of the jth node in the ith layer.

Layer 1 The number of membership functions determines the number of nodes in the first layer (Jang 1993). The output of each node in this layer is a membership degree:

O1,i = μi(x),   (1)
where i = {1, 2, …, n} and n is the number of membership functions. For the Gaussian membership function, the output is specified by Eq. (2):

μi(x) = exp(−(x − ci)² / (2ai²)),   (2)

where ai and ci are the premise parameters. Changing the values of the premise parameters changes the shape of the Gaussian functions accordingly (Jang 1993).

Layer 2 The output of each node in this layer is obtained by multiplying its incoming signals:

O2,i = wi = μ1(x1) × μ3(x2),   (3)

where i = {1, 2, …, 4}.
Fig. 1 Basic ANFIS architecture (Ishibuchi 1999)
Layer 3 This layer calculates the ratio of the ith rule's firing strength to the sum of all rules' firing strengths (it is therefore called the normalization layer). Any T-norm operator that generalizes the AND operator can be employed in this layer. The outputs are:

O3,i = w̄i = wi / Σ_{j=1}^{4} wj,   (4)

where i = {1, 2, …, 4}.

Layer 4 The output of each node in this layer is calculated from the normalized firing strengths by Eq. (5) (Jang 1993). For the basic network in Fig. 1, the rules have the format:

Ri: if x1 is α and x2 is β then fi = pi x1 + qi x2 + ri

Then,

O4,i = w̄i fi = w̄i (pi x1 + qi x2 + ri),   (5)

where i = {1, 2, …, 4}, pi, qi, ri are the consequent parameters, and α, β are linguistic terms.

Layer 5 The output of this layer is the summation of all incoming signals:

O5,i = Σi w̄i fi = (Σi wi fi) / (Σi wi),   (6)

where i = {1, 2, …, 4}.

As is clear from the basic ANFIS architecture, the nodes in layers 1 and 4 are adaptive while the others are fixed. These nodes employ the modeling parameters (premise and consequent parameters), which need to be trained on the training dataset. The accuracy of the prediction network is estimated from the experimental data with which it is trained (Chryssolouris et al. 1996). In this paper, the mean squared error (MSE) is used for the accuracy estimation. The values of the premise and consequent parameters are found by ANFIS training so as to minimize the MSE, given by the following equation (Ho et al. 2009):

MSE = (1/α) Σ_{i=1}^{α} (di − yi)²,   (7)

where α denotes the number of training data sets, di is the real output, and yi is the predicted output.
Training algorithm in ANFIS

A robust training algorithm can enhance the accuracy of ANFIS by estimating appropriate fuzzy rules and membership functions for the model. The proposed prediction model is built with a training algorithm through the following major steps:

Step 1 Normalize the input parameters. The standard min–max normalization (Eq. 8) is used in this paper:

xi′ = (xi − min(xi)) / (max(xi) − min(xi)),   (8)

where xi′ is the normalized value of the ith input value xi, and min() and max() are the minimum and maximum values of xi in the dataset.

Step 2 Initialize the parameters of ANFIS. The numbers of input and output parameters for the network, and the number of linguistic terms for the inputs, are determined in this step.

Step 3 Choose a membership function for the ANFIS model. The Gaussian membership function (Eq. 2), an effective membership function for modeling machining processes (Maji et al. 2010), is employed in this paper.

Step 4 Apply an optimization algorithm to train and optimize the modeling parameters of ANFIS. During the optimization, the training data set is employed to find the fitness values of the predicted modeling parameters.

Step 5 Employ the testing data set to predict the output performances of the ANFIS with the optimal parameters obtained in Step 4.
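Step 1's min–max scaling (Eq. 8) can be sketched as follows; the sample values are the three spark-gap levels used later in the experimental section:

```python
def min_max_normalize(values):
    """Scale a list of raw parameter values into [0, 1] via Eq. (8)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Spark-gap levels of 30, 50, and 80 um: the extremes map to 0 and 1
out = min_max_normalize([30, 50, 80])
```

After normalization, every input parameter lies in [0, 1] regardless of its physical units, which is what allows one set of membership-function bounds to serve all five inputs.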
Fig. 2 New solution creation process. a Current solution, b experience chromosomes, and c the new solution
Modified GA as training algorithm

The genetic algorithm is a stochastic optimization technique that starts with a collection of random solutions (chromosomes). These chromosomes evolve over consecutive iterations and are measured by a fitness function; as the evolution proceeds, the GA search finally converges to an optimal chromosome (Musharavati and Hamouda 2011). The GA result rests on three basic functions, selection, crossover, and mutation, which produce a new collection of chromosomes. Using these functions, GA generates a new population from an initial random population, and the process continues with the current population as the initial one until a predefined termination criterion is satisfied (Mukherjee and Ray 2006; Zain et al. 2010).

The modified genetic algorithm is a modified GA intended to enhance the accuracy of the prediction results with a high convergence rate. MGA starts with a single random solution and a collection of optimization experiences as the population, instead of the collection of solutions used in GA. During the optimization process, MGA tries to improve the current solution and replace it with the new improved one. A population of experiences (optimization experiences) is created initially and updated during the process. Because of this new population structure, the fitness function and the new-solution creation method in MGA differ from those in GA. The structure of the optimization experiences and the functions are discussed in detail below.

The MGA optimization technique is developed in this paper as a training algorithm to optimize the modeling parameters of the ANFIS. It is utilized to minimize the following objective function:

min G(s1, s2, …, sβ),   (9)

where G is the MSE value of the performances predicted by ANFIS when it is configured with the modeling parameters s1, s2, …, sβ, and β is the number of modeling parameters to be optimized. There are also some bound constraints on the parameters, which the optimization algorithm respects when determining the input values.
A solution in MGA is defined as a list of the adaptive modeling parameters to be optimized; there is only one solution in MGA at a time. A chromosome in the population represents one optimization experience. It has the same size as the solution (β) and records the type of improvement of each parameter in that experience. Three actions are possible for each gene: no change, increment, and decrement. A value of 0 in a gene indicates that no change is required for the related parameter in that experience, while 1 and −1 indicate that the parameter value is to be increased or decreased, respectively. The size of the increment or decrement is determined by the improvement rate.

Figure 2 shows an example of the new-solution creation process, which combines the current solution with one experience chromosome. Based on the experience in this example, only three parameter values are changed by the improvement rate to create the new solution.

The proposed MGA is summarized in the following steps:

Step 1. Parameter setting MGA has some key factors that significantly affect the algorithm's performance. These parameters, namely the population size, maximum generation size, and the crossover, mutation, and improvement rates, are specified in this step.

Step 2. Initialization An initial solution and a population of experiences in optimizing it are created. First, an initial solution is selected randomly. Then a collection of mutations is randomly applied to the solution to generate different optimization experiences. The experiences that successfully optimize the solution are selected in the evaluation step to create the initial population.

Step 3. Evaluation The experiences are evaluated separately to build the population. First, the optimality of the new solution created by each experience is determined.
An optimization happens if the condition in Eq. (10) is satisfied:

MSE − MSE′ > 0,   (10)

where MSE and MSE′ are the mean squared errors of the initial and new solutions, respectively. Each optimization of the
initial solution is considered as a new optimization experience (chromosome) in the initial population. If the optimization condition is not satisfied, the related experience is removed from the population. Next, the fitness value of each experience, which shows its effectiveness, is determined by the number of times it has been used for a successful optimization; the initial fitness value of an experience is set to 1. The chromosomes with the highest and lowest fitness values indicate the best and worst experiences, respectively.

Step 4. Sort The experiences in the population are sorted in descending order of fitness, so the most common and effective experiences are placed at the top.

Step 5. Selection Two chromosomes with high fitness values are randomly selected from the population as the parents of the next generation.

Step 6. Crossover Based on the crossover rate, a new offspring (experience) is generated from the two chosen parents.

Step 7. Mutation Genetic diversity is maintained by mutating the new offspring; the mutation rate determines the probability of mutation.

Step 8. Evaluation Based on the offspring created in the previous steps and the improvement rate, a new solution and a new experience are created. If the optimality condition of Eq. (10) is satisfied for the new solution, the new experience, with a fitness value of 1, replaces the worst experience in the population and the fitness values of the parents are increased; the new solution also replaces the current solution. Otherwise, the fitness values of the parents are decreased.

Step 9. Repeat If the stopping criterion is not satisfied, go to Step 4; otherwise return the single solution as the optimal set of modeling parameters.
The stopping criterion in this algorithm can be defined by a fixed number of generations, a fixed amount of CPU time, or a fixed number of consecutive iterations without any further improvement in the objective function value.
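One MGA generation (roughly Steps 5–8) can be sketched as below. This is a minimal sketch, not the authors' implementation: the quadratic toy objective stands in for the ANFIS training MSE of Eq. (7), crossover and mutation are applied unconditionally (the paper gates them by the crossover and mutation rates and uses two-point crossover), and the fitness bookkeeping of the population is omitted:

```python
import random

def apply_experience(solution, experience, rate):
    """Create a new solution from the current one and an experience
    chromosome of actions in {-1, 0, +1} (Fig. 2): each parameter is
    left unchanged, decreased, or increased by the improvement rate."""
    return [s + action * rate for s, action in zip(solution, experience)]

def mga_step(solution, population, objective, rate=0.1):
    """One generation: select two parent experiences, cross them over
    (one-point, for brevity), mutate one gene, then accept the offspring's
    solution only if Eq. (10) holds, i.e. the objective decreases."""
    p1, p2 = random.sample(population, 2)
    cut = random.randrange(1, len(solution))
    child = p1[:cut] + p2[cut:]                 # crossover
    i = random.randrange(len(child))
    child[i] = random.choice([-1, 0, 1])        # mutation
    candidate = apply_experience(solution, child, rate)
    if objective(solution) - objective(candidate) > 0:   # Eq. (10)
        return candidate, child                 # improvement: keep both
    return solution, None                       # reject the experience

# Toy objective and starting point (hypothetical values)
objective = lambda s: sum(x ** 2 for x in s)
solution = [0.4, -0.3, 0.2]
population = [[random.choice([-1, 0, 1]) for _ in solution] for _ in range(10)]
for _ in range(200):
    solution, _ = mga_step(solution, population, objective)
```

Because a candidate is accepted only when Eq. (10) holds, the objective value of the single maintained solution never increases, which is the property that distinguishes MGA's single-solution search from GA's population of solutions.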
Hybridization of ANFIS and MGA

In this section, a prediction model based on ANFIS and MGA for machining processes is designed in three steps. First, the model architecture is built from the predefined machining parameters and performances. Then, the training process trains the model with the proposed MGA to find the optimal modeling parameters. Finally, the testing process measures the performance of the prediction model.

Designing the model

The architecture of the model is designed by considering an experimental case and dataset. The basic architecture is created from the five layers discussed in the "Prediction model based on ANFIS" section. The numbers of input and output nodes are specified according to the numbers of machining parameters and performances, respectively.

Training the model

As discussed in the "Training algorithm in ANFIS" section, ANFIS training is the process of optimizing the modeling (premise and consequent) parameters in order to find the best ANFIS model, and MGA can effectively be utilized for this optimization. The number of modeling parameters specifies the solution and experience size in MGA. In this optimization process, a single solution containing a combination of parameter values is randomly created within the predefined parameter bounds, and an empty population of optimization experiences is formed according to the population size. Then a process of experience creation is started to organize an initial population: in each step, an attempt is made to improve the current solution, and for each successful improvement one optimization experience (with the structure described in the "Modified GA as training algorithm" section) is appended to the population. The fitness function measures each created solution to distinguish improved solutions from the current one. Fitness functions vary with the problem and purpose; for a machining process, the fitness of a solution is typically determined by Eq. (10), where the MSE relates the experimental and predicted machining performances. The experimental performances are available in the training dataset, while the predicted values are calculated from the current and new solutions' parameter values and the ANFIS model.

Evaluating the model

In this step, the trained model is evaluated using a testing dataset. The trained model now has its fixed optimal modeling parameters, which are not changed during testing. The machining parameter values are taken as the model inputs, and comparing the performances predicted by the proposed model with the experimental performances indicates the model's accuracy.

In the next section, the experimental dataset design and the three main steps of prediction model design (model design, training, and testing) are discussed for a test case on the WEDM process. The key MGA factors are defined to optimize the modeling parameters, and an optimal ANFIS model for the machining process is then proposed.
Experimental design

Dataset design

Table 1 Machining parameters considered for experiment design (Krishnan and Samuel 2012)

Parameter                  Value
Material                   AISI D3 (DIN X210Cr12)
Material Brinell hardness  212–248 HB
Work-piece diameter        10 mm
Depth of cut               0.1 mm
Pulse off-time             30, 35, and 42 μs
Spark gap                  30, 50, and 80 μm
Servo feed                 3, 5, and 8 (level)
Rotational speed           30, 70, and 100 rpm
Flushing pressure          1.263, 1.893, and 3.267 bar
The experimental training and testing data sets in this paper come from Krishnan and Samuel (2012) and were produced on an ELECTRONICA ECOCUT CNC wire EDM machine (Table 2). In this experiment, a difficult-to-machine material, AISI D3 (DIN X210Cr12), was machined; the features of this material are given in Table 1. Five machining parameters (each with three levels), namely pulse off-time (PT), spark gap (SG), servo feed (SF), rotational speed (RS), and flushing pressure (FP), and two
Table 2 Experimental data set considered for experiment design (collected by Krishnan and Samuel 2012)

Experiment no.  PT (μs)  SG (μm)  SF (level)  RS (rpm)  FP (bar)  MR (mm³/min)  SR (μm)
 1              30       30       3           30        3.267     1.24          2.396
 2              30       50       5           70        3.267     2.21          3.756
 3              30       80       8           100       3.267     2.60          4.134
 4              34       30       5           100       3.267     1.73          3.172
 5              34       50       8           30        3.267     3.78          5.009
 6              34       80       3           70        3.267     1.45          2.827
 7              42       30       8           70        3.267     1.50          2.899
 8              42       50       3           100       3.267     0.95          2.116
 9              42       80       5           30        3.267     1.68          3.133
10              30       30       3           30        1.893     1.16          2.311
11              30       50       5           70        1.893     2.20          3.539
12              30       80       8           100       1.893     2.47          4.003
13              34       30       5           100       1.893     1.40          2.694
14              34       50       8           30        1.893     3.72          4.897
15              34       80       3           70        1.893     1.37          2.502
16              42       30       8           70        1.893     1.46          2.895
17              42       50       3           100       1.893     0.82          2.069
18              42       80       5           30        1.893     1.65          3.124
19              30       30       3           30        1.263     1.10          2.264
20              30       50       5           70        1.263     1.48          2.860
21              30       80       8           100       1.263     2.22          3.794
22              34       30       5           100       1.263     1.34          2.506
23              34       50       8           30        1.263     3.20          4.576
24              34       80       3           70        1.263     1.35          2.520
25              42       30       8           70        1.263     1.37          2.588
26              42       50       3           100       1.263     0.78          2.048
27              42       80       5           30        1.263     1.54          2.987
28              30       50       8           70        3.267     2.80          4.281
29              34       80       5           50        1.893     1.44          2.744
30              42       50       3           50        1.263     0.98          2.149
machining performances, namely material removal rate (MR) and surface roughness (SR), were considered for the experiment. Because of the high operating and maintenance cost of WEDM, a minimum number of experiments has to be planned for training and testing. In this case, Krishnan and Samuel (2012) measured thirty output performances at various input parameter levels, with the minimum number of experiments planned according to a Taguchi fractional factorial design. Following Zhang et al. (1998), the training and testing dataset sizes are set to 90 and 10 % of the total, respectively. Thus, the first 27 experimental data are used as the training data set, and the last three experiments as the testing data set.
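The 90/10 split of Zhang et al. (1998) applied to the 30 experiments amounts to a simple slice; `dataset` below is a placeholder for the rows of Table 2:

```python
dataset = list(range(1, 31))          # stand-in for the 30 experiment rows
n_train = round(0.9 * len(dataset))   # 90 % for training, per Zhang et al. (1998)
train, test = dataset[:n_train], dataset[n_train:]
# 27 training rows, 3 testing rows (experiments 28-30)
```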
ANFIS design

The ANFIS architecture factors considered in this experiment are shown in Table 3. In this study, the number of linguistic terms for each parameter determines the number of membership functions; two linguistic terms are considered for each input parameter. Since the Gaussian membership function (see Eq. 2) is employed in this work, m = 2 × 5 = 10, where m is the number of membership functions, so there are 10 nodes in the first layer of ANFIS, and 10 sets of premise parameters {ai, ci} are considered for this network, where i = 1, 2, …, 10. Parameter ci addresses the center of the membership function and is the most important parameter to be considered in the training algorithm, while ai is the half-width of the function. The number of nodes in the second layer equals the number of rules defined over the combinations of input nodes (the maximum number of nodes in this experiment is 2 × 2 × 2 × 2 × 2 = 32), and the outputs of the nodes in this layer are calculated by multiplying the input values. In addition, as this network has two performances as outputs, each node in layer 4 has two outputs (w̄i fi¹ and w̄i fi²). The number of nodes in this layer equals the number of nodes in the previous layer, and the outputs are calculated by:
Table 3 ANFIS factors considered for experiment design

Parameter                                Value
Number of input nodes                    5
Layer 1: Number of membership functions  10
Layer 2: Number of nodes                 32
Layer 3: Number of nodes                 32
Layer 4: Number of rules                 32
Layer 5: Number of output nodes          2
Membership function                      Gaussian
Table 4 MGA algorithm factors considered for experiment design

Factor              Value
Population size     1,000
Generation size     2,000
Selection function  Roulette wheel
Crossover function  Two point
Mutation function   Uniform
Mutation rate       0.1
Crossover rate      0.9
Improvement rate    0.8
Table 5 Range of modeling parameters considered for experiment design

Parameter                                                                      Range
Premise parameters
  axi,1, axi,2 (i = 1…5): s1, s3, s5, s7, s9, s11, s13, s15, s17, s19          0…0.6
  cxi,1 (i = 1,…,5): s2, s6, s10, s14, s18                                     0…0.4
  cxi,2 (i = 1,…,5): s4, s8, s12, s16, s20                                     0.5…1
Consequent parameters
  pi¹, qi¹, ri¹, gi¹, ti¹, hi¹, pi², qi², ri², gi², ti², hi² (i = 1…32):
  s21…s404                                                                     −2…2
fi¹ = pi¹ PT + qi¹ SG + ri¹ SF + gi¹ RS + ti¹ FP + hi¹
fi² = pi² PT + qi² SG + ri² SF + gi² RS + ti² FP + hi²

where i = 1, 2, …, 32 and pi¹, qi¹, ri¹, gi¹, ti¹, hi¹, pi², qi², ri², gi², ti², hi² are the consequent parameters. The number of nodes in the fifth layer equals the number of machining performances; since there are two performances (surface roughness and material removal rate) in this experiment, the number of nodes in this layer is two.

MGA design
The factors of the MGA algorithm for training the ANFIS network in this experiment are given in Table 4. In this algorithm, a set {s1, s2, …, s404} containing 2 × 2 × 5 = 20 premise parameters and 32 × 12 = 384 consequent parameters is optimized through the training process to find the best ANFIS model. The ranges of the parameters are treated as problem constraints; Table 5 gives the range of each parameter considered in this study.
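Assembling the bounds of the MGA solution vector from Table 5 can be sketched as follows. The interleaved ordering of premise parameters (a and c alternating per input, matching Table 5's odd/even s-indices) is an interpretation of the table:

```python
def build_bounds(n_inputs=5, n_rules=32, n_outputs=2):
    """Assemble the (lower, upper) bound for each element of the MGA
    solution vector: premise parameters a and c for the two membership
    functions of each input, then the consequent parameters (Table 5)."""
    bounds = []
    for _ in range(n_inputs):
        bounds.append((0.0, 0.6))   # a_i1: width of MF 1   (s1, s5, ...)
        bounds.append((0.0, 0.4))   # c_i1: center of MF 1  (s2, s6, ...)
        bounds.append((0.0, 0.6))   # a_i2: width of MF 2   (s3, s7, ...)
        bounds.append((0.5, 1.0))   # c_i2: center of MF 2  (s4, s8, ...)
    # 6 consequent coefficients (p, q, r, g, t, h) per rule and per output
    bounds += [(-2.0, 2.0)] * (n_rules * 6 * n_outputs)
    return bounds

bounds = build_bounds()
# 20 premise + 384 consequent = 404 parameters in total
```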
Table 6 Optimal premise parameters after training

Input variable   Optimal premise parameters
                 a                              c
PT               S1 = 0.5866, S3 = 0.5068      S2 = 0.2796, S4 = 0.6191
SG               S5 = 0.5433, S7 = 0.5251      S6 = 0.2771, S8 = 0.5014
SF               S9 = 0.1600, S11 = 0.1950     S10 = 0, S12 = 0.8156
RS               S13 = 0.1600, S15 = 0.1937    S14 = 0.1541, S16 = 1.0000
FP               S17 = 0.4231, S19 = 0.5273    S18 = 0.3434, S20 = 0.7112
The modeling results and discussion

The algorithms in this paper were implemented in Matlab R2009a. The modeling results are presented in two parts. First, the optimized modeling parameters obtained from the training process are reported, indicating the best fuzzy rules and membership functions for the ANFIS model in this experiment. Then, the prediction results of the ANFIS model on the testing dataset are measured to indicate the effectiveness of the proposed modeling technique.

Training result

The MGA optimization technique is utilized to find optimal values of the premise and consequent modeling parameters. Table 6 lists the optimal modeling parameters estimated by training. These parameters are used to generate appropriate fuzzy rules and membership functions, and finally to enhance the accuracy of ANFIS. Figures 3 and 4 show the initial and final membership functions for all the machining parameters. With the optimal values of the consequent parameters, the ANFIS fuzzy rules can be fixed for testing the network; Table 7 shows the 32 rules produced by the different combinations of linguistic terms after training.

Prediction modeling result

With the optimal modeling parameters found above, an appropriate ANFIS model is determined for this WEDM experiment. In this section, the experimental results of the prediction model for WEDM based on ANFIS–MGA are measured and compared with the ANFIS–GA and the ANN proposed by Krishnan and Samuel (2012). The generation and population size of the training algorithm
Fig. 3 Initial Gaussian membership functions of parameters PT, SG, SF, RS, FP with input range [0,1]
in ANFIS–GA are 2000. The selection, mutation, and crossover functions in the GA are the same as the MGA functions employed in the experiment. The ANN model offered by Krishnan and Samuel (2012) is based on a 5-14-14-2 architecture, with "traingdx" as the training algorithm. Based on the results reported by Krishnan and Samuel (2012), this ANN model outperforms an ANFIS model using a hybrid training algorithm (back-propagation and least-squares estimation) and the NSGA-II algorithm. Three experimental data from Table 2 are selected as the testing dataset for evaluating the effectiveness of the proposed ANFIS–MGA model. Since this hybrid technique has a stochastic training algorithm, 10 runs are executed to obtain the average results and optimal parameters. Table 8 shows the prediction results for the two machining performances (MR, SR) by ANFIS–MGA, ANFIS–GA, and ANN, and Table 9 reports the percentage prediction errors obtained by the modeling techniques. Based on Table 9, the average percentage error of the values predicted by ANFIS–MGA is 6.11 % for MR and 6.9 % for SR. The accuracy of predicting MR and SR with ANFIS–MGA is thus increased by 16.8 and 37.3 % compared to ANFIS–GA, and by 31 and 43.6 % compared to ANN, respectively. These percentages indicate that ANFIS–MGA clearly outperforms the ANFIS–GA and ANN models in terms of accuracy. Figure 5 shows the MSE values of the predicted MR for generation sizes between 1,000 and 2,000 during training. As the figure shows, increasing the generation size raises the accuracy of the prediction model. The generation size 2,000 gives the lowest MSE value for GA, while the optimal solution in MGA is obtained before iteration 2,000. Moreover, this degree
Fig. 4 Gaussian membership functions of parameters PT, SG, SF, RS, FP after training
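The Gaussian membership functions of Figs. 3 and 4 follow the standard ANFIS form μ(x) = exp(−(x − c)² / (2σ²)), with x a normalized input in [0, 1], c the center and σ the width that training adjusts. A minimal sketch, with placeholder center and width values rather than the paper's trained parameters:

```python
# Standard Gaussian membership function as used in ANFIS premise layers.
# The center/width values below are illustrative placeholders, NOT the
# trained parameters reported in the paper.
import math

def gaussmf(x, c, sigma):
    """Membership grade of x for a Gaussian MF with center c, width sigma."""
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

print(gaussmf(0.5, 0.5, 0.2))            # 1.0 at the center
print(round(gaussmf(0.9, 0.5, 0.2), 4))  # 0.1353
```

Training in ANFIS–MGA shifts each (c, σ) pair, which is what transforms the evenly spaced curves of Fig. 3 into the shapes of Fig. 4.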
Table 7 Optimal consequent parameters after training

Rule no. (parameters) | Rule | PT | SG | SF | RS | FP | Constant term
1 (S21···S32) | f_1^1 | 0.0995 | −1.2076 | −1.2167 | −0.7325 | −0.7944 | −0.7461
              | f_1^2 | 0.3937 | −0.2263 | −1.1522 | 0.3044 | 0.4656 | 1.0789
2 (S33···S44) | f_2^1 | −0.7703 | −0.9054 | −1.1187 | 0.6508 | −1.3391 | 0.9555
              | f_2^2 | −0.7303 | −0.8128 | −1.1795 | 1.6353 | −1.1552 | 0.7418
3 (S45···S56) | f_3^1 | −0.8010 | −0.1854 | 0.0116 | −1.3137 | −0.2491 | 0.2073
              | f_3^2 | −0.3207 | 1.5612 | 1.5900 | 0.1559 | −0.6318 | −0.2626
4 (S57···S68) | f_4^1 | −1.3106 | −0.3158 | −1.0996 | −0.9303 | −0.9057 | 0.0351
              | f_4^2 | −1.2929 | 1.5859 | 1.5799 | 0.5489 | −0.5823 | 1.1985
5 (S69···S80) | f_5^1 | −0.1832 | 0.1659 | 1.1782 | −0.2079 | −0.9763 | 0.5738
              | f_5^2 | 0.4483 | 0.2264 | 0.8331 | 1.6342 | 0.3363 | 0.8848
6 (S81···S92) | f_6^1 | −0.5239 | 0.1261 | −0.0571 | −1.3101 | −0.4028 | 0.7116
              | f_6^2 | 0.1079 | −1.2960 | 1.2862 | 1.6210 | −0.8665 | 1.6101
7 (S93···S102) | f_7^1 | −1.1605 | −1.0011 | 0.1073 | 0.1603 | 0.2861 | −0.1530
               | f_7^2 | −1.0807 | −0.6560 | −0.2657 | 1.4936 | 1.6261 | 0.5326
8 (S103···S114) | f_8^1 | −0.9696 | 1.0946 | −1.1449 | 1.1285 | 1.0675 | 0.3772
                | f_8^2 | −1.1664 | 1.4496 | −1.1454 | −0.3118 | 0.0145 | 0.9411
9 (S115···S126) | f_9^1 | −0.8280 | 0.5041 | −1.2910 | 1.4563 | 0.8397 | 1.5506
                | f_9^2 | 0.1112 | −0.1156 | −1.0763 | −0.5453 | −0.3195 | −0.2981
10 (S127···S138) | f_10^1 | −0.0677 | 0.6163 | −0.4654 | 0.6116 | 0.4257 | −0.4674
                 | f_10^2 | −0.9504 | −1.3209 | −1.0655 | −0.8484 | −0.1880 | 1.2568
11 (S141···S152) | f_11^1 | −0.5788 | −0.4804 | 0.1990 | 1.2607 | −0.0591 | 0.5567
                 | f_11^2 | 1.4057 | 1.0267 | −0.1425 | 1.5388 | 1.0164 | −0.3744
12 (S153···S164) | f_12^1 | 0.4372 | −0.0553 | 0.9997 | −0.8624 | −0.4321 | 1.3276
                 | f_12^2 | −0.6708 | −0.7416 | 1.5521 | 0.8432 | 0.0980 | 0.4614
13 (S165···S176) | f_13^1 | −0.1898 | 0.5470 | 0.7646 | 0.2735 | 0.7936 | −0.8148
                 | f_13^2 | 0.6609 | 1.0149 | 0.6098 | 0.8787 | 0.4839 | 1.0158
14 (S177···S188) | f_14^1 | 0.4410 | −1.3294 | 0.6152 | −1.2277 | −0.1359 | 1.5557
                 | f_14^2 | −1.0778 | 0.2941 | 1.5991 | 1.3843 | −0.2063 | −0.8290
15 (S189···S200) | f_15^1 | −1.3242 | 0.4993 | −0.7227 | 1.1056 | −0.2875 | −0.0637
                 | f_15^2 | −1.1985 | 0.2439 | −0.7225 | 1.0736 | 1.2608 | −1.0960
16 (S201···S212) | f_16^1 | 0.3853 | −1.1689 | −0.6729 | −1.1206 | 1.1246 | 0.9831
                 | f_16^2 | −0.7120 | 0.1483 | 0.3706 | 1.0028 | −0.2589 | −1.2516
17 (S213···S224) | f_17^1 | −0.0843 | −0.3208 | −1.2574 | 1.0425 | 1.1863 | −0.7958
                 | f_17^2 | 1.2228 | 1.2454 | 0.6264 | −1.3473 | 0.3755 | −0.7820
18 (S225···S236) | f_18^1 | 0.1599 | 1.2037 | −0.9611 | −0.9329 | 1.6371 | 0.4933
                 | f_18^2 | −0.9329 | 1.4411 | 0.6769 | −0.4034 | 0.1611 | 0.8259
19 (S237···S248) | f_19^1 | −0.7104 | −0.8714 | 0.8840 | 0.9573 | −0.0131 | −0.1954
                 | f_19^2 | −0.9682 | 0.8033 | 1.6357 | −0.9965 | −0.3618 | −1.1647
20 (S249···S260) | f_20^1 | −0.5850 | −0.9100 | −0.4010 | 0.5213 | 0.4301 | 1.3290
                 | f_20^2 | −0.8897 | 1.4406 | 1.6364 | −1.2036 | 1.0662 | 1.0558
21 (S261···S272) | f_21^1 | 0.5557 | 0.1468 | −1.2775 | −0.8059 | 1.3524 | 0.7064
                 | f_21^2 | 0.9339 | 0.9692 | 0.0276 | 0.8514 | −0.8518 | 0.2371
22 (S273···S284) | f_22^1 | −0.7403 | −0.5662 | 1.6038 | 1.0136 | 0.7495 | 1.0608
                 | f_22^2 | 0.6765 | −0.4651 | −0.1324 | −0.0097 | 1.1862 | 0.5301
23 (S285···S296) | f_23^1 | 0.8630 | 0.1711 | −0.3245 | 0.8602 | −0.7398 | 0.6334
                 | f_23^2 | 0.9091 | −0.1808 | 0.3405 | −1.3094 | −0.5574 | 0.8241
24 (S297···S308) | f_24^1 | 1.1674 | −0.6271 | −0.4104 | −0.4619 | −0.3960 | 1.1187
                 | f_24^2 | 0.3740 | 0.8161 | −1.0723 | 0.7998 | 0.6755 | −0.0334
25 (S309···S320) | f_25^1 | 1.1263 | 0.9436 | −0.6402 | 0.3799 | 1.1330 | −1.0782
                 | f_25^2 | 0.2510 | −0.4075 | 1.3328 | −0.2065 | −0.4477 | −0.1892
26 (S321···S332) | f_26^1 | 0.0203 | 0.9544 | 0.4923 | −1.2032 | −0.4015 | −0.8780
                 | f_26^2 | −0.4073 | 1.5551 | 0.1913 | 1.3804 | 1.4134 | 0.5280
27 (S333···S344) | f_27^1 | −0.2220 | −1.2978 | 0.2876 | −0.2386 | −0.0263 | −1.1543
                 | f_27^2 | 0.5817 | −0.6188 | −0.5147 | 0.6156 | −0.5424 | 0.7181
28 (S345···S356) | f_28^1 | −0.9729 | 1.5158 | −0.4216 | 0.1092 | 0.7874 | −1.1494
                 | f_28^2 | 1.6081 | −0.4600 | 0.1353 | 0.8266 | 1.4859 | −1.0319
29 (S357···S368) | f_29^1 | 0.6498 | −0.0819 | 0.8672 | 0.7204 | −0.0507 | −0.7690
                 | f_29^2 | 0.1221 | −1.2359 | −0.6471 | −0.5855 | −1.0572 | 0.1939
30 (S369···S380) | f_30^1 | 0.2225 | −0.9784 | 0.5435 | −0.9903 | 0.8113 | −0.4818
                 | f_30^2 | 1.1525 | 0.8706 | 0.4560 | 1.5277 | −0.7071 | 0.1386
31 (S381···S392) | f_31^1 | 0.9497 | −0.7317 | 1.4295 | 1.4394 | −0.1693 | 0.5361
                 | f_31^2 | 0.5117 | 0.1728 | 1.2974 | 1.0168 | −1.0787 | 1.4104
32 (S393···S404) | f_32^1 | −0.0089 | 1.6486 | −0.8258 | 0.0521 | −0.1716 | 0.4695
                 | f_32^2 | −0.3307 | −0.2418 | −0.3050 | 1.0559 | 0.9534 | 1.0222
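Each row of Table 7 holds the coefficients of one first-order Sugeno (TSK) consequent, f_r^k = c_PT·PT + c_SG·SG + c_SF·SF + c_RS·RS + c_FP·FP + constant; the model output is the firing-strength-weighted average of these f_r^k values. A hypothetical sketch (not the authors' code) evaluating one consequent, using the f_1^1 coefficients from Table 7:

```python
# Evaluating one first-order Sugeno rule consequent from Table 7.
# Illustrative sketch only; the full ANFIS output additionally weights
# each f_r by its normalized rule firing strength.

def sugeno_consequent(inputs, coeffs, constant):
    """Linear consequent of one fuzzy rule: sum(c_i * x_i) + constant."""
    return sum(c * x for c, x in zip(coeffs, inputs)) + constant

# Coefficients of f_1^1 (rule 1, first output) from Table 7.
coeffs_f11 = [0.0995, -1.2076, -1.2167, -0.7325, -0.7944]
const_f11 = -0.7461

# All five normalized inputs (PT, SG, SF, RS, FP) set to 0.5.
f11 = sugeno_consequent([0.5] * 5, coeffs_f11, const_f11)
print(round(f11, 5))  # -2.67195
```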
Table 8 Prediction results of two performances

Experiment no. | Experiment result (MR, SR) | ANFIS–MGA (MR, SR) | ANFIS–GA (MR, SR) | ANN (Krishnan and Samuel 2012) (MR, SR)
28 | 2.80, 4.281 | 2.644, 3.894 | 2.48, 3.78 | 2.46, 3.734
29 | 1.44, 2.744 | 1.549, 2.979 | 1.36, 2.874 | 1.32, 2.599
30 | 0.98, 2.149 | 0.925, 2.138 | 1.03, 1.791 | 0.92, 2.552
Table 9 Percentage of prediction error of two performances

Experiment no. | ANFIS–MGA (MR, SR) | ANFIS–GA (MR, SR) | ANN (MR, SR)
28 | 5.548, 9.030 | 11.42, 11.70 | 12.14, 12.75
29 | 7.571, 8.629 | 5.55, 4.73 | 8.33, 5.24
30 | 5.588, 0.462 | 5.10, 16.65 | 6.12, 18.80
Average | 6.114, 6.904 | 7.35, 11.02 | 8.86, 12.26
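Table 9 can be re-derived from Table 8 as the absolute relative error, |experimental − predicted| / experimental × 100. A hypothetical sketch (assumed formula, matching the tables up to rounding of the reported predictions):

```python
# Percentage of prediction error, as tabulated in Table 9.
# Table 8 reports rounded predictions, so recomputed values match
# Table 9 only to within small rounding differences.

def pct_error(actual, predicted):
    """Absolute relative error in percent."""
    return abs(actual - predicted) / actual * 100.0

# Experiment 28, MR: experimental value 2.80, ANFIS-MGA prediction 2.644.
e = pct_error(2.80, 2.644)
print(round(e, 3))  # 5.571 (Table 9 reports 5.548)
```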
Fig. 5 MSE value versus generation size in MGA and GA
of accuracy in the prediction result given by MGA is higher than that given by GA. Additionally, an experiment is conducted to compare the coverage rate (CR) of MGA and GA during the training process. CR(MGA) is calculated as the percentage of the solutions in GA that are dominated by at least one solution in MGA (Zitzler and Thiele 1999). Figure 6 illustrates the coverage rate for five different generation sizes in MGA and GA; in each generation, the results of 25 individual runs are used. As the figure shows, MGA achieves a higher coverage rate than GA at all generation sizes, especially the early ones. Therefore, MGA notably outperforms GA with respect to coverage rate. The data is further analyzed for sensitivity to identify the influence of the machining parameters on the performances; the results are shown in Fig. 7. Regarding material removal rate, pulse off-time has the greatest influence. The parameters flushing pressure, servo feed and rotational speed are the next most influential, respectively, while spark gap has the least influence on that performance. Regarding surface roughness, rotational speed has the greatest influence on the process, followed by servo feed, spark gap and pulse off-time, while flushing pressure has the least influence on surface roughness.

Fig. 6 Coverage rate versus generation size in MGA and GA

Fig. 7 Sensitivity analysis for machining performances
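The coverage metric of Zitzler and Thiele (1999) used for CR can be sketched as follows — a hypothetical illustration assuming both objectives (e.g. the prediction errors of MR and SR) are minimized, with toy solution sets rather than the paper's data:

```python
# Coverage metric C(A, B) of Zitzler and Thiele (1999): the fraction of
# solutions in B that are covered (dominated or equaled) by at least one
# solution in A. Minimization of every objective is assumed.

def dominates(a, b):
    """True if a weakly dominates b: <= in all objectives, < in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def coverage(front_a, front_b):
    """C(A, B): fraction of front_b covered by front_a."""
    covered = sum(1 for b in front_b
                  if any(dominates(a, b) or a == b for a in front_a))
    return covered / len(front_b)

# Toy objective vectors (illustrative only, not the experimental fronts).
mga = [(1.0, 2.0), (2.0, 1.0)]
ga = [(1.5, 2.5), (2.5, 1.5), (0.5, 0.5)]
print(coverage(mga, ga))  # two of the three GA points are dominated: 2/3
```

CR(MGA) in the experiment is this coverage computed with the MGA solutions as set A and the GA solutions as set B.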
Conclusion and future works

In this paper, after reviewing the prediction modeling techniques proposed by previous researchers, a hybrid technique based on ANFIS and MGA was proposed to predict surface roughness and material removal rate in the WEDM process. In this study, MGA was employed as the ANFIS training algorithm: as a modified GA, it was used to find the optimal modeling parameters for the membership functions and fuzzy rules in ANFIS. MGA employs a new type of population and fitness function that considerably improves the quality of the initial population, the coverage rate and the effectiveness of the model. The main properties of MGA compared to the basic GA are as follows:
• The initial population in MGA is not a fully random population.
• The fitness values of the parent chromosomes are changed in each generation to reflect their effectiveness in optimizing the solution over different iterations.
These advantages enhance the quality of the initial population and reduce the number of iterations and the percentage of error needed to reach the optimal solution. To evaluate the prediction modeling results, an experimental dataset on the WEDM process was used as the training and testing datasets. ANFIS–MGA was developed to predict material removal rate and surface roughness with high accuracy in the process. The comparison between the proposed ANFIS–MGA, ANFIS–GA and ANN indicated that the proposed ANFIS–MGA is significantly more accurate than the others; MGA also outperforms GA in terms of coverage rate. The sensitivity analysis on the experiment in this paper showed that, among the five machining parameters, pulse off-time and spark gap have the greatest and least influence on material removal rate, respectively. On the other hand, rotational speed and flushing pressure have the greatest and least influence on surface roughness, respectively.

Acknowledgments Special appreciation to the reviewer(s) for their useful advice and comments. The authors gratefully acknowledge the Research Management Centre, UTM and the Ministry of Higher Education Malaysia (MOHE) for financial support through the Fundamental Research Grant Scheme (FRGS) No. R.J130000.7828.4F170.
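The two MGA properties listed above can be illustrated with a minimal, hypothetical sketch — this is an assumption-laden toy, not the authors' implementation: it seeds part of the initial population near a known solution ("experiences") instead of generating it fully at random, with the remainder random for diversity:

```python
# Illustrative sketch (NOT the authors' code) of a partially non-random
# initial population: one seed chromosome, several small perturbations of
# it, and the remainder fully random. Gene values are kept in [0, 1],
# matching the normalized parameter range used in the paper.
import random

def init_population(seed_solution, size, n_genes, perturb=0.1):
    """Seed part of the population near a known solution; rest random."""
    pop = [seed_solution[:]]
    for _ in range(size // 2):  # half of the population: perturbed seeds
        pop.append([min(1.0, max(0.0, g + random.uniform(-perturb, perturb)))
                    for g in seed_solution])
    while len(pop) < size:      # remainder: fully random chromosomes
        pop.append([random.random() for _ in range(n_genes)])
    return pop

random.seed(0)
pop = init_population([0.5] * 5, size=10, n_genes=5)
print(len(pop), all(0.0 <= g <= 1.0 for c in pop for g in c))  # 10 True
```

The second property (re-scoring parent fitness each generation) would then be realized by re-evaluating every surviving chromosome's fitness inside the generation loop rather than caching it.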
References

Abbas, N. M., et al. (2012). Electrical discharge machining (EDM): Practices in Malaysian industries and possible change towards green manufacturing. Procedia Engineering, 41, 1684–1688.
Brinksmeier, E., et al. (1998). Modelling and optimization of grinding processes. Journal of Intelligent Manufacturing, 9, 303–314.
Çaydaş, U., et al. (2009). An adaptive neuro-fuzzy inference system (ANFIS) model for wire-EDM. Expert Systems with Applications, 36(3), 6135–6139.
Chen, D., et al. (2012). Linguistic fuzzy model identification based on PSO with different length of particles. Applied Soft Computing, 12(11), 3390–3400.
Chen, H. C., et al. (2010). Optimization of wire electrical discharge machining for pure tungsten using a neural network integrated simulated annealing approach. Expert Systems with Applications, 37(10), 7147–7153.
Chryssolouris, G., Lee, M., Pierce, J., & Domroese, M. (1990). Use of neural networks for the design of manufacturing systems. Manufacturing Review, 3(3), 187–194.
Chryssolouris, G., Lee, M., & Ramsey, A. (1996). Confidence interval prediction for neural network models. IEEE Transactions on Neural Networks, 7(1), 229–232.
Cpalka, K., et al. (2008). Evolutionary learning of flexible neuro-fuzzy systems. In IEEE international conference on fuzzy systems (pp. 969–975).
Cus, F., et al. (2009). Hybrid ANFIS-ants system based optimisation of turning parameters. Journal of Achievements in Materials and Manufacturing Engineering, 36, 79–86.
Dong, M., & Wang, N. (2011). Adaptive network-based fuzzy inference system with leave-one-out cross-validation approach for prediction of surface roughness. Applied Mathematical Modelling, 35(3), 1024–1035.
Gauri, S. K., & Chakraborty, S. (2010). A study on the performance of some multi-response optimisation methods for WEDM processes. International Journal of Advanced Manufacturing Technology, 49(1–4), 155–166.
Gologlu, C., & Arslan, Y. (2009). Zigzag machining surface roughness modelling using evolutionary approach. Journal of Intelligent Manufacturing, 20(2), 203–210.
Ho, K., et al. (2004). State of the art in wire electrical discharge machining (WEDM). International Journal of Machine Tools and Manufacture, 44(12–13), 1247–1259.
Ho, W. H., et al. (2009). Adaptive network-based fuzzy inference system for prediction of surface roughness in end milling process using hybrid Taguchi-genetic learning algorithm. Expert Systems with Applications, 36(2), 3216–3222.
Ishibuchi, H. (1999). Techniques and applications of neural networks for fuzzy rule approximation. Fuzzy Theory Systems: Techniques and Applications, 4, 1491–1519.
Jang, J. S. R. (1993). ANFIS: Adaptive network based fuzzy inference system. IEEE Transactions on Systems, Man and Cybernetics, 23, 665–685.
Jeon, J. K., & Rahman, M. S. (2008). Fuzzy neural network models for geotechnical problems. Final report, research project FHWA/NC/2006-52, Department of Civil Engineering, North Carolina State University.
Jiang, H. M., et al. (2012). Modeling customer satisfaction for new product development using a PSO-based ANFIS approach. Applied Soft Computing, 12(2), 726–734.
Kovac, P., et al. (2012). Application of fuzzy logic and regression analysis for modeling surface roughness in face milling. Journal of Intelligent Manufacturing, 1–8. doi:10.1007/s10845-012-0623-z.
Krishna, M. R. G., et al. (2009). Development of hybrid model and optimization of surface roughness in electric discharge machining using artificial neural networks and genetic algorithm. Journal of Materials Processing Technology, 209(3), 1512–1520.
Krishnan, S. A., & Samuel, G. L. (2012). Multi-objective optimization of material removal rate and surface roughness in wire electrical discharge turning. International Journal of Advanced Manufacturing Technology. doi:10.1007/s00170-012-4628-8.
Lee, C. H., & Lin, Y. C. (2004). Hybrid learning algorithm for fuzzy neuro systems. IEEE International Conference on Fuzzy Systems, 2, 691–696.
Li, C. S., et al. (2008). T–S fuzzy model identification based on chaos optimization. In Proceedings of the ISNN (pp. 786–795).
Luong, L. H. S., & Spedding, T. A. (1995). Neural-network system for predicting machining behavior. Journal of Materials Processing Technology, 52, 585–591.
Maji, K., et al. (2010). Forward and reverse mappings of electrical discharge machining process using adaptive network-based fuzzy inference system. Expert Systems with Applications, 37, 8566–8574.
Markopoulos, A. P., et al. (2008). Artificial neural network models for the prediction of surface roughness in electrical discharge machining. Journal of Intelligent Manufacturing, 19, 283–292.
Mascioli, F. M., et al. (1997). Constructive algorithm for neuro-fuzzy networks. Proceedings of the Sixth IEEE International Conference on Fuzzy Systems, 1, 459–464.
Musharavati, F., & Hamouda, A. S. M. (2011). Modified genetic algorithms for manufacturing process planning in multiple parts manufacturing lines. Expert Systems with Applications, 38, 10770–10779.
Mukherjee, I., & Ray, P. K. (2006). A review of optimization techniques in metal cutting processes. Computers & Industrial Engineering, 50(1–2), 15–34.
Nayak, P. K., et al. (2004). A neuro-fuzzy computing technique for modeling hydrological time series. Journal of Hydrology, 291(1–2), 52–66.
Ozturk, N., & Ozturk, F. (2004). Hybrid neural network and genetic algorithm based machining feature recognition. Journal of Intelligent Manufacturing, 15(3), 287–298.
Rao, G. K. M., et al. (2009). Development of hybrid model and optimization of surface roughness in electric discharge machining using artificial neural networks and genetic algorithm. Journal of Materials Processing Technology, 209(3), 1512–1520.
Rao, R. V. (2011). Advanced modeling and optimization of manufacturing processes. London: Springer.
Salonitis, K., Stournaras, A., Stavropoulos, P., & Chryssolouris, G. (2009). Thermal modelling of the material removal rate and surface roughness for EDM die-sinking. International Journal of Advanced Manufacturing Technology, 40(3–4), 316–323.
Sharma, V. S., et al. (2008). Cutting tool wear estimation for turning. Journal of Intelligent Manufacturing, 19(1), 99–108.
Shoorehdeli, M. A., et al. (2006). A novel training algorithm in ANFIS structure. American Control Conference, 6, 5059–5064.
Tzeng, C. J., et al. (2011). Optimization of wire electrical discharge machining of pure tungsten using neural network and response surface methodology. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 225(6), 841–852.
Wang, K., et al. (2003). A hybrid intelligent method for modelling the EDM process. International Journal of Machine Tools and Manufacture, 43, 995–999.
Yan, M. T., & Fang, C. C. (2007). Application of genetic algorithm based fuzzy logic control in wire transport system of wire-EDM. Journal of Materials Processing Technology. doi:10.1016/j.jmatprotec.
Yang, R. T., et al. (2011). Optimization of wire electrical discharge machining process parameters for cutting tungsten. The International Journal of Advanced Manufacturing Technology, 60(1–4), 135–147.
Yusup, N., et al. (2012). Evolutionary techniques in optimizing machining parameters: Review and recent applications (2007–2011). Expert Systems with Applications, 39(10), 9909–9927.
Zain, A. M., et al. (2010a). Application of GA to optimize cutting conditions for minimizing surface roughness in end milling machining process. Expert Systems with Applications, 37, 4650–4659.
Zain, A. M., et al. (2010b). Prediction of surface roughness in the end milling machining using artificial neural network. Expert Systems with Applications, 37, 1755–1768.
Zhang, G., et al. (1998). Forecasting with artificial neural networks: The state of the art. International Journal of Forecasting, 14, 35–62.
Zitzler, E., & Thiele, L. (1999). Multiobjective evolutionary algorithms: A comparative case study and the Strength Pareto approach. IEEE Transactions on Evolutionary Computation, 3(4), 257–271.