Journal of Mechanical Science and Technology 29 (11) (2015) 4867-4876
DOI 10.1007/s12206-015-1034-9
www.springerlink.com/content/1738-494x(Print)/1976-3824(Online)

Swarm intelligence based on modified PSO algorithm for the optimization of axial-flow pump impeller

Fuqing Miao(1,2), Hong-Seok Park(2), Cholmin Kim(1) and Seokyoung Ahn(1,*)

(1) School of Mechanical Engineering, Pusan National University, Pusan, 609-735, Korea
(2) School of Mechanical and Automotive Engineering, Ulsan University, Ulsan, 680-749, Korea

*Corresponding author. Tel.: +82 51 510 2471, E-mail: [email protected]

(Manuscript Received February 4, 2015; Revised June 21, 2015; Accepted July 7, 2015)
Abstract

This paper presents a multi-objective optimization of the impeller shape of an axial-flow pump based on the modified particle swarm optimization (MPSO) algorithm. First, an impeller shape was designed and used as a reference in the optimization process. The NPSHr and the efficiency η of the axial-flow pump were then numerically investigated using the commercial software ANSYS, with the design variables being the hub angle βh, the chord angle βc, the cascade solidity of chord σc and the maximum blade thickness H. Using group method of data handling (GMDH)-type neural networks in the commercial software DTREG, polynomial representations of NPSHr and η with respect to the design variables were obtained. A benchmark test was employed to evaluate the performance of the MPSO algorithm in comparison with other particle swarm algorithms, and the MPSO approach was then used for a Pareto-based optimization. Finally, the MPSO optimization result and the CFD simulation result were compared in a re-evaluation process. Using swarm intelligence based on the modified PSO algorithm, a pump with better performance, i.e. higher efficiency and lower NPSHr, could be obtained. The algorithm was successfully applied to the optimization of the axial-flow pump impeller shape design.

Keywords: Swarm intelligence; Modified PSO algorithm (MPSO); Group method of data handling (GMDH); Axial-flow pump impeller
1. Introduction

Swarm intelligence (SI) is based on the collective behavior of decentralized, self-organized systems, natural or artificial, and is employed in work on artificial intelligence. SI systems are typically made up of a population of simple agents interacting locally with one another and with their environment. The inspiration comes from nature, especially biological systems. The agents follow very simple rules, and although there is no centralized control structure dictating how individual agents should behave, local, and to a certain degree random, interactions between such agents lead to the emergence of "intelligent" global behavior. Examples include ant colonies, bacterial growth, animal herding, bird flocking and fish schooling. Particle swarm optimization (PSO) has recently become a frequently used SI technique in many research areas [1-4]. PSO is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. The algorithm is originally attributed to
Kennedy et al. [5-10] and was first intended for simulating social behavior, such as that of a bird flock or fish school [11]. The algorithm was then simplified, and it was observed to be performing optimization. The book by Kennedy and Eberhart describes many philosophical aspects of swarm intelligence, including PSO [12, 13]. An extensive survey of PSO applications was performed by Poli [14, 15]. PSO has also been applied to multi-objective optimization problems [16-19]. Unlike the genetic algorithm (GA), PSO has no crossover and mutation processes, so it is easy to implement and has low computational complexity [20]. The original PSO (OPSO) algorithm shows good performance on some simple benchmark functions. However, it is difficult for the OPSO algorithm to overcome local minima when handling complex or multimodal functions, owing to its poor local search capability, especially in the case of complex multi-peak search problems [21]. Many improvements have been proposed to overcome these disadvantages of the OPSO algorithm. The improved PSO (IPSO) proposed by Wang et al. increases the ability to search for optimal solutions, but in the subsequent evolution process, as the swarm diversity disappears, the optimization is still easily trapped in a local optimum [22]. To solve this kind of optimization problem, an MPSO algorithm inspired by the bacterial foraging algorithm is
proposed in this paper.

An axial-flow pump is a bladed pump with high flow rate and low head. Because of its broad use in agriculture, irrigation and large-scale water projects, many researchers have paid attention to axial-flow pump design [23]. Optimization of the axial-flow pump is a multi-objective optimization problem rather than the single-objective problem it has mostly been treated as so far. In Ref. [24], the NSGA-II algorithm was applied in a multi-objective optimum design process to minimize the loss of total pressure and maximize the minimum pressure in a hydraulic turbine guide vane; for the optimized guide vanes, the loss was reduced and the cavitation performance was improved. Zhang et al. presented a multi-objective shape optimization of a helico-axial multiphase pump impeller based on NSGA-II and an artificial neural network (ANN) [25]. The authors aimed to maximize the pressure rise and the pump efficiency. After the optimization using the NSGA-II multi-objective genetic algorithm, five stages of optimized compression cells were manufactured and tested experimentally; the results showed that the pump pressure rise and the pump efficiency increased simultaneously, indicating that the method is feasible.

Net positive suction head required (NPSHr) and pump efficiency (η) are important objective functions to be optimized simultaneously in the real-world multi-objective optimization problem of axial-flow pump design. These objective functions are either obtained from experiments or computed by time-consuming and costly CFD analyses, which cannot be used directly in an iterative optimization task unless a simple but effective meta-model is constructed over the response surface of the numerical or experimental data [26]. Therefore, in this paper, modelling and optimization of the parameters are performed using GMDH-type neural networks and a modified multi-objective particle swarm optimization in order to minimize the NPSHr and maximize the efficiency of the pump simultaneously.
2. Variables and CFD simulation of the axial-flow pump

2.1 Definition of objective functions

As presented in Sec. 1, both the NPSHr and the efficiency of an axial-flow pump are important objective functions to be optimized simultaneously. The efficiency of an axial-flow pump is defined by

\eta = \frac{P_e}{P}    (1)

where P is the shaft power and Pe is the useful power transferred from the pump to the liquid, given by

P_e = \rho g Q H    (2)

with ρ the liquid density, g the gravitational acceleration, Q the flow rate and H the pump head.

NPSHr is the required net positive suction head, which characterizes the cavitation behaviour of an axial-flow pump. It is the energy in the liquid required to overcome the friction losses from the suction nozzle to the eye of the impeller without causing vaporization [27]. NPSHr varies with the design, the size and the operating conditions [28]. As NPSHr increases, cavitation becomes more likely, which leads to reduction or stoppage of the fluid flow and can damage the pump. The value of NPSHr can be calculated with the following formula:

\mathrm{NPSHr} = \lambda \frac{w_1^2}{2g} + \frac{v_1^2}{2g}    (3)

where w1 is the relative velocity of the fluid, v1 is the absolute velocity of the fluid, λ is the cavitation coefficient and wmax is the maximum relative velocity [29].
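As a rough illustration using the operating point listed in Table 2 (ρ = 998 kg/m³, Q = 2.61 m³/s, H = 2.31 m, g ≈ 9.81 m/s²), Eq. (2) gives P_e = ρgQH ≈ 998 × 9.81 × 2.61 × 2.31 ≈ 59 kW; the efficiency then follows from Eq. (1) once the shaft power P is known.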
2.2 Definition of design variables

The design variables in this paper are the hub angle βh, the chord angle βc, the cascade solidity of chord σc (σc = l/t) and the maximum blade thickness H. The selected design variables βh, βc, σc and H determine the geometry of the blade and significantly influence the performance of the pump. For example, if there is a large difference between the chord angle and the hub angle, the blade warpage increases and causes strong working noise, which ultimately leads to pump power loss and narrows the high-efficiency zone of the pump. In the impeller design process, if σc is increased, the pressure difference between the two sides of the blade increases; the pump efficiency also increases, but so does NPSHr. Therefore, σc should be decided through a compromise between efficiency and NPSHr. Besides the four design variables selected in this paper, other variables exist, for example the blade diameter, the hub diameter and the blade number; however, these are considered to have little influence on the targets (NPSHr and efficiency) and were kept constant. Two sections can be defined on the blade, one at the hub and another at the shroud, as shown in Fig. 1, so there are four design variables: βh, βc, σc and H. Various designs can be generated and evaluated in the CAE software ANSYS Fluent by changing the geometrical independent parameters shown in Table 1. In this way, 81 different CFD analyses were performed. From these results, meta-models can be constructed using GMDH-type neural networks in the commercial software DTREG, which are then used in the multi-objective Pareto-based design of the axial-flow pump with the modified PSO method.

2.3 Flow analysis

Because the fluid flow is incompressible, the equations of continuity and balance of momentum are given as follows (Eqs. (4) and (5)).
Table 1. Design variables and their ranges.

Design variable | Range from | Range to | Selected values
βh              | 36°        | 54°      | 36°, 45°, 54°
βc              | 21°        | 25°      | 21°, 23°, 25°
σc              | 0.75       | 0.85     | 0.75, 0.80, 0.85
H               | 7 mm       | 11 mm    | 7 mm, 9 mm, 11 mm

Table 2. Operating conditions in the simulation.

Parameter         | Value
Number of blades  | 4
Fluid temperature | 20 °C
Liquid density    | 998 kg/m³
Rotation speed    | 530 rpm
Mass flow rate    | 2.61 m³/s
Pump head         | 2.31 m
Table 3. Numerical results of the CFD simulation.

No. | βh (°) | βc (°) | σc   | H (mm) | NPSHr (m) | η (%)
1   | 36     | 21     | 0.75 | 7      | 6.5       | 61.2
2   | 54     | 23     | 0.75 | 9      | 7.0       | 80.6
3   | 45     | 25     | 0.85 | 11     | 7.2       | 79.4
4   | 54     | 21     | 0.80 | 11     | 8.2       | 78.1
5   | 36     | 23     | 0.75 | 9      | 6.2       | 83.4
6   | 54     | 23     | 0.85 | 7      | 7.8       | 86.4
7   | 45     | 21     | 0.80 | 9      | 6.7       | 80.6
8   | 45     | 23     | 0.85 | 11     | 8.4       | 79.1
9   | 54     | 25     | 0.75 | 7      | 8.5       | 83.4
... | ...    | ...    | ...  | ...    | ...       | ...
80  | 54     | 25     | 0.85 | 11     | 6.5       | 64.3
81  | 45     | 23     | 0.75 | 9      | 7.7       | 74.1
Fig. 1. Design variables of the impeller blades.

\frac{\partial V_i}{\partial x_i} = 0    (4)

\frac{D V_i}{D t} = -\frac{1}{\rho}\frac{\partial p}{\partial x_i} + \nu \frac{\partial^2 V_i}{\partial x_j \partial x_j} - \frac{\partial \overline{u_i u_j}}{\partial x_j}    (5)

i = 1, 2, ..., N; j = 1, 2, ..., n.

The physical models used in the solver are the Reynolds-averaged Navier-Stokes equations together with the k-ε turbulence model; the k and ε transport equations are given as [30]

\frac{D k}{D t} = \frac{\partial}{\partial x_j}\left[\left(C_k \frac{k^2}{\varepsilon} + \nu\right)\frac{\partial k}{\partial x_j}\right] - \overline{u_i u_j}\,\frac{\partial V_i}{\partial x_j} - \varepsilon    (6)

\frac{D \varepsilon}{D t} = \frac{\partial}{\partial x_j}\left[\left(C_\varepsilon \frac{k^2}{\varepsilon} + \nu\right)\frac{\partial \varepsilon}{\partial x_j}\right] - C_{\varepsilon 1}\frac{\varepsilon}{k}\,\overline{u_i u_j}\,\frac{\partial V_i}{\partial x_j} - C_{\varepsilon 2}\frac{\varepsilon^2}{k}    (7)

i = 1, 2, ..., N; j = 1, 2, ..., n.
For the grid generation, tetrahedral and hexahedral mesh types were used in the software ANSYS Gambit 2.4.6. A tetrahedral mesh was used for the pump and the areas surrounding the impeller, and the remaining areas were filled with a hexahedral mesh [31]. The boundary conditions are as follows: no-slip conditions were applied on all walls, a mass flow rate was prescribed at the pump inlet, and a static pressure boundary condition was used at the outlet. The simulation was continued until the solution converged with a total residual of less than 10⁻⁴. Some of the operating conditions are listed in Table 2, and the results of the numerical simulations using ANSYS Fluent are shown in Table 3. The 81 CFD simulation results are used to build response surfaces for both the efficiency and the NPSHr using GMDH-type neural networks. These meta-models are then used for the Pareto-based multi-objective optimization of the axial-flow pump with the modified PSO method.
3. Meta-models building using GMDH-type neural network

Group method of data handling (GMDH) polynomial neural networks are a self-organizing approach by which gradually more complicated models are generated based on the evaluation of their performance on a set of multi-input single-output data pairs [32]. GMDH networks originated in 1968 with Prof. Alexey G. Ivakhnenko, who was working at that time on better prediction of fish populations in rivers at the Institute of Cybernetics in Kyiv (Ukraine). The algorithm can be used to model complex systems without specific knowledge of the system. The main idea of GMDH is to build an analytical function in a feed-forward network based on a quadratic node transfer function whose coefficients are obtained using a regression technique [33].

Fig. 2. The structure of a basic GMDH network.

3.1 Structure of a GMDH-type neural network

Self-organizing means that the connections between neurons in the network are not fixed but are selected during training to optimize the network. The number of layers in the network is also selected automatically to produce maximum accuracy without overfitting. As shown in Fig. 2, the first layer (at the left) presents one input for each predictor variable. Every neuron in the second layer draws its inputs from two of the input variables. The neurons in the third layer draw their inputs from two of the neurons in the previous layer, and this progresses through the whole network. The final layer (at the right) draws its two inputs from the previous layer and produces a single value, which is the output of the network [34].

The formal definition of the identification problem is to find a function f that can be used instead of the actual one, f_a, in order to predict the output y for a given input vector X = (x_1, x_2, x_3, ..., x_n) as closely as possible to the actual output y_a. For the given M observations of multi-input single-output data pairs, we have

y_{ai} = f_a(x_{i1}, x_{i2}, x_{i3}, \ldots, x_{in}) \quad (i = 1, 2, \ldots, M)    (8)

where y_{ai} is the actual output of the i-th observation and x_{i1}, ..., x_{in} are the input variables of the i-th observation. For any given input vector X = (x_1, x_2, x_3, ..., x_n), there is

y_i = f(x_{i1}, x_{i2}, x_{i3}, \ldots, x_{in}) \quad (i = 1, 2, \ldots, M)    (9)

where y_i is the predicted output of the i-th observation. In order to determine the GMDH neural network, the sum of the squared differences between the actual and predicted outputs is minimized:

\sum_{i=1}^{M} \left[ y_i - y_{ai} \right]^2 \rightarrow \min .    (10)

The most popular base function used in GMDH is the Volterra functional series in the form of

y_a = a_0 + \sum_{i=1}^{n} a_i x_i + \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij} x_i x_j + \sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n} a_{ijk} x_i x_j x_k + \ldots    (11)

which is known as the Kolmogorov-Gabor polynomial, where x_i, x_j, x_k, ... denote the input variables and a_0, a_i, a_{ij}, a_{ijk}, ... denote the coefficients to be determined. GMDH uses complete quadratic polynomials of two variables as transfer functions in the neurons. These polynomials can be represented in the form

y = a_0 + a_1 x_i + a_2 x_j + a_3 x_i x_j + a_4 x_i^2 + a_5 x_j^2 .    (12)

A tree of polynomials is thus constructed using the quadratic form given in Eq. (12), whose coefficients are obtained in a least-squares sense. In this way, the coefficients of each quadratic function y are obtained to optimally fit the output over the whole set of input-output data pairs [35].
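To make the least-squares construction of Eq. (12) concrete, the following minimal sketch (in Python with NumPy; the function names and the dummy data are illustrative, not from the paper) fits one quadratic GMDH node to a pair of input columns and returns its coefficients a0-a5.

```python
import numpy as np

def fit_gmdh_node(xi, xj, y):
    """Fit one quadratic GMDH node, Eq. (12):
    y = a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi**2 + a5*xj**2,
    in the least-squares sense of Eq. (10)."""
    A = np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs  # a0 ... a5

def eval_gmdh_node(coeffs, xi, xj):
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi**2 + a5*xj**2

# Illustrative use with random data standing in for the 81 CFD patterns.
rng = np.random.default_rng(0)
beta_h = rng.uniform(36.0, 54.0, 81)   # hub angle samples
beta_c = rng.uniform(21.0, 25.0, 81)   # chord angle samples
npshr = 6.0 + 0.02 * beta_h + 0.05 * beta_c + rng.normal(0.0, 0.05, 81)  # dummy target

node = fit_gmdh_node(beta_h, beta_c, npshr)
print("fitted coefficients:", np.round(node, 4))
print("prediction at (45, 23):", eval_gmdh_node(node, 45.0, 23.0))
```

A full GMDH network would fit such nodes for every pair of inputs, retain the best-performing ones on a validation split and feed their outputs into the next layer, which is essentially what the DTREG software automates for the 81 CFD patterns.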
3.2 Meta-models building in DTREG

The input and output data used in this modelling comprise two data tables obtained from the CFD simulations. Both tables consist of four input variables, βh, βc, σc and H (as shown in Fig. 1), and two outputs, the efficiency η and NPSHr. There are 81 patterns that can be used to train and test the GMDH neural network. The corresponding polynomial representation for NPSHr is as follows:

N(3) = -0.6432 - 0.015βh + 0.17βc + 0.0325βh² - 0.0274βc² + 2.3e-6 βhβc
N(1) = 3.524 - 0.134βh + 0.0245σc + 0.0418βh² + 2.056e-12 σc² + 2.01e-5 βhσc
N(7) = 5.643 - 0.2134H - 0.02192βh + 0.0021H² + 0.0004βh² + 2.45e-5 Hβh
N(4) = -1.89 + 0.213βc + 0.015σc - 0.004βc² + 1.19e-11 σc² + 1.24e-5 βcσc
N(9) = 6.941 - 3.7845N(3) + 0.5418N(1) + 0.4723N(3)² + 0.0245N(1)² + 0.0475N(3)N(1)
N(6) = 6.147 - 2.1873N(7) - 0.7122N(4) + 0.218N(7)² + 0.0947N(4)² + 0.4234N(7)N(4)
NPSHr = -0.412 - 0.062N(9) + 1.148N(6) + 0.0812N(9)² + 0.0412N(6)² - 0.1271N(9)N(6) .
The corresponding polynomial representation for the efficiency is as follows:

N(4) = -0.5412 - 0.411βh + 2.143βc + 0.016βh² - 0.0124βc² + 0.00012βhβc
N(6) = 16.895 + 1.2183H + 0.495σc - 0.008H² - 0.0041σc² + 0.0012Hσc
N(1) = -15.03 + 1.96βc + 0.6181σc - 0.023βc² - 0.0041σc² + 0.0012βcσc
N(7) = 34.107 + 1.254H - 0.49βh - 0.0082H² + 0.0092βh² + 0.00059Hβh
N(9) = 54.324 - 0.5912N(4) - 1.0126N(6) + 0.00521N(4)² + 0.0063N(6)² + 0.0191N(4)N(6)
N(3) = 61.801 - 0.5012N(1) - 1.0125N(7) + 0.0043N(1)² + 0.00751N(7)² + 0.014N(1)N(7)
η = 0.7241 - 3.0174N(9) + 5.0124N(3) - 2.3104N(9)² - 2.5983N(3)² + 5.362N(9)N(3) .

To assess the accuracy of the GMDH modelling, some statistical measures are given in Table 4 for both objective functions. These measures are the absolute fraction of variance R², the root mean squared error RMSE and the mean absolute percentage error MAPE, defined as follows:

R^2 = 1 - \frac{\sum_{i=1}^{n} \left( Y_{i(\mathrm{Model})} - Y_{i(\mathrm{CFD})} \right)^2}{\sum_{i=1}^{n} Y_{i(\mathrm{CFD})}^2}    (13)

\mathrm{RMSE} = \left( \frac{\sum_{i=1}^{n} \left( Y_{i(\mathrm{Model})} - Y_{i(\mathrm{CFD})} \right)^2}{n} \right)^{1/2}    (14)

\mathrm{MAPE} = \left( \frac{1}{n} \sum_{i=1}^{n} \frac{\left| Y_{i(\mathrm{Model})} - Y_{i(\mathrm{CFD})} \right|}{Y_{i(\mathrm{CFD})}} \right) \times 100 .    (15)

Table 4. R², RMSE and MAPE for the efficiency and NPSHr meta-models.

Parameter | η (Calculated) | η (Experimental) | NPSHr (Calculated) | NPSHr (Experimental)
R²        | 0.997          | 0.9991           | 0.9991             | 0.9990
RMSE      | 1.0234         | 1.0021           | 0.0491             | 0.0486
MAPE (%)  | 0.0083         | 0.0080           | 0.0068             | 0.0064
It is evident from Table 4 that the models fit the set of observations well. The meta-models obtained in this section can now be utilized in a Pareto-based multi-objective optimization of the axial-flow pump, considering both efficiency and NPSHr as conflicting objectives.
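As a minimal sketch, Eqs. (13)-(15) can be evaluated directly once the meta-model predictions and the CFD values are available as arrays (the function name and the dummy values are illustrative):

```python
import numpy as np

def accuracy_metrics(y_model, y_cfd):
    """R^2, RMSE and MAPE as defined in Eqs. (13)-(15)."""
    y_model = np.asarray(y_model, dtype=float)
    y_cfd = np.asarray(y_cfd, dtype=float)
    r2 = 1.0 - np.sum((y_model - y_cfd) ** 2) / np.sum(y_cfd ** 2)
    rmse = np.sqrt(np.mean((y_model - y_cfd) ** 2))
    mape = np.mean(np.abs(y_model - y_cfd) / y_cfd) * 100.0
    return r2, rmse, mape

# Illustrative call with dummy NPSHr values standing in for Table 3 data.
print(accuracy_metrics([6.4, 7.1, 7.3], [6.5, 7.0, 7.2]))
```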
4. PSO algorithm

Particle swarm optimization (PSO) is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. PSO is a population-based search algorithm originally developed by Kennedy and Eberhart [36]. It was
first intended for simulating social behaviour, as a stylized representation of the social behaviour of a bird flock or fish school. The algorithm was originally adopted for balancing weights in neural networks, and PSO has since become a popular global optimizer [37]. One study reported in the literature extends PSO to multi-objective problems [38]. A dynamic neighbourhood particle swarm optimization (DNPSO) for multi-objective problems was also presented [39]; in that study, the particles of the swarm find their new neighbours in each generation, and the best local particle in the new neighbourhood is chosen as the group best (gbest) for each particle. A modified DNPSO was introduced by finding the nearest n particles as the neighbours of the current particle, based on the distances between the current particle and the others [40].

The original PSO (OPSO) algorithm updates the particles according to the following equations:

v_{ij}^{t+1} = w v_{ij}^{t} + c_1 r_1 (pbest_{ij}^{t} - x_{ij}^{t}) + c_2 r_2 (gbest_{j}^{t} - x_{ij}^{t})    (16)

x_{ij}^{t+1} = x_{ij}^{t} + v_{ij}^{t+1}    (17)

i = 1, 2, ..., N; j = 1, 2, ..., n

where v_i^{t+1} represents the velocity of the i-th particle at iteration t+1 and x_i^{t+1} its position at iteration t+1; x is the current particle position, v is the current particle velocity, t is the iteration (generation) counter, w is the inertia weight, c_1 and c_2 are acceleration constants, r_1 and r_2 are random values in the range [0, 1], pbest is the personal best position of a given particle, and gbest is the position of the best particle of the entire swarm.

The algorithm developed by Kennedy and Eberhart was inspired by insect swarms (or fish schools or bird flocks) and their coordinated movements. It relies on the information carried by pbest and gbest, but it considers only the experience of pbest and gbest and ignores the communication with other particles. An improved particle swarm optimization method (IPSO) was therefore developed, with the update equations shown below:

v_{ij}^{t+1} = w v_{ij}^{t} + c_1 r_1 (pbest_{ij}^{t} - x_{ij}^{t}) + c_2 r_2 (gbest_{j}^{t} - x_{ij}^{t}) + c_3 r_3 CR    (18)

CR = \begin{cases} pbest_{kj}^{t} - x_{ij}^{t}, & \text{if } ran < c_p \\ 0, & \text{otherwise} \end{cases}    (19)

where k denotes the k-th particle (k ≠ i), c_p is the communication probability between particles, and ran is a random value in the range [0, 1].
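The OPSO update of Eqs. (16) and (17) and the communication term CR of Eqs. (18) and (19) can be written compactly as below. This is a minimal Python sketch: the inertia weight value and the random choice of the communicating particle k are assumptions, since the text does not specify how k is selected.

```python
import numpy as np

rng = np.random.default_rng()

def opso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One OPSO update, Eqs. (16)-(17). x, v, pbest: (N, n) arrays; gbest: (n,)."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new

def communication_term(x, pbest, i, cp=0.3):
    """CR of Eq. (19) for particle i: with probability cp, use the pbest of some
    other particle k (chosen at random here, an assumption), otherwise zero."""
    if rng.random() < cp:
        k = rng.choice([p for p in range(x.shape[0]) if p != i])
        return pbest[k] - x[i]
    return np.zeros(x.shape[1])
```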
The communication between particles is thus considered in IPSO and supplies much more information for searching optimal solutions. However, in the later stages of the evolution process, as the swarm diversity disappears, the optimization is still easily trapped in a local optimum. To address this disadvantage of IPSO, a modified PSO (MPSO) was developed, inspired by the bacterial foraging algorithm. Passino originally proposed the bacterial foraging algorithm in 2002 [41]. It was inspired by the abstraction and simulation of the foraging behaviour of bacteria in the human intestinal canal. Three steps guide the bacterium to the nutrient-rich area: chemotaxis, reproduction and elimination-dispersal. Elimination-dispersal happens when the bacterium is stimulated from outside and then moves in the opposite direction. Chemotaxis is the action of bacteria gathering in the nutrient-rich area. For example, an E. coli bacterium can move in two different ways: it can run (swim for a period of time) or it can tumble, and it alternates between the two modes of operation its entire lifetime (i.e., it is rare that the flagella stop rotating). If the flagella rotate clockwise, each flagellum pulls on the cell, and the net effect is that each flagellum operates relatively independently of the others; the bacterium then has no set direction of movement and there is little displacement. The bacterium exhibits the behaviour of climbing nutrient gradients. The motion patterns that the bacterium generates in the presence of chemical attractants and repellents are called chemotaxis. For E. coli, encounters with serine or aspartate result in attractant responses, whereas repellent responses result from the metal ions Ni and Co, changes in pH, amino acids like leucine, and organic acids like acetate [41]. Because this behaviour exploits advantages and avoids disadvantages, a bacterium can search for a better food source, increase its chance of surviving and enhance its adaptive capacity to a varied environment. Correspondingly, if a harmful stimulus occurs in the MPSO process, the swarm can escape from a local optimum by applying the elimination-dispersal operation. The corresponding update equation is shown below, and the flow chart of MPSO is shown in Fig. 3:

v_{ij}^{t+1} = w v_{ij}^{t} - c_1 r_1 (pbest_{ij}^{t} - x_{ij}^{t}) - c_2 r_2 (gbest_{j}^{t} - x_{ij}^{t}) - c_3 r_3 CR .    (20)
The search strategy of the MPSO algorithm involves five steps.

[Step 1] Initialize the parameters n, N, Nh, c1, c2, c3 and cp, where
n : dimension of the search space,
N : maximum number of iterations,
Nh : generation threshold of the harmful stimulus,
c1, c2, c3 : acceleration constants,
cp : communication probability.
Fig. 3. Flow chart of MPSO.
[Step 2] Update the following:
J : current fitness values of the particles,
Jpbest : fitness value of the best position found so far by each particle,
Jgbest : fitness value of the best position found so far by the group.

[Step 3] Computation
Compute the communication probability cp of the particles.

[Step 4] Chemotaxis or elimination-dispersal operation
Harmful stimulus: in the particle swarm searching process, if the fitness value of a particle persists unchanged over several generations and finally reaches the preset maximum generation of harmful stimulus, the optimization is considered trapped in a local optimum. If a harmful stimulus is met, implement the elimination-dispersal operation using Eqs. (17), (19) and (20); otherwise implement the chemotaxis operation using Eqs. (17)-(19).

[Step 5] Stop criteria
If the stop criterion k = kmax is met, the optimization ends; otherwise return to [Step 2] for the next iteration until the specified number of iterations is reached.
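Putting Eqs. (16)-(20) and Steps 1-5 together, a compact Python sketch of the MPSO search loop could look as follows. This is a sketch under stated assumptions: the stagnation test for the harmful stimulus, the random choice of the communicating particle k, the bound handling and the minimization setting are illustrative choices rather than details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def mpso(f, bounds, n_particles=10, n_iter=400, nh=20, w=0.7,
         c1=0.5, c2=2.0, c3=0.5, cp=0.3):
    """Minimize f over box bounds [(lo, hi), ...] with an MPSO-style scheme."""
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = np.argmin(pbest_val)
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    stagnation = 0  # generations without improvement of gbest

    for _ in range(n_iter):
        harmful = stagnation >= nh          # harmful stimulus: assumed stagnation test
        sign = -1.0 if harmful else 1.0     # elimination-dispersal flips the signs, Eq. (20)
        for i in range(n_particles):
            r1, r2, r3 = rng.random(dim), rng.random(dim), rng.random(dim)
            # Communication term CR of Eq. (19), with k chosen at random (assumption).
            if rng.random() < cp:
                k = rng.choice([p for p in range(n_particles) if p != i])
                cr = pbest[k] - x[i]
            else:
                cr = np.zeros(dim)
            v[i] = w * v[i] + sign * (c1 * r1 * (pbest[i] - x[i])
                                      + c2 * r2 * (gbest - x[i])
                                      + c3 * r3 * cr)
            x[i] = np.clip(x[i] + v[i], lo, hi)   # Eq. (17) plus simple bound handling
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i].copy(), val
        if pbest_val.min() < gbest_val:
            g = np.argmin(pbest_val)
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
            stagnation = 0
        else:
            stagnation += 1
        if harmful:
            stagnation = 0  # dispersal applied; reset the stagnation counter
    return gbest, gbest_val

# Example: minimize the sphere function of Sec. 5 in 5 dimensions.
sphere = lambda z: float(np.sum(z ** 2))
best_x, best_f = mpso(sphere, [(-100.0, 100.0)] * 5)
print(best_f)
```

For the pump problem, f would evaluate the GMDH meta-models of Sec. 3 over the design variables (βh, βc, σc, H), with a Pareto-ranking or weighted treatment of the two objectives.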
5. Benchmarks

In this section, a set of three benchmark functions was employed to evaluate the MPSO algorithm in comparison with OPSO and IPSO:

(1) Sphere

f_1(x) = \sum_{i=1}^{n} x_i^2, \quad -100 \le x_i \le 100

Global optimum: min(f_1) = 0.
Table 5. Benchmark test results.

fi | Result        | OPSO       | IPSO       | MPSO
f1 | Mean value    | 5.154e-003 | 9.154e-009 | 4.455e-009
f1 | Minimum value | 8.161e-004 | 1.235e-001 | 0.000e+000
f2 | Mean value    | 2.125e-001 | 6.158e-002 | 1.247e-002
f2 | Minimum value | 7.018e-004 | 4.973e-008 | 0.000e+000
f3 | Mean value    | 4.591e+001 | 2.347e+001 | 2.155e+001
f3 | Minimum value | 1.043e+001 | 1.616e+000 | 3.319e-001
(2) Griewank

(3) Rosenbrock

Fig. 4. Comparison of convergence speed (OPSO: red, IPSO: black, MPSO: blue).
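A minimal sketch of the three benchmark functions is given below; the Griewank and Rosenbrock expressions are the standard textbook forms and are an assumption here, as are the bounds mentioned in the closing comment.

```python
import numpy as np

def sphere(x):
    # f1: sum of squares, global minimum 0 at x = 0, bounds [-100, 100]^n as in item (1).
    return float(np.sum(x ** 2))

def griewank(x):
    # f2 (standard form, assumed): 1 + sum(x_i^2)/4000 - prod(cos(x_i / sqrt(i))),
    # global minimum 0 at x = 0.
    i = np.arange(1, x.size + 1)
    return float(1.0 + np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))))

def rosenbrock(x):
    # f3 (standard form, assumed): sum(100*(x_{i+1} - x_i^2)^2 + (1 - x_i)^2),
    # global minimum 0 at x = (1, ..., 1).
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

# Each function could be passed to an optimizer such as the MPSO sketch in Sec. 4,
# e.g. mpso(griewank, [(-600.0, 600.0)] * 10); the bounds used here are an assumption.
print(sphere(np.zeros(5)), griewank(np.zeros(5)), rosenbrock(np.ones(5)))
```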
For the function tests, the population size was set to 10 and the number of iterations to 400. In OPSO, the acceleration constants were c1 = c2 = 2. In IPSO and MPSO, the acceleration constants were c1 = 0.5, c2 = 2 and c3 = 0.5. The communication probability between particles in IPSO was set to cp = 0.3, and the generation of harmful stimulus was set to 20. Each algorithm was run 100 times, and the mean and minimum values found by each algorithm are shown in Table 5. It can be seen that MPSO successfully finds the best solutions for these three functions, even for the complicated ones, and its solution precision is much higher than that of OPSO and IPSO. Fig. 4 shows the comparison of the convergence speed of the three algorithms over 400 iterations; the convergence curves were generated from multiple runs. The convergence speed of the proposed MPSO is faster than that of OPSO and IPSO: MPSO needs far fewer iterations and less computational time to find the global optimum. Therefore, it can be concluded that MPSO is better suited to such problems than the aforementioned algorithms.
6. Multi-objective optimization using the modified PSO method

The polynomial neural network models obtained in Sec. 3 are now employed in a multi-objective optimization procedure using the modified PSO method in order to investigate the optimal performance of the axial-flow pump. The two conflicting objectives, efficiency η and NPSHr, are to be simultaneously optimized with respect to the design variables βh, βc, σc and H. The design optimization problem, i.e. the objective functions and the constraints on the design variables, can be stated as follows:

maximize η(βh, βc, σc, H) and minimize NPSHr(βh, βc, σc, H),
subject to 36° ≤ βh ≤ 54°, 21° ≤ βc ≤ 25°, 0.75 ≤ σc ≤ 0.85 and 7 mm ≤ H ≤ 11 mm.

Fig. 5. Pareto front of NPSHr and efficiency.
The obtained non-dominated optimum solutions, forming the Pareto front, are shown in Fig. 5. These points demonstrate the trade-off between the objective functions NPSHr and efficiency. All the optimum design points on the Pareto front are non-dominated and could be chosen as the optimum pump; however, choosing a better value for one objective function causes a worse value for the other. The solutions shown in Fig. 5 are therefore the best possible design points: for any other choice of the decision variables, the corresponding values of the objectives would be worse.
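For reference, a minimal sketch of how a non-dominated set such as the one in Fig. 5 can be extracted from candidate (NPSHr, η) pairs is given below; the helper name and the simple quadratic-time scan are illustrative choices, not the paper's procedure, and the candidate values are taken loosely from Table 3.

```python
def pareto_front(points):
    """Return the non-dominated (NPSHr, eta) pairs:
    lower NPSHr is better, higher eta is better."""
    front = []
    for npshr, eta in points:
        dominated = any(o_np <= npshr and o_eta >= eta and (o_np, o_eta) != (npshr, eta)
                        for o_np, o_eta in points)
        if not dominated:
            front.append((npshr, eta))
    return sorted(front)

# Candidate designs (NPSHr in m, efficiency as a fraction), in the style of Table 3.
candidates = [(6.5, 0.612), (6.2, 0.834), (7.8, 0.864), (8.4, 0.791)]
print(pareto_front(candidates))  # only the trade-off points remain
```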
Table 6. Mapped values of points A-E.

Point (NPSHr, η) | A (6.50, 0.637) | B (6.78, 0.747) | C (7.28, 0.836) | D (7.70, 0.848) | E (8.52, 0.871)
M_NPSHr          | 0               | 0.14            | 0.39            | 0.59            | 1
M_η              | 1               | 0.53            | 0.15            | 0.10            | 0
M_NPSHr + M_η    | 1               | 0.67            | 0.54            | 0.69            | 1

Table 7. Re-evaluation of the optimum design point.

Method         | MPSO | CFD  | Error
Efficiency (%) | 83.6 | 85.4 | 2.11%
NPSHr (m)      | 7.28 | 7.41 | 1.75%
Fig. 6. The sum of the mapped values of points A-E.

Fig. 7. Path line analysis from the CFD simulation.
In Fig. 5, the design points A and E represent the best NPSHr and the best efficiency, respectively. The other optimum design points, B and D, can also be identified in Fig. 5 and illustrate important design trade-offs. The optimum design point B shows only a small increase in NPSHr in comparison with point A, whilst its efficiency improves considerably. Similarly, the optimum design point D shows only a small decrease in efficiency in comparison with point E, whilst its NPSHr improves considerably. A trade-off among the optimum design points, compromising both objective functions, can now be found by the mapping method defined by the equation shown below.
\text{Mapped value} = \frac{f - f_{\min}}{f_{\max} - f_{\min}} .    (21)
In the mapping method, the values of the objective functions of all non-dominated points are mapped into the interval [0, 1]. Using the sum of these mapped values for each non-dominated point, the trade-off point is simply the one having the minimum sum. The mapped values of points A-E are shown in Table 6 and Fig. 6, which give the sum of the mapped NPSHr value and the mapped η value for each point; the optimum point is the one with the smallest sum. It can be seen that point C has the smallest sum, so it is chosen as the optimum design point. For point C: βh = 53.7°, βc = 22.5°, σc = 0.81, H = 8.2 mm, NPSHr = 7.28 m and η = 0.836.
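The mapping of Eq. (21) and the selection of the trade-off point can be reproduced from the Table 6 data with the small sketch below; note that the efficiency is mapped in the reverse direction (the best value mapped to 0), which is how Table 6 is constructed.

```python
points = {"A": (6.50, 0.637), "B": (6.78, 0.747), "C": (7.28, 0.836),
          "D": (7.70, 0.848), "E": (8.52, 0.871)}

def mapped(value, values):
    # Eq. (21): map an objective value into [0, 1] over the non-dominated set.
    return (value - min(values)) / (max(values) - min(values))

npshr_vals = [p[0] for p in points.values()]
eta_vals = [p[1] for p in points.values()]

sums = {}
for name, (npshr, eta) in points.items():
    m_npshr = mapped(npshr, npshr_vals)   # lower NPSHr is better
    m_eta = 1.0 - mapped(eta, eta_vals)   # higher efficiency is better, so reverse the mapping
    sums[name] = m_npshr + m_eta

best = min(sums, key=sums.get)
print(sums)   # roughly matches the last row of Table 6
print(best)   # 'C', the trade-off design point
```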
Fig. 8. Total pressure calculation from the CFD simulation.
The optimum design point was then re-evaluated by CFD; the analysis results are shown in Figs. 7 and 8. It can be seen from Fig. 7 that the path lines are normal, with no vortices or back flow on the flow path. Fig. 8 shows that the total pressure distribution increases from the hub section to the shroud section. The maximum total pressure occurs at the tip of the leading edge because of the minimum peripheral speed of the water there. All these results indicate that the pump can work well with good performance. The CFD re-evaluation results are compared with the optimum design point obtained by the modified particle swarm optimization in Table 7. As seen, the two kinds of results agree well with each other.
7. Conclusions and future work

A new shape optimization method for an axial-flow pump impeller based on the modified PSO (MPSO) algorithm has been presented in this article.

(1) A GMDH-type neural network was applied for high prediction accuracy. Two different polynomial relations for the two
objectives, NPSHr and efficiency (η), were found by GMDH-type neural networks using experimentally validated CFD simulations.

(2) In order to overcome the disadvantage of OPSO and IPSO, which are easily trapped in local optima, the MPSO algorithm inspired by the bacterial foraging algorithm was proposed in this article. A benchmark test was also applied, and the results indicate that MPSO needs far fewer iterations and less computational time to find the global optimum.

(3) Non-dominated optimum design points forming a Pareto front of the two objective functions were found as the optimal solutions, which can help designers choose the proper result to meet different requirements.

(4) In the re-evaluation process, the MPSO optimization result and the CFD simulation result agreed well with each other. This means that swarm intelligence based on the modified PSO algorithm is feasible and can be applied to the optimization of axial-flow pump impeller shape design.

(5) In the optimization process using the MPSO algorithm, the ranges of the parameters and the values of the acceleration constants can affect the computational time and the convergence speed; these are subjects for future work.
Acknowledgment

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. NRF-2012M2B2B1055503) and NRF-2013R1A1A2060197.
Nomenclature

OPSO  : Original particle swarm optimization
IPSO  : Improved particle swarm optimization
MPSO  : Modified particle swarm optimization
NPSHr : Net positive suction head required
η     : Pump efficiency
R²    : Absolute fraction of variance
RMSE  : Root mean squared error
MAPE  : Mean absolute percentage of error
βh    : Hub angle (°)
βc    : Chord angle (°)
σc    : Cascade solidity of chord
H     : Maximum thickness of blade (mm)
Pe    : Useful power transferred from pump to liquid (kW)
P     : Shaft power (kW)
w1    : Relative velocity of fluid (m/s)
v1    : Absolute velocity of fluid (m/s)
λ     : Cavitation coefficient
wmax  : Maximum relative velocity (m/s)
References

[1] P. H. Chen, Pumped-storage scheduling using evolutionary
particle swarm optimization, IEEE Transactions on Energy Conversion, 23 (1) (2008) 294-301. [2] J. J. Yang et al., A novel strategy of pareto-optimal solution searching in multi-objective particle swarm optimization (MOPSO), Computers and Mathematics with Applications, 57 (2009) 1995-2000. [3] F. Khoshahval et al., A new hybrid method for multiobjective fuel management optimization using parallel PSOSA, Progress in Nuclear Energy, 76 (2014) 112-121. [4] G. G. Chen et al., Chaotic improved PSO-based multiobjective optimization for minimization of power losses and L index in power systems, Energy Conversion and Management, 86 (2014) 548-560. [5] J. Kennedy and R. C. Eberhart, Particle swarm optimization, Proceedings of IEEE International Conference on Neural Networks IV (1995) 1942-1948. [6] Y. Shi and R. C. Eberhart, A modified particle swarm optimizer, Proceedings of IEEE International Conference on Evolutionary Computation (1998) 69-73. [7] J. Y. Park and S. Y. Han, Swarm intelligence topology optimization based on artificial bee colony algorithm, International Journal of Precision Engineering and Manufacturing, 14 (1) (2013) 115-121. [8] B. Bhandari, K. T. Lee, G. Y. Lee, Y. M. Cho and S. H. Ahn, Optimization of hybrid renewable energy power systems: A review, International Journal of Precision Engineering and Manufacturing-Green Technology, 2 (1) (2015) 99-112. [9] N. Tanaka, T. Honma and Y. Yokosuka, Structure shape optimization of free-form surface shell and property of solution search using firefly algorithm, Journal of Mechanical Science and Technology, 29 (4) (2015) 1449-1455. [10] M. C. Chiu, Multi-tone noise elimination in a spaceconstrained room lined with hybrid sound absorbers using a particle swarm method, Journal of Mechanical Science and Technology, 28 (9) (2014) 3411-3423. [11] J. Kennedy, The particle swarm: social adaptation of knowledge, Proceedings of IEEE International Conference on Evolutionary Computation (1997) 303-308. [12] J. Kennedy and R. C. Eberhart, Swarm intelligence, Morgan Kaufmann, San Francisco, USA (2001). [13] I. Driss, K. N. Mouss and A. Laggoun, A new genetic algorithm for flexible job-shop scheduling problems, Journal of Mechanical Science and Technology, 29 (3) (2015) 1273-1281. [14] R. Poli, An analysis of publications on particle swarm optimization applications, Technical Report CSM-469, Department of Computer Science, University of Essex, UK (2007). [15] R. Poli, Analysis of the publications on the application of particle swarm optimization, Journal of Artificial Evolution and Applications (2008) 1-10. [16] K. Parsopoulos and M. Vrahatis, Particle swarm optimization method in multi objective problems, Proceedings of the ACM Symposium on Applied Computing (SAC), Madrid, Spain (2002) 603-607.
[17] C. A. Coello and M. S. Lchuga, MOPSO: A proposal for multi objective particle swarm optimization, Congress on Evolutionary Computation, Honolulu, Havaii, USA (2002) 1051-1056. [18] A. Ariyarit and M. Kanazaki, Multi-modal distribution crossover method based on two crossing segments bounded by selected parents applied to multi-objective design optimization, Journal of Mechanical Science and Technology, 29 (4) (2015) 1443-1448. [19] M. A. Mohamed, Y. Manurung and M. N. Berhan, Model development for mechanical properties and weld quality class of friction stir welding using multi-objective Taguchi method and response surface methodology, Journal of Mechanical Science and Technology, 29 (6) (2015) 2323-2331. [20] R. B. Mohammad, X. Li and M. Zbigniew, A hybrid particle swarm with a time-adaptive topology for constrained optimization, Swarm and Evolutionary Computation, 18 (2014) 22-37. [21] J. Wang, G. G. Yen and M. M. Polycarpou, Improved PSO algorithm with harmony search for complicated function optimization problems, Lecture Notes in Computer Science, 1 (2012) 624-632. [22] Y. Y. Wang, B. Q. Zhang and Y. C. Chen, Robust airfoil optimization based on improved particle swarm optimization method, Applied Mathematics and Mechanics, 32 (10) (2011) 1245-1254. [23] J. L. Alision et al., Modern tools for water jet pump design and recent advances in the field, International Conference on Water Jet Propulsion, RINA, Amsterdam (1998). [24] X. Q. Luo et al., Multi-objective optimum design for hydraulic turbine guide vane based on NSGA-II algorithm, Journal of Drainage and Irrigation Machinery Engineering, 28 (5) (2010) 369-373. [25] J. Y. Zhang et al., Multi-objective shape optimization of helicon-axial multiphase pump impeller based on NSGA-II and ANN, Energy Conversion and Management, 52 (2011) 538-546. [26] H. Safikhani et al., Pareto based multi-objective optimization of centrifugal pumps using CFD, neural networks and genetic algorithms, Engineering Applications of Computational Fluid Mechanics, 5 (1) (2011) 37-48. [27] A. Nourbakhsh, Pump and pumping, University of Tehran Press, Iran, Tehran (2006). [28] L. Bachus and A. Custodio, Know and understand centrifugal pumps, First Ed., Elsevier press, UK (2003). [29] X. F. Guan, Research on development of high performance axial flow pump impellers, Pump Technology (1982) 25-32. [30] A. Riasi, A. Nourbakhsh and M. Raisee, Unsteady turbulent pipe flow due to water hammer using k-ε turbulence model, Journal of Hydraulic Research, 4 (2009) 429-437. [31] I. S. Jung et al., Shape optimization of impeller blades for a bidirectional axial flow pump using polynomial surrogate model, The Korean Society of Mechanical Engineering, 36 (12) (2012) 1141-1150.
[32] K. Kristinson and G. Dumont, System identification and control using genetic algorithms, IEEE Transactions on Systems Manufacturing and cybernetics, 22 (5) (1992) 1033-1046. [33] S. J. Farlow, Self-organizing method in modeling: GMDH type algorithm, Marcel Dekker Inc. [34] H. S. Phillip, DTREG predictive modeling software, DTREG Inc. (1984). [35] H. S. Park and P. Dahal, Development of a resilient cable joint for the insulation system of oil tanks, Journal of Mechanical Science and Technology, 26 (11) (2012) 3617-3624. [36] J. Kennedy and R. C. Eberhart, Particle swarms optimization, International Proc. IEEE International Conference Neural Networks, Perth, Australia (1995) 1942-1948. [37] R. C. Eberhart, R. Dobbins and P. K. Simpson, Computational intelligence PC tools, Morgan Kaufmann Publishers (1996). [38] C. A. Coello and M. S. Lchuga, MOPSO: a proposal for multiple objective particle swarm optimization, Proceedings of the IEEE Congress on Evolutionary Computation, Honolulu, Hawaii, USA (2002) 1051-1056. [39] X. Hu and R. C. Eberhart, Multi-objective optimization using dynamic neighborhood particle swarm optimization, Proceedings of the IEEE Congress on Evolutionary Computation, Honolulu, Hawaii, USA (2002) 1677-1681. [40] T. Ameur, and M. Assas, Modified PSO algorithm for multi-objective optimization of the cutting parameters, Computer Aided Engineering, Prod. Eng. Res. Devel, 6 (2012) 569-576. [41] K. M. Passino, Biomimicry of bacterial foraging for distributed optimization and control, IEEE Control Systems Magazine, 22 (2002) 52-67.
Fuqing Miao received his bachelor's degree from Central South University and his master's degree from the University of Ulsan. He is currently a Ph.D. candidate at Pusan National University, majoring in precision processing systems at the College of Mechanical Engineering. His research interests are multi-objective optimization, the PSO algorithm, fluid mechanics and their applications.

Seokyoung Ahn received his B.S. degree in Mechanical Engineering from Pusan National University, his M.S. degree from POSTECH and his Ph.D. degree from the University of Texas at Austin. His research interests are the modeling, estimation and control of nonlinear manufacturing processes such as the electroslag remelting process. His other research areas include nuclear site decommissioning technologies such as melt decontamination.