Hybrid particle swarm optimization-simplex algorithm for inverse problem

NIE Ru 1, YUE Jian-hua 2, DENG Shuai-qi 2

1. School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, China
E-mail: [email protected]
2. School of Resources and Earth Sciences, China University of Mining and Technology, Xuzhou 221116, China

Abstract: Inverse problems requiring repeated forward computations are hard, ill-posed problems. Traditional linear inversion methods such as the Newton method and Newton-like methods may fail to converge if a good initial estimate cannot be provided. Since the existing particle swarm optimization (PSO) algorithm cannot balance evolution speed and solution quality, a hybrid simplex particle swarm optimization algorithm (HPSO), which combines the simplex method with PSO, is proposed for the wave impedance inverse problem. An application example shows that the proposed algorithm possesses the advantages of both PSO and the simplex search method, namely fast convergence and high identification accuracy. The proposed algorithm is an efficient tool for wave impedance inversion and performs much better than PSO on such problems.

Key Words: PSO, simplex method, hybrid algorithm, inverse problem
1 INTRODUCTION
With the rapid development of the mining industry, it is urgent to investigate the structural features of coal seams and the geologic conditions that affect mining. Seismic inversion is an effective method for improving seismic resolution and lithology estimation. By utilizing lithologic characteristics such as wave impedance, these methods greatly improve the resolution of coal seams and enhance the continuity and detectability of weak reflections. Wave impedance inversion can be defined as a non-linear inverse problem [1]. One of the most important factors determining the results of inverse analysis is the optimization method. Classical methods such as the Levenberg-Marquardt method, the conjugate gradient method and the trust region method have been successfully employed in solving inverse problems. These methods have the advantage of estimating solutions in relatively short computational time, but the results are affected by the initial values, and premature convergence is likely to occur. Such local search methods are sufficient to obtain satisfactory results for a simple problem, because a good initial value can be obtained from a priori information. For complex problems, however, it is difficult to obtain good initial values, which makes it hard for local search algorithms to produce satisfactory parameters with any high degree of confidence. To overcome the difficulties of local search methods, over the past decades stochastic search algorithms such as simulated annealing (SA), the genetic algorithm (GA) and particle swarm optimization (PSO) have proved very effective in solving optimization problems.

(This work is supported by the National Natural Science Foundation of China under Grant 40674074.)
However, these methods suffer from a heavy computational burden, owing to the large dimensionality of the wave impedance inversion problem. Compared with other global optimization methods, PSO has simple concepts and is easy to implement in practice, but conventional particle swarm optimizers are time-consuming when solving complicated optimization problems such as wave impedance inversion. This paper therefore focuses on designing PSO-based procedures that are more effective and efficient than conventional particle swarm optimizers for such problems. As a global optimization algorithm, PSO makes the search less likely to be trapped in local optima, but it also makes the search converge more slowly. Conversely, pure local search methods such as the simplex method converge faster than PSO, but they easily fall into local optima. To deal with the slow convergence of PSO, this paper proposes combining PSO with a local simplex search technique. Such a hybrid method can simultaneously inherit the merits of PSO and the simplex search technique, and thus achieve a better balance between convergence time and global optimality.

This paper is organized as follows. Section 2 presents the PSO and the simplex algorithm. Section 3 introduces HPSO. Section 4 presents an application example of HPSO. Finally, conclusions are given in Section 5.
2 THE PSO AND SIMPLEX ALGORITHM
2.1 The PSO

PSO is an evolutionary computation technique that combines individual improvement with population-level cooperation and competition [2]. A particle's status in the search space is characterized by two factors: its position and its velocity.
The position and velocity of the i-th particle in the d-dimensional search space can be represented as $X_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,d}]$ and $V_i = [v_{i,1}, v_{i,2}, \ldots, v_{i,d}]$, respectively. Each particle has its own best position (pbest) $Pb_i = (pb_{i,1}, pb_{i,2}, \ldots, pb_{i,d})$, corresponding to the best personal objective value obtained so far at time t. The global best particle (gbest) is denoted by $Pb_g$, which represents the best particle found so far at time t. The new velocity of each particle is calculated as follows:

$$v_{i,j}(t+1) = w\,v_{i,j}(t) + c_1 r_1 (pb_{i,j} - x_{i,j}(t)) + c_2 r_2 (pb_{g,j} - x_{i,j}(t)), \quad j = 1, 2, \ldots, d \qquad (1)$$
where $c_1$ and $c_2$ are constants called acceleration coefficients, $w$ is the inertia factor, and $r_1$ and $r_2$ are two independent random numbers uniformly distributed in the range [0, 1]. Thus, the position of each particle is updated in each generation according to the following equation:
$$x_{i,j}(t+1) = x_{i,j}(t) + v_{i,j}(t+1), \quad j = 1, 2, \ldots, d \qquad (2)$$
Generally, the value of each component of $V_i$ computed by Eq. (1) is clamped to the range $[-v_{\max}, v_{\max}]$ to control excessive roaming of particles outside the search space. The particle then flies toward a new position according to Eq. (2). This process is repeated until a user-defined stopping criterion is reached [3].
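As a minimal illustration of Eqs. (1) and (2), the following Python sketch updates a single particle; the function name and default parameter values are illustrative choices, not taken from the paper:

```python
import numpy as np

def pso_update(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, vmax=1.0):
    """One PSO step: velocity update with inertia, cognitive and social
    terms per Eq. (1), clamped to [-vmax, vmax], then position update
    per Eq. (2)."""
    r1, r2 = np.random.rand(x.size), np.random.rand(x.size)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (1)
    v = np.clip(v, -vmax, vmax)                                # velocity clamping
    x = x + v                                                  # Eq. (2)
    return x, v
```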
2.2 The simplex algorithm

The simplex search method was first proposed by Spendley, Hext, and Himsworth in 1962 and refined in 1965 by Nelder and Mead. It is a derivative-free direct search method designed for traditional unconstrained minimization problems, such as nonlinear least squares, nonlinear simultaneous equations, and other types of function minimization. First, the function values at the N+1 vertices of an initial simplex, a polytope in the factor space of the N input variables, are evaluated. Through a sequence of elementary geometric transformations (reflection, expansion, inside contraction and outside contraction), the initial simplex moves, expands or contracts. In the minimization case, the vertex with the highest function value is replaced by a newly reflected, better point, which is approximately located in the negative gradient direction: the objective function is evaluated at every vertex, and the worst vertex is mirrored through the centroid of the remaining vertices. This is called a reflection. A reflection can be accompanied by an expansion to take larger steps, or by a contraction to shrink the simplex when an optimization valley floor is reached. The procedure continues until the termination criteria are met; the termination criterion is usually a maximum number of reflections and contractions, or the volume (area) of the simplex. The algorithm is generally implemented in N dimensions, where the simplex is a polytope with N+1 vertices.
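The following is a simplified Python sketch of one Nelder-Mead iteration under the conventional coefficient values (the shrink step and termination tests are omitted; names are illustrative):

```python
import numpy as np

def nelder_mead_step(simplex, f, alpha=1.0, gamma=2.0, rho=0.5):
    """One reflection/expansion/contraction step of the Nelder-Mead
    simplex method on a list of N+1 vertices (numpy arrays)."""
    simplex.sort(key=lambda p: f(p))           # order vertices, worst last
    worst = simplex[-1]
    centroid = np.mean(simplex[:-1], axis=0)   # centroid of all but the worst
    reflected = centroid + alpha * (centroid - worst)
    if f(reflected) < f(simplex[0]):           # very good: try expanding further
        expanded = centroid + gamma * (reflected - centroid)
        simplex[-1] = expanded if f(expanded) < f(reflected) else reflected
    elif f(reflected) < f(simplex[-2]):        # good enough: accept the reflection
        simplex[-1] = reflected
    else:                                      # poor: contract toward the centroid
        simplex[-1] = centroid + rho * (worst - centroid)
    return simplex
```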
3 THE HYBRID PSO-SIMPLEX ALGORITHM (HPSO)
In this article, we introduce a hybridization of PSO and the simplex algorithm. As a classical and powerful local descent algorithm, the simplex algorithm has the advantages of simplicity and efficiency; however, it is easily trapped in a local optimum, and its convergence is extremely sensitive to the initial starting point. Conversely, the advantage of PSO is that it is less likely to be trapped in local optima, but its convergence rate slows down and its computational cost grows high in the later stage of the search.

3.1 Improved simplex algorithm

If the optimal solution of an N-dimensional problem is very far from the starting point, a second expansion operator may help improve the convergence rate. This operator is applied only after a successful expansion attempt, since in that situation the current simplex is very likely still far from the optimal solution. In the standard method, if a vertex is located outside a variable boundary, its response is assigned an unfavorable value; this can be inefficient when the optimum lies on the boundary, because the simplex is forced away from the optimum and approaches the boundary several times before the optimum is reached [4]. The expansion is therefore performed again in order to extend the search in the same promising direction, and the second expansion point is calculated by the following equation:
$$P_{\text{second exp}} = \theta P_{\text{exp}} + (1 - \theta) P_{\text{high}} \qquad (3)$$
where $\theta$ is the second expansion coefficient ($\theta > 1$); the choice $\theta = 2$ has been tested with much success in early computational experience. If a vertex R is located outside the variable boundary, it is corrected back to the boundary (R0) and the search continues with a new simplex in which R is replaced by R0, following the procedure suggested by Routh et al. If the second expansion succeeds, it is accepted by replacing $P_{\text{high}}$ with $P_{\text{second exp}}$; otherwise, $P_{\text{exp}}$ replaces $P_{\text{high}}$. A new iteration is then started. A Python sketch of this operator is given below.
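A minimal sketch of the second expansion operator, assuming the acceptance test compares the new point against the first expansion point (the paper does not spell out the test) and that out-of-bounds vertices are clipped back to the boundary:

```python
import numpy as np

def second_expansion(p_exp, p_high, f, lower, upper, theta=2.0):
    """Second expansion per Eq. (3): push further along the direction of a
    successful expansion; theta > 1, with theta = 2 the suggested choice."""
    p_second = theta * p_exp + (1.0 - theta) * p_high  # Eq. (3)
    p_second = np.clip(p_second, lower, upper)         # correct R back to the boundary R0
    # Replace P_high with the better of the two expansion points.
    return p_second if f(p_second) < f(p_exp) else p_exp
```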
3.2 Modified PSO

One of the major drawbacks of the standard PSO is its premature convergence, especially when handling problems with many local optima. In the modified PSO (IPSO), each member of a complex is a potential parent able to participate in a process of evolution, and a sub-swarm selected from the complex acts like a pair of parents. To keep the evolution process competitive, we require that better parents contribute to the generation of offspring with a higher probability than worse parents. Finally, each new offspring replaces the worst point of the current sub-swarm, rather than the worst point of the entire population; this ensures that every member takes part in the evolution process before being replaced or discarded, so none of the information contained in the sample is ignored [5]. The algorithm proceeds as follows (a sketch of the outer loop is given after the list):

Step 1: Initializing. Select p ≥ 1 and m ≥ 1, where p is the number of sub-swarms (complexes) and m is the number of points in each complex. Compute the sample size s and the function value f_i at each point X_i.
Step 2: Rank the points in order of increasing function value.
Step 3: Partition the ranked points into p complexes, each containing m points.
Step 4: Evolve each complex separately using PSO.
Step 5: Merge the points of all complexes back into a single population and sort them in order of increasing function value.
Step 6: If the convergence criterion is satisfied, stop; otherwise, go to Step 4.
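A sketch of the outer loop of the modified PSO; the interleaved partition is an assumption borrowed from shuffled-complex methods, and `evolve_pso` stands for any PSO step routine applied within a complex:

```python
def modified_pso_iteration(points, f, p, evolve_pso):
    """Steps 2-5: rank the population, partition it into p complexes,
    evolve each complex with PSO, then merge and re-rank."""
    points.sort(key=f)                                 # Step 2: rank by function value
    complexes = [points[k::p] for k in range(p)]       # Step 3: partition (interleaved)
    complexes = [evolve_pso(c, f) for c in complexes]  # Step 4: evolve each complex
    merged = [x for c in complexes for x in c]         # Step 5: merge back ...
    merged.sort(key=f)                                 # ... and re-rank
    return merged
```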
3.3 The hybrid PSO-simplex algorithm (HPSO)

PSO is one of the heuristic global search algorithms. The goal of integrating the simplex method with PSO is to combine their advantages and avoid their disadvantages. In addition, since the search pattern of PSO follows Eq. (1) and Eq. (2), combining PSO with the simplex method enriches its search strategies. The initial population is randomly generated in the problem search space. General hybridization schemes fall into two broad categories: the staged pipelining type and the additional-operator type. In the first type, the stochastic optimization process is applied to all individuals in the population, followed by further improvement using the simplex search. In the second type, the simplex method is applied as if it were a standard genetic operator, with a corresponding probability specified by the user [7]. Given the character of wave impedance inversion, the staged pipelining type is used here.

In HPSO, for every particle that violates the constraints, the gradient repair method is used to direct the infeasible solution toward the feasible region. If two particles have the same constraint fitness value, their objective fitness values are compared to determine their relative position: the one with the better objective fitness value is positioned in front of the other. The top N1 particles are then fed into the simplex search method for improvement, while the PSO method adjusts the remaining particles by taking into account the positions of the N1 best particles; this adjustment involves selection of the global best particle, selection of the neighborhood best particles, and finally velocity updates. In every generation, after the stochastic optimization process is applied to all individuals in the population, D+1 points are selected from the population based on rank-based fitness to generate the initial simplex; the simplex operators are then applied several times to update it, and the selected points are replaced with the new simplex vertices. The particle with the better fitness value in each neighborhood is denoted as the neighborhood best particle. To start the simplex method, one has to define the initial simplex, in principle composed of D+1 distinct vectors [7]; here the D+1 solutions are selected from the population based on rank. The particles are then sorted in preparation for the next generation, and the process terminates when a certain convergence criterion is satisfied. The steps of HPSO are summarized as follows (a sketch of the main loop follows the list):

1. Initialization. Randomly generate N particles (solutions) and evaluate them. Select the better positions as the initial particles [8].
Repeat:
2. Repair particles that violate the constraints by directing the infeasible solutions toward the feasible region; discard unrepairable solutions.
3. Each particle generates a candidate position according to Eq. (2); evaluate the candidate position and keep the better one as the new position.
4. Rank the solutions and calculate the rank-based fitness.
5. Each particle selects a better position and generates a candidate solution according to Eqs. (1)-(2); evaluate the candidate and keep the better one.
6. Select D+1 points from the population based on rank-based fitness to generate the initial simplex.
7. Simplex method: apply the simplex operator to the top D+1 particles and update the worst of them.
8. Replace the points selected in step 6 with the simplex vertices obtained in step 7.
9. Memorize the best solution found so far.
Until a termination condition is met.
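The following Python sketch ties the pieces together for the staged pipelining hybrid, reusing the `pso_update` and `nelder_mead_step` routines sketched in Section 2. It applies one simplex step per generation and omits constraint repair, so it illustrates the control flow rather than the full HPSO:

```python
import numpy as np

def hpso(f, lower, upper, n_particles=30, n_iter=200):
    """Staged pipelining HPSO sketch: a PSO sweep over the whole swarm,
    then a Nelder-Mead update of a rank-selected (D+1)-vertex simplex."""
    d = len(lower)
    x = np.random.uniform(lower, upper, (n_particles, d))
    v = np.zeros_like(x)
    pbest = x.copy()
    for _ in range(n_iter):
        gbest = min(pbest, key=f).copy()
        for i in range(n_particles):                # stochastic stage: PSO sweep
            x[i], v[i] = pso_update(x[i], v[i], pbest[i], gbest)
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i].copy()
        order = np.argsort([f(p) for p in pbest])   # rank-based selection
        simplex = [pbest[j].copy() for j in order[:d + 1]]
        simplex = nelder_mead_step(simplex, f)      # local stage: simplex step
        for j, vertex in zip(order[:d + 1], simplex):
            if f(vertex) < f(pbest[j]):             # write improved vertices back
                pbest[j] = vertex
    return min(pbest, key=f)
```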
4 EXAMPLES
The object of wave impedance inversion is to reconstruct the model from the observed data; that is, the medium parameters, source parameters and interface geometry are deduced from surface seismic records and responses. For nonlinear geological inversion, the inversion process can be regarded as an optimization in a high-dimensional parameter space. Usually, the objective function is designed as the difference between the observed data and the theoretical model data, so seismic impedance inversion can also be transformed into an optimization problem through the objective function. Usually the seismic records are first obtained by observation instruments, then theoretical records are computed from the current model parameters, and the difference between them is evaluated. If the difference does not meet the convergence condition, the evolution process is repeated until the condition is satisfied. The objective function is defined as follows:
$$E(\sigma) = \sum_{i=1}^{n} (u'_i - u_i)^2 \qquad (4)$$

In this formula, $\sigma = (\sigma_1, \sigma_2, \ldots, \sigma_M)$ is the vector of parameters to be inverted, M is the number of layers, $u'_i$ and $u_i$ $(i = 1, 2, \ldots, n)$ are the observed data and the theoretical synthetic records, respectively, and n is the number of samples. With the objective function (4) used as the convergence criterion, the inversion problem can be stated as follows: find $\sigma^* \in X$ such that

$$E(\sigma^*) = \min_{\sigma \in X} E(\sigma) \qquad (5)$$

where in formula (5) $X = \{ \sigma = (\sigma_1, \sigma_2, \ldots, \sigma_M) \mid \sigma_i^- \le \sigma_i \le \sigma_i^+,\ i = 1, 2, \ldots, M \}$.

To further improve the inversion efficiency, the objective function of formula (4) can be transformed into the form of formula (6):

$$E(\sigma) = \sum_{i=1}^{n} (u'_i - u_i)^2 + \sum_{i=1}^{M} (\sigma_i - \sigma_i^*)^2 \qquad (6)$$
In formula (6), $\sigma_i$ is the computed parameter and $\sigma_i^*$ is the prior estimate (prior knowledge).
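As a hedged illustration, the objective of Eq. (6) can be coded as below; `forward` stands for any routine computing the synthetic records from a candidate model (e.g. convolution of the Ricker wavelet with the reflection coefficients), and the prior term is written squared, the usual choice for such a regularizer:

```python
import numpy as np

def objective(sigma, u_obs, forward, sigma_prior):
    """Eq. (6): data misfit between observed and synthetic records plus
    a regularization term pulling sigma toward the prior model."""
    residual = u_obs - forward(sigma)             # u'_i - u_i
    misfit = np.sum(residual ** 2)                # first sum of Eq. (6)
    regular = np.sum((sigma - sigma_prior) ** 2)  # second sum of Eq. (6)
    return misfit + regular
```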
Suppose the density is constant and the number of samples is 150. A 60 Hz Ricker wavelet is given, and the synthetic seismic records are computed by convolving the theoretical wavelet with the reflection coefficients. For the wave impedance inversion problem, the statistics of the best solutions obtained by PSO, CPSO [9] and HPSO are shown in Table 1. The table demonstrates the superiority of HPSO over the other methods in searching for the global optimum of the wave impedance inversion problem. Based on these simulation results and comparisons, it can be concluded that HPSO is competitive in solving the wave impedance inversion problem: it not only attains the best optimal solution among all the methods, but even the worst solution found by HPSO is better than the best solutions of the other methods.

Table 1. Statistical results of different methods for the wave impedance inversion problem

Methods   Best        Mean        Worst       Std.
PSO       6207.6737   6286.8243   6218.4575     7.3853
CPSO      6059.9463   6177.2533   6469.3220   130.9297
HPSO      5930.3137   5946.7901   5960.0557     9.1614
5 CONCLUSIONS
In this paper, PSO combined with the simplex method is proposed for solving the wave impedance inversion problem. The motivation for such a hybrid method is to explore a better tradeoff between computational cost and global optimality of the solution attained. Practical application of HPSO to the wave impedance inversion problem shows that it is more accurate, reliable and efficient at locating global optima than the alternatives.
REFERENCES

[1] Yang Wen-cai. Trends of geophysics inversion. Earth Science Frontiers, 2002, 9(4): 389-396.
[2] M. Clerc and J. Kennedy. The particle swarm: explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput., 2002, 6(1): 58-73.
[3] F. van den Bergh and A. P. Engelbrecht. A cooperative approach to particle swarm optimization. IEEE Trans. Evol. Comput., 2004, 8(3): 225-239.
[4] E. Zahara and Y.-T. Kao. Hybrid Nelder-Mead simplex search and particle swarm optimization for constrained engineering design problems. Expert Systems with Applications, 2009, 36: 3880-3886.
[5] G. Z. Cui, Y. Y. Niu, and Y. F. Wang. A new approach based on PSO algorithm to find good computational encoding sequences. Progress in Natural Science, 2007, 17(6): 712-716.
[6] T. Tillett, T. M. Rao, F. Sahin, and R. Rao. Darwinian particle swarm optimization. In: Proceedings of the 2nd Indian International Conference on Artificial Intelligence, Pune, India, 2005: 1474-1487.
[7] R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization: an overview. Swarm Intelligence, 2007, 1: 33-57.
[8] K. W. Chau. Particle swarm optimization training algorithm for ANNs in stage prediction of Shing Mun River. Journal of Hydrology, 2006, 329(3-4): 363-367.
[9] B. Niu, Y. Zhu, X. He, and H. Wu. MCPSO: a multi-swarm cooperative particle swarm optimizer. Applied Mathematics and Computation, 2007, 185: 1050-1062.