Applied Soft Computing 11 (2011) 5424–5432


Finding all solutions of nonlinear systems using a hybrid metaheuristic with Fuzzy Clustering Means

W.F. Sacco a,c,∗, N. Henderson b,c

a Universidade Federal do Oeste do Pará, Av. Marechal Rondon, s/n, Santarém, PA 68040-070, Brazil
b Instituto Politécnico, Universidade do Estado do Rio de Janeiro, R. Alberto Rangel, s/n, Nova Friburgo, RJ 28630-050, Brazil
c Thermodynamics and Optimization Group (TOG), Brazil

Article info

Article history:
Received 14 August 2008
Received in revised form 27 October 2010
Accepted 1 May 2011
Available online 7 May 2011

Keywords:
Nonlinear systems
Global optimization
Metaheuristics
Fuzzy clustering

Abstract

We apply a recently introduced hybrid metaheuristic to solve nonlinear systems of equations with multiple roots, formulated as an optimization problem. In this technique, first, the Luus–Jaakola random search method is used to explore the search space. Then, in order to find more than one root, the best solutions found so far are clustered using Fuzzy Clustering Means. Finally, multiple Nelder–Mead simplex instances are applied, using these solutions as starting points, to search within their respective clusters’ domains. Our method is compared against other methodologies on benchmarks from the literature and is shown to perform well. Moreover, we successfully apply it to a real-world nonlinear system from the field of chemical engineering: the double retrograde vaporization problem.

© 2011 Elsevier B.V. All rights reserved.

1. Introduction

The development of realistic and appropriate models in the form of highly nonlinear systems has contributed to the description of important real-world problems. Several of these systems of equations are NP-hard [34]. In fact, they demand the simultaneous satisfaction of a number of inequality and/or variable-bound constraints, and can present multiple roots. In the literature, there have been recent efforts to solve nonlinear systems using metaheuristics. C-GRASP [18], an adaptation of the well-known GRASP algorithm [10] to continuous global optimization, was successfully applied to numerous benchmarks [18,19]. In a very interesting approach, Grosan and Abraham [13] used an evolutionary algorithm [12] to solve nonlinear systems by treating them as multiobjective optimization problems. In this work, we apply a recently introduced hybrid metaheuristic to the solution of nonlinear systems with multiple roots, formulated as an optimization problem. This three-step metaheuristic was created by Henderson and Sacco [15] with multimodal optimization problems in mind, as the search space is clustered in order to maintain diversity. It was originally applied to a specific real-world chemical engineering problem. Here, we extend the application to

∗ Corresponding author at: Universidade Federal do Oeste do Pará, Av. Marechal Rondon, s/n, Santarém, PA 68040-070, Brazil. Tel.: +55 93 8402 6503.
E-mail addresses: [email protected], [email protected] (W.F. Sacco), [email protected] (N. Henderson).
1568-4946/$ – see front matter © 2011 Elsevier B.V. All rights reserved. doi:10.1016/j.asoc.2011.05.016

benchmarks from the literature of nonlinear systems and provide a more rigorous exposition of the method.

The technique works as follows. First, the Luus–Jaakola random search algorithm [33] is used to explore the search space. The reader should bear in mind that any other random search method could have been used: our purpose in this first step is not necessarily to obtain a low-fitness solution, but to promote a thorough exploration of the search space. Secondly, the best solutions found by the Luus–Jaakola algorithm are grouped using a cluster analysis algorithm. According to Marriott [35], cluster analysis is “a general approach to multivariate problems in which the aim is to see whether the individuals fall into groups or clusters. There are several methods of procedure; most depend on setting up a metric to define the closeness of individuals”. In this work, we use a cluster analysis method called Fuzzy Clustering Means, or simply FCM [4,20]. This technique, which has already been associated with optimization algorithms [46,47], uses the concept of fuzzy logic [54] to group the solutions proposed by the Luus–Jaakola algorithm. Thirdly, multiple Nelder–Mead simplex [40] instances are applied, using the best solutions found by the Luus–Jaakola algorithm as starting points, to search within their respective clusters’ domains. The Nelder–Mead simplex local search algorithm has been widely employed in combination with stochastic optimization algorithms; see, for recent examples, Zahara and Kao [55], Sacco et al. [48], and Chelouah and Siarry [6].

Das [7] and Koduru et al. [26] recently applied a conceptually similar scheme: the Particle Swarm Optimization algorithm (PSO [25]) is applied to the optimization problem and, in each generation, the population is clustered using the k-means algorithm [1], with a few steps of the Nelder–Mead algorithm applied separately to each cluster. One must bear in mind, however, that these authors use a populational stochastic algorithm, while we sample a number of points using a single-particle technique, LJ. Moreover, we cluster the population using FCM, while Das [7] and Koduru et al. [26] employ the more traditional k-means. In spite of its popularity, k-means is less accurate than FCM (see [28], for example).

The remainder of the paper is organized as follows. In the next section, the solution of a nonlinear system is formulated as an optimization problem. Section 3 presents the hybrid metaheuristic. In Section 4, this method is applied to nonlinear benchmarks from the literature and to a real-world nonlinear system from the field of chemical engineering [16]. Finally, the conclusions are drawn.

2. Problem formulation

Let us consider the problem of computing solutions of nonlinear systems with simple bound constraints. We can express this problem as

 f1(x) = 0
 f2(x) = 0
 ...
 fN(x) = 0,   s.t. x ∈ [a, b] ⊆ ℝN,      (1)

where x = (x1, . . ., xN)T ∈ ℝN, fi: ℝN → ℝ, and [a, b] ≡ [a1, b1] × [a2, b2] × . . . × [aN, bN], with ai < bi for all i = 1, . . ., N. Note that the vectors a = (a1, a2, . . ., aN) and b = (b1, b2, . . ., bN) are specified as the lower and upper bounds of the variables, and the set [a, b] is a box in ℝN where there exist one or more roots of the nonlinear system. Each function fi may be nondifferentiable or even discontinuous, but it must be bounded in [a, b]. If F = (f1(x), . . ., fN(x))T, the problem described by Eq. (1) can be reformulated as the following optimization problem:

 Min f(x) s.t. x ∈ [a, b] ⊆ ℝN,      (2)

where f is a merit function that vanishes exactly at the roots of Eq. (1), here taken as the sum of the absolute values of the residuals:

 f(x) = Σ_{i=1}^{N} |fi(x)|.      (3)

Since the system represented by Eq. (1) has solution(s) in [a, b], then, in terms of results, solving this system is equivalent to finding the global minimum(a) of the optimization problem given by Eq. (2).

3. The optimization method

3.1. The Luus–Jaakola algorithm

Random search methods for optimization are based on a random exploration of a domain to find a point that minimizes an objective function. They were originally introduced by Anderson [2], and then developed by Karnopp [22] and Matyas [36], among others. Random search methods have been widely employed in chemical engineering for continuous optimization, as, for example, those proposed by Luus and Jaakola [33], Gaines and Gaddy [11], and Salcedo et al. [51]. The most popular of these techniques is the Luus–Jaakola algorithm [33], which has been used not only in chemical engineering [33,29,32], but also in control problems [31], in optics [3], in electrical engineering [52], in chromatography [44], and in nuclear engineering [49], among other applications. As stated by Liao and Luus [30], the idea behind the Luus–Jaakola algorithm is very simple: random solutions are selected over a region that is decreased in size as the iterations proceed.

Our implementation of Luus–Jaakola is described below. It differs from the original algorithm proposed by Luus and Jaakola [33] in one point: while, originally, x∗ was replaced by a possibly improved solution only after the internal loop was completed, we replace x∗ immediately whenever a better solution is found, as suggested by Gaines and Gaddy [11] in their optimization algorithm.

Algorithm 1: Luus–Jaakola
Data: initial search size r(0), number of external loops nout and internal loops nin, contraction coefficient ε, and initial solution x∗.
Result: optimal solution x∗.
begin
  for i ← 1 to nout do
    for j ← 1 to nin do
      x(j) ← x∗ + R(j) r(i−1), where R(j) is a diagonal matrix of random numbers between −0.5 and 0.5
      if f(x(j)) < f(x∗) then x∗ ← x(j)
    r(i) ← (1 − ε) r(i−1)
end
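As a concrete illustration, the procedure above can be sketched in Python. This is a schematic rendering of Algorithm 1 with the immediate-replacement rule of Gaines and Gaddy [11], not the authors' implementation; the test function, seed, and parameter values are illustrative assumptions:

```python
import random

def luus_jaakola(f, x0, r0, n_out, n_in, eps, rng=random.Random(42)):
    """Luus-Jaakola random search with immediate replacement of x*.

    f     : objective to minimize
    x0    : initial solution (list of floats)
    r0    : initial search-region half-widths, one per dimension
    n_out : number of external loops (region contractions)
    n_in  : number of random trials per region size
    eps   : contraction coefficient, 0 < eps < 1
    """
    x_best = list(x0)
    f_best = f(x_best)
    r = list(r0)
    for _ in range(n_out):
        for _ in range(n_in):
            # x = x* + R r, with R a diagonal matrix of U(-0.5, 0.5) numbers
            x = [xb + (rng.random() - 0.5) * ri for xb, ri in zip(x_best, r)]
            fx = f(x)
            if fx < f_best:                 # immediate replacement (Gaines-Gaddy)
                x_best, f_best = x, fx
        r = [(1.0 - eps) * ri for ri in r]  # contract the search region
    return x_best, f_best

# Illustrative use: minimize a shifted sphere function on a 2-D box.
sphere = lambda x: sum((xi - 1.0) ** 2 for xi in x)
x_star, f_star = luus_jaakola(sphere, [0.0, 0.0], [10.0, 10.0],
                              n_out=200, n_in=50, eps=0.05)
```

Note that, as in the text, the sketch is used here only as an exploration device; precise roots are obtained later by the local search.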

3.2. Fuzzy Clustering Means

The fuzzy cluster separation known as FCM became popular because it does not require, at each iteration, the total allocation of an individual to a certain cluster or class. The algorithm borrows from fuzzy logic the concept of pertinence, which denotes the degree of association of an individual to a given class. By definition, in a fuzzy classifier, the pertinence μik (individual k, class i) must satisfy the following conditions [4]:

 μik ∈ [0, 1],   Σ_{i=1}^{C} μik = 1 for each k,   0 < Σ_{k=1}^{N} μik < N for each i,      (4)

where C is the number of classes, fixed a priori, and N is the population size. The centroid of class i is given by

 ci = (Σ_{k=1}^{N} μik^m xk) / (Σ_{k=1}^{N} μik^m),      (5)

where m > 1 is the nebulosity degree. FCM minimizes the functional

 J(L, U) = Σ_{i=1}^{C} Σ_{k=1}^{N} μik^m dik²,      (6)

where dik = ‖xk − ci‖, L = (c1, c2, . . ., cC) is the array of centroids of all classes, and U is the C×N matrix that contains all pertinences μik. This matrix is known as the fuzzy C-partition matrix. Differentiating Eq. (6) with respect to μik, equating it to zero, and respecting constraints (4), the following equation is obtained [20]:

 μik = 1 / Σ_{j=1}^{C} (dik/djk)^{2/(m−1)}.      (7)

This equation quantifies the pertinence of the kth individual to the ith class or cluster. Notice that it takes into account not only the distance of the kth individual to the ith class, but also the distances of this individual to all the other classes. The FCM algorithm is, basically, the following:

Algorithm 2: Fuzzy Clustering Means
Data: N points to be classified, number of classes C, nebulosity degree m, convergence criterion ε.
Result: all points are assigned to a class.
begin
  l ← 1
  Initialize the C-partition fuzzy matrix U(0)
  repeat
    Evaluate ci using Eq. (5)
    Update the elements of U(l) using Eq. (7)
    l ← l + 1
  until ‖U(l−1) − U(l)‖ < ε
end

In this paper, we use m = 2 and ε = 10−3. The other parameters (N and C) are problem-dependent.

3.3. The Nelder–Mead algorithm

The Nelder–Mead simplex method is a local search algorithm for obtaining a solution of optimization problems. It belongs to the class of direct search methods [5], which try to solve optimization problems using only objective function values. The choice of a direct search method avoids the calculation of derivatives of the objective function, being effective in problems where the function is discontinuous.

In a given iteration of the Nelder–Mead algorithm, n + 1 points, denoted by x1, . . ., xn, xn+1, are used to compute trial steps. In all iterations, we order the points so that f(x1) ≤ · · · ≤ f(xn) ≤ f(xn+1) holds. A trial step is accepted or rejected based on the function value of the trial point and on the three values f(x1), f(xn) and f(xn+1). Geometrically, at each iteration, these n + 1 points may be thought of as the vertices of a simplex in ℝn, S = [x1, . . ., xn, xn+1] ⊂ ℝn. If n = 2, for example, then S = [x1, x2, x3] ⊂ ℝ2 is a triangle in the Euclidean plane. Thus, xn+1 is the vertex of the simplex that has the largest value of f (worst vertex). Trial steps are generated by the operations of reflection, expansion, contraction, and shrinkage. A reflected vertex is computed by reflecting the worst vertex through the centroid of the remaining vertices:

 xr = x̄ + α(x̄ − xn+1),      (8)

where α > 0 is the reflection coefficient and x̄ is the centroid given by

 x̄ = (1/n) Σ_{i=1}^{n} xi.      (9)

The reflected vertex is accepted if f(x1) ≤ f(xr) < f(xn), and the next iteration begins with the simplex defined by S = [x1, . . ., xn, xr], where xr is subsequently ordered with respect to the other vertices. If f(xr) < f(x1), then the trial step generated a very good point and the step is expanded. In this case, the expansion vertex is computed as

 xe = γxr + (1 − γ)x̄,      (10)

where γ > 1 is the expansion coefficient. If f(xe) < f(x1), then xe is accepted; otherwise, xr is accepted. Thus, if f(xr) < f(xn), either the reflected or the expanded vertex is accepted and the next iteration begins. But if f(xn) ≤ f(xr), a contraction is performed. When f(xr) ≥ f(xn+1), the internal contraction vertex is computed as

 xc = βxn+1 + (1 − β)x̄,      (11)

where β = 1/2 is the contraction coefficient. Otherwise, i.e. when f(xn) ≤ f(xr) < f(xn+1), the external contraction vertex is computed as

 x̂c = βxr + (1 − β)x̄.      (12)

The contraction vertex is accepted if its function value improves on the corresponding reference value (see Algorithm 3). Finally, if both the reflection vertex and the contraction vertex are rejected, the simplex is shrunk. In this case, each vertex xi, except x1, is replaced by

 xi = (x1 + xi)/2,   ∀i = 2, . . ., n + 1.      (13)

The values f(xi) are then computed and sorted. This final procedure determines the new simplex S = [x1, . . ., xn, xn+1] with which the next iteration starts. The algorithm of the Nelder–Mead simplex method is given below [24].

Algorithm 3: Nelder–Mead Simplex
begin
  Sort the vertices of S so that f(x1) ≤ . . . ≤ f(xn) ≤ f(xn+1) holds.
  while f(xn+1) − f(x1) > ε do
    a. Compute x̄, xr, and f(xr).
    b. Reflection: if f(x1) ≤ f(xr) < f(xn) then
         Replace xn+1 with xr; goto step g
    c. Expansion: if f(xr) < f(x1) then
         Compute f(xe)
         if f(xe) < f(xr) then Replace xn+1 with xe
         else Replace xn+1 with xr
         goto step g
    d. External contraction: if f(xn) ≤ f(xr) < f(xn+1) then
         Compute f(x̂c)
         if f(x̂c) ≤ f(xr) then Replace xn+1 with x̂c; goto step g
         else goto step f
    e. Internal contraction: if f(xr) ≥ f(xn+1) then
         Compute f(xc)
         if f(xc) < f(xn+1) then Replace xn+1 with xc; goto step g
         else goto step f
    f. Shrinkage: for i ← 2 to n + 1 do
         xi ← (x1 + xi)/2; Compute f(xi)
    g. Sort: Sort the vertices of S so that f(x1) ≤ . . . ≤ f(xn) ≤ f(xn+1) holds.
end
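A compact Python rendering of Algorithm 3 might look as follows, with α = 1, β = 1/2 and γ = 2 (the values used in Section 4). The helper names and the quadratic test function are illustrative assumptions, not the authors' C++ code:

```python
def nelder_mead(f, simplex, eps=1e-8, max_iter=1000):
    """Nelder-Mead simplex iteration per Algorithm 3 (alpha=1, beta=0.5, gamma=2)."""
    alpha, beta, gamma = 1.0, 0.5, 2.0
    S = sorted((list(v) for v in simplex), key=f)
    n = len(S) - 1
    for _ in range(max_iter):
        fS = [f(v) for v in S]
        if fS[-1] - fS[0] <= eps:               # stopping test of Algorithm 3
            break
        centroid = [sum(v[i] for v in S[:-1]) / n for i in range(n)]
        xr = [c + alpha * (c - w) for c, w in zip(centroid, S[-1])]
        fr = f(xr)
        if fS[0] <= fr < fS[-2]:                # b. reflection
            S[-1] = xr
        elif fr < fS[0]:                        # c. expansion
            xe = [gamma * r + (1 - gamma) * c for r, c in zip(xr, centroid)]
            S[-1] = xe if f(xe) < fr else xr
        else:
            if fr < fS[-1]:                     # d. external contraction
                xc = [beta * r + (1 - beta) * c for r, c in zip(xr, centroid)]
                ok = f(xc) <= fr
            else:                               # e. internal contraction
                xc = [beta * w + (1 - beta) * c for w, c in zip(S[-1], centroid)]
                ok = f(xc) < fS[-1]
            if ok:
                S[-1] = xc
            else:                               # f. shrinkage toward x1
                S = [S[0]] + [[(a + b) / 2 for a, b in zip(S[0], v)] for v in S[1:]]
        S.sort(key=f)                           # g. sort
    return S[0], f(S[0])

# Illustrative use: minimize a 2-D quadratic from a unit simplex.
quad = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2
x_min, f_min = nelder_mead(quad, [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
```

The sketch re-evaluates vertices each pass for clarity; a production version would cache function values.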

3.4. The hybrid metaheuristic

In order to determine all minima of our optimization problem, our approach uses a hybrid strategy. First of all, using the Luus–Jaakola algorithm, we generate an initial sampling with points belonging to the feasible region Ω. This step is called “Global Search”. In the next step, called “Selection”, we select the best points from the initial sample. In the third step, “Clustering”, we group these best points using the Fuzzy Clustering Means algorithm and build boxes from the clusters’ radii. Following that comes the “Local Search” step, where each best point is used as an initial solution for the Nelder–Mead algorithm to solve a subproblem within its cluster’s domain. Finally, the minima found by the Nelder–Mead instances are sorted in ascending order of objective-function value, so that the best solutions (i.e., global optima in a multimodal problem) are highlighted. Our hybrid approach can be described as follows:

(1) Global search: use a few steps of the Luus–Jaakola algorithm to generate an initial sampling of, say, np points, x1, . . ., xnp ∈ Ω.
(2) Selection: from the initial sampling, select the nb ≤ np best points, x1, . . ., xnb, such that f(x1) ≤ · · · ≤ f(xnb) ≤ · · · ≤ f(xnp).
(3) Clustering:
 a. Use FCM to assign the nb best points to C clusters.


Table 1
Parameters employed for Luus–Jaakola and FCM in each example.

 Example   Luus–Jaakola           FCM
           nouter    ninner       nb      C
 1         100       1000         100     10
 2         100       1000         250     15
 3         100       1000         100     10
 4         10        1000         100     10
 5         100       1000         250     15
 6         100       1000         400     20
 7         100       10,000       1000    10
 8         100       1000         100     10
 9         100       1000         1000    10
 10        100       100,000      1000    10
 11        100       100,000      1000    10

 b. Build a box from each cluster radius, as follows: consider a cluster G with nG points x(j) ∈ ℝN. Given the centroid cG of G, calculate the distance from cG to each x(j):

  d(j)(cG, x(j)) = max_{1≤i≤N} |ciG − xi(j)|.      (14)

Table 2
Numerical examples used in our experiments.

 Example                                        Number of variables   Range
 1. Brown’s almost linear system                5                     [−10, 10]^5
 2. High-degree polynomial system               3                     [−0.6, 6] × [−0.6, 0.6] × [−5, 5]
 3. Himmelblau                                  2                     [−5, 5]^2
 4. Trigonometric system no. 1                  2                     [0.25, 1] × [1.5, 2]
 5. Trigonometric system no. 2                  2                     [0, 2]^2
 6. Robot kinematics application                8                     [−1, 1]^8
 7. Kinematics application (kin2)               8                     [−10, 10]^8
 8. Automotive steering example                 3                     [0.06, 1]^3
 9. Conformal analysis of cyclic molecules      3                     [−10, 10]^3
 10. Chemical equilibrium problem               5                     [0, 100]^5
 11. Double retrograde vaporization problem     2                     [0, 1] × [100, 10,000]

 Then, the radius r of cluster G may be given by

  r = max_{1≤j≤nG} d(j)(cG, x(j)).      (15)

 Thus, the cluster G can be defined as a subregion of ℝN such that

  G = {x ∈ ℝN | max_{1≤i≤N} |ciG − xi| ≤ r}.      (16)

 This cluster may also be seen as a box in ℝN with coordinates [c1G − r, c1G + r] × [c2G − r, c2G + r] × . . . × [cNG − r, cNG + r].
(4) Local search: using the Nelder–Mead algorithm, for each box Ωi ⊂ Ω originated from one of the C clusters, solve the ni constrained optimization problems with the ni cluster samples as starting points:

  Min f(x) s.t. x ∈ Ωi ⊆ ℝn.      (17)

(5) Sorting: sort the minima x∗(1), x∗(2), . . ., x∗(nb) obtained by Nelder–Mead simplex, so that f(x∗(1)) ≤ f(x∗(2)) ≤ . . . ≤ f(x∗(nb)).

(6) Final selection: select the nf ≤ nb best minima x∗(1), . . ., x∗(nf), so that f(x∗(nf)) ≤ ε, where ε > 0 is sufficiently small.

4. Numerical examples

Our tests were performed on an AMD Athlon™ 64 X2 Dual Core Processor 4000+ PC with 1 GB RAM running openSUSE Linux 10.3. Our optimization method was implemented in C++ and compiled with GNU g++ version 4.2.1. Table 1 shows the parameters employed for Luus–Jaakola and FCM in each example. Nelder–Mead simplex was set up with the following parameters: α = 1.0, β = 0.5, γ = 2.0, ε = 10−6. As, in the literature of nonlinear systems, the performance of an algorithm is measured mostly in terms of computational time (see all the references in this section), we adopt the same evaluation criterion. We understand that computational time depends on hardware, platform, language, and coding style, but this is the only information available for the other methods. As in Hirsch et al. [19], we consider a root to be found if the objective function value becomes smaller than 10−7. We performed 100 independent runs with different random seeds for examples 1–10. Table 2 lists the numerical examples used to evaluate our method.

4.1. Example 1 – Brown’s almost linear system

We solved the 5-dimensional case of this system [38,21], given by

 f1 = 2x1 + x2 + x3 + x4 + x5 − 6
 f2 = x1 + 2x2 + x3 + x4 + x5 − 6
 f3 = x1 + x2 + 2x3 + x4 + x5 − 6      (18)
 f4 = x1 + x2 + x3 + 2x4 + x5 − 6
 f5 = x1 x2 x3 x4 x5 − 1

This system has three real roots. Jäger and Ratz [21] took 0.150 s to find these roots using a specialized method called BB + NLTSS (BB = Buchberger’s algorithm; NLTSS = Nonlinear Triangular System Solver). Our algorithm found them in all executions, taking an average CPU time of 0.077 s per execution.
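As a quick sanity check of the formulation, the residuals of Eq. (18) and a sum-of-absolute-values merit function can be evaluated at the known root x = (1, 1, 1, 1, 1); the helper names below are illustrative:

```python
def brown_residuals(x):
    """Residuals of Brown's almost linear system, Eq. (18), n = 5."""
    s = sum(x)
    f = [x[i] + s - 6.0 for i in range(4)]   # f1..f4: x_i + sum(x) - 6
    p = 1.0
    for xi in x:
        p *= xi
    f.append(p - 1.0)                         # f5: product(x) - 1
    return f

merit = lambda x: sum(abs(fi) for fi in brown_residuals(x))
root = [1.0, 1.0, 1.0, 1.0, 1.0]
```

At the root the merit function vanishes, which is the stopping condition exploited by the final-selection step.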

4.2. Example 2 – high-degree polynomial system

This example was taken from Kearfott [23], and consists of the following system of equations:

 f1 = 5x1^9 − 6x1^5 x2^2 + x1 x2^4 + 2x1 x3
 f2 = −2x1^6 x2 + 2x1^2 x2^3 + 2x2 x3      (19)
 f3 = x1^2 + x2^2 − 0.265625

This problem has 12 real solutions. A generalized bisection method based on boxes took 233 s to find all the solutions [23]. Our method was successful in finding all roots in each run, taking an average time of 0.169 s per execution.

4.3. Example 3 – Himmelblau

In this example, we determine the 9 roots of the Himmelblau system. This example was taken from Maranas and Floudas [34], and consists of the following system of equations:

 f1 = 4x1^3 + 4x1 x2 + 2x2^2 − 42x1 − 14
 f2 = 4x2^3 + 2x1^2 + 4x1 x2 − 26x2 − 22      (20)

Maranas and Floudas [34] used a branch and bound algorithm to solve this system, taking 10.89 s to find the 9 solutions. We found all these solutions in 100/100 executions, taking an average time of 0.040 s per execution.
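The multistart local-search idea can be illustrated on this system by launching Nelder–Mead from a coarse grid of starting points and collecting the distinct roots. The sketch below assumes SciPy is available and uses the squared-residual merit f1² + f2², which has the same zeros as the system; it is not the paper's implementation:

```python
import itertools
from scipy.optimize import minimize

def F(x):
    """Residuals of the Himmelblau system, Eq. (20)."""
    f1 = 4*x[0]**3 + 4*x[0]*x[1] + 2*x[1]**2 - 42*x[0] - 14
    f2 = 4*x[1]**3 + 2*x[0]**2 + 4*x[0]*x[1] - 26*x[1] - 22
    return f1, f2

merit = lambda x: sum(fi * fi for fi in F(x))

roots = set()
for x0 in itertools.product([-4.0, -2.0, 0.0, 2.0, 4.0], repeat=2):
    res = minimize(merit, x0, method='Nelder-Mead',
                   options={'xatol': 1e-12, 'fatol': 1e-14, 'maxiter': 5000})
    if res.fun < 1e-10:                      # accept only true roots
        roots.add((round(res.x[0], 4), round(res.x[1], 4)))
```

Rounding the converged points is a crude stand-in for the clustering step: it merges duplicate hits on the same root.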

Table 3
Coefficients aji for Example 7.

 j    aj1            aj2            aj3            aj4
 1    −0.249150680   0.125016350    −0.635550070   1.48947730
 2    1.609135400    −0.686607360   −0.115719920   0.23062341
 3    0.279423430    −0.119228120   −0.666404480   1.32810730
 4    1.434480160    −0.719940470   0.110362110    −0.25864503
 5    0.000000000    −0.432419270   0.290702030    1.16517200
 6    0.400263840    0.000000000    1.258776700    −0.26908494
 7    −0.800527680   0.000000000    −0.629388360   0.53816987
 8    0.000000000    −0.864838550   0.581404060    0.58258598
 9    0.074052388    −0.037157270   0.195946620    −0.20816985
 10   −0.083050031   0.035436896    −1.228034200   2.68683200
 11   −0.386159610   0.085383482    0.000000000    −0.69910317
 12   −0.755266030   0.000000000    −0.079034221   0.35744413
 13   0.504201680    −0.039251967   0.026387877    1.24991170
 14   −1.091628700   0.000000000    −0.057131430   1.46773600
 15   0.000000000    −0.432419270   −1.162808100   1.16517200
 16   0.049207290    0.000000000    1.258776700    1.07633970
 17   0.049220729    0.013873010    2.162575000    −0.69686809

4.4. Example 4 – trigonometric system no. 1

This system was taken from Maranas and Floudas [34], and was used as a test system by Hirsch et al. [19]. It is defined by

 f1 = 0.5 sin(x1 x2) − 0.25 x2/π − 0.5 x1
 f2 = (1 − 0.25/π)(exp(2x1) − e) + e x2/π − 2e x1      (21)

It has two solutions. Our algorithm found both roots in all executions, taking an average time of 0.069 s per execution. Maranas and Floudas [34] took 2.0 s to find the solutions. More recently, Hirsch et al. [19] used C-GRASP to find both roots in 100/100 executions, taking an average of 0.071 s to find the first root and 0.390 s to find the second, for a total average time of 0.461 s per execution.

4.5. Example 5 – trigonometric system no. 2

This system was also used by Hirsch et al. [19] to test C-GRASP as a nonlinear-system solver. It is defined by

 f1 = −sin(x1) cos(x2) − 2 cos(x1) sin(x2)
 f2 = cos(x1) sin(x2) − 2 sin(x1) cos(x2)      (22)

It has 13 solutions. Hirsch et al. [19] found all these solutions in 100/100 runs, taking a mean time of approximately 5 s. Our algorithm was also successful in finding the 13 roots in all runs, taking an average time of 0.393 s per execution.

4.6. Example 6 – robot kinematics application

This real-world example has been widely employed in the literature [39,23,34,18]. Morgan and Shapiro [39] describe it as “a common problem in kinematic analysis of robot manipulators (. . .), in which the desired position and orientation of the robot hand is given and the relative (robot) joint displacements are to be found.” The system is given as follows:

 f1 = 4.731 × 10−3 x1x3 − 0.3578 x2x3 − 0.1238 x1 + x7 − 1.637 × 10−3 x2 − 0.9338 x4 − 0.3571
 f2 = 0.2238 x1x3 + 0.7623 x2x3 + 0.2638 x1 − x7 − 0.07745 x2 − 0.6734 x4 − 0.6022
 f3 = x6x8 + 0.3578 x1 + 4.731 × 10−3 x2
 f4 = −0.7623 x1 + 0.2238 x3 + 0.3461      (23)
 f5 = x1² + x2² − 1
 f6 = x3² + x4² − 1
 f7 = x5² + x6² − 1
 f8 = x7² + x8² − 1

It has 16 solutions, all of which were found by our algorithm in 100/100 runs, taking an average time of 0.295 s per execution. With his generalized bisection technique, Kearfott [23] took 1,120 s to find all the roots. Applying their method, Maranas and Floudas [34] needed 109.58 s. Using an ingenious modification of the objective function to create an area of repulsion near solutions already found, Hirsch et al. [18] obtained success in 10/10 executions of C-GRASP, taking an average CPU time of 3,048 s.

4.7. Example 7 – kinematics application (kin2)

This application, known as kin2, describes the inverse position problem for a six-revolute-joint mechanism [53,13]. It is defined by the following system (1 ≤ i ≤ 4):

 fl = xi² + xi+1² − 1,   1 ≤ l ≤ 4
 fm = a1i x1x3 + a2i x1x4 + a3i x2x3 + a4i x2x4 + a5i x5x7 + a6i x5x8 + a7i x6x7 + a8i x6x8
    + a9i x1 + a10i x2 + a11i x3 + a12i x4 + a13i x5 + a14i x6 + a15i x7 + a16i x8 + a17i,   5 ≤ m ≤ 8      (24)

The coefficients aji, 1 ≤ j ≤ 17, are given in Table 3. We found the 10 solutions of this system in all executions, taking an average time of 1.076 s per execution. Using a branch and prune algorithm, Van Hentenryck et al. [53] needed 353.06 s to find all the solutions. It must be emphasized, though, that they considered the search domain [−10^8, 10^8]^8. Using the same range as ours, Grosan and Abraham [13] took 221.29 s using an evolutionary algorithm to optimize a transformation of this system into a multiobjective optimization problem. It must be added, though, that Grosan and Abraham [13] did not achieve a sum of the absolute values of the objective functions below 1, while we considered an execution to be successful only if the value given by Eq. (3) was less than 10−7.

4.8. Example 8 – automotive steering example

This system, which was used by Hirsch et al. [19] to test C-GRASP as a nonlinear-system solver, describes the kinematic synthesis mechanism for automotive steering. It is described by the following system, for i = 1, 2, 3:

 Gi(ψi, φi) = [Ei(x2 sin(ψi) − x3) − Fi(x2 sin(φi) − x3)]²
            + [Fi(1 + x2 cos(ψi)) − Ei(x2 cos(φi) − 1)]²
            − [(1 + x2 cos(ψi))(x2 sin(φi) − x3)x1 − (x2 sin(ψi) − x3)(x2 cos(φi) − x3)x1]²,      (25)

where

 Ei = x2(cos(ψi) − cos(ψ0)) − x2x3(sin(ψi) − sin(ψ0)) − (x2 sin(ψi) − x3)x1,      (26)

and

 Fi = −x2 cos(φi) − x2x3 + (x3 − x1)x2 sin(φi) + x2 cos(φ0) + x1x3 sin(φ0).      (27)


Table 4
Angles for Example 8, in radians.

 i   ψi                       φi
 0   1.3954170041747090114    1.7461756494150842271
 1   1.7444828545735749268    2.0364691127919609051
 2   2.0656234369405315689    2.2390977868265978920
 3   2.4600678478912500533    2.4600678409809344550

The unknowns are x1, x2, and x3. The angles ψi and φi are given in Table 4, in radians. For a brief description of the mechanical meaning of this system, please refer to Hirsch et al. [19]. This system has two solutions, which were both found by our method in 100/100 runs, taking an average CPU time of 0.367 s. C-GRASP was also successful in 100/100 executions, taking an average of 0.84 s to find the first solution and 5.06 s to find the second [19], giving a total average time of 5.90 s.

4.9. Example 9 – conformal analysis of cyclic molecules

This system was taken from Emiris [8], where it was solved using a sparse elimination method. Identifying molecular structure is of vital importance, especially for pharmaceutical drug design and medical research [9]. Emiris [8] and Emiris and Mourrain [9] used computer algebra methods to visualize all spatial configurations, or conformations, of cyclic (a.k.a. ring) molecules. Using the model proposed by Parsons and Canny [41], they related molecular conformations to robot kinematics, thinking of bonds as rigid joints and atoms as articulations. For a cyclohexane molecule, the bond lengths and angles provide the constraints, while the six dihedral angles are allowed to vary. Using the analogy with kinematics, as each pair of consecutive axes intersects at a link, the link offsets are zero for all six links, reducing the 6-D problem to a system of 3 polynomials in 3 unknowns. Given the dihedral or flap angles θ1, θ2 and θ3, and making the transformation

 ti = tan(θi/2),   cos(θi) = (1 − ti²)/(1 + ti²),   sin(θi) = 2ti/(1 + ti²),   i = 1, 2, 3,      (28)

Emiris [8] arrives at the polynomial system below:

 f1 = β11 + β12 t2² + β13 t3² + β14 t2² t3² + β15 t2 t3
 f2 = β21 + β22 t3² + β23 t1² + β24 t3² t1² + β25 t3 t1      (29)
 f3 = β31 + β32 t1² + β33 t2² + β34 t1² t2² + β35 t1 t2

where βij are input coefficients which, for the first instance solved by Emiris [8], are given by the (i, j)th entry of the matrix

 [ −9   −1   −1   +3   +8 ]
 [ −9   −1   −1   +3   +8 ]
 [ −9   −1   −1   +3   +8 ]

This system has 8 real solutions, which were all found by our method in each execution, taking an average time of 0.552 s per execution. Emiris [8], using his specialized method, took an average CPU time of 0.4 s.

4.10. Example 10 – chemical equilibrium problem

In spite of having a single real solution, we selected this problem to show that, besides finding multiple solutions, our method is able to promote a wide exploration of the search space at a low computational cost. This example, taken from Meintjes and Morgan [37], has been widely employed in the literature ([34,53,18,13], among others). It concerns the combustion of propane (C3H8) in air (O2 and N2) to form ten products. This chemical reaction generates a system of ten equations in ten unknowns, which can be reduced to the following system of five equations in five unknowns [37]:

 f1 = x1x2 + x1 − 3x5
 f2 = 2x1x2 + x1 + x2x3² + R8x2 − Rx5 + 2R10x2² + R7x2x3 + R9x2x4
 f3 = 2x2x3² + 2R5x3² − 8x5 + R6x3 + R7x2x3      (30)
 f4 = R9x2x4 + 2x4² − 4Rx5
 f5 = x1(x2 + 1) + R10x2² + x2x3² + R8x2 + R5x3² + x4² − 1 + R6x3 + R7x2x3 + R9x2x4

where

 R = 10
 R5 = 0.193
 R6 = 0.002597/√40
 R7 = 0.003448/√40      (31)
 R8 = 0.00001799/40
 R9 = 0.0002155/√40
 R10 = 0.00003846/40

Variables xi are surrogates for atomic combinations, which means that only positive values make physical sense. Among the four real solutions reported by Meintjes and Morgan [37], only one has all-positive components. Hence, if the search domain is taken on the positive side, as we did, this is the only solution. Using the same search domain as ours, Maranas and Floudas [34] took 31.7 s to find the solution using branch and bound. Using [0, 10^8] as the initial interval, a branch and prune method took about 56 s to solve this problem [53]. C-GRASP, in its turn, found the solution in 10/10 runs, taking an average time of 37.53 s [18]; these authors do not inform the search domain. Recently, Grosan and Abraham [13] solved this problem less successfully (i.e., they did not get a sum of objective functions below 0.5), taking an average computing time of 32.71 s. Our algorithm found the root in 100/100 executions, taking 3.226 s on average.

4.11. Example 11 – the double retrograde vaporization problem

In general, phase equilibrium problems are nonlinear systems that are often modeled as global optimization problems. It has been observed that many of these thermodynamic problems possess several minima, all of them of interest for science and technology; see, for example, Henderson et al. [17] and references therein. The calculation of vapor–liquid phase envelopes for mixtures is one of those difficult problems, mostly near the critical region of phase transition. To test the present methodology, we consider the numerical construction of a vapor–liquid envelope near a critical point in the presence of the double retrograde vaporization phenomenon, as studied previously by Henderson et al. [16]. We present numerical results for the methane + n-butane binary system, which is relevant to petroleum science.
Here, this binary mixture is modeled using the Peng–Robinson equation of state associated with the one-fluid van der Waals mixing rule; see Peng and Robinson [42]. The problem can be formulated as the following nonlinear system with simple bound constraints [16]:

Given T, Pmin > 0, Pmax > 0 and x1(v), find (x1(l), P) such that

 f̂1(l)(T, P, x1(l)) − f̂1(v)(T, P, x1(v)) = 0,
 f̂2(l)(T, P, x1(l)) − f̂2(v)(T, P, x1(v)) = 0,

subject to

 0 ≤ x1(l) ≤ 1,   Pmin ≤ P ≤ Pmax.

Here, x1(l) = xmethane(l) and x2(l) = xn-butane(l) are the molar fractions of methane and n-butane in the liquid phase, respectively. In a similar form, x1(v) = xmethane(v) and x2(v) = xn-butane(v) are the molar fractions of the same components in the vapor phase. T is the temperature and P represents the pressure. The functions f̂i(l) and f̂i(v) are the so-called fugacities of component i in the liquid phase and vapor phase, respectively. In this binary problem, the variables x2(l) and x2(v) are obtained by the difference x2(β) = 1 − x1(β), for β = l, v. Without considering the index of the phases, the fugacity of component i can be written, for all phases, as

 f̂i = φi xi P,      (32)

where by Peng–Robinson equation, we have bi (Z − 1) − ln(Z − B) b

ln i =

2 2

A − √ 2 2B

x j=1 j



ai aj (1 − kij )

a

 b − i b

ln

 Z + 2.414B Z − 0.414B

.

(33) In Eq. (33), for each phase, Z is the compressibility factor, which is obtained by solving the following cubic equation: Z 3 − (1 − B)Z 2 + (A − 2B − 3B2 )Z − (AB − B2 − B3 ) = 0.

(34)

The terms a and b in Eq. (33) are the attractive and repulsive intermolecular factors, respectively. Using the classical one fluid van der Waals mixing rule, with one adjustable binary parameter kij , we have b=

2 

xi bi ,

(35)

i=1

a=

2 2  

xi xj



ai aj (1 − kij ),

(36)

,

(37)

i=1 j=1

where bi = 0.07780

RTci Pci

ai = aci ˛i (T ), aci = 0.4724



(38)

R2 Tc2i Pci

˛i (T ) = 1 + mi

,

(39)



 1−

T Tci

2 ,

mi = 0.37646 + 1.54226ωi − 0.26992ωi2 ,

(40) (41)

A=

aP , R2 T 2

(42)

B=

bP . RT

(43)
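For each phase, Eq. (34) must be solved for $Z$ at every evaluation of Eq. (33). A minimal numpy sketch of that step (the helper name and the root-selection convention of smallest real root for the liquid phase and largest for the vapor phase are ours, stated here as the usual practice rather than taken from the paper):

```python
import numpy as np

def pr_compressibility(A, B, phase='vapor'):
    """Solve the Peng-Robinson cubic of Eq. (34),
        Z^3 - (1 - B) Z^2 + (A - 2B - 3B^2) Z - (AB - B^2 - B^3) = 0,
    and pick a physically meaningful root: the smallest real root
    greater than B for the liquid phase, the largest for the vapor phase.
    """
    coeffs = [1.0, -(1.0 - B), A - 2.0 * B - 3.0 * B**2,
              -(A * B - B**2 - B**3)]
    real = sorted(z.real for z in np.roots(coeffs)
                  if abs(z.imag) < 1e-10 and z.real > B)
    return real[0] if phase == 'liquid' else real[-1]

# Sanity check: in the ideal-gas limit A = B = 0 the cubic reduces to
# Z^3 - Z^2 = 0, whose only physical root is Z = 1.
assert abs(pr_compressibility(0.0, 0.0, 'vapor') - 1.0) < 1e-9
```

When the cubic has three real roots, the liquid and vapor selections differ, and the intermediate root is discarded as unphysical.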

Parameters $T_{c_i}$, $P_{c_i}$ and $\omega_i$ are, respectively, the critical temperature, critical pressure and acentric factor of pure component $i$, and $R$ is the universal gas constant. In the present work, these parameters were obtained from Reid et al. [45]. We used $k_{12} = 0$, $P_{\min} = 100$ kPa and $P_{\max} = 10{,}000$ kPa.

The proposed algorithm was tested intensively. Twenty-one tests were made for the methane composition in the vapor phase in the interval [0.99903, 0.99923] at the specified temperature $T = 189.06$ K. The critical temperature of pure methane is 190.56 K. Thus, the temperature of this binary mixture is slightly below the critical temperature of the more volatile component. Under these conditions, for all $x_1^{(v)} \in [0.99903, 0.99923]$, the problem has three

Table 5
Results for the double retrograde vaporization problem.

$x_1^{(v)}$    $(x_1^{(l)}, P)$, with $P$ in kPa
0.99903    (0.28373, 1263.8); (0.91822, 3981.8); (0.93064, 4018.1)
0.99904    (0.28854, 1286.2); (0.90202, 3935.4); (0.95134, 4083.4)
0.99905    (0.29354, 1309.6); (0.88497, 3886.1); (0.95879, 4109.9)
0.99906    (0.29874, 1333.8); (0.87071, 3843.8); (0.96354, 4128.2)
0.99907    (0.30420, 1359.4); (0.85758, 3803.6); (0.96709, 4142.8)
0.99908    (0.30993, 1386.3); (0.84511, 3764.1); (0.96991, 4155.1)
0.99909    (0.31595, 1414.5); (0.83301, 3724.6); (0.97223, 4165.8)
0.99910    (0.32229, 1444.3); (0.82110, 3684.5); (0.97420, 4175.2)
0.99911    (0.32893, 1475.6); (0.80930, 3643.5); (0.97590, 4183.7)
0.99912    (0.33601, 1509.0); (0.79736, 3600.8); (0.97740, 4191.6)
0.99913    (0.34354, 1544.5); (0.78525, 3556.3); (0.97873, 4198.8)
0.99914    (0.35157, 1582.5); (0.77285, 3509.6); (0.97993, 4205.5)
0.99915    (0.36013, 1623.1); (0.76015, 3460.5); (0.98100, 4211.7)
0.99916    (0.36943, 1667.2); (0.74686, 3407.9); (0.98198, 4217.6)
0.99917    (0.37953, 1715.2); (0.73291, 3351.4); (0.98289, 4223.2)
0.99918    (0.39064, 1768.1); (0.71810, 3290.0); (0.98372, 4228.5)
0.99919    (0.40292, 1826.6); (0.70225, 3222.9); (0.98448, 4233.4)
0.99920    (0.41692, 1893.5); (0.68477, 3147.4); (0.98519, 4238.2)
0.99921    (0.43324, 1971.6); (0.66506, 3060.6); (0.98585, 4242.8)
0.99922    (0.45321, 2067.1); (0.64180, 2956.2); (0.98647, 4247.2)
0.99923    (0.48032, 2197.0); (0.61146, 2817.3); (0.98706, 4251.4)

physically coherent solutions; see Henderson et al. [16], for example. As we can observe in Table 5, the algorithm was capable of finding the three solutions in all the considered cases.

5. Conclusions

In this work, we apply a hybrid metaheuristic that was created for multimodal optimization problems to solve nonlinear systems with simple bound constraints and several roots. In spite of being a multipurpose methodology (i.e., not specifically designed for nonlinear systems of equations), our algorithm performs well, finding all roots of benchmarks from the literature in competitive processing times, as well as solving a real-world problem. However, one must bear in mind that, for example, the branch and bound [34] and branch and prune [53] approaches are specialized methods for solving nonlinear systems that are guaranteed to find all solutions, the so-called guaranteed-reliability solutions. As Maranas and Floudas [34] put it, "convergence to all multiple solutions was achieved with reasonable computational effort". It is implicit in this sentence that computing time may be sacrificed for the sake of reliability. In our algorithm, on the other hand, the solutions obtained do not have guaranteed reliability. We consider that both philosophies are complementary: stochastic optimization methods like ours can be used as general-purpose problem solvers, while more refined algorithms like Floudas' are designed to solve a specific class of problems. The point here is that, for a multipurpose algorithm, our method performs surprisingly well on demanding problems like the chemical equilibrium system or the double retrograde vaporization problem. Moreover, it is very easy to implement, as all its routines are easy to understand, which is not the case for the specialized methods.
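The ease-of-implementation claim can be illustrated with a deliberately simplified sketch of the three-step scheme. Here plain uniform sampling stands in for Luus–Jaakola, a hard two-means clustering stands in for Fuzzy Clustering Means, and the merit function is a toy with two known minima at x = -1 and x = +1; all names and parameter values are ours, for illustration only:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def merit(x):
    # Toy merit function with two global minima, at x = -1 and x = +1.
    return (x[0]**2 - 1.0)**2

# Step 1 -- global exploration (uniform sampling; the actual Luus-Jaakola
# method iteratively shrinks its search region around the best points).
samples = rng.uniform(-3.0, 3.0, size=(500, 1))
best = samples[np.argsort([merit(s) for s in samples])[:50]]

# Step 2 -- cluster the retained points (plain 2-means here, so that each
# basin of attraction gets its own starting point).
centers = np.array([[-2.0], [2.0]])
for _ in range(20):
    labels = np.argmin(np.abs(best - centers.T), axis=1)
    centers = np.array([best[labels == k].mean(axis=0) for k in range(2)])

# Step 3 -- one Nelder-Mead instance started from each cluster centroid.
roots = sorted(minimize(merit, c, method='Nelder-Mead').x[0] for c in centers)
# roots is now close to [-1, 1]: both minima found in one run.
```

The same skeleton, with the stand-ins replaced by Luus–Jaakola and FCM and the toy merit function replaced by the sum of squared residuals of the system, is essentially the method of this paper.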
In the future, we intend to apply our methodology to other nonlinear systems of equations, for example the real-world instances found in Pérez and Lopes [43], and to multimodal optimization problems in general, like the real-world ones found in Henderson et al. [17] and in Sacco et al. [50].

Acknowledgements

The authors gratefully acknowledge the financial support provided by CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico, Ministry of Science & Technology, Brazil). W.F.S. also acknowledges the support by FAPESPA (Fundação de Amparo à Pesquisa do Estado do Pará, State of Pará, Brazil). The research by N.H. has been carried out within the framework of project PROCIENCIA-UERJ financed by FAPERJ. The authors are grateful to the two anonymous reviewers for their suggestions and remarks that were decisive in improving this article.

References

[1] M.R. Anderberg, Cluster Analysis for Applications, Academic Press, New York, 1975.
[2] R. Anderson, Recent advances in finding best operating conditions, Journal of the American Statistical Association 48 (1953) 789–798.
[3] S.M. Al-Marzoug, R.J.W. Hodgson, Optimization of platinum–carbon multilayer mirrors for hard X-ray optics, Optics Communications 268 (2006) 84–89.
[4] J.C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, New York, 1981.
[5] R.P. Brent, Algorithms for Minimization without Derivatives, Prentice-Hall, Englewood Cliffs, NJ, 1973.
[6] R. Chelouah, P. Siarry, Genetic and Nelder–Mead algorithms hybridized for a more accurate global optimization of continuous multiminima functions, European Journal of Operational Research 148 (2003) 335–348.
[7] S. Das, Evolutionary algorithms with Nelder–Mead simplex based local search, in: J.R. Dopico, J.D. de la Calle, A.P. Sierra (Eds.), Encyclopedia of Artificial Intelligence, vol. 3, Idea Group Publishing, Hershey, PA, 2008, pp. 1191–1196.
[8] I.Z. Emiris, Sparse Elimination and Applications in Kinematics, PhD dissertation, Computer Science Department, University of California at Berkeley, Berkeley, CA, 1994.
[9] I.Z. Emiris, B. Mourrain, Computer algebra methods for studying and computing molecular conformations, Algorithmica 25 (1999) 372–402.
[10] T.A. Feo, M.G.C. Resende, Greedy randomized adaptive search procedures, Journal of Global Optimization 6 (1995) 109–133.
[11] L.D. Gaines, J.L. Gaddy, Process optimization by flow sheet simulation, Industrial & Engineering Chemistry Process Design and Development 15 (1976) 206–211.
[12] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, MA, 1989.
[13] C. Grosan, A. Abraham, A new approach for solving nonlinear equations systems, IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans 38 (2008) 698–714.
[14] L.O. Hall, B. Özyurt, J.C. Bezdek, Clustering with a genetically optimized approach, IEEE Transactions on Evolutionary Computation 3 (1999) 103–112.
[15] N. Henderson, W.F. Sacco, Prediction of double retrograde vaporization by hybrid global–local optimization using fuzzy clustering means, Chemical Product and Process Modeling 3 (1) (2008), Article 47.
[16] N. Henderson, W.F. Sacco, G.M. Platt, Finding more than one root of nonlinear equations via a polarization technique: an application to double retrograde vaporization, Chemical Engineering Research and Design 88 (2010) 551–561.
[17] N. Henderson, W.F. Sacco, N. Barufatti, M.M. Ali, Calculation of critical points of thermodynamic mixtures with differential evolution algorithms, Industrial & Engineering Chemistry Research 49 (2010) 1872–1882.
[18] M.J. Hirsch, C.N. Meneses, P.M. Pardalos, M.G.C. Resende, Global optimization by continuous GRASP, Optimization Letters 1 (2007) 201–212.
[19] M.J. Hirsch, P.M. Pardalos, M.G.C. Resende, Solving systems of nonlinear equations with continuous GRASP, Nonlinear Analysis: Real World Applications 10 (2009) 2000–2006.
[20] F. Höppner, F. Klawonn, R. Kruse, T. Runkler, Fuzzy Cluster Analysis – Methods for Classification, Data Analysis and Image Recognition, John Wiley and Sons, Chichester, West Sussex, England, 1999.
[21] C. Jäger, D. Ratz, A combined method for enclosing all solutions of nonlinear systems of polynomial equations, Reliable Computing 1 (1995) 41–64.
[22] D.C. Karnopp, Random search techniques for optimization problems, Automatica 1 (1963) 111–121.
[23] R.B. Kearfott, Some tests of generalized bisection, ACM Transactions on Mathematical Software 13 (1987) 197–220.
[24] C.T. Kelley, Detection and remediation of stagnation in the Nelder–Mead algorithm using a sufficient decrease condition, SIAM Journal on Optimization 10 (1999) 43–55.
[25] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, 1995, pp. 1942–1948.
[26] P. Koduru, Z. Dong, S. Das, S.M. Welch, J. Roe, Multi-objective evolutionary-simplex hybrid approach for the optimization of differential equation models of gene networks, IEEE Transactions on Evolutionary Computation 12 (2008) 572–590.
[27] R. Krishnapuram, J.M. Keller, A possibilistic approach to clustering, IEEE Transactions on Fuzzy Systems 1 (1993) 98–110.
[28] D. Kumar, S.K. Rath, K.S. Babu, FCM for gene expression bioinformatics data, Communications in Computer and Information Science 40 (2009) 521–532.
[29] Y.P. Lee, G.P. Rangaiah, R. Luus, Phase and chemical equilibrium calculations by direct search optimization, Computers & Chemical Engineering 23 (1999) 1183–1191.
[30] B. Liao, R. Luus, Comparison of the Luus–Jaakola optimization procedure and the genetic algorithm, Engineering Optimization 37 (2005) 381–398.
[31] R. Luus, Use of the Luus–Jaakola optimization procedure for singular optimal control problems, Nonlinear Analysis: Theory, Methods & Applications 47 (2001) 5647–5658.
[32] R. Luus, D. Hennessy, Optimization of fed-batch reactors by the Luus–Jaakola optimization procedure, Industrial & Engineering Chemistry Research 38 (1999) 1948–1955.
[33] R. Luus, T.H.I. Jaakola, Optimization by direct search and systematic reduction of the size of search region, AIChE Journal 19 (1973) 760–766.
[34] C.D. Maranas, C.A. Floudas, Finding all solutions of nonlinearly constrained systems of equations, Journal of Global Optimization 7 (1995) 143–182.
[35] F.H.C. Marriott, Dictionary of Statistical Terms, Longman Publishing Group, London, 1990.
[36] J. Matyas, Random optimization, Automation and Remote Control 26 (1965) 246–253.
[37] K. Meintjes, A.P. Morgan, Chemical equilibrium systems as numerical test problems, ACM Transactions on Mathematical Software 16 (1990) 143–151.
[38] A.P. Morgan, A method for computing all solutions to systems of polynomial equations, ACM Transactions on Mathematical Software 9 (1983) 1–17.
[39] A. Morgan, V. Shapiro, Box-bisection for solving second-degree systems and the problem of clustering, ACM Transactions on Mathematical Software 13 (1987) 152–167.
[40] J.A. Nelder, R. Mead, A simplex method for function minimization, Computer Journal 7 (1965) 308–313.
[41] D. Parsons, J. Canny, Geometric problems in molecular biology and robotics, in: Proceedings of the 2nd International Conference on Intelligent Systems for Molecular Biology, Palo Alto, CA, 1994, pp. 322–330.
[42] D. Peng, D.B. Robinson, A new two-constant equation of state, Industrial and Engineering Chemistry Fundamentals 15 (1976) 59–64.
[43] R. Pérez, V.L.R. Lopes, Recent applications and numerical implementation of quasi-Newton methods for solving nonlinear systems of equations, Numerical Algorithms 35 (2004) 261–285.
[44] I. Poplewska, W. Piatkowski, D. Antos, Effect of temperature on competitive adsorption of the solute and the organic solvent in reversed-phase liquid chromatography, Journal of Chromatography A 1103 (2006) 284–295.
[45] R.C. Reid, J.M. Prausnitz, B.E. Poling, The Properties of Gases and Liquids, McGraw-Hill, Singapore, 1988.
[46] W.F. Sacco, M.D. Machado, C.M.N.A. Pereira, R. Schirru, The fuzzy clearing approach for a niching genetic algorithm applied to a nuclear reactor core design optimization problem, Annals of Nuclear Energy 31 (2004) 55–69.
[47] W.F. Sacco, C.M.F. Lapa, C.M.N.A. Pereira, C.R.E. de Oliveira, A niching genetic algorithm applied to a nuclear power plant auxiliary feedwater system surveillance tests optimization, Annals of Nuclear Energy 33 (2006) 753–759.
[48] W.F. Sacco, H. Alves Filho, N. Henderson, C.R.E. de Oliveira, A Metropolis algorithm combined with Nelder–Mead simplex applied to nuclear reactor core design, Annals of Nuclear Energy 35 (2008) 861–867.
[49] W.F. Sacco, H. Alves Filho, G.M. Platt, The Luus–Jaakola algorithm applied to a nuclear reactor core design optimisation, International Journal of Nuclear Energy Science and Technology 4 (2008) 1–10.
[50] W.F. Sacco, N. Henderson, A.C. Rios-Coelho, M.M. Ali, C.M.N.A. Pereira, Differential evolution algorithms applied to nuclear reactor core design, Annals of Nuclear Energy 36 (2009) 1093–1099.
[51] R.L. Salcedo, M.J. Gonçalves, S.F. Azevedo, An improved random search algorithm for non-linear optimization, Computers & Chemical Engineering 14 (1990) 1111–1126.
[52] V. Singh, Obtaining Routh–Padé approximants using the Luus–Jaakola algorithm, IEE Proceedings – Control Theory and Applications 152 (2005) 129–132.
[53] P. Van Hentenryck, D. McAllester, D. Kapur, Solving polynomial systems using a branch and prune approach, SIAM Journal on Numerical Analysis 34 (1997) 797–827.
[54] L.A. Zadeh, Fuzzy sets, Information and Control 8 (1965) 338–352.
[55] E. Zahara, Y.T. Kao, Hybrid Nelder–Mead simplex search and particle swarm optimization for constrained engineering design problems, Expert Systems with Applications 36 (2009) 3880–3886.
