Univariate Dynamic Encoding Algorithm for Searches (uDEAS)

IEICE TRANS. FUNDAMENTALS, VOL.E90-A, NO.8, AUGUST 2007

PAPER

A Fast Computational Optimization Method: Univariate Dynamic Encoding Algorithm for Searches (uDEAS)∗

Jong-Wook KIM†a) and Sang Woo KIM††, Members

SUMMARY This paper proposes a new computational optimization method modified from the dynamic encoding algorithm for searches (DEAS). Despite the successful optimization performance of DEAS on both benchmark functions and parameter identification, its exponential growth in computation time becomes serious as the problem dimension increases. The proposed optimization method, named univariate DEAS (uDEAS), is implemented specifically to reduce the computation time using a univariate local search scheme. To verify the algorithmic feasibility for global optimization, several test functions are optimized as benchmarks. Despite the simpler structure and shorter code length, the function optimization results show that uDEAS is capable of fast and reliable global search even for high-dimensional problems.
key words: dynamic encoding algorithm for searches, global optimization, function optimization, computational optimization

Manuscript received June 9, 2006. Manuscript revised January 16, 2007. Final manuscript received April 13, 2007.
† The author is with the Department of Electronics Engineering, Dong-A University, Busan, 604-714, Korea.
†† The author is with the Electrical and Computer Engineering Division, Pohang University of Science and Technology (POSTECH), Pohang, 790-784, Korea.
∗ This research was supported by the MIC (Ministry of Information and Communication), Korea, under the ITRC (Information Technology Research Center) support program supervised by the IITA (Institute of Information Technology Advancement) (IITA-2006-(C1090-0602-0013)).
a) E-mail: [email protected]
DOI: 10.1093/ietfec/e90-a.8.1679

1. Introduction

In solving engineering problems, most researchers and engineers encounter global or local optimization. Since global optimization is hard to formulate mathematically and it is hard to prove whether a current optimum is globally optimal, the best local optimum found so far is often regarded as the global optimum. Optimization methods can be categorized into two approaches, indirect and direct methods, as termed in [1]. The indirect methods resort to gradient information of cost functions for iterative approximation; examples are the steepest descent method [2], the Newton-Raphson method [3], and the conjugate gradient method [4]. For these algorithms to be feasible, cost functions must be sufficiently smooth, with twice differentiability, which is quite a rigorous condition for many real-world problems whose cost functions are rugged and/or peaky. The direct methods, on the other hand, adopt alternative techniques, e.g., a random walk in the random search method [5], the theory of natural selection in the genetic algorithm (GA) [6], the phenomenon of an annealing process in simulated annealing [7], the tunneling method in effective global search [8], and the strategy of revisit avoidance in the tabu search [9]. Since they do not use gradient information, these methods are more convenient to apply but, as a rule, less efficient than the indirect methods in terms of running time and solution quality. However, owing to their gradient-free property, the direct methods have recently become prevalent in a variety of engineering fields with the aid of high-performance microprocessors.

From the need for a faster and more robust computational optimization method that combines the merits of both the indirect and direct methods, the dynamic encoding algorithm for searches (DEAS) has been developed by the authors since 2002 [10]-[17]. DEAS generates a randomly constructed initial binary matrix, yields and evaluates neighboring matrices, and selects optimal matrices for subsequent searches. Several papers have been written from theoretical viewpoints [10]-[12] and from application viewpoints [13]-[17], and a dedicated website, 'www.deasgroup.net', contains a brief introduction and the publications related to DEAS. The advantages of DEAS can be summarized as follows [12]: DEAS works within a binary representation, which yields fast convergence, facilitates effective revisit checks, determines the stop condition in a straightforward way, and retains high portability to various numerical optimization problems. Despite its satisfactory application results, the initial version of DEAS suffers from an exponential growth of function evaluations as the problem dimension increases. To overcome this weakness, a revision of DEAS has been developed that concentrates on reducing the amount of computation. The new type is named univariate DEAS (uDEAS), while the former is classified as exhaustive DEAS (eDEAS). Despite the simpler local search principle of uDEAS, the benchmark results presented in this paper are quite promising for high-dimensional functions and many other large-scale optimization problems.

This paper is organized as follows. Section 2 describes the searching principles of uDEAS in comparison with eDEAS. Section 3 presents global optimization results of uDEAS for the standard test functions. Section 4 gives concluding remarks.


Fig. 1 Local search aspect of DEAS in a one-dimensional problem.

2. Univariate DEAS

uDEAS is a global optimization method which, like eDEAS, combines local and global search strategies. This section describes each strategy in turn.

2.1 Local Search Strategy

eDEAS and uDEAS have similar but distinct local search strategies. The fundamental search principles of both types of DEAS are based on two properties of binary numbers: bisection and increment/decrement in the corresponding real space. Bisection comes from appending 0 (1) at the least significant bit (LSB) position of an arbitrary binary number, which decreases (increases) the equivalent real value compared with that of the initial binary number. This property is described in the following theorem, proved in [12].

Theorem 2.1 (Relation of parent and child strings): Let an m-bit-long binary string, s_p = a_m a_{m-1} · · · a_1, a_i ∈ {0, 1}, i = 1, · · · , m, be termed a parent string. If 0 is appended at the LSB of s_p, the new string is termed a 'left-hand child string,' s_lc; if 1 is appended, it is termed a 'right-hand child string,' s_rc. If these strings are decoded into normalized real numbers between 0 and 1 by the decoding function

f_d(b_m b_{m-1} · · · b_1 b_0) = (1 / (2^{m+1} - 1)) Σ_{j=0}^{m} b_j 2^j,   (1)

where b_i ∈ {0, 1}, i = 0, · · · , m, their relations are described as

f_d(s_lc) ≤ f_d(s_p) ≤ f_d(s_rc),   (2)

where the left- and right-hand equalities hold only if s_p is an all-zero string or an all-one string, respectively.
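As a quick check of Theorem 2.1, the following Python snippet (ours, for illustration only) implements the decoding function (1) and verifies relation (2) for a sample parent string:

```python
def decode(bits):
    """Decoding function (1): bit list (MSB first) -> real value in [0, 1]."""
    m = len(bits) - 1
    return sum(b * 2**j for j, b in enumerate(reversed(bits))) / (2**(m + 1) - 1)

parent = [1, 0, 1]                        # s_p  = 101b
left, right = parent + [0], parent + [1]  # s_lc = 1010b, s_rc = 1011b

# prints 0.666..., 0.714..., 0.733..., satisfying relation (2)
print(decode(left), decode(parent), decode(right))
```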

Figure 1 shows a binary tree structure in which the relations of the floating-point numbers in parentheses (the decoded values of each binary string) confirm the above theorem. The operation of inserting a binary digit is compared to spreading child branches which probe adjacent search points, and this search type is named 'bisectional search' (BSS). The underlying principle of BSS is similar to the binary search technique in computer science [18], the dichotomous search, and the interval halving method in nonlinear optimization [1]. BSS therefore retains a high convergence rate, as demonstrated in [1] by the ratio of the final and initial intervals of uncertainty for a one-dimensional unimodal function. However, successive BSS confines search routines inside a single branch: although the search starts at the highest node of '0' ('1') in the tree, BSS alone can never escape from the left (right) branch. 'Unidirectional search' (UDS) is employed to resolve this limitation. UDS searches in a horizontal manner by hopping across subbranches via increment or decrement of binary strings. This movement readily removes the barrier drawn as a dotted line between 0111b and 1000b in the binary tree, and it maintains an equidistant search resolution for a given string length, while BSS exponentially increases the resolution. Owing to the complementary properties of BSS and UDS, a combination of a single BSS and multiple UDS is used in DEAS as a session at every increase of string length.

Figure 1 illustrates how the session operates in a binary tree whose cost function is smoothly unimodal with a local minimum located near 0.80 (1100b). Transitions 1 and 2, corresponding to BSS, seek the optimal direction at the 01b node. After 011b is selected by comparing the cost values of both nodes, UDS takes over and extends the search in the guided direction, i.e., positive in this case, until a better node is found through transitions 3 to 5. From the new optimal node 101b (0.71), a second session starts with the same search routine, until the local minimum 1100b is attained.
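For example, the decoded values of 0111b and 1000b are 7/15 ≈ 0.467 and 8/15 ≈ 0.533: one binary increment hops across the dotted-line barrier that BSS alone can never cross. A minimal sketch of that UDS step (our illustration, not the authors' code):

```python
def increment(bits):
    """One positive UDS step: binary increment of a fixed-length bit list
    (assumes the string is not already all ones)."""
    v = int("".join(map(str, bits)), 2) + 1
    return [int(c) for c in format(v, f"0{len(bits)}b")]

print(increment([0, 1, 1, 1]))  # -> [1, 0, 0, 0]: crosses the 0111b/1000b barrier
```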

Fig. 2 Three-dimensional search aspects of eDEAS: (a) BSS, (b) UDS.

Fig. 3 Difference of the bisectional search principles of eDEAS and uDEAS.

The binary operation of DEAS in local search becomes somewhat complicated as the search dimension increases. In eDEAS, the binary strings are stacked into binary matrices with dynamically elongating columns, and the pseudo-codes of BSS and UDS for an n-dimensional problem are written in terms of binary matrices [12]; they are modified for uDEAS as shown in Fig. 4. Figure 2 shows the search schemes of BSS and UDS in a three-dimensional problem, where computation on the order of 2^n is required for BSS and UDS [15]. General optimization performance has been validated as superior on seven benchmark functions; however, the function evaluation numbers increase exponentially with the problem dimension. For instance, for the six-dimensional function 'Hartman 6,' eDEAS required on average 1,760 function evaluations to attain the global minimum, while it required only 189 for the three-dimensional function 'Hartman 3' [12].

To resolve this exponential increase, uDEAS is motivated by the idea that a movement in an n-dimensional space can be reproduced by a combination of axial movements. Figure 3 illustrates this notion in a two-dimensional space, where a diagonal jump from a center point to one of its edge points can be described by two steps of lateral and longitudinal movement. In this case, the center and edge points are interpreted as an initial search point and one of its nearest neighboring points, respectively. This modification of the search principle requires a corresponding change in the local search scheme: in uDEAS, BSS and UDS are carried out for each parameter in a cyclic way, while eDEAS comprises a single BSS followed by multiple UDS over every possible neighboring point, as shown in Fig. 2.

This change is conceptually more straightforward and leads to shorter code. In other words, uDEAS does not require the somewhat complex procedure that uses extension vectors and a masking technique to detect whether a redundant direction has occurred in UDS [12].

Figure 4 shows the pseudo-code of a session in the local search of uDEAS for an initial binary matrix of dimension n × k. As a start point, B∗_{n×k}, the best matrix of the previous session, is loaded and decoded to the real parameter vector θ∗_{n×1}. Then uDEAS selects x1 as the first search direction. For BSS, the first row of B∗_{n×k} is copied to a temporary row r_{1×k}, and two child strings, s(0)_{1×(k+1)} and s(1)_{1×(k+1)}, are generated by appending 0 and 1, then decoded to make temporary parameter vectors. To evaluate the better direction, cost values are computed for the new parameter vectors, and as a result the optimal direction d_opt and an optimal child string s∗_{1×(k+1)} are determined. After BSS, therefore, the length of the selected row has increased by 1, which makes the current best matrix a pseudo-matrix. In this paper, a matrix in which at least one row differs in length from the other rows is called a pseudo-matrix. An example of a binary pseudo-matrix is

A = [ 0 1 0
      1 0 1 0
      0 1 1 ]

and its three rows are A(1, 1:3) = [0 1 0], A(2, 1:4) = [1 0 1 0], and A(3, 1:3) = [0 1 1].

After BSS for x1, UDS is repeatedly carried out, guided by the optimal direction d_opt, until no better solution is found. If d_opt is 0, a neighboring string is generated by s′ = s∗ − 1, while for d_opt = 1, s′ = s∗ + 1. The best row s∗ attained after UDS is finally saved as the first row of the current best matrix D∗. This ends the search in the x1 direction, and local search continues in the x2 direction with BSS. In this manner, all the rows of B∗_{n×k} are sequentially loaded and treated by BSS and UDS. After the search for xn, the current best pseudo-matrix D∗ becomes a rectangular matrix, and the current session is completed. Sessions are iterated from the initial to the final string length as directed by the global search scheme.


D∗_{n×(k+1)} = SESSION(B∗_{n×k})

  Decode the initial matrix: B∗ → θ∗_{n×1} via f_d.
  for i = 1 : n
    BSS:
      Load the i-th row of the current best matrix: r_{1×k} = B∗(i, 1:k).
      for j = 0 : 1
        Load the current best parameter vector into a temporary vector: θ(j) ← θ∗.
        Append the binary bit j to the i-th row vector: s(j)_{1×(k+1)} = [r j].
        Update the parameter vector with the decoded i-th parameter: θ(j)(i) = f_d(s(j)).
        Evaluate the cost of the new parameter vector: J(j) = f(θ(j)).
      end for
      Compare the above cost values: J∗ = min(J(0), J(1)).
      if J∗ = J(0) then d_opt = 0, s∗ = s(0), θ∗ = θ(0)
      else d_opt = 1, s∗ = s(1), θ∗ = θ(1)
      end if
    UDS:
      while a better solution is obtained do
        θ′ ← θ∗
        if d_opt = 0 then s′ = s∗ − 1 else s′ = s∗ + 1 end if
        Update the parameter vector with the decoded i-th parameter: θ′(i) = f_d(s′).
        Evaluate the cost of the new parameter vector: J′ = f(θ′).
        if J′ < J∗ then s∗ ← s′, θ∗ ← θ′, J∗ ← J′
        else stop UDS
        end if
      end while
    Save the i-th best row into the current best pseudo-matrix: D∗(i, 1:k+1) = s∗.
  end for
  B∗_{n×(k+1)} = D∗_{n×(k+1)}

Fig. 4 Pseudo-code of a session in the local search of uDEAS.

Fig. 5 Search principles of BSS and UDS in both DEAS variants: (a) eDEAS, (b) uDEAS.
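To make the data flow concrete, here is a minimal Python sketch of such a session (our illustration, not the authors' code). A matrix is a list of bit-lists, so a pseudo-matrix is simply a list of rows of unequal length; parameters are kept normalized in [0, 1], and the cost function f takes the decoded parameter vector.

```python
def decode(bits):
    """Decoding function (1): binary row -> normalized real value in [0, 1]."""
    return int("".join(map(str, bits)), 2) / (2 ** len(bits) - 1)

def session(B, f):
    """One uDEAS session (sketch of Fig. 4): BSS + UDS on each row of the
    current best matrix B, for a cost function f defined on [0, 1]^n."""
    theta = [decode(row) for row in B]          # decode the initial matrix
    D = [row[:] for row in B]                   # current best (pseudo-)matrix
    for i in range(len(B)):
        # BSS: append 0 and 1 to the i-th row and keep the better child.
        costs, children = [], []
        for j in (0, 1):
            s = B[i] + [j]
            t = theta[:]; t[i] = decode(s)
            children.append((s, t)); costs.append(f(t))
        d_opt = 0 if costs[0] <= costs[1] else 1
        s_best, theta = children[d_opt]
        J_best = costs[d_opt]
        # UDS: step along d_opt (decrement for 0, increment for 1) while improving.
        step = -1 if d_opt == 0 else 1
        while True:
            v = int("".join(map(str, s_best)), 2) + step
            if v < 0 or v >= 2 ** len(s_best):  # reached the edge of the tree
                break
            s = [int(c) for c in format(v, f"0{len(s_best)}b")]
            t = theta[:]; t[i] = decode(s)
            J = f(t)
            if J < J_best:
                s_best, theta, J_best = s, t, J
            else:
                break
        D[i] = s_best                           # save the best i-th row
    return D, theta, J_best

# Usage sketch: minimize a 2-D quadratic on [0, 1]^2 from the matrix [0 1; 1 0].
f = lambda t: (t[0] - 0.7) ** 2 + (t[1] - 0.3) ** 2
B = [[0, 1], [1, 0]]
for _ in range(5):          # iterate sessions; the string length grows each time
    B, theta, J = session(B, f)
print(theta, J)             # theta approaches (0.7, 0.3)
```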

Figure 5 illustrates an example contrasting eDEAS and uDEAS on a two-dimensional optimization problem. Thick lines in Fig. 5(a) show the optimal search paths attained at each evaluation, and the dashed lines indicate that the corresponding movements would reach a matrix already visited by a previous UDS. Such a revisit is readily prevented by the concept of the extension vector e — if an extension has happened in the x_i direction during UDS, e(i) = 1, otherwise e(i) = 0 — together with a simple masking technique as in [12]:

previous optimal extension vector   1 0 0
current extension vector            X X X
masked result                       X 0 0

If all the bits represented as 'X' in the masked result are simultaneously zero, the current extension vector is regarded as a redundant search direction.

Figure 5(b) shows that uDEAS can locate the same local minimum as eDEAS using its axial search scheme. For BSS of x1, the two pseudo-matrices generated from an initial matrix [0 1; 1 0] are [0 1 0; 1 0] and [0 1 1; 1 0]. After evaluating both and selecting the better one, UDS extends from it along the optimal direction (positive, in this case) as

[0 1 1; 1 0] → [1 0 0; 1 0] → [1 0 1; 1 0].

Then the session searches along the x2 direction, and the best matrix [1 0 1; 0 1 0] is attained after UDS in a similar manner. After this session terminates, the current best pseudo-matrix becomes a regular matrix. Figure 5 shows that the final best matrices of eDEAS and uDEAS can be identical for unimodal and smooth cost functions, even though their BSS and UDS search schemes differ.

2.2 Global Search Strategy

In searching with uDEAS, the search space is continuously divided into a finite number of grids with gradually shrinking intervals, which means that search paths are automatically determined by the shape of the cost function. Therefore, if uDEAS is applied to two identical matrices, the subsequent search processes yield exactly the same results. Moreover, uDEAS enables UDS to search horizontally, as shown in Fig. 1, so search paths started from different initial matrices can meet at one of the subordinate child matrices. Since such a revisit gives rise to unnecessary computation, it must be detected and prohibited at the start of every new session. The simplest way to implement this HISTORY CHECK routine is to save the current binary matrix in a lookup table as a real number and compare it sequentially with past data. To this end, a real-valued representative of the string concatenated from a matrix is saved and compared in the lookup table, e.g.,

[0 1; 1 0] ⇒ 0110b ⇒ 6.

Then every newly assigned value is compared with those of the previous optimal matrices whose row lengths are identical, as shown in Table 1.

Table 1 History table of the Branin function.

row length | restart 1 | restart 2 | restart 3 | restart 4
1          | 0         | 2         | 1         | 3
2          | 10        | 12        | 11        | 11 ⇐
3          | 33        | 44        | 39        | 39
4          | 65        | 52        | 71        | 71
5          | 657       | 612       | 143       | 143
6          | 1297      | 3268      | 1295      | 1295
...        | ...       | ...       | ...       | ...
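A sketch of this bookkeeping (our illustration): the representative value is just the concatenated bit string read as an integer, stored per row length.

```python
history = {}  # row length -> set of representative values already visited

def representative(B):
    """Concatenate the rows of a rectangular binary matrix and read the
    result as one binary number, e.g. [[0, 1], [1, 0]] -> 0110b -> 6."""
    bits = "".join(str(b) for row in B for b in row)
    return len(B[0]), int(bits, 2)

def is_revisit(B):
    """HISTORY CHECK: true if this matrix was already visited at this row length."""
    k, v = representative(B)
    seen = history.setdefault(k, set())
    if v in seen:
        return True
    seen.add(v)
    return False

print(representative([[0, 1], [1, 0]]))  # (2, 6)
```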

Fig. 6 Global indices of uDEAS at a preliminary search for the Goldstein-Price function (squares, circles, and triangles represent search points started from different initial points).

An additional feature important for global optimization is an escaping scheme: when successive cost values converge, it can be inferred that the current minimum falls inside the region of attraction of a local minimum. In case the current local minimum is larger than expected, uDEAS escapes from the local minimum and restarts by referring to two restart parameters, colIndRestart and costIndRestart. This RESTART CHECK is implemented as: "If the row length of the current best matrix equals colIndRestart and its cost is larger (smaller, for a maximization problem) than costIndRestart, stop searching and start from a new random initial matrix whose row length is optInitRowLen. Otherwise, continue the current search." These parameters can be configured from a preliminary search, undertaken to get a rough view of the cost landscape by trying searches from a number of random initial matrices. The colIndRestart is selected as the minimal row length at which global and local minima can be clearly discriminated, and costIndRestart is an approximately intermediate value between the best and second-best groups of local minima. Figure 6 shows how colIndRestart and costIndRestart are selected for the Goldstein-Price function [19]. The optInitRowLen in the figure represents an optimal initial row length with which a global minimum has a high probability of being discovered. Since the search step grows larger as the row length shrinks, a small optInitRowLen means that the region of attraction of the global minima spreads wide over the whole search space.
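Stated as code, the RESTART CHECK reduces to a two-condition test (our sketch; minimization assumed):

```python
def restart_check(row_len, cost, col_ind_restart, cost_ind_restart):
    """RESTART CHECK: escape when, at the designated row length, the current
    best cost is still worse than the threshold separating the best and
    second-best groups of local minima (minimization assumed)."""
    return row_len == col_ind_restart and cost > cost_ind_restart
```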

For problems whose approximate cost landscapes are already known by experience, the preliminary search can be skipped and the global optimization indices set intuitively. After determining the three global optimization indices, the main search is performed until the number of restarts equals the assigned numMaxRestart. The above global search strategy, where the initial and maximal row lengths are given as iRL and mRL, respectively, can be written in pseudo-code as:

  Initialize the history table.
  for iRL = iLen1 : iLen2
    Randomly generate an initial binary matrix T_{n×iRL}.
    HISTORY CHECK for T_{n×iRL}.
    for m = 1 : numMaxRestart
      for k = iRL : mRL
        T_{n×(k+1)} = SESSION(T_{n×k}).
        RESTART CHECK for T_{n×(k+1)}.
        HISTORY CHECK for T_{n×(k+1)}.
      end for
    end for
  end for

where iLen1 and iLen2 denote the row lengths of the initial and final trial matrices, respectively. For the preliminary search, iLen2 is larger than iLen1, as shown in Fig. 6, where iLen1 = 1 and iLen2 = 3. In the main search, however, the parameters can be configured as iLen1 = iLen2 = 1 and iRL = optInitRowLen. The present version of uDEAS has the same global optimization scheme as eDEAS, i.e., the multistart approach. However, the global search scheme can be improved separately, irrespective of the local search scheme; a hopping technique instead of restart can be an alternative.
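Combining the pieces sketched so far (session, is_revisit, and restart_check), a minimal multistart driver might look as follows. Unlike the pseudo-code above, this sketch regenerates a random initial matrix on every restart, and all names are ours:

```python
import random

def udeas(f, n, iRL=1, mRL=12, num_max_restart=5,
          col_ind_restart=6, cost_ind_restart=float("inf")):
    """Multistart global search sketch for uDEAS on [0, 1]^n."""
    best_theta, best_J = None, float("inf")
    for _ in range(num_max_restart):
        B = [[random.randint(0, 1) for _ in range(iRL)] for _ in range(n)]
        for k in range(iRL, mRL):
            B, theta, J = session(B, f)
            if J < best_J:
                best_theta, best_J = theta, J
            # RESTART CHECK: escape an unpromising region of attraction.
            if restart_check(len(B[0]), J, col_ind_restart, cost_ind_restart):
                break
            # HISTORY CHECK: abandon a path that merges with a previous one.
            if is_revisit(B):
                break
    return best_theta, best_J
```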

3. Optimization Results

3.1 Test Functions

This section presents a performance comparison of uDEAS on several standard multi-dimensional test functions whose global minimum values and coordinates are already reported in the literature. A list of widely used test functions is given below [19]:

• Branin function (BR)

f_BR(x) = (x2 − (5.1/4π²) x1² + (5/π) x1 − 6)² + 10 (1 − 1/8π) cos x1 + 10,   −5 ≤ x1 ≤ 10, 0 ≤ x2 ≤ 15

The global minimum value f∗ is 0.397887358, and it is reached at the three points x∗ = (−3.142, 12.275), (3.142, 2.275), (9.425, 2.475).

• Six-hump camel-back function (CA)

f_CA(x) = 4x1² − 2.1x1⁴ + (1/3)x1⁶ + x1x2 − 4x2² + 4x2⁴,   −5 ≤ x1, x2 ≤ 5

This function is symmetric about the origin and has three conjugate pairs of local minima. The global minimum value f∗, −1.03162845, is attained at x∗ = (0.08983, −0.7126), (−0.08983, 0.7126).

• Goldstein-Price function (GP)

f_GP(x) = [1 + (x1 + x2 + 1)² (19 − 14x1 + 3x1² − 14x2 + 6x1x2 + 3x2²)] × [30 + (2x1 − 3x2)² (18 − 32x1 + 12x1² + 48x2 − 36x1x2 + 27x2²)],   −2 ≤ x1, x2 ≤ 2

The global minimum value f∗ is equal to 3, and the minimum point is x∗ = (0, −1). There are four local minima in the feasible region.

• Rastrigin function (RA)

f_RA(x) = x1² + x2² − cos 18x1 − cos 18x2,   −1 ≤ x1, x2 ≤ 1

The global minimum value f∗ is equal to −2, and the minimum point is x∗ = (0, 0). There are about 50 local minima in this function.

• Shubert function (SH)

f_SH(x) = {Σ_{i=1}^{5} i cos[(i + 1)x1 + i]} {Σ_{i=1}^{5} i cos[(i + 1)x2 + i]},   −10 ≤ x1, x2 ≤ 10

This function is notorious for its peaky landscape, which has 720 local minima and 18 global minima. The global minimum value f∗ is −186.730909.

• Hartman function

f_H(x) = −Σ_{i=1}^{4} c_i exp(−Σ_{j=1}^{n} α_ij (x_j − p_ij)²),   0 ≤ x_i ≤ 1, i = 1, · · · , n

The coefficients c_i, α_ij, and p_ij for n = 3 and n = 6 are given in [19]. For n = 3, the global minimum value f∗ is equal to −3.86278215 and is reached at the point x∗ = (0.114, 0.556, 0.852). For n = 6, the global minimum is −3.32236801 and is reached at the point x∗ = (0.201, 0.150, 0.477, 0.275, 0.311, 0.657).

Figure 7, produced by the authors, shows the landscapes of the four two-dimensional test functions.
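Because the session sketch above works on normalized parameters in [0, 1]^n, these benchmark functions can be plugged in through an affine map onto their bounds. For example (our sketch; the wrapper `scaled` and the call below are illustrative, not from the paper):

```python
import math

def rastrigin(x):
    """Rastrigin function (RA); global minimum -2 at (0, 0)."""
    return x[0]**2 + x[1]**2 - math.cos(18 * x[0]) - math.cos(18 * x[1])

def scaled(f, bounds):
    """Wrap f so it accepts a normalized vector t in [0, 1]^n."""
    return lambda t: f([lo + ti * (hi - lo) for ti, (lo, hi) in zip(t, bounds)])

f_ra = scaled(rastrigin, [(-1, 1), (-1, 1)])
theta, J = udeas(f_ra, n=2, mRL=16, num_max_restart=10)
print(theta, J)   # J should approach the global minimum value -2
```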


Fig. 7 Landscapes of test functions: (a) six-hump camel-back function, (b) Goldstein-Price function, (c) Rastrigin function, (d) Shubert function.

Table 2 Global search indices of uDEAS. Abbreviations for the functions: BR, Branin; CA, camel-back; GP, Goldstein-Price; RA, Rastrigin; SH, Shubert; H3, Hartman (n = 3); H6, Hartman (n = 6).

function | optInitRowLen | colIndRestart | costIndRestart | global minimum
BR       | 1             | 7             | 0.5            | 0.3979
CA       | 1             | 5             | −0.5           | −1.0316
GP       | 3             | 6             | 10.0           | 3.0
RA       | 3             | 6             | −1.9           | −2.0
SH       | 3             | 6             | −120           | −186.73
H3       | 1             | 5             | −3.5           | −3.8628
H6       | 1             | 5             | −3.25          | −3.3224

These functions, which exhibit various features likely to appear in real problems, are suitable for visual investigation of search procedures. Table 2 summarizes the three escaping indices of uDEAS and the global minimum for each test function. Determining these indices is easy with a few random trial matrices at the preliminary stage. Although the indices in Table 2 are chosen only approximately, uDEAS can quickly locate the global minimum owing to the random-start property of the multistart method.

Table 3 provides the number of function evaluations required by conventional global optimization methods, eDEAS, and uDEAS. Since the function evaluation numbers represent the computational effort needed to locate global minima, smaller is better. The overall termination condition for all methods is set as the achievement of the desired accuracy of global cost error, ε (= f − f∗) = 10⁻⁶ [19]. The overall optimization results demonstrate that eDEAS and uDEAS achieve higher search performance than the conventional numerical methods. It is worth noting that the function evaluation numbers of five test functions are smaller for uDEAS than for eDEAS. Among them, the reduction in high-dimensional problems is remarkable, as shown by the functions H3 (3-dimensional) and H6 (6-dimensional). As the dimension increases from 3 to 6, eDEAS takes about 9 (≈ 2⁶/2³) times as many evaluations to locate the global minimum, while uDEAS takes only about 2 (≈ (2·6)/(2·3)) times as many. This gap reflects the difference in the local search structures of the two DEAS variants: exponential for eDEAS and linear for uDEAS. As the problem dimension increases to tens and hundreds, uDEAS outperforms eDEAS dramatically. For the Shubert function, however, uDEAS is about twice as slow as eDEAS.

Table 3 Comparison of uDEAS with eDEAS and other optimization methods in terms of function evaluation numbers. Abbreviations for the methods: SDE is the stochastic method of Aluffi-Pentini et al. [20]; EA denotes the evolution algorithm of Yong et al. [21]; MLSL is the multiple-level single-linkage method of Kan and Timmer [22]; IA is the interval arithmetic technique of Ratschek and Rokne [23]; TUN is the tunneling method of Levy and Montalvo [24]; and TS refers to the tabu search scheme of Cvijovic and Klinowski [25].

method | BR   | CA    | GP   | RA  | SH     | H3   | H6
SDE    | 2700 | 10822 | 5439 | -   | 241215 | 3416 | -
EA     | 430  | -     | 460  | -   | 2048   | -    | -
MLSL   | 206  | -     | 148  | -   | -      | 197  | -
IA     | 1354 | 326   | 7424 | -   | -      | -    | -
TUN    | -    | -     | 1469 | -   | 12160  | -    | -
TS     | 492  | -     | 486  | 540 | 727    | 508  | 2845
eDEAS  | 94   | 87    | 103  | 304 | 137    | 189  | 1760
uDEAS  | 77   | 74    | 113  | 273 | 268    | 131  | 261

Table 4 Function evaluation numbers consumed by eDEAS and uDEAS to find the global minima of the high-dimensional test functions. Function dimensions are varied from 10 to 30 to compare the increase in computation amount.

       | n = 10               | n = 15                  | n = 20          | n = 30
method | f1    f2    f3       | f1      f2     f3       | f1   f2   f3    | f1   f2   f3
eDEAS  | 40734 13306 129261   | 1294316 425978 3670364  | -    -    -     | -    -    -
uDEAS  | 567   187   1123     | 849     279    2016     | 1132 372  3256  | 1787 557  7149

Table 5 Ratios of function evaluation numbers for Table 4. The reference is the function evaluation number at n = 10.

       | n = 10           | n = 15             | n = 20           | n = 30
method | f1   f2   f3     | f1    f2    f3     | f1   f2   f3     | f1   f2   f3
eDEAS  | 1.00 1.00 1.00   | 31.77 32.01 28.39  | -    -    -      | -    -    -
uDEAS  | 1.00 1.00 1.00   | 1.50  1.49  1.80   | 2.00 1.99 2.90   | 3.15 2.98 6.37

This may be due to the biased search property of uDEAS: in BSS, eDEAS stretches to all possible neighboring points simultaneously, while uDEAS probes in the x1 direction first and then continues in the x2 direction. Thus, if a wrong direction for x1 is selected because of a wedge in the cost landscape, the subsequent search in the x2 direction can move the current solution further away from a local optimum. Therefore eDEAS is robust and suitable for low-dimensional problems, while uDEAS is appropriate for high-dimensional problems.

In order to show that uDEAS can find solutions as good as those of eDEAS with less computation for high-dimensional problems, three high-dimensional benchmark functions are taken from the well-known evolutionary programming literature [26]:

• Unimodal function

f1(x) = Σ_{i=1}^{n} x_i²

The global minimum value f∗ is 0 at the point x∗ = 0, and each variable is bounded within [−100, 100], i.e., x ∈ [−100, 100]^n.

• Step function

f2(x) = Σ_{i=1}^{n} (⌊x_i + 0.5⌋)²

The global minimum value f∗ is 0 at x∗ = 0, and x ∈ [−100, 100]^n. This function has one minimum and is discontinuous.


• Multimodal function

f3(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i²)) − exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e

The global minimum value f∗ is 0 at the point x∗ = 0, and the variable bound is x ∈ [−32, 32]^n.
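Transcribed into Python (our sketch; f3 is the well-known Ackley function), the three functions read:

```python
import math

def f1(x):
    """Unimodal sphere function; minimum 0 at the origin."""
    return sum(xi**2 for xi in x)

def f2(x):
    """Step function; the floor makes it discontinuous and flat in patches."""
    return sum(math.floor(xi + 0.5)**2 for xi in x)

def f3(x):
    """Multimodal (Ackley) function; minimum 0 at the origin."""
    n = len(x)
    return (-20.0 * math.exp(-0.2 * math.sqrt(sum(xi**2 for xi in x) / n))
            - math.exp(sum(math.cos(2 * math.pi * xi) for xi in x) / n)
            + 20.0 + math.e)
```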

Figure 8 shows the profiles of the preliminary search for the 10-dimensional functions, i.e., n = 10, under the following conditions, whose meanings are described in the pseudo-code of the main search:

• iLen1 = 1, iLen2 = 5
• numMaxRestart = 5
• mRL = 20

Fig. 8 Preliminary search of eDEAS and uDEAS for the 10-dimensional test functions: (a) eDEAS for f1, (b) uDEAS for f1, (c) eDEAS for f2, (d) uDEAS for f2, (e) eDEAS for f3, (f) uDEAS for f3 (squares, circles, and triangles represent search points started from different initial points).

From the figure it appears that eDEAS and uDEAS find the global minima of the test functions equally well. However, the function evaluation numbers consumed to attain the global minima differ considerably. Tables 4 and 5 provide the function evaluation numbers and ratios required for eDEAS and uDEAS to locate the global minima of each test function with an accuracy of 10⁻⁶.

The numbers are averaged over 10 independent runs with adequate global search indices. uDEAS shows far better performance than eDEAS in terms of its linear increase of computation time, and these results are also far better than those in [26]. What is interesting in the tables is that the ratios match the dimension growth well for both eDEAS and uDEAS; the small deviation of f3 at n = 30 is due to a nonlinear increase in the number of local minima.

4. Conclusion

In this paper, a numerical optimization method modified from eDEAS, named uDEAS, is proposed. The global optimization scheme of uDEAS, i.e., multistart, is almost identical to that of eDEAS, but the local optimization principle is quite different. Owing to the parameter-by-parameter movement in local search, uDEAS requires no redundancy check between UDS runs, which gives rise to shorter code and an almost linear increase of cost evaluations with the problem dimension. The performance of uDEAS has been verified on standard test functions for which the minimum evaluation numbers needed to locate a global optimum have been reported for each competing algorithm. The test functions retain diverse characteristics of real engineering problems, such as bad scaling (the Goldstein-Price function), peakiness (the Shubert function), high dimensionality (the two Hartman functions and the three n-dimensional functions), and the like. The benchmark results show that uDEAS has strong advantages for high-dimensional optimization problems, and that its robustness does not deteriorate as much as might be expected. This means that uDEAS can be directly employed for the weight training of neural networks, solving the inverse kinematics of robot joints, deriving optimal protein structures, and so on. Moreover, as eDEAS has been successfully applied to the parameter identification of induction motors [15], uDEAS is expected to realize on-line parameter identification of motors in (hybrid) electric vehicles.

As future work for uDEAS, the present sequential order from x1 to xn in local search will be changed in an efficient manner according to the cost landscape and solution quality. Moreover, the multistart global search scheme should be made more elaborate by combining it with conventional hopping or tunneling methods.

References

[1] S.S. Rao, Engineering Optimization, John Wiley & Sons, 1996.
[2] A.L. Cauchy, "Méthode générale pour la résolution des systèmes d'équations simultanées," Comptes Rendus de l'Académie des Sciences, vol.25, pp.536-538, 1847.
[3] J. Raphson, Analysis Aequationum Universalis, London, 1690.
[4] E. Polak, Computational Methods in Optimization, Academic Press, New York, 1971.
[5] R.L. Fox, Optimization Methods for Engineering Design, Addison-Wesley, 1971.
[6] D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, 1989.
[7] G.L. Bilbro, "Fast stochastic global optimization," IEEE Trans. Syst. Man Cybern., vol.24, no.4, pp.684-689, April 1994.
[8] Y. Yao, "Dynamic tunneling algorithm for global optimization," IEEE Trans. Syst. Man Cybern., vol.19, no.5, pp.1222-1230, Sept./Oct. 1989.
[9] F. Glover, "Tabu search methods in artificial intelligence and operations research," ORSA Artificial Intelligence, vol.1, no.2, p.6, 1987.
[10] J.-W. Kim and S.W. Kim, "A novel parametric identification method using a dynamic encoding algorithm for searches (DEAS)," Proc. International Conference on Control, Automation, and Systems, pp.406-411, Muju, Korea, Oct. 2002.
[11] N.G. Kim, J.-W. Kim, and S.W. Kim, "A study for global optimization using dynamic encoding algorithm for searches," Proc. International Conference on Control, Automation and Systems, pp.857-862, Bangkok, Thailand, Aug. 2004.

[12] J.-W. Kim and S.W. Kim, "Numerical method for global optimisation: Dynamic encoding algorithm for searches," IEE Proc.-Control Theory Appl., vol.151, no.5, pp.661-668, Sept. 2004.
[13] J.-W. Kim and S.W. Kim, "Gain tuning of PID controllers with the dynamic encoding algorithm for searches (DEAS) based on the constrained optimization technique," Proc. International Conference on Control, Automation, and Systems, pp.871-876, Gyeongju, Korea, Oct. 2003.
[14] Y.J. Jang, Y.-K. Lee, and S.W. Kim, "Design of optimal temperature patterns in the reheating furnace with regenerative burner for energy saving," Proc. Applied Simulation and Modeling, pp.35-40, Rhodes, Greece, June 2004.
[15] J.-W. Kim and S.W. Kim, "Parameter identification of induction motors using dynamic encoding algorithm for searches (DEAS)," IEEE Trans. Energy Convers., vol.20, no.1, pp.16-24, March 2005.
[16] Y. Park, Y. Lee, J.-W. Kim, and S.W. Kim, "Parameter optimization for SVM using dynamic encoding algorithm," Proc. International Conference on Control, Automation, and Systems, pp.2542-2547, Kintex, Korea, June 2005.
[17] J.-W. Kim, S.Y. Cha, and J.-K. Kim, "Optimal design of transformer cores using DEAS," Proc. KIEE Summer Annual Conference, pp.917-920, Pyongchang, Korea, July 2005.
[18] D.E. Knuth, The Art of Computer Programming, vol.3: Sorting and Searching, Addison-Wesley, 1973.
[19] A. Törn and A. Žilinskas, Global Optimization, Springer-Verlag, Berlin, 1989.
[20] F. Aluffi-Pentini, V. Parisi, and F. Zirilli, "Global optimization and stochastic differential equations," J. Optim. Theory Appl., vol.47, pp.1-15, 1985.
[21] L. Yong, K. Lishan, and D.J. Evans, "The annealing evolution algorithm as function optimizer," Parallel Comput., vol.21, no.3, pp.389-400, 1995.
[22] A.H.G.R. Kan and G.T. Timmer, "A stochastic approach to global optimization," in Numerical Optimization, ed. P.T. Boggs, R.H. Byrd, and R.B. Schnabel, pp.245-262, SIAM, Philadelphia, Pennsylvania, 1985.
[23] H. Ratschek and J. Rokne, New Computer Methods for Global Optimization, Ellis Horwood, Chichester, UK, 1988.
[24] A.V. Levy and A. Montalvo, "The tunneling algorithm for the global minimization of functions," SIAM J. Scientific and Statistical Computing, vol.6, pp.15-29, 1985.
[25] D. Cvijovic and J. Klinowski, "Taboo search: An approach to the multiple minima problem," Science, vol.267, pp.664-666, 1995.
[26] X. Yao, Y. Liu, and G. Lin, "Evolutionary programming made faster," IEEE Trans. Evol. Comput., vol.3, no.2, pp.82-102, 1999.

Jong-Wook Kim was born in Youngpoong, Korea, in 1970. He received the B.S., M.S., and Ph.D. degrees from the Electrical and Computer Engineering Division at Pohang University of Science and Technology (POSTECH), Pohang, Korea, in 1998, 2000, and 2004, respectively. Currently, he is a Full-time Lecturer in the Department of Electronics Engineering at Dong-A University, Busan, Korea. His current research interests are numerical optimization methods, robot control, intelligent control, diagnosis of electrical systems, and system identification.


Sang Woo Kim was born in Pyungtaek, Korea, in 1961. He received the B.S., M.S., and Ph.D. degrees from the Department of Control and Instrumentation Engineering, Seoul National University, in 1983, 1985, and 1990, respectively. Currently, he is an Associate Professor in the Department of Electronics and Electrical Engineering at Pohang University of Science and Technology (POSTECH), Pohang, Korea. He joined POSTECH in 1992 as an Assistant Professor and was a Visiting Fellow in the Department of Systems Engineering, Australian National University, Canberra, Australia, in 1993. His current research interests are in optimal control, optimization algorithms, intelligent control, wireless communication, and process automation.
