Classifier-assisted Constrained Evolutionary Optimization for Automated Geometry Selection of Orthodontic Retraction Spring

Dudy Lim, Yew-Soon Ong, Rachman Setiawan, and Muhammad Idris

Abstract—In orthodontics, retraction springs made of metallic wires are often used to move a tooth with respect to another by virtue of the spring-back effect. A specially selected spring form can deliver the precise force and moment required to move the tooth in a direction that suits a particular patient. In current practice, the geometry is still selected manually by orthodontists, and no substantial automation of this process has been proposed to date. In this paper, we experiment with automated geometry selection of the orthodontic retraction spring using constrained evolutionary optimization. In particular, a Classifier-assisted Constrained Memetic Algorithm (CCMA) is designed for this purpose. The main feature of CCMA lies in its ability to identify appropriate spring structures that should undergo further refinement, using a classifier system to perform the inference. Comparison to the baseline canonical Genetic Algorithm (GA) and Memetic Algorithm (MA) further highlights the efficacy of the proposed approach. In addition, to assert the robustness of CCMA for general complex design, further studies on commonly used constrained benchmark problems against existing constrained evolutionary optimization methods are also reported in the paper.

I. INTRODUCTION

One important apparatus in the field of orthodontics is the metallic-wired retraction spring, formed to suit individual orthodontic cases. It is used to retract or move a tooth with respect to another by virtue of the spring-back effect. The selected spring parameters result in a unique force system, consisting of forces and moments, that moves the tooth in a certain direction. Currently, the geometry selection in this process still relies on manual selection by orthodontists. Hence, automation in the form of optimization would be beneficial and desirable to improve the efficiency of this process. Early efforts towards solving design optimization are mostly based on pure mathematical analysis. For instance, in analytical constrained optimization, the Kuhn-Tucker (K-T) necessary conditions for optimality are defined and then solved for a candidate optimal solution [1].

Manuscript received February 4, 2010. Dudy Lim is with the Centre for Computational Intelligence (C2i), School of Computer Engineering, Nanyang Technological University, Nanyang Avenue, Singapore 639798 (e-mail: [email protected]). Yew-Soon Ong is the Director of the Centre for Computational Intelligence (C2i) at the School of Computer Engineering, Information System Division, Nanyang Avenue, Singapore 639798 (e-mail: [email protected]). Rachman Setiawan and Muhammad Idris are with the Mechanical Engineering Design Research Division, Faculty of Mechanical & Aerospace Engineering, Institut Teknologi Bandung, West Java, Indonesia (e-mail: {rachmans, idris13}@edc.ms.itb.ac.id).

For a problem with objective function f(x), inequality constraints g(x), and equality constraints h(x), the Lagrange function to be minimized, instead of f(x), is defined as:

L(\mathbf{x}, \mathbf{u}, \mathbf{v}, \mathbf{s}) = f(\mathbf{x}) + \sum_{i=1}^{n_g} u_i \left( g_i(\mathbf{x}) + s_i^2 \right) + \sum_{i=1}^{n_h} v_i h_i(\mathbf{x}) = f(\mathbf{x}) + \mathbf{u}^T \left( \mathbf{g}(\mathbf{x}) + \mathbf{s}^2 \right) + \mathbf{v}^T \mathbf{h}(\mathbf{x})    (1)

where u and v are vectors of Lagrange multipliers, while s is a vector of slack variables that determines whether the inequality constraints are active; the i-th inequality constraint is active if g_i(x) = 0. Based on the K-T necessary conditions, there exist vectors u* and v* at the stationary/optimum point x* such that:

\frac{\partial L}{\partial x_j} \equiv \frac{\partial f}{\partial x_j} + \sum_{i=1}^{n_g} u_i^* \frac{\partial g_i}{\partial x_j} + \sum_{i=1}^{n_h} v_i^* \frac{\partial h_i}{\partial x_j} = 0
h_i(\mathbf{x}^*) = 0, \quad i = 1, \ldots, n_h
g_i(\mathbf{x}^*) + s_i^2 = 0, \quad i = 1, \ldots, n_g
u_i^* s_i = 0, \quad i = 1, \ldots, n_g
u_i^* \geq 0, \quad i = 1, \ldots, n_g    (2)

where n_g and n_h denote the number of inequality and equality constraints, respectively.
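For concreteness, the following toy example (ours, not from the paper) walks the K-T conditions through a single-variable problem: minimize f(x) = x^2 subject to g(x) = 1 - x \leq 0.

L(x, u, s) = x^2 + u \left( 1 - x + s^2 \right)
\frac{\partial L}{\partial x} = 2x - u = 0, \quad 1 - x + s^2 = 0, \quad u s = 0, \quad u \geq 0

Setting u = 0 would force s^2 = x - 1 = -1 < 0, which is impossible; hence s = 0, the constraint is active, and x^* = 1 with u^* = 2 \geq 0.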

However, due to the moderately high dimensionality of the design variables and the complex geometry constraints involved in the problem, such paper-based analytical optimization is not convenient to adopt in practice. Beyond the analytical methods, deterministic numerical optimizers were developed to tackle the above shortcomings. The common idea behind these optimizers is to iterate systematically from an initial design and improve it until the optimality conditions are met, rather than to solve a series of complex equations analytically. Some renowned optimizers belonging to this class are the steepest descent, conjugate gradient, quadratic programming, and linear approximation methods [2]. However, being deterministic, a relatively good starting point must be carefully selected to locate the global optimum; otherwise they only reach a local optimum. Further, the unavailability of accurate gradient information, as well as noisy and multimodal functional landscapes, may also reduce their effectiveness. The last few decades have also been marked by prominent advancements in the field of optimization. Among these is a group of stochastic numerical optimization algorithms inspired by Darwin's theory of evolution, collectively known as Evolutionary Algorithms (EAs) [3]. Based on Darwin's survival-of-the-fittest principle,

EAs evolve a population of individuals representing candidate solutions to a given problem, which compete for survival. To date, EAs have emerged as a powerful paradigm for global optimization, solving optimization problems characterized by high-dimensional, non-separable, multimodal, constrained, and non-differentiable fitness landscapes, which are often regarded as hard to handle by their deterministic counterparts. In recent decades, almost all forms of successful stochastic optimization algorithms, including meta-heuristics and evolutionary algorithms, have involved some form of lifetime learning or meme in their design. In particular, the hybridization of population-based and local heuristic search methodologies, commonly known as Memetic Algorithms (MAs) [4][5][6][7], now represents one of the most popular and fastest growing areas of memetic computing research, where many success stories on real-world applications have been reported [8][9][10][11]. In this paper, to automate the retraction spring geometry selection, we perform optimization using a Classifier-assisted Constrained MA (CCMA). It is worth noting that the main feature of CCMA lies in its ability to identify appropriate spring structures that should proceed with further refinement for better alternatives, using a classifier system to perform the inference. The rest of this paper is structured as follows. In Section II, the retraction spring geometry problem is defined and a brief literature review on constrained evolutionary optimization is presented. Subsequently, Section III introduces the proposed Classifier-assisted Constrained MA (CCMA). Section IV, which forms the core contribution of this paper, presents the use of CCMA for retraction spring geometry selection, with a comparison study to other alternative evolutionary approaches including the basic GA and MA. Besides the real-world spring geometry selection problem, this section also provides empirical results of CCMA on some representative benchmark constrained problems, to further assert the robustness of the approach on a variety of complex scenarios. Finally, Section V concludes this paper and outlines several interesting future works.

II. PROBLEM DEFINITION AND LITERATURE REVIEW

A. Problem Definition

In the field of orthodontics, an ideal retraction spring design is needed for efficient tooth treatment, i.e., so that the tooth movement can be controlled towards the desired location. Many retraction spring designs have been developed by researchers, one of which is the T-Loop structure. Fig. 1 shows the application of the T-Loop to retract or move a tooth. The forces (Fx, Fy) and moment are produced by the T-Loop after activation is applied. Without loss of generality, this paper focuses on the optimum design of the T-Loop type retraction spring, using an objective function derived analytically via Castigliano's theorem [12][13]. In principle, the retraction effect generates an axial force (Fx) and a bending moment (Mz) at the edge of the wire (support/bracket). Using Castigliano's theorem, the angular deflection θ is formulated as a function of the strain energy (U) and the moment (Mz):

\theta = \frac{\partial U}{\partial M_z}    (3)

whereas the activation or linear deflection u_x is a function of the strain energy (U) and the force (F_x), i.e.:

u_x = \frac{\partial U}{\partial F_x}    (4)

Next, the total strain energy U_t, obtained by adding the strain energy of each wire section, is described as:

U_t = \sum_{n=1}^{n_{section}} \int_0^{l_n} \frac{M_n^2}{2EI} \, dl_n    (5)

where E and I are the modulus of elasticity and the moment of inertia, respectively, M_n is the moment equation for each wire section, and n denotes the section index. Further, the U_t equation can be simplified by representing the M_z^2, 2 M_z F_x, and F_x^2 coefficient constants as A_t, B_t, and C_t, respectively, which results in the strain energy equation:

U_t = \frac{1}{2EI} \left( M_z^2 A_t + 2 M_z F_x B_t + F_x^2 C_t \right)    (6)

The theoretical solution for the T-Loop can be determined by developing the moment equation for each wire section. The configuration of each wire section is illustrated in Fig. 2, and each equation is shown in Table 1.


Fig. 1. Force system produced by T-Loop [14]

Fig. 2. Geometry of T-Loop

Table 1. Moment equation derivation for the T-Loop; n denotes the section index in eqn. (5).

n    Moment equation (M_n)
1    M_1 = M_z + F_x l_1 \sin\theta
2    M_2 = M_z + F_x (L_1 \sin\theta - l_2)
3    M_3 = M_z - F_x (L_2 - L_1 \sin\theta)
4    M_4 = M_z - F_x (L_2 - L_1 \sin\theta + R - R \cos\theta)
5    M_5 = M_z - F_x (L_2 - L_1 \sin\theta + 2R)
6    M_6 = M_z - F_x (L_2 - L_1 \sin\theta + 2R - (R - R \cos\phi))
7    M_7 = M_z - F_x (L_2 - L_1 \sin\theta)
8    M_8 = M_z - F_x (L_2 - L_1 \sin\theta - l_6)
9    M_9 = M_z - F_x (L_6 - (L_2 - L_1 \sin\theta) - l_7 \sin\theta)

Meanwhile, the linear displacement is the first derivative of the strain energy with respect to F_x. It is essentially the activation distance u_x of the wire, which can be derived mathematically as:

u_x = \frac{\partial U_t}{\partial F_x} = \frac{1}{2EI} \left( 2 M_z B_t + 2 F_x C_t \right)    (7)

The angular displacement is the first derivative of the strain energy with respect to the moment. This displacement is applied for positioning the teeth; when the wire is mounted on the bracket, the gable angle is zero. Mathematically, this is described as:

2\theta = \frac{\partial U_t}{\partial M_z} = \frac{1}{2EI} \left( 2 M_z A_t + 2 F_x B_t \right)    (8)

Eqns. (7) and (8) can be solved in matrix form, to find the F_x and M_z solution, as follows:

\begin{Bmatrix} F_x \\ M_z \end{Bmatrix} = \begin{bmatrix} C_t & B_t \\ B_t & A_t \end{bmatrix}^{-1} \begin{Bmatrix} u_x \\ 2\theta \end{Bmatrix} EI    (9)

Finally, the optimization problem can be defined as minimizing the absolute error between the analytical ratio (R_t) and the actual ratio (R_a):

Objective function:

\text{minimize} \quad F(\mathbf{L}, R) = \left( \frac{R_t(\mathbf{L}, R)}{R_a} - 1 \right)^2    (10)

where

R_t(\mathbf{L}, R) = M_z F_x^{-1}

\begin{bmatrix} F_x \\ M_z \end{bmatrix} = \begin{bmatrix} C_t & B_t \\ B_t & A_t \end{bmatrix}^{-1} \begin{bmatrix} u_x \\ 2\theta \end{bmatrix} EI    (11)

Subject to:

Equality constraints:
h_1(\mathbf{L}) = L_3 - L_5 = 0
h_2(\mathbf{L}) = L_4 - (L_3 + L_5 - e) = 0
h_3(\mathbf{L}) = L_2 - (L_6 - d) = 0

Inequality constraints:
g_1(\mathbf{L}) = L_1 + L_7 + e \leq L_t
g_2(\mathbf{L}) = L_6 + 2 L_8 \leq H_t

Bound constraints:
4.5 \times 10^{-3} \leq L_1 \leq 7 \times 10^{-3} m
4 \times 10^{-3} \leq L_2 \leq 6 \times 10^{-3} m
3.5 \times 10^{-3} \leq L_3 \leq 5 \times 10^{-3} m
8 \times 10^{-3} \leq L_4 \leq 16 \times 10^{-3} m
3.5 \times 10^{-3} \leq L_5 \leq 5 \times 10^{-3} m
4 \times 10^{-3} \leq L_6 \leq 6 \times 10^{-3} m
4.5 \times 10^{-3} \leq L_7 \leq 7 \times 10^{-3} m
0.5 \times 10^{-3} \leq R \leq 1.5 \times 10^{-3} m

Table 2. Geometries and material of the wire.

Parameter                              Magnitude
Material                               SS, stainless steel
Modulus of elasticity, E               2 x 10^11 N/m^2
Width, B                               0.5588 x 10^-3 m
Height, H                              0.4046 x 10^-3 m
Cross section, A                       2.3 x 10^-9 m^2
Moment of inertia, I                   3.1 x 10^-15 m^4
Angle of gable or theta, θ             0°
Moment-to-force ratio (M/F), Ra        3 x 10^-3 m
Distance of gap, e                     0.5 x 10^-3 m
Total length, Lt                       20 x 10^-3 m
Total height, Ht                       15 x 10^-3 m
Offset, d                              1 x 10^-3 m
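For concreteness, the following Python sketch (ours, not the authors' code) shows how a candidate geometry could be scored through eqns. (9) and (10), assuming the coefficients A_t, B_t, and C_t have already been assembled from the moment equations in Table 1; the coefficient and activation values passed in are hypothetical placeholders.

import numpy as np

E = 2e11      # modulus of elasticity, N/m^2 (Table 2)
I = 3.1e-15   # moment of inertia, m^4 (Table 2)
Ra = 3e-3     # target moment-to-force ratio, m (Table 2)

def spring_fitness(At, Bt, Ct, ux, theta=0.0):
    # Eqn. (9): solve [Ct Bt; Bt At] [Fx, Mz]^T = EI [ux, 2*theta]^T
    K = np.array([[Ct, Bt], [Bt, At]])
    Fx, Mz = np.linalg.solve(K, E * I * np.array([ux, 2.0 * theta]))
    Rt = Mz / Fx                   # analytical ratio, Rt = Mz * Fx^-1
    return (Rt / Ra - 1.0) ** 2    # Eqn. (10): squared relative error

# hypothetical coefficient and activation values, for illustration only
print(spring_fitness(At=1e-2, Bt=2e-5, Ct=5e-8, ux=0.5e-3))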

B. Literature Review on Constrained Evolutionary Optimization

As far as constrained design problems are concerned, some prominent techniques reported using evolutionary optimizers are summarized as follows:
- Penalty-based methods. The most common approach among evolutionary constraint-handling techniques is to penalize infeasible solutions. Instead of minimizing f(x), the optimizer's task is now to minimize f_c(x) = f(x) + ρ, where ρ denotes a positive penalty term if x is infeasible (a minimal sketch appears after this list). Different penalty-based methods have been proposed in the literature, namely: death, static, dynamic, adaptive, and self-adaptive penalties. The death penalty simply limits the evolutionary search to the feasible regions by rejecting all infeasible solutions generated [15]. Static penalty methods penalize infeasible solutions based on the degree of constraint violation [16]. Dynamic penalty methods, on the other hand, use penalty terms that change over time [17]. Adaptive penalty methods decide on the penalty magnitude based on feedback from the evolutionary search [18]. Last but not least, self-adaptive penalty methods encode all possible penalty parameters into the chromosome of a candidate solution and evolve them together with the design vector [19].
- Repair-based methods. Another popular approach in evolutionary constrained optimization is to repair the infeasible solutions [20]. As the name suggests, the basic idea is to map an infeasible solution into a feasible counterpart. This can be achieved via domain knowledge, or by going through several alternative solutions using heuristics or even greedy algorithms, to find a feasible solution associated with the particular infeasible solution.
- Ranking-based methods. Two well-known ranking schemes proposed in the literature for this purpose are the deterministic [21] and stochastic [22] ranking schemes. The deterministic ranking scheme can be summarized by the following three rules: 1) a feasible solution is preferred over an infeasible one, 2) between any two feasible solutions, the one having the better objective value is preferred, and 3) between any two infeasible solutions, the one having less constraint violation is preferred. Stochastic ranking, on the other hand, introduces randomness in the comparison criteria on whether the objective value or the constraint violation is used for comparison.
- Multi-Objective (MO)-based methods. The basic idea behind these methods is to treat constraints as objectives in the MO context. Hence, an original problem of n_f objectives, n_g inequality constraints, and n_h equality constraints can be redefined as a multi-objective problem with n_f + 1 or n_f + n_g + n_h objectives. In [23], constraints are treated as many objectives in an optimization framework. On the other hand, [24] proposed using aggregated constraints as a single additional objective.
- Hybridization-based methods. Besides the above-mentioned classes of algorithms, there has also been a recent trend towards hybridization or interplay with machine learning. In [25], regression models of the objective and constraint functions are used to perform the so-called approximate ranking scheme, where expensive evaluations are performed only when the rank induced after a model update changes. Intriguing efforts in [26][27] utilize a Support Vector Machine classifier to model the feasibility structure of a problem, i.e., whether candidate solutions fall within the feasible region, near the feasibility boundary, or within the infeasible region, while enhancing search efficiency.
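As a minimal sketch of two of the ideas above, a static penalty and the three-rule deterministic ranking of [21], consider the following Python fragment; the function names and the tolerance used to relax equality constraints are our own illustrative choices, not the cited authors' code.

def violation(g_vals, h_vals, eps=1e-4):
    # total constraint violation: inequality g_i(x) <= 0 is satisfied when
    # non-positive, equality h_i(x) = 0 is relaxed to |h_i(x)| <= eps
    v = sum(max(0.0, g) for g in g_vals)
    v += sum(max(0.0, abs(h) - eps) for h in h_vals)
    return v

def static_penalty(f_val, g_vals, h_vals, rho=1e6):
    # f_c(x) = f(x) + rho * violation(x): a positive penalty is added
    # whenever x is infeasible, proportional to its degree of violation
    return f_val + rho * violation(g_vals, h_vals)

def deterministic_rank_key(f_val, g_vals, h_vals):
    # sorting a population by this key realizes the three rules:
    # feasible before infeasible, feasible ties broken by objective value,
    # infeasible ties broken by total constraint violation
    v = violation(g_vals, h_vals)
    return (v > 0, f_val if v == 0 else v)

# usage sketch, assuming evaluate(ind) returns (f_val, g_vals, h_vals):
# population.sort(key=lambda ind: deterministic_rank_key(*evaluate(ind)))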

III. CLASSIFIER-ASSISTED CONSTRAINED MEMETIC ALGORITHM

A. Memetic Algorithms (MAs)

MAs are population-based meta-heuristic search methods inspired by Darwinian principles of natural evolution and Dawkins' notion of a meme, defined as a unit of cultural evolution capable of local refinements [4][5]. In its simplest form, a conventional MA, which integrates local search procedures into an EA, can be formulated as in Algorithm 1.

________________________________________________
Algorithm 1. Memetic Algorithm
________________________________________________
1: Generate and evaluate a population of design vectors
2: while termination condition is not satisfied do
3:   Generate offspring population using evolutionary operators
4:   for each offspring x chosen for refinement do
5:     Apply local search to find an improved solution, x_opt
6:     Perform replacement using Lamarckian learning, i.e.,
7:     if f(x_opt) < f(x) then
8:       x = x_opt
9:     end if
10:  end for
11: end while
________________________________________________

While canonical EAs are generally known to be capable of exploring and exploiting promising regions of the search space, they can take a relatively long time to locate the exact local optimum with high precision. MAs, on the other hand, mitigate this issue via the combination of global exploration and local exploitation.
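As an illustration, one generation of Algorithm 1 might look as follows in Python; this is a simplified, unconstrained sketch in which the uniform-crossover/Gaussian-mutation operators and the L-BFGS-B local solver are generic stand-ins, not the paper's exact choices.

import numpy as np
from scipy.optimize import minimize

def memetic_generation(pop, f, bounds, refine_frac=0.1, rng=None):
    # one generation of Algorithm 1 for a real-coded population
    rng = rng or np.random.default_rng()
    # uniform crossover between each individual and a shuffled mate,
    # followed by Gaussian mutation
    mates = pop[rng.permutation(len(pop))]
    mask = rng.random(pop.shape) < 0.5
    offspring = np.where(mask, pop, mates)
    offspring = offspring + rng.normal(0.0, 0.01, size=offspring.shape)
    # Lamarckian learning on a subset chosen for refinement: the locally
    # improved genotype replaces the offspring it originated from
    n_refine = max(1, int(refine_frac * len(offspring)))
    for i in rng.choice(len(offspring), size=n_refine, replace=False):
        result = minimize(f, offspring[i], method='L-BFGS-B', bounds=bounds)
        if result.fun < f(offspring[i]):
            offspring[i] = result.x
    return offspring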

B. Classifier-assisted Constrained MA (CCMA)

One classical challenge in MA design lies in identifying the appropriate individuals that should undergo local refinement. Being able to do so would improve the efficiency of the search. To date, typical naïve solutions to this problem rely on random, sampling-based, or probabilistic selection [6]. However, it is worth noting that most of these efforts have concentrated on unconstrained or only bound-constrained problems. In contrast to earlier works that concentrated on regression meta-models to enhance search [8][9], here we propose a classifier-assisted memetic algorithm designed specifically for nonlinear constrained optimization. In the context of constrained optimization, knowledge of the feasible-infeasible separation boundaries would assist designers in putting more attention on promising regions where good solutions may reside, while avoiding the many computationally intensive objective/fitness evaluations that evolutionary methods often demand. The fact that global optimum solutions are often located near the feasible-infeasible separation boundaries, where one or more constraints are active, further justifies the significance of

making such knowledge available. Since there exist only two classes of solutions, i.e., feasible and infeasible, a binary classification system can easily be formulated for this purpose. Taking this cue, we propose here a Classifier-assisted Constrained Memetic Algorithm (CCMA). CCMA begins by initializing and evaluating a population of candidate solutions using a Design of Experiments (DOE) technique. All evaluated individuals are archived into a database as training inputs for building the classifier, based on their feasibility condition. At this stage, it is possible that no classifier is built, if the database has archived only one class, i.e., all data are either feasible or infeasible. Subsequently, the search proceeds similarly to a canonical MA. In the local refinement phase, if a classifier exists, it is used to test whether an individual should undergo refinement. In particular, local refinement is performed only on misclassified individuals. However, if no classifier is available due to insufficient data archived at runtime, local refinement is performed on η randomly selected individuals. The workflow of CCMA is detailed in Algorithm 2.

________________________________________________
Algorithm 2. Classifier-assisted Constrained Memetic Algorithm
________________________________________________
1: Generate and evaluate a population of design vectors
2: Update database and classifier with every newly evaluated design vector, x
3: while computational budget is not exhausted do
4:   Generate offspring population using evolutionary operators
5:   Evaluate offspring population
6:   Update database and classifier with every newly evaluated design vector, x
7:   if classifier exists then
8:     Test offspring population against classifier
9:   end if
10:  if no misclassified offspring OR classifier does not exist then
11:    Perform local search on η random individuals
12:  else
13:    Perform local search on misclassified individuals
14:  end if
15:  Update database and classifier with every newly evaluated design vector, x
16: end while
________________________________________________

Note that a newly evaluated design vector during the search updates the database only if it is misclassified by the existing classifier. This makes good sense, since the inclusion of such new design vectors may potentially induce changes to the existing classifier. Figs. 3 and 4 illustrate two cases where the database update might trigger changes to the separation boundary. In the first case, depicted in Fig. 3, the new design vectors lie near the separation boundary; hence it can be shown that their inclusion alters the corresponding separation boundary. On the other hand, Fig. 4 depicts the case where the new design vectors lie in previously unexplored regions; hence the separation boundary defined by the classifier is updated with the newly learned knowledge. In this manner, the proposed CCMA facilitates the exploration of uncertain regions, i.e., the separation boundaries and previously unexplored regions, which not only serves to guide the evolutionary search towards good quality solutions, but also allows better discovery of knowledge about the feasible-infeasible boundaries.
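A minimal Python sketch of the refinement-selection step (lines 7-14 of Algorithm 2) is given below; scikit-learn's MLPClassifier stands in for the 3-layer feed-forward ANN adopted in Section IV, and all hyper-parameter values are illustrative assumptions.

import numpy as np
from sklearn.neural_network import MLPClassifier

def select_for_refinement(archive_X, archive_y, offspring, offspring_feasible,
                          eta, rng=None):
    # build a feasibility classifier from the archive and send only the
    # offspring it misclassifies to local search (the "uncertain" region)
    rng = rng or np.random.default_rng()
    if len(np.unique(archive_y)) < 2:
        # only one class archived so far: no classifier can be built,
        # fall back to eta randomly selected individuals
        return rng.choice(len(offspring), size=eta, replace=False)
    clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500)
    clf.fit(archive_X, archive_y)       # y: 1 = feasible, 0 = infeasible
    predicted = clf.predict(offspring)
    misclassified = np.flatnonzero(predicted != offspring_feasible)
    if misclassified.size == 0:
        # classifier errs nowhere: refine eta random individuals instead
        return rng.choice(len(offspring), size=eta, replace=False)
    return misclassified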

IV. EMPIRICAL STUDY

A. Experiments on Retraction Spring Problem

The performance of CCMA is compared against the canonical Genetic Algorithm and Memetic Algorithm with Deterministic Ranking (GA-DR and MA-DR) on the orthodontic retraction spring problem defined in Section II. The algorithms considered are briefly discussed as follows:
• GA-DR. This is the baseline Genetic Algorithm upon which CCMA builds. The deterministic ranking used, as explained in Section II-B, is based on 3 simple rules: 1) a feasible solution is preferred over an infeasible one, 2) between any two feasible solutions, the solution with the better objective value is preferred, and 3) between any two infeasible solutions, the solution with less constraint violation is preferred.
• MA-DR. This is a hybridization of GA-DR with a local refinement procedure, or equivalently the CCMA without any classifier assistance in the search, i.e., a canonical MA. Consistent with GA-DR, the deterministic ranking scheme is also used.
• CCMA. The Classifier-assisted Constrained MA described in Section III-B. In particular, the traditional 3-layer (input, hidden, output) feed-forward Artificial Neural Network (ANN) with back-propagation learning and Sequential Quadratic Programming (SQP) are used as the classifier and the local search technique, respectively (see the sketch at the end of this subsection). Note that, without loss of generality, other forms of classifiers and local solvers may be employed in the proposed algorithm.
The parameters in Table 3 are used for GA-DR, MA-DR, and CCMA. For fair comparison, 30 independent runs are conducted for each algorithm. The results obtained by the 3 algorithms are summarized in Table 4. From the results, several important observations can be drawn. Firstly, it is obvious that the two variants of MA considered, i.e., MA-DR and CCMA, outperformed the baseline GA-DR optimizer. In particular, GA-DR could not locate any feasible solution by the end of 1.0E+03 fitness evaluations. In contrast, both MA-DR and CCMA had converged to sub-optimal fitness values for the same number of fitness evaluations (see Table 4). This is a typical problem of canonical GAs in solving constrained optimization problems, where they are unable to quickly identify feasible solutions at precisions comparable to MAs. Meanwhile, between the two MA variants, through the additional classifier-assisted mechanism for selecting appropriate individuals to undergo refinement, CCMA is observed to outperform the canonical MA-DR as the search progresses. The best optimized structures designed by GA-DR, MA-DR, and CCMA are depicted in Figs. 5, 6, and 7, with fitness values of 4.9694E-09, 5.7483E-19, and 6.5208E-21, respectively.
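Since SQP serves as the local solver, one possible rendering of a single refinement step uses SciPy's SLSQP on the Section II formulation, as sketched below; this is our approximation rather than the authors' implementation, and objective and x0 are hypothetical names for the Eqn. (10) evaluation and the starting design.

from scipy.optimize import minimize

Lt, Ht, e, d = 20e-3, 15e-3, 0.5e-3, 1e-3   # Table 2 constants

# design vector x = [L1, L2, L3, L4, L5, L6, L7, L8, R]
constraints = [
    {'type': 'eq',   'fun': lambda x: x[2] - x[4]},               # h1: L3 - L5 = 0
    {'type': 'eq',   'fun': lambda x: x[3] - (x[2] + x[4] - e)},  # h2
    {'type': 'eq',   'fun': lambda x: x[1] - (x[5] - d)},         # h3
    {'type': 'ineq', 'fun': lambda x: Lt - (x[0] + x[6] + e)},    # g1, in >= 0 form
    {'type': 'ineq', 'fun': lambda x: Ht - (x[5] + 2.0 * x[7])},  # g2, in >= 0 form
]
bounds = [(4.5e-3, 7e-3), (4e-3, 6e-3), (3.5e-3, 5e-3), (8e-3, 16e-3),
          (3.5e-3, 5e-3), (4e-3, 6e-3), (4.5e-3, 7e-3), (None, None),
          (0.5e-3, 1.5e-3)]   # L8 has no explicit bound in the formulation

# refinement of a candidate x0, assuming objective(x) evaluates Eqn. (10):
# result = minimize(objective, x0, method='SLSQP', bounds=bounds,
#                   constraints=constraints, options={'maxiter': 10})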

Fig. 3. Classifier update near the separation boundary: (a) original separation boundary; (b) updated separation boundary.

Fig. 4. Classifier update in the unexplored region: (a) original separation boundary; (b) updated separation boundary.

Table 3. Settings of the experiments for GA-DR, MA-DR, and CCMA.

Parameters of GA-DR, MA-DR, and CCMA
  Population size: 100
  Crossover probability: 0.9
  Mutation probability: 0.1
  Maximum number of evaluations: 1.0E+04
  Evolutionary operators: uniform crossover & mutation, elitism, and deterministic ranking selection
  Number of independent runs: 30
Parameters of MA-DR and CCMA
  Local search iterations: 10
  Number of random individuals undergoing local search: 10% of population size

Fig. 5. Best design obtained by GA-DR, with fitness value of 4.9694E-09

Table 4. Results obtained by GA-DR, MA-DR, and CCMA on the orthodontic retraction spring problem after 1.0E+03, 5.0E+03, 7.5E+03, and 1.0E+04 evaluation counts, respectively.

Method   Evaluation Count   Mean        Standard Deviation
GA-DR    1.0E+03            N.A.        N.A.
GA-DR    5.0E+03            1.85E-05    2.70E-05
GA-DR    7.5E+03            8.48E-06    2.11E-05
GA-DR    1.0E+04            6.36E-06    2.14E-05
MA-DR    1.0E+03            6.61E-11    2.93E-10
MA-DR    5.0E+03            2.17E-15    4.16E-15
MA-DR    7.5E+03            4.08E-16    5.91E-16
MA-DR    1.0E+04            2.86E-16    3.09E-16
CCMA     1.0E+03            7.77E-11    2.57E-10
CCMA     5.0E+03            2.35E-15    4.74E-15
CCMA     7.5E+03            7.40E-16    1.85E-15
CCMA     1.0E+04            4.02E-17    8.09E-17

Fig. 6. Best design obtained by MA-DR, with fitness value of 5.7483E-19

Fig. 7. Best design obtained by CCMA, with fitness value of 6.5208E-21

B. Experiments on Benchmark Problems

To better assert the performance of the proposed CCMA, a study on commonly used representative constrained benchmark problems (see Appendix) is conducted to pit CCMA against 4 existing constrained evolutionary algorithms, namely, Evolution Strategy with Stochastic Ranking (ES-SR) [22], Simple Multi-membered Evolution Strategy (SMES) [28], Adaptive Tradeoff Model with Evolution Strategy (ATMES) [29], and the multi-objective Hybrid Constrained Optimization EA (HCOEA) [30]. Note that the results for these 4 algorithms are taken directly from the respective sources in the literature, without any re-runs. In the previous experiment on the orthodontic spring problem, we did not present a comparison of CCMA with these 4 algorithms due to the unavailability of their original codes. Similar parametric configurations to those tabulated in Table 3 are also used here in the study on the robustness of CCMA, with the exception of the maximum number of evaluations used for search termination. Instead, 2.4E+05 (SMES, ATMES, and HCOEA) or 3.5E+05 (ES-SR) objective evaluations are used, to be consistent with the literature and for the sake of a fair comparison. Preliminary results obtained by CCMA and the other 4 algorithms are summarized in Table 5, while the t-test with 95% confidence level is tabulated in Table 6. It is worth noting that, statistically, CCMA is shown to perform competitively with, if not better than, most of the existing algorithms reported in the literature. In several instances, including F3 and F4, CCMA is observed to outperform SMES and ATMES, and ES-SR, respectively.

Table 5. Results obtained by CCMA, ES-SR, SMES, ATMES, and HCOEA on the benchmark problems after 2.4E+05 evaluation counts.

Benchmark Problem   Algorithm   Mean         Standard Deviation
F1                  CCMA        -1           0.00E+00
F1                  ES-SR       -1           1.90E-04
F1                  SMES        -1           2.09E-04
F1                  ATMES       -1           5.90E-05
F1                  HCOEA       -1           1.30E-12
F2                  CCMA        -30665.54    0.00E+00
F2                  ES-SR       -30665.54    2.00E-05
F2                  SMES        -30665.54    0.00E+00
F2                  ATMES       -30665.54    7.40E-12
F2                  HCOEA       -30665.54    5.40E-07
F3                  CCMA        680.630      2.23E-04
F3                  ES-SR       680.625      3.40E-02
F3                  SMES        680.643      1.55E-02
F3                  ATMES       680.639      1.00E-12
F3                  HCOEA       680.630      9.41E-08
F4                  CCMA        -6961.81     9.33E-13
F4                  ES-SR       -6875.94     1.60E+02
F4                  SMES        -6961.28     1.85E+00
F4                  ATMES       -6961.81     4.60E-12
F4                  HCOEA       -6961.81     8.51E-12
F5                  CCMA        -0.089157    0.020524
F5                  ES-SR       -0.095825    2.60E-17
F5                  SMES        -0.095825    0.00E+00
F5                  ATMES       -0.095825    2.80E-17
F5                  HCOEA       -0.095825    2.42E-17

Table 6. Results of the t-test with 95% confidence level comparing statistical values of CCMA with those of ES-SR, SMES, ATMES, and HCOEA on F1-F5, in terms of p-value and s+, s-, or ≈, indicating whether CCMA is significantly better, significantly worse, or indifferent, respectively.

Benchmark Problem: F1 F2 F3 F4 F5
p-value (SMES, ATMES): 1.0(≈) 1.0(≈) 1.0(≈) 1.0(≈)
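For reference, the comparison behind Table 6 can be sketched as follows, assuming arrays of per-run best fitness values (30 independent runs each, per Table 3); the paper does not state which t-test variant was used, so Welch's unequal-variance test is an assumption here.

from scipy import stats

def compare_to_ccma(ccma_runs, other_runs, alpha=0.05):
    # two-sample t-test on per-run best fitness values; for minimization,
    # CCMA is significantly better when its mean is significantly lower
    t, p = stats.ttest_ind(ccma_runs, other_runs, equal_var=False)
    if p >= alpha:
        return '≈'                       # statistically indifferent
    return 's+' if t < 0 else 's-'       # significantly better / worse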
