Solving Continuous Optimization Problems using a Dynamic Self-Adaptive Harmony Search Algorithm

Ali Kattan
School of Computer Sciences, USM, 11800 Penang, Malaysia
[email protected]

Rosni Abdullah
School of Computer Sciences, USM, 11800 Penang, Malaysia
[email protected]

Abstract—In order to solve global optimization problems of continuous functions, researchers rely on meta-heuristic algorithms to overcome the computational drawbacks of existing numerical methods. One such recent meta-heuristic is the Harmony Search algorithm, which was inspired by the music improvisation process and has been applied successfully to such problems. Properly setting the algorithm parameters prior to starting the optimization process plays a crucial role in its overall performance and its ability to converge to a good solution. Several improvements have been suggested to automatically tune some of these optimization parameters to achieve better results than the original algorithm. This paper proposes a new dynamic and self-adaptive Harmony Search algorithm that utilizes two new quality measures to drive the optimization process dynamically. The key difference between the proposed algorithm and many recent improvements is that the values of the pitch-adjustment rate and the bandwidth optimization parameters are determined independently of the current improvisation count and hence change dynamically rather than monotonically. Results show the superiority of the proposed method over several recent methods on some common benchmarking problems.

Keywords—computational intelligence; metaheuristic

I. INTRODUCTION

In order to solve global optimization problems of continuous functions, a great number of numerical methods have been applied. However, these methods are limited in that they require substantial gradient information and depend highly on the initial starting point of the search [1]. In addition, such gradient-based search may become difficult and unstable when the objective function and constraints have multiple or sharp peaks [2]. Stochastic algorithms such as evolutionary algorithms have been shown to be effective optimization techniques as they do not require special conditions or mathematical properties of the objective functions [3]. The Harmony Search (HS) algorithm is an evolutionary stochastic global optimization (SGO) method that is similar in concept to other SGO methods such as genetic algorithms, particle swarm optimization and ant colony optimization in that it combines rules and randomness to imitate the process that inspired it [4]. It is a relatively young meta-heuristic that was inspired by the improvisation process of musicians [5] and was later used for engineering optimization problems with continuous design variables [6].

The HS algorithm has been applied successfully in many fields and has proved to be a competitive alternative to other SGO methods [4, 7, 8]. The original HS algorithm, referred to as "classical", requires statically setting a certain number of parameters prior to starting the optimization process. However, the algorithm is quite sensitive to these parameter settings, which affect its overall performance and its ability to converge to a good solution [3, 9]. Many recent works have proposed "self-adaptive" HS variants that automatically tune and find the best settings for these optimization parameters, achieving better results than the classical algorithm. However, many of these variants have also introduced new additional parameters, which in some cases require additional effort to set their initial values manually, depending on the problem considered, prior to starting the optimization process. In addition, many of these variants have forced a relationship between the value of these parameters and the current optimization iteration count, imposing a linear monotonic change that is bounded by a maximum integer value selected subjectively by the user to indicate termination. Consequently, the selection of such an integer value affects the whole optimization process. In this work the authors propose a new dynamic and self-adaptive HS algorithm that utilizes two new quality measures, namely the best-to-worst ratio (BtW) and the improvisation acceptance rate percentage, to dynamically compute the values of two main parameters during the optimization process, namely the pitch-adjustment rate (PAR) and the bandwidth (BW). Using some common benchmarking problems comprising continuous functions, comparisons are made against the reported results of three recent HS-based algorithms with self-adaptive features. These are the Self-adaptive HS (SHS) [3], the Self-adaptive Global-best HS (SGHS) [9] and the HS with Adaptive Pitch Adjustment (HSASP) [10]. Results show that the proposed algorithm is superior to these recent approaches. This paper is organized as follows: Section II is the literature review covering the background of the classical HS algorithm and some recent related works. Section III presents the proposed dynamic self-adaptive HS algorithm and Section IV gives the experimental results. Section V is the discussion and finally the conclusions are given in Section VI.

II. LITERATURE REVIEW

The HS concept is based on the improvisation process of musicians in a band, where each note played by a musician represents one component of the harmony vector. The harmony vector represents a solution vector x of size N, holding all musician-generated notes and associated with a harmony quality value, an aesthetic measure, as shown in Fig. 1.

Figure 1. The harmony vector and its mathematical representation

The harmony vector is analogous to N dimension variables and the harmony quality represents the fitness function f(x). The computational procedure for the classical HS algorithm is given in Fig. 2, where HMS represents the Harmony Memory Size, HMCR is the Harmony Memory Considering Rate, PAR is the Pitch Adjustment Rate, BW is a vector of size N from which the adjustment value for each dimension variable is drawn, and MAXIMP is the maximum number of improvisations, representing the total number of iterations. For some real-valued continuous objective functions, the values of the N dimension variables are drawn from the same range [xL, xU]. The classical HS sets all of its optimization parameters statically prior to starting the optimization process. The perfect solution vector is found when each component value is optimal based on some objective function evaluated for this solution vector [11]. The improvisation process continues until MAXIMP is reached. A detailed description of the algorithm can be found in [6].
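To make the procedure of Fig. 2 concrete, the following Python sketch outlines one improvisation step of the classical HS algorithm for a minimization problem. It is a minimal illustration only; all function and variable names are ours, not taken from the original pseudocode.

```python
import random

def improvise(HM, fitness, HMCR, PAR, BW, xL, xU):
    """One classical HS improvisation step (minimization assumed)."""
    N = len(HM[0])
    trial = []
    for i in range(N):
        if random.random() < HMCR:                    # memory consideration
            value = random.choice(HM)[i]
            if random.random() < PAR:                 # pitch adjustment
                value += random.uniform(-BW[i], BW[i])
                value = min(max(value, xL), xU)       # keep within [xL, xU]
        else:                                         # random selection
            value = random.uniform(xL, xU)
        trial.append(value)
    worst = max(range(len(HM)), key=lambda k: fitness(HM[k]))
    if fitness(trial) < fitness(HM[worst]):           # replace worst if better
        HM[worst] = trial
```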

Two important algorithm parameters, namely PAR and BW, are considered to have a great influence on the quality of the final solution, and hence these optimization parameters need to be skillfully assigned in order to obtain good results [12]. PAR determines the probability of whether the value selected from memory is to be adjusted or not, and affects the search area size and the diversity of the population represented by the harmony memory (HM) [9]. BW is the adjustment value for each dimension variable and must guarantee that the resultant value is within the permissible range specified for each variable. The Improved HS (IHS) [13] and Global-best HS (GHS) [14] were the first two early attempts to auto-tune these parameters. The value of these parameters is computed in every iteration based on the current improvisation count, which is bounded by MAXIMP. Both of these algorithms have reported superiority over the classical HS algorithm in a number of domains. However, each algorithm has its own limitations and many have criticized the approach used in adjusting these optimization parameters [3, 10, 12]. Wang and Huang (2010) proposed "an almost-parameter-free harmony search algorithm" referred to as the Self-adaptive Harmony Search (SHS) [3]. In their method some of the parameters are adjusted automatically according to the algorithm's self-consciousness. PAR is decreased linearly from 1.0 to 0.0 as a function of the iteration count. BW, on the other hand, was replaced altogether, and the new harmony is updated according to the maximal and minimal values in the HM. It was indicated that such a technique would result in the new harmony making better utilization of its own experiences. The adjustment is computed for each dimension variable based on the highest and lowest values in HM for that dimension variable. The formulas used in SHS are given in (1) through (3), where $trial_i$ denotes the candidate value for the i-th dimension variable and $\mathrm{HM}_i$ denotes the i-th column of HM:

$trial_i = \mathrm{HM}\big(\mathrm{int}(\mathrm{rnd}() \times \mathrm{HMS}) + 1\big)$ (1)

$trial_i = trial_i + [\max(\mathrm{HM}_i) - trial_i] \times \mathrm{rnd}[0,1)$ (2)

$trial_i = trial_i - [trial_i - \min(\mathrm{HM}_i)] \times \mathrm{rnd}[0,1)$ (3)
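As an illustration, a Python sketch of the SHS update in (1) through (3) follows. The text above does not state whether (2) and (3) are applied together or as alternatives, so treating them as equally likely alternatives is our assumption; names are illustrative.

```python
import random

def shs_trial_value(HM, i):
    """SHS-style update of dimension i per Eqs. (1)-(3)."""
    HMS = len(HM)
    col = [h[i] for h in HM]
    trial = HM[int(random.random() * HMS)][i]          # Eq. (1), 0-based indexing
    if random.random() < 0.5:                          # equiprobable choice (assumption)
        trial += (max(col) - trial) * random.random()  # Eq. (2): move toward column max
    else:
        trial -= (trial - min(col)) * random.random()  # Eq. (3): move toward column min
    return trial
```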

Figure 2. The computational procedure for classical HS algorithm

The technique used in SHS averts the need for any initial setting involving PAR and BW. Based on a number of full-factorial experiments involving HMCR and HMS, it was found that an HMS of around 50 and an HMCR value of 0.99 are the most suitable for a number of continuous optimization problems involving four benchmarking functions. The maximum iteration count for all the problems considered was set to a value of 100,000. The Self-adaptive Global-best HS (SGHS) [9] is based on GHS [14] and employs a new improvisation scheme with an adaptive parameter tuning method. Three parameters are adjusted dynamically during the optimization process in SGHS, namely HMCR, PAR and BW. It is assumed that the HMCR and PAR values are normally distributed in the ranges [0.9, 1.0] and [0.0, 1.0] respectively, with mean values HMCRm and PARm and standard deviations of 0.01 for HMCR and 0.05 for PAR. HMCRm and PARm are initially set, prior to starting the algorithm, at 0.98 and 0.9 respectively. SGHS then starts with HMCR and PAR values generated according to these two normal distributions. After a specified number of iterations, referred to as the learning period (LP), which was selected at a value of 100, both HMCRm and PARm are recalculated by averaging all the recorded HMCR and PAR values during this period, and then the procedure is repeated. It was mentioned that such an approach results in appropriate values that can be gradually learned to suit the particular problem and the particular phases of the search process. The dynamic value of BW is set in a fashion similar to that used in IHS, i.e. as a function of the current iteration count gn and MAXIMP, as given in (4). To sum up, SGHS still requires the initial setting of BWmin, BWmax, HMCRm, PARm and the new parameter LP. These parameters must be determined manually before starting the optimization process.

$\mathrm{BW}(gn) = \begin{cases} \mathrm{BW}_{max} - \dfrac{\mathrm{BW}_{max} - \mathrm{BW}_{min}}{\mathrm{MAXIMP}} \times 2\,gn & \text{if } gn < \mathrm{MAXIMP}/2 \\ \mathrm{BW}_{min} & \text{if } gn \geq \mathrm{MAXIMP}/2 \end{cases}$ (4)
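A direct Python transcription of the schedule in (4), a minimal sketch with illustrative names:

```python
def sghs_bw(gn, maximp, bw_min, bw_max):
    """SGHS bandwidth schedule per Eq. (4): linear decrease over the
    first half of the run, then fixed at bw_min."""
    if gn < maximp / 2:
        return bw_max - (bw_max - bw_min) / maximp * 2 * gn
    return bw_min
```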

Another recent self-adaptive HS variant is the HS with Adaptive Pitch Adjustment (HSASP) introduced by Worasucheep (2011). In this algorithm, the bandwidth value is adapted dynamically using a technique inspired by velocity clamping in particle swarm optimization. Arguing that the simultaneous dynamic change of both HMCR and PAR can cause a twist of global and local search (a contradiction of the concept used in SGHS), HSASP uses a high fixed value of 0.995 for the HMCR parameter, while PAR, as in SHS, is decreased linearly from 1.0 to 0.0 during the optimization process. It was indicated that a large HMCR value increases the convergence rate of the algorithm in most cases and provides better performance [3, 13, 14]. Worasucheep (2011) argued that the SHS technique [3] of using the lowest and highest values of the i-th dimension variable in HM, given previously in (1) through (3), eliminates the benefits of exploring the search space outside the current boundary confined within the current minimum and maximum values, and results in a non-uniform distribution of the position of the new harmony. In HSASP the improvisation process also considers the current high and low values of each dimension variable within HM. However, the improvisation introduces two new terms, the range and the value λ, as given in (5) through (8) [10]. The bandwidth parameter BW is replaced with λ×range, where it is argued that stochastic oscillation in each dimension is restricted to the current range of harmony positions and is regulated by the parameter λ, with values ranging from 0.2 to 0.8:

$range_i = \max(\mathrm{HM}_i) - \min(\mathrm{HM}_i)$ (5)

$trial_i = trial_i \pm \lambda \times range_i \times \mathrm{rnd}()$ (6)

$\text{if } trial_i < x^L_i \text{ then } trial_i = x^L_i$ (7)

$\text{if } trial_i > x^U_i \text{ then } trial_i = x^U_i$ (8)
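The following Python sketch illustrates the HSASP adjustment in (5) through (8). The rule for choosing the ± sign in (6) is not specified above, so an equiprobable choice is our assumption; names are illustrative.

```python
import random

def hsasp_adjust(HM, trial, i, lam, xL, xU):
    """HSASP pitch adjustment of dimension i per Eqs. (5)-(8)."""
    col = [h[i] for h in HM]
    rng = max(col) - min(col)                           # Eq. (5): current range
    sign = 1 if random.random() < 0.5 else -1           # +/- assumed equiprobable
    value = trial + sign * lam * rng * random.random()  # Eq. (6)
    return min(max(value, xL), xU)                      # Eqs. (7)-(8): clamp to bounds
```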

The proposed technique has been shown to attain good results in comparison with other methods, including SHS [3] and the classical HS [6]. However, it still requires manual settings for the newly introduced parameter λ. The best values were found to be 0.3, 0.4 and 0.5 for the problems selected. As for the other parameters, HMS was set at 50, with the memory initialized using the common uniform random initialization. The maximum iteration count was set manually based on the problem dimensionality. There are two contradicting techniques to change PAR in the methods introduced above. PAR either starts from a minimum value and ends with a maximum one, as in IHS, GHS and SGHS [9, 13, 14], or it starts from a maximum value and ends with a minimum one, as in SHS and HSASP [3, 10]. Regardless of the technique, PAR behavior in these methods is predetermined to be monotonic and does not take into account the quality of the solutions in the current HM. A larger value of PAR results in further modification of the newly created dimension variable, thereby enhancing the local exploitation ability of the algorithm, whereas a smaller value of PAR causes the new harmony vector to select its dimension values by perturbing the corresponding values in HM, thus enlarging the search area and the diversity of the HM [3]. The main difference among these methods is the choice of which takes place first, local exploitation or enlarging the search area and diversity, i.e. whether to start with a large or a small PAR value. There is no link between the "quality" of the current solutions and the PAR value setting.

III. PROPOSED METHOD

The proposed method utilizes the Best-to-Worst (BtW) ratio of the current HM. The concept of BtW was introduced earlier by the authors for the training of artificial neural networks using the HS algorithm [4, 15]. The words "best" and "worst" are part of the HS algorithm nomenclature, whereby the algorithm basically tries to find the "best" solution among a set of solutions stored in HM by improvising new harmonies to replace the "worst" ones, as in the algorithm given in Fig. 2. At any time the HM contains a number of solutions, including a best solution and a worst solution in terms of their stored quality measures, i.e. fitness function values. With minimization problems in mind, the BtW value is a value in the range [0,1] given by the ratio of the current best harmony fitness value to the current worst harmony fitness value in HM. This is expressed in (9), where a higher BtW value indicates that the quality of the current HM solutions is approaching that of the current best:

$\mathrm{BtW} = \dfrac{f(x_{best})}{f(x_{worst})}$ (9)
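In Python, the ratio in (9) can be computed directly. The sketch below assumes strictly positive fitness values (otherwise the ratio is not confined to [0,1]); names are illustrative.

```python
def btw_ratio(HM, fitness):
    """Best-to-Worst ratio per Eq. (9), minimization assumed."""
    values = [fitness(h) for h in HM]
    return min(values) / max(values)   # f(x_best) / f(x_worst), in [0, 1]
```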

The PAR value in the proposed method is adjusted dynamically based on the value of the current BtW ratio rather than the value of the current iteration count. This is shown in Fig. 3, where as the search progresses the BtW value would eventually increase owing to having better-quality solutions in HM. This behavior is expressed in (10) and (11), where PAR becomes a function of BtW and not of the iteration count. Unlike the methods introduced in the previous section, the PAR change is not monotonic and is not bounded by a MAXIMP value. PAR increases or decreases in response to the quality of the current solutions in HM, represented by the computed BtW ratio.

Figure 3. The dynamic setting of PAR in the proposed method

$\mathrm{Factor} = \dfrac{\mathrm{PAR}_{min} - \mathrm{PAR}_{max}}{1 - 0} = \mathrm{PAR}_{min} - \mathrm{PAR}_{max}$ (10)

$\mathrm{PAR} = \mathrm{Factor} \cdot \mathrm{BtW} + \mathrm{PAR}_{max}$ (11)
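A Python rendering of (10) and (11), using the PARmin and PARmax values adopted later in Table I as defaults (a sketch, not the authors' code):

```python
def dynamic_par(btw, par_min=0.1, par_max=1.0):
    """PAR per Eqs. (10)-(11): inversely proportional to BtW, giving
    PAR = par_max at BtW = 0 and PAR = par_min at BtW = 1."""
    factor = (par_min - par_max) / (1 - 0)   # Eq. (10)
    return factor * btw + par_max            # Eq. (11)
```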

PARmax is set at the value of 1.0, as in many of the methods presented earlier, and PARmin is set at a small value greater than zero. Setting PARmin to zero might inhibit the pitch-adjustment process completely in the case of HM stagnation, a condition in which BtW becomes close to unity owing to having similar solutions in HM. If the BtW value decreases, indicating a range of solutions of different quality, then the PAR value increases to reflect the local exploitation ability of the algorithm and cause further modifications to the newly created harmony. On the other hand, if the BtW value increases, indicating higher-quality solutions within HM, then the PAR value decreases to enlarge the search area and diversity by causing the values of the newly created harmony to be selected by perturbing the corresponding values in HM. The pitch-adjusting process is accomplished by using a dynamic bandwidth value (DBW) that considers each harmony vector component (dimension variable) separately, as given in (12) and (13):

$\mathrm{ActiveBW}(i) = C \cdot \mathrm{stdev}(\mathrm{HM}_i)$ (12)

$\mathrm{DBW}(i) = \mathrm{rnd}\big({-\mathrm{ActiveBW}(i)},\ \mathrm{ActiveBW}(i)\big)$ (13)
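A per-dimension Python sketch of (12) and (13); whether the sample or population standard deviation is meant is not specified, so the sample version here is an assumption:

```python
import random
import statistics

def dynamic_bw(HM, i, C):
    """Dynamic bandwidth per Eqs. (12)-(13) for dimension i."""
    col = [h[i] for h in HM]
    active_bw = C * statistics.stdev(col)          # Eq. (12): C times column stdev
    return random.uniform(-active_bw, active_bw)   # Eq. (13): DBW within +/- ActiveBW
```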

The ActiveBW for the considered dimension variable is computed by calculating the standard deviation of the respective HM column. The DBW of the dimension variable is a random value confined within the positive and negative range of the ActiveBW for that dimension variable. This is similar in concept to the settings used in SHS and HSASP, given earlier, in that it considers the current high and low values existing within HM for each dimension variable. The standard deviation in this case represents how much variation exists from the average of each component value in HM. A value of C > 1.0 extends the dynamic bandwidth value so that it does not eliminate the benefits of exploring the search space outside the current boundary confined within a minimum and a maximum [10]. Based on a number of experiments, using a value of C = 2.0 makes the performance of the proposed algorithm comparable to those presented earlier. However, in some cases the results obtained were inferior in comparison. In those cases it was noted that using a smaller C value would cause the method to converge to a better solution, where the total number of accepted improvisations would be higher. In general, the number of accepted improvisations gradually decreases as convergence is approached. The proposed algorithm therefore introduces another quality measure: the improvisation acceptance rate percentage (AccRate), representing the total number of accepted improvisations in the last 100 iterations. The value of C can be self-adapted based on the value of AccRate, as expressed in (14):

$C = \begin{cases} 2 & \text{if } \mathrm{AccRate} \geq \mathrm{AC}_{th} \\ 2 \cdot \dfrac{\mathrm{AccRate}}{\mathrm{AC}_{th}} & \text{if } \mathrm{AC}_{th} > \mathrm{AccRate} > 1 \\ 0.1 & \text{if } \mathrm{AccRate} \leq 1 \end{cases}$ (14)
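The three-case schedule in (14) translates directly to Python; the sketch below treats AccRate and ACth as percentages, matching the 20% threshold reported next:

```python
def adapt_c(acc_rate, ac_th=20.0):
    """Self-adaptation of C per Eq. (14); acc_rate and ac_th are percentages."""
    if acc_rate >= ac_th:
        return 2.0
    if acc_rate > 1.0:
        return 2.0 * acc_rate / ac_th   # decreases linearly with the acceptance rate
    return 0.1                          # floor for very low acceptance rates
```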

C is initially set at the value 2.0 (including during the first 100 iterations). ACth represents a threshold below which the C value is decreased proportionally with the acceptance rate. Based on several experimental settings, it was found that a value of ACth = 20% gives very good performance. A minimum value of C = 0.1 is used for a very low acceptance rate, as indicated in (14). In this case the effective range of ActiveBW in (12) starts to decrease linearly with the current improvisation acceptance rate once it falls below the threshold value ACth, resulting in DBW fine-tuning within smaller bounds as in (13). Termination in all the other methods presented earlier is the same as that of the classical HS: specifying a maximum number of iterations as stipulated by the selected MAXIMP value [3, 9, 10, 12]. The proposed algorithm also uses this standard termination condition, whereby the total number of optimization cycles is bounded by MAXIMP. The proposed method, however, adds one more termination condition that is "OR"ed with the standard one: with minimization considered, if the current fitness function value of the best solution within HM is less than a very small number delta, then this also signals termination.

IV. EMPIRICAL RESULTS

The value settings for HMS and HMCR were investigated thoroughly in SHS and HSASP [3, 10], and common values for these two are used in the proposed method. With minimization problems in mind, fitness function values less than delta = 1.0E-50 are considered practically zero and cause immediate termination; otherwise termination is bounded by MAXIMP. HM initialization follows the same trend used in SHS [3]. The proposed method uses symmetric low-discrepancy sequences to initialize HM using the Mersenne Twister (MT19937) pseudo-random number generator [16]. Table I summarizes the parameter settings used in the proposed DSHS.
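A sketch of the combined termination test ("OR"ed conditions) described above, with the delta and MAXIMP values from Table I as defaults; the function name is illustrative:

```python
def should_terminate(HM, fitness, imp_count, maximp=600_000, delta=1.0e-50):
    """Combined stopping test: MAXIMP reached OR best fitness below delta
    (treated as practically zero for minimization)."""
    best = min(fitness(h) for h in HM)
    return imp_count >= maximp or best < delta
```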

TABLE I. PARAMETER SETTINGS FOR DSHS

HMS   HMCR   PARmin   PARmax   ACth   MAXIMP
50    0.99   0.1      1.0      20%    6.0E+05

Considering a dimensionality of N = 30, three common benchmarking optimization functions are shown in Fig. 4, along with each function's range, optimal value and a 3D graph to give a visual idea of the function's surface nature. The first function, Sphere, is a unimodal function characterized by having a single global optimum. The other two functions are multimodal and are characterized by having many local optima in addition to a single global optimum.
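For reference, the sketch below gives the common textbook definitions of the three benchmark functions named in Table II; the exact variants used in the paper (e.g. the "Generalized Griewangk" form, ranges and constants) may differ slightly.

```python
import math

def sphere(x):
    """Unimodal; global optimum f(0, ..., 0) = 0."""
    return sum(v * v for v in x)

def griewank(x):
    """Multimodal with many local optima; global optimum f(0, ..., 0) = 0."""
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return 1.0 + s - p

def rosenbrock(x):
    """Global optimum f(1, ..., 1) = 0 inside a narrow parabolic valley."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))
```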

In each test the proposed DSHS algorithm was run 30 times, and bootstrap-t statistical analyses were carried out with a 95% confidence interval for all test functions to verify the method. Table II shows the results of the proposed method in comparison with the others. In the case of HSASP, the reported results are associated with different λ values, shown in square brackets in its respective column of the table. The best-attained value, with its associated iteration count, and the last iteration at which the proposed DSHS algorithm terminated are given in the last column. Zero values shown in italic type in the proposed method's column indicate values less than delta (1.0E-50); the others are actual zero values. The convergence graphs of the proposed method for each benchmarking function were obtained, and one sample, for the Generalized Griewangk function, is shown in Fig. 5 (the others are not included due to space restrictions). These graphs are drawn against the accepted improvisation count. Since graphs (a) and (b) contain condensed data due to the large number of iterations, a sixth-order polynomial trendline is also included in each to show the general behavior.

Figure 4. Continuous benchmarking functions

V. DISCUSSION

The unimodal Sphere function is known to contain some malicious cases causing poor or slow convergence [17]. For the other, multimodal functions, the number of local optima increases exponentially with the dimension of the problem; this appears to be the most difficult test for optimization algorithms [18]. The proposed method performed better than the other methods, as the obtained results are superior to the best achieved by any of them. In two out of the three benchmarking problems, the proposed method terminated much earlier than the final iteration count stipulated by MAXIMP, as given in Table II. This includes problems A and B, where the proposed method was able to terminate earlier based on the opted precision for the minimum fitness value (delta). The setting of PARmin > 0 can also be justified here, as the algorithm was able to escape an HM stagnation condition after a number of iterations, resulting in the improvisation of a new best solution in HM. In the last benchmarking problem, the Rosenbrock function, the algorithm terminated upon reaching MAXIMP. This problem is a multimodal multidimensional function with a huge number of local optima. The global optimum lies inside a long, narrow, parabolic-shaped flat valley. Finding the valley is trivial; however, convergence to the global optimum is difficult, and hence this problem has been frequently used to test the performance of optimization algorithms [17]. It could be argued that the threshold in (14) is yet another newly introduced HS parameter that must be set prior to starting the optimization process. This is probably true if a different set of optimization problems is considered for the proposed method. However, many of the self-adaptive methods introduced earlier in Section II have also introduced new parameters in order to achieve the sought improvements over the classical HS. An alternative dynamic technique could probably replace the one proposed for computing the constant C in (14), where such a technique would use some form of self-adaptiveness to set this threshold.

VI. CONCLUSIONS

This paper presented a dynamic self-adaptive harmony search algorithm for solving continuous optimization problems. The proposed DSHS algorithm utilizes two new criteria to drive the optimization process: the best-to-worst ratio (BtW) of the fitness function values in the current HM, used to dynamically adjust the PAR value, and a dynamic BW (DBW) based on the standard deviation of each dimension variable in the HM. The latter is carried out in proportion to the improvisation acceptance rate in order to adjust the bandwidth of each dimension variable independently. The proposed method does not require any manual setting of PAR or BW ranges, and standard values, similar to those used in recent methods, are used for the other control parameters. PAR is completely decoupled from the iteration count and changes in accordance with BtW. In this case the PAR change is not monotonic and not bounded by a maximum iteration count value. PAR is set to be inversely proportional to BtW and increases or decreases in response to the quality of the current solutions in HM, represented by the computed BtW ratio.

Figure 5. DSHS convergence graphs for Griewangk’s benchmarking function drawn against accepted improvisations.

The proposed DSHS method was verified and compared against the reported results of three other recent methods with self-adaptive features on three common continuous optimization problems. The proposed method was superior to the other methods. The average fitness value decreased almost consistently during the optimization process across all the problems. The proposed method uses a dynamic bandwidth formula that is affected by the current improvisation acceptance rate once it falls below a certain threshold value. Although a threshold value of 20% was found to be suitable for the continuous optimization problems considered in this work, future work should consider an alternative dynamic technique that would use a form of self-adaptiveness to set this threshold automatically. Future work should also consider more continuous optimization problems with higher dimensionality, and comparisons against other recent HS-based algorithms should be carried out.

ACKNOWLEDGMENT

The first author would like to express his sincere gratitude for the valuable assistance provided by the following authors of the HS-based methods considered in this work: Dr. Mohammed G. Omran of Gulf University for Science and Technology (Kuwait), Dr. Chia-Ming Wang of National Yunlin University of Science and Technology (Taiwan) and Dr. Chukiat Worasucheep of King Mongkut's University of Technology Thonburi (Thailand). This research is supported by Universiti Sains Malaysia and has been funded by the Research University Cluster (RUC) grant titled "Reconstruction of the Neural Microcircuitry of Reward-Controlled Learning in the Rat Hippocampus" (1001/PSKBP/8630022).

REFERENCES

[1] W. S. Jang, H. I. Kang, and B. H. Lee, "Hybrid Simplex-Harmony search method for optimization problems," presented at the IEEE Congress on Evolutionary Computation (CEC 2008), Trondheim, Norway, 2008.
[2] M. Jaberipour and E. Khorram, "Two improved harmony search algorithms for solving engineering optimization problems," Communications in Nonlinear Science and Numerical Simulation, vol. 15, pp. 3316-3331, 2010.
[3] C.-M. Wang and Y.-F. Huang, "Self-adaptive harmony search algorithm for optimization," Expert Systems with Applications, vol. 37, pp. 2826-2837, 2010.
[4] A. Kattan and R. Abdullah, "Training of Feed-Forward Neural Networks for Pattern-Classification Applications Using Music Inspired Algorithm," International Journal of Computer Science and Information Security, vol. 9, pp. 44-57, 2011.
[5] Z. W. Geem, J. H. Kim, and G. V. Loganathan, "A New Heuristic Optimization Algorithm: Harmony Search," Simulation, vol. 72, pp. 60-68, 2001.
[6] K. S. Lee and Z. W. Geem, "A New Meta-heuristic Algorithm for Continuous Engineering Optimization: Harmony Search Theory and Practice," Computer Methods in Applied Mechanics and Engineering, vol. 194, pp. 3902-3933, 2005.
[7] J.-H. Lee and Y.-S. Yoon, "Modified Harmony Search Algorithm and Neural Networks for Concrete Mix Proportion Design," Journal of Computing in Civil Engineering, vol. 23, pp. 57-61, 2009.
[8] Z. W. Geem, Music-Inspired Harmony Search Algorithm: Theory and Applications, vol. 191. Springer, 2009.
[9] Q.-K. Pan, P. N. Suganthan, M. F. Tasgetiren, and J. J. Liang, "A self-adaptive global best harmony search algorithm for continuous optimization problems," Applied Mathematics and Computation, vol. 216, pp. 830-848, 2010.
[10] C. Worasucheep, "A Harmony Search with Adaptive Pitch Adjustment for Continuous Optimization," International Journal of Hybrid Information Technology, vol. 4, pp. 13-24, 2011.
[11] Z. W. Geem, C.-L. Tseng, and Y. Park, "Harmony Search for Generalized Orienteering Problem: Best Touring in China," in Advances in Natural Computation, vol. 3612/2005. Springer Berlin / Heidelberg, 2005, pp. 741-750.
[12] Z. W. Geem and K.-B. Sim, "Parameter-setting-free harmony search algorithm," Applied Mathematics and Computation, vol. 217, pp. 3881-3889, 2010.
[13] M. Mahdavi, M. Fesanghary, and E. Damangir, "An Improved Harmony Search Algorithm for Solving Optimization Problems," Applied Mathematics and Computation, vol. 188, pp. 1567-1579, 2007.
[14] M. G. H. Omran and M. Mahdavi, "Global-Best Harmony Search," Applied Mathematics and Computation, vol. 198, pp. 643-656, 2008.
[15] A. Kattan, R. Abdullah, and R. A. Salam, "Harmony Search Based Supervised Training of Artificial Neural Networks," in International Conference on Intelligent Systems, Modeling and Simulation (ISMS 2010), Liverpool, England, 2010, pp. 105-110.
[16] M. Matsumoto and T. Nishimura, "Mersenne twister: A 623-dimensionally equidistributed uniform pseudo-random number generator," ACM Transactions on Modeling and Computer Simulation, vol. 8, pp. 3-30, 1998.
[17] J. G. Digalakis and K. G. Margaritis, "On Benchmarking Functions for Genetic Algorithms," International Journal of Computer Mathematics, vol. 77, pp. 481-506, 2001.
[18] M. Ji and J. Klinowski, "Taboo evolutionary programming: a new method of global optimization," Proceedings of the Royal Society A, vol. 462, pp. 3613-3627, 2006.

TABLE II. MEAN AND STANDARD DEVIATION (SD) OPTIMIZATION RESULTS FOR N=30

Function      | SHS [3]                 | SGHS [9]            | HSASP [10]                      | Proposed DSHS, Mean (SD) | Best value @ iteration; Termination
A. Sphere     | 6.9160E-07 (1.1035E-06) | 0 (0)               | 1.384E-41 (5.243E-41) [λ=0.4]   | 0 (0)                    | 0 @ 205,779; 205,779
B. Griewangk  | 8.4515E-05 (2.3783E-04) | 3.54E-02 (4.93E-03) | 0 (0) [λ=0.4]                   | 0 (0)                    | 0 @ 69,926; 69,926
C. Rosenbrock | 2.6468E+01 (5.6833E-01) | 1.51E+02 (1.31E+02) | 2.688E+01 (2.917E-01) [λ=0.7]   | 1.42E+01 (1.07E-01)      | 1.39E+01 @ 599,949; MAXIMP
