
An evolutionary algorithm for the biobjective QAP

Istvan Borgulya
University of Pecs, Faculty of Business and Economics
H-7621 Pecs, Rakoczi ut 80.
[email protected]

Abstract. In this paper we present a new method for the biobjective quadratic assignment problem. The method is a modified version of an earlier multi-objective evolutionary algorithm. It uses a special truncation selection, and the descendants are derived from the parents by mutation based on an EC-memory method. This EC-memory method is an extended version of an earlier method and can be used on discrete spaces whose variables take more than two values. The quality of the results of our algorithm is better than that of some stochastic local search and ACO algorithms.

1 Introduction

Knowles and Corne [7] presented a QAP variation considering several flows and distances. This multi-objective QAP problem has a number of potential applications. For example, in a hospital layout problem we may be concerned with simultaneously minimizing the flows of doctors on their rounds, of patients, of hospital visitors, and of pharmaceuticals and other equipment [7]. The mathematical formulation is

  min_{π∈S_n} F(π) = {f_1(π), f_2(π), ..., f_m(π)}

  where f_k(π) = Σ_{i,j=1}^{n} fw_{ij}^k d_{π(i)π(j)},  1 ≤ k ≤ m.

Here n is the number of facilities and locations, fw_{ij}^k denotes the k-th flow between facilities i and j, S_n is the set of all permutations of n elements and π ∈ S_n, d_{ij} is the distance between location i and location j, and π(i) gives the location of facility i in permutation π. In recent years Knowles and Corne [8] presented instance generators for the biobjective QAP (bQAP), and several methods have been developed for the bQAP: e.g. an ACO algorithm was designed [9], stochastic local search algorithms were developed [11], and a parallel evolutionary technique for the bQAP was presented [4].
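To fix the notation, the following small Python sketch evaluates the m objectives of the bQAP for a given permutation. It is illustrative only and not part of the original paper; the flow-tensor layout and the function name are our assumptions.

  import numpy as np

  def bqap_objectives(perm, flows, dist):
      # perm[i] = location of facility i (a permutation of 0..n-1)
      # flows   = (m, n, n) array, flows[k, i, j] = k-th flow between facilities i and j
      # dist    = (n, n) distance matrix between locations
      d_perm = dist[np.ix_(perm, perm)]                    # d_{pi(i), pi(j)}
      return [float((flows[k] * d_perm).sum()) for k in range(flows.shape[0])]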

In this paper we present a new evolutionary algorithm (EA) for the bQAP. The algorithm is based on an earlier multi-objective EA named MOSCA2 [2]. For the bQAP we modified MOSCA2: we use a different selection, a different mutation operator and a local search procedure. To improve the quality of the results we use a modified version of an EC-memory method for mutation instead of a recombination operator. We compared our algorithm with other algorithms (e.g. robust tabu search (RoTS), ACO and stochastic local search algorithms). The quality of our results is better, and we obtained these results after similar or longer running times. In addition to this introduction, the paper is organised into the following sections. The new, extended version of the EC-memory method is described in Section 2. Section 3 presents the new version of MOSCA2. In Section 4 we report our computational experience with the new version and compare our results with the results of other heuristics. Section 5 contains concluding remarks.

2 The EC-memory method

There are many variants of explicit collective memory (EC-memory) methods, which memorise past events and/or past successes of the evolution process in an EA. With the help of such a method we can, for example, choose appropriate evolutionary operators during the evolutionary process, guide the offspring generation process, or select individuals (e.g. the PBIL of Baluja et al. [1], the "virtual loser" of Sebag et al. [14], ant colony algorithms [3], cultural algorithms [13]). We chose to adopt the method of Sebag et al. [14], which memorises the past failures of evolution through a virtual individual, the virtual loser (VL). The VL is used in the binary space, and its memory is a numeric vector that gives the average values of the worst individuals at every bit position (variable). With the help of the VL we can give the probability of mutating a bit in an individual: the probability of mutating bit i in individual X should reflect how much it discriminates X_i from the VL, that is, it should increase with p_i = 1 − |VL_i − X_i|. This technique can be used e.g. for continuous function optimization, or for combinatorial problems discretized through a binary or a Gray coding. In this paper we modify and extend the VL so that it can handle variables with more than two discrete values. The discrete values can be integers or real numbers, but also other objects, e.g. the values of permutations. In general we cannot compute the average of the discrete values (e.g. for permutations) as in the VL, but we can compute the relative frequency of every discrete value for each variable separately. The principle of the new VL version, named EVL (Extended Virtual Loser), is the following. Let there be m different values of the variables. (We consider only this simple version; if the number of discrete values or the values themselves differ from variable to variable, the following formulas can easily be modified.)

Let ECM denote an n×m matrix that stores the relative frequencies of the different values of the variables. This matrix is updated during the search procedure using a few of the worst performing individuals. Let ECM_{ij}^{gen} be the relative frequency of the i-th value at the j-th position (variable) in the gen-th generation. We can update the elements of the ECM matrix in a similar way as for the VL:

  ECM_{ij}^{gen+1} = (1 − α) ECM_{ij}^{gen} + α · dECM_{ij}        (e.g. α = 0.2)

where dECM_{ij} is the relative frequency of the i-th value at position j among the worst individuals of the gen-th generation, and α denotes a relaxation factor. For the probability of mutating the j-th variable of individual X we can use the formula

  q_j = ECM_{X_j, j} / Σ_{k=1}^{n} ECM_{kj}.

We get the highest q_j values for the worst values of the variable X_j, based on the worst individuals. Consequently we can choose a better value for X_j with a probability higher than 1 − q_j. To improve this probability we can use the best individual too. Let B be one of the best individuals. If X_j = B_j, we obviously change the value of X_j only with low probability. With the help of B_j, the probability of mutating the j-th variable of individual X is

  p_j = 1 − |q_j − a_j|,    where a_j = 1 if X_j = B_j and a_j = 0 otherwise.

Mutation based on the EVL: Let X be a descendant. We rank the variables X_i in decreasing order of q_i and select the first (e.g. at most n/2) elements of the queue. Let U denote the set of the selected variables. For every variable X_j ∈ U we search for another X_z ∈ U such that the probability

  p_j = 1 − | ECM_{X_z, j} / Σ_{k=1}^{n} ECM_{kj} − a_i |

is maximal (where a_i = 1 if X_z = B_j and a_i = 0 otherwise). After that we write the value of X_z into position j and delete the variable X_z from U.
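To make these formulas concrete, the following minimal Python sketch (illustrative only, not the author's implementation; the array layout and function names are our assumptions) updates the ECM matrix from a set of worst individuals and computes the probabilities q_j and p_j = 1 − |q_j − a_j|.

  import numpy as np

  def update_ecm(ecm, worst_individuals, alpha=0.2):
      # ecm[v, j] = relative frequency of value v at position j (an m x n array).
      d_ecm = np.zeros_like(ecm)
      for x in worst_individuals:                     # worst individuals of this generation
          for j, v in enumerate(x):
              d_ecm[v, j] += 1.0
      d_ecm /= max(len(worst_individuals), 1)         # dECM: relative frequencies
      return (1.0 - alpha) * ecm + alpha * d_ecm      # ECM^{gen+1}

  def mutation_probabilities(ecm, x, best):
      # q_j = ECM[x_j, j] / sum_k ECM[k, j];  p_j = 1 - |q_j - a_j|, a_j = 1 if x_j = best_j.
      x, best = np.asarray(x), np.asarray(best)
      q = ecm[x, np.arange(len(x))] / np.maximum(ecm.sum(axis=0), 1e-12)
      a = (x == best).astype(float)
      return q, 1.0 - np.abs(q - a)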

3 The new algorithm

3.1 The MOSCA2

The fundamental principles of MOSCA2 [2] are the following: the members of the population are segregated into t subpopulations, and each subpopulation
will approximate a different part of the Pareto front sought. Each subpopulation stores only non-dominated individuals among its possible members (up to a limited number). The dominance of a new descendant entering the subpopulation is determined by comparing it to a designated non-dominated individual, the prototype. If a non-dominated descendant superior to the previous prototype is found, the algorithm deletes the former members of the subpopulation and replaces the prototype by the new descendant. During the evolution process the new potential Pareto optimal solutions are periodically stored in a separate archive, and the result is obtained from this archive. If this separate archive (SARC) is full, the algorithm deletes a given percentage (10%) of its elements: first the most dominated individuals are selected for deletion, and after that, one by one, individuals that are close to each other. MOSCA2 uses a 2-stage algorithm structure where every stage is a steady-state EA. The first stage is a quick "preparatory" stage designed to improve the quality of the initial population. The second stage is an evolutionary strategy with some special operators (more details in [2]).

3.2 MOSCA2b for bQAP

The new version of MOSCA2, named MOSCA2b, uses some special operators: the selection is a special version of the truncation selection, the descendants are derived from the parents by mutation based on the EVL, and the algorithm uses a 2-opt local search procedure with a weighted objective. When solving a multi-objective problem there are generally two important tasks: to reach good convergence to the Pareto optimal front and to cover all points of this front with different solutions. MOSCA2b solves these tasks with the help of the truncation selection and the EVL method alone. The main steps of MOSCA2b:

Procedure MOSCA2b (t, subt, arcn, itend, ddp)
  /* Initial values */
  it = 0, SUBP_i = ∅ (i = 1, 2, ..., t). Let p_i ∈ SUBP_i (i = 1, 2, ..., t). SARC = ∅, itt = 400, kn = 100
  /* First stage */
  Fitness evaluation: ranking of P
  Do itt times
    it = it + 1
    A descendant is generated randomly. Reinsertion.
  od
  /* Second stage */
  Ranking of P. Initial values of ECM
  Repeat
    Do kn times
      it = it + 1
      Truncation selection, mutation based on EVL, local search. Reinsertion.
    od
    Ranking of P, Update of SARC, Deleting. Update ECM.
  until it > itend
end

The parameters of the algorithm: t – the number of subpopulations; subt – the maximum size of each subpopulation; arcn – the maximum size of SARC; itend – the maximal number of generations; ddp – parameter of the Deleting procedure.

The main functions and characteristics of the two EAs are as follows (a short Python sketch of the dominance-based reinsertion is given after this list):

• The first individuals are randomly generated.
• The population P is built from subpopulations SUB_1, SUB_2, ..., SUB_t (P = ∪ SUB_i). There is a designated non-dominated individual in every subpopulation, the prototype.
• The value of the fitness function is a rank number determined according to the Pareto ranking method of Goldberg [6].
• In the first stage the descendants are randomly selected from S. In the second stage the algorithm randomly selects a parent with rank number 1 from P (this is a special truncation selection).
• Mutation based on the EVL: Let X be a descendant. We rank the variables X_i in decreasing order of q_i, select the first at most n/2 elements of the queue for mutation and change the values of the variables as described in Section 2 (for the bQAP, B is the prototype of the subpopulation of X).
• ECM: the matrix of the EVL method. ECM is defined after the termination of the first stage and is periodically updated using the weakest individuals; the updating procedure uses 20% of the population.
• As local search the algorithm uses the 2-opt local search with a weighted objective [5].
• On reinsertion into a subpopulation the algorithm does not use the rank number; it is enough to examine the Pareto dominance between a prototype and the descendant. In the first stage the descendant is compared with the most similar prototype; in the second stage it is compared with the prototype of the subpopulation of the parent.
• Deleting: a given percentage of the most dominated individuals in P is deleted based on the rank number. This percentage decreases as the number of iterations increases (the deleted subpopulations are replaced with new ones).

• Function computation: based on the observation in [12] on transforming an asymmetric flow or distance matrix into a symmetric one, we compute the change of the function values in the local search procedure in a simple form, in a similar way as in [11].
• Stopping criterion: the algorithm is terminated when a pre-determined number of iterations has been performed.
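As an illustration of the dominance-based reinsertion described above, here is a minimal Python sketch. It is our own simplification, assuming minimization of both objectives; the class and method names are hypothetical and not taken from the paper.

  def dominates(a, b):
      # True if objective vector a Pareto-dominates b (both objectives minimized).
      return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

  class Subpopulation:
      def __init__(self, prototype, prototype_obj, limit):
          self.prototype, self.prototype_obj = prototype, prototype_obj
          self.members, self.limit = [], limit             # non-dominated members only

      def reinsert(self, descendant, obj):
          # The descendant is compared only with the prototype, as in MOSCA2.
          if dominates(obj, self.prototype_obj):
              self.members = []                            # drop the former members
              self.prototype, self.prototype_obj = descendant, obj
          elif not dominates(self.prototype_obj, obj) and len(self.members) < self.limit:
              self.members.append((descendant, obj))       # mutually non-dominated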

4 Experimental results

Test problems. We used the test problems of [11], which were generated with the instance generator of [8]. The instances and the reference solutions of RoTS are available at http://www.intellektik.informatik.tu-darmstadt.de/~lpaquete/QAP. The instances were generated with n ∈ {25, 50, 75} locations and with correlations between the flow matrices of ρ ∈ {−0.75, −0.50, −0.25, 0.0, 0.25, 0.50, 0.75}.

Parameter selection. Our experience with the earlier version, MOSCA2, made it easier to choose the parameter values; with only small differences we could use the same values. The parameters used were the following: t = 100, subt = 10, arcn = 1000 and ddp = 30%. The maximal number of generations (or fitness evaluations) was 1500 or 2300, depending on the problem. (MOSCA2b was implemented in Visual Basic and ran on a Pentium 4, 1.8 GHz, with 256 MB RAM.)

Comparative results. As performance measure we used the binary ε-indicator from [11]. The binary ε-indicator gives the factor by which an approximation set is worse than another with respect to all objectives [15]. In practice, the binary ε-indicator is calculated as

  I_ε(A, B) = max_{b∈B} min_{a∈A} max(a_1/b_1, a_2/b_2)

where A and B are sets of non-dominated objective value vectors of a problem with two objectives. With this measure we can compare two solutions: if I_ε(A, B) > 1 and I_ε(B, A) ≤ 1, then the set B completely dominates the set A.
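A direct Python transcription of this indicator (illustrative only; it assumes strictly positive objective values, which holds for the bQAP instances) is:

  def binary_epsilon(A, B):
      # I_eps(A, B) = max_{b in B} min_{a in A} max(a1/b1, a2/b2)
      return max(min(max(a[0] / b[0], a[1] / b[1]) for a in A) for b in B)

  # Usage: if binary_epsilon(A, B) > 1 and binary_epsilon(B, A) <= 1,
  # then the approximation set B is better than A under this indicator.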

To compare the results of MOSCA2b we chose the RoTS algorithm. Using the reference solutions of RoTS we compared the performance of MOSCA2b and RoTS, with running times shorter than the RoTS running time. Every test problem was run 10 times, and Table 1 shows the average comparative results of MOSCA2b for the different values of the correlation (ρ) and size (n). In the table the columns I_ε(B, A) and I_ε(A, B) give the indicator values, where A is the set of outcomes of MOSCA2b and B is the set of outcomes of RoTS; avgt is the average computation time in seconds and avgn is the average number of solutions.

Table 1. The average comparative results of MOSCA2b

   ρ      n    I_ε(B,A)   I_ε(A,B)    avgt     avgn
   0.75   25   1.096      0.923         18.8     2
          50   1.041      0.967         86.0     3
          75   1.032      0.978        249.9     2
   0.50   25   1.094      0.908         18.1     5
          50   1.021      0.989         86.5     7
          75   1.014      0.985        289.4     5
   0.25   25   1.099      0.933         19.4     8
          50   1.045      0.971         87.6     7
          75   1.009      0.985       1464.2    10
   0.00   25   1.051      0.947         18.7    12
          50   1.009      0.975         88.1    11
          75   1.012      0.993       1465.5    12
  -0.25   25   1.073      0.928         18.8    19
          50   1.020      0.968        144.5    16
          75   1.010      0.986       1454.1    22
  -0.50   25   1.084      0.952         19.3    28
          50   1.022      0.962        157.4    20
          75   1.031      0.978       1231.2    36
  -0.75   25   1.086      0.990         19.5    64
          50   1.007      0.986        153.9    71
          75   1.001      0.993       1036.4    82

Analyzing the results, we can conclude that, based on the performance measure, the outcomes of MOSCA2b are better than the outcomes of RoTS on all test problems. Figure 1 also shows this: the plot gives the non-dominated results of the methods, and every result of MOSCA2b dominates the results of RoTS. Only the number of non-dominated solutions of RoTS is larger than that of MOSCA2b, but the quality of the MOSCA2b solutions is better. We can compare MOSCA2b with other methods too. In [9] an ACO algorithm and in [11] two stochastic local search methods (Pareto Local Search and Two-Phase Local Search) were developed for the bQAP. In both papers the methods were compared with RoTS based on the binary ε-indicator, so we can compare MOSCA2b with these methods as well. Because RoTS has better performance than the ACO and the two stochastic local search algorithms (see [9], [11]), MOSCA2b also has better performance than these methods on the given problems. Only the running time differs: MOSCA2b has similar or
shorter running times than the ACO algorithm, and longer running times than the two local search algorithms. (In this comparison we took into account that the methods ran on different computers.)

Fig. 1. The plot gives the non-dominated results of MOSCA2b (thicker points) and the results of RoTS (thinner points). The results correspond to three instances each of size 50 (left column) and size 75 (right column), with correlations ρ = −0.75 (top row), ρ = 0 (center row) and ρ = 0.75 (bottom row).

We can observe that the solution quality of MOSCA2b (and of the other methods too) depends on the correlation (ρ) between the flow matrices. Depending on this correlation there are significant differences in the results. For high positive correlations the search is very hard, and for every method the average number of solutions is low. With decreasing correlation the methods find solutions more easily, and the average number of solutions increases continuously.

5 Summary

By modifying an earlier multi-objective EA we have developed a method for the bQAP. The new version, named MOSCA2b, uses some special operators: the selection is a special version of the truncation selection, and the descendants are derived from the parents by mutation based on an EC-memory method, named EVL. The EVL is an extended version of the "virtual loser" method of Sebag et al. [14]. With the help of the truncation selection and the EVL method our algorithm can reach good convergence to the Pareto optimal front and cover all points of this front with different solutions; other EA methods can solve these important tasks only with additional special techniques and operators. The quality of the results of our algorithm is better than the results of some stochastic local search and ACO algorithms, but the running times are generally similar or longer. As future work we intend to improve the speed of our algorithm and to apply it to other multi-objective optimization problems.

6 Acknowledgements

This study was supported by the Hungarian Research Foundation, OTKA T 042448.

References

1. Baluja S, Caruana R (1995) Removing the genetics from the standard genetic algorithm. In: Prieditis A, Russell S (Eds) Proc. of ICML'95, Morgan Kaufmann, 38-46.
2. Borgulya I (2005) A Multi-objective Evolutionary Algorithm with a Separate Archive. Central European Journal of Operations Research 13 (3): 233-254.
3. Carbonaro A, Maniezzo V (2003) The Ant Colony Optimization paradigm for Combinatorial Optimization. In: Ghosh A, Tsutsui S (Eds.) Advances in Evolutionary Computing: Theory and Applications. Springer, Berlin Heidelberg, 539-557.

4. Day RO, Lamont GB (2005) Multiobjective quadratic assignment problem solved by an explicit building block search algorithm – MOMGA-IIa. Lecture Notes in Computer Science 3448: 91-100.
5. Deb K (2001) Multi-Objective Optimization using Evolutionary Algorithms. Wiley, New York.
6. Goldberg DE (1989) Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, Massachusetts.
7. Knowles JD, Corne DW (2002) Towards landscape analyses to inform the design of a hybrid local search for the multiobjective quadratic assignment problem. In: Abraham A, Ruiz-del-Solar J, Koppen M (Eds.) Soft Computing Systems: Design, Management and Applications. IOS Press, Amsterdam, 271-279.
8. Knowles JD, Corne D (2003) Instance generators and test suites for the multiobjective quadratic assignment problem. Lecture Notes in Computer Science 2632: 295-310.
9. Lopez-Ibanez M, Paquete L, Stützle T (2004) On the design of ACO for the biobjective quadratic assignment problem. Lecture Notes in Computer Science 3172: 214-225.
10. Merz P, Freisleben B (2000) Fitness landscape analysis and memetic algorithms for the quadratic assignment problem. IEEE Transactions on Evolutionary Computation 4 (4): 337-352.
11. Paquete L, Stützle T (2006) A study of stochastic local search algorithms for the biobjective QAP with correlated flow matrices. European Journal of Operational Research 169: 943-959.
12. Pardalos P, Rendl F, Wolkowicz H (1994) The quadratic assignment problem: a survey and recent developments. In: Pardalos P, Wolkowicz H (Eds.) Quadratic Assignment and Related Problems. American Mathematical Society, Providence, RI, 1-42.
13. Reynolds RG, Chung CJ (1997) Regulating the Amount of Information Used for Self-Adaptation in Cultural Algorithms. In: Bäck T (Ed) Proc. of the 7th International Conference on Genetic Algorithms. Morgan Kaufmann, San Francisco, 401-308.
14. Sebag M, Schoenauer M, Ravisé C (1997) Toward Civilized Evolution: Developing Inhibitions. In: Bäck T (Ed) Proc. of the 7th International Conference on Genetic Algorithms. Morgan Kaufmann, San Francisco, 291-298.
15. Zitzler E, Thiele L, Laumanns M, Fonseca CM, da Fonseca G (2002) Performance Assessment of Multiobjective Optimizers: An Analysis and Review. IEEE Transactions on Evolutionary Computation 7 (2): 117-132.
