Adaptive penalties for evolutionary graph coloring

A.E. Eiben, J.K. van der Hauw
Leiden University, The Netherlands
{gusz, [email protected]}

Abstract. In this paper we consider a problem-independent constraint handling mechanism, Stepwise Adaptation of Weights (SAW), and show how it works on graph coloring problems. SAW-ing technically belongs to the penalty function based approaches and amounts to modifying the penalty function during the search. We show that it has a twofold benefit. First, it proves to be rather insensitive to its technical parameters, thereby providing a general, problem-independent way to handle constrained problems. Second, it leads to superior EA performance. In an extensive series of comparative experiments we show that the SAW-ing EA outperforms a powerful graph coloring heuristic, DSatur, on the hardest graph instances and has a linear scale-up behaviour.

1 Introduction

In this paper we consider an adaptive mechanism for constraint handling (called SAW-ing) on graph 3-coloring problems. In [13] SAW-ing was applied to 3SAT problems and the resulting EA turned out to be superior to WGSAT, the best heuristic for 3SAT problems known at the moment. It is interesting to note that optimizing the population size and the operators in the SAW-ing EA for 3SAT resulted in an algorithm that was very similar to WGSAT itself. This EA was, however, obtained independently from WGSAT, starting with a full-blown EA with a large population and using crossover. It was an extensive test series that showed that a (1, λ) selection scheme using mutation only is superior. Despite the different origins of the two compared methods, their similarity in the technical sense might suggest that the superior performance of the SAW-ing EA is just a coincidence: it holds for 3SAT, but not for other constraint satisfaction problems. In this paper we show that this is not the case.

Graph coloring falls into the category of grouping problems. Several authors [14, 15, 24] have considered grouping problems, argued that they cannot be successfully solved by usual genetic algorithms, i.e. those using traditional representations and the corresponding standard operators, and proposed special representations and crossovers for such problems. In this paper we show the viability of another approach to solving a grouping problem, based on an adaptively changing fitness function in an EA using a common representation and standard operators. We restrict our investigation to graph 3-coloring problems that are pure constraint satisfaction problems, unlike the constrained optimization version as

studied, for instance, by Davis [7]. To evaluate the performance of our EA we also run a powerful traditional graph coloring algorithm on the same problems. The final comparison shows that the SAW-ing EA is superior to the heuristic method on the hardest problem instances.

The rest of the paper is organized as follows. In Section 2 we specify the problems we study and give a brief overview of traditional graph coloring algorithms. We select one of them, DSatur, as the competitor with which we compare the performance of EAs. Section 3 summarizes our results on graph coloring obtained with an EA, and compares this EA with DSatur, as well as with a hybridized EA+DSatur system. Thereafter, in Section 4 we present an adaptive mechanism that changes the penalty function, and thus the fitness landscape, during an EA run. We show that the adaptive EA clearly outperforms the other tested EA variants, including the hybrid system. Finally, in Section 5 we compare the adaptive EA with DSatur and conclude that the EA is superior with respect to performance on hard problem instances as well as scale-up properties.

2 Graph Coloring

In a graph 3-coloring problem the task is to color each vertex v ∈ V of a given undirected graph G = (V, E) with one of three colors from {1, 2, 3} so that no two vertices connected by an edge e ∈ E receive the same color. This problem is NP-complete in general [16], which makes it theoretically interesting; at the same time there are many specific applications, such as register allocation [3], timetabling [25], scheduling, and printed circuit testing [17]. In the literature there are not many benchmark 3-colorable graphs, and therefore we create the graphs to be tested with the graph generator written by Culberson.¹ A 3-colorable test graph is created by first pre-partitioning the vertices into three sets (the three colors) and then drawing edges randomly with a certain probability p, the edge density. We generated equi-partite 3-colorable graphs, where the three color sets are as nearly equal in size as possible, as well as flat 3-colorable graphs, where the variation in vertex degree is also kept to a minimum. Determining the chromatic number of these two types of graphs is very difficult, because there is no information a (heuristic) coloring algorithm could exploit [6]. Our tests showed that they are also tough for 3-coloring. Throughout this paper we denote graph instances by, for example, G_eq,n=500,p=0.10,s=1, standing for an equi-partite 3-colorable graph with 500 vertices, edge probability 10% and seed 1 for the random generator.

¹ Source code in C is available via ftp://ftp.cs.ualberta.ca/pub/joe/GraphGenerator/generate.tar.gz

Cheeseman et al. [4] found that NP-complete problems have an 'order parameter' and that the hard problems occur at a critical value, or phase transition, of this parameter. For graph coloring, this order parameter is the edge probability or edge connectivity p. Theoretical estimates by Clearwater and Hogg [5] on the location of the phase transition, supported by empirical validation, improved the estimates in [4] and indicate that the hardest graphs are those with an edge connectivity around 7/n to 8/n.
Our experiments confirmed these values. We will use these values in the present investigation and study large graphs with up to 1500 vertices.
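To make the instance-generation scheme concrete, here is a minimal Python sketch of an equi-partite 3-colorable generator in the spirit of the description above. It is not Culberson's generator (that is the C code referenced in the footnote); the function name, the class-assignment rule and the use of Python's random module are our own assumptions.

```python
import random

def equipartite_3colorable(n, p, seed=1):
    """Random equi-partite 3-colorable graph (sketch, not Culberson's generator).

    Vertices are pre-partitioned into three nearly equal (hidden) color
    classes; edges are drawn with probability p only between vertices of
    different classes, so a 3-coloring exists by construction.
    """
    rng = random.Random(seed)
    hidden_color = [i % 3 for i in range(n)]      # class sizes differ by at most one
    edges = [(u, v)
             for u in range(n) for v in range(u + 1, n)
             if hidden_color[u] != hidden_color[v] and rng.random() < p]
    return edges

# An instance in the spirit of G_eq,n=500,p=0.10,s=1
edges = equipartite_3colorable(500, 0.10, seed=1)
print(len(edges), "edges")
```

By construction the hidden partition is a valid 3-coloring, so the generated instance is 3-colorable regardless of p.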

To compare the performance of our EAs with traditional graph coloring algorithms we have looked for a strong competitor. There are many (heuristic) graph coloring algorithms in the literature, for instance an O(n^0.4)-approximation algorithm by Blum [1], the simple Greedy algorithm [20], DSatur by Brelaz [2], Iterated Greedy (IG) by Culberson and Luo [6], and XRLF by Johnson et al. [19]. We have chosen DSatur as the competitor because of its high performance. DSatur uses a heuristic to dynamically change the ordering of the nodes and then applies the greedy method to color the nodes:

- A node with the highest saturation degree (= the number of differently colored neighbors) is chosen and given the smallest color that is still possible.
- In case of a tie, the node with the highest degree (= the number of neighbors that are still in the uncolored subgraph) is chosen.
- If the tie persists, a random node is chosen.

Because of the random tie breaking, DSatur is a stochastic algorithm and, just as for the EA, results of several runs need to be averaged to obtain useful comparisons. For the present investigation we implemented the backtracking version of Turner [23], which backtracks to the most recently evaluated node that still has available colors to try.
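To make the selection rules above concrete, the following sketch implements the greedy core of DSatur restricted to three colors. It is only an illustration: it omits the backtracking of Turner's version that we actually use, and the function name and adjacency representation are our own.

```python
import random
from collections import defaultdict

def dsatur_3coloring(n, edges, seed=0):
    """Greedy core of DSatur restricted to 3 colors (no backtracking).

    Returns {vertex: color} for the vertices it manages to color; vertices
    for which all three colors are blocked stay uncolored. A sketch of the
    selection rules listed above, not Turner's backtracking version.
    """
    rng = random.Random(seed)
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    coloring = {}
    uncolored = set(range(n))

    def saturation(v):        # number of distinct colors among colored neighbors
        return len({coloring[w] for w in adj[v] if w in coloring})

    def uncolored_degree(v):  # degree within the still-uncolored subgraph
        return sum(1 for w in adj[v] if w in uncolored)

    while uncolored:
        best_sat = max(saturation(v) for v in uncolored)
        candidates = [v for v in uncolored if saturation(v) == best_sat]
        best_deg = max(uncolored_degree(v) for v in candidates)
        candidates = [v for v in candidates if uncolored_degree(v) == best_deg]
        v = rng.choice(candidates)                 # random tie-break
        uncolored.remove(v)

        blocked = {coloring[w] for w in adj[v] if w in coloring}
        for color in (1, 2, 3):                    # smallest still-possible color
            if color not in blocked:
                coloring[v] = color
                break
    return coloring
```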

3 The Evolutionary Algorithm

We implemented different steady-state algorithms using worst-fitness deletion, with two different representations, and tested different operators and population sizes for their performance. These tests were intended to check the hypotheses in [14, 15, 24] on the disadvantageous effects of crossover in standard representations, as well as to find a good setup for our algorithm. For a full overview of the test results see [12]; here we only present the most interesting findings.

It turned out that mixing information of different individuals by crossover is not as bad as is generally assumed. Using integer representation, each gene in the chromosome belongs to one node and can take three different values as alleles (with the obvious semantics). Applying heavy mixing by multi-point crossovers and multi-parent crossovers [8] improves the performance; a sketch of the multi-parent diagonal crossover is given below. In Figure 1 we give an illustration for the graph G_eq,n=200,p=0.08,s=5, depicting the Average Number of Evaluations to a Solution (AES) as a function of the number of parents in diagonal crossover [8], respectively as a function of the number of crossover points in m-point crossover. The results are obtained by averaging the outcomes of 100 independent runs. Integer representation, however, turned out to be inferior to order-based representation. Using order-based representation, each chromosome is a permutation of the nodes, and a decoder is needed to create a coloring from a permutation.
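The diagonal multi-parent crossover referred to above can be sketched as follows. This is an illustration in the spirit of [8]; details such as how the crossover points are drawn and how many offspring are produced are assumptions that may differ from the implementation used in our experiments.

```python
import random

def diagonal_crossover(parents, rng=random):
    """Multi-parent diagonal crossover on integer chromosomes (sketch).

    With k parents, k-1 crossover points are chosen and offspring i takes
    segment j from parent (i + j) mod k. Only the first offspring is
    returned here.
    """
    k = len(parents)
    length = len(parents[0])
    points = sorted(rng.sample(range(1, length), k - 1))
    bounds = [0] + points + [length]
    child = []
    for j in range(k):   # segment j comes from parent j (the diagonal of offspring 0)
        lo, hi = bounds[j], bounds[j + 1]
        child.extend(parents[j][lo:hi])
    return child

# Example: 4-parent crossover of integer chromosomes over the alleles {1, 2, 3}
rng = random.Random(5)
parents = [[rng.choice((1, 2, 3)) for _ in range(20)] for _ in range(4)]
print(diagonal_crossover(parents, rng))
```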

Fig. 1. Effect of more parents and more crossover points on the Average Number of Evaluations to a Solution (AES) for G_eq,n=200,p=0.08,s=5.

We used a simple coloring decoder, visiting the nodes in the order they occur in a given chromosome and giving each node the smallest² possible color. If each of the three colors leads to a constraint violation, the node is left uncolored, and the fitness of a chromosome (to be minimized) is the number of uncolored nodes. After performing numerous tests the best option turned out to be an order-based GA without crossover, using mutation only with population size 1 in a (1+1) preservative selection scheme. Because of the lack of crossover we call this algorithm an EA (evolutionary algorithm) rather than a GA (genetic algorithm). This EA forms the basis of our further investigation: we will try to improve it by hybridization and by adding the SAW-ing mechanism.

Let us note that DSatur also uses an ordering of nodes as the basis for constructing a coloring (in particular, it uses an ordering based on the degrees to break ties). It is thus a natural idea to use an EA to find better orderings than those used by DSatur. Technically, DSatur would still use the (dynamically computed) saturation degree to select nodes and color them with the first available color, but it would break ties between nodes of equal saturation degree by using a permutation (an individual in the EA) to order the nodes. From an EA point of view we can see DSatur as a new decoder for the EA which creates a coloring when fed with a permutation. We also tested this hybridized EA+DSatur system, where the fitness value of a given permutation is defined in the same way as for the greedy decoder. The comparison between the order-based EA, DSatur with backtracking and the hybrid system is shown in the first three rows of Table 1 in Section 4.1 for the graph G_eq,n=1000,p=0.010. These results are based on four random seeds for generating graph instances; for each instance 25 independent runs were executed with Tmax = 300,000 as the maximum number of evaluations for every algorithm. In the table, the column SR stands for Success Rate, i.e. the percentage of cases in which the graph could be colored, and the column AES is again the average number of evaluations to a solution.

² Colors are denoted by the integers 1, 2, 3.
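For concreteness, the greedy decoder and the resulting fitness can be sketched as follows (adj is an adjacency mapping such as the one built in the DSatur sketch above; the function name is ours).

```python
def greedy_decode(permutation, adj):
    """Greedy decoder for the order-based representation (sketch).

    Nodes are visited in the order given by the chromosome (a permutation)
    and each receives the smallest of the colors 1, 2, 3 not used by an
    already colored neighbor; if all three are blocked, the node stays
    uncolored. The fitness to be minimized is the number of uncolored nodes.
    """
    coloring = {}
    uncolored = []
    for v in permutation:
        blocked = {coloring[w] for w in adj[v] if w in coloring}
        for color in (1, 2, 3):
            if color not in blocked:
                coloring[v] = color
                break
        else:                      # no feasible color: leave the node uncolored
            uncolored.append(v)
    return coloring, uncolored

# fitness of a chromosome x:  len(greedy_decode(x, adj)[1])
```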

Based on Table 1 we can make a number of interesting observations. First, the results show that even the best EA cannot compete with DSatur, but hybridizing the two systems leads to a coloring algorithm that outperforms both of its components. Second, the performance of the algorithms is highly dependent on the random seeds used for generating the graphs. This means that they are not really powerful: they cannot color these graphs with high certainty.

4 The SAW-ing Evolutionary Algorithm

In this section we extend the EA used in the previous tests. Let us first have a look at the applied penalty function, which concentrates on the nodes (variables to be instantiated) rather than on the edges (constraints to be satisfied). Formally, the function f to be minimized is defined as

    f(x) = Σ_{i=1}^{n} w_i · χ(x, i)        (1)

where w_i is the local penalty (or weight) assigned to node x_i, and χ(x, i) = 1 if node x_i is left uncolored and 0 otherwise.

In the previous section we were simply counting the uncolored nodes, i.e. we used w_i = 1 for every node. This weight distribution does not distinguish between nodes, although it is not reasonable to assume that all nodes are equally hard to color. Giving hard nodes a high weight is a very natural idea, since this gives the EA a high reward for satisfying them, so the EA will 'concentrate' on these nodes. A major obstacle here is, of course, that the user does not know which nodes are hard and which are easy. Heuristic estimates of hardness can circumvent this problem, but still, the hardness of a node most certainly also depends on the problem solver, i.e. the coloring algorithm being applied: a node that is hard for one problem solver may be easy for another. Furthermore, being hard may also be context dependent, i.e. it may depend on the information the problem solver has collected in a certain phase of the search. This means that even for one specific problem solver, a particular setting of weights may become inappropriate as the search proceeds. A simple answer to these drawbacks is embodied by the following mechanism.
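Assuming the greedy decoder sketched in Section 3, Eq. (1) can be computed as in the following sketch, where weights maps each node to its w_i; with all weights equal to 1 it reduces to counting the uncolored nodes.

```python
def weighted_penalty(permutation, adj, weights):
    """Penalty f(x) of Eq. (1): the sum of the weights w_i of the nodes
    left uncolored by the greedy decoder (sketch). With all weights equal
    to 1 this reduces to simply counting the uncolored nodes."""
    _, uncolored = greedy_decode(permutation, adj)
    return sum(weights[v] for v in uncolored)
```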

4.1 Stepwise Adaptation of Weights

Motivated by the above reasons we decided to leave the decision about the hardness of the different nodes to the EA itself; moreover, we allow the EA to revise its decisions during the search. Technically this means that we apply a varying fitness function that is repeatedly modified, based on feedback concerning the progress of the search process. Similar mechanisms have been proposed earlier in

another context by, for instance, Morris [21] and Selman and Kautz [22]. In evolutionary computation, varying parameters can be divided into three classes [18]: dynamic, adaptive and self-adaptive parameter control. Our approach falls into the adaptive category. The general idea is implemented by repeatedly checking which nodes in the best individual³ violate constraints and raising the penalties w_i belonging to these nodes. Depending on when the weights are updated we can distinguish an off-line (after the run, used in the next run) and an on-line (during the run) version of this technique. In [9] and [10] the off-line version was applied; here we use the on-line version. In particular, the EA starts with the standard setting w_i = 1 for each node. After each Tp fitness evaluations the best individual in the population is checked and the weights belonging to its uncolored nodes are increased by Δw, i.e. we set w_i = w_i + Δw. This mechanism introduces two new parameters, Tp and Δw, and it is important to test whether the EA performance is sensitive to the values of these parameters. Limited by space requirements, here we can only give an illustration. Figure 2 shows the success rates of an asexual EA (SWAP mutation only) and a sexual EA (OX2 crossover and SWAP mutation) for different values of Δw and Tp.

³ More than one individual could be monitored, but preliminary tests did not indicate any advantage of this option.
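Putting the pieces together, the following sketch shows a minimal asexual SAW-ing EA of the kind described above, reusing greedy_decode and weighted_penalty from the earlier sketches. The single SWAP mutation per offspring, the (1+1) plus-selection and other bookkeeping details are our assumptions rather than a literal transcription of the implementation.

```python
import random

def saw_ea(n, adj, t_max=300_000, t_p=250, delta_w=1, seed=0):
    """Minimal asexual (1+1) SAW-ing EA (sketch).

    Reuses greedy_decode and weighted_penalty from the earlier sketches.
    Every t_p evaluations, the weights of the nodes left uncolored by the
    current best individual are raised by delta_w.
    """
    rng = random.Random(seed)
    weights = {v: 1 for v in range(n)}            # w_i = 1 for every node

    best = list(range(n))
    rng.shuffle(best)
    best_fit = weighted_penalty(best, adj, weights)

    for evals in range(1, t_max + 1):
        child = best[:]
        i, j = rng.sample(range(n), 2)
        child[i], child[j] = child[j], child[i]   # SWAP mutation
        child_fit = weighted_penalty(child, adj, weights)
        if child_fit <= best_fit:                 # (1+1) survivor selection
            best, best_fit = child, child_fit
        if best_fit == 0:                         # all nodes colored: solution found
            return best, evals

        if evals % t_p == 0:                      # SAW step: w_i <- w_i + delta_w
            _, uncolored = greedy_decode(best, adj)
            for v in uncolored:
                weights[v] += delta_w
            best_fit = weighted_penalty(best, adj, weights)  # re-evaluate under new weights

    return best, t_max                            # no solution within t_max evaluations
```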

Fig. 2. Influence of Δw (left) and Tp (right) on SR. Tested with Tmax = 300,000 on G_eq,n=1000,p=0.01,s=5; Tp = 250 is used for the Δw plot, and Δw = 1 for the Tp plot. Curves are shown for SWAP mutation only and for OX2+SWAP.

These tests reconfirm that using mutation only is better than using crossover and mutation; furthermore, they indicate that the parameters of the SAW mechanism have only a negligible influence on the EA performance. For no specific reason we use Δw = 1 and Tp = 250 in the sequel.

Obviously, an EA using the SAW mechanism searches on an adaptively changing fitness landscape. It is interesting to see how the fitness (actually the penalty expressing the error of a given chromosome) changes under the SAW regime. Figure 3 shows a typical run.

Fig. 3. Penalty curve of the best chromosome in an asexual SAW-ing EA, tested on G_eq,n=1000,p=0.01,s=5 with Tmax = 300,000.

As the plot shows, the error grows up to a certain point and then quickly decreases, finally hitting the zero line, indicating that a solution has been found. At high resolution (not presented here) the curve shows decreasing error within each period of Tp evaluations, followed by a sudden increase whenever the weights are raised, that is, we see the shape of a saw!

4.2 Performance of the SAW-ing EA

In Table 1 we present the performance results of the SAW-ing EA and the earlier tested EA versions. Comparing the results we see that the performance of the EA increases greatly through the use of SAW-ing: the success rates become very high and the algorithm is twice as fast as the other ones. Besides, the SAW-ing EA performs well independently of the random seeds, i.e. it is very robust.

              s=0             s=1             s=2             s=3           all 4 seeds
              SR    AES       SR    AES       SR    AES       SR    AES      SR    AES
DSatur        0.08  125081    0.00  300000    0.00  300000    0.80  155052   0.22  220033
EA            0.24  239242    0.00  300000    0.00  300000    0.12  205643   0.09  261221
EA+DSatur     0.44  192434    0.24  198748    0.00  300000    0.64  114232   0.33  201354
EA+SAW        0.96  76479     0.88  118580    0.92  168060    0.92  89277    0.92  113099

Table 1. Comparing DSatur with backtracking, the EA, the hybrid EA and the SAW-ing EA for n = 1000 and p = 0.010 with different random seeds.
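For reference, SR and AES can be aggregated from a set of independent runs as in the following sketch. The treatment of unsuccessful runs is an assumption on our part: AES is averaged over successful runs only, and Tmax is reported when no run succeeds, which is consistent with the AES = 300000 entries of Table 1 where SR = 0.00.

```python
def sr_and_aes(runs, t_max=300_000):
    """Aggregate SR and AES over independent runs (sketch).

    Each run is a pair (solved, evaluations). AES is averaged over the
    successful runs only; if no run succeeds, t_max is reported.
    """
    solved = [evals for ok, evals in runs if ok]
    sr = len(solved) / len(runs)
    aes = sum(solved) / len(solved) if solved else t_max
    return sr, aes

# Example: 25 runs, 23 successful at around 80,000 evaluations
runs = [(True, 80_000)] * 23 + [(False, 300_000)] * 2
print(sr_and_aes(runs))   # (0.92, 80000.0)
```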

5 The SAW-ing EA vs. DSatur

The experiments reported in the previous sections clearly indicate the superiority of the SAW-ing EA with respect to other EAs. The real challenge for our weight adaptation mechanism is, however, a comparison with a powerful heuristic graph coloring technique. We have performed an extensive comparison between the SAW-ing EA and DSatur on three different types of graphs (arbitrary 3-colorable, equi-partite 3-colorable and flat 3-colorable), for different sizes (n = 200, 500, 1000) and a range of edge connectivity values, comparing SR as well as AES. Space limitations prevent us from presenting all figures; the interested reader is again referred to [12]. Here we give an illustration for the hardest case: flat 3-colorable graphs. Comparative curves of the success rate and the number of evaluations to a solution are given in Figure 4 and Figure 5 for n = 200 and n = 1000, respectively.

Fig. 4. Comparison of SR (left) and AES (right) for n = 200.

Fig. 5. Comparison of SR (left) and AES (right) for n = 1000.

The phase transition is clearly visible in each figure: the performance of both algorithms drops at certain values of the edge connectivity p. On small graphs (n = 200) the deterioration in performance is smaller for DSatur, while on large graphs (n = 1000) the valley in the SR curve and the peak in the AES curve are narrower for the EA, showing its superiority to DSatur on these problems. The reason for this difference could be that on small instances DSatur with backtracking is able to get out of local optima and find solutions, while this is no longer possible for large graphs, where the search space becomes too big. On the other two graph topologies (arbitrary 3-colorable and equi-partite 3-colorable) the results are similar.

An additional good property of the SAW-ing EA is that it can take more advantage of extra search time. For instance, on G_eq,n=1000,p=0.008,s=5 both algorithms fail (SR = 0.00) when 300,000 evaluations are allowed. If we increase the total number of evaluations from 300,000 to 1,000,000, DSatur still has SR = 0.00, while the performance of the EA rises from SR = 0.00 to SR = 0.44 (AES = 407,283). This shows that the EA is able to benefit from the additional time, whereas the extra time is not enough for backtracking to escape the local optima.

Finally, let us consider the issue of scalability, that is, the question of how the performance of an algorithm changes as the problem size grows. Experiments on the hardest instances at the phase transition, p = 8/n, show again that DSatur is not able to find solutions on large problems. Since this leads to SR = 0 and undefined AES (or AES = Tmax), we rather perform the comparison on easier problem instances belonging to p = 10/n. The results are given in Figure 6. These figures clearly show that the SAW-ing EA outperforms DSatur. Moreover, the comparison of the AES curves suggests a linear time complexity for the EA.

Taking the scale-up curves into consideration also eliminates a possible drawback of using AES for comparing two different algorithms. Recall that AES is the average number of search steps to a solution. For an EA a search step is the creation and evaluation of a new individual (a new coloring), i.e. AES is the average number of fitness evaluations. A search step of DSatur, however, is a backtracking step, i.e. giving a node a new color.

Fig. 6. Scale-up curves for SR (left) and AES (right) for p = 10/n.

Thus, the computational complexity of DSatur is measured differently from that of an EA, a problem that cannot be circumvented since it is rooted in the different nature of these search algorithms. However, if we compare how the AES changes with growing problem size, then regardless of the different meaning of 'AES' this comparison is fair.

6 Conclusions

In this paper we considered the Stepwise Adaptation of Weights mechanism, which changes the penalty function based on measuring the error of candidate solutions of a constrained problem during an EA run. Comparison of the SAW-ing EA with a simple EA, DSatur and a hybrid EA+DSatur system, as shown in Table 1, discloses that SAW-ing is not only powerful (it greatly increases the success rate and the speed), but also robust (the performance becomes independent of the random seeds). Besides comparing different EA versions we also conducted experiments to compare EAs with a traditional graph coloring heuristic. These experiments show that our SAW-ing EA outperforms the best heuristic we could find in the literature, DSatur.

The exact working of the SAW mechanism is still an open research issue. The plot in Figure 3 suggests that a SAW-ing EA solves the problem in two phases. In the first phase the EA is learning a good setting for the weights; in this phase the penalty increases considerably because of the increased weights. In the second phase the EA is solving the problem, exploiting the knowledge (the appropriate weights) learned in the first phase; in this phase the penalty drops sharply, indicating that with the right weights (the right penalty function) the problem becomes 'easy'. This interpretation of the fitness curves is plausible. We do not, however, claim that the EA learns universally good weights for a given graph instance. First of all, another problem solver might need other weights to solve the problem. Besides, we have applied a SAW-ing EA to a graph and recorded the weights at termination. In a following experiment we applied an EA to the same graph using the learned weights non-adaptively, i.e. keeping them constant during the evolution. The results became worse than in the first run, when adaptive weights were used. This suggests that the reason for the success of the SAW mechanism is not that SAW-ing enables the problem solver to discover some hidden, universally good weights. This seems to contradict our interpretation that distinguishes two phases of the search. Another plausible explanation of the results is to see the SAW mechanism as a technique that allows the EA to shift its focus of attention by changing the priorities given to different nodes. It is thus not the weights themselves that are being learned, but rather the proportions between the weights. This can be perceived as an implicit problem decomposition: 'solve these nodes first'. The advantage of such a (quasi) continuous shift of attention is that it ultimately guides the population through the search space, escaping local optima.

At the moment there are a number of evolutionary constraint handling techniques known and practiced on constraint satisfaction as well as on constrained optimization problems [11, 26, 27]. Penalty functions embody a natural and simple way of treating constraints, but they have some drawbacks. One of them is that the composition of the penalty function has a great impact on the EA performance, while penalty functions are mostly designed in an ad hoc manner. This is a source of failure, as wrongly set weights may cause the EA to fail on the given problem. The SAW mechanism eliminates this source of failure in a simple and problem-independent manner. Future research issues concern variations of the basic SAW mechanism applied here. These variations include using different Δw's for different variables or constraints, as well as subtracting Δw from the w_i's that belong to well-instantiated variables or satisfied constraints. An especially interesting application of SAW concerns constrained optimization problems, where not only a good penalty function needs to be found, but also a suitable combination of the original optimization criterion and the penalty function.

References

1. A. Blum. An O(n^0.4)-approximation algorithm for 3-coloring (and improved approximation algorithms for k-coloring). In Proceedings of the 21st ACM Symposium on Theory of Computing, pages 535-542, New York, 1989. ACM.
2. D. Brelaz. New methods to color vertices of a graph. Communications of the ACM, 22:251-256, 1979.
3. G.J. Chaitin. Register allocation and spilling via graph coloring. In Proceedings of the ACM SIGPLAN 82 Symposium on Compiler Construction, pages 98-105. ACM Press, 1982.
4. P. Cheeseman, B. Kanefsky, and W.M. Taylor. Where the really hard problems are. In Proceedings of IJCAI-91, pages 331-337, 1991.
5. S.H. Clearwater and T. Hogg. Problem structure heuristics and scaling behavior for genetic algorithms. Artificial Intelligence, 81:327-347, 1996.
6. J.C. Culberson and F. Luo. Exploring the k-colorable landscape with iterated greedy. In Second DIMACS Challenge, Discrete Mathematics and Theoretical Computer Science. AMS, 1995. Available via http://web.cs.ualberta.ca/~joe/.
7. L. Davis. Handbook of Genetic Algorithms. Van Nostrand Reinhold, 1991.
8. A.E. Eiben. Multi-parent recombination. In T. Bäck, D. Fogel, and Z. Michalewicz, editors, Handbook of Evolutionary Computation. Institute of Physics Publishing Ltd, Bristol, and Oxford University Press, New York, 1997. Section C3.3.7, to appear in the 1st supplement.
9. A.E. Eiben, P.-E. Raue, and Zs. Ruttkay. Constrained problems. In L. Chambers, editor, Practical Handbook of Genetic Algorithms, pages 307-365. CRC Press, 1995.
10. A.E. Eiben and Zs. Ruttkay. Self-adaptivity for constraint satisfaction: Learning penalty functions. In Proceedings of the 3rd IEEE Conference on Evolutionary Computation, pages 258-261. IEEE Press, 1996.
11. A.E. Eiben and Zs. Ruttkay. Constraint satisfaction problems. In Th. Bäck, D. Fogel, and M. Michalewicz, editors, Handbook of Evolutionary Computation, pages C5.7:1-C5.7:8. IOP Publishing Ltd. and Oxford University Press, 1997.

12. A.E. Eiben and J.K. van der Hauw. Graph coloring with adaptive evolutionary algorithms. Technical Report TR-96-11, Leiden University, August 1996. Also available as http://www.wi.leidenuniv.nl/~gusz/graphcol.ps.gz.
13. A.E. Eiben and J.K. van der Hauw. Solving 3-SAT with adaptive genetic algorithms. In Proceedings of the 4th IEEE Conference on Evolutionary Computation, pages 81-86. IEEE Press, 1997.
14. E. Falkenauer. A new representation and operators for genetic algorithms applied to grouping problems. Evolutionary Computation, 2(2):123-144, 1994.
15. E. Falkenauer. Solving equal piles with the grouping genetic algorithm. In S. Forrest, editor, Proceedings of the 6th International Conference on Genetic Algorithms, pages 492-497. Morgan Kaufmann, 1995.
16. M.R. Garey and D.S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman and Co., 1979.
17. M.R. Garey, D.S. Johnson, and H.C. So. An application of graph coloring to printed circuit testing. IEEE Transactions on Circuits and Systems, CAS-23:591-599, 1976.
18. R. Hinterding, Z. Michalewicz, and A.E. Eiben. Adaptation in evolutionary computation: a survey. In Proceedings of the 4th IEEE Conference on Evolutionary Computation, pages 65-69. IEEE Service Center, 1997.
19. D.S. Johnson, C.R. Aragon, L.A. McGeoch, and C. Schevon. Optimization by simulated annealing: an experimental evaluation; part II, graph coloring and number partitioning. Operations Research, 39(3):378-406, 1991.
20. L. Kucera. The greedy coloring is a bad probabilistic algorithm. Journal of Algorithms, 12:674-684, 1991.
21. P. Morris. The breakout method for escaping from local minima. In Proceedings of the 11th National Conference on Artificial Intelligence, AAAI-93. AAAI Press/The MIT Press, 1993.
22. B. Selman and H. Kautz. Domain-independent extensions to GSAT: Solving large structured satisfiability problems. In R. Bajcsy, editor, Proceedings of IJCAI'93, pages 290-295. Morgan Kaufmann, 1993.
23. J.S. Turner. Almost all k-colorable graphs are easy to color. Journal of Algorithms, 9:63-82, 1988.
24. G. von Laszewski. Intelligent structural operators for the k-way graph partitioning problem. In R.K. Belew and L.B. Booker, editors, Proceedings of the 4th International Conference on Genetic Algorithms, pages 45-52. Morgan Kaufmann, 1991.
25. D. De Werra. An introduction to timetabling. European Journal of Operational Research, 19:151-162, 1985.
26. Z. Michalewicz and M. Michalewicz. Pro-life versus pro-choice strategies in evolutionary computation techniques. In M. Palaniswami, Y. Attikiouzel, R.J. Marks, D. Fogel, and T. Fukuda, editors, Computational Intelligence: A Dynamic System Perspective, pages 137-151. IEEE Press, 1995.
27. Z. Michalewicz and M. Schoenauer. Evolutionary algorithms for constrained parameter optimization problems. Evolutionary Computation, 4(1):1-32, 1996.

This article was processed using the LaTeX macro package with LLNCS style.