Genetic Algorithm Inspired by Gene Duplication

Hidefumi Sawai and Susumu Adachi
Intelligent Communications Division, Communications Research Laboratory, Ministry of Posts & Telecommunications
4-2-1, Nukui-Kita-machi, Koganei, Tokyo 184-8795, Japan
{sawai, sadachi}@crl.go.jp
Abstract - The gene duplication theory was first proposed by a Japanese biologist, Dr. Susumu Ohno, in the 1970s. Inspired by this theory, we develop a gene duplicated genetic algorithm. Several variants of this algorithm are considered. Individuals with genes of various lengths are evolved based on a parameter-free genetic algorithm, and genes of different lengths are then concatenated by migrating among subpopulations. To verify the effectiveness of the gene duplicated genetic algorithm, we performed a comparative study using function optimization problems from the first ICEO (International Contest on Evolutionary Optimization) held in 1996.
1 Introduction

The genetic algorithm (GA) (Holland 1975) is an evolutionary computation paradigm inspired by biological evolution (Darwin 1859). GAs have been successfully used in many practical applications such as function optimization problems, combinatorial optimization problems, and the optimal design of parameters in machines (Goldberg 1989). However, the genetic parameters of a GA have to be determined by trial and error, making optimization by GA ad hoc. One of the most important research areas in evolutionary computation is the self-adaptation of genetic parameters and operators. Such adaptation can tune an algorithm while it is being used to solve a given problem. Hinterding (Hinterding 1997) developed a classification of adaptation, which covers different levels (such as environment, population, individual, and component) and types (such as static and dynamic) of adaptation. However, it is a very time-consuming task to design an optimal evolutionary strategy in an adaptive way, because we have to run the evolutionary algorithm many times by trial and error. To avoid this adaptive parameter-setting problem, we proposed a parameter-free genetic algorithm (PfGA) (Kizu 1997; Sawai 1998), for which there is no need to set control parameters for genetic operations as constants in advance. This algorithm merely uses random values or probabilities for setting almost all genetic parameters. The PfGA is inspired by the "disparity theory of evolution" proposed by Furusawa et al. (Furusawa 1992; Wada 1993). The idea is based on the disparity of copy-error rates between the leading and lagging strands of DNA when each strand makes its copy.

In contrast, the gene duplication principle was first advocated by a Japanese biologist, Dr. Susumu Ohno, in the 1970s (Ohno 1970). Biological organisms including viruses, plants, and animals duplicate their genes during the process of evolution. Inspired by the theory of gene duplication, we developed a new GA with several variants. The first variant, called the gene-concatenated GA, evolves each gene separately and, after the evolution, concatenates all of the genes. This is a simple scheme and works very well if a given problem is linearly separable. The second variant is called the gene-prolonged GA. In this variant, a gene is evolved and copied, and its copy is concatenated with the original gene to double its size; in successive steps, the gene becomes larger and larger. The third variant is the gene-coupled GA, in which two small genes are concatenated to become larger genes. The last variant is an extended version of the gene-coupled GA, in which the loci of genes are taken into account. We compared the performance of the four variants using function optimization problems from the ICEO '96.

Section 2 of this paper describes the PfGA as well as the disparity theory of evolution on which the PfGA is based. Section 3 describes the four variants of the gene duplicated GA. Sections 4, 5, 6, and 7 cover the experiment, results, discussion, and conclusion of this paper.
2 Parameter-free GA

2.1 Disparity Theory of Evolution

As Charles Darwin claimed in the "Origin of Species" in 1859 (Darwin 1859), a major factor contributing to evolution is mutation, which can be caused by spontaneous misreading of bases during DNA synthesis. Semiconservative replication of double-stranded DNA is an asymmetric process, in which there is a leading and a lagging strand. Furusawa et al. described a "disparity theory of evolution" (Furusawa 1992) based on the difference in frequency of strand-specific base misreadings between the leading and lagging DNA strands (i.e., the disparity model).
Figure 1: A hypothesis in the disparity theory of evolution
Figure 1 shows a hypothesis in the disparity theory of evolution. In the figure, the leading strand is copied smoothly, whereas for the lagging strand a copy error can occur because multiple enzymes are necessary to produce its copy. This disparity, or asymmetry, in producing each strand arises from the different mutation rates of the leading and lagging strands. Thus the "diversity" of DNAs is maintained in a population as generations proceed. The disparity model guarantees that the mutation rate of some leading strands is zero or very small. When circumstances change, for example when the original wild type cannot survive, selected mutants may adapt to the new circumstances as a new wild type. In their study, the disparity model was compared with a parity model in which there is no statistical difference in the frequency of base misreadings between strands, as in the generally accepted model. The disparity model outperformed the parity model in a knapsack optimization problem. The authors clearly showed that the disparity model was better in cases with a small population, strong selection pressure, a high mutation rate, sexual reproduction with diploidy, and strong competition. In contrast, favorable conditions for the parity model are a large population, weak selection pressure, a low mutation rate, asexual reproduction with haploidy, and weak competition (Wada 1993).
2.2 Parameter-free GA

The PfGA is inspired by the disparity theory of evolution. The population of the PfGA is considered as a whole set S of individuals, which corresponds to all possible solutions. From this whole set S, a subset S' is introduced. All genetic operations such as selection, crossover, and mutation are conducted on S', thus evolving the subpopulation S'. From the subpopulation S', we introduce a family that contains two parents and the two children generated from them (see the left side of Fig. 2).

Figure 2: Population S, subpopulation S', family S'' (left), and selection rules (right) in a parameter-free GA

The PfGA procedure is as follows:

Step 1. Select one individual randomly from the whole population S and add this individual to the subpopulation S'.
Step 2. Select one individual randomly from the whole population S and again add this individual to the subpopulation S'.
Step 3. Select two individuals randomly from the subpopulation S' and perform crossover between these individuals as "parent 1 (P1)" and "parent 2 (P2)".
Step 4. For one randomly chosen child of the two children generated by the crossover, perform mutation at random.
Step 5. Among the parents (P1 and P2) and the two generated children (C1 and C2), select one to three individuals depending on the following cases (i.e., cases 1 to 4 described below) and feed them back to the subpopulation S'.
Step 6. If the number of individuals in subpopulation S' is greater than one, go to Step 3; otherwise, go to Step 2.

For the crossover operation of the PfGA, we use multiple-point crossover in which n crossover points (n is a random number, 0 < n < gene length, which changes every time the crossover takes place) are randomly selected and the genes between the two parents' chromosomes are exchanged. For the mutation operation, one child is randomly chosen from the two offspring, and a randomly chosen portion of the child's chromosome is inverted (i.e., bit-flipped). For the selection operation, we compare the fitness values of all individuals (C1, C2, P1, and P2) in the family. The selection rules shown in Fig. 2 (right) are used for four different cases depending on the fitness values f of the parents and children.

Case 1: If the fitness values of the two children are better than those of the parents, then C1, C2, and arg max_{Pi}(f(P1), f(P2)) are left in S', thus increasing the size of S' by one.
Case 2: If the fitness values of C1 and C2 are worse than those of P1 and P2, then only arg max_{Pi}(f(P1), f(P2)) is left in S', thus decreasing the size of S' by one.
Case 3: If the fitness value of either P1 or P2 is better than that of the children, arg max_{Ci}(f(C1), f(C2)) and arg max_{Pi}(f(P1), f(P2)) are left in S', thus maintaining the size of S'.
Case 4: In all other situations, arg max_{Ci}(f(C1), f(C2)) is preserved and one individual randomly chosen from S is added to S', thus maintaining the size of S'.

The PfGA is compact and easy to implement. We compared it with conventional GAs such as the simple GA (Goldberg 1989) and the steady-state GA (Syswerda 1991) and found that it converges more rapidly than these conventional GAs (Kizu 1999). The PfGA also performs well compared with the eight other algorithms discussed at the ICEO '96: it would have come in second place in the five-dimensional version of the function optimization problems had it been entered in the contest. Furthermore, it has been extended to parallel distributed processing with hierarchical migration methods, in which two different architectures with five different migration methods have been proposed and compared with each other. The parallel processing of the PfGA (Sawai 1999) demonstrated that the ENES (expected number of evaluations per success) decreased and the success rates increased significantly in reaching the VTR (value to reach). A more detailed description is provided elsewhere (Kizu 1997; Sawai 1998; Kizu 1999; Sawai 1999). We used the PfGA to evolve each subpopulation with a different gene size.
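To make the procedure above concrete, the following is a minimal sketch of the PfGA loop in Python. It is not the authors' implementation: the chromosome length, the stand-in fitness convention (smaller is better, matching the minimization problems used later), and the particular reading of the boundaries between cases 1 to 4 are assumptions made for illustration.

```python
import random

def multi_point_crossover(p1, p2):
    """Exchange segments between two equal-length bit lists at n random cut points."""
    n = random.randint(1, len(p1) - 1)                 # 0 < n < gene length
    points = sorted(random.sample(range(1, len(p1)), n))
    c1, c2, swap, prev = [], [], False, 0
    for pt in points + [len(p1)]:
        a, b = (p2, p1) if swap else (p1, p2)
        c1 += a[prev:pt]
        c2 += b[prev:pt]
        swap, prev = not swap, pt
    return c1, c2

def mutate(child):
    """Bit-flip a randomly chosen contiguous portion of the chromosome."""
    i, j = sorted(random.sample(range(len(child) + 1), 2))
    return child[:i] + [1 - b for b in child[i:j]] + child[j:]

def pfga(fitness, length=24, steps=10000):
    """Minimal PfGA sketch; fitness is minimized (smaller is better)."""
    def new_individual():
        bits = [random.randint(0, 1) for _ in range(length)]
        return (fitness(bits), bits)

    sub = [new_individual(), new_individual()]          # Steps 1-2: seed S' from S
    best = min(sub)
    for _ in range(steps):
        if len(sub) < 2:                                 # Step 6 -> back to Step 2
            sub.append(new_individual())
        p1, p2 = random.sample(sub, 2)                   # Step 3: pick two parents
        sub.remove(p1)
        sub.remove(p2)
        c1_bits, c2_bits = multi_point_crossover(p1[1], p2[1])
        if random.random() < 0.5:                        # Step 4: mutate one child at random
            c1_bits = mutate(c1_bits)
        else:
            c2_bits = mutate(c2_bits)
        c1 = (fitness(c1_bits), c1_bits)
        c2 = (fitness(c2_bits), c2_bits)
        best_p, worst_p = min(p1, p2), max(p1, p2)
        best_c = min(c1, c2)
        best = min(best, best_c)
        # Step 5: selection rules, one reading of cases 1-4
        if c1[0] < best_p[0] and c2[0] < best_p[0]:      # Case 1: both children better, |S'| + 1
            sub += [c1, c2, best_p]
        elif c1[0] > worst_p[0] and c2[0] > worst_p[0]:  # Case 2: both children worse, |S'| - 1
            sub.append(best_p)
        elif best_p[0] < best_c[0]:                      # Case 3: best parent wins, |S'| kept
            sub += [best_c, best_p]
        else:                                            # Case 4: otherwise, |S'| kept
            sub += [best_c, new_individual()]
    return best

# Example: minimize the number of 1-bits in a 24-bit string.
print(pfga(lambda bits: sum(bits)))
```

Because the subpopulation grows only in case 1 and shrinks only in case 2, the size of S' regulates itself during the run, which is why no population-size parameter has to be set in advance.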
3 GA Inspired by Gene Duplication

3.1 Gene-concatenated GA (type A)

In this algorithm, each variable x_i (i = 1, ..., N) is encoded in Gray coding with l bits. Each individual with a single gene [x_i] then evolves by using the PfGA. The fitness function for each individual is defined as f(0, 0, ..., x_i, 0, ..., 0). When an individual C1 better than its two parents emerges in each subpopulation, all genes from [x_1] to [x_N] are concatenated to make a long gene [[x_1] ... [x_N]] with a duplication rate R. This long gene belongs to the subpopulation S', as shown in Fig. 3, while the worst individual is removed from S' to maintain the size of S'.

Figure 3: Subpopulations of type A for N = 5
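As an illustration of the type A idea, the following is a minimal sketch (not the authors' code): each variable is optimized in its own subpopulation against the masked fitness f(0, ..., 0, x_i, 0, ..., 0), and the per-variable winners are concatenated. The helper `optimize_variable` uses plain random search as a stand-in for the per-variable PfGA run, and the duplication rate R and the migration into S' are omitted here.

```python
import random

def masked_fitness(f, i, x_i, n):
    """Fitness of a single-gene individual: all other variables are fixed at 0."""
    x = [0.0] * n
    x[i] = x_i
    return f(x)

def optimize_variable(f, i, n, lo, hi, evals=2000):
    """Stand-in for the per-variable PfGA run: plain random search on the masked fitness."""
    return min((random.uniform(lo, hi) for _ in range(evals)),
               key=lambda v: masked_fitness(f, i, v, n))

def gene_concatenated(f, n, lo, hi):
    """Type A sketch: evolve each gene [x_i] separately, then concatenate the winners."""
    solution = [optimize_variable(f, i, n, lo, hi) for i in range(n)]
    return solution, f(solution)

# Works well on separable functions such as the sphere model (minimum at x_i = 1).
sphere = lambda x: sum((v - 1.0) ** 2 for v in x)
print(gene_concatenated(sphere, n=5, lo=-5.0, hi=5.0))
```

The sketch makes the separability assumption explicit: optimizing each x_i against the masked fitness only recovers the global optimum when the variables do not interact, which is consistent with type A's weak results on Shekel's foxholes and Langerman's function reported later.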
3.2 Gene-prolonged GA (type B)

As the initial population, each gene with length i × l, [[x_1][x_2] ... [x_i]], is generated in each subpopulation S_i (i = 1, 2, ..., N). Each individual with gene length i × l evolves in each S_i according to the fitness function f(x_1, x_2, ..., x_i, 0, ..., 0). When an individual C1 better than its two parents emerges in S_i, its gene is prolonged from [[x_1] ... [x_i]] to [[x_1] ... [x_{i+1}]] with a duplication probability R by copying [x_i] to [x_{i+1}] (i = 1, ..., N - 1), as shown in Fig. 4. The individual with the gene longer by one belongs to the subpopulation S_{i+1}, and the worst individual is removed from S_{i+1} as well.

Figure 4: Subpopulations of type B for N = 5
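The prolongation step itself is easy to express. The sketch below (illustrative names; genes stored as flat bit lists of i × l bits) duplicates the last l-bit segment with probability R, which is the operation that moves an individual from S_i to S_{i+1}.

```python
import random

GENE_BITS = 24  # l: bits per variable (the paper uses 24-bit Gray coding for most functions)

def prolong_gene(bits, duplication_rate):
    """Type B sketch: with probability R, append a copy of the last l-bit segment [x_i],
    turning a gene [[x_1]...[x_i]] into [[x_1]...[x_i][x_{i+1}]] with x_{i+1} = x_i."""
    if random.random() > duplication_rate:
        return None                      # no duplication this time
    return bits + bits[-GENE_BITS:]      # copy [x_i] into the new locus [x_{i+1}]

# A two-variable gene grows into a three-variable gene when the coin flip succeeds.
gene = [random.randint(0, 1) for _ in range(2 * GENE_BITS)]
longer = prolong_gene(gene, duplication_rate=0.8)
if longer is not None:
    print(len(gene) // GENE_BITS, "->", len(longer) // GENE_BITS)  # 2 -> 3
```

In the full algorithm the prolonged individual would migrate from S_i to S_{i+1}, displacing the worst individual there.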
3.3 Gene-coupled GA (type C)

As the initial population, genes with a length of i × l are generated, similar to type B. Each individual evolves in each S_i according to the PfGA and the fitness function f(x_1, x_2, ..., x_i, 0, ..., 0). A gene with a length of i × l, [[x_1] ... [x_i]] in S_i, is coupled to another gene [[x_1] ... [x_j]] in S_j to produce [[x_1] ... [x_{i+j}]] (2 ≤ i + j ≤ N) with a duplication probability of R. This individual, having a length of (i + j) × l, belongs to S_{i+j}, as shown in Fig. 5. The worst individual is removed from S_{i+j} as well.

Figure 5: Subpopulations of type C for N = 5
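A sketch of the coupling step follows, with the subpopulations kept as a dictionary from gene length (in variables) to a list of (fitness, bits) individuals; these data-structure choices, the helper names, and the placeholder fitness are assumptions made for illustration.

```python
import random

GENE_BITS = 24  # l bits per variable (assumed, as in the experiments)

def couple_genes(subpops, i, j, n_max, duplication_rate, fitness):
    """Type C sketch: couple a gene from S_i with a gene from S_j into S_{i+j}.
    subpops maps gene length k (in variables) -> list of (fitness_value, bits)."""
    if i + j > n_max or random.random() > duplication_rate:
        return None
    _, gene_i = random.choice(subpops[i])
    _, gene_j = random.choice(subpops[j])
    child = gene_i + gene_j                      # [[x_1]...[x_i]] followed by [[x_1]...[x_j]]
    target = subpops[i + j]
    target.append((fitness(child, i + j), child))
    target.remove(max(target))                   # displace the worst individual in S_{i+j}
    return child

def dummy_fitness(bits, k):
    # Placeholder: a real run would Gray-decode x_1..x_k and evaluate f(x_1, ..., x_k, 0, ..., 0).
    return sum(bits)

N = 5
pops = {k: [(float(k), [random.randint(0, 1) for _ in range(k * GENE_BITS)])
            for _ in range(3)] for k in range(1, N + 1)}
child = couple_genes(pops, 2, 3, N, duplication_rate=1.0, fitness=dummy_fitness)
print(len(child) // GENE_BITS)   # 5 variables: the coupled gene now lives in S_5
```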
3.4 Extended gene-coupled GA (type D)

Type D is an extension of type C. The loci of genes are not distinguished in type C, but they are distinguished in type D, as shown in Fig. 6. As the initial populations, genes with a gene length of l are generated as well. Then, genes with two successive variables [[x_i][x_{i+1}]] are generated. Next, genes with three successive variables [[x_i][x_{i+1}][x_{i+2}]] are generated. The last initial genes are [[x_1] ... [x_N]], as shown in Fig. 6. The fitness function of an individual with two successive genes, for example, is defined as f(0, ..., 0, x_i, x_{i+1}, 0, ..., 0). When an individual C1 better than its two parents emerges in S_i, genes are coupled only in the case of successive gene loci, such as [x_1] and [x_2], [[x_1][x_2]] and [x_3], or [[x_1][x_2][x_3]] and [[x_4][x_5]], leading to the final gene [[x_1] ... [x_5]], as shown in Fig. 6, with a duplication probability of R.

Figure 6: Subpopulations of type D for N = 5
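The only structural difference from type C is the adjacency constraint on loci. Below is a small sketch of that check, using (start locus, length) tags for gene segments as an illustrative bookkeeping device.

```python
def can_couple(segment_a, segment_b):
    """Type D sketch: a segment covering loci [start, start + length) may only be coupled
    with the segment that starts exactly where it ends, e.g. [[x_1][x_2]] with [x_3]."""
    start_a, len_a = segment_a
    start_b, len_b = segment_b
    return start_a + len_a == start_b

# [[x_1][x_2]] + [x_3] is allowed, [[x_1][x_2]] + [x_5] is not.
print(can_couple((1, 2), (3, 1)))  # True
print(can_couple((1, 2), (5, 1)))  # False
```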
4 Experiment

Experiments were performed using the four types of gene duplication algorithms on the five function optimization (minimization) problems shown in Table 1. The first function is De Jong's sphere model (De Jong 1975), which takes a global minimum of 0 at x_i = 1 (i = 1, ..., N). The second function is Griewank's function, which takes a global minimum of 0 at x_i = 100 (i = 1, ..., N). The third function is Shekel's foxholes, which takes a global minimum at different values of x_i (i = 1, ..., N) and, unlike the sphere model, is not separable. The fourth function is Michalewicz's function, which takes a minimum value of -4.688 at different values of x_i (i = 1, ..., N) in the five-dimensional version. The last function is a generalized Langerman's function, which takes a global minimum of -1.4 or below at different values of x_i (i = 1, ..., N) and is not separable either. Basically, the PfGA is used in each subpopulation S_i. Genes are duplicated according to each type (types A to D) of duplicated GA with a duplication probability R. This rate R varies from 0 to 1.0 in steps of 0.2; a rate R of 0 corresponds to the PfGA. Each value x_i is encoded in Gray coding into 24 bits, except for Michalewicz's function, which is encoded into 22 bits. Three hundred independent trials (one trial is defined as 10,000 evaluations) were performed for each subpopulation S_i from different random seeds. The evaluation criteria (Bersini 1996) for the performance of each algorithm are the success rate R_s in reaching the VTR or below, and the ENES. All subpopulations with gene duplication can evolve in parallel. The computation load is approximately proportional to the gene length and is heaviest in the subpopulation with gene length N.

Table 1: Test functions

Sphere model: f(x) = Σ_{i=1}^{N} (x_i - 1)^2, -5 ≤ x_i ≤ 5, VTR = 1.0 × 10^-6
Griewank's function: f(x) = (1/d) Σ_{i=1}^{N} (x_i - 100)^2 - Π_{i=1}^{N} cos((x_i - 100)/√i) + 1, d = 4000, -600 ≤ x_i ≤ 600, VTR = 1.0 × 10^-4
Shekel's foxholes: f(x) = - Σ_{i=1}^{m} 1 / ( Σ_{j=1}^{N} (x_j - a_ij)^2 + c_i ), m = 30, 0 ≤ x_i ≤ 10, VTR = -9
Michalewicz's function: f(x) = - Σ_{i=1}^{N} sin(x_i) sin^{2m}(i x_i^2 / π), m = 10, 0 ≤ x_i ≤ π, VTR = -4.687
Generalized Langerman's function: f(x) = - Σ_{i=1}^{m} c_i exp(-(1/π) Σ_{j=1}^{N} (x_j - a_ij)^2) cos(π Σ_{j=1}^{N} (x_j - a_ij)^2), m = 5, 0 ≤ x_i ≤ 10, VTR = -1.5
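To illustrate the encoding used in the experiment, the sketch below decodes a 24-bit Gray-coded segment into a real value on the variable's interval and evaluates the sphere and Griewank functions from Table 1. The linear mapping from the decoded integer onto [lo, hi] is an assumption; the paper does not spell out the exact decoding.

```python
import math

def gray_to_int(bits):
    """Convert a Gray-coded bit list (MSB first) to a plain integer."""
    value = bits[0]
    out = value
    for b in bits[1:]:
        value ^= b                     # binary bit = previous binary bit XOR current Gray bit
        out = (out << 1) | value
    return out

def decode_variable(bits, lo, hi):
    """Map an l-bit Gray-coded segment onto the real interval [lo, hi] (assumed linear map)."""
    max_int = (1 << len(bits)) - 1
    return lo + (hi - lo) * gray_to_int(bits) / max_int

def sphere(x):
    """Sphere model from Table 1: minimum 0 at x_i = 1."""
    return sum((xi - 1.0) ** 2 for xi in x)

def griewank(x, d=4000.0):
    """Griewank's function from Table 1: minimum 0 at x_i = 100."""
    s = sum((xi - 100.0) ** 2 for xi in x) / d
    p = math.prod(math.cos((xi - 100.0) / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1.0

# Decode a 5-variable, 24-bit-per-variable chromosome and evaluate it.
L = 24
chromosome = [0] * (5 * L)
x = [decode_variable(chromosome[i * L:(i + 1) * L], -5.0, 5.0) for i in range(5)]
print(sphere(x), griewank([xi + 100.0 for xi in x]))
```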
5 Results

For the sphere model, the success rate R_s was 100% for all types of GAs. Figure 7 shows the ENES. All gene duplication types are more effective than the PfGA (the case of R = 0), and the ENES is lowest at a rate of R = 1. Of the four types of algorithms, type B performed the best because the sphere model takes its global minimum at x_i = 1 in every dimension.

Figure 7: ENES as a function of R for the sphere model

For Griewank's function, the success rate R_s is shown in Figure 8 and the ENES is shown in Figure 9. The success rates gradually increase, except that of type A, and the ENES gradually decreases for types B and C. Among the four types, type B performs best and reaches the solution easily because the function takes its global minimum at x_i = 100 in every dimension. In the case of type A, the success rate was zero at duplication rates above 0.4, which indicates the difficulty type A has in finding the global minimum.

Figure 8: Success rates as a function of R for Griewank's function
Figure 9: ENES as a function of R for Griewank's function

For Shekel's foxholes, success rates are shown in Figure 10 and the ENES is shown in Figure 11. According to the results in Figure 10, all success rates are below 10%, which indicates the ineffectiveness of gene duplication for this function because the function takes its global minimum at different values of x_i in each dimension. However, the success rate for type B is superior to that of the other types at a duplication rate of 1.0, which means that the gene-prolonged GA is slightly more effective than the others.

Figure 10: Success rates as a function of R for Shekel's foxholes
Figure 11: ENES as a function of R for Shekel's foxholes

For Michalewicz's function, success rates are shown in Figure 12 and the ENES is shown in Figure 13. The success rates for type B are almost constant, and the ENES for types B and C is almost the same as that of the PfGA. However, the success rates were 100% for types A and D, and the ENES was ten times better than that of the PfGA. This is because the function is separable, so the optimal x_i in each dimension leads to better performance in higher dimensions, even though all x_i (i = 1, ..., N) differ from each other.

Figure 12: Success rates as a function of R for Michalewicz's function
Figure 13: ENES as a function of R for Michalewicz's function

For Langerman's function, success rates are shown in Figure 14 and the ENES is shown in Figure 15. The success rates are almost constant, except for type A, which indicates the ineffectiveness of gene duplication with this function. In type A, concatenation of genes at each x_i does not work well because this function is not separable and takes its global minimum at different values of x_i (i = 1, ..., N).

Figure 14: Success rates as a function of R for Langerman's function
Figure 15: ENES as a function of R for Langerman's function
6 Discussion

Table 2: CPU time (s) for each function and type of GDGA

type/func    Sp      Gr      Sh      Mi      La      Ave.
A            5.01    5.24    6.02    5.42    5.58    5.45
B            5.58    5.50    5.97    5.77    5.53    5.67
C            5.28    5.39    5.84    5.35    5.36    5.44
D            12.7    13.8    14.5    13.1    13.3    13.5
Ave.         7.15    7.48    8.09    7.40    7.45    7.51
Table 2 shows the CPU time (s) on a DEC Alpha machine (300 MHz) for executing 10,000 evaluations of individuals in each subpopulation S_i for each function and each type of GDGA. From this table, the CPU time seems to be proportional to the total gene length in each type of GDGA, i.e.,

Σ_{i=1}^{N} n(S_i') = 10 : 15 : 15 : 35 = 2 : 3 : 3 : 7 (for types A : B : C : D),    (1)

where n(S_i') is the number of genes in subpopulation S_i'.
From this table, we can easily measure the convergence time for reaching or exceeding the VTR as follows:

convergence time = CPU time × ENES / 10,000.    (2)

For example, Table 3 shows the convergence times in the case of a duplication rate of 0.8, which shows the best performance as a whole. From this table, the GDGA converges within several seconds, regardless of which type is used.
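As a quick illustration of Equation (2): taking type B on the sphere model, whose CPU time in Table 2 is 5.58 s per 10,000 evaluations, a hypothetical ENES of 200 evaluations would give a convergence time of 5.58 × 200 / 10,000 ≈ 0.11 s, i.e., well under a second. The ENES value used here is only an assumed placeholder, since the actual ENES values are read from Figure 7.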
Table 3: Convergence time (s) for each function and type of GDGA (duplication rate R = 0.8)

type/func    Sp      Gr      Sh      Mi      La      Ave.
A            0.13    -       2.34    0.24    2.89    2.16
B            0.10    1.39    2.07    1.45    1.43    2.35
C            0.12    2.10    1.78    2.02    1.67    5.68
D            0.32    7.03    6.34    0.51    3.98    3.27
Ave.         0.17    -       3.13    1.10    -       -
Table 4 shows the relative order of the five types of algorithms for each benchmark function. If the performance was almost the same, the same rank was given, such as four "1"s for Shekel's foxholes.

Table 4: Results

Since types B and C do not distinguish between the loci of genes, meaning that the same gene x_i can be assigned to different loci of a gene, gene duplication is effective for functions that are symmetrical in terms of x_i, such as the sphere model and Griewank's function. However, it is not so effective for functions that are asymmetrical in terms of x_i, such as Michalewicz's function. In contrast, since types A and D distinguish between the loci of genes and explore the optimal solution x_i in each dimension i, they perform better for Michalewicz's function. For Shekel's foxholes and Langerman's function, which are neither separable nor symmetrical in terms of x_i, the four algorithms were not so effective, showing a performance equivalent to or worse than that of the PfGA. This poor performance arises from the fact that there are two ways to represent a fitness function with one variable x_k, for example, for Shekel's foxholes:

f(0, ..., 0, x_k, 0, ..., 0) = - Σ_{i=1}^{m} 1 / ( (x_k - a_ik)^2 + Σ_{j≠k} a_ij^2 + c_i ),    (3)

f(x_k) = - Σ_{i=1}^{m} 1 / ( (x_k - a_ik)^2 + c_i ).    (4)

In this paper, Equation (3) is used. The two equations take different global minima at different values of x_k, depending on a_ij. This kind of situation also occurs for Langerman's function. Equation (4) is used only for a special case in which a one-variable function is extracted from the original fitness function, such as Shekel's foxholes; in the usual case there is no form such as Equation (4) for general N-dimensional functions. Nevertheless, the four variants of the GDGA are effective depending on the relationship between the types and the functions, as with the ICEO benchmark problems. In a future study, we will further improve the algorithms to deal with more difficult problems in which scaling and/or rotation of the variables is applied to the functions (Whitley 1995). For this purpose, the size of subpopulation S_i should be increased to maintain diversity so that individuals with duplicated genes can adapt to a new environment.
7 Conclusion

We proposed a gene duplicated GA (GDGA) inspired by the principle of gene duplication in biological evolution. We implemented four variants of the GDGA: a gene-concatenated type, a gene-prolonged type, a gene-coupled type, and an extended gene-coupled type. It was demonstrated that gene duplicated GAs are more effective than the PfGA for solving some classes of benchmark functions from the first ICEO, held in 1996.
Bibliography
(Darwin 1859) C. Darwin, "On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life," John Murray, London, 1859.
(Holland 1975) J.H. Holland, "Adaptation in Natural and Artificial Systems," The University of Michigan Press, 1975.
(Goldberg 1989) D.E. Goldberg, "Genetic Algorithms in Search, Optimization, and Machine Learning," Addison-Wesley, 1989.
(Syswerda 1991) G. Syswerda, "A Study of Reproduction in Generational and Steady-State Genetic Algorithms," Foundations of Genetic Algorithms, pp. 94-101, Morgan Kaufmann, 1991.
(Hinterding 1997) R. Hinterding, Z. Michalewicz, and A.E. Eiben, "Adaptation in Evolutionary Computation: A Survey," Proc. of the IEEE Int. Conf. on Evolutionary Computation, pp. 65-69, 1997.
(Ohno 1970) S. Ohno, "Evolution by Gene Duplication," Springer-Verlag, 1970.
(De Jong 1975) K.A. De Jong, "An Analysis of the Behavior of a Class of Genetic Adaptive Systems," Doctoral dissertation, University of Michigan, 1975.
(Furusawa 1992) M. Furusawa and H. Doi, "Promotion of Evolution: Disparity in the Frequency of Strand-specific Misreading Between the Lagging and Leading DNA Strands Enhances Disproportionate Accumulation of Mutations," J. Theor. Biol., vol. 157, pp. 127-133, 1992.
(Wada 1993) K. Wada, H. Doi, S. Tanaka, Y. Wada, and M. Furusawa, "A Neo-Darwinian Algorithm: Asymmetrical Mutations due to Semiconservative DNA-type Replication Promote Evolution," Proc. Natl. Acad. Sci. USA, vol. 90, pp. 11934-11938, 1993.
(Bersini 1996) The Organising Committee: H. Bersini, M. Dorigo, S. Langerman, G. Seront, and L. Gambardella, "Results of the First International Contest on Evolutionary Optimization (1st ICEO)," 1996 IEEE Int. Conf. on Evolutionary Computation (ICEC '96), pp. 611-615, 1996.
(Whitley 1995) D. Whitley, K. Mathias, S. Rana, and J. Dzubera, "Building Better Test Functions," Proc. of the Sixth Int. Conf. on Genetic Algorithms, pp. 239-246, Morgan Kaufmann, 1995.
(Kizu 1997) S. Kizu, H. Sawai, and T. Endo, "Parameter-free Genetic Algorithm: GA without Setting Genetic Parameters," Proc. of the 1997 Int. Symp. on Nonlinear Theory and its Applications, vol. 2 of 2, pp. 1273-1276, Dec. 1997.
(Sawai 1998) H. Sawai and S. Kizu, "Parameter-free Genetic Algorithm Inspired by 'Disparity Theory of Evolution'," Proc. of the Int. Conf. on Parallel Problem Solving from Nature, pp. 702-711, 1998.
(Kizu 1999) S. Kizu, H. Sawai, and S. Adachi, "Parameter-free Genetic Algorithm (PfGA) Using Adaptive Search with Variable-size Local Population and its Extension to Parallel Distributed Processing," Trans. on IEICE, vol. J82-D-II, no. 3, pp. 1-10, Mar. 1999 (in Japanese).
(Sawai 1999) H. Sawai and S. Adachi, "Parallel Distributed Processing of a Parameter-free GA by Using Hierarchical Migration Methods," Proceedings of the Genetic and Evolutionary Computation Conference (GECCO '99), July 1999, to be published.