Crossover Operator Effect in Function Optimization with Constraints

D. Ortiz-Boyer, C. Hervás-Martínez, and N. García-Pedrajas

Department of Computing and Numerical Analysis, University of Córdoba
C2 Building, Campus of Rabanales s/n, E-14071 Córdoba, Spain
{ma1orbod,chervas,npedrajas}@uco.es
Abstract. Most real-world optimization problems consist of linear and non-linear cost functions subject to a set of constraints. In genetic algorithms the techniques for coping with such constraints are manifold: penalty functions, keeping the population in the feasible region, etc. Mutation and crossover operators must take into account the specific features of this kind of problem, as they are responsible for generating new individuals. In this work, we analyze the influence of the choice of crossover operator on the problem of function optimization with constraints. We focus on the crossover operator because it is the most characteristic operator of genetic algorithms. We have used a test set that includes functions with linear and non-linear constraints. The results confirm the importance of the crossover operator, as great differences are observed in the performance of the studied operators. The crossover based on confidence intervals shows the most robust behavior.
1 Introduction
There is no general method for determining the global optimum of an arbitrary nonlinear programming (NLP) problem. This kind of problem only has a solution when the cost function and the constraints have certain properties, and many of those properties are not common in real-world problems. Genetic algorithms (GAs) in general, and more specifically real-coded genetic algorithms (RCGAs), are robust, parallel, stochastic search algorithms widely used in the optimization of functions without constraints in those cases where classical methods fail to find an optimum. Using GAs in the optimization of functions with constraints requires mechanisms to incorporate the constraints into the evolutionary process. A major group of methods keeps all the individuals in the feasible region, and does not allow the existence of individuals that violate any constraint. The other approach is to allow the existence of individuals outside the feasible region, but to penalize them heavily. In this context the mechanisms of mutation and crossover have a major impact, as the creation of new individuals relies on them. Operators suitable for
This work has been financed in part by project TIC2001-2577 of the Spanish CICYT and by FEDER funds.
J.J. Merelo Guervós et al. (Eds.): PPSN VII, LNCS 2439, pp. 184–193, 2002. © Springer-Verlag Berlin Heidelberg 2002
a problem of optimization without constraints may not be suitable for a problem that restricts the search space. The concepts of exploration and exploitation must be reformulated for problems of optimization with constraints. A study of the behavior of the most successful crossover operators in an environment where the search space is constrained is therefore of great interest. In this work, we center our interest on the crossover operator, as it plays a central role in RCGAs. In fact, it may be considered one of the algorithm's defining features, and it is one of the components to bear in mind when improving RCGA behavior [1]. In this context, the ability of crossover operators to resolve the balance between exploration and exploitation of the search space is fundamental; this balance is associated with the intervals determined by the extremes of the domain of the genes and by the corresponding alleles of the parents chosen for crossing [1]. This study encompasses different kinds of crossovers widely used for optimization problems without constraints, and a new crossover based on confidence intervals whose features make it appropriate for coping with constraints. We have used two sets of functions for comparing the operators. The first one is formed by functions whose constraints define a convex search space, where it is easy to keep the individuals feasible without introducing additional penalty functions. The second one consists of real-world problems with non-linear constraints, where penalty functions are used. This paper is organized as follows. Section 2 formulates the problem of nonlinear programming and briefly explains the most common techniques for dealing with constraints. Section 3 is dedicated to a brief description of the tested crossovers, with special attention to the confidence interval based crossover. Section 4 explains the experimental setup and describes the test problems.
Section 5 shows the results obtained with the different operators. Finally, Section 6 states the conclusions of our work.
2 Nonlinear Programming Problem
The general nonlinear programming problem is stated as follows: find x which optimizes f(x), x = (x_1, ..., x_p), subject to:

    g_i(x) ≤ 0,  i = 1, ..., q
    h_i(x) = 0,  i = q + 1, ..., p        (1)
g_i and h_i can be linear or non-linear functions. The cost function f and the constraint functions g_i and h_i must fulfill certain properties to guarantee that a global optimum can be found with some probability. Different methods have been developed for the solution of problems of optimization with constraints; these methods fall into two broad categories: direct and indirect methods. Indirect methods try to convert the NLP problem into several linear problems, but this is not always feasible. Direct methods are based on obtaining successive solutions using classical methods modified in a
certain way to take the constraints into account, in order to avoid the generation of non-feasible solutions. Most of these methods are local search methods that depend on the existence of the derivative of the cost function. This means that they are not robust in search spaces that are discontinuous, multimodal or noisy. One of the most interesting tools for solving NLP problems is the use of real-coded genetic algorithms. For coping with constraints, RCGAs use different techniques, such as preservation of feasibility, penalty functions, searching for feasibility, and hybrids of these [2][3]. The first two are the most commonly used, and both comprise a wide variety of algorithms. We briefly explain these two philosophies.

2.1 Maintaining the Population in the Feasible Region
These methods define mutation and crossover operators that guarantee that the generated individuals fulfill the constraints; in this way the population is kept in the feasible region. One of the most used methods of this kind is GENOCOP (GEnetic algorithm for Numerical Optimization for COnstrained Problems), developed by Michalewicz [4]. This method takes advantage of the features of convex spaces and can be used for any optimization problem with linear constraints. GENOCOP consists of eliminating as many variables of the problem as there are equality constraints, simplifying the search space. The remaining inequality constraints define a convex search space in which intervals can be defined for the variables to take values. These intervals are dynamic and depend on the eliminated variables. The main disadvantages of this method are: (i) it needs an initial feasible population, and hence a method for generating it, and (ii) it can be applied only to problems with linear constraints.

2.2 Penalty Functions
This technique is one of the most used when there are non-linear constraints. It consists of introducing cost functions that penalize the individuals that lie outside the feasible region. The fitness of the j-th individual is defined as fitness_j(x) = f_j(x) ± Q_j, where Q_j represents the penalty of a non-feasible individual, or the cost of making it feasible. If the individual is in the feasible region, Q_j = 0. Most penalty methods use a set of functions f_i (1 ≤ i ≤ p) for constructing the penalty function. Function f_i measures the violation of constraint i in the following way:

    f_i(x) = max{0, g_i(x)},   if 1 ≤ i ≤ q
    f_i(x) = |h_i(x)|,         if q + 1 ≤ i ≤ p        (2)

There is a wide variety of penalty methods. In this work, as the form of the penalty function is not the object of our study, we use the most straightforward one, Q_j = Σ_{i=1}^{p} f_i(x).
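As a sketch (illustrative names, minimization assumed; this is not the paper's code), the penalized fitness above can be computed as:

```python
def violation(g_list, h_list, x):
    """Sum of constraint violations Q_j: max(0, g_i(x)) for inequalities
    g_i(x) <= 0, plus |h_i(x)| for equalities h_i(x) = 0."""
    q = sum(max(0.0, g(x)) for g in g_list)
    q += sum(abs(h(x)) for h in h_list)
    return q

def fitness(f, g_list, h_list, x):
    # Minimization: a non-feasible individual is penalized by Q_j >= 0.
    return f(x) + violation(g_list, h_list, x)

# Toy usage: minimize f(x) = x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f = lambda x: x * x
g = [lambda x: 1.0 - x]
print(fitness(f, g, [], 0.5))   # infeasible: 0.25 + 0.5 = 0.75
print(fitness(f, g, [], 2.0))   # feasible:   4.0 + 0.0 = 4.0
```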
3 Crossover Operators
GAs are general-purpose search methods whose operators establish a balance between exploitation and exploration of the search space. The crossover operator plays the major role: it combines the features of two or more parents to generate one or more offspring. The underlying idea is that the exchange of information among good parents will produce better offspring. Most crossover operators generate individuals within the limits of the parents. In this way, the crossover operator implements a depth search, or exploitation, leaving the wide search, or exploration, in the hands of the mutation operator. This policy, although intuitively natural, makes the population converge to inner points of the search space, producing a fast loss of diversity in the population that usually ends in premature convergence to a suboptimal solution. Recent studies on the application of BLX-α [5] and fuzzy connectives based crossovers [6] in the optimization of functions without constraints have confirmed the good performance of crossover operators that generate individuals in both the exploitation and exploration regions. The exploration introduced by these operators is restricted to the neighborhood of the parents, so it is not an alternative to the wide exploration carried out by the mutation operator. If the operator establishes a good balance between exploration (or extrapolation) and exploitation (or interpolation), it is possible to avoid the loss of diversity and premature convergence. However, in optimization with constraints it is not clear whether the use of crossover operators with an exploration component is an advantage, as it could produce too many non-feasible individuals. That is why we consider it interesting to carry out a study of the influence of crossovers with and without an exploration component in the optimization of functions with constraints.
Let β^{f1} = (β_1^{f1}, β_2^{f1}, ..., β_p^{f1}) and β^{f2} = (β_1^{f2}, β_2^{f2}, ..., β_p^{f2}) be two parents with p genes. We consider in our study the crossover operators described in the following paragraphs.

3.1 Discrete Crossover
An offspring β^s = (β_1^s, β_2^s, ..., β_p^s) is obtained, where β_i^s is chosen randomly from the set {β_i^{f1}, β_i^{f2}} [7]. It is an exclusively exploitative operator.

3.2 Arithmetic Crossover
Two offspring β^{s1} = (β_1^{s1}, ..., β_p^{s1}) and β^{s2} = (β_1^{s2}, ..., β_p^{s2}) are created, where β_i^{s1} = λβ_i^{f1} + (1 − λ)β_i^{f2} and β_i^{s2} = λβ_i^{f2} + (1 − λ)β_i^{f1}, with λ a constant [4]. This crossover tends to generate solutions near the center of the search space. In our experiments, following the bibliography, we have set λ = 0.5.
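The two operators above can be sketched as follows (illustrative Python, not the authors' implementation):

```python
import random

def discrete_crossover(p1, p2):
    """Each offspring gene is copied at random from one of the two
    parents: a purely exploitative operator."""
    return [random.choice(pair) for pair in zip(p1, p2)]

def arithmetic_crossover(p1, p2, lam=0.5):
    """Two offspring built as convex combinations of the parents."""
    s1 = [lam * a + (1 - lam) * b for a, b in zip(p1, p2)]
    s2 = [lam * b + (1 - lam) * a for a, b in zip(p1, p2)]
    return s1, s2

s1, s2 = arithmetic_crossover([0.0, 2.0], [4.0, 6.0])
print(s1)  # [2.0, 4.0] -- with lam = 0.5 both offspring sit at the midpoint
```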
3.3 BLX-α Crossover

An offspring β^s = (β_1^s, β_2^s, ..., β_p^s) is generated, where β_i^s is chosen randomly in the interval [c_min − I·α, c_max + I·α], with c_max = max(β_i^{f1}, β_i^{f2}), c_min = min(β_i^{f1}, β_i^{f2}) and I = c_max − c_min [5]. For α = 0.5, the probability that the genes of the offspring take values inside the interval spanned by the parents' values equals the probability of taking values outside it. In [1] different values of α are tested, the best value being α = 0.5.
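A minimal sketch of BLX-α, assuming genes are plain floats:

```python
import random

def blx_alpha(p1, p2, alpha=0.5):
    """BLX-alpha: each gene is sampled uniformly from the extended
    interval [c_min - I*alpha, c_max + I*alpha], I = c_max - c_min."""
    child = []
    for a, b in zip(p1, p2):
        c_min, c_max = min(a, b), max(a, b)
        i = c_max - c_min
        child.append(random.uniform(c_min - i * alpha, c_max + i * alpha))
    return child
```

With alpha = 0.5 the sampling interval is twice as wide as the parents' interval, so exploitation and exploration are equally likely.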
3.4 Logic Crossover

Four monotone non-decreasing functions are defined: F, S, M and L, from [a_i, b_i] × [a_i, b_i] to [a_i, b_i], where a_i, b_i ∈ ℝ are the bounds of the gene's values. To obtain F, S, M and L, the fuzzy connectives t-norm, t-conorm, averaging functions and a generalized compensation operator Ĉ are used, respectively [8]. Of these four families of fuzzy connectives, the best results are obtained using the logic family [9]. If we consider Q ∈ {F, S, M, L}, we can generate β^s = (β_1^s, β_2^s, ..., β_p^s) where β_i^s = Q(β_i^{f1}, β_i^{f2}), i = 1, 2, ..., p. The M function is clearly exploitative, while F and S are more explorative; L is mildly exploitative. Of the four offspring generated, the best two replace their parents.
3.5 Extended Fuzzy Crossover

This operator [10] is an extension of the fuzzy recombination operator [11]. In this operator, the probability that a gene β_i^s of an offspring takes a value z_i is given by a distribution p(z_i) ∈ {φ_{β_i^{f1}}, φ_{μ_i}, φ_{β_i^{f2}}}, where φ_{β_i^{f1}}, φ_{μ_i} and φ_{β_i^{f2}} are triangular probability distributions. Three offspring are generated, each one using one of the probability distributions, and the two best are chosen. The probability of generating genes within the exploitative interval [β_i^{f1}, β_i^{f2}] is higher than the probability of generating genes in the explorative intervals [a_i, β_i^{f1}] and [β_i^{f2}, b_i].
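A rough sketch of the idea, with the simplifying assumption that each triangular distribution spans the whole gene domain [a_i, b_i] rather than the exact supports of [10,11]:

```python
import random

def extended_fuzzy_offspring(p1, p2, a, b):
    """One offspring per triangular distribution: modes at parent 1,
    at the parents' midpoint, and at parent 2.  Supports here simply
    span the gene domain [a_i, b_i] -- a simplification of the exact
    operator.  The caller keeps the two best offspring by fitness."""
    modes = [p1, [(x + y) / 2 for x, y in zip(p1, p2)], p2]
    offspring = []
    for mode in modes:
        child = [random.triangular(lo, hi, m)
                 for lo, hi, m in zip(a, b, mode)]
        offspring.append(child)
    return offspring
```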
3.6 Crossover Based on Confidence Intervals

Let β be the set of N individuals that form the population and let β* ⊂ β be the set of the n best ones, according to their fitness value. If we assume that each gene of the chromosomes of β* is normally distributed, we can define three individuals, formed respectively by the lower bounds (CILL), the upper bounds (CIUL) and the means (CIM) of the confidence intervals of each gene:

    CILL_i = β̄_i − t_{n−1,α/2} · S_{β_i}/√n;   CIUL_i = β̄_i + t_{n−1,α/2} · S_{β_i}/√n;   CIM_i = β̄_i

being β̄_i the mean of the i-th gene, S_{β_i} its standard deviation over the individuals of β*, and t_{n−1,α/2} a value obtained from a Student's t
distribution with n − 1 degrees of freedom, and α the probability of a gene not belonging to its confidence interval. The individuals CILL and CIUL divide each gene's domain D_i into three subintervals I^L, I^{CI} and I^R (see Figure 1a), such that D_i ≡ I^L ∪ I^{CI} ∪ I^R, with

    I^L ≡ [a_i, CILL_i);   I^{CI} ≡ [CILL_i, CIUL_i];   I^R ≡ (CIUL_i, b_i]

The interval I^{CI} is a confidence interval built from the best n individuals of the population. The probability of a gene belonging to the confidence interval (the exploitation interval) is 1 − α. Thus the two parameters, α and n, set the balance between exploration and exploitation adequate for each kind of problem. In a previous work [12] we found optimum values for these parameters: (1 − α) = 0.7 and n = 5. This crossover operator creates, from an individual of the population β^f = (β_1^f, ..., β_p^f) ∈ β and the individuals CILL, CIUL and CIM, a single offspring β^s in the following way:

– β_i^f ∈ I^L: if the fitness of β^f is higher than that of CILL, then β_i^s = r(β_i^f − CILL_i) + β_i^f; else β_i^s = r(CILL_i − β_i^f) + CILL_i.
– β_i^f ∈ I^{CI}: if the fitness of β^f is higher than that of CIM, then β_i^s = r(β_i^f − CIM_i) + β_i^f; else β_i^s = r(CIM_i − β_i^f) + CIM_i.
– β_i^f ∈ I^R: if the fitness of β^f is higher than that of CIUL, then β_i^s = r(β_i^f − CIUL_i) + β_i^f; else β_i^s = r(CIUL_i − β_i^f) + CIUL_i.

where r is a random number belonging to [0, 1]. From this definition it is clear that the genes of the offspring always take values between the fitter parent and one of CILL, CIUL or CIM. If β^f is far from the other parent, the offspring will probably undergo an important change, and vice-versa. The first circumstance appears mainly in the early stages of evolution, and the second one in the last stages.
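The operator can be sketched as follows. The t quantile and the fitness comparison against CILL/CIM/CIUL are abstracted into parameters (`t_val`, `parent_is_fitter`), so this is only an illustration of the interval logic, not the authors' implementation:

```python
import math
import random

def ci_crossover(parent, best, parent_is_fitter, t_val=1.19):
    """Confidence interval based crossover (sketch).  `best` holds the
    n best individuals of the population (one gene list each).  t_val
    stands for t_{n-1, alpha/2}; 1.19 is only an illustrative value for
    n = 5 and 1 - alpha = 0.7 -- take the exact quantile from tables or
    scipy.stats.t.ppf.  `parent_is_fitter` abstracts the fitness
    comparison between the parent and the virtual individual."""
    n = len(best)
    child = []
    for i, g in enumerate(parent):
        col = [ind[i] for ind in best]
        mean = sum(col) / n
        sd = math.sqrt(sum((v - mean) ** 2 for v in col) / (n - 1))
        cill = mean - t_val * sd / math.sqrt(n)
        ciul = mean + t_val * sd / math.sqrt(n)
        # pick the virtual mate according to the gene's subinterval
        if g < cill:
            mate = cill          # gene in I^L
        elif g > ciul:
            mate = ciul          # gene in I^R
        else:
            mate = mean          # gene in I^CI, mate is CIM
        r = random.random()
        if parent_is_fitter:
            child.append(r * (g - mate) + g)
        else:
            child.append(r * (mate - g) + mate)
    return child
```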
4 Experimental Setup

4.1 Problems with Linear Constraints
We have chosen three functions, f1, f2 and f3 [4], that cover the three possible situations: constraints that are only inequalities (f1), constraints that are only equalities (f2), and a combination of both (f3). These functions and their optimum values are shown in Table 1. For the optimization of these functions we use the GENOCOP method. The mutation and crossover operators must be modified in order to generate only feasible individuals.

4.2 Problems with Non-linear Constraints
We have chosen two problems both for their complexity and their interest in the field of experimental sciences. These problems are the distribution of electrons in a sphere and the shape optimization of a cam [13].
Table 1. Functions with linear constraints.
f1(x, y) = −10.5x_1 − 7.5x_2 − 3.5x_3 − 2.5x_4 − 1.5x_5 − 10y − 0.5 Σ_{i=1}^{5} x_i^2
subject to:
    10x_1 + 10x_3 + y ≤ 20
    6x_1 + 3x_2 + 3x_3 + 2x_4 + x_5 ≤ 6.5
    0 ≤ x_i ≤ 1 (i = 1, ..., 5);  0 ≤ y
Optimum: f1(x*, y*) = −213

f2(x) = Σ_{i=1}^{10} x_i ( c_i + ln( x_i / Σ_{j=1}^{10} x_j ) )
subject to:
    x_1 + 2x_2 + 2x_3 + x_6 + x_10 = 2
    x_4 + 2x_5 + x_6 + x_7 = 1
    x_3 + x_7 + x_8 + 2x_9 + x_10 = 1
    x_i ≥ 0.000001 (i = 1, ..., 10)
    c_1 = −6.089, c_2 = −17.164, c_3 = −34.054, c_4 = −5.914, c_5 = −24.721,
    c_6 = −14.986, c_7 = −24.100, c_8 = −10.708, c_9 = −26.662, c_10 = −22.179
Optimum: f2(x*) = −47.760765

f3(x) = x_1^{0.6} + x_2^{0.6} − 6x_1 − 4x_3 + 3x_4
subject to:
    −3x_1 + x_2 − 3x_3 = 0
    x_1 + 2x_3 ≤ 4;  x_2 + 2x_4 ≤ 4
    x_1 ≤ 3;  x_4 ≤ 1;  0 ≤ x_i (i = 1, 2, 3, 4)
Optimum: f3(x*) = −4.5142
Distribution of Electrons on a Sphere. This problem, known as the Thomson problem, consists of finding the lowest-energy configuration of p point charges on a conducting sphere; it originated with Thomson's plum pudding model of the atom. It is representative of an important class of problems in physics and chemistry that determine a structure with respect to atomic positions. The potential energy for p points (x_i, y_i, z_i) is defined as

    f(x, y, z) = Σ_{i=1}^{p−1} Σ_{j=i+1}^{p} ( (x_i − x_j)^2 + (y_i − y_j)^2 + (z_i − z_j)^2 )^{−1/2}
    subject to:  x_i^2 + y_i^2 + z_i^2 = 1,  i = 1, ..., p        (3)
    (−1, −1, −1) ≤ (x_i, y_i, z_i) ≤ (1, 1, 1)

This problem has many local minima, whose number increases exponentially with p. For our experiments p = 25 [13].

Shape Optimization of a Cam. The problem consists of maximizing the arc of the valve opening for one rotation of a convex cam, with constraints on the curvature and on the radius of the cam. The function to optimize is

    f(r) = (π r_v^2 / p) Σ_{i=1}^{p} r_i
    subject to:  2 r_{i−1} r_{i+1} cos(2π/(5(p+1))) ≤ r_i (r_{i−1} + r_{i+1}),  i = 0, ..., p + 1        (4)
    r_{−1} = r_0 = r_min,  r_{p+1} = r_max,  r_{p+2} = r_p
    −α ≤ (r_{i+1} − r_i) / (2π/(5(p+1))) ≤ α

with r_min = 1.0, r_max = 2.0, r_v = 1.0 and α = 1.5 [13]. We assume that the shape of the cam is circular over an angle of 6π/5 of its circumference, with radius r_min. The design variables r_i, i = 1, ..., p, represent the radius of the cam at equally spaced angles distributed over an angle of 2π/5.
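The Coulomb energy of Eq. (3) is straightforward to evaluate; a sketch:

```python
import math

def thomson_energy(points):
    """Potential energy of p unit point charges on the sphere (Eq. 3):
    sum over all pairs of the inverse Euclidean distance."""
    p = len(points)
    e = 0.0
    for i in range(p - 1):
        for j in range(i + 1, p):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            dz = points[i][2] - points[j][2]
            e += 1.0 / math.sqrt(dx * dx + dy * dy + dz * dz)
    return e

# Two antipodal charges: distance 2, so the energy is 1/2.
print(thomson_energy([(0, 0, 1), (0, 0, -1)]))  # 0.5
```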
Table 2. Results for the test problems. AF: averaged fitness of the best individual over the 10 experiments; SDF: standard deviation of AF; AG: averaged number of generations; SDG: standard deviation of AG; BF: best fitness; BG: generation when the best individual was achieved.

NLP with linear constraints

                 f1
Crossover     AF       SDF   AG     SDG    BF       BG
Conf. Int.   -210.82   2.17  15.8    0.63  -213.00   18
BLX-α        -209.13   4.13  60.4   46.18  -212.94  130
Discrete     -204.05   2.35  54.6   21.97  -207.95   76
Arithmetic   -185.01   8.73  32.8   14.80  -203.34   20
Ext. Fuz.    -198.01   6.04  48.6   12.31  -205.58   54
Logical      -183.40   8.59  19      2.56  -198.08   20

                 f2
Crossover     AF      SDF   AG     SDG    BF      BG
Conf. Int.   -47.66   0.00  338    48.51  -47.66  390
BLX-α        -47.65   0.01  321.6  22.91  -47.66  334
Discrete     -47.65   0.02  378    16.22  -47.66  382
Arithmetic   -47.43   0.14  271.4  28.77  -47.27  226
Ext. Fuz.    -47.65   0.01  316.4  12.18  -47.66  318
Logical      -47.60   0.06  235.8  99.07  -47.66  292

                 f3
Crossover     AF     SDF   AG    SDG   BF     BG
Conf. Int.   -4.48   0.01   5.8  0.63  -4.51   8
BLX-α        -4.50   0.02  23.2  4.23  -4.51  12
Discrete     -4.47   0.01  24.6  2.36  -4.50  28
Arithmetic   -4.42   0.04  24.2  1.85  -4.49  24
Ext. Fuz.    -4.49   0.02  17.4  7.26  -4.51  12
Logical      -4.48   0.02  25    2.56  -4.50  28

NLP with non-linear constraints

                 Sphere                       Cam
Crossover     AF      SDF   BF      BG     AF    SDF   BF    BG
Conf. Int.   281.69  2.38  279.21  2990   3.97  0.25  4.35  5000
BLX-α        280.50  0.82  279.34  3000   2.94  0.31  3.35  5000
Discrete     310.49  4.30  303.32  2980   3.96  0.21  4.40  5000
Arithmetic   328.73  5.69  322.06  2930   4.67  0.02  4.69  5000
Ext. Fuz.    288.26  1.36  285.72  3000   3.47  0.99  4.28  5000
Logical      321.56  4.85  314.02  2770   3.73  0.13  3.86  4990
4.3 Setup of the RCGA
We use a population of 100 individuals, a crossover probability of pc = 0.6, a mutation probability of pm = 0.005, and a tournament selection method with elitism. As mutation operator we use uniform mutation, as its versatility makes it suitable for a wide range of problems. For every problem we performed ten experiments using the JCLEC evolutionary computation library. For the problems with linear constraints, the stop criterion establishes a threshold of minimum improvement in the averaged fitness over a predefined number of generations: for f1 a threshold of 0.005 over 5 generations, for f2 0.001 over 15 generations, and for f3 0.05 over 15 generations. These threshold values reflect the different complexities of the problems. For the problems with non-linear constraints the stop criterion is a fixed number of generations: 2000 for the electron distribution problem and 5000 for the shape optimization of a cam.
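One plausible reading of this stop criterion (the paper does not give the exact rule, so the sketch below is an assumption):

```python
def improvement_stop(history, threshold, window):
    """Stop when the averaged fitness (minimization) has improved by
    less than `threshold` over the last `window` generations.
    `history` is the per-generation averaged fitness, newest last.
    This is an illustrative interpretation, not the paper's code."""
    if len(history) <= window:
        return False
    return history[-window - 1] - history[-1] < threshold

# f1 setting: threshold 0.005 over 5 generations.
hist = [-200.0, -210.0, -212.0, -212.9, -212.99, -212.995, -212.999]
print(improvement_stop(hist, 0.005, 5))  # False: still improving fast enough
```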
5 Results
Table 2 shows the results obtained for the test problems. For function f1, the crossover based on confidence intervals achieves the best results and converges faster than the other crossovers. For f2, all crossovers achieve comparable results, with the crossover based on confidence intervals obtaining a slightly better AF. For f3 the results are also very similar: BLX-α achieves the best AF value, but the crossover based on confidence intervals converges faster; the BLX-α, extended fuzzy and confidence interval based crossovers reach the optimum value in one of their experiments. For the electron distribution problem, BLX-α and the confidence interval based crossover achieve similar results, the former with a slightly better AF and the latter with a slightly better BG. For the shape optimization of a cam, BLX-α obtains the worst results and the arithmetic crossover the best, followed by the crossover based on confidence intervals.
Fig. 1. (a) Graphic representation of the crossover based on confidence intervals; (b) Average fitness of the best individuals in 10 runs for f1; (c) Idem for f2; (d) Idem for f3; (e) Idem for the electron distribution problem; (f) Idem for the shape optimization of a cam.
Figures 1b, 1c and 1d show the convergence of the crossovers for the f1, f2 and f3 functions. The figures clearly show that the confidence interval based crossover converges faster than the other crossovers, especially for the f2 and f3 functions. Figures 1e and 1f show the same effect for the two problems with non-linear constraints. It is interesting to note that, of the analyzed crossovers, the one based on confidence intervals is the most robust, although it can be outperformed in some problems.
6 Conclusions
In this work we have shown the influence of the crossover operator on problems of optimization with linear and non-linear constraints. We have proved
that the crossover based on confidence intervals is the most robust. This result shows that the dynamic balance between exploration and exploitation of this operator is well suited to this kind of problem. The BLX-α crossover, whose performance in optimization problems without constraints is very good, fails in problems with non-linear constraints. This behavior hints at its possible vulnerability in such environments. Our future research centers on deepening the study of crossover operators, adding new test problems and considering a larger number of crossovers. The relation between the mutation and crossover operators must also be studied further, as the two are closely related.
References

1. Herrera, F., Lozano, M., Verdegay, J.L.: Tackling real-coded genetic algorithms: operators and tools for behavioural analysis. Artificial Intelligence Review (1998) 265–319. Kluwer Academic Publishers, printed in the Netherlands
2. Michalewicz, Z., Schoenauer, M.: Evolutionary algorithms for constrained parameter optimization problems. Evolutionary Computation 4 (1996) 1–32
3. Koziel, S., Michalewicz, Z.: Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization. Evolutionary Computation 7 (1999) 19–44
4. Michalewicz, Z.: Genetic Algorithms + Data Structures = Evolution Programs. Springer-Verlag, New York (1992)
5. Eshelman, L.J., Schaffer, J.D.: Real-coded genetic algorithms and interval-schemata. In Whitley, L.D., ed.: Foundations of Genetic Algorithms 2, San Mateo, Morgan Kaufmann (1993) 187–202
6. Herrera, F., Herrera-Viedma, E., Lozano, M., Verdegay, J.L.: Fuzzy tools to improve genetic algorithms. In: Second European Congress on Intelligent Techniques and Soft Computing (1994) 1532–1539
7. Mühlenbein, H., Schlierkamp-Voosen, D.: Predictive models for the breeder genetic algorithm I: continuous parameter optimization. Evolutionary Computation (1993) 25–49
8. Mizumoto, M.: Pictorial representations of fuzzy connectives. Part I: cases of t-norms, t-conorms and averaging operators. Fuzzy Sets and Systems 31 (1989) 217–242
9. Herrera, F., Lozano, M., Verdegay, J.L.: Fuzzy connectives based crossover operators to model genetic algorithms population diversity. Fuzzy Sets and Systems 92 (1997) 21–30
10. Herrera, F., Lozano, M.: Gradual distributed real-coded genetic algorithms. IEEE Transactions on Evolutionary Computation 4 (2000) 43–63
11. Voigt, H.M., Mühlenbein, H., Cvetković, D.: Fuzzy recombination for the breeder genetic algorithm. In Eshelman, L., ed.: Proc. 6th International Conference on Genetic Algorithms, San Mateo, CA, Morgan Kaufmann (1995) 104–111
12. Hervás, C., Ortiz, D.: Operadores de cruce basados en estadísticos de localización para algoritmos genéticos con codificación real [Crossover operators based on location statistics for real-coded genetic algorithms]. In Alba, E., Fernández, F., Gómez, J.A., Herrera, F., Hidalgo, J.I., Lanchares, J., Merelo, J.J., Sánchez, J.M., eds.: Primer Congreso Español de Algoritmos Evolutivos y Bioinspirados (AEB'02), Mérida, Spain (2002) 1–8
13. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with COPS. Technical Report ANL/MCS-TM-246, Argonne National Laboratory (2000)