Comparison between Ant Colony and Genetic Algorithms for Fuzzy System Optimization

Cristina Martinez¹, Oscar Castillo¹, and Oscar Montiel²

¹ Tijuana Institute of Technology, Tijuana, Mexico
² CITEDI-IPN, Tijuana, Mexico

Abstract. In this paper we present results obtained with different evolutionary methods applied to a Mamdani Fuzzy Inference System (FIS). We work with Hierarchical Genetic Algorithms (HGA) and Ant Colony Optimization (ACO); the fuzzy inference system controls a benchmark problem, the "Ball and Beam" system, and the evolutionary methods optimize the fuzzy rules of the system. First, we optimize a FIS structured with two inputs (the error and the derivative of the error), one output (the angle of the beam, which determines the ball position on it), and 44 fuzzy rules to be reduced by the evolutionary methods (HGA, ACO). We then compare the methods via average error and standard deviation, concluding with the best evolutionary method for this fuzzy control optimization problem.

1 Introduction

A large number of papers have been published on combining fuzzy logic (FL) with genetic algorithms (GAs) [24, 25, 26] and with ant colony optimization (ACO) [23, 26]. Fuzzy logic is a useful tool for modeling complex systems and deriving useful fuzzy relations or rules. However, it is often difficult for human experts to define the fuzzy sets and fuzzy rules used by these systems. GAs have proven to be a useful method for optimizing the membership functions and fuzzy rules of such systems, while ACO, inspired by the foraging behavior of ant colonies, is well suited to combinatorial optimization problems. Most papers on the optimization of fuzzy systems consider only the membership functions; for this reason, we decided to optimize the fuzzy rules of a fuzzy system using GAs and ACO. Because it is difficult for human experts to define a fuzzy system, we created a FIS with two inputs and one output, using triangular membership functions for all the fuzzy sets, and 44 fuzzy rules (we started with 25 rules, then rebuilt the initial rule set to extend the search space so the evolutionary algorithms would have more experimental room).

O. Castillo et al. (Eds.): Soft Computing for Hybrid Intel. Systems, SCI 154, pp. 71–86, 2008. © Springer-Verlag Berlin Heidelberg 2008, springerlink.com

2 Genetic Algorithms

Our lives are essentially dominated by genes. They govern our physical features, our behavior, our personalities, our health, and indeed our longevity. The recent deeper understanding of genetics has proven to be a vital tool for genetic engineering applications in many disciplines, in addition to medicine and agriculture. It is well known that genes can be manipulated, controlled, and even turned on and off in order to achieve desirable amino acid sequences of a polypeptide chain. This significant discovery has led to the use of genetic algorithms (GAs) in computational engineering. The GA has proven to be a unique approach for solving mathematically intractable problems that gradient-type optimizers have failed to solve. The basic principles of the GA were first proposed by Holland [2]. Thereafter, a series of books [3, 4, 5] and reports [6, 7, 8, 9, 10, 11] became available. The GA is inspired by the mechanism of natural selection, where stronger individuals are likely to be the winners in a competitive environment; it uses a direct analogy of natural evolution. Through the genetic evolution process, an optimal solution can be found, represented by the final winner of the genetic game. The GA presumes that the potential solution to any problem is an individual that can be represented by a set of parameters. These parameters are regarded as the genes of a chromosome and can be structured as a string of values in binary form. A positive value, generally known as a fitness value, reflects the degree of "goodness" of the chromosome for the problem and is closely related to its objective value. Throughout a genetic evolution, fitter chromosomes tend to yield good-quality offspring, i.e., better solutions. Figure 1 shows the GA cycle.

Fig. 1. A genetic algorithm cycle
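The GA cycle of Fig. 1 can be sketched as follows. This is a minimal generational GA over binary chromosomes, not the authors' implementation; the tournament selection, one-point crossover, and parameter defaults are illustrative assumptions (the defaults echo values tested later in the paper).

```python
import random

def run_ga(fitness, n_bits=20, pop_size=30, generations=50,
           crossover_rate=0.8, mutation_rate=0.01):
    """Minimal generational GA over binary chromosomes."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def select():  # tournament of size 2: fitter parents reproduce
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            if random.random() < crossover_rate:  # one-point crossover
                cut = random.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):  # bit-flip mutation
                children.append([bit ^ 1 if random.random() < mutation_rate
                                 else bit for bit in c])
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)  # track best-so-far
    return best

# Toy fitness: maximize the number of 1-bits ("onemax").
best = run_ga(sum)
```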

The concept of applying a GA to solve engineering problems is feasible and sound. However, despite the distinct advantages of GAs for solving complicated, constrained, and multi-objective functions where other techniques may fail, their full power in engineering applications is yet to be exploited and explored. Hierarchical Genetic Algorithms (HGA) are known for the tree structure they generate, called a "dendrogram", in which every level is a set of possible solutions of the collection [7]. The root of the tree (first level) is a single set containing all the elements. Each leaf at the last level of the tree is a set composed of one element (there are as many leaves as objects in the collection). At the intermediate levels, every node at level "n" is divided into offspring nodes at level "n+1". This type of algorithm combines the notion of survival of the fittest with the random, structured exchange of characteristics between individuals of a population of candidate solutions, yielding a search algorithm applicable to diverse optimization fields. Figure 2 shows an example of an HGA with a three-level chromosome.

[Figure: a 3-level chromosome with level-2 control genes (1 0), level-1 control genes (1 0 1 1 1 0), and parametric genes (4 7 3 9 6 2)]

Fig. 2. An example of 3-level chromosome
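A minimal sketch of decoding the 3-level chromosome of Fig. 2: a parametric gene is expressed only when its level-1 control gene and the level-2 control gene governing its group are both active. The grouping (each level-2 gene gating three level-1 genes) is our assumption, not stated in the figure.

```python
def decode_hga(level2, level1, params, group=3):
    """Return the parametric genes expressed by a 3-level chromosome.
    A parameter is active iff its level-1 gene is 1 AND the level-2
    gene of its group is 1 (grouping of 3 per level-2 gene assumed)."""
    active = []
    for i, p in enumerate(params):
        if level2[i // group] and level1[i]:
            active.append(p)
    return active

# Values taken from the chromosome shown in Fig. 2.
expressed = decode_hga([1, 0], [1, 0, 1, 1, 1, 0], [4, 7, 3, 9, 6, 2])
print(expressed)  # → [4, 3]
```

Here the second level-2 gene is 0, so the whole second group of parametric genes is switched off regardless of its level-1 genes.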

3 Ant Colony Optimization

Ant Colony Optimization (ACO) is part of a larger field of research termed swarm intelligence, which deals with algorithmic approaches based on the social behavior of animals. Swarm intelligence is a relatively new approach to problem solving that takes inspiration from the social behaviors of insects and other animals. In particular, ants have inspired a number of methods and techniques, among which the most studied and most successful is the general-purpose optimization technique known as ant colony optimization [12]–[16]. ACO takes inspiration from the foraging behavior of some ant species. These ants deposit pheromone on the ground to mark favorable paths that should be followed by other members of the colony. ACO exploits a similar mechanism for solving optimization problems. In ACO, the discrete optimization problem under consideration is mapped onto a graph, called a construction graph, in such a way that feasible solutions to the original problem correspond to paths on the construction graph. Artificial ants can then generate feasible solutions by moving on this graph. In practice, colonies of artificial ants search for good solutions over several iterations. Every (artificial) ant of a given iteration builds a solution incrementally by taking several probabilistic decisions. The artificial ants that find a good solution mark their paths on the construction graph by depositing some amount of pheromone on the edges of the path they followed. The ants of the next iteration are attracted by the pheromones, i.e., their decision probabilities are biased by them; in this way, they have a higher probability of building paths similar to paths corresponding to good solutions.

ACO has been applied successfully to a large number of difficult combinatorial optimization problems, including traveling salesman problems, quadratic assignment problems, and scheduling problems, as well as to dynamic routing problems in telecommunication networks. Unfortunately, ACO algorithms are difficult to analyze theoretically, the main reason being that they are based on sequences of random decisions (taken by a colony of artificial ants) that are usually not independent and whose probability distribution changes from iteration to iteration. Accordingly, most of the ongoing research in ACO is of an experimental nature, as reflected in the content of most papers published in the literature. Deneubourg et al. [17] thoroughly investigated the pheromone laying and following behavior of ants; the model they proposed to explain the foraging behavior of ants was the main source of inspiration for the development of ant colony optimization. In ACO, a number of artificial ants build solutions to the optimization problem at hand and exchange information on the quality of these solutions via a communication scheme reminiscent of the one adopted by real ants. Different ant colony optimization algorithms have been proposed. The original one, known as Ant System [18]–[20], was proposed in the early nineties; since then, a number of other ACO algorithms have been introduced, all sharing the same basic idea. ACO has been formalized into a metaheuristic for combinatorial optimization problems by Dorigo and co-workers [21], [22]. A metaheuristic is a set of algorithmic concepts that can be used to define heuristic methods applicable to a wide set of different problems; in other words, it is a general-purpose algorithmic framework that can be applied to different optimization problems with relatively few modifications. In order to apply ACO to a given combinatorial optimization problem, an adequate model is needed. A model P = (S, Ω, f) of a combinatorial optimization problem consists of:

• A search space S defined over a finite set of discrete decision variables Xᵢ, i = 1, …, n;
• A set Ω of constraints among the variables; and
• An objective function f: S → ℝ⁺₀ to be minimized.

The generic variable Xᵢ takes values in Dᵢ = {vᵢ¹, …, vᵢ^|Dᵢ|}. A feasible solution s ∈ S is a complete assignment of values to variables that satisfies all constraints in Ω. A solution s* ∈ S is called a global optimum if and only if f(s*) ≤ f(s) ∀s ∈ S. The model of a combinatorial optimization problem is used to define the pheromone model of ACO: a pheromone value is associated with each possible solution component, that is, with each possible assignment of a value to a variable. Formally, the pheromone value τᵢⱼ is associated with the solution component cᵢⱼ, which consists of the assignment Xᵢ = vᵢʲ; the set of all possible solution components is denoted by C. In ACO, an artificial ant builds a solution by traversing the fully connected construction graph G_C = (V, E), where V is a set of vertices and E is a set of edges. This graph can be obtained from the set of solution components C in two ways: components may be represented either by vertices or by edges. Artificial ants move from vertex to vertex along the edges of the graph, incrementally building a partial solution. Additionally, ants deposit a certain amount Δτ of pheromone on the components, that is, either on the vertices or on the edges that they traverse; the amount deposited may depend on the quality of the solution found. Subsequent ants use the pheromone information as a guide toward promising regions of the search space.

The Ant Colony Optimization algorithm:

    Set parameters, initialize pheromone trails
    while termination condition not met do
        ConstructAntSolutions
        ApplyLocalSearch   (optional)
        UpdatePheromones
    end while
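The metaheuristic loop above can be sketched for a problem with binary decision variables (the kind used later in this paper, where each variable activates or deactivates a rule). This is an illustrative skeleton, not the authors' improved algorithm: the pheromone table τ[i][v], the update rule, and the parameters are our assumptions, with values echoing those reported in the experiments (10 ants, 10 epochs, 0.2 initial pheromone, 5% evaporation).

```python
import random

def aco_binary(n_vars, quality, n_ants=10, epochs=10,
               tau0=0.2, evaporation=0.05):
    """Sketch of the ACO metaheuristic over binary decision variables.
    tau[i][v] is the pheromone on assigning value v to variable Xi."""
    tau = [[tau0, tau0] for _ in range(n_vars)]
    best, best_q = None, float("-inf")
    for _ in range(epochs):
        # ConstructAntSolutions: each ant chooses values with
        # probabilities biased by the pheromone on each assignment.
        solutions = []
        for _ in range(n_ants):
            s = [0 if random.random() < t0 / (t0 + t1) else 1
                 for t0, t1 in tau]
            solutions.append(s)
        # UpdatePheromones: evaporate, then reinforce the iteration best.
        it_best = max(solutions, key=quality)
        for i in range(n_vars):
            tau[i][0] *= (1 - evaporation)
            tau[i][1] *= (1 - evaporation)
            tau[i][it_best[i]] += quality(it_best)
        if quality(it_best) > best_q:
            best, best_q = it_best, quality(it_best)
    return best

# Toy objective: prefer solutions with many active components.
best = aco_binary(8, sum)
```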

4 Experimental Results

The problem we now describe is a benchmark in the control area called the "Ball and Beam" system, shown schematically in figure 3. In this system a ball is placed on a beam, where it is allowed to roll with some freedom along the beam; a lever arm and a servo gear are attached at one end of the beam. As the servo gear rotates by an angle θ, the lever arm changes its angle α. When the angle tilts toward a vertical position, gravity makes the ball roll along the beam. We now have to establish a knowledge base of fuzzy rules as the initial fuzzy controller to be optimized by the evolutionary methods. We have two inputs (the error and the change of error, Fig. 4) and one output (the angle α) to control the position of the ball on the beam. Table 1 shows the initial set of fuzzy rules. As shown in figure 4, the error and the change of error as inputs and the angle as output structure the Mamdani-type fuzzy inference system.

Fig. 3. Scheme of the Ball and Beam system

Fig. 4. Structure of the Fuzzy Controller

Table 1. Initial set of fuzzy rules (knowledge base)

No. | Indexed | Rule
1  | 1 1 1 (1): 1 | if error=NL and derror=NL then angulo=NL
2  | 1 2 1 (1): 1 | if error=NL and derror=N then angulo=NL
3  | 1 3 1 (1): 1 | if error=NL and derror=Z then angulo=NL
4  | 1 4 2 (1): 1 | if error=NL and derror=P then angulo=N
5  | 1 5 3 (1): 1 | if error=NL and derror=PL then angulo=Z
6  | 2 1 1 (1): 1 | if error=N and derror=NL then angulo=NL
7  | 2 2 1 (1): 1 | if error=N and derror=N then angulo=NL
8  | 2 3 2 (1): 1 | if error=N and derror=Z then angulo=N
9  | 2 4 3 (1): 1 | if error=N and derror=P then angulo=Z
10 | 2 5 4 (1): 1 | if error=N and derror=PL then angulo=P
11 | 3 1 1 (1): 1 | if error=Z and derror=NL then angulo=NL
12 | 3 2 2 (1): 1 | if error=Z and derror=N then angulo=N
13 | 3 3 3 (1): 1 | if error=Z and derror=Z then angulo=Z
14 | 3 4 4 (1): 1 | if error=Z and derror=P then angulo=P
15 | 3 5 5 (1): 1 | if error=Z and derror=PL then angulo=PL
16 | 4 1 2 (1): 1 | if error=P and derror=NL then angulo=N
17 | 4 2 3 (1): 1 | if error=P and derror=N then angulo=Z
18 | 4 3 4 (1): 1 | if error=P and derror=Z then angulo=P
19 | 4 4 5 (1): 1 | if error=P and derror=P then angulo=PL
20 | 4 5 5 (1): 1 | if error=P and derror=PL then angulo=PL
21 | 5 1 3 (1): 1 | if error=PL and derror=NL then angulo=Z
22 | 5 2 4 (1): 1 | if error=PL and derror=N then angulo=N
23 | 5 3 5 (1): 1 | if error=PL and derror=Z then angulo=PL
24 | 5 4 5 (1): 1 | if error=PL and derror=P then angulo=PL
25 | 5 5 5 (1): 1 | if error=PL and derror=PL then angulo=PL
26 | 1 2 2 (1): 1 | if error=NL and derror=N then angulo=NL
27 | 1 3 2 (1): 1 | if error=NL and derror=Z then angulo=N
28 | 1 4 3 (1): 1 | if error=NL and derror=P then angulo=Z
29 | 1 5 2 (1): 1 | if error=NL and derror=PL then angulo=N
30 | 1 5 4 (1): 1 | if error=NL and derror=PL then angulo=P
31 | 2 1 2 (1): 1 | if error=N and derror=NL then angulo=N
32 | 2 2 2 (1): 1 | if error=N and derror=N then angulo=N
33 | 2 5 3 (1): 1 | if error=N and derror=PL then angulo=Z
34 | 3 1 2 (1): 1 | if error=Z and derror=NL then angulo=N
35 | 3 2 3 (1): 1 | if error=Z and derror=N then angulo=Z
36 | 3 5 4 (1): 1 | if error=Z and derror=PL then angulo=P
37 | 4 1 3 (1): 1 | if error=P and derror=NL then angulo=Z
38 | 4 4 4 (1): 1 | if error=P and derror=P then angulo=P
39 | 4 5 4 (1): 1 | if error=P and derror=PL then angulo=P
40 | 5 1 2 (1): 1 | if error=PL and derror=NL then angulo=N
41 | 5 1 4 (1): 1 | if error=PL and derror=NL then angulo=P
42 | 5 2 3 (1): 1 | if error=PL and derror=N then angulo=Z
43 | 5 3 4 (1): 1 | if error=PL and derror=Z then angulo=P
44 | 5 4 4 (1): 1 | if error=PL and derror=P then angulo=P
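The indexed notation of Table 1 maps each antecedent and consequent to one of the five membership functions (1=NL, 2=N, 3=Z, 4=P, 5=PL). A small sketch of the decoding; the helper name is ours, not the authors' code:

```python
# Linguistic labels for the five membership functions of each variable.
LABELS = {1: "NL", 2: "N", 3: "Z", 4: "P", 5: "PL"}

def decode_rule(indexed):
    """Turn an indexed rule like '1 4 2 (1): 1' into its linguistic form."""
    e, de, ang = (int(x) for x in indexed.split("(")[0].split())
    return (f"if error={LABELS[e]} and derror={LABELS[de]} "
            f"then angulo={LABELS[ang]}")

print(decode_rule("1 4 2 (1): 1"))
# → if error=NL and derror=P then angulo=N
```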

4.1 Genetic Algorithm Experiments

Starting with the GA paradigm, we apply a GA to generate different FISs by combining the rules of the previously structured knowledge base; the 44 fuzzy rules create the search space for the evolutionary method. We based our implementation on the simple genetic algorithm, with some improvements to adapt it to our problem. As a fitness function we have a function that evaluates as many fuzzy systems as the population has individuals; they are created with the same inputs and outputs, but every FIS has different active rules depending on the chromosome. We use the chromosome as an array of active rules, where 1 means "ON" and 0 means "OFF"; figure 5 shows the chromosome representation in our GA.

[Figure: a chromosome of 44 activation bits, one per fuzzy rule]

Fig. 5. Representation of the GA chromosome
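The activation-bit encoding of Fig. 5 can be sketched as follows; the rule strings are placeholders and `active_rules` is a hypothetical helper, not the authors' code:

```python
def active_rules(chromosome, rules):
    """Select the fuzzy rules whose activation bit is 1."""
    return [r for bit, r in zip(chromosome, rules) if bit == 1]

rules = [f"rule {i}" for i in range(1, 45)]  # the 44-rule knowledge base
chromosome = [1, 0] * 22                     # an example 44-bit individual
fis_rules = active_rules(chromosome, rules)
print(len(fis_rules))  # → 22
```

Each individual in the population thus defines a different FIS, built from the same inputs and output but with only the rules its chromosome switches on.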

Evaluation of each FIS is based on a simulation, from which we obtain the average error between the reference and the control output given by the fuzzy controller. To average the error we use the Integral of the Absolute Error (IAE):

    IAE = (Σᵢ₌₁ⁿ |γᵢ − μᵢ|) / n    (1)

where γᵢ is the reference value, μᵢ is the control value, and n is the total number of sample points. We tested the algorithm varying the mutation and crossover rates, from very low to very high levels, as well as the number of individuals, the number of generations, and the percentage of new individuals per generation, all to test the performance of the genetic algorithm. We obtained very good controllers for the Ball and Beam system. Every individual was simulated with the plant described previously, with the reference as input; the resulting fuzzy controller takes the error and the change of error as inputs and produces the α angle as output, which in turn is the input to the Ball and Beam model, where the equations are computed and the output is fed back to the plant shown in figure 6:

Fig. 6. Simulink Ball and Beam plant
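The IAE of Eq. (1) can be computed directly; the sample signals below are illustrative (a 0.25 step reference and a controller that settles onto it), not taken from the experiments:

```python
def iae(reference, control):
    """Integral of the Absolute Error of Eq. (1): the mean absolute
    deviation between reference gamma_i and control output mu_i."""
    n = len(reference)
    return sum(abs(g - m) for g, m in zip(reference, control)) / n

ref = [0.25] * 5                     # step reference
out = [0.0, 0.20, 0.24, 0.25, 0.25]  # controller settling onto it
print(round(iae(ref, out), 3))  # → 0.062
```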

As can be seen, the plant has a "Step" reference; we used 0.25 as the reference, but we also used signal generators to change it, varying the frequency and the amplitude, as shown in the graphs later. As mentioned before, we tested the GA with variations of the genetic operators; table 2 shows the results of the GA.

Table 2. Results obtained with GA tests

The best result (the one marked) was obtained with a population of 30 individuals and 50 generations, with 30% new individuals per generation, 1% mutation, and 20% multipoint crossover; we obtained an error of 0.00012907 with a very low standard deviation. In this simulation the fuzzy controller has 27 fuzzy rules. The comparison below shows the results more clearly.

4.2 Ant Colony Optimization Experiments

The ant colony optimization (ACO) starts from the same knowledge base. Describing the method: a colony explores a search space for the best trails, which are saved in a matrix; once we have the trails (real numbers) we make some adjustments to the numbers so that we can work with them. A fitness function is also needed for the ACO algorithm, and we can use the same function as in the GA, with some necessary modifications: in the GA a chromosome was taken as the vector of active rules, whereas now we only have a real number of 6 digits, and we need 44 bits; so by multiplying the real number by 100,000,000,000 and converting the result to binary code we obtain a vector of active rules. It is important to mention that the ACO algorithm used in this paper was previously improved, with modifications that make it faster than the original one. Basically, a line is marked from the nest to the food, and the best trail is the one that forms a diagonal from the nest to the food. Describing the pseudocode of the algorithm: first we initialize some variables, such as the initial pheromone and the desired evaporation percentage, and we construct a graph (with the dimensions of the search space). Once the variables are initialized, every ant constructs its trails, and the trails are updated so that an ant cannot walk the same trail again. The best trails of the ants are the ones we convert to binary for the active rules of the fuzzy controllers; when the FIS is created we obtain the rule average, and then run the simulation to obtain the average error and compare the best fuzzy controller of every epoch of the algorithm. Some of the results obtained with the ACO algorithm tests are shown in table 3.

Table 3. Results obtained with ACO algorithm tests
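The real-to-binary conversion described above can be sketched as follows; the exact scaling and padding scheme is our reading of the paper's description (a 6-digit real scaled by 100,000,000,000 and expressed as 44 bits), not its verified code:

```python
def trail_to_activation_bits(x, n_bits=44, scale=100_000_000_000):
    """Map a real-valued ant trail to an n_bits rule-activation vector:
    scale the number, take its binary representation, pad to n_bits."""
    as_int = int(x * scale)
    bits = format(as_int, f"0{n_bits}b")[-n_bits:]  # keep exactly n_bits
    return [int(b) for b in bits]

bits = trail_to_activation_bits(0.123456)
print(len(bits))  # → 44
```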

In the best result given by ACO, the algorithm was executed with a very low number of ants (10) and 10 epochs, with 20% initial pheromone and 5% evaporation; the error reached is very low, as is the standard deviation.

4.3 Comparison between the Evolutionary Methods: GA vs. ACO

In this section we examine the differences between the methods by comparing the results obtained with the evolutionary algorithms. In the graphs we can see how the fuzzy systems evolved over the different tests performed. It is also notable that in some cases one method is better than the other, as shown by the error averages in the result tables above. Most of the samples were run with a linear reference, but we also tested with a change of reference to assess the efficiency of the fuzzy controllers; at the same time, other samples were run from the beginning with a certain change of reference so the fuzzy controller could more easily follow a linear reference without any constraint.

Fig. 7. Active rules GA sample

Fig. 8. Active rules ACO sample

Figures 7 and 8 show the active rules of the best simulations with each method; there is a difference of 6 rules, with GA having more rules than ACO. At first sight we could say ACO is better, and it is in some cases, but having more active rules can yield better control in other cases, as we will see later. Figures 9-10 show the surfaces of both cases; the differences are due to the active rules in the fuzzy controllers. Once the fuzzy controllers were created, the simulations were the next step of the experiments. The simulations in figures 11-12 represent the fuzzy controllers obtained with GA and ACO under a linear reference; both samples were tested with a linear reference of 0.25, and as we can see, the ACO controller reaches the reference perfectly while the GA controller does not.

Modifying the reference, the following simulations use a saw-tooth reference with varying amplitudes and frequencies; the best results were obtained at 50% amplitude and 10% frequency (Figures 13 and 14). In the saw-tooth case we can see that the GA obtained a better fuzzy controller; ACO is good, but in some time intervals its control is not as precise as the GA's. This results from the difference between the fuzzy rules that each evolutionary method activated in each sample.

Fig. 9. FIS’ Surface, GA sample

Fig. 10. FIS surface, ACO sample

Fig. 11. Best GA simulation with 27 fuzzy rules, 30 individuals, 50 generations

Fig. 12. Best ACO simulation with 21 fuzzy rules, 10 ants, 10 epochs

Fig. 13. Best GA simulation, saw tooth reference, 0.5 amplitude and a frequency of 0.1


Fig. 14. Best ACO simulation, saw tooth reference, 0.5 amplitude and 0.1 frequency

Fig. 15. GA with sinusoidal reference, 0.5 amplitude, 0.5 frequency

Figures 15-16 show the simulations with a sinusoidal reference; the amplitude is maintained at 50%, but the frequency is increased to 50% compared with the saw-tooth reference. The best result with the sinusoidal reference was obtained in the GA test with 30 individuals, at most 50 generations, 1% mutation, and 20% crossover. The best ACO fuzzy controller performs very badly under this reference change, as we can see: the controller tries to follow the reference, and we could say it tracks the sequence, but control is never achieved.

Fig. 16. ACO with sinusoidal reference, 0.5 amplitude, 0.5 frequency

Fig. 17. GA with square reference, 0.5 amplitude, frequency 0.5


Fig. 18. ACO with square reference, 0.5 amplitude, frequency 0.5

In the square-reference simulation, at 50% amplitude and 50% frequency, the results obtained (figures 17 and 18) were not satisfactory in either case; the FIS could not achieve perfect control under this reference change because of the active rules in each sample: the range of the fuzzy rules was not able to follow a square reference change.

5 Conclusions

Natural intelligence is the product of millions of years of biological evolution. Simulation of complex biological evolutionary processes may lead us to discover how evolution propels living systems toward higher-level intelligence. Greater attention is thus being paid to evolutionary computing techniques such as genetic algorithms and ant colony systems. Fuzzy controllers are very helpful when we are dealing with control issues; a fuzzy controller achieves better performance, reaching a goal faster than a non-fuzzy controller. In this project the control problem solved is the Ball and Beam model, in which we try to balance a ball on a beam; under every circumstance a good controller should keep the system working properly. It is well known that finding good parameters for a controller to function well is very hard and time-consuming. The evolutionary methods used for optimization in this project are Genetic Algorithms (GA) and Ant Colony Optimization (ACO). The GA is a paradigm that has proved to be a unique approach for solving various mathematical problems that gradient-type optimizers have failed to reach; ACO has been applied successfully to a large number of difficult combinatorial optimization problems. For the case of optimizing the ball and beam system, both methods performed with good quality; when parameters are adequate and the reference is fixed, the controller may reach the goal, but when it is also used to follow another reference the outcome is less clear, and the controller may lose track most of the time depending on its efficiency. Both methods have advantages and disadvantages; during this project the performance of the two methods was quite similar, but after experimenting with each individually, it becomes notable how one method can be more tolerant to changes, in its run times, and in its error averages in the simulations.

In my opinion, ACO shows more promise at this time: the experience of this project allows me to say that for the Ball and Beam system ACO is quicker than GA, ACO reached a lower error than GA did, and it also optimized the knowledge base more, with good results; but it is also important to mention that sometimes reducing things reduces our possibilities, so in this case a reduced number of fuzzy rules is not always the best result one can obtain. ACO is better than GA, even though GA obtained good results in this project.

References

[1] Man, K.F., Tang, K.S., Kwong, S.: Genetic Algorithms: Concepts and Designs. City University of Hong Kong. Springer, Heidelberg (1998)
[2] Holland, J.H.: Adaptation in Natural and Artificial Systems. MIT Press, Cambridge (1995)
[3] Davis, L.: Handbook of Genetic Algorithms. Van Nostrand Reinhold (1991)
[4] Goldberg, D.E.: Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading (1989)
[5] Michalewicz, Z.: Genetic Algorithms + Data Structures = Evolution Programs, 3rd edn. Springer, Heidelberg (1996)
[6] Beasley, D., Bull, D.R., Martin, R.R.: An overview of genetic algorithms: Part 1, fundamentals. University Computing 15(2), 58–69 (1993)
[7] Beasley, D., Bull, D.R., Martin, R.R.: An overview of genetic algorithms: Part 2, research topics. University Computing 15(4), 170–181 (1993)
[8] Man, K.F., Tang, K.S., Kwong, S.: Genetic algorithms: concepts and applications. IEEE Trans. Industrial Electronics 43(5), 519–534 (1996)
[9] Srinivas, M., Patnaik, L.M.: Genetic algorithms: a survey. IEEE Computer 27(6), 17–26 (1994)
[10] Tang, K.S., Man, K.F., Kwong, S., He, Q.: Genetic algorithms and their applications in signal processing. IEEE Signal Processing Magazine 13(6), 22–37 (1996)
[11] Whitley, D.: The GENITOR algorithm and selection pressure: why rank-based allocation of reproductive trials is best. In: Schaffer, J.D. (ed.) Proc. 3rd Int. Conf. Genetic Algorithms, pp. 116–121 (1989)
[12] Bonabeau, E., Dorigo, M., Theraulaz, G.: Swarm Intelligence: From Natural to Artificial Systems. Oxford Univ. Press, New York (1999)
[13] Bonabeau, E., Dorigo, M., Theraulaz, G.: Inspiration for optimization from social insect behavior. Nature 406, 39–42 (2000)
[14] Camazine, S., Deneubourg, J.L., Franks, N.R., Sneyd, J., Theraulaz, G., Bonabeau, E.: Self-Organization in Biological Systems. Princeton Univ. Press, Princeton (2001)
[15] Clark, P., Niblett, T.: The CN2 induction algorithm. Machine Learning 3(4), 261–283 (1989)
[16] Dorigo, M., Bonabeau, E., Theraulaz, G.: Ant algorithms and stigmergy. Future Generation Computer Systems 16(8), 851–871 (2000)
[17] Deneubourg, J.L., Aron, S., Goss, S., Pasteels, J.M.: The self-organizing exploratory pattern of the Argentine ant. Journal of Insect Behavior 3, 159 (1990)
[18] Dorigo, M., Maniezzo, V., Colorni, A.: Positive feedback as a search strategy. Dipartimento di Elettronica, Politecnico di Milano, Italy, Tech. Rep. 91-016 (1991)
[19] Dorigo, M.: Optimization, learning and natural algorithms (in Italian). Ph.D. dissertation, Dipartimento di Elettronica, Politecnico di Milano, Italy (1992)
[20] Dorigo, M., Maniezzo, V., Colorni, A.: Ant System: optimization by a colony of cooperating agents. IEEE Trans. on Systems, Man and Cybernetics, Part B 26(1), 29–41 (1996)
[21] Dorigo, M., Di Caro, G.: The Ant Colony Optimization meta-heuristic. In: Corne, D., et al. (eds.) New Ideas in Optimization, pp. 11–32. McGraw-Hill, London (1999)
[22] Dorigo, M., Di Caro, G., Gambardella, L.M.: Ant algorithms for discrete optimization. Artificial Life 5(2), 137–172 (1999)
[23] Porta-Garcia, M., Montiel, O., Sepúlveda, R., Castillo, O.: Path planning for autonomous mobile robot navigation with rerouting capability in dynamic search spaces using ant colony optimization. CITEDI-IPN and Department of Computing Science, Tijuana Institute of Technology, Tijuana, Mexico
[24] Wang, W., Bridges, S.M.: Genetic algorithm optimization of membership functions for mining fuzzy association rules. Department of Computer Science, Mississippi State University, USA (2000)
[25] Alcalá, R., Cordón, O., Herrera, F.: Algoritmos Geneticos para el Ajuste de Parametros y Seleccion de Reglas en el Control Difuso de un Sistema de Climatizacion HVAC para Grandes Edificios. Department of Computer Science, Jaén University, Jaén, Spain (2002)
[26] Casillas, J., Cordón, O., Herrera, F., Villa, P.: Aprendizaje Hibrido de la Base de Conocimiento de un Sistema Basado en Reglas Difusas mediante Algoritmos Geneticos y Colonia de Hormigas. Department of Computer Science and Artificial Intelligence, University of Granada; Department of Informatics, University of Vigo, Spain (2003)