Heuristic algorithms for solving the set-partitioning problem

Zbigniew J. Czech
Institute of Computer Science, Silesia University of Technology, Gliwice, Poland
e-mail: [email protected]

Abstract. A delivery problem which reduces to the NP-complete set-partitioning problem is investigated. The 3-opt, simulated annealing, tabu search and ant algorithms to solve the delivery problem are discussed.

Key words. Delivery problem, set-partitioning problem, heuristic algorithms, 3-opt algorithm, simulated annealing, tabu search, ant algorithm

1 Introduction

A delivery problem (DP) which arose in a transportation company is investigated. It can be formulated as follows. There is a central depot of goods and n customers (nodes) located at specified distances from the depot. The goods have to be delivered to each customer by using vehicles. The number of vehicles which can be used for delivery is unlimited; however, in each trip, which is effected during an eight-hour day, a vehicle crew can visit at most k customers, where k is a small constant. Let k = 3. Then on a single trip the crew starts from the depot, visits one, two or three customers and returns to the depot. A set of routes which guarantees the delivery of goods to all customers is sought. Furthermore, the cost, defined as the total length of the routes in the set, should be minimized. If a strong constraint is imposed on the magnitude of k (e.g. k = 3), then the DP reduces to the set-partitioning problem (SPP), which is NP-complete. Let N = {1, 2, ..., n} be the set of customers, and let P = {P_1, P_2, ..., P_s}, s = C(n,1) + C(n,2) + C(n,3), be the set of all subsets of N of size at most 3, i.e. P_i ⊆ N and |P_i| ≤ 3, i ∈ M, where M = {1, 2, ..., s}. Every P_i represents a possible tour of a solution to the DP. Let c_i be the minimum cost (length) of the tour P_i. To obtain the solution to the DP we need to solve the SPP, which consists in finding a collection {P_l}, l ∈ M, of minimum total cost such that every customer j, j ∈ N, is covered by the subsets in the collection exactly once. In other words, the intersection of any pair of subsets in {P_l} is empty.
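To make the reduction concrete, the following is a minimal sketch (not the author's implementation; the coordinates and function names are illustrative) of how the SPP instance can be built: every subset of at most three customers becomes a candidate route P_i, and its cost c_i is the length of the cheapest depot tour through it.

from itertools import combinations, permutations
from math import dist, inf

def build_spp_instance(depot, customers, k=3):
    """Enumerate every subset P_i of at most k customers together with the
    minimum cost c_i of the tour depot -> customers of P_i -> depot."""
    subsets, costs = [], []
    for size in range(1, k + 1):
        for subset in combinations(range(len(customers)), size):
            best = inf
            for order in permutations(subset):      # cheapest visiting order within the route
                points = [depot] + [customers[i] for i in order] + [depot]
                length = sum(dist(points[t], points[t + 1])
                             for t in range(len(points) - 1))
                best = min(best, length)
            subsets.append(frozenset(subset))
            costs.append(best)
    return subsets, costs

# A toy instance: 5 customers around a depot in the centre of a 60 x 60 square
depot = (30.0, 30.0)
customers = [(5.0, 7.0), (52.0, 11.0), (48.0, 55.0), (10.0, 49.0), (33.0, 3.0)]
subsets, costs = build_spp_instance(depot, customers)
print(len(subsets))   # C(5,1) + C(5,2) + C(5,3) = 25 candidate routes

Solving the SPP then means choosing a subfamily of these candidate routes that covers every customer exactly once at minimum total cost.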

2 Exact algorithms

In order to compute the exact solutions to the DP, we have implemented the A* algorithm [19], the iterative deepening A* (IDA*) algorithm [15], and the branch-and-bound algorithm with a heuristic cut of branches of the solution tree [24]. The tests have shown that the A* algorithm has a prohibitive storage complexity, and thus it is not useful in practice. Of the two remaining algorithms the branch-and-bound algorithm has a much lower (albeit exponential) time complexity. Fig. 1 shows the best four solutions to the DP obtained by using the branch-and-bound algorithm for n = 30 random customers (in this paper we use the Euclidean distances between nodes). The computation time to find these solutions on a Pentium 133 MHz processor exceeded 25 min.

This research was supported in part by the State Committee for Scientific Research Grant BK-205-RAu2-97. The preliminary version of this paper was presented at the Intelligent Information Systems Workshop, Zakopane, Poland, 9-13 June 1997.
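The paper does not spell out the heuristic cut, so the sketch below only illustrates the general shape of such a depth-first branch-and-bound over the candidate routes: branch on the lowest-numbered unserved customer and prune a partial solution as soon as its cost reaches the best complete solution found so far (a deliberately simple bound, used here purely for illustration).

from itertools import combinations

def branch_and_bound(n, route_cost, k=3):
    """Exact solver sketch for the DP/SPP: cover customers 0..n-1 with disjoint
    routes of size <= k, minimising total cost.  route_cost maps a frozenset of
    customers to the minimum tour length of that route (the c_i values)."""
    best_cost, best_routes = float("inf"), None

    def expand(remaining, cost, chosen):
        nonlocal best_cost, best_routes
        if cost >= best_cost:                # cut: partial cost already no better
            return
        if not remaining:
            best_cost, best_routes = cost, list(chosen)
            return
        first = min(remaining)               # branch on the lowest unserved customer
        others = sorted(remaining - {first})
        for size in range(k):                # 0, 1 or 2 companions for `first`
            for rest in combinations(others, size):
                route = frozenset((first,) + rest)
                chosen.append(route)
                expand(remaining - route, cost + route_cost[route], chosen)
                chosen.pop()

    expand(frozenset(range(n)), 0.0, [])
    return best_cost, best_routes

With the c_i values from the previous sketch, route_cost = dict(zip(subsets, costs)) gives a complete toy setup.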

Given k = 3 it can be shown that the number of all possible sets of routes (i.e. potential solutions) for n customers is determined by the expression

    Σ_{i=0}^{⌊n/3⌋}  n!/(6^i i!)  Σ_{j=0}^{⌊(n−3i)/2⌋}  1/((n − 3i − 2j)! 2^j j!)        (1)

Thus, looking for the optimum solution to our problem of size n = 30 we have to "sieve" 8,306,264,068,494,786,829,696 ≈ 8 × 10^21 potential solutions.
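As a sanity check on the reconstructed expression (1), a short illustrative script that evaluates it; for n = 3 it returns 5, and for n = 30 it should reproduce the figure of roughly 8 x 10^21 quoted above.

from math import factorial

def number_of_solutions(n):
    """Evaluate expression (1): i counts routes of size 3, j routes of size 2,
    and the remaining n - 3i - 2j customers are served by single-customer routes."""
    total = 0
    for i in range(n // 3 + 1):
        for j in range((n - 3 * i) // 2 + 1):
            total += factorial(n) // (6**i * factorial(i)
                                      * 2**j * factorial(j)
                                      * factorial(n - 3 * i - 2 * j))
    return total

print(number_of_solutions(3))    # 5
print(number_of_solutions(30))   # approximately 8 * 10**21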

[Figure 1: four route plots with costs 594.54, 594.99, 594.60 and 595.13]

Figure 1: The best four solutions to the DP found by the branch-and-bound algorithm for n = 30 uniformly distributed customers in the 60 × 60 square with the depot in the center

3 Approximation algorithms

We investigated four heuristic algorithms. The objective was to select the algorithm which finds good, i.e. near-optimal, solutions to the DP within a reasonable computing time. The quality of a solution was measured by the ratio of the best (approximate) solution value obtained by an algorithm to the optimum value.

3.1 3-opt algorithm

The 3-opt algorithm for solving the DP is a typical local optimization algorithm. It is modeled upon a similar algorithm given by Lin [17] which was applied to the traveling salesman problem (cf. [24]). Initially, the set of customers is completed by pseudo-customers located at the depot, so that the number of customers is divisible by 3. The optimizing computations of the algorithm can be iterated several times for various orderings of the customers, i.e. for various initial solutions. First, the customers are put in a random order and grouped into minimum-length routes of size 3, where the size is the number of customers in a route. Then, considering all possible triples of customers, their places in the routes are exchanged. Given a single triple i, j, k of customers and the exchanges i → j → k → i and i → k → j → i (the two cyclic rotations of their places), two new sets of routes are obtained. If the cost of the better of these two solutions is less than the cost of the best solution found so far, then it is taken as the best solution. Thus, the 3-opt algorithm employs a descent method which, starting from a random solution, searches its neighbourhood until some local minimum is found. The descent search can be repeated a number of times with different initial solutions, and the best of the local minima found can be taken as an approximation to the optimum.

3-opt algorithm

 1  Add 3⌈n/3⌉ − n pseudo-customers located at the depot;
 2  min_cost := ∞;
 3  Put the customers in a random order;
 4  Create a route for every consecutive triple of customers in the order;
 5  repeat
 6    cost_decreased := false;
 7    for all triples i, j, k of the customers do
 8      Exchange customers i, j, k in the routes using the rotations i → j → k → i and i → k → j → i, and compute new_cost as the lower cost of the two solutions;
 9      if new_cost < min_cost then
10        Update the best solution; min_cost := new_cost; cost_decreased := true;
11      else Restore the previous places of the customers;
12      end if;
13    end for;
14  until cost_decreased = false;

Let ω be the number of repetitions of the loop in lines 5-14 (the descent search). The operations in lines 8-12 are executed in constant time, hence the cost of the for loop in lines 7-13 is O(C(n,3)) = O(n³). Since this loop is executed once for each of the ω steps of the descent search, the time complexity of the 3-opt algorithm is T(n) = ωn³. The test results of 100 executions of the 3-opt algorithm are shown in Table 1. These were obtained for subsets of nodes of the test set of n = 100 uniformly distributed customers in the 60 × 60 square with the depot in the center. (Using expression (1) one can find that the number of potential solutions to the DP for n = 100 equals approximately 10^103.) The columns of the table contain the cardinalities of the subsets (n), the suspected optimum solution value of the DP (Optimum), the best solution value found by the algorithm (Best found), the average value of the solutions over the 100 executions (Average), the standard deviation of the results (σ), the quality of the best solution as defined before, expressed in per cent (Quality), the number of times the optimum solution was obtained (Hits), and the time of a single execution in seconds (Time).

  n    Optimum   Best found   Average     σ      Quality [%]   Hits   Time [s]
  40     819.0      819.0       820.5    5.80      100.00        94      1.6
  50     982.2      982.2       992.0   11.67      100.00        21      2.7
  60    1173.4     1173.7      1180.4    5.11      100.03         0      4.6
  70    1364.7     1364.7      1376.3    7.94      100.00        20      9.5
  80    1502.9     1502.9      1517.4    9.03      100.00         1     13.4
  90    1691.1     1691.1      1702.5    5.54      100.00         3     19.1
 100    1835.4     1836.5      1848.5    6.93      100.06         0     30.1

Table 1. Performance of the 3-opt algorithm (Pentium 133 MHz)
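A compact Python rendering of the descent may also help; it is only a sketch (a route_cost table for all customer triples, as in the sketch in the Introduction, and the padding to a multiple of three are assumed), not the implementation used for the experiments above.

import random
from itertools import combinations

def three_opt(n, route_cost, restarts=10):
    """3-opt style descent: group a random order of the n (padded) customers into
    consecutive triples, then rotate the positions of every customer triple both
    ways as long as the total route cost decreases."""
    def cost(order):
        return sum(route_cost[frozenset(order[t:t + 3])] for t in range(0, n, 3))

    best_order, best_cost = None, float("inf")
    for _ in range(restarts):                       # several random initial solutions
        order = list(range(n))
        random.shuffle(order)
        current = cost(order)
        improved = True
        while improved:
            improved = False
            for a, b, c in combinations(range(n), 3):
                # the two cyclic rotations of the customers at positions a, b, c
                for x, y, z in ((b, c, a), (c, a, b)):
                    trial = order[:]
                    trial[a], trial[b], trial[c] = order[x], order[y], order[z]
                    new_cost = cost(trial)
                    if new_cost < current:
                        order, current, improved = trial, new_cost, True
        if current < best_cost:
            best_order, best_cost = order, current
    return best_order, best_cost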

3.2 Simulated annealing

The technique of simulated annealing, which can be regarded as a variant of local search, was first introduced by Metropolis et al. [18], and then applied to optimization problems by Kirkpatrick et al. [14] and Cerny [2]. A comprehensive introduction to the subject can be found in [21]. The application of simulated annealing to the DP is as follows. Initially a solution to the problem is the set of routes of size 1. In every step a neighbour solution is determined either by moving a randomly chosen customer from one route to another or by exchanging the places of two random customers in their routes. Neighbour solutions of lower cost obtained in this way are always accepted, whereas solutions of higher cost are accepted with the probability

    p = ϑ/(ϑ + δ)        (2)

where δ is the increase in solution cost and ϑ is the temperature of annealing, which drops from an initial value ϑ_0 = cost(initial solution)/10³ according to the formula ϑ := ϑ·α, where α < 1. Eq. (2) implies that large increases in solution cost, so-called uphill moves, are more likely to be accepted when ϑ is high. As ϑ approaches zero most uphill moves are rejected. To assure that the same pattern of temperature reduction is followed regardless of its initial value, the series of α coefficients is obtained from the generator

    (a) α := (500 + 2w)/(500 + 3w)
    (b) w := w·α

where instructions (a) and (b) are executed alternately. Initializing the generator with the value w = 1000, we perform 515 cooling steps through the temperatures ϑ_0, 0.714285714·ϑ_0, 0.521235521·ϑ_0, ..., 0.001000245·ϑ_0, where ϑ_0 is the initial temperature of annealing. The algorithm can also stop before all the cooling steps are done if equilibrium is encountered. We define that equilibrium is reached if 20n² consecutive neighbourhood moves fail to improve the best solution. Summing up, the annealing algorithm performs the local search by sampling the neighbourhood randomly. It attempts to avoid becoming prematurely trapped in a local optimum by sometimes accepting an inferior solution. The level of this acceptance depends on the magnitude of the increase in solution cost and on the search time to date. The cost of a single iteration of the repeat statement (lines 6-27 of the pseudocode below) is proportional to n², as the steps in lines 8-22 and 24-26 are executed in constant time, and there are n² repetitions of the for loop (lines 7-23). The cooling schedule gives 515 iterations of the repeat statement, therefore the worst-case time complexity of the annealing algorithm is T(n) = 515n². In Table 2 the experimental results obtained for the annealing algorithm are shown. The columns have the same meaning as those in Table 1.

  n    Optimum   Best found   Average     σ      Quality [%]   Hits   Time [s]
  40     819.0      819.0       819.3    1.02      100.00        82      3.2
  50     982.2      982.2       983.4    1.80      100.00        43      5.7
  60    1173.4     1173.4      1177.1    3.72      100.00         1      9.5
  70    1364.7     1364.7      1378.7    6.23      100.00         1     15.2
  80    1502.9     1503.6      1520.4   10.35      100.05         0     21.6
  90    1691.1     1694.1      1713.2   11.26      100.18         0     31.2
 100    1835.4     1837.6      1865.3   14.80      100.12         0     37.5

Table 2. Performance of the annealing algorithm (Pentium 133 MHz)
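The temperature-reduction generator described above is easy to reproduce numerically; the following illustrative script (not the author's code) prints the number of cooling steps and the first and last reduction factors.

def cooling_factors(w=1000.0):
    """Alternate instructions (a) alpha := (500 + 2w)/(500 + 3w) and (b) w := w*alpha
    until w falls below 1; return the cumulative factors theta/theta_0."""
    factors, cumulative = [1.0], 1.0
    while w >= 1.0:
        alpha = (500.0 + 2.0 * w) / (500.0 + 3.0 * w)   # instruction (a)
        w *= alpha                                       # instruction (b)
        cumulative *= alpha
        factors.append(cumulative)
    return factors

factors = cooling_factors()
print(len(factors), factors[1], factors[2], factors[-1])
# the first reduced temperatures are 0.714285714 and 0.521235521 times theta_0,
# and the schedule ends near 0.001 * theta_0 (the paper reports 515 cooling steps)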

Annealing algorithm

 1  Create an initial, old_solution as the set of routes leading to the single customers;
 2  best_solution := old_solution;
 3  w := 1000;  {pattern of temperature reduction}
 4  equilibrium_counter := 0;  {set the equilibrium counter}
 5  ϑ := cost(best_solution)/1000;  {initial temperature of annealing}
 6  repeat
 7    for iteration_counter := 1 to n² do
 8      equilibrium_counter := equilibrium_counter + 1;
 9      Select randomly a customer and a route;
10      if the route size is less than 3 then
11        Create the new_solution by moving the customer into the chosen route;
12      else
13        Select randomly a second customer; Create the new_solution by exchanging the selected customers;
14      end if;
15      δ := cost(new_solution) − cost(old_solution);
16      Generate random x uniformly in the range (0, 1);
17      if (δ < 0) or (x < ϑ/(ϑ + δ)) then
18        old_solution := new_solution;
19        if cost(new_solution) < cost(best_solution) then
20          best_solution := new_solution; equilibrium_counter := 0;
21        end if;
22      end if;
23    end for;
24    α := (500 + 2w)/(500 + 3w);
25    w := w·α;
26    ϑ := ϑ·α;  {temperature reduction}
27  until (equilibrium_counter > 20n²) or (w < 1);
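For completeness, a small sketch of the inner acceptance step (lines 15-22 of the pseudocode); cost and random_neighbour are assumed helper functions, not part of the paper.

import random

def annealing_step(old_solution, best_solution, theta, cost, random_neighbour):
    """One inner step: draw a neighbour, accept it if cheaper, or accept an uphill
    move with probability theta/(theta + delta) as in Eq. (2)."""
    new_solution = random_neighbour(old_solution)
    delta = cost(new_solution) - cost(old_solution)
    if delta < 0 or random.random() < theta / (theta + delta):
        old_solution = new_solution
        if cost(new_solution) < cost(best_solution):
            best_solution = new_solution
    return old_solution, best_solution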

3.3 Tabu search

Tabu search (TS), proposed by Glover [10] and Hansen [13], is a form of neighborhood search, where a neighborhood is defined as a set of "adjacent solutions" which can be reached from a current solution. At each iteration, TS selects a move that leads from the current solution to the next one. In most instances a move leads to the solution with the lowest value of the objective function among all solutions in the neighborhood. However, when a move is executed it becomes forbidden (tabu) for a duration (tenure) of a certain number of iterations. Therefore it may happen that TS, continuing the search, selects an inferior solution because the better ones are tabu. If a tabu move yields a better solution than any encountered so far, its tabu classification can be overridden. A rule which permits such an override to occur is called an aspiration criterion.
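The tenure and aspiration mechanism can be sketched generically as follows (the move encoding and data structures are illustrative, not those of the paper).

def select_move(candidates, iteration, tabu_until, best_cost, tenure):
    """Pick the best admissible move.  candidates is a list of (move, resulting_cost)
    pairs; a tabu move may still be chosen if it beats the best cost found so far
    (the aspiration criterion)."""
    chosen = None
    for move, move_cost in candidates:
        tabu = tabu_until.get(move, -1) >= iteration
        if tabu and move_cost >= best_cost:      # forbidden and not aspiring
            continue
        if chosen is None or move_cost < chosen[1]:
            chosen = (move, move_cost)
    if chosen is not None:
        tabu_until[chosen[0]] = iteration + tenure   # make the executed move tabu
    return chosen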

Tabu search algorithm

1  Set the current_solution to an initial solution;
2  repeat n² times
3    Compute the neighborhood of the current_solution;
4    Choose the best solution from the neighborhood using the tabu conditions and the aspiration criterion, and record it as the current_solution;
5    if current_solution.cost < best_solution.cost then best_solution := current_solution;
6  end;

In the DP a move is identified with a swap of a pair of nodes between routes (Fig. 2b-c). A set of pairs swapped in the current solution defines its neighborhood. Overall, six neighborhoods were investigated. The neighborhood given below yielded the best solutions to the DP.

Node pair swaps defining the neighborhood:

    for i := 1 to n do
      Set the first node to i;
      Select randomly the second node from the set Near(first node);
    end for;


Figure 2: (a) The optimum solution to the DP from Fig. 1 (cost = 594.54) with the set Near(i) of nodes located at most at the distance d/2 from node i; (b)-(c) a move defined as a swap of nodes between routes

Let d = 2 Σ_{i<j} distance(i, j) / (n(n−1)) be the average distance between nodes in a solution. We define Near(i) = {j : distance(i, j) ≤ d/2} as the set of nodes located at most at the distance d/2 from node i (Fig. 2a). The time complexity of the tabu search algorithm is determined by the cost of the repeat loop in lines 2-6. The steps in lines 3 and 4 can be executed in O(n) time, and the step in line 5 in constant time. Since these steps are repeated n² times, the overall time complexity of the tabu search algorithm is T(n) = n³. The test results of 100 executions of the tabu search algorithm, for the neighborhood defined above and the tenure equal to 2√n iterations, are shown in Table 3.

  n    Optimum   Best found   Average     σ      Quality [%]   Hits   Time [s]
  40     819.0      824.8       825.0    0.40      100.71         0      1.1
  50     982.2      982.2       982.5    0.34      100.00        46      1.8
  60    1173.4     1173.4      1173.4    0.03      100.00        88      3.1
  70    1364.7     1369.0      1372.1    4.24      100.31         0      5.2
  80    1502.9     1502.9      1508.5    4.08      100.00        14      7.3
  90    1691.1     1691.1      1697.2    1.80      100.00         7     10.9
 100    1835.4     1837.9      1840.5    0.97      100.14         0     15.4

Table 3. Performance of the tabu search algorithm (Pentium 133 MHz)
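A sketch of the neighbourhood construction described above (illustrative only; points are the customer coordinates, and the tenure of 2√n iterations would be applied by the caller).

import random
from math import dist

def near_sets(points):
    """Near(i): the nodes within half of the average pairwise distance of node i."""
    n = len(points)
    total = sum(dist(points[i], points[j]) for i in range(n) for j in range(i + 1, n))
    d_avg = 2.0 * total / (n * (n - 1))
    return [[j for j in range(n) if j != i and dist(points[i], points[j]) <= d_avg / 2.0]
            for i in range(n)]

def neighbourhood_swaps(near):
    """One candidate swap per node i: (i, random j from Near(i)), as in the loop above."""
    return [(i, random.choice(near[i])) for i in range(len(near)) if near[i]]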

3.4 Ant algorithm

The idea of employing a colony of cooperating agents to solve combinatorial optimization problems was proposed by Dorigo et al. [7] (see also http://iridia.ulb.ac.be/dorigo/ACO/ACO.html). Its application for solving the DP can be described as follows. The search for solutions is carried out by m concurrently working agents, called ants. The objective of each ant in a given cycle of computations is to find a solution to the DP with the lowest possible cost. During the search the ants lay a trail on the edges they traverse¹. The trails laid by different ants accumulate on the edges, and the edges with a higher amount of trail (i.e. those visited more frequently) are chosen in the next cycles with higher probability. At time t = 0 the ants are placed at the depot node, and a small initial value τ_0 is assigned to the trail τ(i, j) of every edge (i, j). At time t every ant chooses randomly the node (a customer or the depot) to visit at time t + 1. In a single iteration of the algorithm the movements of the m ants made during the interval (t, t + 1) are determined. A cycle of the algorithm consists of n such iterations. Clearly, at the end of a cycle each ant has found a solution to the DP. After each iteration the trail intensity on the edges is updated using a local trail updating rule of the form

    τ(i, j) = (1 − ρ)·τ(i, j) + Δτ(i, j)        (3)

¹ Real ants deposit a substance called pheromone on the paths, which enables them to find the shortest path from a food source to their nest [1, 12].


where ρ, 0 < ρ < 1, is a coefficient such that 1 − ρ determines the local evaporation of the trail during an iteration, and Δτ(i, j) = Σ_{r=1..m} Δτ_r(i, j) is the increase of the trail. Δτ_r(i, j) equals τ_0 if ant r traversed edge (i, j) in the given iteration, and 0 otherwise. The amount of trail on the edges is modified again at the end of each cycle, i.e. when every ant has found its solution to the DP. This time a global trail updating rule is used:

    τ(i, j) = (1 − α)·τ(i, j) + Δτ_g(i, j)        (4)

where

    Δτ_g(i, j) = 1/L_g if edge (i, j) belongs to the globally best solution, and 0 otherwise.        (5)

Here α, 0 < α < 1, is the global trail evaporation parameter, and L_g is the length of the globally best solution found from the beginning of the algorithm run. As can be seen from Eqs. (4) and (5), global updating is intended to reinforce the trail on the edges belonging to the best solution, i.e. to the shortest set of routes found so far. While searching for a solution an ant cannot visit any customer more than once. To satisfy this constraint the lists tabu_r and allowed_r are introduced. The first list contains the nodes visited by ant r up to time t, and the second holds the remaining nodes. When an ant chooses a node to go to at time t + 1, it considers only the nodes appearing on its allowed_r list. (Since the depot node can be visited repeatedly, it is handled in a special way.) At the end of each cycle the tabu_r lists are used to retrieve the solutions found by the ants. The lists are emptied before the next cycle begins. Another constraint is that an ant on its route can visit at most k customers. This constraint is fulfilled by introducing the variable size_r which records the current size of the route of ant r. The node j to which ant r, staying at node i, moves at time t + 1 is chosen according to the rule

    j = argmax_{u ∈ allowed_r} { τ(i, u)·(1/d(i, u))^β }   if q ≤ q_0,
    j = j′                                                  otherwise,        (6)

where d(i, j) is the distance between nodes i and j, q is a random number uniformly distributed over the range [0, 1], q_0, 0 ≤ q_0 ≤ 1, is a parameter, and j′ is a random node selected according to the following probability distribution:

    p_r(i, j) = τ(i, j)·(1/d(i, j))^β / Σ_{v ∈ allowed_r} τ(i, v)·(1/d(i, v))^β   if j ∈ allowed_r,
    p_r(i, j) = 0                                                                  otherwise.        (7)

For this selection a biased roulette wheel is created where each node j ∈ allowed_r has a roulette wheel slot sized in proportion to the above probability (cf. [11]). It can be seen that the probability of being chosen is higher for nodes located closer to the ant's position. The parameter β of the algorithm allows one to control the relative importance of the trail versus the node distance while choosing a target node of movement.
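A sketch of the state transition rules (6)-(7) and the local updating rule (3) as reconstructed above; tau and d are assumed to be dictionaries keyed by edges (i, j), and the parameter values match those used in the experiments below.

import random

def choose_next_node(i, allowed, tau, d, beta=1.0, q0=0.95):
    """Rules (6) and (7): with probability q0 take the best edge (exploitation),
    otherwise sample a node from the biased roulette wheel (exploration)."""
    if random.random() <= q0:
        return max(allowed, key=lambda u: tau[i, u] * (1.0 / d[i, u]) ** beta)
    weights = [tau[i, j] * (1.0 / d[i, j]) ** beta for j in allowed]
    return random.choices(list(allowed), weights=weights, k=1)[0]

def local_update(tau, traversed_edges, rho=0.2, tau0=0.0001):
    """Rule (3): evaporate the trail on every edge and add tau0 for each traversal
    made by the ants during the iteration."""
    increase = {}
    for edge in traversed_edges:
        increase[edge] = increase.get(edge, 0.0) + tau0
    for edge in tau:
        tau[edge] = (1.0 - rho) * tau[edge] + increase.get(edge, 0.0)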

Ant algorithm

 1  Place m ants at the depot node;
 2  For all edges (i, j) set the initial value of the trail τ(i, j) := τ_0;
 3  for cycle := 1 to no_of_cycles do
 4    For each ant r put 0 (the depot number) on the tabu_r list and set size_r := 0;
 5    for iteration_counter := 1 to n do
 6      for r := 1 to m do
 7        if size_r = 3 then  {return to the depot}
 8          j := 0; size_r := 0;
 9        else
10          Choose node j using the state transition rules (6) and (7);
11          if j = 0 then size_r := 0;  {the depot was chosen}
12          else size_r := size_r + 1; end if;
13        end if;
14        Move ant r from node i to node j, and add j to the tabu_r list;
15        Update the trail using the local updating rule (3);
16      end for;
17    end for;
18    For every ant r compute the cost of the solution described by its tabu_r list, and update the globally best solution;
19    Update the trail level using the global updating rule (4);
20    Empty the tabu_r lists of all ants;  {cycle ends; begin a new cycle}
21  end for;

Assume that no_of_cycles = l. The complexity of the ant algorithm is determined by the for loop in lines 3-21. The steps of the loop body require the following computational efforts: line 4: O(m), lines 5-17: O(n²m), line 18: O(nm), line 19: O(n²), and line 20: O(m). Hence a single iteration of the loop runs in time O(n²m), and the time complexity of the ant algorithm is T(n) = ln²m. The performance of the ant algorithm measured over its 100 executions is shown in Table 4. The algorithm was run with the following parameters: τ_0 = 0.0001 (initial value of the trail), no_of_cycles = 1000, m = 10 (number of ants), β = 1 (see Eqs. (6) and (7)), q_0 = 0.95 (see Eq. (6)), and ρ = 0.2, α = 0.1 (trail evaporation; see Eqs. (3) and (4)).

  n    Optimum   Best found   Average     σ      Quality [%]   Hits   Time [s]
  40     819.0      819.0       819.1    0.19      100.00        26      6.5
  50     982.2      982.8       985.2    1.46      100.06         0      9.3
  60    1173.4     1183.7      1190.6    3.20      100.88         0     12.4
  70    1364.7     1382.3      1389.6    5.67      101.29         0     15.8
  80    1502.9     1511.8      1521.5    5.67      100.59         0     19.6
  90    1691.1     1722.4      1727.4    3.77      101.85         0     24.0
 100    1835.4     1861.0      1864.6    2.46      101.39         0     29.2

Table 4. Performance of the ant algorithm (Pentium 133 MHz)
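The global updating step (line 19 of the pseudocode), following the reconstructed rules (4)-(5), can be sketched in the same style; best_edges is the edge set of the globally best solution, best_length its length L_g, and α = 0.1 as listed above.

def global_update(tau, best_edges, best_length, alpha=0.1):
    """Rules (4)-(5): evaporate every edge and deposit 1/Lg on the edges that
    belong to the globally best solution found so far."""
    deposit = 1.0 / best_length
    for edge in tau:
        tau[edge] = (1.0 - alpha) * tau[edge] + (deposit if edge in best_edges else 0.0)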

4 Conclusions

The performance of the 3-opt algorithm is very good, although the optima for n = 60 and n = 100 were not found (see Table 1). The distances, however, between the best solutions obtained in those cases and the optimum values (measured by the Quality column) are small. The results generated by the annealing algorithm are also quite good, especially for smaller n (see Table 2). The average lengths of the routes obtained for n > 60 are slightly worse than those for the 3-opt algorithm. The results obtained by the tabu search algorithm with respect to the average length of the routes are the best (except for n = 40) in comparison with the remaining algorithms (see Table 3). Furthermore, the tabu search algorithm achieved the highest number of hits on the optimum solutions among all algorithms under consideration. On the other hand, in the cases when the optimum is not found, the quality values are slightly worse than those for the 3-opt and annealing algorithms. As to the ant algorithm, the average quality of the solutions achieved was the lowest. In some instances the solution length was almost 2% worse than the optimum value (see Table 4). It is worth noting that the 3-opt algorithm is the easiest to implement, while the remaining algorithms need careful tuning of their parameter values. For example, in the annealing algorithm one has to choose the initial value of the temperature, the cooling schedule and the stopping condition. In addition, the acceptance function for uphill moves and the number of iterations to perform at a given temperature must be established. Similarly, the ant algorithm requires specifying the values of seven parameters. The main difficulty in adopting tabu search for our purpose was to define properly the notion of a neighborhood. As was mentioned before, six different definitions were investigated and the best one was selected. To conclude, all the algorithms considered in this paper are suitable for solving the discussed set-partitioning problem (note again the enormous size of the solution spaces which have to be explored). The best results were obtained by using the 3-opt, simulated annealing and tabu search algorithms, so these algorithms can be recommended for practical applications.

Acknowledgments

We wish to thank Sebastian Deorowicz, Slawomir Cichonski, Marcin Szoltysek and Adam Skorczynski for implementing and running the experiments on the 3-opt, simulated annealing, tabu search and ant algorithms, respectively.

References

[1] Beckers, R., Deneubourg, J.L., and Goss, S., Trails and U-turns in the selection of the shortest path by the ant Lasius niger, Journ. of Theoretical Biology 159, (1992), 397-415.
[2] Cerny, V., A thermodynamical approach to the travelling salesman problem: an efficient simulation algorithm, J. of Optimization Theory and Applic. 45, (1985), 41-55.
[3] Bilchev, G., Parmee, I., The ant colony metaphor for searching continuous design spaces, Lecture Notes in Computer Science 993, T. Fogarty (Ed.), Springer-Verlag, 24-39.
[4] Bock, F., An algorithm for solving "traveling-salesman" and related network optimization problems, 14th National Meeting of the ORSA, St. Louis, MO, 1958.
[5] Colorni, A., Dorigo, M., Maffioli, F., Maniezzo, V., Righini, G., Trubian, M., Heuristics from nature for hard combinatorial problems, Intern. Transactions in Operational Research, 1996, in press.
[6] Derwent, D., A better way to control pollution, Nature 331, (1988), 575-578.
[7] Dorigo, M., Maniezzo, V., Colorni, A., The Ant System: Optimization by a colony of cooperating agents, IEEE Trans. on Systems, Man, and Cybernetics B, 26, 1 (1996), 29-41.
[8] Dorigo, M., Gambardella, L.M., Ant colony system: A cooperative learning approach to the travelling salesman problem, IEEE Trans. on Evolutionary Computation, 1 (1997), in press.
[9] Dorigo, M., Gambardella, L.M., Ant colonies for the travelling salesman problem, BioSystems, 1997, in press.
[10] Glover, F., Laguna, M., Tabu search, in Reeves, C.R. (Ed.), Modern Heuristic Techniques for Combinatorial Problems, McGraw-Hill, London, (1995), 70-150.
[11] Goldberg, D.E., Genetic algorithms in search, optimization & machine learning, Addison-Wesley, Reading, Mass., 1989.
[12] Goss, S., Aron, S., Deneubourg, J.L., and Pasteels, J.M., Self-organized shortcuts in the Argentine ant, Naturwissenschaften 76, (1989), 579-581.
[13] Hansen, P., The steepest ascent mildest descent heuristic for combinatorial programming, Congress on Numerical Methods in Combinatorial Optimization, Capri, Italy (1986).
[14] Kirkpatrick, S., Gelatt, C.D., Vecchi, M.P., Optimization by simulated annealing, Science 220, (1983), 671-680.
[15] Korf, R., Optimal path-finding algorithms, in: Kanal, L., Kumar, V. (Eds.), Search in Artificial Intelligence, Springer-Verlag, 1988, 223-267.
[16] Leerink, L.R., Schultz, S.R., Jabri, M.A., A reinforcement learning exploration strategy based on ant foraging mechanisms, Proc. of the 6th Australian Conf. on Neural Networks, Sydney, Australia, 1995.
[17] Lin, S., Computer solutions of the traveling salesman problem, Bell System Tech. J. 44, (1965), 2245-2269.
[18] Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H., Teller, E., Equation of state calculations by fast computing machines, Journ. of Chem. Phys. 21, (1953), 1087-1091.
[19] Pearl, J., Heuristics. Intelligent search strategies for computer problem solving, Addison-Wesley, Reading, Mass., 1984.

[20] Radcliffe, N., Wilson, G., Natural solutions give their best, New Scientist, 14 April (1990), 47-50.
[21] Reeves, C.R. (Ed.), Modern Heuristic Techniques for Combinatorial Problems, McGraw-Hill, London, 1995.
[22] Schoonderwoerd, R., Holland, O., Bruten, J., Ant-like agents for load balancing in telecommunications networks, Proc. of Agents'97, Marina del Rey, CA, ACM Press, in press.
[23] Stutzle, T., Hoos, H., Improvements on the Ant System: Introducing the MAX-MIN ant system, ICANNGA97, 3rd Intern. Conf. on Artificial Neural Networks and Genetic Algorithms, University of East Anglia, Norwich, U.K., Springer-Verlag, 1997, in press.
[24] Syslo, M.M., Deo, N., Kowalik, J.S., Discrete optimization algorithms with Pascal programs, Prentice-Hall, Inc., 1983.

