
A dynamic max-min ant system for solving the travelling salesman problem

Mohammad Reza Bonyadi* and Hamed Shah-Hosseini

Electrical and Computer Engineering Department, Shahid Beheshti University G.C., Velanjak Ave, Tehran, Iran
E-mail: [email protected]
E-mail: [email protected]
*Corresponding author

Abstract: In this paper, a modified max-min ant system, called the dynamic max-min ant system (DMAS), is proposed to solve the travelling salesman problem (TSP). The proposed algorithm updates the value of τmin, the lower bound of the pheromone trails, during its run. In addition, the parameters used for the DMAS are adjusted to improve the performance of the method. Furthermore, a local search based on 2-Opt is adjoined to the DMAS and the results are reported. Moreover, the DMAS is applied to some standard TSPs and its results are compared with some previous works. Results show that the proposed method outperforms several other well-known population-based methods in many cases. Also, in some standard problems, the proposed method improves the shortest known tour lengths. Moreover, experiments show that the standard deviation of the tour lengths found by the DMAS is very small, which demonstrates the stability of the proposed algorithm.

Keywords: travelling salesman problem; ant system; ant colony optimisation; intelligent water drops; max-min ant system; MMAS.

Reference to this paper should be made as follows: Bonyadi, M.R. and Shah-Hosseini, H. (2010) 'A dynamic max-min ant system for solving the travelling salesman problem', Int. J. Bio-Inspired Computation, Vol. 2, No. 6, pp.422–433.

Biographical notes: Mohammad Reza Bonyadi received his Masters in Computer Engineering at Shahid Beheshti University. His research interests include issues related to computational intelligence, swarm intelligence, image processing and numerical analysis. He has published research papers in national and international journals and conference proceedings, as well as book chapters.

H. Shah-Hosseini received his BS in Computer Engineering from Tehran University, and his MS and PhD from Amirkabir University of Technology, all with high honours. He is now with the Electrical and Computer Engineering Department, Shahid Beheshti University, Tehran, Iran. His current research interests include computational intelligence, especially time-adaptive self-organising maps, evolutionary computation, swarm intelligence and computer vision.

1 Introduction

The travelling salesman problem (TSP) (Durbin and Willshaw, 1987) is one of the most studied NP-hard combinatorial optimisation problems. The TSP has attracted researchers because of its wide range of applications, such as path, routing and distribution problems. Informally, the TSP asks for a shortest tour that covers all cities in a region. More formally, the TSP is defined as follows: let V = {a1, a2, ..., an} be a set of cities, A = {(r, s): r, s ∈ V} the edge set, and δ(r, s) a cost associated with edge (r, s) ∈ A. The TSP is the problem of finding a minimal-cost closed tour that visits each city exactly once. Three groups of algorithms have been applied to solve the TSP so far. The first group is the


classical exact methods, like cutting planes and branch and bound (Padberg and Rinaldi, 1987). Nevertheless, classical exact methods for solving the TSP usually have exponential computational complexity; in fact, these algorithms can only optimally solve small problems. Hence, new methods are required to overcome this shortcoming. The second group is the heuristic methods, such as 2-opt and 3-opt (Lin and Kernighan, 1973), and Markov chain methods (Martin et al., 1991). The third group of algorithms for solving the TSP consists of modern heuristic methods such as particle swarm optimisation (PSO) (Yuan et al., 2007), ant colony optimisation (ACO) and ant systems (AS) (Dorigo and Gambardella, 1997; Song et al., 2006; Stutzle and Hoos, 1997; Guo et al., 2006; Duan and Yu, 2007; Shang et al., 2007), genetic algorithms (GA) (Yan et al., 2005; Bonyadi et

al., 2007), artificial immune systems (AIS) (Zeng and Gu, 2007) and bee colony optimisation (BCO) (Teodorovic and Dell'Orco, 2005), which are good algorithms for large problems. For more information about these methods, see Bonyadi et al. (2008). Besides, some algorithms based on greedy principles, such as nearest neighbour and spanning tree, are considered efficient solving methods. Because the proposed method belongs to the third group of algorithms (modern heuristics), we investigate some of these modern heuristics in the related works section (next section). In this paper, a modified max-min ant system, called the dynamic max-min ant system ('DMAS'), is proposed to solve the TSP. The DMAS algorithm belongs to the third group of algorithms introduced above. In the proposed method, the value of the pheromone trails is limited dynamically. Indeed, the DMAS utilises the information of the pheromone matrix in each iteration and tries to limit the pheromone values on the connection arcs for the next iterations. Limiting the pheromone trails helps maintain the probability of selecting the arcs, so that all arcs have a chance (even a very small one) of being selected in every iteration. In the proposed method, this limit is a dynamic value that is changed according to the current pheromone matrix, which helps the algorithm converge faster. Experimental results show that the algorithm works efficiently and finds the optimum or near-optimum solutions of some standard test problems. Also, comparative results show that the DMAS algorithm outperforms many other previous works. The paper is organised as follows: first, some related approaches, which belong to the third group, are investigated. In Section 3, some principles of ant-based algorithms are presented. Then, our proposed approach is introduced and its results are compared with some well-known algorithms. Finally, concluding remarks are presented.

2 Related works

In this section, some related works based on meta-heuristic approaches are presented. At first, some algorithms based on GA are investigated. One of the recent works that uses the GA approach to solve the TSP is Yan et al. (2005). In this work, a new algorithm based on the inver-over operator was proposed; the authors used new strategies, such as a selection operator, a replace operator and new control strategies, to accelerate the convergence speed. Another work, proposed in 2007, utilised a combinatorial approach based on the GA (Bonyadi et al., 2007). In that work, the authors used two local search algorithms, shuffled frog leaping (SFL) and civilisation and society (CS), to improve the population in terms of tour length. In that study, the mutation and crossover operators were first applied to the population, and then a mixed local search combining SFL and CS was used to improve every element of the population. Also, in that paper a new approach for coding permutation-based problems was introduced.


Another meta-heuristic which has been successfully applied to the TSP is PSO. In Yuan et al. (2007), a PSO-based algorithm was proposed for solving the TSP. The authors proposed a novel hybrid algorithm, which combined the strengths of both PSO and a chaotic optimisation algorithm (COA). In fact, the COA was used to prevent the particles from getting stuck in local optima. Also, they proposed some new operators to overcome the difficulties of applying PSO to discrete problems (Yuan et al., 2007). Ant-based methods are also good candidates for solving the TSP. One of the first works that utilised an ant-based approach for solving the TSP was Dorigo and Gambardella (1997). In that paper, an ant-based algorithm called the ant colony system (ACS) was introduced to solve the TSP. In that approach, ants cooperate with each other using an indirect form of communication called pheromones: ants deposit pheromone on the edges of the TSP graph while building solutions. Owing to the success of this method, many attempts have been made to improve its performance. One of the best-known is the 'max-min ant system' (MMAS), presented in Stutzle and Hoos (1997). In the MMAS, the authors introduced maximum and minimum values for the pheromone matrix, which are used to restrict the pheromone values. Also, in this algorithm, just one ant (the best ant) is allowed to update the pheromone matrix. The authors report that the best performance appears when the pheromone matrix is initialised to its maximum value (Stutzle and Hoos, 1997). In recent years, many other ant-based algorithms have been proposed. An approach based on ant colonies was proposed in Song et al. (2006). In that paper, the authors studied the use of a combination of two kinds of pheromone evaluation models, the change of the number of ants in the colony during the run of the algorithm, and the mutation of pheromone. Another work, called ACOEA, utilises evolutionary operators like crossover and mutation in combination with ACO to provide a search capability that enhances the rate of convergence (Guo et al., 2006). In addition, that method adopted a dynamic selection method based on the fitness of each ant, in which the tours of better ants had a higher chance of receiving pheromone updates (Guo et al., 2006). Another ant-based method, combined with a memetic algorithm, was proposed in Duan and Yu (2007); there, the memetic algorithm was used to adjust the parameters of the ACO for solving the TSP. In the same year, another ACO-based work was proposed, which utilised association rules (AR) in combination with the ant colony; the AR were used to find relations among the cities in the database (Shang et al., 2007). AIS algorithms have also been successfully applied to the TSP. One of the recent works based on AIS is presented in Zeng and Gu (2007). A reversal exchange crossover and mutation operator was proposed in that paper; these operators were used to find good sub-tours and keep individuals different. Next, a new immune operator was used to restrain the individuals' degeneracy (Zeng and Gu, 2007). Also, some new


swarm-based approaches like electromagnetism-like meta-heuristic (EM) (Wu et al., 2006), BCO (Teodorovic and Dell’Orco, 2005) and intelligent water drops (IWD) (Shah-Hosseini, 2007, 2009) are good candidates to solve the TSP. In this paper, a new approach named DMAS based on ant colony optimisation is proposed. The proposed method limits the pheromone trails (like MMAS) and updates this limitation in each iteration according to the current pheromone matrix.

3 Principles and fundamentals

Ant-based heuristics are inspired by the behaviour of real ants in finding the shortest path between the nest and a food source. This is achieved through pheromone trails, which record the trace of an ant and which the ants use as their communication tool (Dorigo et al., 1996). In fact, ant-based algorithms use a set of artificial ants (individuals) that cooperate in solving a problem by exchanging information via pheromone deposited on graph edges. In this section, some ant-based algorithms are presented.

3.1 The standard ant system

The AS mimics the behaviour of ants to find the optimal path in the TSP. The AS algorithm is summarised as Algorithm 1:

Algorithm 1  The ant system algorithm for the TSP

    Initialise the pheromone trails
    Loop
        Each ant is positioned on a starting node
        Loop
            Each ant applies a state transition rule to incrementally build a solution
        Until all ants have built a complete solution
        A global pheromone updating rule is applied
    Until end condition

The state transition rule is as follows: when ant k is positioned at node i, it chooses node j with probability $p_{ij}^{k}$, calculated by

$p_{ij}^{k} = \frac{\tau_{ij}^{\alpha}\,(1/dis_{ij})^{\beta}}{\sum_{\forall j \in Tabu_{k}} \tau_{ij}^{\alpha}\,(1/dis_{ij})^{\beta}}$   (1)

In this equation, $p_{ij}^{k}$ is the probability that the kth ant, currently at node i, selects node j; $dis_{ij}$ is the distance between node i and node j; $\tau_{ij}$ is the pheromone on the arc between node i and node j; and $Tabu_{k}$ is the list of nodes from which the kth ant may choose its next node. Updating of the pheromones is performed by the ants via the following formulas:

$\Delta\tau_{ij}^{k} = \begin{cases} Q/L(k) & (i,j) \in T^{k}\ \forall k \\ 0 & \text{otherwise} \end{cases}$   (2)

$\Delta\tau_{ij} = \sum_{k=1}^{\text{number of ants}} \Delta\tau_{ij}^{k}$   (3)

$\tau_{ij} = (1-\rho)\,\tau_{ij} + \rho\,\Delta\tau_{ij}$   (4)

where $\tau_{ij}$ is the pheromone on the arc between node i and node j, $\rho$ is the decay parameter, $T^{k}$ is the minimum-length tour found by the kth ant, $L(k)$ is its corresponding length, and Q is a constant parameter.
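As an illustration of equation (1), the short Python sketch below performs the roulette-wheel choice of the next node for one ant. It is our own minimal reconstruction, not code from the paper: the distance matrix dis, pheromone matrix tau and the list of allowed (unvisited) nodes are assumed to be given, and the default values α = 1 and β = 5 simply follow the values tuned later in Section 5.

```python
import numpy as np

def select_next_node(i, allowed, tau, dis, alpha=1.0, beta=5.0,
                     rng=np.random.default_rng()):
    """Roulette-wheel selection of the next node for an ant at node i (equation (1))."""
    allowed = np.asarray(allowed)                       # nodes still in the ant's Tabu list
    weights = tau[i, allowed] ** alpha * (1.0 / dis[i, allowed]) ** beta
    probs = weights / weights.sum()                     # normalised as in equation (1)
    return int(rng.choice(allowed, p=probs))            # sample proportionally to probs
```

Repeating this choice n − 1 times, starting from a random city and removing each visited node from the allowed set, yields one complete tour of Algorithm 1.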

3.2 Ant colony system

The ACS differs from the AS in the following three main aspects (Dorigo and Gambardella, 1997):
•	the state transition rule
•	only the best ant applies the global updating rule
•	while ants construct a solution, they update the pheromone matrix.

The ACS works as shown in Algorithm 2 below:

Algorithm 2  The ant colony system algorithm for the TSP

    Initialise the pheromone trails
    Loop
        Each ant is positioned on a starting node
        Loop
            Each ant applies a state transition rule to incrementally build a solution and uses a local pheromone updating rule
        Until all ants have built a complete solution
        A global pheromone updating rule is applied
    Until end condition

In the ACS, an ant in city r chooses the city s according to the following equation (state transition rule):

$s = \begin{cases} \arg\max_{u \in Tabu_{k}} \left\{ \tau_{ru}^{\alpha}\,(1/dis_{ru})^{\beta} \right\} & q \le q_{0} \\ S & \text{otherwise} \end{cases}$   (5)

In this equation, q is a random number, $q_{0}$ is a parameter, and S is a random variable determined according to equation (1). The global updating rule of the ACS is the same as equation (2), except that only the best found tour (the best ant) is considered. The local pheromone updating rule of the ACS is given in equation (6):

$\tau_{ij} = (1-\rho)\,\tau_{ij} + \rho\,\Delta\tau_{ij}$   (6)

Three candidates for the value of $\Delta\tau_{ij}$ in this equation have been presented in Dorigo and Gambardella (1997).
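The pseudo-random proportional rule of equation (5) can be read as the following Python sketch. It is an illustrative reconstruction under the same assumptions as the previous snippet, and q0 = 0.9 is only a commonly used example value, not one stated in this paper.

```python
import numpy as np

def acs_next_city(r, allowed, tau, dis, alpha=1.0, beta=5.0, q0=0.9,
                  rng=np.random.default_rng()):
    """Pseudo-random proportional rule of equation (5)."""
    allowed = np.asarray(allowed)
    weights = tau[r, allowed] ** alpha * (1.0 / dis[r, allowed]) ** beta
    if rng.random() <= q0:
        return int(allowed[np.argmax(weights)])                 # exploitation: take the best arc
    return int(rng.choice(allowed, p=weights / weights.sum()))  # exploration: S, drawn as in equation (1)
```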


3.3 Max-min ant system

In the ACS, it may occur that the pheromone trail intensities on the arcs become so high that the same tours are constructed in successive iterations. This may lead to premature convergence, which is undesirable. In the MMAS (Stutzle and Hoos, 1997), only the best ant is allowed to update the pheromone trails in each iteration. The pheromone trails are limited to lie within maximum and minimum trail strengths (τmax and τmin) on the arcs, and the trails are initialised to their maximum value τmax. In fact, by bounding the trails, the differences between the trails are limited and the exploration of new, possibly better tours is increased. The MMAS algorithm is stated as Algorithm 3 below (Stutzle and Hoos, 1997):

Algorithm 3  The MMAS algorithm for the TSP

    For every edge (i, j) do
        τij(0) = τmax
    End for
    For k = 1 to m do
        Place ant k on a randomly chosen city
    End for
    Let T+ be the shortest tour found from the beginning and L+ its length
    For t = 1 to tmax do   /* Main loop */
        For k = 1 to m do
            Build tour Tk(t) by applying n − 1 times the following step:
                Choose the next city j with the probability given in equation (1)
        End for
        For k = 1 to m do
            Compute the length Lk(t) of the tour Tk(t) produced by ant k
        End for
        If an improved tour is found then
            Update T+ and L+
        End if
        For every edge (i, j) do
            Update the pheromone trails by applying the rule:
                τij = max(min((1 − ρ)τij + ρΔτij, τmax), τmin)
            where Δτij is the same as in equation (3), and Δτij^k = Q/L+ if (i, j) ∈ T^k ∀k, and 0 otherwise
        End for
        For every edge (i, j) do
            τij(t + 1) = τij(t)
        End for
    End for
    Print the shortest tour T+ and its length L+
    End of algorithm

Here, T+ is the shortest tour found and L+ is its length.

4 Proposed method

As mentioned earlier, the MMAS tries to prevent the arcs with small trail strength from vanishing by imposing a limit on the pheromone trails. It is obvious that the ants' knowledge about the problem space matures in higher iterations. Hence, adapting the interval (limit) of the pheromone matrix in each iteration can help the individuals to increase the exploration of new and possibly better solutions. Here, a new dynamic max-min ant system (DMAS) is proposed, which updates τmin in each iteration. In this method, the value of τmin is considered as a function of the maximum value in the current pheromone matrix. Consequently, the probability of selecting the arcs is adjusted according to the current best arc. The following equation shows an example of a possible function relating the value of $\max_{\forall i,j}(\tau_{ij}(t))$ to τmin:

$\tau_{\min}(t) = \frac{1}{a} \max_{\forall i,j}\left(\tau_{ij}(t)\right)$   (7)

In this equation, $\max_{\forall i,j}(\tau_{ij}(t))$ is the maximum value in the pheromone matrix at the current iteration t. In fact, in this method, τmin is a value that is updated in each iteration according to the maximum value of pheromone in the pheromone matrix τ. Algorithm 4 below shows the procedure of the proposed DMAS algorithm.

Algorithm 4  The DMAS algorithm for the TSP

    For every edge (i, j) do
        τij(0) = τ0
    End for
    For k = 1 to m do
        Place ant k on a randomly chosen city
    End for
    Let T+ be the shortest tour found from the beginning and L+ its length
    For t = 1 to tmax do   /* Main loop */
        Update τmin(t) according to equation (7)
        For k = 1 to m do
            Build tour Tk(t) by applying n − 1 times the following step:
                Choose the next city j with the probability given in equation (1)
        End for
        For k = 1 to m do
            Compute the length Lk(t) of the tour Tk(t) produced by ant k
        End for
        If an improved tour is found then
            Update T+ and L+
        End if
        For every edge (i, j) do
            Update the pheromone trails by applying the rule:
                τij = max((1 − ρ)τij + ρΔτij, τmin(t))
            where Δτij is the same as in equation (3), and Δτij^k = Q/L+ if (i, j) ∈ T^k ∀k, and 0 otherwise
        End for
        For every edge (i, j) do
            τij(t + 1) = τij(t)
        End for
    End for
    Print the shortest tour T+ and its length L+
    End of algorithm



In this algorithm, T+ is the best found tour and its length is denoted by L+. The value of τ0 may be chosen as a random value or as a constant; in the next section, this choice is investigated. In this method, the value of τmax is not important. The overhead of calculating τmin is very small, of constant order O(c), and can be disregarded, while the exploration of new tours is greatly increased.
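As a concrete reading of equation (7) and the update rule in Algorithm 4, a minimal Python sketch of one DMAS pheromone update is given below. It is our illustration, not the authors' MATLAB implementation; the default parameter values (Q = 50, ρ = 0.2, a = 455) are those reported later in Section 5, and best_tour/best_len stand for T+ and L+.

```python
import numpy as np

def dmas_pheromone_update(tau, best_tour, best_len, Q=50.0, rho=0.2, a=455.0):
    """One DMAS pheromone update (equations (3), (4) and (7))."""
    tau_min = tau.max() / a                    # equation (7): dynamic lower bound

    # Only the best ant deposits pheromone: Q / L+ on every arc of the best tour T+.
    delta = np.zeros_like(tau)
    for i, j in zip(best_tour, np.roll(best_tour, -1)):
        delta[i, j] = delta[j, i] = Q / best_len

    tau = (1.0 - rho) * tau + rho * delta      # evaporation plus deposit, equation (4)
    return np.maximum(tau, tau_min)            # clip from below only; tau_max is left unbounded
```

Calling this once per iteration, after all ants have built their tours, reproduces the pheromone-handling steps of Algorithm 4.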

5 Simulation results

In this section, the parameters of the DMAS are set via several experiments. After setting the parameters, the DMAS is applied to several standard TSPs and its results are compared with other well-known swarm-based methods. There are several parameters in the proposed method which can affect the performance of the DMAS. Hence, in the next subsection a case study is performed to find the best values of these parameters. Afterwards, the experimental results are presented and compared with some ant-based methods such as the MMAS, ACS, PMACO and IACO. Also, the method is compared with some other population-based methods such as the EA, IGGA and BCO. The DMAS is not compared with some of the swarm-based methods referenced in the related works section because their reported results are far worse than those of the proposed algorithm. In each experiment, standard test benches are used.

5.1 Parameter setting

The DMAS was implemented in the MATLAB environment and all experiments were performed on a personal computer with 512 MB of RAM and a 1.8 GHz CPU. In our implementation, we used equation (7) to update the value of τmin(t). The parameters of the DMAS were adjusted by applying the algorithm to the att48 TSP instance from the OR-Library (Beasley, 1990).
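The tuning protocol summarised in Table 1 (each row an average over five runs of 50 ants for 50 iterations) can be expressed with the small helper below. This is only an illustrative sketch: dmas_solve and att48 are hypothetical placeholders for the authors' MATLAB routine and the instance data.

```python
from statistics import mean

def average_tour_length(solver, instance, runs=5, **params):
    """Average tour length over independent runs, as used to fill one row of Table 1."""
    return mean(solver(instance, **params) for _ in range(runs))

# Hypothetical usage (dmas_solve and att48 are placeholders, not defined in this paper):
# for beta in (0.5, 1, 2, 3, 4, 5, 6):
#     print(beta, average_tour_length(dmas_solve, att48,
#                                     n_ants=50, iterations=50, Q=1, alpha=1, beta=beta))
```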

Table 1  Parameters of the proposed DMAS algorithm and its performance (tour length)

    Q    α    β     2-Opt-DMAS Result (Avr.)    DMAS Result (Avr.)
    1    1    0.5   35210                       57934
    1    1    1     34925                       49369
    1    1    2     34562                       40949
    1    1    3     34312                       37562
    1    1    4     33856                       36123
    1    1    5     33777                       34797
    1    1    6     33997                       35537
    10   1    5     33940                       35259
    20   1    5     33985                       35480
    30   1    5     34062                       35158
    40   1    5     33978                       35229
    50   1    5     33744                       34221
    60   1    5     33908                       34734
    70   1    5     34042                       35224

The parameters Q, α, β and a were varied to find the best performance of the algorithm for the mentioned problem. Table 1 shows these variations and the corresponding results. In this table, each result is the average of five runs; in each run, 50 ants were used to find a solution for att48 in 50 iterations. From the table, it is seen that the best performance of the algorithm appears when Q is 50, β is 5 and α is 1. In this test, the value of a was set to 200.

Figure 1  The red points show the results for various values of a where the algorithm includes 2-Opt (see online version for colours) [plots: cycle length versus a]
Notes: The black points illustrate the results of the algorithm without any additional local search. The STD is also presented.

In addition, the algorithm was applied to the test case att48 for various values of a. Figure 1 illustrates the behaviour of the algorithm in this test. It is seen in the figure that the minimum average tour length appears when the value of a is 455. Hence, in all experiments, the value of a has been chosen as 455. Also, two initialisation methods were considered for the pheromone matrix: random and constant. In random initialisation, the pheromone value on each arc is randomly selected in the interval [0, 1]. In constant initialisation, the pheromone trails are all initialised with a constant value, chosen here as 1.0. Figure 2 shows the results of the DMAS applied to the att48 test case with the pheromone trails initialised via the two mentioned methods. For each method, the DMAS was run five times independently.

Figure 2  Five independent runs of DMAS for the att48 test case: (a) initialised randomly, (b) initialised with the constant value 1.0 (see online version for colours) [plots: tour length versus iteration]

In the case of the constant initialisation method [Figure 2(a)], the algorithm found an average solution of 33843, whereas with random initialisation the average was 34004. Hence, in all experiments we use the constant initialisation approach, with the constant value chosen as 1.0.

5.2 Experimental results

In this section, the proposed DMAS algorithm is compared with some well-known algorithms using standard test benches from the OR-Library (Beasley, 1990). At first, the DMAS is compared with the traditional MMAS (Stutzle and Hoos, 1997) in Table 2. The results of applying the DMAS in combination with 2-Opt (DMAS-2Opt) are also reported in this table; a minimal sketch of the 2-Opt local search is given below. In this test, the algorithm was terminated by a restriction on the CPU time, and 50 ants were used. The CPU time restriction was chosen as 30 seconds for eil51 and 100 seconds for kroa100, as used in Stutzle and Hoos (1997).
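The paper does not list its 2-Opt implementation, so the following is only a sketch of a standard first-improvement 2-Opt pass of the kind commonly paired with ant systems: it repeatedly reverses a tour segment whenever exchanging two edges shortens the tour.

```python
def two_opt(tour, dist):
    """First-improvement 2-Opt local search on a closed tour (list of city indices)."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # Skip j = n - 1 when i = 0, since those two edges share a city.
            for j in range(i + 2, n - (1 if i == 0 else 0)):
                a, b = tour[i], tour[(i + 1) % n]
                c, d = tour[j], tour[(j + 1) % n]
                # Replace edges (a, b) and (c, d) by (a, c) and (b, d) if that is shorter.
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```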

Table 2  The comparative results for the proposed DMAS algorithm and the MMAS

    Description                            eil51          kroa100
    Average tour lengths
      DMAS                                 433.6 (6.02)   21734 (370)
      MMAS (Stutzle and Hoos, 1997)        502 (N/A)      63070 (N/A)
      Improvement                          14%            66%
      DMAS+2Opt                            426.6 (0.54)   21282 (0)
      MMAS+2Opt (Stutzle and Hoos, 1997)   427.5 (N/A)    21290 (N/A)
      Improvement                          0.5%           0.5%
    Best tour lengths
      DMAS                                 427            21334
      DMAS+2Opt                            426            21282

    Note: Integer tour lengths are reported; values in parentheses are standard deviations.
    Source: Stutzle and Hoos (1997).

The algorithm was run ten times independently, and the average, best and STD (standard deviation) of the tour lengths are reported in Table 2 (in this test, the tour lengths are integer). As seen from the table, the DMAS improves the results of the MMAS by up to 66%. Also, among the hybrid approaches (a combination of the algorithms with local search), the DMAS-2Opt outperforms the MMAS-2Opt in both mentioned cases. With the hybrid algorithm (DMAS in combination with 2-Opt), the proposed method finds the optimum tour for kroa100 in all runs. The optimum solution of kroa100 found by the proposed method and its convergence curve are illustrated in Figure 3.


Figure 3  The convergence curve and the best tour found by the 2-Opt-DMAS in 80 iterations (less than 100 sec.) (see online version for colours) [plot: distance versus iteration; best cycle length so far: 21282]

Table 3 shows the results for four test cases: 'eil51', 'kroa100', 'd198' and 'lin318'. In this test, the DMAS did not use the 2-Opt local search, and a time constraint was used as the termination criterion. The chosen times for the test cases were 30, 100, 200 and 500 seconds, respectively, as used in Stutzle and Hoos (1997). The average tour lengths were calculated over five independent runs of the DMAS. The MMAS-c and DMAS-c rows in Table 3 show the MMAS and the DMAS when using the candidate set (Stutzle and Hoos, 1997); a sketch of the candidate list idea is given after the table. Table 3 shows that the proposed DMAS finds better solutions in comparison with the MMAS and the MMAS-c without any additional information about the problems. Also, the candidate list (set) approach can improve the DMAS and leads to better tours. The size of the candidate list was considered as cl = 15.

Table 3  The comparison results between the DMAS, DMAS-c, MMAS and MMAS including the candidate set (MMAS-c)

    Description                          eil51     kroa100    d198      lin318
    Without candidate list
      MMAS (Stutzle and Hoos, 1997)      502       63070      67910     314182
      DMAS                               433.6     21734      17054     46857
      Improvement                        14%       65%        74%       85%
    With candidate list
      MMAS-c (Stutzle and Hoos, 1997)    446       26127      24703     55170
      DMAS-c                             429.1     22050      16888     44405.53
      Improvement                        3%        16%        32%       20%
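The candidate list restricts each selection step to the cl nearest neighbours of the current city, falling back to all unvisited cities only when none of those neighbours remains unvisited. The sketch below is our own illustration of that idea with cl = 15 (the paper does not give its exact implementation); the restricted set then replaces the full Tabu list in the selection rule of equation (1).

```python
import numpy as np

def build_candidate_lists(dis, cl=15):
    """For each city, keep the indices of its cl nearest neighbours (assumes a zero diagonal)."""
    order = np.argsort(dis, axis=1)        # nearest first; column 0 is the city itself
    return order[:, 1:cl + 1]

def restrict_choices(current, unvisited, cand_lists):
    """Prefer unvisited candidate-list cities; otherwise fall back to all unvisited cities."""
    preferred = [c for c in cand_lists[current] if c in unvisited]
    return preferred if preferred else list(unvisited)
```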

Table 4 compares the DMAS with the ACS (Dorigo and Gambardella, 1997). In this test, the population size (the number of ants) was set to 20, as selected in Dorigo and Gambardella (1997), and both algorithms were executed for 1250 iterations, as defined in Dorigo and Gambardella (1997). The tour lengths reported in Table 4 are the best found over five independent runs. It is worth mentioning that neither the DMAS nor the ACS used any local search in this test.

Table 4  The comparison results for DMAS and ACS

    Problem name    DMAS        ACS
    eil50           427.8552    427.96
    eil75           542.3227    542.37
    kroa100         21285.44    21285.44
    d198            15780       15888
    att532          27686       28147

    Note: The algorithms do not include any additional local search.
    Source: Dorigo and Gambardella (1997).

In Table 4, it is seen that the DMAS finds better solutions for eil50, eil75, d198 and att532 in comparison with the ACS. In the kroa100 test case, the DMAS and the ACS achieve the same tour length. The DMAS parameters were set as follows for this test:
•	Q = 50
•	ρ = 0.2
•	τmin = (maximum value in the matrix τ in the current iteration)/455
•	τmax = infinite
•	α = 1
•	β = 5
•	the best ant updates the pheromone matrix τ.

It should be noted that the DMAS and the ACS utilised the candidate list with cl = 15 for the d198 and att532 test problems. Figure 4 shows the convergence curves of the DMAS for 'eil50', 'eil75' and 'kroa100' over 1250 iterations. Figure 4(a) shows the curves for five runs of the DMAS applied to 'eil50'; the figure shows that the algorithm found the known optimum solution (427.8552) in its second run. In Figure 4(b), the convergence curves of five runs for 'eil75' are presented; in this case, the DMAS algorithm found the known optimum solution in the fourth run. Finally, the DMAS was applied to 'kroa100' and its five runs are shown in Figure 4(c); the DMAS found the known optimum solution in its first run.

Figure 4  The convergence trace of the DMAS applied to (a) eil50, (b) eil75 and (c) kroa100 (see online version for colours) [plots: tour length versus iteration for five runs]

Figure 5 shows the solutions achieved in these runs for these problems after 1250 iterations.

Figure 5  Results of applying the DMAS to (a) eil50, (b) eil75 and (c) kroa100 (see online version for colours) [plots: tour length versus run number]

Figure 5(a) shows that the algorithm found solutions within 0.2% error, measured as $\frac{\text{worst achieved solution} - \text{known optimum}}{\text{known optimum}} \times 100$, of the known optimum solution in all runs for 'eil50'. For 'eil75', in the third run the error from the best known solution was 1.4%, and the other four runs all have errors of less than 0.7%. The maximum error for 'kroa100' is 0.6%, which appeared in the fourth run. In addition, the DMAS algorithm was applied to some other test cases, which have been presented in Yan et al. (2005), Teodorovic and Dell'Orco (2005) and Song et al. (2006). In these test cases, the parameters were set as follows:
•	15 runs for each test bench and 100 iterations per run
•	population size: 50
•	Q = 50
•	ρ = 0.2
•	τmin = (maximum value in the matrix τ in the current iteration)/455
•	τmax = infinite
•	α = 1
•	β = 5

The DMAS was also applied to some bigger problems from TSPLIB95 (TSP, 2005) and compared with some newer algorithms (Table 6); in some cases, the DMAS improves on the best results reported so far in TSPLIB (TSP, 2005).

In Table 5, the DMAS has been compared with other algorithms. The table shows that the DMAS works efficiently: it finds the optimum tour in all cases and performs better in comparison with the results in Yan et al. (2005), Teodorovic and Dell'Orco (2005) and Song et al. (2006).

Table 5  The comparison results between the proposed algorithm and some other algorithms in terms of tour lengths

    Problem name   EA (Yan et al., 2005)   BCO (Teodorovic and Dell'Orco, 2005)   IACO (Song et al., 2006)   2-Opt-DMAS, average in 15 runs   2-Opt-DMAS, best
    att48          -                       -                                      33523.7085                 33552                            33523.70
    eil51          -                       431.121                                428.9806                   428.8718                         428.87
    eil76          544.369                 -                                      -                          545                              544.3691
    pr76           -                       108790                                 -                          108588                           108159.43
    st70           677.109                 678.621                                677.1096                   677.1096                         677.1096
    kroa100        21285.4432              21441                                  -                          21285.4432                       21285.4432

Figure 6 illustrates the best tours found by the proposed DMAS algorithm for the test benches in Table 5. Figure 7 shows the convergence curves of the DMAS-2Opt for 'eil51', 'eil76' and 'kroa100' over 100 iterations. Figure 7(a) shows the curves of five independent runs for 'eil51'; the algorithm reached the known optimum solution in all runs. The DMAS-2Opt was applied five times to 'eil76' and the convergence curves are shown in Figure 7(b). Figure 7(c) shows the curves for kroa100. In Figure 8, the achieved solutions in these runs are presented. Figure 8(a) shows the results of five runs of the DMAS-2Opt applied to 'eil51'; the figure shows that the DMAS-2Opt found the known optimum solution in all runs. Figure 8(b) shows these results for 'eil76'; in this case, the algorithm found the known optimum solution in four runs. In the first run of the DMAS algorithm, the optimum tour was not found, but its error is less than 0.2%. Moreover, Figure 8(c) shows the results of applying the DMAS-2Opt to 'kroa100'; in this case, the algorithm achieved the known optimal solution in all runs.

Figure 6  The best tours for the test benches in Table 5 found by the DMAS (see online version for colours)
    Test bench: att48, tour length: 33523.71
    Test bench: eil51, tour length: 428.87
    Test bench: eil76, tour length: 544.3691
    Test bench: st70, tour length: 677.1096
    Test bench: pr76, tour length: 108159.4383
    Test bench: kroa100, tour length: 21285.4432

Figure 7  The convergence traces for five independent runs of the DMAS-2Opt applied to (a) eil51, (b) eil76 and (c) kroa100 (see online version for colours) [plots: tour length versus iteration]

Figure 8  The results of applying the DMAS-2Opt to (a) eil51, (b) eil76 and (c) kroa100 (see online version for colours) [plots: tour length versus run number]

In Table 6, the DMAS and the DMAS-2Opt were used to solve these problems. In this case, 50 ants cooperating with each other were used to solve the problems in 1,250 iterations. For the DMAS-2Opt, the number of iterations was reduced to 200 and the 2-Opt algorithm was used to improve the tours found by the ants. As seen in Table 6, in some cases the optimal tours reported in TSP (2005) have been improved by the DMAS. Also, the DMAS-2Opt has improved the reported tour lengths for the problems 'Ch130', 'Gr120', 'Gr202' and 'Pr107'. In addition, in some cases the DMAS algorithm has found better tours in comparison with the IGGA (Zeng and Gu, 2007) and the PMACO (Song et al., 2006). The IGGA is an algorithm based on the genetic algorithm in combination with an immune system; it utilises a population size in the range [500, 1000] for the problems in Table 6 and terminates after between 500 and 3,000 iterations. The PMACO is an improved algorithm based on ACO; its population size and number of iterations have not been reported in Song et al. (2006).


Table 6  The comparative results between the DMAS, the DMAS-2Opt and some new algorithms for problems with more than 100 cities; in each test case, only the best found tour length is reported

    Problem name   Results in TSPLIB (TSP, 2005)   DMAS        DMAS-2Opt   Other algorithms (name: result)
    Ch130          6110.86                         6148.98     6110.72     IGGA (Zeng and Gu, 2007): 6110.72
    Ch150          6532.281                        6552.54     6532.281    IGGA: 6532.281
    Gr120          1666.5087                       1629.45     1610.69     PMACO (Song et al., 2006): 1648.7840
    Gr202          549.998                         502.14      490.20      IGGA: 536.472
    Pr107          44538                           44614.17    44301.68    IGGA: 44280

6 Conclusions

In this paper, an ant-based algorithm named DMAS for solving the TSP was proposed. The DMAS algorithm adopts the max-min approach of ant systems with a new adaptive restriction method for the pheromone matrix. The restriction method's complexity is of order O(c), which is negligible. The proposed DMAS algorithm was applied to some standard test benches from standard libraries and compared with some recent approaches. The results of applying the DMAS algorithm to some test cases showed that the DMAS works considerably better than the MMAS (Tables 2 and 3). Also, the algorithm was compared with the ACS, and the results showed that the proposed DMAS method performs better than the ACS in the reported cases (Table 4). In addition, a local search (2-Opt) was combined with the DMAS, and the results of applying the hybrid method (DMAS-2Opt) were compared with the MMAS-2Opt; in this case, the proposed method works well and outperforms the MMAS-2Opt. Also, the candidate list approach (DMAS-c) was adjoined to the DMAS and the results were compared with the MMAS-c (MMAS with candidate list); the results showed that the DMAS-c performs better than the MMAS-c (Table 3). Furthermore, the DMAS-2Opt was applied to some standard problems and the results showed that the DMAS method works very well and outperforms some other swarm-based methods: in all cases, the DMAS found the optimum solution and performed better than the algorithms reported in Table 5. Finally, the DMAS was applied to some other TSP cases to test its performance on bigger problems; the DMAS improves the best known solutions reported so far in some cases (Table 6). Also, the DMAS-2Opt was applied to these cases and the results were compared with some recent methods (Table 6). The experimental results showed that

the DMAS algorithm works better than the traditional ant-based methods. In summary, the proposed DMAS often outperforms the other well-known swarm-based optimisation algorithms in solving the TSP. In future work, we are going to apply the DMAS to other NP-hard problems such as the multidimensional knapsack problem (MKP) and the job shop scheduling problem (JSSP).

References

Beasley, J.E. (1990) 'OR-library: distributing test problems by electronic mail', Journal of the Operational Research Society, Vol. 41, No. 11, pp.1069–1072.

Bonyadi, M.R., Rahimi Azghadi, S.M. and Shah-Hosseini, H. (2007) 'Solving travelling salesman problem using combinational evolutionary algorithm', in Boukis, C., Pnevmatikakis, L. and Polymenakos, L. (Eds.): IFIP International Federation for Information Processing, Artificial Intelligence and Innovations 2007: From Theory to Applications, Springer, Boston, Vol. 247, pp.37–44.

Bonyadi, M.R., Azghadi, M.R. and Shah-Hosseini, H. (2008) 'Population-based optimization algorithms for solving the travelling salesman problem', invited chapter in Travelling Salesman Problem, ITECH Publications, April 2008, ISBN 978-3-902613-31-8, pp.1–34.

Dorigo, M. and Gambardella, L.M. (1997) 'Ant colony system: a cooperative learning approach to the traveling salesman problem', IEEE Transactions on Evolutionary Computation, April, Vol. 1, No. 1, ISSN 1089-778X.

Dorigo, M., Maniezzo, V. and Colorni, A. (1996) 'The ant system: optimization by a colony of cooperating agents', IEEE Transactions on Systems, Man, and Cybernetics – Part B, Vol. 26, No. 1, pp.29–42.

Duan, H. and Yu, X. (2007) 'Hybrid ant colony optimization using memetic algorithm for traveling salesman problem', Proceedings of the 2007 IEEE Symposium on Approximate Dynamic Programming and Reinforcement Learning (ADPRL 2007).

Durbin, R. and Willshaw, D. (1987) 'An analogue approach to the travelling salesman problem using an elastic net approach', Nature, Vol. 326, No. 6114, pp.689–691.

Guo, J., Wu, Y. and Liu, W. (2006) 'An ant colony optimization algorithm with evolutionary operator for traveling salesman problem', Proceedings of the Sixth International Conference on Intelligent Systems Design and Applications (ISDA'06), IEEE.

Lin, S. and Kernighan, B. (1973) 'An effective heuristic algorithm for the traveling-salesman problem', Operations Research, Vol. 21, No. 2, pp.498–516.

Martin, O., Otto, S. and Felten, E. (1991) 'Large-step Markov chains for the traveling salesman problem', Complex Systems, Vol. 5, No. 3, pp.299–326.

Padberg, M. and Rinaldi, G. (1987) 'Optimization of a 532-city symmetric travelling salesman problem by branch and cut', Operations Research Letters, Vol. 6, No. 1, pp.1–7.

Shah-Hosseini, H. (2007) 'Problem solving by intelligent water drops', IEEE Congress on Evolutionary Computation (CEC 2007), pp.3226–3231.

Shah-Hosseini, H. (2009) 'The intelligent water drops algorithm: a nature-inspired swarm-based optimisation algorithm', Int. J. Bio-Inspired Computation, Vol. 1, Nos. 1/2, pp.71–79.

Shang, G., Lei, Z., Fengting, Z. and Chunxian, Z. (2007) 'Solving traveling salesman problem by ant colony optimization algorithm with association rule', Third International Conference on Natural Computation (ICNC 2007), IEEE.

Song, X., Li, B. and Yang, H. (2006) 'Improved ant colony algorithm and its applications in TSP', Proceedings of the Sixth International Conference on Intelligent Systems Design and Applications (ISDA'06).

Stutzle, T. and Hoos, H. (1997) 'MAX-MIN ant system and local search for the traveling salesman problem', Proc. 1997 IEEE International Conference on Evolutionary Computation.

Symmetric traveling salesman problem (TSP) (2005) available at http://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95

Teodorovic, D. and Dell'Orco, M. (2005) 'Bee colony optimization: a cooperative learning approach to complex transportation problems', Advanced OR and AI Methods in Transportation, pp.51–60.


Wu, P., Yang, K. and Fang, H. (2006) 'A revised EM-like algorithm + K-OPT method for solving the traveling salesman problem', Proceedings of the First International Conference on Innovative Computing, Information and Control, ISBN 0-7695-2616-0/06.

Yan, X.S., Li, H., Cai, Z.H. and Kang, L.S. (2005) 'A fast evolutionary algorithm for combinatorial optimization problem', Proceedings of the Fourth International Conference on Machine Learning and Cybernetics, pp.3288–3292.

Yuan, Z., Yang, L., Wu, Y., Liao, L. and Li, G. (2007) 'Chaotic particle swarm optimization algorithm for traveling salesman problem', Proceedings of the IEEE International Conference on Automation and Logistics, Jinan, China.

Zeng, C. and Gu, T. (2007) 'A novel immunity-growth genetic algorithm for traveling salesman problem', Third International Conference on Natural Computation (ICNC 2007), ISBN 0-7695-2875-9/07.
