Solving NP-Hard Problems with Physarum-Based Ant Colony System

Yuxin Liu, Chao Gao, Zili Zhang, Yuxiao Lu, Shi Chen, Mingxin Liang, and Li Tao

Abstract—NP-hard problems exist in many real-world applications. Ant colony optimization (ACO) algorithms can provide approximate solutions for those NP-hard problems, but the performance of ACO algorithms is significantly reduced by premature convergence and weak robustness. With these observations in mind, this paper proposes a Physarum-based pheromone matrix optimization strategy in ant colony system (ACS) for solving NP-hard problems such as the traveling salesman problem (TSP) and the 0/1 knapsack problem (0/1 KP). One of the unique characteristics of the Physarum-inspired mathematical model is that critical tubes can be reserved in the process of network evolution. The optimized updating strategy employs this unique feature and accelerates the positive feedback process in ACS, which contributes to the quick convergence of the optimal solution. Some experiments were conducted using both benchmark and real datasets. The experimental results show that the optimized ACS outperforms other meta-heuristic algorithms in accuracy and robustness for solving TSPs. Meanwhile, its convergence rate and robustness for solving 0/1 KPs are better than those of the classical ACS.

Index Terms—Physarum-inspired mathematical model, ant colony system, NP-hard problem, traveling salesman problem, 0/1 knapsack problem, positive feedback mechanism
1 INTRODUCTION
Some real-world problems, such as route designing and goods dispatching, can be formulated as a traveling salesman problem (TSP) and a 0/1 knapsack problem (0/1 KP), respectively, which are two classical NP-hard problems [1], [2]. Designing efficient approaches to solving NP-hard problems therefore has great practical significance. More specifically, many meta-heuristic algorithms based on swarm intelligence, such as ant colony optimization (ACO) [3], [4], [5], particle swarm optimization (PSO) [6], [7], genetic algorithms (GA) [8], [9] and the artificial fish swarm algorithm (AFSA) [10], have been proposed for solving a TSP. In particular, ACO algorithms were originally designed for and have a long tradition in solving a TSP [3], [11]. Moreover, ACO algorithms, as powerful nature-inspired algorithms, can also be used for solving a 0/1 KP [12], [13], [14], [15], [16], [17]. Therefore, ACO algorithms have been considered among the most efficient algorithms for finding solutions to NP-hard problems [11], [18]. However, ACO algorithms generally suffer from premature convergence and stagnation, and the convergence
rates of ACO algorithms are low for solving these NP-hard problems, especially as the problem scale increases [19], [20], [21], [22], [23], [24].

Recently, a unicellular and multi-headed slime mold, Physarum polycephalum, has shown an ability to form self-adaptive and highly efficient networks in biological experiments [25], [26], [27], [28], [29], [30]. Moreover, Tero et al. have captured the positive feedback mechanism of Physarum in foraging and have built a Physarum-inspired mathematical model (PMM) [31], [32], [33]. PMM exhibits a unique feature: critical tubes are reserved in the process of network evolution. Critical tubes can be seen as the optimal routes for ants to travel. Taking advantage of this feature, Zhang et al. have proposed an optimization strategy for updating the pheromone matrix in ACO algorithms based on PMM [34], [35]. The effectiveness of the optimization strategy is verified by comparing the optimal solution, convergence rate and robustness of the optimized ACO algorithms, named Physarum model-based ACO (PMACO) algorithms, with those of the corresponding original ACO algorithms for solving a TSP on benchmark datasets. Their work obtains primary achievements in improving the search ability of ACO algorithms for solving a TSP.

In order to highlight the excellent performance and practical significance of the optimization strategy in ACO for solving a TSP, and to extend PMACO algorithms to other NP-hard problems (e.g., 0/1 KPs), we devote this paper to answering the following three questions:
1) Is the efficiency of PMACO algorithms better than that of other traditional meta-heuristic algorithms (e.g., PSO, GA and AFSA) for solving TSPs?
2) What is the efficiency of PMACO algorithms when solving real-world transportation problems?
3) Can the optimization strategy optimize ACO algorithms for solving other NP-hard problems (e.g., 0/1 KPs)?

The organization of this paper is as follows. In Section 2, we first outline the framework of meta-heuristic algorithms
for solving a TSP and give some measurements that can be used to estimate the performance of algorithms when solving NP-hard problems. Then, taking PM-ACS (Physarum model-based ant colony system, one of the typical PMACO algorithms) as an example, we present the basic ideas of the optimization strategy for ant colony system (ACS) to solve a TSP and a 0/1 KP, respectively. In Section 3, we first provide comparative results of different algorithms for solving TSPs on benchmark and real datasets. Then, we compare the convergence rate and robustness of PM-ACS with those of the corresponding ACS for solving 0/1 KPs. Finally, we analyze the influence of the main parameters on the performance of PM-ACS. Section 4 concludes this paper.
2 RELATED WORK
Section 2.1 first formulates two types of NP-hard problems. Then, Section 2.2 describes the basic ideas of traditional meta-heuristic algorithms (i.e., ACO, PSO, GA and AFSA) for solving a TSP and the basic ideas of ACS for solving a 0/1 KP. Finally, Section 2.3 presents the working mechanisms of PM-ACS for solving these two problems.
2.1 Description of Two Classical NP-Hard Problems

2.1.1 The Formulation of a TSP
A TSP can be formulated as follows: for a complete weighted graph G = (N, E), let the nodes N = {1, 2, ..., y} represent the geographical locations of y cities, and the edges E = {(i, j) | i, j ∈ N, i ≠ j} represent a set of roads. The distance (i.e., the length of a road) between two cities i and j is denoted as d_ij. TSP solution algorithms are then used to find the shortest Hamiltonian circuit x, which is the optimal route. The length of x, denoted as S_min, is shown in (1), where x_i represents the i-th city in the Hamiltonian circuit x, and x_i ∈ N.

S_min = min( \sum_{i=1}^{y-1} d_{x_i x_{i+1}} + d_{x_y x_1} )    (1)
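As an illustration of (1), the following minimal Python sketch (not part of the original paper; the distance matrix and the example tour are hypothetical) evaluates the length of a candidate Hamiltonian circuit.

def tour_length(tour, d):
    """Length of the closed tour x = (x_1, ..., x_y) under distance matrix d, as in (1)."""
    y = len(tour)
    length = sum(d[tour[i]][tour[i + 1]] for i in range(y - 1))
    return length + d[tour[-1]][tour[0]]  # return to the starting city

if __name__ == "__main__":
    # A toy 4-city symmetric instance (hypothetical data).
    d = [[0, 2, 9, 10],
         [2, 0, 6, 4],
         [9, 6, 0, 8],
         [10, 4, 8, 0]]
    print(tour_length([0, 1, 3, 2], d))  # 2 + 4 + 8 + 9 = 23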
2.1.2 The Formulation of a 0/1 KP
A 0/1 KP can be formulated as follows: there are h items with various values and weights, and a knapsack with a fixed capacity that is less than the total weight of the items. Items are selected and loaded into the knapsack so as to make the total value of the items in the knapsack as high as possible, while the total weight cannot exceed the capacity of the knapsack, and each item can be selected at most once [1]. Suppose c is the capacity of the knapsack, and w_i and v_i stand for the weight and value of an item i, respectively. Each item has a binary decision variable g_i, where g_i = 1 means the item is selected and g_i = 0 means the opposite. The goal of a 0/1 KP, as shown in (2), is to maximize the total value of the items in the knapsack under the condition that the total weight is less than or equal to the capacity of the knapsack.

S_max = max \sum_{i=1}^{h} v_i g_i
s.t.  \sum_{i=1}^{h} w_i g_i ≤ c,  g_i ∈ {0, 1},  i = 1, 2, 3, ..., h    (2)
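To make (2) concrete, here is a minimal Python sketch (illustrative only; the three-item instance is hypothetical) that evaluates the total value of a selection vector g and checks the capacity constraint.

def knapsack_value(g, v, w, c):
    """Total value of selection g under (2); returns None if the weight exceeds c."""
    total_weight = sum(wi * gi for wi, gi in zip(w, g))
    if total_weight > c:
        return None  # infeasible selection
    return sum(vi * gi for vi, gi in zip(v, g))

if __name__ == "__main__":
    v, w, c = [60, 100, 120], [10, 20, 30], 50   # toy instance
    print(knapsack_value([0, 1, 1], v, w, c))    # 220, weight 50 <= c
    print(knapsack_value([1, 1, 1], v, w, c))    # None, weight 60 > c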
2.2 Algorithms for Solving a TSP and a 0/1 KP

2.2.1 Solving a TSP with Meta-Heuristic Algorithms
There have been many approaches to solving a TSP, ranging from mathematical modeling to meta-heuristic algorithms [36], [37], [38], [39]. In particular, many meta-heuristic algorithms can find approximate solutions in reasonable time. In the following, we take some meta-heuristic algorithms (i.e., ACO [3], PSO [6], GA [8] and AFSA [10]) as examples to describe the process of solving a TSP. The framework is shown in Fig. 1.
ACO [3] uses the track of ants to represent a solution of a TSP. First, ants are randomly placed on cities. Then, each ant chooses the next city according to the heuristic information and the pheromone information on the roads. Ants communicate with each other through a chemical substance called pheromone. The more pheromone on a road, the more likely that road is to be selected by ants; and the more times a road is visited by ants, the more pheromone is released on it. Such a self-reinforcing behavior leads all ants to follow the short route when solving a TSP. Finally, the route traveled by the global best ant is the optimal solution of the TSP.

PSO [6] searches the space with individual vectors, called particles. Each particle utilizes a sequence of cities to represent a solution of a TSP. First, each particle is initialized with a random sequence and a random swap sequence, namely its velocity. Then, all particles calculate their new sequences based on their own previous best sequence and the previous global best sequence of all particles. Finally, the best sequences of the particles and the global best sequence are updated. The global best sequence constitutes the optimal solution of the TSP when the sequence is good enough or the algorithm reaches the maximum number of iterations.

GA [8] uses an integer string representation to encode a solution, where each integer stands for a particular city, and a string, i.e., a permutation of integers, represents a route. First, some permutations are randomly generated. Then, specialized crossover and mutation operations, including local search, are designed to generate new offspring permutations from parents. Only the best permutations are allowed to become parents and to generate offspring. Finally, the optimal solution of the TSP is given by the best permutation found in this process.

AFSA [10] utilizes a sequence of cities to represent the state of an artificial fish, which encodes a solution of a TSP. First, each fish is initialized with a random state. A matrix, called the bulletin board, is used to record the state of the fish that has the optimal solution. Then, each fish updates its state based on foraging, clustering, colliding and random behaviors, and updates the bulletin board if its new state is better than the one on the bulletin board. Finally, the state of the global best artificial fish, i.e., the optimal solution of the TSP, emerges on the bulletin board.

In order to evaluate the efficiency and the robustness of TSP solution algorithms, and the influence of the positive
Fig. 1. The framework of meta-heuristic algorithms for solving a TSP. The large box on the left shows the unified process of the meta-heuristics, and the boxes on the right show the specific process of each algorithm. The rounded boxes at the bottom of each algorithm show the representation of the optimal solution for solving a TSP.
feedback mechanism on an ant colony system, some measurements are defined as follows:
1) S_min stands for the optimal value of a TSP, which is the length of the shortest Hamiltonian circuit calculated by an algorithm.
2) S_average stands for the average value of the results of a TSP, and S_variance stands for the variance of the results, which can be seen as a promotion of the t-test. They are averaged over C runs; for example, S_average is calculated as S_average = \sum_{i=1}^{C} S_min^{i,steps(k)} / C, where S_min^{i,steps(k)} represents the optimal solution in the k-th step of the i-th run.
3) S_total stands for the total length traveled by all ants in an iteration, which is calculated as S_total = \sum_{ant=1}^{s} S_ant, where S_ant represents the length traveled by one ant in the iteration, and s is the total number of ants. The greater the influence of the positive feedback mechanism on the ant colony system, the shorter the paths traveled by the ants.
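A minimal Python sketch of these measurements (not from the paper; the run results below are hypothetical) could look as follows.

def summarize_runs(best_lengths):
    """S_min, S_average and S_variance over C independent runs of a TSP solver."""
    c = len(best_lengths)
    s_min = min(best_lengths)
    s_average = sum(best_lengths) / c
    s_variance = sum((x - s_average) ** 2 for x in best_lengths) / c
    return s_min, s_average, s_variance

def total_tour_length(tour_lengths_of_all_ants):
    """S_total: the total length traveled by the s ants in one iteration."""
    return sum(tour_lengths_of_all_ants)

if __name__ == "__main__":
    # Hypothetical best tour lengths from C = 5 runs.
    print(summarize_runs([430.0, 428.5, 432.1, 429.0, 431.4]))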
2.2.2 Solving a 0/1 KP with ACS
The ideas of ACS [13] for solving a 0/1 KP are a little different from those of ACS for solving a TSP, because there is no concept of a path in a 0/1 KP. An ant k selects an item i with a certain probability p_i^k, as shown in (3), where τ_i is the pheromone concentration of a selected item i, and η_i = v_i / w_i is the heuristic information, which represents the expectation of the ant k choosing the item i. α and β determine the relative importance of the pheromone concentration and the heuristic information, respectively. tabu_k is a list that stores the items already chosen by the ant k in the current iteration. The updating strategy of the global pheromone matrix is shown in (4) and (5), where ρ is the pheromone evaporation rate, B is a constant, V_best stands for the value of the best solution found by the ants in an iteration, and tabu_best is the tabu list of the best ant. The updating strategy of the local pheromone matrix is shown in (6), where τ_0 represents the initial pheromone of each item.

p_i^k(t) = τ_i^α(t) η_i^β(t) / \sum_{s ∉ tabu_k} τ_s^α(t) η_s^β(t),  if i ∉ tabu_k;  0, otherwise    (3)

τ_i ← (1 − ρ) τ_i + ρ Δτ_i^best    (4)

Δτ_i^best = (B / V_best) · v_i,  if i ∈ tabu_best;  0, otherwise    (5)

τ_i ← (1 − ρ) τ_i + ρ τ_0    (6)
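The following Python sketch is illustrative only; it follows the reconstruction of (3)-(6) given above (in particular, the exact form of (5) is reconstructed from the damaged original) rather than any reference code, and the function names are hypothetical.

import random

def select_item(tau, eta, tabu, alpha, beta):
    """Sample an item index according to the probabilities in (3); assumes at least
    one item is still outside tabu."""
    candidates = [i for i in range(len(tau)) if i not in tabu]
    weights = [(tau[i] ** alpha) * (eta[i] ** beta) for i in candidates]
    total = sum(weights)
    r, acc = random.uniform(0.0, total), 0.0
    for i, wgt in zip(candidates, weights):
        acc += wgt
        if acc >= r:
            return i
    return candidates[-1]

def global_update(tau, tabu_best, v, v_best, rho, b):
    """Global pheromone update (4)-(5) on the items of the best solution."""
    for i in tabu_best:
        delta = b * v[i] / v_best       # reconstructed form of (5)
        tau[i] = (1.0 - rho) * tau[i] + rho * delta

def local_update(tau, i, rho, tau0):
    """Local pheromone update (6) after item i has been selected."""
    tau[i] = (1.0 - rho) * tau[i] + rho * tau0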
In order to evaluate the efficiency and robustness of 0/1 KP solution algorithms, some measurements are defined as follows:
1) S_max stands for the maximum total value of the items loaded into the fixed-capacity knapsack of a 0/1 KP.
2) S_average and S_variance stand for the average value and the variance of S_max over repeated runs of a 0/1 KP, respectively.
2.3 PM-ACS for Solving a TSP and a 0/1 KP
Although the aforementioned algorithms can obtain approximate solutions to these NP-hard problems, they generally suffer from low efficiency and premature convergence [19], [40]. Hence, we present a novel ACS for solving a TSP and a 0/1 KP in the following sections.

2.3.1 The Formulation of PM-ACS for Solving a TSP
The new ACS, denoted as PM-ACS, employs an optimization strategy for updating the global pheromone matrix in ACS based on PMM for solving a TSP [34]. As shown in
Fig. 2. The illustration of the working mechanism of PM-ACS for solving a TSP. Food sources and tubes of the Physarum network represent cities and roads in the road network, respectively.
Fig. 2, we assume that there is a Physarum network with pheromone flowing in its tubes. Specifically, in the Physarum network, the more pheromone flows through a tube, the wider the tube becomes. After each iteration of the ant colony system, the amount of pheromone flowing in each tube of the Physarum network can be calculated. When updating the global pheromone matrix, PM-ACS considers both the pheromone released by ants and the flowing pheromone in the Physarum network. Since the critical tubes in the Physarum network are also the shortest paths and carry a larger amount of flowing pheromone, they have higher opportunities to be selected by the traveling ants. Hence, through the coupling of ACS and PMM, the exploitation of the optimal solution and the convergence rate of ACS can be improved.

Formally, in PM-ACS, the pseudo-random proportional rule for ants moving and the local pheromone matrix updating rule remain the same as in ACS [11]. The global pheromone matrix updating rule of ACS is optimized by appending the amount of flowing pheromone in the Physarum network. As shown in (7), the first two terms come from ACS and the last one is newly added based on PMM. The meaning of each parameter in PMM and the calculation of the amount of flowing pheromone in the Physarum network are defined in the following:

τ_ij ← (1 − ρ) τ_ij + ρ / S_globalbest + ε · ρ Q_ij M / I_0,   ∀(i, j) ∈ S_globalbest    (7)

where

ε = 1 − 1 / (1 + λ^{P_steps/2 − (t+1)})    (8)
In (7), ε is defined as an impact factor that measures the effect of the flowing pheromone in the Physarum network on the final pheromone matrix. As shown in (8), P_steps stands for the total number of iterations affected by PMM, t is the current iteration, and λ ∈ (1, 1.2). M represents the number of tubes in the Physarum network, which is equal to the number of roads in a TSP. I_0 represents the fixed flux flowing in the Physarum network. In an iteration of PMM, each pair of nodes connected by a tube has an opportunity to be selected as the inlet/outlet nodes. When two nodes i and j connected by the m-th tube are selected as the inlet and outlet
nodes, respectively, the pressure p_i^m on each node can be calculated according to the Kirchhoff law based on (9):

\sum_{i} (D_ij / L_ij) (p_i^m − p_j^m) = −I_0 for j = a;  I_0 for j = b;  0 otherwise    (9)
where L_ij is the length of a tube (i, j), and D_ij is defined as a measure of its conductivity, which is related to the thickness of the tube. The conductivity of each tube has an initial value before the computation. The above process is repeated until each pair of nodes connected by a tube has been selected as inlet/outlet nodes once. Then, the flux Q_ij through the tube (i, j) is calculated based on (10). As the iteration goes on, the conductivity of a tube adapts to the flux based on (11), which implies that the conductivity is enhanced when the flux increases and tends to decline when the flux decreases. The conductivities of the next iteration are then fed back to (9), and the flux is updated based on (10). Through this positive feedback mechanism between conductivity and flux, the shorter tubes, denoted as critical tubes, become wider and are reserved in the process of network evolution, while the longer tubes become narrower and finally disappear.

Q_ij = (1 / M) \sum_{m=1}^{M} (D_ij / L_ij) (p_i^m − p_j^m)    (10)

dD_ij / dt = Q_ij / (1 + Q_ij) − D_ij    (11)
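A minimal Python sketch of one PMM iteration over (9)-(11) is given below. This is an illustrative re-implementation under the reconstruction above, not the authors' code; it uses numpy to solve the Kirchhoff linear system for the node pressures and assumes a connected network.

import numpy as np

def solve_pressures(cond, a, b, I0):
    """Solve the Kirchhoff system (9) for inlet a and outlet b, grounding p_b = 0.
    cond[i, j] = D_ij / L_ij; the network is assumed to be connected."""
    n = cond.shape[0]
    lap = np.diag(cond.sum(axis=0)) - cond      # graph Laplacian of the conductances
    rhs = np.zeros(n)
    rhs[a], rhs[b] = I0, -I0                    # I0 injected at the inlet, removed at the outlet
    keep = [i for i in range(n) if i != b]      # ground the outlet node
    p = np.zeros(n)
    p[keep] = np.linalg.solve(lap[np.ix_(keep, keep)], rhs[keep])
    return p

def pmm_iteration(L, D, I0=20.0, dt=1.0):
    """One PMM iteration: every tube is used once as an inlet/outlet pair, the fluxes
    are averaged as in (10), and the conductivities are adapted as in (11)."""
    L, D = np.asarray(L, dtype=float), np.asarray(D, dtype=float)
    cond = np.zeros_like(L)
    mask = L > 0
    cond[mask] = D[mask] / L[mask]
    tubes = [(a, b) for a in range(L.shape[0]) for b in range(a + 1, L.shape[0]) if L[a, b] > 0]
    M = len(tubes)
    Q = np.zeros_like(L)
    for a, b in tubes:
        p = solve_pressures(cond, a, b, I0)
        Q += cond * (p[:, None] - p[None, :])   # D_ij/L_ij * (p_i - p_j) for this pair
    Q = np.abs(Q) / M                           # averaged flux magnitude, cf. (10)
    D = D + dt * (Q / (1.0 + Q) - D)            # Euler step of the adaptation (11)
    return D, Q

if __name__ == "__main__":
    # Toy triangle network (hypothetical tube lengths), unit initial conductivities.
    L = [[0.0, 1.0, 3.0], [1.0, 0.0, 1.0], [3.0, 1.0, 0.0]]
    D = [[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]]
    for _ in range(20):
        D, Q = pmm_iteration(L, D)
    print(np.round(D, 3))                       # conductivities after 20 PMM iterations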
Algorithm 1 presents the steps of PM-ACS for solving a TSP, and Table 1 lists the meaning of each parameter.
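Before the full pseudocode, a minimal Python sketch of the optimized global update in (7) together with the impact factor ε from (8) is shown below. It is a sketch under the reconstruction of those equations given above; the variable names are illustrative.

def epsilon(t, p_steps, lam=1.05):
    """Impact factor of the PMM term, following the reconstruction of (8)."""
    return 1.0 - 1.0 / (1.0 + lam ** (p_steps / 2.0 - (t + 1)))

def global_update_tsp(tau, global_best_tour, s_globalbest, Q, M, I0, rho, eps):
    """Optimized global pheromone update (7) on the edges of the global-best tour."""
    y = len(global_best_tour)
    for k in range(y):
        i, j = global_best_tour[k], global_best_tour[(k + 1) % y]
        tau[i][j] = ((1.0 - rho) * tau[i][j]
                     + rho / s_globalbest
                     + eps * rho * Q[i][j] * M / I0)
        tau[j][i] = tau[i][j]  # a symmetric TSP instance is assumed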
Algorithm 1. PM-ACS for solving a TSP
Input: d_ij: the distance between nodes i and j; y: the number of nodes;
Output: S_min: the length of the shortest Hamiltonian circuit;
Begin
1:  Initializing parameters α, β, ρ, s, q_0, λ, I_0, P_steps and T_steps
2:  Initializing the pheromone trail τ_0 and the conductivity D_ij of each tube
3:  Setting the iteration counter N := 0
4:  While N < T_steps Do
5:    For k := 1 to s Do
6:      Constructing a tour by an ant k
7:      Updating the local pheromone matrix
8:    End For
9:    best := the global best ant
10:   S_min := the length of the tour generated by the ant best
11:   Calculating the flowing pheromone in the Physarum network
12:   τ_ij ← (1 − ρ)τ_ij + ρ/S_globalbest + ε·ρ Q_ij M/I_0   // Based on (7)
13:   N := N + 1
14: End While
15: Outputting the optimal solution S_min
End
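Line 6 of Algorithm 1 relies on the standard ACS pseudo-random proportional rule, which is unchanged in PM-ACS. A minimal Python sketch of that rule (an illustration of the standard ACS rule, not code from the paper) is:

import random

def next_city(current, unvisited, tau, eta, alpha, beta, q0):
    """ACS pseudo-random proportional rule: with probability q0 pick the best-scoring
    city (exploitation), otherwise sample proportionally to tau^alpha * eta^beta.
    Assumes unvisited is non-empty."""
    scores = {j: (tau[current][j] ** alpha) * (eta[current][j] ** beta) for j in unvisited}
    if random.random() < q0:
        return max(scores, key=scores.get)
    total = sum(scores.values())
    r, acc = random.uniform(0.0, total), 0.0
    for j, s in scores.items():
        acc += s
        if acc >= r:
            return j
    return next(iter(scores))   # numerical fallback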
2.3.2 The Formulation of PM-ACS for Solving a 0/1 KP
The basic ideas of PM-ACS for solving a 0/1 KP are similar to those of PM-ACS for solving a TSP, except for the construction of the
TABLE 1
Main Parameters and Their Values Used in This Paper

Parameter | Explanation                                                                 | Value
α         | The relative importance of the pheromone trail                              | 1
β         | The relative importance of the heuristic information                        | 2 for solving a TSP; 1 for solving a 0/1 KP
ρ         | The pheromone evaporation rate                                              | 0.8 for solving a TSP; 0.4 for solving a 0/1 KP
s         | The number of ants                                                          | the number of cities for solving a TSP; the number of items for solving a 0/1 KP
q_0       | A predefined parameter between 0 and 1                                      | 0.1
λ         | A parameter that determines the value of ε for solving a TSP                | 1.05
ε         | An impact factor measuring the effect of the flowing pheromone in PMM on the matrix τ | 0.5
I_0       | The fixed flux flowing in the Physarum network                              | 20
P_steps   | The total steps of iteration affected by PMM                                | 300 for solving a TSP
T_steps   | The total steps of iteration                                                | 300 for solving a TSP; 50 for solving a 0/1 KP in items50 and items100; 1000 for solving a 0/1 KP in items500
τ_0       | The initial amount of pheromone on each road (TSP) / in each item (0/1 KP)  | 1 for solving a TSP; 1/s for solving a 0/1 KP
D_ij      | The initial value of the conductivity of each tube                          | 1
pheromone matrix. The global pheromone matrix τ in PM-ACS for solving a 0/1 KP is a 1×h matrix, where h is the number of items. PMM is adapted to solve a 0/1 KP as shown in (12)-(14). First, in order to transform the pheromone matrix τ from 1×h to h×h, we construct an intermediary matrix P, as shown in (12):

P_ij = P_ji = (τ_i + τ_j) / (h(h − 1)),  i, j ∈ [1, h], i ≠ j;
P_ij = P_ji = 0,  i = j    (12)

Second, we construct an intermediary matrix S based on (13), where B is an h×h constant matrix whose constant value is larger than the maximum pheromone in τ:

S_ij = B_ij − P_ij    (13)

Finally, we set the matrix S as the input matrix for PMM and obtain an output matrix V, which is an h×h optimized matrix calculated by PMM. Moreover, according to (14), we obtain a 1×h optimization matrix E, which guides the ant colony system toward the best solution:

E_i = τ_i + [(V_i1 + ... + V_ih) + (V_1i + ... + V_hi)]    (14)
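A minimal Python sketch of the transformations (12)-(14) is shown below (illustrative only; run_pmm stands for any implementation of the PMM iteration in (9)-(11), such as the sketch given earlier, and is an assumed placeholder here).

def tau_to_p(tau):
    """Build the h x h intermediary matrix P from the 1 x h pheromone vector, as in (12)."""
    h = len(tau)
    return [[0.0 if i == j else (tau[i] + tau[j]) / (h * (h - 1))
             for j in range(h)] for i in range(h)]

def p_to_s(p, b_const):
    """Build S = B - P as in (13); b_const must exceed the maximum pheromone in tau."""
    return [[b_const - pij for pij in row] for row in p]

def v_to_e(tau, v):
    """Fold the h x h PMM output V back into a 1 x h guidance vector E, as in (14)."""
    h = len(tau)
    return [tau[i] + sum(v[i][j] for j in range(h)) + sum(v[j][i] for j in range(h))
            for i in range(h)]

# How the pieces fit together inside one PM-ACS iteration (sketch):
#   p = tau_to_p(tau)
#   s = p_to_s(p, b_const)
#   v = run_pmm(s)          # assumed PMM routine operating on the "length" matrix S
#   e = v_to_e(tau, v)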
The optimization strategy for updating the global pheromone matrix is shown in (15). The first two terms come from ACS and the last one is newly added based on PMM, where ε is an impact factor that measures the effect of the flowing pheromone in the Physarum network on the final item pheromone matrix τ:

τ_i ← (1 − ρ) τ_i + ρ / S_globalbest + ε E_i,  i ∈ [1, h]    (15)

Algorithm 2 presents the steps of PM-ACS for solving a 0/1 KP, where F_1, F_2 and F_3 are functions calculated based on the
corresponding equations. Table 1 lists the meaning of each parameter.
Algorithm 2. PM-ACS for solving a 0/1 KP
Input: h: the number of items; v_i: the value of the i-th item; w_i: the weight of the i-th item; c: the capacity of the knapsack; g_i: the binary decision variable of the i-th item; B: the temporary h×h constant matrix;
Output: S_max: the maximum value of the items in the knapsack;
Begin
1:  Initializing parameters α, β, ρ
2:  Initializing the pheromone trail τ_0
3:  Setting the iteration counter N := 0
4:  While N < T_steps Do
5:    For j := 1 to s Do
6:      For k := 1 to h Do
7:        Selecting an item outside the knapsack
8:        Calculating the total weight A of the existing items
9:        If A ≤ c Then
10:         Loading the item k into the knapsack
11:         Updating the local pheromone matrix
12:       End If
13:     End For
14:   End For
15:   best := the global best ant
16:   S_max := the value of the items selected by the ant best
      // Transforming the matrix from 1×h to h×h
17:   P_ij ← F_1(τ_i, τ_j, h)   // Based on (12)
18:   S_ij ← F_2(P_ij, B_ij)    // Based on (13)
      // Optimizing the matrix S based on PMM
19:   V_ij ← PMM(S_ij, h)       // Based on (9)-(11)
      // Transforming the matrix from h×h to 1×h
20:   E_i ← τ_i + F_3(V_ij, h)  // Based on (14)
21:   τ_i ← (1 − ρ)τ_i + ρ/S_globalbest + ε E_i   // Based on (15)
22:   N := N + 1
23: End While
24: Outputting the optimal solution S_max
End
TABLE 2
The Latitude and Longitude of 17 Prefecture-Level Cities in Sichuan Province

No.  City        Latitude  Longitude
1    Chengdu     30.5723   104.0665
2    Mianyang    31.4675   104.6791
3    Zigong      29.3390   104.7784
4    Bazhong     31.8679   106.7475
5    Guang'an    30.4560   106.6332
6    Leshan      29.5521   103.7656
7    Yibin       28.7518   104.6434
8    Guangyuan   32.4354   105.8434
9    Dazhou      31.2096   107.4680
10   Deyang      31.1269   104.3979
11   Meishan     30.0754   103.8485
12   Neijiang    29.5802   105.0584
13   Luzhou      28.8718   105.4423
14   Nanchong    30.8378   106.1107
15   Ya'an       29.9805   103.0133
16   Suining     30.5328   105.5929
17   Ziyang      30.1289   104.6276
TABLE 3
The Latitude and Longitude of 34 Cities in China

No.  City          Latitude  Longitude
1    Ha'er'bin     45.8023   126.5363
2    Changchun     43.8171   125.3235
3    Shenyang      41.8057   123.4315
4    Beijing       39.9040   116.4075
5    Tianjin       39.0842   117.2010
6    Shijiazhuang  38.0423   114.5149
7    Zhengzhou     34.7466   113.6254
8    Taiyuan       37.8706   112.5489
9    Hefei         31.8206   117.2272
10   Nanjing       32.0603   118.7969
11   Shanghai      31.2304   121.4737
12   Hangzhou      30.2741   120.1551
13   Nanchang      28.6832   115.8581
14   Fuzhou        26.0745   119.2965
15   Wuhan         30.5931   114.3054
16   Changsha      28.2282   112.9388
17   Chongqing     29.5630   106.5516
18   Chengdu       30.5723   104.0665
19   Guiyang       26.6477   106.6302
20   Kunming       24.8801   102.8329
21   Nanning       22.8172   108.3661
22   Guangzhou     23.1292   113.2644
23   Xi'an         34.3416   108.9402
24   Lasa          29.6456   91.1409
25   Urumqi        43.8256   87.6168
26   Lanzhou       36.0611   103.8343
27   Xining        36.6171   101.7782
28   Yinchuan      38.4872   106.2309
29   Hohhot        40.8426   111.7492
30   Hongkong      22.3964   114.1095
31   Macao         22.1987   113.5439
32   Jinan         36.6510   117.1205
33   Haikou        20.0308   110.3289
34   Taipei        25.0911   121.5598
In the next section, we present some experiments to estimate the efficiency of PM-ACS based on the measurements defined in Section 2.2.
3 EXPERIMENTAL RESULTS AND PARAMETER ANALYSIS
This section presents two kinds of experiments for comparing the performance of PM-ACS with other meta-heuristic algorithms. Section 3.1 describes the formation of the datasets for TSPs and 0/1 KPs. Section 3.2 compares PM-ACS with other meta-heuristic algorithms (i.e., ACS, PSO, GA, AFSA) for solving TSPs on the benchmark datasets and real-world transportation datasets. Section 3.3 compares PM-ACS with ACS for solving 0/1 KPs on three benchmark datasets. Section 3.4 provides the parameter analysis.
3.1 Datasets
In this section, we present how to formulate three real-world transportation datasets in Section 3.1.1 and how to construct the 0/1 KP datasets in Section 3.1.2.

3.1.1 Datasets of Real-World Transportation Networks
In order to further confirm the validity of the theoretical results in a real-world setting, we consider three real
scenarios: how travelers plan their trips to visit 17 prefecture-level cities in Sichuan province (shortened as Sichuan17), 34 cities in China (shortened as China34) and 332 airports in the USA (shortened as USA332) with the lowest cost. According to the analysis in Section 2.1.1, these problems can be modeled as classical TSPs as follows.

First, the latitude and longitude of each prefecture-level city in Sichuan province and each city in China are extracted from Google Maps,1 as shown in Tables 2 and 3, respectively. Moreover, the US air transportation network with 332 nodes is compiled by Vladimir Batagelj and Andrej Mrvar,2 in which each node represents an airport [41].

Second, the distance D_AB between cities A and B is defined as the spherical distance, which can be calculated based on (16)-(19). Let lat_A and lng_A represent the latitude and longitude of city A, respectively. φ_A = lat_A · π/180 represents the radian of the latitude of city A, and θ_A = lng_A · π/180 represents the radian of the longitude of city A. R_0 is the radius of the Earth and π is the mathematical constant. More specifically, we set R_0 = 6378.1370 and π = 3.1416 in our paper.

D_AB = 2 R_0 arcsin(C)    (16)

X = sin^2((φ_A − φ_B) / 2)    (17)

Y = cos(φ_A) cos(φ_B) sin^2((θ_A − θ_B) / 2)    (18)

C = sqrt(X + Y)    (19)

1. https://maps.google.com
2. http://vlado.fmf.uni-lj.si/pub/networks/data/map/USAir97.net
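A minimal Python version of the spherical-distance computation (16)-(19) is shown below (illustrative; the function and the sample call are not from the paper).

import math

R0 = 6378.1370  # Earth radius used in the paper (km)

def spherical_distance(lat_a, lng_a, lat_b, lng_b):
    """Great-circle distance D_AB following (16)-(19)."""
    phi_a, phi_b = math.radians(lat_a), math.radians(lat_b)
    theta_a, theta_b = math.radians(lng_a), math.radians(lng_b)
    x = math.sin((phi_a - phi_b) / 2.0) ** 2                        # (17)
    y = math.cos(phi_a) * math.cos(phi_b) * \
        math.sin((theta_a - theta_b) / 2.0) ** 2                    # (18)
    c = math.sqrt(x + y)                                            # (19)
    return 2.0 * R0 * math.asin(c)                                  # (16)

if __name__ == "__main__":
    # Chengdu and Mianyang, using the coordinates from Table 2.
    print(spherical_distance(30.5723, 104.0665, 31.4675, 104.6791))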
3.1.2 Benchmark Datasets of 0/1 KP
For estimating the efficiency of the optimized ACS based on PMM for solving a 0/1 KP, three typical datasets are used in our study. In detail, for each dataset, we construct two 1×h matrices (i.e., v and w) and a variable c, where the matrix v represents the values of the items, the matrix w represents the weights of the items, and c stands for the capacity of the knapsack. The first and the second datasets [42], which have 50 items and 100 items, are denoted as items50 and items100, respectively. The third dataset,3 which has 500 items, is denoted as items500. Specifically, the matrices v, w and the value of c of the datasets items50 and items100 are as follows.

items50: c = 1000
v = {220, 208, 198, 192, 180, 180, 165, 162, 160, 158, 155, 130, 125, 122, 120, 118, 115, 110, 105, 101, 100, 100, 98, 96, 95, 90, 88, 82, 80, 77, 75, 73, 70, 69, 66, 65, 63, 60, 58, 56, 50, 30, 20, 15, 10, 8, 5, 3, 1, 1}
w = {80, 82, 85, 70, 72, 70, 66, 50, 55, 25, 50, 55, 40, 48, 50, 32, 22, 60, 30, 32, 40, 38, 35, 32, 25, 28, 30, 22, 50, 30, 45, 30, 60, 50, 20, 65, 20, 25, 30, 10, 20, 25, 15, 10, 10, 10, 4, 4, 2, 1}

items100: c = 6718
v = {597, 596, 593, 586, 581, 568, 567, 560, 549, 548, 547, 529, 529, 527, 520, 491, 482, 478, 475, 475, 466, 462, 459, 458, 454, 451, 449, 443, 442, 421, 410, 409, 395, 394, 390, 377, 375, 366, 361, 347, 334, 322, 315, 313, 311, 309, 296, 295, 294, 289, 285, 279, 277, 276, 272, 248, 246, 245, 238, 237, 232, 231, 230, 225, 192, 184, 183, 176, 174, 171, 169, 165, 165, 154, 153, 150, 149, 147, 143, 140, 138, 134, 132, 127, 124, 123, 114, 111, 104, 89, 74, 63, 62, 58, 55, 48, 27, 22, 12, 6}
w = {54, 183, 106, 82, 30, 58, 71, 166, 117, 190, 90, 191, 205, 128, 110, 89, 63, 6, 140, 86, 30, 91, 156, 31, 70, 199, 142, 98, 178, 16, 140, 31, 24, 197, 101, 73, 169, 73, 92, 159, 71, 102, 144, 151, 27, 131, 209, 164, 177, 177, 129, 146, 17, 53, 164, 146, 43, 170, 180, 171, 130, 183, 5, 113, 207, 57, 13, 163, 20, 63, 12, 24, 9, 42, 6, 109, 170, 108, 46, 69, 43, 175, 81, 5, 34, 146, 148, 114, 160, 174, 156, 82, 47, 126, 102, 83, 58, 34, 21, 14}

3. http://www.cs.colostate.edu/~cs575dl/Sp2015/home_assignments.php/
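As an illustration of how these item lists define a 0/1 KP instance, the short Python sketch below (not from the paper) loads the items50 data and evaluates a simple greedy selection by value density; the greedy routine is only an illustrative baseline, not one of the compared algorithms.

ITEMS50_C = 1000
ITEMS50_V = [220, 208, 198, 192, 180, 180, 165, 162, 160, 158, 155, 130, 125,
             122, 120, 118, 115, 110, 105, 101, 100, 100, 98, 96, 95, 90, 88,
             82, 80, 77, 75, 73, 70, 69, 66, 65, 63, 60, 58, 56, 50, 30, 20,
             15, 10, 8, 5, 3, 1, 1]
ITEMS50_W = [80, 82, 85, 70, 72, 70, 66, 50, 55, 25, 50, 55, 40, 48, 50, 32,
             22, 60, 30, 32, 40, 38, 35, 32, 25, 28, 30, 22, 50, 30, 45, 30,
             60, 50, 20, 65, 20, 25, 30, 10, 20, 25, 15, 10, 10, 10, 4, 4, 2, 1]

def greedy_by_density(v, w, c):
    """Feasible baseline: add items in decreasing value/weight order while they fit."""
    order = sorted(range(len(v)), key=lambda i: v[i] / w[i], reverse=True)
    g, weight = [0] * len(v), 0
    for i in order:
        if weight + w[i] <= c:
            g[i], weight = 1, weight + w[i]
    return g, sum(v[i] * g[i] for i in range(len(v)))

if __name__ == "__main__":
    selection, value = greedy_by_density(ITEMS50_V, ITEMS50_W, ITEMS50_C)
    print(value)   # a feasible lower bound on S_max for items50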
3.2 Experimental Results of TSPs
This section presents the experimental results that compare PM-ACS with other meta-heuristic algorithms (i.e., ACS [11], PSO [6], GA [8] and AFSA [10]) for solving TSPs. Specifically, Section 3.2.1 shows the comparison results on the benchmark datasets, and Section 3.2.2 shows the comparison results on the three real-world datasets formulated in Section 3.1.1. All experiments are undertaken in the same experimental environment. The main parameters and their values are listed in Table 1. In order to eliminate computational fluctuations, all results in our experiments are averaged over 50 runs.

3.2.1 Results in Benchmark Datasets
Two benchmark datasets, eil51 and eil76, are downloaded from the TSPLIB website.4 The comparison results on eil51 and eil76 among PM-ACS, ACS, PSO and GA are shown in Fig. 3.

4. http://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/

Fig. 3. The comparison of solutions among PM-ACS, ACS, PSO, and GA in the benchmark datasets: (a) eil51 and (b) eil76.
TABLE 4
The Comparison Results among PM-ACS, ACS, PSO, GA, and AFSA in the Real-World Datasets

Network    Alg.     S_min      S_average   S_variance   AVR
Sichuan17  PM-ACS   1445.32    1450.69     15.33        0.37
           ACS      1445.32    1462.51     29.34        1.19
           PSO      1445.32    1455.04     24.25        0.67
           GA       1445.32    1504.39     66.04        4.09
           AFSA     1445.32    1499.82     45.72        3.77
China34    PM-ACS   15673.08   16140.90    263.55       2.98
           ACS      15756.84   16356.16    356.40       3.80
           PSO      16058.08   17968.81    882.68       11.90
           GA       15886.41   17043.95    690.43       7.29
           AFSA     16748.53   17191.87    318.70       2.65
USA332     PM-ACS   7.30       7.31        0.08         0.14
           ACS      7.51       7.52        0.39         0.13
           PSO      53.22      54.33       2.20         2.09
           GA       35.60      36.58       0.75         2.75
           AFSA     29.08      29.53       0.22         1.55
It can be seen that both the optimal value (S_min) and the average value (S_average) of PM-ACS are the minimum compared with those of ACS, PSO and GA, which means that PM-ACS has the strongest ability to exploit the optimal solution. Furthermore, the variance (S_variance) of PM-ACS is the lowest of these four algorithms, which means that PM-ACS has stronger robustness than the others and can obtain more stable optimization results.
3.2.2 Results in Real-World Datasets
Three real-world transportation problems, i.e., Sichuan17, China34 and USA332, are solved by PM-ACS, ACS, PSO, GA and AFSA, respectively. First, Table 4 lists the comparison results, in which the average deviation rate (AVR) is calculated as AVR = |S_average − S_min| / S_min × 100%. AVR is designed based on the idea of the chi-squared test and can be used to measure the deviation ratio between the average value and the optimal value [34]. The smaller the value of AVR, the higher the possibility that the algorithm obtains the optimal solution in each run. It can be seen that PM-ACS always has the smallest value of AVR. Table 4 also shows that, although all five algorithms can find the optimal solution (S_min = 1445.32) in Sichuan17, the average value (S_average) and the variance (S_variance) of the results calculated by PM-ACS are better than those of ACS, PSO, GA and AFSA. Specifically, the advantage of PM-ACS is more obvious in China34 and USA332, where S_min, S_average and S_variance of PM-ACS are all better than those of ACS, PSO, GA and AFSA. In particular, Fig. 4 illustrates the optimal routes found by PM-ACS in Sichuan17, China34 and USA332.

Second, Fig. 5 plots the convergence process of S_average and S_variance with the increment of iterative steps of PM-ACS, ACS, PSO, GA and AFSA for solving the three real-world transportation problems. Figs. 5a, 5b, and 5c show that S_average of PM-ACS or ACS decreases more obviously than that of PSO, GA and AFSA. In particular, in the earlier iterations, there is a small gap in S_average between PM-ACS and ACS; with the increment of steps, S_average of PM-ACS becomes better than that of ACS. These results indicate that PM-ACS has a higher convergence rate than the other algorithms. The convergence processes of S_variance in Figs. 5d, 5e, and 5f indicate
Fig. 4. Illustrations of the optimal routes of real-world TSPs found by PM-ACS: (a) 17 prefecture-level cities in Sichuan province and 34 cities in China and (b) 332 airports in USA.
Fig. 5. The comparison of the convergence process of S_average and S_variance among PM-ACS, ACS, PSO, GA, and AFSA in the real-world datasets: (a) and (d) Sichuan17, (b) and (e) China34, and (c) and (f) USA332.
that PM-ACS has the strongest robustness compared with ACS, PSO, GA and AFSA.

Third, Fig. 6 plots the comparison of S_min between PM-ACS and ACS over 50 runs for solving the real-world TSPs. It can be seen that S_min of ACS fluctuates considerably over the 50 runs, while S_min of PM-ACS is relatively stable. For example, more than half of the S_min values of ACS are larger than 16,100 in Fig. 6b, while the probability that S_min of PM-ACS is larger than 16,100 is 20 percent.

Finally, Fig. 7 compares the convergence process of S_total with the increment of iterative steps between PM-ACS and ACS for solving the real-world TSPs. S_total of PM-ACS has a faster convergence rate (i.e., a larger slope) in the early iterations than that of ACS, as shown in Fig. 7. Moreover, Figs. 7a and 7b show that most values of S_total of PM-ACS are obviously lower than those of ACS after 50 iterations, and in Fig. 7c, both the convergence rate and the value of S_total of PM-ACS perform better than those of ACS. Based
on the experimental results in Fig. 7, we can conclude that the routes selected by ants in PM-ACS are better than those in ACS with the increment of iterations, which means that the optimization strategy can accelerate the positive feedback process of ant colony systems.
3.3 Experimental Results of 0/1 KPs
This section presents the comparison of solutions between PM-ACS and ACS [13] for solving 0/1 KPs. The main parameters and their values are listed in Table 1. All experiments are undertaken in the same experimental environment. In order to eliminate computational fluctuations, all results in our experiments are averaged over 100 runs.

Table 5 lists the comparison results between PM-ACS and ACS for solving 0/1 KPs in items50 and items100, in which AVR is calculated as AVR = |S_average − S_max| / S_max × 100%. It can be seen that S_max and S_average of the results calculated by PM-ACS are larger than those of ACS.
Fig. 6. The comparison of S_min over 50 runs between PM-ACS and ACS in the real-world datasets: (a) Sichuan17, (b) China34, and (c) USA332.
Fig. 7. The comparison of the reflection of the positive feedback mechanism (S_total) between PM-ACS and ACS in the real-world datasets: (a) Sichuan17, (b) China34, and (c) USA332.
Meanwhile, S_variance and AVR of the results calculated by PM-ACS are lower than those of ACS in both items50 and items100.

In detail, Fig. 8 shows the convergence process of S_average and S_variance in items50 and items100. We find that S_average of PM-ACS is higher than that of ACS in both items50 (Fig. 8a) and items100 (Fig. 8b), which means that the ability of PM-ACS to exploit the optimal solution is stronger than that of ACS. Furthermore, Fig. 8a shows that S_variance of PM-ACS decreases faster than that of ACS after a few steps and holds the lead most of the time, which means that PM-ACS has stronger robustness than ACS after a period of adjustment. Fig. 8b also shows that S_variance of PM-ACS is better than that of ACS.

In order to further show the performance of PM-ACS for solving 0/1 KPs, we compare the convergence process of S_max, S_average and S_variance with the increment of iterative steps among PM-ACS, ACS, PSO and GA in items500, as shown in Fig. 9. It can be seen that PM-ACS has the fastest speed (i.e., the largest slope) in reaching the maximum value among these algorithms (Figs. 9a and 9b). The optimal solution obtained by PM-ACS is the maximum compared with those of ACS, PSO and GA. These results indicate that PM-ACS has the best convergence rate and the strongest ability to exploit the optimal solution. Fig. 9c shows that PM-ACS has the minimum and most stable S_variance, which means that PM-ACS has the strongest robustness among the four algorithms.

All in all, these results show that the optimization strategy can improve the search ability, convergence rate and robustness of ACS. Moreover, PM-ACS has a better performance than other meta-heuristic algorithms in all measurements. Meanwhile, the experiments with varying problem scales exhibit the scalability of PM-ACS for solving TSPs and 0/1 KPs.
3.4 Parameter Analysis
This section estimates the influence of the main parameters, i.e., α, β and ρ, on the performance of PM-ACS. Based on our previous studies and estimations [34], we let α ∈ [0.1, 5], β ∈ [1, 6], ρ ∈ [0.45, 0.95], and observe the dynamic change of the results with α, β and ρ, respectively. The experiments have been undertaken on China34. In each experiment, only one of the parameters is changed, while all the others are held
TABLE 5
The Comparison of Solutions between PM-ACS and ACS for Solving 0/1 KPs in Items50 and Items100

Network   Alg.     S_max      S_average   S_variance   AVR
items50   PM-ACS   3103.00    3099.02     14.65        0.13
          ACS      3103.00    3091.45     75.02        0.37
items100  PM-ACS   26559.00   26557.44    16.45        0.006
          ACS      26559.00   26551.84    69.85        0.03
Fig. 8. The comparison of the convergence process of S_average and S_variance between PM-ACS and ACS for solving 0/1 KPs: (a) items50 and (b) items100. The upward-sloping lines represent the comparison of S_average and use the left ordinate; the downward-sloping lines represent the comparison of S_variance and use the right ordinate.
Fig. 9. The comparison of the convergence process of the maximum value, the average value and the robustness among PM-ACS, ACS, PSO, and GA in items500: (a) S_max, (b) S_average, and (c) S_variance.
constant. The default values of the parameters are shown in Table 1.

First, Fig. 10 shows the influence of α on S_average. As shown in Fig. 10a, the starting point of S_average (i.e., the value of S_average when Steps = 0) when α ≤ 1.5 is obviously lower than that when α ≥ 2.0, which means that PM-ACS can find better results at the beginning if α ≤ 1.5 than if α ≥ 2.0. Fig. 10b shows that S_average of PM-ACS is better when α ∈ [0.5, 1.5] than for other values of α. Hence, we recommend α ∈ [0.5, 1.5] in PM-ACS.

Second, Fig. 11 shows the influence of β on S_average. As shown in Fig. 11a, the starting point of S_average when β ≥ 2.5 is obviously lower than that when β ≤ 2.0, which
means that β ≥ 2.5 is beneficial for finding optimal results of PM-ACS at the beginning. What is more, the convergence rate of S_average when β ≥ 2.0 is higher than when β = 1.0 or β = 1.5. Fig. 11b shows that, when β ≥ 2.5, S_average is very stable. According to these two figures, we suggest β ∈ [2.5, 5.0] in PM-ACS.

Finally, Fig. 12 shows the influence of ρ on S_average. As shown in Fig. 12a, there is little difference in the convergence rate of S_average among different values of ρ. Fig. 12b plots the dynamic change of S_average with ρ. It can be seen that S_average obtains its minimum value when ρ = 0.85, and S_average is relatively optimal when ρ ∈ [0.75, 0.9], which is the recommended range of ρ in PM-ACS.
Fig. 10. (a) The convergence process of S_average with the increment of iterative steps under different values of α. (b) The dynamic change of S_average with α.
Fig. 11. (a) The convergence process of S_average with the increment of iterative steps under different values of β. (b) The dynamic change of S_average with β.
Fig. 12. (a) The convergence process of S_average with the increment of iterative steps under different values of ρ. (b) The dynamic change of S_average with ρ.
4 CONCLUSION
In this paper, we presented a pheromone matrix optimization strategy in ACS based on PMM and evaluated the performance of the optimized ACS (PM-ACS) for solving NP-hard problems (i.e., TSPs and 0/1 KPs) by comparing it with other meta-heuristic algorithms. Taking advantage of the unique feature that critical tubes are reserved in the process of network evolution of PMM, PM-ACS can enhance the amount of pheromone on critical paths for a TSP or enhance the amount of pheromone in critical items for a 0/1 KP. Our optimization strategy can accelerate the positive feedback process of ACS, which is one of the major mechanisms driving the exploitation of the optimal solution. Experimental results showed that both the optimal solutions and the robustness of PM-ACS are better than those of other traditional meta-heuristic algorithms for solving benchmark-based and real-world transportation-network-based TSPs. Moreover, PM-ACS also showed better performance than ACS for solving other NP-hard problems, such as 0/1 KPs. Our study suggests that it is worthwhile to incorporate PMM into ACS for solving NP-hard problems.
ACKNOWLEDGMENTS
Prof. Z. Zhang is the corresponding author. Y. Liu and C. Gao contributed equally to this work and should be considered as co-first authors. This work was supported by the National Science and Technology Support Program (No. 2012BAD35B08), National High Technology Research and Development Program of China (No. 2013AA013801), National Natural Science Foundation of China (Nos. 61402379, 61403315), and Natural Science Foundation of Chongqing (Nos. cstc2012jjA40013, cstc2013jcyjA40022).
REFERENCES
[1] M. Garey and D. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. San Francisco, CA, USA: Freeman, 1979.
[2] G. Laporte, A. Asef-Vaziri, and C. Sriskandarajah, "Some applications of the generalized travelling salesman problem," J. Oper. Res. Soc., vol. 47, no. 12, pp. 1461–1467, Dec. 1996.
[3] A. Colorni, M. Dorigo, and V. Maniezzo, "Distributed optimization by ant colonies," in Proc. 1st Eur. Conf. Artif. Life, Dec. 1991, pp. 134–142.
[4] M. Dorigo and G. D. Caro, "Ant colony optimization: A new metaheuristic," in Proc. Congr. Evol. Comput., Jul. 1999, vol. 2, pp. 1470–1477.
[5] M. Dorigo and G. D. Caro, "The ant colony optimization metaheuristic," in New Ideas in Optimization, D. Corne, M. Dorigo, and F. Glover, Eds. New York, NY, USA: McGraw-Hill, 1999, pp. 11–32.
[6] K. P. Wang, L. Huang, C. G. Zhou, and W. Pang, "Particle swarm optimization for traveling salesman problem," in Proc. 2nd Int. Conf. Mach. Learn. Cybern., Nov. 2003, vol. 3, pp. 1583–1585.
[7] X. H. Shi, Y. C. Liang, H. P. Lee, C. Lu, and Q. X. Wang, "Particle swarm optimization-based algorithms for TSP and generalized TSP," Inf. Process. Lett., vol. 103, no. 5, pp. 169–176, Aug. 2007.
[8] S. Chatterjee, C. Carrera, and L. A. Lynch, "Genetic algorithms and traveling salesman problems," Eur. J. Oper. Res., vol. 93, no. 3, pp. 490–510, Sep. 1996.
[9] Y. Marinakis, A. Migdalas, and P. M. Pardalos, "A new bilevel formulation for the vehicle routing problem and a solution method using a genetic algorithm," J. Glob. Optim., vol. 38, no. 4, pp. 555–580, Aug. 2007.
[10] T. Fei, L. Y. Zhang, Y. Li, Y. L. Yang, and F. Wang, "The artificial fish swarm algorithm to solve traveling salesman problem," in Proc. Int. Conf. Comput. Sci. Inf. Technol., Sep. 2014, vol. 255, pp. 679–685.
[11] M. Dorigo and L. M. Gambardella, "Ant colony system: A cooperative learning approach to the traveling salesman problem," IEEE Trans. Evol. Comput., vol. 1, no. 1, pp. 53–66, Apr. 1997.
[12] L. Ma and L. D. Wang, "Ant optimization algorithm for knapsack problem," J. Comput. Appl., vol. 21, no. 8, pp. 4–5, Aug. 2001.
[13] H. X. Shi, "Solution to 0/1 knapsack problem based on improved ant colony algorithm," in Proc. IEEE Int. Conf. Inf. Acquisition, Aug. 2006, pp. 1062–1066.
[14] L. He and Y. Y. Huang, "Research of ant colony algorithm and the application of 0-1 knapsack," in Proc. 6th Int. Conf. Comput. Sci. Educ., Aug. 2011, pp. 464–467.
[15] F. Zhao, Y. L. Ma, and J. P. Zhang, "Solving 0-1 knapsack problem based on immune clonal algorithm and ant colony algorithm," in Proc. Int. Conf. Commun., Electron. Autom. Eng., Aug. 2013, vol. 181, pp. 1047–1053.
[16] Z. J. Hu and R. Li, "Ant colony optimization algorithm for the 0-1 knapsack problem based on genetic operators," Adv. Mater. Res., vol. 230–232, pp. 973–977, May 2011.
[17] W. G. Zhang and T. Y. Lu, "The research of genetic ant colony algorithm and its application," in Proc. 2nd SREE Conf. Eng. Model. Simul., Jun. 2012, vol. 37, pp. 101–106.
[18] S. A. Sheibat Alhamdy, A. N. Noudehi, and M. Majdara, "Solving traveling salesman problem (TSP) using ants colony (ACO) algorithm and comparing with tabu search, simulated annealing and genetic algorithm," J. Appl. Sci. Res., vol. 8, no. 1, pp. 434–440, Jan. 2012.
[19] Z. C. S. S. Hlaing and M. A. Khine, "Solving traveling salesman problem by using improved ant colony optimization algorithm," Int. J. Inf. Educ. Technol., vol. 1, no. 5, pp. 404–409, Dec. 2011.
[20] M. Mavrovouniotis and S. Yang, "A memetic ant colony optimization algorithm for the dynamic travelling salesman problem," Soft Comput., vol. 15, no. 7, pp. 1405–1425, Jul. 2011.
[21] K. Jun-man and Z. Yi, "Application of an improved ant colony optimization on generalized traveling salesman problem," in Proc. Int. Conf. Future Electr. Power Energy Syst., Feb. 2012, vol. 17, pp. 319–325.
[22] T. Saenphon, S. Phimoltares, and C. Lursinsap, "Combining new fast opposite gradient search with ant colony optimization for solving travelling salesman problem," Eng. Appl. Artif. Intell., vol. 35, pp. 324–334, Oct. 2014.
[23] M. Kong, P. Tian, and Y. Kao, "A new ant colony optimization algorithm for the multidimensional knapsack problem," Comput. Oper. Res., vol. 35, no. 8, pp. 2672–2683, Aug. 2008.
[24] R. T. Liu and X. J. Lv, "MapReduce-based ant colony optimization algorithm for multi-dimensional knapsack problem," Appl. Mech. Mater., vol. 380–384, pp. 1877–1880, Aug. 2013.
[25] T. Nakagaki, H. Yamada, and A. Tóth, "Maze-solving by an amoeboid organism," Nature, vol. 407, no. 6803, p. 470, Sep. 2000.
[26] C. R. Reid and M. Beekman, "Solving the towers of Hanoi—how an amoeboid organism efficiently constructs transport networks," J. Exp. Biol., vol. 216, no. 9, pp. 1546–1551, May 2013.
[27] A. Adamatzky and G. J. Martinez, "Bio-imitation of Mexican migration routes to the USA with slime mould on 3D terrains," J. Bionic Eng., vol. 10, no. 2, pp. 242–250, Apr. 2013.
[28] A. Adamatzky, X. S. Yang, and Y. X. Zhao, "Slime mould imitates transport networks in China," Int. J. Intell. Comput. Cybern., vol. 6, no. 3, pp. 232–251, 2013.
[29] A. Adamatzky, "Route 20, autobahn 7, and slime mold: Approximating the longest roads in USA and Germany with slime mold on 3-D terrains," IEEE Trans. Cybern., vol. 44, no. 1, pp. 126–136, Mar. 2013.
[30] A. Adamatzky and T. Schubert, "Slime mold microfluidic logical gates," Mater. Today, vol. 17, no. 2, pp. 86–91, Mar. 2014.
[31] A. Tero, R. Kobayashi, and T. Nakagaki, "Physarum solver: A biologically inspired method of road-network navigation," Physica A, vol. 363, no. 1, pp. 115–119, Apr. 2006.
[32] A. Tero, R. Kobayashi, and T. Nakagaki, "A mathematical model for adaptive transport network in path finding by true slime mold," J. Theor. Biol., vol. 244, no. 4, pp. 553–564, Feb. 2007.
[33] A. Tero, S. Takagi, T. Saigusa, K. Ito, D. P. Bebber, M. D. Fricker, K. Yumiki, R. Kobayashi, and T. Nakagaki, "Rules for biologically inspired adaptive network design," Science, vol. 327, no. 5964, pp. 439–442, Jan. 2010.
[34] Z. L. Zhang, C. Gao, Y. X. Liu, and T. Qian, "A universal optimization strategy for ant colony optimization algorithms based on the Physarum-inspired mathematical model," Bioinspir. Biomim., vol. 9, p. 036006, Mar. 2014.
[35] Y. X. Liu, Y. X. Lu, C. Gao, Z. L. Zhang, and L. Tao, "A multi-objective ant colony optimization algorithm based on the Physarum-inspired mathematical model," in Proc. 10th Int. Conf. Natural Comput., Oct. 2014, pp. 304–309.
[36] G. Laporte, "The traveling salesman problem: An overview of exact and approximate algorithms," Eur. J. Oper. Res., vol. 59, no. 2, pp. 231–247, Jun. 1992.
[37] M. Gendreau and J.-Y. Potvin, "Metaheuristics in combinatorial optimization," Ann. Oper. Res., vol. 140, no. 1, pp. 189–213, Nov. 2005.
[38] K. Ganesh and T. T. Narendran, "TASTE: A two-phase heuristic to solve a routing problem with simultaneous delivery and pick-up," Int. J. Adv. Manuf. Technol., vol. 37, nos. 11/12, pp. 1221–1231, Jul. 2008.
[39] J.-Y. Potvin, "State-of-the-art review—evolutionary algorithms for vehicle routing," INFORMS J. Comput., vol. 21, no. 4, pp. 518–548, Apr. 2009.
[40] O. J. Mengshoel, S. F. Galan, and A. D. Dios, "Adaptive generalized crowding for genetic algorithms," Inf. Sci., vol. 258, no. 10, pp. 140–159, Feb. 2014.
[41] Y. W. Gong, Y. R. Song, and G. P. Jiang, "Epidemic spreading in metapopulation networks with heterogeneous infection rates," Physica A, vol. 416, pp. 208–218, Dec. 2014.
[42] J. C. Kiefer, "On large deviations of the empiric D. F. of vector chance variables and a law of the iterated logarithm," Pac. J. Math., vol. 11, no. 2, pp. 649–660, 1961.

Yuxin Liu received the bachelor's and master's degrees in software engineering from Southwest University in 2012 and 2015, respectively. She is currently a doctoral candidate in the College of Computer and Information Science & College of Software, Southwest University, Chongqing, China. Her research interests include bio-inspired algorithms, optimization problems, and multiagent systems.
Chao Gao received the PhD degree in computer science from the International WIC Institute, Beijing University of Technology, and received his post-doctoral training at Hong Kong Baptist University. He is an associate professor in the College of Computer and Information Science & College of Software, Southwest University, Chongqing, China. His current research interests include intelligent computing, behavior-oriented modeling, and complex social network analysis.

Zili Zhang received the BSc degree from Sichuan University, the MEng degree from the Harbin Institute of Technology, and the PhD degree from Deakin University, all in computing. He is currently the dean of the College of Computer and Information Science & College of Software, Southwest University, Chongqing, China, and a senior lecturer at Deakin University, Australia. He has authored and coauthored more than 100 refereed papers in international journals and conference proceedings, and six monographs or textbooks published by Springer. His research interests include multi-agent systems, bio-inspired AI, and big data statistics and analysis.

Yuxiao Lu received the bachelor's degree in information security from Chongqing University, Chongqing, China, in 2013. He is currently a postgraduate student at Southwest University, Chongqing, China, majoring in computer application. His research interests include bio-inspired algorithms and artificial intelligence.
Shi Chen is currently an undergraduate student at Southwest University, Chongqing, China, majoring in computer science and technology. His research interests include bio-inspired algorithms and combinatorial optimization problems.

Mingxin Liang received the bachelor's degree in information and computing science from the Beijing Institute of Technology, Beijing, China, in 2014. He is currently a postgraduate student at Southwest University, Chongqing, China, majoring in computer application. His research interests include bio-inspired algorithms and intelligent algorithms.
Li Tao received the PhD degree from Hong Kong Baptist University. She is currently a lecturer in the College of Computer and Information Science & College of Software, Southwest University, Chongqing, China. Her research interests include complex health care system modeling and optimization, data mining, and multi-agent autonomyoriented computing (AOC).
" For more information on this or any other computing topic, please visit our Digital Library at www.computer.org/publications/dlib.