Hindawi Mathematical Problems in Engineering Volume 2018, Article ID 2517460, 12 pages https://doi.org/10.1155/2018/2517460

Research Article Dynamic Reduction-Expansion Operator to Improve Performance of Genetic Algorithms for the Traveling Salesman Problem Santiago-Omar Caballero-Morales , Jose-Luis Martinez-Flores and Diana Sanchez-Partida


Universidad Popular Autonoma del Estado de Puebla, A.C., Postgraduate Department of Logistics and Supply Chain Management, 17 Sur 711, Barrio de Santiago, Puebla, PUE 72410, Mexico. Correspondence should be addressed to Santiago-Omar Caballero-Morales; [email protected]. Received 10 January 2018; Revised 7 July 2018; Accepted 24 July 2018; Published 2 September 2018. Academic Editor: Erik Cuevas. Copyright © 2018 Santiago-Omar Caballero-Morales et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Traveling Salesman Problem (TSP) is an important routing problem within the transportation industry. However, finding optimal solutions for this problem is not easy due to its computational complexity. In this work, a novel operator based on dynamic reduction-expansion of minimum distance is presented as an initial population strategy to improve the search mechanisms of Genetic Algorithms (GA) for the TSP. This operator, termed RedExp, consists of four stages: (a) clustering to identify candidate supply/demand locations to be reduced, (b) coding of clustered and nonclustered locations to obtain the set of reduced locations, (c) sequencing of minimum distances for the set of reduced locations (nearest neighbor strategy), and (d) decoding (expansion) of the reduced set of locations. Experiments performed on TSP instances with more than 150 nodes provided evidence that RedExp can improve the convergence of the GA and provide more suitable solutions than other approaches focused on the GA's initial population.

1. Introduction

As defined by [1], routing is the process of selecting "best" routes in a graph 𝐺 = (𝑉, 𝐴), where 𝑉 is a node set and 𝐴 is an arc set. Within this context, route planning is the calculation of the most effective route (the route of minimum distance, cost, or travel time) from an origin to a destination node on a network, and the Traveling Salesman Problem (TSP) is one of the most studied and applied routing models in the transportation, manufacturing, and logistics industries [2]. As stated by [3], the TSP "is the fundamental problem in the fields of computer science, engineering, operations research, discrete mathematics, graph theory, and so forth". This is why the TSP has frequently been considered a touchstone for new strategies and algorithms to solve combinatorial optimization problems, as noted by [2]. The TSP can be modeled as an undirected weighted graph where locations (i.e., nodes) are the graph's vertices, paths are the graph's edges (i.e., arcs), and the path's distance, cost, or time is the edge's length [4]. The objective of solving the TSP then consists of minimizing the total distance of a complete sequence of paths (total route) which starts and finishes at a specific vertex (i.e., the depot node) after having visited all vertices once and only once. Figure 1 presents a solution example for the TSP, which is also known as a Hamiltonian Circuit of minimum cost. Finding optimal solutions for the TSP is a challenging task due to its computational complexity, which is NP-hard (nondeterministic polynomial-time hard) [5]. For example, if 15 cities are considered, there are 15! ≈ 1.31e+12 orderings in which a Hamiltonian Circuit could visit them. In such a case, finding the optimal solution (i.e., finding the Hamiltonian Circuit of minimum cost) can be a time-consuming task, and it becomes infeasible as the number of cities grows. As reported in [2], only small TSP instances (up to approximately 100 nodes) can be solved to optimality. Due to this situation, metaheuristics have been developed to provide high-quality solutions in
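To make the factorial growth concrete, a brute-force search can fix the depot and enumerate the remaining (𝑛 − 1)! orderings; this is practical only for very small instances. The following is a minimal Python sketch with illustrative coordinates (a hypothetical 8-city instance, not a TSPLIB one):

```python
from itertools import permutations
from math import dist

# Hypothetical 8-city instance (coordinates are illustrative only).
cities = [(0, 0), (2, 1), (5, 0), (6, 3), (4, 5), (2, 6), (0, 4), (1, 2)]

def tour_length(order):
    """Length of the closed tour that visits the cities in the given order."""
    return sum(dist(cities[order[k]], cities[order[(k + 1) % len(order)]])
               for k in range(len(order)))

# Fix city 0 as the depot and enumerate the remaining (n - 1)! orderings.
best = min(permutations(range(1, len(cities))),
           key=lambda p: tour_length((0,) + p))
print((0,) + best, round(tour_length((0,) + best), 2))
```

With 8 cities this is only 7! = 5040 candidate tours, but each additional city multiplies the count by the new instance size, which is why exact enumeration quickly becomes infeasible.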


Figure 1: Distribution network modeled by the TSP.

a reasonable time for different combinatorial optimization problems such as the TSP [6]. Among the most efficient metaheuristics for the TSP, the following can be mentioned: Genetic Algorithms (GA), Particle Swarm Optimization (PSO), Tabu Search (TS), Simulated Annealing (SA), Ant Colony Optimization (ACO), and Artificial Neural Networks (ANNs) [3, 6]. Although the GA is one of the most important metaheuristics applied to the TSP, its performance depends on its parameter settings, such as the initial population, the selection and reproduction operators, and the stop condition. As presented in [2, 7, 8], the quality of the initial population plays an important role in the solving mechanism of the GA. In the present work, a dynamic reduction-expansion operator, termed RedExp, is presented as a strategy to improve the quality of the initial population and the convergence of a GA. When compared to other approaches on GA, the RedExp operator can provide more suitable solutions for the TSP. The remainder of this paper is organized as follows: Section 2 presents the technical details of the stages of the RedExp operator; Section 3 presents and discusses the results obtained on TSP instances; finally, Section 4 discusses the conclusions and future work.

2. Structure of the Reduction-Expansion Operator (RedExp)

The RedExp operator is similar to the clustering strategy presented in [2], where the k-Means Clustering algorithm was used to generate the initial population for a GA. In [2], 𝑁 nodes were clustered into 𝐾 groups based on 𝐾 = √𝑁 + 0.5 in order to solve a TSP with a smaller number of nodes. Thus, a route of minimum distance was first found over the cluster centers, and once this route was obtained, the clusters were "disconnected" and "rewired" to assemble a route over the original 𝑁 nodes. On a selection of 14 symmetric TSP

instances (with 𝑁 = [52 − 442], mean = 204 nodes) and 10 trials, this strategy led to a mean average error of 9.22% (mean best error of 6.97%). As presented in [2], clustering can improve the performance of the GA for the TSP. However, the distribution patterns of the nodes may affect the performance of the clustering and declustering processes by increasing variability in the initial population. This is because nodes that represent key features of the complete set of nodes can be missed by the clustering process, leading to their removal from the reduced (i.e., clustered) set of nodes. An example of key nodes is presented in Figure 2(a), which uses data from the TSP instance a280 of the TSPLIB 95 database [9]. As presented, the distribution of these nodes is an important feature of its optimal solution, which is shown in Figure 2(b). The clustering presented in Figure 2(c) simplifies the distribution of the key nodes of instance a280. As a consequence, the optimal solution's pattern for the reduced set of nodes (Figure 2(d)) is significantly different from the pattern observed for the complete set (Figure 2(b)). Note that, as presented in Figure 2(e), the pattern observed in Figure 2(d) persists even after declustering. In order to address this issue, the proposed RedExp operator clusters only pairs of nodes, and only nodes which are very close to each other are candidates for clustering. This leads to an only slightly relaxed TSP and reduces the loss of key features. The number of pairs of nodes which are candidates for clustering is defined by a dynamic acceptance threshold metric. This process is defined as "reduction", and a route of minimum distance is then estimated by a greedy heuristic. Finally, "expansion" of the clustered nodes is performed to represent the route over the original 𝑁 nodes.
This strategy was evaluated with a selection of 41 symmetric TSP instances (with 𝑁 = [51 − 1432], mean = 474 nodes) and six scenarios where RedExp could be used alone or in conjunction with other standard processes to generate an initial population. An initial assessment of the RedExp operator was performed with a single execution (trial) of the GA for each scenario, leading to results that support the positive effect of the RedExp operator, with a combined mean best error of 4.9%. Then, an extended assessment with 10 executions (trials) of the GA was performed to evaluate its statistical significance. Based on these results, the proposed operator represents a suitable alternative to improve the performance of GAs or similar metaheuristics that depend on initial solutions. It can also be a suitable alternative when compared to approaches focused on modifying reproduction operators [3]. The details of the RedExp operator are described in the present section.

2.1. Clustering Stage. The first stage in the reduction-expansion process consists of determining the set of locations to be reduced. This is accomplished by the clustering process described in Pseudocode 1. For this process, an acceptance distance threshold 𝑑𝑐 is defined, which is computed as

𝑑𝑐 = 𝑑𝑚𝑖𝑛 + 𝐾𝜎, (1)


Figure 2: TSP instance a280.tsp [9]: (a) complete set of nodes, (b) optimal total route for the complete set, (c) clustering of the complete set, (d) optimal total route for the reduced set, and (e) declustered total route for the complete set of nodes.

where (a) 𝑑𝑚𝑖𝑛 is the minimum distance between all locations, computed as

𝑑𝑚𝑖𝑛 = min 𝑑(𝑖, 𝑗), (2)

where 𝑑(𝑖, 𝑗) is the distance between locations 𝑖 and 𝑗, with 𝑖, 𝑗 = 1, . . . , 𝑁 and 𝑖 ≠ 𝑗; (b) 𝜎 is the standard deviation of the distances between all locations, which is computed as


U = {i = 1, ..., N}        % all nodes
R = {}                     % set of coded nodes
x_i = x-coordinate of node i
y_i = y-coordinate of node i
d_min = min d(i, j) for all i, j with i ~= j
sigma = sqrt(V(d(i, j))) for all i, j with i ~= j
K = rand(0.0, 1.0)
d_c = d_min + K*sigma
r = 1                      % index for coded nodes
for i = 1:N
    for j = i+1:N
        if d(i, j) <= d_c and both i and j are still in U
            coded_nodes(r, 1) = i      % node i is stored for clustering
            coded_nodes(r, 2) = j      % node j is stored for clustering
            coded_nodes(r, 3) = cx_r   % x-coordinate of equivalent coded node for (i, j)
            coded_nodes(r, 4) = cy_r   % y-coordinate of equivalent coded node for (i, j)
            U = U \ {i, j}             % U is updated (i and j are removed from U)
            R = R + {r}                % R is updated (equivalent coded node added to R)
            r = r + 1
        end
    end
end
% Now U contains the remaining nodes that were not clustered or coded.
% These are added to R as follows:
for each i in U
    coded_nodes(r, 1) = i      % node i is stored under the new index r
    coded_nodes(r, 2) = -      % there is no node j for nonclustered nodes
    coded_nodes(r, 3) = x_i    % x-coordinate of node i
    coded_nodes(r, 4) = y_i    % y-coordinate of node i
    R = R + {r}
    r = r + 1
end

Pseudocode 1: Pseudo-code of the clustering and coding processes.
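The reduction pass of Pseudocode 1 can be sketched in Python as follows. This is an illustrative reimplementation, not the authors' Octave/MATLAB code; the function name reduce_nodes is ours. It also applies the minimum-distance criterion of the clustering stage by examining candidate pairs closest-first, and it uses the mean coordinates of the coding stage:

```python
import random
from math import dist, sqrt

def reduce_nodes(coords, seed=None):
    """One RedExp-style reduction pass (illustrative sketch).

    Returns a list of (i, j, x, y) rows mirroring the coded_nodes array of
    Pseudocode 1; j is None for nodes that were not clustered."""
    rng = random.Random(seed)
    n = len(coords)
    pair_d = {(i, j): dist(coords[i], coords[j])
              for i in range(n) for j in range(i + 1, n)}
    ds = list(pair_d.values())
    d_min = min(ds)
    mean = sum(ds) / len(ds)
    sigma = sqrt(sum((d - mean) ** 2 for d in ds) / len(ds))   # eq. (3)
    dc = d_min + rng.random() * sigma                          # eq. (1), K = rand(0, 1)

    # Examine candidate pairs closest-first so each node is clustered with
    # its nearest candidate (minimum-distance criterion); each node may be
    # clustered at most once.
    used, coded = set(), []
    for (i, j), d in sorted(pair_d.items(), key=lambda kv: kv[1]):
        if d <= dc and i not in used and j not in used:
            cx = (coords[i][0] + coords[j][0]) / 2             # eq. (5)
            cy = (coords[i][1] + coords[j][1]) / 2
            coded.append((i, j, cx, cy))
            used |= {i, j}
    # Nonclustered nodes pass through with their original coordinates.
    for i in range(n):
        if i not in used:
            coded.append((i, None, *coords[i]))
    return coded
```

Because 𝑑𝑐 is redrawn with a new random 𝐾 on every call, repeated calls produce different degrees of reduction, which is what diversifies the initial population.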

𝜎 = √𝑉(𝑑(𝑖, 𝑗)), (3)

where 𝑉(𝑑(𝑖, 𝑗)) is the variance of the distances 𝑑(𝑖, 𝑗) between locations, with 𝑖, 𝑗 = 1, . . . , 𝑁 and 𝑖 ≠ 𝑗; and (c) 𝐾 is the reduction factor, drawn uniformly at random as

𝐾 = rand(0.0, 1.0). (4)

It is important to mention that (1) 𝑑𝑚𝑖𝑛 and 𝜎 exclude the distance between a location and itself (which is zero) and (2) equation (4) is computed each time an individual is generated; hence, a different acceptance distance threshold 𝑑𝑐 is used to generate each individual in the initial population. In this way, the acceptance threshold metric 𝑑𝑐 ensures that only locations or nodes closer than 𝑑𝑐 are considered as candidates for clustering. Also, in order to avoid significant variability between the original and reduced sets, a criterion of minimum distance was defined for the clustering candidates. This can be explained with the following example: suppose that pairs (4,6), (6,20), and (12,6) comply with the restriction of 𝑑𝑐 and that the distances between the nodes of each pair are 100, 150, and 120, respectively. In this case there are three clustering options for node "6"; however, the most suitable option is (4,6) because node "6" is closer to node "4" than to nodes "20" or "12".

2.2. Coding Stage. This stage consists of coding each clustered pair of nodes (𝑖, 𝑗) as a single equivalent node 𝑟 with mean coordinates (𝑐𝑥𝑟, 𝑐𝑦𝑟) estimated as

𝑐𝑥𝑟 = (𝑥𝑖 + 𝑥𝑗)/2,  𝑐𝑦𝑟 = (𝑦𝑖 + 𝑦𝑗)/2, (5)

where 𝑟 is the index of the (new) reduced node. It is important to mention that, under this process, pairs of nodes separated by distances larger than 𝑑𝑐 are not clustered and remain unchanged. This also happens with candidate nodes that were released from clustering because they did not meet the criterion of minimum distance. In such cases, the indexes of these nonclustered nodes are reassigned in terms of the new index 𝑟. Figure 3 presents an example of the clustering and coding processes for a problem with 𝑁 = 7 locations. As presented, the array coded_nodes contains the registry of "equivalences" for 𝑈 → 𝑅. Thus, the coded nodes in coded_nodes represent the reduced nodes from 𝑈 (the original nodes). This registry is important for the decoding


In Figure 3, the original set is 𝑈 = {1, 2, 3, 4, 5, 6, 7} with coordinates (𝑥𝑖, 𝑦𝑖): pairs (2, 3) and (4, 5) lie within the acceptance distance threshold 𝑑𝑐 = 𝑑𝑚𝑖𝑛 + 𝐾𝜎 and are coded as 𝑟 = 1 and 𝑟 = 2 with mean coordinates (𝑐𝑥𝑟, 𝑐𝑦𝑟), while the nonclustered nodes 1, 6, and 7 are reindexed as 𝑟 = 3, 4, and 5, so that 𝑅 = {1, 2, 3, 4, 5} in the coded_nodes array.
Figure 3: Example of the clustering and coding processes with 𝑁 = 7 nodes.

p = current node
Initialization:
p = 1
R = R \ {p}
route_min_cost = [p]    % TSP route of minimum cost starts at node 1
Sequencing:
while R is not empty
    closest_node = node in R with the minimum distance to p
                   % if more than one node complies with this requirement,
                   % one is randomly selected from the complying set of nodes
    R = R \ {closest_node}
    route_min_cost = [route_min_cost closest_node]
                   % closest_node is inserted at the right side of route_min_cost
    p = closest_node   % the current node is updated with the closest node
end
route_min_cost = [route_min_cost 1]    % the TSP route ends at node 1

Pseudocode 2: Pseudo-code of the sequencing process (nearest neighbor strategy).
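An illustrative Python version of Pseudocode 2 (not the authors' implementation; the function name nearest_neighbor_route is ours):

```python
import random
from math import dist

def nearest_neighbor_route(coords, start=0, seed=None):
    """Nearest-neighbor sequencing over the (reduced) node set, as in
    Pseudocode 2; ties are broken at random (illustrative sketch)."""
    rng = random.Random(seed)
    remaining = set(range(len(coords))) - {start}
    route, p = [start], start
    while remaining:
        d_best = min(dist(coords[p], coords[q]) for q in remaining)
        ties = [q for q in remaining if dist(coords[p], coords[q]) == d_best]
        p = rng.choice(ties)        # random choice among equally close nodes
        remaining.remove(p)
        route.append(p)
    route.append(start)             # close the tour at the initial node
    return route
```

The random tie-breaking is what adds the "random flexibility" mentioned above: repeated calls on the same node set can yield different feasible routes.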

and declustering processes for 𝑅 → 𝑈. Note that this process ensures that close pairs of nodes (within a distance 𝑑𝑐) of minimum distance are kept together.

2.3. Sequencing Stage: Nearest Neighbor Strategy. Most routing problems consider an initial and a final node to define a particular route, which consists of a sequence of nodes. This sequence can lead to a route, denoted route_min_cost, of minimum traveling cost (i.e., distance) throughout all nodes. For the sequencing process of all nodes in 𝑅 it is important to identify the initial and/or final node of the route. This node depends on the routing problem itself and is commonly identified as node 0 or node 1. Sequencing is then performed as described in Pseudocode 2. As presented, sequencing is performed with a simple heuristic based on the nearest neighbor strategy, which is expected to make the operator time-efficient and to add random flexibility so that a feasible (not necessarily optimal) route of minimum cost is obtained.

2.4. Decoding Stage. Because the route generated by the heuristic described in Pseudocode 2 consists of elements from the reduced set of nodes in 𝑅, this route must be represented in terms of the original set of nodes in 𝑈. This expansion from 𝑅 to 𝑈 is performed by representing each unique node 𝑟 as the equivalent nodes (𝑖, 𝑗) from coded_nodes.

It is important to mention that for clustered nodes, this process implies two decoding alternatives because an equivalent node 𝑟 can be decoded as (𝑖, 𝑗) or (𝑗, 𝑖). Because decoding is sequentially performed left-to-right from 𝑟𝑜𝑢𝑡𝑒 𝑚𝑖𝑛 𝑐𝑜𝑠𝑡, the decoding decision for clustered nodes is performed by computing the effect of both alternatives on the cumulative cost of the partially decoded (expanded) route.
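The greedy left-to-right expansion can be sketched as follows, assuming coded_nodes rows of the form (i, j, cx, cy) with j set to None for nonclustered nodes; this is an illustrative sketch (the function name decode_route is ours), not the authors' code:

```python
from math import dist

def decode_route(route, coded, coords):
    """Expand a route over coded nodes back to original nodes (sketch).

    `coded` rows are (i, j, cx, cy) as produced by the coding stage; for a
    clustered node, the orientation (i, j) vs (j, i) is chosen greedily by
    its effect on the cumulative cost of the partially expanded route."""
    expanded = []
    for r in route:
        i, j = coded[r][0], coded[r][1]
        if j is None:
            expanded.append(i)          # nonclustered node: copy as-is
        elif not expanded:
            expanded += [i, j]          # first node of the route: either order works
        else:
            last = expanded[-1]
            # Pick the orientation whose entry node is cheaper to reach
            # from the last node already placed in the expanded route.
            if dist(coords[last], coords[i]) <= dist(coords[last], coords[j]):
                expanded += [i, j]
            else:
                expanded += [j, i]
    return expanded
```

For example, if the route over coded nodes visits a nonclustered node at (5, 5) and then a pair coded from original nodes at (0, 0) and (1, 0), the pair is expanded as (1, 0)-side first, since that entry node is closer to (5, 5).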

3. Assessment

3.1. Integration with Genetic Algorithm. For assessment of the effect of the RedExp operator on the performance of the GA, the following scenarios were considered for the generation of the initial population (in all cases the initial population consisted of 500 individuals): (a) Rp: all individuals are generated by random permutations (Rp operator), as in the GA presented in [7, 11]. (b) Sh: all individuals are generated by a sequencing heuristic of random permutations based on the nearest neighbor strategy (Sh operator), as described in Section 2.3. (c) RedExp: all individuals are generated by the RedExp operator.


In the GA of Figure 4, the initial population of 𝑋 individuals is generated by one of the operators Rp, Sh, RedExp, Rp_Sh, Rp_RedExp, or Sh_RedExp; after fitness evaluation and ascending sorting, main_best_cost is registered and parents are selected for reproduction by roulette wheel; offspring are produced by crossover (position-based, order-based) and mutation (inversion, exchange); the population is then updated with the best 𝑋 individuals among parents and offspring; if best_cost_updated_population < main_best_cost, then no_best_cost = 0 and main_best_cost is updated, otherwise no_best_cost = no_best_cost + 1; the cycle repeats until the stop condition is met. Parameters: crossover probability 𝑝𝑐 = 0.50, mutation probability 𝑝𝑚 = 0.20, crossover offspring = 𝑝𝑐𝑋, mutation offspring = 𝑝𝑚𝑋, and 𝑋 = 500.
Figure 4: General structure of the GA.

(d) Rp_Sh: 50% of all individuals are generated by the Rp operator and the other 50% are generated with the Sh operator. (e) Rp_RedExp: 50% of all individuals are generated by the Rp operator and the other 50% are generated with the RedExp operator. (f) Sh_RedExp: 50% of all individuals are generated by the Sh operator and the other 50% are generated with the RedExp operator. As mentioned in Section 2.1, the acceptance threshold metric 𝑑𝑐 is reestimated each time a solution is generated. Thus, due to (1), different degrees of "reduction" can occur during the generation of an initial population with the RedExp operator. The initial population was then integrated into the standard GA, which is presented in Figure 4. The selection of the crossover and mutation operators, also shown in Figure 4, was based on the findings reported in [7, 12–15]. Finally, comparison was performed with other works that have applied initial population strategies. Hence, the following works were considered for comparison purposes: (a) KMC [2]: in this work, the initial population of the GA was generated by using the k-Means Clustering (KMC) algorithm. The algorithm was tested with 14 TSP instances with 𝑁 = [52 − 442] nodes (mean = 204 nodes).

(b) HNN [10]: in this work, the initial population of the GA was generated by a Hopfield Neural Network (HNN), and the hybrid algorithm was tested with two small TSP instances with 51 and 76 nodes. The GA code was implemented in Octave [16] and MATLAB on an HP Z230 Workstation with an Intel Xeon CPU at 3.40 GHz and 8 GB of RAM. All executions of the GA started with the same random generator with its seed set at Infinite (Inf).

3.2. Results on Main Set of 41 TSP Instances. The main test was performed with 41 TSP instances selected from the TSPLIB95 [9], National TSP, and VLSI TSP [17] libraries to evaluate the statistical significance of the RedExp operator on the GA's convergence. The error from optimal solutions was computed using the following equation [12]:

𝐸 = (𝑎𝑣𝑒𝑟𝑎𝑔𝑒 − 𝑜𝑝𝑡𝑖𝑚𝑎𝑙) / 𝑜𝑝𝑡𝑖𝑚𝑎𝑙. (6)

An initial assessment with these instances was performed with a single execution of the GA and a dynamic stop condition. This was done to establish an intensive search process. The dynamic stop condition was applied on the no_best_cost variable of the main GA (see Figure 4). This variable increases while no new best solution is found within the search process, and it is reset to zero when a new best solution is found. In this case, the GA iterates until no_best_cost > 1000.
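The dynamic stop condition amounts to a patience counter wrapped around one GA generation; a minimal sketch follows, in which run_with_dynamic_stop and step are hypothetical names for illustration (step stands for one GA generation returning that generation's best cost):

```python
def run_with_dynamic_stop(step, patience=1000):
    """Iterate step() until no improvement has been observed for more than
    `patience` consecutive generations (sketch of the no_best_cost rule)."""
    main_best = float("inf")
    no_best = 0
    while no_best <= patience:
        best = step()
        if best < main_best:
            main_best, no_best = best, 0   # improvement: reset the counter
        else:
            no_best += 1                   # stagnation: count this generation
    return main_best
```

Unlike a fixed generation budget, this rule lets the GA run longer on instances where improvements keep arriving and stop early once the search stagnates.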


Because only one execution of the GA was considered, the average result in (6) is the best solution obtained with a single execution of the GA. Table 1 presents the results of the GA and the estimated error when compared with optimal results for each assessment scenario. As presented in Table 1, the minimum mean best errors (5.5%, 6.3%, and 5.7%) were obtained with initial populations generated with RedExp, Rp_RedExp, and Sh_RedExp, respectively. Hence, RedExp, as a single operator or as a complement to the Rp and Sh operators, has a positive effect on the final solution obtained by the GA. By selecting the minimum error achieved for each instance (throughout all scenarios), a total mean best error of 4.9% is computed. It is important to mention that these results consider the same population size through all generations of the GA, which was set at 𝑁𝐺𝐴 = 500 individuals or solutions. Hence, particularly for instances with more than 500 supply/demand nodes, achieving solutions with significant reductions in TSP distances with the RedExp operator supports its suitability for improving the convergence of the GA. As presented in Figure 5, if the GA is adapted to run for only 1000 generations (fixed stop condition), the fastest convergence to minimum distance values is achieved when the RedExp operator is used for the initial population. An extended assessment of the RedExp operator on the largest TSP instances (those with more than 250 nodes) was performed with 10 executions or trials of the GA (as performed in [2]) and a fixed stop condition (500 generations). This was done to assess the statistical significance of the results obtained with the RedExp operator. Table 2 presents the average, best, and worst errors obtained for each of the considered instances. The results presented in Table 2 corroborate those presented in Table 1 and Figure 5. The worst average and best error rates are observed when the initial population of the GA is generated with the Rp operator.
These are significantly improved if the initial population incorporates better solutions obtained by the RedExp or Sh operators. When comparing the error rates of Sh, Rp_RedExp, Sh_RedExp, and RedExp, it is observed that the minimum error rates are obtained with the RedExp operator. To quantitatively assess this difference, a statistical significance test was performed on the errors reported in Table 2. For this purpose, a paired t-test was performed with the following null hypothesis:

𝐻0: 𝜇𝐴 − 𝜇𝐵 < 0, (7)

where 𝐴 and 𝐵 are the two scenarios to be compared, and the hypothesis is focused on rejecting or validating that the mean error of 𝐴 is smaller than the mean error of 𝐵. In contrast, the alternative hypothesis is defined as

𝐻1: 𝜇𝐴 − 𝜇𝐵 ≥ 0. (8)
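For reference, the paired t statistic underlying such a test is computed from the per-instance error differences; the following sketch uses hypothetical error vectors, not the values of Table 2:

```python
from math import sqrt

def paired_t(a, b):
    """Paired t statistic for matched per-instance errors of two scenarios
    (illustrative sketch; assumes at least two varying differences)."""
    d = [x - y for x, y in zip(a, b)]      # per-instance differences
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean / sqrt(var / n)

# Hypothetical error vectors for two scenarios over four instances.
print(paired_t([5, 6, 7, 9], [4, 5, 5, 6]))
```

The statistic is compared against the t distribution with n − 1 degrees of freedom at the chosen significance level; swapping the two scenarios flips its sign.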

Table 3 presents the results of the significance test at a significance level of 0.10 for all scenarios. As presented, the mean errors obtained with Sh and Rp_Sh are statistically smaller than the mean error obtained with Rp. In contrast, the mean errors obtained with RedExp and Sh_RedExp are

statistically smaller than those obtained with Rp, Sh, Rp_Sh, and Rp_RedExp. For Rp_RedExp, the mean error is only statistically smaller than the mean errors of Rp, Sh, and Rp_Sh. Hence, this information provides evidence that the convergence of a GA can be improved if the initial population is generated with the RedExp operator, alone or in conjunction with Sh and Rp.

3.3. Comparison with KMC. The "improved GA" developed in [2], which was used to evaluate the KMC strategy, considered only three mutation operators (flip, swap, and slide) and no crossover was performed. In this case, strictly speaking, the GA presented in [2] does not include all the elements of a GA. In contrast, our GA, presented in Figure 4, more closely resembles the "simple GA" reviewed in [2], as it considers crossover and mutation operators. Other differences are the following: (a) Population size and stop conditions: in [2], according to the description of the "improved GA" and the examples that were discussed, the population size was set to 3000 and the number of iterations was set to 20000. In our GA the population is smaller (500 individuals) and the number of iterations is not fixed. (b) Construction of the initial population: in [2], once the complete set of nodes is clustered into 𝐾 groups, the GA is used to obtain the local optimal path of each group and a global optimal path of the 𝐾 groups. Then, according to the global optimal path, one edge of each local optimal path is disconnected to rewire the front and back groups. This process is repeated in order to generate the initial population. In our GA, as described in Section 2, the local optimal path of clustered and nonclustered nodes is obtained by the nearest neighbor heuristic described in Section 2.3. Then, declustering is performed by the decoding algorithm described in Section 2.4.
Thus, our GA is only executed after the initial population is built, which, after the decoding stage, considers the complete set of nodes (the GA is not executed with an initial population consisting of clustered nodes). Due to these differences, and others associated with the hardware resources used for implementation, a strictly fair comparison is difficult to perform. Nevertheless, a close comparison was performed by restricting our GA to be executed for up to the average execution time of the GA presented in [2], which is very competitive. Table 4 presents the results on the TSP instances considered by [2]. As presented in Table 4, the GA with the RedExp operator, when executed during the same average time as the KMC approach, can achieve a smaller mean best error (4.6078% ≤ 6.9763%). Although for small instances KMC achieved very small errors (i.e., for berlin52 and kroA100), for larger instances with more than 150 nodes the GA with the RedExp operator can achieve smaller errors than those obtained by the KMC approach. These results must be considered with caution due to the differences previously discussed.

Table 1 reports, for each instance (library, size, and optimal value), the TSP distance obtained by the GA and the corresponding error 𝐸 = [(GA − optimal)/optimal] × 100% under each of the six scenarios, together with the per-instance minimum error. The instances are: TSPLIB95: a280 (280, 2579.0), berlin52 (52, 7542.0), bier127 (127, 118282.0), ch150 (150, 6528.0), d198 (198, 15780.0), d493 (493, 35002.0), d657 (657, 48912.0), dsj1000 (1000, 18659688.0), eil51 (51, 426.0), eil76 (76, 538.0), fl417 (417, 11861.0), gil262 (262, 2378.0), kroB100 (100, 22141.0), kroC100 (100, 20749.0), kroD100 (100, 21294.0), kroA150 (150, 26524.0), kroB150 (150, 26130.0), kroA200 (200, 29368.0), kroB200 (200, 29437.0), lin105 (105, 14379.0), lin318 (318, 42029.0), nrw1379 (1379, 56638.0), pcb1173 (1173, 56892.0), pr107 (107, 44303.0), pr124 (124, 59030.0), pr136 (136, 96772.0), pr264 (264, 49135.0), pr439 (439, 107217.0), pr1002 (1002, 259045.0), rat195 (195, 2323.0), rat575 (575, 6773.0), rat783 (783, 8806.0), st70 (70, 675.0), u724 (724, 41910.0), u1432 (1432, 152970.0); VLSI TSP: dca1389 (1389, 5085.0), dka1376 (1376, 4666.0), pbm436 (436, 1443.0); National TSP: LU980 (980, 11340.0), UY734 (734, 79114.0), ZI929 (929, 95345.0). The average errors over all 41 instances are: Rp 11.7%, Sh 6.9%, RedExp 5.5%, Rp_Sh 6.7%, Rp_RedExp 6.3%, and Sh_RedExp 5.7%; the mean of the per-instance minimum errors is 4.9%.

Table 1: Results on main set of 41 TSP instances: errors on assessment scenarios.


Table 2: Results on the 20 largest TSP instances: average, best, and worst results on 10 runs of the GA with 500 generations. All values are errors in %.

| Instance | Rp Avg | Rp Best | Rp Worst | Sh Avg | Sh Best | Sh Worst | RedExp Avg | RedExp Best | RedExp Worst | Rp Sh Avg | Rp Sh Best | Rp Sh Worst | Rp RedExp Avg | Rp RedExp Best | Rp RedExp Worst | Sh RedExp Avg | Sh RedExp Best | Sh RedExp Worst |
| a280 | 219.82 | 210.47 | 237.19 | 8.56 | 7.07 | 11.29 | 7.23 | 3.83 | 9.33 | 8.22 | 6.04 | 10.29 | 8.85 | 4.95 | 11.97 | 8.88 | 6.90 | 10.19 |
| d493 | 353.42 | 338.64 | 362.27 | 10.09 | 9.17 | 10.95 | 9.96 | 8.71 | 11.07 | 9.27 | 8.52 | 11.01 | 10.22 | 9.44 | 10.92 | 9.31 | 8.19 | 10.67 |
| d657 | 648.07 | 637.43 | 659.50 | 16.58 | 15.87 | 17.39 | 12.80 | 11.77 | 13.45 | 17.25 | 15.32 | 18.91 | 13.42 | 11.84 | 14.31 | 13.06 | 11.61 | 14.13 |
| dsj1000 | 1241.81 | 1213.86 | 1269.45 | 23.97 | 22.73 | 24.52 | 16.14 | 14.31 | 17.27 | 22.59 | 19.79 | 23.69 | 15.87 | 13.96 | 16.93 | 16.66 | 15.32 | 17.59 |
| fl417 | 606.45 | 567.84 | 655.64 | 10.46 | 7.00 | 11.69 | 11.36 | 6.93 | 13.97 | 10.87 | 6.69 | 14.17 | 9.65 | 8.16 | 13.17 | 8.61 | 3.99 | 12.20 |
| lin318 | 264.31 | 256.94 | 286.52 | 11.41 | 9.71 | 13.01 | 6.42 | 5.36 | 7.59 | 11.80 | 10.52 | 14.28 | 7.53 | 5.00 | 9.57 | 7.72 | 7.02 | 8.86 |
| nrw1379 | 1261.95 | 1248.51 | 1277.91 | 19.51 | 18.28 | 20.42 | 18.38 | 17.79 | 19.19 | 20.19 | 19.11 | 20.72 | 18.74 | 18.02 | 19.50 | 19.01 | 18.11 | 19.89 |
| pcb1173 | 1195.06 | 1167.86 | 1211.13 | 19.44 | 18.67 | 20.39 | 18.09 | 17.14 | 18.78 | 19.53 | 18.42 | 20.04 | 18.84 | 17.22 | 20.10 | 18.78 | 17.23 | 21.06 |
| pr439 | 445.22 | 427.80 | 451.11 | 13.01 | 11.05 | 15.80 | 8.91 | 7.08 | 10.30 | 11.96 | 9.74 | 14.23 | 10.44 | 8.35 | 12.75 | 10.02 | 8.81 | 11.38 |
| pr1002 | 1102.60 | 1095.34 | 1106.83 | 15.32 | 14.51 | 16.16 | 14.87 | 14.22 | 15.66 | 15.62 | 15.02 | 16.79 | 15.31 | 15.01 | 15.88 | 14.99 | 13.83 | 15.86 |
| rat575 | 523.40 | 509.95 | 531.09 | 14.74 | 13.46 | 16.12 | 14.28 | 13.47 | 15.04 | 15.25 | 13.40 | 16.40 | 14.58 | 13.20 | 15.94 | 14.26 | 11.91 | 15.34 |
| rat783 | 781.88 | 770.09 | 792.67 | 16.47 | 15.50 | 17.56 | 15.62 | 14.98 | 16.22 | 16.74 | 16.12 | 17.48 | 15.82 | 14.35 | 17.45 | 16.19 | 15.67 | 17.02 |
| u724 | 764.02 | 752.12 | 772.07 | 16.71 | 15.00 | 18.26 | 14.85 | 13.25 | 16.05 | 17.08 | 15.28 | 18.37 | 15.19 | 13.73 | 16.86 | 14.77 | 13.27 | 15.65 |
| u1432 | 1323.86 | 1297.30 | 1338.61 | 20.07 | 19.53 | 21.11 | 20.05 | 18.92 | 20.96 | 20.17 | 19.22 | 21.59 | 19.96 | 19.16 | 20.73 | 19.79 | 18.56 | 21.14 |
| uy734 | 803.96 | 774.71 | 822.22 | 16.91 | 14.73 | 17.99 | 15.61 | 14.92 | 16.12 | 16.40 | 13.64 | 17.69 | 16.27 | 14.97 | 17.41 | 15.51 | 14.73 | 16.24 |
| pbm436 | 447.86 | 444.54 | 450.02 | 11.22 | 9.77 | 12.46 | 10.66 | 8.03 | 12.20 | 10.82 | 9.42 | 13.47 | 10.86 | 8.87 | 12.48 | 10.46 | 8.65 | 11.48 |
| dka1376 | 1569.52 | 1546.66 | 1579.51 | 18.32 | 17.41 | 18.96 | 18.77 | 17.28 | 20.14 | 19.67 | 18.30 | 20.38 | 18.83 | 18.05 | 19.59 | 18.56 | 17.56 | 19.36 |
| zi929 | 945.79 | 919.41 | 971.55 | 16.96 | 15.92 | 17.58 | 13.76 | 13.15 | 14.64 | 16.63 | 14.98 | 18.43 | 13.70 | 12.85 | 14.65 | 13.53 | 12.13 | 15.12 |
| lu980 | 1228.25 | 1218.35 | 1249.95 | 18.60 | 17.54 | 19.69 | 17.91 | 14.52 | 19.70 | 18.02 | 15.59 | 19.58 | 18.48 | 16.96 | 19.70 | 18.75 | 17.40 | 20.87 |
| dca1389 | 1671.61 | 1660.84 | 1684.23 | 17.87 | 17.26 | 18.32 | 17.66 | 16.77 | 18.62 | 18.00 | 16.73 | 18.93 | 17.84 | 16.78 | 18.93 | 17.86 | 16.77 | 19.61 |
| Average | 869.94 | 852.93 | 885.47 | 15.81 | 14.51 | 16.98 | 14.17 | 12.62 | 15.32 | 15.80 | 14.09 | 17.32 | 14.52 | 13.04 | 15.94 | 14.34 | 12.88 | 15.68 |


[Figure 5: line plot of Average TSP Distance (y-axis, 550000 to 690000; a secondary axis scale from 4000000 to 16000000 also appears) versus Generations (x-axis, 1 to 1001) for the six assessment scenarios: Av. Rp, Av. Sh, Av. RedExp, Av. Rp_Sh, Av. Rp_RedExp, Av. Sh_RedExp.]

Figure 5: Results on main set of 41 TSP instances: average convergence of the GA.

Table 3: Results on the 20 largest TSP instances: statistical significance test (rows A, columns B).

| A \ B | Rp | Sh | RedExp | Rp Sh | Rp RedExp | Sh RedExp |
| Rp | – | Reject | Reject | Reject | Reject | Reject |
| Sh | Accept | – | Reject | Reject | Reject | Reject |
| RedExp | Accept | Accept | – | Accept | Accept | Reject |
| Rp Sh | Accept | Reject | Reject | – | Reject | Reject |
| Rp RedExp | Accept | Accept | Reject | Accept | – | Reject |
| Sh RedExp | Accept | Accept | Reject | Accept | Accept | – |
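Table 3 reports Accept/Reject decisions without reproducing the test computation in this excerpt. As an illustration only (the test type, pairing, and significance level are our assumptions, not necessarily the authors' exact procedure), a paired t-test over the per-instance average errors of the Sh and RedExp scenarios from Table 2 can be computed as:

```python
import math

# Per-instance average errors (%) on the 20 largest instances (Table 2).
sh_avg = [8.56, 10.09, 16.58, 23.97, 10.46, 11.41, 19.51, 19.44, 13.01, 15.32,
          14.74, 16.47, 16.71, 20.07, 16.91, 11.22, 18.32, 16.96, 18.60, 17.87]
redexp_avg = [7.23, 9.96, 12.80, 16.14, 11.36, 6.42, 18.38, 18.09, 8.91, 14.87,
              14.28, 15.62, 14.85, 20.05, 15.61, 10.66, 18.77, 13.76, 17.91, 17.66]

def paired_t(a, b):
    """Paired t-statistic for two matched samples."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

t = paired_t(sh_avg, redexp_avg)
# Two-sided critical value for df = 19 at alpha = 0.05 is about 2.093;
# |t| above it indicates the mean errors of the two scenarios differ.
print(round(t, 3), abs(t) > 2.093)
```

With the Table 2 averages this yields t ≈ 3.4 on 19 degrees of freedom, well above the two-sided 5% critical value of about 2.093, i.e., the mean errors of the two scenarios differ significantly.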


Table 4: Results on set of 14 TSP instances: comparison with KMC [2] (same average execution time). Errors in %, times in seconds.

| Instance | Size | Optimal | RedExp | Error | Rp RedExp | Error | Sh RedExp | Error | Best Error | Avg. Time (s) | KMC Best Error |
| berlin52 | 52 | 7542 | 7903.8 | 4.7967 | 7985.8 | 5.8841 | 7841.4 | 3.9696 | 3.9696 | 16.3208 | 0.0000 |
| kroA100 | 100 | 21282 | 22445.1 | 5.4649 | 22445.1 | 5.4649 | 22381.9 | 5.1683 | 5.1683 | 20.3318 | 0.1035 |
| pr144 | 144 | 58537 | 61399.2 | 4.8896 | 61399.2 | 4.8896 | 61243.8 | 4.6241 | 4.6241 | 26.5335 | 2.0176 |
| ch150 | 150 | 6528 | 6707.4 | 2.7477 | 6784.3 | 3.9264 | 6798.6 | 4.1450 | 2.7477 | 25.4257 | 5.1038 |
| kroB150 | 150 | 26130 | 27439.0 | 5.0095 | 26787.4 | 2.5158 | 27084.9 | 3.6543 | 2.5158 | 25.5628 | 3.3544 |
| pr152 | 152 | 73682 | 77133.3 | 4.6841 | 77082.1 | 4.6145 | 75852.8 | 2.9461 | 2.9461 | 26.4204 | 2.5998 |
| rat195 | 195 | 2323 | 2419.8 | 4.1675 | 2411.1 | 3.7936 | 2411.8 | 3.8231 | 3.7936 | 30.0717 | 8.7133 |
| d198 | 198 | 15780 | 16253.2 | 2.9985 | 16270.4 | 3.1075 | 16504.8 | 4.5930 | 2.9985 | 32.5780 | 2.8149 |
| kroA200 | 200 | 29368 | 30026.6 | 2.2427 | 30290.5 | 3.1413 | 30303.1 | 3.1840 | 2.2427 | 31.8691 | 5.3725 |
| ts225 | 225 | 126643 | 138526.9 | 9.3837 | 139043.0 | 9.7913 | 130054.6 | 2.6938 | 2.6938 | 34.1776 | 10.3008 |
| pr226 | 226 | 80369 | 82685.6 | 2.8825 | 82113.5 | 2.1706 | 83026.1 | 3.3061 | 2.1706 | 36.3008 | 4.9741 |
| pr299 | 299 | 48191 | 51740.7 | 7.3658 | 53164.5 | 10.3205 | 51731.9 | 7.3476 | 7.3476 | 44.6964 | 14.6584 |
| lin318 | 318 | 42029 | 46135.6 | 9.7710 | 46507.3 | 10.6552 | 45821.8 | 9.0243 | 9.0243 | 47.2462 | 15.0669 |
| pcb442 | 442 | 50778 | 57006.4 | 12.2660 | 57917.6 | 14.0604 | 57032.5 | 12.3173 | 12.2660 | 62.8187 | 22.5887 |
| Average | 204 | | | 5.6193 | | 6.0240 | | 5.0569 | 4.6078 | 32.8824 | 6.9763 |
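The error columns in Tables 4 and 5 follow the definition given for Table 1, E = [(GA − optimal)/optimal] × 100%, and the "Best Error" column is the minimum over the three RedExp scenarios. As a quick check against the berlin52 row (the helper function is our illustration, not code from the paper):

```python
def pct_error(ga_value, optimal):
    """Relative error of a GA tour length versus the known optimum, in %."""
    return (ga_value - optimal) / optimal * 100.0

# berlin52: optimal tour length 7542 (Table 4).
e_redexp = pct_error(7903.8, 7542)     # ≈ 4.797, matching the reported 4.7967
e_rp_redexp = pct_error(7985.8, 7542)  # ≈ 5.884
e_sh_redexp = pct_error(7841.4, 7542)  # ≈ 3.970

# Best Error = minimum over the three RedExp scenarios.
best = min(e_redexp, e_rp_redexp, e_sh_redexp)
print(best)
```

Small discrepancies in the last digit arise because the tour lengths in the table are themselves rounded.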

Table 5: Results on set of 2 TSP instances: comparison with HNN [10] (same iterations). Errors in %.

| Instance | Size | Optimal | RedExp | Error | Rp RedExp | Error | Sh RedExp | Error | Best Error | HNN Best | HNN Best Error |
| eil51 | 51 | 426 | 437.4 | 2.6781 | 435.5 | 2.2318 | 439.7 | 3.2095 | 2.2318 | 429 | 0.7042 |
| eil76 | 76 | 538 | 579.8 | 7.7785 | 571.9 | 6.2973 | 578.6 | 7.5446 | 6.2973 | 549 | 2.0446 |

3.4. Comparison with HNN. In [10] a Hopfield Neural Network (HNN) was used to create the initial population of a GA. That GA had the same standard structure as ours, although with different reproduction operators, since it used heuristic crossover and mutation. It also used a small population of 50 individuals and ran the GA for 100 iterations. Testing in [10] was performed on only two instances (eil51 and eil76). Table 5 presents the results reported in [10] and those obtained by the proposed GA with the 𝑅𝑒𝑑𝐸𝑥𝑝 operator. For consistency, our GA was also executed for 100 iterations. As presented in Table 5, the HNN approach achieved a smaller error than our GA with the 𝑅𝑒𝑑𝐸𝑥𝑝 operator. This is consistent with the significant differences observed for instances berlin52, kroA100, and pr144 in Table 4. It is important to note that the GA with the 𝑅𝑒𝑑𝐸𝑥𝑝 operator performs best on large instances (i.e., more than 150 nodes), not on small ones. Moreover, while the HNN approach yields a very small error compared to the GA with the 𝑅𝑒𝑑𝐸𝑥𝑝 operator, its use may be restricted by instance size: as stated in [10], the Hopfield scheme requires 𝑁² neurons for an 𝑁-node problem. This is discussed further in the following section.
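The quadratic growth in neurons makes the HNN representation expensive for the instance sizes in Table 2. A back-of-the-envelope calculation (our illustration, not taken from [10]) for a fully connected Hopfield network with one neuron per (city, tour position) pair:

```python
def hopfield_size(n):
    """Neurons and pairwise weights for the classic n-city Hopfield TSP encoding."""
    neurons = n * n          # one neuron per (city, tour position) pair
    weights = neurons ** 2   # dense weight matrix over all neuron pairs
    return neurons, weights

for n in (51, 150, 1000):
    neurons, weights = hopfield_size(n)
    print(n, neurons, weights)
```

For eil51 this is only 2601 neurons, but at 1000 nodes the encoding needs 10⁶ neurons and its dense weight matrix alone has 10¹² entries, which supports the observation that the HNN approach is restricted by instance size.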

4. Discussion and Future Work

In this work, a reduction-expansion operator, termed as 𝑅𝑒𝑑𝐸𝑥𝑝, was developed to improve the performance of


Genetic Algorithms (GA) for the TSP. The operator was applied to improve the initial population of the GA, as done in other works such as [2, 10]. While the operator is based on clustering as in [2], only pairs of the closest nodes were considered for clustering, and the number of clusters was dynamically defined by an acceptance threshold that considers the distance variation between all nodes in the network. Experiments performed with a set of 41 well-known symmetric TSP instances corroborated the suitability of the operator to improve both the convergence and the quality of the final solutions obtained by a GA, yielding a mean best error of 4.9%. An extended assessment on the 20 largest instances of this set showed that, within 500 generations of the GA, the 𝑅𝑒𝑑𝐸𝑥𝑝 operator improves performance when compared with the 𝑅𝑝 and 𝑆ℎ operators. When compared with other strategies focused on the initial population of the GA, the proposed approach shows significant errors on small instances with fewer than 150 nodes. However, this may be caused by the clustering process itself. As discussed in Section 2, the distribution patterns of the nodes may affect the performance of the clustering and declustering processes by increasing variability in the initial population. In this work, additional evidence about the effect of the number of nodes was also found: for small instances, the distribution patterns are more representative of the instance's key features, so clustering can more severely compromise the integrity of those features, even when the degree of clustering is small. This can provide


important insights regarding other logistics problems and solution methods based on clustering. Another aspect that must be studied is the effect of the 𝑅𝑒𝑑𝐸𝑥𝑝 operator (and, in general, of clustering and nearest neighbor approaches) on the genetic diversity of the initial population. If the initial population is seeded with very good solutions (obtained by a nearest neighbor heuristic or by a deterministic method such as the Clarke and Wright (C&W) algorithm), many solutions are likely to share the same subsequences of genes. This may reduce the diversification performance of the reproduction operators, leading to convergence to local optima. Thus, future work focuses on addressing the limitations of the 𝑅𝑒𝑑𝐸𝑥𝑝 operator. The following research topics are considered:

(a) Adapting the Schema Theorem to determine the subsequences of genes that are common to all solutions within the initial populations of the considered scenarios; in this way, the effect of these improvement strategies on the genetic diversity of the initial populations could be preliminarily assessed.

(b) Developing more efficient metrics for the acceptance threshold 𝑑𝑐, since it has a direct effect on the clustering stage; this is aimed at finding better solutions for large instances and optimal solutions for smaller instances.

(c) Integrating the HNN approach within the clustering process to analyze its performance on large instances.

(d) Extending the operator to other routing problems such as the Capacitated Vehicle Routing Problem (CVRP).

(e) Developing a metric to assess the loss of features when applying clustering.
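The four stages of 𝑅𝑒𝑑𝐸𝑥𝑝 discussed in this work (clustering of closest node pairs under an acceptance threshold, coding into a reduced set, nearest neighbor sequencing, and expansion) can be sketched as below. The specific threshold rule (mean pairwise distance minus one standard deviation), the use of a pair midpoint as the reduced location, and the expansion order are our simplifying assumptions for illustration; the paper defines its own acceptance threshold 𝑑𝑐 from the distance variation between all nodes.

```python
import math
import random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def red_exp_tour(coords):
    """Sketch of a RedExp-style initial-tour construction (simplified assumptions)."""
    n = len(coords)
    # Acceptance threshold from the distance variation between all nodes
    # (assumed form: mean pairwise distance minus one standard deviation).
    pairs = [(dist(coords[i], coords[j]), i, j)
             for i in range(n) for j in range(i + 1, n)]
    ds = [d for d, _, _ in pairs]
    mean = sum(ds) / len(ds)
    std = math.sqrt(sum((d - mean) ** 2 for d in ds) / len(ds))
    dc = max(mean - std, 0.0)

    # (a) Clustering: greedily pair the closest nodes below dc,
    # each node belonging to at most one pair.
    used, clusters = set(), []
    for d, i, j in sorted(pairs):
        if d >= dc:
            break
        if i not in used and j not in used:
            clusters.append((i, j))
            used.update((i, j))

    # (b) Coding: each pair is reduced to its midpoint; the rest stay as-is.
    reduced = [((coords[i][0] + coords[j][0]) / 2,
                (coords[i][1] + coords[j][1]) / 2, (i, j))
               for i, j in clusters]
    reduced += [(coords[i][0], coords[i][1], (i,))
                for i in range(n) if i not in used]

    # (c) Sequencing: nearest neighbor over the reduced locations.
    tour, rest = [reduced[0]], reduced[1:]
    while rest:
        last = tour[-1]
        nxt = min(rest, key=lambda r: dist(last[:2], r[:2]))
        rest.remove(nxt)
        tour.append(nxt)

    # (d) Expansion: replace each reduced pair by its two original nodes,
    # entering through the node closer to the previous tour point.
    seq = []
    for loc in tour:
        members = loc[2]
        if len(members) == 2 and seq:
            prev = coords[seq[-1]]
            a, b = members
            if dist(prev, coords[b]) < dist(prev, coords[a]):
                members = (b, a)
        seq.extend(members)
    return seq

random.seed(1)
pts = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]
tour = red_exp_tour(pts)
print(sorted(tour) == list(range(30)))  # the result visits every node exactly once
```

A tour built this way would then seed one individual of the GA's initial population, with the remaining individuals generated as in the assessed scenarios.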

Data Availability

The databases used in this study are publicly available on the Internet; reference URLs are provided in the manuscript.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

[1] M. Gendreau, G. Ghiani, and E. Guerriero, "Time-dependent routing problems: A review," Computers & Operations Research, vol. 64, pp. 189–197, 2015.
[2] Y. Deng, Y. Liu, and D. Zhou, "An improved genetic algorithm with initial population strategy for symmetric TSP," Mathematical Problems in Engineering, vol. 2015, Article ID 212794, 6 pages, 2015.
[3] A. Hussain, Y. Shad-Muhammad, M. Nauman-Sajid, and I. Hussain, "Genetic Algorithm for Traveling Salesman Problem with Modified Cycle Crossover Operator," Computational Intelligence and Neuroscience, vol. 2017, Article ID 7430125, 7 pages, 2017.
[4] V. Zharfi and A. Mirzazadeh, "A novel metaheuristic for travelling salesman problem," Journal of Industrial Engineering, vol. 2013, Article ID 347825, 5 pages, 2013.
[5] M. Anantathanavit and M. Munlin, "Using K-means radius particle swarm optimization for the travelling salesman problem," IETE Technical Review, vol. 33, no. 2, pp. 172–180, 2016.
[6] A. Mohsen, "Annealing Ant Colony Optimization with Mutation Operator for Solving TSP," Computational Intelligence and Neuroscience, vol. 2016, Article ID 8932896, 13 pages, 2016.
[7] P. Larrañaga, C. M. H. Kuijpers, R. H. Murga, I. Inza, and S. Dizdarevic, "Genetic algorithms for the travelling salesman problem: a review of representations and operators," Artificial Intelligence Review, vol. 13, no. 2, pp. 129–170, 1999.
[8] V. Togan and A. T. Daloglu, "An improved genetic algorithm with initial population strategy and self-adaptive member grouping," Computers & Structures, vol. 86, pp. 1204–1218, 2008.
[9] G. Reinelt, TSPLIB 95, Universität Heidelberg, Institut für Informatik, Heidelberg, Germany, 2016, http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/.
[10] G. Vahdati, S. Y. Ghouchani, and M. Yaghoobi, "A hybrid search algorithm with Hopfield neural network and genetic algorithm for solving traveling salesman problem," in Proceedings of the 2nd International Conference on Computer and Automation Engineering (ICCAE 2010), pp. 435–439, Singapore, February 2010.
[11] P. Chen, "An improved genetic algorithm for solving the Traveling Salesman Problem," in Proceedings of the 2013 Ninth International Conference on Natural Computation (ICNC), pp. 397–401, 2013.
[12] S. S. Ray, S. Bandyopadhyay, and S. K. Pal, "Genetic operators for combinatorial optimization in TSP and microarray gene ordering," Applied Intelligence, vol. 26, no. 3, pp. 183–195, 2007.
[13] O. Abdoun, J. Abouchabaka, and C. Tajani, "Analyzing the performance of mutation operators to solve the travelling salesman problem," International Journal of Emerging Sciences, vol. 2, no. 1, pp. 61–77, 2012.
[14] G. Üçoluk, "Genetic algorithm solution of the tsp avoiding special crossover and mutation," Intelligent Automation & Soft Computing, vol. 8, no. 3, pp. 1–9, 2013.
[15] K. Puljić and R. Manger, "Comparison of eight evolutionary crossover operators for the vehicle routing problem," Mathematical Communications, vol. 18, no. 2, pp. 359–375, 2013.
[16] J. Eaton, "GNU Octave 4.2.1," 2017, https://www.gnu.org/software/octave/.
[17] W. Cook and A. Rohe, National Travelling Salesman Problems, VLSI Data Sets, University of Waterloo (Canada) and Universität Bonn (Germany), 2016, http://www.math.uwaterloo.ca/tsp/world/countries.html.
