IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—I: REGULAR PAPERS, VOL. 53, NO. 6, JUNE 2006
Improving Local Minima of Columnar Competitive Model for TSPs

Hong Qu, Zhang Yi, and Huajin Tang
Abstract—The columnar competitive model (CCM) has recently been proposed to solve the traveling salesman problem (TSP). It performs much better than the original Hopfield network in terms of both the number and the quality of valid solutions. However, local minima remain an open issue. This paper studies the performance of the CCM and aims to alleviate its local minima problem. The contributions of this paper are: 1) it proves mathematically that the CCM can hardly escape from local minimum states in general; 2) an alternative CCM based on a modified energy function is presented to enhance the capability of the original CCM; 3) a new algorithm is proposed that combines the alternative CCM with the original one, enabling the network to reach a lower energy level whenever it is trapped in a local minimum and thus to arrive at an optimal or near-optimal state quickly; and 4) simulations are carried out to illustrate the performance of the proposed method.

Index Terms—Columnar competitive model (CCM), Hopfield network, local minima, optimization, traveling salesman problem (TSP).
I. INTRODUCTION

SINCE the Hopfield network was first used [1] to solve optimization problems, mainly the traveling salesman problem (TSP), much research has aimed at applying Hopfield neural networks to combinatorial optimization problems [2]–[6]. It is well known that Hopfield networks suffer from several limitations when applied to optimization problems: invalidity of the obtained solutions, trial-and-error parameter settings, low computational efficiency, and so on [8], [9]. Recently, Tang et al. [10] proposed a new approach to the TSP, a Hopfield-network-based columnar competitive model (CCM), also known as the maximum neuron model of Takefuji et al. [11]. This method, which incorporates the winner-takes-all (WTA) learning rule, guarantees convergence to valid states and total suppression of spurious states. Theoretical analysis and simulation results showed that this model offers fast convergence with low computational effort and performs much better than the Hopfield network in terms of both the number and the quality of valid solutions, thereby allowing it to solve large-scale problems effectively. However, local minima remain a problem for the CCM; moreover, as the network scale increases, the local minima problem becomes much worse. To avoid the local minima problem, many researchers have adopted heuristic search techniques, such as chaotic networks [12]–[14] and local minima escape (LME) algorithms [15]–[17], which find a new state at a lower energy level whenever the network is trapped in a local minimum. In this paper, we propose a new CCM-based algorithm for the TSP that allows the network to escape from local minima and converge to a global optimal or near-optimal state quickly. A brief description of the CCM is presented in Section II. A performance analysis of the CCM is presented in Section III. An improved model is described in Section IV. Simulation results are given in Section V to illustrate the theoretical findings. Finally, conclusions are drawn in Section VI.

II. CCM FOR TSP
Manuscript received July 5, 2005; revised September 29, 2005. This work was supported by the National Science Foundation of China under Grant 60471055 and by the Specialized Research Fund for the Doctoral Program of Higher Education under Grant 20040614017. This paper was recommended by Associate Editor C. T. Lin.

H. Qu and Z. Yi are with the Computational Intelligence Laboratory, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China (e-mail: [email protected]; [email protected]).

H. Tang is with the Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117576 (e-mail: [email protected]).

Digital Object Identifier 10.1109/TCSI.2006.874180
The TSP is an optimization task that arises in many practical situations. It may be stated as follows: given a group of cities to be visited and the distance between any two of them, find the shortest tour that visits each city exactly once and returns to the starting point. Let $N$ be the number of cities and let $d_{xy}$ be the distance between cities $x$ and $y$, $x, y \in \{1, \ldots, N\}$. A tour of the TSP can be represented by an $N \times N$ permutation matrix, where each row and each column is associated with a particular city and a particular order in the tour, respectively. Let $S$ and $C$ be the unit hypercube of $\mathbb{R}^{N \times N}$ and its corner set, respectively. Given a fixed $\mathbf{x} \in C$, consider its row sums and column sums. Then, the valid tour set for the TSP is
$$V = \Big\{\mathbf{x} \in C \;:\; \sum_{i=1}^{N} x_{xi} = 1 \ \ \forall x, \quad \sum_{x=1}^{N} x_{xi} = 1 \ \ \forall i \Big\}. \qquad (1)$$

In the seminal work of Hopfield, it has been shown that an energy function of the form

$$E = -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} w_{ij}\, x_i x_j - \sum_{i=1}^{n} b_i x_i \qquad (2)$$

can be minimized by the continuous-time neural network if the connection matrix $W = (w_{ij})$ is symmetric [1], where $n$ is the number of neurons, $x_i$ represents the output state of neuron $i$, and $b = (b_i)$ is the vector of biases. As described in [10], the CCM has a connection structure similar to that of the Hopfield network, but obeys a different updating rule
1057-7122/$20.00 © 2006 IEEE
which incorporates WTA [18] column-wise. The WTA mechanism can be described as follows. Given a set of neurons, the input to each neuron is calculated, and the neuron with the maximum input value is declared the winner. The winner's output is set to "1" while the remaining neurons have their outputs set to "0." This intrinsic competitive nature favors the convergence of the CCM to feasible solutions, since it reduces the number of penalty terms compared to Hopfield's formulation. The associated energy function of the CCM for the TSP can be written as
$$E = \frac{B}{2}\sum_{x=1}^{N}\Big(\sum_{i=1}^{N} x_{xi} - 1\Big)^{2} + \frac{1}{2}\sum_{x=1}^{N}\sum_{\substack{y=1\\ y \neq x}}^{N}\sum_{i=1}^{N} d_{xy}\, x_{xi}\,(x_{y,i+1} + x_{y,i-1}) \qquad (3)$$

where $B$ is a scaling parameter, $x_{xi}$ represents the output state of neuron $(x, i)$, and $d_{xy}$ is the distance between cities $x$ and $y$. Comparing (2) and (3), the connection matrix and the external input of the network are computed as follows:
To investigate the evolution of the CCM in this case, we consider the input of the network for an arbitrary column, say column $i$. Suppose the state at iteration $k$ is the valid state $\mathbf{x}(k)$, in which each column contains exactly one active neuron, and let

$$h_{y} = \begin{cases} 1, & \text{if neuron } (y, i) \text{ is active}\\ 0, & \text{otherwise.}\end{cases} \qquad (10)$$

Then, the input to each neuron in the $i$th column can be calculated as
$$w_{xi,yj} = -B\,\delta_{xy}(1 - \delta_{ij}) - d_{xy}(\delta_{j,i+1} + \delta_{j,i-1}) \qquad (4)$$

$$b_{xi} = B \qquad (5)$$

where $\delta_{ab}$ is the Kronecker delta function. Then, the input to neuron $(x, i)$ is calculated as

$$\mathrm{net}_{xi} = \sum_{y=1}^{N}\sum_{j=1}^{N} w_{xi,yj}\, x_{yj} + b_{xi}. \qquad (6)$$

In the CCM, based on the WTA learning rule, the neurons in each column compete with one another, and the winner is the neuron with the largest input. According to (7), the outputs of the $i$th column are updated as

$$x_{xi} = \begin{cases} 1, & \text{if } \mathrm{net}_{xi} = \max_{y} \mathrm{net}_{yi}\\ 0, & \text{otherwise.} \end{cases} \qquad (7)$$

The whole algorithm is as described in [10].
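The column-wise competition of (4)–(7) can be sketched in a few lines of code. The following is a minimal Python sketch, not the authors' implementation: the 0/1 list-of-lists state encoding and the value of the penalty parameter `B` are assumptions, and the input expression follows the row-penalty-plus-distance reconstruction above.

```python
def wta_column_update(X, d, col, B=10.0):
    """One column-wise WTA update of the CCM (illustrative sketch).

    X   : N x N 0/1 state, X[x][i] = 1 iff city x is visited at step i.
    d   : N x N symmetric distance matrix with zero diagonal.
    col : index of the column being updated.
    B   : assumed penalty/scaling parameter (hypothetical value).
    """
    N = len(X)
    left, right = (col - 1) % N, (col + 1) % N      # cyclic tour neighbours
    net = []
    for x in range(N):
        # row-constraint part: bias B minus penalty for other visits of city x
        row_term = B * (1 - (sum(X[x]) - X[x][col]))
        # distance coupling to the cities visited just before and after step col
        dist_term = sum(d[x][y] * (X[y][left] + X[y][right]) for y in range(N))
        net.append(row_term - dist_term)
    winner = max(range(N), key=net.__getitem__)     # largest input wins
    for x in range(N):
        X[x][col] = 1 if x == winner else 0         # winner "1", the rest "0"
    return winner
```

Sweeping `col` over all columns repeatedly gives one possible reading of the update loop; at a valid state, the active neuron of a column keeps winning only while its input is maximal, which is exactly the behavior analyzed in the next section.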
III. PERFORMANCE ANALYSIS FOR CCM

In the following, we present some analytical results about the CCM.

Theorem 1: When the scaling parameter $B$ satisfies the bound given in [10], which involves the maximum distance $d_{\max}$ and the minimum distance $d_{\min}$, the competitive model defined by (3)–(7) always converges to valid states.

Proof: See [10].

Theorem 2: Given such a $B$, the evolution stops whenever the network defined by (3)–(7) is trapped in a valid state $\mathbf{x}^{*}$.

Proof: Since

(11)

it is clear that the competition among the neurons in the $i$th column stops. Hence, when $B$ satisfies the above bound, the evolution of the CCM stops whenever the network is trapped in the state $\mathbf{x}^{*}$, which is obviously a local minimum of the energy function. This completes the proof.

To gain deeper insight into the performance of the CCM for the TSP, we employ a simple example consisting of four cities, as shown in Fig. 1. There are six valid tours in Fig. 1.
A. Some Preliminary Knowledge

From the viewpoint of mathematical programming, the TSP can be described as a quadratic 0–1 programming problem with linear constraints:

$$\min \ \frac{1}{2}\sum_{x=1}^{N}\sum_{\substack{y=1\\ y \neq x}}^{N}\sum_{i=1}^{N} d_{xy}\, x_{xi}\,(x_{y,i+1} + x_{y,i-1}) \qquad (12)$$

subject to

$$\sum_{x=1}^{N} x_{xi} = 1, \quad \sum_{i=1}^{N} x_{xi} = 1, \quad x_{xi} \in \{0, 1\}. \qquad (13)$$

Fig. 1. The example. 1 is the start city.
TABLE I PROBABILITY OF CONVERGENCE TO EVERY VALID SOLUTION
Let $L(\mathbf{x})$ denote the length of the tour represented by $\mathbf{x}$, i.e., the total tour length of (12). Then, the energy function to be minimized by the network is

$$E = L(\mathbf{x}) + E_{c}(\mathbf{x}) \qquad (14)$$

where $E_{c}$ collects the penalty terms for the constraints described by (13). Accordingly, the input of the network can also be written as the sum of two portions
Obviously, two of these tours are the global optimal solutions, and the rest are local minima. The competition stops when the network converges to any one of the local minima. Moreover, the final solution is randomly selected from these six solutions, since the initial state is obtained randomly. Table I gives the experimental results of 100 runs, showing the probability that the CCM converges to each of the six solutions: the solutions are distributed nearly uniformly over the valid-state space. The above analysis shows that local minima remain a restriction for the CCM when it is applied to the TSP, especially as the number of cities grows. The parameter values cannot guarantee that the network converges to the global minimum, though they can lead the network to a valid solution quickly. In the next section, an algorithm is presented to tackle this problem, which permits the network to escape from local minima and converge to an optimal or near-optimal state effectively.
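The six-tour count for the four-city example can be checked directly. The sketch below uses a hypothetical symmetric distance matrix (the coordinates of Fig. 1 are not reproduced here): fixing the start city leaves $(4-1)! = 6$ valid tours, and since a tour and its reversal have the same length, the global optimum is attained by a reversed pair, matching the discussion above.

```python
from itertools import permutations

# Hypothetical symmetric distances for four cities; city 1 is index 0
# and is fixed as the start city (the actual Fig. 1 data are not shown).
d = [[0, 2, 9, 4],
     [2, 0, 6, 3],
     [9, 6, 0, 5],
     [4, 3, 5, 0]]

def tour_length(tour):
    # closed tour: sum of consecutive legs, returning to the start city
    return sum(d[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

tours = [(0,) + p for p in permutations(range(1, 4))]   # 3! = 6 valid tours
lengths = [tour_length(t) for t in tours]

# A tour and its reversal share the same length, so the optimum occurs twice.
assert lengths.count(min(lengths)) == 2
```

With this matrix the six tours collapse into three distinct lengths, so a purely random draw among the valid states, as Table I suggests, finds an optimum only about a third of the time.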
$$\mathrm{net}_{xi} = \mathrm{net}_{xi}^{L} + \mathrm{net}_{xi}^{c} \qquad (15)$$

where $\mathrm{net}_{xi}^{L}$ is the input produced by the objective (12) and $\mathrm{net}_{xi}^{c}$ is that produced by the constraints (13). Moreover, from the point of view of the original neural network, we can write the input as

$$\mathrm{net}_{xi} = \mathrm{net}_{xi}^{W} + \mathrm{net}_{xi}^{b} \qquad (16)$$

where $\mathrm{net}_{xi}^{W}$ is the portion of the input brought from the connection matrix and $\mathrm{net}_{xi}^{b}$ results from the external input of the network. Apparently, the expressions of these portions are different for different neural representations. According to the representation of (3) presented in [10], it holds that

$$\mathrm{net}_{xi}^{L} = -\sum_{y \neq x} d_{xy}(x_{y,i+1} + x_{y,i-1}) \qquad (17)$$

$$\mathrm{net}_{xi}^{c} = -B\Big(\sum_{j \neq i} x_{xj} - 1\Big). \qquad (18)$$
IV. IMPROVEMENT FOR CCM

To improve the performance of the original network, Wilson and Pawley [19] reexamined Hopfield's formulation for the TSP and encountered difficulties in converging to valid tours. They found that the solutions represented by local minima may correspond to invalid tours. To remedy this problem, Brandt et al. [20] proposed a modified energy function, which gives better convergence to valid tours than Hopfield and Tank's formulation, but the tour quality is not as good as the original one [21]. To study the performance of the CCM, the energy formulation is also an essential factor that should be taken into account. In this section, by examining the competitive updating rule, conditions that ensure the network escapes from local minima are derived. Subsequently, an alternative representation of the CCM satisfying these conditions is proposed.
(19)

Hence, for $\mathbf{x}^{*} \in V$, it is obvious that

(20)

The following theorem presents conditions ensuring that the CCM escapes from local minima.

Theorem 3: Given $\mathbf{x}^{*} \in V$ that is not the global optimal solution, if
(21)
are guaranteed, then the CCM will escape from the local minimum state $\mathbf{x}^{*}$.

Proof: When the conditions
where $\lambda$ is a scaling parameter and $d_{xy}$ is the distance between cities $x$ and $y$. Comparing (2) and (26), the connection matrix and the external input of the network are computed as follows:
(22)
are guaranteed, the input to the neurons in the $i$th column can be computed, according to (15) and (16), as
(27)

(28)

where $\delta_{ab}$ is the Kronecker delta function. The input to neuron $(x, i)$ is calculated by
(29)
(23)

where the added term is a constant. Since $\mathbf{x}^{*}$ is valid, there is only one active neuron in the $i$th column. Suppose $(x_a, i)$ is the active neuron and $(x, i)$ is one of the inactive neurons; then their input difference is computed by
(24)

This shows that the difference of the inputs between any two neurons in column $i$ is independent of the value of the constant, depending only on the distance terms: the neuron with the shortest path to the neighboring cities in the tour will win the competition. To ensure that the network escapes from the local minimum $\mathbf{x}^{*}$, the input to the active neuron should not be the maximal one, i.e.,

(25)

considering the updating rule of the CCM. Since $\mathbf{x}^{*}$ is not the global optimal solution, there must exist a column in which the input to the active neuron is not maximal; that is, there exists a neuron satisfying (25). Therefore, the network will escape from $\mathbf{x}^{*}$. This completes the proof. In the sequel, a new neural representation following the conditions of Theorem 3 will be introduced.
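The escape criterion of (25) can be phrased as a simple state test. The sketch below is a hypothetical helper, not part of the paper: `net_fn` stands in for whichever input formula is in force, and the test reports whether some column's active neuron fails to carry the maximal input, i.e., whether a WTA sweep would change the state.

```python
def can_escape(X, net_fn):
    """Return True if some column's active neuron does not carry the
    maximal input, so a WTA update would change the state (condition (25)).

    X      : N x N 0/1 state with exactly one active neuron per column.
    net_fn : callable net_fn(X, x, i) giving the input of neuron (x, i);
             a stand-in for the chosen input formula.
    """
    N = len(X)
    for i in range(N):
        active = next(x for x in range(N) if X[x][i] == 1)
        inputs = [net_fn(X, x, i) for x in range(N)]
        if inputs[active] < max(inputs):   # active neuron loses the competition
            return True
    return False
```

Under the original representation this test returns False at every valid state (Theorem 2), while the modified representation of the next subsection is built so that it returns True at every non-optimal valid state.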
For $\mathbf{x}^{*} \in V$, it is derived that
(30)

This shows that the condition of Theorem 3 is satisfied by the modified energy function (26). Thus, the CCM using this energy function will escape from the local minimum state whenever the network is trapped in a valid solution. On the basis of this modified representation, an improved algorithm is proposed in the next subsection.

C. The Improvement for CCM

Theorem 2 has shown that the CCM defined by (3)–(7) operates as a local minima searching algorithm: given any initial state, it stabilizes at a local minimum state. The modified CCM defined by (26)–(29), in contrast, causes the network to escape from the local minimum state and brings it to a new state, which can be used as a new initial state of the original CCM; the network can then proceed further and stabilize at a new local minimum state. Hence, to attain the optimal solution, we can alternately use the two networks. Starting from this idea, we develop a local minima escape algorithm for the CCM. This algorithm is a realization of combining the network disturbing technique with the local minima searching property of the CCM. The connections and the biases for both networks are as follows:
B. A Modified Neural Representation for CCM Let’s consider a modified energy function formulation as follows:
(31) and (32)
(26)
The two networks have the same architecture but different parameters. Therefore, their neurons are one-to-one related, and the states of the two networks can be
Fig. 2. Block diagram of the improved CCM.
easily mapped to each other. Based on these two networks, the algorithm performs computations of the following form, switching between the two parameter sets:

(33)

(34)

Then, the input to each neuron is calculated as

(35)

and the updating rule of the outputs is the same as the rule given by (7). Fig. 2 shows a simple block diagram of the improved CCM, in which the recurrent connections of the two networks have the same structure. In order to decide when to switch between the two networks, single-layer perceptrons with identical architecture, with the weight matrix, the bias, and the transfer function defined by

(36)

are used. Subsequently, a logic element performs the "and" operation on their outputs. Clearly, this value equals "1" if and only if the output of the network is a valid solution; in this case, the network takes the parameters of the original CCM. With the two networks defined above, the new algorithm can be expressed as follows. Assuming the network is at one of its local minima, a new network is created by disturbing its parameters according to (32). A copy of the current local minimum state is kept and set as the initial state of the disturbed network, which then proceeds and leads the system to escape from the local minimum state. The new state of the disturbed network is mapped back to the original network as its new initial state. Then, it is checked whether the original network, starting from
TABLE II CITY COORDINATES FOR THREE 10-CITY PROBLEMS
the new state, can converge to a new local minimum state at a lower energy level than the present one. If it can, the new local minimum state is accepted; this completes one iteration. The next iteration repeats the above process. The algorithm terminates if the total iteration number reaches a preset value or if no new local minimum state has been accepted for a certain number of consecutive iterations. The implementation of this algorithm is listed in the following.
Notation:
The best solution found so far; the system's energy when the network reaches the current state; the current repeat count of the network; the repeat count of the network when no better solution has been found; and the maximal values of these two counts.

Algorithm:
1) Initialize the network with a random starting state; each neuron takes the value 0 or 1.
2) Select a column (e.g., the first column) and reset the counters.
3) Search for a valid solution:
a) If every column has been processed, go to step 4.
b) Compute the input of each neuron in the current column using (35).
c) Apply WTA and update the outputs of the neurons in the current column using (7); the network reaches a new state.
d) Go to the next column and repeat step 3.
4) Bring the network to escape from the local minimum state:
a) Record the current local minimum state and its energy.
b) If the new local minimum has a lower energy than the best solution, accept it as the best solution; otherwise, increase the no-improvement count.
c) If either repeat count reaches its maximal value, go to step 5.
d) Compute the input of each neuron in the current column using the disturbed parameters in (35).
e) Apply WTA and update the outputs of the neurons in the current column using (7); the network reaches a new state.
f) Go to step 3.
5) Stop updating and output the final solution and the corresponding energy.
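The listing above amounts to an accept-if-lower-energy outer loop around the two networks. The following schematic Python sketch makes that structure explicit; `run_ccm`, `run_escape`, and `energy` are hypothetical stand-ins for step 3, step 4, and the energy (3), not the authors' code.

```python
def improved_ccm(run_ccm, run_escape, energy, x0, max_iter=100, max_stall=10):
    """Outer loop of the improved CCM (schematic sketch).

    run_ccm    : local-minimum search with the original network (step 3).
    run_escape : one escape phase with the modified network (step 4).
    energy     : energy of a state under the original formulation (3).
    max_iter, max_stall : the two maximal repeat counts of the listing.
    """
    best = run_ccm(x0)                 # stabilize at a first local minimum
    best_e = energy(best)
    stall = 0
    for _ in range(max_iter):
        candidate = run_ccm(run_escape(best))   # escape, then re-stabilize
        e = energy(candidate)
        if e < best_e:                 # accept only a strictly lower energy
            best, best_e, stall = candidate, e, 0
        else:
            stall += 1
            if stall >= max_stall:     # no improvement for max_stall rounds
                break
    return best, best_e
```

The design choice mirrors the text: the disturbed network only proposes states, and the original network's energy alone decides acceptance, so the accepted energy sequence is monotonically decreasing.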
V. SIMULATION RESULTS

To verify the theoretical results and the effectiveness of the improved CCM in escaping local minima when applied to the TSP, several experiments have been carried out. The programs were coded in MATLAB 6.5 and run on an IBM-compatible PC with a Pentium 4 2.66-GHz processor and 256 MB of RAM. The first experiment is on three 10-city problems whose city coordinates are shown in Table II. The first data set is the one used in [1] and [3]; the other two sets were randomly generated within the unit box. The optimal tours for these data sets are known to be (1, 4, 5, 6, 7, 8, 9, 10, 2, 3), (1, 4, 8, 6, 2, 3, 5, 10, 9, 7), and (1, 10, 3, 9, 4, 6, 5, 2, 7, 8), respectively, with corresponding minimal tour lengths of 2.6907, 2.1851, and 2.6872. To look into the relationship between the parameter value and the convergence of the CCM and the improved CCM, 500 simulations
TABLE III PERFORMANCE OF CCM APPLIED TO 10-CITY TSP
TABLE IV PERFORMANCE OF IMPROVED CCM APPLIED TO 10-CITY TSP
Fig. 3. Tour length histograms for 10-city problem 1, 2, and 3 (from left to right) produced by CCM and improved CCM.
have been performed for each problem. In each run of the improved CCM, the two maximal repeat counts were set to 2000 and 100, respectively. The simulation results are given in Tables III and IV. They show that both the CCM and the improved CCM converge to valid solutions with a percentage of 100% for suitable parameter values, but the average length of the valid tour for the 10-city problems 1, 2, and 3 is reduced by 32.52%, 38.62%, and 39.67%, respectively, by the improvement to the CCM. That is to say, the improved CCM is more effective than the CCM with respect to the tour length of the final solution. In terms of solution quality, the improved CCM found the minimal tour 45, 21, and 31 times over the 500 runs; that is, 9%, 4.2%, and 6.2% of the solutions attained by the improved CCM for these three 10-city problems were optimal. In comparison, the CCM found the minimal tour
Fig. 4. Average iteration number histograms for problem 1, 2, and 3 produced by CCM and improved CCM.
TABLE V COMPARISON PERFORMANCE FOR 10-CITY PROBLEMS
TABLE VI THE COMPARISON OF CCM AND IMPROVED CCM FOR 51-CITY TSP
6, 2, and 9 times. Moreover, the improved CCM found a "GOOD" solution 417, 491, and 435 times for the three data sets, respectively, far more often than the CCM (56 and 34 times). All the solutions found by the improved CCM were "ACCEPT," while for the CCM only 26.4%, 22.2%, and 19.4% were. "GOOD" indicates a tour distance within 110% of the optimum distance; "ACCEPT" indicates a tour distance within 130% of the optimum distance. Fig. 3 shows the histograms of the tour lengths found by the two algorithms. On the other hand, concerning the convergence rate, the CCM is much faster than the improved CCM, as shown in Fig. 4. Table V summarizes the overall performance of the two models. The theoretical results are also validated on a 51-city example whose data are from the TSP data library collected by G. Reinelt [22]. This problem is believed to have an optimal tour with a length of 426. In the simulation, the parameter value is 169, and the two maximal repeat counts are set to 5000 and 300, respectively. For a total of 50 runs with three different parameter settings, the average tour length obtained by the CCM is 1027, which is 141.1% longer than the optimal tour; the shortest tour is 991, 132.6% longer than the optimal tour. When the improved CCM is used, the average tour length is 580, which is 36.2% longer than the optimal tour, and the shortest tour is 524, 23.0% longer than the optimal tour. The comparison of the CCM and the improved CCM for the 51-city TSP is shown in Table VI. A typical tour obtained is shown in Fig. 5.
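The "percent longer than the optimal tour" figures above are plain relative excesses, which can be checked against the 51-city numbers quoted in the text:

```python
def excess_over_optimal(tour_len, opt_len):
    """Relative excess of a tour length over the optimum, in percent."""
    return 100.0 * (tour_len - opt_len) / opt_len

# 51-city problem: optimal length 426 (figures quoted in the text above)
print(round(excess_over_optimal(991, 426), 1))   # CCM shortest tour: 132.6
print(round(excess_over_optimal(580, 426), 1))   # improved CCM average: 36.2
```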
In Table VI, MR represents the optimality degree of the length of the best solution obtained by the CCM or the improved CCM compared with the length of the optimal solution, and AR represents the optimality degree of the average solution length compared with the length of the optimal solution.
Another simulation was carried out to show the performance of the new algorithm when applied to large problems. It is based on a 280-city example whose data are also from the TSP data library collected by G. Reinelt [22]. This problem is believed to have an optimal tour with a length of 2586. In the simulation, the parameter value is 605, and the two maximal repeat counts are set to 15 000 and 800, respectively. For a total of ten runs, all the solutions found by the CCM and the improved CCM are valid. The average tour length obtained by the CCM is 10 396, which is 302.1% longer than the optimal tour; the shortest tour is 9845, 280.7% longer than the optimal tour. In contrast, the average tour length is 3796, which is 46.8% longer than the optimal tour, and the shortest tour is 3465, 34.0% longer than the optimal tour, when
Fig. 5. Tours found by CCM and improved CCM for 51-city.
Fig. 6. The best tour of 280-city found by improved CCM.
the improved CCM is used. The average iteration number is 89 when the CCM is used, and 3859 when the improved CCM is used. The best tour of this problem found by the improved CCM is shown in Fig. 6. The results of this experiment show that the presented method works well when the problem size is large and can be applied to real applications. Moreover, the maximal repeat counts play a great role in the convergence of the presented algorithm when the problem size is large; an appropriate setting of them can lead to faster convergence. This characteristic will be studied further in our future work.

VI. CONCLUSION

In this paper, a novel method is proposed to alleviate the local minimum problem of the CCM. By exploring the local searching properties of the CCM, conditions for driving the network to escape from local minimum states are obtained. The network satisfying these conditions can converge to global optimal or near-optimal states. Simulations have been
carried out to illustrate the improvement achieved by the improved CCM. They show that the improved CCM is more effective than the original CCM.

REFERENCES

[1] J. J. Hopfield and D. W. Tank, ""Neural" computation of decisions in optimization problems," Biol. Cybern., vol. 52, pp. 141–152, 1985.
[2] D. W. Tank and J. J. Hopfield, "Simple "neural" optimization networks: an A/D converter, signal decision circuit, and a linear programming circuit," IEEE Trans. Circuits Syst., vol. CAS-33, no. 5, May 1986.
[3] G. W. Wilson and G. S. Pawley, "On the stability of the travelling salesman problem algorithm of Hopfield and Tank," Biol. Cybern., vol. 58, pp. 63–70, 1988.
[4] B. Kamgar-Parsi and B. Kamgar-Parsi, "On problem solving with Hopfield neural networks," Biol. Cybern., vol. 62, pp. 415–423, 1990.
[5] P. M. Talaván and J. Yáñez, "Parameter setting of the Hopfield network applied to TSP," Neural Netw., vol. 15, pp. 363–373, 2002.
[6] S. V. B. Aiyer, M. Niranjan, and F. Fallside, "A theoretical investigation into the performance of the Hopfield model," IEEE Trans. Neural Netw., vol. 1, pp. 204–215, Jun. 1990.
[7] S. Abe, "Global convergence and suppression of spurious states of the Hopfield neural networks," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 40, no. 4, pp. 246–257, Apr. 1993.
[8] K. A. Smith, "Neural networks for combinatorial optimization: a review of more than a decade of research," INFORMS J. Comput., vol. 11, no. 1, pp. 15–34, 1999.
[9] K. C. Tan, H. Tang, and S. S. Ge, "On parameter settings of Hopfield networks applied to traveling salesman problems," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 52, no. 5, pp. 994–1002, May 2005.
[10] H. J. Tang, K. C. Tan, and Z. Yi, "A columnar competitive model for solving combinatorial optimization problems," IEEE Trans. Neural Netw., vol. 15, no. 6, pp. 1568–1574, Jun. 2004.
[11] Y. Takefuji, K. C. Lee, and H. Aiso, "An artificial maximum neural network: a winner-take-all neuron model forcing the state of the system in a solution domain," Biol. Cybern., vol. 67, no. 3, pp. 243–251, 1992.
[12] L. Chen and K. Aihara, "Chaotic simulated annealing by a neural network model with transient chaos," Neural Netw., vol. 8, no. 6, pp. 915–930, 1995.
[13] M. Hasegawa, T. Ikeguchi, and K. Aihara, "Solving large scale traveling salesman problems by chaotic neurodynamics," Neural Netw., vol. 15, pp. 271–385, 2002.
[14] L. Wang and K. Smith, "On chaotic simulated annealing," IEEE Trans. Neural Netw., vol. 9, no. 4, pp. 716–718, Jul. 1998.
[15] M. Peng, K. Narendra, and A. Gupta, "An investigation into the improvement of local minimum of the Hopfield networks," Neural Netw., vol. 9, pp. 1241–1253, 1996.
[16] G. Papageorgiou, A. Likas, and A. Stafylopatis, "Improved exploration in Hopfield network state-space through parameter perturbation driven by simulated annealing," Eur. J. Oper. Res., vol. 108, pp. 283–292, 1998.
[17] M. Martin-Valdivia, A. Ruiz-Sepulveda, and F. Triguero-Ruiz, "Improved local minima of Hopfield networks with augmented Lagrange multipliers for large scale TSPs," Neural Netw. Lett., vol. 13, pp. 283–283, 2000.
[18] Z. Yi, P. A. Heng, and P. F. Fung, "Winner-take-all discrete recurrent neural networks," IEEE Trans. Circuits Syst. I, Fundam. Theory Appl., vol. 47, no. 12, pp. 1584–1589, Dec. 2000.
[19] V. Wilson and G. S. Pawley, "On the stability of the TSP algorithm of Hopfield and Tank," Biol. Cybern., vol. 58, pp. 63–70, 1988.
[20] R. D. Brandt, Y. Wang, A. J. Laub, and S. K. Mitra, "Alternative networks for solving the traveling salesman problem and the list-matching problem," in Proc. Int. Joint Conf. Neural Netw., 1988, vol. 2, pp. 333–340.
[21] P. W. Protzel, D. L. Palumbo, and M. K. Arras, "Performance and fault-tolerance of neural networks for optimization," IEEE Trans. Neural Netw., vol. 4, no. 4, pp. 600–614, Jul. 1993.
[22] G. Reinelt, "TSPLIB—A traveling salesman problem library," ORSA J. Comput., vol. 3, pp. 376–384, 1991.

Hong Qu received the B.S. and M.S. degrees in computer science and engineering from the University of Electronic Science and Technology of China, Chengdu, China, in 2000 and 2003, respectively. He is currently working toward the Ph.D. degree in the Computational Intelligence Laboratory, School of Computer Science and Engineering, University of Electronic Science and Technology of China. His current research interests include neural networks, neurodynamics, intelligent computation, and optimization.
Zhang Yi received the B.S. degree in mathematics from Sichuan Normal University, Chengdu, China, in 1983, the M.S. degree in mathematics from Hebei Normal University, Shijiazhuang, China, in 1986, and the Ph.D. degree in mathematics from the Institute of Mathematics, The Chinese Academy of Sciences, Beijing, China, in 1994. From 1989 to 1990, he was a Senior Visiting Scholar in the Department of Automatic Control and Systems Engineering, The University of Sheffield, Sheffield, U.K. From February 1999 to August 2001, he was a Research Associate in the Department of Computer Science and Engineering, The Chinese University of Hong Kong. From August 2001 to December 2002, he was a Research Fellow in the Department of Electrical and Computer Engineering, National University of Singapore. He is currently a Professor in the School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China. His current research interests include neural networks and data mining.
Huajin Tang received the Bachelor's degree from Zhejiang University, Hangzhou, China, and the Master's degree from Shanghai Jiao Tong University, Shanghai, China, in 1998 and 2001, respectively, both in engineering. He is currently working toward the Ph.D. degree in electrical and computer engineering at the National University of Singapore, Singapore. He has authored more than ten published journal and conference papers. His research interests include neural networks, neurodynamics, intelligent computation, and optimization.