Neural Searching Algorithm for Combinatorial Optimization Problems
(組み合わせ最適化問題探索のニューラルアルゴリズム)
Ali Lemus, Koji Nakajima, Yoshihiro Hayakawa
Laboratory for Brainware / Nanoelectronics and Spintronics Research Institute of Electrical Communication, Tohoku University

Introduction:

In 1985, Hopfield and Tank proposed recurrent neural networks with an associated energy function which can solve optimization problems [1]. One can picture the energy function as describing an energy landscape, and the dynamics as the motion of a particle over that landscape under forces such as gravity and friction.

● We propose a new method that can solve optimization problems in time O(n), based on a force analysis derived from the energy function.
● The two most appealing features of the method are:
● its high speed in solving optimization problems, and
● its low parameter dependence, since parameter tuning is a major difficulty when solving optimization problems with neural networks.

Basic Ideas:

● We use the forces derived from the energy function of an ANN to reach the global minimum of the system without having to simulate the neural network.
● The algorithm can solve any optimization problem that can be mapped onto a neural network in which all synapses are inhibitory.

Maximum Inhibition: If all synapses are inhibitory and all other neurons are firing (their outputs are 1), the neuron under consideration is in the state of maximum inhibition.

Minimum Inhibition: If all synapses are inhibitory and none of the other neurons are firing (their outputs are 0), the neuron is in the state of minimum inhibition.

In such a network, from an individual neuron's point of view there are three possible cases:
[BI0] All other neurons are silent (minimum inhibition) and the net input is still inhibitory: the neuron must become silent.
[BI1] All other neurons are firing (maximum inhibition) and the net input is still excitatory: the neuron must fire.
[BI2] Neither [BI0] nor [BI1] holds; the neuron with the largest force will then change state faster than the rest.

The energy function used to solve the TSP is inherently symmetric; for this reason two additional rules were created:
[BI3] Starting point: any column of the row with the largest force is chosen as the starting point.
[BI4] Tie: in case of a tie in the maximum force, half of the tied neurons are set accordingly.
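The inhibition bounds can be written out explicitly. Assuming the usual Hopfield-style force on neuron i (a notation assumption; the poster's own symbols did not survive extraction), with inhibitory weights w_ij ≤ 0, external input I_i, and outputs V_j ∈ {0, 1}:

```latex
% Force on neuron i:
F_i = \sum_{j \ne i} w_{ij} V_j + I_i , \qquad w_{ij} \le 0

% Minimum inhibition (all other neurons silent, V_j = 0):
F_i^{\min} = I_i

% Maximum inhibition (all other neurons firing, V_j = 1):
F_i^{\max} = I_i + \sum_{j \ne i} w_{ij}

% Decision rules:
% [BI0]:\; F_i^{\min} < 0 \;\Rightarrow\; V_i = 0
% [BI1]:\; F_i^{\max} > 0 \;\Rightarrow\; V_i = 1
```

Because the weights are non-positive, F_i^max ≤ F_i ≤ F_i^min whatever the other neurons do, so whenever [BI0] or [BI1] holds the neuron's final state is decided regardless of the still-unknown neurons.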

[1] J. Hopfield, D. Tank, "Neural Computation of Decisions in Optimization Problems", Biological Cybernetics, Volume 52, no.3, pp.141-152, 1985.

Proposed Algorithm:
1.  all neurons ← unknown;
2.  calculate forces for all neurons assuming max and min inhibition;
3.  set initial conditions [BI3] (TSP only);
4.  while (unknown neurons exist) do
5.      if ([BI0] = true) for any neuron then
6.          neuron ← 0;
7.      elseif ([BI1] = true) for any neuron then
8.          neuron ← 1;
9.      else
10.         get the neurons with the largest force [BI4];
11.         if (largest force > 0) then
12.             half of the tied neurons ← 1;  (half is defined as ceil(N/2))
13.         else
14.             half of the tied neurons ← 0;  (half is defined as ceil(N/2))
15.         endif
16.     endif
17.     recalculate forces for all neurons;
18. endwhile
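The steps above can be sketched in Python. This is a minimal illustration, not the authors' implementation: it assumes the usual Hopfield force F_i = Σ_j w_ij V_j + I_i, omits the TSP-specific starting rule [BI3], and, since the poster does not say which force is compared for the tie-break, compares the minimum-inhibition forces. The function name `neural_search` and the example network are our own.

```python
import math
import numpy as np

def neural_search(W, I):
    """Force-based search sketch. W: symmetric weight matrix with
    non-positive off-diagonal (inhibitory) entries and zero diagonal;
    I: external inputs. Returns a binary state vector."""
    n = len(I)
    state = np.full(n, -1)                 # -1 = unknown, 0 = silent, 1 = firing
    while (state == -1).any():
        unknown = np.where(state == -1)[0]
        firing = np.where(state == 1)[0]
        # Minimum inhibition: only neurons already known to fire inhibit i.
        f_min = np.array([W[i, firing].sum() + I[i] for i in unknown])
        # Maximum inhibition: every other unknown neuron fires as well.
        f_max = np.array([f_min[k] + W[i, unknown].sum() - W[i, i]
                          for k, i in enumerate(unknown)])
        if (f_min < 0).any():              # [BI0]: inhibitory even at minimum
            state[unknown[f_min < 0]] = 0
        elif (f_max > 0).any():            # [BI1]: excitatory even at maximum
            state[unknown[f_max > 0]] = 1
        else:                              # [BI2]/[BI4]: largest force decides
            best = f_min.max()
            tied = unknown[np.isclose(f_min, best)]
            half = tied[:math.ceil(len(tied) / 2)]
            state[half] = 1 if best > 0 else 0
    return state

# Toy winner-take-all network: mutual inhibition, one neuron biased highest.
W = -2.0 * (np.ones((3, 3)) - np.eye(3))
I = np.array([1.5, 1.0, 0.5])
print(neural_search(W, I))                 # the most excited neuron wins
```

On this toy instance the first iteration decides neuron 0 via the largest-force rule, after which [BI0] silences the remaining neurons, giving the state [1, 0, 0] without ever simulating the network dynamics.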

Traveling Salesman Problem (TSP):

The salesman must make a closed tour visiting all the cities. The network's neurons form a grid whose rows are indexed by city number and whose columns by city order (the position in the tour), each running from 1 to 10.

TSP Energy Function:
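The poster's equations did not survive extraction. For reference, the standard Hopfield–Tank TSP energy function from [1], which this work presumably follows (possibly with different constraint terms), uses V_{xi} = 1 when city x is visited at tour position i and d_{xy} for the distance between cities x and y:

```latex
E = \frac{A}{2}\sum_{x}\sum_{i}\sum_{j \ne i} V_{xi}V_{xj}
  + \frac{B}{2}\sum_{i}\sum_{x}\sum_{y \ne x} V_{xi}V_{yi}
  + \frac{C}{2}\Bigl(\sum_{x}\sum_{i} V_{xi} - n\Bigr)^{2}
  + \frac{D}{2}\sum_{x}\sum_{y \ne x}\sum_{i} d_{xy}\,V_{xi}\bigl(V_{y,i+1} + V_{y,i-1}\bigr)

% The force acting on neuron (x, i) is the negative gradient of the energy:
F_{xi} = -\frac{\partial E}{\partial V_{xi}}
```

When this energy is mapped onto a network, every synaptic weight derived from the A, B, C, and D terms is inhibitory, while the C constraint contributes an excitatory external input, which matches the all-inhibitory assumption of the algorithm.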

Solving the 10-city Traveling Salesman Problem: Here we run the algorithm on a 10-city instance of the TSP; only the important steps are shown. We see the initial conditions, then a tie in the forces, which is resolved by setting half of the tied neurons, and finally the optimal solution is found.

The force acting on a neuron is derived from the energy function.

[Figures: positions of the cities; initial conditions, set using [BI3]; a tie in the forces, resolved using [BI4].]

Conclusions:
● This new method works well on simple optimization problems (e.g. the ADC problem).
● Preliminary results show that it has the potential to solve hard NP-complete problems (instances of the TSP with up to 10 cities have been solved).
● In the case of the TSP a parameter search is still needed, but there is only one parameter to set, which makes the search much easier.
● The algorithm achieves a time complexity of O(n) or less even for hard problems such as the TSP.
● For example, it takes less than a second to reach the global minimum of a 10-city TSP on an Intel Pentium 4 (3 GHz) with 2 GB of RAM.
● By understanding how Artificial Neural Networks solve problems, it is possible to approach optimization problems in a new way.

[Figure: final state, representing the optimal route; the tie between the forces has been resolved.]
