Chaotic Optimization for Quadratic Assignment Problems

Tohru Ikeguchi (1), Keiichi Sato (1), Mikio Hasegawa (2), and Kazuyuki Aihara (3)

(1) Saitama University, (2) Communication Research Laboratory, (3) The University of Tokyo

ISCAS2002 – p.1/23

Introduction

Combinatorial optimization problems: scheduling, vehicle routing, assignment problems, etc.

- It is almost impossible to obtain optimal solutions for large instances.
- Goal: develop effective algorithms that offer very good near-optimum solutions in a reasonable time.

ISCAS2002 – p.2/23

Quadratic Assignment Problem

One of the most difficult NP-hard combinatorial optimization problems.

F(p) = \sum_{i=1}^{N} \sum_{j=1}^{N} c_{ij} d_{p(i)p(j)}

c_{ij} : the (i, j)-th element of a flow matrix C
d_{ij} : the (i, j)-th element of a distance matrix D
p(i) : the i-th element of a permutation p
N : the size of the problem

Find a permutation p that provides the minimum value of the objective function F.

ISCAS2002 – p.3/23
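The objective above can be sketched directly in code. A minimal evaluation of the double sum, assuming a 0-indexed permutation p where p[i] is the location assigned to facility i; the 3×3 instance below is made up for illustration:

```python
# Direct evaluation of F(p) = sum_{i,j} c_ij * d_{p(i)p(j)}.
def qap_cost(C, D, p):
    n = len(p)
    return sum(C[i][j] * D[p[i]][p[j]]
               for i in range(n) for j in range(n))

# Tiny illustrative instance (values chosen arbitrarily).
C = [[0, 2, 1],
     [2, 0, 3],
     [1, 3, 0]]
D = [[0, 5, 4],
     [5, 0, 6],
     [4, 6, 0]]
print(qap_cost(C, D, [0, 1, 2]))  # 64
```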

Example of QAP

1. Assign 4 factories, X, Y, Z and W, to the four cities Scottsdale (S), Bankok (B), Vancouver (V) and Kobe (K).

2. The amounts of items exchanged between the 4 factories:

         X  Y  Z  W
    X  [ 0  4  3  5 ]
    Y  [ 4  0  6  3 ]
C = Z  [ 3  6  0  4 ]
    W  [ 5  3  4  0 ]

3. Distances between the 4 cities:

         S  B  V  K
    S  [ 0  2  8  3 ]
    B  [ 2  0  4  6 ]
D = V  [ 8  4  0  5 ]
    K  [ 3  6  5  0 ]

ISCAS2002 – p.4/23

Example of QAP

Assignment p_1: X → Scottsdale, Y → Bankok, Z → Vancouver, W → Kobe
Assignment p_2: X → Scottsdale, Y → Vancouver, Z → Bankok, W → Kobe

[Figure: the two assignments drawn on the map of the four cities.]

F(p_1) = 4 · 2 + 3 · 8 + 5 · 3 + 4 · 6 + 3 · 6 + 4 · 5 = 109
F(p_2) = 4 · 8 + 3 · 2 + 5 · 3 + 6 · 4 + 3 · 5 + 4 · 6 = 116

F(p_1) < F(p_2)

ISCAS2002 – p.5/23
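The two costs above can be checked numerically. Note that the slide's arithmetic counts each unordered pair of factories once (half of the symmetric double sum); a small sketch under that convention:

```python
# Flow matrix C (factories X, Y, Z, W) and distance matrix D
# (cities S, B, V, K), transcribed from the example slides.
C = [[0, 4, 3, 5],
     [4, 0, 6, 3],
     [3, 6, 0, 4],
     [5, 3, 4, 0]]
D = [[0, 2, 8, 3],
     [2, 0, 4, 6],
     [8, 4, 0, 5],
     [3, 6, 5, 0]]

def pair_cost(p):
    """Sum c_ij * d_{p(i)p(j)} over unordered pairs i < j, as on the slide."""
    n = len(p)
    return sum(C[i][j] * D[p[i]][p[j]]
               for i in range(n) for j in range(i + 1, n))

p1 = [0, 1, 2, 3]  # X->S, Y->B, Z->V, W->K
p2 = [0, 2, 1, 3]  # X->S, Y->V, Z->B, W->K
print(pair_cost(p1), pair_cost(p2))  # 109 116
```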

One of the conventional methods

Approach based on the gradient dynamics of mutually connected neural networks (the Hopfield–Tank neural network approach).

- Good news: we can obtain good solutions (possibly optimum) if we can decide connection weights with appropriate initial conditions.

ISCAS2002 – p.6/23

Coding

- Solving an N-size problem, N × N neurons are prepared.
- The (i, m)-th neuron firing (x_{im} = 1) assigns the i-th element to the m-th element.

1 → firing, 0 → resting. For example:

        S  B  V  K
   X  [ 1  0  0  0 ]
   Y  [ 0  1  0  0 ]
   Z  [ 0  0  1  0 ]
   W  [ 0  0  0  1 ]

X : Scottsdale, Y : Bankok, Z : Vancouver and W : Kobe.

        S  B  V  K
   X  [ 0  0  1  0 ]
   Y  [ 0  1  0  0 ]
   Z  [ 0  0  0  1 ]
   W  [ 1  0  0  0 ]

X : Vancouver, Y : Bankok, Z : Kobe and W : Scottsdale.

ISCAS2002 – p.7/23
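A firing matrix of this form can be turned back into an assignment. The helper below is a hypothetical sketch (not from the paper): it checks that exactly one neuron fires per row and per column, then reads off the permutation:

```python
# Hypothetical helper: decode an N x N binary firing matrix into a
# permutation, verifying feasibility (one firing neuron per row/column).
def decode(x):
    n = len(x)
    assert all(sum(row) == 1 for row in x), "one firing neuron per row"
    assert all(sum(x[i][m] for i in range(n)) == 1
               for m in range(n)), "one firing neuron per column"
    return [row.index(1) for row in x]

x = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]
print(decode(x))  # [0, 1, 2, 3]: X->S, Y->B, Z->V, W->K
```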

Connection Weights

F(x) = A \sum_{i=1}^{N} \left( \sum_{m=1}^{N} x_{im} - 1 \right)^2 + B \sum_{m=1}^{N} \left( \sum_{i=1}^{N} x_{im} - 1 \right)^2 + \sum_{i=1}^{N} \sum_{m=1}^{N} \sum_{j=1}^{N} \sum_{n=1}^{N} c_{ij} d_{mn} x_{im} x_{jn},

E(x) = - \frac{1}{2} \sum_{i=1}^{N} \sum_{m=1}^{N} \sum_{j=1}^{N} \sum_{n=1}^{N} w_{im;jn} x_{im} x_{jn} + \sum_{i=1}^{N} \sum_{m=1}^{N} \theta_{im} x_{im}

w_{im;jn} = -2 \{ A (1 - \delta_{mn}) \delta_{ij} + B \delta_{mn} (1 - \delta_{ij}) + c_{ij} d_{mn} \}

A, B : positive constants, \delta_{ij} : Kronecker's delta

ISCAS2002 – p.8/23
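The weight formula above is mechanical to evaluate. A small sketch, using the 4-city example matrices and illustrative values A = B = 0.5 (the specific values are assumptions, not the paper's):

```python
# Connection weights w_{im;jn} = -2{ A(1-δ_mn)δ_ij + B δ_mn(1-δ_ij) + c_ij d_mn }.
C = [[0, 4, 3, 5],
     [4, 0, 6, 3],
     [3, 6, 0, 4],
     [5, 3, 4, 0]]
D = [[0, 2, 8, 3],
     [2, 0, 4, 6],
     [8, 4, 0, 5],
     [3, 6, 5, 0]]

def delta(a, b):
    # Kronecker's delta
    return 1 if a == b else 0

def weight(A, B, i, m, j, n):
    return -2 * (A * (1 - delta(m, n)) * delta(i, j)
                 + B * delta(m, n) * (1 - delta(i, j))
                 + C[i][j] * D[m][n])

print(weight(0.5, 0.5, 0, 0, 0, 1))  # same row, different column: -1.0
print(weight(0.5, 0.5, 0, 0, 1, 0))  # same column, different row: -1.0
print(weight(0.5, 0.5, 0, 0, 1, 1))  # cross term: -2 * c_01 * d_01 = -16.0
```

The first two terms inhibit two firings in the same row or column (infeasible assignments); the third term encodes the QAP cost itself.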

One of the conventional methods

Approach based on the gradient dynamics of mutually connected neural networks (the Hopfield–Tank neural network approach).

- Good news: we can obtain good solutions (possibly optimum) if we can decide connection weights with appropriate initial conditions.
- Bad news:
  1. Difficult to apply to large-scale problems: an N-size problem → N² neurons → N⁴ connections.
  2. Unfeasible solutions.
  3. Local minimum problem.

ISCAS2002 – p.9/23

Chaos for combinatorial optimization

- Chaotic dynamics for escaping from local minima.
- Fluctuation in a state space possibly leads to an optimum solution.
- Mutually connected NN → CNN for TSPs: Nozawa, 1992; Yamada et al., 1993; Chen & Aihara, 1995; Ishii et al., 1997.
- CNN for QAPs.

ISCAS2002 – p.10/23

Chaotic neural network

y_{im}(t+1) = k y_{im}(t) + \sum_{j=1}^{N} \sum_{n=1}^{N} w_{im;jn} x_{jn}(t) - \alpha f(y_{im}(t)) + a_{im}

y_{im}(t) : the internal state of the (i, m)-th neuron at time t
w_{im;jn} : the connection weight
f : output function, f(z) = 1 / (1 + \exp(-z/\varepsilon))
k : the decay parameter for the refractoriness
\alpha, a, \varepsilon : parameters

ISCAS2002 – p.11/23
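The refractory feedback term -α f(y) is what destabilizes simple gradient descent. A minimal sketch of a single neuron's internal dynamics, dropping the network input term; the parameter values here are illustrative, not the paper's:

```python
import math

# One chaotic neuron: y(t+1) = k*y(t) - alpha*f(y(t)) + a
# (network input omitted; k, alpha, a chosen for illustration).
def f(z, eps=0.02):
    return 1.0 / (1.0 + math.exp(-z / eps))

def step(y, k=0.9, alpha=1.0, a=0.5):
    return k * y - alpha * f(y) + a

y = 0.1
traj = []
for _ in range(100):
    y = step(y)
    traj.append(y)
print(min(traj), max(traj))
```

With a steep output function (small ε), the fixed point near y = 0 is unstable and the state keeps oscillating instead of settling, which is the mechanism the slides rely on for escaping local minima.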

Use CNN to solve QAPs

F(x) = A \sum_{i=1}^{N} \left( \sum_{m=1}^{N} x_{im} - 1 \right)^2 + B \sum_{m=1}^{N} \left( \sum_{i=1}^{N} x_{im} - 1 \right)^2 + \sum_{i=1}^{N} \sum_{m=1}^{N} \sum_{j=1}^{N} \sum_{n=1}^{N} c_{ij} d_{mn} x_{im} x_{jn},

w_{im;jn} = -2 \left\{ A (1 - \delta_{mn}) \delta_{ij} + B \delta_{mn} (1 - \delta_{ij}) + \frac{c_{ij} d_{mn}}{q} \right\}

q : a normalization parameter

ISCAS2002 – p.12/23

Definition of firing

Internal states y_{im} (N = 5 example):

         m=1   m=2   m=3   m=4   m=5
  y_1    0.9   0.4   0.3  -0.2   0.5
  y_2    0.4   0.0   0.2   0.4   0.1
  y_3    0.3   0.8  -0.7   0.3  -0.2
  y_4    0.2  -0.4   0.3   0.0   0.1
  y_5   -0.5   0.5  -0.1   0.7  -0.1

Neurons are fired one at a time in decreasing order of internal state; once the (i, m)-th neuron fires (x_{im} = 1), the remaining neurons in row i and column m are kept resting. Here: (1,1) with 0.9, then (3,2) with 0.8, (5,4) with 0.7, (4,3) with 0.3, and finally (2,5) with 0.1. Resulting outputs x_{im}:

         m=1   m=2   m=3   m=4   m=5
  x_1     1     0     0     0     0
  x_2     0     0     0     0     1
  x_3     0     1     0     0     0
  x_4     0     0     1     0     0
  x_5     0     0     0     1     0

This coding scheme always offers feasible solutions. → Higher solvable performance.

ISCAS2002 – p.13/23
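The firing rule the slide's build sequence depicts can be sketched as a greedy winner-take-all: repeatedly fire the neuron with the largest internal state, then exclude its row and column, until every row and column holds exactly one firing neuron.

```python
# Greedy firing rule: always a feasible (permutation) solution by construction.
def fire(y):
    n = len(y)
    x = [[0] * n for _ in range(n)]
    rows, cols = set(range(n)), set(range(n))
    for _ in range(n):
        i, m = max(((i, m) for i in rows for m in cols),
                   key=lambda im: y[im[0]][im[1]])
        x[i][m] = 1
        rows.discard(i)
        cols.discard(m)
    return x

y = [[ 0.9,  0.4,  0.3, -0.2,  0.5],
     [ 0.4,  0.0,  0.2,  0.4,  0.1],
     [ 0.3,  0.8, -0.7,  0.3, -0.2],
     [ 0.2, -0.4,  0.3,  0.0,  0.1],
     [-0.5,  0.5, -0.1,  0.7, -0.1]]
print(fire(y))
```

Running this on the slide's y matrix reproduces its final firing pattern.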

Benchmark problems

From QAPLIB (URL: http://www.opt.math.tu-graz.ac.at/qaplib/)
1. Nug: N = 12, 14, 15, 16, 17, 20, 21, 22, 24, 25, 27, 30
2. Had: N = 12, 14, 16, 18, 20

ISCAS2002 – p.14/23

Chaotic Dynamics and Solvable Performance

We evaluate the solvable performance on benchmark problems by calculating
1. the Lyapunov dimension,
2. the sum of positive Lyapunov exponents,
3. the number of positive Lyapunov exponents,
for 1,000 parameter sets (a, k): 3 ≤ a ≤ 8, 0.8 ≤ k < 1.
A = B = 0.5, α = 1.01, ε = 0.02, q = 600 (Nug12), q = 1,000 (Nug15, 16)

ISCAS2002 – p.15/23

Estimating Lyapunov Exponents

y(t+1) = F(y(t))
y(t+1) + \Delta y(t+1) = F(y(t) + \Delta y(t))
\Delta y(t+1) = DF(y(t)) \Delta y(t)

DF(y(t)) : Jacobian matrix at y(t)

y(t) = ( y_{11}(t), y_{12}(t), y_{13}(t), \ldots, y_{NN}(t) )^T \in R^{N^2}

ISCAS2002 – p.16/23

QR decomposition

DF(y(0)) = Q_1 R_1
DF(y(1)) Q_1 = Q_2 R_2
DF(y(2)) Q_2 = Q_3 R_3
...
DF(y(t)) Q_t = Q_{t+1} R_{t+1}

\lambda_i = \lim_{n \to \infty} \frac{1}{n} \sum_{t=1}^{n} \log |R_t^{ii}|

where R_t^{ii} is the i-th diagonal element of R_t.

ISCAS2002 – p.17/23
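The QR scheme above can be sketched on any smooth map with a known Jacobian. Here it is applied to the Hénon map rather than the CNN (an assumption for illustration): the Jacobian products are re-orthonormalized at every step, and the exponents are the averaged logs of R's diagonal entries.

```python
import numpy as np

# QR-based Lyapunov spectrum for the Henon map (classic parameters).
a, b = 1.4, 0.3

def henon(v):
    x, y = v
    return np.array([1 - a * x * x + y, b * x])

def jac(v):
    x, _ = v
    return np.array([[-2 * a * x, 1.0],
                     [b, 0.0]])

v = np.array([0.1, 0.1])
Q = np.eye(2)
acc = np.zeros(2)
n = 5000
for _ in range(n):
    Q, R = np.linalg.qr(jac(v) @ Q)
    acc += np.log(np.abs(np.diag(R)))
    v = henon(v)
lyap = acc / n
print(lyap)  # roughly [0.42, -1.62] for these parameters
```

A useful sanity check: the exponents must sum to the average log-determinant of the Jacobian, which for the Hénon map is log(0.3) ≈ -1.204 at every step.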

Lyapunov dimension

- Lyapunov dimension
  D_L = j + \frac{\sum_{i=1}^{j} \lambda_i}{|\lambda_{j+1}|}, \quad j = \max \{ k : \sum_{i=1}^{k} \lambda_i > 0 \}
- Number of positive Lyapunov exponents p:
  \lambda_1 > \lambda_2 > \cdots > \lambda_p > 0 > \lambda_{p+1} > \cdots > \lambda_{N^2}
- Sum of positive Lyapunov exponents S:
  S = \sum_{i=1}^{p} \lambda_i

ISCAS2002 – p.18/23
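Given a sorted Lyapunov spectrum, the three statistics above are straightforward to compute. A sketch, with a made-up example spectrum:

```python
# Compute (D_L, p, S) from a Lyapunov spectrum:
# D_L is the Kaplan-Yorke (Lyapunov) dimension, p the number of positive
# exponents, S their sum. The example spectrum is invented for illustration.
def lyapunov_stats(lams):
    lams = sorted(lams, reverse=True)
    pos = [l for l in lams if l > 0]
    p, S = len(pos), sum(pos)
    partial, j = 0.0, 0           # largest j with sum of first j exponents > 0
    for k, l in enumerate(lams, start=1):
        if partial + l > 0:
            partial += l
            j = k
        else:
            break
    D_L = j + partial / abs(lams[j]) if j < len(lams) else float(j)
    return D_L, p, S

D_L, p, S = lyapunov_stats([0.5, 0.2, -0.4, -1.0])
print(D_L, p, S)  # D_L = 3 + 0.3/1.0 = 3.3, p = 2, S = 0.7
```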

Results for Nug12 (578)

[Figure: three histograms over the 1,000 tested parameter sets, showing the probability (%) of the Lyapunov dimension (0–35), the number of positive Lyapunov exponents (1–10), and the sum of positive Lyapunov exponents (0–35).]

ISCAS2002 – p.19/23

Results for Nug16 (1240)

[Figure: three histograms over the 1,000 tested parameter sets, showing the probability (%) of the Lyapunov dimension (0–40), the number of positive Lyapunov exponents (1–10), and the sum of positive Lyapunov exponents (0–50).]

ISCAS2002 – p.20/23

How to find a good parameter set?

1. Robust application to different types of problems and larger-size problems.
2. The firing rate F_R of a CNN that solves an N-size problem:
   - F_B = N / N² = 1/N (the firing rate of a feasible solution)
   - Examine the relation between N and F_R / F_B.

ISCAS2002 – p.21/23
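A hypothetical sketch of the measurement: compute the mean firing rate F_R from a recorded sequence of N × N binary output matrices, and compare it to the baseline F_B = 1/N of a feasible solution (the single-snapshot example below is an assumption for illustration).

```python
# Mean firing rate over a history of N x N binary output matrices.
def firing_rate(history):
    n = len(history[0])
    fired = sum(sum(map(sum, x)) for x in history)
    return fired / (len(history) * n * n)

N = 4
# A feasible solution fires exactly N of the N^2 neurons.
feasible = [[1 if m == i else 0 for m in range(N)] for i in range(N)]
FB = 1 / N
FR = firing_rate([feasible])
print(FR, FB, FR / FB)  # 0.25 0.25 1.0
```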

N vs F_R / F_B

[Figure: two scatter plots of F_R / F_B (ranging 1.5–2.0) against the problem size N; left: N = 12–30, right: N = 12–20.]

Good solutions are obtained when F_R ≈ 1.85 F_B.

Possibility to set good parameter values of CNN by observing firing rates.

ISCAS2002 – p.22/23

Conclusions

CNN is applied to solving QAPs.

1. Evaluated chaotic dynamics and solvable performance:
   high performance ⇔ small D_L, small p (the number of positive λ_i), and small S (= Σ λ_i over λ_i > 0)
   → a "shy" search, as in CNN for associative memories.
2. How to set parameters of CNN for robust applications?
   (a) Observe the firing rate F_R.
   (b) Set parameters so that F_R ≈ 1.85/N.

ISCAS2002 – p.23/23
