International Journal of Bifurcation and Chaos, Vol. 14, No. 5 (2004) 1725–1772
© World Scientific Publishing Company

STAR CELLULAR NEURAL NETWORKS FOR ASSOCIATIVE AND DYNAMIC MEMORIES

MAKOTO ITOH
Department of Information and Communication Engineering, Fukuoka Institute of Technology, Fukuoka 811-0295, Japan

LEON O. CHUA
Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, USA

Received October 5, 2003; Revised January 15, 2004

In this paper, we propose a Star cellular neural network (Star CNN) for associative and dynamic memories. A Star CNN consists of local oscillators and a central system. All oscillators are connected to the central system in the shape of a star, and communicate with each other through it. A Star CNN can store and retrieve given patterns in the form of synchronized chaotic states with appropriate phase relations between the oscillators (associative memories). Furthermore, the output pattern can occasionally travel around the stored patterns, their reverse patterns, and new relevant patterns, which are called spurious patterns (dynamic memories).

Keywords: Star network; associative memory; dynamic memory; spurious pattern; CNN; Star CNN; dreams.

1. Introduction

There are four principal network topologies (the physical shape or layout of networks) in Local Area Networks, namely, bus topology, mesh topology, star topology, and ring topology, as shown in Figs. 1(a)–1(d), respectively. The network topology plays a fundamental role in its functionality and performance [Bergeron, 2002]. These topologies can be applied to neural networks:

• Bus topology. All cells are connected to a central backbone, called the bus. The bus permits all cells to receive every transmission. It is relatively simple to control traffic flow among cells. The main drawback stems from the fact that only one communication channel exists to service all cells on the network.

• Mesh topology. Each cell is connected to every other cell by a separate wire. This configuration provides redundant paths through the network, so if one cell blows up, we do not lose the network.

• Star topology. All cells are connected to a central cell. All traffic emanates from the central cell. The advantage of the star topology is that if one cell fails, then only the failed cell is unable to send or receive data. Star networks are relatively easy to install, but have potential bottlenecks and failure problems at the central cell, because all data must pass through this central cell.

• Ring topology. All cells are connected to one another in the shape of a closed loop, so that each cell is connected directly to two other cells. In most cases, data flows in one direction only, with one cell receiving the signal and relaying it to the next cell on the ring. If a channel between two cells fails, then the entire network is lost.
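The wiring costs implied by these layouts are easy to tabulate; the following is a minimal sketch (the helper names are ours, and the mesh count includes one self-loop per cell so that it matches the N(N+1)/2 figure quoted later in the paper for the fully-connected CNN):

```python
# Connection counts for the four topologies, for N cells.
# Bus and star: one link per cell; ring: one link per neighboring pair;
# mesh with self-loops: N*(N+1)/2 links (as for the fully-connected CNN).

def bus_links(n: int) -> int:
    return n            # each cell taps the shared backbone once

def star_links(n: int) -> int:
    return n            # each cell has one link to the central cell

def ring_links(n: int) -> int:
    return n            # closed loop: as many links as cells

def mesh_links_with_self_loops(n: int) -> int:
    return n * (n + 1) // 2

print(star_links(25), mesh_links_with_self_loops(25))  # 25 325
```

The linear-versus-quadratic growth here is the basic motivation for the star architecture developed below.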

1726

M. Itoh & L. O. Chua

Fig. 1. Network topology. (a) Bus topology, (b) mesh topology, (c) star topology, (d) ring topology.

Note that the cells are not necessarily identical. Furthermore, these topologies can also be mixed. For example, a tree topology combines characteristics of bus and star topologies: it consists of groups of star-configured cells connected to a bus. Tree topologies allow for the expansion of an existing network.

Recently, a new architecture for neural networks was introduced by Hoppensteadt and Izhikevich [1999]. They proposed a bus-topology-based neural network called a neurocomputer, which consists of oscillators having different frequencies that are connected weakly via a common medium (bus) forced by an external input (see Fig. 2). The neurocomputer can store and retrieve given patterns in the form of synchronized states with appropriate phase relations between the oscillators. The oscillatory neurocomputer needs only N connections between the oscillators and the common medium (the symbol N denotes the number of cells).

In this paper, we propose a new architecture called the Star Cellular Neural Network (Star CNN) for associative memories, which is based on the star topology and chaotic synchronization. A Star CNN is a new dynamic nonlinear system defined by connecting N identical dynamical systems, called local cells, to a central system in the shape of a star, as shown in Fig. 2. All local cells communicate with each other through the central system. Thus, a Star CNN has N connections from the N local cells to the central system.

Fig. 2. Neural networks. (a) Neurocomputer, (b) Star cellular network.

The Star CNN has the same bottleneck as the star topology. However, the Star CNN can be easily implemented in hardware using only N connections, except that the central cell has to supply complicated signals. A Star CNN can store and retrieve complex oscillatory patterns in the form of synchronized chaotic states

(associative memories). Furthermore, the Star CNN can function as a dynamic memory, namely, its output pattern can occasionally travel around the stored patterns, their reverse patterns, and new relevant patterns (spurious patterns) [Christos, 2003; Adachi & Aihara, 1997]. This is motivated by the observation that a human being’s associative memory is not always static, but sometimes wanders from one memory to another, one after another. Furthermore, a flash of inspiration (a new pattern) sometimes appears which is relevant to known memories. The change of memory also sometimes seems to be chaotic. Our proposed Star CNN can function as both an associative (static) and a dynamic memory.

2. Associative Memories

An information storage device is called an associative memory if it permits the recall of information on the basis of a partial knowledge of its content, but without knowing its storage location. The dynamics of associative memories is described as follows. Consider $M$ binary ($-1/1$) patterns $\sigma^1, \sigma^2, \ldots, \sigma^M$, where each pattern $\sigma^m$ contains $N$ bits of information $\sigma_i^m$; namely

$$\sigma^1 = \begin{pmatrix} \sigma_1^1 \\ \sigma_2^1 \\ \vdots \\ \sigma_N^1 \end{pmatrix}, \quad \sigma^2 = \begin{pmatrix} \sigma_1^2 \\ \sigma_2^2 \\ \vdots \\ \sigma_N^2 \end{pmatrix}, \quad \ldots, \quad \sigma^M = \begin{pmatrix} \sigma_1^M \\ \sigma_2^M \\ \vdots \\ \sigma_N^M \end{pmatrix}. \tag{1}$$

The dynamics of our associative memory is defined as follows:

(1) Assign the coupling coefficients:

$$s_{ij} = \frac{1}{N}\sum_{m=1}^{M} \sigma_i^m \sigma_j^m. \tag{2}$$

(2) Assign the initial state $v_i(0)$.

(3) Update the state of the network:

$$v_i(n+1) = \mathrm{sgn}\Biggl(\sum_{j=1}^{N} s_{ij}\, v_j(n)\Biggr), \tag{3}$$

where

$$\mathrm{sgn}(x) = \begin{cases} 1 & \text{if } x \geq 0, \\ -1 & \text{if } x < 0. \end{cases}$$

The above associative memory will converge to the desired pattern if $M \ll N$ [Müller & Reinhardt, 1990]. To make this paper somewhat self-contained, we will prove that the patterns $\sigma^k$ ($k = 1, 2, \ldots, M$) are the fixed points of Eq. (3) (for more details, see [Müller & Reinhardt, 1990]). Substituting $\sigma_j^k$ into $v_j(n)$ in Eq. (3), we obtain

$$\begin{aligned} v_i(n+1) &= \mathrm{sgn}\Biggl(\sum_{j=1}^{N} s_{ij}\,\sigma_j^k\Biggr) = \mathrm{sgn}\Biggl(\sum_{j=1}^{N}\Biggl[\frac{1}{N}\sum_{m=1}^{M}\sigma_i^m\sigma_j^m\Biggr]\sigma_j^k\Biggr) \\ &= \mathrm{sgn}\Biggl(\frac{1}{N}\sum_{j=1}^{N}\sigma_i^k\sigma_j^k\sigma_j^k + \frac{1}{N}\sum_{m=1,\,m\neq k}^{M}\sigma_i^m\sum_{j=1}^{N}\sigma_j^m\sigma_j^k\Biggr) \\ &= \mathrm{sgn}\Biggl(\sigma_i^k + \frac{1}{N}\sum_{m=1,\,m\neq k}^{M}\sigma_i^m\sum_{j=1}^{N}\sigma_j^m\sigma_j^k\Biggr). \end{aligned} \tag{4}$$

If the $M$ patterns are uncorrelated, then the second term

$$\frac{1}{N}\sum_{m=1,\,m\neq k}^{M}\sigma_i^m\sum_{j=1}^{N}\sigma_j^m\sigma_j^k \tag{5}$$

is estimated to be of size $\sqrt{(M-1)/N}$ [Müller & Reinhardt, 1990]. Therefore, it will most likely not affect the sign of $\sigma_i^k$ as long as $M \ll N$, i.e. the number of stored patterns is much smaller than the number of bits. Then we have

$$v_i(n+1) = \mathrm{sgn}\Biggl(\sum_{j=1}^{N} s_{ij}\,\sigma_j^k\Biggr) = \sigma_i^k, \tag{6}$$

which implies that the $\sigma^k$ ($k = 1, 2, \ldots, M$) are the fixed points of Eq. (3). If the stored patterns are orthogonal to each other, we obtain

$$\frac{1}{N}\sum_{i=1}^{N}\sigma_i^m\sigma_i^{m'} = \delta_{mm'}, \tag{7}$$

where $\delta_{mm'} = 1$ if $m = m'$ and $\delta_{mm'} = 0$ if $m \neq m'$. In this case, Eq. (5) vanishes, and so the patterns $\sigma^k$ ($k = 1, 2, \ldots, M$) again become the fixed points of Eq. (3). Furthermore, if the first $p$ cells start out in the “wrong” state, then we get the relation

$$v_i(n+1) = \mathrm{sgn}\Biggl(\Bigl(1 - \frac{2p}{N}\Bigr)\sigma_i^k + \frac{1}{N}\sum_{m=1,\,m\neq k}^{M}\sigma_i^m\sum_{j=1}^{N}\sigma_j^m\sigma_j^k\Biggr). \tag{8}$$

Under the condition $M, p \ll N$, we still have

$$v_i(n+1) = \mathrm{sgn}\Biggl(\sum_{j=1}^{N} s_{ij}\,\sigma_j^k\Biggr) = \sigma_i^k. \tag{9}$$

That is, the cell will converge to the desired pattern within a single update.
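The storage rule (2) and update rule (3) are straightforward to exercise numerically. The sketch below is our own illustration: it stores three mutually orthogonal ±1 patterns (rows of an 8×8 Hadamard matrix), so that by Eq. (7) the crosstalk term (5) vanishes exactly, and a pattern corrupted in one bit is recovered in a single update:

```python
import numpy as np

def hadamard(n):
    # Sylvester construction: rows are mutually orthogonal +-1 vectors.
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def sgn(x):
    return np.where(x >= 0, 1, -1)        # Eq. (3) convention: sgn(0) = +1

N = 8
patterns = hadamard(N)[:3]                # M = 3 stored patterns, N = 8 bits
S = patterns.T @ patterns / N             # coupling coefficients, Eq. (2)

v = patterns[0].copy()
v[0] = -v[0]                              # corrupt one bit ("wrong" state)
v_next = sgn(S @ v)                       # one update of Eq. (3)
print(np.array_equal(v_next, patterns[0]))  # True: stored pattern recovered
```

With orthogonal patterns the recovery is exact; with random patterns one only gets the probabilistic guarantee described around Eq. (5).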

3. Star CNN for Associative Memories

In this section, we design our Star CNN for associative memories. Let us first define the dynamics of an $n$th-order cell by

$$\frac{dx_i^1}{dt} = f_i^1(x_i^1, x_i^2, \ldots, x_i^n), \quad \frac{dx_i^2}{dt} = f_i^2(x_i^1, x_i^2, \ldots, x_i^n), \quad \ldots, \quad \frac{dx_i^n}{dt} = f_i^n(x_i^1, x_i^2, \ldots, x_i^n), \qquad (i = 1, 2, \ldots, N) \tag{10}$$

where the subscript $i$ denotes the $i$th cell and $f_i^1, f_i^2, \ldots, f_i^n$ are functions of $(x_i^1, x_i^2, \ldots, x_i^n)$. The dynamics of an $n$th-order Star CNN is defined by

$$\frac{dx_i^1}{dt} = f_i^1(x_i^1, x_i^2, \ldots, x_i^n) + u_i, \quad \frac{dx_i^2}{dt} = f_i^2(x_i^1, x_i^2, \ldots, x_i^n), \quad \ldots, \quad \frac{dx_i^n}{dt} = f_i^n(x_i^1, x_i^2, \ldots, x_i^n), \qquad (i = 1, 2, \ldots, N) \tag{11}$$

where $u_i$ denotes the input of the $i$th cell. The central system receives the signal $x_i^1$ from the $i$th cell and supplies the signal $u_i$ to each cell.

3.1. First-order Star CNNs

The dynamics of a first-order Star CNN is governed by the system of $N$ differential equations

$$\frac{dx_i}{dt} = -x_i + u_i, \qquad (i = 1, 2, \ldots, N) \tag{12}$$

where $x_i$ and $u_i$ denote the state and the input of the $i$th cell, respectively. The output $y_i$ and the state $x_i$ of each cell are related through the standard piecewise-linear saturation function [Chua & Roska, 2002]:

$$y_i = h(x_i) = \frac{1}{2}\bigl(|x_i + 1| - |x_i - 1|\bigr). \tag{13}$$
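The saturation characteristic (13) can be checked directly; a minimal sketch (vectorized with NumPy for convenience):

```python
import numpy as np

def h(x):
    # Standard piecewise-linear saturation, Eq. (13):
    # identity on [-1, 1], clipped to -1 and +1 outside.
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

x = np.array([-3.0, -1.0, -0.5, 0.0, 0.5, 1.0, 3.0])
print(h(x))  # saturates to -1/1 outside [-1, 1], identity inside
```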

The central system receives these outputs $y_i = h(x_i)$ and supplies the following signal to each cell:

$$u_i = \sum_{j=1}^{N} a_{ij}\, y_j = \sum_{j=1}^{N} a_{ij}\, h(x_j). \tag{14}$$

Substituting $u_i$ into Eq. (12), we obtain the dynamics of a fully-connected CNN without input and threshold parameters [Chua, 1998]:

$$\frac{dx_i}{dt} = -x_i + \sum_{j=1}^{N} a_{ij}\, y_j = -x_i + \sum_{j=1}^{N} a_{ij}\, h(x_j). \tag{15}$$
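Since Eq. (15) is obtained from Eqs. (12) and (14) by direct substitution, the two formulations produce identical vector fields; a quick numerical check with arbitrary coefficients (the values and seed are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
A = rng.normal(size=(N, N))          # feedback coefficients a_ij (arbitrary)
x = rng.normal(size=N)               # current cell states

h = lambda v: 0.5 * (np.abs(v + 1) - np.abs(v - 1))   # Eq. (13)

u = A @ h(x)                         # central-system signal, Eq. (14)
dx_star = -x + u                     # Star CNN cell dynamics, Eq. (12)
dx_full = -x + A @ h(x)              # fully-connected CNN, Eq. (15)

print(np.allclose(dx_star, dx_full))  # True
```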

Thus, the dynamics of the systems (12) and (15) are equivalent. Since a fully-connected CNN has a mesh topology and self-loops, it needs $N(N+1)/2$ connections, whereas a Star CNN needs only $N$ connections (see Fig. 3). Next, we show that the Star CNN (12) can function as an associative memory if we modify the output $y_i$ and the signal $u_i$ as follows:

$$y_i = h(x_i) = \mathrm{sgn}(x_i), \qquad u_i = \mathrm{sgn}\Biggl(\sum_{j=1}^{N} s_{ij}\, y_j\Biggr) = \mathrm{sgn}\Biggl(\sum_{j=1}^{N} s_{ij}\,\mathrm{sgn}(x_j)\Biggr). \tag{16}$$

Substituting Eq. (16) into Eq. (12), we obtain

$$\frac{dx_i}{dt} = -x_i + \mathrm{sgn}\Biggl(\sum_{j=1}^{N} s_{ij}\,\mathrm{sgn}(x_j)\Biggr). \tag{17}$$

If the initial condition satisfies

$$x_j(0) = \sigma_j^k, \qquad (j = 1, 2, \ldots, N) \tag{18}$$


Fig. 3. (a) Fully-connected CNN with 25 cells. (b) Star CNN with 26 cells.


then we have

$$\begin{aligned} \frac{dx_i}{dt} &= -x_i + \mathrm{sgn}\Biggl(\sum_{j=1}^{N} s_{ij}\,\mathrm{sgn}(\sigma_j^k)\Biggr) = -x_i + \mathrm{sgn}\Biggl(\sum_{j=1}^{N} s_{ij}\,\sigma_j^k\Biggr) \\ &= -x_i + \mathrm{sgn}\Biggl(\sum_{j=1}^{N}\Biggl[\frac{1}{N}\sum_{m=1}^{M}\sigma_i^m\sigma_j^m\Biggr]\sigma_j^k\Biggr) \\ &= -x_i + \mathrm{sgn}\Biggl(\frac{1}{N}\sum_{j=1}^{N}\sigma_i^k\sigma_j^k\sigma_j^k + \frac{1}{N}\sum_{m=1,\,m\neq k}^{M}\sigma_i^m\sum_{j=1}^{N}\sigma_j^m\sigma_j^k\Biggr) \\ &= -x_i + \mathrm{sgn}\Biggl(\sigma_i^k + \frac{1}{N}\sum_{m=1,\,m\neq k}^{M}\sigma_i^m\sum_{j=1}^{N}\sigma_j^m\sigma_j^k\Biggr). \end{aligned} \tag{19}$$

If the patterns are uncorrelated, the second term

$$\frac{1}{N}\sum_{m=1,\,m\neq k}^{M}\sigma_i^m\sum_{j=1}^{N}\sigma_j^m\sigma_j^k \tag{20}$$

is estimated to be of size $\sqrt{(M-1)/N}$ [Müller & Reinhardt, 1990]. Therefore, it will not affect the sign of $\sigma_i^k$ as long as $M \ll N$. Thus, $\sigma_i^k$ becomes an equilibrium point, and $y_i = \mathrm{sgn}(\sigma_i^k) = \sigma_i^k$ at this point. Furthermore, if the first $p$ cells start out in the “wrong” initial state, then we get

$$\frac{dx_i}{dt} = -x_i + \mathrm{sgn}\Biggl(\Bigl(1 - \frac{2p}{N}\Bigr)\sigma_i^k + \frac{1}{N}\sum_{m=1,\,m\neq k}^{M}\sigma_i^m\sum_{j=1}^{N}\sigma_j^m\sigma_j^k\Biggr). \tag{21}$$

Under the condition $M, p \ll N$, $y_j(t) = \mathrm{sgn}(x_j(t))$ converges to $\sigma_j^k$ as $t$ increases. Thus, the Star CNN (17) functions as an associative memory.

Some computer simulations of the above associative memories are given in Figs. 4 and 5. The four stored patterns are shown in Figs. 4(a)–4(d). Observe that since $(-\sigma_i^m)(-\sigma_j^m) = \sigma_i^m\sigma_j^m$ in Eq. (2), the coupling coefficient $s_{ij}$ defined by Eq. (2) is the same for each pattern $\sigma^i$ and its complement $-\sigma^i$. In other words, if $\sigma^i$ is a fixed point of Eq. (17), then the complementary pattern $-\sigma^i$ is also a fixed point even though it is not included in Eq. (2) as a separate pattern. It follows that the four complementary patterns shown in Figs. 4(e)–4(h) are also stored as associative memories in Eq. (17).
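The retrieval dynamics (17) can be reproduced with the forward Euler scheme used for the simulations (Δt = 0.01). The patterns below are our own stand-ins for the stored images: three mutually orthogonal ±1 patterns (rows of an 8×8 Hadamard matrix), with one bit flipped in the initial state:

```python
import numpy as np

def hadamard(n):
    # Sylvester construction: rows are mutually orthogonal +-1 vectors.
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

sgn = lambda v: np.where(v >= 0, 1.0, -1.0)

N = 8
patterns = hadamard(N)[:3].astype(float)   # M = 3 stored patterns
S = patterns.T @ patterns / N              # coupling coefficients, Eq. (2)

x = patterns[0].copy()
x[0] = -x[0]                               # one "wrong" cell in the initial state, Eq. (18)

dt = 0.01
for _ in range(2000):                      # forward Euler integration of Eq. (17) to t = 20
    x += dt * (-x + sgn(S @ sgn(x)))

print(np.array_equal(sgn(x), patterns[0]))  # True: output converges to the stored pattern
```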

Fig. 4. Stored patterns for Star CNN associative memories. (a)–(d) Stored patterns, (e)–(h) stored reverse patterns.

Fig. 5. Retrieved patterns. The first, third and fifth rows indicate the initial patterns, and the second, fourth and sixth rows indicate the recognized patterns.


The outputs of the blue and red cells represent the binary values −1 and 1, respectively. The coupling coefficients $s_{ij}$ for the stored patterns are given in Appendix A. Observe that the Star CNN (17) can indeed retrieve the patterns. Observe also that some additional patterns [(i)–(k)] which differ from the original set of stored patterns are retrieved as well in Fig. 5. The patterns (i) and (j) are called

Fig. 6. Two spurious patterns generated by the Star CNN (17). The spurious patterns (i) and (j) are obtained by applying a majority rule to the stored patterns {(a), (b), (g)} and {(e), (f), (c)}, respectively.


“spurious” patterns, which are derived from the stored patterns [Christos, 2003]. That is, the spurious patterns (i) and (j) are obtained by applying a majority rule to the stored patterns {(a), (b), (g)} and {(e), (f), (c)}, respectively (see Fig. 6). The number of spurious patterns grows as we store more patterns [Christos, 2003]. If an inappropriate initial pattern is given, then the Star CNN (17) cannot pinpoint a particular pattern, and gives an output pattern (k) which bears no relationship to the stored patterns.

In this paper, a forward Euler algorithm with a time step size $\Delta t = 0.01$ is applied in all the computer simulations. The output of each cell is given by

$$y_j(x) = \mathrm{sgn}(x_j), \tag{22}$$

and the 25 cells of the Star CNN are arranged as follows:

$$\begin{matrix} \sharp 1 & \sharp 2 & \sharp 3 & \sharp 4 & \sharp 5 \\ \sharp 6 & \sharp 7 & \sharp 8 & \sharp 9 & \sharp 10 \\ \sharp 11 & \sharp 12 & \sharp 13 & \sharp 14 & \sharp 15 \\ \sharp 16 & \sharp 17 & \sharp 18 & \sharp 19 & \sharp 20 \\ \sharp 21 & \sharp 22 & \sharp 23 & \sharp 24 & \sharp 25 \end{matrix} \tag{23}$$

The symbol $\sharp i$ indicates the $i$th cell.

3.2. Second-order Star CNNs

The dynamics of a second-order cell (oscillator) is written as

$$\frac{dx_i}{dt} = f(x_i, y_i), \qquad \frac{dy_i}{dt} = g(x_i, y_i), \qquad (i = 1, 2, \ldots, N) \tag{24}$$

where $f$ and $g$ are functions of $x_i$ and $y_i$. Suppose now that the following system

$$\frac{dx_i}{dt} = f(x_i, y_i) + d(x_0 - x_i), \qquad \frac{dy_i}{dt} = g(x_i, y_i), \qquad (i = 1, 2, \ldots, N) \tag{25}$$

will synchronize when the coupling coefficient $d\ (>0)$ is sufficiently large and the signal $x_0$ is obtained from the master cell

$$\frac{dx_0}{dt} = f(x_0, y_0), \qquad \frac{dy_0}{dt} = g(x_0, y_0). \tag{26}$$

It is known that the system (25) will synchronize if the Lyapunov exponents of the following difference system are all negative:

$$\frac{d}{dt}\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix} = \begin{pmatrix} \dfrac{\partial f(x_0, y_0)}{\partial x_0} - d & \dfrac{\partial f(x_0, y_0)}{\partial y_0} \\ \dfrac{\partial g(x_0, y_0)}{\partial x_0} & \dfrac{\partial g(x_0, y_0)}{\partial y_0} \end{pmatrix}\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}, \tag{27}$$

where $\Delta x = x_i - x_0$ and $\Delta y = y_i - y_0$ (for more details, see [Madan, 1993]).

Furthermore, we suppose that each cell has a stable limit cycle which is symmetric with respect to the origin; namely, if $(\varphi_i(t), \psi_i(t))$ is a limit cycle solution, then $(-\varphi_i(t), -\psi_i(t))$ is also a solution. Thus, we can also use the signal $-x_0(t)$ as a driving signal, and the following system will also synchronize in anti-phase:

$$\frac{dx_i}{dt} = f(x_i, y_i) + d(-x_0 - x_i), \qquad \frac{dy_i}{dt} = g(x_i, y_i). \qquad (i = 1, 2, \ldots, N) \tag{28}$$

The above symmetric condition is satisfied if $f$ and $g$ satisfy

$$f(x_i, y_i) = -f(-x_i, -y_i), \qquad g(x_i, y_i) = -g(-x_i, -y_i). \tag{29}$$

The dynamics of a second-order Star CNN for associative memories is defined by


$$\frac{dx_i}{dt} = f(x_i, y_i) + u_i, \qquad \frac{dy_i}{dt} = g(x_i, y_i), \qquad (i = 1, 2, \ldots, N) \tag{30}$$

where $u_i$ denotes the input of the $i$th cell. The central system receives the signal $x_i$ from the $i$th cell and supplies the following signal to each cell:


$$u_i = d(m_i x_0 - x_i), \qquad (i = 1, 2, \ldots, N) \tag{31}$$

where the parameter $d > 0$ is assumed to be sufficiently large, and

$$m_i = \mathrm{sgn}\Biggl(\sum_{j=1}^{N} s_{ij}\, w_j\Biggr), \tag{32}$$

where

$$w_j = \mathrm{sgn}(x_0 x_j). \tag{33}$$

The signal $x_0$ is obtained from the master cell (26), and the function $w_j$ gives the phase relation between the master cell and the $j$th cell, that is,

$$w_j = \begin{cases} 1 & \text{if } x_0 x_j \geq 0, \\ -1 & \text{if } x_0 x_j < 0. \end{cases} \tag{34}$$

If we set $w_j = \mathrm{sgn}(y_0 y_j)$, then we can also determine the phase relation using $y_0$ and $y_j$.

If the second-order Star CNN (for associative memories) has $N$ local cells and one master cell, the Star CNN needs $N + 1$ connections, while the fully-connected CNN with $N$ cells has $N(N+1)/2$ connections. Therefore, if the number of local cells is equal to 25, namely $N = 25$, the fully-connected CNN has 325 connections, whereas the Star CNN has only 26 connections. Observe that the Star CNN is much simpler compared to a fully-connected CNN, as shown in Fig. 3.

Next, we study the phase relation between the local and the master cells. In the period satisfying $w_j = \mathrm{sgn}(x_0(t)\, x_j(t)) = \sigma_j^1$, we have

$$\begin{aligned} m_i &= \mathrm{sgn}\Biggl(\sum_{j=1}^{N} s_{ij}\, w_j\Biggr) = \mathrm{sgn}\Biggl(\sum_{j=1}^{N}\Biggl[\frac{1}{N}\sum_{m=1}^{M}\sigma_i^m\sigma_j^m\Biggr]\sigma_j^1\Biggr) \\ &= \mathrm{sgn}\Biggl(\frac{1}{N}\sum_{j=1}^{N}\sigma_i^1\sigma_j^1\sigma_j^1 + \frac{1}{N}\sum_{m=1,\,m\neq 1}^{M}\sigma_i^m\sum_{j=1}^{N}\sigma_j^m\sigma_j^1\Biggr) = \mathrm{sgn}\Biggl(\sigma_i^1 + \frac{1}{N}\sum_{m=1,\,m\neq 1}^{M}\sigma_i^m\sum_{j=1}^{N}\sigma_j^m\sigma_j^1\Biggr). \end{aligned} \tag{35}$$

If the stored patterns are uncorrelated, then the term

$$\frac{1}{N}\sum_{m=1,\,m\neq 1}^{M}\sigma_i^m\sum_{j=1}^{N}\sigma_j^m\sigma_j^1 \tag{36}$$

is estimated to be of size $\sqrt{(M-1)/N}$ [Müller & Reinhardt, 1990]. If the number of stored patterns is much smaller than the total number of cells ($M \ll N$), this term becomes sufficiently small compared with $\sigma_i^1$. Thus, we have

$$m_i = \sigma_i^1, \qquad u_i = d(\sigma_i^1 x_0 - x_i). \tag{37}$$

If $\sigma_i^1 = 1$ (resp. $\sigma_i^1 = -1$), then the driving signal is written as

$$u_i = d(x_0 - x_i) \qquad (\text{resp. } u_i = d(-x_0 - x_i)), \tag{38}$$

and therefore the $i$th cell tends to synchronize with the master cell in phase (resp. in anti-phase). Note that the attractor corresponding to the stored pattern $\sigma^1$ has the $i$th and $j$th cells synchronized in phase when $\sigma_i^1 = \sigma_j^1$, and synchronized in anti-phase when $\sigma_i^1 = -\sigma_j^1$. Since $\sigma^1$ and $-\sigma^1$ define the same pattern of phase relations [Hoppensteadt & Izhikevich, 1999], the output of the Star CNN with the output function $\mathrm{sgn}(x_i(t))$ oscillates between $\sigma^1$ and $-\sigma^1$ periodically. Thus, the Star CNNs can store patterns in the form of synchronized states with appropriate phase relations between the oscillators.
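The in-phase/anti-phase locking predicted by Eqs. (25)–(28) can be checked numerically with the piecewise-linear van der Pol cell given as Eq. (48) below (α = β = 2); the coupling gain d = 10, the initial conditions, and the step size are our own choices:

```python
alpha, beta, d, dt = 2.0, 2.0, 10.0, 0.001

h = lambda x: 0.5 * (abs(x + 1) - abs(x - 1))
f = lambda x, y: alpha * (y - x + 2 * h(x))   # piecewise-linear van der Pol, Eq. (48)
g = lambda x, y: -beta * x                    # note f, g are odd: symmetry condition (29)

# Master cell (26), a slave driven by +x0 (Eq. 25), and a slave driven by -x0 (Eq. 28).
x0, y0 = 0.5, 0.0
x1, y1 = -1.0, 0.8
x2, y2 = 0.7, -0.3

for _ in range(int(60 / dt)):                 # forward Euler integration to t = 60
    dx0, dy0 = f(x0, y0), g(x0, y0)
    dx1, dy1 = f(x1, y1) + d * (x0 - x1), g(x1, y1)
    dx2, dy2 = f(x2, y2) + d * (-x0 - x2), g(x2, y2)
    x0, y0 = x0 + dt * dx0, y0 + dt * dy0
    x1, y1 = x1 + dt * dx1, y1 + dt * dy1
    x2, y2 = x2 + dt * dx2, y2 + dt * dy2

print(abs(x1 - x0) < 0.1, abs(x2 + x0) < 0.1)  # in-phase and anti-phase locking
```

The anti-phase lock of the second slave relies on the odd symmetry (29), exactly as argued in the text.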

Observe that if we substitute Eqs. (31)–(33) into Eq. (30), we would obtain the following fully-connected CNN:

$$\frac{dx_i}{dt} = f(x_i, y_i) + d\Biggl(\mathrm{sgn}\Biggl(\sum_{j=1}^{N} s_{ij}\,\mathrm{sgn}(x_0 x_j)\Biggr)x_0 - x_i\Biggr), \qquad \frac{dy_i}{dt} = g(x_i, y_i). \qquad (i = 1, 2, \ldots, N) \tag{39}$$

Here, the signal $x_0$ is obtained from the following master cell:

$$\frac{dx_0}{dt} = f(x_0, y_0), \qquad \frac{dy_0}{dt} = g(x_0, y_0). \tag{40}$$

Let us consider next a simplified version of the second-order Star CNN (30). In this scheme, the system can be built with fewer circuit elements. The dynamics of the simplified Star CNN is governed by a first-order differential equation

$$\frac{dy_i}{dt} = g(u_i, y_i), \qquad (i = 1, 2, \ldots, N) \tag{41}$$

where the input $u_i$ is given by

$$u_i = m_i x_0. \qquad (i = 1, 2, \ldots, N) \tag{42}$$

The signal $x_0$ is obtained from the master cell, and $m_i$ is obtained from

$$m_i = \mathrm{sgn}\Biggl(\sum_{j=1}^{N} s_{ij}\, w_j\Biggr), \tag{43}$$

where

$$w_j = \mathrm{sgn}(y_0 y_j). \tag{44}$$

The phase relation is obtained from $y_i$, and the driving signal $x_0$ is obtained from the master cell (26). The local cells will synchronize if the Lyapunov exponent of the following difference system is negative:

$$\frac{d\,\Delta y}{dt} = \frac{\partial g(x_0, y_0)}{\partial y_0}\,\Delta y, \qquad (i = 1, 2, \ldots, N) \tag{45}$$

where $\Delta y = y_i - y_0$ (for more details, see [Pecora & Carroll, 1990]). In this system, all cells are first-order cells except the master cell, whose dynamics is given by the second-order system

$$\frac{dx_0}{dt} = f(x_0, y_0), \qquad \frac{dy_0}{dt} = g(x_0, y_0). \tag{46}$$

Another interesting neural network for associative memories consists of oscillators having different frequencies, connected weakly via a common medium (bus) forced by an external input [Hoppensteadt & Izhikevich, 1999]. The dynamics of this neural network is not defined on the usual state space, but on the phase space. The relationship between our second-order Star CNN and the above neurocomputer is described in Appendix B.

We now present two examples of second-order Star CNN cells:


(1) Two-cell CNNs [Zhou & Nossek, 1991]:

$$\frac{dx_i}{dt} = -x_i + p\, h(x_i) + r\, h(y_i), \qquad \frac{dy_i}{dt} = -y_i + s\, h(x_i) + q\, h(y_i), \tag{47}$$

where $p$, $q$, $r$ and $s$ are constants and

$$h(x_i) = \frac{1}{2}\bigl(|x_i + 1| - |x_i - 1|\bigr)$$

(for example, $p = 1.1$, $q = 1.1$, $r = -2$, $s = 2$).

(2) Piecewise-linear van der Pol oscillator [Itoh & Chua, 2003]:

$$\frac{dx_i}{dt} = \alpha\bigl(y_i - x_i + 2h(x_i)\bigr), \qquad \frac{dy_i}{dt} = -\beta x_i, \tag{48}$$

where $\alpha$ and $\beta$ are constants and

$$h(x_i) = \frac{1}{2}\bigl(|x_i + 1| - |x_i - 1|\bigr)$$

(for example, $\alpha = 2$, $\beta = 2$).

3.3. Third-order Star CNNs

The dynamics of a chaotic third-order Star CNN cell is defined by

$$\frac{dx_i}{dt} = f(x_i, y_i, z_i), \qquad \frac{dy_i}{dt} = g(x_i, y_i, z_i), \qquad \frac{dz_i}{dt} = h(x_i, y_i, z_i), \tag{49}$$

where $f$, $g$ and $h$ are functions of $x_i$, $y_i$ and $z_i$. We assume that each cell has a chaotic attractor which is symmetric with respect to the origin, and that the following system

$$\frac{dx_i}{dt} = f(x_i, y_i, z_i) + d(x_0 - x_i), \qquad \frac{dy_i}{dt} = g(x_i, y_i, z_i), \qquad \frac{dz_i}{dt} = h(x_i, y_i, z_i), \qquad (i = 1, 2, \ldots, N) \tag{50}$$

will synchronize when the coupling coefficient $d$ is assumed to be sufficiently large, and the driving signal $x_0$ is obtained from the master chaotic cell

$$\frac{dx_0}{dt} = f(x_0, y_0, z_0), \qquad \frac{dy_0}{dt} = g(x_0, y_0, z_0), \qquad \frac{dz_0}{dt} = h(x_0, y_0, z_0). \tag{51}$$

The system (50) will synchronize if the Lyapunov exponents of the following difference system are all negative:

$$\frac{d}{dt}\begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z \end{pmatrix} = \begin{pmatrix} \dfrac{\partial f(x_0, y_0, z_0)}{\partial x_0} - d & \dfrac{\partial f(x_0, y_0, z_0)}{\partial y_0} & \dfrac{\partial f(x_0, y_0, z_0)}{\partial z_0} \\ \dfrac{\partial g(x_0, y_0, z_0)}{\partial x_0} & \dfrac{\partial g(x_0, y_0, z_0)}{\partial y_0} & \dfrac{\partial g(x_0, y_0, z_0)}{\partial z_0} \\ \dfrac{\partial h(x_0, y_0, z_0)}{\partial x_0} & \dfrac{\partial h(x_0, y_0, z_0)}{\partial y_0} & \dfrac{\partial h(x_0, y_0, z_0)}{\partial z_0} \end{pmatrix}\begin{pmatrix} \Delta x \\ \Delta y \\ \Delta z \end{pmatrix}, \tag{52}$$

where $\Delta x = x_i - x_0$, $\Delta y = y_i - y_0$ and $\Delta z = z_i - z_0$ (for more details, see [Madan, 1993]).

The dynamics of a third-order Star CNN for associative memories is defined by

$$\frac{dx_i}{dt} = f(x_i, y_i, z_i) + u_i, \qquad \frac{dy_i}{dt} = g(x_i, y_i, z_i), \qquad \frac{dz_i}{dt} = h(x_i, y_i, z_i), \qquad (i = 1, 2, \ldots, N) \tag{53}$$




where ui denotes thethe input of the ith ith cell.cell. The The central system receives the output x i from the ith where ui denotes input of the central system receives the output xi from thecell ithand supplies the following signal to each cell cell and supplies the following signal to each cell  signal 



ui = d(mi x0 − xi ), (i = 1, 2, · · · , N )

(54)

where the parameter d is assumed to be positive and sufficiently large and



 large and  sufficiently where the parameter d is assumed to be positive and N  (55) mi = sgn sij w  j , Nj=1 X (55) sij wj  , mi = sgn  where j=1 wj = sgn(x0 xj ). (56) where The signal x0 is obtained form the master cell (51) and wj gives the phase relation between the wj = sgn(x0 xj ) . (56) master and the local cells. The signal x0 isthat obtained form the master cell (51)into andEq. w j (53), gives we thewould phaseobtain relation thefullymaster Observe if we substitute Eqs. (54)-(56) thebetween following andconnected the local CNN: cells. Observe that if we substitute Eqs. (54)–(56) into Eq. (53), we would obtain the following fully-connected CNN: 





dxii dx dt dt dy dyii dt dt dzi dzi dt dt

      NN  X        sij   sgn    ==f (xfi(x , yii,,yzi ,i )zi+ d x ) x − x , )+ d sgn sijsgn(x sgn(x x ) x − x , 00j j 00 ii        j=1 j=1     , yi ,i ), zi ), ==g(xg(x i , yi i , z

=1,1, 2,2,. . ·. ·, ·N , )N ) (57)(57) (i(i=

                  

= h(xi , yi , zi ), = h(xi , yi , zi ).



Here, the signal x0 is obtained from the following master cell Here, the signal x0 is obtained from the following master cell  dx0  = f (x0 , y0 , z0 ),     dt0 dx  = f (x0 , y0 , z0 ) ,   dy dt0 (58) = g(x0 , y0 , z0 ),  dt0  dy  (58)  = g(x0 , y0 , z0 ) ,  dz  dt0 = h(x0 , y0 , z0 ).   dt dz 0 = CNN h(x0 , y(51) ) . described by the second-order differential 0 , z0 is A simplified version of the third-order dt Star equation: A simplified version of the third-order Star CNN (51) is described by the second-order differential equation:   dynamics   dynamics  dyi   = g(ui , yi , zi ),   dy dti  (59) = g(ui , yi , zi ),   (i = 1, 2, · · · , N )  dz dti   (59) = h(ui , yi , zi ).  (i = 1, 2, · · · , N ) dz dti   = h(u , y , z ), i i i   dt 

Star Cellular Neural Networks for Associative and Dynamic Memories

where the input signal ui is given by

ui = mi x0, (i = 1, 2, ..., N).    (60)

The signal x0 is obtained from the master cell, and mi is obtained from

mi = sgn( Σ_{j=1}^{N} sij wj ),    (61)

where

wj = sgn(y0 yj).    (62)

In this system, we assumed that the local cells

dyi/dt = g(x0, yi, zi),
dzi/dt = h(x0, yi, zi),    (i = 1, 2, ..., N)    (63)

driven by x0 will synchronize. This condition is satisfied if the Lyapunov exponents of the following difference system are all negative:

d(Δy)/dt = (∂g(x0, y0, z0)/∂y0) Δy + (∂g(x0, y0, z0)/∂z0) Δz,
d(Δz)/dt = (∂h(x0, y0, z0)/∂y0) Δy + (∂h(x0, y0, z0)/∂z0) Δz,    (64)

where Δy = yi − y0 and Δz = zi − z0 (for more details, see [Pecora & Carroll, 1990]).
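For Chua's oscillator introduced below, the driven (y, z) subsystem of Eq. (63) is linear, so the difference system (64) has a constant Jacobian and the synchronization condition reduces to an eigenvalue check. A quick numerical sketch with the example parameters:

```python
import numpy as np

# For Chua's oscillator, g = x - y + z and h = -beta*y - gamma*z are linear
# in (y, z), so the difference system (Eq. (64)) has the constant Jacobian
# below; negative real parts of its eigenvalues imply synchronization.
beta, gamma = 10.5, 0.45
J = np.array([[-1.0,  1.0],       # [dg/dy0, dg/dz0]
              [-beta, -gamma]])   # [dh/dy0, dh/dz0]
eigs = np.linalg.eigvals(J)
print(eigs.real)                  # both negative: the local cells synchronize
```

Both real parts equal −0.725, so the driven subsystems contract onto the master's trajectory.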

Finally, we present two examples of chaotic oscillators which can be exploited for associative memories:

(1) Chua's oscillator [Madan, 1993]:

dxi/dt = α(yi − xi − k(xi)),
dyi/dt = xi − yi + zi,    (65)
dzi/dt = −βyi − γzi,

where α, β, and γ are constants and

k(xi) = bxi + (1/2)(a − b)(|xi + 1.0| − |xi − 1.0|),    (66)

(for example, α = 10, β = 10.5, γ = 0.45, a = −1.22, b = −0.734).

(2) Canonical Chua's oscillator [Itoh & Chua, 2003]:

dxi/dt = α̃(xi + yi),
dyi/dt = zi − xi,    (67)
dzi/dt = −β̃(yi + k(zi)),

where α̃ and β̃ are constants and

k(zi) = b̃zi + (1/2)(ã − b̃)(|zi + 1.0| − |zi − 1.0|),    (68)

(for example, α̃ = 0.3, β̃ = 2, ã = −2, b̃ = 4).
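A minimal forward-Euler sketch of Chua's oscillator (65)-(66) with the example parameters; the step size, horizon, and initial state are illustrative choices, not taken from the paper.

```python
import numpy as np

# Minimal forward-Euler sketch of Chua's oscillator (Eqs. (65)-(66)) with the
# example parameters; step size, horizon, and initial state are illustrative.
alpha, beta, gamma = 10.0, 10.5, 0.45
a, b = -1.22, -0.734

def k(x):
    return b * x + 0.5 * (a - b) * (abs(x + 1.0) - abs(x - 1.0))

def step(state, dt=1e-3):
    x, y, z = state
    return (x + dt * alpha * (y - x - k(x)),
            y + dt * (x - y + z),
            z + dt * (-beta * y - gamma * z))

s = (0.1, 0.0, 0.0)
for _ in range(30000):
    s = step(s)
print(s)  # the trajectory remains bounded (cf. the state ranges in Fig. 8)
```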

M. Itoh & L. O. Chua

See [Madan, 1993] and [Itoh et al., 1994] for the chaotic synchronization of these oscillators.

3.4. Computer simulations of associative memories

Some computer simulations of associative memories using Chua's oscillators are given in Fig. 7. The four complementary stored patterns are given in Fig. 4. Observe that the Star CNN (53) can retrieve the patterns as synchronized oscillating patterns. Figure 8 shows the waveforms of the two local cells. Figure 9 shows the synchronization of the local and master cells. Figure 10 shows the relationship between the output waveforms and the sequence of patterns. Additional spurious patterns are also memorized, as shown in Fig. 11. The simplified Star CNN (59) can retrieve not only the stored patterns, but can also generate some spurious patterns, as shown in Fig. 11. We have also applied the Star CNNs with associative memories to other kinds of oscillators, namely, canonical Chua's oscillators, two-cell CNNs, and piecewise-linear Van der Pol oscillators, and obtained similar results. Note that in these examples, we used the function vj(x) = sgn(xj) as the output for the cells.
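The retrieval mechanism of Fig. 7 can be sketched end-to-end. The following is a hedged miniature (4 cells, 2 patterns, forward Euler) of the Star CNN (53)-(56) with Chua's oscillator cells; all sizes and numerical choices are illustrative, not the paper's 25-cell experiment. The time-averaged product xi·x0 recovers the stored phase relation — the pattern or its reverse, since the output oscillates between σj and −σj.

```python
import numpy as np

# Hedged miniature of the Star CNN (53)-(56): one master Chua cell drives N
# local Chua cells through u_i = d*(m_i*x0 - x_i).  The 4-cell, 2-pattern
# memory and all numerical choices are illustrative, not the setup of Fig. 7.
alpha, beta, gamma, a, b = 10.0, 10.5, 0.45, -1.22, -0.734
def k(x):
    return b * x + 0.5 * (a - b) * (abs(x + 1.0) - abs(x - 1.0))

patterns = np.array([[1, 1, -1, -1], [1, -1, 1, -1]], float)
N = patterns.shape[1]
S = patterns.T @ patterns / N              # Hebbian interconnections s_ij
d, dt = 10.0, 1e-3

X = np.zeros((N, 3))                       # local cell states (x_i, y_i, z_i)
X[:, 0] = 0.9 * patterns[0]                # initialize near stored pattern 1
x0, y0, z0 = 0.1, 0.0, 0.0                 # master cell state

corr = np.zeros(N)
for t in range(40000):
    w = np.sign(x0 * X[:, 0])              # Eq. (56)
    m = np.sign(S @ w)                     # Eq. (55)
    u = d * (m * x0 - X[:, 0])             # Eq. (54)
    dX = np.column_stack([
        alpha * (X[:, 1] - X[:, 0] - k(X[:, 0])) + u,
        X[:, 0] - X[:, 1] + X[:, 2],
        -beta * X[:, 1] - gamma * X[:, 2]])
    x0, y0, z0 = (x0 + dt * alpha * (y0 - x0 - k(x0)),
                  y0 + dt * (x0 - y0 + z0),
                  z0 + dt * (-beta * y0 - gamma * z0))
    X += dt * dX
    if t > 20000:
        corr += X[:, 0] * x0               # time-averaged phase vs. master
print(np.sign(corr))                       # stored pattern 1 or its reverse
```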

















Fig. 7. Retrieved patterns. The first and second (third) rows indicate the initial patterns and the recognized oscillatory patterns, respectively. Since the output function li(x) = sgn(xi(t)) is used, the Star CNN oscillates between σj and −σj periodically. The Star CNN consists of 26 Chua's oscillators (25 local cells and 1 master cell).

Fig. 8. Waveforms of the 1–10th and 16–25th cells, which are synchronized with the master cell in phase (left). Waveforms of the 11th, 12th, 13th, 14th, and 15th cells, which are synchronized with the master cell in anti-phase (right). The symbols x, y, z and r denote xj, yj, zj, and the cell output sgn(xj), respectively.

Fig. 9. Synchronization of x1 and x11. The state variables x1 and x0 are synchronized in phase, and x11 and x0 are synchronized in anti-phase.

Fig. 10. Relationship between the output waveforms and the sequence of patterns. The change of patterns is chaotic. (a) Output of the 1–10th and 16–25th cells. (b) Output of the 11–15th cells.


It is not practical to simulate associative memories for large patterns on a PC. However, under certain specialized situations, we can recall an entire image from a much smaller and corrupted subset, as illustrated by the computer simulations shown in Figs. 12 and 13. In this example, only a 3 × 3 pattern (1% of the image) marked by a green square is used for the recognition, and color palettes are applied to the cell state xi and the cell output vi = sgn(xi).

m̃i = e Σ_{j=1}^{N} sij wj, (e > 1)    (85)

where

wj = sgn(x0 xj) or sgn(y0 yj).    (86)

Note that the parameter e is added to m̃i. Then, the attractor of the local cells driven by Eq. (84) is not the same as the attractor driven by Eq. (70), and the phase relation is obtained from Eq. (86).

4.3. Computer simulations of dynamic memories

The stored patterns are given in Fig. 4. In this example, we set d = 4, e = 1. Observe that many new related patterns (spurious patterns) appear between stored patterns and their reverse patterns. The sequence of these patterns is extremely sensitive to initial conditions, and the staying period of spurious patterns is shorter than that of stored patterns. The spurious patterns are usually made up of combinations of stored patterns. Several spurious patterns and their basic components are illustrated in Figs. 17–20.
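The effect of dropping sgn(·) in Eq. (85) can be seen directly: without it, m̃i keeps magnitude information, and with e > 1 it overdrives the coupling. A small sketch with illustrative patterns, states, and gain (e = 4 here is an assumption, not a value from the text):

```python
import numpy as np

# Sketch of the dynamic-memory weight (Eqs. (85)-(86)): without sgn(.), m_i
# keeps magnitude information, and with e > 1 it overdrives the coupling.
# The patterns, states, and gain e = 4 are illustrative only.
patterns = np.array([[1, 1, -1, -1], [1, -1, 1, 1]], float)
S = patterns.T @ patterns / patterns.shape[1]      # Hebbian s_ij
e = 4.0                                            # any e > 1
x0 = 0.7
x = np.array([0.6, 0.5, -0.4, 0.8])                # local x-components
w = np.sign(x0 * x)                                # Eq. (86), x-based choice
m_tilde = e * (S @ w)                              # Eq. (85)
print(m_tilde)                                     # generally not +/-1
```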

Fig. 14. Sequence of output patterns of the Star CNN (69) for t ∈ [0, 44]. These patterns are sampled every second by using the output function sgn(xi) and illustrated only when the pattern changes. The Star CNN consists of 26 Chua's oscillators.

Fig. 15. Sequence of output patterns for t ∈ [45, 77].

Fig. 16. Sequence of output patterns for t ∈ [78, 100].







Fig. 17. A set of basic patterns.

These results are related to the fact that we generally solve problems and generate new ideas by combining various notions, ideas, or memories in our mind, and sometimes have a sudden flash of inspiration. George Christos suggests that without spurious memories, the brain would not be capable of learning anything new, and furthermore it would become obsessed with its own strongest stored memories. Spurious memories help the brain avoid this problem and learn something new, albeit similar in some respect to what is already stored in the network [Christos, 2003].

Let us investigate next the synchronization behavior of the local cells in Fig. 21. The local cells are not synchronized completely, since the output pattern occasionally travels around the stored patterns. Figure 22 shows the computer simulation of the simplified Star CNN (80). The output pattern converges to the stored oscillatory pattern, since g(ui, yi, zi) and h(ui, yi, zi) in Eq. (80) are linear functions (see Appendix C). Therefore, this system cannot exhibit dynamic memories.

The canonical Chua's oscillators can also be used for creating dynamic memories, as shown in Figs. 23 and 24. In this example, the functions ui, m̃i, and wj are chosen as follows:

ui = d(m̃i x0 − xi),

m̃i = e Σ_{j=1}^{N} sij wj,    (90)

wj = sgn(z0 zj) = { 1 if z0 zj ≥ 0,  −1 if z0 zj < 0, }

where d = 0.1 and e = 6.25. That is, the phase relation is obtained from zi, but the driving signal is obtained from x0. It is possible to get both the phase relation and the driving signal from xi. A simplified Star CNN (80) based on the canonical Chua's oscillators can also exhibit dynamic memories, since h(ui, yi, zi) = −β̃(yi + k(zi)) is a nonlinear (piecewise-linear) function (see Appendix C).
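The piecewise rule wj = sgn(z0 zj) of Eq. (90) is a one-liner; the sketch below (with illustrative z-values) shows the phase relation being read from the z-components relative to the master's z0, using the convention sgn(0) := 1 from the piecewise definition:

```python
import numpy as np

# Phase relation of Eq. (90): w_j = sgn(z0*z_j), with the z0*z_j = 0 case
# mapped to +1 as in the piecewise definition; values are illustrative.
z0 = -0.3
z = np.array([-0.8, 0.4, -0.1, 0.9])    # local z-components
w = np.where(z0 * z >= 0, 1.0, -1.0)
print(w)
```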
4.4. Second-order Star CNNs with dynamic memories

In this subsection, we investigate the possibility of creating dynamic memories in second-order Star CNNs. Although autonomous second-order oscillators cannot be chaotic, coupled second-order oscillators may produce chaotic attractors. Thus, the following second-order Star CNN may exhibit dynamic memories:

dxi/dt = f(xi, yi) + ui,
dyi/dt = g(xi, yi),    (i = 1, 2, ..., N)    (91)

where ui denotes the input of the ith cell. The central system receives the output xi from the ith cell and supplies the following signal to each cell:



Fig. 18. (Left) Spurious patterns which are made up of combinations of (right) basic patterns.

Fig. 19. (Left) Spurious patterns which are made up of combinations of (right) basic patterns.



Fig. 20. (Left) Spurious patterns which are made up of combinations of (right) basic patterns.

Fig. 21. Synchronization and waveforms of x9 and x12. (a) Waveforms xj(t) of the 9th cell (red) and 12th cell (blue) for d = 10. The two cells are desynchronized at t ≈ 490, and then synchronized again at t ≈ 670. (b) Waveforms xj(t) of the 9th cell (red) and 12th cell (blue) for d = 4. (c) Synchronization of x9 and x12 for d = 10. (d) Synchronization of x9 and x12 for d = 4.



ui = d(m̃i x0 − xi), (i = 1, 2, ..., N)    (92)

where

m̃i = Σ_{j=1}^{N} sij wj,    (93)

where

wj = sgn(x0 xj) or sgn(y0 yj).    (94)

The signal x0 is obtained from the master cell

dx0/dt = f(x0, y0),
dy0/dt = g(x0, y0).    (95)

Observe that if we substitute Eqs. (92)–(94) into Eq. (91), we would obtain a fully-connected CNN:

dxi/dt = f(xi, yi) + d[ ( Σ_{j=1}^{N} sij sgn(x0 xj) ) x0 − xi ],
dyi/dt = g(xi, yi),    (i = 1, 2, ..., N)    (96)

Here, the signal x0 is obtained from the following master cell:

dx0/dt = f(x0, y0),
dy0/dt = g(x0, y0).    (97)

A simplified version of Eq. (91) is defined as follows:

dyi/dt = g(ui, yi), (i = 0, 1, 2, ..., N)    (98)

where the input ui is given by

ui = m̃i x0, (i = 1, 2, ..., N).    (99)

Here, the signal x0 is obtained from the master cell and m̃i is obtained from

m̃i = Σ_{j=1}^{N} sij wj,    (100)

where

wj = sgn(y0 yj).    (101)
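The master/local wiring of Eqs. (91)-(95) can be sketched numerically. The paper uses two-cell CNNs here; since their state equations are not reproduced in this section, a van der Pol-type oscillator stands in below as a hypothetical choice of f and g, purely to show the structure of the signals (92)-(94) with a 2-cell, 2-pattern memory.

```python
import numpy as np

# Structural sketch of the second-order Star CNN (91)-(95).  A van der Pol
# oscillator stands in for the paper's two-cell CNN (hypothetical choice);
# all numerical values are illustrative.
mu = 1.0
f = lambda x, y: y
g = lambda x, y: mu * (1.0 - x * x) * y - x

patterns = np.array([[1, -1], [1, 1]], float)
S = patterns.T @ patterns / 2               # Hebbian s_ij
d, dt = 0.2, 1e-3
x = np.array([0.5, -0.5]); y = np.zeros(2)  # local cells
x0, y0 = 0.5, 0.0                           # master cell, Eq. (95)

for _ in range(20000):
    w = np.sign(x0 * x)                     # Eq. (94), x-based choice
    m = S @ w                               # Eq. (93)
    u = d * (m * x0 - x)                    # Eq. (92)
    x, y = x + dt * (f(x, y) + u), y + dt * g(x, y)
    x0, y0 = x0 + dt * f(x0, y0), y0 + dt * g(x0, y0)
print(np.sign(x0 * x))  # cell 1 stays in phase, cell 2 in anti-phase
```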

However, the simplified Star CNN (98) cannot exhibit dynamic memories because Star CNN cells are one-dimensional systems. Similarly, the following signals (with the parameters d and e)

ui = d(m̃i x0 − xi),
m̃i = e Σ_{j=1}^{N} sij wj, (e > 1)    (102)

where

wj = sgn(x0 xj),    (103)

and

ui = m̃i x0,
m̃i = e Σ_{j=1}^{N} sij wj, (e > 1)    (104)

where

wj = sgn(y0 yj),    (105)

can be applied to Eqs. (91) and (98), respectively.

Fig. 22. Output pattern converges to the stored oscillatory pattern. (a) Initial pattern, (b) converged oscillatory pattern.

Several computer simulations of dynamic memories via two-cell CNNs are illustrated in Figs. 25–28. The stored patterns are given in Fig. 4. In this example, we set d = 0.2, e = 6.25. Observe that many new related patterns appear between stored patterns and their reverse patterns. In the case of the piecewise-linear Van der Pol oscillators, we could not find appropriate parameter values of d and e for creating dynamic memories.

5. Relationship between Associative and Dynamic Memories

In this section, we study the relationship between the dynamics of associative memories and dynamic memories. Consider the following system

dxi/dt = f(xi, yi, zi) + ui,
dyi/dt = g(xi, yi, zi),
dzi/dt = h(xi, yi, zi),    (i = 1, 2, ..., N)    (106)

where ui denotes the input of the ith cell. The central system receives the output xi from the local cell and supplies the following signal to each cell



Fig. 23. Sequence of output patterns of the Star CNN (69) for t ∈ [0, 63]. These patterns are sampled every second by using the output function sgn(xi) and illustrated only when the pattern changes. The Star CNN consists of 26 canonical Chua's oscillators.

Fig. 24. Sequence of output patterns for t ∈ [64, 100].

Fig. 25. Sequence of output patterns of the Star CNN (91) for t ∈ [0, 32]. These patterns are sampled every second by using the output function sgn(xi) and illustrated only when the pattern changes. The Star CNN consists of 26 two-cell CNNs.

Fig. 26. Sequence of output patterns for t ∈ [33, 62].

Fig. 27. Sequence of output patterns for t ∈ [63, 92].

Fig. 28. Sequence of output patterns for t ∈ [93, 100].

ui = d(mi x0 − xi), (i = 1, 2, ..., N)    (107)

where d > 0 and

mi = k( e Σ_{j=1}^{N} sij wj ),    (108)

where

wj = sgn(x0 xj).    (109)

Here, the symbol k(·) denotes a scalar function, wj specifies the phase relation between the master and the local cells, and x0 is obtained from the master cell

dx0/dt = f(x0, y0, z0),
dy0/dt = g(x0, y0, z0),
dz0/dt = h(x0, y0, z0).    (110)

The scalar function k(·) and the parameters d and e in Eqs. (107) and (108) are defined as follows:

Memory type          | k(x)   | d     | e     | mi
Associative memory   | sgn(x) | d ≫ 1 | e = 1 | mi = ±1
Dynamic memory       | x      | d ≪ 1 | e > 1 | mi = e Σ_{j=1}^{N} sij wj
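The table reduces to one switch: the scalar function k(·) of Eq. (108). A sketch with illustrative patterns, using e = 6.25 (the example gain from the text) for the dynamic case:

```python
import numpy as np

# The k(.) switch of Eq. (108): k = sgn with e = 1 yields binary m_i = +/-1
# (associative memory); k = x with e > 1 yields graded m_i (dynamic memory).
# Patterns are illustrative; e = 6.25 is the example gain from the text.
patterns = np.array([[1, 1, -1, -1], [1, -1, 1, 1]], float)
N = patterns.shape[1]
S = patterns.T @ patterns / N
w = patterns[0]                      # phase relation matching pattern 1

m_assoc = np.sign(1.0 * (S @ w))     # k = sgn, e = 1
m_dyn = 6.25 * (S @ w)               # k = x,  e = 6.25
print(m_assoc)                       # binary: the stored pattern itself
print(m_dyn)                         # graded: carries cross-talk magnitudes
```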

m̃i = e Σ_{j=1}^{N} sij wj, (e > 1)    (D.3)

where

wj = sgn(x0 xj) or sgn(y0 yj).    (D.4)

For the time period of t satisfying wj = sgn(x0(t) xj(t)) = σj^1 (j = 1, 2, ..., N), we have

Σ_{j=1}^{N} sij wj = Σ_{j=1}^{N} sij σj^1 = Σ_{j=1}^{N} ( (1/N) Σ_{m=1}^{M} σi^m σj^m ) σj^1 = σi^1 + ε,    (D.5)

where

ε = (1/N) Σ_{m≠1}^{M} σi^m Σ_{j=1}^{N} σj^m σj^1.    (D.6)

Then the input signal becomes

ui = d( e(σi^1 + ε) x0 − xi )
   = de(σi^1 + ε) x0 − dxi
   = d(eσi^1 + eε) x0 + (dσi^1 x0 − dσi^1 x0) − dxi
   = [dσi^1 x0 − dxi] + [d(eσi^1 + eε) x0 − dσi^1 x0]
   = d(σi^1 x0 − xi) + d[eε + (e − 1)σi^1] x0(t).    (D.7)

If we assume that ε ≪ 1, then the second term is estimated as

d(eε + (e − 1)σi^1) x0(t) ≈ d(e − 1)σi^1 x0(t), (d(e − 1)σi^1 ≠ 0).    (D.8)

Since this term does not vanish, it disturbs the synchronization. For example, in the case of the canonical Chua's oscillators, we set d = 0.1 and e = 6.25. Thus, we have

d(e − 1)σi^1 x0(t) = 0.525 σi^1 x0(t),
ui ≈ 0.1(σi^1 x0 − xi) + 0.525 σi^1 x0(t).    (D.9)

The output pattern may oscillate in the phase relation corresponding to the pattern σ^1 for a while, but it does not converge to the pattern σ^1. Thus, spurious patterns can appear easily in this system.
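The estimate (D.5)-(D.9) can be checked numerically: with the Hebbian sij, the coupling weight decomposes as Σj sij wj = σi^1 + ε, and for d = 0.1, e = 6.25 the non-vanishing residual gain is d(e − 1) = 0.525. The two small patterns below are illustrative (deliberately non-orthogonal so that ε ≠ 0).

```python
import numpy as np

# Numeric check of (D.5)-(D.9): sum_j s_ij w_j = sigma_i^1 + eps, and for
# d = 0.1, e = 6.25 the residual gain of (D.8)-(D.9) is d*(e - 1) = 0.525.
patterns = np.array([[1, 1, -1, -1], [1, -1, 1, 1]], float)  # sigma^1, sigma^2
M, N = patterns.shape
S = patterns.T @ patterns / N      # Hebbian s_ij
w = patterns[0]                    # w_j = sigma_j^1 during this time period
eps = S @ w - patterns[0]          # cross-talk term of (D.6)
d, e = 0.1, 6.25
print(eps)                         # nonzero: the patterns are not orthogonal
print(d * (e - 1))                 # 0.525, the residual gain of (D.9)
```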