Baltzer Journals

Stochastic Neurodynamics and the System Size Expansion

Toru Ohira¹ and Jack D. Cowan²

¹ Sony Computer Science Laboratory, 3-14-13 Higashi-gotanda, Shinagawa, Tokyo 141, Japan. E-mail: ohira@csl.sony.co.jp
² Departments of Mathematics and Neurology, The University of Chicago, Chicago, IL 60637. E-mail: [email protected]

We present here a method for the study of stochastic neurodynamics in the master equation framework. Our aim is to obtain a statistical description of the dynamics of fluctuations and correlations of neural activity in large neural networks. We focus on a macroscopic description of the network via a master equation for the number of active neurons in the network. We present a systematic expansion of this equation using the "system size expansion". We obtain coupled dynamical equations for the average activity and for the fluctuations around this average. These equations exhibit non-monotonic approaches to equilibrium, as seen in Monte Carlo simulations.

Keywords: stochastic neurodynamics, master equation, system size expansion.

1 Introduction

The correlated firing of neurons is considered to be an integral part of information processing in the brain [12, 2]. Experimentally, cross-correlations are used to study synaptic interactions between neurons and to probe for synchronous network activity. In theoretical studies of stochastic neural networks, understanding the dynamics of correlated neural activity requires one to go beyond the mean-field approximation, which neglects correlations in non-equilibrium states [8, 6]. In other words, we need to go beyond the simple mean-field approximation to study the effects of fluctuations about average firing activities. Recently, we have analyzed stochastic neurodynamics using a master equation [5, 8]. A network comprising binary neurons with asynchronous stochastic dynamics is considered, and a master equation is written in "second quantized form" to take advantage of the theoretical tools that then become available for its analysis. A hierarchy of moment equations is obtained, and a heuristic closure at the level of the second moment equations is introduced. Another approach, based on the master equation via path integrals, and the extension to neurons with a refractory state, are discussed in [9, 10].

In this paper, we introduce another master-equation-based approach that goes beyond the mean-field approximation. We concentrate on the macroscopic behavior of a network of two-state neurons, and introduce a master equation for the number of active neurons in the network at time $t$. We use a more systematic expansion of the master equation than hitherto, the "system size expansion" [11]. The expansion parameter is the inverse of the total number of neurons in the network. We truncate the expansion at second order and obtain an equation for fluctuations about the mean number of active neurons, which is itself coupled to the equation for the average number of active neurons at time $t$. These equations show non-monotonic approaches to equilibrium values near critical points, a feature which is not seen in the mean-field approximation. Monte Carlo simulations of the master equation itself show qualitatively similar non-monotonic behavior.

2 Master Equation and the System Size Expansion

We first construct a master equation for a network comprising $N$ binary elements with two states, "active" (firing) and "quiescent" (non-firing). The transitions between these states are probabilistic, and we assume that the transition rate from active to quiescent is a constant $\alpha$ for every neuron in the network. We do not make any special assumption about network connectivity, but assume that it is "homogeneous", i.e., that all neurons are statistically equivalent with respect to their activities, which depend only on the proportion of active neurons in the network. More specifically, the transition rate from quiescent to active is given as a function $\Phi$ of the number of active neurons in the network. Taking the firing time to be about 2 ms, we have $\alpha \approx 500\,\mathrm{s}^{-1}$. For the transition rate from quiescent to active, the range of the function $\Phi$ is roughly $30$–$100\,\mathrm{s}^{-1}$, reflecting empirically observed firing rates of cortical neurons. With these assumptions, one can write a master equation as follows:

$$
-\frac{\partial}{\partial t} P_N[n,t] = \alpha\left(n\,P_N[n,t] - (n+1)\,P_N[n+1,t]\right) + N\left(1 - \frac{n}{N}\right)\Phi\!\left(\frac{n}{N}\right)P_N[n,t] - N\left(1 - \frac{n-1}{N}\right)\Phi\!\left(\frac{n-1}{N}\right)P_N[n-1,t], \qquad (1)
$$


where $P_N[n,t]$ is the probability that the number of active neurons is $n$ at time $t$. (We have absorbed the parameter representing total synaptic weight into the function $\Phi$.) This master equation can be deduced from the second quantized form cited earlier, which will be discussed elsewhere. The standard form of this equation is obtained by introducing the "step operator" $E$, defined by the following action on an arbitrary function of $n$:

$$ E f(n) = f(n+1), \qquad E^{-1} f(n) = f(n-1). \qquad (2) $$

In effect, $E$ and $E^{-1}$ shift $n$ by one. Using these step operators, Eq. (1) becomes

$$ \frac{\partial}{\partial t} P_N[n,t] = (E - 1)\,r_n P_N[n,t] + (E^{-1} - 1)\,g_n P_N[n,t], \qquad (3) $$

where $r_n = \alpha n$ and $g_n = (N - n)\,\Phi(n/N)$. This master equation is nonlinear, since $g_n$ is a nonlinear function of $n$. Linear master equations, in which both $r_n$ and $g_n$ are linear functions of $n$, can be solved exactly. In general, however, nonlinear master equations cannot, so in our case we seek an approximate solution.

We now expand the master equation to obtain approximate equations for the stochastic dynamics of the network. We use the system size expansion, which is closely related to the Kramers–Moyal expansion, to obtain the "macroscopic equation" and time-dependent approximations to fluctuations about its solutions. In essence, this method expands a master equation in powers of a small parameter, usually identified as the inverse size of the system. Here, we identify the system size with the total number $N$ of neurons in the network.

We make a change of variables in the master equation (3). We assume that fluctuations about the macroscopic value of $n$ are of order $N^{1/2}$; in other words, we expect $P_N(n,t)$ to have a maximum around the macroscopic value of $n$, with a width of order $N^{1/2}$. Hence, we set

$$ n(t) = N\phi(t) + N^{1/2}\xi(t), \qquad (4) $$

where $\phi$ satisfies the macroscopic equation and $\xi$ is a new variable, whose distribution $\Pi(\xi,t)$ is equivalent to $P_N(n,t)$, i.e., $P_N(n,t) = \Pi(\xi,t)$. We expand the step operators as

$$ E = 1 + N^{-1/2}\frac{\partial}{\partial\xi} + \frac{1}{2}N^{-1}\frac{\partial^2}{\partial\xi^2} + \cdots, \qquad E^{-1} = 1 - N^{-1/2}\frac{\partial}{\partial\xi} + \frac{1}{2}N^{-1}\frac{\partial^2}{\partial\xi^2} - \cdots, \qquad (5) $$

since $E$ shifts $\xi$ to $\xi + N^{-1/2}$. With this change of variables, the master equation is given as follows:

$$
\frac{\partial \Pi(\xi,t)}{\partial t} - N^{1/2}\frac{d\phi}{dt}\frac{\partial \Pi(\xi,t)}{\partial \xi} = \alpha N\left(N^{-1/2}\frac{\partial}{\partial\xi} + \frac{1}{2}N^{-1}\frac{\partial^2}{\partial\xi^2} + \cdots\right)\left[\left(\phi + N^{-1/2}\xi\right)\Pi(\xi,t)\right]
$$
$$
\quad + N\left(-N^{-1/2}\frac{\partial}{\partial\xi} + \frac{1}{2}N^{-1}\frac{\partial^2}{\partial\xi^2} - \cdots\right)\left[\left(1 - \phi - N^{-1/2}\xi\right)\Phi\!\left(\phi + N^{-1/2}\xi\right)\Pi(\xi,t)\right] \qquad (6)
$$
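As an aside, Eq. (3) describes a one-step birth–death process in $n$ that can be simulated exactly, which provides an independent check on the expansion developed below. The following is a minimal Gillespie-style sketch of this single-variable process, with rates $r_n = \alpha n$ and $g_n = (N - n)\Phi(n/N)$; the sigmoid form of $\Phi$ and its gain and threshold parameters are illustrative assumptions, since the text does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

def Phi(x, beta=15.0, theta=0.5, gain=10.0):
    # Hypothetical sigmoid activation function; the paper constrains the
    # range of Phi but does not fix its form, gain, or threshold.
    return beta / (1.0 + np.exp(-gain * (x - theta)))

def simulate(N=2500, alpha=0.5, n0=1250, t_max=14.0):
    """Exact (Gillespie) simulation of the birth-death master equation (3)."""
    t, n = 0.0, n0
    ts, chis = [t], [n / N]
    while t < t_max:
        r = alpha * n                 # active -> quiescent rate, r_n
        g = (N - n) * Phi(n / N)      # quiescent -> active rate, g_n
        total = r + g
        if total == 0.0:
            break                     # no transitions possible from this state
        t += rng.exponential(1.0 / total)           # waiting time to next event
        n += -1 if rng.random() < r / total else 1  # choose death or birth
        ts.append(t)
        chis.append(n / N)
    return np.array(ts), np.array(chis)   # trajectory of chi = n/N
```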


Collecting terms, we obtain, to order $N^{1/2}$,

$$ \frac{d\phi}{dt} = -\alpha\phi + (1 - \phi)\,\Phi(\phi). \qquad (7) $$

This is the macroscopic equation, which can also be obtained by applying a mean-field approximation to the master equation. We require $\phi$ to satisfy this equation so that the terms of order $N^{1/2}$ vanish. The next order is $N^{0}$, which gives a Fokker–Planck equation for the fluctuation $\xi$ (here $\Phi'(\phi) \equiv \left.\partial\Phi(x)/\partial x\right|_{x=\phi}$):

$$
\frac{\partial \Pi}{\partial t} = -\frac{\partial}{\partial \xi}\left\{\left[-\alpha - \Phi(\phi) + (1 - \phi)\,\Phi'(\phi)\right]\xi\,\Pi\right\} + \frac{1}{2}\frac{\partial^2}{\partial \xi^2}\left\{\left[\alpha\phi + (1 - \phi)\,\Phi(\phi)\right]\Pi\right\} \qquad (8)
$$

We note that this equation does not depend on $N$, justifying our assumption that fluctuations are of order $N^{1/2}$.

We now study the behavior of the equations obtained through the system size expansion to second order. From Eqs. (7) and (8) (taking the first moment of the latter), we obtain

$$ \frac{d\chi}{dt} = -\alpha\chi + (1 - \chi)\,\Phi(\chi - \epsilon) + (1 - \chi + \epsilon)\,\Phi'(\chi - \epsilon)\,\epsilon \qquad (9) $$

$$ \frac{d\epsilon}{dt} = -\alpha\epsilon - \Phi(\chi - \epsilon)\,\epsilon + (1 - \chi + \epsilon)\,\Phi'(\chi - \epsilon)\,\epsilon \qquad (10) $$

where $\chi \equiv \langle n \rangle / N = \phi + \epsilon$ and $\epsilon = N^{-1/2}\langle \xi \rangle$. Equations (9) and (10) can be integrated numerically. Some examples are shown in Figure 1(B); for comparison, we plot solutions of the macroscopic equation with the same parameter sets in Figure 1(A). We observe the physically expected bifurcation into an active network state with decreasing $\alpha$ in either approximation. However, different dynamics appear as one approaches the bifurcation point: in particular, the coupled equations exhibit a non-monotonic approach to the limiting value. The validity of the system size expansion is restricted to the region away from the bifurcation point, as discussed in the last section. The point is that by incorporating higher-order terms into the approximation, we extend its validity to a domain closer to the bifurcation point, and thereby better capture the stochastic dynamical behavior of such networks.

Monte Carlo simulations [3] of a two-dimensional network based on (1), with 2500 neurons and periodic boundary conditions, were performed. The connectivity is set up as follows: each neuron is connected to a specified number $k$ of other neurons chosen randomly from the network. The strength of connection is given by the Poisson form

$$ w_{ij} = w_0\,\frac{r_{ij}^{\,s}}{s!}\,e^{-r_{ij}}, \qquad (11) $$

where $r_{ij}$ is the distance between two neurons, and $w_0$ and $s$ are constants. We show in Figure 2 the average behavior of $\chi$ for (A) $k = 200$ ($k/N = 0.08$) and (B) $k = 15$ ($k/N = 0.006$). The non-monotonic dynamics is more noticeable in the low-connectivity network. More quantitative comparisons between simulations and theory will be carried out in the future. The qualitative comparison shown here, however, indicates the need to model fluctuations of total activity near critical points in order to capture the dynamics of sparsely connected networks. This is consistent with our earlier investigations of a one-dimensional ring of neurons via a master equation.
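For concreteness, Eqs. (9) and (10) can be integrated with any standard ODE solver. The sketch below uses SciPy with the same illustrative sigmoid $\Phi$ as in the earlier sketch (again an assumption), approximating $\Phi'$ by a central difference. A small nonzero $\epsilon(0)$ is used so that the fluctuation correction actually contributes: since every term of Eq. (10) is proportional to $\epsilon$, the choice $\epsilon(0) = 0$ reduces Eqs. (9) and (10) to the macroscopic equation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def Phi(x, beta=15.0, theta=0.5, gain=10.0):
    # Same hypothetical sigmoid as in the earlier sketch.
    return beta / (1.0 + np.exp(-gain * (x - theta)))

def dPhi(x, h=1e-6):
    # Central-difference approximation to Phi'(x).
    return (Phi(x + h) - Phi(x - h)) / (2.0 * h)

def rhs(t, y, alpha):
    # Right-hand sides of Eqs. (9) and (10); p stands for phi = chi - eps.
    chi, eps = y
    p = chi - eps
    dchi = -alpha * chi + (1 - chi) * Phi(p) + (1 - chi + eps) * dPhi(p) * eps
    deps = -alpha * eps - Phi(p) * eps + (1 - chi + eps) * dPhi(p) * eps
    return [dchi, deps]

# chi(0) = 0.5 as in Figure 1; alpha = 0.493 is the near-bifurcation case (b).
sol = solve_ivp(rhs, (0.0, 14.0), [0.5, 0.01], args=(0.493,), max_step=0.05)
chi_t, eps_t = sol.y   # chi approaches equilibrium non-monotonically near the bifurcation
```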

{0.2}

χ1

(b) {0.493}

χ1

0.8

χ

0.8

0.6

0.6

0.6

0.4

0.4

0.4

0.2

0.2

0

4

2

6

8

14

12

10

(c) {0.9}

1

0.8

0.2

0

4

2

6

8

10

14

12

0

4

2

6

8

10

Ti me

Ti me

12

14

Ti me

B (a)

χ1

χ

0.8

(b)

1

χ

0.8

0.6

0.6

0.6

0.4

0.4

0.4

0.2

0

0.2

4

2

6

8

14

12

10

0

(c)

1

0.8

0.2

2

4

6

8

14

12

10

Ti me

0

2

4

6

8

10

Ti me

14

12

Ti me

Figure 1: Comparison of solutions of (A) the macroscopic equation (7) and (B) the coupled equations (9) and (10). The parameters are set at $\beta = 15$, $\theta = 0.5$ (the parameters of $\Phi$), and $\alpha =$ (a) 0.2, (b) 0.493, and (c) 0.9. The initial conditions are $\chi = 0.5$, and $\epsilon =$ (A) 0.01 and (B) 0. (Ordinate: $\chi$; abscissa: time.)

χ1

(b) 0.1

χ1

0.8

χ

0.8

0.6

0.6

0.6

0.4

0.4

0.4

0.2

0.2

0

4

2

6

8

10

12

0.2

0

14

0.4 (c)

1

0.8

4

2

6

8

10

12

0

14

4

2

6

8

10

Ti me

Ti me

12

14

Ti me

B χ

(a) 0.001

1

χ

0.8

0.05 (b)

1

χ

0.8

0.6

0.6

0.6

0.4

0.4

0.4

0.2 0

0.2

10

20

30

40

50

60

Ti me

0

0.2 (c)

1 0.8

0.2

10

20

30

40

50

60

Ti me

0

10

20

30

40

50

60

Ti me

Figure 2: Comparison of Monte Carlo simulations of the master equation with high ($k = 200$) and low ($k = 15$) connectivity per neuron. The parameters are set at $\beta = 15$, $\theta = 0.5$, $w_0 = 100.0$, $s = 3.0$, with $\alpha =$ (a) 0.05, (b) 0.1, and (c) 0.4 for (A), and $\alpha =$ (a) 0.001, (b) 0.05, and (c) 0.2 for (B). The initial condition is a random configuration. (Ordinate: $\chi$; abscissa: time.)
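For reference, the connectivity used in simulations of this kind can be set up directly from Eq. (11). The sketch below builds the weight matrix for a 50 × 50 periodic lattice (2500 neurons, as in the text); measuring $r_{ij}$ as the Euclidean distance on the torus with unit lattice spacing is our assumption, since the text does not state how distances are computed.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(1)

def poisson_weight(r, w0=100.0, s=3):
    # Connection strength of Eq. (11): w0 * r^s / s! * exp(-r).
    return w0 * (r ** s) / factorial(s) * np.exp(-r)

def build_weights(L=50, k=200):
    """Each neuron connects to k randomly chosen others on an L x L torus."""
    N = L * L
    coords = np.array([(i % L, i // L) for i in range(N)], dtype=float)
    W = np.zeros((N, N))
    indices = np.arange(N)
    for i in range(N):
        targets = rng.choice(np.delete(indices, i), size=k, replace=False)
        d = np.abs(coords[targets] - coords[i])
        d = np.minimum(d, L - d)          # wrap distances: periodic boundaries
        W[i, targets] = poisson_weight(np.hypot(d[:, 0], d[:, 1]))
    return W

W = build_weights(L=50, k=15)   # the low-connectivity case of Figure 2(B)
```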


3 Discussion

We have outlined here an application of the system size expansion to a master equation for stochastic neural network activity. It produced a dynamical equation for the fluctuations about mean activity levels, whose solutions show a non-monotonic approach to such levels near a critical point. This behavior has been seen in model networks with low connectivity. Two issues raised by this approach require further comment. (1) In this work we have used the number of neurons in the network as the expansion parameter. Given the observation that the overall connectedness affects the stochastic dynamics, a parameter representing the average connectivity per neuron may be better suited as an expansion parameter. We note that this parameter is typically small, relative to the total number of neurons, for biological neural networks. (2) There are many studies of Hebbian learning in neural networks [1, 4, 7]. In such studies, attempts have also been made to incorporate correlations of neural activities. It is of interest to see whether such attempts can be formulated within the framework presented here.

References

[1] S. Amari and K. Maginu, Statistical neurodynamics of associative memory, Neural Networks, 1 (1988) 63–73.
[2] D. J. Amit, N. Brunel and M. V. Tsodyks, Correlations of cortical Hebbian reverberations: theory versus experiment, J. of Neuroscience, 14 (1994) 6435–6445.
[3] K. Binder, Introduction: theory and "technical" aspects of Monte Carlo simulations, in Monte Carlo Methods in Statistical Physics, 2nd ed., Springer-Verlag, Berlin, 1986.
[4] A. C. C. Coolen and D. Sherrington, Dynamics of fully connected attractor neural networks near saturation, Phys. Rev. Lett., 71 (1993) 3886–3889.
[5] J. D. Cowan, Stochastic neurodynamics, in Advances in Neural Information Processing Systems 3, R. P. Lippmann, J. E. Moody and D. S. Touretzky, eds., Morgan Kaufmann, San Mateo, 1991.
[6] I. Ginzburg and H. Sompolinsky, Theory of correlations in stochastic neural networks, Phys. Rev. E, 50 (1994) 3171–3191.
[7] H. Nishimori and T. Ozeki, Retrieval dynamics of associative memory of the Hopfield type, J. Phys. A: Math. Gen., 26 (1993) 859–871.
[8] T. Ohira and J. D. Cowan, Master equation approach to stochastic neurodynamics, Phys. Rev. E, 48 (1993) 2259–2266.
[9] T. Ohira and J. D. Cowan, Feynman diagrams for stochastic neurodynamics, in Proceedings of the Fifth Australian Conference on Neural Networks, Brisbane, 1994.
[10] T. Ohira and J. D. Cowan, Stochastic dynamics of three-state neural networks, in Advances in Neural Information Processing Systems 7, G. Tesauro, D. S. Touretzky and T. K. Leen, eds., MIT Press, Cambridge, 1995.
[11] N. G. van Kampen, Stochastic Processes in Physics and Chemistry, North-Holland, Amsterdam, 1992.
[12] D. Wang, J. Buhmann and C. von der Malsburg, Pattern segmentation in associative memory, Neural Computation, 2 (1990) 94–106.