Graded, Dynamically Routable Information Processing with Synfire-Gated Synfire Chains

Zhuo Wang1, Andrew T. Sornborger2,*, Louis Tao1,3,*
Supporting Information

Appendix 1: Graded Propagation via Overlapping Gating Pulses

Mean-Field Analysis

To analyze graded propagation for the case in which the integration of graded information in successive populations overlaps, we assume that the gating pulse is square with amplitude sufficient to bring neuronal populations up to the firing threshold. We also assume that above threshold the activity function is linear [?]. Firing in the upstream population is integrated by the downstream population. The downstream synaptic current obeys

\[
\tau \frac{dI_d}{dt} = -I_d + S m_u ,
\]

where S is a synaptic coupling strength, I_d(t) is the downstream synaptic current, and \tau is a synaptic timescale. For simplicity, we set \tau = 1; from here on, time will be measured in units of \tau. The upstream firing rate is

\[
m_u \approx \left[ I_u(t) + I_0^{\mathrm{Exc}} - g_0 \right]^{+} ,
\]

where I_0^{\mathrm{Exc}} = g_0\, p(t) is an excitatory gating pulse, p(t) = \theta(t) - \theta(t - T), and \theta is the Heaviside step function, causing the downstream population to integrate I_u(t), giving the current

\[
G[I_u](t) \equiv S e^{-t} \left[ \int_0^t ds\, e^{s} I_u(s) + c \right] .
\]
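As a sanity check (ours, not part of the original analysis), the closed form above can be compared against a direct numerical integration of the differential equation. The upstream current and parameter values below are arbitrary illustrations.

```python
# Numerical sketch: verify that
#   G[I_u](t) = S e^{-t} [ \int_0^t e^s I_u(s) ds + c ]
# solves  dI_d/dt = -I_d + S I_u(t)  with I_d(0) = S c.
# The upstream current I_u and the parameters are hypothetical examples.
import numpy as np

S, c = 1.5, 0.2            # assumed coupling strength and integration constant
T = 2.0                    # length of the gating window (in units of tau)
t = np.linspace(0.0, T, 20001)
dt = t[1] - t[0]

I_u = np.exp(-t) * (0.3 + 0.1 * t)   # hypothetical upstream current

# Closed form: cumulative trapezoidal integral of e^s I_u(s)
integrand = np.exp(t) * I_u
cumtrap = np.concatenate(([0.0], np.cumsum(0.5 * dt * (integrand[1:] + integrand[:-1]))))
I_closed = S * np.exp(-t) * (cumtrap + c)

# Direct forward-Euler integration of the ODE
I_euler = np.empty_like(t)
I_euler[0] = S * c
for k in range(len(t) - 1):
    I_euler[k + 1] = I_euler[k] + dt * (-I_euler[k] + S * I_u[k])

print("max |closed form - ODE| =", np.max(np.abs(I_closed - I_euler)))  # ~O(dt)
```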
As shown in Fig. 2 (in the main text), we label the offset between upstream and downstream gates by T_0. The solution is broken into a set of n + 1 equal intervals of length T_0, labelled from later to earlier in time. The (n+1)-st (earliest) interval typically extends before the onset of the pulse, since an arbitrary pulse length is not an integer multiple of the lag T_0. The delay between the beginning of the final interval and the beginning of the pulse is denoted T_1 (see Fig. 2b). During each interval, the downstream current integrates upstream firing shifted by T_0. The integrated currents and matching conditions for each interval solution are given by

\[
\begin{aligned}
G[I_0](t) &= I_1(t), & I_0(0) &= I_1(T_0) \\
G[I_1](t) &= I_2(t), & I_1(0) &= I_2(T_0) \\
&\;\vdots \\
G[I_{n-1}](t) &= I_n(t), & I_{n-1}(0) &= I_n(T_0) \\
G[I_n(s)\theta(s - T_1)](t) &= I_{n+1}(t), & I_n(0) &= I_{n+1}(T_0), \quad I_{n+1}(T_1) = 0 .
\end{aligned}
\]
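For concreteness, setting n = 1 in the relations above (this instantiation is ours) gives the smallest overlapping case:

\[
\begin{aligned}
G[I_0](t) &= I_1(t), & I_0(0) &= I_1(T_0), \\
G[I_1(s)\theta(s - T_1)](t) &= I_2(t), & I_1(0) &= I_2(T_0), \quad I_2(T_1) = 0 .
\end{aligned}
\]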
Here, due to our definition of G, each interval solution begins at time t = 0. This leads to the particular form of the matching conditions above. The interval solutions will later be shifted in time to give a continuous synaptic current composed of these pieces. Evaluating, we find

\[
\begin{aligned}
I_0(t) &= S e^{-t} c_0 \\
I_1(t) &= S e^{-t} \left[ \int_0^t ds\, e^{s} e^{-s} S c_0 + c_1 \right] = S e^{-t} \left[ S c_0 t + c_1 \right] \\
&\;\vdots \\
I_j(t) &= S e^{-t} \sum_{i=0}^{j} c_i \frac{(S t)^{j-i}}{(j-i)!} \\
&\;\vdots \\
I_n(t) &= S e^{-t} \sum_{i=0}^{n} c_i \frac{(S t)^{n-i}}{(n-i)!} \\
I_{n+1}(t) &= S e^{-t} \left[ \int_{T_1}^{t} ds\, e^{s} S e^{-s} \sum_{i=0}^{n} c_i \frac{(S s)^{n-i}}{(n-i)!} \right]
            = S e^{-t} \sum_{i=0}^{n} c_i \frac{S^{n-i+1} \left( t^{n-i+1} - T_1^{n-i+1} \right)}{(n-i+1)!} ,
\end{aligned}
\]
where I_0(t) is a current due to the exponential decay of the upstream population's firing rate after its integration. Applying the matching conditions, we obtain equations for the coefficients, c_i,

\[
I_0(0) = I_1(T_0) \;\Rightarrow\; c_0 = e^{-T_0} \left[ S c_0 T_0 + c_1 \right]
\]

and, in general,

\[
I_{j-1}(0) = I_j(T_0) \;\Rightarrow\; c_{j-1} = e^{-T_0} \sum_{i=0}^{j} c_i \frac{(S T_0)^{j-i}}{(j-i)!} .
\]

The matching conditions on the final segment give

\[
\begin{aligned}
I_n(0) = I_{n+1}(T_0) &\;\Rightarrow\; c_n = e^{-T_0} \sum_{i=0}^{n} c_i \frac{S^{n-i+1} \left( T_0^{n-i+1} - T_1^{n-i+1} \right)}{(n-i+1)!} \\
I_{n+1}(T_1) = 0 &\;\Rightarrow\; c_{n+1} = 0 .
\end{aligned}
\]
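Before translating these relations into matrix form, here is a brief symbolic check (ours, using sympy) that applying G, with integration constant c_{j+1}, to the general polynomial form of I_j above reproduces I_{j+1}.

```python
# Symbolic sketch of the induction step: applying
#   G[I](t) = S e^{-t} [ \int_0^t e^s I(s) ds + c_{j+1} ]
# to I_j(t) = S e^{-t} sum_i c_i (S t)^{j-i} / (j-i)!  yields I_{j+1}(t).
import sympy as sp

t, s, S = sp.symbols('t s S', positive=True)
c = sp.symbols('c0:5')          # coefficients c_0, ..., c_4

def I(j, x):
    """Interval solution I_j evaluated at x."""
    return S * sp.exp(-x) * sum(c[i] * (S * x)**(j - i) / sp.factorial(j - i)
                                for i in range(j + 1))

for j in range(3):
    G_of_Ij = S * sp.exp(-t) * (sp.integrate(sp.exp(s) * I(j, s), (s, 0, t)) + c[j + 1])
    print(j, sp.simplify(G_of_Ij - I(j + 1, t)) == 0)   # True for each j
```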
Translating to matrix notation, we define

\[
M =
\begin{pmatrix}
e^{-T_0} S T_0 - 1 & e^{-T_0} & 0 & \cdots & 0 \\
e^{-T_0} S^2 \frac{T_0^2}{2} & e^{-T_0} S T_0 - 1 & e^{-T_0} & \cdots & 0 \\
e^{-T_0} S^3 \frac{T_0^3}{6} & e^{-T_0} S^2 \frac{T_0^2}{2} & e^{-T_0} S T_0 - 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
e^{-T_0} S^n \frac{T_0^n}{n!} & e^{-T_0} S^{n-1} \frac{T_0^{n-1}}{(n-1)!} & e^{-T_0} S^{n-2} \frac{T_0^{n-2}}{(n-2)!} & \cdots & e^{-T_0} \\
e^{-T_0} S^{n+1} \frac{T_0^{n+1} - T_1^{n+1}}{(n+1)!} & e^{-T_0} S^{n} \frac{T_0^{n} - T_1^{n}}{n!} & e^{-T_0} S^{n-1} \frac{T_0^{n-1} - T_1^{n-1}}{(n-1)!} & \cdots & e^{-T_0} S (T_0 - T_1) - 1
\end{pmatrix}
\]
and c = [c_0\; c_1\; \cdots\; c_n]^T. We then solve det(M) = 0 to determine the values, S_i, for which a solution exists. In general, the determinantal equation is an (n+1)-st order polynomial in S. Solution vectors may be found by solving M(S_i)\,c = 0. For n < 16 (as far as we have checked), we find that all {S_i} are positive and real, and only the smallest eigenvalue, S_min, corresponds to a solution vector with all positive coefficients, c_i. For this nonlinear eigenvalue problem, nonlinear Perron-Frobenius theory applies [1, 2]. In particular, for concave self-mappings, T, of the standard cone K into itself, the nonlinear eigenvalue problem has a unique positive eigenvector with positive eigenvalue. There is, therefore, just one graded solution for a given T.
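As an illustration of this procedure, the sketch below builds M(S) numerically from the entries displayed above, locates the smallest positive root of det M(S) = 0 by a simple scan-and-bisect, and checks that the corresponding null vector has coefficients of uniform sign. The values of n, T_0, T_1, the scan range, and the root-finding strategy are our own illustrative choices, not taken from the paper.

```python
import numpy as np
from math import factorial, exp
from scipy.optimize import brentq

def build_M(S, n, T0, T1):
    """(n+1)x(n+1) matrix M(S) whose null space gives the coefficients c_i."""
    M = np.zeros((n + 1, n + 1))
    for j in range(1, n + 1):          # rows from c_{j-1} = e^{-T0} sum_i c_i (S T0)^{j-i}/(j-i)!
        for i in range(j + 1):
            M[j - 1, i] = exp(-T0) * (S * T0)**(j - i) / factorial(j - i)
        M[j - 1, j - 1] -= 1.0
    for i in range(n + 1):             # last row from the final-segment matching condition
        k = n - i + 1
        M[n, i] = exp(-T0) * S**k * (T0**k - T1**k) / factorial(k)
    M[n, n] -= 1.0
    return M

# Hypothetical parameters for illustration
n, T0, T1 = 4, 1.0, 0.3
f = lambda S: np.linalg.det(build_M(S, n, T0, T1))

# Bracket and refine the smallest positive root of det M(S) = 0
# (widen the scan range if no sign change is found for other parameters)
Ss = np.linspace(0.01, 20.0, 4000)
vals = np.array([f(S) for S in Ss])
idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0][0]
S_min = brentq(f, Ss[idx], Ss[idx + 1])

# Null vector of M(S_min): right singular vector of the smallest singular value
U, sing, Vt = np.linalg.svd(build_M(S_min, n, T0, T1))
cvec = Vt[-1]
print("S_min =", S_min, " coefficients of uniform sign:",
      bool(np.all(cvec > 0) or np.all(cvec < 0)))
```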
General Expression for Determinant of M

The matrix M is a lower Hessenberg matrix, with a single superdiagonal and a fully populated lower triangle. A general recursive expression for the determinant may be derived in the standard way, i.e., by using row operations (of determinant 1) to eliminate the superdiagonal and reduce M to lower triangular form; the determinant is then the product of the resulting diagonal values, \prod_i \lambda_i. We will work with the rescaled matrix M' \equiv e^{T_0} M, for which \det M = e^{-(n+1)T_0} \det M', so that \det M and \det M' vanish for the same values of S. With a_1 = S T_0 - e^{T_0}, a_j = \frac{(S T_0)^j}{j!} for j > 1, and b_j = \frac{(S T_1)^j}{j!} for j \geq 1, we have
\[
M' =
\begin{pmatrix}
a_1 & 1 & 0 & \cdots & 0 \\
a_2 & a_1 & 1 & \cdots & 0 \\
a_3 & a_2 & a_1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_n & a_{n-1} & a_{n-2} & \cdots & 1 \\
a_{n+1} - b_{n+1} & a_n - b_n & a_{n-1} - b_{n-1} & \cdots & a_1 - b_1
\end{pmatrix} .
\]
The superdiagonal 1s are now eliminated row by row, starting from the last row, giving the diagonal terms

\[
\begin{aligned}
\lambda_1 &= a_1 - b_1 \\
\lambda_2 &= a_1 - \frac{a_2 - b_2}{\lambda_1} \\
\lambda_3 &= a_1 - \frac{a_2 - \frac{a_3 - b_3}{\lambda_1}}{\lambda_2} \\
&\;\vdots
\end{aligned}
\]

After dividing through by a common denominator, we find

\[
\lambda_j = \frac{1}{\prod_{m=1}^{j-1} \lambda_m} \left[ \sum_{k=1}^{j} (-1)^{k-1} a_k \prod_{p=1}^{j-k} \lambda_p + (-1)^{j} b_j \right] .
\]

This gives a recursion relation for the determinant, d_n \equiv \det M'_n, where M'_n denotes the n \times n lower-right-hand submatrix of M':

\[
d_n \equiv \prod_{m=1}^{n} \lambda_m = \sum_{k=1}^{n} (-1)^{k-1} a_k\, d_{n-k} + (-1)^{n} b_n ,
\]

with d_0 \equiv 1.
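A small numerical sketch (ours; the values of S, T_0, T_1 below are arbitrary illustrations) that builds M'_m explicitly and confirms the recursion:

```python
import numpy as np
from math import factorial, exp

def a(k, S, T0):
    return S * T0 - exp(T0) if k == 1 else (S * T0)**k / factorial(k)

def b(k, S, T1):
    return (S * T1)**k / factorial(k)

def d_recursive(n, S, T0, T1):
    """d_m = det M'_m via the recursion, with d_0 = 1."""
    d = [1.0]
    for m in range(1, n + 1):
        d.append(sum((-1)**(k - 1) * a(k, S, T0) * d[m - k] for k in range(1, m + 1))
                 + (-1)**m * b(m, S, T1))
    return d

def Mprime_sub(m, S, T0, T1):
    """m x m lower-right submatrix of M', built entry by entry."""
    A = np.zeros((m, m))
    for r in range(m - 1):
        for col in range(r + 1):
            A[r, col] = a(r - col + 1, S, T0)
        A[r, r + 1] = 1.0
    for col in range(m):
        A[m - 1, col] = a(m - col, S, T0) - b(m - col, S, T1)
    return A

S, T0, T1, n = 2.0, 1.0, 0.3, 6      # hypothetical parameter values
d = d_recursive(n, S, T0, T1)
for m in range(1, n + 1):
    print(m, np.isclose(d[m], np.linalg.det(Mprime_sub(m, S, T0, T1))))
```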
From an investigation of the low-order values of this recursion, we conjecture that

\[
e^{-n T_0}\, d_n = \sum_{j=0}^{n} \frac{(-1)^{j} \left[ \left( (j+1) T_0 - T_1 \right) S e^{-T_0} \right]^{n-j}}{(n-j)!} , \qquad (1)
\]

where the left-hand side is the determinant of the corresponding n \times n lower-right submatrix of M itself; we have checked this for n < 16. This latter expression, (1), is faster to compute than the recursion relation that precedes it.
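As a quick hand check of (1) (ours), the n = 1 case reads

\[
e^{-T_0} d_1 = e^{-T_0}\left(a_1 - b_1\right) = e^{-T_0}\left(S T_0 - e^{T_0} - S T_1\right) = \left(T_0 - T_1\right) S e^{-T_0} - 1 ,
\]

which coincides with the j = 0 and j = 1 terms on the right-hand side of (1) for n = 1.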
Appendix 2: Network Parameters for Neural Circuit in Figure 5

Below we provide the network parameters for the simulation presented in Fig. 5. Population sizes are N_1 = 1000 and N_2 = 100.

Coupling strengths and probabilities are S^{11} = 2 with K^{11}_{jk} = 1 for the memory and Hadamard Copy populations; K^{11}_{jk} = 0.2 for the first layer and K^{11}_{jk} = ±0.5 for other layers in the Hadamard population; and K^{11}_{jk} = 1.3 for the connectivity from the Hadamard Copy to the Shutdown population. The coupling probability is p^{11} = 0.01.

S^{22} = 2.7 with K^{22}_{jk} = 1 for all gating populations (Compute, Vigilance and Output Copy), and p^{22} = 0.8. There is a time delay between layers in the gating population, with t_delay = 1 ms.

For the Trigger population, we used the same parameters as in the gating populations (N = 100, S = 2.7, p = 0.8, ν = 25 Hz, f = 0.2, t_delay = 0, no self-inhibition). The Trigger population is inhibited by the first gating population with S^{22} = 2.7, K^{22}_{jk} = −3.7.

S^{12} = 0.37, with K^{12}_{jk} = 1 for the connectivity between the gating chain and the graded chain, and p^{12} = 0.01.

For the logic populations, S^{21} = 2, K^{21}_{jk} = 1 and p^{21} = 1. The Logic - Trigger population receives inputs from both the Trigger and the Input. The Logic - Conditional Output population receives inputs from the 8th Hadamard population and the first Vigilance population. For the Shutdown population, S^{21} = 3.4, K^{21}_{jk} = −2, and p^{21} = 1. Note that the Shutdown population inhibits all gating populations and the Input population.

There is self-inhibition for all neurons, with S^{11} = 2, K^{11}_{jk} = −0.6 and S^{22} = 2.7, K^{22}_{jk} = −0.5. The background Poisson inputs are ν^1 = 25 Hz, f^1 = 0.2 and ν^2 = 25 Hz, f^2 = 0.2. For all neurons, the refractory period is 2 ms.
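For convenience, the sketch below collects the values listed above into a single machine-readable summary; the dictionary layout and key names are our own and do not come from the original simulation code.

```python
# Parameter summary for the Fig. 5 circuit (values transcribed from this appendix;
# the dictionary structure and key names are illustrative only).
network_params = {
    "population_sizes": {"N1": 1000, "N2": 100},
    "S11": {"S": 2.0, "p": 0.01,
            "K": {"memory_and_hadamard_copy": 1.0, "hadamard_first_layer": 0.2,
                  "hadamard_other_layers": (-0.5, 0.5), "hadamard_copy_to_shutdown": 1.3},
            "self_inhibition_K": -0.6},
    "S22": {"S": 2.7, "p": 0.8, "K": 1.0, "t_delay_ms": 1.0,
            "first_gating_to_trigger_K": -3.7, "self_inhibition_K": -0.5},
    "S12": {"S": 0.37, "K": 1.0, "p": 0.01},       # gating chain <-> graded chain
    "S21": {"logic": {"S": 2.0, "K": 1.0, "p": 1.0},
            "shutdown": {"S": 3.4, "K": -2.0, "p": 1.0}},
    "trigger": {"N": 100, "S": 2.7, "p": 0.8, "nu_Hz": 25.0, "f": 0.2,
                "t_delay_ms": 0.0, "self_inhibition": False},
    "background_poisson": {"nu1_Hz": 25.0, "f1": 0.2, "nu2_Hz": 25.0, "f2": 0.2},
    "refractory_period_ms": 2.0,
}
```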
References

1. Krause U. Perron's stability theorem for non-linear mappings. J Math Econ. 1986;15:275–282.
2. Lemmens B, Nussbaum R. Nonlinear Perron-Frobenius Theory. Cambridge, UK: Cambridge University Press; 2012.