ARTICLE

Communicated by Peter Latham

Background Synaptic Activity as a Switch Between Dynamical States in a Network

Emilio Salinas
[email protected]
Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, NC 27157-1010, U.S.A.

A bright red light may trigger a sudden motor action in a driver crossing an intersection: stepping at once on the brakes. The same red light, however, may be entirely inconsequential if it appears, say, inside a movie theater. Clearly, context determines whether a particular stimulus will trigger a motor response, but what is the neural correlate of this? How does the nervous system enable or disable whole networks so that they are responsive or not to a given sensory signal? Using theoretical models and computer simulations, I show that networks of neurons have a built-in capacity to switch between two types of dynamic state: one in which activity is low and approximately equal for all units, and another in which different activity distributions are possible and may even change dynamically. This property allows whole circuits to be turned on or off by weak, unstructured inputs. These results are illustrated using networks of integrate-and-fire neurons with diverse architectures. In agreement with the analytic calculations, a uniform background input may determine whether a random network has one or two stable firing levels; it may give rise to randomly alternating firing episodes in a circuit with reciprocal inhibition; and it may regulate the capacity of a center-surround circuit to produce either self-sustained activity or traveling waves. Thus, the functional properties of a network may be drastically modified by a simple, weak signal. This mechanism works as long as the network is able to exhibit stable firing states, or attractors.

1 Introduction

The mammalian brain performs remarkable computational tasks, such as recognizing complex stimuli of diverse modalities, remembering sensory data in a variety of formats, and generating complex motor sequences (Churchland & Sejnowski, 1992). However, the underlying circuits are not always activated.
Neural Computation 15, 1439–1475 (2003). © 2003 Massachusetts Institute of Technology

When we need to remember a phone number announced on the radio, some neural process in the brain enables our short-term memory; without the context information indicating that it is important to store it, the same number vanishes promptly from the mind. In the motor realm, this is a classic problem that neurophysiologists have studied using go versus no-go paradigms (see, e.g., Schultz & Romo, 1992; Lauwereyns et al., 2001; Sommer & Wurtz, 2001). In these tasks, a subject performs one of several possible motor actions in response to a presented stimulus, but the movement is contingent on a separate cue giving either a go or a no-go instruction. Neural activity evoked during these tasks has been extensively documented, yet little is known in mechanistic terms about how the go signal gates the interaction between the sensory information, which remains fixed, and the motor apparatus. This appears to be a switching problem, or a traffic control problem. Single neurons recorded from behaving monkeys may reflect this switching by displaying quick, dramatic changes in their properties. For instance, during a task in which a stimulus may trigger either a saccade or an antisaccade (a saccade away from the stimulus), some neurons in area LIP fire as if their visual receptive fields were being remapped—that is, as if their sensory receptive fields moved as functions of task contingencies (Zhang & Barash, 2000; see also Duhamel, Colby, & Goldberg, 1992). Another study demonstrated that LIP neurons show color selectivity only when color is a relevant cue that guides visuomotor behavior; otherwise, this feature is ignored (Toth & Assad, 2002). Clearly, the defining characteristics of a neuron (e.g., receptive field location, selectivity) may change radically and fast. The problem of determining how these switching processes occur is complicated by the fact that they probably proceed according to a subtle internal schedule. This is suggested by experiments with intracortical microstimulation applied during performance of a motion discrimination task (Seidemann, Zohary, & Newsome, 1998).
The neural activity evoked by artificial stimulation, which would normally have a strong impact on the perceptual decision, has no effect when it is produced slightly early or slightly late relative to the normal sequence of events in the task, as if the communication between sensory and motor networks were established only during a certain time window. Hence, communication channels between neurons also seem to open and close fast. The types of switching processes just discussed must be extremely common (Duhamel et al., 1992; Weimann & Marder, 1994; Linden, Grunewald, & Andersen, 1999; Zhang & Barash, 2000; Terman, Rubin, Yew, & Wilson, 2002; Toth & Assad, 2002), yet they are mostly unknown. Here I study a mechanism that may underlie some of the dramatic functional changes exhibited by neurons. I show that under very general conditions, neural networks are highly sensitive to the baseline level of excitatory drive, which, in contrast to sensory-triggered activity, may be weak and uniform across a population. Small changes in this background input level may shift a network from a relatively quiet state to some other state with highly complex dynamics. Thus, the background may act as a switch that allows networks to be turned on or off regardless of their function. This phenomenon is illustrated with a variety of network architectures using both a relatively abstract theoretical


model and more realistic computer simulations based on integrate-and-fire neurons.

2 Theoretical Methods: The Network Equations

For the analysis, I consider the response of a neuron as the mean rate at which it produces action potentials, or spikes. This response is described as a function of the activity of other neurons that provide synaptic input. Specifically, the firing rate ri of neuron i is determined by

\tau \frac{dr_i}{dt} = -r_i + \frac{\left( \sum_j w_{ij} r_j + h_i \right)^2}{s + v \sum_j r_j^2},   (2.1)

where τ is a time constant, s is a saturation constant, and hi represents an excitatory external input whose value is independent of the network’s activity. This is the background component. Here, a neighboring neuron j can either directly drive neuron i to fire through synaptic connection wij , or decrease its gain through a synaptic connection of strength v. For simplicity, all gain interactions have equal strength, but this need not be the case (see appendix D). For simplicity also, there is a single population of neurons rather than separate populations of excitatory and inhibitory units. Notice that all quantities, including the synaptic weights, are always positive or zero. The above equations are similar to so-called divisive normalization models, in which the response of a neuron is divided by a factor related to the activity of others (Wilson & Humanski, 1993; Carandini & Heeger, 1994; Carandini, Heeger, & Movshon, 1997; Simoncelli & Heeger, 1998; Wilson, 1998; Schwartz & Simoncelli, 2001). This formulation may be interpreted as a simplified model in which inhibition acts in a purely divisive way rather than in the more traditional subtractive way (Hertz, Krogh, & Palmer, 1991; Dayan & Abbott, 2001). This form of gain control provides a good empirical description of a variety of response nonlinearities observed in visual cortical neurons (Wilson & Humanski, 1993; Carandini & Heeger, 1994; Carandini et al., 1997; Simoncelli & Heeger, 1998; Wilson, 1998; Schwartz & Simoncelli, 2001). It is also consistent with numerous experiments showing that the responses of many neurons can be described as a multiplication of two terms that depend on different types of input signals (Salinas & Thier, 2000; Salinas & Abbott, 2001; Salinas & Sejnowski, 2001). This mechanism is also appealing from a theoretical point of view because it gives rise to more efficient, less redundant neural codes (Deneve, Latham, & Pouget, 1999; Brady & Field, 2000; Schwartz & Simoncelli, 2001). 
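The dynamics defined by equation 2.1 are straightforward to integrate numerically. The Python sketch below is illustrative, not the paper's code: the network size and the uniform weight matrix are arbitrary choices, while the constants τ = 1, s = 40, h = 7.9, and vN = 0.0136 are the values quoted later in the caption of Figure 1.

```python
import numpy as np

def simulate_rate_network(w, h, v=0.0136 / 50, s=40.0, tau=1.0,
                          dt=0.01, steps=5000, r0=None):
    """Euler integration of equation 2.1:
    tau * dr_i/dt = -r_i + (sum_j w_ij r_j + h_i)^2 / (s + v * sum_j r_j^2)."""
    n = w.shape[0]
    r = np.zeros(n) if r0 is None else np.array(r0, dtype=float)
    for _ in range(steps):
        drive = (w @ r + h) ** 2          # squared excitatory drive
        gain = s + v * np.sum(r ** 2)     # divisive gain term
        r = r + (dt / tau) * (-r + drive / gain)
    return r

# Illustrative network: 50 neurons, uniform weights with w_tot = 1.1 per cell
n = 50
w = np.full((n, n), 1.1 / n)
h = np.full(n, 7.9)
r_low = simulate_rate_network(w, h)                         # from rest
r_high = simulate_rate_network(w, h, r0=np.full(n, 100.0))  # from a high rate
```

With these parameter values the network is bistable (this regime is analyzed in section 4): started from rest, the rates settle at a low uniform value of a few spikes per second, whereas a high initial condition relaxes to a distinct high-rate solution.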
For some of the theoretical derivations that follow, it is useful to rewrite equation 2.1 in an essentially equivalent form that involves continuous functions rather than vectors and matrices. The key is to consider a relatively


large number of neurons, so that sums over cells can be replaced by integrals (Dayan & Abbott, 2001). In this case, the vector of firing rates turns into a function r(a), where a acts as the neuron's index or label; similarly, the connection matrix turns into a function w(a, b) of two variables. The continuous version of the network equations is thus

\tau \frac{dr(a)}{dt} = -r(a) + \frac{\left( \rho \int w(a, b) r(b)\, db + h(a) \right)^2}{s + v \rho \int r^2(b)\, db},   (2.2)

where ρ is the density of neurons. This factor ensures that for any quantity x, the sum over neurons \sum_j x_j is equal to the integral \rho \int x(b)\, db. Equations 2.1 and 2.2 will be used interchangeably, as they are basically identical. As mentioned above, in these expressions, w and v correspond to excitatory and inhibitory connections, respectively, and h is the background input. In the rest of the article, I explore the dynamics of this model for various connection patterns w. I show that these equations typically have several stable solutions that coexist and that small changes in h can switch the network from one to another. The results are compared to the behavior of networks of spiking neurons with similar connectivities.

3 Simulation Methods

3.1 Spiking Network Implementation. All spiking networks consist of interconnected integrate-and-fire neurons (Dayan & Abbott, 2001; Troyer & Miller, 1997; Salinas & Sejnowski, 2000). There are NE excitatory and NI inhibitory units, and in all cases NE/NI = 4. In these models, the voltage of neuron i evolves according to

\tau_m \frac{dV_i}{dt} = -V_i - \sum_j^{N_E} g^E_{ij}(V_i - V_E) - \sum_j^{N_I} g^I_{ij}(V_i - V_I) - g^{BE}_i(V_i - V_E) - g^{BI}_i(V_i - V_I),   (3.1)

where τm is the membrane time constant of the cell. The resting potential is set at 0 mV, so all voltages are relative to rest. All conductances are measured in units of the leak conductance (leak conductance equals 1). The first term on the right is the leak current, which is simply Vi in these units. The second and third terms are the total currents contributed by excitatory and inhibitory cells in the network, respectively. The last two terms represent the background input, which has excitatory and inhibitory components too. Neuron i produces a spike when Vi exceeds a threshold Vθ . After this, Vi is reset to an initial value Vreset and, after an absolute refractory period tref , the integration process continues according to equation 3.1. For excitatory neurons, τm = 20, tref = 1.8 ms. For inhibitory neurons, τm = 12, tref = 1.2 ms. For all neurons, VE = 74, VI = −10, Vθ = 20, Vreset = 10 mV.
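As a concrete illustration of equation 3.1, the following Python sketch integrates the membrane equation for a single excitatory model neuron driven only by constant background conductances, with no recurrent input. The cellular constants follow section 3.1; the drive values g_be and g_bi are illustrative assumptions, not a parameter set from the paper.

```python
import numpy as np

def lif_spike_times(g_be, g_bi, t_max=500.0, dt=0.1):
    """Euler integration of equation 3.1 for one excitatory neuron with
    constant background conductances. Voltages are in mV relative to rest;
    conductances are in units of the leak conductance; times are in ms."""
    tau_m, t_ref = 20.0, 1.8
    V_E, V_I, V_th, V_reset = 74.0, -10.0, 20.0, 10.0
    V, t_last, spikes = 0.0, -1e9, []
    for step in range(int(t_max / dt)):
        t = step * dt
        if t - t_last < t_ref:
            continue                      # absolute refractory period
        dV = (-V - g_be * (V - V_E) - g_bi * (V - V_I)) / tau_m
        V += dt * dV
        if V >= V_th:                     # threshold crossing: spike and reset
            spikes.append(t)
            V, t_last = V_reset, t
    return np.array(spikes)

# Illustrative constant drive (assumed values)
spikes = lif_spike_times(g_be=0.5, g_bi=0.2)
rate = len(spikes) / (500.0 / 1000.0)     # spikes per second
```

With this drive the effective equilibrium potential sits just above threshold, so the neuron fires regularly; a weaker excitatory conductance leaves it subthreshold and silent.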


In the above expression, gEij is the conductance at the synapse from excitatory neuron j to neuron i. This conductance increases instantaneously when neuron j produces a spike and decreases exponentially thereafter. The update rule following a spike is gEij → gEij + WijE. Here, the size of the increase is the connection strength, which depends on the connectivity profile—equations 7.1 or 8.1 below, for example. A similar update rule applies to inhibitory synapses. Exponential decay of all recurrent excitatory and inhibitory conductances occurs with synaptic time constants τE and τI, respectively. Self-connections are eliminated in all simulations. The conductance gBEi is determined by the total rate of background input spikes rBE, the synaptic strength of background synapses WBE, and their corresponding time constant τBE. Poisson spike trains can be used to drive these conductances using the same procedure just described for recurrent synapses. However, since the parameters are constant, a series of random numbers may be combined instead, such that gBEi as a function of time has the same statistics as the conductance generated by a barrage of input spikes. These statistics are the mean,

\mu_{BE} = r_{BE} \tau_{BE} W_{BE},   (3.2)

the variance,

\sigma_{BE}^2 = \frac{r_{BE} \tau_{BE} W_{BE}^2}{2},   (3.3)

and the correlation time, which is equal to τBE. Thus, the recipe to update the background conductances at each time step is (Salinas & Sejnowski, 2002)

g^{BE}_i \rightarrow g^{BE}_i\,\epsilon + \mu_{BE}(1 - \epsilon) + \sigma_{BE}\sqrt{1 - \epsilon^2}\,\gamma,   (3.4)

where γ is a gaussian random number with zero mean and unit variance, \epsilon \equiv \exp(-\Delta t/\tau_{BE}), and Δt is the integration time step. This procedure is more efficient than generating spikes, and the approximation is excellent whenever rBE τBE > 2. A similar rule is used for gBIi. In general, to avoid bursts of synchronous activity, the synaptic time constants of excitation need to be different from those of inhibition. This can typically be achieved by adjusting either the recurrent parameters τE and τI or the background parameters τBE and τBI. All results can be replicated qualitatively with diverse combinations of background input parameters; for instance, with different balance between excitation and inhibition or different levels of noise. The integration time step is 0.1 ms. Matlab (The Mathworks Inc., Natick, MA) code of the simulations is available from the author upon request.
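The update rule 3.4 is easy to implement and check: the simulated conductance should reproduce the mean and variance of equations 3.2 and 3.3. A Python sketch, with rates converted to spikes per ms so that all times are in ms, using the background parameters listed for excitatory neurons in section 3.2:

```python
import numpy as np

def background_conductance(r_be, tau_be, w_be, dt=0.1, steps=200000, seed=0):
    """Gaussian approximation to the background conductance, equation 3.4:
    g -> g*eps + mu*(1 - eps) + sigma*sqrt(1 - eps^2)*gamma,
    with eps = exp(-dt/tau_be), mu from equation 3.2, sigma from 3.3."""
    rng = np.random.default_rng(seed)
    mu = r_be * tau_be * w_be                       # eq. 3.2
    sigma = np.sqrt(r_be * tau_be * w_be**2 / 2.0)  # eq. 3.3
    eps = np.exp(-dt / tau_be)
    g = np.empty(steps)
    g[0] = mu
    for i in range(1, steps):
        g[i] = (g[i-1] * eps + mu * (1.0 - eps)
                + sigma * np.sqrt(1.0 - eps**2) * rng.standard_normal())
    return g

# Parameters for excitatory neurons in section 3.2:
# r_BE = 6350 spikes/s = 6.35 spikes/ms, tau_BE = 3 ms, W_BE = 0.02
g = background_conductance(r_be=6.35, tau_be=3.0, w_be=0.02)
```

The sample mean and standard deviation of g approach μBE = 0.381 and σBE ≈ 0.062, and successive values decorrelate with time constant τBE, as required.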


3.2 Parameters for Figure 2a. NE = 300. For all neurons, WBE = 0.02, WBI = 0.06, and WijI ∈ [0, 0.002], which means that WijI is a random number distributed uniformly between 0 and 0.002; τBE = 3, τBI = 6, τE = 10, τI = 3 ms. For excitatory neurons, WijE ∈ [0, 0.0024]; rBE = 6350 and rBI = 420 spikes per second. During transient activity, rBE increases by 660 spikes per second. The quantity GEE is the total conductance in excitatory neurons generated by recurrent excitatory synapses (see Figure 2). Here it equals NE times the average value of WijE, which gives 0.36. Similarly, GIE is the total conductance in excitatory neurons generated by recurrent inhibitory synapses and is equal to NI times the average value of WijI, which gives 0.075. In Figures 2c and 2d, GIE = 0.075; in Figures 2e and 2f, GEE = 0.36. The parameters in Figures 2d and 2f are the same as in Figures 2c and 2e, respectively, except that rBE = 6670 spikes per second. For inhibitory neurons, WijE ∈ [0, 0.0023]; rBE = 6200 and rBI = 440 spikes per second.

3.3 Parameters for Figure 4a. NE = 150 in each of two subnetworks. For all neurons, WBE = 0.02, WBI = 0.06; τBE = 3, τBI = 3, τE = 5, τI = 3 ms. For excitatory neurons within a subnetwork, WijE ∈ [0, 0.010], WijI ∈ [0, 0.023]; rBE = 7870 and rBI = 1830 spikes per second; during increased background activity, rBE = 8140 spikes per second. For inhibitory neurons within a subnetwork, WijE ∈ [0, 0.011], WijI ∈ [0, 0.008]; rBE = 7420 and rBI = 1890 spikes per second. In addition, excitatory neurons in one subnetwork are connected to inhibitory neurons in the other, also with strengths in the interval [0, 0.011].

3.4 Parameters for Figures 5 and 6. NE = 800; excitatory and inhibitory neurons are labeled uniformly between −10 and 10, and periodic boundary conditions are used. The label of neuron i is xi. For all neurons, WBE = 0.02, WBI = 0.06; τBE = 3, τBI = 3, τE = 10, τI = 3 ms.
For excitatory neurons, WijE = 0.0053 exp(−0.5(xi − xj − Δ)²/(0.8)²), where Δ = 0 for Figure 5 and Δ = 0.15 for Figure 6, and WijI = 0.019 exp(−0.5(xi − xj)²); rBE = 7770 and rBI = 1830 spikes per second; during increased background activity, rBE = 7880 spikes per second. For inhibitory neurons, WijE = 0.0023[0.3 + 0.7 exp(−0.5(xi − xj)²)], WijI = 0.011 exp(−0.5(xi − xj)²); rBE = 7420 and rBI = 1890 spikes per second. In this configuration, the inhibitory connections are not constant, as is assumed in equation 2.2. This is done to show that uniform inhibition, which was implemented by Compte, Brunel, Goldman-Rakic, and Wang (2000), is not an absolute requirement; localized inhibition leads to the same qualitative results. The inhibitory structure modifies the width and amplitude of the activity profile, but it is still approximately gaussian (see appendix D).


4 Neurons Firing at Equal Rates

A key property of equation 2.2 is that it allows a solution in which all firing rates are equal; refer to this uniform rate as R. If the firing rate is the same for all neurons, the network equations are reduced to a single, nonlinear equation,

\tau \frac{dR}{dt} = -R + \frac{(w_{tot} R + h)^2}{s + vNR^2},   (4.1)

where N is the total number of neurons; this appears because \rho \int db = N. Four conditions are necessary for this uniform solution to be possible. First, the background input should be the same for all units. This is why the term h(a) now appears as a constant h. Second, the quantity

w_{tot} \equiv \rho \int w(a, b)\, db   (4.2)

must also be independent of a. Because w(a, b) corresponds to the synaptic weight from neuron b to neuron a, this means that the total synaptic input to all neurons must be the same. Notice that this is a normalization condition; it does not restrict the distribution of synaptic weights, just the total amount per cell. Third, equation 4.1 must have a stable steady-state solution, so that R tends to a fixed, finite value. To find the steady-state points, set the derivative dR/dt to zero in equation 4.1 and solve for R. For a resulting steady-state point Rss to be stable, the uniform rate must tend to return to it after a small perturbation; the condition for this to happen is

2 w_{tot} (w_{tot} R_{ss} + h) < s + 3vN R_{ss}^2.   (4.3)

The fourth condition is about the stability of the full system, not only regarding equation 4.1. What happens when all rates are near the steady state but are perturbed by independent amounts? For the system to be stable, each of the deviations must decrease with time. There is no general rule to determine the stability of the uniform rate without specifying the connections w. Individual conditions can be calculated analytically for some particular cases. For instance, when all synaptic weights wij are identical, the same expression above also guarantees that small differences between the rates ri and the steady-state Rss will vanish over time (see appendix A). In general, however, the stability of the uniform rate may depend differently on the distribution of synaptic weights, so condition 4.3 is necessary but may not be sufficient. The dynamics of equation 4.1 can be visualized by plotting the derivative dR/dt against R, as in Figure 1a. The derivative becomes zero when the curve crosses the horizontal line; those crossing points are steady-state solutions


Figure 1: Uniform rate solution for a recurrent neural network. All units may fire at the rate R if they have the same background h and total recurrent synaptic input wtot. (a) Time derivative of R versus R. Points that intersect the horizontal line are steady-state values. Negative and positive slopes indicate stable and unstable points, respectively. With wtot = 0.98 (left), there is a single low-rate solution that is stable. With wtot = 1.12 (middle), there are two stable solutions and an unstable one. With wtot = 1.29 (right), there is a single high-rate solution that is stable. (b) Steady-state values of R as functions of wtot. Solid and dashed lines indicate stable and unstable solutions, respectively. Bistability arises at intermediate values of wtot. Background input is h = 7.9 and vN = 0.0136; the same values apply to the panels above. (c) As in b but with h = 14. (d) Steady-state values of R as functions of inhibitory synaptic strength u, where vN = 0.0105(1 + (u + 0.15)²). Here h = 7.9 and wtot = 1.1. (e) As in d but with h = 14. All curves were obtained from equation 4.1 with τ = 1 and s = 40. Firing rate is in spikes per second.


to the equation. Zero points with positive and negative slopes correspond to unstable and stable Rss values, respectively. Depending on the parameters, there may be one or two stable points. Hence, there may be either one or two possible firing levels such that all neurons are activated with the same intensity. The case in which the background input h is zero is special: then the solution in which all rates are equal to zero (Rss = 0) is always possible, and it is always stable (see appendix A). Conversely, for the lower Rss value to be above zero, the background must be above zero as well. This is in agreement with a theoretical study (Latham, Richmond, Nelson, & Nirenberg, 2000) that focused specifically on the conditions under which a neural network exhibits a nonzero firing rate that is stable, low, and approximately uniform. That study found that a minimum amount of intrinsic activity is needed; that is, some neurons must be active regardless of the recurrent synaptic input. In the present model, this corresponds to h > 0. Figure 1b shows the steady-state values of R (Rss) as functions of the total strength of excitatory connections. Stable and unstable points are indicated by continuous and broken lines, respectively. When the neurons are disconnected, they fire at a low rate close to h²/s, which in Figure 1b is 1.6 spikes per second. The uniform rate Rss increases slowly as the recurrent connections become stronger, but at a certain point, two additional solutions appear, one stable and one unstable. In this bistable regime, R can settle at either a low or a high value; which one is reached depends on how the rates in the network are initialized. As the strength of the connections is increased further, the low-rate solution disappears, and only a uniform, high rate is possible; its value is approximately wtot²/(Nv). Figure 1c is similar to Figure 1b; the only difference is that the background input h is higher.
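The steady states plotted in Figure 1 can be computed directly: setting dR/dt = 0 in equation 4.1 yields a cubic in R, and condition 4.3 classifies each root as stable or unstable. A Python sketch using the parameter values quoted in the caption of Figure 1 (the cubic rearrangement is mine):

```python
import numpy as np

def uniform_rate_steady_states(w_tot, h, vN=0.0136, s=40.0):
    """Steady states of equation 4.1. Setting dR/dt = 0 and multiplying
    through by the denominator gives the cubic
        vN*R^3 - w_tot^2*R^2 + (s - 2*w_tot*h)*R - h^2 = 0.
    Each real nonnegative root is classified with condition 4.3:
    stable iff 2*w_tot*(w_tot*R + h) < s + 3*vN*R^2."""
    roots = np.roots([vN, -w_tot**2, s - 2*w_tot*h, -h**2])
    states = []
    for R in roots:
        if abs(R.imag) < 1e-6 and R.real >= 0:
            Rss = float(R.real)
            stable = 2*w_tot*(w_tot*Rss + h) < s + 3*vN*Rss**2
            states.append((Rss, bool(stable)))
    return sorted(states)

# w_tot = 0.98: a single stable low-rate state (Figure 1, left)
# w_tot = 1.12: two stable states separated by an unstable one (middle)
single = uniform_rate_steady_states(0.98, 7.9)
bistable = uniform_rate_steady_states(1.12, 7.9)
```

Each returned pair is a firing rate and its stability flag, sorted from low to high rate, matching the crossing points read off the derivative plots.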
Comparison between the two figures shows that an increase in background can eliminate the low-activity state, switching the network between regimes with one or two uniform activity levels that are stable. Notice that this happens for the full range of connection strengths tested. Figures 1d and 1e show that this background effect is also observed as the strength of the inhibitory connections varies. The generality of these results should be underscored: the existence of a uniform rate applies to any network described by equation 2.1 or 2.2, regardless of its architecture, as long as it satisfies certain constraints. The crucial conditions involve homogeneity; the background input and the total sum of incoming synaptic weights should be the same for all neurons. Shifting the background level affects the existence and stability of these uniform-rate solutions. Now the question is whether these analytic results apply to more realistic networks. This is investigated next.

5 Bistability in Unstructured Spiking Networks

The first specific network architecture to consider is a simple one in which the conditions for a uniform rate are always met. This occurs when the background input and synaptic weights are identical for all units. In this case, equation 2.2 is always reduced to the uniform rate equation, equation 4.1. In other words, in this arrangement, the neurons can do nothing else but fire all at the same rate. An alternative way to generate the same behavior is to use random connections drawn from a given distribution. Strictly speaking, this is not equivalent to identical weights, as explained in appendix A. But based on simulation experiments, the results are typically very similar with either uniform or gaussian weight distributions. Figure 2 illustrates the behavior of a model network composed of integrate-and-fire neurons (Troyer & Miller, 1997; Salinas & Sejnowski, 2000; Dayan & Abbott, 2001) that produce spikes (see section 2). The connectivity was all-to-all and random, but the total synaptic weights were approximately the same for all neurons. The background inputs fluctuated, but averaged over time they were equal too. Figures 2a and 2b demonstrate the bistable regime in which the mean rate can switch between low and high values (see Amit & Brunel, 1997). Initially, all neurons fire at a low rate around 3 spikes per second. All rates increase after a transient excitatory input lasting 100 ms, and activity stays high thereafter. Note that the firing rates are indeed approximately the same for all units. Figures 2c through 2f show the steady-state rate Rss at which the spiking neurons are seen to fire, as a function of the strength of excitatory (see Figures 2c and 2d) or inhibitory (see Figures 2e and 2f) coupling. These figures are analogous to Figures 1b through 1e. To compare them, the total synaptic weight wtot of the rate model was equated to the total conductance due to recurrent excitatory synapses in the spiking model. Similarly, the gain weight Nv was equated to a monotonic function of the total conductance due to recurrent inhibitory synapses (see Figure 1).
Qualitatively, the steady-state firing rates vary similarly in the rate- and spike-based models. The main difference is that in the latter, the discreteness of spikes acts as a source of noise. Fluctuations due to spiking may vary depending on model parameters. In particular, bistability is more robust when bursts of synchronous activity are minimized; otherwise, the network may oscillate or jump randomly between the low and high rates (Brunel, 2000). With this caveat, however, equation 4.1 does capture the underlying dynamics of the more realistic network. As found in the theoretical model, an increase in background activity can eliminate the low-rate steady state. The examples that follow show that a similar switch effect is also observed when a network has additional solutions or behaviors apart from a uniform rate.

6 Rivaling Steady States

In this section, I discuss a network with three possible stable steady states. Here, the background serves as a switch that turns on a different type of dynamics in which two populations fire in random alternation.



Figure 2: Bistability in a spiking network with 300 excitatory and 75 inhibitory model neurons. Connections were unstructured (see section 2). (a) Raster plot showing spike trains from 30 excitatory neurons chosen randomly. Each row corresponds to one neuron and each tick mark to one spike. Time runs along the x-axis, as in the panel below. The horizontal bar indicates a transient increase in background excitatory input. This causes the neurons to switch from a low to a high firing rate, which is sustained. (b) Mean firing rate, averaged over all neurons, as a function of time. Thick and thin curves are for excitatory and inhibitory populations, respectively. (c) Steady-state values of the average firing rate as functions of excitatory strength GEE , equal to the total excitatory conductance in excitatory cells due to recurrent connections. Filled circles show the low and high steady-state rates measured as in a, before and after a transient input. Each point is based on 10 s of simulated sustained activity. Open symbols represent maximum firing rates reached in separate runs in which weaker transient inputs failed to switch the network to the high state. (d) As in c, but with a slightly larger background excitatory input to all excitatory cells. (e) Steady-state values of the average firing rate as functions of inhibitory strength GIE , equal to the total inhibitory conductance in excitatory cells due to recurrent synapses. (f) As in e, but with a slightly larger background excitatory input to all excitatory cells.


Consider two identical, unstructured subnetworks that inhibit each other. In the framework of equation 2.1, this means that they decrease each other's gain. As above, unstructured means with constant or random synaptic weights, so each subnetwork is described by a uniform rate governed by equation 4.1. Thus, there are two populations of N neurons that fire at uniform rates R1 and R2 given by

\tau \frac{dR_1}{dt} = -R_1 + \frac{(w_{tot} R_1 + h_1)^2}{s + vN(R_1^2 + R_2^2)}

\tau \frac{dR_2}{dt} = -R_2 + \frac{(w_{tot} R_2 + h_2)^2}{s + vN(R_1^2 + R_2^2)}.   (6.1)

When h1 = h2 = h, this system also has a uniform rate solution for which R1 = R2 = Rss. However, the condition for stability is now

w_{tot} R_{ss} < h   (6.2)

(see appendix A). If one of the rates, say R2 , is held constant, the dynamics of the other becomes identical to the case discussed in the previous section. This is shown in Figures 3a and 3b, which plot the derivative of R1 versus R1 for four values of R2 . Again, steady-state values correspond to points that intersect the horizontal line. In Figure 3a, the background input h1 is low; regardless of R2 , the steady-state rate reached by R1 is close to 5 spikes per second. In contrast, Figure 3b shows that with a higher background input, the curves move up and R2 is able to shift the steady state of R1 between 3 and 30 spikes per second approximately. The dynamics of the coupled equations can be visualized by plotting the steady-state values in a graph with R1 and R2 as axes (Rinzel & Ermentrout, 1998; Wilson, 1998; Latham et al., 2000; Dayan & Abbott, 2001). Such a plot is shown in Figure 3c, where black and gray lines indicate points at which the derivatives of R1 and R2 , respectively, become zero. The steady state of the coupled system is the intersection between those two lines. The vectors in the plot go from the dots to the tips of the lines. Each component is proportional to the derivative of the corresponding rate. When h1 and h2 are low and equal, all initial conditions lead to the only steady-state value, which is around 5 spikes per second for both rates. Condition 6.2 is satisfied. Figure 3d shows the steady states and vector field when the background inputs h1 and h2 are still equal but stronger. Now there are three intersections—two that correspond to stable steady-state solutions and one that is unstable. The stable solutions are symmetric, so that either R1 is low and R2 is high, or vice versa. The solution with equal rates is unstable and does not satisfy condition 6.2. Notice that the system cannot cross the diagonal R2 = R1 : if it starts with R2 > R1 , it eventually settles at the high R2 solution, and correspondingly for the other point.
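A minimal Python sketch of the coupled system 6.1 shows the same switch. The parameter values follow the caption of Figure 3; the additive noise term, meant to stand in for spiking fluctuations, and its amplitude are my own assumptions.

```python
import numpy as np

def simulate_rivalry(h, w_tot=1.154, vN=0.0295, s=40.0, tau=1.0,
                     noise=0.0, dt=0.01, steps=50000, seed=1):
    """Euler integration of equations 6.1 for two mutually inhibiting
    subnetworks with equal backgrounds h1 = h2 = h. Setting noise > 0 adds
    gaussian fluctuations (an assumption standing in for spiking noise)."""
    rng = np.random.default_rng(seed)
    R = np.array([5.0, 5.1])              # slightly asymmetric start
    trace = np.empty((steps, 2))
    for i in range(steps):
        drive = (w_tot * R + h) ** 2
        gain = s + vN * np.sum(R ** 2)
        R = R + (dt / tau) * (-R + drive / gain)
        if noise > 0:
            R = R + noise * np.sqrt(dt) * rng.standard_normal(2)
        R = np.clip(R, 0.0, None)         # rates cannot be negative
        trace[i] = R
    return trace

sym = simulate_rivalry(h=8.5)    # low background: rates settle together
riv = simulate_rivalry(h=10.15)  # high background: one subnetwork dominates
```

Without noise, the high-background run settles into one of the two asymmetric states of Figure 3d; with noise > 0, sufficiently large fluctuations make the system hop between them, producing the random alternation discussed next.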

Figure 3: Rate description of two unstructured networks that inhibit each other. Their mean rates are R1 and R2 . (a) Time derivative of R1 versus R1 for four values of R2 : 2.5, 11.4, 20, and 30 spikes per second with thicker lines corresponding to higher rates. Here h1 = 8.5; the steady state stays near 5 spikes per second. (b) As in a, but with stronger background input: h1 = 10.15. (c) Steady states of R1 and R2 and the associated vector field. Short lines are vectors with components proportional to dR1 /dt and dR2 /dt; they indicate how R1 and R2 change at each point. Dots correspond to the tails of the vectors. Black and gray lines correspond to points at which the derivatives of R1 and R2 , respectively, equal zero. On the black curve, the vectors point vertically, and on the gray curve, they point horizontally. Intersections are steady-state solutions of the full system. Here h2 = h1 = 8.5; there is a single intersection for which both rates are about 4 spikes per second. (d) Steady states of R1 and R2 for h2 = h1 = 10.15. There are two stable steady states, each with a low and a high rate around 3 and 30 spikes per second, respectively, and an unstable point with equal rates of about 11.4 spikes per second. All points were obtained from equations 6.1 with τ = 1, s = 40, wtot = 1.154, and vN = 0.0295.


E. Salinas

This is as far as the rate description goes. However, in real networks, firing rates fluctuate because spikes are discrete events; they act as a source of noise superimposed on the underlying vector fields. If independent random fluctuations were added to R1 and R2 in Figure 3c, no qualitative change would be observed; the rates would just fluctuate around the steady-state point. In contrast, adding noise to Figure 3d could have a more dramatic consequence: the system would still fluctuate around a steady-state point, but it could also cross the diagonal and switch to the other steady state if the fluctuations were large enough. This is exactly what is observed in simulations in which noise is added to equations 6.1 (not shown). More important, this is also what is observed in more realistic networks of spiking neurons (see Brunel, 2000). Figure 4 illustrates this. The full spiking network consists of two subnetworks. Synaptic connections within each subnetwork are random, as in Figure 2, but across the two groups, there is an interaction: the excitatory neurons of one group contact the inhibitory neurons of the other. Thus, all neurons in a subnetwork should fire at approximately the same rate, with the interaction between groups being reciprocally inhibitory. Figure 4a shows the spike trains from 40 of the 300 excitatory neurons; 20 belong to one group (black) and 20 to the other (gray). Figure 4b shows the average rate for each subnetwork as a function of time. Initially, background inputs are low, and all neurons fire at about 4 spikes per second. At t = 0.5 s all background inputs are increased equally. This switches the system to a different dynamic mode: now the network alternates between two states, with one or the other group firing more intensely. Figure 4c shows that for each subnetwork, the distribution of rate values is indeed bimodal. The plot of R2 versus R1 shows two clusters, as expected from Figure 3d. 
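The noise-driven switching just described can be sketched with the rate equations themselves. The block below again assumes the divisive form of equations 6.1 implied by the Figure 3 parameters; the noise amplitude is a hypothetical choice for illustration, not a value from the paper.

```python
import numpy as np

# Two coupled rates with additive noise; divisive form of equations 6.1
# assumed from the Figure 3 caption. noise_sd is a hypothetical value.
rng = np.random.default_rng(0)
tau, s, wtot, vN, h = 1.0, 40.0, 1.154, 0.0295, 10.15
dt, T, noise_sd = 0.01, 2000.0, 12.0

R = np.array([5.0, 5.0])
dominant = np.empty(int(T / dt), dtype=int)
for step in range(dominant.size):
    denom = s + vN * np.sum(R**2)
    drift = (-R + (wtot * R + h)**2 / denom) / tau
    R = R + dt * drift + np.sqrt(dt) * noise_sd * rng.standard_normal(2)
    R = np.maximum(R, 0.0)          # firing rates cannot be negative
    dominant[step] = int(R[1] > R[0])

# Dominance intervals: maximal runs during which the same unit leads.
switches = np.flatnonzero(np.diff(dominant)) + 1
intervals = np.diff(switches) * dt
print(f"{intervals.size} intervals, mean duration {intervals.mean():.1f} time units")
```

The fluctuations occasionally push the system across the diagonal, producing dominance intervals much longer than the time constant τ; this illustrates the mechanism only, not a quantitative fit to the interval statistics.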
In fact, simulations of equations 6.1 with noise give rise to very similar distributions. Conversely, if parameters are adjusted to reduce variability in the spiking network, the reversals in activity become less frequent or may disappear entirely. This example illustrates how a uniform background can switch a network from one dynamic regime to another. The activity of this network also resembles the behavior observed during binocular rivalry, which is worth commenting on. In rivalry, as in other multistable situations, two mutually exclusive sensory percepts dominate at alternating intervals (Leopold & Logothetis, 1996, 1999; Blake & Logothetis, 2002). A fundamental characteristic of these processes, observed perceptually as well as in single-neuron recordings, is their randomness: the distribution of dominance intervals is always unimodal and asymmetric, with a long tail. Often, it is well fit by a gamma distribution (Lehky, 1995; Leopold & Logothetis, 1996, 1999). The distributions shown in Figure 4d have these properties. Furthermore, in the simulations, successive periods of dominance and suppression are independent; no correlation is found between the length of one period and the next, as has also been reported experimentally (Lehky, 1995; Leopold



Figure 4: Spiking network with random alternations in firing pattern. The full network consisted of two unstructured subnetworks, each with 150 excitatory and 37 inhibitory model neurons connected randomly. In addition, the excitatory neurons from each subnetwork were connected to the inhibitory neurons of the other, producing reciprocal inhibition. (a) Raster plot showing spike trains from 40 of the 300 excitatory neurons—20 from each group. Initially, all neurons fire at about 4 spikes per second. At t = 0.5 s (vertical line), all background excitatory inputs are increased, and remain so. This elicits alternating periods of dominance. (b) Mean firing rates, averaged over neurons within a subpopulation, as functions of time. (c) Black and gray histograms are firing-rate distributions for individual subnetworks. The middle plot shows the joint distribution. Compare to Figure 3d, noting different x-axes. (d) Distributions of activation periods. A subnetwork was considered activated if its average rate was more than 13 spikes per second. The periods of time during which the subnetworks remained activated were measured, and histograms were constructed from the resulting lists of interval lengths. Black and gray histograms had mean interval durations of 532 and 704 ms, respectively. Continuous curves are exponential distributions with the same means as the histograms. Using somewhat different threshold values or using R2 > R1 as a criterion produced almost identical results. Distributions in c and d are based on 960 s of continuous simulation time.


& Logothetis, 1999). Models of rivalry (Lehky, 1988; Kalarickal & Marshall, 2000; Laing & Chow, 2002) reproduce these and other experimental results, but do so by relying on slow processes such as spike rate adaptation or synaptic depression. The underlying mechanism in this network is different because it produces alternations with timescales on the order of seconds without any time constants longer than a few milliseconds. Alternations in perception probably reflect competitive interactions between and within multiple visual areas (Dayan, 1998; Lumer, 1998; Leopold & Logothetis, 1999; Blake & Logothetis, 2002). However, these results show that random alternations in neuronal activity, like those observed experimentally, are not that exceptional and may arise from reciprocal inhibition in very simple networks, even in the absence of intrinsic oscillators or slow processes. And these alternations may be turned on or off by the background.

7 Self-Sustained Activity

Network models have been used extensively to study several types of persistent activity (Wang, 2001), such as neural correlates of working memory in prefrontal cortex (Compte et al., 2000), responses of neurons encoding head direction (Zhang, 1996), and responses of oculomotor cells that stabilize gaze (Seung, Lee, Reis, & Tank, 2000), among others (Wang, 2001). In these recurrent networks, many states or firing rate distributions are possible and are stable over a relatively long timescale. Which state is observed depends on initial conditions or transient inputs that may steer the network from one distribution to another. In this section, I show that the background is also crucial in allowing a network to exhibit such persistent patterns of activity. Models of persistent activity are characterized by so-called line attractors or bump attractors, which refer to continuous collections of steady-state points.
To some extent, such attractors can be studied analytically (Seung, 1996), but this is typically difficult for those that generate unimodal profiles of activity (Ben-Yishai, Bar-Or, & Sompolinsky, 1995; Zhang, 1996; Laing & Chow, 2001). However, using the present theoretical framework, an attractor solution can be calculated explicitly; this is done first. In this case, a center-surround connectivity is used. The excitatory connections w have a gaussian shape with standard deviation σ, such that

\[ w(a, b) = w(a - b) = w_{\max} \exp\left( -\frac{(a - b)^2}{2\sigma^2} \right). \tag{7.1} \]

This organization can be interpreted as excitation between nearby neurons or between similarly tuned neurons. When this connectivity is used and all background inputs h(a) are set to zero, another solution to equation 2.2, apart from the uniform rate, is a gaussian profile of activity that can be

Background Activity as a Network Switch

1455

centered at an arbitrary point x within the array,

\[ r(a; t) = r(a - x; t) = r_{\max}(t) \exp\left( -\frac{(a - x)^2}{2\sigma^2} \right). \tag{7.2} \]
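The self-consistency of this gaussian ansatz rests on a simple numerical fact that can be checked directly: convolving a gaussian profile of width σ with gaussian weights of the same width broadens it to width √2·σ, and squaring the result restores width σ. A sketch with an arbitrary σ:

```python
import numpy as np

sigma = 1.0
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

w = np.exp(-x**2 / (2 * sigma**2))  # gaussian connectivity, width sigma
r = np.exp(-x**2 / (2 * sigma**2))  # gaussian activity profile, same width

conv = np.convolve(w, r, mode="same") * dx  # recurrent drive: a convolution
sq = conv**2                                # squaring, as in the numerator

def width(profile):
    # Standard deviation of a positive profile treated as a density.
    p = profile / (profile.sum() * dx)
    return np.sqrt((p * x**2).sum() * dx)

print(width(conv))  # close to sqrt(2)*sigma: convolution broadens the profile
print(width(sq))    # close to sigma: squaring restores the original width
```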

Note that each value of x corresponds to a different steady-state point and that this solution applies in the absence of tuned or structured input. The virtue of the network is that the peak of activity can be localized at any value of x within a range and that this value is set by initial conditions. In a model of working memory, x corresponds to the quantity being stored. This solution can be verified by direct substitution into the right and left sides of equation 2.2. For the left side, first compute the time derivative of the above expression. The key to this result is that the integral in the numerator of equation 2.2 is a convolution of two gaussians, which produces another gaussian. Although the resulting gaussian is wider than either of the original two, the squaring operation decreases the width, so the gaussian profile ends up with the same standard deviation σ as the connectivity pattern. Substitution into equation 2.2 reduces the network equation to a scalar equation for the amplitude of the gaussian profile,

\[ \tau \frac{dr_{\max}}{dt} = -r_{\max} + \frac{\frac{1}{2}\, w_{tot}^2\, r_{\max}^2}{s + v \rho \sqrt{\pi}\, \sigma\, r_{\max}^2}, \tag{7.3} \]

where, in this case, \( w_{tot} = \rho w_{\max} \sqrt{2\pi}\, \sigma \). Compare this expression to equation 4.1: with zero background input, they give rise to the same dynamics. Therefore, there are three possible scenarios for this solution, depending on the steady-state values that r_max can attain. First, if the connections are too weak, only the zero-rate steady state exists; the amplitude always shrinks to zero; the memory always fades away. Second, when the connections are stronger, r_max is bistable, and the memory can be switched on or off. Third, if the connections are even stronger, a high-amplitude gaussian profile of activity always develops; the memory is never off. Similar calculations can be performed with a constant, nonzero background input (see appendix B). The solution in that case can still be approximated as a gaussian profile of activity centered at an arbitrary point, but the background input has two major consequences: it shifts the activity profile upward and makes it wider; that is, it adds a constant to the right side of equation 7.2 and multiplies σ by a factor larger than 1. These effects are indeed observed in spiking networks (not shown). With this architecture, there are two distinct situations that are interesting in terms of the background component, and both regimes can be seen in simulations in which equation 2.1 is solved numerically. First, the network may be multistable, in the sense that both a uniform, low-rate solution and a gaussian profile of activity are possible for the same level of background


input. Second, the network may have a single stable solution that changes as a function of the background. In this mode, a low background gives rise to only a uniform, low-rate solution (the gaussian profile is unstable), whereas a higher background gives rise to only a gaussian profile (the uniform rate is unstable). In this latter mode the persistent activity may be switched on or off by the background, which can act as a gating or context signal for the memory circuit. These results are in agreement with more realistic simulations. The first mode was demonstrated in a detailed simulation study (Compte et al., 2000) showing that a spiking network can reproduce many of the features of prefrontal responses recorded during working memory tasks. I have also observed such a bistable regime in similar networks. The second mode is illustrated in Figure 5a, which shows spikes from a network with gaussian connectivity. Initially, the network fires at a uniform low rate. At t = 0.4 s, two things happen: a cue is presented, which lasts 100 ms, and all background excitatory inputs are increased by a small amount. The cue evokes a burst of localized activity that persists. At t = 2.5 s, all background inputs are decreased back to their original values, and the persistent activity disappears. Note that if the cue is presented without increasing the background excitation, the evoked activity does not persist; likewise, if the background is simply increased without presenting a cue, a gaussian profile eventually develops at a random position (not shown). The main point here is that the level of background input may act as a gate that controls whether the transient sensory signal will be stored. In effect, this input transforms the circuit's response from sensory to mnemonic. Evidently, this scenario requires an additional mechanism for switching the level of background input and maintaining it throughout a delay.
However, this may be a simpler problem because the background is not specific; on average, it is the same for all excitatory neurons. This would effectively split the problem of storing a quantity into two parts: a circuit indicating whether something needs to be stored, and another, conditional on the first, specifying the actual quantity.

8 Traveling Waves

The networks considered so far have had symmetric connectivity patterns. Asymmetries typically give rise to solutions with richer, nonstationary dynamics, but these also depend on the level of background input. Here is an example. Consider a network with a center-surround connectivity profile that is slightly shifted,

\[ w(a, b) = w(a - b - \Delta) = w_{\max} \exp\left( -\frac{(a - b - \Delta)^2}{2\sigma^2} \right). \tag{8.1} \]



Figure 5: A spiking network with self-sustained activity. The full network consisted of 800 excitatory and 200 inhibitory model neurons connected in a centersurround organization with periodic boundaries (see section 2). (a) Raster plot showing spike trains from 70 excitatory neurons. Initially, all units fire at about 3.5 spikes per second. A cue is presented between t = 0.4 and t = 0.5 s. This is implemented through a localized and transient increase in background excitatory input (bar). In addition, at t = 0.4 s, all background excitatory inputs are uniformly increased, remaining so until t = 2.5 s, when they go back to their original levels. Self-sustained activity disappears if the background input is below a minimum value. The continuous line traces the center of mass of the population activity, calculated using the vector method (Georgopoulos, Schwartz, & Kettner, 1986; Dayan & Abbott, 2001). (b) Average rates of all excitatory neurons. Two sets of data points are shown: rates measured during sustained activity, between t = 1.5 and t = 2.5 s (peaked responses), and rates measured before cue presentation. Gray lines were obtained by smoothing the respective data sets with a gaussian filter. (c) As in b, but for inhibitory neurons.
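The protocol of Figure 5a can be mimicked in the rate model itself. The sketch below uses the discrete network equation implied by appendix A, r_i = (Σ_j w_ij r_j + h)²/(s + v Σ_j r_j²); all parameter values, including the 10 ms time constant, are hypothetical choices for illustration, not those of the spiking simulation.

```python
import numpy as np

# Hypothetical parameters: 100 rate units on a ring, gaussian weights of
# width sigma, and divisive normalization through the sum of squared rates.
N, sigma = 100, 5.0
tau, dt = 0.01, 0.001            # 10 ms time constant, 1 ms steps
s, v, wtot = 20.0, 0.00195, 1.4

i = np.arange(N)
d = np.abs(i[:, None] - i[None, :])
dist = np.minimum(d, N - d)                  # ring (periodic) distances
W = np.exp(-dist**2 / (2 * sigma**2))
W = W * wtot / W.sum(axis=1, keepdims=True)  # every row sums to wtot

cue = 20.0 * np.exp(-dist[50]**2 / (2 * sigma**2))  # localized transient input

def run(h):
    # Settle for 0.1 s, present the cue for 0.1 s, then run 1 s without it.
    r = np.ones(N)
    for step in range(1200):
        t = step * dt
        drive = W @ r + h + (cue if 0.1 <= t < 0.2 else 0.0)
        r = r + (dt / tau) * (-r + drive**2 / (s + v * np.dot(r, r)))
    return r

r_low = run(h=1.0)   # low background: the cue-evoked bump dies out
r_high = run(h=4.0)  # higher background: a localized profile persists
print(r_low.max(), r_high.max())
```

With the low background the evoked bump decays back to a flat, low-rate profile, whereas with the higher background a gaussian-like bump survives the cue, illustrating how a uniform input gates the storage of the transient signal.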


Parameter Δ is the shift, which is small relative to the width σ. This shift introduces an asymmetry: now neuron b excites neuron b + Δ most strongly. This does not substantially alter the shape of the resulting activity profile, but forces it to move continuously from any given position toward a position Δ units away. Now the firing rates are given by

\[ r(a; t) = r(a - x(t)) = r_{\max} \exp\left( -\frac{(a - x(t))^2}{2\sigma^2} \right), \tag{8.2} \]

where

\[ \frac{dx}{dt} = \frac{\Delta}{\tau}. \tag{8.3} \]

This solution is valid when r_max is equal to the steady-state value of equation 7.3. To verify it, again substitute into equation 2.2 assuming h = 0 and using the derivative of equation 8.2 on the left-hand side. In addition, notice that now the result of the convolution integral followed by squaring is a gaussian function of a − x − Δ, which can be expanded in a Taylor series around a − x. The above expressions describe a gaussian profile of activity centered at an arbitrary point x and moving with a constant speed, which is proportional to the connection shift Δ. The amplitude of the profile can still have multiple steady states, as with a stationary profile. Figure 6a illustrates this and the effect of the background. The situation in Figure 6a is identical to that in Figure 5a, except that the connectivity pattern is shifted (see equation 8.1). After the cue, a wave of activity traverses the network at a constant speed. When the background excitation decreases, the wave stops and the network returns to a uniform low rate. Figure 6b shows that propagation speed is indeed proportional to Δ, as predicted by the theory (see also Idiart & Abbott, 1993; Ermentrout, 1998a). Suppose such propagating activity underlies a sequence of motor actions. In this scenario, a strong, localized input such as a sensory-evoked response would be the trigger for the sequence, whereas a weak, uniform signal could act in a context-dependent way to allow or stop the corresponding sequence of neural activations. In terms of a go/no-go paradigm, the background would carry information about the type of trial (go or no-go), while the localized input would reflect the turning on of the sensory cue. Thus, a highly complex chain of events could potentially be controlled by a simple mechanism. These results have another important consequence. The connection shift Δ in this network can be seen as the difference between feedforward and feedback components of the connectivity.
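The proportionality between propagation speed and connection shift can be sketched in the rate model. The block below reuses a hypothetical ring network (discrete form of the network equation from appendix A, with invented parameter values) whose gaussian weights are shifted by Δ, and tracks the bump center by the population-vector method.

```python
import numpy as np

# Hypothetical ring-network parameters; weights shifted by delta (eq. 8.1).
N, sigma, tau, dt = 100, 5.0, 0.01, 0.001
s, v, wtot, h = 20.0, 0.00195, 1.4, 4.0
i = np.arange(N)
ring = np.minimum(np.abs(i - 50), N - np.abs(i - 50))
cue = 20.0 * np.exp(-ring**2 / (2 * sigma**2))  # seeds a bump at unit 50

def wave_speed(delta):
    disp = ((i[:, None] - i[None, :] + N // 2) % N) - N // 2  # signed displacement
    W = np.exp(-(disp - delta)**2 / (2 * sigma**2))  # j excites j + delta most
    W = W * wtot / W.sum(axis=1, keepdims=True)
    r = np.ones(N)
    phases = []
    for step in range(1000):  # 1 s of simulated time
        t = step * dt
        drive = W @ r + h + (cue if 0.1 <= t < 0.2 else 0.0)
        r = r + (dt / tau) * (-r + drive**2 / (s + v * np.dot(r, r)))
        if t >= 0.4:  # track the bump center by the population vector
            phases.append(np.angle(np.sum(r * np.exp(2j * np.pi * i / N))))
    x = np.unwrap(phases) * N / (2 * np.pi)  # center position, label units
    return (x[-1] - x[0]) / (dt * (len(x) - 1))

s1, s2 = wave_speed(0.5), wave_speed(1.0)
print(s1, s2)  # equation 8.3 predicts Delta/tau, i.e. 50 and 100 units/s
```

Doubling Δ should roughly double the measured speed, in line with equation 8.3 and the linear fit in Figure 6b.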
This network, for which the feedback component is large, operates in a nonsynchronous mode (van Rossum, Turrigiano, & Nelson, 2002), whereas in purely feedforward networks, activity can also propagate via synchronous volleys of spikes (Diesmann, Gewaltig,



Figure 6: Traveling wave of activity in a spiking network. The simulation was identical to that in Figure 5, except that synaptic connections between excitatory neurons were shifted by 0.15 label units. For reference, the standard deviation of the gaussian connections was about 1 unit. (a) Spike trains from 70 excitatory neurons. Stable propagation of activity is maintained if the background input is above a minimum value; otherwise, it stops. (b) Propagation speed as a function of connection shift. Five points per shift value are shown. Each point is the result of a different simulation identical to that in a. The gray line is the best linear fit. The connection shift is in label units, and speed is in label units per second.

& Aertsen, 1999; van Rossum et al., 2002). According to the results in this and the previous section, with strong feedback and low synchrony, the stability requirements for the propagation of activity are practically identical to those for stationary activity. In other words, stabilizing a stationary rate distribution and a nonsynchronous traveling wave are very similar problems. Thus, except for the shift itself, the mechanisms needed to establish and maintain the functionality of both types of circuits (Renart, Song, Compte, & Wang, 2001) could be the same.


9 Discussion

I have presented a general mathematical argument indicating that the global properties of a network—in particular, which attractors it may exhibit—may be controlled by one of the simplest possible mechanisms: a uniform, low-amplitude excitatory signal. This idea is consistent with data showing task-dependent changes in background or undriven activity in high-order cortical areas (Sakagami & Niki, 1994; Lauwereyns et al., 2001; Port, Kruse, Lee, & Georgopoulos, 2001). Although it is not surprising that widespread excitation can produce drastic changes in a network, the generality of this effect is not intuitively obvious. The background input can turn a simple, unstructured network into a bistable element; it can induce competition between two networks that are mutually inhibitory; it can turn a selective, transient response into a sustained one when the local network has a center-surround organization; it can gate the propagation of activity in a network with a feedforward component. The main analytic result—how the background controls the stability of the state in which all neurons fire at the same rate—applies to any network regardless of its architecture, as long as it is described by the present mathematical model. A second important finding is that this mechanism could operate through fairly weak changes: in the simulations of Figures 5 and 6, the background excitation varied by about 2% (see section 2). Hence, with neurons receiving a background synaptic drive of 10,000 spikes per second (from, say, 2000 excitatory neurons firing at 5 spikes per second), the switching could be produced by an increase in the total input rate for each neuron of as little as 200 spikes per second.
Although all input changes considered here were implemented through changes in background firing, similar effects could also occur through the action of neuromodulators, which may alter the overall excitability of cells as well as their interactions (Harris-Warrick & Marder, 1991; Gu, 2002). For instance, neurons in the hippocampus are capable of producing various types of rhythmic activity characterized by their dominant frequency bands, and there is evidence that acetylcholine may switch the hippocampal circuitry between different rhythms (Fellous & Sejnowski, 2000; Tiesinga, Fellous, José, & Sejnowski, 2001). Except for the fact that most of these states are oscillatory, the neuromodulator causes uniform changes in excitability that regulate the transition between various dynamical states, consistent with the results presented here. Although fast oscillations have not been studied in detail within the present framework, preliminary simulations of the rate model indicate that the background level can indeed gate the transition between uniform firing and oscillatory activity, as expected from the main result. The existence of multiple attractors in neural networks has been pointed out before using somewhat different mathematical formulations (Wilson & Cowan, 1973; Yuille & Grzywacz, 1989; Ermentrout, 1998b; Latham et al.,


2000). In these studies, inhibition is usually considered subtractive. For all specific examples discussed here, there was good qualitative agreement between theory and simulations. Given the range of behaviors explored, such consistency is somewhat remarkable because the divisive mechanism was originally derived from empirical observations (Wilson & Humanski, 1993; Carandini & Heeger, 1994; Carandini et al., 1997; Simoncelli & Heeger, 1998; Wilson, 1998; Schwartz & Simoncelli, 2001) and its biophysical justification is still uncertain (Holt & Koch, 1997; but see Chance, Abbott, & Reyes, 2002, and José, Tiesinga, Fellous, Salinas, & Sejnowski, 2002). However, several variants of the equations used here give rise to similar results (Wilson, 1998; Salinas, in press; see appendix C), so the general form of equation 2.1 does seem to capture key properties of recurrent neural networks. An important challenge for the future is to derive an expression similar to equation 2.1 starting from a realistic biophysical description, for instance, from the equations determining a single neuron's voltage. The most serious limitation of the model presented here is that the recurrent connections in the simulations cannot be too strong. This is of concern in view of the suggestion (Abeles, 1991) that the cortex might operate in a high-gain regime, in which synapses are strong and a single spike from one neuron can trigger spikes from multiple targets. With the chosen parameters, an excitatory spike in Figure 5 or 6 produces a maximum depolarization of about 0.13 mV in a postsynaptic target at rest (the background input acts as if individual spikes produced 0.24 mV depolarizations at rest, but this can be increased). By making the synapses stronger but sparse, this number can be safely raised by a factor of 2 or 3, and possibly more if the size of the network also increases. This and other ways to generate large postsynaptic potentials were not explored systematically.
Increasing all synaptic conductances (excitatory and inhibitory) eventually leads to oscillations and unstable behavior. This problem is most serious for the bistable circuit in Figure 2 and least so for the network with alternating dominant states, because it relies precisely on such instabilities (see also Brunel & Wang, 2001). This, however, seems to be a general limitation of spiking network models: fine tuning, homogeneity, and stabilizing mechanisms are necessary to obtain stable attractors (Compte et al., 2000; Seung et al., 2000; Renart et al., 2001). For example, NMDA synapses, which saturate and have long timescales, are crucial to stabilize realistic models of persistent activity (Wang, 1999; Compte et al., 2000; Seung et al., 2000). In these models, as in those presented here, variability in interspike intervals is lower than measured from cortical neurons (Holt, Softky, Koch, & Douglas, 1996). It should be pointed out that this is a problem for continuous attractor models where receptive fields overlap and responses are graded; networks with discrete numbers of steady states and nongraded responses may overcome this limitation (Brunel & Wang, 2001). It is possible that the cortex uses other stabilization mechanisms that are not yet understood in order to work at high gain and with high spike train variability, but this merits a separate in-depth investigation.


With this caveat, these results could have profound implications for any functional interpretation of neural network activity. Cortical areas associated with a given function may not always display the expected properties. For instance, an area regarded as involved in working memory may display transient responses typical of sensory cortices, as is often the case in prefrontal cortex and other regions with persistent activity (di Pellegrino & Wise, 1993; Ungerleider, 1995; Ó Scalaidhe, Wilson, & Goldman-Rakic, 1997; Thompson & Schall, 1999). And vice versa, an area with mostly sensory characteristics may sometimes display persistent activity, as has been observed in visual (Miyashita & Chang, 1988; Ungerleider, 1995; Kobatake & Tanaka, 1994; Gibson & Maunsell, 1997) and somatosensory cortices (Zhou & Fuster, 1997; Pruett, Sinclair, & Burton, 2000; Salinas, Hernández, Zainos, & Romo, 2000). Similarly, areas associated with motor control and which may trigger sequences of neural activity (Graziano, Taylor, & Moore, 2002) can also exhibit short-lived sensory responses (Fadiga, Fogassi, & Rizzolatti, 1996; Rizzolatti, Fadiga, Gallese, & Fogassi, 1996; Graziano, Hu, & Gross, 1997a; Zhang, Riehle, Requin, & Kornblum, 1997; Buccino et al., 2001; Port et al., 2001; Hernández, Zainos, & Romo, 2002) and persistent activity (Ungerleider, 1995; Kettner, Marcario, & Port, 1996; Graziano, Hu, & Gross, 1997b). Many of these responses are observed even when no motor action is actually produced. The results just presented suggest that such functional changes may be caused by small differences in uniform, modulatory inputs that switch the dynamics of local networks rather than by large changes in strong feedforward input, which would still beg for an explanation.

Appendix A: Stability of the Uniform-Rate Solution

This is to clarify equation 4.3 and the difference between stability conditions for the uniform-rate equation versus the full network.
From equation 4.1, the uniform rate at steady state R_ss must satisfy

\[ R_{ss} = \frac{(w_{tot} R_{ss} + h)^2}{s + v N R_{ss}^2}. \tag{A.1} \]

To determine if R_ss is a stable point of equation 4.1, set the uniform rate equal to the steady state plus a small perturbation, R → R_ss + δR. Then use the expression above and eliminate quadratic terms in δR to obtain

\[ \tau \frac{d\,\delta R}{dt} = \delta R \left( -1 + \frac{2 w_{tot} (w_{tot} R_{ss} + h) - 2 v N R_{ss}^2}{s + v N R_{ss}^2} \right). \tag{A.2} \]

The requirement for stability is that the coefficient multiplying δR be negative, which leads to condition 4.3.
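Because the steady-state identity A.1 was used to simplify it, the coefficient in equation A.2 is exactly the slope of the uniform-rate dynamics at R_ss, which is easy to check numerically. A sketch, treating vN as a single lumped value and borrowing the parameters quoted for Figure 3:

```python
import numpy as np

def f(R, wtot, h, s, vN):
    # Uniform-rate dynamics (tau = 1): dR/dt = f(R).
    return -R + (wtot * R + h)**2 / (s + vN * R**2)

def coeff(R, wtot, h, s, vN):
    # Coefficient multiplying the perturbation in equation A.2.
    return -1 + (2 * wtot * (wtot * R + h) - 2 * vN * R**2) / (s + vN * R**2)

# Find the steady state by integrating the dynamics.
wtot, h, s, vN = 1.154, 8.5, 40.0, 0.0295
R = 1.0
for _ in range(20000):
    R += 0.01 * f(R, wtot, h, s, vN)

# At the steady state, the A.2 coefficient equals the slope of f,
# so a centered finite difference must reproduce it.
eps = 1e-6
numeric = (f(R + eps, wtot, h, s, vN) - f(R - eps, wtot, h, s, vN)) / (2 * eps)
print(R, coeff(R, wtot, h, s, vN), numeric)
```

Here the coefficient comes out negative, so this steady state (near 5 spikes per second) is stable, in agreement with condition 4.3.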


In terms of the full network, however, the above result is equivalent to varying all rates r_i identically; the procedure is more complicated if each neuron is perturbed by an independent amount. This is as follows. Substitute R_ss + δr_j for r_j in equation 2.1. Again use equation A.1 above and linearize to obtain

\[ \tau \frac{d\,\delta r_i}{dt} = -\delta r_i + \alpha_1 \sum_j w_{ij}\, \delta r_j - \alpha_2 \sum_j \delta r_j, \tag{A.3} \]

with

\[ \alpha_1 \equiv \frac{2 (w_{tot} R_{ss} + h)}{s + v N R_{ss}^2}, \qquad \alpha_2 \equiv \frac{2 v R_{ss}^2}{s + v N R_{ss}^2}. \tag{A.4} \]

Rewriting the equation in matrix form,

\[ \tau \frac{d\,\delta \mathbf{r}}{dt} = L\, \delta \mathbf{r} = (-\mathbf{1} + \alpha_1 w - \alpha_2 U)\, \delta \mathbf{r}, \tag{A.5} \]

where 1 is the unit matrix and U is a matrix with all entries equal to 1. The stability of the system now depends on the matrix L; for all perturbations to decrease with time, the real parts of all eigenvalues must be negative (Hertz et al., 1991; Wilson, 1998; Dayan & Abbott, 2001). Determining the eigenvalues of L is generally difficult, but for some specific cases, the calculation is straightforward. The key is to point out three results. First, U has one eigenvalue equal to N, which corresponds to the eigenvector e = (1, 1, . . . , 1)/√N, and all other eigenvalues equal to 0. Second, the largest eigenvalue of w is equal to w_tot, which also corresponds to e. This results from the normalization condition, Σ_j w_ij = w_tot, and from the fact that all entries w_ij are positive or zero. Third, as a consequence of these two results, the largest eigenvalue of L is either

\[ -1 + \alpha_1 w_{tot} - \alpha_2 N \tag{A.6} \]

or

\[ -1 + \alpha_1 \lambda_2, \tag{A.7} \]

where λ_2 is the second largest eigenvalue of w. The first option results because e is an eigenvector of L, and the second occurs because multiplying any other eigenvector of w by U gives zero.
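These observations can be verified numerically. The sketch below builds a symmetric circulant weight matrix, so that every row sums to w_tot and all non-uniform eigenvectors of w have entries summing to zero, as the argument requires; the values of α1 and α2 are arbitrary positive stand-ins rather than the expressions in A.4.

```python
import numpy as np

rng = np.random.default_rng(1)
N, wtot = 50, 1.2
alpha1, alpha2 = 0.8, 0.03  # arbitrary positive stand-ins for A.4

# Symmetric circulant weights: nonnegative, every row sums to wtot,
# and all non-uniform eigenvectors have entries summing to zero.
c = rng.uniform(size=N)
c = (c + np.roll(c[::-1], 1)) / 2   # enforce c[k] = c[(N - k) % N]
c = c / c.sum() * wtot
w = c[(np.arange(N)[:, None] - np.arange(N)[None, :]) % N]

U = np.ones((N, N))
L = -np.eye(N) + alpha1 * w - alpha2 * U

lam2 = np.linalg.eigvalsh(w)[-2]    # second largest eigenvalue of w
candidate = max(-1 + alpha1 * wtot - alpha2 * N, -1 + alpha1 * lam2)
largest = np.linalg.eigvalsh(L)[-1]
print(largest, candidate)           # the two numbers coincide
```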


Given these observations, consider four simple cases:

1. Suppose w = w₀U. This corresponds to all neurons exciting each other by the same amount w₀, which is the example mentioned in the main text. In this case, w_tot = Nw₀, and the largest eigenvalue of L is equal to −1 + α₁w_tot − α₂N. Imposing that this be smaller than 0 leads to condition 4.3, as claimed in the main text.

2. The situation is practically the same if each neuron excites all others by the same amount but does not excite itself; that is, w = w₀(U − 1). Now the total synaptic weight is w_tot = (N − 1)w₀, but the maximum eigenvalue is still equal to expression A.6. Therefore, the same stability condition applies.

The case of random connections can be viewed as an extension of cases 1 or 2, where w₀ is the mean value of the connection weights (here it is important to recall that negative connections are not allowed). If the variance of the weight distribution is low, the same stability condition should apply. However, high variances may give rise to situations in which the second largest eigenvalue of w is large enough so that expression A.7 should be used. In practice, using a uniform distribution, for instance, gives practically the same results as using constant weights, except that the firing rates are distributed around R_ss.

3. Now consider the opposite extreme, in which each unit excites itself only; that is, w = w₀1. Now w_tot = w₀, but the major difference is that all eigenvalues of w, not just one of them, are equal to w_tot. The maximum eigenvalue of L is thus −1 + α₁w_tot. Imposing that this be smaller than 0 leads to a new condition,

\[ 2 w_{tot} (w_{tot} R_{ss} + h) < s + v N R_{ss}^2. \tag{A.8} \]

This is the same as expression 6.2, because of equation A.1. This connectivity matrix describes a network in which all interactions are mainly inhibitory (via the divisive mechanism). The two-subnetwork model with mutual inhibition, equations 6.1, corresponds to this case with N = 2, so the above expression is the corresponding stability condition. Note that the above inequality implies inequality 4.3, but not vice versa.

4. Finally, consider a simplified center-surround organization where each neuron excites M of its neighbors equally and does not excite itself. For example, for N = 8 neurons and M = 4 neighbors, the


connectivity matrix is

$$w = w_0 \begin{pmatrix}
0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 \\
1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 \\
1 & 1 & 0 & 1 & 1 & 0 & 0 & 0 \\
0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 \\
1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 \\
1 & 1 & 0 & 0 & 0 & 1 & 1 & 0
\end{pmatrix}. \tag{A.9}$$

This corresponds to a ring of neurons with a symmetric, square connectivity profile. Using this square connectivity in equation 2.1 actually results in behaviors that are almost indistinguishable from those obtained with gaussian connections. In this case, wtot = M w0. For any even N, the eigenvalues λ of such matrices are given by

$$\lambda = 2 w_0 \sum_{j=1}^{M/2} \cos\left( \frac{2\pi k j}{N} \right), \tag{A.10}$$

with k = 0, 1, . . . , N/2; the first and largest eigenvalue corresponds to k = 0 and is equal to wtot, as found in general. This expression is interesting because it shows that the eigenvalue spectrum (and thus the stability condition) depends on the size of the excitatory neighborhood, M. Take two opposite examples; for both, suppose that N = 100. If the neighborhood is large, the second largest eigenvalue of w is much smaller than wtot; with M = 90, the above sum gives about 0.09 wtot. In this case, w is similar to U. In contrast, when the neighborhood is small, the second largest eigenvalue of w is very close to wtot; with M = 10, the above sum gives about 0.98 wtot. In this case, w is more similar to the identity matrix 1. Thus, the width of the connections has a large impact on the stability of the system.

Appendix B: Solution for Gaussian Connectivity and Nonzero Background

When the recurrent connectivity is gaussian, as in equation 7.1, and the background activity h is nonzero, the steady-state, nonuniform solution for the recurrent network is approximately given by

$$r(a) = r(a - x) = R_0 + R_1 \exp\left( -\frac{(a-x)^2}{2\sigma_1^2} \right). \tag{B.1}$$
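The derivation that follows relies on the standard fact that the convolution of two gaussians is another gaussian whose variance is the sum of the individual variances. A minimal numerical check of that identity, with arbitrary widths:

```python
import numpy as np

sig_a, sig_b = 1.0, 1.5                       # arbitrary widths
x = np.linspace(-15, 15, 3001)                # grid much wider than either gaussian
dx = x[1] - x[0]

ga = np.exp(-x**2 / (2 * sig_a**2))
gb = np.exp(-x**2 / (2 * sig_b**2))
conv = np.convolve(ga, gb, mode='same') * dx  # numerical convolution

# the variance of the result should equal sig_a^2 + sig_b^2
var = np.sum(x**2 * conv) / np.sum(conv)
assert abs(var - (sig_a**2 + sig_b**2)) < 1e-3
```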


This is still a gaussian profile of activity centered at an arbitrary point x. However, the baseline rate R0, the amplitude of the gaussian R1, and the width σ1 now depend on h, in addition to the other network parameters. To see that this expression is a solution, substitute it into equation 2.2, and do the two integrals. Notice that the one in the numerator is a convolution of two gaussians. Assuming that the integration limits are much larger than σ, the integral results in another gaussian. This assumption simply indicates that the width of the connectivity profile must be considerably smaller than the full length of the neural array. Then define

$$A \equiv R_1 \rho w_{max} \sqrt{2\pi}\, \frac{\sigma \sigma_1}{\sqrt{\sigma^2 + \sigma_1^2}} = R_1 w_{tot} \frac{\sigma_1}{\sqrt{\sigma^2 + \sigma_1^2}} \tag{B.2}$$

$$B \equiv R_0 w_{tot} + h \tag{B.3}$$

$$C \equiv s + v \left( \rho \sqrt{\pi}\, \sigma_1 R_1^2 + 2\rho \sqrt{2\pi}\, \sigma_1 R_0 R_1 + N R_0^2 \right). \tag{B.4}$$

In terms of these quantities, the network equation becomes

$$\tau \frac{dr(a)}{dt} = -R_0 - R_1 \exp\left( -\frac{(a-x)^2}{2\sigma_1^2} \right) + \frac{1}{C} \left[ B + A \exp\left( -\frac{(a-x)^2}{2(\sigma^2 + \sigma_1^2)} \right) \right]^2. \tag{B.5}$$

The next step is the key to the derivation. It relies on the observation that the square of a gaussian plus a constant is not exactly a gaussian, but can nevertheless be approximated quite accurately as such:

$$\left( B + A \exp\left( -\frac{y^2}{2\alpha^2} \right) \right)^2 \approx B^2 + \left( A^2 + 2AB \right) \exp\left( -\frac{y^2}{2\alpha'^2} \right), \tag{B.6}$$

where

$$\alpha' \equiv \frac{\alpha \left( 1 + 2\sqrt{2}\, (B/A) \right)}{\sqrt{2} \left( 1 + 2\, (B/A) \right)}. \tag{B.7}$$
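As a numerical sanity check of this approximation (a sketch; the amplitudes are arbitrary, constrained only by A + B = 1), the maximum absolute error stays at the percent level:

```python
import numpy as np

def approx_error(A, B, alpha=1.0):
    """Max absolute error of the gaussian-squared approximation (eq. B.6)."""
    q = B / A
    # width of the approximating gaussian, eq. B.7
    alpha1 = alpha * (1 + 2 * np.sqrt(2) * q) / (np.sqrt(2) * (1 + 2 * q))
    y = np.linspace(-10, 10, 20001)
    exact = (B + A * np.exp(-y**2 / (2 * alpha**2)))**2
    approx = B**2 + (A**2 + 2 * A * B) * np.exp(-y**2 / (2 * alpha1**2))
    return np.abs(exact - approx).max()

# for several amplitude splits with A + B = 1, the error is on the order of 1%
for A in (0.5, 0.7, 0.9):
    assert approx_error(A, 1 - A) < 0.02
```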

For A + B = 1 the maximum error is less than 0.012, so at worst it is on the order of 1%. Using this approximation in equation B.5, it follows that three conditions are required for dr/dt to be zero:

$$R_0 = \frac{B^2}{C} \tag{B.8}$$

$$R_1 = \frac{A^2 + 2AB}{C} \tag{B.9}$$

$$\sigma_1^2 = \sigma^2\, \frac{\left( 1 + 2\sqrt{2}\, (B/A) \right)^2}{1 + \left( 8 - 4\sqrt{2} \right) (B/A)}. \tag{B.10}$$
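Condition B.10 is exactly the self-consistency requirement that the width α′ produced by approximation B.6, evaluated at α² = σ² + σ1² (the width entering the squared term of equation B.5), equal σ1 itself. A quick numerical verification for a few arbitrary values of B/A:

```python
import numpy as np

def sigma1_sq(q, sigma):
    # eq. B.10: squared width of the activity profile as a function of q = B/A
    return sigma**2 * (1 + 2 * np.sqrt(2) * q)**2 / (1 + (8 - 4 * np.sqrt(2)) * q)

def alpha_prime(alpha, q):
    # eq. B.7: width of the gaussian approximating the squared profile
    return alpha * (1 + 2 * np.sqrt(2) * q) / (np.sqrt(2) * (1 + 2 * q))

sigma = 1.0
for q in (0.1, 0.5, 2.0):                # arbitrary B/A ratios
    s1 = np.sqrt(sigma1_sq(q, sigma))
    alpha = np.sqrt(sigma**2 + s1**2)    # width entering the squared term in B.5
    assert np.isclose(alpha_prime(alpha, q), s1)
```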

Clearly, all three variables, R0, R1, and σ1, are coupled nonlinearly. However, the main characteristics of this solution can be appreciated from the above expressions. First, the presence of a background input means that the baseline rate R0 must be larger than zero: for h > 0, equations B.3 and B.8 imply that R0 > 0. On the other hand, when the background is zero, the original solution, equation 7.2, is recovered: setting B = R0 = 0 makes σ1 = σ and R1 equal to the steady-state value of rmax in equation 7.3.

The second and more interesting consequence of the background input is that it widens the resulting profile of activity. This can be seen from equation B.10, which shows that the width of the activity profile is always larger than the width of the connections. The proportionality factor between them is always larger than 1; it is a monotonic function of B/A, which in turn increases monotonically with R0/R1. To see this, combine equations B.8 and B.9 to obtain

$$\frac{B}{A} = \frac{R_0}{R_1} + \sqrt{\left( \frac{R_0}{R_1} \right)^2 + \frac{R_0}{R_1}}. \tag{B.11}$$

Thus, the larger the baseline rate is relative to the amplitude of the activity profile, the wider the profile will be relative to the spread of the synaptic connections. This effect can be quite significant. For example, for R0 = 3 and R1 = 40, the widening factor is 1.49, almost a 50% increase. Notice that the theoretical model predicts this widening effect without any free parameters, based only on the underlying assumption of a center-surround organization.

When the background is not zero, solving for R0, R1, and σ1 is quite difficult; it is easier simply to simulate the rate-based network. However, there is a simple alternative analytical approach: set R0 and R1, leave h and wmax undetermined, and solve for h and wmax according to constraints B.8 through B.10. This proceeds as follows. (1) Set R0 and R1 and find B/A from equation B.11. (2) Find σ1, C, and B from equations B.10, B.4, and B.8, respectively.
(3) Find A using B and the ratio B/A. (4) Solve for wmax using equation B.2. (5) Solve for h using equation B.3. This procedure matches the results of computer simulations of equation 2.1, confirming that expression B.6 is a reasonable approximation and that equation B.1 is close to the true unimodal activity profile that results.

Appendix C: Variants on the Recurrent Equations

The exact form of the equations used throughout the article is not critical for the results presented because, qualitatively, the same types of dynamics


are obtained with a variety of similar equations (Salinas, in press). This is demonstrated for arbitrary exponents n and m such that

$$\tau \frac{dr_i}{dt} = -r_i + \frac{\left( \sum_j w_{ij} r_j + h_i \right)^n}{s + v \sum_j r_j^m}. \tag{C.1}$$

The uniform rate solution in this case is given by

$$\tau \frac{dR}{dt} = -R + \frac{\left( w_{tot} R + h \right)^n}{s + v N R^m}. \tag{C.2}$$
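The steady states of equation C.2 can be counted by scanning dR/dt for sign changes along R. The sketch below does this for n = m = 2; the parameter values are made up, chosen only to illustrate that a weak background h allows three steady states (the bistable regime), while a strong one leaves a single steady state. This is the switching behavior discussed in the main text.

```python
import numpy as np

def n_steady_states(h, wtot=1.0, s=2.0, vN=0.01, n=2, m=2):
    # count sign changes of dR/dt = -R + (wtot*R + h)**n / (s + vN*R**m)
    R = np.linspace(0.0, 150.0, 1_500_001)
    f = -R + (wtot * R + h)**n / (s + vN * R**m)
    return int(np.sum(np.sign(f[1:]) != np.sign(f[:-1])))

assert n_steady_states(h=0.05) == 3   # weak background: bistable regime
assert n_steady_states(h=2.0) == 1    # strong background: single steady state
```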

In order for all steady-state solutions to remain finite, the derivative dR/dt must be negative when R tends to infinity. By analyzing the limit behavior, it is straightforward to see that this happens for all parameter combinations when m > n − 1. An additional constraint, n > 1, is also applied (see below). With these conditions, the qualitative behavior of this equation is very similar to the case n = m = 2 discussed in the main text: there are at least one and up to three steady-state points. This can be seen by plotting dR/dt as a function of R. The stability of the uniform rate can be analyzed exactly as in appendix A. The resulting expression is the same as equation A.5, except with coefficients that depend on the exponents,

$$\alpha_1 \equiv \frac{n \left( w_{tot} R_{ss} + h \right)^{n-1}}{s + v N R_{ss}^m} \qquad \alpha_2 \equiv \frac{m v R_{ss}^m}{s + v N R_{ss}^m}. \tag{C.3}$$

The same matrix equation is reached because in the calculation, only linear terms in the perturbations are retained, regardless of the exponents.

Equation C.1 also gives rise to a sustained pattern of activity when the connections are gaussian. To see this, first rewrite the equation using integrals, as in equation 2.2, and consider the case h = 0. The nonuniform solution is

$$r(a - x; t) = r_{max}(t) \exp\left( -\frac{(a-x)^2}{2\sigma_r^2} \right), \tag{C.4}$$

which can be verified by substitution into the integral equation. The key is that the convolution integral in the numerator has not changed; again, it produces a wider gaussian, and exponentiation decreases its width. In order for the convolution-exponentiation combination to produce a gaussian with the same width σr as the original gaussian profile of activity, it must happen that

$$\sigma_r^2 = \frac{\sigma^2}{n - 1}. \tag{C.5}$$
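This width relation can be checked directly: convolve a gaussian activity profile of width σr = σ/√(n − 1) with gaussian connections of width σ, raise the result to the power n, and verify that the width returns to σr. A numerical sketch for n = 3:

```python
import numpy as np

n = 3
sigma = 1.0
sigma_r = sigma / np.sqrt(n - 1)             # predicted width, eq. C.5

x = np.linspace(-12, 12, 4801)
dx = x[1] - x[0]
w = np.exp(-x**2 / (2 * sigma**2))           # gaussian connectivity profile
r = np.exp(-x**2 / (2 * sigma_r**2))         # gaussian activity profile

out = (np.convolve(w, r, mode='same') * dx)**n   # convolution, then exponent n

# the width of the result should again be sigma_r
var = np.sum(x**2 * out) / np.sum(out)
assert abs(np.sqrt(var) - sigma_r) < 0.01
```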


This shows that n must be larger than 1, as mentioned above, and that the width of the activity profile is the same as the width of the connectivity when n = 2. The amplitude of the activity profile now evolves according to

$$\tau \frac{dr_{max}}{dt} = -r_{max} + \frac{\left( w_{tot}\, r_{max} / \sqrt{n} \right)^n}{s + v \rho\, \sigma \sqrt{\dfrac{2\pi}{m(n-1)}}\; r_{max}^m}, \tag{C.6}$$

which for n = m = 2 is the same as equation 7.3. Finally, traveling waves of activity can also be produced by shifting the synaptic weights, as in equation 8.1. The derivation is identical to the case discussed in the main text, except that the amplitude is given by the steady state of the expression above. In general, simulations of equation C.1 and of other variants based on divisive inhibition produced the same qualitative results (uniform rate, bistability, localized profiles of activity, traveling waves) as the equations used in the main text (see Salinas, in press).

Appendix D: Nonuniform Inhibition

The simulations in Figures 5 and 6 differed in one important respect from the theoretical model: instead of all units effectively inhibiting each other uniformly, the inhibitory connections were spatially localized. This was to show that perfectly uniform inhibition, which was used in a detailed modeling study of persistent activity in prefrontal cortex (Compte et al., 2000), is not an essential requirement of these types of attractor models. An argument for why localized inhibition produces qualitatively the same dynamics can be outlined analytically. Again, the excitatory connections are gaussian (see equation 7.1). The basic idea is that, as discussed in appendix B, a combination of unimodal functions centered at the same point can be approximated quite accurately by a single gaussian; in this case, the combination is a ratio. As shorthand, use G(a; σ) to indicate a gaussian function of a with standard deviation σ and unit amplitude. Suppose that h = 0 and that the inhibitory connections are given by

$$v(a, b) = v(a - b) = v_{max}\, G(a - b; \sigma_v). \tag{D.1}$$

Substitute solution 7.2 into the integral equation, equation 2.2, and note that there are now two convolutions, as the denominator now contains a term proportional to

$$\int G(a - b; \sigma_v)\, G^2(b - x; \sigma_r)\, db. \tag{D.2}$$


After the substitutions,

$$\tau \frac{dr_{max}}{dt}\, G(a - x; \sigma_r) = -r_{max}\, G(a - x; \sigma_r) + \frac{A\, G\!\left( a - x; \sqrt{(\sigma_r^2 + \sigma^2)/2} \right)}{s + B\, G\!\left( a - x; \sqrt{\sigma_v^2 + \sigma_r^2/2} \right)}, \tag{D.3}$$

where A and B absorb all the amplitude terms. Here, all gaussian functions are centered at the same point. The unknowns are the maximum rate rmax and the width σr of the activity profile. The crucial observation is that the ratio of gaussian functions that appears above can be accurately approximated as a single gaussian of amplitude A/(s + B), and its width can be set by imposing that the areas under the curves match up (as was done in equations B.6 and B.7). The approximation fails if σv < σ, but as long as the effective spread of inhibition is larger than that of excitation, the error is very small. Thus, there are two constraints for the steady-state solution: that the amplitude of the approximating gaussian be A/(s + B) and that its width be equal to σr. Consistent with the case of uniform inhibition, the resulting expressions show that as σv increases, the width σr of the activity profile approaches σ. Although this procedure ultimately results in a system of coupled, nonlinear equations that is hard to solve, it shows that in principle, the same type of solution is found with localized inhibition. This is in agreement with the results of computer simulations of both the rate and spiking models.

Acknowledgments

This research was supported by start-up funds from Wake Forest University School of Medicine and by NIH grant 1 R01 NS044894-01. I thank Peter Latham for his thorough critique and Larry Abbott for his continuing support and mentoring.

References

Abeles, M. (1991). Corticonics: Neural circuits of the cerebral cortex. New York: Cambridge University Press.
Amit, D. J., & Brunel, N. (1997). Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cereb. Cortex, 7, 237–252.
Ben-Yishai, R., Bar-Or, R. L., & Sompolinsky, H. (1995). Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. U.S.A., 92, 3844–3848.
Blake, R., & Logothetis, N. K. (2002). Visual competition. Nat. Rev. Neurosci., 3, 13–23.
Brady, N., & Field, D. J. (2000). Local contrast in natural images: Normalization and coding efficiency. Perception, 29, 1041–1055.


Brunel, N. (2000). Persistent activity and the single cell f-I curve in a cortical network model. Network, 11, 261–280.
Brunel, N., & Wang, X.-J. (2001). Effects of neuromodulation in a cortical network model of object working memory dominated by recurrent inhibition. J. Comput. Neurosci., 11, 63–85.
Buccino, G., Binkofski, F., Fink, G. R., Fadiga, L., Fogassi, L., Gallese, V., Seitz, R. J., Zilles, K., Rizzolatti, G., & Freund, H. J. (2001). Action observation activates premotor and parietal areas in a somatotopic manner: An fMRI study. Eur. J. Neurosci., 13, 400–404.
Carandini, M., & Heeger, D. J. (1994). Summation and division by neurons in primate visual cortex. Science, 264, 1333–1336.
Carandini, M., Heeger, D. J., & Movshon, J. A. (1997). Linearity and normalization in simple cells of the macaque primary visual cortex. J. Neurosci., 17, 8621–8644.
Chance, F. S., Abbott, L. F., & Reyes, A. D. (2002). Gain modulation from background synaptic input. Neuron, 35, 773–782.
Churchland, P. S., & Sejnowski, T. J. (1992). The computational brain. Cambridge, MA: MIT Press.
Compte, A., Brunel, N., Goldman-Rakic, P. S., & Wang, X.-J. (2000). Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cereb. Cortex, 10, 910–923.
Dayan, P. (1998). A hierarchical model of binocular rivalry. Neural Comput., 10, 1119–1135.
Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience. Cambridge, MA: MIT Press.
Deneve, S., Latham, P. E., & Pouget, A. (1999). Reading population codes: A neural implementation of ideal observers. Nat. Neurosci., 2, 740–745.
Diesmann, M., Gewaltig, M.-O., & Aertsen, A. (1999). Stable propagation of synchronous spiking in cortical neural networks. Nature, 402, 529–533.
di Pellegrino, G., & Wise, S. P. (1993). Visuospatial versus visuomotor activity in the premotor and prefrontal cortex of a primate. J. Neurosci., 13, 1227–1243.
Duhamel, J.-R., Colby, C. L., & Goldberg, M. E. (1992). The updating of the representation of visual space in parietal cortex by intended eye movements. Science, 255, 90–92.
Ermentrout, G. B. (1998a). The analysis of synaptically generated traveling waves. J. Comput. Neurosci., 5, 191–208.
Ermentrout, G. B. (1998b). Neural networks as spatio-temporal pattern-forming systems. Rep. Prog. Phys., 61, 353–430.
Fadiga, L., Fogassi, L., & Rizzolatti, G. (1996). Action recognition in the premotor cortex. Brain, 119, 593–609.
Fellous, J.-M., & Sejnowski, T. J. (2000). Cholinergic induction of spontaneous oscillations in the hippocampal slice in the slow (.5–2 Hz), theta (5–12 Hz), and gamma (35–70 Hz) bands. Hippocampus, 10, 187–197.
Georgopoulos, A. P., Schwartz, A., & Kettner, R. E. (1986). Neuronal population coding of movement direction. Science, 233, 1416–1419.
Gibson, J. R., & Maunsell, J. H. (1997). Sensory modality specificity of neural activity related to memory in visual cortex. J. Neurophysiol., 78, 1263–1275.


Graziano, M. S., Hu, T. X., & Gross, C. G. (1997a). Visuospatial properties of ventral premotor cortex. J. Neurophysiol., 77, 2268–2292.
Graziano, M. S., Hu, X. T., & Gross, C. G. (1997b). Coding the locations of objects in the dark. Science, 277, 239–241.
Graziano, M. S., Taylor, C. S., & Moore, T. (2002). Complex movements evoked by microstimulation of precentral cortex. Neuron, 34, 841–851.
Gu, Q. (2002). Neuromodulatory transmitter systems in the cortex and their role in cortical plasticity. Neuroscience, 111, 815–835.
Harris-Warrick, R. M., & Marder, E. (1991). Modulation of neural networks for behavior. Annu. Rev. Neurosci., 14, 39–57.
Hernández, A., Zainos, A., & Romo, R. (2002). Temporal evolution of a decision-making process in medial premotor cortex. Neuron, 33, 959–972.
Hertz, J., Krogh, A., & Palmer, R. G. (1991). Introduction to the theory of neural computation. Reading, MA: Addison-Wesley.
Holt, G. R., & Koch, C. (1997). Shunting inhibition does not have a divisive effect on firing rates. Neural Comput., 9, 1001–1013.
Holt, G. R., Softky, W. R., Koch, C., & Douglas, R. J. (1996). Comparison of discharge variability in vitro and in vivo in cat visual cortex neurons. J. Neurophysiol., 75, 1806–1814.
Idiart, M. A. P., & Abbott, L. F. (1993). Propagation of excitation in neural network models. Network, 4, 285–294.
José, J. V., Tiesinga, P., Fellous, J.-M., Salinas, E., & Sejnowski, T. J. (2002). Is attentional gain modulation optimal at gamma frequencies? Soc. Neurosci. Abstr., 28, 55.6.
Kalarickal, G. J., & Marshall, J. A. (2000). Neural model of temporal and stochastic properties of binocular rivalry. Neurocomput., 32, 843–853.
Kettner, R. E., Marcario, J. K., & Port, N. L. (1996). Control of remembered reaching sequences in monkey. II. Storage and preparation before movement in motor and premotor cortex. Exp. Brain Res., 112, 347–358.
Kobatake, E., & Tanaka, K. (1994). Neuronal selectivities to complex object features in the ventral pathway of the macaque cerebral cortex. J. Neurophysiol., 71, 856–867.
Laing, C. R., & Chow, C. C. (2001). Stationary bumps in networks of spiking neurons. Neural Comput., 13, 1473–1494.
Laing, C. R., & Chow, C. C. (2002). A spiking neuron model for binocular rivalry. J. Comput. Neurosci., 12, 39–53.
Latham, P. E., Richmond, B. J., Nelson, P. G., & Nirenberg, S. N. (2000). Intrinsic dynamics in neuronal networks. I. Theory. J. Neurophysiol., 83, 808–827.
Lauwereyns, J., Sakagami, M., Tsutsui, K., Kobayashi, S., Koizumi, M., & Hikosaka, O. (2001). Responses to task-irrelevant visual features by primate prefrontal neurons. J. Neurophysiol., 86, 2001–2010.
Lehky, S. R. (1988). An astable multivibrator model of binocular rivalry. Perception, 17, 215–228.
Lehky, S. R. (1995). Binocular rivalry is not chaotic. Proc. R. Soc. Lond. B Biol. Sci., 259, 71–76.
Leopold, D. A., & Logothetis, N. K. (1996). Activity changes in early visual cortex reflect monkeys' percepts during binocular rivalry. Nature, 379, 549–553.


Leopold, D. A., & Logothetis, N. K. (1999). Multistable phenomena: Changing views in perception. Trends Cogn. Sci., 3, 254–264.
Linden, J. F., Grunewald, A., & Andersen, R. A. (1999). Responses to auditory stimuli in macaque lateral intraparietal area. II. Behavioral modulation. J. Neurophysiol., 82, 343–358.
Lumer, E. D. (1998). A neural model of binocular integration and rivalry based on the coordination of action-potential timing in primary visual cortex. Cereb. Cortex, 8, 553–561.
Miyashita, Y., & Chang, H. S. (1988). Neuronal correlate of pictorial short-term memory in the primate temporal cortex. Nature, 331, 68–70.
O Scalaidhe, S. P., Wilson, F. A., & Goldman-Rakic, P. S. (1997). Areal segregation of face-processing neurons in prefrontal cortex. Science, 278, 1135–1138.
Port, N. L., Kruse, W., Lee, D., & Georgopoulos, A. P. (2001). Motor cortical activity during interception of moving targets. J. Cogn. Neurosci., 13, 306–318.
Pruett, J. R., Sinclair, R. J., & Burton, H. (2000). Response patterns in second somatosensory cortex (SII) of awake monkeys to passively applied tactile gratings. J. Neurophysiol., 84, 780–797.
Renart, A., Song, S., Compte, A., & Wang, X.-J. (2001). Homogenization by homeostatic synaptic plasticity in a recurrent network model of continuous bump attractors. Soc. Neurosci. Abstr., 27, 535.14.
Rinzel, J., & Ermentrout, G. B. (1998). Analysis of neural excitability and oscillations. In C. Koch & I. Segev (Eds.), Methods in neuronal modeling (2nd ed.). Cambridge, MA: MIT Press.
Rizzolatti, G., Fadiga, L., Gallese, V., & Fogassi, L. (1996). Premotor cortex and the recognition of motor actions. Cog. Brain Res., 3, 131–141.
Sakagami, M., & Niki, H. (1994). Encoding of behavioral significance of visual stimuli by primate prefrontal neurons: Relation to relevant task conditions. Exp. Brain Res., 97, 423–436.
Salinas, E. (in press). Self-sustained activity in networks of gain-modulated neurons. Neurocomputing.
Salinas, E., & Abbott, L. F. (2001). Coordinate transformations in the visual system: How to generate gain fields and what to compute with them. Prog. Brain Res., 130, 175–190.
Salinas, E., Hernández, H., Zainos, A., & Romo, R. (2000). Periodicity and firing rate as candidate neural codes for the frequency of vibrotactile stimuli. J. Neurosci., 20, 5503–5515.
Salinas, E., & Sejnowski, T. J. (2000). Impact of correlated synaptic input on output firing rate and variability in simple neuronal models. J. Neurosci., 20, 6193–6209.
Salinas, E., & Sejnowski, T. J. (2001). Gain modulation in the central nervous system: Where behavior, neurophysiology and computation meet. Neuroscientist, 7, 430–440.
Salinas, E., & Sejnowski, T. J. (2002). Integrate-and-fire neurons driven by correlated stochastic input. Neural Comput., 14, 2111–2155.
Salinas, E., & Thier, P. (2000). Gain modulation: A major computational principle of the central nervous system. Neuron, 27, 15–21.


Schultz, W., & Romo, R. (1992). Role of primate basal ganglia and frontal cortex in the internal generation of movements. I. Preparatory activity in the anterior striatum. Exp. Brain Res., 91, 363–384.
Schwartz, O., & Simoncelli, E. P. (2001). Natural signal statistics and sensory gain control. Nat. Neurosci., 4, 819–825.
Seidemann, E., Zohary, U., & Newsome, W. T. (1998). Temporal gating of neural signals during performance of a visual discrimination task. Nature, 394, 72–75.
Seung, H. S. (1996). How the brain keeps the eyes still. Proc. Natl. Acad. Sci. U.S.A., 93, 13339–13344.
Seung, H. S., Lee, D. D., Reis, B. Y., & Tank, D. W. (2000). Stability of the memory of eye position in a recurrent network of conductance-based model neurons. Neuron, 26, 259–271.
Simoncelli, E. P., & Heeger, D. J. (1998). A model of neuronal responses in visual area MT. Vision Res., 38, 743–761.
Sommer, M. A., & Wurtz, R. H. (2001). Frontal eye field sends delay activity related to movement, memory, and vision to the superior colliculus. J. Neurophysiol., 85, 1673–1685.
Terman, D., Rubin, J. E., Yew, A. C., & Wilson, C. J. (2002). Activity patterns in a model for the subthalamopallidal network of the basal ganglia. J. Neurosci., 22, 2963–2976.
Thompson, K. G., & Schall, J. D. (1999). The detection of visual signals by macaque frontal eye field during masking. Nat. Neurosci., 2, 283–288.
Tiesinga, P. H. E., Fellous, J.-M., José, J. V., & Sejnowski, T. J. (2001). Computational model of carbachol-induced delta, theta and gamma oscillations in the hippocampus. Hippocampus, 11, 251–274.
Toth, L. J., & Assad, J. A. (2002). Dynamic coding of behaviourally relevant stimuli in parietal cortex. Nature, 415, 165–168.
Troyer, T. W., & Miller, K. D. (1997). Physiological gain leads to high ISI variability in a simple model of a cortical regular spiking cell. Neural Comput., 9, 971–983.
Ungerleider, L. G. (1995). Functional brain imaging studies of cortical mechanisms for memory. Science, 270, 769–775.
van Rossum, M. C. W., Turrigiano, G. G., & Nelson, S. B. (2002). Fast propagation of firing rates through layered networks of noisy neurons. J. Neurosci., 22, 1956–1966.
Wang, X.-J. (1999). Synaptic basis of cortical persistent activity: The importance of NMDA receptors to working memory. J. Neurosci., 19, 9587–9603.
Wang, X.-J. (2001). Synaptic reverberation underlying mnemonic persistent activity. Trends Neurosci., 24, 455–463.
Weimann, J. M., & Marder, E. (1994). Switching neurons are integral members of multiple oscillatory networks. Curr. Biol., 4, 896–902.
Wilson, H. R. (1998). Spikes, decisions, and actions. Oxford: Oxford University Press.
Wilson, H. R., & Cowan, J. D. (1973). A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik, 13, 55–80.
Wilson, H. R., & Humanski, R. (1993). Spatial frequency adaptation and contrast gain control. Vision Res., 33, 1133–1149.


Yuille, A. L., & Grzywacz, N. M. (1989). A winner-take-all mechanism based on presynaptic inhibition. Neural Comput., 1, 334–347.
Zhang, K. (1996). Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: A theory. J. Neurosci., 16, 2112–2126.
Zhang, M., & Barash, S. (2000). Neuronal switching of sensorimotor transformations for anti-saccades. Nature, 408, 971–975.
Zhang, J., Riehle, A., Requin, J., & Kornblum, S. (1997). Dynamics of single neuron activity in monkey primary motor cortex related to sensorimotor transformation. J. Neurosci., 17, 2227–2246.
Zhou, Y. D., & Fuster, J. M. (1997). Neuronal activity of somatosensory cortex in a cross-modal (visuo-haptic) memory task. Exp. Brain Res., 116, 551–555.

Received October 29, 2002; accepted January 2, 2003.
