Network Topology Influences Synchronization and Intrinsic Read-out

Gabriele Scheler

Abstract: What are the effects of neuromodulation on a large network model? Neuromodulation influences neural processing by presynaptic and postsynaptic regulation of synaptic efficacy and by ion channel regulation of dendritic excitability. We present a model where regulation of synaptic efficacy changes the overall connectivity, or topology, of the network, and regulation of dendritic ion channels sets intrinsic excitability, or the gain of the neuron. We show that network topology influences synchronization, i.e. the correlations of spiking activity generated in the network, and that synchronization influences the read-out of intrinsic properties. Highly synchronous input drives neurons such that differences in intrinsic properties disappear, while asynchronous input lets intrinsic properties determine output behavior. We conclude that neuromodulation may allow a network to alternate between a synchronized transmission mode and an asynchronous intrinsic read-out mode.

Introduction

We have previously shown for a conductance-based neural model of striatal medium spiny neurons that neuronal variability in the contribution of individual ion channels (such as slowly inactivating potassium channels and GIRK channels) can yield uniform responses, if the neurons are driven with sufficiently strong synaptic input fluctuations. If the same neurons are driven by more asynchronous synaptic input, the variability is manifested in their response pattern, i.e. their spike rates and the asynchronous timing of their spikes (see [5]). These results are here reproduced and further evaluated with the help of both a conductance-based and a simplified two-dimensional neuronal model. Neuromodulation influences the contribution of ion channels towards intrinsic excitability, but also synaptic efficacy. Here we want to investigate these effects for a large collection of neurons. For this purpose we shall explore the effect of different network topologies on the statistics of synaptic input received at individual model neurons. We hypothesize that synaptic efficacy changes allow the network to operate with different network topologies, and that network topology is a decisive factor in creating and sustaining synchronized inputs vs. producing asynchronous input without a temporal structure.

Results

Conditional Expression of Intrinsic Excitability in Conductance-based Models

We show how we can model frequency-specificity as a stored intrinsic property, defined as a fixed spike rate in response to constant or noisy input. We use a full ion-channel-based model (the MSN model [5]), with variation in the slow A-type potassium channel (IAs). In Fig. 1A, we show the response of MSN model neurons with a scaling of μIAs = 1.0, 1.3, 1.5 to a noisy signal derived from uncorrelated Poisson-distributed synaptic input. The top panel shows the development of the membrane potential Vm over time for all neurons. The middle panel shows the spike train for each neuron with the mean ISI and its standard deviation; the total number of spikes is shown on the right. The total number of spikes includes bursts, which are excluded from the ISI calculation. The bottom panel shows the synaptic input; the dots correspond to the spiking events of a single neuron (neuron 3, μIAs = 1.5). The resulting mean ISIs are 25, 37, and 45 ms. With standard deviations of 6, 11, and 8, they are clearly distinguishable. This is also shown by the Gaussian distributions of the mean ISIs for each neuron type (Fig. 1B). This model shows frequency-specificity as a read-out of the relative contribution of the slow A-type potassium channel, indicated by the scaling factor μIAs. The relative contribution of an ion channel corresponds to its density or distribution on the somato-dendritic membrane, or in some cases its specific localization at dendritic branch points. Experimental evidence has shown that this is a plastic feature of neurons.

We then employ highly correlated synaptic input, defined as in [5]. We stimulate the same neurons with the correlated input and observe the spike pattern (Fig. 2). We can show that the frequency-specificity of the neurons disappears. Instead we see a time-locked spike pattern, which is expressed by a similar spike frequency (Fig. 2A) and an overlap of the mean ISIs (Fig. 2B). What this experiment showed is that a stored intrinsic property, the gain, is available to the processing network in a conditional manner. The property is continually expressed: the differences in ion channel density persist. Depending on the mode of stimulation, however, this property is manifested as intrinsic gain, or it is obscured when a neuron is driven by strongly correlated input.

Figure 1 A. Frequency response of 3 conductance-based MSN model neurons with variable scaling of IAs to uncorrelated input. B. Gaussian distributions of ISIs. We see a clear separation of frequency responses.

Figure 2 A. Frequency response of the same MSN model neurons as in Fig. 1 to correlated input. B. Gaussian distributions of ISIs. We see overlapping of frequency responses.

Results for Simplified Model Neurons

To continue exploring this property of model neurons in a networked context, we switched to a simplified model neuron [2] and created a set of variations for this model. We show the response of two-dimensional model neurons to asynchronous input in Fig. 3, and to regular, synchronous input in Fig. 4. In the first case, we have clearly separated frequencies; in the second case, the ISIs are nearly identical with a narrow distribution. When we stimulate the neurons with irregular, but synchronous input, the ISIs become identical, but with a wider distribution, reflecting the different durations of pauses between the synchronous stimulations (Fig. 5).

Figure 3 A. Spike response of the two-dimensional dynamic model neurons (1,2,3) to asynchronous input. B. Gaussian distributions of ISIs for six model neurons (1,2,3,4,5,6) to asynchronous input as in A. We see a clear separation of frequency responses for model neurons.

Figure 4 A. Spike response of the two-dimensional dynamic model neurons (1,2,3) to regularly timed, correlated input. B. Gaussian distributions of ISIs for six model neurons (1,2,3,4,5,6) to the same input as in A. We see strong overlapping of frequency responses, at about 50 ms ISI, in accordance with the input. We notice that neuron 6 fires at lower frequencies than the input; it probably has a longer reset period, as seen in Fig. 3B.

Figure 5 A. Spike response of the two-dimensional dynamic model neurons (1,2,3) to irregularly timed, correlated input. B. Gaussian distributions of ISIs for six model neurons (1,2,3,4,5,6). We see strong overlap of frequency responses.

Multiplexing synchronous and asynchronous input

We may also consider the question of whether a neuron can simultaneously respond to an input and read out its stored spike frequency. If there are single synchronous events which interrupt ongoing spiking, can we recover the intrinsic properties of each neuron? Fig. 6 shows that this is possible: Fig. 6A shows the input and the synchronous responses, and in Fig. 6B we still see a clear separation of frequencies.

Figure 6A: Multiplexed input and response of different model neurons (1-6). Three synchronous events are clearly represented in the spike pattern of all neurons. B. Mean ISIs for each neuron type. The separation of frequencies is kept. Compared to Fig. 3B, the standard deviation is somewhat higher because of the additional spikes caused by strong synchronous input.

We conclude that we can multiplex asynchronous and synchronous input. It is also apparent that there needs to be a lower limit on the intervals between synchronous events if they are to be processed without disrupting intrinsic properties. This interval needs to be defined as a function of the intrinsic frequencies. In this case, the synchronous events occur at 3/s, while the slowest neuron fires at 10 Hz.

Synchronicity depends on network topology

The simplified model neurons make it possible to create large networks of heterogeneous neurons and explore different topologies. We hypothesized that a small-world graph, because of its hierarchical topology and the existence of hub neurons, would lead to synchronization of action potentials, even with heterogeneous neurons, while a random topology would support asynchronous spiking behavior. We define synchronization S in a network by pairwise correlations: for each pair of neurons n1, n2, we count the number of spikes which occur within a window W (W = 10 ms), divided by the total number of spikes for n1 or n2. We then calculate an average pairwise correlation value S over all neurons in the network. The spike frequency for each neuron type is assessed by the mean and standard deviation of ISIs, as before.
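As a minimal Python sketch (not the authors' code), the measure S described above can be computed from lists of spike times. The normalization ("divided by the total number of spikes for n1 or n2") leaves some room for interpretation; here we assume division by the larger of the two spike counts, so that S = 1 for perfectly coincident trains:

```python
from itertools import combinations

def pairwise_sync(spikes_a, spikes_b, window=10.0):
    """Fraction of spikes of a neuron pair that coincide within `window` ms.

    Counts spikes of neuron A that have a spike of neuron B within
    +/- window, normalized by the larger of the two spike counts
    (one plausible reading of the definition in the text).
    """
    if not spikes_a or not spikes_b:
        return 0.0
    coincident = sum(
        1 for t in spikes_a
        if any(abs(t - s) <= window for s in spikes_b)
    )
    return coincident / max(len(spikes_a), len(spikes_b))

def network_sync(spike_trains, window=10.0):
    """Average pairwise correlation S over all neuron pairs."""
    pairs = list(combinations(spike_trains, 2))
    if not pairs:
        return 0.0
    return sum(pairwise_sync(a, b, window) for a, b in pairs) / len(pairs)
```

Three trains firing within a millisecond of each other give S = 1.0, while trains with offsets larger than W give S = 0.0, matching the intended contrast between synchronized and asynchronous network states.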

Figure 7 A. Part of a random graph, here for 50 neurons. B. Part of a small-world graph for 50 neurons. The higher clustering of the small-world graph is apparent.

We first use a randomly connected graph (RG) with N = 210 and K = 1800 and use 7 different neuronal types (1-6, plus the generic neuron g) with 30 neurons each. Fig. 7A shows an excerpt of the graph structure. We can see that the graph is loosely connected, such that all neurons have a comparable number of connections. This is also apparent in Fig. 8, where we can see a (narrow) Gaussian distribution of connectivity for the random graph.

Figure 8. Outdegree histogram for the random graph (red) and the small-world graph (blue). The SWG has many neurons with only one connection ('leaves'), but also neurons with 17 or more connections ('hubs'), which are lacking in the random graph.

We now stimulate the graph with an initial stimulation of 10 neurons (for about 1 s). There is also a small background inhibition to all neurons, implemented by 21 (10%) inhibitory neurons with complete connectivity. In Fig. 9A, we see highly asynchronous neuronal activity after 1 s of stimulation. The pairwise correlation value S is low (S = 0.22). Fig. 9B shows that each neuronal type retains its own frequency, i.e. has its own typical ISI, separated from other neuronal types. We also notice that some neurons fire at low frequencies (5 Hz) and others at higher frequencies (20 Hz).

Figure 9 A. Asynchronous behavior in a random graph with variable neuron types. Groups of neuronal types are apparent in the rasterplot. Some structure is probably due to background inhibition. B. Average ISIs for all neuronal types, with clear separation by frequency. Pairwise correlation is S = 0.22.

Next we changed the topology of the network such that a small-world graph (SWG) results, with N = 210 and K = 1822, using the same neurons as before. Fig. 7B shows an excerpt of the small-world graph structure. The connectivity structure seems much denser, because of a 'hub' neuron in the center of the graph. In Fig. 8, we can see a much wider distribution of out-degrees for the small-world graph (blue), with a number of nodes with high connectivity. Presumably, those nodes are capable of synchronizing the network, because they can reach many neurons simultaneously. Does this work in the presence of neural heterogeneity?

Fig. 10 A. Synchronization in a small-world graph (SWG) with variable neuron types. The rasterplot shows that different neuronal types respond uniformly. B. Frequency distributions. High overlap between neuronal types is apparent. Pairwise correlation in the graph is high, with S = 0.53.

Fig. 10 shows that a high amount of synchronization is compatible with heterogeneity in the intrinsic frequency of model neurons. The rasterplot (Fig. 10A) shows the activity in the SWG with the same neurons and the same stimulation as before. The overall correlation, defined by pairwise correlation of neurons, is much higher (S = 0.53). The distribution of ISIs in this case is strongly overlapping (Fig. 10B), similar to Fig. 4B, where neurons were explicitly driven by highly synchronous input. We realize that the pairwise synchronicity S is not determined by the intrinsic properties of the neurons. Instead, S is dominated by the network topology. We conclude that differences in intrinsic properties disappear as the synchronicity in a network increases. To confirm this observation we used a number of intermediate graphs X1-X4, with clustering indices between the random graph and the small-world graph. If there were a phase transition in synchronicity during a gradual transformation of the graph structure, this would indicate a self-organized switch between time-locked behavior and read-out. However, at least in this case, the pairwise synchronization S develops in proportion to the clustering index (Fig. 11).

Fig. 11. Pairwise synchronization S is directly correlated with the cluster index for the transition between random and small-world graph.
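The paper does not spell out how the clustering index is computed; a standard choice is the average local clustering coefficient in the sense of Watts and Strogatz, evaluated on an undirected view of the graph. The following Python sketch illustrates that assumed definition:

```python
def clustering_index(adj):
    """Average local clustering coefficient of an undirected graph.

    adj: dict mapping node -> set of neighbor nodes (symmetric).
    For each node, we compute the fraction of possible links among
    its neighbors that actually exist, then average over all nodes.
    """
    coeffs = []
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            # nodes with fewer than 2 neighbors contribute 0
            coeffs.append(0.0)
            continue
        # count existing links among the neighbors of `node`
        links = sum(
            1 for a in nbrs for b in nbrs
            if a < b and b in adj[a]
        )
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs) if coeffs else 0.0
```

A triangle yields an index of 1.0 and a simple chain yields 0.0; under this definition the reported values (0.041 for the RG vs. 0.12 for the SWG) indicate a threefold clustering difference despite nearly equal edge counts.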

Discussion

The Role of Neuromodulation

Neuromodulation influences both intrinsic properties and synaptic connectivity. Experimental estimates place the distribution of synaptic neuromodulatory receptors at approximately 30% of connections. For a random graph this is sufficient for a transformation into a small-world graph: empirically, we could effect a topology change between random and small-world graph with 10% to 40% of changed connections. Secondly, neuromodulation disables or enhances some ion channels, e.g. SK channels, which govern reset times after a spike, or A-type potassium channels, which influence latency to spike. In this way, neuromodulation switches intrinsic properties, such as, in our model, from type 1 to type 2. Our simulations have shown that this is only relevant in the asynchronous network mode; in the synchronous mode, the differences are negligible. If neuromodulation induces enhanced synchrony via synaptic receptors, the invariance of neuronal intrinsic properties in synchronous mode prevents type switching from having an acute effect during processing.

Network Topology and Intrinsic Read-out

We employ a parametrizable two-dimensional neural oscillator model to encode different intrinsic excitability, manifested as different frequency responses to constant input (gain). What the first experiment showed is that a stored intrinsic property, the gain, is available to the processing network in a conditional manner. The property is continually expressed: the differences in ion channel density persist. Depending on the mode of stimulation, however, this property is manifested as intrinsic gain, or it is obscured when a neuron is driven by strongly correlated input. Different statistical properties of synaptic input can be modeled by variability in the correlation properties of input neurons. In a network model, this means that the overall correlation in the network determines what input a neuron receives. With a random topology, correlation is low and neurons fire irregularly with their own preferred frequency. With a small-world topology, correlation is much higher, and neurons fire when they receive correlated input, irrespective of intrinsic properties. That is, driving neurons by correlated vs. uncorrelated input leads to uniform behavior vs. read-out of stored differences in ion channel conductances.

Inhibition

Conditions for neuronal read-out may not be restricted to highly correlated vs. more distributed synaptic input. Another type of synaptic input that is assumed to influence variability in neuronal spiking behavior is the balance of inhibitory and excitatory input. The neuron is either in a low-inhibition state, such that it is driven only by strong synaptic input, or in a high-inhibition or high-conductance state, where it remains close to firing threshold and fires continuously, reading out stored properties. Synaptic input in the high-inhibition state becomes a form of ongoing input that causes a neuron to emit spikes according to its own stored intrinsic properties.

Memorization

Differences in intrinsic properties are interesting for models of memorization. Neurons may acquire differences in their ion channel distribution and density in a use-dependent way, storing aspects of previous activations. This memory, however, is not always present to influence network processing. Rather, it is conditionally present, i.e. it requires specific conditions of asynchronous synaptic input to be "read out". Any actual read-out requires not only a localized activation pattern, but also enhancement and transmission of the pattern by (synchronized) neural activity. This observation provides a new perspective on the role of neuronal circuits in encapsulating, i.e. hiding or making available, neuronal memory.

Symbols

The results reported here were obtained with two-dimensional model neurons, as models of cortical pyramidal neurons. We used both a generic model and parametrizations according to a few distinct types. In this way, we can demonstrate a "symbolic" property of storage in the distinct frequencies that a neuron emits when stimulated in an asynchronous way. The 6 different types of neurons can be considered to correspond to 6 symbols (A..F), and any directed, targeted input to the graph will excite a typical signature ('word') of these symbols (Fig. 12).

Let us explain the role of a neuron in the model: if neuron A receives asynchronous synaptic input, its intrinsic properties will determine its firing rate. It will be more of a "source" neuron, whose stored properties determine its output. If the same neuron A receives more synchronous synaptic input, which puts it into a driven mode, its behavior will be that of a "transfer" neuron, a uniform processing device that reliably transmits temporal information.

Fig. 12: Two examples for symbolic information contained in the firing rates of neurons (6 neuronal types, each with its own overall activation) that could appear in an asynchronously spiking network.

Conclusions

We created a number of different parametrized neuron models to capture neuronal heterogeneity. Neuromodulation may change the properties of a neuron such that it has more or less intrinsic excitability, leading to different firing rates when it is stimulated in an asynchronous way. Under synchronous stimulation the differences disappear. We also suggested that synaptic neuromodulation can be an effective way of rapidly altering network topology. We investigated changes in network topology along the dimension of small-world vs. random graph connectivity. We hypothesized that SW graphs produce more globally synchronized behavior than comparable random graphs. In accordance with this hypothesis, we find that in a SW graph, because of highly synchronous inputs, the differences between neuronal intrinsic properties are minimized, while a random graph allows read-out of neuronal intrinsic properties. Thus, altering network topology can alter the balance between intrinsically determined and synaptically driven network activity.

Methods

Conductance-based neuron model and synaptic input

The conductance-based neural model of a striatal medium spiny neuron is described in detail in [5]. The membrane voltage Vm is modeled by

dVm/dt = -(1/C) [μ1 I1 + μ2 I2 + ... + μn In - Isyn]

where the Ii are the currents induced by the individual ion channels. Variability of the neuron is modeled by modifications to the μi. The model includes ion channels for Na (INa), K (IK), slow A-type K channels (IAs), fast A-type K channels (IAf), inward rectifying K channels (IKir), L-type calcium channels (ICaL), and the leak current (Ileak). The definitions of all parameters and the dynamics of the ion channels can be found in [5]. For the experiments in this paper, we focus only on variability induced by changes in the strength of the slow A-type K channel. The total current contribution of this channel is μ·IAs, where μ was selected between 1.0 and 1.5.

In order to illustrate the variability in neuron behavior, we excited the neuron model with input signals resembling two kinds of synaptic input: uncorrelated and correlated. These signals were generated by superposition of excitatory and inhibitory spikes from individual Poisson-distributed spike trains (50 excitatory and 10 inhibitory) and biased Gaussian background noise. The amount of pairwise correlation in these spike trains governs the type of input signal. A high correlation factor was used in order to generate sequences with short periods (10-15 ms) of high activity.

Heterogeneity in a two-dimensional model

In order to do large-scale simulations we needed a simple, computationally tractable neuron model. We used a two-dimensional model of a neural oscillator (cf. [2]), and employed an instantiation of the model with parameters fitted to the general properties of cortical pyramidal neurons [3] as a generic model g. The model consists of an equation for the membrane potential v (Eq. 1), fitted to experimental values for cortical pyramidal neurons, and an equation for a gating parameter u (Eq. 2):

dv/dt = 0.04 v^2 + 5v + 140 - u - Isyn    (Eq. 1)
du/dt = a(bv - u)    (Eq. 2)

with a = 0.02 and b = 0.2 for the generic model.

When the neuron fires a spike (defined as v(t) = 30 mV), v is set back to a low membrane potential (v := c, c = -65 mV) and the gating variable u is increased by a fixed amount d (u := u + d, d = 8) (cf. [3]). This formulation allows for a very simple neuron model, which avoids explicit modeling of the downslope of the action potential and instead resets the voltage. Time-dependence is modeled by the gating variable u. Neuronal heterogeneity is achieved by systematic variation of the inactivation parameters: by varying d, we vary the inactivation dynamics of the model after a spike; by varying a, we vary the inactivation dynamics throughout the computation. In this way, we can model neuronal variability in activation/inactivation dynamics, which is sufficient to model frequency-selectivity as a stored intrinsic property. The parameters used in this paper for the different neuron types are listed in Table 1.
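As an illustrative sketch only (the authors' simulation code is not given), Eqs. 1-2 with this reset rule can be integrated with a simple Euler scheme. Note the sign convention assumed here: the code adds a depolarizing input current I, i.e. I plays the role of -Isyn in Eq. 1. The step size dt = 0.25 ms is an assumption:

```python
def simulate(a, b, c, d, I, T=1000.0, dt=0.25):
    """Euler integration of the two-dimensional model (Eqs. 1-2).

    Returns spike times (ms) for a constant input current I.
    Reset: when v reaches 30 mV, v := c and u := u + d.
    """
    v, u = c, b * c          # start at the resting point
    spikes = []
    t = 0.0
    while t < T:
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike: record and reset
            spikes.append(t)
            v, u = c, u + d
        t += dt
    return spikes
```

Consistent with the role of d as a post-spike inactivation parameter, increasing d while holding a, b, c fixed lengthens the ISI and lowers the firing rate under constant drive, which is how the neuron types of Table 1 store distinct intrinsic frequencies.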

name      a      b     c    d
generic   0.02   0.2   -65  8
type1     0.025  0.2   -65  6
type2     0.02   0.2   -65  9
type3     0.015  0.2   -65  12
type4     0.015  0.15  -65  14
type5     0.022  0.3   -65  14
type6     0.022  0.3   -65  9.5

Table 1: Parameters for the different neuron types

Graph properties

We create graphs of N excitatory and 10% inhibitory neurons. We change the connectivity pattern for the excitatory neurons and keep the inhibitory neurons fully connected to all excitatory neurons. We compare randomly connected graphs to small-world graphs. A randomly connected graph is fully specified by N and K, the number of links. A SW graph can be created using linear preferential attachment (LPA, [1]): the graph is generated one vertex at a time, and each newly created node is attached to a selection of d already existing nodes. The probability of selecting a specific node is proportional to its current degree. For our experiments, we use an efficient algorithm from [6]; a useful overview of this and other methods is given in [4]. We use specific instantiations of these graphs (RG and SWG) for the simulations. The SW graph was created by LPA, which resulted in K = 1822 connections (N = 210); the random graph had the same number of neurons (N = 210) and approximately the same number of connections (K = 1800). Additionally, we created 4 intermediate graph types X1-X4. Table 2 shows global graph characteristics for the random graph (RG), the small-world graph (SWG), and the intermediate graphs X1-X4.
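A minimal version of linear preferential attachment can be written directly. This sketch is an illustration, not the exact generator used for the SWG; it uses the repeated-endpoints list in the spirit of Batagelj and Brandes [6], so that uniform sampling from the list is degree-proportional sampling. The choices of d = 9 and the seed are assumptions:

```python
import random

def lpa_graph(n, d, seed=0):
    """Directed graph via linear preferential attachment (cf. [1], [6]).

    Each new vertex sends up to d edges to distinct existing vertices,
    chosen with probability proportional to their current degree.
    Returns an adjacency list: node -> list of target nodes.
    """
    random.seed(seed)
    adj = {i: [] for i in range(n)}
    # 'targets' holds each node once per incident edge, so a uniform
    # draw from it is a degree-proportional draw.
    targets = [0]
    for v in range(1, n):
        chosen = set()
        while len(chosen) < min(d, v):
            chosen.add(random.choice(targets))
        for w in chosen:
            adj[v].append(w)
            targets.extend([v, w])
    return adj
```

With n = 210 and d = 9 this yields K = 1845 edges (close to the K = 1822 of the SWG) and a heavy-tailed degree distribution: early nodes accumulate many incoming links and act as the 'hubs' visible in Fig. 8, while late nodes remain 'leaves'.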

Property        RG            SWG           X4            X3            X2            X1
N               210           210           210           210           210           210
K               1800          1822          1783          1779          1796          1796
indegree        8.57[2..20]   8.67[1..25]   8.49[0..20]   8.47[1..19]   8.55[2..19]   8.55[2..17]
outdegree       8.57[1..19]   8.67[0..142]  8.49[1..28]   8.47[1..53]   8.55[2..75]   8.55[1..142]
cluster index   0.041         0.12          0.045         0.049         0.056         0.08
mean plength    2.72          2.46          2.8           2.98          2.94          3
Synch S         0.22          0.53          0.25          0.23          0.23          0.31

Table 2: Small-world vs. random graph properties

In general, a SW graph is defined by (a) a short average path length, (b) high clustering, and (c) a power-law distribution of degrees. In this case, it is noticeable that the differences between RG and SWG in mean path length are small, probably because of the small size of the graph, while the cluster index is three times higher for the SW graph than for the random graph. Fig. 7 shows a 50-neuron subgraph from each structure. The higher clustering in the SW graph is apparent. Fig. 8 shows the distribution of outdegrees (for degrees k < 25). The degree distribution is that of a power law for the SWG vs. a Gaussian distribution for the RG, in accordance with the definitions. Fig. 12 shows the full outdegree distribution.

Fig. 12. Outdegree distribution for the SWG. There are values of k > 25 for individual neurons.

References

[1] Barabasi, A. and Albert, R. Emergence of scaling in random networks. Science, 286(5439):509-512, 1999.
[2] Izhikevich, E. Which model to use for cortical spiking neurons? IEEE Trans. Neural Networks, 15(5):1063-1070, 2004.
[3] Izhikevich, E., Gally, J.A., and Edelman, G.M. Spike-timing dynamics of neuronal groups. Cerebral Cortex, 14(8):933-944, 2004.
[4] Middendorf, M., Ziv, E., and Wiggins, C. Inferring network mechanisms: The Drosophila melanogaster protein interaction network. PNAS, 102(9):3192-3197, 2005. doi: 10.1073/pnas.0409515102
[5] Scheler, G. Learning intrinsic excitability with medium spiny neurons. F1000Research, 2:88, 2013. doi: 10.12688/f1000research.2-88.v2
[6] Batagelj, V. and Brandes, U. Efficient generation of large random networks. Physical Review E, 71:036113, 2005.