Role of input fluctuations in shaping the balance of Striatal activity
AITAKIN EZZATI
Master’s Thesis at: Department Of Computational Science And Technology Supervisor: Arvind Kumar Examiner: Jack Lidmar
TRITA-SCI-GRU 2018:381
Abstract

In this thesis we attempt to develop an analytical framework to further explore the underlying dynamics of the striatum, which is a key component of the cortex-basal ganglia loops. We specifically investigate the contribution of input fluctuations in shaping the relative spiking activity of two neural populations embedded in the striatum, known as the D1 and D2 medium spiny neurons (MSNs). These sub-circuits are associated with two competing pathways in the basal ganglia that are thought to control Go and No-Go decision making, respectively. A functional understanding of the various variables that govern the dynamics of this network could help us infer causal relations between population activities and the behavioral manifestations associated with action selection disorders. Such a causal map would enable scientists to manipulate and adjust neuronal activity with various novel tools, in order to restore the healthy state of network function and, ultimately, behavior. In a recent study, Bahuguna et al. (2015) shed light on a number of important dynamical aspects of the striatal network, namely the existence of one or more operational points (OPs) that facilitate a rapid shift between the Go and No-Go states, thereby promoting a suitable response to the state of the cortical stimuli. However, due to its limited biological plausibility, the suggested framework had shortcomings with regard to the quantitative accuracy of its predictions, and it could not account for the role of input structure in its dynamical description of the network. To overcome these limitations we pursued a more suitable analytical approach with higher biological relevance and quantitative accuracy. Moreover, our model can disclose the impact of input noise and correlations on the D1 and D2 population activities. The validity of the theoretical predictions was confirmed by comparison with the results obtained from the network simulations.
The role of input fluctuations in shaping the balance of striatal activity

Referat

In this thesis we attempt to develop an analytical framework to further explore the underlying dynamics of the striatum, which is a key component of the cortex-basal ganglia loops. We specifically investigate the contribution of input fluctuations in shaping the relative spiking activity of two neural populations embedded in the striatum, known as the D1 and D2 medium spiny neurons (MSNs). These sub-circuits are associated with two competing pathways in the basal ganglia that are thought to control Go and No-Go decision making, respectively. A functional understanding of the various variables that govern the dynamics of this network can help us derive causal relations between population activities and the behavioral manifestations linked to action selection disorders. This causal map makes it possible for researchers to manipulate and adjust neuronal activity with various novel tools, in order to restore the healthy state of network function and, ultimately, behavior. In a recent study, Bahuguna et al. (2015) revealed a number of important dynamical aspects of the striatal network, namely the existence of one or more operational points that facilitate a rapid switch between the Go and No-Go states, thereby promoting a suitable response to the state of the cortical stimuli. However, due to its lack of biological plausibility, the proposed framework had deficiencies with regard to the quantitative accuracy of its predictions. Nor could it account for the role of input structure in its dynamical description of the network. To overcome these limitations we pursued a more suitable analytical method with higher biological relevance and quantitative accuracy. In addition, our model can reveal the effects of input noise and correlations on the D1 and D2 population activities. The validity of the theoretical predictions was confirmed by comparison with the results from the network simulations.
Acknowledgements

I would first like to thank my supervisor, Arvind Kumar, who provided me with a constructive learning opportunity during the time I was working on this thesis. He gave me the freedom to explore and conduct research independently on this fascinating topic, despite my lack of background in the field of computational neuroscience. I am also very thankful to Sebastian Spreizer, a Ph.D. student at Freiburg University, who was kind enough to teach me Python during his short visits to KTH. Finally, I am very grateful to my family and my friends, Azi and Paria, for their never-ending support.
Contents

1 Introduction                                        1

2 Background                                          3
  2.1 Striatum Biology                                4
  2.2 Computational Models Of the Striatum            6

3 Neural Network Model                                9
  3.1 Neuron Model                                    9
  3.2 Synapse Model                                  13
  3.3 Network Model                                  13

4 Analytical Model                                   19
  4.1 Stochastic Neural Activity                     20
  4.2 Framework Description: Part One                20
  4.3 Diffusion Equation                             24
  4.4 Framework Description: Part Two                25
      4.4.1 Methods justification                    26
      4.4.2 Effective diffusion equation             26
      4.4.3 Effective boundary conditions            27
  4.5 Stationary firing rate                         29

5 Results And Discussion                             31
  5.1 Single neuron                                  31
  5.2 Striatum Network                               39
      5.2.1 Self-Consistent Equations                39
      5.2.2 Dynamic State of Network Activity        43

6 Conclusion and Future Prospects                    55

Bibliography                                         57
Chapter 1
Introduction

Imagine a country with a two-party system, say Republicans and Democrats. It is a given that the parties are in constant rivalry to form a government that rules the country according to their own doctrine and policies. The politicians inside each party also compete among themselves to get the nomination to fight an election for a seat in parliament. That is, the competition is both within and between the parties. Assuming the government is not corrupt, in the end the public vote is the definitive factor that drives the competition.

A politically aware person might be very familiar with the parties' ideologies as well as the public's political proclivities. But how could this knowledge at the individual level help one predict the election outcome that emerges at the societal level? In other words, what other underlying factors or forces are involved that shape public opinion and swing the votes for or against a party? Undoubtedly, the media is a powerful agent in changing people's political tendencies and biasing public sentiment, through propaganda and occasionally truthful broadcasts. Another less obvious but ever-present component is the fact that people perpetually exchange their political points of view within a community, and the number of votes for a particular candidate can fluctuate simply as a result of public political debate. Therefore, it is very important to establish to what extent fluctuations shape the electoral polls, and what the ramifications of this correlated behavior are for the final result of an upcoming election.

Curiously, this simple analogy can be extended to networks in the brain, especially the striatum, which is the main input port of the basal ganglia. The striatum plays a crucial role in learning, action selection and motor control, and striatal dysfunction invariably leads to disorders such as Parkinson's disease, Huntington's disease and Tourette's syndrome. The striatum is a network of inhibitory neurons.
The medium spiny neurons (MSNs) constitute about 95% of the population; the remaining 5% are different types of interneurons. All striatal neurons receive excitatory inputs from the thalamus and cortex. The MSNs express either D1-type or D2-type dopamine receptors and form distinct sub-populations. Importantly, just like the two parties of the political system, both
MSN populations have both within- and between-population inhibition. The two types of MSNs project to different downstream targets and initiate two parallel pathways in the basal ganglia that play antagonistic roles in action selection. Thus the winning population, the one with the higher activity, ultimately dictates a behavioral decision, resembling the authority of the winning party.

Here, too, the general question is: how do processes at the micro level, the individual neural interactions, give rise to behavior? As was stipulated in the anecdote, we have to strive to bridge the gap between the micro and phenomenological levels in order to understand how behavior emerges from this network. For instance, just like the media, neuromodulators such as dopamine can modulate neuronal activities and bias the network towards or away from a dynamical state. And finally, following the analogy, we have to ask: how could cortical input fluctuations affect the dynamics? Developing a theoretical model that properly addresses this question is the main focus of this thesis. To do so, we have utilized a two-dimensional population density method and numerical simulations of a spiking neuron model. Accordingly, the key questions we pursued were:

• What is the solution of the density equation, derived using biologically plausible parameter values, in the stationary state? And, given the solution, what is the value of the population activity for the D1 and D2 MSNs?

• What is the variance of the solution under different input correlation scenarios?

• How much does the new approach improve the predictions of the analytical model compared to the previous (naive mean-field) approach?

The following chapter is a brief report on the principal biological and computational studies of the striatum that are relevant to this thesis. Chapter 3 provides a general review of the computational neuroscience models and methods that were employed here.
Chapter 4 is a detailed description of the theoretical framework of the model. In chapter 5, I evaluate the model's accuracy by comparing the theoretical predictions with the numerical results from the simulations. Chapter 6 concludes the thesis by remarking on the strengths and weaknesses of the proposed model and outlines possible modifications and further developments that could extend the utility of the model beyond the stationary state.
Chapter 2
Background
[Embedded figure: a summary of the motor loop from the cortex to the basal ganglia to the thalamus and back to area 6; reproduced from Bear, Connors and Paradiso, Fig. 14.10.]
The basal ganglia (BG, Fig. 2.1) is an aggregate of interconnected sub-cortical nuclei that regulates motor, emotional and cognitive functions via numerous cortico-basal ganglia loops [19]. The BG has two main input nuclei, the striatum and the subthalamic nucleus (STN), and two principal output nuclei, the globus pallidus interna (GPi) and the substantia nigra pars reticulata (SNr), which project to the cortex and the superior colliculus. A number of unique structural and chemical attributes distinguish the BG from other areas of the brain. For example, almost all of its sub-nuclei (the striatum, the globus pallidus externa (GPe), the GPi and the substantia nigra pars compacta (SNc)) predominantly contain inhibitory neurons, the exception being the STN, which consists mostly of excitatory neurons, whereas other brain regions contain a mixture of inhibitory and excitatory neurons.
Figure 2.1. The constituent nuclei of the basal ganglia: (1) the striatum, which consists of three important subdivisions: the caudate nucleus, the putamen and the ventral striatum including the nucleus accumbens (not shown); (2) the globus pallidus; (3) the substantia nigra; (4) the subthalamic nucleus. Figure taken from Bear, Connors and Paradiso [].
Given that the striatum forms the largest sub-circuit of the BG and is considered its primary input station, it is very important to acquire knowledge about its structure, dynamics and function in order to understand its significance in shaping the functional attributes of the BG.
2.1 Striatum Biology
Structurally, 95% of the striatum is composed of medium spiny neurons (MSNs), which express D1 or D2 dopamine receptors and are sparsely connected in a recurrent inhibitory fashion [37]. The remaining 5% are different types of inhibitory interneurons, mainly fast spiking interneurons (FSIs) and tonically active cholinergic interneurons (TANs) [38]. FSIs mostly make synaptic connections onto MSNs, and not vice versa, and hence are the source of feed-forward inhibition in the network. TANs affect MSNs and FSIs only indirectly, by modifying their excitatory synaptic strength [26] and dopamine release [39]. To some extent, D1 and D2 MSNs can be distinguished by their morphology, and they have heterogeneous input resistance and spiking threshold [14]. It was recently discovered that the two MSN types differ in the number and strength of their mutual and recurrent connections [28, 36]. Moreover, FSIs more frequently target D1 than D2 MSNs [17, 28, 36]. These properties are summarized in Table 2.1.

Connection direction       Strength of connection    Number of connections
ctx → D1 ; ctx → D2        J_C1 > J_C2               assumed equal here
FSI → D1 ; FSI → D2        J_F1 > J_F2               N_F1 > N_F2
D1 → D1  ; D2 → D2         J_11 ≤ J_22               N_11 ≥ N_22
D1 → D2  ; D2 → D1         J_12 < J_21               N_12 < N_21

Table 2.1. Connection properties.
The striatum receives and integrates information from the neocortex, thalamus, globus pallidus externa and mid-brain. According to Wall et al. [41], excitatory projections from different cortical regions can be biased towards the D1 or D2 populations. For instance, inputs from the sensory cortex and limbic structures mostly target D1 MSNs, whereas the primary motor cortex (M1) preferentially innervates D2 MSNs. By contrast, projections from the intratelencephalic tract (IT), the pyramidal tract (PT), the SNc and the secondary motor cortex (M2) project equally to the two types of MSNs. Nevertheless, PT neurons make stronger connections with D1 MSNs [22]. The fact that different cortical areas target D1 and D2 MSNs preferentially implies distinct functionality for these populations.

According to Albin et al. [2], BG performance can be summarized as the interplay of three functional pathways, two of which originate from the striatum. The "direct" or "Go" pathway, which initiates an action, is associated with the D1 MSNs projecting to GPi/SNr, and the "indirect" or "No-Go" pathway, which suppresses an action, with the D2 MSNs projecting to GPe (Figure 3.4). A number of animal experiments have confirmed this hypothesis during the past decade. For instance, rats show elevated levels of movement when exposed to optogenetic stimulation of D1 MSNs, and show freezing behavior after receiving exclusive D2 stimulation [21]. Nevertheless, experimental measurements show that, without exclusive stimulation of either sub-population, neither of them is in a shut-down state; both have similarly low activity.
In this state, a proportionate elevation in the activity of one population evokes one of the aforementioned movement states. Thus the action selection/initiation mechanism arises from competition between the direct and indirect pathways, and not from winner-take-all dynamics as was previously suggested.
Figure 2.2. The striatum. Figure taken from: https://ars.els-cdn.com/content/image/1-s2.0-S0306452214005843-gr1.jpg
The exclusive expression of dopamine receptor types in MSNs, together with the fact that they are supplied with a fair amount of dopamine (DA) by the dopaminergic neurons of the SNc and the ventral tegmental area (VTA), drew scientists' attention to investigating its significance thoroughly. These studies indicated that DA increases (decreases) the intrinsic excitability of D1 (D2) MSNs [12]. It was therefore inferred that SNc dopaminergic signaling modulates the direct and indirect pathways differentially and, by directly shaping the future activity of these populations, plays a major role in action selection mechanisms and goal-directed behaviors. DA-related motor disorders such as Parkinson's disease (PD) are accompanied by accelerated neuronal degeneration in the SNc, which diminishes the DA supply to the striatum [12]. The absence of this neuromodulatory input has been shown to inflict devastating motor manifestations such as slowed, rigid movements and an inability to initiate actions. These disabilities are consistent with D2 MSN hyperactivity and D1 MSN hypoactivity. Curiously, the symptoms can be alleviated by administering L-dopa, i.e. providing more dopamine to the striatum. In later phases of PD, higher dosages of L-dopa are needed to maintain the dynamical balance of the direct and indirect pathways and restore natural movement. Unfortunately, due to the loss of synaptic homeostasis, the excessive dopamine concentration cannot be regulated properly, and another movement disorder, known as dyskinesia (abnormal movement), emerges. Striatal malfunction in PD, Huntington's disease (HD) and L-dopa-induced dyskinesia has also been linked to degeneration [8, 12] of the MSN dendritic arbors, which can directly affect synaptic transfer mechanisms and, by disturbing input correlations, change the output rates of these pathways [42].
The empirical observations thus far boil down to three important micro-level properties, namely asymmetric connectivities, DA modulation and input correlations, and provide us with three distinct causal relations between each property and its role in shaping striatal dynamics. However, it is still not exactly clear how these properties interact with each other, or what the ramifications of their interplay are for the dynamics of the system. This reductionist viewpoint sets the basis for theoreticians to integrate these empirical remarks into a reliable computational model. Such models produce an approximate version of the emerging dynamics via analytical calculations and visualize the ongoing complex processes with numerical simulations. When the statistical average obtained from the simulations complies with the analytical approximation, the computational model is deemed reliable. Our understanding of the striatum will then no longer be limited to the partial accounts obtained in vitro, or to the "input/output through a black box" scenarios offered by in vivo experiments. Instead, computational models create a complete picture of a more transparent striatum and enable us to look at a gray box instead of a black one.
2.2 Computational Models Of the Striatum
Prior to the recent experimental insights into the heterogeneity of MSNs, computational models of the striatum were developed on the assumption of a single homogeneous inhibitory population. Top-down models, which usually neglect the details of the neural network, assumed a winner-take-all (WTA) state for striatal dynamics, which is not consistent with the weak recurrent inhibition and low firing rates of MSNs. Bottom-up models, which are the focus of this thesis and give a more detailed description of the system, suggested a winnerless competition (WLC) state for striatal performance, which was a reasonable assumption given the previous state of the art. Despite being intriguing, this hypothesis was rendered unreliable by its neglect of the asymmetric connectivities. Soon after D. Surmeier et al. [14] established that the dichotomous somatodendritic properties of D1 and D2 MSNs mirror differences in their network connectivities, computational models began to treat the striatum as a recurrent network of two sub-populations. Here I will mention a number of interesting results that emerged from this new viewpoint.
Table 2.1 shows the experimentally measured connectivity properties of the striatal network. Comparing the recurrent and feed-forward inhibitory connectivities of the two populations clearly shows that D1 MSNs receive considerably more inhibition than D2 MSNs. The similar and low activity rates of MSNs in the silent awake state indicate that D1 must naturally receive a higher level of excitation than D2 to maintain this balance. The additional excitation to D1 could be acquired through a higher number, or greater strength, of cortical synaptic connections. Otherwise, the additional inhibition received by D1 would cause a prevailing "No-Go"
state. Now the question is: how does the striatum reconcile the action selection/initiation mechanism with the existing balance? Very recently, Bahuguna et al. [3] developed a computational model of the striatum, by means of a simple firing-rate model and spiking neural network simulations, that puts this conundrum in a new perspective. In this model the D1 and D2 populations receive the same input rate from cortex. The results clearly demonstrated that the two populations have very small and almost identical firing rates under the network background noise alone (which resembles the ongoing cortical activity, with no specific stimuli). As the cortical input rate incrementally increases (representing exposure to an external stimulus), D1 MSNs attain comparatively higher activity rates up to a certain point, at which both populations reach the same activity level again. Beyond this point, D2 displays a dominantly higher population activity. Evidently, the ensemble goes through a dynamical transition from prevalence of the "Go" state to the "No-Go" state at this special point, known as the decision transition threshold (DTT). Further simulations indicate that the DTT is not a fixed value: it varies with the level of cortical input correlations, the FSI activity rate, dopamine, and the asymmetry in the connectivities. Striatal dynamics thus provides a fast decision-making procedure in response to slightly higher or lower cortical input rates/correlations around these operational points (the primary and secondary points of balanced activity). In addition, this dynamical description can further explain behavioral manifestations associated with striatal pathologies. For example, the model predicted a smaller DTT for lower dopamine levels. This means that, for the same cortical input rate, D1 MSNs have lower rates and the No-Go state is attained at lower rates of cortical activity, which is consistent with the slowed movements and impaired action initiation of PD patients.
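The competition just described can be illustrated with a toy threshold-linear rate model of the two populations. This is only a qualitative sketch: the gains, thresholds and inhibition weight below are invented for illustration and are not the parameters of the model of Bahuguna et al.

```python
def steady_rates(ctx, g1=1.0, th1=0.5, g2=1.4, th2=3.0, w=0.1,
                 tau=0.01, dt=0.001, steps=2000):
    """Relax a toy two-population rate model to its steady state.

    tau * dr1/dt = -r1 + [g1*ctx - th1 - w*(r1 + r2)]_+   (D1, "Go")
    tau * dr2/dt = -r2 + [g2*ctx - th2 - w*(r1 + r2)]_+   (D2, "No-Go")

    D1 is given the lower effective threshold (th1 < th2) and D2 the
    higher cortical gain (g2 > g1); all values are illustrative only.
    """
    r1 = r2 = 0.0
    for _ in range(steps):
        d1 = max(g1 * ctx - th1 - w * (r1 + r2), 0.0)  # net drive to D1
        d2 = max(g2 * ctx - th2 - w * (r1 + r2), 0.0)  # net drive to D2
        r1 += dt / tau * (-r1 + d1)
        r2 += dt / tau * (-r2 + d2)
    return r1, r2

# Scanning the cortical drive: D1 ("Go") dominates at low input and D2
# ("No-Go") beyond the crossing point, which plays the role of the DTT.
for ctx in (2.0, 6.0, 10.0):
    r1, r2 = steady_rates(ctx)
    print(f"ctx={ctx:4.1f}  rD1={r1:5.2f}  rD2={r2:5.2f}")
```

With these numbers the crossing lies a little above ctx = 6; shifting th2 (a crude stand-in for dopamine's effect on D2 excitability) moves the crossing point, mimicking the prediction that the DTT shifts with the dopamine level.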
This simple rate model provided a sophisticated analogy for the action selection/initiation mechanism in the striatum; however, the price of simplicity can be rather high when modeling a complex system. First, there was only a qualitative match between the theoretical predictions and the numerical simulations. The main reason for this discrepancy is that firing-rate based models cannot be mapped onto neuron, synapse and network properties. Second, firing-rate based models cannot incorporate the effect of input correlations in their calculations, an effect that the numerical simulations of Bahuguna et al. proved to be significant. Thus, to address the scientific problem of modeling the striatum, we have to resort to a more rigorous mathematical representation of neuronal activity. One suitable analytical approach can be formulated using the Fokker-Planck equation, which allows us to quantify the effect that neuron and synapse properties have on the network dynamics. Moreover, the Fokker-Planck formulation of the striatal network dynamics also helps us analyze the role of input noise and correlations in the balance of D1 and D2 MSN activity.
Chapter 3
Neural Network Model

Experiments have provided us with vast knowledge about the biophysics of neurons, based on which numerous single-neuron models have been developed. High-resolution models such as Hodgkin-Huxley reproduce a very realistic biophysical picture of the neuron. Nevertheless, the more detailed a model is, the more parameters it acquires, and hence the more computationally expensive it becomes. To reconcile this, we usually stick to a reduced neuron model that, despite its simplicity, provides an abstraction of the phenomenon under consideration. It thus suffices that the model captures the dynamics of the neuron with respect to what are thought to be its most important properties. What makes neurons special is their electrical excitability, which facilitates information propagation in a neural ensemble in the form of signal transmission. For the purposes of this thesis it suffices to mention only the important preliminary concepts necessary for introducing the single-neuron model that was used in the spiking network model of the striatum.
3.1 Neuron Model
What morphologically differentiates neurons from other cells is that they have dendrites and an axon. Roughly speaking, dendrites are considered input devices that pick up incoming signals from pre-synaptic neurons and pass them on to the processing unit, the soma. If the strength of the processed signal surpasses a certain threshold, the neuron fires an action potential, i.e. the signal is passed on to the next neuron that has a synaptic connection with this neuron's axon. The axon is therefore the output unit that transmits the signal to the next neuron [13]. When a neuron spikes, neurotransmitters are released from the axon terminal into the synaptic cleft. These neurotransmitters are picked up by the channel receptors located on the dendrites of the post-synaptic neuron. We should mention that a neuron at rest is polarized, meaning there is a potential gradient between the inside and the outside of the membrane, known as the resting (or reversal) potential, which is usually around -70 mV. The reception of neurotransmitters provokes a chain of events that affects the dynamics of the responsive channels. As a result of the channels'
opening and closing, an excitatory or inhibitory post-synaptic potential is induced in the post-synaptic neuron, which, depending on its previous state, may become depolarized or hyperpolarized, respectively. It is worth noting that a neuron can make numerous synaptic connections on its dendrites with the axons of other neurons; a neuron can thus receive many miniature excitatory/inhibitory post-synaptic potentials (E/IPSPs). If the resulting summation depolarizes the neuron enough that it reaches its threshold potential, a compensatory mechanism is activated to restore the polarized resting state of the membrane potential; at the same time, information is transmitted to the post-synaptic neuron through the transformation of chemical reactions into electrical currents. At this point a very fast influx of Na+ depolarizes the neuron to around +45 mV, followed by an outflux of K+ that hyperpolarizes the neuron to values even more negative than the resting potential. This event is known as firing or spiking. After spiking, the neuron eventually regains its resting potential and is incapable of being depolarized for a short period of time known as the refractory period. Curiously, these spikes, or action potentials, all have more or less the same shape and amplitude; they can therefore be treated as unitary events that carry information not in their shape but in their time of occurrence. Owing to this fact, it is possible to neglect some biophysical aspects of this mechanism and still obtain satisfactory results from reduced neuron models. Generally, the passive electrical attributes of a neuron are described by an
Figure 3.1. Equivalent electric circuit of the membrane. Figure taken from [1].
equivalent circuit (Figure 3.1). To clarify this analogy I shall further elaborate on the biophysical properties of the membrane. Ionic transport through the ion channels is influenced by the driving force, i.e. the difference between a channel's intrinsic (reversal) potential and the membrane potential, and is modulated by the type of channel conductance, which can be passive or time- and/or voltage-dependent. The total membrane current is therefore obtained as

I_m = \sum_i g_i (V - E_i)    (3.1)
Different types of ion channels serve different purposes. For instance, at rest only a special kind of K+ channel and a few Na+ channels are open, guaranteeing the maintenance of the resting potential gradient across the membrane. Together, these channels are described by a voltage-independent conductance g_L = 1/R and constitute the leak current:

I_L = g_L (V - E_L)    (3.2)
Another type of channel, the transmitter-activated channels, is involved in synaptic transmission. Simply put, these channels respond when pre-synaptic spikes arrive, so their conductance is time-dependent. Here we ignore any implicit voltage dependence of the conductances [13]:

I_s(t) = g_s(t) (V - E_s)    (3.3)
The membrane itself can be considered a semipermeable surface acting as a capacitance that is charged by the accumulation of ions inside the cell. By convention, membrane currents are positive when positive ions leave the cell and negative when they enter. Therefore, the injected current in an experimental setting is taken as positive, and inward (positive-ion) membrane currents as negative [1]. According to the definition of capacitance and the conservation of currents [13], C = Q/V and I_c(t) = C dV/dt, so

I_inj(t) = I_c(t) + \sum_{ch} I_{ch}(t)    (3.4)

which brings us to

C dV/dt = -\sum_{ch} I_{ch}(t) + I_inj(t)    (3.5)
Figure 3.2. Lapicque model. Figure taken from: http://fourier.eng.hmc.edu/e180/lectures/signal2/node4.html

Now, to simplify things further, we can reduce this equivalent circuit even more by neglecting the contribution of all active conductances. We then end up with the simple leaky integrate-and-fire (LIF) neuron model, introduced by Lapicque in
1907, long before the discovery of the action potential generating mechanisms [1]. The equivalent circuit for this simplified model is shown in Figure 3.2 (here V_rest = 0):

C dV/dt = -g_L (V(t) - V_rest) + I_inj    (3.6)
Figure 3.3. Stimulation of a LIF neuron by a constant current injection. T_int is the time interval between spikes; t_ref is the refractory period, during which the capacitance is not charged, consistent with the associated equivalent circuit. Figure taken from: http://fourier.eng.hmc.edu/e180/lectures/signal2/node4.html
While V < V_θ, the temporal evolution of the membrane potential is captured by the simple linear eq. [3.6]. According to figure 3.3, as soon as V ≥ V_θ the neuron spikes; there is a sudden drop to the resting point and a short plateau of refractoriness (which is not captured by the LIF equation; the illustration is based on a simulation with an added t_ref). It is easy to gather that in the absence of any input the potential decays exponentially towards the equilibrium state, due to the passive ion diffusion captured by the leak term. In this example the Lapicque model describes the dynamics of a neuron under constant stimulation, but the level of rigor can be extended to include synaptic conductances, stochastic currents, etc. Our choice of input current will be:

I(t) = I_s(t) = g_s(t) (V − E_s)    (3.7)
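As a concrete illustration of eq. [3.6], the following sketch integrates the Lapicque model with the Euler method and compares the simulated interspike interval with the standard analytic value for a constant suprathreshold drive, T_isi = t_ref + τ_m ln[(V_∞ − V_rest)/(V_∞ − V_θ)]. The membrane parameters are the MSN-like values of table 3.1; the injected current I_inj is a hypothetical suprathreshold choice, not a value from the thesis:

```python
import numpy as np

# Parameters: MSN-like values from table 3.1; I_inj is an arbitrary suprathreshold current.
C, g_L = 500.0, 12.5              # pF, nS
V_rest, V_th = -80.0, -45.0       # mV
t_ref, dt, T = 2.0, 0.01, 2000.0  # ms
I_inj = 600.0                     # pA

tau_m = C / g_L                   # membrane time constant: 40 ms
V, ref_left, spikes = V_rest, 0.0, []
for step in range(int(T / dt)):
    if ref_left > 0:              # refractory: capacitance is not charged
        ref_left -= dt
        continue
    V += dt * (-g_L * (V - V_rest) + I_inj) / C   # Euler step of eq. (3.6)
    if V >= V_th:                 # threshold crossing: spike, then reset
        spikes.append(step * dt)
        V, ref_left = V_rest, t_ref

V_inf = V_rest + I_inj / g_L      # steady-state voltage of the free equation
T_isi = t_ref + tau_m * np.log((V_inf - V_rest) / (V_inf - V_th))  # analytic interspike interval
```

For these values the analytic interval is roughly 54 ms, and the Euler simulation reproduces it closely, illustrating the regular firing pattern of figure 3.3.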
3.2 Synapse Model
Synaptic transmission is usually thought of as a conductance change in the post-synaptic membrane, resulting from exposure to neurotransmitters released into the synaptic cleft upon arrival of a pre-synaptic action potential. According to experimental observations, the time course of this post-synaptic change due to an incoming spike is, in its simplest form, described by an exponential decay with time constant τ and peak amplitude g_peak. Θ(t) is the unit step function that guarantees the absence of any conductance increase before the spike arrival [23]:

g_x(t) = g_peak e^{−t/τ} Θ(t)    (3.8)

so that g_x(t) = 0 for t < 0. As mentioned before, the shape of the action potential can be ignored and spikes are approximated by consecutive unitary events of identical form occurring at random times t_j. Thus the so-called spike train can be described as follows:

S(t) = Σ_j δ(t − t_j)    (3.9)
and the total post-synaptic change inflicted by a single synapse is the sum of time-shifted conductance functions:

g_s(t) = Σ_j g_x(t − t_j)    (3.10)
This means that each spike inflicts the same conductance change, and the conductance change in response to serial spikes is the sum of these individual changes [23]. Here we ignore the effects of synaptic plasticity due to facilitation and depression mechanisms.
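The superposition in eqs. (3.8)-(3.10) is straightforward to express in code. A minimal sketch (the function name and its parameters are ours, introduced only for illustration):

```python
import numpy as np

def g_syn(t, spike_times, g_peak, tau):
    """Total synaptic conductance of eq. (3.10): a sum of time-shifted
    exponential kernels g_x (eq. 3.8), one per presynaptic spike."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    lags = t[:, None] - np.asarray(spike_times, dtype=float)[None, :]  # t - t_j
    kernel = np.zeros_like(lags)
    causal = lags >= 0                      # Theta(t - t_j): no effect before arrival
    kernel[causal] = np.exp(-lags[causal] / tau)
    return g_peak * kernel.sum(axis=1)

# Two spikes 5 ms apart with tau = 5 ms: at the second spike the conductance is
# g_peak * (1 + e^-1), i.e. the second kernel adds onto the decayed first one.
```

This linear summation is exactly the "identical change per spike" property stated above; a facilitating or depressing synapse would break it.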
3.3 Network Model
Neurons are connected via synapses and form ensembles known as neural circuits. The striatum network, figure 3.4 (following the same structure as [3]), is modeled as 3 sub-circuits, with recurrent inhibitory connections between D1 and D2 and feed-forward inhibitory connections from FSIs to the D1 and D2 populations. All populations receive background excitation that represents ongoing brain activity; it is modeled as independent Poisson processes with a minimum rate of 2500 Hz, to evoke the basal network activity associated with the silent awake state. Cortical excitatory input to these sub-circuits is primarily modeled as independent Poisson processes with rates in the range 50-4550 Hz. Network parameters such as connection probabilities and neuron constants are mostly adapted from [3]. Peak conductances and a few other parameters had to be adjusted for the choice of synaptic model used here. Network specifications are given in table 3.1, table 3.2 and table 3.3. As stated in the previous sections, the LIF neuron model
Figure 3.4. Striatum schematic. Point arrows represent excitation from the cortex (red) and from the background (cyan). Circles represent recurrent (green) and feed-forward (blue) inhibition.
was chosen to simulate the striatal neuron. Each MSN receives synaptic currents from the D1, D2 and FSI populations besides the cortex. With x ∈ {D1, D2} and q ∈ {exc, inh}:

C_x dV^x/dt + G_x [V^x(t) − V_rest] = I_s(t)    (3.11)

I_s(t) = I_d1(t) + I_d2(t) + I_fsi(t) + I_ctx(t)    (3.12)

g_q^x(t) = g_q^x e^{−t/τ}  for t ≥ 0 ;  0  for t < 0    (3.13)
Here I chose the same time constant and temporal function for all excitatory and inhibitory synapses in the network, eq. [3.13]. If we suppose that all synaptic connections from a homogeneous pre-synaptic pool have an average strength, then all action potentials from different spike trains belonging to the same pre-synaptic pool presumably impose identical conductance changes. Therefore the total excitatory conductance G^x_exc(t) in an MSN i is:

G^x_exc,i = Σ_{m ∈ K_i^ctx} Σ_n g^x_exc(t − t^ctx_mn)    (3.14)
Here m sums over the synaptic connections that an MSN receives from the cortex and n runs over the spikes arriving at each synapse. By the same token, the total inhibitory conductance G^x_inh(t) in an MSN i comes from 3 separately homogeneous subpopulations. Δ stands for the synaptic delays.

G^x_inh,i = Σ_{m ∈ K_i^fsi} Σ_n g^fsi_inh(t − t^fsi_mn − Δ^fsi) + Σ_{m ∈ K_i^d1} Σ_n g^d1_inh(t − t^d1_mn − Δ^d1) + Σ_{m ∈ K_i^d2} Σ_n g^d2_inh(t − t^d2_mn − Δ^d2)    (3.15)
The total synaptic current for an MSN i is:

I_i^x(t) = −G^x_exc,i(t) [V_i^x(t) − V^x_exc] − G^x_inh,i(t) [V_i^x(t) − V^x_inh,i]    (3.16)
And finally the total synaptic input for an FSI i, which only comes from the cortex, is:

G^fsi_exc,i = Σ_{m ∈ K_i^fsi} Σ_n g^fsi_exc(t − t^ctx_mn)    (3.17)

I_i^fsi(t) = −G^fsi_exc,i(t) [V_i^fsi(t) − V^fsi_exc]    (3.18)
Numerical simulations were conducted in the Neural Simulation Tool (NEST) [15] for a network of 4080 neurons, using PyNEST [11] as the user interface. The time step for recording data was 0.1 or 0.01 ms. Each simulation was run for at least 10 s, and the results were obtained by averaging over 50-100 simulations for a given cortical input rate. NEST integrates the differential equations of the chosen neuron/synapse model with a rigorous technique known as Exact Integration [31]. For dynamical systems such as the LIF neuron that can be somewhat nonlinear, NEST applies matrix exponential methods to minimize numerical errors. Individual firing events can be recorded by connecting spike detectors to the spiking neurons of the different populations in the network. Hence the average population activity is easily obtained by dividing the total number of recorded spikes of a population by the number of neurons and the simulation duration. To visualize and analyze the data, Matplotlib/Pylab and other Python packages such as NumPy [40] and SciPy were employed.

Parameter            MSN(D1, D2)   FSI
Number of Neurons    4000          80
V_rest (mV)          -80           -80
V_exc (mV)           0             0
V_inh (mV)           -64           -76
V_th (mV)            -45           -54
τ_exc,inh (ms)       0.3           0.3
τ_ref (ms)           2             2
C (pF)               500           500
G_rest (nS)          12.5          25

Table 3.1. Parameters for model neurons in network simulations
source   target   probability   peak conductance (nS)   delay (ms)
D1       D1       0.13          9.06                    1
D1       D2       0.03          21.92                   1
D2       D2       0.18          18.12                   1
D2       D1       0.135         28.94                   1
FSI      D1       0.27          47.11                   0.5
FSI      D2       0.16          47.11                   0.5

Table 3.2. Inhibitory input to striatal neurons
target   Background input rate (Hz)   peak conductance (nS)
D1       2500                         10.11
D2       2500                         8.15
FSI      2500                         12.23

Table 3.3. Cortical input to striatal neurons
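The connection probabilities of table 3.2 correspond to random (Bernoulli) connectivity. The sketch below checks the expected inhibitory in-degree of a D1 cell, assuming, for illustration only (the split is not specified above), that the 4000 MSNs divide evenly into 2000 D1 and 2000 D2 neurons:

```python
import numpy as np

rng = np.random.default_rng(0)
n_d1 = n_d2 = 2000                 # assumed even split of the 4000 MSNs
p_d1_d1, p_d2_d1 = 0.13, 0.135     # D1->D1 and D2->D1 probabilities (table 3.2)

# Bernoulli adjacency matrices: entry (i, j) is True when source j projects to target i.
adj_d1_d1 = rng.random((n_d1, n_d1)) < p_d1_d1
adj_d2_d1 = rng.random((n_d1, n_d2)) < p_d2_d1

# Recurrent MSN in-degree of each D1 cell; expectation: 2000*0.13 + 2000*0.135 = 530
in_degree = adj_d1_d1.sum(axis=1) + adj_d2_d1.sum(axis=1)
```

Such sparse random connectivity, with in-degrees large but far below the population size, is what later justifies the mean-field treatment of chapter 4.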
Chapter 4

Analytical Model

The level of activity in a neural network is sampled by its neurons, whose dynamical evolution is characterized by membrane potential fluctuations; this information is therefore encoded in the membrane potential fluctuations [27]; figure 4.1. The causal relationship between the temporal evolution of the potential, the pre-synaptic activities and the membrane discharge was formally introduced in chapter 3. In this chapter I provide a deeper analysis of the already introduced models and describe the mathematical approaches I have followed to characterize the membrane potential fluctuations, and to infer from this characterization the level of striatal population activity along with other quantities of interest. To avoid repetitive citations, the main framework is divided in two parts that are mostly based on the works of [29] and [35], respectively.
Figure 4.1. Membrane potential fluctuations carry information of the biological properties of the neuron and the activity of afferent neural networks. The black arrow represents an intracellular electrode recording the membrane potential. Important biological properties include the number of excitatory and inhibitory synapses Ne,i , the synaptic time constant τe,i and synaptic strength he,i . Figure adapted from [27]
4.1 Stochastic Neural Activity
Figure 3.3 shows the stimulation of a LIF neuron by a constant current. This regular pattern of action potentials in response to a supra-threshold injected current is not observed for many neurons in vitro [23]. According to some in vivo experiments [4], strong visual stimuli with a specific time course throughout the experiment can elicit quite similar firing patterns over different trials. However, trial-to-trial variability is commonly observed in response to weaker stimuli [18]. In addition, neurons show spontaneous irregular activity in the absence of any special stimulus. Considering the fact that neurons are constantly bombarded by a barrage of pre-synaptic inputs, it seems reasonable to relate the response variability and irregularity to a considerable amount of fluctuation in the input currents, despite the weakness or absence of the stimuli [13]. The stochastic nature of the membrane potential fluctuations can be considered as evidence of noise in the central nervous system [13]. Therefore, in order for the model neuron to mimic realistic behavior, the membrane potential or synaptic input can be modeled as a noise process with appropriate statistical properties [27].
4.2 Framework Description: Part One
Consistent with the choice of neuron and synapse model used in the numerical simulations, here a passive membrane equation with fluctuating conductances represents the potential time evolution of a single MSN:

C dV/dt = −g_L(V − E_L) − g_ctx(V − E_ctx) − g_fsi(V − E_fsi) − g_d1(V − E_d1) − g_d2(V − E_d2)    (4.1)
Each MSN is bombarded by many incoming spikes per integration time, from local and external afferents, and the striatum network has sparse but numerous connections; it therefore seems reasonable to model the pre-synaptic spike trains as Poisson point processes with constant intensities r_i (in general the intensities can vary in time). These spikes, arriving at random times with Poisson statistics, elicit stereotypical quantal conductance rises c_i that decay with time constant τ. The synaptic input is shot noise generated from the superposition of these quantal responses upon spike arrivals, figure 4.2. Since each pre-synaptic pool is considered homogeneous, all synaptic events coming from a specific pool are assumed to be identical and independent processes that can be summed linearly, figure 4.1; thus multiplying the rate of these i.i.d. processes by the number of afferent connections constitutes the total rate of the Poisson process from each pre-synaptic pool. Therefore we can capture the conductance time evolution in the following differential equation, with i ∈ {ctx, fsi, d1, d2}:

τ dg_i/dt = −g_i + c_i τ Σ_{t_i^k} δ(t − t_i^k)    (4.2)
Figure 4.2. Pre-synaptic spikes produce synaptic input (top) that drives membrane potential fluctuations (bottom). Spike arrival times are generated by a Poisson point process with intensity that can vary in time. The synaptic input is shot noise generated by superposition of individual responses to pre-synaptic spikes. Figure adapted from [27]
Under the assumption that each synapse receives many quantal impingements per integration time, the shot-noise process can be approximated by a Gaussian white noise with a constant term. This constitutes the diffusion approximation. The synaptic conductance is then approximated by an Ornstein-Uhlenbeck process, with the time evolution given by the stochastic differential equation (SDE)

τ dg_i/dt ≃ g_i0 − g_i + √(2τ) σ_i ξ_i(t)    (4.3)

The Gaussian white noise has the following mean and autocorrelation: ⟨ξ_i(t)⟩ = 0 and ⟨ξ_i(t) ξ_i(t′)⟩ = δ(t − t′). The Ornstein-Uhlenbeck process reflects the statistics of the conductance fluctuations properly according to the literature [9], and the diffusion approximation is considered a good approximation for the first two moments of the shot-noise process. With R_i the total rate of incoming spike trains from each pre-synaptic population,

g_i0 = c_i τ R_i  and  σ_i = c_i √(τ R_i / 2)

To simplify the membrane equation we can describe each synaptic conductance as the sum of a tonic term g_i0 and a fluctuating term g_iF:

C dV/dt = −g_0(V − E_0) − g_ctx−F(V − E_ctx) − g_fsi−F(V − E_fsi) − g_d1−F(V − E_d1) − g_d2−F(V − E_d2)    (4.4)

where g_0 = g_L + g_ctx−0 + g_fsi−0 + g_d1−0 + g_d2−0
E_0 = (1/g_0)(g_L E_L + g_ctx−0 E_ctx + g_fsi−0 E_fsi + g_d1−0 E_d1 + g_d2−0 E_d2)
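The diffusion-limit moments g_i0 = c_i τ R_i and σ_i = c_i √(τR_i/2) can be checked against a direct simulation of the exponential shot noise of eq. (4.2). A sketch with arbitrary illustrative parameters (not the network values):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(1)
c, R, tau = 0.5, 2000.0, 0.005    # quantal jump (nS), total input rate (Hz), synaptic tau (s)
dt, T = 1e-4, 50.0                # integration step and duration (s)
n_steps = int(T / dt)

counts = rng.poisson(R * dt, size=n_steps)   # Poisson spike counts per time bin
decay = np.exp(-dt / tau)
# Recursion g_t = decay * g_{t-1} + c * counts_t: exponential shot noise (eq. 4.2)
g = lfilter([c], [1.0, -decay], counts.astype(float))
g = g[n_steps // 10:]                        # discard the initial transient

mean_pred = c * tau * R                      # g_i0 = c_i tau R_i
std_pred = c * np.sqrt(tau * R / 2.0)        # sigma_i = c_i sqrt(tau R_i / 2)
```

With many quanta per τ (here R·τ = 10), the empirical mean and standard deviation land close to the predicted tonic and fluctuating terms, which is exactly what the diffusion approximation asserts about the first two moments.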
Let us analyze one fluctuating voltage-dependent drive as an example:

g_iF (V − E_i) = g_iF (E_0 − E_i) + g_iF (V − E_0)    (4.5)

Expanding around the equilibrium voltage, we end up with an additive noise term that acts as a fluctuating current, and a multiplicative noise term that captures the fluctuations of the total conductance g_0. By comparison, these noise terms are of first and second order respectively, since V − E_0 grows linearly with g_iF. The diffusion approximation only accounts for the first two moments of the shot noise and is not a consistent approximation when the higher moments are not negligible. Therefore, in order for the diffusion approximation to remain intact, nonlinear noise terms such as the multiplicative noise have to be dismissed:
C dV/dt ≃ −g_0(V − E_0) + g_ctx−F(E_ctx − E_0) + g_fsi−F(E_fsi − E_0) + g_d1−F(E_d1 − E_0) + g_d2−F(E_d2 − E_0)    (4.6)
We now have a current-based model with an effective leak term g_0(V − E_0). This constitutes the Gaussian approximation, because by remaining consistent with the diffusion approximation we arrive at an SDE that predicts a Gaussian distribution for the voltage. The dynamics of the model neuron are now described by five coupled SDEs. Alternatively, the system can be fully described by the probability density function (pdf) of the state variables V and g_i: P(V, g_ctx, g_fsi, g_d1, g_d2, t). The dynamics of this pdf are governed by a partial differential equation (PDE) that describes the time evolution of the distribution of internal states. This probabilistic description leads to the distribution of sub-threshold membrane potential fluctuations, the quantity that reflects the level of activity, and possibly correlations, in the pre-synaptic populations. For our recurrent network we are primarily interested in the stationary mean firing rates of a d1 and a d2 MSN, since the assumptions of homogeneity and independent Poisson-type activity (for each subpopulation) imply that the population activity (D1/D2) is equal to the mean firing rate of a single MSN (d1/d2) in the corresponding population. Thus we pursue the static transfer function of a generic d1/d2 MSN under the assumption of homogeneity of each subpopulation. That is, by exploiting the assumption of sparse random connectivity, we neglect pairwise correlations between the activities of all neurons, extending the assumption of i.i.d. Poisson processes for each pre-synaptic pool to there being no correlations between the spiking activities of different pools either, i.e. independent synaptic currents. From this probabilistic description we can only infer the mean level of activity of the pre-synaptic populations. In the following section I derive the PDE, the so-called Fokker-Planck equation (FPE), for a D1 or a D2 neuron.
Since the procedures of derivation and analysis are identical for both, it suffices to show the framework for one MSN. The mutual and self-recurrent connectivities set the ground for deriving two coupled self-consistent equations whose numerical solution defines the fixed points of the striatum network. The fixed points are therefore tuples of D1 and D2 mean firing responses to any given cortical input rate. To simplify the derivation of this PDE, it is useful to introduce some variable transformations that unify the synaptic conductance SDEs into a Langevin equation for the total input current received by the MSN. The 4 conductance equations can easily be summed into one, since they represent the same stochastic processes with identical time scales. I begin by simplifying the notation: E_ctx = E_e ; E_fsi = E_d1 = E_d2 = E_i ; g_iF(t) ≡ g_i(t) − g_i0.

τ dg_iF/dt = −g_iF + τ c_i √(R_i) ξ_i(t)    (4.7)

I_F(t) = g_ctx−F(E_e − E_0) + g_fsi−F(E_i − E_0) + g_d1−F(E_i − E_0) + g_d2−F(E_i − E_0)
       = I_ctx−F + I_fsi−F + I_d1−F + I_d2−F

With the effective membrane time constant τ_m = C/g_0 and v = V − E_0,

τ_m dv/dt = −v + τ_m I_F/C    (4.8)
We replace I_F/C with I and, by taking advantage of some Gaussian properties,

ξ_i(t) ≡ lim_{dt→0} N(0, 1/dt)    (4.9)

β N(m, σ²) = N(βm, β²σ²)    (4.10)

N(m_1, σ_1²) + N(m_2, σ_2²) = N(m_1 + m_2, σ_1² + σ_2²)    (4.11)

we obtain a LIF neuron with a noisy input current [6]:

τ_m dv/dt = −v + τ_m I    (4.12)

τ dI/dt = −I + σ ξ(t)    (4.13)

where

σ = (τ/C) √[(E_e − E_0)² R_ctx c_ctx² + (E_i − E_0)² R_fsi c_fsi² + (E_i − E_0)² R_d1 c_d1² + (E_i − E_0)² R_d2 c_d2²]

With the change of variables I = (σ/√τ) z and v = σ √(τ_m) x we obtain the dimensionless state variables x and z with the following equations:

dx/dt = −x/τ_m + z/√(τ_m τ)    (4.14)

dz/dt = −z/τ + ξ(t)/√τ    (4.15)
The system has 2 characteristic time scales, τ_m and τ. In this thesis we chose to work only with fast synapses; therefore we are interested in the behavior of the system for small values of k = √(τ/τ_m). After rescaling time t → τ_m t [6]:

dx/dt = −x + z/k    (4.16)

dz/dt = −z/k² + ξ(t)/k    (4.17)
This is a pair of coupled SDEs with a slow component x driven by a fast Ornstein-Uhlenbeck process z.
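A quick Euler-Maruyama integration of eqs. (4.16)-(4.17) confirms the statistics used in the remainder of the chapter: the stationary variance of z is 1/2 (eq. 4.24 at zero lag), and for small k the variance of x stays close to 1/2 as well. The value of k and the step size below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
k = 0.3                      # k = sqrt(tau / tau_m), assumed small
dt, T = 1e-3, 200.0          # step and duration in rescaled time units
n = int(T / dt)

x = np.zeros(n)
z = np.zeros(n)
noise = rng.standard_normal(n) * np.sqrt(dt)
for i in range(1, n):
    x[i] = x[i-1] + dt * (-x[i-1] + z[i-1] / k)           # eq. (4.16)
    z[i] = z[i-1] - dt * z[i-1] / k**2 + noise[i] / k     # eq. (4.17)

burn = n // 10               # drop the transient before measuring moments
var_z = z[burn:].var()       # theory: 1/2
var_x = x[burn:].var()       # theory for this linear system: 1/(2(1 + k^2))
```

Note that dt must stay well below the fast time scale k² for the explicit scheme to remain stable.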
4.3 Diffusion Equation
Here I use a shorthand notation adapted from [24] for deriving the FPE; formal derivations can be found in [25]. First we need to discretize time in the dynamics. The Gaussian white noise is a continuous and uncorrelated stochastic process; we can approximate it with a binary random variable with the same statistics. w(t) takes the values +1 and -1 with equal probabilities and is drawn independently at each time step. Thus the overall probability of observing a positive or negative value is 1/2, which means it has zero mean ⟨w(t)⟩ = 0, unit variance ⟨w²(t)⟩ = 1 and correlation function ⟨w(t) w(t′)⟩ = 0 for t ≠ t′. To obtain the discretized white noise we use the approximation ξ_j = w_j/√δt = ±1/√δt [32]. Hence we arrive at the following update rule [16]:

x(t + δt) = x(t) − x(t) δt + (z/k) δt    (4.18)

z(t + δt) = z(t) − (z/k²) δt + (w/k) √δt    (4.19)
Here δt is an infinitesimal increment of time and P(x, z, t) is the probability density of finding the neuron in the state (x, z) at time t. Since this system of equations describes a Markovian process, to determine the FPE we only need to relate the density at time t + δt, P(x, z, t + δt), to the density at the previous time t, P(x, z, t). The probability that the neuron's state is observed in an infinitesimal square δx′δz′ around the state (x′, z′) at time t is P(x′, z′, t) δx′δz′. This state square, with surface δx′δz′ and center (x′, z′), is mapped onto a new state square close to itself, with center (x, z) and surface δxδz, after an infinitesimal time step has passed. The transition is governed by the coupled SDEs. To obey the conservation of probability, we have to sum over the probabilities of all possible paths for this transition:

P(x, z, t + δt) δxδz = Σ_{w=±1} p(w) P(x′(w), z′(w), t) δx′δz′    (4.20)
To reach (x, z) after an infinitesimal time step, the only possible path is via:

x′(w) = x + xδt − (z/k) δt    (4.21)

z′(w) = z + (z/k²) δt − (w/k) √δt    (4.22)
Besides, the previous area is squeezed into the new vicinity around (x, z) according to the decay terms in the SDEs, such that:

δxδz = (1 − δt)(1 − δt/k²) δx′δz′
By performing a Taylor expansion up to second order on the expressions under the sum, and neglecting terms of order higher than δt, all terms containing √δt cancel each other. After collecting the terms of order O(δt) and dividing both sides by δt, which approaches zero, we finally derive the FPE:

k² ∂P/∂t = ∂/∂z [(1/2 ∂/∂z + z) P] − k² ∂/∂x [(−x + z/k) P]    (4.23)

4.4 Framework Description: Part Two
We have modeled the sub-threshold dynamics of the membrane potential, in the diffusion limit, as a bivariate continuous Markov process that can be described mathematically by the diffusion eq. [4.23]. A realization of the trajectory of x(t) resembles the over-damped Brownian motion of a particle subject to a potential with meta-stable states, forced by a colored Gaussian noise. Thus, in the fluctuation-driven regime, neuronal spiking can be treated as a noise-activated escape process. Accordingly, the problem of finding the neuron's mean spiking activity is actually a mean first passage time (MFPT) problem, where the expected interval between events is the inverse of the mean firing rate (note: the refractory time can be added at the end, as it is a fixed value). To find the MFPT we have to evaluate the free diffusion equation subject to the physical constraints of the system. However, it is very difficult to deal analytically with this 2-dimensional FPE. The usual strategy is to reduce the dimensionality by introducing an effective diffusion process [10] and to find the appropriate boundary conditions for the stationary state [20]. Schuecker et al. have recently complemented this method by introducing time-dependent boundary conditions. This approach leads to a transfer function that is valid over a wider range of frequencies, in contrast to previous ones [7] that could only account for the very high or low frequency limits. In the coming sections I demonstrate the essential mathematical steps that lead to the stationary firing rate of the MSN following their method [34].
4.4.1 Methods justification
The fluctuating drive z in eq. [4.18] is a Gaussian noise with zero mean and an exponential correlation function:

⟨z(s) z(s + s′)⟩ = (1/2) exp(−|s′|/k²)    (4.24)
Since x(s) is much slower than z(s), its variance can essentially be calculated by integrating the autocorrelation function of z:

∫ (1/2) exp(−|s|/k²) ds = k²    (4.25)
On the other hand, when k → 0, i.e. in the case of instantaneous synapses, the fluctuating drive is replaced by a term z(s) = kξ(s), which has the same autocorrelation integral k². This heuristic argument justifies our attempt to model the underlying dynamics of this system by a one-dimensional effective diffusion process.
4.4.2 Effective diffusion equation
Our aim in this section is to find the z-marginalized probability flux, from which we can derive the effective one-dimensional FPE. To do so we first simplify the z-dependent pdf by factoring off the stationary solution of the fast part of the FP operator: P(x, z, s) = Q(x, z, s) e^{−z²}/√π. This transforms the FP operator into the following form:

k² ∂Q/∂s = LQ − kz ∂Q/∂x + k² ∂/∂x (xQ)    (4.26)

with the operator L = (1/2) ∂²/∂z² − z ∂/∂z. Now we apply a perturbation ansatz,
Q(x, z, s) = Σ_{n=0}^{2} k^n Q^n(x, z, s) + O(k³)    (4.27)

so that we can equate the terms associated with different orders of k. Note that, according to the flux operator in the x direction, S_x = −x + z/k in eq. [4.23], in order to find the first-order correction to the z-marginalized probability flux ν_x(x, s) ≡ ∫ dz (e^{−z²}/√π) S_x Q(x, z, s) we need to carry the perturbation expansion up to second order in k:

L Q^0 = 0    (4.28)

L Q^1 = z ∂_x Q^0    (4.29)

L Q^2 = ∂_s Q^0 + z ∂_x Q^1 − ∂_x (x Q^0)    (4.30)
Solving these equations, we arrive at:

Q(x, z, s) = Q^0(x, s) + k Q^1_0(x, s) − kz ∂/∂x Q^0(x, s) − k² z ∂/∂x Q^1_0(x, s) + ...    (4.31)
Considering the property of the operator, L z^n = (1/2) n(n−1) z^{n−2} − n z^n, the first term is independent of z and corresponds to the pdf of the white-noise FPE. We neglect the terms proportional to k²z², since they correspond to a higher-order correction of the flux; in addition we only keep the terms with obvious non-vanishing contributions to the marginalized flux. Substituting this outer solution into the z-marginalized probability flux, we end up with:

ν_x(x, s) = ∫ dz (e^{−z²}/√π) (−x + z/k) Q
          = (−x − (1/2) ∂/∂x) [Q^0(x, s) + k Q^1_0(x, s)] + O(k²)    (4.32)
The first term in parentheses is basically the flux operator of a white-noise FPE with the marginalized density

P(x, s) ≡ Q^0(x, s) + k Q^1_0(x, s)    (4.33)
Using the continuity equation we obtain the effective one-dimensional FPE:

∂_s P = −∂_x ν_x(x, s) = ∂_x [(x + (1/2) ∂_x) P]    (4.34)
Now it is time to evaluate the marginalized density by introducing the proper boundary conditions. The one-dimensional FPE for the white-noise problem has been solved exactly; hence the only unknown component here is the boundary value of Q^1_0. To find this value, we have to revisit eq. [4.23].
4.4.3 Effective boundary conditions
Assuming an absorbing boundary at x = θ means that the flux should vanish for all trajectories with negative velocities at this border; hence for −θ + z/k < 0 we have Q(θ, z, s) = 0. We choose to work in a z-shifted coordinate, z − kθ → z, with the following flux operator and boundary condition:

S_x = −x + θ + z/k    (4.35)

Q(θ, z, s) = 0  ∀z < 0    (4.36)

Here we are dealing with a physical system that resets the variable x to the resting value R after firing. Thus the escaped flux re-enters the system at the reset point:

ν_x(θ, z, s) = S_x (Q(R+, z, s) − Q(R−, z, s)) = 0    (4.37)
which yields another boundary condition:

0 = Q(R+, z, s) − Q(R−, z, s)  ∀z < 0    (4.38)

This boundary condition accounts for a non-continuous jump of the marginalized density at the reset point. Replacing x by the scaled variable r = (x − {θ, R})/k in the shifted FPE offers a boundary-layer solution Q^B(r, z, s) ≡ Q[x(r), z, s] that should agree with the outer solution:

k² ∂_s Q^B = L Q^B + k [θ(∂_z − 2z) + ∂_r(kr)] Q^B − z ∂_r Q^B    (4.39)
with the boundary condition Q^B(0, z, s) = 0  ∀z < 0. Note that here we only demonstrate the procedure of matching the solutions at the threshold, which is analogous to doing so at the reset point. Applying the same perturbation ansatz, we arrive at:

L Q^{b0} − z ∂_r Q^{b0} = 0    (4.40)

L Q^{b1} − z ∂_r Q^{b1} = [θ(∂_z − 2z) + ∂_r(kr)] Q^{b0}    (4.41)
The outer solution does not have a strong dependence on r; hence its Taylor expansion should match the boundary-layer solution:

Q^b(r, z, s) = Q(θ, z, s) + kr ∂_x Q(θ, z, s)    (4.42)
By matching the terms of zeroth order in k, the solution Q^{b0} of eq. [4.40] becomes zero, since we know the boundary solution of the white-noise system: Q^0(θ, z, s) = 0. A Taylor expansion of the outer solution, eq. [4.31], results in 3 terms of first order in k that should match the solution of eq. [4.41]:

Q^{b1}(r, z, s) = Q^1_0(θ, s) − z ∂_x Q^0(θ, s) + r ∂_x Q^0(θ, s)    (4.43)
Revisiting the instantaneous flux at threshold in the white-noise problem simplifies the latter equation:

∂_x Q^0(θ, s) = −2 (−(1/2) ∂_x − θ) Q^0(θ, s) = −2 ν_x^0(θ, s)    (4.44)
Thus eq. [4.43] now takes the form

Q^{b1}(r, z, s) = Q^1_0(θ, s) + 2 ν_x^0(θ, s) (z − r)    (4.45)
The PDE in eq. [4.41] takes a homogeneous form when Q^{b0} = 0, and it has a known solution according to [20]. Therefore, by matching eq. [4.45] with this solution, we can finally find the unknown term of the marginalized density:

Q^1_0(θ, s) + 2 ν_x^0(θ, s) (z − r) = C(s) (α/2 + z − r + Σ_{n=1}^∞ b_n(z) e^{√(2n) r})    (4.46)
α is related to Riemann's ζ-function: α = √2 |ζ(1/2)|. The exponential term does not have a counterpart on the LHS, since it is a function of a short-range variable whereas the outer solution changes over a long range; thus it is bound to be ignored. As a result of matching the corresponding terms we finally have:
Q^1_0(θ, s) = α ν_x^0(θ, s)    (4.47)
which fixes our boundary solution for the effective FPE in eq. [4.34]:

P(θ, s) = Q^0(θ, s) + k Q^1_0(θ, s) = kα ν_x^0(θ, s)    (4.48)
According to the previous methods suggested by [7, 20] the stationary boundary solutions of the colored noise system is equal to the solution attained from shifting the boundaries in the white noise case. Curiously we can reach the same conclusion by expanding our time dependent boundary solution around this shift. kα P θ, s = P θ + ,s 2 α = P (θ, s) + k ∂x P (θ, s) + O k 2 2 α 0 = kανx (θ, s) − k 2νx0 (θ, s) + O k 2 2 2 =O k
(4.49) (4.50) (4.51) (4.52)
Analogously we can reach this conclusion for the reset boundary solution. Therefore in the stationary state we can replace the colored noise case with a white noise diffusion process with shifted boundaries.
4.5 Stationary firing rate
The effective diffusion eq. [4.34] can be written in operator notation via the simple variable transformation y = √2 x, ρ(y, s) ≡ (1/√2) P(y/√2, s):

∂_s ρ(y, s) = −∂_y Φ(y, s) ≡ L_0 ρ(y, s)    (4.53)

Φ(y, s) = −(y + ∂_y) ρ(y, s)    (4.54)
According to [30], when a stationary solution of an FPE exists, such as ρ_0 = e^{−y²/2} here, it is possible to transform the operator into a hermitian form. This transformation is useful since the resulting Hamiltonian has a known steady-state solution. Here, by factoring off the square root of the stationary solution, u(y) = e^{−y²/4}, we achieve such a transformation for the FP operator in eq. [4.53]:

∂_s q(y, s) = −a†a q(y, s)    (4.55)
where q(y, s) = u^{−1}(y) ρ(y, s). The ascending/descending operators a† ≡ (1/2)y − ∂_y and a ≡ (1/2)y + ∂_y fulfill the relations [a, a†] = 1 and a†a (a†)^n q_0 = n (a†)^n q_0. In the stationary state the probability flux for the white-noise scenario is a constant value, which exists between the reset and threshold and is captured by the following differential equation:

−u a q_0(y) = τν H(y − y_R) H(y_θ − y)    (4.56)

which has the homogeneous solution q_h^0 = u(y), a q_h^0 = 0. The complete solution of eq. [4.56] is known for the white-noise boundary conditions and should be normalized in such a way that:

1 = ∫ ρ(y) dy    (4.57)

After normalization we finally arrive at:

(τ ν_0)^{−1} = ∫_{y_R}^{y_θ} u^{−2}(y) F(y) dy    (4.58)

with F(y) = √(π/2) [1 + erf(y/√2)] and ν_0 the stationary firing rate for the white-noise case. Therefore, by simply shifting the boundaries,

{θ, R} → {θ, R} + k α/2    (4.59)

we obtain the stationary firing rate for the system with colored noise:

ν = [ τ ∫_{y_R + αk/√2}^{y_θ + αk/√2} u^{−2}(y) F(y) dy ]^{−1}    (4.60)
This integral defines the mean firing activity of an MSN based on the first and second moments of its rate-dependent stationary synaptic inputs. In other words, it is a self-consistent formulation of the neuron's static transfer function. Owing to the asymmetric connectivity, in order to find the stationary activities of a D1 and a D2 neuron simultaneously, we have to solve the corresponding coupled integral equations numerically or graphically. The solution then reveals the fixed points of the network dynamics. This constitutes a first-order mean-field description of the stationary dynamics of the striatum network. Schuecker et al. also offered a formulation for the non-stationary transfer function following the same methodology described in this section. From there it is possible to infer the second moments of the network activity, which reveal key characteristic features of the corresponding state.
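A numerical sketch of this recipe: the white-noise first-passage integral with the shifted boundaries of eqs. (4.59)-(4.60), written in the common dimensionless (mean, variance) parameterization, gives the single-neuron transfer function, and a damped fixed-point iteration then couples two such transfer functions. The voltage parameters follow table 3.1; the effective couplings w_ij, the input statistics and the iteration settings are hypothetical illustration values, not the thesis network's:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

ALPHA = np.sqrt(2.0) * 1.4603545  # alpha = sqrt(2)*|zeta(1/2)|, the boundary-shift constant

def siegert_rate(mu, sigma, tau_m=0.04, tau_ref=0.002, v_th=-45.0, v_r=-80.0, k=0.0):
    """Stationary LIF rate (Hz): white-noise first-passage integral with the
    colored-noise boundary shift alpha*k/2 of eqs. (4.59)-(4.60)."""
    lo = (v_r - mu) / sigma + 0.5 * ALPHA * k
    hi = (v_th - mu) / sigma + 0.5 * ALPHA * k
    integral, _ = quad(lambda u: np.exp(u * u) * (1.0 + erf(u)), lo, hi)
    return 1.0 / (tau_ref + tau_m * np.sqrt(np.pi) * integral)

def striatal_fixed_point(mu_ctx, sigma=6.0, k=0.1, mix=0.3, n_iter=300):
    """Damped iteration of the two coupled self-consistent rate equations.
    w_ij: hypothetical inhibitory couplings in mV per Hz of presynaptic rate."""
    w11, w12, w21, w22 = 0.08, 0.02, 0.09, 0.12
    r1 = r2 = 1.0
    for _ in range(n_iter):
        f1 = siegert_rate(mu_ctx - w11 * r1 - w21 * r2, sigma, k=k)
        f2 = siegert_rate(mu_ctx - w12 * r1 - w22 * r2, sigma, k=k)
        r1 = (1.0 - mix) * r1 + mix * f1
        r2 = (1.0 - mix) * r2 + mix * f2
    return r1, r2
```

The damped iteration converges whenever the composite map is contracting; graphically, the resulting fixed point is the intersection of the two nullclines r1 = f1(r1, r2) and r2 = f2(r1, r2), mirroring the "numerically or graphically" solution mentioned above.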
Chapter 5

Results And Discussion

Before presenting the results, let us recapitulate the principal presuppositions that let us define a mesoscopic description of the striatal activity. The assumption of independent Poisson activity is aligned with a commonly observed neuronal phenomenon characterized by asynchronous and irregular behavior at the level of the network and neuron activity, respectively. By definition this macroscopic (network-level) state has a constant population activity, i.e. a unique steady state that is asymptotically stable [13]. Such premises, together with the sparse random connectivity in a finite-size network model of two separately homogeneous subpopulations, comprise the essential ingredients for postulating a first-order mean-field description of the dynamics [13]. In the following section we examine the accuracy of the calculated static transfer function for a single neuron that receives independent excitatory and inhibitory Poisson spike trains as inputs. In other words, we look at the way we applied calculus to build approximations to functions (ultimately the static transfer function of an LIF neuron driven by colored noise) and try to quantify how accurate we should expect these approximations to be. Later we can exploit the stipulated accuracy measure for the network analysis. For the sake of consistency, we chose parameters similar to those of the MSNs in the network (except for the absolute peak conductance values).
5.1 Single neuron
Here we change the inhibitory input rates relative to the excitatory input rates in such a manner that the mean membrane potential remains fixed at -48 mV over the entire input range. We chose this configuration to be able to analyze the results as a function of a single input rate (excitatory or inhibitory); otherwise we would have to deal with 3D matrices displayed as color-coded diagrams, which can be quite ambiguous. The neuron parameters are displayed in table 3.1. As mentioned in the second part of the framework, the dynamics of the colored-noise system was approximated by an effective white-noise system using a perturbation method.
The approximation is valid up to first order in the perturbation parameter k = √(τs/τm) for the probability flux distribution. Based on the evaluations of Schuecker et al., this method is accurate over a limited range 0 < k < 0.1. In that study a current-based synaptic model was utilized, so the perturbation method was the main approximation besides the diffusion approximation; accordingly, the accuracy of the predicted firing rate relative to numerical simulations was evaluated as k increased incrementally, showing the best match at the beginning of the interval. In our case, however, we applied a conductance-based synaptic model and, following the first part of the framework, had to neglect the multiplicative noise in order to remain consistent with the diffusion approximation, which imposed yet another (Gaussian) approximation on the calculations. Hence, when evaluating the accuracy, we also have to take into account the validity of the Gaussian approximation. According to [13], to model the membrane voltage in greater detail one has to consider the fluctuations of the total conductance (the effective time constant) together with an extra term in the additive noise (see eq. 4.5) that comes from the higher moments of the shot noise (proportional to σ/g0). Based on this framework, the first-order correction adds an extra term to the mean membrane potential (originating solely from the conductance fluctuations) but leaves the potential variance unchanged; moreover, it results in a skewed potential distribution. It turns out that these higher-order terms skew the Gaussian distribution in two opposite directions. Here I measured the total skew and the leading-order correction term to the mean potential, based on their calculations, in order to quantify the contribution of the Gaussian approximation to the mismatch between the numerical simulations and my results.
Since the membrane time constant is a decisive factor in both approximations (effective white noise and Gaussian), I evaluated the model for five different values of the membrane capacitance, hence five different ranges of effective time constants (since the effective time constant varies over the input range). As the input rates increase, the total membrane conductance also increases, and the effective time constant therefore decreases. Here the range of input rates is bounded in such a manner that k = √(τs/τm) stays roughly below 0.1 across the whole input range for C = 100, 200, 300, 400, 500 pF. The results are compared with 100 realizations of network simulations of 50 s duration and 0.01 ms resolution; error bars indicate the standard deviation across realizations. As expected, the first two moments of the diffusion approximation obtained from the calculation match the first two moments of the shot-noise process perfectly, according to plots A and B in figure 5.1; these results are identical for the different capacitance values. Plot C in figure 5.1 shows that the third moment of the shot-noise process (proportional to σ/g0) becomes more negligible as the conductance noise strength becomes smaller at higher input rates, thereby improving the diffusion approximation. Figure 5.2 demonstrates the parameter k and the skew of the Gaussian distribution due to the conductance fluctuations (cyan dashes) and the shot noise (yellow dashes) for the different capacitances over the entire input range. The total skew (green dashes)
Figure 5.1. Measuring the accuracy and adequacy of the diffusion approximation. (A), (B): Mean and standard deviation of the inhibitory and excitatory synaptic conductances and their good match with the first two moments of the synaptic shot noise obtained by numerical simulations. (C): the magnitude of σe,i/g0, which determines whether it is possible to approximate the shot-noise processes based only on their first two moments. High values of this ratio for the inhibitory input demonstrate the importance of the higher-order moments of the shot noise, proportional to this ratio, that are missed by the diffusion approximation. The approximation improves as we approach the high-conductance regime, i.e. for higher input rates.
Figure 5.2. Upper panel: measuring the third cumulant of the subthreshold membrane potential distribution, which accounts for the skewness of the Gaussian form. The total skew (green) is the linear sum of two terms, due to the contribution of higher-order noise, that tilt the Gaussian distribution in opposite directions: the skew due to the third moments of the shot noise (yellow) and the skew from the conductance fluctuations (cyan). A-E demonstrate the level of the skew for different capacitance values over the same synaptic conductance range. Lower panel: validity of the effective white-noise approximation. a-e show the amplitude of k for different capacitance values over the same synaptic conductance range.
is measured by adding these two skews. According to these plots, both approximations (Gaussian and effective white noise) improve as the time constant becomes larger (going from A-a to E-e), since the ranges of the skew and of k decrease in such a manner that for C = 500 they are half of their values for C = 100.
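The shrinking effective time constant invoked above can be sketched numerically. In the fragment below every parameter value is an illustrative assumption rather than an entry of table 3.1; it only demonstrates how the mean conductance of Poisson-driven synapses, g0 = λ·τsyn·ḡ, grows with the input rate, thereby shortening τeff = C/gtot and enlarging the perturbation parameter k (here computed with τeff in place of τm, a high-conductance-regime assumption).

```python
import numpy as np

def effective_time_constant(rate_exc, rate_inh, C=500.0,   # pF (illustrative)
                            g_L=10.0,                      # nS, leak conductance
                            g_exc=1.0, g_inh=1.5,          # nS, peak conductances
                            tau_syn=0.5):                  # ms, synaptic time constant
    """Return (tau_eff in ms, k) for Poisson input rates given in Hz."""
    g0_exc = rate_exc * 1e-3 * tau_syn * g_exc   # mean excitatory conductance (nS)
    g0_inh = rate_inh * 1e-3 * tau_syn * g_inh   # mean inhibitory conductance (nS)
    g_tot = g_L + g0_exc + g0_inh                # total conductance (nS)
    tau_eff = C / g_tot                          # pF / nS = ms
    k = np.sqrt(tau_syn / tau_eff)               # perturbation parameter
    return tau_eff, k

for rate in (500.0, 1000.0, 2000.0, 4000.0):     # Hz
    tau_eff, k = effective_time_constant(rate, 0.8 * rate)
    print(f"rate = {rate:6.0f} Hz   tau_eff = {tau_eff:5.1f} ms   k = {k:5.3f}")
```

With these assumed numbers, raising the input rate visibly shortens τeff and enlarges k, consistent with the statement above that higher input rates push the neuron towards the high-conductance regime.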
Figure 5.3. Comparing the calculated mean values of the potential fluctuations with the numerical simulation results. Red lines represent the analytically obtained values; blue dots represent the average over 100 simulations and blue bars the standard deviation across realizations. Upper panel: mean membrane potential according to the Gaussian approximation. A-E depict the calculated results for different capacitance values over the same conductance range. Lower panel: analytical results for the mean membrane potential when the calculation goes beyond the Gaussian approximation; a-e as in the upper panel. By adding the first-order correction term, which is solely due to conductance fluctuations, we obtain a perfect match with the numerical results, as shown in a-e.
Figure 5.3 shows the match for the mean membrane potentials. The first row (A-E) demonstrates the match before adding the first-order correction. It is obvious from the second row (a-e) that the match becomes perfect after adding the correction term. Here we also observe that the match improves from A to E as the higher
noise order becomes more negligible for the larger effective time constants, to the extent that in plot E the calculated value falls within the confidence interval of the numerical simulation even without the correction.
Figure 5.4. Comparing the calculated standard deviation of the membrane fluctuations with the simulation results. Red lines represent the analytically obtained values; blue dots represent the average over 100 simulations and blue bars the standard deviation across realizations. A-E depict the results for different capacitance values over the same conductance range.
Figure 5.4 shows that the calculations capture the potential standard deviation quite perfectly, as all the predicted values lie within the confidence interval of the numerical simulations. Figure 5.5 presents the average spiking rate of a single neuron, anticipated by eq. 4.61, for C = 200 to 500 pF. To evaluate this integral we applied the left Riemann sum method with very small intervals of 1/4000. Here we can observe that the calculated values overlap with the upper edge of the confidence interval for the case C = 500, resulting in a slightly better match in panel D, which is aligned with our anticipations thus far. According to what we have observed above, we would naturally expect
Figure 5.5. Comparing analytical (red) and numerical (blue) results obtained for the average firing rates. The average firing rate is calculated based on shifted boundaries (a first-order correction due to the additive colored noise) and the modified mean potential (a first-order correction due to conductance fluctuations). Symbols and colors are the same as in the previous figure. A-D show the results for four capacitance values over the same range of conductances.
to see the largest deviation from the simulation results for the firing rate in the case C = 100, which possesses the biggest k and skew ranges. But figure 5.6 shows the exact opposite, which seemed counter-intuitive at first. This observation led us to pay more attention to the properties of the function in 4.58 that is being integrated over a certain boundary for each input-rate value. Figure 5.7 shows these boundaries for each capacitance value over the entire input range. It is obvious that C = 100 possesses the smallest boundary range for all the inputs, but this fact should not really contribute to a smaller numerical estimation error, since we performed the summation with very small intervals in all cases. After plotting this function (figure 5.9) we noticed that it is monotonically increasing, except for erratic behavior that appears over a small range from -5.5 to -6, before it approaches 0 for values smaller than -6. According to the
Figure 5.6. Average firing rate for the case with the lowest capacitance value, C = 100 pF. (A) Calculated average firing rate based on the corrected mean potential value. (B) Calculated firing rate based on the mean potential value obtained from the Gaussian approximation. Both cases match the numerical results exceptionally well compared to the cases with higher capacitance values. Note that for the latter cases (C = 200, 300, 400, 500 pF) we did not include the counterpart of panel (B), since, as expected, they showed a worse match with the numerical results.
boundary values, for the case C = 100 the numerical summation only partially covers this erratic behavior (and only for the first two input rates), whereas for all the other cases the integration boundaries cover the whole unstable area. As a result, a small additional numerical estimation error can be expected for C = 200 to 500. Hence, in the case without this additional error, our calculation results in a perfect match for the corrected mean potential value. We conclude that, within the parameter space explored above, all the approximations are valid. Moreover, the effect of second-order noise can safely be ignored for the larger effective time constants, such as the case C = 500, which has a very small leading-order correction term and a very small total skewness (figure 5.10).
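The quadrature used above can be made concrete with a minimal sketch of the left Riemann sum at the same step of 1/4000. The integrand below is a simple stand-in with the same smooth, monotonically increasing character as the one plotted in figure 5.8; it is not the actual integrand of 4.58.

```python
import numpy as np

def left_riemann(f, a, b, dx=1.0 / 4000.0):
    """Approximate the integral of f over [a, b] with a left Riemann sum."""
    x = np.arange(a, b, dx)      # left endpoint of every sub-interval
    return np.sum(f(x)) * dx

# Stand-in integrand (assumption): smooth and monotonically increasing,
# qualitatively like the integrand of 4.58 away from its erratic region.
f = lambda x: np.exp(x) * (1.0 + x**2)

approx = left_riemann(f, -1.0, 2.0)
exact = 3.0 * np.e**2 - 6.0 / np.e   # antiderivative: exp(x) * (x**2 - 2x + 3)
print(approx, exact)
```

For a monotonically increasing integrand the left sum systematically underestimates the integral, but with a step of 1/4000 the error is tiny compared with the simulation confidence intervals, which supports the claim above that the summation step is not the source of the mismatch between capacitance cases.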
Figure 5.7. The range over which the MFPT integral 4.58 is calculated numerically, for the cases with different capacitance values (different colors). Lower boundary values yR are negative and upper boundary values yΘ are positive. Each dot represents the boundary value for a specific synaptic conductance (which corresponds to a specific excitatory input rate λexc); thus the vertical distance between two dots of identical color shows the range of the integral boundary for that specific input rate.
5.2 Striatum Network

5.2.1 Self-Consistent Equations
Following the mean-field formalism, each D1 MSN receives KCTX-d1, KD1-d1, KD2-d1, KFSI-d1 connections from the cortex, D1, D2, and FSI populations, with their corresponding intensities, and each D2 MSN analogously receives KCTX-d2, KD1-d2, KD2-d2, KFSI-d2 connections from these populations. The dynamics of the network is defined in terms of the population activities of the D1 and D2 MSNs; therefore λD1 and λD2 are the variables, while the cortical excitation rate together with the feed-forward inhibition rate it invokes, λCTX and λFSI, constitute the control parameters. Hence, in order to solve the coupled rate equations numerically and find the self-consistent solutions, we would have to explore a vast space of input values consisting
Figure 5.8. The integrand of 4.58 for the case C = 100 pF, at the third input-rate value, over the associated integral boundary. The area under the graph is estimated numerically using the left Riemann sum method. The horizontal axis represents the domain of the function and the vertical axis its range; both axes possess dimensionless values. The purpose of this plot is merely to depict the monotonically increasing behavior of the integrand over this specific domain.
of (λD1, λD2) tuples for each control-parameter pair (λFSI, λCTX) and find the tuples that are simultaneously fixed points of both equations. Instead of performing this rather exhausting search, we decided to employ a root-finding algorithm (the scipy.optimize.root module, method 'hybr') that takes the array of (λFSI, λCTX) as fixed parameters and the array of simulated population activities (λD1, λD2) as the initial search point, until the following condition is fulfilled within the error tolerance xtol = 1.49012e-08:
    F = [ νD1(λD1, λD2, λFSI, λCTX)
          νD2(λD1, λD2, λFSI, λCTX) ]
Figure 5.9. Erratic behavior of the integrand of 4.58. As in the previous figure, we plotted the integrand for the case C = 100 pF, this time at the first input-rate value, over the associated boundary domain. Here we zoomed in on the portion of the domain at which the function starts to behave erratically, which shows up as ripples of various sizes. The erratic behavior gets worse for domain values closer to -6, which are reached in all the other cases with capacitance values higher than 100, as those possess a wider domain; see figure 5.7 (the plots for the other cases are not shown).
and

    F = [ λD1
          λD2 ]
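To make the scheme concrete, the sketch below solves the same fixed-point condition with the root finder and settings named above, but with toy saturating transfer functions standing in for the colored-noise mean-field integrals of 4.60; every weight, gain, and rate in it is an illustrative assumption.

```python
import numpy as np
from scipy.optimize import root

NU_MAX = 60.0   # Hz, assumed saturation of the toy transfer functions

def nu(lam, lam_fsi, lam_ctx):
    """Toy (D1, D2) transfer functions with asymmetric mutual inhibition."""
    lam_d1, lam_d2 = lam
    drive_d1 = 0.02 * lam_ctx - 0.4 * lam_d2 - 0.1 * lam_d1 - 0.3 * lam_fsi
    drive_d2 = 0.02 * lam_ctx - 0.2 * lam_d1 - 0.1 * lam_d2 - 0.3 * lam_fsi
    return (NU_MAX / (1.0 + np.exp(-0.05 * drive_d1)),
            NU_MAX / (1.0 + np.exp(-0.05 * drive_d2)))

def fixed_point(lam_fsi, lam_ctx, lam0=(5.0, 5.0)):
    """Solve nu(lam) = lam, i.e. F(lam) = nu(lam) - lam = 0."""
    F = lambda lam: np.asarray(nu(lam, lam_fsi, lam_ctx)) - lam
    sol = root(F, x0=np.asarray(lam0), method='hybr',
               options={'xtol': 1.49012e-08})
    return sol.x

lam_star = fixed_point(lam_fsi=10.0, lam_ctx=1500.0)
print(lam_star)   # self-consistent (lambda_D1, lambda_D2)
```

Because the search starts from the simulated activities and stops at the nearest root, any additional fixed points remain invisible; this is exactly the limitation discussed in the next subsection.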
Figure 5.11 compares the analytical results to the average of 100 simulation realizations of 20 s duration and 0.01 ms time step. Neuron and network parameters are displayed in tables 3.1, 3.2, and 3.3. Next we considered a simple scenario for correlated cortical inputs, in which both the D1 and D2 subpopulations receive inputs from two unconnected cortical pools that have the same level of within-pool correlations and the same output rates. In this input scenario (figure 5.12) we increase the effective input variance of the cortical excitation independently of its mean value at each examined input rate, which is the same as taking the effect of coincident excitatory impingement into account, i.e. assuming cross-correlations with zero time lag for the cortical inputs to each neuron in the subpopulations. Note that since we still neglect pairwise correlations among
Figure 5.10. Percentage error of the estimated transfer-function output based on the deviations from the average realization of the numerical simulations. For the different capacitance values (different colors), at each specific input rate we estimated the error of the calculated firing rates using the percentage-error method.
MSNs, as well as MSN/FSI activities, this scenario investigates the consequences of receiving enhanced external noise. Here we intend to observe the shift of the emergent dynamical balance attained after updating the correlation values. Increasing the level of cortical input correlation, indicated by α in the legend, increases the synaptic noise at the same input rate: σeff = σctx(1 + α) [24]. According to this result, the network reaches the operating point at higher input rates as the correlations grow stronger. This means that the D2 MSNs are left behind in the power dynamics of the new regime; hence it takes further external intervention, i.e. excitatory drive, to rebalance the dynamical ground. Simply put, the ground state becomes biased in favor of the Go pathway. This result is in qualitative agreement with the numerical simulations of Bahuguna et al., fig. 6 (doi: 10.1371/journal.pcbi.1004233.g006), on the effect of cortical within-pool correlations on the OP values.
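The way zero-lag input correlations inflate the effective input variance while leaving the mean drive unchanged can be illustrated with a small Monte Carlo sketch. This illustrates the mechanism rather than the calculation in [24]: all rates, pool sizes, and the correlation value are assumptions. Pooling K Poisson trains that share a common "mother" process with pairwise count correlation c inflates the pooled count variance by a factor 1 + (K - 1)c relative to the mean, the same qualitative effect as the α-scaling above.

```python
import numpy as np

rng = np.random.default_rng(0)

def pooled_counts(K, rate, T, c, trials):
    """Pooled spike counts of K trains with pairwise count correlation c.

    Multiple-interaction process: a mother Poisson train of rate rate/c is
    thinned independently with probability c into each of the K child trains,
    so each child has the requested rate and child pairs have correlation c.
    Each mother spike then reaches Binomial(K, c) of the K trains, making the
    pooled count a compound Poisson variable.
    """
    n_mother = rng.poisson(rate / c * T, size=trials)
    return rng.binomial(n_mother * K, c)

K, rate, T, c = 100, 10.0, 1.0, 0.01          # 100 trains at 10 Hz, c = 1%
counts = pooled_counts(K, rate, T, c, trials=200_000)
fano = counts.var() / counts.mean()
print(f"Fano factor: {fano:.2f}  (prediction 1 + (K-1)c = {1 + (K - 1) * c:.2f})")
```

With these assumed numbers the pooled input keeps its mean of K·rate·T = 1000 spikes per window, yet its count variance roughly doubles, which is precisely the kind of enhanced external noise probed in this scenario.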
Figure 5.11. The match between the estimated values of the network fixed points and the population activity of the network simulations. By solving the system of coupled integral equations that represent our analytical transfer functions 4.60 for the two subpopulations, we obtained the fixed points of the network for the corresponding cortical input rate. The solutions of the coupled system were obtained numerically.
5.2.2 Dynamic State of Network Activity
Thus far, the quantitative and qualitative predictions of our theoretical model have been in good agreement with the simulations. However, the numerical method that was utilized is very limiting, since the search is confined to finding the root in the immediate vicinity of the starting point; information regarding the possible existence of other fixed points is therefore lost. Besides, knowing the behavior of the transfer function around the fixed points could provide insights into the stability of these points. According to numerous studies [5, 33], various regions of parameter space can be associated with different neuronal phenomena: the firing pattern of individual neurons can be regular (R) or irregular (I), and the population activity can manifest synchronous (S) or asynchronous (A) behavior. Here we assumed that the network activity is in the AI state, consistent with the independence ar-
Figure 5.12. Variation of the operating point of the striatum in response to increasing levels of correlation in the cortical inputs. Note that the different shades of each (blue, red) pair correspond to the estimated network response over the same range of cortical input rates.
gument, the fluctuation-driven regime, and the rest of the premises mentioned before, for all the examined constellations. In [33] it was shown that the predictions of their numerical mean-field model for the firing rates and for the mean and standard deviation of the free membrane potential fluctuations matched the results of their recurrent network simulations well for AI-type activity. They argue that the deviations are attributable to two main factors not accounted for in the mean-field model: correlations in the recurrent input, due to shared connectivity, and inputs with a regularity different from that of a Poisson process. They conclude that their model could be improved by accounting for these effects underlying the neuronal transfer function. Therefore, given that we established the accuracy level of our analytical mean-field formalism for truly independent Poisson processes in the previous section, we can argue that deviations of the mean-field predictions from the network simulation results indicate a deviation from AI-type activity. To probe the accuracy of the predictions in the network case, we compared the mean and variance of the free membrane potential fluctuations, as well as the responses of the D1
Figure 5.13. (A) Dashed lines: output response of the static transfer functions of D1 and D2 (solved separately). Solid lines: the input fed to the transfer functions, i.e. the array of population activities (FSI rates are not shown) corresponding to each cortical input rate. (B), (C) The match between the calculations and the network simulations for the mean and variance of the membrane fluctuations for the mentioned array of input rates. (D) Perturbation-parameter values associated with this input array.
and D2 MSN static transfer functions to the array of simulation output rates (for all constellations of the control parameters). According to the accuracy level established in the previous section for the static transfer function of the model neuron (figure 5.10), the output rates of the D1 and D2 neurons should not deviate from the simulation results by more than about 5 percent (when they are actually exposed to stationary Poisson inputs). This expectation is clearly defied in figure 5.13 A, whereas the perfect match for the mean membrane potential (B), together with the small values of the perturbation parameter k (D), indicates that none of the principal approximations (Gaussian, perturbation) for calculating the transfer function of a single neuron is violated. The mismatch observed in (A) is instead attributed to the rather noticeable discrepancy between the calculated and simulated potential variance. Revisiting figure 5.4 clearly suggests that the model
is capturing the second moment of the free membrane potential fluctuations quite accurately for a model neuron. Now, to probe the effect of shared connectivity and of deviations from Poisson regularity, as suggested in [33], we compute the following measures.

Irregularity: CV²ISI = Var[ISI] / E[ISI]², where ISI is the inter-spike interval between consequent firing events of each neuron. This formula captures the squared coefficient of variation of the ISI distribution. Poisson spike trains have CV²ISI = 1; smaller values imply more regular spiking.

Synchrony: Corr[Ci, Cj] = Cov[Ci, Cj] / √(Var[Ci]·Var[Cj]), which measures the correlation coefficient between the joint spike counts of each pair of neurons in the network. The spike trains are divided into bins of 2 ms. Positive/negative values correspond to correlated/anti-correlated behavior, and smaller values naturally represent a lower level of synchronous behavior.

To visualize the network behavior, figure 5.14 shows the raster and PSTH (peristimulus time histogram) of the activity of both subpopulations for the examined control parameters. After the third plot (corresponding to the third cortical input rate) we can observe the gradual emergence of a pattern in the network activity, which becomes more visible in the form of distinguishable stripes towards the last panels. Interestingly, the appearance of synchronous activity coincides with the constellations at which the variance discrepancy occurs (figure 5.13 C). Figures 5.15, 5.16, and 5.17 show the histogram of the ISI interval density and the squared coefficient of variation of the ISI averaged over all neurons in each subpopulation. The first panel corresponds to the regime before the operating point. Apart from figure 5.15 B, in which D2 has close to zero activity, neurons in the subpopulation with the higher activity spike more regularly. This observation remains valid for the second and third panels, after the operating point.
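The two measures just defined translate directly into code. The sketch below applies them to synthetic homogeneous Poisson trains (an assumption; these are not the thesis simulation data), for which CV²ISI should be close to 1 and the pairwise 2-ms count correlation close to 0.

```python
import numpy as np

def cv2_isi(spike_times):
    """Squared coefficient of variation of the inter-spike intervals."""
    isi = np.diff(np.sort(spike_times))
    return isi.var() / isi.mean() ** 2

def count_correlation(times_i, times_j, t_max, bin_width=2.0):
    """Pearson correlation of the spike counts of two trains in 2-ms bins."""
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    c_i, _ = np.histogram(times_i, bins=edges)
    c_j, _ = np.histogram(times_j, bins=edges)
    return np.corrcoef(c_i, c_j)[0, 1]

rng = np.random.default_rng(42)
t_max = 200_000.0                                                     # ms
poisson_train = lambda: np.cumsum(rng.exponential(100.0, size=2000))  # ~10 Hz
a, b = poisson_train(), poisson_train()

print(f"CV^2_ISI ~ {cv2_isi(a):.2f}   count correlation ~ {count_correlation(a, b, t_max):.3f}")
```

Applied to the simulated D1/D2 spike trains, values of CV²ISI drifting below 1 and count correlations drifting above 0 would quantify exactly the regularity and synchrony trends described in the figures that follow.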
The third panel demonstrates that the activity of neurons in the D2 population is more regular than in the D1 population. However, none of the plots shows noticeable deviations from Poisson regularity. (Note that here we are dealing with neurons with refractoriness, which on its own can make the spiking slightly less irregular than Poisson-type activity [13].) Figures 5.18, 5.19, and 5.20 show the normalized histograms of the cross-correlation coefficients (CCs) of the pairwise activity of all neurons in the network. Paying attention to the y-axis, we can see that the number of positive CCs starts to increase gradually throughout the panels, in a heterogeneous manner. This increase, which corresponds to the appearance of synchronous behavior, is consistent with what is observed in the raster plots and with the increased level of discrepancy observed in figure 5.13 C. These observations suggest some level of deviation from AI-type activity, i.e. a deviation from the stationary state. Therefore, the next important step in characterizing the dynamics of the network is to gain access to the second-order statistical
Figure 5.14. The raster and PSTH of the spiking activity of the D1, D2, and FSI populations (in blue, red, and green, respectively) are shown for nine values of the cortical input rate. Upper panel, plots from left to right: λCTX = 550, 1050, 1550 Hz. Middle panel: λCTX = 2050, 2550, 3050 Hz. Lower panel: λCTX = 3550, 4050, 4550 Hz. Each plot is a snapshot (100 ms) of a long simulation and includes four subplots (boxes from top to bottom). First box: the raster plot, which depicts the spiking neurons of each population via dots of different colors. Second and third box: the PSTH, showing the transient firing rates of D1, D2, and FSI. Fourth box: the difference between the firing rates of the D1 and D2 populations.
Figure 5.15. Irregularity is a description of the network activity that is related to the inter-spike interval density. Here we measured the squared coefficient of variation CV²ISI of the inter-spike intervals, averaged over all neurons in the network. The plots depict the p.d.f. of the ISIs of each subpopulation's activity, in blue and red for D1 and D2 respectively, at three cortical input rates. A, B: λCTX = 550 Hz. C, D: λCTX = 1050 Hz. E, F: λCTX = 1550 Hz.
Figure 5.16. Interval density, with a description analogous to the previous figure, for another three sample rates. A, B: λCTX = 2050 Hz. C, D: λCTX = 2550 Hz. E, F: λCTX = 3050 Hz.
Figure 5.17. Interval density, with a description analogous to the previous figure, for higher sample rates. A, B: λCTX = 3550 Hz. C, D: λCTX = 4050 Hz. E, F: λCTX = 4550 Hz.
Figure 5.18. Synchrony is another description of the network activity, related to the spike-count cross-correlation. Here we measured the count correlation coefficient (CC) for counting windows of 2 ms width, for all neuron pairs in the network, and plot the normalized histogram of the CCs for D1-D1 pairs (blue), D2-D2 pairs (red), and D2-D1 pairs (green) at three sample cortical input rates (low rates). A-C correspond to λCTX = 550, 1050, 1550 Hz.
moments of the population activity (auto-spectra and cross-spectra), which can be obtained by deriving the time-dependent transfer function.
Figure 5.19. Analogous to the previous synchrony figure, for medium sample input rates. A-C correspond to λCTX = 2050, 2550, 3050 Hz.
Figure 5.20. Analogous to the previous synchrony figure, for high sample input rates. A-C correspond to λCTX = 3550, 4050, 4550 Hz.
Chapter 6
Conclusion and Future Prospects

Here we aspired to introduce a framework suitable for dynamical analysis of the striatum network, modeled as two subpopulations with self- and mutual recurrent connectivity that receive feed-forward excitation and inhibition from the cortex and FSIs. We studied the network behavior in the fluctuation-driven regime at the stationary state; hence it was imperative to have a realistic description of the noise processes. In line with this objective we obtained the static transfer function of a single MSN driven by Gaussian colored noise. In addition, we computed the first-order corrections to the description of the membrane fluctuations, i.e. the modulations of the Gaussian distribution due to the third moment of the shot noise and the multiplicative noise. Next we derived a first-order mean-field description of the network dynamics in the AI state and computed the corresponding fixed points for a range of control parameters analogous to the previous study [3]. As expected, the suggested approach improved the match between simulations and analytical predictions. However, at this stage our framework does not provide a systematic method for linear stability analysis. To achieve that, we have to extend the method by introducing a time-dependent perturbation to the proposed FPE and thereby recover a second-order mean-field description. This will enable us to map out the phase space, including regions of stability and emerging oscillations.
Bibliography [1]
Abbott, L. F. Theoretical Neuroscience Rising. Vol. 60. 3. 2008, pp. 489–495.
[2]
Albin, Roger L, Young, Anne B, and Penney, John B. “The functional anatomy of disorders of the basal ganglia”. In: Trends in Neurosciences 18.2 (1995), pp. 63–64.
[3]
Bahuguna, Jyotika, Aertsen, Ad, and Kumar, Arvind. “Existence and Control of Go/No-Go Decision Transition Threshold in the Striatum”. In: PLoS Comput Biol 11.4 (2015), e1004233.
[4]
Berry, M. J., Warland, D. K., and Meister, M. “The structure and precision of retinal spike trains”. In: Proceedings of the National Academy of Sciences 94.10 (1997), pp. 5411–5416.
[5]
Brunel, N. “Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons”. In: Journal of computational neuroscience 208 (2000), pp. 183–208.
[6]
Brunel, Nicolas and Simone, Sergi. “Firing Frequency of Leaky Integrate-andfire Neurons with Synaptic Current Dynamics”. In: Simulation Knight 1972 (1998), pp. 87–95.
[7]
Brunel, Nicolas et al. “Effects of synaptic noise and filtering on the frequency response of spiking neurons”. In: Physical Review Letters 86.10 (2001), pp. 2186–2189.
[8]
Cepeda, Carlos et al. “The corticostriatal pathway in Huntington’s disease”. In: Progress in Neurobiology 81.5-6 (2007), pp. 253–271.
[9]
Destexhe, A. et al. “Fluctuating synaptic conductances recreate in vivo-like activity in neocortical neurons”. In: Neuroscience 107.1 (2001), pp. 13–24.
[10]
Doering, Charles R., Hagan, Patrick S., and Levermore, C. David. “Bistability driven by weakly colored Gaussian noise: The Fokker-Planck boundary layer and mean first-passage times”. In: Physical Review Letters 59.19 (1987), pp. 2129–2132.
[11]
Eppler, Jochen M. “PyNEST: A convenient interface to the NEST simulator”. In: Frontiers in Neuroinformatics 2.January (2008), pp. 1–12.
57
58
BIBLIOGRAPHY
[12]
Fieblinger, Tim et al. “Cell type-specific plasticity of striatal projection neurons in parkinsonism and L-DOPA-induced dyskinesia”. In: Nature Communications 5 (2014), pp. 1–15. arXiv: 9809069v1 [gr-qc].
[13]
Gerstner, Wulfram et al. Spiking Neuron Models. 2002, p. 494.
[14]
Gertler, T S, Chan, C S, and Surmeier, D J. “Dichotomous Anatomical Properties of Adult Striatal Medium Spiny Neurons”. In: Journal of Neuroscience 28.43 (2008), pp. 10814–10824.
[15]
Gewaltig, Marc-Oliver. {N}{E}{S}{T} ({N}{E}ural {S}imulation {T}ool). 2007.
[16]
Gillespie, Daniel T. “The mathematics of Brownian motion and Johnson noise”. In: American Journal of Physics 64.3 (1996), pp. 225–240.
[17]
Gittis, A. H. et al. “Distinct Roles of GABAergic Interneurons in the Regulation of Striatal Output Pathways”. In: Journal of Neuroscience 30.6 (2010), pp. 2223–2234.
[18]
Hubel, D R and Wiesel, T N. “Functional architecture of macaque monkey visual cortex”. In: Proc. R. Soc. Lond. B 198.July (1977), pp. 1–59.
[19]
Jahanshahi, Marjan et al. A fronto-striato-subthalamic-pallidal network for goal-directed and habitual inhibition. 2015.
[20]
Kłosek, M. M. and Hagan, P. S. “Colored noise and a characteristic level crossing problem”. In: Journal of Mathematical Physics 39.2 (1998), pp. 931– 953.
[21]
Kravitz, Alexxai V. et al. “Regulation of parkinsonian motor behaviours by optogenetic control of basal ganglia circuitry”. In: Nature 466.7306 (2010), pp. 622–626.
[22]
Kress, Geraldine J. et al. “Convergent cortical innervation of striatal projection neurons”. In: Nature Neuroscience 16.6 (2013), pp. 665–667.
[23]
Manwani, Amit and Koch, Christof. “Detecting and Estimating Signals in Noisy Cable Structures, I: Neuronal Noise Sources”. In: Neural Computation 11.8 (1999), pp. 1797–1829.
[24]
Moreno-Bote, Ruben, Renart, Alfonso, and Parga, Nestor. “Theory of input spike auto- and cross-correlations and their effect on the response of spiking neurons”. In: 1705 (2007), pp. 1651–1705.
[25] Ricciardi, Luigi M. Diffusion Processes and Related Topics in Biology. 1977, pp. 1–207.
[26] Pakhotin, P. and Bracci, E. “Cholinergic Interneurons Control the Excitatory Input to the Striatum”. In: Journal of Neuroscience 27.2 (2007), pp. 391–400.
[27] Brigham, Marco Paulo Ferreira. “Nonstationary Stochastic Dynamics of Neuronal Membranes”. PhD thesis. 2015.
[28] Planert, H. et al. “Dynamics of Synaptic Transmission between Fast-Spiking Interneurons and Striatal Projection Neurons of the Direct and Indirect Pathways”. In: Journal of Neuroscience 30.9 (2010), pp. 3499–3507.
[29] Richardson, Magnus J. E. and Gerstner, Wulfram. “Synaptic Shot Noise and Conductance Fluctuations Affect the Membrane Voltage with Equal Significance”. In: Neural Computation 17.4 (2005), pp. 923–947.
[30] Risken, Hannes and Frank, Till. The Fokker-Planck Equation: Methods of Solution and Applications (Springer Series in Synergetics). 1996.
[31] Rotter, S. and Diesmann, M. “Exact digital simulation of time-invariant linear systems with applications to neuronal modeling”. In: Biological Cybernetics 81.5-6 (1999), pp. 381–402.
[32] Salinas, Emilio and Sejnowski, Terrence J. “Integrate-and-Fire Neurons Driven by Correlated Stochastic Input”. In: Neural Computation 14.9 (2002), pp. 2111–2155.
[33] Kumar, Arvind, Schrader, Sven, Aertsen, Ad, and Rotter, Stefan. “The High-Conductance State of Cortical Networks”. In: Neural Computation 20.1 (2008), pp. 1–43.
[34] Schuecker, Jannis, Diesmann, Markus, and Helias, Moritz. “Modulated escape from a metastable state driven by colored noise”. In: Physical Review E: Statistical, Nonlinear, and Soft Matter Physics 92.5 (2015), pp. 1–11.
[35] Schuecker, Jannis, Diesmann, Markus, and Helias, Moritz. “Reduction of colored noise in excitable systems to white noise and dynamic boundary conditions”. In: (2015), pp. 1–23.
[36] Taverna, S., Ilijic, E., and Surmeier, D. J. “Recurrent Collateral Connections of Striatal Medium Spiny Neurons Are Disrupted in Models of Parkinson’s Disease”. In: Journal of Neuroscience 28.21 (2008), pp. 5504–5512.
[37] Tepper, James M., Koós, Tibor, and Wilson, Charles J. “GABAergic microcircuits in the neostriatum”. In: Trends in Neurosciences 27.11 (2004), pp. 662–669.
[38] Tepper, James M. et al. “Heterogeneity and Diversity of Striatal GABAergic Interneurons”. In: Frontiers in Neuroanatomy 4 (2010), pp. 1–18.
[39] Threlfell, Sarah et al. “Striatal dopamine release is triggered by synchronized activity in cholinergic interneurons”. In: Neuron 75.1 (2012), pp. 58–64.
[40] Van Der Walt, Stéfan, Colbert, S. Chris, and Varoquaux, Gaël. “The NumPy array: A structure for efficient numerical computation”. In: Computing in Science and Engineering 13.2 (2011), pp. 22–30.
[41] Wall, Nicholas R. et al. “Differential innervation of direct- and indirect-pathway striatal projection neurons”. In: Neuron 79.2 (2013), pp. 347–360.
[42] Yim, Man Yi, Aertsen, Ad, and Kumar, Arvind. “Significance of input correlations in striatal function”. In: PLoS Computational Biology 7.11 (2011).