Chapter 8

Noise in Neurons and Other Constraints

A. Aldo Faisal

Abstract How do the properties of signalling molecules constrain the structure and function of biological networks such as those of our brain? Here we focus on the action potential, the fundamental electrical signal of the brain, because malfunction of the action potential causes many neurological conditions. The action potential is mediated by the concerted action of voltage-gated ion channels, and relating the properties of these signalling molecules to the properties of neurons at the systems level is essential for biomedical brain research, as minor variations in the properties of a neuron's individual components can have large, pathological effects on the physiology of the whole nervous system and the behaviour it generates. This approach is very complex and requires us to discuss computational methods that can span many levels of biological organization, from single signalling proteins to the organization of the entire nervous system, and encompass time scales from milliseconds to hours. Within this methodological framework, we will focus on how the properties of voltage-gated ion channels relate to the functional and structural requirements of axonal signalling and the engineering design principles of neurons and their axons (nerve fibres). This is important not only because axons are the essential wires that allow information transmission between neurons, but also because they play a crucial role in neural computation itself. Many properties at the molecular level of the nervous system display noise and variability, which in turn makes it difficult to understand neuronal design and behaviour at the systems level without incorporating the sources of this probabilistic behaviour. To this end we have developed computational methods that enable us to conduct stochastic simulations of neurons accounting for the probabilistic behaviour of ion channels.
This allows us to explore the relationship between individual ion channel properties, derived from high-resolution patch clamp data,

A.A. Faisal, Department of Bioengineering and Department of Computing, Imperial College London, South Kensington Campus, London, UK e-mail: [email protected] N. Le Novère (ed.), Computational Systems Neurobiology, DOI 10.1007/978-94-007-3858-4_8, © Springer Science+Business Media Dordrecht 2012


and the properties of axons. The computational techniques introduced here allow us to tackle problems beyond the reach of experimental methods, because (1) we can disambiguate the contributions of the variability and reliability of individual molecular components to whole-cell behaviour, and (2) we can consider the many finer fibres in the central and peripheral nervous system, which are experimentally difficult to access and record from. We start from the well-established finding that ion channels behave with an element of randomness, resulting in "channel noise". The impact of channel noise in determining axonal structure and function became apparent only recently, because past findings were extrapolated from very large unmyelinated axons (the squid giant axon), where channel noise has little impact due to the law of large numbers. However, many axons in the central and peripheral nervous system are over 1,000 times thinner, and the small number of ion channels involved in sustaining the action potential implies that channel noise can affect signalling, constraining not only the reliability of neural circuit function but also setting limits to the anatomy of the brain as a whole.

8.1 A Systems Biology View of Neurons: Variability and Noise

Our brain processes information using electrical impulses, the action potentials, which are mediated by "protein transistors", so-called voltage-gated ion channels. We understand exquisitely the mechanisms and the molecular and cellular components involved in the action potential: the ionic current basis of the action potential (Nobel prize for Hodgkin and Huxley in 1963), how action potentials are transmitted at the synapse (Nobel prize for Katz in 1970), how ion channels are the fundamental elements gating these currents, acting effectively like "protein transistors" (Nobel prize for Sakmann and Neher in 1991), and how the protein structure of ion channels allows for voltage-dependent conduction (Nobel prize for MacKinnon in 2003). Yet it remains unclear how these characteristics determine the brain's design principles at the systems level. For example, our brain requires just 16 W of power – much less than any computer of equivalent computational power would need (and in fact less than current portable computers). Taking an engineering view, this is surprising: brains use low-quality components for electrical signalling – fat as electrical insulator, protein transistors, and salty water as conducting core.1 How well can the brain process information when it is built from such poor electrical components? In the following we will explore the fundamental implications of this question. When Adrian began to record from neurons in the 1920s, he observed that neural responses were highly variable across identical stimulation trials and that only the average response could be related to the stimulus (Adrian 1927, 1928). Biology viewed this variable nature of neuronal signalling as "variability"; engineers called

1 On the other side, brains and organisms build themselves from themselves.


it "noise". The two terms are closely related but, as we shall see, imply two very different approaches to thinking about the brain – one operating at the systems level, the other at the molecular level. On the one side, the healthy brain functions efficiently and reliably, as we routinely experience ourselves.2 Variability is then a reflection of the complexity of the nervous system. On the other side, engineers viewed neurons as unreliable elements, subject to metabolic and other variations but, more importantly, perturbed by random disturbances of a more fundamental origin (von Neumann 1951, 1956). In this view the brain processes information in the presence of considerable random variability: individual neurons in the nervous system are highly variable because they are quite unreliable ("noisy"), and this poses a severe constraint on a neuron's information processing power. Taking a molecular view of the nervous system, this is actually not surprising: we know that at the biochemical and biophysical level many stochastic processes operate in neurons – protein production and degradation, opening and closing of ion channels (i.e. conformational changes of proteins), fusing of synaptic vesicles, and diffusion and binding of signalling molecules. In the classic view of neurobiology it is implicitly assumed that averaging over large numbers of such small stochastic elements effectively wipes out the randomness of the individual elements at the level of neurons and neural circuits. This assumption, however, requires careful consideration for two reasons:

1. First, neurons perform highly non-linear operations involving high-gain amplification and positive feedback. Therefore, small biochemical and electrochemical fluctuations of a random nature can significantly change whole cell responses.
2. Second, many neuronal structures are very small.
This implies that they are sensitive to (and require only) a relatively small number of discrete signalling molecules to affect the whole. These molecules, such as voltage-gated ion channels or neurotransmitters, are invariably subject to thermodynamic fluctuations, and hence their behaviour will have a stochastic component which may affect whole cell behaviour. This suggests that unpredictable random variability – noise – produced by thermodynamic mechanisms (e.g. diffusion of signalling molecules) or quantum mechanisms (e.g. photon absorption in vision) at the molecular level can have a deep and lasting influence on the variability present at the system level. In fact, as we shall see, our deterministic experience of our own nervous system implies that the design principles of the brain must mitigate, or even exploit, the constraints set by noise and by other constraints such as energetic demands. It is worth considering what the implications of noise are for information processing: noise cannot be removed from a signal once it has been added to it.

2 But consider the following reflections of our nervous system's variability: the little random motions of a pointed finger, our uncertainty when trying to understand a conversation against loud background noise, or when we fail to see our keys even though they were in plain view while we were searching for them.


Fig. 8.1 How does network topology affect noise? (a) Convergence of signals onto a single neuron. (b) Serial propagation of a noisy signal through successive neurons. (c) Recurrent connections (“loops”) in networks (Figure reproduced from Faisal et al. (2008))

Since signals can easily be lost, and noise easily added, this sets a one-sided limit on how well information can be represented – as measured by the signal-to-noise ratio. In many situations noise can be thought of as an additive effect on a signal. It is reasonable to assume that a major feature of neurons is the summation-like integration of incoming signals, and so we can illustrate noise in three basic neural network topologies, using toy neurons that simply sum their synaptic inputs. Figure 8.1a shows convergence of signals onto a single neuron. If the incoming pre-synaptic signals carry independent noise, then the noise level in the postsynaptic neuron will scale in proportion to the square root of the number of input signals (N), whereas the signal scales in proportion to N. This means that the more independent signals are integrated, the more the integrated signal will stand out over the integrated noise (in proportion to the square root of N). However, this signal-boosting effect is subject to diminishing returns (especially if the cost scales proportionally to the number of inputs). If the noise in the signals is perfectly correlated, then the noise in the neuron will also scale in proportion to N. Also, consider that the integrating neuron itself will add its own internal noise to the integrated signal. The only way to raise the signal-to-noise ratio is by adding further relevant information. This usually implies information from the same source, but with noise that is independent (in some way) of the original message, e.g. by using another sensory neuron with overlapping receptive fields or a second axon that carries the same information. However, decreasing the effects of noise by increasing the amount of redundant information comes at two costs: a fixed cost that results from maintaining redundant neurones, fibres and sensors, and a dynamic cost that results from signalling with these systems. Given the high energetic efficiency of our brain and the low reliability of its individual molecular components – in contrast to the low energetic efficiency of computer chips and the very high reliability of their digital electronic components – mitigating the consequences of noise must have had a fundamental impact on the evolution and design of nervous systems: from the first "biochemical nervous system" in bacterial chemotaxis (where nutrient levels in the surroundings control the degree of random swimming motion) to the mammalian cortex.

Figure 8.1b shows the passage of a noisy signal through successive neurons; noise levels increase at each stage as internal noise is added to the signal. Note that parallel connections (not shown) do not augment noise through network interactions. In fact, it has been suggested that the highly parallel and distributed yet compact structure of the CNS might help to limit the amount of noise that builds up through serial connections (Faisal et al. 2008). Finally, Fig. 8.1c shows that recurrence ("loops") in neural circuits can, if unchecked, result in the build-up of correlated noise. Moreover, the whole nervous system operates in a continuous closed loop with the environment: from perception to action and back (see Fig. 8.2). Given this highly recurrent structure at all levels of biological organisation, it is therefore important that noise is kept "private" to a neuron (De Weese and Zador 2004). Note that the traditional view is that the all-or-nothing response of the action potential effectively thresholds out random fluctuations, providing an all-or-none response signal which is therefore thought to be noise free.

How can we know what constitutes noise when recording signals from the brain? For instance, the neuronal membrane potential shows small variations even at rest, even if synaptic inputs are pharmacologically blocked.
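The scaling of signal and noise under convergent integration (Fig. 8.1a) can be checked with a minimal numerical sketch. The toy neurons below simply sum their inputs, as in the text; all numbers (signal amplitude, noise level, trial count) are illustrative and not taken from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)
signal, sigma, trials = 1.0, 1.0, 20_000

for n in (1, 10, 100):
    # n convergent inputs: identical signal plus independent Gaussian noise
    inputs = signal + sigma * rng.standard_normal((trials, n))
    summed = inputs.sum(axis=1)               # toy neuron: plain summation
    snr_indep = summed.mean() / summed.std()  # signal ~ n, noise ~ sqrt(n)

    # same, but all inputs share one (perfectly correlated) noise source
    shared = n * (signal + sigma * rng.standard_normal(trials))
    snr_corr = shared.mean() / shared.std()   # signal ~ n, noise ~ n

    print(f"n={n:4d}  SNR indep = {snr_indep:5.2f}"
          f" (~sqrt(n) = {np.sqrt(n):5.2f})  SNR corr = {snr_corr:4.2f}")
```

With independent noise the signal-to-noise ratio grows like √N, while with perfectly correlated noise it does not improve at all, illustrating why redundancy only pays off when the added channels carry (at least partly) independent noise.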
With synaptic or electrode inputs near the action potential threshold, a neuron may or may not fire an action potential, because of the all-or-nothing nature of the action potential (Skaugen and Walløe 1979). Some neurons can react in a very reliable and reproducible manner to fluctuating currents injected via intracellular electrodes: as long as the same time-course of injected current is used, the action potentials occur with precisely the same timing relative to the stimulation (Bryant and Segundo 1976; Mainen et al. 1995). Similar behaviour has been reported for neurons of the visual system in flies (de Ruyter van Steveninck et al. 1997) and monkeys (Bair and Koch 1996). On the other hand, neurons produce irregular spike trains in the absence of any highly temporally structured stimuli. Irregular spontaneous activity, i.e. activity that is not related in any obvious way to external stimulation, and trial-to-trial variations in neuronal responses are often considered as "noise" (Shadlen and Newsome 1995; Softky and Koch 1993). The question whether this neuronal trial-to-trial variability is

• indeed just noise (defined in the following as individually unpredictable, random events that corrupt signals),
• a result of the brain being too complex to control the conditions across trials (e.g. the organism may become increasingly hungry or tired across trials),
• or rather the reflection of a highly efficient way of coding information


Fig. 8.2 Noise in the behavioural loop – the nervous system continuously perceives the world and executes actions (movements) in response to it. Noise is thereby encountered at all stations: in the conversion of physical stimuli into neuronal responses in sensory processing (a), in neuronal information processing and signal transmission in axons and synapses (b), and in the motor system (c), when neuronal signals are converted into muscle forces and movements (Figure reproduced from Faisal et al. (2008))

cannot easily be answered. In fact, deciding whether we are measuring neuronal activity that underlies logical reasoning, rather than meaningless noise, is a fundamental problem in neuroscience, with striking resemblance to finding the underlying message in cryptographic code-breaking efforts (Rieke et al. 1997). Multiple sources contribute to neuronal trial-to-trial variability: deterministic ones, such as changes of internal states of neurons and networks, as well as stochastic ones – noise within and across neurons (White et al. 2000; Faisal et al. 2008). To what extent each of these sources makes up the total observed trial-to-trial variability remains unclear. What has become clear is that to solve this question it is not sufficient to study neuronal behaviour only experimentally (as this measures only the total variability of the system): it requires taking a systems biology view of neuronal information processing (Faisal et al. 2008). This is because noise is ultimately due to the thermodynamic and quantum nature of sensory signals and of neuronal and muscular processes operating at the molecular level. Because the molecular biology and biophysics of neurones are so well known, we can use stochastic modelling of these molecular components to control and assess the impact of each source of (random) variability at the level of neurones, circuits and the whole organism. How we can link molecular noise to system-level variability, and what new insights this offers in understanding the design of the brain, is what we will explore in the remainder of this chapter.

8.2 Stochastic Versus Deterministic Views of Neurons: Small Means Noisy

Noise is a fundamental constraint to information processing and transmission, and variability is inherent in our brains and our behaviour. This variability cannot, however, be captured by computational models that are deterministic in nature, such as the beautiful Hodgkin–Huxley model of the action potential. To account for variability we have to make use of stochastic models. Classically, large neuronal structures, such as the squid giant axon, have been key to understanding and explaining neural mechanisms such as the action potential: given their scale, they are experimentally easily accessible and appear to function deterministically. This is because random variability averages out quickly as size increases: the standard deviation of the variability over the mean activity of a set of signalling molecules goes as the inverse square root of the number of molecules involved. However, neurones and synapses in many pathways are tiny. In comparison to the squid giant axon (0.5 mm diameter), neuronal connections in our cortex can be thousands of times smaller: cerebellar parallel fibres have 0.2 µm average diameter, C-fibres involved in sensory and pain transmission range between 0.1 µm and 0.2 µm, and the unmyelinated pyramidal cell axon collaterals which form the vast majority of local cortico-cortical connections have an average diameter of 0.3 µm (Faisal et al. 2005). Thus, as few as a hundred ion channels may be involved in transmitting the action potential per given unit length of axon, in contrast to several million for the same unit length in the squid giant axon. Similarly, the majority of central nervous system synapses (spiny- or bouton-type) are below a micrometre in size, and biochemical processes and concentrations occur within volumes smaller than picolitres.
For example, in the classic synaptic preparation of the frog neuromuscular junction several thousand post-synaptic receptors 'listen' to incoming neurotransmitter released by hundreds of vesicles. However, in the much smaller bouton-type synapses found in mammalian cortex, as few as three post-synaptic receptors have to detect the release of a single vesicle (containing some 1,000–2,000 neurotransmitter molecules), triggered by a single action potential.

The action potential (AP) is the fundamental signal used for communication in the brain's neural networks. Measuring the timing of APs in vivo and in vitro shows that neuronal activity displays considerable variability both within and across trials (Shadlen and Newsome 1998; Strong et al. 1998). This variability can have statistical characteristics that match those of simple random processes such as Poisson or Bernoulli processes. However, just because neuronal activity shares some statistics with a random process, it does not necessarily follow that neuronal activity is generated by a random process itself. In fact, Shannon's theory of information (Shannon 1948) suggests that to maximize information transmission, the optimal way to encode (neural) signals makes the stream of signals appear random (Cover and Thomas 1991; Rieke et al. 1997). Thus, to what extent neuronal variability is part of meaningful processing or meaningless noise remains a fundamental problem of neuroscience. We will now illustrate this approach by looking at the initiation and propagation of the action potential.

8.3 Computational Neuroscience of Stochastic Neurons

A neuron's AP is carried by the spread of membrane potential depolarisation along the membrane and is mediated by voltage-gated ionic conductances (Fig. 8.3a). The depolarisation of the membrane potential is (re)generated by the non-linear voltage-gated Na⁺ conductances that open at low levels of membrane depolarisation; these depolarise the membrane further, thereby recruiting more Na⁺ conductances. Thus, Na⁺ conductances act like positive feedback amplifiers. The resting membrane potential is then restored by the inactivation of the Na⁺ conductance, assisted by the (delayed) opening of K⁺ and membrane leak conductances (and the two to three orders of magnitude slower Na⁺-K⁺ pumps) that repolarise the membrane; thus K⁺ channels and leak conductances provide negative feedback (Koch 1999; Hille 2001). The patch-clamp technique showed that these ionic conductances result from populations of discrete and stochastic voltage-gated ion channels (Sakmann and Neher 1995; Katz 1971) (see Fig. 8.3b). These ion channels are transmembrane proteins that act as pores which open and close in response to changes in the membrane potential across the channel ("channel gating"), thus acting like protein transistors. Voltage-gated ion channels operate with an element of randomness due to thermodynamic effects. This stochastic behaviour produces random electrical currents, called channel noise (White et al. 2000), which is, by one to two orders of magnitude, the dominant source of intracellular noise in neurons3 (Manwani and Koch 1999b; Faisal et al. 2005). What are the effects of channel noise on the action potential? To answer this question we first require a stochastic model of the action potential. In most cases such stochastic models cannot be solved analytically and call for computational approaches, through stochastic or "Monte-Carlo" simulation. At the cellular level our stochastic simulations will use data on the responses of individual molecules – i.e. the properties of voltage-gated ion channels – to derive the responses of systems of interacting molecules – i.e. the response of a neuron.

3 We ignore here synaptic input as a form of electrical "noise" and note that the common use of the term "synaptic background noise" denotes the (not necessarily random) variability produced by massive synaptic input in cortical neurons (Faisal et al. 2008).

Fig. 8.3 (a) Schematic model of the action potential and the mediating Na⁺ and K⁺ currents. Note that the idealized deterministic ion channel behaviour is drawn here; for a small number of ion channels the more accurate picture is shown in (b). (b) Illustration of ion channel variability in repeated identical voltage-step trials: patch-clamp recording of a few unitary Na⁺ channels in mouse muscle during a 40 mV voltage step. The ensemble average – over 352 repeated identical trials – approaches the idealized deterministic description (akin to the "Na⁺ channels" curve in (a)) (Figure reproduced from Faisal (2010) and generated using the Modigliani stochastic neuron simulator (Faisal et al. 2002, 2005), freely available from www.modigliani.co.uk)


8.3.1 Modelling Noisy Neurons

[OPTIONAL SECTION: Electrical activity in neurons arises from the selective movement of charged ions across the membrane, which we call membrane excitability. However, in most cases the amount of ions flowing through ion channels during episodes of electrical activity is minute compared to the number of ions present in the respective medium (Hodgkin 1964). In the following we therefore ignore the changes in ionic concentration due to signalling; instead of a microscopic description of the neuron in terms of ions, a macroscopic description is used: individual ions and local concentration gradients are ignored and replaced by a description of the membrane potential based on electrical circuit elements, including batteries and ionic currents (which are related to individual ion flows via Faraday's constant and the ionic charge). An equivalent electrical circuit description (see Fig. 8.4) is derived by equating the currents inside and through the membrane compartment according to Kirchhoff's current law. This method balances all currents flowing through the membrane and to other compartments (including branch points). Each transmembrane circuit describes an iso-potential patch of membrane and is represented by a membrane compartment. It is therefore possible to mimic a neuron's morphology using tree-like networks of cylindrical or spherical compartments (Rall 1969a,b; see also Chap. 7).]

The action potential mechanism and its theory is arguably the most successful quantitatively modelled system in biology. Reliability and noise in action potential generation have been studied for almost as long as the ionic basis underlying membrane excitability (Blair and Erlanger 1933; Pecher 1939). The reliability of action potential generation in response to a current step input was measured at the nodes of Ranvier (Verveen 1962; Derksen and Verveen 1966; Verveen et al. 1967).
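The equivalent-circuit description in the optional section above can be made concrete with a single passive (leak-only) iso-potential compartment, integrated with the forward Euler method. All parameter values below are illustrative textbook-style numbers, not taken from this chapter:

```python
# Illustrative passive-membrane parameters (assumed, not from the chapter)
C_m   = 1.0     # membrane capacitance, uF/cm^2
g_L   = 0.3     # leak conductance, mS/cm^2
E_L   = -65.0   # leak reversal potential, mV
I_ext = 5.0     # injected current density, uA/cm^2

dt, t_end = 0.01, 100.0        # time step and duration, ms
V = E_L                        # start at the resting potential
for _ in range(int(t_end / dt)):
    # Kirchhoff's current law for one iso-potential compartment:
    #   C_m * dV/dt = -g_L * (V - E_L) + I_ext
    V += dt * (-g_L * (V - E_L) + I_ext) / C_m

V_inf = E_L + I_ext / g_L      # analytic steady state
print(f"V(t_end) = {V:.3f} mV, steady state = {V_inf:.3f} mV")
```

A full compartmental model chains many such circuits along an axon (with axial currents between neighbours) and adds the voltage-gated conductances discussed below; here the single passive compartment simply relaxes to its analytic steady state, which serves as a sanity check on the integration.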
Fig. 8.4 Schematic view of a compartmental model of an axon with stochastic ion channels. The left column shows the level of description and the right column the corresponding electrical and stochastic modelling abstraction. (Top) The axon is modelled as a sequence of cylindrical compartments. (Middle) Each compartment of axon membrane contains two populations of ion channels. (Bottom) The voltage-gated ion channel is described by a finite-state Markov random process. We depict here a Na⁺ channel model, which has a single open state, three closed, and four inactivated states (Figure adapted from Faisal et al. (2005))

The probability of triggering an AP was fitted by a Gaussian cumulative probability function, parameterised by the injected current stimulus amplitude. This phenomenological model captured the feature that the stimulus had to drive the membrane over a fluctuating threshold to generate an AP. Threshold fluctuations were postulated to result from an internal noise source of possibly ionic origin, and it was concluded that the threshold's coefficient of variation (ratio of standard deviation over mean) must depend on axon diameter. Later, analytical relationships between an assumed transmembrane noise source and AP threshold fluctuations were derived (Verveen 1962; Derksen and Verveen 1966; Lecar and Nossal 1971). The transmembrane noise sources considered included both the thermal resistance noise produced by the neuronal membrane's resistance and the noise resulting from discrete, stochastic ion channels (which at the time had not been conclusively demonstrated experimentally). Noise that could result from ion channels was estimated to have an over 20 times larger effect on threshold fluctuations than thermal resistance noise (see also Hille 2001). A stochastic simulation study by Skaugen and Walløe (1979), well ahead of its time, investigated the impact of discrete, stochastic ion channels on AP initiation in squid-giant-axon-type iso-potential membrane patches. They showed how current inputs below the AP threshold of the deterministic Hodgkin–Huxley model could trigger APs. This was because channel noise linearised the highly non-linear input-current versus firing-rate characteristic (when averaged over many trials). In other words, channel noise could increase and linearise the non-linear signalling range of neurons (Skaugen and Walløe 1979). To introduce channel noise they replaced the deterministic linear kinetic equations representing the voltage-gated ion channels with stochastic Markov processes over populations of discrete ion channels (Fitzhugh 1965). This important modelling technique is worth understanding better, which we shall do in the following. As we will see, this technique is ideally suited to modelling ion channels; it is by now a well-established biophysical technique, confirmed by direct recordings of single ion channel behaviour (Sakmann and Neher 1995).
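The modelling step described here – replacing a deterministic kinetic equation by stochastic transitions over a finite channel population – can be sketched for a hypothetical two-state (closed ⇌ open) channel. The rates and channel count below are illustrative only; a real Na⁺ channel has many more states (cf. Fig. 8.4):

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical two-state channel: closed <-> open (rates in 1/ms, illustrative)
alpha, beta = 0.5, 1.5          # opening and closing rates
N, dt, steps = 100, 0.01, 50_000

n_open = 0
trace = np.empty(steps)
for t in range(steps):
    # each closed channel opens with prob alpha*dt, each open channel
    # closes with prob beta*dt -- binomial draws over the sub-populations
    opened = rng.binomial(N - n_open, alpha * dt)
    closed = rng.binomial(n_open, beta * dt)
    n_open += opened - closed
    trace[t] = n_open / N

p_inf = alpha / (alpha + beta)  # deterministic steady-state open fraction
print(f"mean open fraction: {trace[1000:].mean():.3f} (deterministic {p_inf})")
print(f"channel-noise fluctuations, std: {trace[1000:].std():.3f}")
```

The mean open fraction hovers around the deterministic steady state α/(α+β), but with N = 100 channels the trace fluctuates persistently around it: this residual fluctuation is channel noise, and it shrinks only as the inverse square root of N.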

8.3.2 From Hodgkin–Huxley Conductances to Stochastic Ion Channels

First we have to establish the notation used to model the action potential. Hodgkin and Huxley (1952) showed that the observed ionic currents within an iso-potential patch of membrane (or, more correctly, a single cylindrical compartment of axon with iso-potential membrane) could be explained by voltage-dependent ionic membrane conductances. To model the dynamics they postulated that the activation of a conductance is determined by the binding of "gating particles" to the conductance (or, in modern terms, the ion channel). An ion channel opens only if all necessary gating particles have bound to it. This approach enabled Hodgkin and Huxley to model the ratio of open channels using linear chemical kinetic reaction schemes for the Na⁺ and K⁺ conductances by directly fitting their experimental data. While this gating-particle approach is probabilistic in nature, the resulting model of the conductance is deterministic. Note that this deterministic model is, strictly speaking, not always equal to the average behaviour of the system, as we shall see later.

Since Sakmann and Neher's work we know with certainty that the voltage-gated conductances are made up of populations of discrete ion channels. Assuming a large number of channels within a given area of membrane, the probability p_i of a single channel being open corresponds to the ratio of open channels to all channels of the specific kind in that part of the membrane. In deterministic models the open channel ratio and the channel open probability are interchangeable, but when we account for stochastic effects there can be considerable, persistent deviations between the two quantities (Faisal and Laughlin 2007). To simplify the notation we will use p_i for both the open channel ratio and the channel open probability, and highlight the differences later when necessary. The ionic conductance per unit membrane area g_i is the product of the total ionic membrane conductance per unit membrane area ḡ_i and the time- and voltage-dependent ratio of open channels p_i:

g_i(V, t) = ḡ_i p_i(V, t)    (8.1)

ḡ_i is the product of the single channel conductance γ_i and the number of channels per unit membrane area (channel density) ρ_i. Determining these parameters is important for stochastic simulations, as both the number of channels present and their individual conductance determine the level of noise:

ḡ_i = γ_i ρ_i    (8.2)

Substituting Eq. 8.2 into Eq. 8.1 and interpreting p_i as channel open probability yields a molecular-level description of the ionic conductance per unit membrane area:

g_i(V, t) = γ_i ρ_i p_i(V, t)    (8.3)

In Hodgkin–Huxley's original gating-particle model the probability that a channel will be open is given by the probability that all its gating particles are simultaneously bound. Implicitly, the gating-particle model assumes independence of the gating particles, and the open channel probability p_i is therefore the product of the probabilities of each particle being bound. With q_j being the probability that a gating particle of type j is bound and l(j) being the multiplicity of particles of type j that have to bind to support channel opening, we can write

p_i(V, t) = ∏_j q_j(V, t)^l(j)    (8.4)

In the case of the standard squid axon Na⁺ channel (Hodgkin and Huxley 1952) we have q_j ∈ {m, h} with l(m) = 3 and l(h) = 1, thus p(V, t) = m³h. The q_j themselves are governed by linear chemical reactions:

q̇_j(V, t) = α_j(V) (1 − q_j) − β_j(V) q_j    (8.5)

The reaction's kinetic rate functions α_j(V), β_j(V) describe the rate of change of the probability q_j in Eq. 8.5. These rate functions are characteristic of the ion channel protein (and gene) studied (Hille 2001), such that they can be identified from whole cell behaviour (e.g. Faisal and Niven 2006; Faisal 2007, but see Prinz et al. 2004b). These rate functions are either sigmoidal or exponential functions empirically fitted to voltage-clamp data (e.g. Hodgkin and Huxley 1952) or Boltzmann functions derived from the ion channel's voltage sensor behaviour in a constant, uniform electrical field (Patlak 1991; Hille 2001). We can define a voltage-dependent ion channel time constant τ_j characterising the time in which sudden changes in membrane potential will affect ion channel gating:

τ_j(V) = 1 / (α_j(V) + β_j(V))    (8.6)

Directly related to the time constant is the voltage-dependent steady-state value q_j∞ to which the q_j will converge for a constant membrane potential V:

q_j∞(V) = α_j(V) / (α_j(V) + β_j(V)) = α_j(V) τ_j(V)    (8.7)

The q_j are probabilities and have to satisfy q_j ∈ [0, 1].
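Equations 8.4–8.7 can be sketched in a few lines. The α and β rate functions below are the classic Hodgkin–Huxley (1952) squid-axon fits (V in mV of depolarization from rest); any other channel would substitute its own fitted rates:

```python
import math

# Hodgkin-Huxley (1952) rate functions for the squid axon Na+ channel,
# V in mV relative to resting potential (depolarization positive).
def alpha_m(V): return 0.1 * (25 - V) / (math.exp((25 - V) / 10) - 1)
def beta_m(V):  return 4.0 * math.exp(-V / 18)
def alpha_h(V): return 0.07 * math.exp(-V / 20)
def beta_h(V):  return 1.0 / (math.exp((30 - V) / 10) + 1)

def tau_and_inf(a, b, V):
    """Eqs. 8.6 and 8.7: time constant and steady-state value of one gate."""
    tau = 1.0 / (a(V) + b(V))
    q_inf = a(V) * tau
    return tau, q_inf

V = 10.0  # mV of depolarization
tau_m, m_inf = tau_and_inf(alpha_m, beta_m, V)
tau_h, h_inf = tau_and_inf(alpha_h, beta_h, V)

# Eq. 8.4: steady-state open probability of the m^3 h channel
p_open = m_inf**3 * h_inf
print(tau_m, m_inf, h_inf, p_open)
```

Note how small p_open remains near threshold even though each q_j is well inside [0, 1] — a point that becomes important for channel noise below.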

Modelling each stochastic ion channel. Instead of describing the behaviour of a lumped deterministic ionic conductance, we want to model the (probabilistic) behaviour of individual ion channels. One way to describe the gating behaviour of individual channels is the use of Markov processes (Markov 1906, 1971). A Markov process is a probabilistic system that occupies one of a set of discrete states. From each state a number of possible transitions can move the system to another state. These transitions are probabilistic, i.e. they have a specific probability of occurring. Like Hodgkin–Huxley's model, these Markov models use linear chemical kinetic schemes to determine transition probabilities between discrete channel states; in fact, these are the same transition rate functions as in the Hodgkin–Huxley model. The probability of transiting from one state to another within a given time horizon is given by the transition rate.


Conveniently, these Markov models can be recovered from gating-particle type models of conductances (Conti and Wanke 1975; Hille 2001). The deterministic gating-particle model is reformulated as a specific subclass of Markov model as follows. Every possible combination of bound and unbound gating particles corresponds to a discrete ion channel state. The deterministic kinetic functions that describe the binding and unbinding rate of gating particles are correspondingly used to describe the probability of transition per unit time between individual ion channel states. Each transition corresponds to the binding or unbinding of one gating particle. The Markov states and the associated transition probability rate functions together form a Markovian kinetic gating scheme. A central assumption of the gating-particle models is that individual gating particles of the same type are indistinguishable and independent of each other. Multiple states, therefore, may have the same numbers of bound particles for each particle type. Without loss of generality these multiple, yet indistinguishable, states are lumped together into one Markov state. To account for lumped states the transition rates are multiplied by a factor k, which is determined as follows. A transition corresponding to the unbinding of a gating particle of type j has a factor k that equals the number of particles of type j bound in the state where the transition originates. A transition corresponding to the binding of a gating particle of type j has a k that equals the multiplicity l(j) minus the number of type-j particles bound in the originating state. This procedure allows one to transform any deterministic gating-particle model of conductances into a stochastic model of ion channel gating. The Markovian kinetic scheme derived from Hodgkin–Huxley's m³h-type gating-particle model of the squid axon Na⁺ channel is shown in Fig. 8.4 (bottom right): for example, the transition rate from the third closed state to the open state is α_m(V), the inverse transition has rate 3β_m(V), and the total transition rate away from the open state is 3β_m(V) + β_h(V) (towards the two adjacent closed and inactivated states). Having established how to model the stochastic nature of the action potential, we now return to the question of how we can link noise (resulting from ion channels) to variability in neuronal behaviour.
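The lumping procedure above can be sketched by enumerating the states (n_m, n_h) of the m³h scheme — the number of bound particles of each type — and attaching the multiplicity factor k to each transition (a sketch; the state naming is mine, the rate rules are those just described):

```python
# Build the 8-state Markov scheme of an m^3 h channel from gating-particle
# rules: states are (n_m, n_h) = numbers of bound particles; every rate
# carries the multiplicity factor k described in the text.
L = {"m": 3, "h": 1}  # multiplicity l(j) of each gating-particle type

states = [(nm, nh) for nm in range(L["m"] + 1) for nh in range(L["h"] + 1)]

transitions = []  # (origin state, target state, rate label)
for nm, nh in states:
    if nm < L["m"]:  # binding of m: k = l(m) - n_m unbound particles can bind
        transitions.append(((nm, nh), (nm + 1, nh), f"{L['m'] - nm}*alpha_m"))
    if nm > 0:       # unbinding of m: k = n_m bound particles can unbind
        transitions.append(((nm, nh), (nm - 1, nh), f"{nm}*beta_m"))
    if nh < L["h"]:  # binding of h
        transitions.append(((nm, nh), (nm, nh + 1), "1*alpha_h"))
    if nh > 0:       # unbinding of h
        transitions.append(((nm, nh), (nm, nh - 1), "1*beta_h"))

print(len(states))       # -> 8 states
print(len(transitions))  # -> 20 transitions
# The open state (3, 1) is left at total rate 3*beta_m + 1*beta_h:
print([t for t in transitions if t[0] == (3, 1)])
```

Running this reproduces the scheme of Fig. 8.4: the outermost closed state is entered/left with factors 3α_m and 1β_m, and the open state (3, 1) is left via 3β_m (deactivation) or 1β_h (inactivation).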

8.4 Neuronal Signalling Variability from Ion Channel Noise

The discrete and stochastic nature of individual Na⁺ ion channels (Sigworth and Neher 1980) was confirmed by the patch-clamp technique (Sakmann and Neher 1995). Experiments revealed that Na⁺ channel fluctuations could be large enough to account for the observed threshold fluctuations in Nodes of Ranvier (several μm diameter, Na⁺ channel densities > 1,000 μm⁻²) (Sigworth 1980). Moreover, neuron simulations in which stochastic models of Na⁺ channels were the only source of variability showed that Na⁺ channel noise alone produced AP threshold fluctuations which compared well with experimental


data (Clay and DeFelice 1983; Rubinstein 1995). Variability in this experimental context can be quantified as the coefficient of variation, defined as the standard deviation over the mean of a variable. These studies suggested that the AP threshold's coefficient of variation depended on the square root of the number of Na⁺ channels, N, present in the membrane. For large N this would imply that channel noise should have only little impact on spike-based information processing, as fluctuations in the number of open channels δN would have been small in most cells, because they are proportional to

δN / N ∝ √N / N = 1 / √N

For Hodgkin–Huxley's squid giant axon this was certainly true, as it measured several millimetres in diameter and possessed millions of ion channels per unit length. Similarly, the myelinated nerves' Nodes of Ranvier considered in these studies, although about a hundred-fold smaller in diameter than the squid giant axon, featured 100-fold higher channel densities. Nodes of Ranvier were thus comparable to squid axon in terms of the expected variability. The general validity of this assumption required reconsideration for most neurons, as we shall see in the following.

Spike time variability measurements at the soma can be explained by channel noise. Random action potentials constitute a major disruption of neuron-to-neuron communication. While the presence or absence of APs carries information, it is known that the precise timing of each AP also carries information (e.g. Rieke et al. 1997). The trial-to-trial variability of AP timing in vivo and in vitro in many systems can be on the order of milliseconds (1–10 ms), and the timing of individual APs on the millisecond scale was shown to be behaviourally relevant in perception and movement of invertebrates (see Faisal et al. 2008 for review). How large can the influence of channel noise be on neuronal firing variability? Schneidman et al.
(1998) showed that in iso-potential membrane patches (of comparable area to a pyramidal cell soma) with large numbers of ion channels, channel noise can play a significant role for a neuron's spike time reliability, i.e. the timing precision at which an action potential is initiated (see also Fig. 8.5, top two plots). This is because during AP generation the instant when the membrane potential crosses AP threshold is determined by the small probability, and thus small number, of ion channels open around AP threshold, N* = p_open N, and not, as was often implicitly assumed, the much larger number N of ion channels present in total. The magnitude of the fluctuations of N* can be considerable even for large N and is better described by a binomial random variable:

δN* / N* ∝ √(N p_open (1 − p_open)) / (N p_open) = √((1 − p_open) / (N p_open))
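A quick numerical illustration of why N*, rather than N, sets the noise level (the parameter values are illustrative, not taken from the chapter):

```python
import math

# Relative fluctuation of the number of open channels near AP threshold.
# The naive estimate uses all N channels (1/sqrt(N)); the binomial estimate
# uses only the N* = p_open * N channels actually open near threshold.
N = 100_000       # total Na+ channels in the patch (illustrative)
p_open = 0.001    # open probability near AP threshold (illustrative)

naive_cv = 1 / math.sqrt(N)
binomial_cv = math.sqrt((1 - p_open) / (N * p_open))

print(naive_cv)     # ~0.003: fluctuations look negligible
print(binomial_cv)  # ~0.1: about 30x larger relative fluctuations
```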

This implied that many neurons in the cerebral cortex, e.g. pyramidal cells, could be influenced by channel noise. The study was able to generate comparable spike time


Fig. 8.5 The stacked raster plot visualizes traveling APs produced by identical repeated trials and is organized as follows. The top-most row shows the white noise current input. Below that, each row contains a spike raster plot recorded at equally spaced axonal positions (from the proximal stimulus site at the top to the distal end of the axon at the bottom). In each spike raster plot, the precise timing of a spike is marked by a dot on an invisible time line. These time lines are stacked over each other for the N = 60 repeated trials. The linear shift visible in the overall spike pattern across rows reflects the APs traveling along the axon. The top-most raster plot reflects the spike initiation variability; all subsequent plots reflect variability produced during propagation, which quickly exceeds that of initiation. Data based on 10 s trial length, squid axon of 0.2 μm diameter (average diameter of cerebellar parallel fibers) with a frozen noise current stimulus (zero mean, 0.01 nA SD, band limited at 1 kHz) injected at the proximal end. See text for details (Figure adapted from Faisal and Laughlin (2007))

variability as found in cortical neurons in vitro. Furthermore, spike initiation had high temporal precision when the size of ionic current fluctuations near AP threshold was small compared to the injected stimulus current. Thus, weaker stimuli will produce more unreliable spiking, in agreement with experimental data (Schneidman et al. 1998). These results were extrapolated to AP propagation in axons, where the current flowing ahead of the AP (re)generates the AP, driving it forward. It was assumed that this axial current constituted a strong driving input and, hence, it was inferred that APs should propagate very reliably in axons; however, as we shall discuss shortly, conduction velocity fluctuates significantly in thin axons due to channel noise (Faisal and Laughlin 2007). Stochastic simulations showed that super-threshold inputs could fail to generate an AP and, more importantly, that random APs could appear in the absence of any input (Strassberg and DeFelice 1993; Chow and White 1996; Faisal et al. 2005).


This deletion and addition of action potentials cannot be explained at all by deterministic models and significantly contributes to neuronal variability. Moreover, Chow and White (1996) were able to derive analytically the rate at which these random action potentials were triggered in an iso-potential membrane patch due to the critical role of Na⁺ channel noise (and not K⁺ channel noise). Their analytical derivation of the random action potential rate was supported by detailed stochastic simulations that modelled both Na⁺ and K⁺ channels. Note that noise-triggered action potentials were originally named 'spontaneous action potentials' (Chow and White 1996; Faisal et al. 2005). This term may be confounded with 'spontaneous activity', which describes neurons that are actively spiking in the absence of synaptic input. Such 'spontaneous activity' is often the result of a purposeful instability of a neuron's resting state (e.g. when the action potential threshold is below resting potential), and can thus appear also in deterministically modelled neurons. To disambiguate, the term 'random action potentials' is used to refer to noise-initiated action potentials.
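Stochastic channel gating of the kind used in such simulations can be sketched with Gillespie's algorithm for a population of identical two-state channels (a minimal sketch with made-up, voltage-independent rate constants, not the full Na⁺ scheme):

```python
import math, random

# Gillespie simulation of N identical two-state channels (closed <-> open).
# Rates are voltage-independent for simplicity and their values are made up.
alpha, beta = 0.5, 2.0   # opening / closing rate per channel, 1/ms (illustrative)
N = 100                  # channels in the membrane patch
n_open = 0               # current number of open channels
t, t_end = 0.0, 1000.0   # simulation time, ms

random.seed(1)
open_time = 0.0          # accumulates n_open * dt for time-averaging
while t < t_end:
    rate = (N - n_open) * alpha + n_open * beta  # total transition rate
    dt = -math.log(random.random()) / rate       # exponential waiting time
    open_time += n_open * dt
    t += dt
    # choose which reaction fired, proportionally to its rate
    if random.random() < (N - n_open) * alpha / rate:
        n_open += 1      # a closed channel opens
    else:
        n_open -= 1      # an open channel closes

p_open_sim = open_time / (t * N)   # time-averaged open probability
print(p_open_sim)  # fluctuates around the steady state alpha/(alpha+beta) = 0.2
```

Unlike integrating Eq. 8.5, the trajectory of n_open here jumps discretely and never settles — exactly the fluctuation that deterministic conductance models average away.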

8.4.1 Variability of the Propagating Action Potential

While a neuron will contain many ion channels, typically these ion channels do not interact instantaneously (neurons are not iso-potential) and thus their fluctuations do not average out, as much smaller (and thus noisier) subsets of ion channels are responsible for driving activity locally in the neuron. However, little was known about how the stochasticity of ion channels influences spikes as they travel along the axon to the synapse and how much information arrives there. Experimentally, axonal spike time jitter had previously been measured only in vitro in myelinated cat and frog axons of several μm diameter and was on the order of 0.01 ms (Lass and Abeles 1975a,b). Biologically accurate stochastic simulations of axons (Faisal and Laughlin 2007) showed that the variability of action potential propagation (measured as spike time jitter) in unmyelinated axons of 0.1–0.5 μm diameter was on the order of 0.1–1 ms SD over distances of millimetres (cf. Fig. 8.5). Thus, axonal variability can grow several orders of magnitude larger than previously expected, with considerable impact on neural coding. Why can AP propagation become so variable? The spatial spread of membrane potential follows different input-response relationships than in point-like iso-potential membrane patches (Faisal et al. 2005): in fact, the current driving the AP ahead is one to two orders of magnitude smaller than the minimum stimulus current ("rheobase") required to trigger an AP in a resting axon (Faisal and Laughlin 2007). Consequently the driving axial current is a weak input that is susceptible to channel noise. Channel noise acts in two ways that are implicit to the AP mechanism (see Fig. 8.6 for illustration): first, only a small number of Na⁺ channels are involved in driving the AP while the membrane is between resting potential and AP threshold, and these small Na⁺ currents are thus subject to large fluctuations. Second, the membrane ahead of the AP is far from being at rest, but fluctuates considerably.


Fig. 8.6 (a) Diagrammatic representation of a traveling AP on the axon, based on Faisal and Laughlin (2007). Stacked over each other: the leftward traveling membrane potential wave form of the AP (V) along the axon, axial currents flowing along the axon (I_axial), Na⁺ and K⁺ currents (I_Na, I_K) and the representation of the axon itself. Axial, Na⁺ and K⁺ currents are denoted by black, red and blue arrows scaled to represent the relative size of the current in the various phases. Hollow and light shaded arrows denote the size of the current fluctuations relative to the average currents. The AP wave form is subdivided into six phases: resting membrane, early rising phase, late rising phase, early repolarizing phase, late repolarizing phase and an optional hyperpolarized phase (Figure adapted from Faisal and Laughlin (2007)). (b) Synaptic variability from axonal variability. (Top) Wave forms of 713 consecutive APs arriving at the terminal end of a 1.6 mm long unmyelinated axon of 0.2 μm diameter. (Middle and Bottom) Ca²⁺ current and total Ca²⁺ influx resulting from the integration of the above AP wave forms into a model of a Calyx-of-Held type synapse (Figure adapted from Faisal and Laughlin (2004))


Inspection of Fig. 8.5 shows that APs generated by the same stimulus are not precisely aligned across trials, and the misalignment ("jitter") in this AP set grows considerably the further the APs propagate. In general, four distinct stochastic effects of channel noise on APs propagating in axons can be identified. To describe these effects, the portion of the input stimulus which triggers an AP will be called a stimulus event. APs which were triggered across trials by the same stimulus event form an AP set. The timing of APs in a set is jittered but remains unimodally distributed (Fig. 8.5, arrows A, B, C), or grows to be markedly multimodally distributed (Fig. 8.5, D, fourth row) – splitting into distinct groups of APs across trials. For a stimulus event we quantify the jitter at a given position on the axon as the standard deviation (SD) of spike timing in its corresponding AP set. For a 0.2 μm axon (shown in Fig. 8.5) AP generation at the proximal end of the axon had on average a SD of 0.38 ms, similar to spike generation in simulated membrane patches (Schneidman et al. 1998). However, spike time jitter increases over relatively short distances, such that at 2 mm the average jitter over all AP sets increased to ~0.6 ms SD. This jitter implies that post-synaptic coincidence detection windows cannot be more precise than 2–3 ms at this short distance. Furthermore, at the site of spike generation the timings within each AP set are unimodally distributed (Fig. 8.5, top raster plot). However, during propagation the spike time distribution can become multimodal, with the different peaks several milliseconds apart. In other words, the AP set splits into distinct groups (Fig. 8.5, D, fourth row). Thus, axonal channel noise sets limits to the precision at which neurons in the densely wired cortex can communicate with each other over a given distance.
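The jitter measure used above — the SD of spike times within an AP set, evaluated at each axonal position — can be sketched as follows (the spike times are synthetic numbers for illustration, not data from the chapter):

```python
import statistics

# Spike times (ms) of one AP set -- the APs triggered by the same stimulus
# event across repeated trials -- recorded at two axonal positions.
# Synthetic values for illustration only.
ap_set_proximal = [12.1, 12.5, 11.8, 12.3, 12.0, 12.4]  # at initiation site
ap_set_at_2mm   = [13.9, 14.8, 13.2, 14.5, 13.6, 14.9]  # 2 mm down the axon

def jitter(ap_set):
    """Spike-time jitter of an AP set: the SD of its spike times."""
    return statistics.stdev(ap_set)

print(jitter(ap_set_proximal))  # small at the site of spike generation
print(jitter(ap_set_at_2mm))    # grows during propagation
```

Detecting the splitting of an AP set into distinct groups would additionally require a multimodality test on the same per-position spike-time samples, since the SD alone cannot distinguish one broad peak from two narrow ones.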
These effects become relevant in unmyelinated axons below 0.5 μm diameter, as commonly found in the tightly packed circuits of the central nervous system. Five unexpected features of noisy axons provide more intuition for how noise effects can have counterintuitive impacts on neuronal signalling and contribute to observable neuronal variability.

1. Fewer action potential failures. Although channel noise provides a means for APs to fail, stochastic simulations show (Faisal and Laughlin 2007) that conduction failures are rare – 1% in axons of the smallest known diameter, and thus the noisiest axons – while empirically observed failure rates can be as high as 50%. Several neuronal mechanisms that purposely produce conduction failure are known, acting through membrane hyperpolarization, shunting effects, and spike-frequency dependent block of APs at axonal branch points (Debanne 2004). In contrast, channel noise has the opposite effect of promoting AP generation, because of the strong positive feedback of Na⁺ channels. This suggests that channel noise cannot account for the AP failures observed in many systems and that other factors must be responsible: when propagation failures occur in the healthy nervous system, this is due to purposely designed mechanisms for pre-synaptic information processing, which allow the incorporation of local information not available at the site of spike initiation (Faisal and Laughlin 2007).


2. Saltatory conduction in unmyelinated axons. Some APs in thin axons travel faster than a continuously moving wave front, because the AP wave front suddenly jumps ahead. This mode of conduction results from a collaborative channel effect, where random openings of nearby Na⁺ channels pre-depolarize the membrane by a few millivolts. The axial current from an incoming AP then triggers AP threshold and the AP jumps several hundred micrometers ahead to the pre-depolarized region. Thus, the spike time at a given position of the axon appears shifted on the order of a millisecond. This stochastic micro-saltatory conduction effect resembles saltatory conduction between the morphologically specialized Nodes of Ranvier in myelinated nerve fibers. Here, however, it is produced by the stochastic behavior of individual channels embedded in an axon of uniform morphology, and it occurs randomly. This adds considerable variability across trials, enhances the effects of jitter, and can initiate the following effect.

3. Body temperature reduces the impact of noise. Temperature is not only a key factor in determining the speed of biochemical reactions such as ion channel gating but also controls the amount of ion channel variability (Faisal et al. 2002, 2005). From a modelling perspective, channel kinetics and simulations should always be accompanied by appropriate temperature-dependent scaling factors and base temperatures (or have to be assumed temperature invariant). Commonly, temperature dependence is accounted for by scaling the transition rates α(V) and β(V) by the factor Q₁₀^((θ − 6.3°C)/10°C), where Q₁₀ is an empirically determined, channel-specific parameter and θ is the temperature in Celsius. While commonly overlooked, temperature – and via its effects on the kinetics of ion channels also the resulting channel noise – can vary greatly across the nervous system: cold-blooded insects can warm up their body to over 40°C prior to taking flight, while human extremities and the sensory and motor neurons therein can be exposed to temperature differences of up to 10°C or more between their dendrites, cell bodies and axon terminals – as they span from the (cold) extremities to the (warmer) spinal cord. Once one accounts for temperature-dependent stochasticity in computational models, it can produce some counterintuitive effects, whereby increasing temperature can lower noise levels. Channel noise effects decrease with increasing temperature: as ion channel kinetics speed up with temperature, the duration of spontaneous depolarizing currents decreases and the membrane is less likely to reach AP threshold (this effect prevails over the increased rate of spontaneous channel openings). In other words, increasing temperature shifts channel noise to higher frequencies, where it is attenuated by the low-pass characteristics of the axon (Steinmetz et al. 2000; Faisal et al. 2005). This may suggest that increased body temperature allowed homeothermic animals, such as mammals, to develop more reliable, smaller, more densely connected and thus faster neural circuits.

4. Synaptic transmission variability from axonal channel noise. AP timing precision is bound to decrease the further the AP travels; thus long-range communication is in this respect noisier than short-range communication, given


the same axon diameter. Axonal channel noise may also have an effect on information transmission in short-range synaptic connections in unmyelinated axons of up to 1 μm diameter, because the shape of the AP wave form is perturbed by channel noise (Faisal and Laughlin 2004). The wave form of the presynaptic AP is of fundamental importance in determining the strength of synaptic transmission. It determines the calcium signal that controls synaptic transmitter vesicle release, by both controlling the opening of voltage-gated Ca²⁺ channels and the driving force for Ca²⁺ influx (Augustine 2001). Stochastic simulations showed that the traveling AP wave form fluctuates considerably (see Fig. 8.6b) (Faisal and Laughlin 2004) and that the wave forms of an AP mid-way down the axon and at the terminal end were little correlated. Thus, in thin axons below 1 μm, somatically triggered APs are unlikely to carry much information in the AP wave form to the synapse, as has been measured in the soma of cortical neurons (de Polavieja et al. 2005). Stochastic modelling from the soma to the synapse is essential: synaptic reliability and variability have in general been attributed to mechanisms inside the cortical synapse, but this knowledge is typically based on paired soma recordings or large synapses, while most of our brain's synapses are very small. Thus, it is difficult to dissociate synaptic and axonal stochastic effects in these preparations. Furthermore, most studies so far ignored synaptic channel noise at presynaptic Ca²⁺ channels, which may produce spontaneous postsynaptic potentials and further increase trial-to-trial transmission variability.

5. Stochastic simulations for real neurons: the dynamic clamp technique. While the discussion so far dealt with findings based on simulations, the difference between deterministic and stochastic channel behavior was recently investigated in living neurons using the dynamic clamp method.
Dynamic clamp is an electrophysiological technique that uses a real-time interface between neurons and a computer that simulates dynamic processes of neurons (Sharp et al. 1993; Robinson and Kawai 1993; Prinz et al. 2004a). It reads the membrane potential of the neuron and calculates the transmembrane current produced by virtual, simulated voltage-gated or synaptic conductances. The simulated current is injected into the neuron, which therefore receives the same current as if it biologically contained the virtual conductances. Dorval and White used this technique to study the role of channel noise in cortical neurons in vitro, allowing them to investigate what happens if ion channels were to act deterministically in a real neuron (Dorval and White 2005). A Na⁺ channel population was blocked pharmacologically and replaced, using dynamic clamp, by an equivalent virtual population of Na⁺ channels. The virtual channels were simulated either deterministically or stochastically (the latter using the Gillespie algorithm). These neurons showed near-threshold oscillations of the membrane potential, characteristic of their morphological class in vitro, and were able to phase-lock their activity with other neurons, if and only if the virtual Na⁺ channels were simulated stochastically. Neurons lost these two


properties with deterministically simulated Na⁺ channels. These experimental results provide the first direct demonstration that physiological levels of channel noise can produce qualitative changes in the integrative properties of neurons. This suggests that channel noise could have an even more profound effect on the evolution and development of neurons.
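The Q₁₀ temperature scaling of channel kinetics described in point 3 above can be sketched as follows (the value Q₁₀ = 3 is a typical figure for channel gating, used here purely for illustration):

```python
# Temperature scaling of channel kinetics: rates alpha(V), beta(V) fitted at
# a base temperature (6.3 C for the squid axon model) are multiplied by
# Q10 ** ((theta - base) / 10). Q10 = 3 is a typical, illustrative value.
def q10_factor(theta_c, base_c=6.3, q10=3.0):
    return q10 ** ((theta_c - base_c) / 10.0)

print(q10_factor(6.3))   # 1.0 at the base temperature (no scaling)
print(q10_factor(36.3))  # about 27 (= 3**3): gating ~27x faster when warm
```

The ~27-fold speed-up at mammalian body temperature is what shortens spontaneous depolarizing current events and shifts channel noise into frequencies the axon's low-pass membrane attenuates.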

8.5 Fundamental Biophysical Constraints on the Size of Axons

The impact of channel noise on membrane potential grows larger the smaller the membrane surface area. How small can neurons or axons be made before channel noise effects disrupt action potential signaling? Hille (1970) suggested that in very fine axons the opening of a small number of Na⁺ channels could generate an AP. This idea was subsequently used to argue that channel noise could generate random action potentials (RAPs), posing limits to myelinated axons (Franciolini 1987). Based on a probabilistic argument predicting a RAP rate of 1 Hz in myelinated axons, it was suggested that RAPs would discourage the use of Nodes of Ranvier below 1 μm diameter. Unfortunately the calculation was flawed, because ion channel state transition probabilities were confused with ion channel state transition rates. Furthermore, it had previously been shown that myelinated Nodes of Ranvier with diameters as fine as 0.2 μm exist in the mammalian nervous system (Waxman and Bennett 1972). The first stochastic simulations of unmyelinated axons (Horikawa 1991), using simplified channel kinetics (discussed elsewhere in this chapter), showed that in fine axons more APs arrived at the distal end than were generated at the proximal end. Based on this single finding a lower limit to axon diameter of 0.2 μm was postulated. The relationship between diameter, biophysical parameters and RAP rate, however, was not studied, and the findings were not related to anatomical data: anatomists had previously shown that axons as fine as 0.1 μm are commonly found in the central nervous system.

Detailed stochastic simulations (Faisal et al. 2005) showed that spontaneous openings of Na⁺ channels can, in theory, trigger random action potentials below a critical axon diameter of 0.15–0.2 μm. Figure 8.7 shows this is because at these diameters the input resistance of a single Na⁺ channel is comparable to the input resistance of the axon. The persistent opening of a single Na⁺ channel can therefore depolarize the axon membrane to threshold. Below this diameter, the rate at which randomly generated action potentials appear increases exponentially as diameter decreases. This will disrupt signaling in axons below a limiting diameter of about 0.1 μm, as random action potentials cannot be distinguished from signal-carrying action potentials. This limit is robust with respect to parameter variation: two contrasting axon models, the mammalian cortical axon collateral and the invertebrate squid axon, show that the limit is mainly set by the order of magnitude of the properties of ubiquitous cellular components, conserved across neurons of different


Fig. 8.7 The emergence and propagation of random action potentials (RAPs) in axons (in the absence of any input). Space-time plots of membrane potential (a) and transmembrane Na⁺ and K⁺ currents (b and c, respectively) in a simulation of a 1-mm-long pyramidal cell axon collateral (d = 0.1 μm) at 23°C. In b and c the regions in which no transmembrane current was flowing are not color coded, making ionic currents from randomly opening ion channels at resting potential clearly visible (dark blue dots). The prolonged open time of single Na⁺ channels at t = 15 ms and t = 77 ms depolarizes the membrane to AP threshold, recruiting several nearby channels and resulting in spontaneous APs, at t = 17 ms and t = 79 ms, that subsequently propagate along the axon. The horizontal time axis has divisions of 10 ms (Figure adapted from Faisal et al. (2005))

species. The occurrence of random action potentials and the exponential increase in RAP rate as diameter decreases is an inescapable consequence of the action potential mechanism. The stochasticity of the system becomes critical when its inherent randomness makes its operation unfeasible. The rate of RAPs triggered by channel noise counterintuitively decreases as temperature increases, much unlike what one would expect from electrical Johnson noise. Stochastic simulations (Faisal et al. 2005) showed that the RAP rate is inversely temperature dependent in both the squid axon and the cortical pyramidal cell axon, which operate at 6.3°C and 36°C, respectively.

Other biophysical limits to axon size. How small can a functioning axon be constructed, given the finite size of its individual components? Faisal et al. (2005) used a volume exclusion argument to show that it is possible to construct axons much finer than 0.1 μm (Fig. 8.8). Neural membrane (5 nm thickness) can be bent to form axons of 30 nm diameter, because it also forms spherical synaptic vesicles of that diameter. A few essential molecular components are required to fit inside the axon; this includes an actin feltwork (7 nm thick) to support membrane shape, the supporting cytoskeleton (a microtubule of 23 nm


Fig. 8.8 Sterically minimal axon: this to-scale drawing illustrates how the components essential to a spiking axon can be packed into the cross-section of a fiber of 50–70 nm. The unfilled circle illustrates the finest known AP-conducting axons of diameter 100 nm (Figure adapted from Faisal et al. (2005))

diameter), the intracellular domains of ion channels and pumps (intruding 5–7 nm), and kinesin motor proteins (10 nm length) that transport vesicles (30 nm diameter) and essential materials (
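The critical-diameter argument above — a single channel's resistance becoming comparable to the axon's input resistance — can be checked with a back-of-the-envelope cable calculation. This is a sketch: the membrane resistivity, axial resistivity and single-channel conductance below are generic textbook figures, not the chapter's exact parameters:

```python
import math

# Compare the input resistance of a semi-infinite axonal cable with the
# resistance of a single open Na+ channel. Generic, illustrative values:
R_m = 30e3      # specific membrane resistance, ohm * cm^2
R_a = 100.0     # axial resistivity, ohm * cm
gamma = 20e-12  # single Na+ channel conductance, S (20 pS)

def axon_input_resistance(d_um):
    """Input resistance (ohm) of a semi-infinite cable of diameter d_um (um):
    R_in = sqrt(r_m * r_a), with r_m, r_a per unit length of cable."""
    d = d_um * 1e-4                    # diameter in cm
    r_m = R_m / (math.pi * d)          # membrane resistance, ohm * cm
    r_a = 4 * R_a / (math.pi * d**2)   # axial resistance, ohm / cm
    return math.sqrt(r_m * r_a)

R_channel = 1 / gamma                  # 50 G-ohm for a 20 pS channel

for d_um in (1.0, 0.2, 0.1):
    ratio = axon_input_resistance(d_um) / R_channel
    print(d_um, ratio)
# As the diameter shrinks towards ~0.1-0.2 um the two resistances become
# comparable (ratio approaches 1), so one persistently open channel can
# depolarize the membrane towards AP threshold by itself.
```

Since R_in scales as d^(-3/2), the single-channel depolarization grows steeply as axons get finer, which is why the RAP rate rises so sharply below the critical diameter.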
