Mathematical modelling in the early visual system

Published in: Modulation of Neuronal Responses: Implications for Active Vision, eds. G. T. Buracas, O. Ruksenas, G. M. Boynton and T.D. Albright, NATO Science Series, Vol. 334, IOS Press, Amsterdam, 2003

Mathematical modelling in the early visual system: Why and how?

G.T. Einevoll*
Physics, Agricultural University of Norway, 1432 Ås, Norway

Abstract. An overview of the different approaches to mathematical modelling in neuroscience in general, with special emphasis on the early visual system, is presented. Questions such as "Why do we make mathematical models at all?", "What makes a mathematical model good?", "What types of mathematical models exist?", and "What is the right level of detail in a model?" are addressed. Results from a project on constructing mechanistic models of the spatial receptive-field organization of cells in the dorsal lateral geniculate nucleus (dLGN) are also presented. In contrast to the traditional descriptive modelling based on the difference-of-Gaussians model, our model takes the known physiological couplings between retina and dLGN and within dLGN into account. The advantage of this modelling approach is that in addition to providing mathematical descriptions of the receptive fields of dLGN neurons, it also makes explicit the contributions from the geniculate circuit. Moreover, the model parameters have direct physiological relevance and can be manipulated and measured experimentally. The model is applied to experimental data on neural responses to spots of varying sizes for X dLGN cells and for their retinal input (S-potentials). The model is able to account for these results. Moreover, model predictions regarding receptive-field center sizes of interneurons, distances between neighboring retinal ganglion cells providing input to interneurons, and the amount of center-surround antagonism for interneurons compared to relay cells, are all compatible with data available in the literature.

* E-mail: [email protected], home page: http://arken.nlh.no/~itfgev

1. Introduction

The aim of this chapter is to give an overview of mathematical modelling in neuroscience in general, with special emphasis on the early visual system. Mathematical models have been used in visual science for more than a century. Following the success of Hodgkin and Huxley [1] 50 years ago in describing axonal signal transport in nerve cells, arguably the greatest success story of mathematical biology, there seems to be a general consensus that mathematical modelling can be useful in neuroscience, including visual neuroscience. In the first part of the chapter I will address questions like:

• Why do we make mathematical models at all?
• What types of mathematical models exist?
• What is the right level of detail in a model?
• What makes a mathematical model good?

These very general modelling questions are often not explicitly addressed in, e.g., the teaching of physics at universities. One reason is that most mathematical modelling in physics is limited to the so-called mechanistic approach, where the starting point often is the mathematical formulation of one of the fundamental physical "laws of nature" (e.g., the Schrödinger equation for atoms or solids, or Maxwell's equations for electromagnetic phenomena). In biological sciences such as neuroscience, the high level of complexity of the system often prohibits such a mechanistic approach. Therefore other modelling approaches such as descriptive modelling or interpretive modelling (cf. Sec. 3) have to be used instead. Also the question regarding the level of detail to include in the model requires more consideration in neuroscience than in most areas of physics.

The interest and activity in computational neuroscience seems to be steadily increasing, and this is reflected in a rapid growth in the number of textbooks on mathematical modelling in neuroscience. Such books are no longer limited to ion-channel based modelling of electrical properties of single neurons. The rather recent books by Johnston and Wu [2] and Koch [3] focus on single neurons. While Koch's book is more comprehensive, Johnston and Wu's book might be more accessible for people less experienced in mathematics. An excellent book on the use of so-called information theory [4] in neuroscience is provided by Rieke et al. [5]. The more than 20 year old book by Marmarelis and Marmarelis [6] gives a thorough and excellent overview of the application of systems-identification techniques, inherited from electrical engineering, to physiological systems, and includes examples from the early visual pathway. Cruse's newer book [7] has a similar scope but is shorter and thus easier to read. While several books on artificial or applied neural networks exist on the market, the book of Hertz, Krogh, and Palmer [8] is different in that it gives an introduction to both applied and neuroscience-oriented neural networks. The upcoming book by Dayan and Abbott [9] seems to be the first book which covers most types of mathematical modelling approaches used in neuroscience, and it will probably become a standard text for modelers in the field.

In the first sections (Secs. 2-5) I will provide some answers to the four general questions on mathematical modelling listed above using examples from the visual system. In Sec. 6 I will describe some of the results from my own activity on modelling of receptive fields of cells in the dorsal lateral geniculate nucleus (dLGN). Here I will also use this specific modelling project to illustrate some of the general points raised in the earlier sections.

2. Why do we make mathematical models at all?

When biologists, physiologists, and psychologists sometimes raise this question, physicists are typically puzzled. The reason is that for centuries the core activity in physics has been to build mathematical models describing how nature works. The role of experiments in physics has been either (1) to discover and probe new phenomena to (hopefully) be described by mathematical models or (2) to test specific mathematical models. This is not to say that experimental physics plays a secondary role to mathematical model building, or theoretical physics as it is called. On the contrary, the limiting factor to progress in understanding a physical phenomenon is often the lack of good experimental data.

It rather means that the goal of physics, both experimental and theoretical, has been to converge on mathematically formulated descriptions of nature. This is so ingrained in the physics community that when physicists use the terms model or theory, it is implicitly assumed that one is thinking of a mathematical model and a mathematical theory.

2.1 Precision of statements derived from models

Since the language of mathematics is difficult and time-consuming to learn, why isn't it better to formulate models in plain language rather than in mathematical terms? One reason can be formulated as follows: Mathematical models give more precise statements about nature than models in plain language. Precise statements are preferable since they make it easier to sort between good and bad candidate models (falsification). The precise statements inherent in mathematical models make it possible to produce quantitative predictions ("How much?") about the behavior of the system, while non-mathematical models generally can only make qualitative predictions ("In what direction?") about the system. A familiar example from high-school physics is the description of falling objects in earth's gravity field. From Newton's laws it follows, e.g., that the time of flight before a ball dropped from a height h hits the ground is given by t = (2h/g)^{1/2}, where g is the acceleration of gravity. This is a more precise statement about the system than the corresponding qualitative statement that the time of flight increases with height.

As an example from visual neuroscience we can consider the question regarding the observed increased surround inhibition in the spatial receptive fields of relay cells in the dorsal lateral geniculate nucleus (dLGN) compared to the retinal ganglion cells which constitute their primary input (cf. Sec. 6). From physiological and anatomical studies of the dLGN circuit [10,11] it is known that both the intrageniculate interneurons and perigeniculate neurons can provide this inhibition (as illustrated in Fig. 5). By comparing quantitative predictions from mathematical models representing this circuitry with (quantitative) experimental data, one might elucidate the relative role these two candidate inhibitory processes play under different experimental conditions. A modelling project contributing to this is presented in Sec. 6.

2.2 Validity of complicated model arguments

Another advantage of mathematical models can be stated as follows [12,13]: The precise, formal characteristics of mathematics assure that mathematical arguments remain sound even if they are long and complex. In contrast, arguments based on qualitative models can generally only be trusted if they remain short [12]. For example, the mathematical formulation of Newtonian mechanics can be applied to understand much more complicated systems than the ball-dropping experiment mentioned above. Newtonian mechanics predicts accurately (and correctly) the detailed trajectory of satellites orbiting earth, taking the detailed earth mass distribution into account. As long as the model in itself is correct, the strict rules of mathematics assure that the model predictions can be trusted even for systems which are too complicated for common-sense reasoning. Compartmental modelling of the accumulated effect from synaptic currents acting on a complex dendritic tree may serve as an example from neuroscience.
With numerous synaptic currents of different size spread all over the dendritic branches, it is generally very difficult to “argue with words” what the detailed time-dependence of the membrane potential at the axon hillock, which determines whether the neuron fires or not, will be.

However, given a precise description of dendritic morphology, ion-channel distributions, synaptic connections etc., one can on a computer calculate this potential at any position in the neuron with the use of mathematical (compartmental) modelling [14,15].

2.3 But neuroscience isn't physics!

A common rebuttal to claims of importance of mathematical modelling in neuroscience based on stories from physics is: "But biology isn't physics!". However, the subdivision of natural science into physics, chemistry and biology is a man-made construction. An atom in an ion channel in a neuron is no different from an atom in a gas or in a computer chip. The same basic laws of physics apply. The difference lies in the level of complexity. Even the simplest living organism or the simplest neuron is a very complex physical system, and this complexity has prevented widespread use of a theoretical physics approach to try to understand such systems. However, this may (and, I think, will) change.

In the 20th century the methods of theoretical physics proved very efficient in developing accurate mathematical models for the properties of, for example, semiconductors and metals (which consist of enormous numbers of atoms stacked up in a lattice structure). These mathematical models have been indispensable ingredients for the enormous engineering achievements that made electronic devices both powerful and cheap. There is no a priori reason why such theoretical methods should only be applicable to "dead" and not living physical systems. The difference lies in the level of difficulty. With the observed continuous improvement in experimental techniques, continuous growth in computer power, the accumulated and growing experience of doing mathematical modelling on increasingly complex systems, and the increasing number of mathematicians and physicists entering neuroscience, it is realistic to expect significant advances in the years to come. Regardless of how warranted this optimism is at this stage, it should not be controversial that such an attempt should be made. After all, compared to all the resources that go into acquiring good experimental data in neuroscience, modelling is, relatively speaking, an inexpensive activity.

The mathematical models in neuroscience will look very different from, e.g., the mathematical description of falling balls or orbiting satellites. These physical systems are so simple that the mathematical descriptions can be based on the basic physical laws (Newton's mechanics + the law of gravitation) with very few, if any, unknown parameters describing the system. In neuroscience the situation is and will remain different. In practice, it will be impossible to determine all the parameters describing, for example, the electrical properties of a real neuron including every little detail such as the precise position of each single ion channel. However, the detailed specification of some of these minute biological details is likely to be of little relevance to the question regarding information processing. After all, biological systems like a neuron are not constructed according to a detailed plan prescribing the detailed position of each molecule in the whole cell. Analogously, if one, say, uses electron microscopes to study the atomic composition of two perfectly functioning, identical silicon-based computers, one will observe small differences reflecting the slightly different production histories of the two computers.
However, the level at which these differences become apparent is irrelevant for understanding the signal-processing properties of the computer. The nervous system is of course vastly more complex than man-made computers, and the question regarding what level of biological detail is needed to be able to describe information processing in the brain is still open.

3. What types of mathematical models exist?

The mathematical models used in neuroscience can be categorized into three types [9]:

• Descriptive models
• Mechanistic models
• Interpretive models

Figure 1: A: Example of the shape of the receptive-field function g(r) (solid line) described by the DOG model (Eq. 1). The dotted line corresponds to the center mechanism (first term in Eq. (1)), while the dashed line corresponds to the surround mechanism (second term in Eq. (1)). The parameters used are A2/A1 = 0.85, a1 = 0.6 deg, and a2 = 1.2 deg. B: Three-dimensional representation of the DOG receptive-field function in A.

In this section these different types of models will be discussed, and examples taken from visual neuroscience will be given for each type of model. It is important to stress that it is incorrect to think of one model type as superior to the others. All model types have particular advantageous features, and all three types of models will be required to obtain a mathematical understanding of the nervous system.

3.1 Descriptive models: The difference-of-Gaussians model

The goal of descriptive models is to summarize experimental data compactly yet accurately. Even though such models may be motivated by knowledge about the underlying neuronal circuitry, the goal of such a model is to account mathematically for a phenomenon, not to explain it [9]. An example of a commonly used descriptive model in visual neuroscience is the difference-of-Gaussians (DOG) model introduced by Rodieck [16] in 1965 to describe the spatial aspect of the receptive-field structure of retinal ganglion cells. The concept of a receptive field has proven very useful in visual neuroscience. It refers to the limited area on the retina which upon illumination changes the activity of a neuron. Commonly it is generalized to refer not only to the overall area, but also to the spatial and temporal structure of the response within the area. For retinal ganglion cells the receptive fields are small and circular, and they exhibit so-called center-surround antagonism. These cells will exhibit their largest response when a circular stimulating light spot (for on-cells) exactly covers the receptive-field center. A small light stimulus outside this receptive-field center, on the other hand, may reduce the neuronal activity. The DOG model [16] was introduced to capture these properties of the receptive field in a mathematically convenient form¹. Here we use the following formulation of the DOG model:

g(r) = \frac{A_1}{\pi a_1^2} e^{-r^2/a_1^2} - \frac{A_2}{\pi a_2^2} e^{-r^2/a_2^2} ,    (1)

where the first Gaussian function models the excitatory center while the second Gaussian function models the inhibitory surround. The parameters A1 and A2 correspond to the excitatory and inhibitory weights, respectively, while a1 and a2 are the corresponding width parameters. Fig. 1 shows an example of such a DOG function. The receptive-field function g(r) describes the change in neural response of a particular neuron to a small test spot of illumination at a position r in the visual field. Here the position r = 0 corresponds to the center point of the receptive field.

¹ Gaussian functions are easy to work with mathematically.
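As a small illustration (not part of the original chapter), the DOG profile of Eq. (1) is easily evaluated numerically. The sketch below uses the parameter values from Fig. 1 and simply sets A1 = 1, since only the ratio A2/A1 is specified there.

```python
# Minimal sketch (not from the original chapter): evaluating the DOG profile of Eq. (1).
# Parameter values follow Fig. 1 (A2/A1 = 0.85, a1 = 0.6 deg, a2 = 1.2 deg);
# A1 = 1 is an arbitrary choice, since only the ratio A2/A1 is specified there.
import numpy as np

def dog(r, A1=1.0, A2=0.85, a1=0.6, a2=1.2):
    """Difference-of-Gaussians receptive-field function g(r) of Eq. (1); r in degrees."""
    center = (A1 / (np.pi * a1**2)) * np.exp(-r**2 / a1**2)
    surround = (A2 / (np.pi * a2**2)) * np.exp(-r**2 / a2**2)
    return center - surround

r = np.linspace(0.0, 3.0, 301)    # radial distance from the receptive-field center (deg)
g = dog(r)                        # profile resembling the solid line in Fig. 1A
print(f"g(0) = {g[0]:.3f}, g(1 deg) = {dog(1.0):.3f}")
```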

Figure 2: A: Hypothetical experimental data (points) measuring the change in neuronal response (trial-averaged spike count) for a retinal ganglion cell following stimulation with a small test spot. The best fit to the DOG model expression (Eq. 2) is shown as a solid line. B: Predicted response vs. spot-diameter curve (Eq. 6) for the cell in A. In addition to the model parameters obtained from the fit in A, the area of the test spot in A is assumed to be 0.2 deg².

The visual field has two spatial dimensions, and the position r can thus be described as a two-dimensional vector r = [x, y]. However, the original DOG model in Eq. (1) assumes circular symmetry. This means that the neural response to the test spot in this model only depends on the distance r = |r| from the receptive-field center point, and not on the angular position.

What is the use of the DOG model? Let us say we make an electrophysiological recording of a retinal on-center ganglion cell to probe the receptive field by using a small test spot of higher luminance than the background luminance. We first identify the center point of the receptive field, i.e., the test-spot position which gives the largest increase in firing activity for the retinal ganglion cell. Then we measure the changes in neural firing activity due to stimulation with the small test spot for a set of positions along a straight line going through the receptive-field center point. The question of how to best quantify neural firing activity is not trivial, since it depends on what aspects of the measured spike trains (action-potential trains) are relevant for conveying information. This question regarding the neural code has been, and still is, hotly debated, and we will not go into it here (for a thorough introduction to this question, see Rieke et al. [5]). If we assume that flashing test spots are used in the experiment, a simple measure of the neural activity is the number of spikes during the time the stimulus is on. This corresponds to what is called a spike-count code [5], and will be used in the example below. To reduce the effect of experimental variability from trial to trial (even with the same stimulus) it is customary to take the average over the results from several repetitions of the experiment. With our spike-count code this will give a trial-averaged spike count. The measured change of the trial-averaged spike count in the experiment described above may give results of the type shown in Fig. 2A. If we assume that the neural response is linearly related to the change in luminance², the DOG model for the receptive field predicts the shape of this curve to be

\Delta R_g(x) = R_g(\text{with spot}) - R_g(\text{background})
             = \left[ R_g(\text{background}) + \Delta L\, S\, g(x) \right] - R_g(\text{background})
             = \Delta L\, S\, g(x) = \Delta L\, S \left[ \frac{A_1}{\pi a_1^2} e^{-x^2/a_1^2} - \frac{A_2}{\pi a_2^2} e^{-x^2/a_2^2} \right]
             = \Delta L\, S\, A_1 \left[ \frac{1}{\pi a_1^2} e^{-x^2/a_1^2} - \frac{\omega}{\pi a_2^2} e^{-x^2/a_2^2} \right] ,    (2)

² This will be fulfilled if the "extra" luminance ∆L is sufficiently small.

where Rg is the trial-averaged spike count of the retinal ganglion cell, ∆Rg(x) is the difference in this trial-averaged spike count between the situations with and without a test spot present at position x, ∆L is the difference in luminance between test spot and background, S is the area of the test spot, and ω is defined to be the ratio A2/A1. In Eq. (2) x parametrizes the position along the straight line the test spot follows in the experiment (x = 0 corresponds to the receptive-field center point). Note that the expression in Eq. (2) presupposes that the function values of g(x) are approximately constant over the test spot, i.e., that the test spot is small.

To see if the DOG model is able to describe this data, one can try to fit the expression in Eq. (2) to the experimental data points, and the best fit is shown in Fig. 2A. By visual inspection one can see that, in this case, the model captures the data well. From the fitting procedure one also obtains the values of the model parameters which give the best fit. For our hypothetical data we find the model-parameter product ∆L S A1 to be 38 spikes. This is not so interesting in itself, but as shown below it allows for useful quantitative model predictions of results from experiments with other types of visual stimuli. For the ratio between the inhibitory and excitatory weight we find the value ω = 0.88 from the present data fit. The two remaining parameters are found to be a1 = 0.59 deg and a2 = 1.27 deg.

Even though there is some benefit in being able to describe the roughly 30 experimental data points in Fig. 2 with a mathematical model (and four fitted model parameters), the DOG model would be of limited interest if it were only applicable to describing neural responses to small test spots. The range of application is also wider: Assuming linearity of spatial summation (i.e., that contributions to the neural response from a set of differently positioned stimulating test spots applied simultaneously add up linearly), one can formulate a mathematical expression for the response to any shape of the visual stimulus, namely

R_g = \iint_{\mathbf{r}} g(\mathbf{r})\, L_{\text{stim}}(\mathbf{r})\, d\mathbf{r} .    (3)

Here Lstim(r) is a mathematical function describing the spatial shape of the stimulus (and background), and the spatial integral is over the whole visual field, i.e., over all two-dimensional space. For a static sinusoidal grating in the x-direction the stimulus function would, e.g., be given by Lstim(r) = L0 + L1 sin(2π x/λ), where L0 and L1 are constants, and λ is the grating wavelength. As an example we here consider the response to a circular spot (not necessarily small) with diameter d, concentric with the receptive field. The stimulus function Lstim(r) during the time the spot is on can be formulated mathematically as

L_{\text{stim}}(\mathbf{r}) = \begin{cases} L_{\text{bkg}} + \Delta L & \text{for } r < d/2 \\ L_{\text{bkg}} & \text{for } r > d/2 \end{cases} ,    (4)

where Lbkg is the background luminance, and ∆L is the "extra" luminance in the spot. Then in analogy to Eq. (2) we find, by introducing polar coordinates (dr = dθ r dr), the following formula for the change in trial-averaged spike counts between the cases with and without a spot with diameter d:

\Delta R_g(d) = \int_0^{d/2}\!\!\int_0^{2\pi} g(r)\, \Delta L\, d\theta\, r\, dr
             = \Delta L \left[ A_1 \left(1 - e^{-d^2/4a_1^2}\right) - A_2 \left(1 - e^{-d^2/4a_2^2}\right) \right]
             = \Delta L\, A_1 \left[ \left(1 - e^{-d^2/4a_1^2}\right) - \omega \left(1 - e^{-d^2/4a_2^2}\right) \right] .    (5)
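As a quick check (not spelled out in the original text) of where the bracketed factors in Eq. (5) come from: a single Gaussian term of width a integrates over the spot as

\int_0^{d/2}\!\!\int_0^{2\pi} \frac{1}{\pi a^2}\, e^{-r^2/a^2}\, d\theta\, r\, dr = \frac{2}{a^2} \int_0^{d/2} r\, e^{-r^2/a^2}\, dr = 1 - e^{-d^2/4a^2} ,

so each of the two Gaussians in the DOG field contributes one saturating term of this form.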

If the DOG model and linearity assumptions are applicable for the cell probed in Fig. 2A, this formula should predict correctly the curve for the spike-count change vs. spot diameter when one inserts the parameter values found from fits to the test-spot data above. With these parameters inserted, one finds the following formula for the diameter dependence of the response (in units of spikes):

\Delta R_g(d) = \frac{38}{S} \left[ \left(1 - e^{-d^2/4(0.59)^2}\right) - 0.88 \left(1 - e^{-d^2/4(1.27)^2}\right) \right] .    (6)

Here it is implicitly assumed that the diameter d is given in deg and the test spot area S in deg². Except for the area S of the test spot used to obtain the data in Fig. 2A, the formula now contains only numbers. If the test spot area S is 0.2 deg², say, the change in spike count following spot stimulation can be predicted, and the resulting prediction is shown in Fig. 2B. By comparing this theoretical prediction with the experimentally measured curve for change in spike count vs. spot diameter for this particular cell, one thus obtains evidence on whether the DOG model is applicable for this particular cell or not.

The strategy of trying to test a model against experimental data upon which it is not based is important in all mathematical modelling. If only a single data set is available, it is a standard mistake in modelling to "believe in" a model just because one can successfully fit the model formula to this experimental data. Generally speaking, many models can be made to fit a single experimental curve if there are sufficiently many model parameters allowed to vary freely to optimize the fit³. Thus one cannot conclude much based on an observed nice fit. This is just a first hurdle the model has to pass. The real test comes when predictions regarding other types of experimental data (with few, if any, unknown model parameters left to vary) can be checked against reality. Thus if one wants to make progress in mathematical modelling in neuroscience, one should look for systems where many different types of experiments can be done. If too little experimental data exist, even the best modeler will soon become stranded.

In this example I have for reasons of simplicity neglected the temporal aspects of the receptive field. For a complete description of the linear response properties of retinal ganglion cells and other neurons in the visual system, the full spatiotemporal response function must be considered [17]. For an introduction to this, see Ch. 2 of Dayan and Abbott [9]. An important use of descriptive models is to provide the input to mechanistic models at a higher level of description. For example, Hodgkin and Huxley [1] used descriptive models of the voltage dependence of ion channels in their mechanistic mathematical modelling which addressed action-potential generation in electrically excitable cells (cf. next section). Likewise, in Sec. 6 we use the descriptive DOG model to represent the retinal input in mechanistic modelling of the geniculate circuit [17,18].
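To make the fit-then-predict workflow above concrete, here is a minimal Python sketch (my own illustration, not from the original chapter). The synthetic test-spot data are hypothetical and merely resemble Fig. 2A; the routine fits Eq. (2) to them and then predicts the spot-diameter curve of Eqs. (5)-(6) from the fitted parameters, with no free parameters left.

```python
# Minimal sketch of the fit-then-predict workflow of Sec. 3.1 (hypothetical data,
# illustrative parameter values). Eq. (2) is fitted to small-test-spot data and the
# fitted parameters are then used to predict the spot-diameter curve of Eqs. (5)-(6).
import numpy as np
from scipy.optimize import curve_fit

def delta_R_testspot(x, LSA1, omega, a1, a2):
    """Eq. (2): response change to a small test spot at position x (deg).
    LSA1 is the lumped product Delta_L * S * A1 (in spikes)."""
    return LSA1 * (np.exp(-x**2 / a1**2) / (np.pi * a1**2)
                   - omega * np.exp(-x**2 / a2**2) / (np.pi * a2**2))

def delta_R_spot(d, LA1, omega, a1, a2):
    """Eq. (5): response change to a concentric spot of diameter d (deg).
    LA1 is the lumped product Delta_L * A1."""
    return LA1 * ((1 - np.exp(-d**2 / (4 * a1**2)))
                  - omega * (1 - np.exp(-d**2 / (4 * a2**2))))

# Hypothetical test-spot data of the kind sketched in Fig. 2A
rng = np.random.default_rng(0)
x_data = np.linspace(-3, 3, 31)
true = dict(LSA1=38.0, omega=0.88, a1=0.59, a2=1.27)
dR_data = delta_R_testspot(x_data, **true) + rng.normal(0.0, 0.5, x_data.size)

# Step 1: fit Eq. (2) to the test-spot data
popt, _ = curve_fit(delta_R_testspot, x_data, dR_data, p0=[30, 0.8, 0.5, 1.5])
LSA1, omega, a1, a2 = popt

# Step 2: predict the response vs. spot-diameter curve (Eq. 6)
S = 0.2                                   # test-spot area in deg^2, as in Fig. 2
d = np.linspace(0.0, 6.0, 61)
dR_pred = delta_R_spot(d, LSA1 / S, omega, a1, a2)
print("fitted (LSA1, omega, a1, a2):", np.round(popt, 2))
print("predicted peak response change:", round(float(dR_pred.max()), 1), "spikes")
```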

3.2 Mechanistic models: The Hodgkin-Huxley equations

Dayan and Abbott [9] distinguish between descriptive and mechanistic models. While descriptive models are used to summarize experimental data compactly, the goal of mechanistic modelling is to account for nervous system activity on the basis of neuronal morphology, physiology and circuitry. This type of modelling follows the traditional physics approach to mathematical modelling of natural systems. As an example of a mechanistic model I will consider the celebrated Hodgkin-Huxley model for action-potential generation and propagation in spiking neurons [1]. This model arguably represents the biggest success in theoretical biology, and its validity and applicability are so apparent that few neuroscientists question its significance. Action potentials are of course found all over the nervous system and are not unique to the visual system. Since well-established successful mechanistic models are rather scarce in the visual system, I will nevertheless use it as an example model.

³ A humorous phrase stating this point is: "With five parameters, you can fit an elephant. And with six parameters you can make it blink!".

Hodgkin and Huxley's elegant and impressive mathematical analysis of the electrical properties of the squid giant axon can be found in many textbooks [2,3,9]. I will therefore not discuss it in detail here. Rather, I will just comment on some aspects of the model which are of particular relevance for the overall topic of this article. For simplicity we consider the so-called space-clamped case, where the membrane potential is forced to be constant along the axon⁴. In the absence of injected external currents the equation governing the current flow across the membrane in the Hodgkin-Huxley model is given by [1]

C_m \frac{dV(t)}{dt} + I_{Na}(t) + I_K(t) + I_{leak}(t) = 0 ,    (7)

where Cm is the membrane capacitance, INa is the ionic sodium current, IK is the ionic potassium current, and Ileak is a term which accounts for the remaining ionic currents. Following Hodgkin and Huxley, the membrane potential V, i.e., the potential difference between the inside and outside of the axon, is defined such that V = 0 at the resting potential where the total membrane current is zero. The leak current Ileak is simply described via Ohm's law,

I_{leak}(t) = g_{leak} \left( V(t) - E_{leak} \right) ,    (8)

where Eleak is the reversal potential of the leak current. The sodium and potassium currents are described via a generalization of Ohm's law,

I_{Na}(t) = g_{Na}(V,t) \left( V(t) - E_{Na} \right) , \qquad I_K(t) = g_K(V,t) \left( V(t) - E_K \right) ,    (9)

where gNa(V,t) and gK(V,t) are ionic conductances, and ENa and EK are the ionic reversal potentials (Nernst potentials) for the sodium and potassium channels, respectively. These potentials are found from Nernst's equation using the known ionic concentrations on the inside and outside of the axon (see, e.g., Refs. [2,3,9]). Until we specify a mathematical description for the time- and voltage-dependent ionic conductances in Eq. (9), we do not have a complete and usable model. It should be noted at this stage, however, that the model framework given by Eqs. (7)-(9) can be said to be mechanistic. It assumes (1) that the action potential is due to ionic currents flowing through the cell membrane, and (2) that the membrane current can be split into ion-specific sodium and potassium currents plus an unspecific Ohmic leak current. Thus the model has a basis in known physiological facts and observations. A key ingredient in the Hodgkin-Huxley model is the assumption that the sodium and potassium conductances can be expressed as maximum conductances, ĝNa and ĝK, multiplied by numbers between 0 and 1 representing the fraction of conducting ionic channels which are open. These numbers in turn correspond to products of probabilities for fictive gating particles (conventionally denoted n, m, and h) to be in an open or closed state. The conductances are given by [1]

g_{Na}(V,t) = \hat{g}_{Na}\, m^3 h , \qquad g_K(V,t) = \hat{g}_K\, n^4 ,    (10)

where the dynamics (time-dependence) of the gating particles n, m, and h are governed by simple first-order differential equations,

\frac{dn}{dt} = \alpha_n(V)(1-n) - \beta_n(V)\, n
\frac{dm}{dt} = \alpha_m(V)(1-m) - \beta_m(V)\, m
\frac{dh}{dt} = \alpha_h(V)(1-h) - \beta_h(V)\, h .    (11)

⁴ This is done by threading thin metal wires inside the axon.

The voltage dependence of n, m and h comes in via the voltage-dependent coefficients α(V) and β(V). Since knowledge about the underlying ion channels was meager at the time (~1950), Hodgkin and Huxley were precluded from constructing a mechanistic model for the voltage dependence of the α's and β's. However, for their goal of describing action-potential generation, descriptive models for these coefficients were sufficient. These descriptive models were obtained from fitting results from voltage-clamp (and space-clamp) experiments, and the following formulas were obtained:

\alpha_n(V) = \frac{10 - V}{100\left(e^{(10-V)/10} - 1\right)} , \qquad \beta_n(V) = 0.125\, e^{-V/80} ,
\alpha_m(V) = \frac{25 - V}{10\left(e^{(25-V)/10} - 1\right)} , \qquad \beta_m(V) = 4\, e^{-V/18} ,
\alpha_h(V) = 0.07\, e^{-V/20} , \qquad \beta_h(V) = \frac{1}{e^{(30-V)/10} + 1} .    (12)

In these formulas the membrane potential V is measured in millivolts (mV) and is defined to be zero (V = 0) at the resting potential. This combination of a mechanistic model for the equation governing the total axonal membrane current (Eq. 7) and a descriptive model for the voltage dependence of the coefficients α(V) and β(V) describing the dynamics of the gating particles illustrates a general point in mathematical modelling: models act as bridges between different levels of understanding [9]. A descriptive model for the behavior at a low level (here ion-channel kinetics) is often sufficient and preferable when it enters a mechanistic model at a higher level (here the whole axon membrane). Of course, the descriptive model of the ion-channel kinetics does not shed much light on why the membrane-embedded ion-channel proteins exhibit such kinetics, but this is not necessary for the overall problem of explaining the generation of action potentials. An obvious modelling project at a lower level would be to study the ion-channel kinetics using present-day knowledge about the molecular composition of the ion-channel proteins and their position in the cell membrane. This is presently pursued by modelers, and a goal of ongoing projects [19,20] is to explain why the coefficients α(V) and β(V) have the empirically determined voltage dependence given in Eq. (12).

Why is the Hodgkin-Huxley model regarded as such a success? If the model could only account for the results from the voltage- and space-clamp experiments used to construct the model and fit the model parameters, the reception in the scientific community would have been more sober. However, from their model they could predict the shape and velocity of the action potential, and an impressive agreement with experiment was observed. From their model they estimated the propagation velocity of the action potential down the squid giant axon to be 18.8 m/s (at 18.3 ˚C), which is roughly 10% off the experimental value of 21.2 m/s. This is particularly impressive since their model and its parameters were based on voltage- and space-clamp experiments alone, which are quite different from the conditions during propagation of an action potential. Such quantitatively accurate model predictions are rare in theoretical neuroscience and in theoretical biology altogether. Moreover, their model could explain experimentally observed phenomena such as (1) reduced excitability of the membrane immediately after firing an action potential (refractoriness), and (2) the existence of a voltage threshold for initiation of an action potential.

The fact that the model produced several correct predictions for experimental conditions very different from the conditions used to construct the model was, and still is, the main argument for its validity.

"So what", a skeptic may argue. "In experiments where one blocks the sodium or potassium current, we observe that (normal) action potentials cannot be elicited. We do not need a mathematical model to find out that the sodium and potassium currents are essential for the generation of action potentials. And we do not need a model to predict the shape of the action potential; we might as well measure it experimentally." However, an additional insight we get from the Hodgkin-Huxley model is that the sodium and potassium currents are sufficient to generate and propagate the action potential. The model also describes how the properties of the action potential depend on the biophysical parameters of the system, e.g., that the propagation velocity is proportional to the square root of the diameter of the (unmyelinated) axon (see Sec. 6.5 in [3]). Moreover, due to its obvious success in describing action potentials, the Hodgkin-Huxley model has opened the way for mathematical analysis of a wide variety of membrane phenomena. One example is the compartmental modelling of propagation of synaptic signals to the soma, which is crucial for understanding the information-processing properties of a single neuron [14,15]. It has also opened the way for mathematical analysis of membrane phenomena outside the nervous system, e.g., in the heart [21].

The success story of the Hodgkin-Huxley model has for two reasons made life easier for theoretical neuroscientists than for modelers in other branches of biology: First, it has given the modelers a relatively firm starting point for mathematical explorations of both single neurons and neural networks. Secondly, experimental neuroscientists may have a more positive view of the potential benefit of mathematical modelling than their colleagues working in other fields of biology where such an example of a successful mathematical theory is still lacking.
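As a hedged illustration (not part of the original chapter), the space-clamped model of Eqs. (7)-(12) can be integrated numerically in a few lines. The sketch below uses standard textbook values for the maximum conductances and reversal potentials (relative to rest) and adds a brief stimulus current, which is not part of Eq. (7), purely to trigger a spike.

```python
# Minimal sketch of the space-clamped Hodgkin-Huxley model, Eqs. (7)-(12).
# V is in mV relative to rest (V = 0 at rest), t in ms; standard textbook values
# are used for the maximum conductances (mS/cm^2) and reversal potentials (mV).
# The brief stimulus current is an illustrative addition, not part of Eq. (7).
import numpy as np

C_m = 1.0                                   # membrane capacitance, uF/cm^2
g_Na_max, g_K_max, g_leak = 120.0, 36.0, 0.3
E_Na, E_K, E_leak = 115.0, -12.0, 10.6

def rates(V):
    """Voltage-dependent coefficients of Eq. (12)."""
    a_n = 0.01 * (10 - V) / (np.exp((10 - V) / 10) - 1)
    b_n = 0.125 * np.exp(-V / 80)
    a_m = 0.1 * (25 - V) / (np.exp((25 - V) / 10) - 1)
    b_m = 4.0 * np.exp(-V / 18)
    a_h = 0.07 * np.exp(-V / 20)
    b_h = 1.0 / (np.exp((30 - V) / 10) + 1)
    return a_n, b_n, a_m, b_m, a_h, b_h

dt, T = 0.01, 30.0                          # time step and duration, ms
V, n, m, h = 0.0, 0.32, 0.05, 0.60          # approximate resting-state values
trace = []
for i in range(int(T / dt)):
    a_n, b_n, a_m, b_m, a_h, b_h = rates(V)
    I_Na = g_Na_max * m**3 * h * (V - E_Na)         # Eqs. (9)-(10)
    I_K = g_K_max * n**4 * (V - E_K)
    I_leak = g_leak * (V - E_leak)                  # Eq. (8)
    I_stim = 10.0 if 5.0 <= i * dt < 6.0 else 0.0   # 1 ms current pulse (illustrative)
    V += dt * (I_stim - I_Na - I_K - I_leak) / C_m  # Eq. (7) plus the stimulus term
    n += dt * (a_n * (1 - n) - b_n * n)             # Eq. (11)
    m += dt * (a_m * (1 - m) - b_m * m)
    h += dt * (a_h * (1 - h) - b_h * h)
    trace.append(V)
print(f"peak depolarization: {max(trace):.1f} mV above rest")
```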
3.3 Interpretive models

Interpretive models represent the third class of mathematical models used to explore the nervous system. Here the goal is to model the functional roles of neural systems, i.e., to relate neuronal responses to the task of processing useful information for the animal. Interpretive modelling is unique to biological systems and does not exist in theoretical physics. In the ball-dropping experiment mentioned in Sec. 2.1, asking why the ball falls to the ground is not fruitful. The law of gravity is not set up to perform a certain task. Biological systems, however, have developed under evolutionary pressure, and the "why" question is sensible. Information theory, introduced by Shannon 50 years ago [4], is an essential tool in interpretive modelling since it provides a method for quantifying information and information flow in the nervous system. An excellent book covering this topic is Rieke et al. [5].

An example of interpretive modelling in the early visual pathway is the success of information theory in explaining important features of receptive fields for neurons in retina, dLGN and primary visual cortex (see Ch. 4 in [9]). The basic idea is that the form of these receptive fields is optimized to convey the maximum amount of information about the natural world. Based on measured statistics of visual scenes, and taking the presence of noise into account, one can use information theory to calculate the optimal spatiotemporal form of the receptive field. For example, under low-noise conditions the optimal spatial organization of the receptive field was found to be the circular center-surround organization seen in retinal ganglion cells and dLGN relay cells. With a higher noise level the theoretically estimated optimal form retains only the circular excitatory center and loses the inhibitory surround. In the retina one can expect the signal-to-noise ratio to be controlled by the level of ambient light, and low levels of illumination should correspond to the high-noise case. If this is so, the predicted change in the spatial form of the receptive field at low illumination (high noise) corresponds to what is seen experimentally in the retina. For a derivation and discussion of this and other results from interpretive modelling in the visual pathway, I refer to Ch. 4 in Dayan and Abbott [9].

4. What is the right level of detail in a model?

For neuroscientists thinking about starting a modelling activity, a common problem is to find the appropriate level of anatomical or physiological detail to include in the model. A common mistake is to think that by including more details, the model automatically gets better. The main problem with including more details in a model is that for each, e.g., new ion-channel type, dendritic branch, or type of neuron one includes, one gets extra terms in the mathematical equations. In general, this makes the solution of the mathematical problem more difficult. But the most important problem is that it generally gives more undetermined model parameters which in turn, most often, must be determined by fits to experimental data. And with too many undetermined parameters, one risks being fooled by observing good fits to experimental data even if the mathematical model is ill-constructed (cf. discussion in Sec. 3.1).

There are also conceptual problems connected to including too many details in a model. Let us make the (completely unrealistic) assumption that sometime in the future one were able to measure all the neurons and their interconnections in the brain of an animal and describe them on a molecular level in complete detail. Let us also assume that one were able to make accurate mathematical descriptions of all the underlying biophysical mechanisms, and, moreover, was able to find a computer powerful enough to simulate the whole brain simultaneously at this molecular level of detail. Even if one found a complete agreement between experimental recordings of, e.g., neuronal firing activity, and the predictions from the mathematical simulations, could one say that one understood this brain? The agreement between experiment and theory would certainly indicate that one had successfully accounted for all the biophysical mechanisms involved in the nervous activity, but it would be difficult to claim that one understood how this brain works.

A satisfactory understanding of the brain will probably have to involve a range of concepts (like the concept of receptive field) describing the brain activity at different levels of detail. Mathematical models will then act as bridges between these different levels, and a satisfactory understanding of the brain will involve a series of small bridges rather than a single large one. In order to function as bridges, each model in this hierarchy of models must be detailed enough to make contact with the lower level, yet simple enough to give clear results at the higher level [9]. A hypothetical example from the early visual system of such a hierarchy of models could be:

1. Mechanistic (biophysical) model for proteins and cell membrane ⇒ descriptive model for ion-channel kinetics⁵
2. Descriptive ion-channel model ⇒ input to mechanistic model for electrical properties of retinal neurons
3. Mechanistic models for cells in retina ⇒ descriptive model for receptive field of retinal ganglion cell
4. Descriptive model for receptive field of retinal ganglion cell ⇒ input to mechanistic model for signal processing in dLGN⁶

Traditionally, modelers with a biological background have tended to focus on ion-channel modelling of single-neuron dynamics, sometimes using supercomputers to be able to include a lot of biological detail [15,22]. Modelers with a physics background have often been accused (sometimes rightfully so) of not paying sufficient attention to biology and constructing too simplified and abstract models of little relevance⁷.
With the hierarchy structure of models in mind, it seems that it is not a question of "either-or", but rather that several types of approaches are needed.

⁵ For example, the Hodgkin-Huxley description in Eqs. (9)-(12).
⁶ The descriptive difference-of-Gaussians model for the receptive field of the retinal ganglion cell (Sec. 3.1) is used in this way in Sec. 6.
⁷ A joke alluding to this is: "I want to study the brain", says the physicist. "Tell me something helpful". "Well, first of all, the brain has two sides", the neuroscientist says and is immediately interrupted by the physicist: "Stop! You've told me too much!" [23].

5. What makes a mathematical model good?

What are the characteristics of a good mathematical model? As a rule of thumb, a successful model should have a high predictive power, i.e., it should be able to correctly predict results from many experiments, preferably from several types of experiments. Successful quantitative predictions (e.g., the prediction from the Hodgkin-Huxley model of the propagation velocity of an action potential down a squid axon, cf. Sec. 3.2) generally carry more weight than qualitative predictions (e.g., the correct prediction from the Hodgkin-Huxley model of the existence of a refractory period in neurons following action-potential generation, during which it is more difficult to fire a second action potential). Further, correct predictions of experimental results for situations very different from the experiments on which the model is based carry more weight. Below I list a few questions which can be directed to a modeler (maybe yourself!) trying to persuade you about the virtues of his/her model:

1. What is the goal of your modelling?
2. How can this model help us understand this particular neural system?
3. How many "free" model parameters are allowed to vary in fits to experimental data?
4. What is the predictive power of the model?
5. What experimental observation(s) can prove your model wrong?

Question 1 will check that the modeler has a clear view of what "level" in the hierarchy discussed in Sec. 4 the model is intended for. Question 2 will elucidate whether the modeler has thought about what biological insights can come out of the project (and that it is not merely a mathematical exercise). Question 3 is intended to check that the modeler is aware of the possible problem of having too many undetermined parameters in a model. Questions 4 and 5 will help elucidate whether the modeler has a plan for how to find out whether the model reflects reality or not. A model which cannot in any way be falsified by experiment is generally of little interest.

6. Mechanistic modelling in the dorsal lateral geniculate nucleus

In this section I will review some results from a modelling project on the spatial organization of the receptive field for neurons in the dorsal lateral geniculate nucleus (dLGN) [18]. In the presentation an effort is made to also illustrate some of the general points on mathematical modelling raised in the previous sections. The goal of the modelling project was to study what aspects of the geniculate circuitry are responsible for the observed changes in spatial receptive-field structure between the retinal and geniculate levels. The experimental data motivating and allowing for this investigation were provided by Ruksenas et al. [24], who recorded neuronal activity extracellularly in the dLGN during stimulation with circular light or dark spots. A complete description of the modelling project can be found in Ref. [18].

6.1 Experimental data

Ruksenas et al. [24] recorded action potentials of nonlagged X cells in dLGN, as well as so-called S-potentials, to study the spatial organization of the receptive fields of the dLGN cells and their retinal input. An S-potential is in the following assumed to be a post-synaptic potential that reflects an action potential of a single retinal ganglion cell providing all the primary excitatory input to the relay cell [25-28]. Circular light or dark spots were used as stimuli, and post-stimulus time histograms (PSTHs) were recorded (see upper and middle left panels in Fig. 3 for an example result).

⁸ The mean firing rate is proportional to the spike count used in Sec. 3.1. In the experiments of Ruksenas et al. [24], where the stimulus was on for 500 ms, the mean firing rate given in spikes/s is exactly twice the spike count.

Figure 3: Example of results for a nonlagged off-center X cell measured in dLGN from Ruksenas et al. [24]. Post-stimulus time histograms (PSTHs) for spot diameter d=1.6 deg for retinal input (S-potentials, top left) and action potentials (middle left), and temporal profile of spot stimulus (bottom left). Time-averaged PSTHs (right), denoted mean firing rate, during 500 ms stimulus period as a function of spot diameter (filled dots correspond to relay-cell action potentials, open dots to retinal input).

From these PSTHs the mean firing rate⁸, i.e., the PSTH firing rate averaged over the 500 ms time interval the circular spot was on [24], was calculated. This mean firing rate was measured for a set of spot sizes to determine mean firing rate vs. spot-diameter data curves. When we in the following refer to neuronal responses, we refer to this trial-averaged mean firing rate. The result for an example cell pair (relay cell + retinal ganglion cell providing the primary input) is shown in Fig. 3 (right panel).

Even though the retinal-input and relay-cell response curves for the example data in Fig. 3 appear to be qualitatively the same, one observes some quantitative differences. First, the relay-cell response is consistently lower than the retinal input. Second, the responses for the largest spot sizes are more suppressed for the relay cell than for the retinal input (i.e., stronger surround inhibition at the geniculate level). These features were seen for all 22 cell pairs studied in the modelling project [18]. For the example cell pair in Fig. 3 one also finds the peak of the response curve for the relay cell to occur at a slightly smaller spot diameter than for the retinal input. Since the peak occurs when the spot exactly fills the receptive-field center, this means that the receptive-field center of the relay cell is slightly smaller than for the retinal ganglion cell providing the input. This reduction in receptive-field center size at the geniculate level was seen for most of the 22 X cell pairs considered in Ref. [18], but not all.

6.2 Descriptive modelling

The traditional way to analyze such data has been to assume the difference-of-Gaussians (DOG) model [16] for the spatial receptive-field function (Eq. 1). With circular spot stimuli the theoretical formula for the response vs. spot-diameter curve, assuming the DOG model for the receptive field, is (cf. Sec. 3.1)

R(d) = R_{\text{bkg}} + A \left[ \left(1 - e^{-d^2/4a_1^2}\right) - \omega \left(1 - e^{-d^2/4a_2^2}\right) \right] ,    (13)

where Rbkg is the background response (i.e., without a spot stimulus), and the composite constant A has been introduced (cf. Eq. (5)).

Figure 4: Example of comparison of experimental response data from Ruksenas et al. [24] with the descriptive response formula based on the DOG model (Eq. 13). The results for the example cell pair in Fig. 3 are used as example data. The upper line corresponds to the best fit to the retinal-input data, while the lower line corresponds to the best fit to the relay-cell action-potential data. Fitted parameter values are Rbkg = 37 spikes/s, A = 129 spikes/s, ω = 0.85, a1 = 0.62 deg, a2 = 1.26 deg for the retinal input, and Rbkg = 4.8 spikes/s, A = 1300 spikes/s, ω = 0.993, a1 = 0.74 deg, a2 = 0.77 deg for the relay cell.

For the example data in Fig. 3 the best fits are shown in Fig. 4. As seen there, the descriptive DOG model is able to fit the response vs. spot-diameter curves well. However, no insight has been gained into how the geniculate circuitry produces the observed quantitative changes in the spatial receptive-field organization. To have a hope of answering this, a mechanistic modelling approach is needed.

6.3 Geniculate circuit and mechanistic model assumptions

To obtain a response formula analogous to Eq. (13) with a mechanistic approach, knowledge about the pattern of functional neuronal couplings in dLGN is needed (for reviews of these see Refs. [10,11]). A schematic illustration of the geniculate circuit is shown in Fig. 5. Relay X cells receive excitatory input from a single or a few retinal ganglion cells [27,29-33]. They also receive feedforward inhibition from intrageniculate interneurons, which in turn receive excitation from a few retinal ganglion cells [33,34]. In addition there are feedback inputs from the perigeniculate nucleus (PGN) and cortex, as well as modulatory inputs from the brainstem reticular formation. Here we consider a reduced neuronal circuit involving only the feedforward afferents to the relay cell and study whether this simplified model can account for the experimental data in Ref. [24]. The approximation of neglecting cortical feedback might be justified by the observation that the layer VI cortical cells providing the corticothalamic afferents to dLGN are not well activated by the circular spot stimuli considered here [35]. The neglect of the afferents from the GABAergic neurons in PGN is supported by observations that PGN cells have a less clearly defined receptive field with mixed on-off response [34,36] in the anesthetized preparations used by Ruksenas et al. [24]. Moreover, in other experiments it has been observed that the PGN cells respond best to circular spots much larger than the relay-cell receptive-field center [37], and this may suggest that the error in neglecting the PGN feedback in our model for the spatial receptive-field organization might be largest for the largest spot sizes used in the experiment by Ruksenas et al. [24].

In the model we make the following initial assumptions, motivated by physiological and anatomical observations: (1) A relay cell receives a single excitatory input from a retinal ganglion cell.

Figure 5: Schematic overview of coupling pattern in dLGN circuit. The geniculate neurons RC (relay cell) and IN (interneurons) receive their primary input from retinal ganglion cells (GC) as well as feedback from perigeniculate (PGN) cells (labeled PC in figure) and cortical cells (CC). The relay cells, interneurons and PGN cells also receive modulatory inputs from the brainstem reticular formation (BRF). The excitatory connections are shown as solid lines, the inhibitory as dashed lines, while dotted lines symbolize the modulatory connections.

(2) A relay cell also receives indirect feedforward inhibition, via a single intrageniculate interneuron, from n retinal ganglion cells of the same class (X) and type (on/off). We also have to specify the number and spatial position of the retinal ganglion cells providing excitatory input to the interneuron. Anatomical studies [38] revealed a disordered grid of retinal ganglion cells with typically 4-6 nearest neighbors of the same type. Physiological studies [33] indicated that an interneuron receives excitation from a similar number of retinal ganglion cells. A simple choice for the spatial distribution of ganglion-cell inputs to the interneuron is thus to assume that: (1) The retinal ganglion cell which provides the excitatory input to the relay cell also excites the interneuron. (2) The other n-1 ganglion cells are positioned at n-1 nearest-neighbor positions. The nearest-neighbor distance is denoted ra. A natural conjecture could be that these n inputs correspond to an assembly of nearest-neighbor cells of the same type in the disordered grid of retinal ganglion cells [38,39]. An illustration of the model for the choices n=5 (square model) and n=7 (hexagonal model) is given in Fig. 6.

6.4 Modelling of retinal input

In this modelling project we focus on how the geniculate circuitry shapes the spatial receptive field of geniculate cells. The origin of the spatial receptive-field structure of the retinal ganglion cell feeding into the nucleus is thus not of interest here. But we need a reliable and accurate description of the retinal ganglion-cell response, i.e., we need a descriptive model of the ganglion-cell receptive field. For this we use the DOG model discussed in Secs. 3.1 and 6.2.

Figure 6: A: Schematic drawing of couplings at the geniculate level assumed in the model, with a single excitatory input and n inputs to a single interneuron (IN), for the specific example n=5. B: Illustration of the square model (n=5) for the spatial distribution of inputs from retinal ganglion cells to the interneuron. The circles correspond to ganglion-cell receptive-field centers, which are set unrealistically small and non-overlapping for reasons of figure clarity. C: Illustration of the alternative hexagonal model (n=7) for the spatial distribution of inputs from retinal ganglion cells to the interneuron.

With the receptive-field function for the retinal ganglion cell given by

g_g(r) = \frac{1}{\pi a_1^2} e^{-r^2/a_1^2} - \frac{\omega}{\pi a_2^2} e^{-r^2/a_2^2} ,    (14)

the response to a centered spot with diameter d, where the stimulus profile is mathematically described by Eq. (4), can be written as [18]

R_g(d) = l(L_{\text{spot}}) \int_0^{d/2}\!\!\int_0^{2\pi} g_g(r)\, d\theta\, r\, dr + l(L_{\text{bkg}}) \int_{d/2}^{\infty}\!\!\int_0^{2\pi} g_g(r)\, d\theta\, r\, dr .    (15)

Here the first term represents the contribution from the spot (with luminance Lspot) while the second term represents the contribution from the background surrounding the spot (with luminance Lbkg). The function l(L) is a non-linear activity function of the luminance, which is introduced as a trick to make the model applicable outside the narrow luminance regime where the response is linearly related to changes in the luminance (as assumed throughout Sec. 3.1). For a discussion of this see Ref. [18]. As shown in the Appendix, evaluation of the integrals in Eq. (15) then gives

R_g(d) = l(L_{\text{bkg}})(1-\omega) + \left( l(L_{\text{spot}}) - l(L_{\text{bkg}}) \right) \left[ \left(1 - e^{-d^2/4a_1^2}\right) - \omega \left(1 - e^{-d^2/4a_2^2}\right) \right] .    (16)
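(A sketch of the step from Eq. (15) to Eq. (16), added here for readability; the full derivation is in the Appendix of Ref. [18]. Writing F_i(d) = 1 - e^{-d^2/4a_i^2} for the spot integral of a single Gaussian (cf. Eq. (5)), and noting that the two Gaussians in Eq. (14) integrate to 1 and ω over the whole visual field, one has

\int_0^{d/2}\!\!\int_0^{2\pi} g_g\, d\theta\, r\, dr = F_1(d) - \omega F_2(d) , \qquad \int_{d/2}^{\infty}\!\!\int_0^{2\pi} g_g\, d\theta\, r\, dr = (1-\omega) - \left[ F_1(d) - \omega F_2(d) \right] ,

so Eq. (15) becomes l(L_{\text{spot}})[F_1 - \omega F_2] + l(L_{\text{bkg}})\{(1-\omega) - [F_1 - \omega F_2]\}, which rearranges to Eq. (16).)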

This formula can now be used in conjunction with the data of Ruksenas et al. [24] to determine the five unknown ganglion-cell parameters l(Lbkg), l(Lspot), ω, a1, and a2. This will be pursued in Sec. 6.6.1.

6.5 Modelling of relay-cell response

The relay-cell response is assumed to be determined by the linear sum of the direct excitation from the single central retinal ganglion cell and the indirect inhibition from the collection of n retinal ganglion cells, as schematically illustrated in Fig. 6 for the two examples n=5 and n=7. We also assume that these n retinal ganglion cells have identical receptive-field properties, an assumption partially justified by the similar retinal positions of their receptive fields.

The weight of the direct excitation is denoted B1. For the indirect inhibition we have (i) the contribution from the “central” ganglion cell providing the excitation (defined to be at the position r=0, with weight B2(0)) and (ii) the contributions from the n-1 “non-central” inputs from ganglion cells with their receptive-field centers positioned at rj, j=2,...,n, where |rj| = ra. The weights of these non-central inputs are denoted B2(rj), j=2,...,n. The mathematical expression for the relay-cell response Rr(d) in this model is

    R_r(d) = S\left[ B_1 R_g(d;0) - \sum_{j=1}^{n} B_2(r_j) R_g(d;r_j) \right]
           = S\left[ (B_1 - B_2(0)) R_g(d;0) - \left( \sum_{j=2}^{n} B_2(r_j) \right) R_g(d;r_a) \right]
           = S\left[ B_1^* R_g(d;0) - B_2^* R_g(d;r_a) \right] ,                                                               (17)

where we have introduced the net central weight (B1*) and the total non-central inhibitory weight (B2*) given by

    B_1^* = B_1 - B_2(0) , \qquad B_2^* = \sum_{j=2}^{n} B_2(r_j) .                                                            (18)

Here Rg(d;0) denotes the response of a retinal ganglion cell to a circular spot concentric with its receptive field, i.e., the expression already derived in Eq. (16). Correspondingly, Rg(d;ra) denotes the response when the circular spot (with diameter d) is centered a distance ra away from the center point of the receptive field. (For symmetry reasons the response does not depend on the angular position, only on the radial distance.) To account for possible non-linear effects in the relay-cell response, the transfer function S[x] has been introduced. For simplicity we here use the “rectifying function” S[x] = x θ(x), where θ(x) is the Heaviside step function (θ(x) = 0 for x < 0, θ(x) = 1 for x ≥ 0). This choice corresponds to a linear response for a positive net input, while a negative net input is assigned a zero firing rate. From the last line of Eq. (17) we see that the relay-cell response depends only on three “geniculate” parameters: the net central weight B1*, the total non-central (inhibitory) weight B2*, and the nearest-neighbor distance ra. Since we already have an expression for Rg(d;0) from Eq. (16), we only need to find the response expression Rg(d;ra) for off-center stimulus spots to obtain an explicit mathematical formula for the relay-cell response. This is mathematically more difficult to derive than the expression Rg(d;0) for concentric stimuli. The derivation is shown in the Appendix, and the result is

    R_g(d;r_a) = S\Big[ l(L_{bkg})(1-\omega) + (l(L_{spot}) - l(L_{bkg}))
                 \times \Big( e^{-r_a^2/a_1^2} \sum_{m=0}^{\infty} \frac{1}{m!} \Big(\frac{r_a}{a_1}\Big)^{2m} \gamma\big(m+1, d^2/4a_1^2\big)
                 - \omega \, e^{-r_a^2/a_2^2} \sum_{m=0}^{\infty} \frac{1}{m!} \Big(\frac{r_a}{a_2}\Big)^{2m} \gamma\big(m+1, d^2/4a_2^2\big) \Big) \Big] ,      (19)

where γ(m+1, x) is the so-called incomplete gamma function given by

    \gamma(m+1, x) = \frac{1}{m!} \int_0^x u^m e^{-u} \, du                                                                    (20)

when m is a non-negative integer. Here we have also incorporated the “rectifying function” S[x] = x θ(x), since non-centered spots may stimulate the inhibitory surround only (if ra is larger than the radius of the excitatory center of the receptive field) and thus may make the summed input to the cell negative. With Rg(d;0) from Eq. (16) and Rg(d;ra) from Eq. (19) inserted into Eq. (17), one finds

    R_r(d) = S\Big[ B_1^* \Big[ l(L_{bkg})(1-\omega) + (l(L_{spot}) - l(L_{bkg})) \Big( \big(1 - e^{-d^2/4a_1^2}\big) - \omega \big(1 - e^{-d^2/4a_2^2}\big) \Big) \Big]
             - B_2^* \, S\Big[ l(L_{bkg})(1-\omega) + (l(L_{spot}) - l(L_{bkg}))
               \times \Big( e^{-r_a^2/a_1^2} \sum_{m=0}^{\infty} \frac{1}{m!} \Big(\frac{r_a}{a_1}\Big)^{2m} \gamma\big(m+1, d^2/4a_1^2\big)
               - \omega \, e^{-r_a^2/a_2^2} \sum_{m=0}^{\infty} \frac{1}{m!} \Big(\frac{r_a}{a_2}\Big)^{2m} \gamma\big(m+1, d^2/4a_2^2\big) \Big) \Big] \Big] .      (21)
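Although Eq. (21) is lengthy, it contains only exponentials and incomplete gamma functions and is therefore easy to evaluate numerically. The following is a minimal sketch, assuming Python with NumPy/SciPy; scipy.special.gammainc is the regularized incomplete gamma function, which coincides with γ(m+1, x) as defined in Eq. (20). The function names, the truncation level mmax, and the example evaluation point are illustrative:

    import numpy as np
    from scipy.special import gammainc, factorial

    def S(x):
        # Rectifying function S[x] = x*theta(x)
        return np.maximum(x, 0.0)

    def Rg_centered(d, lb, ls, w, a1, a2):
        # Eq. (16): ganglion-cell response to a concentric spot of diameter d
        return lb*(1 - w) + (ls - lb)*((1 - np.exp(-d**2/(4*a1**2)))
                                       - w*(1 - np.exp(-d**2/(4*a2**2))))

    def Rg_offcenter(d, ra, lb, ls, w, a1, a2, mmax=30):
        # Eq. (19): ganglion-cell response to a spot (scalar diameter d) centered ra away
        m = np.arange(mmax + 1)
        def branch(a):
            # gammainc(m+1, x) = (1/m!) * integral_0^x u^m exp(-u) du, cf. Eq. (20)
            return np.exp(-ra**2/a**2)*np.sum((ra/a)**(2*m)/factorial(m)
                                              * gammainc(m + 1, d**2/(4*a**2)))
        return S(lb*(1 - w) + (ls - lb)*(branch(a1) - w*branch(a2)))

    def Rr(d, B1s, B2s, ra, lb, ls, w, a1, a2, mmax=30):
        # Eq. (21): relay-cell response with net central weight B1s and non-central weight B2s
        return S(B1s*Rg_centered(d, lb, ls, w, a1, a2)
                 - B2s*Rg_offcenter(d, ra, lb, ls, w, a1, a2, mmax))

    # Illustrative evaluation with the fitted parameters of cell pair no. 2 in Table 1
    # (Table 1 lists l(Lbkg)(1-w) and l(Lspot)(1-w), so lb and ls are recovered first)
    w, a1, a2 = 0.85, 0.62, 1.26
    lb, ls = 36.8/(1 - w), 56.5/(1 - w)
    print(Rr(1.5, B1s=0.63, B2s=0.38, ra=0.99, lb=lb, ls=ls, w=w, a1=a1, a2=a2))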

The mathematical formula in Eq. (21) is very different from the corresponding descriptive DOG-model response expression in Eq. (13) in that the model parameters are divided into two groups: descriptive model parameters accounting for the retinal response (l(Lbkg), l(Lspot), ω, a1, a2) and parameters connected to the mechanistic model for the geniculate circuit (B1*, B2*, ra). In contrast, the model parameters in the descriptive DOG model for the relay-cell response (Eq. 13) describe the accumulated effect of both the retina and the geniculate circuit. There it is thus not possible to sort out the effects of the geniculate circuit specifically. With our approach one also obtains mechanistic expressions for the relay-cell receptive-field function given in terms of these five retinal and three geniculate model parameters (cf. [18]). Even though the mathematical formula in Eq. (21) is long and cumbersome, it consists only of standard mathematical functions which are provided by mathematical computer programs (e.g., MATLAB). Thus, just as for Eq. (16) for the retinal response, the formula can straightforwardly be fitted to experimental data.

6.6 Fitting against experimental data

6.6.1 Retinal input

The five parameters l(Lbkg), l(Lspot), ω, a1, and a2 can now be determined by fitting the descriptive formula for the retinal ganglion-cell response in Eq. (16) to the retinal-input response vs. spot-diameter data from Ruksenas et al. [24]. The fit was obtained by minimizing the sum of quadratic deviations between the experimental and theoretical values using the computer program MATLAB. The fit to the example data in Fig. 3 (right panel) is shown in Fig. 7 (upper curve). For this example cell pair the fit was quite good, with a mean square error of less than 2 spikes/s. The values of the fitted ganglion-cell parameters for the example cell pair are listed in Table 1 (cell no. 2).

6.6.2 Relay cell

The three geniculate parameters B1*, B2*, and ra can now be determined by fitting the mechanistic model expression in Eq. (21) to the experimental relay-cell response data, keeping the five ganglion-cell parameters fitted to the retinal-input data fixed. In the numerical procedure, the two infinite sums in Eq. (21) must be approximated by finite sums over the summation index m. Summing from m=0 to a maximum value mmax, we found mmax=30 to be sufficient to obtain fitted parameters for the example cell pair to the accuracy of the model parameters listed in Table 1. The relay-cell fit for the example cell pair is shown in Fig. 7 (lower curve). Also here the fit was good, with a mean square error of 2.6 spikes/s. The values of the fitted relay-cell parameters for the example cell pair are listed in Table 1 (cell no. 2).
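A hedged sketch of this two-stage least-squares procedure, assuming Python with NumPy/SciPy in place of the MATLAB routines actually used; the data arrays below are synthetic stand-ins for a measured response vs. spot-diameter curve, and the parameter values and initial guesses are arbitrary:

    import numpy as np
    from scipy.optimize import least_squares

    def Rg_centered(d, lb, ls, w, a1, a2):
        # Eq. (16)
        return lb*(1 - w) + (ls - lb)*((1 - np.exp(-d**2/(4*a1**2)))
                                       - w*(1 - np.exp(-d**2/(4*a2**2))))

    # Synthetic stand-in for one cell pair's retinal-input data (spot diameters in deg,
    # rates in spikes/s), generated from Eq. (16) with illustrative parameters plus noise
    rng = np.random.default_rng(0)
    diam = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
    true_params = (245.0, 375.0, 0.85, 0.6, 1.3)          # lb, ls, w, a1, a2
    rate_gang = Rg_centered(diam, *true_params) + rng.normal(0.0, 2.0, diam.size)

    # Stage 1: fit the five retinal parameters of Eq. (16) by least squares
    p0 = np.array([200.0, 300.0, 0.8, 0.5, 1.5])          # rough initial guess
    fit = least_squares(lambda p: Rg_centered(diam, *p) - rate_gang, p0)
    lb, ls, w, a1, a2 = fit.x

    # Stage 2 (outline): with (lb, ls, w, a1, a2) frozen, fit (B1*, B2*, ra) by minimizing
    # the squared deviation between the measured relay-cell rates and Rr(d) of Eq. (21),
    # using the same least_squares call and the Rr function of the previous sketch with
    # the sums truncated at mmax (mmax = 30 sufficed for 21 of the 22 cell pairs).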

Figure 7: Comparison of experimental response data curves with model response formulas for the example data in Fig. 3 (right panel). Open dots correspond to retinal ganglion-cell response (S-potentials) while filled dots correspond to relay-cell response (action potentials). Upper line corresponds to the best fit of the descriptive ganglion-cell response function in Eq. (16), while lower line corresponds to the best fit of the mechanistic relay-cell response function in Eq. (21).

6.6.3 Results for 22 cell pairs

Ruksenas et al. [24] measured response vs. spot-diameter curves from 22 pairs of X cells (17 on-cells, 5 off-cells). The parameters fitted to these data are listed in Table 1. Overall, good fits were obtained, and the average of the mean square errors for the 22 cell pairs was found to be less than 5 spikes/s. This should be compared with an average peak value (spot exactly filling the receptive-field center) of 135 spikes/s for the retinal response curve and 68 spikes/s for the relay-cell response curve [18]. Moreover, none of the resulting parameter values from the fits were clearly unphysiological.

6.7 Predictions

6.7.1 Retinal ganglion-cell grid

The distance ra between neighboring ganglion-cell inputs to the interneuron is a model parameter determined by the relay-cell fit. From our modelling we therefore also obtain predictions regarding the grid of retinal ganglion cells. If we assume that the retinal ganglion cells providing this interneuron input (see Fig. 6B) are nearest neighbors in the grid of ganglion cells of the same class (X) and type (on/off), ra can be compared with direct measurements of nearest-neighbor distances on the retinal ganglion-cell grid [38,40]. Both the ganglion-cell density and the receptive-field center diameter dc^gang vary systematically with visual eccentricity. A direct comparison between the ra's predicted from the modelling and direct measurements of this nearest-neighbor distance is thus not feasible without precise data on the visual-field position of each of the cell pairs studied in the experiment. However, the coverage factor c, which is the number of receptive-field centers from cells of a certain type overlapping a single point, is fairly constant for X cells over a wide range of eccentricities except close to area centralis [40]. The coverage factor is found by multiplying the local cell density with the receptive-field center area. The true ganglion-cell grid is disordered [38,39]. If we, however, assume as an approximation a square ganglion-cell grid with nearest-neighbor distance ra, we find the coverage factor to be c = π(dc^gang)^2/(4 ra^2), i.e., ra/dc^gang = (π/4c)^{1/2}. For a hexagonal grid we find c = π(dc^gang)^2/(4·3^{1/2} ra^2), i.e., ra/dc^gang = (π/(4·3^{1/2} c))^{1/2}.
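The ratios implied by these two expressions are easily tabulated. A small sketch, assuming Python, for the range of coverage factors quoted from Peichl and Wässle [40] further below:

    import numpy as np

    # ra/dc^gang implied by the square- and hexagonal-grid expressions above
    for c in (3.5, 4.0, 5.0):
        square = np.sqrt(np.pi/(4*c))
        hexagonal = np.sqrt(np.pi/(4*np.sqrt(3)*c))
        print(f"c = {c}: square {square:.2f}, hexagonal {hexagonal:.2f}")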

Table 1: Fitted parameters for the 22 pairs of experimental response curves for nonlagged X cells from Ruksenas et al. [24]. The parameters ω, l(Lbkg)(1-ω), l(Lspot)(1-ω), a1, and a2 are found from fitting the ganglion-cell response data to Eq. (16), while B1*, B2*, and ra are found from fitting the relay-cell response curve to Eq. (21). Mean values and standard deviations (sd) are also listed. The unit for l(Lbkg)(1-ω) and l(Lspot)(1-ω) is spikes/s; the unit for a1, a2, and ra is degrees, while ω, B1*, and B2* are dimensionless. For cell pair no. 21, mmax=100 was required, while for the other cell pairs mmax=30 was sufficient to approximate the infinite sums in Eq. (21) well enough not to affect the listed parameter values at the numerical accuracy used in the table.

Cell no.   ω       l(Lbkg)(1-ω)   l(Lspot)(1-ω)   a1      a2      B1*     B2*     ra
1          0.88     15.8           29.7           0.25    0.87    0.38    0.19    0.57
2          0.85     36.8           56.5           0.62    1.26    0.63    0.38    0.99
3          0.78     17.3           44.6           0.77    1.63    0.58    0.39    0.60
4          0.59    -13.0           39.1           0.70    2.49    0.74    0.26    1.94
5          0.74    -20.3           13.2           0.35    1.98    0.64    0.25    0.78
6          0.996     3.5          107             0.31    0.31    0.76    0.49    0.57
7          0.71     19.4           56.3           0.26    1.97    0.53    0.40    0.82
8          0.66     26.4           50.8           0.82    3.43    0.45    0.28    1.61
9          0.63     31.1           54.7           0.82    2.28    0.76    0.50    1.82
10         0.85     41.3           84.3           0.71    1.41    0.42    0.24    1.67
11         0.67     12.5           68.6           0.24    0.70    0.51    0.05    0.84
12         0.95     14.6           37.5           0.57    0.75    0.52    0.42    1.25
13         0.82     71.0          101             0.59    1.64    0.60    0.50    1.09
14         0.90     50.3           80.6           0.25    0.94    0.42    0.25    0.82
15         0.77     62.8           96.3           0.30    1.13    0.53    0.37    1.07
16         0.90     11.9           25.0           0.36    0.81    0.77    0.76    1.25
17         0.95     36.6           39.9           0.22    1.67    0.78    0.67    0.85
18         0.70     30.8           47.8           0.22    0.65    0.57    0.25    0.45
19         0.80     39.5           48.4           0.30    0.93    0.38    0.13    0.72
20         0.57     55.6          110             0.16    1.60    0.88    0.48    0.26
21         0.79     33.7           56.3           0.083   1.51    0.60    0.46    0.64
22         0.97     48.9           54.4           0.41    0.86    0.85    0.70    0.74
mean       0.79     28.5           59.2           0.42    1.40    0.60    0.38    0.97
sd         0.12     22.8           26.8           0.23    0.73    0.15    0.18    0.45

Since ra/dc^gang only depends on the coverage factor, this ratio is well suited for comparison with our theoretical findings based on experiments from a range of eccentricities. The prediction from our modelling effort was, after averaging over the 22 cell pairs, found to be ra/dc^gang = 0.77 ± 0.29. Ruksenas et al. [24] preferentially sampled cells located outside area centralis. For this part of the visual field Peichl and Wässle [40] measured c ~ 3.5-5. With c=4 we find ra/dc^gang = π^{1/2}/4 = 0.44 for the square grid and ra/dc^gang = π^{1/2}/(4·3^{1/4}) = 0.34 for the hexagonal grid (see Fig. 6C). This is approximately half of the average predicted ratio from our modelling. Note, however, that Peichl and Wässle [40] used the so-called area threshold method to measure the receptive-field center sizes. There the so-called equivalent center diameter was found from the data by determining the intersection of two straight lines drawn through the data points in a log-log plot. Since this method of evaluating the receptive-field center sizes is significantly different from our method, their results are not straightforwardly comparable to ours. When one considers the approximations and uncertainties in our modelling effort, even a factor-of-two deviation might not be unexpected. Nevertheless, one possible reason for the observed deviation may be that the interneurons do not receive all inputs from the neighboring retinal ganglion cells, but that more distant retinal ganglion cells also contribute to the excitation of interneurons. More studies are needed to clarify this point. In particular, a model which includes more distant neighbors than the nearest-neighbor ganglion cells as inputs to the inhibitory mechanism should be explored.

Figure 8: A: Predicted shape of the response function for the intrageniculate interneuron corresponding to the example cell pair in Fig. 7 (solid line). The fitted theoretical ganglion-cell and relay-cell response curves from Fig. 7 are shown with dotted lines. B: Predicted shape of the receptive-field function for the interneuron according to our model (n=5, B2 identical for all five inputs, see Fig. 6A,B) and the same example cell pair.

6.7.2 Shape of interneuron receptive field

With all model parameters determined by fits to experimental data for the retinal-input and relay-cell responses, our model can also predict the shape of the interneuron response vs. spot-diameter curve. This, however, requires an additional assumption about how the inhibitory weights are distributed. We now further assume (i) that the retinal afferents of the interneuron can be described by the “square” model illustrated in Fig. 6A and B and (ii) that all five indirect inhibitory inputs to the relay cell carry the same weight B2. Then the net central weight is given by B1*=B1-B2 and the total non-central inhibitory weight is given by B2*=4B2. The formula for the shape of the interneuron response vs. spot-diameter curve is then found directly from the above response expression for relay cells (Eq. 21) by omitting the direct excitatory term, i.e., setting B1*=-B2, and replacing B2* with -4B2. Note that the absolute magnitude of the interneuron response function cannot be determined, since the weight B2 also involves the (unknown) strength of the connection between the interneuron and the relay cell. The predicted shape of the interneuron response curve, as well as the corresponding interneuron receptive field, for the example cell pair in Fig. 7 is shown in Fig. 8. From the fitted parameters listed in Table 1 we found for the square model the average (over the 22 cell pairs) predicted diameter of the receptive-field centers of the interneurons, dc^inter, to be 2.87 ± 1.34 deg (see Table 3 in [18]). (Note that, in contrast to the retinal ganglion cells, the interneuron and relay-cell receptive fields are not circularly symmetric in our model; for these cells the center widths referred to here correspond to the diameters of the circularly-averaged receptive fields.) A substantial part of the observed variation in dc^inter is expected to come from the range of visual eccentricities of the cells sampled in the experiments. The ratio between the diameters of the interneuron (dc^inter) and relay-cell (dc^relay) receptive fields is expected to vary less with eccentricity, and this is supported by our calculations, which gave dc^inter/dc^relay = 2.31 ± 0.41 [18]. Mastronarde [33] estimated the width of the receptive fields of putative interneurons from responses to a moving slit and from responses to hand-held discs of varying diameter. Responses to moving slits suggested an average center width for interneurons (N=5) that was 2.1 times the average center width for nonlagged X relay cells (N=50). Responses to the discs showed an average center width for the interneurons (N=19) that was 1.5 times the center width of the relay cells (N=97). Although our theoretical result was slightly larger than the largest estimate of Mastronarde [33], it is reasonable to say that our result is compatible with Mastronarde's results, considering the significant spread in measured dc^inter/dc^relay for the different experimental methods. It should be noted, however, that other choices of (1) the number of retinal inputs to the interneuron and (2) the distribution of weights of the retinal ganglion-cell inputs to the interneuron (i.e., B2 no longer identical for all ganglion-cell inputs) would give somewhat different model predictions for the interneuron center width. These alternative choices for the weight distribution give identical results for the relay-cell response vs. spot-diameter curve predicted by the model and thus cannot be distinguished by performing circular-spot experiments alone [18] (cf. Sec. 6.8).

6.7.3 Center-surround antagonism

A quantity of interest is the degree of center-surround antagonism, which has been observed to be larger for relay cells than for retinal ganglion cells [26]. The center-surround antagonism α was defined as

    \alpha = \frac{R_{max} - R_{min}}{R_{max}} \cdot 100\% ,

where Rmax is the maximum neuronal response (for spots exactly filling the receptive-field center, d=dc), and Rmin is the minimum response for spot diameters in the range d > dc. From the fitted parameters for the 22 cell pairs we found the following average antagonisms: αgang = 53 ± 15% for retinal ganglion cells and αrelay = 81 ± 17% for relay cells. The model prediction for the interneuron center-surround antagonism depends not only on the fitted parameters B1* and B2* but also on the distribution of inhibitory weights among the retinal afferents of the interneuron. With the same assumptions as in Sec. 6.7.2, namely that the retinal afferents of the interneuron can be described by the “square” model illustrated in Fig. 6A and B and that all five inhibitory inputs carry the same weight, we found the predicted mean interneuron center-surround antagonism to be αinter = 40 ± 16%. The predicted mean center-surround antagonism for interneurons was thus found to be smaller than both the corresponding retinal ganglion-cell and relay-cell antagonisms. This agrees with the qualitative observation of Mastronarde [33] that X-class interneurons showed weaker surround suppression for the largest disks than the relay cells.

6.8 Model generalization and assessment

We have seen that the theoretical relay-cell response vs. spot-diameter curve in Eq. (21) depends only on (1) the net weight (excitation minus indirect inhibition) from the central retinal ganglion cell (B1*), (2) the total indirect inhibition from the equidistant “non-central” retinal ganglion cells (B2*), and (3) the nearest-neighbor distance (ra). Thus the theoretical relay-cell response curve for circular spots remains unchanged if the weights of the non-central inputs are shifted internally, as long as their sum is conserved. We have, however, seen that the shape of the interneuron receptive field (Fig. 8B) and its center size (cf. Sec. 6.7.2) are affected by such weight shifts. For a more thorough discussion of this, see Ref. [18]. The present response vs. spot-diameter formulas can straightforwardly be extended to a variety of other model choices for the distribution of ganglion-cell inputs to relay cells and interneurons. An example is a model which includes more distant neighbors than the

nearest-neighbor ganglion cells as inputs to the inhibitory mechanism, as hinted at by the results of Sec. 6.7.1. In Ref. [18] we also studied continuous models in which an infinite number of ganglion-cell inputs to dLGN neurons are included. While discrete models with a finite number of inputs evidently are biologically more realistic, some formulas found with the continuous models are mathematically simpler. Continuous models may therefore be useful in more extended network calculations. To summarize, we found our simple feedforward model to account well for the results from the nonlagged X cells in the experiments of Ruksenas et al. [24] with circular-spot stimuli. Predictions regarding (1) distances between neighboring retinal ganglion cells providing input to interneurons, (2) receptive-field center sizes of interneurons, and (3) the amount of center-surround antagonism for interneurons compared to relay cells are all found to be compatible with data available in the literature. However, more tests are needed.

6.9 Further tests of mechanistic model

As discussed in Sec. 4, a model claiming general validity should have high predictive power, i.e., it should be able to correctly predict results from several types of experiments. Moreover, correct predictions of experimental results for situations very different from the experiments on which the model is based are the most convincing. To test the model presented here further, one should thus look for other types of experimental tests. In Ref. [41] we describe how data from drifting-grating experiments can be used to test this and other mechanistic models for the geniculate circuitry. A good approach for testing models for the geniculate circuitry would be to record the response of single neurons to both circular-spot and drifting-grating stimuli. Then a mathematical model fitted to experimental results for, e.g., circular-spot stimuli for one particular cell pair would produce testable predictions for the experimental response when drifting-grating stimuli are presented to the same cell pair. However, when comparing predictions from models fitted to data from circular flashing-spot experiments with drifting-grating results, a complication arises from the differences in the temporal profiles of the two stimuli. The needed mathematical relation between the spatial receptive fields measured with flashing spots using the spike-count code and the firing-rate-based spatiotemporal receptive fields measured with drifting gratings was derived in Ref. [17]. The effect of the modulatory afferents from the brainstem reticular formation (BRF) is naturally accounted for in our mechanistic model, where the retinal and geniculate circuitry are clearly separated. Both relay cells and interneurons receive brainstem afferents which affect their excitability [11]. In our model this would correspond to shifts in the parameters B1* and B2*. However, the distance ra between neighboring retinal ganglion cells feeding into the interneuron will of course not change. Experimental data for relay-cell responses to visual stimuli obtained both with and without brainstem stimulation thus offer another way to test the validity of this and similar mechanistic models for the geniculate circuitry. If the present model is correct, one would expect the parameters B1* and B2* to shift systematically when model parameters fitted to experimental data taken with and without brainstem stimulation are compared.
The fitted values of the model parameter ra should be the same, however.

6.10 Implications for active vision

An interesting feature of the dLGN is that the transmission of visual information through it (from the retina to cortex) depends on the behavioural state of the animal. Relay cells, interneurons and perigeniculate cells change their signal-processing properties in a state-dependent manner. In tonic mode the relay cells transmit signals reliably, and the output is, to a large degree, linearly related to the input. In burst mode a low-threshold calcium spike can be initiated with several sodium-based action potentials “riding” on top of it. In this firing mode the cells respond in a much more non-linear way, and the question of to what extent information is conveyed to cortex in burst mode is still debated. A detailed discussion of this state-dependency of signal transmission is beyond the scope of this

chapter. More information on this topic can be found in two other chapters in this volume [42,43] or in Refs. [9,10]. Of particular significance for the present discussion is the fact that shifts between the two firing modes can be induced by modulatory systems that have their cell bodies in the brainstem, hypothalamus, or basal forebrain. The modulatory neurotransmitters (acetylcholine, noradrenaline, serotonin, etc.) can affect the way the dLGN cells process incoming synaptic activity and the synaptic communication between cells. Thus one possible channel for “active vision” is modulation of the signal-processing properties via these modulatory systems. The mechanistic approach to modelling of the dLGN circuit described in this chapter is well suited for elucidating the effects of these modulations. Since, e.g., the geniculate parameters in our model (B1*, B2*, ra) all have direct physiological interpretations, one can utilize the results from in vitro studies of the effects of modulatory substances on geniculate cells to make predictions about the effect on the whole circuit. For example, Eq. (21) gives the response of a relay cell to a circular stimulating spot in terms of the five retinal parameters (l(Lbkg), l(Lspot), ω, a1, a2) and the three geniculate parameters (B1*, B2*, ra). The best studied modulator system in terms of effects on the visual response is the cholinergic system, which has its cell bodies located in the parabrachial region (PBR) of the brainstem. The application of acetylcholine (ACh) to relay cells can cause a significant depolarisation and thus shift the cell potential closer to the firing threshold. In our model this increased excitability would correspond to a larger value of the parameter B1 (and, provided B2(0) does not increase by a similar or larger amount, B1* will also increase). The effects of ACh on the interneurons are less well known, but application of ACh has been reported to give an inhibition of action-potential generation [44]. If so, this would correspond to a decrease in the parameter B2*. However, the distance ra between neighboring retinal ganglion cells feeding into the interneuron will of course not change by application of ACh (either by local injection or by PBR stimulation). Thus our model gives qualitative predictions about the possible changes in receptive-field organization from modulation by ACh via PBR stimulation. Another obvious channel for active vision is the corticothalamic feedback. Relay cells, interneurons and perigeniculate cells all receive glutamatergic input from layer VI cells, and this input seems to be mainly modulatory [45]. The thalamocortical/corticothalamic loop has recently been suggested to play a role in the modulation of orientation and directional selectivity in cortical cells [46]. In contrast to modulatory afferents from, e.g., the brainstem, the corticothalamic feedback is retinotopically organized. This pathway may change the transfer properties of the dLGN circuit in a specific localized region of the visual field and may thus be a mechanism for selective attention. The role of this can be elucidated when the present mechanistic model for the geniculate circuit is expanded to include feedback from both perigeniculate cells and cortex [47].

Acknowledgements

A. Kielland is acknowledged for evaluating the manuscript from a “critical biologist's” point of view and for suggesting several improvements. A. Kocbach, H.E. Plesser and V. Strengen are also acknowledged for commenting on an early version of the manuscript.

Appendix

This appendix shows the derivation of the retinal ganglion-cell spot-response functions for the cases when the spot stimulus is concentric (Eq. 16) and non-concentric (Eq. 19) with the receptive-field center of the ganglion cell.

Concentric case

For the concentric case the two integrals in Eq. (15) can be reorganized as follows:

    R_g(d) = l(L_{spot}) \int_0^{d/2} \int_0^{2\pi} g_g(r) \, d\theta \, r \, dr + l(L_{bkg}) \int_{d/2}^{\infty} \int_0^{2\pi} g_g(r) \, d\theta \, r \, dr
           = l(L_{bkg}) \int_0^{\infty} \int_0^{2\pi} g_g(r) \, d\theta \, r \, dr + (l(L_{spot}) - l(L_{bkg})) \int_0^{d/2} \int_0^{2\pi} g_g(r) \, d\theta \, r \, dr .      (22)

The two integrals are of the same type, the only difference being the upper limit of the radial integration. We thus only need to solve the second integral for a general value of the spot diameter d, and then let d → ∞ in this result to obtain the first integral. By insertion of g_g(r) from Eq. (14) we see that the integral we need to solve is

    \int_0^{d/2} \int_0^{2\pi} g_g(r) \, d\theta \, r \, dr = \int_0^{d/2} \int_0^{2\pi} \left( \frac{1}{\pi a_1^2} e^{-r^2/a_1^2} - \frac{\omega}{\pi a_2^2} e^{-r^2/a_2^2} \right) d\theta \, r \, dr .      (23)

This again consists of two integrals of the same type,

    \int_0^{d/2} \int_0^{2\pi} \frac{1}{\pi a^2} e^{-r^2/a^2} \, d\theta \, r \, dr ,      (24)

which we solve straightforwardly by use of the dimensionless variable t = r/a:

    \int_0^{d/2} \int_0^{2\pi} \frac{1}{\pi a^2} e^{-r^2/a^2} \, d\theta \, r \, dr = \frac{2\pi}{\pi a^2} \int_0^{d/2} e^{-r^2/a^2} r \, dr = 2 \int_0^{d/2a} e^{-t^2} t \, dt = 1 - e^{-d^2/4a^2} .      (25)

When the upper limit d → ∞, we see from the final expression that the integral becomes 1. Insertion of these results into Eq. (22) then gives

    R_g(d;0) = l(L_{bkg})(1-\omega) + (l(L_{spot}) - l(L_{bkg})) \left[ \left(1 - e^{-d^2/4a_1^2}\right) - \omega \left(1 - e^{-d^2/4a_2^2}\right) \right] ,      (26)

which corresponds to Eq. (16) in the main text.

Non-centric case

The mathematical expression for the retinal ganglion-cell response to a spot with diameter d centered a distance ra away from the receptive-field center is a straightforward generalization of Eq. (15),

    R_g(d;r_a) = l(L_{spot}) \int_0^{d/2} \int_0^{2\pi} g_g(|r - r_a|) \, d\theta \, r \, dr + l(L_{bkg}) \int_{d/2}^{\infty} \int_0^{2\pi} g_g(|r - r_a|) \, d\theta \, r \, dr
               = l(L_{bkg}) \int_0^{\infty} \int_0^{2\pi} g_g(|r - r_a|) \, d\theta \, r \, dr + (l(L_{spot}) - l(L_{bkg})) \int_0^{d/2} \int_0^{2\pi} g_g(|r - r_a|) \, d\theta \, r \, dr .      (27)

Here ra is any vector which satisfies |ra| = ra. The first integral, where one integrates over the whole visual field, gives the same result as in the centric case ra = 0. The second integral becomes more complicated, however. When g_g(r) from Eq. (14) is inserted, one sees that this second term consists of two integrals of the type

    \tilde{R} = \frac{1}{\pi a^2} \int_0^{d/2} \int_0^{2\pi} e^{-|r - r_a|^2/a^2} \, d\theta \, r \, dr = \frac{1}{\pi} \int_0^{d/2a} \int_0^{2\pi} e^{-t_a^2} \, e^{-t^2} \, e^{2 t t_a \cos(\theta - \theta_a)} \, d\theta \, t \, dt ,      (28)

where we have introduced the additional dimensionless variable ta = ra/a. Note that the position vectors are given in polar coordinates via ra = (ra, θa) and r = (r, θ). The angular integration can be performed to give

    \int_0^{2\pi} e^{2 t t_a \cos(\theta - \theta_a)} \, d\theta = \int_0^{2\pi} e^{2 t t_a \cos x} \, dx = 2\pi I_0(2 t_a t) ,      (29)

where the modified Bessel function I_0(x) is given by

    I_0(x) = \sum_{m=0}^{\infty} \frac{1}{(m!)^2} \left( \frac{x}{2} \right)^{2m} .      (30)
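The identity in Eqs. (29)-(30) is easy to confirm numerically; a small sketch, assuming Python with NumPy/SciPy, where the test values of t and ta are arbitrary:

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import i0

    t, ta = 0.7, 1.3   # arbitrary test values
    lhs, _ = quad(lambda x: np.exp(2*t*ta*np.cos(x)), 0.0, 2*np.pi)
    print(lhs, 2*np.pi*i0(2*ta*t))   # the two numbers should agree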

Then the integral simplifies to

    \tilde{R} = 2 \int_0^{d/2a} e^{-t^2} e^{-t_a^2} \sum_{m=0}^{\infty} \frac{1}{(m!)^2} t_a^{2m} t^{2m} \, t \, dt
              = e^{-t_a^2} \sum_{m=0}^{\infty} \frac{1}{m!} t_a^{2m} \, 2 \int_0^{d/2a} \frac{t^{2m+1}}{m!} e^{-t^2} \, dt
              = e^{-t_a^2} \sum_{m=0}^{\infty} \frac{1}{m!} t_a^{2m} \int_0^{d^2/4a^2} \frac{u^m}{m!} e^{-u} \, du .      (31)

In the final step we have made the substitution u = t^2. By introduction of the incomplete gamma function, given as

    \gamma(m+1, x) = \frac{1}{m!} \int_0^x u^m e^{-u} \, du ,      (32)

the integral can be compactly written as

    \tilde{R} = e^{-t_a^2} \sum_{m=0}^{\infty} \frac{1}{m!} t_a^{2m} \, \gamma\big(m+1, d^2/4a^2\big) = e^{-r_a^2/a^2} \sum_{m=0}^{\infty} \frac{1}{m!} \left( \frac{r_a}{a} \right)^{2m} \gamma\big(m+1, d^2/4a^2\big) .      (33)
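As a sanity check on the series, Eq. (33) can be compared with direct numerical integration of the displaced Gaussian over the spot (Eq. 28). A minimal sketch, assuming Python with NumPy/SciPy; the parameter values and the truncation level are arbitrary:

    import numpy as np
    from scipy.integrate import dblquad
    from scipy.special import gammainc, factorial

    def R_series(d, ra, a, mmax=60):
        # Eq. (33); gammainc(m+1, x) plays the role of gamma(m+1, x) in Eq. (32)
        m = np.arange(mmax + 1)
        return np.exp(-(ra/a)**2) * np.sum((ra/a)**(2*m)/factorial(m)
                                           * gammainc(m + 1, d**2/(4*a**2)))

    def R_quad(d, ra, a):
        # Eq. (28): integrate the displaced Gaussian directly over the spot of diameter d
        f = lambda th, r: r/(np.pi*a**2) * np.exp(-(r**2 + ra**2 - 2*r*ra*np.cos(th))/a**2)
        val, _ = dblquad(f, 0.0, d/2, 0.0, 2*np.pi)
        return val

    d, ra, a = 2.0, 0.99, 0.62   # arbitrary test values (degrees)
    print(R_series(d, ra, a), R_quad(d, ra, a))   # should agree to several digits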

The result in Eq. (19) is obtained by adding the contributions from the center and the surround.

References

[1] A.L. Hodgkin and A.F. Huxley, A quantitative description of membrane current and its application to conduction and excitation in nerve, Journal of Physiology 117 (1952) 500-544.
[2] D. Johnston and S.M-S. Wu, Foundations of Cellular Neurophysiology, MIT Press, Cambridge, 1994.
[3] C. Koch, Biophysics of Computation, Oxford University Press, New York, 1999.
[4] C.E. Shannon and W. Weaver, The Mathematical Theory of Communication, University of Illinois Press, Urbana, 1949.
[5] F. Rieke, D. Warland, R. de Ruyter van Steveninck, W. Bialek, Spikes: exploring the neural code, MIT Press, Cambridge, 1996.
[6] P.Z. Marmarelis and V.Z. Marmarelis, Analysis of physiological systems: The white-noise approach, Plenum Press, New York, 1978.
[7] H. Cruse, Neural networks as cybernetic systems, Georg Thieme Verlag, New York, 1996.
[8] J. Hertz, A. Krogh, R.G. Palmer, Introduction to the theory of neural computation, Addison-Wesley, New York, 1991.
[9] P. Dayan and L.F. Abbott, Theoretical Neuroscience, MIT Press, Cambridge, 2001.
[10] S.M. Sherman and R.W. Guillery, Functional organization of thalamocortical relays, Journal of Neurophysiology 76 (1996) 1367-1395.
[11] S.M. Sherman and C. Koch, Thalamus. In: G. Shepherd (ed.), The synaptic organization of the brain (4. ed.), Oxford University Press, New York, 1998.
[12] J.T. Schwartz, Economics, mathematical and empirical. In: M. Kac et al. (eds.), Discrete thoughts: Essays on mathematics, science and philosophy, Birkhäuser, 1986, pp. 117-149.
[13] G. McCollum, Social barriers to a theoretical neuroscience, Trends in Neurosciences 23 (2000) 334-336.
[14] C. Koch and I. Segev (eds.), Methods in Neuronal Modeling (2. ed.), MIT Press, Cambridge, 1998.
[15] J.M. Bower and D. Beeman (eds.), The Book of Genesis: Exploring Realistic Neural Models with the General Neural Simulation System (2. ed.), Springer, New York, 1998.
[16] R.W. Rodieck, Quantitative analysis of cat retinal ganglion cell response to visual stimuli, Vision Research 5 (1965) 583-601.
[17] G.T. Einevoll, A. Kocbach, P. Heggelund, Probing the retino-geniculate circuit in cat using circular spot stimuli, Neurocomputing 32-33 (2000) 727-733.
[18] G.T. Einevoll and P. Heggelund, Mathematical models for the spatial receptive-field organization of nonlagged X cells in dorsal lateral geniculate nucleus of cat, Visual Neuroscience 17 (2000) 871-886.
[19] E. Jakobsson, Using theory and simulation to understand permeation and selectivity in ion channels, Methods: A Companion to Methods in Enzymology 14 (1998) 342-351.
[20] R.J. Mashl, Y. Tang, J. Schnitzer, E. Jakobsson, Hierarchical approach to predicting permeation in ion channels, Biophysical Society Abstract (2001).
[21] D. Noble, The initiation of the heartbeat, Clarendon Press, Oxford, 1979.
[22] F.W. Howell, J. Dyhrfjeld-Johnsen, R. Maex, N. Goddard, E. deSchutter, A large-scale model of the cerebellar cortex using PGENESIS, Neurocomputing 32-33 (2000) 1041-1046.
[23] V.A. Parsegian, Harness the hubris: Useful things physicists could do in biology, Physics Today 50 (1997) 23-27.
[24] O. Ruksenas, I.T. Fjeld, P. Heggelund, Spatial summation and center-surround antagonism in the receptive field of single units in the dorsal lateral geniculate nucleus of cat: Comparison with retinal input, Visual Neuroscience 17 (2000) 855-870.
[25] P.O. Bishop, W. Burke, R. Davis, Synapse discharge by single fibre in mammalian visual system, Nature 182 (1958) 728-730.
[26] D. Hubel, T.N. Wiesel, Integrative action in the cat's lateral geniculate body, Journal of Physiology 155 (1961) 385-398.
[27] B.G. Cleland, M.W. Dubin, W.R. Levick, Sustained and transient neurones in cat's retina and lateral geniculate nucleus, Journal of Physiology 217 (1971) 473-496.
[28] E. Kaplan, R.M. Shapley, The origin of the S (slow) potential in the mammalian lateral geniculate nucleus, Experimental Brain Research 55 (1984) 111-116.
[29] A.M.L. Coenen and A.J.H. Vendrik, Determination of the transfer ratio of cat's geniculate neurons through quasi-intracellular recordings and the relation with the level of alertness, Experimental Brain Research 14 (1972) 227-242.
[30] B.G. Cleland and B.B. Lee, A comparison of visual responses of cat lateral geniculate nucleus neurones with those of ganglion cells afferent to them, Journal of Physiology 369 (1985) 249-268.
[31] D.N. Mastronarde, Two classes of single-input X-cells in cat lateral geniculate nucleus. I. Receptive-field properties and classification of cells, Journal of Neurophysiology 57 (1987) 357-380.
[32] D.N. Mastronarde, Two classes of single-input X-cells in cat lateral geniculate nucleus. II. Retinal inputs and the generation of receptive-field properties, Journal of Neurophysiology 57 (1987) 381-413.
[33] D.N. Mastronarde, Non-lagged relay cells and interneurons in the cat lateral geniculate nucleus: Receptive-field properties and retinal inputs, Visual Neuroscience 8 (1992) 407-441.
[34] M.W. Dubin and B.G. Cleland, Organization of visual inputs to interneurons of lateral geniculate nucleus of cat, Journal of Neurophysiology 40 (1977) 410-427.
[35] A.M. Sillito, H.E. Jones, Functional organization influencing neurotransmission in the lateral geniculate nucleus. In: M. Steriade et al. (eds.), Thalamus, Vol. II, Elsevier, New York, 1997, pp. 1-52.
[36] G. Ahlsén, S. Lindström, F.-S. Lo, Excitation of perigeniculate neurones from X and Y principal cells in the lateral geniculate nucleus of the cat, Acta Physiologica Scandinavica 118 (1983) 445-448.
[37] K. Funke and U. Eysel, Inverse correlation of firing patterns of single topographically matched perigeniculate neurons and cat dorsal lateral geniculate relay cells, Visual Neuroscience 15 (1998) 711-729.
[38] H. Wässle, B.B. Boycott, R.-B. Illing, Morphology and mosaic of on- and off-beta cells in the cat retina and some functional considerations, Proc. Royal Soc. B 212 (1981) 177-195.
[39] H. Wässle, L. Peichl, B.B. Boycott, Morphology and topography of on- and off-alpha cells in the cat retina, Proc. Royal Soc. B 212 (1981) 151-175.
[40] L. Peichl and H. Wässle, Size, scatter and coverage of ganglion cell receptive field centres in the cat retina, Journal of Physiology 291 (1979) 117-141.
[41] A. Kocbach, G.T. Einevoll, P. Heggelund, Probing mechanistic models for the retinogeniculate circuit in cat using drifting gratings, Neurocomputing 38-40 (2001) 727-732.
[42] P. Heggelund, Signal processing in the dorsal lateral geniculate nucleus, present volume.
[43] O. Ruksenas, Modulation of LGN responses by ascending activating system, present volume.
[44] D.A. McCormick, J.R. Huguenard, T. Bal, H.C. Pape, Electrophysiological and pharmacological properties of thalamic GABAergic neurons. In: M. Steriade, E.G. Jones, D.A. McCormick (eds.), Thalamus, Vol. 2, Experimental and clinical studies, Elsevier, Oxford, 1997, pp. 155-212.
[45] D.A. McCormick, Neurotransmitter actions in the thalamus and cerebral cortex and their role in neuromodulation of thalamocortical activity, Prog. Neurobiol. 39 (1992) 337-388.
[46] P.C. Murphy, S.G. Duckett, A.M. Sillito, Feedback connections to the lateral geniculate nucleus and cortical response properties, Science 286 (1999) 1552-1554.
[47] H.E. Plesser, G.T. Einevoll, Mechanistic models of the retino-geniculate transfer function, Society for Neuroscience Abstracts (2001).