Estimation of the input parameters in the Feller neuronal model

Susanne Ditlevsen∗
Department of Biostatistics, University of Copenhagen, Øster Farimagsgade 5, 1014 Copenhagen K, Denmark

Petr Lansky†
Institute of Physiology, Academy of Sciences of the Czech Republic, Videnska 1082, 142 20 Prague 4, Czech Republic

(Dated: February 27, 2006)

The stochastic Feller neuronal model is studied, and estimators of the model input parameters, depending on the firing regime of the process, are derived. Closed expressions for the first two moments of functionals of the first-passage time through a constant boundary in the suprathreshold regime are derived, which are used to calculate moment estimators. In the subthreshold regime, the exponentiality of the first-passage time is utilized to characterize the input parameters. The methods are illustrated on simulated data. Finally, approximations of the first-passage-time moments are suggested, and biological interpretations and comparisons of the parameters in the Feller and the Ornstein-Uhlenbeck models are discussed.

PACS numbers: 87.10.+e; 87.19.La; 05.40.-a; 02.50.-r

∗Electronic address: [email protected]; URL: http://staff.pubhealth.ku.dk/~sudi/
†Electronic address: [email protected]
I. INTRODUCTION
The stochastic leaky integrate-and-fire (LIF) neuronal models are common theoretical tools for studying properties of real neuronal systems. They represent a compromise between similarity to real neurons and mathematical tractability (see e.g. [2, 4, 9, 38]). In these models, a neuron is characterized by a single stochastic differential equation describing the evolution of the neuronal membrane potential as a function of time. Due to the simplicity
of these models, some of their features are qualitatively questionable, the most critical ones being unlimited membrane potential fluctuations and state-independent changes of the voltage. Modifications of the basic LIF model have been introduced, in which these unfavorable properties are removed (e.g. [11, 19, 36, 37]). The deterministic LIF model is and has been very popular in theoretical neuroscience, with roots in the beginning of the previous century [21], and it has been compared to more elaborate models (e.g. [12, 15]). The results suggest that despite drastic simplifications, the model is relatively reliable in mimicking the more complex ones. The simplest way to construct a stochastic LIF model is to take the deterministic LIF model and add a suitable type of noise, usually Gaussian white noise, but other forms are also possible. Adding Gaussian white noise converts the deterministic LIF model into the Ornstein-Uhlenbeck (OU) model. If Poissonian noise is added, we end up with Stein's model or modifications of it [35, 36]. In this paper we replace the additive Gaussian white noise by a multiplicative Gaussian white noise that makes the neuronal response to input state-dependent. This step ensures that membrane potential fluctuations are bounded. Another way to construct a stochastic model with Gaussian white noise (either additive or multiplicative) is to apply a diffusion approximation method, starting from a model with Poissonian noise in which the input rates of synaptic bombardment are assumed to be high and the contributions of individual postsynaptic potentials are assumed very small. Whereas the diffusion approximation of the basic LIF model results in the OU process, for the LIF model with state-dependent jumps the range of limiting processes is large [19, 37]. The OU process retains the criticizable unlimited membrane potential fluctuations, whereas the diffusion variants of the LIF models with variable synaptic conductance effects, caused by the introduction of reversal (also called equilibrium) potentials, can have either bounded or unbounded state space (for details see [19]). All these models have been thoroughly investigated in recent years [17, 18, 22, 29–32]. Firing is not an intrinsic property of the LIF models or their modifications, and a firing threshold has to be imposed. An action potential (spike) is produced when the membrane voltage reaches the voltage threshold; it corresponds to the first-passage time for the associated stochastic process describing the voltage. At the moment of spike generation, the voltage is instantaneously reset to its initial value. Time intervals between action potentials are identified with experimentally observable interspike intervals (ISIs). The importance
of the ISIs follows from the generally accepted hypothesis that the information transferred within the nervous system is encoded by the timing of the action potentials. The diffusion variants of the LIF models are popular because the first-passage-time problem is easier to solve for a diffusion process than for its counterpart with discontinuous trajectories. Studies devoted to the comparison of models incorporating variable synaptic conductances with experimental data based on ISI measurement are rare. At present we are not aware of any attempt to fit the parameters of a diffusion model with restricted state space to ISI data. The aim of the present contribution is to derive methods of parameter estimation based on ISI data in the diffusion variant of the LIF model with inhibitory reversal potential. This model is usually called the Feller model. The choice of considering only one reversal potential simplifies the treatment, and it has been shown [20] that the inhibitory reversal potential probably plays a more important role than the excitatory one. Similarly to the OU model, the parameters of the Feller model can be divided into two categories: the input parameters depend on the activity of the neurons in the network, and the intrinsic parameters characterize the neuron itself, independently of its activity. Only estimation of the input parameters is considered, assuming the intrinsic parameters known and fixed. The Feller process has many applications apart from neuronal modelling. In 1951 Feller [8] proposed it as a model for population growth, from which the neuronal community adopted its name. Cox, Ingersoll and Ross [3] proposed it in 1985 to model short-term interest rates, and it has been and still is widely studied in the mathematical finance literature under the name of the CIR model. Pedersen [24] used it to model nitrous oxide emission from soil. In survival analysis, Aalen and Gjessing [1] applied the process as a model for the individual hazard rate. Recently Doering and co-workers studied numerical aspects of the process [7]. Closed expressions for the first two moments of functionals of the first-passage time through a constant boundary are derived and used to construct moment estimators, valid in a subset of the parameter space. The method is illustrated on simulated data. The first two moments of the first-passage time and approximations of them are given. Finally, it is shown that the way the diffusion model is derived from the LIF model with Poissonian noise influences the interpretation of its parameters. The paper represents a parallel study to that on the OU neuronal model [6].
II. THE MODEL AND ITS PROPERTIES
The changes in the membrane potential between two consecutive neuronal firings are represented by a stochastic process Yt indexed by the time t. The reference level for the membrane potential is taken to be the resting potential. The initial voltage (the reset value following a spike) is assumed to be equal to the resting potential, and set to zero, Y0 = y0 = 0. An action potential is produced when the membrane voltage Yt exceeds a voltage threshold for the first time, for simplicity assumed to be equal to a constant Sy > 0. Formally, the interspike interval (ISI) is identified with the first-passage time of the threshold,

T = inf{t > 0 : Yt ≥ Sy}.   (1)

It follows from the model assumptions that for time-homogeneous input containing either a Poissonian or white noise only, the interspike intervals form a renewal process and the initial time can always be identified with zero. Here we consider the white noise input and Yt is a diffusion process. A scalar diffusion process X = {Xt; t ≥ 0} can be described by the stochastic differential equation

dXt = µ(Xt, t) dt + σ(Xt, t) dWt,   (2)

where W = {Wt; t ≥ 0} is a standard Wiener process and µ(·) and σ(·) are real-valued functions of their arguments called the infinitesimal mean and variance. The most common of these neuronal models is the OU model given by

dYt = (−Yt/τ + µ) dt + σ dWt;   Y0 = y0 = 0,   (3)

see e.g. [2, 4, 9, 38]. The constant µ characterizes the neuronal input, τ > 0 reflects spontaneous voltage decay (the membrane time constant) in absence of input, and σ > 0 is the second input parameter determining the amplitude of the noise. The diffusion term σ(·) of model (3) is state independent. The process is unbounded, which is physiologically unrealistic. In this paper we investigate the more realistic Feller model [8] that introduces an inhibitory reversal potential that bounds the process from below, given by

dYt = (−Yt/τy + µy) dt + σy √(Yt − VI) dWt;   Y0 = y0 = 0,   (4)
FIG. 1: Realizations of Xt (membrane potential against time, arbitrary units) for the Feller process (blue line) and the OU process (grey line) in absence of a threshold. For comparison, the same realization of a white noise process was used in the two simulations. The dashed lines are the asymptotic depolarization µτ , which is equal for the two simulations, and the reversal potential VI for the Feller process, respectively. If the threshold value equals S1 , the interspike intervals would be nearly equal for the two processes, for S2 they would be different.
where VI < 0 is the inhibitory reversal potential. In Fig. 1 the two processes are simulated in the absence of a threshold, using the same realization of a white noise for comparison. Parameters are chosen such that they have the same asymptotic mean and variance, see below. The Feller process (blue line) is seen never to go below the value of the reversal potential, whereas the OU process (grey line) does. The Feller process attains larger values more often. From Fig. 1 it appears that if the threshold value equals S1, the interspike intervals would be nearly equal for the two processes, whereas for the threshold equal to S2 they would be different, and the Feller model would spike more often. This observation is valid under the assumption that the reset does not influence the process behavior qualitatively. Analogously to the OU model, the parameters appearing in (1) and (4) can be divided into two groups: parameters characterizing the input, µy and σy, and intrinsic parameters, τy, y0, VI and Sy, which describe the neuron irrespective of the incoming signal [39]. The process (4) can be transformed to the standard form of the Feller process [8] by setting Xt = Yt − VI. We obtain

dXt = (−Xt/τ + µ) dt + σ √Xt dWt;   X0 = x0 = −VI,   (5)
where µ = µy − VI/τy, τ = τy and σ = σy. The interspike interval T is now identified with the first-passage time of the threshold S = Sy − VI by the process Xt. Following Feller's classification of boundaries (see e.g. [13], p. 234), the boundary 0 is entrance if 2µ ≥ σ², which is a reasonable assumption in the neuronal context and will be assumed in the rest of the paper. In this case Xt lives on the positive real axis and (5) admits a stationary distribution. In the original parametrization the condition corresponds to µy − VI/τy ≥ σy²/2, and Yt > VI for all t. The transition density of process (5) is a non-central Chi-square distribution with conditional mean and variance

E[Xt | X0 = x0] = µτ + (x0 − µτ) e^{−t/τ},   (6)

Var[Xt | X0 = x0] = (µτ²σ²/2)(1 − e^{−t/τ})² + x0 τσ² (1 − e^{−t/τ}) e^{−t/τ},   (7)

see e.g. [3]. The asymptotic stationary distribution in the absence of a threshold is a Gamma distribution with shape parameter 2µ/σ² and scale parameter τσ²/2. Thus, the asymptotic mean and variance are E[Xt] = µτ and Var[Xt] = µτ²σ²/2. Again analogously to the OU model, two distinct firing regimes, usually called sub- and suprathreshold, can be established for the Feller model. In the suprathreshold regime, the asymptotic mean depolarization µτ given by (6) is above the firing threshold S and the ISIs are relatively regular (deterministic firing, which means that the neuron is active also in the absence of noise). In the subthreshold regime, µτ ≪ S, and firing is caused only by random fluctuations of the depolarization (stochastic or Poissonian firing). The term "Poissonian firing" indicates that when the threshold is far above the steady-state depolarization µτ (relative to σ), the firing achieves the characteristics of a Poisson point process [14, 23]. The FPT problem for model (5) has not been solved analytically, and only numerical [27] or simulation [16] techniques are available.
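The conditional moments (6)-(7) are easy to check numerically, since the transition law of (5) is the non-central Chi-square distribution mentioned above and can be sampled directly in its standard scaled form. The sketch below is only illustrative; the helper name and the example parameter values (in mV and msec) are ours.

```python
import numpy as np

def sample_feller_transition(x0, dt, mu, tau, sigma, size, rng):
    """Draw X_{t+dt} given X_t = x0 for dX = (-X/tau + mu) dt + sigma*sqrt(X) dW,
    using the scaled non-central chi-square form of the transition density."""
    c = sigma**2 * tau * (1.0 - np.exp(-dt / tau)) / 4.0   # scale factor
    df = 4.0 * mu / sigma**2                               # degrees of freedom (= 2k)
    nc = x0 * np.exp(-dt / tau) / c                        # non-centrality parameter
    return c * rng.noncentral_chisquare(df, nc, size=size)

rng = np.random.default_rng(0)
mu, tau, sigma, x0, dt = 4.0, 10.0, 2.0, 10.0, 5.0         # arbitrary example values

x = sample_feller_transition(x0, dt, mu, tau, sigma, size=200_000, rng=rng)

# Theoretical conditional mean and variance from (6) and (7)
decay = np.exp(-dt / tau)
mean_th = mu * tau + (x0 - mu * tau) * decay
var_th = (mu * tau**2 * sigma**2 / 2) * (1 - decay)**2 + x0 * tau * sigma**2 * (1 - decay) * decay
print(x.mean(), mean_th)   # should agree up to Monte Carlo error
print(x.var(), var_th)
```

Letting dt grow large in the same sketch reproduces the stationary Gamma moments µτ and µτ²σ²/2.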
A. Approximations of the ISI moments
Closed form expressions for the mean and variance of T were calculated in [10, 20]:

E[T] = (S − x0)/µ + τ Σ_{n=1}^{∞} (S^{n+1} − x0^{n+1}) / [(n+1) Π_{i=0}^{n} (µτ + iτσ²/2)],   (8)

Var[T] = 2E[T] S/µ + 2τ E[T] Σ_{n=1}^{∞} S^{n+1} / [(n+1) Π_{i=0}^{n} (µτ + iτσ²/2)]
         − 2τ² Σ_{n=0}^{∞} (S^{n+1} − x0^{n+1})(Σ_{j=1}^{n} 1/j) / [(n+1) Π_{i=0}^{n} (µτ + iτσ²/2)],   (9)

in the case 2µ ≥ σ². Define k = 2µ/σ²; then the assumption that 0 is an entrance boundary implies k ≥ 1. Rewriting (8) and (9) by substituting k yields

E[T] = (S − x0)/µ + τ Σ_{n=2}^{∞} k^n Γ(k) (S^n − x0^n) / [n Γ(k+n) (µτ)^n]
     ≈ (S − x0)/µ + τ Σ_{n=2}^{∞} Γ(k) (kS/(µτ))^n / Γ(k+n+1)
     = (S − x0)/µ + τ (µτ/(kS))^k exp(kS/(µτ)) Γ(kS/(µτ); k) − τ/k − S/((k+1)µ),   (10)

since S/x0 > 1. Here Γ(x; p) = ∫_0^x t^{p−1} e^{−t} dt is the incomplete Gamma function. The first term in (10) corresponds to the mean FPT for the Wiener process with drift µ, so that the mean FPT of the Feller process is always larger. For finite k, that is for σ² > 0, or in the suprathreshold regime, that is for µτ > S, the sums are convergent. This agrees with the deterministic model, where firing only occurs if µτ > S. In the suprathreshold regime the convergence is fast. A cruder approximation yields

E[T] ≈ τ (µτ/(kS))^k exp(kS/(µτ)) Γ(kS/(µτ); k) − τ/k.   (11)
Moreover,

Var[T] = 2E[T] [S/µ + τ Σ_{n=2}^{∞} Γ(k) (kS/(µτ))^n / (n Γ(k+n))] − 2τ² Σ_{n=2}^{∞} k^n Γ(k) (Σ_{j=1}^{n−1} 1/j)(S^n − x0^n) / [n Γ(k+n) (µτ)^n]
       ≈ 2E[T] (E[T] + x0/µ) − 2τ² Σ_{n=2}^{∞} Γ(k) (1/n)(Σ_{j=1}^{n−1} 1/j)(kS/(µτ))^n / Γ(k+n)
       ≈ 2E[T] (E[T] + x0/µ) + τS/µ − τ² (µτ/(kS))^{k−1} exp(kS/(µτ)) Γ(kS/(µτ); k),   (12)

where we have used the crude approximation (1/n) Σ_{j=1}^{n−1} 1/j ≈ 1/2, which is exact for n = 2 and n = 3.
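As a numerical illustration, the exact series (8) for the mean and the cruder closed approximation (11) can be evaluated as follows; this is only a sketch (function names, the truncation rule and the example parameter values are ours), and it assumes the suprathreshold situation in which the series converges quickly.

```python
import math
from scipy.special import gammainc, gamma as gamma_fn

def mean_fpt_series(mu, tau, sigma, S, x0, tol=1e-12, nmax=10_000):
    """E[T] from the series (8); the sum is truncated once the terms are negligible."""
    total = (S - x0) / mu                      # n = 0 term
    prod = mu * tau                            # running product prod_{i=0}^{n} (mu*tau + i*tau*sigma^2/2)
    sp, xp = S, x0                             # running powers S^n, x0^n
    for n in range(1, nmax):
        prod *= mu * tau + n * tau * sigma**2 / 2
        sp *= S
        xp *= x0
        term = tau * (sp - xp) / ((n + 1) * prod)
        total += term
        if term < tol * total:
            break
    return total

def mean_fpt_approx11(mu, tau, sigma, S):
    """The cruder approximation (11)."""
    k = 2 * mu / sigma**2
    z = k * S / (mu * tau)
    lower_inc_gamma = gammainc(k, z) * gamma_fn(k)   # Gamma(z; k) in the notation of the text
    return tau * z**(-k) * math.exp(z) * lower_inc_gamma - tau / k

mu, tau, sigma, S, x0 = 4.0, 10.0, 2.0, 20.0, 10.0   # a suprathreshold example (mu*tau > S)
print(mean_fpt_series(mu, tau, sigma, S, x0))        # value of the series (8)
print(mean_fpt_approx11(mu, tau, sigma, S))          # crude approximation (11)
```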
For comparison with the approximations of the ISI moments in the OU model calculated in [6], we also approximate (8) and (9) using the simpler expressions (19) and (20) derived below when (21) is fulfilled. Taylor expansion of log(e^{T/τ}) to second order around E[e^{T/τ}] and taking expectations yields

E[T] = τ E[log e^{T/τ}] ≈ τ log E[e^{T/τ}] − τ Var[e^{T/τ}] / (2 E[e^{T/τ}]²)
     = τ log[(µτ − x0)/(µτ − S)] − τ²σ²(S − x0)(µτ(S + x0)/2 − x0 S) / [2(µτ − x0)²((µτ − S)² + τσ²(µτ/2 − S))].   (13)
Notice that the first term corresponds to the passage time in the deterministic model when σ² = 0. The mean FPT decreases as σ² increases. Repeating the calculation for (log(e^{T/τ}))² yields

E[T²] = τ² E[(log e^{T/τ})²] ≈ τ² [(log E[e^{T/τ}])² − (log E[e^{T/τ}] − 1) Var[e^{T/τ}]/E[e^{T/τ}]²],

so that

Var[T] ≈ τ² Var[e^{T/τ}] / E[e^{T/τ}]² = τ³σ²(S − x0)(µτ(S + x0)/2 − x0 S) / [(µτ − x0)²((µτ − S)² + τσ²(µτ/2 − S))],   (14)

ignoring higher order terms.
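For completeness, the Taylor-based approximations (13) and (14) are straightforward to code; the sketch below (function names ours) simply transcribes the formulas, is only meaningful when µτ > S and condition (21) below holds, and can be rough when Var[e^{T/τ}] is not small compared with E[e^{T/τ}]².

```python
import math

def mean_fpt_taylor(mu, tau, sigma, S, x0):
    """Approximation (13) of E[T]."""
    num = tau**2 * sigma**2 * (S - x0) * (mu * tau * (S + x0) / 2 - x0 * S)
    den = 2 * (mu * tau - x0)**2 * ((mu * tau - S)**2 + tau * sigma**2 * (mu * tau / 2 - S))
    return tau * math.log((mu * tau - x0) / (mu * tau - S)) - num / den

def var_fpt_taylor(mu, tau, sigma, S, x0):
    """Approximation (14) of Var[T]."""
    num = tau**3 * sigma**2 * (S - x0) * (mu * tau * (S + x0) / 2 - x0 * S)
    den = (mu * tau - x0)**2 * ((mu * tau - S)**2 + tau * sigma**2 * (mu * tau / 2 - S))
    return num / den
```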
III. ESTIMATION OF THE INPUT PARAMETERS
Estimators of the parameters of model (5) from ISI data could be derived if the FPT distribution was known. Unfortunately this is not the case, and we therefore propose estimators based on moments of functionals of the FPT and on approximations of the distribution. In the subthreshold regime the ISIs are approximately exponentially distributed, which suggests the corresponding maximum likelihood estimators. In the suprathreshold regime, where an approximate distribution is not available, we propose a moment estimator. It is not obvious how to determine the regime since µ is unknown, but one could e.g. perform an exponential distribution test. If an exponential distribution of ISIs is rejected, the suprathreshold estimation procedure is applied. The data are assumed to be n observations of T : ti , i = 1, . . . , n. The model implies that the neuronal output forms a renewal process, so that the observations will be independent and identically distributed.
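As a concrete (and admittedly crude) way of choosing between the two procedures, one can test the exponentiality of the observed ISIs; the sketch below uses a Kolmogorov-Smirnov statistic with the mean estimated from the data, so the p-value is only indicative (a Lilliefors-type correction would be more accurate). The function name and the significance level are our choices.

```python
import numpy as np
from scipy import stats

def choose_regime(isis, alpha=0.05):
    """If exponentiality of the ISIs is rejected, use the suprathreshold moment
    estimators of Sec. III B; otherwise use the subthreshold estimator of Sec. III A."""
    isis = np.asarray(isis, dtype=float)
    lam = isis.mean()                                   # exponential mean under the null
    _, p_value = stats.kstest(isis, "expon", args=(0.0, lam))
    return ("suprathreshold" if p_value < alpha else "subthreshold"), p_value
```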
A. Subthreshold regime
If µτ ≪ S relative to σ, the first-passage-time density function can be approximated by an exponential distribution [14, 20, 23, 40],

f(t) = (1/λ) exp(−t/λ),   (15)

where λ is the mean, and can therefore be approximated using (11) for large S:

λ = τ Γ(2µ/σ²) (τσ²/(2S))^{2µ/σ²} exp(2S/(τσ²)) − τσ²/(2µ),   (16)

since lim_{x→∞} Γ(x; p) = Γ(p). From this distribution it is only possible to determine µ and σ up to the parameter function λ. The maximum likelihood estimator is

λ̂ = t̄ = (1/n) Σ_{i=1}^{n} ti.   (17)
The asymptotic variance of the estimator, estimated from the inverted Fisher information evaluated at the optimum, is given by

Var[λ̂] = λ̂²/n.   (18)
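In code, the subthreshold estimation thus reduces to the sample mean; a minimal sketch of (17) and the standard error implied by (18) (function name ours):

```python
import numpy as np

def subthreshold_estimate(isis):
    """MLE of the exponential mean lambda, eq. (17), with its asymptotic
    standard error sqrt(lambda^2 / n) from (18)."""
    isis = np.asarray(isis, dtype=float)
    lam_hat = isis.mean()
    return lam_hat, lam_hat / np.sqrt(isis.size)
```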
B. Suprathreshold regime
To derive the moment estimators, a closed expression for E[e^{T/τ}] is deduced requiring µτ > S (suprathreshold), and an additional condition on σ defined below provides a closed expression for E[e^{2T/τ}], by defining suitable martingales and applying Doob's Optional-Stopping Theorem, in a similar way as done in [5, 6] for the OU neuronal model. In the Appendix it is shown that

E[e^{T/τ}] = (µτ − x0)/(µτ − S)   (19)

if µτ > S, and

E[e^{2T/τ}] = [(µτ − x0)² + τσ²(µτ/2 − x0)] / [(µτ − S)² + τσ²(µτ/2 − S)]   (20)

if

(τσ²/2) (√(1 + 2µ/σ²) − 1) < µτ − S.   (21)
FIG. 2: Part of parameter space where (19) and (20) are valid, and thus where the estimators (24) and (25) are defined. The axes are non-dimensionalized. Vertical axis indicates the ratio between asymptotic mean and threshold, horizontal axis indicates the size of σ² relative to µ. The points are the different values used in the simulations.
The restrictions on the parameter space are illustrated in Fig. 2 as a function of k = 2µ/σ², k ≥ 1. If µτ > g(k)S then (21) is fulfilled, where g(k) = k/(k + 1 − √(k + 1)) is a strictly decreasing function of k. If σ → 0 for fixed µ, then k → ∞ and g(k) → 1, see Fig. 2, and in this limit only the suprathreshold regime is required. If k = 1 the condition is µτ > g(1)S = S/(2 − √2).
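A small helper for checking whether a parameter point lies in this region (e.g. when choosing simulation settings as in Fig. 2); the function name is ours, and the check is just condition (21) in the form µτ > g(k)S:

```python
import math

def in_valid_region(mu, tau, sigma, S):
    """True if mu*tau > g(k)*S with g(k) = k/(k + 1 - sqrt(k + 1)) and k = 2*mu/sigma^2,
    i.e. if condition (21) holds and the estimators (24)-(25) below are well defined."""
    k = 2 * mu / sigma**2
    g = k / (k + 1 - math.sqrt(k + 1))
    return mu * tau > g * S
```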
Straightforward estimators of E[e^{T/τ}] and E[e^{2T/τ}] are obtained from the empirical moments:

Z1 = Ê[e^{T/τ}] = (1/n) Σ_{i=1}^{n} e^{ti/τ},   (22)

Z2 = Ê[e^{2T/τ}] = (1/n) Σ_{i=1}^{n} e^{2ti/τ}.   (23)

Moment estimators of the parameters, assuming that the data are in the allowed parameter region, are then obtained from equations (19) and (20):

µ̂ = (S Z1 − x0) / (τ (Z1 − 1))   (24)

and

σ̂² = (S − x0)² (Z2 − Z1²) / {τ (Z1 − 1) [(Z1 − 1)(S Z2 − x0) − (S Z1 − x0)(Z2 − 1)/2]}.   (25)
Note that the asymptotic depolarization will always be estimated to be suprathreshold (µ̂τ > S). It follows from equation (24) that µ̂ ≈ S/τ if Z1 ≈ Z1 − 1, i.e., if Z1 is very large. In other words, if ti ≫ τ for some i, the data suggest that the model is not in the suprathreshold regime. This shows the importance of the time constant for determining the firing regimes.
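The estimators (22)-(25) translate directly into a few lines of code; a minimal sketch, assuming the intrinsic parameters τ, S and x0 are known and the data lie in the allowed region (the function name is ours):

```python
import numpy as np

def suprathreshold_estimates(isis, tau, S, x0):
    """Moment estimators (24) and (25) of mu and sigma^2 from observed ISIs."""
    t = np.asarray(isis, dtype=float)
    # Note: exp(t/tau) overflows if some ISI is many times tau, which, as noted above,
    # already indicates that the suprathreshold model is inappropriate.
    Z1 = np.mean(np.exp(t / tau))                       # eq. (22)
    Z2 = np.mean(np.exp(2 * t / tau))                   # eq. (23)
    mu_hat = (S * Z1 - x0) / (tau * (Z1 - 1.0))         # eq. (24)
    sigma2_hat = (S - x0)**2 * (Z2 - Z1**2) / (
        tau * (Z1 - 1.0) * ((Z1 - 1.0) * (S * Z2 - x0) - (S * Z1 - x0) * (Z2 - 1.0) / 2.0)
    )                                                   # eq. (25)
    return mu_hat, sigma2_hat
```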
IV. NUMERICAL RESULTS
Trajectories from the Feller process (5) in the suprathreshold regime were simulated according to the Milstein scheme [16] with a stepsize of 0.01 msec for different input parameter values. In all simulations the values τ = 10 msec, S = 20 mV and x0 = 10 mV were used, which corresponds to the values τy = 10 msec, Sy = 10 mV and y0 = 0 mV in the untransformed process (4). The process was run until it reached the threshold S, where the time was recorded. Three sets of parameter values for µ and σ were considered; they are illustrated as points in Fig. 2. For each set of parameter values 1000 data sets of 100 ISIs were generated, and on each of them µ and σ were estimated. Table I lists the values used in the simulations and the mean and standard deviation of the 1000 estimates obtained for each parameter in each simulation.

TABLE I: Parameter values for µ and σ used in the simulations, and mean ± standard deviation of the 1000 estimates µ̂ and σ̂ obtained for each combination of parameter values.

k   µ [mV/msec]   σ [√mV/√msec]   µ̂ [mV/msec]    σ̂ [√mV/√msec]
1   4.5           3.0             4.34 ± 0.47    1.98 ± 0.45
2   4.0           2.0             3.92 ± 0.33    1.45 ± 0.27
6   3.0           1.0             2.98 ± 0.17    0.71 ± 0.10
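A sketch of this simulation design in Python is given below (our reading of the setup, not the authors' code): a Milstein step of 0.01 msec for the process (5), with the first crossing of S recorded and the process restarted at x0. A small positivity guard is added against discretization round-off.

```python
import numpy as np

def simulate_feller_isi(mu, tau, sigma, S, x0, dt=0.01, n_isi=100, rng=None):
    """Generate n_isi first-passage times of the threshold S for the Feller process (5),
    using the Milstein scheme with step dt (msec)."""
    rng = np.random.default_rng() if rng is None else rng
    isis = np.empty(n_isi)
    for j in range(n_isi):
        x, t = x0, 0.0
        while x < S:
            dw = rng.normal(0.0, np.sqrt(dt))
            # Milstein step for dX = (mu - X/tau) dt + sigma*sqrt(X) dW
            x = (x + (mu - x / tau) * dt + sigma * np.sqrt(max(x, 0.0)) * dw
                 + 0.25 * sigma**2 * (dw**2 - dt))
            t += dt
        isis[j] = t
    return isis

# One replicate of the k = 2 row of Table I
isis = simulate_feller_isi(mu=4.0, tau=10.0, sigma=2.0, S=20.0, x0=10.0,
                           rng=np.random.default_rng(1))
print(isis.mean(), isis.std())
```

Feeding the resulting ISIs into the moment estimators of Sec. III B then mimics one replicate of the study summarized in Table I.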
FIG. 3: Gaussian quantile plots (empirical versus theoretical quantiles) for the estimators (24) and (25) from the 1000 artificial data sets simulated with A: µ = 4.5 mV/msec and σ = 3 √mV/√msec, B: µ = 4 mV/msec and σ = 2 √mV/√msec, and C: µ = 3 mV/msec and σ = 1 √mV/√msec. In all simulations τ = 10 msec, S = 20 mV and x0 = 10 mV. The horizontal lines are set at the parameter values used in the simulations.
Normal quantile plots of the estimates are shown in Fig. 3. The estimator of µ appears unbiased, with small variance, and approximately normally distributed, whereas the estimator of σ underestimates the correct value and is far from normal, with a heavy tail to the right.
V. DIFFUSION APPROXIMATION APPROACH
In this section we will show that the interpretation of the parameters of the Feller process is not as straightforward as in the OU model. The diffusion model (4) can be obtained from the deterministic LIF model by adding a noise which restricts the state space from below. However, if we approach the problem from the LIF model with reversal potentials and discontinuous jumps, the conclusions are not as simple. The discrete model is given by
the equation

dXt = −(Xt/θ) dt + a (VE − Xt) dPt + i (Xt − VI) dQt,   (26)

where 1 > a > 0 and −1 < i < 0 are contributions of excitatory and inhibitory postsynaptic potentials driven by Poissonian inputs P and Q with intensities λ and ω, respectively. The constant θ > 0 is the membrane time constant, and VE > 0 is the excitatory reversal potential. Under conditions suitable for the diffusion approximation, namely a → 0, i → 0, λ → ∞, ω → ∞, in such a way that λa → α > 0, ωi → β < 0, λa² → 0 and ωi² → σ², the limiting diffusion process is

dYt = [(−1/θ − α + β) Yt + αVE − βVI] dt + σ √(Yt − VI) dWt,   (27)

which can be written in the form

dYt = (−Yt/τ + µ) dt + σ √(Yt − VI) dWt,   (28)
where τ = θ/(1 + θ(α − β)) and µ = αVE − βVI. This makes the estimation procedure more complicated since τ in equation (28) is no longer an intrinsic parameter as in the OU process given by equation (3). This approach has some consequences for the results presented in the previous sections. It would be more appropriate to estimate not only µ and σ in equation (5), but also the "membrane time constant" τ, which now depends on the input. It follows from equation (28) that τ < θ, which may have a substantial effect on the model behavior. The smaller the value of τ, the lower the firing frequency. Thus, an increase in µ is, at least partly, compensated by a decrease of "the membrane time constant". This implies that the firing rate would increase less steeply with increasing excitation than in the OU model. Moreover, the OU model has been criticized for a too narrow coding range (the range where the firing frequency is above zero, but not below some physiologically acceptable level). From the above considerations it follows that the inclusion of the reversal potential not only has the desirable feature of restricting the state space, but also increases the coding range of the neuron.
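A tiny numerical illustration of the mapping below equation (28) makes the point explicit (the numbers are arbitrary, except VI = −10 mV, which matches x0 = 10 mV used in Sec. IV):

```python
def effective_parameters(theta, alpha, beta, VE, VI):
    """Map the jump-model inputs to the diffusion parameters of (28):
    tau = theta / (1 + theta*(alpha - beta)), mu = alpha*VE - beta*VI."""
    tau = theta / (1.0 + theta * (alpha - beta))
    mu = alpha * VE - beta * VI
    return tau, mu

# theta = 10 msec, alpha = 0.05/msec, beta = -0.02/msec, VE = 70 mV, VI = -10 mV
tau, mu = effective_parameters(10.0, 0.05, -0.02, 70.0, -10.0)
print(tau, mu)   # tau is about 5.9 msec < theta; increasing the input rates shrinks tau further
```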
VI. DISCUSSION
The distribution of the first-passage time, T , through a constant boundary by a Feller process has been the subject of many studies, not only in neuronal modeling [26]. The
distribution is probably unknown in analytical form. In this paper we propose moment estimators of the input parameters, depending on the sub- or suprathreshold conditions. Only the situation where the intrinsic parameters are assumed to be known is considered. The main contribution is the derivation of expressions for E[e^{T/τ}] and E[e^{2T/τ}]. This leads to simple and closed expressions for moment estimators of µ and σ in the suprathreshold regime and when σ is small relative to the distance between the asymptotic depolarization and the threshold. The results obtained from the simulated data show that the proposed estimation procedure in the suprathreshold regime works well for µ. The variance parameter σ is more difficult to estimate. This is in agreement with the results obtained for the OU model [6]. Moreover, approximations of E[T] and Var[T] derived from exact formulas were presented. Considering the stochastic diffusion models, the literature often makes reference to cortical neurons in vivo, which are characterized by irregular spike sequences and many synaptic inputs with relatively small individual contributions to the membrane depolarization [22, 25, 28, 31–34]. We have shown that having one or more ISIs in a studied sample which are several times longer than the membrane time constant, τ, implies that the neuronal firing cannot be described by the Feller model in the suprathreshold regime. The same conclusion was drawn for the OU model [6]. We therefore deduce that cortical neurons can be described by such a diffusion model in the suprathreshold regime only for short time periods.

Acknowledgments
Work supported by grants from the Danish Medical Research Council and the Lundbeck Foundation to S. Ditlevsen, and by the Research project AV0Z5011922, Center for Neuroscience LC554 and by Academy of Sciences of the Czech Republic Grant (Information Society, 1ET400110401) to P. Lansky.
APPENDIX A
Consider the Feller model for the membrane potential X at time t,

dXt = (−Xt/τ + µ) dt + σ √Xt dWt;   X0 = x0,   (A.1)

where τ > 0, 2µ ≥ σ², 0 < x0 < S, and S is the threshold value of the stopping time

T = inf{t > 0 : Xt ≥ S}.   (A.2)

We will now prove (19) and (20) using martingales, which will be defined using the conditional moments of the Feller process. In general, for a martingale Mt and a stopping time T, we have E[M_{T∧t}] = E[M0]. For a subclass the stronger result holds (see [41], p. 221):

Doob's Optional-Stopping Theorem. Let T be a stopping time and let Mt be a uniformly integrable martingale. Then E[MT] = E[M0].

For t > s the conditional moments are

E[µτ − Xt | Xs] = (µτ − Xs) e^{−(t−s)/τ},   (A.3)

E[(µτ − Xt)² | Xs] = (µτ − Xs)² e^{−2(t−s)/τ} + (µτ²σ²/2)(1 − e^{−(t−s)/τ})² + τσ² Xs (1 − e^{−(t−s)/τ}) e^{−(t−s)/τ}.   (A.4)
Define the filtration Ft = σ(Xs; 0 ≤ s ≤ t), the sigma-algebra generated by Xs for 0 ≤ s ≤ t. Then the process

Mt^(1) = (µτ − Xt) e^{t/τ}   (A.5)

is a martingale with respect to Ft by (A.3), and because E[Mt^(1)] < ∞, since Xt follows a non-central Chi-square distribution. Therefore M_{T∧t}^(1), the process Mt^(1) stopped at T, is also a martingale (see [41], p. 99). Moreover, if µτ > S then M_{T∧t}^(1) is uniformly integrable. To show this it is enough to show that |M_{T∧t}^(1)| ≤ Y for all t, for some non-negative variable Y with E[Y] < ∞ (see [41], p. 128). We have E[M0^(1)] = E[M_{T∧t}^(1)], so that

(µτ − x0) = E[(µτ − X_{T∧t}) e^{(T∧t)/τ}] ≥ (µτ − S) E[e^{(T∧t)/τ}].   (A.6)

When µτ > S the coefficient of E[e^{(T∧t)/τ}] is positive, and (A.6) can be rearranged to

(µτ − x0)/(µτ − S) ≥ E[e^{(T∧t)/τ}].   (A.7)

Taking limits on both sides we obtain

(µτ − x0)/(µτ − S) ≥ lim_{t→∞} E[e^{(T∧t)/τ}] = E[e^{T/τ}]   (A.8)

by monotone convergence. The variable Y = µτ e^{T/τ} is thus a non-negative variable with E[Y] < ∞, and |M_{T∧t}^(1)| = (µτ − X_{T∧t}) e^{(T∧t)/τ} ≤ µτ e^{T/τ} = Y. Doob's Optional-Stopping Theorem can therefore be applied to M_{T∧t}^(1) in the suprathreshold regime, yielding

µτ − x0 = E[M0^(1)] = E[MT^(1)] = E[(µτ − XT) e^{T/τ}] = (µτ − S) E[e^{T/τ}],   (A.9)
which finally yields (19). Now define

Mt^(2) = (µτ − Xt)² e^{2t/τ} + τσ² (µτ − Xt) e^{2t/τ} + (µτ²σ²/2)(1 − e^{2t/τ}),   (A.10)

which is a martingale with respect to Ft:

E[Mt^(2) | Fs] = E[Mt^(2) | Xs]
= E[(µτ − Xt)² e^{2t/τ} + τσ² (µτ − Xt) e^{2t/τ} + (µτ²σ²/2)(1 − e^{2t/τ}) | Xs]
= (µτ − Xs)² e^{2s/τ} + (µτ²σ²/2)(e^{t/τ} − e^{s/τ})² + τσ² Xs (e^{t/τ} − e^{s/τ}) e^{s/τ} + τσ² (µτ − Xs) e^{(t+s)/τ} + (µτ²σ²/2)(1 − e^{2t/τ})
= (µτ − Xs)² e^{2s/τ} + τσ² (µτ − Xs) e^{2s/τ} + (µτ²σ²/2)(1 − e^{2s/τ})
= Ms^(2),   (A.11)

using (A.4), and that all moments of the non-central Chi-square variable are finite, so that E[Mt^(2)] < ∞. Therefore

(µτ − x0)² + τσ²(µτ − x0) = E[M0^(2)] = E[M_{T∧t}^(2)]
= E[(µτ − X_{T∧t})² e^{2(T∧t)/τ} + τσ² (µτ/2 − X_{T∧t}) e^{2(T∧t)/τ} + µτ²σ²/2]
≥ [(µτ − S)² + τσ²(µτ/2 − S)] E[e^{2(T∧t)/τ}] + µτ²σ²/2,   (A.12)

where we have used that (µτ − X_{T∧t})² > (µτ − S)² when µτ > S. When

(τσ²/2) (√(1 + 2µ/σ²) − 1) < µτ − S   (A.13)

is fulfilled, the coefficient of E[e^{2(T∧t)/τ}] is positive, and (A.12) can be rearranged to

[(µτ − x0)² + τσ²(µτ/2 − x0)] / [(µτ − S)² + τσ²(µτ/2 − S)] ≥ E[e^{2(T∧t)/τ}].   (A.14)

Taking limits on both sides we obtain

[(µτ − x0)² + τσ²(µτ/2 − x0)] / [(µτ − S)² + τσ²(µτ/2 − S)] ≥ lim_{t→∞} E[e^{2(T∧t)/τ}] = E[e^{2T/τ}]   (A.15)

by monotone convergence. Taking Y = ((µτ)² + µτ²σ²/2) e^{2T/τ} + µτ²σ²/2, which is non-negative, we obtain the uniform integrability since |M_{T∧t}^(2)| ≤ Y and E[Y] < ∞ if condition (A.13) is fulfilled. Doob's Optional-Stopping Theorem can therefore be applied to M_{T∧t}^(2), which finally yields (20).
REFERENCES

[1] O.O. Aalen and H.K. Gjessing. Survival models based on the Ornstein-Uhlenbeck process. Lifetime Data Analysis, 10:407–423, 2004.
[2] M.A. Arbib. The Handbook of Brain Theory and Neural Networks. MIT Press, 2nd edition, 2002.
[3] J.C. Cox, J.E. Ingersoll, and S.A. Ross. A theory of the term structure of interest rates. Econometrica, 53:385–407, 1985.
[4] P. Dayan and L.F. Abbott. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press, 2001.
[5] S. Ditlevsen. A result on the first-passage time of an Ornstein-Uhlenbeck process. Research Report 04/4, Department of Biostatistics, University of Copenhagen, 2004.
[6] S. Ditlevsen and P. Lansky. Estimation of the input parameters in the Ornstein-Uhlenbeck neuronal model. Phys. Rev. E, 71:Art. No. 011907, 2005.
[7] C.R. Doering, K.V. Sargsyan, and P. Smereka. A numerical method for some stochastic differential equations with multiplicative noise. Phys. Letters A, 344:149–155, 2005.
[8] W. Feller. Diffusion processes in genetics. In J. Neyman, editor, Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, pages 227–246. University of California Press, Berkeley, 1951.
[9] W. Gerstner and W.M. Kistler. Spiking Neuron Models. Cambridge University Press, 2002.
[10] V. Giorno, P. Lansky, A.G. Nobile, and L.M. Ricciardi. Diffusion approximation and first-passage-time problem for a model neuron. Biol. Cybern., 58:387–404, 1988.
[11] P.I.M. Johannesma. Diffusion models for the stochastic activity of neurons. In E.R. Caianiello, editor, Proceedings of the School on Neural Networks, June 1967, Ravello, pages 116–144. Springer, Berlin, 1968.
[12] R. Jolivet, T.J. Lewis, and W. Gerstner. Generalized integrate-and-fire models of neuronal activity approximate spike trains of a detailed model to a high degree of accuracy. J. Neurophysiology, 92(2):959–976, 2004.
[13] S. Karlin and H.M. Taylor. A Second Course in Stochastic Processes. Academic Press, San Diego, 1981.
[14] J. Keilson and H.F. Ross. Passage time distributions for Gaussian Markov (Ornstein-Uhlenbeck) statistical processes. Selected Tables in Math. Stat. III, 1975.
[15] W.M. Kistler, W. Gerstner, and J.L. van Hemmen. Reduction of the Hodgkin-Huxley equations to a single-variable threshold model. Neural Comput., 9(5):1015–1045, 1997.
[16] P.E. Kloeden and E. Platen. Numerical Solution of Stochastic Differential Equations. Springer, Berlin, 1999.
[17] A.F. Kohn. Dendritic transformations on random synaptic inputs as measured from a neuron's spike train - modeling and simulation. IEEE Trans. Biomed. Engn., 36(1):44–54, 1989.
[18] G. La Camera, A. Rauch, H.R. Luscher, et al. Minimal models of adapted neuronal response to in vivo-like input currents. Neural Comput., 16(10):2101–2124, 2004.
[19] P. Lansky and V. Lanska. Diffusion approximations of the neuronal model with synaptic reversal potentials. Biol. Cybern., 56:19–26, 1987.
[20] P. Lansky, L. Sacerdote, and F. Tomasetti. On the comparison of Feller and Ornstein-Uhlenbeck models for neural activity. Biol. Cybern., 73:457–465, 1995.
[21] L. Lapicque. Recherches quantitatives sur l'excitation electrique des nerfs traitee comme une polarization. J. Physiol. Pathol. Gen., 9:620–635, 1907.
[22] A. Longtin, B. Doiron, and A.R. Bulsara. Noise-induced divisive gain control in neuron models. Biosystems, 67(1-3):147–156, 2002.
[23] A.G. Nobile, L.M. Ricciardi, and L. Sacerdote. Exponential trends of Ornstein-Uhlenbeck first-passage-time densities. J. Appl. Prob., 22:360–369, 1985.
[24] A.R. Pedersen. Estimating the nitrous oxide emission rate from the soil surface by means of a diffusion model. Scand. J. Statist., 27:385–403, 2000.
[25] A. Rauch, G. La Camera, H.R. Luscher, W. Senn, and S. Fusi. Neocortical pyramidal cells respond as integrate-and-fire neurons to in vivo-like input currents. J. Neurophysiology, 90(3):1598–1612, 2003.
[26] S. Redner. A Guide to First-Passage Processes. Cambridge University Press, 2001.
[27] L.M. Ricciardi, A. Di Crescenzo, V. Giorno, and A.G. Nobile. An outline of theoretical and algorithmic approaches to first passage time problems with applications to biological modeling. Math. Japonica, 50(2):247–322, 1999.
[28] M.J.E. Richardson. The effects of synaptic conductance on the voltage distribution and firing rate of spiking neurons. Phys. Rev. E, 69:Art. No. 051918, 2004.
[29] M.J.E. Richardson and W. Gerstner. Characterization of subthreshold voltage fluctuations in neuronal membranes. Neural Comput., 15(11):2577–2618, 2003.
[30] M.J.E. Richardson and W. Gerstner. Synaptic shot noise and conductance fluctuations affect the membrane voltage with equal significance. Neural Comput., 17(4):923–947, 2005.
[31] M. Rudolph and A. Destexhe. An extended analytic expression for the membrane potential distribution of conductance-based synaptic noise. Neural Comput., 17(11):2301–2315, 2005.
[32] M. Rudolph, J.G. Pelletier, D. Pare, and A. Destexhe. Characterization of synaptic conductances and integrative properties during electrically induced EEG-activated states in neocortical neurons in vivo. J. Neurophysiology, 94(4):2805–2821, 2005.
[33] M. Rudolph, Z. Piwkowska, M. Badoual, T. Bal, and A. Destexhe. A method to estimate synaptic conductances from membrane potential fluctuations. J. Neurophysiology, 91:2884–2896, 2004.
[34] Y. Sakai, S. Funahashi, and S. Shinomoto. Temporally correlated inputs to leaky integrate-and-fire models can reproduce spiking statistics of cortical neurons. Neural Networks, 12(7-8):1181–1190, 1999.
[35] R.B. Stein. A theoretical analysis of neuronal activity. Biophys. J., 5:173–194, 1965.
[36] R.B. Stein. Some models of neuronal activity. Biophys. J., 7:37–68, 1967.
[37] H.C. Tuckwell. Synaptic transmission in a model for stochastic neural activity. J. Theor. Biol., 77:65–81, 1979.
[38] H.C. Tuckwell. Introduction to Theoretical Neurobiology, Vol. 2: Nonlinear and Stochastic Theories. Cambridge Univ. Press, Cambridge, 1988.
[39] H.C. Tuckwell and W. Richter. Neuronal interspike time distributions and the estimation of neurophysiological and neuroanatomical parameters. J. Theor. Biol., 71:167–180, 1978.
[40] F.Y.M. Wan and H.C. Tuckwell. Neuronal firing and input variability. J. Theoret. Neurobiol., 1:197–218, 1982.
[41] D. Williams. Probability with Martingales. Cambridge University Press, 1991.