52nd IEEE Conference on Decision and Control, December 10-13, 2013, Florence, Italy
Approximating the solution of the chemical master equation by combining finite state projection and stochastic simulation

Aron Hjartarson*, Jakob Ruess* and John Lygeros

*The first two authors contributed equally to this work. The work was supported in part by the European Commission under the projects HYCON2 and MoVeS. All authors are with the Automatic Control Laboratory, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland. [email protected], {ruess,lygeros}@control.ee.ethz.ch
Abstract— The advancement of single-cell technologies has shown that stochasticity plays an important role in many biochemical reaction networks. However, our ability to investigate this stochasticity using mathematical models remains rather limited. The reason for this is that computing the time evolution of the probability distribution of such systems requires one to solve the chemical master equation (CME), which is generally impossible. Therefore, many approximate methods for solving the CME have been proposed. Among these, one of the most prominent is the finite state projection algorithm (FSP), where a solvable system of equations is obtained by truncating the state space. The main limitation of FSP is that the size of the truncation which is required to obtain accurate approximations is often prohibitively large. Here, we propose a method for approximating the solution of the CME which is based on a combination of FSP and Gillespie's stochastic simulation algorithm. The important advantage of our approach is that the additional stochastic simulations allow us to choose state truncations of arbitrary size without sacrificing accuracy, alleviating some of the limitations of FSP.
I. INTRODUCTION

Biochemical reaction networks in which the stochasticity arising from molecular fluctuations cannot be ignored are typically described by continuous-time Markov chain (CTMC) models. In such models the state space corresponds to the number of molecules of the system's chemical species and random state transitions occur when molecules react. The time evolution of the probability distribution of such systems is governed by the chemical master equation (CME) [1]. Solving the CME analytically is impossible in all but the simplest cases, since this requires solving as many coupled differential equations as there are states that the CTMC can take. Many approximation methods have been proposed [2], [3], [4], [5], but their efficiency generally varies based on the size and complexity of the system at hand [6]. One such method is the finite state projection (FSP) algorithm, which approximates the solution of the CME to within a predefined error tolerance, on a subset of the system's state space which is inherently defined by the error tolerance [5]. Maintaining low error tolerance with this algorithm over longer time horizons is usually computationally very demanding since the size of the required subset may grow rapidly. This often makes FSP impractical, especially in systems which contain more than a few species. In such cases a widely used alternative is Gillespie's stochastic simulation algorithm (SSA) [7]. SSA produces exact realizations of the CTMC and hence the solution of the CME can be estimated from a large enough number of simulated state trajectories. This method is appealing due to its simple implementation. Its computational cost, however, may be prohibitive: the number of state trajectories which is required to obtain reasonable statistics may be very large, and the computational cost for simulating each single trajectory grows with the frequency of reaction occurrences in the system and may be substantial.

Here, we propose a method for approximating the solution of the CME on arbitrarily chosen and possibly time-varying subsets of the state space. The main idea of the method is to truncate the state space and to estimate the transition probabilities into the resulting subset from a low number of SSA runs. These transition probabilities are neglected in FSP and are responsible for the accumulation of error over time.

In Section II we introduce chemical reaction networks and the chemical master equation. In Section III we present our method. Subsequently, in Section IV, we introduce a scheme for further improving the approximations by Kalman filtering. Section V is devoted to a simulation study where the performance of our method is numerically evaluated for different chemical reaction networks. Finally, Section VI gives a concluding discussion of our results.
II. CHEMICAL REACTION NETWORKS

A biochemical reaction network consists of a set of chemical species and reactions. Consider a system with $n$ species $A_1, \ldots, A_n$ governed by $m$ reactions $R_\mu$, $\mu = 1, \ldots, m$ of the form

$$S_{1\mu} A_1 + \ldots + S_{n\mu} A_n \xrightarrow{k_\mu} P_{1\mu} A_1 + \ldots + P_{n\mu} A_n,$$

where $k_\mu$ is the reaction rate constant and $S_{i\mu}$ and $P_{i\mu}$, $i = 1, \ldots, n$ are the substrate and product stoichiometric coefficients, respectively. This system can be described by a stochastic process $X(t)$ whose possible states $x = [x_1 \cdots x_n]^T \in \mathcal{X} \subset \mathbb{N}^n$ represent the numbers of molecules $x_i$, $i = 1, \ldots, n$ of the species. The dynamics of $X(t)$ are characterized by the reaction propensities

$$a_\mu(x) = k_\mu \prod_{i=1}^{n} \binom{x_i}{S_{i\mu}}, \quad \mu = 1, \ldots, m,$$

and the stoichiometric transition vectors

$$v_\mu = [P_{1\mu} \cdots P_{n\mu}]^T - [S_{1\mu} \cdots S_{n\mu}]^T, \quad \mu = 1, \ldots, m,$$
which characterize the probabilities of reaction occurrences and the change in the state of the system caused by the reactions, respectively.
The time evolution of the probability distribution of the system is then governed by the chemical master equation (CME) [1]:

$$\dot{p}(x;t) = -p(x;t) \sum_{\mu=1}^{m} a_\mu(x) + \sum_{\mu=1}^{m} p(x - v_\mu; t)\, a_\mu(x - v_\mu),$$

where $p(x;t)$ is the probability that the system occupies state $x$ at time $t$. By fixing a sequence $x_1, x_2, \ldots$ of all the states in $\mathcal{X}$, we can write the time evolution of the probability distribution of the system in vector form as

$$\dot{P}(X,t) = A \cdot P(X,t), \tag{1}$$

where $X = [x_1\ x_2\ \cdots]^T$ and $P(X,t) = [p(x_1,t)\ p(x_2,t)\ \cdots]^T$. The matrix $A$ is sometimes referred to as the state reaction matrix [5] and its elements are given by

$$A_{ij} = \begin{cases} -\sum_{\mu=1}^{m} a_\mu(x_i) & i = j \\ \sum_{\mu \in M_j} a_\mu(x_j) & \forall j \text{ s.t. } M_j \neq \emptyset \\ 0 & \text{else}, \end{cases} \tag{2}$$

where $M_j = \{\mu \in \{1, \ldots, m\} \mid x_j = x_i - v_\mu\}$.
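To make the CTMC dynamics concrete, the following is a minimal sketch (ours, not part of the original paper) of Gillespie's direct method [7], which simulates exact realizations of the process from the propensities $a_\mu$ and the transition vectors $v_\mu$ defined above; the function name and interface are illustrative assumptions.

```python
import numpy as np

def ssa_trajectory(x0, propensities, V, t_end, rng):
    """One exact CTMC realization via Gillespie's direct method [7].

    x0           -- initial state (copy numbers), length-n array
    propensities -- function x -> array [a_1(x), ..., a_m(x)]
    V            -- (m, n) array whose rows are the transition vectors v_mu
    Returns the jump times and the state right after each jump.
    """
    t, x = 0.0, np.array(x0, dtype=int)
    times, states = [t], [x.copy()]
    while True:
        a = propensities(x)
        a0 = a.sum()
        if a0 == 0.0:                      # absorbing state: no reaction can fire
            break
        t += rng.exponential(1.0 / a0)     # waiting time ~ Exp(a0)
        if t >= t_end:
            break
        mu = rng.choice(len(a), p=a / a0)  # reaction index drawn with prob. a_mu / a0
        x = x + V[mu]
        times.append(t)
        states.append(x.copy())
    return times, states
```

The state of trajectory $i$ at an arbitrary time $t$, which the estimators of the next section require, is the last recorded state with jump time at most $t$.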
III. APPROXIMATING THE SOLUTION OF THE CHEMICAL MASTER EQUATION

In most applications the number of states in $\mathcal{X}$ which can be reached by the process is very large or even infinite, and computing the solution of Eq. (1) is often infeasible or computationally very expensive. However, many of the states are usually very unlikely to actually be reached by the process and the probability distribution is essentially concentrated on a much smaller set than $\mathcal{X}$. This is the idea behind finite state projection [5], where a finite set $J \subset \mathcal{X}$ is constructed which contains the bulk of the probability distribution. It is straightforward to see that the time evolution of the probabilities in $J$ depends on the probabilities of the states in $J$ and on the probabilities of the states in the set $J'$ from which transitions into $J$ are possible through a single reaction occurrence. By enumerating the states in $J$ and $J'$, the time evolution of the probabilities of the states in $J$ can be written in vector form as

$$\dot{P}(X_J, t) = \tilde{A} \cdot P(X_J, t) + B \cdot P(X_{J'}, t), \tag{3}$$

where $P(X_J, t) = [p(\tilde{x}_1, t) \cdots p(\tilde{x}_q, t)]^T$ is a vector containing the probabilities of the $q$ states in $J$ and $P(X_{J'}, t) = [p(\tilde{x}'_1, t) \cdots p(\tilde{x}'_l, t)]^T$ is a vector containing the probabilities of the $l$ states in $J'$. The matrix $\tilde{A} \in \mathbb{R}^{q \times q}$ is defined in analogy to Eq. (2) and the matrix $B \in \mathbb{R}^{q \times l}$ is given by

$$B_{ij} = \begin{cases} \sum_{\mu \in M'_j} a_\mu(\tilde{x}'_j) & \forall j \text{ s.t. } M'_j \neq \emptyset \\ 0 & \text{else}, \end{cases}$$

where $M'_j = \{\mu \in \{1, \ldots, m\} \mid \tilde{x}'_j = \tilde{x}_i - v_\mu\}$.

If the set $J$ is chosen such that it contains all states which have significant probabilities of being reached by the process, the probabilities $p(\tilde{x}'_j, t)$, $j = 1, \ldots, l$ will be close to zero and Eq. (3) can be well approximated by a finite system of linear ordinary differential equations by simply dropping the second term on the right hand side. The main limitation of this approach is that the set $J$ often has to be chosen very large to obtain a small approximation error, especially for longer time horizons where more states are likely to be reached by the process. Therefore, the result is often that computing the solution of the approximate system is still computationally prohibitive.

We propose a different approach where the set $J$ does not have to include all states which are likely to be reached. The key idea is that instead of replacing the probabilities $p(\tilde{x}'_j, t)$, $j = 1, \ldots, l$ by zero, we construct estimates $\hat{p}(\tilde{x}'_j, t)$, $j = 1, \ldots, l$ of their values which are obtained using Gillespie's stochastic simulation algorithm (SSA). More specifically, we simulate $N$ SSA trajectories $X^S_1(t), \ldots, X^S_N(t)$ and estimate the probabilities of the states in $J'$ as

$$\hat{p}(\tilde{x}'_j, t) = \frac{1}{N} \sum_{i=1}^{N} I_{\tilde{x}'_j}(X^S_i(t)), \quad j = 1, \ldots, l, \tag{4}$$

where $I$ is the indicator function. Using these estimates we can approximate Eq. (3) by

$$\dot{P}(X_J, t) = \tilde{A} \cdot P(X_J, t) + B \cdot \hat{P}(X_{J'}, t), \tag{5}$$

where $\hat{P}(X_{J'}, t) = [\hat{p}(\tilde{x}'_1, t) \cdots \hat{p}(\tilde{x}'_l, t)]^T$.

Practically, computing estimates of the probabilities in continuous time is cumbersome, since the estimates need to be updated at all times where a reaction occurs in any of the $N$ simulated SSA trajectories. To simplify the computation, we fix a time discretization $\{t_1, t_2, \ldots\}$, construct the estimates $\hat{p}(\tilde{x}'_j, t_i)$, $j = 1, \ldots, l$, $i = 1, 2, \ldots$ and assume that they remain constant between the time points. That is, we set $\hat{p}(\tilde{x}'_j, t) = \hat{p}(\tilde{x}'_j, t_{i+1})$ for $t \in (t_i, t_{i+1}]$, $j = 1, \ldots, l$, $i = 0, 1, \ldots$, where $t_0 = 0$.

For systems which are not in stationarity, it is sometimes more convenient to choose a time-varying sequence of sets $J_{i+1}$, $i = 0, 1, \ldots$ which follows the bulk of the probability distribution, rather than a fixed set $J$. Similar to the approach taken in [4], we implement a scheme for automatically choosing a sequence of sets $J_{i+1}$, $i = 0, 1, \ldots$ such that the sets all contain the same number of states and are centered around $x_{i+1} = \arg\max_{x \in \mathcal{X}} \hat{p}(x, t_{i+1})$, where the estimates $\hat{p}(x, t_{i+1})$ are constructed from the simulated SSA trajectories as described above. Eq. (5) is then solved over the time intervals $[t_i, t_{i+1}]$, $i = 0, 1, \ldots$ with $J = J_{i+1}$. This requires initial conditions $p(x_{J_{i+1}}, t_i)$ for all states $x_{J_{i+1}} \in J_{i+1}$ for each time interval $[t_i, t_{i+1}]$, $i = 1, 2, \ldots$, which are chosen as

$$p(x_{J_{i+1}}, t_i) = \begin{cases} p(x_{J_i}, t_i) & \text{if } \exists\, x_{J_i} \in J_i : x_{J_{i+1}} = x_{J_i} \\ \hat{p}(x_{J_{i+1}}, t_i) & \text{else}, \end{cases}$$

where $\hat{p}(x_{J_{i+1}}, t_i)$ is constructed from the SSA trajectories as described above.
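As a rough illustration (a sketch under our own naming conventions, not the authors' code, and assuming SciPy for the integration), the fixed-set variant of the scheme can be organized as follows: at each grid point $t_{i+1}$ the SSA samples yield the boundary estimates of Eq. (4), which are then held constant while Eq. (5) is integrated over $(t_i, t_{i+1}]$.

```python
import numpy as np
from scipy.integrate import solve_ivp

def empirical_probs(samples, states):
    """Eq. (4): fraction of the sampled SSA states equal to each state in `states`."""
    counts = {tuple(s): 0 for s in states}
    for x in samples:
        key = tuple(x)
        if key in counts:
            counts[key] += 1
    return np.array([counts[tuple(s)] for s in states], dtype=float) / len(samples)

def hybrid_fsp_ssa(P0, A_tilde, B, boundary_states, ssa_states_at, t_grid):
    """Integrate Eq. (5) over consecutive intervals of `t_grid`, holding the
    SSA boundary estimates constant between grid points (fixed set J).

    ssa_states_at[i] -- the N simulated SSA states at time t_grid[i]
    Returns a (len(t_grid), q) array of approximate probabilities on J.
    """
    P = np.asarray(P0, dtype=float)
    history = [P.copy()]
    for i in range(len(t_grid) - 1):
        p_hat = empirical_probs(ssa_states_at[i + 1], boundary_states)   # Eq. (4)
        sol = solve_ivp(lambda t, P: A_tilde @ P + B @ p_hat,            # Eq. (5)
                        (t_grid[i], t_grid[i + 1]), P, rtol=1e-8, atol=1e-10)
        P = sol.y[:, -1]
        history.append(P.copy())
    return np.array(history)
```

For the time-varying sets $J_{i+1}$ the same loop applies, with $\tilde{A}$, $B$ and the initial condition reassembled at every grid point according to the rule above.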
IV. KALMAN FILTERING

In Eq. (5) only estimates of the probabilities of the states in $J'$ are required. However, the SSA trajectories can also be used to estimate the probabilities of the states in $J$. This provides additional information which is available at no extra cost. So far we only used this information to choose the sequence of sets $J_i$ and to construct initial conditions for the states in $J_{i+1} \setminus J_i$. A convenient way to include this additional information is to embed Eq. (5) in a Kalman filter where estimates $\hat{P}(X_J, t) = [\hat{p}(\tilde{x}_1, t) \cdots \hat{p}(\tilde{x}_q, t)]^T$ of the probabilities of the states in $J$, constructed in the same way as the estimates in Eq. (4), are interpreted as the measured system output $y$. In this view, since all the estimators are unbiased, the system can be written as
$$\dot{P}(X_J, t) = \tilde{A} \cdot P(X_J, t) + B \cdot P(X_{J'}, t) + w(t),$$
$$y(t) = \hat{P}(X_J, t) = P(X_J, t) + \eta(t),$$

where $w(t)$ and $\eta(t)$ are zero-mean noise terms with covariance matrices equal to the covariance matrices of $\hat{P}(X_{J'}, t)$ and $\hat{P}(X_J, t)$, respectively. In our implementation, since we are only constructing the estimates at the time points $t_1, t_2, \ldots$, the measurement equation turns into

$$y_i = \hat{P}(X_J, t_i) = P(X_J, t_i) + \eta_i, \quad i = 1, 2, \ldots$$

and $E[w(t)] = [0 \cdots 0]^T$ holds only for $t \in \{t_1, t_2, \ldots\}$. It is well known that optimality of the Kalman filtering estimators would require the noise terms to be independent and normally distributed, i.e. $\eta_i \sim \mathcal{N}(0, R_i)$ and $w(t) \sim \mathcal{N}(0, Q(t))$. As a consequence of the central limit theorem, the distributions of $\hat{P}(X_{J'}, t)$ and $\hat{P}(X_J, t)$, and therefore also the distributions of $w(t)$ and $\eta(t)$, converge to multivariate Gaussian distributions as the number of SSA trajectories $N$ tends to infinity. However, this convergence may be slow and in most cases the rather small number of SSA trajectories which are used in our approach will not lead to Gaussian estimators. Further, since the same SSA trajectories are used to compute all the estimates, the error terms will not be independent. Even so, Kalman filtering has been successfully applied in cases where normality and independence assumptions were violated [3].

In practice, the distributions of the noise terms are often unknown and the noise covariance matrices are treated as design parameters of the filter. In our application, at least if a fixed set $J$ is used, we can once again employ the SSA trajectories to estimate the noise covariance matrices. Recall the definition of the estimators in Eq. (4). The terms $I_{x_j}(X^S_i(t))$, $x_j \in \mathcal{X}$, $i = 1, \ldots, N$ are independent Bernoulli distributed random variables with success probability $p(x_j, t)$. Hence, from the Bienaymé formula it follows that

$$\mathrm{var}(\hat{p}(x_j, t)) = \frac{1}{N} \left( p(x_j, t) - p(x_j, t)^2 \right).$$

Further, simple calculations show that the covariance of the estimators is given by

$$\mathrm{cov}(\hat{p}(x_j, t), \hat{p}(x_k, t)) = -\frac{1}{N}\, p(x_j, t)\, p(x_k, t), \quad j \neq k.$$

The probabilities $p(x_j, t)$ and $p(x_k, t)$ are of course unknown. We can, however, once again replace them with their estimates $\hat{p}(x_j, t)$ and $\hat{p}(x_k, t)$ and thus estimate the variances and covariances of the estimators. Following this approach we obtain the entries of the measurement noise covariance matrices $R_i$, $i = 1, 2, \ldots$ as

$$(R_i)_{jk} = \begin{cases} \frac{1}{N} \left( \hat{p}(\tilde{x}_j, t_i) - \hat{p}(\tilde{x}_j, t_i)^2 \right) & j = k \\[4pt] -\frac{1}{N}\, \hat{p}(\tilde{x}_j, t_i)\, \hat{p}(\tilde{x}_k, t_i) & \text{else}, \end{cases}$$

where $j, k \in \{1, \ldots, q\}$. The formula for the process covariance matrix $Q(t)$ is somewhat more complicated and is given in the Appendix. Having assembled the noise covariance matrices we can set up a hybrid Kalman filter in its standard form: set $t_0 = 0$, $i = 0$ and, starting from some initial state estimate $\hat{x}_{0|0}$ and some initial covariance matrix $\Sigma_{0|0}$, recursively perform the following steps.

Prediction Step: Compute the a priori estimate $\hat{x}_{i+1|i} = \hat{x}(t_{i+1})$ and its covariance $\Sigma_{i+1|i} = \Sigma(t_{i+1})$ by integrating the system

$$\dot{\hat{x}}(t) = \tilde{A} \cdot \hat{x}(t) + B \cdot \hat{P}(X_{J'}, t_{i+1}),$$
$$\dot{\Sigma}(t) = \tilde{A} \cdot \Sigma(t) + \Sigma(t) \cdot \tilde{A}^T + Q(t),$$
over the interval $[t_i, t_{i+1}]$ using $\hat{x}(t_i) = \hat{x}_{i|i}$ and $\Sigma(t_i) = \Sigma_{i|i}$ as initial conditions.

Measurement Step: Compute the a posteriori estimate $\hat{x}_{i+1|i+1}$ and its covariance $\Sigma_{i+1|i+1}$ according to

$$K_{i+1} = \Sigma_{i+1|i} \left( \Sigma_{i+1|i} + R_{i+1} \right)^{-1},$$
$$\hat{x}_{i+1|i+1} = \hat{x}_{i+1|i} + K_{i+1} \left( \hat{P}(X_J, t_{i+1}) - \hat{x}_{i+1|i} \right),$$
$$\Sigma_{i+1|i+1} = \Sigma_{i+1|i} - K_{i+1} \Sigma_{i+1|i}.$$
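To summarize the filter in code, here is a sketch (ours, with illustrative names, not the authors' implementation, again assuming SciPy) of one prediction/measurement cycle, including the assembly of $R_{i+1}$ from the estimated probabilities. Since the output matrix is the identity, the gain simplifies to $K = \Sigma(\Sigma + R)^{-1}$.

```python
import numpy as np
from scipy.integrate import solve_ivp

def measurement_cov(p_hat, N):
    """Entries of R_i built from the estimated probabilities (Section IV)."""
    R = -np.outer(p_hat, p_hat) / N              # covariances of the estimators, j != k
    np.fill_diagonal(R, (p_hat - p_hat**2) / N)  # Bernoulli variances, j == k
    return R

def kalman_step(x_post, S_post, A_tilde, B, p_hat_boundary, y, Q, R, t_span):
    """One prediction/measurement cycle of the hybrid Kalman filter over [t_i, t_{i+1}]."""
    q = len(x_post)

    def rhs(t, z):
        x, S = z[:q], z[q:].reshape(q, q)
        dx = A_tilde @ x + B @ p_hat_boundary    # a priori mean dynamics
        dS = A_tilde @ S + S @ A_tilde.T + Q     # covariance dynamics
        return np.concatenate([dx, dS.ravel()])

    # Prediction step: integrate mean and covariance jointly.
    z0 = np.concatenate([x_post, S_post.ravel()])
    z = solve_ivp(rhs, t_span, z0, rtol=1e-8).y[:, -1]
    x_pred, S_pred = z[:q], z[q:].reshape(q, q)

    # Measurement step: y is the SSA estimate of P(X_J, t_{i+1}).
    K = S_pred @ np.linalg.inv(S_pred + R)
    x_new = x_pred + K @ (y - x_pred)
    S_new = S_pred - K @ S_pred
    return x_new, S_new
```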
V. EXAMPLES

In this section we demonstrate our approach on biochemical reaction networks which have been widely studied in the literature [8], [9]. To assess the performance of our method we compute the estimation error $e = \{e_1, e_2, \ldots\}$ defined as

$$e_i = \frac{1}{2} \sum_{x_j \in J_i} \left| p^{\mathrm{true}}(x_j, t_i) - p^{\mathrm{approx}}(x_j, t_i) \right|, \quad i = 1, 2, \ldots,$$
where the estimates $p^{\mathrm{approx}}(x_j, t_i)$ are computed using our method with 1000 simulated SSA trajectories. In the examples where a fixed set $J$ is chosen, the estimates are computed both with and without Kalman filtering, whereas in the example where a sequence of sets is used, the estimates are only computed without Kalman filtering. The terms $p^{\mathrm{true}}(x_j, t_i)$ are computed using solely SSA with 100,000 simulated trajectories and are considered as the true probabilities. We compare the results to estimates obtained using solely SSA, based on the same 1000 trajectories which are used in our method, and, where possible, to approximations obtained by finite state projection in its standard form using the same subset $J$ as in our method. For all the examples we choose the time discretization $\{t_1 = 1, t_2 = 2, \ldots\}$.
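In code, the error metric is a one-liner; as written here (our sketch) it is half the $L_1$ distance between the two probability vectors on $J_i$, which equals the total variation distance when both vectors are full distributions.

```python
import numpy as np

def estimation_error(p_true, p_approx):
    """e_i = 0.5 * sum_j |p_true(x_j, t_i) - p_approx(x_j, t_i)|."""
    return 0.5 * np.sum(np.abs(np.asarray(p_true) - np.asarray(p_approx)))
```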
Fig. 1. The set J used in Example 1 is shown in grey, including copy numbers of X and Y in the range 3-8 and 30-70, respectively. Level sets of the probability distribution at t = 60, computed using 100,000 SSA trajectories, are shown for reference.
A. Example 1 - Gene expression

In this example, we consider a simple model of gene expression:

$$\emptyset \xrightarrow{a} X, \qquad X \xrightarrow{b} X + Y, \qquad X \xrightarrow{c} \emptyset, \qquad Y \xrightarrow{d} \emptyset,$$
where $a = 0.5$, $b = 1$ and $c = d = 0.1$. We assume that at time $t = 0$ three molecules of X and 30 molecules of Y are present and choose a fixed set $J$ such that it covers the bulk of the stationary distribution of the process, as shown in Figure 1. The estimated total probability in the set $J$ is shown in Figure 2A. The FSP approximation drops to zero early on since the initial condition lies on the boundary of $J$ and the probability that states in $J'$ are reached becomes large already at early time points. The estimates obtained from our method, on the other hand, approximate the true probability fairly well. Additionally performing Kalman filtering slightly improves the estimates at the early time points but shows little advantage at later time points. Figure 2B shows the estimation error $e$. It is noteworthy that, in contrast to Figure 2A, the error of SSA is substantially larger than the error of our method. The reason is that, in the SSA estimates of the total probability in the set $J$, errors in the estimates of the probabilities of individual states average out.
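As a usage illustration, this model could be encoded for the ssa_trajectory sketch of Section II as follows (a hypothetical encoding; only the rate values, the initial condition and the time horizon come from the text and figures).

```python
import numpy as np

# Reactions: 0 -> X, X -> X + Y, X -> 0, Y -> 0, with state x = [#X, #Y].
a, b, c, d = 0.5, 1.0, 0.1, 0.1
V = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]])

def propensities(x):
    # Mass-action propensities for the four reactions above.
    return np.array([a, b * x[0], c * x[0], d * x[1]])

rng = np.random.default_rng(seed=1)
times, states = ssa_trajectory([3, 30], propensities, V, t_end=60.0, rng=rng)
```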
Fig. 2. Example 1. (A) Estimated total probability in the set J. (B) Estimation error e. Means (lines) and 90% confidence intervals (shaded regions) were computed from 1000 runs of each method. Black: True solution. Green: SSA. Red: FSP. Blue: Our method. Magenta: Our method with Kalman filtering.
B. Example 2 - Gene expression with time-varying rates

Here, we consider the same model of gene expression as in Example 1, but assume that the reaction rates vary in time. More specifically, we assume that at time $t = 0$ no molecules are present in the system and that $a = b = f(t)$ and $c = d = 0.1$, where $f(t)$ is a piecewise constant function switching between the values 0 and 1 every 10 time units, starting at $f(0) = 1$. The dynamics of this system change periodically over time. Hence, the process never reaches stationarity and choosing a sequence of different sets is more appropriate than a fixed set $J$. We choose the size of the sets $J_i$ such that 6 consecutive copy numbers of species X and 16 consecutive copy numbers of species Y are included and let the algorithm automatically choose the location of the sets as described in Section III. The resulting sequence of sets is shown in Figure 3. It can be seen that the sets follow the high probability regions. Figure 4 shows the estimation error for the different methods. The results show that the estimates obtained using our method are more accurate than the estimates obtained from SSA on its own.
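For this example, the automatic placement of the sets $J_{i+1}$ described in Section III reduces to centering a fixed-size box of states on the most frequent SSA sample; a possible implementation (ours; the clipping at zero copy numbers is an added assumption):

```python
from collections import Counter

def choose_window(samples, width_x=6, width_y=16):
    """Return J_{i+1}: a width_x-by-width_y box of states centered on the
    arg max of the empirical distribution of the SSA samples."""
    mode_x, mode_y = Counter(tuple(x) for x in samples).most_common(1)[0][0]
    lo_x = max(0, mode_x - width_x // 2)   # keep copy numbers non-negative
    lo_y = max(0, mode_y - width_y // 2)
    return [(i, j) for i in range(lo_x, lo_x + width_x)
                   for j in range(lo_y, lo_y + width_y)]
```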
Fig. 3. The sequence of sets used in Example 2. The size of the sets in the direction of species Y is shown in green and the true marginal probability distribution of Y is shown on the vertical axis for reference.
Fig. 4. Estimation error e for Example 2. Means (lines) and 90% confidence intervals (shaded regions) were computed from 1000 runs of each method. Green: SSA. Blue: Our method.
C. Example 3 - A bistable system

As a third example we consider a reaction network consisting of two species where each species represses the production of the other:

$$\emptyset \xrightarrow{\alpha(Y)} X, \qquad X \xrightarrow{d_1} \emptyset, \qquad \emptyset \xrightarrow{\beta(X)} Y, \qquad Y \xrightarrow{d_2} \emptyset,$$

where

$$\alpha(Y) = \frac{a_1}{Y^n + b_1}, \qquad \beta(X) = \frac{a_2}{X^m + b_2}$$

and $a_1 = a_2 = b_1 = b_2 = n = m = d_1 = d_2 = 0.1$. With these parameters the stationary distribution of the process is multimodal. We assume that at time $t = 0$ no molecules are present and choose a fixed set $J$ such that only states close to one of the modes of the probability distribution are covered (Figure 5).
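The repression propensities $\alpha(Y)$ and $\beta(X)$ are not of mass-action form, but they fit the same simulation interface assumed in the SSA sketch of Section II; a hypothetical encoding with the parameter values from the text:

```python
import numpy as np

# State x = [#X, #Y]; each species represses the production of the other.
a1 = a2 = b1 = b2 = n = m = d1 = d2 = 0.1
V = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

def propensities(x):
    alpha = a1 / (x[1]**n + b1)    # production of X, repressed by Y
    beta = a2 / (x[0]**m + b2)     # production of Y, repressed by X
    return np.array([alpha, d1 * x[0], beta, d2 * x[1]])
```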
Fig. 5. The set J used in Example 3 is shown in grey, including copy numbers of X and Y in the range 0-5 and 0-20, respectively. Level sets of the probability distribution at t = 60, computed using 100,000 SSA trajectories, are shown for reference.

Figure 6 shows the estimated total probability in the set J (A) and the estimation error e (B) for the different methods. It can be seen that FSP is still accurate at the early time points, but its error increases over time. As in Example 1, SSA estimates the total probability in the set J very well but gives less accurate estimates for the individual probabilities, where it is outperformed by our method. Performing Kalman filtering did not yield improved estimates in this example.
Fig. 6. Example 3. (A) Estimated total probability in the set J. (B) Estimation error e. Means (lines) and 90% confidence intervals (shaded regions) were computed from 1000 runs of each method. Black: True solution. Green: SSA. Red: FSP. Blue: Our method. Magenta: Our method with Kalman filtering.
VI. DISCUSSION

We presented a method for approximating the solution of the chemical master equation on a subset of the system's state space, which is based on combining finite state projection and stochastic simulation. The method can be regarded as an extension of finite state projection which accounts for transitions into the chosen subset by estimating the probabilities of the states from which transitions into the subset are possible. We demonstrated that the resulting approximate system can be embedded in a Kalman filtering framework, where estimates obtained from the stochastic simulation serve as measured system outputs. In this light our method can also be regarded as an extension of stochastic simulation, which improves the estimates by including the information given by the chemical master equation.

Compared to finite state projection, the main advantage of our method is that the chosen subset does not have to contain all states which are likely to be reached. This is especially advantageous in cases where, using FSP in its standard form, the subset would have to be chosen very large in order to obtain accurate approximations. For instance, if the probability distribution has to be approximated over longer time horizons, especially for systems which contain more than just a few species, the size of the subset required by FSP is often prohibitively large. Compared to stochastic simulation, our method can serve to reduce the number of simulations which have to be performed to obtain accurate estimates. This is especially important in cases where either the SSA estimators converge slowly as the number of used trajectories increases or simulating each single trajectory is costly. In such cases our method provides an efficient alternative to the existing approaches and can, for instance, be used in model building schemes [10], [11], [12], [13] which require iteratively computing the solution of the chemical master equation. In the future we are planning to develop a scheme which uses receding horizon estimation instead of Kalman filtering and allows one to enforce constraints, for instance that the states of the system are probabilities and have to be positive.

VII. APPENDIX

A. The Process Covariance Matrix Q(t)

The diagonal elements of $Q(t)$ can be approximated as

$$(Q(t))_{jj} = \frac{1}{N} \Bigg( \sum_{f \in Y_j} B_{jf}^2 \left( \hat{p}(\tilde{x}'_f, t) - \hat{p}(\tilde{x}'_f, t)^2 \right) - \sum_{\substack{f, g \in Y_j \\ f \neq g}} B_{jf} B_{jg}\, \hat{p}(\tilde{x}'_f, t)\, \hat{p}(\tilde{x}'_g, t) \Bigg),$$

where $Y_j = \{f \in \{1, \ldots, l\} \mid \exists \mu \text{ s.t. } \tilde{x}'_f = \tilde{x}_j - v_\mu\}$, $j = 1, \ldots, q$. The off-diagonal elements can be approximated as

$$(Q(t))_{jk} = \frac{1}{N} \Bigg( \sum_{g \in Y_j \cap Y_k} B_{jg} B_{kg} \left( \hat{p}(\tilde{x}'_g, t) - \hat{p}(\tilde{x}'_g, t)^2 \right) - \sum_{f \in Y_j} \sum_{\substack{h \in Y_k \\ h \neq f}} B_{jf} B_{kh}\, \hat{p}(\tilde{x}'_f, t)\, \hat{p}(\tilde{x}'_h, t) \Bigg),$$

$j, k = 1, \ldots, q$, $j \neq k$.
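In matrix form, the two expressions above can be assembled compactly: writing $\hat{C} = \frac{1}{N}(\operatorname{diag}(\hat{p}) - \hat{p}\hat{p}^T)$ for the estimated covariance of the boundary estimators gives $Q = B \hat{C} B^T$, which reproduces the element-wise formulas (zero entries of $B$ account for the restriction to the index sets $Y_j$). A short sketch (ours) of the assembly:

```python
import numpy as np

def process_cov(B, p_hat, N):
    """Q(t) assembled from the appendix formulas as Q = B C B^T, with C the
    estimated covariance matrix of the boundary estimators from Eq. (4)."""
    C = (np.diag(p_hat) - np.outer(p_hat, p_hat)) / N
    return B @ C @ B.T
```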
REFERENCES

[1] D. Gillespie, "A rigorous derivation of the chemical master equation," Physica A, vol. 188, no. 1-3, pp. 404-425, 1992.
[2] T. Jahnke, "On reduced models for the chemical master equation," Multiscale Model. Simul., vol. 9, no. 4, pp. 1646-1676, 2011.
[3] J. Ruess, A. Milias-Argeitis, S. Summers, and J. Lygeros, "Moment estimation for chemically reacting systems by extended Kalman filtering," The Journal of Chemical Physics, vol. 135, p. 165102, 2011.
[4] V. Wolf, R. Goel, M. Mateescu, and T. Henzinger, "Solving the chemical master equation using sliding windows," BMC Systems Biology, vol. 4, 2010.
[5] B. Munsky and M. Khammash, "The finite state projection algorithm for the solution of the chemical master equation," The Journal of Chemical Physics, vol. 124, no. 4, p. 044104, 2006.
[6] J. Goutsias and G. Jenkinson, "Markovian dynamics on complex reaction networks," Physics Reports, vol. 529, no. 2, pp. 199-264, 2013.
[7] D. T. Gillespie, "Exact stochastic simulation of coupled chemical reactions," The Journal of Physical Chemistry, vol. 81, no. 25, pp. 2340-2361, 1977.
[8] N. Friedman, L. Cai, and S. Xie, "Linking stochastic dynamics to population distribution: an analytical framework of gene expression," Physical Review Letters, vol. 97, p. 168302, 2006.
[9] B. Munsky and M. Khammash, "Transient analysis of stochastic switches and trajectories with applications to gene regulatory networks," IET Systems Biology, vol. 2, no. 5, pp. 323-333, 2008.
[10] B. Munsky, B. Trinh, and M. Khammash, "Listening to the noise: random fluctuations reveal gene network parameters," Molecular Systems Biology, vol. 5, p. 318, 2009.
[11] G. Neuert, B. Munsky, R. Tan, L. Teytelman, M. Khammash, and A. van Oudenaarden, "Systematic identification of signal-activated stochastic gene regulation," Science, vol. 339, no. 6119, pp. 584-587, 2013.
[12] C. Zechner, J. Ruess, P. Krenn, S. Pelet, M. Peter, J. Lygeros, and H. Koeppl, "Moment-based inference predicts bimodality in transient gene expression," Proceedings of the National Academy of Sciences, vol. 109, no. 21, pp. 8340-8345, 2012.
[13] J. Ruess, A. Milias-Argeitis, and J. Lygeros, "Designing experiments to understand the variability in biochemical reaction networks," Journal of the Royal Society Interface, vol. 10, no. 88, p. 20130588, 2013.