Computing Discrete Poisson Probabilities for Uniformization Algorithm

Maciej Rafal Burak
Applied Informatics, West Pomeranian University of Technology
ul. Sikorskiego 37, Szczecin, Poland
http://www.we.zut.edu.pl

April 13, 2016

Abstract

This paper deals with the computation of discrete Poisson probabilities and their precise tail bounds for use in solving transient Markov models with the uniformization with steady-state detection algorithm. The proposed algorithm calculates the weights corresponding to the probabilities, maintaining the beneficial properties of the popular Fox-Glynn method for use in the uniformization algorithm, while improving the estimation of the truncation points, with a significant effect on overall performance.

Keywords: Computing discrete distributions, Poisson, uniformization, numerical methods, algorithms

1 Introduction

The paper deals with the computation of discrete Poisson distribution values for use in the uniformization with steady-state detection algorithm of Burak [2014]. For a given error bound ε, representing the remaining mass of the Poisson cdf, the algorithm first computes temporary truncation points L_t and R_t such that both the probability Q_λ(L_t) that at least ⌊L_t⌋ events happen and the probability T_λ(R_t) that at most ⌊R_t⌋ events happen for a given λ (as defined in Fox and Glynn [1988]) are always smaller than ε/2 × 10⁻³, using a simple approximation related to the central limit theorem. Afterwards, starting from the mode m = ⌊λ⌋, the weights are computed recursively and then summed to obtain the total weight, which is used for scaling the weights in order to calculate the discrete probabilities. Finally, the precise truncation points L and R, such that both Q_λ(L) and T_λ(R) are smaller than ε/2, are obtained by recursive summation of the weights starting from L_t and R_t.

The paper is structured as follows. The next section reviews the uniformization with steady-state detection algorithm, with emphasis on the role of the truncation bounds. Section 3 introduces the proposed method of calculating the discrete Poisson probabilities, followed by implementation details and computational examples. The paper ends with a summary of the numerical experiment results and conclusions.
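For illustration only, the C sketch below follows the scheme outlined above under several assumptions: the function and structure names are invented for this example, the constant in the normal-approximation bound is a generic Chernoff-style choice rather than the paper's, and the scaling value 1e280 merely stands in for a suitable underflow guard.

    #include <math.h>
    #include <stdlib.h>

    typedef struct {
        int base;      /* w[i - base] corresponds to i Poisson events */
        int left;      /* precise left truncation point L             */
        int right;     /* precise right truncation point R            */
        double *w;     /* unnormalized weights                        */
        double total;  /* total weight used for scaling               */
    } poisson_weights;

    poisson_weights compute_poisson_weights(double lambda, double eps)
    {
        /* temporary truncation points from a crude normal-approximation
           bound exp(-c*c/2) <= (eps/2) * 1e-3 (the paper's bound may differ) */
        double c  = sqrt(-2.0 * log(0.5e-3 * eps));
        int    m  = (int)floor(lambda);                      /* mode */
        int    Lt = (int)fmax(0.0, floor(lambda - c * sqrt(lambda) - 1.0));
        int    Rt = (int)ceil(lambda + c * sqrt(lambda) + 1.0);

        poisson_weights pw;
        pw.base = Lt;
        pw.w    = calloc((size_t)(Rt - Lt + 1), sizeof(double));

        /* weights computed recursively from the mode, scaled to avoid underflow */
        pw.w[m - Lt] = 1e280;
        for (int i = m; i > Lt; i--)            /* w(i-1) = w(i) * i / lambda   */
            pw.w[i - 1 - Lt] = pw.w[i - Lt] * i / lambda;
        for (int i = m; i < Rt; i++)            /* w(i+1) = w(i) * lambda/(i+1) */
            pw.w[i + 1 - Lt] = pw.w[i - Lt] * lambda / (i + 1);

        pw.total = 0.0;
        for (int i = Lt; i <= Rt; i++)
            pw.total += pw.w[i - Lt];

        /* refine the truncation points by summing tail weights up to eps/2 */
        double tail = 0.0;
        pw.left = Lt;
        while (tail + pw.w[pw.left - Lt] <= 0.5 * eps * pw.total) {
            tail += pw.w[pw.left - Lt];
            pw.left++;
        }
        tail = 0.0;
        pw.right = Rt;
        while (tail + pw.w[pw.right - Lt] <= 0.5 * eps * pw.total) {
            tail += pw.w[pw.right - Lt];
            pw.right--;
        }
        return pw;
    }

With such a structure, the scaled probability of i events for left ≤ i ≤ right would be obtained as w[i - base] / total.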


2 Uniformization Method of Solving CTMCs

A Continuous Time Markov Chain (CTMC) is described by an infinitesimal generator matrix Q : (n + 1) × (n + 1), Q = (q_{i,j}), and the initial state probability vector p(0), where the value q_{i,j} (i ≠ j) is the rate at which state i changes to state j and q_{i,i} = −Σ_{j≠i} q_{i,j} represents the rate for the event of staying in the same state. The transient distribution p(t) at time t can be calculated using Kolmogorov's forward equations:

    p′(t) = p(t) Q    (1)

where the vector p(t) = [p_0(t), ..., p_n(t)] gives the probabilities of the system being in any of the states at time t.

Uniformization, or randomization, known since the publication of Jensen in 1953 and therefore also referred to as the Jensen method, is the method of choice for computing the transient behavior of CTMCs. To use uniformization we first define the matrix

    P = I + Q/α    (2)

which, for a uniformization rate α ≥ max_i(|q_{i,i}|), is a stochastic matrix. In order to preserve the sparsity of the matrix P, the usual method of implementing uniformization is to compute the transient distribution p(t) using

    p(t) = Σ_{i=0}^{∞} Π(i) e^{−αt} (αt)^i / i!    (3)

where α is the uniformization rate, as described in (2), and Π(i) is the state probability vector of the underlying DTMC after step i, computed iteratively by

    Π(0) = p(0),   Π(i) = Π(i − 1) P    (4)

as proposed e.g. in Reibman and Trivedi [1988].
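A minimal C sketch of the construction of P and the choice of α in (2) is given below; the function name, the dense row-major storage, and the fallback for an all-zero generator are assumptions made only to keep the example self-contained, since practical implementations keep Q (and hence P) sparse.

    #include <math.h>

    /* Builds P = I + Q/alpha as in (2), with alpha = max_i |q_{i,i}|. */
    double uniformize(int n, const double *Q, double *P)   /* both (n+1) x (n+1) */
    {
        int dim = n + 1;
        double alpha = 0.0;
        for (int i = 0; i < dim; i++)
            if (fabs(Q[i * dim + i]) > alpha)
                alpha = fabs(Q[i * dim + i]);
        if (alpha == 0.0)        /* all states absorbing: any alpha > 0 works */
            alpha = 1.0;

        for (int i = 0; i < dim; i++)
            for (int j = 0; j < dim; j++)
                P[i * dim + j] = (i == j ? 1.0 : 0.0) + Q[i * dim + j] / alpha;
        return alpha;
    }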

To compute p(t) within a prespecified error tolerance, the computation can stop when the remaining mass of the Poisson cdf is less than the error bound ε:

    1 − Σ_{i=0}^{k} e^{−αt} (αt)^i / i! ≤ ε    (5)

with k being the right truncation point.
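For condition (5), a small self-contained sketch of finding k is shown below; it accumulates Poisson terms directly and is not the method proposed in this paper, since exp(−αt) underflows for large αt, which is one reason scaled weights (as in Fox-Glynn and in the approach of this paper) are used instead.

    #include <float.h>
    #include <math.h>

    /* Finds the right truncation point k from (5) by accumulating Poisson
       terms until the remaining cdf mass drops below eps.  Usable only for
       moderate alpha*t.                                                   */
    int right_truncation_point(double at, double eps)    /* at = alpha * t */
    {
        double term = exp(-at);          /* i = 0 term of the Poisson pmf  */
        double cdf  = term;
        int k = 0;
        while (1.0 - cdf > eps && term > DBL_MIN) {
            k++;
            term *= at / k;              /* next term from the previous one */
            cdf  += term;
        }
        return k;
    }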

As αt increases, the Poisson probabilities of a small number of events i occurring become less significant. This allows us to start the summation from the l-th iteration, called the left truncation point, with equation (3) reduced to

    p(t) = Σ_{i=l}^{k} Π(i) e^{−αt} (αt)^i / i!    (6)

with the values of l and k equal to:

    Σ_{i=0}^{l−1} e^{−αt} (αt)^i / i! ≤ ε/2,    1 − Σ_{i=0}^{k} e^{−αt} (αt)^i / i! ≤ ε/2    (7)
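The truncated sum (6) with the bounds from (7) can be organized as in the following C sketch; the function signature, the dense matrix-vector product, and the 'poisson' array of precomputed probabilities for l ≤ i ≤ k are assumptions, not the paper's implementation.

    #include <stdlib.h>
    #include <string.h>

    /* Computes (6): the DTMC vector is advanced with (4) for every step up
       to k, but contributions are accumulated only for l <= i <= k.
       poisson[i - l] holds the Poisson probability for step i (e.g. the
       weights above divided by the total weight); P is dense row-major
       only to keep the sketch self-contained.                             */
    void transient_solution(int dim, const double *P, const double *p0,
                            const double *poisson, int l, int k, double *pt)
    {
        double *Pi   = malloc(sizeof(double) * dim);
        double *next = malloc(sizeof(double) * dim);
        memcpy(Pi, p0, sizeof(double) * dim);
        memset(pt, 0, sizeof(double) * dim);

        for (int i = 0; i <= k; i++) {
            if (i >= l)                                  /* accumulate (6)      */
                for (int s = 0; s < dim; s++)
                    pt[s] += poisson[i - l] * Pi[s];
            if (i < k) {                                 /* one MVM step of (4) */
                for (int r = 0; r < dim; r++) {
                    next[r] = 0.0;
                    for (int s = 0; s < dim; s++)
                        next[r] += Pi[s] * P[s * dim + r];
                }
                double *tmp = Pi; Pi = next; next = tmp;
            }
        }
        free(Pi);
        free(next);
    }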

The main computational effort of the algorithm lies in the consecutive k matrix-vector multiplications (MVM) in (4), unless, in the case of a converging Π(i) with a limiting distribution Π(∞), its steady state can be predicted, as proposed in Burak [2014], for some step S, i < S < l, within some predefined error δ (the steady-state detection threshold):

    ‖Π(l) − Π(∞)‖∞ ≤ δ
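A hedged sketch of such a steady-state test is shown below; since Π(∞) is not available during the iteration, the check compares two iterates of (4) taken some steps apart in the infinity norm, which only approximates the criterion referred to above, and the function name and calling convention are assumptions.

    #include <math.h>

    /* Returns 1 when two DTMC iterates of (4) differ by at most delta in
       the infinity norm; delta plays the role of the steady-state
       detection threshold.                                               */
    int steady_state_reached(int dim, const double *Pi_old,
                             const double *Pi_new, double delta)
    {
        double diff = 0.0;
        for (int s = 0; s < dim; s++) {
            double d = fabs(Pi_new[s] - Pi_old[s]);
            if (d > diff)
                diff = d;
        }
        return diff <= delta;        /* ||Pi_new - Pi_old||_inf <= delta */
    }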