CALCOLO 41, 153–175 (2004) DOI: 10.1007/s10092-004-0089-2


A closed-form solution on a level-dependent Markovian arrival process with queueing application

K. Kawanishi
Department of Computer Science, Gunma University, Gunma 376-8515, Japan
e-mail: [email protected]

Received: February 2004 / Accepted: March 2004 – © Springer-Verlag 2004

Abstract. This paper reports a closed-form solution of the arrival events for a particular level-dependent Markovian arrival process (MAP). We apply the Baker–Hausdorff Lemma to the matrix expression of the number of arrival events in (0, t]. The successful derivation depends on the fact that the matrices representing the MAP have a specific structure. We report the results of numerical experiments indicating that the closed-form solution is less time-consuming than the uniformization technique for large values of t. As an application, we consider a finite-capacity, multi-server queueing model with impatient customers for possible use in automatic call distribution (ACD) systems. Our primary interest lies in performance measures related to customer waiting time, and we demonstrate how the closed-form solution is applicable to performance analysis.

(This work was supported by the Ministry of Education, Science, Sports and Culture, Grant-in-Aid for Encouragement of Young Scientists, 14780344, 2002.)

1 Introduction

Suppose that we are given two finite square matrices C and D of order c + 1. We assume that C has negative diagonal elements and non-negative off-diagonal elements, that D has non-negative elements, at least one of which is strictly positive, and that the row sums of C + D are all zero. We denote by J(t) the state variable at time $t \in \mathbb{R}_+ := [0, \infty)$ of an irreducible Markov process on the state space $S := \{0, 1, \ldots, c\}$ with infinitesimal generator C + D. If we denote by N(t) the number of arrival events up to time t, then a Markovian arrival process (MAP) [11] is defined

as a bivariate Markov process $\{N(t), J(t) : t \in \mathbb{R}_+\}$ on the state space $\{(n, j)\,;\, n \in \mathbb{N}_0,\ j \in S\}$, where $\mathbb{N}_0 := \{0\} \cup \mathbb{N}$ is the set of non-negative integers. The matrices C and D are called the matrices representing the MAP. The matrix C governs transitions without arrival events, while D governs those with arrival events. As a mathematical object, it is not necessary to identify a D-related transition event with the arrival event of a customer, though this interpretation is often employed in the queueing literature; in some cases, an arrival at a queueing system can be viewed as a departure from another system. Following queueing terminology, however, we use the term arrival event to refer to a D-related transition event of a MAP.

A level-dependent Markovian arrival process is an extension of a MAP whose representing matrices depend on the level, i.e., the number of customers in a queue. It can also be viewed as a special case of a level-dependent batch Markovian arrival process (BMAP) [6]. For each $k \in \mathbb{N}_0$, suppose that we are given a pair of square matrices $(C^{(k)}, D^{(k)})$ satisfying the same conditions as the representing matrices C, D of a MAP. Then, a level-dependent MAP is also defined as a bivariate Markov process $\{N^{(k)}(t), J^{(k)}(t) : t \in \mathbb{R}_+\}$ on the state space $\{(n, j)\,;\, n \in \mathbb{N}_0,\ j \in S\}$ for $k \in \mathbb{N}_0$, where $N^{(k)}(t)$ counts the number of arrival events up to time t and $J^{(k)}(t)$ is the state variable of the irreducible Markov process at time t on the state space S, under the assumption that the process started from level k.

In this paper we are concerned with the probability matrix $P_n^{(k)}(t)$, for $k \in \mathbb{N}_0$, whose element (i, j), with $i, j \in S$, is defined by

$$(P_n^{(k)}(t))_{i,j} := \Pr[N^{(k)}(t) = n,\ J^{(k)}(t) = j \mid N^{(k)}(0) = 0,\ J^{(k)}(0) = i] \tag{1}$$

for $n \in \mathbb{N}_0$ and $t \in \mathbb{R}_+$. Using an argument similar to that of Hofmann [6], we see that

$$\exp[Q^{(k)}t] = \begin{pmatrix} P_0^{(k)}(t) & P_1^{(k)}(t) & P_2^{(k)}(t) & \cdots \\ 0 & P_0^{(k+1)}(t) & P_1^{(k+1)}(t) & \cdots \\ 0 & 0 & P_0^{(k+2)}(t) & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix},$$

where $Q^{(k)}$ is given by

$$Q^{(k)} = \begin{pmatrix} C^{(k)} & D^{(k)} & 0 & \cdots \\ 0 & C^{(k+1)} & D^{(k+1)} & \ddots \\ 0 & 0 & C^{(k+2)} & \ddots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$

Thus, we need to evaluate the matrix exponential $\exp[Q^{(k)}t]$ to obtain the solution of $P_n^{(k)}(t)$, which is generally difficult to derive analytically in closed form. In the case of a homogeneous MAP, in which the representing matrices are independent of the level, the general technique for obtaining the analytic solution relies on the generating function method. Indeed, we have the explicit solution [11]

$$\sum_{n=0}^{\infty} P_n(t)\, z^n = \exp[(C + zD)t], \qquad |z| \le 1,\ t \in \mathbb{R}_+,$$

where $P_n(t)$ is the probability matrix of the number of arrival events up to time t for a homogeneous MAP with representation (C, D). It is possible to obtain $P_n(t)$ by evaluating the nth derivative of both sides of the equation at z = 0. The resultant solution is exact but takes the form of a power series expansion in t. In this paper, we consider a closed-form solution of the arrival events for a particular level-dependent Markovian arrival process. This level-dependent MAP is based on a MAP whose representing matrices are constructed by considering the convolution of a two-stage generalized Erlang-type distribution. Without relying on the generating function method, we directly express the number of arrival events up to time t by using a matrix integral expression. We then apply the Baker–Hausdorff Lemma [1] to the matrix expression and obtain a closed-form solution. The successful derivation depends on the fact that the matrices representing the MAP have a specific structure. It is well known that we can evaluate $P_n^{(k)}(t)$ numerically by applying the so-called uniformization technique [13]. This technique uses the Taylor series expansion of the matrix exponential $\exp[Q^{(k)}t]$ and truncates it at order m so as to approximate $P_n^{(k)}(t)$ within a prescribed, tolerable error. In general, m is likely to be large for large values of t, which implies an increasing number of matrix multiplications in the uniformization technique. On the other hand, the closed-form solution obtains $P_n^{(k)}(t)$ by computing several matrix multiplications that do not depend on t for fixed k and n. As a result, we expect the closed-form solution to be less time-consuming than the uniformization technique for large values of t. We report the results of our numerical experiments investigating this conjecture.
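To make the t-dependence of the uniformization technique concrete, the method can be sketched in a few lines of NumPy. This is a minimal illustration rather than the implementation used later in the paper: the function name `uniformization_pn`, the level-truncation parameter N, and the stopping rule on the Poisson tail are all choices made here for the sketch.

```python
import numpy as np

def uniformization_pn(C, D, t, N=20, eps=1e-10):
    """Approximate P_n(t), n = 0..N-1, for a homogeneous MAP (C, D).

    Builds the generator of the counting process truncated at level N
    (exact for n well below N, since the level can only increase) and
    sums Poisson-weighted powers of the uniformized matrix
    P = I + Q/theta until the remaining Poisson mass drops below eps.
    """
    s = C.shape[0]
    Q = np.zeros((N * s, N * s))
    for k in range(N):
        Q[k*s:(k+1)*s, k*s:(k+1)*s] = C
        if k + 1 < N:
            Q[k*s:(k+1)*s, (k+1)*s:(k+2)*s] = D
    theta = max(-np.diag(Q)) * 1.001          # uniformization rate
    P = np.eye(N * s) + Q / theta
    weight = np.exp(-theta * t)               # Poisson(theta*t) mass at m = 0
    term = np.eye(N * s)
    total = weight * term
    acc, m = weight, 0
    while 1.0 - acc > eps:                    # stop at the truncation order m*
        m += 1
        term = term @ P
        weight *= theta * t / m
        total += weight * term
        acc += weight
    return [total[:s, n*s:(n+1)*s] for n in range(N)]
```

For a Poisson process (c = 0, C = (−λ), D = (λ)) the blocks reduce to the Poisson probabilities, which gives a quick correctness check. The loop count grows roughly like θt, which is exactly the shortcoming for large t discussed above.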
As an application of the closed-form solution, we consider a finite-capacity, multi-server queueing model with impatient customers for possible use in automatic call distribution (ACD) systems [9]. Performance measures related to customer waiting time are our primary interest. We demonstrate how the closed-form solution can be used to express performance measures that may be useful for queueing models with impatient customers. The paper is organized as follows. In Sect. 2, we derive the closed-form solution of the number of arrival events for a particular level-dependent MAP. In Sect. 3, we give analytical examples of the closed-form solution.

In Sect. 4, we report the results of numerical experiments using the closed-form solution. In Sect. 5, we demonstrate an application of the closed-form solution to a queueing model. Lastly, we summarize our results in Sect. 6.

2 Closed-form solution

We start by introducing a sequence of (c+1) × (c+1) matrix pairs $\{(C^{(k)}, D^{(k)}) : k \in \mathbb{N}_0\}$ which satisfy the conditions for a level-dependent MAP. For $n \in \mathbb{N}$ and $t \in \mathbb{R}_+$, we define $\Delta_{n,t}$ by

$$\Delta_{n,t} := \{(t_1, t_2, \ldots, t_n) \in \mathbb{R}_+^n \,;\, 0 < t_1 < t_2 < \cdots < t_n < t\}, \tag{2}$$

where $t_i$ is the time when the ith arrival event of the level-dependent MAP occurs, for $i \in \{1, 2, \ldots, n\}$. It is well known that $F^{(k)}(x) = \int_0^x e^{C^{(k)}t} D^{(k)}\,dt$ gives the transition probability matrix for the Markov renewal process embedded at arrival events [12]. Hence, it can be shown that the element (i, j) of the matrix $e^{C^{(k)}t} D^{(k)}$ gives the probability density that an arrival event occurs and that the state of the level-dependent MAP is j at time t, given that the process started from level k and state i. Then, the probability matrix of the number of arrival events up to time t is given by

$$P_n^{(k)}(t) = \int_{\Delta_{n,t}} e^{C^{(k)}t_1} D^{(k)}\, e^{C^{(k+1)}(t_2-t_1)} D^{(k+1)} \cdots e^{C^{(k+n)}(t-t_n)}\, dt_1\, dt_2 \cdots dt_n. \tag{3}$$

We assume that the matrix pair $(C^{(k)}, D^{(k)})$ takes the form

$$C^{(k)} := C - \gamma_k I, \qquad D^{(k)} := D + \gamma_k I, \tag{4}$$

where $\gamma_k \ge 0$ for $k \in \mathbb{N}_0$. Based on these assumptions, we can confirm that the sequence $\{(C^{(k)}, D^{(k)}) : k \in \mathbb{N}_0\}$ satisfies the conditions for the matrices used to construct the level-dependent MAP, provided that the two matrices C and D satisfy the conditions of a MAP. For a given $t \in \mathbb{R}_+$, let $M_n^{(k)}(t)$ be the (c+1) × (c+1) matrix defined by

$$M_n^{(k)}(t) := \int_{\Delta_{n,t}} e^{g_k t_1} e^{Ct_1} D^{(k)} e^{-Ct_1}\; e^{g_{k+1} t_2} e^{Ct_2} D^{(k+1)} e^{-Ct_2} \cdots e^{g_{k+n-1} t_n} e^{Ct_n} D^{(k+n-1)} e^{-Ct_n}\, dt_1\, dt_2 \cdots dt_n \tag{5}$$

for $n \in \mathbb{N}$, and $M_0^{(k)}(t) := I$ for n = 0, where $g_k := \gamma_{k+1} - \gamma_k$ and $k \in \mathbb{N}_0$. Since the equalities $e^{C(x-y)} = e^{Cx}e^{-Cy} = e^{-Cy}e^{Cx}$ hold [1], we can rewrite Eq. (3) as

$$P_n^{(k)}(t) = M_n^{(k)}(t)\, e^{C^{(k+n)}t}. \tag{6}$$

Next we construct the representing matrices (C, D) of the base MAP. Suppose that we have c independent and identically distributed (iid) random variables that follow a two-stage generalized Erlang-type distribution, i.e., a convolution of two exponential random variables with different parameters. The first and second stages are assumed to be exponentially distributed with parameters µ and ξ, respectively. With each random variable we can associate a renewal process whose inter-renewal times are distributed according to that variable. We identify the state space S with the number of second stages among the c renewal processes. It then follows that the superposition of the c renewal processes can be described by a MAP whose representing matrices are given by

$$C = \begin{pmatrix} -c\mu & c\mu & 0 & \cdots & \cdots & 0 \\ 0 & -(c-1)\mu-\xi & (c-1)\mu & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \ddots & \ddots & 0 \\ \vdots & & & \ddots & \ddots & \mu \\ 0 & \cdots & \cdots & \cdots & 0 & -c\xi \end{pmatrix} \tag{7}$$

and

$$D = \begin{pmatrix} 0 & \cdots & \cdots & \cdots & 0 \\ \xi & \ddots & & & \vdots \\ 0 & 2\xi & \ddots & & \vdots \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & c\xi & 0 \end{pmatrix}, \tag{8}$$

or, equivalently,

$$(C)_{i,j} = (c-i)\mu\,\delta_{i,j-1} - [(c-i)\mu + i\xi]\,\delta_{i,j}, \tag{9}$$
$$(D)_{i,j} = i\xi\,\delta_{i,j+1} \tag{10}$$

for $i, j \in S$, where $\delta_{i,j}$ is the Kronecker delta. It is clear that these matrices satisfy the conditions for the representing matrices of a MAP. Note that this is genuinely a Markovian arrival process, and not merely a phase-type renewal process, because the D-related transition rates depend on the state immediately before the transitions occur. These are the building blocks of the level-dependent MAP that we consider in this paper. We demonstrate later that a specific level-dependent MAP can be used to express performance measures for a particular queueing model.

The matrix integral expression of $P_n^{(k)}(t)$ for the level-dependent MAP suggests that we can find a closed-form solution of $P_n^{(k)}(t)$ if $M_n^{(k)}(t)$, and also $e^{C^{(k+n)}t}$, can be obtained analytically. In the following, we seek the closed-form solution of each part separately. First, we investigate an algebraic property of C and D in order to obtain the closed-form solution of $M_n^{(k)}(t)$.

Definition 1 For two square matrices A and B, we define the commutator [A, B] by

$$[A, B] := AB - BA. \tag{11}$$

We can directly calculate the commutator for the matrices C and D. We summarize the results in the following lemma.

Lemma 1 The elements (i, j) of the two commutators [C, D] and [C, [C, D]] are

$$([C,D])_{i,j} = (c-2i)\mu\xi\,\delta_{i,j} + i(\mu-\xi)\xi\,\delta_{i,j+1}, \tag{12}$$
$$([C,[C,D]])_{i,j} = -2(c-i)\mu^2\xi\,\delta_{i,j-1} + (c-2i)(\mu-\xi)\mu\xi\,\delta_{i,j} + i(\mu-\xi)^2\xi\,\delta_{i,j+1}, \tag{13}$$

respectively. Furthermore, the commutator [C, [C, [C, D]]] satisfies

$$[C,[C,[C,D]]] = (\mu-\xi)^2\,[C,D]. \tag{14}$$

Hence, we have

$$\underbrace{[C,[C,\ldots,[C}_{n},D]\cdots]] = \begin{cases} (\mu-\xi)^{2k}\,[C,D], & n = 2k+1, \\ (\mu-\xi)^{2k}\,[C,[C,D]], & n = 2k+2, \end{cases} \tag{15}$$

for $k \in \mathbb{N}_0$.

Proof. We can check this lemma by direct calculation. The details are given in Appendix A. □

The specific structure of C and D plays a key role in Lemma 1. The structure has a Lie-algebra setting, which has attracted much interest within the literature on numerical algebra [7, 14]. We denote by sl(2, R) the Lie algebra of the special linear group SL(2, R), i.e., the group consisting of 2 × 2 real matrices with unit determinant. If we define E, F, and H by

$$(E)_{i,j} := (c-i)\,\delta_{i,j-1}, \tag{16}$$
$$(F)_{i,j} := i\,\delta_{i,j+1}, \tag{17}$$
$$(H)_{i,j} := (c-2i)\,\delta_{i,j}, \tag{18}$$

then we can confirm that they are (c+1) × (c+1) matrix representations of the standard generators $\hat e$, $\hat f$, and $\hat h$ of sl(2, R), with the commutators given by

$$[\hat e, \hat f] = \hat h, \qquad [\hat h, \hat e] = 2\hat e, \qquad [\hat h, \hat f] = -2\hat f. \tag{19}$$

Because C and D are expanded as

$$C = \mu E - \frac{\mu-\xi}{2} H - \frac{c}{2}(\mu+\xi) I, \tag{20}$$
$$D = \xi F, \tag{21}$$

we can also prove Lemma 1 by using the commutators. In general, it can be shown that a closure property similar to Lemma 1 holds whenever C and D (and hence $C^{(k)}$ and $D^{(k)}$) are expanded in terms of E, F, H, and the identity matrix. Note that such matrices do not necessarily satisfy the conditions for the representing matrices of a (level-dependent) MAP. Our construction, however, provides an example of representing matrices C and D which satisfy these properties. For those who are interested in a brief sketch of the general result and other examples of representing matrices, see Appendix B.

Next, we explore the matrix expression of $M_n^{(k)}(t)$. Before proceeding, we need the following key lemma for our analysis.

Lemma 2 (Baker–Hausdorff Lemma [1]) For a real number $t \in \mathbb{R}$ and two square matrices A and B, $e^{At} B e^{-At}$ can be expanded as

$$e^{At} B e^{-At} = B + \sum_{n=1}^{\infty} \frac{t^n}{n!}\, \underbrace{[A,[A,\ldots,[A}_{n},B]\cdots]]. \tag{22}$$
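As a quick numerical spot check of the closure relation (14), one can build C and D from the element formulas (9)-(10) and compare the triple commutator against $(\mu-\xi)^2[C, D]$. A minimal NumPy sketch; the parameter values below are arbitrary choices for the illustration.

```python
import numpy as np

def comm(A, B):
    """Commutator [A, B] = AB - BA of Definition 1."""
    return A @ B - B @ A

c, mu, xi = 3, 0.7, 0.3                     # arbitrary test parameters
C = np.zeros((c + 1, c + 1))
D = np.zeros((c + 1, c + 1))
for i in range(c + 1):
    C[i, i] = -((c - i) * mu + i * xi)      # Eq. (9), diagonal
    if i < c:
        C[i, i + 1] = (c - i) * mu          # Eq. (9), superdiagonal
    if i > 0:
        D[i, i - 1] = i * xi                # Eq. (10), subdiagonal
triple = comm(C, comm(C, comm(C, D)))       # [C, [C, [C, D]]]
```

Checking `triple` against `(mu - xi)**2 * comm(C, D)` confirms (14) to machine precision, and the row sums of C + D confirm the generator property from Sect. 1.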

Combining Lemma 1 and the Baker–Hausdorff Lemma immediately leads us to the following lemma.

Lemma 3 We introduce $a := \mu - \xi$ and $z := \mu/(\mu-\xi)$ for $\mu \neq \xi$. Then

$$e^{g_k t} e^{Ct} D^{(k)} e^{-Ct} = e^{g_k t}(\gamma_k I + J) + e^{(g_k+a)t} K + e^{(g_k-a)t} L, \tag{23}$$

where J, K, and L are (c+1) × (c+1) matrices whose elements (i, j) are defined by

$$(J)_{i,j} := 2(c-i)z^2\xi\,\delta_{i,j-1} - (c-2i)z\xi\,\delta_{i,j}, \tag{24}$$
$$(K)_{i,j} := -(c-i)z^2\xi\,\delta_{i,j-1} + (c-2i)z\xi\,\delta_{i,j} + i\xi\,\delta_{i,j+1}, \tag{25}$$
$$(L)_{i,j} := -(c-i)z^2\xi\,\delta_{i,j-1}, \tag{26}$$

respectively.

Proof. Applying the Baker–Hausdorff Lemma together with Lemma 1, we have, for $\mu \neq \xi$,

$$
\begin{aligned}
e^{Ct} D^{(k)} e^{-Ct} &= D^{(k)} + t[C, D^{(k)}] + \frac{t^2}{2!}[C,[C,D^{(k)}]] + \cdots + \frac{t^n}{n!}\underbrace{[C,[C,\ldots,[C}_{n},D^{(k)}]\cdots]] + \cdots \\
&= D^{(k)} + \Bigl(t + (\mu-\xi)^2\frac{t^3}{3!} + (\mu-\xi)^4\frac{t^5}{5!} + \cdots\Bigr)[C,D] \\
&\qquad + \Bigl(\frac{t^2}{2!} + (\mu-\xi)^2\frac{t^4}{4!} + (\mu-\xi)^4\frac{t^6}{6!} + \cdots\Bigr)[C,[C,D]] \\
&= D^{(k)} + \frac{e^{(\mu-\xi)t} - e^{-(\mu-\xi)t}}{2(\mu-\xi)}\,[C,D] + \Bigl(\frac{e^{(\mu-\xi)t} + e^{-(\mu-\xi)t}}{2(\mu-\xi)^2} - \frac{1}{(\mu-\xi)^2}\Bigr)[C,[C,D]] \\
&= \Bigl(D^{(k)} - \frac{[C,[C,D]]}{(\mu-\xi)^2}\Bigr) + e^{(\mu-\xi)t}\Bigl(\frac{[C,D]}{2(\mu-\xi)} + \frac{[C,[C,D]]}{2(\mu-\xi)^2}\Bigr) \\
&\qquad + e^{-(\mu-\xi)t}\Bigl(-\frac{[C,D]}{2(\mu-\xi)} + \frac{[C,[C,D]]}{2(\mu-\xi)^2}\Bigr), \qquad (27)
\end{aligned}
$$

because $[C, D^{(k)}] = [C, D]$. The lemma then follows immediately. □

Lemma 3 indicates that $e^{g_k t} e^{Ct} D^{(k)} e^{-Ct}$ is the sum of the three matrices $\gamma_k I + J$, K, and L, each with a scalar factor dependent on t. Hence, $M_n^{(k)}(t)$ can be written in closed form as a multivariate polynomial in these matrices. Each term of the polynomial carries a factor given by a multiple integral of scalar functions of t, which can be evaluated explicitly. In the case of level-dependence other than that defined by Eq. (4), it seems difficult to obtain a closed-form solution unless we can factor out the level-dependent part and reduce the problem to that of a homogeneous MAP, which would allow us to apply the Baker–Hausdorff Lemma. The remaining task in obtaining the closed-form solution is to evaluate the matrix exponential $\exp[C^{(k+n)}t]$. Because $\exp[C^{(k+n)}t] = e^{-\gamma_{k+n}t}\exp[Ct]$, it is sufficient to focus on $\exp[Ct]$ alone. It is possible to express $\exp[Ct]$ by using the definition of a matrix function. Note that the matrix C has c + 1 distinct eigenvalues $-c\mu, -(c-1)\mu-\xi, \ldots, -\mu-(c-1)\xi, -c\xi$ when $\mu \neq \xi$. Hence, if we can find the right and left eigenvectors corresponding to each eigenvalue explicitly, $\exp[Ct]$ can easily be expressed in closed form by using the spectral representation of C.

We denote the eigenvalues of C by $\lambda_m = -(c-m)\mu - m\xi$, and let $u_m$ and $v_m$ be the corresponding right and left eigenvectors for $m \in S$. The eigenvectors are obtained by solving the system of linear equations

$$C u_m = \lambda_m u_m, \qquad v_m C = \lambda_m v_m. \tag{28}$$

For each eigenvalue, in principle, we can express $u_m$ and $v_m$ in terms of µ and ξ. It is often difficult, however, to obtain the left and right eigenvectors in closed form even if the eigenvalues can be obtained analytically. In our case, we can evaluate them analytically by virtue of the specific structure of C. Indeed, we can apply the following lemma, whose proof is given in Appendix C; it presents a formal, explicit, closed-form expression of the matrices U and V composed of the right and left eigenvectors $u_m$ and $v_m$ for $m \in S$.

Lemma 4 Let U be a (c+1) × (c+1) matrix whose first column is $u_0$, second column is $u_1$, and so on. Similarly, let V be a (c+1) × (c+1) matrix whose first row is $v_0$, second row is $v_1$, and so on. Then U and V are explicitly given by

$$U = (u_0, u_1, \ldots, u_c), \tag{29}$$
$$V = (v_0, v_1, \ldots, v_c), \tag{30}$$

where $u_m$ and $v_m$ for $m \in S$ are both column vectors, which are formally expressed by differentiating the vectors

$$u_c = \begin{pmatrix} z^c \\ z^{c-1} \\ \vdots \\ 1 \end{pmatrix}, \qquad v_c = \begin{pmatrix} w^c \\ w^{c-1} \\ \vdots \\ 1 \end{pmatrix}, \tag{31}$$

with $z = \mu/(\mu-\xi)$ and $z + w = 0$, element by element, as

$$u_{c-m} = \frac{1}{m!}\frac{d^m}{dz^m}\, u_c, \qquad v_{c-m} = \frac{1}{m!}\frac{d^m}{dw^m}\, v_c \tag{32}$$

for $m \in S$. It can also be shown by direct calculation that the relation $UV = I$ holds. By making use of the spectral representation $C = U\Lambda V$ with $\Lambda = \mathrm{diag}\{-c\mu, -(c-1)\mu-\xi, \ldots, -\mu-(c-1)\xi, -c\xi\}$, we can express $\exp[Ct]$ explicitly in terms of µ, ξ, and c.

In summary, we have found that $P_n^{(k)}(t)$ can be written in terms of several matrix products obtained explicitly from the given parameters µ, ξ, c, and $\gamma_k$ for $k \in \mathbb{N}_0$. The successful derivation depends on the fact that the representing matrices C and D of the MAP have a specific structure. It is possible to obtain $P_n^{(k)}(t)$ by evaluating Eq. (3) directly in terms of the spectral representation of $C^{(k)}$ without relying on the Baker–Hausdorff Lemma. In this case, however, we need element-by-element multiple integrals, which can be calculated explicitly but may be somewhat involved. It is also worth noting that the closed-form solution is analytically exact and can be used to evaluate the number of arrival events up to time t for a level-dependent MAP simply by substituting values for the parameters. By comparison, the uniformization technique may induce truncation errors.
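The derivative construction in Lemma 4 amounts to filling U and V with binomial coefficients: differentiating $z^{c-j}$ m times and dividing by m! gives $\binom{c-j}{m} z^{c-j-m}$ for entry (j, c−m) of U, and likewise for V with w = −z. A sketch (the function name and test values are ours) that checks $UV = I$ and $C = U\Lambda V$ for c = 2, the case worked out explicitly in Sect. 3:

```python
import numpy as np
from math import comb

def spectral_factors(c, mu, xi):
    """U and V of Lemma 4 via the formal derivatives of
    u_c = (z^c, ..., 1)^T and v_c = (w^c, ..., 1)^T with w = -z;
    entry (j, c-m) is binom(c-j, m) * x^(c-j-m) with x = z or w."""
    z = mu / (mu - xi)
    w = -z
    U = np.zeros((c + 1, c + 1))
    V = np.zeros((c + 1, c + 1))
    for m in range(c + 1):
        for j in range(c + 1 - m):          # entries vanish for m > c - j
            U[j, c - m] = comb(c - j, m) * z**(c - j - m)
            V[j, c - m] = comb(c - j, m) * w**(c - j - m)
    return U, V
```

For c = 2 these are exactly the triangular factors displayed in Sect. 3, and together with $\Lambda = \mathrm{diag}\{-2\mu, -(\mu+\xi), -2\xi\}$ they reproduce C.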

3 Examples

First, we show that $\exp[C^{(k)}t] = e^{-\gamma_k t}\exp[Ct]$. The matrix $\exp[Ct]$ has a simple structure and its closed-form solution can easily be obtained for general c. For simplicity, here we only give the case of c = 2. The matrix C then has the spectral representation

$$C = \begin{pmatrix} 1 & 2z & z^2 \\ 0 & 1 & z \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} -2\mu & 0 & 0 \\ 0 & -(\mu+\xi) & 0 \\ 0 & 0 & -2\xi \end{pmatrix} \begin{pmatrix} 1 & 2w & w^2 \\ 0 & 1 & w \\ 0 & 0 & 1 \end{pmatrix}.$$

Hence, we obtain $\exp[C^{(k)}t]$ explicitly as

$$
\begin{aligned}
\exp[C^{(k)}t] &= e^{-\gamma_k t} \begin{pmatrix} 1 & 2z & z^2 \\ 0 & 1 & z \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} e^{-2\mu t} & 0 & 0 \\ 0 & e^{-(\mu+\xi)t} & 0 \\ 0 & 0 & e^{-2\xi t} \end{pmatrix} \begin{pmatrix} 1 & 2w & w^2 \\ 0 & 1 & w \\ 0 & 0 & 1 \end{pmatrix} \\
&= e^{-\gamma_k t} \begin{pmatrix} e^{-2\mu t} & 2we^{-2\mu t} + 2ze^{-(\mu+\xi)t} & w^2 e^{-2\mu t} + 2zw e^{-(\mu+\xi)t} + z^2 e^{-2\xi t} \\ 0 & e^{-(\mu+\xi)t} & we^{-(\mu+\xi)t} + ze^{-2\xi t} \\ 0 & 0 & e^{-2\xi t} \end{pmatrix}. \qquad (33)
\end{aligned}
$$

Denoting a column vector of ones by $\mathbf{1}$, the vector $\exp[C^{(k)}t]\mathbf{1}$, which is needed in some cases, is given by

$$\exp[C^{(k)}t]\,\mathbf{1} = e^{-\gamma_k t} \begin{pmatrix} [(1+w)e^{-\mu t} + ze^{-\xi t}]^2 \\ e^{-\xi t}[(1+w)e^{-\mu t} + ze^{-\xi t}] \\ e^{-2\xi t} \end{pmatrix}. \tag{34}$$
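The closed-form vector (34) is easy to check against a numerical matrix exponential; the parameter values below are arbitrary and SciPy's `expm` supplies the reference:

```python
import numpy as np
from scipy.linalg import expm

mu, xi, gamma_k, t = 0.7, 0.3, 0.2, 1.3     # arbitrary test values, c = 2
z = mu / (mu - xi)
w = -z                                      # z + w = 0 as in Lemma 4
C = np.array([[-2 * mu, 2 * mu, 0.0],
              [0.0, -(mu + xi), mu],
              [0.0, 0.0, -2 * xi]])
base = (1 + w) * np.exp(-mu * t) + z * np.exp(-xi * t)
closed = np.exp(-gamma_k * t) * np.array(
    [base**2, np.exp(-xi * t) * base, np.exp(-2 * xi * t)])   # Eq. (34)
numeric = np.exp(-gamma_k * t) * (expm(C * t) @ np.ones(3))
```

The two vectors agree to machine precision, confirming the c = 2 spectral representation.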

We next give analytical examples of the closed-form solution of $M_n^{(k)}(t)$. In the case of n = 1, we can easily show by direct integration that

$$M_1^{(k)}(t) = \int_0^t e^{g_k x}\,dx\,(\gamma_k I + J) + \int_0^t e^{(g_k+a)x}\,dx\; K + \int_0^t e^{(g_k-a)x}\,dx\; L \tag{35}$$

for $k \in \mathbb{N}_0$. Integration of the exponential functions is straightforward. We can also obtain analytic solutions for n ≥ 2 by using multiple integrals. In the case of a homogeneous MAP, which may have more potential applications than a level-dependent MAP, we can also obtain analytically exact solutions. We denote by $P_n(t) = M_n(t)e^{Ct}$ the probability matrix of the number of arrival events up to time t of a homogeneous MAP, characterized by the pair of matrices (C, D), i.e., $\gamma_k = 0$ for $k \in \mathbb{N}_0$. Here, we give the first two matrices $M_1(t)$ and $M_2(t)$ as examples:

$$M_1(t) = Jt + K\,\frac{e^{at}-1}{a} + L\,\frac{e^{-at}-1}{-a}, \tag{36}$$

$$
\begin{aligned}
M_2(t) &= J^2\frac{t^2}{2} + JK\,\frac{e^{at}at - e^{at} + 1}{a^2} + JL\,\frac{e^{-at}at + e^{-at} - 1}{-a^2} \\
&\quad + KJ\,\frac{e^{at} - at - 1}{a^2} + K^2\,\frac{(e^{at}-1)^2}{2a^2} + KL\,\frac{e^{-at} + at - 1}{a^2} \\
&\quad + LJ\,\frac{e^{-at} + at - 1}{a^2} + LK\,\frac{e^{at} - at - 1}{a^2} + L^2\,\frac{(e^{-at}-1)^2}{2a^2}. \qquad (37)
\end{aligned}
$$
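The homogeneous case (36) can be cross-checked without any symbolic integration: for a homogeneous MAP, $P_1(t)$ equals the off-diagonal block of the matrix exponential of the two-level block generator, since that block evaluates exactly the integral $\int_0^t e^{Cs} D\, e^{C(t-s)}\,ds$. A sketch with arbitrary parameter values:

```python
import numpy as np
from scipy.linalg import expm

c, mu, xi, t = 2, 0.7, 0.3, 2.1             # arbitrary test values
a, z = mu - xi, mu / (mu - xi)
n1 = c + 1
C = np.zeros((n1, n1)); D = np.zeros((n1, n1))
J = np.zeros((n1, n1)); K = np.zeros((n1, n1)); L = np.zeros((n1, n1))
for i in range(n1):
    C[i, i] = -((c - i) * mu + i * xi)
    J[i, i] = -(c - 2 * i) * z * xi
    K[i, i] = (c - 2 * i) * z * xi
    if i < c:
        C[i, i + 1] = (c - i) * mu
        J[i, i + 1] = 2 * (c - i) * z**2 * xi
        K[i, i + 1] = -(c - i) * z**2 * xi
        L[i, i + 1] = -(c - i) * z**2 * xi
    if i > 0:
        D[i, i - 1] = i * xi
        K[i, i - 1] = i * xi
# Eq. (36) and P_1(t) = M_1(t) e^{Ct}
M1 = J * t + K * (np.exp(a * t) - 1) / a + L * (np.exp(-a * t) - 1) / (-a)
P1_closed = M1 @ expm(C * t)
# reference: the (0, 1) block of exp([[C, D], [0, C]] t) equals
# int_0^t e^{Cs} D e^{C(t-s)} ds, i.e. P_1(t) for the homogeneous MAP
Q = np.block([[C, D], [np.zeros((n1, n1)), C]])
P1_ref = expm(Q * t)[:n1, n1:]
```

Unlike the uniformization sketch earlier, the closed-form side involves no t-dependent loop, only fixed matrix products, which is the point of the comparison in Sect. 4.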

The closed-form solution becomes more expensive as a function of n, although the number of matrix products in $P_n^{(k)}(t)$ is independent of t. In fact, we need $O(3^n)$ matrix products to compute $P_n^{(k)}(t)$ for any t with the closed-form solution.

4 Numerical experiments

We report here the results of our numerical experiments, which were carried out in Mathematica on a 1-GHz dual-processor PC with 2 GB of main memory. We first considered the minimum order $m^*$ required to obtain six decimal digits of accuracy for $P_n^{(k)}(t)$ based on the uniformization technique. More precisely, if we define the matrix 1-norm of a matrix A by $||A||_1 := \max_j \sum_i |(A)_{i,j}|$, then $m^*$ is given by the minimum m such that

$$\frac{||\tilde P_{n,m}^{(k)}(t) - P_n^{(k)}(t)||_1}{||P_n^{(k)}(t)||_1} \le \epsilon = 10^{-6}, \tag{38}$$

Table 1. $m^*$ for various $P_n^{(0)}(t)$ with 0 ≤ n ≤ 5 at t = 10, 100, and 1000. The parameters are chosen as $(\mu^{-1}, \xi^{-1}, \gamma^{-1}, c) = (5, 10, 60, 2)$

t      $P_0^{(0)}(t)$  $P_1^{(0)}(t)$  $P_2^{(0)}(t)$  $P_3^{(0)}(t)$  $P_4^{(0)}(t)$  $P_5^{(0)}(t)$
10     13              15              16              18              20              21
100    48              48              51              54              57              60
1000   291             291             310             330             349             368

where $\tilde P_{n,m}^{(k)}(t)$ is the order-m approximation of $P_n^{(k)}(t)$ obtained by truncating the Taylor series expansion of the matrix exponential $\exp[Q^{(k)}t]$ at order m. We then measured the CPU time required to compute $\tilde P_{n,m^*}^{(k)}(t)$ by the uniformization technique. For the closed-form solution, we expressed $M_n^{(k)}(t)$ in terms of the matrices $\gamma_k I + J$, K, and L with t-dependent scalar factors (the integral expressions were evaluated symbolically in Mathematica in as simple a form as possible) and then measured the CPU time required to compute $P_n^{(k)}(t)$ numerically. Table 1 gives $m^*$ for various $P_n^{(0)}(t)$ with 0 ≤ n ≤ 5 at t = 10, 100, and 1000. We chose $\gamma_k = \min(k, M)\gamma$ for γ > 0, M < +∞, and $k \in \mathbb{N}_0$. We observe that $m^*$ tends to increase as t becomes large for each n. Thus, the number of matrix multiplications also increases for large values of t, which is the known shortcoming of the uniformization technique, whereas the closed-form solution requires only several matrix multiplications that do not depend on t.

In Fig. 1, we give the CPU times required to compute $\tilde P_{n,m^*}^{(0)}(t)$ by the uniformization technique and $P_n^{(0)}(t)$ by using the closed-form solution. We also give the numerical values of the CPU times in Table 2. We measured the CPU times by using the Mathematica Timing function. All times were averaged over 10 independent runs. We observed that the uniformization technique becomes more time-consuming than the closed-form solution for large values of t. In contrast, the CPU times for the closed-form solution $P_n^{(0)}(t)$ are insensitive to t and almost constant for each n. More precisely, the closed-form solution required less CPU time than the uniformization technique when t = 100 and 1000, although the uniformization technique outperformed the closed-form solution for all n except n = 0 when t = 10. We investigated further to determine the extent of the efficiency gain of the closed-form solution.
In Table 3, we give the CPU times required to compute $\tilde P_{5,m^*}^{(0)}(t)$ and $P_5^{(0)}(t)$. We chose n = 5 because it was the most time-consuming case in our previous experiments. The second column gives the CPU time required to compute $P_5^{(0)}(t)$ by using the closed-form solution. We measured the CPU times for $\tilde P_{5,m^*}^{(0)}(t)$ for varying ε, while the other parameters remained the same as for the results shown in Table 1. The third through last columns give the CPU times required for $\tilde P_{5,m^*}^{(0)}(t)$,


Fig. 1. The CPU times (in seconds) required to compute $\tilde P_{n,m^*}^{(0)}(t)$ by the uniformization technique and $P_n^{(0)}(t)$ by using the closed-form solution. UNIF(n) and CLSD(n) correspond to the uniformization technique and the closed-form solution with n, respectively

Table 2. The numerical values of the CPU times (in seconds) required to compute $\tilde P_{n,m^*}^{(0)}(t)$ and $P_n^{(0)}(t)$ with 0 ≤ n ≤ 5 at t = 10, 100, and 1000

t      Method    n=0    n=1    n=2    n=3     n=4     n=5
10     CLSD(n)   0.002  0.056  0.085  0.137   0.355   1.982
       UNIF(n)   0.011  0.018  0.031  0.047   0.065   1.052
100    CLSD(n)   0.003  0.057  0.081  0.128   0.350   1.961
       UNIF(n)   0.092  0.174  0.326  0.468   0.596   4.148
1000   CLSD(n)   0.003  0.058  0.083  0.131   0.355   1.983
       UNIF(n)   2.447  4.205  7.992  11.997  16.055  45.151

with $m^*$ shown in parentheses. We observe that the CPU time of the uniformization technique is closely related to $m^*$, which depends on both t and ε. As ε becomes large for each t, $m^*$ tends to decrease, thus reducing the CPU time. These observations indicate that computing $P_5^{(0)}(t)$ by the closed-form solution is less time-consuming than employing the uniformization technique when large values of t and high accuracy (such as t ≥ 50 and ε ≤ 10^{-4}) are required. In other words, the closed-form solution may not be of interest, even for large values of t, if low accuracy is acceptable. We remark that, for moderately large values of n, it is expensive both to express the closed-form solution and to evaluate it numerically. Moreover, this solution is potentially numerically unstable because it requires evaluating matrix products involving the matrices $\gamma_k I + J$, K, and L, whose

Table 3. The CPU times (in seconds) required to compute P˜5,m∗ (t) and P5 (t) using the uniformization technique UNIF(5) and the closed-form solution CLSD(5), respectively (0)

t 10 20 30 40 50

CLSD(5) 1.982 1.969 2.005 1.957 2.006

 = 10−1 0.508(13) 0.708(16) 0.918(19) 1.131(22) 1.359(25)

 = 10−2 0.639(15) 0.934(19) 1.196(23) 1.429(26) 1.666(29)

UNIF(5)  = 10−3 0.777(17) 1.071(21) 1.353(25) 1.663(29) 1.986(33)

 = 10−4 0.851(18) 1.200(23) 1.569(28) 1.888(32) 2.145(35)

(0)

 = 10−5 0.992(20) 1.351(25) 1.724(30) 2.039(34) 2.400(38)

elements may be positive or negative, whereas the uniformization technique is numerically stable because it involves only non-negative elements. The closed-form solution has another numerical difficulty in that the t-dependent scalar factors may cause cancellation errors for small values of t. In fact, we encountered such a problem in computing $P_1^{(0)}(t)$ for some small t, e.g., t = 1/10, which may be due to the cancellation error, the numerical instability, or both. In practice, however, such small t may be of no interest. Although the closed-form solution has some disadvantages, our numerical experiments indicate that it yields CPU times that are almost insensitive to t and can save a good deal of time in computing $P_n^{(k)}(t)$ for large values of t. Hence, the closed-form solution may be useful for numerical computation with large values of t and small values of n.

5 Queueing application

Here we consider a queueing model with c servers and a capacity of K, including the customers being served. Customers arrive according to a Poisson process with rate λ and are served in first-come, first-served (FCFS) order. We consider the case in which the customers are impatient, i.e., they may leave the queue before being served. A customer who has to wait is assigned an independent exponential deadline with mean $\gamma^{-1}$ upon arrival; if the deadline expires before service begins, the customer leaves the queue. For literature on queueing models with impatient customers, see [2, 3] and their references. The service time for each customer consists of two stages, each of which is assumed to be exponentially distributed, and the durations of the two stages are mutually independent. The first stage models a primary service, with mean duration $\mu^{-1}$. The second stage corresponds to a post-service, with mean $\xi^{-1}$.
Hereafter, the term service time refers to the time duration of the first stage, and wrap-up time refers to that of the second stage. We can then identify three distinct states for each server: (1) the busy

state, in which the customer is receiving the primary service from the server; (2) the wrap-up state, in which the server is engaged in the post-service after completing the primary service; and (3) the idle state, in which the server is idle. This type of queueing model is being considered for possible use in automatic call distribution (ACD) systems. ACD systems are used by call centers in the travel, banking, and insurance industries to handle large volumes of incoming calls efficiently. They have been analyzed extensively [4, 5, 8]. One might object that employing a two-stage generalized Erlang-type distribution for the whole service time is artificial. However, this involves a subtle issue that may affect performance analysis. In ACD systems, it is often the case that a server (agent) is required to finish an additional job (post-service), such as entering or updating records in the customer database, after completing the primary service. Because the customer being served has already left the system at that point, the trunk line is released: during the post-service, a new incoming customer can occupy the released line and will not be lost. Thus, the model used for the whole service time distribution influences performance measures such as the queue length distribution or the loss probability. Such effects have already been pointed out and analyzed by Jolley and Harris [8], who employed a two-stage generalized Erlang-type distribution for modeling the whole service time. The whole service time of our queueing model is the same as in their approach and hence properly captures the effects of the post-service in ACD systems. We move on now to an application of a level-dependent MAP. Suppose that none of the servers is in the idle state.
If we denote by $N_q(t)$ the number of customers in the queue at time t and by L(t) the number of servers in the wrap-up state at time t, then the process $\{(N_q(t), L(t)) : t \in \mathbb{R}_+\}$ is a bivariate Markov process on the state space $\bigcup_{k=0}^{K}\Delta(k)$, where $\Delta(k) = \{(k, 0), (k, 1), \ldots, (k, c)\}$ for 0 ≤ k ≤ K−c and $\Delta(k) = \{(k, k+c-K), (k, k+c-K+1), \ldots, (k, c)\}$ for K−c < k ≤ K. We assume that at time t there are k > 0 waiting customers in the queue, l servers are in the wrap-up state, and the remaining c−l servers are in the busy state. We consider the possible transitions in terms of departure events from the queue. Recalling that the service and wrap-up times are exponentially distributed with parameters µ and ξ, respectively, and that the time to miss the deadline is an independent exponential random variable with mean $\gamma^{-1}$, we have three possible transitions:
1. from state (k, l) to state (k, l+1) with rate (c−l)µ;
2. from state (k, l) to state (k−1, l−1) with rate lξ;
3. from state (k, l) to state (k−1, l) with rate kγ.
The first case accompanies the completion of the primary service for a customer and puts the server in the wrap-up state. Waiting customers, however,


remain in the queue. The second case corresponds to the situation in which one of the servers in the wrap-up state finishes the post-service and immediately continues with the primary service for the first customer in the queue; thus, the number of waiting customers in the queue decreases by one. The third case is the transition in which one of the k customers leaves the queue due to impatience. We can then recognize that the departure process from the queue, given that there are k waiting customers, is described by a level-dependent MAP with the sequence of matrix pairs given by

    C^{(k)} = C − kγ I,   D^{(k)} = D + kγ I,   (39)

for k ∈ {0, 1, . . . , K − 1}, where C and D are given by Eqs. (7) and (8), respectively.

What we consider as an application of the closed-form solution is performance measures related to the waiting time distribution of the queueing model. Let X, Y, and Z be an exponential random variable with mean γ^{−1}, the offered virtual waiting time (i.e., the time an infinitely patient virtual customer has to wait to be served), and the actual waiting time, respectively. Then Pr[Z > t] = Pr[X > t] Pr[Y > t] = e^{−γt} Pr[Y > t] because Z = min(X, Y). Note that the offered virtual waiting time is not influenced by customers arriving later, because the queue discipline is FCFS. It is easy to observe that the offered virtual waiting time is greater than t if no more than k departure events from the queue occur during the interval (0, t], given that an accepted (virtual) customer finds k waiting customers ahead. Then Pr[Y > t] can be expressed as

    Pr[Y > t] = Σ_{k=0}^{K−1} π*(k) Σ_{n=0}^{k} P_n^{(k)}(t) 1,   (40)

where π*(k) is the row vector whose lth element (l ∈ S) gives the steady-state probability that an accepted infinitely patient customer will find k customers waiting in the queue and l servers in the wrap-up state upon arrival. A slight modification is needed for P_n^{(k)}(t), which is redefined as

    P_n^{(k)}(t) := ∫_{Δ_{n,t}} e^{C^{(k)} t_1} D^{(k)} e^{C^{(k−1)}(t_2 − t_1)} D^{(k−1)} · · · e^{C^{(k−n)}(t − t_n)} dt_1 dt_2 · · · dt_n   (41)

for n ∈ {0, 1, . . . , k}, because it counts the number of departure events up to time t.

The closed-form solution can also be used to express the joint probability distribution Pr[Z > t, Y > X]. After a number of calculations (such as repeated integration by parts), we can show that

    Pr[Z > t, Y > X] = γ e^{−γt} Σ_{k=0}^{K−1} π*(k) Σ_{n=0}^{k} P_n^{(k)}(t) [−C^{(k+1−n)}]^{−1}
                       × { E^{(k−n)} [ E^{(k−n−1)} [ · · · [ E^{(0)} + I ] · · · + I ] + I ] + I } 1,   (42)

where E^{(p)} := D^{(p)} [−C^{(p)}]^{−1} for p ∈ N and E^{(0)} is defined as the matrix whose elements are all zero. This probability enables us to evaluate Pr[Y > X | Z > t] = Pr[Z > t, Y > X] / Pr[Z > t], which may be of interest for queueing models with impatient customers: it is the probability that a customer will leave the system due to impatience before being served, given that he waits longer than time t.

The probability vector π*(n) is related to the steady-state probability vector π(n) for the number of customers in the queue. It can be computed numerically by using the algorithms [10] for the level-dependent finite quasi-birth-death (QBD) process. The QBD process of the queueing model has an infinitesimal generator of the block tridiagonal form

    ⎛ A_{0,0}  B_{0,0}     0        ···                      ···        0      ⎞
    ⎜ D_{0,1}  A_{0,1}  B_{0,1}                                                ⎟
    ⎜    0     D_{0,2}  A_{0,2}     ⋱                                          ⎟
    ⎜             ⋱        ⋱        ⋱       B_{0,c−2}                          ⎟
    ⎜                  D_{0,c−1}  A_{0,c−1}  B_{0,c−1}                         ⎟
    ⎜                             D_{0,c}    A_0      B_0                      ⎟
    ⎜                                        D_1      A_1       ⋱             ⎟
    ⎜                                                  ⋱        ⋱    B_{K−1}  ⎟
    ⎝    0       ···                          ···      0       D_K     A_K    ⎠

For example, D_k is related to D^{(k)}, and B_k is the matrix whose non-zero elements are given by λ. The matrix C^{(k)} is placed in A_k, whose diagonal elements are determined by making the row sums zero. The other matrices are similarly constructed; we omit the remaining detailed structures of the matrices because they are straightforward but relatively complicated to describe.

We give numerical examples of Pr[Z > t] and Pr[Y > X | Z > t] based on the closed-form solution in Tables 4 and 5, respectively. The queueing model has c = 2 servers and capacity K = 5. The mean inter-arrival time of the customers is chosen as λ^{−1} = 5.
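For a model of this size, the steady-state vector can also be obtained by a direct dense linear solve rather than the structured QBD algorithms of [10]. The sketch below is a generic illustration for any finite irreducible generator; the function name and the 3-state example generator are illustrative inventions, not the model's actual blocks.

```python
import numpy as np

def steady_state(Q):
    """Solve pi Q = 0, pi 1 = 1 for a finite irreducible CTMC generator Q
    by replacing one (redundant) balance equation with normalization."""
    n = Q.shape[0]
    A = np.vstack([Q.T[:-1], np.ones(n)])  # drop one balance row, add sum-to-one
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Example: an arbitrary 3-state generator (rows sum to zero).
Q = np.array([[-0.3,  0.2,  0.1],
              [ 0.4, -0.9,  0.5],
              [ 0.0,  0.6, -0.6]])
pi = steady_state(Q)
print(np.allclose(pi @ Q, 0.0), np.isclose(pi.sum(), 1.0))
```

For larger K the block tridiagonal structure makes the specialized algorithms of [10] preferable, but the dense solve is a convenient cross-check.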

Table 4. The complement of the actual waiting time distribution Pr[Z > t]

          (µ^{−1}, ξ^{−1}) = (10, 5)        (µ^{−1}, ξ^{−1}) = (5, 10)
   t      γ^{−1} = 10    γ^{−1} = 20       γ^{−1} = 10    γ^{−1} = 20
   0      7.58284E-01    8.20802E-01       7.65840E-01    8.34861E-01
  10      1.13312E-01    2.65479E-01       1.19531E-01    2.89371E-01
  20      9.38614E-03    5.02255E-02       1.04622E-02    6.16439E-02
  30      5.87555E-04    6.70677E-03       6.78927E-04    9.19881E-03
  40      3.24154E-05    7.33405E-04       3.81882E-05    1.09488E-03
  50      1.68898E-06    7.14745E-05       2.00885E-06    1.13253E-04
  60      8.57720E-08    6.50841E-06       1.02473E-07    1.07350E-05
  70      4.30669E-09    5.69015E-07       5.15562E-09    9.63561E-07
  80      2.15184E-10    4.85365E-08       2.57827E-10    8.35935E-08
  90      1.07293E-11    4.07848E-09       1.28603E-11    7.10000E-09
 100      5.34502E-13    3.39590E-10       6.40766E-13    5.95158E-10

Table 5. Pr[Y > X | Z > t], the probability that a customer will leave the system due to impatience before being served, given that he waits longer than time t

          (µ^{−1}, ξ^{−1}) = (10, 5)        (µ^{−1}, ξ^{−1}) = (5, 10)
   t      γ^{−1} = 10    γ^{−1} = 20       γ^{−1} = 10    γ^{−1} = 20
   0      0.537884752    0.411676509       0.547774655    0.432595018
  10      0.410579899    0.299646609       0.419507873    0.318441857
  20      0.365014738    0.250667066       0.370397562    0.264766010
  30      0.346978820    0.227722890       0.349726698    0.237296610
  40      0.339279875    0.215896781       0.340593788    0.222159667
  50      0.335908089    0.209370223       0.336512287    0.213399441
  60      0.334433382    0.205609402       0.334702934    0.208173904
  70      0.333796452    0.203385915       0.333913715    0.205003764
  80      0.333525639    0.202052189       0.333575594    0.203064817
  90      0.333412230    0.201246053       0.333433147    0.201875480
 100      0.333365372    0.200757021       0.333374007    0.201145956
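The table entries come from the paper's closed-form solution. As an independent cross-check, the matrices P_n^{(k)}(t) of Eq. (41) can also be computed by exponentiating a block upper-bidiagonal matrix whose diagonal blocks are C^{(k)}, . . . , C^{(k−n)} and whose superdiagonal blocks are D^{(k)}, . . . , D^{(k−n+1)} (a standard device for evaluating such convolution integrals), with the exponential computed by the uniformization series that the paper benchmarks against. The sketch below is a non-authoritative illustration: the tridiagonal element structure of C and D is read off from Appendix A, the rates are arbitrary, and the function names are mine.

```python
import numpy as np

def expm_unif(M, t, tol=1e-12):
    """e^{Mt} via uniformization; valid when M has non-negative
    off-diagonal entries and non-positive row sums (a subgenerator)."""
    h = -M.diagonal().min()
    if h <= 0:
        return np.eye(M.shape[0])
    P = np.eye(M.shape[0]) + M / h          # substochastic matrix
    term = np.eye(M.shape[0])
    w = np.exp(-h * t)                      # Poisson(ht) weight, m = 0
    S, acc, m = w * term, w, 0
    while acc < 1.0 - tol and m < 100000:
        m += 1
        term = term @ P
        w *= h * t / m
        S += w * term
        acc += w
    return S

def counting_blocks(C, D, k, n, gamma, t):
    """P_j^{(k)}(t) for j = 0..n (cf. Eq. (41)): the top block row of
    exp(M t), where M is block upper-bidiagonal with diagonal blocks
    C^{(k)},...,C^{(k-n)} and superdiagonal D^{(k)},...,D^{(k-n+1)},
    using the level dependence of Eq. (39)."""
    s = C.shape[0]
    I = np.eye(s)
    M = np.zeros(((n + 1) * s, (n + 1) * s))
    for j in range(n + 1):
        M[j*s:(j+1)*s, j*s:(j+1)*s] = C - (k - j) * gamma * I
        if j < n:
            M[j*s:(j+1)*s, (j+1)*s:(j+2)*s] = D + (k - j) * gamma * I
    E = expm_unif(M, t)
    return [E[:s, j*s:(j+1)*s] for j in range(n + 1)]

# Tridiagonal C, D as in Appendix A (c = 2 servers; arbitrary rates).
c, mu, xi, gamma = 2, 0.1, 0.2, 0.05
i = np.arange(c + 1)
C = np.diag(-((c - i) * mu + i * xi)) + np.diag((c - i[:-1]) * mu, 1)
D = np.diag(i[1:] * xi, -1)

blocks = counting_blocks(C, D, k=4, n=4, gamma=gamma, t=10.0)
# P_0^{(k)}(t) must equal exp(C^{(k)} t), and the partial sums are substochastic:
print(np.allclose(blocks[0], expm_unif(C - 4 * gamma * np.eye(c + 1), 10.0)))
```

Sums such as Eq. (40) then reduce to accumulating `π*(k) @ blocks[n] @ 1` over k and n.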

6 Conclusion

This paper considered a particular level-dependent Markovian arrival process (MAP). Focusing on the specific MAP that underlies the level-dependent MAP, we derived the probability matrix for the number of arrival events up to time t in closed form, applying the Baker–Hausdorff Lemma to the matrix expression of the number of arrival events. The successful derivation depends on the fact that the representing matrices of the MAP have a specific structure. We gave analytical examples of the closed-form solution and reported numerical experiments using it. The results indicated that the CPU time required by the closed-form solution is almost insensitive to t, and that the closed-form solution is less time-consuming than the uniformization technique for large values of t. As an application, we considered a finite-capacity, multi-server queueing model with impatient customers, for possible use in automatic call distribution (ACD) systems, and demonstrated that the closed-form solution can be used to express performance measures related to the waiting time distribution.

Appendix A. Proof of Lemma 1

Here we give the detailed calculation of the commutator of C and D to prove Lemma 1. In the following, we use the Einstein summation convention that a repeated (dummy) index is summed over automatically, viz., a_i b_i means Σ_i a_i b_i. It then follows that

    (C)_{i,k}(D)_{k,j} = {(c − i)µ δ_{i,k−1} − [(c − i)µ + iξ] δ_{i,k}} kξ δ_{k,j+1}
                       = (c − i)(i + 1)µξ δ_{i,j} − [(c − i)µ + iξ] iξ δ_{i,j+1},
    (D)_{i,k}(C)_{k,j} = iξ δ_{i,k+1} {(c − k)µ δ_{k,j−1} − [(c − k)µ + kξ] δ_{k,j}}
                       = i(c − i + 1)µξ δ_{i,j} − i[(c − i + 1)µ + (i − 1)ξ] ξ δ_{i,j+1}.

Consequently, the commutator [C, D] has element (i, j) given by

    ([C, D])_{i,j} = (c − 2i)µξ δ_{i,j} + i(µ − ξ)ξ δ_{i,j+1}.   (43)

Similarly, we can confirm by direct calculation that

    (C)_{i,k}([C, D])_{k,j} = (c − i)[c − 2(i + 1)]µ²ξ δ_{i,j−1} + (c − i)(i + 1)(µ − ξ)µξ δ_{i,j}
                              − [(c − i)µ + iξ](c − 2i)µξ δ_{i,j} − [(c − i)µ + iξ] i(µ − ξ)ξ δ_{i,j+1},
    ([C, D])_{i,k}(C)_{k,j} = (c − 2i)(c − i)µ²ξ δ_{i,j−1} − (c − 2i)[(c − i)µ + iξ]µξ δ_{i,j}
                              + i[c − (i − 1)](µ − ξ)µξ δ_{i,j} − i[(c − (i − 1))µ + (i − 1)ξ](µ − ξ)ξ δ_{i,j+1}.

Hence, we can express element (i, j) of the commutator [C, [C, D]] as

    ([C, [C, D]])_{i,j} = −2(c − i)µ²ξ δ_{i,j−1} + (c − 2i)(µ − ξ)µξ δ_{i,j} + i(µ − ξ)²ξ δ_{i,j+1}.   (44)


Finally, we can check by direct calculation that

    (C)_{i,k}([C, [C, D]])_{k,j} = −2(c − i)(c − i − 1)µ³ξ δ_{i,j−2} + 2[(c − i)µ + iξ](c − i)µ²ξ δ_{i,j−1}
                                   + (c − i)(c − 2i − 2)(µ − ξ)µ²ξ δ_{i,j−1} − (c − 2i)[(c − i)µ + iξ](µ − ξ)µξ δ_{i,j}
                                   + (i + 1)(c − i)(µ − ξ)²µξ δ_{i,j} − i[(c − i)µ + iξ](µ − ξ)²ξ δ_{i,j+1},
    ([C, [C, D]])_{i,k}(C)_{k,j} = −2(c − i)(c − i − 1)µ³ξ δ_{i,j−2} + (c − 2i)(c − i)µ²ξ(µ − ξ) δ_{i,j−1}
                                   + i(c − i + 1)µξ(µ − ξ)² δ_{i,j} + 2(c − i)[(c − i − 1)µ + (i + 1)ξ]µ²ξ δ_{i,j−1}
                                   − (c − 2i)[(c − i)µ + iξ]µξ(µ − ξ) δ_{i,j} − i[(c − i + 1)µ + (i − 1)ξ](µ − ξ)²ξ δ_{i,j+1}.

Thus, element (i, j) of the commutator [C, [C, [C, D]]] is equal to

    ([C, [C, [C, D]]])_{i,j} = (µ − ξ)² ([C, D])_{i,j}.   (45)
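The element formulas (43)–(45) are easy to verify numerically. The following sketch builds C and D from their tridiagonal element structure as given above, with arbitrary c, µ, ξ satisfying µ ≠ ξ.

```python
import numpy as np

c, mu, xi = 5, 0.7, 0.3            # arbitrary values with mu != xi
i = np.arange(c + 1)
C = np.zeros((c + 1, c + 1))
D = np.zeros((c + 1, c + 1))
C[i[:-1], i[:-1] + 1] = (c - i[:-1]) * mu          # (C)_{i,i+1} = (c-i) mu
C[i, i] = -((c - i) * mu + i * xi)                 # (C)_{i,i}
D[i[1:], i[1:] - 1] = i[1:] * xi                   # (D)_{i,i-1} = i xi

comm = lambda A, B: A @ B - B @ A
CD = comm(C, D)

# Eq. (43): ([C,D])_{i,j} = (c-2i) mu xi delta_{i,j} + i (mu-xi) xi delta_{i,j+1}
expected = np.diag((c - 2 * i) * mu * xi)
expected[i[1:], i[1:] - 1] = i[1:] * (mu - xi) * xi
print(np.allclose(CD, expected))
# Eq. (45): [C,[C,[C,D]]] = (mu - xi)^2 [C,D]
print(np.allclose(comm(C, comm(C, CD)), (mu - xi) ** 2 * CD))
```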

Appendix B. Lemma 1 in sl(2, R) Generators

Suppose that E+, E−, and E3 are (l + 1) × (l + 1) matrix representations of the standard generators ê+, ê−, and ê3 of sl(2, R), with the commutators given by

    [ê+, ê−] = ê3,   [ê3, ê±] = ±2 ê±.   (46)

If C and D are expanded as

    C = c+ E+ + c− E− + c3 E3 + c0 I^{(l)},   (47)
    D = d+ E+ + d− E− + d3 E3 + d0 I^{(l)},   (48)

where I^{(l)} denotes the (l + 1) × (l + 1) identity matrix and the coefficients are real numbers, then we can show the following relations by direct calculation:

    [C, D] = 2(c3 d+ − c+ d3) E+ − 2(c3 d− − c− d3) E− + (c+ d− − c− d+) E3,   (49)
    [C, [C, D]] = [4c3(c3 d+ − c+ d3) − 2c+(c+ d− − c− d+)] E+
                  + [4c3(c3 d− − c− d3) + 2c−(c+ d− − c− d+)] E−
                  − 2[c+(c3 d− − c− d3) + c−(c3 d+ − c+ d3)] E3,   (50)
    [C, [C, [C, D]]] = 4(c3² + c+ c−) [C, D].   (51)
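These relations can be checked numerically using the concrete (l + 1)-dimensional representations (52)–(54) constructed below, with randomly drawn real coefficients; the sketch is illustrative only (function name and parameter values are mine).

```python
import numpy as np

def sl2_reps(l):
    """(l+1)-dim representations: (E+)_{i,i+1} = l-i, (E-)_{i,i-1} = i,
    E3 = diag(l - 2i), as in Eqs. (52)-(54)."""
    i = np.arange(l + 1)
    Ep = np.zeros((l + 1, l + 1))
    Em = np.zeros((l + 1, l + 1))
    Ep[i[:-1], i[:-1] + 1] = l - i[:-1]
    Em[i[1:], i[1:] - 1] = i[1:]
    return Ep, Em, np.diag(l - 2.0 * i)

comm = lambda A, B: A @ B - B @ A
Ep, Em, E3 = sl2_reps(4)
print(np.allclose(comm(Ep, Em), E3))        # [e+, e-] = e3
print(np.allclose(comm(E3, Ep), 2 * Ep))    # [e3, e+] = +2 e+
print(np.allclose(comm(E3, Em), -2 * Em))   # [e3, e-] = -2 e-

# Eq. (51) for an arbitrary expansion of C and D:
rng = np.random.default_rng(0)
cp, cm, c3, c0 = rng.random(4)
dp, dm, d3, d0 = rng.random(4)
I = np.eye(5)
C = cp * Ep + cm * Em + c3 * E3 + c0 * I
D = dp * Ep + dm * Em + d3 * E3 + d0 * I
K = comm(C, D)
print(np.allclose(comm(C, comm(C, K)), 4 * (c3**2 + cp * cm) * K))
```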

A closed-form solution on a level-dependent Markovian arrival process

173

Note that these can be derived by using only the commutators (46). We now construct a matrix pair (C, D) as the representing matrices of a MAP. If we choose the (l + 1) × (l + 1) matrix representations given by

    (E+)_{i,j} = (l − i) δ_{i,j−1},   (52)
    (E−)_{i,j} = i δ_{i,j+1},   (53)
    (E3)_{i,j} = (l − 2i) δ_{i,j},   (54)

and expand C and D as

    C = c+ E+ + c− E− − (α/2) E3 − (β/2) I^{(l)},   (55)
    D = d+ E+ + d− E− + ((λ+ − λ−)/2) E3 + ((λ+ + λ−)/2) I^{(l)},   (56)

where α = (c+ + d+ + λ+) − (c− + d− + λ−) and β = c+ + d+ + λ+ + c− + d− + λ−, then C and D satisfy the conditions for the representing matrices of an (l + 1)-state MAP. Note that we are allowed to have two non-negative parameters c+ and c−, and four non-negative parameters d+, d−, λ+, and λ−, with at least one of the four strictly positive. In the case of a two-state MAP, the 2 × 2 matrix representations

    E+ = ⎛ 0  1 ⎞    E− = ⎛ 0  0 ⎞    E3 = ⎛ 1   0 ⎞   (57)
         ⎝ 0  0 ⎠         ⎝ 1  0 ⎠         ⎝ 0  −1 ⎠

lead us to

    C = ⎛ −c+ − d+ − λ+        c+       ⎞    D = ⎛ λ+  d+ ⎞   (58)
        ⎝       c−       −c− − d− − λ−  ⎠        ⎝ d−  λ− ⎠

These are the standard representing matrices of a two-state MAP, which are fully determined by six parameters.

Appendix C. Proof of Lemma 4

Here we give explicitly the left and right eigenvectors of C stated in Lemma 4. Define the right eigenvector x^{(i)} with eigenvalue λ_i = −(c − i)µ − iξ for i ∈ S. Denoting the mth element of x^{(i)} by x_m^{(i)}, the equation C x^{(i)} = λ_i x^{(i)} gives

    −[(c − m)µ + mξ] x_m^{(i)} + (c − m)µ x_{m+1}^{(i)} = −[(c − i)µ + iξ] x_m^{(i)}

for m ∈ S, with x_{c+1}^{(i)} ≡ 0. Setting m = i, we have x_{i+1}^{(i)} = 0, and we can choose x_i^{(i)} = 1. Since µ ≠ 0, ξ ≠ 0, and µ ≠ ξ, we have x_{i+1}^{(i)} = x_{i+2}^{(i)} = · · · = x_c^{(i)} = 0. For m ∈ {0, 1, . . . , i − 1}, we have the relation

    x_m^{(i)} = [(c − m)µ / ((i − m)(µ − ξ))] x_{m+1}^{(i)} = ((c − m)/(i − m)) z x_{m+1}^{(i)},

where z = µ/(µ − ξ).


Substituting 1 for x_i^{(i)}, we obtain

    x_{i−k}^{(i)} = (c−i+k choose k) z^k

for k ∈ {0, 1, . . . , i}, where (n choose k) ≡ n! / (k!(n − k)!) is the binomial coefficient. Differentiating x_{i−k}^{(i)} with respect to z and dividing by (c − i + 1), we obtain

    [1/(c − i + 1)] (d/dz) x_{i−k}^{(i)} = (c−(i−1)+(k−1) choose k−1) z^{k−1}

for i ∈ {1, 2, . . . , c}. The right-hand side is exactly the corresponding element of the right eigenvector x^{(i−1)}. In the case of the left eigenvector, the explicit expression can be obtained similarly.

Acknowledgements. The author thanks the anonymous referees and the editor for their comments and suggestions, which have greatly improved this paper.
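The explicit eigenvectors above can be sanity-checked numerically. The sketch below builds C from its tridiagonal element structure (as in Appendix A), takes z = µ/(µ − ξ) as read off from the recurrence, and verifies C x^{(i)} = λ_i x^{(i)} for every i; the parameter values are arbitrary with µ ≠ ξ.

```python
import numpy as np
from math import comb

c, mu, xi = 4, 0.6, 0.25           # arbitrary rates with mu != xi
n = c + 1
idx = np.arange(n)
C = np.zeros((n, n))
C[idx[:-1], idx[:-1] + 1] = (c - idx[:-1]) * mu    # (C)_{m,m+1} = (c-m) mu
C[idx, idx] = -((c - idx) * mu + idx * xi)         # (C)_{m,m}

z = mu / (mu - xi)                 # as in the recurrence above
ok = []
for i in range(n):
    x = np.zeros(n)
    for k in range(i + 1):
        x[i - k] = comb(c - i + k, k) * z**k       # x^{(i)}_{i-k}
    lam = -((c - i) * mu + i * xi)                 # eigenvalue lambda_i
    ok.append(np.allclose(C @ x, lam * x))
print(all(ok))
```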

References

[1] Bellman, R.: Introduction to Matrix Analysis. Philadelphia: SIAM 1997
[2] Brandt, A., Brandt, M.: On the M(n)/M(n)/s queue with impatient calls. Performance Evaluation 35, 1–18 (1999)
[3] Brandt, A., Brandt, M.: Asymptotic results and a Markovian approximation for the M(n)/M(n)/s + GI system. Queueing Syst. 41, 73–94 (2002)
[4] Feinberg, M.A.: Performance characteristics of automated call distribution systems. In: Global Telecommunications Conference (GLOBECOM ’90). Piscataway, NJ: IEEE, Vol. I, pp. 415–419, 1990
[5] Fischer, M.J., Garbin, D.A., Gharakhanian, A.: Performance modeling of distributed automatic call distribution systems. Telecommun. Syst. 9, 133–152 (1998)
[6] Hofmann, J.: The BMAP/G/1 queue with level dependent arrivals – an overview. Telecommun. Syst. 16, 347–360 (2001)
[7] Iserles, A., Munthe-Kaas, H.Z., Nørsett, S.P., Zanna, A.: Lie-group methods. In: Iserles, A. (ed.): Acta numerica 2000. Cambridge: Cambridge University Press 2000, pp. 215–365
[8] Jolley, W.M., Harris, R.J.: Analysis of post-call activity in queueing systems. In: Compania Telefonica National de Espagna et al.: 9th International Teletraffic Congress, October, 1973, Torremolinos, Spain. Madrid: Paraninfo 1979, pp. 1–9
[9] Koole, G., Mandelbaum, A.: Queueing models of call centers: an introduction. Ann. Oper. Res. 113, 41–59 (2002)
[10] Latouche, G., Ramaswami, V.: Introduction to matrix analytic methods in stochastic modeling. Philadelphia: SIAM 1999


[11] Lucantoni, D.M., Meier-Hellstern, K.S., Neuts, M.F.: A single-server queue with server vacations and a class of nonrenewal arrival processes. Adv. in Appl. Probab. 22, 676–705 (1990)
[12] Neuts, M.F.: Structured stochastic matrices of M/G/1 type and their applications. New York: Dekker 1989
[13] Neuts, M.F., Li, J.-M.: An algorithm for the P(n, t) matrices of a continuous BMAP. In: Chakravarthy, S.R., Alfa, A.S. (eds.): Matrix-analytic methods in stochastic models. New York: Dekker 1997, pp. 7–19
[14] Zanna, A., Munthe-Kaas, H.Z.: Generalized polar decompositions for the approximation of the matrix exponential. SIAM J. Matrix Anal. Appl. 23, 840–862 (2002)
