DETERMINATION OF EXPLICIT SOLUTION FOR A GENERAL CLASS OF MARKOV PROCESSES

DANIELLE LIU
AT&T Bell Laboratories, Holmdel, NJ

Y. QUENNEL ZHAO
Department of Mathematics and Statistics, University of Winnipeg, Winnipeg, Manitoba, CANADA

ABSTRACT

In this paper, we study both GI/M/1 type and M/G/1 type Markov chains with the special structure A0 = ωβ. We obtain explicit formulas for the matrices R and G, which generalize earlier results on Markov chains of QBD type. In the GI/M/1 type case, we show how to find the maximal eigenvalue of R, and then that the computation of R is equivalent to that of x2. In the M/G/1 type case, we show that G = 1β. Based on these results, we show that a stable recursion for finding the stationary probability vectors xi can be carried out easily. Two applied models are discussed as examples of such processes.

1. INTRODUCTION

We consider both GI/M/1 type and M/G/1 type Markov chains [6, 7], where the transition submatrices are of order m ≤ ∞. The results in Theorem 1 are consequences of Tweedie [9]. Our purpose is to derive explicit expressions for the rate matrix R and the matrix G when A0 has the special structure A0 = ωβ. The matrices R and G play important roles in matrix analytic solutions. Our results generalize those of Gillent and Latouche [2] and Ramaswami and Latouche [10], in which QBD processes were considered. The former dealt with the case m < ∞; the results were later generalized to m ≤ ∞. Those papers also obtained counterpart results when A2 has a similar special structure. In this paper, we extend the results of [2] and [10] to Markov chains of GI/M/1 type and M/G/1 type. Many interesting problems in applications can be formulated as either a GI/M/1 type or an M/G/1 type
Markov chain with the special structure. The results in this paper provide a unified treatment of the shortest queue model with jockeying, a queueing system that has been tackled over many years by a variety of methods [3, 12, 1, 13]. The main results are discussed later in the paper as an example of a GI/M/1 type Markov chain. The study of patient and impatient customers in a telephone system serves as an example of M/G/1 type [14]. We consider irreducible, aperiodic Markov chains with a transition probability matrix of the form
$$P = \begin{pmatrix}
B_0 & C_0 & 0 & 0 & \cdots \\
B_1 & C_1 & A_0 & 0 & \cdots \\
B_2 & A_2 & A_1 & A_0 & \cdots \\
B_3 & A_3 & A_2 & A_1 & \ddots \\
\vdots & \vdots & \vdots & \ddots & \ddots
\end{pmatrix} \tag{1}$$
where B0 is a square matrix of order n ≤ ∞, Ai (i = 0, 1, 2, . . .) are square matrices of order m ≤ ∞, and C0, C1, and Bi (i = 1, 2, 3, . . .) are matrices of appropriate dimensions; or
$$\tilde{P} = \begin{pmatrix}
B_0 & B_1 & B_2 & B_3 & \cdots \\
A_0 & A_1 & A_2 & A_3 & \cdots \\
0 & A_0 & A_1 & A_2 & \cdots \\
0 & 0 & A_0 & A_1 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix} \tag{2}$$
where Bi and Ai (i = 0, 1, 2, . . .) are square matrices of order m ≤ ∞. We denote by x = (x0, x1, x2, . . .) the stationary probability vector associated with P or P̃. Then we have:

THEOREM 1 If the Markov chain is positive recurrent, then the stationary probability vector x is such that:

a) for P in (1),

$$x_i = x_1 R^{i-1}, \qquad i \ge 2,$$

where the matrix R of order m is the minimal nonnegative solution of the matrix equation

$$R = \sum_{k=0}^{\infty} R^k A_k. \tag{3}$$

It is the limit of the monotonically increasing sequence of matrices

$$R_0 = 0, \qquad R_{k+1} = A_0 (I - A_1)^{-1} + \sum_{l=2}^{\infty} R_k^l A_l (I - A_1)^{-1}, \quad k \ge 0. \tag{4}$$
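The recursion (4) lends itself directly to computation when m is finite and the series is truncated at some level K. The following is a minimal sketch (our own illustrative code, not from the paper; the function name, tolerance, and truncation level are assumptions):

```python
import numpy as np

def compute_R(A, tol=1e-12, max_iter=100_000):
    """Iterate R_{k+1} = A_0 (I - A_1)^{-1} + sum_{l>=2} R_k^l A_l (I - A_1)^{-1}.

    A is a list [A_0, A_1, ..., A_K] of m x m numpy arrays (series
    truncated at K); returns an approximation to the minimal
    nonnegative solution of (3).
    """
    m = A[0].shape[0]
    inv = np.linalg.inv(np.eye(m) - A[1])   # (I - A_1)^{-1}
    R = np.zeros((m, m))                    # R_0 = 0
    for _ in range(max_iter):
        S = A[0] @ inv
        Rl = R @ R                          # R^2, the first power needed
        for Al in A[2:]:
            S += Rl @ Al @ inv
            Rl = Rl @ R                     # advance to the next power R^l
        if np.max(np.abs(S - R)) < tol:
            return S
        R = S
    return R
```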
The spectral radius η of the matrix R is strictly less than one. The vector (x0, x1) is obtained by solving the linear system

$$(x_0, x_1) = (x_0, x_1) \begin{pmatrix} B_0 & C_0 \\ \sum_{k=1}^{\infty} R^{k-1} B_k & C_1 + \sum_{k=2}^{\infty} R^{k-1} A_k \end{pmatrix}$$
with the normalizing condition

$$x_0 \mathbf{1} + x_1 (I - R)^{-1} \mathbf{1} = 1,$$

where 1 is a column vector of 1's of the proper size.

b) for P̃ in (2), the matrix G of order m for the fundamental period is the minimal nonnegative solution of the matrix equation

$$G = \sum_{k=0}^{\infty} A_k G^k, \tag{5}$$

and it is the limit of the monotonically increasing sequence of matrices

$$G_1 = A_0, \tag{6}$$

$$G_{k+1} = \sum_{l=0}^{\infty} A_l G_k^l, \quad k \ge 1. \tag{7}$$
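An analogous sketch applies to the recursion (6)–(7) for G (again our own illustrative code, under the same finite-m, truncated-series assumptions):

```python
import numpy as np

def compute_G(A, tol=1e-12, max_iter=100_000):
    """Iterate G_{k+1} = sum_{l>=0} A_l G_k^l starting from G_1 = A_0.

    A is a list [A_0, A_1, ..., A_K] of m x m numpy arrays.
    """
    m = A[0].shape[0]
    G = A[0].copy()                 # G_1 = A_0
    for _ in range(max_iter):
        S = np.zeros((m, m))
        Gl = np.eye(m)              # G^0 = I
        for Al in A:
            S += Al @ Gl
            Gl = Gl @ G             # advance to the next power G^l
        if np.max(np.abs(S - G)) < tol:
            return S
        G = S
    return G
```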
The theorem as stated here was obtained by Neuts [6], [7] for m < ∞. For m = ∞, the proof for the GI/M/1 type case is a direct consequence of the results of Tweedie [9], while the M/G/1 type case can be proved similarly. For details, one may also refer to [10]. The analogous results for continuous-time Markov chains are stated in the following theorem.
THEOREM 2 For an irreducible, positive recurrent continuous-time Markov chain of GI/M/1 type with its infinitesimal generator partitioned as in (1) and sup_i (−A_1)_{ii} < ∞, the stationary probability vector x is such that

$$x_i = x_1 R^{i-1}, \qquad i \ge 2,$$

where the matrix R of order m is the minimal nonnegative solution of the matrix equation

$$\sum_{k=0}^{\infty} R^k A_k = 0, \tag{8}$$

and it is the limit of the monotonically increasing sequence of matrices

$$R_0 = 0, \qquad R_{k+1} = A_0 (-A_1)^{-1} + \sum_{l=2}^{\infty} R_k^l A_l (-A_1)^{-1}, \quad k \ge 0. \tag{9}$$
The spectral radius η of the matrix R is strictly less than one. The vector (x0, x1) is obtained by solving the linear system

$$(x_0, x_1) \begin{pmatrix} B_0 & C_0 \\ \sum_{k=1}^{\infty} R^{k-1} B_k & C_1 + \sum_{k=2}^{\infty} R^{k-1} A_k \end{pmatrix} = 0$$
with the normalizing condition

$$x_0 \mathbf{1} + x_1 (I - R)^{-1} \mathbf{1} = 1.$$

Similarly, for an irreducible, positive recurrent continuous-time Markov chain of M/G/1 type with its infinitesimal generator partitioned as in (2) and sup_i (−A_1)_{ii} < ∞, the matrix G of order m for the fundamental period is the minimal nonnegative solution of the matrix equation

$$\sum_{k=0}^{\infty} A_k G^k = 0, \tag{10}$$

and it is the limit of the monotonically increasing sequence of matrices

$$G_1 = A_0, \tag{11}$$

$$G_{k+1} = (-A_1)^{-1} A_0 + \sum_{l=2}^{\infty} (-A_1)^{-1} A_l G_k^l, \quad k \ge 1. \tag{12}$$
Our purpose is to show the simplifications obtained when the rows of A0 are proportional to a common row vector β, that is, A0 = ω · β, where ω is a column vector. Without loss of generality, we assume that β is a row vector with β1 = 1.

2. DETERMINATION OF MATRICES R AND G

THEOREM 3 Assume that A0 = ωβ with β1 = 1. If the Markov chain P is positive recurrent, then

$$R = A_0 \left[ I - \sum_{i=1}^{\infty} \eta^{i-1} A_i \right]^{-1}, \tag{13}$$

or R = ωξ, where

$$\xi = \beta \left[ I - \sum_{i=1}^{\infty} \eta^{i-1} A_i \right]^{-1}, \tag{14}$$
and η = ξω is the maximal eigenvalue of R.

Proof: It follows from (4) and A0 = ωβ with β1 = 1 that R_k = ωξ_k for k ≥ 1, where

$$\xi_1 = \beta (I - A_1)^{-1}, \qquad \xi_k = \beta (I - A_1)^{-1} + \sum_{l=2}^{\infty} (\xi_{k-1} \omega)^{l-1} \xi_{k-1} A_l (I - A_1)^{-1}.$$

Note that ξ_k ≥ 0 for k ≥ 1; thus R = ωξ for some nonnegative row vector ξ, which satisfies the equation

$$\xi = \beta (I - A_1)^{-1} + \sum_{l=2}^{\infty} (\xi \omega)^{l-1} \xi A_l (I - A_1)^{-1}.$$

Equivalently,

$$\beta = \xi \left[ I - \sum_{l=1}^{\infty} \eta^{l-1} A_l \right]$$

with η = ξω. Since η < 1, the inverse of $I - \sum_{l=1}^{\infty} \eta^{l-1} A_l$ exists.
Next we show that η = ξω is in fact the maximal eigenvalue of R. Since R = ωξ, we have Rω = (ξω)ω and, more generally, R^l = (ξω)^{l-1} ωξ for l ≥ 1. Obviously, ξω is the spectral radius of R for m < ∞. For m = ∞, we have

$$\sum_{l=0}^{\infty} r^l R^l = I + \sum_{l=1}^{\infty} r^l (\xi\omega)^{l-1} \omega\xi,$$

which converges for positive r if and only if r < (ξω)^{-1}.
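Theorem 3 suggests a cheap two-step computation when A0 = ωβ: iterate on the scalar η = ξω, solving (14) for ξ at each step, and then set R = ωξ. The sketch below is our own construction, not the paper's algorithm; the fixed-point iteration is one natural choice and its convergence should be checked for a given model:

```python
import numpy as np

def R_rank_one(omega, beta, A, tol=1e-12, max_iter=10_000):
    """Compute R = omega xi via (13)-(14), assuming A_0 = outer(omega, beta).

    omega: shape (m,) column vector; beta: shape (m,) row with beta.sum() == 1;
    A = [A_0, A_1, ..., A_K] with the series truncated at K.
    Returns (R, eta).
    """
    m = len(omega)
    eta = 0.0
    for _ in range(max_iter):
        # M = I - sum_{i>=1} eta^{i-1} A_i, so that xi M = beta  -- eq. (14)
        M = np.eye(m) - sum(eta ** (i - 1) * A[i] for i in range(1, len(A)))
        xi = np.linalg.solve(M.T, beta)     # solve xi M = beta for xi
        eta_new = float(xi @ omega)         # eta = xi omega
        if abs(eta_new - eta) < tol:
            eta = eta_new
            break
        eta = eta_new
    return np.outer(omega, xi), eta
```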
THEOREM 4 Assume that A0 = ωβ with β1 = 1. If the Markov chain P̃ is positive recurrent, then G = 1β.

Proof: It follows from (7) and A0 = ωβ with β1 = 1 that G_k = ν_k β for k ≥ 1, where

$$\nu_1 = \omega, \qquad \nu_k = \sum_{l=0}^{\infty} A_l (\beta \nu_{k-1})^{l-1} \nu_{k-1}.$$
Thus G = νβ for some nonnegative column vector ν. Since G is stochastic, we know that G1 = 1; that is, G1 = νβ1 = ν since β1 = 1. Therefore, ν = 1.

Remark: The above result is also intuitively clear, since G_{jk} is the conditional probability, given that the chain starts in state (i, j), that the Markov chain eventually enters the set of states at level i − 1 by hitting (i − 1, k), i ≥ 1. Since A0 is the block by which the Markov chain moves downwards, G = 1β.

Next we state the analogous result for continuous-time Markov processes.

THEOREM 5 Suppose that we have an irreducible, positive recurrent continuous-time Markov chain with an infinitesimal generator P partitioned as in (1). Assume that the matrix $\sum_{i=0}^{\infty} A_i$ is irreducible and A0 = ωβ. Further, let sup_i (−A_1)_{ii} be finite. Then its stationary probability vector x = (x0, x1, x2, . . .) is such that

$$x_i = x_1 R^{i-1}, \qquad i \ge 2,$$

where

$$R = A_0 \left[ -\sum_{i=1}^{\infty} \eta^{i-1} A_i \right]^{-1}, \tag{15}$$

or R = ωξ, where

$$\xi = \beta \left[ -\sum_{i=1}^{\infty} \eta^{i-1} A_i \right]^{-1}, \tag{16}$$

and η = ξω is the maximal eigenvalue of R. Similarly, for an irreducible, positive recurrent Markov chain with an infinitesimal generator P̃ partitioned as in (2), assume that A0 = ωβ with β1 = 1; then G = 1β.
For a GI/M/1 type Markov chain, an efficient algorithm for computing the matrix R is usually the key to obtaining numerical results. It is clear from Theorem 3 that if we can find the maximal eigenvalue of R in advance, then the computation of R is equivalent to solving a linear system of size m. Generally, if α is an eigenvalue of R, then it is a zero of the determinant $\det(\alpha I - \sum_{k=0}^{\infty} \alpha^k A_k)$ (see Theorem 1 in [13]), or of $\det(\sum_{k=0}^{\infty} \alpha^k A_k)$ for the continuous case. The converse is also true, but the proof is less obvious (see Theorem 2 in [4]). When A0 = ωβ, this can be stated in the following lemma.

LEMMA 1 Under the assumptions of Theorem 3 (or Theorem 5), the eigenvalue(s) of R are exactly the zero(s) of $\det(\alpha I - \sum_{k=0}^{\infty} \alpha^k A_k)$ (or $\det(\sum_{k=0}^{\infty} \alpha^k A_k)$) satisfying |α| < 1.
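Lemma 1 reduces the search for η to a scalar root-finding problem. The sketch below is our own construction for the discrete case, assuming the determinant changes sign exactly once on (0, 1), which must be verified for the model at hand:

```python
import numpy as np

def eta_by_bisection(A, lo=1e-9, hi=1.0 - 1e-9, tol=1e-12):
    """Locate the root in (0, 1) of det(alpha I - sum_k alpha^k A_k) = 0.

    A = [A_0, A_1, ..., A_K] of m x m numpy arrays (series truncated at K).
    """
    m = A[0].shape[0]

    def f(alpha):
        M = alpha * np.eye(m) - sum(alpha ** k * Ak for k, Ak in enumerate(A))
        return np.linalg.det(M)

    sign_lo = np.sign(f(lo))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.sign(f(mid)) == sign_lo:
            lo = mid                # root lies in (mid, hi)
        else:
            hi = mid                # root lies in (lo, mid)
    return 0.5 * (lo + hi)
```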
A numerical approach to obtain the maximal eigenvalue η without computing the matrix R is outlined in Neuts [6]. The significance of the maximal eigenvalue of R is examined in detail in [8]; some special cases in which η can be derived explicitly are also discussed in that paper. After finding the maximal eigenvalue η of R, the computation of R is equivalent to solving the linear system of equations for ξ, namely

$$\xi \left( I - \sum_{i=1}^{\infty} \eta^{i-1} A_i \right) = \beta$$

for the discrete case, or

$$\xi \left( -\sum_{i=1}^{\infty} \eta^{i-1} A_i \right) = \beta$$

for the continuous case. This is in fact equivalent to the computation of the stationary probability vector x2. Since all rows of R are proportional to each other, the i-th row is ω_i (ξ_1, ξ_2, . . . , ξ_m). Letting x_i = (x_{i1}, x_{i2}, . . . , x_{im}), we have

$$x_2 = x_1 R = \sum_{i=1}^{m} \omega_i x_{1i} \, (\xi_1, \xi_2, \ldots, \xi_m),$$

and therefore

$$\xi_k = \frac{x_{2k}}{\sum_{i=1}^{m} \omega_i x_{1i}}, \qquad k = 1, 2, \ldots, m.$$
The probability vector x2, together with x0 and x1, is determined by

$$(x_0, x_1, x_2) = (x_0, x_1, x_2) \begin{pmatrix}
B_0 & C_0 & 0 \\
B_1 & C_1 & A_0 \\
\sum_{k=2}^{\infty} \eta^{k-2} B_k & \sum_{k=2}^{\infty} \eta^{k-2} A_k & \sum_{k=1}^{\infty} \eta^{k-1} A_k
\end{pmatrix} \tag{17}$$

for the discrete case, or by

$$(x_0, x_1, x_2) \begin{pmatrix}
B_0 & C_0 & 0 \\
B_1 & C_1 & A_0 \\
\sum_{k=2}^{\infty} \eta^{k-2} B_k & \sum_{k=2}^{\infty} \eta^{k-2} A_k & \sum_{k=1}^{\infty} \eta^{k-1} A_k
\end{pmatrix} = 0 \tag{18}$$

for the continuous case, with the normalizing condition

$$x_0 \mathbf{1} + x_1 \mathbf{1} + x_2 \mathbf{1}/(1 - \eta) = 1. \tag{19}$$
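Once η is known, (17) together with (19) is a finite linear system for (x0, x1, x2). A minimal sketch of the discrete-case solve follows (our own code; the block sizes, the truncation level, and the trick of replacing one redundant balance equation by the normalization are assumptions of this illustration):

```python
import numpy as np

def boundary_vector(B, C0, C1, A, eta):
    """Solve (x0,x1,x2) = (x0,x1,x2) T with the normalization (19).

    B = [B_0, B_1, ..., B_K] (B_0 is n x n, B_k is m x n for k >= 1),
    C0 is n x m, C1 is m x m, A = [A_0, ..., A_K] are m x m.
    """
    n, m = C0.shape
    SB = sum((eta ** (k - 2) * B[k] for k in range(2, len(B))), np.zeros((m, n)))
    SA2 = sum((eta ** (k - 2) * A[k] for k in range(2, len(A))), np.zeros((m, m)))
    SA1 = sum((eta ** (k - 1) * A[k] for k in range(1, len(A))), np.zeros((m, m)))
    T = np.block([[B[0], C0, np.zeros((n, m))],
                  [B[1], C1, A[0]],
                  [SB, SA2, SA1]])
    N = T.T - np.eye(n + 2 * m)           # x T = x  <=>  (T' - I) x' = 0
    w = np.ones(n + 2 * m)
    w[n + m:] = 1.0 / (1.0 - eta)         # weights from condition (19)
    M = np.vstack([N[:-1, :], w])         # drop one redundant equation
    rhs = np.zeros(n + 2 * m)
    rhs[-1] = 1.0
    x = np.linalg.solve(M, rhs)
    return x[:n], x[n:n + m], x[n + m:]
```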
Summarizing the above discussion leads to the following theorem.

THEOREM 6 Assume that A0 = ωβ with β1 = 1. If the Markov chain P is positive recurrent, then

$$x_i = x_2 \eta^{i-2}, \qquad i > 2, \tag{20}$$

where η is the maximal root of the equation $\det(\alpha I - \sum_{k=0}^{\infty} \alpha^k A_k) = 0$ for the discrete time case (or $\det(\sum_{k=0}^{\infty} \alpha^k A_k) = 0$ for the continuous time case) satisfying
0 < α < 1, and (x0, x1, x2) is determined by solving the linear system in (17) (or (18)).

For an M/G/1 type Markov chain with A0 = ωβ, since G = 1β, we have G^k = G for all k = 1, 2, . . .. The stable recursion for computing the probability vectors x_i, i = 1, 2, . . ., proposed by Ramaswami [11] can then be carried out easily, since both B̄_k and Ā_k are explicitly expressed. These results are stated as follows.

THEOREM 7 Assume that A0 = ωβ with β = (β_1, β_2, . . . , β_m) and β1 = 1. If the Markov chain P̃ is positive recurrent, then x0 is, up to a normalization constant c, the stationary probability vector of the stochastic matrix $K = B_0 + \sum_{i=1}^{\infty} B_i G$, and for i = 1, 2, . . .,

$$x_i = \left( x_0 \bar{B}_i + \sum_{j=1}^{i-1} x_j \bar{A}_{i+1-j} \right) \left( I - \bar{A}_1 \right)^{-1}, \tag{21}$$

where

$$\bar{B}_\nu = \sum_{i=\nu}^{\infty} B_i G^{i-\nu}, \qquad \bar{A}_\nu = \sum_{i=\nu}^{\infty} A_i G^{i-\nu}, \quad \nu \ge 0. \tag{22}$$

Proof: See Neuts [7].

Since G = 1β has a special structure, most of the matrix operations in (21) may be replaced by vector operations.
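Concretely, each G^{i−ν} in (22) collapses to 1β, so B̄_ν and Ā_ν are just rank-one corrections of B_ν and A_ν. A rough sketch of the resulting recursion (our own code; names, the truncation level, and the eigenvector-based computation of x0 are assumptions, and x0 is returned only up to the constant c of Theorem 7):

```python
import numpy as np

def mg1_recursion(B, A, beta, n_levels):
    """Evaluate the recursion (21)-(22) when G = outer(1, beta) (Theorem 7).

    B = [B_0, ..., B_K] and A = [A_0, ..., A_K] are m x m numpy arrays;
    beta is a row vector of length m with beta.sum() == 1.
    """
    m = len(beta)
    ones = np.ones(m)
    G = np.outer(ones, beta)                         # G = 1 beta

    def bar(M, nu):                                  # eq. (22) with G^k = G
        tail = sum(M[nu + 1:], np.zeros((m, m)))     # sum_{i > nu} M_i
        return M[nu] + np.outer(tail @ ones, beta)   # M_nu + (tail) G

    K = B[0] + sum(B[1:], np.zeros((m, m))) @ G      # K = B_0 + sum_i B_i G
    vals, vecs = np.linalg.eig(K.T)                  # left eigenvector for 1
    x0 = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    x0 = x0 / x0.sum()                               # up to the constant c

    inv = np.linalg.inv(np.eye(m) - bar(A, 1))       # (I - Abar_1)^{-1}
    xs = [x0]
    for i in range(1, n_levels + 1):
        acc = x0 @ bar(B, i) if i < len(B) else np.zeros(m)
        for j in range(1, i):
            if i + 1 - j < len(A):
                acc = acc + xs[j] @ bar(A, i + 1 - j)
        xs.append(acc @ inv)                         # eq. (21)
    return xs
```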
3. APPLICATIONS

There are many applications in which the model can be formulated as a Markov chain with the special structure studied in this paper. Besides those discussed in [10], we consider two different applications, one of GI/M/1 type and the other of M/G/1 type. The first deals with a shortest queue with jockeying, which, as mentioned in the introduction, has attracted the attention of researchers for many years; the results presented in this paper enable us to give a unified treatment of the problem. As our second example, we use a telephone model studied recently by Zhao and Alfa [14], which has a transition matrix of M/G/1 type.

The shortest queue model with jockeying

Shortest queue models arise in a broad range of applications. As an example, consider the following satellite communication system. In such a system, all stations on earth are organized into disjoint zones; packets generated from earth zones arrive at the satellite by using different possible access techniques; several buffers are provided at the satellite for the waiting packets to be processed or transmitted; and finally, the packets are sent to their destinations by the multi-down-link beams. Since there is more than one buffer on board, introducing jockeying of the waiting packets among the buffers can improve the performance of the system: if we allow a packet waiting in a buffer with many waiting packets to move to another buffer with fewer waiting packets, then the average packet waiting time is clearly reduced. Specifically, we make the following model assumptions, used in [13], which include all models studied in [3, 12, 1] as special cases.
a) Customers arrive singly, with interarrival times independent and identically distributed according to an arbitrary distribution function A(t) with A(t) = 0 for t < 0, and they are not allowed to renege or balk.

b) There are c (c ≥ 2) servers, numbered 1, 2, . . . , c, in the system, and each of them has its own waiting line. In each waiting line, service is rendered according to the FCFS (first come, first served) discipline. The c servers have independent exponential service times, and the service times are independent of arrivals.

c) An arriving customer joins one of the shortest waiting lines according to a pre-determined probability distribution.

d) Jockeying among the waiting lines is permitted: the last customer(s) in the longest waiting line(s) instantaneously jockey to the shortest waiting line(s) according to a pre-determined probability distribution as soon as the difference in the number of customers between the shortest and the longest waiting line(s) exceeds r, r ≥ 1.

1) Continuous case: Assume that the interarrival times are independent and identically distributed exponential random variables with mean 1/λ. Let X_k(t) represent the number of customers in queue k, including the customer in service (if any), at time t for k = 1, 2, . . . , c; then (X_1(t), X_2(t), . . . , X_c(t)) is a continuous-time Markov chain. Let the infinitesimal generator of the Markov chain of queue lengths be partitioned according to the blocks G