The MMCPP/GE/c Queue

R. Chakka        P.G. Harrison∗

Abstract

We obtain the queue length probability distribution at equilibrium for a multi-server, single queue with generalised exponential (GE) service time distribution and a Markov modulated compound Poisson arrival process (MMCPP)—i.e. a Poisson point process with bulk arrivals having geometrically distributed batch size whose parameters are modulated by a Markovian arrival phase process. This arrival process has been considered appropriate in ATM networks and the GE service times provide greater flexibility than the more conventionally assumed exponential distribution. The result is exact and is derived, for both infinite and finite capacity queues, using the method of spectral expansion applied to the two-dimensional (queue length by phase of the arrival process) Markov process that describes the dynamics of the system. The Laplace transform of the interdeparture time probability density function is then obtained. The analysis therefore could provide the basis of a building block for modelling networks of switching nodes in terms of their internal arrival processes, which may be both correlated and bursty.

1 Introduction

Various models have been investigated in the literature for modelling ATM traffic. These include the compound Poisson process (CPP), in which the inter-arrival times are assumed to have a generalised exponential (GE) probability distribution [Kou94], the Markov modulated Poisson process (MMPP) and self-similar traffic models such as Fractional Brownian Motion (FBM) [Man68].

∗Department of Computing, Imperial College, 180 Queen’s Gate, London, SW7 2BZ. Email: [email protected]

Suggestions have also been made about the use of distributions with “fat tails”, such as the log-normal and Pareto distributions. Many of these models, however, do not possess the necessary mathematical properties for efficient performance analysis by established methods such as queueing theory. A CPP traffic model can capture burstiness characteristics quite well in many situations (essentially when geometric burst sizes are appropriate) but not the auto-correlations observed in much real traffic. It does have the advantage, however, of an efficient analytical solution [Kou88, Har94]. The MMPP model captures the auto-correlations to some extent, but cannot represent burstiness as well as the GE model. It too yields an analytical solution. Self-similar models such as FBM do capture the auto-correlations quite effectively, as well as burstiness, but are analytically intractable in the sense noted above. In this paper we introduce a new traffic model, the Markov Modulated Compound Poisson Process (MMCPP). In the next section we derive exactly the steady state queue length probability distribution for the MMCPP/GE/c queue using the method of spectral expansion. Thus, not only do we utilize a considerably more representative arrival process, we also have a service time distribution that is significantly more general than the simple negative exponential; for example, it is characterized by two moments rather than only one [Kou94]. Obviously the MMPP/M/c queue [Cha96a] is a special case of this in which all batch sizes are unity with probability one. In Section 3 the departure process of this queue is considered and the Laplace transform of the probability density of its inter-departure times between batches is derived. Section 4 deals with the case of bounded queue size, i.e. the finite capacity system. The paper concludes in Section 5 with a discussion of the implications of our results for the modelling of traffic in ATM networks.

2 Equilibrium queue length probabilities

2.1 The arrival process

We consider an MMCPP arrival process with $N$ phases, where $(\sigma_i, \theta_i)$ are the parameters of the GE inter-arrival time distribution in phase $i$, i.e. the inter-arrival time probability distribution function in phase $i$ is $1 - (1-\theta_i)e^{-\sigma_i t}$. Thus, the arrival point-process is Poisson, with batches arriving at each point having geometric size; specifically, the probability that a batch is of size $s$ is $(1-\theta_i)\theta_i^{s-1}$ in phase $i$. Let $Q$ be the generator matrix of the arrival phase process, given by

$$Q = \begin{pmatrix}
-q_1 & q_{1,2} & \ldots & q_{1,N} \\
q_{2,1} & -q_2 & \ldots & q_{2,N} \\
\vdots & \vdots & \ddots & \vdots \\
q_{N,1} & q_{N,2} & \ldots & -q_N
\end{pmatrix},$$

where $q_{i,k}$ ($i \neq k$) is the transition rate from phase $i$ to phase $k$, and

$$q_i = \sum_{j=1}^{N} q_{i,j}, \qquad q_{i,i} = 0 \qquad (i = 1, \ldots, N).$$

We also write

$$\Sigma = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_N), \qquad \sigma = (\sigma_1, \sigma_2, \ldots, \sigma_N).$$

Let $r = (r_1, r_2, \ldots, r_N)$ be the vector of equilibrium phase probabilities of the arrival process. Then $r$ can be computed from the equations

$$rQ = 0\ ; \qquad r e_N = 1,$$

where $e_N$ stands for the column vector with $N$ elements, each of which is unity. Hence, the total average arrival rate, $\bar{\sigma}$, is

$$\bar{\sigma} = \sum_{i=1}^{N} \frac{r_i \sigma_i}{1 - \theta_i}.$$
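As a concrete illustration of this step, the following sketch computes the equilibrium phase probabilities $r$ and the total average arrival rate $\bar{\sigma}$ for a small two-phase MMCPP; the numerical parameter values are invented for the example and are not taken from the paper.

```python
import numpy as np

# Hypothetical two-phase MMCPP (illustrative values only).
Q = np.array([[-0.5,  0.5],
              [ 1.0, -1.0]])       # generator of the arrival phase process
sigma = np.array([2.0, 8.0])       # Poisson point rates sigma_i per phase
theta = np.array([0.3, 0.6])       # geometric batch-size parameters theta_i

# Solve r Q = 0 with r e_N = 1: replace one balance equation by the
# normalisation condition and solve the resulting linear system.
N = Q.shape[0]
A = Q.T.copy()
A[-1, :] = 1.0                     # last row enforces sum(r) = 1
b = np.zeros(N)
b[-1] = 1.0
r = np.linalg.solve(A, b)

# Total average arrival rate: sum_i r_i * sigma_i / (1 - theta_i)
sigma_bar = float(np.sum(r * sigma / (1.0 - theta)))
print("r =", r, " total arrival rate =", sigma_bar)
```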

2.2 The GE multi-server

The service facility has $c$ homogeneous servers, each with GE-distributed service times with parameters $(\mu, \phi)$. The service discipline is FCFS and each server serves at most one customer at any given time. It is then intuitively clear that for the queue to be stable we require $c\mu > (1-\phi)\bar{\sigma}$. The operation of the GE server is similar to that described for the CPP arrival process above. However, the batch size associated with a service completion is bounded by one more than the number of customers waiting to commence service at the departure instant. For queues of length $j \geq c$ (including any customers in service), the maximum batch size at a departure instant is $j - c + 1$, only one server being able to complete a service period at any one instant under the assumption of exponentially distributed bulk-service (or batch-service) times. Thus, the probability that a departing batch has size $s$ is $(1-\phi)\phi^{s-1}$ for $1 \leq s \leq j-c$ and $\phi^{j-c}$ for $s = j-c+1$. In particular, when $j = c$, the departing batch has size 1 with probability one, and this is also the case for all $j \leq c$ since there are then no customers waiting to commence service. It should be noted that, whilst the GE service time distribution implies burstiness in the service completions, this is not intended to reflect the actual operation of the server, which will typically serve one customer at a time. The GE distribution should simply be viewed as something more general than the negative exponential, with no particular physical interpretation. This generalization has yielded accurate approximations in many previous modelling studies; see for example [Kou94, Kou90, Kou96].
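The departing-batch-size distribution just described is easy to tabulate; the sketch below (a hypothetical helper, not from the paper) returns the probabilities of batch sizes $1, \ldots, \max(1, j-c+1)$ for a given queue length $j$ and number of servers $c$.

```python
def departing_batch_pmf(j, c, phi):
    """Probability mass function of the departing batch size when the queue
    length is j in a GE/c service facility with parameter phi.

    For j <= c the batch has size 1 with probability one; for j > c the batch
    has size s with probability (1-phi)*phi**(s-1) for s <= j-c and phi**(j-c)
    for the boundary size s = j-c+1.
    """
    if j <= c:
        return [1.0]
    pmf = [(1.0 - phi) * phi ** (s - 1) for s in range(1, j - c + 1)]
    pmf.append(phi ** (j - c))          # truncated tail mass on s = j-c+1
    return pmf

# Example: 3 servers, 7 customers present, phi = 0.4
print(departing_batch_pmf(7, 3, 0.4))   # probabilities for sizes 1..5, sums to 1
```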

2.3 The steady state solution

The state of the system at any time $t$ can be specified completely by two integer-valued random variables, $I(t)$ and $J(t)$. $I(t)$ varies from 1 to $N$, representing the phase of the arrival process, and $J(t) \geq 0$ represents the number of customers in the system at time $t$, including any in service. The system is now modelled by a continuous time, discrete state Markov process, $Y$, on a semi-infinite lattice strip. For convenience, let $I(t)$ vary in the horizontal direction and $J(t)$ in the vertical direction. We denote the steady state probabilities by $\{p_{i,j}\}$, where $p_{i,j} = \lim_{t\to\infty} \mathrm{Prob}(I(t) = i, J(t) = j)$. This system can be solved as follows. $Y$ evolves with the following instantaneous transition rates:

(a) $Q(i,k)$ – purely lateral transition rate – from state $(i,j)$ to state $(k,j)$ for all $j \geq 0$ ($1 \leq i, k \leq N$; $i \neq k$), caused by a phase transition in the Markov chain governing the arrival process;

(b) $B_{i,s}$ – unbounded $s$-step upward transition rate – from state $(i,j)$ to state $(i,j+s)$ for all $j \geq 0$ ($1 \leq i \leq N$; $s > 0$), caused by a new batch arrival of size $s$;

(c) $C_{j,s}$ – bounded $s$-step downward transition rate – from state $(i,j)$ to state $(i,j-s)$ for all $i$, $1 \leq i \leq N$ ($s = 1, \ldots, \max(1, j-c+1)$; $j = 1, 2, \ldots$), caused by a batch service completion of size $s$.

where

$$\begin{aligned}
B_{i,s} &= (1-\theta_i)\theta_i^{s-1}\sigma_i && (1 \leq i \leq N;\ s = 1, 2, \ldots)\\
C_{j,s} &= (1-\phi)\phi^{s-1}c\mu && (j \geq c;\ 1 \leq s \leq j-c)\\
C_{j,s} &= \phi^{s-1}c\mu && (j \geq c;\ s = j-c+1)\\
C_{j,1} &= j\mu && (1 \leq j \leq c-1)\\
C_{j,s} &= 0 && \text{otherwise}
\end{aligned}$$

We now define the following $N \times N$ diagonal matrices

$$\begin{aligned}
B_s &= \mathrm{Diag}\,[(1-\theta_1)\theta_1^{s-1}\sigma_1, \ldots, (1-\theta_N)\theta_N^{s-1}\sigma_N] && (s = 1, 2, \ldots)\\
C_j &= j\mu I && (1 \leq j \leq c)\\
C_j &= c\mu I && (j \geq c)\\
C_0 &= 0 &&
\end{aligned}$$

where $I$ is the identity, and $0$ the zero, $N \times N$ matrix. We abbreviate $C_c$ by $C$ and note that $B_s = \Theta^{s-1}B$ where

$$B = B_1 = \mathrm{Diag}\,[(1-\theta_1)\sigma_1, \ldots, (1-\theta_N)\sigma_N], \qquad \Theta = \mathrm{Diag}\,[\theta_1, \theta_2, \ldots, \theta_N].$$

Now let $v_j = (p_{1,j}, p_{2,j}, \ldots, p_{N,j})$. Then we have

Proposition 1. The balance equations of the MMCPP/GE/c queue are of the form

$$v_{j-1}Q_0 + v_jQ_1 + v_{j+1}Q_2 = 0 \qquad (j \geq c+1) \qquad (1)$$

$$v_{c-1}Q_{0,c} + v_cQ_{1,c} + \sum_{s=1}^{\infty} v_{c+s}Q_{1+s,c} = 0 \qquad (2)$$

$$v_{c-2}Q_{0,c-1} + v_{c-1}Q_{1,c-1} + \sum_{s=1}^{\infty} v_{c-1+s}Q_{1+s,c-1} = 0 \qquad (3)$$

$$v_{j-1}Q_{0,j} + v_jQ_{1,j} + v_{j+1}Q_{2,j} = 0 \qquad (1 \leq j \leq c-2) \qquad (4)$$

$$v_0\left[Q - \hat{\Theta}B\right] + v_1\mu = 0 \qquad (\text{for } c \geq 2) \qquad (5)$$

$$v_0\left[Q - \hat{\Theta}B\right] + v_1\mu + \sum_{s=2}^{\infty} v_s\mu\phi^{s-1} = 0 \qquad (\text{for } c = 1) \qquad (6)$$

$$\sum_{j=0}^{\infty} v_je_N = 1 \qquad (7)$$

where we define $v_{-1} = (0, 0, \ldots, 0)$, write $\hat{\Theta} = \mathrm{Diag}\,[1/(1-\theta_1), \ldots, 1/(1-\theta_N)]$, and where

$$\begin{aligned}
Q_{0,j} &= B - (Q - \hat{\Theta}B - C_{j-1})\Theta && (0 \leq j \leq c-2)\\
Q_{1,j} &= Q - \hat{\Theta}B - C_j(I + \Theta) && (0 \leq j \leq c-2)\\
Q_{2,j} &= C_{j+1} && (0 \leq j \leq c-2)\\
Q_{0,c} &= B - (Q - \hat{\Theta}B - C_{c-1})\Theta &&\\
Q_{1,c} &= Q - \hat{\Theta}B - C(I + \Theta) &&\\
Q_{1+s,c} &= C\phi^{s-1}\bigl((1-\phi)I - \phi\Theta\bigr) && (s = 1, 2, \ldots)\\
Q_{0,c-1} &= B - (Q - \hat{\Theta}B - C_{c-2})\Theta &&\\
Q_{1,c-1} &= Q - \hat{\Theta}B - (c-1)\mu(I + \Theta) &&\\
Q_{1+s,c-1} &= C\phi^{s-1} && (s = 1, 2, \ldots)\\
Q_0 &= B - (Q - \hat{\Theta}B - C)\Theta &&\\
Q_1 &= Q(I + \phi\Theta) - (\phi I + \hat{\Theta} + \phi\Theta\hat{\Theta})B - C(I + \Theta) &&\\
Q_2 &= -\phi Q + \phi\hat{\Theta}B + C &&
\end{aligned}$$

Proof. The balance equations are

$$\sum_{s=1}^{j} v_{j-s}B_s + v_j\Bigl(Q - \sum_{s=1}^{\infty} B_s - C_j\Bigr) + \sum_{s=1}^{\infty} v_{j+s}C_{j+s,s} = 0 \qquad (8)$$

for all $j \geq 0$. Obviously the last sum will be truncated for queue lengths $j < c-1$, as we shall see. Since $\sum_{s=1}^{\infty} B_s = \hat{\Theta}B$, we can rewrite the balance equations as

$$\sum_{s=1}^{j} v_{j-s}\Theta^{s-1}B + v_j(Q - \hat{\Theta}B - C_j) + \sum_{s=1}^{\infty} v_{j+s}C_{j+s,s} = 0 \qquad (9)$$

for all $j \geq 0$. Now, the first sum can be expanded as

$$\sum_{s=1}^{j} v_{j-s}\Theta^{s-1}B = v_{j-1}B + \sum_{s=1}^{j-1} v_{j-1-s}\Theta^{s-1}B\Theta\ ;$$

and from equations (9) for the balance equations concerning the $(j-1)$th row, we have

$$\sum_{s=1}^{j-1} v_{j-1-s}\Theta^{s-1}B = -v_{j-1}(Q - \hat{\Theta}B - C_{j-1}) - \sum_{s=1}^{\infty} v_{j-1+s}C_{j-1+s,s}$$

giving rise to

$$\sum_{s=1}^{j} v_{j-s}\Theta^{s-1}B = v_{j-1}\left[B - (Q - \hat{\Theta}B - C_{j-1})\Theta\right] - \sum_{s=1}^{\infty} v_{j-1+s}C_{j-1+s,s}\Theta$$

for $j \geq 1$. By substituting this equation into equation (9), the balance equations can now be simplified further as

$$v_{j-1}\left[B - (Q - \hat{\Theta}B - C_{j-1})\Theta\right] + v_j\left[Q - \hat{\Theta}B - C_j - C_{j,1}\Theta\right] + \sum_{s=1}^{\infty} v_{j+s}\left[C_{j+s,s}I - C_{j+s,s+1}\Theta\right] = 0 \qquad (10)$$

for $j \geq 1$. Substituting $j+1$ for $j$ in the above, we get

$$v_j\left[B - (Q - \hat{\Theta}B - C_j)\Theta\right] + v_{j+1}\left[Q - \hat{\Theta}B - C_{j+1} - C_{j+1,1}\Theta\right] + \sum_{s=1}^{\infty} v_{j+1+s}\left[C_{j+1+s,s}I - C_{j+1+s,s+1}\Theta\right] = 0 \qquad (11)$$

for $j+1 \geq 1$, or $j \geq 0$. We now consider the four cases $j \geq c+1$, $j = c$, $j = c-1$ and $j \leq c-2$.

Case (1): $j \geq c+1$. For this range of $j$, we have

$$C_{j+s,s} = (1-\phi)\phi^{s-1}c\mu, \quad C_{j+s,s+1} = (1-\phi)\phi^{s}c\mu, \quad C_{j-1} = C_j = C_{j+1} = C = c\mu I, \quad C_{j,1} = C_{j+1,1} = (1-\phi)c\mu.$$

Substituting the above in (10, 11) and then multiplying the equation (11) by $\phi$ and subtracting the resulting equation from (10), we get

$$v_{j-1}\left[B - (Q - \hat{\Theta}B - C)\Theta\right] + v_j\left[Q(I + \phi\Theta) - (\phi I + \hat{\Theta} + \phi\Theta\hat{\Theta})B - C(I + \Theta)\right] + v_{j+1}\left[-\phi Q + \phi\hat{\Theta}B + C\right] = 0 \qquad (j \geq c+1) \qquad (12)$$

Case (2): $j = c$. In this case, we have

$$C_{c+s,s} = (1-\phi)\phi^{s-1}c\mu, \quad C_{c+s,s+1} = \phi^{s}c\mu, \quad C_c = C_{c+1} = C = c\mu I, \quad C_{c-1} = (c-1)\mu I,$$
$$C_{c,1} = c\mu, \quad C_{c+1,1} = (1-\phi)c\mu, \quad C_{c+1+s,s} = (1-\phi)\phi^{s-1}c\mu, \quad C_{c+1+s,s+1} = (1-\phi)\phi^{s}c\mu.$$

Substituting the above in (10), we get

$$v_{c-1}\left[B - (Q - \hat{\Theta}B - (c-1)\mu I)\Theta\right] + v_c\left[Q - \hat{\Theta}B - C(I + \Theta)\right] + c\mu\sum_{s=1}^{\infty} v_{c+s}\left[(1-\phi)\phi^{s-1}I - \phi^{s}\Theta\right] = 0 \qquad (13)$$

Case (3): $j = c-1$. Here, we have

$$C_{c-2} = (c-2)\mu I, \quad C_{c-1} = (c-1)\mu I, \quad C_{c-1,1} = (c-1)\mu, \quad C_{c-1+s,s} = \phi^{s-1}c\mu, \quad C_{c-1+s,s+1} = 0.$$

Substituting the above in (10), we get

$$v_{c-2}\left[B - (Q - \hat{\Theta}B - (c-2)\mu I)\Theta\right] + v_{c-1}\left[Q - \hat{\Theta}B - (c-1)\mu(I + \Theta)\right] + c\mu\sum_{s=1}^{\infty} v_{c-1+s}\phi^{s-1} = 0 \qquad (14)$$

Case (4): $1 \leq j \leq c-2$. For this range of $j$,

$$C_{j+s,s} = (j+1)\mu \ \text{ if } s = 1, \ 0 \text{ otherwise}, \qquad C_{j+s,s+1} = 0.$$

Substituting into equation (10) for $1 \leq j \leq c-2$ we get

$$v_{j-1}\left[B - (Q - \hat{\Theta}B - (j-1)\mu I)\Theta\right] + v_j\left[Q - \hat{\Theta}B - j\mu(I + \Theta)\right] + v_{j+1}(j+1)\mu = 0 \qquad (0 \leq j \leq c-2) \qquad (15)$$

Case (5): $j = 0$. Writing the balance equations for the 0th row directly, we get

$$v_0\left[Q - \hat{\Theta}B\right] + v_1\mu = 0 \qquad (\text{for } c \geq 2)$$
$$v_0\left[Q - \hat{\Theta}B\right] + v_1\mu + \sum_{s=2}^{\infty} v_s\mu\phi^{s-1} = 0 \qquad (\text{for } c = 1) \qquad (16)$$

This completes the proof.
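For readers who want to experiment numerically, the following sketch assembles the $j$-independent coefficient matrices $Q_0$, $Q_1$, $Q_2$ of equation (1) from the model parameters. It is a straightforward transcription of the definitions in Proposition 1; the helper name and the test parameters are my own, not from the paper.

```python
import numpy as np

def coefficient_matrices(Qgen, sigma, theta, mu, phi, c):
    """Return (Q0, Q1, Q2) for the MMCPP/GE/c balance equations (j >= c+1),
    following the definitions given in Proposition 1."""
    N = len(sigma)
    I = np.eye(N)
    Theta = np.diag(theta)                         # Θ
    Theta_hat = np.diag(1.0 / (1.0 - theta))       # Θ̂
    B = np.diag((1.0 - theta) * sigma)             # B = B_1
    C = c * mu * I                                 # C = C_c

    Q0 = B - (Qgen - Theta_hat @ B - C) @ Theta
    Q1 = (Qgen @ (I + phi * Theta)
          - (phi * I + Theta_hat + phi * Theta @ Theta_hat) @ B
          - C @ (I + Theta))
    Q2 = -phi * Qgen + phi * Theta_hat @ B + C
    return Q0, Q1, Q2

# Hypothetical parameters (same two-phase example as above).
Qgen = np.array([[-0.5, 0.5], [1.0, -1.0]])
sigma = np.array([2.0, 8.0])
theta = np.array([0.3, 0.6])
Q0, Q1, Q2 = coefficient_matrices(Qgen, sigma, theta, mu=4.0, phi=0.2, c=3)
```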

2.4 Solution of the balance equations

The solution of the balance equations (12) with $j$-independent coefficient matrices is given by the spectral expansion method as

$$v_j = \sum_{k=1}^{N} a_k\psi_k\lambda_k^{j-c} \qquad (j = c, c+1, \ldots) \qquad (17)$$

where the $\lambda_k$ ($k = 1, 2, \ldots, N$) are the $N$ eigenvalues strictly within the unit circle and the $\psi_k$ are the corresponding left-eigenvectors of the matrix polynomial equation

$$\psi(Q_0 + Q_1\lambda + Q_2\lambda^2) = 0. \qquad (18)$$

The $a_k$ are certain arbitrary constants so chosen that all the balance equations are satisfied. The theoretical basis of spectral expansion, efficient computation of the eigenvalue-eigenvector pairs and of the arbitrary constants $a_k$ ($k = 1, 2, \ldots, N$), and the consequent determination of the invariant vectors $v_j$ ($j = 0, 1, \ldots, c-1$) using the remaining balance equations may be

found in [Mit95] and in greater detail in [Cha95, Cha98]. A good introduction to the subject is given in [Mit98]. Thus, by (1), $v_j$ ($j \geq c$) can be expressed as a linear sum of $N$ known vectors with $N$ unknown coefficients. Then, using equations (2, 3) along with equation (17), $v_{c-1}$ and $v_{c-2}$ can also be expressed as linear sums of known vectors with the unknown coefficients $a_k$:

$$\begin{aligned}
v_{c-1} &= -v_c\left[Q - \hat{\Theta}B - C(I + \Theta)\right]\left[B - (Q - \hat{\Theta}B - C_{c-1})\Theta\right]^{-1}\\
&\quad - c\mu\sum_{s=1}^{\infty} v_{c+s}\left[(1-\phi)\phi^{s-1}I - \phi^{s}\Theta\right]\left[B - (Q - \hat{\Theta}B - C_{c-1})\Theta\right]^{-1}\\
&= -v_c\left[Q - \hat{\Theta}B - C(I + \Theta)\right]\left[B - (Q - \hat{\Theta}B - C_{c-1})\Theta\right]^{-1}\\
&\quad - c\mu\sum_{s=1}^{\infty}\sum_{k=1}^{N} a_k\psi_k\lambda_k^{s}\phi^{s-1}\left[(1-\phi)I - \phi\Theta\right]\left[B - (Q - \hat{\Theta}B - C_{c-1})\Theta\right]^{-1}\\
&= -v_c\left[Q - \hat{\Theta}B - C(I + \Theta)\right]\left[B - (Q - \hat{\Theta}B - C_{c-1})\Theta\right]^{-1}\\
&\quad - c\mu\sum_{k=1}^{N} a_k\psi_k\,\frac{\lambda_k}{1 - \phi\lambda_k}\left[(1-\phi)I - \phi\Theta\right]\left[B - (Q - \hat{\Theta}B - C_{c-1})\Theta\right]^{-1} \qquad (19)
\end{aligned}$$

$$\begin{aligned}
v_{c-2} &= -v_{c-1}\left[Q - \hat{\Theta}B - (c-1)\mu(I + \Theta)\right]\left[B - (Q - \hat{\Theta}B - C_{c-2})\Theta\right]^{-1}\\
&\quad - c\mu\sum_{s=1}^{\infty} v_{c-1+s}\phi^{s-1}\left[B - (Q - \hat{\Theta}B - C_{c-2})\Theta\right]^{-1}\\
&= -v_{c-1}\left[Q - \hat{\Theta}B - (c-1)\mu(I + \Theta)\right]\left[B - (Q - \hat{\Theta}B - C_{c-2})\Theta\right]^{-1}\\
&\quad - c\mu\sum_{k=1}^{N} a_k\psi_k\,\frac{1}{1 - \phi\lambda_k}\left[B - (Q - \hat{\Theta}B - C_{c-2})\Theta\right]^{-1} \qquad (20)
\end{aligned}$$

Now, using the equations (4) for $j = c-2, c-3, \ldots, 1$, the vectors $v_j$ ($j = c-3, c-4, \ldots, 0$) can also be expressed as a linear sum of known vectors with the unknown coefficients $a_k$:

$$v_{j-1} = -v_j\left[Q - \hat{\Theta}B - j\mu(I + \Theta)\right]\left[B - (Q - \hat{\Theta}B - C_{j-1})\Theta\right]^{-1} - (j+1)\mu\, v_{j+1}\left[B - (Q - \hat{\Theta}B - C_{j-1})\Theta\right]^{-1} \qquad (j = c-2, c-3, \ldots, 1) \qquad (21)$$

By now, all the $v_j$'s have been expressed as linear sums of known vectors with the unknown coefficients $a_k$. We have two more equations, i.e. equation (15) for $j = 0$ and the equation

$$\sum_{j=0}^{\infty} v_je_N = 1. \qquad (22)$$

The above has a simple form since $\sum_{j=c}^{\infty} v_je_N = \sum_{k=1}^{N}\frac{a_k}{1-\lambda_k}(\psi_k\cdot e_N)$. Equation (15) for $j = 0$ comprises $N$ linear scalar equations of which $N-1$ are independent. Adding to them the equation (22), we have $N$ independent linear simultaneous equations in $N$ scalar unknowns. Hence, they can be solved.
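The eigenvalue-eigenvector pairs of the quadratic matrix polynomial in equation (18) are conveniently obtained by linearisation, i.e. by converting the quadratic problem into an ordinary (generalised) eigenvalue problem of twice the size and keeping the $N$ eigenvalues inside the unit circle. The sketch below shows one way to do this with standard numerical routines; it is an illustrative implementation choice on my part, not the procedure of [Mit95, Cha98].

```python
import numpy as np
import scipy.linalg

def spectral_pairs(Q0, Q1, Q2):
    """Left eigen-pairs (lambda_k, psi_k) of psi (Q0 + Q1*l + Q2*l^2) = 0
    with |lambda_k| < 1, via a companion linearisation of the transposed
    (right) problem (Q0^T + l Q1^T + l^2 Q2^T) x = 0."""
    N = Q0.shape[0]
    A = np.block([[np.zeros((N, N)), np.eye(N)],
                  [-Q0.T,            -Q1.T     ]])
    Bm = np.block([[np.eye(N),        np.zeros((N, N))],
                   [np.zeros((N, N)), Q2.T            ]])
    eigvals, eigvecs = scipy.linalg.eig(A, Bm)   # generalised eigenproblem A v = l B v
    pairs = []
    for lam, vec in zip(eigvals, eigvecs.T):
        if np.isfinite(lam) and abs(lam) < 1.0 - 1e-10:
            psi = vec[:N]                        # first block of v is x, i.e. psi^T
            pairs.append((lam, psi / np.linalg.norm(psi)))
    return pairs                                 # expect N pairs for a stable queue
```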

An alternative and efficient method for the solution of equations (1) is that of Krieger-Nauomov-Wagner [Kri95]. This method is based on the matrix-geometric solution [Neu94] for the steady state probabilities. Although the traditional matrix-geometric iteration tends to be computationally expensive [Mit95], the method of Krieger-Nauomov-Wagner cuts its execution time substantially, making its efficiency comparable with spectral expansion.

3 Analysis of departures

When a departure or batch of departures from one queue joins another queue, it becomes an arrival at that queue. In this way a network of queues is formed, which can be analysed in terms of the internal arrival process to each constituent queue together with the server's characteristics, which are assumed to be given. The combined characteristics of all departure processes therefore determine the arrival processes and hence, recursively, the departure processes themselves. The resulting fixed point problem provides a powerful approach to queueing network analysis. Here, however, we just determine the departure process of the MMCPP/GE/c queue in terms of the probability distribution of its batch sizes and the Laplace transform of its batch inter-departure time probability density function.

3.1 Departure burst size distribution

From section 2.3, we have the solution for the steady state probabilities $p_{i,j}$. The marginal probabilities $p_{i.}$ and $p_{.j}$ are then defined by

$$p_{i.} = \sum_{j=0}^{\infty} p_{i,j}\ ; \qquad p_{.j} = \sum_{i=1}^{N} p_{i,j}. \qquad (23)$$

Now consider the system when $j > c$. Here, all the $c$ servers are busy, with $j - c$ unattended customers in the queue. In this state, the departure rate associated with a batch size of $s$ is $(1-\phi)\phi^{s-1}c\mu$ for $1 \leq s \leq j-c$ and $\phi^{j-c}c\mu$ for $s = j-c+1$. Using this, the average rate at which batches of size $n$ ($n > 1$) depart from the queue is

$$\nu_n = \sum_{j=c+n}^{\infty} p_{.j}(1-\phi)\phi^{n-1}c\mu + p_{.\,c+n-1}\phi^{n-1}c\mu = \Biggl[\sum_{j=c+n}^{\infty} p_{.j}(1-\phi) + p_{.\,c+n-1}\Biggr]c\mu\phi^{n-1} \qquad (n = 2, 3, \ldots) \qquad (24)$$

The average rate of single departures, $\nu_1$, is

$$\nu_1 = \sum_{j=1}^{c} p_{.j}\,j\mu + \sum_{j=c+1}^{\infty} p_{.j}(1-\phi)c\mu. \qquad (25)$$

Clearly, $\nu_n \big/ \sum_{s=1}^{\infty}\nu_s$ ($n = 1, 2, \ldots$) determines the burst size probability distribution. Also, the number of batch departures $\nu$ per unit time is given by

$$\nu = \sum_{n=1}^{\infty} \nu_n. \qquad (26)$$

As a numerical check it can be verified that $\nu = \sum_{i=1}^{N} r_i\sigma_i$ and $\sum_{n=1}^{\infty} n\nu_n = \bar{\sigma}$.
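Given a numerical solution for the marginal probabilities $p_{.j}$ (truncated at some large level $J$), equations (24)–(26) are straightforward to evaluate; the helper below is a hypothetical illustration of that computation, not code from the paper.

```python
import numpy as np

def batch_departure_rates(p_dot_j, c, mu, phi, n_max):
    """Average departure rates nu_n of batches of size n = 1..n_max, computed
    from a truncated vector p_dot_j of marginal queue length probabilities
    p_{.j}, j = 0..J (equations (24) and (25)). Requires c + n_max - 1 <= J."""
    J = len(p_dot_j) - 1
    assert c + n_max - 1 <= J, "truncation level too small for requested n_max"
    nu = np.zeros(n_max + 1)                     # nu[0] unused
    # single departures (25): busy servers at levels 1..c, plus GE batches of size 1
    nu[1] = sum(p_dot_j[j] * j * mu for j in range(1, c + 1)) \
          + sum(p_dot_j[j] * (1 - phi) * c * mu for j in range(c + 1, J + 1))
    # batch departures of size n >= 2 (24)
    for n in range(2, n_max + 1):
        tail = sum(p_dot_j[j] for j in range(c + n, J + 1))
        nu[n] = (tail * (1 - phi) + p_dot_j[c + n - 1]) * c * mu * phi ** (n - 1)
    return nu[1:]                                # nu_1, ..., nu_{n_max}

# burst size distribution and total batch departure rate (26):
# nu = batch_departure_rates(p, c, mu, phi, 50); dist = nu / nu.sum()
```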

3.2 System at departure epochs

Let $\beta_{i,j}$ be the probability that the state of the system, immediately after a departure epoch, is $(i,j)$. Then $\beta_{i,j}$ is proportional to the probability flux into state $(i,j)$ due to a departure, i.e. $\beta_{i,j} \propto \Phi_{i,j}$ where

$$\begin{aligned}
\Phi_{i,j} &= \sum_{n=1}^{\infty} p_{i,j+n}\,c\mu(1-\phi)\phi^{n-1} && (j \geq c)\\
\Phi_{i,c-1} &= \sum_{n=1}^{\infty} p_{i,c+n-1}\,c\mu\phi^{n-1} &&\\
\Phi_{i,j} &= p_{i,j+1}(j+1)\mu && (0 \leq j < c-1)
\end{aligned}$$

The normalising constant is

$$\sum_{i=1}^{N}\Biggl[\sum_{j=c}^{\infty}\sum_{n=1}^{\infty} p_{i,j+n}\,c\mu(1-\phi)\phi^{n-1} + \sum_{n=1}^{\infty} p_{i,c+n-1}\,c\mu\phi^{n-1} + \sum_{j=0}^{c-2} p_{i,j+1}(j+1)\mu\Biggr]$$

which is easily verified to be equal to $\nu$. Hence $\beta_{i,j} = \Phi_{i,j}/\nu$ for all states $i,j$. Using the expressions given for the $p_{i,j}$ in equation (17), $\beta_{i,j}$ can be computed efficiently and exactly.
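With the spectral expansion in hand, the fluxes $\Phi_{i,j}$ and hence the $\beta_{i,j}$ for $j \leq c-1$ (the only values needed in Section 3.3) can be evaluated in closed form. The sketch below instead takes a numerically truncated array of probabilities $p_{i,j}$ as input; that simplification, and the helper itself, are assumptions of this illustration rather than part of the paper.

```python
import numpy as np

def beta_after_departure(p, c, mu, phi):
    """beta_{i,j} for j = 0..c-1: state distribution immediately after a
    departure epoch, from a truncated array p[i, j] of steady state
    probabilities (i = 0..N-1, j = 0..J)."""
    N, Jp1 = p.shape
    Phi = np.zeros((N, c))
    for i in range(N):
        for j in range(0, c - 1):                         # single departures
            Phi[i, j] = p[i, j + 1] * (j + 1) * mu
        # boundary flux into level c-1: batches emptying the waiting line
        Phi[i, c - 1] = sum(p[i, c + n - 1] * c * mu * phi ** (n - 1)
                            for n in range(1, Jp1 - c + 1))
    # the total departure rate nu also includes fluxes into levels j >= c
    nu = Phi.sum()
    for i in range(N):
        for j in range(c, Jp1):
            nu += sum(p[i, j + n] * c * mu * (1 - phi) * phi ** (n - 1)
                      for n in range(1, Jp1 - j))
    return Phi / nu
```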

3.3 Inter-batch departure intervals

Consider the system in steady state and let $t_1$ be the time immediately after a batch departure. Let the next departure, strictly after $t_1$, be at time $t_2$ ($> t_1$) and define $\tau = t_2 - t_1$. Let $\Delta(s)$ be the Laplace transform of the density of the inter-departure interval $\tau$. In order to derive an expression for $\Delta(s)$, let the following Laplace transforms be defined: $\Gamma_{i,j}(s)$ – the Laplace transform of the density of the random variable $\tau$, if the state of the system at $t_1$ is given by $I(t_1) = i$ and $J(t_1) = j$. If $j \geq c$, then $\Gamma_{i,j}(s)$ is independent of $i$ and $j$, hence let it be known as $\Gamma(s)$. Then,

$$\Gamma(s) = \frac{c\mu}{c\mu + s}. \qquad (27)$$

If $0 < j < c$, the next event can be either (i) a single departure making $J(t) = j-1$, or (ii) a new batch arrival making $J(t) < c$, or (iii) a new batch arrival making $J(t) \geq c$, or (iv) a phase change of the arrival process. The transition rates of these events are $j\mu$, $\sum_{n=1}^{c-j-1}(1-\theta_i)\theta_i^{n-1}\sigma_i$, $\bigl(1 - \sum_{n=1}^{c-j-1}(1-\theta_i)\theta_i^{n-1}\bigr)\sigma_i$ and $q_i$ respectively. Hence, we can write

$$\Gamma_{i,j}(s) = \frac{j\mu\,\Gamma_{i,j-1}(s)}{j\mu + \sigma_i + q_i + s} + \frac{\sum_{n=1}^{c-j-1}(1-\theta_i)\theta_i^{n-1}\sigma_i\,\Gamma_{i,j+n}(s)}{j\mu + \sigma_i + q_i + s} + \frac{\bigl(1 - \sum_{n=1}^{c-j-1}(1-\theta_i)\theta_i^{n-1}\bigr)\sigma_i\,\Gamma(s)}{j\mu + \sigma_i + q_i + s} + \frac{\sum_{l=1}^{N} q_{i,l}\,\Gamma_{l,j}(s)}{j\mu + \sigma_i + q_i + s} \qquad (1 \leq i \leq N;\ 1 \leq j \leq c-1). \qquad (28)$$

If $j = 0$ at $t_1$, then the next event can be either (i) a batch arrival making $J(t) < c$, or (ii) a batch arrival making $J(t) \geq c$, or (iii) a phase change. Hence, $\Gamma_{i,0}(s)$ is given by

$$\Gamma_{i,0}(s) = \frac{\sum_{n=1}^{c-1}(1-\theta_i)\theta_i^{n-1}\sigma_i\,\Gamma_{i,n}(s)}{\sigma_i + q_i + s} + \frac{\bigl(1 - \sum_{n=1}^{c-1}(1-\theta_i)\theta_i^{n-1}\bigr)\sigma_i\,\Gamma(s)}{\sigma_i + q_i + s} + \frac{\sum_{l=1}^{N} q_{i,l}\,\Gamma_{l,0}(s)}{\sigma_i + q_i + s} \qquad (1 \leq i \leq N). \qquad (29)$$

The Laplace transform $\Delta(s)$ can now be written as

$$\Delta(s) = \sum_{i=1}^{N}\sum_{j=0}^{c-1} \beta_{i,j}\,\Gamma_{i,j}(s) + \Bigl(1 - \sum_{j=0}^{c-1}\beta_{.j}\Bigr)\Gamma(s) \qquad (30)$$

In principle, we could now compute $\Delta(s)$ at all points $s$ that may be required by solving the linear simultaneous equations given by equations (28) and (29). Such a computation would be required, for example, if the probability density of the inter-departure time were needed and the Laplace transform had to be inverted numerically. However, this would be very expensive, perhaps prohibitively so, and here we concentrate instead on the moments of the inter-departure time distribution.

Let $\Gamma_{i,j}^{(h)}(s)$ be the $h$th derivative of $\Gamma_{i,j}(s)$ and $\Gamma^{(h)}(s)$ be that of $\Gamma(s)$. Then, by differentiating (30) $h$ times, we get

$$\Delta^{(h)}(s) = \sum_{i=1}^{N}\sum_{j=0}^{c-1} \beta_{i,j}\,\Gamma_{i,j}^{(h)}(s) + \Bigl(1 - \sum_{j=0}^{c-1}\beta_{.j}\Bigr)\Gamma^{(h)}(s). \qquad (31)$$

If $d_h$ is the $h$th moment of $\tau$, then we have $d_h = (-1)^h\,\Delta^{(h)}(s)\big|_{s=0}$.

In order to compute $d_h$ we need the values of $\Gamma_{i,j}^{(h)}(s)$ at $s = 0$. Differentiating equations (27, 28, 29) $h$ times successively with respect to $s$, at $s = 0$, we get the following equations:

$$\Gamma^{(h)}(0) = \frac{(-1)^h\,h!}{c^h\mu^h}. \qquad (32)$$

$$\Gamma_{i,j}^{(h)}(0) = \frac{j\mu\,\Gamma_{i,j-1}^{(h)}(0) + \sum_{n=1}^{c-j-1}(1-\theta_i)\theta_i^{n-1}\sigma_i\,\Gamma_{i,j+n}^{(h)}(0)}{j\mu + \sigma_i + q_i} + \frac{\bigl(1 - \sum_{n=1}^{c-j-1}(1-\theta_i)\theta_i^{n-1}\bigr)\sigma_i\,\Gamma^{(h)}(0) + \sum_{l=1}^{N} q_{i,l}\,\Gamma_{l,j}^{(h)}(0)}{j\mu + \sigma_i + q_i} - \frac{h\,\Gamma_{i,j}^{(h-1)}(0)}{j\mu + \sigma_i + q_i} \qquad (1 \leq i \leq N;\ 1 \leq j \leq c-1). \qquad (33)$$

$$\Gamma_{i,0}^{(h)}(0) = \frac{\sum_{n=1}^{c-1}(1-\theta_i)\theta_i^{n-1}\sigma_i\,\Gamma_{i,n}^{(h)}(0) + \bigl(1 - \sum_{n=1}^{c-1}(1-\theta_i)\theta_i^{n-1}\bigr)\sigma_i\,\Gamma^{(h)}(0) + \sum_{l=1}^{N} q_{i,l}\,\Gamma_{l,0}^{(h)}(0) - h\,\Gamma_{i,0}^{(h-1)}(0)}{\sigma_i + q_i} \qquad (1 \leq i \leq N). \qquad (34)$$

The $\Gamma^{(h)}(0)$ are known from (32). If the $\Gamma_{i,j}^{(h-1)}(0)$ are known, then equations (33, 34) are $c \cdot N$ linear simultaneous equations in the $c \cdot N$ unknowns $\Gamma_{i,j}^{(h)}(0)$ ($i = 1, 2, \ldots, N$; $j = 0, 1, \ldots, c-1$), for any $h$. We do have $\Gamma_{i,j}^{(0)}(0) = 1$ for all $(i,j)$. Hence, to find $d_k$, these sets of linear simultaneous equations need to be solved successively, starting from $h = 1, 2, \ldots, k$. Thus, the $d_h$ can be computed exactly.
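The recursion in $h$ translates directly into repeated solutions of a small linear system. The following sketch assumes that the $\beta_{i,j}$ ($j < c$) and the tail mass $1 - \sum_j\beta_{.j}$ have already been computed (e.g. as in Section 3.2) and returns the first few moments $d_h$; it is my own illustrative transcription of equations (31)–(34), not code from the paper.

```python
import numpy as np
from math import factorial

def interdeparture_moments(beta, beta_tail, Qgen, sigma, theta, mu, c, k):
    """Moments d_1..d_k of the inter-batch departure time, obtained by solving
    the recursive linear systems (33)-(34) for the derivatives Gamma^{(h)}(0).
    beta[i, j] (j = 0..c-1) and beta_tail = 1 - sum_j beta_{.j} are given."""
    N = len(sigma)
    q = -np.diag(Qgen)                              # q_i = total phase-change rate
    idx = lambda i, j: i * c + j                    # flatten (i, j), j = 0..c-1

    # Coefficient matrix of the unknowns Gamma^{(h)}_{i,j}(0); independent of h.
    A = np.zeros((N * c, N * c))
    for i in range(N):
        for j in range(c):
            row = idx(i, j)
            A[row, row] = j * mu + sigma[i] + q[i]  # the j*mu term vanishes for j = 0
            if j >= 1:
                A[row, idx(i, j - 1)] -= j * mu     # single departure to level j-1
            for n in range(1, c - j):               # batch arrivals keeping J < c
                A[row, idx(i, j + n)] -= (1 - theta[i]) * theta[i] ** (n - 1) * sigma[i]
            for l in range(N):
                if l != i:
                    A[row, idx(l, j)] -= Qgen[i, l] # phase change i -> l

    gamma_prev = np.ones(N * c)                     # Gamma^{(0)}_{i,j}(0) = 1
    moments = []
    for h in range(1, k + 1):
        Gamma_h = (-1) ** h * factorial(h) / (c * mu) ** h          # equation (32)
        b = np.zeros(N * c)
        for i in range(N):
            for j in range(c):
                below_c = sum((1 - theta[i]) * theta[i] ** (n - 1) for n in range(1, c - j))
                b[idx(i, j)] = (1 - below_c) * sigma[i] * Gamma_h - h * gamma_prev[idx(i, j)]
        gamma_h = np.linalg.solve(A, b)
        delta_h = float(beta.flatten() @ gamma_h) + beta_tail * Gamma_h   # equation (31)
        moments.append((-1) ** h * delta_h)
        gamma_prev = gamma_h
    return moments
```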

4 Finite capacity system

It is possible to extend the preceding analysis to the case where the queue is limited by a finite capacity, $L$ ($\geq c$), including the customers in service, if any. Here we assume that, when the number of customers is $j$ and the arriving batch size is greater than $L-j$, only $L-j$ customers are taken in and the rest are rejected.

Proposition 2. The balance equations of the MMCPP/GE/c/L queue are of the form

(i) For $1 \leq j \leq c-2$,
$$v_{j-1}Q_{0,j} + v_jQ_{1,j} + v_{j+1}Q_{2,j} = 0$$

(ii) For $c+1 \leq j \leq L-2$,
$$v_{j-1}Q_0 + v_jQ_1 + v_{j+1}Q_2 = 0$$

(iii)
$$v_{c-1}\left[B - (Q - \hat{\Theta}B - (c-1)\mu I)\Theta\right] + v_c\left[Q - \hat{\Theta}B - C - c\mu\Theta\right] + c\mu\sum_{s=1}^{L-c} v_{c+s}\left[(1-\phi)\phi^{s-1}I - \phi^{s}\Theta\right] = 0$$

(iv)
$$v_{c-2}\left[B - (Q - \hat{\Theta}B - (c-2)\mu I)\Theta\right] + v_{c-1}\left[Q - \hat{\Theta}B - (c-1)\mu(I + \Theta)\right] + c\mu\sum_{s=1}^{L-c+1} v_{c-1+s}\phi^{s-1} = 0$$

(v)
$$v_0\left[Q - \hat{\Theta}B\right] + v_1\mu = 0 \qquad (\text{for } c \geq 2)$$
$$v_0\left[Q - \hat{\Theta}B\right] + v_1\mu + \sum_{s=2}^{L} v_s\mu\phi^{s-1} = 0 \qquad (\text{for } c = 1)$$

(vi)
$$v_{L-2}\left[B - (Q - \hat{\Theta}B - C_{L-2})\Theta\right] + v_{L-1}\left[Q - \hat{\Theta}B - C_{L-1} - C_{L-1,1}\Theta\right] + v_L\left[C_{L,1}I - C_{L,2}\Theta\right] = 0$$

(vii)
$$\sum_{s=1}^{L} v_{L-s}U_s + v_L(Q - C) = 0$$

(viii)
$$\sum_{j=0}^{L} v_je_N = 1$$

where

$$\begin{aligned}
U_s &= \hat{\Theta}B_s\\
Q_{0,j} &= B - (Q - \hat{\Theta}B - (j-1)\mu I)\Theta\\
Q_{1,j} &= Q - \hat{\Theta}B - j\mu(I + \Theta)\\
Q_{2,j} &= (j+1)\mu\\
Q_0 &= B - (Q - \hat{\Theta}B - C)\Theta\\
Q_1 &= Q(I + \phi\Theta) - (\phi I + \hat{\Theta} + \phi\Theta\hat{\Theta})B - (I + \Theta)C\\
Q_2 &= -\phi Q + \phi\hat{\Theta}B + C
\end{aligned}$$

Proof. Using the same notation as for the unbounded queue, in this case the transition rate from state $(i,j)$ to state $(i,L)$ is not $B_{i,L-j}$ but

$$\sum_{s=L-j}^{\infty} B_{i,s} = \sum_{s=L-j}^{\infty} (1-\theta_i)\theta_i^{s-1}\sigma_i = \theta_i^{L-j-1}\sigma_i.$$

Hence, for the $L$th row ($j = L$), the balance equations are

$$\sum_{s=1}^{L} v_{L-s}U_s + v_L(Q - C) = 0 \qquad (35)$$

where

$$U_s = \mathrm{Diag}\,[\theta_1^{s-1}\sigma_1, \theta_2^{s-1}\sigma_2, \ldots, \theta_N^{s-1}\sigma_N] = \hat{\Theta}B_s.$$

For the $j$th row ($0 \leq j \leq L-1$), the balance equations are

$$\sum_{s=1}^{j} v_{j-s}B_s + v_j\Bigl(Q - \sum_{s=1}^{L-j-1} B_s - \sum_{s=L-j}^{\infty} B_s - C_j\Bigr) + \sum_{s=1}^{L-j} v_{j+s}C_{j+s,s} = 0 \qquad (36)$$

Substituting $B_s = \Theta^{s-1}B$, $\sum_{s=1}^{\infty} B_s = \hat{\Theta}B$ and defining $v_{-j} = 0$ ($j = 1, 2, \ldots$), we have

$$\sum_{s=1}^{j} v_{j-s}\Theta^{s-1}B + v_j(Q - \hat{\Theta}B - C_j) + \sum_{s=1}^{L-j} v_{j+s}C_{j+s,s} = 0 \qquad (0 \leq j \leq L-1) \qquad (37)$$

Substituting $j-1$ for $j$, the balance equations for the $(j-1)$th row ($0 \leq j-1 \leq L-1$), or equivalently ($1 \leq j \leq L$), are

$$\sum_{s=1}^{j-1} v_{j-1-s}\Theta^{s-1}B + v_{j-1}(Q - \hat{\Theta}B - C_{j-1}) + \sum_{s=1}^{L-j+1} v_{j-1+s}C_{j-1+s,s} = 0 \qquad (1 \leq j \leq L) \qquad (38)$$

Post-multiplying (38) by $\Theta$ and subtracting the resulting equation from (37) gives, for $1 \leq j \leq L-1$,

$$v_{j-1}\left[B - (Q - \hat{\Theta}B - C_{j-1})\Theta\right] + v_j\left[Q - \hat{\Theta}B - C_j - C_{j,1}\Theta\right] + \sum_{s=1}^{L-j} v_{j+s}\left[C_{j+s,s}I - C_{j+s,s+1}\Theta\right] = 0 \qquad (1 \leq j \leq L-1) \qquad (39)$$

Substituting $j+1$ for $j$, we get, for $1 \leq j+1 \leq L-1$ or equivalently for $0 \leq j \leq L-2$,

$$v_j\left[B - (Q - \hat{\Theta}B - C_j)\Theta\right] + v_{j+1}\left[Q - \hat{\Theta}B - C_{j+1} - C_{j+1,1}\Theta\right] + \sum_{s=1}^{L-j-1} v_{j+1+s}\left[C_{j+1+s,s}I - C_{j+1+s,s+1}\Theta\right] = 0 \qquad (0 \leq j \leq L-2) \qquad (40)$$

Case (1): $c+1 \leq j \leq L-2$. Consider the $j$-range from $c+1$ to $L-1$. For this range of $j$, we have

$$C_{j+s,s} = (1-\phi)\phi^{s-1}c\mu, \quad C_{j+s,s+1} = (1-\phi)\phi^{s}c\mu, \quad C_{j-1} = C_j = C_{j+1} = C = c\mu I, \quad C_{j,1} = C_{j+1,1} = (1-\phi)c\mu.$$

Substituting these in (39), (40) and then multiplying equation (40) by $\phi$ and subtracting the resulting equation from equation (39), we get

$$v_{j-1}\left[B - (Q - \hat{\Theta}B - C)\Theta\right] + v_j\left[Q(I + \phi\Theta) - (\phi I + \hat{\Theta} + \phi\Theta\hat{\Theta})B - C(I + \Theta)\right] + v_{j+1}\left[-\phi Q + \phi\hat{\Theta}B + C\right] = 0 \qquad (c+1 \leq j \leq L-2) \qquad (41)$$

Case (2): $j = c$. Here we have

$$C_{c+s,s} = (1-\phi)\phi^{s-1}c\mu, \quad C_{c+s,s+1} = \phi^{s}c\mu, \quad C_c = C_{c+1} = C = c\mu I, \quad C_{c-1} = (c-1)\mu I,$$
$$C_{c,1} = c\mu, \quad C_{c+1,1} = (1-\phi)c\mu, \quad C_{c+1+s,s} = (1-\phi)\phi^{s-1}c\mu, \quad C_{c+1+s,s+1} = (1-\phi)\phi^{s}c\mu.$$

Substituting the above in equation (39) for $j = c$, we get

$$v_{c-1}\left[B - (Q - \hat{\Theta}B - (c-1)\mu I)\Theta\right] + v_c\left[Q - \hat{\Theta}B - C(I + \Theta)\right] + c\mu\sum_{s=1}^{L-c} v_{c+s}\left[(1-\phi)\phi^{s-1}I - \phi^{s}\Theta\right] = 0 \qquad (42)$$

Case (3): $j = c-1$. Here, we have

$$C_{c-2} = (c-2)\mu I, \quad C_{c-1} = (c-1)\mu I, \quad C_{c-1,1} = (c-1)\mu, \quad C_{c-1+s,s} = \phi^{s-1}c\mu, \quad C_{c-1+s,s+1} = 0.$$

Substituting the above in equation (39) for $j = c-1$, we get

$$v_{c-2}\left[B - (Q - \hat{\Theta}B - (c-2)\mu I)\Theta\right] + v_{c-1}\left[Q - \hat{\Theta}B - (c-1)\mu(I + \Theta)\right] + c\mu\sum_{s=1}^{L-c+1} v_{c-1+s}\phi^{s-1} = 0 \qquad (43)$$

Case (4): $1 \leq j \leq c-2$. In this case,

$$C_{j+s,s} = (j+1)\mu \ \text{ if } s = 1, \ 0 \text{ otherwise}, \qquad C_{j+s,s+1} = 0.$$

Considering (39) for $1 \leq j \leq c-2$ and substituting the above, we get

$$v_{j-1}\left[B - (Q - \hat{\Theta}B - (j-1)\mu I)\Theta\right] + v_j\left[Q - \hat{\Theta}B - j\mu(I + \Theta)\right] + v_{j+1}(j+1)\mu = 0 \qquad (1 \leq j \leq c-2) \qquad (44)$$

Case (5): $j = 0$. Writing the balance equations for the 0th row directly, we get

$$v_0\left[Q - \hat{\Theta}B\right] + v_1\mu = 0 \qquad (\text{for } c \geq 2)$$
$$v_0\left[Q - \hat{\Theta}B\right] + v_1\mu + \sum_{s=2}^{L} v_s\mu\phi^{s-1} = 0 \qquad (\text{for } c = 1) \qquad (45)$$

Case (6): $j = L-1$. Substituting $j = L-1$ in equation (39), we get the balance equations for the $(L-1)$th row as

$$v_{L-2}\left[B - (Q - \hat{\Theta}B - C_{L-2})\Theta\right] + v_{L-1}\left[Q - \hat{\Theta}B - C_{L-1} - C_{L-1,1}\Theta\right] + v_L\left[C_{L,1}I - C_{L,2}\Theta\right] = 0 \qquad (46)$$

Notice that, if $L-2 \geq c$, then equation (46) becomes

$$v_{L-2}\left[B - (Q - \hat{\Theta}B - C)\Theta\right] + v_{L-1}\left[Q - \hat{\Theta}B - C(I + (1-\phi)\Theta)\right] + (1-\phi)c\mu\, v_L\left[I - \phi\Theta\right] = 0 \qquad (47)$$

4.1 Solution of the balance equations

Of the set of equations in Proposition 2, the ones with $j$-independent coefficient matrices, i.e. equations (41), have a simple solution for $v_j$ ($j = c, c+1, \ldots, L-1$) by the spectral expansion method [Cha98] or by the Krieger-Nauomov-Wagner method [Kri95, Nau97]. Using that solution, the entire set of equations can be solved quite easily as follows. By the spectral expansion method, $v_j$ ($j = c, c+1, \ldots, L-1$) can be expressed as a linear sum of $2N$ known vectors with $2N$ unknown coefficients:

$$v_j = \sum_{k=1}^{N} a_k\psi_k\lambda_k^{j-c} + \sum_{k=1}^{N} b_k\eta_k\xi_k^{L-1-j} \qquad (c \leq j \leq L-1) \qquad (48)$$

where the $(\lambda_k, \psi_k)$'s are the $N$ eigenvalue-eigenvector pairs of the polynomial $(Q_0 + Q_1\lambda + Q_2\lambda^2)$ corresponding to its $N$ eigenvalues of least absolute value, and the $(\xi_k, \eta_k)$'s are those of the polynomial $(Q_2 + Q_1\xi + Q_0\xi^2)$. The unknown coefficients $a_k, b_k$ are either real or in the form of complex conjugates. Once $v_j$ ($j = L-2, L-1$) are expressed in terms of the $a_k, b_k$'s, $v_L$ can also be expressed in terms of the $a_k, b_k$'s using equation (46) as

$$v_L = -v_{L-1}\left[Q - \hat{\Theta}B - C_{L-1} - C_{L-1,1}\Theta\right]\left[C_{L,1}I - C_{L,2}\Theta\right]^{-1} - v_{L-2}\left[B - (Q - \hat{\Theta}B - C_{L-2})\Theta\right]\left[C_{L,1}I - C_{L,2}\Theta\right]^{-1} \qquad (49)$$

Now $v_{c-1}$ and $v_{c-2}$ can be expressed in terms of the $a_k, b_k$'s using equations (42, 43) respectively, in conjunction with equation (48):

$$\begin{aligned}
v_{c-1} &= -v_c\left[Q - \hat{\Theta}B - C(I + \Theta)\right]\left[B - (Q - \hat{\Theta}B - C_{c-1})\Theta\right]^{-1}\\
&\quad - c\mu\sum_{s=1}^{L-c} v_{c+s}\left[(1-\phi)\phi^{s-1}I - \phi^{s}\Theta\right]\left[B - (Q - \hat{\Theta}B - C_{c-1})\Theta\right]^{-1}\\
&= -v_c\left[Q - \hat{\Theta}B - C(I + \Theta)\right]\left[B - (Q - \hat{\Theta}B - C_{c-1})\Theta\right]^{-1}\\
&\quad - c\mu\sum_{k=1}^{N}\left[a_k\psi_k\,\frac{\lambda_k - \lambda_k^{L-c}\phi^{L-c-1}}{1 - \phi\lambda_k} + b_k\eta_k\,\frac{\xi_k^{L-c-2} - \xi_k^{-1}\phi^{L-c-1}}{1 - \phi/\xi_k}\right]\left[(1-\phi)I - \phi\Theta\right]\left[B - (Q - \hat{\Theta}B - C_{c-1})\Theta\right]^{-1}\\
&\quad - c\mu\, v_L\,\phi^{L-c-1}\left[(1-\phi)I - \phi\Theta\right]\left[B - (Q - \hat{\Theta}B - C_{c-1})\Theta\right]^{-1} \qquad (50)
\end{aligned}$$

$$\begin{aligned}
v_{c-2} &= -v_{c-1}\left[Q - \hat{\Theta}B - (c-1)\mu(I + \Theta)\right]\left[B - (Q - \hat{\Theta}B - C_{c-2})\Theta\right]^{-1}\\
&\quad - c\mu\sum_{s=1}^{L-c+1} v_{c-1+s}\phi^{s-1}\left[B - (Q - \hat{\Theta}B - C_{c-2})\Theta\right]^{-1}\\
&= -v_{c-1}\left[Q - \hat{\Theta}B - (c-1)\mu(I + \Theta)\right]\left[B - (Q - \hat{\Theta}B - C_{c-2})\Theta\right]^{-1}\\
&\quad - c\mu\sum_{k=1}^{N}\left[a_k\psi_k\,\frac{1 - \lambda_k^{L-c}\phi^{L-c}}{1 - \phi\lambda_k} + b_k\eta_k\,\frac{\xi_k^{L-1-c} - \xi_k^{-1}\phi^{L-c}}{1 - \phi/\xi_k}\right]\left[B - (Q - \hat{\Theta}B - C_{c-2})\Theta\right]^{-1}\\
&\quad - c\mu\, v_L\,\phi^{L-c}\left[B - (Q - \hat{\Theta}B - C_{c-2})\Theta\right]^{-1} \qquad (51)
\end{aligned}$$

Now, using the equations (44) successively for $j = c-2, c-3, \ldots, 1$, the vectors $v_j$ ($j = c-3, c-4, \ldots, 0$) can be expressed in terms of the $a_k, b_k$'s using

$$v_{j-1} = -v_j\left[Q - \hat{\Theta}B - j\mu(I + \Theta)\right]\left[B - (Q - \hat{\Theta}B - C_{j-1})\Theta\right]^{-1} - (j+1)\mu\, v_{j+1}\left[B - (Q - \hat{\Theta}B - C_{j-1})\Theta\right]^{-1} \qquad (j = c-2, c-3, \ldots, 1) \qquad (52)$$

By now, all the $v_j$'s are expressed as linear sums of known vectors with $2N$ unknown coefficients $a_k, b_k$ ($1 \leq k \leq N$). We still have equations (35, 45) and the equation $\sum_{j=0}^{L} v_je_N = 1$. These are $2N+1$ linear equations in $2N$ unknowns. Of these equations, only $2N$ are independent (including the summation equation) [Cha98]. Hence, the $a_k, b_k$'s can be solved for. The eigenvalues and the unknowns are either real or occur as pairs of complex conjugates; exploiting this property, an efficient computational procedure is described in [Cha98]. The Krieger-Nauomov-Wagner method [Nau97] is an alternative to the spectral expansion method and is equally efficient.

5 Conclusion

We have derived an exact result for the equilibrium state probabilities of a multi-server queue with unbounded or bounded capacity, with generalised exponential service times and Markov modulated compound Poisson arrivals. This represents a considerable generalisation of previous work on queues both with MMPP arrivals [Cha96a, Mei89] and with generalised exponential service times, which have been used to model “burstiness”; see for example [Kou88, Kou96]. We believe the analysis can be adapted to the case where the GE-distributed service times are defined such that, at queue lengths $j > c$, the probability that a departing batch has size $s$ is proportional to $(1-\phi)\phi^{s-1}$ ($1 \leq s \leq j-c+1$). A corresponding result would follow for similarly normalised arrival batch size probabilities in the case of the finite capacity model.

Perhaps most importantly, by considering the departure process of the queue, we have the basis of a building block for analysing networks of such queues in terms of the internal arrival processes at each constituent queue. Essentially, the departure process from each queue may be approximated by an MMCPP using the approach in [Ryd96], whereupon, using results on the superposition and splitting of MMCPPs, a system of fixed point equations will define the equilibrium state probabilities for the whole network. These can be solved iteratively as in [Cha96]. A much simpler analysis of this kind was used in [Kou96]. This approach allows queues to be considered in isolation, so that more complex queueing disciplines and routing strategies can be analysed. In particular, networks with blocking could be investigated by generalising the approach of [Har94]. This was computationally rather expensive because of its need to solve directly the Markov chain associated with each switch in a Banyan network, a problem which would be obviated by the proposed methodology.

References

[Cha95] R. Chakka, Performance and Reliability Modelling of Computing Systems Using Spectral Expansion, Ph.D. Thesis, University of Newcastle upon Tyne (Newcastle upon Tyne, 1995). Available from the PDCS project, Department of Computing Science, as Technical Report No. 179.

[Cha96] R. Chakka and I. Mitrani, Approximate solutions for open networks with breakdowns and repairs, in Stochastic Networks: Theory and Applications, F.P. Kelly et al., eds., Royal Statistical Society Lecture Note Series 4, Oxford University Press, Oxford (1996) 267-280.

[Cha96a] R. Chakka and P.G. Harrison, Analysis of MMPP/M/c/L queues, Proceedings of the Twelfth UK Computer and Telecommunications Performance Engineering Workshop, Edinburgh (1996) 117-128.

[Cha98] R. Chakka, Spectral expansion solution for some finite capacity queues, Annals of Operations Research, 79 (1998) 27-44.

[Har94] P.G. Harrison and A. de C. Pinto, An approximate analysis of asynchronous, packet-switched Banyan networks with blocking, Performance Evaluation, 19 (1994) 223-258.

[Kou88] D.D. Kouvatsos, A maximum entropy analysis of the G/G/1 queue at equilibrium, Journal of the Operational Research Society, 39, 2 (1988) 183-200.

[Kou90] D. Kouvatsos and N. Tabet-Aouel, Product-form approximations for an extended class of general closed queueing networks, Proceedings of Performance '90, North-Holland (1990) 301-315.

[Kou94] D. Kouvatsos, Entropy maximisation and queueing network models, Annals of Operations Research, 48 (1994) 63-126.

[Kou96] D. Kouvatsos, J. Wilkinson, P.G. Harrison and M.K. Bhabuta, Performance analysis of buffered Banyan ATM switch architectures, in Performance Modelling and Evaluation of ATM Networks, Vol. II, D. Kouvatsos, ed., Chapman-Hall (1996).

[Mei89] K.S. Meier-Hellstern, The analysis of a queue arising in overflow models, IEEE Transactions on Communications, 37 (1989) 367-372.

[Kri95] U. Krieger, V. Nauomov and D. Wagner, Analysis of a finite capacity multi-server delay-loss system with a general Markovian arrival process, in Matrix-Analytic Methods in Stochastic Models, A.S. Alfa and S. Chakravarthy, eds., Marcel Dekker, New York (1995).

[Man68] B.B. Mandelbrot and J.W. Van Ness, Fractional Brownian motions, fractional noises and applications, SIAM Review, 10, 4 (1968) 422-437.

[Mit98] I. Mitrani, Probabilistic Modelling, Cambridge University Press (1998).

[Mit95] I. Mitrani and R. Chakka, Spectral expansion solution for a class of Markov models: Application and comparison with the matrix-geometric method, Performance Evaluation, 23 (1995) 241-260.

[Nau97] V. Nauomov, U. Krieger and D. Wagner, Analysis of a multi-server delay-loss system with a general Markovian arrival process, in Matrix-Analytic Methods in Stochastic Models (1997).

[Neu94] M. Neuts, Matrix-Geometric Solutions in Stochastic Models: An Algorithmic Approach, Dover (1994).

[Ryd96] T. Ryden, An EM algorithm for estimation in Markov-modulated Poisson processes, Computational Statistics & Data Analysis, 21 (1996) 431-447.
