An Introduction to Stochastic Modeling

Third Edition
Howard M. Taylor, Statistical Consultant, Onancock, Virginia
Samuel Karlin Department of Mathematics Stanford University Stanford, California
Academic Press
San Diego  London  New York  Boston  Sydney  Tokyo  Toronto
This book is printed on acid-free paper.
Copyright © 1998, 1994, 1984 by Academic Press. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Permissions may be sought directly from Elsevier's Science and Technology Rights Department in Oxford, UK. Phone: (44) 1865 843830, Fax: (44) 1865 853333, e-mail: [email protected]. You may also complete your request on-line via the Elsevier homepage: http://www.elsevier.com, by selecting "Customer Support" and then "Obtaining Permissions".
ACADEMIC PRESS
An Imprint of Elsevier 525 B St., Suite 1900, San Diego, California 92101-4495, USA 1300 Boylston Street, Chestnut Hill, MA 02167, USA http://www.apnet.com
Academic Press Limited, 24-28 Oval Road, London NW1 7DX, UK http://www.hbuk.co.uk/ap/
Library of Congress Cataloging-in-Publication Data

To obtain the mean of geometric Brownian motion Z(t) = ze^((α−½σ²)t+σB(t)), we use the fact that ξ = B(t)/√t is normally distributed with mean zero and variance one, whence

E[e^(σ√t ξ)] = (1/√(2π)) ∫ e^(σ√t u) e^(−u²/2) du = e^(½σ²t),

and therefore

E[Z(t)|Z(0) = z] = zE[e^((α−½σ²)t+σB(t))]
                 = ze^((α−½σ²)t) E[e^(σ√t ξ)]        (ξ = B(t)/√t)
                 = ze^((α−½σ²)t) e^(½σ²t) = ze^(αt).        (4.17)
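The key step behind (4.17) is E[e^(σ√t ξ)] = e^(½σ²t) for standard normal ξ. A small numerical check (Python; the function names are ours, not the text's) evaluates this moment by quadrature and recovers E[Z(t)|Z(0) = z] = ze^(αt):

```python
import math

def normal_exp_moment(s, n=20000, lim=12.0):
    # E[exp(s*xi)] for xi ~ N(0,1), by trapezoidal quadrature of
    # exp(s*u)*phi(u) over [-lim, lim]; should equal exp(s*s/2)
    h = 2.0*lim/n
    total = 0.0
    for i in range(n + 1):
        u = -lim + i*h
        w = 0.5 if i in (0, n) else 1.0
        total += w*math.exp(s*u)*math.exp(-0.5*u*u)/math.sqrt(2.0*math.pi)
    return total*h

def gbm_mean(z, alpha, sigma, t):
    # E[Z(t) | Z(0) = z] from the representation
    # Z(t) = z*exp((alpha - sigma^2/2)*t + sigma*B(t))
    return z*math.exp((alpha - 0.5*sigma**2)*t)*normal_exp_moment(sigma*math.sqrt(t))
```

The quadrature reproduces e^(½σ²t) to high accuracy, confirming that the mean grows like ze^(αt).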
Equation (4.17) has interesting economic implications in the case where α is positive but small relative to the variance parameter σ². On the one hand, if α is positive, then the mean E[Z(t)] = z exp(αt) → ∞ as t → ∞. On the other hand, if α is positive but α < ½σ², then α − ½σ² < 0, and X(t) = (α − ½σ²)t + σB(t) is drifting in the negative direction. As a consequence of the law of large numbers, it can be shown that X(t) → −∞ as t → ∞ under these circumstances, so that Z(t) = z exp[X(t)] → exp(−∞) = 0. The geometric Brownian motion process is drifting ever closer to zero, while simultaneously, its mean or expected value is continually increasing! Here is yet another stochastic model in which the mean value function is entirely misleading as a sole description of the process.

The variance of the geometric Brownian motion is derived in much the same manner as the mean. First,

E[Z(t)²|Z(0) = z] = z²E[e^(2X(t))]
                  = z²E[e^(2(α−½σ²)t+2σB(t))]
                  = z²e^((2α+σ²)t)        (as in (4.17), with σ replaced by 2σ),
VIII    Brownian Motion and Related Processes
and then

Var[Z(t)] = E[Z(t)²] − {E[Z(t)]}² = z²e^((2α+σ²)t) − z²e^(2αt)
          = z²e^(2αt)(e^(σ²t) − 1).        (4.18)
Because of their close relation as expressed in the definition (4.16), many results for Brownian motion can be directly translated into analogous results for geometric Brownian motion. For example, let us translate the gambler's ruin probability in Theorem 4.1. For A < 1 and B > 1, define

T = T(A, B) = min{t ≥ 0; Z(t) = A or Z(t) = B}.

Theorem 4.3  For a geometric Brownian motion with drift parameter α and variance parameter σ², and A < 1 < B,
Pr{Z(T) = B} = (1 − A^(1−2α/σ²)) / (B^(1−2α/σ²) − A^(1−2α/σ²)).        (4.19)
Example  Suppose that the fluctuations in the price of a share of stock in a certain company are well described by a geometric Brownian motion with drift α = 1/10 and variance σ² = 4. A speculator buys a share of this stock at a price of $100 and will sell if ever the price rises to $110 (a profit) or drops to $95 (a loss). What is the probability that the speculator sells at a profit? We apply (4.19) with A = 0.95, B = 1.10, and 1 − 2α/σ² = 1 − 2(0.1)/4 = 0.95. Then

Pr{Sell at profit} = (1 − 0.95^0.95) / (1.10^0.95 − 0.95^0.95) = 0.3342.
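Equation (4.19) and the example's arithmetic can be checked directly; a one-function sketch (Python; the function name is ours):

```python
def gbm_ruin_prob(alpha, sigma2, A, B):
    # Equation (4.19): Pr{Z(T) = B | Z(0) = 1} for A < 1 < B
    c = 1.0 - 2.0*alpha/sigma2
    return (1.0 - A**c)/(B**c - A**c)

# the speculator example: alpha = 0.1, sigma^2 = 4, A = 0.95, B = 1.10
p = gbm_ruin_prob(0.1, 4.0, 0.95, 1.10)
```

With the example's parameters the function returns approximately 0.3342, the probability of selling at a profit.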
Example  The Black-Scholes Option Pricing Formula  A call, or warrant, is an option entitling the holder to buy a block of shares in a given company at a specified price at any time during a stated interval. Thus the call listed in the financial section of the newspaper as

Hewlett    Aug    $60    $6

means that for a price of $6 per share, one may purchase the privilege (option) of buying the stock of Hewlett-Packard at a price of $60 per share at
any time between now and August (by convention, always the third Friday of the month). The $60 figure is called the striking price. Since the most recent closing price of Hewlett was $59, the option of choosing when to
buy, or not to buy at all, carries a premium of $7 = $60 + $6 − $59 over a direct purchase of the stock today. Should the price of Hewlett rise to, say, $70 between now and the third Friday of August, the owner of such an option could exercise it, buying at the striking price of $60 and immediately selling at the then current market price of $70 for a $10 profit, less, of course, the $6 cost of the option itself. On the other hand, should the price of Hewlett fall, the option owner's loss is limited to his $6 cost of the option. Note that the seller (technically called the "writer") of the option has a profit limited to the $6 that he receives for the option but could experience a huge loss should the price of Hewlett soar, say to $100. The writer would then either have to give up his own Hewlett shares or buy them at $100 on the open market in order to fulfill his obligation to sell them to the option holder at $60. What should such an option be worth? Is $6 for this privilege a fair price? While early researchers had studied these questions using a geometric Brownian motion model for the price fluctuations of the stock, they
all assumed that the option should yield a higher mean return than the mean return from the stock itself because of the unlimited potential risk to the option writer. This assumption of a higher return was shown to be false in 1973 when Fischer Black, a financial consultant with a Ph.D. in applied mathematics, and Myron Scholes, an assistant professor in finance at MIT, published an entirely new and innovative analysis. In an idealized setting that included no transaction costs and an ability to borrow or lend limitless amounts of capital at the same fixed interest rate, they showed that an owner, or a writer, of a call option could simultaneously buy or sell the underlying stock ("program trading") in such a way as to exactly match the returns of the option. Having available two investment opportunities with exactly the same return effectively eliminates all risk, or randomness, by allowing an investor to buy one while selling the other. The implications of their result are many. First, since writing an option potentially carries no risk, its return must be the same as that for other riskless investments in the economy. Otherwise, limitless profit opportunities bearing no risk would arise. Second, since owning an option carries no risk, one should not exercise it early, but hold it until its expiration date, when, if the market price exceeds the striking price, it should be exercised, and otherwise
not. These two implications then lead to a third, a formula that established the worth, or value, of the option. The Black-Scholes paper spawned hundreds, if not thousands, of further academic studies. At the same time, their valuation formula quickly invaded the financial world, where soon virtually all option trades were taking place at or near their Black-Scholes value. It is remarkable that the valuation formula was adopted so quickly in the real world in spite of the esoteric nature of its derivation and the ideal world of its assumptions. In order to present the Black-Scholes formula, we need some notation. Let S(t) be the price at time t of a share of the stock under study. We assume that S(t) is described by a geometric Brownian motion with drift parameter α and variance parameter σ². Let F(z, τ) be the value of an option, where z is the current price of the stock and τ is the time remaining until
expiration. Let a be the striking price. When τ = 0 and there is no time remaining, one exercises the option for a profit of z − a if z > a (market price greater than striking price) and does not exercise the option, but lets it lapse, or expire, if z ≤ a. This leads to the condition

F(z, 0) = (z − a)⁺ = max{z − a, 0}.

The Black-Scholes analysis resulted in the valuation

F(z, τ) = e^(−rτ) E[(Z(τ) − a)⁺ | Z(0) = z],        (4.20)
where r is the return rate for secure, or riskless, investments in the economy, and where Z(t) is a second geometric Brownian motion having drift parameter r and variance parameter σ². Looking at (4.20), the careful reader will wonder whether we have made a mistake. No, the worth of the option does not depend on the drift parameter α of the underlying stock. In order to put the valuation formula into a useful form, we write

Z(τ) = ze^((r−½σ²)τ+σ√τ ξ),    ξ = B(τ)/√τ,        (4.21)

and observe that

ze^((r−½σ²)τ+σ√τ ξ) > a

is the same as

ξ > v₀ = [log(a/z) − (r − ½σ²)τ] / (σ√τ).        (4.22)
Then

e^(rτ)F(z, τ) = E[(Z(τ) − a)⁺ | Z(0) = z]
             = E[(ze^((r−½σ²)τ+σ√τ ξ) − a)⁺]
             = ∫ from v₀ to ∞ of [ze^((r−½σ²)τ+σ√τ v) − a] φ(v) dv
             = ze^((r−½σ²)τ) ∫ from v₀ to ∞ of e^(σ√τ v)φ(v) dv − a ∫ from v₀ to ∞ of φ(v) dv.

Completing the square in the form

−½v² + σ√τ v = −½[(v − σ√τ)² − σ²τ]

shows that

e^(σ√τ v)φ(v) = e^(½σ²τ)φ(v − σ√τ),

whence

e^(rτ)F(z, τ) = ze^(rτ) ∫ from v₀ to ∞ of φ(v − σ√τ) dv − a[1 − Φ(v₀)]
             = ze^(rτ)[1 − Φ(v₀ − σ√τ)] − a[1 − Φ(v₀)].

Finally, note that

v₀ − σ√τ = [log(a/z) − (r + ½σ²)τ] / (σ√τ),

and that

1 − Φ(x) = Φ(−x)    and    log(a/z) = −log(z/a),

to get, after multiplying by e^(−rτ), the end result

F(z, τ) = zΦ( [log(z/a) + (r + ½σ²)τ] / (σ√τ) )
        − ae^(−rτ)Φ( [log(z/a) + (r − ½σ²)τ] / (σ√τ) ).        (4.23)
Equation (4.23) is the Black-Scholes valuation formula. Four of the five factors that go into it are easily and objectively evaluated: the current market price z, the striking price a, the time τ until the option expires, and the rate r of return from secure investments such as short-term government securities. It is the fifth factor, σ, sometimes called the volatility, that presents problems. It is, of course, possible to estimate this parameter based on past records of price movements. However, it should be emphasized that it is the volatility in the future that will affect the profitability of the option, and when economic conditions are changing, past history may not accurately indicate the future. One way around this difficulty is to work backwards and use the Black-Scholes formula to impute a volatility
from an existing market price of the option. For example, for the Hewlett-Packard call option that expires in August, six months or τ = ½ year in the future, with a current price of Hewlett-Packard stock of $59, a striking price of $60, and secure investments returning about r = 0.05, a volatility of σ = 0.35 is consistent with the listed option price
Striking     Time to Expiration     Offered     Black-Scholes
Price a      (Years) τ              Price       Valuation F(z, τ)

130          1/12                   $17.00      $17.45
130          2/12                    19.25       18.87
135          1/12                    13.50       13.09
135          2/12                    15.13       14.92
140          1/12                     8.50        9.26
140          2/12                    12.00       11.46
145          1/12                     5.50        6.14
145          2/12                     9.13        8.52
145          5/12                    13.63       13.51
150          1/12                     3.13        3.80
150          2/12                     6.38        6.14
155          1/12                     1.63        2.18
155          2/12                     4.00        4.28
155          5/12                     9.75        9.05
of $6. (When σ = 0.35 is used in the Black-Scholes formula, the resulting valuation is $6.03.) A volatility derived in this manner is called an imputed or implied volatility. Someone who believes that the future will be more variable might regard the option at $6 as a good buy. Someone who believes the future to be less variable than the imputed volatility of σ = 0.35 might be inclined to offer a Hewlett-Packard option at $6.

The table on the previous page compares actual offering prices on February 26, 1997, for options in IBM stock with their Black-Scholes valuations using (4.23). The current market price of IBM stock is $146.50, and in all cases, the same volatility σ = 0.30 was used.
The agreement between the actual option prices and their Black-Scholes valuations seems quite good.
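Equation (4.23) and the imputed-volatility procedure can be sketched in a few lines (Python; the function names are ours, not the text's). The first check reproduces the $6.03 Hewlett-Packard valuation quoted in the text; the second, an entry of the IBM table:

```python
import math

def Phi(x):
    # standard normal cumulative distribution function
    return 0.5*(1.0 + math.erf(x/math.sqrt(2.0)))

def black_scholes_call(z, a, tau, r, sigma):
    # Equation (4.23): value F(z, tau) of a call with striking price a
    if tau <= 0.0:
        return max(z - a, 0.0)
    st = sigma*math.sqrt(tau)
    d1 = (math.log(z/a) + (r + 0.5*sigma**2)*tau)/st
    d2 = (math.log(z/a) + (r - 0.5*sigma**2)*tau)/st
    return z*Phi(d1) - a*math.exp(-r*tau)*Phi(d2)

def implied_volatility(price, z, a, tau, r, lo=1e-4, hi=5.0):
    # imputed volatility by bisection; the call value increases with sigma
    for _ in range(100):
        mid = 0.5*(lo + hi)
        if black_scholes_call(z, a, tau, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)
```

For instance, black_scholes_call(59, 60, 0.5, 0.05, 0.35) gives roughly the $6.03 figure, and inverting that price with implied_volatility recovers σ ≈ 0.35.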
Exercises
4.1. A Brownian motion {X(t)} has parameters µ = −0.1 and σ = 2. What is the probability that the process is above y = 9 at time t = 4, given that it starts at x = 2.82?
4.2. A Brownian motion {X(t)} has parameters µ = 0.1 and σ = 2. Evaluate the probability of exiting the interval (a, b] at the point b starting from X(0) = 0, for b = 1, 10, and 100 and a = −b. Why do the probabilities change when a/b is the same in all cases?
4.3. A Brownian motion {X(t)} has parameters µ = 0.1 and σ = 2. Evaluate the mean time to exit the interval (a, b] from X(0) = 0, for b = 1, 10, and 100 and a = −b. Can you guess how this mean time varies with b for b large?

4.4. A Brownian motion X(t) either (i) has drift µ = +½δ > 0, or (ii) has drift µ = −½δ < 0, and it is desired to determine which is the case by observing the process for a fixed duration T. If X(T) > 0, then the decision will be that µ = +½δ; if X(T) ≤ 0, then µ = −½δ will be stated. What should be the length T of the observation period if the design error probabilities are set at α = β = 0.05? Use δ = 1 and σ = 2. Compare this fixed duration with the average duration of the sequential decision plan in the example of Section 4.1.
4.5. Suppose that the fluctuations in the price of a share of stock in a certain company are well described by a geometric Brownian motion with drift α = −0.1 and variance σ² = 4. A speculator buys a share of this stock at a price of $100 and will sell if ever the price rises to $110 (a profit) or drops to $95 (a loss). What is the probability that the speculator sells at a profit?
4.6. Let ξ be a standard normal random variable.

(a) For an arbitrary constant a, show that

E[(ξ − a)⁺] = φ(a) − a[1 − Φ(a)].

(b) Let X be normally distributed with mean µ and variance σ². Show that

E[(X − b)⁺] = σφ((b − µ)/σ) − (b − µ)[1 − Φ((b − µ)/σ)].
Problems
4.1. What is the probability that a standard Brownian motion {B(t)} ever crosses the line a + bt (a > 0, b > 0)?

4.2. Show that

Pr{ max over 0 ≤ t < ∞ of (b + B(t))/(1 + t) > a } = e^(−2a(a−b)),    a > 0, b < a.

4.3. Show that

Pr{ max over 0 ≤ t ≤ 1 of B°(t) > a } = e^(−2a²),    a > 0,

where B°(t) is a Brownian bridge.
4.4. A Brownian motion X(t) either (i) has drift µ = µ₀, or (ii) has drift µ = µ₁, where µ₀ < µ₁ are known constants. It is desired to determine which is the case by observing the process. Derive a sequential decision
procedure that meets prespecified error probabilities α and β. Hint: Base your decision on the process X′(t) = X(t) − ½(µ₀ + µ₁)t.

4.5. Change a Brownian motion with drift X(t) into an absorbed Brownian motion with drift X_A(t) by defining

X_A(t) = X(t),    for t < τ,
X_A(t) = 0,       for t ≥ τ,

where

τ = min{t ≥ 0; X(t) = 0}.

(We suppose that X(0) = x > 0 and that µ < 0, so that absorption is sure to occur eventually.) What is the probability that the absorbed Brownian motion ever reaches the height b > x?
4.6. What is the probability that a geometric Brownian motion with drift parameter α = 0 ever rises to more than twice its initial value? (You buy stock whose fluctuations are described by a geometric Brownian motion with α = 0. What are your chances to double your money?)
4.7. A call option is said to be "in the money" if the market price of the stock is higher than the striking price. Suppose that the stock follows a geometric Brownian motion with drift α, variance σ², and has a current market price of z. What is the probability that the option is in the money at the expiration time τ? The striking price is a.
4.8. Verify the Hewlett-Packard option valuation of $6.03 stated in the text when τ = ½, z = $59, a = $60, r = 0.05, and σ = 0.35. What is the Black-Scholes valuation if σ = 0.30?
4.9. Let τ be the first time that a standard Brownian motion B(t) starting from B(0) = x > 0 reaches zero. Let λ be a positive constant. Show that

w(x) = E[e^(−λτ) | B(0) = x] = e^(−x√(2λ)).

Hint: Develop an appropriate differential equation by instituting an infinitesimal first step analysis according to

w(x) = E[E{e^(−λτ) | B(Δt)} | B(0) = x] = E[e^(−λΔt) w(x + ΔB)].
4.10. Let t₀ = 0 < t₁ < t₂ < ··· be time points, and define X_n = e^(−rt_n) Z(t_n), where Z(t) is the geometric Brownian motion with drift parameter r and variance parameter σ² (see the geometric Brownian motion in the Black-Scholes formula (4.20)). Show that {X_n} is a martingale.
5. The Ornstein-Uhlenbeck Process*

* This section contains material of a more specialized nature.
The Ornstein-Uhlenbeck process {V(t); t ≥ 0} has two parameters, a drift coefficient β > 0 and a diffusion parameter σ². The process, starting from V(0) = v, is defined in terms of a standard Brownian motion {B(t)} by scale changes in both space and time:

V(t) = ve^(−βt) + (σe^(−βt)/√(2β)) B(e^(2βt) − 1),    for t ≥ 0.        (5.1)
The first term on the right of (5.1) describes an exponentially decreasing trend towards the origin. The second term represents the fluctuations
about this trend in terms of a rescaled Brownian motion. The Ornstein-Uhlenbeck process is another example of a continuous-state-space,
continuous-time Markov process having continuous paths, inheriting these properties from the Brownian motion in the representation (5.1). It is a Gaussian process (see the discussion in Section 1.4), and (5.1) easily shows its mean and variance to be

E[V(t) | V(0) = v] = ve^(−βt),        (5.2)

and

Var[V(t) | V(0) = v] = (σ²e^(−2βt)/(2β)) Var[B(e^(2βt) − 1)]
                     = (σ²/(2β))(1 − e^(−2βt)).        (5.3)
Knowledge of the mean and variance of a normally distributed random variable allows its cumulative distribution function to be written in terms of the standard normal distribution (1.6), and by this means we can immediately express the transition distribution for the Ornstein-Uhlenbeck process as
Pr{V(t) ≤ y | V(0) = x} = Φ( √(2β)(y − xe^(−βt)) / (σ√(1 − e^(−2βt))) ).        (5.4)
The Covariance Function
Suppose that 0 < u < s, and that V(0) = x. Upon subtracting the mean as given by (5.2), we obtain

Cov[V(u), V(s)] = E[{V(u) − xe^(−βu)}{V(s) − xe^(−βs)}]
               = (σ²/(2β)) e^(−β(u+s)) E[B(e^(2βu) − 1)B(e^(2βs) − 1)]        (5.5)
               = (σ²/(2β)) e^(−β(u+s)) (e^(2βu) − 1)
               = (σ²/(2β)) (e^(−β(s−u)) − e^(−β(s+u))).
5.1. A Second Approach to Physical Brownian Motion

The path that we have taken to introduce the Ornstein-Uhlenbeck process is not faithful to the way in which the process came about. To begin an explanation, let us recognize that all models of physical phenomena have deficiencies, and the Brownian motion stochastic process as a model for the Brownian motion of a particle is no exception. If B(t) is the position of a pollen grain at time t and if this position is changing over time, then the pollen grain must have a velocity. Velocity is the infinitesimal change in position over infinitesimal time, and where B(t) is the position of the pollen grain at time t, the velocity of the grain would be the derivative dB(t)/dt. But while the paths of the Brownian motion stochastic process are continuous, they are not differentiable. This remarkable statement is difficult to comprehend. Indeed, many elementary calculus explanations implicitly tend to assume that all continuous functions are differentiable, and if we were asked to find an example of one that wasn't, we might
consider it quite a challenge. Yet each path of a continuous Brownian motion stochastic process is (with probability one) differentiable at no point. We have encountered yet another intriguing facet of stochastic processes that we cannot treat in full detail but must leave for future study. We will attempt some motivation, however. Recall that the variance of the Brownian increment ΔB is Δt. But variations in the normal distribution are not scaled in terms of the variance, but in terms of its square root, the standard deviation, so that the Brownian increment ΔB is roughly on the order of √Δt, and the approximate derivative

ΔB/Δt ~ √Δt/Δt = 1/√Δt

is roughly on the order of 1/√Δt. This, of course, becomes infinite as Δt → 0, which suggests that a derivative of Brownian motion, were it to exist, could only take the values ±∞. As a consequence, the Brownian path cannot have a derivative. The reader can see from our attempt at explanation that the topic is well beyond the scope of an introductory text.

Although its movements may be erratic, a pollen grain, being a physical object of positive mass, must have a velocity, and the Ornstein-Uhlenbeck process arose as an attempt to model this velocity directly. Two factors are postulated to affect the particle's velocity over a small time interval. First, the frictional resistance or viscosity of the surrounding medium is assumed to reduce the magnitude of the velocity by a deterministic proportional amount, the constant of proportionality being β > 0. Second, there are random changes in velocity caused by collisions with neighboring molecules, the magnitude of these random changes being measured by a variance coefficient σ². That is, if V(t) is the velocity at time t, and ΔV is the change in velocity over (t, t + Δt], we might express the viscosity factor as

E[ΔV | V(t) = v] = −βvΔt + o(Δt)        (5.6)

and the random factor by

Var[ΔV | V(t) = v] = σ²Δt + o(Δt).        (5.7)
The Ornstein-Uhlenbeck process was developed by taking (5.6) and (5.7) together with the Markov property as the postulates, and from them deriving the transition probabilities (5.4). While we have chosen not to follow this path, we will verify that the mean and variance given in (5.2) and
(5.3) do satisfy (5.6) and (5.7) over small time increments. Beginning with (5.2) and the Markov property, the first step is

E[V(t + Δt) | V(t) = v] = ve^(−βΔt) = v[1 − βΔt + o(Δt)],

and then

E[ΔV | V(t) = v] = E[V(t + Δt) | V(t) = v] − v = −βvΔt + o(Δt),

and over small time intervals the mean change in velocity is the proportional decrease desired in (5.6). For the variance, we have

Var[ΔV | V(t) = v] = Var[V(t + Δt) | V(t) = v]
                   = (σ²/(2β))(1 − e^(−2βΔt))
                   = σ²Δt + o(Δt),

and the variance of the velocity increment behaves as desired in (5.7). In fact, (5.6) and (5.7) together with the Markov property can be taken as the definition of the Ornstein-Uhlenbeck process in much the same way, but involving far deeper analysis, that the infinitesimal postulates of V, Section 2.1 serve to define the Poisson process.
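The postulates (5.6) and (5.7) suggest the discrete approximation ΔV = −βVΔt + σ√Δt ξ with ξ standard normal. A rough Monte Carlo sketch (Python; the function name and parameter values are ours, not the text's) simulates this scheme and compares the sample mean and variance at time t with (5.2) and (5.3):

```python
import math, random

def simulate_ou_euler(v0, beta, sigma, t, dt=0.01, n_paths=10000, seed=1):
    # Step the discretized dynamics of (5.6)-(5.7): in each interval of
    # length dt, V changes by -beta*V*dt plus a N(0, sigma^2*dt) shock.
    rng = random.Random(seed)
    steps = int(round(t/dt))
    finals = []
    for _ in range(n_paths):
        v = v0
        for _ in range(steps):
            v += -beta*v*dt + sigma*math.sqrt(dt)*rng.gauss(0.0, 1.0)
        finals.append(v)
    mean = sum(finals)/n_paths
    var = sum((x - mean)**2 for x in finals)/n_paths
    return mean, var

# compare with (5.2): v0*exp(-beta*t), and (5.3): (sigma^2/(2*beta))*(1 - exp(-2*beta*t))
m, var = simulate_ou_euler(2.0, 0.5, 1.0, 1.0)
```

Up to discretization and sampling error, the simulated mean and variance match the exact expressions, illustrating that the infinitesimal description and the representation (5.1) describe the same process.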
Example  Tracking Error  Let V(t) be the measurement error of a radar system that is attempting to track a randomly moving target. We assume V(t) to be an Ornstein-Uhlenbeck process. The mean increment E[ΔV | V(t) = v] = −βvΔt + o(Δt) represents the controller's effort to reduce the current error, while the variance term reflects the unpredictable motion of the target. If β = 0.1, σ = 2, and the system starts on target (v = 0), the probability that the error is less than one at time t = 1 is, using (5.4),

Pr{|V(1)| < 1 | V(0) = 0} = 2Φ( √(2β) / (σ√(1 − e^(−2β))) ) − 1 = 2Φ(0.525) − 1 ≈ 0.40.

For N > 0, let

S_N(t) = (1/√N) S(Nt)        (5.25)
       = (1/β)[σB̂(t) + (V(0) − V(Nt))/√N],
where B̂(t) = B(Nt)/√N remains a standard Brownian motion. (See Exercise 1.2.) Because the variance of V(t) is always less than or equal to σ²/(2β), the variance of V(Nt)/√N becomes negligible for large N. Equation (5.25) then shows more clearly in what manner the position process becomes like a Brownian motion: For large N,

S_N(t) ≈ (σ/β) B̂(t).
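Probabilities such as the tracking-error example follow immediately from the transition distribution (5.4); a small sketch (Python; the helper names are ours) evaluates it with β = 0.1, σ = 2, and t = 1:

```python
import math

def Phi(x):
    # standard normal cumulative distribution function
    return 0.5*(1.0 + math.erf(x/math.sqrt(2.0)))

def ou_transition_cdf(y, x, t, beta, sigma):
    # Equation (5.4): Pr{V(t) <= y | V(0) = x}
    sd = sigma*math.sqrt((1.0 - math.exp(-2.0*beta*t))/(2.0*beta))
    return Phi((y - x*math.exp(-beta*t))/sd)

# tracking example: beta = 0.1, sigma = 2, start on target, t = 1
p = ou_transition_cdf(1.0, 0.0, 1.0, 0.1, 2.0) \
    - ou_transition_cdf(-1.0, 0.0, 1.0, 0.1, 2.0)
```

The result is approximately 0.40 for the probability that the tracking error lies within one unit at time t = 1.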
Exercises
5.1. An Ornstein-Uhlenbeck process V(t) has σ² = 1 and β = 0.2. What is the probability that V(t) ≤ …
If X(t) is the number of customers in the system at time t, then the number undergoing service is min(X(t), s), and the number waiting for service is max{X(t) - s, 0}. The system is depicted in Figure 2.2.
IX    Queueing Systems
Figure 2.2  A queueing system with s servers. Arrivals join a common waiting line and are served by s = 5 parallel servers before departing.
The auxiliary quantities are given by

θ_k = (λ/µ)^k / k!,                         for k = 0, 1, ..., s,
θ_k = (λ/µ)^s / s! × (λ/(sµ))^(k−s),        for k ≥ s,

and when λ < sµ, then

Σ from j=0 to ∞ of θ_j = Σ from j=0 to s−1 of (λ/µ)^j / j!  +  (λ/µ)^s / [s!(1 − λ/(sµ))].        (2.15)

The traffic intensity in an M/M/s system is ρ = λ/(sµ). Again, as the traffic intensity approaches one, the mean queue length becomes unbounded. When λ < sµ, then from (2.8) and (2.15),

π₀ = [ Σ from j=0 to s−1 of (λ/µ)^j / j!  +  (λ/µ)^s / (s!(1 − λ/(sµ))) ]^(−1)
and

π_k = π₀ (λ/µ)^k / k!,                          for k = 0, 1, ..., s,
π_k = π₀ (λ/µ)^s / s! × (λ/(sµ))^(k−s),         for k ≥ s.        (2.16)
We evaluate L₀, the mean number of customers in the system waiting for, and not undergoing, service. Then

L₀ = Σ from j=s to ∞ of (j − s)π_j = Σ from k=0 to ∞ of kπ_(s+k)
   = Σ from k=0 to ∞ of k π₀ (λ/µ)^s / s! × (λ/(sµ))^k
   = π₀ (λ/µ)^s / s! × (λ/(sµ)) / (1 − λ/(sµ))².        (2.17)

Then

W = W₀ + 1/µ

and

L = λW = λ(W₀ + 1/µ) = L₀ + λ/µ.
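Formulas (2.15)-(2.17) assemble into a short routine for the M/M/s measures π₀, L₀, W, and L. A minimal sketch in Python (the function name is ours); as a consistency check, with s = 1 it reduces to the M/M/1 values π₀ = 1 − ρ and L = ρ/(1 − ρ):

```python
import math

def mms_measures(lam, mu, s):
    # pi0 from (2.15), L0 from (2.17), then W = W0 + 1/mu and L = lam*W;
    # requires lam < s*mu for stability
    a = lam/mu                    # offered load lambda/mu
    rho = lam/(s*mu)              # traffic intensity
    norm = sum(a**j/math.factorial(j) for j in range(s)) \
        + a**s/(math.factorial(s)*(1.0 - rho))
    pi0 = 1.0/norm
    L0 = pi0*(a**s/math.factorial(s))*rho/(1.0 - rho)**2
    W0 = L0/lam                   # mean wait before service, from L0 = lam*W0
    W = W0 + 1.0/mu
    L = lam*W
    return pi0, L0, W, L
```

For example, mms_measures(2.0, 1.2, 2) gives the quantities needed in Problem 2.2 below.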
Exercises
2.1. Customers arrive at a tool crib according to a Poisson process of rate λ = 5 per hour. There is a single tool crib employee, and the individual service times are exponentially distributed with a mean service time of
10 minutes. In the long run, what is the probability that two or more workers are at the tool crib being served or waiting to be served?
2.2. On a single graph, plot the server utilization 1 − π₀ = ρ and the mean queue length L = ρ/(1 − ρ) for the M/M/1 queue as a function of the traffic intensity ρ = λ/µ for 0 < ρ < 1.

2.3. Customers arrive at a checkout station in a market according to a Poisson process of rate λ = 1 customer per minute. The checkout station can be operated with or without a bagger. The checkout times for customers are exponentially distributed, and with a bagger the mean checkout time is 30 seconds, while without a bagger this mean time increases to 50 seconds. Compare the mean queue lengths with and without a bagger.
Problems

2.1. Determine explicit expressions for π₀ and L for the M/M/s queue when s = 2. Plot 1 − π₀ and L as a function of the traffic intensity ρ = λ/(2µ).

2.2. Determine the mean waiting time W for an M/M/2 system when λ = 2 and µ = 1.2. Compare this with the mean waiting time in an M/M/1 system whose arrival rate is λ = 1 and service rate is µ = 1.2. Why is there a difference when the arrival rate per server is the same in both cases?

2.3. Determine the stationary distribution for an M/M/2 system as a function of the traffic intensity ρ = λ/(2µ), and verify that L = λW.
2.4. The problem is to model a queueing system having finite capacity. We assume arrivals according to a Poisson process of rate λ, independent exponentially distributed service times having mean 1/µ, a single server, and a finite system capacity N. By this we mean that if an arriving customer finds that there are already N customers in the system, then that customer does not enter the system and is lost. Let X(t) be the number of customers in the system at time t. Suppose that N = 3 (2 waiting, 1 being served).
(a) Specify the birth and death parameters for X(t). (b) In the long run, what fraction of time is the system idle? (c) In the long run, what fraction of customers are lost?
2.5. Customers arrive at a service facility according to a Poisson process having rate λ. There is a single server, whose service times are exponentially distributed with parameter µ. Let N(t) be the number of people in the system at time t. Then N(t) is a birth and death process with parameters λ_n = λ for n ≥ 0 and µ_n = µ for n ≥ 1. Assume λ < µ. Then π_k = (1 − λ/µ)(λ/µ)^k, k ≥ 0, is a stationary distribution for N(t); cf. equation (2.9).

Suppose the process begins according to the stationary distribution. That is, suppose Pr{N(0) = k} = π_k for k = 0, 1, .... Let D(t) be the number of people completing service up to time t. Show that D(t) has a Poisson distribution with mean λt. Hint: Let P_kj(t) = Pr{D(t) = j | N(0) = k} and P_j(t) = Σ_k π_k P_kj(t) = Pr{D(t) = j}. Use a first step analysis to show that

P_0j(t + Δt) = λ(Δt)P_1j(t) + [1 − λ(Δt)]P_0j(t) + o(Δt),

and for k = 1, 2, ...,

P_kj(t + Δt) = λ(Δt)P_(k+1)j(t) + µ(Δt)P_(k−1)(j−1)(t) + [1 − (λ + µ)(Δt)]P_kj(t) + o(Δt).

Then use P_j(t) = Σ_k π_k P_kj(t) to establish a differential equation. Use the explicit form of π_k given in the problem.
2.6. Customers arrive at a service facility according to a Poisson process of rate λ. There is a single server, whose service times are exponentially distributed with parameter µ. Suppose that "gridlock" occurs whenever the total number of customers in the system exceeds a capacity C. What is the smallest capacity C that will keep the probability of gridlock, under the limiting distribution of queue length, below 0.001? Express your answer in terms of the traffic intensity ρ = λ/µ.
2.7. Let X(t) be the number of customers in an M/M/∞ queueing system at time t. Suppose that X(0) = 0.

(a) Derive the forward equations that are appropriate for this process by substituting the birth and death parameters into VI, (3.8).
(b) Show that M(t) = E[X(t)] satisfies the differential equation M′(t) = λ − µM(t) by multiplying the jth forward equation by j and summing.
(c) Solve for M(t).
3. General Service Time Distributions
We continue to assume that the arrivals follow a Poisson process of rate λ. The successive customer service times Y₁, Y₂, ..., however, are now allowed to follow an arbitrary distribution G(y) = Pr{Y_k ≤ y} having a finite mean service time ν = E[Y₁]. The long run service rate is µ = 1/ν. Deterministic service times of an equal fixed duration are an important special case.
3.1. The M/G/1 System
If arrivals to a queue follow a Poisson process, then the successive durations X_k from the commencement of the kth busy period to the start of the next busy period form a renewal process. (A busy period is an uninterrupted duration when the queue is not empty. See Figure 2.1.) Each X_k is composed of a busy portion B_k and an idle portion I_k. Then p₀(t), the probability that the system is empty at time t, converges to

lim p₀(t) = π₀ = E[I₁]/E[X₁] = E[I₁]/(E[I₁] + E[B₁])        (3.1)

by the renewal theorem (see "A Queueing Model" in VII, Section 5.3). The idle time is the duration from the completion of a service that empties the queue to the instant of the next arrival. Because of the memoryless property that characterizes the interarrival times in a Poisson process, each idle time is exponentially distributed with mean E[I₁] = 1/λ.

The busy period is composed of the first service time Y₁, plus busy periods generated by all customers who arrive during this first service time. Let A denote this random number of new arrivals. We will evaluate the conditional mean busy period given that A = n and Y₁ = y. First,
E[B₁ | A = 0, Y₁ = y] = y,

because when no customers arrive, the busy period is composed of the
first customer's service time alone. Next, consider the case in which A = 1, and let B′ be the duration from the beginning of this customer's service to the next instant that the queue is empty. Then

E[B₁ | A = 1, Y₁ = y] = y + E[B′] = y + E[B₁],

because upon the completion of service for the initial customer, the single arrival begins a busy period B′ that is statistically identical to the first, so that E[B′] = E[B₁]. Continuing in this manner we deduce that

E[B₁ | A = n, Y₁ = y] = y + nE[B₁],

and then, using the law of total probability, that

E[B₁ | Y₁ = y] = Σ from n=0 to ∞ of E[B₁ | A = n, Y₁ = y] Pr{A = n | Y₁ = y}
              = Σ from n=0 to ∞ of [y + nE[B₁]] (λy)^n e^(−λy)/n!
              = y + λyE[B₁].

Finally,

E[B₁] = ∫ from 0 to ∞ of E[B₁ | Y₁ = y] dG(y)
      = ∫ from 0 to ∞ of {y + λyE[B₁]} dG(y)        (3.2)
      = ν{1 + λE[B₁]}.

Since E[B₁] appears on both sides of (3.2), we may solve to obtain

E[B₁] = ν/(1 − λν),    provided that λν < 1.        (3.3)

To compute the long run fraction of idle time, we use (3.3) and (3.1):
π₀ = E[I₁]/(E[I₁] + E[B₁])
   = (1/λ)/(1/λ + ν/(1 − λν))        (3.4)
   = 1 − λν,    if λν < 1.
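The conclusion π₀ = 1 − λν of (3.4) holds for any service distribution with λν < 1, and can be checked by simulating the workload (remaining work) in the queue. A rough Monte Carlo sketch (Python; the function name and the M/D/1 test case are ours, not the text's):

```python
import random

def mg1_idle_fraction(lam, service, n_arrivals=200000, seed=7):
    # Workload simulation of a single-server queue with Poisson arrivals;
    # 'service' draws one service time.  The long-run idle fraction should
    # approach 1 - lam*nu when lam*nu < 1.
    rng = random.Random(seed)
    w = 0.0        # remaining work just after the latest arrival
    clock = 0.0
    idle = 0.0
    for _ in range(n_arrivals):
        gap = rng.expovariate(lam)     # exponential interarrival time
        idle += max(0.0, gap - w)      # server idles once the work runs out
        w = max(0.0, w - gap) + service(rng)
        clock += gap
    return idle/clock

# M/D/1 test case: deterministic service nu = 1, arrival rate lam = 0.5
frac = mg1_idle_fraction(0.5, lambda rng: 1.0)
```

With λν = 0.5 the simulated idle fraction comes out near 0.5, matching (3.4) even though the service times here are deterministic rather than exponential.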
P_k(t) = [µ(t)]^k e^(−µ(t)) / k!,    for k = 0, 1, ...,

where µ(t) is given by (3.19). As t → ∞, then

lim µ(t) = λ ∫ from 0 to ∞ of [1 − G(x)] dx = λν,

where ν is the mean service time. Thus we obtain the limiting distribution

π_k = (λν)^k e^(−λν) / k!,    for k = 0, 1, ....
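The mean function referenced above can be evaluated numerically for any service distribution, assuming, as is standard for the M/G/∞ queue, that µ(t) = λ ∫ from 0 to t of [1 − G(x)] dx. A sketch (Python; the names are ours), checked against the closed form for exponential service:

```python
import math

def mginfty_mean(lam, G, t, n=2000):
    # mu(t) = lam * integral over [0, t] of [1 - G(x)] dx, trapezoidal rule
    h = t/n
    total = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0
        total += w*(1.0 - G(i*h))
    return lam*h*total

# exponential service with rate mu: closed form (lam/mu)*(1 - exp(-mu*t))
lam, mu = 3.0, 2.0
approx = mginfty_mean(lam, lambda x: 1.0 - math.exp(-mu*x), 5.0)
exact = (lam/mu)*(1.0 - math.exp(-mu*5.0))
```

As t grows, µ(t) increases monotonically to λν, consistent with the Poisson limiting distribution displayed above.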
Exercises
3.1. Suppose that the service distribution in a single server queue is exponential with rate µ; i.e., G(y) = 1 − e^(−µy) for y ≥ 0. Substitute the mean and variance of this distribution into (3.16) and verify that the result agrees with that derived for the M/M/1 system in (2.10).
3.2.

Consider a single-server queueing system having Poisson arrivals at rate λ. Suppose that the service times have the gamma density

g(y) = μ^α y^{α−1} e^{−μy}/Γ(α)      for y ≥ 0,

where α > 0 and μ > 0 are fixed parameters. The mean service time is α/μ and the variance is α/μ². Determine the equilibrium mean queue length L.
3.3. Customers arrive at a tool crib according to a Poisson process of rate A = 5 per hour. There is a single tool crib employee, and the individual service times are random with a mean service time of 10 minutes and a standard deviation of 4 minutes. In the long run, what is the mean number of workers at the tool crib either being served or waiting to be served?
3.4. Customers arrive at a checkout station in a market according to a Poisson process of rate A = 1 customer per minute. The checkout station can be operated with or without a bagger. The checkout times for customers are random. With a bagger the mean checkout time is 30 seconds, while without a bagger this mean time increases to 50 seconds. In both cases, the standard deviation of service time is 10 seconds. Compare the mean queue lengths with and without a bagger.
3.5.
Let X(t) be the number of customers in an M/G/∞ queueing system at time t. Suppose that X(0) = 0. Evaluate M(t) = E[X(t)], and show that it increases monotonically to its limiting value as t → ∞.
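Exercises 3.3 and 3.4 reduce to evaluating the equilibrium mean queue length. Assuming (3.16) is the Pollaczek–Khinchine form L = ρ + (λ²σ² + ρ²)/(2(1 − ρ)) with ρ = λv (the standard M/G/1 result; this is our assumption about the exact form of (3.16), which is not reproduced in this section), the arithmetic can be scripted:

```python
def mean_queue_length(lam, v, sigma):
    # Pollaczek-Khinchine mean number in system for an M/G/1 queue:
    # L = rho + (lam^2 sigma^2 + rho^2) / (2 (1 - rho)), with rho = lam*v.
    rho = lam * v
    if rho >= 1.0:
        raise ValueError("queue is unstable: lam*v >= 1")
    return rho + (lam ** 2 * sigma ** 2 + rho ** 2) / (2.0 * (1.0 - rho))

# Exercise 3.3: lam = 5 per hour, v = 10 min = 1/6 hr, sigma = 4 min = 1/15 hr.
L_tool_crib = mean_queue_length(5.0, 1.0 / 6.0, 1.0 / 15.0)

# Exercise 3.4: lam = 1 per minute; checkout 30 s vs 50 s, sigma = 10 s.
L_with_bagger = mean_queue_length(1.0, 0.5, 1.0 / 6.0)
L_without_bagger = mean_queue_length(1.0, 5.0 / 6.0, 1.0 / 6.0)
```

Under this reading of (3.16), the tool crib holds L = 3.25 workers on average, and the checkout queue averages about 0.78 customers with a bagger versus 3.0 without.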
Problems
3.1. Let X(t) be the number of customers in an M/G/∞ queueing system at time t, and let Y(t) be the number of customers who have entered the
system and completed service by time t. Determine the joint distribution of X(t) and Y(t).
3.2. In operating a queueing system with Poisson arrivals at a rate of λ = 1 per unit time and a single server, you have a choice of server mechanisms. Method A has a mean service time of v = 0.5 and a variance in service time of τ² = 0.2, while Method B has a mean service time of v = 0.4 and a variance of τ² = 0.9. In terms of minimizing the waiting time of a typical customer, which method do you prefer? Would your answer change if the arrival rate were to increase significantly?
4.
Variations and Extensions
In this section we consider a few variations on the simple queueing models studied so far. These examples do not exhaust the possibilities but serve only to suggest the richness of the area. Throughout we restrict ourselves to Poisson arrivals and exponentially distributed service times.

4.1. Systems with Balking
Suppose that a customer who arrives when there are n customers in the system enters with probability p_n and departs with probability q_n = 1 − p_n. If long queues discourage customers, then p_n would be a decreasing function of n. As a special case, if there is a finite waiting room of capacity C, we might suppose that p_n = 1 for n < C and p_n = 0 for n ≥ C.

This process corresponds to a memoryless server queue having Poisson arrivals. A typical evolution is illustrated in Figure 5.4. The arrival process for {X(t)} is a Poisson process of rate λ. The reversed time process Y(t) = X(−t) has the same probabilistic laws as does {X(t)}, so the arrival process for {Y(t)} also must be a Poisson process of rate λ. But the arrival process for {Y(t)} is the departure process for {X(t)} (see Figure 5.4). Thus it must be that these departure instants also form a Poisson process of rate λ. In particular, if D(t) counts the departures in the process over the duration (0, t], then
Pr{D(t) = j} = (λt)ʲe^{−λt}/j!      for j = 0, 1, ....      (5.12)
Figure 5.4 A typical evolution of a queueing process. The instants of arrivals and departures have been isolated on two time axes below the graph.
Moreover, looking at the reversed process Y(−t) = X(t), the "future" arrivals for Y(−t) in the duration [−t, 0) are independent of Y(−t) = X(t). (See Figure 5.4.) These future arrivals for Y(−t) are the departures for {X(t)} in the interval (0, t]. Therefore, these departures and X(t) = Y(−t) must be independent. Since Pr{X(t) = k} = π_k, by the assumption of stationarity, the independence of D(t) and X(t) and (5.12) give
Pr{X(t) = k, D(t) = j} = Pr{X(t) = k} Pr{D(t) = j} = π_k (λt)ʲe^{−λt}/j!,
and the proof of Theorem 5.1 is complete.
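Theorem 5.1 invites a simulation check. The sketch below is our illustration (the parameter choices are arbitrary): it starts an M/M/1 queue in its stationary distribution, runs it to time t, and counts departures. By the theorem, D(t) is Poisson distributed with mean λt, so the sample mean and sample variance of D(t) should both be near λt.

```python
import random

def departures_in(lam, mu, t, rng):
    # Start X(0) geometric with ratio rho = lam/mu (the stationary law),
    # then run the M/M/1 chain to time t, counting departures.
    rho = lam / mu
    x = 0
    while rng.random() < rho:
        x += 1
    clock, count = 0.0, 0
    while True:
        rate = lam + (mu if x > 0 else 0.0)
        clock += rng.expovariate(rate)
        if clock > t:
            return count
        if rng.random() < lam / rate:
            x += 1              # arrival
        else:
            x -= 1              # departure
            count += 1

rng = random.Random(7)
lam, mu, t, reps = 0.5, 1.0, 10.0, 4000
samples = [departures_in(lam, mu, t, rng) for _ in range(reps)]
mean_D = sum(samples) / reps
var_D = sum((d - mean_D) ** 2 for d in samples) / (reps - 1)
```

Both statistics settle near λt = 5, consistent with a Poisson departure stream.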
Exercises
5.1. Consider the three server network pictured here:

[Figure: a three-server network. Arrivals form a Poisson process of rate λ = 2 at server #1; routing probabilities include p₁₂ and p₂₁ = 0.2; server #2 has service rates μ₂ₙ = 3 for n ≥ 1; customers depart the system with probability 0.8.]
In the long run, what fraction of time is server #2 idle while, simultaneously, server #3 is busy? Assume that all service times are exponentially distributed.
5.2. Refer to the network of Exercise 5.1. Suppose that Server #2 and Server #3 share a common customer waiting area. If it is desired that the total number of customers being served and waiting to be served not exceed the waiting area capacity more than 5% of the time in the long run, how large should this area be?
592
IX
Gueueing Systems
Problems
5.1. Suppose three service stations are arranged in tandem so that the departures from one form the arrivals for the next. The arrivals to the first station are a Poisson process of rate λ = 10 per hour. Each station has a single server, and the three service rates are μ₁ = 12 per hour, μ₂ = 20 per hour, and μ₃ = 15 per hour. In-process storage is being planned for station 3. What capacity C₃ must be provided if, in the long run, the probability of exceeding C₃ is to be less than or equal to 1 percent? That is, what is the smallest number C₃ = c for which lim_{t→∞} Pr{X₃(t) > c} ≤ 0.01?
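A sketch of one way to carry out this computation (our illustration): applying the departure-process result of Section 5 twice, each station in the tandem sees Poisson arrivals at rate λ, so in the long run station 3 behaves as an M/M/1 queue with ρ₃ = λ/μ₃ and Pr{X₃ > c} = ρ₃^{c+1}.

```python
lam = 10.0
mus = [12.0, 20.0, 15.0]            # mu1, mu2, mu3 from Problem 5.1
rhos = [lam / mu for mu in mus]     # each must be < 1 for stability
assert all(r < 1.0 for r in rhos)

# For a stationary M/M/1 queue, Pr{X > c} = rho**(c+1).
# Find the smallest c with Pr{X3 > c} <= 0.01.
rho3 = rhos[2]
c = 0
while rho3 ** (c + 1) > 0.01:
    c += 1
```

With ρ₃ = 2/3, the loop stops at c = 11, since (2/3)¹² ≈ 0.0077 ≤ 0.01 < (2/3)¹¹.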
6. General Open Networks
The preceding section covered certain memoryless queueing networks in which a customer could visit any particular server at most once. With this assumption, the departures from any service station formed a Poisson process that was independent of the number of customers at that station in steady state. As a consequence, the numbers X₁(t), X₂(t), ..., X_K(t) of customers at the K stations were independent random variables, and the product form solution expressed in (5.2) prevailed. The situation where a customer can visit a server more than once is more subtle. On the one hand, many flows in the network are no longer Poisson. On the other hand, rather surprisingly, the product form solution of (5.2) remains valid.
Example  To begin our explanation, let us first reexamine the simple feedback model of Section 4.3. The flow is depicted in Figure 6.1. The arrival process is Poisson, but the input to the server is not. (The distinction between the arrival and input processes is made in Figures 1.2 and 4.1.)

[Figure 6.1: A single server with feedback. Arrivals (Poisson, rate λ) merge with the feedback stream to form the input; each output is fed back with probability p or departs with probability q. The input, output, and feedback streams are not Poisson; the departure stream is Poisson.]
The output process, as shown in Figure 6.1, is not Poisson, nor is it independent of the number of customers in the system. Recall that each customer in the output is fed back with probability p and departs with probability q = 1 − p. In view of this non-Poisson behavior, it is remarkable that the distribution of the number of customers in the system is the same as that in an M/M/1 system whose input rate is λ/q and whose service rate is μ, as verified in (4.4).
Example  Let us verify the product form solution in a slightly more complex two server network, depicted in Figure 6.2.

[Figure 6.2: A two server feedback system. Poisson arrivals of rate λ enter server #1 (rate μ₁); the output of server #1 departs with probability q or is fed back, through server #2 (rate μ₂), with probability p. For example, server #2 in this system might be an inspector returning a fraction p of the output for rework.]
If we let X_i(t) denote the number of customers at station i at time t, for i = 1, 2, then X(t) = [X₁(t), X₂(t)] is a Markov chain whose transition rates are given in the following table:

From State       To State         Rate    Description
(m, n)           (m + 1, n)       λ       Arrival of new customer
(m, n), n ≥ 1    (m + 1, n − 1)   μ₂      Input of feedback customer
(m, n), m ≥ 1    (m − 1, n)       qμ₁     Departure of customer
(m, n), m ≥ 1    (m − 1, n + 1)   pμ₁     Feedback to server #2
Let π_{m,n} = lim_{t→∞} Pr{X₁(t) = m, X₂(t) = n} be the stationary distribution of the process. Reasoning analogous to that of (6.11) and (6.12) of VI (where the theory was developed for finite-state Markov chains) leads to the following equations for the stationary distribution:

λπ_{0,0} = qμ₁π_{1,0},      (6.1)

(λ + μ₂)π_{0,n} = pμ₁π_{1,n−1} + qμ₁π_{1,n},      n ≥ 1,      (6.2)

(λ + μ₁)π_{m,0} = λπ_{m−1,0} + qμ₁π_{m+1,0} + μ₂π_{m−1,1},      m ≥ 1,      (6.3)

(λ + μ₁ + μ₂)π_{m,n} = λπ_{m−1,n} + μ₂π_{m−1,n+1} + qμ₁π_{m+1,n} + pμ₁π_{m+1,n−1},      m, n ≥ 1.      (6.4)

The mass balance interpretation as explained following (6.12) in VI may help motivate (6.1) through (6.4). For example, the left side in (6.1) measures the total rate of flow out of state (0, 0) and is jointly proportional to π_{0,0}, the long run fraction of time the process is in state (0, 0), and λ, the (conditional) transition rate out of (0, 0). Similarly, the right side of (6.1) measures the total rate of flow into state (0, 0). Using the product form solution in the acyclic case, we will "guess" a solution and then verify that our guess indeed satisfies (6.1) through (6.4). First we need to determine the input rate, call it λ₁, to server #1. In equilibrium, the output rate must equal the input rate, and of this output, the fraction p is returned to join the new arrivals after visiting server #2. We have
Input Rate = New Arrivals + Feedback,

which translates into

λ₁ = λ + pλ₁,      or      λ₁ = λ/(1 − p) = λ/q.      (6.5)

The input rate to server #2 is
λ₂ = pλ₁ = pλ/q.      (6.6)
The solution that we guess is to treat server #1 and server #2 as independent M/M/1 systems having input rates λ₁ and λ₂, respectively (even though we know from our earlier discussion that the input to server #2, while of rate λ₂, is not Poisson). That is, we attempt a solution of the form

π_{m,n} = (1 − λ₁/μ₁)(λ₁/μ₁)^m (1 − λ₂/μ₂)(λ₂/μ₂)^n      for m, n ≥ 0.      (6.7)

It is immediate that

Σ_{m=0}^∞ Σ_{n=0}^∞ π_{m,n} = 1,

provided that λ₁ = (λ/q) < μ₁ and λ₂ = pλ/q < μ₂.
We turn to verifying (6.1) through (6.4). Let

θ_{m,n} = (λ/qμ₁)^m × (pλ/qμ₂)^n.

It suffices to verify that θ_{m,n} satisfies (6.1) through (6.4), since π_{m,n} and θ_{m,n} differ only by the constant multiple π_{0,0} = (1 − λ₁/μ₁) × (1 − λ₂/μ₂). Thus we proceed to substitute θ_{m,n} into (6.1) through (6.4) and verify that equality is obtained. We verify (6.1):

λθ_{0,0} = λ = qμ₁(λ/qμ₁) = qμ₁θ_{1,0}.
We verify (6.2):

(λ + μ₂)(pλ/qμ₂)^n = pμ₁(λ/qμ₁)(pλ/qμ₂)^{n−1} + qμ₁(λ/qμ₁)(pλ/qμ₂)^n,

or, after dividing by (pλ/qμ₂)^n and simplifying,

λ + μ₂ = (pλ/q)(qμ₂/pλ) + λ = μ₂ + λ.
We verify (6.3):

(λ + μ₁)(λ/qμ₁)^m = λ(λ/qμ₁)^{m−1} + qμ₁(λ/qμ₁)^{m+1} + μ₂(λ/qμ₁)^{m−1}(pλ/qμ₂),

which, after dividing by (λ/qμ₁)^m, becomes

λ + μ₁ = λ(qμ₁/λ) + qμ₁(λ/qμ₁) + μ₂(qμ₁/λ)(pλ/qμ₂),

or

λ + μ₁ = qμ₁ + λ + pμ₁ = λ + μ₁.

The final verification, that θ_{m,n} satisfies (6.4), is left to the reader as Exercise 6.1.
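The substitutions just carried out, together with the one left to Exercise 6.1, are easy to confirm numerically. A sketch with arbitrarily chosen parameter values (our illustration):

```python
lam, p, mu1, mu2 = 2.0, 0.3, 5.0, 4.0
q = 1.0 - p

def theta(m, n):
    # Unnormalized product-form guess from (6.7):
    # theta(m, n) = (lam/(q*mu1))**m * (p*lam/(q*mu2))**n.
    return (lam / (q * mu1)) ** m * (p * lam / (q * mu2)) ** n

def residuals():
    eps = []
    # (6.1): lam*theta(0,0) = q*mu1*theta(1,0)
    eps.append(lam * theta(0, 0) - q * mu1 * theta(1, 0))
    for n in range(1, 5):   # (6.2)
        eps.append((lam + mu2) * theta(0, n)
                   - p * mu1 * theta(1, n - 1) - q * mu1 * theta(1, n))
    for m in range(1, 5):   # (6.3)
        eps.append((lam + mu1) * theta(m, 0)
                   - lam * theta(m - 1, 0) - q * mu1 * theta(m + 1, 0)
                   - mu2 * theta(m - 1, 1))
    for m in range(1, 4):   # (6.4)
        for n in range(1, 4):
            eps.append((lam + mu1 + mu2) * theta(m, n)
                       - lam * theta(m - 1, n) - mu2 * theta(m - 1, n + 1)
                       - q * mu1 * theta(m + 1, n) - p * mu1 * theta(m + 1, n - 1))
    return eps

worst = max(abs(e) for e in residuals())
```

Every residual vanishes to floating-point precision, confirming that θ_{m,n} satisfies the full set of balance equations.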
6.1. The General Open Network
Consider an open queueing network having K service stations, and let X_k(t) denote the number of customers at station k at time t. We assume that

1. The arrivals from outside the network to distinct servers form independent Poisson processes, where the outside arrivals to station k occur at rate λ_{0k}.

2. The departures from distinct servers independently travel instantly to other servers, or leave the system, with fixed probabilities, where the probability that a departure from station j travels to station k is P_{jk}.
3. The service times are memoryless, or Markov, in the sense that

Pr{Server #k completes a service in (t, t + Δt] | X_k(t) = n} = μ_{kn}Δt + o(Δt)      for n = 1, 2, ...,      (6.8)

and does not otherwise depend on the past.

4. The system is in statistical equilibrium (stationary).
5. The system is completely open in that all customers in the system eventually leave.

Let λ_k be the rate of input at station k. The input at station k is composed of customers entering from outside the system, at rate λ_{0k}, plus customers traveling from (possibly) other stations. The input to station k from station j occurs at rate λ_j P_{jk}, whence, as in (5.3),

λ_k = λ_{0k} + Σ_{j=1}^{K} λ_j P_{jk}      for k = 1, ..., K.      (6.9)

Condition 5 above, that all entering customers eventually leave, ensures that (6.9) has a unique solution.
With λ₁, ..., λ_K given by (6.9), the main result is the product form solution

Pr{X₁(t) = n₁, X₂(t) = n₂, ..., X_K(t) = n_K} = ψ₁(n₁)ψ₂(n₂) ··· ψ_K(n_K),      (6.10)

where

ψ_k(n) = π_{k0} λ_kⁿ/(μ_{k1}μ_{k2} ··· μ_{kn})      for n = 1, 2, ...,      (6.11)

and

ψ_k(0) = π_{k0} = {1 + Σ_{n=1}^∞ λ_kⁿ/(μ_{k1}μ_{k2} ··· μ_{kn})}^{−1}.      (6.12)
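Equations (6.9) through (6.12) are straightforward to evaluate. The sketch below (our illustration; the function names are invented) solves the traffic equations by fixed-point iteration, which converges for an open network since all customers eventually leave, and specializes (6.11) and (6.12) to constant service rates μ_{kn} = μ_k, in which case ψ_k is the geometric M/M/1 marginal.

```python
def solve_traffic(lam0, P, iters=500):
    # Fixed-point iteration of (6.9): lam_k = lam0_k + sum_j lam_j * P[j][k].
    K = len(lam0)
    lam = list(lam0)
    for _ in range(iters):
        lam = [lam0[k] + sum(lam[j] * P[j][k] for j in range(K))
               for k in range(K)]
    return lam

def marginal(lam_k, mu_k, n):
    # With constant rates mu_{kn} = mu_k, (6.11)-(6.12) reduce to the
    # geometric M/M/1 marginal: psi_k(n) = (1 - rho) * rho**n.
    rho = lam_k / mu_k
    return (1.0 - rho) * rho ** n

# Two-server feedback example of Figure 6.2: lam0 = (lam, 0),
# P12 = p, P21 = 1; expect lam1 = lam/q and lam2 = p*lam/q.
lam, p = 2.0, 0.3
q = 1.0 - p
lam1, lam2 = solve_traffic([lam, 0.0], [[0.0, p], [1.0, 0.0]])
```

The iteration reproduces (6.5) and (6.6), and the marginal factors sum to one whenever λ_k < μ_k.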
Example  The example of Figure 6.1 (see also Section 4.3) corresponds to K = 1 (a single service station) for which P₁₁ = p < 1. The external arrivals are at rate λ₀₁ = λ, and (6.9) becomes

λ₁ = λ₀₁ + λ₁P₁₁,      or      λ₁ = λ + λ₁p,

which gives λ₁ = λ/(1 − p) = λ/q. Since the example concerns a single server, then μ₁ₙ = μ for all n, and (6.11) becomes

ψ₁(n) = π₁₀(λ₁/μ)ⁿ = π₁₀(λ/qμ)ⁿ,
where

π₁₀ = 1 − λ/(qμ),

in agreement with (4.4).
Example  Consider next the two server example depicted in Figure 6.2. The data given there furnish the following information:

λ₀₁ = λ,  λ₀₂ = 0,  P₁₁ = 0,  P₁₂ = p,  P₂₁ = 1,  P₂₂ = 0,

which substituted into (6.9) gives

λ₁ = λ + λ₂(1),  λ₂ = 0 + λ₁(p),

which readily yields

λ₁ = λ/q  and  λ₂ = pλ/q,

in agreement with (6.5) and (6.6). It is readily seen that the product solution of (6.10) through (6.12) is identical with (6.7), which was directly verified as the solution in this example.
Exercises
6.1. In the case m ≥ 1, n ≥ 1, verify that θ_{m,n} as given following (6.7) satisfies the equation for the stationary distribution (6.4).
Problems
6.1. Consider the three server network pictured here:

[Figure: a three-server network. Arrivals form a Poisson process of rate λ = 2 at server #1; the service rates are μ₁ₙ = 6, μ₂ₙ = 3, and μ₃ₙ = 2 for n ≥ 1; routing probabilities include p₂₁ = 0.2; customers depart the system with probability 0.8.]

In the long run, what fraction of the time is server #2 idle while, simultaneously, server #3 is busy? Assume that the system satisfies assumptions (1) through (5) of a general open network.
Further Reading

Elementary Textbooks

Breiman, L. Probability and Stochastic Processes with a View Toward Applications. Boston: Houghton Mifflin, 1969.
Çinlar, E. Introduction to Stochastic Processes. Englewood Cliffs, N.J.: Prentice-Hall, 1975.
Cox, D. R., and H. D. Miller. The Theory of Stochastic Processes. New York: John Wiley & Sons, 1965.
Hoel, P. G., S. C. Port, and C. J. Stone. Introduction to Stochastic Processes. Boston: Houghton Mifflin, 1972.
Kemeny, J. G., and J. L. Snell. Finite Markov Chains. New York: Van Nostrand Reinhold, 1960.

Intermediate Textbooks

Bhat, U. N. Elements of Applied Stochastic Processes. New York: John Wiley & Sons, 1972.
Breiman, L. Probability. Reading, Mass.: Addison-Wesley, 1968.
Dynkin, E. B., and A. A. Yushkevich. Markov Processes: Theorems and Problems. New York: Plenum, 1969.
Feller, W. An Introduction to Probability Theory and Its Applications. 2 vols. New York: John Wiley & Sons, 1966 (vol. 2), 1968 (vol. 1, 3rd ed.).
Karlin, S., and H. M. Taylor. A First Course in Stochastic Processes. New York: Academic Press, 1975.
Karlin, S., and H. M. Taylor. A Second Course in Stochastic Processes. New York: Academic Press, 1981.
Kemeny, J. G., J. L. Snell, and A. W. Knapp. Denumerable Markov Chains. New York: Van Nostrand Reinhold, 1966.
Ross, S. M. Stochastic Processes. New York: John Wiley & Sons, 1983.
Ross, S. M. Introduction to Probability Models, 5th ed. New York: Academic Press, 1993.

Renewal Theory

Kingman, J. F. C. Regenerative Phenomena. New York: John Wiley & Sons, 1972.

Queueing Processes

Kleinrock, L. Queueing Systems. Vol. 1, Theory. Vol. 2, Computer Applications. New York: John Wiley & Sons, Interscience, 1976.

Branching Processes

Athreya, K. B., and P. Ney. Branching Processes. New York: Springer-Verlag, 1970.
Harris, T. The Theory of Branching Processes. New York: Springer-Verlag, 1963.

Stochastic Models

Bartholomew, D. J. Stochastic Models for Social Processes. New York: John Wiley & Sons, 1967.
Bartlett, M. S. Stochastic Population Models in Ecology and Epidemiology. New York: John Wiley & Sons, 1960.
Goel, N. S., and N. Richter-Dyn. Stochastic Models in Biology. New York: Academic Press, 1974.

Point Processes

Lewis, P. A. Stochastic Point Processes: Statistical Analysis, Theory, and Applications. New York: John Wiley & Sons, Interscience, 1972.
Answers to Exercises
Chapter I

2.1. Because B and Bᶜ are disjoint events whose union is the whole sample space, the law of total probability (Section 2.1) applies to give the desired formula.
2.3.
(b) f(x) = 0 for x < 0; f(x) = 3x² for 0 ≤ x ≤ 1; f(x) = 0 for 1 < x.
(c) E[X] = 3/4.
(d) Pr{X ≤ 3/4} = 27/64.
(b) E[X] = R/(1 + R). (c) Var[X] = R/[(R + 2)(R + 1)²].
2.8. f(v) = A(1 − v)^{A−1} for 0 ≤ v ≤ 1; E[V] = 1/(A + 1); Var[V] = A/[(A + 2)(A + 1)²].
2.9.
for x < 0;
0 x2
FX(x)
for x 1; Var[X] = 16-
3.1.
Pr{X = 3} = ;2.
3.2.
Pr{0 defective} = 0.3151. Pr{0 or 1 defective} = 0.9139.
3.3.
Pr{N = 10} = 0.0315.
3.4.
Pr{X = 2} = 2e⁻² = 0.2707. Pr{X ≤ 2} = 5e⁻² = 0.6767.
3.5.
Pr{X ≥ 8} = 0.1334.
3.6.
(a) Mean = (n + 1)/2; Variance = (n² − 1)/12.
(b) Pr{Z = m} = (m + 1)/(n + 1)² for m = 0, ..., n;
    Pr{Z = m} = (2n + 1 − m)/(n + 1)² for m = n + 1, ..., 2n.
(c) Pr{U = k} = [1 + 2(n − k)]/(n + 1)² for k = 0, ..., n.

4.1.
Pr{X > 1.5} = e⁻³ = 0.0498. Pr{X = 1.5} = 0.

4.2. Median = (log 2)/λ; Mean = 1/λ.

4.3. Exponential distribution with parameter λ/2.54.

4.4. Mean = 0; Variance = 1.
Qy - PO'xQY
4.5.
a*
4.6.
(a) f,.(y) = e-,
uX + 02
for
2po-xv,.
p
±1.
for y,::> 0. c- iv 1 1 (b) f,,(w) = (w) for 0 < w < 1. n
4.7. f_R(r) = λ²re^{−λr} for r > 0 (a gamma density).

5.1. Pr{X ≥ 1} = 0.6835938; Pr{X ≥ 2} = 0.2617188; Pr{X ≥ 3} = 0.0507812; Pr{X ≥ 4} = 0.0039062.
5.2.
Mean = ;.
5.3.
E[X] _
5.4.
(a) E[X,,] =
E[XB] = 3 (b) E[min{XA, XB}] = s;
(c) Pr{X,, < XB) = 5; (d) E[XB - XAjXA < XB] _ ;.
5.5.
(a) Pr{Naomi is last) = ;; (b) Pr{Naomi is last) _ 2 = 0.1128; .,j'
(c) c = 2 + √3.
Chapter II

1.1.
Pr{N=3,X=2} =,=6; Pr{X=5} =48; E[X] = '-.
1.2.
Pr(2 nickel heads N = 4}
1.3.
Pr{X > 1IX ? 1 } = 0.122184; Pr { X > 1 lAce of spades) = 0.433513.
1.4.
Pr{X = 2} = 0.2204.
1.5. E[X | X is odd] = λ(e^λ + e^{−λ})/(e^λ − e^{−λ}).

1.6. Pr{U = u, Z = z} = p²(1 − p)^z for 0 ≤ u ≤ z;
Pr{U = u | Z = n} = 1/(n + 1) for 0 ≤ u ≤ n.
2.1.
Pr{Game ends in a 4} = o.
2.3.
Pr(Win) = 0.468984.
3.1.
k:         0        1        2        3        4        5        6
Pr{Z = k}: 0.16406  0.31250  0.25781  0.16667  0.07552  0.02083  0.00260

E[Z] = 1.75; Var[Z] = 1.604167.
3.2.
E[Z] = Var[Z] = 8; Pr{Z = 2} = 0.29663.
3.3.
E[Z] = A2; Var[Z] = µ(1 + µ)c72.
3.4.
Pr{X = 2} = 0.2204; E[X] = 2.92024.
3.5.
E[Z] = 6; Var[Z] = 26.
4.1.
Pr{X= 2} _;.
4.2.
Pr {System operates l = ;.
4.3.
Pr{U > ½} = 1 − ½(1 + log 2) = 0.1534.
4.4.
f_Z(z) = 1/(1 + z)²      for 0 < z < ∞.
4.5. f_{U,V}(u, v) = e^{−u−v} for u > 0, v > 0.

5.1.
x:          0     1     2
Pr{X > x}: 0.61  0.37  0.14

5.2. Pr{X ≥ 1} = E[X] = p.
Chapter III

1.1.
0.
1.2.
0.12, 0.12.
1.3.
0.03.
1.4.
0.02, 0.02.
1.5.
0.025, 0.0075.
2.1.
(a)
P² =
  0.47  0.13  0.40
  0.42  0.14  0.44
  0.26  0.17  0.57

(b) 0.13. (c) 0.16.
0
Pr{X,, =OIX)=O}
1
2.2.
2.3.
0.264, 0.254.
2.4.
0.35.
2.5.
0.27, 0.27.
1
2
3
0
'
'
4 ;
2
4
s
2.6.
0.42, 0.416.
3.1.
P =
        −1     0     1     2     3
  −1 |   0     0    0.3   0.3   0.4
   0 |   0     0    0.3   0.3   0.4
   1 |  0.3   0.3   0.4    0     0
   2 |   0    0.3   0.3   0.4    0
   3 |   0     0    0.3   0.3   0.4
3 2 .
.
(
NIP + N N l )q;
P'
(N_ i) P'', =
(-)q. -1
3.3.
P =
        −1     0     1     2     3
  −1 |   0     0    0.1   0.4   0.5
   0 |   0     0    0.1   0.4   0.5
   1 |  0.1   0.4   0.5    0     0
   2 |   0    0.1   0.4   0.5    0
   3 |   0     0    0.1   0.4   0.5
3.4.
P =
        −2    −1     0     1     2     3
  −2 |   0     0    0.2   0.3   0.4   0.1
  −1 |   0     0    0.2   0.3   0.4   0.1
   0 |   0     0    0.2   0.3   0.4   0.1
   1 |  0.2   0.3   0.4   0.1    0     0
   2 |   0    0.2   0.3   0.4   0.1    0
   3 |   0     0    0.2   0.3   0.4   0.1
I
1
2
1
0
2
'
0
' 2
0
1
0
0
3.5.
I
4.1.
v03 = I*
4.2.
(a) u,o=1;
4.3.
(a) u,o- = 1o05 .
-s
- ,o (b) v, T-
4.4.
v0 = 6.
4.5.
uH,TT
4.6.
4.7.
= '3 6.
(a) uI0 = 23 (b)- ' vl = 23 W
- zo.
-
W. 2
= 2s
as
V, - ,,.
4.8. w₁₁ = 1.290; w₁₂ = 0.323; v₁ = 1.613.
4.9. u₁₀ = 9/22 = 0.40909; P₁₀⁽ⁿ⁾ = 0.17, 0.2658, 0.35762, ..., 0.40245.
.
5.1.
0.71273.
5.2.
(a) 0.8044; 0.99999928 (b) 0.3578; 0.00288
.
5.3.
P² = | 0.42  0.58 |      P³ = | 0.474  0.526 |
     | 0.49  0.51 |           | 0.447  0.553 |

P⁴ = | 0.5422  0.4578 |      P⁵ = | 0.53734  0.46266 |
     | 0.5341  0.4659 |           | 0.53977  0.46023 |

5.4.
0
1
2
3
0
0
0
1
5.5.
PcO = 0.820022583.
5.6.
2.73.
5.7.
u,o = 0.3797468.
5.8.
p₀ = α, r₀ = 1 − α; pᵢ = α(1 − β), qᵢ = β(1 − α), rᵢ = αβ + (1 − α)(1 − β), for i ≥ 1.

5.9. p₀ = 1, q₀ = 0; pᵢ = p, qᵢ = q, rᵢ = 0 for i ≥ 1.

6.1.
(a) u3s = ; (p)2J/L1
(b) uss = I
1
-
- Ip
].
6.2.
u₁₀ = 0.65.
6.3.
v = 2152.777
6.4.
v₁ = 2.1518987.
7.1.
w (a) u10
20
25
7t
tJ
1o
40
I1
11
--22;
9 . 20.
(/b1
\u ) WII
11; W12
7.2.
70 79
100
w=
- 25 71
79 30 79
100
79
a) ulo 30 79; - 100. _ 70 (b) w - 79; W12 - 79 (.
8.1.
M(n) = 1, V(n) = n.
8.2.
8.2. μ = b + 2c; σ² = b + 4c − (b + 2c)².
8.3.
n:    1     2      3      4
uₙ:  0.5   0.625  0.695  0.742
8.4.
M(n)=A",V(n)=A"(1-' ,JIB 1.
9.1.
n:    1      2      3      4      5
uₙ:  0.333  0.480  0.564  0.619  0.658

u∞ = 0.82387.

9.2. φ(s) = p₀ + p₂s².

9.3. φ(s) = p + qsⁿ.

9.4. φ(0)/(1 − φ(0)).
Chapter IV

1.1.
IT0
10,
-?,
7T 1 =
5 21
6
, 1T,
31
16
66
66
-
_ 19 66,
1.3.
1T I1
1.4.
2.94697.
1G .5.
790 - 2 9 , 7r1 - 2 9 7 T 2 = 29, 1T 3 - 29
1.6.
1TO
1.7.
- 40 - 135 L40 7r0-441,1 441,192-441,193-
1.8.
7r = 177.
1.9.
1r0
1'3
10
_9
5
S
14,1f1
=3
A, 2
- 14-
-
_
2
3
,
I
71 72
126 441
7
17
1.10. irlatc
40
2.1.
1T,. = 9.
2.2. One facility: Pr{Idle} = q²/(1 + p²).
Two facilities: Pr{Idle} = 1/(1 + p + p²).

2.3.
(a)
p:     0     0.02   0.04   0.06   0.08   0.10
AFI:   0.10  0.11   0.12   0.13   0.14   0.16
AOQ:   0     0.018  0.036  0.054  0.072  0.090

(b)
p:     0     0.02   0.04   0.06   0.08   0.10
AFI:   0.20  0.23   0.27   0.32   0.37   0.42
AOQ:   0     0.016  0.032  0.048  0.064  0.080

2.4.
p:    0.05   0.10   0.15   0.20   0.25
R₁:   0.998  0.990  0.978  0.962  0.941
R₂:   0.998  0.991  0.981  0.968  0.952
2.5.
ITA=5.
2.6.
ao = .
2.7.
(a) 0.6831;
613
(b) ir,=ar,=-0°,ir,=,',; (c) ?r3 = ,!,.
2.8. 3.1.
?r 3 = 8SI
{n ≥ 1; P₀₀⁽ⁿ⁾ > 0} = {5, 8, 10, 13, 15, 16, 18, 20, 21, 23, 24, 25, 26, 28, ...}; d(0) = 1; P₀₀⁽⁵⁷⁾ = 0; P_{ij}⁽³⁸⁾ > 0 for all i, j.
3.2.
Transient states: {0, 1, 3}. Recurrent states: {2, 4, 5}.
3.3.
(a) {0, 2}, {1, 3}, {4, 5}; (b) {0}, {5}, {1, 2}, {3, 4}.
3.4.
{0}, d = 1;
{1}, d = 0;
{2, 3, 4, 5}, d = 1.

4.1. π_k = pᵏ/(1 + p + p² + p³ + p⁴) for k = 0, ..., 4.
4.2.
,4i9 (a) ?r0 - - 9999 8550 (b) m ,o10 - 1449
4.3.
π₀ = π₁ = 0.2, π₂ = π₃ = 0.3.
5.1.
lim P`"" = lim P`,"o' = 0.4;
limP;o'= limPo'= 0; lim Pa'o = 0.4.
5.2.
(a)
(b) 0, (c)
3I,
(d) 9,
(e) 3, (f) X, (g) 5, (h) 27,
Chapter V

1.1.
(a)
a-22;
(b) e2.
k =0, 1, ... .
1.2.
pₖ/pₖ₋₁ = λ/k,
1.3. Pr{X = k | N = n} = C(n, k)pᵏ(1 − p)ⁿ⁻ᵏ, k = 0, 1, ..., n, where p = α/(α + β).

1.4. (a) (λt)ᵏe^{−λt}/k!;
(b) Pr{X(t) = n + k | X(s) = n} = [λ(t − s)]ᵏe^{−λ(t−s)}/k!;
E[X(t)X(s)] = λ²ts + λs.
1.5.
Pr{X = k} = (1 − p)pᵏ for k = 0, 1, ..., where p = 1/(1 + θ).
1.6.
(a) a-12;
(b) Exponential, parameter A 1.7.
= 3.
(a) 2e-2; (b) 64 e -6., (6,)(1)2(2)4; (c) 3 3 (d) 32e-4
1.8.
(a)
5e-22;
(b) 4e⁻⁴; (c) (1 − 3e⁻²)/(1 − e⁻²).

1.9. (a) 4; (b) 6; (c) 10.

2.1.
k:    0      1      2
(a)  0.290  0.370  0.225
(b)  0.296  0.366  0.221
(c)  0.301  0.361  0.217
2.2.
Law of rare events, e.g. (a) Many potential customers who could enter store, small probability for each to actually enter.
2.3.
The number of distinct pairs is large; the probability of any particular pair being in sample is small.
2.4.
Pr{Three pa9f s error free) - e-''-.
3.1.
a-6.
3.2.
(a) e-' - e-"; (b) 4e-'.
3.3.
;.
3.4.
()()'(3)' = xs TI
m=0,1,
3.5.
3.6.
F(t) = (1
3.7.
t+
2. A
3.8.
(i5)(2)S( )'. I
3.9.
Pr{W,. t + = {_(At)"e' n!
S)
le-As.
4.1.
M(t) = Zt - 6.
4.2.
8.
4.3.
T* =
4.4.
c(T) is a decreasing function of T.
4.5.
h(x) = 2(1 − x) for 0 ≤ x < 1.
4.6.
(a)
8
a
a+p
;
- 15
/LA
5 .1 . 1
5.2.
9
5.3.
T* = cc.
6.1.-6.2.
uₙ (6.1): 1.1111, 0.96296, 1.01235, 0.99588, 1.00137, 0.99954, 1.00015, 0.99995, 1.00002, 0.99999.
uₙ (6.2): 0.6667, 1.33333, 0.88888, 1.03704, 0.98765, 1.00412, 0.99863, 1.00046, 0.99985, 1.00005, 0.99998, 1.00001.

6.3. E[X] = 1; Σₖ bₖ = 1.
Chapter VIII

1.1.
(a) 0.8413. (b) 4.846.
1.2.
Cov[W(s), W(t)] = min{s, t}.
1.3.
ap = ',_cp(z)t-'[z2 - 1]; ax
= r zcp(z).
1.4.
(a) 0. (b) 3u2 + 3iuv + uw.
1.5.
(a) a-'-`;
(b) t(1 -s)
for
0 < t < s < 1;
(c) min{s, t}.
1.6.
(a) Normal, μ = 0, σ² = 4u + v;
(b) Normal, μ = 0, σ² = 9u + 4v + w.

1.7.
2.1.
(a) 0.6826. (b) 4.935.
2.2.
If tan 0 = V, then cos 0 = t/(.
2.3.
0.90.
2.4.
Reflection principle: Pr{Mₙ ≥ a, Sₙ < a} = Pr{Mₙ ≥ a, Sₙ > a} (= Pr{Sₙ > a}).
Also, Pr{Mₙ ≥ a, Sₙ = a} = Pr{Sₙ = a}.

2.5.
Pr(,r0