8. Random Processes
German Hernandez
Mathematical Modeling of Telecommunication Systems
Outline
1. Random Processes
2. Stationary Random Processes
3. Markov Processes and Markov Chains
4. Brownian Motion
5. White Noise
6. Poisson Process
7. Queues
1. Random Processes
Intuitively, a stochastic process or random process [13] is a function that assigns a time process (i.e., a function of time), or a space process (i.e., a function of space), to each outcome of a random experiment. Formally, a stochastic process X is defined as a function.
A stochastic process can also be seen [6] as a family of random elements (measurable functions) that describe the evolution of a probabilistic system in time or space. A stochastic process X is then defined as the function

X : Ω → S^T

such that:
(i) (Ω, A, P) is a probability space; T is a parameter set that usually represents time or space; (S, B) is a measurable space called the state space, which usually is the real line R with B = B(R); and S^T = {f | f : T → S}, i.e., the set of functions from T to S.
(ii) Let X_ω denote the function from T to S associated with ω ∈ Ω. For a fixed t ∈ T, let X_t : Ω → S be defined as X_t(ω) = X_ω(t). For all t ∈ T, X_t is a random element (a measurable function) from Ω to S. When S = R, X_t is a random variable for each t ∈ T.
There are three ways to view a stochastic process [6]:
• A family of realizations:
{X_ω(t)}_{ω∈Ω} = {X_ω : T → S}_{ω∈Ω}.
• A family of random elements that describe the evolution of a probabilistic system in time or space:
{X(t)}_{t∈T} = {X_t : Ω → S}_{t∈T}.
• A function X : T × Ω → S.
[Figure: a stochastic process as a function X : Ω × T → S; each ω ∈ Ω fixes a realization X_ω : T → S (a collection of realizations), and each t ∈ T fixes a random element X_t : Ω → S (a collection of random elements).]
Stochastic process as a function of T × Ω in S
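As a concrete sketch of these three views, the snippet below (an illustrative construction, not from the text) builds a toy process with a finite set of pre-drawn outcomes and extracts one realization X_ω and one random element X_t:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stochastic process X : Omega x T -> S with a finite Omega of pre-drawn
# outcomes and T = {0, ..., 9}; each omega fixes an entire random-walk path.
T = np.arange(10)
Omega = range(5)
paths = {w: np.concatenate(([0], np.cumsum(rng.choice([-1, 1], size=9))))
         for w in Omega}

X_omega = paths[0]                            # realization: t varies, omega fixed
X_t = np.array([paths[w][3] for w in Omega])  # random element: omega varies, t = 3
print(X_omega, X_t)
```

Fixing ω gives a whole path over T; fixing t gives one value per outcome, i.e., a random variable.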
2. Markov Processes and Markov Chains
A stochastic process {X_t}_{t∈T} is a Markov process if and only if
P[X_t ∈ B | σ(X_u : u ≤ s)] = P[X_t ∈ B | σ(X_s)],
where
(i) t, s ∈ T with s ≤ t;
(ii) B ∈ B;
(iii) σ(X_u : u ≤ s) is the σ-algebra generated by {X_u : u ≤ s}, i.e., the minimal σ-algebra in A that makes all the variables {X_u : u ≤ s} measurable; and
(iv) σ(X_s) is the σ-algebra generated by X_s.
The σ-algebra σ(X_u : u ≤ s) contains the history of the process up to time s and is composed of the events whose occurrence or non-occurrence can be determined at time s.
This property is called the Markov property and implies that, given the present, the future is statistically independent of the past. In other words, once the current state is known, the past contains no additional information about the future [8]. For a Markov process there exists a family of transition probability functions
P(s, x, t, B) = P[X_t ∈ B | X_s = x].
[Figure: the transition probability function P(s, x, t, B), the probability that the process, in state x at time s, lies in the set B of the state space at time t.]
2.1. Finite space Markov chains
A finite space Markov chain [9][2][7] is a discrete-time, time-homogeneous Markov process,
{X_n}_{n∈T},
on a finite state space S = {s_1, s_2, ..., s_m} and with T a countable time set. It is customary to assume S = {1, 2, ..., m}, denoted also as [m], and T = Z+ = {0, 1, 2, ..., n, ...}. When the state space is finite, the evolution of the process can be described by the transition probability matrix
P = (p_{ij})_{m×m},
where p_{ij} = p(i, j) for i, j ∈ S = {1, 2, ..., m} is the transition probability of going from i to j, defined as
p(i, j) = P{X_{n+1} = j | X_n = i}.
[Figure: The Markov frog. From Stochastic Processes notes, J. Chang, http://pantheon.yale.edu/~jtc5/251/]
A Markov chain is completely characterized by three items:
(i) the state space S,
(ii) the transition probability matrix P,
(iii) an initial probability distribution π_0 on S.
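As a sketch, the three items can be written down directly; the two-state chain below is a made-up example, not one from the text:

```python
import numpy as np

# (i) state space S (0-based here), (ii) transition matrix P, (iii) initial
# distribution pi_0 -- a hypothetical two-state chain for illustration.
S = [0, 1]
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])   # each row sums to 1
pi0 = np.array([1.0, 0.0])

# The distribution of X_n is pi_0 P^n; one step gives:
pi1 = pi0 @ P
print(pi1)   # [0.9 0.1]
```

The row-stochastic matrix P encodes p(i, j), and left-multiplying a distribution by P advances it one step.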
2.2. Asymptotic dynamic behavior of Markov chains
Some of the fundamental questions about the structure of the asymptotic dynamic behavior of the chain are:
(a) Which states are visited from which starting points?
(b) How often are sets visited from different starting points?
(c) If starting from a random state obtained from the initial distribution π_0, does the chain evolve in the long term towards some steady state? A steady state is interpreted as an asymptotic probability distribution π on S such that
lim_{n→∞} P_{π_0}(X_n = k) = lim_{n→∞} p^n(i, k) = π(k),
i.e.,
lim_{n→∞} π_0 P^n = π.
To formalize the questions about the asymptotic behavior of the chain, let us define the occupation time of a state j, η_j, as the number of visits of {X_n}_{n∈T} to j after time zero, i.e.,
η_j := Σ_{n=1}^{∞} 1_{{X_n = j}}.
We have that the expected occupation time of j, starting from i, is
E_i[η_j] = Σ_{n=1}^{∞} P^n(i, j).
The first return time to j is defined as
τ_j := min{n ≥ 1 | X_n = j},
and the probability of ever reaching j from i as
L(i, j) := P_i(τ_j < ∞) = P{X_n ever reaches j starting at i}.
Two distinct states i and j in S communicate, written i ↔ j, if any of these three equivalent conditions is satisfied:
(i) L(i, j) > 0 and L(j, i) > 0;
(ii) there exist n_{ij} and l_{ji} such that P^{n_{ij}}(i, j) > 0 and P^{l_{ji}}(j, i) > 0;
(iii) Σ_{n=0}^{∞} P^n(i, j) > 0 and Σ_{n=0}^{∞} P^n(j, i) > 0.
The relation “↔” is an equivalence relation; thus the equivalence classes C(i) = {k | i ↔ k} cover S.
Finite Markov chains are classified in relation to their asymptotic behavior as:
• absorbing,
• irreducible (ergodic), and
• regular.
2.2.1. Absorbing Markov chains
An equivalence class C(i) is called absorbing if the chain, once it enters C(i), never leaves it, i.e.,
P(j, C(i)) = 1 for all j ∈ C(i).
A Markov chain is absorbing if it has at least one absorbing class and from every state it is possible to go to an absorbing class (not necessarily in one step).
Theorem 1 (Absorption probability) In an absorbing Markov chain the probability that the process is absorbed is 1; i.e., the Markov chain will fall into one of the absorbing classes with probability 1.
[Figure: Drunkard's walk absorbing chain on states 0, 1, ..., 5; each interior state moves to either neighbor with probability 1/2, and states 0 and 5 are absorbing.]
2.2.2. Ergodic Markov chains
A Markov chain is said to be irreducible or ergodic if every pair of states communicates. Then the whole state space is a single equivalence class of communicating states, i.e.,
C(i) = S for all i ∈ S.
[Figure: example of an ergodic chain.]
Theorem 2 (Law of large numbers for irreducible Markov chains) For an irreducible Markov chain with state space S and transition probability matrix P, there exists a unique probability distribution π on S such that:
(i) π(i) > 0 for all i ∈ S.
(ii) π is stationary (invariant, equilibrium), i.e., π = πP.
(iii) If H_i(n) is the proportion of the time in n steps that the chain spends in state i,
H_i(n) = (1/n) Σ_{t=0}^{n} 1_{{X_t = i}},
then for any ε > 0,
lim_{n→∞} Pr(|H_i(n) − π(i)| > ε) = 0.
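A quick numerical check of this law of large numbers, using a made-up irreducible 3-state chain: solve π = πP directly and compare with the empirical occupation frequencies H_i(n) from a long run.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical irreducible 3-state chain (not from the text).
P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])
m = P.shape[0]

# Solve pi = pi P together with sum(pi) = 1 as a least-squares system.
A = np.vstack([P.T - np.eye(m), np.ones(m)])
b = np.zeros(m + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]     # -> [0.25, 0.5, 0.25]

# Empirical occupation frequencies over a long trajectory.
n, x = 50_000, 0
counts = np.zeros(m)
for _ in range(n):
    x = rng.choice(m, p=P[x])
    counts[x] += 1
H = counts / n
print(pi, H)   # H is close to pi
```

The occupation frequencies converge to π regardless of the starting state, exactly as (iii) asserts.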
2.2.3. Regular Markov chains
A state i is said to be aperiodic if
gcd{n | P^n(i, i) > 0} = 1.
If a state is aperiodic and the chain is irreducible, then every state in S must be aperiodic too; in this case the irreducible chain is called aperiodic. Aperiodicity means no parity problems. A regular Markov chain = irreducible + aperiodic.
[Figure: A random walk on a clock. From Stochastic Processes notes, J. Chang, http://pantheon.yale.edu/~jtc5/251/]
Theorem 3 (Fundamental limit theorem for regular chains) For a regular Markov chain with state space S and transition probability matrix P, there exists a unique probability distribution π on S such that:
(i) π is stationary (invariant, equilibrium), i.e., π = πP;
(ii) for any initial probability measure π_0,
lim_{n→∞} π_0 P^n = π; and
(iii) there exist 0 < r < 1 and c > 0 such that
||π_0 P^n − π||_var ≤ c r^n,
where
||λ − ν||_var = sup_{A∈A} |λ(A) − ν(A)| = (1/2) ||λ − ν||_{L^1} = (1/2) Σ_{x∈Ω} |λ(x) − ν(x)|.
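The geometric bound in (iii) can be observed numerically. For the two-state chain below (illustrative numbers, not from the text), the second eigenvalue of P is 0.5, and the total variation distance halves at every step:

```python
import numpy as np

# Hypothetical regular two-state chain; eigenvalues of P are 1 and 0.5.
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
pi = np.array([0.6, 0.4])      # stationary: 0.6*0.8 + 0.4*0.3 = 0.6
pi0 = np.array([1.0, 0.0])

dist, tv = pi0.copy(), []
for n in range(5):
    dist = dist @ P
    tv.append(0.5 * np.abs(dist - pi).sum())   # ||pi0 P^n - pi||_var

print(tv)   # [0.2, 0.1, 0.05, 0.025, 0.0125] -- decays like r^n with r = 0.5
```

For a two-state chain the decay rate r is exactly the modulus of the second eigenvalue; in general it is bounded by it.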
2.3. Markov Chain Simulation
To simulate a Markov chain {X_n}_{n∈Z+} with state space S = {1, 2, ..., m} and transition probability matrix P = P(i, j), we first compute the cumulative transition probabilities
G(i, j) = Σ_{k=1}^{j} P(i, k) = Pr{X_{n+1} ≤ j | X_n = i}.
From a given initial state X_0 = x_0 we apply the following recursive rule to produce the subsequent states of the chain:
X_{n+1} = j if G(X_n, j − 1) < u ≤ G(X_n, j),
with u a uniform random number in [0, 1] and the convention G(i, 0) = 0.
MC-SIMULATOR(x_0, P, T)
 1  ▷ Calculate G, with the convention G(i, 0) = 0
 2  for i = 1 to m
 3      do G(i, 1) ← P(i, 1)
 4         for j = 2 to m
 5             do G(i, j) ← G(i, j − 1) + P(i, j)
 6  ▷ Simulate the MC
 7  X_0 ← x_0
 8  n ← 0
 9  repeat
10      u ← RANDOM([0, 1])
11      for j = 1 to m
12          do if G(X_n, j − 1) < u ≤ G(X_n, j)
13              then X_{n+1} ← j
14      n ← n + 1
15  until n > T
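A Python version of the MC-SIMULATOR routine above (a sketch: states are 0-based here, and the search covers j = 0 by treating G(i, 0) itself as the first threshold; the helper name and test chain are mine):

```python
import random

def mc_simulator(x0, P, T, rng=None):
    """Simulate T steps of a finite Markov chain, states 0..m-1, from x0."""
    rng = rng or random.Random()
    m = len(P)
    # Cumulative transition probabilities G(i, j) = sum_{k <= j} P(i, k).
    G = [list(row) for row in P]
    for i in range(m):
        for j in range(1, m):
            G[i][j] += G[i][j - 1]

    path = [x0]
    for _ in range(T):
        u = rng.random()
        # First j with u <= G(i, j); covers j = 0 via the implicit G(i, -1) = 0.
        for j in range(m):
            if u <= G[path[-1]][j]:
                path.append(j)
                break
    return path

# Deterministic alternating chain as a sanity check:
print(mc_simulator(0, [[0.0, 1.0], [1.0, 0.0]], 4))   # [0, 1, 0, 1, 0]
```

Precomputing G once makes each step an O(m) threshold search on a single uniform draw.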
3. Brownian Motion
Irregular movement that a small particle exhibits inside a fluid.
➊ Named after Brown, 1829.
➋ Explained by Einstein, 1905.
➌ Formalized by Wiener, 1923 (Wiener process).
It is named after the distinguished British botanist Robert Brown.
“Brown did not discover Brownian motion. After all, practically anyone looking at water through a microscope is apt to see little things moving around. Brown himself mentions one precursor in his 1828 paper [3] and ten more in his 1829 paper [4], starting at the beginning with Leeuwenhoek (1632-1723), including Buffon and Spallanzani (the two protagonists in the eighteenth century debate on spontaneous generation), and one man (Bywater, who published in 1819) who reached the conclusion (in Brown's words) that not only organic tissues, but also inorganic substances, consist of what he calls animated or irritable particles.”
The first dynamical theory of Brownian motion was that the particles were alive. The problem was in part observational, to decide whether a particle is an organism, but the vitalist bugaboo was mixed up in it. [11]
E. Nelson, Dynamical Theories of Brownian Motion
This stochastic process has applications in physics, chemistry, finance, communications systems, data networks, and more.
(Applets: two- and one-dimensional Brownian motion; commodity prices as geometric Brownian motion. See also Chang's notes on Brownian motion [5].)
A Standard Brownian Motion
{W(t) : t ≥ 0} is a stochastic process that has
➊ continuous paths,
➋ stationary, independent increments,
➌ W(t) ∼ N(0, t) for all t ≥ 0.
The letter “W” is used for this process in honor of Norbert Wiener.
3.1. Interpretation of the conditions
➊ P{ω : W(·, ω) is a continuous function} = 1;
➋ for each 0 ≤ t_1 < t_2 < ... < t_n < ∞, the increments W(t_2) − W(t_1), ..., W(t_n) − W(t_{n−1}) are independent, and W(t) − W(s) only depends on t − s;
➌ the distribution of the increments satisfies
• P{W(0) = 0} = 1,
• W(t) − W(s) has the same distribution as W(t − s) − W(0) = W(t − s),
• W(t − s) ∼ N(0, t − s).
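These conditions suggest a direct simulation: sample independent N(0, dt) increments and accumulate them. A minimal sketch (the step count, path count, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# One discretized path of standard Brownian motion on [0, 1]:
# W(t_{k+1}) = W(t_k) + sqrt(dt) * Z_k with Z_k ~ N(0, 1) i.i.d.
n = 1000
dt = 1.0 / n
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), size=n))))

# Across many paths, W(1) = sum of the increments should be ~ N(0, 1).
finals = rng.normal(0.0, np.sqrt(dt), size=(5000, n)).sum(axis=1)
print(W[0], round(float(finals.var()), 2))   # 0.0 and a value close to 1
```

The cumulative sum realizes conditions ➊-➌ at the grid points; continuity holds in the limit dt → 0.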
3.2. Irregularity and Self-similarity
The paths of the Brownian motion are self-similar.
A stochastic process {X_t}_{t∈T} is H-self-similar for H > 0 if its distribution functions satisfy the condition
(X_{τt_1}, ..., X_{τt_n}) =d (τ^H X_{t_1}, ..., τ^H X_{t_n})
for all τ > 0 and t_1, t_2, ..., t_n ∈ T, where =d denotes equality in distribution.
The Brownian paths are [10]:
➊ 0.5-self-similar,
➋ nowhere differentiable,
➌ with infinitely many zeros in each interval (0, ε),
➍ of unbounded variation in each interval (0, ε).
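A numerical illustration of 0.5-self-similarity for Brownian motion, sampling the marginals directly (τ, t, and the sample size are arbitrary choices): since W(τt) =d τ^0.5 W(t), the variances should scale by τ.

```python
import numpy as np

rng = np.random.default_rng(2)

# H = 0.5 self-similarity: W(tau * t) =d tau**0.5 * W(t), so variances scale by tau.
t, tau, N = 0.3, 4.0, 100_000
W_t    = rng.normal(0.0, np.sqrt(t),       size=N)   # W(t)     ~ N(0, t)
W_taut = rng.normal(0.0, np.sqrt(tau * t), size=N)   # W(tau t) ~ N(0, tau t)
print(W_taut.var() / W_t.var())   # close to tau**(2 * 0.5) = 4
```

Rescaling time by τ and amplitude by τ^0.5 leaves the distribution of the path unchanged, which is why zoomed-in Brownian paths look statistically identical to the original.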
4. White Noise
White noise is the derivative of Brownian motion, considering both as generalized stochastic processes [1][12]:
N_t = dW_t/dt.
4.1. Properties
➊ Wide-sense stationary, i.e., E[N(t)] = µ and R_N(τ, ξ) = R_N(ξ − τ).
➋ E[N(t)] = 0.
➌ Autocorrelation: R_N(τ) = E[N(t)N(t − τ)] = δ(τ).
➍ Spectral density: S_N(ω) = F(R_N(τ)) = 1.
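A discrete-time stand-in for white noise, i.i.d. N(0, 1) samples, exhibits these properties empirically (a sketch; the sample size is arbitrary): the autocorrelation is ≈ 1 at lag 0 and ≈ 0 at every other lag, a discrete δ, which is why the spectrum is flat.

```python
import numpy as np

rng = np.random.default_rng(3)

# i.i.d. N(0, 1) samples as discrete-time white noise.
N = 100_000
x = rng.normal(size=N)

def autocorr(x, tau):
    """Empirical R_N(tau) = E[N(t) N(t - tau)]."""
    return float(np.mean(x[tau:] * x[:len(x) - tau]))

print(autocorr(x, 0), autocorr(x, 1), autocorr(x, 5))
# lag 0 is close to 1; nonzero lags are close to 0 (delta-like autocorrelation).
```

A delta autocorrelation means samples at distinct times are uncorrelated, so every frequency contributes equally to the power, matching S_N(ω) = 1.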
References
[1] L. Arnold. Stochastic Differential Equations. John Wiley, 1974.
[2] P. Bremaud. Markov Chains: Gibbs Fields, Monte Carlo Simulation and Queues. Springer Verlag, 1999.
[3] R. Brown. A brief account of microscopical observations made in the months of June, July, and August, 1827, on the particles contained in the pollen of plants; and on the general existence of active molecules in organic and inorganic bodies. Philosophical Magazine N.S., 1828.
[4] R. Brown. Additional remarks on active molecules. Philosophical Magazine N.S., 1829.
[5] J. Chang. Stochastic Processes. Notes, Department of Statistics, Yale University, http://pantheon.yale.edu/~jtc5/251/, 2001.
[6] I.I. Gikhman and A.V. Skorokhod. Introduction to the Theory of Random Processes. Dover, 1996.
[7] C.M. Grinstead and J.L. Snell. Introduction to Probability. American Mathematical Society, 1997.
[8] A.F. Karr. Markov processes. In D.P. Heyman and M.J. Sobel, editors, Stochastic Models. North Holland, 1990.
[9] S. Meyn and R. Tweedie. Markov Chains and Stochastic Stability. Springer Verlag, 1994.
[10] T. Mikosch. Elementary Stochastic Calculus with a Finance View. World Scientific, 1999.
[11] E. Nelson. Dynamical Theories of Brownian Motion. Princeton University Press, 1967.
[12] B. Oksendal. Stochastic Differential Equations. Springer Verlag, 1998.
[13] R.D. Yates and D.J. Goodman. Probability and Stochastic Processes: A Friendly Introduction for Electrical and Computer Engineers. John Wiley and Sons, 1999.