Simulation of Markovian models using Bootstrap method
Summer Computer Simulation Conference (SCSC) 2010
Ricardo M. Czekster, Paulo Fernandes, Afonso Sales, Dione Taschetto and Thais Webber
Pontifícia Universidade Católica do Rio Grande do Sul (PUCRS)
PaleoProspec Project - PUCRS/Petrobras
Funded also by CAPES and CNPq - Brazil
Outline
Introduction
Markovian model simulation
Bootstrap simulation
Numerical results
Conclusion
Approach
[Diagram: a System is abstracted into a Model through a modeling formalism; the Model is solved to obtain Performance indices, which are used to improve the System.]
Well-known modeling formalism: Markov Chains
Markov Chains formalism
Simple primitives: states and transitions
◮ Represented by a transition matrix
Time scale:
◮ Continuous-Time Markov Chains (CTMC)
◮ Discrete-Time Markov Chains (DTMC)
Applicable to many domains:
◮ Biology
◮ Physics
◮ Social sciences
◮ Business
◮ Computer science and telecommunication
Markov Chains
Advantage: intuitive and easy to model “small” systems
Limitation: representing huge models in state/transition form becomes intractable (state space explosion problem)

Structured formalisms
Model description by components
Structured mathematical description
Examples:
◮ Queueing Networks (QN) [Little61, Baskett et al. 75, Reiser et al. 80]
◮ Stochastic Petri Nets (SPN) [Florin et al. 85]
◮ Performance Evaluation Process Algebra (PEPA) [Hillston95]
◮ Stochastic Automata Networks (SAN) [Plateau84]
Model solution
The solution method plays an equally important role
Numerical solution (iterative methods):
◮ Power method [Stewart94] (see the sketch below)
◮ Arnoldi [Arnoldi51]
◮ GMRES [Saad and Schultz86]
Simulation (the model's events fire transitions according to a pseudorandom number generator):
◮ Traditional [Ross96]
◮ Monte Carlo [Häggström02]
◮ Backward [Propp and Wilson 96]
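As an aside, here is a minimal sketch of the Power method for a DTMC (my own illustration, not part of the slides); the 3-state transition matrix used later in the deck is assumed as the example input.

```python
import numpy as np

def power_method(P, tol=1e-10, max_iter=100_000):
    """Iterate pi <- pi * P until the vector stops changing.

    P is a row-stochastic DTMC transition matrix; the result approximates
    the stationary distribution pi (pi = pi * P, sum(pi) = 1).
    """
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)          # uniform initial guess
    for _ in range(max_iter):
        nxt = pi @ P
        if np.max(np.abs(nxt - pi)) < tol:
            return nxt
        pi = nxt
    return pi

# 3-state example matrix (the one used in the simulation slides)
P = np.array([[0.10, 0.65, 0.25],
              [0.25, 0.55, 0.20],
              [0.30, 0.25, 0.45]])
print(power_method(P))   # reference vector for the relative-error plots
```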
Simulation methods
They suffer from a lack of precision in the results due to the very nature of simulation:
◮ Based on a pseudorandom number generator
◮ When compared to the analytical solution

Bootstrap method
A well-known statistical method, applied in many fields to improve accuracy when estimating from samples of complex distributions

Our idea
Apply the Bootstrap method in the simulation context in order to reduce the “noise” observed in the samples
Bootstrap method
[Diagram: from an infinite-sized set Λ (with mean x̄_Λ) a sample Λ̃ is drawn; from Λ̃, z bootstraps K_1, K_2, ..., K_z are resampled, with means x̄_K1, x̄_K2, ..., x̄_Kz, which are combined into x̄_K.]
Λ: infinite-sized set
Λ̃: subset (sample) of Λ
K: bootstrap (with the same size as Λ̃)
On average, this set of results is better than the average of a single sample
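To make the resampling idea concrete, here is a minimal sketch of the classical bootstrap estimate of a mean (not taken from the slides; the sample size, number of bootstraps and exponential example population are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_mean(sample, z=200):
    """Resample `sample` with replacement z times and average the means.

    Each bootstrap K_i has the same size as the original sample; the
    returned value plays the role of x̄_K in the diagram above.
    """
    n = len(sample)
    means = [rng.choice(sample, size=n, replace=True).mean() for _ in range(z)]
    return float(np.mean(means))

# Illustrative use: Λ̃ is a sample of some population Λ
sample = rng.exponential(scale=2.0, size=100)
print("single-sample mean:", sample.mean())
print("bootstrap mean    :", bootstrap_mean(sample))
```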
Traditional simulation
Transition Matrix:
       0     1     2
  0  0.10  0.65  0.25
  1  0.25  0.55  0.20
  2  0.30  0.25  0.45
[Diagram: starting from the initial state 0, a single trajectory of length n is generated over time steps 0, 1, 2, ..., n; at each step a pseudorandom number U (e.g. U = 0.08, 0.87, 0.32, 0.06, 0.56, ...) selects the next state from the current state's row of the transition matrix, and the counter π′_i of the visited state i is incremented.]
n = trajectory length
each visited state = sample
mean permanence probability: π = π′ / n
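A minimal sketch of this traditional simulation in Python (my own illustration, not the authors' code), assuming the 3-state matrix above; the comments indicate how each step maps to the diagram.

```python
import numpy as np

def traditional_simulation(P, n, start=0, seed=0):
    """Walk one trajectory of length n over the DTMC defined by P.

    Visits per state are accumulated (the π′ counters of the slides) and
    the mean permanence probabilities π = π′ / n are returned.
    """
    rng = np.random.default_rng(seed)
    cum = np.cumsum(P, axis=1)                 # cumulative rows for sampling
    counts = np.zeros(P.shape[0])
    state = start
    for _ in range(n):
        u = rng.random()                       # pseudorandom draw U
        # next state: first column whose cumulative probability exceeds U
        state = min(int(np.searchsorted(cum[state], u)), P.shape[0] - 1)
        counts[state] += 1
    return counts / n

P = np.array([[0.10, 0.65, 0.25],
              [0.25, 0.55, 0.20],
              [0.30, 0.25, 0.45]])
print(traditional_simulation(P, n=10**6))
```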
Bootstrap simulation
Transition Matrix:
       0     1     2
  0  0.10  0.65  0.25
  1  0.25  0.55  0.20
  2  0.30  0.25  0.45
[Diagram: starting from the initial state 0, a trajectory is generated exactly as in the traditional simulation (U = 0.08, 0.87, 0.32, 0.06, 0.56, ...); each visited state is resampled into z bootstraps K_1, K_2, ..., K_z, each keeping a counter per state (0, 1, 2). After the run, each bootstrap is normalized, yielding the mean vectors x̄_1, x̄_2, ..., x̄_z.]
n̄ is the number of trials used to perform the resamplings, where n̄ ≪ n
n = trajectory length
K: bootstrap
z: number of bootstraps
mean permanence probability: π = (Σ_{i=1}^{z} x̄_i) / z, that is
π_0 = (x̄_1[0] + x̄_2[0] + ··· + x̄_z[0]) / z
π_1 = (x̄_1[1] + x̄_2[1] + ··· + x̄_z[1]) / z
π_2 = (x̄_1[2] + x̄_2[2] + ··· + x̄_z[2]) / z
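A rough sketch of the bootstrap simulation in Python; this is my reading of the diagram, not the authors' implementation, and in particular the way a visit is credited to n̄ randomly chosen bootstraps, as well as the values of z and n̄, are assumptions made only for illustration.

```python
import numpy as np

def bootstrap_simulation(P, n, z=10, n_bar=5, start=0, seed=0):
    """Sketch of the bootstrap simulation described in the slides.

    One trajectory of length n is walked as in the traditional simulation,
    but each visited state is resampled n_bar times into randomly chosen
    bootstraps K_1..K_z.  Every bootstrap is normalized into a mean vector
    x̄_i, and the final estimate π is the average of those vectors.
    """
    rng = np.random.default_rng(seed)
    s = P.shape[0]
    K = np.zeros((z, s))                       # per-bootstrap state counters
    cum = np.cumsum(P, axis=1)
    state = start
    for _ in range(n):
        u = rng.random()                       # pseudorandom draw U
        state = min(int(np.searchsorted(cum[state], u)), s - 1)
        # resampling step (assumed): credit this visit to n_bar random bootstraps
        for k in rng.integers(0, z, size=n_bar):
            K[k, state] += 1
    x_bar = K / K.sum(axis=1, keepdims=True)   # normalize each bootstrap
    return x_bar.mean(axis=0)                  # π = (x̄_1 + ... + x̄_z) / z

P = np.array([[0.10, 0.65, 0.25],
              [0.25, 0.55, 0.20],
              [0.30, 0.25, 0.45]])
print(bootstrap_simulation(P, n=10**5))
```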
Examples
Alternate Service Patterns (ASP): describes an open Queueing Network with servers that map P different service patterns.
First Available Server (FAS): models the availability of N servers, where a task is first assigned to the first server, if it is available. If that server is busy, the task is sent to the second server, and so on.
Resource Sharing (RS): maps R shared resources to P processes.

Mean relative error
Computed by comparing the simulation results with the numerical results obtained by the Power method
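For concreteness, a small sketch of the error measure (the slides do not give its exact definition, so this per-state relative error averaged over all states is an assumption), comparing a simulated vector against the Power-method reference:

```python
import numpy as np

def mean_relative_error(pi_sim, pi_ref):
    """Average of |π_sim[i] - π_ref[i]| / π_ref[i] over all states.

    pi_ref: stationary vector from the Power method (reference solution).
    pi_sim: vector estimated by Traditional or Bootstrap simulation.
    """
    pi_sim, pi_ref = np.asarray(pi_sim), np.asarray(pi_ref)
    return float(np.mean(np.abs(pi_sim - pi_ref) / pi_ref))
```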
Results: ASP model
[Plot: mean relative error vs. trajectory length n (10^4 to 10^9), log scale, comparing Traditional and Bootstrap simulation; errors range from about 0.37 down to about 0.001, with the Bootstrap curve generally below the Traditional one.]
Results: FAS model
[Plot: mean relative error vs. trajectory length n (10^4 to 10^9), log scale, comparing Traditional and Bootstrap simulation; errors range from about 1.5 down to about 0.015, with the Bootstrap curve generally below the Traditional one.]
Results: RS model
[Plot: mean relative error vs. trajectory length n (10^4 to 10^9), log scale, comparing Traditional and Bootstrap simulation; errors range from about 0.35 down to about 0.001, with the Bootstrap curve generally below the Traditional one.]
Conclusion
Simulation is an alternative way to solve Markovian models when the analytical solution becomes intractable
Due to the nature of simulation (pseudorandom number generation) there is a lack of precision in the results
A large number of samples is necessary in order to achieve more precise results
Bootstrap simulation can produce more precise results using a smaller number of samples than Traditional simulation

Future work
Model classification: determine the classes of models best suited to Bootstrap simulation
Parallel solution: distribute the bootstraps over different processors/computers in a grid
Comparison with other methods: Monte Carlo, Backward, ...
Thank you for your attention.