A Fast Approximation Algorithm for the Lyapunov Exponent of Stochastic Max-Plus Systems

Rob M.P. Goverde, Bernd Heidergott, Glenn Merlet

Rob Goverde, Department of Transport & Planning, Delft University of Technology, Stevinweg 1, 2628 CN Delft, The Netherlands, [email protected]
Bernd Heidergott, Department of Econometrics and Tinbergen Institute, Vrije Universiteit, De Boelelaan 1105, 1081 HV Amsterdam, The Netherlands, [email protected]
Glenn Merlet, LIAFA, CNRS UMR 7089, Université Paris Diderot - Paris 7, Case 7014, 75205 Paris Cedex 13, France, [email protected]

This research is supported by the Technology Foundation STW, applied science division of NWO, and the technology programme of the Ministry of Economic Affairs.
Abstract— This paper addresses the problem of approximately computing the Lyapunov exponent of stochastic max-plus linear systems. Our approach allows for efficient simulation of bounds for the Lyapunov exponent. Depending on the simulation budget, the bounds can be made arbitrarily close. We illustrate the effectiveness of our bounds with an application to (real-life) railway systems.
I. INTRODUCTION

In this paper, we study max-plus linear systems, that is, systems whose state vector satisfies an equation of the form x(k + 1) = A(k) ⊗ x(k) for k ≥ 0, where {A(k)} is a sequence of matrices modeling the system dynamics and matrix-vector multiplication is understood in the max-plus sense. For details we refer to [8], [9], [1], [6].

Max-plus algebra is a tool for analyzing discrete event systems that are subject to synchronization. In particular, max-plus algebra has been successfully applied to the analysis of transportation networks. One of the main characteristics of a max-plus linear system is the so-called Lyapunov exponent of the matrix sequence describing the system dynamics. In system-theoretic terms, the Lyapunov exponent is the fastest rate at which a system can operate on average. The actual numerical value of the Lyapunov exponent is therefore of importance in the design of timetables for public transportation networks, such as railway systems. This is, for example, the basis of the commercial software tool PETER, which allows for analyzing the robustness of timetables for railway networks.

In the deterministic setting, that is, A(k) = A for all k for some matrix A, there exist efficient algorithms for computing the Lyapunov exponent of {A(k)} (which is the max-plus eigenvalue of A); see for example [9]. In the general case, however, no efficient methods for computing the Lyapunov exponent of {A(k)} are known. Apart from straightforward simulation of the Lyapunov exponent, only one alternative approach has been proposed in the literature, namely
that of approximating the Lyapunov exponent by a Taylor series; first references are [2], [3], [4]. As is shown in [8], even though the Taylor series approximation is a generic result, the elements of the Taylor series can only be explicitly computed in simple cases. For this reason, we propose a new approach to the approximation of the Lyapunov exponent. The approach presented in this paper is based on a combination of stochastic ordering results and simulation. For a fixed simulation budget, our algorithm provides an interval that covers the Lyapunov exponent with a predefined probability. We illustrate the performance of our approach with numerical examples.

The paper is organized as follows. Section II presents basic facts and definitions of max-plus algebra needed for our analysis. The bounding interval is constructed in Section III, where it is shown that the true Lyapunov exponent lies in the interval with probability one. Numerical examples are provided in Section IV. Finally, we discuss possible extensions of our approach and implementation issues in Section V.

II. BASIC DEFINITIONS AND CONCEPTS

Max-plus algebra is usually introduced as follows. Let $\varepsilon = -\infty$ and $\mathbb{R}_\varepsilon = \mathbb{R} \cup \{\varepsilon\}$, and set
$$x \oplus y = \max(x, y), \qquad x \otimes y = x + y,$$
for $x, y \in \mathbb{R}_\varepsilon$, and set
$$[A \otimes B]_{ij} = \bigoplus_{k=1}^{n} a_{ik} \otimes b_{kj} = \max_{k=1,\dots,n} (a_{ik} + b_{kj}),$$
for matrices $A \in \mathbb{R}_\varepsilon^{m \times n}$ and $B \in \mathbb{R}_\varepsilon^{n \times p}$. For integers $k \geq l$, we set $A(k) \otimes A(k-1) \otimes \cdots \otimes A(l) = A(k, l)$.

Consider the sequence $\{x(k)\}$ satisfying the recurrence relation
$$x(k+1) = A(k) \otimes x(k), \qquad k \geq 0, \qquad (1)$$
where $x(0) = x_0 \in \mathbb{R}_\varepsilon^n$ is the initial value and $\{A(k)\}$ is a sequence of $n \times n$ matrices over $\mathbb{R}_\varepsilon$. The asymptotic growth rate of $\{x(k)\}$ is defined via the limits
$$\lim_{k \to \infty} \frac{x_j(k)}{k}, \qquad (2)$$
for $1 \leq j \leq n$, provided the limits exist. If the above limits exist and all have the same value, then this value, usually denoted by $\lambda = \lambda(\{A(k)\})$, is called the Lyapunov exponent of the sequence $\{A(k)\}$. For queueing networks, the Lyapunov exponent is the inverse throughput, and for systems operating according to a timetable, such as transportation networks, it is the (minimal) asymptotic period length of the timetable.
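To make the algebra concrete, the following sketch (not part of the original paper) implements $\oplus$, $\otimes$ and recurrence (1) in Python with numpy, encoding $\varepsilon = -\infty$ as -inf; the name sample_A is a hypothetical generator of the random matrices A(k).

```python
import numpy as np

EPS = -np.inf  # encoding of the max-plus "zero" element epsilon

def mp_matvec(A, x):
    """Max-plus matrix-vector product: (A (x) x)_i = max_j (A_ij + x_j)."""
    return np.max(A + x[np.newaxis, :], axis=1)

def mp_matmul(A, B):
    """Max-plus matrix product: [A (x) B]_ij = max_k (A_ik + B_kj)."""
    return np.max(A[:, :, np.newaxis] + B[np.newaxis, :, :], axis=1)

def growth_rate(sample_A, n, K=10_000):
    """Crude estimate of x(K)/K for recurrence (1), started at x(0) = e."""
    x = np.zeros(n)                   # the zero vector e
    for _ in range(K):
        x = mp_matvec(sample_A(), x)  # x(k+1) = A(k) (x) x(k)
    return x / K
```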
A matrix A is called regular if each row of A contains at least one finite entry. Throughout the paper we assume that the A(k) are a.s. regular. This implies that the products A(k, l) are a.s. regular. Moreover, for a finite vector x, the vector A(k, l) ⊗ x is a.s. finite. A random variable X with values in $\mathbb{R}_\varepsilon$ is called integrable if $E[|X| \mathbf{1}_{\{X > -\infty\}}]$ is finite. A matrix is called integrable if all of its elements are integrable.

A consequence of our assumption that the A(k) are a.s. regular is that the map $x \mapsto A \otimes x$ is non-expansive in the supremum norm, that is, $\max_i |(A \otimes x)_i - (A \otimes y)_i| \leq \max_i |x_i - y_i|$ for finite vectors x, y. The asymptotic growth rate and the Lyapunov exponent therefore do not depend on the initial condition. From now on, we assume that $x_0$ is the zero vector, which we denote by e, because zero is the neutral element of multiplication in $\mathbb{R}_\varepsilon$. In the sequel, e also denotes the neutral element of $\mathbb{R}_\varepsilon$ or $\mathbb{R}_\varepsilon^{n \times n}$, according to the context.

III. BOUNDING THE LYAPUNOV EXPONENT

To estimate the Lyapunov exponent, we view it as the Cesaro limit of the sequence $(E[x_j(k+1)] - E[x_j(k)])_{k \in \mathbb{N}}$, which will be bounded asymptotically. To this end, we introduce the difference between products of matrices. Fix some integer c and set
$$\delta_{ij}(k) = A_{ij}(0, -k+1) - A_{ij}(-c, -k+1), \qquad (3)$$
provided that at least one of the two entries is finite; otherwise, we set $\delta_{ij}(k)$ equal to some particular element $\alpha \notin \mathbb{R}_\varepsilon$. In the following, the maximum operator interprets $\alpha$ as $-\infty$ and the minimum operator as $+\infty$.
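The convention for $\alpha$ can be mirrored in code. Below is a minimal sketch (not from the paper), again with $\varepsilon$ encoded as -inf; NaN plays the role of $\alpha$, since np.nanmax and np.nanmin skip NaN entries, which is exactly the required reading of $\alpha$ as $-\infty$ under max and $+\infty$ under min. The inputs P0 and Pc are assumed to be precomputed products A(0, -k+1) and A(-c, -k+1).

```python
import numpy as np

def delta(P0, Pc):
    """Entrywise delta(k) = P0 - Pc for P0 = A(0,-k+1), Pc = A(-c,-k+1).

    eps is encoded as -inf; where both entries are eps the difference is
    undefined and comes out as NaN, playing the role of the element alpha."""
    with np.errstate(invalid="ignore"):
        return P0 - Pc            # (-inf) - (-inf) yields NaN, i.e. alpha

def row_min_max(d):
    """min_j and max_j per row; NaN (= alpha) is ignored, matching the
    convention that max reads alpha as -inf and min reads it as +inf."""
    return np.nanmin(d, axis=1), np.nanmax(d, axis=1)
```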
Proposition 3.1: Let $\{A(k)\}_{k \in \mathbb{N}}$ be a stationary and ergodic sequence of integrable random matrices with values in $\mathbb{R}_\varepsilon^{n \times n}$ and almost surely at least one finite entry in each row. Then the following holds:
(i) for any i, with $1 \leq i \leq n$, and any positive integer c, the sequence $\{\min_j \delta_{ij}(k)\}_{k \in \mathbb{N}}$ is non-decreasing, while $\{\max_j \delta_{ij}(k)\}_{k \in \mathbb{N}}$ is non-increasing;
(ii) for any finite integer N, we have
$$E\Big[\min_j \delta_{ij}(N)\Big] \;\leq\; c \, \widetilde{\lim}_{k \to \infty} \frac{1}{k} E[x_i(k)] \;\leq\; E\Big[\max_j \delta_{ij}(N)\Big],$$
where $\widetilde{\lim}$ stands for lim sup or lim inf.

Proof: Fix i, with $1 \leq i \leq n$. Then for any l, with $1 \leq l \leq n$, it holds that
$$A_{il}(0, -k+1) \leq A_{il}(-c, -k+1) + \max_j \delta_{ij}(k).$$
Adding $A_{lj}(-k)$ on both sides of the inequality and taking the maximum with respect to l yields
$$\bigoplus_l A_{il}(0, -k+1) \otimes A_{lj}(-k) \;\leq\; \max_j \delta_{ij}(k) \otimes \bigoplus_l A_{il}(-c, -k+1) \otimes A_{lj}(-k), \qquad (4)$$
which yields
$$A_{ij}(0, -k) - A_{ij}(-c, -k) \leq \max_j \delta_{ij}(k),$$
for any j. Taking the maximum over j thus yields
$$\max_j \delta_{ij}(k+1) \leq \max_j \delta_{ij}(k),$$
which proves the claim for $\max_j \delta_{ij}(k)$. The monotonicity of $\min_j \delta_{ij}(k)$ follows along the same lines.

We now turn to the proof of (ii). Note that
$$E[x_i(k)] - E[x_i(k-c)] = \sum_{j=0}^{c-1} E[x_i(k-j)] - E[x_i(k-1-j)].$$
Using a Cesaro averaging argument, we get
$$\widetilde{\lim}_{k \to \infty} \big(E[x_i(k)] - E[x_i(k-1)]\big) = \widetilde{\lim}_{k \to \infty} \frac{1}{k} E[x_i(k)],$$
which yields
$$\widetilde{\lim}_{k \to \infty} \big(E[x_i(k)] - E[x_i(k-c)]\big) = c \, \widetilde{\lim}_{k \to \infty} \big(E[x_i(k)] - E[x_i(k-1)]\big). \qquad (5)$$
Using stationarity of $\{x(k)\}$, we obtain
$$E[x_i(k)] - E[x_i(k-c)] = E[(A(0, -k+1) \otimes e)_i] - E[(A(-c, -k+1) \otimes e)_i] = E\Big[\bigoplus_j A_{ij}(0, -k+1) - \bigoplus_j A_{ij}(-c, -k+1)\Big]. \qquad (6)$$
By (4), together with the fact that $\{\max_j \delta_{ij}(k)\}_{k \in \mathbb{N}}$ is non-increasing, we obtain for any $l \geq k$:
$$\bigoplus_j A_{ij}(0, -l+1) - \bigoplus_j A_{ij}(-c, -l+1) \leq \max_j \delta_{ij}(k).$$
Following the same line of argument for $\{\min_j \delta_{ij}(k)\}_{k \in \mathbb{N}}$, we arrive at
$$\min_j \delta_{ij}(k) \;\leq\; \bigoplus_j A_{ij}(0, -l) - \bigoplus_j A_{ij}(-c, -l) \;\leq\; \max_j \delta_{ij}(k). \qquad (7)$$
Taking expected values, the proof follows from (5) together with (6).

To deduce an estimate of λ from Proposition 3.1, we need assumptions under which $\frac{1}{k} E[x_i(k)]$ converges to λ. Moreover, for implementation reasons, we prefer to multiply matrices on the left rather than on the right. To this end, we apply the previous result to the sequence $\{A^\top(-k)\}_{k \in \mathbb{N}}$, where $A^\top$ denotes the transpose of a matrix A. The differences between the associated products of matrices are defined by
$$\Delta_{ij}(n) = A_{ij}(n-1, 0) - A_{ij}(n-1, c), \qquad (8)$$
with the same convention for $\varepsilon - \varepsilon$ as in Equation (3). If both $\{A(k)\}_{k \in \mathbb{N}}$ and $\{A^\top(-k)\}_{k \in \mathbb{N}}$ have a Lyapunov exponent, then it is the same, because the former is the growth rate of $\max_{ij} A_{ij}(k, 0)$, and the latter is the growth rate of $\max_{ij} (A^\top)_{ij}(-k, 0)$, which has the same law as $\max_{ij} A_{ij}(k, 0)$.
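In simulation, the jth column of the differences (8) can be tracked by running two coupled state recursions on a single stream of matrix draws, one started at step 0 and one at step c, both from the max-plus unit vector $e_j$. The following is a sketch under these assumptions (not the authors' code; sample_A is again a hypothetical sampler of A(k)):

```python
import numpy as np

def mp_matvec(A, x):
    """(A (x) x)_i = max_j (A_ij + x_j), with -inf encoding eps."""
    return np.max(A + x[np.newaxis, :], axis=1)

def coupled_columns(sample_A, n, j, c, N):
    """Return Delta_.j(N) = A_.j(N-1, 0) - A_.j(N-1, c) for one replication.

    Both products see the same draws A(k); the second one skips the
    first c factors. Entries where eps - eps occurs come out as NaN."""
    unit = np.full(n, -np.inf)
    unit[j] = 0.0                      # max-plus unit vector e_j
    x = unit.copy()                    # column j of A(k-1, 0)
    y = unit.copy()                    # column j of A(k-1, c)
    for k in range(N):
        A = sample_A()                 # one common draw A(k) for both products
        x = mp_matvec(A, x)
        if k >= c:
            y = mp_matvec(A, y)
    with np.errstate(invalid="ignore"):
        return x - y
```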
For each sequence $\{A(k)\}_{k \in \mathbb{N}}$, we define G(A) as the directed graph whose nodes are the integers between 1 and n and whose arcs are the pairs (i, j) such that $P(A_{ij}(1) \neq \varepsilon) > 0$. We say that a random matrix A(k) has fixed support if each entry is either almost surely equal to ε or almost surely finite. Matrices with fixed support appear frequently in applications, in particular in the train networks we focus on in Section IV. In the remainder of this paper, we assume that the A(k) have fixed support. A random matrix A(k) with fixed support is called irreducible if G(A(k)) is strongly connected.

The conditions under which both $\{A(k)\}_{k \in \mathbb{N}}$ and $\{A^\top(-k)\}_{k \in \mathbb{N}}$ have a Lyapunov exponent are beyond the scope of this paper; they involve a detailed analysis of G(A), conducted in [10]. When the A(k) are i.i.d., a sufficient condition is the irreducibility of $\{A(k)\}_{k \in \mathbb{N}}$. Moreover, the general case can be reduced to finitely many irreducible cases (cf. [10]).

Proposition 3.2: Let $\{A(k)\}_{k \in \mathbb{N}}$ be an i.i.d. sequence of irreducible and integrable random matrices. Then the following holds:
(i) for any index j and any positive integer c, the sequence $\{\min_i \Delta_{ij}(k)\}_{k \in \mathbb{N}}$ is non-decreasing, while $\{\max_i \Delta_{ij}(k)\}_{k \in \mathbb{N}}$ is non-increasing.
If c is a multiple of the cyclicity of G(A), then
(ii) for any random integer N, we have
$$E\Big[\min_i \Delta_{ij}(N)\Big] \;\leq\; c \, \lambda(\{A(k)\}) \;\leq\; E\Big[\max_i \Delta_{ij}(N)\Big]; \qquad (9)$$
(iii) if, moreover, the A(k) have fixed support, then there is a fixed integer $k_0$ such that $E[\min_i \Delta_{ij}(k_0)]$ and $E[\max_i \Delta_{ij}(k_0)]$ are finite.

The proof of the proposition is postponed to the full-length version of this paper due to space limitations.

According to Proposition 3.2, we can always obtain bounds, but not necessarily good ones. Are there good choices for j and N? In the best situation, $\max_i \Delta_{ij}(N) = \min_i \Delta_{ij}(N)$. This means that the jth columns of A(N, 0) and A(N, c) differ only by an additive constant, or, equivalently, that they are proportional in the max-plus sense. According to [11], for generic random matrices there is an N such that all columns of A(N, c) are proportional in the max-plus sense. Since a column of A(N, 0) is a max-plus linear combination of columns of A(N, c), such an N satisfies $\max_i \Delta_{ij}(N) = \min_i \Delta_{ij}(N)$.

On the other hand, the iterates of a deterministic matrix exhibit a periodic behaviour whose period is not the cyclicity of G(A) but the cyclicity of one of its subgraphs, called the critical graph of A. When the law of A(1) concentrates around one matrix B, it will therefore be much more efficient to take for c the cyclicity of B's critical graph rather than the cyclicity of G(A). Moreover, j should be taken in the critical graph.
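One common way to obtain the deterministic eigenvalue and a period for an irreducible, regular matrix B is the power algorithm of [9]: iterate $x(k+1) = B \otimes x(k)$ until some $x(p)$ equals some earlier $x(q)$ plus a constant shift. The sketch below (our own illustration, not the paper's implementation) returns the eigenvalue and the detected period; extracting the critical graph itself, e.g., for choosing j, needs more machinery, such as policy iteration [5], and is not attempted here.

```python
import numpy as np

def mp_matvec(B, x):
    return np.max(B + x[np.newaxis, :], axis=1)

def power_algorithm(B, max_iter=1000, tol=1e-9):
    """Max-plus power algorithm (cf. [9]) for an irreducible, regular B.

    Iterates x(k+1) = B (x) x(k) from x(0) = e until x(p) = lam*(p-q) + x(q)
    for some q < p; lam is then the max-plus eigenvalue of B and p - q is a
    multiple of the cyclicity of B's critical graph."""
    n = B.shape[0]
    xs = [np.zeros(n)]
    for p in range(1, max_iter + 1):
        xs.append(mp_matvec(B, xs[-1]))
        for q in range(p - 1, -1, -1):
            diff = xs[p] - xs[q]
            if np.all(np.abs(diff - diff[0]) < tol):  # constant shift found
                return diff[0] / (p - q), p - q
    raise RuntimeError("no periodicity detected within max_iter steps")
```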
IV. NUMERICAL RESULTS

The theoretical results of the previous section suggest a simulation-based approach to approximating the Lyapunov exponent. This section discusses some modelling issues and presents numerical results illustrating the theory of the previous section, based on an illustrative small railway network and a real-life large-scale network.

The basic algorithm is given as follows. The superscript (m) denotes the mth replication (simulation run).

LYAPUNOV EXPONENT ALGORITHM (LEA)
Choose precision δ > 0.
(i) For m = 1, ..., M compute $k_m$ such that
$$\max_i \Delta_{ij}^{(m)}(k_m) - \min_i \Delta_{ij}^{(m)}(k_m) \leq \delta,$$
with $\Delta_{ij}^{(m)}(k)$ recursively defined as in (8).
(ii) Compute the Lyapunov exponent estimate
$$\lambda_M = \frac{1}{cM} \sum_{m=1}^{M} \max_i \Delta_{ij}^{(m)}(k_m).$$

The output of the LEA algorithm is an unbiased point estimate $\lambda_M$ of $\lambda(\{A(k)\})$ and the sample variance $S_M^2$ that can be used to construct a confidence interval $CI_{1-\alpha}$; for instance, the 95%-confidence interval is $CI_{0.95} = \lambda_M \pm 1.96 \, S_M/\sqrt{M}$.

Note that the M replications have a variable simulation length corresponding to the coupling time $k_m$. This coupling time may get very large, so to avoid excessive computation times we may bound the number of iterations by some maximum number $N_{max}$. If there are replications with $k_m > N_{max}$, then the LEA algorithm returns a lower and an upper bound on the Lyapunov exponent, given by
$$\lambda_M^{up} = \frac{1}{cM} \sum_{m=1}^{M} \max_i \Delta_{ij}^{(m)}(k_m), \qquad \lambda_M^{low} = \frac{1}{cM} \sum_{m=1}^{M} \min_i \Delta_{ij}^{(m)}(k_m),$$
with $\lambda_M^{low} \leq \lambda_M \leq \lambda_M^{up}$. The variance of these bounds can be defined likewise.
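Putting the pieces together, here is a hedged end-to-end sketch of LEA (our illustration; the names sample_A, n, j, c are assumptions carried over from the earlier sketches). Replications that fail to couple within n_max iterations are simply abandoned, anticipating the modification discussed at the end of this section.

```python
import numpy as np

def mp_matvec(A, x):
    return np.max(A + x[np.newaxis, :], axis=1)

def lea(sample_A, n, j, c, M, delta=1e-8, n_max=100):
    """Sketch of LEA: M replications, coupling precision delta.

    Returns the point estimate lambda_M and a 95% confidence interval."""
    estimates = []
    for _ in range(M):
        unit = np.full(n, -np.inf)
        unit[j] = 0.0                      # max-plus unit vector e_j
        x, y = unit.copy(), unit.copy()
        for k in range(n_max):
            A = sample_A()                 # one draw shared by both products
            x = mp_matvec(A, x)
            if k >= c:
                y = mp_matvec(A, y)
                with np.errstate(invalid="ignore"):
                    d = x - y              # Delta_.j; eps - eps gives NaN (alpha)
                lo, hi = np.nanmin(d), np.nanmax(d)
                if hi - lo <= delta:       # bounds have coupled
                    estimates.append(hi / c)
                    break
    lam = float(np.mean(estimates))
    s = float(np.std(estimates, ddof=1))
    half = 1.96 * s / np.sqrt(len(estimates))
    return lam, (lam - half, lam + half)
```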
As a first example, we take a small railway network consisting of two stations and three train lines: line 1 serves the regional area around station S1, line 2 runs between the two stations, and line 3 serves the regional area around station S2; see Fig. 1. Both stations have two arrival and two departure events, denoted by $a_i$ and $d_i$ (i = 1, ..., 4), respectively. The numbers at the arcs are the scheduled running and dwell times (solid arcs), and the transfer and minimum departure headway times (dashed arcs). For more details, see Goverde [7]. Define the state vector as $x = (d_1, \dots, d_4, a_1, \dots, a_4)'$, including both departure and arrival events. The timetable is given by $b(k) = (31, 30, 0, 1, 21, 56, 26, 56)' \otimes 60^{\otimes k}$. Hence, the timetable b(k) is periodic with cycle time 60 minutes. The railway system can now be written as the max-plus linear system
$$x(k+1) = A_0 \otimes x(k+1) \oplus A_1 \otimes x(k), \qquad (10)$$

[Fig. 1. The example railway network: stations S1 and S2, lines 1-3, departure events $d_1, \dots, d_4$ and arrival events $a_1, \dots, a_4$, with running times 50, 26, 55, dwell and transfer times 2, and departure headways 1.]

with
$$A_0 = \begin{pmatrix}
\varepsilon & 1 & \varepsilon & \varepsilon & 2 & \varepsilon & 2 & \varepsilon \\
\varepsilon & \varepsilon & \varepsilon & \varepsilon & 2 & \varepsilon & 2 & \varepsilon \\
\varepsilon & \varepsilon & \varepsilon & \varepsilon & \varepsilon & \varepsilon & \varepsilon & \varepsilon \\
\varepsilon & \varepsilon & 1 & \varepsilon & \varepsilon & \varepsilon & \varepsilon & \varepsilon \\
\varepsilon & \varepsilon & \varepsilon & \varepsilon & \varepsilon & \varepsilon & \varepsilon & \varepsilon \\
\varepsilon & 26 & \varepsilon & \varepsilon & \varepsilon & \varepsilon & \varepsilon & \varepsilon \\
\varepsilon & \varepsilon & 26 & \varepsilon & \varepsilon & \varepsilon & \varepsilon & \varepsilon \\
\varepsilon & \varepsilon & \varepsilon & 55 & \varepsilon & \varepsilon & \varepsilon & \varepsilon
\end{pmatrix},$$
and $A_1$ has all elements equal to ε except for $(A_1)_{36} = (A_1)_{38} = (A_1)_{46} = (A_1)_{48} = 2$ and $(A_1)_{51} = 50$.

This equation differs from (1) in that it is deterministic and contains an additional term involving $A_0$. We come back to the stochastic counterpart, but first explain its form. Matrix $A_0$ contains all activities that occur completely within a period, while $A_1$ contains all activities that cross a period boundary. For example, train 1 departs from S1 at $x_1(k) = 31 \pmod{60}$ in some period k and it takes 50 minutes until arrival at $x_5(k+1) = 31 \otimes 50 = 21 \pmod{60}$ in the next period. In contrast, after arrival the train stays for at least 2 minutes in S1 before departing again at $x_1(k+1) = 31 \pmod{60}$ within the same period k+1. Hence, $[A_1]_{51} = 50$ and $[A_0]_{15} = 2$.

The matrices $A_0$ and $A_1$ are partitioned into four parts corresponding to the departure and arrival events. The lower-left part contains the running times, the upper-right part contains all dwell and transfer times from arrivals to departures, the upper-left part contains minimum headway times between departures, and the lower-right part contains minimum headway times between arrivals. Minimum headway times between arrivals and departures can also be defined in the lower-left and upper-right parts.

Goverde [7] considered this deterministic system, where the arrival times can simply be expressed in the preceding departure times, resulting in reduced 4 × 4 matrices and departure events only. However, if the running times are stochastic, then this is no longer feasible without introducing dependencies between the various entries in the matrices. For example, departures $x_1$ and $x_2$ both depend on arrival $x_7$, which satisfies $x_7(k) = a_{73} \otimes x_3(k)$. So we could also define precedence constraints from $x_3$ directly to $x_1$ and $x_2$ with travel times $\tilde{a}_{13} = a_{17} \otimes a_{73}$ and $\tilde{a}_{23} = a_{27} \otimes a_{73}$, respectively. Clearly, $\tilde{a}_{13}$ and $\tilde{a}_{23}$ are dependent, because they both include the running time $a_{73}$. Hence, we take the arrival times explicitly in our state vector, by which each entry in the matrices corresponds to a single process with some known distribution.
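For concreteness, the example matrices can be assembled as follows, together with the Kleene star $A_0^*$ used in the first-order transformation described next. This is a sketch, not the authors' code; the index placement follows the reconstruction of $A_0$ given above.

```python
import numpy as np

e = -np.inf   # epsilon

def mp_matmul(A, B):
    """[A (x) B]_ij = max_k (A_ik + B_kj)."""
    return np.max(A[:, :, np.newaxis] + B[np.newaxis, :, :], axis=1)

def star(A):
    """Kleene star A* = I (+) A (+) ... (+) A^(n-1), with the max-plus
    identity I having 0 on the diagonal and eps elsewhere."""
    n = A.shape[0]
    S = np.full((n, n), e)
    np.fill_diagonal(S, 0.0)
    P = S.copy()
    for _ in range(n - 1):
        P = mp_matmul(A, P)
        S = np.maximum(S, P)
    return S

A0 = np.full((8, 8), e)
A0[0, 1] = 1; A0[3, 2] = 1                     # headways: d1 after d2, d4 after d3
A0[0, 4] = A0[0, 6] = 2                        # dwell/transfer into d1
A0[1, 4] = A0[1, 6] = 2                        # dwell/transfer into d2
A0[5, 1] = 26; A0[6, 2] = 26; A0[7, 3] = 55    # running times within a period

A1 = np.full((8, 8), e)
A1[2, 5] = A1[2, 7] = A1[3, 5] = A1[3, 7] = 2  # dwell/transfer crossing periods
A1[4, 0] = 50                                  # running time d1 -> a1

A = mp_matmul(star(A0), A1)
assert A[0, 0] == max(2 + 50, 1 + 2 + 50)      # the A_11 example from the text
```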
Equation (10) can be transformed into a purely first-order system $x(k+1) = A \otimes x(k)$ by computing $A = A_0^* \otimes A_1$, where $A_0^* = \bigoplus_{l=0}^{n-1} A_0^{\otimes l}$; see e.g. [1], [9]. Clearly, the matrix entries of A correspond to compound processes, and the earlier interpretation of the parts of the matrix is no longer valid. For example, $A_{11} = a_{15} \otimes a_{51} \oplus a_{12} \otimes a_{25} \otimes a_{51} = \max(2 + 50, 1 + 2 + 50)$, which is the maximum of the two path lengths $x_1 - x_5 - x_1$ and $x_1 - x_5 - x_2 - x_1$ (note: $a_{ij}$ denotes the finite matrix entries in either $A_0$ or $A_1$). In the stochastic case, the distribution of $A_{11}$ is quite complicated, even if the marginal distributions of its components are simple. Moreover, $A_{11}$ is stochastically dependent on $A_{12}$, $A_{16}$, and $A_{18}$, since these entries share a common process; likewise for all other entries. Finally, note that A is no longer irreducible, since it contains five ε columns. The conditions of Proposition 3.2 are therefore not satisfied for this A, and indeed for $j \in \{2, 3, 4, 5, 7\}$ we obtain $A_{ij}(k, l) = \varepsilon$ for all i, k, and l. The standard approach in the deterministic case is to remove the empty columns and the associated rows of A, and to find the eigenvalue of this reduced matrix. The most efficient method, the policy iteration algorithm, however works directly with the general deterministic system (10) [5], [9].

Motivated by the above, we deal with the stochastic case, i.e., we let $A_0$ and $A_1$ in (10) depend on k. Moreover, we consider large-scale sparse matrices and therefore implemented the algorithm with respect to an arc-list representation of the max-plus linear system (10). The implemented computations for $A_{ij}(0, k-1)$ and $A_{ij}(c, k-1)$ correspond to an iterative critical path algorithm from a single origin j, where in each iteration k+1 the critical paths are computed from the processes of $A_1(k)$ going from period k to k+1, followed by the processes of $A_0(k)$ within period k+1. Hence, we consider the stochastic max-plus linear system
$$x(k+1) = A_0(k) \otimes x(k+1) \oplus A_1(k) \otimes x(k), \qquad k \geq 0,$$
with $x_j(0) = 0$ and $x_i(0) = \varepsilon$ for $i \neq j$, for some selected j. In each iteration, the finite entries of $A_l(k)$, for l = 0, 1, are drawn from a set of distributions. For the example, we define the following distributions for each finite entry $(A_l(k))_{ij}$, l = 0, 1: with probability $p_{ij}$ we take $(A_l(k))_{ij} = a_{ij}$ (the scheduled process time), and with probability $1 - p_{ij}$ we take $(A_l(k))_{ij} = a_{ij}$ plus an exponentially distributed delay with rate $\lambda_{ij}$. So, the distribution function $F_{ij}$ of the stochastic process times $(A_l(k))_{ij}$ for all k is
$$F_{ij}(t; p_{ij}, \lambda_{ij}) = \begin{cases} 1 - (1 - p_{ij}) e^{-\lambda_{ij}(t - a_{ij})} & \text{if } t > a_{ij}, \\ p_{ij} & \text{if } t = a_{ij}, \end{cases}$$
where we take $p_{ij} = 0.7$ for all activities and $1/\lambda_{ij} = 0.05 \cdot a_{ij}$. So each activity is independently delayed with probability 0.3, with the delay drawn from an exponential distribution with mean equal to 5% of the scheduled process time; e.g., if $a_{ij} = 10$ minutes, then the mean delay is 30 seconds. Future research will be directed at appropriate choices of distributions and parameters using train realization data.

We implemented the Lyapunov exponent algorithm in Matlab and ran the experiments on a PC (Intel CPU 2.80 GHz, 512 MB RAM).
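A sampler for the mixture distribution $F_{ij}$ might look as follows (a sketch; the default parameters are the ones used in the example). A sample_A() for the earlier LEA sketch can then draw every finite entry of $A_0(k)$ and $A_1(k)$ independently this way.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_process_time(a_ij, p_ij=0.7, mean_delay_frac=0.05):
    """Draw one realization of a process time with distribution F_ij:
    the scheduled time a_ij with probability p_ij, otherwise a_ij plus
    an exponential delay with mean 5% of a_ij."""
    if rng.random() < p_ij:
        return a_ij
    return a_ij + rng.exponential(mean_delay_frac * a_ij)
```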
TABLE I
CONVERGENCE OF THE LOWER AND UPPER BOUND ON THE LYAPUNOV EXPONENT FOR THE SMALL NETWORK

N                        1        2        3        4
E[min_i Δ_ij(N)]         53.79    56.97    59.20    59.24
E[max_i Δ_ij(N)]         ∞        59.28    59.24    59.24
Δ (gap)                  ∞        2.31     0.04     0
Here and in all other numerical experiments we set $\delta = 10^{-8}$ as precision. For the sample network, we have cyclicity c = 1 and we used index j = 8, which is a critical node of the deterministic system. Table I shows the computational results. The mean is taken over 100 replications, while the maximum number of iterations per replication was fixed at N = 10. All replications converged within the maximum number of iterations. The bounds coupled within four iterations: the upper bound settled after three iterations, while the lower bound took four. After convergence of the bounds, this value is maintained in all subsequent iterations. Hence, the iterations in any replication can stop as soon as the bounds couple to the same value. The point estimate of the Lyapunov exponent is thus $\lambda_{100} = 59.24$. The algorithm also returns a 95%-confidence interval, which is [58.58, 59.90]. Hence, with a certainty of more than 95%, the Lyapunov exponent is smaller than 60 minutes, given the distribution parameters. The computation time is a fraction of a second. The confidence interval can be made smaller by increasing the number of replications. In stability analysis, the user is interested in whether or not the Lyapunov exponent exceeds a given cycle time T. In this case, T = 60 and since λ < T, the system is stable with a certainty of 95%. In the deterministic case (10), the eigenvalue is $\lambda_{det} = 58$. So the stochastic analysis shows that the system stays stable in the presence of the given delay distributions.

As an example of a large-scale network, we consider the basic hour pattern of the Dutch railway timetable 2007. This network consists of 11,111 nodes and 88,160 arcs, divided over 49,175 finite entries in $A_0$ and 38,985 in $A_1$. The arc weights correspond to the minimum (technical) process times, of which 76,900 are minimum headway times. The cyclicity is 1 and the deterministic maximum eigenvalue is $\lambda_{det} = 59.27$. For the index we selected j = 4660, a critical node found by the policy iteration algorithm applied to the deterministic system. We use the same distributions as in the earlier example for all process times (running times, dwell times, layover times, transfer times, rolling stock connection times), except for the minimum headway times, which were kept deterministic. We used 20 replications and set the maximum number of iterations at N = 100. The lower and upper bounds ($\min_i \Delta_{ij}(k)$ and $\max_i \Delta_{ij}(k)$) in the 20 replications converged between 30 and 89 iterations, with a mean of 50. The lower and upper bounds on the Lyapunov exponent, $E[\min_i \Delta_{ij}(N)]$ and $E[\max_i \Delta_{ij}(N)]$, coupled at iteration N = 90, while after 67 iterations the bounds were already at a distance of 0.01; see Fig. 2. The upper bound had already converged after N = 84 iterations.

[Fig. 2. Coupling of the lower and upper Lyapunov exponent bounds: upper bound (max) and lower bound (min) in minutes, as a function of the number of iterations N.]

The computation
took about 3 minutes and gave $\lambda_{20} = 60.33$ with a 95%-confidence interval [59.89, 60.77]. It seems that the timetable is not stable with the given distribution parameters. To be sure, we can increase the sample size to obtain a smaller confidence interval.

To get an idea of the number of iterations required in an experiment, we performed 100 replications of the large-scale network and recorded the coupling time in each replication. We set the maximum number of iterations at N = 100. Fig. 3 shows a histogram of the results. In four replications the maximum number of iterations was reached. The minimum number of iterations is 28 and the mean is 51. It is well known that the coupling time can become extremely large; repeated experiments without restrictions on N gave in one case N = 473. Further experimentation showed that about 5% of the replications reach N = 100 iterations without coupling. If we terminate a replication prematurely and still use the results in the lower and upper bounds of the Lyapunov exponent, the difference between our upper and lower bound may increase. The gap between the bounds can be kept small by continuing a simulation run after N iterations until the difference is within some threshold. When the point estimate is in the neighborhood of a critical value, a small confidence interval may be of interest, which also requires equal bounds.

Extensive experiments showed no dependency between the computed bounds and the coupling time of a replication. This suggests the following alternative modification. The expected coupling time of a new replication is at least as large as the remaining number of iterations required for a 'tough' case. Because very large coupling times do occur once in a while, we can guard against them by a maximum number of iterations. Hence, we abandon 'tough' replications after a maximum number of iterations instead of iterating until coupling occurs. In the remaining experiments, we use $N_{max} = 100$ and take M replications with coupling time $N < N_{max}$ (which requires about 1.05M replications).
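This modification is easy to express in code; the following sketch (ours, not the paper's) assumes a hypothetical run_replication(n_max) that returns the coupled estimate $\max_i \Delta_{ij}(k_m)/c$, or None when coupling does not occur within n_max iterations.

```python
def lea_with_restarts(run_replication, M, n_max=100):
    """Collect M coupled replications, abandoning 'tough' runs.

    With about 5% of runs failing to couple within n_max (as in the
    large-scale example), roughly 1.05*M replications are started."""
    estimates, started = [], 0
    while len(estimates) < M:
        started += 1
        est = run_replication(n_max)
        if est is not None:
            estimates.append(est)
    return estimates, started
```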
[Fig. 3. Coupling time histogram over 100 replications: counts versus coupling time.]
Fig. 4 shows the point estimate and 95%-confidence interval of the Lyapunov exponent for a number of replications ranging from M = 1 to M = 100. The point estimate of the Lyapunov exponent for 100 replications is $\lambda_{100} = 60.22$ and the associated 95%-confidence interval [60.06, 60.38] is now completely above 60, so we conclude that the timetable is indeed unstable when 30% of the activities have an exponential delay with mean 5% of their minimum process times. This is not surprising, since the deterministic eigenvalue $\lambda_{det} = 59.27$ is already very high.

[Fig. 4. Lyapunov exponent approximation (solid) and 95% confidence interval (dotted) as a function of the number of replications M.]

The computation time (and the number of operations) is proportional to $M \cdot m \cdot E[N]$, where M is the number of replications, m the number of finite matrix entries (arcs), and E[N] the mean number of iterations over the replications. The latter two parameters depend on the network specified by the matrices $\{A_0(k), A_1(k)\}$ and the distribution parameters, where E[N] (and $N_{max}$) can be estimated from initial experiments. The algorithm requires the cyclicity c and an index j. Experiments showed that a critical node j on a critical circuit of the deterministic system is a good heuristic for a small mean coupling time E[N]. Both c and a critical j can be found by, e.g., the policy iteration algorithm or the power algorithm applied to the deterministic system [9]. The width of the confidence interval is proportional to $1/\sqrt{M}$, so halving the confidence interval requires 4M replications. Initial experiments give a good estimate of the average computation time CPU($m \cdot E[N]$) required per replication (10 seconds in the large-scale case), so that the overall computation time $M \cdot \mathrm{CPU}(m \cdot E[N])$ can be estimated in advance.
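As a rough consistency check (not in the original text), the reported numbers fit this budget. For the large-scale run with M = 20 replications at about 10 seconds each,
$$T \approx M \cdot \mathrm{CPU}(m \cdot E[N]) = 20 \times 10\ \mathrm{s} = 200\ \mathrm{s} \approx 3.3\ \text{minutes},$$
in line with the reported computation time of about 3 minutes; and since the interval width satisfies $w(M) \propto 1/\sqrt{M}$, indeed $w(4M) = w(M)/2$.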
V. EXTENSIONS AND OPEN ISSUES

Future work will be on the extension of the LEA algorithm to reducible matrices. Moreover, variance reduction methods will be studied in order to reduce the variance of the bounding interval produced by LEA.

REFERENCES
[1] Baccelli, F., Cohen, G., Olsder, G.J., and Quadrat, J.-P. Synchronization and Linearity. John Wiley and Sons, 1992. (This book is out of print and can be accessed via the max-plus web portal at http://maxplus.org.)
[2] Baccelli, F. and Hong, D. Analytic expansions of (max,+) Lyapunov exponents. Annals of Applied Probability, 10:779-827, 2000.
[3] Baccelli, F. and Schmidt, V. Taylor series expansions for Poisson-driven (max,+)-linear systems. Annals of Applied Probability, 6:138-185, 1996.
[4] Baccelli, F., Hasenfuß, S., and Schmidt, V. Expansions for steady-state characteristics of (max,+)-linear systems. Communications in Statistics: Stochastic Models, 14:1-24, 1998.
[5] Cochet-Terrasson, J., Cohen, G., Gaubert, S., McGettrick, M., and Quadrat, J.-P. Numerical computation of spectral elements in max-plus algebra. IFAC Conference on System Structure and Control, Nantes, 1998.
[6] Cuninghame-Green, R.A. Minimax Algebra. Vol. 166 of Lecture Notes in Economics and Mathematical Systems. Springer, Berlin, 1979.
[7] Goverde, R.M.P. Railway timetable stability analysis using max-plus system theory. Transportation Research Part B, 41(2):179-201, 2007.
[8] Heidergott, B. Max-Plus Linear Stochastic Systems and Perturbation Analysis. Springer, New York, 2006.
[9] Heidergott, B., Olsder, G.J., and van der Woude, J. Max Plus at Work: Modeling and Analysis of Synchronized Systems. Princeton University Press, 2005.
[10] Merlet, G. Cycle time of stochastic max-plus linear systems. Electronic Journal of Probability, 13:322-340, 2008.
[11] Merlet, G. Memory loss property for products of random matrices in the (max,+) algebra. Technical report, IRMAR, http://hal.archives-ouvertes.fr/ccsd-00001607, May 2004. Submitted.