The conditional law of a continuous time random walk with respect to its local time: explicit form and approximations∗

Giovanna Nappo†, Barbara Torti‡

January 2003
Abstract

A filtering problem is considered in the case when the state process is a continuous time random walk X_t and the observation process is its local time L_t. An explicit construction of the filter is given. This construction is then applied to a suitable approximation of a Brownian motion and to an asymptotically symmetric M/M/1 queueing model. In both these cases it has been possible to give a convergence result for the respective filters.
Contents

1 Introduction
2 The model
3 Scaling
4 The interpolating Brownian motion model: strong convergence for the filter
5 The M/M/1 queueing model: weak convergence for the filter
  5.1 Weak convergence of the filter: the symmetric case
  5.2 Weak convergence of the filter: the asymptotically symmetric case
6 The M/M/1 queueing model: an approximation for the filter
7 The M/M/1 queueing model: observing the idle time process
A Appendix
  A.1 Identification of the interpolating Brownian motion model
B A limit result and an upper bound for the rescaled queue
  B.1 Preliminary results
  B.2 Sketch of the proof
  B.3 Technical details

∗ Partially supported by MURST project "Processi Stocastici, Modelli Stocastici ed Applicazioni"
† Dipartimento di Matematica - Università di Roma "La Sapienza", Italy
‡ Dipartimento di Matematica - Università di Roma "Tor Vergata", Italy
1 Introduction
The kind of problems we are interested in arises from the following situation. Suppose that in a queue we can observe, up to time t, whether the queue is busy or idle, but we cannot observe the size of the queue, so that the observation process is the total time the queue has spent in 0, i.e. the so called idle time (see Prabhu [14]). Then the problem is to evaluate the size of the queue at time t, given this information, i.e. to compute the conditional law (or the filter) of the queue given the observation process up to time t. In the heavy traffic limit setup, the rescaled queue converges to a reflected Brownian motion and the observation process converges to its local time. Then the limit model can be constructed as (W_t + Λ_t, Λ_t), where W_t is a Brownian motion and Λ_t = −inf_{0≤s≤t} W_s is the corresponding local time in the sense of the Skorohod definition. In the limit model the problem is the computation of the conditional law of the reflected Brownian motion W_t + Λ_t when the observation process is its local time Λ_t, i.e. the computation of the filter E[g(W_t + Λ_t)/F_t^Λ], for g in a sufficiently large class of functions. A first problem is to find the exact expression for the filters, both for the filter of the limit model of reflected Brownian motion and for the filter of the rescaled queue model. A second problem concerns the convergence of the latter to E[g(W_t + Λ_t)/F_t^Λ]. The filter of the limit Brownian motion model is derived in G. Nappo, B. Torti [13] (Sections 4 and 6), where it is obtained by means of a suitable sequence of processes Λ^n approximating the observation process Λ. Each process Λ^n is proportional to a counting process, and therefore the nonlinear filtering techniques for counting processes are used. The filter can also be derived by means of the Azéma martingale, and this derivation is shortly discussed in [13]. For the sake of completeness we recall its explicit expression.

Theorem 1.1.
Let W_t be a Brownian motion with diffusion coefficient a² and drift c ∈ R and let Λ_t be its local time. Let g be a bounded measurable function. Denote

Π(g)(s, l) = ∫_0^∞ g(−l + y√s) y exp(−y²/2) dy,  s ≥ 0,   (1)

or, equivalently,

Π(g)(s, l) = E[g(−l + W*_s)/Λ*_s = 0],   (2)

where W* is any standard Brownian motion and Λ* is its local time. Then

π_t(g) = E[g(W_t)/F_t^Λ] = ( Π(g(·) exp((c/a²)·))(a²s, l) / Π(exp((c/a²)·))(a²s, l) )|_{s=ζ_t, l=Λ_t},   (3)

and

π̂_t(g) = E[g(W_t + Λ_t)/F_t^Λ] = ( Π(g(·) exp((c/a²)·))(a²s, 0) / Π(exp((c/a²)·))(a²s, 0) )|_{s=ζ_t},   (4)

where F_t^Λ is the history generated by Λ_u up to time t and ζ_t is the elapsed time from the last visit to 0 for the process W_t + Λ_t, i.e.

ζ_t = γ_t^0(W + Λ) = γ_t(Λ),   (5)

where

γ_t^0(x) = t − sup{s < t : x_s = 0},   (6)

γ_t(x) = t − sup{s < t : x_s < x_t}.   (7)
In this paper we start by considering a somewhat simplified version of the motivating problem: we consider a continuous time random walk Y_t and its conditional law w.r.t. its local time L_u up to time t. Indeed, up to a reflection, several queueing models and birth and death processes can be represented just by choosing the law of a continuous time random walk. More precisely, we start by deriving an explicit expression for the filter under the assumption that the process Y_t can be decomposed as Y_t = V_{Z_t}, where Z_t is a renewal process, V_k is a discrete time random walk, and Z_t and V_k are mutually independent. It turns out that the filter of Y_t w.r.t. L_u up to time t can be expressed as a deterministic function of L_t and ζ_t = γ_t(L), where γ_t is the deterministic functional of L_u up to time t defined in (7). Then we consider two situations in which a sequence (X_t^n, L_t^n) of rescaled random walks converges to a Brownian motion and its local time (W_t, Λ_t), and we deal with the problem whether the corresponding filter converges to the filter of the limit. As discussed at the end of Section 3, the convergence of the previous systems is equivalent to the convergence of the system with the reflected random walk (X_t^n + L_t^n, L_t^n) to the system with the reflected Brownian motion (W_t + Λ_t, Λ_t), and the convergence of the filters for the rescaled random walks is equivalent to the convergence of the filters for the reflected random walks. In particular, for the models we propose, we study two different kinds of problems. The first is a theoretical one, and concerns the problem of checking whether the weak limit of the filter is the filter of the limit model. In turn, this question gives rise to another problem, more interesting from a computational point of view: finding a good approximation of the filter of the discrete system.
If the filter, as a functional of the observation process, satisfies some regularity properties both in the discrete case and in the limit case, we can consider using the limit functional, computed on the true observation, in order to give a manageable approximation of the filter. This situation will be analyzed in Section 6. These problems have been studied in more general situations by many authors, among which we recall in particular Bhatt et al. [1] and Goggin ([9], [10]). Most of these results concern diffusive models and do not apply to our case. Moreover, usually the applications concern the problem of approximating a given signal/observation process with a suitably chosen sequence of signal/observation processes, so that the corresponding sequence of filters converges to the filter of the original process. We start from a different point of view: the sequence of processes is given, and the problem is to show the convergence of the filters in the sense specified above.

The above problem of convergence of the sequence of filters is not trivial, as even strong convergence of random variables does not imply convergence of the conditional laws. This is clearly explained by the following simple and illuminating example (see E. Goggin [9]). Let ξ be a real random variable, and (ξ_n, η_n) = (ξ, ξ/n). Then (ξ_n, η_n) converges strongly to (ξ, η), with η = 0. Nevertheless, for any measurable function g, E(g(ξ_n)/η_n) = g(ξ), so that the conditional law of ξ_n given η_n is the measure concentrated in ξ(ω), while E(g(ξ)/η) = E(g(ξ)), so that the conditional law of ξ given η coincides with the (deterministic) law P ∘ ξ^{−1} of ξ. In the example above, although the sequence of the conditional laws of ξ_n given η_n does not converge to the conditional law of the limit, it is a constant sequence and therefore a converging sequence. Indeed this is not surprising in the light of the next general result, which is a slight generalization of a result of E. Goggin [9] (proof of Theorem 2.1, Step 1): one has only to replace the sequence of σ-algebras used in [9] with a general sequence.
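Goggin's example can be made concrete on a finite state space. The sketch below (ξ uniform on {0, 1, 2} and g(x) = x² are illustrative choices, not from the paper) computes both conditional expectations exactly: conditioning on η_n = ξ/n recovers ξ, while the limit observation η = 0 carries no information.

```python
def cond_exp_given(g, pairs):
    """E[g(xi) | eta] on a finite list of equally likely (xi, eta) pairs,
    returned as a map eta -> conditional expectation."""
    groups = {}
    for xi, eta in pairs:
        groups.setdefault(eta, []).append(g(xi))
    return {eta: sum(v) / len(v) for eta, v in groups.items()}

g = lambda x: x * x              # illustrative test function
n = 10
pairs_n = [(xi, xi / n) for xi in (0, 1, 2)]    # (xi_n, eta_n) = (xi, xi/n)
pairs_limit = [(xi, 0.0) for xi in (0, 1, 2)]   # limit pair (xi, eta), eta = 0

# Each value of eta_n pins down xi, so E[g(xi_n)/eta_n] = g(xi) ...
assert cond_exp_given(g, pairs_n) == {0.0: 0.0, 0.1: 1.0, 0.2: 4.0}
# ... while eta = 0 gives the unconditional mean: E[g(xi)/eta] = E[g(xi)] = 5/3.
assert cond_exp_given(g, pairs_limit) == {0.0: 5 / 3}
```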
Lemma 1.2. Let R_n be a sequence of random variables with values in a Polish space, let H_n be a sequence of σ-algebras, and let α_n be a regular version of the conditional distribution of R_n given H_n. If {R_n, n ∈ N} is tight, then {α_n, n ∈ N} is tight.

Then, as far as weak convergence is concerned, for the systems converging to a Brownian motion, the main problem is to check whether the limit points of the sequence of filters are all equal to the filter of a standard Brownian motion w.r.t. its local time. The first model we consider is a non-Markovian queueing model arising when a Brownian motion W is approximated by a sequence of continuous time random walks W^n, obtained with a suitable interpolation procedure. The approximation scheme we propose for W follows some of the ideas used in [10] to study a filter approximation problem in diffusive models, and is related to the approximation scheme used in [13]. In addition, the processes W^n can be identified with our rescaled model. In this particular case we also get a strong convergence result for the approximating filter. The second model we consider is the case when the renewal process Z_t is a Poisson process, so that the reflected random walk is an M/M/1 queue. We consider the case when the drift of the limit Brownian motion is zero, and we find, under suitable conditions, the weak limit of the corresponding filter by a limiting procedure. It is important to note that the observation process is the local time of the random walk, which is not the cumulative time spent in 0, as in the original motivating problem. The last Section outlines how this latter problem can be dealt with.
2 The model
Fix a probability space (Ω, F, P) and consider on it two sequences of random variables {T_j, j ≥ 1}, {U_j, j ≥ 1} satisfying the assumptions

H1 the sequences {T_j, j ≥ 1}, {U_j, j ≥ 1} are mutually independent;
H2 the random variables {T_j, j ≥ 1} are non-negative, mutually independent, with identical distribution function F;
H3 {U_j, j ≥ 1} is a sequence of i.i.d. random variables such that P(U_j = 1) = p, P(U_j = −1) = 1 − p = q, p ∈ (0, 1).

Put

τ_0 = 0,  τ_k = Σ_{j=1}^k T_j for k ≥ 1,

consider the renewal process

Z_t = Σ_{j=1}^∞ I(τ_j ≤ t),

and the random walk {V_j, j ≥ 0} defined by V_0 = 0, V_j = V_{j−1} + U_j, j ≥ 1. Finally, consider the marked point process

Y_t = V_{Z_t} = Σ_{j=1}^∞ U_j I(τ_j ≤ t) = Σ_{j≤Z_t} U_j.   (8)
The solution of the Skorohod problem¹ for the process Y_t is given by the pair (Y_t + L_t, L_t), where L_t is the local time at level 0 for the process Y_t, that is L_t = ℓ_t(Y), where ℓ : D_R[0, ∞) → D_R[0, ∞) is such that

ℓ_t(x) = − inf_{s≤t} x(s) ∧ 0.   (9)

It is easy to see that, for our model, L_t admits the representation

L_t = Σ_{j=1}^∞ I(σ_j ≤ t),   (10)

where {σ_j, j ≥ 0} is the sequence of its jump times. We can see that {σ_j, j ≥ 0} is the subsequence of {τ_j, j ≥ 0} defined by σ_0 = 0 and

σ_j = inf{τ_k : Y_{τ_k} ≤ −j} = inf{t > 0 : Y_t ≤ −j} for j ≥ 1.
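The construction (8)-(10) can be simulated directly. The sketch below (not from the paper; exponential inter-arrival times, so that Z is a Poisson process, and p = 1/2 are illustrative assumptions) builds a path of Y and checks that the local time of (9) agrees with the count of first-passage times σ_j ≤ T of (10).

```python
import random

def simulate_Y(T=10.0, rate=1.0, p=0.5, seed=7):
    """Simulate the marked point process Y_t = V_{Z_t} of (8):
    T_j ~ Exp(rate) inter-arrival times, U_j = +1 w.p. p, -1 w.p. q = 1 - p.
    Returns the jump times tau_j and the values V_j = Y_{tau_j}."""
    rng = random.Random(seed)
    t, v = 0.0, 0
    times, values = [0.0], [0]
    while True:
        t += rng.expovariate(rate)           # next T_j
        if t > T:
            break
        v += 1 if rng.random() < p else -1   # next U_j
        times.append(t)
        values.append(v)
    return times, values

times, values = simulate_Y()
# (9): L_T = -(inf_{s<=T} Y_s) ∧ 0
L_T = -min(min(values), 0)
# (10): L_T counts the first-passage times sigma_j = inf{t : Y_t <= -j} with sigma_j <= T
sigma_count = sum(1 for j in range(1, L_T + 1) if any(v <= -j for v in values))
assert sigma_count == L_T
```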
Set G_t = F_t^L = σ{L_s, s ≤ t} and F_t = F_t^Y = σ{Y_s, s ≤ t}. Obviously {σ_j, j ≥ 0} are stopping times w.r.t. both the histories G_t and F_t. A first representation for the filter of Y_t given G_t is stated in the following Proposition, where we use a condition which is implied by the above conditions H1, H2, H3.

H0 The R_+ × {+1, −1}-valued random variables (T_j, U_j), for j ≥ 1, form a sequence of identically distributed and mutually independent random variables.

Proposition 2.1. Assume H0. Then the conditional law of Y_t given G_t admits the following representation P-a.s.

E[g(Y_t)/G_t] = Σ_{j=0}^∞ ( E[g(−j + Y_{s+σ_j} − Y_{σ_j}) I(S_{j+1} > s)] / E[I(S_{j+1} > s)] )|_{s=t−σ_j} I(σ_j ≤ t < σ_{j+1}),   (11)

where S_{j+1} = σ_{j+1} − σ_j, j ≥ 0.

Proof. The thesis can be proved by a slight modification of the argument in [13], Proposition 3.1.
Remark 2.2. The main ingredient in the proof of the previous Proposition is that condition H0 implies that the process Y_{s+σ_j} − Y_{σ_j} is independent of G_{σ_j} and is equal in law to the process Y_s. In its turn the last property implies that (11) admits the representation

E[g(Y_t)/G_t] = Σ_{j=0}^∞ ( E[g(−j + Y*_s) I(σ*_1 > s)] / E[I(σ*_1 > s)] )|_{s=t−σ_j} I(σ_j ≤ t < σ_{j+1}),   (12)

where Y*_s is another birth and death process, with the same law as Y_t, defined by the rule (8) starting from a sequence {(T*_j, U*_j), j ≥ 1} satisfying condition H0, and σ*_1 is its first visit time to the state −1. Then the problem reduces to the computation of

E[g(−j + Y*_s) I(σ*_1 > s)] / E[I(σ*_1 > s)] = E[g(−j + Y_s) I(σ_1 > s)] / E[I(σ_1 > s)],

for any s ≥ 0.

The previous Remark implies also that the process L_t is a renewal process, with inter-arrival times {S_h, h ≥ 1}. It is possible to represent them in terms of the sequences {T_i, i ≥ 1} and {M_i, i ≥ 0}, where the sequence {M_i, i ≥ 0} is recursively defined by the rule

M_0 = 0,  M_i = inf{k ≥ 0 : V_{M_0+...+M_{i−1}+k} − V_{M_0+...+M_{i−1}} = −1}.   (13)

Then

σ_0 = 0,  σ_h = Σ_{i=1}^{M_1+...+M_h} T_i = τ_{M_1+...+M_h},  h ≥ 1,

and

S_h = σ_h − σ_{h−1} = Σ_{i=M_1+...+M_{h−1}+1}^{M_1+...+M_h} T_i,  h ≥ 1.

¹ For any x ∈ D_R[0, ∞) with x(0) ≥ 0, the pair (z, v) = (x + ℓ(x), ℓ(x)) is the unique solution of the Skorohod problem, i.e. the unique pair of functions satisfying z(t) = x(t) + v(t), and such that z(t) ≥ 0 for all t ≥ 0, v(0) = 0, v is nondecreasing and increases only when z(t) = 0.
Under Condition H0 (and therefore under Conditions H2 and H3 for a suitable F and a suitable value of p) the sequence {M_i, i ≥ 1} is a sequence of i.i.d. random variables. Moreover, under Condition H1, it is obvious that {M_i, i ≥ 1} and {T_i, i ≥ 1} are mutually independent. The next result provides an explicit expression for the terms of the sum in (11).

Proposition 2.3. Assume conditions H1, H2, H3. Then the filter E[g(Y_t)/G_t] admits the representation

E[g(Y_t)/G_t] = Σ_{j=0}^∞ ( Σ_{k=1}^∞ E[I(M_1 ≥ k) g(−j + V_{k−1})] (F_{k−1}(t−σ_j) − F_k(t−σ_j)) / Σ_{m=1}^∞ P(M_1 = m)(1 − F_m(t−σ_j)) ) I(σ_j ≤ t < σ_{j+1}),   (14)
where F_k is the distribution function of τ_k, i.e. F_k = F^{*k}, the k-fold convolution of F.

Proof. As explained in Remark 2.2, we need only to compute E[I(σ_1 > s)] and E[g(−j + Y_s) I(σ_1 > s)], for all j ≥ 0. Taking into account the equality σ_1 = τ_{M_1}, and the independence of the sequences {T_i, i ≥ 1} and {M_i, i ≥ 0},

E[I(σ_1 > s)] = E[I(τ_{M_1} > s)] = E[ Σ_{m=1}^∞ I(τ_m > s) I(M_1 = m) ] = Σ_{m=1}^∞ P(M_1 = m)(1 − F_m(s)).

Finally, taking into account also (8),

E[g(−j + Y_s) I(σ_1 > s)] = E[g(−j + V_{Z_s}) I(τ_{M_1} > s)] = E[ Σ_{m=1}^∞ g(−j + V_{Z_s}) I(τ_m > s) I(M_1 = m) ]
= E[ Σ_{m=1}^∞ Σ_{i=1}^∞ g(−j + V_{i−1}) I(τ_{i−1} ≤ s < τ_i) I(τ_m > s) I(M_1 = m) ]
= Σ_{m=1}^∞ Σ_{i=1}^m E[I(M_1 = m) g(−j + V_{i−1}) I(τ_{i−1} ≤ s < τ_i)]
= Σ_{m=1}^∞ Σ_{i=1}^m E[I(M_1 = m) g(−j + V_{i−1})] P(τ_{i−1} ≤ s < τ_i)
= Σ_{i=1}^∞ Σ_{m=i}^∞ E[I(M_1 = m) g(−j + V_{i−1})] (F_{i−1}(s) − F_i(s))
= Σ_{i=1}^∞ E[I(M_1 ≥ i) g(−j + V_{i−1})] (F_{i−1}(s) − F_i(s)).
Now the state space of the process Y_t is discrete and then the filter E[g(Y_t)/G_t] is determined by its discrete density ν_t(x), x ∈ Z, i.e. ν_t(x) = E[g(Y_t)/G_t] with g(z) = I_{{x}}(z), z ∈ Z. In the next Proposition we give the expression of the discrete density ν_t(x).

Proposition 2.4. The filter of Y_t given G_t has discrete density

ν_t(x) = Σ_{j=0}^∞ ( Σ_{k=1}^∞ P(V_{k−1} = x + j, M_1 ≥ k)(F_{k−1}(t−σ_j) − F_k(t−σ_j)) / Σ_{m=1}^∞ P(M_1 = m)(1 − F_m(t−σ_j)) ) I(σ_j ≤ t < σ_{j+1}),

where

P(V_{k−1} = a, M_1 ≥ k) = ((a+1)/k) C(k, (k+a+1)/2) p^{(k+a−1)/2} q^{(k−a−1)/2}  if k + a − 1 is even and |a| ≤ k − 1, and 0 otherwise,   (15)

with C(k, m) the binomial coefficient.

Proof. By using (14) for the functions g(z) = I_{{x}}(z), z ∈ Z, we achieve the result. The joint law of (V_{k−1}, max_{j≤k−1} V_j) is well known (see e.g. Theorem 3.10.10 in [11]), and in particular

P(V_{k−1} = −a, max_{j≤k−1} V_j < 1) = ((a+1)/k) C(k, (k+a+1)/2) q^{(k+a−1)/2} p^{(k−a−1)/2}  if k + a − 1 is even and |a| ≤ k − 1, and 0 otherwise.   (16)

Observe that {V_{k−1} = a, M_1 ≥ k} = {V_{k−1} = a, min_{j≤k−1} V_j > −1}. Then, by noting that

P(V_{k−1} = −a, max_{j≤k−1} V_j < 1) = P(−V_{k−1} = a, min_{j≤k−1}(−V_j) > −1),

we obtain the thesis interchanging the roles of p and q in (16).
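Formula (15) is a ballot-type count and can be verified by enumeration. The following sketch (p = 0.3 is an illustrative value, not from the paper) compares it with a brute-force sum over all ±1 paths of length k − 1 that end at a without ever reaching −1.

```python
from math import comb
from itertools import product

def p_stay_nonneg(k, a, p):
    """Formula (15): P(V_{k-1} = a, M_1 >= k) for the walk with P(U_j = 1) = p."""
    q = 1.0 - p
    if a < 0 or (k + a - 1) % 2 != 0 or abs(a) > k - 1:
        return 0.0
    return ((a + 1) / k) * comb(k, (k + a + 1) // 2) \
        * p ** ((k + a - 1) // 2) * q ** ((k - a - 1) // 2)

def p_brute(k, a, p):
    """Brute force: weights of all +/-1 paths of length k-1 realizing the event
    {V_{k-1} = a, min_{j<=k-1} V_j > -1}."""
    q = 1.0 - p
    total = 0.0
    for steps in product((1, -1), repeat=k - 1):
        pos, ok = 0, True
        for u in steps:
            pos += u
            if pos <= -1:          # the walk reaches -1, i.e. M_1 < k
                ok = False
                break
        if ok and pos == a:
            ups = steps.count(1)
            total += p ** ups * q ** (k - 1 - ups)
    return total

for k in range(1, 9):
    for a in range(0, k):
        assert abs(p_stay_nonneg(k, a, 0.3) - p_brute(k, a, 0.3)) < 1e-12
```

For instance k = 3, a = 0 gives pq: the only admissible two-step path is up-down, since down-up reaches −1.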
3 Scaling
Let {X̃_t^n, n ∈ N} be a sequence of continuous time random walks defined on (Ω^n, F^n, P^n). We assume that X̃_t^n = Ṽ^n_{Z̃_t^n}, where Ṽ^n and Z̃^n are defined as in Section 2 starting from a sequence {(T̃_j^n, Ũ_j^n); j ≥ 1}. For each n, we assume that the sequence {(T̃_j^n, Ũ_j^n); j ≥ 1} satisfies the assumption H0 stated in Section 2. Consider the deterministic linear time-space scaling

X_t^n = b_n X̃^n_{a_n t},

where {a_n, n ∈ N} and {b_n, n ∈ N} are suitable sequences of real numbers. The local time L_t^n = ℓ_t(X^n) of the rescaled process X_t^n can be obtained just applying the same scaling to the process L̃_t^n = ℓ_t(X̃^n), i.e.

L_t^n = b_n L̃^n_{a_n t}.

We are interested in the conditional law of X_t^n w.r.t. F_t^{L^n} = F_{a_n t}^{L̃^n}, i.e. the filter

π_t^n(g) = E^n[g(X_t^n)/F_t^{L^n}] = E^n[g(b_n X̃^n_{a_n t})/F^{L̃^n}_{a_n t}],

where E^n denotes the expectation w.r.t. P^n. For the sake of notational convenience we will denote F_t^{L^n} as G_t^n and, when unnecessary, drop the symbol n in the expectation, so that the filter becomes

π_t^n(g) = E[g(X_t^n)/G_t^n].

Following the same lines as in the previous Section we get the representation
π_t^n(g) = E[g(X_t^n)/G_t^n] = Σ_{j=0}^∞ ( E[g(−b_n j + X_s^{n*}) I(σ_1^{n*} > s)] / E[I(σ_1^{n*} > s)] )|_{s=t−σ_j^n} I(σ_j^n ≤ t < σ_{j+1}^n),   (17)

where

• {σ_j^n, j ≥ 0} is defined by σ_j^n = inf{t > 0 : X_t^n ≤ −b_n j} and represents a renewal process that coincides with the local time L_t^n;
• X^{n*} is a process with the same law as X^n and σ_1^{n*} is its first exit time from the set (−b_n, ∞).

Note that, when t ∈ [σ_j^n, σ_{j+1}^n), the stopping time σ_j^n can be written as

σ_j^n = sup{u ≤ t : L_u^n < L_t^n}.

This observation allows us to write (17) as follows

π_t^n(g) = E[g(X_t^n)/G_t^n] = E[g(−l + X_s^{n*})/{L_s^{n*} < b_n}]|_{l=L_t^n, s=ξ_t^n} = E[g(−l + X_s^{n*})/{L_s^{n*} = 0}]|_{l=L_t^n, s=ξ_t^n},   (18)

where

ξ_t^n = t − sup{u ≤ t : L_u^n < L_t^n} = γ_t(L^n),   (19)

with γ_t(x) the functional defined by (7). Then, denoting by Σ^n(g)(s, l) the quantity

Σ^n(g)(s, l) = E[g(−l + X_s^{n*})/{L_s^{n*} < b_n}] = E[g(X_s^{n*} − l) I(σ_1^{n*} > s)] / E[I(σ_1^{n*} > s)],   (20)

(17) can be shortly written as

π_t^n(g) = E[g(X_t^n)/G_t^n] = Σ^n(g)(ξ_t^n, L_t^n).   (21)
Remark 3.1. Note that Σ^n(g)(s, l) depends also on the probability measure P^n, i.e. Σ^n(g)(s, l) = Σ^n_{P^n}(g)(s, l). This dependence will be emphasized when necessary.

By taking into account that

π̂_t^n(g) = E[g(X_t^n + L_t^n)/G_t^n] = E[g(X_t^n + m)/G_t^n]|_{m=L_t^n},

the filter can be written as

π̂_t^n(g) = E[g(X_t^n + L_t^n)/G_t^n] = Σ̂^n(g)(ξ_t^n),   (22)

where

Σ̂^n(g)(s) = Σ^n(g)(s, 0) = E[g(X_s^{n*})/{L_s^{n*} < b_n}] = E[g(X_s^{n*}) I(σ_1^{n*} > s)] / E[I(σ_1^{n*} > s)].   (23)
We end this Section by noting that, when the process X̃_t^n = Ṽ^n_{Z̃_t^n} satisfies the assumptions H1, H2 and H3 stated in Section 2, then Proposition 2.3 easily provides the explicit expressions of Σ^n(g)(s, l) and Σ̂^n(g)(s):

Σ^n(g)(s, l) = Σ_{k=1}^∞ E[I(M_1^n ≥ k) g(b_n V_{k−1}^n − l)](F_{k−1}^n(s) − F_k^n(s)) / Σ_{m=1}^∞ P(M_1^n = m)(1 − F_m^n(s)),   (24)

Σ̂^n(g)(s) = Σ_{k=1}^∞ E[I(M_1^n ≥ k) g(b_n V_{k−1}^n)](F_{k−1}^n(s) − F_k^n(s)) / Σ_{m=1}^∞ P(M_1^n = m)(1 − F_m^n(s)),   (25)

where M_1^n, F_k^n have a similar meaning as M_1, F_k in (14). In particular it is interesting to note that, if F̃_1^n denotes the distribution function of

σ̃_1^n = inf{t > 0 : X̃_t^n ≤ −1},

then σ_1^n = σ̃_1^n/a_n, and therefore F_1^n(s) = F̃_1^n(a_n s) and F_k^n(s) = F̃_k^n(a_n s), where F̃_k^n is the k-fold convolution of F̃_1^n.

The remainder of this paper is devoted to the problem of finding the limit of the filter (21) (or of the filter (22)), or some good approximation of it, in the case when the rescaled sequence X_t^n converges to a process X_t. When X_t^n converges to a continuous process X_t, then the local time L_t^n = ℓ_t(X^n) of X_t^n also converges to the local time ℓ_t(X) of X_t. In fact the functional ℓ_t(x) defined by (9) is continuous w.r.t. the topology of uniform convergence on compact sets. In this case it is also natural to investigate whether the limit of the filter is the corresponding filter of the limit. Finally we observe that, if the limit of (22) exists, then the same holds for (21), and these limits can be obtained one from the other, as we can obtain (22) from (21) and vice versa.
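The scaling relation L_t^n = b_n L̃_{a_n t} can be checked on any piecewise constant path. In the small sketch below the specific toy path and the choice a_n = n, b_n = 1/√n (the Brownian regime of the next sections) are illustrative assumptions.

```python
def local_time(times, values, t):
    """ell_t(x) = -(inf_{s<=t} x(s)) ∧ 0, formula (9), for a piecewise constant
    path given by its jump times and the value taken from each jump time on."""
    running_min = 0.0
    for s, v in zip(times, values):
        if s > t:
            break
        running_min = min(running_min, v)
    return -min(running_min, 0.0)

# A toy unscaled path X~ (jump times, values), e.g. a +/-1 walk:
t_jumps = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
x_vals  = [0.0, -1.0, 0.0, -1.0, -2.0, -1.0]

n = 4
a_n, b_n = n, n ** -0.5   # illustrative time-space scaling
t = 1.2                   # observation time for the rescaled path

# Left side: local time of the rescaled path X^n_t = b_n * X~_{a_n t}.
lhs = local_time([s / a_n for s in t_jumps], [b_n * v for v in x_vals], t)
# Right side: rescaled local time b_n * L~_{a_n t}.
rhs = b_n * local_time(t_jumps, x_vals, a_n * t)
assert abs(lhs - rhs) < 1e-12
```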
4 The interpolating Brownian motion model: strong convergence for the filter

In this Section we study the case of a continuous time random walk arising when a Brownian motion W is approximated by a sequence of processes W^n. The processes W^n are defined on the probability space of W, and are obtained pathwise by an interpolation procedure. For this model we are able to get a strong convergence result for the filter. When the process W has drift zero, the approximating model falls into the frame of the previous sections; in particular the common distribution function of the T_j is F^0 (see (39) below) and the random walk is symmetric. The asymmetric case can also be managed, and we will briefly explain how at the end of this Section.

We start by introducing the approximating model; then we show the convergence result. The basic idea is to approximate the state W by the stepwise interpolation of the random points where W hits a uniform grid, and to consider as approximating observation the local time of the approximating state. This procedure is a deterministic one, and therefore we describe it in the deterministic setting. Let z ∈ D_R[0, +∞) and let h ∈ R_+ be a fixed threshold. Consider the sequence {τ̂_k^h(z), k ≥ 0}

τ̂_0^h(z) = 0,  τ̂_k^h(z) = inf{t > τ̂_{k−1}^h(z) : |z(t) − z(τ̂_{k−1}^h(z))| ≥ h},  k ≥ 1.   (26)

Put z_k^h = z(τ̂_k^h(z)) and consider the function z^h ∈ D_R[0, +∞)

z^h(t) = Σ_{k=0}^∞ I_{[τ̂_k^h(z), τ̂_{k+1}^h(z))}(t) z_k^h.   (27)
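The interpolation (26)-(27) is straightforward to implement for a finely sampled path. The sketch below (a sampled sine path standing in for z is an illustrative assumption) also checks the bound |z^h(t) − z(t)| ≤ h of Lemma 4.1 below.

```python
import math

def interpolate(path, h):
    """Stepwise interpolation z^h of (26)-(27) on a finely sampled path:
    a new level z_k^h = z(tau_k^h) is recorded each time the path has moved
    at least h away from the last recorded level."""
    level = path[0]
    zh = []
    for z in path:
        if abs(z - level) >= h:
            level = z            # (approximate) hitting time tau_k^h
        zh.append(level)
    return zh

# A sampled continuous path (a sine wave stands in for a Brownian path):
path = [math.sin(0.01 * k) for k in range(1000)]
h = 0.1
zh = interpolate(path, h)
# Bound (28): |z^h(t) - z(t)| <= h at every sample point.
assert max(abs(a - b) for a, b in zip(zh, path)) <= h
```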
We will need the following results.

Lemma 4.1. Let z ∈ C_R[0, +∞). Then, for each t ∈ R_+,

|z^h(t) − z(t)| ≤ h,   (28)

and z^h → z and ℓ(z^h) → ℓ(z) as h → 0, w.r.t. the topology of uniform convergence, where the functional ℓ is defined by (9).

Proof. The bound (28) is due to the continuity of z, and the uniform convergence of z^h to z follows. The continuity of the functional ℓ implies the other limit.

Remark 4.2. The process ℓ(z^h) admits the representation

ℓ_t(z^h) = 0 ∨ (−z_0) − Σ_{j=0}^∞ z_{σ_j^h(z)} I_{[σ_j^h(z), σ_{j+1}^h(z))}(t),

where

σ_j^h(z) = inf{t : z(t) − z_0 ≤ −jh} = inf{τ̂_k^h(z) : z(τ̂_k^h(z)) − z_0 ≤ −jh}.   (29)

When z is a continuous function and z_0 = 0, then z_{σ_j^h(z)} = −jh and

ℓ_t(z^h) = Σ_{j=0}^∞ jh I(σ_j^h(z) ≤ t < σ_{j+1}^h(z)).
(γ_t^0(Q^n), γ_t(L^n), L_t^n) ⇒ (ζ_t, ζ_t, Λ_t).

Proof. Define

η_t^n = sup{s < t : L_s^n < L_t^n},  η_t = sup{s < t : Λ_s < Λ_t},
β_t^n = sup{s < t : Q_s^n = 0},  β_t = sup{s < t : W_s + Λ_s = 0},

with η_t^n = t, η_t = t, β_t^n = t and β_t = t if the corresponding set is empty. Note that

η_t^n = sup{s < t : X_t^n − X_s^n < Q_t^n − Q_s^n},  η_t = sup{s < t : W_t − W_s < W_t + Λ_t − W_s − Λ_s}.

Applying the Skorohod representation theorem, we can assume

sup_{s≤t}(|X_s^n − W_s| + |Q_s^n − W_s − Λ_s|) → 0  a.s.   (45)

This implies that L^n → Λ uniformly in [0, t] a.s. and

lim inf_{n→∞} η_t^n ≥ η_t.

Then, since γ_t^0(Q^n) = t − β_t^n and γ_t(L^n) = t − η_t^n, the result is achieved once we prove that the sequence (β_t^n, η_t^n) converges a.s. to (η_t, η_t) and that ζ_t = t − η_t. Let β_t^∞ = lim sup_{n→∞} β_t^n and note that Q^n(β_t^n) assumes only the values 0 or 1/√n. Then, by (45), W_{β_t^∞} + Λ_{β_t^∞} = 0. It follows that

lim sup_{n→∞} β_t^n ≤ β_t.

Moreover, if η_t^n < t, then Q^n(η_t^n) = 0, and if η_t^n = t, then β_t^n = t. It follows that

η_t^n ≤ β_t^n  for all t, a.s.,

and then

η_t ≤ lim inf_{n→∞} η_t^n ≤ lim sup_{n→∞} β_t^n ≤ β_t  for all t, a.s.
+ Λ) = t − βt = t − ηt = t − γt (Λ). For sake of completeness we prove the and then ζt = previous statement. Note that Ws + Λs = 0 if and only if Ws = inf r 0, g(0) if s = 0,
(47)
for each g : R+ → R bounded and continuous, where Π(g)(s, l) is the function defined by (1) and (2) in Theorem 1.1.
Proposition 5.5. Let g : R_+ → R be a bounded continuous function. Under the assumption (46), for each s,

Σ̂^n(g)(s) → Π̂(g)(s) as n → ∞.   (48)

Moreover, for any sequence s_n converging to s > 0,

Σ̂^n(g)(s_n) → Π̂(g)(s) as n → ∞.   (49)

In order to prove Proposition 5.5 we need a preliminary result.

Lemma 5.6. For each s > 0 and n ∈ N, let F_s^n be the distribution function of the random variable X_s^n. Then

F_s^n(x) − Φ(x/√s) = o(1/√(ns)),  uniformly in x ∈ Γ_n,   (50)

where

Γ_n = {(1/√n)(z + 1/2), z ∈ Z},

and Φ(x) is the distribution function of a standard normal random variable.

Proof. The random variable X_s^n can be written as follows

X_s^n = (1/√n) Σ_{k=1}^n Δ_s^k,

where Δ_s^k = X̃_{ks} − X̃_{(k−1)s}, k ∈ N. Under the assumption stated at the beginning of this Subsection, the sequence {Δ_s^k, k ∈ N} is a sequence of i.i.d. symmetric random variables, with characteristic function exp{s(cos(u) − 1)}, with first and third moments equal to zero and Var(Δ_s^1) = s. Moreover their common distribution function F_s is concentrated on the lattice Γ_n, as well as the distribution function F_s^n. Then, since the third moments of the Δ_s^k are zero, by Theorem 2 in [7], Chapter XVI.4, we get

F̂_s^n(x) − Φ(x) = o(1/√n),  uniformly in x such that x√s ∈ Γ_n,   (51)

where F̂_s^n(x) = F_s^n(x√s) is the distribution function of X_s^n/√s. The expansion (51) does not imply immediately the thesis; nevertheless (50) can be proved by repeating and adapting the proof to the case when the limit distribution function is Φ(x/√s) instead of Φ(x).

Remark 5.7. Let s_n → s and s > 0, so that there exists n̄ such that s_n ≥ s/2, for any n > n̄. Then, under the assumptions of Lemma 5.6,

F_{s_n}^n(x) − Φ(x/√s_n) = o(1/√n),  uniformly in x ∈ Γ_n.   (52)
Proof of Proposition 5.5. We prove in detail only (48). The proof of (49), for the case of a sequence s_n converging to s > 0, is treated in a similar way, by using Lemma 5.6 and Remark 5.7. When s = 0, (48) is trivial. When s > 0, we note that Σ̂^n(g)(s) is the integral of g w.r.t. the conditional law of X_s^{n*} given the event {L_s^{n*} < 1/√n}. Then

Σ̂^n(g)(s) = Θ^n(g)(s) / Θ^n(1)(s),

where 1 denotes the constant function 1(x) = 1, and where Θ^n(s) is the measure defined by

Θ^n(g)(s) = E[g(X_s^{n*}) I{L_s^{n*} < 1/√n}].

We can restrict to the functions g ∈ C¹(R) with g′(0) = 0, since this class is convergence determining. Without loss of generality we can also assume that g(0) = 0; in fact

Σ̂^n(g)(s) = Σ̂^n(g − g(0))(s) + g(0).

By an explicit computation we get

Θ^n(g)(s) = Σ_{k=0}^∞ g(k/√n) P(X_s^{n*} = k/√n, min_{0≤u≤s} X_u^{n*} > −1/√n).

Moreover, the reflection principle yields

P(X_s^{n*} = k/√n, min_{0≤u≤s} X_u^{n*} > −1/√n) = P(X_s^{n*} = k/√n) − P(X_s^{n*} = −(k+2)/√n) = P(X_s^{n*} = k/√n) − P(X_s^{n*} = (k+2)/√n),

and so

Θ^n(g)(s) = Σ_{k=0}^∞ g(k/√n) [P(X_s^{n*} = k/√n) − P(X_s^{n*} = (k+2)/√n)]
= E[g(X_s^{n*}) I{X_s^{n*} ≥ 0} − g(X_s^{n*} − 2/√n) I{X_s^{n*} − 2/√n ≥ 0}].   (53)

Set g̃ : R → R, g̃(x) = g(x) I(x ≥ 0). Then

Θ^n(g)(s) = E[g̃(X_s^{n*}) − g̃(X_s^{n*} − 2/√n)] = (2/√n) E[g̃′(X_s^{n*} − θ 2/√n)],

where θ is a (0, 1)-valued random variable. Then the sequence of measures Σ̂^n(s) admits the following representation

Σ̂^n(g)(s) = E[g̃′(X_s^{n*} − θ 2/√n)] (2/√n) (1/Θ^n(1)(s)),   (54)

for each g ∈ C¹(R) with g′(0) = 0. On the other hand

lim_{n→∞} E[g̃′(X_s^{n*} − θ 2/√n)] = E[g̃′(W_s)],   (55)

and an explicit computation yields

E[g̃′(W_s)] = (1/√(2πs)) ∫_0^∞ g′(y) exp(−y²/(2s)) dy = (1/√(2πs)) ( −g(0) + (1/s) ∫_0^∞ y g(y) exp(−y²/(2s)) dy ).

By the change of variable x = y/√s in the integral, and recalling that g(0) = 0, we get

E[g̃′(W_s)] = (1/√(2πs)) ∫_0^∞ g(√s x) x exp(−x²/2) dx = (1/√(2πs)) Π̂(g)(s).

Then, by (54), the proof of (48) is completely achieved by showing that

lim_{n→∞} √n Θ^n(1)(s)/2 = 1/√(2πs).   (56)

By (53) we get

Θ^n(1)(s) = P(X_s^{n*} ∈ [0, 1/√n]) = F_s^n(1/√n) − F_s^n(−1/√n),   (57)

and, by using the expedient of writing the r.h.s. of the above formula as

F_s^n(3/(2√n)) − F_s^n(−1/(2√n)),

we can use the expansion (50), so that

Θ^n(1)(s) = Φ(3/(2√(ns))) − Φ(−1/(2√(ns))) + o(1/√(ns)) ≃ (2/√(ns)) Φ′(γ_{ns}) + o(1/√(ns)),

where γ_{ns} ∈ (−1/(2√(ns)), 3/(2√(ns))). Then

lim_{n→∞} √n Θ^n(1)(s)/2 = lim_{n→∞} (1/√s) Φ′(γ_{ns}) + o(1) = 1/√(2πs). □

Theorem 5.8. Under the same assumptions of Proposition 5.5, for each t > 0,

Σ̂^n(g)(ξ_t^n) ⇒ Π̂(g)(ζ_t)
(58)

for each g : R_+ → R bounded and continuous. Moreover, π̂_t^n converges weakly to π̂_t.

Proof. Proposition 5.5 guarantees that, for any sequence s_n → s, s > 0,

Σ̂^n(g)(s_n) → ∫_0^∞ g(√s x) x exp(−x²/2) dx = Π̂(g)(s).

Then the thesis follows since we can use the Skorohod representation probability space as in Proposition 5.3, and in this space, for each t > 0, ξ_t^n = γ_t(L^n) → ζ_t a.s., while on the other hand P{ω : ζ_t(ω) = 0} = 0. Therefore

Σ̂^n(g)(ξ_t^n) → Π̂(g)(ζ_t) a.s.,   (59)

for each g : R_+ → R bounded and continuous.

Remark 5.9. Proposition 5.5 applies also to show that, for any sequence s_n → s, s > 0, and for each g : R_+ → R bounded continuous,

Σ^n(g)(s_n, l) → Π(g)(s, l) = ∫_0^∞ g(−l + √s x) x exp(−x²/2) dx as n → ∞.   (60)

Then, following the same procedure as in Theorem 5.8, we can state that

Σ^n(g)(ξ_t^n, L_t^n) ⇒ Π(g)(ζ_t, Λ_t),   (61)

for each g : R_+ → R bounded and continuous, and that π_t^n converges weakly to π_t, so that Theorem 5.2 is completely achieved in the symmetric case.
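The discrete reflection principle behind (53) can be verified by brute force on the embedded symmetric random walk. In the sketch below (not from the paper) the number m of jumps is an illustrative choice, and probabilities are kept as exact rationals.

```python
from itertools import product
from fractions import Fraction

def dist_with_min(m):
    """Exact joint law of S_m and of the event {min_{j<=m} S_j > -1} for the
    symmetric +/-1 walk, by enumeration of all 2^m equally likely paths."""
    constrained, free = {}, {}
    w = Fraction(1, 2 ** m)
    for steps in product((1, -1), repeat=m):
        pos, mn = 0, 0
        for u in steps:
            pos += u
            mn = min(mn, pos)
        free[pos] = free.get(pos, 0) + w
        if mn > -1:
            constrained[pos] = constrained.get(pos, 0) + w
    return constrained, free

m = 9  # illustrative number of jumps of the embedded walk
constrained, free = dist_with_min(m)
# Identity used in (53): P(S_m = k, min > -1) = P(S_m = k) - P(S_m = k + 2), k >= 0.
for k in range(0, m + 1):
    assert constrained.get(k, 0) == free.get(k, 0) - free.get(k + 2, 0)
```

Paths from 0 to k that touch −1 are reflected into paths ending at −2 − k, which by symmetry have the same probability as paths ending at k + 2.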
5.2 Weak convergence of the filter: the asymptotically symmetric case
In this subsection our aim is to prove Theorem 5.2 in the asymptotically symmetric case defined by conditions C1, C2, C3 and A1, A2, A3, with the same space-time scaling. The symmetric case investigated in the previous subsection is obtained by defining on a suitable space (Ω, F, P) a Poisson process Z_t of intensity 2λ and a symmetric random walk V_k = Σ_{j=1}^k U_j. Starting from these processes, and in analogy with (40) and (41), we can define the process Ã_t as the process counting the positive jumps of X̃_t, and the process Ñ_t as the process counting the negative jumps of X̃_t.

On (Ω, F) we consider the filtration {F_t^n, t ∈ [0, T]} generated by the rescaled processes (Â_t^n, N̂_t^n) = (Ã_{nt}, Ñ_{nt}), and the probability measure P^n, absolutely continuous with respect to P, such that

dP^n/dP|_{F_t^n} = L_t^n = L_t^{A,n} × L_t^{N,n},   (62)

where

L_t^{A,n} := exp{ ∫_0^t log(λ_n/λ) dÂ_s^n − ∫_0^t n(λ_n − λ) ds } = exp{ log(λ_n/λ) Â_t^n − n(λ_n − λ) t },   (63)

L_t^{N,n} := exp{ ∫_0^t log(μ_n/λ) dN̂_s^n − ∫_0^t n(μ_n − λ) ds } = exp{ log(μ_n/λ) N̂_t^n − n(μ_n − λ) t }.   (64)

Under the measure P, the processes Â_t^n, N̂_t^n are mutually independent Poisson processes with intensities nλ, nλ, while (see [3], Chapter VIII) under P^n the processes Â_t^n, N̂_t^n are mutually independent Poisson processes with intensities nλ_n, nμ_n. Moreover, under P^n, the conditions A1, A2, A3 are satisfied with Z^n = Z, U_j^n = U_j, namely:

1. Z_t^n = Â_t^n + N̂_t^n = Z_{nt} is a Poisson process with intensity n(λ_n + μ_n);
2. {U_k, k ∈ N} is a sequence of i.i.d. random variables with law P^n(U_k = 1) = λ_n/(λ_n + μ_n), P^n(U_k = −1) = μ_n/(λ_n + μ_n);
3. {U_k, k ∈ N} and Z_t^n are mutually independent.

We can now state the main result of this subsection.
Theorem 5.10. Let $\mathcal G^n_t$ be the filtration generated by $L^n_t$, and let $g$ be a bounded and continuous function. Then
\[
E^n\big[ g(Q^n_t)/\mathcal G^n_t \big] \Rightarrow \hat\Pi(g)(\zeta_t), \tag{65}
\]
where $\hat\Pi(g)(s)$ is defined by (47).

Proof. The thesis is achieved if we prove that, for any bounded and Lipschitz continuous function $\varphi$,
\[
E^n\big[ \varphi( E^n[g(Q^n_t)/\mathcal G^n_t] ) \big] \to E\big[ \varphi( \hat\Pi(g)(\zeta_t) ) \big].
\]
Indeed
\begin{align*}
\big| E^n[\varphi(E^n[g(Q^n_t)/\mathcal G^n_t])] - E[\varphi(\hat\Pi(g)(\zeta_t))] \big|
&= \big| E[L^n_t\, \varphi(E^n[g(Q^n_t)/\mathcal G^n_t])] - E[\varphi(\hat\Pi(g)(\zeta_t))] \big| \\
&\le \big| E[L^n_t\, \varphi(E^n[g(Q^n_t)/\mathcal G^n_t])] - E[\varphi(E^n[g(Q^n_t)/\mathcal G^n_t])] \big| \\
&\quad + \big| E[\varphi(E^n[g(Q^n_t)/\mathcal G^n_t])] - E[\varphi(E[g(Q^n_t)/\mathcal G^n_t])] \big| \\
&\quad + \big| E[\varphi(E[g(Q^n_t)/\mathcal G^n_t])] - E[\varphi(\hat\Pi(g)(\zeta_t))] \big|.
\end{align*}
The last addend in the above inequality converges to zero by Theorem 5.8 for the symmetric case. The first one is bounded above by
\[
\big| E[L^n_t\, \varphi(E^n[g(Q^n_t)/\mathcal G^n_t])] - E[\varphi(E^n[g(Q^n_t)/\mathcal G^n_t])] \big| \le \|\varphi\|_\infty\, E\big[ |L^n_t - 1| \big].
\]
The second addend is bounded above by
\[
E\big[ \big| \varphi(E^n[g(Q^n_t)/\mathcal G^n_t]) - \varphi(E[g(Q^n_t)/\mathcal G^n_t]) \big| \big]
\le L_\varphi\, E\big[ \big| E^n[g(Q^n_t)/\mathcal G^n_t] - E[g(Q^n_t)/\mathcal G^n_t] \big| \big]
\le 2 L_\varphi \|g\|_\infty\, E\big[ |L^n_t - 1| \big],
\]
where $L_\varphi$ is the Lipschitz constant of $\varphi$, and the last inequality follows by Lemma 5.12 below. The thesis is achieved since $E[|L^n_t - 1|]$ converges to zero, as is proven in Lemma 5.11 below. $\square$

Lemma 5.11. Let $L^n_t := \frac{dP^n}{dP}\big|_{\mathcal F^n_t}$ be defined as in (62). Then
\[
E\big[ (L^n_t)^2 \big] = \exp\Big\{ \frac{t}{\lambda}\Big[ \big( \sqrt n(\lambda_n - \lambda) \big)^2 + \big( \sqrt n(\mu_n - \lambda) \big)^2 \Big] \Big\}. \tag{66}
\]
Moreover, under the conditions C1, C2, C3, on the space $(\Omega, \mathcal F, P)$,
\[
E\big[ |L^n_t - 1| \big] \to 0, \tag{67}
\]
and
\[
\sup_n E\big[ (L^n_t)^2 \big] < \infty. \tag{68}
\]
Proof. We only need to prove (66), since
\[
E^2\big[ |L^n_t - 1| \big] \le E\big[ |L^n_t - 1|^2 \big] = E\big[ (L^n_t)^2 \big] - 2E\big[ L^n_t \big] + 1 = E\big[ (L^n_t)^2 \big] - 1
= \exp\Big\{ \frac{t}{\lambda}\Big[ \big( \sqrt n(\lambda_n - \lambda) \big)^2 + \big( \sqrt n(\mu_n - \lambda) \big)^2 \Big] \Big\} - 1 \to 0
\]
by condition C3, and then we get (67) and (68). To this end we observe that
\[
L^n_t = \Big( \frac{\lambda_n}{\lambda} \Big)^{\hat A^n_t} \exp\big( -n(\lambda_n - \lambda)\, t \big) \times \Big( \frac{\mu_n}{\lambda} \Big)^{\hat N^n_t} \exp\big( -n(\mu_n - \lambda)\, t \big),
\]
and so
\begin{align*}
E\big[ (L^n_t)^2 \big]
&= \sum_{k=0}^\infty \Big( \frac{\lambda_n}{\lambda} \Big)^{2k} \frac{(n\lambda t)^k}{k!} \exp(-n\lambda t)\, \exp\big( -2n(\lambda_n - \lambda)\, t \big)
\times \sum_{k=0}^\infty \Big( \frac{\mu_n}{\lambda} \Big)^{2k} \frac{(n\lambda t)^k}{k!} \exp(-n\lambda t)\, \exp\big( -2n(\mu_n - \lambda)\, t \big) \\
&= \exp\Big\{ \frac{t}{\lambda}\Big[ \big( \sqrt n(\lambda_n - \lambda) \big)^2 + \big( \sqrt n(\mu_n - \lambda) \big)^2 \Big] \Big\}. \qquad \square
\end{align*}
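The moment identities used in the proof can be checked numerically, by summing the Poisson series term by term and comparing with the closed form (66). A minimal sketch; the parameter values and the helper `poisson_series` are our own illustrative choices, with $\lambda_n$, $\mu_n$ chosen so that $\sqrt n(\lambda_n-\lambda)$ and $\sqrt n(\mu_n-\lambda)$ stay bounded, as in condition C3:

```python
import math

# Illustrative parameters (not from the paper): lambda = 1, n = 100,
# lambda_n = 1 - 0.3/sqrt(n), mu_n = 1 + 0.3/sqrt(n), t = 1.
LAM, N, T = 1.0, 100, 1.0
LAM_N, MU_N = LAM - 0.3 / math.sqrt(N), LAM + 0.3 / math.sqrt(N)

def poisson_series(c, m, kmax=400):
    """sum_k c^k m^k e^{-m} / k!, summed term by term (analytically e^{m(c-1)})."""
    s, term = 0.0, math.exp(-m)
    for k in range(kmax):
        s += term
        term *= c * m / (k + 1)
    return s

def first_moment_series():
    """E[L^n_t] via the two Poisson series; must equal 1 (L^n is a P-martingale)."""
    a = poisson_series(LAM_N / LAM, N * LAM * T) * math.exp(-N * (LAM_N - LAM) * T)
    b = poisson_series(MU_N / LAM, N * LAM * T) * math.exp(-N * (MU_N - LAM) * T)
    return a * b

def second_moment_series():
    """E[(L^n_t)^2] via the two Poisson series appearing in the proof of Lemma 5.11."""
    a = poisson_series((LAM_N / LAM) ** 2, N * LAM * T) * math.exp(-2 * N * (LAM_N - LAM) * T)
    b = poisson_series((MU_N / LAM) ** 2, N * LAM * T) * math.exp(-2 * N * (MU_N - LAM) * T)
    return a * b

def second_moment_closed():
    """Closed form (66): exp{(t/lam)[n(lam_n-lam)^2 + n(mu_n-lam)^2]}."""
    return math.exp((T / LAM) * (N * (LAM_N - LAM) ** 2 + N * (MU_N - LAM) ** 2))
```

With these values the series and the closed form agree to machine precision, and the first moment is exactly 1, as the change-of-measure structure requires.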
Lemma 5.12. Let $(\Omega, \mathcal F, \mathbf P)$ and $(\Omega, \mathcal F, \mathbf P_0)$ be probability spaces with $\mathbf P$ and $\mathbf P_0$ equivalent probability measures, let $\mathcal G \subset \mathcal F$, and set $P = \mathbf P|_{\mathcal G}$ and $P_0 = \mathbf P_0|_{\mathcal G}$. Then
\[
E_0\big[ \big| E_0[\psi/\mathcal G] - E[\psi/\mathcal G] \big| \big] \le 2\alpha\, E_0\big[ \big| 1 - (d\mathbf P/d\mathbf P_0) \big| \big] = 2\alpha\, E\big[ \big| (d\mathbf P_0/d\mathbf P) - 1 \big| \big],
\]
for any random variable $\psi$ bounded above by $\alpha = \|\psi\|_\infty$. The same bound holds for $E\big[ \big| E_0[\psi/\mathcal G] - E[\psi/\mathcal G] \big| \big]$.

Proof. The proof is based on the Kallianpur-Striebel formula and is a particular case of Lemma 4.2 in [4]. For the sake of notational convenience we denote $L = d\mathbf P/d\mathbf P_0$, $\bar L = E_0[L/\mathcal G] = dP/dP_0$, $L_0 = d\mathbf P_0/d\mathbf P$, and $\bar L_0 = E[L_0/\mathcal G] = dP_0/dP$. Then
\begin{align*}
\big| E[\psi/\mathcal G] - E_0[\psi/\mathcal G] \big|
&= \Big| E_0[\psi/\mathcal G] - \frac{E_0[L\psi/\mathcal G]}{E_0[L/\mathcal G]} \Big|
= \frac{1}{E_0[L/\mathcal G]} \big| E_0[\psi/\mathcal G]\, E_0[L/\mathcal G] - E_0[L\psi/\mathcal G] \big| \\
&= \frac{1}{\bar L} \big| E_0[\psi/\mathcal G]\, \bar L - E_0[L\psi/\mathcal G] \big|
\le \frac{1}{\bar L} \Big( \big| E_0[\psi/\mathcal G]\, \bar L - E_0[\psi/\mathcal G] \big| + \big| E_0[\psi/\mathcal G] - E_0[L\psi/\mathcal G] \big| \Big).
\end{align*}
Moreover
\[
\big| E_0[\psi/\mathcal G]\, \bar L - E_0[\psi/\mathcal G] \big| \le \|\psi\|_\infty\, \big| \bar L - 1 \big|,
\]
and
\[
\big| E_0[\psi/\mathcal G] - E_0[L\psi/\mathcal G] \big| \le E_0\big[ |\psi|\, |1 - L| \,/\, \mathcal G \big] \le \|\psi\|_\infty\, E_0\big[ |1 - L| \,/\, \mathcal G \big].
\]
Therefore
\[
\big| E_0[\psi/\mathcal G] - E[\psi/\mathcal G] \big| \le \|\psi\|_\infty\, \frac{1}{\bar L} \Big\{ \big| \bar L - 1 \big| + E_0\big[ |1 - L| /\mathcal G \big] \Big\}
= \|\psi\|_\infty\, \bar L_0 \Big\{ \big| \bar L - 1 \big| + E_0\big[ |1 - L| /\mathcal G \big] \Big\},
\]
taking into account that $1/\bar L = 1/(dP/dP_0) = dP_0/dP = \bar L_0$, and so
\[
E_0\big[ \big| E_0[\psi/\mathcal G] - E[\psi/\mathcal G] \big| \big] \le \|\psi\|_\infty \Big( E_0\big[ |\bar L - 1| \big] + E_0\big[ |1 - L| \big] \Big).
\]
Then the proof is achieved by noting that, by the Jensen inequality,
\[
E_0\big[ |\bar L - 1| \big] = E_0\Big[ \big| E_0\big[ (d\mathbf P/d\mathbf P_0) - 1 \,/\, \mathcal G \big] \big| \Big] \le E_0\big[ \big| (d\mathbf P/d\mathbf P_0) - 1 \big| \big]. \qquad \square
\]
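Lemma 5.12 can be illustrated on a finite probability space, where conditional expectations and Radon-Nikodym derivatives are exact rational numbers. The space, measures, partition and bounded variable $\psi$ below are arbitrary toy choices, not taken from the paper:

```python
from fractions import Fraction as F

# Toy example: Omega = {0,...,5}; G is generated by the partition
# {{0,1,2},{3,4,5}}; P and P0 are equivalent (all weights positive).
P  = [F(1, 6)] * 6
P0 = [F(1, 12), F(1, 6), F(1, 4), F(1, 4), F(1, 6), F(1, 12)]
psi = [F(2), F(-1), F(0), F(3), F(1), F(-2)]
blocks = [[0, 1, 2], [3, 4, 5]]

def cond_exp(weights, f):
    """E[f/G] as a vector indexed by omega, for the G generated by `blocks`."""
    out = [F(0)] * 6
    for b in blocks:
        val = sum(weights[w] * f[w] for w in b) / sum(weights[w] for w in b)
        for w in b:
            out[w] = val
    return out

alpha = max(abs(x) for x in psi)
diff = [abs(a - b) for a, b in zip(cond_exp(P0, psi), cond_exp(P, psi))]
lhs_P0 = sum(P0[w] * diff[w] for w in range(6))   # E0 |E0[psi/G] - E[psi/G]|
lhs_P  = sum(P[w]  * diff[w] for w in range(6))   # E  |E0[psi/G] - E[psi/G]|
rhs = 2 * alpha * sum(P0[w] * abs(1 - P[w] / P0[w]) for w in range(6))
```

Both the $E_0$ and the $E$ version of the left hand side fall below the bound, and the two expressions for the right hand side (with the roles of the measures interchanged) coincide exactly, as the lemma states.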
6 The M/M/1 queueing model: an approximation for the filter
As in Section 5, we suppose that under the measure $P$ the process $\tilde Q^n_t$ is an M/M/1 queue with parameters $\lambda$, $\lambda$ (symmetric case), while, under the measure $P^n$, the process $\tilde Q^n_t$ is an M/M/1 queue with parameters $\lambda_n$, $\mu_n$ (general case).
The explicit form (22) for the filter of the discrete system is $\hat\Sigma^n(g)(s)$ (see (23)), evaluated in $s = \xi^n_t$, and it is evident from (25) that, from a computational point of view, it can be very hard to use. The problem then arises of finding a good approximation for this filter which, at the same time, is simpler to handle and depends on the actually observed trajectory, so that it can be used in applications.
A natural candidate is $\hat\Pi(g)(\xi^n_t)$, where $\hat\Pi(g)(s)$ is defined by (47). We will show in Theorem 6.1 that $\hat\Pi(g)(\xi^n_t)$ approximates the discrete filter in the $L^1(\Omega \times [0,T])$-norm, for each $T > 0$. The statement holds both for the symmetric case and for the general one.
Before stating the result formally, we recall that $\hat\Sigma^n(g)(s)$ depends on the law of $\tilde Q^n_t$, or equivalently on the probability measure we use (see Remark 3.1). Then, to emphasize this dependence, we write, in analogy with (23),
\[
\hat\Sigma^n_{P^n}(g)(s) = E^n\big[ g(X^{n*}_s) \,/\, \{L^{n*}_s < b_n\} \big], \tag{69}
\]
and
\[
\hat\Sigma^n_{P}(g)(s) = E\big[ g(X^{n*}_s) \,/\, \{L^{n*}_s < b_n\} \big]. \tag{70}
\]
Theorem 6.1. For all $g$ bounded and continuous and for each $T > 0$,
\[
\int_0^T E\Big[ \big| \hat\Sigma^n_P(g)(\xi^n_t) - \hat\Pi(g)(\xi^n_t) \big| \Big]\, dt \xrightarrow[n\to\infty]{} 0,
\]
\[
\int_0^T E^n\Big[ \big| \hat\Sigma^n_{P^n}(g)(\xi^n_t) - \hat\Pi(g)(\xi^n_t) \big| \Big]\, dt \xrightarrow[n\to\infty]{} 0.
\]
Proof. Consider first the symmetric case. The limit we are looking for depends only on the distribution of $\xi^n_t$; therefore, as observed in Remark 5.4, by using the Skorohod representation theorem as in the proof of Proposition 5.3, we can assume that $\xi^n_t(\omega) \to \zeta_t(\omega)$, $dP \times dt$-a.e. Proposition 5.5 implies that
\[
\hat\Sigma^n(g)(s_n) \equiv \hat\Sigma^n_P(g)(s_n) \xrightarrow[n\to\infty]{} \hat\Pi(g)(s), \tag{71}
\]
whenever $s_n \to s$, $s > 0$; moreover it is clear that
\[
\hat\Pi(g)(s_n) \xrightarrow[n\to\infty]{} \hat\Pi(g)(s),
\]
whenever $s_n \to s$, $s > 0$. Combining these results we get
\[
\hat\Sigma^n_P(g)(\xi^n_t) \xrightarrow[n\to\infty]{} \hat\Pi(g)(\zeta_t)
\quad \text{and} \quad
\hat\Pi(g)(\xi^n_t) \xrightarrow[n\to\infty]{} \hat\Pi(g)(\zeta_t) \tag{72}
\]
for each $(\omega, t)$ such that $\zeta_t(\omega) > 0$. The observation that $\{(\omega, t) \in \Omega \times [0,T] : \zeta_t(\omega) = 0\}$ is a set of zero measure with respect to $dP \times dt$, and an easy application of the dominated convergence theorem, imply that
\[
\int_0^T E\Big[ \big| \hat\Sigma^n_P(g)(\xi^n_t) - \hat\Pi(g)(\zeta_t) \big|^p \Big]\, dt \to 0, \quad \text{for any } p > 0,
\]
and
\[
\int_0^T E\Big[ \big| \hat\Sigma^n_P(g)(\xi^n_t) - \hat\Pi(g)(\xi^n_t) \big|^p \Big]\, dt \to 0, \quad \text{for any } p > 0.
\]
Consider now the asymmetric case. Note that
\[
\int_0^T E^n\Big[ \big| \hat\Sigma^n_{P^n}(g)(\xi^n_t) - \hat\Pi(g)(\xi^n_t) \big| \Big]\, dt
\le \int_0^T E^n\Big[ \big| \hat\Sigma^n_{P^n}(g)(\xi^n_t) - \hat\Sigma^n_P(g)(\xi^n_t) \big| \Big]\, dt
+ \int_0^T E^n\Big[ \big| \hat\Sigma^n_P(g)(\xi^n_t) - \hat\Pi(g)(\xi^n_t) \big| \Big]\, dt. \tag{73}
\]
Therefore, by Lemma 5.12,
\[
E^n\Big[ \big| \hat\Sigma^n_{P^n}(g)(\xi^n_t) - \hat\Sigma^n_P(g)(\xi^n_t) \big| \Big]
= E^n\Big[ \big| E^n[g(Q^n_t)/\mathcal G^n_t] - E[g(Q^n_t)/\mathcal G^n_t] \big| \Big]
\le 2 \|g\|_\infty\, E^n\Big[ \big| 1 - (dP/dP^n)\big|_{\mathcal F^n_t} \big| \Big]
= 2 \|g\|_\infty\, E\big[ |L^n_t - 1| \big].
\]
Moreover, by the Cauchy-Schwarz inequality,
\[
E^n\Big[ \big| \hat\Sigma^n_P(g)(\xi^n_t) - \hat\Pi(g)(\xi^n_t) \big| \Big]
= E\Big[ L^n_t\, \big| \hat\Sigma^n_P(g)(\xi^n_t) - \hat\Pi(g)(\xi^n_t) \big| \Big]
\le \sqrt{ E\big[ (L^n_t)^2 \big]\; E\Big[ \big( \hat\Sigma^n_P(g)(\xi^n_t) - \hat\Pi(g)(\xi^n_t) \big)^2 \Big] }.
\]
Summarizing,
\[
\int_0^T E^n\Big[ \big| \hat\Sigma^n_{P^n}(g)(\xi^n_t) - \hat\Pi(g)(\xi^n_t) \big| \Big]\, dt
\le 2 \|g\|_\infty \int_0^T E\big[ |L^n_t - 1| \big]\, dt
+ \int_0^T \sqrt{ E\big[ (L^n_t)^2 \big]\; E\Big[ \big( \hat\Sigma^n_P(g)(\xi^n_t) - \hat\Pi(g)(\xi^n_t) \big)^2 \Big] }\, dt,
\]
and the thesis follows by the bounds (67) and (68) of Lemma 5.11 and by the convergence, with $p = 2$, established for the symmetric case. $\square$

Remark 6.2. Note that
\[
E^n\Big[ \big| \hat\Sigma^n_{P^n}(g)(\xi^n_t) - \hat\Sigma^n_P(g)(\xi^n_t) \big| \Big]
= E^n\Big[ \big| E^n[g(Q^n_t)/\mathcal G^n_t] - E[g(Q^n_t)/\mathcal G^n_t] \big| \Big] \tag{74}
\]
is a particular case of the approximation problems considered in [4], where the observation process is a counting process: $\mathcal G^n_t$ coincides with the $\sigma$-algebra generated by the jump times of the observation process. Nevertheless, in order to apply Theorem 2.1 in [4], we would need the further and more restrictive assumptions that $n(\mu_n - \lambda) \to 0$ and $n(\lambda - \lambda_n) \to 0$, whereas, under condition C3, these quantities may even converge to infinity.
7 The M/M/1 queueing model: observing the idle time process
In this section we are interested in the conditional law of the M/M/1 queue $\tilde Q^n_s$ when the observation process is the idle time process, i.e.
\[
\tilde C^n_t = \int_0^t I\big( \tilde Q^n_s = 0 \big)\, ds,
\]
the cumulative time the queue has spent in 0 up to time $t$.
Equivalently, one can consider as observation process the bivariate point process $(\tilde I^n_t, \tilde B^n_t)$, where $\tilde I^n_t$ is the process that counts the times when the system starts an idle period and $\tilde B^n_t$ is the process that counts the times when the system starts a busy period, that is
\[
\tilde I^n_t = \int_0^t I(\tilde Q^n_{s-} = 1)\, d\tilde N^n_s, \tag{75}
\]
\[
\tilde B^n_t = \int_0^t I(\tilde Q^n_{s-} = 0)\, d\tilde A^n_s. \tag{76}
\]
Indeed the filtration generated by the idle time process $\tilde C^n_t$ and the filtration generated by the observation process $(\tilde I^n_t, \tilde B^n_t)$ coincide, or more precisely $\mathcal F^{\tilde C^n}_{t+} = \mathcal F^{\tilde I^n, \tilde B^n}_t$.
Our first aim is to study the conditional law
\[
E\big[ g(\tilde Q^n_t) \,/\, \tilde{\mathcal H}^n_t \big], \tag{77}
\]
where for notational convenience we denote
\[
\tilde{\mathcal H}^n_t = \mathcal F^{\tilde I^n, \tilde B^n}_t = \mathcal F^{\tilde C^n}_{t+},
\]
and the explicit expression for the filter (77) in terms of $\gamma^0_t(\tilde Q^n)$ is given in (84). Then we consider the rescaled processes
\[
Q^n_t := \frac{\tilde Q^n_{nt}}{\sqrt n}, \qquad
I^n_t := \frac{\tilde I^n_{nt}}{\sqrt n}, \qquad
B^n_t := \frac{\tilde B^n_{nt}}{\sqrt n}, \qquad
C^n_t := \frac{\mu_n}{\sqrt n}\, \tilde C^n_{nt},
\]
and the conditional law of the rescaled queue
\[
E\big[ g(Q^n_t) \,/\, \mathcal H^n_t \big], \tag{78}
\]
where $\mathcal H^n_t$ is the filtration generated by the rescaled observation:
\[
\mathcal H^n_t = \tilde{\mathcal H}^n_{nt} = \mathcal F^{I^n, B^n}_t = \mathcal F^{C^n}_{t+}.
\]
We are interested in the limit behaviour of the filter (78) under the same assumptions C1, C2, C3 and A1, A2, A3 of Section 5. Under these assumptions we already know that $Q^n_t$ converges weakly to a reflected Brownian motion $W_t + \Lambda_t$, where $W_t$ is a driftless Brownian motion with diffusion coefficient $2\lambda$.
Moreover, if one defines $\bar X^n_t := Q^n_t - C^n_t$, then clearly $Q^n_t = \bar X^n_t + C^n_t$, and therefore, since by definition $C^n_t$ increases only when $Q^n_t = 0$, the pair $(Q^n_t, C^n_t)$ is the solution of the Skorohod problem corresponding to $\bar X^n_t$. Consequently
\[
\big( \bar X^n_t, Q^n_t, C^n_t \big) \Rightarrow \big( W_t, W_t + \Lambda_t, \Lambda_t \big),
\]
where as usual $\Lambda_t$ is the local time of $W_t$ (for a deeper investigation of these results, we refer to Kurtz [12])².
It is therefore natural to expect that $E[g(Q^n_t)/\mathcal H^n_t]$ converges weakly to $E[g(W_t + \Lambda_t)/\mathcal F^\Lambda_t] = \hat\Pi(g)(\zeta_t)$. This result is proven in Theorem 7.4. Moreover $\hat\Pi(g)(\gamma^0_t(Q^n))$ is a good approximation of the filter for the rescaled model (see Theorem 7.6).
Before giving the technical details, we observe that the results of this section rely on the results of Sections 5 and 6. In those sections we used the measure $P$ in the symmetric case and $P^n$ otherwise. In this section, however, as in Sections 2 and 3, we do not distinguish between the two cases for notational convenience, unless necessary.

² By the above considerations, $C^n_t \Rightarrow \Lambda_t$ in $D_{\mathbb R}([0,T])$, and
\[
C^n_t := \frac{\mu_n}{\sqrt n}\, \tilde C^n_{nt} = \sqrt n\, \mu_n \int_0^t I\big( \tilde Q^n_{ns} = 0 \big)\, ds = \sqrt n\, \mu_n \int_0^t I\big( Q^n_s = 0 \big)\, ds.
\]
In general weak convergence does not imply immediately the convergence of the expected values, i.e. that
\[
\lim_{n\to\infty} E\Big[ \mu_n \int_0^t \sqrt n\, I(Q^n_s = 0)\, ds \Big] = E[\Lambda_t].
\]
Nevertheless, after some explicit calculations, it is possible to show that this is the case (see (95) in Appendix B).

Let $\{\sigma^{Bn}_k, k \in \mathbb N\}$ and $\{\sigma^{In}_k, k \in \mathbb N\}$ be the jump times of the process $\tilde B^n_t$ and of the process $\tilde I^n_t$, respectively. Under the assumption $\tilde Q^n_0 = 0$, it is easy to verify that
\[
\sigma^{Bn}_k < \sigma^{In}_k < \sigma^{Bn}_{k+1} < \sigma^{In}_{k+1}, \quad \text{for each } k \ge 1,
\]
and that $\tilde Q^n_t = 0$ when $\sigma^{In}_k \le t < \sigma^{Bn}_{k+1}$, for $k \ge 0$, while $\tilde Q^n_t > 0$ otherwise.
We start by observing some regenerative properties of the above jump times, which are fundamental in the sequel.
Lemma 7.1. For each $k \in \mathbb N$ the processes $\tilde Q^{In}_{k,t} = \tilde Q^n_{t+\sigma^{In}_k} - \tilde Q^n_{\sigma^{In}_k}$ and $\tilde Q^{Bn}_{k,t} = \tilde Q^n_{t+\sigma^{Bn}_k} - \tilde Q^n_{\sigma^{Bn}_k}$ are independent of $\mathcal F^{\tilde Q^n}_{\sigma^{In}_k}$ and $\mathcal F^{\tilde Q^n}_{\sigma^{Bn}_k}$, respectively. Moreover, the process $\tilde Q^{In}_{k,t}$ has the same law as the process $\tilde Q^n_t$.

Proof. Clearly $\tilde Q^{In}_{k,0} = \tilde Q^n_0 = 0$; moreover it is easy to see that $\tilde Q^{In}_{k,t}$ solves the same martingale problem as $\tilde Q^n_t$, hence these processes share the same law. The independence property follows by the strong Markov property of $\tilde Q^n_t$, since $\tilde Q^n_{\sigma^{In}_k} = 0$:
\begin{align*}
E\Big[ \exp\big( iu\, ( \tilde Q^n_{t+\sigma^{In}_k} - \tilde Q^n_{\sigma^{In}_k} ) \big) \,/\, \mathcal F^{\tilde Q^n}_{\sigma^{In}_k} \Big]
&= E\Big[ \exp\big( iu\, ( \tilde Q^n_{t+\sigma^{In}_k} - \tilde Q^n_{\sigma^{In}_k} ) \big) \,/\, \tilde Q^n_{\sigma^{In}_k} \Big] \\
&= E\Big[ \exp\big( iu\, ( \tilde Q^n_{t+\sigma^{In}_k} - \tilde Q^n_{\sigma^{In}_k} ) \big) \Big]
= E\Big[ \exp\big( iu\, \tilde Q^n_t \big) \Big].
\end{align*}
Similar arguments apply to show that the process $\tilde Q^{Bn}_{k,t}$ is independent of $\mathcal F^{\tilde Q^n}_{\sigma^{Bn}_k}$. $\square$
The process $\tilde I^n_t$ is a renewal process, and $\tilde B^n_t$ is a delayed renewal process, i.e. (see, for example, [7] VI.7.3, page 187) the sequence $\{\sigma^{Bn}_{k+1} - \sigma^{Bn}_k\}_{k \ge 0}$ is a sequence of mutually independent random variables and $\{\sigma^{Bn}_{k+1} - \sigma^{Bn}_k\}_{k \ge 1}$ are also identically distributed. Also $\{\sigma^{In}_k - \sigma^{Bn}_k\}_{k \ge 1}$ is a sequence of mutually independent random variables.
In the setting of this section, the above considerations and Lemma 7.1 guarantee that the filter of $\tilde Q^n_t$ given $\tilde{\mathcal H}^n_t$ admits a representation similar to that given in Proposition 2.1. More precisely:

Proposition 7.2. The conditional law of $\tilde Q^n_t$ given $\tilde{\mathcal H}^n_t$ admits the following representation:
\[
E\big[ g(\tilde Q^n_t) \,/\, \tilde{\mathcal H}^n_t \big] = I\big( \tilde Q^n_t = 0 \big)\, g(0)
+ I\big( \tilde Q^n_t > 0 \big) \sum_{j=1}^\infty
\frac{ E\Big[ g\big( \tilde Q^n_{s+\sigma^{Bn}_j} - \tilde Q^n_{\sigma^{Bn}_j} + 1 \big)\, I\big( \sigma^{In}_j - \sigma^{Bn}_j > s \big) \Big] }
{ E\Big[ I\big( \sigma^{In}_j - \sigma^{Bn}_j > s \big) \Big] } \bigg|_{s = t - \sigma^{Bn}_j}
I\big( \sigma^{Bn}_j \le t < \sigma^{In}_j \big). \tag{79}
\]
Even if this Proposition is quite similar to Proposition 2.1, we sketch the proof because the assumptions on the processes involved are a little different. Moreover we point out that, since the processes involved are all Markovian, the proof of (79) could also be given by the techniques used in [5].

Proof. We begin by noting that $I\big( \tilde Q^n_t > 0 \big)$ is $\tilde{\mathcal H}^n_t$-adapted. Moreover we can state (see [3] Chapter III, T5)
\[
\tilde{\mathcal H}^n_t \cap \big\{ \sigma^{Bn}_j \le t < \sigma^{In}_j \big\} \cap \big\{ \tilde Q^n_t > 0 \big\}
= \tilde{\mathcal H}^n_{\sigma^{Bn}_j} \cap \big\{ \sigma^{Bn}_j \le t < \sigma^{In}_j \big\} \cap \big\{ \tilde Q^n_t > 0 \big\}.
\]
By using, as in Proposition 2.1, the arguments of Proposition 3.1 in [13], we obtain
\[
I\big( \tilde Q^n_t > 0 \big)\, E\big[ g(\tilde Q^n_t)/\tilde{\mathcal H}^n_t \big]\, I\big( \sigma^{Bn}_j \le t < \sigma^{In}_j \big)
= I\big( \tilde Q^n_t > 0 \big)\,
\frac{ E\Big[ g(\tilde Q^n_t)\, I\big( \sigma^{In}_j - \sigma^{Bn}_j > t - \sigma^{Bn}_j \big) \,/\, \tilde{\mathcal H}^n_{\sigma^{Bn}_j} \Big] }
{ P\Big( \sigma^{In}_j - \sigma^{Bn}_j > t - \sigma^{Bn}_j \,/\, \tilde{\mathcal H}^n_{\sigma^{Bn}_j} \Big) }\,
I\big( \sigma^{Bn}_j \le t < \sigma^{In}_j \big).
\]
Note that
\[
\big\{ \sigma^{In}_j - \sigma^{Bn}_j > s \big\} = \Big\{ \inf_{0 \le u \le s} \big( \tilde Q^{Bn}_{j,u} + 1 \big) > 0 \Big\},
\]
and this event is independent of $\mathcal F^{\tilde Q^n}_{\sigma^{Bn}_j}$ by Lemma 7.1.
Then, setting $s = t - \sigma^{Bn}_j$, so that
\[
\tilde Q^n_t = \tilde Q^n_{\sigma^{Bn}_j + s} = \tilde Q^n_{\sigma^{Bn}_j + s} - \tilde Q^n_{\sigma^{Bn}_j} + 1 = \tilde Q^{Bn}_{j,s} + 1,
\]
we get
\[
\frac{ E\Big[ g\big( \tilde Q^n_{\sigma^{Bn}_j + s} \big)\, I\big( \sigma^{In}_j - \sigma^{Bn}_j > s \big) \,/\, \tilde{\mathcal H}^n_{\sigma^{Bn}_j} \Big] }
{ P\Big( \sigma^{In}_j - \sigma^{Bn}_j > s \,/\, \tilde{\mathcal H}^n_{\sigma^{Bn}_j} \Big) } \bigg|_{s = t - \sigma^{Bn}_j}
= \frac{ E\Big[ g\big( \tilde Q^{Bn}_{j,s} + 1 \big)\, I\big( \sigma^{In}_j - \sigma^{Bn}_j > s \big) \Big] }
{ E\Big[ I\big( \sigma^{In}_j - \sigma^{Bn}_j > s \big) \Big] } \bigg|_{s = t - \sigma^{Bn}_j}. \tag{80}
\]
Then the thesis follows by writing $g(\tilde Q^n_t)$ as
\[
g(\tilde Q^n_t) = I\big( \tilde Q^n_t = 0 \big)\, g(0) + I\big( \tilde Q^n_t > 0 \big) \sum_{j=1}^\infty g(\tilde Q^n_t)\, I\big( \sigma^{Bn}_j \le t < \sigma^{In}_j \big),
\]
and, therefore, $E[g(\tilde Q^n_t)/\tilde{\mathcal H}^n_t]$ as
\[
E\big[ g(\tilde Q^n_t)/\tilde{\mathcal H}^n_t \big] = I\big( \tilde Q^n_t = 0 \big)\, E\big[ g(0)/\tilde{\mathcal H}^n_t \big]
+ I\big( \tilde Q^n_t > 0 \big) \sum_{j=1}^\infty E\big[ g(\tilde Q^n_t)/\tilde{\mathcal H}^n_t \big]\, I\big( \sigma^{Bn}_j \le t < \sigma^{In}_j \big). \qquad \square
\]
It is important to note that
\[
\sigma^{In}_j - \sigma^{Bn}_j = \inf\big\{ u \ge 0 : \tilde Q^{Bn}_{j,u} + 1 = 0 \big\}, \tag{81}
\]
and that the process $\tilde Q^{Bn}_{j,s} + 1$, for $s < \sigma^{In}_j - \sigma^{Bn}_j$, behaves like the continuous time random walk $\tilde X^n_s + 1$ for $s < \tilde\sigma^n_1 = \inf\{ u \ge 0 : \tilde X^n_u = -1 \}$, and hence
\[
\frac{ E\Big[ g\big( \tilde Q^{Bn}_{j,s} + 1 \big)\, I\big( \sigma^{In}_j - \sigma^{Bn}_j > s \big) \Big] }
{ E\Big[ I\big( \sigma^{In}_j - \sigma^{Bn}_j > s \big) \Big] }
= \frac{ E\Big[ g\big( \tilde X^n_s + 1 \big)\, I\big( \tilde\sigma^n_1 > s \big) \Big] }
{ E\Big[ I\big( \tilde\sigma^n_1 > s \big) \Big] }. \tag{82}
\]
As a consequence,
\[
E\big[ g(\tilde Q^n_t)/\tilde{\mathcal H}^n_t \big] = I\big( \tilde Q^n_t = 0 \big)\, g(0)
+ I\big( \tilde Q^n_t > 0 \big) \sum_{j=1}^\infty
\frac{ E\Big[ g\big( \tilde X^n_s + 1 \big)\, I\big( \tilde\sigma^n_1 > s \big) \Big] }
{ E\Big[ I\big( \tilde\sigma^n_1 > s \big) \Big] } \bigg|_{s = t - \sigma^{Bn}_j}
I\big( \sigma^{Bn}_j \le t < \sigma^{In}_j \big). \tag{83}
\]
Observing that, by definition (6),
\[
\gamma^0_t(\tilde Q^n) = t - \sup\big\{ s < t \text{ such that } \tilde Q^n_s = 0 \big\}
= \sum_{j=1}^\infty \big( t - \sigma^{Bn}_j \big)\, I\big( \sigma^{Bn}_j \le t < \sigma^{In}_j \big),
\]
we can rewrite
\[
E\big[ g(\tilde Q^n_t)/\tilde{\mathcal H}^n_t \big] = I\big( \tilde Q^n_t = 0 \big)\, g(0)
+ I\big( \tilde Q^n_t > 0 \big)\,
\frac{ E\Big[ g\big( \tilde X^n_s + 1 \big)\, I\big( \tilde\sigma^n_1 > s \big) \Big] }
{ E\Big[ I\big( \tilde\sigma^n_1 > s \big) \Big] } \bigg|_{s = \gamma^0_t(\tilde Q^n)}. \tag{84}
\]
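For a piecewise-constant observed trajectory, the quantity $\gamma^0_t(\tilde Q^n) = t - \sup\{s < t : \tilde Q^n_s = 0\}$ entering (84) is straightforward to compute from the jump data. A minimal sketch, assuming the path is stored as lists of jump times and values (a representation of our own choosing):

```python
def gamma0(jump_times, values, t):
    """gamma^0_t of a piecewise-constant path: t minus the last time before t
    at which the path was at 0.

    The path equals values[i] on [jump_times[i], jump_times[i+1]) (and
    values[-1] after the last jump); jump_times[0] must be 0."""
    last_zero = None
    for i, (s, v) in enumerate(zip(jump_times, values)):
        if s >= t:
            break
        if v == 0:
            # the path is at 0 on [s, min(next jump, t)); the supremum of the
            # zero-times before t over this interval is min(next jump, t)
            end = jump_times[i + 1] if i + 1 < len(jump_times) else float("inf")
            last_zero = min(end, t)
    if last_zero is None:
        raise ValueError("the path never visits 0 before time t")
    return t - last_zero
```

For a queue that is idle on $[0,1)$ and busy afterwards, `gamma0` at $t = 3.5$ returns the age $2.5$ of the current busy excursion, and it returns $0$ whenever the path is at 0 just before $t$.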
The above considerations lead us to state the following result.

Theorem 7.3. Consider the rescaled process $Q^n_t = \tilde Q^n_{nt}/\sqrt n$, the rescaled observation processes $I^n_t = \tilde I^n_{nt}/\sqrt n$ and $B^n_t = \tilde B^n_{nt}/\sqrt n$, and denote by $\mathcal H^n_t$ the history generated by $(I^n_u, B^n_u)$ for $u \le t$, i.e. $\mathcal H^n_t = \mathcal F^{I^n, B^n}_t = \tilde{\mathcal H}^n_{nt}$. Then
\[
E\big[ g(Q^n_t)/\mathcal H^n_t \big] = I\big( Q^n_t = 0 \big)\, g(0) + I\big( Q^n_t > 0 \big)\, \hat\Sigma^n(g^n)(\gamma^0_t(Q^n)), \tag{85}
\]
where $\hat\Sigma^n$ is the functional defined in (23), and $g^n(x) = g\big( x + \frac{1}{\sqrt n} \big)$.

Proof. Equality (84) implies
\[
E\big[ g(Q^n_t)/\mathcal H^n_t \big] = I\big( Q^n_t = 0 \big)\, g(0) + I\big( Q^n_t > 0 \big)\,
\frac{ E\Big[ g\big( X^n_s + \tfrac{1}{\sqrt n} \big)\, I\big( \sigma^n_1 > s \big) \Big] }
{ E\Big[ I\big( \sigma^n_1 > s \big) \Big] } \bigg|_{s = \gamma^0_t(Q^n)},
\]
and clearly
\[
\frac{ E\Big[ g\big( X^n_s + \tfrac{1}{\sqrt n} \big)\, I\big( \sigma^n_1 > s \big) \Big] }
{ E\Big[ I\big( \sigma^n_1 > s \big) \Big] } = \hat\Sigma^n(g^n)(s). \qquad \square
\]
As a consequence of the above Theorem, for any $g$ uniformly continuous,
\[
E\big[ g(Q^n_t)/\mathcal H^n_t \big] = I\big( Q^n_t = 0 \big)\, g(0) + I\big( Q^n_t > 0 \big)\, \hat\Sigma^n(g^n)(\gamma^0_t(Q^n)) \tag{86}
\]
\[
= I\big( Q^n_t = 0 \big)\, g(0) + I\big( Q^n_t > 0 \big)\, \hat\Sigma^n(g)(\gamma^0_t(Q^n)) + \varepsilon(n, g),
\]
with
\[
|\varepsilon(n, g)| \le \omega_g\Big( \frac{1}{\sqrt n} \Big) = \sup_{|x-y| \le \frac{1}{\sqrt n}} \big| g(x) - g(y) \big|. \tag{87}
\]
We can now state the main result of this section.

Theorem 7.4. For any $g$ uniformly continuous,
\[
E\big[ g(Q^n_t)/\mathcal H^n_t \big] \Rightarrow E\big[ g(W_t + \Lambda_t)/\mathcal F^\Lambda_t \big] = \hat\Pi(g)(\zeta_t), \quad \text{for any } t \ge 0.
\]

Proof. It is sufficient to prove that
\[
I\big( Q^n_t = 0 \big) \xrightarrow{\ \mathrm{Prob}\ } 0, \quad \text{for any } t \ge 0, \tag{88}
\]
and that
\[
\hat\Sigma^n(g)(\gamma^0_t(Q^n)) \Rightarrow \hat\Pi(g)(\zeta_t), \quad \text{for any } t \ge 0, \tag{89}
\]
so that
\[
I\big( Q^n_t = 0 \big)\, g(0) + \varepsilon(n, g) \xrightarrow{\ \mathrm{Prob}\ } 0
\quad \text{and} \quad
I\big( Q^n_t > 0 \big)\, \hat\Sigma^n(g)(\gamma^0_t(Q^n)) \Rightarrow \hat\Pi(g)(\zeta_t),
\]
since (88) is equivalent to $I\big( Q^n_t > 0 \big) \xrightarrow{\ \mathrm{Prob}\ } 1$.
As recalled in (42), the sequence $Q^n_t$ converges weakly to a reflected Brownian motion $W_t + \Lambda_t$. Then the limit (88) can be obtained by noting that:
- the function $I(x = 0)$ has a discontinuity point only at $x = 0$;
- $P(W_t + \Lambda_t = 0) = 0$;
- the continuous mapping theorem then implies that $I(Q^n_t = 0) \Rightarrow 0$.

To prove (89) we use the weak convergence of $\gamma^0_t(Q^n)$ to $\zeta_t$ (see Proposition 5.3), and the same techniques of the previous sections, in particular Remark 5.4 and Proposition 5.5. $\square$
Remark 7.5. As already observed at the beginning of this section,
\[
\big( \bar X^n_t, Q^n_t, C^n_t \big) \Rightarrow \big( W_t, W_t + \Lambda_t, \Lambda_t \big),
\]
where $\bar X^n_t := Q^n_t - C^n_t$. Thanks to the continuity of the limit processes, the convergence can be considered in the space $D_{\mathbb R^3}[0,\infty)$ endowed with the topology of the uniform convergence on compact sets. Moreover it is interesting to note that
\[
\gamma^0_t(Q^n) = \gamma_t(C^n) \Rightarrow \gamma_t(\Lambda) = \zeta_t.
\]
Then, similarly to Proposition 5.3, it is possible to prove that
\[
\big( \gamma^0_t(Q^n), \gamma_t(C^n), C^n_t \big) \Rightarrow \big( \gamma^0_t(W + \Lambda), \gamma_t(\Lambda), \Lambda_t \big) = \big( \zeta_t, \zeta_t, \Lambda_t \big),
\]
and therefore an alternative proof of the previous Theorem can be achieved by using these properties.
We end this section by noting that, even in this new situation, it is possible to give the same approximation for the filter as in Theorem 6.1, namely for $E[g(Q^n_t)/\mathcal H^n_t]$ in the symmetric case and for $E^n[g(Q^n_t)/\mathcal H^n_t]$ in the asymptotically symmetric case. More precisely, the following result holds.

Theorem 7.6. For all $g$ bounded and continuous and for each $T > 0$,
\[
\int_0^T E\Big[ \big| E[g(Q^n_t)/\mathcal H^n_t] - \hat\Pi(g)(\gamma^0_t(Q^n)) \big| \Big]\, dt \xrightarrow[n\to\infty]{} 0,
\]
\[
\int_0^T E^n\Big[ \big| E^n[g(Q^n_t)/\mathcal H^n_t] - \hat\Pi(g)(\gamma^0_t(Q^n)) \big| \Big]\, dt \xrightarrow[n\to\infty]{} 0.
\]

Proof. For the symmetric case, note that
\begin{align*}
\big| E[g(Q^n_t)/\mathcal H^n_t] - \hat\Pi(g)(\gamma^0_t(Q^n)) \big|^p
&= \big| I(Q^n_t = 0)\, g(0) + I(Q^n_t > 0)\, \hat\Sigma^n(g)(\gamma^0_t(Q^n)) + \varepsilon(n,g) - \hat\Pi(g)(\gamma^0_t(Q^n)) \big|^p \\
&\le C(p)\, \big| I(Q^n_t = 0)\big( g(0) - \hat\Sigma^n(g)(\gamma^0_t(Q^n)) \big) + \varepsilon(n,g) \big|^p
+ C(p)\, \big| \hat\Sigma^n(g)(\gamma^0_t(Q^n)) - \hat\Pi(g)(\gamma^0_t(Q^n)) \big|^p,
\end{align*}
where $C(p)$ is a suitable constant, and $\varepsilon(n,g)$ is defined in (87). Then
\[
E\Big[ \big| E[g(Q^n_t)/\mathcal H^n_t] - \hat\Pi(g)(\gamma^0_t(Q^n)) \big|^p \Big]
\le C(p)\, E\Big[ \big| 2\|g\|_\infty\, I(Q^n_t = 0) + \varepsilon(n,g) \big|^p \Big]
+ C(p)\, E\Big[ \big| \hat\Sigma^n(g)(\gamma^0_t(Q^n)) - \hat\Pi(g)(\gamma^0_t(Q^n)) \big|^p \Big].
\]
The thesis follows since both addends on the right hand side of the previous inequality converge to zero: the first one by the bounded convergence theorem, and the second one by substituting $\xi^n_t$ with $\gamma^0_t(Q^n)$ in the proof of Theorem 6.1. The proof in the asymptotically symmetric case is analogous. $\square$

Acknowledgements. We are indebted to Thomas G. Kurtz for the proof of Proposition 5.3.
A Appendix

A.1 Identification of the interpolating Brownian motion model
The process $W^n_t$ defined by (32) admits the following representation:
\[
W^n_t = \frac{1}{2n}\, V^n_{Z^n_t}, \tag{90}
\]
where
- $Z^n_t$ is the point process defined by the sequence $\tau^n_k$;
- $V^n_k$ is the discrete-time process satisfying the relation $V^n_k = V^n_{k-1} + U^n_k$, where $\{U^n_k, k \in \mathbb N\}$ is the sequence of $\{-1, 1\}$-valued random variables defined by the rule $U^n_k = 2n\big( W_{\tau^n_k} - W_{\tau^n_{k-1}} \big)$.

Let us consider the interarrival time sequence $\{T^n_k = \tau^n_k - \tau^n_{k-1}, k \ge 1\}$ of the point process $Z^n_t$. The following two lemmas are an easy consequence of the strong Markov property of $W$ and of the fact that $W$ and $-W$ have the same law.
Lemma A.1. $Z^n_t$ is a renewal process, that is, $\{T^n_k, k \ge 1\}$ is a sequence of i.i.d. random variables.

Lemma A.2. $V^n_k$ is a symmetric random walk, i.e. $V^n_k = V^n_{k-1} + U^n_k$, where $\{U^n_k, k \ge 1\}$ is a sequence of i.i.d. random variables such that $P(U^n_k = 1) = P(U^n_k = -1) = \frac12$.
Moreover, the following result can be found, for example, in [6] page 480 or [7] page 342.

Lemma A.3. The distribution function $F^n$ of the random variables $T^n_k$ is
\[
F^n(t) = P\big( T^n_k \le t \big) = 4 \sum_{j=0}^{+\infty} (-1)^j \frac{1}{\sqrt{2\pi}} \int_{\frac{2j+1}{2n\sqrt t}}^{+\infty} \exp\Big( -\frac{x^2}{2} \Big)\, dx, \tag{91}
\]
and its generating function is
\[
E\big[ \exp(-\alpha T^n_k) \big] = \frac{1}{\cosh\big( \frac{\sqrt{2\alpha}}{2n} \big)}.
\]

Remark A.4. Note that the distribution function (91) can be written in the equivalent way
\[
P\big( T^n_k \le t \big) = 4 \sum_{j=0}^{+\infty} (-1)^j\, P\Big( W_t \ge \frac{2j+1}{2n} \Big).
\]
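Lemma A.3 and Remark A.4 can be cross-checked numerically: evaluate the alternating series for $F^n(t)$ via Gaussian tail probabilities, and compare the Laplace transform, obtained by quadrature, with $1/\cosh(\sqrt{2\alpha}/(2n))$. The truncation levels, integration grid, and test values below are ad hoc choices of ours:

```python
import math

def F_exit(t, n, jmax=200):
    """F^n(t) = 4 * sum_{j>=0} (-1)^j P(W_t >= (2j+1)/(2n)) for a standard
    Brownian motion W, using P(W_t >= b) = erfc(b / sqrt(2 t)) / 2."""
    if t <= 0.0:
        return 0.0
    s = 0.0
    for j in range(jmax):
        tail = 0.5 * math.erfc((2 * j + 1) / (2 * n * math.sqrt(2.0 * t)))
        s += tail if j % 2 == 0 else -tail
        if tail < 1e-18:          # alternating series: remainder below this term
            break
    return 4.0 * s

def laplace(alpha, n, tmax=20.0, steps=20000):
    """E[exp(-alpha T^n_k)] = alpha * int_0^inf exp(-alpha t) F^n(t) dt
    (integration by parts of the Laplace-Stieltjes transform), trapezoidal rule."""
    h = tmax / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-alpha * t) * F_exit(t, n)
    return alpha * h * total
```

For $n = 1$ and $\alpha = 1$ the quadrature reproduces $1/\cosh(\sqrt 2/2)$ to three decimals or better, and $F^n$ behaves as a distribution function (nonnegative, nondecreasing, bounded by 1).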
Lemma A.5. The processes $\{Z^n_t, t \in \mathbb R_+\}$ and $\{V^n_k, k \in \mathbb N\}$ are mutually independent.

Proof. Note that, for each $k \in \mathbb N$, $U^n_k$ and $(T^n_1, \dots, T^n_{k-1})$ are independent. In fact $U^n_k$ and $\mathcal F_{\tau^n_{k-1}}$ are independent, and $(T^n_1, \dots, T^n_{k-1})$ are $\mathcal F_{\tau^n_{k-1}}$-measurable. Moreover $U^n_k$ is $\mathcal F_{\tau^n_k}$-measurable, so it is independent of $(T^n_{k+1}, T^n_{k+2}, \dots)$.
So we just have to prove the independence between $U^n_k$ and $T^n_k$. We consider the case $k = 1$, taking into account that this procedure easily extends to the general case. To this end it suffices to verify that
\[
E\big[ I(U_1 = 1) \exp(-\alpha \tau^n_1) \big] = E\big[ I(U_1 = 1) \big]\, E\big[ \exp(-\alpha \tau^n_1) \big]. \tag{92}
\]
Setting
\[
y_{-1}(\alpha) = E\big[ I(U_1 = -1) \exp(-\alpha \tau^n_1) \big], \qquad
y_1(\alpha) = E\big[ I(U_1 = 1) \exp(-\alpha \tau^n_1) \big],
\]
by a direct computation we get
\[
y_1(\alpha) = y_{-1}(\alpha) = \frac{1}{2 \cosh\big( \frac{\sqrt{2\alpha}}{2n} \big)},
\]
and then
\[
y_1(\alpha) = y_1(0)\, \big[ y_1(\alpha) + y_{-1}(\alpha) \big],
\]
which is equivalent to (92). $\square$

\[
\Pi^{a,c}(g)(s, l) = \frac{ \Pi^c\big( g \cdot \exp(\tfrac a2\, \cdot) \big)(a^2 s, l) }{ \Pi^c\big( \exp(\tfrac a2\, \cdot) \big)(a^2 s, l) }. \tag{93}
\]
B A limit result and an upper bound for the rescaled queue

In this section we prove that, under conditions C1, C2, and C3,
\[
E[\Lambda_t] = \lim_{n\to\infty} E^n\Big[ \mu_n \int_0^t \sqrt n\, I(Q^n_{s-} = 0)\, ds \Big]
= \lim_{n\to\infty} \mu_n \int_0^t \sqrt n\, P^n\big( Q^n_s = 0 / Q^n_0 = 0 \big)\, ds, \tag{94}
\]
where $\Lambda_t$ is the local time of a Wiener process with diffusion coefficient $2\lambda$ and drift coefficient $0$; moreover we prove that, for any $1 < \alpha < 2$, there exists $n_\alpha$ such that for any $n \ge n_\alpha$
\[
\int_0^t \sqrt n\, P^n\big( Q^n_s = 0 / Q^n_0 = 0 \big)\, ds \le n^{(1-\alpha)/2} + \sqrt n\Big( 1 - \frac{\lambda_n}{\mu_n} \Big)\, t + K\sqrt t, \tag{95}
\]
for a computable constant $K$. As a consequence, since $\sqrt n\big( 1 - \frac{\lambda_n}{\mu_n} \big) = \frac{1}{\mu_n}\sqrt n(\mu_n - \lambda_n) \to 0$ under assumption C3,
\[
\limsup_{n\to\infty} \int_0^t \sqrt n\, P^n\big( Q^n_s = 0 / Q^n_0 = 0 \big)\, ds \le K\sqrt t.
\]
To this end we start by recalling a few properties that we need, then we sketch the proof of the upper bound and of the limit, and finally we present the technical details.
B.1 Preliminary results

1) Modified Bessel functions. The modified Bessel functions of order $k$, for $k \ge 0$, are defined as
\[
I_k(x) = \sum_{i=0}^\infty \frac{1}{(k+i)!\, i!} \Big( \frac{x}{2} \Big)^{k+2i},
\]
while, for $k \le 0$, $I_k(x) = I_{-k}(x)$.

2) The asymptotic behaviour of modified Bessel functions. It is well known that, for any $k$,
\[
I_k(x) \sim \frac{e^x}{\sqrt{2\pi x}} \quad \text{when } x \to \infty, \tag{96}
\]
and therefore, for any $k$, there exist an $x_k > 0$ and an $L_k$ such that
\[
\sup_{x \ge x_k} \frac{I_k(x)}{\frac{e^x}{\sqrt{2\pi x}}} \le L_k. \tag{97}
\]

3) The one-dimensional distribution of a continuous time random walk. Let $Z_t$ be a standard Poisson process and $Y_t = V_{Z_t}$, where as usual $\{V_k, k \in \mathbb N\}$ is a discrete time random walk such that $V_0 = 0$ and $V_k = V_{k-1} + U_k$, where $\{U_k, k \in \mathbb N\}$ is a sequence of independent $\{-1,1\}$-valued random variables with $P(U_k = 1) = p$ and $P(U_k = -1) = q$. Then, for any $u > 0$ (see, for instance, [7] II.7, page 58),
\[
P(Y_u = k) = \exp\{-u\} \Big( \frac{p}{q} \Big)^{\frac k2} I_k\big( 2\sqrt{pq}\, u \big).
\]
As a consequence, for any $u > 0$ and any positive numbers $p, q \in (0,1)$ with $p + q = 1$,
\[
\sum_{k=-\infty}^\infty \exp\{-u\} \Big( \frac{p}{q} \Big)^{\frac k2} I_k\big( 2\sqrt{pq}\, u \big) = 1,
\]
and therefore
\[
\sum_{k=2}^\infty \exp\{-u\} \Big( \frac{p}{q} \Big)^{\frac k2} I_k\big( 2\sqrt{pq}\, u \big) \le 1,
\]
and, taking into account that $I_k(x) = I_{-k}(x)$ (or equivalently interchanging the roles of $p$ and $q$),
\[
\sum_{k=2}^\infty \exp\{-u\} \Big( \frac{q}{p} \Big)^{\frac k2} I_k\big( 2\sqrt{pq}\, u \big) \le 1. \tag{98}
\]
4) Transition probabilities $p_{k,m}(t)$ of a birth and death process with reflection in $l$. Assume that $Q_t$ is a birth and death process with upward jump rate $\nu$ and downward jump rate $\rho$, and that the process lives in $[l, \infty) \cap \mathbb N$, with $l$ a reflecting state. Then the transition probabilities (see e.g. [8]) are
\[
p_{k,m}(t) = P(Q_t = m / Q_0 = k) = \Big( \frac{\nu}{\rho} \Big)^{\frac{k-m}{2}} \exp\{-t(\nu+\rho)\}
\Big[ I_{k-m}\big( 2\sqrt{\rho\nu}\, t \big) + \Big( \frac{\rho}{\nu} \Big)^{\frac12} I_{m+k+1-2l}\big( 2\sqrt{\rho\nu}\, t \big)
+ \Big( 1 - \frac{\nu}{\rho} \Big) \sum_{j=2}^\infty \Big( \frac{\rho}{\nu} \Big)^{\frac j2} I_{m+k-2l+j}\big( 2\sqrt{\rho\nu}\, t \big) \Big].
\]

5) The behaviour of $p_{0,0}(t)$ with reflection in 0. When $k = m = l = 0$, then
\[
p_{0,0}(t) = P(Q_t = 0 / Q_0 = 0) = \exp\{-t(\nu+\rho)\} \Big[ I_0\big( 2\sqrt{\rho\nu}\, t \big) + \Big( \frac{\rho}{\nu} \Big)^{\frac12} I_1\big( 2\sqrt{\rho\nu}\, t \big) \Big]
+ \Big( 1 - \frac{\nu}{\rho} \Big) \sum_{j=2}^\infty \exp\{-t(\nu+\rho)\} \Big( \frac{\rho}{\nu} \Big)^{\frac j2} I_j\big( 2\sqrt{\rho\nu}\, t \big),
\]
so that
\[
p_{0,0}(t) - \exp\{-t(\nu+\rho)\} \Big[ I_0\big( 2\sqrt{\rho\nu}\, t \big) + \Big( \frac{\rho}{\nu} \Big)^{\frac12} I_1\big( 2\sqrt{\rho\nu}\, t \big) \Big]
= \Big( 1 - \frac{\nu}{\rho} \Big) \sum_{j=2}^\infty \exp\{-t(\nu+\rho)\} \Big( \frac{\rho}{\nu} \Big)^{\frac j2} I_j\big( 2\sqrt{\rho\nu}\, t \big).
\]
Note that if we set $u = t(\nu+\rho)$, $p = \frac{\nu}{\nu+\rho}$, and $q = \frac{\rho}{\nu+\rho}$, then the sum in the previous expression is identical to the left hand side of (98), and is therefore bounded by 1. As a consequence we have that
\[
\Big| p_{0,0}(t) - \exp\{-t(\nu+\rho)\} \Big[ I_0\big( 2\sqrt{\rho\nu}\, t \big) + \Big( \frac{\rho}{\nu} \Big)^{\frac12} I_1\big( 2\sqrt{\rho\nu}\, t \big) \Big] \Big| \le \Big( 1 - \frac{\nu}{\rho} \Big), \tag{99}
\]
and in particular
\[
p_{0,0}(t) \le \exp\{-t(\nu+\rho)\} \Big[ I_0\big( 2\sqrt{\rho\nu}\, t \big) + \Big( \frac{\rho}{\nu} \Big)^{\frac12} I_1\big( 2\sqrt{\rho\nu}\, t \big) \Big] + \Big( 1 - \frac{\nu}{\rho} \Big). \tag{100}
\]
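The expression for $p_{0,0}(t)$ can be checked against a direct numerical integration of the Kolmogorov forward equations on a truncated state space. For simplicity we take the symmetric case $\nu = \rho$, where the correction series vanishes and $p_{0,0}(t) = e^{-2\nu t}[I_0(2\nu t) + I_1(2\nu t)]$; the truncation level, step count, and test values are ad hoc choices of ours:

```python
import math

def bessel_I(k, x, imax=80):
    """Modified Bessel function I_k(x) from its defining series."""
    return sum((x / 2.0) ** (k + 2 * i) / (math.factorial(k + i) * math.factorial(i))
               for i in range(imax))

def p00_bessel(t, nu):
    """p_{0,0}(t) in the symmetric case nu = rho: e^{-2 nu t}[I_0(2 nu t) + I_1(2 nu t)]."""
    return math.exp(-2.0 * nu * t) * (bessel_I(0, 2.0 * nu * t) + bessel_I(1, 2.0 * nu * t))

def p00_forward(t, nu, nstates=40, steps=2000):
    """P(Q_t = 0 / Q_0 = 0) by 4th-order Runge-Kutta integration of the
    Kolmogorov forward equations, truncated at `nstates` states (0 reflecting)."""
    rho = nu

    def deriv(p):
        d = [0.0] * nstates
        for k in range(nstates):
            out_rate = nu + (rho if k > 0 else 0.0)   # no departure from state 0
            d[k] -= out_rate * p[k]
            if k > 0:
                d[k] += nu * p[k - 1]                 # birth from state k-1
            if k + 1 < nstates:
                d[k] += rho * p[k + 1]                # death from state k+1
        return d

    p = [0.0] * nstates
    p[0] = 1.0
    h = t / steps
    for _ in range(steps):
        k1 = deriv(p)
        k2 = deriv([p[i] + 0.5 * h * k1[i] for i in range(nstates)])
        k3 = deriv([p[i] + 0.5 * h * k2[i] for i in range(nstates)])
        k4 = deriv([p[i] + h * k3[i] for i in range(nstates)])
        p = [p[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(nstates)]
    return p[0]
```

With moderate rates and horizons the truncation error is negligible and the two computations agree to several decimal places.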
B.2 Sketch of the proof

In our case we are interested in an upper bound for
\[
E^n\Big[ \int_0^t \sqrt n\, I(Q^n_s = 0)\, ds \Big] = \int_0^t \sqrt n\, P^n\big( Q^n_s = 0 / Q^n_0 = 0 \big)\, ds.
\]
We can use (100) in the case $\nu = n\lambda_n$ and $\rho = n\mu_n$ and get
\[
\sqrt n\, P^n\big( Q^n_s = 0 / Q^n_0 = 0 \big)
\le \sqrt n\, \exp\{-sn(\lambda_n+\mu_n)\} \Big[ I_0(x_n(s)) + \Big( \frac{\mu_n}{\lambda_n} \Big)^{\frac12} I_1(x_n(s)) \Big]
+ \sqrt n\Big( 1 - \frac{\lambda_n}{\mu_n} \Big),
\]
where
\[
x_n(s) = 2n\sqrt{\mu_n\lambda_n}\, s.
\]
For any real number $\alpha > 1$,
\[
\int_0^t \sqrt n\, P^n(Q^n_s = 0/Q^n_0 = 0)\, ds
= \int_0^{n^{-\alpha/2}} \sqrt n\, P^n(Q^n_s = 0/Q^n_0 = 0)\, ds + \int_{n^{-\alpha/2}}^t \sqrt n\, P^n(Q^n_s = 0/Q^n_0 = 0)\, ds
\]
\[
\le n^{(1-\alpha)/2} + \int_{n^{-\alpha/2}}^t \sqrt n\, \exp\{-sn(\lambda_n+\mu_n)\} \Big[ I_0(x_n(s)) + \Big( \frac{\mu_n}{\lambda_n} \Big)^{\frac12} I_1(x_n(s)) \Big] ds
+ \sqrt n\Big( 1 - \frac{\lambda_n}{\mu_n} \Big)\, t.
\]
Then, using (97), it is possible to show (see the technical details at the end of this section) that, for any $1 < \alpha < 2$ and for any $n$ such that
\[
n^{1-\alpha/2} \ge \frac{\max(x_0, x_1)}{2\sqrt{\mu_n\lambda_n}},
\]
the following bound holds:
\[
\int_0^t \sqrt n\, P^n(Q^n_s = 0/Q^n_0 = 0)\, ds
\le n^{(1-\alpha)/2} + \sqrt n\Big( 1 - \frac{\lambda_n}{\mu_n} \Big)\, t
+ \int_{n^{-\alpha/2}}^t \Big[ L_0 + \Big( \frac{\mu_n}{\lambda_n} \Big)^{\frac12} L_1 \Big] \frac{1}{2\sqrt{\pi\sqrt{\mu_n\lambda_n}\, s}}\, ds.
\]
Starting from the bound (99) instead of (100), and using (96), the same technique can be used to show that
\[
\lim_{n\to\infty} E\Big[ \int_0^t \sqrt n\, \mu_n\, I(Q^n_{s-} = 0)\, ds \Big]
= \lim_{n\to\infty} \mu_n \int_{n^{-\alpha/2}}^t \Big[ 1 + \Big( \frac{\mu_n}{\lambda_n} \Big)^{\frac12} \Big] \frac{1}{2\sqrt{\pi\sqrt{\mu_n\lambda_n}\, s}}\, ds = E[\Lambda_t],
\]
where $\Lambda_t$ is the local time of a Wiener process with diffusion coefficient $2\lambda$ and drift coefficient $0$. Indeed, on the one hand,
\[
\lim_{n\to\infty} \mu_n \int_{n^{-\alpha/2}}^t \Big[ 1 + \Big( \frac{\mu_n}{\lambda_n} \Big)^{\frac12} \Big] \frac{1}{2\sqrt{\pi\sqrt{\mu_n\lambda_n}\, s}}\, ds
= \lambda \int_0^t \frac{2}{2\sqrt{\pi\lambda s}}\, ds
= \int_0^t \frac{\sqrt\lambda}{\sqrt{\pi s}}\, ds = \frac{2\sqrt{\lambda t}}{\sqrt\pi}.
\]
On the other hand, by the reflection principle, the density of the random variable $\Lambda_t$ is
\[
f_{\Lambda_t}(u) = \frac{ e^{-\frac{u^2}{4\lambda t}} }{ \sqrt{\pi\lambda t} }\, I(u > 0),
\]
and then
\[
E[\Lambda_t] = \frac{1}{\sqrt{\pi\lambda t}} \int_0^\infty u\, e^{-\frac{u^2}{4\lambda t}}\, du = \frac{2\sqrt{\lambda t}}{\sqrt\pi}.
\]
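The closed form $E[\Lambda_t] = 2\sqrt{\lambda t}/\sqrt\pi$ can be reproduced numerically by integrating the reflection-principle density with a simple trapezoidal rule; the values of $\lambda$, $t$ and the integration grid are arbitrary choices of ours:

```python
import math

LAM, T = 0.7, 2.0   # illustrative parameter values

def density(u):
    """Reflection-principle density of Lambda_t: exp(-u^2/(4 lam t)) / sqrt(pi lam t)."""
    return math.exp(-u * u / (4.0 * LAM * T)) / math.sqrt(math.pi * LAM * T)

def trapezoid(f, a, b, steps):
    """Composite trapezoidal rule on [a, b]."""
    h = (b - a) / steps
    total = 0.5 * (f(a) + f(b))
    for i in range(1, steps):
        total += f(a + i * h)
    return h * total

# The density integrates to 1 on (0, inf), and its first moment matches
# the closed form 2 sqrt(lam t) / sqrt(pi).  The cutoff u = 50 is far in
# the Gaussian tail for these parameters.
total_mass = trapezoid(density, 0.0, 50.0, 100000)
mean_local_time = trapezoid(lambda u: u * density(u), 0.0, 50.0, 100000)
closed_form = 2.0 * math.sqrt(LAM * T) / math.sqrt(math.pi)
```

This also doubles as a sanity check that $f_{\Lambda_t}$ is indeed a probability density.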
B.3 Technical details

Taking into account (97), we get, for $s \in [n^{-\alpha/2}, t]$,
\[
\sqrt n\, \exp\{-sn(\lambda_n+\mu_n)\} \Big[ I_0(x_n(s)) + \Big( \frac{\mu_n}{\lambda_n} \Big)^{\frac12} I_1(x_n(s)) \Big]
\le \sqrt n\, \exp\{-sn(\lambda_n+\mu_n)\} \frac{ e^{x_n(s)} }{ \sqrt{2\pi x_n(s)} }
\Bigg[ \sup_{u \in [n^{-\alpha/2}, t]} \frac{ I_0(x_n(u)) }{ \frac{e^{x_n(u)}}{\sqrt{2\pi x_n(u)}} }
+ \Big( \frac{\mu_n}{\lambda_n} \Big)^{\frac12} \sup_{u \in [n^{-\alpha/2}, t]} \frac{ I_1(x_n(u)) }{ \frac{e^{x_n(u)}}{\sqrt{2\pi x_n(u)}} } \Bigg].
\]
Now
\[
\sqrt n\, \exp\{-sn(\lambda_n+\mu_n)\} \frac{ e^{x_n(s)} }{ \sqrt{2\pi x_n(s)} }
= \sqrt n\, \exp\big\{ -sn\big[ (\lambda_n+\mu_n) - 2\sqrt{\mu_n\lambda_n} \big] \big\} \frac{1}{ \sqrt{4\pi n \sqrt{\mu_n\lambda_n}\, s} }
= \exp\big\{ -sn\big( \sqrt{\mu_n} - \sqrt{\lambda_n} \big)^2 \big\} \frac{1}{ 2\sqrt{\pi\sqrt{\mu_n\lambda_n}\, s} }
\le \frac{1}{ 2\sqrt{\pi\sqrt{\mu_n\lambda_n}\, s} },
\]
since $(\lambda_n + \mu_n) - 2\sqrt{\mu_n\lambda_n} = \big( \sqrt{\mu_n} - \sqrt{\lambda_n} \big)^2 \ge 0$. Moreover, for any $n$ and for $k = 0, 1$,
\[
\sup_{u \in [n^{-\alpha/2}, t]} \frac{ I_k(x_n(u)) }{ \frac{e^{x_n(u)}}{\sqrt{2\pi x_n(u)}} }
= \sup_{x \in [x_n(n^{-\alpha/2}),\, x_n(t)]} \frac{ I_k(x) }{ \frac{e^x}{\sqrt{2\pi x}} }
\le \sup_{x \ge 2n^{1-\alpha/2}\sqrt{\mu_n\lambda_n}} \frac{ I_k(x) }{ \frac{e^x}{\sqrt{2\pi x}} },
\]
and therefore, with the same notation as in (97), for $\alpha < 2$ and for $n$ sufficiently large, so that $2n^{1-\alpha/2}\sqrt{\mu_n\lambda_n} \ge x_k$, we get
\[
\sup_{u \in [n^{-\alpha/2}, t]} \frac{ I_k(x_n(u)) }{ \frac{e^{x_n(u)}}{\sqrt{2\pi x_n(u)}} } \le L_k.
\]
References

[1] Bhatt, A. G., Kallianpur, G., and Karandikar, R. L. Robustness of the nonlinear filter. Stochastic Process. Appl. 81, 2 (1999), 247-254.
[2] Billingsley, P. Convergence of Probability Measures. Wiley, New York, 1968.
[3] Brémaud, P. Point Processes and Queues. Springer, New York, 1981.
[4] Calzolari, A., and Nappo, G. Counting observations: a note on state estimation sensitivity with an L¹-bound. Appl. Math. Optim. 44, 2 (2001), 177-201.
[5] Calzolari, A., and Nappo, G. The filtering problem in a model with grouped data and counting observation times. Tech. rep., Università di Roma, http://www.mat.uniroma1.it/people/nappo/nappo.html#scientif, 2001.
[6] de Finetti, B. Teoria delle probabilità, vol. II. Einaudi, 1970.
[7] Feller, W. An Introduction to Probability Theory and Its Applications, vol. 2. Wiley, 1971.
[8] Goel, N. S., and Richter-Dyn, N. Stochastic Models in Biology. Academic Press, New York-London, 1974.
[9] Goggin, E. Conditions for convergence of conditional expectations. Ann. Probab. 22, 2 (1994), 1097-1114.
[10] Goggin, E. An L¹ approximation for conditional expectations. Stoch. and Stoch. Rep. 60 (1997), 85-106.
[11] Grimmett, G. R., and Stirzaker, D. R. Probability and Random Processes, second ed. Oxford University Press, New York, 1992.
[12] Kurtz, T. G. Lectures on Stochastic Analysis. Dept. of Math. and Stat., Univ. of Wisconsin, http://www.math.wisc.edu/~kurtz/m735.htm, 1999.
[13] Nappo, G., and Torti, B. Filtering of a Brownian motion with respect to its local time. Manuscript.
[14] Prabhu, N. U. Stochastic Storage Processes: Queues, Insurance Risk, Dams, and Data Communication, second ed., vol. 15 of Applications of Mathematics. Springer-Verlag, New York, 1998.