CONVERGENCE OF NONLINEAR FILTERING FOR STOCHASTIC DYNAMICAL SYSTEMS WITH LÉVY NOISES*

arXiv:1707.07824v1 [math.PR] 25 Jul 2017

HUIJIE QIAO
Department of Mathematics, Southeast University, Nanjing, Jiangsu 211189, China
[email protected]

Abstract. We consider the nonlinear filtering problem for multiscale non-Gaussian signal processes observed through observation processes with jumps. First, we prove that the dimension of the signal system can be reduced. Second, convergence of the corresponding nonlinear filtering to the homogenized filtering is shown by a weak convergence approach.

AMS Subject Classification (2010): 60G35, 60G51, 60H10.
Keywords: nonlinear filtering, dimensional reduction, homogenization, weak convergence.
*This work was partly supported by NSF of China (No. 11001051, 11371352).

1. Introduction

For a fixed time T > 0, let (Ω, F, {F_t}_{t∈[0,T]}, P) be a complete filtered probability space. Consider the following slow-fast system on R^n × R^m: for 0 ≤ t ≤ T,
\[
\left\{
\begin{aligned}
dX^\varepsilon_t &= b_1(X^\varepsilon_t, Z^\varepsilon_t)\,dt + \sigma_1(X^\varepsilon_t, Z^\varepsilon_t)\,dV_t + \int_{U_1} f_1(X^\varepsilon_{t-}, u)\,\tilde N_{p_1}(dt, du), \qquad X^\varepsilon_0 = x_0, \\
dZ^\varepsilon_t &= \frac1\varepsilon\, b_2(X^\varepsilon_t, Z^\varepsilon_t)\,dt + \frac1{\sqrt\varepsilon}\,\sigma_2(X^\varepsilon_t, Z^\varepsilon_t)\,dW_t + \int_{U_2} f_2(X^\varepsilon_{t-}, Z^\varepsilon_{t-}, u)\,\tilde N^\varepsilon_{p_2}(dt, du), \qquad Z^\varepsilon_0 = z_0,
\end{aligned}
\right.
\tag{1}
\]
where V, W are l-dimensional and m-dimensional standard Brownian motions, respectively, and p_1, p_2 are two stationary Poisson point processes of the class (quasi left-continuous), defined on (Ω, F, {F_t}_{t∈[0,T]}, P), with values in U and characteristic measures ν_1, ν_2, respectively. Here ν_1, ν_2 are two σ-finite measures defined on a measurable space (U, 𝒰). Fix U_1, U_2 ∈ 𝒰 with ν_1(U \ U_1) < ∞ and ν_2(U \ U_2) < ∞. Let N_{p_1}((0,t], du) be the counting measure of p_1(t), a Poisson random measure; then E N_{p_1}((0,t], A) = t ν_1(A) for A ∈ 𝒰. Denote by
\[
\tilde N_{p_1}((0,t], du) := N_{p_1}((0,t], du) - t\,\nu_1(du)
\]
the compensated measure of p_1(t). In the same way we define N_{p_2}((0,t], du) and \tilde N_{p_2}((0,t], du). Moreover, N^ε_{p_2}((0,t], du) is another Poisson random measure on (U, 𝒰) such that E N^ε_{p_2}((0,t], A) = (1/ε) t ν_2(A) for A ∈ 𝒰, and V_t, W_t, N_{p_1}, N_{p_2}, N^ε_{p_2} are mutually independent. The mappings b_1: R^n × R^m → R^n, b_2: R^n × R^m → R^m, σ_1: R^n × R^m → R^{n×l}, σ_2: R^n × R^m → R^{m×m}, f_1: R^n × U_1 → R^n and f_2: R^n × R^m × U_2 → R^m are all Borel measurable.

The slow-fast dynamical system (1) is usually called a multiscale process: the rates of change of the different variables differ by orders of magnitude. Multiple time scale models are widely applied in science and engineering.

For example, fast atmospheric and slow oceanic dynamics together describe the evolution of the climate, and the state dynamics of electric power systems consist of fast- and slowly-varying elements.

Next, define an observation process Y^ε by
\[
Y^\varepsilon_t = \int_0^t h(X^\varepsilon_s, Z^\varepsilon_s)\,ds + B_t + \int_0^t\!\!\int_{U_3} f_3(s,u)\,\tilde N_\lambda(ds, du) + \int_0^t\!\!\int_{U\setminus U_3} g_3(s,u)\,N_\lambda(ds, du),
\]
where B is a d-dimensional standard Brownian motion and N_λ((0,t], du) is a Poisson random measure with predictable compensator λ(t, X^ε_t, u) t ν_3(du). Here the function λ(t, x, u) takes values in (0,1), and ν_3 is another σ-finite measure defined on U with ν_3(U \ U_3) < ∞ and ∫_{U_3} ‖u‖²_U ν_3(du) < ∞ for a fixed U_3 ∈ 𝒰, where ‖·‖_U denotes the norm on (U, 𝒰). Set \tilde N_λ((0,t], du) := N_λ((0,t], du) − λ(t, X^ε_t, u) t ν_3(du); then \tilde N_λ((0,t], du) is the compensated martingale measure of N_λ((0,t], du). Moreover, V_t, W_t, B_t, N_{p_1}, N_{p_2}, N^ε_{p_2}, N_λ are mutually independent, and h: R^n × R^m → R^d, f_3: [0,T] × U_3 → R^d, g_3: [0,T] × (U \ U_3) → R^d are all Borel measurable.

For a Borel measurable function F, the nonlinear filtering problem for the slow component X^ε_t with respect to {Y^ε_s, 0 ≤ s ≤ t} consists in evaluating the 'filter' E[F(X^ε_t)|F^{Y^ε}_t], where F^{Y^ε}_t is the σ-algebra generated by {Y^ε_s, 0 ≤ s ≤ t} and E|F(X^ε_t)| < ∞ for t ∈ [0,T].

When f_1 = f_2 = f_3 = g_3 = 0, this problem has been studied by several authors. Let us recall some works. In [8], Park, Sowers and Namachchivaya considered the filtering problem with a two-dimensional plant and a one-dimensional observation process, using time change and decomposition methods. For the higher-dimensional case, Park, Namachchivaya and Yeong [7] presented a numerical algorithm. Later, Imkeller, Namachchivaya, Perkowski and Yeong [2] showed that for the higher-dimensional slow-fast dynamical system (1), the filter E[F(X^ε_t)|F^{Y^ε}_t] converges to the homogenized filter (see Section 4), using double backward stochastic differential equations and asymptotic techniques. When f_1 ≠ 0, f_2 ≠ 0, f_3 = g_3 = 0, Kushner [4] studied this problem by a weak convergence method. In this paper, we treat the problem with f_1 ≠ 0, f_2 ≠ 0, f_3 ≠ 0, g_3 ≠ 0, i.e. multiscale non-Gaussian signal processes and observation processes with jumps. First, we prove that the dimension of the slow-fast system can be reduced. Second, we show convergence of the corresponding nonlinear filtering to the homogenized filtering.

It is worthwhile to comment on our methods. By a martingale problem method we reduce the dimension of the slow-fast system. For the filtering problem for the slow component X^ε_t with respect to {Y^ε_s, 0 ≤ s ≤ t}, the time change is only useful for a one-dimensional process, and the theory of double backward stochastic differential equations with jumps is still limited, so these techniques do not apply to the present case. Instead, we compute the difference between E[F(X^ε_t)|F^{Y^ε}_t] and the homogenized filter and convert it into the difference between two unnormalized filterings. With the help of the weak convergence method in [4], we show that the difference between the two unnormalized filterings converges to zero. Thus, we prove that E[F(X^ε_t)|F^{Y^ε}_t] converges weakly to the homogenized filter.

The paper is arranged as follows. In the next section, we introduce some notation, terminology and concepts used in the sequel. The dimension reduction for the slow-fast system is carried out in Section 3. In Section 4, the nonlinear filtering problems are introduced, and convergence of the corresponding nonlinear filtering to the homogenized filtering is proved in Section 5.
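To make the time-scale separation in (1) concrete before turning to the analysis, the following minimal simulation sketch (not part of the paper) integrates a scalar version of the slow-fast signal system with an Euler-type scheme. The concrete coefficients b_1, σ_1, f_1, b_2, σ_2, the jump rate and the step size are illustrative assumptions only, and the subtleties of simulating general (infinite-activity) Lévy noise are ignored.

```python
import numpy as np

# Minimal Euler-type sketch of the slow-fast system (1) with n = m = l = 1.
# Assumptions (not from the paper): finite jump activity, so the compensated
# Poisson integral reduces to "jumps minus their mean"; the concrete choices of
# b1, sigma1, f1, b2, sigma2 below are placeholders for illustration only.

rng = np.random.default_rng(0)

def simulate_slow_path(eps, T=1.0, dt=1e-4, x0=1.0, z0=0.0, jump_rate=2.0):
    n_steps = int(T / dt)
    x, z = x0, z0
    xs = np.empty(n_steps)
    for k in range(n_steps):
        # slow component: drift + diffusion + compensated jumps
        dV = np.sqrt(dt) * rng.standard_normal()
        n_jumps = rng.poisson(jump_rate * dt)
        jump = 0.1 * x * n_jumps - 0.1 * x * jump_rate * dt   # compensated jump part
        x = x + (-x + z) * dt + 0.5 * dV + jump               # b1 = -x + z, sigma1 = 0.5
        # fast component: note the 1/eps and 1/sqrt(eps) scalings from (1)
        dW = np.sqrt(dt) * rng.standard_normal()
        z = z + (x - z) / eps * dt + dW / np.sqrt(eps)        # b2 = x - z, sigma2 = 1
        xs[k] = x
    return xs

# For small eps the fast component equilibrates quickly, so the slow paths
# should stay close to the averaged dynamics constructed in Section 3.
paths = {eps: simulate_slow_path(eps) for eps in (0.1, 0.01)}
```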

The following convention will be used throughout the paper: C, with or without indices, denotes a positive constant (depending on the indices) whose value may change from one place to another.

2. Preliminaries

In this section, we introduce some notation, terminology and concepts used in the sequel.

First, we introduce the following notation and terminology:

(i) For a separable metric space E, let 𝔅(E) denote the Borel σ-algebra on E and B(E) the set of all real-valued uniformly bounded Borel-measurable mappings on E. Also let C(E) be the set of all real-valued continuous functions on E, put C̄(E) := B(E) ∩ C(E), and let C_c(E) be the set of all members of C̄(E) which have compact support. When E is locally compact, let Ĉ(E) be the collection of all members of C̄(E) which vanish at infinity.

(ii) For a positive integer r, let C^r(R^q) denote the collection of all members of C(R^q) with continuous derivatives of each order up to and including r. Let C_c^∞(R^q) denote the collection of all members of C(R^q) with continuous derivatives of all orders and compact support. For a metric space E and a positive integer r, write C^{r,0}(R^q × E) for the collection of all mappings f ∈ C(R^q × E) whose partial derivatives of every order up to and including r, with respect to the first q real-valued arguments, exist and are members of C(R^q × E), and put C_c^{r,0}(R^q × E) := C^{r,0}(R^q × E) ∩ C_c(R^q × E).

(iii) When E is a complete separable metric space, let P(E) denote the collection of all probability measures on the measurable space (E, 𝔅(E)), equipped with the usual topology of weak (or narrow) convergence; and if X: (Ω, F, P) → E is F/𝔅(E)-measurable, let L(X) be the distribution of X on (E, 𝔅(E)). Also, for a 𝔅(E)-measurable mapping f: E → R which is integrable with respect to μ ∈ P(E), we put μf := ∫_E f dμ.

Second, we introduce some concepts. Suppose that E is a separable metric space.

Definition 2.1. Let A ⊂ B(E) × B(E) be a relation with domain D(A), and let μ ∈ P(E). A progressively measurable solution of the martingale problem for A (for (A, μ)) is a pair {(Ω̃, F̃, {F̃_t}, P̃), (X̃_t)}, in which (Ω̃, F̃, {F̃_t}, P̃) is a complete filtered probability space and {X̃_t} is an E-valued F̃_t-progressively measurable process such that
\[
f(\tilde X_t) - \int_0^t A f(\tilde X_s)\,ds
\]
is an {F̃_t}-martingale for each f ∈ D(A) (and L(X̃_0) = μ). The martingale problem for (A, μ) has the property of existence when there exists some progressively measurable solution of the martingale problem for (A, μ), and the property of uniqueness when, given any two progressively measurable solutions {(Ω̃, F̃, {F̃_t}, P̃), (X̃_t)} and {(Ω̌, F̌, {F̌_t}, P̌), (X̌_t)} of the martingale problem for (A, μ), the E-valued processes X̃ and X̌ necessarily have identical finite-dimensional distributions. The martingale problem for (A, μ) is called well-posed when it has both the existence and the uniqueness property. Finally, the martingale problem for A is well-posed when the martingale problem for (A, μ) is well-posed for each μ ∈ P(E).

3. Convergence of some processes

In this section, we study the convergence of the system (1) as ε → 0. We make the following assumptions in order to guarantee existence and uniqueness of a solution of the system (1).

Assumption 1.

(H1_{b_1,σ_1,f_1}) For x_1, x_2 ∈ R^n, z_1, z_2 ∈ R^m, there exists L_1 > 0 such that
\[
|b_1(x_1,z_1) - b_1(x_2,z_2)| \le L_1(|x_1-x_2| + |z_1-z_2|),
\]
\[
\|\sigma_1(x_1,z_1) - \sigma_1(x_2,z_2)\|^2 \le L_1(|x_1-x_2|^2 + |z_1-z_2|^2),
\]
\[
\int_{U_1} |f_1(x_1,u) - f_1(x_2,u)|^2\,\nu_1(du) \le L_1 |x_1-x_2|^2,
\]
where |·| and ‖·‖ denote the length of a vector and the Hilbert-Schmidt norm of a matrix, respectively.

(H2_{b_1,σ_1,f_1}) For x ∈ R^n, z ∈ R^m, there exists L_2 > 0 such that
\[
|b_1(x,z)|^2 + \|\sigma_1(x,z)\|^2 + \int_{U_1} |f_1(x,u)|^2\,\nu_1(du) \le L_2.
\]

(H1_{b_2,σ_2,f_2}) For x_1, x_2 ∈ R^n, z_1, z_2 ∈ R^m, there exists L_3 > 0 such that
\[
|b_2(x_1,z_1) - b_2(x_2,z_2)| \le L_3(|x_1-x_2| + |z_1-z_2|),
\]
\[
\|\sigma_2(x_1,z_1) - \sigma_2(x_2,z_2)\|^2 \le L_3(|x_1-x_2|^2 + |z_1-z_2|^2),
\]
\[
\int_{U_2} |f_2(x_1,z_1,u) - f_2(x_2,z_2,u)|^2\,\nu_2(du) \le L_3(|x_1-x_2|^2 + |z_1-z_2|^2).
\]

Under Assumption 1, by Theorem 1.2 in [10], the system (1) has a unique strong solution, denoted by (X^ε_t, Z^ε_t). Moreover, the infinitesimal generator of the system (1) is given by
\[
(L^\varepsilon H)(x,z) = (L^{X^\varepsilon} H)(x,z) + (L^{Z^\varepsilon} H)(x,z), \qquad H \in D(L^\varepsilon),
\]
where
\[
(L^{X^\varepsilon} H)(x,z) := \frac{\partial H(x,z)}{\partial x_i}\,b^i_1(x,z) + \frac12\,\frac{\partial^2 H(x,z)}{\partial x_i\partial x_j}\,(\sigma_1\sigma_1^T)^{ij}(x,z)
+ \int_{U_1}\Big[H\big(x + f_1(x,u), z\big) - H(x,z) - \frac{\partial H(x,z)}{\partial x_i}\,f^i_1(x,u)\Big]\,\nu_1(du),
\]
and
\[
(L^{Z^\varepsilon} H)(x,z) := \frac1\varepsilon\,\frac{\partial H(x,z)}{\partial z_i}\,b^i_2(x,z) + \frac1{2\varepsilon}\,\frac{\partial^2 H(x,z)}{\partial z_i\partial z_j}\,(\sigma_2\sigma_2^T)^{ij}(x,z)
+ \frac1\varepsilon\int_{U_2}\Big[H\big(x, z + f_2(x,z,u)\big) - H(x,z) - \frac{\partial H(x,z)}{\partial z_i}\,f^i_2(x,z,u)\Big]\,\nu_2(du).
\]
Here and hereafter, we use the convention that repeated indices imply summation.

Next, take any x ∈ R^n and fix it, and consider the following SDE on R^m:
\[
\left\{
\begin{aligned}
dZ^x_t &= b_2(x, Z^x_t)\,dt + \sigma_2(x, Z^x_t)\,dW_t + \int_{U_2} f_2(x, Z^x_{t-}, u)\,\tilde N_{p_2}(dt, du), \\
Z^x_0 &= z_0, \qquad t \ge 0.
\end{aligned}
\right.
\]
Under assumption (H1_{b_2,σ_2,f_2}), the above equation has a unique solution Z^x_t. In addition, it is a Markov process, and its transition probability is denoted by p(x; z_0, t, A) for t > 0 and A ∈ 𝔅(R^m). Set (T_t φ)(z_0) := ∫_{R^m} φ(z') p(x; z_0, t, dz') for any φ ∈ C(R^m); then {T_t, t ≥ 0} is the transition semigroup of Z^x, and εL^{Z^ε} is its infinitesimal generator. For Z^x_t, we assume:

Assumption 2. There exists a unique invariant probability measure p̄(x; dz) for Z^x_t, and
\[
\int_0^\infty\Big|\int_{R^m}\varphi(z')\,p(x; z_0, s, dz') - \int_{R^m}\varphi(z')\,\bar p(x; dz')\Big|\,ds < \infty
\]
for any φ ∈ C(R^m).

For conditions ensuring the existence of a unique invariant probability measure for Z^x_t, please refer to [9]. Define an operator L̄ as follows:
\[
D(\bar L) := C_c^\infty(R^n), \qquad (\bar L g)(x) := \int_{R^m}(L^{X^\varepsilon} g)(x,z)\,\bar p(x; dz), \qquad g \in D(\bar L),
\]
where
\[
(\bar L g)(x) = \frac{\partial g(x)}{\partial x_i}\,\bar b^i_1(x) + \frac12\,\frac{\partial^2 g(x)}{\partial x_i\partial x_j}\,(\bar\sigma_1\bar\sigma_1^T)^{ij}(x)
+ \int_{U_1}\Big[g\big(x + f_1(x,u)\big) - g(x) - \frac{\partial g(x)}{\partial x_i}\,f^i_1(x,u)\Big]\,\nu_1(du),
\]
\[
\bar b_1(x) := \int_{R^m} b_1(x,z)\,\bar p(x; dz), \qquad (\bar\sigma_1\bar\sigma_1^T)(x) := \int_{R^m}(\sigma_1\sigma_1^T)(x,z)\,\bar p(x; dz).
\]
It is clear that L̄ is a jump-diffusion operator. So we can construct an SDE generated by L̄ on the probability space (Ω, F, {F_t}_{t∈[0,T]}, P) as follows:
\[
\left\{
\begin{aligned}
dX^0_t &= \bar b_1(X^0_t)\,dt + \bar\sigma_1(X^0_t)\,d\bar V_t + \int_{U_1} f_1(X^0_{t-}, u)\,\tilde{\bar N}(dt, du), \\
X^0_0 &= x_0, \qquad 0 \le t \le T,
\end{aligned}
\right.
\tag{2}
\]
where V̄ is an l-dimensional standard Brownian motion and N̄(dt, du) is a Poisson random measure with characteristic measure ν_1 and \tilde{\bar N}(dt, du) = \bar N(dt, du) − ν_1(du)\,dt. For the operator L̄, we make the following requirement.

Assumption 3. The martingale problem for (L̄, δ_{x_0}) is well-posed.

Theorem 3.1. Under all the above hypotheses, {X^ε_t, t ∈ [0,T]} converges weakly to {X^0_t, t ∈ [0,T]} in D([0,T], R^n).

Proof. Step 1. We prove that {X^ε_t, t ∈ [0,T]} is relatively weakly compact in D([0,T], R^n).

First of all, consider the martingale problem associated with L^ε. For H ∈ D(L^ε),
\[
M_H(t) := H(X^\varepsilon_t, Z^\varepsilon_t) - H(x_0, z_0) - \int_0^t (L^\varepsilon H)(X^\varepsilon_s, Z^\varepsilon_s)\,ds
\tag{3}
\]
is a square integrable martingale and
\[
\begin{aligned}
\langle M_H(\cdot), M_H(\cdot)\rangle_t
&= \int_0^t \frac{\partial H(X^\varepsilon_s, Z^\varepsilon_s)}{\partial x_i}\frac{\partial H(X^\varepsilon_s, Z^\varepsilon_s)}{\partial x_j}(\sigma_1\sigma_1^T)^{ij}(X^\varepsilon_s, Z^\varepsilon_s)\,ds \\
&\quad + \int_0^t\!\!\int_{U_1}\big[H(X^\varepsilon_s + f_1(X^\varepsilon_s, u), Z^\varepsilon_s) - H(X^\varepsilon_s, Z^\varepsilon_s)\big]^2\,\nu_1(du)\,ds \\
&\quad + \frac1\varepsilon\int_0^t \frac{\partial H(X^\varepsilon_s, Z^\varepsilon_s)}{\partial z_i}\frac{\partial H(X^\varepsilon_s, Z^\varepsilon_s)}{\partial z_j}(\sigma_2\sigma_2^T)^{ij}(X^\varepsilon_s, Z^\varepsilon_s)\,ds \\
&\quad + \frac1\varepsilon\int_0^t\!\!\int_{U_2}\big[H(X^\varepsilon_s, Z^\varepsilon_s + f_2(X^\varepsilon_s, Z^\varepsilon_s, u)) - H(X^\varepsilon_s, Z^\varepsilon_s)\big]^2\,\nu_2(du)\,ds.
\end{aligned}
\]
Taking H(x,z) = x_i, i = 1, 2, ..., n, in (3), we obtain
\[
(X^\varepsilon_t)^i - x_0^i = \int_0^t b^i_1(X^\varepsilon_r, Z^\varepsilon_r)\,dr + M_{x_i}(t), \qquad 0 < t \le T.
\]
Let τ be any (F_t)_{t≥0}-stopping time bounded by T. Then, by the Hölder inequality and the Itô isometry, it holds that for any δ > 0,
\[
\begin{aligned}
E[|X^\varepsilon_{\tau+\delta} - X^\varepsilon_\tau|^2]
&= \sum_{i=1}^n E[|(X^\varepsilon_{\tau+\delta})^i - (X^\varepsilon_\tau)^i|^2]
\le 2\delta\sum_{i=1}^n E\Big[\int_\tau^{\tau+\delta}|b^i_1(X^\varepsilon_r, Z^\varepsilon_r)|^2\,dr\Big] + 2\sum_{i=1}^n E[|M_{x_i}(\tau+\delta) - M_{x_i}(\tau)|^2] \\
&= 2\delta\sum_{i=1}^n E\Big[\int_\tau^{\tau+\delta}|b^i_1(X^\varepsilon_r, Z^\varepsilon_r)|^2\,dr\Big]
+ 2\sum_{i=1}^n E\Big[\int_\tau^{\tau+\delta}(\sigma_1\sigma_1^T)^{ii}(X^\varepsilon_r, Z^\varepsilon_r)\,dr + \int_\tau^{\tau+\delta}\!\!\int_{U_1}|f^i_1(X^\varepsilon_r, u)|^2\,\nu_1(du)\,dr\Big] \\
&= 2\delta\,E\Big[\int_\tau^{\tau+\delta}|b_1(X^\varepsilon_r, Z^\varepsilon_r)|^2\,dr\Big]
+ 2E\Big[\int_\tau^{\tau+\delta}\|\sigma_1(X^\varepsilon_r, Z^\varepsilon_r)\|^2\,dr\Big]
+ 2E\Big[\int_\tau^{\tau+\delta}\!\!\int_{U_1}|f_1(X^\varepsilon_r, u)|^2\,\nu_1(du)\,dr\Big] \\
&\le C\delta,
\end{aligned}
\]
where the last inequality is based on condition (H2_{b_1,σ_1,f_1}) and the constant C is independent of ε. So
\[
\lim_{\delta\downarrow 0}\limsup_{\varepsilon\downarrow 0}\sup_{\tau\le T} E[|X^\varepsilon_{\tau+\delta} - X^\varepsilon_\tau|^2] = 0.
\]
By a similar deduction one furthermore gets
\[
\sup_{\varepsilon,\, t\le T} E[|X^\varepsilon_t|^2] < \infty.
\]
Thus, it follows from Theorem 2.7 in [3] that {X^ε_t, t ∈ [0,T]} is tight in D([0,T], R^n), and the Prokhorov theorem then implies that {X^ε_t, t ∈ [0,T]} is relatively weakly compact in D([0,T], R^n).

Step 2. We prove that the weak limit of {X^ε_t, t ∈ [0,T]} is {X^0_t, t ∈ [0,T]}.

Taking H(x,z) = g(x) in (3), where g is a smooth and bounded function, we have
\[
g(X^\varepsilon_t) - g(X^\varepsilon_s) - \int_s^t (L^{X^\varepsilon} g)(X^\varepsilon_r, Z^\varepsilon_r)\,dr = M_g(t) - M_g(s),
\]
and then
\[
g(X^\varepsilon_t) - g(X^\varepsilon_s) - \int_s^t (\bar L g)(X^\varepsilon_r)\,dr
= \int_s^t\big[(L^{X^\varepsilon} g)(X^\varepsilon_r, Z^\varepsilon_r) - (\bar L g)(X^\varepsilon_r)\big]\,dr + M_g(t) - M_g(s).
\]
Moreover, multiplying both sides of the above equality by a bounded F_s-measurable functional χ_s of the process {X^ε_t, t ∈ [0,T]} and taking expectations, we obtain
\[
E\Big[\chi_s\Big(g(X^\varepsilon_t) - g(X^\varepsilon_s) - \int_s^t (\bar L g)(X^\varepsilon_r)\,dr\Big)\Big]
= E\Big[\chi_s\int_s^t\big[(L^{X^\varepsilon} g)(X^\varepsilon_r, Z^\varepsilon_r) - (\bar L g)(X^\varepsilon_r)\big]\,dr\Big].
\tag{4}
\]
Next we compute \lim_{\varepsilon\downarrow 0} E\big[\chi_s\int_s^t[(L^{X^\varepsilon} g)(X^\varepsilon_r, Z^\varepsilon_r) - (\bar L g)(X^\varepsilon_r)]\,dr\big]. On the one hand, set
\[
\Psi(x,z,A) := \int_0^\infty\big[p(x;z,t,A) - \bar p(x;A)\big]\,dt, \qquad
\Psi_g(x,z) := \int_{R^m}\big[(L^{X^\varepsilon} g)(x,z') - (\bar L g)(x)\big]\,\Psi(x,z,dz'),
\]
and then
\[
\begin{aligned}
\Psi_g(x,z) &= \int_{R^m}\big[(L^{X^\varepsilon} g)(x,z') - (\bar L g)(x)\big]\int_0^\infty\big[p(x;z,t,dz') - \bar p(x;dz')\big]\,dt \\
&= \int_0^\infty\Big(\int_{R^m}\big[(L^{X^\varepsilon} g)(x,z') - (\bar L g)(x)\big]\big[p(x;z,t,dz') - \bar p(x;dz')\big]\Big)\,dt \\
&= \int_0^\infty T_t\big[(L^{X^\varepsilon} g) - (\bar L g)\big](x,z)\,dt.
\end{aligned}
\]
Furthermore, it holds that
\[
\begin{aligned}
\varepsilon(L^{Z^\varepsilon}\Psi_g)(x,z)
&= \int_0^\infty(\varepsilon L^{Z^\varepsilon} T_t)\big[(L^{X^\varepsilon} g) - (\bar L g)\big](x,z)\,dt
= \int_0^\infty\frac{d\,T_t\big[(L^{X^\varepsilon} g) - (\bar L g)\big](x,z)}{dt}\,dt \\
&= \lim_{t\to\infty}T_t\big[(L^{X^\varepsilon} g) - (\bar L g)\big](x,z) - \big[(L^{X^\varepsilon} g)(x,z) - (\bar L g)(x)\big] \\
&= \lim_{t\to\infty}\int_{R^m}\big[(L^{X^\varepsilon} g)(x,z') - (\bar L g)(x)\big]\big[p(x;z,t,dz') - \bar p(x;dz')\big]
- \big[(L^{X^\varepsilon} g)(x,z) - (\bar L g)(x)\big] \\
&= -\big[(L^{X^\varepsilon} g)(x,z) - (\bar L g)(x)\big],
\end{aligned}
\tag{5}
\]
where the last equality is based on Assumption 2.

On the other hand, taking H(x,z) = εΨ_g(x,z) in (3), we get
\[
\varepsilon\Psi_g(X^\varepsilon_t, Z^\varepsilon_t) - \varepsilon\Psi_g(X^\varepsilon_s, Z^\varepsilon_s) - \varepsilon\int_s^t (L^{X^\varepsilon}\Psi_g)(X^\varepsilon_r, Z^\varepsilon_r)\,dr
= \int_s^t\varepsilon(L^{Z^\varepsilon}\Psi_g)(X^\varepsilon_r, Z^\varepsilon_r)\,dr + M_{\varepsilon\Psi_g}(t) - M_{\varepsilon\Psi_g}(s).
\]
So, multiplying by χ_s and taking expectations on both sides of the above equality, it holds that
\[
\varepsilon E\Big[\chi_s\Big(\Psi_g(X^\varepsilon_t, Z^\varepsilon_t) - \Psi_g(X^\varepsilon_s, Z^\varepsilon_s) - \int_s^t (L^{X^\varepsilon}\Psi_g)(X^\varepsilon_r, Z^\varepsilon_r)\,dr\Big)\Big]
= E\Big[\chi_s\int_s^t\varepsilon(L^{Z^\varepsilon}\Psi_g)(X^\varepsilon_r, Z^\varepsilon_r)\,dr\Big]
= -E\Big[\chi_s\int_s^t\big[(L^{X^\varepsilon} g)(X^\varepsilon_r, Z^\varepsilon_r) - (\bar L g)(X^\varepsilon_r)\big]\,dr\Big],
\]
where the last equality is based on (5). Since the left-hand side carries an explicit factor ε and the expectation it multiplies remains bounded under the standing assumptions, letting ε → 0 we see that
\[
\lim_{\varepsilon\downarrow 0}E\Big[\chi_s\int_s^t\big[(L^{X^\varepsilon} g)(X^\varepsilon_r, Z^\varepsilon_r) - (\bar L g)(X^\varepsilon_r)\big]\,dr\Big] = 0.
\]
The above limit, together with (4), yields
\[
\lim_{\varepsilon\downarrow 0}E\Big[\chi_s\Big(g(X^\varepsilon_t) - g(X^\varepsilon_s) - \int_s^t (\bar L g)(X^\varepsilon_r)\,dr\Big)\Big] = 0,
\]
which means that the weak limit of {X^ε_t, t ∈ [0,T]} is a solution of the martingale problem for (L̄, δ_{x_0}). By Assumption 3, the weak limit of {X^ε_t, t ∈ [0,T]} is {X^0_t, t ∈ [0,T]}. □
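The homogenized coefficients b̄_1 and σ̄_1σ̄_1^T in (2) are averages against the invariant measure p̄(x; dz) of the frozen fast process Z^x. Under the ergodicity required in Assumption 2 they can be approximated by time averages along a single long trajectory of Z^x. The following is a hedged numerical sketch (not from the paper); the concrete choices of b_1, b_2, σ_2 below, the omission of the fast jump part, and all tuning parameters are illustrative assumptions only.

```python
import numpy as np

# Sketch: approximate the averaged drift \bar b_1(x) = \int b_1(x,z) \bar p(x; dz)
# by a time average over one long Euler path of the frozen fast process Z^x
# (justified by the ergodicity in Assumption 2). b1, b2, sigma2 are placeholders.

rng = np.random.default_rng(1)

def b1(x, z):
    return -x + z          # example slow drift (assumed)

def frozen_fast_time_average(x, func, T=200.0, dt=1e-3, z0=0.0, burn_in=10.0):
    """Time-average func(x, Z^x_t) along an Euler path of the frozen equation
    dZ^x_t = b2(x, Z^x_t) dt + sigma2(x, Z^x_t) dW_t (jump part omitted here)."""
    n_steps, n_burn = int(T / dt), int(burn_in / dt)
    z, acc, count = z0, 0.0, 0
    for k in range(n_steps):
        z += (x - z) * dt + np.sqrt(dt) * rng.standard_normal()  # b2 = x - z, sigma2 = 1
        if k >= n_burn:
            acc += func(x, z)
            count += 1
    return acc / count

x = 0.7
b1_bar = frozen_fast_time_average(x, b1)
# For this choice, \bar p(x; dz) is N(x, 1/2), so \bar b_1(x) = -x + x = 0;
# b1_bar should be close to 0 up to Monte Carlo and discretization error.
print(b1_bar)
```

In the same way, (σ̄_1σ̄_1^T)(x), and later h̄(x) in Section 4, could be approximated by averaging (σ_1σ_1^T)(x, ·) and h(x, ·) along the same trajectory.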

4. Nonlinear filtering problems

In this section, we study nonlinear filtering problems for X^ε and X^0. For Y^ε, we assume:

Assumption 4. h is bounded and
\[
\int_0^T\!\!\int_{U_3}|f_3(s,u)|^2\,\nu_3(du)\,ds < \infty.
\]

Under Assumption 4, Y^ε is well defined. Set
\[
(\Lambda^\varepsilon_t)^{-1} := \exp\Big\{-\int_0^t h(X^\varepsilon_s, Z^\varepsilon_s)^i\,dB^i_s - \frac12\int_0^t|h(X^\varepsilon_s, Z^\varepsilon_s)|^2\,ds
- \int_0^t\!\!\int_{U_3}\log\lambda(s, X^\varepsilon_{s-}, u)\,N_\lambda(ds,du) - \int_0^t\!\!\int_{U_3}(1-\lambda(s, X^\varepsilon_s, u))\,\nu_3(du)\,ds\Big\}.
\]

Assumption 5. There exists a positive function L(u) satisfying
\[
\int_{U_3}\frac{(1-L(u))^2}{L(u)}\,\nu_3(du) < \infty
\]
such that 0 < l ≤ L(u) < λ(t,x,u) < 1 for u ∈ U_3, where l is a constant.

Under Assumption 5, it holds that
\[
E\Big[\exp\Big\{\int_0^T\!\!\int_{U_3}\frac{(1-\lambda(s, X^\varepsilon_s, u))^2}{\lambda(s, X^\varepsilon_s, u)}\,\nu_3(du)\,ds\Big\}\Big]
< \exp\Big\{\int_0^T\!\!\int_{U_3}\frac{(1-L(u))^2}{L(u)}\,\nu_3(du)\,ds\Big\} < \infty.
\]
Thus, by the same deduction as in [11], we know that (Λ^ε_t)^{-1} is an exponential martingale. By means of (Λ^ε_t)^{-1}, one can define a measure P^ε via
\[
\frac{dP^\varepsilon}{dP} = (\Lambda^\varepsilon_T)^{-1}.
\]
By the Girsanov theorem for Brownian motions and random measures, we obtain that under the measure P^ε, B̄_t := B_t + ∫_0^t h(X^ε_s, Z^ε_s)\,ds is a Brownian motion and N_λ((0,t], du) is a Poisson random measure with predictable compensator t ν_3(du). Next, rewrite
\[
\Lambda^\varepsilon_t = \exp\Big\{\int_0^t h(X^\varepsilon_s, Z^\varepsilon_s)^i\,d\bar B^i_s - \frac12\int_0^t|h(X^\varepsilon_s, Z^\varepsilon_s)|^2\,ds
+ \int_0^t\!\!\int_{U_3}\log\lambda(s, X^\varepsilon_{s-}, u)\,N_\lambda(ds,du) + \int_0^t\!\!\int_{U_3}(1-\lambda(s, X^\varepsilon_s, u))\,\nu_3(du)\,ds\Big\},
\]
and define
\[
\rho^\varepsilon_t(\psi) := E^{P^\varepsilon}[\psi(X^\varepsilon_t)\Lambda^\varepsilon_t\,|\,\mathcal F^{Y^\varepsilon}_t], \qquad \psi\in B(R^n),
\]
where E^{P^ε} denotes the expectation under the measure P^ε and F^{Y^ε}_t stands for the σ-algebra generated by {Y^ε_s, 0 ≤ s ≤ t}. Again set
\[
\pi^\varepsilon_t(\psi) := E[\psi(X^\varepsilon_t)\,|\,\mathcal F^{Y^\varepsilon}_t];
\]
by the Kallianpur-Striebel formula it holds that
\[
\pi^\varepsilon_t(\psi) = \frac{\rho^\varepsilon_t(\psi)}{\rho^\varepsilon_t(1)}.
\]
Set
\[
\bar h(x) := \int_{R^m} h(x,z)\,\bar p(x; dz);
\]
then h̄ is an averaged version of h. We make use of h̄ to define
\[
\bar\Lambda_t := \exp\Big\{\int_0^t \bar h(X^0_s)^i\,d\bar B^i_s - \frac12\int_0^t|\bar h(X^0_s)|^2\,ds
+ \int_0^t\!\!\int_{U_3}\log\lambda(s, X^0_{s-}, u)\,N_\lambda(ds,du) + \int_0^t\!\!\int_{U_3}(1-\lambda(s, X^0_s, u))\,\nu_3(du)\,ds\Big\},
\]
\[
\rho^0_t(\psi) := E^{P^\varepsilon}[\psi(X^0_t)\bar\Lambda_t\,|\,\mathcal F^{Y^\varepsilon}_t],
\]
where X^0_t is the limit process obtained in Section 3. Put
\[
\pi^0_t(\psi) := \frac{\rho^0_t(\psi)}{\rho^0_t(1)};
\]
we study the relation between π^0_t and π^ε_t as ε → 0 in the next section.

At first sight, it might seem more natural to define the limit observation process
\[
Y^0_t := \int_0^t \bar h(X^0_s)\,ds + B_t + \int_0^t\!\!\int_{U_3} f_3(s,u)\,\tilde{\bar N}_\lambda(ds,du) + \int_0^t\!\!\int_{U\setminus U_3} g_3(s,u)\,\bar N_\lambda(ds,du),
\]
where N̄_λ((0,t], du) is a Poisson random measure with predictable compensator λ(t, X^0_t, u) t ν_3(du), together with the corresponding nonlinear filtering
\[
P^0_t(\psi) := E[\psi(X^0_t)\,|\,\mathcal F^{Y^0}_t],
\]
and to discuss the relation between P^0_t and π^ε_t as ε → 0. In fact, since X^0_t cannot be obtained genuinely, Y^0_t is not observable. Moreover, even if such a homogenized observation were available, using it would lead to a loss of information for estimating the signal compared to using the actual observation. Therefore, we only consider X^0_t under F^{Y^ε}_t.
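To indicate how the homogenized quantities of this section might be used in practice, here is a hedged numerical sketch (not from the paper) of a bootstrap particle filter that approximates the homogenized filter π^0_t: the particles evolve under the reduced model (2) while the weights are driven by discrete-time increments of the actual observation. For simplicity the sketch assumes purely continuous observations (f_3 = g_3 = 0, λ ≡ 1) and Gaussian reduced dynamics; h̄, b̄_1, σ̄_1 and the synthetic data are illustrative placeholders.

```python
import numpy as np

# Bootstrap particle filter sketch for the homogenized filter pi^0_t, assuming
# (unlike the general setting of the paper) that the observation has no jump
# part: dY_t = h ds + dB_t, and that the reduced signal (2) has no jumps either.

rng = np.random.default_rng(2)

def b1_bar(x):     return -x          # averaged drift (assumed)
def sigma1_bar(x): return 0.5         # averaged diffusion coefficient (assumed)
def h_bar(x):      return x           # averaged observation function (assumed)

def homogenized_particle_filter(dY, dt, n_particles=1000, x0=1.0):
    """Approximate pi^0_t(psi) for psi(x) = x from observation increments dY.
    Particles follow the reduced dynamics; weights use the Gaussian likelihood
    of each increment, i.e. a discrete-time analogue of the change of measure."""
    particles = np.full(n_particles, x0)
    estimates = []
    for increment in dY:
        # propagate with the homogenized dynamics (Euler step)
        noise = rng.standard_normal(n_particles)
        particles = particles + b1_bar(particles) * dt \
                    + sigma1_bar(particles) * np.sqrt(dt) * noise
        # reweight: unnormalized density of increment ~ N(h_bar(x) dt, dt)
        log_w = -0.5 * (increment - h_bar(particles) * dt) ** 2 / dt
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        estimates.append(np.sum(w * particles))      # approximates pi^0_t(x)
        # multinomial resampling
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)

# Usage with synthetic observation increments (placeholder data):
dt = 0.01
dY = h_bar(1.0) * dt + np.sqrt(dt) * rng.standard_normal(200)
pi0_estimates = homogenized_particle_filter(dY, dt)
```

Theorem 5.3 below suggests that, for small ε, such a reduced-order filter driven by the actual observations should be close in the weak sense to the full filter π^ε_t, while only requiring simulation of the low-dimensional averaged dynamics.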

5. Convergence of nonlinear filterings

In this section, we prove that π^ε_t converges weakly to π^0_t as ε → 0 for any t ∈ [0,T]. First, let us prove two key lemmas.

Lemma 5.1. Suppose that h, λ satisfy Assumptions 4-5. Then (ρ^0_t(1))^{-1} < ∞ P-a.s. for any t ∈ [0,T].

Proof. By the Hölder inequality, it holds that
\[
E(\rho^0_t(1))^{-1} = E^{P^\varepsilon}\big[(\rho^0_t(1))^{-1}\Lambda^\varepsilon_T\big]
\le \big(E^{P^\varepsilon}(\rho^0_t(1))^{-2}\big)^{1/2}\big(E^{P^\varepsilon}(\Lambda^\varepsilon_T)^2\big)^{1/2}.
\]
Let us first estimate E^{P^ε}(ρ^0_t(1))^{-2}. Note that ρ^0_t(1) = E^{P^ε}[Λ̄_t|F^{Y^ε}_t] and x ↦ x^{-2} is convex. Thus, we know by the Jensen inequality that
\[
E^{P^\varepsilon}(\rho^0_t(1))^{-2} = E^{P^\varepsilon}\big(E^{P^\varepsilon}[\bar\Lambda_t|\mathcal F^{Y^\varepsilon}_t]\big)^{-2}
\le E^{P^\varepsilon}\big[E^{P^\varepsilon}[(\bar\Lambda_t)^{-2}|\mathcal F^{Y^\varepsilon}_t]\big] = E^{P^\varepsilon}(\bar\Lambda_t)^{-2}.
\]
So we estimate E^{P^ε}(Λ̄_t)^{-2}. Applying the Itô formula to (Λ̄_t)^{-1}, one obtains
\[
\begin{aligned}
(\bar\Lambda_t)^{-1} = 1 &+ \int_0^t (\bar\Lambda_s)^{-1}|\bar h(X^0_s)|^2\,ds + \int_0^t\!\!\int_{U_3} (\bar\Lambda_s)^{-1}\frac{(1-\lambda(s,X^0_s,u))^2}{\lambda(s,X^0_s,u)}\,\nu_3(du)\,ds \\
&- \int_0^t (\bar\Lambda_s)^{-1}\bar h(X^0_s)^i\,d\bar B^i_s + \int_0^t\!\!\int_{U_3} (\bar\Lambda_s)^{-1}\frac{1-\lambda(s,X^0_s,u)}{\lambda(s,X^0_s,u)}\,\tilde N_\lambda(ds,du).
\end{aligned}
\]
Furthermore, it follows from the Hölder inequality and the Itô isometry that
\[
\begin{aligned}
E^{P^\varepsilon}(\bar\Lambda_t)^{-2} &\le 5 + 5E^{P^\varepsilon}\Big(\int_0^t (\bar\Lambda_s)^{-1}|\bar h(X^0_s)|^2\,ds\Big)^2
+ 5E^{P^\varepsilon}\Big(\int_0^t (\bar\Lambda_s)^{-1}\bar h(X^0_s)^i\,d\bar B^i_s\Big)^2 \\
&\quad + 5E^{P^\varepsilon}\Big(\int_0^t\!\!\int_{U_3} (\bar\Lambda_s)^{-1}\frac{(1-\lambda(s,X^0_s,u))^2}{\lambda(s,X^0_s,u)}\,\nu_3(du)\,ds\Big)^2
+ 5E^{P^\varepsilon}\Big(\int_0^t\!\!\int_{U_3} (\bar\Lambda_s)^{-1}\frac{1-\lambda(s,X^0_s,u)}{\lambda(s,X^0_s,u)}\,\tilde N_\lambda(ds,du)\Big)^2 \\
&\le 5 + 5T\,E^{P^\varepsilon}\int_0^t (\bar\Lambda_s)^{-2}|\bar h(X^0_s)|^4\,ds + 5E^{P^\varepsilon}\int_0^t (\bar\Lambda_s)^{-2}|\bar h(X^0_s)|^2\,ds \\
&\quad + 5T\,E^{P^\varepsilon}\int_0^t (\bar\Lambda_s)^{-2}\Big(\int_{U_3}\frac{(1-\lambda(s,X^0_s,u))^2}{\lambda(s,X^0_s,u)}\,\nu_3(du)\Big)^2\,ds
+ 5E^{P^\varepsilon}\int_0^t\!\!\int_{U_3} (\bar\Lambda_s)^{-2}\frac{(1-\lambda(s,X^0_s,u))^2}{\lambda(s,X^0_s,u)^2}\,\nu_3(du)\,ds \\
&\le 5 + C\int_0^t E^{P^\varepsilon}(\bar\Lambda_s)^{-2}\,ds,
\end{aligned}
\]
where the last step is based on Assumptions 4-5. The Gronwall inequality then gives E^{P^ε}(Λ̄_t)^{-2} < ∞.

Next, we deal with E^{P^ε}(Λ^ε_T)^2. Applying the Itô formula to Λ^ε_t, we obtain
\[
\Lambda^\varepsilon_t = 1 + \int_0^t \Lambda^\varepsilon_s h(X^\varepsilon_s, Z^\varepsilon_s)^i\,d\bar B^i_s + \int_0^t\!\!\int_{U_3} \Lambda^\varepsilon_{s-}(\lambda(s,X^\varepsilon_{s-},u)-1)\,\tilde N_\lambda(ds,du).
\tag{6}
\]
Thus, by a deduction similar to that for E^{P^ε}(Λ̄_t)^{-2}, it holds that E^{P^ε}(Λ^ε_T)^2 < ∞. In conclusion, E(ρ^0_t(1))^{-1} < ∞. The proof is completed. □

Lemma 5.2. Under Assumptions 4-5, {ρ^ε_t, t ∈ [0,T]} is relatively weakly compact in D([0,T], M(R^n)).

Proof. First of all, we show that ρ^ε_t ∈ M(R^n). Note that ρ^ε_t(R^n) = ρ^ε_t(1_{R^n}) = ρ^ε_t(1) = E^{P^ε}[Λ^ε_t|F^{Y^ε}_t]. Then, by the Hölder inequality, it holds that
\[
E\rho^\varepsilon_t(R^n) = E\rho^\varepsilon_t(1) = E^{P^\varepsilon}[\rho^\varepsilon_t(1)\Lambda^\varepsilon_T]
\le\big(E^{P^\varepsilon}(\rho^\varepsilon_t(1))^2\big)^{1/2}\big(E^{P^\varepsilon}(\Lambda^\varepsilon_T)^2\big)^{1/2}.
\]
On the one hand, the Jensen inequality gives
\[
E^{P^\varepsilon}(\rho^\varepsilon_t(1))^2 = E^{P^\varepsilon}\big(E^{P^\varepsilon}[\Lambda^\varepsilon_t|\mathcal F^{Y^\varepsilon}_t]\big)^2
\le E^{P^\varepsilon}\big[E^{P^\varepsilon}[(\Lambda^\varepsilon_t)^2|\mathcal F^{Y^\varepsilon}_t]\big] = E^{P^\varepsilon}(\Lambda^\varepsilon_t)^2.
\]
By the proof of Lemma 5.1, one gets E^{P^ε}(ρ^ε_t(1))^2 < ∞. On the other hand, it follows from the proof of Lemma 5.1 that E^{P^ε}(Λ^ε_T)^2 < ∞. Thus, ρ^ε_t(R^n) < ∞ a.s. P. The other measure properties of ρ^ε_t are easy to verify by means of properties of conditional expectations.

Next, we derive the equation satisfied by ρ^ε_t. For ψ ∈ C_b^2(R^n), applying the Itô formula to ψ(X^ε_t), we have
\[
\psi(X^\varepsilon_t) = \psi(X^\varepsilon_0) + \int_0^t (L^{X^\varepsilon}\psi)(X^\varepsilon_s, Z^\varepsilon_s)\,ds + \int_0^t (\nabla\psi)(X^\varepsilon_s)\sigma_1(X^\varepsilon_s, Z^\varepsilon_s)\,dV_s
+ \int_0^t\!\!\int_{U_1}\big[\psi(X^\varepsilon_{s-} + f_1(X^\varepsilon_{s-}, u)) - \psi(X^\varepsilon_{s-})\big]\,\tilde N_{p_1}(ds,du).
\]
Note that Λ^ε_t satisfies Eq. (6). So it follows from the Itô formula that
\[
\begin{aligned}
\psi(X^\varepsilon_t)\Lambda^\varepsilon_t &= \psi(X^\varepsilon_0)
+ \int_0^t \psi(X^\varepsilon_s)\Lambda^\varepsilon_s h(X^\varepsilon_s, Z^\varepsilon_s)^i\,d\bar B^i_s
+ \int_0^t\!\!\int_{U_3}\psi(X^\varepsilon_{s-})\Lambda^\varepsilon_{s-}(\lambda(s, X^\varepsilon_{s-}, u)-1)\,\tilde N_\lambda(ds,du) \\
&\quad + \int_0^t \Lambda^\varepsilon_s (L^{X^\varepsilon}\psi)(X^\varepsilon_s, Z^\varepsilon_s)\,ds
+ \int_0^t \Lambda^\varepsilon_s(\nabla\psi)(X^\varepsilon_s)\sigma_1(X^\varepsilon_s, Z^\varepsilon_s)\,dV_s \\
&\quad + \int_0^t\!\!\int_{U_1}\Lambda^\varepsilon_{s-}\big[\psi(X^\varepsilon_{s-} + f_1(X^\varepsilon_{s-}, u)) - \psi(X^\varepsilon_{s-})\big]\,\tilde N_{p_1}(ds,du).
\end{aligned}
\]
Taking the conditional expectation with respect to F^{Y^ε}_t under P^ε on both sides of the above equality, one obtains
\[
\begin{aligned}
E^{P^\varepsilon}[\psi(X^\varepsilon_t)\Lambda^\varepsilon_t|\mathcal F^{Y^\varepsilon}_t]
&= E^{P^\varepsilon}[\psi(X^\varepsilon_0)|\mathcal F^{Y^\varepsilon}_0]
+ \int_0^t E^{P^\varepsilon}[\psi(X^\varepsilon_s)\Lambda^\varepsilon_s h(X^\varepsilon_s, Z^\varepsilon_s)^i|\mathcal F^{Y^\varepsilon}_s]\,d\bar B^i_s \\
&\quad + \int_0^t\!\!\int_{U_3}E^{P^\varepsilon}[\psi(X^\varepsilon_{s-})\Lambda^\varepsilon_{s-}(\lambda(s, X^\varepsilon_{s-}, u)-1)|\mathcal F^{Y^\varepsilon}_s]\,\tilde N_\lambda(ds,du)
+ \int_0^t E^{P^\varepsilon}[\Lambda^\varepsilon_s (L^{X^\varepsilon}\psi)(X^\varepsilon_s, Z^\varepsilon_s)|\mathcal F^{Y^\varepsilon}_s]\,ds,
\end{aligned}
\]
i.e.
\[
\rho^\varepsilon_t(\psi) = \rho^\varepsilon_0(\psi) + \int_0^t \rho^\varepsilon_s\big((L^{X^\varepsilon}\psi)(\cdot, Z^\varepsilon_s)\big)\,ds
+ \int_0^t \rho^\varepsilon_s\big(\psi h(\cdot, Z^\varepsilon_s)^i\big)\,d\bar B^i_s
+ \int_0^t\!\!\int_{U_3}\rho^\varepsilon_s\big(\psi(\lambda(s,\cdot,u)-1)\big)\,\tilde N_\lambda(ds,du).
\tag{7}
\]
For the detailed deduction of the above equation, please refer to the proof of Theorem 3.3 in [11].

Let τ be any (F_t)_{t≥0}-stopping time bounded by T. For any δ > 0, we estimate E|ρ^ε_{τ+δ}(ψ) − ρ^ε_τ(ψ)|. It follows from the Hölder inequality that
\[
E|\rho^\varepsilon_{\tau+\delta}(\psi) - \rho^\varepsilon_\tau(\psi)| = E^{P^\varepsilon}\big[|\rho^\varepsilon_{\tau+\delta}(\psi) - \rho^\varepsilon_\tau(\psi)|\Lambda^\varepsilon_T\big]
\le\big(E^{P^\varepsilon}|\rho^\varepsilon_{\tau+\delta}(\psi) - \rho^\varepsilon_\tau(\psi)|^2\big)^{1/2}\big(E^{P^\varepsilon}(\Lambda^\varepsilon_T)^2\big)^{1/2}.
\]
Since E^{P^ε}(Λ^ε_T)^2 < C, which has been proved in Lemma 5.1, we only need to consider E^{P^ε}|ρ^ε_{τ+δ}(ψ) − ρ^ε_τ(ψ)|^2. The Hölder inequality and the Itô isometry give
\[
\begin{aligned}
E^{P^\varepsilon}|\rho^\varepsilon_{\tau+\delta}(\psi) - \rho^\varepsilon_\tau(\psi)|^2
&\le 3E^{P^\varepsilon}\Big(\int_\tau^{\tau+\delta}\rho^\varepsilon_s\big((L^{X^\varepsilon}\psi)(\cdot, Z^\varepsilon_s)\big)\,ds\Big)^2
+ 3E^{P^\varepsilon}\Big(\int_\tau^{\tau+\delta}\rho^\varepsilon_s\big(\psi h(\cdot, Z^\varepsilon_s)^i\big)\,d\bar B^i_s\Big)^2 \\
&\quad + 3E^{P^\varepsilon}\Big(\int_\tau^{\tau+\delta}\!\!\int_{U_3}\rho^\varepsilon_s\big(\psi(\lambda(s,\cdot,u)-1)\big)\,\tilde N_\lambda(ds,du)\Big)^2 \\
&\le 3\delta\,E^{P^\varepsilon}\int_\tau^{\tau+\delta}\Big(\rho^\varepsilon_s\big((L^{X^\varepsilon}\psi)(\cdot, Z^\varepsilon_s)\big)\Big)^2\,ds
+ 3E^{P^\varepsilon}\int_\tau^{\tau+\delta}\Big(\rho^\varepsilon_s\big(\psi h(\cdot, Z^\varepsilon_s)^i\big)\Big)^2\,ds \\
&\quad + 3E^{P^\varepsilon}\int_\tau^{\tau+\delta}\!\!\int_{U_3}\Big(\rho^\varepsilon_s\big(\psi(\lambda(s,\cdot,u)-1)\big)\Big)^2\,\nu_3(du)\,ds \\
&=: I_1 + I_2 + I_3.
\end{aligned}
\]
First, we deal with I_1. By the Jensen inequality and (H2_{b_1,σ_1,f_1}), it holds that
\[
\begin{aligned}
I_1 &= 3\delta\,E^{P^\varepsilon}\int_\tau^{\tau+\delta}\big|E^{P^\varepsilon}[\Lambda^\varepsilon_s (L^{X^\varepsilon}\psi)(X^\varepsilon_s, Z^\varepsilon_s)|\mathcal F^{Y^\varepsilon}_s]\big|^2\,ds
\le 3\delta\,E^{P^\varepsilon}\int_\tau^{\tau+\delta}E^{P^\varepsilon}\big[(\Lambda^\varepsilon_s)^2\big|(L^{X^\varepsilon}\psi)(X^\varepsilon_s, Z^\varepsilon_s)\big|^2\big|\mathcal F^{Y^\varepsilon}_s\big]\,ds \\
&\le 3C\delta\,E^{P^\varepsilon}\int_\tau^{\tau+\delta}E^{P^\varepsilon}[(\Lambda^\varepsilon_s)^2|\mathcal F^{Y^\varepsilon}_s]\,ds
\le 3C\delta\,E^{P^\varepsilon}\int_0^{\delta}E^{P^\varepsilon}[(\Lambda^\varepsilon_{\tau+s})^2|\mathcal F^{Y^\varepsilon}_{\tau+s}]\,ds \\
&\le 3C\delta\int_0^{\delta}E^{P^\varepsilon}\big[E^{P^\varepsilon}[(\Lambda^\varepsilon_{\tau+s})^2|\mathcal F^{Y^\varepsilon}_{\tau+s}]\big]\,ds
\le 3C\delta\int_0^{\delta}E^{P^\varepsilon}[(\Lambda^\varepsilon_{\tau+s})^2]\,ds
\le 3C\delta^2,
\end{aligned}
\]
where C is independent of ε and δ. By the same deduction as for I_1, we get I_2 + I_3 ≤ Cδ. Thus,
\[
\lim_{\delta\downarrow 0}\limsup_{\varepsilon\downarrow 0}\sup_{\tau\le T}E|\rho^\varepsilon_{\tau+\delta}(\psi) - \rho^\varepsilon_\tau(\psi)| = 0.
\tag{8}
\]
Based on a similar calculation, it holds that
\[
\sup_{\varepsilon,\, t\le T}E|\rho^\varepsilon_t(\psi)| < \infty.
\tag{9}
\]

So, combining (9) with (8), we know from Theorem 5.1 in [4] that {ρ^ε_t(ψ), t ∈ [0,T]} is relatively weakly compact in D([0,T], R). Moreover, Theorem 6.2 in [4] allows us to conclude that {ρ^ε_t, t ∈ [0,T]} is relatively weakly compact in D([0,T], M(R^n)). □

To obtain the convergence of π^ε_t to π^0_t as ε → 0, we need one more assumption:

Assumption 6. {Z^ε_t, t ∈ [0,T]} is tight.

We are now in a position to state the main result of this section.

Theorem 5.3. Under Assumptions 1-6, π^ε_t converges weakly to π^0_t as ε → 0 for any t ∈ [0,T].

Proof. For ψ ∈ C_b^2(R^n), it holds that
\[
\pi^\varepsilon_t(\psi) - \pi^0_t(\psi) = \frac{\rho^\varepsilon_t(\psi) - \rho^0_t(\psi)}{\rho^0_t(1)} - \pi^\varepsilon_t(\psi)\,\frac{\rho^\varepsilon_t(1) - \rho^0_t(1)}{\rho^0_t(1)}.
\]

Thus, in order to prove that π^ε_t(ψ) − π^0_t(ψ) converges weakly to 0, by Lemma 5.1 and the conditional expectation property of π^ε_t(ψ), we only need to show that ρ^ε_t(ψ) converges weakly to ρ^0_t(ψ) as ε → 0.

On the one hand, we compute the weak limit of ρ^ε_t(ψ) as ε → 0. By Lemma 5.2, there exist a weakly convergent subsequence {ρ^{ε_k}_t, k ∈ N} and a measure-valued process ρ̄_t such that ρ^{ε_k}_t(ψ) converges weakly to ρ̄_t(ψ) as k → ∞. To compare ρ̄_t(ψ) with ρ^0_t(ψ), we deduce the equation which ρ̄_t(ψ) satisfies. Note that ρ^ε_t(ψ) solves Eq. (7), so we consider the weak limits of the three integrals in Eq. (7).

For the first integral, it holds that
\[
\begin{aligned}
\rho^{\varepsilon_k}_s\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_s)\big) - \bar\rho_s(\bar L\psi)
&= \Big[\rho^{\varepsilon_k}_s\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_s)\big) - \rho^{\varepsilon_k}_s(\bar L\psi)\Big]
+ \Big[\rho^{\varepsilon_k}_s(\bar L\psi) - \bar\rho_s(\bar L\psi)\Big] \\
&= \rho^{\varepsilon_k}_s\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_s) - \bar L\psi\big)
+ \Big[\rho^{\varepsilon_k}_s(\bar L\psi) - \bar\rho_s(\bar L\psi)\Big]
=: I_1 + I_2.
\end{aligned}
\]
For I_1, one knows that
\[
\begin{aligned}
I_1 &= E^{P^{\varepsilon_k}}\Big[\Lambda^{\varepsilon_k}_s\Big((L^{X^{\varepsilon_k}}\psi)(X^{\varepsilon_k}_s, Z^{\varepsilon_k}_s) - (\bar L\psi)(X^{\varepsilon_k}_s)\Big)\,\Big|\,\mathcal F^{Y^{\varepsilon_k}}_s\Big] \\
&= E^{P^{\varepsilon_k}}\Big[\Lambda^{\varepsilon_k}_s\frac{\partial\psi}{\partial x_i}(X^{\varepsilon_k}_s)\Big(b^i_1(X^{\varepsilon_k}_s, Z^{\varepsilon_k}_s) - \bar b^i_1(X^{\varepsilon_k}_s)\Big)\,\Big|\,\mathcal F^{Y^{\varepsilon_k}}_s\Big] \\
&\quad + \frac12\,E^{P^{\varepsilon_k}}\Big[\Lambda^{\varepsilon_k}_s\frac{\partial^2\psi}{\partial x_i\partial x_j}(X^{\varepsilon_k}_s)\Big((\sigma_1\sigma_1^T)^{ij}(X^{\varepsilon_k}_s, Z^{\varepsilon_k}_s) - (\bar\sigma_1\bar\sigma_1^T)^{ij}(X^{\varepsilon_k}_s)\Big)\,\Big|\,\mathcal F^{Y^{\varepsilon_k}}_s\Big]
=: I_{11} + I_{12}.
\end{aligned}
\]
Let us deal with I_{11}. Since
\[
\lim_{n\to\infty}E^{P^{\varepsilon_k}}\Big[\sum_{j=0}^{n-1}\Lambda^{\varepsilon_k}_{(j+1)t/n}\frac{\partial\psi}{\partial x_i}(X^{\varepsilon_k}_{(j+1)t/n})\Big(b^i_1(X^{\varepsilon_k}_{(j+1)t/n}, Z^{\varepsilon_k}_s) - \bar b^i_1(X^{\varepsilon_k}_{(j+1)t/n})\Big)I_{(jt/n,(j+1)t/n]}(s)\,\Big|\,\mathcal F^{Y^{\varepsilon_k}}_s\Big]
= E^{P^{\varepsilon_k}}\Big[\Lambda^{\varepsilon_k}_s\frac{\partial\psi}{\partial x_i}(X^{\varepsilon_k}_s)\Big(b^i_1(X^{\varepsilon_k}_s, Z^{\varepsilon_k}_s) - \bar b^i_1(X^{\varepsilon_k}_s)\Big)\,\Big|\,\mathcal F^{Y^{\varepsilon_k}}_s\Big], \quad a.s.\ P,
\]
we only need to consider E^{P^{ε_k}}[Λ^{ε_k}_{(j+1)t/n}(∂ψ/∂x_i)(X^{ε_k}_{(j+1)t/n})(b^i_1(X^{ε_k}_{(j+1)t/n}, Z^{ε_k}_s) − b̄^i_1(X^{ε_k}_{(j+1)t/n}))|F^{Y^{ε_k}}_s] for s ∈ (jt/n, (j+1)t/n]. Based on the independence of X^{ε_k}, Z^{ε_k} and Y^{ε_k} under P^{ε_k}, it holds that
\[
\begin{aligned}
&E^{P^{\varepsilon_k}}\Big[\Lambda^{\varepsilon_k}_{(j+1)t/n}\frac{\partial\psi}{\partial x_i}(X^{\varepsilon_k}_{(j+1)t/n})\Big(b^i_1(X^{\varepsilon_k}_{(j+1)t/n}, Z^{\varepsilon_k}_s) - \bar b^i_1(X^{\varepsilon_k}_{(j+1)t/n})\Big)\,\Big|\,\mathcal F^{Y^{\varepsilon_k}}_s\Big] \\
&= E^{P^{\varepsilon_k}}\Big[\Lambda^{\varepsilon_k}_{(j+1)t/n}\,E^{P^{\varepsilon_k}}\Big[\frac{\partial\psi}{\partial x_i}(X^{\varepsilon_k}_{(j+1)t/n})\Big(b^i_1(X^{\varepsilon_k}_{(j+1)t/n}, Z^{\varepsilon_k}_s) - \bar b^i_1(X^{\varepsilon_k}_{(j+1)t/n})\Big)\,\Big|\,X^{\varepsilon_k}_{s-\varepsilon_k l}, Z^{\varepsilon_k}_{s-\varepsilon_k l}\Big]\,\Big|\,\mathcal F^{Y^{\varepsilon_k}}_s\Big],
\end{aligned}
\]
where l is a positive integer such that s − ε_k l > 0. We then compute E^{P^{ε_k}}[(∂ψ/∂x_i)(X^{ε_k}_{(j+1)t/n})(b^i_1(X^{ε_k}_{(j+1)t/n}, Z^{ε_k}_s) − b̄^i_1(X^{ε_k}_{(j+1)t/n}))|X^{ε_k}_{s−ε_k l}, Z^{ε_k}_{s−ε_k l}]. On one side, it is easy to see that
\[
\begin{aligned}
&E^{P^{\varepsilon_k}}\Big[\frac{\partial\psi}{\partial x_i}(X^{\varepsilon_k}_{(j+1)t/n})\Big(b^i_1(X^{\varepsilon_k}_{(j+1)t/n}, Z^{\varepsilon_k}_s) - \bar b^i_1(X^{\varepsilon_k}_{(j+1)t/n})\Big)\,\Big|\,X^{\varepsilon_k}_{s-\varepsilon_k l}, Z^{\varepsilon_k}_{s-\varepsilon_k l}\Big]
- \int_{R^m}\frac{\partial\psi}{\partial x_i}(X^{\varepsilon_k}_{(j+1)t/n})\Big(b^i_1(X^{\varepsilon_k}_{(j+1)t/n}, z) - \bar b^i_1(X^{\varepsilon_k}_{(j+1)t/n})\Big)\,\bar p(X^{\varepsilon_k}_s; dz) \\
&= \Bigg(E^{P^{\varepsilon_k}}\Big[\frac{\partial\psi}{\partial x_i}(X^{\varepsilon_k}_{(j+1)t/n})\Big(b^i_1(X^{\varepsilon_k}_{(j+1)t/n}, Z^{\varepsilon_k}_s) - \bar b^i_1(X^{\varepsilon_k}_{(j+1)t/n})\Big)\,\Big|\,X^{\varepsilon_k}_{s-\varepsilon_k l}, Z^{\varepsilon_k}_{s-\varepsilon_k l}\Big]
- \int_{R^m}\frac{\partial\psi}{\partial x_i}(X^{\varepsilon_k}_{(j+1)t/n})\Big(b^i_1(X^{\varepsilon_k}_{(j+1)t/n}, z) - \bar b^i_1(X^{\varepsilon_k}_{(j+1)t/n})\Big)\,p(X^{\varepsilon_k}_{s-\varepsilon_k l}; Z^{\varepsilon_k}_{s-\varepsilon_k l}, l, dz)\Bigg) \\
&\quad + \Bigg(\int_{R^m}\frac{\partial\psi}{\partial x_i}(X^{\varepsilon_k}_{(j+1)t/n})\Big(b^i_1(X^{\varepsilon_k}_{(j+1)t/n}, z) - \bar b^i_1(X^{\varepsilon_k}_{(j+1)t/n})\Big)\,p(X^{\varepsilon_k}_{s-\varepsilon_k l}; Z^{\varepsilon_k}_{s-\varepsilon_k l}, l, dz)
- \int_{R^m}\frac{\partial\psi}{\partial x_i}(X^{\varepsilon_k}_{(j+1)t/n})\Big(b^i_1(X^{\varepsilon_k}_{(j+1)t/n}, z) - \bar b^i_1(X^{\varepsilon_k}_{(j+1)t/n})\Big)\,\bar p(X^{\varepsilon_k}_s; dz)\Bigg) \\
&=: I_{111} + I_{112}.
\end{aligned}
\]
Based on the tightness of {(X^ε_t, Z^ε_t), t ∈ [0,T]} and (H1_{b_1,σ_1,f_1}), it holds that lim_{k→∞} I_{111} = 0. By the definition of p(X^{ε_k}_{s−ε_k l}; Z^{ε_k}_{s−ε_k l}, l, dz) and p̄(X^{ε_k}_s; dz), we know that lim_{l→∞} I_{112} = 0. On the other side, it follows from the dominated convergence theorem that
\[
\lim_{n\to\infty}\int_{R^m}\frac{\partial\psi}{\partial x_i}(X^{\varepsilon_k}_{(j+1)t/n})\Big(b^i_1(X^{\varepsilon_k}_{(j+1)t/n}, z) - \bar b^i_1(X^{\varepsilon_k}_{(j+1)t/n})\Big)\,\bar p(X^{\varepsilon_k}_s; dz)
= \int_{R^m}\frac{\partial\psi}{\partial x_i}(X^{\varepsilon_k}_s)\Big(b^i_1(X^{\varepsilon_k}_s, z) - \bar b^i_1(X^{\varepsilon_k}_s)\Big)\,\bar p(X^{\varepsilon_k}_s; dz) = 0.
\]
Thus, the dominated convergence theorem allows us to conclude that lim_{k→∞} I_{11} = 0. By the same deduction as for I_{11}, I_{12} goes to zero a.s. as k → ∞. Thus, I_1 converges to zero as k → ∞, which together with the weak convergence of I_2 to zero as k → ∞ yields that ρ^{ε_k}_s((L^{X^{ε_k}}ψ)(·, Z^{ε_k}_s)) converges weakly to ρ̄_s(L̄ψ) as k → ∞.

Besides, set
\[
\Big(\rho^{\varepsilon_k}_s\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_s)\big)\Big)^{(n)} := \sum_{j=0}^{n-1}\rho^{\varepsilon_k}_{(j+1)t/n}\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_{(j+1)t/n})\big)\,I_{(jt/n,(j+1)t/n]}(s),
\qquad
\big(\bar\rho_s(\bar L\psi)\big)^{(n)} := \sum_{j=0}^{n-1}\bar\rho_{(j+1)t/n}(\bar L\psi)\,I_{(jt/n,(j+1)t/n]}(s),
\]
and then
\[
\lim_{n\to\infty}\Big(\rho^{\varepsilon_k}_s\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_s)\big)\Big)^{(n)} = \rho^{\varepsilon_k}_s\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_s)\big), \qquad
\lim_{n\to\infty}\big(\bar\rho_s(\bar L\psi)\big)^{(n)} = \bar\rho_s(\bar L\psi), \qquad a.s.\ P.
\]
Moreover, by the dominated convergence theorem, it holds that
\[
\lim_{n\to\infty}E^{P^{\varepsilon_k}}\Big(\int_0^t\Big|\Big(\rho^{\varepsilon_k}_s\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_s)\big)\Big)^{(n)} - \rho^{\varepsilon_k}_s\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_s)\big)\Big|^2\,ds\Big) = 0,
\qquad
\lim_{n\to\infty}E^{P^{\varepsilon_k}}\Big(\int_0^t\Big|\big(\bar\rho_s(\bar L\psi)\big)^{(n)} - \bar\rho_s(\bar L\psi)\Big|^2\,ds\Big) = 0.
\]
So the Hölder inequality allows us to obtain
\[
\lim_{n\to\infty}E^{P^{\varepsilon_k}}\Big|\int_0^t\Big(\rho^{\varepsilon_k}_s\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_s)\big)\Big)^{(n)}\,ds - \int_0^t\rho^{\varepsilon_k}_s\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_s)\big)\,ds\Big|^2 = 0,
\qquad
\lim_{n\to\infty}E^{P^{\varepsilon_k}}\Big|\int_0^t\big(\bar\rho_s(\bar L\psi)\big)^{(n)}\,ds - \int_0^t\bar\rho_s(\bar L\psi)\,ds\Big|^2 = 0.
\]
From this, it follows that
\[
\begin{aligned}
&\int_0^t\rho^{\varepsilon_k}_s\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_s)\big)\,ds - \int_0^t\bar\rho_s(\bar L\psi)\,ds \\
&= \Big[\int_0^t\rho^{\varepsilon_k}_s\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_s)\big)\,ds - \int_0^t\Big(\rho^{\varepsilon_k}_s\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_s)\big)\Big)^{(n)}\,ds\Big]
+ \Big[\int_0^t\Big(\rho^{\varepsilon_k}_s\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_s)\big)\Big)^{(n)}\,ds - \int_0^t\big(\bar\rho_s(\bar L\psi)\big)^{(n)}\,ds\Big] \\
&\quad + \Big[\int_0^t\big(\bar\rho_s(\bar L\psi)\big)^{(n)}\,ds - \int_0^t\bar\rho_s(\bar L\psi)\,ds\Big] \\
&= \Big[\int_0^t\rho^{\varepsilon_k}_s\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_s)\big)\,ds - \int_0^t\Big(\rho^{\varepsilon_k}_s\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_s)\big)\Big)^{(n)}\,ds\Big] \\
&\quad + \sum_{j=0}^{n-1}\Big[\rho^{\varepsilon_k}_{(j+1)t/n}\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_{(j+1)t/n})\big) - \bar\rho_{(j+1)t/n}(\bar L\psi)\Big]\big((j+1)t/n - jt/n\big)
+ \Big[\int_0^t\big(\bar\rho_s(\bar L\psi)\big)^{(n)}\,ds - \int_0^t\bar\rho_s(\bar L\psi)\,ds\Big] \\
&\xrightarrow{\ w.\ } 0,
\end{aligned}
\]
i.e.
\[
\int_0^t\rho^{\varepsilon_k}_s\big((L^{X^{\varepsilon_k}}\psi)(\cdot, Z^{\varepsilon_k}_s)\big)\,ds \xrightarrow{\ w.\ } \int_0^t\bar\rho_s(\bar L\psi)\,ds.
\tag{10}
\]
In the following, we treat the second integral in Eq. (7). By a deduction similar to the above, ρ^{ε_k}_s(ψh(·, Z^{ε_k}_s)^i) converges weakly to ρ̄_s(ψh̄^i) as k → ∞. Besides, define
\[
\Big(\rho^{\varepsilon_k}_s\big(\psi h(\cdot, Z^{\varepsilon_k}_s)^i\big)\Big)^{(n)} := \sum_{j=0}^{n-1}\rho^{\varepsilon_k}_{(j+1)t/n}\big(\psi h(\cdot, Z^{\varepsilon_k}_{(j+1)t/n})^i\big)\,I_{(jt/n,(j+1)t/n]}(s),
\qquad
\big(\bar\rho_s(\psi\bar h^i)\big)^{(n)} := \sum_{j=0}^{n-1}\bar\rho_{(j+1)t/n}(\psi\bar h^i)\,I_{(jt/n,(j+1)t/n]}(s),
\]
and then
\[
\lim_{n\to\infty}\Big(\rho^{\varepsilon_k}_s\big(\psi h(\cdot, Z^{\varepsilon_k}_s)^i\big)\Big)^{(n)} = \rho^{\varepsilon_k}_s\big(\psi h(\cdot, Z^{\varepsilon_k}_s)^i\big), \qquad
\lim_{n\to\infty}\big(\bar\rho_s(\psi\bar h^i)\big)^{(n)} = \bar\rho_s(\psi\bar h^i), \qquad a.s.\ P.
\]
Furthermore, it follows from the dominated convergence theorem that
\[
\lim_{n\to\infty}E^{P^{\varepsilon_k}}\Big(\int_0^t\Big|\Big(\rho^{\varepsilon_k}_s\big(\psi h(\cdot, Z^{\varepsilon_k}_s)^i\big)\Big)^{(n)} - \rho^{\varepsilon_k}_s\big(\psi h(\cdot, Z^{\varepsilon_k}_s)^i\big)\Big|^2\,ds\Big) = 0,
\qquad
\lim_{n\to\infty}E^{P^{\varepsilon_k}}\Big(\int_0^t\Big|\big(\bar\rho_s(\psi\bar h^i)\big)^{(n)} - \bar\rho_s(\psi\bar h^i)\Big|^2\,ds\Big) = 0.
\]
Based on the Itô isometry, it holds that
\[
E^{P^{\varepsilon_k}}\Big|\int_0^t\Big(\rho^{\varepsilon_k}_s\big(\psi h(\cdot, Z^{\varepsilon_k}_s)^i\big)\Big)^{(n)}\,d\bar B^i_s - \int_0^t\rho^{\varepsilon_k}_s\big(\psi h(\cdot, Z^{\varepsilon_k}_s)^i\big)\,d\bar B^i_s\Big|^2
= \sum_{i=1}^{d}E^{P^{\varepsilon_k}}\Big(\int_0^t\Big|\Big(\rho^{\varepsilon_k}_s\big(\psi h(\cdot, Z^{\varepsilon_k}_s)^i\big)\Big)^{(n)} - \rho^{\varepsilon_k}_s\big(\psi h(\cdot, Z^{\varepsilon_k}_s)^i\big)\Big|^2\,ds\Big),
\]
\[
E^{P^{\varepsilon_k}}\Big|\int_0^t\big(\bar\rho_s(\psi\bar h^i)\big)^{(n)}\,d\bar B^i_s - \int_0^t\bar\rho_s(\psi\bar h^i)\,d\bar B^i_s\Big|^2
= \sum_{i=1}^{d}E^{P^{\varepsilon_k}}\Big(\int_0^t\Big|\big(\bar\rho_s(\psi\bar h^i)\big)^{(n)} - \bar\rho_s(\psi\bar h^i)\Big|^2\,ds\Big).
\]
Thus, ∫_0^t (ρ^{ε_k}_s(ψh(·,Z^{ε_k}_s)^i))^{(n)} dB̄^i_s and ∫_0^t (ρ̄_s(ψh̄^i))^{(n)} dB̄^i_s converge in mean square to ∫_0^t ρ^{ε_k}_s(ψh(·,Z^{ε_k}_s)^i) dB̄^i_s and ∫_0^t ρ̄_s(ψh̄^i) dB̄^i_s, respectively. Let us compute
\[
\begin{aligned}
&\int_0^t\rho^{\varepsilon_k}_s\big(\psi h(\cdot, Z^{\varepsilon_k}_s)^i\big)\,d\bar B^i_s - \int_0^t\bar\rho_s(\psi\bar h^i)\,d\bar B^i_s \\
&= \Big[\int_0^t\rho^{\varepsilon_k}_s\big(\psi h(\cdot, Z^{\varepsilon_k}_s)^i\big)\,d\bar B^i_s - \int_0^t\Big(\rho^{\varepsilon_k}_s\big(\psi h(\cdot, Z^{\varepsilon_k}_s)^i\big)\Big)^{(n)}\,d\bar B^i_s\Big]
+ \Big[\int_0^t\Big(\rho^{\varepsilon_k}_s\big(\psi h(\cdot, Z^{\varepsilon_k}_s)^i\big)\Big)^{(n)}\,d\bar B^i_s - \int_0^t\big(\bar\rho_s(\psi\bar h^i)\big)^{(n)}\,d\bar B^i_s\Big] \\
&\quad + \Big[\int_0^t\big(\bar\rho_s(\psi\bar h^i)\big)^{(n)}\,d\bar B^i_s - \int_0^t\bar\rho_s(\psi\bar h^i)\,d\bar B^i_s\Big] \\
&= \Big[\int_0^t\rho^{\varepsilon_k}_s\big(\psi h(\cdot, Z^{\varepsilon_k}_s)^i\big)\,d\bar B^i_s - \int_0^t\Big(\rho^{\varepsilon_k}_s\big(\psi h(\cdot, Z^{\varepsilon_k}_s)^i\big)\Big)^{(n)}\,d\bar B^i_s\Big] \\
&\quad + \sum_{j=0}^{n-1}\Big[\rho^{\varepsilon_k}_{(j+1)t/n}\big(\psi h(\cdot, Z^{\varepsilon_k}_{(j+1)t/n})^i\big) - \bar\rho_{(j+1)t/n}(\psi\bar h^i)\Big]\big(\bar B^i_{(j+1)t/n} - \bar B^i_{jt/n}\big)
+ \Big[\int_0^t\big(\bar\rho_s(\psi\bar h^i)\big)^{(n)}\,d\bar B^i_s - \int_0^t\bar\rho_s(\psi\bar h^i)\,d\bar B^i_s\Big] \\
&\xrightarrow{\ w.\ } 0,
\end{aligned}
\]
that is,
\[
\int_0^t\rho^{\varepsilon_k}_s\big(\psi h(\cdot, Z^{\varepsilon_k}_s)^i\big)\,d\bar B^i_s \xrightarrow{\ w.\ } \int_0^t\bar\rho_s(\psi\bar h^i)\,d\bar B^i_s.
\tag{11}
\]
For the third integral in Eq. (7), by a deduction similar to that for the second integral it holds that
\[
\int_0^t\!\!\int_{U_3}\rho^{\varepsilon_k}_s\big(\psi(\lambda(s,\cdot,u)-1)\big)\,\tilde N_\lambda(ds,du)
\xrightarrow{\ w.\ } \int_0^t\!\!\int_{U_3}\bar\rho_s\big(\psi(\lambda(s,\cdot,u)-1)\big)\,\tilde N_\lambda(ds,du).
\tag{12}
\]
Combining (12) with (10) and (11) and taking weak limits on both sides of (7) as k → ∞, we obtain
\[
\bar\rho_t(\psi) = \bar\rho_0(\psi) + \int_0^t\bar\rho_s(\bar L\psi)\,ds + \int_0^t\bar\rho_s(\psi\bar h^i)\,d\bar B^i_s
+ \int_0^t\!\!\int_{U_3}\bar\rho_s\big(\psi(\lambda(s,\cdot,u)-1)\big)\,\tilde N_\lambda(ds,du).
\]
On the other hand, we consider ρ^0_t(ψ). By a deduction similar to that for ρ^ε_t(ψ), it holds that
\[
\rho^0_t(\psi) = \rho^0_0(\psi) + \int_0^t\rho^0_s(\bar L\psi)\,ds + \int_0^t\rho^0_s(\psi\bar h^i)\,d\bar B^i_s
+ \int_0^t\!\!\int_{U_3}\rho^0_s\big(\psi(\lambda(s,\cdot,u)-1)\big)\,\tilde N_\lambda(ds,du).
\]
Thus, ρ̄ and ρ^0 solve the same equation
\[
\rho_t(\psi) = \rho_0(\psi) + \int_0^t\rho_s(\bar L\psi)\,ds + \int_0^t\rho_s(\psi\bar h^i)\,d\bar B^i_s
+ \int_0^t\!\!\int_{U_3}\rho_s\big(\psi(\lambda(s,\cdot,u)-1)\big)\,\tilde N_\lambda(ds,du).
\tag{13}
\]
Besides, based on Theorem 4.2 in [11], Eq. (13) has a unique solution. So, for t ∈ [0,T],
\[
\bar\rho_t(\psi) = \rho^0_t(\psi), \qquad a.s.\ P.
\]
That is, as k → ∞, ρ^{ε_k}_t(ψ) converges weakly to ρ^0_t(ψ). Since the weakly convergent subsequence was arbitrary and {ρ^ε_t, t ∈ [0,T]} is relatively weakly compact, the whole family ρ^ε_t(ψ) converges weakly to ρ^0_t(ψ) as ε → 0. The proof is completed.



References

[1] N. Ikeda and S. Watanabe: Stochastic Differential Equations and Diffusion Processes, 2nd ed., North-Holland/Kodansha, Amsterdam/Tokyo, 1989.
[2] P. Imkeller, N. S. Namachchivaya, N. Perkowski and H. C. Yeong: Dimensional reduction in nonlinear filtering: a homogenization approach, The Annals of Applied Probability, 23 (2013), 2290-2326.
[3] T. G. Kurtz: Approximation of Population Processes, Vol. 36 of CBMS-NSF Regional Conf. Series in Appl. Math., SIAM, Philadelphia, 1981.
[4] H. J. Kushner: Weak Convergence Methods and Singularly Perturbed Stochastic Control and Filtering Problems, Systems & Control: Foundations & Applications 3, Birkhäuser, Boston, 1990.
[5] V. M. Lucic and A. J. Heunis: Convergence of nonlinear filters for randomly perturbed dynamical systems, Appl. Math. Optim., 48 (2003), 93-128.
[6] A. Pazy: Semigroups of Linear Operators and Applications to Partial Differential Equations, Springer-Verlag, Berlin Heidelberg New York, 1983.
[7] J. H. Park, N. S. Namachchivaya and H. C. Yeong: Particle filters in a multiscale environment: homogenized hybrid particle filter, J. Appl. Mech., 78 (2011), 1-10.
[8] J. H. Park, R. B. Sowers and N. S. Namachchivaya: Dimensional reduction in nonlinear filtering, Nonlinearity, 23 (2010), 305-324.
[9] H. J. Qiao: Exponential ergodicity for SDEs with jumps and non-Lipschitz coefficients, Journal of Theoretical Probability, 27 (2014), 137-152.
[10] H. J. Qiao: Euler approximation for SDEs with jumps and non-Lipschitz coefficients, Osaka Journal of Mathematics, 51 (2014), 47-66.
[11] H. J. Qiao and J. Q. Duan: Nonlinear filtering of stochastic dynamical systems with Lévy noises, Advances in Applied Probability, 47 (2015), 902-918.
[12] D. W. Stroock and S. R. S. Varadhan: Diffusion processes with boundary conditions, Comm. Pure Appl. Math., 24 (1971), 147-225.
