EFFECTIVE FILTERING FOR MULTISCALE STOCHASTIC DYNAMICAL SYSTEMS IN HILBERT SPACES*

arXiv:1804.06204v1 [math.PR] 17 Apr 2018

HUIJIE QIAO
School of Mathematics, Southeast University, Nanjing, Jiangsu 211189, China
[email protected]

Abstract. In this paper, effective filtering for a type of slow-fast data assimilation system in Hilbert spaces is considered. Firstly, the system is reduced to a system on a random invariant manifold. Secondly, the nonlinear filtering of the original system is shown to be approximated by that of the reduced system.

AMS Subject Classification (2010): 60H15, 37D10, 70K70.
Keywords: Multiscale systems in Hilbert spaces, random invariant manifolds, nonlinear filtering, dimensional reduction.
*This work was supported by NSF of China (No. 11001051, 11371352).

1. Introduction

Fix a probability space $(\Omega,\mathcal{F},\mathbb{P})$ (specified in Section 3) and two separable Hilbert spaces $H_1$, $H_2$ with inner products $\langle\cdot,\cdot\rangle_{H_1}$, $\langle\cdot,\cdot\rangle_{H_2}$ and norms $\|\cdot\|_{H_1}$, $\|\cdot\|_{H_2}$, respectively. Consider the stochastic slow-fast system on $H_1\times H_2$
$$
\begin{cases}
\dot{x}^\varepsilon = A x^\varepsilon + F(x^\varepsilon,y^\varepsilon) + x^\varepsilon\dot{V},\\
\dot{y}^\varepsilon = \frac{1}{\varepsilon} B y^\varepsilon + \frac{1}{\varepsilon} G(x^\varepsilon,y^\varepsilon) + \frac{\sigma}{\sqrt{\varepsilon}}\dot{W},
\end{cases}
\qquad (1)
$$
where $A$, $B$ are linear operators on $H_1$, $H_2$, respectively, and the interaction functions $F:H_1\times H_2\to H_1$ and $G:H_1\times H_2\to H_2$ are Borel measurable. Moreover, $V$, $W$ are mutually independent standard one-dimensional Brownian motions, $x^\varepsilon\dot{V}$ is an Itô differential, $\sigma$ is a nonzero $H_2$-valued noise intensity, and $\varepsilon$ is a small positive parameter representing the ratio of the two time scales.

Systems of type (1) appear in many fields of engineering and science ([14]). For example, climate evolution consists of fast atmospheric and slow oceanic dynamics, and the state dynamics of electric power systems include fast- and slowly-varying elements. Research on systems (1) is extensive. When $H_1$, $H_2$ are finite dimensional, Wang-Roberts [12] and Schmalfuß-Schneider [15] studied invariant manifolds for systems (1). In [6], Fu-Liu-Duan studied invariant manifolds of systems (1) in general Hilbert spaces. Stochastic averaging for systems (1) is considered in [8, 16].

Fix a separable Hilbert space $H_3$ with inner product $\langle\cdot,\cdot\rangle_{H_3}$ and norm $\|\cdot\|_{H_3}$. The nonlinear filtering problem for the slow component $x^\varepsilon_t$ with respect to an $H_3$-valued observation process $\{r^\varepsilon_s, 0\leqslant s\leqslant t\}$ (see Subsection 4.1 for details) is to evaluate the `filter' $\mathbb{E}[\phi(x^\varepsilon_t)|\mathcal{R}^\varepsilon_t]$, where $\phi$ is a Borel measurable function such that $\mathbb{E}|\phi(x^\varepsilon_t)|<\infty$ for $t\in[0,T]$, and $\mathcal{R}^\varepsilon_t$ is the $\sigma$-algebra generated by $\{r^\varepsilon_s, 0\leqslant s\leqslant t\}$. When $H_1$, $H_2$, $H_3$ are finite dimensional, nonlinear filtering problems for multiscale systems have been widely studied.
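To make the time-scale separation in (1) concrete, here is a minimal numerical sketch (not part of the paper) of a scalar analogue of system (1), discretized with an Euler-Maruyama scheme; the operators $A$, $B$, the interaction terms $F$, $G$, the noise intensity $\sigma$ and all numerical values are hypothetical illustrative choices.

```python
import numpy as np

# Scalar analogue of the slow-fast system (1):
#   dx = (A x + F(x, y)) dt + x dV
#   dy = (1/eps) (B y + G(x, y)) dt + (sigma / sqrt(eps)) dW
# A, B, F, G, sigma and every numerical value here are hypothetical choices.
A, B, sigma, eps = -1.0, -2.0, 0.5, 0.01
F = lambda x, y: 0.1 * np.sin(x) + 0.1 * y   # Lipschitz, F(0, 0) = 0
G = lambda x, y: 0.1 * np.sin(y) + 0.1 * x   # Lipschitz, G(0, 0) = 0

def simulate(T=5.0, dt=1e-4, x0=1.0, y0=0.0, seed=0):
    """Euler-Maruyama discretization of the scalar toy system."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x, y = np.empty(n + 1), np.empty(n + 1)
    x[0], y[0] = x0, y0
    for k in range(n):
        dV, dW = rng.normal(0.0, np.sqrt(dt), size=2)  # independent Brownian increments
        x[k + 1] = x[k] + (A * x[k] + F(x[k], y[k])) * dt + x[k] * dV
        y[k + 1] = y[k] + (B * y[k] + G(x[k], y[k])) * dt / eps \
                   + sigma / np.sqrt(eps) * dW
    return x, y

xs, ys = simulate()
```

With $\varepsilon\ll 1$, the fast component relaxes on a time scale of order $\varepsilon$ while the slow component evolves on a time scale of order one.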

Let us recall some results. In [7], Imkeller-Namachchivaya-Perkowski-Yeong showed that the filter $\mathbb{E}[\phi(x^\varepsilon_t)|\mathcal{R}^\varepsilon_t]$ converges to the homogenized filter, using backward doubly stochastic differential equations and asymptotic techniques. Recently, Papanicolaou-Spiliopoulos [9] also studied this convergence problem by an independent-version technique and then applied it to statistical inference. When jump processes are added to the system (1), the author proved the convergence by a weak convergence technique ([10]). Besides, in [11] and [17], the author and two coauthors reduced the system (1) to a system on a random invariant manifold and showed that $\mathbb{E}[\phi(x^\varepsilon_t)|\mathcal{R}^\varepsilon_t]$ converges to the filter of the reduced system. This offers a new method to study the nonlinear filtering problem for multiscale systems.

In this paper, we consider effective filtering for the system (1) in general Hilbert spaces, following the approach of [11] and [17]. Firstly, the system is reduced to a system on a random invariant manifold. Secondly, the nonlinear filtering of the original system is approximated by that of the reduced system.

This paper is arranged as follows. In Section 2, we introduce basic concepts about random dynamical systems, stationary solutions and random invariant manifolds. The framework for our reduced filtering method is presented in Section 3. In Section 4, the nonlinear filtering problem is introduced and the approximation theorem for the filtering is proved.

The following convention will be used throughout the paper: $C$ with or without indices denotes a positive constant (depending on the indices) whose value may change from one place to another.

2. Preliminaries

In this section, we introduce some notation and basic concepts from random dynamical systems.

2.1. Notation and terminology. Let $\mathcal{B}(H_1)$ be the Borel $\sigma$-algebra on $H_1$, and $B(H_1)$ the set of all real-valued bounded Borel-measurable functions on $H_1$. Let $C(H_1)$ be the set of all real-valued continuous functions on $H_1$, and let $C_b^1(H_1)$ denote the collection of all functions in $C(H_1)$ which are bounded and Lipschitz continuous. Set
$$
\|\phi\| := \sup_{x\in H_1}|\phi(x)| + \sup_{x_1\neq x_2}\frac{|\phi(x_1)-\phi(x_2)|}{\|x_1-x_2\|_{H_1}}.
$$

2.2. Random dynamical systems ([1]). Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space, and $(\theta_t)_{t\in\mathbb{R}}$ a family of measurable transformations from $\Omega$ to $\Omega$ satisfying, for $s,t\in\mathbb{R}$,
$$
\theta_0 = 1_\Omega, \qquad \theta_{t+s} = \theta_t\circ\theta_s. \qquad (2)
$$
If, for each $t\in\mathbb{R}$, $\theta_t$ preserves the probability measure $\mathbb{P}$, i.e. $\theta_t^*\mathbb{P}=\mathbb{P}$, then $(\Omega,\mathcal{F},\mathbb{P};(\theta_t)_{t\in\mathbb{R}})$ is called a metric dynamical system.

Definition 2.1. Let $(X,\mathcal{X})$ be a measurable space. A mapping
$$
\varphi:\mathbb{R}\times\Omega\times X\to X, \qquad (t,\omega,y)\mapsto\varphi(t,\omega,y),
$$
is called a measurable random dynamical system (RDS), or in short a cocycle, if the following properties hold:

(i) Measurability: $\varphi$ is $\mathcal{B}(\mathbb{R})\otimes\mathcal{F}\otimes\mathcal{X}/\mathcal{X}$-measurable;

(ii) Cocycle property: $\varphi(t,\omega)$ satisfies $\varphi(0,\omega)=\mathrm{id}_X$ and, for $\omega\in\Omega$ and all $s,t\in\mathbb{R}$,
$$
\varphi(t+s,\omega) = \varphi(t,\theta_s\omega)\circ\varphi(s,\omega);
$$

(iii) Continuity: $\varphi(t,\omega)$ is continuous for $t\in\mathbb{R}$.

2.3. Stationary solutions and random invariant manifolds ([15]). Let $\varphi$ be a random dynamical system on a normed space $(X,\|\cdot\|_X)$. We now introduce stationary solutions and random invariant manifolds with respect to $\varphi$.

A random variable $\zeta$ is called a random fixed point in the sense of random dynamical systems if, for $t\geqslant 0$,
$$
\varphi(t,\omega,\zeta(\omega)) = \zeta(\theta_t\omega), \quad \text{a.s. }\omega.
$$
If $\varphi$ is defined by the solution mapping of a stochastic/random differential equation, then $(t,\omega)\mapsto\zeta(\theta_t\omega)$ is a stationary solution of that equation.

A family of nonempty sets $\mathcal{M}=\{\mathcal{M}(\omega)\}_{\omega\in\Omega}$ is called a random set in $X$ if, for every $\omega\in\Omega$, $\mathcal{M}(\omega)$ is a closed set in $X$ and, for every $y\in X$, the mapping
$$
\Omega\ni\omega\mapsto \mathrm{dist}(y,\mathcal{M}(\omega)) := \inf_{x\in\mathcal{M}(\omega)}\|x-y\|_X
$$
is measurable. Moreover, if $\mathcal{M}$ satisfies
$$
\varphi(t,\omega,\mathcal{M}(\omega))\subset\mathcal{M}(\theta_t\omega), \qquad t\geqslant 0, \quad \omega\in\Omega,
$$
then $\mathcal{M}$ is called (positively) invariant with respect to $\varphi$.

In the sequel, we consider random sets defined by a Lipschitz continuous function. Define a function
$$
\Omega\times H_1\ni(\omega,x)\mapsto H(\omega,x)\in H_2
$$
such that, for all $\omega\in\Omega$, $H(\omega,x)$ is globally Lipschitzian in $x$ and, for any $x\in H_1$, the mapping $\omega\mapsto H(\omega,x)$ is an $H_2$-valued random variable. Then
$$
\mathcal{M}(\omega) := \{(x,H(\omega,x))\,|\,x\in H_1\}
$$
is a random set in $H_1\times H_2$ ([15, Lemma 2.1]). An invariant random set $\mathcal{M}(\omega)$ of this form is called a Lipschitz random invariant manifold.

3. Framework

In this section, we present some results which will be applied in the following sections.

Let $\Omega_1 := C_0(\mathbb{R},\mathbb{R})$ be the collection of all continuous functions $f:\mathbb{R}\to\mathbb{R}$ with $f(0)=0$, equipped with the compact-open topology. Let $\mathcal{F}^1$ be its Borel $\sigma$-algebra and $\mathbb{P}^1$ the Wiener measure on $\Omega_1$. Set
$$
\theta^1_t\omega_1(\cdot) := \omega_1(\cdot+t)-\omega_1(t), \qquad \omega_1\in\Omega_1, \quad t\in\mathbb{R},
$$
and then $(\theta^1_t)_{t\in\mathbb{R}}$ satisfies (2). Moreover, by the properties of $\mathbb{P}^1$ we obtain that $(\Omega_1,\mathcal{F}^1,\mathbb{P}^1,\theta^1_t)$ is a metric dynamical system. Next, set $\Omega_2 := C_0(\mathbb{R},\mathbb{R})$, and define $\mathcal{F}^2$, $\mathbb{P}^2$, $\theta^2_t$ in the same way as $\mathcal{F}^1$, $\mathbb{P}^1$, $\theta^1_t$. Thus $(\Omega_2,\mathcal{F}^2,\mathbb{P}^2,\theta^2_t)$ is another metric dynamical system.
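As a side illustration (not from the paper), the Wiener shift $\theta^1_t\omega_1(\cdot)=\omega_1(\cdot+t)-\omega_1(t)$ and its flow property (2) can be checked on a discretized two-sided Brownian path; the grid, the path length and the helper function below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1e-3, 4000
increments = rng.normal(0.0, np.sqrt(dt), size=2 * n)
path = np.concatenate(([0.0], np.cumsum(increments)))  # indices 0..2n, time (k - n) * dt
path -= path[n]                                        # enforce omega(0) = 0 at index n

def shift(path, t_index):
    """Discrete Wiener shift: (theta_t omega)(s) = omega(s + t) - omega(t)."""
    return np.roll(path, -t_index) - path[t_index]     # wrapped tail entries are ignored below

# Flow property (2), theta_{t+s} = theta_t o theta_s, checked away from the wrapped tail.
i, j = 500, 700
lhs = shift(path, i + j)[: n // 2]
rhs = shift(shift(path, j), i)[: n // 2]
print(np.max(np.abs(lhs - rhs)) < 1e-12)
```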

Set
$$
\Omega := \Omega_1\times\Omega_2, \quad \mathcal{F} := \mathcal{F}^1\times\mathcal{F}^2, \quad \mathbb{P} := \mathbb{P}^1\times\mathbb{P}^2, \quad \theta_t := \theta^1_t\times\theta^2_t;
$$
then $(\Omega,\mathcal{F},\mathbb{P},\theta_t)$ is the metric dynamical system used in the sequel.

Consider the slow-fast system (1) on $H_1\times H_2$, i.e.
$$
\begin{cases}
\dot{x}^\varepsilon = A x^\varepsilon + F(x^\varepsilon,y^\varepsilon) + x^\varepsilon\dot{V},\\
\dot{y}^\varepsilon = \frac{1}{\varepsilon} B y^\varepsilon + \frac{1}{\varepsilon} G(x^\varepsilon,y^\varepsilon) + \frac{\sigma}{\sqrt{\varepsilon}}\dot{W}.
\end{cases}
$$
We make the following hypotheses:

(H1) There exists a $\gamma_1>0$ such that $\|A\|\leqslant\gamma_1$, where $\|A\|$ stands for the operator norm of $A$, and $A$ has no eigenvalue on the imaginary axis.

(H2) There exists a $\gamma_2>0$ such that for any $y\in H_2$,
$$
\langle By,y\rangle_{H_2}\leqslant -\gamma_2\|y\|^2_{H_2}.
$$

(H3) There exists a positive constant $L$ such that for all $(x_i,y_i)\in H_1\times H_2$, $i=1,2$,
$$
\|F(x_1,y_1)-F(x_2,y_2)\|_{H_1}\leqslant L(\|x_1-x_2\|_{H_1}+\|y_1-y_2\|_{H_2}),
$$
$$
\|G(x_1,y_1)-G(x_2,y_2)\|_{H_2}\leqslant L(\|x_1-x_2\|_{H_1}+\|y_1-y_2\|_{H_2}),
$$
and $F(0,0)=G(0,0)=0$.

(H4) $\gamma_2>L$.

Under the assumption (H3), the system (1) has a unique global solution $(x^\varepsilon(t),y^\varepsilon(t))$ for any given initial value $(x_0,y_0)$. Set $\varphi^\varepsilon_t(x_0,y_0):=(x^\varepsilon(t),y^\varepsilon(t))$; then $\varphi^\varepsilon$ is called the solution operator of the system (1). Next, we study some properties of $\varphi^\varepsilon$. To do this, we introduce two auxiliary systems
$$
\mathrm{d}\eta = A\eta\,\mathrm{d}t + \eta\,\mathrm{d}V, \qquad (3)
$$
$$
\mathrm{d}\xi^\varepsilon = \frac{1}{\varepsilon}B\xi^\varepsilon\,\mathrm{d}t + \frac{\sigma}{\sqrt{\varepsilon}}\,\mathrm{d}W. \qquad (4)
$$

Lemma 3.1. Suppose that $A$, $B$ satisfy (H1) and (H2), respectively. Then there exist stationary solutions for Eq.(3) and Eq.(4).

Proof. First of all, we consider Eq.(3). By (H1), all the eigenvalues of $A$ have nonzero real parts. Thus, let $(H_1)^+$, $(H_1)^-$ be the two subspaces spanned by the eigenvectors corresponding to eigenvalues with positive and negative real parts, respectively, so that $H_1=(H_1)^+\oplus(H_1)^-$. By a method similar to that in [4, Theorem 5.2], we know that Eq.(3) has a stationary solution; that is, there exists a random variable $\eta$ such that $\eta(\theta^1_t\omega_1)$ satisfies Eq.(3).

Next, we study Eq.(4). Set
$$
\xi^\varepsilon(\omega_2) = \sigma\int_{-\infty}^0 e^{-B\frac{s}{\varepsilon}}\,\mathrm{d}\Big(\frac{W_s(\omega_2)}{\sqrt{\varepsilon}}\Big),
$$
and then by (H2), $\xi^\varepsilon$ is well defined. So, it holds that
$$
\begin{aligned}
\xi^\varepsilon(\theta^2_t\omega_2)
&= \sigma\int_{-\infty}^0 e^{-B\frac{s}{\varepsilon}}\,\mathrm{d}\Big(\frac{W_s(\theta^2_t\omega_2)}{\sqrt{\varepsilon}}\Big)
= \sigma\int_{-\infty}^0 e^{-B\frac{s}{\varepsilon}}\,\mathrm{d}\Big(\frac{W_{s+t}(\omega_2)-W_t(\omega_2)}{\sqrt{\varepsilon}}\Big)\\
&= \sigma\int_{-\infty}^0 e^{-B\frac{s}{\varepsilon}}\,\mathrm{d}\Big(\frac{W_{s+t}(\omega_2)}{\sqrt{\varepsilon}}\Big)
= \sigma e^{B\frac{t}{\varepsilon}}\int_{-\infty}^t e^{-B\frac{s}{\varepsilon}}\,\mathrm{d}\Big(\frac{W_s(\omega_2)}{\sqrt{\varepsilon}}\Big).
\end{aligned}
$$
Moreover, one can verify that $\xi^\varepsilon(\theta^2_t\cdot)$ solves Eq.(4). Thus, by the definition of stationary solutions, $\xi^\varepsilon$ is a stationary solution to Eq.(4). The proof is complete. $\Box$

Next, set $\bar{x}^\varepsilon := x^\varepsilon-\eta(\theta^1_\cdot\,\cdot)$, $\bar{y}^\varepsilon := y^\varepsilon-\xi^\varepsilon(\theta^2_\cdot\,\cdot)$; then $(\bar{x}^\varepsilon,\bar{y}^\varepsilon)$ satisfies the following system:
$$
\begin{cases}
\dot{\bar{x}}^\varepsilon = A\bar{x}^\varepsilon + F(\bar{x}^\varepsilon+\eta(\theta^1_\cdot\,\cdot),\,\bar{y}^\varepsilon+\xi^\varepsilon(\theta^2_\cdot\,\cdot)),\\
\dot{\bar{y}}^\varepsilon = \frac{1}{\varepsilon}B\bar{y}^\varepsilon + \frac{1}{\varepsilon}G(\bar{x}^\varepsilon+\eta(\theta^1_\cdot\,\cdot),\,\bar{y}^\varepsilon+\xi^\varepsilon(\theta^2_\cdot\,\cdot)).
\end{cases}
$$
It is easy to verify that $(\bar{x}^\varepsilon,\bar{y}^\varepsilon)$ generates a random dynamical system, denoted by $\bar{\varphi}^\varepsilon$. The following theorem comes from [6, Theorem 4.2].

Theorem 3.2. (Random invariant manifold) Assume that (H1)-(H4) are satisfied. Then for sufficiently small $\varepsilon>0$, $\bar{\varphi}^\varepsilon$ has a random invariant manifold
$$
\bar{\mathcal{M}}^\varepsilon(\omega) = \big\{\big(x,H^\varepsilon(\omega,x)\big),\; x\in H_1\big\},
$$
where for $\omega\in\Omega$ the Lipschitz constant of $H^\varepsilon(\omega,x)$ is bounded by
$$
\frac{L}{(\gamma_2-\alpha)\Big[1-L\Big(\frac{1}{\gamma_2-\alpha}+\frac{\varepsilon}{\alpha-\varepsilon\gamma_1}\Big)\Big]},
$$
and $\alpha$ is a positive number satisfying $\gamma_2-\alpha>L$.

Based on the relation between $\varphi^\varepsilon$ and $\bar{\varphi}^\varepsilon$, it holds that $\varphi^\varepsilon$ is also a random dynamical system and has a random invariant manifold
$$
\mathcal{M}^\varepsilon(\omega) = \big\{\big(x+\eta(\omega_1),\,H^\varepsilon(\omega,x)+\xi^\varepsilon(\omega_2)\big),\; x\in H_1\big\}.
$$
By the same deduction as in [6, Theorem 4.4], we obtain a reduced system on $\mathcal{M}^\varepsilon$.
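For intuition only (not part of the paper), the stationary solution $\xi^\varepsilon$ constructed in the proof of Lemma 3.1 can be approximated in a scalar analogue of Eq.(4) by truncating the stochastic convolution to a finite past window; $B$, $\sigma$, the window and the grid below are hypothetical choices.

```python
import numpy as np

# Scalar analogue of Eq.(4): d(xi) = (B / eps) xi dt + (sigma / sqrt(eps)) dW, with B < 0.
# Its stationary solution is the stochastic convolution
#   xi^eps = sigma * int_{-infty}^0 exp(-B s / eps) d( W_s / sqrt(eps) ),
# truncated here to the window [-T0, 0]; the neglected tail is exponentially small.
B, sigma, eps = -2.0, 0.5, 0.01
T0, dt = 1.0, 1e-4                           # hypothetical window and step size
s = np.arange(-T0, 0.0, dt)                  # past times s in [-T0, 0)

def sample_xi(seed):
    rng = np.random.default_rng(seed)
    dW = rng.normal(0.0, np.sqrt(dt), size=s.size)
    # Riemann-Ito sum of the truncated stochastic convolution.
    return sigma / np.sqrt(eps) * np.sum(np.exp(-B * s / eps) * dW)

samples = np.array([sample_xi(seed) for seed in range(200)])
print(samples.var(), sigma**2 / (2 * abs(B)))  # empirical vs. stationary OU variance
```

The empirical variance is close to $\sigma^2/(2|B|)$, the stationary variance of the scalar Ornstein-Uhlenbeck process, independently of $\varepsilon$.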

Theorem 3.3. (A reduced system on the random invariant manifold) Assume that $\varepsilon>0$ is sufficiently small and that (H1)-(H4) hold. Then for the system (1) there exists the following system on the random invariant manifold:
$$
\begin{cases}
\dot{\tilde{x}}^\varepsilon = A\tilde{x}^\varepsilon + F(\tilde{x}^\varepsilon,\tilde{y}^\varepsilon) + \tilde{x}^\varepsilon\dot{V},\\
\tilde{y}^\varepsilon = H^\varepsilon(\theta_\cdot\omega,\tilde{x}^\varepsilon-\eta(\theta^1_\cdot\omega_1)) + \xi^\varepsilon(\theta^2_\cdot\omega_2),
\end{cases}
\qquad (5)
$$
such that for $t>0$ and almost all $\omega$,
$$
\|z^\varepsilon(t,\omega)-\tilde{z}^\varepsilon(t,\omega)\|_{H_1\times H_2} \leqslant C_{L,\gamma_2,\alpha}\, e^{-\frac{\alpha t}{\varepsilon}}\,\|z^\varepsilon(0)-\tilde{z}^\varepsilon(0)\|_{H_1\times H_2},
$$
where $z^\varepsilon(t)=(x^\varepsilon(t),y^\varepsilon(t))$, $\tilde{z}^\varepsilon(t)=(\tilde{x}^\varepsilon(t),\tilde{y}^\varepsilon(t))$ is the solution of the system (5) with initial value $\tilde{z}^\varepsilon(0)=(\tilde{x}_0,\tilde{y}_0)$, and $C_{L,\gamma_2,\alpha}>0$ is a constant depending on $L$, $\gamma_2$ and $\alpha$.
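The graph map $H^\varepsilon$ has no closed form in general. Purely as an illustration of the slaving idea behind Theorem 3.3, and not as an approximation used in the paper, the following sketch compares the fast component of a scalar toy model with the quasi-steady value $y^*(x)$ solving $By+G(x,y)=0$, the formal deterministic $\varepsilon\to 0$ limit of the fast equation; all model choices and parameters are hypothetical.

```python
import numpy as np

# Toy illustration (hypothetical model, not the paper's reduced system) of the slaving
# relation behind Theorem 3.3: the fast component y^eps(t) stays close to the
# quasi-steady state y*(x) solving B y + G(x, y) = 0.
A, B, sigma, eps = -1.0, -2.0, 0.1, 0.001
F = lambda x, y: 0.1 * np.sin(x) + 0.1 * y
G = lambda x, y: 0.1 * np.sin(y) + 0.1 * x

def quasi_steady(x, iters=50):
    """Solve B y + G(x, y) = 0 by fixed-point iteration (a contraction since 0.1 < |B|)."""
    y = 0.0
    for _ in range(iters):
        y = -G(x, y) / B
    return y

rng = np.random.default_rng(3)
dt, n = 1e-5, 50_000
x, y, errs = 1.0, 0.5, []
for k in range(n):
    dV, dW = rng.normal(0.0, np.sqrt(dt), size=2)
    x_new = x + (A * x + F(x, y)) * dt + x * dV
    y_new = y + (B * y + G(x, y)) * dt / eps + sigma / np.sqrt(eps) * dW
    x, y = x_new, y_new
    if k > n // 10:                           # discard the initial transient
        errs.append(abs(y - quasi_steady(x)))
print("mean |y - y*(x)| after transient:", np.mean(errs))
```

After a short transient, the fast variable stays close to $y^*(x^\varepsilon_t)$ up to noise-induced fluctuations, which is the intuition behind tracking only the slow variable on the manifold.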

4. An approximate filter on the invariant manifold

In this section we introduce the nonlinear filtering problems for the system (1) and the reduced system (5) on the random invariant manifold, and then study their relation.

4.1. Nonlinear filtering problems. In this subsection we introduce the nonlinear filtering problems for the system (1) and the reduced system (5).

Let $\{\beta_i(t,\omega)\}_{i\geqslant 1}$ be a family of mutually independent one-dimensional Brownian motions on $(\Omega,\mathcal{F},\mathbb{P})$. Construct a cylindrical Brownian motion on $H_3$ with respect to $(\Omega,\mathcal{F},\mathbb{P})$ by
$$
U_t := U_t(\omega) := \sum_{i=1}^{\infty}\beta_i(t,\omega)e_i, \qquad \omega\in\Omega, \quad t\in[0,\infty),
$$
where $\{e_i\}_{i\geqslant 1}$ is a complete orthonormal basis of $H_3$. It is easy to verify that the covariance operator of the cylindrical Brownian motion $U$ is the identity operator $I$ on $H_3$. Note that $U$ is not a process on $H_3$. It is convenient to realize $U$ as a continuous process on an enlarged Hilbert space $\tilde{H}_3$, the completion of $H_3$ under the inner product
$$
\langle x,y\rangle_{\tilde{H}_3} := \sum_{i=1}^{\infty}2^{-i}\langle x,e_i\rangle_{H_3}\langle y,e_i\rangle_{H_3}, \qquad x,y\in H_3.
$$
Furthermore, fix a Borel measurable function $h(x,y):H_1\times H_2\to H_3$. For $h$, we make the following additional hypothesis:

(H5) There exists an $M>0$ such that $\sup_{(x,y)\in H_1\times H_2}\|h(x,y)\|_{H_3}\leqslant M$, and $h(x,y)$ is Lipschitz continuous in $(x,y)$, with Lipschitz constant denoted by $\|h\|_{\mathrm{Lip}}$.

Next, for $T>0$, an observation system is given by
$$
r^\varepsilon_t = U_t + \int_0^t h(x^\varepsilon_s,y^\varepsilon_s)\,\mathrm{d}s, \qquad t\in[0,T].
$$
Under the assumption (H5), $r^\varepsilon$ is well defined. Set
$$
(\Gamma^\varepsilon_t)^{-1} := \exp\Big\{-\int_0^t\langle h(x^\varepsilon_s,y^\varepsilon_s),\mathrm{d}U_s\rangle_{H_3} - \frac{1}{2}\int_0^t\|h(x^\varepsilon_s,y^\varepsilon_s)\|^2_{H_3}\,\mathrm{d}s\Big\},
$$
and then by [3, Proposition 10.17], $(\Gamma^\varepsilon_t)^{-1}$ is an exponential martingale under $\mathbb{P}$. Thus, [3, Theorem 10.14] allows us to conclude that $r^\varepsilon$ is a cylindrical Brownian motion under a new probability measure $\mathbb{P}^\varepsilon$ defined via
$$
\frac{\mathrm{d}\mathbb{P}^\varepsilon}{\mathrm{d}\mathbb{P}} = (\Gamma^\varepsilon_T)^{-1}.
$$
Rewrite $\Gamma^\varepsilon_t$ as
$$
\Gamma^\varepsilon_t = \exp\Big\{\int_0^t\langle h(x^\varepsilon_s,y^\varepsilon_s),\mathrm{d}r^\varepsilon_s\rangle_{H_3} - \frac{1}{2}\int_0^t\|h(x^\varepsilon_s,y^\varepsilon_s)\|^2_{H_3}\,\mathrm{d}s\Big\},
$$
and define
$$
\rho^\varepsilon_t(\phi) := \mathbb{E}^\varepsilon[\phi(x^\varepsilon_t)\Gamma^\varepsilon_t|\mathcal{R}^\varepsilon_t], \qquad \phi\in B(H_1),
$$
where $\mathbb{E}^\varepsilon$ stands for the expectation under $\mathbb{P}^\varepsilon$, $\mathcal{R}^\varepsilon_t := \sigma(r^\varepsilon_s: 0\leqslant s\leqslant t)\vee\mathcal{N}$, and $\mathcal{N}$ is the collection of all $\mathbb{P}$-null sets. Also set
$$
\pi^\varepsilon_t(\phi) := \mathbb{E}[\phi(x^\varepsilon_t)|\mathcal{R}^\varepsilon_t], \qquad \phi\in B(H_1),
$$
and then by the Kallianpur-Striebel formula it holds that
$$
\pi^\varepsilon_t(\phi) = \frac{\rho^\varepsilon_t(\phi)}{\rho^\varepsilon_t(1)}.
$$
Here $\rho^\varepsilon_t$ is called the unnormalized filtering of $x^\varepsilon_t$ with respect to $\mathcal{R}^\varepsilon_t$, and $\pi^\varepsilon_t$ is called the normalized filtering of $x^\varepsilon_t$ with respect to $\mathcal{R}^\varepsilon_t$, or the nonlinear filtering problem for $x^\varepsilon_t$ with respect to $\mathcal{R}^\varepsilon_t$.
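The Kallianpur-Striebel formula suggests a standard Monte Carlo approximation (a generic weighted-particle sketch, not a method proposed in the paper): propagate independent copies of the signal, weight them with a discretized version of the Girsanov density $\Gamma^\varepsilon_t$ evaluated along a fixed observation path, and take the ratio of weighted to unweighted averages. The scalar model, the one-dimensional observation function $h$ and all parameters below are hypothetical.

```python
import numpy as np

# Weighted-particle approximation of pi_t(phi) = rho_t(phi) / rho_t(1) for a scalar toy
# slow-fast model with a one-dimensional observation dr = h(x, y) dt + dU.
# Every modelling choice and parameter value here is hypothetical.
A, B, sigma, eps = -1.0, -2.0, 0.5, 0.01
F = lambda x, y: 0.1 * np.sin(x) + 0.1 * y
G = lambda x, y: 0.1 * np.sin(y) + 0.1 * x
h = lambda x, y: np.tanh(x)                  # bounded, Lipschitz sensor, as in (H5)
phi = lambda x: np.sin(x)                    # test function in C_b^1

dt, n_steps, n_particles = 1e-3, 500, 2000
rng = np.random.default_rng(4)

# 1) One "true" signal path and its observation increments dr.
x_true, y_true = 1.0, 0.0
dr = np.empty(n_steps)
for k in range(n_steps):
    dV, dW, dU = rng.normal(0.0, np.sqrt(dt), size=3)
    dr[k] = h(x_true, y_true) * dt + dU
    x_new = x_true + (A * x_true + F(x_true, y_true)) * dt + x_true * dV
    y_new = y_true + (B * y_true + G(x_true, y_true)) * dt / eps + sigma / np.sqrt(eps) * dW
    x_true, y_true = x_new, y_new

# 2) Independent particles with accumulated log-Girsanov weights
#    log Gamma_t = int <h, dr> - (1/2) int |h|^2 ds  (discretized).
x = np.full(n_particles, 1.0)
y = np.zeros(n_particles)
logw = np.zeros(n_particles)
for k in range(n_steps):
    hx = h(x, y)
    logw += hx * dr[k] - 0.5 * hx**2 * dt
    dV = rng.normal(0.0, np.sqrt(dt), size=n_particles)
    dW = rng.normal(0.0, np.sqrt(dt), size=n_particles)
    x_new = x + (A * x + F(x, y)) * dt + x * dV
    y_new = y + (B * y + G(x, y)) * dt / eps + sigma / np.sqrt(eps) * dW
    x, y = x_new, y_new

w = np.exp(logw - logw.max())                # stabilized unnormalized weights
print("estimated filter:", np.sum(w * phi(x)) / np.sum(w))
```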

Besides, we rewrite the reduced system (5) as
$$
\dot{\tilde{x}}^\varepsilon = A\tilde{x}^\varepsilon + \tilde{F}^\varepsilon(\omega,\tilde{x}^\varepsilon) + \tilde{x}^\varepsilon\dot{V},
$$
where $\tilde{F}^\varepsilon(\omega,x) := F(x, H^\varepsilon(\theta_\cdot\omega, x-\eta(\theta^1_\cdot\omega_1)) + \xi^\varepsilon(\theta^2_\cdot\omega_2))$, and study the nonlinear filtering problem for $\tilde{x}^\varepsilon$. Set
$$
\tilde{h}^\varepsilon(\omega,x) := h(x, H^\varepsilon(\theta_\cdot\omega, x-\eta(\theta^1_\cdot\omega_1)) + \xi^\varepsilon(\theta^2_\cdot\omega_2)),
$$
$$
\tilde{\Gamma}^\varepsilon_t := \exp\Big\{\int_0^t\langle\tilde{h}^\varepsilon(\omega,\tilde{x}^\varepsilon_s),\mathrm{d}r^\varepsilon_s\rangle_{H_3} - \frac{1}{2}\int_0^t\|\tilde{h}^\varepsilon(\omega,\tilde{x}^\varepsilon_s)\|^2_{H_3}\,\mathrm{d}s\Big\},
$$
and then by [3, Proposition 10.17], $\tilde{\Gamma}^\varepsilon_t$ is an exponential martingale under $\mathbb{P}^\varepsilon$. Thus, we define
$$
\tilde{\rho}^\varepsilon_t(\phi) := \mathbb{E}^\varepsilon[\phi(\tilde{x}^\varepsilon_t)\tilde{\Gamma}^\varepsilon_t|\mathcal{R}^\varepsilon_t], \qquad
\tilde{\pi}^\varepsilon_t(\phi) := \frac{\tilde{\rho}^\varepsilon_t(\phi)}{\tilde{\rho}^\varepsilon_t(1)}, \qquad \phi\in B(H_1),
$$
and $\tilde{\pi}^\varepsilon$ can be understood as the nonlinear filtering problem for $\tilde{x}^\varepsilon$ with respect to $\mathcal{R}^\varepsilon_t$.

4.2. The relation between $\pi^\varepsilon_t$ and $\tilde{\pi}^\varepsilon_t$. In this subsection we show that a suitable distance between $\pi^\varepsilon_t$ and $\tilde{\pi}^\varepsilon_t$ converges to zero as $\varepsilon\to 0$. Let us start with two estimates.

Lemma 4.1. Assume that (H1)-(H5) are satisfied. Then for $t\in[0,T]$ and $p>0$,
$$
\mathbb{E}^\varepsilon|\tilde{\rho}^\varepsilon_t(1)|^{-p} \leqslant \exp\Big\{\Big(\frac{p}{2}+\frac{p^2}{2}\Big)M^2T\Big\}.
$$

Proof. By the Jensen inequality it holds that
$$
\mathbb{E}^\varepsilon|\tilde{\rho}^\varepsilon_t(1)|^{-p} = \mathbb{E}^\varepsilon\big[\big(\mathbb{E}^\varepsilon[\tilde{\Gamma}^\varepsilon_t|\mathcal{R}^\varepsilon_t]\big)^{-p}\big] \leqslant \mathbb{E}^\varepsilon\big[\mathbb{E}^\varepsilon[|\tilde{\Gamma}^\varepsilon_t|^{-p}|\mathcal{R}^\varepsilon_t]\big] = \mathbb{E}^\varepsilon[|\tilde{\Gamma}^\varepsilon_t|^{-p}].
$$
So, by the definition of $\tilde{\Gamma}^\varepsilon_t$ we know that
$$
\begin{aligned}
\mathbb{E}^\varepsilon[|\tilde{\Gamma}^\varepsilon_t|^{-p}]
&= \mathbb{E}^\varepsilon\exp\Big\{-p\int_0^t\langle\tilde{h}^\varepsilon(\omega,\tilde{x}^\varepsilon_s),\mathrm{d}r^\varepsilon_s\rangle_{H_3} + \frac{p}{2}\int_0^t\|\tilde{h}^\varepsilon(\omega,\tilde{x}^\varepsilon_s)\|^2_{H_3}\,\mathrm{d}s\Big\}\\
&= \mathbb{E}^\varepsilon\Big[\exp\Big\{-p\int_0^t\langle\tilde{h}^\varepsilon(\omega,\tilde{x}^\varepsilon_s),\mathrm{d}r^\varepsilon_s\rangle_{H_3} - \frac{p^2}{2}\int_0^t\|\tilde{h}^\varepsilon(\omega,\tilde{x}^\varepsilon_s)\|^2_{H_3}\,\mathrm{d}s\Big\}\cdot\exp\Big\{\Big(\frac{p}{2}+\frac{p^2}{2}\Big)\int_0^t\|\tilde{h}^\varepsilon(\omega,\tilde{x}^\varepsilon_s)\|^2_{H_3}\,\mathrm{d}s\Big\}\Big]\\
&\leqslant \exp\Big\{\Big(\frac{p}{2}+\frac{p^2}{2}\Big)M^2T\Big\}\,\mathbb{E}^\varepsilon\exp\Big\{-p\int_0^t\langle\tilde{h}^\varepsilon(\omega,\tilde{x}^\varepsilon_s),\mathrm{d}r^\varepsilon_s\rangle_{H_3} - \frac{p^2}{2}\int_0^t\|\tilde{h}^\varepsilon(\omega,\tilde{x}^\varepsilon_s)\|^2_{H_3}\,\mathrm{d}s\Big\}\\
&= \exp\Big\{\Big(\frac{p}{2}+\frac{p^2}{2}\Big)M^2T\Big\},
\end{aligned}
$$
where the last step is based on the fact that $\exp\big\{-p\int_0^t\langle\tilde{h}^\varepsilon(\omega,\tilde{x}^\varepsilon_s),\mathrm{d}r^\varepsilon_s\rangle_{H_3} - \frac{p^2}{2}\int_0^t\|\tilde{h}^\varepsilon(\omega,\tilde{x}^\varepsilon_s)\|^2_{H_3}\,\mathrm{d}s\big\}$ is an exponential martingale under $\mathbb{P}^\varepsilon$. The proof is complete. $\Box$

Lemma 4.2. Under (H1)-(H5), it holds that for $\phi\in C_b^1(H_1)$, $t\in[0,T]$ and $p\geqslant 2$,
$$
\mathbb{E}^\varepsilon|\rho^\varepsilon_t(\phi)-\tilde{\rho}^\varepsilon_t(\phi)|^p \leqslant C\|\phi\|^p\big(\mathbb{E}^\varepsilon\|z^\varepsilon(0)-\tilde{z}^\varepsilon(0)\|^{2p}_{H_1\times H_2}\big)^{1/2}\big(e^{-\frac{\alpha tp}{\varepsilon}}+\varepsilon\big),
$$

where the constant $C>0$ is independent of $\varepsilon$.

Proof. For $\phi\in C_b^1(H_1)$,
$$
\begin{aligned}
\mathbb{E}^\varepsilon|\rho^\varepsilon_t(\phi)-\tilde{\rho}^\varepsilon_t(\phi)|^p
&= \mathbb{E}^\varepsilon\big|\mathbb{E}^\varepsilon[\phi(x^\varepsilon_t)\Gamma^\varepsilon_t|\mathcal{R}^\varepsilon_t]-\mathbb{E}^\varepsilon[\phi(\tilde{x}^\varepsilon_t)\tilde{\Gamma}^\varepsilon_t|\mathcal{R}^\varepsilon_t]\big|^p
= \mathbb{E}^\varepsilon\big|\mathbb{E}^\varepsilon[\phi(x^\varepsilon_t)\Gamma^\varepsilon_t-\phi(\tilde{x}^\varepsilon_t)\tilde{\Gamma}^\varepsilon_t\,|\,\mathcal{R}^\varepsilon_t]\big|^p\\
&\leqslant \mathbb{E}^\varepsilon\Big[\mathbb{E}^\varepsilon\big[\big|\phi(x^\varepsilon_t)\Gamma^\varepsilon_t-\phi(\tilde{x}^\varepsilon_t)\tilde{\Gamma}^\varepsilon_t\big|^p\,\big|\,\mathcal{R}^\varepsilon_t\big]\Big]
= \mathbb{E}^\varepsilon\big[\big|\phi(x^\varepsilon_t)\Gamma^\varepsilon_t-\phi(\tilde{x}^\varepsilon_t)\tilde{\Gamma}^\varepsilon_t\big|^p\big]\\
&\leqslant 2^{p-1}\mathbb{E}^\varepsilon\big[\big|\phi(x^\varepsilon_t)\Gamma^\varepsilon_t-\phi(\tilde{x}^\varepsilon_t)\Gamma^\varepsilon_t\big|^p\big] + 2^{p-1}\mathbb{E}^\varepsilon\big[\big|\phi(\tilde{x}^\varepsilon_t)\Gamma^\varepsilon_t-\phi(\tilde{x}^\varepsilon_t)\tilde{\Gamma}^\varepsilon_t\big|^p\big]\\
&=: J_1+J_2. \qquad\qquad (6)
\end{aligned}
$$
For $J_1$, by the Hölder inequality we know that
$$
\begin{aligned}
J_1 &\leqslant 2^{p-1}\big(\mathbb{E}^\varepsilon|\phi(x^\varepsilon_t)-\phi(\tilde{x}^\varepsilon_t)|^{2p}\big)^{1/2}\big(\mathbb{E}^\varepsilon|\Gamma^\varepsilon_t|^{2p}\big)^{1/2}\\
&\leqslant 2^{p-1}\|\phi\|^p\big(\mathbb{E}^\varepsilon\|x^\varepsilon_t-\tilde{x}^\varepsilon_t\|^{2p}_{H_1}\big)^{1/2}
\Big(\mathbb{E}^\varepsilon\exp\Big\{2p\int_0^t\langle h(x^\varepsilon_s,y^\varepsilon_s),\mathrm{d}r^\varepsilon_s\rangle_{H_3}-\frac{(2p)^2}{2}\int_0^t\|h(x^\varepsilon_s,y^\varepsilon_s)\|^2_{H_3}\,\mathrm{d}s\Big\}\\
&\qquad\cdot\exp\Big\{\frac{(2p)^2}{2}\int_0^t\|h(x^\varepsilon_s,y^\varepsilon_s)\|^2_{H_3}\,\mathrm{d}s-\frac{2p}{2}\int_0^t\|h(x^\varepsilon_s,y^\varepsilon_s)\|^2_{H_3}\,\mathrm{d}s\Big\}\Big)^{1/2}\\
&\leqslant 2^{p-1}\|\phi\|^p C^p_{L,\gamma_2,\alpha}\,e^{-\frac{\alpha tp}{\varepsilon}}\big(\mathbb{E}^\varepsilon\|z^\varepsilon(0)-\tilde{z}^\varepsilon(0)\|^{2p}_{H_1\times H_2}\big)^{1/2}e^{p(2p-1)M^2T/2}, \qquad (7)
\end{aligned}
$$
where the last step is based on Theorem 3.3 and the fact that the process $\exp\big\{2p\int_0^t\langle h(x^\varepsilon_s,y^\varepsilon_s),\mathrm{d}r^\varepsilon_s\rangle_{H_3}-\frac{(2p)^2}{2}\int_0^t\|h(x^\varepsilon_s,y^\varepsilon_s)\|^2_{H_3}\,\mathrm{d}s\big\}$ is an exponential martingale under $\mathbb{P}^\varepsilon$.

Next, for $J_2$, it holds that
$$
J_2 \leqslant 2^{p-1}\|\phi\|^p\,\mathbb{E}^\varepsilon\big[\big|\Gamma^\varepsilon_t-\tilde{\Gamma}^\varepsilon_t\big|^p\big].
$$
Based on the Itô formula, $\Gamma^\varepsilon_t$ and $\tilde{\Gamma}^\varepsilon_t$ solve the following equations, respectively:
$$
\Gamma^\varepsilon_t = 1+\int_0^t\Gamma^\varepsilon_s\langle h(x^\varepsilon_s,y^\varepsilon_s),\mathrm{d}r^\varepsilon_s\rangle_{H_3}, \qquad
\tilde{\Gamma}^\varepsilon_t = 1+\int_0^t\tilde{\Gamma}^\varepsilon_s\langle\tilde{h}^\varepsilon(\omega,\tilde{x}^\varepsilon_s),\mathrm{d}r^\varepsilon_s\rangle_{H_3}.
$$

So, it follows from the BDG inequality and the Hölder inequality that
$$
\begin{aligned}
\mathbb{E}^\varepsilon\big[\big|\Gamma^\varepsilon_t-\tilde{\Gamma}^\varepsilon_t\big|^p\big]
&= \mathbb{E}^\varepsilon\Big|\int_0^t\big\langle\Gamma^\varepsilon_s h(x^\varepsilon_s,y^\varepsilon_s)-\tilde{\Gamma}^\varepsilon_s\tilde{h}^\varepsilon(\omega,\tilde{x}^\varepsilon_s),\mathrm{d}r^\varepsilon_s\big\rangle_{H_3}\Big|^p\\
&\leqslant C\,\mathbb{E}^\varepsilon\Big(\int_0^t\big\|\Gamma^\varepsilon_s h(x^\varepsilon_s,y^\varepsilon_s)-\tilde{\Gamma}^\varepsilon_s\tilde{h}^\varepsilon(\omega,\tilde{x}^\varepsilon_s)\big\|^2_{H_3}\,\mathrm{d}s\Big)^{p/2}\\
&\leqslant C\,T^{p/2-1}\,\mathbb{E}^\varepsilon\int_0^t\big\|\Gamma^\varepsilon_s h(x^\varepsilon_s,y^\varepsilon_s)-\tilde{\Gamma}^\varepsilon_s\tilde{h}^\varepsilon(\omega,\tilde{x}^\varepsilon_s)\big\|^p_{H_3}\,\mathrm{d}s\\
&\leqslant 2^{p-1}C\,T^{p/2-1}\,\mathbb{E}^\varepsilon\int_0^t\big\|\Gamma^\varepsilon_s h(x^\varepsilon_s,y^\varepsilon_s)-\Gamma^\varepsilon_s\tilde{h}^\varepsilon(\omega,\tilde{x}^\varepsilon_s)\big\|^p_{H_3}\,\mathrm{d}s\\
&\quad+2^{p-1}C\,T^{p/2-1}\,\mathbb{E}^\varepsilon\int_0^t\big\|\Gamma^\varepsilon_s\tilde{h}^\varepsilon(\omega,\tilde{x}^\varepsilon_s)-\tilde{\Gamma}^\varepsilon_s\tilde{h}^\varepsilon(\omega,\tilde{x}^\varepsilon_s)\big\|^p_{H_3}\,\mathrm{d}s\\
&=: J_{21}+J_{22}.
\end{aligned}
$$
For $J_{21}$, by a deduction similar to that for $J_1$ we have
$$
\begin{aligned}
J_{21} &\leqslant 2^{p-1}C\,T^{p/2-1}\|h\|^p_{\mathrm{Lip}}C^p_{L,\gamma_2,\alpha}\,e^{p(2p-1)M^2T/2}\big(\mathbb{E}^\varepsilon\|z^\varepsilon(0)-\tilde{z}^\varepsilon(0)\|^{2p}_{H_1\times H_2}\big)^{1/2}\int_0^t e^{-\frac{\alpha sp}{\varepsilon}}\,\mathrm{d}s\\
&= 2^{p-1}C\,T^{p/2-1}\|h\|^p_{\mathrm{Lip}}C^p_{L,\gamma_2,\alpha}\,e^{p(2p-1)M^2T/2}\big(\mathbb{E}^\varepsilon\|z^\varepsilon(0)-\tilde{z}^\varepsilon(0)\|^{2p}_{H_1\times H_2}\big)^{1/2}\,\frac{\varepsilon}{\alpha p}\big[1-e^{-\frac{\alpha tp}{\varepsilon}}\big].
\end{aligned}
$$
And for $J_{22}$, it follows from the boundedness of $h$ that
$$
J_{22} \leqslant 2^{p-1}C\,T^{p/2-1}M^p\,\mathbb{E}^\varepsilon\int_0^t\big|\Gamma^\varepsilon_s-\tilde{\Gamma}^\varepsilon_s\big|^p\,\mathrm{d}s.
$$
So,
$$
\mathbb{E}^\varepsilon\big[\big|\Gamma^\varepsilon_t-\tilde{\Gamma}^\varepsilon_t\big|^p\big] \leqslant C\varepsilon\big(\mathbb{E}^\varepsilon\|z^\varepsilon(0)-\tilde{z}^\varepsilon(0)\|^{2p}_{H_1\times H_2}\big)^{1/2}+C\,\mathbb{E}^\varepsilon\int_0^t\big|\Gamma^\varepsilon_s-\tilde{\Gamma}^\varepsilon_s\big|^p\,\mathrm{d}s.
$$
By the Gronwall inequality it holds that
$$
\mathbb{E}^\varepsilon\big[\big|\Gamma^\varepsilon_t-\tilde{\Gamma}^\varepsilon_t\big|^p\big] \leqslant C\varepsilon\big(\mathbb{E}^\varepsilon\|z^\varepsilon(0)-\tilde{z}^\varepsilon(0)\|^{2p}_{H_1\times H_2}\big)^{1/2}.
$$
Thus,
$$
J_2 \leqslant 2^{p-1}\|\phi\|^p\,C\varepsilon\big(\mathbb{E}^\varepsilon\|z^\varepsilon(0)-\tilde{z}^\varepsilon(0)\|^{2p}_{H_1\times H_2}\big)^{1/2}. \qquad (8)
$$
Finally, combining (6) with (7) and (8), we obtain that
$$
\mathbb{E}^\varepsilon|\rho^\varepsilon_t(\phi)-\tilde{\rho}^\varepsilon_t(\phi)|^p \leqslant 2^{p-1}\|\phi\|^p C^p_{L,\gamma_2,\alpha}\,e^{-\frac{\alpha tp}{\varepsilon}}\big(\mathbb{E}^\varepsilon\|z^\varepsilon(0)-\tilde{z}^\varepsilon(0)\|^{2p}_{H_1\times H_2}\big)^{1/2}e^{p(2p-1)M^2T/2}
+2^{p-1}\|\phi\|^p\,C\varepsilon\big(\mathbb{E}^\varepsilon\|z^\varepsilon(0)-\tilde{z}^\varepsilon(0)\|^{2p}_{H_1\times H_2}\big)^{1/2}.
$$
This proves the lemma. $\Box$

Now, we are ready to state and prove the main result of the paper. First, we give two concepts used in the proof of Theorem 4.5.

Definition 4.3. A set $\mathcal{M}\subset C_b^1(H_1)$ is said to strongly separate points in $H_1$ if, whenever $\lim_{n\to\infty}\phi(x_n)=\phi(x)$ for all $\phi\in\mathcal{M}$ and some $x,x_n\in H_1$, it follows that $\lim_{n\to\infty}x_n=x$.

Definition 4.4. A set $\mathcal{N}\subset C_b^1(H_1)$ is said to be convergence determining for the topology of weak convergence of probability measures if, whenever $\mu_n$ and $\mu$ are probability measures on $\mathcal{B}(H_1)$ such that $\lim_{n\to\infty}\int_{H_1}\phi\,\mathrm{d}\mu_n=\int_{H_1}\phi\,\mathrm{d}\mu$ for every $\phi\in\mathcal{N}$, then $\mu_n$ converges weakly to $\mu$.

Theorem 4.5. (Approximation by the reduced filter on the invariant manifold) Under (H1)-(H5), there exists a positive constant $C$ such that for $\phi\in C_b^1(H_1)$,
$$
\mathbb{E}|\pi^\varepsilon_t(\phi)-\tilde{\pi}^\varepsilon_t(\phi)|^p \leqslant C\|\phi\|^p\big(\mathbb{E}\|z^\varepsilon(0)-\tilde{z}^\varepsilon(0)\|^{16p}_{H_1\times H_2}\big)^{1/16}\big(e^{-\frac{4\alpha tp}{\varepsilon}}+\varepsilon\big)^{1/4}.
$$
Thus, for the distance $d(\cdot,\cdot)$ in the space of probability measures that induces the weak convergence, the following approximation holds:
$$
\mathbb{E}[d(\pi^\varepsilon_t,\tilde{\pi}^\varepsilon_t)] \leqslant C\big(\mathbb{E}\|z^\varepsilon(0)-\tilde{z}^\varepsilon(0)\|^{16p}_{H_1\times H_2}\big)^{\frac{1}{16p}}\big(e^{-\frac{4\alpha tp}{\varepsilon}}+\varepsilon\big)^{\frac{1}{4p}}.
$$

Proof. For $\phi\in C_b^1(H_1)$, the Hölder inequality, Lemma 4.1 and Lemma 4.2 allow us to obtain
$$
\begin{aligned}
\mathbb{E}|\pi^\varepsilon_t(\phi)-\tilde{\pi}^\varepsilon_t(\phi)|^p
&= \mathbb{E}\Big|\frac{\rho^\varepsilon_t(\phi)-\tilde{\rho}^\varepsilon_t(\phi)}{\tilde{\rho}^\varepsilon_t(1)}-\pi^\varepsilon_t(\phi)\frac{\rho^\varepsilon_t(1)-\tilde{\rho}^\varepsilon_t(1)}{\tilde{\rho}^\varepsilon_t(1)}\Big|^p\\
&\leqslant 2^{p-1}\mathbb{E}\Big|\frac{\rho^\varepsilon_t(\phi)-\tilde{\rho}^\varepsilon_t(\phi)}{\tilde{\rho}^\varepsilon_t(1)}\Big|^p + 2^{p-1}\mathbb{E}\Big|\pi^\varepsilon_t(\phi)\frac{\rho^\varepsilon_t(1)-\tilde{\rho}^\varepsilon_t(1)}{\tilde{\rho}^\varepsilon_t(1)}\Big|^p\\
&\leqslant 2^{p-1}\big(\mathbb{E}|\rho^\varepsilon_t(\phi)-\tilde{\rho}^\varepsilon_t(\phi)|^{2p}\big)^{1/2}\big(\mathbb{E}|\tilde{\rho}^\varepsilon_t(1)|^{-2p}\big)^{1/2}
+2^{p-1}\|\phi\|^p\big(\mathbb{E}|\rho^\varepsilon_t(1)-\tilde{\rho}^\varepsilon_t(1)|^{2p}\big)^{1/2}\big(\mathbb{E}|\tilde{\rho}^\varepsilon_t(1)|^{-2p}\big)^{1/2}\\
&= 2^{p-1}\big(\mathbb{E}^\varepsilon[|\rho^\varepsilon_t(\phi)-\tilde{\rho}^\varepsilon_t(\phi)|^{2p}\Gamma^\varepsilon_T]\big)^{1/2}\big(\mathbb{E}^\varepsilon[|\tilde{\rho}^\varepsilon_t(1)|^{-2p}\Gamma^\varepsilon_T]\big)^{1/2}\\
&\quad+2^{p-1}\|\phi\|^p\big(\mathbb{E}^\varepsilon[|\rho^\varepsilon_t(1)-\tilde{\rho}^\varepsilon_t(1)|^{2p}\Gamma^\varepsilon_T]\big)^{1/2}\big(\mathbb{E}^\varepsilon[|\tilde{\rho}^\varepsilon_t(1)|^{-2p}\Gamma^\varepsilon_T]\big)^{1/2}\\
&\leqslant 2^{p-1}\big(\mathbb{E}^\varepsilon|\rho^\varepsilon_t(\phi)-\tilde{\rho}^\varepsilon_t(\phi)|^{4p}\big)^{1/4}\big(\mathbb{E}^\varepsilon|\tilde{\rho}^\varepsilon_t(1)|^{-4p}\big)^{1/4}\big(\mathbb{E}^\varepsilon|\Gamma^\varepsilon_T|^2\big)^{1/2}\\
&\quad+2^{p-1}\|\phi\|^p\big(\mathbb{E}^\varepsilon|\rho^\varepsilon_t(1)-\tilde{\rho}^\varepsilon_t(1)|^{4p}\big)^{1/4}\big(\mathbb{E}^\varepsilon|\tilde{\rho}^\varepsilon_t(1)|^{-4p}\big)^{1/4}\big(\mathbb{E}^\varepsilon|\Gamma^\varepsilon_T|^2\big)^{1/2}\\
&\leqslant C\|\phi\|^p\big(\mathbb{E}^\varepsilon\|z^\varepsilon(0)-\tilde{z}^\varepsilon(0)\|^{8p}_{H_1\times H_2}\big)^{1/8}\big(e^{-\frac{4\alpha tp}{\varepsilon}}+\varepsilon\big)^{1/4}\big(\mathbb{E}^\varepsilon|\Gamma^\varepsilon_T|^2\big)^{1/2}\\
&= C\|\phi\|^p\big(\mathbb{E}\big[\|z^\varepsilon(0)-\tilde{z}^\varepsilon(0)\|^{8p}_{H_1\times H_2}|\Gamma^\varepsilon_T|^{-1}\big]\big)^{1/8}\big(e^{-\frac{4\alpha tp}{\varepsilon}}+\varepsilon\big)^{1/4}\big(\mathbb{E}^\varepsilon|\Gamma^\varepsilon_T|^2\big)^{1/2}\\
&\leqslant C\|\phi\|^p\big(\mathbb{E}\|z^\varepsilon(0)-\tilde{z}^\varepsilon(0)\|^{16p}_{H_1\times H_2}\big)^{1/16}\big(\mathbb{E}|\Gamma^\varepsilon_T|^{-2}\big)^{1/16}\big(e^{-\frac{4\alpha tp}{\varepsilon}}+\varepsilon\big)^{1/4}\big(\mathbb{E}^\varepsilon|\Gamma^\varepsilon_T|^2\big)^{1/2}.
\end{aligned}
$$
In the following, we estimate $\mathbb{E}(\Gamma^\varepsilon_T)^{-2}$ and $\mathbb{E}^\varepsilon|\Gamma^\varepsilon_T|^2$. By a direct calculation, it holds that
$$
\begin{aligned}
\mathbb{E}(\Gamma^\varepsilon_T)^{-2}
&= \mathbb{E}\exp\Big\{-2\int_0^T\langle h(x^\varepsilon_s,y^\varepsilon_s),\mathrm{d}U_s\rangle_{H_3}-\int_0^T\|h(x^\varepsilon_s,y^\varepsilon_s)\|^2_{H_3}\,\mathrm{d}s\Big\}\\
&= \mathbb{E}\Big[\exp\Big\{-2\int_0^T\langle h(x^\varepsilon_s,y^\varepsilon_s),\mathrm{d}U_s\rangle_{H_3}-\frac{2^2}{2}\int_0^T\|h(x^\varepsilon_s,y^\varepsilon_s)\|^2_{H_3}\,\mathrm{d}s\Big\}\\
&\qquad\cdot\exp\Big\{\frac{2^2}{2}\int_0^T\|h(x^\varepsilon_s,y^\varepsilon_s)\|^2_{H_3}\,\mathrm{d}s-\int_0^T\|h(x^\varepsilon_s,y^\varepsilon_s)\|^2_{H_3}\,\mathrm{d}s\Big\}\Big]\\
&\leqslant \exp\{M^2T\},
\end{aligned}
$$
where the last step is based on the fact that $\exp\big\{-2\int_0^t\langle h(x^\varepsilon_s,y^\varepsilon_s),\mathrm{d}U_s\rangle_{H_3}-\frac{2^2}{2}\int_0^t\|h(x^\varepsilon_s,y^\varepsilon_s)\|^2_{H_3}\,\mathrm{d}s\big\}$ is an exponential martingale under $\mathbb{P}$. By a similar deduction, we know that
$$
\mathbb{E}^\varepsilon|\Gamma^\varepsilon_T|^2 \leqslant \exp\{M^2T\}.
$$
Thus,
$$
\mathbb{E}|\pi^\varepsilon_t(\phi)-\tilde{\pi}^\varepsilon_t(\phi)|^p \leqslant C\|\phi\|^p\big(\mathbb{E}\|z^\varepsilon(0)-\tilde{z}^\varepsilon(0)\|^{16p}_{H_1\times H_2}\big)^{1/16}\big(e^{-\frac{4\alpha tp}{\varepsilon}}+\varepsilon\big)^{1/4}.
$$
Next, notice that there exists a countable algebra $\{\phi_i, i=1,2,\cdots\}$ in $C_b^1(H_1)$ that strongly separates points in $H_1$. By [5, Theorem 3.4.5], it furthermore holds that $\{\phi_i, i=1,2,\cdots\}$ is convergence determining for the topology of weak convergence of probability measures. For two probability measures $\mu$, $\tau$ on $\mathcal{B}(H_1)$, set
$$
d(\mu,\tau) := \sum_{i=1}^{\infty}\frac{\big|\int_{H_1}\phi_i\,\mathrm{d}\mu-\int_{H_1}\phi_i\,\mathrm{d}\tau\big|}{2^i},
$$
and then $d$ is a distance on the space of probability measures on $\mathcal{B}(H_1)$. Since $\{\phi_i, i=1,2,\cdots\}$ is convergence determining for the topology of weak convergence of probability measures, $d$ induces the weak convergence. The second estimate then follows by applying the first one, together with the Jensen inequality, to each $\phi_i$ (normalized so that $\|\phi_i\|\leqslant 1$) and summing over $i$. The proof is complete. $\Box$
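As a purely numerical aside (not from the paper), the distance $d(\mu,\tau)$ defined above can be approximated for sample-based measures by truncating the series; the test functions $\phi_i$ below are hypothetical stand-ins for the countable family used in the proof.

```python
import numpy as np

# Truncated version of d(mu, tau) = sum_i |int phi_i dmu - int phi_i dtau| / 2^i
# for empirical (sample-based) measures on R. The family phi_i is a hypothetical
# choice of bounded Lipschitz functions.
def phi(i, x):
    return np.sin(i * x) / (1.0 + i)          # bounded and Lipschitz for each i

def d_truncated(samples_mu, samples_tau, n_terms=30):
    total = 0.0
    for i in range(1, n_terms + 1):
        diff = np.mean(phi(i, samples_mu)) - np.mean(phi(i, samples_tau))
        total += abs(diff) / 2.0**i
    return total

rng = np.random.default_rng(5)
mu_samples = rng.normal(0.0, 1.0, size=5000)
tau_samples = rng.normal(0.1, 1.0, size=5000)  # slightly shifted law
print(d_truncated(mu_samples, tau_samples))
print(d_truncated(mu_samples, mu_samples))     # zero for identical samples
```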

References

[1] L. Arnold, Random Dynamical Systems, Springer, Berlin, 1998.
[2] A. Bain and D. Crisan, Fundamentals of Stochastic Filtering, Springer, Berlin, 2009.
[3] G. Da Prato and J. Zabczyk, Stochastic Equations in Infinite Dimensions, Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge, 1992.
[4] J. Duan, K. Lu and B. Schmalfuss, Invariant manifolds for stochastic partial differential equations, Ann. Probab., 31(2003), 2109-2135.
[5] S. N. Ethier and T. G. Kurtz, Markov Processes: Characterization and Convergence, John Wiley & Sons, 1986.
[6] H. Fu, X. Liu and J. Duan, Slow manifolds for multi-time-scale stochastic evolutionary systems, Comm. Math. Sci., 11(2013), 141-162.
[7] P. Imkeller, N. S. Namachchivaya, N. Perkowski and H. C. Yeong, Dimensional reduction in nonlinear filtering: a homogenization approach, The Annals of Applied Probability, 23(2013), 2290-2326.
[8] R. Z. Khasminskii and G. Yin, On transition densities of singularly perturbed diffusions with fast and slow components, SIAM J. Appl. Math., 56(1996), 1794-1819.
[9] A. Papanicolaou and K. Spiliopoulos, Dimension reduction in statistical estimation of partially observed multiscale processes, SIAM J. Uncertainty Quantification, 5(2017), 1220-1247.
[10] H. Qiao, Convergence of nonlinear filtering for stochastic dynamical systems with Lévy noises, https://arxiv.org/abs/1707.07824.
[11] H. Qiao, Y. Zhang and J. Duan, Effective filtering on a random slow manifold, https://arxiv.org/abs/1711.05377.
[12] W. Wang and A. J. Roberts, Slow manifold and averaging for slow-fast stochastic differential system, J. Math. Anal. Appl., 398(2013), 822-839.
[13] B. L. Rozovskii, Stochastic Evolution Systems: Linear Theory and Applications to Nonlinear Filtering, Springer, New York, 1990.
[14] Z. Schuss, Nonlinear Filtering and Optimal Phase Tracking, Springer, New York, 2012.
[15] B. Schmalfuß and R. Schneider, Invariant manifolds for random dynamical systems with slow and fast variables, J. Dyn. Diff. Equat., 20(2008), 133-164.
[16] G. A. Pavliotis and A. M. Stuart, Multiscale Methods: Averaging and Homogenization, Springer Science+Business Media, New York, 2008.
[17] Y. Zhang, H. Qiao and J. Duan, Effective filtering analysis for non-Gaussian dynamic systems, https://arxiv.org/abs/1803.07380.

