Exponential Convergence for 3D Stochastic Primitive Equations of the Large Scale Ocean

arXiv:1506.08514v1 [math.PR] 29 Jun 2015

Zhao Dong$^1$, Jianliang Zhai$^2$, Rangrang Zhang$^1$

$^1$ Institute of Applied Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, No. 55 Zhongguancun East Road, Haidian District, Beijing, 100190, P. R. China
$^2$ School of Mathematical Sciences, University of Science and Technology of China, No. 96 Jinzhai Road, Hefei, 230026, P. R. China

([email protected], [email protected], [email protected])
Abstract. In this paper, we consider the ergodicity of the three-dimensional stochastic primitive equations of large scale oceanic motion. We prove that if the noise is at the same time sufficiently smooth and non-degenerate, then the weak solutions converge exponentially fast to the unique equilibrium. The coupling method introduced by C. Odasso [25] plays a key role.
1 Introduction

The large-scale motion of the ocean can be well modeled by the 3D viscous primitive equations. Beyond their considerable significance in applications, the primitive equations have generated much interest in the mathematics community due to their rich nonlinear, nonlocal character and their anisotropic structure. There are many works on the global well-posedness and the long time dynamics of the deterministic primitive oceanic and atmospheric equations. In [17, 18, 19, 20], the authors set up the mathematical framework to study the viscous primitive equations and obtained the existence of global weak solutions, but uniqueness is still open. In [9], the global existence of strong solutions to the primitive equations with small initial data was established; moreover, the authors showed the local existence and uniqueness of strong solutions to the equations. In [12], the authors proved the global existence and uniqueness of strong solutions to the primitive equations under a small depth hypothesis for a large set of initial data whose size depends on the thickness of the domain. In [2], C. Cao and E. S. Titi developed a beautiful approach to dealing with the $L^6$-norm of the fluctuation $\tilde v$ of the horizontal velocity, and obtained the global well-posedness of the 3D viscous primitive equations. Despite these successful developments for the deterministic primitive equations, introducing uncertainty into the 'exact' model equations is reasonable and necessary in the study of the primitive equations of the large scale ocean. The introduction of stochastic processes is aimed at accounting for a
number of uncertainties and errors. For example, the external forcing of the ocean comes mainly from the atmosphere, and the atmospheric forcing field must be regarded as random; see, e.g., [7, 10, 22, 27] and the references therein. On the other hand, as noted in [5], these 'exact' models are numerically intractable: they cannot be fully solved with present supercomputers (and will not be for any foreseeable future). We refer to [5] and [10] for more details on the motivations for introducing uncertainty. Recently, there have been several works on the well-posedness and the long time behavior of the stochastic primitive equations. When the initial value $U_0$ is $V$-valued, B. Guo and D. Huang [10] proved global well-posedness and the existence of a global random attractor when only the velocity $v$ is driven by additive noise; A. Debussche, N. Glatt-Holtz, R. Temam and M. Ziane [5] proved the existence of a unique global pathwise solution when $v$ and $T$ are both driven by random forcing. Concerning the long time behavior of the stochastic case, T. Tachim Medjo [23] showed that the weak solution converges exponentially in mean square, and almost surely exponentially, to the stationary solution. In that paper, the author not only assumed that the viscosity $\nu$ is large enough, but also assumed that the covariance operator of the noise satisfies a certain exponential decay property. In [8], the authors established the continuity of the Markovian semigroup associated with strong solutions of the 3D stochastic primitive equations, and proved the existence of an invariant measure. In their paper, only the velocity part of the 3D stochastic primitive equations is considered, and the uniqueness of the invariant measure is still open. In this paper, our main result establishes that if the noise is at the same time sufficiently smooth and non-degenerate, then the weak solutions of the 3D stochastic primitive equations converge exponentially fast to equilibrium.
More precisely, given a weak solution, there exists a stationary solution such that the total variation distance between the law of the given solution and the law of the stationary solution converges to zero exponentially fast. Due to the lack of uniqueness of weak solutions for the 3D stochastic primitive equations, we only consider solutions which are limits of Galerkin approximations. A natural concern is that the invariant measures might depend on the Galerkin approximation sequences. However, we can prove that all the invariant measures obtained from Galerkin approximations coincide. In particular, we obtain the uniqueness of the invariant measure for the strong solution $(v, T)$. For the convergence rate, we obtain the exponential property both for the weak and the strong solutions. Due to the lack of uniqueness for weak solutions, it is not straightforward to define a Markov evolution associated to weak solutions of the 3D stochastic primitive equations. We overcome this difficulty by working with the Galerkin approximations $Y_N = (v_N, S_N)$. Our proof relies on the modified coupling method of C. Odasso [25]. Applying the Bismut-Elworthy-Li formula, a coupling of $Y_N$ with good properties is constructed, which ensures that coupling can be achieved for initial data which are small in a sufficiently smooth norm. Then we obtain exponential mixing for $Y_N$ with constants which are controlled uniformly in $N$. Another important ingredient is to prove that $Y_N$ enters a small ball in the smooth norm and that the time of entry into this ball admits an exponential moment independent of $N$. Combining these two results, we establish exponential mixing for $Y_N$ with a constant independent of $N$. Finally, taking the limit, we obtain exponential mixing for weak solutions
which are limits of Galerkin approximations. We refer to [11, 13, 15, 16] and the references therein for a comprehensive account of the coupling method. To prove that the time of entry into a small ball admits an exponential moment, we need to deal with $\|\cdot\|_6$, $\|\cdot\|_4^2$, $\|\cdot\|_6^2$, $\|\cdot\|_4^3$ and other high order Sobolev norms. To obtain the exponential moment, similarly as in [25], we need to introduce stopping times to avoid estimating high order norms of $Y_N$ directly. This is highly non-trivial. This paper is organized into six sections. In Sects. 2 and 3, we introduce the 3D stochastic primitive equations and review some basic preliminary results. Our main results are given in Sect. 4: the existence theorem of global weak solutions for the 3D stochastic primitive equations and exponential mixing for the weak solutions. We then prove these main results in Sects. 5 and 6, respectively.
2 Preliminaries

The 3D stochastic primitive equations of the large-scale ocean under a stochastic forcing, in a Cartesian system, are written as
\[
\frac{\partial v}{\partial t} + (v\cdot\nabla)v + \theta\frac{\partial v}{\partial z} + fk\times v + \nabla P + L_1 v = \Theta_1(t,x,y,z), \tag{2.1}
\]
\[
\partial_z P + T = 0, \tag{2.2}
\]
\[
\nabla\cdot v + \partial_z\theta = 0, \tag{2.3}
\]
\[
\frac{\partial T}{\partial t} + (v\cdot\nabla)T + \theta\frac{\partial T}{\partial z} + L_2 T = \Theta_2(t,x,y,z), \tag{2.4}
\]
where the horizontal velocity field $v = (v_1, v_2)$, the three-dimensional velocity field $(v_1, v_2, \theta)$, the temperature $T$ and the pressure $P$ are the unknown functions, $f$ is the Coriolis parameter and $k$ is the vertical unit vector. $W_1$ and $W_2$ are two independent cylindrical Wiener processes on $H_1$ and $H_2$, respectively; $H_1$ and $H_2$ will be defined below. Let
\[
\Theta_1(t,x,y,z) = \tilde\phi(v,T)\,\frac{dW_1}{dt}, \qquad \Theta_2(t,x,y,z) = \tilde\varphi(v,T)\,\frac{dW_2}{dt},
\]
which are two external random forcings. We set $\nabla = (\partial_x, \partial_y)$ to be the horizontal gradient operator and $\Delta = \partial_x^2 + \partial_y^2$ to be the horizontal Laplacian. The viscosity and heat diffusion operators $L_1$ and $L_2$ are given by
\[
L_1 v = -\frac{1}{Re_1}\Delta v - \frac{1}{Re_2}\frac{\partial^2 v}{\partial z^2}, \qquad
L_2 T = -\frac{1}{Rt_1}\Delta T - \frac{1}{Rt_2}\frac{\partial^2 T}{\partial z^2},
\]
where $Re_1$, $Re_2$ are positive constants representing the horizontal and vertical Reynolds numbers, respectively, and $Rt_1$, $Rt_2$ are positive constants standing for the horizontal and vertical heat diffusivities, respectively. In the whole article, we assume that the constants $Re_1, Re_2, Rt_1, Rt_2$ are all equal to 1; without this assumption the results of this paper still hold.
The space domain for (2.1)-(2.4) is $\mathcal{O} = D\times(-1,0)$, where $D$ is a smooth bounded open domain in $\mathbb{R}^2$. The boundary value conditions for (2.1)-(2.4) are given by
\[
\partial_z v = 0,\quad \theta = 0,\quad \partial_z T = 0 \quad \text{on } D\times\{0\} = \Gamma_u, \tag{2.5}
\]
\[
\partial_z v = 0,\quad \theta = 0,\quad \partial_z T = 0 \quad \text{on } D\times\{-1\} = \Gamma_b, \tag{2.6}
\]
\[
v\cdot n = 0,\quad \frac{\partial v}{\partial n}\times n = 0,\quad \frac{\partial T}{\partial n} = 0 \quad \text{on } \partial D\times[-1,0] = \Gamma_l, \tag{2.7}
\]
where $n$ is the normal vector to $\Gamma_l$. Integrating (2.3) from $-1$ to $z$ and using (2.5), (2.6), we have
\[
\theta(t,x,y,z) = \Phi(v)(t,x,y,z) = -\int_{-1}^{z}\nabla\cdot v(t,x,y,z')\,dz', \tag{2.8}
\]
moreover,
\[
\int_{-1}^{0}\nabla\cdot v\,dz = 0.
\]
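As a quick consistency check (using only (2.3) and the boundary conditions (2.5), (2.6) stated above), the expression (2.8) indeed recovers $\theta$:

```latex
% Differentiating (2.8) in z recovers the incompressibility condition (2.3):
\partial_z \Phi(v)(t,x,y,z)
  = -\partial_z \int_{-1}^{z} \nabla\cdot v(t,x,y,z')\,dz'
  = -\nabla\cdot v(t,x,y,z),
% so \nabla\cdot v + \partial_z \Phi(v) = 0, i.e. (2.3) holds with \theta = \Phi(v).
% Moreover \Phi(v)|_{z=-1} = 0, matching \theta = 0 on \Gamma_b in (2.6),
% and evaluating (2.8) at z = 0 together with \theta = 0 on \Gamma_u gives
\int_{-1}^{0} \nabla\cdot v\,dz = 0.
```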
Supposing that $p_b$ is a certain unknown function on $\Gamma_b$, and integrating (2.2) from $-1$ to $z$, we rewrite (2.1)-(2.4) as
\[
\frac{\partial v}{\partial t} + (v\cdot\nabla)v + \Phi(v)\frac{\partial v}{\partial z} + fk\times v + \nabla p_b - \int_{-1}^{z}\nabla T\,dz' + L_1 v = \tilde\phi(v,T)\,\frac{dW_1}{dt}, \tag{2.9}
\]
\[
\frac{\partial T}{\partial t} + (v\cdot\nabla)T + \Phi(v)\frac{\partial T}{\partial z} + L_2 T = \tilde\varphi(v,T)\,\frac{dW_2}{dt}, \tag{2.10}
\]
\[
\int_{-1}^{0}\nabla\cdot v\,dz = 0. \tag{2.11}
\]
The boundary value conditions for (2.9)-(2.11) are given by
\[
\partial_z v = 0,\quad \partial_z T = 0 \quad \text{on } \Gamma_u, \tag{2.12}
\]
\[
\partial_z v = 0,\quad \partial_z T = 0 \quad \text{on } \Gamma_b, \tag{2.13}
\]
\[
v\cdot n = 0,\quad \frac{\partial v}{\partial n}\times n = 0,\quad \frac{\partial T}{\partial n} = 0 \quad \text{on } \Gamma_l. \tag{2.14}
\]
Denote $U = (v, T)$; the initial value condition is
\[
U(0) = u_0 = (v_0, T_0). \tag{2.15}
\]
In the following, we make a scale transformation of $T$. Let $S = \sqrt{C_0}\,T$, where $C_0$ is a positive constant described in Sect. 4; then (2.9)-(2.15) can be changed into the following equations:
\[
\frac{\partial v}{\partial t} + (v\cdot\nabla)v + \Phi(v)\frac{\partial v}{\partial z} + fk\times v + \nabla p_b - \frac{1}{\sqrt{C_0}}\int_{-1}^{z}\nabla S\,dz' + L_1 v = \phi(v,S)\,\frac{dW_1}{dt}, \tag{2.16}
\]
\[
\frac{\partial S}{\partial t} + (v\cdot\nabla)S + \Phi(v)\frac{\partial S}{\partial z} + L_2 S = \varphi(v,S)\,\frac{dW_2}{dt}, \tag{2.17}
\]
\[
\int_{-1}^{0}\nabla\cdot v\,dz = 0, \tag{2.18}
\]
where
\[
\phi(v,S) = \tilde\phi(v,T), \qquad \varphi(v,S) = \sqrt{C_0}\,\tilde\varphi(v,T).
\]
The boundary value conditions for (2.16)-(2.18) are given by
\[
\partial_z v = 0,\quad \partial_z S = 0 \quad \text{on } \Gamma_u, \tag{2.19}
\]
\[
\partial_z v = 0,\quad \partial_z S = 0 \quad \text{on } \Gamma_b, \tag{2.20}
\]
\[
v\cdot n = 0,\quad \frac{\partial v}{\partial n}\times n = 0,\quad \frac{\partial S}{\partial n} = 0 \quad \text{on } \Gamma_l. \tag{2.21}
\]
Denote $Y = (v, S)$; the initial value condition is
\[
Y(0) = y_0 = (v_0, S_0). \tag{2.22}
\]
It is easy to see that the ergodicity of $U$ is equivalent to that of $Y$. In the following, our aim is to prove the exponential mixing of $Y$. We call (2.16)-(2.22) the initial boundary value problem of the 3D viscous stochastic primitive equations, denoted by IBVP.
3 Formulation of IBVP

3.1 Some Functional Spaces

Let $L(K_1, K_2)$ (resp. $L_2(K_1, K_2)$) be the space of bounded (resp. Hilbert-Schmidt) linear operators from the Hilbert space $K_1$ to $K_2$. We denote by $|\cdot|_{L^2(D)}$ and $|\cdot|_{H^p(D)}$ the norms of $L^2(D)$ and $H^p(D)$, respectively. $|\cdot|$ and $(\cdot,\cdot)$ denote the norm and inner product of $L^2(\mathcal{O})$. Let $|\cdot|_p$ and $|\cdot|_{H^p(\mathcal{O})}$ be the norms of $L^p(\mathcal{O})$ and the Sobolev space $H^p(\mathcal{O})$, respectively, where
\[
H^p(\mathcal{O}) = \{Y\in L^2(\mathcal{O}) \mid \partial_\alpha Y\in L^2(\mathcal{O}) \text{ for } |\alpha|\le p\}, \qquad
|Y|^2_{H^p(\mathcal{O})} = \sum_{|\alpha|\le p}|\partial_\alpha Y|^2.
\]
It is known that $(H^p(\mathcal{O}), |\cdot|_{H^p(\mathcal{O})})$ is a Hilbert space. We now define the working spaces for the equations (2.16)-(2.22). Let
\[
\mathcal{V}_1 := \Big\{v\in (C^\infty(\mathcal{O}))^2 \,;\ \frac{\partial v}{\partial z}\Big|_{\Gamma_u,\Gamma_b} = 0,\ v\cdot n\big|_{\Gamma_l} = 0,\ \frac{\partial v}{\partial n}\times n\Big|_{\Gamma_l} = 0,\ \int_{-1}^{0}\nabla\cdot v\,dz = 0\Big\},
\]
\[
\mathcal{V}_2 := \Big\{S\in C^\infty(\mathcal{O}) \,;\ \frac{\partial S}{\partial z}\Big|_{\Gamma_u} = 0,\ \frac{\partial S}{\partial z}\Big|_{\Gamma_b} = 0,\ \frac{\partial S}{\partial n}\Big|_{\Gamma_l} = 0\Big\},
\]
and define
$V_1$ = the closure of $\mathcal{V}_1$ with respect to the norm $|\cdot|_{H^1(\mathcal{O})}\times|\cdot|_{H^1(\mathcal{O})}$,
$V_2$ = the closure of $\mathcal{V}_2$ with respect to the norm $|\cdot|_{H^1(\mathcal{O})}$,
$H_1$ = the closure of $\mathcal{V}_1$ with respect to the norm $|\cdot|\times|\cdot|$,
$H_2$ = the closure of $\mathcal{V}_2$ with respect to the norm $|\cdot|$,
\[
V = V_1\times V_2, \qquad H = H_1\times H_2.
\]
The inner products and norms on $V$, $H$ are given by
\[
(Y, Y_1)_V = (v, v_1)_{V_1} + (S, S_1)_{V_2},
\]
\[
(Y, Y_1) = (v, v_1) + (S, S_1) = (v^{(1)}, v_1^{(1)}) + (v^{(2)}, v_1^{(2)}) + (S, S_1),
\]
\[
\|Y\|_V^2 = \|v\|_{V_1}^2 + \|S\|_{V_2}^2, \qquad \|Y\|_V = (Y, Y)_V^{\frac12},
\]
where $Y = (v, S)$, $Y_1 = (v_1, S_1)$. By the Riesz representation theorem, we can identify the dual space $H'$ of $H$ with $H$ (respectively $H_i' = H_i$, $i = 1, 2$). Then we have
\[
V\subset H = H'\subset V',
\]
where the two inclusions are continuous and compact.
3.2 Some Functionals

Define three bilinear forms $a: V\times V\to\mathbb{R}$, $a_1: V_1\times V_1\to\mathbb{R}$, $a_2: V_2\times V_2\to\mathbb{R}$ and their corresponding linear operators $A: V\to V'$, $A_1: V_1\to V_1'$, $A_2: V_2\to V_2'$ by setting
\[
a(Y, Y_1) := (AY, Y_1) = a_1(v, v_1) + a_2(S, S_1),
\]
where
\[
a_1(v, v_1) := (A_1 v, v_1) = \int_{\mathcal{O}}\Big(\nabla v\cdot\nabla v_1 + \frac{\partial v}{\partial z}\cdot\frac{\partial v_1}{\partial z}\Big)\,dxdydz,
\]
\[
a_2(S, S_1) := (A_2 S, S_1) = \int_{\mathcal{O}}\Big(\nabla S\cdot\nabla S_1 + \frac{\partial S}{\partial z}\,\frac{\partial S_1}{\partial z}\Big)\,dxdydz,
\]
for any $Y = (v, S)$, $Y_1 = (v_1, S_1)\in V$. The following lemma follows readily.

Lemma 3.1. (i) The forms $a$, $a_i$ ($i = 1, 2$) are coercive and continuous; therefore the operators $A: V\to V'$ and $A_i: V_i\to V_i'$ ($i = 1, 2$) are isomorphisms. Moreover,
\[
a(Y, Y_1)\le C_1\|Y\|_V\|Y_1\|_V, \qquad a(Y, Y)\ge C_2\|Y\|_V^2,
\]
where $C_1$ and $C_2$ are two absolute constants (independent of the physically relevant constants $Re_i$, $Rt_i$, etc.).
(ii) The isomorphism $A: V\to V'$ (respectively $A_i: V_i\to V_i'$, $i = 1, 2$) can be extended to a self-adjoint unbounded linear operator on $H$ (respectively on $H_i$, $i = 1, 2$), with compact inverse $A^{-1}: H\to H$ (respectively $A_i^{-1}: H_i\to H_i$, $i = 1, 2$).
It is known that $A_1$ is a self-adjoint operator with discrete spectrum in $H_1$. Denote by $\{k_n\}_{n=1,2,\cdots}$ an eigenbasis of $H_1$ associated to the increasing sequence $\{\nu_n\}_{n=1,2,\cdots}$ of eigenvalues of $A_1$. Similarly, $A_2$ is a self-adjoint operator with discrete spectrum in $H_2$; let $\{l_n\}_{n=1,2,\cdots}$ be an eigenbasis of $H_2$ associated to the increasing sequence $\{\lambda_n\}_{n=1,2,\cdots}$ of eigenvalues of $A_2$. Let
\[
\bar e_{n,0} = \begin{pmatrix} k_n \\ 0 \end{pmatrix}, \qquad \bar e_{0,m} = \begin{pmatrix} 0 \\ l_m \end{pmatrix};
\]
then we can rearrange $\{\bar e_{n,0}, \bar e_{0,m}\}$, denote the rearranged family by $\{e_n\}_{n=1,2,\cdots}$, and denote the associated eigenvalues of $(A, D(A))$ by the increasing sequence $\{\mu_n\}_{n=1,2,\cdots}$. The fractional power $(A^s, D(A^s))$ of the operator $(A, D(A))$ for $s\in\mathbb{R}$ is given by
\[
D(A^s) = \Big\{Y = \sum_{n=1}^{\infty}y_n e_n \,;\ \sum_{n=1}^{\infty}\mu_n^{2s}|y_n|^2 < \infty\Big\}, \qquad
A^s Y = \sum_{n=1}^{\infty}\mu_n^s y_n e_n \ \text{ for } Y = \sum_{n=1}^{\infty}y_n e_n.
\]
For any $s\in\mathbb{R}$, set
\[
H_A^s = D(A^{\frac s2}), \qquad \|Y\|_{A^s} = |A^{\frac s2}Y|.
\]
It is obvious that $(H_A^s, \|\cdot\|_{A^s})$ is a Hilbert space, that $(H^0, \|\cdot\|_0) = (H, |\cdot|)$ and that $(H^1, \|\cdot\|_1) = (V, \|\cdot\|)$. Similarly, we define $(H_{A_1}^s, \|\cdot\|_{A_1^s})$ and $(H_{A_2}^s, \|\cdot\|_{A_2^s})$; for convenience, all of them are denoted by $(H^s, \|\cdot\|_s)$.

Now we define three functionals $b: V\times V\times V\to\mathbb{R}$, $b_i: V_1\times V_i\times V_i\to\mathbb{R}$ ($i = 1, 2$) and the associated operators $B: V\times V\to V'$, $B_i: V_1\times V_i\to V_i'$ ($i = 1, 2$) by setting
\[
b(Y, Y_1, Y_2) := (B(Y, Y_1), Y_2) = b_1(v, v_1, v_2) + b_2(v, S_1, S_2),
\]
\[
b_1(v, v_1, v_2) := (B_1(v, v_1), v_2) = \int_{\mathcal{O}}\Big[(v\cdot\nabla)v_1 + \Phi(v)\frac{\partial v_1}{\partial z}\Big]\cdot v_2\,dxdydz,
\]
\[
b_2(v, S_1, S_2) := (B_2(v, S_1), S_2) = \int_{\mathcal{O}}\Big[(v\cdot\nabla)S_1 + \Phi(v)\frac{\partial S_1}{\partial z}\Big]S_2\,dxdydz,
\]
for any $Y = (v, S)$, $Y_i = (v_i, S_i)\in V$. Then we have

Lemma 3.2. For any $Y\in V$, $Y_1\in V$,
\[
(B(Y, Y_1), Y_1) = b(Y, Y_1, Y_1) = b_1(v, v_1, v_1) = b_2(v, S_1, S_1) = 0.
\]
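A short sketch of the cancellation behind Lemma 3.2 (shown here for $b_2$ with smooth fields; the general case follows by density, and $b_1$ is analogous): writing the quadratic term as a derivative of $S_1^2/2$ and integrating by parts, the divergence-free relation coming from (2.3) and (2.8) makes everything vanish.

```latex
b_2(v, S_1, S_1)
  = \int_{\mathcal{O}} \Big[ v\cdot\nabla\Big(\tfrac{S_1^2}{2}\Big)
      + \Phi(v)\,\partial_z\Big(\tfrac{S_1^2}{2}\Big) \Big] dxdydz
% integrate by parts; the boundary terms vanish by v\cdot n|_{\Gamma_l} = 0
% and \Phi(v)|_{z=-1} = 0, \Phi(v)|_{z=0} = 0 (see (2.8) and (2.11)):
  = -\int_{\mathcal{O}} \big( \nabla\cdot v + \partial_z\Phi(v) \big)\,
      \tfrac{S_1^2}{2}\, dxdydz
  = 0,
% since \partial_z\Phi(v) = -\nabla\cdot v by (2.8).
```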
Moreover, we define another functional $g: V\times V\to\mathbb{R}$ and the associated linear operator $G: V\to V'$ by
\[
g(Y, Y_1) := (G(Y), Y_1)
= \int_{\mathcal{O}}\Big[f(k\times v)\cdot v_1 + \Big(\nabla p_b - \frac{1}{\sqrt{C_0}}\int_{-1}^{z}\nabla S\,dz'\Big)\cdot v_1\Big]\,dxdydz.
\]
By (2.18), we have
\[
(v, \nabla p_b) = \Big(\int_{-1}^{0}v\,dz,\ \nabla p_b\Big)_{L^2(D)} = -\Big(p_b,\ \int_{-1}^{0}\nabla\cdot v\,dz\Big)_{L^2(D)} = 0,
\]
and since $(v, fk\times v) = 0$, we obtain the following lemma.
Lemma 3.3. (i)
\[
g(Y, Y) = (G(Y), Y) = -\frac{1}{\sqrt{C_0}}\int_{\mathcal{O}}\Big[\int_{-1}^{z}\nabla S\,dz'\Big]\cdot v\,dxdydz.
\]
(ii) There exists a constant $C$ such that
\[
|(G(Y), Y)|\le C\big(|S|\|v\|\vee\|S\||v|\big), \tag{3.23}
\]
\[
|(G(Y), Y_1)|\le C|v||v_1| + C\big(|S|\|v_1\|\vee\|S\||v_1|\big). \tag{3.24}
\]
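A sketch of how the first bound in (3.23) can be derived from part (i) (for smooth fields; density gives the general case): integrating by parts horizontally (the boundary term vanishes since $v\cdot n = 0$ on $\Gamma_l$) and applying Cauchy-Schwarz together with the integral Minkowski inequality recalled below,

```latex
|(G(Y),Y)|
  = \frac{1}{\sqrt{C_0}}
    \Big|\int_{\mathcal{O}}\Big(\int_{-1}^{z} S\,dz'\Big)\nabla\cdot v\;dxdydz\Big|
  \le \frac{1}{\sqrt{C_0}}
    \Big|\int_{-1}^{z} S\,dz'\Big|_{L^2(\mathcal{O})}\,|\nabla\cdot v|
  \le C\,|S|\,\|v\|.
% The companion bound C\,\|S\|\,|v| follows without integrating by parts,
% estimating |\int_{-1}^{z}\nabla S\,dz'|_{L^2(\mathcal{O})} \le C\,|\nabla S|
% directly and pairing with |v|; either bound implies (3.23) as stated.
```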
3.3 Some Inequalities

We recall some interpolation inequalities used later (see [1]). For $h\in H^1(D)$,
\[
|h|_{L^4(D)}\le c|h|_{L^2(D)}^{\frac12}|h|_{H^1(D)}^{\frac12}, \qquad
|h|_{L^5(D)}\le c|h|_{L^3(D)}^{\frac35}|h|_{H^1(D)}^{\frac25}, \qquad
|h|_{L^6(D)}\le c|h|_{L^4(D)}^{\frac23}|h|_{H^1(D)}^{\frac13}.
\]
For $h\in H^1(\mathcal{O})$,
\[
|h|_3\le c|h|^{\frac12}|h|_{H^1(\mathcal{O})}^{\frac12}, \qquad
|h|_4\le c|h|^{\frac14}|h|_{H^1(\mathcal{O})}^{\frac34}, \qquad
|h|_6\le c|h|_{H^1(\mathcal{O})}, \qquad
|h|_\infty\le c|h|_{H^1(\mathcal{O})}^{\frac12}|h|_{H^2(\mathcal{O})}^{\frac12}.
\]
At last, we recall the integral version of the Minkowski inequality for the $L^p$ spaces, $p\ge 1$. Let $\mathcal{O}_1\subset\mathbb{R}^{m_1}$ and $\mathcal{O}_2\subset\mathbb{R}^{m_2}$ be two measurable sets, where $m_1$ and $m_2$ are two positive integers. Suppose that $f(\xi,\eta)$ is measurable over $\mathcal{O}_1\times\mathcal{O}_2$. Then
\[
\Big[\int_{\mathcal{O}_1}\Big(\int_{\mathcal{O}_2}|f(\xi,\eta)|\,d\eta\Big)^p d\xi\Big]^{1/p}
\le \int_{\mathcal{O}_2}\Big(\int_{\mathcal{O}_1}|f(\xi,\eta)|^p\,d\xi\Big)^{1/p}d\eta.
\]
Lemma 3.4. (see [2]) If $v_1\in H^1(\mathcal{O})$, $v_2\in H^3(\mathcal{O})$, $v_3\in H^3(\mathcal{O})$, then
(i) $\big|\int_{\mathcal{O}}v_3\cdot[(v_1\cdot\nabla)v_2]\,dxdydz\big|\le c|\nabla v_2||v_3|_3|v_1|_6\le c|\nabla v_2||v_3|^{\frac12}|\nabla v_3|^{\frac12}|\nabla v_1|$,
(ii) $\big|\int_{\mathcal{O}}\Phi(v_1)\frac{\partial v_2}{\partial z}\cdot v_3\,dxdydz\big|\le c|\nabla v_1||v_3|^{\frac12}|\nabla v_3|^{\frac12}\big|\frac{\partial v_2}{\partial z}\big|^{\frac12}\big|\nabla\frac{\partial v_2}{\partial z}\big|^{\frac12}$.
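The first inequality in (i) is a direct Hölder estimate with exponents $(2, 3, 6)$, followed by the interpolation inequalities recalled above:

```latex
\Big|\int_{\mathcal{O}} v_3\cdot[(v_1\cdot\nabla)v_2]\,dxdydz\Big|
  \le |\nabla v_2|_{L^2}\,|v_3|_{L^3}\,|v_1|_{L^6}
% (Hölder with 1/2 + 1/3 + 1/6 = 1), and then, by
% |v_3|_3 \le c|v_3|^{1/2}|v_3|_{H^1}^{1/2} and |v_1|_6 \le c|v_1|_{H^1},
  \le c\,|\nabla v_2|\,|v_3|^{\frac12}|\nabla v_3|^{\frac12}\,|\nabla v_1|,
% where, on the velocity spaces above, a Poincaré-type inequality allows
% the H^1-norms to be controlled by the gradient terms.
```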
3.4 Definition of Martingale Solution

We merge (2.16) and (2.17) and use the functionals defined above to obtain the stochastic evolution equation
\[
dY(t) + AY(t)dt + B(Y(t), Y(t))dt + G(Y(t))dt = \Psi(Y(t))dW(t), \qquad Y(0) = y_0, \tag{3.25}
\]
where
\[
W = \begin{pmatrix} W_1 \\ W_2 \end{pmatrix}, \qquad
\Psi(Y) = \begin{pmatrix} \phi(v, S) & 0 \\ 0 & \varphi(v, S) \end{pmatrix}.
\]

Definition 3.1. We say that there exists a martingale solution of equation (3.25) if, for any $T > 0$, there exist a stochastic basis $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\in[0,T]}, P)$, a cylindrical Wiener process $W$ on $H$ and a progressively measurable process $Y: [0,T]\times\Omega\to H$ such that
\[
Y\in L^\infty(0,T;H)\cap L^2(0,T;V)\cap C\big(0,T;D(A^{-\frac\alpha2})\big), \quad P\text{-a.s.},
\]
where $\alpha > 3$ is any fixed number, and for all $\psi\in D(A^{\frac\alpha2})$ the identity
\[
(Y(t),\psi) + \int_0^t\big\langle AY(s),\psi\big\rangle ds + \int_0^t\big\langle B(Y(s),Y(s)),\psi\big\rangle ds + \int_0^t\big\langle G(Y(s)),\psi\big\rangle ds
= (y_0,\psi) + \Big(\int_0^t\Psi(Y(s))dW(s),\psi\Big), \quad \forall t\in[0,T],\ P\text{-a.s.},
\]
holds.
If $\mathcal{D}(Y(0)) = \lambda\in\mathcal{P}(H)$, we denote $P_\lambda = \mathcal{D}(Y)$. Here $\mathcal{D}(Z)$ stands for the law of the random variable $Z$.
4 Hypothesis and Main Results

4.1 Hypothesis

Suppose that $C_0\ge\frac{2}{\lambda_1}$ in the preliminaries. The covariance operator $\Psi$ satisfies the following Hypothesis H0, which is similar to the one introduced in [25].

Hypothesis H0. $W_1$ and $W_2$ are two independent cylindrical Wiener processes on $H_1$ and $H_2$, respectively.
(1) There exist $\varepsilon_0 > 0$ and a family $\{\Psi_n\}_{n=1,2,\cdots}$ of continuous mappings $H\to\mathbb{R}$ with continuous Fréchet derivatives such that
\[
\Psi(y)dW = \sum_{n=1}^{\infty}\Psi_n(y)e_n\,dW^n \quad \text{where } W = \sum_{n=1}^{\infty}W^n e_n, \qquad
\kappa_0 = \sup_{y\in H}\sum_{n=1}^{\infty}\mu_n^{2+\varepsilon_0}|\Psi_n(y)|^2 < \infty.
\]
(2) There exists $\kappa_1$ such that for any $y, \eta\in H^3$,
\[
\sum_{n=1}^{\infty}|\Psi_n'(y)\cdot\eta|^2\mu_n^3 < \kappa_1\|\eta\|_3^2.
\]
(3) For any $y\in H$ and $n\in\mathbb{N}$, $\Psi_n(y) > 0$, and
\[
\kappa_2 = \sup_{y\in H}|\Psi^{-1}(y)|^2_{L(H^4;H)} < \infty,
\]
where
\[
\Psi^{-1}(y)\cdot h = \sum_{n=1}^{\infty}\Psi_n^{-1}(y)h_n e_n \quad \text{for } h = \sum_{n=1}^{\infty}h_n e_n.
\]
Set $\kappa = \kappa_0 + \kappa_1 + \kappa_2 + 1$.

Remark 1. $\Psi = A^{-\frac\beta2}$ fulfills Hypothesis H0, provided $\beta\in(\frac72, 4]$.

Hypothesis H1. $\Psi: H\to L_2(H, H)$ is a continuous and bounded Lipschitz mapping; in particular,
\[
|\Psi(y)|^2_{L_2(H,H)}\le\lambda_0|y|^2 + \rho, \quad y\in H,
\]
for some constants $\lambda_0\ge 0$, $\rho\ge 0$.

Remark 2. It is easy to see that Hypothesis H0 implies Hypothesis H1.
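A sketch of why the range $\beta\in(\frac72, 4]$ in Remark 1 is natural (assuming the standard Weyl asymptotics $\mu_n\sim c\,n^{2/3}$ for the eigenvalues of the second-order operator $A$ on the three-dimensional domain $\mathcal{O}$, and $\mu_1\ge 1$ for simplicity): for $\Psi = A^{-\beta/2}$ one has $\Psi_n(y) = \mu_n^{-\beta/2}$, and condition (1) of Hypothesis H0 becomes

```latex
\kappa_0 = \sup_{y\in H}\sum_{n=1}^{\infty}\mu_n^{2+\varepsilon_0}|\Psi_n(y)|^2
         = \sum_{n=1}^{\infty}\mu_n^{2+\varepsilon_0-\beta}
         \sim c\sum_{n=1}^{\infty} n^{\frac{2}{3}(2+\varepsilon_0-\beta)} < \infty
\iff \tfrac{2}{3}(\beta-2-\varepsilon_0) > 1
\iff \beta > \tfrac{7}{2}+\varepsilon_0,
% so an admissible \varepsilon_0 > 0 exists exactly when \beta > 7/2.
% The restriction \beta \le 4 keeps the non-degeneracy condition (3) finite:
|\Psi^{-1}(y)h|^2 = \sum_{n=1}^{\infty}\mu_n^{\beta}|h_n|^2
                 \le \sum_{n=1}^{\infty}\mu_n^{4}|h_n|^2 = \|h\|_4^2
\quad (\text{using } \mu_n\ge 1 \text{ and } \beta\le 4).
```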
4.2 Main Results

Theorem 4.1. Under Hypothesis H1, there exists a martingale solution $Y(t)$ of (3.25).

Its proof is given in Sect. 5. However, the uniqueness of martingale solutions is still open. Before stating the second theorem, we recall some definitions. Let $E$ be a Polish space. The set of all probability measures on $E$ is denoted by $\mathcal{P}(E)$. The set of all bounded measurable (resp. uniformly continuous) maps from $E$ to $\mathbb{R}$ is denoted by $B_b(E;\mathbb{R})$ (resp. $UC_b(E;\mathbb{R})$). Since the total variation norm $\|\cdot\|_{var}$ is the dual norm of $|\cdot|_\infty$, for $\|\cdot\|_{var}$ associated to the space $H^s$ with $s < -3$ we have, for any probability measure $\lambda'\in\mathcal{P}(H)$,
\[
\|\lambda'\|_{var} = \sup_{|g|_\infty\le 1}\Big|\int_{H^s}g(y)\lambda'(dy)\Big|,
\]
where the supremum is taken over $g\in UC_b(H^s)$ with $|g|_\infty\le 1$.

Theorem 4.2. Assume that Hypothesis H0 holds. There exist $C$ and $\gamma > 0$, depending only on $\Psi$, $\mathcal{O}$, $\varepsilon_0$, such that for any martingale solution $P_\lambda$ with initial law $\lambda\in\mathcal{P}(H)$ which is a limit of Galerkin approximations of (5.27), there exists a stationary martingale solution $P_\mu$ with initial law $\mu\in\mathcal{P}(H)$ such that
\[
\|\mathcal{D}_{P_\lambda}(Y(t)) - \mu\|_{var}\le Ce^{-\gamma t}\Big(1 + \int_H|y|^2\lambda(dy)\Big), \tag{4.26}
\]
provided
\[
\int_H|y|^2\lambda(dy) < \infty,
\]
where $\|\cdot\|_{var}$ is the total variation norm associated to the space $H^s$ for $s < -3$.
Remark 3. In Theorem 4.2, the stationary measure $\mu$ seems to depend on $\lambda$ and on the Galerkin approximation subsequence $\{N_k\}$, as we only consider solutions which are limits of Galerkin approximations. However, the following Corollary 4.3 tells us that $\mu$ is independent of $\lambda$ and $\{N_k\}$.

Corollary 4.3. All invariant measures for martingale solutions which are limits of Galerkin approximations are the same; i.e., the above $\mu$ is independent of $\lambda$ and $\{N_k\}$. In particular, the invariant measure $\mu_V$ of the strong solution $U = (v, T)$ is unique.

In order to prove Theorem 4.2, we need two propositions, Proposition 6.2 and Proposition 6.3, which are verified in Section 6.2 and Section 6.3, respectively.
5 Existence of Martingale Solutions

Our proof is based on [6]; a compactness method is applied.
5.1 Compact Embedding of Certain Functional Spaces

Let $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\ge0}, P)$ be a stochastic basis (with expectation $E$) and let $W$ be a cylindrical Wiener process with values in $H$ defined on this stochastic basis. Given $p > 1$ and $\alpha\in(0,1)$, let $W^{\alpha,p}(0,T;H)$ be the Sobolev space of all $u\in L^p(0,T;H)$ such that
\[
\int_0^T\int_0^T\frac{|u(t)-u(s)|^p}{|t-s|^{1+\alpha p}}\,dt\,ds < \infty,
\]
endowed with the norm
\[
|u|^p_{W^{\alpha,p}(0,T;H)} = \int_0^T|u(t)|^p\,dt + \int_0^T\int_0^T\frac{|u(t)-u(s)|^p}{|t-s|^{1+\alpha p}}\,dt\,ds.
\]
For any progressively measurable process $f\in L^2(\Omega\times[0,T];L_2(H,H))$, denote by $I(f)$ the Itô integral defined as
\[
I(f)(t) = \int_0^t f(s)dW(s), \quad t\in[0,T].
\]
Clearly, I( f ) is a progressively measurable process in L2 (Ω × [0, T ]; H).
Lemma 5.1. Let $p\ge 2$ and $\alpha < \frac12$ be given. Then for any progressively measurable process $f\in L^p(\Omega\times[0,T];L_2(H,H))$,
\[
I(f)\in L^p(\Omega; W^{\alpha,p}(0,T;H)),
\]
and there exists a constant $C(p,\alpha) > 0$ independent of $f$ such that
\[
E|I(f)|^p_{W^{\alpha,p}(0,T;H)}\le C(p,\alpha)\,E\int_0^T|f(t)|^p_{L_2(H,H)}\,dt.
\]
Lemma 5.2. Let $B_0\subset B\subset B_1$ be Banach spaces, with $B_0$ and $B_1$ reflexive and the embedding of $B_0$ in $B$ compact. Let $p\in(1,\infty)$ and $\alpha\in(0,1)$ be given, and let $X$ be the space
\[
X = L^p(0,T;B_0)\cap W^{\alpha,p}(0,T;B_1),
\]
endowed with the natural norm. Then the embedding of $X$ in $L^p(0,T;B)$ is compact.

Lemma 5.3. If $B_1\subset\tilde B$ are two Banach spaces with compact embedding, and the real numbers $\alpha\in(0,1)$, $p > 1$ satisfy $\alpha p > 1$, then the space $W^{\alpha,p}(0,T;B_1)$ is compactly embedded into $C(0,T;\tilde B)$.
5.2 Proof of Theorem 4.1

We divide the proof of this theorem into three steps.

Step 1. Let $P_n$ be the operator from $D(A^{-\frac32})$ to $D(A^{\frac32})$ defined as
\[
P_n x = \sum_{i=1}^{n}\langle x, e_i\rangle e_i, \quad x\in D(A^{-\frac32}).
\]
Here we denote by $\langle\cdot,\cdot\rangle$ the dual pairing between $D(A^{\frac32})$ and $D(A^{-\frac32})$. Then $\langle P_n x, y\rangle = \langle x, P_n y\rangle$ for all $x, y\in D(A^{-\frac32})$. Its restriction to $H$ is the orthogonal projection onto $P_n H := \mathrm{Span}\{e_1,\cdots,e_n\}$. Let $B_n(Y, Y)$ be the Lipschitz operator in $P_n H$ defined as
\[
B_n(Y, Y) = \chi_n(Y)B(Y, Y), \quad Y\in P_n H,
\]
where $\chi_n: H\to\mathbb{R}$ is defined as $\chi_n(Y) = \Theta_n(|Y|)$, with $\Theta_n: \mathbb{R}\to[0,1]$ of class $C^\infty$, such that
\[
\chi_n(Y) = \begin{cases} 1, & \text{if } |Y|\le n, \\ 0, & \text{if } |Y| > n+1. \end{cases}
\]
Consider the classical Galerkin approximation scheme defined by the processes $Y_n(t)\in P_n H$, solutions of
\[
dY_n + AY_n\,dt + P_n B_n(Y_n, Y_n)dt + P_n G(Y_n)dt = P_n\Psi(Y_n)dW(t), \quad t\in[0,T], \qquad Y_n(0) = P_n y_0. \tag{5.27}
\]
Noticing (3.23), (3.24) and the fact that $B$ is locally Lipschitz from $V\times V$ to $D(A^{-\frac32})$, all the coefficients are continuous and of linear growth in $P_n H$; thus this equation has a unique solution $Y_n\in L^2(\Omega; C(0,T;P_n H))$. Moreover, for each $p\ge 2$, applying the Itô formula to $|Y_n|^p$ and using the classical method of [6], by
\[
(Y_n, G(Y_n))\le\frac{1}{\sqrt{C_0}}|S_n|\|v_n\|\le\frac12\|Y_n\|^2 + \frac{1}{2C_0}|Y_n|^2,
\]
we can prove that there exist two positive constants $C_1(p)$, $C_2$, independent of $n$, such that
\[
E\Big(\sup_{0\le s\le T}|Y_n(s)|^p\Big)\le C_1(p), \tag{5.28}
\]
\[
E\int_0^T\|Y_n(s)\|^2\,ds\le C_2. \tag{5.29}
\]
Step 2. Decompose $Y_n$ as
\[
Y_n(t) = P_n y_0 - \int_0^t AY_n(s)ds - \int_0^t P_n B_n(Y_n(s), Y_n(s))ds - \int_0^t P_n G(Y_n(s))ds + \int_0^t P_n\Psi(Y_n(s))dW(s)
=: J_n^1 + J_n^2(t) + J_n^3(t) + J_n^4(t) + J_n^5(t).
\]
By (5.28),
\[
E|J_n^1|^2\le C_3, \qquad
E|J_n^4|^2_{W^{1,2}(0,T;V')}\le CE\Big(\sup_{0\le s\le T}|Y_n(s)|^2\Big)\le C_4.
\]
From (5.29),
\[
E|J_n^2|^2_{W^{1,2}(0,T;V')}\le CE\int_0^T\|Y_n(s)\|^2\,ds\le C_5,
\]
for suitable positive constants $C_3$, $C_4$, $C_5$. As to $J_n^5$, from Lemma 5.1 and (5.28), under Hypothesis H1 we have
\[
E|J_n^5|^2_{W^{\beta,2}(0,T;H)}\le C_6(\beta),
\]
for all $\beta\in(0,\frac12)$ and some constant $C_6(\beta) > 0$. As to $J_n^3$, referring to [18], $\|B(Y, Y_1)\|_{-3}\le C|Y|\|Y_1\|$; then
\[
|P_n B_n(Y_n, Y_n)|^2_{L^2(0,T;D(A^{-\frac32}))}\le C_7\sup_{0\le s\le T}|Y_n(s)|^2\int_0^T\|Y_n(s)\|^2\,ds,
\]
so, by (5.28) and (5.29),
\[
E|J_n^3|_{W^{1,2}(0,T;D(A^{-\frac32}))}\le C_8\sqrt{C_1(2)C_2}.
\]
For a Banach space $B$, we know that $W^{1,p}(0,T;B)\subset W^{\alpha,p}(0,T;B)$ for all $\alpha\in(0,1)$ and $p > 1$. Collecting all the previous inequalities, we obtain
\[
E|Y_n|_{W^{\beta,2}(0,T;D(A^{-\frac32}))}\le C_9(\beta),
\]
for all $\beta\in(0,\frac12)$ and some constant $C_9(\beta) > 0$. Recalling (5.29), this implies that the laws $\mathcal{D}(Y_n)$ are bounded in probability in
\[
L^2(0,T;V)\cap W^{\beta,2}\big(0,T;D(A^{-\frac32})\big);
\]
thus, by Lemma 5.2, the family $\mathcal{D}(Y_n)$ is tight in $L^2(0,T;H)$.
Arguing similarly on the term $J_n^5$, on the basis of the estimate (5.28), we apply Lemma 5.3 and obtain that the family $\mathcal{D}(Y_n)$ is tight in $C\big(0,T;D(A^{-\frac\gamma2})\big)$ for any given $\gamma > 3$. Thus we can find a subsequence, still denoted by $Y_n$, such that $\mathcal{D}(Y_n)$ converges weakly in $L^2(0,T;H)\cap C\big(0,T;D(A^{-\frac\gamma2})\big)$.

Step 3. Fix $\gamma > 3$. By the Skorohod embedding theorem, there exist a stochastic basis $(\Omega^1, \mathcal{F}^1, \{\mathcal{F}_t^1\}_{t\in[0,T]}, P^1)$ and $L^2(0,T;H)\cap C\big(0,T;D(A^{-\frac\gamma2})\big)$-valued random variables $Y^1$, $Y_n^1$, $n\ge 1$, such that $Y_n^1$ has the same law as $Y_n$ on $L^2(0,T;H)\cap C\big(0,T;D(A^{-\frac\gamma2})\big)$, and $Y_n^1\to Y^1$ in $L^2(0,T;H)\cap C\big(0,T;D(A^{-\frac\gamma2})\big)$, $P^1$-a.s. Of course, for each $n$,
\[
\mathcal{D}(Y_n^1)\big(C(0,T;P_n H)\big) = 1,
\]
and by (5.28) and (5.29), we have
\[
E^{P^1}\Big(\sup_{0\le s\le T}|Y_n^1(s)|^p\Big)\le C_1(p), \qquad
E^{P^1}\int_0^T\|Y_n^1(s)\|^2\,ds\le C_2,
\]
for all $n$ and $p\ge 2$. Hence we also have
\[
Y^1\in L^2(0,T;V)\cap L^\infty(0,T;H), \quad P^1\text{-a.s.},
\]
and $Y_n^1\to Y^1$ weakly in $L^2(\Omega\times[0,T];V)$. For each $n\ge 1$, define the process $M_n^1(t)$ with trajectories in $C(0,T;H)$ as
\[
M_n^1(t) = Y_n^1(t) - P_n Y^1(0) + \int_0^t AY_n^1(s)ds + \int_0^t P_n B_n(Y_n^1(s), Y_n^1(s))ds + \int_0^t P_n G(Y_n^1(s))ds.
\]
In fact, $M_n^1$ is a square integrable martingale with respect to the filtration $\mathcal{G}_t^n = \sigma\{Y_n^1(s), s\le t\}$, with quadratic variation
\[
[M_n^1]_t = \int_0^t P_n\Psi(Y_n^1)\Psi(Y_n^1)^*P_n\,ds.
\]
Then, by a standard method (see [6]), we obtain the martingale solution.
6 Exponential Convergence

Let $\lambda$ and $Y$ be a probability measure and a random variable on $(\Omega, \mathcal{F})$, respectively. The distribution $\mathcal{D}_\lambda(Y)$ denotes the law of $Y$ under $\lambda$. A weak solution $P_\mu$ with initial law $\mu$ is said to be stationary if, for any $t\ge 0$, $\mu$ is equal to $\mathcal{D}_{P_\mu}(Y(t))$. For $N\in\mathbb{N}$, we consider the following finite dimensional approximation of (3.25):
\[
dY_N + AY_N\,dt + P_N B(Y_N, Y_N)dt + P_N G(Y_N)dt = P_N\Psi(Y_N)dW(t), \qquad Y_N(0) = P_N y_0. \tag{6.30}
\]
It is easy to show that for any given $y_0\in H$, (6.30) has a unique solution $Y_N = Y_N(\cdot, y_0) = (v_N(\cdot, y_0), S_N(\cdot, y_0))$. Define
\[
(P_t^N\psi)(y_0) = E[\psi(Y_N(t, y_0))], \quad \text{for }\psi\in B_b(P_N H).
\]
$Y_N(\cdot, y_0)$ verifies the strong Markov property, which implies that $(P_t^N)_{t\in\mathbb{R}^+}$ is a Markov transition semigroup on $P_N H$. In the following, $P_N$ is omitted for simplicity of notation. The Itô formula applied to $|Y_N(\cdot, y_0)|^2$ gives
\[
d|Y_N|^2 + 2\|Y_N\|^2 dt = -2\big(Y_N, B(Y_N, Y_N)\big)dt - 2\big(Y_N, G(Y_N)\big)dt + 2\big(Y_N, \Psi(Y_N)dW\big) + \|\Psi(Y_N)\|^2_{L_2(H,H)}dt.
\]
By Lemma 3.2, we have $\big(Y_N, B(Y_N, Y_N)\big) = 0$. By Lemma 3.3, we obtain
\[
d|Y_N|^2 + 2\|Y_N\|^2 dt
= 2\big(Y_N, \Psi(Y_N)dW\big) + 2\Big(v_N,\ \frac{1}{\sqrt{C_0}}\int_{-1}^{z}\nabla S_N\,dz'\Big)dt + \|\Psi(Y_N)\|^2_{L_2(H,H)}dt
\]
\[
= 2\big(Y_N, \Psi(Y_N)dW\big) - 2\Big(\nabla\cdot v_N,\ \frac{1}{\sqrt{C_0}}\int_{-1}^{z}S_N\,dz'\Big)dt + \|\Psi(Y_N)\|^2_{L_2(H,H)}dt
\]
\[
\le 2\big(Y_N, \Psi(Y_N)dW\big) + \frac{2}{\sqrt{C_0}}|S_N|\|v_N\|dt + \|\Psi(Y_N)\|^2_{L_2(H,H)}dt.
\]
Then, by $ab\le\frac12(a^2+b^2)$ and $\|Y_N\|^2 = \|v_N\|^2 + \|S_N\|^2$, we have
\[
d|Y_N|^2 + \|Y_N\|^2 dt + \|S_N\|^2 dt\le 2(Y_N, \Psi(Y_N)dW) + \frac{2}{C_0}|S_N|^2 dt + \kappa\,dt.
\]
Since $C_0\ge\frac{2}{\lambda_1}$ and $\|S_N\|^2\ge\lambda_1|S_N|^2$, we have $\frac{2}{C_0}|S_N|^2\le\|S_N\|^2$. Thus
\[
d|Y_N|^2 + \|Y_N\|^2 dt\le 2(Y_N, \Psi(Y_N)dW) + \kappa\,dt. \tag{6.31}
\]
By $\|Y_N\|^2\ge\mu_1|Y_N|^2$, applying integration by parts to $e^{\mu_1 t}|Y_N(t)|^2$, then integrating and taking expectation, we obtain
\[
E|Y_N(t)|^2\le e^{-\mu_1 t}|y_0|^2 + \frac{\kappa}{\mu_1}. \tag{6.32}
\]
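The step from (6.31) to (6.32) can be written out as follows (a standard Gronwall-type computation):

```latex
% Multiply (6.31) by e^{\mu_1 t} and use \|Y_N\|^2 \ge \mu_1 |Y_N|^2:
d\big(e^{\mu_1 t}|Y_N(t)|^2\big)
  = \mu_1 e^{\mu_1 t}|Y_N(t)|^2\,dt + e^{\mu_1 t}\,d|Y_N(t)|^2
  \le 2e^{\mu_1 t}\big(Y_N, \Psi(Y_N)dW\big) + \kappa\,e^{\mu_1 t}\,dt.
% Integrating over [0, t], taking expectation (the stochastic integral
% has zero mean), and dividing by e^{\mu_1 t}:
E|Y_N(t)|^2
  \le e^{-\mu_1 t}|y_0|^2 + \kappa\,e^{-\mu_1 t}\int_0^t e^{\mu_1 s}\,ds
  \le e^{-\mu_1 t}|y_0|^2 + \frac{\kappa}{\mu_1}.
```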
Hence, applying the Krylov-Bogoliubov criterion (see [3]), we obtain that $(P_t^N)_t$ admits an invariant measure $\mu_N$ and that every invariant measure has a moment of order two in $H$. Let $Y_0^N$ be a random variable whose law is $\mu_N$ and which is independent of $W$; then $Y_N = Y_N(\cdot, Y_0^N)$ is a stationary solution of (6.30). Integrating (6.31), we obtain
\[
E|Y_N(t)|^2 + E\int_0^t\|Y_N(s)\|^2\,ds\le E|Y_N(0)|^2 + \kappa t. \tag{6.33}
\]
Since the law of $Y_N(s)$ is $\mu_N$ for any $s\ge 0$, it follows that
\[
\int_{P_N H}\|y\|^2\mu_N(dy)\le\kappa. \tag{6.34}
\]
Using arguments similar to those in the proof of Theorem 4.1, the laws $(P_{\mu_N}^N)$ of $Y_N(\cdot, Y_0^N)$ are tight in $L^2(0,T;H)\cap C\big(0,T;D(A^{-\frac\gamma2})\big)$, $\gamma > 3$. Then a subsequence, still denoted by $(P_{\mu_N}^N)$, converges in law to $P_\mu$, a stationary martingale solution of (3.25) with initial law $\mu$. We deduce from (6.34) that
\[
\int_H\|y\|^2\mu(dy)\le\kappa.
\]
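The bound (6.34) follows from (6.33), stationarity and Fubini's theorem: since the law of $Y_N(s)$ is $\mu_N$ for all $s\ge 0$, in particular $E|Y_N(t)|^2 = E|Y_N(0)|^2$, and

```latex
% From (6.33), cancel E|Y_N(t)|^2 against E|Y_N(0)|^2 by stationarity:
t\int_{P_N H}\|y\|^2\mu_N(dy)
  = E\int_0^t\|Y_N(s)\|^2\,ds
  \le E|Y_N(0)|^2 - E|Y_N(t)|^2 + \kappa t
  = \kappa t,
% and dividing by t > 0 gives (6.34):
\int_{P_N H}\|y\|^2\mu_N(dy) \le \kappa.
```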
In general, it is not known whether $\mu$ is an invariant measure, due to the lack of uniqueness; moreover, we do not know whether (3.25) defines a Markov evolution. However, the above information is key to proving the uniqueness of invariant measures.
6.1 Coupling Method

The proof of Theorem 4.2 is based on coupling arguments. We now recall some basic results about coupling. Moreover, in order to explain the method in the case of non-degenerate noise, we briefly give the proof of exponential mixing for (6.30).

Let $(\lambda_1, \lambda_2)$ be two distributions on a Polish space $(E, \mathcal{B}(E))$, let $(\Omega, \mathcal{F}, P)$ be a probability space and let $(Z_1, Z_2)$ be two random variables $(\Omega, \mathcal{F})\to(E, \mathcal{B}(E))$. We say that $(Z_1, Z_2)$ is a coupling of $(\lambda_1, \lambda_2)$ if $\lambda_i = \mathcal{D}(Z_i)$ for $i = 1, 2$. The total variation $\|\lambda\|_{var}$ of a finite real measure $\lambda$ on $E$ is defined as
\[
\|\lambda\|_{var} = \sup\{|\lambda(\Gamma)| \mid \Gamma\in\mathcal{B}(E)\},
\]
where $\mathcal{B}(E)$ stands for the set of the Borelian subsets of $E$. The next result is fundamental in the coupling methods; its proof can be found, for example, in [25].

Lemma 6.1. Let $(\lambda_1, \lambda_2)$ be two probability measures on $(E, \mathcal{B}(E))$. Then
\[
\|\lambda_1 - \lambda_2\|_{var} = \min P(Z_1\neq Z_2),
\]
where the minimum is taken over all couplings $(Z_1, Z_2)$ of $(\lambda_1, \lambda_2)$. There exists a coupling which reaches the minimum value; it is called a maximal coupling.

Let us first consider the solutions of (6.30). Assume that Hypothesis H0 holds. Let $N\in\mathbb{N}$ and $(y_0^1, y_0^2)\in H\times H$. Combining arguments similar to [14], [21], it can be shown that there exists a function $p_N(\cdot) > 0$ such that
\[
\|(P_1^N)^*\delta_{y_0^2} - (P_1^N)^*\delta_{y_0^1}\|_{var}\le 1 - p_N(|y_0^1| + |y_0^2|), \tag{6.35}
\]
where $\delta_{y_0^1}$ and $\delta_{y_0^2}$ are the Dirac measures at the points $y_0^1$ and $y_0^2$, respectively. Applying Lemma 6.1, we construct a maximal coupling $(Z_1, Z_2) = \big(Z_1(y_0^1, y_0^2), Z_2(y_0^1, y_0^2)\big)$ of $\big((P_1^N)^*\delta_{y_0^1}, (P_1^N)^*\delta_{y_0^2}\big)$. It follows that
\[
P(Z_1 = Z_2)\ge p_N(|y_0^1| + |y_0^2|) > 0. \tag{6.36}
\]
Let $(W, \tilde W)$ be a couple of independent cylindrical Wiener processes on $H\times H$ and let $\delta > 0$. Denote by $Y_N(\cdot, y_0^1)$ and $\tilde Y_N(\cdot, y_0^2)$ the solutions of (6.30) with initial data $y_0^1$ and $y_0^2$ associated to $W$ and $\tilde W$, respectively. Now we can construct a couple of random variables $(V_1, V_2) = (V_1(y_0^1, y_0^2), V_2(y_0^1, y_0^2))$ on $P_N H$ for $(Y_N, \tilde Y_N)$ as follows:
\[
(V_1, V_2) = \begin{cases}
(Y_N(\cdot, y_0), Y_N(\cdot, y_0)), & \text{if } y_0^1 = y_0^2 = y_0, \\
(Z_1(y_0^1, y_0^2), Z_2(y_0^1, y_0^2)), & \text{if } (y_0^1, y_0^2)\in B_{H\times H}(0,\delta)\setminus\{y_0^1 = y_0^2\}, \\
(Y_N(\cdot, y_0^1), \tilde Y_N(\cdot, y_0^2)), & \text{else},
\end{cases} \tag{6.37}
\]
where $B_{H\times H}(0,\delta)$ is the ball of $H\times H$ with radius $\delta$. Then $(V_1(y_0^1, y_0^2), V_2(y_0^1, y_0^2))$ is a coupling of $\big((P_1^N)^*\delta_{y_0^1}, (P_1^N)^*\delta_{y_0^2}\big)$, and it can be shown that it measurably depends on $(y_0^1, y_0^2)$. We then construct a coupling $(Y^1, Y^2)$ of $\big(\mathcal{D}(Y_N(\cdot, y_0^1)), \mathcal{D}(Y_N(\cdot, y_0^2))\big)$ by induction: first, set $Y^i(0) = y_0^i$ for $i = 1, 2$; then, assuming that we have constructed $(Y^1, Y^2)$ on $\{0, 1, \cdots, k\}$, take $(V_1, V_2)$ as above, independent of $(Y^1, Y^2)$, and set $Y^i(k+1) = V_i(Y^1(k), Y^2(k))$ for $i = 1, 2$.
Taking into account (6.32), it is easily shown that the time of return of $(Y^1, Y^2)$ to $B_{H\times H}(0, 4\kappa/\mu_1)$ admits an exponential moment. We choose $\delta = 4\kappa/\mu_1$. It follows from (6.36) and (6.37) that $(Y^1(n), Y^2(n))\in B_{H\times H}(0,\delta)$ implies that the probability that $(Y^1, Y^2)$ have coupled at time $n+1$ is bounded below by $p_N(2\delta) > 0$. Finally, remark that if $(Y^1, Y^2)$ are coupled at time $n+1$, then they remain coupled at all later times. Combining these properties and using the fact that $(Y^1(n), Y^2(n))_{n\in\mathbb N}$ is a discrete strong Markov process, it is easily shown that
$$P\big(Y^1(n)\ne Y^2(n)\big) \le C_N e^{-\gamma_N n}\big(1+|y_0^1|^2+|y_0^2|^2\big), \tag{6.38}$$
with $\gamma_N > 0$. Recall that $(Y^1, Y^2)$ is a coupling of $\big(\mathcal D(Y_N(\cdot, y_0^1)), \mathcal D(Y_N(\cdot, y_0^2))\big)$ on $\mathbb N$. It follows that $(Y^1(n), Y^2(n))$ is a coupling of $\big((P_n^N)^*\delta_{y_0^1}, (P_n^N)^*\delta_{y_0^2}\big)$. Combining Lemma 6.1 and (6.38), we obtain, for $n\in\mathbb N$,
$$\|(P_n^N)^*\delta_{y_0^2} - (P_n^N)^*\delta_{y_0^1}\|_{var} \le C_N e^{-\gamma_N n}\big(1+|y_0^1|^2+|y_0^2|^2\big).$$
Setting $n = \lfloor t\rfloor$ (the integer part of $t$) and integrating $(y_0^2, y_0^1)$ over $\big((P_{t-n}^N)^*\lambda\big)\otimes\mu_N$, where $\mu_N$ is an invariant measure, it follows that, for any $\lambda\in\mathcal P(P_N H)$ with $\int_{P_N H}|y|^2\lambda(dy) < \infty$,
$$\|(P_t^N)^*\lambda - \mu_N\|_{var} \le C_N e^{-\gamma_N t}\Big(1+\int_{P_N H}|y|^2\lambda(dy)\Big). \tag{6.39}$$
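For orientation, the passage from (6.38) to the total variation bound rests on the coupling inequality contained in Lemma 6.1; in the form used here (an informal restatement, up to the normalization chosen for $\|\cdot\|_{var}$) it reads:

```latex
% Coupling inequality (informal restatement of the content of Lemma 6.1):
% for ANY coupling (X, Y) of two probability measures (mu, nu),
\|\mu - \nu\|_{var} \;\le\; c\, P(X \ne Y),
% applied above with (X, Y) = (Y^1(n), Y^2(n)) and
% (mu, nu) = ((P_n^N)^* \delta_{y_0^1}, (P_n^N)^* \delta_{y_0^2}).
```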
This result is useless when considering equations (3.25), since the constants $C_N$, $\gamma_N$ depend strongly on $N$. In the following, we prove that (6.35) holds uniformly in $N$, which implies that (6.39) holds with constants uniform in $N$, provided $y_0^1$, $y_0^2$ lie in a small ball of $H^3$. It then remains to prove that the time of return to this small ball admits an exponential moment. Moreover, in order to obtain Theorem 4.2, it is sufficient to prove the following proposition.
Proposition 6.1. Assume that Hypothesis H0 holds. Then there exist $C = C(\Psi, O, \varepsilon_0) > 0$ and $\gamma = \gamma(\Psi, O, \varepsilon_0) > 0$ such that for any $N\in\mathbb N$, there exists a unique invariant measure $\mu_N$ for $(P_t^N)_{t\in\mathbb R_+}$. Moreover, for any $\lambda\in\mathcal P(P_N H)$ with $\int_{P_N H}|y|^2\lambda(dy) < \infty$,
$$\|(P_t^N)^*\lambda - \mu_N\|_{var} \le Ce^{-\gamma t}\Big(1+\int_{P_N H}|y|^2\lambda(dy)\Big). \tag{6.40}$$
Let $\lambda\in\mathcal P(H)$ and let $E_\lambda$ be the expectation under the initial distribution $\lambda$. It is easy to see that (6.40) implies
$$\Big|E_\lambda g\big(Y_N(t)\big) - \int_H g(y)\mu_N(dy)\Big| \le Ce^{-\gamma t}|g|_\infty\Big(1+\int_{P_N H}|y|^2\lambda(dy)\Big), \tag{6.41}$$
for any $g\in UC_b(H^s)$.
6.2 Coupling of Solutions Starting from Small Initial Data

The aim of this section is to establish the following result, which is analogous to (6.36) but uniform in $N$.

Proposition 6.2. Assume that Hypothesis H0 holds. Then there exist $(\Upsilon, \delta)\in(0,1)\times(0,1)$ such that, for any $N\in\mathbb N$, there exists a coupling $\big(Z_1(y_0^1, y_0^2), Z_2(y_0^1, y_0^2)\big)$ of $\big((P_\Upsilon^N)^*\delta_{y_0^1}, (P_\Upsilon^N)^*\delta_{y_0^2}\big)$ which depends measurably on $(y_0^1, y_0^2)\in H^3\times H^3$ and verifies
$$P\big(Z_1(y_0^1, y_0^2) = Z_2(y_0^1, y_0^2)\big) \ge \frac34, \tag{6.42}$$
provided
$$\|y_0^1\|_3^2\vee\|y_0^2\|_3^2 \le \delta. \tag{6.43}$$
Assume that Hypothesis H0 holds. Let $\Upsilon\in(0,1)$. Applying Lemma 6.1, we can construct $\big(Z_1(y_0^1, y_0^2), Z_2(y_0^1, y_0^2)\big)$ as the maximal coupling of $\big((P_\Upsilon^N)^*\delta_{y_0^1}, (P_\Upsilon^N)^*\delta_{y_0^2}\big)$. Measurable dependence on $(y_0^1, y_0^2)$ follows from a slight extension of Lemma 6.1 (see [26], Remark A.1). In order to establish Proposition 6.2, it is sufficient to prove that there exists $c(\kappa, O)$, not depending on $\Upsilon\in(0,1)$ or on $N\in\mathbb N$, such that
$$\|(P_\Upsilon^N)^*\delta_{y_0^2} - (P_\Upsilon^N)^*\delta_{y_0^1}\|_{var} \le c(\kappa, O)\sqrt{\Upsilon}, \tag{6.44}$$
provided
$$\|y_0^1\|_3^2\vee\|y_0^2\|_3^2 \le \kappa\Upsilon^3. \tag{6.45}$$
Then it suffices to choose $\Upsilon \le 1/(4c(\kappa, O))^2$ and $\delta = \kappa\Upsilon^3$. Since $\|\cdot\|_{var}$ is the dual norm of $|\cdot|_\infty$, (6.44) is equivalent to
$$\big|E\, g\big(Y_N(\Upsilon, y_0^2)\big) - E\, g\big(Y_N(\Upsilon, y_0^1)\big)\big| \le 8|g|_\infty c(\kappa, O)\sqrt{\Upsilon}, \tag{6.46}$$
for any $g\in UC_b(P_N H)$. It follows from the density of $C_b^1(P_N H)\subset UC_b(P_N H)$ that, in order to establish Proposition 6.2, it is sufficient to prove that (6.46) holds for any $N\in\mathbb N$, $\Upsilon\in(0,1)$ and $g\in C_b^1(P_N H)$, provided (6.45) holds. The proof of (6.46) under condition (6.45) occupies the next three subsections.
6.2.1 A priori estimate
For any process $Y = (v, S)$, we define the $H^2$-energy of $Y$ at time $t$ by
$$E_Y^{H^2}(t) \triangleq \|Y(t)\|_2^2 + \int_0^t\|Y(s)\|_3^2\,ds.$$
For any $N\in\mathbb N$, let $P_N^1$ be the orthogonal projection in $H_1$ onto the space $P_N^1 H_1 \triangleq \mathrm{Span}\{k_1, k_2, \cdots, k_N\}$ and let $P_N^2$ be the orthogonal projection in $H_2$ onto the space $P_N^2 H_2 \triangleq \mathrm{Span}\{l_1, l_2, \cdots, l_N\}$. Denote $P_N Y = (P_N^1 v, P_N^2 S)$ for any $Y = (v, S)\in V$. We obtain the following finite-dimensional approximation of (2.16), (2.17):
$$\begin{cases}\dfrac{\partial v_N}{\partial t} + P_N^1\big[(v_N\cdot\nabla)v_N\big] + P_N^1\Big[\Phi(v_N)\dfrac{\partial v_N}{\partial z}\Big] + P_N^1\big[fk\times v_N\big] + P_N^1\nabla p_b - \dfrac{1}{\sqrt{C_0}}P_N^1\displaystyle\int_{-1}^{z}\nabla S_N\,dz' + L_1 v_N = P_N^1\phi(v_N, S_N)\dfrac{dW_1}{dt},\\ v_N(0) = P_N^1 v_0,\end{cases} \tag{6.47}$$
$$\begin{cases}\dfrac{\partial S_N}{\partial t} + P_N^2\big[(v_N\cdot\nabla)S_N\big] + P_N^2\Big[\Phi(v_N)\dfrac{\partial S_N}{\partial z}\Big] + L_2 S_N = P_N^2\varphi(v_N, S_N)\dfrac{dW_2}{dt},\\ S_N(0) = P_N^2 S_0.\end{cases} \tag{6.48}$$
For simplicity of notation, $P_N$, $P_N^1$ and $P_N^2$ will be omitted in what follows. We now establish the following result, which will be useful in the proof of (6.46).

Lemma 6.2. Assume that Hypothesis H0 holds. There exist $K_0 = K_0(O)$ and $c = c(O)$ such that for any $\Upsilon\le 1$ and any $N\in\mathbb N$, we have
$$P\Big(\sup_{(0,\Upsilon)}E^{H^2}_{Y_N(\cdot,y_0)}(t) > K_0\Big) \le c\Big(1+\frac{\kappa}{K_0}\Big)\sqrt{\Upsilon},$$
provided $\|y_0\|_2^2\le\kappa\Upsilon$.
Proof of Lemma 6.2. Let $Y_N = Y_N(\cdot, y_0)$, $v_N = v_N(\cdot, y_0)$ and $S_N = S_N(\cdot, y_0)$. The Itô formula applied to $\|Y_N\|_2^2$ gives
$$d\|Y_N\|_2^2 + 2\|Y_N\|_3^2\,dt = dM_{H^2} + I_{H^2}\,dt + J_{H^2}\,dt + \|P_N\Psi(Y_N)\|_{L_2(H;H^2)}^2\,dt,$$
where $I_{H^2} = I_{H^2}^v + I_{H^2}^S$, $J_{H^2} = J_{H^2}^v + J_{H^2}^S$, and
$$I_{H^2}^v = -2\Big(A_1^2 v_N,\ (v_N\cdot\nabla)v_N + \Phi(v_N)\frac{\partial v_N}{\partial z}\Big), \qquad J_{H^2}^v = -2\Big(A_1^2 v_N,\ fk\times v_N + \nabla p_b - \frac{1}{\sqrt{C_0}}\int_{-1}^{z}\nabla S_N\,dz'\Big),$$
$$I_{H^2}^S = -2\Big(A_2^2 S_N,\ (v_N\cdot\nabla)S_N + \Phi(v_N)\frac{\partial S_N}{\partial z}\Big), \qquad J_{H^2}^S = 0,$$
$$M_{H^2}(t) = 2\int_0^t\big(A^2 Y_N(s),\ \Psi(Y_N(s))\,dW(s)\big).$$
By the Hölder inequality and the Young inequality,
$$I_{H^2}^v \le c\|v_N\|_3\|v_N\|\,|\nabla v_N|_\infty + c\|v_N\|_3\|v_N\|_2|v_N|_\infty \le c\|v_N\|_3^{3/2}\|v_N\|\|v_N\|_2^{1/2} + c\|v_N\|_3\|v_N\|^{1/2}\|v_N\|_2^{3/2} \le \frac14\|v_N\|_3^2 + c\|v_N\|_2^6,$$
$$J_{H^2}^v \le 2\|v_N\|_3\|v_N\| + \frac{2}{\sqrt{C_0}}\|v_N\|_3\|S_N\|_2 \le \frac14\|v_N\|_3^2 + c\|v_N\|^2 + c\|S_N\|_2^2,$$
$$I_{H^2}^S \le \frac12\|S_N\|_3^2 + c\|v_N\|_2^4 + c\|v_N\|_2^8 + c\|S_N\|_2^8 + c\|S_N\|_2^4.$$
Combining the above inequalities, we obtain
$$d\|Y_N\|_2^2 + \frac32\|Y_N\|_3^2\,dt \le c\|Y_N\|_2^8\,dt + c\kappa\,dt + dM_{H^2},$$
and hence
$$d\|Y_N\|_2^2 + \|Y_N\|_3^2\,dt \le c\|Y_N\|_2^2\big(\|Y_N\|_2^6 - 8K_0^3\big)\,dt + c\kappa\,dt + dM_{H^2}, \quad\text{where}\quad K_0 = \sqrt[3]{\frac{\mu_1}{16c}}. \tag{6.49}$$
Setting $\sigma_{H^2} = \inf\{t\in(0,\Upsilon)\ |\ \|Y_N(t)\|_2^2 > 2K_0\}$, we infer from $\|y_0\|_2^2\le\kappa\Upsilon$ that for any $t\in(0,\sigma_{H^2})$,
$$E^{H^2}_{Y_N}(t) \le M_{H^2}(t) + c\kappa\Upsilon. \tag{6.50}$$
We deduce from Hypothesis H0 that $\Psi(y)^* A$ is bounded in $\mathcal L(H^2; H^2)$ by $c\kappa$. It follows that for any $t\in(0,\sigma_{H^2})$,
$$\langle M_{H^2}\rangle(t) = 4\int_0^t\big|P_N\Psi(Y_N(s))^* A^2 Y_N(s)\big|^2\,ds \le c\kappa\int_0^t\|Y_N\|_2^2\,ds \le 2c\kappa K_0\Upsilon.$$
Hence the Burkholder-Davis-Gundy inequality gives
$$E\sup_{(0,\sigma_{H^2})}M_{H^2}(t) \le cE\sqrt{\langle M_{H^2}\rangle(\sigma_{H^2})} \le c\sqrt{K_0\kappa\Upsilon} \le c(K_0+\kappa)\sqrt{\Upsilon}.$$
It follows from (6.50) and $\Upsilon\le 1$ that
$$E\sup_{(0,\sigma_{H^2})}E^{H^2}_{Y_N}(t) \le c(K_0+\kappa)\sqrt{\Upsilon},$$
which yields, by the Chebyshev inequality,
$$P\Big(\sup_{(0,\sigma_{H^2})}E^{H^2}_{Y_N}(t) > K_0\Big) \le c\Big(1+\frac{\kappa}{K_0}\Big)\sqrt{\Upsilon}.$$
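Spelled out, the Chebyshev step is the following one-line computation (a sketch; the constant $c$ changes from one inequality to the next, as throughout the paper):

```latex
P\Big(\sup_{(0,\sigma_{H^2})}E^{H^2}_{Y_N}(t) > K_0\Big)
\;\le\; \frac{1}{K_0}\, E\sup_{(0,\sigma_{H^2})}E^{H^2}_{Y_N}(t)
\;\le\; \frac{c(K_0+\kappa)\sqrt{\Upsilon}}{K_0}
\;=\; c\Big(1+\frac{\kappa}{K_0}\Big)\sqrt{\Upsilon}.
```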
Let $B = \big\{\sup_{(0,\sigma_{H^2})}E^{H^2}_{Y_N}(t)\le K_0\big\}$ and $A = \big\{\sup_{(0,\Upsilon)}E^{H^2}_{Y_N}(t)\le K_0\big\}$. Since $\sup_{(0,\sigma_{H^2})}E^{H^2}_{Y_N}(t)\le K_0$ implies $\sigma_{H^2} = \Upsilon$, we have $B\subset A$, i.e. $B^c\supset A^c$, hence $P(A^c)\le P(B^c)$, which implies Lemma 6.2. $\square$
6.2.2 Estimate of the derivative of $Y_N$
Let $N\in\mathbb N$ and $(y_0, h)\in(H^3)^2$, where $y_0 = (v_0, S_0)$. Consider the following equation:
$$\begin{cases}\dfrac{\partial\beta_N}{\partial t} + P_N\big[(v_N\cdot\nabla)\beta_N\big] + P_N\Big[\Phi(v_N)\dfrac{\partial\beta_N}{\partial z}\Big] + P_N\big[(\eta_N\cdot\nabla)Y_N\big] + P_N\Big[\Phi(\eta_N)\dfrac{\partial Y_N}{\partial z}\Big] + P_N\tilde G(\beta_N) + A\beta_N = P_N\Psi'(Y_N)\beta_N\dfrac{dW}{dt},\\ \beta_N(s, s, y_0)\cdot h = P_N h,\end{cases} \tag{6.51}$$
where
$$\tilde G(\beta_N) = \begin{pmatrix}P_N^1\, fk\times\eta_N - \dfrac{1}{\sqrt{C_0}}P_N^1\displaystyle\int_{-1}^{z}\nabla\gamma_N\,dz'\\ 0\end{pmatrix},$$
and $\eta_N(t) = \eta_N(t, s, y_0)\cdot h$, $\gamma_N(t) = \gamma_N(t, s, y_0)\cdot h$ for $t\ge s$.
Denote $\beta_N = (\eta_N, \gamma_N)$ and $\beta_N(t) = \beta_N(t, s, y_0)\cdot h$. Existence and uniqueness of the solutions of (6.51) are easily shown. Moreover, if $g\in C_b^1(P_N H)$, then, for any $t\ge 0$, we have
$$\big(\nabla(P_t^N g)(y_0),\ h\big) = E\big(\nabla g(Y_N(t, y_0)),\ \beta_N(t, 0, y_0)\cdot h\big).$$
For a process $Y = (v, S)$, set
$$\sigma(Y) = \inf\Big\{t\in(0,\Upsilon)\ \Big|\ \int_0^t\|Y(s)\|_3^2\,ds \ge K_0+1\Big\}, \tag{6.52}$$
where $K_0$ is defined in (6.49).

Lemma 6.3. Assume that Hypothesis H0 holds. Then there exists $c = c(\kappa, O)$ such that for any $N\in\mathbb N$, $\Upsilon\le 1$ and $(y_0, h)\in(H^3)^2$,
$$E\int_0^{\sigma(Y_N(\cdot,y_0))}\|\beta_N(t, 0, y_0)\cdot h\|_4^2\,dt \le c\|h\|_3^2.$$
Proof of Lemma 6.3. For simplicity, we set $\beta_N(t) = \beta_N(t, 0, y_0)\cdot h$, $\eta_N(t) = \eta_N(t, 0, y_0)\cdot h$, $\gamma_N(t) = \gamma_N(t, 0, y_0)\cdot h$ and $\sigma = \sigma(Y_N(\cdot, y_0))$. The Itô formula applied to $\|\beta_N(t)\|_3^2$ gives
$$d\|\beta_N(t)\|_3^2 + 2\|\beta_N(t)\|_4^2\,dt = dM_{\beta_N} + I_{\beta_N}\,dt + J_{\beta_N}\,dt + \|P_N\big(\Psi'(Y_N)\cdot\beta_N\big)\|_{L_2(H;H^3)}^2\,dt, \tag{6.53}$$
where
$$M_{\beta_N}(t) = 2\int_0^t\big(A^3\beta_N,\ P_N\Psi'(Y_N)\cdot\beta_N\,dW\big),$$
$$I_{\beta_N} = -2\Big(A^3\beta_N,\ (v_N\cdot\nabla)\beta_N + (\eta_N\cdot\nabla)Y_N\Big) - 2\Big(A^3\beta_N,\ \Phi(v_N)\frac{\partial\beta_N}{\partial z} + \Phi(\eta_N)\frac{\partial Y_N}{\partial z}\Big), \qquad J_{\beta_N} = -2\big(A^3\beta_N,\ \tilde G(\beta_N)\big).$$
Let $I_{\beta_N} = I_{\eta_N} + I_{\gamma_N}$, $J_{\beta_N} = J_{\eta_N} + J_{\gamma_N}$, where
$$I_{\eta_N} = -2\Big(A_1^3\eta_N,\ (v_N\cdot\nabla)\eta_N + (\eta_N\cdot\nabla)v_N\Big) - 2\Big(A_1^3\eta_N,\ \Phi(v_N)\frac{\partial\eta_N}{\partial z} + \Phi(\eta_N)\frac{\partial v_N}{\partial z}\Big),$$
$$J_{\eta_N} = -2\Big(A_1^3\eta_N,\ fk\times\eta_N - \frac{1}{\sqrt{C_0}}\int_{-1}^{z}\nabla\gamma_N\,dz'\Big),$$
$$I_{\gamma_N} = -2\Big(A_2^3\gamma_N,\ (v_N\cdot\nabla)\gamma_N + (\eta_N\cdot\nabla)S_N\Big) - 2\Big(A_2^3\gamma_N,\ \Phi(v_N)\frac{\partial\gamma_N}{\partial z} + \Phi(\eta_N)\frac{\partial S_N}{\partial z}\Big), \qquad J_{\gamma_N} = 0.$$
By the Hölder inequality, Sobolev embedding and the Young inequality,
$$I_{\eta_N} \le c\|\eta_N\|_4\|v_N\|_3|\eta_N|_\infty + c\|\eta_N\|_4\|\eta_N\|_3|v_N|_\infty \le c\|\eta_N\|_4\|v_N\|_3\|\eta_N\|_3 \le \frac14\|\eta_N\|_4^2 + c\|v_N\|_3^2\|\eta_N\|_3^2,$$
$$J_{\eta_N} \le c\|\eta_N\|_4\|\eta_N\|_2 + c\|\eta_N\|_4\|\gamma_N\|_3 \le \frac14\|\eta_N\|_4^2 + c\|\eta_N\|_2^2 + c\|\gamma_N\|_3^2.$$
Similarly, we obtain
$$I_{\gamma_N} + J_{\gamma_N} \le \frac12\|\gamma_N\|_4^2 + c\|\gamma_N\|_3^2\|v_N\|_3^2 + c\|\eta_N\|_3^2\|S_N\|_3^2.$$
Collecting all the inequalities above, we have
$$d\|\beta_N\|_3^2 + \frac32\|\beta_N\|_4^2\,dt \le c\|\beta_N\|_3^2\big(1+\|Y_N\|_3^2\big)\,dt + dM_{\beta_N}. \tag{6.54}$$
Integrating and taking the expectation on both sides of (6.54), we deduce
$$E\Big(\mathcal E(\sigma,0)\|\beta_N(\sigma)\|_3^2 + \int_0^\sigma\mathcal E(t,0)\|\beta_N(t)\|_4^2\,dt\Big) \le \|h\|_3^2,$$
where
$$\mathcal E(t,0) = e^{-ct-c\int_0^t\|Y_N(r)\|_3^2\,dr}.$$
From the definition of $\sigma$, we deduce
$$E\int_0^\sigma\|\beta_N(t)\|_4^2\,dt \le \|h\|_3^2\exp\big(c(K_0+1)+c\Upsilon\big),$$
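The removal of the exponential weight can be spelled out as follows: on $(0,\sigma)$ the definition (6.52) of $\sigma$ gives $\int_0^t\|Y_N(r)\|_3^2\,dr \le K_0+1$ and $t\le\sigma\le\Upsilon$, so the weight is bounded below by a deterministic constant,

```latex
\mathcal E(t,0) = e^{-ct - c\int_0^t \|Y_N(r)\|_3^2\,dr}
\;\ge\; e^{-c\Upsilon - c(K_0+1)} \qquad \text{for } t \le \sigma,
% dividing the weighted estimate by this constant yields
% E \int_0^\sigma \|\beta_N(t)\|_4^2\,dt \le \|h\|_3^2 \exp(c(K_0+1)+c\Upsilon).
```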
which yields Lemma 6.3. $\square$

6.2.3 Proof of (6.46)

Let $\psi\in C^\infty(\mathbb R; [0,1])$ be such that
$$\psi = \begin{cases}0, & \text{on } (K_0+1, \infty),\\ 1, & \text{on } (-\infty, K_0).\end{cases}$$
For a process $Y$, set
$$\psi_Y = \psi\Big(\int_0^\Upsilon\|Y(s)\|_3^2\,ds\Big).$$
Remark that
$$\big|E\, g\big(Y_N(\Upsilon, y_0^2)\big) - E\, g\big(Y_N(\Upsilon, y_0^1)\big)\big| \le I_0 + |g|_\infty(I_1+I_2), \tag{6.55}$$
where
$$I_0 = \Big|E\Big(g\big(Y_N(\Upsilon, y_0^2)\big)\psi_{Y_N(\cdot,y_0^2)}\Big) - E\Big(g\big(Y_N(\Upsilon, y_0^1)\big)\psi_{Y_N(\cdot,y_0^1)}\Big)\Big|, \qquad I_i = P\Big(\int_0^\Upsilon\|Y_N(s, y_0^i)\|_3^2\,ds > K_0\Big).$$
For $\theta\in[1,2]$, set
$$y_0^\theta = (2-\theta)y_0^1 + (\theta-1)y_0^2, \qquad Y_\theta = Y_N(\cdot, y_0^\theta), \qquad v_\theta = v_N(\cdot, y_0^\theta), \qquad S_\theta = S_N(\cdot, y_0^\theta),$$
$$\beta_\theta(t) = \beta_N(t, 0, y_0^\theta), \qquad \eta_\theta(t) = \eta_N(t, 0, y_0^\theta), \qquad \sigma_\theta = \sigma(Y_\theta),$$
where $\sigma$ is defined in (6.52). For better readability, the dependence on $N$ has been omitted. Setting $h = y_0^2 - y_0^1$, we have
$$I_0 \le \int_1^2|J_\theta|\,d\theta, \qquad J_\theta = \Big(\nabla_{y_0^\theta}E\big(g(Y_\theta(\Upsilon))\psi_{Y_\theta}\big),\ h\Big).$$
To estimate $J_\theta$, applying a truncated Bismut-Elworthy-Li formula, similarly to [25], we have
$$J_\theta = \frac{1}{\Upsilon}J'_{\theta,1} + 2J'_{\theta,2},$$
where
$$J'_{\theta,1} = E\Big[g\big(Y_\theta(\Upsilon)\big)\psi_{Y_\theta}\int_0^{\sigma_\theta\wedge\Upsilon}\big(\Psi^{-1}(Y_\theta(t))\cdot\beta_\theta(t)\cdot h,\ dW(t)\big)\Big],$$
$$J'_{\theta,2} = E\Big[g\big(Y_\theta(\Upsilon)\big)\psi'_{Y_\theta}\int_0^{\sigma_\theta\wedge\Upsilon}\Big(1-\frac{t}{\Upsilon}\Big)\big(A^{3/2}Y_\theta(t),\ A^{3/2}(\beta_\theta(t)\cdot h)\big)\,dt\Big], \qquad \psi'_{Y_\theta} = \psi'\Big(\int_0^\Upsilon\|Y_\theta(s)\|_3^2\,ds\Big).$$
It follows from Hypothesis H0 that
$$|J'_{\theta,1}| \le |g|_\infty\kappa\sqrt{E\int_0^{\sigma_\theta\wedge\Upsilon}\|\beta_\theta(t)\cdot h\|_4^2\,dt},$$
and from the Hölder inequality that
$$|J'_{\theta,2}| \le |g|_\infty|\psi'|_\infty\sqrt{E\int_0^{\sigma_\theta\wedge\Upsilon}\|Y_\theta(t)\|_3^2\,dt}\,\sqrt{E\int_0^{\sigma_\theta\wedge\Upsilon}\|\beta_\theta(t)\cdot h\|_3^2\,dt}.$$
Hence for any $\Upsilon < 1$,
$$|J_\theta| \le c(\kappa, O)|g|_\infty\frac{1}{\Upsilon}\sqrt{E\int_0^{\sigma_\theta\wedge\Upsilon}\|\beta_\theta(t)\cdot h\|_4^2\,dt}. \tag{6.56}$$
Combining (6.56) and Lemma 6.3, we have
$$|J_\theta| \le c(\kappa, O)|g|_\infty\frac{\|h\|_3}{\Upsilon},$$
which yields
$$I_0 \le c(\kappa, O)|g|_\infty\sqrt{\Upsilon}.$$
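The last bound uses condition (6.45): since $\|y_0^i\|_3^2\le\kappa\Upsilon^3$ for $i = 1, 2$, the increment $h = y_0^2 - y_0^1$ satisfies

```latex
\|h\|_3 \le \|y_0^1\|_3 + \|y_0^2\|_3 \le 2\sqrt{\kappa}\,\Upsilon^{3/2},
\qquad\text{so}\qquad
I_0 \le \int_1^2 c(\kappa,O)\,|g|_\infty\,\frac{\|h\|_3}{\Upsilon}\,d\theta
    \le c(\kappa,O)\,|g|_\infty\,\sqrt{\Upsilon}.
```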
Since $\kappa\Upsilon^3\le\kappa\Upsilon$, we can apply Lemma 6.2 to control $I_1 + I_2$ in (6.55) when (6.45) holds. Hence (6.46) follows provided (6.45) holds, which yields Proposition 6.2. $\square$
6.3 Time of Return in a Small Ball of $H^3$

Assume that Hypothesis H0 holds. Let $N\in\mathbb N$ and let $\Upsilon$, $\delta$, $Z_1$, $Z_2$ be as in Proposition 6.2. Let $(W, \tilde W)$ be a couple of independent cylindrical Wiener processes on $H\times H$. Denote by $Y_N(\cdot, y_0)$ and $\tilde Y_N(\cdot, y_0)$ the solutions of (5.27) associated to $W$ and $\tilde W$, respectively. We construct a couple of random variables $(V_1, V_2) = \big(V_1(y_0^1, y_0^2), V_2(y_0^1, y_0^2)\big)$ on $P_N H\times P_N H$ as follows:
$$(V_1, V_2) = \begin{cases}\big(Y_N(\cdot, y_0), Y_N(\cdot, y_0)\big), & \text{if } y_0^1 = y_0^2 = y_0,\\ \big(Z_1(y_0^1, y_0^2), Z_2(y_0^1, y_0^2)\big), & \text{if } (y_0^1, y_0^2)\in B_{H^3\times H^3}(0,\delta)\setminus\{y_0^1 = y_0^2\},\\ \big(Y_N(\cdot, y_0^1), \tilde Y_N(\cdot, y_0^2)\big), & \text{otherwise}.\end{cases} \tag{6.57}$$
Then we construct $(Y^1, Y^2)$ by induction on $\Upsilon\mathbb N$. Indeed, first set $Y^i(0) = y_0^i$ for $i = 1, 2$. Then, assuming that $(Y^1, Y^2)$ has been constructed on $\{0, \Upsilon, 2\Upsilon, \ldots, n\Upsilon\}$, take $(V_1, V_2)$ as above, independent of $(Y^1, Y^2)$, and set $Y^i((n+1)\Upsilon) = V_i\big(Y^1(n\Upsilon), Y^2(n\Upsilon)\big)$ for $i = 1, 2$. It follows that $(Y^1, Y^2)$ is a discrete strong Markov process and a coupling of $\big(\mathcal D(Y_N(\cdot, y_0^1)), \mathcal D(Y_N(\cdot, y_0^2))\big)$ on $\Upsilon\mathbb N$. Moreover, if $(Y^1, Y^2)$ are coupled at time $n\Upsilon$, then they remain coupled at all later times. Set
$$\tau = \inf\big\{t\in\Upsilon\mathbb N\setminus\{0\}\ \big|\ \|Y^1(t)\|_3^2\vee\|Y^2(t)\|_3^2\le\delta\big\}. \tag{6.58}$$
The aim of this section is to establish the following Proposition 6.3.
Proposition 6.3. Assume that Hypothesis H0 holds. There exist $\alpha = \alpha(\Psi, O, \varepsilon_0, \delta) > 0$ and $K'' = K''(\Psi, O, \varepsilon_0, \delta)$ such that for any $(y_0^1, y_0^2)\in H\times H$,
$$E(e^{\alpha\tau}) \le K''\big(1+|y_0^1|^2+|y_0^2|^2\big).$$
The proof is postponed to Section 6.3.1 and is based on the following five lemmas. Referring to [25], we have the following property of $Z(t)$, defined in Lemma 6.4.
Lemma 6.4. Assume that Hypothesis H0 holds. For any $t, M > 0$, there exists $p_0(t, M) = p_0(t, M, \varepsilon_0, \{|\Psi_n|_\infty\}_n, O) > 0$ such that for any adapted process $Y$,
$$P\Big(\sup_{(0,t)}\|Z(s)\|_3^2\le M\Big) \ge p_0(t, M),$$
where
$$Z(t) = \int_0^t e^{-A(t-s)}\Psi(Y(s))\,dW(s).$$
Using this estimate, we can estimate the moment of the first entrance time into a small ball in $H$. Let $\delta_3 > 0$. We set
$$\tau_{L^2} = \tau\wedge\inf\big\{t\in\Upsilon\mathbb N\ \big|\ |Y^1(t)|^2\vee|Y^2(t)|^2\le\delta_3\big\}.$$
Lemma 6.5. Assume that Hypothesis H0 holds. Then, for any $\delta_3 > 0$, there exist $C_3(\delta_3)$ and $\gamma_3(\delta_3)$ such that for any $(y_0^1, y_0^2)\in H\times H$,
$$E\big(e^{\gamma_3\tau_{L^2}}\big) \le C_3\big(1+|y_0^1|^2+|y_0^2|^2\big).$$
The proof is postponed to Section 6.3.2.
We then need a finer estimate in order to control the time necessary to enter a ball in the stronger topologies of $V$.

Lemma 6.6. Assume that Hypothesis H0 holds. Then, for any $\delta_4 > 0$, there exist $p_4(\delta_4)$ and $R_4(\delta_4) > 0$ such that for any $y_0$ verifying $|y_0|^2\le R_4$, we have, for any $\Upsilon\le 1$,
$$P\big(\|Y_N(\Upsilon, y_0)\|^2\le\delta_4\big) \ge p_4.$$
The proof is postponed to Section 6.3.3.
Lemma 6.7. Assume that Hypothesis H0 holds. Then, for any $\delta_5 > 0$, there exist $p_5(\delta_5)$ and $R_5(\delta_5) > 0$ such that for any $y_0$ verifying $\|y_0\|^2\le R_5$, we have, for any $\Upsilon\le 1$,
$$P\big(\|Y_N(\Upsilon, y_0)\|_2^2\le\delta_5\big) \ge p_5.$$
The proof is postponed to Section 6.3.4.
Lemma 6.8. Assume that Hypothesis H0 holds. Then, for any $\delta_6 > 0$, there exist $p_6(\delta_6)$ and $R_6(\delta_6) > 0$ such that for any $y_0$ verifying $\|y_0\|_2^2\le R_6$, we have, for any $\Upsilon\le 1$,
$$P\big(\|Y_N(\Upsilon, y_0)\|_3^2\le\delta_6\big) \ge p_6.$$
The proof is postponed to Section 6.3.5.
6.3.1 Proof of Proposition 6.3
Set $\delta_6 = \delta$, $\delta_5 = R_6(\delta_6)$, $\delta_4 = R_5(\delta_5)$, $\delta_3 = R_4(\delta_4)$, $p_4 = p_4(\delta_4)$, $p_5 = p_5(\delta_5)$, $p_6 = p_6(\delta_6)$ and $p_1 = (p_4 p_5 p_6)^2$. By the definition of $\tau_{L^2}$, $|Y^1(\tau_{L^2})|^2\vee|Y^2(\tau_{L^2})|^2\le R_4(\delta_4)$. We prove the proposition by distinguishing three cases.

The first case: $\|Y^1(\tau_{L^2})\|_3^2\vee\|Y^2(\tau_{L^2})\|_3^2\le\delta$, which obviously yields
$$P\Big(\min_{k=0,1,2,3}\max_{i=1,2}\|Y^i(\tau_{L^2}+k\Upsilon)\|_3^2\le\delta\ \Big|\ Y^1(\tau_{L^2}), Y^2(\tau_{L^2})\Big) \ge p_1. \tag{6.59}$$
The second case: $Y^1(\tau_{L^2}) = Y^2(\tau_{L^2}) = y_0$ with $\|y_0\|_3^2 > \delta$. Combining Lemma 6.6, Lemma 6.7 and Lemma 6.8, we deduce from the strong Markov property of $Y_N$ that
$$P\big(\|Y_N(3\Upsilon, y_0)\|_3^2\le\delta\big) \ge p_6 p_5 p_4,$$
provided $|y_0|^2\le R_4$. In that case, $Y^1(\tau_{L^2}+3\Upsilon) = Y^2(\tau_{L^2}+3\Upsilon)$. Hence, since the law of $Y^1(\tau_{L^2}+3\Upsilon)$ conditioned on $\big(Y^1(\tau_{L^2}), Y^2(\tau_{L^2})\big) = (y_0^1, y_0^2)$ is $\mathcal D(Y_N(3\Upsilon, y_0))$, it follows that
$$P\Big(\max_{i=1,2}\|Y^i(\tau_{L^2}+3\Upsilon)\|_3^2\le\delta\ \Big|\ Y^1(\tau_{L^2}), Y^2(\tau_{L^2})\Big) \ge p_6 p_5 p_4 \ge p_1,$$
then (6.59) holds. The third case: $Y^1(\tau_{L^2})\ne Y^2(\tau_{L^2})$ and $\|Y^1(\tau_{L^2})\|_3^2\vee\|Y^2(\tau_{L^2})\|_3^2 > \delta$. In that case, $Y^1(\tau_{L^2}+\Upsilon)$, $Y^2(\tau_{L^2}+\Upsilon)$ conditioned on $\big(Y^1(\tau_{L^2}), Y^2(\tau_{L^2})\big)$ are independent. Since the law of $Y^i(\tau_{L^2}+\Upsilon)$ conditioned on $\big(Y^1(\tau_{L^2}), Y^2(\tau_{L^2})\big) = (y_0^1, y_0^2)$ is $\mathcal D(Y_N(\Upsilon, y_0^i))$, it follows from Lemma 6.6 that
$$P\Big(\max_{i=1,2}\|Y^i(\tau_{L^2}+\Upsilon)\|^2\le\delta_4\ \Big|\ Y^1(\tau_{L^2}), Y^2(\tau_{L^2})\Big) \ge p_4^2. \tag{6.60}$$
Then we distinguish three cases for $\big(Y^i(\tau_{L^2}+\Upsilon)\big)_{i=1,2}$, similar to those for $\big(Y^i(\tau_{L^2})\big)_{i=1,2}$: $\|Y^1(\tau_{L^2}+\Upsilon)\|_3^2\vee\|Y^2(\tau_{L^2}+\Upsilon)\|_3^2\le\delta$; $Y^1(\tau_{L^2}+\Upsilon) = Y^2(\tau_{L^2}+\Upsilon) = y_1$ with $\|y_1\|_3^2 > \delta$; and $Y^1(\tau_{L^2}+\Upsilon)\ne Y^2(\tau_{L^2}+\Upsilon)$ with $\|Y^1(\tau_{L^2}+\Upsilon)\|_3^2\vee\|Y^2(\tau_{L^2}+\Upsilon)\|_3^2 > \delta$. In the first two cases, arguing as in the first and second cases above and combining with (6.60), (6.59) holds. In the last case, $Y^1(\tau_{L^2}+2\Upsilon)$, $Y^2(\tau_{L^2}+2\Upsilon)$ conditioned on $\big(Y^1(\tau_{L^2}+\Upsilon), Y^2(\tau_{L^2}+\Upsilon)\big)$ are independent. By Lemma 6.7, we have
$$P\Big(\min_{k=1,2}\max_{i=1,2}\|Y^i(\tau_{L^2}+k\Upsilon)\|_2^2\le\delta_5\ \Big|\ Y^1(\tau_{L^2}+\Upsilon), Y^2(\tau_{L^2}+\Upsilon)\Big) \ge p_5^2, \tag{6.61}$$
provided $\max_{i=1,2}\|Y^i(\tau_{L^2}+\Upsilon)\|^2\le\delta_4$.
Now we distinguish three cases for $\big(Y^i(\tau_{L^2}+2\Upsilon)\big)_{i=1,2}$, similar to the above: $\|Y^1(\tau_{L^2}+2\Upsilon)\|_3^2\vee\|Y^2(\tau_{L^2}+2\Upsilon)\|_3^2\le\delta$; $Y^1(\tau_{L^2}+2\Upsilon) = Y^2(\tau_{L^2}+2\Upsilon) = y_2$ with $\|y_2\|_3^2 > \delta$; and $Y^1(\tau_{L^2}+2\Upsilon)\ne Y^2(\tau_{L^2}+2\Upsilon)$ with $\|Y^1(\tau_{L^2}+2\Upsilon)\|_3^2\vee\|Y^2(\tau_{L^2}+2\Upsilon)\|_3^2 > \delta$. In the first two cases, we again obtain (6.59). In the last case, $Y^1(\tau_{L^2}+3\Upsilon)$, $Y^2(\tau_{L^2}+3\Upsilon)$ conditioned on $\big(Y^1(\tau_{L^2}+2\Upsilon), Y^2(\tau_{L^2}+2\Upsilon)\big)$ are independent. By Lemma 6.8,
$$P\Big(\min_{k=2,3}\max_{i=1,2}\|Y^i(\tau_{L^2}+k\Upsilon)\|_3^2\le\delta\ \Big|\ Y^1(\tau_{L^2}+2\Upsilon), Y^2(\tau_{L^2}+2\Upsilon)\Big) \ge p_6^2, \tag{6.62}$$
provided $\max_{i=1,2}\|Y^i(\tau_{L^2}+2\Upsilon)\|_2^2\le\delta_5$.
Combining (6.60), (6.61) and (6.62), we deduce that (6.59) also holds in the last case. Thus (6.59) holds almost surely. Integrating (6.59), we obtain
$$P\Big(\min_{k=0,1,2,3}\max_{i=1,2}\|Y^i(\tau_{L^2}+k\Upsilon)\|_3^2\le\delta\Big) \ge p_1. \tag{6.63}$$
Combining Lemma 6.5 and (6.63), we conclude that Proposition 6.3 holds. $\square$

6.3.2 Proof of Lemma 6.5
Recall (6.32):
$$E|Y_N(t)|^2 \le e^{-\mu_1 t}|y_0|^2 + \frac{\kappa}{\mu_1}.$$
Since $(Y^1, Y^2)$ is a coupling of $\big(\mathcal D(Y_N(\cdot, y_0^1)), \mathcal D(Y_N(\cdot, y_0^2))\big)$ on $\Upsilon\mathbb N$, we obtain
$$E\big(|Y^1(n\Upsilon)|^2 + |Y^2(n\Upsilon)|^2\big) \le e^{-\mu_1 n\Upsilon}\big(|y_0^1|^2+|y_0^2|^2\big) + \frac{2\kappa}{\mu_1}.$$
Since $(Y^1, Y^2)$ is a strong Markov process, it can be deduced that there exist $C_7$ and $\gamma_7$ such that
$$E\big(e^{\gamma_7\tau'_{L^2}}\big) \le C_7\big(1+|y_0^1|^2+|y_0^2|^2\big), \tag{6.64}$$
where
$$\tau'_{L^2} = \inf\Big\{t\in\Upsilon\mathbb N\setminus\{0\}\ \Big|\ |Y^1(t)|^2+|Y^2(t)|^2\le\frac{4\kappa}{\mu_1}\Big\}.$$
Taking into account (6.64), a standard argument shows that, in order to establish Lemma 6.5, it is sufficient to prove that there exist $\big(p_8(\delta_3, t), \Upsilon_8(\delta_3)\big)$ such that
$$P\big(|Y_N(t, y_0)|^2\le\delta_3\big) \ge p_8 > 0, \tag{6.65}$$
provided $N\in\mathbb N$, $t\ge\Upsilon_8(\delta_3)$ and $|y_0|^2\le 4\kappa/\mu_1$, where $\Upsilon_8(\delta_3)$ is independent of $y_0$. Set
$$Z(t) = \int_0^t e^{-A(t-s)}\Psi(Y(s))\,dW(s), \qquad X_N = Y_N - P_N Z, \qquad \mathcal N(\omega) = \sup_{(0,t)}\|Z(s,\omega)\|_3^2 \quad\text{for }\omega\in\Omega.$$
Assume that there exist $M_8(\delta_3) > 0$ and $\Upsilon_8(\delta_3)$ such that for $\omega\in\Omega$,
$$\mathcal N(\omega)\le M_8(\delta_3)\wedge\frac{\delta_3}{4} \quad\text{implies}\quad |X_N(t,\omega)|^2\le\frac{\delta_3}{4}, \tag{6.66}$$
provided $t\ge\Upsilon_8(\delta_3)$ and $|y_0|^2\le 4\kappa/\mu_1$. Then (6.65) results from Lemma 6.4 with $M = M_8(\delta_3)\wedge\delta_3/4$. We now prove (6.66). From (6.30), we know
$$\begin{cases}\dfrac{\partial X_N}{\partial t} + P_N\big[(v_N\cdot\nabla)(X_N+P_N Z)\big] + P_N\Big[\Phi(v_N)\dfrac{\partial(X_N+P_N Z)}{\partial z}\Big] + P_N G(X_N+P_N Z) + AX_N = 0,\\ X_N(0) = P_N y_0.\end{cases}$$
Let
$$X_N = (\omega_N, g_N) = (v_N, S_N) - (P_N^1 Z_1, P_N^2 Z_2),$$
where
$$Z_1(t) = \int_0^t e^{-A_1(t-s)}\phi\big(v_N(s), S_N(s)\big)\,dW_1(s), \qquad Z_2(t) = \int_0^t e^{-A_2(t-s)}\varphi\big(v_N(s), S_N(s)\big)\,dW_2(s).$$
For $\omega\in\Omega$, setting
$$\mathcal N_1(\omega) = \sup_{(0,t)}\|Z_1(s,\omega)\|_3^2, \qquad \mathcal N_2(\omega) = \sup_{(0,t)}\|Z_2(s,\omega)\|_3^2,$$
we have $\mathcal N_1(\omega)\vee\mathcal N_2(\omega)\le\mathcal N(\omega)$. From (6.47) and (6.48), we have
$$\frac{\partial\omega_N}{\partial t} + \big((\omega_N+Z_1)\cdot\nabla\big)(\omega_N+Z_1) + \Phi(\omega_N+Z_1)\frac{\partial(\omega_N+Z_1)}{\partial z} + fk\times(\omega_N+Z_1) + \nabla p_b - \frac{1}{\sqrt{C_0}}\int_{-1}^{z}\nabla(g_N+Z_2)\,dz' - \triangle\omega_N - \frac{\partial^2\omega_N}{\partial z^2} = 0, \tag{6.67}$$
$$\frac{\partial g_N}{\partial t} + \big[(\omega_N+Z_1)\cdot\nabla\big](g_N+Z_2) + \Phi(\omega_N+Z_1)\frac{\partial(g_N+Z_2)}{\partial z} - \triangle g_N - \frac{\partial^2 g_N}{\partial z^2} = 0. \tag{6.68}$$
Taking the scalar product of (6.67) with $\omega_N$, it follows that
$$\frac12\frac{d|\omega_N|^2}{dt} + \|\omega_N\|^2 = -\Big(\omega_N,\ \big((\omega_N+Z_1)\cdot\nabla\big)(\omega_N+Z_1) + \Phi(\omega_N+Z_1)\frac{\partial(\omega_N+Z_1)}{\partial z}\Big) - \big(\omega_N,\ fk\times(\omega_N+Z_1)\big) - \Big(\omega_N,\ \nabla p_b - \frac{1}{\sqrt{C_0}}\int_{-1}^{z}\nabla(g_N+Z_2)\,dz'\Big).$$
By integration by parts,
$$\Big(\omega_N,\ \big((\omega_N+Z_1)\cdot\nabla\big)\omega_N + \Phi(\omega_N+Z_1)\frac{\partial\omega_N}{\partial z}\Big) = 0,$$
and
$$(\omega_N,\ fk\times\omega_N) = 0, \qquad (\omega_N,\ \nabla p_b) = 0.$$
Moreover, applying the Hölder inequality and the Sobolev embedding theorem, we have
$$\Big|\Big(\omega_N,\ \big((\omega_N+Z_1)\cdot\nabla\big)Z_1 + \Phi(\omega_N+Z_1)\frac{\partial Z_1}{\partial z}\Big)\Big| \le c|\omega_N|\,|\omega_N|^{1/2}\|\omega_N\|^{1/2}\|Z_1\|_2 + c|\omega_N|\,|Z_1|^{1/2}\|Z_1\|^{1/2}\|Z_1\|_2 + c|\omega_N|\|\omega_N\|\|Z_1\|_3 + c|\omega_N|\|Z_1\|\|Z_1\|_3$$
$$\le c\|Z_1\|_3\|\omega_N\|^2 + \frac18|\omega_N|^2 + c\|Z_1\|_3^4 + \frac18\|\omega_N\|^2,$$
and
$$|(\omega_N,\ fk\times Z_1)| \le \frac18|\omega_N|^2 + c|Z_1|^2, \qquad \Big|\Big(\omega_N,\ \frac{1}{\sqrt{C_0}}\int_{-1}^{z}\nabla(g_N+Z_2)\,dz'\Big)\Big| \le \frac12\|\omega_N\|^2 + \frac{1}{C_0}|g_N|^2 + \frac{1}{C_0}|Z_2|^2.$$
Combining all the inequalities above, we obtain
$$\frac{d|\omega_N|^2}{dt} + \frac12\|\omega_N\|^2 \le c\|Z_1\|_3\|\omega_N\|^2 + c\|Z_1\|_3^4 + \frac{1}{C_0}|g_N|^2 + \frac{1}{C_0}|Z_2|^2 + c|Z_1|^2 \le cM_8^{1/2}\|\omega_N\|^2 + cM_8 + \frac{1}{C_0}|g_N|^2. \tag{6.69}$$
Similarly, taking the scalar product of both sides of (6.68) with $g_N$ and using the Hölder inequality and the Sobolev embedding theorem, it follows that
$$\frac{d|g_N|^2}{dt} + \frac32\|g_N\|^2 \le cM_8|\omega_N|^2 + cM_8^{1/2}|g_N|^2 + cM_8. \tag{6.70}$$
Since $C_0\ge 2/\lambda_1$, we have $(2/C_0)|g_N|^2\le\lambda_1|g_N|^2\le\|g_N\|^2$. When $M_8$ is sufficiently small, combining (6.69) and (6.70), we obtain
$$\frac{d|X_N|^2}{dt} + \frac14\|X_N\|^2 \le cM_8 \quad\text{on } (0,t). \tag{6.71}$$
Applying the Gronwall inequality, we get
$$|X_N(t)|^2 \le e^{-\frac{\mu_1}{4}t}|y_0|^2 + \frac{cM_8}{\mu_1}.$$
Then we deduce from $|y_0|^2\le 4\kappa/\mu_1$ that
$$|X_N(t)|^2 \le \frac{4\kappa}{\mu_1}e^{-\frac{\mu_1}{4}t} + \frac{cM_8}{\mu_1}.$$
Choosing $t$ sufficiently large and $M_8$ sufficiently small, we obtain (6.66), from which Lemma 6.5 follows. Indeed, when (6.66) holds,
$$P\big(|Y_N(t, y_0)|^2\le\delta_3\big) \ge P\Big(|X_N(t, y_0)|^2\le\frac{\delta_3}{4},\ \sup_{(0,t)}\|Z(s)\|_3^2\le M_8(\delta_3)\wedge\frac{\delta_3}{4}\Big) \ge P\Big(\sup_{(0,t)}\|Z(s)\|_3^2\le M_8(\delta_3)\wedge\frac{\delta_3}{4}\Big) \ge p_0\Big(t,\ M_8(\delta_3)\wedge\frac{\delta_3}{4}\Big) > 0;$$
letting $\tilde M_8(\delta_3) = M_8(\delta_3)\wedge\delta_3/4$ and choosing $p_8(t, \delta_3) = p_0\big(t, \tilde M_8(\delta_3)\big)$, this yields (6.65), and hence Lemma 6.5 holds. $\square$

6.3.3 Proof of Lemma 6.6
From now on, we use the decomposition $X_N = Y_N - P_N Z$ introduced above and set
$$\mathcal N(\omega) = \sup_{(0,\Upsilon)}\|Z(s,\omega)\|_3^2 \quad\text{for }\omega\in\Omega.$$
Let $\delta_4 > 0$ and assume that there exist $M_9(\delta_4) > 0$ and $R_4(\delta_4) > 0$ such that for $\omega\in\Omega$,
$$\mathcal N(\omega)\le M_9(\delta_4)\wedge\frac{\delta_4}{4} \quad\text{implies}\quad \|X_N(\Upsilon, \omega, y_0)\|^2\le\frac{\delta_4}{4},$$
provided $|y_0|^2\le R_4(\delta_4)$. Then Lemma 6.6 results from Lemma 6.4 with $M = M_9(\delta_4)\wedge\delta_4/4$. Integrating (6.71),
$$\frac{1}{4\Upsilon}\int_0^\Upsilon\|X_N(t)\|^2\,dt \le \frac{1}{\Upsilon}|y_0|^2 + cM_8,$$
which yields, by the Chebyshev inequality,
$$\lambda\Big(t\in(0,\Upsilon)\ \Big|\ \|X_N(t)\|^2\le\frac{8}{\Upsilon}|y_0|^2 + 8cM_8\Big) \ge \frac{\Upsilon}{2}, \tag{6.72}$$
where $\lambda$ denotes the Lebesgue measure on $(0,\Upsilon)$. Setting
$$\tau_{H^1} = \inf\Big\{t\in(0,\Upsilon)\ \Big|\ \|X_N(t)\|^2\le\frac{8}{\Upsilon}|y_0|^2 + 8cM_8\Big\},$$
from (6.72) and the continuity of $X_N$, we deduce
$$\|X_N(\tau_{H^1})\|^2 \le \frac{8}{\Upsilon}|y_0|^2 + 8cM_8. \tag{6.73}$$
Taking the inner product of both sides of (6.67) with $A_1\omega_N$ in $L^2(O)$, we obtain
$$\frac12\frac{d\|\omega_N\|^2}{dt} + \|\omega_N\|_2^2 = -\Big(A_1\omega_N,\ \big((\omega_N+Z_1)\cdot\nabla\big)(\omega_N+Z_1) + \Phi(\omega_N+Z_1)\frac{\partial(\omega_N+Z_1)}{\partial z}\Big) - \big(A_1\omega_N,\ fk\times(\omega_N+Z_1)\big) - \Big(A_1\omega_N,\ \nabla p_b - \frac{1}{\sqrt{C_0}}\int_{-1}^{z}\nabla(g_N+Z_2)\,dz'\Big).$$
Since
$$\big|\big(A_1 y,\ (x\cdot\nabla)z + (z\cdot\nabla)x\big)\big| \le c\|y\|_2\|z\|^{1/2}\|z\|_2^{1/2}\|x\|,$$
we have
$$\Big|\Big(A_1\omega_N,\ \big((\omega_N+Z_1)\cdot\nabla\big)(\omega_N+Z_1) + \Phi(\omega_N+Z_1)\frac{\partial(\omega_N+Z_1)}{\partial z}\Big)\Big| \le c\|\omega_N\|^{3/2}\|\omega_N\|_2^{3/2} + c\|Z_1\|_2\|\omega_N\|_2^2 + c\|Z_1\|_2^2\|\omega_N\|_2 + c\|\omega_N\|\|\omega_N\|_2^2 + c\|Z_1\|_2\|\omega_N\|_2^2.$$
By the Hölder inequality and the Young inequality, we have
$$\big|\big(A_1\omega_N,\ fk\times(\omega_N+Z_1)\big)\big| \le c|Z_1|\|\omega_N\|_2, \qquad \Big|\Big(A_1\omega_N,\ \nabla p_b - \frac{1}{\sqrt{C_0}}\int_{-1}^{z}\nabla(g_N+Z_2)\,dz'\Big)\Big| \le \frac12\|\omega_N\|_2^2 + \frac{1}{C_0}\|g_N\|^2 + \frac{1}{C_0}\|Z_2\|^2.$$
Thus, it follows that
$$\frac{d\|\omega_N\|^2}{dt} + \|\omega_N\|_2^2 \le c\|\omega_N\|^6 + cM_9 + cM_9^{1/2}\|\omega_N\|_2^2 + \frac{2}{C_0}\|g_N\|^2 + c\|\omega_N\|^2\|\omega_N\|_2^2. \tag{6.74}$$
Similarly, taking the inner product of both sides of (6.68) with $A_2 g_N$ in $L^2(O)$, we obtain
$$\frac{d\|g_N\|^2}{dt} + \frac32\|g_N\|_2^2 \le c\|g_N\|^2\|\omega_N\|^2\|\omega_N\|_2^2 + cM_9^{1/2} + cM_9\|\omega_N\|_2^2 + cM_9^{1/2}\|g_N\|_2^2 + c\|\omega_N\|^4\|g_N\|^2. \tag{6.75}$$
Since $C_0\ge 2/\lambda_1$, we have $(2/C_0)\|g_N\|^2\le\lambda_1\|g_N\|^2\le\|g_N\|_2^2$. When $M_9$ is sufficiently small, combining (6.74) and (6.75),
$$\frac{d\|X_N\|^2}{dt} + \|X_N\|_2^2 \le c\|\omega_N\|^6 + c\|\omega_N\|^2\|\omega_N\|_2^2 + c\|g_N\|^2\|\omega_N\|^2\|\omega_N\|_2^2 + cM_9.$$
Then
$$\frac{d\|X_N\|^2}{dt} + \frac14\|X_N\|_2^2 + \Big(\frac18 - c\|X_N\|^2 - c\|X_N\|^4\Big)\|\omega_N\|_2^2 \le c\|X_N\|^2\big(\|X_N\|^4 - 4K_1^2\big) + cM_9,$$
where $K_1 = \sqrt{\mu_1/(8c)}$. Setting
$$\sigma_{H^1} = \inf\Big\{t\in(\tau_{H^1},\Upsilon)\ \Big|\ \|X_N(t)\|^2 > 2K_1\wedge\frac{1}{\sqrt{8c}}\Big\},$$
and denoting $2\tilde K_1 = 2K_1\wedge\frac{1}{\sqrt{8c}}$, remark that on $(\tau_{H^1}, \sigma_{H^1})$ we have
$$\frac{d\|X_N\|^2}{dt} + \frac14\|X_N\|_2^2 \le cM_9. \tag{6.76}$$
Integrating (6.76), we obtain
$$\|X_N(\sigma_{H^1})\|^2 + \frac14\int_{\tau_{H^1}}^{\sigma_{H^1}}\|X_N(t)\|_2^2\,dt \le \|X_N(\tau_{H^1})\|^2 + cM_9(\sigma_{H^1}-\tau_{H^1}) \le \|X_N(\tau_{H^1})\|^2 + cM_9\Upsilon \le \|X_N(\tau_{H^1})\|^2 + cM_9. \tag{6.77}$$
From (6.73) and (6.77), we obtain that, for $M_8$, $M_9$ and $|y_0|^2$ sufficiently small,
$$\|X_N(\sigma_{H^1})\|^2 \le \frac{\delta_4}{4}\wedge\tilde K_1,$$
δ4 , 4
provided M8 , M9 and |y0 |2 sufficiently small. Since δ4 δ4 2 2 2 P kYN (Υ, y0 )k ≤ δ4 ≥ P kXN (Υ, y0 )k ≤ , sup kZ(s)k3 ≤ M9 (δ4 ) ∧ 4 4 (0,Υ) δ4 ≥ P sup kZ(s)k23 ≤ M9 (δ4 ) ∧ 4 (0,Υ) δ4 > 0, ≥ p0 Υ, M9 (δ4 ) ∧ 4 ˜ 9 (δ4 ) = M9 (δ4 ) ∧ let M 6.3.4
δ4 4
˜ 9 (δ4 )), we obtain Lemma 6.6. and choose p4 (δ4 ) = p0 (Υ, M
Proof of Lemma 6.7
When $M_9$ is sufficiently small and $\|y_0\|^2 + cM_9\le\tilde K_1$, we have $\tau_{H^1} = 0$ and $\sigma_{H^1} = \Upsilon$; it then follows from (6.76) that
$$\frac14\int_0^\Upsilon\|X_N(t)\|_2^2\,dt \le \|y_0\|^2 + cM_9.$$
Applying the same argument as in Subsection 6.3.3, it is easy to deduce that there exists a stopping time $\tau_{H^2}\in(0,\Upsilon)$ such that
$$\|X_N(\tau_{H^2})\|_2^2 \le \frac{8}{\Upsilon}\big(\|y_0\|^2 + cM_9\big), \tag{6.78}$$
provided $M_9$ and $\|y_0\|^2$ are sufficiently small. Similarly to the above, let $\delta_5 > 0$ and assume that there exist $M_{10}(\delta_5) > 0$ and $R_5(\delta_5) > 0$ such that for $\omega\in\Omega$,
$$\mathcal N(\omega)\le M_{10}(\delta_5)\wedge\frac{\delta_5}{4} \quad\text{implies}\quad \|X_N(\Upsilon, \omega, y_0)\|_2^2\le\frac{\delta_5}{4},$$
provided $\|y_0\|^2\le R_5(\delta_5)$. Then Lemma 6.7 results from Lemma 6.4 with $M = M_{10}(\delta_5)\wedge\delta_5/4$. Taking the scalar product of both sides of (6.67) with $A_1^2\omega_N$, we obtain
$$\frac12\frac{d\|\omega_N\|_2^2}{dt} + \|\omega_N\|_3^2 = -\Big(A_1^2\omega_N,\ \big((\omega_N+Z_1)\cdot\nabla\big)(\omega_N+Z_1) + \Phi(\omega_N+Z_1)\frac{\partial(\omega_N+Z_1)}{\partial z}\Big) - \big(A_1^2\omega_N,\ fk\times(\omega_N+Z_1)\big) - \Big(A_1^2\omega_N,\ \nabla p_b - \frac{1}{\sqrt{C_0}}\int_{-1}^{z}\nabla(g_N+Z_2)\,dz'\Big).$$
Since
$$\big|\big(A_1^2 y,\ (x\cdot\nabla)z + (z\cdot\nabla)x\big)\big| \le c\|y\|_3\|z\|_2\|x\|_2 \le \varepsilon\|y\|_3^2 + c\big(\|z\|_2^4 + \|x\|_2^4\big),$$
we obtain
1
3
1
1
1
≤ ckωN k32 kωN k22 kωN k + ckωN k3 kωN k22 kωN k 2 + ckωN k3 kωN k22 kωN k 2 kZ1 k2 3
3
+ ckωN k23 kZ1 k3 + ckωN k3 kZ1 k23 + ckωN k32 kωN k22 + ckZ1 k2 kωN k22
+ ckZ1 k22 kωN k2 + ckωN kkωN k22 + ckZ1 k2 kωN k22
≤ εkωN k23 + ckωN k62 + ckZ1 k23 kωN k22 + ckZ1 k43 . By Hölder inequality and the Young inequality, we have
$$\big|\big(A_1^2\omega_N,\ Z_1\big)\big| \le c\|Z_1\|\|\omega_N\|_3, \qquad \Big|\Big(A_1^2\omega_N,\ \nabla p_b - \frac{1}{\sqrt{C_0}}\int_{-1}^{z}\nabla(g_N+Z_2)\,dz'\Big)\Big| \le \frac12\|\omega_N\|_3^2 + \frac{1}{C_0}\|g_N\|_2^2 + \frac{1}{C_0}\|Z_2\|_2^2.$$
Combining the above inequalities and applying the Hölder and Young inequalities, we obtain
$$\frac{d\|\omega_N\|_2^2}{dt} + \|\omega_N\|_3^2 \le c\|\omega_N\|_2^6 + c\|\omega_N\|_2^4 + cM_{10}^{1/2}\|\omega_N\|_3^2 + \frac{2}{C_0}\|g_N\|_2^2 + cM_{10} + \varepsilon\|\omega_N\|_3^2. \tag{6.79}$$
Similarly, taking the scalar product of both sides of (6.68) with $A_2^2 g_N$, we obtain
$$\frac{d\|g_N\|_2^2}{dt} + 2\|g_N\|_3^2 \le \varepsilon\|g_N\|_3^2 + c\|g_N\|_2^2\|\omega_N\|_2^4 + c\|g_N\|_2^2\|\omega_N\|_2^2 + cM_{10}^{1/2}\|\omega_N\|_3^2 + cM_{10}^{1/2}\|g_N\|_3^2 + cM_{10}. \tag{6.80}$$
Choosing $M_{10}$ small enough, since $(2/C_0)\|g_N\|_2^2\le\lambda_1\|g_N\|_2^2\le\|g_N\|_3^2$, by (6.79) and (6.80) we obtain
$$\frac{d\|X_N\|_2^2}{dt} + \frac14\|X_N\|_3^2 \le c\|X_N\|_2^2\big(\|X_N\|_2^2 + \|X_N\|_2^4 - 4K_2^2\big) + cM_{10}, \tag{6.81}$$
where $K_2$ is defined similarly to the above. Setting $\sigma_{H^2} = \inf\{t\in(\tau_{H^2},\Upsilon)\ |\ \|X_N(t)\|_2^2\ge 2K_2\}$ and integrating (6.81) on $(\tau_{H^2},\sigma_{H^2})$, we obtain
$$\|X_N(\sigma_{H^2})\|_2^2 + \frac14\int_{\tau_{H^2}}^{\sigma_{H^2}}\|X_N(t)\|_3^2\,dt \le \|X_N(\tau_{H^2})\|_2^2 + cM_{10}. \tag{6.82}$$
Combining (6.78) and (6.82), we obtain that, for $M_9$, $M_{10}$ and $\|y_0\|^2$ sufficiently small,
$$\|X_N(\sigma_{H^2})\|_2^2 \le \frac{\delta_5}{4}\wedge K_2.$$
It follows that $\sigma_{H^2} = \Upsilon$ and
$$\|X_N(\Upsilon)\|_2^2 \le \frac{\delta_5}{4},$$
provided $M_9$, $M_{10}$ and $\|y_0\|^2$ are sufficiently small. Since
$$P\big(\|Y_N(\Upsilon, y_0)\|_2^2\le\delta_5\big) \ge P\Big(\|X_N(\Upsilon, y_0)\|_2^2\le\frac{\delta_5}{4},\ \sup_{(0,\Upsilon)}\|Z(s)\|_3^2\le M_{10}(\delta_5)\wedge\frac{\delta_5}{4}\Big) \ge P\Big(\sup_{(0,\Upsilon)}\|Z(s)\|_3^2\le M_{10}(\delta_5)\wedge\frac{\delta_5}{4}\Big) \ge p_0\Big(\Upsilon,\ M_{10}(\delta_5)\wedge\frac{\delta_5}{4}\Big) > 0,$$
letting $\tilde M_{10}(\delta_5) = M_{10}(\delta_5)\wedge\delta_5/4$ and choosing $p_5(\delta_5) = p_0\big(\Upsilon, \tilde M_{10}(\delta_5)\big)$, we obtain Lemma 6.7. $\square$

6.3.5 Proof of Lemma 6.8
When $M_{10}$ is sufficiently small and $\|y_0\|_2^2 + cM_{10}\le K_2$, we have $\tau_{H^2} = 0$ and $\sigma_{H^2} = \Upsilon$. Taking into account (6.81), we obtain
$$\frac14\int_0^\Upsilon\|X_N(t)\|_3^2\,dt \le \|y_0\|_2^2 + cM_{10}.$$
Applying the same argument as above, it is easy to see that there exists a stopping time $\tau_{H^3}\in(0,\Upsilon)$ such that
$$\|X_N(\tau_{H^3})\|_3^2 \le \frac{8}{\Upsilon}\big(\|y_0\|_2^2 + cM_{10}\big). \tag{6.83}$$
Similarly to the above, let $\delta_6 > 0$ and assume that there exist $M_{11}(\delta_6) > 0$ and $R_6(\delta_6) > 0$ such that for $\omega\in\Omega$,
$$\mathcal N(\omega)\le M_{11}(\delta_6)\wedge\frac{\delta_6}{4} \quad\text{implies}\quad \|X_N(\Upsilon, \omega, y_0)\|_3^2\le\frac{\delta_6}{4},$$
provided $\|y_0\|_2^2\le R_6(\delta_6)$. Then Lemma 6.8 results from Lemma 6.4 with $M = M_{11}(\delta_6)\wedge\delta_6/4$. Taking the scalar product of both sides of (6.67) with $A_1^3\omega_N$, we obtain
$$\frac12\frac{d\|\omega_N\|_3^2}{dt} + \|\omega_N\|_4^2 = -\Big(A_1^3\omega_N,\ \big((\omega_N+Z_1)\cdot\nabla\big)(\omega_N+Z_1) + \Phi(\omega_N+Z_1)\frac{\partial(\omega_N+Z_1)}{\partial z}\Big) - \big(A_1^3\omega_N,\ fk\times(\omega_N+Z_1)\big) - \Big(A_1^3\omega_N,\ \nabla p_b - \frac{1}{\sqrt{C_0}}\int_{-1}^{z}\nabla(g_N+Z_2)\,dz'\Big).$$
Since
$$\big|\big(A_1^3 y,\ (x\cdot\nabla)z + (z\cdot\nabla)x\big)\big| \le c\|y\|_4\|z\|_3\|x\|_3 \le \varepsilon\|y\|_4^2 + c\big(\|z\|_3^4 + \|x\|_3^4\big),$$
we obtain
by the Hölder inequality, Sobolev embedding and the Young inequality,
$$\Big|\Big(A_1^3\omega_N,\ \big((\omega_N+Z_1)\cdot\nabla\big)(\omega_N+Z_1) + \Phi(\omega_N+Z_1)\frac{\partial(\omega_N+Z_1)}{\partial z}\Big)\Big| \le \varepsilon\|\omega_N\|_4^2 + c\|\omega_N\|_3^4 + c\|Z_1\|_3^2\|\omega_N\|_4^2 + c\|Z_1\|_3^4 + c\|Z_1\|_3^4\|\omega_N\|_3^2.$$
By the Hölder inequality and the Young inequality, we have
$$\big|\big(A_1^3\omega_N,\ Z_1\big)\big| \le c\|Z_1\|_2\|\omega_N\|_4, \qquad \Big|\Big(A_1^3\omega_N,\ \nabla p_b - \frac{1}{\sqrt{C_0}}\int_{-1}^{z}\nabla(g_N+Z_2)\,dz'\Big)\Big| \le \frac12\|\omega_N\|_4^2 + \frac{1}{C_0}\|g_N\|_3^2 + \frac{1}{C_0}\|Z_2\|_3^2.$$
Combining the above inequalities, we obtain
$$\frac{d\|\omega_N\|_3^2}{dt} + \|\omega_N\|_4^2 \le \varepsilon\|\omega_N\|_4^2 + c\|\omega_N\|_3^4 + cM_{11} + cM_{11}\|\omega_N\|_3^2 + \frac{2}{C_0}\|g_N\|_3^2. \tag{6.84}$$
Similarly, taking the scalar product of both sides of (6.68) with $A_2^3 g_N$, we obtain
$$\frac{d\|g_N\|_3^2}{dt} + 2\|g_N\|_4^2 \le \varepsilon\|g_N\|_4^2 + c\|\omega_N\|_3^2\|g_N\|_3^2 + cM_{11}^{1/2}\|g_N\|_4^2 + cM_{11}\|\omega_N\|_4^2 + cM_{11}. \tag{6.85}$$
Choosing $M_{11}$ sufficiently small, since $(2/C_0)\|g_N\|_3^2\le\lambda_1\|g_N\|_3^2\le\|g_N\|_4^2$, by (6.84) and (6.85) we obtain
$$\frac{d\|X_N\|_3^2}{dt} + \frac34\|X_N\|_4^2 \le c\|X_N\|_3^4 + cM_{11},$$
and then
$$\frac{d\|X_N\|_3^2}{dt} + \frac14\|X_N\|_4^2 \le c\|X_N\|_3^2\big(\|X_N\|_3^2 - 2K_3\big) + cM_{11}, \tag{6.86}$$
where $K_3$ is defined similarly to the above. Setting $\sigma_{H^3} = \inf\{t\in(\tau_{H^3},\Upsilon)\ |\ \|X_N(t)\|_3^2\ge 2K_3\}$ and integrating (6.86) on $(\tau_{H^3},\sigma_{H^3})$, we obtain
$$\|X_N(\sigma_{H^3})\|_3^2 + \frac14\int_{\tau_{H^3}}^{\sigma_{H^3}}\|X_N(t)\|_4^2\,dt \le \|X_N(\tau_{H^3})\|_3^2 + cM_{11}.$$
Taking into account (6.83) and choosing $M_{10}$, $M_{11}$ and $\|y_0\|_2^2$ sufficiently small, we obtain
$$\|X_N(\sigma_{H^3})\|_3^2 \le \frac{\delta_6}{4}\wedge K_3.$$
It follows that $\sigma_{H^3} = \Upsilon$ and that
$$\|X_N(\Upsilon)\|_3^2 \le \frac{\delta_6}{4},$$
provided $M_{10}$, $M_{11}$ and $\|y_0\|_2^2$ are sufficiently small. Since
$$P\big(\|Y_N(\Upsilon, y_0)\|_3^2\le\delta_6\big) \ge P\Big(\|X_N(\Upsilon, y_0)\|_3^2\le\frac{\delta_6}{4},\ \sup_{(0,\Upsilon)}\|Z(s)\|_3^2\le M_{11}(\delta_6)\wedge\frac{\delta_6}{4}\Big) \ge P\Big(\sup_{(0,\Upsilon)}\|Z(s)\|_3^2\le M_{11}(\delta_6)\wedge\frac{\delta_6}{4}\Big) \ge p_0\Big(\Upsilon,\ M_{11}(\delta_6)\wedge\frac{\delta_6}{4}\Big) > 0,$$
letting $\tilde M_{11}(\delta_6) = M_{11}(\delta_6)\wedge\delta_6/4$ and choosing $p_6(\delta_6) = p_0\big(\Upsilon, \tilde M_{11}(\delta_6)\big)$, we obtain Lemma 6.8. $\square$
6.4 Proof of Theorem 4.2

As explained above, we only need to prove Proposition 6.1. Let $(y_0^1, y_0^2)\in H^3\times H^3$, and recall that the process $(Y^1, Y^2)$ is defined at the beginning of Section 6.3. Let $\delta > 0$, $\Upsilon\in(0,1)$ be as in Proposition 6.2 and $\tau$ as defined in (6.58). Set $\tau_1 = \tau$ and
$$\tau_{k+1} = \inf\big\{t > \tau_k\ \big|\ \|Y^1(t)\|_3^2\vee\|Y^2(t)\|_3^2\le\delta\big\}.$$
It can be deduced from the strong Markov property of $(Y^1, Y^2)$ and Proposition 6.3 that
$$E\big(e^{\alpha\tau_{k+1}}\big) \le K''E\Big(e^{\alpha\tau_k}\big(1+|Y^1(\tau_k)|^2+|Y^2(\tau_k)|^2\big)\Big),$$
which yields
$$E\big(e^{\alpha\tau_{k+1}}\big) \le cK''(1+2\delta)E\big(e^{\alpha\tau_k}\big), \qquad E\big(e^{\alpha\tau_1}\big) \le K''\big(1+|y_0^1|^2+|y_0^2|^2\big).$$
It follows that there exists $K > 0$ such that
$$E\big(e^{\alpha\tau_k}\big) \le K^k\big(1+|y_0^1|^2+|y_0^2|^2\big).$$
Hence, applying the Jensen inequality, we obtain, for any $\theta\in(0,1)$,
$$E\big(e^{\theta\alpha\tau_k}\big) \le K^{\theta k}\big(1+|y_0^1|^2+|y_0^2|^2\big). \tag{6.87}$$
We deduce from Proposition 6.2 and from (6.57) that
$$P\big(Y^1(\Upsilon)\ne Y^2(\Upsilon)\big) \le \frac14,$$
provided $(y_0^1, y_0^2)$ is in the ball of $H^3\times H^3$ with radius $\delta$. Setting
$$k_0 = \inf\big\{k\in\mathbb N\ \big|\ Y^1(\tau_k+\Upsilon) = Y^2(\tau_k+\Upsilon)\big\},$$
we have, by the strong Markov property of $(Y^1, Y^2)$,
$$P(k_0 > n) \le \Big(\frac14\Big)^n, \tag{6.88}$$
which implies $k_0 < \infty$ almost surely. Let $\theta\in(0,1)$; we deduce from the Cauchy-Schwarz inequality that
$$E\big(e^{\frac\theta2\alpha\tau_{k_0}}\big) = \sum_{n=1}^\infty E\big(e^{\frac\theta2\alpha\tau_n}\mathbb 1_{k_0=n}\big) \le \sum_{n=1}^\infty\sqrt{P(k_0\ge n)\,E\big(e^{\theta\alpha\tau_n}\big)}.$$
Combining (6.87) and (6.88), we deduce
$$E\big(e^{\frac\theta2\alpha\tau_{k_0}}\big) \le \sum_{n=0}^\infty\Big(\frac{K^\theta}{2}\Big)^n\big(1+|y_0^1|^2+|y_0^2|^2\big).$$
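The termwise bound behind the geometric series can be spelled out (a sketch; the constant $2$ and the square root of the initial-data factor are absorbed as in the text): by (6.88), $P(k_0\ge n)\le(1/4)^{n-1}$, so Cauchy-Schwarz together with (6.87) gives

```latex
\sqrt{P(k_0 \ge n)\,E\big(e^{\theta\alpha\tau_n}\big)}
\;\le\; 2\Big(\frac{K^{\theta/2}}{2}\Big)^{n}\big(1+|y_0^1|^2+|y_0^2|^2\big)^{1/2},
% summable as soon as theta is small enough that K^{theta/2} < 2
```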
Hence, choosing $\theta\in(0,1)$ sufficiently small, we obtain that there exists $\gamma > 0$, independent of $N\in\mathbb N$, such that
$$E\big(e^{\gamma\tau_{k_0}}\big) \le 4\big(1+|y_0^1|^2+|y_0^2|^2\big). \tag{6.89}$$
Recall that if $(Y^1, Y^2)$ are coupled at time $t\in\Upsilon\mathbb N$, then they remain coupled at all later times. Hence $Y^1(t) = Y^2(t)$ for $t > \tau_{k_0}$. It follows that
$$P\big(Y^1(n\Upsilon)\ne Y^2(n\Upsilon)\big) \le 4e^{-\gamma n\Upsilon}\big(1+|y_0^1|^2+|y_0^2|^2\big).$$
Since $\big(Y^1(n\Upsilon), Y^2(n\Upsilon)\big)$ is a coupling of $\big((P_{n\Upsilon}^N)^*\delta_{y_0^1}, (P_{n\Upsilon}^N)^*\delta_{y_0^2}\big)$, we deduce from Lemma 6.1 that
$$\|(P_{n\Upsilon}^N)^*\delta_{y_0^1} - (P_{n\Upsilon}^N)^*\delta_{y_0^2}\|_{var} \le 4e^{-\gamma n\Upsilon}\big(1+|y_0^1|^2+|y_0^2|^2\big), \tag{6.90}$$
which establishes (6.41), then Proposition 6.1 holds. We now explain why Proposition 6.1 implies Theorem 4.2. First of all, (4.26) is equivalent to ! Z Z E g Y(t) − −γt 2 g(y)µ(dy) ≤ Ce |g|∞ 1 + |y| λ(dy) , (6.92) λ H
H
for any g ∈ UCb (H s ). Moreover, we know that, for any given initial law λ, there exists a subsequence ′ {Nk }k such that YN ′ converges in distribution in C(0, T ; H s ) to the law of Y. Recall that the family µN is k ′ tight in H during the proof of Theorem 4.1. Hence, for a subsequence {Nk }k of {Nk }k , µNk converges to a stationary measure µ(λ,{Nk }) . µ(λ,{Nk }) means that µ depends on λ and {Nk }. Taking the limit in both sides of (6.41), we have ! Z Z E g(Y(t)) − (λ,{Nk }) −γt 2 g(y)µ (dy) ≤ Ce |g|∞ 1 + |y| λ(dy) . (6.93) λ H
H
Thus, in order to obtain (6.92), we need to prove Corollary 4.3.
Proof of Corollary 4.3 For any initial value y ∈ V, by [5], we know that there exists a unique global strong solution Y(t, y) and we can obtain the existence of invariant measures µ(δy ) , which does not depend on {Nk }. Then, we claim that µ(δy ) ≡ µV for any y ∈ V. Indeed, suppose yi ∈ V(i = 1, 2) are two different elements, µ(δy1 ) and µ(δy2 ) are corresponding invariant measures. Consider Z Z Z g(y)µ(δy1 ) (dy) − g(y)µ(δy1 ) (dy) − Eg(Y y1 ) (δy2 ) g(y)µ (dy) ≤ (6.94) t H H H Z y y y + Eg(Yt 1 ) − Eg(Yt 2 ) + Eg(Yt 2 ) − g(y)µ(δy2 ) (dy) H
= I1 + I2 + I3 , 37
For I_2, we have
\begin{align*}
I_2=\big|\mathbb{E} g(Y_t^{y_1})-\mathbb{E} g(Y_t^{y_2})\big|
&\le \big|\mathbb{E} g(Y_t^{y_1})-\mathbb{E} g(Y_t^{N,y_1})\big|
+\big|\mathbb{E} g(Y_t^{N,y_1})-\mathbb{E} g(Y_t^{N,y_2})\big|\\
&\quad+\big|\mathbb{E} g(Y_t^{N,y_2})-\mathbb{E} g(Y_t^{y_2})\big|
= I_4+I_5+I_6.
\end{align*}
Thus,
\[
\Big|\int_H g(y)\,\mu^{(\delta_{y_1})}(dy)-\int_H g(y)\,\mu^{(\delta_{y_2})}(dy)\Big|\le I_1+I_3+I_4+I_5+I_6.
\]
By (6.90) and (6.93), for any ε > 0, there exists t_0 > 0 such that, for any t > t_0,
\[
I_1+I_3+I_5\le \epsilon \quad\text{uniformly in } N.
\]
Fixing t > t_0 and letting N → ∞, we obtain I_4 + I_6 → 0. Thus,
\[
\mu^{(\delta_y)}=\mu_V \quad\text{for any } y\in V. \tag{6.95}
\]
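In symbols, the limiting argument just described can be summarized as follows (a restatement of the estimates above, not a new bound):

```latex
\Big|\int_H g\,d\mu^{(\delta_{y_1})}-\int_H g\,d\mu^{(\delta_{y_2})}\Big|
\;\le\; \underbrace{I_1+I_3+I_5}_{\le\,\epsilon\ \text{uniformly in }N}
\;+\; \underbrace{I_4+I_6}_{\to\,0\ \text{as }N\to\infty\ (t\ \text{fixed})}
\qquad\text{for every } t>t_0,
```

so the left-hand side is at most ε for every ε > 0, hence the two integrals coincide for every g ∈ UC_b(H^s), which forces \mu^{(\delta_{y_1})} = \mu^{(\delta_{y_2})}.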
Now let λ = δ_{y_0}, y_0 ∈ H. Choosing y_1 ∈ V, we consider
\begin{align*}
\Big|\int_H g(y)\,\mu^{(\delta_{y_0},\{N_k\})}(dy)-\int_H g(y)\,\mu_V(dy)\Big|
&\le \Big|\int_H g(y)\,\mu^{(\delta_{y_0},\{N_k\})}(dy)-\int_{P_{N_k}H} g(y)\,\mu_{N_k}(dy)\Big|\\
&\quad+\Big|\int_{P_{N_k}H} g(y)\,\mu_{N_k}(dy)-\mathbb{E} g\big(Y_t^{N_k,y_0}\big)\Big|
+\Big|\mathbb{E} g\big(Y_t^{N_k,y_0}\big)-\mathbb{E} g\big(Y_t^{N_k,y_1}\big)\Big|\\
&\quad+\Big|\mathbb{E} g\big(Y_t^{N_k,y_1}\big)-\mathbb{E} g\big(Y_t^{y_1}\big)\Big|
+\Big|\mathbb{E} g\big(Y_t^{y_1}\big)-\int_H g(y)\,\mu_V(dy)\Big|\\
&= K_1+K_2+K_3+K_4+K_5.
\end{align*}
By (6.90), (6.91), (6.93) and (6.95), as t → ∞, we have K_2 + K_3 + K_4 + K_5 → 0 uniformly in N_k; then, letting N_k → ∞, we obtain K_1 → 0. Thus,
\[
\int_H g(y)\,\mu^{(\delta_{y_0},\{N_k\})}(dy)=\int_H g(y)\,\mu_V(dy),
\]
which implies that \mu^{(\delta_{y_0},\{N_k\})} is independent of \delta_{y_0} and \{N_k\}; moreover, \mu^{(\delta_{y_0},\{N_k\})} = \mu_V.

Acknowledgements
This work was supported by the National Natural Science Foundation of China (NSFC) (No. 11431014, No. 11371041, No. 11401557, No. 11271356), the Fundamental Research Funds for the Central Universities (No. 0010000048), the Key Laboratory of Random Complex Structures and Data Science, Academy of Mathematics and Systems Science, Chinese Academy of Sciences (No. 2008DP173182), and the applied mathematical research for the important strategic demand of China in information science and related fields (No. 2011CB808000).
References
[1] Adams, R.A.: Sobolev Spaces. New York: Academic Press, 1975
[2] Cao, C., Titi, E.S.: Global well-posedness of the three-dimensional viscous primitive equations of large-scale ocean and atmosphere dynamics. Ann. of Math. 166, 245-267 (2007)
[3] Da Prato, G., Zabczyk, J.: Ergodicity for Infinite Dimensional Systems. London Mathematical Society Lecture Notes, n. 229, Cambridge University Press, Cambridge (1996)
[4] Debussche, A., Glatt-Holtz, N., Temam, R.: Local martingale and pathwise solutions for an abstract fluids model. Phys. D 240, no. 14-15, 1123-1144 (2011)
[5] Debussche, A., Glatt-Holtz, N., Temam, R., Ziane, M.: Global existence and regularity for the 3D stochastic primitive equations of the ocean and atmosphere with multiplicative white noise. Nonlinearity 25, 2093-2118 (2012)
[6] Flandoli, F., Gatarek, D.: Martingale and stationary solutions for stochastic Navier-Stokes equations. Probab. Theory Related Fields 102, 367-391 (1995)
[7] Frankignoul, C., Hasselmann, K.: Stochastic climate models, Part II: Application to sea-surface temperature anomalies and thermocline variability. Tellus 29, 289-305 (1977)
[8] Glatt-Holtz, N., Kukavica, I., Vicol, V., Ziane, M.: Existence and regularity of invariant measures for the three dimensional stochastic primitive equations. arXiv:1311.4204v1
[9] Guillén-González, F., Masmoudi, N., Rodríguez-Bellido, M.A.: Anisotropic estimates and strong solutions for the primitive equations. Diff. Int. Equ. 14, 1381-1408 (2001)
[10] Guo, B., Huang, D.: 3D stochastic primitive equations of the large-scale ocean: global well-posedness and attractors. Comm. Math. Phys. 286, no. 2, 697-723 (2009)
[11] Hairer, M.: Exponential mixing properties of stochastic PDEs through asymptotic coupling. Probab. Theory Related Fields 124(3), 345-380 (2002)
[12] Hu, C., Temam, R., Ziane, M.: The primitive equations on the large scale ocean under the small depth hypothesis. Discrete Contin. Dynam. Systems 9, 97-131 (2003)
[13] Kuksin, S.: On exponential convergence to a stationary measure for nonlinear PDEs. Partial differential equations, 161-176 (2002)
[14] Kuksin, S., Shirikyan, A.: Ergodicity for the randomly forced 2D Navier-Stokes equations. Math. Phys. Anal. Geom. 4, 147-195 (2001)
[15] Kuksin, S., Shirikyan, A.: A coupling approach to randomly forced PDE's I. Comm. Math. Phys. 221, no. 2, 351-366 (2001)
[16] Kuksin, S., Piatnitski, A., Shirikyan, A.: A coupling approach to randomly forced PDE's II. Comm. Math. Phys. 230(1), 81-85 (2002)
[17] Lions, J.L., Temam, R., Wang, S.: New formulations of the primitive equations of atmosphere and applications. Nonlinearity 5, 237-288 (1992)
[18] Lions, J.L., Temam, R., Wang, S.: On the equations of the large scale ocean. Nonlinearity 5, 1007-1053 (1992)
[19] Lions, J.L., Temam, R., Wang, S.: Models of the coupled atmosphere and ocean. Computational Mechanics Advance 1, 1-54 (1993)
[20] Lions, J.L., Temam, R., Wang, S.: Mathematical theory for the coupled atmosphere-ocean models. J. Math. Pures Appl. 74, 105-163 (1995)
[21] Mattingly, J.: Exponential convergence for the stochastically forced Navier-Stokes equations and other partially dissipative dynamics. Comm. Math. Phys. 230, 421-462 (2002)
[22] Mikolajewicz, U., Maier-Reimer, E.: Internal secular variability in an OGCM. Climate Dyn. 4, 145-156 (1990)
[23] Tachim Medjo, T.: The exponential behavior of the stochastic three-dimensional primitive equations with multiplicative noise. Nonlinear Anal. Real World Appl. 12, no. 2, 799-810 (2011)
[24] Nualart, D.: Malliavin Calculus and Related Topics. Probability and its Applications. New York: Springer, 1995
[25] Odasso, C.: Exponential mixing for the 3D stochastic Navier-Stokes equations. Comm. Math. Phys. 270, no. 1, 109-139 (2007)
[26] Odasso, C.: Ergodicity for the stochastic complex Ginzburg-Landau equations. Ann. Inst. H. Poincaré Probab. Statist. 42, no. 4, 417-454 (2006)
[27] Phillips, O.M.: On the generation of waves by turbulent winds. J. Fluid Mech. 2, 417-445 (1957)
[28] Temam, R.: Navier-Stokes Equations and Nonlinear Functional Analysis. Second edition. CBMS-NSF Regional Conference Series in Applied Mathematics, 66. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1995