Singular Perturbation Systems with Stochastic Forcing and the Renormalization Group Method∗

Nathan Glatt-Holtz† and Mohammed Ziane‡

April 7, 2009

Abstract. This article examines a class of singular perturbation systems in the presence of a small white noise. Modifying the renormalization group procedure developed by Chen, Goldenfeld and Oono [6], we derive an associated reduced system which we use to construct an approximate solution that separates scales. Rigorous results demonstrating that these approximate solutions remain valid, with high probability, on large time scales are established. As a special case we infer new small noise asymptotic results for a class of processes exhibiting a physically motivated cancellation property in the nonlinear term. These results are applied to some concrete perturbation systems arising in geophysical fluid dynamics and in the study of turbulence. For each system we exhibit the associated renormalization group equation, which helps decouple the interactions between the different scales inherent in the original system.

1 Introduction

Perturbation theory has long played an important role in applied analysis. Perturbed systems can provide the relevant setting for the study of physical phenomena exhibiting multiple spatial and temporal scales. In the field of climatology, for example, the basic momentum equations naturally exhibit multiple orders of magnitude, with the largest order being driven by the Coriolis term arising from the earth's rotation. A basic challenge in the study of perturbed dynamical systems is that the unperturbed problem may exhibit fundamentally different quantitative and qualitative behaviors, particularly over large time scales or near certain boundaries. Such difficulties were appreciated as early as the 18th century in the study of multi-body problems in celestial mechanics. Needless to say, such "singular" systems are notoriously challenging to analyze. A variety of asymptotic methods have been developed, each germane to different types of singular perturbation problems. These methods seek to find approximate solutions to perturbed systems that separate scales and remain valid over long time intervals. See the recent texts of Verhulst [21], [22] for a broad introductory survey of deterministic systems. For the stochastic setting see Freidlin and Wentzell [11] or Skorokhod, Hoppensteadt and Habib [18] for an overview.

∗ Supported in part by the NSF grant DMS-0505974.
† Department of Mathematics, Indiana University, Bloomington, IN 47405. Email: [email protected]
‡ Department of Mathematics, University of Southern California, Los Angeles, CA 90089. Email: [email protected]


One approach, the renormalization group method, has enjoyed considerable success in recent investigations of singularly perturbed systems. The method was first developed by Chen, Goldenfeld and Oono in the context of perturbative quantum field theory; see [6] or [7] and the references therein. Subsequently, the work of Ziane [23] and later of DeVille, Harkin, Holzer, Josić, and Kaper [9] put the subject on a firm mathematical foundation. Using this technique, Moise, Simonnet, Temam, and Ziane [15] conducted a series of numerical simulations of ordinary differential equations arising in geophysical fluid dynamics. In the context of partial differential equations the method has been applied to Navier-Stokes type systems by Moise and Ziane [17] and Moise and Temam [16]. The two-dimensional Navier-Stokes equations perturbed by a small additive white noise were addressed using this method by Blömker, Gugg, and Maier-Paape [4]. We should also mention related work concerning the stochastic normal form by Arnold and Imkeller [1], Arnold, Namachchivaya, and Schenk-Hoppé [2] and Namachchivaya and Lin [19]; see also the book by Arnold [3] and the references therein. These authors seek coordinate transformations that decouple slow and fast modes, and the emphasis is on computing Lyapunov exponents as opposed to the pathwise estimates considered in our article. In any physical system, uncertainties (measurement error, unresolved scales or interactions, numerical inaccuracies, etc.) arise that are hard to account for in the basic model. One would like to be able to quantify the robustness of a model, particularly one involving singular perturbations, in the presence of these uncertainties. As such, it is natural to incorporate small white noise driven perturbations into existing singular models. As far as we are aware, [4] is the only previous investigation to apply renormalization group techniques to a stochastically driven system.
In particular, the crucial case of highly oscillatory systems remained unaddressed prior to the present article. In this work we investigate a class of stochastic equations taking the form
\[ dX^\epsilon + \frac{1}{\epsilon} A X^\epsilon\, d\tau = F(X^\epsilon)\, d\tau + \epsilon^m G(\tau, \tau/\epsilon)\, dW, \tag{1.1} \]

where A is a linear operator which is assumed to be either symmetric positive semidefinite or antisymmetric, F is a nonlinear operator which may exhibit important cancellation properties that arise physically, and dW is a white noise process in the appropriate sense. The parameter m is a real number measuring the strength of the noise term. In most cases we restrict to m > 0; in the case of strictly dissipative systems we are also able to address a noise term of moderate strength, −1/2 < m ≤ 0. Following the classical techniques, we attempt to find an approximate solution via a naive perturbation expansion of the solution, setting u ≈ u^{(0)} + εu^{(1)}. This can (and usually does) break down due to resonances between the nonlinear term F and the semigroup generated by A. Additionally, we face the new difficulty of accounting for intermediate scale diffusion introduced by the small noise term. To compensate for this we derive the renormalized system
\[ dV^\epsilon = R(V^\epsilon)\, d\tau + \epsilon^m H(\tau, \tau/\epsilon)\, dW. \tag{1.2} \]

The solution V^ε of (1.2) defines an approximate solution X̄^ε = e^{-(\tau/\epsilon)A} V^ε. Indeed, we will prove that if the behavior of V^ε is reasonable, namely if for some K > 0
\[ \mathbb{P}\Bigl( \sup_{\tau\in[0,T]} \|V^\epsilon(\tau)\| > K \Bigr) \xrightarrow{\epsilon\to0} 0, \tag{1.3} \]


then X̄^ε is a valid approximation in the sense that
\[ \mathbb{P}\Bigl( \sup_{\tau\in[0,T]} \|\bar X^\epsilon(\tau) - X^\epsilon(\tau)\| > C\epsilon^\gamma \Bigr) \xrightarrow{\epsilon\to0} 0. \tag{1.4} \]

Here, γ > 0, and depends on m as well as the structure of A and H. Note that beyond establishing convergence in probability, typical for such results, we have also managed to establish a rate of convergence. The system (1.2) greatly simplifies the original equations. The structure of H depends on A and the desired rate of convergence for the approximate solution. In particular, we show that we can take H to be identically zero at the cost of a reduced rate of convergence. Even for cases where H is non-zero, sending ε to zero reduces to a more tractable small noise asymptotic problem. One interesting consequence of these results is that when A ≡ 0 in the original system (1.1), (1.4) can be interpreted as a small noise asymptotic result. While such results are classical for systems with Lipschitz nonlinear terms (see [11] or [8]), our result allows us to address the physically important case when one can only expect cancellations in the nonlinear portion of the equation. We next turn to some concrete examples. In particular we consider stochastic versions of a meteorological model developed by Lorenz in [14] and of a simple model for turbulent flow proposed by Temam in [20]. In each case we exhibit the renormalization group which decouples the two scales inherent in the original system. Current work in preparation by the first author seeks to apply the results in this work to a series of numerical studies of these and other systems. The final section collects some general results concerning slowly varying stochastic processes. The estimates that we establish in this section form the analytical core of the main approximation results and may hold independent interest for other studies of stochastic singular perturbation systems. Appendices collect further small noise asymptotic results as well as some mostly classical estimates on stochastic convolution terms arising in the proof of the main theorems.

2 The Stochastic Singular Perturbation System and the Derivation of the Renormalization Group

For m ∈ R, we consider the singularly perturbed system given by the stochastic differential equation in C^n:
\[ dY^\epsilon(\tau) + \frac{1}{\epsilon} \bar A Y^\epsilon\, d\tau = \bar F(Y^\epsilon(\tau))\, d\tau + \epsilon^m \bar G(\tau, \tau/\epsilon)\, dW(\tau), \qquad Y^\epsilon(0) = Y_0. \tag{2.1} \]

We assume that Ā is a diagonalizable matrix. Below we analyze the case when Ā is antisymmetric and the case when Ā is positive semidefinite. The nonlinear term F̄ is assumed, for the sake of simplicity, to be a polynomial. W = (W^1, ..., W^n)^T is a standard n-dimensional Brownian motion relative to some underlying filtered probability space (Ω, F, (F_τ)_{τ≥0}, P). Ḡ takes values in M^{n×n} and is bounded in the Frobenius norm independently of τ and τ/ε:
\[ \sup_{\tau\ge0} \|\bar G(\tau, \tau/\epsilon)\|^2 = \sup_{\tau\ge0} \sum_{j,k} |\bar G_{j,k}(\tau, \tau/\epsilon)|^2 < \infty. \tag{2.2} \]


Further assumptions will be imposed on Ḡ when we consider the case when Ā is positive semidefinite. In what follows T is a fixed large time. The goal will be to study the behavior of X^ε on the time interval [0, T] when ε is small and in the limit as ε → 0. For the analysis below we shall work in a different basis. Let Q be an orthogonal matrix diagonalizing Ā, and let
\[ X^\epsilon(\tau) := Q Y^\epsilon(\tau), \qquad F(x) := Q \bar F(Q^* x), \qquad G(\tau, \tau/\epsilon) := Q \bar G(\tau, \tau/\epsilon). \tag{2.3} \]

Multiplying (2.1) by Q gives the following evolution system for X^ε:
\[ dX^\epsilon(\tau) + \frac{1}{\epsilon} A X^\epsilon(\tau)\, d\tau = F(X^\epsilon(\tau))\, d\tau + \epsilon^m G(\tau, \tau/\epsilon)\, dW, \qquad X^\epsilon(0) = X_0, \tag{2.4} \]

where A is a diagonal matrix. Note that in this basis X^ε may evolve in C^n. The results below are easily translated back to the original coordinate frame for the system (2.1); see Remark 2.1. The next step is to write X^ε in a naive perturbation expansion in powers of ε. The goal is to derive a reduced system that approximates (2.4) on large time intervals and that separates scales. To this end, we consider (2.4) on the new time scale t = τ/ε:
\[ dX^\epsilon(t) + A X^\epsilon(t)\, dt = \epsilon F(X^\epsilon(t))\, dt + \epsilon^{m+1/2} G(\epsilon t, t)\, d\tilde W, \qquad X^\epsilon(0) = X_0. \tag{2.5} \]

Below we set β = m + 1/2. Note that W̃, also a standard Brownian motion, is a rescaling of W given by
\[ \tilde W(t) = \frac{1}{\sqrt\epsilon} W(\epsilon t) = \frac{1}{\sqrt\epsilon} W(\tau). \tag{2.6} \]
When β < 2, the perturbation expansion has the form¹
\[ X^\epsilon(t) = X^{(0)}(t) + \epsilon^\beta X^{(\beta)}(t) + \epsilon X^{(1)}(t) + O(\epsilon^2), \qquad X_0 = X^{(0)}(0). \tag{2.7} \]
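The time-change identity (2.6) can be checked numerically: the rescaled process W̃(t) = ε^{-1/2} W(εt) should again behave like a standard Brownian motion, so for instance its quadratic variation over [0, T] should be approximately T. A minimal Monte Carlo sketch (the grid parameters and seed are arbitrary choices of ours, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
eps, T, n = 0.04, 1.0, 100_000
dt = eps * T / n                    # fine grid on [0, eps*T] for the path of W

# Sample a Brownian path W on [0, eps*T] by summing independent increments.
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

# Rescale as in (2.6): W_tilde(t_k) = eps^{-1/2} W(eps * t_k), with t_k in [0, T].
W_tilde = W / np.sqrt(eps)

# Quadratic variation over [0, T]; for a standard Brownian motion this is ~ T.
qv = float(np.sum(np.diff(W_tilde) ** 2))
```

With the parameters above, `qv` concentrates sharply around T = 1, consistent with W̃ being a standard Brownian motion on the fast time scale.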

We plug (2.7) into (2.5) and match powers of ε to formally derive the coupled system of equations
\[
\begin{aligned}
dX^{(0)} &= -A X^{(0)}\, dt; & X^{(0)}(0) &= X_0, \\
dX^{(\beta)} &= -A X^{(\beta)}\, dt + G\, d\tilde W; & X^{(\beta)}(0) &= 0, \\
dX^{(1)} &= \bigl(-A X^{(1)} + F(X^{(0)})\bigr)\, dt; & X^{(1)}(0) &= 0,
\end{aligned}
\]
which admits the solution
\[ X^{(0)}(t) = e^{-At} X_0, \qquad X^{(\beta)}(t) = \int_0^t e^{-A(t-s)} G(\epsilon s, s)\, d\tilde W, \qquad X^{(1)}(t) = \int_0^t e^{-A(t-s)} F(e^{-As} X_0)\, ds. \tag{2.8} \]

¹ Note however that no such restriction on β is needed for Proposition 3.1 or Proposition 3.2 below.


We have therefore derived the approximate solution
\[
\begin{aligned}
\bar X^\epsilon(t) &= e^{-At}\Bigl( X_0 + \epsilon \int_0^t e^{As} F(e^{-As} X_0)\, ds + \epsilon^\beta \int_0^t e^{As} G(\epsilon s, s)\, d\tilde W \Bigr) + O(\epsilon^2) \\
&= e^{-At}\Bigl( X_0 + \epsilon \int_0^t e^{As} F(e^{-As} X_0)\, ds \Bigr) + \epsilon^\beta \int_0^t e^{-A(t-s)} G(\epsilon s, s)\, d\tilde W + O(\epsilon^2).
\end{aligned} \tag{2.9}
\]

The approximation (2.9) assumes that X^{(1)} remains O(1) with high probability on the time scale 1/ε (see Definition 2.1 below). This may break down in X^{(1)} due to resonances with F, as analyzed in [23]. In contrast to previous investigations, we also need to account for the role of the noise term in (2.9). As we shall see below, when the noise is "small" (i.e. when m > 0 in the original system (2.1) or (2.4)), an approximate solution without stochastic terms can be shown to be valid on the time scale 1/ε. In the case of strictly dissipative systems, where A is (strictly) positive definite, we shall see that the noise can even be taken to be of intermediate strength (for example m = 0). On the other hand, we also show that including some or all of the noise terms in the approximate solution leads to a faster rate of convergence between the true and approximate solutions. We end this section by formalizing the O(·) statement.

Definition 2.1. Suppose that δ₁(·), δ₂(·) : [0, ε₀] → [0, ∞) are monotonically decreasing functions. We say that a collection of processes X^ε(·) : Ω × [0, ∞) → C^n is O(δ₁(ε)) with high probability on time scale 1/δ₂(ε) if for all T > 0 there exists a K > 0 so that
\[ \mathbb{P}\Bigl( \sup_{t\in[0,T/\delta_2(\epsilon)]} \|X^\epsilon(t)\| > K\delta_1(\epsilon) \Bigr) \xrightarrow{\epsilon\downarrow0} 0. \tag{2.10} \]

2.1 Resonance Analysis for the Nonlinear Term

We split F into its resonant and non-resonant parts with respect to the semigroup generated by A. Since F is a polynomial of degree d, we can write
\[ F(u) = \sum_{j=1}^n \Bigl( \sum_{|\alpha|\le d} C_\alpha^j u^\alpha \Bigr) e_j. \]
Here e_j is the jth standard basis element in C^n. Let Λ = {λ₁, ..., λ_n} be the eigenvalues of A. Given any multi-index α = (α₁, ..., α_n), we adopt the notation (Λ, α) := Σ_{i=1}^n α_i λ_i. Let
\[ N_r^j = \{ \alpha \in \mathbb{N}^n : |\alpha| \le d,\ (\Lambda, \alpha) = \lambda_j \}. \tag{2.11} \]

The resonant terms in F are given by
\[ R(u) = \sum_{j=1}^n \Bigl( \sum_{\alpha\in N_r^j} C_\alpha^j u^\alpha \Bigr) e_j, \tag{2.12} \]
and note that for any u ∈ C^n
\[ e^{At} R(e^{-At} u) = R(u). \tag{2.13} \]


Let
\[ F_{NR}(u) = F(u) - R(u) = \sum_{j=1}^n \Bigl( \sum_{\alpha\notin N_r^j} C_\alpha^j u^\alpha \Bigr) e_j \tag{2.14} \]
and
\[ S(t, u) = e^{tA} F_{NR}(e^{-tA} u) = \sum_{j=1}^n \Bigl( \sum_{\alpha\notin N_r^j} C_\alpha^j e^{t(\lambda_j - (\Lambda,\alpha))} u^\alpha \Bigr) e_j. \tag{2.15} \]
We have the decompositions
\[ e^{At} F(e^{-At} u) = R(u) + S(t, u), \qquad F(u) = e^{-At} R(e^{At} u) + F_{NR}(u) = R(u) + F_{NR}(u). \tag{2.16} \]
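For a diagonal A, membership in the sets N_r^j of (2.11), and hence the splitting F = R + F_{NR} of (2.12) and (2.14), reduces to a finite check on the exponents of each monomial. The sketch below stores a polynomial nonlinearity as a map {(j, α): C_α^j}; this data layout and the toy eigenvalues are our own choices for illustration, not part of the paper:

```python
def split_resonant(terms, eigenvalues, tol=1e-9):
    """Split {(j, alpha): coeff} into resonant and non-resonant parts.

    A monomial C * u^alpha in component j is resonant iff
    (Lambda, alpha) = sum_i alpha_i * lambda_i equals lambda_j, cf. (2.11).
    """
    resonant, nonresonant = {}, {}
    for (j, alpha), coeff in terms.items():
        weight = sum(a * lam for a, lam in zip(alpha, eigenvalues))
        bucket = resonant if abs(weight - eigenvalues[j]) < tol else nonresonant
        bucket[(j, alpha)] = coeff
    return resonant, nonresonant

# Toy diagonal generator A = diag(i, -i, 0): a rotating pair plus a neutral mode.
lam = [1j, -1j, 0.0]
terms = {(0, (2, 1, 0)): 1.0,   # (Lambda, alpha) = i  = lam[0]  -> resonant
         (2, (1, 1, 0)): 1.0,   # (Lambda, alpha) = 0  = lam[2]  -> resonant
         (0, (2, 0, 0)): 1.0}   # (Lambda, alpha) = 2i != lam[0] -> non-resonant
R_terms, NR_terms = split_resonant(terms, lam)
```

The `R_terms` dictionary then defines the drift of the renormalized system, while `NR_terms` generates the oscillatory remainder S(t, u) in (2.15).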

Applying (2.16) to (2.9), we find
\[
\begin{aligned}
\bar X^\epsilon &= e^{-At}\Bigl( X_0 + \epsilon t R(X_0) + \epsilon \int_0^t S(s, X_0)\, ds + \epsilon^\beta \int_0^t e^{As} G(\epsilon s, s)\, d\tilde W \Bigr) + O(\epsilon^2) \\
&= e^{-At}\Bigl( X_0 + \epsilon t R(X_0) + \epsilon \int_0^t S(s, X_0)\, ds \Bigr) + \epsilon^\beta \int_0^t e^{-A(t-s)} G(\epsilon s, s)\, d\tilde W + O(\epsilon^2).
\end{aligned} \tag{2.17}
\]

In order to estimate terms arising in the expression for X̄^ε, and to show that the last term is nonresonant (bounded) in the probabilistic sense, we need the following lemma. Its proof is an easy application of Itô's formula and the Burkholder-Davis-Gundy inequality, and is therefore given in an Appendix for the sake of completeness.

Lemma 2.1. Suppose that H takes values in M^{n×n₁}, that W = (W₁, ..., W_{n₁})^T is a standard n₁-dimensional Brownian motion, and that A ∈ M^{n×n} is symmetric non-negative definite or antisymmetric. Assume that H is uniformly bounded in time:
\[ \sup_{t>0} \|H(t)\| = \sup_{t>0} \Bigl( \sum_{j,k} |H_{k,j}(t)|^2 \Bigr)^{1/2} < \infty. \tag{2.18} \]
Given constants K, T₀ > 0,
\[ \mathbb{P}\Bigl( \sup_{t\in[0,T_0]} \Bigl\| \int_0^t e^{-A(t-s)} H(s)\, dW \Bigr\| > K \Bigr) \le \frac{C T_0}{K^2}, \tag{2.19} \]
where C := C(‖H‖).

Suppose now that β > 1/2 (equivalently, m > 0 in the original time scale). For γ < β − 1/2, Lemma 2.1 implies that
\[ \mathbb{P}\Bigl( \sup_{t\in[0,T/\epsilon]} \Bigl\| \epsilon^\beta \int_0^t e^{-A(t-s)} G(\epsilon s, s)\, d\tilde W \Bigr\| > \epsilon^\gamma \Bigr) \le C T \epsilon^{2(\beta-\gamma)-1} \xrightarrow{\epsilon\downarrow0} 0. \tag{2.20} \]


As such, ε^β ∫₀ᵗ e^{-A(t-s)} G(εs, s) dW̃ is O(ε^γ) with high probability on the time scale 1/ε. Referring back to (2.17), we arrive at the renormalized system
\[ dV^\epsilon(t) = \epsilon R(V^\epsilon)\, dt, \qquad V^\epsilon(0) = V_0, \tag{2.21} \]
with the associated approximate solution taking the form
\[ \bar X^\epsilon(t) = e^{-At}\Bigl( V^\epsilon(t) + \epsilon \int_0^t S(s, V^\epsilon(s))\, ds \Bigr). \tag{2.22} \]

This is exactly the form the approximate solution takes in other works ([23], [17], [15], [4], etc.). However, by applying Proposition 5.1, one can show that
\[ e^{-At} \int_0^t S(s, V^\epsilon)\, ds = \int_0^t e^{-A(t-s)} F_{NR}(e^{-sA} V^\epsilon)\, ds \tag{2.23} \]
remains O(1) with high probability on time scale 1/ε as long as V^ε remains O(1) with high probability on this time scale. We are therefore justified in simplifying the approximate solution to
\[ \bar X^\epsilon(t) = e^{-At} V^\epsilon(t). \tag{2.24} \]
This approximate solution satisfies
\[ d\bar X^\epsilon + A\bar X^\epsilon\, dt = \epsilon R(\bar X^\epsilon)\, dt, \qquad \bar X^\epsilon(0) = \bar X_0 = V_0. \tag{2.25} \]

In the original time scale, the renormalization group equation takes the form
\[ dV^\epsilon(\tau) = R(V^\epsilon(\tau))\, d\tau, \qquad V^\epsilon(0) = V_0, \tag{2.26} \]
and the approximate solution is given by
\[ \bar X^\epsilon(\tau) = e^{-(\tau/\epsilon)A} V^\epsilon(\tau), \tag{2.27} \]
which solves
\[ d\bar X^\epsilon + \frac{1}{\epsilon} A \bar X^\epsilon\, d\tau = R(\bar X^\epsilon)\, d\tau, \qquad \bar X^\epsilon(0) = \bar X^\epsilon_0 = V_0. \tag{2.28} \]
Note that the approximate solution in (2.27) splits the dynamics into two scales, with the (possibly) intermediate scale introduced by the diffusion term of no consequence to the asymptotic validity of the approximation. Employing the estimate (2.20), we are able to establish that X̄^ε converges to X^ε with high probability on time scale 1, with convergence rate ε^γ for any γ < m. More precisely, we show that
\[ \mathbb{P}\Bigl( \sup_{\tau\in[0,T]} \|\bar X^\epsilon(\tau) - X^\epsilon(\tau)\| > C\epsilon^\gamma \Bigr) \xrightarrow{\epsilon\to0} 0, \tag{2.29} \]
for any such γ < m. In order to improve these convergence rates, we may wish to include some stochastic terms in the renormalization group. The analysis must consider the antisymmetric and positive semidefinite cases separately.
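The convergence claim (2.29) can be illustrated on a toy instance of (2.4). We take n = 2 with A the standard rotation generator (antisymmetric), F(u) = −|u|²u, G = I and m = 1; since this F commutes with rotations it is entirely resonant (R = F, S = 0), which keeps the sketch short. All of these choices, and the Euler–Maruyama integrator, are ours rather than the paper's:

```python
import numpy as np

def expA(theta):
    """e^{theta * A} for A = [[0, -1], [1, 0]] (rotation by theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def F(u):                              # fully resonant here: R = F, S = 0
    return -np.dot(u, u) * u

def sup_error(eps, m=1.0, T=1.0, dt=1e-3, seed=1):
    """sup over [0, T] of |Xbar^eps - X^eps| for one driving path."""
    rng = np.random.default_rng(seed)
    Z = np.array([1.0, 0.0])           # Z = e^{(tau/eps) A} X removes the stiff term
    V = Z.copy()                       # renormalized system (2.26): dV = F(V) dtau
    err = 0.0
    for k in range(int(T / dt)):
        dW = rng.normal(0.0, np.sqrt(dt), 2)
        # dZ = F(Z) dtau + eps^m e^{(tau/eps) A} dW  (F commutes with rotations)
        Z = Z + F(Z) * dt + eps**m * expA(k * dt / eps) @ dW
        V = V + F(V) * dt
        # A antisymmetric, so |Xbar - X| = |e^{-(tau/eps)A}(V - Z)| = |V - Z|.
        err = max(err, float(np.linalg.norm(V - Z)))
    return err

err_coarse, err_fine = sup_error(eps=0.2), sup_error(eps=0.02)
```

With the same driving path (fixed seed), the sup-norm gap shrinks roughly linearly in ε, as (2.29) predicts for m = 1.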

2.2 The Antisymmetric Case: Improved Convergence Rates

Suppose that A is antisymmetric, along with the standing assumption that β > 1/2 (i.e. m > 0). If we include all of the noise in the renormalized system, we obtain
\[ dV^\epsilon(t) = \epsilon R(V^\epsilon)\, dt + \epsilon^\beta e^{tA} G(\epsilon t, t)\, d\tilde W, \qquad V^\epsilon(0) = V_0, \tag{2.30} \]
which reads as
\[ dV^\epsilon(\tau) = R(V^\epsilon)\, d\tau + \epsilon^m e^{(\tau/\epsilon)A} G(\tau, \tau/\epsilon)\, dW, \qquad V^\epsilon(0) = V_0, \tag{2.31} \]

in the original time scale. Since A is antisymmetric, (2.2) implies that sup_{τ>0} ‖e^{(τ/ε)A} G(τ, τ/ε)‖ < ∞. As such, the small noise asymptotic results established below (c.f. Remark 3.2) show that, for a large class of physically motivated systems, if the deterministic renormalized system (2.26) is O(1) with high probability on time scale 1, then so is the solution of (2.31). We are justified, as in the previous case, in assuming that V^ε is a slowly varying process in the sense of (5.1), (5.2) and (5.3). Once again, Proposition 5.1 justifies simplifying the approximate solution X̄^ε according to (2.24) (or (2.27) in the original time scale). In this case X̄^ε satisfies
\[ d\bar X^\epsilon + A\bar X^\epsilon\, dt = \epsilon R(\bar X^\epsilon)\, dt + \epsilon^\beta G(\epsilon t, t)\, d\tilde W, \qquad \bar X^\epsilon(0) = \bar X_0 = V_0, \tag{2.32} \]
and in the time scale τ it is written in the form
\[ d\bar X^\epsilon + \frac{1}{\epsilon} A\bar X^\epsilon\, d\tau = R(\bar X^\epsilon)\, d\tau + \epsilon^m G(\tau, \tau/\epsilon)\, dW, \qquad \bar X^\epsilon(0) = \bar X^\epsilon_0 = V_0. \tag{2.33} \]

When comparing this system to (2.4), the stochastic terms cancel. The approximate solution thus enjoys a faster rate of convergence to the original system (2.4). In particular, we prove below that
\[ \mathbb{P}\Bigl( \sup_{\tau\in[0,T]} \|\bar X^\epsilon(\tau) - X^\epsilon(\tau)\| > C\epsilon^{\min(m,1)} \Bigr) \xrightarrow{\epsilon\to0} 0. \tag{2.34} \]

Remark 2.1. In some applications, particularly in the antisymmetric case, the governing equations (2.1) are given in a non-diagonalized basis relative to the dominant linear term. The renormalized systems defined by (2.26) or (2.31) can be translated back to this original basis. Let
\[ U^\epsilon = Q^* V^\epsilon, \qquad \bar R(\cdot) = Q^* R(Q\,\cdot), \qquad \bar Y^\epsilon = Q^* \bar X^\epsilon. \tag{2.35} \]
If V^ε is defined by (2.26), then in the original basis U^ε solves
\[ dU^\epsilon(\tau) = \bar R(U^\epsilon(\tau))\, d\tau, \qquad U^\epsilon(0) = U_0. \tag{2.36} \]


On the other hand, if we include stochastic forcing in the renormalized system as in (2.31), then
\[ dU^\epsilon(\tau) = \bar R(U^\epsilon(\tau))\, d\tau + \epsilon^m e^{(\tau/\epsilon)\bar A}\, \bar G(\tau, \tau/\epsilon)\, dW, \qquad U^\epsilon(0) = U_0. \tag{2.37} \]
In both cases, the approximate solution is given by
\[ \bar Y^\epsilon(\tau) = e^{-(\tau/\epsilon)\bar A}\, U^\epsilon(\tau). \tag{2.38} \]

In the first case, Ȳ^ε is the solution of
\[ d\bar Y^\epsilon + \frac{1}{\epsilon} \bar A \bar Y^\epsilon\, d\tau = \bar R(\bar Y^\epsilon)\, d\tau, \qquad \bar Y^\epsilon(0) = \bar Y_0 = U_0, \tag{2.39} \]
whereas, in the second case,
\[ d\bar Y^\epsilon + \frac{1}{\epsilon} \bar A \bar Y^\epsilon\, d\tau = \bar R(\bar Y^\epsilon)\, d\tau + \epsilon^m \bar G(\tau, \tau/\epsilon)\, dW, \qquad \bar Y^\epsilon(0) = \bar Y_0 = U_0. \tag{2.40} \]
The results in Proposition 3.1 apply when we replace V^ε, X^ε and X̄^ε with U^ε, Y^ε and Ȳ^ε respectively.

2.3 The Positive Semidefinite Case: Improved Convergence Rates and the Case of Large Noise

We next consider the case when A is positive semidefinite. Here, beyond the uniform time bounds in (2.2), we assume that G is in M^{n×n} and diagonal. Let M, N be the projections onto ker(A) and ker(A)^⊥ respectively. Applying this decomposition to (2.17), we obtain
\[
\bar X^\epsilon = e^{-At}\Bigl( X_0 + \epsilon t R(X_0) + \epsilon \int_0^t S(s, X_0)\, ds + \epsilon^\beta \int_0^t M G(\epsilon s, s)\, d\tilde W \Bigr) + \epsilon^\beta \int_0^t e^{-A(t-s)} N G(\epsilon s, s)\, d\tilde W + O(\epsilon^2). \tag{2.41}
\]

In order to control the last term in the expression above we need the following lemma, whose proof is given in an Appendix.

Lemma 2.2. Suppose that A and H(·) are diagonal, with sup_{t>0} ‖H(t)‖ < ∞ and the spectrum of A real and non-negative. For q ≥ 2 we have
\[ \mathbb{P}\Bigl( \sup_{t\in[0,T_0]} \Bigl\| \int_0^t e^{-A(t-s)} N H(s)\, dW \Bigr\| > K \Bigr) \le \frac{C T_0}{K^q}, \tag{2.42} \]
where C := C(q, H) and N is the projection onto ker(A)^⊥.


Suppose that β > 0 (i.e. m > −1/2 in the original time scale). By applying Lemma 2.2, we have the estimate
\[ \mathbb{P}\Bigl( \sup_{t\in[0,T/\epsilon]} \Bigl\| \epsilon^\beta \int_0^t e^{-A(t-s)} N G(\epsilon s, s)\, d\tilde W \Bigr\| > \epsilon^\gamma \Bigr) \le C(q)\, T \epsilon^{q(\beta-\gamma)-1}, \tag{2.43} \]
for any 0 < γ < β. By choosing q > max{(β − γ)^{-1}, 2}, we see that ε^β ∫₀ᵗ e^{-A(t-s)} N G(εs, s) dW̃ remains O(ε^γ) with high probability on a time scale of order 1/ε. With this in mind we define the renormalization group equation
\[ dV^\epsilon(t) = \epsilon R(V^\epsilon(t))\, dt + \epsilon^\beta M G(\epsilon t, t)\, d\tilde W, \qquad V^\epsilon(0) = V_0, \tag{2.44} \]

which gives
\[ dV^\epsilon(\tau) = R(V^\epsilon(\tau))\, d\tau + \epsilon^m M G(\tau, \tau/\epsilon)\, dW, \qquad V^\epsilon(0) = V_0, \tag{2.45} \]
in the original time scale. Note that in order to apply either Proposition 5.1 or Remark 3.2 to this renormalized system, we must require either that M = 0 (equivalently, that A is strictly positive definite) or that m > 0. The approximate solution is defined in either case by X̄^ε(t) = e^{-tA} V^ε, which is the solution of
\[ d\bar X^\epsilon + A\bar X^\epsilon\, dt = \epsilon R(\bar X^\epsilon)\, dt + \epsilon^\beta M G(\epsilon t, t)\, d\tilde W, \qquad \bar X^\epsilon(0) = \bar X_0 = V_0. \tag{2.46} \]
Returning to the original time scale, we have X̄^ε(τ) = e^{-(τ/ε)A} V^ε(τ), which solves
\[ d\bar X^\epsilon + \frac{1}{\epsilon} A\bar X^\epsilon\, d\tau = R(\bar X^\epsilon)\, d\tau + \epsilon^m M G(\tau, \tau/\epsilon)\, dW, \qquad \bar X^\epsilon(0) = \bar X_0 = V_0. \tag{2.47} \]

Relying on the stronger estimates available for the stochastic convolution due to the dissipation in the system (c.f. (2.43) above), we are able to show that
\[ \mathbb{P}\Bigl( \sup_{\tau\in[0,T]} \|\bar X^\epsilon(\tau) - X^\epsilon(\tau)\| > C\epsilon^\gamma \Bigr) \xrightarrow{\epsilon\to0} 0, \tag{2.48} \]
where γ < m + 1/2.

3 Rigorous Approximation Results

We now rigorously state and prove the main results legitimating X̄^ε as an approximate solution of (2.4). We start with the highly oscillatory case.


Proposition 3.1. Assume that X^ε solves (2.4) with A antisymmetric and m > 0. Suppose that V^ε solves either (2.26) or (2.31), and that for any T > 0 there exists K > 0 such that
\[ \mathbb{P}\Bigl( \sup_{\tau\in[0,T]} \|V^\epsilon(\tau)\| > K \Bigr) := \phi_1(\epsilon) \xrightarrow{\epsilon\to0} 0. \tag{3.1} \]
Additionally, we assume that
\[ \mathbb{P}\bigl( \|V_0 - X_0\| > K\epsilon^m \bigr) := \phi_2(\epsilon) \xrightarrow{\epsilon\to0} 0. \tag{3.2} \]
(i) Let V^ε be as in (2.26), and therefore X̄^ε as in (2.27). Then for any γ < m,
\[ \mathbb{P}\Bigl( \sup_{\tau\in[0,T]} \|\bar X^\epsilon(\tau) - X^\epsilon(\tau)\| > C\epsilon^\gamma \Bigr) \xrightarrow{\epsilon\to0} 0, \tag{3.3} \]
where C = C(K, F).
(ii) If V^ε solves (2.31) and X̄^ε is given by (2.33), then
\[ \mathbb{P}\Bigl( \sup_{\tau\in[0,T]} \|\bar X^\epsilon(\tau) - X^\epsilon(\tau)\| > C\epsilon^{\min(m,1)} \Bigr) \xrightarrow{\epsilon\to0} 0. \tag{3.4} \]

Remark 3.1. As in [4], T can be replaced with T_ε := T log(1/ε) in (3.1), (3.3) and (3.4) (or in (3.35), (3.37), (3.38) below) with no additional complications.

Proof. Throughout the following, we work on the time scale t. Let Z^ε(t) = e^{At} X^ε(t). This process satisfies
\[ dZ^\epsilon = \epsilon e^{At} F(e^{-At} Z^\epsilon)\, dt + \epsilon^\beta e^{At} G(\epsilon t, t)\, d\tilde W; \qquad Z^\epsilon(0) = X_0. \tag{3.5} \]
Since A is antisymmetric,
\[ \|V^\epsilon(t) - Z^\epsilon(t)\| = \|\bar X^\epsilon(t) - X^\epsilon(t)\|, \qquad t \ge 0. \tag{3.6} \]
Thus we are justified in considering the error process
\[ E^\epsilon(t) := V^\epsilon(t) - Z^\epsilon(t). \tag{3.7} \]

For case (i), using (2.16),
\[
\begin{aligned}
E^\epsilon(t) &= E_0 + \epsilon \int_0^t \bigl( R(V^\epsilon) - e^{As} F(e^{-As} Z^\epsilon) \bigr)\, ds - \epsilon^\beta \int_0^t e^{As} G(\epsilon s, s)\, d\tilde W \\
&= E_0 + \epsilon \int_0^t \bigl( e^{As} F(e^{-As} V^\epsilon) - e^{As} F(e^{-As}(V^\epsilon - E^\epsilon)) \bigr)\, ds - \epsilon \int_0^t S(s, V^\epsilon)\, ds - \epsilon^\beta \int_0^t e^{As} G(\epsilon s, s)\, d\tilde W.
\end{aligned} \tag{3.8}
\]


However, in case (ii), there is no stochastic integral term, and we have
\[
\begin{aligned}
E^\epsilon(t) &= E_0 + \epsilon \int_0^t \bigl( R(V^\epsilon) - e^{As} F(e^{-As} Z^\epsilon) \bigr)\, ds \\
&= E_0 + \epsilon \int_0^t \bigl( e^{As} F(e^{-As} V^\epsilon) - e^{As} F(e^{-As}(V^\epsilon - E^\epsilon)) \bigr)\, ds - \epsilon \int_0^t S(s, V^\epsilon)\, ds.
\end{aligned} \tag{3.9}
\]
Notice that in both cases V^ε has the form (5.1) and satisfies the conditions (5.2) and (5.3). For α ∉ N_R^j, let K_{α,j} be the constant arising in Proposition 5.1 with f(u) = C_α^j u^α. Define
\[ C_1 = \sum_{j=1}^n \sum_{\alpha\notin N_r^j} K_{\alpha,j}. \tag{3.10} \]

Applying Proposition 5.1, we estimate
\[
\mathbb{P}\Bigl( \sup_{t\in[0,T/\epsilon]} \Bigl\| \epsilon \int_0^t S(s, V^\epsilon)\, ds \Bigr\| > C_1\epsilon \Bigr) \le \sum_{\substack{1\le j\le n \\ \alpha\notin N_R^j}} \mathbb{P}\Bigl( \sup_{t\in[0,T/\epsilon]} \Bigl| \epsilon \int_0^t e^{s(\lambda_j - (\Lambda,\alpha))} C_\alpha^j (V^\epsilon)^\alpha\, ds \Bigr| > K_{\alpha,j}\epsilon \Bigr) \xrightarrow{\epsilon\downarrow0} 0. \tag{3.11}
\]

For case (i) we need to estimate the stochastic integral term in (3.8). Lemma 2.1 implies
\[
\begin{aligned}
\mathbb{P}\Bigl( \sup_{t\in[0,T/\epsilon]} \Bigl\| \epsilon^\beta \int_0^t e^{As} G(\epsilon s, s)\, d\tilde W \Bigr\| > \epsilon^\gamma \Bigr) &= \mathbb{P}\Bigl( \sup_{t\in[0,T/\epsilon]} \Bigl\| \int_0^t e^{-A(t-s)} G(\epsilon s, s)\, d\tilde W \Bigr\| > \epsilon^{\gamma-\beta} \Bigr) \\
&\le C T \epsilon^{2(\beta-\gamma)-1} = C T \epsilon^{2(m-\gamma)} \xrightarrow{\epsilon\downarrow0} 0.
\end{aligned} \tag{3.12}
\]
By the mean value theorem, we have
\[ \|F(e^{-At} V^\epsilon) - F(e^{-At} V^\epsilon(t) - e^{-At} E^\epsilon(t))\| \le \sup_{\sigma\in[0,1]} \|DF(e^{-At} V^\epsilon(t) - \sigma e^{-At} E^\epsilon(t))\|\, \|E^\epsilon(t)\|. \tag{3.13} \]
Let
\[ \eta^\epsilon(t) = \sup_{\sigma\in[0,1]} \|DF(e^{-At} V^\epsilon(t) - \sigma e^{-At} E^\epsilon(t))\|. \tag{3.14} \]

Note that
\[ \Bigl\| \epsilon \int_0^t e^{As} \bigl( F(e^{-As} V^\epsilon) - F(e^{-As}(V^\epsilon - E^\epsilon)) \bigr)\, ds \Bigr\| \le \Xi^\epsilon(t), \tag{3.15} \]
where
\[ \Xi^\epsilon(t) = \epsilon \int_0^t \eta^\epsilon(s)\, \|E^\epsilon(s)\|\, ds. \tag{3.16} \]


Define
\[
\mathcal G_\epsilon = \bigl\{ \|E_0\| \le K\epsilon^\gamma \bigr\} \cap \Bigl\{ \sup_{t\in[0,T/\epsilon]} \Bigl\| \epsilon \int_0^t S(s, V^\epsilon)\, ds \Bigr\| \le C_1\epsilon \Bigr\} \cap \Bigl\{ \sup_{t\in[0,T/\epsilon]} \Bigl\| \epsilon^\beta \int_0^t e^{As} G(\epsilon s, s)\, d\tilde W \Bigr\| \le \epsilon^\gamma \Bigr\} \tag{3.17}
\]
in case (i), and
\[
\mathcal G_\epsilon = \bigl\{ \|E_0\| \le K\epsilon^\gamma \bigr\} \cap \Bigl\{ \sup_{t\in[0,T/\epsilon]} \Bigl\| \epsilon \int_0^t S(s, V^\epsilon)\, ds \Bigr\| \le C_1\epsilon \Bigr\} \tag{3.18}
\]
for (ii). Due to (3.2) and (3.11), and in the first case (3.12), we have
\[ \mathbb{P}(\mathcal G_\epsilon^C) \xrightarrow{\epsilon\to0} 0. \tag{3.19} \]

We complete the proof for either (i) or (ii) using a maximal argument; we give the details for (i), the other case being nearly identical. Let C₂ = K + C₁ + 1. For any sample ω ∈ G_ε we have
\[ \|E^\epsilon(t,\omega)\| \le \epsilon^\gamma C_2 + \Xi^\epsilon(t,\omega). \tag{3.20} \]
Thus
\[ \frac{d\Xi^\epsilon}{dt}(t,\omega) = \epsilon\, \eta^\epsilon(t,\omega)\, \|E^\epsilon(t,\omega)\| \le \epsilon\, \eta^\epsilon(t,\omega)\, \bigl( \epsilon^\gamma C_2 + \Xi^\epsilon(t,\omega) \bigr). \tag{3.21} \]
Grönwall's inequality implies
\[ \Xi^\epsilon(t,\omega) \le \epsilon^\gamma C_2 \exp\Bigl( \epsilon \int_0^t \eta^\epsilon(s,\omega)\, ds \Bigr)\, \epsilon \int_0^t \eta^\epsilon(s,\omega)\, ds. \tag{3.22} \]

For ω ∈ G_ε, define I^ε(ω) ⊂ [0, T/ε] to be the maximal interval so that ‖E^ε(s,ω)‖ ≤ 1 for every s ∈ I^ε(ω). Note that for ε sufficiently small, the definition of G_ε ensures that this interval is nontrivial for almost every ω ∈ G_ε. Let
\[ C_3 := \sup_{t\in\mathbb{R}^+,\ \|x\|\le K,\ \|y\|\le 1} \Bigl( \sup_{\sigma\in[0,1]} \|DF(e^{-At}(x - \sigma y))\| \Bigr). \tag{3.23} \]
By using the estimate (3.22), we obtain
\[ \|E^\epsilon(s,\omega)\| \le \epsilon^\gamma C_2 \bigl( 1 + T C_3 e^{T C_3} \bigr) \le \epsilon^\gamma C_4, \tag{3.24} \]
for any s ∈ I^ε(ω). Note that C₄ can be chosen independently of ω and ε. As such, for ε small enough that ε^γ C₄ < 1, a continuity argument gives I^ε(ω) = [0, T/ε] for almost every ω ∈ G_ε, so that (3.24) holds on all of [0, T/ε]. In view of (3.19), this completes the proof.

Corollary 3.1. Suppose that for m > 0, X^ε solves
\[ dX^\epsilon = F(X^\epsilon)\, d\tau + \epsilon^m G(\tau, \tau/\epsilon)\, dW; \qquad X^\epsilon(0) = X_0, \tag{3.25} \]

and that x^ε is the solution of the associated deterministic system
\[ dx^\epsilon = F(x^\epsilon)\, d\tau; \qquad x^\epsilon(0) = x_0. \tag{3.26} \]
Assume that G(·,·) is uniformly bounded as in (2.2), that for all u ∈ R^n,
\[ \langle F(u), u \rangle \le 0, \tag{3.27} \]
that
\[ \mathbb{P}\bigl( \|X_0 - x_0\| > K\epsilon^m \bigr) \xrightarrow{\epsilon\to0} 0, \tag{3.28} \]
and finally that
\[ \mathbb{P}\bigl( \|x_0\| > K \bigr) \xrightarrow{\epsilon\to0} 0. \tag{3.29} \]
Then for any γ < m,
\[ \mathbb{P}\Bigl( \sup_{\tau\in[0,T]} \|X^\epsilon(\tau) - x^\epsilon(\tau)\| > C\epsilon^\gamma \Bigr) \xrightarrow{\epsilon\to0} 0, \tag{3.30} \]

where C := C(F, K).

Proof. Notice that (3.26) is in the form of (2.1)² with A ≡ 0. Defining R according to (2.12), we find that F(u) = R(u) and that the renormalization group equation derived in Section 2.1 is given by (3.26). Due to (3.27) we have
\[ \|x^\epsilon(\tau)\|^2 = \|x_0\|^2 + 2 \int_0^\tau \langle F(x^\epsilon), x^\epsilon \rangle\, d\tau' \le \|x_0\|^2. \tag{3.31} \]
By applying (3.29), we find that the condition (3.1) holds for x^ε. The conclusion (3.30) therefore follows from Proposition 3.1, (i).

Remark 3.2. (i) One important class of systems satisfying (3.27) arises when
\[ F(u) = Lu + B(u, u), \tag{3.32} \]
where L is linear and either antisymmetric or negative semidefinite, and B is a bilinear form satisfying the cancellation property ⟨B(u, v), v⟩ = 0.
(ii) Small noise asymptotic results involving Lipschitz continuous nonlinear terms are classical and have been studied by many authors (see [11] and [8]). As noted, Corollary 3.1 covers the physically important case when one can only expect cancellations in the nonlinear portion of the equation. In Section 7 below we provide a different proof of Corollary 3.1 that covers the case of multiplicative noise and also establishes convergence to the deterministic limit system in L^p(Ω) for p ≥ 2.

² In this case, (2.1) and (2.4) are identical.


(iii) Corollary 3.1 can sometimes be used to verify (3.1) for V^ε, the solution of (2.31). Suppose that, according to (2.12), R(u) = Lu + B(u, u), so that ⟨R(u), u⟩ ≤ 0. Take
\[ dv^\epsilon = R(v^\epsilon)\, d\tau; \qquad v^\epsilon(0) = v_0. \tag{3.33} \]
If V₀ − v₀ satisfies (3.28) and v₀ fulfills (3.29), each with constant κ, then Corollary 3.1 and the calculation in (3.31) imply
\[
\mathbb{P}\Bigl( \sup_{\tau\in[0,T]} |V^\epsilon(\tau)| > \kappa + 1 \Bigr) \le \mathbb{P}\Bigl( \sup_{\tau\in[0,T]} |V^\epsilon(\tau) - v^\epsilon(\tau)| > 1 \Bigr) + \mathbb{P}\Bigl( \sup_{\tau\in[0,T]} |v^\epsilon(\tau)| > \kappa \Bigr) \xrightarrow{\epsilon\downarrow0} 0. \tag{3.34}
\]

In a similar manner, one may verify (3.35) for V^ε satisfying (2.45) in Proposition 3.2 below.

We next address the positive semidefinite case.

Proposition 3.2. Assume that X^ε solves (2.4) with A positive semidefinite. Suppose that V^ε is a solution of either (2.26) or (2.45), so that for T > 0 there is a constant K > 0 such that
\[ \mathbb{P}\Bigl( \sup_{\tau\in[0,T]} \|V^\epsilon(\tau)\| > K \Bigr) := \phi_1(\epsilon) \xrightarrow{\epsilon\to0} 0. \tag{3.35} \]
Additionally, suppose that the initial data V₀ approximates X₀ with high probability for small ε:
\[ \mathbb{P}\bigl( \|V_0 - X_0\| > K\epsilon^{m+1/2} \bigr) := \phi_2(\epsilon) \xrightarrow{\epsilon\to0} 0. \tag{3.36} \]
(i) If m > 0 in (2.4) and V^ε solves (2.26), then there is a positive constant C = C(K, F) so that whenever γ ≤ m,
\[ \mathbb{P}\Bigl( \sup_{\tau\in[0,T]} \|\bar X^\epsilon(\tau) - X^\epsilon(\tau)\| > C\epsilon^\gamma \Bigr) \xrightarrow{\epsilon\to0} 0, \tag{3.37} \]
where X̄^ε(τ) := e^{-(τ/ε)A} V^ε(τ).
(ii) In addition to the uniform bound (2.2), suppose that G(·,·) is diagonal. If m > 0 in (2.4) and V^ε is the solution of (2.45), then
\[ \mathbb{P}\Bigl( \sup_{\tau\in[0,T]} \|\bar X^\epsilon(\tau) - X^\epsilon(\tau)\| > C\epsilon^\gamma \Bigr) \xrightarrow{\epsilon\to0} 0, \tag{3.38} \]
for any γ < m + 1/2.
(iii) If, moreover, A is (strictly) positive definite (i.e. σ(A) is strictly positive), then for m > −1/2, (3.38) holds for any γ < m + 1/2.


Proof. As in the proof of Proposition 3.1, we work on time scale t. Here we define the error process by E^ε(t) = X̄^ε(t) − X^ε(t). Subtracting (2.25) from (2.5) in case (i), or (2.46) from (2.5) in cases (ii) and (iii), we arrive at the system
\[ dE^\epsilon + A E^\epsilon\, dt = \epsilon \bigl( R(\bar X^\epsilon) - F(X^\epsilon) \bigr)\, dt - \epsilon^\beta \mathcal P G(\epsilon t, t)\, d\tilde W; \qquad E^\epsilon(0) = V_0 - X_0. \tag{3.39} \]
In case (i), P = I, while in the latter cases P = N, the projection onto ker(A)^⊥. We have
\[
\begin{aligned}
E^\epsilon(t) &= e^{-At} E_0 + \epsilon \int_0^t e^{-A(t-s)} \bigl( R(\bar X^\epsilon) - F(X^\epsilon) \bigr)\, ds - \epsilon^\beta \int_0^t e^{-A(t-s)} \mathcal P G(\epsilon s, s)\, d\tilde W \\
&= e^{-At} E_0 + \epsilon \int_0^t e^{-A(t-s)} \bigl( F(\bar X^\epsilon) - F(\bar X^\epsilon - E^\epsilon) \bigr)\, ds - \epsilon \int_0^t e^{-A(t-s)} F_{NR}(\bar X^\epsilon)\, ds \\
&\quad - \epsilon^\beta \int_0^t e^{-A(t-s)} \mathcal P G(\epsilon s, s)\, d\tilde W.
\end{aligned} \tag{3.40}
\]

First we estimate the term
\[
\int_0^t e^{-A(t-s)} F_{NR}(\bar X^\epsilon)\, ds = \sum_{j=1}^n \sum_{\alpha\notin N_r^j} \Bigl( \int_0^t e^{-\lambda_j(t-s)} C_\alpha^j (\bar X^\epsilon)^\alpha\, ds \Bigr) e_j = \sum_{j=1}^n \sum_{\alpha\notin N_r^j} \Bigl( \int_0^t e^{-\lambda_j(t-s)} e^{-s(\Lambda,\alpha)} C_\alpha^j (V^\epsilon)^\alpha\, ds \Bigr) e_j. \tag{3.41}
\]

Here λ_j = A_{jj} ≥ 0 (since A is assumed diagonal, this is the jth eigenvalue of A) and (Λ, α) = Σ_k α_k λ_k. Recalling (2.11), we have λ_j ≠ (Λ, α) for each α ∉ N_r^j. Notice that for cases (i) and (iii), V^ε takes the form (5.1) trivially, since both (2.21) and (2.44) are deterministic systems (in the latter case σ(A) > 0, so M = 0). On the other hand, for case (ii), β > 1/2 in (2.44). Combining these observations with the boundedness condition (3.35), we see that each of the terms in (3.41) satisfies Lemma 5.1 (ii). Similarly to (3.10) and (3.11), we infer a constant C₁, depending on the C_α^j and λ_j in (3.41), so that
\[ \mathbb{P}\Bigl( \sup_{t\in[0,T/\epsilon]} \Bigl\| \int_0^t e^{-A(t-s)} F_{NR}(\bar X^\epsilon)\, ds \Bigr\| > C_1 \Bigr) \xrightarrow{\epsilon\downarrow0} 0. \tag{3.42} \]

We next address the stochastic integral terms. For case (i) we estimate, using Lemma 2.1,
\[ \mathbb{P}\Bigl( \sup_{t\in[0,T/\epsilon]} \Bigl\| \epsilon^\beta \int_0^t e^{-A(t-s)} G(\epsilon s, s)\, d\tilde W \Bigr\| > \epsilon^\gamma \Bigr) \le C T \epsilon^{2(\beta-\gamma)-1} \xrightarrow{\epsilon\downarrow0} 0. \tag{3.43} \]
On the other hand, for cases (ii) and (iii) we apply Lemma 2.2, choosing q > max{(β − γ)^{-1}, 2}, so that
\[ \mathbb{P}\Bigl( \sup_{t\in[0,T/\epsilon]} \Bigl\| \epsilon^\beta \int_0^t e^{-A(t-s)} N G(\epsilon s, s)\, d\tilde W \Bigr\| > \epsilon^\gamma \Bigr) \le C(q)\, T \epsilon^{q(\beta-\gamma)-1} \xrightarrow{\epsilon\downarrow0} 0. \tag{3.44} \]


The final step is to estimate the term
\[ \epsilon \int_0^t e^{-A(t-s)} \bigl( F(\bar X^\epsilon) - F(\bar X^\epsilon - E^\epsilon) \bigr)\, ds. \tag{3.45} \]

By the mean value theorem,
\[ \|F(\bar X^\epsilon) - F(\bar X^\epsilon - E^\epsilon)\| \le \sup_{\sigma\in[0,1]} \|DF(\bar X^\epsilon(t) - \sigma E^\epsilon(t))\|\, \|E^\epsilon(t)\|. \tag{3.46} \]
Let
\[ \eta^\epsilon(t) = \sup_{\sigma\in[0,1]} \|DF(e^{-At} V^\epsilon(t) - \sigma E^\epsilon(t))\|, \tag{3.47} \]
and, as in the proof of Proposition 3.1, define
\[ \Xi^\epsilon(t) = \epsilon \int_0^t \eta^\epsilon(s)\, \|E^\epsilon(s)\|\, ds \tag{3.48} \]

and
\[
\begin{aligned}
\mathcal G_\epsilon = \Bigl\{ \sup_{t\in[0,T/\epsilon]} \|e^{-At} E_0\| \le K\epsilon^\gamma \Bigr\} &\cap \Bigl\{ \sup_{t\in[0,T/\epsilon]} \Bigl\| \int_0^t e^{-A(t-s)} F_{NR}(\bar X^\epsilon)\, ds \Bigr\| \le C_1 \Bigr\} \\
&\cap \Bigl\{ \sup_{t\in[0,T/\epsilon]} \Bigl\| \epsilon^\beta \int_0^t e^{-A(t-s)} N G(\epsilon s, s)\, d\tilde W \Bigr\| \le \epsilon^\gamma \Bigr\}
\end{aligned} \tag{3.49}
\]
for cases (ii) and (iii), or
\[
\begin{aligned}
\mathcal G_\epsilon = \Bigl\{ \sup_{t\in[0,T/\epsilon]} \|e^{-At} E_0\| \le K\epsilon^\gamma \Bigr\} &\cap \Bigl\{ \sup_{t\in[0,T/\epsilon]} \Bigl\| \int_0^t e^{-A(t-s)} F_{NR}(\bar X^\epsilon)\, ds \Bigr\| \le C_1 \Bigr\} \\
&\cap \Bigl\{ \sup_{t\in[0,T/\epsilon]} \Bigl\| \epsilon^\beta \int_0^t e^{-A(t-s)} G(\epsilon s, s)\, d\tilde W \Bigr\| \le \epsilon^\gamma \Bigr\}
\end{aligned} \tag{3.50}
\]
for case (i). The maximal argument is then employed exactly as in Proposition 3.1 to complete the proof.

4 Applications: Some Examples from Fluid Dynamics

With the results developed above in hand, we now consider several examples of stochastically perturbed multi-scale systems. These simple models were first considered in the deterministic setting in [15] and are motivated by the numerical study of geophysical fluid flow and turbulence. For each example below, observe that the renormalized system decouples the fast and slow components of the original system. The stochastically forced renormalized systems are seen to satisfy (3.1) (or (3.35)) for a large class of initial conditions by Remark 3.2. We further remark that for the first two examples the perturbation systems are written in the non-diagonalized form (2.1); as such, we exhibit the deterministic and stochastic renormalized systems according to (2.36) and (2.37), respectively. See Remark 2.1. A future article, which develops applications of the results in this work to the numerical integration of singular stochastic systems, will consider the examples below in more detail.


N. Glatt-Holtz and M. Ziane

Example 4.1 (Linear Renormalization Group Equation). For the first example we consider a nonlinear equation that exhibits a linear renormalization group equation:
$$
\begin{aligned}
dY_1 + \Big( \lambda_1 Y_1 - \frac{1}{\epsilon} Y_2 - Y_1 Y_3 \Big)\,d\tau &= \sqrt{\epsilon}\,dW_1 \\
dY_2 + \Big( \lambda_2 Y_2 + \frac{1}{\epsilon} Y_1 + Y_2 Y_3 \Big)\,d\tau &= \sqrt{\epsilon}\,dW_2 \\
dY_3 + \big( \lambda_3 Y_3 + Y_1^2 - Y_2^2 \big)\,d\tau &= \sqrt{\epsilon}\,dW_3,
\end{aligned}
\tag{4.1}
$$
where $\lambda_1$, $\lambda_2$, $\lambda_3$ are fixed constants. After some routine computations we conclude that the renormalization group equation has the form
$$
\begin{aligned}
dU_1 + \tfrac{1}{2}(\lambda_1 + \lambda_2) U_1\,d\tau &= 0 \\
dU_2 + \tfrac{1}{2}(\lambda_1 + \lambda_2) U_2\,d\tau &= 0 \\
dU_3 + \lambda_3 U_3\,d\tau &= 0.
\end{aligned}
\tag{4.2}
$$
The stochastic counterpart is given by
$$
\begin{aligned}
dU_1 + \tfrac{1}{2}(\lambda_1 + \lambda_2) U_1\,d\tau &= \sqrt{\epsilon}\cos(\tau/\epsilon)\,dW_1 - \sqrt{\epsilon}\sin(\tau/\epsilon)\,dW_2 \\
dU_2 + \tfrac{1}{2}(\lambda_1 + \lambda_2) U_2\,d\tau &= \sqrt{\epsilon}\sin(\tau/\epsilon)\,dW_1 + \sqrt{\epsilon}\cos(\tau/\epsilon)\,dW_2 \\
dU_3 + \lambda_3 U_3\,d\tau &= \sqrt{\epsilon}\,dW_3.
\end{aligned}
\tag{4.3}
$$
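As a purely numerical illustration (not part of the analysis above; the parameter values, step size, and the helper `euler_maruyama` are our own illustrative choices), both the stiff system (4.1) and the renormalized system (4.3) can be integrated with the Euler–Maruyama scheme:

```python
import numpy as np

def euler_maruyama(drift, diffusion, y0, dt, n_steps, rng):
    """Euler-Maruyama for dY = drift(t, Y) dt + diffusion(t, Y) dB (matrix diffusion)."""
    y = np.asarray(y0, dtype=float)
    path = [y.copy()]
    t = 0.0
    for _ in range(n_steps):
        dB = rng.normal(scale=np.sqrt(dt), size=y.size)
        y = y + drift(t, y) * dt + diffusion(t, y) @ dB
        t += dt
        path.append(y.copy())
    return np.array(path)

eps = 0.01                                   # illustrative value, not from the paper
lam = np.array([1.0, 1.0, 0.5])

def drift_full(t, y):                        # system (4.1): 1/eps rotation couples Y1, Y2
    return -np.array([lam[0]*y[0] - y[1]/eps - y[0]*y[2],
                      lam[1]*y[1] + y[0]/eps + y[1]*y[2],
                      lam[2]*y[2] + y[0]**2 - y[1]**2])

def drift_rg(t, y):                          # renormalization group equation (4.2)
    return -np.array([0.5*(lam[0] + lam[1])*y[0],
                      0.5*(lam[0] + lam[1])*y[1],
                      lam[2]*y[2]])

def noise_rg(t, y):                          # rotated noise of the stochastic RG system (4.3)
    c, s = np.cos(t/eps), np.sin(t/eps)
    return np.sqrt(eps) * np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

noise_full = lambda t, y: np.sqrt(eps) * np.eye(3)

rng = np.random.default_rng(0)
dt, n = 1e-4, 20_000                         # dt must resolve the 1/eps oscillation
path_full = euler_maruyama(drift_full, noise_full, [1.0, 0.0, 0.5], dt, n, rng)
path_rg = euler_maruyama(drift_rg, noise_rg, [1.0, 0.0, 0.5], dt, n, rng)
```

Since (4.2)–(4.3) are free of the $1/\epsilon$ oscillation, in practice the renormalized system can be integrated with a much larger step than (4.1); the same small step is used here only to keep the sketch short.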

Example 4.2 (The Lorenz System). We next consider a system introduced by E. Lorenz (see [14]) with a small additive noise. Let $\lambda_1$, $\lambda_2$ and $b$ be fixed positive parameters:
$$
\begin{aligned}
dY_1 &= (-\lambda_1 Y_1 - Y_2 Y_3 + b Y_2 Y_5)\,d\tau + \sqrt{\epsilon}\,dW_1 \\
dY_2 &= (-\lambda_1 Y_2 + 2 Y_1 Y_3 - 2 b Y_1 Y_5)\,d\tau + \sqrt{\epsilon}\,dW_2 \\
dY_3 &= (-\lambda_1 Y_3 - Y_1 Y_2)\,d\tau + \sqrt{\epsilon}\,dW_3 \\
dY_4 &= \Big( -\lambda_2 Y_4 - \frac{1}{\epsilon} Y_5 \Big)\,d\tau + \sqrt{\epsilon}\,dW_4 \\
dY_5 &= \Big( -\lambda_2 Y_5 + \frac{1}{\epsilon} Y_4 + b Y_1 Y_2 \Big)\,d\tau + \sqrt{\epsilon}\,dW_5.
\end{aligned}
\tag{4.4}
$$

In this case the renormalization group equation is given by
$$
\begin{aligned}
dU_1 &= (-\lambda_1 U_1 - U_2 U_3)\,d\tau \\
dU_2 &= (-\lambda_1 U_2 + 2 U_1 U_3)\,d\tau \\
dU_3 &= (-\lambda_1 U_3 - U_1 U_2)\,d\tau \\
dU_4 &= -\lambda_2 U_4\,d\tau \\
dU_5 &= -\lambda_2 U_5\,d\tau
\end{aligned}
\tag{4.5}
$$


and its stochastic counterpart by
$$
\begin{aligned}
dU_1 &= (-\lambda_1 U_1 - U_2 U_3)\,d\tau + \sqrt{\epsilon}\,dW_1 \\
dU_2 &= (-\lambda_1 U_2 + 2 U_1 U_3)\,d\tau + \sqrt{\epsilon}\,dW_2 \\
dU_3 &= (-\lambda_1 U_3 - U_1 U_2)\,d\tau + \sqrt{\epsilon}\,dW_3 \\
dU_4 &= -\lambda_2 U_4\,d\tau + \sqrt{\epsilon}\cos(\tau/\epsilon)\,dW_4 - \sqrt{\epsilon}\sin(\tau/\epsilon)\,dW_5 \\
dU_5 &= -\lambda_2 U_5\,d\tau + \sqrt{\epsilon}\sin(\tau/\epsilon)\,dW_4 + \sqrt{\epsilon}\cos(\tau/\epsilon)\,dW_5.
\end{aligned}
\tag{4.6}
$$

Example 4.3 (A Symmetric Positive Semidefinite System). The final example addresses the dissipative case:
$$
\begin{aligned}
dX_1 + (X_1 + X_1 X_2)\,d\tau &= \sqrt{\epsilon}\,dW_1 \\
dX_2 + \frac{1}{\epsilon}\big( X_2 - X_1^2 \big)\,d\tau &= \sqrt{\epsilon}\,dW_2.
\end{aligned}
\tag{4.7}
$$
This system was studied in [20] as a simple model for the numerical simulation of turbulent fluid flows. Here we have replaced the external forcing terms with a small white noise. The renormalized system has the form
$$
dV_1 + V_1\,d\tau = \sqrt{\epsilon}\,dW_1, \qquad dV_2 = 0.
\tag{4.8}
$$
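A quick numerical experiment (our own illustrative sketch, with parameter values not taken from the paper) makes the content of (4.8) visible: after a transient of length $O(\epsilon)$, the fast variable $X_2$ in (4.7) is slaved to $X_1^2$, which is why the renormalized equation for $V_2$ carries no dynamics.

```python
import numpy as np

def slaving_gap(eps=0.01, dt=1e-4, n_steps=50_000, seed=1):
    """Integrate (4.7) by Euler-Maruyama and return the mean of |X2 - X1^2|
    over the second half of the run, i.e. after the fast 1/eps transient."""
    rng = np.random.default_rng(seed)
    x1, x2 = 1.0, 0.0
    gaps = []
    for _ in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt), size=2)
        # explicit Euler step; both updates use the current (x1, x2)
        x1, x2 = (x1 - (x1 + x1 * x2) * dt + np.sqrt(eps) * dW[0],
                  x2 - (x2 - x1 ** 2) / eps * dt + np.sqrt(eps) * dW[1])
        gaps.append(abs(x2 - x1 ** 2))
    return float(np.mean(gaps[n_steps // 2:]))

print(slaving_gap())   # small: X2 tracks X1^2 once the fast relaxation has acted
```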

5 Large Time Estimates for Slowly Varying Processes

The following general result establishes that time integrals of slowly varying processes against oscillatory or dissipative terms remain $O(1)$ with high probability on large time scales.

Proposition 5.1. Suppose that, for $\epsilon > 0$, $Z^\epsilon \in \mathbb{C}^n$ solves
$$
dZ^\epsilon = \epsilon \Phi(t, Z^\epsilon)\,dt + \epsilon^\beta \Psi(t, Z^\epsilon)\,dW; \qquad Z^\epsilon(0) = Z_0.
\tag{5.1}
$$

Here we suppose that $\beta > 1/2$, and that $\Phi$ and $\Psi$ are uniformly bounded in time on bounded subsets of $\mathbb{C}^n$ in the sense that, for any $R > 0$,
$$
\sup_{t > 0,\, \|x\| \le R} \|\Phi(t,x)\| < \infty, \qquad \sup_{t > 0,\, \|x\| \le R} \|\Psi(t,x)\| < \infty,
\tag{5.2}
$$
and assume that for some constant $K > 0$,
$$
P\Big( \sup_{t \in [0,T/\epsilon]} \|Z^\epsilon(t)\| > K \Big) := \phi(\epsilon) \xrightarrow{\epsilon \downarrow 0} 0.
\tag{5.3}
$$

Then:

(i) For any $\sigma \in \mathbb{R} \setminus \{0\}$ and any analytic function $f : \mathbb{C}^n \to \mathbb{C}$,
$$
P\Big( \sup_{t \in [0,T/\epsilon]} \Big\| \int_0^t e^{i\sigma s} f(Z^\epsilon)\,ds \Big\| > C \Big) \xrightarrow{\epsilon \downarrow 0} 0,
\tag{5.4}
$$
where $C := C(f, \sigma, T)$.


(ii) Suppose that $Z^\epsilon$ takes values in $\mathbb{R}^n$, and assume that $\lambda, \sigma \ge 0$ with $\sigma \ne \lambda$. Then for any $C^2$ function $f : \mathbb{R}^n \to \mathbb{R}$,
$$
P\Big( \sup_{t \in [0,T/\epsilon]} \Big| \int_0^t e^{-\lambda(t-s)} e^{-\sigma s} f(Z^\epsilon)\,ds \Big| > C \Big) \xrightarrow{\epsilon \downarrow 0} 0,
\tag{5.5}
$$
where $C := C(f, \sigma, T)$.

Proof. In both cases the Itô formula gives
$$
\begin{aligned}
df(Z^\epsilon) &= \epsilon \sum_{k=1}^n \frac{\partial f(Z^\epsilon)}{\partial z_k} \Phi_k(t, Z^\epsilon)\,dt
+ \epsilon^\beta \sum_{k=1}^n \frac{\partial f(Z^\epsilon)}{\partial z_k} \big( \Psi(t, Z^\epsilon)\,dW \big)_k \\
&\quad + \frac{\epsilon^{2\beta}}{2} \sum_{k,l,j=1}^n \frac{\partial^2 f(Z^\epsilon)}{\partial z_k \partial z_l} \Psi_{k,j}(t, Z^\epsilon) \Psi_{l,j}(t, Z^\epsilon)\,dt \\
&:= \epsilon H_D(t, Z^\epsilon)\,dt + \epsilon^\beta H_S(t, Z^\epsilon)\,dW + \epsilon^{2\beta} H_C(t, Z^\epsilon)\,dt.
\end{aligned}
\tag{5.6}
$$
For (i), integrating by parts and using (5.6), we have
$$
\int_0^t e^{i\sigma s} f(Z^\epsilon)\,ds
= \frac{e^{i\sigma t} f(Z^\epsilon(t)) - f(Z^\epsilon(0))}{i\sigma}
- \epsilon \int_0^t \frac{e^{i\sigma s}}{i\sigma} \big( H_D(s, Z^\epsilon) + \epsilon^{2\beta-1} H_C(s, Z^\epsilon) \big)\,ds
- \epsilon^\beta \int_0^t \frac{e^{i\sigma s}}{i\sigma} H_S(s, Z^\epsilon)\,dW.
\tag{5.7}
$$
Define
$$
C_1 := \frac{2}{|\sigma|} \sup_{\|z\| \le K} |f(z)|, \qquad
C_2 := \frac{T}{|\sigma|} \sup_{t > 0,\, \|z\| \le K} \big( |H_D(t,z)| + |H_C(t,z)| \big), \qquad
C_3 := \frac{1}{|\sigma|} \sup_{t > 0,\, \|z\| \le K} \|H_S(t,z)\|.
\tag{5.8}
$$

Note that (5.2) assures that these constants are finite. For the first term in (5.7), we have
$$
P\Big( \sup_{t \in [0,T/\epsilon]} \Big| \frac{e^{i\sigma t} f(Z^\epsilon(t)) - f(Z^\epsilon(0))}{i\sigma} \Big| > C_1 \Big) \le \phi(\epsilon).
\tag{5.9}
$$
Next,
$$
\begin{aligned}
P\Big( \sup_{t \in [0,T/\epsilon]} \Big| \epsilon \int_0^t \frac{e^{i\sigma s}}{i\sigma} \big( H_D(s, Z^\epsilon) + \epsilon^{2\beta-1} H_C(s, Z^\epsilon) \big)\,ds \Big| > C_2 \Big)
&\le P\Big( \sup_{t \in [0,T/\epsilon]} \frac{\epsilon}{|\sigma|} \int_0^t \big( |H_D(s, Z^\epsilon)| + |H_C(s, Z^\epsilon)| \big)\,ds > C_2 \Big) \\
&\le P\Big( \sup_{t \in [0,T/\epsilon]} \frac{T}{|\sigma|} \big( |H_D(t, Z^\epsilon(t))| + |H_C(t, Z^\epsilon(t))| \big) > C_2 \Big) \le \phi(\epsilon).
\end{aligned}
\tag{5.10}
$$


For the final term from (5.7), define the stopping time
$$
\xi_{\epsilon,K} := \inf_{t \ge 0} \{ \|Z^\epsilon(t)\| > K \}.
\tag{5.11}
$$
Splitting the integral and making use of Doob's inequality,
$$
\begin{aligned}
&P\Big( \sup_{t \in [0,T/\epsilon]} \Big| \epsilon^\beta \int_0^t \frac{e^{i\sigma s}}{i\sigma} H_S(s, Z^\epsilon)\,dW(s) \Big| > C_3 \Big) \\
&\quad \le P\Big( \sup_{t \in [0,T/\epsilon]} \Big| \epsilon^\beta \int_0^t \frac{e^{i\sigma s}}{i\sigma} H_S(s, Z^\epsilon) \mathbb{1}_{\xi_{\epsilon,K} > s}\,dW(s) \Big| > C_3 \Big) + P(\xi_{\epsilon,K} \le T/\epsilon) \\
&\quad \le \frac{2 \epsilon^{2\beta}}{C_3^2} E \int_0^{T/\epsilon} \Big| \mathbb{1}_{\xi_{\epsilon,K} > s} \frac{e^{i\sigma s}}{i\sigma} H_S(s, Z^\epsilon) \Big|^2 ds + \phi(\epsilon) \\
&\quad \le \frac{2 \epsilon^{2\beta}}{C_3^2} E \int_0^{T/\epsilon} \mathbb{1}_{\xi_{\epsilon,K} > s}\, C_3^2\,ds + \phi(\epsilon)
\le 2 \epsilon^{2\beta-1} T + \phi(\epsilon).
\end{aligned}
\tag{5.12}
$$

Setting $C = C_1 + C_2 + C_3$ and applying (5.9), (5.10) and (5.12) with (5.7), we obtain the desired result.

For item (ii), integration by parts gives
$$
\begin{aligned}
e^{-\lambda t} \int_0^t e^{(\lambda-\sigma)s} f(Z^\epsilon)\,ds
&= \frac{e^{-\sigma t} f(Z^\epsilon(t)) - e^{-\lambda t} f(Z^\epsilon(0))}{\lambda - \sigma} \\
&\quad - \epsilon \int_0^t \frac{e^{-\lambda(t-s)} e^{-s\sigma}}{\lambda - \sigma} \big( H_D(s, Z^\epsilon) + \epsilon^{2\beta-1} H_C(s, Z^\epsilon) \big)\,ds \\
&\quad - \epsilon^\beta \int_0^t \frac{e^{-\lambda(t-s)} e^{-s\sigma}}{\lambda - \sigma} H_S(s, Z^\epsilon)\,dW.
\end{aligned}
\tag{5.13}
$$

We define the constants $C_1$ and $C_2$ similarly to the previous case:
$$
C_1 := \frac{2}{|\lambda - \sigma|} \sup_{\|z\| \le K} |f(z)|, \qquad
C_2 := \frac{T}{|\lambda - \sigma|} \sup_{t > 0,\, \|z\| \le K} \big( |H_D(t,z)| + |H_C(t,z)| \big).
\tag{5.14}
$$

The estimates for the first two terms on the right hand side of (5.13) are carried out in the same manner as (5.9) and (5.10). For the stochastic integral term we consider two cases. First, when $\lambda > 0$, we take $C_3 = 1$, and we have
$$
\begin{aligned}
&P\Big( \sup_{t \in [0,T/\epsilon]} \Big| \epsilon^\beta \int_0^t \frac{e^{-\lambda(t-s)}}{\lambda - \sigma} e^{-\sigma s} H_S(s, Z^\epsilon)\,dW(s) \Big| > 1 \Big) \\
&\quad \le P\Big( \sup_{t \in [0,T/\epsilon]} \Big| \int_0^t e^{-\lambda(t-s)} \mathbb{1}_{\xi_{\epsilon,K} > s} \frac{e^{-\sigma s}}{\lambda - \sigma} H_S(s, Z^\epsilon)\,dW(s) \Big| > \epsilon^{-\beta} \Big) + P(\xi_{\epsilon,K} \le T/\epsilon) \\
&\quad \le C T \epsilon^{2\beta-1} + \phi(\epsilon),
\end{aligned}
\tag{5.15}
$$


where we applied Lemma 2.1 with $G(t) = \mathbb{1}_{\xi_{\epsilon,K} > t}\, e^{-\sigma t} H_S(t, Z^\epsilon)$ for the second inequality. On the other hand, if $\lambda = 0$, then $\sigma > 0$ and we take $C_3$ as in (5.8); the stochastic integral estimate can then be made as in (5.12).
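A deterministic special case of Proposition 5.1 (i) (take $\Psi = 0$, so any $\beta > 1/2$ works) is easy to check numerically. The following sketch, with our own illustrative choices $\Phi(z) = -z$ and $f(z) = z^2$, confirms that the running supremum of the oscillatory integral in (5.4) stays $O(1)$ as $\epsilon$ decreases, even though the integration interval $[0, T/\epsilon]$ grows:

```python
import numpy as np

def osc_integral_sup(eps, sigma=1.0, T=1.0, dt=1e-3):
    # Z' = eps * Phi(Z) with Phi(z) = -z is a deterministic instance of (5.1);
    # in closed form Z(t) = exp(-eps * t) * Z(0), with Z(0) = 1 here.
    n = int(T / eps / dt)
    t = np.arange(n) * dt
    z = np.exp(-eps * t)
    f = z ** 2                                             # analytic test function
    partial = np.cumsum(np.exp(1j * sigma * t) * f) * dt   # Riemann sums of (5.4)
    return float(np.abs(partial).max())

sups = [osc_integral_sup(eps) for eps in (0.1, 0.05, 0.01)]
print(sups)   # all O(1); no growth as eps shrinks
```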

6 Appendix I: Stochastic Convolution Estimates

We give here the proofs of Lemmas 2.1 and 2.2, which were used to estimate terms arising in the proofs of Proposition 3.1 (i) and Proposition 3.2.

Proof of Lemma 2.1. Recall that
$$
X(t) = \int_0^t e^{-A(t-s)} H(s)\,dW
\tag{6.1}
$$
is the solution of
$$
dX + AX\,dt = H\,dW; \qquad X(0) = 0.
\tag{6.2}
$$

Itô's lemma implies that
$$
d\|X\|^2 + 2 \langle AX, X \rangle\,dt = 2 \sum_j \langle H_{\cdot,j}, X \rangle\,dW_j + \|H\|^2\,dt.
\tag{6.3}
$$

Using the Burkholder-Davis-Gundy inequality (cf. [13], Theorem 3.28), we estimate
$$
\begin{aligned}
E\Big( \sup_{t \in [0,T_0]} 2 \sum_j \int_0^t \langle H_{\cdot,j}, X \rangle\,dW_j \Big)
&\le C E \Big( \int_0^{T_0} \sum_j \langle H_{\cdot,j}, X \rangle^2\,ds \Big)^{1/2} \\
&\le C E \Big( \int_0^{T_0} \|H\|^2 \|X\|^2\,ds \Big)^{1/2} \\
&\le \frac{1}{2} E \sup_{t \in [0,T_0]} \|X\|^2 + C E \int_0^{T_0} \|H\|^2\,ds.
\end{aligned}
\tag{6.4}
$$
This estimate and (6.3) imply
$$
E\Big( \sup_{t \in [0,T_0]} \|X\|^2 \Big) \le C E \int_0^{T_0} \|H\|^2\,ds.
\tag{6.5}
$$

The Chebyshev inequality therefore implies (2.19).

Proof of Lemma 2.2. Since the proof is very similar to that of Lemma 5.1 in [5], we shall be brief in the details. The factorization method relies on the identity
$$
\int_s^t (t - \sigma)^{\theta-1} (\sigma - s)^{-\theta}\,d\sigma = \frac{\pi}{\sin(\pi\theta)},
\tag{6.6}
$$


which is valid for $\theta \in (0,1)$ and any $s < t$. From (6.6) and the stochastic Fubini theorem (see [8], Theorem 4.18) we infer
$$
\begin{aligned}
\int_0^t e^{-A(t-s)} N H(s)\,dW_s
&= \frac{\sin(\pi\theta)}{\pi} \int_0^t e^{-A(t-s)} \Big( \int_s^t (t - \sigma)^{\theta-1} (\sigma - s)^{-\theta}\,d\sigma \Big) N H(s)\,dW_s \\
&= \frac{\sin(\pi\theta)}{\pi} \int_0^t (t - \sigma)^{\theta-1} e^{-A(t-\sigma)} N \Big( \int_0^\sigma (\sigma - s)^{-\theta} e^{-A(\sigma-s)} N H(s)\,dW_s \Big)\,d\sigma.
\end{aligned}
\tag{6.7}
$$
Let $\lambda_{mp} = \min\{ A_{jj} > 0 \}$. By applying the decomposition (6.7) with the Chebyshev and Hölder inequalities, we find that for any $q > 2$,
$$
P\Big( \sup_{t \in [0,T_0]} \Big\| \int_0^t e^{-A(t-s)} N H(s)\,dW \Big\| > K \Big)
\le C(\theta, q, \lambda_{mp}) \frac{T_0}{K^q} \sup_{\sigma > 0} E \Big\| \int_0^\sigma (\sigma - s)^{-\theta} e^{-A(\sigma-s)} N H(s)\,dW \Big\|^q,
\tag{6.8}
$$
where we can take
$$
C(\theta, q, \lambda_{mp}) = \Big( \frac{\sin(\pi\theta)}{\pi} \Big)^q \Big( \int_0^\infty \frac{e^{-(q' \lambda_{mp}) \sigma}}{\sigma^{q'(1-\theta)}}\,d\sigma \Big)^{q/q'}.
\tag{6.9}
$$

Note that this constant is finite for any choice of $\theta > 1/q$. To complete the proof we estimate the moments of
$$
M(\sigma) := \int_0^\sigma (\sigma - s)^{-\theta} e^{-A(\sigma-s)} N H(s)\,dW.
\tag{6.10}
$$
This process is Gaussian with mean zero and covariance matrix
$$
V^\theta_{jk}(\sigma) = \delta_{jk} \int_0^\sigma (\sigma - s)^{-2\theta} e^{-2\lambda_j(\sigma-s)} N_{jj} H_{jj}(s)^2\,ds.
\tag{6.11}
$$
Since $q > 2$, we may choose $\theta \in (1/q, 1/2)$. Let $\theta$ be fixed in this range, and take $V_j(\sigma) = V^\theta_{jj}(\sigma)$. Notice that
$$
V_j^* = \sup_{\sigma > 0} V_j(\sigma) < \infty.
\tag{6.12}
$$

For appropriately small values of $s$, the moment generating function of $M(\sigma)$ is given by
$$
\begin{aligned}
\phi(s) &= E\big( \exp( s \|M(\sigma)\|^2 ) \big) = \prod_{\lambda_j \ne 0} E\big( \exp( s |M_j(\sigma)|^2 ) \big) \\
&= \prod_{\lambda_j \ne 0} (2\pi V_j(\sigma))^{-1/2} \int_{-\infty}^\infty \exp\Big( - \frac{1 - 2 V_j(\sigma) s}{2 V_j(\sigma)}\, x_j^2 \Big)\,dx_j \\
&= \exp\Big( \frac{1}{2} \sum_{j=1}^n \sum_{m=1}^\infty \frac{(2 s V_j(\sigma))^m}{m} \Big).
\end{aligned}
\tag{6.13}
$$


Differentiating $\phi(s)$, we find
$$
E\big( \|M(\sigma)\|^{2k} \big) = \phi^{(k)}(0) \le C(k) \Big( \sum_{j=1}^n V_j^* \Big)^k.
\tag{6.14}
$$
Applying (6.14) with $2k > q$ to (6.8), one infers (2.42).
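The identity (6.6) underlying the factorization argument is, after reducing to $s = 0$, $t = 1$, the Beta integral $B(1-\theta, \theta) = \Gamma(1-\theta)\Gamma(\theta)$, which equals $\pi/\sin(\pi\theta)$ by Euler's reflection formula. A quick numerical sanity check (our own illustration, not part of the proof):

```python
import math

theta = 0.3
rhs = math.pi / math.sin(math.pi * theta)

# Exact route: (6.6) with s = 0, t = 1 is B(1 - theta, theta) = Gamma(1-theta)*Gamma(theta).
beta = math.gamma(1 - theta) * math.gamma(theta)

# Crude midpoint quadrature of the singular integral itself; the endpoint
# singularities x^(-theta) and (1-x)^(theta-1) are integrable, so the
# midpoint sums converge, albeit slowly.
n = 200_000
h = 1.0 / n
quad = sum((1 - x) ** (theta - 1) * x ** (-theta) * h
           for x in ((i + 0.5) * h for i in range(n)))

print(rhs, beta, quad)
```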

7 Appendix II: Small Noise Asymptotic Results

In this section we present an alternative proof of the small noise asymptotic result in Corollary 3.1. Note that while the approach below is closer in spirit to classical work (see [11], for example), we are able to address the case of multiplicative noise. Consider the stochastically perturbed system
$$
dX^\epsilon + \big[ L X^\epsilon + B(X^\epsilon, X^\epsilon) \big]\,dt = \epsilon^m \sigma(t, \epsilon, X^\epsilon)\,dW; \qquad X^\epsilon(0) = X_0^\epsilon,
\tag{7.1}
$$
where $m > 0$. The related deterministic system is given by
$$
\frac{dx}{dt} + Lx + B(x, x) = 0; \qquad x(0) = x_0.
\tag{7.2}
$$

We assume that $L$ is linear and either non-negative definite or anti-symmetric, and that $B(\cdot,\cdot)$ is a bilinear form with the cancellation property
$$
\langle B(y, x), x \rangle = 0.
\tag{7.3}
$$
As in the previous sections, $W = (W^1, \ldots, W^m)$ is a standard Brownian motion relative to a filtered probability space $(\Omega, \mathcal F, (\mathcal F_t)_{t \ge 0}, P)$. The diffusion term $\sigma$ takes values in $M^{n \times m}$. On $M^{n \times m}$ we use the Frobenius norm $|A| = \big( \sum_{j,k} A_{j,k}^2 \big)^{1/2}$ and impose the uniform Lipschitz and growth conditions
$$
|\sigma(t, \epsilon, x) - \sigma(t, \epsilon, y)| \le K |x - y|, \qquad |\sigma(t, \epsilon, x)| \le K (1 + |x|).
\tag{7.4}
$$
Note that $K$ is assumed to be independent of both $\epsilon$ and $t$.

Remark 7.1. Under the conditions given for $L$, $B$ and $\sigma$, and assuming that $X_0^\epsilon$ is $\mathcal F_0$ measurable, (7.1) admits a unique continuous solution. If, in addition, we assume that $X_0^\epsilon \in L^p(\Omega)$, then
$$
E \int_0^T |X^\epsilon(t)|^p\,dt < \infty.
\tag{7.5}
$$
See [10] for detailed proofs. We first establish sufficient conditions for convergence in $L^p(\Omega)$.


Proposition 7.1 ($L^p(\Omega)$ Convergence). Let $T > 0$, $p \ge 2$, $X_0^\epsilon \in L^p(\Omega)$ for $\epsilon > 0$, and let $x_0$ be a fixed element in $\mathbb{R}^n$. Assume that for some appropriate constant $C_0$,
$$
E |X_0^\epsilon - x_0|^p \le \epsilon^{mp} C_0.
\tag{7.6}
$$
Then
$$
E \sup_{t \in [0,T]} (1 + |X^\epsilon(t)|^2)^{p/2} \le C
\tag{7.7}
$$
and, for $\epsilon > 0$,
$$
E \sup_{t \in [0,T]} |X^\epsilon(t) - x(t)|^p \le \epsilon^{mp} C,
\tag{7.8}
$$

where $C = C(|x_0|, T, L, B, K, p)$.

For the proof of Proposition 7.1 we shall use the following lemma, which functions as a stochastic analogue of the Gronwall lemma. See [12] for a more general formulation and the proof.

Lemma 7.1. Suppose that $Y, Z : [0,T] \times \Omega \to \mathbb{R}^+$ are stochastic processes such that
$$
E \int_0^T (Y + Z)\,dt < \infty.
\tag{7.9}
$$
Assume that for any $0 \le S_a \le S_b \le T$,
$$
E\Big( \sup_{t \in [S_a, S_b]} Y(t) \Big) \le C_0 E\Big( Y(S_a) + \int_{S_a}^{S_b} (Y + Z)\,dt \Big),
\tag{7.10}
$$
where $C_0$ is independent of the choice of $S_a$, $S_b$. Then
$$
E\Big( \sup_{t \in [0,T]} Y(t) \Big) \le C E\Big( Y(0) + \int_0^T Z\,dt \Big),
\tag{7.11}
$$
where $C = C(C_0, T)$.

We now turn to the proof of the proposition.

Proof of Proposition 7.1. We first establish (7.7). By applying Itô's lemma to $1 + |X^\epsilon(t)|^2$, using the cancellation property for $B$, and then applying Itô's lemma to $(1 + |X^\epsilon(t)|^2)^{p/2}$, we find
$$
\begin{aligned}
d(1 + |X^\epsilon|^2)^{p/2}
= &- p (1 + |X^\epsilon|^2)^{p/2-1} \langle L X^\epsilon, X^\epsilon \rangle\,dt \\
&+ \frac{\epsilon^{2m} p}{2} (1 + |X^\epsilon|^2)^{p/2-1} \sum_{j,k} \sigma_{j,k}(t, \epsilon, X^\epsilon)^2\,dt \\
&+ \epsilon^m p (1 + |X^\epsilon|^2)^{p/2-1} \sum_{j,k} X_j^\epsilon \sigma_{j,k}(t, \epsilon, X^\epsilon)\,dW^k \\
&+ \frac{\epsilon^{2m} p (p-2)}{2} (1 + |X^\epsilon|^2)^{p/2-2} \sum_k \Big( \sum_j X_j^\epsilon \sigma_{j,k}(t, \epsilon, X^\epsilon) \Big)^2\,dt.
\end{aligned}
\tag{7.12}
$$


Given the assumptions on $L$, the first term on the right hand side of (7.12) is non-positive for all $t > 0$. Fix arbitrary $0 \le S_a < S_b \le T$. Integrate (7.12) between $S_a$ and $t$, and take a supremum over $[S_a, S_b]$. After taking an expected value we have
$$
\begin{aligned}
E\Big( \sup_{t \in [S_a, S_b]} (1 + |X^\epsilon|^2)^{p/2} \Big)
\le\ &E (1 + |X^\epsilon(S_a)|^2)^{p/2} \\
&+ C(p) E \int_{S_a}^{S_b} (1 + |X^\epsilon|^2)^{p/2-1} \sum_{j,k} \sigma_{j,k}(s, \epsilon, X^\epsilon)^2\,ds \\
&+ C(p) E \int_{S_a}^{S_b} (1 + |X^\epsilon|^2)^{p/2-2} \sum_k \Big( \sum_j X_j^\epsilon \sigma_{j,k}(s, \epsilon, X^\epsilon) \Big)^2\,ds \\
&+ C(p) E \sup_{t \in [S_a, S_b]} \Big| \sum_{j,k} \int_{S_a}^t (1 + |X^\epsilon|^2)^{p/2-1} X_j^\epsilon \sigma_{j,k}(s, \epsilon, X^\epsilon)\,dW^k \Big|.
\end{aligned}
\tag{7.13}
$$
By applying the Cauchy-Schwarz inequality and using the Lipschitz assumption, we can write
$$
(1 + |X^\epsilon|^2)^{p/2-2} \sum_k \Big( \sum_j X_j^\epsilon \sigma_{j,k}(s, \epsilon, X^\epsilon) \Big)^2
\le (1 + |X^\epsilon|^2)^{p/2-2} |X^\epsilon|^2 \sum_{j,k} \sigma_{j,k}(s, \epsilon, X^\epsilon)^2
\le C (1 + |X^\epsilon|^2)^{p/2}.
\tag{7.14}
$$
We estimate the stochastic integral term using the Burkholder-Davis-Gundy inequality (see [13]):
$$
\begin{aligned}
&E \sup_{t \in [S_a, S_b]} \Big| \sum_{j,k} \int_{S_a}^t (1 + |X^\epsilon|^2)^{p/2-1} X_j^\epsilon \sigma_{j,k}(s, \epsilon, X^\epsilon)\,dW^k \Big| \\
&\quad \le C E \Big( \int_{S_a}^{S_b} (1 + |X^\epsilon|^2)^{p-2} \sum_k \Big( \sum_j X_j^\epsilon \sigma_{j,k}(s, \epsilon, X^\epsilon) \Big)^2\,dt \Big)^{1/2} \\
&\quad \le C E \Big( \sup_{t \in [S_a, S_b]} (1 + |X^\epsilon|^2)^{p/4} \Big( \int_{S_a}^{S_b} (1 + |X^\epsilon|^2)^{p/2}\,dt \Big)^{1/2} \Big) \\
&\quad \le \frac{1}{2} E\Big( \sup_{t \in [S_a, S_b]} (1 + |X^\epsilon|^2)^{p/2} \Big) + C E \int_{S_a}^{S_b} (1 + |X^\epsilon|^2)^{p/2}\,dt.
\end{aligned}
\tag{7.15}
$$

Using the observations in (7.14) and (7.15) with (7.13) and rearranging, we have
$$
E\Big( \sup_{t \in [S_a, S_b]} (1 + |X^\epsilon|^2)^{p/2} \Big)
\le C E \Big( (1 + |X^\epsilon(S_a)|^2)^{p/2} + \int_{S_a}^{S_b} (1 + |X^\epsilon|^2)^{p/2}\,ds \Big).
\tag{7.16}
$$


Note that the constant $C = C(K, p)$ above is independent of $S_a$, $S_b$. Applying Lemma 7.1, one infers
$$
E \sup_{t \in [0,T]} (1 + |X^\epsilon(t)|^2)^{p/2} \le C(K, p, T)\, E (1 + |X_0^\epsilon|^2)^{p/2}.
\tag{7.17}
$$
Given the assumption (7.6) on the initial data, we have the uniform bound
$$
E (1 + |X_0^\epsilon|^2)^{p/2} \le C E (1 + |x_0|^p),
\tag{7.18}
$$
which implies (7.7).

To establish (7.8), we again apply Itô's lemma to determine an evolution equation for $|X^\epsilon - x|^p$:
$$
\begin{aligned}
d |X^\epsilon - x|^p
= &- p |X^\epsilon - x|^{p-2} \langle L(X^\epsilon - x), X^\epsilon - x \rangle\,dt \\
&- p |X^\epsilon - x|^{p-2} \langle B(X^\epsilon, X^\epsilon) - B(x, x), X^\epsilon - x \rangle\,dt \\
&+ \frac{\epsilon^{2m} p}{2} |X^\epsilon - x|^{p-2} \sum_{j,k} \sigma_{j,k}(t, \epsilon, X^\epsilon)^2\,dt \\
&+ \epsilon^m p |X^\epsilon - x|^{p-2} \sum_{j,k} (X_j^\epsilon - x_j) \sigma_{j,k}(t, \epsilon, X^\epsilon)\,dW^k \\
&+ \frac{\epsilon^{2m} p (p-2)}{2} |X^\epsilon - x|^{p-4} \sum_k \Big( \sum_j (X_j^\epsilon - x_j) \sigma_{j,k}(t, \epsilon, X^\epsilon) \Big)^2\,dt.
\end{aligned}
\tag{7.19}
$$
Using the cancellation assumption (7.3), one infers
$$
\langle B(X^\epsilon, X^\epsilon) - B(x, x), X^\epsilon - x \rangle = \langle B(X^\epsilon - x, x), X^\epsilon - x \rangle.
\tag{7.20}
$$

Assumption (7.3) also allows us to determine an a priori bound for $x$, the solution of (7.2). Taking inner products, we have
$$
\frac{d}{dt} |x|^2 = -2 \langle Lx, x \rangle \le 0,
\tag{7.21}
$$
so, for $t \in [0,T]$,
$$
|x(t)| \le |x_0|.
\tag{7.22}
$$
Now fix $0 \le S_a \le S_b \le T$. Integrate (7.19) from $S_a$ to $t$, take a supremum over this time interval and then an expected value:
$$
\begin{aligned}
E\Big( \sup_{t \in [S_a, S_b]} |X^\epsilon - x|^p \Big)
\le\ &E |X^\epsilon(S_a) - x(S_a)|^p + C(p, B, |x_0|) E \int_{S_a}^{S_b} |X^\epsilon - x|^p\,dt \\
&+ \epsilon^{2m} C(K, p) E \int_{S_a}^{S_b} |X^\epsilon - x|^{p-2} (1 + |X^\epsilon|^2)\,dt \\
&+ \epsilon^m C(p) E\Big( \sup_{t \in [S_a, S_b]} \Big| \sum_{j,k} \int_{S_a}^t |X^\epsilon - x|^{p-2} (X_j^\epsilon - x_j) \sigma_{j,k}(s, \epsilon, X^\epsilon)\,dW^k \Big| \Big).
\end{aligned}
\tag{7.23}
$$


Once again we make use of the Burkholder-Davis-Gundy inequality:
$$
\begin{aligned}
&\epsilon^m C E\Big( \sup_{t \in [S_a, S_b]} \Big| \sum_{j,k} \int_{S_a}^t |X^\epsilon - x|^{p-2} (X_j^\epsilon - x_j) \sigma_{j,k}(s, \epsilon, X^\epsilon)\,dW^k \Big| \Big) \\
&\quad \le \epsilon^m C E \Big( \int_{S_a}^{S_b} |X^\epsilon - x|^{2(p-2)} \sum_k \Big( \sum_j (X_j^\epsilon - x_j) \sigma_{j,k}(s, \epsilon, X^\epsilon) \Big)^2\,dt \Big)^{1/2} \\
&\quad \le \epsilon^m C(K) E \Big( \int_{S_a}^{S_b} |X^\epsilon - x|^{2(p-1)} (1 + |X^\epsilon|^2)\,dt \Big)^{1/2} \\
&\quad \le \frac{1}{2} E\Big( \sup_{t \in [S_a, S_b]} |X^\epsilon - x|^p \Big) + \epsilon^{mp} C(K, p) E \Big( \int_{S_a}^{S_b} (1 + |X^\epsilon|^2)\,dt \Big)^{p/2} \\
&\quad \le \frac{1}{2} E\Big( \sup_{t \in [S_a, S_b]} |X^\epsilon - x|^p \Big) + \epsilon^{mp} C(K, p, T) E \int_{S_a}^{S_b} (1 + |X^\epsilon|^2)^{p/2}\,dt.
\end{aligned}
\tag{7.24}
$$

Using this estimate and (7.7), we have
$$
E\Big( \sup_{t \in [S_a, S_b]} |X^\epsilon - x|^p \Big)
\le 2 E |X^\epsilon(S_a) - x(S_a)|^p + C E \int_{S_a}^{S_b} |X^\epsilon - x|^p\,dt + \epsilon^{mp} C E \int_{S_a}^{S_b} (1 + |X^\epsilon|^2)^{p/2}\,dt.
\tag{7.25}
$$

Noting once again that the constants $C(K, p, T, |x_0|)$ above are independent of $S_a$ and $S_b$, we apply Lemma 7.1 together with (7.6) and (7.7) to deduce
$$
E\Big( \sup_{t \in [0,T]} |X^\epsilon - x|^p \Big)
\le C E |X_0^\epsilon - x_0|^p + \epsilon^{mp} C E \int_0^T (1 + |X^\epsilon|^2)^{p/2}\,dt \le \epsilon^{mp} C.
\tag{7.26}
$$

As a corollary of Proposition 7.1 we infer convergence of $X^\epsilon$ to $x$ in probability, along with a rate of convergence.

Proposition 7.2 (Convergence in Probability). Let $T > 0$, $p \ge 2$, $X_0^\epsilon \in L^p(\Omega)$, and let $x_0$ be a fixed element in $\mathbb{R}^n$. Suppose that
$$
E |X_0^\epsilon - x_0|^p \le \epsilon^{mp} C_0.
\tag{7.27}
$$
Then for any $\gamma < m$,
$$
P\Big( \sup_{t \in [0,T]} |X^\epsilon - x| > \epsilon^\gamma \Big) \le C \epsilon^{(m-\gamma)p}.
\tag{7.28}
$$
Proof. The desired result follows directly from (7.8) by applying Markov's inequality.
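The small-noise error bound behind (7.28) can be illustrated on a toy instance of (7.1). All choices below are our own and purely illustrative: $n = 2$, $L$ anti-symmetric, $B(y, x) = (-y_1 x_2,\, y_1 x_1)$, which satisfies the cancellation property (7.3), $\sigma \equiv I$ and $m = 1$. A small Monte Carlo experiment then shows the expected supremum error shrinking roughly linearly in $\epsilon$:

```python
import numpy as np

L = np.array([[0.0, -1.0], [1.0, 0.0]])     # anti-symmetric

def B(y, x):
    # Bilinear form with <B(y, x), x> = 0 (the cancellation property (7.3)).
    return np.array([-y[0] * x[1], y[0] * x[0]])

def err_sup(eps, m=1, dt=1e-3, T=1.0, n_paths=100, seed=2):
    """Monte Carlo estimate of E sup_{[0,T]} |X^eps - x| for sigma = identity."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    total = 0.0
    for _ in range(n_paths):
        x = np.array([1.0, 0.0])            # deterministic solution of (7.2)
        X = x.copy()                        # X^eps, same initial condition
        sup = 0.0
        for _ in range(n):
            dW = rng.normal(scale=np.sqrt(dt), size=2)
            X = X + (-(L @ X) - B(X, X)) * dt + eps ** m * dW
            x = x + (-(L @ x) - B(x, x)) * dt
            sup = max(sup, float(np.linalg.norm(X - x)))
        total += sup
    return total / n_paths

e1, e2 = err_sup(0.1), err_sup(0.01)
print(e1, e2)   # e2 should be markedly smaller, consistent with an O(eps) error
```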


Acknowledgments

The authors would like to thank an anonymous referee who brought to our attention the topic of stochastic normal forms. This work has been supported in part by the NSF grant DMS-0505974. This work is part of N. Glatt-Holtz's Ph.D. thesis at the University of Southern California.

References

[1] L. Arnold and P. Imkeller. Normal forms for stochastic differential equations. Probab. Theory Related Fields, 110(4):559–588, 1998.
[2] L. Arnold, N. Sri Namachchivaya, and K. R. Schenk-Hoppé. Toward an understanding of stochastic Hopf bifurcation: a case study. Internat. J. Bifur. Chaos Appl. Sci. Engrg., 6(11):1947–1975, 1996.
[3] L. Arnold. Random dynamical systems. Springer Monographs in Mathematics. Springer-Verlag, Berlin, 1998.
[4] D. Blömker, C. Gugg, and S. Maier-Paape. Stochastic Navier-Stokes equation and renormalization group theory. Physica D, 173:137–152, 2002.
[5] D. Blömker, S. Maier-Paape, and G. Schneider. The stochastic Landau equation as an amplitude equation. Discrete Contin. Dyn. Syst. Ser. B, 1(4):527–541, 2001.
[6] L.-Y. Chen, N. Goldenfeld, and Y. Oono. Renormalization group theory for global asymptotic analysis. Phys. Rev. Lett., 73(10):1311–1315, 1994.
[7] L.-Y. Chen, N. Goldenfeld, and Y. Oono. Renormalization group and singular perturbations: Multiple scales, boundary layers, and reductive perturbation theory. Phys. Rev. E, 54(1):376–394, 1996.
[8] G. Da Prato and J. Zabczyk. Stochastic equations in infinite dimensions, volume 44 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 1992.
[9] R. E. L. DeVille, A. Harkin, M. Holzer, K. Josić, and T. Kaper. Analysis of a renormalization group method for solving perturbed ordinary differential equations. Physica D, 2008. To appear.
[10] F. Flandoli. An introduction to 3D stochastic fluid dynamics. In SPDE in Hydrodynamic: Recent Progress and Prospects, volume 1942 of Lecture Notes in Mathematics, pages 51–150. Springer, Berlin/Heidelberg, 2008.
[11] M. I. Freidlin and A. D. Wentzell. Random perturbations of dynamical systems, volume 260 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, New York, second edition, 1998. Translated from the 1979 Russian original by Joseph Szücs.
[12] N. Glatt-Holtz and M. Ziane. The stochastic primitive equations in two space dimensions with multiplicative noise. Discrete Contin. Dyn. Syst. Ser. B. To appear.
[13] I. Karatzas and S. E. Shreve. Brownian motion and stochastic calculus, volume 113 of Graduate Texts in Mathematics. Springer-Verlag, New York, second edition, 1991.
[14] E. N. Lorenz. On the existence of a slow manifold. J. Atmospheric Sci., 43(15):1547–1557, 1986.
[15] I. Moise, E. Simonnet, R. Temam, and M. Ziane. Numerical simulation of differential systems displaying rapidly oscillating solutions. J. Engrg. Math., 34(1-2):201–214, 1998. A celebration of the sixty-fifth anniversary of Pieter J. Zandbergen: teacher and research leader in applied mathematics.
[16] I. Moise and R. Temam. Renormalization group method. Application to Navier-Stokes equation. Discrete Contin. Dynam. Systems, 6(1):191–210, 2000.
[17] I. Moise and M. Ziane. Renormalization group method. Applications to partial differential equations. J. Dynam. Differential Equations, 13(2):275–321, 2001.
[18] A. V. Skorokhod, F. C. Hoppensteadt, and H. Salehi. Random perturbation methods with applications in science and engineering, volume 150 of Applied Mathematical Sciences. Springer-Verlag, New York, 2002.
[19] N. Sri Namachchivaya and Y. K. Lin. Method of stochastic normal forms. Internat. J. Non-Linear Mech., 26(6):931–943, 1991.
[20] R. Temam. Multilevel methods for the simulation of turbulence. A simple model. J. Comput. Phys., 127(2):309–315, 1996.
[21] F. Verhulst. Nonlinear differential equations and dynamical systems. Universitext. Springer-Verlag, Berlin, second edition, 1996. Translated from the 1985 Dutch original.
[22] F. Verhulst. Methods and applications of singular perturbations, volume 50 of Texts in Applied Mathematics. Springer, New York, 2005. Boundary layers and multiple timescale dynamics.
[23] M. Ziane. On a certain renormalization group method. J. Math. Phys., 41(5):3290–3299, 2000.