arXiv:1706.07745v1 [math.PR] 23 Jun 2017

The first exit problem and metastability of generic scalar dissipative reaction-diffusion equations subject to multiplicative regularly varying Lévy noise at small intensity

Michael Anton Högele∗

June 26, 2017

Abstract. This article studies the first exit times and first exit loci of the mild solutions of a generic class of scalar nonlinear dissipative reaction-diffusion equations under ε-small perturbations by infinite-dimensional multiplicative Lévy noise with a regularly varying component. In contrast to the exponential asymptotic behavior under comparable Gaussian perturbations, the exit times grow asymptotically as a power function of the noise intensity ε as ε → 0, with the exponent given by the (negative) tail index of the Lévy measure. To this end we prove an exponential estimate of the stochastic convolution for jump Lévy processes with bounded jumps and construct well-understood models of the exit times and loci on the same probability space, to which the original quantities converge in a probabilistically optimal sense. The results cover the Chafee-Infante equation and the linear heat equation perturbed by additive and multiplicative infinite-dimensional α-stable noise. As a consequence of the first exit results we infer the metastable convergence of the process, on a common time scale, to a Markov chain switching between the stable states of the deterministic dynamical system.

Keywords: first exit times; first exit loci; nonlinear reaction-diffusion equations; Morse-Smale property; small noise asymptotics; α-stable Lévy process; multiplicative Lévy noise; regularly varying noise; stochastic heat equation with additive and multiplicative α-stable noise; stochastic Chafee-Infante equation with multiplicative noise.

2010 Mathematics Subject Classification: 35K57; 35K55; 37D15; 37L55; 60H15; 60G51; 60G52; 60G55; 35K05; 35K91.

1 Introduction

This article solves the first exit problem from the domain of attraction of a stable state of a generic class of dissipative reaction-diffusion equations over a bounded interval subject to small multiplicative regularly varying Lévy noise, such as small multiplicative α-stable noise.

∗Departamento de Matemáticas, Universidad de los Andes, Bogotá, Colombia; [email protected]

The first exit problem of a randomly perturbed dynamical system from the domain of attraction of a stable fixed point in the limit of small noise intensity has a long history in finite dimensions for Gaussian perturbations, going back to the works of Cramér and Lundberg and giving rise to the edifice of large deviations theory and the associated Freidlin-Wentzell theory. We refer the reader to the classical works [2, 5, 14, 15, 18, 20, 23, 24, 39] and references therein. In infinite dimensions this problem was studied for Wiener processes, for instance in [4, 7, 8, 21, 22]. It is a characteristic feature of small Gaussian perturbations that the first exit times grow exponentially as a function of the inverse of the noise intensity, with the prefactor in the exponent given as the solution of an optimization problem. The convergence of the suitably renormalized process to a Markov chain [25, 36] and the connection between metastability and the spectrum of the diffusion generator are treated in [3, 5, 6, 37, 38]. In the context of regularly varying Lévy jump noise perturbations, however, the perturbed process exhibits heavy tails and lacks exponential moments (cf. [1, 48]). Since, in addition, the statement of the problem does not prescribe beforehand a renormalization of the temporal jump intensity of the driving noise, the large and moderate deviations techniques recently developed in [10, 12, 13] are not directly at hand. The first exit problem for dynamical systems perturbed by small α-stable, or more generally regularly varying, perturbations was addressed in different settings in a series of works. After the early work [26] on a large deviations principle in Skorohod space, the first exit time problem was solved in one dimension for additive α-stable noise in [34].
The authors of [34] introduced the following purely probabilistic proof technique, which is also used and extended in this article and which we sketch briefly. Given an α-stable noise perturbation εdL, the first step consists in the choice of an ε-dependent jump size threshold ρ_ε, which decomposes the driving noise into the sum εdξ^ε + εdη^ε, where ξ^ε is an infinite intensity process with jumps bounded from above by the threshold, and thus exhibiting exponential moments, and η^ε is the compound Poisson process of the jumps bounded from below by the threshold. Assuming that ερ_ε tends to 0 as ε tends to 0, and taking into account that ξ^ε has exponential moments, it does not come as a surprise that, up to the first large compound Poisson jump of η^ε, the resulting flow decomposition of the strong Markov process yields a process which is close to the deterministic solution with overwhelming probability, and which hence in the vast majority of cases does not cause the exit from the domain of attraction. Due to the choice of the noise decomposition, the convergence of the deterministic solution to an ε-small ball centered in the stable state is much faster than the first large jump time; hence the first large compound Poisson jump starts from close to the stable state and yields an exit probability in terms of the tail of the Lévy measure. The strong Markov property propagates this exit scenario to all of the independent waiting time intervals between the large jumps. The exit is hence caused, with very high probability, by the first successful attempt of a large jump to exit. The resulting geometric structure of the exit times occurs at a rate given by the tail decay of the Lévy measure, which in the case of regular variation is of polynomial order. In [43] the author shows this result for gradient systems in any finite dimension and multiplicative noise; in particular he derives an exponential estimate for small deviations from the deterministic system.
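The geometric trial structure just described can be checked numerically in a zero-dimensional caricature. The sketch below is an assumption-laden illustration, not the setting of this article: it posits a one-dimensional Pareto tail ν(|z| > r) = r^{−α}, a threshold ρ_ε = ε^{−ρ}, and declares a large jump W "successful" iff ε|W| > 1. Since a geometric number of EXP(β_ε)-distributed waiting times is itself exponentially distributed, the normalized exit time λ_ε τ should be approximately EXP(1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): tail index, intensity, threshold
alpha, eps, rho = 1.5, 1e-3, 0.5
rho_eps = eps ** (-rho)                    # jump-size threshold, eps * rho_eps -> 0
beta_eps = rho_eps ** (-alpha)             # rate of large jumps: nu(|z| > rho_eps)
p_exit = (1 / eps) ** (-alpha) / beta_eps  # P(a given large jump exits)

n = 100_000
k_star = rng.geometric(p_exit, size=n)             # attempts until the first success
tau = rng.gamma(shape=k_star, scale=1 / beta_eps)  # sum of k* EXP(beta_eps) waits

lam_eps = eps ** alpha      # characteristic exit rate; here lam_eps = beta_eps * p_exit
print((lam_eps * tau).mean(), (lam_eps * tau).var())  # both close to 1 (EXP(1) limit)
```

The polynomial growth of the expected exit time is visible in the normalization: E[τ] ≈ ε^{−α}, in contrast to the Gaussian exponential rate.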
In [30] the results are generalized to the non-gradient case in finite dimensions; in addition, the convergence in law of the first exit locus is proved. The well-posedness of reaction-diffusion equations in infinite dimensions in a generic setting is established in [40, 41, 42, 45]. In the infinite-dimensional case study summarized in [16] and exposed in detail in [17], the authors consider the first exit times of the Chafee-Infante equation perturbed by small additive regularly varying noise with tail index −α for 0 < α < 2. This article provides a substantial extension of these results in several directions. We extend the scope of the deterministic forcing of [17] to a generic class of weakly dissipative nonlinear reaction terms over an interval with Dirichlet boundary conditions, for which the system retains the Morse-Smale property of the deterministic system. The most important cases covered in this article are dissipative polynomials of odd order and the linear heat equation. Our results are stated for the Laplace operator with Dirichlet conditions on the Sobolev space H_0^1 over the standard interval [0,1]. We expect the results to hold true for any unbounded operator A with negative point spectrum which generates a generalized contracting analytic semigroup. However, we use the generic Morse-Smale property of the deterministic dynamical system in H_0^1 as well as the smoothness of the manifold separating the domains of attraction, and to our knowledge these results are not available in the literature for general spaces D(A^{1/2}). The generalizations of the type of stochastic perturbations are twofold. In the first place we study multiplicative noise coefficients, as opposed to [17]. They are the original motivation of this article and make it necessary to consider the first exit problem localized on large balls. Consequently we dispense with the rather strong point dissipativity of the deterministic dynamical system, and also treat the new important example of the linear heat equation subject to additive and multiplicative α-stable noise. Secondly, we lift the rather strong restriction to a tail index 0 < α < 2 in [17] with the help of an exponential estimate of the stochastic convolution for multiplicative Poisson random integrals with bounded jumps. It combines the recent pathwise estimate of the stochastic convolution in [50] and the pathwise Burkholder-Davis-Gundy inequality in [49]. Since multiplicative noise necessarily leads to working with stopped processes, the non-pathwise estimates of the stochastic convolution available until then were rather difficult to implement.
With these new powerful tools at hand, a term-by-term estimate of the exponential series yields a lift of the right-hand side of the Burkholder-Davis-Gundy inequality to the exponent, which is then estimated with the help of Campbell's formula for the Laplace transform of Poisson random integrals. Our estimates are rather direct and avoid the technically involved large deviations theory introduced by Budhiraja and collaborators; see for instance [10, 12]. In comparison to those works, we construct explicitly (on the same probability space as the driving noise) a completely understood model of the first exit times and loci, respectively, to which the original objects converge. Our convergence results are optimal in a probabilistic sense, in that we obtain exponential convergence up to any exponent strictly less than 1, while the limiting object does not have exponential moments of order 1. The same applies to the convergence of the first exit loci, which are essentially a geometric mixture of deformed large jump increments of the noise. Those increments, for tail index −α, α > 0, have moments of order 0 < p < α, and we show convergence in the L^p sense towards those objects. Finally we infer metastability in the sense of [31] and [33] as a corollary.

The article is organized as follows. In Section 2 we present the general setup, the specific hypotheses, the main results and examples. In Section 3 we introduce the compound Poisson flow decomposition, show an exponential estimate on the smallness of the stochastic convolution between large jumps, and study its pushforward to the small noise process. In Section 4 we prove the main results by strong Markov arguments and tailor-made event estimates.

2 The object of study and the main result

Consider the Sobolev space H := H_0^1(0,1) equipped with the inner product ⟨⟨x, y⟩⟩ := ⟨∇x, ∇y⟩ for x, y ∈ H and the norm ‖x‖ = ⟨⟨x, x⟩⟩^{1/2}, where ⟨·,·⟩ denotes the inner product in L²(0,1) with norm |x| = ⟨x, x⟩^{1/2}. Let C_0[0,1] be the space of continuous functions x : [0,1] → R with x(0) = x(1) = 0, equipped with the supremum norm |·|_∞. Since |x| ≤ |x|_∞ ≤ ‖x‖ for x ∈ H, we have the continuous injections H ↪ C_0[0,1] ↪ L²(0,1); in particular |x| ≤ Λ_0 ‖x‖, x ∈ H, for the Sobolev constant Λ_0 > 0.

2.1 The underlying deterministic dynamics

The object of study is the effect of random perturbations on the dynamical system given as the solution map x ↦ u(t; x) of the following nonlinear reaction-diffusion equation over the interval J = (0,1) with Dirichlet boundary conditions. For t > 0, x ∈ H and ζ ∈ J consider

∂u/∂t (t, ζ) = ∂²u/∂ζ² (t, ζ) + f(u(t, ζ)) with u(t, 0) = u(t, 1) = 0 and u(0, ζ; x) = x(ζ),   (2.1)

where the nonlinearity f ∈ C²(R, R) satisfies the growth condition

lim sup_{|r|→∞} f′(r) < Λ_0.   (2.2)

This equation has unique, well-posed weak and mild solutions in L²(0,1) and H (cf. [17, 51]). The solutions are regularizing: for any t > 0 and x ∈ L²(0,1) we have u(t; x) ∈ C^∞(0,1) ∩ C_0[0,1]. If, for instance, f(r) = Σ_{j=0}^{2n−1} b_j r^j with b_{2n−1} < 0 for n ∈ N, it is well known in the literature [27, 28, 47] that for a generic choice of the coefficients (b_0, …, b_{2n−1}) ∈ R^{2n} the dynamical system generated by (2.1) is of Morse-Smale type, that is, there is a finite number of fixed points, all of which are hyperbolic and whose stable and unstable manifolds intersect transversally. The paradigmatic example of the Chafee-Infante equation, with f(r) = −λ(r³ − r), r ∈ R, is studied in [11, 28]; it is Morse-Smale whenever λ ≠ (πn)². Since all of the finitely many equilibria are elements of H ⊆ L^∞(0,1), the Morse-Smale property only involves f on a bounded set. On bounded sets a general function f ∈ C²(R, R) is approximated in C²-norm by polynomials, and the generic Morse-Smale property is inherited by f.

It is well known that equation (2.1) possesses the nonnegative potential function

V(x) = ∫_J ( ½(∇x(ζ))² − F(x(ζ)) ) dζ on H, with F(r) = ∫_{r_0}^r f(s) ds for some r_0,

and can be rewritten as the gradient system

∂u/∂t (t, ζ) = −(DV)(u(t, ζ)) with u(0, ζ; x) = x(ζ) for x ∈ H.

The level sets of V,

U^r := {x ∈ H | V(x) ≤ d(r)}, d(r) := inf{r′ > 0 | B_r(0) ⊆ {x ∈ H | V(x) ≤ r′}}, r > 0,

remain bounded in H and positively invariant under the dynamical system (2.1). As a consequence, V serves as a Lyapunov function and yields the following result (cf. [27, 29]).

Proposition 2.1. Denote by P ⊆ H the set of fixed points of (2.1). Then 0 < |P| < ∞, and for any x ∈ H there exists a stationary state φ ∈ P of the system (2.1) such that lim_{t→∞} u(t; x) = φ.

For φ ∈ P we define the domain of attraction D(φ) := {x ∈ H | lim_{t→∞} u(t; x) = φ}. The subset P⁻ of all φ ∈ P such that D(φ) contains an open ball in H is the set of stable states. For φ_ι ∈ P⁻ we denote its domain of attraction by D_ι = D(φ_ι), and the separating manifold by S := H \ ⋃_ι D_ι. For a generic choice of coefficients the Morse-Smale property implies that S is a closed C¹-manifold without boundary in H of codimension 1, separating all elements of (D_ι)_{φ_ι∈P⁻} and containing all unstable fixed points P \ P⁻ (cf. [46]). Fix a radius χ_0 > 0 such that P ⊆ B_{χ_0}(0) and U^{χ_0} ∩ ∂D_ι ≠ ∅ for all φ_ι ∈ P⁻.

In order to capture the main dynamics and steer clear of the sticky unstable fixed points we introduce nested reduced domains of attraction with certain positive invariance properties. For perturbations ψ ∈ D([0,∞), H) ∩ L^∞([0,∞), H) and x ∈ H we consider the unique càdlàg mild solution v_ψ of the equation

∂v_ψ/∂t (t, ζ) = ∂²v_ψ/∂ζ² (t, ζ) + f(v_ψ(t, ζ) + ψ(t, ζ)) for t > 0, ζ ∈ (0,1),
v_ψ(t, 0) = v_ψ(t, 1) = 0 and v_ψ(0, ζ) = x(ζ).   (2.3)

We define for δ_i > 0, i = 1, …, 4, and χ > χ_0 the following reductions of D_ι:

D_ι^1(δ_1, χ) := {x ∈ D_ι | ⋃_{t≥0} B_{δ_1}(u(t; x)) ⊆ D_ι ∩ U^χ},

D_ι^2(δ_1, δ_2, χ) := {x ∈ D_ι^1(δ_1, χ) | for all ψ ∈ D([0,∞), H) with sup_{t≥0} ‖ψ(t)‖ ≤ δ_2:
⋃_{t≥0} {v_ψ(t; x) + G(v_ψ(t; x), B_{δ_2}(0))} ⊆ D_ι^1(δ_1, χ)},

D_ι^3(δ_1, δ_2, δ_3, χ) := {x ∈ D_ι^2(δ_1, δ_2, χ) | for all ψ ∈ D([0,∞), H) with sup_{t≥0} ‖ψ(t)‖ ≤ δ_3:
⋃_{t≥0} {v_ψ(t; x) + G(v_ψ(t; x), B_{δ_3}(0))} ⊆ D_ι^2(δ_1, δ_2, χ)},

D_ι^4(δ_1, δ_2, δ_3, δ_4, χ) := {x ∈ D_ι^3(δ_1, δ_2, δ_3, χ) |
⋃_{t≥0} ⋂_{v∈B_{δ_4}(φ_ι)} {v + G(v, z)} ⊆ D_ι^3(δ_1, δ_2, δ_3, χ)}.   (2.4)

In finite dimensions, where the dynamical system t ↦ u(t; x) also exists for negative times, these reductions take a much more natural shape; see the Appendix of [30]. The reduced domains of attraction are nested by construction, and we have D_ι = ⋃_{χ>χ_0, δ∈(0,1]} D_ι^4(δ, δ, δ, δ, χ) (cf. [16]). For any δ_0 ∈ (0,1] and χ > χ_0 such that φ_ι ∈ D_ι^1(δ_0, χ), and for δ ∈ (0, δ_0], the reduced domains of attraction are positively invariant under the dynamical system (2.1). For convenience we set D_ι^2(δ, χ) := D_ι^2(δ, δ, χ), D_ι^3(δ, χ) := D_ι^3(δ, δ, δ, χ) and D_ι^4(δ, χ) analogously.

Proposition 2.2. For any generic choice of f such that (2.1) is Morse-Smale and any χ > χ_0 there exist constants κ_0, κ_1 > 0 (depending on f and χ) with the following property. For any monotonically increasing function γ· : (0,1] → (0,1) with lim_{ε→0} γ_ε = 0 there is a constant ε_0 ∈ (0,1] such that for each ε ∈ (0, ε_0] the conditions t ≥ κ_0 + κ_1 |ln γ_ε| and x ∈ D_ι^1(γ_ε, χ) imply ‖u(t; x) − φ_ι‖ < γ_ε/4.

This result is based on the existence of a Lyapunov function, the Morse-Smale hyperbolicity of the fixed points, and the fine dynamics of the deterministic solution flow. In [16] this is shown for the Chafee-Infante equation; its generalization is straightforward.
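The logarithmic relaxation time in Proposition 2.2 reflects the exponential attraction of a hyperbolic stable state. A scalar caricature (an illustrative assumption, not the PDE itself) is the gradient flow ẋ = x − x³ with stable state φ = 1 and linearization rate f′(1) = −2, so that each additional decade of accuracy γ costs a fixed time increment of roughly ln(10)/2:

```python
import numpy as np

# Toy model for Proposition 2.2 (an assumption, not the PDE): the scalar
# gradient flow x' = x - x**3 has the hyperbolic stable state phi = 1 with
# linearization rate f'(1) = -2, hence |x(t) - 1| ~ C * exp(-2t).

def relaxation_time(x0, gamma, dt=1e-3):
    """Euler time until the solution enters the gamma/4-ball around phi = 1."""
    x, t = x0, 0.0
    while abs(x - 1.0) >= gamma / 4:
        x += dt * (x - x**3)
        t += dt
    return t

gammas = [1e-1, 1e-2, 1e-3, 1e-4]
times = [relaxation_time(0.2, g) for g in gammas]
diffs = [b - a for a, b in zip(times, times[1:])]
print([round(t, 2) for t in times])
print([round(d, 3) for d in diffs])  # nearly constant, about ln(10)/2 ~ 1.151
```

The successive differences stabilize near ln(10)/2, i.e. the entry time grows affinely in |ln γ|, matching the shape κ_0 + κ_1 |ln γ_ε| of the proposition.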

2.2 The stochastic reaction-diffusion equation under consideration

Given a filtered probability space Ω = (Ω, A, P, (F_t)_{t≥0}) satisfying the usual conditions in the sense of Protter [44], let L = (L(t))_{t≥0} be a càdlàg version of a Lévy process in (H, B(H)). We denote by M_0(H) the class of Radon measures ν on B(H) satisfying ν(A) < ∞ for all A ∈ B(H) with 0 ∉ Ā. The Lévy-Khintchine representation establishes a unique Lévy triplet (h, Q, ν), where h ∈ H, Q ∈ L_2(H) and ν ∈ M_0(H) with ν({0}) = 0 and ∫_H (1 ∧ ‖y‖²) ν(dy) < ∞, such that the characteristic function φ_{L(t)}(u) := E[exp(i⟨⟨u, L(t)⟩⟩)] has the exponent

t ( i⟨⟨h, u⟩⟩ − ½⟨⟨Qu, u⟩⟩ + ∫_H ( e^{i⟨⟨u,z⟩⟩} − 1 − i⟨⟨z, u⟩⟩ 1{‖z‖ ≤ 1} ) ν(dz) ).

By the Lévy-Itô representation of L there exist a Q-Wiener process (B_Q(t))_{t≥0} and a Poisson random measure N on Ω with intensity measure dt ⊗ ν(dz) on [0,∞) × H, with compensated Poisson random measure Ñ([a,b) × A) = N([a,b) × A) − (b − a)ν(A) for a ≤ b, A ∈ B(H), 0 ∉ Ā, such that P-a.s. for all t ≥ 0

L(t) = ht + B_Q(t) + ∫_0^t ∫_{‖z‖≤1} z Ñ(ds dz) + ∫_0^t ∫_{‖z‖>1} z N(ds dz).   (2.5)

For a comprehensive account we refer to [9] and [32]. In this study we set h = 0 and B_Q = 0, since their contributions to the exit are asymptotically negligible compared to that of the pure jump part. The map G : H × H → H satisfies the following Lipschitz and boundedness conditions: there are constants K_1, K_2 > 0 such that for any x_1, x_2 ∈ H

∫_{B_1(0)} ‖G(x_1, z)‖² ν(dz) ≤ K_1 (1 + ‖x_1‖²) and ∫_{B_1(0)} ‖G(x_1, z) − G(x_2, z)‖² ν(dz) ≤ K_2 ‖x_1 − x_2‖².

Consider the formal stochastic reaction-diffusion equation for t > 0, x ∈ H, ζ ∈ J and ε ∈ (0,1]

dX^ε(t, ζ) = ( ∂²/∂ζ² X^ε(t, ζ) + f(X^ε(t, ζ)) ) dt + G(X^ε(t−, ζ), εdL(t, ζ))
with X^ε(t, 0) = X^ε(t, 1) = 0 and X^ε(0, ζ) = x(ζ),   (2.6)

in the sense that

G(X^ε(t, ζ), εdL(t, ζ)) = ∫_{‖z‖≤1} G(X^ε(t−, ζ), εz(ζ)) Ñ(dt dz) + ∫_{‖z‖>1} G(X^ε(t−, ζ), εz(ζ)) N(dt dz).

Proposition 2.3. Assume the growth condition (2.2) for f ∈ C²(R, R). Then for any mean-zero càdlàg L²(P; H)-martingale ξ = (ξ(t))_{t≥0}, T > 0, and initial value x ∈ H, equation (2.6) driven by εdξ instead of εdL has a unique càdlàg mild solution (Y^ε(t; x))_{t∈[0,T]}. The transition kernels of the solution process Y^ε induce a homogeneous Markov family satisfying the Feller property, and hence the strong Markov property.

The proof relies on the local Lipschitz continuity of f : H → H and the global bound of v_ψ in terms of ψ. A proof for dissipative polynomials f is given in [45], Chapter 10, and for the Chafee-Infante equation in [16]. By interlacing of large jumps this notion of solution extends to the heavy-tailed process L.

Corollary 2.4. For x ∈ H equation (2.6) has a global càdlàg mild solution (X^ε(t; x))_{t≥0}, which satisfies the strong Markov property.

2.3 The specific hypotheses and the main results

For γ, ε ∈ (0,1], χ > χ_0, x ∈ D_ι^2(ε^γ, χ) and the càdlàg mild solution (X^ε(t; x))_{t≥0} of (2.6) we define the first exit time from the reduced domain of attraction

τ_x^ι(ε, χ) := inf{t > 0 | X^ε(t; x) ∉ D_ι^2(ε^γ, χ)}.

Hypotheses:

(D.1) The function f is generic in the sense that the solution flow of (2.1) defines a Morse-Smale system. In addition we assume 2 ≤ |P⁻| < ∞.

(D.2) The function f satisfies the eventual monotonicity condition (2.2).

(S.1) The Lévy measure ν ∈ M_0(H) is regularly varying with index −α, α > 0, and limit measure µ ∈ M_0(H): there exist measurable functions h, ℓ : (0,∞) → (0,∞) such that

lim_{r→∞} ν(rA)/(h(r) µ(A)) = 1 for any A ∈ B(H) with 0 ∉ Ā, and h(r) = r^{−α} ℓ(r),

where ℓ is slowly varying in the sense that lim_{r→∞} ℓ(ar)/ℓ(r) = 1 for any a > 0 (cf. [9] and [32]).

(S.2) There is a globally Lipschitz continuous function G_1 : H → [0,∞) such that

‖G(y, z)‖ ≤ G_1(y) ‖z‖, y, z ∈ H.

We define the set of increment vectors z ∈ H sending x ∈ H into the set U ∈ B(H) as

J^U(x) := {z ∈ H | x + G(x, z) ∈ U}, x ∈ H,

as well as the measure m_ι(A) := µ(J^A(φ_ι)) for A ∈ B(H) with 0 ∉ Ā, and h_ε := h(1/ε) for ε ∈ (0,1].

(S.3) For all φ_ι ∈ P⁻ and χ > χ_0 we have m_ι((D_ι ∩ U^χ)^c) > 0.

(S.4) For all φ_ι ∈ P⁻ and η > 0 there are δ > 0 and χ > χ_0 such that m_ι(H \ ⋃_ι D_ι^4(δ, χ)) < η.

We define the characteristic exit rate λ_ε^ι of the system (2.6) from D_ι by

λ_ε^ι := ν( (1/ε) J^{(D_ι)^c}(φ_ι) ), ε ∈ (0,1].   (2.7)

Then (S.1) implies λ_ε^ι/h_ε = λ_ε^ι/(ε^α ℓ(1/ε)) → m_ι((D_ι)^c) as ε → 0+. The main exit time theorem reads as follows.

Theorem 2.5. Let Hypotheses (D.1-2) and (S.1-4) be satisfied. Then there is a family of EXP(1)-distributed random variables (s^ι(ε))_{ε∈(0,1]} on Ω satisfying the following. For any C > 0 and θ ∈ (0,1) there are χ > χ_0 and ε_0, γ ∈ (0,1] such that ε ∈ (0, ε_0] implies

sup_{x∈D_ι^3(ε^γ, χ)} E[ exp( θ |λ_ε^ι τ_x^ι(ε, χ) − s^ι(ε)| ) ] ≤ 1 + C.

Therefore we obtain the convergence of all moments, lim_{ε→0} E[|λ_ε^ι τ_x^ι(ε, χ)|^n] ∈ [n! − C, n! + C] uniformly over D_ι^3(ε^γ, χ), as well as the following polynomial behavior, with the supremum exchangeable for the infimum:

sup_{x∈D_ι^3(ε^γ, χ)} E[τ_x^ι(ε, χ)] ∈ [ (1 − C)/λ_ε^ι, (1 + C)/λ_ε^ι ] ⊆ [ (1 − 2C)/( ε^α ℓ(1/ε) m_ι((D_ι)^c) ), (1 + 2C)/( ε^α ℓ(1/ε) m_ι((D_ι)^c) ) ].

In terms of [8], the memorylessness of s^ι(ε) describes the "unpredictability" of the exit times, with a "polynomial" loss of memory as opposed to the Gaussian "exponential" loss of memory. For the statement of the main result about the exit loci we write Δ_t L := L(t) − L(t−), t > 0, for L given by (2.5). For ρ ∈ (0,1) and ε ∈ (0,1] we construct the large jump times of L by

T_0(ε) := 0, T_k(ε) := inf{ t > T_{k−1}(ε) | ‖Δ_t L‖ > ε^{−ρ} }, k ≥ 1,   (2.8)

and the large jump increments by W_k(ε) := Δ_{T_k(ε)} L, k ∈ N. The family (W_k(ε))_{k∈N} is i.i.d. with

P(W_k(ε) ∈ A) = ν(A ∩ B_{ε^{−ρ}}^c(0)) / ν(B_{ε^{−ρ}}^c(0)),

where β_ε := ν(B_{ε^{−ρ}}^c(0)) satisfies β_ε / (ε^{ρα} ℓ(ε^{−ρ})) → µ(B_1^c(0)) as ε → 0+.

Theorem 2.6. Let Hypotheses (D.1-2) and (S.1-4) be satisfied. Then there is a family of random variables (k*(ε))_{ε∈(0,1]} on Ω, with k*(ε) being GEO(β_ε)-distributed, satisfying the following. For any C > 0 and 0 < p < α there are χ > χ_0 and γ, ρ, ε_0 ∈ (0,1] such that ε ∈ (0, ε_0] implies

sup_{x∈D_ι^3(ε^γ, χ)} E[ ‖X^ε(τ_x^ι(ε, χ); x) − (φ_ι + G(φ_ι, εW_{k*(ε)}))‖^p ] ≤ C,

and in particular for any U ∈ B(H) with m_ι(U) > 0 and m_ι(∂U) = 0 we have

sup_{x∈D_ι^3(ε^γ, χ)} | P(X^ε(τ_x^ι(ε, χ); x) ∈ U) − m_ι(U ∩ (D_ι)^c)/m_ι((D_ι)^c) | ≤ C.

The second result follows since lim_{ε→0} P(εW_{k*(ε)}(ε) ∈ A) = µ(A ∩ (D_ι)^c)/µ((D_ι)^c) for any A ∈ B(H). The construction of the domains of attraction and their reductions can be carried out for any measurable set B_ι ⊆ D_ι which is positively invariant under the dynamical system (2.1) and contains φ_ι in its open kernel B°. Think, for instance, of the level sets U^χ ∩ D_ι for χ sufficiently small. The exit result then reads as follows. For any such set B we keep the notation of (2.4) and write B_ι^1(δ, χ) etc., replacing the expression D_ι by B in the definitions. The same applies to τ_x^ι(ε, χ), with λ_ε^ι = ν((1/ε) J^{B^c}(φ_ι)) ≈ ε^α ℓ(1/ε) µ(J^{B^c}(φ_ι)).

Corollary 2.7. Let Hypotheses (D.1-2) and (S.1-4) be satisfied. Consider a Borel set B ⊆ D_ι with φ_ι ∈ B° which is positively invariant under the dynamical system (2.1). Then there exist a family of EXP(1)-distributed random variables (s^ι(ε))_{ε∈(0,1]} and a family of N-valued, GEO(β_ε)-distributed random variables (k*(ε))_{ε∈(0,1]} such that for any C > 0 and θ ∈ (0,1) there are χ > χ_0, γ = γ(χ), ρ = ρ(χ) and ε_0 = ε_0(χ, γ, ρ) such that ε ∈ (0, ε_0] implies

sup_{x∈B_3(ε^γ, χ)} E[ exp( θ |λ_ε^ι τ_x^ι(ε, χ) − s^ι(ε)| ) ] ≤ 1 + C,

and for any 0 < p < α

sup_{x∈B_3(ε^γ, χ)} E[ ‖X^ε(τ_x^ι(ε, χ); x) − (φ_ι + G(φ_ι, εW_{k*(ε)}))‖^p ] ≤ C,

with the same immediate consequences as in Theorems 2.5 and 2.6.

Since the exit rate ε ↦ λ_ε^ι ≈ ε^α ℓ(1/ε) m_ι((D_ι)^c) asymptotically depends on ι only through a prefactor, the process X^ε converges on the common time scale t/ε^α to a Markov chain whose transitions from D_ι to D_κ depend only on the limiting measures (m_ι(D_κ))_κ. This behavior is typical for regularly varying Lévy noises such as α-stable noise ([17, 31, 33]); it differs strongly from the Gaussian case (see in particular [25]) and corresponds to the degenerate Gaussian case discussed in the introduction of [31].

Corollary 2.8. Under the assumptions of Theorem 2.5 there exists a continuous-time Markov chain M = (M_t)_{t≥0} on Ω with values in P⁻ and infinitesimal generator G given, for p = |P⁻|, by

G = ( −m^1((D^1)^c)   m^1(D^2)   …   m^1(D^p)
      ⋮                                  ⋮
      m^p(D^1)   …   m^p(D^{p−1})   −m^p((D^p)^c) ),   (2.9)

such that for any T > 0 we have (X^ε(t/ε^α; x))_{t∈[0,T]} → (M_t)_{t∈[0,T]} in the following sense.

1. For any C > 0 and N ≥ 1, ι_0, …, ι_N ∈ {1, …, p} and 0 < s_1 < ⋯ < s_N there are χ > χ_0 and ε_0 ∈ (0,1] such that ε ∈ (0, ε_0] and x ∈ D_{ι_0} imply

| P_x( X^ε(s_1/h_ε) ∈ D_{ι_1}, …, X^ε(s_N/h_ε) ∈ D_{ι_N} ) − P_{ι_0}( M_{s_1} = φ_{ι_1}, …, M_{s_N} = φ_{ι_N} ) | ≤ C.

2. Let σ be a [−1,1]-valued uniformly distributed random variable independent of L, and let r_ε : (0,1] → R_+ with r_ε → 0 and r_ε h_ε → ∞ as ε → 0. Then for H ∈ C_b(H, R), ι ∈ {1, …, p} and s < t the following holds: for any C > 0 there are χ > χ_0 and ε_0 ∈ (0,1] such that ε ∈ (0, ε_0] implies

| E[ H(X^ε(t/h_ε + σ r_ε; x)) | X^ε(s/h_ε; x) ∈ D_ι ] − E[ H(M_t) | M_s = φ_ι ] | ≤ C.

The proof is an analogous construction to the one in [31]; it does not depend on the infinite dimensionality of the system and is fairly straightforward.
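For |P⁻| = 2 the limiting chain of Corollary 2.8 is completely explicit. The sketch below uses hypothetical rates a and b as stand-ins for the limiting measures m^1(D^2) and m^2(D^1) (made-up numbers, not quantities computed in this article) and checks that exp(tG) is a stochastic matrix relaxing to the stationary distribution:

```python
import numpy as np

# Hypothetical two-state generator of the form (2.9); a and b stand in for
# the limiting measures m^1(D^2) and m^2(D^1) (illustrative values only).
a, b = 0.7, 0.3
G = np.array([[-a, a],
              [b, -b]])

def expm(tG, terms=60):
    """Matrix exponential by truncated power series (adequate for 2x2)."""
    P, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ tG / k
        P = P + term
    return P

t = 5.0
P = expm(t * G)
pi = np.array([b, a]) / (a + b)               # stationary law of the chain
print(P.sum(axis=1))                           # rows sum to 1 (stochastic matrix)
print(np.abs(P - np.vstack([pi, pi])).max())   # ~ exp(-(a+b)t): relaxation
```

Since G = −(a+b)(I − Π), with Π the rank-one projection onto π, one has exp(tG) = Π + e^{−(a+b)t}(I − Π): the two-state chain forgets its initial metastable state at the single rate a + b.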

2.4 Examples

2.4.1 The Chafee-Infante equation with multiplicative stable noise

We consider equation (2.1) for f(r) = −λ(r³ − r) with parameter λ > (2π)² and λ ≠ (2πn)² for all n ∈ N. The corresponding deterministic dynamical system is a well-understood Morse-Smale system with two stable states φ^± whose domains of attraction are D^±, see [27, 29].

Consider a symmetric α-stable Lévy process L(t) = ∫_0^t ∫_{‖z‖≤1} z Ñ(ds dz) + ∫_0^t ∫_{‖z‖>1} z N(ds dz) with values in H and characteristic triplet (0, 0, ν), where, in polar coordinates, ν(dz) = σ(dz̄) dr/r^{α+1} for α ∈ (0,2), r = ‖z‖, z̄ = z/‖z‖, and σ a symmetric Radon measure on ∂B_1(0) ⊂ H. The characteristic function has the special shape φ_{L(t)}(u) = exp(−t c_α ‖u‖^α) for c_α = ∫_{∂B_1(0)} |⟨⟨z̄, u/‖u‖⟩⟩|^α σ(dz̄). This Lévy measure is selfsimilar, ν(rA) = r^{−α} ν(A) for r > 0 and A ∈ B(H) with 0 ∉ Ā, and has limiting measure µ = ν. We apply the multiplicative noise G(x, z) := ‖x‖z. Then the H-valued mild solution of the equation

dX^ε(t) = ( ΔX^ε(t) − λ(X^ε(t)³ − X^ε(t)) ) dt + ε‖X^ε(t−)‖ dL(t)

exhibits the following first exit time and exit locus behavior from D = D^+; for convenience we drop all superindices ±. For any C > 0 there are χ > χ_0 sufficiently large and ε_0, γ ∈ (0,1) such that ε ∈ (0, ε_0] implies for U ∈ B(H) with ν(‖φ‖^{−1}(U − φ)) > 0 and ν(‖φ‖^{−1}(∂U − φ)) = 0

sup_{x∈D_3(ε^γ, χ)} E[τ_x(ε, χ)] ∈ 1/( ε^α ν(‖φ‖^{−1}(D^c − φ)) ) [1 − C, 1 + C],

sup_{x∈D_3(ε^γ, χ)} P(X^ε(τ; x) ∈ U) ∈ ν(‖φ‖^{−1}((U ∩ D^c) − φ)) / ν(‖φ‖^{−1}(D^c − φ)) [1 − C, 1 + C],

as opposed to the case of additive noise in [16] with G(x, z) = z, where we formally choose χ = ∞ (with D_3(ε^γ) = D_3(ε^γ, ∞)), such that for U ∈ B(H) with ν(U − φ) > 0 and ν(∂U − φ) = 0

sup_{x∈D_3(ε^γ)} E[τ_x(ε)] ∈ 1/( ε^α ν(D^c − φ) ) [1 − C, 1 + C] and

sup_{x∈D_3(ε^γ)} P(X^ε(τ; x) ∈ U) ∈ ν((U ∩ D^c) − φ)/ν(D^c − φ) [1 − C, 1 + C].

In addition, the continuous-time Markov chain M constructed in Corollary 2.8 switches trivially between the states {φ^+, φ^−}, and (X^ε(t/ε^α; x))_{t∈[0,T]} → (M_t)_{t∈[0,T]} in the sense of Corollary 2.8.

2.4.2 The linear heat equation perturbed by additive and multiplicative stable noise

We consider the linear heat equation on H with Dirichlet conditions; note that we may even take f = 0, since the eigenvalues of the Dirichlet Laplacian are strictly negative. The unit ball B := B_1(0) ⊂ H is obviously positively invariant. We consider the respective dynamical system subject to symmetric α-stable noise (L(t))_{t≥0} as in the previous example, with the multiplicative coefficient G(x, z) = ⟨⟨x − v, z⟩⟩w for fixed v, w ∈ H with ‖v‖ = 1 and w ≠ 0. For ε > 0, t > 0, x ∈ B and ζ ∈ (0,1) we consider

dX^ε(t) = (∂²/∂ζ²) X^ε(t) dt + ε ⟨⟨X^ε(t−) − v, dL(t)⟩⟩ w with X^ε(0) = x,

which has the following first exit time and exit locus behavior from B_2(ε^γ, χ). For χ > 1 we have B_2(ε^γ, χ) = B_2(ε^γ, ∞) =: B_2(ε^γ). We calculate the exit increments at the stable state 0:

J^{B^c}(0) = {z ∈ H | ‖⟨⟨−v, z⟩⟩ w‖ > 1} = {z ∈ H | |⟨⟨v, z⟩⟩| > 1/‖w‖} = ( −(v^⊥ + v/‖w‖) ) ∪ ( v^⊥ + v/‖w‖ ),

where v^⊥ + v/‖w‖ is shorthand for the half-space {z ∈ H | ⟨⟨v, z⟩⟩ > 1/‖w‖}. We obtain that for any C > 0 there are ε_0, γ ∈ (0,1) such that ε ∈ (0, ε_0] implies for U ∈ B(H) with ν(J^{U∩B^c}(0)) > 0 and ν(J^{∂U∩B^c}(0)) = 0

sup_{x∈B_3(ε^γ)} E[τ_x(ε, χ)] ∈ 1/( ε^α 2ν(v^⊥ + v/‖w‖) ) [1 − C, 1 + C] and

sup_{x∈B_3(ε^γ)} P(X^ε(τ; x) ∈ U) ∈ ν(J^{U∩B^c}(0)) / ( 2ν(v^⊥ + v/‖w‖) ) [1 − C, 1 + C],

as opposed to the case of additive noise G(x, z) = z, where

sup_{x∈B_3(ε^γ)} E[τ_x(ε, χ)] ∈ 1/( ε^α ν(B^c) ) [1 − C, 1 + C] and

sup_{x∈B_3(ε^γ)} P(X^ε(τ; x) ∈ U) ∈ ν(U ∩ B^c)/ν(B^c) [1 − C, 1 + C].

3 Deviations of the Small Noise Solution

This section is devoted to a large-deviations-type estimate for the stochastic convolution between consecutive large jumps. It quantifies the fact that, in the time interval strictly between two adjacent large jumps, the solution of (2.6) is perturbed only by the small noise component and deviates from the solution of the deterministic equation by only a small ε-dependent quantity, with probability converging to 1 exponentially in the small noise limit ε → 0. For the correct understanding of the role of the different scales we need to avoid cancellations and therefore formulate and prove our results for abstract scale functions

ρ· : (0,1] → [1,∞) with lim_{ε→0+} ρ_ε = ∞ and lim_{ε→0+} ερ_ε = 0,
γ· : (0,1] → (0,1] with lim_{ε→0+} γ_ε = 0,   (3.1)

before choosing them numerically in Proposition 3.2. We assume that the functions and all limits above are attained monotonically. Recall the notation for these abstract scales:

T_0 := 0, T_k := inf{ t > T_{k−1} | ‖Δ_t L‖ > ρ_ε } and W_k := Δ_{T_k} L, k ≥ 1.   (3.2)

We define the compound Poisson process η^ε of L, consisting only of the large jumps, with intensity β_ε := ν(B_{ρ_ε}^c(0)) and jump probability measure P(W_k ∈ A) = ν(A ∩ B_{ρ_ε}^c(0))/β_ε:

η_t^ε := ∫_0^t ∫_{‖z‖>ρ_ε} z N(ds dz) = Σ_{k=1}^∞ W_k 1{T_k ≤ t}, t > 0.
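In one dimension the decomposition of L into the large-jump compound Poisson part η^ε and the bounded-jump remainder ξ^ε can be simulated directly. The sketch below is illustrative only: it assumes the symmetric tail ν(|z| > r) = r^{−α} and discards jumps below a small floor, which approximates the infinite-activity small-jump part (all names and numbers are assumptions for the demo):

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative parameters (not from the paper)
alpha, T, rho_eps, floor = 1.5, 10.0, 5.0, 1e-2

# Jumps with |z| > floor form a compound Poisson process of rate nu(|z| > floor).
rate = floor ** (-alpha)
n = rng.poisson(rate * T)
mags = floor * (rng.pareto(alpha, n) + 1)   # P(|z| > r) = (r / floor)**(-alpha)
jumps = rng.choice([-1.0, 1.0], size=n) * mags

large = np.abs(jumps) > rho_eps     # jump sizes of eta^eps (threshold rho_eps)
eta_jumps = jumps[large]
xi_jumps = jumps[~large]            # bounded jumps (|z| <= rho_eps) of xi^eps

beta_eps = rho_eps ** (-alpha)      # intensity of eta^eps: nu(|z| > rho_eps)
print(n, len(eta_jumps), beta_eps * T)  # large-jump count fluctuates around beta_eps*T
```

Between two consecutive η^ε-jumps the path is driven by ξ^ε alone, whose jumps are bounded by ρ_ε; this is the regime in which the exponential smallness of the stochastic convolution is established below.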

The complementary small jump process ξ^ε := L − η^ε has the following shape:

ξ_t^ε = ∫_0^t ∫_{‖z‖≤1} z Ñ(ds dz) + ∫_0^t ∫_{1<‖z‖≤ρ_ε} z N(ds dz), t ≥ 0.

For χ > χ_0, ε ∈ (0,1] and y ∈ D_ι^2(γ_ε, χ) we set

σ^1 := σ_{χ,y}^1(ε) := inf{t > 0 | Y^ε(t; y) ∉ U^χ}.   (3.4)

3.1 Exponential estimate of a stochastic convolution with bounded jumps

In this subsection we show that the stochastic convolution with respect to G(Y^ε(s−), εdξ^ε(s)) is very small (i.e. of order ≤ γ_ε^q for some q > 1) with a probability that tends to 1 exponentially in γ_ε, as long as Y^ε and the stochastic convolution remain inside the sublevel set containing a (large) ball of radius χ. For the solution Y^ε of (3.3) with y ∈ D_ι^2(γ_ε, χ) we consider the multiplicative stochastic convolution process

Ψ_t^{ε,y} := ∫_0^t S(t − s) G(Y^ε(s−; y), εdξ^ε(s)), t ≥ 0,

which is an F-adapted càdlàg process, and set

σ^2 := σ_{χ,y}^2(ε) := inf{t > 0 | Ψ_t^{ε,y} ∉ U^χ} and σ := σ^1 ∧ σ^2.   (3.5)

Proposition 3.1. Let Hypotheses (D.1-2) and (S.1-2) be satisfied, and let the functions ρ·, γ· given by (3.1) satisfy for some q > 1 the limit relation

lim_{ε→0} Γ(ε) = 0 monotonically, where Γ(ε) := (ερ_ε)² / γ_ε^{4q+3}.   (3.6)

Then for any χ > χ_0 there is ε_0 ∈ (0,1] such that ε ∈ (0, ε_0] implies

sup_{y∈D_ι^2(γ_ε, χ)} P( sup_{s∈[0,σ]} ‖Ψ_s^{ε,y}‖ > γ_ε^q ) ≤ 16 exp(−(5γ_ε)^{−1}).   (3.7)

Proof. We start with the exponential Kolmogorov inequality, followed by an exponential version of the Burkholder-Davis-Gundy inequality, the Campbell representation of the Laplace transform of the quadratic variation of Poisson random integrals, and a comparison principle for the characteristic exponent of Ψ^ε.

Step 0: Drift estimate. We show that for any χ > χ0 there is ε0 ∈ (0, 1] such that ε ∈ (0, ε0] implies

\[
\sup_{y\in D_2^\iota(\gamma_\varepsilon,\chi)}\|b^{\varepsilon,y}_\sigma\| < \frac{1}{2}\gamma_\varepsilon^q.
\]

We write Y(t) = Y^ε(t; y) and recall Hypothesis (S.2). For g1(r) := sup_{x∈U^r} G1(x), the triangle inequality and the norm estimate of the heat semigroup S yield

\[
\begin{aligned}
\|b^{\varepsilon,y}_\sigma\|
&\leq \Big\|\int_0^\sigma\int_{1\leq\|z\|\leq\rho_\varepsilon} S(\sigma-s)G(Y(s-),\varepsilon z)\,\nu(dz)\,ds\Big\|
\leq \int_0^\sigma\int_{1\leq\|z\|\leq\rho_\varepsilon} \|S(\sigma-s)G(Y(s-),\varepsilon z)\|\,\nu(dz)\,ds\\
&\leq g_1(\chi)\int_0^\sigma\int_{1\leq\|z\|\leq\rho_\varepsilon} e^{-\Lambda_0(\sigma-s)}\|\varepsilon z\|\,\nu(dz)\,ds
= \varepsilon g_1(\chi)\int_0^\sigma e^{-\Lambda_0(\sigma-s)}\,ds\int_{1\leq\|z\|\leq\rho_\varepsilon}\|z\|\,\nu(dz)
\leq g_1(\chi)\,\frac{\nu(B_1^c(0))}{\Lambda_0}\,\frac{\varepsilon\rho_\varepsilon}{\gamma_\varepsilon}
\leq C_0\,\frac{\varepsilon\rho_\varepsilon}{\gamma_\varepsilon}.
\end{aligned}
\]

Note that the right-hand side is independent of the size of σ. We assume C0 ≥ 1 without loss of generality. The limit (3.6) implies the existence of a constant ε0 ∈ (0, 1] such that for ε ∈ (0, ε0] we have \(\varepsilon\rho_\varepsilon \leq (2C_0)^{-1}\gamma_\varepsilon^{q+1}\), which yields the claim.

We now start the proof of the main estimate.

Step 1: Exponential estimate of the stochastic convolution. For ε0 as in Step 0, ε ∈ (0, ε0] and m ∈ N we have

\[
\mathbb{P}\Big(\sup_{t\leq\sigma\wedge m}\|\Psi^{\varepsilon,y}_t\| > \gamma_\varepsilon^q\Big)
\leq \mathbb{P}\Big(\sup_{t\leq\sigma\wedge m}\|\tilde\Psi^{\varepsilon,y}_t\| > \frac{1}{2}\gamma_\varepsilon^q\Big) + \mathbb{P}\Big(\|b^{\varepsilon,y}_\sigma\| > \frac{1}{2}\gamma_\varepsilon^q\Big)
= \mathbb{P}\Big(\sup_{t\leq\sigma\wedge m}\|\tilde\Psi^{\varepsilon,y}_t\| > \frac{1}{2}\gamma_\varepsilon^q\Big).
\]

Kolmogorov's exponential inequality yields for the free parameter λ > 0

\[
\mathbb{P}\Big(\sup_{t\leq\sigma\wedge m}\|\tilde\Psi^{\varepsilon,y}_t\| > \frac{1}{2}\gamma_\varepsilon^q\Big)
= \mathbb{P}\Big(\lambda\sup_{t\leq\sigma\wedge m}\|\tilde\Psi^{\varepsilon,y}_t\|^2 > \frac{\lambda}{4}\gamma_\varepsilon^{2q}\Big)
\leq \exp\Big(-\frac{\lambda}{4}\gamma_\varepsilon^{2q}\Big)\,\mathbb{E}\Big[\exp\Big(\lambda\sup_{t\leq\sigma\wedge m}\|\tilde\Psi^{\varepsilon,y}_t\|^2\Big)\Big]. \tag{3.8}
\]
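As a numerical aside, the exponential Chernoff-type mechanism behind Kolmogorov's inequality — P(X ≥ a) ≤ e^{−λa} E[e^{λX}] — can be checked by Monte Carlo for a simple sum of bounded variables. The distribution, sample size and parameters a, λ below are illustrative choices, not quantities from the paper.

```python
import math
import random

random.seed(1)

def chernoff_bound(samples, a, lam):
    """Empirical check of P(X >= a) <= exp(-lam*a) * E[exp(lam*X)]."""
    n = len(samples)
    p_emp = sum(1 for x in samples if x >= a) / n
    mgf_emp = sum(math.exp(lam * x) for x in samples) / n
    return p_emp, math.exp(-lam * a) * mgf_emp

# X = sum of 50 centered uniform "small jumps" on [-1, 1] (bounded, mean zero).
samples = [sum(random.uniform(-1.0, 1.0) for _ in range(50)) for _ in range(20_000)]

p_emp, bound = chernoff_bound(samples, a=10.0, lam=0.5)
assert p_emp <= bound  # the exponential bound dominates the empirical tail
print(p_emp, bound)
```

This is exactly the shape of (3.8): the probability of a large supremum is paid for by an exponential factor times an exponential moment, which the remainder of the proof estimates.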

We have that \((M^{(1)}_{t\wedge\sigma})_{t\geq0}\), for \(M^{(1)}_t := \int_0^t\int_{\|z\|\leq\rho_\varepsilon} G(Y(s-),\varepsilon z)\,\tilde N(ds\,dz)\), is an F-martingale. The Itô formula of [50] yields the P-a.s. inequality

\[
\begin{aligned}
\|\tilde\Psi^{\varepsilon,x}_t\|^{2n} &= \big(\|\tilde\Psi^{\varepsilon,x}_t\|^2\big)^n\\
&\leq \Big(2\int_0^t e^{-2\Lambda_0(t-s)}\langle\langle\tilde\Psi^{\varepsilon,x}_{s-}, dM^{(1)}_s\rangle\rangle
+ \sum_{0\leq s\leq t} e^{-2\Lambda_0(t-s)}\big(\|\tilde\Psi^{\varepsilon,x}_s\|^2 - \|\tilde\Psi^{\varepsilon,x}_{s-}\|^2 - 2\langle\langle\tilde\Psi^{\varepsilon,x}_{s-}, \Delta_s M^{(1)}\rangle\rangle\big)\Big)^n\\
&= e^{-2n\Lambda_0 t}\Big(2\int_0^t\int_{\|z\|\leq\rho_\varepsilon} e^{2\Lambda_0 s}\langle\langle\tilde\Psi^{\varepsilon,x}_{s-}, G(Y(s-),\varepsilon z)\rangle\rangle\,\tilde N(ds\,dz)
+ \int_0^t\int_{\|z\|\leq\rho_\varepsilon} e^{2\Lambda_0 s}\|G(Y(s-),\varepsilon z)\|^2\,N(ds\,dz)\Big)^n\\
&=: e^{-2n\Lambda_0 t}\big(|M^{(2)}_t| + |M^{(3)}_t|\big)^n.
\end{aligned}
\]

Therefore we obtain

\[
\sup_{t\in[0,\sigma\wedge m]}\|\tilde\Psi^{\varepsilon,x}_t\|^{2n}
\leq e^{-2n\Lambda_0(\sigma\wedge m)}\Big(\sup_{t\in[0,\sigma\wedge m]}|M^{(2)}_t| + \sup_{t\in[0,\sigma\wedge m]}|M^{(3)}_t|\Big)^n
= e^{-2n\Lambda_0(\sigma\wedge m)}\Big(\sup_{t\in[0,\sigma\wedge m]}|M^{(2)}_t| + M^{(3)}_{\sigma\wedge m}\Big)^n.
\]

Using the pathwise Burkholder-Davis-Gundy inequality by Siorpaes [49] we continue, for
\(H_s := M^{(2)}_s\big/\sqrt{[M^{(2)}]_s + \sup_{r\in[0,s]}(M^{(2)}_r)^2}\) (which satisfies |H_s| ≤ 1 P-a.s. for all s ≥ 0),

\[
\Big(\sup_{t\in[0,\sigma\wedge m]}|M^{(2)}_t| + M^{(3)}_{\sigma\wedge m}\Big)^n
\leq \Big(6\sqrt{[M^{(2)}]_{\sigma\wedge m}} - \int_0^{\sigma\wedge m} H_{s-}\,dM^{(2)}_s + M^{(3)}_{\sigma\wedge m}\Big)^n,
\]

and furthermore

\[
\mathbb{E}\Big[\sup_{t\in[0,\sigma\wedge m]}\|\tilde\Psi^{\varepsilon,x}_t\|^{2n}\Big]
\leq e^{-2n\Lambda_0(\sigma\wedge m)}\,\mathbb{E}\Big[\Big(6\sqrt{[M^{(2)}]_{\sigma\wedge m}} - \int_0^{\sigma\wedge m} H_{s-}\,dM^{(2)}_s + M^{(3)}_{\sigma\wedge m}\Big)^n\Big].
\]

Therefore we continue the estimate of the exponential factor on the right-hand side of (3.8) with the help of the monotone convergence theorem:

\[
\begin{aligned}
\mathbb{E}\Big[\exp\Big(\lambda\sup_{t\leq\sigma\wedge m}\|\tilde\Psi^{\varepsilon,y}_t\|^2\Big)\Big]
&= \sum_{n=0}^\infty\frac{\lambda^n}{n!}\,\mathbb{E}\Big[\sup_{t\leq\sigma\wedge m}\|\tilde\Psi^{\varepsilon,y}_t\|^{2n}\Big]\\
&\leq \sum_{n=0}^\infty\frac{\lambda^n}{n!}\,e^{-2n\Lambda_0(\sigma\wedge m)}\,\mathbb{E}\Big[\Big(6\sqrt{[M^{(2)}]_{\sigma\wedge m}} - \int_0^{\sigma\wedge m}H_{s-}\,dM^{(2)}_s + M^{(3)}_{\sigma\wedge m}\Big)^n\Big]\\
&= \mathbb{E}\Big[\exp\Big(\lambda e^{-2\Lambda_0(\sigma\wedge m)}\Big(6\sqrt{[M^{(2)}]_{\sigma\wedge m}} - \int_0^{\sigma\wedge m}H_{s-}\,dM^{(2)}_s + M^{(3)}_{\sigma\wedge m}\Big)\Big)\Big].
\end{aligned}
\]

Using \(6\sqrt{x} \leq 3(x/c^2 + c^2)\) for any c > 0 we obtain

\[
\ldots \leq \mathbb{E}\Big[\exp\Big(\frac{3\lambda}{c^2}e^{-4\Lambda_0(\sigma\wedge m)}[M^{(2)}]_{\sigma\wedge m}
- \lambda e^{-2\Lambda_0(\sigma\wedge m)}\int_0^{\sigma\wedge m}H_{s-}\,dM^{(2)}_s
+ \lambda e^{-2\Lambda_0(\sigma\wedge m)}M^{(3)}_{\sigma\wedge m} + 3\lambda c^2\Big)\Big],
\]

and hence, splitting the exponential by the Cauchy-Schwarz inequality (which doubles the coefficients),

\[
\begin{aligned}
\ldots &\leq 4\,\mathbb{E}\Big[\exp\Big(\frac{6\lambda}{c^2}e^{-4\Lambda_0(\sigma\wedge m)}[M^{(2)}]_{\sigma\wedge m}\Big)\Big]
+ 4\exp(6\lambda c^2)\\
&\quad + 4\,\mathbb{E}\Big[\exp\Big(-2\lambda e^{-2\Lambda_0(\sigma\wedge m)}\int_0^{\sigma\wedge m}H_{s-}\,dM^{(2)}_s\Big)\Big]
+ 4\,\mathbb{E}\Big[\exp\Big(2\lambda e^{-2\Lambda_0(\sigma\wedge m)}M^{(3)}_{\sigma\wedge m}\Big)\Big]\\
&=: J_1(\varepsilon) + J_2(\varepsilon) + J_3(\varepsilon) + J_4(\varepsilon).
\end{aligned}
\]

Step 2: Campbell's formula and optimization over the free parameters. The idea is to choose the free parameters λ and c as ε-dependent functions \(\lambda_\varepsilon := \gamma_\varepsilon^{-(2q+1)}\) and \(c_\varepsilon := \gamma_\varepsilon^{q+1}\), such that on the right-hand side of (3.8) the term structure we obtain reads

\[
\exp\Big(-\frac{\lambda_\varepsilon}{4}\gamma_\varepsilon^{2q}\Big)\big(J_1(\varepsilon) + J_2(\varepsilon) + J_3(\varepsilon) + J_4(\varepsilon)\big),
\]

from which we extract the desired estimate. We estimate the terms J_1, ..., J_4 one by one.

J_1: Note that P-a.s. we have

\[
\begin{aligned}
[M^{(2)}]_{\sigma\wedge m}
&= \Big[2\int_0^\cdot\int_{\|z\|\leq\rho_\varepsilon} e^{2\Lambda_0 s}\langle\langle\tilde\Psi^{\varepsilon,x}_{s-}, G(Y(s-),\varepsilon z)\rangle\rangle\,\tilde N(ds\,dz)\Big]_{\sigma\wedge m}\\
&= 4\int_0^{\sigma\wedge m}\int_{\|z\|\leq\rho_\varepsilon} e^{4\Lambda_0 s}\big(\langle\langle\tilde\Psi^{\varepsilon,x}_{s-}, G(Y(s-),\varepsilon z)\rangle\rangle\big)^2\,N(ds\,dz)\\
&\leq C_1\int_0^{\sigma\wedge m}\int_{\|z\|\leq\rho_\varepsilon} e^{4\Lambda_0 s}\|\tilde\Psi^{\varepsilon,x}_{s-}\|^2\|\varepsilon z\|^2\,N(ds\,dz)
\leq C_2\int_0^{\sigma\wedge m}\int_{\|z\|\leq\rho_\varepsilon} e^{4\Lambda_0 s}\|\varepsilon z\|^2\,N(ds\,dz).
\end{aligned}
\]

Campbell's formula yields, with \(C_3 := 6C_2\) and using \(e^{-4\Lambda_0((\sigma\wedge m)-s)} \leq e^{-2\Lambda_0((\sigma\wedge m)-s)}\),

\[
\mathbb{E}\Big[\exp\Big(\frac{6\lambda_\varepsilon}{c_\varepsilon^2}e^{-4\Lambda_0(\sigma\wedge m)}[M^{(2)}]_{\sigma\wedge m}\Big)\Big]
\leq \mathbb{E}\Big[\exp\Big(\int_0^{\sigma\wedge m}\int_{\|z\|\leq\rho_\varepsilon}\Big(\exp\Big(\frac{C_3\|\varepsilon z\|^2}{\gamma_\varepsilon^{4q+3}}e^{-2\Lambda_0((\sigma\wedge m)-s)}\Big)-1\Big)\nu(dz)\,ds\Big)\Big].
\]

The limit (3.6) then implies, uniformly in m,

\[
\sup_{m\in\mathbb{N}}\sup_{s\in[0,\sigma\wedge m]}\sup_{\|z\|\leq\rho_\varepsilon}\frac{C_3\|\varepsilon z\|^2}{\gamma_\varepsilon^{4q+3}}\,e^{-2\Lambda_0((\sigma\wedge m)-s)}
\leq C_3\,\frac{(\varepsilon\rho_\varepsilon)^2}{\gamma_\varepsilon^{4q+3}} \longrightarrow 0, \qquad\text{as } \varepsilon\to0.
\]

Hence, for ε0 ∈ (0, 1] such that ε ∈ (0, ε0] implies \(C_3(\varepsilon\rho_\varepsilon)^2/\gamma_\varepsilon^{4q+3} \leq 1\) and \(\rho_\varepsilon \geq \int_{\|z\|\leq1}\|z\|^2\nu(dz)/\nu(B_1^c(0))\), and using the estimate \(e^r - 1 \leq (e-1)r\) for all r ∈ [0, 1], we obtain

\[
\begin{aligned}
&\mathbb{E}\Big[\exp\Big(\int_0^{\sigma\wedge m}\int_{\|z\|\leq\rho_\varepsilon}\Big(\exp\Big(\frac{C_3\|\varepsilon z\|^2}{\gamma_\varepsilon^{4q+3}}e^{-2\Lambda_0((\sigma\wedge m)-s)}\Big)-1\Big)\nu(dz)\,ds\Big)\Big]\\
&\leq \mathbb{E}\Big[\exp\Big((e-1)\int_0^{\sigma\wedge m}\int_{\|z\|\leq\rho_\varepsilon}\frac{C_3\|\varepsilon z\|^2}{\gamma_\varepsilon^{4q+3}}e^{-2\Lambda_0((\sigma\wedge m)-s)}\,\nu(dz)\,ds\Big)\Big]\\
&\leq \mathbb{E}\Big[\exp\Big(\frac{C_3(e-1)}{2\Lambda_0}\,\frac{\varepsilon^2}{\gamma_\varepsilon^{4q+3}}\big(1-e^{-2\Lambda_0(\sigma\wedge m)}\big)\int_{\|z\|\leq\rho_\varepsilon}\|z\|^2\,\nu(dz)\Big)\Big]
\leq \exp\Big(\frac{2C_3C_4}{\Lambda_0}\,\frac{(\varepsilon\rho_\varepsilon)^2}{\gamma_\varepsilon^{4q+3}}\Big),
\end{aligned}
\]

for \(C_4 = \int_{\|z\|\leq1}\|z\|^2\nu(dz) + \nu(B_1^c(0))\). Note that the right-hand side is estimated independently of m ∈ N only by monotonicity. We have thus obtained for ε ∈ (0, ε0]

\[
\exp\Big(-\frac{\lambda_\varepsilon\gamma_\varepsilon^{2q}}{4}\Big)J_1(\varepsilon)
= \exp\Big(-\frac{1}{4\gamma_\varepsilon}\Big)J_1(\varepsilon)
\leq 4\exp\Big(-\frac{1}{4\gamma_\varepsilon}\Big)\exp\Big(\frac{2C_3C_4}{\Lambda_0}\Gamma(\varepsilon)\Big)
\leq 4\exp\Big(-\frac{1}{5\gamma_\varepsilon}\Big).
\]

J_2: Trivially, there is ε0 ∈ (0, 1] such that for ε ∈ (0, ε0] we have

\[
\exp\Big(-\frac{\lambda_\varepsilon\gamma_\varepsilon^{2q}}{4}\Big)J_2(\varepsilon)
\leq 4\exp\Big(-\frac{1}{4\gamma_\varepsilon} + 6\lambda_\varepsilon c_\varepsilon^2\Big)
= 4\exp\Big(-\frac{1}{4\gamma_\varepsilon} + 6\gamma_\varepsilon\Big)
\leq 4\exp\Big(-\frac{1}{5\gamma_\varepsilon}\Big).
\]
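Campbell's formula for Poisson random measures, E[exp(∫ f dN)] = exp(t ∫ (e^{f(z)} − 1) ν(dz)), which drives the estimates of J_1 (and J_4 below), can be illustrated by Monte Carlo for a compound Poisson process. The intensity, jump law and function f below are illustrative choices only.

```python
import math
import random

random.seed(2)

def poisson_sample(mean):
    # Knuth's method, fine for small means
    l, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def mc_poisson_exponential(rate, t, f, jump_sampler, n_paths):
    """Monte Carlo estimate of E[exp(sum_i f(Z_i))] where the Z_i are the
    jumps of a compound Poisson process with the given rate on [0, t]."""
    total = 0.0
    for _ in range(n_paths):
        n_jumps = poisson_sample(rate * t)  # number of jumps is Poisson(rate*t)
        total += math.exp(sum(f(jump_sampler()) for _ in range(n_jumps)))
    return total / n_paths

rate, t = 2.0, 1.0
f = lambda z: 0.3 * z                 # bounded on the jump support below
jump = lambda: random.uniform(0.0, 1.0)

mc = mc_poisson_exponential(rate, t, f, jump, 100_000)
# Campbell: E exp(sum f(Z_i)) = exp(rate * t * E[e^{f(Z)} - 1])
ez = (math.exp(0.3) - 1.0) / 0.3 - 1.0   # E[e^{0.3 Z} - 1] for Z ~ U(0,1)
exact = math.exp(rate * t * ez)
print(mc, exact)
```

The identity turns an exponential moment of a Poisson integral into an ordinary integral against the Lévy measure, which is exactly how the m-uniform bounds for J_1, J_3 and J_4 are produced.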

J_3: Recall that

\[
M^{(2)}_t = 2\int_0^t\int_{\|z\|\leq\rho_\varepsilon} e^{2\Lambda_0 s}\langle\langle\tilde\Psi^{\varepsilon,x}_{s-}, G(Y(s-),\varepsilon z)\rangle\rangle\,\tilde N(ds\,dz),
\]

so that for the function \(h(s-,\varepsilon z) := 2\,e^{2\Lambda_0 s}H_{s-}\langle\langle\tilde\Psi^{\varepsilon,x}_{s-}, G(Y(s-),\varepsilon z)\rangle\rangle\) we have

\[
\int_0^{\sigma\wedge m} H_{s-}\,dM^{(2)}_s = \int_0^{\sigma\wedge m}\int_{\|z\|\leq\rho_\varepsilon} h(s-,\varepsilon z)\,\tilde N(ds\,dz),
\]

and thus

\[
\begin{aligned}
&\mathbb{E}\Big[\exp\Big(-2\lambda_\varepsilon e^{-2\Lambda_0(\sigma\wedge m)}\int_0^{\sigma\wedge m}H_{s-}\,dM^{(2)}_s\Big)\Big]\\
&= \mathbb{E}\Big[\exp\Big(\int_0^{\sigma\wedge m}\int_{\|z\|\leq\rho_\varepsilon}\Big(e^{-2\lambda_\varepsilon e^{-2\Lambda_0(\sigma\wedge m)}h(s-,\varepsilon z)} - 1 + 2\lambda_\varepsilon e^{-2\Lambda_0(\sigma\wedge m)}h(s-,\varepsilon z)\Big)\nu(dz)\,ds\Big)\Big]\\
&\leq \mathbb{E}\Big[\exp\Big(\int_0^{\sigma\wedge m}\int_{\|z\|\leq\rho_\varepsilon}\Big(e^{2\lambda_\varepsilon e^{-2\Lambda_0(\sigma\wedge m)}|h(s-,\varepsilon z)|} - 1 - 2\lambda_\varepsilon e^{-2\Lambda_0(\sigma\wedge m)}|h(s-,\varepsilon z)|\Big)\nu(dz)\,ds\Big)\Big].
\end{aligned}
\]

Note that for s ∈ [0, σ ∧ m], z ∈ H and ε ∈ (0, 1] we have \(|h(s,\varepsilon z)| \leq 2d(\chi)g_1(\chi)e^{2\Lambda_0 s}\varepsilon\|z\|\), and hence the limit (3.6) yields, uniformly in m ∈ N and P-a.s.,

\[
\sup_{m\in\mathbb{N}}\sup_{s\in[0,\sigma\wedge m]}\sup_{\|z\|\leq\rho_\varepsilon} 2\lambda_\varepsilon e^{-2\Lambda_0(\sigma\wedge m)}|h(s,\varepsilon z)|
\leq 4d(\chi)g_1(\chi)\,\frac{\varepsilon\rho_\varepsilon}{\gamma_\varepsilon^{2q+1}} \longrightarrow 0, \qquad\text{as } \varepsilon\to0+.
\]

Therefore, using \(e^r - 1 - r \leq (e-2)r^2\) for r ∈ [0, 1], we obtain for ε0 ∈ (0, 1] sufficiently small such that ε ∈ (0, ε0] implies \(4d(\chi)g_1(\chi)\varepsilon\rho_\varepsilon/\gamma_\varepsilon^{2q+1} \leq 1\) and \(\rho_\varepsilon \geq \int_{\|z\|\leq1}\|z\|^2\nu(dz)/\nu(B_1^c(0))\) the inequality

\[
\begin{aligned}
J_3(\varepsilon) &\leq 4\,\mathbb{E}\Big[\exp\Big((e-2)\int_0^{\sigma\wedge m}\int_{\|z\|\leq\rho_\varepsilon}\big(2\lambda_\varepsilon e^{-2\Lambda_0(\sigma\wedge m)}|h(s,z)|\big)^2\,\nu(dz)\,ds\Big)\Big]\\
&\leq 4\,\mathbb{E}\Big[\exp\Big(16(e-2)C_4\,d(\chi)^2 g_1(\chi)^2\,\lambda_\varepsilon^2(\varepsilon\rho_\varepsilon)^2\int_0^{\sigma\wedge m}e^{-4\Lambda_0((\sigma\wedge m)-s)}\,ds\Big)\Big]\\
&\leq 4\exp\big(C_5\,\lambda_\varepsilon^2(\varepsilon\rho_\varepsilon)^2\big)
= 4\exp\Big(C_5\,\frac{(\varepsilon\rho_\varepsilon)^2}{\gamma_\varepsilon^{4q+2}}\Big) \longrightarrow 4, \qquad\text{as } \varepsilon\to0+.
\end{aligned}
\]

This implies for ε ∈ (0, ε0] the estimate \(\exp(-\tfrac{1}{4}\lambda_\varepsilon\gamma_\varepsilon^{2q})J_3(\varepsilon) \leq 4\exp(-\tfrac{1}{5\gamma_\varepsilon})\).

J_4: This case resembles that of J_1(ε). Since we have only positive jumps we estimate P-a.s.

\[
M^{(3)}_{\sigma\wedge m} = \int_0^{\sigma\wedge m}\int_{\|z\|\leq\rho_\varepsilon} e^{2\Lambda_0 s}\|G(Y(s-),\varepsilon z)\|^2\,N(ds\,dz)
\leq \int_0^{\sigma\wedge m}\int_{\|z\|\leq\rho_\varepsilon} e^{2\Lambda_0 s} g_1(\chi)\|\varepsilon z\|^2\,N(ds\,dz),
\]

leading to

\[
\mathbb{E}\big[\exp\big(2\lambda_\varepsilon e^{-2\Lambda_0(\sigma\wedge m)}M^{(3)}_{\sigma\wedge m}\big)\big]
\leq \mathbb{E}\Big[\exp\Big(\int_0^{\sigma\wedge m}\int_{\|z\|\leq\rho_\varepsilon}\Big(\exp\big(2\lambda_\varepsilon e^{-2\Lambda_0((\sigma\wedge m)-s)}g_1(\chi)\|\varepsilon z\|^2\big)-1\Big)\nu(dz)\,ds\Big)\Big].
\]

Analogously to J_1(ε) we obtain with the help of the limit (3.6) that for ε → 0+ we have P-a.s.

\[
\sup_{m\in\mathbb{N}}\sup_{s\in[0,\sigma\wedge m]}\sup_{\|z\|\leq\rho_\varepsilon}2\lambda_\varepsilon e^{-2\Lambda_0((\sigma\wedge m)-s)}g_1(\chi)\|\varepsilon z\|^2
\leq 2g_1(\chi)\lambda_\varepsilon(\varepsilon\rho_\varepsilon)^2 = 2g_1(\chi)\,\frac{(\varepsilon\rho_\varepsilon)^2}{\gamma_\varepsilon^{2q+1}} \longrightarrow 0.
\]

The additional choice of ε0 ∈ (0, 1] such that ε ∈ (0, ε0] implies \(2g_1(\chi)(\varepsilon\rho_\varepsilon)^2/\gamma_\varepsilon^{2q+1} \leq 1\) and \(\rho_\varepsilon \geq \int_{\|z\|\leq1}\|z\|^2\nu(dz)/\nu(B_1^c(0))\) ensures

\[
\begin{aligned}
J_4(\varepsilon) &\leq 4\,\mathbb{E}\Big[\exp\Big(\int_0^{\sigma\wedge m}\int_{\|z\|\leq\rho_\varepsilon}\Big(\exp\big(2\lambda_\varepsilon e^{-2\Lambda_0((\sigma\wedge m)-s)}g_1(\chi)\|\varepsilon z\|^2\big)-1\Big)\nu(dz)\,ds\Big)\Big]\\
&\leq 4\,\mathbb{E}\Big[\exp\Big((e-1)\,2g_1(\chi)C_6\,\lambda_\varepsilon(\varepsilon\rho_\varepsilon)^2\int_0^{\sigma\wedge m}e^{-2\Lambda_0((\sigma\wedge m)-s)}\,ds\Big)\Big]\\
&\leq 4\exp\Big(\frac{(e-1)g_1(\chi)C_6}{\Lambda_0}\,\frac{(\varepsilon\rho_\varepsilon)^2}{\gamma_\varepsilon^{2q+1}}\Big) \longrightarrow 4, \qquad\text{as } \varepsilon\to0+.
\end{aligned}
\]

Step 3: We conclude for ε ∈ (0, ε0] that

\[
\mathbb{P}\Big(\sup_{t\in[0,\sigma\wedge m]}\|\Psi^{\varepsilon,y}_t\| > \gamma_\varepsilon^q\Big)
\leq \exp\Big(-\frac{\lambda_\varepsilon\gamma_\varepsilon^{2q}}{4}\Big)\,\mathbb{E}\Big[\exp\Big(\lambda_\varepsilon\sup_{t\in[0,\sigma\wedge m]}\|\tilde\Psi^{\varepsilon,y}_t\|^2\Big)\Big]
\leq 16\exp\Big(-\frac{1}{5\gamma_\varepsilon}\Big),
\]

where the last estimate is uniform in m ∈ N. Thus the limit m → ∞ finishes the proof.

3.2 Exponential estimates of the deviations of the small jump equation

For ε, γ ∈ (0, 1], y ∈ H, t ≥ 0 we define the event of a small stochastic convolution

\[
E_{t,y}(\gamma,\varepsilon) := \Big\{\sup_{s\in[0,t]}\|\Psi^{\varepsilon,y}_s\| \leq \gamma\Big\} \tag{3.9}
\]

and

\[
E_y(\gamma,\varepsilon) := \Big\{\sup_{s\in[0,T_1]}\|Y^\varepsilon(s;y) - u(s;y)\| \leq \gamma\Big\}. \tag{3.10}
\]

We suppress the dependence on ε ∈ (0, 1]. This subsection is dedicated to the proof of the following estimate used in the proof of the main result.

Proposition 3.2. Let the Hypotheses (D.1-2) and (S.1-2) be satisfied and the functions γ·, ρ· be given by (3.1). Then there exists a constant q > 1 such that, if γ·, ρ· satisfy condition (3.6) for this q, we obtain for any χ > χ0 and θ ∈ (0, 1) a constant ε0 ∈ (0, 1] such that ε ∈ (0, ε0] implies

\[
\sup_{x\in D_2^\iota(\gamma_\varepsilon,\chi)}\mathbb{E}\big[e^{\theta\lambda^\iota_\varepsilon T_1}\,\mathbf{1}(E_x^c)\big] \leq 16\exp\Big(-\frac{1}{5\gamma_\varepsilon}\Big). \tag{3.11}
\]

In addition, the scales can be chosen as follows:

\[
\gamma_\varepsilon := \varepsilon^{\frac{1}{4(2q+1)}}, \qquad \rho_\varepsilon := \varepsilon^{-\frac{3}{4}}, \qquad
\beta_\varepsilon = \nu\big(\rho_\varepsilon B_1^c(0)\big) = O\big(\rho_\varepsilon^{-\alpha}\big)_{\varepsilon\to0} = O\big(\varepsilon^{\frac{3}{4}\alpha}\big)_{\varepsilon\to0}. \tag{3.12}
\]

The proof strategy consists in the estimate of the event E_y by E_{y,T_1}. To this effect we introduce the nonlinear residuum R^ε of the randomness in Y^ε:

\[
R^{\varepsilon,x}_t := Y^\varepsilon(t;x) - u(t;x) - \Psi^{\varepsilon,x}_t, \qquad t \geq 0,\ x\in D_2^\iota(\gamma_\varepsilon,\chi),\ \varepsilon\in(0,1]. \tag{3.13}
\]

The quantity we have to control in E_x has the shape Y^ε − u = Ψ^ε + R^ε. By Proposition 3.1 we have a good estimate of Ψ^ε. It is therefore natural to control R^ε in terms of Ψ^ε, which is done first for large initial values (of Y^ε) on small time scales and then for initial values (of Y^ε) close to the stable state and large time scales.

Proposition 3.3. Let the Hypotheses (D.1-2) and (S.1-2) be satisfied and the functions γ·, ρ· be given by (3.1). Then there exists a constant q > 1 such that, if γ·, ρ· satisfy condition (3.6) for this q, for any χ > χ0 there exists a constant ε0 ∈ (0, 1] such that ε ∈ (0, ε0] and x ∈ D2ι(γε, χ) imply

\[
E_{T_1,x}(\gamma_\varepsilon^q) \subseteq \Big\{\sup_{t\in[0,T_1]}\|Y^\varepsilon(t;x)-u(t;x)\| \leq \tfrac{1}{2}\gamma_\varepsilon\Big\}, \qquad \mathbb{P}\text{-a.s.} \tag{3.14}
\]

Proof of Proposition 3.2: Due to the independence of Y^ε and T_1 and Proposition 3.3 we estimate

\[
\sup_{x\in D_2^\iota(\gamma_\varepsilon,\chi)}\mathbb{E}\big[e^{\theta\lambda^\iota_\varepsilon T_1}\mathbf{1}(E_x^c)\big]
\leq \int_0^\infty e^{\theta\lambda^\iota_\varepsilon s}\sup_{x\in D_2^\iota(\gamma_\varepsilon,\chi)}\mathbb{P}\Big(\sup_{t\leq s}\|\Psi^{\varepsilon,x}_t\|>\gamma_\varepsilon^q\Big)\beta_\varepsilon e^{-\beta_\varepsilon s}\,ds
\leq \frac{\beta_\varepsilon}{\beta_\varepsilon-\theta\lambda^\iota_\varepsilon}\sup_{x\in D_2^\iota(\gamma_\varepsilon,\chi)}\mathbb{P}\Big(\sup_{t\in[0,\infty)}\|\Psi^{\varepsilon,x}_t\|>\gamma_\varepsilon^q\Big)
\leq 16\exp\Big(-\frac{1}{5\gamma_\varepsilon}\Big)
\]

for ε sufficiently small.

Lemma 3.4. Let the Hypotheses (D.1-2) and (S.1-2) be satisfied and the functions γ·, ρ· be given by (3.1). Then for all χ > χ0 there are ε0 ∈ (0, 1] and q > 0 such that for any K > 0, ε ∈ (0, ε0], x ∈ Dι(γε, χ) and all functions T· : (0, 1] → R monotonically decreasing with T^ε → ∞ as ε ց 0 the following holds. The bound T^ε ≤ κ0 + κ1|ln(γε)| (given in Proposition 2.2) implies on the event E_{T^ε∧σ}(γ_ε^q) that

\[
\sup_{t\in[0,T^\varepsilon\wedge\sigma]}\|R^{\varepsilon,x}_t\| \leq K\gamma_\varepsilon. \tag{3.15}
\]

Proof. Fix χ > χ0, ε ∈ (0, 1] and x ∈ Dι(γε, χ). Then the positive invariance of U^χ and Dι(γε, χ) ⊂ U^χ yield

\[
\sup_{x\in D_2^\iota(\gamma_\varepsilon,\chi)}\sup_{t\geq0}\|u(t;x)\| \leq \sup_{y\in U^\chi}\|y\| = d(\chi).
\]

The process R^{ε,x}_t satisfies formally for all x ∈ H and ε ∈ (0, 1]

\[
\frac{d}{dt}R^{\varepsilon,x}_t = \Delta R^{\varepsilon,x}_t + f(Y^\varepsilon(t;x)) - f(u(t;x)), \qquad R^{\varepsilon,x}_0 = 0. \tag{3.16}
\]

Recall that f : H → H is locally Lipschitz continuous. Hence it is globally Lipschitz continuous on U^χ and we obtain for y, u ∈ H

\[
\|f(y)-f(u)\| \leq \ell(y,u)\,\|y-u\|
\]

for some continuous function ℓ : H × H → (0, ∞). The mild formulation of (3.16) and the identity Y^ε(t;x) − u(t;x) = R^{ε,x}_t + Ψ^{ε,x}_t imply the estimate

\[
\|R^{\varepsilon,x}_t\| \leq \int_0^t e^{-\Lambda_0(t-s)}\|f(Y^\varepsilon(s;x))-f(u(s;x))\|\,ds
\leq \int_0^t e^{-\Lambda_0(t-s)}\,\ell\big(Y^\varepsilon(s;x),u(s;x)\big)\,\|R^{\varepsilon,x}_s + \Psi^{\varepsilon,x}_s\|\,ds.
\]

We define the stopping time σ′ = σ′(ε) := inf{t ≥ 0 | \(\|R^{\varepsilon,x}_t\| \geq 1\)} ∧ σ. Then we obtain on the events {t ≤ T^ε ∧ σ′} ∩ E_{T^ε∧σ′}(γ_ε^q), for some fixed q > 1 which we determine at the end of the proof, the boundedness

\[
\|Y^\varepsilon(t;x)\| \leq \|u(t;x)\| + \|\Psi^{\varepsilon,x}_t\| + \|R^{\varepsilon,x}_t\| \leq d(\chi) + 2.
\]

Due to x ∈ U^χ and the positive invariance u(t;x) ∈ U^χ for all t ≥ 0, the events E_{T^ε∧σ′}(γ_ε^q) and {t ≤ T^ε ∧ σ′} imply for \(\ell_\chi := \sup_{(y,u)\in(B_{d(\chi)+2}(0))^2}\ell(y,u) < \infty\) the estimate

\[
e^{\Lambda_0 t}\|R^{\varepsilon,x}_t\| \leq \ell_\chi\Big(\int_0^t e^{\Lambda_0 s}\|R^{\varepsilon,x}_s\|\,ds + \int_0^t e^{\Lambda_0 s}\|\Psi^{\varepsilon,x}_s\|\,ds\Big).
\]

Hence Gronwall's lemma with \(e^{\Lambda_0\cdot0}R^{\varepsilon,x}_0 = 0\) yields on E_{σ′}(γ_ε^q) and t ≤ T^ε ∧ σ

\[
e^{\Lambda_0 t}\|R^{\varepsilon,x}_t\| \leq \int_0^t e^{\ell_\chi(t-s)}\int_0^s e^{\Lambda_0 r}\|\Psi^{\varepsilon,x}_r\|\,dr\,ds
\leq \sup_{r\in[0,t]}\|\Psi^{\varepsilon,x}_r\|\int_0^t\int_0^s e^{\ell_\chi(t-s)}e^{\Lambda_0 r}\,dr\,ds.
\]
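The double integral just obtained admits an elementary closed form, which can be cross-checked numerically for concrete illustrative values ℓχ = 2, Λ0 = 1 (these numbers are not from the paper):

```python
import math

def double_integral(t, l, L, n=2000):
    """Numerically evaluate int_0^t int_0^s e^{l(t-s)} e^{L r} dr ds
    (inner integral in closed form, outer by the midpoint rule)."""
    h = t / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        inner = (math.exp(L * s) - 1.0) / L
        total += math.exp(l * (t - s)) * inner * h
    return total

def closed_form(t, l, L):
    # e^{l t}/(l(l-L)) - e^{L t}/(L(l-L)) + 1/(L l)
    return (math.exp(l * t) / (l * (l - L))
            - math.exp(L * t) / (L * (l - L))
            + 1.0 / (L * l))

for t in (0.5, 1.0, 2.0):
    num = double_integral(t, 2.0, 1.0)
    exact = closed_form(t, 2.0, 1.0)
    assert abs(num - exact) < 1e-3
print("closed form verified")
```

The closed form grows like \(e^{\ell_\chi t}\), which is what produces the factor \(e^{\kappa t}/\kappa^2\) in the bound on the residuum below.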

The elementary calculus

\[
\int_0^t\int_0^s e^{\ell_\chi(t-s)}e^{\Lambda_0 r}\,dr\,ds
= \frac{e^{\ell_\chi t}}{\ell_\chi(\ell_\chi-\Lambda_0)} - \frac{e^{\Lambda_0 t}}{\Lambda_0(\ell_\chi-\Lambda_0)} + \frac{1}{\Lambda_0\ell_\chi}
\]

yields for κ := ℓχ − Λ0 > 0 the estimate

\[
\|R^{\varepsilon,x}_t\| \leq \frac{e^{\kappa t}}{\kappa^2}\sup_{r\in[0,t]}\|\Psi^{\varepsilon,x}_r\|
\]

for t ≤ T^ε ∧ σ′ on E_{T^ε∧σ′}(γ_ε^q). Setting q := κ1κ + 2 we obtain for any K > 0 a value ε0 ∈ (0, 1] sufficiently small such that ε ∈ (0, ε0] implies for T^ε := κ0 + κ1|ln(γε)| on the event E_{T^ε∧σ′}(γ_ε^q) the desired estimate

\[
\sup_{t\in[0,T^\varepsilon\wedge\sigma]}\|R^{\varepsilon,x}_t\| \leq \frac{e^{\kappa T^\varepsilon}}{\kappa^2}\,\gamma_\varepsilon^{q}
\leq \frac{e^{\kappa\kappa_0}}{\kappa^2}\,\gamma_\varepsilon^{2} \leq K\gamma_\varepsilon. \tag{3.17}
\]

If ε0 ∈ (0, 1] is additionally small enough such that Kγε < 1 for ε ∈ (0, ε0], we have on the event E_{T^ε∧σ′}(γ_ε^q)

\[
\inf\{t\geq0 \mid \|R^{\varepsilon,x}_t\| \geq 1\} > T^\varepsilon\wedge\sigma,
\]

which implies (3.15).

Lemma 3.5. Let the Hypotheses (D.1-2) and (S.1-2) be satisfied and the functions γ·, ρ· be given by (3.1). Then for all χ > χ0 and φι ∈ P⁻ there exist constants δ0, δ1, K0 > 0 such that for all x ∈ B_{δ0}(φι) and ε ∈ (0, 1] we have on the event E_σ(δ1) the inequality

\[
\sup_{t\in[0,\sigma]}\|R^{\varepsilon,x}_t\| \leq K_0\sup_{r\in[0,\sigma]}\|\Psi^{\varepsilon,x}_r\|. \tag{3.18}
\]
r∈[0,σ]

Proof. As before we fix an arbitrary time scale T · : (0, 1] → (0, ∞) satisfying w.l.o.g. T ε → ∞ monotonically as ε → 0. The stability of φι yields that the linearization ∆v + f ′ (φι )v of ∆u + f (u) in φι has strictly negative maximal eigenvalues −Λ1 < 0, in that h∆v + f ′ (φι )v, vi 6 −Λ1 |v|2 for v ∈ H. We fix δ0 ∈ (0, 1) such that additionally sup v,w∈Bδ0

(φι )

kf ′ (v) − f ′ (w)k 6

Λ1 4

and

sup v∈Bδ0 (φι )

kf ′ (v)k 6 2kf ′ (φι )k =: C ′ .

The stability also implies the existence of δ1 ∈ (0, 1) such that for x ∈ Bδ1 (φι ) u(t; x) ∈ B δ0 (φι )

t > 0.

4

Denote for x ∈ Bδ1 (φι )

δ0 } ∧ σ. 4 With the help of the decomposition (3.13) the mean value theorem applied to (3.16) yields σ ′ := inf{t > 0 | kRtε,x k >

dRtε,x = ∆Rtε,x + f (Y ε (t; x)) − f (u(t; x)) dt Z1 ε,x ε,x = ∆Rt + f ′ (u(t; x) + θ(Rtε,x + Ψε,x + Ψε,x t ))dθ(Rt t ) 0

=

∆Rtε,x



ι

+ f (φ

)Rtε,x

+

Z1 0

+

Z1

 ε,x ′ ι f ′ (u(t; x) + θ(Rtε,x + Ψε,x t )) − f (φ ) dθRt

ε,x f ′ (u(t; x) + θ(Rtε,x + Ψε,x t ))dθΨt .

0

21

Multiplying with Rtε,x in L2 (0, 1), integration by parts and Young’s inequality yield for any δ1 < on {t 6 T ε ∧ σ ′ } ∩ ET ε ∧σ′ (δ1 )

δ0 4

1 d ε,x 2 Λ1 ε,x 2 Λ1 ε,x 2 (C0 )2 ε,x 2 |Ψt | |Rt | + Λ1 |Rtε,x |2 6 |Rt | + C0 |Rtε,x ||Ψε,x |R | + t |6 2 dt 4 2 t Λ0 such that

2(C0 )2 ε,x 2 d ε,x 2 |Ψt | . |Rt | + Λ1 |Rtε,x |2 6 dt Λ1

Hence Gronwall’s lemma with the initial condition R0ε,x = 0 yields |Rε (s; x)|2 6

2(C0 )2 ε,x 2 |Ψt | Λ1

(3.19)

on {t 6 σ ′ ∧ T ε } ∩ Eσ′ ∧T ε (δ1 ). In order to obtain an estimate in H we use the smoothing property of the heat semigroup S and the mean value theorem as well as (3.19) on {t 6 T ε ∧ σ ′ } ∩ ET ε ∧σ′ (δ1 ) kRtε,x k

6 C1

Zt 0

e−Λ0 (t−s) √ |f (Ysε,x ) − f (u(s; x))|ds t−s

6 C1 (C0 +

Λ0 ) 4

Zt 0

e−Λ0 (t−s) √ (|Rsε,x | + |Ψε,x s |)ds t−s

 Zt e−Λ0 (t−s) Λ0  (C0 )2 ε,x √ 6 C1 (C0 + +1 ) 2 dr sup |Ψε,x r | 6 C2 sup kΨr k, 4 Λ0 t−s r∈[0,t] r∈[0,t] 0

 ′2 R ∞ ) + 1 where K1 = C2 = C1 (C0 + Λ40 ) 2(C Λ0 0 kRtε,x k

δ0 4 }

exp(−Λ0 r) √ dr r

ε

< ∞. If in addition δ1
0 | > > T ∧ σ on ET ε ∧σ (δ1 ). In addition, we have not used any specific property of T ε . This finishes the proof. We combine the previous two lemmas.
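The differential inequality used above, d/dt x + Λ1 x ≤ c g(t) with x(0) = 0, forces x(t) ≤ (c/Λ1)·sup g — a Gronwall-type comparison that can be sanity-checked with a simple explicit Euler integration. The parameters Λ1, c and the forcing g (a stand-in for |Ψ_t|²) below are illustrative, not from the paper.

```python
import math

def euler_bound_check(lam, c, forcing, t_max=10.0, n=100_000):
    """Integrate x' = -lam * x + c * g(t), x(0) = 0, by explicit Euler and
    compare with the Gronwall-type bound x(t) <= (c / lam) * sup g."""
    h = t_max / n
    x, sup_x, sup_g = 0.0, 0.0, 0.0
    for i in range(n):
        g = forcing(i * h)
        sup_g = max(sup_g, g)
        x += h * (-lam * x + c * g)
        sup_x = max(sup_x, x)
    return sup_x, (c / lam) * sup_g

lam, c = 1.5, 2.0
forcing = lambda t: math.sin(t) ** 2  # bounded forcing, standing in for |Psi_t|^2

sup_x, bound = euler_bound_check(lam, c, forcing)
assert sup_x <= bound + 1e-6
print(sup_x, bound)
```

The same mechanism — a dissipative linear part absorbing a bounded forcing — is what keeps the residuum R^ε of the same order as the stochastic convolution Ψ^ε.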

Lemma 3.6. Let the Hypotheses (D.1-2) and (S.1-2) be satisfied and the functions γ·, ρ· be given by (3.1). For the constant q > 1 obtained in Lemma 3.4 let γ·, ρ· additionally satisfy condition (3.6). Then for any χ > χ0 there is a constant ε0 ∈ (0, 1] such that for any ε ∈ (0, ε0] and x ∈ D2ι(γε, χ) we have on the event E_σ(γ_ε^q) the inequality

\[
\sup_{t\in[0,\sigma]}\|R^{\varepsilon,x}_t\| \leq \frac{1}{4}\gamma_\varepsilon. \tag{3.20}
\]

Proof. As in the previous lemmas we prove the result for an arbitrary scale T· : (0, 1] → [1, ∞) with lim_{ε→0} T^ε = ∞ monotonically. Recall the constants κ0, κ1 > 0 from Proposition 2.2 and denote s^ε := κ0 + κ1|ln(γε)|. Assume ε0 ∈ (0, 1] sufficiently small such that γε ≤ δ0 and γ_ε^q < δ1, with δ0, δ1 given in Lemma 3.5. If T^ε ≤ s^ε for all ε ∈ (0, 1], the result follows immediately by Lemma 3.4 for K = 1/4. Consider now T^ε > s^ε for all ε ∈ (0, 1]. Fix ε0 ∈ (0, 1] sufficiently small such that ε ∈ (0, ε0] implies for x ∈ D2ι(γε, χ) on {t ≤ T^ε ∧ σ} ∩ E_{T^ε∧σ}(γ_ε^q) both

\[
\|u(t;x)-\phi^\iota\| \leq \frac{1}{4}\gamma_\varepsilon \quad\text{for } t\geq s^\varepsilon \tag{3.21}
\]

and

\[
\sup_{s\in[0,t]}\|R^{\varepsilon,x}_s\| \leq \frac{1}{9}\gamma_\varepsilon \quad\text{for } t\leq s^\varepsilon. \tag{3.22}
\]

Then we obtain by Lemma 3.4 for K = 1/9 on the event E_{T^ε∧σ}(γ_ε^q)

\[
\|Y^\varepsilon(s^\varepsilon;x)-\phi^\iota\| \leq \|u(s^\varepsilon;x)-\phi^\iota\| + \|R^{\varepsilon,x}_{s^\varepsilon}\| + \|\Psi^{\varepsilon,x}_{s^\varepsilon}\|
\leq \frac{1}{4}\gamma_\varepsilon + \frac{1}{9}\gamma_\varepsilon + \gamma_\varepsilon^q \leq \frac{1}{2}\gamma_\varepsilon.
\]

By Lemma 3.5, for all x ∈ B_{δ0}(φ) the solution satisfies u(t;x) ∈ B_{δ1}(φ) for all t ≥ 0. In addition there is ℓ0 ∈ (0, 1] such that

\[
\|u(t;x)-u(t;y)\| \leq \ell_0\|x-y\| \qquad\text{for all } x,y\in B_{\delta_0}(\phi),\ t\geq0.
\]

Hence we have for ε ∈ (0, ε0] and x ∈ D2ι(γε, χ) on the event {t ≤ T^ε ∧ σ} ∩ E_{T^ε∧σ}(γ_ε^q)

\[
\|u(t;x)-u(t-s^\varepsilon;Y^\varepsilon(s^\varepsilon;x))\| \leq \|u(s^\varepsilon;x)-Y^\varepsilon(s^\varepsilon;x)\|
\leq \|R^{\varepsilon,x}_{s^\varepsilon}\| + \|\Psi^{\varepsilon,x}_{s^\varepsilon}\| \leq \frac{1}{9}\gamma_\varepsilon + \gamma_\varepsilon^q,
\]

and thus, for t ∈ [s^ε, T^ε ∧ σ],

\[
\begin{aligned}
\|R^{\varepsilon,x}_t\| &= \|Y^\varepsilon(t-s^\varepsilon,s^\varepsilon,Y^\varepsilon(s^\varepsilon;x)) - u(t-s^\varepsilon;u(s^\varepsilon;x)) - \Psi^{\varepsilon,x}_t\|\\
&\leq \|Y^\varepsilon(t-s^\varepsilon,s^\varepsilon,Y^\varepsilon(s^\varepsilon;x)) - u(t-s^\varepsilon;Y^\varepsilon(s^\varepsilon;x)) - \Psi^{\varepsilon,Y^\varepsilon(s^\varepsilon;x)}_{t-s^\varepsilon,s^\varepsilon}\|\\
&\quad + \|u(t-s^\varepsilon;u(s^\varepsilon;x)) - u(t-s^\varepsilon;Y^\varepsilon(s^\varepsilon;x))\|
+ \|\Psi^{\varepsilon,x}_t - \Psi^{\varepsilon,Y^\varepsilon(s^\varepsilon;x)}_{t-s^\varepsilon,s^\varepsilon}\|,
\end{aligned}
\]

so that

\[
\sup_{s^\varepsilon\leq t\leq T^\varepsilon\wedge\sigma}\|R^{\varepsilon,x}_t\|
\leq \frac{2}{9}\gamma_\varepsilon + 3\gamma_\varepsilon^q \leq \frac{1}{4}\gamma_\varepsilon
\]

for ε sufficiently small. Combined with (3.22) on [0, s^ε ∧ σ] this proves (3.20).

Since we have not used any specific property of T^ε this finishes the proof.

Proof (of Proposition 3.3). Let the assumptions of Lemma 3.6 be satisfied for some χ > χ0 and q > 1 given by Lemma 3.4. We consider a time scale T· : (0, 1] → R satisfying lim_{ε→0} T^ε = ∞ monotonically. Without loss of generality we assume T^ε ≥ s^ε. By Lemma 3.6 there exists ε0 ∈ (0, 1] such that for all ε ∈ (0, ε0] and x ∈ D2ι(γε, χ) we have P-a.s.

\[
\begin{aligned}
\Big\{\sup_{t\in[0,T^\varepsilon\wedge\sigma]}\|Y^\varepsilon(t;x)-u(t;x)\| > \frac{\gamma_\varepsilon}{2}\Big\}
&= \Big\{\sup_{t\in[0,T^\varepsilon\wedge\sigma]}\|R^{\varepsilon,x}_t+\Psi^{\varepsilon,x}_t\| > \frac{\gamma_\varepsilon}{2}\Big\}\\
&\subseteq \Big\{\sup_{t\in[0,T^\varepsilon\wedge\sigma]}\|R^{\varepsilon,x}_t\| > \frac{\gamma_\varepsilon}{4}\Big\}
\cup \Big\{\sup_{t\in[0,T^\varepsilon\wedge\sigma]}\|\Psi^{\varepsilon,x}_t\| > \frac{\gamma_\varepsilon}{4}\Big\}\\
&\subseteq \Big\{\sup_{t\in[0,T^\varepsilon\wedge\sigma]}\|R^{\varepsilon,x}_t\| > \frac{\gamma_\varepsilon}{4}\Big\}
\cup E^c_{T^\varepsilon\wedge\sigma,x}(\gamma_\varepsilon^q) = E^c_{T^\varepsilon\wedge\sigma,x}(\gamma_\varepsilon^q).
\end{aligned}
\]

Note that by the limit (3.6) we have for ε0 ∈ (0, 1] sufficiently small, ε ∈ (0, ε0] and x ∈ D2ι(γε, χ) ⊂ U^{χ−2γε} that

\[
\|G(Y^\varepsilon(\sigma-;x),\varepsilon\Delta_\sigma\xi^\varepsilon)\| \leq g_1(\chi)\|\varepsilon\Delta_\sigma\xi^\varepsilon\| \leq g_1(\chi)\varepsilon\rho_\varepsilon \leq \gamma_\varepsilon.
\]

Choose in addition ε0 ∈ (0, 1] small enough such that 2γε < χ − γε for all ε ∈ (0, ε0]. Then by definition of σ, since u(t;x) ∈ B_{\frac{1}{2}γε}(φι) for t ≥ s^ε and x ∈ D2ι(γε, χ), this implies that

\[
\begin{aligned}
E_{T^\varepsilon\wedge\sigma}(\gamma_\varepsilon^q)\cap\{\sigma<T^\varepsilon\}
&= E_\sigma(\gamma_\varepsilon^q)\cap\Big\{\sup_{t\in[0,\sigma]}\|Y^\varepsilon(t;x)-u(t;x)\| \leq \tfrac{1}{2}\gamma_\varepsilon\Big\}\\
&\quad\cap\{Y^\varepsilon(t;x)\in D_1^\iota(\gamma_\varepsilon,\chi)\text{ for all }t\in[0,\sigma)\}
\cap\{Y^\varepsilon(t;x)\in B_{\gamma_\varepsilon}(\phi)\text{ for all }t\in[0,\sigma)\}\\
&\quad\cap\{Y^\varepsilon(\sigma-;x)\in B_{2\gamma_\varepsilon}(\phi^\iota)\}
\cap\{Y^\varepsilon(\sigma-;x)+G(Y^\varepsilon(\sigma-;x),\varepsilon\Delta_\sigma\xi^\varepsilon)\in(D^\iota\cap U^\chi)\setminus D_1^\iota(\gamma_\varepsilon,\chi)\}\\
&\subseteq \{Y^\varepsilon(\sigma;x)\in B_{2\gamma_\varepsilon}(\phi^\iota)\}\cap\{Y^\varepsilon(\sigma;x)\notin B_{\chi-\gamma_\varepsilon}(0)\} = \emptyset. \tag{3.23}
\end{aligned}
\]

In addition, \(E_{T^\varepsilon\wedge\sigma,x}(\gamma_\varepsilon^q) \subseteq E_{T^\varepsilon,x}(\gamma_\varepsilon^q+\tfrac{1}{2}\gamma_\varepsilon^q) \subseteq \{\Psi^{\varepsilon,x}_s\in U^\chi\text{ for all }s\in[0,T^\varepsilon]\}\) by definition. Together with (3.23) this implies on the event \(E_{T^\varepsilon\wedge\sigma}(\gamma_\varepsilon^q)\) that P-a.s. we have σ ≥ T^ε. Since T^ε was chosen arbitrarily this finishes the proof.

4 Asymptotic first exit times

4.1 The models of the exit times and exit loci

We now construct on (Ω, A, P) the random variables (s^ι(ε))_{ε∈(0,1]} of Theorem 2.5 and (k*(ε))_{ε∈(0,1]} of Theorem 2.6.

Definition 4.1. For given scales ρ· and γ· in (3.1), \(B_j^\diamond(\varepsilon) := \{\varepsilon W_j\in J^{(D^\iota)^c}(\phi^\iota)\}\) and the arrival times T_k of W_k given in (3.2) we define

\[
s^\iota(\varepsilon) := \sum_{k=1}^\infty T_k\prod_{j=1}^{k-1}\big(1-\mathbf{1}(B_j^\diamond)\big)\,\mathbf{1}(B_k^\diamond), \qquad \varepsilon\in(0,1],
\]

and

\[
k^*(\varepsilon) := \sum_{k=1}^\infty k\prod_{j=1}^{k-1}\big(1-\mathbf{1}(B_j^\diamond)\big)\,\mathbf{1}(B_k^\diamond).
\]

Lemma 4.2. For a given scale ρ· in (3.1) the random variable s^ι(ε) is exponentially distributed with rate λ^ι_ε and each of the random variables k*(ε) is geometrically distributed with rate \(\mathbb{P}(B^\diamond) = \lambda^\iota_\varepsilon/\beta_\varepsilon\). In particular \(\bar s^\iota(\varepsilon) := \lambda^\iota_\varepsilon s^\iota(\varepsilon)\) is exponentially distributed with rate 1. The proof is elementary and provided in the Appendix.
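The structure behind Lemma 4.2 — the first "marked" arrival of a Poisson(β) stream with independent success probability p occurs at an Exp(βp)-distributed time, and the index of that arrival is geometric with parameter p — can be checked by simulation. The values of β and p below are illustrative only.

```python
import random

random.seed(3)

def first_marked_arrival(beta, p):
    """Return (time, index) of the first success in a Poisson(beta) arrival
    stream where each arrival is independently a success with probability p."""
    t, k = 0.0, 0
    while True:
        t += random.expovariate(beta)  # inter-arrival times are Exp(beta)
        k += 1
        if random.random() < p:
            return t, k

beta, p, n = 4.0, 0.25, 100_000
samples = [first_marked_arrival(beta, p) for _ in range(n)]

mean_time = sum(t for t, _ in samples) / n
mean_index = sum(k for _, k in samples) / n

# Exp(beta*p) has mean 1/(beta*p); geometric(p) on {1,2,...} has mean 1/p.
assert abs(mean_time - 1.0 / (beta * p)) < 0.02
assert abs(mean_index - 1.0 / p) < 0.1
print(mean_time, mean_index)
```

In the paper's notation β corresponds to βε (the large-jump intensity) and p to P(B^⋄) = λ^ι_ε/βε, so the first exit-producing jump arrives at an Exp(λ^ι_ε) time, exactly the law of s^ι(ε).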


4.2 Exit events and their estimates

Recall the arrival times T_k = t_1 + · · · + t_k of W_k from (3.2). The following events are the building blocks of the first exit times. For j ∈ N, x ∈ H, χ > χ0 and a given rate γ· : (0, 1) → (0, 1) with γε → 0 as ε → 0 we define

\[
\begin{aligned}
A_x &:= \{Y^\varepsilon(t;x)\in D_2^\iota(\gamma_\varepsilon,\chi)\text{ for }t\in[0,t_1]\text{ and }Y^\varepsilon(t_1;x)+G(Y^\varepsilon(t_1;x),\varepsilon\Delta_{t_1}L)\in D_2^\iota(\gamma_\varepsilon,\chi)\},\\
A_x^- &:= \{Y^\varepsilon(t;x)\in D_2^\iota(\gamma_\varepsilon,\chi)\text{ for }t\in[0,t_1]\text{ and }Y^\varepsilon(t_1;x)+G(Y^\varepsilon(t_1;x),\varepsilon\Delta_{t_1}L)\in D_3^\iota(\gamma_\varepsilon,\chi)\},\\
B_x &:= \{Y^\varepsilon(t;x)\in D_2^\iota(\gamma_\varepsilon,\chi)\text{ for }t\in[0,t_1]\text{ and }Y^\varepsilon(t_1;x)+G(Y^\varepsilon(t_1;x),\varepsilon\Delta_{t_1}L)\notin D_2^\iota(\gamma_\varepsilon,\chi)\},\\
C_x &:= \{Y^\varepsilon(t;x)\notin D_2^\iota(\gamma_\varepsilon,\chi)\text{ for some }t\in[0,t_1)\}.
\end{aligned}
\]

The shift operator Θs : D([0, ∞), H) → D([0, ∞), H), s ≥ 0, is defined by ψ ◦ Θs(t) := ψ(t + s) for s, t ≥ 0. It is applied to the event A_x by

\[
A_x\circ\Theta_s = \{Y^\varepsilon(t+s;Y^\varepsilon(s;x))\in D_2^\iota(\gamma_\varepsilon,\chi)\text{ for all }t\in(s,t_j+s)
\text{ and } Y^\varepsilon(t_j+s;Y^\varepsilon(s;x))+G\big(Y^\varepsilon(t_j+s;Y^\varepsilon(s;x)),\varepsilon\Delta_{t_j+s}L\big)\in D_2^\iota(\gamma_\varepsilon,\chi)\}.
\]

In particular, since t_j ◦ Θ_{T_{j−1}} = T_j, we obtain

\[
A_x^j := A_x\circ\Theta_{T_{j-1}} = \{Y^\varepsilon(t;Y^\varepsilon(T_{j-1};x))\circ\Theta_{T_{j-1}}\in D_2^\iota(\gamma_\varepsilon,\chi)\text{ for all }t\in(T_{j-1},T_j)
\text{ and } Y^\varepsilon(T_j;x)+G(Y^\varepsilon(T_j;x),\varepsilon\Delta_{T_j}L)\in D_2^\iota(\gamma_\varepsilon,\chi)\}, \tag{4.1}
\]

and the analogous expressions \(B_x^j\), \(C_x^j\) and \(E_x^j\) (with E_x defined in (3.10)) by the shift (4.1), respectively. By construction we have the representations

\[
\{\tau_x = T_k\} = \bigcap_{j=1}^{k-1}A_x^j\cap B_x^k
\qquad\text{and}\qquad
\{\tau_x\in(T_{k-1},T_k)\} = \bigcap_{j=1}^{k-1}A_x^j\cap C_x^k. \tag{4.2}
\]

4.3 Proof of Theorem 2.5 and Theorem 2.6

In this section we prove two results which are not congruent to Theorem 2.5 and Theorem 2.6. In Proposition 4.3 we show the statement of Theorem 2.5 and additionally the convergence in probability of the first exit loci of Theorem 2.6. We apply this strategy for the sake of efficiency in order to avoid the repetition of arguments. Proposition 4.4 sharpens this result to the convergence in Lp for p(0, α) showing the uniform integrability. Proposition 4.3. For any θ ∈ (0, 1) there are ε0 , γ ∈ (0, 1] and χ > χ0 such that ε ∈ (0, ε0 ] implies for any U ∈ B(H) with mι (∂U ) = 0 that h i ι ι ι ι c 1 sup E eθ|λε τx (ε,χ)−¯s (ε)| 1 + |1{X ε (τ ; x) ∈ U } − 1{Wk∗ (ε) ∈ J U∩(D ) (φι )}| 6 1 + C. ε x∈D3ι (εγ ,χ)

The statement of Proposition 4.3 directly implies the statement of Theorem 2.5.

Proof. The proof is organized in four consecutive steps. First, the strong Markov property reduces the main expression to four geometric sums, whose limit consists of expressions involving certain events, which are estimated in step 2. As a next step we estimate the resulting expressions using all the previous results available. 25

Step 0: Conventions and assumptions. Fix C ′ ∈ (0, 12 (1 − θ)), χ > χ0 large enough and χ δ ∈ (0, 1] sufficiently small such that P ⊆ U and satisfying  C′ mι D \ D4ι (δ, χ) < . 5

(4.3)

We keep the scales (3.12), which satisfy in addition to (3.6) the following limit as ε → 0 3 1 βε2 | ln(γε )| = ε(2 4 −1)α | ln(ε)| → 0. ι λε 4(2q + 1)

(4.4)

In addition assume ε0 ∈ (0, 1] sufficiently small such that γε 6 δ. Due to the ubiquitous dependence of all quantities of ε, χ and ι we drop these dependencies. For convenience we write Di = Diι (γε , χ). Step 1: Reduction to expressions based on events on (0, T1 ].

We identify

h i c 1 sup E eθ|λε τy −¯s| 1 + |1{X ε (τ ; x) ∈ U } − 1{Wk∗ ∈ J U∩D (φ)}| 6 S11 + S12 + S2 + S3 , where ε x∈D3 S11 :=

∞ X

k=1

h sup E eθλε |τy −Tk | 1{τy = Tk } ∩ {s = Tk }

y∈D2

1 + |1{X ε (τ ; y) ∈ U } − 1{Wk∗ ∈

S12 := 2

∞ X

k=1

S2 := 2 S3 := 2

i 1 U∩Dc J (φ)}| , ε

h i sup E eθλε |τy −Tk | 1{τy ∈ (Tk−1 , Tk )} ∩ {s = Tk } ,

y∈D3

∞ k−1 X X

h i sup E eθλε |τy −Tk | 1{τy ∈ (Tℓ−1 , Tℓ ]} ∩ {s = Tk } ,

y∈D3 k=1 ℓ=1 ∞ X ∞ X k=1 ℓ=k+1

h i sup E eθλε |τy −Tk | 1{τy ∈ (Tℓ−1 , Tℓ ]} ∩ {s = Tk } .

y∈D3

In the sequel we estimate the preceding expressions using the representations in (4.2) and the strong Markov property with respect to the F -stopping times Tk . S11 : This expression is served first since it is the only one of order O(1)ε→0 all other expressions are o(1)ε→0 . We denote the symmetric difference E1 △ E2 := (E1 \ E2 ) ∪ (E2 \ E1 ) for events E1 , E2 .

26

In the sequel we repeatedly use strong Markov estimates of the following type k−1 h \  i c 1 A⋄j ∩ Bk⋄ 1 + |1{X ε (Tk ; y) ∈ U } − 1{Wk ∈ J U∩D (φ)}| E 1 {τy = Tk } ∩ ε j=1

 i   h  k−1 \ c 1 Ajy ∩ A⋄j 1 Byk ∩ Bk⋄ 1 + 1{X ε (Tk ; y) ∈ U } △ {Wk ∈ J U∩D (φ)} 6E 1 ε j=1

 h    ii h  k−1 \ c 1 Ajy ∩ A⋄j E 1 Byk ∩ Bk⋄ 1 + 1{X ε (Tk ; y) ∈ U } △ {Wk ∈ J U∩D (φ)} | FTk−1 =E 1 ε j=1

 h h  k−1 \  ii c 1 Ajy ∩ A⋄j EX ε (Tk−1 ;y) 1 B·k ∩ Bk⋄ 1 + 1{X ε (Tk ; y) ∈ U } △ {Wk ∈ J U∩D (φ)} =E 1 ε j=1 h i h  k−1 \ i  c 1 sup E 1 By ∩ B ⋄ 1 + 1{X ε (Tk ; y) ∈ U } △ {Wk ∈ J U∩D (φ)} . Ajy ∩ A⋄j 6E 1 ε y∈D2 j=1

The (k − 1)-fold iteration of this argument yields h  i c 1 sup P(Ay ∩ A⋄ )k−1 sup E 1 By ∩ B ⋄ 1 + 1{X ε (Tk ; y) ∈ U } △ {Wk ∈ J U∩D (φ)} ε y∈D2 y∈D2 k=1 i h   c supy∈D2 E 1 By ∩ B ⋄ 1 + 1{X ε (Tk ; y) ∈ U } △ {Wk ∈ 1ε J U∩D (φ)} . = 1 − supy∈D2 P(Ay ∩ A⋄ ) (4.5)

S11 6

∞ X

S12 :

The remaining diagonal term is estimated as follows S12 6 2

∞ X

k=1

For k = 1 we obtain the term

i h  k−1 \  θλε tk Ajy ∩ A⋄j ∩ Cyk ∩ Bk⋄ . 1 sup E e

y∈D3

j=1

h i sup E eθλε T1 1 Cy ∩ B ⋄ .

y∈D3

For k > 2 we take into account that the event Ay represents the non-exit from D2 . However, in order to dominate Cy we need initial values in D3 (see estimate 4.12), for which we take the last large jump before. The error made by this jump to stay within D3 instead of D2 turns out to be negligible by Hypothesis (S.4). First obtain by the analogous strong Markov arguments arguments as for the term S11 h  k−1 \ i  sup E eθλε tk 1 Ajy ∩ A⋄j ∩ Cyk ∩ Bk⋄

y∈D3

j=1

i h 6 sup P(Ay ∩ A⋄ )k−2 sup E 1(A1y ∩ A⋄1 )eθλε t2 1(Cy2 ∩ B2⋄ ) , y∈D2

y∈D2

27

such that S12

∞ i h h X i ⋄ θλε T1 +2 sup P(Ay ∩ A⋄ )k−2 sup E 1(A1y ∩ A⋄1 )eθλε t2 1(Cy2 ∩ B2⋄ ) 1 Cy ∩ B 6 2 sup E e D3

h i 6 2 sup E eθλε T1 1 Cy + y∈D3

S2 :

D2

D2

k=2

i h 2 supy∈D2 E 1(A1y ∩ A⋄1 )eθλε t2 1(Cy2 ∩ B2⋄ ) 1 − supy∈D2 P(Ay ∩ A⋄ )

. (4.6)

The estimate of {τy ∈ (Tℓ−1 , Tℓ ]} and the representation of {s = Tk } yield

S2 6 2

∞ k−1 X X k=1 ℓ=1

  ℓ−1 h  k−1 \ \ i  . Ajy ∩ A⋄j ∩ (Byℓ ∪ Cyℓ ) ∩ A⋄ℓ sup E eθλε (tℓ +···+tk ) 1 A⋄j ∩ Bk⋄ 1

y∈D3

j=1

j=ℓ+1

For each of the summands k ∈ N and ℓ = 1 we combine the mutual independence of the families (Tk )k∈N and (Wk )k∈N with the analogous strong Markov estimate   i h  k−1 \  θλε (t1 +···+tk ) A⋄j ∩ Bk⋄ 1 By ∪ Cy ∩ A⋄ sup E e 1

y∈D3

j=2

ik  h 6 (1 − P(A⋄ )) sup P (By ∪ Cy ) ∩ A⋄ E eθλε T1 P(A⋄ )k−1 . y∈D2

For k ∈ N and k − 1 > ℓ > 2 we obtain

h  k−1   ℓ−1 i \ \   sup E eθλε (tℓ +···+tk ) 1 A⋄j ∩ Bk⋄ 1 Ajy ∩ A⋄j ∩ Byℓ ∪ Cyℓ ∩ A⋄ℓ

y∈D3

j=1

j=ℓ+1



6 (1 − P(A )) sup P y∈D2

(A1y



A⋄1 )

∩ (By2 ∪ Cy2

  θλε T1 k−1 P(A⋄ )k−2 E e

−(ℓ−2)    . sup P(Ay ∩ A⋄ )ℓ−2 E eθλε T1 P(A⋄ )

y∈D2

Monotonicity yields

  sup P(Ay ∩ A⋄ ) 6 E eθλε T1 P(A⋄ ),

y∈D2

such that k−1 X ℓ=2

−(ℓ−2)    6 k − 2, sup P(Ay ∩ A⋄ )ℓ−2 E eθλε T1 P(A⋄ )

y∈D2

28

and hence ∞ X   k−1  S2 6 2P(B ⋄ ) sup P (By ∪ Cy ) ∩ A⋄ E eθλε T1 P(A⋄ )k−1 E eθλε T1 y∈D3

k=1

+ 2P(B ⋄ ) sup P (A1y ∩ A⋄1 ) ∩ (By2 ∪ Cy2 ) ∩ A⋄2 y∈D2

∞   θλε T1  X k−2  E e P(A⋄ )k−2 (k − 1)E eθλε T1 k=1

  supy∈D3 P (By ∪ Cy ) ∩ A⋄ supy∈D2 P (A1y ∩ A⋄1 ) ∩ (By2 ∪ Cy2 ) ∩ A⋄2 ⋄ ⋄    . = 2P(B ) + 2P(B )  2  1 − E eθλε T1 P(A⋄ ) 1 − E eθλε T1 P(A⋄ ) (4.7) S3 :

Due to the doubly infinite summation this turns out the most cumbersome case:

S3 6 2

∞ X ∞ X

k=1 ℓ=k+1

  ℓ−1 h  k−1 \ \ i   . Ajy ∩ Byℓ ∪ Cyℓ sup E eθλε (tk +···+tℓ ) 1 Ajy ∩ A⋄j 1 Aky ∩ Bk⋄ 1

y∈D3

j=1

j=k+1

The strong Markov estimates as in S11 yield for ℓ = k + 1 the estimate h  k−1 \ i   θλε (tk +tk+1 ) sup E e 1 Ajy ∩ A⋄j 1 Aky ∩ Bk⋄ 1 Byk ∪ Cyk

y∈D3

j=1

6 sup P Ay ∩ A⋄ y∈D2

For ℓ > k + 2 we obtain the summands

k−1

 h  i sup E eθλε (t1 +t2 ) A1y ∩ B1⋄ ∩ By2 ∪ Cy2 .

y∈D2

  ℓ−1 h  k−1 \ \ i   Ajy ∩ Byℓ ∪ Cyℓ Ajy ∩ A⋄j 1 Aky ∩ Bk⋄ 1 sup E eθλε (tk +···+tℓ ) 1

y∈D3

j=1

6 sup P Ay ∩ A⋄ y∈D2

j=k+1

k−1

iℓ−1−k h  sup P Ay ∩ B ⋄ sup E eθλε T1 1(Ay ) y∈D2

y∈D2

h i sup E eθλε (t2 +t1 ) 1(A1y )1(By2 ∪ Cy2 ) .

y∈D2

29

  Assuming supy∈D2 E eθλε T1 1(Ay ) < 1 for ε ∈ (0, ε0 ] for ε0 ∈ (0, 1] small enough which we verify in Step 3 we obtain S3 /2  ∞ h  X i  ⋄ k−1 sup E eθλε (t1 +t2 ) 1 A1y ∩ B1⋄ ∩ By2 ∪ Cy2 6 sup P Ay ∩ A k=1

D2

D2

∞ iℓ−1−k  h  i X   h sup E eθλε T1 1(Ay ) + sup P Ay ∩ B ⋄ E eθλε (t1 +t2 ) 1 A1y ∩ B1⋄ ∩ By2 ∪ Cy2 D2



ℓ=k+2

D2

h  i + sup E eθλε (t1 +t2 ) 1 A1y ∩ B1⋄ ∩ By2 ∪ Cy2

1 1 − supD2 P(Ay ∩ A⋄ ) D2      supD2 E eθλε T1 1(Ay ) supD2 P(Ay ∩ B ⋄ ) supy∈D2 E eθλε (t1 +t2 ) 1 A1y ∩ B1⋄ ∩ By2 ∪ Cy2   1 − supD2 E eθλε T1 1(Ay )       supD2 E eθλε (t1 +t2 ) 1 A1y ∩ B1⋄ ∩ By2 ∪ Cy2 supD2 E eθλε T1 1(Ay ) supD2 P(Ay ∩ B ⋄ )   1 + . 6 1 − supD2 P(Ay ∩ A⋄ ) 1 − supD2 E eθλε T1 1(Ay ) (4.8)

=

Step 2: Precise estimates of the events on (0, T1 ]. Claim 1:

While for y ∈ D2 we have

1(Ay ) 6 1{εW1 ∈ J D (φ)} + 1{T1 < κ0 + κ1 | ln(γε )|} + 1(Eyc )

(4.9)

− D\D4 (φ)} + 1{T1 < κ0 + κ1 | ln(γε )|} + 1(Eyc ) 1(A− y ) 6 1(Ay ) 6 1(Ay ) + 1{εW1 ∈ J

1(By ) 6 1{εW1 ∈ J

Dc

(φ)} + 1{εW1 ∈ J

D\D4

(φ)} + 1{T1 < κ0 + κ1 | ln(γε )|} +

1(Eyc ).

(4.10) (4.11)

the stronger condition y ∈ D3 implies 1(Cy ) 6 1{T1 < κ0 + κ1 | ln(γε )|} + 1(Eyc ).

(4.12)

The first estimates of 1(Ay ) and 1(By ) are seen easily with the help of the inclusions J D2 (Bγε (φ)) ⊆ J D (φ) c

c

J D2 (Bγε (φ)) ⊆ J D3 (φ)

and

J D2 \D3 (Bγε (φ)) ⊆ J D\D4 (φ),

D3c ⊆ Dc ∪ (D \ D3 (γε )) ∪ (U χ−3γε )c

since for y ∈ D2 we have 1(Ay ) 6 1(Ay )1(Ey )1{T1 > κ0 + κ1 | ln(γε )|} + 1{T1 < κ0 + κ1 | ln(γε )|} + 1(Eyc )

6 1{Y ε (T1 ; y) ∈ Bγε (φ)}1{εW1 ∈ J D2 (Y ε (T1 ; y))} + 1{T1 < κ0 + κ1 | ln(γε )|} + 1(Eyc )   \ {εW1 ∈ J D2 (y)} + 1{T1 < κ0 + κ1 | ln(γε )|} + 1(Eyc ) 61 y∈Bγε (φ)

= 1{εW1 ∈ J D2 (Bγε (φ))} + 1{T1 < κ0 + κ1 | ln(γε )|} + 1(Eyc ). 30

(4.13)

In addition the reduced event A− y satisfies 1(A− y ) 6 1(Ay ) ε ε ε 6 1(A− y ) + 1{Y (t; y) ∈ D2 for t ∈ [0, T1 ] and Y (T1 ; y) + G(Y (T1 ; y), εW1 ) ∈ D2 \ D3 }

ε ε ε 6 1(A− y ) + 1{Y (t; y) ∈ D2 for t ∈ [0, T1 ] and Y (T1 ; y) + G(Y (T1 ; y), εW1 ) ∈ D2 \ D3 }·

· 1(Ey )1{T1 > κ0 + κ1 | ln(γε )|} + 1{T1 > κ0 + κ1 | ln(γε )|} + 1(Eyc )

D2 \D3 (Bγε (φ))} + 1{T1 < κ0 + κ1 | ln(γε )|} + 1(Eyc ) 6 1(A− y ) + 1{εW1 ∈ J

D\D4 (φ)} + 1{T1 < κ0 + κ1 | ln(γε )|} + 1(Eyc ). 6 1(A− y ) + 1{εW1 ∈ J

Analogously, we obtain 1(By ) 6 1(By )1(Ey )1{T1 > κ0 + κ1 | ln(γε )|} + 1{T1 < κ0 + κ1 | ln(γε )|} + 1(Eyc ) c

6 1{Y ε (T1 ; y) ∈ Bγε (φ)}1{εW1 ∈ J D2 (Y ε (T1 ; y))} + 1{T1 < κ0 + κ1 | ln(γε )|} + 1(Eyc )   \ c {εW1 ∈ J D2 (y)} + 1{T1 < κ0 + κ1 | ln(γε )|} + 1(Eyc ) 61 y∈Bγε (φ)

c

= 1{εW1 ∈ J D2 (Bγε (φ))} + 1{T1 < κ0 + κ1 | ln(γε )|} + 1(Eyc ) c

6 1{εW1 ∈ J D3 (φ)} + 1{T1 < κ0 + κ1 | ln(γε )|} + 1(Eyc ).

(4.14)

For y ∈ D3 we start by 1(Cy ) 6 1(Cy )1(Ey )1{T1 > κ0 + κ1 | ln(γε )|} + 1{T1 < κ0 + κ1 | ln(γε )|} + 1(Eyc ). By definition of the reduced domain of attraction for y ∈ D3 (γε , χ) we have that the event Ey implies that Y ε (t; y) ∈ B 12 γε (u(t; y)) ⊂ D2 (γε , χ) for all t ∈ [0, T1 ]. If additionally T1 > κ0 + κ1 | ln(γε )|, we have that Y ε (T1 ; y) ∈ Bγε (φ) ⊆ D2 (γε , χ), that is for all y ∈ D3 we have (4.12). Claim 2:

For y ∈ D2 and U ∈ B(H) we have

1(Ay ∩ B ⋄ ) 6 1{T1 < κ0 + κ1 | ln(γε )|} + 1(Eyc )

1(By ∩ A⋄ ) 6 1{εW1 ∈ J D\D3 (δ,χ) (φ)} + 1{T1 < κ0 + κ1 | ln(γε )|} + 1(Eyc )  1(By ∩ B ⋄ )1 {X(T1 ; y) ∈ U } △ {εW1 ∈ J U (φ)} c

6 1{εW1 ∈ J B(ℓ+1)γε (∂U)∩D (φ)} + 1{T1 < κ0 + κ1 | ln(γε )|} + 1(Eyc ).

(4.15) (4.16) (4.17)

The estimate (4.15) is a direct consequence of (4.9) in Claim 1. Using (4.11) we obtain
\[
\mathbf{1}(B_y\cap A^\diamond) \le \mathbf{1}\{\varepsilon W_1 \in J^{D\setminus D_4(\delta,\chi)}(\phi)\} + \mathbf{1}\{T_1 < \kappa_0+\kappa_1|\ln(\gamma_\varepsilon)|\} + \mathbf{1}(E_y^c).
\]
For (4.17) the inclusion $J^U(B_{\gamma_\varepsilon}(\phi)) \subseteq J^U(\phi)$ and the Lipschitz continuity of $y\mapsto y+G(y,z)$ with Lipschitz constant $1+\ell$ for any $z$ yield
\[
\mathbf{1}(B_y\cap B^\diamond)\,\mathbf{1}\bigl(\{X^\varepsilon(T_1;y)\in U\}\,\triangle\,\{\varepsilon W_1\in J^U(\phi)\}\bigr)
\le \mathbf{1}\bigl\{\varepsilon W_1 \in J^{D^c}(\phi) \cap \bigl(J^U(B_{\gamma_\varepsilon}(\phi))\,\triangle\,J^U(\phi)\bigr)\bigr\} + \mathbf{1}\{T_1 < \kappa_0+\kappa_1|\ln(\gamma_\varepsilon)|\} + \mathbf{1}(E_y^c)
\]
\[
\le \mathbf{1}\bigl\{\varepsilon W_1 \in J^{D^c}(\phi) \cap \bigl(J^U(\phi)\setminus J^U(B_{\gamma_\varepsilon}(\phi))\bigr)\bigr\} + \mathbf{1}\{T_1 < \kappa_0+\kappa_1|\ln(\gamma_\varepsilon)|\} + \mathbf{1}(E_y^c)
\]
\[
\le \mathbf{1}\{\varepsilon W_1 \in J^{B_{(\ell+1)\gamma_\varepsilon}(\partial U)\cap D^c}(\phi)\} + \mathbf{1}\{T_1 < \kappa_0+\kappa_1|\ln(\gamma_\varepsilon)|\} + \mathbf{1}(E_y^c).
\]

Step 3: Estimates of the resulting expressions: Step 2 provides the estimates needed to dominate the term $S_{11}$ in (4.5), $S_{12}$ in (4.6), $S_2$ in (4.7) and $S_3$ in (4.8), respectively.

Event $A_y$: Due to the limit (4.4) and (4.9) there is a constant $\varepsilon_0\in(0,1]$ such that $\varepsilon\in(0,\varepsilon_0]$ implies
\[
\sup_{y\in D_2} P(A_y\cap A^\diamond) \le \sup_{y\in D_2} E\bigl[e^{\theta\lambda_\varepsilon T_1}\mathbf{1}(A_y)\bigr]
\le 1 - \frac{(1-\theta)\lambda_\varepsilon}{\beta_\varepsilon-\theta\lambda_\varepsilon} + \frac{\beta_\varepsilon}{\beta_\varepsilon-\theta\lambda_\varepsilon}\,\beta_\varepsilon\bigl(\kappa_0+\kappa_1|\ln(\gamma_\varepsilon)|\bigr) + 16\,C_2\,e^{-(5\gamma_\varepsilon)^{-1}}\,\frac{\beta_\varepsilon}{\beta_\varepsilon-\theta\lambda_\varepsilon}
\]
\[
\le 1 - \frac{(1-\theta)\frac{\lambda_\varepsilon}{\beta_\varepsilon} - C'\frac{\lambda_\varepsilon}{\beta_\varepsilon}}{1-\theta\frac{\lambda_\varepsilon}{\beta_\varepsilon}}
\le 1 - \frac{1-\theta}{2}\,\frac{\lambda_\varepsilon}{\beta_\varepsilon}
\le 1 - (1-C')\,\frac{1-\theta}{2}\,\frac{\lambda_\varepsilon}{\beta_\varepsilon} < 1. \tag{4.18}
\]

Event $B_y$: Using that $\nu$ is regularly varying and the initial choice of $\chi>\chi_0$ in (4.3) we obtain
\[
\lim_{\varepsilon\to 0}\Bigl(\frac{\lambda_\varepsilon}{\beta_\varepsilon}\Bigr)^{-1}\frac{\nu\bigl(\tfrac1\varepsilon J^{D\setminus D_4(\gamma_\varepsilon,\chi)}(\phi)\bigr)}{\beta_\varepsilon}
\le \lim_{\varepsilon\to 0}\frac{\nu\bigl(\tfrac1\varepsilon J^{D\setminus D_4(\gamma_{\varepsilon_0},\chi)}(\phi)\bigr)}{\nu\bigl(\tfrac1\varepsilon J^{D}(\phi)\bigr)}
= \frac{\mu\bigl(J^{D\setminus D_4(\gamma_{\varepsilon_0},\chi)}(\phi)\bigr)}{\mu\bigl(J^{D}(\phi)\bigr)} \le C'. \tag{4.19}
\]

We use estimate (4.11) and the limit (4.4), which implies $\lim_{\varepsilon\to 0+}\frac{\beta_\varepsilon^2|\ln(\gamma_\varepsilon)|}{\lambda_\varepsilon}=0$; this yields a constant $\varepsilon_0\in(0,1]$ such that $\varepsilon\in(0,\varepsilon_0]$ implies
\[
\sup_{y\in D_2} P(B_y\cap B^\diamond) \le \sup_{y\in D_2} E\bigl[e^{\theta\lambda_\varepsilon T_1}\mathbf{1}(B_y)\bigr]
\le E\bigl[e^{\theta\lambda_\varepsilon T_1}\bigr]\,P\bigl(\varepsilon W_1\in J^{D^c}(\phi)\bigr) + E\bigl[e^{\theta\lambda_\varepsilon T_1}\mathbf{1}\{T_1<\kappa_0+\kappa_1|\ln(\gamma_\varepsilon)|\}\bigr] + \sup_{y\in D_2}E\bigl[e^{\theta\lambda_\varepsilon T_1}\mathbf{1}(E_y^c)\bigr]
\]
\[
\le \frac{\beta_\varepsilon}{\beta_\varepsilon-\theta\lambda_\varepsilon}\,\frac{\lambda_\varepsilon}{\beta_\varepsilon} + \frac{\beta_\varepsilon}{\beta_\varepsilon-\theta\lambda_\varepsilon}\Bigl(1-e^{-(\beta_\varepsilon-\theta\lambda_\varepsilon)(\kappa_0+\kappa_1|\ln(\gamma_\varepsilon)|)}\Bigr) + \frac{\beta_\varepsilon}{\beta_\varepsilon-\theta\lambda_\varepsilon}\,16e^{-(5\gamma_\varepsilon)^{-1}}
\le (1+2C')\,\frac{\lambda_\varepsilon}{\beta_\varepsilon}. \tag{4.20}
\]

Event $C_y$: Estimate (4.12) and the limit (4.4) yield a constant $\varepsilon_0\in(0,1]$ sufficiently small such that $\varepsilon\in(0,\varepsilon_0]$ implies
\[
\sup_{y\in D_3} P(C_y) \le \sup_{y\in D_3} E\bigl[e^{\theta\lambda_\varepsilon T_1}\mathbf{1}(C_y)\bigr]
\le E\bigl[e^{\theta\lambda_\varepsilon T_1}\mathbf{1}\{T_1<\kappa_0+\kappa_1|\ln(\gamma_\varepsilon)|\}\bigr] + \sup_{y\in D_2}E\bigl[e^{\theta\lambda_\varepsilon T_1}\mathbf{1}(E_y^c)\bigr]
\]
\[
\le \beta_\varepsilon\bigl(\kappa_0+\kappa_1|\ln(\gamma_\varepsilon)|\bigr) + 16\exp\bigl(-(5\gamma_\varepsilon)^{-1}\bigr) \le C'\,\frac{\lambda_\varepsilon}{\beta_\varepsilon}. \tag{4.21}
\]

Events $A_y\cap B^\diamond$ and $B_y\cap A^\diamond$: By (4.15) we obtain directly $\varepsilon_0\in(0,1]$ such that for $\varepsilon\in(0,\varepsilon_0]$ the analogous calculations give
\[
\sup_{y\in D_2} P(A_y\cap B^\diamond) \le \sup_{y\in D_2} E\bigl[e^{\theta\lambda_\varepsilon T_1}\mathbf{1}(A_y\cap B^\diamond)\bigr] \le C'\,\frac{\lambda_\varepsilon}{\beta_\varepsilon}, \tag{4.22}
\]
and with the help of (4.16), the limit (4.4), the regular variation and (4.3), for $\varepsilon\in(0,\varepsilon_0]$,
\[
\sup_{y\in D_2} P(B_y\cap A^\diamond) \le \sup_{y\in D_2} E\bigl[e^{\theta\lambda_\varepsilon T_1}\mathbf{1}(B_y\cap A^\diamond)\bigr] \le C'\,\frac{\lambda_\varepsilon}{\beta_\varepsilon}. \tag{4.23}
\]
Exit events after a non-exit first large jump: First note the trivial inclusions
\[
\sup_{y\in D_2} P\bigl(A_y\cap (B_y^2\cup C_y^2)\cap A_2^\diamond\bigr)
\le \sup_{y\in D_2} E\bigl[e^{\theta\lambda_\varepsilon t_2}\,\mathbf{1}\bigl(A_y\cap(B_y^2\cup C_y^2)\cap A_2^\diamond\bigr)\bigr]
\le \sup_{y\in D_2} E\bigl[e^{\theta\lambda_\varepsilon (t_1+t_2)}\,\mathbf{1}\bigl(A_y\cap(B_y^2\cup C_y^2)\cap A_2^\diamond\bigr)\bigr]
\]
\[
\le \sup_{y\in D_2} E\bigl[e^{\theta\lambda_\varepsilon(t_1+t_2)}\,\mathbf{1}\bigl(A_y\cap B_y^2\cap A_2^\diamond\bigr)\bigr] + \sup_{y\in D_2} E\bigl[e^{\theta\lambda_\varepsilon(t_1+t_2)}\,\mathbf{1}\bigl(A_y\cap C_y^2\bigr)\bigr].
\]
The analogous estimate obviously holds true without the presence of $A_2^\diamond$. With the help of the strong Markov property and (4.23) we obtain $\varepsilon_0\in(0,1]$ such that for $\varepsilon\in(0,\varepsilon_0]$
\[
\sup_{y\in D_2} E\bigl[e^{\theta\lambda_\varepsilon(t_1+t_2)}\,\mathbf{1}\bigl(A_y\cap B_y^2\cap A_2^\diamond\bigr)\bigr]
\le \sup_{y\in D_2} E\bigl[e^{\theta\lambda_\varepsilon T_1}\mathbf{1}(A_y)\bigr]\;\sup_{y\in D_2} E\bigl[e^{\theta\lambda_\varepsilon T_1}\mathbf{1}(B_y\cap A^\diamond)\bigr] \le C'\,\frac{\lambda_\varepsilon}{\beta_\varepsilon}. \tag{4.24}
\]
In case of $A_2^\diamond$ being replaced by $B_1^\diamond$ we obtain for $\varepsilon\in(0,\varepsilon_0]$
\[
\sup_{y\in D_2} E\bigl[e^{\theta\lambda_\varepsilon(t_1+t_2)}\,\mathbf{1}\bigl(A_y\cap B^\diamond\cap B_y^2\bigr)\bigr]
\le \sup_{y\in D_2} E\bigl[e^{\theta\lambda_\varepsilon T_1}\mathbf{1}(A_y\cap B^\diamond)\bigr]\;\sup_{y\in D_2} E\bigl[e^{\theta\lambda_\varepsilon T_1}\mathbf{1}(B_y)\bigr]
\le C'(1+C')\Bigl(\frac{\lambda_\varepsilon}{\beta_\varepsilon}\Bigr)^2. \tag{4.25}
\]

Keeping in mind the convention $A_y = A_y^1$ and $A_y^- = A_y^{1-}$, the estimates (4.10), (4.18), (4.19) and (4.21) yield a constant $\varepsilon_0\in(0,1]$ sufficiently small such that for $\varepsilon\in(0,\varepsilon_0]$
\[
\sup_{y\in D_2} E\bigl[e^{\theta\lambda_\varepsilon(t_1+t_2)}\,\mathbf{1}\bigl(A_y\cap C_y^2\bigr)\bigr]
\le \sup_{y\in D_2} E\Bigl[e^{\theta\lambda_\varepsilon(t_1+t_2)}\Bigl(\mathbf{1}(A_y^-)\mathbf{1}(C_y^2) + \mathbf{1}\{\varepsilon W_1\in J^{D\setminus D_4}(\phi)\} + \mathbf{1}\{t_1<\kappa_0+\kappa_1|\ln(\gamma_\varepsilon)|\} + \mathbf{1}(E_y^c)\Bigr)\Bigr]
\]
\[
\le \sup_{y\in D_2} E\bigl[e^{\theta\lambda_\varepsilon T_1}\mathbf{1}(A_y)\bigr]\,\sup_{y\in D_3} E\bigl[e^{\theta\lambda_\varepsilon T_1}\mathbf{1}(C_y)\bigr]
+ E\bigl[e^{\theta\lambda_\varepsilon T_1}\bigr]\,E\bigl[e^{\theta\lambda_\varepsilon T_1}\bigr]\,P\bigl(\varepsilon W_1\in J^{D\setminus D_4}(\phi)\bigr)
+ E\bigl[e^{\theta\lambda_\varepsilon T_1}\mathbf{1}\{T_1<\kappa_0+\kappa_1|\ln(\gamma_\varepsilon)|\}\bigr]
+ \sup_{y\in D_3} E\bigl[e^{\theta\lambda_\varepsilon T_1}\mathbf{1}(E_y^c)\bigr]
\]
\[
\le C'\,\frac{\lambda_\varepsilon}{\beta_\varepsilon} + \frac{\beta_\varepsilon}{\beta_\varepsilon-\theta\lambda_\varepsilon}\,\frac{\beta_\varepsilon}{\beta_\varepsilon-\theta\lambda_\varepsilon}\,C'\,\frac{\lambda_\varepsilon}{\beta_\varepsilon} + \frac{\beta_\varepsilon}{\beta_\varepsilon-\theta\lambda_\varepsilon}\,\beta_\varepsilon\bigl(\kappa_0+\kappa_1|\ln(\gamma_\varepsilon)|\bigr) + \frac{\beta_\varepsilon}{\beta_\varepsilon-\theta\lambda_\varepsilon}\,C'\,\frac{\lambda_\varepsilon}{\beta_\varepsilon}
\]
\[
\le \Bigl(1+\theta\frac{\lambda_\varepsilon}{\beta_\varepsilon}\Bigr)C'\,\frac{\lambda_\varepsilon}{\beta_\varepsilon} + C'\,\frac{\lambda_\varepsilon}{\beta_\varepsilon} + C'\,\frac{\lambda_\varepsilon}{\beta_\varepsilon} + C'\,\frac{\lambda_\varepsilon}{\beta_\varepsilon}
\le 6C'\,\frac{\lambda_\varepsilon}{\beta_\varepsilon}. \tag{4.26}
\]

Step 4: Concluding estimates of the sums in (4.5):

Estimate of $S_{11}$: Since $m_\iota(\partial U)=\mu(J^{\partial U}(\phi_\iota))=0$ by assumption and $\nu$ is regularly varying by Hypothesis S.1, we have
\[
\lim_{\varepsilon\to 0}\Bigl(\frac{\lambda_\varepsilon}{\beta_\varepsilon}\Bigr)^{-1} P\bigl(\varepsilon W_1\in J^{B_{(\ell+1)\gamma_\varepsilon}(\partial U)\cap D^c}(\phi)\bigr)
= \lim_{\varepsilon\to 0}\frac{\nu\bigl(\tfrac1\varepsilon J^{B_{(\ell+1)\gamma_\varepsilon}(\partial U)\cap D^c}(\phi)\bigr)}{\nu(\rho_\varepsilon B_1^c(0))}\;\frac{\nu(\rho_\varepsilon B_1^c(0))}{\nu\bigl(\tfrac1\varepsilon J^{D^c}(\phi)\bigr)}
\le \frac{\mu\bigl(J^{B_{(\ell+1)\delta}(\partial U)\cap D^c}(\phi)\bigr)}{\mu\bigl(J^{D^c}(\phi)\bigr)} \le \frac{C'}{5}. \tag{4.27}
\]
Hence (4.17), (4.20) and (4.27) yield
\[
\sup_{y\in D_2} E\Bigl[\mathbf{1}\bigl(B_y\cap B^\diamond\bigr)\Bigl(1+\mathbf{1}\bigl(\{X^\varepsilon(T_1;y)\in U\}\,\triangle\,\{\varepsilon W_1\in J^{U\cap D^c}(\phi)\}\bigr)\Bigr)\Bigr]
\]
\[
\le P\bigl(\varepsilon W_1\in J^{D^c}(\phi)\bigr) + P\bigl(\varepsilon W_1\in J^{B_{(\ell+1)\gamma_\varepsilon}(\partial U)\cap D^c}(\phi)\bigr) + 2P\bigl(T_1<\kappa_0+\kappa_1|\ln(\gamma_\varepsilon)|\bigr) + 2\sup_{y\in D_2}P(E_y^c)
\le (1+C')\,\frac{\lambda_\varepsilon}{\beta_\varepsilon},
\]
such that for $\varepsilon\in(0,\varepsilon_0]$ the sum $S_{11}$ given in (4.5) satisfies the estimate
\[
S_{11} \le \frac{1+C'}{1-C'} \le 1+4C'. \tag{4.28}
\]
Estimate of $S_{12}$: By (4.18), (4.21) and (4.26) the sum $S_{12}$ given in (4.6) satisfies for $\varepsilon\in(0,\varepsilon_0]$
\[
S_{12} \le \frac{6C'}{1-C'} \le 12C'. \tag{4.29}
\]

Estimate of $S_2$: Using the estimates (4.21), (4.23), (4.24) and (4.26), the sum $S_2$ given in (4.7) satisfies for $\varepsilon\in(0,\varepsilon_0]$ the estimate
\[
S_2 \le \frac{2(1+3C')C'\bigl(\frac{\lambda_\varepsilon}{\beta_\varepsilon}\bigr)^2}{\bigl(1-\frac{\lambda_\varepsilon}{\beta_\varepsilon-\theta\lambda_\varepsilon}\bigr)\bigl(1-\frac{\lambda_\varepsilon}{\beta_\varepsilon}\bigr)} + \frac{6C'\bigl(\frac{\lambda_\varepsilon}{\beta_\varepsilon}\bigr)^2}{\bigl(1-\frac{\lambda_\varepsilon}{\beta_\varepsilon-\theta\lambda_\varepsilon}\bigr)\bigl(1-\frac{\lambda_\varepsilon}{\beta_\varepsilon}\bigr)^2}
\]
\[
\le 2(1+3C')C'\,\frac{\beta_\varepsilon-\theta\lambda_\varepsilon}{(1-\theta)\lambda_\varepsilon}\Bigl(\frac{\lambda_\varepsilon}{\beta_\varepsilon}\Bigr)^2 + 12C'\Bigl(\frac{\beta_\varepsilon-\theta\lambda_\varepsilon}{(1-\theta)\lambda_\varepsilon}\Bigr)^2\Bigl(\frac{\lambda_\varepsilon}{\beta_\varepsilon}\Bigr)^2
\le C' + 13C' = 14C'. \tag{4.30}
\]
Estimate of $S_3$: Using (4.18), (4.25), (4.26) and (4.22) we obtain $\varepsilon_0\in(0,1]$ such that $\varepsilon\in(0,\varepsilon_0]$ implies
\[
S_3 \le \frac{2C'(1+C')}{1-C'}\,\frac{\lambda_\varepsilon}{\beta_\varepsilon} + \frac{C'}{1-C'}\Bigl(1+\frac{6C'}{1-C'}\Bigr) \le 26C'. \tag{4.31}
\]

We finally collect (4.28)-(4.31) and conclude that there is $\varepsilon_0\in(0,1]$ such that $\varepsilon\in(0,\varepsilon_0]$ implies
\[
\sup_{x\in D_2} E\Bigl[e^{\theta\lambda_\varepsilon|\tau_x-s(\varepsilon)|}\Bigl(1+\bigl|\mathbf{1}\{X^\varepsilon(\tau_x;x)\in U\}-\mathbf{1}\{\varepsilon W_{k^*(\varepsilon)}\in J^{U\cap D^c}(\phi)\}\bigr|\Bigr)\Bigr] \le 1+56C'.
\]
Since $C'\in(0,\frac{1-\theta}{2})$ was chosen arbitrarily, this finishes the proof.

Having established the convergence in probability of the exit loci, it is sufficient to establish the uniform boundedness. We keep all the notation and the scales of the proof of Proposition 4.3.

Proposition 4.4. For any $0<p<\alpha$ and $\chi>\chi_0$ there are $\varepsilon_0,\gamma\in(0,1]$ such that
\[
\sup_{\varepsilon\in(0,\varepsilon_0]}\;\sup_{x\in D_3(\varepsilon^\gamma,\chi)} E\bigl[\|X^\varepsilon(\tau;x)-(\phi+G(\phi,\varepsilon W_{k^*(\varepsilon)}))\|^p\bigr] < \infty. \tag{4.32}
\]

Proof. Fix $p\in(0,\alpha)$. We use the conventions of Step 0 of the proof of Proposition 4.3. Then for $x\in D_3$
\[
E\bigl[\|X^\varepsilon(\tau;x)-(\phi+G(\phi,\varepsilon W_{k^*(\varepsilon)}))\|^p\bigr]
\le 3^p\Bigl(E\bigl[\|X^\varepsilon(\tau;x)\|^p\bigr] + \|\phi\|^p + G_1(\phi)\,E\bigl[\|\varepsilon W_{k^*(\varepsilon)}\|^p\bigr]\Bigr). \tag{4.33}
\]

For the last term on the right-hand side of (4.33) we obtain
\[
E\bigl[\|\varepsilon W_{k^*(\varepsilon)}\|^p\bigr] = \sum_{k=1}^\infty E\bigl[\|\varepsilon W_k\|^p\bigr]\,P(k^*(\varepsilon)=k) = \varepsilon^p\,E\bigl[\|W_1\|^p\bigr],
\]
and hence for $\varepsilon_0\in(0,1]$ sufficiently small the regular variation of $\nu$ implies for $\varepsilon\in(0,\varepsilon_0]$ that
\[
\varepsilon^p E\bigl[\|W_1\|^p\bigr] = \varepsilon^p\int_0^\infty p\,r^{p-1}\,P(\|W_1\|>r)\,dr
= \varepsilon^p\int_{\rho_\varepsilon}^\infty p\,r^{p-1}\,\frac{\nu\bigl(\tfrac r\varepsilon B_1^c(0)\bigr)}{\nu\bigl(\rho_\varepsilon B_1^c(0)\bigr)}\,dr
\le 2\varepsilon^p\int_{\rho_\varepsilon}^\infty p\,r^{p-1}(\varepsilon\rho_\varepsilon)^\alpha r^{-\alpha}\,dr
\]
\[
= 2p\,\varepsilon^p(\varepsilon\rho_\varepsilon)^\alpha\int_{\rho_\varepsilon}^\infty r^{p-\alpha-1}\,dr
= \frac{2p\,\varepsilon^p(\varepsilon\rho_\varepsilon)^\alpha}{\alpha-p}\,\rho_\varepsilon^{\,p-\alpha}
\le \frac{2p\,(\varepsilon_0\rho_{\varepsilon_0})^\alpha}{\alpha-p}\,\rho_{\varepsilon_0}^{\,p-\alpha} < \infty.
\]
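The finiteness of the fractional moment above comes from the elementary fact that a regularly varying tail of index $-\alpha$ has finite moments of every order $p<\alpha$. This can be sanity-checked numerically for a concrete Pareto-type stand-in tail $P(X>r)=(\rho/r)^\alpha$ for $r\ge\rho$ (the exact tail index and scale below are hypothetical illustration values, not quantities from the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, p, rho = 2.0, 0.5, 1.0          # hypothetical tail index, moment order p < alpha, scale

# inverse-CDF sampling of the Pareto tail P(X > r) = (rho/r)**alpha for r >= rho;
# 1 - rng.random() lies in (0, 1], so no division by zero occurs
x = rho * (1.0 - rng.random(400_000)) ** (-1.0 / alpha)

# closed form of E[X^p] = int_0^inf p r^{p-1} P(X > r) dr = rho^p * alpha / (alpha - p),
# finite precisely because p < alpha (the integral diverges for p >= alpha)
closed_form = rho**p * alpha / (alpha - p)
print((x**p).mean(), closed_form)      # Monte Carlo estimate vs exact value, both ≈ 1.333
```

The same computation with $p\ge\alpha$ makes the tail integral $\int r^{p-\alpha-1}\,dr$ diverge, mirroring the restriction $p<\alpha$ in Proposition 4.4.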

We calculate the first term on the right-hand side of (4.33):
\[
E\bigl[\|X^\varepsilon(\tau;x)\|^p\bigr] = \int_0^\infty p\,r^{p-1}\,P\bigl(\|X^\varepsilon(\tau;x)\|>r\bigr)\,dr.
\]

Using (4.2) and the same strong Markov argument as in Claim 1 we obtain for $x\in D_3$
\[
P\bigl(\|X^\varepsilon(\tau;x)\|>r\bigr)
= \sum_{k=1}^\infty P\bigl(\{\|X^\varepsilon(\tau;x)\|>r\}\cap\{\tau=T_k\}\bigr) + \sum_{k=1}^\infty P\bigl(\{\|X^\varepsilon(\tau;x)\|>r\}\cap\{\tau\in(T_{k-1},T_k)\}\bigr)
\]
\[
\le \sum_{k=1}^\infty P\Bigl(\bigcap_{j=1}^{k-1}A_x^j\cap B_x^k\cap\{\|X^\varepsilon(T_k;x)\|>r\}\Bigr) + \sum_{k=1}^\infty P\Bigl(\bigcap_{j=1}^{k-1}A_x^j\cap C_x^k\cap\{\|X^\varepsilon(\tau_x;x)\|>r\}\Bigr)
\]
\[
\le \sum_{k=1}^\infty \sup_{y\in D_2}P(A_y)^{k-1}\,\sup_{y\in D_2}P\bigl(B_y\cap\{\|X^\varepsilon(T_1;y)\|>r\}\bigr) + \sum_{k=1}^\infty \sup_{y\in D_2}P(A_y)^{k-2}\,\sup_{y\in D_2}P\bigl(A_y\cap C_y^2\cap\{\|Y^\varepsilon(\tau_y;y)\|>r\}\bigr)
\]
\[
\le \frac{\sup_{y\in D_2}P\bigl(B_y\cap\{\|X^\varepsilon(T_1;y)\|>r\}\bigr) + \sup_{y\in D_2}P\bigl(\|Y^\varepsilon(\tau_y;y)\|>r\bigr)}{\sup_{y\in D_2}P(A_y)\bigl(1-\sup_{y\in D_2}P(A_y)\bigr)}.
\]

For the first sum we have for $r \ge d(\chi)+2$
\[
\sup_{y\in D_2}P\bigl(B_y\cap\{\|X^\varepsilon(T_1;y)\|>r\}\bigr)
= \sup_{y\in D_2}P\bigl(B_y\cap\{\|Y^\varepsilon(T_1;y)+G(Y^\varepsilon(T_1;y),\varepsilon W_1)\|>r\}\bigr)
\]
\[
\le \sup_{y\in U^\chi}P\bigl(\|y+G(y,\varepsilon W_1)\|>r\bigr)
\le \sup_{y\in U^\chi}P\bigl(G_1(y)\|\varepsilon W_1\|>r-d(\chi)-1\bigr)
\]
\[
\le P\bigl(G_1(\chi)\|\varepsilon W_1\|>r-d(\chi)-1\bigr)
\le P\Bigl(\|W_1\|>\frac1\varepsilon\,\frac{r-d(\chi)-1}{G_1(\chi)}\Bigr).
\]

Hence, choosing w.l.o.g. $p'$ with $\alpha > p' > p > (1-\rho)\alpha$, the Markov inequality and the estimate (4.18) of $A_x$ yield
\[
\frac{\sup_{y\in D_2}P\bigl(B_y\cap\{\|X^\varepsilon(T_1;y)\|>r\}\bigr)}{1-\sup_{y\in D_2}P(A_y)}
\le \frac{\varepsilon^{p'}G_1(\phi)^{p'}\,E\bigl[\|W_1\|^{p'}\bigr]}{(1-C')\frac{\lambda_\varepsilon}{\beta_\varepsilon}\,(r-d(\chi)-1)^{p'}}
\le \frac{G_1(\phi)^{p'}\,E\bigl[\|W_1\|^{p'}\bigr]\,\varepsilon_0^{\,p'-\alpha(1-\rho)}}{(1-C')\,(r-d(\chi)-1)^{p'}}.
\]
For the second sum we have for $r > d(\chi)+\ell+1$
\[
\sup_{y\in D_2}P\bigl(\|Y^\varepsilon(\tau_y(\varepsilon,\chi);y)\|>r\bigr) = 0,
\]
since $\|Y^\varepsilon(\tau_x(\varepsilon,\chi);x)\| \le d(\chi)+(\ell+1)\varepsilon\rho_\varepsilon \le d(\chi)+\ell+1$ for $\varepsilon_0\rho_{\varepsilon_0}\le 1$. Therefore for $\varepsilon_0\in(0,1]$ and $\varepsilon\in(0,\varepsilon_0]$ we have
\[
E\bigl[\|X^\varepsilon(\tau_y(\varepsilon,\chi);y)\|^p\bigr]
\le \bigl(d(\chi)+2\bigr)^p + C\int_{d(\chi)+2}^\infty r^{p-1}\,P\bigl(\|X^\varepsilon(\tau_y(\varepsilon,\chi);y)\|>r\bigr)\,dr
\le \bigl(d(\chi)+2\bigr)^p + C\int_{d(\chi)+2}^\infty \frac{r^{p-1}}{(r-d(\chi)-1)^{p'}}\,dr < \infty.
\]

This establishes the uniform boundedness result (4.32).

Proof of Theorem 2.6: The convergence in probability $\|X^\varepsilon(\tau;y)-\phi-G(\phi,\varepsilon W_{k^*})\| \to 0$ uniformly in $D_3(\varepsilon^\gamma,\chi)$ as $\varepsilon\to 0$, established in Proposition 4.3, combined with the uniform boundedness of Proposition 4.4 implies the uniform integrability of $(\|X^\varepsilon(\tau;y)-\phi-G(\phi,\varepsilon W_{k^*})\|^p)_{\varepsilon\in(0,\varepsilon_0]}$ and hence its convergence to $0$ in $L^p$ as $\varepsilon\to 0$.
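The two facts combined here are standard; for convenience they can be stated as the following textbook lemma, applied with $Z_\varepsilon = \|X^\varepsilon(\tau;y)-\phi-G(\phi,\varepsilon W_{k^*})\|$ and an auxiliary exponent $p'\in(p,\alpha)$ supplied by Proposition 4.4 (the formulation below is a sketch of the standard statement, not taken verbatim from this article):

```latex
% Standard uniform-integrability lemma behind the proof of Theorem 2.6.
\begin{lemma}
Let $0 < p < p'$ and let $(Z_\varepsilon)_{\varepsilon\in(0,\varepsilon_0]}$ be
nonnegative random variables such that $Z_\varepsilon \to 0$ in probability as
$\varepsilon \to 0$ and
$\sup_{\varepsilon\in(0,\varepsilon_0]} \mathbb{E}\bigl[Z_\varepsilon^{p'}\bigr] < \infty$.
Then, by de la Vall\'ee Poussin's criterion applied with the superlinear function
$x \mapsto x^{p'/p}$, the family $(Z_\varepsilon^p)_{\varepsilon\in(0,\varepsilon_0]}$
is uniformly integrable, and consequently
$\mathbb{E}\bigl[Z_\varepsilon^{p}\bigr] \to 0$, i.e.\ $Z_\varepsilon \to 0$ in $L^p$.
\end{lemma}
```

Note that Proposition 4.4 provides the moment bound for every order below $\alpha$, so for any fixed $p<\alpha$ such a $p'$ exists.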

5  Appendix

Since the family $(W_k)_{k\in\mathbb{N}}$ is i.i.d. and $B_k^\diamond = \{\varepsilon W_k \in J^{(D^\iota)^c}(\phi^\iota)\}$, the random index $k^*(\varepsilon)$ is by construction geometrically distributed with success probability $P(B^\diamond) = \frac{\lambda_\varepsilon^\iota}{\beta_\varepsilon}$. Let $\theta>0$. We calculate the Laplace transform of $s^\iota(\varepsilon)$:
\[
E\bigl[e^{-\theta s^\iota(\varepsilon)}\bigr]
= E\Bigl[\sum_{k=1}^\infty e^{-\theta T_k}\prod_{j=1}^{k-1}\bigl(1-\mathbf{1}(B_j^\diamond)\bigr)\mathbf{1}(B_k^\diamond)\Bigr]
= \sum_{k=1}^\infty E\Bigl[e^{-\theta T_k}\prod_{j=1}^{k-1}\bigl(1-\mathbf{1}(B_j^\diamond)\bigr)\mathbf{1}(B_k^\diamond)\Bigr]
= \sum_{k=1}^\infty E\Bigl[\prod_{j=1}^{k-1}e^{-\theta t_j}\bigl(1-\mathbf{1}(B_j^\diamond)\bigr)\,e^{-\theta t_k}\mathbf{1}(B_k^\diamond)\Bigr].
\]

Exploiting the independence of $(W_k)_{k\in\mathbb{N}}$ and $(T_k)_{k\in\mathbb{N}}$ as well as the stationarity of $(W_k)_{k\in\mathbb{N}}$, each summand takes the form
\[
E\Bigl[\prod_{j=1}^{k-1}e^{-\theta t_j}\bigl(1-\mathbf{1}(B_j^\diamond)\bigr)\,e^{-\theta t_k}\mathbf{1}(B_k^\diamond)\Bigr]
= \prod_{j=1}^{k-1}E\bigl[e^{-\theta t_j}\bigl(1-\mathbf{1}(B_j^\diamond)\bigr)\bigr]\;E\bigl[e^{-\theta t_k}\mathbf{1}(B_k^\diamond)\bigr]
\]
\[
= \Bigl(E\bigl[e^{-\theta t_1}\bigr]\bigl(1-P(B_1^\diamond)\bigr)\Bigr)^{k-1}\,E\bigl[e^{-\theta t_1}\bigr]\,P(B_1^\diamond)
= \Bigl(\frac{\beta_\varepsilon}{\theta+\beta_\varepsilon}\Bigl(1-\frac{\lambda_\varepsilon^\iota}{\beta_\varepsilon}\Bigr)\Bigr)^{k-1}\,\frac{\beta_\varepsilon}{\theta+\beta_\varepsilon}\,\frac{\lambda_\varepsilon^\iota}{\beta_\varepsilon}.
\]

Finally we conclude
\[
E\bigl[e^{-\theta s^\iota(\varepsilon)}\bigr]
= \sum_{k=1}^\infty \Bigl(\frac{\beta_\varepsilon}{\theta+\beta_\varepsilon}\Bigl(1-\frac{\lambda_\varepsilon^\iota}{\beta_\varepsilon}\Bigr)\Bigr)^{k-1}\frac{\beta_\varepsilon}{\theta+\beta_\varepsilon}\,\frac{\lambda_\varepsilon^\iota}{\beta_\varepsilon}
= \frac{\beta_\varepsilon}{\theta+\beta_\varepsilon}\,\frac{\lambda_\varepsilon^\iota}{\beta_\varepsilon}\,\frac{1}{1-\frac{\beta_\varepsilon}{\theta+\beta_\varepsilon}\bigl(1-\frac{\lambda_\varepsilon^\iota}{\beta_\varepsilon}\bigr)}
= \frac{\frac{\lambda_\varepsilon^\iota}{\beta_\varepsilon}}{\frac{\theta+\beta_\varepsilon}{\beta_\varepsilon}-\bigl(1-\frac{\lambda_\varepsilon^\iota}{\beta_\varepsilon}\bigr)}
= \frac{\lambda_\varepsilon^\iota}{\theta+\lambda_\varepsilon^\iota}
= \widehat{\mathrm{EXP}}(\lambda_\varepsilon^\iota)(\theta).
\]
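The identity $E[e^{-\theta s^\iota(\varepsilon)}] = \lambda_\varepsilon^\iota/(\theta+\lambda_\varepsilon^\iota)$ says that a geometric number of independent $\mathrm{Exp}(\beta_\varepsilon)$ waiting times, each marked "success" with probability $\lambda_\varepsilon^\iota/\beta_\varepsilon$, adds up to an $\mathrm{Exp}(\lambda_\varepsilon^\iota)$ time: the classical thinning property of the Poisson clock. A quick Monte Carlo sanity check of this fact (a toy illustration with hypothetical rates `beta` and `lam`, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
beta, lam = 5.0, 1.0          # hypothetical clock rate beta and target rate lam
p = lam / beta                # success probability of each attempt, lam/beta
n = 200_000

# K ~ Geometric(p): number of attempts up to and including the first success
k = rng.geometric(p, size=n)
# s = sum of K i.i.d. Exp(beta) interarrival times; by additivity of the Gamma
# family, a sum of k such exponentials is Gamma(k, 1/beta)
s = rng.gamma(shape=k, scale=1.0 / beta)

# theory: s ~ Exp(lam), hence E[s] = 1/lam and the Laplace transform at theta
# equals lam / (theta + lam)
theta = 2.0
print(abs(s.mean() - 1.0 / lam))                             # ≈ 0
print(abs(np.exp(-theta * s).mean() - lam / (theta + lam)))  # ≈ 0
```

The design choice of sampling `Gamma(k, 1/beta)` instead of summing `k` exponentials per path keeps the simulation vectorized while being distributionally identical.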
