Mar 9, 2015 - i.e., there exists θ > 0 so that for all y ∈ Rm and all x1 ∈ R we have ... analysis is to consider the linear operator obtained upon ... v(x, t) := u(x, t) − ¯u(x1 − δ(˜x, t)). .... There exist constants c1 and θ1 sufficiently small, and a constant ...... In particular, proceeding similarly for the other terms we find that we can.
Linear Stability for Transition Front Solutions in Multidimensional Cahn-Hilliard Systems Peter Howard March 9, 2015 Abstract We consider linear stability for planar transition front solutions u ¯(x1 ) arising in n multidimensional (i.e., x ∈ R ) Cahn-Hilliard systems. In previous work the author has established that the linear operator obtained from linearization about the transition front has (after Fourier transform in the transverse variable x ˜ = (x2 , x3 , . . . , xn ) 7→ ξ) a leading eigenvalue that moves into the stable (Re λ < 0) half-plane at rate |ξ|3 . This constitutes precisely the type of borderline case that has been effectively analyzed by the pointwise semigroup methods of Zumbrun and Howard, and we follow that approach here. In particular, the approach can be viewed as a three-step process including: (1) characterization of the spectrum of the linearized operator; (2) derivation of linear semigroup estimates, typically encoded into estimates on an appropriate Green’s function; and (3) implementation of an iterative process to accommodate nonlinearities. In this paper we address Step (2) in the case of multidimensional Cahn-Hilliard systems.
1
Introduction
We consider Cahn-Hilliard systems on x ∈ Rn , m nX o ∂uj =∇· Mjk (u)∇ (−Γ∆u)k + Fuk (u) , ∂t k=1
(1.1)
for j = 1, 2, . . . , m. Here, F : Rn → R, and Γ and M are m × m matrices. For notational convenience, we will often use the tensor form n o ut = ∇ · M (u)Dx − Γ∆u + Du F , (1.2) where the operator D is a Jacobian operator as described, for example, in [6]. For convenient reference, we collect some assumptions that will be made throughout the analysis. 1
(H0) (Assumptions on Γ) Γ denotes a constant, symmetric, positive definite m × m matrix. (H1) (Assumptions on F ) F ∈ C 4 (Rm ), and F has at least two distinct local minimizers at which the Hessian matrix Du2 F (u) is positive definite and (by subtracting an appropriate hyperplane from F if necessary) we can take F to be zero. We denote this class of values M := {u ∈ Rm : F (u) = 0, Du F (u) = 0, Du2 F (u) is positive definite}. (H2) (Transition front existence) There exists a transition front solution to (1.1) u¯(x1 ) so that −Γ¯ u00 + F 0 (¯ u) = 0, (1.3) with u¯(±∞) = u± , u± ∈ M. (H3) (Assumptions on M ) M ∈ C 2 (Rm ); M is uniformly positive definite along the wave; i.e., there exists θ > 0 so that for all y ∈ Rm and all x1 ∈ R we have y T M (¯ u(x1 ))y ≥ θ|y|2 ; and M± = M (u± ) are symmetric. (H4) (Endstate Assumptions) We set B± := Du2 F (u± ) (a symmetric, positive definite matrix) and assume one of the following holds: (H4a) the matrices M± B± have distinct eigenvalues, as do the matrices Γ−1 B± ; or (H4b) one or more of these matrices has a repeated eigenvalue, but the solutions µ = µ(σ) of det − µ4 M± Γ + µ2 (M± B± + 2σκ0 M± Γ) − σ(λ0 I + κ0 M± B± + σκ20 M± Γ) = 0 (1.4) can be strictly divided into two cases: if µ(0) 6= 0 then µ(σ) √ is analytic in σ for |σ| sufficiently small, while if µ(0) = 0 µ(σ) can be written as µ(σ) = σh(σ), where h is analytic in σ for |σ| sufficiently small. Here |(λ0 , κ0 )| = 1, and (σλ0 , σκ0 ) ∈ S for > 0 sufficiently small. (The set S is defined in Definition 2.1.) Regarding (H2), we note that Alikakos and others have established that transition front solutions arise precisely as minimizers of the energy functional Z +∞ 1 E(¯ u) = F (¯ u) + hΓ¯ u, u¯idx1 , (1.5) 2 −∞ where h·, ·i denotes Euclidean inner product. (See [1, 2, 27]). The system (1.1) is a standard model of certain phase separation processes, and its physicality is discussed in detail in [17] and the references cited there. Our interest in this analysis is to consider the linear operator obtained upon linearization of (1.1) about u¯(x1 ). More precisely, our goal is to obtain precise information about the evolution under this linear operator that can be employed to establish linear and nonlinear stability of u¯(x1 ). (The case of nonlinear stability is analyzed in a separate paper.) 2
In the full nonlinear analysis, we will introduce a shift function δ(˜ x, t) (to be chosen during the nonlinear analysis), and define a perturbation variable v(x, t) := u(x, t) − u¯(x1 − δ(˜ x, t)).
(1.6)
Upon substitution of (1.6) into (1.1) we obtain
where with
(∂t − L)v = (∂t − L)(δ¯ u0 (x1 )) + ∇ · Q,
(1.7)
n o ¯ (x1 )Dx − Γ∆v + B(x ¯ 1 )v , Lv := ∇ · M
(1.8)
¯ (x1 ) := M (¯ M u(x1 )) ¯ 1 ) := Du2 F (¯ B(x u(x1 )),
(1.9)
and Q is a collection of nonlinear terms that won’t play a role in the current analysis. (See [15] for a full discussion of nonlinear issues.) The eigenvalue problem for L can be expressed as Lφ = λφ, and we take the Fourier transform of this equation in the transverse variable x˜, using the scaling Z 1 ˆ φ(x1 , ξ) = e−i˜x·ξ φ(x1 , x˜)d˜ x. (1.10) n−1 (2π) 2 Rn−1 The eigenvalue problem transforms to
where
ˆ Lξ φˆ = −Aξ Hξ φˆ = λφ,
(1.11)
¯ (x1 )∂x1 + |ξ|2 M ¯ (x1 ) Aξ := −∂x1 M ¯ 1 ) + |ξ|2 Γ. Hξ := −Γ∂x21 x1 + B(x
(1.12)
We note that under our current assumptions Aξ and Hξ are both self-adjoint (though of course Lξ is not). For convenient reference, we collect here a set of conditions on (1.11) that follow from our assumptions (H0)-(H4). (C0) Same as (H0). ¯ ∈ C 2 (R) is symmetric; there exists a constant αB > 0 so that (C1) B ¯ 1 ) − B± ) = O(e−αB |x1 | ), ∂xj 1 (B(x
x1 → ±∞,
for j = 0, 1, 2; B± are both positive definite matrices. ¯ ∈ C 2 (R); there exists a constant αM > 0 so that (C2) M ¯ (x1 ) − M± ) = O(e−αM |x1 | ), ∂xj 1 (M 3
x1 → ±∞,
¯ (x1 ) is uniformly positive definite on R. We will set α := min{αB , αM }. for j = 0, 1, 2; M (C3) Same as (H4). As a test case for clarification of the discussion, we’ll consider (1.1) with m = 2, Γ = I, M (u) = I, and F (u1 , u2 ) = u21 u22 + u21 (1 − u1 − u2 )2 + u22 (1 − u1 − u2 )2 .
(1.13)
The associated wave u¯(x1 ) is depicted in Figure 1 of [17], and it is shown in [18] that this wave is stable as a solution to (1.1) with n = 1. Before recalling the spectral theorem of [14], we clarify our terminology for the spectrum of Lξ (which follows [16]; see particularly the appendix to Chapter 5). Definition 1.1. We define the point spectrum of Lξ , denoted σpt (Lξ ), as the set σpt (Lξ ) = {λ ∈ C : Lξ φ = λφ for some φ ∈ H 2 (R)}. We define the essential spectrum of Lξ , denoted σess (Lξ ), as the values in C that are not in the resolvent set of Lξ and are not isolated eigenvalues of finite multiplicity. We note that σ(Lξ ) = σpt (Lξ ) ∪ σess (Lξ ), but the sets σpt (Lξ ) and σess (Lξ ) are not necessarily disjoint. We will see that for real values of ξ the spectrum of Lξ is confined to the real line (though Lξ is not self-adjoint), and is bounded above. (Although |ξ| only appears in Lξ as |ξ|2 , we still may obtain complex values under complexification. This is perhaps most easily seen in the case n = 2 for which it’s useful to write ξ 2 instead of |ξ|2 .) We will refer to the largest (right-most) eigenvalue of Lξ as its leading eigenvalue, and we will denote this eigenvalue λ∗ (ξ). The assumptions for the spectral theorem of [14] are all straightforward, except for a condition associated with the stability of u¯ with respect to (1.1) in R. That is, since u¯ is a function of only one variable, it can be viewed as a stationary solution for a Cahn-Hilliard system on R, n o ut = M (u)(−Γuxx + Du F )x . (1.14) x
In [17], the authors identify a spectral stability criterion for u¯ as a solution of (1.14), and verify that it is satisfied for certain example systems. In [18, 19], the authors establish that this spectral condition is sufficient to imply nonlinear stability for u¯ as a solution of (1.14). Although we will postpone our full discussion of this condition until Section 3, we will denote it (D0 ) in the statement of our theorem, and we note here that it is ultimately a transversality condition in the following sense. When (1.3) (in (H2)) is written as a first order autonomous ODE system, our condition ensures that u¯ arises as a transverse connection either from the m-dimensional unstable linearized subspace for u− , denoted U − , to the mdimensional stable linearized subspace for u+ , denoted S + , or (by isotropy) vice versa. (We recall that since our ambient manifold is R2m , the intersection of U − and S + is referred to as transverse if at each point of intersection the tangent spaces associated with U − and S + 4
generate R2m . In particular, in this setting a transverse connection is one in which the the intersection of these two manifolds has dimension 1; i.e., our solution manifold will comprise shifts of u¯.) Theorem 1.1 (From [14]). Let Assumptions (H0)-(H4) hold, with κ0 = 0 in (H4), and additionally assume M is a constant matrix. Assume Condition (D0 ) holds, and that u¯ minimizes the energy (1.5). The spectrum of the operator Lξ satisfies the following: I. For real values of ξ: 1. The spectrum σ(Lξ ) lies entirely on R. 2. The essential spectrum of Lξ lies in the union of the two intervals (−∞, −m± b± |ξ|2 − m± γ|ξ|4 ], where m± , b± , and γ respectively denote the smallest eigenvalues of M± , B± , and Γ. 3. There exists a constant θ0 > 0 so that the point spectrum of Lξ is confined to the interval (−∞, −θ0 |ξ|4 ]. 4. There exists a constant r > 0 sufficiently small so that for |ξ| < r the leading eigenvalue of Lξ , denoted λ∗ (ξ), satisfies λ∗ (ξ) = −c3 |ξ|3 (1 + o(|ξ|)), where
R +∞ c3 = 4
F (¯ u(x1 ))dx1 −1 ¯ [u], [u]i > 0. hM
−∞
Here, o(·) denotes standard “little-O” notation, and [·] denotes jump, so that [u] = u+ − u− . Moreover, for any 0 < |ξ0 | < r, there exists 0 < r0 < r sufficiently small so that λ∗ (ξ) is analytic on |ξ − ξ0 | < r0 . 5. The constant r > 0 from Part 4 can be taken sufficiently small so that there exists a constant θ1 > 0 so that for |ξ| < r the set σpt (Lξ )\{λ∗ (ξ)} is confined to the interval (−∞, −θ1 |ξ|2 ]. II. Moreover, if we allow complex values of ξ (ξ = ξR + iξI ) so that |ξ|2 becomes ζ = |ξR |2 − |ξI |2 + 2ihξR , ξI i =: |ξ|2R , then: 6. There exist constants c1 and θ1 sufficiently small, and a constant Cθ1 sufficiently large, so that the essential spectrum for Lξ is bounded to the left of a wedge contour described by Re λ + c1 |Im λ| = −θ1 |ξR |2 + |ξR |4 + Cθ1 |ξI |2 + |ξI |4 . The designation Cθ1 indicates that θ1 and Cθ1 are chosen together, and one can be varied at the expense of a change in the other. 5
7. The perturbation expression for λ∗ (ξ) given in Part 4 continues to hold for complex values of ξ (with |ξ|3 replaced by ζ 3/2 ), and there exist constants c2 and θ2 sufficiently small, and a constant Cθ2 sufficiently large, so that the remainder of the point spectrum is bounded for |ζ| sufficiently small to the left of a contour described by Re λ + c2 |Im λ| = −θ2 |ξR |2 + Cθ2 |ξI |2 . 8. There exist constants c3 and θ3 sufficiently small, and a constant Cθ3 sufficiently large, so that the point spectrum for Lξ is bounded to the left of a contour described by Re λ + c3 |Im λ| = −θ3 |ξR |4 + Cθ3 1 + |ξR |2 + |ξI |2 + |ξI |4 . The main observations summarized in Theorem 1.1 are as follows: Part I asserts that for real values of ξ the spectrum of Lξ lies entirely in the stable (i.e., negative-real) half-plane, and indeed the leading eigenvalue moves into the stable half-plane like |ξ|3 . Moreover, the remainder of the spectrum (both point and essential) separates from λ∗ (ξ) by moving into the stable half-plane at the faster rate |ξ|2 (faster for |ξ| small). Part II asserts that similar behavior holds for for complex values of ξ. Parts I is proven in [14], while Part II is proven in Section 4 of the current paper. Finally, we note that only Parts 4, 7, and 8 require M to be constant. If we let G(x, t; y) denote a Green’s function for L so that (∂t − L)G = 0;
G(x, 0; y) = δy (x)I,
(1.15)
where in this case δy (x) denotes a Dirac delta function, then we can express solutions of (1.7) as Z Z tZ 0 v(x, t) − u¯ (x1 )δ(˜ x, t) = G(x, t; y)v0 (y)dy + G(x, t − s; y)∇ · Q(y, s)dyds. (1.16) Rn
0
R
As we’ll clarify in Theorem 1.2 we can express G as ˜ t; y), G(x, t; y) = u¯0 (x1 )e(˜ x, t; y) + G(x,
(1.17)
where, roughly speaking, e(˜ x, t; y) encodes information associated with the shift δ(˜ x, t), and ˜ G(x, t; y) encodes information away from the transition layer. We obtain Z Z 0 0 ˜ t; y)v0 (y)dy v(x, t) − u¯ (x1 )δ(˜ x, t) = u¯ (x1 ) e(˜ x, t; y)v0 (y)dy + G(x, Rn Rn Z tZ Z tZ 0 ˜ t − s; y)∇ · Q(y, s)dyds. + u¯ (x1 ) e(˜ x, t − s; y)∇ · Q(y, s)dyds + G(x, 0
Rn
0
6
Rn
(1.18)
We now choose δ(˜ x, t) so that Z Z tZ δ(˜ x, t) = − e(˜ x, t; y)v0 (y)dy − Rn
e(˜ x, t − s; y)∇ · Q(y, s)dyds.
(1.19)
Rn
0
Upon combining (1.18) and (1.19), and integrating the nonlinear terms by parts, we obtain the system of m + 1 integral equations Z Z tZ X n ˜ ˜ y (x, t − s; y)Qj (y, s)dyds v(x, t) = G(x, t; y)v0 (y)dy − G j Rn
0
Z δ(˜ x, t) = −
Z e(˜ x, t; y)v0 (y)dy +
Rn
0
Rn j=1 n tZ X
(1.20) eyj (˜ x, t − s; y)Qj (y, s)dyds.
Rn j=1
The ultimate goal of the program is to use a nonlinear iteration argument to establish existence and asymptotic behavior of solutions to this system. In order to do this, we require ˜ t; y) and e(˜ detailed estimates on G(x, x, t; y), and we will establish such estimates in the main theorem of this paper (Theorem 1.2). In order to efficiently describe some logarithmic behavior that arises in our main theorem, we make the following definition. Definition 1.2. We define a function hp,n (t) for all 1 ≤ p ≤ ∞, n = 2, 3, . . . , and t > 0. Precisely, we take hp,2 (t) ≡ 1 for all 1 ≤ p ≤ ∞, and for n = 3, 4, . . . we set ( ln t p = 1 hp,n (t) = 1 p > 1. In addition, for 1 ≤ p < ∞ we will denote the Lp norm in the transverse variable as Z p1 ku(x1 , ·, t)kLpx˜ := |u(x1 , x˜, t)|p d˜ x , Rn−1
and we define ku(x1 , ·, t)kL∞ in an analogous fashion. x ˜ Finally, for positive constants K and T that arise in the statement of Theorem 1.2, we will let χII (x, t; y) denote the characteristic function for the set S II := {(x, t; y) : t ≥ T, |x − y| ≤ Kt}, and we will let χIII (x, t; y) denote the characteristic function for the complement of S II (in Rn × R+ × Rn ). We can then write ˜ t; y) = G ˜ II (x, t; y) + G ˜ III (x, t; y), G(x, where
˜ II (x, t; y) = G(x, ˜ t; y)χII (x, t; y) G ˜ III (x, t; y) = G(x, ˜ t; y)χIII (x, t; y). G
The primary purpose of this analysis is to establish the following theorem. We note that the condition denoted (Dξ ) is stated precisely in Section 3. 7
Theorem 1.2. Suppose Conditions (C0)-(C3) hold, and that σ(Lξ ) satisfies the conclusions of Theorem 1.1 (possibly under weaker hypotheses), along with spectral condition (Dξ ). Then given any time threshold T > 0 there exist constants η > 0 (sufficiently small), and C > 0, K > 0, M > 0 (sufficiently large) so that the Green’s function described in (1.15) can be bounded as follows: there exists a splitting ˜ t; y), G(x, t; y) = u¯0 (x1 )e(˜ x, t; y) + G(x, so that: (I) Transition layer terms. ke(˜ x, t; y)k
Lpx˜
− n−1 (1− p1 ) 3
≤ C(1 + t)
1
key1 (˜ x, t; y)kLpx˜ ≤ C(1 + t)− 3 − ket (˜ x, t; y)k
Lpx˜
y2
hp,n (t)e
n−1 (1− p1 ) 3
(1− p1 ) −1− n−1 3
≤ C(1 + t)
4
kety1 (˜ x, t; y)kLpx˜ ≤ C(1 + t)− 3 −
− M1t 2 y1
hp,n (t)e− M t , y2
hp,n (t)e
n−1 (1− p1 ) 3
− M1t 2 y1
hp,n (t)e− M t ,
and for any multiindex β in x˜ and y˜, with |β| ≤ 3, k∂ β e(˜ x, t; y)kLpx˜ ≤ C(1 + t)−
|β| − n−1 (1− p1 ) 3 3
2 y1
hp,n (t)e− M t 2 y1
k∂ β ey1 (˜ x, t; y)kLpx˜ ≤ C(1 + t)−
1+|β| − n−1 (1− p1 ) 3 3
hp,n (t)e− M t
k∂ β et (˜ x, t; y)kLpx˜ ≤ C(1 + t)−
3+|β| − n−1 (1− p1 ) 3 3
hp,n (t)e− M t .
2 y1
(II) Asymptotic terms. For |x − y| ≤ Kt, t ≥ T (x1 −y1 )2 (1− p1 ) − 23 − n−1 (1− p1 ) ˜ II (x, t; y)kLp ≤ C t− 21 − n−1 2 3 kG + t h (t) e− M t p,n x ˜ n−1 n−1 1 1 1 II ˜ p kGx1 (x, t; y)kLx˜ ≤ C t−1− 3 (1− p ) hp,n (t) + t− 2 − 2 (1− p ) e−η|x1 | (x1 −y1 )2 − 23 − n−1 (1− p1 ) −η|x1 | 3 +t hp,n (t)e e− M t n−1
(x1 −y1 )2
1
˜ II (x, t; y)kLp ≤ Ct−1− 3 (1− p ) hp,n (t)e− M t kG y1 x ˜ 4 n−1 1 (x1 −y1 )2 − 3 − 3 (1− p ) −1− n−1 (1− p1 ) II −η|x1 | ˜ 3 p kGx1 y1 (x, t; y)kLx˜ ≤ C t hp,n (t) + t hp,n (t)e e− M t , and for any multiindex β in x˜ and y˜, with |β| ≤ 3, 2+|β|
n−1
(x1 −y1 )2
1
˜ II (x, t; y)kLp ≤ Ct− 3 − 3 (1− p ) hp,n (t)e− M t k∂ β G x ˜ 3+|β| n−1 1 (x1 −y1 )2 2+|β| − 3 − 3 (1− p ) − 3 − n−1 (1− p1 ) −η|x1 | ˜ II 3 p ≤ C t k∂ β G (x, t; y)k h (t) + t h (t)e e− M t p,n p,n Lx˜ x1 ˜ II (x, t; y)kx2 ≤ Ct− k∂ β G y1
3+|β| − n−1 (1− p1 ) 3 3
hp,n (t)e− 8
(x1 −y1 )2 Mt
.
(III) Local terms. For |x − y| ≥ Kt or 0 < t < T , and for any multiindex α in x and y with |α| ≤ 3 ˜ III (x, t; y)kLp ≤ Ct− k∂ α G x ˜
1+|α| − 14 (1− p1 ) 4
e
−
(x1 −y1 )4/3 M t1/3
.
Moreover, precisely the same estimates hold if the L2 norm in x˜ is replaced by the transverse L2 norm in y˜. The remainder of the paper is organized as follows. In Section 2, we review some standard ODE estimates that will allow us to characterize the Evans function, and in Section 3 we define and analyze the Evans function for Lξ and precisely state spectral conditions (D0 ) and (Dξ ). In Section 4 we analyze the spectrum of Lξ for complex values of ξ, proving Part II of Theorem 1.1. In Section 5 we construct estimates on the resolvent kernel for the resolvent (λI − Lξ )−1 , and in Section 6 we prove Theorem 1.2 by establishing the stated estimates on G(x, t; y).
2
Preliminary ODE Estimates
We derive our estimates on the resolvent kernel for Lξ in terms of asymptotically growing and decaying solutions of the eigenvalue problem Lξ φ = λφ.
(2.1)
As x1 → ±∞ this equation is asymptotically close to the constant coefficient equations −M± Γφ0000 + (M± B± + 2|ξ|2 M± Γ)φ00 − (λI + |ξ|2 M± B± + |ξ|4 M± Γ)φ = 0.
(2.2)
If we search for solutions of the form φ(x1 ) = eµx1 r, where µ is a scalar constant and r ∈ Cm is a constant vector (constant in x1 ) we obtain the associated eigenvalue problem n o 4 2 2 2 4 − µ M± Γ + µ (M± B± + 2|ξ| M± Γ) − (λI + |ξ| M± B± + |ξ| M± Γ) r = 0. (2.3) In this last expression, it will be convenient to set κ := |ξ|2 to get n o − µ4 M± Γ + µ2 (M± B± + 2κM± Γ) − (λI + κM± B± + κ2 M± Γ) r = 0.
(2.4)
At this stage, we introduce a radial variable σ defined so that (λ, κ) = σ(λ0 , κ0 ),
(2.5)
where |(λ0 , κ0 )| = 1, which allows us to express our asymptotic eigenvalue problem as n o − µ4 M± Γ + µ2 (M± B± + 2σκ0 M± Γ) − σ(λ0 I + κ0 M± B± + σκ20 M± Γ) r = 0. (2.6) 9
(Our use of this radial variable follows particularly [29, 31].) Following [17] we set the notation σ(M± B± ) = {βj± }m j=1 σ(M± Γ) = {γj± }m j=1 ,
(2.7)
σ(Γ−1 B± ) = {νj± }m j=1 , where σ(·) denotes the collection of eigenvalues and we choose our ordering so that j < k implies βj± ≤ βk± , γj± ≤ γk± , and νj± ≤ νk± . The fact that the eigenvalues for these matrices are all real and positive follows from symmetry and positivity of Γ, B± , and M± , as discussed in more detail in [17]. To begin a perturbation argument, we set σ = 0 to get n o − µ4 M± Γ + µ2 M± B± r = 0. By factoring out µ2 we see that (in the determinant of the matrix in brackets) 2m values of µ will vanish for σ = 0 (the slow modes) and 2m values of µ will not vanish (the fast modes). Fast modes. For the fast modes, we have n o M± Γ − µ2 I + Γ−1 B± r = 0, and so the values of µ(0)2 correspond with the eigenvalues of Γ−1 B± . For j = 1, 2, . . . , m we label these values q ± ± + O(σ) µj (σ) = − νm+1−j q (2.8) ± µ± (σ) = + ν + O(σ), 3m+j j ± ± m (so that j < k implies µ± j (0) ≤ µk (0)) with the corresponding eigenvectors {rj (σ)}j=1 and ± ± m {r3m+j (σ)}m j=1 = {rm+1−j (σ)}j=1 . Setting σ = 0 we note the relations ± Γ−1 B± rj± (0) = νm+1−j rj± (0);
j = 1, 2, . . . , m.
Slow modes. For the slow modes, for which µ(0) = 0, it will be convenient to set ω = µ2 , in which case (2.6) can be expressed as n o (2.9) − ω 2 M± Γ + ω(M± B± + 2σκ0 M± Γ) − σ(λ0 I + κ0 M± B± + σκ20 M± Γ) r = 0. In addition, we set ω = σζ so that n o − σ 2 ζ 2 M± Γ + σζ(M± B± + 2σκ0 M± Γ) − σ(λ0 I + κ0 M± B± + σκ20 M± Γ) r = 0. Factoring out σ, we see that one set of solutions consists of the values ζ so that det − σζ 2 M± Γ + ζ(M± B± + 2σκ0 M± Γ) − (λ0 I + κ0 M± B± + σκ20 M± Γ) = 0. 10
(2.10)
Now, we set σ = 0 to get det ζM± B± − (λ0 I + κ0 M± B± ) = 0. Noting that det(M± B± ) 6= 0, we see that −1 ζ ∈ σ(B± M±−1 λ0 + κ0 I).
We denote −1 m σ(B± M±−1 λ0 + κ0 I) = {ρ± j }j=1 ,
(2.11)
and note that for j = 1, 2, . . . , m ± ± −1 M±−1 λ0 + κ0 I)rm+j (0) = ρm+1−j rm+j (0), (B± −1 so that (recalling σ(B± M±−1 ) = { β1± }) j
ρ± m+1−j =
λ0 + κ0 ; βj±
j = 1, 2, . . . , m.
According to standard perturbation theory (e.g. Theorem XII.1 in [26]), we know that if −1 the eigenvalues of this last matrix (i.e., B± M±−1 λ0 + κ0 I) are distinct then ζ will be analytic as a function of σ. This is precisely where we use our assumption that the eigenvalues of M± B± are distinct (or alternatively assumption (H4b), which directly asserts analyticity). Analyticity allows us to write ζj± (σ)
=
ρ± j
+
∞ X
k a± jk σ ,
k=1 ∞ for appropriate constants {a± jk }k=1 . Accordingly,
ωj± (σ)
=
σρ± j
+
∞ X
k+1 , a± jk σ
k=1
and so long as ρ± j 6= 0 we obtain the slow modes for j = 1, 2, . . . , m √ q ± σ ρm+1−j + O(|σ|3/2 ) √ q ± = σ ρj + O(|σ|3/2 ),
µ± m+j = µ± 2m+j
± ± ± m m with corresponding eigenvectors {rm+j (σ)}m j=1 and {r2m+j (σ)}j=1 = {r2m+1−j (σ)}j=1 .
11
(2.12)
Remark 2.1. It’s clear from (2.11) that this condition ρ± / j 6= 0 is equivalent to (λ0 /κ0 ) ∈ ± σ(−M± B± ), which means λ0 6= −βj κ0 (for any j = 1, 2, . . . , m), and finally that we must remain substantially away from λ = −βj± κ. We will find that we can set a left bound on λ of the form λ ≥ −c0 κ, with 0 < c0 < β := min βj± , (2.13) j,±
whenever κ and λ are real-valued. Here, the distance between λ ∈ R and −βκ is clearly at least (β − c0 )κ, and more generally we will work with contours Γ so that for each λ ∈ Γ the distance between λ and (−∞, −βκ] is at least (β − c0 )κ. For κ ∈ C (i.e., when ξ is complexified), we must remain away from branches λ/βj± + κ ∈ (−∞, 0], which we denote bκ,± j . Given any κ ∈ C, we denote the collection of all such branches Bκ = ∪j,± bκ,± j , and we will work with contours Γ so that for each λ ∈ Γ the distance between λ and Bκ is at least (β − c0 )|κ|. 4m Finally, we can express the {µ± j }j=1 analytically as functions of the variable √ s := σ = (|λ|2 + |ξ|4 )1/4 = (|λ|2 + κ2 )1/4 .
(2.14)
Here, |λ| denotes complex modulus of λ, but since ξ ∈ Rn−1 its more natural to view |ξ| as standard Euclidean length of ξ. In Section 6 we will complexify ξ, in which case s will become complex-valued. In anticipation of this, we use |s| to denote complex modulus of s. We have the following important relations for j = 1, 2, . . . , m: q ± ± µj (s) = − νm+1−j + O(|s|2 ) s λ µ± + κ + O(|s|3 ) m+j (s) = − βj± s (2.15) λ µ± + κ + O(|s|3 ) 2m+j (s) = ± βm+1−j q νj± + O(|s|2 ). µ± (s) = + 3m+j We note that in our derivation we have expressed √ q √ q ± 1 ± σ ρj + O(|σ|) = σ ρj + q O(|σ|) + . . . , 2 ρ± j and we must observe that from the bound discussed in Remark 2.1 we have s q λ β − c0 0 1/2 | ρ± ≥ |κ0 |1/2 . j | = | ± + κ0 | βj βj± 12
q If κ0 is bounded away from 0, then | ρ± j | is bounded away from 0, while if κ0 is small then q |λ0 | must be close to 1 by the definition of σ, and in this case again | ρ± j | is bounded away from 0. In the following lemma, we collect estimates on solutions to (2.1). First, we define a domain of applicability. Definition 2.1. For > 0, we will denote by S the following set: n o S := (λ, κ) : |s| < , λ ∈ / Bκ . Lemma 2.1. Under Conditions (C0)-(C3), there exist constants , η > 0 so that the following estimates hold uniformly in (λ, κ) ∈ S on a choice of linearly independent solutions of the eigenvalue problem (2.1): (I) For x1 ≤ 0, k = 0, 1, 2, 3, and j = 1, 2, . . . , m, µ− k − −η|x1 | 2m+j (s)x1 (µ− ∂xk1 φ− (x ; s) = e ) r + O(e ) ; (slow ) 1 j 2m+j 2m+1−j µ− k − −η|x1 | 3m+j (s)x1 (µ− ∂xk1 φ− ) ; (fast) m+j (x1 ; s) = e 3m+j ) rm+1−j + O(e and − k − −η|x1 | ∂xk1 ψj− (x1 ; s) = eµj (s)x1 (µ− ) r + O(e ) ; j j 1 − k µ−m+j (s)x1 − k −µ− m+j (s)x1 r − (µm+j ) e − (−µ− ) e ∂xk1 ψm+j (x1 ; s) = − m+j m+j µm+j + O(e−η|x1 | ).
(fast)
(slow )
(II) For x1 ≥ 0, k = 0, 1, 2, 3, and j = 1, 2, . . . , m, µ+ j (s)x1 (µ+ )k r + + O(e−η|x1 | ) ; (x ; s) = e ∂xk1 φ+ 1 j j j + µm+j (s)x1 + k + k + −η|x1 | ∂x1 φm+j (x1 ; s) = e (µm+j ) rm+j + O(e ) ;
(fast) (slow )
and ∂xk1 ψj+ (x1 ; s) =
1 µ+ 2m+j
k µ+ k −µ+ 2m+j (s)x1 − (−µ+ 2m+j (s)x1 r + (µ+ ) e ) e 2m+j 2m+j 2m+1−j
+ O(e−η|x1 | ); µ+ (s)x1 + + k k + −η|x1 | 3m+j ∂x1 ψm+j (x1 ; s) = e (µ3m+j ) rm+1−j + O(e ) . Throughout the statement, we have suppressed dependence on λ0 and κ0 . 13
(slow ) (fast)
Note on the proof of Lemma 2.1. The proof of Lemma 2.1 follows standard arguments, and we only remark on the slow growth modes. (See the proof of Proposition 3.1 in [30] for general details in a similar setting.) − We recall from [18] that the slow growth modes {ψm+j }m j=1 are constructed as linear − m combinations of the slow decay modes {φj }j=1 and the natural slow growth modes µ− (s)x1 − − −η|x1 | ¯ m+j rm+j (s) + O(e ψm+j (x1 ; s) = e ) . The difficulty arising with these natural slow growth modes is that for λ = 0 they are (asymptotically, as x1 → −∞) linearly dependent on the slow decay mode φ− m+1−j . We overcome this difficulty by setting 1 ¯− − ψm+j (x1 ; s) − φm+1−j (x1 ; s) , ψm+j (x1 ; s) = − µm+j which defines a slow growth solution that is linearly independent of φ− m+1−j . This approach to defining the choice of basis is taken from [5, 23, 24]. Remark 2.2. Since u¯0 (x1 ) decays at exponential rate as x1 → ±∞, it must be the case m that u¯0 (x1 ) is a linear combination of the fast-decaying solutions {φ− m+j (x1 ; 0)}j=1 and of the m fast-decaying solutions {φ+ j (x1 ; 0)}j=1 . Focusing for specificity on the latter, we note that the linear combination will not contain any solutions that decay at a slower rate than u¯0 (x1 ), We are justified then in letting J + denote the index of the slowest decaying solution that appears in the linear combination, or if multiple solutions have the same rate one of these indices. Noting that the faster decaying solutions can be subsumed into the exponential errors in Lemma 2.1, we can write u¯0 (x1 ) = φ+ J + (x1 ; 0), where in the case of multiple solutions with the same decay rate we may have to revise our original (arbitrary) selection of the eigenvector rJ++ . Proceeding similarly for x1 < 0 and appealing to analyticity in σ, we conclude φ− ¯0 (x1 ) + O(s2 e−η|x1 | ) J − (x1 ; s) = u φ+ ¯0 (x1 ) + O(s2 e−η|x1 | ). J + (x1 ; s) = u
(2.16)
In addition to Lemma 2.1, we will find it convenient to collect some observations regarding the behavior of our ODE solutions at s = 0, and also regarding s-derivatives of our ODE solutions. Lemma 2.2. Let the assumptions and notation of Lemma 2.1 hold. Then we have the following relations at s = 0: (i) For j = 1, 2, . . . , m − H0 φ− j (x1 ) = B− r2m+1−j (0); − H0 φ+ m+j (x1 ) = B+ rm+j (0);
14
H0 φ− m+j (x1 ) = 0; H0 φ+ j (x1 ) = 0;
and additionally H0 ψj− (x1 ) = 0;
+ H0 ψm+j (x1 ) = 0.
(ii) For j = 1, 2, . . . , m ¯ (x1 )(H0 ∂s φ− )0 = M− B− M j
q
− ρ− j r2m+1−j (0);
¯ (x1 )(H0 ∂s φ+ )0 = M+ B+ M m+j
q
+ ρ+ m+1−j rm+j (0);
H0 ∂s φ− m+j = 0; H0 ∂s φ+ j = 0;
and additionally H0 ∂s ψj− = 0
+ H0 ∂s ψm+j = 0.
+ (iii) For φ− J − and φJ + as in Remark 2.2 0 ¯ (x1 ) H0 ∂s2 (φ++ − φ−− ) = −2λ0 [u]. M J J
√ Proof. Expressed with κ = |ξ|2 and s = σ, our eigenvalue problem (1.11) is 0 00 2 0 ¯ ¯ ¯ (x1 )(−Γφ00 + B(x ¯ 1 )φ+κ0 s2 Γφ) = s2 λ0 φ. (2.17) M (x1 )(−Γφ + B(x1 )φ+κ0 s Γφ) −κ0 s2 M ¯ (H0 φ)0 )0 = 0, which can be integrated once to M ¯ (H0 φ)0 = c for Setting s = 0 we obtain (M ± some constant c. It’s clear from Lemma 2.1 that for any decay mode φj , fast or slow, 0 lim (H0 φ± j (x1 ; 0)) = 0,
x1 →±∞
so that in all cases c = 0. (Notice that our notation here indicates that for φ+ j we take −1 ¯ x1 → ∞, while for φ− we take x → −∞.) Multiplying through by M (x ) we find 1 1 j 0 (H0 φ) = 0, so that H0 φ = c˜ for some constant c˜. For fast modes lim H0 φ± j (x1 ; 0) = 0,
x1 →±∞
so that c˜ = 0, while for slow modes − lim H0 φ− j (x1 ; 0) = B− r2m+1−j (0);
x1 →−∞
+ lim H0 φ+ m+j (x1 ; 0) = B+ rm+j (0).
x1 →+∞
This establishes the first part of (i). For the fast-growing solutions in Part (i), we begin by observing that the equation H0 ψ = 0
(2.18)
has m linearly independent solutions that grow at −∞ and m linearly independent solutions that grow at +∞. Since each of these solutions is also a solution of −A0 H0 ψ = 0, 15
(2.19)
and since this latter equation has precisely m fast-growing solutions at each of ±∞, we can regard the linearly independent fast-growing solutions of (2.18) as a choice of linearly independent fast-growing solutions of (2.19). Suppose, then, that ψj− (x1 ; s) denotes any solution to −Aξ Hξ ψ = λψ that grows at exponential rate as x1 → −∞ when s = 0. Then ψj− (x1 ; 0) can be expressed as a linear combination of solutions to (2.18) and solutions of −A0 H0 ψ = 0 that do not grow as x1 → −∞. These latter solutions (that do not grow) can be eliminated by a change of the exponentially small errors of Lemma 2.1, so that ψj− (x1 ; 0) is written entirely as a linear combination of solutions of (2.18). This gives the second part of Part (i). For (ii), we note that for the fast-decaying and fast-growing modes the claim is clear from the fact that these solutions are analytic in s2 . For the slow-decaying solutions, we differentiate (2.17) with respect to s and subsequently set s = 0 to obtain −A0 H0 ∂s φ = 0. ¯ (x1 )(H0 ∂s φ)0 = c for some constant c. Integrating once, as for (i), we find that M To be precise, we take the case x1 < 0 for which Lemma 2.1 provides the estimate n o µ− (s)x1 −0 − − −η|x1 | 2m+j ) , φj (x1 ; s) = e µ2m+j r2m+1−j + O(e for j = 1, 2, . . . , m. Differentiating with respect to s, we obtain n o dµ− 0 2m+j µ− − − −η|x1 | 2m+j (s)x1 x )e µ r + O(e ) ∂s φ− (x ; s) = ( 1 1 2m+j 2m+1−j j ds − o n dµ− dr2m+1−j − 2m+j − −η|x1 | r2m+1−j + µ− + O(e ) . +eµ2m+j (s)x1 2m+j ds ds Appealing to analyticity in s, we can set s = 0 to obtain dµ− dµ− 2m+j 2m+j −0 − −η|x1 | ∂s φj (x1 ; s) =( )+ (0) + O(e−η|x1 | ). x1 )O(e r2m+1−j ds s=0 ds s=0 s=0 Now, taking the limit as x1 → −∞ we see that q dµ− 2m+j − − 0 r (0) = ρ− lim (∂s φ− (x ; s)) = 1 j r2m+1−j (0). j x1 →−∞ ds s=0 2m+1−j s=0 ¯ (x1 )(H0 ∂s φ− )0 = c, and proceeding similarly for x1 > 0 Combining these observations with M j we obtain the first part of (ii). + For (iii), let φ temporarily correspond with either φ− J − or φJ + , and note that if we take two derivatives of (2.17) with respect to s and set s = 0 in the result we obtain ¯ (x1 )(H0 ∂s2 φ)0 )0 = −2κ0 (M ¯ (x1 )Γ¯ (M u00 )0 + 2λ0 u¯0 . + For φ− J − we integrate over (−∞, x1 ], while for φJ + we integrate over [x1 , +∞). We obtain respectively ¯ (x1 )(H0 ∂ 2 φ−− )0 = −2κ0 M ¯ (x1 )Γ¯ M u00 (x1 ) + 2λ0 (¯ u(x1 ) − u− ) s J 2 + 0 00 ¯ ¯ u (x1 ) + 2λ0 (u+ − u¯(x1 )). −M (x1 )(H0 ∂s φ + ) = 2κ0 M (x1 )Γ¯ J
Part (iii) follows by adding these together. 16
3
Analysis of the Evans Function
In this section we develop some general observations about a Wronskian that serves as an Evans type function for Cahn-Hilliard systems. (The function is not analytic in the variables (λ, ξ), and analyticity cannot be recovered by computing it as a wedge product, so it doesn’t quite correspond with the general notion of an Evans function. See [3, 7, 25].) First, we set some convenient notation. Definition 3.1. Suppose {φj }N j=1 denote N vectors, each of length M ≤ N , and dependent on a single independent variable, and suppose N/M = l, where l is a positive integer. Then we set the Wronskian notation φ1 φ2 ... φN φ1 0 φ2 0 ... φN 0 W (φ1 , φ2 , . . . , φN ) := det .. (3.1) .. .. .. , . . . . φ1 (l−1) φ2 (l−1) . . .
where 0 and
(l−1)
φN (l−1)
denote usual differentiation with respect to the independent variable.
With this notation in place, we set slow
slow
z }| { z }| { + + − − − − + , φ , . . . , φ D(λ, κ) := W (φ+ , . . . , φ 2m , φ1 , . . . , φm , φm+1 , . . . , φ2m ), | 1 {z m} m+1 {z } | fast
and then with s =
√
fast
σ set D(s) := D(λ0 σ, κ0 σ),
(3.2)
where the dependence on λ0 and κ0 has been suppressed on the left hand side. If we take κ = 0 in D, we obtain precisely the Evans function associated with u¯(x1 ) viewed as a solution to the scalar system (1.14). In [17], the authors analyze this function, √ and following the notation used there we specify it as Da (ζ) = D(λ, 0), where ζ = λ. In particular, it is shown in [17] That under the assumptions (H0)-(H4), with κ0 = 0 in (H4), (k) we have Da (0) = 0 for k = 0, 1, . . . , m, and transversality (as described in the paragraph immediately preceding Theorem 1.1) is determined by the following condition. Condition (D0 ). dm+1 Da (0) 6= 0. dζ m+1
Remark 3.1. Recalling Remark 2.2 we see immediately from the linear dependence of + φ− J − (x1 ; 0) and φJ + (x1 ; 0) that D(0) = 0. Turning now to s derivatives of D, we note that 17
+ 2 since φ− J − and φJ + are analytic in s we will not obtain a non-zero derivative unless at least + two s derivatives fall on one of φ− J − and φJ + . 2m In addition, we observe from Lemma 2.2 that for all solutions {φ± j }j=1 we have 000
0
¯ 0 (x1 )φ± + Γ−1 B(x ¯ 1 )φ± . φ± = Γ−1 B j j j Accordingly, we can use column operations to eliminate the bottom m rows in each column of (Φ+ , Φ− ). In this way, we see that in order to get a non-zero derivative of D we must have derivatives on at least m columns of (Φ+ , Φ− ). This means that the lowest (possible) order non-zero derivative of D(s) will be the (m + 1)st derivative, with two derivatives on exactly + one of φ− J − and φJ + and one derivative on each of m − 1 slow-decaying solutions. Similarly as in [17] we denote these terms 2 D(m+1) (0) = (m + 1)! j
(2m)
X
˜ j1 ,j2 ,...,jm−1 , W
(3.3)
1 ,j2 ,...,jm−1 =1
P where the notation (2m) j1 ,j2 ,...,jm−1 =1 denotes summation for which j1 goes from 1 to m + 2, j2 goes from j1 + 1 to m + 3, and so on until jm−1 goes from jm−2 + 1 to 2m. (This corrects a notational error in [17].) + 2m m We note that there are precisely 2m slow decay modes, {φ− j }j=1 and {φj }j=m+1 , and so we can refer to them unambiguously with a set of indices running from 1 to 2m. In this way, ˜ j1 ,j2 ,...,jm−1 refers to the term in D(m+1) (0) for which derivatives appear on the summand W the slow modes with indices j1 , j2 , . . . , jm−1 . For example, for m = 2 we only have one index and it ranges from 1 to 4. (See Section 3.1 for details.) For m = 3 we have two indices, and j1 ranges from 1 to 5 while j2 ranges from 2 to 6. We conclude this remark by noting that we use tildes on W here to distinguish it from the coefficients in [17], which will also play a role in our analysis, and will be designated as in [17] without tildes.
3.1
The Case m = 2
Before working out the general case, we focus on the case m = 2. For specificity we’ll assume J − = 3 and J + = 2, which is expected in the sense that u¯0 will generally be a linear combination of the solutions that decay at exponential rate when s = 0, and generically these linear combinations will contain the slowest decaying solutions. In this case (3.3) becomes 1 000 − + + − + − ¯0 , φ− D (0) = W (φ+ 4) 1 , ∂ss (φ2 − φ3 ), ∂s φ3 , φ4 , φ1 , φ2 , u 3 + − + + − − + W (φ+ ¯0 , φ− 1 , ∂ss (φ2 − φ3 ), φ3 , ∂s φ4 , φ1 , φ2 , u 4) − − 0 − + + − + + + W (φ1 , ∂ss (φ2 − φ3 ), φ3 , φ4 , ∂s φ1 , φ2 , u¯ , φ4 ) + − + + − − + W (φ+ ¯0 , φ− 1 , ∂ss (φ2 − φ3 ), φ3 , φ4 , φ1 , ∂s φ2 , u 4) ˜3 + W ˜4 + W ˜1 + W ˜ 2. =W 18
(3.4)
Proceeding similarly as in [17], we find + φ1 φ+ φ+ φ− u¯0 φ− q 3 4 2 4 0 0 0 0 0 ˜ 1 = −2λ0 β2− ρ− φ+ W φ+ φ+ φ− u¯00 φ− 1 det 1 3 4 2 4 0 −Γ−1 B+ r3+ (0) −Γ−1 B+ r4+ (0) −Γ−1 B− r3− (0) 0 0 q ¯ (0)−1 ) det([u], β2− r4− (0)). × det(Γ−1 M p 1 (3) This can be expressed as λ0 β2− ρ− 1 W1 , where W1 is the corresponding term in 3 Da (0) (from [17]). In particular, proceeding similarly for the other terms we find that we can express 1 (3) D (0) = W1 + W2 + W3 + W4 , 3 a and the transversality condition in one space dimension is precisely that this sum be nonzero. Correspondingly, in multiple space dimensions we have q q q q 1 (3) − − + D (0) = λ0 W1 λ0 + β2 κ0 + W2 λ0 + β1 κ0 + W3 λ0 + β1 κ0 + W4 λ0 + β2+ κ0 , 3 which corresponds with q q q q 1 (3) − − + 3 2 2 2 D (0)s = λ W1 λ + β2 |ξ| + W2 λ + β1 |ξ| + W3 λ + β1 |ξ| + W4 λ + β2+ |ξ|2 . 3 (3)
In this case, the condition Da (0) 6= 0 does not provide enough information about D(3) (0), and we make the stronger assumption that the {Wj }4j=1 are all non-zero, and all have the same sign. This has been verified for an example case in [17], and we also note that in the framework of [17] each of the {Wj }4j=1 has to be computed individually, so there is no extra work associated with checking this stronger condition. This condition will be stated more precisely for all m ≥ 2 in the next subsection. We also need to clarify the behavior of D(s) for values of λ near λ∗ (ξ). Since D(s) is analytic in s, we can express it as o n 000 D(4) (0) 3 D (0) + s + ... , D(s) = s 3! 4! where the expression in brackets vanishes when λ = λ∗ (ξ) = −c3 |ξ|3 (1 + o(|ξ|)). Expressing (3.5) in terms of λ0 , κ0 and σ, we have 3/2
σλ0 = −c3 σ 3/2 κ0 (1 + o(σ 1/2 )). Since σ 6= 0 for ξ 6= 0 we can divide by σ to conclude that 3/2
λ0 = −c3 σ 1/2 κ0 (1 + o(σ 1/2 )), 19
(3.5)
which allows us to identify a factor of D(s). That is, n o 1/2 3/2 1/2 3 D(s) = s λ0 + c3 σ κ0 (1 + o(σ )) h(s, λ0 , κ0 )
(3.6)
= s(λ − λ∗ (ξ))h(s, λ0 , ξ0 ), where |h(s, λ0 , ξ0 )| ≥ h0 > 0 for s sufficiently small and (λ0 , κ0 ) ∈ S for some > 0.
3.2
The General Case m ≥ 3
For the general case, we keep in mind that every summand will have a term of the form − ± ± ∂ss (φ+ J + − φJ − ) and m − 1 terms of the form ∂s φj , where φj is a slowly decaying solution. + 2m m (Recall that the slow-decaying solutions are {φ− j }j=1 and {φj }j=m+1 .) These will all give + − the same values as their counterparts for Da (ζ), except that the term q ∂ss (φJ + − φJ − ) will
− m βm+1−j introduce λ0 , each of the {φ− ρ− j , and each of j }j=1 will introduce multiplication by q 2m βj+ ρ+ the {φ+ m+1−j . j }j=m+1 will introduce a scaling Implementing the relations discussed above, and using Remark 3.1, we see that
2 D(m+1) (0) = λ0 (m + 1)! j
(2m)
X
Wj1 ,j2 ,...,jm−1
p
λ0 + β(j1 )κ0 · · ·
p λ0 + β(jm−1 )κ0 ,
1 ,j2 ,...,jm−1 =1
and correspondingly (2m)
2 D(m+1) (0)sm+1 = λ (m + 1)! j
X
Wj1 ,j2 ,...,jm−1
p
λ + β(j1 )|ξ|2 · · ·
p λ + β(jm−1 )|ξ|2 ,
1 ,j2 ,...,jm−1 =1
(3.7) where β(ji ) denotes the value
βj±
corresponding with
˜ j1 ,j2 ,...,jm−1 = Wj1 ,j2 ,...,jm−1 W
p
φ± ji .
I.e., we have the relation
λ + β(j1 )|ξ|2 · · ·
p λ + β(jm−1 )|ξ|2 ,
where the Wj1 ,j2 ,...,jm−1 are the coefficients analyzed in [17] so that 2 D(m+1) (0) = (m + 1)! a j
(2m)
X
Wj1 ,j2 ,...,jm−1 .
1 ,j2 ,...,jm−1 =1
Similarly as in the case m = 2, we make the following assumption. Condition (Dξ ). We assume that at least one of the coefficients Wj1 ,j2 ,...,jm−1 in (3.7) is non-zero, denoted WJ , and that the remaining coefficients are either 0 or of the same sign as WJ . 20
In addition, proceeding as in the final calculation of Section 3.1 we conclude that there exists a function h(s, λ0 , κ0 ) so that D(s) = sm−1 (λ − λ∗ )h(s, λ0 , κ0 ), where for s sufficiently small h(s, λ0 , κ0 ) ≥ h0 > 0 for some constant h0 and (λ0 , κ0 ) ∈ S for some > 0.
4
Spectrum Under Complexification of ξ
During our contour analysis in Section 6 we will often need to complexify the Fourier variable ξ, and we need to understand what happens to the spectrum of Lξ under such complexification. In this section, we analyze the spectrum under complexification, proving Part II of Theorem 1.1. In this section, and subsequently in the remaining sectinos, we will simplify calculations by using sufficiently large constants C, even when more precise constants could be identified (with more work). We will often arrange a series of inequalities for which a new constant will be appropriate at each step, and we’ll designate these constants C1 , C2 , etc. Finally, we will recycle this notation, so that the next calculation will begin again with C1 , unrelated to C1 from the previous calculation. We begin with the essential spectrum, which is determined by the constant coefficient equation (2.2). (See, for example, the discussion in the appendix to Chapter 5 of [16] or [25].) We observe that when ξ is complexified to ξ = ξR + iξI the square of Euclidean norm is complexified to ζ = ζR + iζI = |ξR |2 − |ξI |2 + 2ihξR , ξI i. The essential spectrum is determined by solutions of the eigenvalue problem L± ξ φ = λφ of ikx1 the form φ(x1 ; ζ) = e v(ζ). Upon substitution into (2.2) we obtain the matrix eigenvalue problem n o − k 4 M± Γ − k 2 (M± B± + 2ζM± Γ) − (ζM± B± + ζ 2 M± Γ) v = λv. We multiply this equation by M±−1 (on the left), and take an inner product with v to find λhM±−1 v, vi = −k 4 hΓv, vi − k 2 hB± v, vi − 2k 2 ζhΓv, vi − ζhB± v, vi − ζ 2 hΓv, vi. Each of the matrices appearing in this last expression is positive definite, and this allows us to keep precise track of the real and imaginary parts of λ. First, Im λhM±−1 v, vi = −2k 2 ζI hΓv, vi − ζI hB± v, vi − 2ζR ζI hΓv, vi, from which −1 Im λhM± v, vi ≤ 2k 2 |ζI ||hΓv, vi| + |ζI ||hB± v, vi| + 2|ζR ζI hΓv, vi| ≤ C k 2 |ζI | + |ζI | + |ζI |2 + |ζR |2 |v|2 , 21
for some C > 0. Using the fact that M −1 is positive definite, we can conclude that there exists a constant C˜I large enough so that 2 2 2 ˜ |Im λ| ≤ CI k |ζI | + |ζI | + |ζI | + |ζR | . This leads to an inequality in terms of ξR and ξI of the form |Im λ| ≤ CI k 4 + |ξR |2 + |ξI |2 + |ξR |4 + |ξI |4 ,
(4.1)
for some constant CI . On the other hand, Re λhM±−1 v, vi = −k 4 hΓv, vi−k 2 hB± v, vi−2k 2 ζR hΓv, vi−ζR hB± v, vi−(ζR2 −ζI2 )hΓv, vi. (4.2) Expressing this in terms of ξR and ξI we find Re λhM±−1 v, vi = −k 4 hΓv, vi − k 2 hB± v, vi − 2k 2 |ξR |2 hΓv, vi + 2k 2 |ξI |2 hΓv, vi − |ξR |2 hB± v, vi + |ξI |2 hB± v, vi 4 2 2 4 2 − |ξR | − 2|ξR | |ξI | + |ξI | − 4hξR , ξI i hΓv, vi. Again, we will use the fact that each matrix in this expression is positive definite, and we will ensure that the subtracted terms with k 4 and |ξR |4 dominate by using the inequalities 1 k 2 |ξI |2 ≤ k 4 + |ξI |4 2 2 1 2 2 4 |ξR | |ξI | ≤ |ξR | + |ξI |4 , 2 2 for any > 0. We conclude that there exist constants cR and CR so that Re λ ≤ −cR k 4 + k 2 + |ξR |2 + |ξR |4 + CR |ξI |2 + |ξI |4 .
(4.3)
Combining (4.1) and (4.3), and choosing c1 sufficiently small, we obtain Re λ + c1 |Im λ| ≤ −θ1 k 4 + k 2 + |ξR |2 + |ξR |4 + Cθ1 |ξI |2 + |ξI |4 . We conclude that the essential spectrum of Lξ lies to the left of the contour Re λ + c1 |Im λ| ≤ −θ1 |ξR |2 + |ξR |4 + Cθ1 |ξI |2 + |ξI |4 . Turning now to the point spectrum, we observe that the perturbation analysis employed in deriving Part 4 of Theorem 1.1 extends to a complex neighborhood of ξ = 0, with, as above, ζ = |ξR |2 − |ξI |2 + 2ihξR , ξI i. (4.4) 22
Additional eigenvalues will either lie to the left of the essential spectrum boundary (in which case we bound them along with essential spectrum), or are isolated and can be analyzed by standard perturbation theory. More precisely, suppose λ2 (ξ) denotes a second isolated eigenvalue (λ∗ (ξ) being the first), so that for any fixed ξR 6= 0 we have λ2 (|ξR |2 ) ≤ −θ1 |ξR |2 (from Part 5 of Theorem 1.1). We now view complexification as a perturbation near λ2 (|ξR |2 ), and write λ2 (ζ) = λ2 (|ξR |2 ) + a1 (ζ − |ξR |2 ) + O(|ζ − |ξR |2 |). Since we are only complexifying, ζ has the form (4.4), and we conclude λ2 (ζ) = λ2 (|ξR |2 ) + a1 (−|ξI |2 + 2ihξ0 , ξI i) + O(|ζ − |ξ0 |2 |), so that Re λ2 + |Im λ2 | ≤ −c2 |ξR |2 + C2 |ξI |2 . Finally, we require a bound on the eigenvalues for large complex values of ξ. For this calculation, we are taking M to be a constant matrix, in which case our eigenvalue problem (1.11) can be expressed as ¯ 00 + 2ζΓφ00 − ζ Bφ ¯ − ζ2 Γφ = λM −1 φ. −Γφ0000 + (Bφ) We suppose λ is an eigenvalue, and take an inner product of this equation with its eigenfunction φ, which we take normalized, to see that (integrating by parts where appropriate) ¯ 0 , φ0 i − 2ζhΓφ0 , φ0 i − ζhBφ, ¯ φi − ζ 2 hΓφ, φi. λhM −1 φ, φi = −hΓφ00 , φ00 i − h(Bφ) In this case, it will be useful to use the interpolation inequality 1 1 kφ0 k2L2 ≤ ( kφk2L2 + kφ00 k2L2 ). 2 Similarly as with our analysis of essential spectrum, we see that ¯ 0 φ, φ0 i − 2ζI hΓφ0 , φ0 i − ζI hBφ, ¯ φi − 2ζR ζI hΓφ, φi. Im λhM −1 φ, φi = −Im hB We see that there exists a constant C1 so that 0 2 2 2 0 2 |Im λ| ≤ C1 1 + |ζI | + kφ kL2 + |ζI |kφ kL2 + |ζR | + |ζI | . We use our interpolation inequality to eliminate kφ0 k2L2 with the following inequalities: 1 kφ0 k2L2 ≤ (kφk2L2 + kφ00 k2L2 ) 2 1 1 |ζI |kφ0 k2L2 ≤ |ζI | |ζI |kφk2L2 + kφ00 k2L2 2 |ζI | 1 1 ≤ |ζI |2 kφk2L2 + kφ00 k2L2 . 2 2 23
(4.5)
(4.6)
In this way, |Im λ| ≤ C2 1 + |ζI | + kφ00 k2L2 + |ζR |2 + |ζI |2 . We can express this in terms of ξR and ξI as |Im λ| ≤ C3 1 + |ξR |2 + |ξR |4 + |ξI |2 + |ξI |4 + kφ00 k2L2 . Likewise, ¯ 0 , φ0 i − Re hB ¯ 0 φ, φ0 i Re λhM −1 φ, φi = −hΓφ00 , φ00 i − hBφ ¯ φi − (ζR2 − ζI2 )hΓφ, φi. − 2ζR hΓφ0 , φ0 i − ζR hBφ, ¯ and B ¯ 0 are not necessarily positive definite. Nonetheless, We need to keep in mind that B using the fact that M and Γ are postive definite we can write Re λ ≤ −c1 kφ00 k2L2 − 2ζR hΓφ0 , φ0 i − (ζR2 − ζI2 )hΓφ, φi + C1 1 + |ζR | + kφ0 k2L2 . Using (4.5) with sufficiently small, we can dominate part of kφ0 k2L2 with the term −c1 kφ00 k2L2 . For the remaining terms we proceed as follows. First, −2ζR hΓφ0 , φ0 i = −2|ξR |2 hΓφ0 , φ0 i + 2|ξI |2 hΓφ0 , φ0 i ≤ −2γ|ξR |2 kφ0 k2L2 + C|ξI |2 kφ0 k2L2 2 C 2 0 2 2 |ξI | 00 2 ≤ −2γ|ξR | kφ kL2 + |ξI | + kφ kL2 . 2 |ξI |2 Second, we observe that 1 ζR2 − ζI2 ≥ |ξR |4 − 17|ξI |4 2 so that ¯ φi ≤ −c4 |ξR |4 + C4 |ξI |4 . −(ζR2 − ζI2 )hBφ, Combining these observations, and taking c3 sufficiently small, we find Re + c3 |Im λ| ≤ −θ3 |ξR |4 + Cθ3 1 + |ξR |2 + |ξI |2 + |ξI |4 . This gives the claim.
5
Construction of the Resolvent Kernel
Upon taking the Fourier-Laplace transform of our Green’s function equation (1.15) (t → λ, x˜ → ξ, G → Gλ,ξ ), we obtain the resolvent kernel Gλ,ξ (x1 , y), which satisfies the ODE (Lξ − λI)Gλ,ξ = −
1 (2π) 24
n−1 2
e−i˜y·ξ δy1 (x1 )I.
(5.1)
As boundary conditions, we insist that Gλ,ξ must decay as x1 → ±∞, and so for x1 < 0 Gλ,ξ 2m will be constructed as a linear combination of the functions {φ− j }j=1 , while for x1 > 0 Gλ,ξ will + 2m be constructed as a linear combination of the functions {φj }j=1 . The expansion coefficients for these expressions can be characterized in terms of solutions to the dual eigenvalue problem ˜ ˜ ξ φ˜ = λφ, L
(5.2)
˜ ξ denotes the transposed adjoint operator, where φ˜ denotes a row vector function and L defined so that ˜ ξ φ˜ := (L∗ φ˜T )T , L (5.3) ξ where L∗ξ denotes the usual adjoint operator L∗ξ := −Hξ Aξ ,
(5.4)
and the superscript T denotes transpose. ˜ ξ as For certain calculations it will be convenient to express L
where
˜ ξ = −H ˜ ξ A˜ξ , L
(5.5)
¯ )x1 + |ξ|2 φ˜M ¯, A˜ξ φ˜ := −(φ˜x1 M ˜ ξ w˜ := −w˜x1 x1 Γ + w( ¯ + |ξ|2 Γ). H ˜ B
(5.6)
˜ ξ , so that We will denote by Hλ,ξ (x1 , y) the resolvent kernel of L ˜ ξ − λI)Hλ,ξ = − (L
1 (2π)
n−1 2
e−i˜y·ξ δy1 (x1 )I.
(5.7)
In order for Gλ,ξ to solve (5.1), it must be continuous in all derivatives, including mixed partials, up to and including order 2, and it must have jumps in at least some of its order 3 derivatives. In order to efficiently describe this behavior, we will adopt the jump notation [·], so that, for example, [Gλ,ξ ](y1 ) := lim+ Gλ,ξ (x1 , y) − lim− Gλ,ξ (x1 , y). x1 →y1
x1 →y1
(We note that [Gλ,ξ ] depends continuously on y˜, and this dependence is suppressed for notational brevity; as usual, x1 → y1± means approach to y1 from the right and left.) Likewise, it will be notationally convenient for certain calculations to set G± λ,ξ (x1 , y) := lim± Gλ,ξ (z1 , y), z1 →x1
so that − [Gλ,ξ ] = G+ λ,ξ (y1 , y) − Gλ,ξ (y1 , y).
25
In working with Gλ,ξ and Hλ,ξ it’s convenient to adopt the notation ∂ j,k =
∂j ∂k , ∂xj1 ∂y1k
(5.8)
and define the matrix
Gλ,ξ
Gλ,ξ ∂ 1,0 Gλ,ξ := ∂ 2,0 Gλ,ξ ∂ 3,0 Gλ,ξ
∂ 0,1 Gλ,ξ ∂ 1,1 Gλ,ξ ∂ 2,1 Gλ,ξ ∂ 3,1 Gλ,ξ
∂ 0,2 Gλ,ξ ∂ 1,2 Gλ,ξ ∂ 2,2 Gλ,ξ ∂ 3,2 Gλ,ξ
∂ 0,3 Gλ,ξ ∂ 1,3 Gλ,ξ . ∂ 2,3 Gλ,ξ ∂ 3,3 Gλ,ξ
(5.9)
Lemma 5.1. Suppose there exists a function Gλ,ξ (x1 , y) for which all derivatives in Gλ,ξ are continuous, except possibly at y1 = x1 , that satisfies (5.1) and for each fixed y ∈ Rn decays to 0 as x1 → ±∞. Then ¯ −1 0 0 0 −Γ−1 M ¯ −1 2Γ−1 dM¯ −1 0 0 Γ−1 M e−i˜y·ξ dy1 [Gλ,ξ ] = , ¯ −1 n−1 −1 ¯ −1 −1 dM 0 −Γ M −Γ dy1 g34 (2π) 2 −1 ¯ −1 0 g43 g44 Γ M where
¯ −1 d2 M −1 ¯ −1 ¯ −1 2 −1 ¯ −1 − Γ BΓ M − 2|ξ| Γ M dy12 ¯ −1 M ¯ −1 + 2|ξ|2 Γ−1 M ¯ −1 = Γ−1 BΓ
g34 = g43
− Γ−1
¯ −1 ¯ −1 −1 ¯ 0 −1 ¯ −1 2 −1 dM −1 dM ¯ − Γ B Γ M + 3|ξ| Γ , g44 = 2Γ BΓ dy1 dy1 ¯ and M ¯ are evaluated at y1 . and in all cases B Moreover, ¯Γ ¯Γ ¯ 0Γ ¯B ¯ − 2|ξ|2 M ¯B ¯ 0 + |ξ|2 M 0 M −M −M ¯B ¯ + 2|ξ|2 M ¯Γ−M ¯ 00 Γ ¯ 0Γ ¯Γ 0 M n−1 M −M . [Gλ,ξ ]−1 = (2π) 2 ei˜y·ξ 0 ¯ ¯ −2M Γ MΓ 0 0 ¯Γ −M 0 0 0 −1
Note on the proof of Lemma 5.1. The case ξ = 0 was established as Lemma 3.2 in [18], and the analysis for ξ 6= 0 is a straightforward generalization. We omit the details. ˜ ξ is defined as in Lemma 5.2. For each λ ∈ C and ξ ∈ Cn−1 , if z(·; λ, ξ) ∈ C 4 (R) and L ˜ ξ z = λz if and only if the entwining (5.3), then L w w0 z z 0 z 00 z 000 [Gλ,ξ ]−1 w00 , w000 26
is constant (in x1 ) for all w(·; λ, ξ) ∈ C 4 (R) satisfying Lξ w = λw. Note on the proof of Lemma 5.2. The case ξ = 0 was established as Lemma 3.3 in [18], and the analysis for ξ 6= 0 is a straightforward generalization. We omit the details. In the following development, we will need to specify a variety of vectors and matrices in terms of the φ± j , and we summarize our notation for these here. We will set: ± φj ± ± 0 φj ± ± (φ±j )00 Φ± = (Φ± wj± = Φ± (5.10) ± 0 1 , Φ2 , . . . Φ2m ). j = (φj ) (φj ) 000 (φ± j ) th We note that Φ± thus denotes the 4m × 2m matrix in which Φ± column. j comprises the j Following the analysis of [18], we can find a linearly independent set of solutions to (5.2) so that i˜ y ·ξ j ˜ − [Gλ,ξ ]−1 Φ− = (2π) n−1 2 e δk ; Φ j k − − −1 ˜ [Gλ,ξ ] Ψ = 0; Φ j k (5.11) − −1 − ˜ Ψj [Gλ,ξ ] Φk = 0; i˜ y ·ξ j ˜ − [Gλ,ξ ]−1 Ψ− = (2π) n−1 2 e δ , Ψ j
where
δkj
k
k
denotes a standard Kronecker delta. More briefly, we can express these relations as − ˜ Φ −1 Φ− Ψ− = I. − [Gλ,ξ ] ˜ Ψ
Using (5.11) and the estimates of Lemma 2.1, and proceeding as in [18], we obtain the following lemma describing our choice of solutions to (5.2). Lemma 5.3. Under Conditions (C0)-(C3), there exist constants , η > 0 so that the following estimates hold uniformly in (λ, ξ) ∈ S on a choice of linearly independent solutions of the eigenvalue problem (5.2): (I) For x1 ≤ 0, k = 0, 1, 2, 3, and j = 1, 2, . . . , m, x1 x1 − − − − k µ− k ˜− k −µ− 2m+j 2m+j − (µ2m+j ) e ∂x1 φj (x1 ; s) = c˜j (s) (−µ2m+j ) e r˜2m+1−j + O(e−η|x1 | ); −
(slow )
−µ3m+j x1 k − ˜− (−µ− ˜m+1−j ∂xk1 φ˜− m+j (x1 ; s) = c m+j (s)e 3m+j ) r
+ O(e−η|x1 | ) ;
(fast)
and −µ− − −η|x1 | j x1 (−µ− )k r ∂xk1 ψ˜j− (x1 ; s) = d˜− (s)e ˜ + O(e ) ; j j j −µ− x1 − − k ˜− k − −η|x1 | ˜ m+j ∂x1 ψm+j (x1 ; s) = dm+j (s)e (−µm+j ) r˜m+j + O(se ) . 27
(fast) (slow )
(II) For x1 ≥ 0, k = 0, 1, 2, 3, and j = 1, 2, . . . , m, −µ+ x1 + k + + −η|x1 | k ˜+ j (−µj ) r˜j + O(e ) ; ∂x1 φj (x1 ; s) = c˜j (s)e + + + k −µ+ k µ− m+j x1 − (µ+ m+j x1 r ∂xk1 φ˜+ (x ; s) = c ˜ (s) (−µ ) e ) e ˜m+j 1 m+j m+j m+j m+j + O(e−η|x1 | );
(fast)
(slow )
and −µ+ k + −η|x1 | 2m+j x1 (−µ+ ∂xk1 ψ˜j+ (x1 ; s) = d˜+ (s)e ) r ˜ + O(e ) ; j 2m+j m+1−j −µ+ + k + −η|x1 | 3m+j x1 (−µ+ (x1 ; s) = d˜+ (s)e ) r ˜ + O(e ) . ∂xk1 ψ˜m+j m+j 3m+j 2m+1−j
(slow ) (fast)
Here, there exists a constant C so that for (λ, ξ) ∈ S we have the estimates −1 |˜ c− j | ≤ Cs ;
j = 1, 2, . . . , m,
−1 |˜ c+ m+j | ≤ Cs ;
j = 1, 2, . . . , m,
while the remaining constants (in x1 ) are bounded by C. We are now prepared to state a series of three lemmas describing the behavior of our resolvent kernel Gλ,ξ . We note at the outset that we will state all three lemmas for the case y1 ≤ 0; an analogous series of lemmas holds for y1 ≥ 0. Lemma 5.4. Suppose Conditions (C0)-(C3) hold and that σ(Lξ ) satisfies the conclusions of Theorem 1.1 (possibly under weaker hypotheses), along with Spectral Condition (Dξ ). If Gλ,ξ is defined as in (5.9), then there exists a constant > 0 so that for (λ, ξ) ∈ S we have the following representation for y1 < x1 < 0: (2π)
n−1 2
˜ − (y1 ; s) + Ψ− (x1 ; s)Ψ ˜ − (y1 ; s). eiξ·˜y Gλ,ξ (x1 , y) = Φ− (x1 ; s)E − (s)Ψ
Here, E − denotes an m × m matrix whose entries have the following properties: (i) For i ∈ {1, . . . , m}, j ∈ {1, . . . , m} (slow-fast) and for i ∈ {m + 1, . . . , 2m}\{J − }, j ∈ {1, . . . , m} (fast-fast) λO(sm−1 ) + O(sm+2 ) − Eij (s) = . D(s) (ii) For i ∈ {1, . . . , m}, j ∈ {m + 1, . . . , 2m} (slow-slow) and for i ∈ {m + 1, . . . , 2m}\{J − }, j ∈ {m + 1, . . . , 2m} (fast-slow) Eij− (s)
λO(sm−1 ) + O(sm+2 ) = . µ− j (s)D(s)
28
(iii) For i = J − , j ∈ {1, . . . , m} (excited-fast) EJ−− j (s) =
O(sm ) . D(s)
(iv) For i = J − , j ∈ {m + 1, . . . , 2m} (excited-slow) EJ−− j (s) =
O(sm ) . µ− j (s)D(s)
Proof. The proof of Lemma 5.4 follows very closely the proof of the associated Lemma 3.9 in [18]. In particular, the calculations leading to equation (3.27) of [18] are precisely the same as in that reference, and we obtain Eij− (s) = −
+ − − − W (φ+ 1 , . . . , φ2m , φ1 , . . . , ψj , . . . , φ2m ) , D(s)
where the growth mode ψj− appears in the (2m + i)th slot of W . In the following discussion, we will designate terms according to what type of solution − ψj is (fast or slow) and what type of solution it replaces. So, for example, the slow-fast case will refer to the case in which a slow decay solution has been replaced by a fast growth solution. Since D(s) is understood, we focus primarily on the numerator of Eij− (s), which we denote Nij− . Slow-fast. When a slow decay solution is replaced by a fast growth solution, we can analyze Nij− similarly as D(s) (see Section 3 and especially Remark 3.1). We obtain a leading-order term of size λO(sm−1 ) (precisely as for the Evans function), and the higher + order term s2 O(e−η|x1 | ) in our expressions for φ− J − and φJ + gives a second term of size O(sm+2 ). Combining these observations, we conclude precisely the stated estimate for this case. We observe that since λ = s2 λ0 the term O(sm+2 ) is higher order in s than λO(sm−1 ); however, we will see in what follows that we can take advantage of the form λO(sm−1 ) during our analysis of the residue term, and we separate it out. Fast-fast. In this case, we replace a non-excited fast decay mode with a fast growth mode. Proceeding as in our analysis of the slow-fast case, we obtain the same result. Slow-slow. In this case, we replace a slow decay mode with a slow growth mode, which we recall (from our proof of Lemma 2.1) can be expressed as ψj− (x1 ; s) =
1 ¯− ψ (x ; s) − φ (x ; s) . 1 2m+1−j 1 j µ− j
The numerator Nij− can be analyzed again by the same methods used with the Evans function, except that we now have µ− j in the denominator. This leads to the stated estimate. We effectively analyze the term µ1− ψ¯j− separately from µ1− φ− 2m+1−j ; in fact, the matrix will j
j
29
typically have $\phi_{2m+1-j}^-$ in it (unless this is the mode being replaced), and so there will be no contribution from that term. In light of this, there is no chance of advantageous cancellation.

Fast-slow. In this case, we replace a fast decay mode with a slow growth mode, and otherwise proceed precisely as in the slow-slow case.

Excited-fast. In this case, the excited solution $\phi_{J^-}^-$ is replaced by one of the fast growth solutions $\{\psi_j^-\}_{j=1}^m$. Here, $N_{J^- j}^-$ does not behave like the Evans function, because it only involves one excited mode. We see by observations similar to those in Remark 3.1 that $N_{J^- j}^-$ and its derivatives only vanish up to order $m-1$. More precisely, when $s = 0$ the matrix in $N_{J^- j}^-$ has all zeros in its last $m$ rows, and in this way, derivatives of $N_{J^- j}^-$ will be zero until there is at least one $s$ derivative on each of $m$ slow solutions. We conclude $N_{J^- j}^-(s) = O(s^m)$, giving the stated estimate.

Excited-slow. In this case, the excited solution $\phi_{J^-}^-$ is replaced by one of the slow growth solutions $\{\psi_j^-\}_{j=m+1}^{2m}$. We can proceed precisely as in the excited-fast case, except that as in the slow-slow and fast-slow cases we get an additional factor $\mu_j^-(s)$ in the denominator.

Lemma 5.5. Suppose Conditions (C0)-(C3) hold and that $\sigma(L_\xi)$ satisfies the conclusions of Theorem 1.1 (possibly under weaker hypotheses), along with Spectral Condition (D$_\xi$). If $G_{\lambda,\xi}$ is defined as in (5.9), then there exists a constant $\epsilon > 0$ so that for $(\lambda,\xi) \in S_\epsilon$ we have the following representation for $x_1 < y_1 < 0$:
$$(2\pi)^{\frac{n-1}{2}} e^{i\xi\cdot\tilde y} G_{\lambda,\xi}(x_1,y) = -\Phi^-(x_1;s)\tilde\Phi^-(y_1;s) + \Phi^-(x_1;s)E^-(s)\tilde\Psi^-(y_1;s).$$
Here, $E^-$ denotes precisely the same $m \times m$ matrix described in Lemma 5.4.

Note on the proof of Lemma 5.5. The proof of Lemma 5.5 is almost identical to that of Lemma 5.4, and we omit the details.

Lemma 5.6. Suppose Conditions (C0)-(C3) hold and that $\sigma(L_\xi)$ satisfies the conclusions of Theorem 1.1 (possibly under weaker hypotheses), along with Spectral Condition (D$_\xi$). If $G_{\lambda,\xi}$ is defined as in (5.9), then there exists a constant $\epsilon > 0$ so that for $(\lambda,\xi) \in S_\epsilon$ we have the following representation for $y_1 < 0 < x_1$:
$$(2\pi)^{\frac{n-1}{2}} e^{i\xi\cdot\tilde y} G_{\lambda,\xi}(x_1,y) = \Phi^+(x_1;s)M^-(s)\tilde\Psi^-(y_1;s).$$
Here, $M^-$ denotes an $m \times m$ matrix whose entries have the following properties:

(i) For $i \in \{m+1,\ldots,2m\}$, $j \in \{1,\ldots,m\}$ (slow-fast) and for $i \in \{1,\ldots,m\}\setminus\{J^+\}$, $j \in \{1,\ldots,m\}$ (fast-fast)
$$M_{ij}^-(s) = \frac{\lambda O(s^{m-1}) + O(s^{m+2})}{D(s)}. \tag{5.12}$$

(ii) For $i \in \{m+1,\ldots,2m\}$, $j \in \{m+1,\ldots,2m\}$ (slow-slow) and for $i \in \{1,\ldots,m\}\setminus\{J^+\}$, $j \in \{m+1,\ldots,2m\}$ (fast-slow)
$$M_{ij}^-(s) = \frac{\lambda O(s^{m-1}) + O(s^{m+2})}{\mu_j^-(s)D(s)}. \tag{5.13}$$

(iii) For $i = J^+$, $j \in \{1,\ldots,m\}$ (excited-fast)
$$M_{J^+ j}^-(s) = \frac{O(s^m)}{D(s)}. \tag{5.14}$$

(iv) For $i = J^+$, $j \in \{m+1,\ldots,2m\}$ (excited-slow)
$$M_{J^+ j}^-(s) = \frac{O(s^m)}{\mu_j^-(s)D(s)}. \tag{5.15}$$

Moreover, in the excited-slow case
$$M_{J^+ j}^-(s) = E_{J^- j}^-(s) + \frac{O(s^{m+2})}{\mu_j^-(s)D(s)}.$$
Proof. The proof of Lemma 5.6 is almost identical to that of Lemma 5.4, except for the last statement in Part (iv). In order to understand this relation, we note that the numerator for $M_{J^+ j}^-$ will be
$$N_{J^+ j}^- = W(\phi_1^+,\ldots,\psi_j^-,\ldots,\phi_{2m}^+,\phi_1^-,\ldots,\phi_{J^-}^-,\ldots,\phi_{2m}^-), \tag{5.16}$$
where $\psi_j^-$ appears in the $J^+$ slot. Recalling $\phi_{J^-}^-(x_1;s) = \bar u'(x_1) + O(s^2 e^{-\eta|x_1|})$, we see that
$$\begin{aligned}
N_{J^+ j}^-(s) &= W(\phi_1^+,\ldots,\psi_j^-,\ldots,\phi_{2m}^+,\phi_1^-,\ldots,\bar u',\ldots,\phi_{2m}^-)\\
&\quad + W(\phi_1^+,\ldots,\psi_j^-,\ldots,\phi_{2m}^+,\phi_1^-,\ldots,O(s^2 e^{-\eta|x_1|}),\ldots,\phi_{2m}^-)\\
&= -W(\phi_1^+,\ldots,\bar u',\ldots,\phi_{2m}^+,\phi_1^-,\ldots,\psi_j^-,\ldots,\phi_{2m}^-)\\
&\quad + W(\phi_1^+,\ldots,\psi_j^-,\ldots,\phi_{2m}^+,\phi_1^-,\ldots,O(s^2 e^{-\eta|x_1|}),\ldots,\phi_{2m}^-)\\
&= N_{J^- j}^- + W(\phi_1^+,\ldots,O(s^2 e^{-\eta|x_1|}),\ldots,\phi_{2m}^+,\phi_1^-,\ldots,\psi_j^-,\ldots,\phi_{2m}^-)\\
&\quad + W(\phi_1^+,\ldots,\psi_j^-,\ldots,\phi_{2m}^+,\phi_1^-,\ldots,O(s^2 e^{-\eta|x_1|}),\ldots,\phi_{2m}^-).
\end{aligned}$$
The terms involving $O(s^2 e^{-\eta|x_1|})$ can be analyzed like fast decay modes, except that we obtain the additional factor of $s^2$, leading to the final relation in Part (iv) of the theorem.

In the expansions described in Lemmas 5.4-5.6 we obtain several terms, and it will be convenient to categorize these. Since these expansions are cumbersome and quite similar, we will only write out full details for the case $y_1 < x_1 < 0$. In this case, we have
$$(2\pi)^{\frac{n-1}{2}} e^{i\xi\cdot\tilde y} G_{\lambda,\xi}(x_1,y_1) = (2\pi)^{\frac{n-1}{2}} e^{i\xi\cdot\tilde y}\{G_{\lambda,\xi}\}_{11}(x_1,y_1) = \sum_{i,j=1}^{2m} E_{ij}^-\,\phi_i^-(x_1)\tilde\psi_j^-(y_1) + \sum_{i=1}^{2m}\psi_i^-(x_1)\tilde\psi_i^-(y_1), \tag{5.17}$$
where dependence on $s$ has been suppressed for notational brevity. We have eight different categories of summand, six from the cases of Lemma 5.4 and two from the final sum in (5.17). We write
$$\begin{aligned}
(2\pi)^{\frac{n-1}{2}} e^{i\xi\cdot\tilde y} G_{\lambda,\xi}(x_1,y_1)
&= \underbrace{\sum_{j=1}^{m} E_{J^- j}^-\,\phi_{J^-}^-(x_1)\tilde\psi_j^-(y_1)}_{\text{excited-fast}}
+ \underbrace{\sum_{j=m+1}^{2m} E_{J^- j}^-\,\phi_{J^-}^-(x_1)\tilde\psi_j^-(y_1)}_{\text{excited-slow}}\\
&\quad+ \underbrace{\sum_{\substack{i=1\\ i\ne J^-}}^{m}\sum_{j=1}^{m} E_{ij}^-\,\phi_i^-(x_1)\tilde\psi_j^-(y_1)}_{\text{fast-fast, I}}
+ \underbrace{\sum_{\substack{i=1\\ i\ne J^-}}^{m}\sum_{j=m+1}^{2m} E_{ij}^-\,\phi_i^-(x_1)\tilde\psi_j^-(y_1)}_{\text{fast-slow}}\\
&\quad+ \underbrace{\sum_{i=m+1}^{2m}\sum_{j=1}^{m} E_{ij}^-\,\phi_i^-(x_1)\tilde\psi_j^-(y_1)}_{\text{slow-fast}}
+ \underbrace{\sum_{i=m+1}^{2m}\sum_{j=m+1}^{2m} E_{ij}^-\,\phi_i^-(x_1)\tilde\psi_j^-(y_1)}_{\text{slow-slow, I}}\\
&\quad+ \underbrace{\sum_{i=1}^{m}\psi_i^-(x_1)\tilde\psi_i^-(y_1)}_{\text{fast-fast, II}}
+ \underbrace{\sum_{i=m+1}^{2m}\psi_i^-(x_1)\tilde\psi_i^-(y_1)}_{\text{slow-slow, II}}.
\end{aligned} \tag{5.18}$$
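As a purely bookkeeping aside (ours, not part of the paper's argument), the index ranges behind the six $E$-type summands in (5.18) can be checked mechanically. The following minimal Python sketch, with placeholder values for $m$ and $J^-$, verifies that those ranges partition $\{1,\ldots,2m\}\times\{1,\ldots,2m\}$ with no pair counted twice.

```python
# Bookkeeping sketch (illustrative only): enumerate the (i, j) pairs behind the six
# E-type summands in (5.18) and check that they partition {1,...,2m} x {1,...,2m}.
m = 3          # placeholder system size
J_minus = 2    # placeholder index of the excited mode, 1 <= J_minus <= m

fast, slow = range(1, m + 1), range(m + 1, 2 * m + 1)
categories = {
    "excited-fast": {(J_minus, j) for j in fast},
    "excited-slow": {(J_minus, j) for j in slow},
    "fast-fast, I": {(i, j) for i in fast if i != J_minus for j in fast},
    "fast-slow":    {(i, j) for i in fast if i != J_minus for j in slow},
    "slow-fast":    {(i, j) for i in slow for j in fast},
    "slow-slow, I": {(i, j) for i in slow for j in slow},
}

union = set().union(*categories.values())
assert union == {(i, j) for i in range(1, 2 * m + 1) for j in range(1, 2 * m + 1)}
assert sum(len(v) for v in categories.values()) == (2 * m) ** 2   # no double counting
# The two remaining categories in (5.18) are the diagonal sums of psi_i^- psi~_i^-,
# split into fast (i <= m, "fast-fast, II") and slow (i > m, "slow-slow, II") pieces.
print({k: len(v) for k, v in categories.items()})
```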
6  Green's Function Estimates

In this section we employ a contour integral analysis to obtain estimates on the Green's functions $G(x,t;y)$ described in (1.15). This type of analysis has its origins in [8, 9, 30] and has been developed considerably since (see particularly [11, 21, 29] in the multidimensional setting). Our starting point is the Fourier-Laplace inverse of our resolvent kernels
$$G(x,t;y) = \frac{1}{(2\pi)^n i}\int_{\Gamma}\int_{\mathbb{R}^{n-1}} e^{\lambda t + i\tilde x\cdot\xi}\, G_{\lambda,\xi}(x_1,y_1,\tilde y)\,d\xi\,d\lambda, \tag{6.1}$$
where $\Gamma$ denotes a contour that (for each $\xi \in \mathbb{R}^{n-1}$) passes entirely to the right of the spectrum of $L_\xi$. In the analysis that follows it will be useful to specify $\Gamma$ as it relates to the wedge contour $\Gamma_\theta$, defined parametrically by the relation
$$\lambda_\theta(k) = -\theta_1 - \theta_2|k| + ik, \tag{6.2}$$
for positive constants $\theta_1, \theta_2$. (See Figure 6.3.)
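As a small illustrative aside (ours, not part of the analysis), the parametrization (6.2) is easy to sample numerically, confirming that $\Gamma_\theta$ stays uniformly inside the stable half-plane; the values of $\theta_1$ and $\theta_2$ below are arbitrary placeholders.

```python
# Sample the wedge contour (6.2) and confirm Re(lambda) <= -theta_1 < 0 for all k.
import numpy as np

theta1, theta2 = 0.5, 1.0            # assumed positive constants (placeholders)
k = np.linspace(-50.0, 50.0, 2001)   # contour parameter
lam = -theta1 - theta2 * np.abs(k) + 1j * k

assert np.all(lam.real <= -theta1)   # never to the right of Re(lambda) = -theta_1
assert np.allclose(lam.imag, k)      # the vertical coordinate is the parameter itself
print("max Re(lambda) on the sampled wedge:", lam.real.max())
```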
In order to give as simple a summary as possible, we observe at the outset that the leading-order terms in the expansion (5.18) have the forms
$$\frac{\lambda O(s^{m-1+l})}{\mu_{2m+j}^-(s)D(s)}\,e^{\mu_{2m+j}^-(s)z}; \qquad l = 0,1,2,\ldots, \tag{6.3}$$
and
$$\frac{O(s^{m+l})}{\mu_{2m+j}^-(s)D(s)}\,e^{\mu_{2m+j}^-(s)z}; \qquad l = 0,1,2,\ldots, \tag{6.4}$$
where $z < 0$ may denote $y_1$, $y_1 - x_1$, or $y_1 + x_1$. (Recall that (5.18) is for the case $y_1 < x_1 < 0$.) These expressions are valid for $s$ sufficiently small, and we need to clarify the size of the integrand in (6.1) for medium and large values of $s$. We recall that the scaling for $s$ is $s = (|\lambda|^2 + |\xi|^4)^{1/4}$, which is appropriate for long time dynamics. For short-time dynamics, however, the more natural scaling is $(|\lambda| + |\xi|^4)^{1/4}$, consistent with a fourth order equation. Of course, if one of these is bounded by some small constant then so is the other.

For large values of $|\lambda| + |\xi|^4$ we can obtain estimates on $G_{\lambda,\xi}$ by an argument similar to the one used in [18] for $x \in \mathbb{R}$. In particular, using $|\lambda| + |\xi|^4$ in place of $|\lambda|$ in that reference, we obtain the following lemma (cf. Lemma 3.16 in [18]).

Lemma 6.1. Suppose $G_{\lambda,\xi}(x_1,y)$ is a solution to (5.1) in the usual (i.e., distributional) sense, assumptions (C0)-(C3) hold, and $\sigma(L_\xi)$ satisfies the conclusions of Theorem 1.1 (possibly under weaker hypotheses). Then there exists a constant $R$ sufficiently large, and constants $m > 0$ and $C > 0$ so that for $|\lambda| + |\xi|^4 \ge R$ and $\Gamma$ to the right of the wedge-shaped contours in Parts 6 and 8 of Theorem 1.1 there holds
$$|\partial^\alpha G_{\lambda,\xi}(x_1,y)| \le C\big(|\lambda| + |\xi|^4\big)^{\frac{|\alpha|-3}{4}}\, e^{-m(|\lambda|+|\xi|^4)^{1/4}|x_1-y_1|},$$
for all multiindices $|\alpha| \le 3$ (in $x_1$ and $y$).

For the range of values $r < |\lambda| + |\xi|^4 < R$ we will only require boundedness of $G_{\lambda,\xi}$, which is guaranteed as long as we remain to the right of essential spectrum. We summarize this observation in the next lemma (cf. Lemma 3.17 in [18]).

Lemma 6.2. Suppose $G_{\lambda,\xi}(x_1,y)$ is a solution to (5.1) in the usual (i.e., distributional) sense, assumptions (C0)-(C3) hold, and $\sigma(L_\xi)$ satisfies the conclusions of Theorem 1.1 (possibly under weaker hypotheses). Then for any constants $0 < r < R$, there exist constants $m > 0$ and $C > 0$ (depending on $r$ and $R$) so that for $r \le |\lambda| + |\xi|^4 \le R$ and $\Gamma$ to the right of $\sigma(L_\xi)$
$$|\partial^\alpha G_{\lambda,\xi}(x_1,y)| \le C$$
for all multiindices $|\alpha| \le 3$ (in $x_1$ and $y$).
6.1  Theorem 1.2, Case (III).

Case (III) of Theorem 1.2 has two subcases, $|x-y| \ge Kt$ and $0 < t < T$, both of which are relatively easy to analyze. In both cases we can proceed entirely along contours for which $|\lambda| + |\xi|^4 \ge R$, where the estimate of Lemma 6.1 holds. Our goal is to understand integrals of the form
$$I = \int_{\mathbb{R}^{n-1}}\int_{\Gamma} e^{\lambda t + i(\tilde x - \tilde y)\cdot\xi}\, g_{\lambda,\xi}(x_1,y_1)\,d\lambda\,d\xi, \tag{6.5}$$
where
$$|g_{\lambda,\xi}(x_1,y_1)| \le C\big(|\lambda| + |\xi|^4\big)^{-3/4}\, e^{-m(|\lambda|+|\xi|^4)^{1/4}|x_1-y_1|}.$$
(Estimates on derivatives can be obtained similarly.) As a start, we set
$$\tilde R := \frac{|x-y|^{4/3}}{L t^{4/3}},$$
and take $K$ sufficiently large so that $\tilde R > R$. We complexify $\xi$ as $\xi = \xi_R + i\xi_I$, choosing
$$\xi_I = \frac{|x-y|^{1/3}}{L^2 t^{1/3}}\,\frac{\tilde x - \tilde y}{|\tilde x - \tilde y|}.$$
From Part II of Theorem 1.1, along with the definitions of $\tilde R$ and $\xi_I$, we see that there exist constants $l$ and $L$ so that the wedge-like contour
$$\operatorname{Re}\lambda = \tilde R - \frac{l}{L}\big(|\xi_R|^4 + |\operatorname{Im}\lambda|\big) \tag{6.6}$$
lies entirely to the right of the spectrum of $L_\xi$. We can express this contour as
$$\lambda(k) = \tilde R - \frac{l}{L}\big(|\xi_R|^4 + |k|\big) + ik. \tag{6.7}$$
Keeping in mind that in the estimates of Lemma 6.1 $|\xi|^2$ denotes complex modulus squared (as opposed to the complexification of $\sum_{k=1}^{n-1}\xi_k^2$), we see that there exists some constant $c > 0$ so that
$$|\lambda| + |\xi|^4 \ge c\big(\tilde R + |k| + |\xi|^4\big).$$
We have, then,
$$|e^{\lambda t}| = e^{\tilde R t - \frac{l}{L}(|\xi_R|^4 + |k|)t}, \qquad
|g_{\lambda,\xi}(x_1,y_1)| \le C_1\big(\tilde R + |k| + |\xi|^4\big)^{-3/4} e^{-mc(\tilde R + |k| + |\xi|^4)^{1/4}|x_1-y_1|}.$$
We see that $I$ in (6.5) satisfies the estimate
$$|I| \le C_2\int_{-\infty}^{+\infty}\int_{\mathbb{R}^{n-1}} e^{\tilde R t - \frac{l}{L}(|\xi_R|^4+|k|)t - \frac{|x-y|^{1/3}}{L^2 t^{1/3}}|\tilde x - \tilde y|}\,\big(\tilde R + |k| + |\xi|^4\big)^{-3/4}\, e^{-mc(\tilde R+|k|+|\xi|^4)^{1/4}|x_1-y_1|}\,d\xi_R\,dk.$$
Focusing on (part of) the exponent, we have
$$\begin{aligned}
\tilde R t - \frac{|x-y|^{1/3}}{L^2 t^{1/3}}|\tilde x - \tilde y| - mc\big(\tilde R + |k| + |\xi|^4\big)^{1/4}|x_1-y_1|
&\le \frac{|x-y|^{4/3}}{L t^{1/3}} - \frac{|x-y|^{1/3}}{L^2 t^{1/3}}|\tilde x - \tilde y| - mc\,\frac{|x-y|^{1/3}}{L^{1/4} t^{1/3}}|x_1-y_1|\\
&= \frac{|x-y|^{4/3}}{L t^{1/3}} - \frac{|x-y|^{1/3}}{t^{1/3}}\Big\{\frac{|\tilde x - \tilde y|}{L^2} + mc\,\frac{|x_1-y_1|}{L^{1/4}}\Big\}.
\end{aligned}$$
Using $|x-y| \le |x_1-y_1| + |\tilde x - \tilde y|$, we see that $L$ can be taken sufficiently large to ensure that
$$\tilde R t - \frac{|x-y|^{1/3}}{L^2 t^{1/3}}|\tilde x - \tilde y| - mc\big(\tilde R + |k| + |\xi|^4\big)^{1/4}|x_1-y_1| \le -\frac{|x-y|^{4/3}}{M t^{1/3}},$$
for some sufficiently large constant $M$. We obtain the inequality
$$\begin{aligned}
|I| &\le C_3\, e^{-\frac{|x-y|^{4/3}}{M t^{1/3}}}\int_{-\infty}^{+\infty}\int_{\mathbb{R}^{n-1}}\big(\tilde R + |k|\big)^{-3/4} e^{-\frac{l}{L}(|\xi_R|^4 + |k|)t}\,d\xi_R\,dk\\
&\le C_4\, t^{-\frac{n-1}{4}}\, e^{-\frac{|x-y|^{4/3}}{M t^{1/3}}}\int_{-\infty}^{+\infty}\big(\tilde R + |k|\big)^{-3/4} e^{-\frac{l}{L}|k|t}\,dk.
\end{aligned}$$
In order to be clear about this final integration, we'll consider the range $k \ge 0$, and we'll make the change of variables $\zeta = kt$. We obtain the integral
$$A = \int_0^\infty\big(\tilde R + \zeta/t\big)^{-3/4} e^{-\frac{l}{L}\zeta}\,t^{-1}\,d\zeta
= t^{-1/4}\int_0^\infty\big(\tilde R t + \zeta\big)^{-3/4} e^{-\frac{l}{L}\zeta}\,d\zeta
\le t^{-1/4}\int_0^1\big(\tilde R t + \zeta\big)^{-3/4} e^{-\frac{l}{L}\zeta}\,d\zeta + t^{-1/4}\int_1^\infty e^{-\frac{l}{L}\zeta}\,d\zeta
\le C_1\tilde R^{1/4} + C_2 t^{-1/4}.$$
Observing that
$$\tilde R^{1/4} = \frac{|x-y|^{1/3}}{L^{1/4} t^{1/3}},$$
we see that
$$\tilde R^{1/4}\, e^{-\frac{|x-y|^{4/3}}{M t^{1/3}}} \le C\, t^{-1/4}\, e^{-\frac{|x-y|^{4/3}}{2M t^{1/3}}},$$
for some constant $C$. (This uses Lemma 6.6, stated below.) We conclude the expected estimate
$$|I| \le C\, t^{-n/4}\, e^{-\frac{|x-y|^{4/3}}{2M t^{1/3}}},$$
for some $M$ sufficiently large.
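The absorption step just used, which trades the algebraic prefactor $\tilde R^{1/4}$ for half of the exponential decay rate, is exactly the mechanism of Lemma 6.6 (stated below). The following small numerical check (ours, with a placeholder value of $M$) illustrates it.

```python
# Numerical sanity check (illustrative only) of the absorption step: dividing out
# t^(-1/4), the claim reduces to boundedness of u^(1/4) * exp(-u/(2M)) over u >= 0,
# where u = |x - y|^(4/3) / t^(1/3).
import numpy as np

M = 3.0                                     # placeholder constant
u = np.linspace(0.0, 200.0, 200001)
vals = u ** 0.25 * np.exp(-u / (2.0 * M))
print("sup of u^(1/4) exp(-u/(2M)):", vals.max())        # finite, attained near u = M/2
print("analytic maximizer u = M/2 gives:", (M / 2) ** 0.25 * np.exp(-0.25))
```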
Last, we consider $0 < t < T$, for some fixed value $T$, along with $|x-y| < Kt$. In this case, exponential time growth such as $e^{\sigma t}$ is acceptable in our estimate, because this is bounded by $e^{\sigma T}$. We proceed similarly as above, though since the quotient $|x-y|^{4/3}/t^{4/3}$ may be small, we take
$$\tilde R = \frac{|x-y|^{4/3}}{L t^{4/3}} + R.$$
Proceeding as in the case $|x-y| \ge Kt$, we obtain an estimate by
$$C\, e^{\tilde R t}\, t^{-n/4}\, e^{-\frac{|x-y|^{4/3}}{2M t^{1/3}}},$$
but as noted the growth $e^{\tilde R t}$ is bounded by $e^{\tilde R T}$.
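We remark that the $t^{-n/4}$ prefactor and the $|x-y|^{4/3}/t^{1/3}$ exponent are the natural self-similar scalings for fourth-order diffusion. As an independent illustration (ours, not part of the proof), one can compute the one-dimensional kernel of $u_t = -u_{xxxx}$ numerically and observe the $t^{-1/4}$ amplitude scaling directly.

```python
# Illustration (not from the paper): the kernel K(x,t) of u_t = -u_xxxx,
#   K(x,t) = (1/2pi) * int exp(i k x - k^4 t) dk,
# satisfies K(0,t) = const * t^(-1/4), matching the t^(-n/4) prefactor with n = 1.
import numpy as np

def biharmonic_kernel(x, t, kmax=20.0, nk=200001):
    k = np.linspace(-kmax, kmax, nk)
    dk = k[1] - k[0]
    # By symmetry only the cosine part of exp(i k x) contributes.
    return float(np.sum(np.exp(-k**4 * t) * np.cos(k * x)) * dk / (2.0 * np.pi))

for t in (0.5, 1.0, 2.0, 4.0):
    K0 = biharmonic_kernel(0.0, t)
    print(f"t = {t}: K(0,t) = {K0:.6f},  K(0,t) * t^(1/4) = {K0 * t**0.25:.6f}")
# The last column is essentially constant (equal to Gamma(5/4)/pi).
```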
6.2  Theorem 1.2, Cases (I) and (II).

In order to conveniently work with individual summands in our expansion of $G_{\lambda,\xi}(x_1,y)$, we introduce the following notation. If $g_{\lambda,\xi}(x_1,y)$ is a summand in our expansion of $G_{\lambda,\xi}(x_1,y)$, then we adopt the notation
$$F_g(x,t;y) := \frac{1}{(2\pi)^n i}\int_{\Gamma}\int_{\mathbb{R}^{n-1}} e^{\lambda t + i\tilde x\cdot\xi}\, g_{\lambda,\xi}(x_1,y_1,\tilde y)\,d\xi\,d\lambda. \tag{6.8}$$

Lemma 6.3. Suppose $G_{\lambda,\xi}(x_1,y)$ is a solution to (5.1) in the usual (i.e., distributional) sense, assumptions (C0)-(C3) hold, and $\sigma(L_\xi)$ satisfies the conclusions of Theorem 1.1 (possibly under weaker hypotheses), and suppose in addition that Spectral Condition (D$_\xi$) holds. If $g_{\lambda,\xi}(x_1,y)$ is a summand in $G_{\lambda,\xi}$ bounded by (6.3) for $|s| < r$ then there exist positive constants $C_1$, $C_2$, and $M$ so that
$$\|F_g\|_{L^p(\mathbb{R}^n)} \le C_1\, t^{-\frac{n-1}{2}(1-\frac1p)-\frac{l+1}{2}}\, e^{-\frac{z^2}{Mt}} + C_2\, t^{-\frac{n-1}{3}(1-\frac1p)-\frac{l+2}{3}}\, h_{p,n}(t)\, e^{-\frac{z^2}{Mt}},$$
and similarly if $g_{\lambda,\xi}(x_1,y)$ is a summand in $G_{\lambda,\xi}$ bounded by (6.4) for $|s| < r$ then there exist positive constants $C_1$, $C_2$, and $M$ so that
$$\|F_g\|_{L^p(\mathbb{R}^n)} \le C_1\, t^{-\frac{n-1}{2}(1-\frac1p)-\frac{l}{2}}\, e^{-\frac{z^2}{Mt}} + C_2\, t^{-\frac{n-1}{3}(1-\frac1p)-\frac{l}{3}}\, h_{p,n}(t)\, e^{-\frac{z^2}{Mt}}.$$

The estimates for Parts I and II of Theorem 1.2 follow primarily from a term-by-term application of Lemma 6.3 to the summands of (5.18).
6.3  Proof of Lemma 6.3

We proceed for the case of (6.3), noting that the analysis of (6.4) is similar. We note at the outset that it will be crucial to the result for (6.3) that $\lambda$ appears by itself (in addition to its role as part of $s$). During the analysis, we will pick up residue terms associated with the leading eigenvalue $\lambda_*(\xi) \approx -c_3|\xi|^3$, and $\lambda$ is a better term in this context than $s^2$, because $s^2$ is of size $|\xi|^2$.

The principal contribution to $G(x,t;y)$ will come from the case $|s| < r$, for $r$ sufficiently small. For the sake of exposition, it will be convenient to let $r_1 > 0$ and $r_2 > 0$ be two small values, and to take $|\lambda| < r_1$ and $|\xi| < r_2$. In addition, we let $\Gamma_{r_1}$ denote the portion of $\Gamma$ that intersects the ball centered at $0$ with radius $r_1$. We set
$$I = \int_{B(0,r_2)}\int_{\Gamma_{r_1}} e^{\lambda t + i(\tilde x - \tilde y)\cdot\xi + \mu_{2m+j}^-(s)z}\,\frac{\lambda s^{m-1+l}}{\mu_{2m+j}^-(s)D(s)}\,d\lambda\,d\xi. \tag{6.9}$$
These contours will be extended through $\infty$ at the end of this section. Our starting point for choosing the contours arises from the relations
$$\mu_{2m+j}^-(\lambda,\xi) = -\sqrt{\frac{\lambda}{\beta_{m+1-j}} + |\xi|^2} + O(|s|^3),$$
and for notational convenience we'll drop the subscripts on $\beta$ and $\mu$ for the remaining calculations. Our contours will be based on the leading order of $\mu$, and in particular we'll choose contours $\lambda(k)$ so that
$$\sqrt{\frac{\lambda}{\beta} + |\xi|^2} = \sqrt{R} + ik,$$
where $R$ will be chosen based on the values of $x$, $y$, $t$, and $\xi$. Solving for $\lambda$, we find
$$\lambda(k) = \beta R - \beta(|\xi|^2 + k^2) + 2i\beta k\sqrt{R}, \tag{6.10}$$
for which
$$d\lambda = 2i\beta\big(\sqrt{R} + ik\big)\,dk = 2i\beta\sqrt{\frac{\lambda}{\beta} + |\xi|^2}\,dk. \tag{6.11}$$
We observe that along this contour
$$|\lambda|^2 = \beta^2\big(R - (|\xi|^2 + k^2)\big)^2 + 4\beta^2 R k^2,$$
and we can verify the inequality
$$|\lambda| \ge \beta\big|R - |\xi|^2\big|, \tag{6.12}$$
for all $k \in \mathbb{R}$. On the other hand, we see that
$$|\lambda| \le C(R + |\xi|^2 + k^2), \tag{6.13}$$
for some constant $C$, and by definition of $s$ it follows that
$$c_1(R + |\xi|^2) \le s^2 \le C_1(R + |\xi|^2 + k^2), \tag{6.14}$$
for some constants $c_1$ and $C_1$. In some cases, we will find it advantageous to complexify $\xi$, and we'll write
$$\xi = \xi_R + i\xi_I, \tag{6.15}$$
for which $|\xi|^2$ complexifies as
$$|\xi|_R^2 := \sum_{j=1}^{n-1}\big(\xi_{R_j} + i\xi_{I_j}\big)^2 = \sum_{j=1}^{n-1}\big(\xi_{R_j}^2 + 2i\xi_{R_j}\xi_{I_j} - \xi_{I_j}^2\big) = |\xi_R|^2 - |\xi_I|^2 + 2i\xi_R\cdot\xi_I.$$
In particular, $|\xi|_R$ will not refer to the complex modulus of $\xi$, but rather to the complexification of the standard Euclidean norm of $\xi$. In such cases, our contour will become
$$\lambda(k) = \beta R - \beta\big(|\xi_R|^2 - |\xi_I|^2 + k^2\big) + 2i\beta\big(\sqrt{R}\,k - \xi_R\cdot\xi_I\big), \tag{6.16}$$
and we'll have
$$|\lambda|^2 = \beta^2\big(R - |\xi_R|^2 + |\xi_I|^2 - k^2\big)^2 + 4\beta^2\big(\sqrt{R}\,k - \xi_I\cdot\xi_R\big)^2. \tag{6.17}$$
Likewise, under complexification we have $|s^2| \le C(R + |\xi_R|^2 + |\xi_I|^2 + k^2)$.

We will be particularly interested in whether this contour lies to the left or right of our leading eigenvalue $\lambda_*(\xi)$. (We will always take care that it remains to the right of the remainder of $\sigma(L_\xi)$.) To lowest order
$$\lambda_*(\xi) \sim -c_3|\xi|^3 = -c_3\big(|\xi|^2\big)^{3/2} = -c_3\big(|\xi_R|^2 - |\xi_I|^2 + 2i\xi_R\cdot\xi_I\big)^{3/2}.$$
In working with $\lambda_*(\xi)$ we find the following lemma useful.

Lemma 6.4. Let $\xi_R, \xi_I \in \mathbb{R}^{n-1}$, and set $\zeta := |\xi_R|^2 - |\xi_I|^2 + 2i\xi_R\cdot\xi_I$. Then
$$|\zeta|^{3/2} \le 2^{3/2}\big(|\xi_R|^3 + |\xi_I|^3\big),$$
and there exists a constant $0 < C < 14$ so that
$$\operatorname{Re}\zeta^{3/2} \ge \frac12|\xi_R|^3 - C|\xi_I|^3.$$
Moreover, in the case $|\xi_R|^2 > |\xi_I|^2$ we have
$$\operatorname{Re}\zeta^{1/2} \ge \frac{\sqrt2}{2}\Big((|\xi_R|^2 - |\xi_I|^2)^2 + 4(\xi_R\cdot\xi_I)^2\Big)^{1/4}. \tag{6.18}$$
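Before turning to the proof, we record a quick Monte Carlo sanity check of these inequalities (ours, for illustration only). The transverse dimension below is a placeholder, and the second inequality is tested with the endpoint value $C = 14$, which is implied by the statement $0 < C < 14$.

```python
# Monte Carlo spot-check (illustrative) of Lemma 6.4, using principal branches for
# zeta^(1/2) and zeta^(3/2).
import numpy as np

rng = np.random.default_rng(0)
dim = 3                                          # placeholder value of n - 1
for _ in range(20000):
    xi_R = rng.normal(size=dim) * rng.uniform(0.1, 10.0)
    xi_I = rng.normal(size=dim) * rng.uniform(0.1, 10.0)
    zeta = np.dot(xi_R, xi_R) - np.dot(xi_I, xi_I) + 2j * np.dot(xi_R, xi_I)
    nR, nI = np.linalg.norm(xi_R), np.linalg.norm(xi_I)

    assert abs(zeta) ** 1.5 <= 2 ** 1.5 * (nR ** 3 + nI ** 3) + 1e-9
    assert (zeta ** 1.5).real >= 0.5 * nR ** 3 - 14.0 * nI ** 3 - 1e-9
    if nR ** 2 > nI ** 2:
        # ((|xi_R|^2 - |xi_I|^2)^2 + 4 (xi_R . xi_I)^2)^(1/4) equals |zeta|^(1/2).
        assert (zeta ** 0.5).real >= (np.sqrt(2) / 2) * abs(zeta) ** 0.5 - 1e-9
print("all sampled inequalities hold")
```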
Proof. The first inequality follows from the observation
$$|\zeta|^2 = (|\xi_R|^2 - |\xi_I|^2)^2 + 4(\xi_R\cdot\xi_I)^2 \le (|\xi_R|^2 - |\xi_I|^2)^2 + 4|\xi_R|^2|\xi_I|^2 = (|\xi_R|^2 + |\xi_I|^2)^2,$$
from which we see that $|\zeta| \le |\xi_R|^2 + |\xi_I|^2$. Now we simply use the standard inequality $(a+b)^p \le 2^p(a^p + b^p)$ for $p \ge 0$.

For the second inequality, suppose that $|\xi_I| \ge \eta|\xi_R|$ for some $0 < \eta < 1$ (to be chosen more precisely below). According to the first inequality of our lemma,
$$\operatorname{Re}\zeta^{3/2} \ge -|\zeta|^{3/2} \ge -2^{3/2}\big(|\xi_R|^3 + |\xi_I|^3\big)
= \frac12|\xi_R|^3 - \Big(\frac12 + 2^{3/2}\Big)|\xi_R|^3 - 2^{3/2}|\xi_I|^3
\ge \frac12|\xi_R|^3 - \Big(\frac{\frac12 + 2^{3/2}}{\eta} + 2^{3/2}\Big)|\xi_I|^3.$$
On the other hand, suppose $|\xi_I| < \eta|\xi_R|$. We express $\zeta$ in polar coordinates $\zeta = \rho e^{i\theta}$, with
$$\rho^2 = (|\xi_R|^2 - |\xi_I|^2)^2 + 4(\xi_R\cdot\xi_I)^2 \ge (1-\eta^2)^2|\xi_R|^4, \qquad
|\theta| = \Big|\tan^{-1}\frac{2\xi_R\cdot\xi_I}{|\xi_R|^2 - |\xi_I|^2}\Big| \le \tan^{-1}\frac{2\eta}{1-\eta^2}.$$
Now
$$\operatorname{Re}\zeta^{3/2} = \rho^{3/2}\cos\Big(\frac{3\theta}{2}\Big) \ge (1-\eta^2)^{3/2}|\xi_R|^3\cos\Big(\frac32\tan^{-1}\frac{2\eta}{1-\eta^2}\Big).$$
We find (numerically, by Newton's method) that the choice $\eta = .3238$ sets this coefficient to $.5$. With this choice of $\eta$ we compute
$$\frac{\frac12 + 2^{3/2}}{\eta} + 2^{3/2} = 13.1091,$$
giving an accurate value for our choice of $C$.

For the final claim, we simply observe that
$$\operatorname{Re}\zeta^{1/2} = \rho^{1/2}\cos\frac{\theta}{2},$$
and with $|\xi_R|^2 > |\xi_I|^2$, we have $|\theta| < \frac{\pi}{2}$, and so $\cos(\frac{\theta}{2}) > \frac{\sqrt2}{2}$.

Finally, we will need lower bounds on our Evans function $D(s)$, and this is where we employ Condition (D$_\xi$). For the contours we'll take, $\lambda$ will be bounded away from $-\beta_j^\pm|\xi|^2$ for all $\{\beta_j^\pm\}_{j=1}^m$, and likewise will be bounded away from $\lambda_*(\xi)$ (though a residue associated with $\lambda_*(\xi)$ will have to be analyzed). It follows from Condition (D$_\xi$) that along these contours
$$|D(s)| \ge c|s|^{m+1} \tag{6.19}$$
for some constant $c > 0$. In particular, for the purposes of an estimate, we will be able to replace (6.9) with
$$I := \int_{B(0,r_2)}\int_{\Gamma_{r_1}} e^{\lambda t + i(\tilde x - \tilde y)\cdot\xi + \mu(s)z}\,\frac{\lambda s^{l-2}}{\mu(s)}\,d\lambda\,d\xi, \tag{6.20}$$
where we will use $\lambda = \lambda_0 s^2$ for the non-residue terms. In addition, as discussed in Section 3, we have the following behavior near $\lambda_*(\xi)$:
$$\lim_{\lambda\to\lambda_*}\frac{D(\lambda,\xi)}{\lambda - \lambda_*} = D_*(s;\lambda_0,\kappa_0), \qquad\text{with } |D_*(s;\lambda_0,\kappa_0)| \ge c|s|^{m-1}, \tag{6.21}$$
for some constant $c > 0$. In order to simplify the calculations as much as possible, we will consider the case in which
$$|x_2 - y_2| = \max_{j\in\{2,3,\ldots,n\}}|x_j - y_j|, \tag{6.22}$$
noting that alternative cases can be analyzed similarly. For notational convenience, especially in calculations toward the end of the argument, we take $x_2 - y_2 \ge 0$. The case $x_2 - y_2 < 0$ can be analyzed similarly. (Nonetheless, we'll often denote $x_2 - y_2$ as $|x_2 - y_2|$ to be clear that the sign is not an issue.)

Following [11], we proceed now by dividing the analysis into four cases:

1. $\sqrt{t} \le |z|$ and $|x_2 - y_2| \le |z|$

2. $\sqrt{t} \le |z| < |x_2 - y_2|$

3. $|z| < \sqrt{t} \le |x_2 - y_2|$

4. $|z| < \sqrt{t}$ and $|x_2 - y_2| < \sqrt{t}$.

In addition, we will typically divide each case into two subcases, labeled (i) and (ii), based on the size of $|\xi|$. In this way, we have eight cases to consider in all, and for each of these we need to gather the following information prior to proceeding with our estimate: (A) location of the resulting contour $\lambda(k)$ relative to the leading eigenvalue $\lambda_*(\xi)$; (B) upper bound on $|\lambda s^{l-2}\,d\lambda|$; and (C) upper bound on $\operatorname{Re}\big(\lambda t + i(\tilde x - \tilde y)\cdot\xi + \mu(s)z\big)$.

Case (1i). For Case (1), we begin with
$$|\xi| \le \frac{|z|}{L_1 t},$$
where $L_1 > 0$ denotes a constant that will be chosen sufficiently large, and we set
$$I_{(1i)} := \int_{B(0,\frac{|z|}{L_1 t})}\int_{\Gamma_{r_1}} e^{\lambda t + i(\tilde x - \tilde y)\cdot\xi + \mu(s)z}\,\frac{\lambda s^{l-2}}{\mu(s)}\,d\lambda\,d\xi. \tag{6.23}$$

[Figure 1: Contours for Case (1i), showing the essential spectrum, the leading eigenvalue $\lambda_*(\xi)$, the ball $B(0,r_1)$, the point $\lambda(0)$, and the contours $\Gamma$ and $\Gamma_\theta$ in the complex $\lambda$-plane.]
We choose (recalling that $z < 0$)
$$\sqrt{R} = -\frac{z}{L_2 t},$$
so that
$$\lambda(k) = \beta\Big(\frac{z^2}{L_2^2 t^2} - |\xi|^2 - k^2\Big) - 2i\beta\frac{z}{L_2 t}\,k,$$
where $L_2 > 0$ is a constant that will be chosen along with $L_1$. Noting that $\lambda(0) = \beta(\frac{z^2}{L_2^2 t^2} - |\xi|^2)$, we recognize that if $L_1 > L_2$ then $\lambda(0) > 0$, and this contour will pass entirely to the right of both the essential and point spectrum (see Figure 6.3). For the exponent in (6.23) we have
$$\operatorname{Re}\big(\lambda t + i(\tilde x - \tilde y)\cdot\xi + \mu(s)z\big) \le \beta\frac{z^2}{L_2^2 t} - \beta(|\xi|^2 + k^2)t - a\frac{z^2}{L_2 t},$$
where $0 < a < 1$ accounts for higher order terms in $\mu(s)$. We take $L_2$ large enough so that $L_2^2 > L_2$, from which we see that there exists a constant $M$ so that
$$\operatorname{Re}\big(\lambda t + i(\tilde x - \tilde y)\cdot\xi + \mu(s)z\big) \le -\frac{z^2}{Mt} - \beta(|\xi|^2 + k^2)t.$$
Combining these observations, we obtain an estimate on (6.23) of the form
$$|I_{(1i)}| \le C_1\, e^{-\frac{z^2}{Mt}}\int_{B(0,\frac{|z|}{L_1 t})}\int_{-k_r}^{k_r} e^{-\beta(|\xi|^2+k^2)t}\big(R^{l/2} + |\xi|^l + |k|^l\big)\,dk\,d\xi =: I_1 + I_2 + I_3,$$
where the terms $I_j$ are respectively associated with the three summands in our integral, and $\pm k_r$ denote the values where $\lambda(k)$ strikes $\Gamma_\theta$. Here, and in subsequent calculations, we will make use of the following standard lemmas.

Lemma 6.5. Let $\gamma > -1$, $\alpha, m > 0$. Then
$$\int_0^{+\infty}\tau^\gamma e^{-\alpha\tau^m}\,d\tau = \frac1m\,\alpha^{-\frac{1+\gamma}{m}}\,\Gamma\Big(\frac{\gamma+1}{m}\Big).$$

Lemma 6.6. Let $\eta, \alpha, m > 0$. Then for any $\epsilon > 0$
$$\tau^\eta e^{-\alpha\tau^m} \le C\alpha^{-\eta/m} e^{-(\alpha-\epsilon)\tau^m},$$
where $C$ depends on $\eta$, $m$, and $\epsilon$, but not on $\alpha$ or $\tau$.

Lemma 6.7. For $\alpha, m > 0$, and any $p \in [1,\infty]$, we have
$$\|e^{-\alpha|z|^m}\|_{L^p(\mathbb{R}^n)} \le C\alpha^{-\frac{n}{mp}},$$
where $C$ depends on $p$, $m$, and $n$, but not on $\alpha$.
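The closed form in Lemma 6.5 is classical; for the reader who wants a quick independent confirmation, the following numerical spot-check (ours, with arbitrary test parameters) compares both sides.

```python
# Numerical spot-check (illustrative) of Lemma 6.5:
#   int_0^infty tau^gamma exp(-alpha tau^m) d tau = (1/m) alpha^(-(1+gamma)/m) Gamma((gamma+1)/m).
from math import exp, gamma as Gamma
from scipy.integrate import quad

def lemma_6_5(g, a, m):
    lhs, _ = quad(lambda tau: tau**g * exp(-a * tau**m), 0.0, float("inf"))
    rhs = (1.0 / m) * a ** (-(1.0 + g) / m) * Gamma((g + 1.0) / m)
    return lhs, rhs

for g, a, m in [(0.0, 2.0, 1.0), (0.5, 3.0, 2.0), (2.0, 0.7, 3.0), (1.0, 1.2, 4.0)]:
    lhs, rhs = lemma_6_5(g, a, m)
    assert abs(lhs - rhs) < 1e-6 * max(1.0, abs(rhs)), (g, a, m, lhs, rhs)
print("Lemma 6.5 closed form confirmed on the sampled parameters")
```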
Using Lemma 6.5, we see
$$I_1 \le C_2\, R^{l/2}\, t^{-n/2}\, e^{-\frac{z^2}{Mt}} = C_2\,\frac{z^l}{L_2^l t^l}\, t^{-n/2}\, e^{-\frac{z^2}{Mt}} \le C_3\, t^{-\frac n2 - \frac l2}\, e^{-\frac{z^2}{2Mt}},$$
and similarly for $I_2$ and $I_3$. In this case, we have $|x_2 - y_2| \le |z|$, and so the decay in $z$ additionally gives decay in $|x_2 - y_2|$ with the same scaling (which gives decay in $|\tilde x - \tilde y|$ by (6.22)). We can express this as
$$|I_{(1i)}| \le C_4\, t^{-\frac n2 - \frac l2}\, e^{-\frac{z^2}{4Mt}}\, e^{-\frac{|\tilde x - \tilde y|^2}{4Mt}}.$$
Finally, an application of Lemma 6.7 yields the estimate
$$\|I_{(1i)}\|_{L^p_{\tilde x}} \le C_5\, t^{-\frac{n-1}{2}(1-\frac1p)-\frac{l+1}{2}}\, e^{-\frac{z^2}{4Mt}}. \tag{6.24}$$
We note that (6.24) is the expected estimate in some sense, because it is the natural long-time Green's function estimate we would get on the constant coefficient equations $v_t = L_\xi^\pm v$, where $L_\xi^\pm$ denote the asymptotic linear operators obtained by taking $x_1 \to \pm\infty$ in $L_\xi$.
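The passage from the pointwise bound to (6.24) is just Lemma 6.7 applied in the $n-1$ transverse variables with $\alpha = 1/(4Mt)$ and $m = 2$. The following short numerical illustration (ours, with placeholder values of $M$, $p$, and the transverse dimension) confirms the resulting $t$-scaling.

```python
# Check (illustrative): t^(-(n-1)/2) * || exp(-|x~|^2/(4Mt)) ||_{L^p(R^(n-1))}
# scales like t^(-(n-1)/2 * (1 - 1/p)), as used in passing to (6.24).
import numpy as np

M, dim, p = 2.0, 2, 3.0                      # placeholders; dim = n - 1
x = np.linspace(-60.0, 60.0, 4001)
dx = x[1] - x[0]

def transverse_lp_norm(t):
    one_d = np.sum(np.exp(-x**2 / (4.0 * M * t)) ** p) * dx   # separable Gaussian
    return (one_d ** dim) ** (1.0 / p)

for t in (1.0, 4.0, 16.0):
    val = t ** (-dim / 2.0) * transverse_lp_norm(t)
    print(t, val, val * t ** (dim / 2.0 * (1.0 - 1.0 / p)))
# The last column is approximately constant, confirming the stated t-scaling.
```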
Case (1ii). We now turn to the subcase $\frac{|z|}{L_1 t} < |\xi| \le r_2$, for which we make the choice $\sqrt{R} = \gamma|\xi|$ for some $0 < \gamma < 1$ to be chosen below. The integral in this case, $I_{(1ii)}$, is defined analogously to (6.23), with the $\xi$-integration now taken over $\frac{|z|}{L_1 t} < |\xi| \le r_2$.

Here $L_1 > L_2 > 0$, with additionally
$$\frac{1}{L_2^2} + \frac{1}{L_3^2} < \frac{1}{L_3}, \tag{6.29}$$
where it is of course critical that $L_3$ is not squared on the right-hand side. For this contour, we have
$$\operatorname{Re}\lambda(k) = \beta\Big(\Big(\frac{1}{L_2^2} + \frac{1}{L_3^2}\Big)\frac{(x_2-y_2)^2}{t^2} - |\xi_R|^2 - k^2\Big), \qquad
\operatorname{Im}\lambda(k) = 2\beta\,\frac{x_2 - y_2}{L_3 t}\big(k - \xi_{R_1}\big),$$
and the first thing we need to determine is the location of this contour relative to $\lambda_*(\xi)$. We will be able to show that $\lambda(k)$ passes entirely to the right of a disk in $\mathbb{C}$ to which $\lambda_*(\xi)$ can be restricted. In order to see this, we first note from (6.17) that
$$|\lambda| \ge c_1\frac{|x_2-y_2|^2}{t^2}$$
for some constant c1 = O(L−2 2 ). At the same time, we see from (6.18) that |x2 − y2 |3 |λ∗ | ≤ c2 t3 for some constant c2 = O(L−3 2 ). By taking L2 (and hence both L1 and L3 ) sufficiently large, we can ensure λ(k) remains bounded away from λ∗ (ξ). Moreover, 1 x2 − y 2 1 (x2 − y2 )2 − |ξR |2 − 2iβ ξR1 λ(0) = β ( 2 + 2 ) L2 L3 t L3 t must lie in the positive-real half-plane, so λ(k) must pass to the right of λ∗ (ξ). Next, we observe that to lowest order in the expansion of µ Re λt + i(˜ x − y˜) · ξ + µz 1 1 1 (x2 − y2 )2 |x2 − y2 | ∼− − β( 2 + 2 ) − |z| − β(|ξR |2 + k 2 ). L3 L2 L3 t L2 t Using (6.29), and dropping off the term with |z| for simplicity, we can conclude that (x2 − y2 )2 Re λt + i(˜ x − y˜) · ξ + µz ≤ − − c(|ξR |2 + k 2 ), Mt where M and c are positive constants that also accommodate higher order terms that have been omitted. Noting finally that |x − y | 2 2 l |s|l ≤ C ( ) + |ξR |l + |k|l , t we see that we need to estimate an integral of the form Z Z |x − y | (x −y )2 2 2 l −c(|ξR |2 +k2 )t l l − 2M t2 e ( ) + |ξR | + |k| dkdξR . I(2i) = e |x −y | t |ξR |≤ 2L t 2 Γr1 1
Employing now Lemmas 6.5, 6.6, and 6.7, we recover the estimate (6.24) on kI(2i) kLpx˜ . 2| Case (2ii). We now turn to Case (2ii), for which |x2L−y < |ξR | ≤ r2 . Our contours are 1t given by (6.15), (6.16), and we make the choices √ R = γ|ξR | x − y 2 2 ξI = , 0, . . . , 0 , L3 t
where L1 and L3 are the same values used in Case (2i) and γ ∈ (0, 1) is a real value to be chosen during the analysis (playing the same role as γ in Case (1), albeit possibly with a different value). With these choices, our contour is (x2 − y2 )2 x2 − y 2 λ(k) = −β (1 − γ 2 )|ξR |2 + k 2 − + 2iβ kγ|ξ | − ξ . (6.30) R R1 L23 t2 L3 t 46
Similarly as in Case (2i) we can check that λ(k) is bounded away from a disk containing λ∗ (ξ), and in this case λ(k) passes entirely to the left of this disk. In light of this, we must replace λ(k) with a new contour that follows λ(k) near the boundary of B(0, r), but encircles λ∗ (ξ). We will integrate along this contour by proceeding along the entirety of λ(k) and adding to this the integral obtained from the closed loop created from the contour encircling λ∗ (ξ), which as in Case (1) we denote Γ∗ . Along λ(k), we can proceed almost precisely as we did for Case (2i), and we obtain the same estimate. For the closed loop, we proceed by Cauchy’s Integral Formula to find Z λ∗ (ξ)s(λ∗ (ξ), ξ)l eλ∗ (ξ)t+i(˜x−˜y)·ξ+µ(λ∗ (ξ),ξ)z dξR , (6.31) I(2∗) = |x2 −y2 | µ(s(λ (ξ), ξ))h(s(λ (ξ), ξ)) ∗ ∗ ≤|ξ |≤r 2 R L t 1
where |h(s(λ∗ (ξ), ξ))| ≥ c1 , for some constant c1 > 0. Using (6.26) and Lemma 6.4 we see that Re µ(λ∗ (ξ), ξ) ≥ c2 |ξR |, for some constant c2 > 0. Likewise, since in this case |ξR | ≥ (L3 /L1 )|ξI |
s(λ∗ (ξ), ξ)l ≤ C|ξR |l−1 , µ(s(λ∗ (ξ), ξ))h(s(λ∗ (ξ), ξ))
for some constant C. For the exponent, using Lemma 6.4 we have Re λ∗ (ξ)t + i(˜ x − y˜) · ξ + µ(λ∗ (ξ), ξ)z (x2 − y2 )3 (x2 − y2 )2 1 − − c|ξR ||z| ≤ − |ξR |3 t + C 2 L33 t2 L3 t 1 (x2 − y2 )2 ≤ − |ξR |3 t − − c|ξR ||z|, 2 Mt
(6.32)
where we’ve observed that (since we’re in the case |x2 − y2 | ≤ Kt) (x2 − y2 )3 (x2 − y2 )2 ≤ K , L33 t2 L33 t and we can choose L3 sufficiently large to give our estimate. We can estimate this integral by Z (x −y )2 (x −y )2 −c|ξ |3 t− 2L t2 −c|ξR ||z| − 2 2 3 C e R |ξR |l+2 dξR ≤ C|z|−(n+l+1) e L3 t . |x2 −y2 | t is advantageous), and the cubic scaling when integrating over ξR 2 . We also note that we in fact have four integrals to evaluate, arising from the four terms on the right-hand side of the inequality l+2 l+2 l+2 l+2 l+2 ≤ C |ξ | + |ξ | + |ξ | + |ξ | . (6.33) |ξ| R R1 I1 R2 I2 For the first summand on the right-hand side of (6.33), we recall that ξR 1 =
|x2 − y2 | , L1 t
and so the associated term in P1 , which we will denote P11 , can be estimated by P11
|x − y | l+2 −y2 )3/2 (x −y )2 −c3 (x2√ 2 2 −1/3 −1 − 2L3 t2 L3 t ≤ C1 t |x2 − y2 | e e L1 t ≤ C2 t
− − 13 − l+3 2
e
(x2 −y2 )2 L3 t
e
−c3
(x2 −y2 )3/2 √ L3 t
.
In obtaining this inequality, we have used Lemma 6.6, and we see from Lemma 6.7 that the Lpx˜ estimate associated with this estimate is 1
1
1
1
kP11 kLxp˜ ≤ C3 t− 3 (1− p )− 2 (1− p )−
l+2 2
,
√ and decay in z can be added since |z| < t in this case. For the second summand in (6.33) we use Lemma 6.5 to obtain the same estimate as for P11 . For the third summand in (6.33) we employ Lemma 6.5 with the cubic term to obtain an estimate by 1
P13 ≤ C1 t− 2 −
l+3 3
e−
(x2 −y2 )2 L3 t
(x2 −y2 )3/2 √ L3 t
−c3
e
,
and using Lemma 6.7 we conclude 1
1
1
1
kP13 kLxp˜ ≤ C2 t− 3 (1− p )− 2 (1− p )−
l+2 3
.
(6.34)
For the fourth summand in (6.33) we recall that s |x3 − y3 | , |ξI 2 | = L3 t and use Lemma 6.6 to get −c3
|ξI 2 |l+2 e
(x2 −y2 )3/2 √ L3 t
≤ Ct−(l+2)/3 e
−c4
(x2 −y2 )3/2 √ L3 t
,
for 0 < c4 < c3 . In this way, we obtain precisely the same estimate as in the case of P13 . Ω+ 1 , Path 2. Path 2 is the main contribution, and in particular we see that since s |x2 − y2 | ξI 1 = , L3 t along this path we obtain a cubic scaling in both |x2 − y2 | and |x3 − y3 |. Otherwise, the analysis is quite similar to that of Path 1, and we obtain a final estimate of the form 2
1
kP2 kLpx˜ ≤ Ct− 3 (1− p )− and decay in z can be added since |z|
1 one of these is Z +∞ 1 1−p (y3 − x3 )−p dy3 = t 3 . p−1 x3 +t1/3 53
In the special case p = 1, we recall that we are also in the case |x − y| ≤ Kt, and so in particular |x3 − y3 | ≤ Kt. In this way, we see that our L1y3 estimates can be computed as Z
x3 +Kt
x3
(y3 − x3 )−1 dy3 = ln(Kt) − ln t1/3 .
+t1/3
It is precisely from this calculation, here and in the following, that the logarithmic terms in Theorem 1.2 arise. Using Lemma 6.7 and Remark 6.1, we obtain the estimate 1
1
1
1
kP11 kLxp˜ ≤ C3 t− 2 (1− p )− 3 (1− p )−
l+2 2
,
for p > 1 and likewise l+2
kP11 kL1x˜ ≤ C3 t− 2 ln t. √ Decay in z can be added since |z| < t in this case. For the second and third summands in (6.33) the estimates are qualitatively exactly the same, and we arrive at the same result. For the fourth summand in (6.33) we arrive at integrals of the form r
P14
|x − y | (x2 −y2 )2 Z 2 2 − e L3 t ≤ C1 t 0 −
≤ C2 t−1/2 e
(x2 −y2 )2 L3 t
|x3 −y3 | L3 t
e−c2 (x3 −y3 )ξI 2 |ξI 2 |l+2 dξI 2
|x3 − y3 |−(l+3) .
Combining Lemma 6.7 and Remark 6.1 we obtain the estimate 1
1
1
1
kP14 kLxp˜ ≤ C3 t− 2 (1− p )− 3 (1− p )− Decay in z can be added since |z|
L2 , so that the integral associated with this case is Z Z λsl−2 I(4i) := eλt+i(˜x−˜y)·ξ+µ(s)z dλdξ. µ(s) B(0, 1√ ) Γr1 L1
t
Our contour in this case is λ(k) =
β β − β(|ξ|2 + k 2 ) + 2i √ k. 2 L2 t L2 t
We see that λ(0) = β
1 2 − |ξ| > 0, L22 t
so that this contour certainly passes to the right of λ∗ (0). In this case, we have β Re λt + i(˜ x − y˜) · ξ + µ(λ)z ≤ 2 − β(|ξ|2 + k 2 ), L2 and also
λsl−2 dλ ≤ C|s|l dk. µ(s)
We estimate Z |I(4i) | ≤ C1
Z
B(0, L1 t ) 1
e−β(|ξ|
2 +k 2 )
(|ξ|l + |k l | + t−l/2 )dkdξ.
λ(k)∈Γr1
We integrate using Lemma 6.5 to find |I(4i) | ≤ C2 t−
l+n 2
.
√ In this case |x2 − y2 | ≤ t, and so we can view the estimate as having quadratic-scaled exponential decay. Using Lemma 6.7 we conclude kI(4i) kLpx˜ ≤ C3 t−
n−1 (1− p1 )− l+1 2 2
z2
e− M t .
Case (4ii). For Case (4ii) we have |ξ| > 1/(L1 t1/2 ) (not complexified), and as in Case (1ii) we choose √ R = γ|ξ|, with γ to be chosen in 0 < γ < 1. The contour λ(k) has precisely the same form as in Case (1ii), and also as in Case (1ii) we see that for |ξ| = 6 0 λ(0) will be to the left of λ∗ (ξ) and that we can choose γ close enough to 1 so that λ(0) lies to the right of the remaining spectrum of Lξ . Since our contour Γr1 lies to the left of λ∗ , we proceed once again by extending a loop 56
around λ∗ (denoted Γ∗ ) so that we integrate over the contour Γ = Γr1 ∪ Γ∗ . Proceeding as in Case (1ii) we find that the integral over Γr1 provides an estimate Z Z r1 2 2 2 |I(4ii) | ≤ C e−β((1−γ )|ξ| +k )t (|ξ|l + |k|l )dk 1 1 l+2 1 1 1 1 kP11 kLpx˜ ≤ Ct− 3 (1− p )− 2 (1− p )− 2 , while for p = 1 kP11 kL1x˜ ≤ Ct−
l+2 2
log t. √ In either case, decay in z can be added since |z| < t. For the second summand in (6.33) we estimate r (x −y )3/2 −c3 2√ 2 L3 t
Z
(x −y )3/2 −c3 2√ 2 L3 t
P12 ≤ C1 t−1/3 e
|x2 −y2 | L3 t
e−(x2 −y2 )ξI 1 ξI l+2 1 dξI 1
0
≤ C2 t−1/3 e
|x2 − y2 | + t1/3
−(l+3)
.
We see that for l > −2 (and here l ≥ 0) 2
1
kP12 kLpx˜ ≤ C3 t− 3 (1− p )−
l+2 3
.
For the third summand in (6.33) we employ Lemma 6.5 with the cubic term to obtain an estimate by − l+3 3
P13 ≤ C1 t
−1 −c3
(x2 − y2 ) e
(x2 −y2 )3/2 √ L3 t
,
and using Lemma 6.7 we conclude that for p > 1 1
2
kP13 kLpx˜ ≤ Ct− 3 (1− p )−
l+2 3
,
(6.37)
while for p = 1 kP13 kL1x˜ ≤ Ct−
l+2 3
log t.
(6.38)
For the fourth summand in (6.33) we recall that s |x3 − y3 | |ξI 2 | = , L3 t and use Lemma 6.6 to get l+2 −c3
|ξI 2 |
e
(x2 −y2 )3/2 √ L3 t
−(l+2)/3 −c4
≤ Ct
e
(x2 −y2 )3/2 √ L3 t
,
for 0 < c4 < c3 . In this way, we obtain precisely the same estimate as in the case of P13 . Ω+ 1 , Path 2. Path 2 is the main contribution, and in particular we see that since s |x2 − y2 | ξI 1 = , L3 t 59
along this path we obtain a cubic scaling in both |x2 − y2 | and |x3 − y3 |. Otherwise, the analysis is quite similar to that of Path 1, and we obtain a final estimate of the form 2
1
kP2 kLpx˜ ≤ Ct− 3 (1− p )− Decay in z can be added since |z|
1 1
1
1
1
kP11 kLxp˜ ≤ C3 t− 3 (1− p )− 2 (1− p )−
l+2 2
,
while for p = 1 kP11 kLpx˜ ≤ C3 t−
l+2 2
log t.
The second summand from (6.33) does not appear (since ξI 1 = 0), so we proceed to the √ third, for which we recall that along Path 1 ξR 2 = 1/(L1 t). In this way, we obtain precisely the same estimate as for P11 . For the fourth summand in (6.33), we have r − 21
Z
P14 ≤ C1 t
|x3 −y3 | L3 t
1
−2 e−c2 (x3 −y3 )ξI 2 ξI l+2 (x2 − y2 )−(l+3) . 2 dξI 2 ≤ C2 t
0
Using Remark 6.1, we obtain an estimate 1
1
1
1
kP14 kLxp˜ ≤ C3 t− 3 (1− p )− 2 (1− p )− Ω+ 2 , Path 2. For Path 2, we fix r ξI 2 =
x3 − y 3 , L3 t
l+2 3
.
and compute J2 , where
3
Re λ∗ (ξ)t + i(˜ x − y˜) · ξ + µ(λ∗ (ξ), ξ)z ≤ −c1 |ξR | t +
C1 ξI 32 t
(x3 − y3 )3/2 √ − . L3 t
In this case C1 |ξI 32 |3 t can be subsumed into the remaining terms, and our estimate becomes (keeping in mind that ξI 1 = 0) P2 ≤ C1 e
−
(x3 −y3 )3/2 √ L3 t
Z
r2
Z
1√ L1 t
+
−
1√ L1 t
−c1 |ξR |3 t
e
1√ L1 t
l+2
|ξR 1 |
l+2
+ |ξR 2 |
l+2
+ |ξI 2 |
dξR 1 dξR 2 .
As in previous calculations, we divide our estimates into three terms, associated respectively with the three summands in P2 . For the first, we use Lemmas 6.5 and 6.6, and also the width of integration in ξI 1 to obtain an estimate by P21 ≤ C1 t
− − 31 − l+3 2
t
e
(x3 −y3 )3/2 √ Mt
.
Employing now Lemma 6.7 we see that 1
1
1
1
kP21 kLxp˜ ≤ C3 t− 2 (1− p )− 3 (1− p )−
l+2 2
.
The term P22 does not appear (since ξI 1 = 0), and for P23 we use Lemma 6.5 to see that P23 ≤ C1 t−
l+3 3
t−1/2 e
−
(x3 −y3 )3/2 √ Mt
.
Employing now Lemma 6.7 we see that 1
1
1
1
kP23 kLxp˜ ≤ C3 t− 2 (1− p )− 3 (1− p )−
l+2 3
.
For P24 , we use Lemma 6.6 to see that (
)3/2 −y3 )3/2 l+2 − (x3√ |x3 − y3 | l+2 − (x3 −y √ 3 Mt 2M t ) 2 e ≤ C1 t− 3 e , L3 t
and this leads to precisely the same estimate we obtained for P23 . Remark on the cases n ≥ 4. The cases n ≥ 4 can be analyzed in a similar manner as the case n = 3, and we only add this short remark about the appearance of log t. For n = 4, the analysis of Case 4 involves the two regions n o 3 R2 := ξ ∈ R : |ξj | ≤ r2 , j = 1, 2, 3 n o 1 3 Rt := ξ ∈ R : |ξj | ≤ , j = 1, 2, 3 . L1 t1/2 62
In particular, for Case (4ii) we integrate over ξ ∈ R2 \Rt . We’ll denote by Ω± 1 the top and ± ± bottom slabs of R2 \Rt , by Ω2 the right and left remaining slabs, and by Ω3 the front and back remaining slabs. Precisely, 1 √ , r2 ], ξ1 , ξ2 ∈ [−r2 , r2 ]}; L1 t 1 1 1 √ , + √ ], ξ2 ∈ [ √ , r2 ], ξ1 ∈ [−r2 , r2 ]}; Ω+ 2 := {ξ ∈ R2 \Rt : ξ3 ∈ [− L1 t L1 t L1 t 1 1 1 1 1 √ , + √ ], ξ2 ∈ [ √ , + √ ], ξ1 ∈ [ √ , r2 ]}, Ω+ 3 := {ξ ∈ R2 \Rt : ξ3 ∈ [− L1 t L1 t L1 t L1 t L1 t
Ω+ 1 := {ξ ∈ R2 \Rt : ξ3 ∈ [
± + 3 and similarly for {Ω− j }j=1 . For Ω1 we only need to complexify ξ3 , while for Ω2 we do not need to complexify ξ3 (due to the limits) and only need to complexify ξ2 . Likewise, for Ω+ 3 we only need to complexify ξ1 . In each case, the log t term only arises from complexification, and so we always get log t, but never (log t)2 or higher powers. (In the related analysis of [11] the author allowed the possibility of powers of log t.)
6.4  Absence of log t in the case n = 2
Finally, we need to understand why the $\log t$ terms do not arise for $n = 2$. We note at the outset that for the case under consideration (i.e., (6.3)) the $\log t$ terms are easy to eliminate since $l = 0, 1, 2, \ldots$ (see below). However, in the case of (6.4) we obtain residue integrals that have the form of (6.3) with $l = -2$ (corresponding with the case $l = 0$ in (6.4)). Although we'll consider all values $l = -2, -1, \ldots$ for (6.3), we'll see that our primary concern is $l = -2$. We see from our previous considerations that the only place it could arise is in the residue analysis of Case (4ii), and in the subcase $|x_2 - y_2| \ge t^{1/3}$. In this case, notation is simplified a bit by the fact that $\xi$ is a scalar, and we observe that the critical integral to consider is
$$I_{(4*)} = \int e^{\lambda_*(\xi)t + i(x_2-y_2)\xi + \mu(\lambda_*(\xi),\xi)z}\,|\xi|^{l+2}\,d\xi.$$
For $l > -2$ this is better than our claimed estimate. Likewise, for $P_{42}$ we employ Lemma 6.5 to obtain an estimate
$$|P_{42}| \le C_2\big(|x_2 - y_2|\big)^{-(l+3)}.$$
In this case, for l > −2 we obtain from Remark 6.1 1
1
kP42 kLpx˜ ≤ t− 3 (1− p )−
l+2 3
.
Turning finally to the critical case l = −2, and recalling that wep are in the case (x2 −y2 ) > −1/3 L3 t , we first consider the integration as ξI ranges from t to (x2 − y2 )/(L3 t). In this case, we obtain an estimate by 1/3
q
Z C1
x2 −y2 L3 t
1
−1/3
e−(x2 −y2 )ξI dξI ≤ C2 (x2 − y2 )−1 e− 2 (x2 −y2 )t
t−1/3 1
−1/3
≤ C4 t−1/3 e− 2 (x2 −y2 )t
,
which gives both the L1 and L∞ estimates from (6.40). Last, we consider the integration as ξI ranges from 0 to t−1/3 . Since the integrand for this integral is bounded, we immediately obtain the L∞ estimate in (6.40), leaving only the L1 estimate to be understood. In this case, we first observe that (using Young’s inequality) |λ∗ (ξ)t| ≤ C(t−1/2 + ξI3 t), so that eλ∗ (ξ)t = 1 + O(t−1/2 + ξI3 t). (It’s clear from the inequality ξI ≤ t−1/3 that ξI3 t is controlled in this case.) For the error terms, our integral can be estimated by Z C
t−1/3
e−(x2 −y2 )ξI (t−1/2 + ξI3 t)dξI .
0
Since we already have an appropriate L∞ estimate, the issue here regards whether or not we can obtain the expected L1 estimate. In order to see that we can, notice that for the first summand in this last integral direct integration yields 1/3 Ct−1/2 (x2 − y2 )−1 1 − e−(x2 −y2 )/t . Indeed the primary difficulty we’re trying to surmount is apparent here: For this case, crude estimates lead to algebraic decay (x2 − y2 )−1 , which √ is just shy of integrable. For this error term, however, we recall that we are in the case t > (x2 − y2 ), and so the t−1/2 decay can be regarded as giving additional (x2 − y2 )−1 decay, making the term L1x2 . For the second summand, we can again integrate directly to obtain C −
1 3t1/3 6t2/3 −(x2 −y2 )/t1/3 6t −(x2 −y2 )/t1/3 e + 1 − e . − − x2 − y2 (x2 − y2 )2 (x2 − y2 )3 (x2 − y2 )4 66
For the first three terms, we use the inequality (x2 − y2 ) > L3 t1/3 to see that we can obtain estimates by 1/3 t−1/3 e−(x2 −y2 )/t , which has the expected L1 integrability. For the final term, we can use the same inequality to obtain an estimate by C1 , ((x2 − y2 ) + t1/3 )4 for some constant C1 , which again has the expected L1 integrability. Likewise, with ξ = L 1√t + iξI we can write 1 (x2 −y2 ) (x2 − y2 ) −(x2 −y2 )ξI i(x2 −y2 )ξ −(x2 −y2 )ξI i L1 t1/2 =e 1 + O( e =e e ) . L1 t1/2 For the error term, we estimate (x2 − y2 ) L1 t1/2
Z
t−1/3
e−(x2 −y2 )ξI dξI ≤ Ct−1/2 ,
0
√ and we observe that since this estimate is for the case (x − y ) < t we only require L1 2 2 √ √ integrability over [y2 − t, y2 + t], which is clear. Proceeding similarly for P3 , we are left with Z 0 Z t−1/3 −(x2 −y2 )ξI −µ(λ∗ (ξ),ξ)z e−(x2 −y2 )ξI −µ(λ∗ (ξ),ξ)z dξI P3 + P4 = e dξI + t−1/3 t−1/3
Z
0
e−(x2 −y2 )ξI H(t, y1 ; ξI )dξI ,
= 0
where −µ(λ∗ (
1 +iξI ), 11/2 +iξI )z L1 t1/2 L1 t
−µ(λ∗ (−
1
+iξI ),−
1
+iξI )z
L1 t1/2 L1 t1/2 −e z |z| = e L1 t1/2 eiξz − e−iξz + O( + |ξI |2 |z|). t We are left with the leading order term Z t−1/3 Z t−1/3 z z −(x2 −y2 )ξI iξy1 −iξy1 L1 t1/2 L1 t1/2 e e (e −e )dξI = 2ie e−(x2 −y2 )ξI sin(ξI z)dξI
H(t, y1 ; ξI ) = e
0
=
0 z L1 t1/2
2ie (x2 − y2 )2 + z
n o −(x2 −y2 )/t1/3 ) 1/3 1/3 z + e − (x − y ) sin(z/t ) − z cos(y /t ) , 2 2 1 2
where we have used the integral identity Z eax eax sin(bx)dx = 2 (a sin(bx) − b cos(bx)). a + b2 Taking the L1x2 norm of each of these terms, we obtain the claimed estimate. 67
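The antiderivative identity invoked in this last computation can be confirmed symbolically; here is a minimal sympy check (ours).

```python
# Symbolic verification (illustrative) of the antiderivative identity used above:
#   int e^{a x} sin(b x) dx = e^{a x} (a sin(b x) - b cos(b x)) / (a^2 + b^2).
import sympy as sp

a, b, x = sp.symbols("a b x", positive=True)
claimed = sp.exp(a * x) * (a * sp.sin(b * x) - b * sp.cos(b * x)) / (a**2 + b**2)
# Differentiating the claimed antiderivative recovers the integrand exactly.
assert sp.simplify(sp.diff(claimed, x) - sp.exp(a * x) * sp.sin(b * x)) == 0
print("identity verified")
```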
6.5  Extension Beyond B(0, r)

The preceding calculations have been carried out entirely in a sufficiently small ball $B(0,r)$, and in this section we discuss estimates obtained as these contours are continued out to $B(0,r)^c$. Briefly, the key element associated with our contours as they extend beyond $B(0,r)$ is that they lie in the stable half-plane, far enough to the left of the imaginary axis that we obtain exponential decay in $t$. We are currently in the setting with $|x-y| \le Kt$, so that
$$t^2 \ge \frac{1}{K^2}|x-y|^2 \quad\Rightarrow\quad t \ge \frac{1}{K^2}\frac{|x-y|^2}{t}.$$
In this way, exponential decay in time will be sufficient to give estimates that can be subsumed into the Case II estimates of Theorem 1.2. Precisely, an estimate of the form $e^{-\eta t}$ for some $\eta > 0$ will give an inequality
$$e^{-\eta t} = e^{-\frac12\eta t}\, e^{-\frac12\eta t} \le e^{-\frac12\eta t}\, e^{-\frac12\eta\frac{|x-y|^2}{K^2 t}}.$$
According to Lemmas 6.1 and 6.2, we have (at least) $|\partial^\alpha G_{\lambda,\xi}(x_1,y)| \le C$ for $|\lambda| + |\xi|^4 \ge r$. Fix some constant $R_1 > 0$, and first consider the domain of integration with $|\xi| \le R_1$. For each such $\xi$ we extend our contour beyond $B(0,r)$ by taking $\Gamma_\theta$, defined by (6.2). On $\Gamma_\theta$ we have
$$|\lambda| = \big((\theta_1 + \theta_2 k)^2 + k^2\big)^{1/2},$$
and so we can view our parametrized integration as over the set
$$S_R := \big\{(k,\xi) : |\xi| \le R_1,\ \big((\theta_1+\theta_2 k)^2 + k^2\big)^{1/2} + |\xi|^4 \ge R^4\big\}.$$
In this way, we obtain estimates
$$|\partial^\beta G_{\lambda,\xi}(x_1,y)| \le C_1\iint_{S_R\cap\Gamma_\theta} e^{-\theta_1 t - \theta_2|k|t}\,dk\,d\xi,$$
which clearly decays at exponential rate in time. In particular, we note two things: (1) the case $t \le 1$ has already been analyzed, so the $t^{-1}$ behavior that arises from integrating $e^{-\theta_2|k|t}$ does not cause a problem, and (2) the domain of integration in $|\xi|$ is bounded, so we only require boundedness of our integrand in $\xi$. For $|\xi| \ge R_1$, the essential spectrum is bounded as in Part II of Theorem 1.1, and we can follow a modification of the wedge contour $\Gamma_\theta$, described by
$$\lambda_{\theta,\xi}(k) = -\theta_3|\xi|^4 - \theta_4|k| + ik, \tag{6.41}$$
where θ3 > 0 is taken small enough so that this contour lies to the right of essential spectrum. Along Γθ,ξ we proceed almost precisely as along Γθ , noting only that we now can integrate along an unbounded domain in ξ. We again obtain exponential decay in t, which is sufficient. Acknowledgments. This material is based upon work supported by the National Science Foundation under Grant Number DMS-0906370. This is a continuation of work the author began with Bongsuk Kwon, and the author is indebted to Dr. Kwon for several enlightening discussions.
References [1] N. D. Alikakos, S. I. Betelu, and X. Chen, Explicit stationary solutions in multiple well dynamics and non-uniqueness of interfacial energy densities, Euro. Jnl of Applied Mathematics 17 (2006) 525–556. [2] N. D. Alikakos and G. Fusco, On the connection problem for potentials with several global minima, Indiana U. Math. J. 57 (2008) 1871–1906. [3] J. Alexander, R. Gardner, and C. K. R. T. Jones, A topological invariant arising in the analysis of traveling waves, J. Reine Angew. Math. 410 (1990) 167-212. [4] S. J. Bernau, The square root of a positive self-adjoint operator, J. Australian Mathematical Society, Feb. 1968, 17–36. [5] J. Bricmont, A. Kupiainen, and J. Taskinen, Stability of Cahn-Hilliard fronts, Comm. Pure Appl. Math. 52 (1999) 839-871. [6] L. C. Evans, Partial Differential Equations, Graduate Studies in Mathematics, vol. 19, AMS 1998. [7] R. Gardner and K. Zumbrun, The Gap Lemma and geometric criteria for instability of viscous shock profiles, Comm. Pure Appl. Math. 51 (1998) 797-855. [8] P. Howard, Pointwise estimates for the stability of scalar conservation laws, Thesis at Indiana University 1998; Adv. K. Zumbrun. [9] P. Howard, Pointwise estimates on the Green’s function for a scalar linear convectiondiffusion equation, J. Differential Equations 155 (1999) 327-367. [10] P. Howard, Asymptotic behavior near transition fronts for equations of generalized CahnHilliard form, Commun. Math. Phys. 269 (2007) 765–808. [11] P. Howard, Asymptotic behavior near planar transition fronts for equations of CahnHilliard type, Physica D 229 (2007) 123-165. 69
[12] P. Howard, Spectral analysis of planar transition fronts for the Cahn-Hilliard equation, J. Differential Equations 245 (2008) 594-615. [13] P. Howard, Spectral analysis of stationary solutions of the Cahn-Hilliard equation, Advances in Differential Equations 14 (2009) 87-120. [14] P. Howard, Spectral analysis for transition front solutions in multidimensional Cahn-Hilliard systems, J. Differential Equations 257 (2014) 3448-3465. [15] P. Howard, Stability for transition front solutions in multidimensional Cahn-Hilliard systems, Preprint 2015, available at www.math.tamu.edu/∼phoward/mathpubs.html. [16] D. Henry, Geometric theory of semilinear parabolic equations, Lecture Notes in Mathematics 840, Springer-Verlag 1981. [17] P. Howard and B. Kwon, Spectral analysis for transition front solutions in Cahn-Hilliard systems, Discrete Contin. Dyn. Syst. 32 (2012) 125-166. [18] P. Howard and B. Kwon, Asymptotic stability analysis for transition wave solutions in Cahn-Hilliard systems, Physica D 241 (2012) 1193–1222. [19] P. Howard and B. Kwon, Asymptotic Lp stability for transition fronts in Cahn-Hilliard systems, J. Differential Equations 252 (2012) 5814-5831. [20] P. Howard and C. Hu, Nonlinear stability for multidimensional fourth order shock fronts, Arch. Rational Mech. Anal. 181 (2006) 201–260. [21] D. Hoff and K. Zumbrun, Pointwise Green's function bounds for multidimensional scalar viscous shock fronts, J. Differential Equations 183 (2002) 368-408. [22] P. Howard and K. Zumbrun, Stability of undercompressive shock profiles, J. Differential Eqns. 225 (2006) 308-360. [23] T. Korvola, A. Kupiainen, and J. Taskinen, Anomalous scaling for three-dimensional Cahn-Hilliard fronts, Comm. Pure Appl. Math. LVIII (2005) 1-39. [24] T. Korvola, Stability of Cahn-Hilliard fronts in three dimensions, Doctoral dissertation, University of Helsinki, 2003. [25] T. Kapitula and K. Promislow, Spectral and dynamical stability of nonlinear waves, Applied Mathematical Sciences 185, Springer 2013. [26] M. Reed and B. Simon, Methods of modern mathematical physics IV: Analysis of operators, Academic Press 1978. [27] V. Stefanopoulos, Heteroclinic connections for multiple-well potentials: the anisotropic case, Proc. Royal Soc. Edinburgh 138A (2008) 1313-1330.
[28] K. Yosida, Functional analysis, Springer-Verlag 1980. [29] K. Zumbrun, Multidimensional stability of planar viscous shock waves, TMR Summer School Lectures: Kochel am See, May 1999; Birkhauser’s series: Progress in nonlinear differential equations and their applications (2001). [30] K. Zumbrun and P. Howard, Pointwise semigroup methods and stability of viscous shock waves, Indiana U. Math. J. 47 (1998) 741–871. See also the errata for this paper: Indiana U. Math. J. 51 (2002) 1017–1021. [31] K. Zumbrun and D. Serre, Viscous and inviscid stability of multidimensional planar shock fronts, Indiana U. Math. J. 48 (1999) 937-992.