Eigenvalues and Eigenfunctions of Schr¨odinger Operators: Inverse Spectral Theory; and the Zeros of Eigenfunctions by Hamid Hezari
A dissertation submitted to The Johns Hopkins University in conformity with the requirements for the degree of Doctor of Philosophy Baltimore, Maryland March, 2009
© Hamid Hezari 2009. All rights reserved. Updated 2-12-09.
Abstract

This dissertation contains two independent parts.

Part I: In the first part (which is from [H1]) we find explicit formulas for the semi-classical wave invariants at the bottom of the well of a Schrödinger operator. As an application of these new formulas for the wave invariants, we prove inverse spectral results similar to those obtained by Guillemin and Uribe in [GU], using fewer symmetry assumptions. We also show that in dimension 1, no symmetry assumption is needed to recover the Taylor coefficients of $V(x)$.

Part II: In the second part (which is from [H2]) we study the semi-classical distribution of the complex zeros of the eigenfunctions of 1D Schrödinger operators for the class of real polynomial potentials of even degree, with a fixed energy level $E$. We show that as $h_n\to 0$ the zeros tend to concentrate on the union of some level curves $\Re(S(z_m,z))=c_m$, where $S(z_m,z)=\int_{z_m}^{z}\sqrt{V(t)-E}\,dt$ is the complex action and $z_m$ is a turning point. We also calculate these curves for some symmetric and non-symmetric one-well and double-well potentials. The example of the non-symmetric double-well potential shows that one can obtain different pictures of complex zeros for different subsequences of $h_n$.
Readers: Professors Steve Zelditch (Advisor), John Toth, Bernard Shiffman, Chris Sogge, Joel Spruck, Hans Christianson and Petar Maksimovic.
Acknowledgments First and foremost, I would like to thank my advisor Steve Zelditch for his generous support and encouragement and also for his patient guidance during my five years at Johns Hopkins. I especially appreciate that he consistently shared his ideas with me.
I also want to thank many faculty members at Johns Hopkins, especially Bernard Shiffman, Richard Wentworth, Bill Minicozzi, Chris Sogge and Joel Spruck for many helpful discussions and for instructing many interesting courses.
I also extend my gratitude to Christine for her great love and support. Also I thank my friends in Baltimore, especially Sid, Mike, Reza, Arash, Vahid, Alireza Asadpoure, Mazdak, Ramin, Rafik, Peter, Mimi, Ananda, Darius, Behzad, Marcin and Yanir for their friendship and support. Finally I thank my friends in Iran, specially Arash Alavian, Mahmood, Morteza, Nima, Milad, Hamid Ranjbar, Amir Mohammadi, Keyvan, Behrooz, Jalil, Arash Yousefi, Ali Jelvehdar, Hamed and my brothers Saeed, Vahid and Navid for being good and loyal friends.
Dedication: I dedicate this work to my Mother, Fakhri, for her never ending love and to my Father, Mohammadali for his support and care. Their unwavering love made me the person I am.
Contents

Acknowledgments  iii

I  Inverse Spectral Problems for Schrödinger Operators  1

1  Introduction  2
  1.1  Motivation and Background  2
  1.2  Statement of Results  4
    1.2.1  Explicit formulas for the wave invariants  4
    1.2.2  Inverse spectral result  6
    1.2.3  Results of Guillemin and Colin de Verdière  6
  1.3  Outline of proofs  6
  1.4  Some remarks and comparison of approaches  8

2  Semi-classical Parametrix for the Schrödinger Propagator  11
  2.1  Two reductions  11
  2.2  Construction of $k(t,x,y)$, the kernel of $U(t)=e^{\frac{-it}{\hbar}\hat H}$  12

3  Asymptotics of the Trace  17
  3.1  Trace as a tempered distribution  17
  3.2  Trace as an infinite sum of oscillatory integrals  19
  3.3  Estimates for the remainder  19
  3.4  Stationary phase calculations and the proof of parts 1 & 2 of Theorem 1.2.1  20
  3.5  Calculations of the wave invariants and the proof of part 3 of Theorem 1.2.1  23
  3.6  Calculations of $\int_0^t\int_0^{s_1}P_{j+2}b_2(0)$, and the proof of Theorem 1.2.2  25
    3.6.1  Calculation of $S_2^1$  28
    3.6.2  Calculation of $S_2^2$  29

4  Appendix A  33
  4.1  WKB construction for $\Theta(\hat P)e^{\frac{-it}{\hbar}\hat P}$  34
  4.2  WKB construction for $\Theta_\hbar(\hat H)e^{\frac{-it}{\hbar}\hat H}$  36

5  Appendix B  44

II  Complex Zeros of 1D Schrödinger Operators  47

6  Introduction  48
  6.1  Motivation and Background  48
  6.2  Statement of results  51
  6.3  Results of Eremenko, Gabrielov and Shapiro  55

7  Background on Complex WKB Method  56
  7.1  Stokes lines and Stokes graphs  56
  7.2  Canonical Domains, Asymptotic Expansions of Eigenfunctions  57
  7.3  Elementary Basis  59
  7.4  Transition Matrices  60
    7.4.1  Polynomials with real coefficients  61

8  Proofs of Results  64
  8.1  The complex zeros in a canonical domain D  64
  8.2  Zeros in the complex plane and the proof of Theorem 6.2.1  66
    8.2.1  Complex zeros near the turning points  67
  8.3  Zeros for symmetric and non-symmetric double well potentials  69
  8.4  More examples of the zero lines for some polynomial potentials  74

Vitae  81
Part I
Inverse Spectral Problems for Schrödinger Operators
Chapter 1
Introduction

1.1 Motivation and Background
In this part of the dissertation we study inverse spectral problems for the semi-classical Schrödinger operator

(1.1) $\hat P = -\frac{\hbar^2}{2}\Delta + V(x)$ on $L^2(\mathbb{R}^n)$,

associated to the Hamiltonian $P(x,\xi) = \frac12\xi^2 + V(x)$. Here the potential $V(x)$ in (1.1) satisfies

(1.2) $V\in C^\infty(\mathbb{R}^n)$; $V$ has a unique non-degenerate global minimum at $x=0$ with $V(0)=0$; and for some $\varepsilon>0$ the set $V^{-1}([0,\varepsilon])$ is compact.
Under these conditions, for sufficiently small $\hbar$, say $\hbar\in(0,h_0)$, and sufficiently small $\delta$, a classical fact tells us that the spectrum of $\hat P$ in the energy interval $[0,\delta]$ is finite. We denote these eigenvalues by $\{E_j(\hbar)\}_{j=0}^{m}$ and call them the low-lying eigenvalues of $\hat P$. Weyl's law reads

(1.3) $m = N_\hbar(\delta) = \#\{j;\ 0\le E_j(\hbar)\le\delta\} = \frac{1}{(2\pi\hbar)^n}\Big(\int_{\frac12\xi^2+V(x)\le\delta} dx\,d\xi \;+\; o(1)\Big).$
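As a concrete illustration of (1.3), one can discretize $\hat P$ in dimension $n=1$ and compare the number of eigenvalues below $\delta$ with the phase-space volume. The sketch below is ours and purely illustrative; the potential, interval, and all numerical parameters are arbitrary choices satisfying (1.2), not anything used in the thesis.

# Numerical sketch of Weyl's law (1.3) for n = 1 (illustrative only).
# V is an arbitrary potential with a unique non-degenerate minimum V(0) = 0.
import numpy as np

hbar, delta = 0.01, 0.5
V = lambda x: 0.5 * x**2 + 0.2 * x**3 + 0.1 * x**4

x = np.linspace(-2.5, 2.5, 2000)
dx = x[1] - x[0]

# -(hbar^2/2) d^2/dx^2 by second-order finite differences (Dirichlet walls).
lap = (np.diag(np.full(x.size, -2.0))
       + np.diag(np.ones(x.size - 1), 1)
       + np.diag(np.ones(x.size - 1), -1)) / dx**2
P_hat = -0.5 * hbar**2 * lap + np.diag(V(x))

E = np.linalg.eigvalsh(P_hat)
N_count = int(np.sum(E <= delta))

# Phase-space volume of {(x, xi): xi^2/2 + V(x) <= delta}, divided by 2*pi*hbar.
vol = np.sum(2.0 * np.sqrt(2.0 * np.maximum(delta - V(x), 0.0))) * dx
print(N_count, vol / (2.0 * np.pi * hbar))   # the two numbers should be close

The agreement improves as $\hbar$ decreases, which is exactly the $o(1)$ statement in (1.3).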
Recently in [GU], Guillemin and Uribe raised the question of whether one can recover the Taylor coefficients of $V$ at $x=0$ from the low-lying eigenvalues $E_j(\hbar)$. They also established that if we assume some symmetry conditions on $V$, namely $V(x)=f(x_1^2,\dots,x_n^2)$, then the one-parameter family of low-lying eigenvalues $\{E_j(\hbar)\ |\ \hbar\in(0,h_0)\}$ determines the Taylor coefficients of $V$ at $x=0$.
In this thesis we will attempt to recover as much of V as possible from the family Ej (~), by establishing some new formulas for the wave invariants at the bottom of the potential (Theorem 1.2.1). Using these new expressions for the wave invariants, in Theorem 1.2.2 we improve the inverse spectral results of [GU] for a larger class of potentials.
A classical approach to this problem is to examine the asymptotic behavior as $\hbar\to 0$ of the truncated trace

(1.4) $\mathrm{Tr}\big(\Theta(\hat P)\,e^{\frac{-it}{\hbar}\hat P}\big),$

where $\Theta\in C_0^\infty([0,\infty))$ is supported in $I=[0,\delta]$ and equals one in a neighborhood of $0$.
The asymptotic behavior of the truncated trace around the equilibrium point $(x,\xi)=(0,0)$ has been extensively studied in the literature. It is known (see for example [BPU]) that for $t$ in a sufficiently small interval $(0,t_0)$, $\mathrm{Tr}\big(\Theta(\hat P)e^{\frac{-it}{\hbar}\hat P}\big)$ has an asymptotic expansion of the form

(1.5) $\mathrm{Tr}\big(\Theta(\hat P)e^{\frac{-it}{\hbar}\hat P}\big) \sim \sum_{j=0}^{\infty} a_j(t)\,\hbar^{j}, \qquad \hbar\to 0.$
Throughout this dissertation when we refer to wave invariants at the bottom of the well, we mean the coefficients aj (t) in (1.5).
By applying an orthogonal change of variables, we can assume that $V$ is of the form

(1.6) $V(x) = \frac12\sum_{k=1}^{n}\omega_k^2 x_k^2 + W(x), \qquad \omega_k>0, \quad W(x)=O(|x|^3)\ \text{as } |x|\to 0.$
In addition to conditions in (1.2), we also assume that {ωk } are linearly independent over Q. We note that we have W (0) = 0, ∇W (0) = 0 and HessW (0) = 0.
1.2 Statement of Results

1.2.1 Explicit formulas for the wave invariants
Our first result gives explicit formulas for the wave invariants.

Theorem 1.2.1. There exists $t_0$ such that for $0<t<t_0$:

1. $a_0(t) = \mathrm{Tr}\big(e^{\frac{-it}{\hbar}\hat H_0}\big) = \prod_{k=1}^{n}\dfrac{1}{2i\sin\frac{\omega_k t}{2}}$, where $\hat H_0 = -\frac12\hbar^2\Delta + \frac12\sum_{k=1}^{n}\omega_k^2 x_k^2$.
2. For $j\ge1$, the wave invariants $a_j(t)$ defined in (1.5) are given by

(1.7) $a_j(t) = a_0(t)\sum_{l=1}^{2j} i^{\,l(n-1)+\frac n2}\, e^{\frac{i\pi}{4}\mathrm{sgn}\,H_l} \int_0^t\!\int_0^{s_1}\!\cdots\int_0^{s_{l-1}} P_{l+j}\,b_l(0)\,ds_l\cdots ds_1,$

where for every $m$,
$P_m b_l(0) = \dfrac{i^{-m}}{2^m m!}\,\big\langle H_l^{-1}\nabla,\nabla\big\rangle^{m}(b_l)(0),$
$b_l = \prod_{i=1}^{l} W\Big(\dfrac{\cos\omega_k s_i}{2}(z_{i+1}^k+z_i^k) - \dfrac{\sin\omega_k s_i}{\omega_k}\,\xi_i^k + \Big(\dfrac{\sin\omega_k(t-s_i)+\sin\omega_k s_i}{\sin\omega_k t}\Big)x^k\Big),$
and $H_l^{-1}$ is the inverse of the Hessian $H_l=\mathrm{Hess}\,\Psi_l(0)$, where
$\Psi_l = \Psi_l(t,x,z_1,\dots,z_l,\xi_1,\dots,\xi_l) = \sum_{k=1}^{n}\Big\{\big(-\omega_k\tan\tfrac{\omega_k t}{2}\big)x_k^2 + \big(\tfrac{\omega_k}{2}\cot\omega_k t\big)(z_1^k)^2 + \sum_{i=1}^{l}(z_{i+1}^k - z_i^k)\,\xi_i^k\Big\}.$
The Hessian of $\Psi_l$ is calculated with respect to every variable except $t$; therefore the entries of the matrix $H_l^{-1}$ are functions of $t$. The matrix $H_l^{-1}$ is displayed in (3.10).

3. The wave invariant $a_j(t)$ is a polynomial of degree $2j$ in the Taylor coefficients of $V$. The Taylor coefficients of highest order appearing in $a_j(t)$ are of order $2j+2$. In fact these highest-order Taylor coefficients appear in the linear term of the polynomial, and

(1.8) $a_j(t) = \dfrac{a_0(t)}{(2i)^{j+1}}\sum_{|\vec\alpha|=j+1}\dfrac{t}{\vec\alpha!}\Big(\dfrac{-1}{2\vec\omega}\cot\dfrac{\vec\omega t}{2}\Big)^{\vec\alpha} D^{2j+2}_{2\vec\alpha}V(0)\;+\;\{\text{a polynomial in Taylor coefficients of order}\le 2j+1\}.$

Notice that in (1.8) we have used the standard shorthand notation for multi-indices, i.e. $\vec\alpha=(\alpha_1,\dots,\alpha_n)$, $\vec\omega=(\omega_1,\dots,\omega_n)$, $|\vec\alpha|=\alpha_1+\cdots+\alpha_n$, $\vec\alpha!=\alpha_1!\cdots\alpha_n!$, $X^{\vec\alpha}=X_1^{\alpha_1}\cdots X_n^{\alpha_n}$, and $D^m_{\vec\alpha}=\dfrac{\partial^m}{\partial x_1^{\alpha_1}\cdots\partial x_n^{\alpha_n}}$ with $m=|\vec\alpha|$.
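Part 1 of Theorem 1.2.1 is simply the trace of the anisotropic oscillator propagator. Since the computation is not spelled out in the text, we record it here (a standard geometric-series summation, with the sum regularized by $t\mapsto t-i\tau$, $\tau\to0^+$), using that the eigenvalues of $\hat H_0$ are $\hbar\sum_k\omega_k(m_k+\tfrac12)$, $m\in\mathbb{Z}_{\ge0}^n$:
\[
\operatorname{Tr}\big(e^{\frac{-it}{\hbar}\hat H_0}\big)
=\sum_{m\in\mathbb{Z}_{\ge0}^{n}} e^{-it\sum_{k=1}^{n}\omega_k\left(m_k+\frac12\right)}
=\prod_{k=1}^{n}\frac{e^{-\frac{i\omega_k t}{2}}}{1-e^{-i\omega_k t}}
=\prod_{k=1}^{n}\frac{1}{2i\sin\frac{\omega_k t}{2}}.
\]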
1.2.2 Inverse spectral result

Our second result improves the result of Guillemin and Uribe in [GU]. It is in fact a non-trivial corollary of Theorem 1.2.1.

Theorem 1.2.2. Let $V$ satisfy (1.2), (1.6), and be of the form

(1.9) $V(x) = f(x_1^2,\dots,x_n^2) + x_n^3\, g(x_1^2,\dots,x_n^2)$

for some $f,g\in C^\infty(\mathbb{R}^n)$. Then the low-lying eigenvalues of $\hat P=-\frac12\hbar^2\Delta+V$ determine $D^{|\vec\alpha|}_{\vec\alpha}V(0)$ for $|\vec\alpha|=2,3$, and if $D^3_{3\vec e_n}V(0) := \frac{\partial^3 V}{\partial x_n^3}(0)\ne 0$, they determine all the Taylor coefficients of $V$ at $x=0$.

One quick consequence of Theorem 1.2.2 is the following:

Corollary 1.2.3. If $n=1$ and $V\in C^\infty(\mathbb{R})$ satisfies (1.2), then (with no symmetry assumptions) the low-lying eigenvalues determine $V''(0)$ and $V^{(3)}(0)$, and if $V^{(3)}(0)\ne 0$, they determine all the Taylor coefficients of $V$ at $x=0$.
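For concreteness, here is the $n=1$ specialization of (1.8) behind Corollary 1.2.3 (our restatement, with $\omega=\sqrt{V''(0)}$ the single frequency in (1.6)):
\[
a_j(t) \;=\; \frac{a_0(t)\,t}{(2i)^{j+1}(j+1)!}\Big(\frac{-1}{2\omega}\cot\frac{\omega t}{2}\Big)^{j+1} V^{(2j+2)}(0)
\;+\;\{\text{a polynomial in } V^{(3)}(0),\dots,V^{(2j+1)}(0)\},
\]
so each wave invariant contains the next even-order derivative of $V$ at $0$ linearly, with an explicit coefficient function of $t$; this is what drives the induction used to recover the Taylor coefficients.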
1.2.3 Results of Guillemin and Colin de Verdière

Recently, Guillemin and Colin de Verdière in [CG1] (see also [C]) studied inverse spectral problems for one-dimensional semi-classical Schrödinger operators. One of the main results in [CG1] is Corollary 1.2.3 above.
1.3 Outline of proofs
Let us briefly sketch our main ideas for the proofs. First, because of a technical reason which arises in the proofs, we will need to replace the Hamiltonian P by the following Hamiltonian H
\[
H(x,\xi)=\tfrac12\xi^2+V_\hbar(x),\qquad
V_\hbar(x)=\tfrac12\sum_{k=1}^{n}\omega_k^2x_k^2+W_\hbar(x),\qquad
W_\hbar(x)=\chi\Big(\tfrac{x}{\hbar^{\frac12-\varepsilon}}\Big)W(x),\quad \varepsilon>0 \text{ sufficiently small},
\]
where the cutoff $\chi\in C_0^\infty(\mathbb{R}^n)$ is supported in the unit ball $B_1(0)$ and equals one in $B_{1/2}(0)$.
Then in two lemmas (Lemma 2.1.1 and Lemma 2.1.2) we show that for t in a sufficiently small interval (0, t0 ), in the sense of tempered distributions we have
\[
\mathrm{Tr}\big(\Theta(\hat P)e^{\frac{-it}{\hbar}\hat P}\big)=\mathrm{Tr}\big(e^{\frac{-it}{\hbar}\hat H}\big)+O(\hbar^\infty).
\]
This reduces the problem to studying the asymptotics of $\mathrm{Tr}\big(e^{\frac{-it}{\hbar}\hat H}\big)$. For this we use the construction of the kernel $k(t,x,y)$ of the propagator $U(t)=e^{\frac{-it}{\hbar}\hat H}$ found in [Z].
We find that

(1.10) $k(t,x,y)=C(t)\,e^{\frac{i}{\hbar}S(t,x,y)}\sum_{l=0}^{\infty}a_l(t,\hbar,x,y),$

where

(1.11) $S(t,x,y)=\sum_{k=1}^{n}\dfrac{\omega_k}{\sin\omega_k t}\Big(\tfrac12(\cos\omega_k t)(x_k^2+y_k^2)-x_ky_k\Big),$

$a_0=1$, and for $l\ge1$,
$a_l(t,\hbar,x,y)=\Big(\dfrac{-1}{2\pi}\Big)^{ln}\Big(\dfrac{1}{i\hbar}\Big)^{l(n+1)}\int_0^t\!\cdots\!\int_0^{s_{l-1}}\overbrace{\int\cdots\int}^{2l} e^{\frac{i}{\hbar}\Phi_l}\,b_l(s,x,y,\vec z,\vec\xi)\,d^lz\,d^l\xi\,d^ls,$
where
$\Phi_l=\sum_{k=1}^{n}\Big\{\dfrac{\omega_k}{2}\cot\omega_k t\,(z_1^k)^2+\sum_{i=1}^{l}(z_{i+1}^k-z_i^k)\xi_i^k\Big\},$
and
$b_l=\prod_{i=1}^{l}W_\hbar\Big(\dfrac{\cos\omega_k s_i}{2}(z_{i+1}^k+z_i^k)-\dfrac{\sin\omega_k s_i}{\omega_k}\xi_i^k+\dfrac{\sin\omega_k(t-s_i)}{\sin\omega_k t}y^k+\dfrac{\sin\omega_k s_i}{\sin\omega_k t}x^k\Big).$
Next we apply the expression (1.10) for $k(t,x,y)$ to the formula $\mathrm{Tr}\big(e^{\frac{-it}{\hbar}\hat H}\big)=\int k(t,x,x)\,dx$. Then we obtain an infinite series of oscillatory integrals, each one
corresponding to one al . Finally we apply the method of stationary phase to each oscillatory integral and we show that the resulting series is a valid asymptotic expansion. From the resulting asymptotic expansion we obtain the formulas (1.7).
1.4 Some remarks and comparison of approaches
Now let us compare our approach to the construction of $k(t,x,y)$ with the classical approach. In the classical approach (see for instance [DSj], [D], [R], [BPU] and [U]), one constructs a WKB approximation for the kernel $k_P(t,x,y)$ of the operator $\Theta(\hat P)e^{\frac{-it}{\hbar}\hat P}$, i.e.

(1.12) $k_P(t,x,y)=\int e^{\frac{i}{\hbar}(\varphi_P(t,x,\eta)-y.\eta)}\,b_P(t,x,y,\eta,\hbar)\,d\eta,$

where $\varphi_P(t,x,\eta)$ satisfies the Hamilton-Jacobi equation (the eikonal equation of geometrical optics)
$\partial_t\varphi_P(t,x,\eta)+P(x,\partial_x\varphi_P(t,x,\eta))=0,\qquad \varphi_P|_{t=0}=x.\eta,$
and the function $b_P$ has an asymptotic expansion of the form
$b_P(t,x,y,\eta,\hbar)\sim\sum_{j=0}^{\infty}b_{P,j}(t,x,y,\eta)\,\hbar^{j}.$
The functions bP,j (t, x, y, η) are calculated from the so called transport equations. See for example [R], [DSj], [EZ] or Appendix A of the paper in hand for the details of the above construction. In this setting, when one integrates the kernel kP (t, x, y) on the diagonal and applies the stationary phase to the given oscillatory integral, one obtains very complicated expressions for the wave invariants. Of course the classical calculations above show the existence of asymptotic formulas of the form (1.5) (which can be used to get Weyl-type estimates for the counting functions of the eigenvalues, see for example [BPU]). Unfortunately these formulas for the wave invariants are not helpful when trying to establish some inverse spectral results.
Hence, one should look for more efficient methods to calculate the wave invariants aj (t). One approach is to use the semi-classical Birkhoff normal forms, which was used in the papers [Sj] and [ISjZ] and [GU]. The Birkhoff normal forms methods were also used by S. Zelditch in [Z4] to obtain positive inverse spectral results for real analytic domains with symmetries of an ellipse. Zelditch proved that for a real analytic plane domain with symmetries of an ellipse, the wave invariants at a bouncing ball orbit, which is preserved by the symmetries, determine the real analytic domain under isometries of the domain.
Recently in [Z3], Zelditch extended his earlier result to real analytic domains with only one mirror symmetry. His approach for this new result was different: he used a direct method (the Balian-Bloch trace formula), involving Feynman-diagrammatic calculations within the stationary phase method, to obtain a more explicit formula for the wave invariants at the bouncing ball orbit. Motivated by this work of Zelditch [Z3], our approach is also
somewhat direct and involves combinatorial calculations within the stationary phase method. Our formula (1.10) for the kernel of the propagator $U(t)=e^{\frac{-it}{\hbar}\hat H}$ differs from the WKB expression in that we keep only the quadratic part of the phase, namely the phase function $S(t,x,y)$ in (1.11) of the propagator of the anisotropic oscillator, and we put the rest into the amplitude $\sum_{l=0}^{\infty}a_l(t,\hbar,x,y)$ in (1.10). The details of this construction are given in Proposition 2.2.1.
Chapter 2
Semi-classical Parametrix for the Schrödinger Propagator

2.1 Two reductions
Because of some technical issues arising in the proof of Theorem 1.2.1, we will need the following two lemmas as reductions.
In the following, we let $\chi\in C_0^\infty(\mathbb{R}^n)$ be a cutoff which is supported in the unit ball $B_1(0)$ and equals one in $B_{1/2}(0)$.
Lemma 2.1.1. Let the Hamiltonians $P$ and $H$ be defined by

(2.1) $P(x,\xi)=\tfrac12\xi^2+V(x)$, $V(x)=\tfrac12\sum_{k=1}^{n}\omega_k^2x_k^2+W(x)$, $W(x)=O(|x|^3)$ as $x\to0$;
$H(x,\xi)=\tfrac12\xi^2+V_\hbar(x)$, $V_\hbar(x)=\tfrac12\sum_{k=1}^{n}\omega_k^2x_k^2+W_\hbar(x)$, $W_\hbar(x)=\chi\big(\tfrac{x}{\hbar^{\frac12-\varepsilon}}\big)W(x)$, $\varepsilon>0$ sufficiently small,

and let $\hat P$ and $\hat H$ be the corresponding Weyl (or standard) quantizations. Then for $t$ in a sufficiently small interval $(0,t_0)$,
\[
\mathrm{Tr}\big(\Theta(\hat P)e^{\frac{-it}{\hbar}\hat P}\big)=\mathrm{Tr}\big(\Theta(\hat H)e^{\frac{-it}{\hbar}\hat H}\big)+O(\hbar^\infty).
\]
In other words, the wave invariants $a_j(t)$ do not change if we replace $P$ by $H$.

Proof. The proof is given in Appendix A.

Next we use the following lemma to get rid of $\Theta(\hat H)$.

Lemma 2.1.2. Let $H$ be defined by (2.1). Then in the sense of tempered distributions
\[
\mathrm{Tr}\big(\Theta(\hat H)e^{\frac{-it}{\hbar}\hat H}\big)=\mathrm{Tr}\big(e^{\frac{-it}{\hbar}\hat H}\big)+O(\hbar^\infty).
\]
This means that if we sort the spectrum of $\hat H$ as $E_1(\hbar)<E_2(\hbar)\le\cdots\le E_j(\hbar)\to+\infty$, then for every Schwartz function $\varphi(t)\in\mathcal S(\mathbb R)$,
\[
\Big\langle \mathrm{Tr}\big(e^{\frac{-it}{\hbar}\hat H}\big)-\mathrm{Tr}\big(\Theta(\hat H)e^{\frac{-it}{\hbar}\hat H}\big),\ \varphi(t)\Big\rangle
=\sum_{j=1}^{\infty}\big(1-\Theta(E_j(\hbar))\big)\,\hat\varphi\Big(\frac{E_j(\hbar)}{\hbar}\Big)=O(\hbar^\infty).
\]
Proof. The proof is given in Appendix B.

Because of the above lemmas, it is enough to study the asymptotics of $\mathrm{Tr}\big(e^{\frac{-it}{\hbar}\hat H}\big)$.

2.2 Construction of $k(t,x,y)$, the kernel of $U(t)=e^{\frac{-it}{\hbar}\hat H}$

In this section we follow the construction in [Z] to obtain an oscillatory integral representation of $k(t,x,y)$, the kernel of the propagator $e^{\frac{-it}{\hbar}\hat H}$. The reader should consult [Z] for the details. In that article Zelditch uses the Dyson expansion of the propagator to study the singularities of the kernel $k(t,x,y)$, but he does not consider the semi-classical setting $\hbar\to0$ (i.e. in his calculations $\hbar=1$). We follow the same calculations but keep track of $\hbar$ carefully.
The following important proposition gives a new semi-classical approximation to the propagator $U(t)$ near the bottom of the well.

Proposition 2.2.1. Let $k(t,x,y)$ be the Schwartz kernel of the propagator $U(t)=e^{\frac{-it}{\hbar}\hat H}$. Then:
(A) We have

(2.2) $k(t,x,y)=\prod_{k=1}^{n}\Big(\dfrac{\omega_k}{2\pi i\hbar\sin\omega_k t}\Big)^{\frac12} e^{\frac{i}{\hbar}S(t,x,y)}\sum_{l=0}^{\infty}a_l(t,\hbar,x,y),$

where
$S(t,x,y)=\sum_{k=1}^{n}\dfrac{\omega_k}{\sin\omega_k t}\Big(\tfrac12(\cos\omega_k t)(x_k^2+y_k^2)-x_ky_k\Big).$
Also $a_0=1$ and, for $l\ge1$,

(2.3) $a_l(t,\hbar,x,y)=\Big(\dfrac{-1}{2\pi}\Big)^{ln}\Big(\dfrac{1}{i\hbar}\Big)^{l(n+1)}\int_0^t\!\cdots\!\int_0^{s_{l-1}}\overbrace{\int\cdots\int}^{2l} e^{\frac{i}{\hbar}\Phi_l}\,b_l(s,x,y,\vec z,\vec\xi)\,d^lz\,d^l\xi\,d^ls,$

where

(2.4) $\Phi_l=\sum_{k=1}^{n}\Big\{\dfrac{\omega_k}{2}\cot\omega_k t\,(z_1^k)^2+\sum_{i=1}^{l}(z_{i+1}^k-z_i^k)\xi_i^k\Big\},$

and

(2.5) $b_l=\prod_{i=1}^{l}W_\hbar\Big(\dfrac{\cos\omega_k s_i}{2}(z_{i+1}^k+z_i^k)-\dfrac{\sin\omega_k s_i}{\omega_k}\xi_i^k+\dfrac{\sin\omega_k(t-s_i)}{\sin\omega_k t}y^k+\dfrac{\sin\omega_k s_i}{\sin\omega_k t}x^k\Big),\qquad (z_{l+1}:=0).$
(B) For every $\vec\alpha,\vec\beta$ there exists $k_0=k_0(\alpha,\beta)$ such that for every $0<\hbar\le h_0\le1$,

(2.6) $\big|\partial_x^{\vec\alpha}\partial_y^{\vec\beta}a_l(t,\hbar,x,y)\big|\le \dfrac{C_{\alpha,\beta,n}(t)^{l}\,\|W_\hbar^\star\|^{l}_{|\alpha|+|\beta|+k_0}}{l!}\ \hbar^{\,l(\frac12-3\varepsilon)-\frac12(|\vec\alpha|+|\vec\beta|)},$

where

(2.7) $W_\hbar^\star(x)=\dfrac{W_\hbar(\hbar^{\frac12}x)}{\hbar^{3(\frac12-\varepsilon)}}=\chi(\hbar^{\varepsilon}x)\,\dfrac{W(\hbar^{\frac12}x)}{\hbar^{3(\frac12-\varepsilon)}}$

is uniformly bounded in $B(\mathbb R^n_x)$; i.e. $W_\hbar^\star$ is bounded with bounded derivatives, and the bounds are independent of $\hbar$. Hence the sum $a(t,\hbar,x,y)=\sum_{l=0}^{\infty}a_l(t,\hbar,x,y)$ in (2.2) is uniformly convergent in $B(\mathbb R^n_x\times\mathbb R^n_y)$. In fact
\[
\big|\partial_x^{\vec\alpha}\partial_y^{\vec\beta}a(t,\hbar,x,y)\big|\le \hbar^{-\frac12(|\vec\alpha|+|\vec\beta|)}\exp\Big(\hbar^{\frac12-\varepsilon}\,C_{\alpha,\beta,n}(t)\,\|W_\hbar^\star\|_{|\alpha|+|\beta|+k_0}\Big).
\]

Proof. Following [Z], we denote
\[
\hat H_0=-\tfrac12\hbar^2\Delta+\tfrac12\sum_{k=1}^{n}\omega_k^2x_k^2\quad(\text{anisotropic oscillator}),\qquad
\hat H=\hat H_0+W_\hbar(x)=-\tfrac12\hbar^2\Delta+V_\hbar(x),
\]
and by $U_0(t)=e^{\frac{-it}{\hbar}\hat H_0}$ and $U(t)=e^{\frac{-it}{\hbar}\hat H}$ we mean the corresponding propagators.
From
$(i\hbar\partial_t-\hat H_0)U(t)=W_\hbar\,U(t),$
we obtain

(2.8) $U(t)=U_0(t)+\dfrac{1}{i\hbar}\int_0^t U_0(t-s)\,W_\hbar\,U(s)\,ds.$

By iteration we get the norm-convergent Dyson expansion:

(2.9) $U(t)=U_0(t)+\sum_{l=1}^{\infty}\dfrac{1}{(i\hbar)^l}\int_0^t\!\cdots\!\int_0^{s_{l-1}} U_0(t)\,[U_0(s_1)^{-1}W_\hbar U_0(s_1)]\cdots[U_0(s_l)^{-1}W_\hbar U_0(s_l)]\,ds_l\cdots ds_1.$
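The Duhamel identity (2.8), which generates (2.9) by iteration, is exact, so it can be checked directly on a finite-dimensional truncation. The sketch below is ours and purely illustrative: small Hermitian matrices stand in for $\hat H_0$ and $W_\hbar$, and the time integral is done by the trapezoid rule.

# Sketch: verify the Duhamel formula (2.8), the first step toward the Dyson
# expansion (2.9), for small Hermitian matrices (illustrative choices only).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, hbar, t = 6, 0.1, 0.3
H0 = np.diag(np.arange(1.0, n + 1.0))                  # plays the role of H_0
W = rng.standard_normal((n, n)); W = 0.05 * (W + W.T)  # small "potential"
H = H0 + W

U = lambda s: expm(-1j * s * H / hbar)
U0 = lambda s: expm(-1j * s * H0 / hbar)

# trapezoid rule for (1/(i*hbar)) * \int_0^t U0(t-s) W U(s) ds
s = np.linspace(0.0, t, 2001)
vals = np.array([U0(t - si) @ W @ U(si) for si in s])
integral = (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1])) * (s[1] - s[0])

rhs = U0(t) + integral / (1j * hbar)
print(np.linalg.norm(U(t) - rhs))   # small: only quadrature error remains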
It is well known that for $t\ne\frac{m\pi}{\omega_k}$ the kernel of $U_0(t)$ is given by

(2.10) $k_0(t,x,y)=\prod_{k=1}^{n}\Big(\dfrac{\omega_k}{2\pi i\hbar\sin\omega_k t}\Big)^{\frac12} e^{\frac{i}{\hbar}S(t,x,y)},$

where
$S(t,x,y)=\sum_{k=1}^{n}\dfrac{\omega_k}{\sin\omega_k t}\Big(\tfrac12(\cos\omega_k t)(x_k^2+y_k^2)-x_ky_k\Big).$
Then by taking kernels in (2.9), and after some changes of variables (see [Z], pages 8-9 and 18-19), we get (2.2). This finishes the proof of part (A) of the proposition.

Before proving part (B), let us mention a useful estimate from [Z]. The setting in [Z] is non-semiclassical, i.e. $\hbar=1$. On pages 17-18 of [Z] the following estimate (for $\hbar=1$) is proved using integration by parts: there exist a positive integer $k_0=k_0(\alpha,\beta,n)$ and a continuous function $C_{\alpha,\beta,n}(t)$ such that

(2.11) $\big|\partial_x^{\vec\alpha}\partial_y^{\vec\beta}a_l(t,1,x,y)\big|\le\dfrac{1}{l!}\,C_{\alpha,\beta,n}(t)^{l}\,\|W_1\|^{l}_{|\alpha|+|\beta|+k_0},\qquad (W_1=W_\hbar|_{\hbar=1}).$
The estimate (2.11) changes if one keeps track of $\hbar$ in the calculations; this is part (B) of the proposition. Let us prove part (B), namely the estimate (2.6). First, in (2.3) we apply the change of variables $x\mapsto\hbar^{\frac12}x$, $y\mapsto\hbar^{\frac12}y$, $\vec z\mapsto\hbar^{\frac12}\vec z$ and $\vec\xi\mapsto\hbar^{\frac12}\vec\xi$. This gives us $\hbar^{ln}$ in front of the integral. Then we replace $W_\hbar$ by $\hbar^{3(\frac12-\varepsilon)}W_\hbar^\star$. After collecting all the powers of $\hbar$ in front of the integral we obtain
\[
a_l(t,\hbar,\hbar^{\frac12}x,\hbar^{\frac12}y)=\Big(\frac{-1}{2\pi}\Big)^{ln}\hbar^{\,l(\frac12-3\varepsilon)}\int_0^t\!\cdots\!\int_0^{s_{l-1}}\overbrace{\int\cdots\int}^{2l} e^{i\Phi_l}\,b_l^\star(s,x,y,\vec z,\vec\xi)\,d^lz\,d^l\xi\,d^ls,
\]
where
\[
b_l^\star(s,x,y,\vec z,\vec\xi)=\prod_{i=1}^{l}W_\hbar^\star\Big(\frac{\cos\omega_k s_i}{2}(z_{i+1}^k+z_i^k)-\frac{\sin\omega_k s_i}{\omega_k}\xi_i^k+\frac{\sin\omega_k(t-s_i)}{\sin\omega_k t}y^k+\frac{\sin\omega_k s_i}{\sin\omega_k t}x^k\Big).
\]
find uniform bounds (i. e. independent of ~) for the m-th derivatives of the function W~⋆ (x). Since χ(x) is supported in the unit ball, from the definition (2.7) we see that W~⋆ is supported in |x| < h−ε . So from (2.7) it is enough to find uniform bounds in 1
~ for the m-th derivatives of the function
W (~ 2 x) 1 ~3( 2 −ε)
in the ball |x| < ~−ε . This is very
clear for m ≥ 3. For m < 3 , we use the order of vanishing of W (x) at x = 0. Since W (0) = 0, ∇W (0) = 0 and HessW (0) = 0, the order of vanishing of W at x = 0 is 3. Therefore in the ball |x| < ~−ε , the functions 1
1
1
W (~ 2 x) (∂ α W )(~ 2 x) (∂ α ∂ β W )(~ 2 x) , , , 1 1 1 (~ 2 x)3 (~ 2 x)2 ~2 x are bounded functions with uniform bounds in ~, and the statement follows easily for m < 3.
16
Chapter 3 Asymptotics of the Trace In this chapter we show that the integral T r U(t) =
R
k(t, x, x)dx is convergent as an
oscillatory integral and using (2.2) we express T r U(t) as an infinite sum of oscillatory integrals with an appropriate ~-estimate for the remainder term.
3.1
Trace as a tempered distribution
In this section we review some standard facts. We know that the sum T r U(t) =
X
e−
itEj (~) ~
is convergent in the sense of tempered distributions, i.e. T r U(t) ∈ S ′ (R). This can be shown by the Weyl’s law in its high energy setting, which implies that for potentials P of the form V (x) = 12 nk=1 ωk2 x2k + W~(x), with W ∈ B(Rn ), for fixed ~, the j−th eigenvalue Ej (~) satisfies
(3.1)
2
Ej (~) ∼ C(n, ~)j n ,
j → ∞.
Another way to define T r U(t) is to write it as the limit
17
(3.2)
T r U(t) = lim+ T r U(t − iτ ) = lim+ τ →0
τ →0
X
e−
(it+τ )Ej (~) ~
.
This time the Weyl’s law (3.1) implies that the sum T r U(t − iτ ) is absolutely uniformly convergent because of the rapidly decaying factor e−
τ Ej (~) ~
. As a result, U(t−iτ )
is a trace class operator. It is clear that the kernel of U(t − iτ ) is k(t − iτ, x, y), the analytic continuation of the kernel k(t, x, y) of U(t). Clearly k(t − iτ, x, y) is continR uous in x and y. So we can write T r U(t − iτ ) = k(t − iτ, x, x)dx. We notice that
this integral is uniformly convergent. This is because up to a constant this integral R i equals to e ~ S(t−iτ,x,x) a(t − iτ, ~, x, x), and the exponential factor in the integral is rapidly decaying for τ > 0 as |x| → ∞ and a is a bounded function. More precisely ℜ(iS(t − iτ, x, x)) =
n X k=1
n
ℜ(−iωk tan(
ωk (t − iτ ) 2 X ωk (1 − e2τ ωk ) 2 ))xk = x , wk (it+τ ) |2 k 2 |1 + e k=1
and ωk (1 − e2τ ωk ) < 0. |1 + ewk (it+τ ) |2 R The discussion above shows that the integral k(t, x, x)dx can be defined by integrations by parts as follows: Since
n
2
iS(t,x,x)
< Dx > e
iS(t,x,x)
:= (1−∆)e
X ~ω t ωk t iS(t,x,x) = (1+ k 2~ω tan( )~x k2 +2i ωk tan( ))e , 2 2 k=1
we can write (3.3)
~
n 2
Z
Z
e
i S(t,x,x) ~
a(t, ~, x, x)dx = ~
n 2
Z
eiS(t,x,x) a(t, ~,
√
~x,
√
~x)dx =
n
eiS(t,x,x) (< Dx >2 (1+ k 2~ω tan(
If we assume 0 < t
n , 2
a(t, ~, x, y) ∈ B(Rnx × Rny ), the integral becomes absolutely convergent. 18
and because
3.2
Trace as an infinite sum of oscillatory integrals
Since by (2.6) the series a(t, ~, x, y) = vergent, we have Z
e
i S(t,x,x) ~
P∞
l=0 al (t, ~, x, y)
a(t, ~, x, x)dx =
∞ Z X
is absolutely uniformly con-
i
e ~ S(t,x,x) al (t, ~, x, x)dx,
l=0
and therefore we obtain an infinite sum of oscillatory integrals. The next step is to apply the stationary phase method to each integral above and then add the asymptotic expansions to obtain an asymptotic expansion for the T r U(t).
3.3
Estimates for the remainder
Because we have an infinite sum of asymptotic expansions, we have to establish that the resulting asymptotic for the trace is a valid approximation. Hence we have to find some appropriate ~-estimates for the remainder term of the series. To be more precise define (3.4)
Il (t, ~) = ~
−n 2
Hence by this notation, T r U(t) = following crucial proposition.
Z
i
e ~ S(t,x,x) al (t, ~, x, x)dx.
Qn
1 ωk 2 k=1 ( 2πi sin ωk t )
P∞
l=0 Il (t, ~).
Now we have the
Proposition 3.3.1. Fix 0 < ε < 16 , and Il (t, ~) be defined by (3.4). Then for all m≥1 (3.5)
n Y
m−1 X 1 1 ωk T r U(t) = ( )2 Il (t, ~) + O(~m( 2 −3ε) ). 2πi sin ωk t l=0 k=1
Proof. If in (3.4) we integrate by parts as we did in (3.3), and choose n0 = [ n2 ] + 1, then using (2.6) we get |Il (t, ~)| ≤
Cn (t) Cn′ (t)
l
19
||W~⋆||l2n0 +k0 l( 1 −3ε) ~ 2 , l!
where Cn (t) = max|α|+|β|≤2n0 {Cα,β,n (t)}. We note that
1 2
− 3ε > 0 because 0 < ε < 16 .
Now it is clear that for every positive integer m, and every 0 < ~ ≤ h0 ≤ 1,
(3.6)
|
∞ X l=m
⋆
1
Il (t, ~)| ≤ Cn′ (t)e{Cn (t)||W~ ||2n0 +k0 } ~m( 2 −3ε) .
Since by Proposition 2.2.1.B, sup0 b (x, ~ z , ξ) = Pj bl (x, ~z, ξ) l l 2j .j! 2j .j!
X
r
hrl 1 r2 ...hl 2j−1
r2j
r1 ,...,r2j ∈Al
~ ∂ 2j bl (x, ~z, ξ) , ∂r1 ...∂r2j
where in the sum (3.13) the indices r1 , ..., r2j run in the set Al = {xk , z1k , ..zlk , ξ1k , ..ξlk }nk=1 , ′
with r, r ′ ∈ Al , corresponds to the (r, r ′ )-th entry of the inverse Hessian Hl−1 . and hrr l
We note that Pj bl (0) = 0 if 2j < 3l. This is true because of (2.5) and because W (0) = 0, ∇W (0) = 0 and HessW (0) = 0. This implies, first, there are not any negative powers of ~ in the expansion (as we were expecting). Second, the constant term (i.e. the 0-th wave invariant), which corresponds to the term l = j = 0 in the sum, equals n Y
n −πn n Y 1 ωk i− 2 e 4 1 2 a0 (t) = T rU0 (t) = ( ) . 1 = ω t k sin ωk t (2ωk tan 2i sin ω2k t 2 k=1 k=1 2 )
And third (using (3.11)), for j ≥ 1 the coefficient of ~j in (3.12) equals
Z t Z s1 Z sl−1 2j X 1 l(n−1)+ n i π4 sgnHl 2 aj (t) = ( ) i e ... Pl+j bl (0)dsl ...ds1 . 2i sin ω2k t l=1 0 0 0 k=1 n Y
22
The sum goes only up to 2j because if l > 2j then 2(l + j) < 3l and Pl+j bl (0) = 0.
This proves the first two parts of Theorem 1.2.1.
3.5
Calculations of the wave invariants and the proof of part 3 of Theorem 1.2.1
In this section we try to calculate the wave invariants aj (t) from the formulas (1.7). First of all, let us investigate how the terms with highest order of derivatives appear in aj (t). Because bl is the product of l copies of W~ functions, and because we have to put at least 3 derivatives on each W~ to obtain non-zero terms, the highest possible order of derivatives that can appear in Pj+l bl (0), is 2(j + l) − 3(l − 1) = 2j − l + 3. This implies that, because in the sum (1.7) we have 1 ≤ l ≤ 2j, the highest order of derivatives in aj (t) is 2j + 2 and those derivatives are produced by the term corresponding to l = 1, i.e. Pj+1 b1 (0). The formula (1.7) also shows that aj (t) is a polynomial of degree 2j. The term with the highest polynomial order is the one with l = 2j, i.e. P3j b2j (0) (which has the lowest order of derivatives) and the term Pj+1 b1 (0) is the linear term of the polynomial. Now let us calculate Pj+1 b1 (x, ~z, ξ~ ) and prove Theorem 1.2.1.3.
By (3.13),
(3.14)
Pj+1 b1 =
i−(j+1) 2j+1 .(j + 1)!
X
r
hr11 r2 ...h12j+1
r1 ,...,r2j+2 ∈A1
r2j+2
∂ 2j+2 b1 , ∂r1 ...∂r2j+2
where here by (2.5)
b1 = W~(
cos ωk s k sin ωk s k sin ωk (t − s) + sin ωk s k z − ξ +( )x ). 2 ωk sin ωk t
Also by (3.10),
23
H1−1
=
D( −1 2~ ω
cot( ω~2t ))
0
0
0
0
−I
0
−I −D(w ~ cot ~ ω t)
.
Hence the only non-zero entries of H1−1 are the ones of the form hx1 ξk ξk
h1
k xk
, hz1
k ξk
k zk
= hξ1
, and
. Now we let
k k r r ixk xk = the number of times hx1 x appears in hr11 r2 ...h12j+1 2j+2 in (3.14), k k r r iz k ξ k = the number of times hz1 ξ appears in hr11 r2 ...h12j+1 2j+2 in (3.14), i k k = the number of times hξ k ξ k appears in hr1 r2 ...hr2j+1 r2j+2 in (3.14). ξ ξ 1 1 1
By applying these notations to (3.14), (2.5) we get i−(j+1) Pj+1 b1 = j+1 2 (j + 1)! P n
P
X
k=1 ixk xk +iz k ξk +iξk ξk =j+1
× ×
Qn
k=1
Qn
k=1
ω t
− cot 2k 2ωk
i
n (j + 1)! 2 nk=1 izk ξk Qn k=1 ixk xk !iz k ξ k !iξ k ξ k !
xk xk
(−1)izk ξk (−ωk cot ωk t)iξk ξk sin ωk (t−s)+sin ωk s 2ixk xk cos ω s i k k − sin ω s iz k ξk +2iξk ξk sin ωk t
2
k
z ξ
ωk
k
o 2j+2 ×D2α W , ~ 1 ,...2αn
where αk = ixk xk + iz k ξ k + iξ k ξ k , for k = 1, ..., n. Next we write the above big sum as
X
Pn
k=1 ixk xk +iz k ξk +iξk ξk =j+1
= P
X
αk =j+1
(j + 1)! (⋆) i k=1 xk xk !iz k ξ k !iξ k ξ k !
Qn
(j + 1)! Q αk !
n Y
X
αk !
i k k !i k k !iξ k ξ k ! ixk xk +iz k ξk +iξk ξk =αk k=1 x x z ξ
(⋆).
2j+2 So the coefficient of D2α W~ in Pj+1 b1 , equals 1 ,...2αn
n Y (−1)j+1 i−(j+1) Q Q 2j+1 ( αk !)( ωkαk )
k=1
1 ωk t cot 2 2
sin ωk (t − s) + sin ωk s sin ωk t
2
− cos ωk s sin ωk s + cot ωk t sin2 ωk s
Now we observe that the term in the parenthesis simplifies to 24
!αk
.
1 ωk t cot 2 2
sin ωk (t − s) + sin ωk s sin ωk t
2
− cos ωk s sin ωk s + cot ωk t sin2 ωk s =
1 ωk t cot . 2 2
So we get α~ X 1 −1 1 ~ω t 2j+2 Pj+1 b1 = cot D2~ α W~, j+1 (2i) α ~ ! 2~ω 2
(3.15)
|~ α|=j+1
Finally, by plugging (x, ~z, ξ~ ) = 0 into equation (3.15) and applying it to (1.7), we get (1.8). This finishes the proof of Theorem 1.2.1.3.
For future reference let us highlight the equation we just established
(3.16) S1 :=
X
r r hr11 r2 ...h12j+1 2j+2
r1 ,...,r2j+2 ∈A1
α~ X 1 −1 ~ω t ∂ 2j+2 W 2j+2 = (j+1)! cot D2~ α W, ∂r1 ...∂r2j+2 α ~ ! 2~ω 2 |~ α|=j+1
where W = W(
3.6
cos ωk s k sin ωk s k sin ωk (t − s) + sin ωk s k z − ξ +( )x ). 2 ωk sin ωk t
Calculations of Theorem 1.2.2
R t R s1 0
0
Pj+2b2(0), and the proof of
Throughout this section we assume that V is of the form (1.9). Hence, the only 2j+2 2j+1 non-zero Taylor coefficients are of the form D2~ en = α V (0), or D2~ α+3~ en V (0), where ~
(0, ..., 0, 1). We notice that based on our discussion in the previous section, the Taylor coeffiRtRs cients of order 2j+1 appear in 0 0 1 Pj+2 b2 (0), and they are of the form Dβ2j+1 V (0)D~δ3 V (0). ~ Therefore we look for the coefficients of the data
25
n 2j+1 3 D2~ en V (0); α+3~ en V (0)D3~
(3.17)
o |~ α| = j − 1 .
in the expansion of aj (t). 2j+1 3 Proposition 3.6.1. In the expansion of aj (t), the coefficient of the data D2~ en V (0), α+3~ en V (0)D3~
|~ α| = j − 1, is
(3.18)
c2 (n) t (2i)j+2 α ~!
−1 ~ω t cot 2~ω 2
α~
1 2αn + 5 −1 ωn t 2 1 ( )( cot ) + 4 2 3ωn αn + 1 2ωn 2 9ωn
.
Therefore (3.19)
aj (t) =
c1 (n) X t −1 ~ω t 2j+2 ( cot )α~ D2~ α V (0) j+1 (2i) α ~ ! 2~ω 2 |~ α|=j+1
c2 (n) + (2i)j+2
X
|~ α|=j−1
t α ~!
−1 ~ω t cot 2~ ω 2
α~
1 2αn + 5 −1 ωn t 2 1 ( )( cot ) + 2 3ωn αn + 1 2ωn 2 9ωn4
2j+1 3 D2~ en V (0) α+3~ en V (0)D3~
+{a polynomial of Taylor coefficients of order ≤ 2j}.
2j+1 3 Proof. As we mentioned at the beginning of Section 2.7, the data D2~ en V (0), α+3~ en V (0)D3~ RtRs |~ α| = j −1, appears first in aj (t) and it is a part of the term 0 0 1 Pj+2 b2 (0). So let us
2j+1 3 calculate those terms in the expansion of Pj+2 b2 (0) which contain D2~ en V (0). α+3~ en V (0)D3~
By (2.5), since here l = 2, we have
b2 (s1 , s2 , x, x, z1 , z2 , ξ1, ξ2 ) = W1 W2 ,
(3.20)
W1 = W ( cos ω2 k s1 (z1k + z2k ) − W2 = W ( cos ω2 k s2 z2k −
sin ωk s1 k ξ1 ωk
sin ωk s2 k ξ2 ωk
26
where,
1 )+sin ωk s1 + ( sin ωk (t−s )xk ), sin ωk t
2 )+sin ωk s2 + ( sin ωk (t−s )xk ). sin ωk t
Also from (3.10) we have
(3.21) H2−1
=
D( −1 2~ ω
cot( ω~2t ))
0
0
0
0
0
0
0
0 0 −I −I 0 0 0 −I −I 0 −D(~ω cot(~ω t)) −D(~ω cot(~ω t)) −I −I −D(~ω cot(~ω t)) −D(~ω cot(~ω t))
.
5n×5n
By (3.13) and (3.20), Pj+2 b2 (0) =
(3.22)
0
X
S2 =
i−j
2j .j!
S2 , where S2 is the following sum
r
hr21 r2 ...h22j+3
r2j+4
(W1 W2 )r1 ...r2j+4 (0),
r1 ,...,r2j+4∈A2 ′
′ where A2 = {xk , z1k , z2k , ξ1k , ξ2k }nk=1 and for every r, r ′ ∈ A2 , hrr 2 is the (r, r )-entry of
the matrix H2−1 in (3.21). We would like to separate out those terms in S2 which 2j+1 3 include D2~ en V (0). To do this, from the total number 2j + 4 derivatives α+3~ en V (0)D3~
that we want to apply to W1 W2 , we have to put 3 of them on W1 (or W2 ) and put 2j + 1 of them on W2 (or W1 respectively). These combinations fit into one of the following two different forms (3.23)
S21 =
X
r
hr21 r2 ...h22j+3
r2j+4
(W1 )r1 r2 r3 (W2 )r4 ,r5 ...r2j+4 (0).
r1 ,...,r2j+4 ∈A2
There are 2(j + 1)(j + 2) terms of this form in the expansion of S2 . (3.24)
S22 =
X
r
hr21 r2 ...h22j+3
r2j+4
r1 ,...,r2j+4 ∈A2
27
(W1 )r1 r3 r5 (W2 )r2 ,r4 ,r6 ,r7 ...r2j+4 (0).
j+2 There are 23 terms of this form in the expansion of S2 . 3 Now, we calculate the sums S21 and S22 .
3.6.1
Calculation of S21
We rewrite S21 as
S21 =
X
hr21 r2 hr23 r4
r1 ,...,r4
X
r
hr25 r6 ...h22j+3
r2j+4
(W2 )r5 ...r2j+4
r5 ,...,r2j+4
r4
(W1 )r1 r2 r3 (0).
Then from the definition of W2 in (3.20) and also from (3.21) it is clear that we can apply (3.16) to the sum in the big parenthesis above. Hence we get
(3.25) S21
α~ X X 1 −1 ~ω t 2j cot hr21 r2 hr23 r4 D2~ W (W ) = (j + 1)! 1 r1 r2 r3 (0). α 2 r4 α ~ ! 2~ω 2 r ,...,r |~ α|=j
1
4
This reduces the calculation of S21 to calculating the small sum X
A12 =
˜ 2 )r4 (W1 )r1 r2 r3 (0), hr21 r2 hr23 r4 (W
˜ 2 = D 2j W2 ). (W 2~ α
r1 ,...,r4
Computation of the sum A12 is straight forward and we omit writing the details of this computation. Using Maple, we obtain Z tZ 0
0
s1
A12 ds2 dt = −
t −1 ωn t ˜ 2 D 3 W1 (0). ( cot ) D~e1n W 3~ en 2 2ωn 2ωn 2
If we plug this into (3.25), after a change of variable αn → αn + 1 in indices, we get
28
(3.26) Z t Z s1 0
α~ ~ω t t −1 ωn t 2 2j+1 (j + 1)! X 1 −1 3 ds2 dt = cot (− 2 )( cot ) D2~α+3~en V (0) D3~ en V (0). αn + 1 α ~ ! 2~ω 2 2ωn 2ωn 2
S21
0
|~ α|=j−1
Calculation of S22
3.6.2
We rewrite S22 as X
S22 =
h2r1 r2 h2r3 r4 hr25 r6
r1 ,...,r6
X
r
hr27 r8 ...h22j+3
r2j+4
(W2 )r7 ...r2j+4
r7 ,...,r2j+4
r2 ,r4 ,r6
(W1 )r1 r3 r5 (0).
Again from (3.21) it is clear that we can apply (3.16) to the sum in the big parenthesis above. So
(3.27) S22
X
= (j+1)!
|~ α|=j−1
1 α ~!
−1 ~ω t cot 2~ω 2
α~ X
2j−2 hr21 r2 hr23 r4 D2~ α W2
r1 ,...,r6
So we need to compute
A22 =
X
˜ 2 )r2 ,r4 ,r6 (W1 )r1 r3 r5 (0), hr21 r2 hr23 r4 hr25 r6 (W
(W ) 1 r1 r3 r5 (0). r2 ,r4 ,r6
˜ 2 = D 2j−2W2 ). (W 2~ α
r1 ,...,r6
Using Maple Z t Z s1 0
0
A22 ds2 dt = −
ωn t 2 t 3 ˜ t −1 3 cot − D~3en W2 D3~ en W1 (0). 2 4 2ωn 2ωn 2 12ωn
If we plug this into (3.27) we get
(3.28) Z t Z s1 0
0
S22 ds2 dt
= (j+1)!
X
|~ α|=j−1
1 α ~!
−1 ωt ~ cot 2~ ω 2
We note that the part of the expansion of 2j+1 3 D2~ en V (0), equals α+3~ en V (0) D3~
29
α~
−
R t R s1 0
0
t −1 ωn t 2 t 2j+1 3 cot − D V (0) D3~ en V (0). 2 2ωn 2ωn 2 12ωn4 2~α+3~en
Pj+2 b2 (0) which contains the data
i−j 2(j + 2)(j + 1) 2j .j!
Z tZ 0
s1 0
S21
+2
3
j + 2 Z tZ 3
0
s1 0
S22 .
Finally, by applying equations (3.26) and (3.28) to this we obtain (3.19).
Now using Proposition 3.6.1, we give a proof for Theorem 1.2.2.
Proof of Theorem 1.2.2. First of all, we prove that for all α ~ , the functions cot
~ω α~ t , 2
are linearly independent over C. To show this we define cot ~ : (0, π)n −→ Rn ,
cot(x ~ 1 , ..., xn ) = (cot(x1 ), ..., cot(xn )).
Because ωk are linearly independent over Q, the set {( ω21 t, ..., ω2n t) + πZn ; t ∈ R} ∩ ~ is a homeomorphism and is π-periodic, we con(0, π)n is dense in (0, π)n . Since cot clude that the set {(cot( ω21 t), ..., cot( ω2n t); t ∈ R} is dense in Rn . Now assume X
cα~ cot
α ~
~ω α~ t = 0. 2
Since {(cot( ω21 t), ..., cot( ω2n t); t ∈ R} is dense in Rn , we get X
~ α~ = 0, cα~ X
α ~
~ = (X1 , ..., Xn ) ∈ Rn . But the monomials X ~ α~ are linearly independent for every X over C. So cα~ = 0.
30
Next we argue inductively to recover the Taylor coefficients of V from the wave invariants. Since a0 (t) = we can recover
Qn
ωk t k=1 sin 2 ,
n Y
1 , 2i sin ω2k t k=1
and therefore we can recover {ωk } up to a permutation. Q This can be seen by Taylor expanding nk=1 sin ω2k t . We fix this permutation and we
3 move on to recover the third order Taylor coefficient D3~ en V (0). This term appears
first in a1 (t). By Proposition 3.6.1, we have t X 1 −1 ~ω α~ 4 ( cot t) D2~α V (0) (2i)2 α ~ ! 2~ω 2 |~ α|=2 2 t 5 −1 ωn t 2 1 3 ( cot ) + + c2 (n) D V (0) 3~ en (2i)3 3ωn2 2ωn 2 9ωn4
a1 (t) = c1 (n)
+{a rational function of ωk }. ω ~ α −1 ~ Now since the functions {(cot 2 t) }|~α|=2 and 3ω52 ( 2ω cot ω2n t )2 + n n
1 4 9ωn
are linearly in-
4 3 2 dependent over C, we can therefore recover the data {D2~ α|=2 and {D3~ α V (0)}|~ en V (0) } 3 from a1 (t). So we have determined the third order term D3~ en V (0) up to a minus sign
from the first invariant a1 (t). This choice of minus sign corresponds to a reflection. We fix this reflection and we move on to determine the higher order Taylor coefficients inductively. 3 Next we assume D3~ en V (0) 6= 0 and that we know all the Taylor coefficients 2j+1 Dβm α|=j−1 and ~ V (0) with m ≤ 2j. We wish to determine the data {D2~ α+3~ en V (0)}|~ 2j+2 {D2~ α|=j+1 , from the wave invariant aj (t). At this point we use Proposiα V (0)}|~
tion 3.6.1, and to finish the proof of Theorem 1.2.2 we have to show that the set of functions
n ~ω (cot t)α~ ; 2
o n ~ω t α~ 1 2αn + 5 −1 ωn t 2 1 |~ α| = j+1 ∪ (cot ) ( )( cot ) + 4 ; 2 3ωn2 αn + 1 2ωn 2 9ωn 31
o |~ α| = j−1 ,
are linearly independent over C. But this is clear from our discussion at the beginning of the proof.
32
Chapter 4 Appendix A In this appendix we prove Lemma 2.1.1.
Proof. First of all we would like to change the function Θ slightly by rescaling it. We choose 0 < τ < 2ε so that ~1−τ = o(~1−2ε ). Then we define
(4.1)
Θ~(x) := Θ(
x ~1−τ
).
Thus Θ~ ∈ C0∞ ([0, ∞)) is supported in the interval I~ = [0, ~1−τ δ]. In Appendix B, using min-max principle we show that
ˆ T r(Θ(H)e
−it ˆ H ~
ˆ ) = T r(Θ~(H)e
−it ˆ H ~
) + O(~∞ ) = T r(e
−it ˆ H ~
) + O(~∞ ).
Hence to prove the lemma it is enough to show
T r(Θ(Pˆ )e
−it ˆ P ~
ˆ ) = T r(Θ~(H)e
−it ˆ H ~
) + O(~∞).
To prove this identity we use the WKB construction of the kernel of the operators Θ(Pˆ )e
−it ˆ P ~
ˆ and Θ~(H)e
−it ˆ H ~
and make a comparison between them.
33
4.1
WKB construction for Θ(Pˆ )e
−it Pˆ ~
In [DSj], chapter 10, a WKB construction is made for Θ(Pˆ )e
−it ˆ P ~
for symbols P
in the symbol class S00 (1) which are independent of ~ or of the form P (x, ξ, ~) ∼ P0 (x, ξ) + ~P1 (x, ξ) + ..., where Pj ∈ S00 are independent of ~ (but not for symbols H = H(x, ξ, ~) ∈ Sδ00 ). −it ˆ It is shown that we can approximate Θ(Pˆ )e ~ P for small time t, say t ∈ (−t0 , t0 ),
by a fourier integral operator of the form
UP (t)u(x) = (2π~)
−n
Z Z
ei(ϕP (t,x,η)−y.η)/~bP (t, x, y, η, ~)u(y)dydη,
where bP ∈ C ∞ ((−t0 , t0 ); S(1)) have uniformly compact support in (x, y, η), and ϕP is real, smooth and is defined near the support of bP . The functions ϕP and bP are found in such a way that for all t ∈ (−t0 , t0 ) ||Θ(Pˆ )e
−it ˆ P ~
− UP (t)||tr = O(~∞ ).
Let us briefly review this construction, made in [DSj]. First of all, in Chapter 8, Theorem 8.7, it is proved that for every symbol P ∈ S00 (1), we have Θ(Pˆ ) = Opw (aP (x, ξ, ~)) for some aP (x, ξ, ~) ∈ S00 (1), where here Pˆ and Opw (aP (x, ξ, ~)) are respectively the Weyl quantization of P and aP (x, ξ, ~). It is also shown that aP ∼ aP,0 (x, ξ) + haP,1 (x, ξ) + ... for some aP,j (x, ξ) ∈ S00 (1). The idea of proof is as ˜ ∈ C 1 (C) is follows. In Theorem 8.1 of [DSj] it is shown that if Θ ∈ C0∞ (R), and if Θ 0 ˜ an almost analytic extension of Θ (i.e. ∂¯Θ(z) = O(|ℑz|∞ )), then
(4.2)
−1 Θ(Pˆ ) = π
Z ¯˜ ∂ Θ(z) L(dz). ˆ C z −P
Then it is verified that for some symbol r(x, ξ, z; ~), we have (z−Pˆ )−1 = Opw (r(x, ξ, z; ~)). By symbolic calculus, one can find a formal asymptotic expansion of the form 34
1 q1 (x, ξ, z) 2 q2 (x, ξ, z) +~ + ~ + ..., z−P (z − P )3 (z − P )5 by formally solving Opw (r(x, ξ, z; ~))♯~(z − Pˆ ) = (z − Pˆ )♯~Opw (r(x, ξ, z; ~)) = 1. We r(x, ξ, z; ~) ∼
can see that qj (x, ξ, z) are polynomials in z with smooth coefficients. Finally it is shown that Θ(Pˆ ) = Opw (aP (x, ξ, ~)), where aP ∈ S00 is given by −1 aP (x, ξ, ~) = π
Z
˜ ∂¯Θ(z)r(x, ξ, z; ~)L(dz). C
By the above asymptotic expansion for r(x, ξ, z; ~) one obtains an asymptotic aP ∼ aP,0 + ~aP,1 + ..., where
(4.3)
aP,j
−1 = π
Z
qj (x, ξ, z) 1 2j ˜ L(dz) = ∂ (qj (x, ξ, t)Θ(t))|t=P (x,η) . ∂¯Θ(z) 2j+1 (z − P ) (2j)! t C
Then, again in Chapter 10 of [DSj], it is shown that ϕP (t, x, η) and bP (t, x, y, η, ~) satisfy (4.4)
∂t ϕP (t, x, η) + P (x, ∂x ϕP (t, x, η)) = 0, bP ∼ bP,0 + ~bP,1 + . . . ,
ϕP |t=0 = x.η,
bP,j = bP,j (t, x, y, η) ∈ C ∞ ((−t0 , t0 ); S00 (1)),
where (4.5)
1 ∂ b + ∂ ϕ , ∂ b + 2 ∆x ϕP . bP,j = − 12 ∆x bP,j−1 , t P,j x P x P,j
j ≥ 0,
(bP,−1 = 0),
b | = ψ(x, η)a ( x+y , η)ψ(y, η). P,j t=0 P,j 2
In (4.5), aP,j is given by (4.3) and ψ(x, η) is any C0∞ function which equals 1 in a neighborhood of P −1 (I) where I = [0, δ] is, as before, the range of our low-lying eigenvalues and where Θ is supported.
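As a quick illustration of the eikonal and transport equations (4.4)-(4.5) (a toy example of ours, not taken from [DSj]): for the free Hamiltonian $P(x,\xi)=\tfrac12\xi^2$ everything can be solved explicitly,
\[
\varphi_P(t,x,\eta)=x.\eta-\tfrac t2|\eta|^2,\qquad
\partial_t\varphi_P+\tfrac12|\partial_x\varphi_P|^2=-\tfrac12|\eta|^2+\tfrac12|\eta|^2=0,\qquad \varphi_P|_{t=0}=x.\eta,
\]
and since $\Delta_x\varphi_P=0$, the transport equations collapse (in the notation of (4.5)) to $(\partial_t+\eta\cdot\partial_x)\,b_{P,j}=-\tfrac12\Delta_x b_{P,j-1}$, so each amplitude is obtained by integrating the previous one along the straight characteristics $x(t)=x_0+t\eta$.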
ˆ There exists a similar construction for Θ~(H)e 35
−it ˆ H ~
, except here H ∈ Sδ00 .
4.2
ˆ ˆ −it ~ H WKB construction for Θ~ (H)e
Since in (2.1), H = H(x, ξ, ~) ∈ Sδ00 , with δ0 =
1 2
− ε, we can not simply use the
construction in [DSj] mentioned above. Here in two lemmas we show that the same ˆ H ˆ −it ~ construction works for the operator Θ~(H)e . We will closely follow the proofs in
[DSj]. Lemma 4.2.1. 1) Let Θ~ be given by (4.1) and H ∈ Sδ00 by (2.1). Then for some aH ∈ Sδ00 we have ˆ = Opw (aH (x, ξ, ~)). Moreover aH (x, ξ, ~) ∼ aH,0 (x, ξ, ~) + ~aH,1(x, ξ, ~) + Θ(H) ..., where aH,j (x, ξ, ~) ∈ Sδ00 is given by
(4.6) aH,j
−1 = π
Z
˜ qH,j (x, ξ, z, ~) L(dz) = 1 ∂t2j (qH,j (x, ξ, t, ~)Θ~(t))|t=H(x,ξ,~) . ∂¯Θ(z) (z − H)2j+1 (2j)! C
2) Choose c such that 0 < c < min{1, ωk2 }nk=1 ≤ max{1, ωk2}nk=1 < 1c . Let ψ~(x, η) be a function in C0∞ (R2n ) ∩ Sδ00 (R2n ) which is supported in the ball {x2 + η 2 < 4c−1 ~1−τ δ} and equals 1 in a neighborhood of H −1 (I), where I~ = [0, ~1−τ δ] (I~ is where Θ~ is supported). Then
(4.7) ˆ Θ~(H)u(x) = (2π~)−n
Z Z
ei(x−y).η/~ψ~(x, η)aH (
x+y , η, ~)ψ~(y, η)u(y)dydη+K(~)u(x), 2
where ||K(~)||tr = O(~∞ ). Proof of Lemma 4.2.1: Since H ∈ Sδ00 and δ0 =
1 2
− ε < 12 , the symbolic cal-
culus mentioned in the last section can be followed similarly to prove Lemma 4.2.1.1. It is also easy to check that in (4.6), aH,j ∈ Sδ00 . The second part of the Lemma is 36
stated in [DSj] , equation 10.1, for the case P ∈ S00 . The same argument works for H ∈ Sδ00 , precisely because the factor ~N on the right hand side of the inequality in Proposition 9.5 of [DSj] changes to ~N −δ0 α . Thus the discussion on pages 115 − 116 still follows.
Lemma 4.2.2. There exists t0 > 0 such that for every t ∈ (−t0 , t0 ), there exist functions ϕH (t, x, η, ~) and bH (t, x, y, η, ~) such that the operator UH (t) defined by
(4.8)
UH (t)u(x) = (2π~)
−n
Z Z
ei(ϕH (t,x,η,~)−y.η)/~bH (t, x, y, η, ~)u(y)dydη,
satisfies ˆ H ˆ −it ~ ||Θ~(H)e − UH (t)||tr = O(~∞).
Moreover, we can choose ϕH and bH such that 1) ϕH satisfies the eikonal equation (4.9)
∂t ϕH (t, x, η, ~) + H(x, ∂x ϕH (t, x, η, ~)) = 0,
ϕH |t=0 = x.η.
This equation can be solved in (−t0 , t0 ) × {x2 + η 2 < C~1−τ δ} where C is an arbitrary constant. In fact ϕH is independent of ~ in this domain. (Only the domain of ϕH depends on ~. See (4.12).)
2) For all t ∈ (−t0 , t0 ), we have bH (t, x, y, η, ~) ∈ Sδ00 with supp bH ⊂ {x2 + η 2 , y 2 + η 2 < C1 ~1−τ δ} for some constant C1 . Also bH has an asymptotic expansion of the form 37
(4.10) bH ∼ bH,0 + ~bH,1 + . . . ,
bH,j = bH,j (t, x, y, η, ~) ∈ C ∞ ((−t0 , t0 ); Sδ00 (1)),
and the functions bH,j satisfy the transport equations
(4.11)
∂t bH,j + ∂x ϕH , ∂x bH,j + 1 ∆x ϕH . bH,j = − 1 ∆x bH,j−1 , 2 2
j ≥ 0,
(bH,−1 = 0),
b | = ψ (x, η)a ( x+y , η, ~)ψ (y, η), H,j t=0 ~ H,j ~ 2
where in (4.11) we let ψ~(x, η) be a function in C0∞ (R2n ) ∩ Sδ00 (R2n ) which is supported in the ball {x2 + η 2 < 4c~1−τ δ} and equals 1 in a neighborhood of H −1(I~), where I~ = [0, ~1−τ δ]. Here c is defined in Lemma 4.2.1.2. Also in (4.11), the functions aH,j are defined by (4.6). 3) For all t ∈ (−t0 , t0 ) (4.12)
on
ϕH (t, x, η, ~) = ϕP (t, x, η)
{x2 +η 2 , y 2 +η 2 < C1 ~1−τ δ} ⊃ supp (bH (x, y, η, ~)).
4) For all t ∈ (−t0 , t0 ) (4.13) bH,j (t, x, y, η, ~) = bP,j (t, x, y, η)
on
2 x + η 2 , y 2 + η 2 < c~1−τ δ .
Proof of Lemma 4.2.2: First of all we assume UH (t) is given by (4.8) and we try to solve the equation ˆ H (t)||tr = O(~∞), ||( ~i ∂t + H)U U (0) = Θ (H). ˆ H ~ 38
for ϕH and bH , for small time t. Using (4.7), this leads us to ˆ iϕH /~bH ) ∈ C ∞ ((−t0 , t0 ); S −∞ (1)), e−iϕH /~( ~i ∂t + H)(e δ0 b| = ψ (x, η)a ( x+y , η, ~)ψ (y, η). t=0 ~ H ~ 2
We choose the phase function ϕH = ϕH (t, x, η, ~) to satisfy the eikonal equation (4.9). This equation can be solved in a neighborhood of the support of bH , for small time t ∈ (−t0 , t0 ) with t0 independent of ~. Let us explain how to solve this equation. We let (x(t, z, η; ~), ξ(t, z, η; ~)) be the solution to the Hamilton equation
(4.14)
∂t x = ∂ξ H(x, ξ, ~) = ξ,
x(0, z, η; ~) = z .
∂ ξ = −∂ H(x, ξ, ~) = −∂ V (x), t x x ~
ξ(0, z, η; ~) = η
We can show that (see section 4 of [Ch]) there exists t0 independent of ~ such that for all |t| ≤ t0 we have
(4.15)
|∂z x(t, z, η; ~) − I| ≤ 12 , |∂ ξ(t, z, η; ~)| ≤ 1 , z 2
|∂η x(t, z, η; ~)| ≤
1 2
. |∂η ξ(t, z, η; ~) − I| ≤
1 2
We can choose t0 independent of ~, precisely because in equation 4.4 of [Ch] we have a uniform bound in ~ for Hess(V~(x)). Now, we define λ : (z, η) 7−→ (x(t, z, η; ~), η). It is easy to see that λ(0, 0) = (0, 0). This is because if (z, η) = (0, 0) then H(x, ξ) = H(z, η) = 0. By (2.1) and (1.2), and W (x) = O(|x|3), we can see that H(x, ξ) = 0 39
implies (x(t, 0, 0; ~), ξ(t, 0, 0; ~)) = (0, 0). On the other hand from (4.15) we have 1 2
< |∂z x(t, z, η; ~)| < 32 . Therefore λ is invertible in a neighborhood of origin. We
define the inverse function by
λ−1 (x, η) = (z(t, x, η; ~), η), which is defined in a neighborhood of (x, η) = (0, 0). Then we have (4.16) ϕH (t, x, η, ~) = z(t, x, η; ~).η +
Z
t
0
1 |ξ(s, z(t, x, η; ~), η; ~)|2−V~(x(s, z(t, x, η; ~), η; ~))ds, 2
A similar formula holds for ϕP except in (4.14) H should be replaced by P and in (4.16) V~ by V . It is known that the eikonal equation for ϕP can be solved near suppbP , for small time t ∈ (−t0 , t0 ) (Of course t0 is independent of ~). Now, we want to show that (4.17)
ϕH (t, x, η, ~) = ϕP (t, x, η)
in
(−t0 , t0 ) × {x2 + η 2 < C~1−τ δ}. 1
Let (x, η) be in {x2 + η 2 < C~1−τ δ}. First, we show that |z(t, x, η; ~)| < 8C 2 ~
1−τ 2
1
δ2.
Because z(t, 0, 0; ~) = 0, by Fundamental Theorem of Calculus we have |z(t, x, η; ~)| ≤ |x| + |η| sup{(|∂x | + |∂η |)(z(t, x, η; ~))}.
From x(t, z(t, x, η; ~), η; ~) = x, we get
∂η z = −(∂z x)−1 ∂η x. 1
Thus by (4.15), |∂x z|+|∂η z| ≤ 4. Hence |z(t, x, η; ~)| < 4(|x|+|η|) < 8C 2 ~
1−τ 2
1
δ 2 . This
implies that for all |t| ≤ t0 , (x(s, z(t, x, η; ~), η; ~), ξ(s, z(t, x, η; ~), η; ~)) will stay in a ball of radius O(~1−τ ) centered at the origin (this can be seen from the conservation of energy i.e. H(x, ξ) = H(z, η)). On the other hand, by definition (2.1), P and H agree in the ball {x2 + η 2 < 41 ~1−2ε } and τ < 2ε. So for all t, s ∈ (−t0 , t0 ) and (x, η) ∈ {x2 + η 2 < C~1−τ δ} we have 40
zP (t, x, η) = z(t, x, η; ~),
(4.18)
xP (s, zP (t, x, η), η) = x(s, z(t, x, η; ~), η; ~),
ξP (s, zP (t, x, η), η) = ξ(s, z(t, x, η; ~), η; ~), where zP (t, x, η), xP (s, zP (t, x, η), η) and ξP (s, zP (t, x, η), η) are corresponded to the Hamilton flow of P . Hence by (4.16) and a similar formula for ϕP , we have (4.17). This also shows that we can solve (4.9) in (−t0 , t0 ) × {x2 + η 2 < C~1−τ δ}. To find bH we assume it is of the form (4.10) and we search for functions bH,j such ˆ iϕH /~bH ) ∼ 0. After some straightforward calculations and that e−iϕH /~( ~i ∂t + H)(e using the eikonal equation for ϕH we obtain the so called transport equations (4.11). Now let us solve the transport equations inductively (see [Ch]). In [Ch] it is shown that the solutions to the transport equation (4.11) are given by
(4.19) 1
bH,0 (t, x, y, η, ~) = J − 2 (t, x, η, ~)bH,0 (0, z(t, x, η; ~), η; ~), y, η, ~) 1 bH,j (t, x, y, η, ~) = J − 2 (t, x, η, ~) bH,j (0, z(t, x, η; ~), η; ~), y, η, ~) Rt 1 − 12 0 J 2 (s, x, η, ~)∆bH,j−1(s, x(s, z(t, x, η; ~), η; ~), y, η, ~)ds .
where
J(t, x, η, ~) = det(∂x z(t, x, η; ~))−1 . Now, we notice by the assumption on ψ~, we have supp(bH,j (0, x, y, η; ~)) ⊂ {x2 + η 2 , x2 + η 2 < 4c−1 ~1−τ δ}. So by our previous discussion on z(t, x, η, ~), we can argue inductively that for all t ∈ (−t0 , t0 ), supp(bH,j ) ⊂ {x2 + η 2 , y 2 + η 2 < C1 ~1−τ δ} 41
for some constant C1 . Since bH,j |t=0 ∈ Sδ00 , we can also see inductively from (4.19) that bH,j ∈ Sδ00 . Finally, Borel’s theorem produces a compactly supported amplitude bH ∈ Sδ00 from the compactly supported functions bH,j ∈ Sδ00 . This finishes the proof of items 1, 2 and 3 of Lemma 4.2.2.
Now we give a proof for item 4 of Lemma 4.2.2.
By choosing C > C1 , equation (4.12) is clearly true from (4.17). Next we prove that equation (4.13) holds. Using (4.3) and (4.6), and because P and H agree in the ball {x2 +η 2 < 14 ~1−2ε }, we observe that the functions aP,j (x, η) and aH,j (x, ξ, ~) agree in this ball. Therefore, because suppψ~(x, η) ⊂ {x2 + η 2 < 4c−1 ~1−τ δ} and ψ~ = 1 in {x2 + η 2 < c~1−τ δ}, by (4.5) and (4.11)
bH,j (0, x, y, η, ~) = bP,j (0, x, y, η)
on {(x, y, η); x2 + η 2 , y 2 + η 2 < c~1−τ δ}.
This proves (4.13) only at t = 0. But by applying (4.18) to (4.19) and a similar formula for bP , we get (4.13). This finishes the proof of Lemma 4.2.2.
To finish the proof of Lemma 2.1.1, we have to show that for t sufficiently small T rUH (t) = T rUP (t) + O(~∞), or equivalently
Z Z
i(ϕH (t,x,η,~)−x.η)/~
e
bH (t, x, x, η, ~)dxdη =
Z Z
ei(ϕP (t,x,η)−x.η)/~bP (t, x, x, η, ~)dxdη+ O(~∞).
By (4.12), the phase function ϕH of the double integral on the left hand side equals ϕP on the support of the amplitude bH , so ϕH is independent of ~ in this domain. Now, if t ∈ (0, t0 ) where t0 is smaller than the smallest non-zero period of the flows
42
of P and H respectively in the energy balls {(x, η)| H(x, η) ≤ δ~1−τ C1 } ⊂ {(x, η)| P (x, η) ≤ δ}, then for every such t, (x, η) = (0, 0) is the only critical point of the phase functions ϕH (t, x, η, ~) − x.η and ϕP (t, x, η) − x.η in these energy balls. Obviously both integrals in the equation above are convergent because their amplitudes are compactly supported. But the question is whether or not we can apply the stationary phase lemma to these integrals around their unique non-degenerate critical points. By Lemma 4.2.2 the phase functions ϕH and ϕP are independent of ~ on the support of their corresponding amplitudes. Hence ϕH , ϕP ∈ S00 on supp bH and supp bP respectively. On the other hand bH (t, x, x, η, ~) ∈ Sδ00 , δ0
δ~1−τ }
ϕ( ˆ
Ej (~) ) = O(~∞ ). ~
Since ϕˆ is in S(R), for every p ≥ 0 there exists a constant Cp such that |ϕ(x)| ˆ ≤ Cp |x|−p . Hence by (5.3)
ϕ(
E (~) −p E 0 (~) − C −p Ej (~) j j ) ≤ Cp ≤ C . p ~ ~ ~
Again using (5.3) and because C = kW~(x)k 45
L∞
Rn ×(0,h
< A~ 23 −3ε < δ ~1−τ we get 4
0)
ϕ(
E 0 (~) −p Ej (~) δ~1−τ − C p Ej0 (~) −p j ) ≤ Cp ( 1−τ ) < 2C , p ~ δ~ − 2C ~ ~
for Ej (~) > δ~1−τ .
Now let m be an arbitrary positive integer. So in order to prove the lemma it is enough to find a uniform bound for
A(~) := ~
−m {~γ ;
P
X
ωk (γk + 21 )> δ~
1−τ −C } ~
|
n X
1 ωk (γk + )|−p . 2 k=1
By applying the geometric-arithmetic mean value inequality we get
X
−p −m
A(~) ≤ n ~
{~γ ;
≤ n−p
n n X
P
ωk (γk + 21 )> δ~
~−m
k=1
1−τ −C } ~
X
|
n Y
1 ωk (γk + )|−p 2 k=1
{γk ∈Z≥0 ; ωk (γk + 12 )> δ~
1−τ −C } n~
Y X o 1 1 |ωk′ (γk′ + )|−p . |ωk (γk + )|−p 2 2 γ k ′ 6=k k′
We claim for p large enough there is a uniform bound for the sum on the right hand P side of the above inequality. It is clear that if p ≥ 2 then the series γk′ |ωk′ (γk′ + 12 )|−p 1−τ
is convergent. Also if for some γk we have ωk (γk + 12 ) > δ~ n~−C , then because 1/τ 3 δ 1/τ 1 C = O(~ 2 −3ε ), for ~ small enough we have ωk (γk + 12 ) > ( 2n ) ~ . Thus X
{γk ∈Z≥0 ; ωk (γk + 12 )> δ~
1−τ −C } n~
X 1 2n 1 m ~−m |ωk (γk + )|−p ≤ ( )m/τ |ωk (γk + )| τ −p . 2 δ 2 γ k
So if we choose p > max { m , 2}, then the sum on the right hand side is convergent τ and therefore we have a uniform bound for the sum on the left hand side and hence for A(~). This finishes the proof of (5.1).
46
Part II
Complex Zeros of 1D Schrödinger Operators
Chapter 6
Introduction

6.1 Motivation and Background
This part of the dissertation is concerned with the eigenvalue problem for a one-dimensional semi-classical Schrödinger operator

(6.1) $\Big(-h^2\dfrac{d^2}{dx^2}+V(x)\Big)\psi(x,h)=E(h)\,\psi(x,h),\qquad \psi(x,h)\in L^2(\mathbb R),\ h\to0^+.$
Using the spectral theory of Schrödinger operators [BS], we know that if $\lim_{x\to+\infty}V(x)=+\infty$ then the spectrum is discrete and can be arranged in an increasing sequence $E_0(h)<E_1(h)<E_2(h)<\cdots\uparrow\infty$. Notice that each eigenvalue has multiplicity one. We let $\{\psi_n(x,h)\}$ be a sequence of eigenfunctions associated to $E_n(h)$. If we assume the potential $V(x)$ is a real polynomial of even degree with positive leading coefficient, then we can arrange the eigenvalues as above and the eigenfunctions $\psi_n(x,h)$ possess analytic continuations $\psi_n(z,h)$ to $\mathbb C$. Our interest is in the distribution of the complex zeros of $\psi_n(z,h)$ as $h\to0^+$ when an energy level $E$ is fixed. The substitutions $\lambda=\frac1h$ and $q(x)=V(x)-E$ change the eigenvalue problem (6.1) into the problem:
(6.2) $y''(x,\lambda)=\lambda^2 q(x)\,y(x,\lambda),\qquad y(x,\lambda)\in L^2(\mathbb R),\ \lambda\to\infty.$
Since limx→+∞ q(x) = +∞, again the spectrum is discrete and can be arranged as (6.3)
λ0 < λ1 < λ2 < · · · < λn · · · ↑ ∞.
We define the discrete measure $Z_{\lambda_n}$ by

(6.4) $Z_{\lambda_n}=\dfrac{1}{\lambda_n}\sum_{\{z\,:\ y(z,\lambda_n)=0\}}\delta_z.$
{z| y(z,λn )=0}
In this thesis we study the limits of weak∗ convergent subsequences of the sequence {Zλn } as n → ∞. We say Zλnk −→ Z,
(in weak∗ sense)
if for every test function $\varphi\in C_c^\infty(\mathbb R^2)$ we have $Z_{\lambda_{n_k}}(\varphi)\to Z(\varphi)$. We will call these weak limits the zero limit measures. But before stating our results let us mention some background and motivation for the problem. From the classical Sturm-Liouville theory we know everything about the real zeros of solutions of (6.2). We know that on a classical interval (i.e. an interval where $q(x)<0$), every real-valued solution $y(x,\lambda)$ of (6.2) (not necessarily an $L^2$-solution) is oscillatory and becomes highly oscillatory as $\lambda\to\infty$. In fact the spacing between consecutive real zeros on a classical interval, measured in the Agmon metric, is
π . λ
On the
other hand there is at most one real zero on each connected forbidden interval where q(x) > 0. This shows that every limit Z in (6.5) has the union of classical intervals in its support. It turns out that other than the harmonic oscillator q(z) = z 2 −a2 where the eigenfunctions do not have any non-real zeros, the complex zeros are more complicated. It is easy to see that when q(z) = z 4 + az 2 + b, the eigenfunctions have infinitely many zeros on the imaginary axis. For q(z) = z 4 +a4 , Titchmarsh in [T] made a conjecture 49
that all the non-real zeros are on the imaginary axis. This conjecture was proved by Hille in [H1]. In general one can only hope to study the asymptotics of large zeros of y(z, λ) rather than finding the exact locations of zeros. The asymptotics of zeros of solutions to (6.2) for a fixed λ and large z, have been extensively studied mainly by E. Hille, R. Nevanlinna, H. Wittich and S. Bank (see [N], [W], [B]). But it seems the semi-classical limit of complex zeros has not been studied in the literature, at least not from the perspective that was mentioned in Theorem 6.2.1, which is closely related to the quantum limits of eigenfunctions. This problem was raised around fifteen years ago when physicists were trying to find a connection between eigenfunctions of quantum systems and the dynamics of the classical system. It was noticed that for the ergodic case the complex zeros tend to distribute uniformly in the phase space but for the integrable systems the zeros tend to concentrate on one-dimensional lines. An article which made this point and contains very interesting graphics is [LV]. The problem of complex zeros of complexified eigenfunctions and relations to quantum limits is suggested by S. Zelditch mainly in [Z5]. There, the author proves that if a sequence {ϕλn } of eigenfunctions of the Laplace-Beltrami operator on a real analytic manifold M is quantum ergodic then the sequence {Zλn } of zero distributions associated to the complexified eigenfunctions {ϕCλn } on M C , the complexification of M, is weakly convergent to an explicitly calculable measure. A natural problem is to generalize the results in [Z5] for Schr¨odinger eigenfunctions on real analytic manifolds. This is indeed a difficult problem. Perhaps the first step to study such a problem is to consider the one dimensional case which we do in this paper. The main reason to study the complex zeros rather than the real zeros is that the problem is much easier in this case (in higher dimensions). For example in studying the zeros of polynomials as a model for eigenfunctions, the Fundamental Theorem of Algebra and Hilbert’s Nullstellensatz are two good examples of how the complex zeros are easier and some-
50
how richer. See [Z6] for some background of the problem and some motivation in higher dimensions.
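As a quick numerical illustration of the π/λ spacing claim (this sketch is not part of the thesis; it assumes only the standard Hermite-function form of the harmonic-oscillator eigenfunctions), one can check for q(x) = x² − a² that the gaps between consecutive real zeros, measured in the Agmon metric and multiplied by λ, are close to π:

import numpy as np
from scipy.special import roots_hermite
from scipy.integrate import quad

# For q(x) = x^2 - a^2 the equation y'' = lambda^2 (x^2 - a^2) y has the L^2 solution
# y(x) = H_n(sqrt(lambda) x) exp(-lambda x^2 / 2) exactly when lambda * a^2 = 2n + 1,
# so its real zeros are the zeros of the Hermite polynomial H_n rescaled by 1/sqrt(lambda).
n, a = 30, 1.0
lam = (2 * n + 1) / a**2
zeros = np.sort(roots_hermite(n)[0]) / np.sqrt(lam)

# lambda times the Agmon-metric length of each gap between consecutive zeros in (-a, a):
gaps = [lam * quad(lambda t: np.sqrt(abs(t**2 - a**2)), s, t)[0]
        for s, t in zip(zeros[:-1], zeros[1:])]
print(np.round(gaps, 3))   # entries are close to pi, except near the turning points +-a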
6.2 Statement of results
Throughout this part of the thesis we assume that q(z) has simple zeros. We may be able to extend the results to the case of multiple turning points using the methods in [F1, O2] on the asymptotic expansions around multiple turning points. Notice that q(x) has to change its sign on the real axis, because if it is positive everywhere then (6.2) does not have any solution in L²(R). Hence q(z) has at least two simple real zeros. We say q(x) is a one-well potential if it has exactly two (simple) real zeros and a double-well potential if it has exactly four (simple) real zeros. One of our results is

Theorem 6.2.1. Let q(z) be a real polynomial of even degree with positive leading coefficient. Then every weak limit Z (zero limit measure) of the sequence {Z_{λn}} is of the form

(6.5)   Z = (1/π) √|q(z)| |dγ|,   |dγ| = |γ′(t)| dt,

where γ is a union of finitely many smooth connected curves γm in the plane. For each γm there exists a constant cm, a canonical domain Dm and a turning point zm on the boundary of Dm such that γm is given by

(6.6)   ℜ(S(zm, z)) = cm,   z ∈ Dm,

where S(zm, z) = ∫_{zm}^{z} √q(t) dt and the integral is taken along any path in Dm joining zm to z.
This theorem shows that if Z in (6.5) is the limit of a subsequence {Z_{λ_{n_k}}}, then the complex zeros of {y(z, λ_{n_k})} tend to concentrate on γ as k → ∞, and in the limit they cover γ. The factor √|q(z)| = √|V(z) − E| indicates that the limit distribution of the zeros on γ is measured by the Agmon metric. We call the curves γm the zero lines of the limit Z. The next question after seeing Theorem 6.2.1 is: what are all the possible zero limit measures and corresponding zero lines for a given polynomial q(z)? We answer this question for some one-well and double-well potentials. One of our results is that for a symmetric quartic oscillator the full sequence {Z_{λn}} is convergent, i.e. there is a unique zero limit measure. Here, by q(z) being symmetric we mean that after a translation on the real axis, q(z) is an even function. But for a non-symmetric quartic oscillator there are at least two zero limit measures. The Stokes lines play an important role in the description of the zero lines. In fact the infinite zero lines are asymptotic to Stokes lines. This fact was observed in [B]. Our proofs are elementary. We use the complex WKB method, connection formulas and the asymptotics of the eigenvalues given by Fedoryuk in [F1]. At the end of this part (§8.4) we briefly mention some interesting examples of one-well and double-well potentials with deg(q(z)) = 4, 6.

Theorem 6.2.2. Let q(z) = (z² − a²)(z² − b²), where 0 < a < b. Then as n → ∞

Z_{λn} −→ (1/π) √|q(z)| |dγ|,   γ = (a, b) ∪ (−b, −a) ∪ (−∞i, +∞i).
Notice that in Theorem 6.2.2 we can express γ by three equations,

γ = {ℜS(a, z) = 0} ∪ {ℜS(−a, z) = 0} ∪ {ℜS(a, z) = −ξ/2},

where ξ = ∫_{−a}^{a} √q(t) dt, and each equation is written in some canonical domain.
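This description of γ can be checked numerically. The following sketch (an illustration only, not part of the proof; the values a = 1, b = 2 are arbitrary) verifies that, with the branch of √q that is positive on (0, a), the imaginary axis lies on the level set ℜS(a, z) = −ξ/2 for the symmetric double well q(z) = (z² − a²)(z² − b²):

import numpy as np
from scipy.integrate import quad

a, b = 1.0, 2.0
q = lambda t: (t**2 - a**2) * (t**2 - b**2)

# xi = integral of sqrt(q) over (-a, a); note q > 0 there.
xi = quad(lambda t: np.sqrt(abs(q(t))), -a, a)[0]

# Compute Re S(a, iy) along the path a -> 0 -> iy. On the segment [0, a] the chosen
# branch of sqrt(q) is real and positive; on the imaginary axis q(iy) > 0 as well, so
# that leg contributes a purely imaginary amount. Hence Re S(a, iy) = -int_0^a sqrt(q) dt.
re_S_on_imaginary_axis = -quad(lambda t: np.sqrt(abs(q(t))), 0, a)[0]
print(re_S_on_imaginary_axis, -xi / 2)   # the two numbers agree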
Theorem 6.2.3. Let q(z) = (z² − a²)(z² + b²), where a, b > 0. Then as n → ∞

Z_{λn} −→ (1/π) √|q(z)| |dγ|,   γ = (−a, a) ∪ (bi, +∞i) ∪ (−∞i, −bi).
We also note that in Theorem 6.2.3 we can express γ by three equations,

γ = {ℜS(a, z) = 0} ∪ {ℜS(bi, z) = 0} ∪ {ℜS(−bi, z) = 0},

where each equation is written in some canonical domain. Theorems 6.2.2 and 6.2.3 state that for a symmetric quartic polynomial there is a unique zero limit measure. This is not always the case when q(z) is not symmetric. Let q(z) = (z − a0)(z − a1)(z − a2)(z − a3), where a0 < a1 < a2 < a3. Using the quantization formulas (see for example [S, F1]), we have two sequences of eigenvalues

(6.7)   λ_n^{(1)} = ((2n + 1)/(2α1)) π + O(1/n),   α1 = ∫_{a0}^{a1} √|q(t)| dt,

(6.8)   λ_n^{(2)} = ((2n + 1)/(2α2)) π + O(1/n),   α2 = ∫_{a2}^{a3} √|q(t)| dt.
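For concreteness, the leading terms of (6.7) and (6.8) are easy to evaluate numerically; the sketch below (not from the thesis; the turning points are made-up values chosen only for illustration) computes α1, α2 by quadrature and lists the first few approximate eigenvalues of the two families:

import numpy as np
from scipy.integrate import quad

# A hypothetical non-symmetric quartic q(z) = (z - a0)(z - a1)(z - a2)(z - a3).
a0, a1, a2, a3 = -2.0, -1.0, 0.5, 1.5
q = lambda x: (x - a0) * (x - a1) * (x - a2) * (x - a3)

# alpha_1, alpha_2 as in (6.7)-(6.8): integrals of sqrt(|q|) over the two wells.
alpha1 = quad(lambda x: np.sqrt(abs(q(x))), a0, a1)[0]
alpha2 = quad(lambda x: np.sqrt(abs(q(x))), a2, a3)[0]

# Leading-order eigenvalues of y'' = lambda^2 q y (the O(1/n) corrections are dropped).
n = np.arange(5)
print((2 * n + 1) * np.pi / (2 * alpha1))   # family (6.7)
print((2 * n + 1) * np.pi / (2 * alpha2))   # family (6.8)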
Now with this notation we have the following theorem:

Theorem 6.2.4. Let q(z) = (z − a0)(z − a1)(z − a2)(z − a3), where a0 < a1 < a2 < a3 are real numbers. Then

1. if α1/α2 is irrational, then for each ℓ ∈ {1, 2} there is a full density subsequence {λ_{n_k}^{(ℓ)}} of {λ_n^{(ℓ)}} such that

Z_{λ_{n_k}^{(ℓ)}} −→ (1/π) √|q(z)| |dγℓ|,

where

(6.9)   γ1 = (a0, a1) ∪ (a2, a3) ∪ {ℜS(a2, z) = 0},
        γ2 = (a0, a1) ∪ (a2, a3) ∪ {ℜS(a1, z) = 0};

2. if α1/α2 is rational and of the form 2r1/(2r2 + 1) or (2r1 + 1)/(2r2), then for each ℓ ∈ {1, 2}

Z_{λ_n^{(ℓ)}} −→ (1/π) √|q(z)| |dγℓ|;

3. if α1/α2 is rational and of the form (2r1 + 1)/(2r2 + 1), where gcd(2r1 + 1, 2r2 + 1) = 1, then for each ℓ ∈ {1, 2} there exists a subsequence {λ_{n_k}^{(ℓ)}} of {λ_n^{(ℓ)}} of density 2rℓ/(2rℓ + 1) such that

Z_{λ_{n_k}^{(ℓ)}} −→ (1/π) √|q(z)| |dγℓ|.

In fact {λ_{n_k}^{(ℓ)}} = {λ_n^{(ℓ)} | 2n + 1 ≢ 0 (mod 2rℓ + 1)}.

Figure 6.1: The zero lines of the non-symmetric quartic oscillator. The thick lines are the zero lines γ1 and γ2 given by (6.9).
Figure 6.1 shows the zero lines γ1 and γ2 defined in (6.9). As we see, in this case the zero lines are made of Stokes lines. We should mention that in Theorem 6.2.4, when α1/α2 is irrational or of the form (2r1 + 1)/(2r2 + 1) ≠ 1, we do not know what happens to the rest of the subsequences. There might be some exceptional subsequences (of positive density in the case α1/α2 = (2r1 + 1)/(2r2 + 1)) for which the zero lines are different from γℓ. As we saw in Theorem 6.2.2, this is the case for the symmetric double-well potential, when α1/α2 = 1. We probably need a more detailed analysis of the eigenvalues in order to answer this question.
6.3 Results of Eremenko, Gabrielov and Shapiro
Here we mention some recent results of A. Eremenko, A. Gabrielov and B. Shapiro in [EGS1] and [EGS2], whose interests and approach are very similar to ours, and compare them to our results.

1. Theorems 6.2.2 and 6.2.3 do not say anything about the exact location of the zeros; they only state that as n → ∞ the zeros approach γ with the distribution law in (6.5). It is easy to see that for both of these symmetric cases, for each n, all the zeros of y(z, λn) except finitely many of them lie on γ. In [EGS1], the authors prove that for the solutions of the equation

(6.10)   −y'' + P(x)y = λy,   y ∈ L²(R),

where P(x) is an even real monic polynomial of degree 4, all the zeros of y(z) belong to the union of the real and imaginary axes. This result indeed implies that for all n, all the zeros of y(z, λn) in Theorem 6.2.2 and Theorem 6.2.3 are on the corresponding γ.
2. In [EGS2], the authors show that the complex zeros of the scaled eigenfunctions Yn(z) = y(λ_n^{1/d} z) of (6.10), where d = deg(P(x)), have a unique limit distribution in the complex plane as λn → ∞. The scaled eigenfunctions satisfy an equation of the form

Yn''(z) = k_n² (z^d − 1 + o(1)) Yn(z),   kn → ∞.

The main reason that they could establish a uniqueness result for the limit distribution of the complex zeros of Yn(z) is the special structure of the Stokes graph of the polynomial z^d − 1, which is proved in Theorem 1 of [EGS2].
Chapter 7
Background on Complex WKB Method

In this chapter we review some basic definitions and facts about the complex WKB method. We follow [F1]. See [O1, S, EF] for more references on this subject. We consider the equation

(7.1)   y''(z, λ) = λ² q(z) y(z, λ),   λ → ∞,

on the complex plane C, where q(z) is a polynomial with simple zeros.
7.1 Stokes lines and Stokes graphs
A zero z0 of q(z) is called a turning point. We let S(z0, z) = ∫_{z0}^{z} √q(t) dt. This function is, in general, multi-valued. The maximal connected components of the level curve ℜ(S(z0, z)) = 0 with initial point z0 and containing no other turning points are called the Stokes lines starting from z0. Stokes lines are independent of the choice of the branches of S(z0, z). The union of the Stokes lines of all the turning points is called the Stokes graph of (7.1).
Figure 7.1 shows the Stokes graphs of several polynomials. Since the turning points are simple, from each turning point three Stokes lines emanate with equal angles. In general, if z0 is a turning point of order n, then n + 2 Stokes lines with equal angles emanate from z0.
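Stokes lines can also be traced numerically: along a Stokes line ℜS(z0, z) is constant, so the tangent direction satisfies ℜ(√q(z) dz) = 0, i.e. dz is proportional to i/√q(z). The following rough sketch (an illustration only; the step size, starting offset, and ad hoc branch tracking are choices made here, not part of the thesis) traces the three Stokes lines emanating from a simple turning point of the symmetric double well q(z) = (z² − 1)(z² − 4):

import numpy as np

def trace_stokes_line(q, z0, theta, steps=2000, h=1e-3, delta=1e-3):
    # Follow a curve with Re(sqrt(q(z)) dz) = 0, starting a small distance delta
    # from the turning point z0 in the direction theta, and continuing the branch
    # of sqrt(q) by picking the sign closest to the previous value at each step.
    # (The sketch makes no attempt to stop at other turning points.)
    z = z0 + delta * np.exp(1j * theta)
    w = np.sqrt(q(z))
    path = [z]
    for _ in range(steps):
        step = 1j / w
        z = z + h * step / abs(step)      # unit-speed step along the Stokes line
        s = np.sqrt(q(z))
        w = s if abs(s - w) < abs(s + w) else -s
        path.append(z)
    return np.array(path)

q = lambda z: (z**2 - 1.0) * (z**2 - 4.0)
qp = lambda z: 2 * z * (z**2 - 4.0) + 2 * z * (z**2 - 1.0)

# Near the simple turning point z0 = 1, S(z0, z) ~ (2/3) sqrt(q'(z0)) (z - z0)^{3/2},
# so the three Stokes directions are theta_k = (2k+1) pi/3 - (2/3) arg(sqrt(q'(z0))).
phi = np.angle(np.sqrt(qp(1.0 + 0j)))
thetas = [(2 * k + 1) * np.pi / 3 - 2 * phi / 3 for k in range(3)]
lines = [trace_stokes_line(q, 1.0 + 0j, th) for th in thetas]
# lines[0] starts along the real segment (1, 2); lines[1] and lines[2] leave into
# the upper and lower half planes, in agreement with the pictures in Figure 7.1.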
7.2 Canonical Domains, Asymptotic Expansions of Eigenfunctions
Since q(z) is a polynomial, the Stokes graph divides the complex plane into two types of domains:

1. Half-plane type: a simply connected domain D bounded by Stokes lines is a half-plane type domain if under the map S = S(z0, z) it is biholomorphic to a half-plane of the form ℜS > a or ℜS < a. Here z0 is a turning point on the boundary of D.

2. Band type: D as above is of band type if under S it is biholomorphic to a band of the form a < ℜS < b.

A domain D in the complex plane is called canonical if S(z0, z) is a one-to-one map of D onto the whole complex plane with finitely many vertical cuts, none of which cross the real axis. A canonical domain is the union of two half-plane type domains and some band type domains. For example, the union of two half-plane type domains sharing a Stokes line is a canonical domain.

Let ε > 0 be arbitrary. We write D^ε for the pre-image of S(D) with the ε-neighborhoods of the cuts and the ε-neighborhoods of the turning points removed. A canonical path in D is a path along which ℜ(S) is monotone. For example, the anti-Stokes lines (lines where ℑ(S) = 0) are canonical paths.
Figure 7.1: Stokes lines for some polynomials (the panels show q(z) = z, q(z) = z² − a², q(z) = (z² − a²)(z² + b²), q(z) = (z² − a²)(z² − b²), q(z) = (z² − a²)(z² + bz + c), and q(z) = (z − a0)(z − a1)(z − a2)(z − a3)).
For every point z in D^ε there are always canonical paths γ⁻(z) and γ⁺(z) from z to ∞ such that ℜS ↓ −∞ and ℜS ↑ +∞, respectively. Now we have the following fact: with D, γ⁺, γ⁻ as above, to within a constant multiple, equation (7.1) has a unique solution y1(z, λ) such that

(7.2)   lim_{z→∞, z∈γ⁻} y1(z, λ) = 0,   ℜS(z0, z) ↓ −∞,

and a unique solution y2(z, λ) (up to a constant multiple) such that

(7.3)   lim_{z→∞, z∈γ⁺} y2(z, λ) = 0,   ℜS(z0, z) ↑ +∞.

The solutions y1 and y2 in (7.2) and (7.3) have uniform asymptotic expansions in D^ε in powers of 1/λ. Here we only state the principal terms:
(7.4)   y1(z, λ) = q^{−1/4}(z) e^{λS(z0,z)} (1 + ε1(z, λ)),   λ → ∞,

(7.5)   y2(z, λ) = q^{−1/4}(z) e^{−λS(z0,z)} (1 + ε2(z, λ)),   λ → ∞,

where

(7.6)   ε1(z, λ) = O(1/λ),   uniformly in D^ε,   λ → ∞,

(7.7)   ε2(z, λ) = O(1/λ),   uniformly in D^ε,   λ → ∞.
Notice that the equalities (7.6) and (7.7) would not necessarily hold uniformly in D^ε if q(z) were not a polynomial.
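The principal terms (7.4)–(7.5) can be compared with an exact solution in the simplest case q(z) = z, where the decaying solution of (7.1) is an Airy function. The sketch below (an illustration only; the overall normalisation constant is not fixed by (7.5)) checks that Ai(λ^{2/3} z) divided by the principal term q^{−1/4}(z) e^{−λS(0,z)} is essentially constant in z:

import numpy as np
from scipy.special import airy

# For q(z) = z, y(z) = Ai(lambda^{2/3} z) solves y'' = lambda^2 z y, and
# S(0, z) = (2/3) z^{3/2} for z > 0.
lam = 40.0
z = np.linspace(0.5, 3.0, 6)
exact = airy(lam ** (2.0 / 3.0) * z)[0]                              # airy() returns (Ai, Ai', Bi, Bi')
principal = z ** (-0.25) * np.exp(-lam * (2.0 / 3.0) * z ** 1.5)     # leading term of (7.5)
print(exact / principal)                                             # roughly constant in z
print(1 / (2 * np.sqrt(np.pi) * lam ** (1.0 / 6.0)))                 # constant predicted by Airy asymptotics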
7.3 Elementary Basis
Let D be a canonical domain, l a Stokes line in D, and z0 ∈ l a turning point. We use the triple (D, l, z0) to denote this data. We select the branch of S(z0, z) in D such that ℑS(z0, z) > 0 for z ∈ l. The elementary basis {u(z), v(z)} associated to (D, l, z0) is uniquely defined by

(7.8)   u(z, λ) = c y1(z, λ)  and  v(z, λ) = c y2(z, λ),   |c| = 1,   arg(c) = lim_{z→z0, z∈l} arg(q^{1/4}(z)),

where y1(z, λ), y2(z, λ) are given by (7.4) and (7.5).
7.4 Transition Matrices
Assume (D, l, z0)_j and (D, l, z0)_k are two triples and β_j = {u_j, v_j} and β_k = {u_k, v_k} are their corresponding elementary bases. The matrix Ω_{jk}(λ) which changes the basis β_j to β_k is called the transition matrix from β_j to β_k. Fedoryuk, in [EF], introduced three types of transition matrices that he called elementary transition matrices, and he proved that any transition matrix is a product of finitely many of these elementary matrices. The three types are:

1) (D, l, z1) ↦ (D, l, z2). This is the transition from one turning point to another along a finite Stokes line, remaining in the same canonical domain D. The transition matrix is given by

(7.9)   Ω(λ) = e^{iϕ} \begin{pmatrix} e^{−iλα} & 0 \\ 0 & e^{iλα} \end{pmatrix},   α = |S(z1, z2)|,   e^{iϕ} = c2/c1.

2) (D, l1, z1) ↦ (D, l2, z2), where the rays S(l1) and S(l2) are directed to one side. This is the transition from one turning point to another along an anti-Stokes line, remaining in the same domain D. The transition matrix is

(7.10)   Ω(λ) = e^{iϕ} \begin{pmatrix} e^{−λa} & 0 \\ 0 & e^{λa} \end{pmatrix},   a = |S(z1, z2)|,   e^{iϕ} = c2/c1.

3) (D1, l1, z0) ↦ (D2, l2, z0). This is a simple rotation around a turning point z0, so that D1 and D2 have a common sub-domain. More precisely, let {l_j ; j = 1, 2, 3} be the Stokes lines starting at z0, ordered counter-clockwise so that l_{j+1} is located on the left side of l_j. We choose the canonical domain D_j so that the part of D_j on the left of l_j equals the part of D_{j+1} on the right of l_{j+1}. Then

(7.11)   Ω_{j,j+1}(λ) = e^{−iπ/6} \begin{pmatrix} 0 & α_{j,j+1}^{−1}(λ) \\ 1 & i α_{j+1,j+2}(λ) \end{pmatrix},   α_{j,j+1}(λ) = 1 + O(1/λ),   1 ≤ j ≤ 3,

with α_{1,2}(λ) α_{2,3}(λ) α_{3,1}(λ) = 1 and α_{j,j+1}(λ) α_{j+1,j}(λ) = 1.
7.4.1 Polynomials with real coefficients
We finish this section with a review of some properties of the Stokes lines and of the transition matrices in (7.11) when the polynomial q(z) has real coefficients.

1) The turning points and Stokes lines are symmetric about the real axis. If x1 < x2 are two real turning points and q(x) < 0 on the line segment l = [x1, x2], then l is a Stokes line (see Figure 7.1). Similarly, if q(x) > 0 on l, then l is an anti-Stokes line. Let x0 be a simple turning point on the real axis, and let l0, l1, l2 be the Stokes lines starting at x0. Then one of the Stokes lines, say l0, is an interval of the real axis, and l2 = \overline{l1}. The Stokes lines l1 and l2 do not intersect the real axis other than at the point x0. If a Stokes line l intersects the real axis at a non-turning point, then l is a finite Stokes line and it is symmetric about the real axis.

If lim_{x→∞} q(x) = ∞, and x+ is the largest zero of q(x), and l0, l1, l2 are the
corresponding Stokes lines, then there is a half-plane type domain D⁺ such that

[x+, +∞) ⊂ D⁺,   \overline{D⁺} = D⁺,   l1 ∪ l2 ⊂ ∂D⁺.

Clearly [x+, +∞) is an anti-Stokes line and S(x+, ∞) = ∞. By (7.2), there exists a unique solution y⁺(z, λ) such that

lim_{x→+∞} y⁺(x, λ) = 0.

Similarly, by (7.3), if lim_{x→−∞} q(x) = ∞, x− is the smallest root of q(x), and D⁻ is a half-plane type domain containing (−∞, x−], then there exists a unique solution y⁻(z, λ) such that

lim_{x→−∞} y⁻(x, λ) = 0.

Therefore if y(x, λ) is an L²-solution of (6.2), then for some constants c⁺, c⁻,

y(x, λ) = c⁺ y⁺(x, λ) = c⁻ y⁻(x, λ).

Now let Ω_{+,−}(λ) be the transition matrix connecting D⁺ to D⁻ and let

\begin{pmatrix} a(λ) \\ b(λ) \end{pmatrix} = Ω_{+,−}(λ) \begin{pmatrix} 0 \\ 1 \end{pmatrix}.

The fact that y⁺(x, λ) is a constant multiple of y⁻(x, λ) is equivalent to

(7.12)   b(λ) = 0,

which is the equation that determines the eigenvalues λn. To calculate Ω_{+,−}(λ), and hence b(λ), we have to write this matrix as a product of finitely many elementary transition matrices connecting D⁺ to D⁻.

2) When the polynomial q(z) has real coefficients, the transition matrices in (7.11) have some symmetries. Let x0 be a simple turning point with q(x) > 0 on the interval [x0, b].
Figure 7.2: The Stokes lines l0, l1, l2 at the turning point x0.
We index the Stokes lines l0, l1, l2 as in Figure 7.2. We define the canonical domains D0, D1, D2 by their internal Stokes lines and their boundary Stokes lines as follows:

D0 = \overline{D0},   l0 ⊂ D0,   l1 ∪ l2 ⊂ ∂D0,
[x0, b] ⊂ D1,   l0 ∪ l2 ⊂ ∂D1,
D2 = \overline{D1}.

Now, with the same notation as in (7.11), we have

(7.13)   α_{0,1} = \overline{α_{0,2}},   |α_{1,2}| = 1.
Chapter 8
Proofs of Results

8.1 The complex zeros in a canonical domain D
The following lemma determines how the complex zeros are distributed in a canonical domain.

Lemma 8.1.1. Let T = (D, l, z0) be a triple as in §7.3 and let {u(z, λ), v(z, λ)} be the elementary basis associated to T in (7.8). We write y(z, λn) in this basis as

(8.1)   y(z, λn) = a(λn) u(z, λn) + b(λn) v(z, λn).

If {λ_{n_k}} is a subsequence of {λn} such that the limit

(8.2)   t = lim_{k→∞} (1/(2λ_{n_k})) log |b(λ_{n_k})/a(λ_{n_k})|

exists, then in D^ε we have

Z_{λ_{n_k}} −→ (1/π) √|q(z)| |dγ|,   γ = {z ∈ D | ℜS(z0, z) = t}.

The last expression means that for every ϕ ∈ C_c^∞(D^ε) we have

Z_{λ_{n_k}}(ϕ) → (1/π) ∫_γ ϕ(z) √|q(z)| |dγ|.
Proof of Lemma. For simplicity we omit the subscript n_k in λ_{n_k}, but we remember that the limit in (8.2) is taken along λ_{n_k}. Using (8.1), (7.8), (7.4), and (7.5), the equation y(z, λ) = 0 in D^ε is equivalent to

(8.3)   S(z0, z) − (1/(2λ)) log((1 + ε1(z, λ))/(1 + ε2(z, λ))) = (1/(2λ)) log |b(λ)/a(λ)| + i( ((2k + 1)/(2λ)) π + (1/(2λ)) arg(b(λ)/a(λ)) ),   k ∈ Z,

where we have chosen log z = log r + iθ, −π < θ < π. We write S̃(z) for the function on the left hand side of (8.3) and a_k for the sequence of complex numbers on the right hand side. As we see, S̃(z) is the sum of the biholomorphic function S(z0, z) and the function

(8.4)   µ(z, λ) := −(1/(2λ)) log((1 + ε1(z, λ))/(1 + ε2(z, λ))) = O(1/λ²),   uniformly in D^ε,

by (7.6), (7.7).

Now suppose ϕ ∈ C_c^∞(D^ε) and K = supp(ϕ). We also define K′ = S(K), where S(z) = S(z0, z). Without loss of generality we can assume that {x = t} ∩ int(K′) is a connected subset of the vertical line x = t, because we can follow the same argument for each connected component. Now let s = length({x = t} ∩ int(K′)). It is clear that, because of (8.2),

(8.5)   N := #{a_k ∈ int(K′)} ∼ sλ.

We call this finite set {a_k}_{m+1 ≤ k ≤ m+N}. Now let K ⊂ V ⊂ D^ε be an open set with compact closure in D^ε. We choose λ large enough such that

|µ(z, λ)| < |S(z0, z) − a|,   ∀ a ∈ K′,  ∀ z ∈ ∂V.

Since S is a biholomorphic map, by Rouché's theorem the equation

S̃(z) = a_k,   m + 1 ≤ k ≤ m + N,

has a unique solution z_k in V for each k. Now by (8.2), (8.3), and (8.4), we have

z_k = S^{−1}( (1/(2λ)) log |b(λ)/a(λ)| − µ(z_k, λ) + i ((2k + 1)/(2λ)) π + O(1/λ) ).

It follows that

z_k = S^{−1}( t + o(1) + i( ((2k + 1)/(2λ)) π − ℑµ(z_k, λ) + O(1/λ) ) ).

Hence

Z_λ(ϕ) = (1/λ) ∑_{k=m+1}^{m+N} ϕ(z_k) = (1/λ) ∑_{k=m+1}^{m+N} (ϕ ∘ S^{−1})( t + o(1) + i( ((2k + 1)/(2λ)) π − ℑµ(z_k, λ) + O(1/λ) ) ).

Using the mean value theorem on the x-axis and (8.5), we obtain

(8.6)   lim_{λ→∞} Z_λ(ϕ) = lim_{λ→∞} (1/λ) ∑_{k=m+1}^{m+N} [ (ϕ ∘ S^{−1})( t + i( ((2k + 1)/(2λ)) π − ℑµ(z_k, λ) + O(1/λ) ) ) + o(1) ].

Because of (8.4), we know that ℑ(µ(z_k, λ)) = O(1/λ²) uniformly in k. Also the term O(1/λ) is independent of k. Therefore the set

℘ = {( t, ((2k + 1)/(2λ)) π − ℑ(µ(z_k, λ)) + O(1/λ) ) | m + 1 ≤ k ≤ m + N}

is a partition of the vertical interval {x = t} ∩ int(K′) with mesh(℘) → 0 as λ → ∞. This together with (8.6) implies that

lim_{λ→∞} Z_λ(ϕ) = (1/π) ∫_{x=t} ϕ ∘ S^{−1} dy.

Now, if in the last integral we apply the change of variable z ↦ S(z), then by the Cauchy–Riemann equations for S we obtain

(1/π) ∫_{x=t} ϕ ∘ S^{−1} dy = (1/π) ∫_{ℜS(z)=t} ϕ(z) √|q(z)| |dγ|.

This proves the Lemma.
8.2 Zeros in the complex plane and the proof of Theorem 6.2.1
First of all, we cover the plane by finitely many canonical domains Dm. Let ε > 0 be sufficiently small as before. Assume {Z_{λ_{n_k}}} is a weak∗ convergent subsequence converging to a measure Z. Clearly {Z_{λ_{n_k}}} converges to Z in each D_m^ε. We claim that the limit (8.2) exists for every triple T_m = (D_m, z_m, l_m). This is clear from Lemma 8.1.1: if in (8.2) we got two distinct limits t1 and t2 for two subsequences of {Z_{λ_{n_k}}}, then we would get two corresponding distinct limits Z1 and Z2, which contradicts our assumption about {Z_{λ_{n_k}}}. We should also notice that if in (8.2) t = +∞, then in the proof of Lemma 8.1.1, for λ large enough, we have

{x = (1/(2λ)) log |b(λ)/a(λ)|} ∩ int(K′) = ∅,

and therefore Z|_{D_m^ε} = 0. This means that we do not obtain any zero lines in this canonical domain; in other words the zeros run away from this canonical domain as λ → ∞. But as we mentioned in the introduction, all the Stokes lines on the real axis are contained in the set of zero lines of every limit Z, meaning that in Theorem 6.2.1 γ is never empty.

Now notice that, because ∪_m D_m^ε covers the plane except for the ε-neighborhoods of the turning points, we have proved that

Z(ϕ) = (1/π) ∫_γ ϕ(z) √|q(z)| |dγ|,   ϕ ∈ C_c^∞( C \ ∪_m B(z_m, ε) ).

8.2.1 Complex zeros near the turning points
To finish the proof we have to show that if ϕ_ε ∈ C_c^∞( ∪_m B(z_m, ε) ) is a bounded function of ε, then

lim_{ε→0} limsup_{λ_{n_k}→∞} Z_{λ_{n_k}}(ϕ_ε) = 0.

This is clearly equivalent to showing that if z0 is a turning point, then

(8.7)   lim_{ε→0} limsup_{λ→∞} #{z ∈ B(z0, ε) | y(z, λ) = 0} / λ = 0.

To prove this we use the following fact from [F1], pages 104–105, or [EF], pages 39–41, which enables us to improve the domain D^ε in (7.6) and (7.7) from a fixed ε to an ε(λ)
depending on λ, with ε(λ) → 0 as λ → ∞. Let D be a canonical domain with turning points z_m on its boundary. Assume N(λ) is a positive function such that N(∞) = ∞. If we denote

D(λ) = D \ ∪_m B(z_m, |q′(z_m)|^{−1/3} N(λ) λ^{−2/3}),

then in place of equations (7.6) and (7.7) we have

ε1(z, λ), ε2(z, λ) = O(N(λ)^{−3/2}),   uniformly in D(λ),   λ → ∞.

In fact this implies that Lemma 8.1.1 is true for every ϕ supported in D. This is because we can follow the proof of the lemma line by line, except that in (8.4) we get µ(z, λ) = O(N(λ)^{−3/2} λ^{−1}) uniformly in D(λ); therefore, using N(∞) = ∞, we can still conclude that mesh(℘) → 0 as λ → ∞.

We choose N(λ) = λ^{1/12}. By the discussion in the last paragraph, in (8.7) we can replace ε by ε(λ) = c N(λ) λ^{−2/3} = c λ^{−7/12}, where c = |q′(z0)|^{−1/3}. Let us find a bound for the number of zeros of y(z, λ) in B(z0, ε(λ)). Let M = sup_{B(z0, δ)} |q(z)|, where δ > 0 is fixed and chosen so that the ball B(z0, δ) does not contain any other turning points. We also choose λ large enough so that ε(λ) < δ. If ζ is a zero of y(z, λ) in the ball B(z0, ε(λ)), then by Corollary 11.1.1, page 579, of [H1] there are no zeros of y(z, λ) other than ζ in the ball of radius π λ^{−1}/√M around ζ. Therefore

#{z ∈ B(z0, ε(λ)) | y(z, λ) = 0} ≤ area(B(z0, ε(λ) + (π/(2√M)) λ^{−1})) / area(B(ζ, (π/(2√M)) λ^{−1})) = O(λ^{5/6}),

and so

lim_{λ→∞} #{z ∈ B(z0, ε(λ)) | y(z, λ) = 0} / λ = 0.

This finishes the proof of Theorem 6.2.1.
Figure 8.1: The Stokes lines at the turning points x_l, x_m, x_n, x_p.
8.3 Zeros for symmetric and non-symmetric double well potentials
In this section we give the proofs of Theorems 6.2.2 and 6.2.4. We will not prove Theorem 6.2.3, because the proof is similar to (in fact easier than) the proof for the double-well potential. To simplify our notation, let us rename the turning points as x_l = a0, x_m = a1, x_n = a2, x_p = a3. Then we can index the Stokes lines as in Figure 8.1. We define the canonical domains D_l, D_{m0}, D, D_{n0}, and D_p by

(8.8)
l1 ⊂ D_l,   m1, m0, l2 ⊂ ∂D_l,
m0 ⊂ D_{m0},   l1, l2, m1, m2 ⊂ ∂D_{m0},
m1, n1 ⊂ D,   l1, l0, m2, n2, p0, p1 ⊂ ∂D,
n0 ⊂ D_{n0},   p1, p2, n1, n2 ⊂ ∂D_{n0},
p1 ⊂ D_p,   n1, n0, p2 ⊂ ∂D_p.

Notice that the complex conjugates of these canonical domains are also canonical domains, and in fact if we include these complex conjugates then we obtain a covering
of the plane by canonical domains. But because q(x) is real, the zeros are symmetric with respect to the x-axis, and it is therefore enough to find the zeros in D_l ∪ D_{m0} ∪ D ∪ D_{n0} ∪ D_p. By Lemma 8.1.1 we only need to discuss the limit (8.2) in each of these canonical domains.

First of all, let us compute the equation (7.12) of the eigenvalues. Here the transition matrix Ω_{+,−} is the product of the seven elementary matrices associated to the following sequence of triples:

(D_p, p1, x_p) ↦ (D_{n0}, n0, x_p) ↦ (D_{n0}, n0, x_n) ↦ (D, n1, x_n) ↦ (D, m1, x_m) ↦ (D_{m0}, m0, x_m) ↦ (D_{m0}, m0, x_l) ↦ (D_l, l1, x_l).

In fact, if we define

α1 = ∫_{x_l}^{x_m} √|q(t)| dt,   α2 = ∫_{x_n}^{x_p} √|q(t)| dt,   ξ = ∫_{x_m}^{x_n} √q(t) dt,

then by (7.9), (7.10) and (7.11) we have

\begin{pmatrix} a(λ) \\ b(λ) \end{pmatrix} = Ω_{+,−} \begin{pmatrix} 0 \\ 1 \end{pmatrix}
= \begin{pmatrix} 0 & α_{l0 l1}^{−1} \\ 1 & i α_{l1 l2} \end{pmatrix}
\begin{pmatrix} e^{−iλα1} & 0 \\ 0 & e^{iλα1} \end{pmatrix}
\begin{pmatrix} 0 & α_{m1 m0}^{−1} \\ 1 & i α_{m0 m2} \end{pmatrix}
\begin{pmatrix} e^{−λξ} & 0 \\ 0 & e^{λξ} \end{pmatrix}
\begin{pmatrix} 0 & α_{n0 n1}^{−1} \\ 1 & i α_{n1 n2} \end{pmatrix}
\begin{pmatrix} e^{−iλα2} & 0 \\ 0 & e^{iλα2} \end{pmatrix}
\begin{pmatrix} 0 & α_{p1 p0}^{−1} \\ 1 & i α_{p0 p2} \end{pmatrix}
\begin{pmatrix} 0 \\ 1 \end{pmatrix}.

A simple calculation shows that

b(λ) = α_{p1 p0}^{−1} α_{n0 n1}^{−1} e^{iλ(α2 − α1)} e^{−λξ} − ( α_{m0 m2} e^{−iλα1} + α_{l1 l2} α_{m1 m0}^{−1} e^{iλα1} )( α_{p0 p2} e^{−iλα2} + α_{n1 n2} α_{p1 p0}^{−1} e^{iλα2} ) e^{λξ}.
Hence b(λ) = 0 implies that

(8.9)   Γ1(λ) Γ2(λ) = α_{p1 p0}^{−1} α_{n0 n1}^{−1} e^{iλ(α2 − α1)} e^{−2λξ},

where

(8.10)   Γ1(λ) = α_{m0 m2} e^{−iλα1} + α_{l1 l2} α_{m1 m0}^{−1} e^{iλα1} = 2 cos(α1 λ) + O(1/λ),
         Γ2(λ) = α_{p0 p2} e^{−iλα2} + α_{n1 n2} α_{p1 p0}^{−1} e^{iλα2} = 2 cos(α2 λ) + O(1/λ).
Now let us discuss the limit in (8.2) for each of the canonical domains defined in (8.8). Even though the coefficients a(λ), b(λ) are different for different canonical domains, we do not indicate this in our notation. By (7.4), (7.5), (7.6), and (7.7), it is clear that for λ large enough there are no zeros in D_l^ε and D_p^ε. For (D_{n0}, n0, x_p) we have

\begin{pmatrix} a(λ) \\ b(λ) \end{pmatrix} = \begin{pmatrix} 0 & α_{p1 p0}^{−1} \\ 1 & i α_{p0 p2} \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} α_{p1 p0}^{−1} \\ i α_{p0 p2} \end{pmatrix}.

Using (7.11), (7.13), for the full sequence λn we have

(1/(2λn)) log |b(λn)/a(λn)| = (1/(2λn)) log |α_{p1 p2}| = 0.

Hence t = 0 and, by Lemma 8.1.1, the Stokes line n0 = (a2, a3) is a zero line in D_{n0}. The same proof shows that the Stokes line [a0, a1] is a zero line for the full sequence Z_{λn} in D_{m0}. Now it only remains to discuss the limit in (8.2) in the canonical domain D. For the triple (D, n1, x_n) we have

\begin{pmatrix} a(λ) \\ b(λ) \end{pmatrix} = \begin{pmatrix} 0 & α_{n0 n1}^{−1} \\ 1 & i α_{n1 n2} \end{pmatrix} \begin{pmatrix} e^{−iλα2} & 0 \\ 0 & e^{iλα2} \end{pmatrix} \begin{pmatrix} 0 & α_{p1 p0}^{−1} \\ 1 & i α_{p0 p2} \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} e^{iλα2} α_{p1 p0}^{−1} α_{n0 n1}^{−1} \\ i Γ2(λ) \end{pmatrix}.

Therefore, by the second equation in (7.11), we obtain

(8.11)   t = lim_{n→∞} (1/(2λn)) log | i Γ2(λn) / ( e^{iλn α2} α_{p1 p0}^{−1} α_{n0 n1}^{−1} ) | = lim_{n→∞} (1/(2λn)) log |Γ2(λn)|.

The limit (8.11) does not necessarily exist for the full sequence {λn}. We study this limit in different cases as follows:
1. α1/α2 = 1:
This is exactly the symmetric case in Theorem 6.2.2. It is easy to see that if α1 = α2, then there exists a translation on the real line which changes q(z) into an even function. When q(z) is even, because of the symmetry in the problem, we have Γ1(λ) = Γ2(λ). On the other hand, equation (8.9) implies that

(8.12)   |Γ1(λ)| |Γ2(λ)| = e^{−2λξ} (1 + O(1/λ)).

This means that in the symmetric case the full sequence λn satisfies

|Γ1(λn)| = |Γ2(λn)| = e^{−λn ξ} (1 + O(1/λn)).

Therefore, by (8.11), we have t = −ξ/2, and using the lemma, the line ℜS(a, z) = −ξ/2 is the zero line in D. We note that this in fact determines the whole imaginary axis, because ℜS(a, 0) = −ξ/2. Also notice that in the symmetric case, in our notation we have a = a2 = x_n. This proves Theorem 6.2.2.
2. α1/α2 ≠ 1:
In this case, as we mentioned in the introduction, there is more than one zero limit measure. Here the limit (8.11) behaves differently for the two subsequences in (6.7) and (6.8) (notice that the equations (6.7) and (6.8) in fact follow from (8.9)). It is clear from (8.10) that if for a subsequence {λ_{n_k}} we have a lower bound δ for |cos(α2 λ_{n_k})|, then t = lim_{k→∞} (1/(2λ_{n_k})) log |Γ2(λ_{n_k})| = 0. Also, if we have a lower bound δ for |cos(α1 λ_{n_k})|, then lim_{k→∞} (1/(2λ_{n_k})) log |Γ1(λ_{n_k})| = 0, and by (8.12) we have t = lim_{k→∞} (1/(2λ_{n_k})) log |Γ2(λ_{n_k})| = −ξ. To find such subsequences we denote, for each ℓ = 1, 2,

A_δ^{(ℓ)} = {λn ; |cos(αℓ λn)| > δ}.

By (6.7) and (6.8), it is clear that up to some finite sets A_δ^{(1)} ⊂ {λ_n^{(2)}} and A_δ^{(2)} ⊂ {λ_n^{(1)}}. We would like to find the density of the subsets A_δ^{(1)} and A_δ^{(2)} in {λ_n^{(2)}} and {λ_n^{(1)}} respectively. Here by the density of a subsequence {λ_{n_k}} of {λn} we mean

d = lim_{n→∞} #{k ; λ_{n_k} ≤ λn} / n.

If we set τ = arcsin(δ), then we have

A_δ^{(1)} = {n ∈ N ; |(n + 1/2)(α1/α2) + (m + 1/2)| > τ + O(1/n), ∀ m ∈ Z},

A_δ^{(2)} = {n ∈ N ; |(n + 1/2)(α2/α1) + (m + 1/2)| > τ + O(1/n), ∀ m ∈ Z}.

We only discuss the density of the subset A_δ^{(1)}. We rewrite this subset as

A_δ^{(1)} = {n ∈ N ; |(2n + 1)α1 + (2m + 1)α2| > 2α2 τ + O(1/n), ∀ m ∈ Z}.

From this we see that if α1/α2 is a rational of the form 2r1/(2r2 + 1) (or (2r1 + 1)/(2r2)), then, because for every m and n we have

|(2n + 1)(2r1) + (2m + 1)(2r2 + 1)| ≥ 1,

we get d(A_δ^{(1)}) = 1 for τ = 1/(8r2 + 4). This proves Theorem 6.2.4.2.

When α1/α2 is a rational of the form (2r1 + 1)/(2r2 + 1), we define

B_δ^{(1)} = {n ∈ A_δ^{(1)} ; 2n + 1 ≢ 0 (mod 2r2 + 1)}.

Since for every n ∈ B_δ^{(1)} and m ∈ Z we have

|(2n + 1)(2r1 + 1) + (2m + 1)(2r2 + 1)| ≥ 1,

for τ = 1/(8r2 + 4) we get d(A_δ^{(1)}) ≥ d(B_δ^{(1)}) = 2r2/(2r2 + 1). This completes the proof of Theorem 6.2.4.3.

To prove Theorem 6.2.4.1, when α1/α2 is irrational, we use the fact that the set Zα1 ⊕ Zα2 is dense in R. In fact it is easy to see that the subset A = {nα1 + mα2 | n ∈ N, m ∈ Z} is also dense. Now if we rewrite A_δ^{(1)} as

A_δ^{(1)} = {n ∈ N ; |(nα1 + mα2) + (1/2)(α1 + α2)| > α2 τ + O(1/n), ∀ m ∈ Z},

then from the denseness of the set A it is not hard to see that in this case

d(A_δ^{(1)}) = 1 − (2α2/α1) τ.

Hence, letting δ (and so τ = arcsin δ) tend to 0, we conclude that when ℓ = 1 there is a subsequence {λ_{n_k}^{(ℓ)}} of {λ_n^{(ℓ)}} of density 1. The same argument works for ℓ = 2. This finishes the proof.
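The leading-order density counts in this proof can be checked empirically. The sketch below (an illustration only; it keeps just the leading term of (6.7)–(6.8) and drops the O(1/n) corrections) computes the fraction of indices n for which |cos(α1 λ_n^{(2)})| stays above δ, for an irrational ratio and for two rational ratios α1/α2:

import numpy as np

def empirical_density(ratio, delta=0.2, N=200000):
    # Fraction of n <= N with |cos((2n+1) * (pi/2) * ratio)| > delta, where
    # ratio = alpha_1 / alpha_2 and lambda_n^{(2)} ~ (2n+1) pi / (2 alpha_2).
    n = np.arange(N)
    return np.mean(np.abs(np.cos((2 * n + 1) * (np.pi / 2) * ratio)) > delta)

print(empirical_density(np.sqrt(2)))  # irrational ratio: strictly below 1, tends to 1 as delta -> 0
print(empirical_density(2.0 / 3.0))   # ratio of the form 2r1/(2r2+1): every index survives
print(empirical_density(3.0 / 5.0))   # ratio (2r1+1)/(2r2+1) = 3/5: about 4/5 of the indices survive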
8.4 More examples of the zero lines for some polynomial potentials
In Figure 8.2 we have illustrated the zero lines for the polynomials

q(z) = (z² − a²)(z² + bz + c)   and   q(z) = (z² − a²)(z² − b²)(z² + c²).

Figure 8.2: The zero lines of q(z) = (z² − a²)(z² + bz + c) and q(z) = (z² − a²)(z² − b²)(z² + c²).

The thickest lines in these figures are the zero lines. In fact, for these examples there is a unique zero limit measure, as in the other symmetric cases mentioned in Theorems 6.2.2 and 6.2.3. We will not give the proofs, as they follow similarly, but we would like to raise the following question:

Question: Is there any polynomial potential with n wells, n ≥ 3, for which there is a unique zero limit measure for the zeros of the eigenfunctions?
Bibliography

[B]
Bank, S., A note on the zeros of solutions of w'' + P(z)w = 0 where P is a polynomial, Appl. Anal. 25 (1987), no. 1-2, 29–41.

[BS]
F. A. Berezin and M. A. Shubin, The Schrödinger Equation, Moscow State University Press, Moscow, 1983.

[BPU]
Brummelhuis, R.; Paul, T.; Uribe, A., Spectral estimates around a critical level. Duke Math. J. 78 (1995), no. 3, 477–530.

[C]
Colin de Verdière, Y., A semi-classical inverse problem II: reconstruction of the potential. arXiv:0802.1643 [math-ph].

[Ca]
Camus, B., A semi-classical trace formula at a non-degenerate critical level. J. Funct. Anal. 208 (2004), no. 2, 446–481.

[CG1]
Colin de Verdière, Y.; Guillemin, V., A semi-classical inverse problem I: Taylor expansions. arXiv:0802.1605 [math-ph].

[Ch]
Chazarain, J., Spectre d'un Hamiltonien quantique et mécanique classique, Comm. PDE 5 (1980), 595–644.

[D]
Duistermaat, J. J., Oscillatory integrals, Lagrange immersions and unfolding of singularities. Comm. Pure Appl. Math. 27 (1974), 207–281.

[DSj]
Dimassi, Mouez; Sjöstrand, Johannes, Spectral asymptotics in the semi-classical limit. London Mathematical Society Lecture Note Series, 268. Cambridge University Press, Cambridge, 1999.

[EZ]
Evans, L. C.; Zworski, M., Lectures on semiclassical analysis, lecture notes.

[EF]
M. A. Evgrafov, M. V. Fedoryuk, Asymptotic behaviour as λ → ∞ of the solution of the equation w''(z) − p(z, λ)w(z) = 0 in the complex plane, Russian Math. Surveys 21:1 (1966), 1–48; Uspekhi Mat. Nauk 21:1 (1966), 3–50.

[EGS1]
A. Eremenko, A. Gabrielov, B. Shapiro, Zeros of eigenfunctions of some anharmonic oscillators, arXiv:math-ph/0612039.

[EGS2]
A. Eremenko, A. Gabrielov and B. Shapiro, High energy eigenfunctions of one-dimensional Schrödinger operators with polynomial potentials, arXiv:math-ph/0703049.

[F1]
Fedoryuk, M. V., Asymptotic Analysis. Springer-Verlag, 1993.

[GU]
Guillemin, Victor; Uribe, Alejandro, Some inverse spectral results for semi-classical Schrödinger operators. Math. Res. Lett. 14 (2007), no. 4, 623–632.

[GPU]
Guillemin, V.; Paul, T.; Uribe, A., "Bottom of the well" semi-classical trace invariants. Math. Res. Lett. 14 (2007), no. 4, 711–719.

[He1]
Hezari, Hamid, Inverse spectral problems for Schrödinger operators. To be published in Communications in Mathematical Physics. arXiv:0801.3283.

[He2]
Hezari, Hamid, Complex zeros of eigenfunctions of 1-dimensional Schrödinger operators. International Mathematics Research Notices, Vol. 2008, Article ID rnm148.

[HeZ]
Hezari, H. and Zelditch, S., Inverse spectral problems for analytic Z_2^n-symmetric domains in R^n. arXiv:0902.1373.

[H1]
E. Hille, Lectures on ordinary differential equations, Addison-Wesley, Menlo Park, CA, 1969.

[H2]
E. Hille, Ordinary differential equations in the complex domain, John Wiley and Sons, NY, 1976.

[I]
E. L. Ince, Ordinary Differential Equations, Dover, New York, 1926.

[ISjZ]
Iantchenko, A.; Sjöstrand, J.; Zworski, M., Birkhoff normal forms in semi-classical inverse problems. Math. Res. Lett. 9 (2002), no. 2-3, 337–362.

[LV]
P. Leboeuf and A. Voros, Chaos-revealing multiplicative representation of quantum eigenstates, Journal of Physics A: Mathematical and General 23 (1990), 1765–1774.

[MGZ]
Martínez-Finkelshtein, A.; Martínez-González; Zarzo, A., WKB approach to zero distribution of solutions of linear second order differential equations, J. Comput. Appl. Math. (2002).

[MZ]
A. Martínez-Finkelshtein, A. Zarzo, Zero distribution of solutions of linear second order differential equations, in Complex Methods in Approximation Theory, Universidad de Almería, 1997, pp. 167–182.

[N]
R. Nevanlinna, Über Riemannsche Flächen mit endlich vielen Windungspunkten, Acta Math. 58 (1932), 295–373.

[O1]
F. W. J. Olver, Asymptotics and Special Functions, Academic Press, 1974.

[O2]
Selected Papers of F. W. J. Olver, edited by R. Wong. World Scientific, Singapore, 2000.

[PU]
Paul, T.; Uribe, A., The semi-classical trace formula and propagation of wave packets. J. Funct. Anal. 132 (1995), no. 1, 192–249.

[R]
Robert, Didier, Autour de l'approximation semi-classique. Progress in Mathematics, 68. Birkhäuser Boston, Inc., Boston, MA, 1987.

[S]
Y. Sibuya, Global Theory of a Second Order Linear Ordinary Differential Equation with a Polynomial Coefficient, North-Holland, Amsterdam, 1975.

[Sj]
Sjöstrand, Johannes, Semi-excited states in nondegenerate potential wells. Asymptotic Anal. 6 (1992), no. 1, 29–43.

[Sh]
Shubin, M. A., Pseudodifferential operators and spectral theory. Translated from the 1978 Russian original by Stig I. Andersson. Second edition. Springer-Verlag, Berlin, 2001.

[SH]
C. A. Swanson, V. B. Headley, An extension of Airy's equation, SIAM Journal on Applied Mathematics, Vol. 15, No. 6 (1967), pp. 1400–1412.

[T]
E. Titchmarsh, Eigenfunction expansions associated with second order differential equations, Clarendon Press, Oxford, 1946.

[U]
Uribe, Alejandro, Trace formulae. First Summer School in Analysis and Mathematical Physics (Cuernavaca Morelos, 1998), 61–90, Contemp. Math., 260, Amer. Math. Soc., Providence, RI, 2000.

[W]
H. Wittich, Eindeutige Lösungen der Differentialgleichung w' = R(z, w), Math. Z. 74 (1960), 278–288.

[Z]
Zelditch, Steven, Reconstruction of singularities for solutions of Schrödinger's equation. Comm. Math. Phys. 90 (1983), no. 1, 1–26.

[Z1]
Zelditch, Steve, The inverse spectral problem. With an appendix by Johannes Sjöstrand and Maciej Zworski. Surveys in Differential Geometry, Vol. IX, 401–467, Int. Press, Somerville, MA, 2004.

[Z2]
Zelditch, Steve, Inverse spectral problem for analytic domains. I. Balian–Bloch trace formula. Comm. Math. Phys. 248 (2004), no. 2, 357–407.

[Z3]
Zelditch, Steve, Inverse spectral problem for analytic plane domains II: Z2-symmetric domains, to appear in Ann. of Math. arXiv:math.SP/0111078.

[Z4]
Zelditch, S., Spectral determination of analytic bi-axisymmetric plane domains. Geom. Funct. Anal. 10 (2000), no. 3, 628–677.

[Z5]
S. Zelditch, Complex zeros of real ergodic eigenfunctions, Inventiones Mathematicae 167 (2007), no. 2, 419–443.

[Z6]
S. Zelditch, Nodal lines, ergodicity and complex numbers, The European Physical Journal – Special Topics 145 (2007), no. 1, 271–286.
Vitae
Hamid Hezari was born in March 1979 and was raised in Tehran, Iran. In 2001 he received a Bachelor of Science degree in Mathematics from Sharif University of Technology, Tehran, Iran. In 2003 he obtained a Master's degree in Mathematics from Sharif University of Technology. He enrolled as a PhD student at Simon Fraser University, Vancouver, Canada, in the Fall of 2003. He transferred to Johns Hopkins University in the Fall of 2004. He defended this PhD thesis on March 3, 2009.