Gaussian matrix model in an external field and non-intersecting Brownian motions

arXiv:0803.0705v1 [math-ph] 5 Mar 2008

N. Orantin 1
Département de mathématiques, Chemin du cyclotron, 2, Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium.

Abstract: We study the Gaussian hermitian random matrix ensemble with an external matrix which has an arbitrary number of eigenvalues with arbitrary multiplicities. We compute the limiting eigenvalue correlations when the size of the matrix goes to infinity, in non-critical regimes. We show that they exhibit universal behavior and can be expressed with the sine and Airy kernels. We also briefly review the implications of these results in terms of non-intersecting Brownian motions.

Contents

1 Introduction and statement of results
2 Gaussian matrix integral in an external field, reminder
  2.1 Definitions
  2.2 Algebraic curve
  2.3 Kernel and correlation functions
  2.4 Non-intersecting Brownian motions
3 Large time case
  3.1 Study of the spectral curve
  3.2 Riemann-Hilbert analysis
    3.2.1 First transformation
    3.2.2 Second transformation and opening of lenses
    3.2.3 Model RH problem
    3.2.4 Parametrix at the edge points
    3.2.5 Final transformation
  3.3 Mean density of particles
  3.4 Behavior in the bulk
  3.5 Behavior at the edge
4 Intermediate state
  4.1 Study of the spectral curve
    4.1.1 Study of the sign of Re(λ_i^(m) − λ_j^(h))
  4.2 Riemann-Hilbert analysis
    4.2.1 First transformation
    4.2.2 Second transformation, global opening of lenses
    4.2.3 Third transformation
    4.2.4 Model Riemann-Hilbert problem
    4.2.5 Final transformation
  4.3 Asymptotics of the kernel and universality
5 Perspectives

E-mail: [email protected]

1 Introduction and statement of results

In these notes, we are interested in the hermitian matrix model in an external field defined by the partition function:
$$Z := \int_{H_N} dM\, e^{-N\,\mathrm{Tr}\left(\frac{M^2}{2} - AM\right)} \qquad (1\text{-}1)$$
where one integrates over the hermitian matrices M of size N × N, and A is a deterministic diagonal matrix of the form
$$A := \mathrm{diag}(\underbrace{a_1, \dots, a_1}_{n_1}, \underbrace{a_2, \dots, a_2}_{n_2}, \dots, \underbrace{a_k, \dots, a_k}_{n_k}) \qquad (1\text{-}2)$$

with a_i ≠ a_j for i ≠ j, and the measure dM is the product of the Lebesgue measures of the real entries of the matrix M:
$$dM := \prod_{i<j} d\mathrm{Re}(M_{ij})\, d\mathrm{Im}(M_{ij}) \prod_{i} dM_{ii}. \qquad (1\text{-}3)$$
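The measure (1-1) can be illustrated numerically. The following sketch (our own illustration, with parameter values not taken from the paper) uses the fact that completing the square in (1-1) shows M is distributed as A plus a GUE matrix whose entries have variance of order 1/N:

```python
import numpy as np

def sample_external_source(a_vals, mult, rng):
    """Sample the eigenvalues of M drawn from e^{-N Tr(M^2/2 - AM)} (Eq. 1-1)."""
    # A = diag(a_1,...,a_1, ..., a_k,...,a_k) with multiplicities n_i (Eq. 1-2).
    A = np.diag(np.repeat(np.asarray(a_vals, dtype=float), mult))
    N = A.shape[0]
    # Completing the square: e^{-N Tr(M^2/2 - AM)} is proportional to
    # e^{-(N/2) Tr(M - A)^2}, so M = A + H with H a GUE matrix normalized so
    # that Var(H_ii) = 1/N and Var(Re H_ij) = Var(Im H_ij) = 1/(2N).
    G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    H = (G + G.conj().T) / (2.0 * np.sqrt(N))
    return np.linalg.eigvalsh(A + H)

rng = np.random.default_rng(0)
eigs = sample_external_source([-2.0, 2.0], [50, 50], rng)
print(eigs.min(), eigs.max())
```

For a = ±2 with equal multiplicities, the eigenvalues fill two disjoint bulks around −2 and +2, the multi-cut situation analyzed in section 3.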

… ρ(x) > 0 and
$$\int_{z_{2i-1}}^{z_{2i}} \rho(x)\,dx = \frac{n_i}{N}. \qquad (3\text{-}4)$$

On the edges of the spectrum, there exists ρ_i > 0 such that
$$\rho(x) = \frac{\rho_i}{\pi}\, |x - z_i|^{1/2}\left(1 + O(x - z_i)\right) \quad \text{as } x \to z_i. \qquad (3\text{-}5)$$

proof: From the preceding lemma, one directly gets that
$$\forall x \in \bigcup_{i=1}^{k}\, ]z_{2i-1}, z_{2i}[\,, \quad \rho(x) > 0. \qquad (3\text{-}6)$$

Consider now any integer i ∈ [1, k] and x ∈ ]z_{2i−1}, z_{2i}[; one has
$$\rho(x) = \frac{1}{\pi}\,\mathrm{Im}\,\xi_{0+}(x) = \frac{1}{2i\pi}\left(\xi_{0+}(x) - \overline{\xi_{0+}(x)}\right) = \frac{1}{2i\pi}\left(\xi_{i-}(x) - \xi_{i+}(x)\right) \qquad (3\text{-}7)$$
and thus one can compute the integral
$$\int_{z_{2i-1}}^{z_{2i}} \rho(x)\,dx = \frac{1}{2i\pi}\int_{z_{2i-1}}^{z_{2i}} \left(\xi_{i-}(x) - \xi_{i+}(x)\right) dx = \frac{1}{2i\pi}\oint_{\Gamma_i} \xi_i(x)\,dx \qquad (3\text{-}8)$$

where the integration contour Γ_i encircles the cut [z_{2i−1}, z_{2i}]. Since ξ_i has no pole other than at infinity, one can move the contour and use the asymptotics:
$$\int_{z_{2i-1}}^{z_{2i}} \rho(x)\,dx = -\mathop{\mathrm{Res}}_{x\to\infty}\, \xi_i(x)\,dx = \frac{n_i}{N}. \qquad (3\text{-}9)$$
The last part of the lemma follows from the usual behavior at the edge of the spectrum (see for example [3]). □

Remark 3.1 All the results of this lemma simply follow from the behavior of the 1-form y dx in the neighborhood of a cut linking two simple branch points. Thus all these local results could be obtained from the earlier study of simpler models, where one has only two sheets for example.
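The residue step can be made explicit (a one-line expansion we add for clarity, using the large-z behavior of the λ-functions stated in Lemma 3.4, so that ξ_i = λ_i'):

```latex
\xi_i(x) \;=\; a_i + \frac{n_i}{N}\,\frac{1}{x} + O\!\left(\frac{1}{x^2}\right)
\qquad\Longrightarrow\qquad
-\mathop{\mathrm{Res}}_{x\to\infty}\, \xi_i(x)\,dx \;=\; \frac{n_i}{N},
```

since $\mathop{\mathrm{Res}}_{x\to\infty} \frac{dx}{x} = -1$ while the constant term $a_i$ and the $O(1/x^2)$ tail are residue-free at infinity.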

We now define the integrals of the ξ_i functions as:

Definition 3.2 We define the integrals λ_k by
$$\lambda_k(z) := \int^z \xi_k(x)\,dx \qquad (3\text{-}10)$$
where the integration constants are constrained by:
$$\lambda_0(z_{2k}) = \lambda_k(z_{2k}) = 0 \qquad (3\text{-}11)$$
and
$$\forall i = 1, \dots, k-1\,, \quad \lambda_i(z_{2i}) = \lambda_{0+}(z_{2i}). \qquad (3\text{-}12)$$

Lemma 3.4 With this definition, the jumps and asymptotics of these λ-functions are given by:

• When z → ∞:
$$\lambda_0(z) = \frac{z^2}{2} - \ln z + l_0 + O\left(\frac{1}{z^2}\right) \qquad (3\text{-}13)$$
and, for i ≠ 0:
$$\lambda_i(z) = a_i z + \frac{n_i}{N} \ln z + l_i + O\left(\frac{1}{z^2}\right) \qquad (3\text{-}14)$$
where the l_i's are constants chosen to satisfy the constraints Eq. (3-11) and Eq. (3-12).

• They follow the jumps:
$$\begin{array}{ll}
\lambda_{0+}(x) - \lambda_{0-}(x) = -2i\pi \sum_{j=i+1}^{k} \frac{n_j}{N} & x \in [z_{2i}, z_{2i+1}] \\
\lambda_{0+}(x) - \lambda_{0-}(x) = -2i\pi & x \in (-\infty, z_1] \\
\forall i \in [1,k]\,,\quad \lambda_{i+}(x) - \lambda_{i-}(x) = 2i\pi \frac{n_i}{N} & x \in (-\infty, z_{2i-1}] \\
\lambda_{0+}(x) - \lambda_{k-}(x) = 0\,,\quad \lambda_{0-}(x) - \lambda_{k+}(x) = 0 & x \in [z_{2k-1}, z_{2k}] \\
\forall i \in [1,k-1]\,,\quad \lambda_{0+}(x) - \lambda_{i-}(x) = 0\,,\quad \lambda_{0-}(x) - \lambda_{i+}(x) = 2i\pi \sum_{j=i+1}^{k} \frac{n_j}{N} & x \in [z_{2i-1}, z_{2i}]
\end{array} \qquad (3\text{-}15)$$

proof: The first part of this lemma is just the rewriting of the integration of the asymptotics of y dx on the different sheets. The second part is obtained by moving the integration contours so that one is left with the computation of a residue at infinity on the different sheets. □

Let us now state a fundamental theorem for what follows.

Theorem 3.1 For any i = 1, . . . , k, on R\[z_{2i−1}, z_{2i}], one has Re λ_{i+} < Re λ_{0−}. For any i = 1, . . . , k, there exists a neighborhood U_i of ]z_{2i−1}, z_{2i}[ such that
$$\forall j \neq i\,, \quad \mathrm{Re}\,\lambda_j(z) < \mathrm{Re}\,\lambda_0(z) < \mathrm{Re}\,\lambda_i(z) \qquad (3\text{-}16)$$
for any z ∈ U_i \ ]z_{2i−1}, z_{2i}[.

proof: Once again, one can follow [3] for the first part of the theorem: for x > z_{2i}, ξ_0(x) > ξ_i(x), and λ_0(x) > λ_i(x) because λ_i(z_{2i}) = λ_0(z_{2i}) and λ_i' = ξ_i; for x < z_{2i−1}, one easily sees that Re ξ_0(x) < ξ_i(x), which gives the result. For the second part, one has to study the properties of the λ-functions more carefully. Consider the function F_i := λ_{i+} − λ_{0+}. It is purely imaginary on ]z_{2i−1}, z_{2i}[ and its derivative F_i'(x) = −2iπρ(x) has negative imaginary part. Thus Re λ_0(z) < Re λ_i(z) in a neighborhood of ]z_{2i−1}, z_{2i}[. The other inequality follows from the definition and continuity. □

3.2 Riemann-Hilbert analysis

In this section, we solve the Riemann-Hilbert problem leading to the computation of the kernel. We look for the unique function Y defined as follows:

Definition 3.3 Let Y be the solution of the following constraints:

• Y : C\R → C^{(k+1)×(k+1)} is analytic;
• for x ∈ R, Y(x) has the jumps
$$Y_+(x) = Y_-(x)\begin{pmatrix}
1 & \omega_1(x) & \omega_2(x) & \dots & \omega_k(x) \\
0 & 1 & 0 & \dots & 0 \\
0 & 0 & 1 & \dots & 0 \\
\vdots & & & \ddots & \vdots \\
0 & 0 & 0 & \dots & 1
\end{pmatrix} \qquad (3\text{-}17)$$
where $\omega_i(x) := e^{-N\left(\frac{x^2}{2} - a_i x\right)}$, and Y_+(x) and Y_−(x) denote respectively the limits of Y(z) as z → x from the upper and lower half planes;
• when z → ∞, one has the asymptotic behavior:
$$Y(z) = \left(I + O\left(\frac{1}{z}\right)\right) \mathrm{diag}\left(z^N, z^{-n_1}, z^{-n_2}, \dots, z^{-n_k}\right). \qquad (3\text{-}18)$$

The kernel can then be written:
$$K_N(x, y) = \frac{1}{2i\pi(x - y)}\, \Omega_1(y)\, Y_+^{-1}(y)\, Y_+(x)\, \Omega_2^t(x) \qquad (3\text{-}19)$$
with the notations of Eq. (2-21) and Eq. (2-22).

To do so, we transform the problem step by step to turn it into a simple one (see [2] for a review on the subject). It is important to keep in mind the constraints we impose on the n_i's and N:

• We fix a set of k positive rational numbers called filling fractions:
$$\epsilon_i = \frac{p_i}{q_i} \qquad (3\text{-}20)$$
where p_i and q_i are coprime and
$$\sum_{i=1}^{k} \epsilon_i = 1. \qquad (3\text{-}21)$$
• N is a common multiple of the q_i's.
• The integers n_i are fixed by the condition:
$$n_i = \epsilon_i N. \qquad (3\text{-}22)$$

With these constraints, the spectral curve associated to this RH problem does not change as N grows.

3.2.1 First transformation

Using the notations of section 3.1, we can define
$$T(x) := \mathrm{diag}\left(e^{-N l_0}, e^{-N l_1}, \dots, e^{-N l_k}\right) Y(x)\, \mathrm{diag}\left(e^{N\left(\lambda_0(x) - \frac{x^2}{2}\right)}, e^{N(\lambda_1(x) - a_1 x)}, \dots, e^{N(\lambda_k(x) - a_k x)}\right) \qquad (3\text{-}23)$$
for x ∈ C\R. Then one can check that it is the solution of the Riemann-Hilbert problem:

• T is analytic on C\R;
• for x ∈ R, one has the jumps
$$T_+(x) = T_-(x)\, j_T(x) \qquad (3\text{-}24)$$
where
$$j_T(x) = \begin{pmatrix}
e^{N(\lambda_{0+}(x)-\lambda_{0-}(x))} & e^{N(\lambda_{1+}(x)-\lambda_{0-}(x))} & \cdots & e^{N(\lambda_{k+}(x)-\lambda_{0-}(x))} \\
0 & e^{N(\lambda_{1+}(x)-\lambda_{1-}(x))} & & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & e^{N(\lambda_{k+}(x)-\lambda_{k-}(x))}
\end{pmatrix}; \qquad (3\text{-}25)$$
• when x → ∞, one has the asymptotic behavior:
$$T(x) = I + O\left(\frac{1}{x}\right). \qquad (3\text{-}26)$$

Note that one can write the jump matrix j_T(x) depending on the cut on which x lies. For x ∈ [z_{2i−1}, z_{2i}], j_T equals the identity matrix except for its first row and its (i, i) entry:
$$(j_T)_{00} = e^{N(\lambda_0(x)-\lambda_i(x))_+}\,, \quad (j_T)_{0i} = 1\,, \quad (j_T)_{0j} = e^{N(\lambda_{j+}(x)-\lambda_{0-}(x))} \;\; (j \neq 0, i)\,, \quad (j_T)_{ii} = e^{N(\lambda_0(x)-\lambda_i(x))_-}. \qquad (3\text{-}27)$$
For x ∈ R\∪_i [z_{2i−1}, z_{2i}], j_T reduces to
$$j_T = \begin{pmatrix}
1 & e^{N(\lambda_{1+}(x)-\lambda_{0-}(x))} & \cdots & e^{N(\lambda_{k+}(x)-\lambda_{0-}(x))} \\
0 & 1 & & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{pmatrix}. \qquad (3\text{-}28)$$
The kernel can then be written:
$$K_N(x, y) = \frac{e^{N\left(\frac{x^2}{4} - \frac{y^2}{4}\right)}}{2i\pi(x - y)}\, \Omega_{1,T}(y)\, T_+^{-1}(y)\, T_+(x)\, \Omega_{2,T}^t(x) \qquad (3\text{-}29)$$
where
$$\Omega_{1,T}(x) = \left[0\,,\; e^{N \lambda_{1+}(x)}\,,\; \dots\,,\; e^{N \lambda_{k+}(x)}\right] \qquad (3\text{-}30)$$
and
$$\Omega_{2,T}(x) = \left[e^{-N \lambda_{0+}(x)}\,,\; 0\,,\; \dots\,,\; 0\right]. \qquad (3\text{-}31)$$

3.2.2 Second transformation and opening of lenses

Remark that one can further factorize the matrix j_T(x) for x ∈ [z_{2i−1}, z_{2i}]:
$$j_T(x) = \tilde{j}_{i,T}(x)\, j_S(x)\, \hat{j}_{i,T}(x) \qquad (3\text{-}32)$$
where, for x ∈ [z_{2i−1}, z_{2i}], one has defined the (k+1) × (k+1) matrices:
$$j_S(x) = \left[(j_S)_{\alpha\beta}\right]_{\alpha,\beta=0}^{k} := \left[(1 - \delta_{\alpha i} - \delta_{\alpha 0})\delta_{\alpha\beta} + \delta_{\alpha 0}\delta_{\beta i} - \delta_{\alpha i}\delta_{\beta 0}\right]_{\alpha,\beta=0}^{k}\,, \qquad (3\text{-}33)$$
$$\tilde{j}_{i,T}(x) := \left[\delta_{\alpha\beta} + \delta_{\alpha i}\Big(\delta_{\beta 0}\, e^{N(\lambda_0-\lambda_i)_-} - \sum_{j\neq 0,i} \delta_{\beta j}\, e^{N(\lambda_j-\lambda_i)_-}\Big)\right]_{\alpha,\beta=0}^{k} \qquad (3\text{-}34)$$
and
$$\hat{j}_{i,T}(x) := \left[\delta_{\alpha\beta} + \delta_{\alpha i}\sum_{j\neq i} \delta_{\beta j}\, e^{N(\lambda_j-\lambda_i)_+}\right]_{\alpha,\beta=0}^{k}. \qquad (3\text{-}35)$$
This allows us to define the second transformation. Consider a set of k lenses with vertices z_{2i−1} and z_{2i} (see fig. 2), contained in the neighborhoods U_i defined in theorem 3.1, and define
$$S(z) := \begin{cases} T(z)\, \hat{J}_i(z) & \text{in the upper lens region,} \\ T(z)\, \tilde{J}_i(z) & \text{in the lower lens region,} \end{cases} \qquad (3\text{-}36)$$

Figure 2: Lenses used in the second transformation: lens i has vertices z_{2i−1} and z_{2i}.

with
$$\tilde{J}_i(z) := \left[\delta_{\alpha\beta} + \delta_{\alpha i}\Big(\delta_{\beta 0}\, e^{N(\lambda_0-\lambda_i)} - \sum_{j\neq 0,i} \delta_{\beta j}\, e^{N(\lambda_j-\lambda_i)}\Big)\right]_{\alpha,\beta=0}^{k} \qquad (3\text{-}37)$$
and
$$\hat{J}_i(z) := \left[\delta_{\alpha\beta} - \delta_{\alpha i}\sum_{j\neq i} \delta_{\beta j}\, e^{N(\lambda_j-\lambda_i)}\right]_{\alpha,\beta=0}^{k} \qquad (3\text{-}38)$$
for z in the lens i, and
$$S(z) := T(z) \qquad (3\text{-}39)$$

for z outside the lenses. We now define the jump matrix j_S(x) on all the discontinuity contours of S(x):

• On [z_{2i−1}, z_{2i}], for i = 1, . . . , k:
$$j_S(x) = \left[(1 - \delta_{\alpha i} - \delta_{\alpha 0})\delta_{\alpha\beta} + \delta_{\alpha 0}\delta_{\beta i} - \delta_{\alpha i}\delta_{\beta 0}\right]_{\alpha,\beta=0}^{k}\,; \qquad (3\text{-}40)$$
• On the upper boundary of the lens i:
$$j_S(x) = \left[\delta_{\alpha\beta} - \delta_{\alpha i}\sum_{j\neq i} \delta_{\beta j}\, e^{N(\lambda_j-\lambda_i)}\right]_{\alpha,\beta=0}^{k}\,; \qquad (3\text{-}41)$$
• On the lower boundary of the lens i:
$$j_S(x) = \left[\delta_{\alpha\beta} + \delta_{\alpha i}\Big(\delta_{\beta 0}\, e^{N(\lambda_0-\lambda_i)} - \sum_{j\neq 0,i} \delta_{\beta j}\, e^{N(\lambda_j-\lambda_i)}\Big)\right]_{\alpha,\beta=0}^{k}\,; \qquad (3\text{-}42)$$
• On R\∪_i [z_{2i−1}, z_{2i}]:
$$j_S(x) = j_T(x). \qquad (3\text{-}43)$$

Using these definitions, one sees that S(x) is the solution of the following RH problem:

• S(x) is analytic on C\(R ∪ Γ), where Γ denotes the boundaries of the lenses;
• S(x) has jumps:
$$\forall x \in R \cup \Gamma\,, \quad S_+(x) = S_-(x)\, j_S(x); \qquad (3\text{-}44)$$
• when x → ∞:
$$S(x) = I + O\left(\frac{1}{x}\right) \qquad (3\text{-}45)$$

and the kernel can be expressed in terms of this matrix S by the following expression: for x, y ∈ (z_{2i−1}, z_{2i}),
$$K_N(x, y) = \frac{e^{N[h(y)-h(x)]}}{2i\pi(x - y)}\, \Omega_{1,S,i}(y)\, S_+^{-1}(y)\, S_+(x)\, \Omega_{2,S,i}^t(x) \qquad (3\text{-}46)$$
where
$$h(x) = -\frac{x^2}{4} + \mathrm{Re}\left[\lambda_{0+}(x)\right], \qquad (3\text{-}47)$$
$$\Omega_{1,S,i}(x) = \Big[-e^{N i\, \mathrm{Im}[\lambda_{0+}(x)]}\,, \underbrace{0, \dots, 0}_{i-1}\,, e^{-N i\, \mathrm{Im}[\lambda_{0+}(x)]}\,, \underbrace{0, \dots, 0}_{k-i}\Big] \qquad (3\text{-}48)$$
and
$$\Omega_{2,S,i}(x) = \Big[e^{-N i\, \mathrm{Im}[\lambda_{0+}(x)]}\,, \underbrace{0, \dots, 0}_{i-1}\,, e^{N i\, \mathrm{Im}[\lambda_{0+}(x)]}\,, \underbrace{0, \dots, 0}_{k-i}\Big]. \qquad (3\text{-}49)$$
One can see that, as N → ∞, this RH problem reduces to a model RH problem of the same kind as those studied in [3, 4, 5], thanks to the properties of the λ-functions in section 3.1. Let us solve it in the same way.

3.2.3 Model RH problem

This means that we want to solve the following RH problem:

• M(x) : C\∪_{i=1}^{k} [z_{2i−1}, z_{2i}] → C^{(k+1)×(k+1)};
• M(x) is analytic in C\∪_{i=1}^{k} [z_{2i−1}, z_{2i}];
• M has jumps on the cuts [z_{2i−1}, z_{2i}] given by:
$$\forall x \in \bigcup_{i=1}^{k} [z_{2i-1}, z_{2i}]\,, \quad M_+(x) = M_-(x)\, j_S(x); \qquad (3\text{-}50)$$
• as x → ∞:
$$M(x) = I + O\left(\frac{1}{x}\right). \qquad (3\text{-}51)$$

Let us lift this problem to the associated spectral curve by cutting the complex plane along the different y-sheets: one defines
$$\Omega_i = \xi_i\big(C\backslash[z_{2i-1}, z_{2i}]\big) \qquad (3\text{-}52)$$
for i = 1, . . . , k and
$$\Omega_0 = \xi_0\Big(C\backslash\bigcup_{i=1}^{k} [z_{2i-1}, z_{2i}]\Big). \qquad (3\text{-}53)$$
These images of the different y-sheets give a partition of the complex plane by definition, and we can define the boundaries Γ_i = Γ_i^+ ∪ Γ_i^− separating Ω_0 and Ω_i for any i ≠ 0 as follows:
$$\Gamma_i^+ = \Gamma_i \cap \{\mathrm{Im}\, z \geq 0\} = \xi_{0+}\big([z_{2i-1}, z_{2i}]\big) = \xi_{i-}\big([z_{2i-1}, z_{2i}]\big) \qquad (3\text{-}54)$$
and
$$\Gamma_i^- = \Gamma_i \cap \{\mathrm{Im}\, z \leq 0\} = \xi_{0-}\big([z_{2i-1}, z_{2i}]\big) = \xi_{i+}\big([z_{2i-1}, z_{2i}]\big). \qquad (3\text{-}55)$$
We are looking for a solution under the form of a Lax matrix
$$M(x) = \begin{pmatrix}
\phi_0(\xi_0(x)) & \phi_0(\xi_1(x)) & \dots & \phi_0(\xi_k(x)) \\
\phi_1(\xi_0(x)) & \phi_1(\xi_1(x)) & \dots & \phi_1(\xi_k(x)) \\
\vdots & \vdots & & \vdots \\
\phi_k(\xi_0(x)) & \phi_k(\xi_1(x)) & \dots & \phi_k(\xi_k(x))
\end{pmatrix} \qquad (3\text{-}56)$$
where the φ_i are Baker-Akhiezer type functions analytic on C\∪_j Γ_j which must satisfy the jump conditions:
$$\forall \xi \in \bigcup_j \Gamma_j^+\,, \quad \phi_{i+}(\xi) = \phi_{i-}(\xi) \qquad (3\text{-}57)$$
and
$$\forall \xi \in \bigcup_j \Gamma_j^-\,, \quad \phi_{i+}(\xi) = -\phi_{i-}(\xi). \qquad (3\text{-}58)$$
These functions are also constrained by their asymptotic behavior:
$$\forall i, j = 0, \dots, k\,, \quad \phi_i(a_j) = \phi_i(\xi_j(\infty)) := \delta_{ij} \qquad (3\text{-}59)$$
where one conventionally notes a_0 := ∞. One also notes
$$\Gamma_i \cap R = \{p_{2i-1}, p_{2i}\} = \{\xi_i(z_{2i-1}), \xi_i(z_{2i})\}. \qquad (3\text{-}60)$$

One has the solution:
$$\phi_0(\xi) = \frac{\displaystyle\prod_{i=1}^{k} (\xi - a_i)}{\sqrt{\displaystyle\prod_{i=1}^{k} (\xi - p_{2i-1})(\xi - p_{2i})}} \qquad (3\text{-}61)$$
and
$$\phi_i(\xi) = c_i\, \frac{\displaystyle\prod_{j\neq i} (\xi - a_j)}{\sqrt{\displaystyle\prod_{l=1}^{k} (\xi - p_{2l-1})(\xi - p_{2l})}} \qquad (3\text{-}62)$$

for i = 1, . . . , k, where the c_i's are constants which can be fixed by the asymptotic behavior of the φ_i's, and the square roots have branch cuts along ∪_i Γ_i. Indeed, the algebraic equation defining the branch points reads:
$$\prod_{i=1}^{k} (\xi - p_{2i-1})(\xi - p_{2i}) = \prod_{i=1}^{k} (\xi - a_i)^2 - \sum_{i=1}^{k} \frac{n_i}{N} \prod_{j\neq i} (\xi - a_j)^2 \qquad (3\text{-}63)$$
which gives
$$\phi_i(a_i) = i\, c_i \sqrt{\frac{N}{n_i}}\,, \qquad (3\text{-}64)$$
i.e.
$$c_i = -i \sqrt{\frac{n_i}{N}}. \qquad (3\text{-}65)$$
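As a sanity check of (3-63)–(3-65), the following sketch (with illustrative values a = (−1, 2) and filling fractions n_i/N = (0.3, 0.7) chosen by us) verifies that evaluating the monic polynomial of (3-63) at ξ = a_i gives −(n_i/N)∏_{j≠i}(a_i − a_j)², the negative quantity whose square root produces the factor −i√(n_i/N) in c_i:

```python
import numpy as np

a = np.array([-1.0, 2.0])    # illustrative eigenvalues a_i
eps = np.array([0.3, 0.7])   # illustrative filling fractions n_i / N

# P(xi) = prod_i (xi - a_i)^2 - sum_i eps_i prod_{j != i} (xi - a_j)^2,
# whose 2k roots are the branch points p_1, ..., p_{2k} of Eq. (3-63).
P = np.poly(np.repeat(a, 2))
for i in range(len(a)):
    P = np.polysub(P, eps[i] * np.poly(np.repeat(np.delete(a, i), 2)))
p = np.roots(P)

# (3-63) at xi = a_i: prod_l (a_i - p_l) = -eps_i * prod_{j != i} (a_i - a_j)^2.
for i in range(len(a)):
    lhs = complex(np.prod(a[i] - p))
    rhs = -eps[i] * float(np.prod((a[i] - np.delete(a, i)) ** 2))
    print(lhs.real, rhs)
```

The printed pairs agree, confirming that the product over branch points at ξ = a_i is negative, so the square root in (3-62) is ±i times a real number there.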

3.2.4 Parametrix at the edge points

Now that the solution of the RH problem can be built away from the branch points thanks to the preceding section, let us have a closer look at the branch points z_j. The behavior near the branch points is the typical square-root one, so one can use the usual method (see for example [3]). The problem is to find a parametrix in the neighborhood of the branch points, i.e. a solution to the RH problem:

• P(z) is analytic in ∪_{i=1}^{2k} D(z_i, r)\(R ∪ Γ), where D(z_i, r) is a disc centered at z_i with small radius r and Γ is the set of lenses defined for the second transformation;
• P has jumps:
$$\forall z \in (R \cup \Gamma) \cap \Big(\bigcup_{i=1}^{2k} D(z_i, r)\Big)\,, \quad P_+(z) = P_-(z)\, j_S(z); \qquad (3\text{-}66)$$
• when N → ∞, the boundary conditions for P are:
$$\forall i = 1, \dots, 2k\,,\; \forall z \in \partial D(z_i, r)\,, \quad P(z) = \left(I + O\left(\frac{1}{N}\right)\right) M(z) \quad \text{uniformly.} \qquad (3\text{-}67)$$

Figure 3: Decomposition of the disc D(z_{2i}, r) into four sectors.

Let us now consider a particular branch point z_{2i}.⁵ When z → z_{2i}, lemma 3.3 states that:
$$\begin{array}{l}
\lambda_0(z) = p_{2i}(z - z_{2i}) + \frac{2\rho_{2i}}{3}(z - z_{2i})^{3/2} + O\big((z - z_{2i})^2\big) \\
\lambda_i(z) = p_{2i}(z - z_{2i}) - \frac{2\rho_{2i}}{3}(z - z_{2i})^{3/2} + O\big((z - z_{2i})^2\big)
\end{array} \qquad (3\text{-}68)$$
i.e.
$$\lambda_0(z) - \lambda_i(z) = \frac{4\rho_{2i}}{3}(z - z_{2i})^{3/2} + O\big((z - z_{2i})^{5/2}\big) \qquad (3\text{-}69)$$
which means that the function $\beta_i(z) = \big(\frac{3}{4}(\lambda_0(z) - \lambda_i(z))\big)^{2/3}$ is analytic in a neighborhood of z_{2i} with a strictly positive derivative β_i'(z_{2i}) = ρ_{2i}^{2/3} > 0. Thus β_i is a conformal map from D(z_{2i}, r) to a neighborhood of the origin for r small enough. Let us now cut the image of D(z_{2i}, r) by a special choice of Γ representing the cuts of the lens based in z_{2i}:
$$\beta_i\big(\Gamma \cap D(z_{2i}, r)\big) = \left\{z \,\Big|\, \arg(z) = \pm\frac{2\pi}{3}\right\}. \qquad (3\text{-}70)$$
It gives a partition of the disc D(z_{2i}, r) into four regions labeled I, II, III and IV (see figure 3) following:

• 0 < arg(β_i(z)) < 2π/3 in region I;
• 2π/3 < arg(β_i(z)) < π in region II;
• −π < arg(β_i(z)) < −2π/3 in region III;
• −2π/3 < arg(β_i(z)) < 0 in region IV.

⁵ We could do exactly the same for a branch point z_{2i−1}, with the same function β_i.

We can now transform the RH problem for P to a model one by the transformation:
$$\tilde{P}(z) := \begin{cases} P(z)\, T_{\tilde{P}P}(z) & \text{in regions I and IV,} \\ P(z) & \text{in regions II and III,} \end{cases} \qquad (3\text{-}71)$$
where one has defined:
$$T_{\tilde{P}P}(z) = \left[\delta_{\alpha\beta} - \delta_{\alpha i}\sum_{j\neq 0,i} \delta_{\beta j}\, e^{N(\lambda_j-\lambda_i)}\right]_{\alpha,\beta=0}^{k}. \qquad (3\text{-}72)$$
Thus, P̃ is the solution of the same RH problem as P except that its jumps are now given by
$$\forall z \in (R \cup \Gamma) \cap \Big(\bigcup_{i=1}^{2k} D(z_i, r)\Big)\,, \quad \tilde{P}_+(z) = \tilde{P}_-(z)\, j_{\tilde{P}}(z) \qquad (3\text{-}73)$$

where j_{P̃}(z) is given by:

• for z on [z_{2i} − r, z_{2i}[:
$$j_{\tilde{P}} = \left[(1 - \delta_{\alpha 0} - \delta_{\alpha i})\delta_{\alpha\beta} + \delta_{\alpha 0}\delta_{\beta i} - \delta_{\alpha i}\delta_{\beta 0}\right]_{\alpha,\beta=0}^{k}; \qquad (3\text{-}74)$$
• for z on the upper and lower sides of the lens in D(z_{2i}, r):
$$j_{\tilde{P}} = \left[\delta_{\alpha\beta} + \delta_{\alpha i}\delta_{\beta 0}\, e^{N(\lambda_0-\lambda_i)}\right]_{\alpha,\beta=0}^{k}; \qquad (3\text{-}75)$$
• for z on ]z_{2i}, z_{2i} + r]:
$$j_{\tilde{P}} = \left[\delta_{\alpha\beta} + \delta_{\alpha 0}\delta_{\beta i}\, e^{N(\lambda_0-\lambda_i)}\right]_{\alpha,\beta=0}^{k}. \qquad (3\text{-}76)$$

One sees that these jumps are equal to the identity except for a 2 × 2 block corresponding to the (0,0), (0,i), (i,0) and (i,i) coefficients. The RH problem then reduces to a model problem of size 2 × 2 which can be solved explicitly as follows (see [3] for example):
$$\tilde{P}(z) = E_N(z)\, \Phi\big(N^{2/3} \beta_i(z)\big)\, D(z) \qquad (3\text{-}77)$$
where E_N ensures the boundary conditions,
$$E_N = \sqrt{\pi}\, \tilde{M} \hat{M}, \qquad (3\text{-}78)$$
with
$$\tilde{M} = \left[(1 - \delta_{\alpha i}(1 + i))\delta_{\alpha\beta} - \delta_{\alpha 0}\delta_{\beta i} - i\,\delta_{\alpha i}\delta_{\beta 0}\right]_{\alpha,\beta=0}^{k} \qquad (3\text{-}79)$$
and
$$\hat{M} = \left[\Big(1 - \delta_{\alpha 0}\big(1 - N^{1/6}\beta_i^{1/4}\big) - \delta_{\alpha i}\big(1 - N^{-1/6}\beta_i^{-1/4}\big)\Big)\delta_{\alpha\beta}\right]_{\alpha,\beta=0}^{k}, \qquad (3\text{-}80)$$

Φ(z) is defined with the use of the Airy function Ai(z): it equals the identity except in the 2 × 2 block of rows and columns 0 and i, where it is given by
$$\begin{pmatrix} y_0(z) & -y_2(z) \\ y_0'(z) & -y_2'(z) \end{pmatrix} \text{ for } 0 < \arg z < \tfrac{2\pi}{3}, \qquad
\begin{pmatrix} -y_1(z) & -y_2(z) \\ -y_1'(z) & -y_2'(z) \end{pmatrix} \text{ for } \tfrac{2\pi}{3} < \arg z < \pi,$$
$$\begin{pmatrix} -y_2(z) & y_1(z) \\ -y_2'(z) & y_1'(z) \end{pmatrix} \text{ for } -\pi < \arg z < -\tfrac{2\pi}{3}, \qquad
\begin{pmatrix} y_0(z) & y_1(z) \\ y_0'(z) & y_1'(z) \end{pmatrix} \text{ for } -\tfrac{2\pi}{3} < \arg z < 0, \qquad (3\text{-}81)$$
where
$$y_\alpha(z) = e^{\frac{2\alpha i\pi}{3}}\, \mathrm{Ai}\big(e^{\frac{2\alpha i\pi}{3}} z\big) \qquad (3\text{-}82)$$
for α = 0, 1, 2, and D is the diagonal matrix
$$D(z) = \left[\Big(1 - \delta_{\alpha 0}\big(1 - e^{\frac{N}{2}(\lambda_0(z)-\lambda_i(z))}\big) - \delta_{\alpha i}\big(1 - e^{-\frac{N}{2}(\lambda_0(z)-\lambda_i(z))}\big)\Big)\delta_{\alpha\beta}\right]_{\alpha,\beta=0}^{k}. \qquad (3\text{-}83)$$

3.2.5 Final transformation

We now have a parametrix away from the branch points (M(z)) and in their neighborhoods (P(z)). It remains to put them together into a unique function defined on the whole complex plane: this is the last transformation. Define the matrix
$$\begin{array}{ll}
R(z) = S(z) M(z)^{-1} & \text{for } z \text{ outside the disks } D(z_i, r),\; i = 1, \dots, 2k, \\
R(z) = S(z) P(z)^{-1} & \text{for } z \text{ inside the disks } D(z_i, r),\; i = 1, \dots, 2k.
\end{array} \qquad (3\text{-}84)$$
One can easily check that:
$$R(z) = I + O\left(\frac{1}{N(|z| + 1)}\right) \qquad (3\text{-}85)$$
as N → ∞, uniformly for z ∈ C\∪_{i=1}^{2k} ∂D(z_i, r), and
$$R'(z) = O\left(\frac{1}{N(|z| + 1)}\right). \qquad (3\text{-}86)$$
It follows the important relation:
$$R^{-1}(x_2)\, R(x_1) = I + O\left(\frac{x_1 - x_2}{N}\right) \qquad (3\text{-}87)$$
needed to study the behavior of the kernel in the different regimes.

3.3 Mean density of particles

With the use of the kernel K_N, one can state more precisely the role played by the spectral curve.

Theorem 3.2 The limiting mean density of eigenvalues of the random matrix
$$\rho(x) := \lim_{N\to\infty} \frac{K_N(x, x)}{N} \qquad (3\text{-}88)$$
exists, is supported by the real cuts {[z_{2i−1}, z_{2i}]}_{i=1}^{k}, and is expressed in terms of the solution ξ_0 of the algebraic equation
$$E(x, \xi) = 0 \qquad (3\text{-}89)$$
by
$$\rho(x) = \frac{1}{\pi} \left|\mathrm{Im}\,\xi_0(x)\right|. \qquad (3\text{-}90)$$
It is analytic on the cuts and vanishes like a square root at the edges z_i.

proof: Let us consider x ∈ ]z_{2i−1}, z_{2i}[ for any i = 1, . . . , k, assuming that x does not belong to any D(z_j, r). Then Eq. (3-84) and Eq. (3-87) imply that
$$S_+^{-1}(y)\, S_+(x) = I + O(x - y) \qquad (3\text{-}91)$$
uniformly in N when y → x. The expression of the kernel in terms of S(x), Eq. (3-46), gives:
$$K_N(x, y) = \frac{e^{N[h(y)-h(x)]}}{2i\pi(x - y)}\, \Omega_{1,S,i}(y)\left[I + O(x - y)\right]\Omega_{2,S,i}^t(x)
= e^{N[h(y)-h(x)]}\left(\frac{\sin\big(N\,\mathrm{Im}(\lambda_{0+}(x) - \lambda_{0+}(y))\big)}{\pi(x - y)} + O(1)\right) \qquad (3\text{-}92)$$
uniformly in N. Remember that, by definition,
$$\rho(x) = \frac{1}{\pi} \frac{d\,\mathrm{Im}\,\lambda_{0+}(x)}{dx}. \qquad (3\text{-}93)$$
Thus, we get
$$K_N(x, x) = N\rho(x) + O(1), \qquad (3\text{-}94)$$
proving the first part of the theorem. For x outside of the cuts (or at a branch point), we easily see that
$$\lim_{N\to\infty} \frac{K_N(x, x)}{N} = 0 \qquad (3\text{-}95)$$
because of the ordering of the λ-functions (see th. 3.1). □

3.4 Behavior in the bulk

Let us first study the asymptotics of the kernel inside the support of eigenvalues. We recover the usual sine kernel:

Theorem 3.3 For any x ∈ ∪_{i=1}^{k} ]z_{2i−1}, z_{2i}[ and any (u, v) ∈ R², one has
$$\lim_{N\to\infty} \frac{1}{N\rho(x)}\, \hat{K}_N\left(x + \frac{u}{N\rho(x)},\, x + \frac{v}{N\rho(x)}\right) = \frac{\sin \pi(u - v)}{\pi(u - v)} \qquad (3\text{-}96)$$
where we use the rescaled kernel:
$$\hat{K}_N(x, y) = e^{N(h(x)-h(y))}\, K_N(x, y). \qquad (3\text{-}97)$$

proof: Consider x ∈ ]z_{2i−1}, z_{2i}[ for any i = 1, . . . , k and define
$$X = x + \frac{u}{N\rho(x)} \quad \text{and} \quad Y = x + \frac{v}{N\rho(x)}. \qquad (3\text{-}98)$$
Then, there exists t = x + O(1/N) such that
$$N\,\mathrm{Im}\big(\lambda_{0+}(X) - \lambda_{0+}(Y)\big) = \frac{\pi(u - v)\rho(t)}{\rho(x)}. \qquad (3\text{-}99)$$
From Eq. (3-92), it follows that
$$\frac{\hat{K}_N(X, Y)}{N\rho(x)} = \frac{\sin\big(N\,\mathrm{Im}(\lambda_{0+}(X) - \lambda_{0+}(Y))\big)}{\pi(u - v)} + O\left(\frac{1}{N}\right) = \frac{\sin(\pi(u - v))}{\pi(u - v)} + O\left(\frac{1}{N}\right). \qquad (3\text{-}100)$$
□

3.5 Behavior at the edge

Let us now study the behavior of the kernel at the edge of the spectrum, i.e. near any branch point of the spectral curve.

Theorem 3.4 For any i = 1, . . . , 2k and every (u, v) ∈ R², the rescaled kernel satisfies:
$$\lim_{N\to\infty} \frac{1}{(\rho_i N)^{2/3}}\, \hat{K}_N\left(z_i + \frac{(-1)^i u}{(\rho_i N)^{2/3}},\, z_i + \frac{(-1)^i v}{(\rho_i N)^{2/3}}\right) = \frac{\mathrm{Ai}(u)\mathrm{Ai}'(v) - \mathrm{Ai}'(u)\mathrm{Ai}(v)}{u - v}, \qquad (3\text{-}101)$$
where ρ_i is a constant characterizing the behavior of ρ near the branch point z_i:
$$\rho(x) = \frac{\rho_i}{\pi}\, |x - z_i|^{1/2}\big(1 + o(1)\big) \qquad (3\text{-}102)$$

when x → z_i.

proof: Let us consider an arbitrary branch point⁶ z_{2i} and define
$$X := z_{2i} + \frac{u}{(\rho_{2i} N)^{2/3}} \quad \text{and} \quad Y := z_{2i} + \frac{v}{(\rho_{2i} N)^{2/3}} \qquad (3\text{-}103)$$

for any (u, v) ∈ ]0, ∞[², so that, for N large enough, X and Y lie inside the circle D(z_{2i}, r) on the cut. The rescaled kernel can thus be written:
$$\hat{K}_N(X, Y) = \frac{1}{2i\pi(X - Y)}\, \Omega_{1,S,i}(Y)\, S_+^{-1}(Y)\, S_+(X)\, \Omega_{2,S,i}^t(X) \qquad (3\text{-}104)$$
with the notations of Eq. (3-46), and
$$S_+(X) = R(X)\, P_+(X) = R(X)\, \tilde{P}_+(X) \qquad (3\text{-}105)$$
since we are in the region II. Using the form (3-77) of P̃, one gets:
$$S_+(X) = R(X)\, E_N(X)\, \Phi_+\big(N^{2/3} \beta_i(X)\big)\, D_{i,+}(X) \qquad (3\text{-}106)$$
where
$$D_{i,+}(X) = \mathrm{diag}\Big(e^{N i\,\mathrm{Im}[\lambda_{0+}(X)]}, \underbrace{1, \dots, 1}_{i-1}, e^{-N i\,\mathrm{Im}[\lambda_{0+}(X)]}, \underbrace{1, \dots, 1}_{k-i}\Big). \qquad (3\text{-}107)$$
It follows that
$$\frac{1}{(\rho_{2i} N)^{2/3}}\, \hat{K}_N(X, Y) = \frac{1}{2i\pi(u - v)}\, \tilde{D}_i\, \Phi_+^{-1}\big(N^{2/3}\beta_i(Y)\big)\, E_N^{-1}(Y)\, R^{-1}(Y)\, R(X)\, E_N(X)\, \Phi_+\big(N^{2/3}\beta_i(X)\big)\, \hat{D}_i^t \qquad (3\text{-}108)$$

⁶ We consider here a branch point on the right side of a cut, but we could do exactly the same derivation for the other end point of the cut, z_{2i−1}.

where we use the vectors of size k + 1:
$$\tilde{D}_i := \Big(-1, \underbrace{0, \dots, 0}_{i-1}, 1, \underbrace{0, \dots, 0}_{k-i}\Big) \qquad (3\text{-}109)$$
and
$$\hat{D}_i := \Big(1, \underbrace{0, \dots, 0}_{i-1}, 1, \underbrace{0, \dots, 0}_{k-i}\Big). \qquad (3\text{-}110)$$

Remember that β_i(z_{2i}) = 0 and β_i'(z_{2i}) = ρ_{2i}^{2/3}. Thus
$$\Phi_+\big(N^{2/3} \beta_i(X)\big) \to \Phi_+(u) \quad \text{when } N \to \infty \qquad (3\text{-}111)$$
and
$$\lim_{N\to\infty} \Phi_+\big(N^{2/3} \beta_i(X)\big)\, \hat{D}_i^t = \Big(y_0(u), \underbrace{0, \dots, 0}_{i-1}, y_0'(u), \underbrace{0, \dots, 0}_{k-i}\Big)^t \qquad (3\text{-}112)$$
as well as
$$\lim_{N\to\infty} \tilde{D}_i\, \Phi_+^{-1}\big(N^{2/3} \beta_i(Y)\big) = -2i\pi \Big(-y_0'(v), \underbrace{0, \dots, 0}_{i-1}, y_0(v), \underbrace{0, \dots, 0}_{k-i}\Big) \qquad (3\text{-}113)$$

with the function y_0 defined by Eq. (3-82) in terms of the Airy function. Since one knows the behavior of E_N, E_N^{-1} and R when N → ∞:
$$\begin{array}{l}
E_N(X) = O\big(N^{1/6}\big), \qquad E_N^{-1}(Y) = O\big(N^{1/6}\big), \\
E_N^{-1}(Y)\, E_N(X) = I + O\big(N^{-1/3}\big), \qquad R^{-1}(Y)\, R(X) = I + O\big(N^{-5/3}\big),
\end{array} \qquad (3\text{-}114)$$
we finally prove that
$$\lim_{N\to\infty} \frac{1}{(\rho_{2i} N)^{2/3}}\, \hat{K}_N(X, Y) = \frac{y_0(u)\, y_0'(v) - y_0'(u)\, y_0(v)}{u - v} \qquad (3\text{-}115)$$
which gives the theorem for u, v > 0. For the other signs of u and v, we can derive the result in the same way by using the appropriate expression of the kernel as well as the definition of the h function outside of the cut. □


Figure 4: Cuts of the ξ functions in the x-plane.

4 Intermediate state

We consider here an intermediate configuration where two or more segments have already merged. It corresponds to having at least two non-real complex conjugate branch points.
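The appearance of complex conjugate branch points is easy to exhibit numerically (values chosen by us for illustration): for eigenvalues a = ±0.8 with equal filling fractions 1/2, the branch-point equation (4-2) below, 1 = Σ_i (n_i/N)/(y − a_i)², becomes (y² − 0.64)² = y² + 0.64 and has two real roots and one complex conjugate pair:

```python
import numpy as np

# Branch points solve 1 = sum_i eps_i / (y - a_i)^2 (Eq. 4-2), i.e. the
# polynomial prod_i (y - a_i)^2 - sum_i eps_i prod_{j != i} (y - a_j)^2 = 0.
# Illustrative data: a = (0.8, -0.8), eps = (1/2, 1/2).
a = np.array([0.8, -0.8])
eps = np.array([0.5, 0.5])

P = np.poly(np.repeat(a, 2))
for i in range(len(a)):
    P = np.polysub(P, eps[i] * np.poly(np.repeat(np.delete(a, i), 2)))
roots = np.roots(P)

real = roots[np.abs(roots.imag) < 1e-9]
cplx = roots[np.abs(roots.imag) >= 1e-9]
print(len(real), len(cplx))  # -> 2 2
```

Moving the a_i (or the time parameter in a_i(t)) apart makes the conjugate pair collide on the real axis and split into real branch points; the cut Γ_i^(m) of the text connects each pair through its real crossing point r_i^(m).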

4.1 Study of the spectral curve

We consider an intermediate time in the evolution of the Brownian motions. The spectral curve remains the same:
$$E(x, y) = 0 \qquad (4\text{-}1)$$
but the values of the coefficients make its structure different through the position of the branch points. Let us consider the following case (which is generic). The 2k branch points, solutions of the equation in y
$$1 = \sum_{i=1}^{k} \frac{n_i}{N}\, \frac{1}{(y - a_i(t))^2}\,, \qquad (4\text{-}2)$$
are such that:

• there are 2l real branch points z_i with i = 1, . . . , 2l and 0 < l < k;
• for i = 1, . . . , l, there are 0 ≤ b_i pairs of distinct non-real complex conjugated branch points $\big(w_i^{(m)}, \overline{w}_i^{(m)}\big)_{m=1}^{b_i}$, each linked by a cut Γ_i^{(m)} crossing the real axis in a unique point r_i^{(m)} satisfying:
$$\forall i\,, \quad z_{2i-1} < r_i^{(1)} < r_i^{(2)} < \dots < r_i^{(b_i)} < z_{2i}. \qquad (4\text{-}3)$$

Remark 4.1 One obviously has the relation:
$$k = l + \sum_{i=1}^{l} b_i\,, \qquad (4\text{-}4)$$
and we use a double index notation to sort the filling fractions:
$$n_i^{(m)} := n_{\sum_{j=1}^{i-1} b_j + m + i} \qquad (4\text{-}5)$$
and the eigenvalues:
$$a_i^{(m)} := a_{\sum_{j=1}^{i-1} b_j + m + i}. \qquad (4\text{-}6)$$

Let us now study the properties of the spectral curve as we did in the preceding section. We are particularly interested in its sheeted structure. A generic point of the complex plane (i.e. not a branch point) has k + 1 pre-images on the spectral curve. Let us denote these pre-images by ξ_0(x) and ξ_i^{(m)}(x) with i = 1, . . . , l and m = 0, . . . , b_i. These different pre-images are characterized by their asymptotic behaviors when x → ∞:
$$\begin{array}{l}
\xi_0(x) = x - \frac{1}{x} + O\left(\frac{1}{x^2}\right), \\
\xi_i^{(m)}(x) = a_i^{(m)} + \frac{n_i^{(m)}}{N}\,\frac{1}{x} + O\left(\frac{1}{x^2}\right).
\end{array} \qquad (4\text{-}7)$$
From these asymptotic behaviors, one can define the ξ-functions by analytic continuation as follows⁷:

• ξ_0(x) can be analytically continued to C\∪_{i=1}^{l} [z_{2i−1}, z_{2i}];
• for i = 1, . . . , l and m = 0, . . . , b_i, ξ_i^{(m)}(x) can be analytically continued to C\(Γ_i^{(m)} ∪ [r_i^{(m)}, r_i^{(m+1)}] ∪ Γ_i^{(m+1)}), where we use the notation:
$$\begin{cases} w_i^{(0)} = \overline{w}_i^{(0)} = r_i^{(0)} := z_{2i-1}, \\ w_i^{(b_i+1)} = \overline{w}_i^{(b_i+1)} = r_i^{(b_i+1)} := z_{2i}. \end{cases} \qquad (4\text{-}8)$$

It implies the sheet structure described in fig. 5⁸:

• the 0-sheet is connected to the (i, m)-sheet along the real cut [r_i^{(m)}, r_i^{(m+1)}] for i = 1, . . . , l and m = 0, . . . , b_i, with the jumps:
$$\xi_{0\pm}(x) = \xi_{i\mp}^{(m)}(x) \quad \text{for } x \in\, ]r_i^{(m)}, r_i^{(m+1)}[\,; \qquad (4\text{-}9)$$

⁷ For the sake of brevity, I did not include any proof of this sheet structure in the present manuscript. Nevertheless, the simplicity of the spectral curve implies a simple, but long and uninteresting, proof of these facts. One can, for example, follow the time evolution of the sheet structure from t = 1 to t = 0 through the rational parametrization of the algebraic curve. As such a proof does not involve any new technical element, it is left to the reader as an exercise.
⁸ We label the sheets (except the 0-sheet) by two indices (i, m) corresponding to the associated ξ-function ξ_i^{(m)}.

Figure 5: Sheeted structure of the spectral curve for an intermediate step. In this example, there are 4 real branch points and 4 pairs of conjugated ones.

• for i = 1, . . . , l and m = 1, . . . , b_i − 1, the (i, m)-sheet is connected to the (i, m−1)- and (i, m+1)-sheets along the imaginary cuts Γ_i^{(m)} and Γ_i^{(m+1)} respectively, with the jumps:
$$\begin{array}{l}
\xi_{i\pm}^{(m)}(x) = \xi_{i\mp}^{(m-1)}(x) \quad \text{for } x \in \Gamma_i^{(m)}, \\
\xi_{i\pm}^{(m)}(x) = \xi_{i\mp}^{(m+1)}(x) \quad \text{for } x \in \Gamma_i^{(m+1)}.
\end{array} \qquad (4\text{-}10)$$
We finally define the associated λ-functions as in the preceding section:

Definition 4.1 Let the functions λ_i^{(m)} be the primitives of ξ_i^{(m)} defined by the conditions:
$$\lambda_0(z_{2l}) = 0, \qquad (4\text{-}11)$$
$$\lambda_{i+}^{(0)}(z_{2i-1}) = \lambda_{0-}(z_{2i-1}), \qquad (4\text{-}12)$$
$$\lambda_i^{(b_i)}(z_{2i}) = \lambda_{0+}(z_{2i}) \qquad (4\text{-}13)$$
and
$$\lambda_{i+}^{(m)}\big(w_i^{(m+1)}\big) = \lambda_{i-}^{(m+1)}\big(w_i^{(m+1)}\big) \qquad (4\text{-}14)$$
for m = 1, . . . , b_i − 1. One can easily prove that they satisfy the following properties:

Lemma 4.1 λ_0 is analytic in C\∪_i [z_{2i−1}, z_{2i}], and λ_i^{(m)} is analytic in C\(∪_{j=1}^{i−1} [z_{2j−1}, z_{2j}] ∪ [z_{2i−1}, r_i^{(m)}] ∪ Γ_i^{(m)}) for i = 1, . . . , l and m = 1, . . . , b_i. The jumps and asymptotics of these λ-functions are given by:

• When z → ∞:
$$\lambda_0 = \frac{z^2}{2} - \ln z + l_0 + O\left(\frac{1}{z^2}\right) \qquad (4\text{-}15)$$
and, for i ≠ 0:
$$\lambda_i^{(m)} = a_i^{(m)} z + \frac{n_i^{(m)}}{N} \ln z + l_i^{(m)} + O\left(\frac{1}{z^2}\right) \qquad (4\text{-}16)$$
where the l_i^{(m)}'s are constants chosen to satisfy the constraints (4-11)–(4-14).

• They follow the jumps:
$$\begin{array}{ll}
\lambda_{0+}(x) - \lambda_{0-}(x) = -2i\pi \displaystyle\sum_{j=i+1}^{l} \sum_{m=1}^{b_j} \frac{n_j^{(m)}}{N}\,, & x \in\, ]z_{2i-2}, z_{2i-1}[ \\
\lambda_{0+}(x) - \lambda_{0-}(x) = -2i\pi\,, & x \in\, ]-\infty, z_1[.
\end{array} \qquad (4\text{-}17)$$
For i ∈ [1, l]:
$$\begin{array}{ll}
\lambda_{i+}^{(m)}(x) - \lambda_{i-}^{(m)}(x) = 2i\pi \frac{n_i^{(m)}}{N}\,, & x \in\, ]-\infty, r_i^{(m)}[ \\
\lambda_{0+}(x) - \lambda_{l-}^{(b_l)}(x) = 0\,, \quad \lambda_{0-}(x) - \lambda_{l+}^{(b_l)}(x) = 0\,, & x \in\, ]z_{2l-1}, z_{2l}[
\end{array} \qquad (4\text{-}18)$$
For i ∈ [1, l−1]:
$$\lambda_{0+}(x) - \lambda_{i-}^{(m)}(x) = 0\,, \qquad x \in [r_i^{(m)}, r_i^{(m+1)}] \qquad (4\text{-}19)$$
and
$$\lambda_{i\pm}^{(m)}(x) = \lambda_{i\mp}^{(m+1)}(x)\,, \qquad x \in \Gamma_i^{(m+1)+} \qquad (4\text{-}20)$$
where Γ_i^{(m)+} denotes the part of Γ_i^{(m)} lying inside the upper half plane.

  (m) (h) Study of the sign of Re λi − λj   (m) (h) It is very important to know the sign of Re λi − λj for (i, m) 6= (j, h) in order to study the asymptotic behavior of the jump matrices in the Riemann-Hilbert problem we 4.1.1

33

  (m) want to solve. For this purpose, it is convenient to draw the curves where Re λi =   (h) Re λj . Let us build them step by step. First of all, from the behavior of the λ-functions near the branch points, one can state the following results: • From at equal angles  any  branch point z2i−1 go h three curves i (0) (1) Re λi , one of them being z2i−1 , ri . • From at equal angles  any branch point z2i goh three curves i (bi ) (bi ) Re λi one of them being ri , z2i . (m)

2π 3

2π 3

where Re (λ0 ) =

where Re (λ0 ) =

(m)

2π where • From point 3 h  any branch   wi (resp. wi ) go three h curves at h equal angles h (m) (m−1) (m) (m) Re λi = Re λi , one of them being wi , i∞ (resp. w i , −i∞ ). (m+)

(m+)

(m−)

(m−)

Let us denote Γi,R and Γi,L (resp. Γi,R and Γi,L ) the two other curve going (m) (m) (m) from wi (resp. wi ), the first one lying to the right hand side of the curve Γi (m+) (m−) (m+) (m−) (see fig. 6). The curves Γi,R and Γi,R (resp. Γi,L and Γi,L ) intersect in the (m)

(m∓)

(m)

(m∓)

real line in xi,R (resp. xi,L ). On the analytical continuation Γi,R (resp. Γi,L )   (m±) (m±) (m) (m) (m−1) of Γi,R (resp. Γi,L ) beyond xi,R (resp. xi,L ), one has Re λ0 − λi =0   (m) (resp. Re λ0 − λi = 0) because of the cut on the real line.

  (m) (h) We have investigated the curves Re λi − λj near the branch points, let us now look at their behavior away from them. Many different configuration can occur, (m) (m) (m) depending on the relative positions of the xi,R ’s, the xi,L ’s, the ri ’s and the real branch points zi . In order to limit the size of this paper, we restrict our study to (m) (m) (m+1) (m) (m+1) (m+1) the cases where ri < xi,R < ri and ri < xi,L < ri . Nevertheless, the RH problem corresponding to the other possible configurations can be solved with the same method and the same even if we do not give a proof here (the  transformations  (m) (h) study if the signs of Re λi − λj is a little bit more subtle but does not present any particular difficulty).         (m−1) (m) (m+1) (m) Note also that the curves Re λi = Re λi and Re λi = Re λi (m+ 12 ) joining cross in two points symmetricwith respect to the real axis. On the line D i    (m−1) (m+1) these two points, one has Re λi = Re λi . (m)

With this assumption, between $r_i^{(m)}$ and $r_i^{(m+1)}$, there are two different possible configurations:

• In the first case:

\[
r_i^{(m)} < x_{i,R}^{(m)} < x_{i,L}^{(m+1)} < r_i^{(m+1)} . \qquad (4\text{-}21)
\]

Figure 6: Zoom near the vertical cut $\Gamma_i^{(m)}$. The thick solid lines are the curves where $\mathrm{Re}\big(\lambda_i^{(m)} - \lambda_i^{(m-1)}\big) = 0$. On the dashed lines $\Gamma_{i,R}^{(m\pm)}$ (resp. $\Gamma_{i,L}^{(m\pm)}$), one has $\mathrm{Re}\big(\lambda_0 - \lambda_i^{(m-1)}\big) = 0$ (resp. $\mathrm{Re}\big(\lambda_i^{(m)} - \lambda_0\big) = 0$).

The relative orderings of $\mathrm{Re}\big(\lambda_i^{(m-1)}(x)\big)$, $\mathrm{Re}\big(\lambda_i^{(m)}(x)\big)$, $\mathrm{Re}\big(\lambda_i^{(m+1)}(x)\big)$ and $\mathrm{Re}\big(\lambda_0^{(0)}(x)\big)$ for $x$ between $\Gamma_i^{(m)}$ and $\Gamma_i^{(m+1)}$ are depicted in fig. 7.

• In the second case:

\[
r_i^{(m)} < x_{i,L}^{(m+1)} < x_{i,R}^{(m)} < r_i^{(m+1)} . \qquad (4\text{-}22)
\]

The relative orderings of $\mathrm{Re}\big(\lambda_i^{(m-1)}(x)\big)$, $\mathrm{Re}\big(\lambda_i^{(m)}(x)\big)$, $\mathrm{Re}\big(\lambda_i^{(m+1)}(x)\big)$ and $\mathrm{Re}\big(\lambda_0^{(0)}(x)\big)$ for $x$ between $\Gamma_i^{(m)}$ and $\Gamma_i^{(m+1)}$ are depicted in fig. 8.

In both configurations one has the following result:

Lemma 4.2 For $x$ between $\Gamma_i^{(m)}$ and $\Gamma_i^{(m+1)}$:

\[
\mathrm{Re}\big(\lambda_j^{(h)}(x)\big) > \mathrm{Re}\big(\lambda_0(x)\big) \qquad (4\text{-}23)
\]

for $(j,h) \neq (i, m-1),\, (i,m),\, (i,m+1)$.

4.2 Riemann-Hilbert analysis

We now solve the Riemann-Hilbert problem presented in the preceding section for this particular value of the external matrix eigenvalues $a_i$ by performing a sequence of transformations leading to a simple problem when $N \to \infty$.

Figure 7: Study of the sign of $\mathrm{Re}\big(\lambda_j^{(h)} - \lambda_{j'}^{(h')}\big)$ for the first case. For readability, $m > m{+}1 > m{-}1 > 0$ stands for $\mathrm{Re}\big(\lambda_i^{m}(x)\big) > \mathrm{Re}\big(\lambda_i^{m+1}(x)\big) > \mathrm{Re}\big(\lambda_i^{m-1}(x)\big) > \mathrm{Re}\big(\lambda_0^{0}(x)\big)$.

For convenience, we use a double-index notation for the matrices in this section: considering an arbitrary $(k+1) \times (k+1)$ matrix $A$, we denote its matrix elements as follows:

\[
[A]_{(i,m);(i',m')} := [A]_{\sum_{j=1}^{i-1} b_j + m + i,\; \sum_{j=1}^{i'-1} b_j + m' + i'} \qquad (4\text{-}24)
\]

for $i = 1, \dots, l$ and $m = 0, \dots, b_i$. The first line (or column) is denoted by the pair $(0,0)$. This notation coincides with the notation used to sort the branch points and filling fractions as well as the $\xi_i^{(m)}$ and $\lambda_i^{(m)}$ functions: the rows are sorted in $l$ groups, the $i$'th one of $b_i + 1$ elements, corresponding to the real cut $[z_{2i-1}, z_{2i}]$ shared by $b_i + 1$ sheets.

With these notations, let us recall the RH problem to solve. We are looking for a $(k+1) \times (k+1)$ matrix $Y(x)$ satisfying the constraints:

• $Y$ is analytic on $\mathbb{C} \setminus \mathbb{R}$;

• for $x \in \mathbb{R}$, one has the jumps

\[
Y_+(x) = Y_-(x)
\begin{pmatrix}
1 & \omega_1^{(0)}(x) & \omega_1^{(1)}(x) & \dots & \omega_l^{(b_l)}(x) \\
0 & 1 & 0 & \dots & 0 \\
0 & 0 & 1 & \dots & 0 \\
\dots & \dots & \dots & \dots & \dots \\
0 & 0 & 0 & \dots & 1
\end{pmatrix}
\qquad (4\text{-}25)
\]
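The double-index map of Eq. (4-24) can be sanity-checked numerically. In the sketch below, the multiplicities $b = (2,1)$ are made-up illustrative data, and the 0-based flat-index convention (with $(0,0) \mapsto 0$) is an assumption consistent with the first row being labeled $(0,0)$.

```python
# Illustrative sanity check of the double-index notation of Eq. (4-24):
# the pair (i, m), i = 1..l, m = 0..b_i, maps to the flat index
# sum_{j=1}^{i-1} b_j + m + i, with (0, 0) labeling the first row (index 0).
def flat_index(i, m, b):
    """0-based flat index of the row labeled (i, m); b = (b_1, ..., b_l)."""
    return sum(b[j] for j in range(i - 1)) + m + i  # empty sum when i = 0

b = [2, 1]  # l = 2 cuts with multiplicities b_1 = 2, b_2 = 1 (made-up data)
pairs = [(0, 0)] + [(i, m) for i in (1, 2) for m in range(b[i - 1] + 1)]
# The 1 + (b_1 + 1) + (b_2 + 1) = 6 labels enumerate the rows 0..5:
print([flat_index(i, m, b) for (i, m) in pairs])  # → [0, 1, 2, 3, 4, 5]
```

The map is a bijection onto $\{0, \dots, k\}$, which is what makes the double-index notation consistent with ordinary matrix indexing.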

Figure 8: Study of the sign of $\mathrm{Re}\big(\lambda_j^{(h)} - \lambda_{j'}^{(h')}\big)$ for the second case. For readability, $m > m{+}1 > m{-}1 > 0$ stands for $\mathrm{Re}\big(\lambda_i^{m}(x)\big) > \mathrm{Re}\big(\lambda_i^{m+1}(x)\big) > \mathrm{Re}\big(\lambda_i^{m-1}(x)\big) > \mathrm{Re}\big(\lambda_0^{0}(x)\big)$.

where $Y_+(x)$ and $Y_-(x)$ denote respectively the limits of $Y(z)$ when $z \to x$ from the upper and lower half-planes, and $\omega_i^{(m)}(x) := e^{-N\left(\frac{x^2}{2} - a_i^{(m)} x\right)}$;

• when $x \to \infty$, one has the asymptotic behavior:

\[
Y(x) = \left(I + O\!\left(\frac{1}{x}\right)\right)
\begin{pmatrix}
x^{N} & 0 & 0 & \dots & 0 \\
0 & x^{-n_1^{(0)}} & 0 & \dots & 0 \\
0 & 0 & x^{-n_1^{(1)}} & \dots & 0 \\
\dots & \dots & \dots & \dots & \dots \\
0 & 0 & 0 & \dots & x^{-n_l^{(b_l)}}
\end{pmatrix}.
\qquad (4\text{-}26)
\]

4.2.1 First transformation

The first transformation is the same as in the large time case when the branch points are all real. We define:

\[
T(x) := \Omega_{1,YT}\, Y(x)\, \Omega_{2,YT} \qquad (4\text{-}27)
\]

for $x \in \mathbb{C} \setminus \left( \mathbb{R} \cup \bigcup_{i=1}^{l} \bigcup_{m=1}^{b_i} \big[w_i^{(m)}, \overline{w}_i^{(m)}\big] \right)$, with

\[
\Omega_{1,YT} := \mathrm{diag}\left(e^{-N l_0},\, e^{-N l_1^{(0)}},\, \dots,\, e^{-N l_l^{(b_l)}}\right) \qquad (4\text{-}28)
\]

and

\[
\Omega_{2,YT} := \mathrm{diag}\left(e^{N\left(\lambda_0 - \frac{x^2}{2}\right)},\;
e^{N\left(\lambda_1^{(0)} - \frac{n_1^{(0)}}{N}\, x\right)},\;
e^{N\left(\lambda_1^{(1)} - \frac{n_1^{(1)}}{N}\, x\right)},\; \dots,\;
e^{N\left(\lambda_l^{(b_l)} - \frac{n_l^{(b_l)}}{N}\, x\right)}\right). \qquad (4\text{-}29)
\]

Then one can check that it is the solution of the Riemann-Hilbert problem:

• $T$ is analytic on $\mathbb{C} \setminus \left( \mathbb{R} \cup \bigcup_{i=1}^{l} \bigcup_{m=1}^{b_i} \big[w_i^{(m)}, \overline{w}_i^{(m)}\big] \right)$;

• $T$ has the jumps

\[
T_+(x) = T_-(x)\, j_T(x) \qquad (4\text{-}30)
\]

where

\[
j_T(x) =
\begin{pmatrix}
e^{N(\lambda_{0+}(x)-\lambda_{0-}(x))} & e^{N(\lambda_{1+}^{(0)}(x)-\lambda_{0-}(x))} & e^{N(\lambda_{1+}^{(1)}(x)-\lambda_{0-}(x))} & \dots & e^{N(\lambda_{l+}^{(b_l)}(x)-\lambda_{0-}(x))} \\
0 & e^{N(\lambda_{1+}^{(0)}(x)-\lambda_{1-}^{(0)}(x))} & 0 & \dots & 0 \\
0 & 0 & e^{N(\lambda_{1+}^{(1)}(x)-\lambda_{1-}^{(1)}(x))} & \dots & 0 \\
\dots & \dots & \dots & \dots & \dots \\
0 & 0 & 0 & \dots & e^{N(\lambda_{l+}^{(b_l)}(x)-\lambda_{l-}^{(b_l)}(x))}
\end{pmatrix}
\qquad (4\text{-}31)
\]

for $x \in \mathbb{R}$, and

\[
[j_T(x)]_{(j,h);(j',h')} = \delta_{(j,h);(j',h')} \left[ 1 + \left( \delta_{(j,h);(i,m)} + \delta_{(j,h);(i,m+1)} \right) \left( e^{N\left(\lambda_{j+}^{(h)} - \lambda_{j-}^{(h)}\right)} - 1 \right) \right] \qquad (4\text{-}32)
\]

for $x \in \big[w_i^{(m+1)}, \overline{w}_i^{(m+1)}\big]$ with $i = 1, \dots, l$ and $m = 0, \dots, b_i - 1$.⁹

• when $x \to \infty$, one has the asymptotic behavior:

\[
T(x) = I + O\!\left(\frac{1}{x}\right). \qquad (4\text{-}33)
\]

Note that one can factorize the jump matrix $j_T(x)$ depending on the cut on which $x$ lies. For $x \in \big[r_i^{(m)}, r_i^{(m+1)}\big]$ with $i = 1, \dots, l$ and $m = 0, \dots, b_i$, all the diagonal terms of $j_T$ reduce to 1 except the first one and the term $[(i,m);(i,m)]$, whereas the term $[(0,0);(i,m)]$ (i.e. the $(i,m)$'th term of the first line) also goes to 1¹⁰:

\[
\begin{pmatrix}
e^{N(\lambda_{0+}-\lambda_{0-})} & e^{N(\lambda_{1+}^{(0)}-\lambda_{0-})} & \dots & e^{N(\lambda_{i+}^{(m-1)}-\lambda_{0-})} & 1 & e^{N(\lambda_{i+}^{(m+1)}-\lambda_{0-})} & \dots & e^{N(\lambda_{l+}^{(b_l)}-\lambda_{0-})} \\
0 & 1 & \dots & 0 & 0 & 0 & \dots & 0 \\
\dots & \dots & \dots & \dots & \dots & \dots & \dots & \dots \\
0 & 0 & \dots & 1 & 0 & 0 & \dots & 0 \\
0 & 0 & \dots & 0 & e^{N(\lambda_{i+}^{(m)}-\lambda_{i-}^{(m)})} & 0 & \dots & 0 \\
0 & 0 & \dots & 0 & 0 & 1 & \dots & 0 \\
\dots & \dots & \dots & \dots & \dots & \dots & \dots & \dots \\
0 & 0 & \dots & 0 & 0 & 0 & \dots & 1
\end{pmatrix}
\qquad (4\text{-}34)
\]

⁹ It simply means that the jump matrix is the identity for $x \in \big[w_i^{(m+1)}, \overline{w}_i^{(m+1)}\big]$ except for the $(i,m)$'th and $(i,m+1)$'th diagonal terms, which are equal to $e^{N\left(\lambda_{i+}^{(m+1)} - \lambda_{i-}^{(m+1)}\right)}$ and $e^{N\left(\lambda_{i+}^{(m)} - \lambda_{i-}^{(m)}\right)}$ respectively.

For $x \in \mathbb{R} \setminus \bigcup_i [z_{2i-1}, z_{2i}]$, all the diagonal terms of $j_T$ reduce to 1 and it takes the form

\[
j_T =
\begin{pmatrix}
1 & e^{N(\lambda_{1+}^{(0)}(x)-\lambda_{0-}(x))} & e^{N(\lambda_{1+}^{(1)}(x)-\lambda_{0-}(x))} & \dots & e^{N(\lambda_{l+}^{(b_l)}(x)-\lambda_{0-}(x))} \\
0 & 1 & 0 & \dots & 0 \\
0 & 0 & 1 & \dots & 0 \\
\dots & \dots & \dots & \dots & \dots \\
0 & 0 & 0 & \dots & 1
\end{pmatrix}.
\qquad (4\text{-}35)
\]

The kernel can be written:

\[
K_N(x,y) = \frac{e^{N\left(\frac{x^2}{4} - \frac{y^2}{4}\right)}}{2 i \pi (x-y)}\; \Omega_{1,T}(y)\, T_+^{-1}(y)\, T_+(x)\, \Omega_{2,T}^{t}(x) \qquad (4\text{-}36)
\]

where

\[
\Omega_{1,T}(x) = \left[\, 0 \;\; e^{N \lambda_{1+}^{(0)}(x)} \;\; \dots \;\; e^{N \lambda_{l+}^{(b_l)}(x)} \,\right] \qquad (4\text{-}37)
\]

and

\[
\Omega_{2,T}(x) = \left[\, e^{-N \lambda_{0+}(x)} \;\; 0 \;\; \dots \;\; 0 \,\right]. \qquad (4\text{-}38)
\]

4.2.2 Second transformation, global opening of lenses

We now proceed to the second transformation, which cancels the exponentially increasing coefficients of the jump matrix $j_T$. This transformation was introduced in [4] for the particular case of two imaginary branch points, $i = 1$ and $b_1 = 1$. We generalize this procedure, using intensively the study of the spectral curve presented in the preceding section.

For this purpose, we define a contour $\Sigma = \bigcup_{i=1}^{l} \Sigma_i$ composed of $l$ connected components $\Sigma_i$ built as follows (see figures 7, 8 and 9):

¹⁰ We show here the form of $j_T$ for $m \neq 0, b_i$. These two cases take exactly the same kind of form.

Figure 9: Example of contour $\Sigma$ used for the global opening of lenses. Here $b_i = 2$ and we are in the first case, where $x_{i,R}^{(1)} < x_{i,L}^{(2)}$.

• $\Sigma_i$ is a closed curve encircling the cut $[z_{2i-1}, z_{2i}]$, cutting the real axis in $x_{2i-1}$ and $x_{2i}$ satisfying:

\[
x_{2i-2} < x_{2i-1} < z_{2i-1} < z_{2i} < x_{2i} ; \qquad (4\text{-}39)
\]

• $\Sigma_i$ intersects the line $r_i^{(m)} + i\mathbb{R}$ in $w_i^{(m)}$ and $\overline{w}_i^{(m)}$ for $m = 1, \dots, b_i$;

• in a neighborhood of $w_i^{(m)}$ and $\overline{w}_i^{(m)}$, $\Sigma_i$ is the analytic continuation of the curves $\mathrm{Re}\big(\lambda_i^{(m)}\big) = \mathrm{Re}\big(\lambda_i^{(m-1)}\big)$;

• in the half-plane to the left (resp. to the right) of $r_i^{(m)} + i\mathbb{R}$, $\Sigma$ lies in the region $\mathrm{Re}\big(\lambda_i^{(m-1)}\big) > \mathrm{Re}\big(\lambda_i^{(m)}\big)$ (resp. $\mathrm{Re}\big(\lambda_i^{(m-1)}\big) < \mathrm{Re}\big(\lambda_i^{(m)}\big)$) for any $m = 1, \dots, b_i$.

Let us now define the second transformation $T \to U$.

Definition 4.2 Let $U(x)$ be the $(k+1) \times (k+1)$ matrix defined by $U(x) = T(x)\, J_{UT}(x)$, where $J_{UT}$ is given by

• For $x$ outside $\Sigma$, $J_{UT}(x) = I$;

• For $x$ inside $\Sigma_i$ and $\mathrm{Re}(x) < r_i^{(1)}$:

\[
[J_{UT}(x)]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} - \delta_{(j,h);(i,0)}\, \delta_{(j',h');(i,1)}\, e^{N\left(\lambda_i^{(1)}(x) - \lambda_i^{(0)}(x)\right)} ; \qquad (4\text{-}40)
\]

• For $x$ inside $\Sigma_i$ between $\Gamma_i^{(m)}$ and $\Gamma_i^{(m+1)}$:

\[
[J_{UT}(x)]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} - \delta_{(j,h);(i,m)}\, \delta_{(j',h');(i,m-1)}\, e^{N\left(\lambda_i^{(m-1)}(x) - \lambda_i^{(m)}(x)\right)} - \delta_{(j,h);(i,m)}\, \delta_{(j',h');(i,m+1)}\, e^{N\left(\lambda_i^{(m+1)}(x) - \lambda_i^{(m)}(x)\right)} ; \qquad (4\text{-}41)
\]

• For $x$ inside $\Sigma_i$ and $\mathrm{Re}(x) > r_i^{(b_i)}$:

\[
[J_{UT}(x)]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} - \delta_{(j,h);(i,b_i)}\, \delta_{(j',h');(i,b_i-1)}\, e^{N\left(\lambda_i^{(b_i-1)}(x) - \lambda_i^{(b_i)}(x)\right)} . \qquad (4\text{-}42)
\]

The matrix $U(x)$ is then the solution of the Riemann-Hilbert problem:

• $U(x)$ is analytic in $\mathbb{C} \setminus \left( \mathbb{R} \cup \Sigma \cup \bigcup_{i,m} \Gamma_i^{(m)} \right)$;

• when $x \to \infty$, one has the asymptotic behavior:

\[
U(x) = I + O\!\left(\frac{1}{x}\right). \qquad (4\text{-}43)
\]

• One has the jumps $U_+(x) = U_-(x)\, j_U(x)$ with the jump matrix given by:

+ for $x \in \big]r_i^{(m)}, r_i^{(m+1)}\big[$, with $i = 1, \dots, l$ and $m = 0, \dots, b_i$, the jump matrix of $U$ is the same as the jump matrix $j_T$ of Eq. (4-34), except that the $(i,m-1)$'th and the $(i,m+1)$'th terms of the first line are taken equal to 0:

\[
\begin{pmatrix}
e^{N(\lambda_{0+}(x)-\lambda_{0-}(x))} & e^{N(\lambda_{1+}^{(0)}(x)-\lambda_{0-}(x))} & \dots & 0 & 1 & 0 & \dots & e^{N(\lambda_{l+}^{(b_l)}(x)-\lambda_{0-}(x))} \\
0 & 1 & \dots & 0 & 0 & 0 & \dots & 0 \\
\dots & \dots & \dots & \dots & \dots & \dots & \dots & \dots \\
0 & 0 & \dots & 1 & 0 & 0 & \dots & 0 \\
0 & 0 & \dots & 0 & e^{N(\lambda_{i+}^{(m)}(x)-\lambda_{i-}^{(m)}(x))} & 0 & \dots & 0 \\
0 & 0 & \dots & 0 & 0 & 1 & \dots & 0 \\
\dots & \dots & \dots & \dots & \dots & \dots & \dots & \dots \\
0 & 0 & \dots & 0 & 0 & 0 & \dots & 1
\end{pmatrix}.
\qquad (4\text{-}44)
\]

+ for $x \in \big[z_{2i-1}, r_i^{(1)}\big]$:

\[
j_U =
\begin{pmatrix}
e^{N(\lambda_{0+}(x)-\lambda_{0-}(x))} & 1 & 0 & e^{N(\lambda_{1+}^{(2)}(x)-\lambda_{0-}(x))} & \dots & e^{N(\lambda_{l+}^{(b_l)}(x)-\lambda_{0-}(x))} \\
0 & e^{N(\lambda_{1+}^{(0)}(x)-\lambda_{1-}^{(0)}(x))} & 0 & 0 & \dots & 0 \\
0 & 0 & 1 & 0 & \dots & 0 \\
\dots & \dots & \dots & \dots & \dots & \dots \\
0 & 0 & 0 & 0 & \dots & 1
\end{pmatrix}.
\qquad (4\text{-}45)
\]

+ for $x \in \big[r_i^{(b_i)}, z_{2i}\big]$, $j_U$ is given by:

\[
\begin{pmatrix}
e^{N(\lambda_{0+}(x)-\lambda_{0-}(x))} & e^{N(\lambda_{1+}^{(0)}(x)-\lambda_{0-}(x))} & \dots & 0 & 1 & e^{N(\lambda_{i+1,+}^{(0)}(x)-\lambda_{0-}(x))} & \dots & e^{N(\lambda_{l+}^{(b_l)}(x)-\lambda_{0-}(x))} \\
0 & 1 & \dots & 0 & 0 & 0 & \dots & 0 \\
\dots & \dots & \dots & \dots & \dots & \dots & \dots & \dots \\
0 & 0 & \dots & 1 & 0 & 0 & \dots & 0 \\
0 & 0 & \dots & 0 & e^{N(\lambda_{i+}^{(b_i)}(x)-\lambda_{i-}^{(b_i)}(x))} & 0 & \dots & 0 \\
0 & 0 & \dots & 0 & 0 & 1 & \dots & 0 \\
\dots & \dots & \dots & \dots & \dots & \dots & \dots & \dots \\
0 & 0 & \dots & 0 & 0 & 0 & \dots & 1
\end{pmatrix}.
\qquad (4\text{-}46)
\]

+ for $x \in \mathbb{R} \setminus \bigcup_i [x_{2i-1}, x_{2i}]$:

\[
j_U =
\begin{pmatrix}
1 & e^{N(\lambda_{1+}^{(0)}(x)-\lambda_{0-}(x))} & e^{N(\lambda_{1+}^{(1)}(x)-\lambda_{0-}(x))} & \dots & e^{N(\lambda_{l+}^{(b_l)}(x)-\lambda_{0-}(x))} \\
0 & 1 & 0 & \dots & 0 \\
0 & 0 & 1 & \dots & 0 \\
\dots & \dots & \dots & \dots & \dots \\
0 & 0 & 0 & \dots & 1
\end{pmatrix}.
\qquad (4\text{-}47)
\]

+ for $x \in [x_{2i-1}, z_{2i-1}]$, $j_U$ is the same matrix as for $x \in \mathbb{R} \setminus \bigcup_i [x_{2i-1}, x_{2i}]$, with the term $(i,1)$ of the first line taken equal to 0.

+ for $x \in [z_{2i}, x_{2i}]$, $j_U$ is the same matrix as for $x \in \mathbb{R} \setminus \bigcup_i [x_{2i-1}, x_{2i}]$, with the term $(i, b_i - 1)$ of the first line taken equal to 0.

+ for $x \in \Gamma_i^{(m)}$:

\[
\begin{aligned}
[j_U(x)]_{(j,h);(j',h')} ={}& \delta_{(j,h);(j',h')} \left[ 1 - \delta_{(j,h);(i,m)} + \delta_{(j,h);(i,m-1)} \left( e^{N\left(\lambda_{i+}^{(m-1)} - \lambda_{i-}^{(m-1)}\right)} - 1 \right) \right] \\
& + \delta_{(j,h);(i,m)}\, \delta_{(j',h');(i,m-1)} - \delta_{(j,h);(i,m-1)}\, \delta_{(j',h');(i,m)} \\
& - \delta_{(j,h);(i,m-1)}\, \delta_{(j',h');(i,m-2)}\, e^{N\left(\lambda_i^{(m-2)} - \lambda_{i-}^{(m-1)}\right)} \\
& + \delta_{(j,h);(i,m)} \left( \delta_{(j',h');(i,m+1)}\, e^{N\left(\lambda_i^{(m+1)} - \lambda_{i-}^{(m)}\right)} - \delta_{(j',h');(i,m-2)}\, e^{N\left(\lambda_i^{(m-2)} - \lambda_{i+}^{(m)}\right)} \right)
\end{aligned}
\qquad (4\text{-}48)
\]

It means that the jump matrix reduces to the identity outside the $2 \times 4$ block:

\[
\begin{array}{c|cccc}
 & (i,m-2) & (i,m-1) & (i,m) & (i,m+1) \\
\hline
(i,m-1) & -e^{N\left(\lambda_i^{(m-2)}-\lambda_{i-}^{(m-1)}\right)} & e^{N\left(\lambda_{i+}^{(m-1)}-\lambda_{i-}^{(m-1)}\right)} & -1 & 0 \\
(i,m) & -e^{N\left(\lambda_i^{(m-2)}-\lambda_{i+}^{(m)}\right)} & 1 & 0 & e^{N\left(\lambda_i^{(m+1)}-\lambda_{i-}^{(m)}\right)}
\end{array}
\qquad (4\text{-}49)
\]

+ for $x \in \Sigma_i$ between $x_{2i-1} + i\mathbb{R}$ and $r_i^{(1)} + i\mathbb{R}$:

\[
[j_U(x)]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} + \delta_{(j,h);(i,0)}\, \delta_{(j',h');(i,1)}\, e^{N\left(\lambda_i^{(1)}(x) - \lambda_i^{(0)}(x)\right)} ; \qquad (4\text{-}50)
\]

+ for $x \in \Sigma_i$ between $\Gamma_i^{(m)}$ and $\Gamma_i^{(m+1)}$:

\[
[j_U(x)]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} + \delta_{(j,h);(i,m)}\, \delta_{(j',h');(i,m-1)}\, e^{N\left(\lambda_i^{(m-1)}(x) - \lambda_i^{(m)}(x)\right)} + \delta_{(j,h);(i,m)}\, \delta_{(j',h');(i,m+1)}\, e^{N\left(\lambda_i^{(m+1)}(x) - \lambda_i^{(m)}(x)\right)} ; \qquad (4\text{-}51)
\]

+ for $x \in \Sigma_i$ between $r_i^{(b_i)} + i\mathbb{R}$ and $x_{2i} + i\mathbb{R}$:

\[
[j_U(x)]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} + \delta_{(j,h);(i,b_i)}\, \delta_{(j',h');(i,b_i-1)}\, e^{N\left(\lambda_i^{(b_i-1)}(x) - \lambda_i^{(b_i)}(x)\right)} . \qquad (4\text{-}52)
\]

Using the study of the sign of $\mathrm{Re}\big(\lambda_i^{(m)}\big) - \mathrm{Re}\big(\lambda_j^{(h)}\big)$ performed at the beginning of this section, let us check that these jump matrices have no exponentially increasing entries when $N \to \infty$. For $x \in \Sigma$ and $x \in \mathbb{R} \setminus \bigcup_i [z_{2i-1}, z_{2i}]$, the jump matrices converge to the identity matrix as $N \to \infty$. For $x \in \Gamma_i^{(m)}$, the jump matrix (4-48) reduces to the identity except for the block (4-49), which converges to the block

\[
\begin{array}{c|cccc}
 & (i,m-2) & (i,m-1) & (i,m) & (i,m+1) \\
\hline
(i,m-1) & 0 & 0 & -1 & 0 \\
(i,m) & 0 & 1 & 0 & 0
\end{array}
\qquad (4\text{-}53)
\]

exactly as it was encountered in the simpler case studied by [4]. Finally, all the non-constant off-diagonal entries of the remaining jump matrices (4-44), (4-45) and (4-46) converge to 0 as $N \to \infty$. The non-constant diagonal entries have modulus 1 and are rapidly oscillating as $N \to \infty$. The third transformation aims to turn them into exponentially decreasing terms.

4.2.3 Third transformation

The third transformation aims to cancel the oscillating parts of the jump matrices by opening lenses on $[z_{2i-1}, z_{2i}]$, as in the large time case. Remark that one can further factorize the matrix $j_U(x)$ for $x \in \big[r_i^{(m)}, r_i^{(m+1)}\big]$:

\[
j_U(x) = \tilde{\jmath}_{US}(x)\, j_S(x)\, \hat{\jmath}_{US}(x) \qquad (4\text{-}54)
\]

where, for $x \in \big[r_i^{(m)}, r_i^{(m+1)}\big]$, one has defined the $(k+1) \times (k+1)$ matrices:

\[
\begin{aligned}
[j_S]_{(j,h);(j',h')} :={}& \big(1 - \delta_{(j,h);(i,m)} - \delta_{(j,h);(0,0)}\big)\, \delta_{(j,h);(j',h')} + \delta_{(j,h);(0,0)}\, \delta_{(j',h');(i,m)} - \delta_{(j,h);(i,m)}\, \delta_{(j',h');(0,0)} \\
& + \delta_{(j,h);(0,0)} \sum_{(j'',h'') \neq (0,0),(i,m),(i,m-1),(i,m+1)} \delta_{(j',h');(j'',h'')}\, e^{N\left(\lambda_{j'+}^{(h')} - \lambda_{0-}\right)} \\
& - \delta_{(j,h);(i,m)} \sum_{(j'',h'') \neq (0,0),(i,m),(i,m-1),(i,m+1)} \delta_{(j',h');(j'',h'')}\, e^{N\left(\lambda_{j'+}^{(h')} - \lambda_{0+}\right)} ,
\end{aligned}
\qquad (4\text{-}55)
\]

\[
\big[\tilde{\jmath}_{US}(x)\big]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} + \delta_{(j,h);(i,m)}\, \delta_{(j',h');(0,0)}\, e^{N\left(\lambda_0 - \lambda_i^{(m)}\right)_{-}} \qquad (4\text{-}56)
\]

and

\[
\big[\hat{\jmath}_{US}(x)\big]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} + \delta_{(j,h);(i,m)}\, \delta_{(j',h');(0,0)}\, e^{N\left(\lambda_0 - \lambda_i^{(m)}\right)_{+}} . \qquad (4\text{-}57)
\]

Figure 10: Example of lens $L_i$ for the third transformation.

As in the large time case, we define a set of $l$ lenses $L_i$ with vertices $z_{2i}$ and $z_{2i-1}$ (see fig. 10) which do not intersect the contour $\Sigma$. Using these lenses, we define the change of variables:

\[
S(x) := U(x) \qquad (4\text{-}58)
\]

outside of the lenses and, for $x$ inside the lenses:

\[
S(x) := \begin{cases} U(x)\, J_{US}^{(up)}(x) & \text{in the upper lens region} \\ U(x)\, J_{US}^{(down)}(x) & \text{in the lower lens region} \end{cases} \qquad (4\text{-}59)
\]

with

\[
\begin{cases}
\big[J_{US}^{(up)}\big]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} - \delta_{(j,h);(i,m)}\, \delta_{(j',h');(0,0)}\, e^{N\left(\lambda_0 - \lambda_i^{(m)}\right)} \\
\big[J_{US}^{(down)}\big]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} + \delta_{(j,h);(i,m)}\, \delta_{(j',h');(0,0)}\, e^{N\left(\lambda_0 - \lambda_i^{(m)}\right)}
\end{cases} \qquad (4\text{-}60)
\]

for $x$ between $\Gamma_i^{(m)}$ and $\Gamma_i^{(m+1)}$,

\[
\begin{cases}
\big[J_{US}^{(up)}\big]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} - \delta_{(j,h);(i,0)}\, \delta_{(j',h');(0,0)}\, e^{N\left(\lambda_0 - \lambda_i^{(0)}\right)} \\
\big[J_{US}^{(down)}\big]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} + \delta_{(j,h);(i,0)}\, \delta_{(j',h');(0,0)}\, e^{N\left(\lambda_0 - \lambda_i^{(0)}\right)}
\end{cases} \qquad (4\text{-}61)
\]

for $x$ between $z_{2i-1}$ and $\Gamma_i^{(1)}$, and

\[
\begin{cases}
\big[J_{US}^{(up)}\big]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} - \delta_{(j,h);(i,b_i)}\, \delta_{(j',h');(0,0)}\, e^{N\left(\lambda_0 - \lambda_i^{(b_i)}\right)} \\
\big[J_{US}^{(down)}\big]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} + \delta_{(j,h);(i,b_i)}\, \delta_{(j',h');(0,0)}\, e^{N\left(\lambda_0 - \lambda_i^{(b_i)}\right)}
\end{cases} \qquad (4\text{-}62)
\]

for $x$ between $\Gamma_i^{(b_i)}$ and $z_{2i}$.

We finally get the jump matrices for $S(x)$ out of the discontinuities of $U(x)$, thanks to the factorization of the jump matrices for $U$ on the real axis. One has the jumps $S_+(x) = S_-(x)\, j_S(x)$ with the jump matrix given by:

• for $x \in \big]r_i^{(m)}, r_i^{(m+1)}\big[$, with $i = 1, \dots, l$ and $m = 0, \dots, b_i$:

\[
\begin{aligned}
[j_S]_{(j,h);(j',h')} :={}& \big(1 - \delta_{(j,h);(i,m)} - \delta_{(j,h);(0,0)}\big)\, \delta_{(j,h);(j',h')} + \delta_{(j,h);(0,0)}\, \delta_{(j',h');(i,m)} - \delta_{(j,h);(i,m)}\, \delta_{(j',h');(0,0)} \\
& + \delta_{(j,h);(0,0)} \sum_{(j'',h'') \neq (0,0),(i,m),(i,m-1),(i,m+1)} \delta_{(j',h');(j'',h'')}\, e^{N\left(\lambda_{j'+}^{(h')} - \lambda_{0-}\right)} \\
& - \delta_{(j,h);(i,m)} \sum_{(j'',h'') \neq (0,0),(i,m),(i,m-1),(i,m+1)} \delta_{(j',h');(j'',h'')}\, e^{N\left(\lambda_{j'+}^{(h')} - \lambda_{0+}\right)} .
\end{aligned}
\qquad (4\text{-}63)
\]

• for $x \in \mathbb{R} \setminus \bigcup_i [x_{2i-1}, x_{2i}]$:

\[
j_S =
\begin{pmatrix}
1 & e^{N(\lambda_{1+}^{(0)}(x)-\lambda_{0-}(x))} & e^{N(\lambda_{1+}^{(1)}(x)-\lambda_{0-}(x))} & \dots & e^{N(\lambda_{l+}^{(b_l)}(x)-\lambda_{0-}(x))} \\
0 & 1 & 0 & \dots & 0 \\
0 & 0 & 1 & \dots & 0 \\
\dots & \dots & \dots & \dots & \dots \\
0 & 0 & 0 & \dots & 1
\end{pmatrix}.
\qquad (4\text{-}64)
\]

• for $x \in [x_{2i-1}, z_{2i-1}]$, $j_S$ is the same matrix as for $x \in \mathbb{R} \setminus \bigcup_i [x_{2i-1}, x_{2i}]$, with the term $(i,1)$ of the first line taken equal to 0.

• for $x \in [z_{2i}, x_{2i}]$, $j_S$ is the same matrix as for $x \in \mathbb{R} \setminus \bigcup_i [x_{2i-1}, x_{2i}]$, with the term $(i, b_i - 1)$ of the first line taken equal to 0.

• For $x \in \Gamma_i^{(m)}$ between $\tilde{x}_{i,up}^{(m)}$ and $w_i^{(m)}$, and between $\overline{w}_i^{(m)}$ and $\tilde{x}_{i,down}^{(m)}$:

\[
\begin{aligned}
[j_S(x)]_{(j,h);(j',h')} ={}& \delta_{(j,h);(j',h')} \left[ 1 - \delta_{(j,h);(i,m)} + \delta_{(j,h);(i,m-1)} \left( e^{N\left(\lambda_{i+}^{(m-1)} - \lambda_{i-}^{(m-1)}\right)} - 1 \right) \right] \\
& + \delta_{(j,h);(i,m)}\, \delta_{(j',h');(i,m-1)} - \delta_{(j,h);(i,m-1)}\, \delta_{(j',h');(i,m)} \\
& - \delta_{(j,h);(i,m-1)}\, \delta_{(j',h');(i,m-2)}\, e^{N\left(\lambda_i^{(m-2)} - \lambda_{i-}^{(m-1)}\right)} \\
& + \delta_{(j,h);(i,m)} \left( \delta_{(j',h');(i,m+1)}\, e^{N\left(\lambda_i^{(m+1)} - \lambda_{i-}^{(m)}\right)} - \delta_{(j',h');(i,m-2)}\, e^{N\left(\lambda_i^{(m-2)} - \lambda_{i+}^{(m)}\right)} \right)
\end{aligned}
\qquad (4\text{-}65)
\]

It means that the jump matrix reduces to the identity outside the $2 \times 4$ block:

\[
\begin{array}{c|cccc}
 & (i,m-2) & (i,m-1) & (i,m) & (i,m+1) \\
\hline
(i,m-1) & -e^{N\left(\lambda_i^{(m-2)}-\lambda_{i-}^{(m-1)}\right)} & e^{N\left(\lambda_{i+}^{(m-1)}-\lambda_{i-}^{(m-1)}\right)} & -1 & 0 \\
(i,m) & -e^{N\left(\lambda_i^{(m-2)}-\lambda_{i+}^{(m)}\right)} & 1 & 0 & e^{N\left(\lambda_i^{(m+1)}-\lambda_{i-}^{(m)}\right)}
\end{array}
\qquad (4\text{-}66)
\]

• For $x \in \Gamma_i^{(m)}$ between $r_i^{(m)}$ and $\tilde{x}_{i,up}^{(m)}$:

\[
\begin{aligned}
[j_S(x)]_{(j,h);(j',h')} ={}& \delta_{(j,h);(j',h')} \left[ 1 - \delta_{(j,h);(i,m)} + \delta_{(j,h);(i,m-1)} \left( e^{N\left(\lambda_{i+}^{(m-1)} - \lambda_{i-}^{(m-1)}\right)} - 1 \right) \right] \\
& - \delta_{(j,h);(i,m-1)}\, \delta_{(j',h');(0,0)}\, e^{N\left(\lambda_0 - \lambda_{i-}^{(m-1)}\right)} \\
& + \delta_{(j,h);(i,m)}\, \delta_{(j',h');(i,m-1)} - \delta_{(j,h);(i,m-1)}\, \delta_{(j',h');(i,m)} \\
& - \delta_{(j,h);(i,m-1)}\, \delta_{(j',h');(i,m-2)}\, e^{N\left(\lambda_i^{(m-2)} - \lambda_{i-}^{(m-1)}\right)} \\
& + \delta_{(j,h);(i,m)} \left( \delta_{(j',h');(i,m+1)}\, e^{N\left(\lambda_i^{(m+1)} - \lambda_{i-}^{(m)}\right)} - \delta_{(j',h');(i,m-2)}\, e^{N\left(\lambda_i^{(m-2)} - \lambda_{i+}^{(m)}\right)} \right)
\end{aligned}
\qquad (4\text{-}67)
\]

i.e. this is the same jump matrix as (4-65), except that the term $(0,0)$ of the $(i,m-1)$'st line is equal to $-e^{N\left(\lambda_0 - \lambda_{i-}^{(m-1)}\right)}$ instead of 0.

• For $x \in \Gamma_i^{(m)}$ between $\tilde{x}_{i,down}^{(m)}$ and $r_i^{(m)}$:

\[
\begin{aligned}
[j_S(x)]_{(j,h);(j',h')} ={}& \delta_{(j,h);(j',h')} \left[ 1 - \delta_{(j,h);(i,m)} + \delta_{(j,h);(i,m-1)} \left( e^{N\left(\lambda_{i+}^{(m-1)} - \lambda_{i-}^{(m-1)}\right)} - 1 \right) \right] \\
& + \delta_{(j,h);(i,m-1)}\, \delta_{(j',h');(0,0)}\, e^{N\left(\lambda_0 - \lambda_{i-}^{(m-1)}\right)} \\
& + \delta_{(j,h);(i,m)}\, \delta_{(j',h');(i,m-1)} - \delta_{(j,h);(i,m-1)}\, \delta_{(j',h');(i,m)} \\
& - \delta_{(j,h);(i,m-1)}\, \delta_{(j',h');(i,m-2)}\, e^{N\left(\lambda_i^{(m-2)} - \lambda_{i-}^{(m-1)}\right)} \\
& + \delta_{(j,h);(i,m)} \left( \delta_{(j',h');(i,m+1)}\, e^{N\left(\lambda_i^{(m+1)} - \lambda_{i-}^{(m)}\right)} - \delta_{(j',h');(i,m-2)}\, e^{N\left(\lambda_i^{(m-2)} - \lambda_{i+}^{(m)}\right)} \right)
\end{aligned}
\qquad (4\text{-}68)
\]

i.e. this is the same jump matrix as (4-65), except that the term $(0,0)$ of the $(i,m-1)$'st line is equal to $e^{N\left(\lambda_0 - \lambda_{i-}^{(m-1)}\right)}$ instead of 0.

• for $x \in \Sigma_i$ between $x_{2i-1} + i\mathbb{R}$ and $r_i^{(1)} + i\mathbb{R}$:

\[
[j_S(x)]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} + \delta_{(j,h);(i,0)}\, \delta_{(j',h');(i,1)}\, e^{N\left(\lambda_i^{(1)}(x) - \lambda_i^{(0)}(x)\right)} ; \qquad (4\text{-}69)
\]

• for $x \in \Sigma_i$ between $\Gamma_i^{(m)}$ and $\Gamma_i^{(m+1)}$:

\[
[j_S(x)]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} + \delta_{(j,h);(i,m)}\, \delta_{(j',h');(i,m-1)}\, e^{N\left(\lambda_i^{(m-1)}(x) - \lambda_i^{(m)}(x)\right)} + \delta_{(j,h);(i,m)}\, \delta_{(j',h');(i,m+1)}\, e^{N\left(\lambda_i^{(m+1)}(x) - \lambda_i^{(m)}(x)\right)} ; \qquad (4\text{-}70)
\]

• for $x \in \Sigma_i$ between $r_i^{(b_i)} + i\mathbb{R}$ and $x_{2i} + i\mathbb{R}$:

\[
[j_S(x)]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} + \delta_{(j,h);(i,b_i)}\, \delta_{(j',h');(i,b_i-1)}\, e^{N\left(\lambda_i^{(b_i-1)}(x) - \lambda_i^{(b_i)}(x)\right)} . \qquad (4\text{-}71)
\]

• for $x \in L_i$ between $\Gamma_i^{(m)}$ and $\Gamma_i^{(m+1)}$:

\[
[J_S]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} + \delta_{(j,h);(i,m)}\, \delta_{(j',h');(0,0)}\, e^{N\left(\lambda_0 - \lambda_i^{(m)}\right)} \qquad (4\text{-}72)
\]

• for $x \in L_i$ between $z_{2i-1}$ and $\Gamma_i^{(1)}$:

\[
[J_S]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} + \delta_{(j,h);(i,0)}\, \delta_{(j',h');(0,0)}\, e^{N\left(\lambda_0 - \lambda_i^{(0)}\right)} \qquad (4\text{-}73)
\]

• for $x \in L_i$ between $\Gamma_i^{(b_i)}$ and $z_{2i}$:

\[
[J_S]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} + \delta_{(j,h);(i,b_i)}\, \delta_{(j',h');(0,0)}\, e^{N\left(\lambda_0 - \lambda_i^{(b_i)}\right)} \qquad (4\text{-}74)
\]

Now, when $N \to \infty$, all the non-constant terms of the jump matrices tend to 0. We are then left with a model Riemann-Hilbert problem, reduced locally to a $2 \times 2$ matrix problem similar to the one studied in the preceding section. We can thus use the same parametrix.

4.2.4 Model Riemann-Hilbert problem

We proceed exactly as in the preceding section (large time case) and consider the following RH problem (this is the limiting problem obtained when $N \to \infty$).

Definition 4.3 $M$ is the solution to the following RH problem:

• $M : \mathbb{C} \setminus \left( \bigcup_i [z_{2i-1}, z_{2i}] \cup \bigcup_{i,m} \Gamma_i^{(m)} \right) \to \mathbb{C}^{(k+1) \times (k+1)}$ is analytic;

• $M(x) = I_{(k+1) \times (k+1)} + O\!\left(\frac{1}{x}\right)$ when $x \to \infty$;

• $M(x)$ satisfies the jumps $M_+(x) = M_-(x)\, j_M(x)$ with:

\[
[j_M(x)]_{(j,h),(j',h')} = \delta_{(j,h),(j',h')} \big(1 - \delta_{(j,h),(0,0)} - \delta_{(j,h),(i,m)}\big) + \delta_{(j,h),(0,0)}\, \delta_{(j',h'),(i,m)} - \delta_{(j',h'),(0,0)}\, \delta_{(j,h),(i,m)} \qquad (4\text{-}75)
\]

for $x \in \big]r_i^{(m)}, r_i^{(m+1)}\big[$, and

\[
[j_M(x)]_{(j,h),(j',h')} = \delta_{(j,h),(j',h')} \big(1 - \delta_{(j,h),(i,m-1)} - \delta_{(j,h),(i,m)}\big) + \delta_{(j,h),(i,m-1)}\, \delta_{(j',h'),(i,m)} - \delta_{(j',h'),(i,m-1)}\, \delta_{(j,h),(i,m)} \qquad (4\text{-}76)
\]

for $x \in \Gamma_i^{(m)}$.

We proceed as in the large time case, following [4], to solve this problem: we build a solution in the form of a Lax matrix associated to the spectral curve. For brevity, we will not enter into the details of the resolution but only give the result; the proof is similar to the building of the parametrix away from the branch points in the large time case. The matrix $M$ takes the following form:

\[
[M(x)]_{(j,h),(j',h')} = M_j^{(h)}\!\left(\xi_{j'}^{(h')}(x)\right) \qquad (4\text{-}77)
\]

where the functions $M_j^{(h)}$ are Baker-Akhiezer functions defined by:

\[
M_0^{(0)}(x) = \frac{\prod_{(i,m)} \big(x - a_i^{(m)}\big)}{\sqrt{R(x)}} \qquad (4\text{-}78)
\]

and

\[
M_j^{(h)}(x) = c_j^{(h)}\; \frac{\prod_{(i,m) \neq (j,h)} \big(x - a_i^{(m)}\big)}{\sqrt{R(x)}} \qquad (4\text{-}79)
\]

where

\[
a_i^{(m)} := \xi_i^{(m)}(\infty), \qquad (4\text{-}80)
\]

\[
R(x) := \prod_{i=1}^{l} (x - p_{2i-1})(x - p_{2i}) \prod_{i,m} \big(x - q_i^{(m)}\big)\big(x - \tilde{q}_i^{(m)}\big) \qquad (4\text{-}81)
\]

where

\[
q_i^{(m)} := \xi_0\big(w_i^{(m)}\big) \qquad \text{and} \qquad \tilde{q}_i^{(m)} := \xi_0\big(\overline{w}_i^{(m)}\big) \qquad (4\text{-}82)
\]

and

\[
c_i^{(m)} := -i \sqrt{\frac{n_i^{(m)}}{N}} . \qquad (4\text{-}83)
\]

We now look at the behavior of the solution in the neighborhood of the branch points by approximating it by a local parametrix built from the Airy function. Near the real branch points $z_i$, we use the local parametrices $P$ built in the preceding section (see Eq. (3-77)). Around the imaginary branch points, we have to build a slightly different parametrix. For this purpose, we can directly use the parametrix built in section 7 of []. Let us recall here this result for the parametrix around $w_i^{(m)}$.

Around the branch point $w_i^{(m)}$, let us define a local parametrix $P(x)$ for $x$ inside a disc $D_i^{(m)}(r)$ of small radius $r$ centered in $w_i^{(m)}$. Studying the behavior of $\lambda_i^{(m)}(x) - \lambda_i^{(m-1)}(x)$ inside this disc, one can define a conformal map $f(x)$ by:

\[
f(x) := \left[ \frac{3}{4} \left( \lambda_i^{(m)}(x) - \lambda_i^{(m-1)}(x) \right) \right]^{\frac{2}{3}} . \qquad (4\text{-}84)
\]

One can remark that the neighborhood $D_i^{(m)}(r)$ is mapped to the complex plane in such a way that:

\[
\arg(f(x)) = \begin{cases} -\frac{2\pi}{3} & \text{for } x \in \Gamma_i^{(m)} \\ 0 & \text{for } x \in \Sigma_i \text{ in the left half-plane} \\ \frac{2\pi}{3} & \text{for } x \in \Sigma_i \text{ in the right half-plane} \end{cases} \qquad (4\text{-}85)
\]
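As a sketch of why $f$ is indeed conformal (the nonzero constant $C$ below is an assumption of this local model, not computed in the text): since the two $\lambda$-functions collide at the simple branch point $w_i^{(m)}$, their difference vanishes like a $3/2$ power, and the $2/3$ power in Eq. (4-84) exactly undoes it:

\[
\lambda_i^{(m)}(x) - \lambda_i^{(m-1)}(x) = C\,\big(x - w_i^{(m)}\big)^{3/2}\big(1 + O\big(x - w_i^{(m)}\big)\big)
\;\Longrightarrow\;
f(x) = \Big(\tfrac{3}{4}\,C\Big)^{2/3} \big(x - w_i^{(m)}\big)\big(1 + O\big(x - w_i^{(m)}\big)\big),
\]

so $f$ vanishes linearly at $w_i^{(m)}$ with nonzero derivative, i.e. it is conformal on a small enough disc $D_i^{(m)}(r)$; the factor $N^{2/3}$ in Eq. (4-86) then produces the Airy-type scaling.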

With these tools, one defines the parametrix $P(x)$ in $D_i^{(m)}(r)$ by:

\[
P(x) := E(x)\, \Phi\!\left( N^{\frac{2}{3}} f(x) \right) D(x) \qquad (4\text{-}86)
\]

where $D(x)$ is a diagonal matrix defined, for $x \in D_i^{(m)}(r)$, by

\[
[D(x)]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} \left[ 1 + \delta_{(j,h),(i,m-1)} \left( e^{\frac{N}{2}\left(\lambda_i^{(m)} - \lambda_i^{(m-1)}\right)} - 1 \right) + \delta_{(j,h),(i,m)} \left( e^{\frac{N}{2}\left(\lambda_i^{(m-1)} - \lambda_i^{(m)}\right)} - 1 \right) \right] , \qquad (4\text{-}87)
\]

and $\Phi(x)$ is the identity matrix except for the $2 \times 2$ block corresponding to the intersection of the $(i,m-1)$'th and $(i,m)$'th lines and columns, which is built from the functions $y_\alpha$ defined in Eq. (3-82), with a different determination in each of the three sectors $0 < \arg(f(x)) < \frac{2\pi}{3}$, $-\frac{2\pi}{3} < \arg(f(x)) < 0$ and $\frac{2\pi}{3} < \arg(f(x)) < \frac{4\pi}{3}$ (Eqs. (4-88)-(4-90)). The matrix $E$ is a prefactor used to ensure that the local parametrix can be linked properly to the global parametrix on the boundary of $D_i^{(m)}(r)$:

\[
E(x) := M(x)\, L(x)^{-1} , \qquad L(x) := \frac{1}{2\sqrt{\pi}}\, \widetilde{L}(x)\, \widehat{L}(x) , \qquad (4\text{-}91)
\]

with $\widetilde{L}$ the diagonal matrix

\[
\big[\widetilde{L}(x)\big]_{(j,h);(j',h')} := \delta_{(j,h);(j',h')} \left[ 1 + \delta_{(j,h);(i,m-1)} \left( N^{\frac{1}{6}} f(x)^{\frac{1}{4}} - 1 \right) + \delta_{(j,h);(i,m)} \left( N^{-\frac{1}{6}} f(x)^{-\frac{1}{4}} - 1 \right) \right] \qquad (4\text{-}92)
\]

and $\widehat{L}$ the identity matrix except for the $2 \times 2$ block corresponding to the intersection of the $(i,m-1)$'th and $(i,m)$'th lines and columns:

\[
\begin{array}{c|cc}
 & (i,m-1) & (i,m) \\
\hline
(i,m-1) & i & -1 \\
(i,m) & i & 1
\end{array}
\qquad (4\text{-}93)
\]

4.2.5 Final transformation

We now have all the ingredients in hand to perform the final transformation and build a solution to our Riemann-Hilbert problem. Let us define:

\[
R(x) := \begin{cases} S(x)\, M^{-1}(x) & \text{outside of the discs surrounding the branch points} \\ S(x)\, P(x)^{-1} & \text{inside of the discs surrounding the branch points} \end{cases} \qquad (4\text{-}94)
\]

where $P(x)$ is the local parametrix of Eq. (3-77) if one considers a real branch point, or of Eq. (4-86) if one considers a complex branch point. As $N \to \infty$, it is the solution to the following RH problem:

• $R(x)$ is analytic inside the discs surrounding the branch points, and in $\mathbb{C} \setminus \left( \mathbb{R} \cup \Sigma \cup \bigcup_i L_i \cup \bigcup_{i,m} \Gamma_i^{(m)} \right)$ outside;

• its jumps outside the discs are of the form:

\[
R_+ = R_- \left( I + O\!\left( e^{-\alpha N} \right) \right) \qquad (4\text{-}95)
\]

with $\alpha > 0$, and its jumps on the circles surrounding the branch points are such that:

\[
R_+ = R_- \left( I + O\!\left( \frac{1}{N} \right) \right) . \qquad (4\text{-}96)
\]

• as $x \to \infty$:

\[
R(x) = I + O\!\left( \frac{1}{N} \right) . \qquad (4\text{-}97)
\]

As in the large time case, this implies the behavior of $R$:

\[
R(x) = I + O\!\left( \frac{1}{N(|x| + 1)} \right) \qquad (4\text{-}98)
\]

as $N \to \infty$ (see [4]).

4.3 Asymptotics of the kernel and universality

Using this final transformation as well as the properties of the parametrices, one easily shows theorems 1.1, 1.2 and 1.3 by mimicking the proof of the large time case.
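For reference, the universal bulk kernel mentioned in the abstract (the Sine kernel) is simple enough to sketch numerically. The density-one normalization below is the standard one and is an assumption of this illustration, not a statement taken from the theorems.

```python
import math

def sine_kernel(x, y):
    """Standard sine kernel K(x, y) = sin(pi (x - y)) / (pi (x - y)), K(x, x) = 1."""
    if x == y:
        return 1.0
    return math.sin(math.pi * (x - y)) / (math.pi * (x - y))

# On the diagonal the kernel equals the (rescaled) mean density of particles,
# and it vanishes at integer separations, reflecting eigenvalue repulsion.
print(sine_kernel(0.5, 0.5))                # → 1.0
print(abs(sine_kernel(2.0, 1.0)) < 1e-12)   # → True
```

The Airy kernel governing edge fluctuations admits an analogous closed form in terms of the Airy function and its derivative.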

5 Perspectives

We used the Riemann-Hilbert approach to study the asymptotic behavior of $N$ non-intersecting Brownian bridges and showed the universality of their correlation functions as $N \to \infty$. Nevertheless, we did not study the critical configurations corresponding to a spectral curve where two real branch points merge. One expects to find a universal behavior given by the Pearcey kernel, as was found in the particular case studied by [5]. This critical case may be addressed by the same techniques and is left to a forthcoming work. However, as the length of these notes witnesses, the RH approach is long and requires global information on the spectral curve to finally give local results. It would be interesting to obtain these results in a more direct way using, for example, a direct steepest-descent analysis as performed by Adler and Van Moerbeke [1].

Acknowledgements

The author would like to thank A.B.J. Kuijlaars for a critical reading of a first draft of this paper. This research is supported by the Enigma European network MRT-CT2004-5652 through a post-doctoral fellowship, and also benefited from the support of the ANR project Géométrie et intégrabilité en physique mathématique ANR-05-BLAN0029-01, the Enrage European network MRTN-CT-2004-005616, the European Science Foundation through the Misgam program, and the French and Japanese governments through PAI Sakura.

References

[1] M. Adler, P. Van Moerbeke, "Looking for a non-symmetric Pearcy process", in preparation.

[2] P.M. Bleher, "Lectures on Random Matrix Models: The Riemann-Hilbert Approach", arXiv:0801.1858.

[3] P.M. Bleher and A.B.J. Kuijlaars, "Large n limit of Gaussian random matrices with external source, Part I", Commun. Math. Phys. 252 (2004), 43.

[4] A.I. Aptekarev, P.M. Bleher and A.B.J. Kuijlaars, "Large n limit of Gaussian random matrices with external source, Part II", Commun. Math. Phys. 259 (2005), 367.

[5] P.M. Bleher and A.B.J. Kuijlaars, "Large n limit of Gaussian random matrices with external source, Part III: Double scaling limit", Commun. Math. Phys. 270 (2006), 481.

[6] E. Daems and A.B.J. Kuijlaars, "A Christoffel-Darboux formula for multiple orthogonal polynomials", J. Approx. Theory 130 (2004), 190.

[7] B. Eynard and N. Orantin, "Invariants of algebraic curves and topological expansion", Communications in Number Theory and Physics 1 (2007), no. 2, math-ph/0702045.

[8] V. Lysov, F. Wielonsky, "Strong asymptotics for multiple Laguerre polynomials", Constructive Approximation 28 (2008), 61.

[9] L.A. Pastur, "The spectrum of random matrices" (Russian), Teoret. Mat. Fiz. 10 (1972), 102.

[10] C. Tracy and H. Widom, "Nonintersecting Brownian excursions", math/0607321.

[11] P. Zinn-Justin, "Random Hermitian matrices in an external field", Nucl. Phys. B 497 (1997), 725.

[12] P. Zinn-Justin, "Universality of correlation functions of Hermitian random matrices in an external field", Comm. Math. Phys. 194 (1998), 631.
