Nonsymmetric linear difference equations for multiple orthogonal polynomials

Walter Van Assche∗



Katholieke Universiteit Leuven

Abstract

We first give a brief survey of some aspects of orthogonal polynomials. The three-term recurrence relation gives a tridiagonal matrix, and the corresponding Jacobi operator gives useful information about the orthogonalizing measure and the asymptotic behavior of the zeros of the orthogonal polynomials. The Toda lattice and other similar dynamical systems (Langmuir lattice or Kac-Van Moerbeke lattice) can be solved explicitly using Jacobi operators. Then we present multiple orthogonal polynomials, which are less well known. These multiple orthogonal polynomials are defined using orthogonality conditions spread out over r different measures. There is a higher order recurrence relation with r + 2 terms, which gives a banded Hessenberg matrix and a corresponding operator which is essentially nonsymmetric. We give some examples and indicate how one can start working out a spectral theory for such operators. As an application we show that one can explicitly solve the Bogoyavlenskii lattice using certain multiple orthogonal polynomials.

1 Orthogonal Polynomials

In this paper we will introduce multiple orthogonal polynomials and show how they are related to certain nonsymmetric linear operators that correspond to a finite order linear recurrence relation. In this section we will first recall some relevant facts about orthogonal polynomials (see Szegő [25] or [27] for a more thorough treatment), and in the next section we will see how some of these facts extend to multiple orthogonal polynomials, where the new setting is richer and still needs further study (see Nikishin and Sorokin [20] and Aptekarev [2] for more information on multiple orthogonal polynomials). Let µ be a positive measure on the real line for which all the moments exist and whose support contains infinitely many points. Without loss of generality we normalize µ so that it is a probability measure. The monic orthogonal polynomials P_n (n = 0, 1, 2, \ldots) for the measure µ are such that P_n(x) = x^n + \cdots has degree n and

    \int P_n(x) x^k \, d\mu(x) = 0,    k = 0, 1, 2, \ldots, n-1.    (1.1)

∗ Research Director of the Belgian National Fund for Scientific Research (FWO). This research is supported by FWO research project G.0278.97 and INTAS 93-219ext.


These polynomials always exist and are unique. Often one considers the orthonormal polynomials p_n(x) = γ_n P_n(x), with γ_n > 0 and

    \int p_n(x) p_m(x) \, d\mu(x) = \delta_{m,n},    n, m ≥ 0.    (1.2)

1.1 Recurrence relation

A very useful fact is that orthogonal polynomials on the real line always satisfy a three-term recurrence relation of the form

    P_{n+1}(x) = (x - b_n) P_n(x) - a_n^2 P_{n-1}(x),    n ≥ 0,    (1.3)

with initial values P_0 = 1 and P_{-1} = 0 and with a_{n+1}^2 > 0 and b_n ∈ R for all n. The recurrence relation for the orthonormal polynomials is given by

    x p_n(x) = a_{n+1} p_{n+1}(x) + b_n p_n(x) + a_n p_{n-1}(x),    n ≥ 0,    (1.4)

with a_{n+1} > 0 and b_n ∈ R the same coefficients as in (1.3) and p_0 = 1, p_{-1} = 0. To show that (1.4) holds, one expands x p_n(x) into a Fourier series in the orthonormal polynomials,

    x p_n(x) = \sum_{k=0}^{n+1} a_{k,n} p_k(x),

with Fourier coefficients given by

    a_{k,n} = \int x p_n(x) p_k(x) \, d\mu(x).

Obviously x p_k(x) is a polynomial of degree k + 1, and by orthogonality it follows that a_{k,n} = 0 whenever k + 1 < n. Therefore the Fourier series only contains three terms and we put

    a_{n-1,n} = \int x p_n(x) p_{n-1}(x) \, d\mu(x) =: a_n,
    a_{n,n}   = \int x p_n^2(x) \, d\mu(x) =: b_n,
    a_{n+1,n} = \int x p_n(x) p_{n+1}(x) \, d\mu(x) = a_{n+1}.

By comparing the coefficient of x^{n+1} in (1.4) we find that a_{n+1} = γ_n/γ_{n+1}, and if we use p_n(x) = γ_n P_n(x), then we indeed obtain (1.3). There is also an important converse of this connection between orthogonal polynomials and three-term recurrence relations, known as Favard's theorem. It states that any sequence of monic polynomials P_n that satisfies a three-term recurrence relation (1.3) with a_n^2 > 0 and b_n ∈ R for all n and initial conditions P_0 = 1 and P_{-1} = 0 is always a sequence of monic orthogonal polynomials for some positive measure µ on the real line. This means that the orthogonality conditions (1.1) and the three-term recurrence relation (1.3) are two equivalent ways to characterize orthogonal polynomials. This gives two important problems in the theory of orthogonal polynomials:

1. Given the recurrence coefficients, how can one obtain the orthogonality measure? (the direct problem)

2. Given the orthogonality measure, how can one find the recurrence coefficients? (the inverse problem)

A numerical analyst usually knows the measure (or its moments) and wants to compute the recurrence coefficients, hence problem 2 would be the relevant problem. On the other hand, there are many situations where one knows the recurrence coefficients (e.g., they may be birth and death rates in a birth and death process) and one is interested in obtaining the measure µ, in which case problem 1 is the relevant problem. Later we will see that both the direct and the inverse problem are needed for solving the differential equations for the Toda lattice and other similar dynamical systems.
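The inverse problem (problem 2) can be illustrated numerically. The following sketch (our own illustration, assuming Python with NumPy) runs the classical Stieltjes procedure on a quadrature discretization of the normalized Lebesgue measure dµ(x) = dx/2 on [-1, 1]; the computed coefficients should reproduce the known values b_n = 0 and a_n^2 = n^2/(4n^2 - 1) for the monic Legendre polynomials.

```python
import numpy as np

def stieltjes(nodes, weights, m):
    """Stieltjes procedure: recurrence coefficients b_0..b_{m-1} and
    a_1^2..a_{m-1}^2 of the monic orthogonal polynomials of a discrete measure."""
    b = np.zeros(m)
    a2 = np.zeros(m)                 # a2[k] = a_k^2  (a2[0] unused)
    p_prev = np.zeros_like(nodes)    # P_{-1}
    p_curr = np.ones_like(nodes)     # P_0
    norm_curr = np.sum(weights)      # <P_0, P_0>
    for k in range(m):
        b[k] = np.sum(weights * nodes * p_curr**2) / norm_curr
        if k > 0:
            a2[k] = norm_curr / norm_prev
        p_next = (nodes - b[k]) * p_curr - a2[k] * p_prev
        p_prev, p_curr = p_curr, p_next
        norm_prev, norm_curr = norm_curr, np.sum(weights * p_curr**2)
    return b, a2

# discretize dmu(x) = dx/2 on [-1, 1] by Gauss-Legendre quadrature
x, w = np.polynomial.legendre.leggauss(60)
b, a2 = stieltjes(x, w / 2.0, 8)
n = np.arange(1, 8)
print(np.max(np.abs(b)))                                 # ~0 (symmetry)
print(np.max(np.abs(a2[1:] - n**2 / (4.0 * n**2 - 1))))  # ~0 (Legendre)
```

The same procedure applies to any measure for which moments or quadrature nodes are available.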

1.2 Jacobi matrix and Jacobi operator

Starting with the recurrence relation (1.3) we can construct a tridiagonal matrix

    J_n = \begin{pmatrix}
    b_0   & 1     &        &        &           &         \\
    a_1^2 & b_1   & 1      &        &           &         \\
          & a_2^2 & b_2    & 1      &           &         \\
          &       & \ddots & \ddots & \ddots    &         \\
          &       &        & a_{n-2}^2 & b_{n-2} & 1       \\
          &       &        &        & a_{n-1}^2 & b_{n-1}
    \end{pmatrix}

and we can write the recurrence relation (1.3) as

    J_n \begin{pmatrix} P_0(x) \\ P_1(x) \\ \vdots \\ P_{n-1}(x) \end{pmatrix}
      = x \begin{pmatrix} P_0(x) \\ P_1(x) \\ \vdots \\ P_{n-1}(x) \end{pmatrix}
      - P_n(x) \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}.

This shows that each zero x_{j,n} of P_n is an eigenvalue of J_n. Zeros of orthogonal polynomials can therefore be identified with the spectrum of a tridiagonal matrix. The disadvantage of the matrix J_n is that it is not symmetric. One can easily symmetrize the matrix:

    R_n^{-1} J_n R_n = \tilde J_n = \begin{pmatrix}
    b_0 & a_1 &     &        &         &         \\
    a_1 & b_1 & a_2 &        &         &         \\
        & a_2 & b_2 & a_3    &         &         \\
        &     & \ddots & \ddots & \ddots &       \\
        &     &     & a_{n-2} & b_{n-2} & a_{n-1} \\
        &     &     &        & a_{n-1} & b_{n-1}
    \end{pmatrix},

where R_n is the diagonal matrix with the vector (γ_0, γ_1, γ_2, \ldots, γ_{n-1}) on the diagonal. This symmetric Jacobi matrix corresponds to the recurrence relation (1.4) for the orthonormal polynomials and has the same eigenvalues as J_n. Since \tilde J_n is symmetric, it follows immediately that the zeros of p_n are all real.
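As a concrete illustration of this identification (a sketch of ours, assuming Python with NumPy), the eigenvalues of the symmetric Jacobi matrix built from the Chebyshev recurrence coefficients reproduce the known zeros cos((2j - 1)π/(2n)) of the Chebyshev polynomial T_n:

```python
import numpy as np

# Symmetric Jacobi matrix for Chebyshev polynomials of the first kind:
# b_k = 0, a_1 = 1/sqrt(2), a_k = 1/2 for k >= 2.
n = 10
a = np.array([1 / np.sqrt(2)] + [0.5] * (n - 2))
J = np.diag(a, 1) + np.diag(a, -1)

eig = np.sort(np.linalg.eigvalsh(J))
zeros = np.sort(np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n)))
print(np.max(np.abs(eig - zeros)))   # agreement to machine precision
```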

1.3 Spectral theory and perturbations

More interesting is the infinite Jacobi matrix

    \tilde J = \begin{pmatrix}
    b_0 & a_1 &     &        \\
    a_1 & b_1 & a_2 &        \\
        & a_2 & b_2 & a_3    \\
        &     & \ddots & \ddots & \ddots
    \end{pmatrix}

which acts as a linear operator \tilde J : ℓ^2 → ℓ^2. If the recurrence coefficients are bounded, then \tilde J is a bounded symmetric operator, hence selfadjoint. For unbounded recurrence coefficients the operator \tilde J is unbounded, but well defined on elements of ℓ^2 with only finitely many non-zero entries. One can extend this matrix to its maximal domain, but there may be several selfadjoint extensions. Let us assume that we are dealing with bounded recurrence coefficients. The tridiagonal structure implies that e_0 = (1, 0, 0, 0, \ldots) ∈ ℓ^2 is a cyclic vector, i.e., span\{\tilde J^n e_0 : n ∈ N\} = ℓ^2, hence \tilde J has a simple spectrum and the spectral theorem tells us that there exists a positive measure µ (the spectral measure) such that ℓ^2 and L^2(µ) are isometric. The operator \tilde J : ℓ^2 → ℓ^2 corresponds to the multiplication operator M : L^2(µ) → L^2(µ) for which Mf(x) = x f(x) for every f ∈ L^2(µ). The isometry maps \tilde J^n e_0 to x^n, and the three-term recurrence relation (1.4) shows that the unit vector e_n ∈ ℓ^2 is mapped to the polynomial p_n. Obviously the orthogonality ⟨e_n, e_m⟩ = δ_{m,n} holds in ℓ^2, hence the isometry also gives ⟨p_n, p_m⟩ = δ_{m,n} in L^2(µ), which gives (1.2). The spectral theorem therefore proves the Favard theorem. The infinite Jacobi matrix (operator) therefore gives the orthogonality measure (which is the spectral measure of \tilde J), and the finite sections \tilde J_n give the orthonormal polynomials.

A very useful class of orthogonal polynomials is the class for which the recurrence coefficients converge:

    \lim_{n \to ∞} a_n = a ≥ 0,    \lim_{n \to ∞} b_n = b.    (1.5)

This class is known as M(a, b) and has been studied intensively by Nevai [17] and others. Consider the constant Jacobi operator

    \tilde J_0 = \begin{pmatrix}
    b & a &   &   \\
    a & b & a &   \\
      & a & b & a \\
      &   & \ddots & \ddots & \ddots
    \end{pmatrix},

then in the class M(a, b) the operator \tilde J - \tilde J_0 has diagonals that converge to 0, which means that \tilde J - \tilde J_0 is a compact operator. Weyl's theorem (see, e.g., [15]) then implies that \tilde J and \tilde J_0 have the same essential spectrum. The spectrum of \tilde J_0 for a > 0 is the interval [b - 2a, b + 2a] and the spectral measure has a density

    \frac{1}{2\pi a^2} \sqrt{4a^2 - (x-b)^2},    x ∈ [b - 2a, b + 2a].

Hence Weyl's theorem implies that the spectrum of \tilde J in M(a, b) is equal to [b - 2a, b + 2a] ∪ E, where E is bounded, at most denumerable, and the accumulation points (if any) are at b ± 2a. One also knows the asymptotic distribution of the zeros x_{1,n} < x_{2,n} < \cdots < x_{n,n} of the orthogonal polynomials p_n in M(a, b):

    \lim_{n \to ∞} \frac{1}{n} \sum_{j=1}^{n} f(x_{j,n})
      = \frac{1}{\pi} \int_{b-2a}^{b+2a} \frac{f(x)}{\sqrt{4a^2 - (x-b)^2}} \, dx,

which holds for every bounded and continuous real function f. The limit measure for this asymptotic zero distribution is the equilibrium measure for the set [b - 2a, b + 2a] in logarithmic potential theory.
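This can be checked numerically. In the sketch below (assumptions: Python with NumPy, and an O(1/k) perturbation of constant coefficients, which is our choice) the eigenvalue moments of a large finite section approach the moments of the arcsine (equilibrium) measure, here with a = 1/2 and b = 0, for which the first two moments are 0 and 2a^2 = 1/2:

```python
import numpy as np

# recurrence coefficients converging to a = 1/2, b = 0 (class M(1/2, 0)),
# perturbed by O(1/k) terms
n = 1000
k = np.arange(1, n)
a = 0.5 + 0.2 / k
b = 0.5 / np.arange(1, n + 1)
J = np.diag(b) + np.diag(a, 1) + np.diag(a, -1)
x = np.linalg.eigvalsh(J)

# arcsine measure on [-2a, 2a] = [-1, 1] has moments 0 and 2 a^2 = 1/2
print(np.mean(x), np.mean(x**2))
```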

1.4 Application: Toda lattice

An important application of Jacobi operators and the direct and inverse problem for orthogonal polynomials is the explicit solution of the differential equations for the Toda lattice, as was observed by Moser [16]. Consider the chain of differential equations

    \dot a_n = \frac12 a_n (b_{n-1} - b_n),
    \dot b_n = a_n^2 - a_{n+1}^2,        n ≥ 1,    (1.6)

with a_0 ≡ 0 and initial data a_{n+1}(0) > 0, b_n(0) ∈ R for n ∈ N. It is well known that this is an integrable system which can be written as \dot L = ML - LM, where (L, M) is the Lax pair

    L = \begin{pmatrix}
    b_0 & a_1 &     &        \\
    a_1 & b_1 & a_2 &        \\
        & a_2 & b_2 & a_3    \\
        &     & \ddots & \ddots & \ddots
    \end{pmatrix},
    M = \frac12 \begin{pmatrix}
    0   & -a_1 &      &      \\
    a_1 & 0    & -a_2 &      \\
        & a_2  & 0    & -a_3 \\
        &      & \ddots & \ddots & \ddots
    \end{pmatrix}.

In order to find the solution a_{n+1}(t), b_n(t) for t > 0, we first solve the direct problem for the Jacobi operator L(0) with the initial data. This gives the spectral measure µ_0 and the spectrum E, with corresponding orthogonal polynomials p_n(x; 0) for which

    \int_E p_n(x; 0) p_m(x; 0) \, d\mu_0(x) = \delta_{m,n}.

We modify the spectral measure and use the measure µ_t for which dµ_t(x) = e^{-xt} dµ_0(x), which has the same spectrum as µ_0 (isospectral deformation). Now we have to solve the inverse problem for the measure µ_t to find the orthogonal polynomials p_n(x; t) for which

    \int_E p_n(x; t) p_m(x; t) \, d\mu_t(x) = \delta_{m,n}.

The recurrence coefficients a_{n+1}(t), b_n(t) for these orthonormal polynomials then give the solution of the equations (1.6).
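For a finite Jacobi matrix (the open Toda chain, with the off-diagonal coefficients vanishing at both ends) this recipe can be carried out literally: the direct problem is an eigendecomposition, and the inverse problem can be done with the Lanczos process. The sketch below (ours; Python with NumPy, all function names hypothetical) checks the equations (1.6) by central finite differences:

```python
import numpy as np

def lanczos(lam, w):
    """Inverse problem for the discrete measure sum_j w_j delta_{lam_j}:
    recover the Jacobi coefficients (b, a) by the Lanczos process."""
    m = len(lam)
    A = np.diag(lam)
    Q = np.zeros((m, m))
    Q[:, 0] = np.sqrt(w / np.sum(w))
    b = np.zeros(m)
    a = np.zeros(m - 1)
    for k in range(m):
        v = A @ Q[:, k]
        b[k] = Q[:, k] @ v
        v -= b[k] * Q[:, k]
        if k > 0:
            v -= a[k - 1] * Q[:, k - 1]
        if k < m - 1:
            a[k] = np.linalg.norm(v)
            Q[:, k + 1] = v / a[k]
    return b, a

def toda_solution(b0, a0, t):
    """Solve (1.6) at time t via the isospectral deformation of the measure."""
    L0 = np.diag(b0) + np.diag(a0, 1) + np.diag(a0, -1)
    lam, V = np.linalg.eigh(L0)          # direct problem: spectral measure
    w = V[0, :] ** 2 * np.exp(-lam * t)  # deformed weights, e^{-xt} dmu_0
    return lanczos(lam, w)               # inverse problem

rng = np.random.default_rng(0)
b_init = rng.normal(size=6)
a_init = rng.uniform(0.5, 1.5, size=5)

h = 1e-5                                 # central finite differences
bp, ap = toda_solution(b_init, a_init, h)
bm, am = toda_solution(b_init, a_init, -h)
adot = (ap - am) / (2 * h)
bdot = (bp - bm) / (2 * h)

# right-hand sides of (1.6): da_n/dt = a_n (b_{n-1} - b_n)/2,
# db_n/dt = a_n^2 - a_{n+1}^2, with a_0 = a_6 = 0 on the finite chain
a_ext = np.concatenate(([0.0], a_init, [0.0]))
rhs_a = 0.5 * a_init * (b_init[:-1] - b_init[1:])
rhs_b = a_ext[:-1] ** 2 - a_ext[1:] ** 2
print(np.max(np.abs(adot - rhs_a)), np.max(np.abs(bdot - rhs_b)))
```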

A related dynamical system is the Langmuir lattice (or Kac-Van Moerbeke lattice [9]) for which we have the chain of differential equations

    \dot a_n = \frac12 a_n (a_{n-1}^2 - a_{n+1}^2),    n ≥ 1,    (1.7)

with a_0 ≡ 0. This can also be solved using the direct and inverse problem for orthogonal polynomials for initial data a_n(0) > 0 whenever n > 0. Indeed, consider the Jacobi matrix

    L(0) = \begin{pmatrix}
    0      & a_1(0) &        &        \\
    a_1(0) & 0      & a_2(0) &        \\
           & a_2(0) & 0      & a_3(0) \\
           &        & \ddots & \ddots & \ddots
    \end{pmatrix},

then we solve the direct problem to find the spectral measure µ_0 and the corresponding orthogonal polynomials p_n(x; 0). We now modify the measure and define a measure µ_t for which

    dµ_t(x) = e^{-x^2 t} dµ_0(x),

so that µ_t has the same spectrum as µ_0. Now solve the inverse problem for this new measure µ_t to find the orthogonal polynomials p_n(x; t). The Jacobi operator L(0) has zeros on the diagonal, which implies that the spectral measure µ_0 is symmetric with respect to the origin. As a consequence the orthogonal polynomials p_n(x; 0) are also symmetric in the sense that p_n(-x; 0) = (-1)^n p_n(x; 0). The new measure µ_t is also symmetric (since the modification e^{-x^2 t} is symmetric in x), hence the orthogonal polynomials p_n(x; t) are also symmetric and therefore satisfy a recurrence relation of the form

    x p_n(x; t) = a_{n+1}(t) p_{n+1}(x; t) + a_n(t) p_{n-1}(x; t),    n ≥ 0.

The recurrence coefficients a_n(t) then give the solution of the differential equations (1.7) for the Langmuir lattice. For the symmetric orthogonal polynomials one can write

    p_{2n}(x; t) = q_n(x^2; t),    p_{2n+1}(x; t) = x r_n(x^2; t),

where q_n(x; t) (n ∈ N) and r_n(x; t) (n ∈ N) are two new sequences of orthogonal polynomials for the measures µ_t^1 and µ_t^2 respectively, which now have a spectrum on the positive real axis and for which

    \int_0^{∞} f(x) \, dµ_t^1(x) = \int_{-∞}^{∞} f(x^2) \, dµ_t(x),
    \int_0^{∞} f(x) \, dµ_t^2(x) = \int_{-∞}^{∞} x^2 f(x^2) \, dµ_t(x).

Using the recurrence relation twice, we have

    x^2 p_n(x; t) = a_{n+1}(t) x p_{n+1}(x; t) + a_n(t) x p_{n-1}(x; t)
                  = a_{n+1}(t) a_{n+2}(t) p_{n+2}(x; t) + [a_{n+1}^2(t) + a_n^2(t)] p_n(x; t) + a_n(t) a_{n-1}(t) p_{n-2}(x; t),

so that the recurrence relation for q_n(x; t) becomes

    x q_n(x; t) = A_{n+1}(t) q_{n+1}(x; t) + B_n(t) q_n(x; t) + A_n(t) q_{n-1}(x; t),

with A_n(t) = a_{2n}(t) a_{2n-1}(t) and B_n(t) = a_{2n+1}^2(t) + a_{2n}^2(t). For the r_n(x; t) we have in a similar way

    x r_n(x; t) = C_{n+1}(t) r_{n+1}(x; t) + D_n(t) r_n(x; t) + C_n(t) r_{n-1}(x; t),

with C_n(t) = a_{2n+1}(t) a_{2n}(t) and D_n(t) = a_{2n+2}^2(t) + a_{2n+1}^2(t). Observe that (1.7) implies that

    \dot A_n = \frac12 A_n (B_{n-1} - B_n),
    \dot B_n = A_n^2 - A_{n+1}^2,        n ≥ 1,

so that the new recurrence coefficients (A_{n+1}, B_n) satisfy the chain of differential equations (1.6) for the Toda lattice. The same is true for (C_{n+1}, D_n). The transformations (a_{n+1} : n ≥ 0) ↦ (A_{n+1}, B_n : n ≥ 0) or (C_{n+1}, D_n : n ≥ 0) are known as Miura transformations and correspond, in the theory of orthogonal polynomials, to the well known transformation from a symmetric system of orthogonal polynomials on (-∞, ∞) to a system of orthogonal polynomials on (0, ∞), the classical example being the relationship between Hermite polynomials and Laguerre polynomials:

    H_{2n}(x) = (-1)^n 2^{2n} n! \, L_n^{(-1/2)}(x^2),
    H_{2n+1}(x) = (-1)^n 2^{2n+1} n! \, x L_n^{(1/2)}(x^2).
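The computation behind this observation is a pure chain-rule identity, which can be checked numerically. In the sketch below (ours; Python with NumPy, on a finite chain with a_0 = 0) the time derivatives defined by the Langmuir lattice (1.7) are mapped exactly onto the Toda equations (1.6):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 12                        # a_1, ..., a_N, with a_0 = a_{N+1} = 0
a = np.zeros(N + 2)
a[1:N + 1] = rng.uniform(0.5, 2.0, N)

# Langmuir lattice (1.7): da_k/dt = a_k (a_{k-1}^2 - a_{k+1}^2) / 2
adot = np.zeros_like(a)
adot[1:N + 1] = 0.5 * a[1:N + 1] * (a[:N] ** 2 - a[2:] ** 2)

# Miura transformation: A_n = a_{2n} a_{2n-1},  B_n = a_{2n+1}^2 + a_{2n}^2
n = np.arange(1, N // 2)
A = a[2 * n] * a[2 * n - 1]
Anext = a[2 * n + 2] * a[2 * n + 1]
Adot = adot[2 * n] * a[2 * n - 1] + a[2 * n] * adot[2 * n - 1]
Bprev = a[2 * n - 1] ** 2 + a[2 * n - 2] ** 2
Bcur = a[2 * n + 1] ** 2 + a[2 * n] ** 2
Bdot = 2 * (a[2 * n + 1] * adot[2 * n + 1] + a[2 * n] * adot[2 * n])

# the new variables satisfy the Toda equations (1.6)
print(np.max(np.abs(Adot - 0.5 * A * (Bprev - Bcur))))   # essentially zero
print(np.max(np.abs(Bdot - (A ** 2 - Anext ** 2))))      # essentially zero
```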

2 Multiple Orthogonal Polynomials

For multiple orthogonality we will spread the orthogonality conditions over several measures. Let r ≥ 1 be an integer and let µ_1, µ_2, \ldots, µ_r be r positive measures for which all the moments exist, each with infinitely many points in their supports E_1, E_2, \ldots, E_r. We will use multi-indices ~n = (n_1, n_2, \ldots, n_r) ∈ N^r. The multiple orthogonal polynomial p_{~n} is the monic polynomial of degree |~n| = n_1 + n_2 + \cdots + n_r which satisfies the orthogonality conditions

    \int_{E_1} p_{~n}(x) x^k \, dµ_1(x) = 0,    k = 0, 1, \ldots, n_1 - 1,    (2.8)
    \int_{E_2} p_{~n}(x) x^k \, dµ_2(x) = 0,    k = 0, 1, \ldots, n_2 - 1,    (2.9)
        ⋮
    \int_{E_r} p_{~n}(x) x^k \, dµ_r(x) = 0,    k = 0, 1, \ldots, n_r - 1.    (2.10)

This polynomial need not be unique; we need to put some extra conditions on the r measures µ_1, \ldots, µ_r in order that p_{~n} is uniquely defined by (2.8)–(2.10). Two useful systems of measures are:

• Angelesco systems [1], for which each E_i is a subset of an interval ∆_i with all ∆_i mutually disjoint: ∆_i ∩ ∆_j = ∅ for 1 ≤ i ≠ j ≤ r. For such a system one can easily show that p_{~n} has exactly n_j zeros in the interval ∆_j, for j = 1, 2, \ldots, r.

• An AT system is such that dµ_j(x) = u_j(x) dµ_0(x), where µ_0 has support E_0 ⊂ ∆_0 and u_j ≥ 0 on ∆_0. Furthermore we require that each linear combination

    \sum_{j=1}^{r} u_j(x) P_{n_j - 1}(x),

with P_{n_j - 1} a polynomial of degree at most n_j - 1, has at most |~n| - 1 zeros on ∆_0. Hence for an AT system all the measures are supported on one interval ∆_0, but we require that

    {(1, x, x^2, \ldots, x^{n_1 - 1}) u_1(x), \ldots, (1, x, x^2, \ldots, x^{n_r - 1}) u_r(x)}

form a Chebyshev system. In this case p_{~n} is unique and all its zeros are in ∆_0.

2.1 Recurrence relation

We will take a closer look at the multiple orthogonal polynomials for which all the indices in the multi-index ~n are nearly the same. In particular, for n = kr + j we will consider the multi-index ~s(n) = (k+1, k+1, \ldots, k+1, k, k, \ldots, k), in which the first j components are equal to k + 1, and investigate the polynomials P_n = p_{~s(n)}. A very useful relation for these polynomials is that they satisfy a linear recurrence relation of order r + 1, generalizing the second order linear recurrence relation (1.3) for ordinary monic orthogonal polynomials (for which r = 1). Indeed, we have

    x P_n(x) = P_{n+1}(x) + \sum_{i=0}^{r} a_i(n) P_{n-i}(x),    n ≥ 0,    (2.11)

with initial conditions P_0 = 1 and P_{-1} = P_{-2} = \cdots = P_{-r} = 0. To see why this is true, we will expand x P_n(x) using the basis functions P_k (k = 0, 1, \ldots, n+1):

    x P_n(x) = \sum_{i=0}^{n+1} a_{i,n} P_i(x),    (2.12)

with a_{n+1,n} = 1. Suppose n = kr + j; then we will show that a_{mr+ℓ,n} = 0 for m = 0, 1, \ldots, k-2 and ℓ = 0, \ldots, r-1, and when m = k-1 also for ℓ = 0, 1, \ldots, j-1, so that the sum reduces to \sum_{i=n-r}^{n+1} a_{i,n} P_i(x), which is of the form (2.11). We will use induction on m. First suppose m = 0; then we want to prove that a_{ℓ,n} = 0 for ℓ = 0, 1, \ldots, r-1. Integrate both sides of the equality (2.12) with respect to µ_1; then (2.8) implies that the right hand side reduces to a_{0,n} µ_1(E_1), and the left hand side is 0 whenever n > r, hence we see that a_{0,n} = 0. To see that a_{1,n} = 0 we integrate both sides of (2.12) with respect to µ_2, and in general integration with respect to µ_{ℓ+1} will show that a_{ℓ,n} = 0 for ℓ = 0, 1, \ldots, r-1. Now suppose that a_{Mr+ℓ,n} = 0 for M ≤ m-1 and ℓ = 0, 1, \ldots, r-1, with m ≤ k-2; then we will show that a_{mr+ℓ,n} = 0 for ℓ = 0, 1, \ldots, r-1 as well. Multiply both sides of (2.12) by x^m and integrate with respect to µ_1; then the right hand side only contains a_{mr,n} \int_{E_1} x^m P_{mr}(x) \, dµ_1(x). The integral cannot be zero, since otherwise we would have one orthogonality condition too many, which would imply that P_{mr} ≡ 0. The left hand side is also zero, which implies that a_{mr,n} = 0. In a similar way, we can integrate with respect to µ_{ℓ+1} to find that a_{mr+ℓ,n} = 0 for ℓ = 0, 1, \ldots, r-1. Finally, assume that m = k-1; then multiplying (2.12) by x^{k-1} and integrating with respect to µ_{ℓ+1} still gives 0 on the left hand side, but only for ℓ = 0, 1, \ldots, j-1, whereas the right hand side is proportional to a_{(k-1)r+ℓ,n}, which implies that a_{(k-1)r+ℓ,n} = 0 for ℓ = 0, 1, \ldots, j-1. This is what we wanted to prove.
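The recurrence (2.11) can be verified in exact arithmetic for a concrete Angelesco system. The sketch below (our illustration, Python standard library only; the choice of Lebesgue measures on [0, 1] and [2, 3] is ours) builds the polynomials P_n = p_{~s(n)} for r = 2 from the moment conditions (2.8)–(2.10) and checks that x P_n - P_{n+1} is an exact linear combination of P_n, P_{n-1}, P_{n-2}:

```python
from fractions import Fraction

def moment(i, k):
    # dmu_1 = dx on [0, 1] and dmu_2 = dx on [2, 3]: an Angelesco system
    if i == 1:
        return Fraction(1, k + 1)
    return Fraction(3 ** (k + 1) - 2 ** (k + 1), k + 1)

def solve(rows, rhs):
    """Exact Gauss-Jordan elimination over the rationals."""
    n = len(rhs)
    M = [rows[i][:] + [rhs[i]] for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[r][n] / M[r][r] for r in range(n)]

def mop(n):
    """Coefficients (lowest degree first) of the monic multiple orthogonal
    polynomial P_n = p_{s(n)} with s(n) = (ceil(n/2), floor(n/2))."""
    n1, n2 = (n + 1) // 2, n // 2
    rows, rhs = [], []
    for i, ni in ((1, n1), (2, n2)):
        for k in range(ni):                 # conditions (2.8)-(2.10)
            rows.append([moment(i, k + j) for j in range(n)])
            rhs.append(-moment(i, k + n))
    return (solve(rows, rhs) if n else []) + [Fraction(1)]

P = [mop(n) for n in range(8)]
ok = True
for n in range(2, 7):
    lhs = [Fraction(0)] + P[n]                     # x P_n ...
    lhs = [u - v for u, v in zip(lhs, P[n + 1])]   # ... minus P_{n+1}
    b = lhs[n]                                     # match leading coefficients
    c = lhs[n - 1] - b * P[n][n - 1]
    d = lhs[n - 2] - b * P[n][n - 2] - c * P[n - 1][n - 2]
    resid = [u - b * v - c * w - d * z for u, v, w, z in
             zip(lhs, P[n] + [0], P[n - 1] + [0, 0], P[n - 2] + [0, 0, 0])]
    ok = ok and all(r == 0 for r in resid)         # remainder must vanish
print("four-term recurrence holds exactly:", ok)
```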

2.2 Banded Hessenberg matrix

We can now introduce the banded Hessenberg matrix

    H_n = \begin{pmatrix}
    a_0(0) & 1      &        &        &        &        \\
    a_1(1) & a_0(1) & 1      &        &        &        \\
    a_2(2) & a_1(2) & a_0(2) & 1      &        &        \\
    \vdots &        & \ddots & \ddots & \ddots &        \\
    a_r(r) & a_{r-1}(r) & \cdots & a_1(r) & a_0(r) & 1  \\
           & \ddots &        & \ddots & \ddots & \ddots \\
           &        & a_r(n-1) & \cdots & a_1(n-1) & a_0(n-1)
    \end{pmatrix}    (2.13)

and the recurrence relation (2.11) now becomes

    H_n \begin{pmatrix} P_0(x) \\ P_1(x) \\ \vdots \\ P_{n-1}(x) \end{pmatrix}
      = x \begin{pmatrix} P_0(x) \\ P_1(x) \\ \vdots \\ P_{n-1}(x) \end{pmatrix}
      - P_n(x) \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix},

so that each zero of P_n is an eigenvalue of H_n. But the matrix H_n is not symmetric and there is no simple way to make it symmetric (unless r = 1), so there is no reason in general that the eigenvalues (zeros) should be real. Nevertheless, in many cases (e.g., for an Angelesco system or for an AT system) the zeros turn out to be real, which already gives some indication that there is some relationship between the sequences a_0, a_1, \ldots, a_r. In general we do not believe that every banded Hessenberg matrix will lead to a system of multiple orthogonal polynomials with r measures on the real line, but that one needs extra conditions on these recurrence coefficients for a bona fide Favard theorem. An important case is a banded Hessenberg matrix with entries only on the extreme diagonals, i.e.,

    H_n = \begin{pmatrix}
    0      & 1      &        &        &        \\
    0      & 0      & 1      &        &        \\
    \vdots &        & \ddots & \ddots &        \\
    a_r    & 0      & \cdots & 0      & 1      \\
           & \ddots &        &        & \ddots \\
           &        & a_{n-1} & \cdots & 0
    \end{pmatrix}    (2.14)

with a_k > 0 for k ≥ r. In this case Aptekarev et al. [4] proved that the corresponding polynomials P_n will be the multiple orthogonal polynomials p_{~s(n)} on an (r+1)-star in the complex plane, i.e., the measures µ_1, \ldots, µ_r are supported on

    Γ_r = \bigcup_{j=0}^{r} \{ x e^{2\pi i j/(r+1)} : x ≥ 0 \},    (2.15)

and they are all radially symmetric in the sense that they are invariant under a rotation over an angle 2π/(r+1). It is therefore sufficient to restrict the measures µ_1, µ_2, \ldots, µ_r to [0, ∞) and to consider the measures dν_j(x) = dµ_j(x^{1/(r+1)}).
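For the constant coefficient case a_k = 1 (our choice) this can be seen numerically: every eigenvalue λ of the truncation (2.14) with r = 2 satisfies λ^3 ∈ [0, ∞), i.e., it lies on the 3-star Γ_2. A sketch of ours, assuming Python with NumPy:

```python
import numpy as np

# Hessenberg matrix (2.14) with ones on the superdiagonal and constant
# entries a_k = 1 on the r-th subdiagonal, here with r = 2
n, r = 9, 2
H = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - r), -r)

lam = np.linalg.eigvals(H)
# every eigenvalue lies on the 3-star (2.15): lambda^3 is real and >= 0
print(np.max(np.abs((lam ** 3).imag)))   # ~0
print((lam ** 3).real.min())             # >= 0
```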

2.3 Some examples

As a typical example we will take a generalization of the Jacobi polynomials, first considered by Pineiro [22]; see also Nikishin and Sorokin [20, p. 162]. The r orthogonalizing measures are given by

    dµ_j(x) = x^{α_j} (1-x)^{α_0} \, dx,    x ∈ (0, 1),

where α_i > -1 for i = 0, 1, \ldots, r and α_i - α_j ∉ Z for 1 ≤ i ≠ j ≤ r. The multiple orthogonal polynomials can be found explicitly using a Rodrigues type formula

    (-1)^{|~n|} \prod_{j=1}^{r} (|~n| + α_0 + α_j + 1)_{n_j} \, p_{~n}(x)
      = (1-x)^{-α_0} \prod_{j=1}^{r} \left[ x^{-α_j} D^{n_j} x^{n_j + α_j} \right] (1-x)^{|~n| + α_0},

where the product of the differential operators can be taken in any order. For r = 1 we have the Jacobi polynomials. For r = 2 we have the recurrence relation

    x P_n(x) = P_{n+1}(x) + b_n P_n(x) + c_n P_{n-1}(x) + d_n P_{n-2}(x),

with rather involved recurrence coefficients. Asymptotically they behave as

    \lim_{n \to ∞} b_n = 3 \left( \frac{4}{27} \right),
    \lim_{n \to ∞} c_n = 3 \left( \frac{4}{27} \right)^2,
    \lim_{n \to ∞} d_n = \left( \frac{4}{27} \right)^3.

There is a similar generalization of the Laguerre polynomials by taking

    dµ_j(x) = x^{α_j} e^{-x} \, dx,    x ∈ (0, ∞),

where α_j > -1 for j = 1, 2, \ldots, r and α_i - α_j ∉ Z for 1 ≤ i ≠ j ≤ r. The recurrence relation for these multiple Laguerre polynomials (for r = 2) is

    x P_n(x) = P_{n+1}(x) + b_n P_n(x) + c_n P_{n-1}(x) + d_n P_{n-2}(x),

with

    b_{2n} = 3n + α_1 + 1,                b_{2n+1} = 3n + α_2 + 2,
    c_{2n} = n(3n + α_1 + α_2),           c_{2n+1} = (n+1)(3n + α_1 + α_2) - α_1 + 1,
    d_{2n} = n(n + α_1)(n + α_1 - α_2),   d_{2n+1} = n(n + α_2)(n + α_2 - α_1).

2.4 Spectral theory and perturbations

The operator H : ℓ^2 → ℓ^2 associated with the infinite matrix

    H = \begin{pmatrix}
    a_0(0) & 1      &        &        &        &        \\
    a_1(1) & a_0(1) & 1      &        &        &        \\
    a_2(2) & a_1(2) & a_0(2) & 1      &        &        \\
    \vdots &        & \ddots & \ddots & \ddots &        \\
    a_r(r) & a_{r-1}(r) & \cdots & a_1(r) & a_0(r) & 1  \\
           & \ddots &        & \ddots & \ddots & \ddots
    \end{pmatrix}

is an interesting object for further study. One would hope that the spectral theory of this operator will give useful information about the orthogonalizing measures (µ_1, \ldots, µ_r), in particular about the support of these measures. Partial results in this direction have been obtained by Kalyagin [11], [12], but there are still various open problems. The main open problem is to find conditions on the sequences a_0, a_1, \ldots, a_r on the diagonals that lead to multiple orthogonal polynomials on the real line. The next problem is to find out from the operator H when one is dealing with an Angelesco system or an AT system (in particular a Nikishin system), or any of the mixtures which were recently analyzed by Gonchar et al. [7]. The main difference with the spectral analysis of the Jacobi matrix for orthogonal polynomials is that there is no way to symmetrize H, so that H is essentially nonsymmetric. We can therefore not use the spectral theorem for selfadjoint operators. Furthermore, there is no cyclic vector, but one needs to work with a cyclic set, which can be taken as {e_0, e_1, \ldots, e_{r-1}}. If all the recurrence coefficients are bounded, then H is a bounded operator and

    span\{ H^{n_1} e_0, H^{n_2} e_1, \ldots, H^{n_r} e_{r-1} : ~n ∈ N^r \} = ℓ^2.

A possible way to do the spectral analysis is to work with the resolvent (zI - H)^{-1} and the resolvent functions

    f_j(z) = ⟨ (zI - H)^{-1} e_{j-1}, e_0 ⟩,    j = 1, 2, \ldots, r.

If one looks for the Hermite-Padé approximation (type II) for the vector (f_1, f_2, \ldots, f_r), i.e., the rational approximation with common denominator p_{~n,0} of degree |~n| and numerators (p_{~n,1}, p_{~n,2}, \ldots, p_{~n,r}) for which

    p_{~n,0}(z) f_j(z) - p_{~n,j}(z) = O\!\left( \frac{1}{z^{n_j + 1}} \right),    z → ∞,

then this common denominator is precisely the multiple orthogonal polynomial p_{~n} that we are interested in.

The connection between the limit behaviour of the zeros of multiple orthogonal polynomials and the spectrum of the operator H is not as strong as in the case of Jacobi operators. For Jacobi operators in the class M(a, b) the limit set of the zeros coincides with the spectrum of the operator. For r > 1 this is no longer true, as can be seen by the following extension of the class M(a, b). Suppose we take r = 2 and that we examine the multiple orthogonal polynomials for which the recurrence coefficients have the following asymptotic behaviour:

    \lim_{n \to ∞} a_0(n) = 3a,    \lim_{n \to ∞} a_1(n) = 3a^2,    \lim_{n \to ∞} a_2(n) = a^3.    (2.16)

This class is not empty, since the Jacobi-Pineiro polynomials belong to it for a = 4/27. If we consider the constant Hessenberg operator

    H_0 = \begin{pmatrix}
    3 & 1 &   &   &   \\
    3 & 3 & 1 &   &   \\
    1 & 3 & 3 & 1 &   \\
      & 1 & 3 & 3 & 1 \\
      &   & \ddots & \ddots & \ddots & \ddots
    \end{pmatrix},    (2.17)

then H - H_0 will be a compact operator, so that the essential spectrum of H and H_0 is the same. The spectrum of H_0 has been studied before, since H_0 is a Toeplitz matrix associated with a Laurent polynomial (see, e.g., Schmidt and Spitzer [23] and Hirschman [8]). Without loss of generality we take a = 1; then the corresponding Laurent polynomial is

    g(w) = w + 3 + 3/w + 1/w^2 = w (1 + 1/w)^3.

Consider the sets S_t = {g(w) : |w| = t} and the sets S_t^+ consisting of the interior and the boundary of S_t. For t large the curves S_t look like circles and are analytic. As t decreases these curves shrink until t = 2, in which case the curve has a cusp at the real point 27/4. If t decreases further, then the curve S_t has a double point on the interval [0, 27/4], and this double point takes any value on the interval (0, 27/4), starting from 27/4 for t = 2− and ending at 0 for t = 1+. The winding number of the curve S_t is one for each point in the interior of S_t^+. For t = 1 the curve S_1 has an inward cusp at 0, and for 0 < t < 1 the curves S_t have a double point to the left of 0. The winding number of the curve S_t is then one or two for each point in the interior of S_t^+. The spectrum σ(H_0) is precisely S_1^+ (the shaded area in Figure 1). On the other hand, the set of limit points of the eigenvalues λ_{j,n} (1 ≤ j ≤ n) of the truncated n × n matrices H_n, i.e., the set

    B = \{ λ_{j,n} : 1 ≤ j ≤ n, n ∈ N \}',

is a strict subset of σ(H_0) and is given by \bigcap_{t>0} S_t^+, which turns out to be the interval [0, 27/4]. Another description of the set B is to look at the zeros of z^2 (g(z) - λ) = (1+z)^3 - z^2 λ. There are three zeros z_1(λ), z_2(λ), z_3(λ), and we order them so that |z_1(λ)| ≤ |z_2(λ)| ≤ |z_3(λ)|. These zeros can be found explicitly using Cardano's formula. Then the set B is also equal to the set {λ : |z_2(λ)| = |z_3(λ)|}. In order to determine the supports of the orthogonalizing measures (µ_1, µ_2), the set B seems to be more relevant than the spectrum of the operator.
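The contrast between σ(H_0) = S_1^+ and the limit set B = [0, 27/4] already shows up for moderate truncations: the computed eigenvalues are numerically real and lie inside [0, 27/4], well away from most of the two-dimensional spectrum. A sketch of ours, assuming Python with NumPy:

```python
import numpy as np

# truncation of the Toeplitz matrix H0 in (2.17) with a = 1: diagonals
# 1 (super), 3 (main), 3 (first sub), 1 (second sub); symbol g(w) = w(1+1/w)^3
n = 16
H = (np.diag(np.ones(n - 1), 1) + 3 * np.eye(n)
     + 3 * np.diag(np.ones(n - 1), -1) + np.diag(np.ones(n - 2), -2))

lam = np.linalg.eigvals(H)
# the eigenvalues cluster on B = [0, 27/4], not on the full spectrum S_1^+
print(np.max(np.abs(lam.imag)))          # numerically real
print(lam.real.min(), lam.real.max())    # inside (0, 27/4)
```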
The asymptotic distribution of the zeros of multiple orthogonal polynomials P_n for which the recurrence coefficients satisfy (2.16) with a = 4/27 is given by

    \lim_{n \to ∞} \frac{1}{n} \sum_{k=1}^{n} f(x_{k,n})
      = \frac{\sqrt{3}}{4\pi} \int_0^1 f(t) \, t^{-2/3} (1-t)^{-1/2}
        \left[ \left( 1 + \sqrt{1-t} \right)^{1/3} + \left( 1 - \sqrt{1-t} \right)^{1/3} \right] dt.
This limiting measure appeared earlier in [14]. Observe that the density has a stronger singularity at 0 than at 1 so that the zeros are shifted towards 0 as compared to the zero distribution in the class M(1/4, 1/2) which also lives on [0, 1]. If we take the balayage of this measure to the boundary St for t > 0, then this balayage measure is the equilibrium measure for the set St+ . So even though the limit distribution is supported on B = [0, 1], it is still characterized as an equilibrium measure for the spectrum S1+ of H0 .


Figure 1: Spectrum for the Toeplitz matrix H0 and the limit set of the zeros

2.5 Application: Bogoyavlenskii lattice

Consider the higher order chain of differential equations for the Bogoyavlenskii lattice [5], [6]:

    \dot a_n = a_n \left( \sum_{j=1}^{r} a_{n-j} - \sum_{j=1}^{r} a_{n+j} \right),    n ≥ 1,    (2.18)

with a_0 ≡ 0 and initial data a_n(0), n ≥ 1. If the initial data are such that a_n(0) > 0, then one can find the solution of this dynamical system by using banded Hessenberg operators for multiple orthogonal polynomials, much as for the Toda lattice and the Langmuir lattice [21], [24], [3], [4]. Indeed, start with the Hessenberg operator for the initial data by putting them on the r-th subdiagonal under the main diagonal, with r zero diagonals up to the superdiagonal, which contains ones:

    H(0) = \begin{pmatrix}
    0      & 1      &        &        &        &        \\
    0      & 0      & 1      &        &        &        \\
    \vdots &        & \ddots & \ddots &        &        \\
    a_1(0) & 0      & \cdots & 0      & 1      &        \\
           & a_2(0) & 0      & \cdots & 0      & 1      \\
           &        & \ddots &        & \ddots & \ddots
    \end{pmatrix}.

This operator corresponds to a system of multiple orthogonal polynomials on the (r+1)-star Γ_r given by (2.15), for r measures (µ_{1,0}, µ_{2,0}, \ldots, µ_{r,0}) which are invariant under a rotation over an angle 2π/(r+1), i.e., for j = 1, 2, \ldots, r we have the orthogonality conditions

    \int_{Γ_r} p_{~n}(x; 0) x^k \, dµ_{j,0}(x) = 0,    k = 0, 1, 2, \ldots, n_j - 1.

We will now apply the dynamics and consider the measures

    dµ_{j,t}(x) = e^{-x^{r+1} t} \, dµ_{j,0}(x),    j = 1, 2, \ldots, r;

then the system (µ_{1,t}, µ_{2,t}, \ldots, µ_{r,t}) lives again on the (r+1)-star Γ_r, and these measures are again invariant under the rotation over an angle 2π/(r+1), thanks to the presence of x^{r+1} in the exponential function. Let p_{~n}(x; t) be the multiple orthogonal polynomials for these new measures; then the polynomials P_n(x; t) satisfy a recurrence relation (2.11) for which all the recurrence coefficients, except for the last one, are identically zero. Hence the Hessenberg operator is of the form

    H(t) = \begin{pmatrix}
    0      & 1      &        &        &        &        \\
    0      & 0      & 1      &        &        &        \\
    \vdots &        & \ddots & \ddots &        &        \\
    a_1(t) & 0      & \cdots & 0      & 1      &        \\
           & a_2(t) & 0      & \cdots & 0      & 1      \\
           &        & \ddots &        & \ddots & \ddots
    \end{pmatrix}

and the a_n(t) on the lower diagonal are the solution of the differential equations (2.18). Observe that the case r = 1 corresponds to the Langmuir lattice if we replace a_n by a_n^2.
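This last correspondence can be tested numerically on a finite chain with zero boundary data (our setup, assuming Python with NumPy): integrating the Bogoyavlenskii lattice (2.18) with r = 1 from squared initial data should give exactly the squares of the Langmuir solution (1.7):

```python
import numpy as np

def rk4(f, y, t, steps):
    """Classical fourth-order Runge-Kutta integrator."""
    h = t / steps
    for _ in range(steps):
        k1 = f(y); k2 = f(y + h / 2 * k1)
        k3 = f(y + h / 2 * k2); k4 = f(y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

N = 8                 # a_1, ..., a_N on a finite chain, zero boundary values

def bogo(a):          # (2.18) with r = 1
    ae = np.concatenate(([0.0], a, [0.0]))
    return a * (ae[:N] - ae[2:])

def langmuir(b):      # (1.7)
    be = np.concatenate(([0.0], b, [0.0]))
    return 0.5 * b * (be[:N] ** 2 - be[2:] ** 2)

rng = np.random.default_rng(2)
b0 = rng.uniform(0.5, 1.5, N)

t = 0.3
a_t = rk4(bogo, b0 ** 2, t, 400)
b_t = rk4(langmuir, b0, t, 400)
# the substitution a_n = b_n^2 maps Langmuir solutions to Bogoyavlenskii ones
print(np.max(np.abs(a_t - b_t ** 2)))   # ~0 up to the RK4 error
```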

References

[1] M. A. Angelesco, Sur deux extensions des fractions continues algébriques, C.R. Acad. Sci. Paris 18 (1919), 262–263.

[2] A. I. Aptekarev, Multiple orthogonal polynomials, J. Comput. Appl. Math. 99 (1998), 423–447.

[3] A. I. Aptekarev, V. A. Kaliaguine, Complex rational approximation and difference equations, Suppl. Rend. Circ. Mat. Palermo, II 52 (1998), 3–21.

[4] A. Aptekarev, V. Kaliaguine, J. Van Iseghem, Genetic sum representation for the moments of a system of Stieltjes functions and its application, manuscript.

[5] O. I. Bogoyavlenskii, Some constructions of integrable dynamical systems, Izv. Akad. Nauk SSSR, Ser. Mat. 51 (1987) no. 4, 737–766 (in Russian); Math. USSR Izv. 31 (1988) no. 1, 47–75.

[6] O. I. Bogoyavlenskii, Integrable dynamical systems associated with the KdV equation, Izv. Akad. Nauk, Ser. Mat. 51 (1987) no. 6, 1123–1141 (in Russian); Math. USSR Izv. 31 (1988) no. 3, 435–454.

[7] A. A. Gonchar, E. A. Rakhmanov, V. N. Sorokin, Hermite-Padé approximants for systems of Markov-type functions, Mat. Sb. 188 (1997), 33–58 (in Russian); Sbornik Math. 188 (1997), 671–696.

[8] I. I. Hirschman Jr., The spectra of certain Toeplitz matrices, Illinois J. Math. 11 (1967), 145–159.

[9] M. Kac, P. Van Moerbeke, On an explicitly soluble system of nonlinear differential equations related to certain Toda lattices, Adv. Math. 16 (1975), 160–169.

[10] V. A. Kalyagin, On a class of polynomials defined by two orthogonality relations, Mat. Sb. 110 (1979), 609–627 (in Russian); Math. USSR Sb. 38 (1981), 563–580.

[11] V. A. Kalyagin, Hermite-Padé approximants and spectral analysis of nonsymmetric operators, Mat. Sb. 185 (1994), 79–100 (in Russian); Sbornik Math. 82 (1995), 199–216.

[12] V. A. Kaliaguine, On operators associated with Angelesco systems, East J. Approx. 1 no. 2 (1995), 157–170.

[13] V. A. Kaliaguine, A. Ronveaux, On a system of classical polynomials of simultaneous orthogonality, J. Comput. Appl. Math. 67 (1996), 207–217.

[14] A. B. J. Kuijlaars, Chebyshev quadrature for measures with a strong singularity, J. Comput. Appl. Math. 65 (1995), 207–214.

[15] A. Máté, P. Nevai, W. Van Assche, The supports of measures associated with orthogonal polynomials and the spectra of the related self adjoint operators, Rocky Mountain J. Math. 21 (1991), 501–527.

[16] J. Moser, Three integrable Hamiltonian systems connected with isospectral deformations, Adv. Math. 16 (1975), 197–220.

[17] P. G. Nevai, Orthogonal Polynomials, Memoirs Amer. Math. Soc. 213, Providence, RI, 1979.

[18] E. M. Nikishin, A system of Markov functions, Vestnik Mosk. Univ., Ser. I (1979), no. 4, 60–63 (in Russian); Moscow Univ. Math. Bull. 34 (1979), 63–66.

[19] E. M. Nikishin, On simultaneous Padé approximants, Mat. Sb. 113 (1980), 499–519 (in Russian); Math. USSR Sb. 41 (1982), 409–425.

[20] E. M. Nikishin, V. N. Sorokin, Rational Approximations and Orthogonality, Translations of Mathematical Monographs 92, Amer. Math. Soc., Providence, RI, 1991.

[21] A. S. Osipov, Discrete analog of the Korteweg-de Vries equation, Mat. Zametki 56 (1994) no. 6, 141–144 (in Russian); Math. Notes 56 (1994) no. 6, 1312–1314.

[22] L. R. Pineiro, On simultaneous approximations for a collection of Markov functions, Vestnik Mosk. Univ., Ser. I (1987), no. 2, 67–70 (in Russian); Moscow Univ. Math. Bull. 42 (2) (1987), 52–55.

[23] P. Schmidt, F. Spitzer, The Toeplitz matrices of an arbitrary Laurent polynomial, Math. Scand. 8 (1960), 15–38.

[24] V. N. Sorokin, Integrable nonlinear dynamical systems of Langmuir lattice type, Mat. Zametki.

[25] G. Szegő, Orthogonal Polynomials, Amer. Math. Soc. Colloq. Publ. 23, fourth edition, 1975.

[26] W. Van Assche, Asymptotics for Orthogonal Polynomials, Lecture Notes in Mathematics 1265, Springer-Verlag, Berlin, 1987.

[27] W. Van Assche, Orthogonal polynomials in the complex plane and on the real line, Fields Institute Comm. 14 (1997), 211–245.

Department of Mathematics
Katholieke Universiteit Leuven
Celestijnenlaan 200 B
B-3001 Heverlee
BELGIUM
[email protected]
