Orthogonal polynomials and Banach algebras

Summer School on Orthogonal Polynomials, Harmonic Analysis and Applications, Inzell, September 17-21, 2001
Ryszard Szwarc
Contents

1 Introduction
2 Introduction to orthogonal polynomials
  2.1 Preliminaries
  2.2 Differential equations
  2.3 General orthogonal polynomials and recurrence relations
  2.4 Zeros of orthogonal polynomials
3 Nonnegative linearization
  3.1 Preliminaries
  3.2 Renormalization
  3.3 History
  3.4 Discrete boundary value problem
  3.5 Quadratic transformation
4 Commutative Banach algebras
  4.1 Preliminaries
  4.2 Convolution associated with orthogonal polynomials
5 Open problems
References
Institute of Mathematics, University of Wroclaw, pl. Grunwaldzki 2/4, 50-384 Wroclaw, Poland;
[email protected]
1 Introduction

One of the central problems in the theory of orthogonal polynomials is to determine which orthogonal systems have the property that the coefficients in the expansion of the product
$$p_n(x)\,p_m(x) = \sum_k c(n,m,k)\, p_k(x)$$
are nonnegative for all $n$, $m$ and $k$. This property, called nonnegative linearization, has important consequences in a number of topics, such as:

- pointwise estimates of polynomials,
- convolution structures associated with orthogonal polynomials,
- positive definiteness of orthogonal polynomials,
- limit theorems for random walks associated with recurrence relations.

Usually in these applications the explicit formulas for the coefficients $c(n,m,k)$ are not necessary; positivity is the only property that counts. For many orthogonal systems these coefficients have in fact been computed explicitly in the past. Nonetheless there are still many systems, including the nonsymmetric Jacobi polynomials, for which such explicit expressions are not available. That is why criteria which imply nonnegative linearization are of great importance.

Some orthogonal polynomial systems show up as matrix coefficients of irreducible representations of classical matrix groups. The product formula can then be interpreted in terms of the decomposition of the tensor product of two such representations into irreducible components. In that case the coefficients $c(n,m,k)$ are products of multiplicities of representations and lengths of certain cyclic vectors, hence always nonnegative.

The main part of these notes is devoted to the nonnegative linearization problem and the convolution structure it induces. We are going to state and prove certain criteria for nonnegative linearization. An interesting aspect of this property is its connection to a maximum principle for a certain discrete boundary value problem.

We do not expect the reader to know much about orthogonal polynomials. For this reason we start with a concise introduction to orthogonal polynomials which contains the basic facts we are going to use in the other sections. The proofs are complete (at least in our opinion), although certain points are left to the reader; these places are indicated as Exercises. A number of examples illustrate the theoretical results. The notes end with three open problems that we tried to solve in the past without success. It would be great if one of the readers could solve any of them.
2 Introduction to orthogonal polynomials
2.1 Preliminaries
The notion of orthogonality comes from elementary geometry. Usually the first time we encounter this notion in the context of function spaces is while studying Fourier series. For example, most students know the following formulas:
$$\int_0^\pi \cos m\theta\, \cos n\theta\, d\theta = 0 \quad \text{for } n \neq m,$$
$$\int_0^\pi \sin m\theta\, \sin n\theta\, d\theta = 0 \quad \text{for } n \neq m.$$
This means the trigonometric polynomials $\cos n\theta$ are orthogonal to each other, and the $\sin n\theta$ are orthogonal to each other as well, relative to the inner product
$$(f,g) = \int_0^\pi f(\theta)\, g(\theta)\, d\theta.$$
What about algebraic polynomials? Can we produce orthogonal algebraic polynomials? One can show by induction and by trigonometry (Exercise) that there are algebraic polynomials $T_n$ and $U_n$ of exact degree $n$ such that
$$T_n(\cos\theta) = \cos n\theta, \qquad U_n(\cos\theta) = \frac{\sin(n+1)\theta}{\sin\theta}.$$
The polynomials $T_n$ are called the Chebyshev polynomials, while the $U_n$ are called the Chebyshev polynomials of the second kind. We have
$$T_0 = 1, \quad T_1(x) = x, \quad T_2(x) = 2x^2 - 1, \quad T_3(x) = 4x^3 - 3x,$$
$$U_0 = 1, \quad U_1(x) = 2x, \quad U_2(x) = 4x^2 - 1, \quad U_3(x) = 8x^3 - 4x.$$
The orthogonality relations can also be carried over to the polynomials $T_n$ and $U_n$. Indeed, by the substitution $x = \cos\theta$ we obtain for $n \neq m$
$$0 = \int_0^\pi \cos n\theta\, \cos m\theta\, d\theta = \int_{-1}^{1} T_n(x)\, T_m(x)\, \frac{dx}{\sqrt{1-x^2}},$$
$$0 = \int_0^\pi \sin(n+1)\theta\, \sin(m+1)\theta\, d\theta = \int_0^\pi \frac{\sin(n+1)\theta}{\sin\theta}\,\frac{\sin(m+1)\theta}{\sin\theta}\, \sin^2\theta\, d\theta = \int_{-1}^{1} U_n(x)\, U_m(x)\, \sqrt{1-x^2}\, dx.$$
Thus the $T_n$ are orthogonal to each other in the Hilbert space $L^2((-1,1), w(x)dx)$ with the weight $w(x) = (1-x^2)^{-1/2}$, while the $U_n$ are orthogonal with the weight $w(x) = (1-x^2)^{1/2}$.

The trigonometric polynomials $\cos n\theta$ and $\sin n\theta$ satisfy the simple differential equations
$$\frac{d^2}{d\theta^2}(\cos n\theta) = -n^2 \cos n\theta, \qquad \frac{d^2}{d\theta^2}(\sin n\theta) = -n^2 \sin n\theta.$$
By changing the variable $x = \cos\theta$ we get (Exercise)
$$(1-x^2)\,\frac{d^2}{dx^2}T_n - x\,\frac{d}{dx}T_n = -n^2\, T_n,$$
$$(1-x^2)\,\frac{d^2}{dx^2}U_n - 3x\,\frac{d}{dx}U_n = -n(n+2)\, U_n.$$
Moreover, by using the trigonometric identities
$$2\cos\theta\, \cos n\theta = \cos(n+1)\theta + \cos(n-1)\theta,$$
$$2\cos\theta\, \sin n\theta = \sin(n+1)\theta + \sin(n-1)\theta,$$
we obtain the recurrence relations
$$xT_0 = T_1, \qquad 2xT_n = T_{n+1} + T_{n-1}, \quad n \geq 1,$$
$$2xU_0 = U_1, \qquad 2xU_n = U_{n+1} + U_{n-1}, \quad n \geq 1.$$
These formulas allow us to compute the polynomials recursively, assuming that $T_0 = U_0 = 1$. Observe that the formulas for $T_n$ and $U_n$ differ only at $n = 0$.

The fact that the polynomials are eigenfunctions of a second order differential operator immediately implies their orthogonality with respect to a certain weight. This is a point at which we can consider a more general situation.
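The recurrences above translate directly into a stable evaluation scheme. Here is a minimal sketch in plain Python (not part of the original notes), checked against the trigonometric definitions of $T_n$ and $U_n$:

```python
import math

def cheb_T(n, x):
    # T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x), with T_0 = 1, T_1 = x
    t0, t1 = 1.0, x
    for _ in range(n - 1):
        t0, t1 = t1, 2 * x * t1 - t0
    return t1 if n else t0

def cheb_U(n, x):
    # U_{k+1}(x) = 2x U_k(x) - U_{k-1}(x), with U_0 = 1, U_1 = 2x
    u0, u1 = 1.0, 2 * x
    for _ in range(n - 1):
        u0, u1 = u1, 2 * x * u1 - u0
    return u1 if n else u0

# check against T_n(cos t) = cos(n t) and U_n(cos t) = sin((n+1)t)/sin(t)
theta = 0.7
for n in range(8):
    assert abs(cheb_T(n, math.cos(theta)) - math.cos(n * theta)) < 1e-12
    assert abs(cheb_U(n, math.cos(theta)) - math.sin((n + 1) * theta) / math.sin(theta)) < 1e-12
```

The three-term recurrence is numerically well behaved on $[-1,1]$ because the computed values stay bounded by the true polynomial values there.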
2.2 Differential equations
Assume we are given a sequence of functions $\varphi_n(x)$ of class $C^1([-1,1])$ which satisfy a second order differential equation of the form
$$L\varphi_n = \lambda_n \varphi_n, \qquad L = \frac{1}{w(x)}\,\frac{d}{dx}\left( A(x)\,\frac{d}{dx} \right),$$
where $A(x)$ and $w(x)$ are positive and smooth in $(-1,1)$. Assume also that $A(x)$ is continuous on $[-1,1]$, that $A(-1) = A(1) = 0$, and that $\int_{-1}^{1} w(x)\,dx < +\infty$. It turns out that these assumptions imply the functions $\varphi_n$ are orthogonal for different values of $\lambda_n$ in the Hilbert space $L^2((-1,1), w(x)dx)$. Indeed, let $\lambda_n \neq \lambda_m$. Then, integrating by parts twice,
$$\lambda_n \int_{-1}^{1} \varphi_n \varphi_m\, w(x)\,dx = \int_{-1}^{1} \frac{d}{dx}\!\left( A(x)\frac{d\varphi_n}{dx} \right) \varphi_m\, dx = -\int_{-1}^{1} A(x)\,\frac{d\varphi_n}{dx}\,\frac{d\varphi_m}{dx}\, dx = \lambda_m \int_{-1}^{1} \varphi_n \varphi_m\, w(x)\,dx.$$
For the polynomials $T_n$ we have
$$w(x) = \frac{1}{\sqrt{1-x^2}}, \qquad A(x) = \sqrt{1-x^2},$$
while for the $U_n$
$$w(x) = \sqrt{1-x^2}, \qquad A(x) = (1-x^2)^{3/2}.$$
In general, if the differential operator $L$ is of the form
$$L = C(x)\,\frac{d^2}{dx^2} + D(x)\,\frac{d}{dx},$$
we have (Exercise)
$$w(x) = \frac{1}{C(x)} \exp\left( \int \frac{D(x)}{C(x)}\, dx \right), \qquad A(x) = C(x)\, w(x).$$
Bochner considered differential equations of the form
$$Ly = a_2(x)\,y'' + a_1(x)\,y' + a_0(x)\,y = \lambda y,$$
where the $a_i(x)$ are polynomials. He wanted to determine all the cases for which, for every $n \geq 0$, there is a number $\lambda_n$ and a polynomial of degree exactly $n$ which is a solution of this equation. Assume this property holds. Then considering $n = 0$ implies that $a_0(x)$ is a constant polynomial; we can actually take $a_0 = 0$. Next, considering $n = 1$ yields that $a_1$ is a linear polynomial. In a similar way we can derive that $a_2$ is a quadratic polynomial. Bochner showed that there are essentially 5 different cases, in the sense that we cannot reduce one case to another by an affine change of variables. These cases are
$$Ly = (1-x^2)\,y'' + [a - bx]\,y',$$
$$Ly = x\,y'' + (a - x)\,y',$$
$$Ly = y'' - 2x\,y',$$
$$Ly = x^2\,y'' + x\,y',$$
$$Ly = x^2\,y'' + 2(x+1)\,y'.$$
It can be checked directly that the polynomial solutions of the last two equations do not lead to orthogonality.

Consider the first differential operator. By the method described before, the polynomial solutions of that equation are orthogonal with respect to the weight
$$w(x) = \frac{1}{1-x^2}\, \exp\left( \int \frac{a - bx}{1-x^2}\, dx \right) = (1-x)^{(b-a-2)/2}\,(1+x)^{(a+b-2)/2}.$$
Let $a = \beta - \alpha$ and $b = \alpha + \beta + 2$. Then
$$w(x) = (1-x)^{\alpha}\,(1+x)^{\beta}.$$
This weight is integrable on $(-1,1)$ only when $\alpha, \beta > -1$. The corresponding orthogonal polynomials are called the Jacobi polynomials. We can recognize the Chebyshev polynomial case for $\alpha = \beta = -1/2$ and the Chebyshev polynomials of the second kind for $\alpha = \beta = 1/2$. The second differential equation leads to the so-called Laguerre polynomials and the third one to the Hermite polynomials. One can easily find the weight $w(x)$ in these cases (Exercise). The three families of polynomials are called the classical orthogonal polynomials.
2.3 General orthogonal polynomials and recurrence relations

As we have seen, the Chebyshev polynomials of both kinds satisfy simple recurrence relations. This is another point at which we can generalize our considerations. Let $\mu$ be a finite positive measure on the real line. We can always assume that the total mass is 1, i.e. that $\mu$ is a probability measure. This measure can be absolutely continuous, like $d\mu(x) = w(x)\,dx$, or not. Let us assume for simplicity that the measure has bounded support; in other words we require that $\mu(\mathbb{R} \setminus [-a,a]) = 0$ for some $a > 0$. To avoid technical problems we assume that our measure is not concentrated on finitely many points. However, we admit measures which are concentrated on countably many points (there actually are concrete interesting measures of this kind).

The measure $\mu$ gives rise to the Hilbert space $L^2(\mathbb{R}, \mu)$ of square integrable functions with respect to $\mu$: a complex valued function $f$ belongs to that space if and only if $\int |f(x)|^2\, d\mu(x) < +\infty$. The inner product of two functions is given by
$$(f,g) = \int f(x)\,\overline{g(x)}\, d\mu(x).$$
For example, all continuous functions are in $L^2(\mu)$, because they are bounded on $[-a,a]$.

Once we have a Hilbert space we immediately look for a nice orthogonal basis for it. If no basis pops up in a natural way, we pick any natural linearly independent sequence of functions and orthogonalize it by the Gram-Schmidt procedure. The most natural independent set in our case is the sequence of monomials
$$1,\ x,\ x^2,\ \ldots,\ x^n,\ \ldots$$
The monomials are linearly independent in L () because we have assumed that is not concentrated on nitely many points (Exercise) Now applying Gram-Schmidt process we obtain a sequence of polynomials (linear combinations of monomials) pn with the following properties. (i) p = 1: (ii) pn(x) = knxn + : : : + k ; kn > 0: (iii) pn ? f1; x; : : : ; xn? g: R (iv) pn(x)d(x) = 1: We have spanfp ; p ; : : : ; png = spanf1; x; : : : ; xng: The polynomial xpn has degree n +1 hence it can be expressed in terms of the rst n +2 polynomials. 2
1
0
0
1
2
0
1
xpn(x) = npn (x) + npn(x) + npn? (x) + : : : : +1
1
Lower order term in the expansion of xpn (x) vanish because xpn(x) ? f1; x; : : : ; xn? g: Thus xpn(x) = npn (x) + npn(x) + npn? (x): Let's investigate the coecients in this expansion. We have n > 0 as the ratio of the leading coecients:
n = kkn : n By orthogonality we have the following. 2
+1
1
+1
Z
n = xpn (x)pn (x)d(x); +1
Z
n = xpn? (x)pn(x)d(x); 1
Z
n = xpn (x)d(x): 2
Hence
n = n > 0; +1
n 2 R :
Actually all what follows can be carried over to the case of the measures concentrated on nitely many points (Exercise). 1
8
Let n = n: Then
xpn = npn + npn + n? pn? : +1
1
1
(1)
For n = 0 the formula reduces to
xp = p + p : 0
0 1
0 0
It is customary to set p? = ? = 0: Once the numbers n and n are known, we can compute the polynomials pn one by one. Indeed pn = 1 [(x ? n)pn ? n? pn? )] : n By Pythagoras theorem we get 1
1
1
+1
n + n + n? = 2
a
2
Z 2
2
a
?a
1
a
Z
?a
1
x pn d(x) 2
2
pn d = a : 2
2
Hence the sequences n and n are bounded. As we have seen before polynomials which are eigenfunctions of certain dierential operators were automatically orthogonal with respect to a weight simply associated with the coecients of the equations. The question arises if the similar statement is true for polynomials satisfying a recurrence relation (1), with n > 0 and 2 R : We assume that the sequence of polynomials satis es (1). Are the polynomials orthogonal with respect to some measure on R ? The answer is yes, but the measure cannot be easily computed in terms of the coecients. This theorem is due to Favard (1935). We will give a sketch of the proof assuming for simplicity that the coef cients in (1) are bounded. In the space of all polynomials we introduce an inner product such that (pn; pm) = nm: This is possible because the polynomials pn are linearly independent. We complete the space of polynomials with respect to this inner product thus 9
obtaining a Hilbert space H. The elements of this space are series of the form 1 1 X X anpn; janj < +1: 2
n=0
n=0
Let M be the linear operator acting on the polynomials by multiplication by the variable x: It can be shown by using (1) (Exercise) that (i) M extends to a bounded operator on H. (ii) M is selfadjoint. As such by spectral theorem there is a resolution of the identity E (x) such that
p(M ) =
Z
p(x) dE (x)
for any polynomial p: Let d(x) = d(E (x)1; 1): Then is a probability measure and Z
pn(x)pm (x)d(x) = (pn(M )pm (M )1; 1) = (pn; pm) = nm :
The similar reasoning works for unbounded coecients, but the operator M is symmetric and unbounded. In order to use spectral theorem rst one has to nd a selfadjoint extension of this operator. It can happen that this extension is not unique, and also the orthogonality measure is not unique. We are not going to discuss this topic here. It belongs to the theory of moments.
2.4 Zeros of orthogonal polynomials
Assume pn are polynomials orthogonal with respect to a probability measure as constructed in the previous subsection. Theorem 1. The polynomial pn has n distinct roots. If supp (?1; b] then pn(x) > 0 for any x b: If supp [a; +1) then (?1)n pn(x) > 0 for any x a: In particular if supp [a; b] all the roots of pn lie in (a; b): 10
Proof. Let x1 ; x2 ; : : : ; xm be distinct roots of pn which have odd multiplicity. Clearly m n: Observe that the polynomial pn(x)(x ? x1 )(x ? x2 ) : : : (x ? xm ) is nonnegative on R : Therefore Z
pn(x)(x ? x )(x ? x ) : : : (x ? xm ) d(x) > 0: 1
2
The integral cannot vanish because it would imply that is concentrated on nitely many points. By orthogonality the integral vanishes if m < n: Therefore to avoid contradiction we must have m = n: In order to prove the second part of the theorem it suces to take only those roots which lie in (?1; b) or (a; +1) and integrate over (?1; b) or (a; +1); respectively.
3 Nonnegative linearization 3.1 Preliminaries
By the well known trigonometric identity we have cos m cos n = 1 cos(n ? m) + 1 cos(n + m): 2 2 Another trigonometric identity is sin(m + 1) sin(n + 1) = sin(n ? m + 1) + sin(n ? m + 3) sin sin sin sin sin( n + m + 1) +:::+ : sin
(2)
(3)
Remark. The identity (3) has a natural group theoretic interpretation. Let G = SU (2) denote the special unitary group of 2 2 unitary matrices with complex coecients and determinant 1. This group acts on nite dimensional space of polynomials in two variables z and z ; homogeneous of degree n by the rule p(z) 7! p(g? z); where g 2 G; z = (z ; z )T and p is a polynomial. This space has dimension n + 1; as it is spanned by zn ; zn? z ; : : : ; z zn? ; zn: The group action gives rise to a group representation n ; according to n (g)p(z) = p(g? z): 1
2
1
1
2
1
1
1
2
1 2
1
11
1
2
The representations n are irreducible, i.e. the space of Hn has no nontrivial subspace invariant for all the operators n(g): It can be shown that every irreducible representation is of the form n for some n: The character of the representation n is by de nition the function n on G de ned as
n(g) = Trn(g): The characters of dierent (inequivalent) representations are orthogonal in L (G; m); where m is the Haar measure on G: Let ei and e?i be the eigenvalues of g 2 G: It turns out (Exercise ) that n depends only on and 2
n + 1) : n(g) = sin(sin
(4)
Consider the tensor product m n of representations. This new representation is no longer irreducible hence it decomposes into irreducible components. It can be shown that for n m we have
m n = n?m n?m : : : n m: +2
+
Computing characters of both sides of the formula and using (4) give (3). We can rewrite formulas (2) and (3) in terms of the Chebyshev polynomials using the relationship described in Section 2.1. Tm Tn = 21 Tn?m + 21 Tn m; Um Un =Un?m + Un?m + : : : + Un m: +
+2
+
Thus we obtained that the product of two Chebyshev polynomials can be expressed as a sum of these polynomials with nonnegative coecients. We are interested if this property holds also for other orthogonal polynomials, especially the classical orthogonal polynomials. Let fpng1 n be a system of orthogonal polynomials, as constructed in Section 2.3. The product pnpm is a polynomial of degree n + m and it can be expressed with respect to the polynomial basis fpk g1 k : We obtain =0
=0
pn(x)pm (x) =
nX +m k=jn?mj
12
c(n; m; k)pk (x)
where
c(n; m; k) =
Z
pk d
?
1 Z
2
pnpmpk d:
The sum ranges from jn ? mj because if k < jn ? mj then k + m < n or k + n < m: Hence deg(pk pm ) < deg(pn) or deg(pk pn) < deg pm: In both cases the integral of pnpmpk vanishes. We are interested in determining when
c(n; m; k) 0; for all n; m; k: If this is satis ed we say that the system fpng1 n admits nonnegative linearization or for short we will say it has the property (P). This property has been studied by many authors since 1894. The numbers c(n; m; k) are called the linearization coecients. In the last part we will show the consequences of nonnegative linearization to convolution structures associated with orthogonal polynomials. Here we will show one important estimate that follows from the property (P). Assume that supp (?1; ]: For example supp [?1; 1] in the case of the Chebyshev polynomials. Then pn( ) > 0 because the leading coecient is positive and pn(x) cannot vanish in [; 1): We have =0
pnN (x) = 2
Then
Z
X
k
dk pk (x); where dk 0: X
pnN (x) d(x) = d 2
0
Z
pnN (x)d(x) 2
k
dk pk ( ) = pnN ( );
1=2N
2
pn( ):
Taking the limit with n ! 1 gives jpn(x)j pn( ); for x 2 supp :
3.2 Renormalization
We will be dealing with systems of polynomials orthogonal with respect to a measure ; but we do not insist that they are necessarily orthonormal. How can one obtain such polynomials ? 13
Let
Let fpng1 n be polynomials orthonormal with respect to a measure : =0
Pn(x) = n? pn(x);
(5)
1
for a sequence of positive coecients n; with = 1: The polynomials Pn are orthogonal relative to the measure : By (1) we obtain 0
xPn = nPn + nPn + nPn? ; +1
(6)
1
with (Exercise)
n = n n (7) n (8) n = n? n? : n Therefore n > 0 for n 1 and n > 0 for n 0: It can be shown easily (Exercise) that the equations (7) and (8) are equivalent to +1
1
1
n n = n:
(9)
2
+1
In other words if polynomials Pn satisfy (6) and (9) they are related to the orthonormal polynomials pn by (5), for some positive sequence n; or equivalently they are orthogonal with respect to and have positive leading coecients. The polynomials Pn are simply renormalized versions of orthonormal polynomials pn: What are the possible choices of renormalized polynomials once we have orthonormal polynomials satisfying (1) ? Assume that we have positive sequences n and n such that (9) is satis ed. Then we may have
xPn = nPn + nPn + nPn? ; xPn = n Pn + nPn + n? Pn? ; xPn = Pn + nPn + n? Pn? : +1
+1
1
+1
1
2
+1
1
1
1
(10) (11)
All these systems are renormalized versions of pn because in each case the formula (9) is satis ed. We should have used dierent symbols to denote these polynomials according to the choice of normalization, because the polynomials are not equal for the same value of n: They are equal only modulo the coecient depending on n: 14
We will deal with polynomials Pn satisfying (6) and (9). Does it aect the property (P) ? Fortunately the property (P) for the polynomials pn is satis ed if and only if it is satis ed for Pn: This follows from the fact that
pn(x)pm (x) = immediately implies
Pn(x)Pm (x) = where
nX +m
k=jn?mj nX +m k=jn?mj
c(n; m; k)pk (x)
g(n; m; k)Pk (x);
g(n; m; k) = k c(n; m; k): n n
3.3 History
For ; > ?1 the Jacobi polynomials Pn; are orthogonal with respect to the weight d; (x) = c; (1 ? x) (1 + x) dx; ?1 < x < 1: As special case we have Chebyshev = = ? Tn(cos ) = cos n Legendre = = 0 Lebesgue measure sin(n + 1) Chebyshev II = = Un (cos ) = sin Gegenbauer = In the next table we listed the basic achievements for the Jacobi polynomials. The third column shows the range of the parameters for which nonnegative linearization have been shown. 1919 Dougall = = ? 1962 Hylleraas = = ? = +1 = ?1 1970 Gasper + +10 > ?1 1970 Gasper c(2; 2; 2) 0 1 2
1 2
1 2
1 2
15
The last result gives necessary and sucient conditions for the property (P) for the Jacobi polynomials. Graphically the region given by the Gasper's results is depicted below. The dierence between these results is a tiny region enclose by the curve that starts at (? ; ) and the lines = ?1 and + + 1 = 0: 1 2
6
=
-
(? ; ? ) 1 2
1 2
1 2
The equation of the curve is
a(a + 3) (a + 5) = b (a ? 7a ? 24); where a = + + 1 and b = ? : The problem has been studied for other polynomials. 1894 Rogers q-ultraspherical 1981 Bressoud q-ultraspherical continuous q-Jacobi 1981 Rahman 0 < q < 1; > ?1 + +10 continuous q-Jacobi 1983 Gasper 0 < q < 1; > ?1 c(2; 2; 2) 0; 1983 Lasser Associated Legendre Actually Dougall's and Hylleraas' theorems follow from the paper of Rogers, by taking limits with q ! 1 or q ! ?1; but Rogers' paper have been discovered only around 1970. De nitions and basic properties of these polynomials can be found in [12]. 2
2
16
2
The methods used by the authors didn't allow them to generalize their results to other orthogonal polynomials. The rst general theorem is due to Askey [2]. Theorem 2 (Askey 1970). If the sequences n and n are nondecreasing then the system fpn g satis es the property (P). Example. The Gegenbauer polynomials pn satisfy xpn = 2nn++22++11 pn + 2n + n2 + 1 pn? : We have n 0 and 1 ? 4 1 n = n n = 4 1 + 4(n + + 1) ? 1 : n is nondecreasing if and only if : The interval ? < is left open, although the property (P) follows from explicit formulas. Askey called for some more general criteria that would be able to capture at least the Legendre polynomials case and he stated the problem of nding them, if possible. In order to formulate these criteria we will pass to the renormalized polynomials. Assume that there is renormalization of the system pn such that the polynomials Pn satisfy (6). Theorem 3 (Sz. 1992). If n; n; n + n are nondecreasing and n n for all n; then the system fpng1 n has property (P). In the case when n 0; which is equivalent to the fact that the orthogonality measure is symmetric about the origin, we can split assumptions according to the parity of the index n: Theorem 4 (Sz. 1992). If n 0 and n ; n ; n + n ; n + n are nondecreasing and n n for all n; then the system fpng1 n has property (P). All the mentioned results, with exception of Gasper's second paper on the Jacobi polynomials, follow from Theorems 3 and 4. Moreover the associated polynomials are covered as well because the assumptions are invariant for the shift of the index. The associated polynomials of order k are de ned by the recurrence relation xPnk = n k Pnk + n k Pnk + n k Pnk? : +1
1
2
2
+1
2
1 2
1 2
1 2
=0
2
2 +1
2
2
2 +1
=0
( )
+ +1
( ) +1
+
17
( )
+
( ) 1
2 +1
Example. For the Gegenbauer polynomials we have n 0; n + n 1
and
n = 2n + n2 + 1 % 21
? 12 ; n n i ? 21 : Example. The generalized Chebyshev polynomials are orthogonal with respect to d(x) = jxj (1 ? x )dx on the interval (?1; 1); where ; > ?1: They satisfy xp n = 2nn++++ ++11 p n + 2n + n+ + 1 p n? ; xp n? = 2n n+++ p n + 2n n++ + p n? : We have n 0 and n + n 0: The assumptions of Theorem 4 are satis ed if and only if and + + 1 0: Moreover we have 2 +1
i
2
2
2 +1
2
2
1
2
1
2
2
Jn; (2x ? 1) = p n(x); 2
2
where Jn; denote the Jacobi polynomials (Exercise). This trick to relate the orthogonal polynomials to other polynomials orthogonal with respect to a symmetric measure is very useful in the context of nonnegative linearization.
3.4 Discrete boundary value problem
We are going to prove Theorems 3 and 4 by using a discrete version of second order hyperbolic boundary value problem in two variables. In application of these theorems we have used the normalization in which the polynomials Pn satis ed (6). In the proof we will switch normalization to the one given by (10). As we know for the property (P) it is irrelevant in which normalization we show it. Hence let
xPn = n Pn + nPn + n? Pn? : +1
+1
1
18
1
We are interested in proving the nonnegativity of the coecients in the expansion nX m PnPm = g(n; m; k)Pk: +
By orthogonality we have that Z
R
k=jn?mj
PnPm Pk d =
Z
Pk d g(n; m; k):
(12)
2
R
Let u(n; m) be a matrix indexed by n; m 0: De ne two operators L and L acting on the matrices by the rule (L u)(n; m) = n u(n + 1; m) + nu(n; m) + n? u(n ? 1; m); (L u)(n; m) = m u(n; m + 1) + m u(n; m) + m? u(n; m ? 1): Observe that by the recurrence relation if we take u(n; m) = Pn(x)Pm(x) for some x; then L u = xu; L u = xu: Thus for such matrices u we have 1
2
1
+1
2
1
+1
1
1
2
Hu := (L ? L )u = 0: The last equality holds also for the special matrix 1
u(n; m) = g(n; m; k) =
Z
?
1Z
Pk d 2
R
2
R
Pn(x)Pm (x)Pk (x)d(x):
(13)
Additionally we have
1 k = n; 0 k 6= n: Hence the matrix u satis es Hu = 0 and has nonnegative boundary values. Proposition 1. The polynomials Pn have the property (P) if and only if every matrix u = fu(n; m)g such that Hu = 0; (14) u(n; 0) 0: satis es u(n; m) 0 for n m 0:
u(n; 0) = g(n; 0; k) =
19
Proof. The "if" direction is clear, because we can always assume n m and as we have seen above the matrix u(n; m) = g(n; m; k) satis es (14). Assume Pn have the property (P). Let u = u(n; m) be any solution to (14). By uniqueness of solution we have
u(n; m) =
1
X
k=0
u(k; 0)g(n; m; k):
Thus u(n; m) 0:
}
Now we are down to showing that under certain assumptions on the coef cients n; n and n the boundary problem (14) have nonnegative solutions. To this end we will give an equivalent condition which is much more convenient for direct applications. For each point (n; m) with n m 0; let n;m denote the set of lattice points in the plane de ned by n;m = f(i; j ) j 0 j i; jn ? ij < jm ? j jg: The set n;m is depicted below (the points are marked with empty circles). m 6 (n; m)
e
e
e
e
e
e
e
e
e
e
e
e
e
e
e
e
-
n
Theorem 5 (Sz 2001). The boundary problem (14) admits nonnegative solutions if and only if for every (n; m); with n m 0; there exists a matrix vn;m(i; j ) such that
20
(i) supp vn;m n;m : (ii) (H vn;m)(n; m) < 0: (iii) (H vn;m)(i; j ) 0 for (i; j ) 6= (n; m): Remark. H is the adjoint operator to H with respect to the inner product of matrices (u; v) =
1
X
u(n; m)v(n; m):
n;m=0
This operator acts according to (H v)(n; m) = nv(n + 1; m) + nv(n; m) + nv(n ? 1; m) ? mv(n; m + 1) + m v(n; m) + m v(n; m ? 1): Proof. (() Let u(n; m) satis es the boundary value problem (14). It suces to show that u(n; m) 0: We will use induction in m: Assume u(i; j ) 0 for j < m: Then
0 = hHu; vn;mi = hu; H vn;mi X = (H vn;m )(n; m) u(n; m) + (H vn;m)(i; j ) u(i; j ): j 0: ())
Lemma 1. There exists a matrix vn;m(i; j ) such that (i) supp vn;m n;m : (ii) (H vn;m)(n; m) = ?1: (iii) (H vn;m)(i; j ) = 0 for 1 j < m: 21
Proof. The conditions (ii) and (iii) provide (m ? 1)2 linear equations with (m ? 1)2 unknowns vn;m(i; j ) for (i; j ) 2 n;m: These equations are independent because the coecients i and i do not vanish for all i: Hence the system can be solved. }
In order to complete the proof of Theorem it suces to show that for vn;m as in Lemma 1 we have (H vn;m)(i; 0) 0; for i = jn ? mj; jn ? mj + 1; : : : ; n + m ? 1; n + m: Let u be any solution of (14). By Lemma 1 we have 0 = (Hu; vn;m) = (u; H vn;m) = ?u(n; m) + Thus
u(n; m) =
nX +m
(H vn;m)(i; 0)u(i; 0):
i=jn?mj
nX +m
(H vn;m)(i; 0)u(i; 0):
i=jn?mj
Assume (H vn;m)(i ; 0) < 0: Take the boundary condition u(i; 0) = 0 for i 6= i and u(i ; 0) = 1: Then u(n; m) < 0; which contradicts the assumptions. 0
0
0
}
In view of the preceding theorem it suces now to come up with a suitable choice of matrices vn;m in order to get some conditions for nonnegative linearization. The funny thing is that the obvious choice
vn;m(i; j ) = 1
for (i; j ) 2 n;m
is a complete failure (Exercise). This may be a reason for which Askey's problem remained unsolved for such a long time. But if we assign the value 1 to every other point in n;m interesting outcome follows. Let
vn;m(i; j ) =
1 (i; j ) 2 n;m ; (n + m) ? (i + j ) odd 0 otherwise 22
(15)
m 6 (n; m)
/
/
e
.
/
e
u
e
/
e
u
e
u
e
.
e
u
e
u
e
u
e
. . n
Then supp v consists of ; while supp H v consists of ; ; /; . and : Moreover we have
?m i ? j
8 > > > >
i + i ? j ? j (i; j ) ? > i ? j (i; j ) ? . > > :
i ? j (i; j ) ? / If the assumptions of Theorem 3 are satis ed then (H v)(i; j ) 0 for j < m; (H v)(n; m) < 0: In this way the conclusion of Theorem 3 holds. Let's turn to Theorem 4, i.e. to the case when n 0: First observe that by the recurrence relation xPn = n Pn + n? Pn? the polynomials with even indices involve only even powers of x while the polynomials with odd indices involve only odd powers. In order to show nonnegative linearization we have to show nonnegativity of the integral of R the triple product PnPmPk d: By symmetry of the measure the condition +1
Z
R
+1
1
PnPm Pk d 6= 0 23
1
implies that one of the indices n; m or k is an even number. Since it is possible to interchange the roles of n; m and k we can always assume k is an even number. Observe that if u(n; m) = g(n; m; 2k) then u(2n + 1; 0; 2k) = 0 for any n: Therefore instead of studying (14) we can study 8 Hu = 0; < u (2 n; 0) 0; (16) : u(2n + 1; 0) = 0: We would like to show that this boundary problem has nonnegative solutions. It can be checked directly that u(n; m) = 0 if n + m is an odd number. Thus it suces to consider those (n; m) for which both n; m are even or odd. For such (n; m) the matrix H vn;m ; where vn;m is de ned by (15), satis es the assumptions of Theorem 5 exactly when the assumptions of Theorem 4 are ful lled.
3.5 Quadratic transformation
Theorems 3 and 4 are more ecient when we deal with polynomials orthogonal with respect to a symmetric measure, i.e. when n 0: There is a way of reducing nonsymmetric case to the symmetric one by a special transformation. Assume we are dealing with a nonsymmetric measure with support contained in the interval [?1; 1]: Let (y) be a new symmetric measure such that d (y) = 21 d(2y ? 1) for y > 0 and (f0g) = (f?1g): This transformation consists in shifting the measure by 1 to the right, scaling to the interval [0; 1]; substituting x = y and symmetrizing about the origin. The resulting measure is symmetric and supported in the interval [?1; 1]: The point is that the polynomials orthogonal with respect to and are closely related. Indeed, denote by Qn(y) the polynomials orthogonal relative to d (y): Let n 6= m: By substituting x = 2y ? 1 we obtain 2
2
2
0= =
Z 1
?1
Z 1
?1
Q n (y)Q m(y)d (y) 2
Qn 2
2
q
x+1 2
Qm 2
24
q
x+1 2
d(x):
The polynomial Q n involves only even powers of theqvariable, which follows from symmetry of the measure. Thus Pn(x) = Q n x is a polynomial of degree n in x: For dierent indices these polynomials are orthogonal to each other relative to d(x): Therefore if Qn admit nonnegative linearization so do Pn: Moreover the recurrence relations for Pn and Qn are related, as well. We have Q n (y) = Pn(2y ? 1): Let yQn(y) = nQn (y) + nQn? (y): Then 2
+1 2
2
2
2
+1
1
xPn (x) =(2y ? 1)q n(2y ? 1) =2 n n Pn (x) + [2 n n + 2 n n? ? 1]Pn(x) + 2 n n? Pn? (x): 2
2
2
2
2 +1
+1
2
1
2
2 +1
2
2
2
1
1
Obviously the recurrence relation for $P_n$ is much more complicated than the one for $Q_n$. That is why it is convenient to test Theorem 4 on the symmetric polynomials $Q_n$ instead of applying Theorem 3 to the nonsymmetric polynomials $P_n$. We have already experienced this when we considered the nonsymmetric Jacobi polynomials.

Example. The Askey--Wilson polynomials satisfy the recurrence relation
\[
2xP_n(x) = A_nP_{n+1}(x) + \bigl[\,a + a^{-1} - (A_n + C_n)\,\bigr]P_n(x) + C_nP_{n-1}(x),
\]
where
\[
A_n = \frac{(1-abcdq^{n-1})(1-abq^n)(1-acq^n)(1-adq^n)}{a\,(1-abcdq^{2n-1})(1-abcdq^{2n})},
\qquad
C_n = \frac{a\,(1-q^n)(1-bcq^{n-1})(1-bdq^{n-1})(1-cdq^{n-1})}{(1-abcdq^{2n-2})(1-abcdq^{2n-1})}.
\]
Let $Q_n$ be the polynomials defined by the recurrence relation $yQ_n(y) = \gamma_nQ_{n+1}(y) + \alpha_nQ_{n-1}(y)$, where
\[
\gamma_{2n} = \frac{(1-abcdq^{n-1})(1-abq^n)}{1-abcdq^{2n-1}},
\qquad
\alpha_{2n} = \frac{-ab\,(1-q^n)(1-cdq^{n-1})}{1-abcdq^{2n-1}},
\]
\[
\gamma_{2n+1} = \frac{(1-acq^n)(1-adq^n)}{1-abcdq^{2n}},
\qquad
\alpha_{2n+1} = \frac{-ab^{-1}(1-bcq^n)(1-bdq^n)}{1-abcdq^{2n}}.
\]
It can be checked by using the recurrence relations that
\[
Q_{2n}(x) = P_n\!\left(\tfrac{1}{2}\bigl(a^{-1}x^2 + b + b^{-1}\bigr)\right).
\]
We have
\[
\gamma_{2n} + \alpha_{2n} = 1 - ab, \qquad \gamma_{2n+1} + \alpha_{2n+1} = 1 - ab^{-1}.
\]
We notice that the recurrence relation for the polynomials $Q_n$ makes it possible to apply Theorem 4 and get reasonable conditions on the parameters. At the same time, working directly with the recurrence relation for $P_n$ looks hopeless.
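As a sanity check, the identities above can be verified numerically. The sketch below uses arbitrary sample parameter values (with $b < 0$, so that the $\alpha$'s come out positive) and checks that $\gamma_{2n}\gamma_{2n+1} = aA_n$, $\alpha_{2n}\alpha_{2n-1} = aC_n$, and the two constant sums — the factor $a$ being absorbed by the change of variable $Q_{2n}(x) = P_n\bigl(\tfrac12(a^{-1}x^2+b+b^{-1})\bigr)$.

```python
# Spot-check of the quadratic-transformation identities for the Askey-Wilson
# coefficients. The parameter values are illustrative sample choices only.
q, a, b, c, d = 0.5, 2.0, -1.0, -0.5, -0.25

def A(n):
    return ((1 - a*b*c*d*q**(n-1)) * (1 - a*b*q**n) * (1 - a*c*q**n) * (1 - a*d*q**n)
            / (a * (1 - a*b*c*d*q**(2*n-1)) * (1 - a*b*c*d*q**(2*n))))

def C(n):
    return (a * (1 - q**n) * (1 - b*c*q**(n-1)) * (1 - b*d*q**(n-1)) * (1 - c*d*q**(n-1))
            / ((1 - a*b*c*d*q**(2*n-2)) * (1 - a*b*c*d*q**(2*n-1))))

def gamma(m):
    n, r = divmod(m, 2)
    if r == 0:   # gamma_{2n}
        return (1 - a*b*c*d*q**(n-1)) * (1 - a*b*q**n) / (1 - a*b*c*d*q**(2*n-1))
    return (1 - a*c*q**n) * (1 - a*d*q**n) / (1 - a*b*c*d*q**(2*n))

def alpha(m):
    n, r = divmod(m, 2)
    if r == 0:   # alpha_{2n}
        return -a*b * (1 - q**n) * (1 - c*d*q**(n-1)) / (1 - a*b*c*d*q**(2*n-1))
    return -(a/b) * (1 - b*c*q**n) * (1 - b*d*q**n) / (1 - a*b*c*d*q**(2*n))

for n in range(1, 8):
    assert abs(gamma(2*n) * gamma(2*n+1) - a * A(n)) < 1e-12
    assert abs(alpha(2*n) * alpha(2*n-1) - a * C(n)) < 1e-12
    assert abs(gamma(2*n) + alpha(2*n) - (1 - a*b)) < 1e-12
    assert abs(gamma(2*n+1) + alpha(2*n+1) - (1 - a/b)) < 1e-12
```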
4 Commutative Banach algebras

4.1 Preliminaries
In the sequel we are going to construct and study certain Banach algebras associated with systems of orthogonal polynomials. Here we recall the basic facts which we are going to use. A Banach algebra $B$ is a Banach space in which we can also multiply elements in such a way that $B$ becomes an algebra and
\[
\|ab\| \le \|a\|\,\|b\| \qquad \text{for any } a, b \in B.
\]
The algebra is called commutative if $ab = ba$ for all $a, b \in B$. The algebra is unital if there exists an element $e$, called the unit, such that $ea = ae = a$ for any $a \in B$. An element $a$ is called invertible if there exists $b$ such that $ab = ba = e$. The set of all complex numbers $z$ such that $ze - a$ is not invertible is called the spectrum of $a$ and denoted by $\sigma(a)$. The algebra is called a $*$-algebra if there is a conjugation $x \mapsto x^*$ such that
(i) $(xy)^* = y^*x^*$;
(ii) $\|x^*\| = \|x\|$.
The principal example, which we are going to generalize later to the orthogonal polynomial setting, is
\[
\ell^1(\mathbb{Z}) = \Bigl\{\, a = \{a_n\}_{n=-\infty}^{\infty} \;:\; \|a\| = \sum_{n=-\infty}^{\infty} |a_n| < +\infty \,\Bigr\}.
\]
The multiplication is given by the convolution of sequences
\[
a * b\,(n) = \sum_{m=-\infty}^{\infty} a(m)\,b(n-m).
\]
The algebra $\ell^1(\mathbb{Z})$ is commutative and unital, because the element
\[
\delta(n) = \begin{cases} 1 & n = 0, \\ 0 & n \neq 0 \end{cases}
\]
is the unit. The conjugation is given by complex conjugation of the terms. Other examples of Banach algebras (not necessarily commutative or unital) are:
(i) $C[0,1]$ with the sup-norm and pointwise multiplication (more generally $C(X)$, where $X$ is a compact Hausdorff space).
(ii) $B(\mathcal{H})$, the algebra of all bounded operators on a Hilbert space with the operator norm, or any norm closed subalgebra of $B(\mathcal{H})$.
(iii) $L^1(G, m)$, where $G$ is a locally compact group and $m$ is a left invariant Haar measure, with multiplication given by the convolution
\[
f * g\,(x) = \int_G f(y)\,g(y^{-1}x)\,dm(y).
\]
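A minimal model of $\ell^1(\mathbb{Z})$ can be sketched with finitely supported sequences stored as Python dicts (everything outside the listed indices is zero); the helper names `conv` and `norm1` are ours, not from the text.

```python
# Finitely supported sequences on Z as dicts {index: value};
# conv implements a*b(n) = sum_m a(m) b(n-m).

def conv(a, b):
    out = {}
    for m, am in a.items():
        for j, bj in b.items():
            out[m + j] = out.get(m + j, 0) + am * bj
    return out

def norm1(a):
    """The l^1 norm, sum of |a(n)|."""
    return sum(abs(v) for v in a.values())

delta = {0: 1}                      # the unit element
a = {-1: 2.0, 0: 1.0, 3: -0.5}
b = {0: 1.0, 1: 4.0}

assert conv(delta, a) == a                                 # delta is the unit
assert conv(a, b) == conv(b, a)                            # commutativity
assert norm1(conv(a, b)) <= norm1(a) * norm1(b) + 1e-12    # ||a*b|| <= ||a|| ||b||
```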
Assume we are dealing with a commutative unital Banach algebra $B$. We may think of $\ell^1(\mathbb{Z})$ or $C(X)$. Take an element of the algebra. If we are dealing with $C(X)$ it is easy to determine whether our element is invertible or not, while for elements of the algebra $\ell^1(\mathbb{Z})$ it is not so obvious. It was Gelfand who invented a way of mapping $B$ into a certain $C(X)$ space such that an element of $B$ is invertible if and only if its image in $C(X)$ is invertible. Let $X$ be the set of all multiplicative linear functionals on $B$, i.e. those linear functionals $\varphi : B \to \mathbb{C}$ which satisfy $\varphi(ab) = \varphi(a)\varphi(b)$. It can be shown that such functionals are necessarily continuous with norm bounded by $1$:
\[
|\varphi(a)| \le \|a\|, \qquad a \in B.
\]
Thus $X \subset (B^*)_1$, the unit ball of the dual space. On $B^*$ we have various topologies to choose from, like the norm topology, the weak topology and the weak-star topology. By definition, the weak-star topology is the weakest topology such that the mappings $B^* \ni \varphi \mapsto \varphi(a)$ are continuous for $a \in B$. We endow $(B^*)_1$ with the weak-star topology. Then by the Banach--Alaoglu Theorem $(B^*)_1$ is a compact set, and hence $X$ is compact as a closed (Exercise) subset of $(B^*)_1$. For each element $a \in B$ we define the Fourier--Gelfand transform $\widehat{a}$ as a function on $X$ according to
\[
\widehat{a}(\varphi) = \varphi(a), \qquad \varphi \in X.
\]
By definition $\widehat{a}$ is a continuous function on $X$.

Theorem 6 (Gelfand). The mapping $a \mapsto \widehat{a}$ is a homomorphism of norm $1$ from $B$ to $C(X)$. Moreover, the element $a$ is invertible if and only if $\widehat{a}$ does not vanish on $X$. More generally, the spectrum of $a$ is equal to the range of $\widehat{a}$.

If we have an element $a$ in $B$ we can raise it to any power and add these powers with complex coefficients. In other words, we can apply any polynomial to $a$. Also, since $B$ is a Banach algebra, we can operate on $a$ with any entire function by using its Taylor series. Can we go beyond that? We can operate on $a$ with functions $F(z)$ which are holomorphic in a neighborhood of the spectrum of $a$, i.e. the range of $\widehat{a}$, with the help of the Cauchy integral formula:
\[
F(a) := \frac{1}{2\pi i} \oint_{\Gamma} F(z)\,(zI - a)^{-1}\, dz,
\]
where $\Gamma$ is a closed Jordan curve enclosing the range of $\widehat{a}$.

Let's turn to our principal example $\ell^1(\mathbb{Z})$. The multiplicative functionals are given by (Exercise)
\[
\{a_n\}_{n=-\infty}^{\infty} \mapsto \widehat{a}(\theta) = \sum_{n=-\infty}^{\infty} a_n e^{in\theta}.
\]
In other words $X = \mathbb{T}$. Hence a sequence $a$ is invertible in $\ell^1$, i.e. $a^{-1} \in \ell^1$, if and only if its transform does not vanish on $\mathbb{T}$. This theorem is due to Wiener. Also, if a function $F(z)$ is holomorphic in a neighborhood of the set $\{\sum_{n=-\infty}^{\infty} a_n e^{in\theta} \mid \theta \in \mathbb{R}\}$, then $F(a)$ belongs to $\ell^1$. This result is due to Lévy.
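Wiener's theorem can be illustrated concretely. For the sequence $a = \delta + \tfrac12\delta_1$ the transform $1 + \tfrac12 e^{i\theta}$ never vanishes on the circle, and the inverse is the explicitly summable geometric sequence $b(n) = (-\tfrac12)^n$, $n \ge 0$; the sketch below truncates $b$ and checks $a * b \approx \delta$. The helper names are ours.

```python
# a = delta + (1/2) delta_1 has transform 1 + e^{i theta}/2, bounded away
# from zero on the unit circle; its l^1 inverse is b(n) = (-1/2)^n, n >= 0.
import cmath

a = {0: 1.0, 1: 0.5}
ahat = lambda t: sum(v * cmath.exp(1j * n * t) for n, v in a.items())

# The transform stays away from zero on (a grid of) the unit circle.
assert all(abs(ahat(2 * cmath.pi * k / 100)) >= 0.49 for k in range(100))

b = {n: (-0.5) ** n for n in range(60)}     # truncated inverse, still in l^1

def conv(x, y):
    out = {}
    for m, xm in x.items():
        for j, yj in y.items():
            out[m + j] = out.get(m + j, 0) + xm * yj
    return out

e = conv(a, b)                              # should be close to delta
assert abs(e[0] - 1) < 1e-15
assert max(abs(v) for n, v in e.items() if n != 0) < 1e-15
```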
4.2 Convolution associated with orthogonal polynomials
The systematic study of this topic was undertaken by Lasser [14]. Let the orthonormal polynomial system $\{p_n\}_{n=0}^{\infty}$ satisfy property (P). Assume that $\operatorname{supp}\mu \subset (-\infty, \xi]$; for example $\operatorname{supp}\mu \subset [-1,1]$ with $\xi = 1$ in the case of the Jacobi polynomials. Then $p_n(\xi) > 0$ and
\[
|p_n(x)| \le p_n(\xi), \qquad x \in \operatorname{supp}\mu.
\]
It is more convenient to use the renormalized polynomials $R_n(x)$, where
\[
R_n(x) = \frac{p_n(x)}{p_n(\xi)}, \qquad h_n^{-1} = \int R_n^2\, d\mu,
\]
because $|R_n(x)| \le 1$ for $x \in \operatorname{supp}\mu$. The new polynomials also have property (P). Let
\[
R_n(x)R_m(x) = \sum_{k=|n-m|}^{n+m} g(n,m,k)R_k(x). \tag{17}
\]
Hence $g(n,m,k) \ge 0$. Plugging in $x = \xi$ gives
\[
\sum_{k=|n-m|}^{n+m} g(n,m,k) = 1. \tag{18}
\]
We would like to use the coefficients $g(n,m,k)$ to define a convolution of sequences by mimicking the standard convolution on $\ell^1(\mathbb{Z})$. Since the polynomials $R_n$ are not orthonormal, we will make use of the coefficients
\[
h(n) = \left(\int R_n(x)^2\, d\mu(x)\right)^{-1} = p_n(\xi)^2.
\]
There is an important symmetry relation involving $g(n,m,k)$ and $h(n)$. Indeed, multiplying both sides of (17) by $R_k(x)$ and integrating with $d\mu(x)$ gives
\[
\int R_nR_mR_k\, d\mu = g(n,m,k)\,h(k)^{-1}.
\]
Therefore
\[
g(n,m,k)h(k)^{-1} = g(n,k,m)h(m)^{-1} = g(k,m,n)h(n)^{-1}. \tag{19}
\]
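Properties (17)–(19) can be checked in exact arithmetic for a concrete system. The sketch below takes the Legendre case, where $R_n = P_n$ (the classical Legendre polynomials, normalized by $P_n(1)=1$), $d\mu = dx/2$ on $[-1,1]$, $\xi = 1$ and $h(n) = 2n+1$; the helper names are ours.

```python
# Exact check of (17)-(19) for the Legendre case. Polynomials are
# coefficient lists over Fraction; d mu = dx/2 on [-1, 1].
from fractions import Fraction as F

def legendre(nmax):
    polys = [[F(1)], [F(0), F(1)]]                  # P_0, P_1
    for n in range(1, nmax):
        # (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}
        xp = [F(0)] + polys[n]                      # x * P_n
        nxt = [(F(2*n + 1) * xp[j]
                - (F(n) * polys[n-1][j] if j < len(polys[n-1]) else 0)) / F(n + 1)
               for j in range(len(xp))]
        polys.append(nxt)
    return polys

def mul(p, q):
    out = [F(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def integrate(p):                                   # integral against dx/2 on [-1,1]
    return sum(c / (j + 1) for j, c in enumerate(p) if j % 2 == 0)

N = 5
P = legendre(2 * N)
h = lambda n: 2 * n + 1

def g(n, m, k):                                     # g(n,m,k) = h(k) int R_n R_m R_k d mu
    return h(k) * integrate(mul(mul(P[n], P[m]), P[k]))

for n in range(N):
    for m in range(N):
        coeffs = {k: g(n, m, k) for k in range(abs(n - m), n + m + 1)}
        assert all(c >= 0 for c in coeffs.values())      # nonnegative linearization
        assert sum(coeffs.values()) == 1                 # identity (18)
        for k, c in coeffs.items():                      # symmetry (19)
            assert c / h(k) == g(n, k, m) / h(m) == g(k, m, n) / h(n)
```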
First of all, we will have to work with the weighted space $\ell^1(\mathbb{N}_0, h)$ consisting of sequences $\{a_n\}_{n=0}^{\infty}$ such that
\[
\|a\|_1 = \sum_{n=0}^{\infty} |a_n|\,h(n) < +\infty.
\]
Let $\{a_n\} \in \ell^1(\mathbb{N}_0, h)$. The series $\sum a_n R_n(x)h(n)$ is uniformly convergent on $\operatorname{supp}\mu$, because the $R_n(x)$ are bounded by $1$. We introduce the convolution operation $*$ in $\ell^1(\mathbb{N}_0, h)$ by setting $\{a_n\} * \{b_n\} = \{c_n\}$ if and only if
\[
\left(\sum_{n=0}^{\infty} a_n R_n(x)h(n)\right)\left(\sum_{n=0}^{\infty} b_n R_n(x)h(n)\right) = \sum_{n=0}^{\infty} c_n R_n(x)h(n). \tag{20}
\]
The convolution can be computed explicitly in terms of the coefficients $g(n,m,k)$. Indeed, we have
\[
(a * b)(k) = \sum_{n,m} a_n b_m\, h(n)h(m)h(k)^{-1} g(n,m,k).
\]
Let's check that the operation $*$ is well defined:
\[
\|a * b\|_1 = \sum_{k} \Bigl|\sum_{n,m} a_n b_m\, h(n)h(m)h(k)^{-1} g(n,m,k)\Bigr|\, h(k)
\le \sum_{n,m} |a_n||b_m|\,h(n)h(m) \sum_{k} g(n,m,k) = \|a\|_1\|b\|_1.
\]
The space $\ell^1(\mathbb{N}_0, h)$ with the operation $*$ becomes a commutative Banach algebra. Associativity and commutativity immediately follow from the multiplication rule (20). Let's compute the multiplicative functionals of this algebra. By (20) the mappings
\[
a = \{a_n\} \mapsto \sum_{n=0}^{\infty} a_n R_n(z)h(n),
\]
for all $z \in \mathbb{C}$ such that $|R_n(z)| \le 1$ for any $n \in \mathbb{N}_0$, give rise to multiplicative functionals. It can be shown that every multiplicative functional is of that form (Exercise). (Hint: Let $\varphi$ be a multiplicative functional. Then there is a complex number $z$ such that $\varphi(\delta_1) = R_1(z)h(1)$. Show that $\varphi(\delta_n) = R_n(z)h(n)$.) The space of multiplicative functionals $M$ of the algebra $\ell^1(\mathbb{N}_0, h)$ is thus given by
\[
M = \{\, z \in \mathbb{C} : |R_n(z)| \le 1,\ n \in \mathbb{N}_0 \,\}.
\]
We showed that $\operatorname{supp}\mu \subset M$. The set $\operatorname{supp}\mu$ is usually known explicitly, while $M$ can sometimes be hard to determine. Thus it would be convenient to have $\operatorname{supp}\mu = M$. Under additional assumptions, like $\liminf_n h(n)^{1/n} \le 1$, one can show (Szwarc 1995, [23]) that
\[
\{\, x \in \mathbb{C} : |R_n(x)| \le 1,\ n \in \mathbb{N}_0 \,\} = \operatorname{supp}\mu.
\]
In particular, for the Jacobi polynomials we have $h(n) = O(n^{2\alpha+1})$. More generally, if the coefficients in the recurrence relation for the orthonormal polynomials $p_n$ are convergent and $\xi \in \operatorname{supp}\mu$, then $\limsup_n h(n)^{1/n} \le 1$. Now we are in a position to apply analogs of the Wiener and Lévy theorems in the context of convolution algebras associated with the polynomials $R_n$.
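For the Chebyshev case $R_n = T_n$ the equality $M = \operatorname{supp}\mu$ can be seen directly, since $|T_n(x)| \le 1$ exactly on $[-1,1]$ while $T_n(x) = \cosh(n\,\operatorname{arccosh}|x|)$ blows up outside; a quick numerical sketch:

```python
# |T_n(x)| <= 1 on [-1, 1], while sup_n |T_n(x)| = infinity for |x| > 1,
# so M = {z : |T_n(z)| <= 1 for all n} meets the real line exactly in [-1, 1].

def T(n, x):
    t_prev, t_curr = 1.0, x              # T_0, T_1
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
    return t_curr if n >= 1 else 1.0

# Inside the support every |T_n(x)| stays <= 1 ...
assert all(abs(T(n, x)) <= 1 + 1e-12 for n in range(1, 40)
           for x in (-1.0, -0.37, 0.0, 0.8, 1.0))
# ... while just outside, the sup over n exceeds any bound: 1.01 is not in M.
assert max(abs(T(n, 1.01)) for n in range(1, 200)) > 1e3
```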
5 Open problems
1. Determine the range of $(\alpha, \beta)$ for which the generalized Chebyshev polynomials have property (P). We know it holds for $\alpha \ge \beta > -1$ and $\alpha + \beta + 1 \ge 0$. The property can hold only in the region where it is valid for the Jacobi polynomials.

2. Find any criteria for property (P) for finite systems of orthogonal polynomials, like the Krawtchouk polynomials. The Krawtchouk polynomials are orthogonal relative to the measure
\[
\mu = \sum_{n=0}^{N} \binom{N}{n} p^{n}(1-p)^{N-n}\, \delta_n.
\]
They satisfy
\[
xK_n = (1-p)(N-n)K_{n+1} + \bigl[\,p(N-n) + (1-p)n\,\bigr]K_n + pnK_{n-1}.
\]
Eagleson showed that they satisfy property (P) if and only if $p \le \frac{1}{2}$.

3. Assume the orthonormal polynomials satisfy property (P) and $\xi \in \operatorname{supp}\mu \subset (-\infty, \xi]$. Does it imply
\[
\limsup_n h(n)^{1/n} \le 1\,?
\]
Recall that $h(n) = p_n(\xi)^2$.
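The Krawtchouk dichotomy in problem 2 can at least be probed numerically. The sketch below (exact rational arithmetic; $N = 4$ is an arbitrary small choice, and the binomial weight is the one matching the recurrence above) generates the $K_n$ by the recurrence and computes all linearization coefficients in the discrete inner product, illustrating Eagleson's criterion on both sides of $p = \tfrac12$.

```python
# Numerical probe of Eagleson's criterion for the Krawtchouk polynomials.
from fractions import Fraction as F
from math import comb

def lin_coeffs(p, N):
    """All linearization coefficients of K_n K_m, n, m <= N, exactly."""
    xs = range(N + 1)
    w = [comb(N, x) * p**x * (1 - p)**(N - x) for x in xs]   # binomial weights
    # K[n][x] = K_n(x), generated by the three-term recurrence
    K = [[F(1)] * (N + 1),
         [(F(x) - p * N) / ((1 - p) * N) for x in xs]]
    for n in range(1, N):
        mid = p * (N - n) + (1 - p) * n
        K.append([((x - mid) * K[n][x] - p * n * K[n - 1][x])
                  / ((1 - p) * (N - n)) for x in xs])
    ip = lambda f, g: sum(w[x] * f[x] * g[x] for x in xs)    # discrete inner product
    coeffs = []
    for n in range(N + 1):
        for m in range(N + 1):
            prod = [K[n][x] * K[m][x] for x in xs]
            coeffs += [ip(prod, K[k]) / ip(K[k], K[k]) for k in range(N + 1)]
    return coeffs

assert min(lin_coeffs(F(3, 10), 4)) >= 0      # p = 0.3 <= 1/2: all nonnegative
assert min(lin_coeffs(F(7, 10), 4)) < 0       # p = 0.7 >  1/2: some negative
```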
References

[1] N. I. Akhiezer, "The Classical Moment Problem", Hafner Publ. Co., New York, 1965.
[2] R. Askey, Linearization of the product of orthogonal polynomials, in "Problems in Analysis", R. Gunning, ed., Princeton University Press, Princeton, New Jersey, 1970, 223–228.
[3] R. Askey, "Orthogonal Polynomials and Special Functions", Regional Conference Series in Applied Mathematics 21, Society for Industrial and Applied Mathematics, Philadelphia, Pennsylvania, 1975.
[4] D. M. Bressoud, Linearization and related formulas for q-ultraspherical polynomials, SIAM J. Math. Anal. 12 (1981), 161–168.
[5] T. Chihara, "An Introduction to Orthogonal Polynomials", Mathematics and Its Applications 13, Gordon and Breach, New York, London, Paris, 1978.
[6] J. Dougall, A theorem of Sonine in Bessel functions, with two extensions to spherical harmonics, Proc. Edinburgh Math. Soc. 37 (1919), 33–47.
[7] G. Gasper, Linearization of the product of Jacobi polynomials, I, II, Canad. J. Math. 22 (1970), 171–175, 582–593.
[8] G. Gasper, A convolution structure and positivity of a generalized translation operator for the continuous q-Jacobi polynomials, in "Conference on Harmonic Analysis in Honor of Antoni Zygmund" (1983), Wadsworth International Group, Belmont, Calif., 44–59.
[9] G. Gasper, Rogers' linearization formula for the continuous q-ultraspherical polynomials and quadratic transformation formulas, SIAM J. Math. Anal. 16 (1985), 1061–1071.
[10] G. Gasper and M. Rahman, "Basic Hypergeometric Series", Encyclopedia of Mathematics and Its Applications 35, Cambridge University Press, Cambridge, 1990.
[11] E. Hylleraas, Linearization of products of Jacobi polynomials, Math. Scand. 10 (1962), 189–200.
[12] R. Koekoek, R. F. Swarttouw, The Askey-scheme of hypergeometric orthogonal polynomials and its q-analogue, Report 98-17, TU Delft, 1998.
[13] T. H. Koornwinder, Discrete hypergroups associated with compact quantum Gelfand pairs, in "Applications of Hypergroups and Related Measure Algebras" (W. Connett et al., eds.), Contemp. Math. 183, 1995, 213–237.
[14] R. Lasser, Orthogonal polynomials and hypergroups, Rend. Mat. 3 (1983), 185–209.
[15] R. Lasser, Linearization of the product of associated Legendre polynomials, SIAM J. Math. Anal. 14 (1983), 403–408.
[16] C. Markett, Linearization of the product of symmetric orthogonal polynomials, Constr. Approx. 10 (1994), 317–338.
[17] A. de Medicis and D. Stanton, Combinatorial orthogonal expansions, Proc. Amer. Math. Soc. 123 (1996), 469–473.
[18] M. Rahman, The linearization of the product of continuous q-Jacobi polynomials, Can. J. Math. 33 (1981), 961–987.
[19] L. J. Rogers, Second memoir on the expansion of certain infinite products, Proc. London Math. Soc. 25 (1894), 318–343.
[20] G. Szegő, "Orthogonal Polynomials", Amer. Math. Soc. Colloquium Publications 23, Providence, RI, 1975.
[21] R. Szwarc, Orthogonal polynomials and a discrete boundary value problem, I, II, SIAM J. Math. Anal. 23 (1992), 959–964, 965–969.
[22] R. Szwarc, Convolution structures associated with orthogonal polynomials, J. Math. Anal. Appl. 170 (1992), 158–170.
[23] R. Szwarc, A lower bound for orthogonal polynomials with an application to polynomial hypergroups, J. Approx. Theory 81 (1995), 145–150.
[24] R. Szwarc, Nonnegative linearization and quadratic transformation of Askey–Wilson polynomials, Canad. Math. Bull. 39 (1996), 241–249.
[25] M. Voit, Central limit theorems for a class of polynomial hypergroups, Adv. Appl. Prob. 22 (1990), 68–87.
[13] T. H. Koornwinder, Discrete hypergroups associated with compact quantum Gelfand pairs, in Applications of Hypergroups and Related Measure Algebras (eds. W. Connett at al.), Contemp. Math. 183 1995, 213{237. [14] R. Lasser, Orthogonal polynomials and hypergroups, Rend. Mat. 3 (1983), 185{209. [15] R. Lasser, Linearization of the product of associated Legendre polynomials, SIAM J. Math. Anal. 14 (1983), 403{408. [16] C. Markett, Linearization of the product of symmetric orthogonal polynomials, Constr. Approx. 10 (1994), 317{338. [17] A. de Medicis and D. Stanton, Combinatorial orthogonal expansions, Proc. Amer. Math. Soc. 123 (1996), 469{473. [18] M. Rahman, The linearization of the product of continuous q{ Jacobi polynomials, Can. J. Math. 33 (1981), 961{987. [19] L. J. Rogers, Second memoir on the expansion of certain in nite products, Proc. London Math. Soc. 25 (1894), 318{343. [20] G. Szego, "Orthogonal Polynomials", Amer. Math. Soc., Colloquium Publications 23, Providence RI, 1975. [21] R. Szwarc, Orthogonal polynomials and a discrete boundary value problem, I, II, SIAM J. Math. Anal. 23 (1992), 959{964, 965{969. [22] R. Szwarc, Convolution structures associated with orthogonal polynomials, J. Math. Anal. Appl. 170 (1992), 158-170. [23] R. Szwarc, A lower bound for orthogonal polynomials with an application to polynomial hypergroups, J. Approx. Theory 81 (1995), 145{150. [24] R. Szwarc, Nonnegative linearization and quadratic transformation of Askey{Wilson polynomials, Canad. Math. Bull. 39 (1996), 241{ 249. [25] M. Voit, Central limit theorems for a class of polynomial hypergroups, Adv. Appl. Prob. 22 (1990), 68{87. 34