Algebraic evaluations of some Euler integrals, duplication ... - CiteSeerX

6 downloads 0 Views 264KB Size Report
Apr 28, 1999 - Explicit evaluations of the symmetric Euler integral ..... ?arcsin a b. (0 jaj< b). (46) now yields the moment generating function of jXY j: for j j < (1 ...
Algebraic evaluations of some Euler integrals, duplication formulae for Appell's hypergeometric function F1, and Brownian variations  by Mourad E. H. Ismail and Jim Pitman Technical Report No. 554 Department of Statistics University of California 367 Evans Hall # 3860 Berkeley, CA 94720-3860 April 28, 1999 Abstract

R

Explicit evaluations of the symmetric Euler integral 01 u (1 ? u) f (u)du are obtained for some particular functions f . These evaluations are related to duplication formulae for Appell's hypergeometric function F1 which give reductions of F1 ( ; ; ; 2 ; y; z ) in terms of more elementary functions for for arbitrary with z = y=(y ? 1) and for = + 21 with arbitrary y; z . These duplication formulae generalize the evaluations of some symmetric Euler integrals implied by the following result: if a standard Brownian bridge is sampled at time 0, time 1, and at n independent random times with uniform distribution on [0; 1], then the broken line approximation to the bridge obtained from these n + 2 values has a total variation whose mean square is n(n + 1)=(2n + 1).

Key words and phrases. Brownian bridge, Gauss's hypergeometric function, Lauricella's multiple hypergeometric series, uniform order statistics, Appell functions. AMS 1991 subject classi cation. Primary: 33C65. Secondary: 60J65.  Research supported in part by NSF Grants DMS 99-70865 and DMS 97-03961

1

1 Introduction This paper is concerned with the explicit evaluation of some integrals of Euler type Z ua? (1 ? u)b? f (u) du (1) 1

1

1

0

for particular functions f , especially in the symmetric case a = b. These evaluations are related to various reduction formulae for hypergeometric functions represented by such integrals. Sections 4 and 5 explain how we were led to consider such integrals by a simple formula found in [24], for the mean square of the total variation of a discrete approximation to a Brownian bridge obtained by sampling the bridge at n independent random times with uniform distribution on (0; 1). We rst recall the basic Euler integrals which de ne the the beta function Z a)?(b) (2) B (a; b) := ua? (1 ? u)b? du = ?( ?(a + b) for a and b with positive real parts, and Gauss's hypergeometric function ! Z b? 1 n c?b? X a; b F c z = B (b; 1c ? b) t (1(1??tzt))a dt = (a()cn)(b)n zn! (3) n n where in the integral it is assumed that z 2= [1; 1) and that the real parts of b and b ? c are positive, and for the series jzj < 1 and nY ?  + n) : ()n := ( + i) = ?(?( ) i We note for ease of later reference the well known consequence of the series expansion in (3) that for any b ! a; b (4) F b z = (1 ? z)?a the Euler transformation [3] ! ! ? d; c ? a a; c + d ? a ? d (5) F c z ; c z = (1 ? z) F and Pfa -Kummer transformation ! ! z a; ? d a; c + d ? a F (6) c z = (1 ? z) F c z ? 1 : 1

1

1

0

1

1

0

1

=0

1

=0

2

By expansion of (1 ? yu)?p and (1 ? zu)?q in powers of yu and zu, for all positive real a; b; p; q there is the classical evaluation [23] Z ua? (1 ? u)b? du (7) (1 ? yu)p(1 ? zu)q = B (a; b)F (a; p; q; a + b; y; z) where F is one of Appell's hypergeometric functions of two variables [4], whose series expansion is 1 X 1 ( ) X m n ( )m ( 0)n m n 0 F ( ; ; ; ; y; z) = (jyj < 1; jzj < 1): (8) ( )m n m! n! y z m n 1

1

1

1

0

1

+

1

=0

+

=0

Appell [4, x4] gave a number of reduction formulae which allow F to be expressed in terms of simpler functions when its arguments are subject to various constraints. In particular, Appell gave formulae for F ( ; ; 0; ; y; ty); and F ( ; ; 0; + 0; y; z) in terms of Gauss's hypergeometric function. These reduction formulae for F , and various transformations between F and Appell's three other hypergeometric functions of two variables y and z, commonly denoted F ; F and F , can be found in many places [5, 6, 8, 10, 29]. The main results of this paper are the following formulae (9), (10) and (11). These duplication formulae for the Appell function F are reductions in the special case when

= 2 and 0 = , which do not seem to appear in the classical sources. For all complex ; ; y with jyj < 1=2 ! ! y y ; F ; ; ; 2 ; y; y ? 1 = F + 4(y ? 1) : (9) Section 2 derives this formula, explains its close relation to the well known duplication formulae for the sine and gamma functions, and gives a generalization to one of Lauricella's multiple hypergeometric functions. In view of (4), the case = + of (9) reduces to ! !? y y F ; + ; + ; 2 ; y; y ? 1 = 1 ? 4(y ? 1) = (1(1?? yy)) : (10) Section 6 o ers an interpretation of this formula in terms of a family of probability densities on (0; 1) parameterized by y 2 [0; 1). Formulae (9) and (5) show that ! y F ; + + d; + + d; 2 ; y; y ? 1 1

1

1

1

1

2

3

4

1

2

1

1 2

1 2

1

2

1 2

1 2

1

1 2

1 2

1 2

3

2

is an algebraic function of y for each and each positive integer d. Formula (10) is the particular case z = y=(y ? 1) of another duplication formula for F which we learned from Ira Gessel: for all complex ; y; z with jyj < 1 and jzj < 1 ! ! 1 2 1 p p1 ? y + p1 ? z : (11) F ( ; + ; + ; 2 ; y; z) = 2 1 + p 1?y 1?z Gessel's proof of (11) is presented in Section 3 together with some generalizations of(11). By (7), for > 0 the right side of (11) gives an algebraic evaluation of 1 Z [u(1 ? u)] ? du (12) B ( ; ) [(1 ? yu)(1 ? zu)] = and hence of other Euler integrals by di erentiating with respect to y and z. See also [7, 8], [16, (15)] and [32, 31] regarding various other reduction formulae for hypergeometric functions which involve duplication of an argument. 1

2

1

1 2

1 2

1

1

+1 2

0

2 Duplication formulae and symmetric Euler Integrals Consider rst the elementary duplication formula for the square of the sine function: sin 2 = 4 sin (1 ? sin ): 2

2

2

(13)

If  is picked uniformly at random from [0; 2], then so is 2 modulo 2, and hence sin 2 =d sin  where =d denotes equality in distribution of two random variables. As shown by Levy[20] (see also [12, 13, 25]) various random variables A with 2

2

A =d sin  (14) arise naturally in the study of Brownian motion B ; for instance the time that B attains its minimum value on [0; 1], and the amount of time that B spends positive during the time interval [0; 1]. The duplication formula (13) is then re ected by the identity in distribution 4A(1 ? A) =d A: (15) In other words, the arc sine distribution of A on (0; 1), with density P (A 2 du)=du = ? u? = (1 ? u)? = (0 < u < 1) 2

1

1 2

1 2

4

is invariant under the transformation u ! 4u(1 ? u). In the theory of iterated maps [21, Example 1.3] this observation is usually attributed to Von Neumann and Ulam [33]. See also [26] for further interpretations of this property. In purely analytic terms, the identity (15) states that for all non-negative measurable functions h h(t)dt 1 Z h(4u(1 ? u))du = 1 Z (16) = = =  u (1 ? u)  t (1 ? t) = p The substitution g(x) = ? h(4x)= x reduces (16) to the following expression of the calculus change of variable t = 4u(1 ? u): for every non-negative measurable g: Z g(t=4) dt Z (17) g(u(1 ? u))du = 2(1 ? t) = : If f is symmetric, meaning f (u) = f (1 ? u), then f (u) is a function of u(1 ? u), and so is ua? (1 ? u)a? f (u); the Euler integral (1) for a = b can then be simpli ed by application of (17). As indicated in [13, Ex. II.9.2], for a > 0 the case of (17) with g(x) = xa? yields B (a; a) = 2 ? aB (a; ) (18) which in view of Euler's formula B (a; b) = ?(a)?(b)=?(a + b) amounts to Legendre's duplication formula for the gamma function ?(2a) = 2 a? ?(a + ) : (19) ?(a) ?( ) The table of Euler integrals in Exton [11] provides dozens of other examples of (17). Proof of formula (9). Since the coecient of yk on each side of (9) is evidently a rational function of and , it suces to establish the identity for > 0 and > 0. Since ! y (1 ? yu) 1 ? y ? 1 u = 1 ? y y? 1 u(1 ? u) for a = b = ; p = q = ; z = y=(y ? 1), formula (7) reduces to ! Z 1 [u(1 ? u)] ? du y F ; ; ; 2 ; y; y ? 1 = B ( ; ) [1 ? y u(1 ? u)=(y ? 1)] which equals the right side of (9) by application of (17), (18), and the integral representation (3) of Gauss's hypergeometric function. 2 1

1

1 2

0

1 2

1 2

0

1 2

1

1

1

1

1 2

0

0

1

1

1 2

2

1 2

1

1 2

1 2

2

1

1

1

0

2

In x6 we will further explore the symmetry or antisymmetry of functions around the middle of the domain of integration. 5

A duplication formula for a Lauricella function. The Lauricella function [18] X (a)m  m (b )m    (bn)m xm    xmn FDn (a; b ; : : :; bn; c; x ; : : :; xn) = (c)m  m m !    mn! m ;:::;m ( )

1

1+ +

1

(

1

1

n

1

n

1+ +

n)

n

1

1

n

1

where the sum is over all vectors of n non-negative integers (m ; : : :; mn), is known [10, (2.3.6)] to admit the integral representation Z a? (1 ? u)c?a? du 1 u n FD (a; b ; : : :; bn; c; x ; : : : ; xn) = B (a; c ? a) (1 ? ux )b1    (1 ? ux )b (20) n provided the real parts of a and c ? a are positive and x ; : : : ; xn are in the open unit disc. Another application of (17) and (18) yields the following duplication formula, which reduces a Lauricella function of 2n variables, with c = 2a, with n equal pairs of parameters, and n corresponding pairs of variables xi and x^i with x^i = xi=(xi ? 1) for 1  i  n, to a Lauricella function of n variables: 1

1

1

( )

1

1

0

1

n

1

1

FD n (a; b ; b ; : : :; bn; bn; 2a; x ; x^ ; : : :; xn; x^n) = FDn (a; b ; : : :; bn; a + ; y ; : : :; yn) (2 )

1

1

1

( )

1

1 2

1

1

where yi := xi =(4xi ? 4). See also Karlsson [17] for some reductions of generalized Kampe de Feriet functions obtained by a similar method. 2

3 Formula (10) and related topics We rst start with Gessel's proof of formula (11). Gessel argues that formula (11) can be obtained by taking the average of the following two formulas: ! 2 p (21) F ( ; + ; + ; 2 + 1; y; z) = p 1?y + 1?z and ! 2 1 p p F ( + 1; + ; + ; 2 + 1; x; y) = p p : (22) 1?y 1?z 1?y + 1?z 2

1 2

1

1 2

2

1

1 2

1 2

To prove these formulas, apply the reduction formula [6, 9.5 (2)], ! y ? z a; b ? a F (a; b; c; b + c; y; z) = (1 ? z) F b + c 1 ? z 1

6

to get

F ( ; + ; + 1 2

1

1 2

; 2 + 1; y; z) = (1 ? z)? F

and

; + 2 + 1

1 2

! y ? z 1?z

! + 1 ; + y ? z ? ? F ( + 1; + ; + ; 2 + 1; x; y) = (1 ? z) F 2 + 1 1 ? z Finally, apply the formulas [1, (15.1.13) and (15.1.14), p. 556] ! ! 2 ; + p (23) F 2 + 1 u = 1+ 1?u and ! ! 2 1 + 1 ; + p p (24) F 2 + 1 u = 1 ? u 1 + 1 ? u and simplify. To discover what is behind formula (11) we appeal to (2), p. 239 in [8] and get, for j = 0; 1; 2; : : :, the following result 1 2

1

1 2

1 2

1

2

1 2

2

1 2

(1 ? y) F ( ; + j ? 0; 0; ; y; z!) = F ; ?j; 0; ; y ; z ? y y ?1 1?y j 1 ( )m n ( 0)n z ? y !n X (?j )m y !m X = m! y ? 1 n ( )m n n! 1 ? y : m 1

1

+

Therefore

F

1

( ; + j ? 0; 0; ; y; z)

+

=0

=0

=

! j (?j ) ( ) X y m m m m!( )m y ? 1 m 0 z ? y ! + m;  F + m 1 ? y :

(1 ? y)?

=0

(25)

At this stage one needs to make judicious choices of the parameters of the hypergeometric function on the right-hand side of formula (25) to reduce it to an algebraic function. One choice is

= + 0 ? 1=2: (26) 7

j = 1 the right-hand side of (25) is the sum of the F in (24) and a similar F with a di erent value of . This explains Gessel's identity. For general j we chose 0 = + 1=2 then apply the quadratic transformation [8, (2.11.13)]. The result is that for all j = 0; 1; 2; : : : and 0 = + 1=2 the F in (25) is ! ! 1 1 ? z = F  2 + 2m ? 1; 2 1 ? 1 1 ? z = A: 1?y 2 + m 2 2 1 ? y The hypergeometric function in the above expression is algebraic as can be seen from the Pfa -Kummer transformation (6). The nal result is F ( ; + j ? 0; 0; ; y; z) q p ? s 1 ? z Xj mX (?j )m( )m (?m ? 1)k y !m = 1?y+ 1?z 2 1?y m!k!(2 + k)m y?1 m k  q ? k p (27) (y ? z)k 1 ? y + 1 ? z : 1 2

1 2

1

2

+1

2

=0

=0

2

Another case is

= ? 0 + 1:

We need to apply

(28)

! ! a; b a= 2 ; ( a + 1) = 2 4 z F a ? b + 1 z = (1 ? z)?a F a ? b + 1 (1 ? z) ; (29) [8, (2.11.34)]. The quadratic transformation (38) had a misprint in the original reference where a ? b + 1 on the right-hand side was printed as a ? b ? 1. In this case (28) 0 z ? y ! + m; (30) F ? 0 + 1 + m 1 ? y  1 ? y  m ( + m)=2; ( + m + 1)=2 4(z ? y)(1 ? y) ! = 1?z (1 ? z) F : ? 0 + 1 + m It is clear from this formula that the choice 0 = k + 1 + =2, so that = ?k + =2, k = 0; 1; : : :, and the Pfa -Kummer transformation (6) reduce the F in (25) to an algebraic function. We take this opportunity to add that a general useful identity which generalizes (6) is the Fields and Wimp formula [15] ! a ; : : :; a p ; c ; : : :; cr (31) p r Fq s b ; : : : ; bq; d ; : : :; ds zw 2

+

2

1

+

+

1

1

1

1

8

! 1 (a ) : : : (a ) (?z )n X n + a ; : : :; n + a p n pn = (b ) : : : (b ) n! pFq n + b ; : : :; n + b z q n q n n ! c ; : : :; cr w :  r Fs ?n; d ; : : : ; ds =0

1

1

1

1

1

+1

1

For a treatment of such formulas, see [14].

4 Brownian variations.

Let (b(u); 0  u  1) be a standard Brownian bridge, that is the centered Gaussian process with continuous sample paths and covariance function

E (b(u)b(v)) = u(1 ? v) (0  u  v  1);

(32)

obtained by conditioning a standard one-dimensional Brownian motion started at 0 to return to 0 at time 1. See [28, 27] for background. Let Vn denote the variation of the path of b over the random partition of [0; 1] de ned by cutting the interval at each of n points picked uniformly at random from [0; 1], independently of each other and of b. That is nX (33) Vn := je n;ij where e n;i := b(Un;i) ? b(Un;i? ) +1

1

i=1

for Un; < Un; <    < Un;n the uniform order statistics obtained by putting the n uniform random variables in increasing order, and Un; = 0; Un;n = 1. In [24] the distribution of Vn is characterized as the unique distribution on (0; 1) whose pth moment is given for each p > 0 by the formula n + p) ?(2n) ?(n + p=2) EVnp = 2p= ?(?( (34) n) ?(2n + p) ?(n) and the corresponding density is expressed in terms of the Hermite function de ned in [19]. In particular, (34) gives + 1) : EVn = p1 ?(n?(+n1)=2) ; EVn = n2(nn + (35) 1 2 The formula for EVn is easily checked as follows, by conditioning on the Uni . It is well known that the Brownian bridge b has exchangeable increments, and that the spacings n;i := Un;i ? Un;i? for 1  i  n + 1 are exchangeable [2]. It follows that in the sum 1

2

0

2

2

1

9

+1

(33) de ning Vn the n +1 terms je n;ij are exchangeable. Combined with the consequence of (32) that q (36) e n; =d Un; Un; Z where Un; := 1 ? Un; , and Z is a standard Gaussian variable independent of Un; , so s E jZ j = 2 ; EZ = 1; (37) 1

1

1

1

1

1

2

the exchangeability of the je n;ij allows the following evaluation:  q  EVn = (n + 1)E je n; j = (n + 1) E Un; Un; E jZ j s ?( )?(n + ) 2 = (n + 1)n ?(n + 2)  1

1

3 2

1

1 2

which reduces to the expression for EVn in (35). It is not so easy to check the evaluation of EVn in (35) by the same method. Rather, this method yields an expression for EVn which when compared with that in (35) leads by a remarkable sequence of integral identities to the evaluation of Euler integral presented in the introduction as (56). It might also be interesting to explore the integral identities implied similarly by (34) for n = 3; 4; : : :, but these appear to be much more complicated. By the same considerations of exchangeability ! nX EVn = E (38) je n;ij = (n + 1)E e n; + (n + 1)nE je n; e n; j: 2

2

2

2

+1

2

i=1

1

1

2

Now by (36)

  n (n + 1)E e n; = (n + 1) E (Un; Un; ) EZ = (n + 1) (n + 1)(n n + 2) = n + 2 2

1

1

2

1

and by a straightforward extension of (36)   q q e e   E jn; n; j = E jXn j n; n; jYn j n; n; 1

1

2

1

2

(39) (40)

2

where x := 1 ? x and (Xn ; Yn ) is a pair of random variables which given n; and n; has the standard bivariate normal distribution with correlation s n; n; (41) E (Xn Yn j n; ; n; ) := ?  n;  n; : 1

1

2

10

1

2

1

2

2

Here n; and n; have the same joint distribution as min in Ui and 1 ? max in Ui for independent uniform (0; 1) variables Ui. For n  2 this means P (n; 2 dx; n; 2 dy) = n(n ? 1)(1 ? x ? y)n? dx dy (x; y  0; x + y  1): (42) The expectation in (40) can be evaluated with the help of the following lemma: Lemma 1 Let (X; Y ) have the standard bivariate normal distribution with E (X ) = E (Y ) = 0; E (X ) = E (Y ) = 1; E (XY ) = r 2 [?1; 1]; meaning that p (43) Y = rX + 1 ? r Z for independent standard normal X and Z . Then p  E jXY j = 2 1 ? r + r arcsin r : (44) Proof. By conditioning on X and applying a standard integral representation of the McDonald function K , as indicated in [30], there is the following correction of formula (2) of [30]: for all real z !  rz  1 j z j exp 1 ? r K 1 ? r dz: P (XY 2 dz) = p (45)  1?r The classical identity   Z1 a 1 az (46) ? arcsin b (0  jaj < b) e K (bz) = p b ?a 2 now yields the moment generating function of jXY j: for jj < (1 + r)? 2 arcsin(r + (1 ? r )) +  ? q 2 arcsin(r ? (1 ? r )) EejXY j =  + q (47) 2 1 ? r ? (1 ? r ) 2 1 + r ? (1 ? r ) and (44) is read from the coecient of  in the expansion of (47) in powers of . 2 1

2

1

1

1

2

2

2

2

2

2

0

2

2

0

0

2

0

2

2

1

2

2

2

2

2

2

As a check on (46), this formula combined with (45) yields the following companion of (44), which is a well known consequence of (43) and the symmetry of the joint distribution of (X; Z ) under rotations: P (XY > 0) = 21 + 1 arcsin r 11

As checks on (47), the coecient of  agrees with the formula E (XY ) = 1 + 2r which is obvious from (43), and the coecients of n for small even n are found to agree with those of the well known m.g.f. of XY : q EeXY = 1= 1 ? 2r ? (1 ? r ) : (48) 2

2

2

2

2

The computation in (40) can now be continued by conditioning on (n; ; n; ) and applying (41) and (44) to evaluate s ! q 2 2  n; n; e e E jn; n; j =  E n; n; (n; + n; ) +  E n; n; arcsin   (49) n; n; 1

1

1

2

2

1

1

2

2

2

1

2

1

2

where x := 1 ? x. By (42), the rst term is a Dirichlet integral, which is easily evaluated as (n ? 1) =(n ? ) , where ()m is the rising factorial with m factors. Substitute this in (49), then (49) and (39) in (38), and compare with (35) to deduce that for each n = 1; 2; : : : s ! 2 E   arcsin n; n; = 3(3n + 3n ? 1) : (50) n; n;   n;  n; 8(n + 1) (n ? ) If n = 1 then  ; =  ; = U say has uniform distribution on (0; 1), and (50) reduces to the elementary evaluation E (U U ) = 1=6. For n  2 set p = n ? 2. In view of (42), the identity (50) with both sides divided by n(n ? 1) = (p + 1) reads s 2 Z Z xy(x + y)p arcsin xy dx dy = 3(3p + 15p + 17) : (51)  x;y ;x y xy 8(p + 1) (p + ) 1 2

1 2 3

2

1

11

2

1

2

1

2

2

1 2 3

2

12

2

2

0

+

4

1

3 2 3

The following argument shows that this identity holds in fact for all Rcomplex p with positive real part. The left side of (51) is evidently the pth moment zpf (z)dz of a positive density f on (0; 1), which is found by partial fraction expansion of the right side of (51) to be p p p f (z) = 121 (1 ? z) (2 + z)(1 + 2 z): (52) 1 0

4

In view of (42) for n = 2 and its consequence that  ; =  ; +  ; has probability density 2(1 ? z) at z 2 (0; 1), it follows from the above discussion that the function f (z)=(1 ? z) can be interpreted as follows as a conditional expectation: for 0 < z < 1 s ! 2 E   arcsin  ;  ;  = z = f (z) : (53) ; ;  (1 ? z)  ;  ; ; 23

21

22

21

22

21

22

12

23

21

22

Since  ; given  ; = z has uniform distribution on 1 ? z, which is the distribution of U (1 ? z) for U with uniform distribution on (0; 1), it follows after setting w = (1 ? z) and U = (1 ? U ) that for 0 < w  1 v 0 1  u 2 E @U U arcsin u w U U t A = f (1 ? w) : (54)   (1 ? wU )(1 ? wU ) w 21

23

2

3

5 Some symmetric Euler integrals. Substitute x = 1=w in (54), use the formula (52) for f , and simplify, to see that (54) amounts to the following identity: for all x  1 s   Z q u u   uu arcsin (x ? u)(x ? u) du = 24 ?8x + 12x ? 2 + x(x ? 1)(8x ? 8x ? 3) (55) The formula q p arcsin 1=(1 + z) = =2 ? arctan z reduces (55) to v u Z u ? 1)  8x ? 12x + 4 ? qx(x ? 1)(8x ? 8x ? 3) : u(1 ? u) arctan t xu((1x ? du = u) 24 (56) Equivalently, from (56) via (17), s   Z t q p arctan 4x(xt? 1) dt = 3 8x ? 12x + 4 ? x(x ? 1)(8x ? 8x ? 3) (57) t which can be veri ed using Mathematica. To relate (56) to the identities discussed in the introduction, consider for x  1; > 0 and arbitrary real the integral Z ? I ( ; ; x) := ((x ?(uu(1)(?x ?u))1 + u)) du (58) where the denominator of the integrand can be expressed variously using   u  u (x ? u)(x ? 1 + u) = x(x ? 1) + u(1 ? u) = x(x ? 1) 1 ? x 1 ? 1 ? x : (59) 1

3

2

2

0

1

3

2

2

0

1

3

2

2

0

1

1

0

13

By di erentiating (56) with respect to x, and dividing both sides by 2x ? 1, formula (56) is seen to be equivalent to = = I ( ; 1; x) = 2x 2x(x??1 1) ? 8 (8x ? 8x ? 1): It is easily seen that 1 d I ( ; ; x) I ( ; + 1; x) = (2? x ? 1) dx and provided > 0 this operation is inverted by Z1 I ( ; ; x) = (2y ? 1)I ( ; + 1; ; y)dy: 3 2

5 2

3 2

2

x

(60) (61) (62)

Thus by repeatedly di erentiating and dividing by 2x ? 1, we nd that (60) is equivalent in turn to each of an in nite chain of explicit evaluations of I ( ; ; x) for = 1; 2; : : :, the next few of which are 1 0 q x ( x ? 1)(8 x ? 8 x + 3) A I ( ; 2; x) =  @1 ? (63) (2x ? 1) 5 2

2

5 2

3

3 (64) I ( ; 3; x) = q 4 x(x ? 1)(2x ? 1) x ? 24x + 1) I ( ; 4; x) = 8x = (24 (65) (x ? 1) = (2x ? 1) and so on. The simplest of this sequence of identities is (64). This is the special case = of the following identity: for all real > 0 and x  1 5 2

5

2

5 2

3 2

3 2

7

5 2

: I ( ; + ; x) = q B ( ; )2 x(x ? 1)(2x ? 1) 2

1 2

2

(66)

By using the last expression in (59) and (7) the integral I ( ; ; x) can be presented as   I ( ; ; x) = x B(x( ;? 1)) F ; ; ; 2 ; x1 ; 1 ?1 x (67) 1

14

where F is Appell's hypergeometric function (8). Thus (66) is just a restatement of (10), and (9) amounts to the following more general evaluation: for x  1; > 0 and real , ! ? 1 B ( ; ) ; I ( ; ; x) = x (x ? 1) F + 4x(x ? 1) : (68) The operation (61) shows that for each formula (66) generates an in nite sequence of explicit evaluations of I ( ; + d + ; x) in terms of algebraic functions for d = 0; 1; 2; : : :. For some , as in the case for = as illustrated above, it is also possible to work backwards using (62) to get algebraic expressions for I ( ; + d + ; x) for negative d. Now if d is a positive integer the Gauss function on the right side of (5) is a polynomial in z. Substituted in (68) with a = and c = + , this gives an explicit formula for I ( ; + d + ; x) instead of a recursive evaluation. The question of whether or not there is an explicit representation for I ( ; ; x) in terms of elementary functions of x for a particular choice of ( ; ) reduces to whether or not the Gauss function appearing in (68) can be so represented. See for [8, 1] for tabulations of such elementary evaluations of the Gauss function. 1

1 2

1 2

5 2

1 2

1 2

1 2

6 Generalized beta densities

For arbitrary positive a; b, and real cj and yj with jyj j < 1, 1  j  n the formula n Y (69) ua? (1 ? u)b? (1 ? yj u)?c 1

1

j

j =1

de nes a positive function of u 2 (0; 1) which can be normalized to de ne a probability density on (0; 1), which we shall call a generalized beta density. As noted by Exton [10, x7.1.1], the integral representation (20) of the Lauricella function FDn implies that this function appears in the normalization constant and in formulae for the moments of this family of distributions on (0; 1). The functions in (69) are weight functions for orthogonal polynomials which generalize Jacobi polynomials. These weight functions are referred to as generalized Jacobi weights [22]. The following discussion concerns a particular one-parameter sub-family of this multiparameter family of distributions on (0; 1), which is related to the duplication formula (9) for the Appell function F . For each y 2 [0; 1) de ne non-negative functions y and fy with domain [0; 1] by the formulae ( )

1

(1 ? y) (1 ? y) u(1 ? u)(1 ? y) : y (u) := (1 ? yu)(1 ? y(1 ? u)) and fy (u) := ((1 ? yu)(1 ? y(1 ? u)) 1 2

1 2

2

15

1 2

2

3 2

(70)

Observe that in view of (7), formula (10) can be rewritten as follows: for each y 2 [0; 1) and > 0 Z Z ? (71) [ y (u)] fy (u) du = B ( ; ) = [u(1 ? u)] ? du: For = 1 this shows fy is a probability density on (0; 1) for each y 2 [0; 1). This family is evidently the subfamily of the family of obtained by normalization of the functions (69), for a = b = 1, n = 2, y = y, y = y=(y ? 1) and c = c = . The following graphs show the densities of fy for y = i=10; 0  i  9. 1

1

1

1

0

0

1

2

1

2

3 2

3 2.5 2 1.5 1 0.5

0.2

0.4

0.6

0.8

1

The graphs illustrate the following facts which are easily veri ed by calculus. For each 0 < y < 1 the density fy is convex and symmetric about 1=2, with maximum value (1 ? y) =(1 ? y) attained at 0 and 1, and as y " 1 the probability distribution with density fy converges weakly to the Bernoulli( ) distribution on f0; 1g. According to formula (71), if Uy denotes a random variable with density fy , and U = U is a random variable with uniform distribution on (0; 1), then 1 2

2

1 2

0

E [ y (Uy )] ? = B ( ; ) = E [U (1 ? U )] ? 1

( > 0)

1

(72)

Since a distribution on (0; 1) is uniquely determined by its moments, this implies the identity in distribution y (Uy ) =d U (1 ? U ): (73) Equivalently, for each non-negative measurable function g with domain [0; 1=4] Z g(x=4)dx Z Z (74) g( y (u)) fy (u) du = g(u(1 ? u))du = 2(1 ? x) = : where the second equality is just (17) again. This second equality shows that 4U (1 ? U ) has the beta (1; ) distribution with density (1 ? u)? = at u 2 (0; 1), hence that 1

1

1

0

0

0

1 2

1 2

16

1 2

1 2

U (1 ? U ) =d (1 ? U ). It then follows by inversion of the transformation y that a random variable Uy with density fy can be constructed as !? = y (75) Uy :=  U 1 ? (2 ? y) (1 ? U ) where  is a random sign, equally likely to be +1 or ?1, independent of U . By symmetry, Uy has mean 1=2 for all y. The variance of Uy is found by integration using (75) to be ! p1 ? y y (2 ? y ) (76) 1 ? 2 y arctan 2p1 ? y : E (Uy ? ) = 4y As the rst equality in (74) does not seem obvious, we provide the following check: Direct proof of the rst equality in (74). Since y (u) = y (1 ? u) and fy (u) = fy (1 ? u), the left side of (70) equals Z Z dv y (w) 2 g( y ( + v)) fy ( + v) dv = 2 g(w) fy ( + vy (w)) dw dw (77) 1 4

2

1 2

0

2

2

2

1 2 2

1 2

1 2

2

1 2

1 2

2

1 4

1 2

1 2

0

by the change of variable w = y ( + v) which makes w decrease from 1=4 to 0 as v increases from 0 to , with s 1 dvy (w) = cy ? 4 v = vy (w) := 2 11 ??c4ww ; dw y 4(1 ? 4w) (1 ? cy w) 1 2

1 2

1 2

where

3 2

cy := (1 ?y y) : The right side of (77) can be simpli ed, by use of the following identities, which follow easily from the above de nitions: 2

1 2

2

fy ( + v) = 2(2 ? y) (1 ? y) ((2 ? y) + 4v y ) 2

1 2

2

1 2

3 2 2 2

where for v = vy (w)

? y) (2 ? y) + 4v y = (2 ? y) + (11??4cww)y = 4(1 1 ? cy w y 2

2

2

2

17

2

so

) (1 ? cy w) fy ( + vy (w)) = 2(2 ? y) (1 ? y) (1 ? cy w) = (2 ? y4(1 (78) ? y) 4 (1 ? y) and 4 ? cy = 16(1 ? y)=(2 ? y) so that 16(1 ? y) dv ( w ) y = (79) dw 4(2 ? y) (1 ? 4w) (1 ? cy w) and the equality of rst and last expressions in (74) follows by cancellation, and a nal change of variable x = 4w. 2 1 2

2

1 2

3 2

3 2

2

3 2

3 2

2

1 2

2

3 2

7 Companion Identities For arbitrary non-negative measurable functions h and g there is the following extension of the change of variable formula (17) to deal with asymmetric integrands, obtained by p p setting t = 4u(1 ? u) so u = (1  1 ? t); jdu=dtj = 1=(4 1 ? t): Z X Z h(t=4)g( (1 + p1 ? t)) p h(u(1 ? u))g(u)du = dt (80) 4 1?t 2f g We now show how (11) and other related formulas follow from (80). The basic ingredients are the following relationships involving the ultraspherical polynomials fCn (x)g and Jacobi polynomials fPn ; (x)g, )m P ? = ;? = (2x ? 1); (81) C m(x) = (1(=2) m m )m xP ? = ; = (2x ? 1); C m (x) = (1(=2) (82) m m 1 X Pn ; ( )wn = 2 R? (1 ? w + R)? (1 + w + R)? ; (83) 1 2

1

1 2

1

0

0

1

(

(

2

+1

2

(

where

+1

)

1 2

(

1 2)

1 2 1 2)

2

2

+1

+

1

n=0

R = (1 ? 2w + w ) = : (84) See for example (10.9.21), (10.9.22), (10.9.29), and (10.9.30), respectively in [9]. We shall also need the well-known generating function 1 X Cn (y)wn = (1 ? 2yw + w )? : (85) 2 1 2

2

n=0

18

Theorem 2 We have F ( ; ; ; 2 ; y; z) = [(1 ? y=2)(1 ? z=2)]? 1 X )n ( )n P ? = ;? = (Z );  [(2 ? y(yz )(2 ? z)]n ( + 1=2) n 1

(

(86)

1 2)

n

n=0

with Z de ned as

1 2

? yz) ? 1; Z = yz2((2y +? zy)(2 ? z) 2

(87)

and

F ( + 1; ; ; 2 + 1; y; z) ? F ( ; ; ; 2 ; y; z) (88) 1 ) X (yz)m( )m ? = ; = (Z ) = [(24 ?(yy+(2z??zyz (89) )] m [(2 ? y)(2 ? z)]m( + 1=2)m Pm Proof. Substitution of the right-hand side of (85) in the integral representation (7) ([9, (5.8.5)]) shows that the left-hand side of (86) is " ! # ?(2 ) Z [u(1 ? u)] ? 1 ? y + z (2u ? 1) + yz(2u ? 1) ? du ? ( ) 2?y 2?z (2 ? y)(2 ? z)   Z ? ) 1 t p = ?(2 ? ( ) 4 4 1?t 0 1 n= X y + z ? yz (yz) @q A (1 ? t)n= dt: C 2 n n= [(2 ? y)(2 ? z)] yz(2 ? y)(2 ? z) n ;n 1

1

+1

+1

1

2

(

1 2 1 2)

+1

=0

2

1

0

1

1

2

0

2

0

2

2

even

The above expression then simpli es to the right-hand side of (86) through the use of (82). Using the (7) and the duplication formula for the gamma function, the di erence in (88) is seen to equal ?(2 ) Z [u(1 ? u)] ? (2u ? 1)[(1 ? uy)(1 ? uz)]? du ? ( ) 1

2

1

0

Z  t  ? ?(2 ) 1  2p1 ? t p = ? ( ) 4 4 1?t 0 1 n= X ( yz ) y + z ? yz @ A (1 ? t)n= dt:  n= Cn q [(2 ? y )(2 ? z )] yz(2 ? y)(2 ? z) n ;n 1

2

1

0

2

0

odd

2

2

19

by expansion of the integrand as in the previous case. This simpli es via (82) to the expression in (89). 2 When = + 1=2 in Theorem 2, the generating function (83)-(84) implies (11). Another special case is to choose z = y=(y ? 1) but keep general subject to restrictions that make the integrals and sums involved converge. This choice makes y + z ? yz = 0 and hence Z = ?1. Now [9, (10.8.3), (10.8.13)] imply Pn ; (?1) = ( +n!1)n (?1)n: Thus we get ! ! (1 ? y ) y ; 1 = 2 y (90) F ; ; ; 2 ; y; y ? 1 = (1 ? y=2) F + (2 ? y) (

)

2

1

1 2

2

2

which is equivalent to (9) through the Pfa -Kummer transformation (6). The same cases = + 1=2 or z = y=(y ? 1) are of interest in the second formula of Theorem 2. Thus

F ( + 1; + ; + ; 2 + 1; y; z) ! ! 2 1 1 p p1 ? y + p1 ? z = 2 1+ p 1?y 1?z 1 (y + z ? yz ) X m( )m 2 ( yz ) + [(2 ? y(2 ? z)] = [(2 ? y)(2 ? z)]m( + 1=2) 1 2

1

1 2

2

2 +1

+1

+3 2

m=0

Pm ; = (Z ): (91) (

m+1

1 2)

In the notation of (87),

q 4 (1 ? y)(1 ? z)  = Z; R = (2 ? y)(2 ? z) ; w = (2 ? yyz )(2 ? z) p h q i p  1 + (1 ? y)(1 ? z) 1?y+ 1?z 1 ? w + R = 2 (2 ? y)(2 ? z) ; 1 + w + R = 2 (2 ? y)(2 ? z) : Now (91) implies formula (22). Another way to evaluate the integral in (89) is to write 2u ? 1 as u ? (1 ? u) then write the left-hand side of (89) as 2

2

1 2

[F ( + 1; ; ; 2 + 1; y; z) ? F ( ; ; ; 2 + 1; y; z)]: 1

1

20

This establishes formula (21). Finally the case z = y=(y ? 1) in (89) gives the identity

F ( + 1; ; ; 2 + 1; y; y=(y ? 1)) = F ( ; ; ; 2 ; y; y=(y ? 1)): 1

1

Thus (9) can be recast as

F ( + 1; ; ; 2 + 1; y; y=(y ? 1)) = F ;+ 1

1 2

! y 4(y ? 1) : 2

(92) (93)

Acknowledgment

Thanks to Persi Diaconis, Charles Dunkl, and Nobuki Takayama for helpful suggestions and pointers to the literature. Special thanks to Ira Gessel for the contribution of identity (11) and its proof, and for detailed comments on a preliminary draft of this paper.

References [1] M. Abramowitz and I. A. Stegun. Handbook of Mathematical Functions. Dover, New York, 1965.  [2] D.J. Aldous. Exchangeability and related topics. In P.L. Hennequin, editor, Ecole  d'Ete de Probabilites de Saint-Flour XII, Springer Lecture Notes in Mathematics, Vol. 1117. Springer-Verlag, 1985. [3] G. E. Andrews, R. Askey, and R. Roy. Special Functions, volume 71 of Encyclopedia of Math. and Appl. Cambridge Univ. Press, Cambridge, 1999. [4] P. Appell. Sur les fonctions hypergeometriques de plusiers variables. GauthierVillars, Paris, 1925. Mem. des Sciences Math. de l'Acad. des Sciences de Paris, III. [5] P. Appell and J. Kampe de Feriet. Fonctions Hypergeometriques et Hyperspheriques: Polyn^omes d'Hermite . Gauthier-Villars, Paris, 1926. [6] W. N. Bailey. Generalized Hypergeometric Series. Cambridge Univ. Press, Cambridge, 1935. [7] J. L. Burchnall and T. W. Chaundry. Expansions of Appell's double hypergeometric functions II. Quart. J. Math Oxford Ser., 12:112{128, 1941. 21

[8] A. Erdélyi et al. Higher Transcendental Functions, volume I of Bateman Manuscript Project. McGraw-Hill, New York, 1953.
[9] A. Erdélyi et al. Higher Transcendental Functions, volume II of Bateman Manuscript Project. McGraw-Hill, New York, 1953.
[10] H. Exton. Multiple hypergeometric functions and applications. Ellis Horwood Ltd., Chichester, 1976.
[11] H. Exton. Handbook of hypergeometric integrals. Ellis Horwood Ltd., Chichester, 1978.
[12] W. Feller. An Introduction to Probability Theory and its Applications, Vol. 1, 3rd ed. Wiley, New York, 1968.
[13] W. Feller. An Introduction to Probability Theory and its Applications, Vol. 2, 2nd ed. Wiley, New York, 1971.
[14] J. L. Fields and M. E. H. Ismail. Polynomial expansions. Math. Comp., 29:894–902, 1975.
[15] J. L. Fields and J. Wimp. Expansions of hypergeometric functions in hypergeometric functions. Math. Comp., 15:390–395, 1961.
[16] I. M. Gessel. Super ballot numbers. J. Symbolic Comput., 14(2-3):179–194, 1992.
[17] P. W. Karlsson. Reduction of certain generalised Kampé de Fériet functions. Math. Scand., 32:265–268, 1973.
[18] G. Lauricella. Sulle funzioni ipergeometriche a più variabili. Rendiconti del Circolo Matematico di Palermo, 7:111–158, 1893.
[19] N. N. Lebedev. Special Functions and their Applications. Prentice-Hall, Englewood Cliffs, N.J., 1965.
[20] P. Lévy. Sur certains processus stochastiques homogènes. Compositio Math., 7:283–339, 1939.
[21] M. Yu. Lyubich. The dynamics of rational transforms: the topological picture. Russian Math. Surveys, 41:43–117, 1986.
[22] P. Nevai. Géza Freud, orthogonal polynomials and Christoffel functions. A case study. J. Approx. Theory, 48:3–167, 1986.

[23] E. Picard. Sur une extension aux fonctions de deux variables du problème de Riemann relatif aux fonctions hypergéométriques. C. R. Acad. Sci. Paris, 90:1119 and 1267, 1880.
[24] J. Pitman. Characterizations of Brownian motion, bridge, meander and excursion by sampling at independent uniform times. Technical Report 545, Dept. Statistics, U.C. Berkeley, 1999. Available via http://www.stat.berkeley.edu/users/pitman.
[25] J. Pitman and M. Yor. Arcsine laws and interval partitions derived from a stable subordinator. Proc. London Math. Soc. (3), 65:326–356, 1992.
[26] J. Pitman and M. Yor. Some properties of the arc sine law related to its invariance under a family of rational maps. Technical Report 558, Dept. Statistics, U.C. Berkeley, 1999. Available via http://www.stat.berkeley.edu/users/pitman.
[27] D. Revuz and M. Yor. Continuous martingales and Brownian motion. 2nd edition. Springer, Berlin-Heidelberg, 1994.
[28] G. R. Shorack and J. A. Wellner. Empirical processes with applications to statistics. John Wiley & Sons, New York, 1986.
[29] L. J. Slater. Generalized Hypergeometric Functions. Cambridge Univ. Press, Cambridge, 1966.
[30] M. D. Springer. Review of "Selected tables in mathematical statistics (Vol. VII). The product of two normally distributed random variables" by W. Q. Meeker Jr., L. W. Cornwell and L. A. Aroian. Technometrics, 25:211–212, 1983.
[31] K. Ueno. Hypergeometric series formulas generated by the Chu-Vandermonde convolution. Mem. Fac. Sci. Kyushu Univ. Ser. A, 44(1):11–26, 1990.
[32] K. Ueno. Hypergeometric series formulas through operator calculus. Funkcial. Ekvac., 33(3):493–518, 1990.
[33] S. M. Ulam and J. von Neumann. On combination of stochastic and deterministic processes. Bull. Amer. Math. Soc., 53:1120, 1947.

Department of Mathematics, University of South Florida, Tampa, FL 33620-5700.
Department of Statistics, University of California-Berkeley, Berkeley, CA 94720-3860
email: [email protected], [email protected]
