Statistical Papers (2008) 49:211–224
DOI 10.1007/s00362-006-0007-6
Regular Article

Some useful integrals and their applications in correlation analysis

Anwar H. Joarder

Received: 15 November 2005 / Revised: 19 April 2006 / Published online: 22 August 2006 © Springer-Verlag 2006

Abstract  The distribution of the product moment correlation coefficient based on the bivariate normal distribution is well known. Recently, fat-tailed distributions, especially some elliptical distributions, have been considered as parent populations for many kinds of business and economic data. The normal and t-distributions are well-known special cases of the elliptical distribution. In this paper we derive some theorems involving double integrals and apply them to derive the probability distribution of the correlation coefficient for some elliptical populations. The general nature of the theorems indicates their potential use in probability distribution theory.

Keywords  Correlation coefficient · Double integrals · Elliptical distribution · Bivariate t-distribution

1 Introduction

The distribution of the product moment correlation coefficient based on the bivariate normal distribution was derived by Fisher (1915). A recent interest among applied scientists is the use of fat-tailed distributions for modeling business data, especially stock returns. Since the bivariate t-distribution has fatter tails, it has been increasingly applied for modeling such data. The distribution of a statistic is said to be robust if it remains the same under violation of the normality assumption. The robustness of the distribution of the correlation for elliptical

A. H. Joarder (B) Department of Mathematical Sciences, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia e-mail: [email protected]


populations was proved by Fang and Anderson (1990, 10) by stochastic representation. Fang (1990) derived the null distribution, whereas Ali and Joarder (1991) derived the nonnull distribution of the correlation coefficient for the bivariate elliptical distribution. It should be pointed out that in the case of the bivariate elliptical distribution, the observations in the sample are not necessarily independent.

In this paper we derive some theorems containing bivariate integrals that help derive the distribution of the correlation coefficient for different populations. The general nature of the theorems indicates their potential use for many other applications in the distribution theory of the bivariate elliptical distribution. The objective is to provide insight to experts in business, science and engineering who use elliptical models as models for samples. See, e.g., Sutradhar and Ali (1986), Lange et al. (1989), Sutradhar and Ali (1989), Fang (1990), Fang and Anderson (1990), Fang et al. (1990), Joarder and Ahmed (1998), Kibria and Haq (1999), Billah and Saleh (2000), Kibria (2003), Kibria (2004), Kibria and Saleh (2004), and Kotz and Nadarajah (2004).

2 The bivariate normal, t and elliptical distributions

The bivariate elliptical distribution, which includes the bivariate normal as well as the bivariate t-distribution, is outlined in this section.

2.1 The bivariate normal distribution

Let X = (X_1, X_2)' be a bivariate normal random vector with probability density function (pdf)

    f_1(x) = (2\pi)^{-1} |\Sigma|^{-1/2} \exp\left\{ -\frac{1}{2} (x - \theta)' \Sigma^{-1} (x - \theta) \right\}    (1)

where \theta = (\theta_1, \theta_2)' is the unknown vector of location parameters and \Sigma is the 2 × 2 unknown positive definite matrix of population variances and covariance. The bivariate normal distribution with this pdf will be denoted by N_2(\theta, \Sigma). Now consider a sample X_1, X_2, \ldots, X_N (N > 2) having the joint probability density function

    f_2(x_1, x_2, \ldots, x_N) = (2\pi)^{-N} |\Sigma|^{-N/2} \exp\left\{ -\frac{1}{2} \sum_{j=1}^{N} (x_j - \theta)' \Sigma^{-1} (x_j - \theta) \right\}.    (2)
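For a concrete sample from the model (2), the sums of squares and cross products matrix A = (a_{ik}) and the sample correlation R, used throughout the sequel, satisfy a_{ii} = m S_i^2 and a_{12} = m R S_1 S_2 with m = N − 1. A minimal sketch (the data below are made up for illustration only):

```python
import math

# Arbitrary illustrative bivariate sample (x_1j, x_2j), j = 1..N (made-up data).
x1 = [2.1, 3.4, 1.9, 4.0, 2.8, 3.1]
x2 = [1.0, 2.2, 0.7, 2.9, 1.5, 2.4]
N = len(x1)
m = N - 1

xbar1, xbar2 = sum(x1) / N, sum(x2) / N
a11 = sum((v - xbar1) ** 2 for v in x1)                      # sum of squares of x1
a22 = sum((v - xbar2) ** 2 for v in x2)                      # sum of squares of x2
a12 = sum((u - xbar1) * (v - xbar2) for u, v in zip(x1, x2))  # cross products

S1 = math.sqrt(a11 / m)      # a_11 = m * S1^2
S2 = math.sqrt(a22 / m)      # a_22 = m * S2^2
R = a12 / (m * S1 * S2)      # a_12 = m * R * S1 * S2
```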

The mean vector is \bar{X} = (\bar{X}_1, \bar{X}_2)', so that the sums of squares and cross products matrix is given by \sum_{j=1}^{N} (X_j - \bar{X})(X_j - \bar{X})' = A. The symmetric bivariate matrix A can be written as A = (a_{ik}), i = 1, 2; k = 1, 2, where a_{ii} = m S_i^2 = \sum_{j=1}^{N} (X_{ij} - \bar{X}_i)^2, m = N - 1 (i = 1, 2), and a_{12} = \sum_{j=1}^{N} (X_{1j} - \bar{X}_1)(X_{2j} - \bar{X}_2) = m R S_1 S_2. Fisher (1915) derived the distribution of A for p = 2 in order to study


the distribution of the correlation coefficient from a normal sample. Wishart (1928) obtained the joint distribution of sample variances and covariances from the multivariate normal population. The distribution of the bivariate Wishart matrix based on the bivariate normal distribution is given by

    f_3(A) = \frac{2^{-m} |\Sigma|^{-m/2}}{\sqrt{\pi}\, \Gamma(m/2)\, \Gamma((m-1)/2)}\, |A|^{(m-3)/2} \exp\left( -\frac{1}{2} \operatorname{tr} \Sigma^{-1} A \right),    (3)

A > 0, m > 2 (see, e.g., Anderson 2003, 252).

2.2 The bivariate t-distribution

Let X = (X_1, X_2)' be a bivariate t random vector with probability density function

    f_4(x) = (2\pi)^{-1} |\Sigma|^{-1/2} \left( 1 + (x - \theta)' (\nu\Sigma)^{-1} (x - \theta) \right)^{-\nu/2 - 1}    (4)

where \theta = (\theta_1, \theta_2)' is the unknown vector of location parameters and \Sigma is the 2 × 2 unknown positive definite matrix of scale parameters, while the scalar \nu is assumed to be a known positive constant (Muirhead 1982, 48). Notice that though the components X_1 and X_2 are uncorrelated, they are not independent unless \nu \to \infty. Now consider a sample X_1, X_2, \ldots, X_N (N > 2) having the joint probability density function

    f_5(x_1, x_2, \ldots, x_N) = \frac{\Gamma(\nu/2 + N)\, |\Sigma|^{-N/2}}{(\nu\pi)^N\, \Gamma(\nu/2)} \left( 1 + \sum_{j=1}^{N} (x_j - \theta)' (\nu\Sigma)^{-1} (x_j - \theta) \right)^{-\nu/2 - N},    (5)

which is the bivariate t-model for the sample. Note that the observations in the sample are uncorrelated but not independent unless \nu \to \infty. The random symmetric positive definite matrix A is said to have a Wishart distribution based on the bivariate t-population with m = N - 1 > 2 and \Sigma (2 \times 2) > 0, written as A \sim W(m, \Sigma; \nu), if its probability density function is given by

    f_6(A) = C_\nu(m, 2)\, |\Sigma|^{-m/2}\, |A|^{(m-3)/2} \left( 1 + \operatorname{tr} (\nu\Sigma)^{-1} A \right)^{-\nu/2 - m}, \quad A > 0, \; m > 2,    (6)

where

    C_\nu(m, 2) = \frac{\nu^{-m}\, \Gamma(\nu/2 + m)}{\sqrt{\pi}\, \Gamma(\nu/2)\, \Gamma(m/2)\, \Gamma((m-1)/2)}

(see Sutradhar and Ali 1989, 160).


By the use of the duplication formula for the gamma function, given by

    \Gamma(z) = \pi^{-1/2}\, 2^{z-1}\, \Gamma((z + 1)/2)\, \Gamma(z/2)    (7)

(Anderson 2003, 125), with z = m - 1, we have

    C_\nu(m, 2) = \frac{2^{m-2}\, \nu^{-m}\, \Gamma(m + \nu/2)}{\pi\, \Gamma(m - 1)\, \Gamma(\nu/2)}.    (8)
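Both the duplication formula (7) and the passage from the definition of C_\nu(m, 2) under (6) to the reduced form (8) are easy to confirm numerically; a sketch with arbitrary test values:

```python
import math

# Duplication formula (7): Gamma(z) = pi^(-1/2) * 2^(z-1) * Gamma((z+1)/2) * Gamma(z/2)
z = 7.3   # arbitrary test point
lhs = math.gamma(z)
rhs = math.pi ** -0.5 * 2.0 ** (z - 1) * math.gamma((z + 1) / 2) * math.gamma(z / 2)

# C_nu(m,2): the definition under (6) versus the reduced form (8)
m, nu = 6, 5.0
c_def = nu ** -m * math.gamma(nu / 2 + m) / (
    math.sqrt(math.pi) * math.gamma(nu / 2)
    * math.gamma(m / 2) * math.gamma((m - 1) / 2))
c_red = 2.0 ** (m - 2) * nu ** -m * math.gamma(m + nu / 2) / (
    math.pi * math.gamma(m - 1) * math.gamma(nu / 2))
```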

2.3 The bivariate elliptical distribution

The probability density function of the bivariate elliptical distribution is given by

    f_7(x) = K(N, 2)\, |\Sigma|^{-1/2}\, g_{N,2}\left( (x - \theta)' \Sigma^{-1} (x - \theta) \right)    (9)

where \theta = (\theta_1, \theta_2)' is the unknown vector of location parameters and \Sigma is the 2 × 2 unknown positive definite matrix of scale parameters, while the normalizing constant K(N, 2) is determined by the form of g (cf. Sutradhar and Ali 1989). Now consider a sample X_1, X_2, \ldots, X_N (N > 2) having the joint probability density function

    f_8(x_1, x_2, \ldots, x_N) = \frac{K(N, 2)}{|\Sigma|^{N/2}}\, g_{N,2}\left( \sum_{j=1}^{N} (x_j - \theta)' \Sigma^{-1} (x_j - \theta) \right),    (10)

which is the bivariate elliptical model.

Theorem 2.1 (Sutradhar and Ali 1989, 158) Consider the pdf of the bivariate Wishart matrix based on the bivariate elliptical model given by (10). Then the pdf of the Wishart matrix is given by

    f_9(A) = C(m, 2)\, |\Sigma|^{-m/2}\, |A|^{(m-3)/2}\, g_{m,2}(\operatorname{tr} \Sigma^{-1} A), \quad C(m, 2) = \frac{2^{m-2}\, \pi^{m-1}\, K(m, 2)}{\Gamma(m - 1)}, \quad m > 2.    (11)

3 Main results

Lemma 3.1 Let V = \sqrt{U_1 U_2} be the geometric mean of two independent chi-square random variables U_i \sim \chi^2_m (i = 1, 2). Then the moment generating function of V at \rho is given by

    M_V(\rho) = \sum_{k=0}^{\infty} \frac{(2\rho)^k}{k!}\, \frac{\Gamma^2((m + k)/2)}{\Gamma^2(m/2)}, \quad -1 < \rho < 1.
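Lemma 3.1 can be sanity-checked by simulation, since V is easy to sample. A Monte Carlo sketch, not part of the paper (the seed, m, \rho and sample size are arbitrary choices):

```python
import math
import random

def mgf_series(rho, m, kmax=200):
    """Truncated series for M_V(rho) from Lemma 3.1 (0 < rho < 1 assumed here)."""
    lgm2 = math.lgamma(m / 2.0)
    s = 0.0
    for k in range(kmax):
        # log of (2 rho)^k / k! * Gamma((m+k)/2)^2 / Gamma(m/2)^2
        lg = (k * math.log(2.0 * rho) - math.lgamma(k + 1)
              + 2.0 * (math.lgamma((m + k) / 2.0) - lgm2))
        s += math.exp(lg)
    return s

random.seed(1)
m, rho, n = 4, 0.3, 50_000
acc = 0.0
for _ in range(n):
    # chi-square with m d.f. as a sum of m squared standard normals
    u1 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(m))
    u2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(m))
    acc += math.exp(rho * math.sqrt(u1 * u2))
mc = acc / n                      # Monte Carlo estimate of E exp(rho * V)
series = mgf_series(rho, m)       # Lemma 3.1 series
```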


Proof By definition, the moment generating function of V = \sqrt{U_1 U_2} at \rho is given by

    E\left( e^{\rho\sqrt{U_1 U_2}} \right) = \int_0^{\infty}\!\!\int_0^{\infty} e^{\rho\sqrt{u_1 u_2}}\, \frac{1}{2^m\, \Gamma^2(m/2)}\, (u_1 u_2)^{m/2 - 1}\, e^{-(u_1 + u_2)/2}\, du_1\, du_2.

Then the lemma is obvious by virtue of

    e^{\rho\sqrt{u_1 u_2}} = \sum_{k=0}^{\infty} \frac{\rho^k}{k!}\, (u_1 u_2)^{k/2} \quad \text{and} \quad \frac{2^{k/2}\, \Gamma((m + k)/2)}{\Gamma(m/2)} = E\left( U_i^{k/2} \right), \quad (i = 1, 2). \qquad \Box

Lemma 3.2 Let

    I(\rho, m) = \int_0^{\pi} (\sin\theta)^{m-1} (1 - \rho\sin\theta)^{-m}\, d\theta, \quad -1 < \rho < 1.

Then

    (i)  I(\rho, m) = \frac{2^{m-1}}{\Gamma(m)} \sum_{k=0}^{\infty} \frac{(2\rho)^k}{k!}\, \Gamma^2((m + k)/2),
    (ii) I(\rho, m) = \frac{2^{m-1}\, \Gamma^2(m/2)}{\Gamma(m)}\, M_V(\rho).
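Lemma 3.2(i) equates a one-dimensional integral with a series, and both sides are cheap to evaluate numerically. A sketch (the values of \rho and m and the truncation points are arbitrary):

```python
import math

def I_quad(rho, m, n=20000):
    """Midpoint rule for I(rho, m) = int_0^pi (sin t)^(m-1) (1 - rho sin t)^(-m) dt."""
    h = math.pi / n
    return h * sum(math.sin(h * (i + 0.5)) ** (m - 1)
                   * (1.0 - rho * math.sin(h * (i + 0.5))) ** (-m)
                   for i in range(n))

def I_series(rho, m, kmax=300):
    """Lemma 3.2(i): (2^(m-1)/Gamma(m)) * sum_k (2 rho)^k / k! * Gamma((m+k)/2)^2."""
    s = 0.0
    for k in range(kmax):
        lg = (k * math.log(2.0 * rho) - math.lgamma(k + 1)
              + 2.0 * math.lgamma((m + k) / 2.0))
        s += math.exp(lg)
    return 2.0 ** (m - 1) * s / math.gamma(m)

q5 = I_quad(0.4, 5)
s5 = I_series(0.4, 5)
```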

Proof Since |\rho\sin\theta| < 1, we have

    (1 - \rho\sin\theta)^{-m} = \sum_{k=0}^{\infty} \frac{\rho^k\, \Gamma(m + k)}{k!\, \Gamma(m)}\, (\sin\theta)^k,

so that

    I(\rho, m) = \sum_{k=0}^{\infty} \frac{\rho^k\, \Gamma(m + k)}{k!\, \Gamma(m)} \int_0^{\pi} (\sin\theta)^{m+k-1}\, d\theta = \sum_{k=0}^{\infty} \rho^k\, \frac{\Gamma(m + k)}{k!\, \Gamma(m)} \cdot \frac{\sqrt{\pi}\, \Gamma((m + k)/2)}{\Gamma((m + k + 1)/2)}

by virtue of

    \int_0^{\pi} (\sin\theta)^m\, d\theta = \frac{\sqrt{\pi}\, \Gamma((m + 1)/2)}{\Gamma(m/2 + 1)}.

Next, replacing \Gamma(m + k) by the duplication formula of the gamma function, given by (7), with z = m + k, we have (i), which can also be written as (ii) by virtue of Lemma 3.1. \qquad \Box

Theorem 3.1 For -1 < \rho < 1 and \nu > 0, let

    J(\rho, m, \nu) = \int_0^{\infty}\!\!\int_0^{\infty} (u_1 u_2)^{m/2-1} \left( 1 + u_1 + u_2 - 2\rho\sqrt{u_1 u_2} \right)^{-\nu-m} du_1\, du_2.

Then

    (i)  J(\rho, m, \nu) = \frac{\Gamma(\nu)}{\Gamma(m + \nu)} \sum_{k=0}^{\infty} \frac{(2\rho)^k}{k!}\, \Gamma^2\left( \frac{m + k}{2} \right),
    (ii) J(\rho, m, \nu) = \frac{\Gamma(\nu)\, \Gamma^2(m/2)}{\Gamma(m + \nu)}\, M_V(\rho).
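Theorem 3.1(i) can be checked against brute-force quadrature of the defining double integral in the substituted form (12) of the proof below; the parameter choices, grid and truncation point are arbitrary illustrative values:

```python
import math

def J_series(rho, m, nu, kmax=200):
    """Theorem 3.1(i): Gamma(nu)/Gamma(m+nu) * sum_k (2 rho)^k/k! * Gamma((m+k)/2)^2."""
    s = 0.0
    for k in range(kmax):
        lg = (k * math.log(2.0 * rho) - math.lgamma(k + 1)
              + 2.0 * math.lgamma((m + k) / 2.0))
        s += math.exp(lg)
    return s * math.gamma(nu) / math.gamma(m + nu)

def J_quad(rho, m, nu, n=500, cut=25.0):
    """Midpoint quadrature of the substituted form (12):
    J = 4 * int int (y1 y2)^(m-1) (1 + y1^2 + y2^2 - 2 rho y1 y2)^(-nu-m) dy1 dy2."""
    h = cut / n
    ys = [h * (i + 0.5) for i in range(n)]
    ps = [y ** (m - 1) for y in ys]          # precompute (y)^(m-1)
    total = 0.0
    for i, y1 in enumerate(ys):
        c1 = 1.0 + y1 * y1
        p1 = ps[i]
        for j, y2 in enumerate(ys):
            total += p1 * ps[j] * (c1 + y2 * y2
                                   - 2.0 * rho * y1 * y2) ** (-nu - m)
    return 4.0 * total * h * h

Jq = J_quad(0.3, 4, 3)
Js = J_series(0.3, 4, 3)
```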

Proof The integral in the theorem can be written as

    J(\rho, m, \nu) = 4 \int_0^{\infty}\!\!\int_0^{\infty} (y_1 y_2)^{m-1} \left( 1 + y_1^2 + y_2^2 - 2\rho y_1 y_2 \right)^{-\nu-m} dy_1\, dy_2.    (12)

The transformation y_1 = w\cos\theta, y_2 = w\sin\theta with Jacobian J(y_1, y_2 \to w, \theta) = w yields

    J(\rho, m, \nu) = 2^{-m+3} \int_{\theta=0}^{\pi/2} \int_{w=0}^{\infty} (\sin 2\theta)^{m-1}\, w^{2m-1} \left( 1 + w^2 - \rho w^2 \sin 2\theta \right)^{-\nu-m} dw\, d\theta.

Next, the transformations w^2 = u, 2\theta = \alpha yield

    J(\rho, m, \nu) = \int_{\alpha=0}^{\pi} \left( \frac{\sin\alpha}{2} \right)^{m-1} \int_{u=0}^{\infty} u^{m-1} \left( 1 + (1 - \rho\sin\alpha)u \right)^{-\nu-m} du\, d\alpha.    (13)

Then by virtue of

    \int_0^{\infty} \frac{x^{m-1}}{(a + bx)^{m+n}}\, dx = \frac{\Gamma(m)\, \Gamma(n)}{a^n\, b^m\, \Gamma(m + n)},

the inner integral of (13) becomes

    \frac{\Gamma(m)\, \Gamma(\nu)}{\Gamma(m + \nu)}\, (1 - \rho\sin\alpha)^{-m},

so that

    J(\rho, m, \nu) = \frac{1}{2^{m-1}}\, \frac{\Gamma(m)\, \Gamma(\nu)}{\Gamma(m + \nu)} \int_{\alpha=0}^{\pi} (\sin\alpha)^{m-1} (1 - \rho\sin\alpha)^{-m}\, d\alpha.

Then the theorem follows by Lemma 3.2. \qquad \Box

Lemma 3.3 For a bivariate elliptical probability model given by (10), we have

    \xi(N) = \int_0^{\infty} w^{N-1}\, g_{N,2}(w)\, dw = \pi^{-N}\, \Gamma(N)\, K^{-1}(N, 2)

(cf. Fang et al. 1990, 66).

Proof Make the transformation \Sigma^{-1/2}(x_j - \theta) = z_j (j = 1, 2, \ldots, N). Then the probability density function of Z_1, Z_2, \ldots, Z_N is given by

    f_{10}(z_1, z_2, \ldots, z_N) = K(N, 2)\, g_{N,2}\left( \sum_{j=1}^{N} z_j' z_j \right).    (14)


The pdf of z_{11} = u_1, z_{12} = u_2, \ldots, z_{N1} = u_{2N-1}, z_{N2} = u_{2N} is given by

    f_{10}(u_1, u_2, \ldots, u_{2N}) = K(N, 2)\, g_{N,2}\left( \sum_{j=1}^{2N} u_j^2 \right)

and hence \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} K(N, 2)\, g_{N,2}\left( \sum_{j=1}^{2N} u_j^2 \right) du_1 \cdots du_{2N} = 1. Make the polar transformation

    u_j = w \left( \prod_{k=1}^{j-1} \sin\theta_k \right) \cos\theta_j, \quad (j = 1, 2, \ldots, 2N - 1), \qquad u_{2N} = w \prod_{k=1}^{2N-1} \sin\theta_k,

where w \in [0, \infty), \theta_k \in [0, \pi) for k = 1, 2, \ldots, 2N - 2, and \theta_{2N-1} \in [0, 2\pi), with Jacobian

    J(u_1, u_2, \ldots, u_{2N} \to \theta_1, \ldots, \theta_{2N-1}, w) = w^{2N-1} \prod_{k=1}^{2N-2} (\sin\theta_k)^{2N-k-1}.

Then

    K(N, 2) \int_{\theta_1=0}^{\pi} \cdots \int_{\theta_{2N-2}=0}^{\pi} \int_{\theta_{2N-1}=0}^{2\pi} \int_{w=0}^{\infty} w^{2N-1} \prod_{k=1}^{2N-2} (\sin\theta_k)^{2N-k-1}\, g_{N,2}\left( w^2 \right) d\theta_1 \cdots d\theta_{2N-1}\, dw = 1.

Integrating out the angles gives the surface area 2\pi^N/\Gamma(N) of the unit sphere in \mathbb{R}^{2N}, and the substitution w^2 = v then yields

    K(N, 2) \int_0^{\infty} w^{N-1}\, g_{N,2}(w)\, dw = \pi^{-N}\, \Gamma(N). \qquad \Box
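For the bivariate normal model, comparing (2) with (10) gives g_{N,2}(t) = e^{-t/2} and K(N, 2) = (2\pi)^{-N}, so Lemma 3.3 predicts \xi(N) = 2^N \Gamma(N). A numerical sketch (N, grid and truncation point are arbitrary):

```python
import math

N = 5
# Lemma 3.3 with the normal kernel: xi(N) = pi^(-N) * Gamma(N) * K(N,2)^(-1)
predicted = math.pi ** -N * math.gamma(N) * (2.0 * math.pi) ** N
closed_form = 2.0 ** N * math.gamma(N)   # the same quantity, simplified

# xi(N) = int_0^inf w^(N-1) exp(-w/2) dw by midpoint quadrature
n, cut = 40000, 120.0
h = cut / n
xi = h * sum((h * (i + 0.5)) ** (N - 1) * math.exp(-(h * (i + 0.5)) / 2.0)
             for i in range(n))
```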

Theorem 3.2 Let

    J_g(\rho, m) = \int_0^{\infty}\!\!\int_0^{\infty} (u_1 u_2)^{m/2-1}\, g_{m,2}\left( u_1 + u_2 - 2\rho\sqrt{u_1 u_2} \right) du_1\, du_2.

Then

    (i)  J_g(\rho, m) = \frac{\xi(m)}{\Gamma(m)} \sum_{k=0}^{\infty} \frac{(2\rho)^k}{k!}\, \Gamma^2((m + k)/2),
    (ii) J_g(\rho, m) = \frac{\xi(m)\, \Gamma^2(m/2)}{\Gamma(m)}\, M_V(\rho),

where \xi(m) = \pi^{-m}\, \Gamma(m)\, K^{-1}(m, 2) = 2^{m-2}(m - 1)\, C^{-1}(m, 2)/\pi.

Proof The integral in the theorem can be written as

    J_g(\rho, m) = 4 \int_0^{\infty}\!\!\int_0^{\infty} (y_1 y_2)^{m-1}\, g_{m,2}\left( y_1^2 + y_2^2 - 2\rho y_1 y_2 \right) dy_1\, dy_2.    (15)

The transformation y_1 = w\cos\theta, y_2 = w\sin\theta with Jacobian J(y_1, y_2 \to w, \theta) = w yields

    J_g(\rho, m) = 2^{-m+3} \int_{\theta=0}^{\pi/2} \int_{w=0}^{\infty} (\sin 2\theta)^{m-1}\, w^{2m-1}\, g_{m,2}\left( w^2 - \rho w^2 \sin 2\theta \right) dw\, d\theta.

Next, letting \int_0^{\infty} w^{m-1}\, g_{m,2}(w)\, dw = \xi(m), the independent transformations w^2 = u, 2\theta = \alpha yield

    J_g(\rho, m) = 2^{-m+1} \int_{\alpha=0}^{\pi} (\sin\alpha)^{m-1} \int_{u=0}^{\infty} u^{m-1}\, g_{m,2}\left( (1 - \rho\sin\alpha)u \right) du\, d\alpha
                 = 2^{-m+1} \left[ \int_{\alpha=0}^{\pi} \frac{(\sin\alpha)^{m-1}}{(1 - \rho\sin\alpha)^m}\, d\alpha \right] \left[ \int_{w=0}^{\infty} w^{m-1}\, g_{m,2}(w)\, dw \right]
                 = 2^{-m+1} \left[ \frac{2^{m-1}\, \Gamma^2(m/2)}{\Gamma(m)}\, M_V(\rho) \right] \xi(m).

That is, J_g(\rho, m) = \frac{\xi(m)\, \Gamma^2(m/2)}{\Gamma(m)}\, M_V(\rho). Then the theorem follows by (11) and Lemma 3.3 in the following way:

    \xi(m) = \int_0^{\infty} u^{m-1}\, g_{m,2}(u)\, du = \pi^{-m}\, \Gamma(m)\, K^{-1}(m, 2) = 2^{m-2}(m - 1)\, C^{-1}(m, 2)/\pi. \qquad \Box
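Theorem 3.2 can be exercised with the normal kernel g_{m,2}(t) = e^{-t/2}, for which \xi(m) = 2^m \Gamma(m). A quadrature sketch (m, \rho, grid and truncation point are arbitrary):

```python
import math

m, rho = 4, 0.4

def Jg_series(rho, m, xi, kmax=200):
    """Theorem 3.2(i): Jg = (xi(m)/Gamma(m)) * sum_k (2 rho)^k/k! * Gamma((m+k)/2)^2."""
    s = 0.0
    for k in range(kmax):
        lg = (k * math.log(2.0 * rho) - math.lgamma(k + 1)
              + 2.0 * math.lgamma((m + k) / 2.0))
        s += math.exp(lg)
    return xi * s / math.gamma(m)

xi_normal = 2.0 ** m * math.gamma(m)   # xi(m) for the normal kernel g(t) = exp(-t/2)

def Jg_quad(rho, m, n=500, cut=60.0):
    """Direct midpoint quadrature of Jg for g_{m,2}(t) = exp(-t/2), using
    exp(-(u1+u2-2 rho sqrt(u1 u2))/2) = e^(-u1/2) e^(-u2/2) e^(rho sqrt(u1 u2))."""
    h = cut / n
    us = [h * (i + 0.5) for i in range(n)]
    ps = [u ** (m / 2.0 - 1.0) for u in us]
    es = [math.exp(-u / 2.0) for u in us]
    rs = [math.sqrt(u) for u in us]
    total = 0.0
    for i in range(n):
        a = ps[i] * es[i]
        r1 = rho * rs[i]
        for j in range(n):
            total += a * ps[j] * es[j] * math.exp(r1 * rs[j])
    return total * h * h

Jgq = Jg_quad(rho, m)
Jgs = Jg_series(rho, m, xi_normal)
```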


4 Applications in correlation analysis

The long proof of the distribution of the correlation coefficient by Fisher (1915) is made shorter and more elegant in this section. Further, the distribution of the correlation coefficient is derived, along the lines of Fisher (1915), for the bivariate t-distribution as well as the bivariate elliptical distribution.

Theorem 4.1 The probability density function of the correlation coefficient R based on a bivariate normal population, t-population or elliptical population is given by

    h(r) = \frac{2^{m-2}\, \Gamma^2(m/2)\, (1 - \rho^2)^{m/2}}{\pi\, \Gamma(m - 1)}\, (1 - r^2)^{(m-3)/2}\, M_V(\rho r)
         = \frac{2^{m-2}\, (1 - \rho^2)^{m/2}}{\pi\, \Gamma(m - 1)}\, (1 - r^2)^{(m-3)/2} \sum_{k=0}^{\infty} \frac{(2\rho r)^k}{k!}\, \Gamma^2\left( \frac{m + k}{2} \right), \quad -1 < r < 1.

4.1 Bivariate normal case

Proof The pdf of the elements of A given by (3) can be written as

    f_3(a_{11}, a_{22}, a_{12}) = \frac{2^{-m}\, (a_{11} a_{22} - a_{12}^2)^{(m-3)/2}}{\sqrt{\pi}\, \Gamma(m/2)\, \Gamma((m-1)/2)\, (1 - \rho^2)^{m/2}\, (\sigma_1\sigma_2)^m} \exp\left\{ -\frac{1}{2(1 - \rho^2)} \left( \frac{a_{11}}{\sigma_1^2} + \frac{a_{22}}{\sigma_2^2} - \frac{2\rho a_{12}}{\sigma_1\sigma_2} \right) \right\}    (16)

where a_{11} > 0, a_{22} > 0, -\infty < a_{12} < \infty, -1 < \rho < 1, m > 2, \sigma_1 > 0, \sigma_2 > 0. Under the transformation a_{11} = m s_1^2, a_{22} = m s_2^2, a_{12} = m r s_1 s_2 with Jacobian m^3 s_1 s_2, followed by the transformation m s_1^2 = \sigma_1^2 u_1, m s_2^2 = \sigma_2^2 u_2 with Jacobian J(s_1^2, s_2^2 \to u_1, u_2) = (\sigma_1\sigma_2/m)^2 and the integration over u_1 and u_2, the probability density function of R will be

    h(r) = \frac{(1 - r^2)^{(m-3)/2}}{4\pi\, \Gamma(m - 1)\, (1 - \rho^2)^{m/2}} \int_0^{\infty}\!\!\int_0^{\infty} (u_1 u_2)^{m/2-1} \exp\left( -\frac{u_1 + u_2 - 2\rho r \sqrt{u_1 u_2}}{2(1 - \rho^2)} \right) du_1\, du_2.    (17)


Then the transformation u_1 = (1 - \rho^2) y_1, u_2 = (1 - \rho^2) y_2 with Jacobian J(u_1, u_2 \to y_1, y_2) = (1 - \rho^2)^2 yields

    h(r) = \frac{(1 - \rho^2)^{m/2}\, (1 - r^2)^{(m-3)/2}}{4\pi\, \Gamma(m - 1)} \int_0^{\infty}\!\!\int_0^{\infty} e^{\rho r \sqrt{y_1 y_2}}\, (y_1 y_2)^{m/2-1}\, e^{-(y_1 + y_2)/2}\, dy_1\, dy_2.    (18)

The theorem is thus complete by Lemma 3.1. \qquad \Box
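Theorem 4.1 can also be sanity-checked numerically: as a density, h(r) should integrate to one over (-1, 1). A sketch, not part of the paper (m, \rho and the grid are arbitrary):

```python
import math

def h_density(r, m, rho, kmax=400):
    """Density of the correlation coefficient R from Theorem 4.1."""
    x = 2.0 * rho * r
    if x == 0.0:
        s = math.gamma(m / 2.0) ** 2          # only the k = 0 term survives
    else:
        s, la = 0.0, math.log(abs(x))
        for k in range(kmax):
            mag = math.exp(k * la - math.lgamma(k + 1)
                           + 2.0 * math.lgamma((m + k) / 2.0))
            s += mag if (x > 0.0 or k % 2 == 0) else -mag   # sign of x^k
            if k > m and mag < 1e-16:
                break
    c = 2.0 ** (m - 2) * (1.0 - rho * rho) ** (m / 2.0) / (math.pi * math.gamma(m - 1))
    return c * (1.0 - r * r) ** ((m - 3) / 2.0) * s

m, rho = 5, 0.5
n = 2000
step = 2.0 / n
total = step * sum(h_density(-1.0 + (i + 0.5) * step, m, rho) for i in range(n))
```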

Corollary 4.1 For -1 < \rho, r < 1, we have

    \int_0^{\infty}\!\!\int_0^{\infty} e^{\rho r \sqrt{y_1 y_2}}\, (y_1 y_2)^{m/2-1}\, e^{-(y_1 + y_2)/2}\, dy_1\, dy_2 = \frac{4\pi\, \Gamma(m - 1)}{(1 - \rho^2)^{m/2}}\, (1 - r^2)^{-(m-3)/2}\, h(r),

where h(r) is the density in Theorem 4.1.

4.2 Bivariate t-distribution case

Proof The pdf of the elements of A given by (6) can be written as

    f_6(a_{11}, a_{22}, a_{12}) = \frac{C_\nu(m, 2)}{(1 - \rho^2)^{m/2}\, (\sigma_1\sigma_2)^m}\, (a_{11} a_{22} - a_{12}^2)^{(m-3)/2} \left[ 1 + \frac{1}{\nu(1 - \rho^2)} \left( \frac{a_{11}}{\sigma_1^2} + \frac{a_{22}}{\sigma_2^2} - \frac{2\rho a_{12}}{\sigma_1\sigma_2} \right) \right]^{-\nu/2 - m}    (19)

where a_{11} > 0, a_{22} > 0, -\infty < a_{12} < \infty, -1 < \rho < 1, m > 2, \sigma_1 > 0, \sigma_2 > 0. Under the transformation a_{11} = m s_1^2, a_{22} = m s_2^2, a_{12} = m r s_1 s_2 with Jacobian J(a_{11}, a_{22}, a_{12} \to r, s_1^2, s_2^2) = m^3 s_1 s_2, followed by the transformation m s_1^2 = \sigma_1^2 u_1, m s_2^2 = \sigma_2^2 u_2 with Jacobian J(s_1^2, s_2^2 \to u_1, u_2) = (\sigma_1\sigma_2/m)^2 and then integrating out u_1 and u_2, we have the probability density function of R as follows:

    h(r) = \frac{C_\nu(m, 2)\, (1 - r^2)^{(m-3)/2}}{(1 - \rho^2)^{m/2}} \int_0^{\infty}\!\!\int_0^{\infty} (u_1 u_2)^{m/2-1} \left( 1 + \frac{u_1 + u_2 - 2\rho r \sqrt{u_1 u_2}}{\nu(1 - \rho^2)} \right)^{-\nu/2 - m} du_1\, du_2.    (20)

Then the transformation u_1 = \nu(1 - \rho^2) y_1, u_2 = \nu(1 - \rho^2) y_2 with Jacobian J(u_1, u_2 \to y_1, y_2) = \nu^2 (1 - \rho^2)^2 yields

    h(r) = \nu^m\, C_\nu(m, 2)\, (1 - \rho^2)^{m/2}\, (1 - r^2)^{(m-3)/2}\, J(\rho r, m, \nu/2)    (21)

where J(\rho, m, \nu) is defined in Theorem 3.1, and Theorem 4.1 follows. \qquad \Box
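The robustness claim can be verified directly: after applying Theorem 3.1(ii) with \nu/2, the leading constant of (21), \nu^m C_\nu(m,2)\, \Gamma(\nu/2)\, \Gamma^2(m/2)/\Gamma(m + \nu/2), should not depend on \nu and should equal the constant of Theorem 4.1. A numerical sketch (m and the \nu values are arbitrary):

```python
import math

def c_nu(m, nu):
    """C_nu(m, 2) from (8)."""
    return (2.0 ** (m - 2) * nu ** -m * math.gamma(m + nu / 2.0)
            / (math.pi * math.gamma(m - 1) * math.gamma(nu / 2.0)))

def t_constant(m, nu):
    """Leading constant of h(r) in (21) after applying Theorem 3.1(ii) with nu/2."""
    return (nu ** m * c_nu(m, nu) * math.gamma(nu / 2.0)
            * math.gamma(m / 2.0) ** 2 / math.gamma(m + nu / 2.0))

def fisher_constant(m):
    """Leading constant of h(r) in Theorem 4.1."""
    return 2.0 ** (m - 2) * math.gamma(m / 2.0) ** 2 / (math.pi * math.gamma(m - 1))

# The t-case constant should coincide with the normal-case constant for every nu.
diffs = [abs(t_constant(6, nu) - fisher_constant(6)) for nu in (3.0, 5.0, 10.0, 30.0)]
```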

Corollary 4.2 For -1 < \rho, r < 1, we have

    \int_0^{\infty}\!\!\int_0^{\infty} (u_1 u_2)^{m/2-1} \left( 1 + \frac{u_1 + u_2 - 2\rho r \sqrt{u_1 u_2}}{\nu(1 - \rho^2)} \right)^{-\nu/2 - m} du_1\, du_2 = C_\nu^{-1}(m, 2)\, (1 - \rho^2)^{m/2}\, (1 - r^2)^{-(m-3)/2}\, h(r),

where h(r) is the density in Theorem 4.1.

4.3 Bivariate elliptical distribution case

We now demonstrate how Theorem 3.2 eases the derivation of the distribution of the correlation coefficient (Ali and Joarder 1991). The general nature of Theorem 3.2 indicates its potential application in the sampling distribution theory of elliptical populations.

Proof The pdf of the elements of A given by Theorem 2.1 can be written as

    f_9(a_{11}, a_{22}, a_{12}) = \frac{C(m, 2)\, (a_{11} a_{22} - a_{12}^2)^{(m-3)/2}}{(1 - \rho^2)^{m/2}\, (\sigma_1\sigma_2)^m}\, g_{m,2}\left( \frac{1}{1 - \rho^2} \left( \frac{a_{11}}{\sigma_1^2} + \frac{a_{22}}{\sigma_2^2} - \frac{2\rho a_{12}}{\sigma_1\sigma_2} \right) \right)    (22)

where a_{11} > 0, a_{22} > 0, -\infty < a_{12} < \infty, -1 < \rho < 1, m > 2, \sigma_1 > 0, \sigma_2 > 0. Under the transformation a_{11} = m s_1^2, a_{22} = m s_2^2, a_{12} = m r s_1 s_2 with Jacobian J(a_{11}, a_{22}, a_{12} \to r, s_1^2, s_2^2) = m^3 s_1 s_2, followed by the transformation m s_1^2 = \sigma_1^2 u_1, m s_2^2 = \sigma_2^2 u_2 with Jacobian J(s_1^2, s_2^2 \to u_1, u_2) = (\sigma_1\sigma_2/m)^2 and then integrating out u_1 and u_2, we have the probability density function of R given by


    h(r) = C(m, 2)\, (1 - \rho^2)^{-m/2}\, (1 - r^2)^{(m-3)/2} \int_0^{\infty}\!\!\int_0^{\infty} (u_1 u_2)^{m/2-1}\, g_{m,2}\left( \frac{u_1 + u_2 - 2\rho r \sqrt{u_1 u_2}}{1 - \rho^2} \right) du_1\, du_2.    (23)

Then the transformation u_1 = (1 - \rho^2) y_1, u_2 = (1 - \rho^2) y_2 with Jacobian J(u_1, u_2 \to y_1, y_2) = (1 - \rho^2)^2 yields

    h(r) = C(m, 2)\, (1 - \rho^2)^{m/2}\, (1 - r^2)^{(m-3)/2}\, J_g(\rho r, m)    (24)

where J_g(\rho, m) is defined in Theorem 3.2, and Theorem 4.1 follows. \qquad \Box
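The chain (24) → Theorem 4.1 can be checked for the normal kernel g_{m,2}(t) = e^{-t/2}, for which K(m, 2) = (2\pi)^{-m}: the constant C(m, 2)\, \xi(m)\, \Gamma^2(m/2)/\Gamma(m) should then reduce to the Fisher constant 2^{m-2} \Gamma^2(m/2)/(\pi \Gamma(m-1)). A sketch (m is arbitrary):

```python
import math

m = 6
K_normal = (2.0 * math.pi) ** -m                 # K(m,2) for the normal kernel
C = 2.0 ** (m - 2) * math.pi ** (m - 1) * K_normal / math.gamma(m - 1)   # (11)
xi = math.pi ** -m * math.gamma(m) / K_normal    # Lemma 3.3

# Constant of h(r) obtained from (24) combined with Theorem 3.2(ii)
elliptical_constant = C * xi * math.gamma(m / 2.0) ** 2 / math.gamma(m)
fisher_constant = 2.0 ** (m - 2) * math.gamma(m / 2.0) ** 2 / (math.pi * math.gamma(m - 1))

# xi(m) should also satisfy the alternative form in Theorem 3.2
xi_alt = 2.0 ** (m - 2) * (m - 1) / (math.pi * C)
```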

Corollary 4.3 For -1 < \rho, r < 1, we have

    \int_0^{\infty}\!\!\int_0^{\infty} (u_1 u_2)^{m/2-1}\, g_{m,2}\left( u_1 + u_2 - 2\rho r \sqrt{u_1 u_2} \right) du_1\, du_2 = \frac{(1 - r^2)^{-(m-3)/2}}{C(m, 2)\, (1 - \rho^2)^{m/2}}\, h(r)

where g_{m,2}(\cdot) is defined in Theorem 2.1.

5 Robustness of some tests on the correlation coefficient

The results in Sect. 4 indicate robustness of the correlation coefficient in the bivariate elliptical population only. Thus the assumption of bivariate normality, under which tests on the correlation coefficient are developed, can be relaxed to the broader class of bivariate elliptical distributions. The likelihood ratio test of the hypothesis H_0: \rho = 0 against all alternatives H_1: \rho \neq 0 is performed by T = \sqrt{m - 1}\, R\, (1 - R^2)^{-1/2}, which has a Student t-distribution with (m - 1) > 0 degrees of freedom (d.f.). One significant lesson of the paper is that acceptance of the null hypothesis does not mean independence unless the sample is from the bivariate normal distribution. The most popular test is based on Z = \tanh^{-1} R = \frac{1}{2}\ln\left( \frac{1 + R}{1 - R} \right), which has an approximate normal distribution with mean \frac{1}{2}\ln\left( \frac{1 + \rho}{1 - \rho} \right) and variance 1/(m - 2). Muddapur (1988) proved that the statistic T has an exact t-distribution with m - 1 degrees of freedom, where

    T = \frac{(\upsilon R - \rho S^*)\sqrt{m - 1}}{\zeta \sqrt{(1 - \rho^2)(1 - R^2)}},    (25)

    S^* \sqrt{a_{11} a_{22}} = \sigma_1^2 a_{22} + \sigma_2^2 a_{11} + (a_{22} + a_{11})\, \sigma_1\sigma_2 \sqrt{1 - \rho^2},
    \upsilon = 2\sigma_1\sigma_2 + (\sigma_1^2 + \sigma_2^2)\sqrt{1 - \rho^2}, \qquad \zeta = 2\sigma_1\sigma_2\sqrt{1 - \rho^2} + (\sigma_1^2 + \sigma_2^2).
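Under H_0: \rho = 0 with normal data, the likelihood ratio statistic T = \sqrt{m - 1}\, R/\sqrt{1 - R^2} above should reject at its nominal level. A Monte Carlo sketch, not part of the paper (the seed, sample size, replication count, and the two-sided 5% critical value 2.228 of Student's t with 10 d.f. are illustrative choices):

```python
import math
import random

random.seed(7)
N = 12                  # sample size; d.f. = m - 1 = N - 2 = 10
crit = 2.228            # two-sided 5% point of Student's t with 10 d.f.
reps = 10000
rejections = 0
for _ in range(reps):
    x = [random.gauss(0.0, 1.0) for _ in range(N)]
    y = [random.gauss(0.0, 1.0) for _ in range(N)]   # independent, so rho = 0 (H0 true)
    mx, my = sum(x) / N, sum(y) / N
    sxx = sum((v - mx) ** 2 for v in x)
    syy = sum((v - my) ** 2 for v in y)
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y))
    r = sxy / math.sqrt(sxx * syy)
    t = math.sqrt(N - 2) * r / math.sqrt(1.0 - r * r)  # T = sqrt(m-1) R / sqrt(1-R^2)
    if abs(t) > crit:
        rejections += 1
rate = rejections / reps   # empirical size; should be near 0.05
```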


In particular, if the population variances are the same, \sigma_1^2 = \sigma_2^2, then the t statistic defined as

    T = \frac{(R - \rho S^*)\sqrt{m - 1}}{\sqrt{(1 - \rho^2)(1 - R^2)}}, \qquad S^* = \frac{a_{11} + a_{22}}{2\sqrt{a_{11} a_{22}}},    (26)

has an exact t-distribution with m - 1 d.f. If the population variances are the same, \sigma_1^2 = \sigma_2^2, and the sample variances are the same, i.e., s_1^2 = s_2^2, the above statistic simplifies to

    T = \frac{(R - \rho)\sqrt{m - 1}}{\sqrt{(1 - \rho^2)(1 - R^2)}},    (27)

which has an exact t-distribution with m - 1 d.f. The above statistic was shown to have an approximate t-distribution, without the assumptions \sigma_1^2 = \sigma_2^2 or s_1^2 = s_2^2, by Samiuddin (1970). Muddapur (1988) also noted that the quantity

    f = \frac{(1 + r)(1 - \rho)}{(1 - r)(1 + \rho)}    (28)

has an approximate F-distribution with (m - 1, m - 1) degrees of freedom for any \rho, and an exact F-distribution for \rho = 0.

6 Concluding remarks

We warn that the distribution of R is not necessarily robust for independent observations from an elliptical population. The models for samples considered in Sect. 2 imply that the observations X_j (j = 1, 2, \ldots, N) are uncorrelated but not necessarily independent. The asymptotic distribution of R for independent observations from a bivariate elliptical population was obtained by Muirhead (1982, 157). For the distributions of R in nonelliptical populations, the reader is referred to Johnson et al. (1995) and the references therein. It is conjectured that the distribution of the correlation coefficient may have a nicer form if the following representation is used in the derivation:

    (N - 1) R = \sum_{j=1}^{N} (T_{1j} - \bar{T}_1)(T_{2j} - \bar{T}_2)    (29)

where T_{ij} = (X_{ij} - \mu_i)/S_i (i = 1, 2; j = 1, 2, \ldots, N). The conditional expectation can also be employed on T_{1j} and T_{2j} to obtain possibly better forms for the moments of R.

Acknowledgment The author acknowledges the excellent research support provided by King Fahd University of Petroleum and Minerals, Saudi Arabia through Project FT 2004/22. The author also thanks the referee for constructive suggestions that have helped improve the quality of the paper.

References

Ali MM, Joarder AH (1991) Distribution of the correlation coefficient for the class of bivariate elliptical models. Can J Stat 19:447–452
Anderson TW (2003) An introduction to multivariate statistical analysis. Wiley, New York
Billah MB, Saleh AKME (2000) Performance of the large sample tests in the actual forecasting of the pretest estimators for regression model with t-error. J Appl Stat Sci 9(3):237–251
Fang KT (1990) Generalized multivariate analysis. Springer, Berlin Heidelberg New York
Fang KT, Anderson TW (1990) Statistical inference in elliptically contoured and related distributions. Allerton Press, New York
Fang KT, Kotz S, Ng KW (1990) Symmetric multivariate and related distributions. Chapman and Hall, London
Fisher RA (1915) Frequency distribution of the values of the correlation coefficient in samples from an indefinitely large population. Biometrika 10:507–521
Joarder AH, Ahmed SE (1998) Estimation of the trace of the scale matrix of a class of elliptical distributions. Metrika 48:149–160
Johnson NL, Kotz S, Balakrishnan N (1995) Continuous univariate distributions, vol 2. Wiley, New York
Kibria BMG (2003) Robust predictive inference for the multivariate linear models with elliptically contoured distribution. Far East J Theor Stat 10(1):11–24
Kibria BMG (2004) Conflict among the shrinkage estimators induced by W, LR and LM tests under a Student's t regression model. J Korean Stat Soc 33(4):411–433
Kibria BMG, Haq MS (1999) Predictive inference for the elliptical linear model. J Multivariate Anal 68:235–249
Kibria BMG, Saleh AKME (2004) Preliminary test ridge regression estimators with Student's t errors and conflicting test statistics. Metrika 59(2):105–124
Kotz S, Nadarajah S (2004) Multivariate t distributions and their applications. Cambridge University Press, Cambridge
Lange KL, Little RJA, Taylor JMG (1989) Robust statistical modeling using the t distribution. J Am Stat Assoc 84:881–896
Muddapur MV (1988) A simple test for correlation coefficient in a bivariate normal distribution. Sankhya Ser B 50:50–68
Muirhead RJ (1982) Aspects of multivariate statistical theory. Wiley, New York
Samiuddin M (1970) On a test for an assigned value of correlation in a bivariate normal distribution. Biometrika 57:461–464
Sutradhar BC, Ali MM (1986) Estimation of the parameters of regression model with a multivariate t error variable. Commun Stat Theory Methods 15:429–450
Sutradhar BC, Ali MM (1989) A generalization of the Wishart distribution for the elliptical model and its moments for the multivariate t model. J Multivariate Anal 29(1):155–162
Wishart J (1928) The generalized product moment distribution in samples from a normal multivariate population. Biometrika A20:32–52