Non-linear regression of stable random variables

Clyde D. Hardin Jr.
The Analytic Sciences Corporation, 55 Walkers Brook Drive, Reading, MA 01867

Gennady Samorodnitsky
Cornell University, School of ORIE, Ithaca, NY 14853

Murad S. Taqqu
Boston University, Department of Mathematics, Boston, MA 02215

Appeared in The Annals of Applied Probability, 1 (1991) 582-612

Abstract
Let $(X_1, X_2)$ be an $\alpha$-stable random vector, not necessarily symmetric, with $0 < \alpha < 2$. The paper investigates the regression $E(X_2 \mid X_1 = x)$ for all values of $\alpha$. We give conditions for the existence of the conditional moment $E(|X_2|^p \mid X_1 = x)$ when $p \ge \alpha$, and we obtain an explicit form of the regression $E(X_2 \mid X_1 = x)$ as a function of $x$. Although this regression is, in general, not linear, it can be linear even when the vector $(X_1, X_2)$ is skewed. We give a necessary and sufficient condition for linearity and characterize the asymptotic behavior of the regression as $x \to \infty$. The behavior of the regression functions is also illustrated graphically.

The first author was supported by AFOSR contract F49620 82 C 0009 of the Center for Stochastic Processes at the University of North Carolina. The two other authors were supported by the National Science Foundation grant DMS-88-05627, by the AFOSR grant 89-0115 and the ONR grant 90-J-1287 at Boston University.

AMS 1980 subject classifications. Primary 60E07, 62J02. Secondary 60E10.

Keywords and phrases. Stable random vectors, linear regression, symmetric $\alpha$-stable.
1 Introduction

The stable distributions, according to the central limit theorem, are the only limiting distributions of normalized sums of independent, identically distributed random variables, and perforce include the normal, or Gaussian, distributions as distinguished elements. Gaussian distributions and processes have long been well understood, and their utility as both stochastic modeling constructs and analytical tools is well accepted. The much richer class of non-Gaussian stable distributions and processes is the subject of a great deal of recent research, and holds much promise for use in modeling and analysis as well. Stable distributions are defined in Section 2. They are indexed by a parameter $0 < \alpha \le 2$. The distribution is Gaussian when $\alpha = 2$ and is non-Gaussian when $0 < \alpha < 2$. A good reference for univariate stable distributions is Feller [3] and the more recent monograph of Zolotarev [17]. For multivariate distributions, see Cambanis and Miller [2] and Hardin [5]. The central limit argument often used to justify the use of a Gaussian model in applications may also be applied to support the choice of a non-Gaussian stable model. That is, if the randomness observed is the result of summing many small effects, and those effects follow a heavy-tailed distribution, then a non-Gaussian stable model may be appropriate. An important distinction between Gaussian and non-Gaussian stable distributions is that the stable distributions are heavy-tailed, always with infinite variance, and in some cases with infinite first moment. Another distinction is that they admit asymmetry, or skewness, while a Gaussian distribution is necessarily symmetric about its mean. In certain applications, then, where an asymmetric or heavy-tailed model is called for, a stable model may be a viable candidate.
In any case, the non-Gaussian stable distributions furnish tractable examples of non-Gaussian behavior and provide points of comparison with the Gaussian case, highlighting the special nature of Gaussian distributions and processes. We refer the reader to Cambanis [1], Weron [16] and Zolotarev [17] for surveys of applications. Mandelbrot [11] uses the term "Noah effect" to describe random situations characterized by high variability, where a stable model may be applicable (the biblical figure Noah lived through an unusually severe flood). A fundamental and crucial step towards understanding stable distributions, and eventually employing them in applications, is to recognize their behavior under conditioning. Perhaps the first question to be answered regarding conditional behavior concerns the conditional expectation, or regression, of one stable variable given the observation of others. In the Gaussian case, the bivariate regression is always linear and is determined solely by the first two moments:
\[
E(X_2 \mid X_1 = x) = \mu_2 + \lambda(x - \mu_1),
\]
where $\mu_i$ is the mean of $X_i$ and
\[
\lambda = \frac{\mathrm{Cov}(X_2, X_1)}{\mathrm{Var}\,X_1}.
\]
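As a quick numerical illustration (our own toy example, not part of the paper; the parameter values are arbitrary), the Gaussian slope $\lambda = \mathrm{Cov}(X_2, X_1)/\mathrm{Var}\,X_1$ can be checked by simulation:

```python
import math
import random

random.seed(7)

# Construct a bivariate Gaussian pair X2 = mu2 + slope*(X1 - mu1) + noise,
# so the theoretical regression E(X2 | X1 = x) = mu2 + lambda*(x - mu1)
# has slope lambda = Cov(X2, X1)/Var X1 = 0.6.
mu1, mu2, slope = 1.0, 2.0, 0.6
n = 200_000
xs = [mu1 + random.gauss(0.0, 1.0) for _ in range(n)]
ys = [mu2 + slope * (x - mu1) + random.gauss(0.0, 0.5) for x in xs]

mx = sum(xs) / n
my = sum(ys) / n
var_x = sum((x - mx) ** 2 for x in xs) / n
cov_xy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

lam_hat = cov_xy / var_x  # empirical Cov(X2, X1)/Var X1
print(lam_hat)
```

The empirical slope recovers the value 0.6 used in the construction.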
In the stable case, Kanter [8] shows that the same relation holds for symmetric stable distributions with first moments, where $\lambda$ is defined to be the normalized covariation of $X_2$ on $X_1$, which is the stable analog of the normalized covariance. Also, Samorodnitsky and Taqqu [13] show that the first moment requirement may be relaxed, i.e. that regressions involving variables without first moments can be legitimately defined under appropriate conditions, and are linear. A distinction between Gaussian and stable distributions in the symmetric case is seen, however, in the case of regression on more than one variate. Although general multivariate regressions in the Gaussian case are always linear, the papers of Miller [12], Cambanis and Miller [2], and Hardin [4] show that multivariate regressions involving symmetric stable variates are not always linear, and illustrate some of the complexities involved. This paper gives a complete picture of bivariate regression behavior in the general (possibly asymmetric) stable case. We show that when skewness is present, regressions can be either linear or nonlinear. We determine the form of the regression, when it exists, and give necessary and sufficient conditions for its linearity. The sometimes exotic behavior of the regression functions in the nonlinear case is illustrated graphically. Interestingly, these regressions are always asymptotically linear. We make no moment assumptions on the stable variates, assuming only a weaker condition assuring that the regression is defined. Since we want to study $E(X_2 \mid X_1 = x)$ for all $0 < \alpha < 2$, we must determine when the conditional expectation is defined. It is always defined for $\alpha > 1$ because the mean exists in that case. We show in Section 2 that when $\alpha \le 1$, the condition for existence of the conditional expectation, given in [13] for the symmetric stable case, is also sufficient here.
Therefore the applications to moving averages, sub-Gaussian, sub-stable and harmonizable processes, discussed in [13], apply under exactly the same conditions when the vector $(X_1, X_2)$ is skewed. The regression, however, will typically be non-linear. Explicit formulae for the regression involve the ratio of two integrals (see (3.9)), neither of which can be computed analytically. (The integral in the denominator is proportional to the univariate stable density function.) To make matters worse, these integrals have features which make them not amenable to standard numerical integration software packages. In the past, the lack of usable formulae has been an impediment to the application of stable distributions to real-life phenomena. In Hardin, Samorodnitsky and Taqqu [6], we develop efficient algorithms for computing the regression formulae and we present them in a form useful to practitioners. These numerical procedures can also be used to evaluate other integrals which appear in the context of stable distributions. The paper Hardin, Samorodnitsky and Taqqu [6], moreover, lists the complete source code, written in the C language, for computing stable density functions and regression functions. That code is used in this paper to obtain the figures of stable density functions and regression functions displayed. The paper is structured as follows. Basic definitions and conditions for the finiteness of the conditional moment $E(|X_2|^p \mid X_1 = x)$, $p \ge \alpha$, are given in Section 2. Explicit formulae for $E(X_2 \mid X_1 = x)$ are given in Section 3 and established in Section 4. Section 5 displays graphs of various regression functions.
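The "Noah effect" and the heavy tails described above are easy to observe numerically. The following sketch simulates symmetric $\alpha$-stable variates by the Chambers–Mallows–Stuck method — a standard device that is not part of this paper; the sample size, seed and threshold are arbitrary choices of ours:

```python
import math
import random

def sas_sample(alpha, n, seed=0):
    """Symmetric alpha-stable variates (alpha != 1) via the standard
    Chambers-Mallows-Stuck construction (not from this paper)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = rng.uniform(-math.pi / 2.0, math.pi / 2.0)  # uniform angle
        w = rng.expovariate(1.0)                         # unit exponential
        x = (math.sin(alpha * u) / math.cos(u) ** (1.0 / alpha)
             * (math.cos((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha))
        out.append(x)
    return out

sample = sas_sample(1.5, 20000)
big = sum(1 for v in sample if abs(v) > 10.0)  # heavy-tail excursions
print(big)
```

For $\alpha = 1.5$ a standard Gaussian sample of this size would essentially never exceed 10 in absolute value, while the stable sample does so hundreds of times.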
2 Definitions and existence of conditional expectations

A random variable $X$ is said to have a stable distribution if for any $A > 0$, $B > 0$ there exist $C > 0$ and $D \in \mathbb{R}^1$ such that
\[
AX^{(1)} + BX^{(2)} \stackrel{d}{=} CX + D,
\]
where $X^{(1)}$ and $X^{(2)}$ are independent copies of $X$. Necessarily, $C = (A^\alpha + B^\alpha)^{1/\alpha}$ for some $0 < \alpha \le 2$. The characteristic function of $X$, $\phi_X(t) = E\exp\{itX\}$, has the form
\[
\phi_X(t) = \begin{cases}
\exp\{-\sigma^\alpha|t|^\alpha + ia\sigma^\alpha\beta\,t^{<\alpha>} + i\mu t\} & \text{if } \alpha \ne 1,\\[2pt]
\exp\{-\sigma|t| - i\frac{2}{\pi}\sigma\beta\,t\ln|t| + i\mu t\} & \text{if } \alpha = 1,
\end{cases} \tag{2.1}
\]
and the tails of the distribution satisfy
\[
P(|X| > \lambda) \sim \mathrm{const}\,\lambda^{-\alpha} \quad \text{as } \lambda \to \infty. \tag{2.2}
\]
In (2.1), $\sigma \ge 0$, $a = \tan\frac{\pi\alpha}{2}$, $\beta \in [-1,1]$, $\mu \in \mathbb{R}$, and $u^{<v>} = |u|^v\,\mathrm{sign}\,u$ for any reals $u$ and $v$. Thus, in addition to the index of stability $\alpha \in (0,2]$, the distribution of $X$ is characterized by a scale parameter $\sigma \ge 0$, a skewness parameter $\beta \in [-1,1]$ and a shift parameter $\mu \in \mathbb{R}$. When $\alpha = 2$, $\phi_X(t)$ reduces to $\exp\{-\sigma^2 t^2 + i\mu t\}$, the characteristic function of a Gaussian random variable with mean $\mu$ and standard deviation $\sigma\sqrt{2}$. The random variable $X$ is Cauchy when $\alpha = 1$. Figures 2 and 3 illustrate the shape of various density functions of $X$ when $0 < \alpha < 2$. These density functions are not generally known in closed form. Their tails are much fatter than the Gaussian tails, and the lower the $\alpha$, the slower the decay of the probability tails. Because of (2.2), no $\alpha$-stable random variable has a finite second moment when $\alpha < 2$, and even the first moment does not exist when $\alpha \le 1$.

The regression problem involves a pair of random variables $X_1$ and $X_2$. The definition of the univariate stable distribution generalizes readily to more than one variable. A random vector $\mathbf{X} = (X_1, X_2)$ is said to have a stable distribution if for any scalars $A > 0$, $B > 0$ there exist $C > 0$ and $\mathbf{D} \in \mathbb{R}^2$ such that
\[
A\mathbf{X}^{(1)} + B\mathbf{X}^{(2)} \stackrel{d}{=} C\mathbf{X} + \mathbf{D},
\]
where $\mathbf{X}^{(1)}$ and $\mathbf{X}^{(2)}$ are independent copies of $\mathbf{X}$. Again, $C = (A^\alpha + B^\alpha)^{1/\alpha}$ for some $0 < \alpha \le 2$. The vector $\mathbf{X}$ is called $\alpha$-stable. The vector $\mathbf{X}$ is called symmetric stable (S$\alpha$S) if it is $\alpha$-stable and $\mathbf{X} \stackrel{d}{=} -\mathbf{X}$. Every symmetric $\alpha$-stable vector $\mathbf{X}$ has characteristic function $\Phi_{\mathbf{X}}(t,r) = E\exp\{i(tX_1 + rX_2)\}$ of the form
\[
\exp\Big\{-\int_{S_2}|ts_1 + rs_2|^\alpha\,\Gamma(ds)\Big\} \tag{2.3}
\]
where $\Gamma$ is a finite symmetric measure on the Borel subsets of the unit circle $S_2$ in $\mathbb{R}^2$. Changing $\Gamma$ changes the characteristic function (2.3) of $(X_1, X_2)$. When $\alpha = 2$, the characteristic function reduces to the usual Gaussian one, since (2.3) equals
\[
\exp\Big\{-\Big[t^2\int_{S_2}s_1^2\,\Gamma(ds) + 2tr\int_{S_2}s_1 s_2\,\Gamma(ds) + r^2\int_{S_2}s_2^2\,\Gamma(ds)\Big]\Big\},
\]
so that $\mathrm{Var}\,X_1 = 2\int_{S_2}s_1^2\,\Gamma(ds)$, $\mathrm{Var}\,X_2 = 2\int_{S_2}s_2^2\,\Gamma(ds)$, and $\mathrm{Cov}(X_1, X_2) = 2\int_{S_2}s_1 s_2\,\Gamma(ds)$. For convenience, unless stated otherwise, we suppose from now on $0 < \alpha < 2$, that is, we exclude the Gaussian case $\alpha = 2$. An $\alpha$-stable random vector $\mathbf{X} = (X_1, X_2)$ with $0 < \alpha < 2$ is not necessarily symmetric. The general form of its characteristic function is
\[
\Phi_{\mathbf{X}}(t,r) = \exp\Big\{-\int_{S_2}|ts_1+rs_2|^\alpha\big(1 - ia\,\mathrm{sign}(ts_1+rs_2)\big)\,\Gamma(ds) + i(t\mu_1^0 + r\mu_2^0)\Big\}
\]
if $\alpha \ne 1$, and
\[
\Phi_{\mathbf{X}}(t,r) = \exp\Big\{-\int_{S_2}|ts_1+rs_2|\Big(1 + i\frac{2}{\pi}\,\mathrm{sign}(ts_1+rs_2)\ln|ts_1+rs_2|\Big)\Gamma(ds) + i(t\mu_1^0 + r\mu_2^0)\Big\}
\]
if $\alpha = 1$. Here $a = \tan\frac{\pi\alpha}{2}$, $\Gamma$ is as before but not necessarily symmetric, and $\mu^0 = (\mu_1^0, \mu_2^0)$ is a vector in $\mathbb{R}^2$. The measure $\Gamma$ is called the spectral measure and the pair $(\Gamma, \mu^0)$ is called the spectral representation of the random vector $\mathbf{X}$. The spectral representation is unique when $0 < \alpha < 2$ (see Kuelbs [9], for example, for details). The components $X_1$ and $X_2$ of $\mathbf{X}$ have marginal $\alpha$-stable distributions. In fact, $X_1$ has characteristic function (2.1) with $\sigma = \sigma_1$, $\beta = \beta_1$ and $\mu = \mu_1$, where
\[
\sigma_1 = \Big(\int_{S_2}|s_1|^\alpha\,\Gamma(ds)\Big)^{1/\alpha} \tag{2.4}
\]
is the scale parameter of $X_1$,
\[
\beta_1 = \frac{1}{\sigma_1^\alpha}\int_{S_2}s_1^{<\alpha>}\,\Gamma(ds) \in [-1,1] \tag{2.5}
\]
is the skewness parameter of $X_1$, and
\[
\mu_1 = \begin{cases}\mu_1^0 & \text{if } \alpha \ne 1,\\ \mu_1^0 - \frac{2}{\pi}\int_{S_2}s_1\ln|s_1|\,\Gamma(ds) & \text{if } \alpha = 1,\end{cases} \tag{2.6}
\]
is the shift parameter of $X_1$. Kanter [8] proved that if $(X_1, X_2)$ is S$\alpha$S and $\alpha > 1$, then the regression $E(X_2 \mid X_1 = x)$ is linear and, for almost every $x$,
\[
E(X_2 \mid X_1 = x) = \lambda x, \tag{2.7}
\]
where
\[
\lambda = \frac{1}{\sigma_1^\alpha}\int_{S_2}s_2 s_1^{<\alpha-1>}\,\Gamma(ds).
\]
The constant $\int_{S_2}s_2 s_1^{<\alpha-1>}\,\Gamma(ds)$ is called the covariation of $X_2$ on $X_1$ and is often denoted $[X_2, X_1]_\alpha$. Samorodnitsky and Taqqu [13] showed that (2.7) may still hold when $(X_1, X_2)$ is a S$\alpha$S vector with $\alpha \le 1$. The purpose of this paper is to obtain the regression $E(X_2 \mid X_1 = x)$ when $(X_1, X_2)$ is a general, possibly skewed $\alpha$-stable random vector with $0 < \alpha < 2$. Since we want to study $E(X_2 \mid X_1 = x)$ for all $0 < \alpha < 2$, we must determine when the conditional expectation is defined. The following theorem was proved in [13] in the symmetric $\alpha$-stable case. We extend it here to an arbitrary $\alpha$-stable vector $\mathbf{X}$. It provides conditions for the existence of conditional moments of order greater than or equal to $\alpha$.

Theorem 2.1 Let $\mathbf{X} = (X_1, X_2)$ have representation $(\Gamma, \mu^0)$ and suppose that
\[
\int_{S_2}\frac{\Gamma(ds)}{|s_1|^\nu} < \infty \tag{2.8}
\]
for some $\nu \ge 0$. Then $E(|X_2|^p \mid X_1 = x) < \infty$ for almost every $x$, whenever
\[
p < \begin{cases}\alpha + \nu & \text{if } \nu < 1,\\ \alpha + 1 & \text{if } \nu \ge 1,\end{cases}
\]
when $0 < \alpha < 1$, and
\[
p\;\begin{cases}< \alpha + \nu & \text{if } \nu < 2-\alpha,\\ \le 2 & \text{if } \nu \ge 2-\alpha,\end{cases}
\]
when $1 \le \alpha < 2$. In the latter case, if $\nu \ge 2-\alpha$, then $E(X_2^2 \mid X_1 = x) < \infty$ for a.e. $x$.
Proof: Let $\nu$ and $p$ be as in the theorem and let $(Y_1, Y_2)$ be an independent copy of $(X_1, X_2)$. Then $(Z_1, Z_2) = (X_1, X_2) - (Y_1, Y_2)$ is a symmetric $\alpha$-stable vector with spectral measure $\tilde\Gamma$, defined by $\tilde\Gamma(A) = \Gamma(A) + \Gamma(-A)$ for every Borel set $A$ of $S_2$. Since $\int_{S_2}|s_1|^{-\nu}\,\tilde\Gamma(ds) = 2\int_{S_2}|s_1|^{-\nu}\,\Gamma(ds) < \infty$, we have
\[
E\big(E(|Z_2|^p \mid X_1, Y_1, Y_2)\mid Z_1\big) = E(|Z_2|^p \mid Z_1) < \infty \quad \text{a.s.}
\]
by [13], and consequently $E(|Z_2|^p \mid X_1, Y_1, Y_2) < \infty$ a.s. by Fubini's theorem. Since $(Y_1, Y_2)$ is independent of $(X_1, X_2)$,
\[
E(|X_2|^p \mid X_1) = E(|X_2 - Y_2 + Y_2|^p \mid X_1, Y_1, Y_2)
\le 2^p\big(E(|X_2 - Y_2|^p \mid X_1, Y_1, Y_2) + E(|Y_2|^p \mid X_1, Y_1, Y_2)\big)
= 2^p\big(E(|Z_2|^p \mid X_1, Y_1, Y_2) + |Y_2|^p\big)
\]
is a.s. finite. $\Box$

Remarks. "For almost every $x$" means for all $x$ in the support of the probability density function of $X_1$. (When $\alpha < 1$ and $\beta_1 = 1$, for example, the support of the probability density function of $X_1$ is the non-negative real line.) Relation (2.8) is always satisfied if $\nu = 0$, because $\Gamma$ is a finite measure. In this case Theorem 2.1 merely tells us what we already know, that the (conditional) moment of order $p$ exists for $p < \alpha$. If relation (2.8) is satisfied for some $\nu > 0$, then the conditional moment $E(|X_2|^p \mid X_1 = x)$ may exist with $p \ge \alpha$, even though the unconditional moment $E|X_2|^p$ is infinite. For an alternate expression of relation (2.8), see Proposition 3.1 below. To understand the significance of condition (2.8), consider the following extreme situation, where $\Gamma$ concentrates its mass on the points $(0,1)$ and $(0,-1)$ of the unit circle $S_2$. Then $X_1$ is constant, $E(|X_2|^p \mid X_1 = x) = E|X_2|^p$ a.s., and hence the conditional moment is finite if and only if $p < \alpha$. Theorem 2.1 states that, up to some limit, the lower the density of $\Gamma$ around the points $(0,1)$ and $(0,-1)$, the higher the admissible conditional moments. Refer to Samorodnitsky and Taqqu [13] for more details.
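The moment conditions of Theorem 2.1 are easy to exercise for a discrete spectral measure. The helper below is our own sketch (the function name and point-mass representation are ours, not the paper's):

```python
def moment_bound(atoms, alpha, nu):
    """Theorem 2.1 for a discrete Gamma = [(s1, s2, mass), ...]:
    return b such that E(|X2|^p | X1 = x) < infinity for p < b,
    or None when condition (2.8) fails for this nu."""
    # (2.8): integral of |s1|^{-nu} dGamma; with point masses this is
    # infinite exactly when some atom sits on s1 = 0 and nu > 0.
    if nu > 0 and any(s1 == 0.0 for s1, _s2, _g in atoms):
        return None
    if alpha < 1.0:
        return alpha + min(nu, 1.0)
    # 1 <= alpha < 2; for nu >= 2 - alpha even p = 2 is admissible
    return alpha + nu if nu < 2.0 - alpha else 2.0

# Gamma concentrated on (0, 1) and (0, -1): X1 degenerate, (2.8) fails
degenerate = [(0.0, 1.0, 1.0), (0.0, -1.0, 1.0)]
print(moment_bound(degenerate, 0.7, 0.5),
      moment_bound([(0.6, 0.8, 1.0)], 0.7, 2.0),
      moment_bound([(0.6, 0.8, 1.0)], 1.5, 0.4))
```

The degenerate example reproduces the extreme situation discussed in the remarks above.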
3 Analytic representations of the non-linear regression functions

Let $(X_1, X_2)$ be $\alpha$-stable, $0 < \alpha < 2$, with spectral representation $(\Gamma, \mu^0)$. In order to study the regression $E(X_2 \mid X_1 = x)$, we must ensure that it is well-defined. Clearly,
\[
\alpha > 1 \;\Rightarrow\; E|X_2| < \infty \;\Rightarrow\; E(|X_2| \mid X_1 = x) < \infty \ \text{for a.e. } x.
\]
When $\alpha \le 1$, we make the following:
Standard Assumption. If $(X_1, X_2)$ is $\alpha$-stable with spectral measure $\Gamma$ and $\alpha \le 1$, then there is a number $\nu > 1 - \alpha$ such that
\[
\int_{S_2}\frac{\Gamma(ds)}{|s_1|^\nu} < \infty. \tag{3.1}
\]
Theorem 2.1 then ensures that $E(|X_2| \mid X_1 = x) < \infty$ for a.e. $x$. Observe that the choice $\nu > 0$ is adequate when $\alpha = 1$ and the choice $\nu = 1$ is adequate for all $\alpha \le 1$. Since we are interested in the regression $E(X_2 \mid X_1 = x)$ as a function of $x$, we assume $\sigma_1 > 0$, because $\sigma_1 = 0$ implies that $X_1$ is degenerate and hence $E(X_2 \mid X_1) = EX_2$. Because of (2.4), $\sigma_1 > 0$ is equivalent to
\[
\Gamma\big(S_2 \setminus \{(0,1) \cup (0,-1)\}\big) > 0. \tag{3.2}
\]
We also assume, without loss of generality, $\mu^0 = (\mu_1^0, \mu_2^0) = 0$, i.e. that $(X_1, X_2)$ has representation $(\Gamma, 0)$, because if $\mu^0 \ne 0$, then setting $X_1 = \tilde X_1 + \mu_1^0$ and $X_2 = \tilde X_2 + \mu_2^0$ yields $E(X_2 \mid X_1 = x) = E(\tilde X_2 \mid \tilde X_1 = x - \mu_1^0) + \mu_2^0$, with $(\tilde X_1, \tilde X_2)$ having representation $(\Gamma, 0)$. When $\mu^0 = 0$, the density function of $X_1$, $f_{X_1}(x) = \frac{1}{2\pi}\int_{-\infty}^{+\infty}e^{-itx}\phi_{X_1}(t)\,dt$, equals
\[
f_{X_1}(x) = \frac{1}{2\pi}\int_{-\infty}^{+\infty}e^{-itx}\exp\{-\sigma_1^\alpha|t|^\alpha + ia\sigma_1^\alpha\beta_1 t^{<\alpha>}\}\,dt
= \frac{1}{\pi}\int_0^\infty e^{-\sigma_1^\alpha t^\alpha}\cos\big(tx - a\sigma_1^\alpha\beta_1 t^\alpha\big)\,dt \tag{3.3}
\]
if $\alpha \ne 1$, and
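Formulae of this kind lend themselves to direct, if naive, numerical evaluation. The sketch below uses a plain trapezoidal rule (our own choice; it makes no claim to the efficiency of the algorithms in [6]); as a check, the formula for $\alpha \ne 1$ evaluated at $\alpha = 2$ reproduces the Gaussian density, and the $\alpha = 1$ formula with $\beta_1 = 0$ the Cauchy density:

```python
import math

def marginal_density(x, alpha, sigma=1.0, beta=0.0, mu=0.0, T=60.0, n=60000):
    """f_{X1}(x) via the one-sided cosine integral representation
    (alpha != 1 and alpha = 1 branches), plain trapezoidal rule."""
    a = math.tan(math.pi * alpha / 2.0)
    h = T / n
    total = 0.0
    for k in range(n + 1):
        t = k * h
        if t == 0.0:
            g = 1.0  # integrand value at t = 0 (cos 0 = 1)
        elif alpha != 1.0:
            g = (math.exp(-sigma**alpha * t**alpha)
                 * math.cos(t * x - a * sigma**alpha * beta * t**alpha))
        else:
            g = (math.exp(-sigma * t)
                 * math.cos(t * (x - mu)
                            + (2.0 / math.pi) * sigma * beta * t * math.log(t)))
        total += (0.5 if k in (0, n) else 1.0) * g
    return h * total / math.pi

# Cauchy check: alpha = 1, beta = 0 gives sigma / (pi (sigma^2 + x^2))
print(marginal_density(0.5, 1.0), 1.0 / (math.pi * 1.25))
```

The same routine at $\alpha = 2$, $x = 0$ returns $1/(2\sqrt{\pi})$, the value of the Gaussian density with standard deviation $\sqrt{2}$.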
\[
f_{X_1}(x) = \frac{1}{2\pi}\int_{-\infty}^{+\infty}e^{-itx}\exp\Big\{-\sigma_1|t| - i\frac{2}{\pi}\sigma_1\beta_1 t\ln|t| + i\mu_1 t\Big\}\,dt
= \frac{1}{\pi}\int_0^\infty e^{-\sigma_1 t}\cos\Big(t(x-\mu_1) + \frac{2}{\pi}\sigma_1\beta_1 t\ln t\Big)\,dt \tag{3.4}
\]
if $\alpha = 1$, where $\mu_1 = -\frac{2}{\pi}\int_{S_2}s_1\ln|s_1|\,\Gamma(ds)$. The following theorem provides an explicit formula for the regression in the case $\alpha \ne 1$.

Theorem 3.1 (Case $\alpha \ne 1$). Let $(X_1, X_2)$ be $\alpha$-stable, $\alpha \ne 1$, with spectral representation $(\Gamma, 0)$. If $0 < \alpha < 1$, let $(X_1, X_2)$ satisfy the Standard Assumption. Then, for almost every $x$,
\[
E(X_2 \mid X_1 = x) = \lambda x + \frac{a(\kappa - \lambda\beta_1)}{1 + a^2\beta_1^2}\Big[a\beta_1 x + \frac{1 - xH(x)}{\pi f_{X_1}(x)}\Big] \tag{3.5}
\]
where
\[
H(x) = \int_0^\infty e^{-\sigma_1^\alpha t^\alpha}\sin\big(tx - a\sigma_1^\alpha\beta_1 t^\alpha\big)\,dt, \tag{3.6}
\]
\[
\lambda = \frac{[X_2, X_1]_\alpha}{\sigma_1^\alpha} = \frac{\int_{S_2}s_2 s_1^{<\alpha-1>}\,\Gamma(ds)}{\sigma_1^\alpha}, \tag{3.7}
\]
\[
\kappa = \frac{\int_{S_2}s_2|s_1|^{\alpha-1}\,\Gamma(ds)}{\sigma_1^\alpha}, \tag{3.8}
\]
and where $a = \tan\frac{\pi\alpha}{2}$, and $\sigma_1$, $\beta_1$ and $f_{X_1}$ are respectively the scale parameter (2.4), the skewness parameter (2.5) and the probability density function (3.3) of the random variable $X_1$. If $\alpha < 1$ and $\beta_1 = 1$, relation (3.5) makes sense only for $x \ge 0$, and if $\alpha < 1$ and $\beta_1 = -1$, it makes sense only for $x \le 0$.
Remarks 1. To understand the reason for the last statement in the theorem, recall that when $\alpha < 1$ and $\beta_1 = 1$, the random variable $X_1$ is totally skewed to the right, and when $\alpha < 1$ and $\beta_1 = -1$, $X_1$ is totally skewed to the left. The density function $f_{X_1}(x)$ vanishes for $x < 0$ when $\beta_1 = 1$ and it vanishes for $x > 0$ when $\beta_1 = -1$. Therefore conditioning with respect to $X_1 = x$ makes no sense when $x < 0$ if $\beta_1 = 1$ or when $x > 0$ if $\beta_1 = -1$. When either $\alpha < 1$, $\beta_1 \ne \pm 1$, or $\alpha \ge 1$, the support of the density $f_{X_1}(x)$ is the whole real line.
2. As can easily be seen from the proof of the theorem, the following expression is equivalent to (3.5):
\[
E(X_2 \mid X_1 = x) = \lambda x + a(\kappa - \lambda\beta_1)\,\alpha\sigma_1^\alpha\,
\frac{\int_0^\infty e^{-\sigma_1^\alpha t^\alpha}\,t^{\alpha-1}\cos\big(xt - a\sigma_1^\alpha\beta_1 t^\alpha\big)\,dt}
{\int_0^\infty e^{-\sigma_1^\alpha t^\alpha}\cos\big(xt - a\sigma_1^\alpha\beta_1 t^\alpha\big)\,dt}. \tag{3.9}
\]
Relation (3.9) is particularly useful for evaluating $E(X_2 \mid X_1 = x)$ numerically.
3. The constants $\lambda$ and $\kappa$ in the theorem are finite when $\alpha \le 1$, because, by (3.1),
\[
\Big|\int_{S_2}s_2|s_1|^{\alpha-1}\,\Gamma(ds)\Big| \le \int_{S_2}|s_2|\,|s_1|^{\alpha-1+\nu}\,\frac{\Gamma(ds)}{|s_1|^\nu} \le \int_{S_2}\frac{\Gamma(ds)}{|s_1|^\nu} < \infty,
\]
since $|s_1| \le 1$, $|s_2| \le 1$ and $\alpha - 1 + \nu > 0$.
4. If $\mathbf{X}$ is symmetric $\alpha$-stable, $\alpha \ne 1$, we have $\Gamma$ symmetric, $\beta_1 = 0$ and $\kappa = 0$, and hence $E(X_2 \mid X_1 = x) = \lambda x$ for a.e. $x$. We thus recover the result of [8] in the case $\alpha > 1$ and that of [13] in the case $\alpha \le 1$.
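As Remark 2 suggests, (3.9) can be evaluated numerically. The sketch below uses plain trapezoidal quadrature and is restricted to $1 < \alpha < 2$, where the integrands stay bounded near $t = 0$; the function name, grid sizes and parameter values are ours:

```python
import math

def regression_eq39(x, alpha, sigma1, beta1, lam, kap, T=60.0, n=120000):
    """E(X2 | X1 = x) via the ratio formula (3.9); sketch for 1 < alpha < 2."""
    a = math.tan(math.pi * alpha / 2.0)
    h = T / n
    num = den = 0.0
    for k in range(n + 1):
        t = k * h
        w = (0.5 if k in (0, n) else 1.0) * h
        damp = math.exp(-sigma1**alpha * t**alpha)
        phase = x * t - a * sigma1**alpha * beta1 * t**alpha
        if t > 0.0:  # t^{alpha-1} -> 0 as t -> 0 when alpha > 1
            num += w * damp * t**(alpha - 1.0) * math.cos(phase)
        den += w * damp * math.cos(phase)
    return lam * x + a * (kap - lam * beta1) * alpha * sigma1**alpha * num / den

# When kappa = lambda * beta1 the ratio term drops out and the result is lam*x
print(regression_eq39(1.3, 1.5, 1.0, 0.4, 0.7, 0.7 * 0.4))
```

At $x = 0$ with $\beta_1 = 0$ the two integrals have the closed forms $\frac{1}{\alpha}$ and $\Gamma(1 + \frac{1}{\alpha})$ (after substituting $u = t^\alpha$), which gives a convenient accuracy check.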
5. If $X_1$ is (marginally) symmetric, then $\beta_1 = 0$ and
\[
E(X_2 \mid X_1 = x) = \lambda x + \kappa\Big(\tan\frac{\pi\alpha}{2}\Big)\frac{1}{\pi f_{X_1}(x)}\Big[1 - x\int_0^\infty e^{-\sigma_1^\alpha t^\alpha}\sin tx\,dt\Big].
\]
We now turn to the case $\alpha = 1$.

Theorem 3.2 (Case $\alpha = 1$). Let $(X_1, X_2)$ be $\alpha$-stable with $\alpha = 1$ and spectral representation $(\Gamma, 0)$ satisfying the Standard Assumption. Then, for almost every $x$,
\[
E(X_2 \mid X_1 = x) = -\frac{2\sigma_1}{\pi}k_0 + \lambda(x-\mu_1) + \frac{\kappa - \lambda\beta_1}{\beta_1}\Big[(x-\mu_1) - \frac{\sigma_1 U(x)}{\pi f_{X_1}(x)}\Big] \tag{3.10}
\]
if $\beta_1 \ne 0$, and
\[
E(X_2 \mid X_1 = x) = -\frac{2\sigma_1}{\pi}k_0 + \lambda(x-\mu_1) - \frac{2\sigma_1\kappa}{\pi}\,\frac{V(x)}{\pi f_{X_1}(x)} \tag{3.11}
\]
if $\beta_1 = 0$. Here,
\[
U(x) = \int_0^\infty e^{-\sigma_1 t}\sin\Big(t(x-\mu_1) + \frac{2}{\pi}\sigma_1\beta_1 t\ln t\Big)\,dt, \tag{3.12}
\]
\[
V(x) = \int_0^\infty e^{-\sigma_1 t}(1+\ln t)\cos\big(t(x-\mu_1)\big)\,dt, \tag{3.13}
\]
\[
k_0 = \frac{1}{\sigma_1}\int_{S_2}s_2\ln|s_1|\,\Gamma(ds),
\]
\[
\lambda = \frac{[X_2, X_1]_1}{\sigma_1} = \frac{1}{\sigma_1}\int_{S_2}s_2 s_1^{<0>}\,\Gamma(ds) = \frac{1}{\sigma_1}\int_{S_2}s_2\,\mathrm{sign}\,s_1\,\Gamma(ds),
\]
\[
\kappa = \frac{1}{\sigma_1}\int_{S_2}s_2\,\Gamma(ds)
\]
and
\[
\mu_1 = -\frac{2}{\pi}\int_{S_2}s_1\ln|s_1|\,\Gamma(ds). \tag{3.14}
\]
Remarks. 1. The shift parameter $\mu_1$ also appears in the expression (3.4) of $f_{X_1}(x)$.
2. The constants $\lambda$ and $\kappa$ are defined as in Theorem 3.1. When $\alpha = 1$, the constant $\kappa$ is proportional to the skewness parameter of $X_2$.
3. If $\beta_1 \ne 0$,
\[
\frac{U(x)}{\pi f_{X_1}(x)} = -\frac{\mathrm{Im}\int_0^\infty e^{-itx}\phi_{X_1}(t)\,dt}{\mathrm{Re}\int_0^\infty e^{-itx}\phi_{X_1}(t)\,dt}.
\]
4. If $\mathbf{X} = (X_1, X_2)$ is symmetric, then $\Gamma$ is symmetric and $\beta_1 = \mu_1 = k_0 = \kappa = 0$. Hence by (3.11), $E(X_2 \mid X_1 = x) = \lambda x$ for a.e. $x$, as established in [13].

The following corollary shows that the regression is linear when $X_1$ is totally skewed to the right ($\beta_1 = 1$) or when it is totally skewed to the left ($\beta_1 = -1$).

Corollary 3.1 (Case $\beta_1 = \pm 1$). Suppose that the conditions of Theorem 3.1 or 3.2 hold. If $\beta_1 = \pm 1$, then for almost every $x$,
\[
E(X_2 \mid X_1 = x) = \begin{cases}\lambda x & \text{if } \alpha \ne 1,\\[2pt] -\frac{2\sigma_1}{\pi}k_0 - \lambda\mu_1 + \lambda x & \text{if } \alpha = 1.\end{cases} \tag{3.15}
\]
In the case $\alpha < 1$, this relation makes sense only for $x \ge 0$ when $\beta_1 = 1$, and for $x \le 0$ when $\beta_1 = -1$.
Proof: $\beta_1 = 1$ implies $s_1 \ge 0$ a.e. $[\Gamma]$ by (2.4) and (2.5), and therefore $\kappa = \lambda$ by (3.7) and (3.8). Similarly, $\beta_1 = -1$ implies $s_1 \le 0$ a.e. $[\Gamma]$ and $\kappa = -\lambda$. In both cases, $\kappa - \lambda\beta_1 = 0$, and the corollary follows from Theorems 3.1 and 3.2. $\Box$

The next result shows that the regression $E(X_2 \mid X_1 = x)$ is always asymptotically linear as $x \to \pm\infty$. We know from Corollary 3.1 that the regression is linear when $\beta_1 = \pm 1$. For other values of $\beta_1$, one has:

Corollary 3.2 (Asymptotic relations). Let $(X_1, X_2)$ be $\alpha$-stable, $0 < \alpha < 2$, with spectral representation $(\Gamma, 0)$ and suppose that the Standard Assumption holds if $0 < \alpha \le 1$. Then for $\beta_1 \ne \pm 1$,
\[
E(X_2 \mid X_1 = x) \sim \frac{\lambda + \kappa}{1 + \beta_1}\,x, \qquad x \to \infty, \tag{3.16}
\]
and
\[
E(X_2 \mid X_1 = x) \sim \frac{\lambda - \kappa}{1 - \beta_1}\,x, \qquad x \to -\infty. \tag{3.17}
\]
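The two asymptotic slopes can be tabulated directly (a trivial helper of our own naming):

```python
def asymptotic_slopes(lam, kap, beta1):
    """Slopes of E(X2 | X1 = x) as x -> +infinity and x -> -infinity,
    per (3.16) and (3.17); requires beta1 != +1 and beta1 != -1."""
    return (lam + kap) / (1.0 + beta1), (lam - kap) / (1.0 - beta1)

s_plus, s_minus = asymptotic_slopes(0.7, 0.7 * 0.4, 0.4)  # kappa = lambda*beta1
t_plus, t_minus = asymptotic_slopes(0.7, -0.2, 0.4)       # kappa != lambda*beta1
print(s_plus, s_minus, t_plus, t_minus)
```

A short calculation shows the two slopes coincide exactly when $\kappa = \lambda\beta_1$ (both then equal $\lambda$), and differ otherwise.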
The following corollary gives a necessary and sufficient condition for linearity of the regression.

Corollary 3.3 (Linearity). Let $(X_1, X_2)$ be $\alpha$-stable, $0 < \alpha < 2$, with spectral representation $(\Gamma, 0)$ and suppose that the Standard Assumption holds if $0 < \alpha \le 1$. Then the regression $E(X_2 \mid X_1 = x)$ is linear if and only if
\[
\kappa = \lambda\beta_1. \tag{3.18}
\]
If $\kappa = \lambda\beta_1$, then $E(X_2 \mid X_1 = x)$ is given by (3.15) for a.e. $x$.
Examples. 1. The regression can be linear even though $(X_1, X_2)$ is not symmetric. For example, suppose $\alpha > 1$ and let $\mathbf{X} = (X_1, X_2)$ have spectral measure
\[
\Gamma = \delta\Big(\Big(\tfrac{1}{\sqrt 2}, \tfrac{1}{\sqrt 2}\Big)\Big) + \delta\Big(\Big(-\tfrac{1}{\sqrt 2}, -\tfrac{1}{\sqrt 2}\Big)\Big) + \delta\big((0, -1)\big),
\]
where $\delta((x_1, x_2))$ puts a unit mass at the point $(x_1, x_2)$. The vector $\mathbf{X}$ is not symmetric because $\Gamma$ is not symmetric. However, $\beta_1 = 0$ ($X_1$ is S$\alpha$S) and $\kappa = 0$. Since (3.18) is satisfied in this case, the regression is linear. The slope is $\lambda = 1$ because both $\sigma_1^\alpha$ and the covariation $\int_{S_2}s_2 s_1^{<\alpha-1>}\,\Gamma(ds)$ equal $2(\frac{1}{\sqrt 2})^\alpha$.
2. The regression may be non-linear even though both components $X_1$ and $X_2$ are S$\alpha$S. For example, let $\alpha > 1$ and let $\mathbf{X} = (X_1, X_2)$ have spectral measure
\[
\Gamma = \delta\big((1, 0)\big) + \delta\big((0, 1)\big) + 2^{\alpha/2}\,\delta\Big(\Big(-\tfrac{1}{\sqrt 2}, -\tfrac{1}{\sqrt 2}\Big)\Big).
\]
In this case, $\mathbf{X}$ is not symmetric, $X_1$ and $X_2$ are S$\alpha$S, and the regression is not linear because $\beta_1 = 0$ but $\kappa\sigma_1^\alpha = \int_{S_2}s_2|s_1|^{\alpha-1}\,\Gamma(ds) = -2^{\alpha/2}\big(\tfrac{1}{\sqrt 2}\big)^{\alpha} = -1 \ne 0$.
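Both examples can be verified mechanically. The helper below is our own code (valid for $\alpha > 1$, where atoms at $s_1 = 0$ contribute nothing to $\lambda$ and $\kappa$); it computes $\sigma_1^\alpha$, $\beta_1$, $\lambda$ and $\kappa$ from point masses:

```python
import math

def spectral_constants(atoms, alpha):
    """sigma1^alpha, beta1, lambda, kappa of (2.4)-(2.5) and (3.7)-(3.8)
    for a discrete Gamma = [(s1, s2, mass), ...]; assumes alpha > 1."""
    sgn = lambda u: (u > 0) - (u < 0)
    s1a = sum(g * abs(s1)**alpha for s1, _s2, g in atoms)
    beta1 = sum(g * abs(s1)**alpha * sgn(s1) for s1, _s2, g in atoms) / s1a
    lam = sum(g * s2 * abs(s1)**(alpha - 1.0) * sgn(s1)
              for s1, s2, g in atoms if s1 != 0.0) / s1a
    kap = sum(g * s2 * abs(s1)**(alpha - 1.0)
              for s1, s2, g in atoms if s1 != 0.0) / s1a
    return s1a, beta1, lam, kap

r = 1.0 / math.sqrt(2.0)
alpha = 1.5
ex1 = [(r, r, 1.0), (-r, -r, 1.0), (0.0, -1.0, 1.0)]                    # Example 1
ex2 = [(1.0, 0.0, 1.0), (0.0, 1.0, 1.0), (-r, -r, 2.0**(alpha / 2.0))]  # Example 2
print(spectral_constants(ex1, alpha), spectral_constants(ex2, alpha))
```

For Example 1 this returns $\beta_1 = \kappa = 0$ and $\lambda = 1$, so (3.18) holds; for Example 2 it returns $\kappa = -\tfrac12 \ne \lambda\beta_1 = 0$.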
Integral Representation. In applications, $(X_1, X_2)$ is often given through its integral representation
\[
(X_1, X_2) \stackrel{d}{=} \Big(\int_E f_1(x)\,M(dx),\ \int_E f_2(x)\,M(dx)\Big), \tag{3.19}
\]
where $M$ is an $\alpha$-stable random measure on the measure space $(E, \mathcal{E})$ with control measure $m$ and skewness intensity $\beta(\cdot): E \to [-1,1]$. Moreover, $f_j: E \to \mathbb{R}^1$, $j = 1, 2$, satisfies
\[
\int_E|f_j(x)|^\alpha\,m(dx) < \infty,
\]
and it also satisfies the condition
\[
\int_E\big|f_j(x)\,\beta(x)\ln|f_j(x)|\big|\,m(dx) < \infty
\]
if $\alpha = 1$ ([5], [14]). The following proposition expresses condition (3.1) and the constants of the regression in terms of $f_1$, $f_2$, $\beta(\cdot)$ and $m(dx)$.
Proposition 3.1
\[
\int_{S_2}\frac{\Gamma(ds)}{|s_1|^\nu} < \infty \quad\text{if and only if}\quad \int_{E_+}\frac{|f_2(x)|^{\alpha+\nu}}{|f_1(x)|^{\nu}}\,m(dx) < \infty,
\]
\[
\mu_1 = -\frac{2}{\pi}\int_{E_+}f_1(x)\Big(\ln\frac{|f_1(x)|}{\sqrt{f_1^2(x)+f_2^2(x)}}\Big)\beta(x)\,m(dx) \qquad (\text{case } \alpha = 1),
\]
\[
k_0 = \frac{1}{\sigma_1}\int_{E_+}f_2(x)\Big(\ln\frac{|f_1(x)|}{\sqrt{f_1^2(x)+f_2^2(x)}}\Big)\beta(x)\,m(dx) \qquad (\text{case } \alpha = 1),
\]
\[
\lambda = \frac{[X_2, X_1]_\alpha}{\sigma_1^\alpha} = \frac{1}{\sigma_1^\alpha}\int_{E_+}f_2(x)\,f_1(x)^{<\alpha-1>}\,m(dx),
\]
\[
\kappa = \frac{1}{\sigma_1^\alpha}\int_{E_+}f_2(x)\,|f_1(x)|^{\alpha-1}\,\beta(x)\,m(dx),
\]
where $E_+ = \{x \in E : f_1^2(x) + f_2^2(x) > 0\}$. Any $\alpha$-stable vector with spectral representation $(\Gamma, \mu^0)$ has an integral representation (3.19) with $\beta(\cdot) \equiv 1$. In fact, if $\alpha \ne 1$, it has one with $(E, \mathcal{E}) = ([0,1], \mathcal{B})$, $\beta(\cdot) \equiv 1$ and $m(dx) = dx$ ([5] and [14]). See Section 5 for an example.
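To illustrate the proposition with toy kernels of our own choosing ($f_1 \equiv 1$, $f_2(x) = x$ on $E = [0,1]$, $\beta \equiv 1$, $m$ Lebesgue), a midpoint Riemann sum reproduces the closed forms $\sigma_1^\alpha = 1$ and $\lambda = \kappa = \tfrac12$:

```python
alpha = 1.5
n = 20000
h = 1.0 / n
mid = [(k + 0.5) * h for k in range(n)]  # midpoint grid on E = [0, 1]

f1 = lambda x: 1.0
f2 = lambda x: x
beta = lambda x: 1.0  # totally right-skewed random measure M

sigma1_a = sum(abs(f1(x))**alpha for x in mid) * h
lam = sum(f2(x) * abs(f1(x))**(alpha - 1.0) * (1.0 if f1(x) > 0 else -1.0)
          for x in mid) * h / sigma1_a
kap = sum(f2(x) * abs(f1(x))**(alpha - 1.0) * beta(x) for x in mid) * h / sigma1_a
print(sigma1_a, lam, kap)
```

With $f_1$ strictly positive and $\beta \equiv 1$, $\lambda$ and $\kappa$ coincide here, as the proposition's formulas predict.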
4 Proofs

Proof of Theorem 3.1: Since (3.2) holds, the characteristic function $\Phi_{\mathbf{X}}(t,r)$ of $\mathbf{X} = (X_1, X_2)$ is absolutely integrable with respect to $t$, and therefore the conditional characteristic function $\phi_{X_2|x}(r)$ of $X_2$ given $X_1 = x$ equals
\[
\phi_{X_2|x}(r) = 1 + \frac{1}{2\pi f_{X_1}(x)}\int_{-\infty}^{+\infty}e^{-itx}\big(\Phi_{\mathbf{X}}(t,r) - \phi_{X_1}(t)\big)\,dt.
\]
(See Section 2 of [13].) Hence, for almost every $x$,
\[
E(X_2 \mid X_1 = x) = -i\,\phi'_{X_2|x}(0) = -\frac{i}{2\pi f_{X_1}(x)}\,\frac{\partial}{\partial r}\Big(\int_{-\infty}^{+\infty}e^{-itx}\,\Phi_{\mathbf{X}}(t,r)\,dt\Big)\Big|_{r=0}. \tag{4.1}
\]
We start with the evaluation of $\frac{\partial}{\partial r}\Phi_{\mathbf{X}}(t,r)\big|_{r=0}$. For any $t \ne 0$,
\[
\frac{\partial\Phi_{\mathbf{X}}}{\partial r}(t,r)\Big|_{r=0} = \Phi_{\mathbf{X}}(t,0)\,\lim_{r\to 0}\frac{1}{r}\Big(\frac{\Phi_{\mathbf{X}}(t,r)}{\Phi_{\mathbf{X}}(t,0)} - 1\Big), \tag{4.2}
\]
where $\Phi_{\mathbf{X}}(t,r)/\Phi_{\mathbf{X}}(t,0) = e^{-u(t,r)}$ with
\[
u(t,r) = \int_{S_2}\big[|ts_1+rs_2|^\alpha\big(1 - ia\,\mathrm{sign}(ts_1+rs_2)\big) - |ts_1|^\alpha\big(1 - ia\,\mathrm{sign}\,ts_1\big)\big]\,\Gamma(ds).
\]
Since $u(t,r) \to 0$ as $r \to 0$, we get
\[
\lim_{r\to 0}\frac{1}{r}\Big(\frac{\Phi_{\mathbf{X}}(t,r)}{\Phi_{\mathbf{X}}(t,0)} - 1\Big)
= \lim_{r\to 0}\frac{\exp\{-u(t,r)\} - 1}{u(t,r)}\cdot\frac{u(t,r)}{r}
= -\lim_{r\to 0}\frac{u(t,r)}{r}
\]
\[
= -\lim_{r\to 0}\int_{S_2}\frac{1}{r}\big(|ts_1+rs_2|^\alpha - |ts_1|^\alpha\big)\,\Gamma(ds)
+ ia\lim_{r\to 0}\int_{S_2}\frac{1}{r}\big((ts_1+rs_2)^{<\alpha>} - (ts_1)^{<\alpha>}\big)\,\Gamma(ds)
=: -[L_1 - iaL_2].
\]
Clearly,
\[
L_1 = \alpha t^{<\alpha-1>}\int_{S_2}s_2 s_1^{<\alpha-1>}\,\Gamma(ds) = \alpha\sigma_1^\alpha\lambda\,t^{<\alpha-1>}
\]
by (3.7). To evaluate $L_2$, we express the corresponding integral as a sum of two integrals $Q_1(t,r)$ and $Q_2(t,r)$, the first over $S_2 \cap \{s : |ts_1| \ge 2|r|\}$ and the second over $S_2 \cap \{s : |ts_1| < 2|r|\}$. The mean value theorem gives
\[
(ts_1+rs_2)^{<\alpha>} - (ts_1)^{<\alpha>} = \alpha s_2 r\,|u|^{\alpha-1},
\]
where $u \in \big(|ts_1| \wedge |ts_1+rs_2|,\ |ts_1| \vee |ts_1+rs_2|\big)$, and so, for any $s_1 \ne 0$, the integrand of $Q_1$ converges to $\alpha s_2|ts_1|^{\alpha-1}$. The integrand is dominated by an integrable function, because $|ts_1| \ge 2|r|$ implies $u \in \big[\frac{|ts_1|}{2},\ 2|t|\big]$ and therefore $|u|^{\alpha-1} \le \big(\frac{|ts_1|}{2}\big)^{\alpha-1} + (2|t|)^{\alpha-1}$. This is certainly integrable with respect to $\Gamma$ if $\alpha > 1$. If $\alpha < 1$, on $S_2$,
\[
|u|^{\alpha-1} \le C|t|^{\alpha-1}\big(|s_1|^{\alpha-1+\nu}|s_1|^{-\nu} + 1\big) \le C|t|^{\alpha-1}\big(|s_1|^{-\nu} + 1\big),
\]
which, by (3.1), is integrable with respect to $\Gamma$. Therefore, by the dominated convergence theorem,
\[
\lim_{r\to 0}Q_1(t,r) = \alpha|t|^{\alpha-1}\int_{S_2}s_2|s_1|^{\alpha-1}\,\Gamma(ds).
\]
Now consider
\[
Q_2(t,r) = \int_{S_2\cap\{s:|ts_1|<2|r|\}}\frac{1}{r}\big((ts_1+rs_2)^{<\alpha>} - (ts_1)^{<\alpha>}\big)\,\Gamma(ds).
\]
On this set $|(ts_1+rs_2)^{<\alpha>} - (ts_1)^{<\alpha>}| \le (3|r|)^\alpha + (2|r|)^\alpha$, while (3.1) gives $\Gamma(\{s : |s_1| < 2|r|/|t|\}) \le (2|r|/|t|)^\nu\int_{S_2}|s_1|^{-\nu}\,\Gamma(ds)$; hence, if $\alpha < 1$, $|Q_2(t,r)| \le \mathrm{const}\,|r|^{\alpha+\nu-1} \to 0$ because $\alpha + \nu > 1$, and we get $\lim_{r\to 0}Q_2(t,r) = 0$. The case $\alpha > 1$ is similar. Therefore, by (3.8),
\[
L_2 = \alpha|t|^{\alpha-1}\int_{S_2}s_2|s_1|^{\alpha-1}\,\Gamma(ds) = \alpha\sigma_1^\alpha\kappa\,|t|^{\alpha-1},
\]
and we conclude by (4.2) that
\[
\frac{\partial\Phi_{\mathbf{X}}}{\partial r}(t,r)\Big|_{r=0} = -\alpha\sigma_1^\alpha\,e^{-\sigma_1^\alpha|t|^\alpha}\,e^{ia\sigma_1^\alpha\beta_1 t^{<\alpha>}}\big(\lambda t^{<\alpha-1>} - ia\kappa|t|^{\alpha-1}\big) \tag{4.3}
\]
for almost all $t$. A similar argument applied to (4.1) yields
\[
E(X_2 \mid X_1 = x) = -\frac{i}{2\pi f_{X_1}(x)}\int_{-\infty}^{+\infty}e^{-itx}\Big(\frac{\partial}{\partial r}\Phi_{\mathbf{X}}(t,r)\Big|_{r=0}\Big)\,dt. \tag{4.4}
\]
Substituting (4.3) in (4.4), we get
\[
E(X_2 \mid X_1 = x) = \frac{i\alpha\sigma_1^\alpha}{2\pi f_{X_1}(x)}\int_{-\infty}^{+\infty}e^{-itx}\,e^{ia\sigma_1^\alpha\beta_1 t^{<\alpha>}}\,e^{-\sigma_1^\alpha|t|^\alpha}\big(\lambda t^{<\alpha-1>} - ia\kappa|t|^{\alpha-1}\big)\,dt
= \frac{\alpha\sigma_1^\alpha}{\pi f_{X_1}(x)}\big[\lambda(I_{11}+I_{12}) + a\kappa(I_{21}+I_{22})\big], \tag{4.5}
\]
where
\[
I_{11} = \int_0^\infty \sin tx\,\cos(a\sigma_1^\alpha\beta_1 t^\alpha)\,e^{-\sigma_1^\alpha t^\alpha}\,t^{\alpha-1}\,dt,
\qquad
I_{12} = -\int_0^\infty \cos tx\,\sin(a\sigma_1^\alpha\beta_1 t^\alpha)\,e^{-\sigma_1^\alpha t^\alpha}\,t^{\alpha-1}\,dt,
\]
\[
I_{21} = \int_0^\infty \cos tx\,\cos(a\sigma_1^\alpha\beta_1 t^\alpha)\,e^{-\sigma_1^\alpha t^\alpha}\,t^{\alpha-1}\,dt,
\qquad
I_{22} = \int_0^\infty \sin tx\,\sin(a\sigma_1^\alpha\beta_1 t^\alpha)\,e^{-\sigma_1^\alpha t^\alpha}\,t^{\alpha-1}\,dt.
\]
After integrating by parts,
\[
I_{11} = -\frac{x}{K}\int_0^\infty e^{-\sigma_1^\alpha t^\alpha}\big[a\beta_1\sin(a\sigma_1^\alpha\beta_1 t^\alpha) - \cos(a\sigma_1^\alpha\beta_1 t^\alpha)\big]\cos tx\,dt,
\]
\[
I_{12} = -\frac{a\beta_1}{K} + \frac{x}{K}\int_0^\infty e^{-\sigma_1^\alpha t^\alpha}\big[a\beta_1\cos(a\sigma_1^\alpha\beta_1 t^\alpha) + \sin(a\sigma_1^\alpha\beta_1 t^\alpha)\big]\sin tx\,dt,
\]
\[
I_{21} = \frac{1}{K} + \frac{x}{K}\int_0^\infty e^{-\sigma_1^\alpha t^\alpha}\big[a\beta_1\sin(a\sigma_1^\alpha\beta_1 t^\alpha) - \cos(a\sigma_1^\alpha\beta_1 t^\alpha)\big]\sin tx\,dt,
\]
\[
I_{22} = \frac{x}{K}\int_0^\infty e^{-\sigma_1^\alpha t^\alpha}\big[a\beta_1\cos(a\sigma_1^\alpha\beta_1 t^\alpha) + \sin(a\sigma_1^\alpha\beta_1 t^\alpha)\big]\cos tx\,dt,
\]
where $K = \alpha\sigma_1^\alpha(1 + a^2\beta_1^2)$. Therefore, by (3.3) and (3.6),
\[
I_{11} + I_{12} = \frac{1}{\alpha\sigma_1^\alpha(1 + a^2\beta_1^2)}\big[-a\beta_1 + \pi x f_{X_1}(x) + a\beta_1 xH(x)\big],
\]
\[
I_{21} + I_{22} = \frac{1}{\alpha\sigma_1^\alpha(1 + a^2\beta_1^2)}\big[1 + a\beta_1\pi x f_{X_1}(x) - xH(x)\big].
\]
Substituting these expressions in (4.5) yields
\[
E(X_2 \mid X_1 = x) = \frac{\lambda + a^2\beta_1\kappa}{1 + a^2\beta_1^2}\,x + \frac{a(\kappa - \lambda\beta_1)}{1 + a^2\beta_1^2}\cdot\frac{1 - xH(x)}{\pi f_{X_1}(x)}
= \lambda x + \frac{a(\kappa - \lambda\beta_1)}{1 + a^2\beta_1^2}\Big[a\beta_1 x + \frac{1 - xH(x)}{\pi f_{X_1}(x)}\Big]. \qquad\Box
\]
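As a numerical sanity check on the computation above (the parameter values are arbitrary choices of ours), the integrated-by-parts form (3.5) and the ratio form (3.9) can be compared directly:

```python
import math

alpha, s1, b1, lam, kap, x = 1.5, 1.0, 0.5, 0.6, 0.2, 1.0
a = math.tan(math.pi * alpha / 2.0)

T, n = 60.0, 120000
h = T / n
H = piF = N = 0.0
for k in range(n + 1):
    t = k * h
    w = (0.5 if k in (0, n) else 1.0) * h
    damp = math.exp(-s1**alpha * t**alpha)
    ph = t * x - a * s1**alpha * b1 * t**alpha
    H += w * damp * math.sin(ph)    # H(x) of (3.6)
    piF += w * damp * math.cos(ph)  # pi * f_X1(x) by (3.3)
    if t > 0.0:
        N += w * damp * t**(alpha - 1.0) * math.cos(ph)

E_35 = (lam * x + a * (kap - lam * b1) / (1.0 + a**2 * b1**2)
        * (a * b1 * x + (1.0 - x * H) / piF))
E_39 = lam * x + a * (kap - lam * b1) * alpha * s1**alpha * N / piF
print(E_35, E_39)
```

The two values agree to within quadrature error, reflecting the integration-by-parts identity $(1 + a^2\beta_1^2)\,\alpha\sigma_1^\alpha N = 1 - xH(x) + a\beta_1 x\,\pi f_{X_1}(x)$ used in the proof.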
Proof of Theorem 3.2: We have, as in (4.2),
\[
\frac{\partial\Phi_{\mathbf{X}}}{\partial r}(t,r)\Big|_{r=0} = \Phi_{\mathbf{X}}(t,0)\,\lim_{r\to 0}\frac{1}{r}\Big(\frac{\Phi_{\mathbf{X}}(t,r)}{\Phi_{\mathbf{X}}(t,0)} - 1\Big)
=: -\Phi_{\mathbf{X}}(t,0)\Big[L_1 + i\frac{2}{\pi}L_2\Big], \tag{4.6}
\]
where
\[
L_1 = \lim_{r\to 0}\int_{S_2}\frac{1}{r}\big[|ts_1+rs_2| - |ts_1|\big]\,\Gamma(ds),
\]
\[
L_2 = \lim_{r\to 0}\int_{S_2}\frac{1}{r}\big[(ts_1+rs_2)\ln|ts_1+rs_2| - ts_1\ln|ts_1|\big]\,\Gamma(ds).
\]
Clearly,
\[
L_1 = \mathrm{sign}\,t\int_{S_2}s_2\,\mathrm{sign}\,s_1\,\Gamma(ds) = \lambda\sigma_1\,\mathrm{sign}\,t. \tag{4.7}
\]
As in the case $\alpha \ne 1$, to evaluate $L_2$ we write $L_2 = \lim_{r\to 0}(Q_1(t,r) + Q_2(t,r))$, where $Q_1$ involves integration over $S_2 \cap \{s : |ts_1| \ge 2|r|\}$ and $Q_2$ over $S_2 \cap \{s : |ts_1| < 2|r|\}$, and we apply the dominated convergence theorem. We give details only for $Q_2$. Let $f: [0,\infty) \to [0,\infty)$ be defined by $f(r) = r|\ln r|$, $f(0) = 0$. For $|r|$ small enough ($0 < |r| < e^{-1}$), $f$ is monotone increasing, and therefore, when $|ts_1| < 2|r|$ and $0 < |r| < (3e)^{-1}$, one has
\[
|r|^{-1}\big(f(|ts_1+rs_2|) + f(|ts_1|)\big) \le |r|^{-1}\big(f(3|r|) + f(2|r|)\big) \le 2|r|^{-1}f(3|r|) \le 6\big|\ln 3|r|\big| \le 6\Big|\ln\Big(\frac{3}{2}|ts_1|\Big)\Big| \le 6\Big(\Big|\ln\Big(\frac{3}{2}|t|\Big)\Big| + c_\nu|s_1|^{-\nu/2}\Big),
\]
which is integrable by (3.1). Applying the dominated convergence theorem, we get $\lim_{r\to 0}Q_2(t,r) = 0$. Therefore
\[
L_2 = \lim_{r\to 0}Q_1(t,r) + \lim_{r\to 0}Q_2(t,r)
= \int_{S_2}s_2\big(1 + \ln|ts_1|\big)\,\Gamma(ds)
= (1 + \ln|t|)\int_{S_2}s_2\,\Gamma(ds) + \int_{S_2}s_2\ln|s_1|\,\Gamma(ds)
= \kappa\sigma_1(1 + \ln|t|) + \sigma_1 k_0. \tag{4.8}
\]
Substituting (4.7) and (4.8) in (4.6) yields
\[
\frac{\partial\Phi_{\mathbf{X}}}{\partial r}(t,r)\Big|_{r=0} = -\Phi_{\mathbf{X}}(t,0)\Big[\lambda\sigma_1\,\mathrm{sign}\,t + i\frac{2}{\pi}\kappa\sigma_1(1 + \ln|t|) + i\frac{2}{\pi}\sigma_1 k_0\Big].
\]
The dominated convergence theorem applied to (4.1), with $\mu_1$ as in (3.14), gives
\[
E(X_2 \mid X_1 = x) = -\frac{i}{2\pi f_{X_1}(x)}\int_{-\infty}^{+\infty}e^{-itx}\Big(\frac{\partial}{\partial r}\Phi_{\mathbf{X}}(t,r)\Big|_{r=0}\Big)\,dt
\]
\[
= \frac{i}{2\pi f_{X_1}(x)}\int_{-\infty}^{+\infty}e^{-it(x-\mu_1)}\,e^{-i\frac{2}{\pi}\sigma_1\beta_1 t\ln|t|}\,e^{-\sigma_1|t|}\Big[\lambda\sigma_1\,\mathrm{sign}\,t + i\frac{2}{\pi}\kappa\sigma_1(1 + \ln|t|) + i\frac{2}{\pi}\sigma_1 k_0\Big]\,dt
\]
\[
= \frac{\sigma_1}{\pi f_{X_1}(x)}\Big[\lambda U(x) - \frac{2\kappa}{\pi}(I_{21}+I_{22})\Big] - \frac{2\sigma_1}{\pi}k_0, \tag{4.9}
\]
where
\[
I_{21} = \int_0^\infty e^{-\sigma_1 t}\cos t(x-\mu_1)\,(1+\ln t)\cos\Big(\frac{2}{\pi}\sigma_1\beta_1 t\ln t\Big)\,dt,
\qquad
I_{22} = -\int_0^\infty e^{-\sigma_1 t}\sin t(x-\mu_1)\,(1+\ln t)\sin\Big(\frac{2}{\pi}\sigma_1\beta_1 t\ln t\Big)\,dt.
\]
(i) Case $\beta_1 \ne 0$: After integrating by parts,
\[
I_{21} = \frac{\pi}{2\beta_1}\int_0^\infty e^{-\sigma_1 t}\cos t(x-\mu_1)\sin\Big(\frac{2}{\pi}\sigma_1\beta_1 t\ln t\Big)\,dt
+ \frac{\pi}{2\sigma_1\beta_1}(x-\mu_1)\int_0^\infty e^{-\sigma_1 t}\sin t(x-\mu_1)\sin\Big(\frac{2}{\pi}\sigma_1\beta_1 t\ln t\Big)\,dt,
\]
\[
I_{22} = \frac{\pi}{2\beta_1}\int_0^\infty e^{-\sigma_1 t}\sin t(x-\mu_1)\cos\Big(\frac{2}{\pi}\sigma_1\beta_1 t\ln t\Big)\,dt
- \frac{\pi}{2\sigma_1\beta_1}(x-\mu_1)\int_0^\infty e^{-\sigma_1 t}\cos t(x-\mu_1)\cos\Big(\frac{2}{\pi}\sigma_1\beta_1 t\ln t\Big)\,dt,
\]
so that
\[
I_{21} + I_{22} = \frac{\pi}{2\beta_1}U(x) - \frac{\pi}{2\sigma_1\beta_1}(x-\mu_1)\,\pi f_{X_1}(x).
\]
Substituting this expression in (4.9) and rearranging the terms, we get
\[
E(X_2 \mid X_1 = x) = -\frac{2\sigma_1}{\pi}k_0 + \frac{\kappa}{\beta_1}(x-\mu_1) - \frac{\kappa-\lambda\beta_1}{\beta_1}\cdot\frac{\sigma_1 U(x)}{\pi f_{X_1}(x)}
= -\frac{2\sigma_1}{\pi}k_0 + \lambda(x-\mu_1) + \frac{\kappa-\lambda\beta_1}{\beta_1}\Big[(x-\mu_1) - \frac{\sigma_1 U(x)}{\pi f_{X_1}(x)}\Big].
\]
(ii) Case $\beta_1 = 0$: Direct computation gives
\[
U(x) = \int_0^\infty e^{-\sigma_1 t}\sin t(x-\mu_1)\,dt = \frac{\pi f_{X_1}(x)}{\sigma_1}\,(x-\mu_1).
\]
Since
\[
I_{21} + I_{22} = \int_0^\infty e^{-\sigma_1 t}\cos t(x-\mu_1)\,(1+\ln t)\,dt = V(x),
\]
relation (4.9) becomes
\[
E(X_2 \mid X_1 = x) = \frac{\sigma_1}{\pi f_{X_1}(x)}\Big[\lambda\,\frac{\pi f_{X_1}(x)}{\sigma_1}(x-\mu_1) - \frac{2\kappa}{\pi}V(x)\Big] - \frac{2\sigma_1}{\pi}k_0
= -\frac{2\sigma_1}{\pi}k_0 + \lambda(x-\mu_1) - \frac{2\sigma_1\kappa}{\pi}\,\frac{V(x)}{\pi f_{X_1}(x)}. \qquad\Box
\]
! 1. Indeed, consider ^ the vector (?X1 ; X2) whose parameters are 1 = ? 1 ; ^ = ?, and ^ = . Since E (X2jX1 = x) = E (X2j ? X1 = ?x), one can obtain the asymptotic behavior of E (X2jX1 = x) as x ! ?1 from that of E (X2jX1 = x) as x ! 1 by replacing 1 by ? 1 , , by ?, and x by ?x = jxj. Suppose 1 6= 1. We study rst the cases with 6= 1 and then those with = 1. (a) Case 6= 1; x ! 1: To obtain the asymptotic behavior of E (X2 jX1 = x), we use its expression in terms of Iij ; i; j = 1; 2, as given in (4.5). By [15], Theorem 126, as x ! 1, 8 > < I11 ?()(sin 2 )x? ; I12 = o(x? ); (4.10) > I ?()(cos )x? ; I = o(x? ); : 22 21 2 when 0 < < 1 and also when 1 < < 2. (If 1 < < 2, use integration by parts to transform the terms t?1 in Iij ; i; j = 1; 2, into t?2 .) Moreover, fX1 (x) 1 (1 + 1)(sin )?( + 1)x??1 2 1 (see [7], Theorem 2.4.2.) Substituting in (4.5) and using a = tan 2 , we get Proof of Corollary 3.2: It is sucient to focus on x
E [X2 jX1 = x] 18
+1 x 1 ? ? (1 + )(sin ) ?(1 + ) ?()(sin 2 )x + a?()(cos 2 )x 1 2 1 1++ x: 1 (b) Case = 1; 1 6= 0; x ! 1: Since x ? 1 x as x ! 1, we may assume 1 = 0. Then Z1 U (x) = e?1 t sin(tx + 2 1 1 t ln t)dt = U1(x) + U2(x)
0
where
Z1
e?1 t sin( 2 1 1t ln t) cos tx dt; Z01 e?1 t cos( 2 11 t ln t) sin tx dt: U2 (x) = 0 After integrating by parts, Z1 1 U1 (x) = ? x e?1 t[?1 sin( 2 1 1t ln t) + 2 1 1 (1 + ln t) cos( 2 1 1 t ln t)] sin txdt; 0Z 1 1 1 U2 (x) = x + x e?1 t[?1 cos( 2 1 1t ln t) + 2 1 1(1 + ln t) sin( 2 1 1 t ln t)] cos tx dt: 0 Since the factors of sin tx and cos tx are integrable, we get U1 (x) = o(x?1) and U2 (x) x?1 as x ! 1 by the Riemann-Lebesgue lemma ([15], Theorem 1). Therefore (4.11) U (x) x1 : Since, as is well-known, fX1 (x) 1(1 + 1 ) x?2 ; (4.12) we get 1 x; U (x) f (x) (1 + ) U1 (x) =
X1
which, substituted in (3.10), gives
1
1
" # x ? 1 x ? 1 + = 1++ x: E (X2jX1 = x) x + 1 1 1 This is the same result as in the case 6= 1. (c) Case = 1; 1 = 0, x ! 1: If Y is SS with = 1 and scaling parameter 1 , then by the inversion formula ([10], Theorem 3.2.1), P (0 < Y x) = 19
1 R 1 ?1 t t?1 sin tx dt. 0 e
On the other hand, because of the symmetry, xlim !1P (0 < Y < 1) = 12 . Therefore limx!1 R01 e?1 t t?1 sin tx dt = 2 , and hence Z1 V (x) = e?1 t(1 + ln t) cos tx dt 0 Z Z1 1 1 1 ?1 t ?1 t t?1 sin tx dt = e (1 + ln t) sin tx dt ? e x 0 x 0 1 ? 2 x; (4.13) since the rst integral is o(1) by the Riemann-Lebesgue lemma. Substituting (4.13) and (4.12) in (3.11) yields E (X2 jX1 = x) ( + )x, that is, (3.16) with 1 = 0. This completes the proof of the corollary. Proof of Corollary 3.3: If = 1 , then E (X2 jX1 = x) = x for a.e. x by Theorem 3.1 when 6= 1, and E (X2jX1 = x) = ? 21 k0 + (x ? 1) for a.e. x by Theorem 3.2 when = 1. Suppose now E (X2jX1 = x) = Ax + B for a.e. x and also, ad absurdum, 6= 1 . This contradicts (3.10) if 1 = 1. If 1 6= 1, then by linearity, E (X2 jX1 = x) = lim E (X2jX1 = x) : xlim !1 x!?1 x x In view of (3.16) and (3.17), this implies + = ?; 1 + 1 1 ? 1 and hence = 1, a contradiction. q Proof of Proposition 3.1: Suppose rst 6= 1 and set jf j = f12 + f22 . The characteristic function of (X1 ; X2) is 2 Z X E expfi j fj M (dx)g
E
j =1
= = + =
3 Z 2 X 2 2 X <> expf 4?j j fj j + ia( j fj ) (x)5 m(dx)g E j =1 8Z 2 j=1 2 1<>3 02 < X X f f j j exp : 4?j j jf j j + ia @ j jf j A 5 1 + 2 (x) jf jm(dx) E+ j =1 j =1 9 2 1<>3 0 Z 2 2 = X X ( ? f ) ( ? f ) 1 ? ( x ) m(dx) 4?j j j j + ia @ j j A 5 j f j ; jf j jf j 2 E+ j =1 j =1 0 1 3 2 <> Z 2 2 X X expf 4?j j sj j + ia @ j sj A 5 ?(ds) S 2
j =1
j =1
20
by making the change of variables
\[
T: E^+ \to S_2,\qquad T: x \mapsto (s_1, s_2) = \Bigl(\frac{f_1(x)}{|f(x)|}, \frac{f_2(x)}{|f(x)|}\Bigr),
\]
and setting $\Gamma(ds) = \Gamma^+(ds) + \Gamma^-(-ds)$, with
\[
\Gamma^{\pm}(ds) = \frac{1\pm\beta(x)}{2}\,|f(x)|^{\alpha}\,m(dx),\qquad x = T^{-1}(s).
\]
Hence, in order to transform an integral involving $s_1, s_2$ and $\Gamma$ into one involving $f_1, f_2$ and $m$, one expresses it as a sum of two terms: the first is obtained by replacing each $s_j$ by $\frac{f_j(x)}{|f(x)|}$, $j=1,2$, and $\Gamma(ds)$ by $\frac{1}{2}(1+\beta(x))|f(x)|^{\alpha}m(dx)$, and the second is obtained by replacing each $s_j$ by $-\frac{f_j(x)}{|f(x)|}$, $j=1,2$, and $\Gamma(ds)$ by $\frac{1}{2}(1-\beta(x))|f(x)|^{\alpha}m(dx)$. (The same rule applies when $\alpha = 1$.) For example, when $\alpha = 1$,
\begin{eqnarray*}
k_0 &=& \int_{S_2} s_1\ln|s_2|\,\Gamma(ds)\\
&=& \int_{E^+}\frac{f_1}{|f|}\ln\frac{|f_2|}{|f|}\cdot\frac{1+\beta(x)}{2}\,|f|\,m(dx) + \int_{E^+}\frac{-f_1}{|f|}\ln\frac{|f_2|}{|f|}\cdot\frac{1-\beta(x)}{2}\,|f|\,m(dx)\\
&=& \int_{E^+} f_1\ln\frac{|f_2|}{|f|}\,\beta(x)\,m(dx).
\end{eqnarray*}
Similarly,
\[
\int_{S_2}\frac{\Gamma(ds)}{|s_1|^{\delta}}
= \int_{E^+}\Bigl(\frac{|f|}{|f_1|}\Bigr)^{\delta}\Bigl[\frac{1+\beta(x)}{2} + \frac{1-\beta(x)}{2}\Bigr]|f|^{\alpha}\,m(dx)
= \int_{E^+}\frac{|f|^{\alpha+\delta}}{|f_1|^{\delta}}\,m(dx)
\le c\Bigl[\int_{E^+}|f_1|^{\alpha}\,m(dx) + \int_{E^+}\frac{|f_2|^{\alpha+\delta}}{|f_1|^{\delta}}\,m(dx)\Bigr].
\]
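The two-term substitution rule is easy to mis-apply, so here is a small numerical sanity check. The functions $f_1$, $f_2$ and the skewness intensity $\beta$ below are illustrative stand-ins (our own choices, not the paper's); for an $\alpha = 1$ integral such as $k_0$, the two-term form must agree with the collapsed form, since $s_1\ln|s_2|$ is odd under $s \mapsto -s$:

```python
import numpy as np

# Midpoint grid on E = [0, 1]; m is Lebesgue measure.
N = 100_000
t = (np.arange(N) + 0.5) / N
dt = 1.0 / N

# Illustrative stand-ins: f1, f2 nonvanishing, |beta| <= 1.
f1 = np.where(t < 0.3, 1.0, -0.7)
f2 = np.cos(3.0 * t) + 1.2
beta = 0.5 * np.sin(2.0 * t)
fnorm = np.hypot(f1, f2)               # |f| = sqrt(f1^2 + f2^2)

def g(s1, s2):                         # integrand of k0 on S_2
    return s1 * np.log(np.abs(s2))

# Two-term rule (alpha = 1): s_j -> +/- f_j/|f|,
# Gamma(ds) -> (1 +/- beta)/2 * |f| m(dx), then add both contributions.
two_term = np.sum((g(f1 / fnorm, f2 / fnorm) * (1 + beta) / 2
                   + g(-f1 / fnorm, -f2 / fnorm) * (1 - beta) / 2)
                  * fnorm) * dt

# Collapsed form, as in the displayed computation of k0.
collapsed = np.sum(f1 * np.log(np.abs(f2) / fnorm) * beta) * dt

assert abs(two_term - collapsed) < 1e-10
```

The agreement is exact up to floating-point error because the identity holds pointwise in $x$, not just after integration.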
5 Graphical representations

In this section, we present graphical representations of the regression functions given analytically in Theorems 3.1 and 3.2. In the case $\alpha \neq 1$, the five parameters $\alpha$, $\beta_1$, $\sigma_1$, $\kappa$, and $\lambda$ are required to describe the regression function completely. In the case $\alpha = 1$, these five parameters, with the addition of $k_0$ and $k_1$, are required to describe the regression function completely. Thus, there are a myriad of possible choices for these parameters which are consistent with their definitions, giving rise to an unmanageably large family of regression functions. In producing the plots that follow, we have chosen to restrict the parameter space considerably, but in a way which we hope will give the reader some feeling for the general character of these functions. Clearly, though, the following examples do not illustrate all possible behavior.

It is convenient to use the stochastic integral representation (3.19) for $(X_1, X_2)$. Here, the random measure $M$ is taken to be totally right-skewed (i.e., $\beta(\cdot) \equiv 1$), with domain $E = [0,1]$ and Lebesgue control measure $m$. The functions $f_j$, illustrated in Figure 1, are restricted to those of the form
\[
f_j(t) = 1_{[0,c_j)}(t) - 1_{[c_j,1]}(t),
\]
where $1_A$ is the indicator function of the set $A$. With these restrictions, the three parameters $\alpha$, $c_1$, and $c_2$ determine the regression function.
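The marginal parameters induced by these indicator functions can be computed from the standard formulas for $X_j = \int f_j\,dM$, namely $\sigma_j^{\alpha} = \int_E |f_j|^{\alpha}\,dm$ and $\beta_j = \sigma_j^{-\alpha}\int_E f_j^{\langle\alpha\rangle}\beta(x)\,m(dx)$, where $y^{\langle\alpha\rangle} = |y|^{\alpha}\,\mathrm{sign}\,y$. A minimal numerical sketch: the grid size, the example values of $\alpha$, $c_1$, $c_2$, and the shortcut $\kappa = \int f_1 f_2\,dm$ (valid here because $|f_1| = |f_2| \equiv 1$) are our own choices, not the paper's code:

```python
import numpy as np

ALPHA, C1, C2 = 1.5, 0.3, 0.8          # example values, our choice
N = 400_000
t = (np.arange(N) + 0.5) / N           # midpoint grid on E = [0, 1]
dt = 1.0 / N

def f(c):                              # f_j = 1_[0,c) - 1_[c,1]
    return np.where(t < c, 1.0, -1.0)

f1, f2 = f(C1), f(C2)
beta_intensity = np.ones(N)            # M totally right-skewed: beta(.) == 1

def signed_pow(y, a):                  # y^<a> = |y|^a * sign(y)
    return np.sign(y) * np.abs(y) ** a

sigma1 = (np.sum(np.abs(f1) ** ALPHA) * dt) ** (1.0 / ALPHA)
beta1 = np.sum(signed_pow(f1, ALPHA) * beta_intensity) * dt / sigma1 ** ALPHA
kappa = np.sum(f1 * f2) * dt           # shortcut valid since |f1| = |f2| = 1

assert abs(sigma1 - 1.0) < 1e-9        # unit scale, as claimed below
assert abs(beta1 - (2 * C1 - 1)) < 1e-4
assert abs(kappa - (1 - 2 * abs(C1 - C2))) < 1e-4
```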
Figure 1: The Function $f_j(t)$

This choice forces all scale parameters to be unity, and ensures that the Standard Assumption is always satisfied. Since $E(a_2X_2 \mid a_1X_1 = x) = a_2E(X_2 \mid X_1 = x/a_1)$ for non-zero $a_i$, other related regressions may be inferred from those strictly in this class.

Regressions for $(X_1, X_2)$ in this class may be interpreted as regressions involving three independent stable variables as follows. Define $c_{\min} = \min(c_1, c_2)$ and $c_{\max} = \max(c_1, c_2)$, and let $Z_1$, $Z_2$, and $Z_3$ be independent identically distributed totally right-skewed ($\beta = 1$) $\alpha$-stable random variables with unity scale parameter. Then $(X_1, X_2)$ is distributed as $(Y_1, Y_2)$, where
\[
Y_j = c_{\min}^{1/\alpha}Z_1 \pm (-1)^{j+1}(c_{\max}-c_{\min})^{1/\alpha}Z_2 - (1-c_{\max})^{1/\alpha}Z_3
\]
and the upper sign prevails if $c_{\max} = c_1$, and the lower sign prevails otherwise. For example, if $c_1 = 0.5$ and $c_2 = 1$, then the regression of $X_2$ on $X_1$ is the regression of $k(Z_1+Z_2)$ on $k(Z_1-Z_2)$ for the appropriate constant $k$ (here $k = 2^{-1/\alpha}$). From Proposition 3.1, the parameters defining the regressions in Theorem 3.2 are
\[
\sigma_1 = 1,\qquad
\beta_1 = 2c_1 - 1,\qquad
k_1 = (1-2c_2)\ln\sqrt{2},\qquad
k_0 = (1-2c_1)\ln\sqrt{2},\qquad
\kappa = 1 - 2|c_1-c_2|,
\]
and
\[
\lambda = 2c_2 - 1.
\]
We shall refer to the skewness parameter for $X_2$ as $\beta_2$. For the present class of distributions, $\beta_2$ is equal to $\lambda$. The asymptotic results of Corollary 3.2 are illustrated in the graphs, and translate to the present parameterization as follows:
\[
\lim_{x\to+\infty}\frac{E(X_2 \mid X_1 = x)}{x} =
\begin{cases}
1 & \text{if } c_2 \ge c_1,\\
2(c_2/c_1) - 1 & \text{if } c_1 > c_2,
\end{cases}
\]
and
\[
\lim_{x\to-\infty}\frac{E(X_2 \mid X_1 = x)}{x} =
\begin{cases}
1 - 2\,\dfrac{c_2-c_1}{1-c_1} & \text{if } c_2 \ge c_1,\\
1 & \text{if } c_1 > c_2.
\end{cases}
\]
The limits here do not depend on the value of $\alpha$. Moreover, by Corollary 3.3, the regression is linear when $c_1 = c_2$ (this corresponds here to $X_1 = X_2$). The regression functions are computed according to Theorems 3.1 and 3.2. Unfortunately, the functions $H(x)$, $U(x)$, $V(x)$, and the density function $f_{X_1}(x)$ do not in general have representations in terms of elementary functions, and thus their values are computed via numerical integration. See Hardin, Samorodnitsky and Taqqu [6] for a detailed discussion of the numerical procedures. For reference, a few density functions for selected values of $\alpha$ are plotted in Figures 2 and 3.

< Display here Figures 2 and 3 >

Figure 2 illustrates the symmetric densities for the values $\alpha = 0.5, 0.9, 1.0, 1.1, 1.5$ and $1.9$. There is a natural ordering in that for smaller $\alpha$, the value of the density at the mode is higher, the tails are larger, and the width of the modal peak is narrower. Figure 3 shows totally right-skewed densities for the same values of $\alpha$. Here, the "ordering" is not continuous across the value $\alpha = 1$. For $\alpha > 1$, the mode is negative, while the tail is heavier on the positive axis, resulting in a zero-mean density. As $\alpha$ approaches one from above, the mode becomes more negative, and the right tail becomes heavier. For $\alpha < 1$, the densities are non-zero only on the positive axis, with increasing mode as $\alpha$ approaches one from below. For $\alpha = 1$, the mode is negative, although the modal peak has more mass to the right of zero than to the left. In no discernible way, however, do the skewed densities approach the skewed $\alpha = 1$ density as $\alpha$ approaches one, as is true in the symmetric case. This is due to the choice of parameters in the (marginal) characteristic function. There is a different choice which makes the characteristic function continuous as $\alpha \to 1$ (see Zolotarev [17]).

In the graphs that follow, two of the parameters $\alpha$, $c_1$, and $c_2$ are held constant while the third varies. Although not all possible variations are illustrated, much of the behavior for parameter choices not illustrated can be correctly inferred from the graphs.

< Display here Figure 4 >

Figure 4 corresponds to $c_1 = 0.5$ and $c_2 = 1$, and hence to $\beta_1 = 0$ and $\beta_2 = \lambda = 1$. It shows the regression of a totally right-skewed variable $X_2$ upon a symmetric variable $X_1$ for selected $\alpha$, or equivalently, the regression of $k(Z_1+Z_2)$ on $k(Z_1-Z_2)$ as mentioned above. When $\alpha \neq 1$, the value of these regression functions at the origin has the same sign as $a = \tan\frac{\pi\alpha}{2}$, and hence is positive for $\alpha < 1$ and negative for $\alpha > 1$.

< Display here Figure 5 >

Figure 5 corresponds to $c_2 = 0.5$ and various values of $c_1$. It represents the regression of a symmetric variable $X_2$ upon variables $X_1$ with skewness $\beta_1$ varying from 0 to 1, for the case $\alpha = 1.9$. The value $c_1 = 0.5$ corresponds to $X_1$ symmetric, in which case $X_1 = X_2$ and the regression is linear. When $c_1 = 1$, $X_1$ is totally right-skewed and the regression is linear at zero.

< Display here Figures 6 and 7 >

In Figure 6, $X_2$ of varying skewness is regressed on a symmetric $X_1$ for the value $\alpha = 1.5$. Here $c_1 = 0.5$. When $c_2 = 0.5$, the regression is linear since $X_2 = X_1$. The curious non-zero intersection of the regression lines occurs for all fixed values of $\beta_1$ at an $x$ value depending on $\alpha$, but does not occur for values of $\alpha < 1$ (see Figure 12).
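As noted above, the densities (like the functions $H$, $U$, $V$) must be evaluated by numerical integration. The following is a minimal sketch of such a computation, assuming SciPy and the characteristic-function parameterization with $a = \tan\frac{\pi\alpha}{2}$ used here ($\alpha \neq 1$); it is an illustration in the spirit of [6], not the authors' actual procedure:

```python
import numpy as np
from scipy.integrate import quad

def stable_pdf(x, alpha, beta=0.0, sigma=1.0):
    """Stable density by numerical inversion of the characteristic function
    exp{-sigma^alpha |t|^alpha (1 - i*beta*sign(t)*tan(pi*alpha/2))};
    valid for alpha != 1 (the tan factor diverges at alpha = 1)."""
    a = np.tan(np.pi * alpha / 2.0)
    g = lambda t: (np.exp(-(sigma * t) ** alpha)
                   * np.cos(t * x - beta * a * (sigma * t) ** alpha))
    val, _ = quad(g, 0.0, 50.0, limit=1000)  # e^{-t^alpha} kills the tail
    return val / np.pi

# alpha = 2 is Gaussian with variance 2*sigma^2, giving a closed-form check:
assert abs(stable_pdf(0.0, 2.0) - 1.0 / (2.0 * np.sqrt(np.pi))) < 1e-7
# A totally right-skewed density (beta = 1, alpha > 1) has a heavier right tail:
assert stable_pdf(5.0, 1.5, beta=1.0) > stable_pdf(-5.0, 1.5, beta=1.0)
```

Plotting `stable_pdf` over a grid of $x$ for the $\alpha$ values listed above reproduces the qualitative behavior described for Figures 2 and 3.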
Figure 7 corresponds also to $\alpha = 1.5$, but this time it is $X_2$ that is symmetric ($\beta_2 = 0$).

< Display here Figures 8 and 9 >

In Figures 8 and 9, $\alpha$ equals 1.1. Figure 8 should be compared to Figure 7, because they both illustrate the regression of a symmetric random variable $X_2$ on random variables $X_1$ of varying skewness. Figure 9 shows the regression of a totally right-skewed $X_2$ upon $X_1$ of varying skewness. Here $c_2 = 1$ and hence $\beta_2 = \lambda = 1$. As the skewness of $X_1$ approaches that of $X_2$, the regression function approaches the identity, yet the left asymptote always has slope $-1$.

< Display here Figure 10 >

In Figure 10, $\alpha = 1$. The parameter $c_1$ is chosen to be 0.9, so that $X_1$ has skewness 0.8. The skewness of $X_2$ varies from $-0.8$ to 1. (For $X_2$ of skewness $-1$, the regression is the negative of that for $X_2$ of skewness 1.) The value $c_2 = 0.9$ corresponds to $X_2 = X_1$, in which case the regression is linear. This graph shows that a small change in skewness can result in a large change in the global shape of the regression function.

< Display here Figure 11 >

Figure 11 represents the regressions of variables of varying skewness upon a symmetric variable for the value $\alpha = 0.9$. The value $c_2 = 0.5$ corresponds to $X_2 = X_1$, in which case the regression is linear. This plot should be compared with the case $\alpha = 1.5$ illustrated in Figure 6.

< Display here Figures 12 and 13 >

Figures 12 and 13 both correspond to $\alpha = 0.5$, but $\beta_1 = 0$ in Figure 12 whereas $\beta_2 = 0$ in Figure 13. Observe that Figures 6 and 12, where $\beta_1 = 0$, have roughly the same shape. So do Figures 7, 8 and 13, where $\beta_2 = 0$. In Figure 13 the regression for $\beta_1 = 1$ ($c_1 = 1$) is defined only for $x \ge 0$, because the density of $X_1$ has support only on $[0, \infty)$. It is linear with slope $\kappa = 0$.
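The piecewise limits quoted earlier can be cross-checked against the general asymptotic slopes of Corollary 3.2, $\kappa + (\lambda-\kappa\beta_1)/(1+\beta_1)$ as $x \to +\infty$ and $\kappa - (\lambda-\kappa\beta_1)/(1-\beta_1)$ as $x \to -\infty$, using $\kappa = 1-2|c_1-c_2|$, $\lambda = 2c_2-1$, $\beta_1 = 2c_1-1$. A short sketch (restricting $c_1, c_2$ away from 0 and 1 to avoid the degenerate denominators):

```python
import random

def slopes_from_parameters(c1, c2):
    """Asymptotic slopes of E(X2 | X1 = x)/x from Corollary 3.2,
    with kappa, lambda, beta1 expressed through c1, c2."""
    kappa = 1 - 2 * abs(c1 - c2)
    lam = 2 * c2 - 1
    beta1 = 2 * c1 - 1
    plus = kappa + (lam - kappa * beta1) / (1 + beta1)    # x -> +infinity
    minus = kappa - (lam - kappa * beta1) / (1 - beta1)   # x -> -infinity
    return plus, minus

def slopes_piecewise(c1, c2):
    """The same limits, written piecewise in c1, c2."""
    plus = 1.0 if c2 >= c1 else 2 * c2 / c1 - 1
    minus = 1 - 2 * (c2 - c1) / (1 - c1) if c2 >= c1 else 1.0
    return plus, minus

random.seed(0)
for _ in range(1000):
    c1, c2 = random.uniform(0.05, 0.95), random.uniform(0.05, 0.95)
    p1, m1 = slopes_from_parameters(c1, c2)
    p2, m2 = slopes_piecewise(c1, c2)
    assert abs(p1 - p2) < 1e-9 and abs(m1 - m2) < 1e-9
```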
Figure 2: Symmetric Stable Density Functions, $\beta = 0$

Figure 3: Skewed Stable Density Functions, $\beta = 1$
Figure 4: Regression Functions for $\beta_1 = 0$, $\beta_2 = 1$, varying $\alpha$

Figure 5: Regression Functions for $\alpha = 1.9$, $\beta_1 = 2c_1-1$ and $\beta_2 = 0$
Figure 6: Regression Functions for $\alpha = 1.5$, $\beta_1 = 0$ and $\beta_2 = 2c_2-1$

Figure 7: Regression Functions for $\alpha = 1.5$, $\beta_1 = 2c_1-1$ and $\beta_2 = 0$
Figure 8: Regression Functions for $\alpha = 1.1$, $\beta_1 = 2c_1-1$ and $\beta_2 = 0$

Figure 9: Regression Functions for $\alpha = 1.1$, $\beta_1 = 2c_1-1$ and $\beta_2 = 1$
Figure 10: Regression Functions for $\alpha = 1.0$, $\beta_1 = 0.8$ and $\beta_2 = 2c_2-1$

Figure 11: Regression Functions for $\alpha = 0.9$, $\beta_1 = 0$ and $\beta_2 = 2c_2-1$
Figure 12: Regression Functions for $\alpha = 0.5$, $\beta_1 = 0$ and $\beta_2 = 2c_2-1$

Figure 13: Regression Functions for $\alpha = 0.5$, $\beta_1 = 2c_1-1$ and $\beta_2 = 0$
References

[1] Cambanis, S. (1984). Similarities and contrasts between Gaussian and other signals. Fifth Aachen Colloquium on Mathematical Methods in Signal Processing, 113-120. Butzer, P.L., ed. Technische Hochschule Aachen.

[2] Cambanis, S. and Miller, G. (1981). Linear problems in pth order and stable processes. SIAM J. Appl. Math. 41, 43-69.

[3] Feller, W. (1971). An Introduction to Probability Theory and its Applications. Vol. 2, 3rd edition, Wiley, New York.

[4] Hardin, C.D., Jr. (1982). On the linearity of regression. Z. Wahrscheinlichkeitsth. verw. Geb. 61, 293-302.

[5] Hardin, C.D., Jr. (1984). Skewed stable variables and processes. Technical Report 79, Center for Stoch. Proc., U.N.C. Chapel Hill.

[6] Hardin, C.D., Jr., Samorodnitsky, G. and Taqqu, M.S. (1990). Numerical computation of non-linear stable regression functions. Proceedings of the Workshop on Stable Processes and Related Topics, Birkhäuser, Boston. To appear.

[7] Ibragimov, I.A. and Linnik, Yu.V. (1971). Independent and Stationary Sequences of Random Variables. Wolters-Noordhoff, The Netherlands.

[8] Kanter, M. (1972). Linear sample spaces and stable processes. J. Funct. Anal. 9, 441-459.

[9] Kuelbs, J. (1973). A representation theorem for symmetric stable processes and stable measures on H. Zeitschrift Wahrsch. verw. Geb. 26, 259-271.

[10] Lukacs, E. (1970). Characteristic Functions. Second edition. Charles Griffin & Co., London.

[11] Mandelbrot, B.B. and Wallis, J.R. (1968). Noah, Joseph and operational hydrology. Water Resources Research 4, 909-918.

[12] Miller, G. (1978). Properties of certain symmetric stable distributions. J. Multivariate Anal. 8, 346-360.

[13] Samorodnitsky, G. and Taqqu, M.S. (1989). Conditional moments of stable random variables. Preprint.

[14] Samorodnitsky, G. (1988). Extrema of skewed stable processes. Stoch. Proc. Appl. 30, 17-39.

[15] Titchmarsh, E.C. (1986). Introduction to the Theory of Fourier Integrals. 3rd edition, Chelsea, New York.

[16] Weron, A. (1984). Stable processes and measures, a survey. Prob. Theory in Vector Spaces III, Lecture Notes in Math. 1080, 306-364, Springer Verlag.

[17] Zolotarev, V.M. (1986). One-dimensional Stable Distributions. Volume 65 of Translations of Mathematical Monographs, American Mathematical Society.