On joint sum/max stability and sum/max domains of attraction

arXiv:1606.03109v1 [math.PR] 9 Jun 2016

ON JOINT SUM/MAX STABILITY AND SUM/MAX DOMAINS OF ATTRACTION

KATHARINA HEES AND HANS-PETER SCHEFFLER

Abstract. Let (W_i, J_i)_{i∈N} be a sequence of i.i.d. [0, ∞)×R-valued random vectors. Considering the partial sum of the first component and the corresponding maximum of the second component, we are interested in the limit distributions that can be obtained under an appropriate scaling. In the case that W_i and J_i are independent, the joint distribution of the sum and the maximum is the product measure of the limit distributions of the two components. But if we allow dependence between the two components, this dependence can still appear in the limit, and we need a new theory to describe the possible limit distributions. This is achieved via harmonic analysis on semigroups, which can be utilized to characterize the scaling limit distributions and describe their domains of attraction.

1. Introduction

Limit theorems for partial sums and partial maxima of sequences of i.i.d. random variables have a very long history and form the foundation of many applications of probability theory and statistics. The theories, but not the methods, in those two cases parallel each other in many ways. In both cases the class of possible limit distributions, namely the sum-stable and the max-stable laws, is well understood. Moreover, the characterization of domains of attraction is in both cases based on regular variation; see e.g. [5, 4, 6, 9] to name a few. The methods used in the analysis of those two cases appear, at least at first glance, to be completely different. In the sum case one usually uses Fourier or Laplace transform methods, whereas in the max case the distribution function (CDF) is used. However, from a more abstract point of view these two methods are almost identical: they are both harmonic analysis methods on the plus resp. the max semigroup. Surprisingly, a thorough analysis of the joint convergence of the sum and the maximum of i.i.d. random vectors, where the sum is taken in the first coordinate and the max in the second coordinate, has never been carried out. Of course, if the components of the random vector are independent, one can use the classical theories componentwise and get joint convergence. To our knowledge, the only other case considered is the case of complete dependence where the components are identical, see [3]. The purpose of this paper is to fill this gap in the literature and to present a theory that solves this problem in complete generality. Moreover, there is a need for a general theory describing the dependence between the components of the limit distributions


of sum/max stable laws. For example, in [14] on page 1862 it is explicitly asked how to describe such limit distributions. Moreover, there are various stochastic process models and their limit theorems that are constructed from the sum of non-negative random variables W_i, interpreted as waiting times between events of magnitude J_i, which may describe the jumps of a particle; in particular the continuous time random maxima processes studied in [7, 10], or the shock models studied in [11, 12, 13, 1, 8]. In those papers it is either assumed that the waiting times W_i and the jumps J_i are independent or asymptotically independent, meaning that the components of the limiting random vector are independent. Motivated by these applications, in this paper we only consider the case of non-negative summands. More precisely, let (W_i, J_i)_{i∈N} be a sequence of i.i.d. R_+ × R-valued random vectors. The random variables W_i and J_i can be dependent. Furthermore we define the partial sum

(1.1)    S(n) := \sum_{i=1}^{n} W_i

and the partial maximum

(1.2)    M(n) := \bigvee_{i=1}^{n} J_i := \max\{J_1, \ldots, J_n\}.
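The quantities in (1.1) and (1.2) are straightforward to simulate. The following minimal sketch draws Pareto waiting times W_i and couples the jumps to them via J_i = W_i^{β/α}; both the Pareto choice and this particular coupling are our illustrative assumptions, not part of the general setup.

```python
import random
import math

random.seed(1)

beta, alpha = 0.5, 1.0  # hypothetical tail indices for W and J (illustrative)

def draw_pair():
    # Pareto(beta) waiting time: P(W > w) = w^(-beta) for w >= 1
    w = random.random() ** (-1.0 / beta)
    # couple the jump to the waiting time (our illustrative choice)
    j = w ** (beta / alpha)
    return w, j

def sum_max(n):
    # compute S(n) and M(n) of (1.1) and (1.2) along one sample path
    s, m = 0.0, -math.inf
    for _ in range(n):
        w, j = draw_pair()
        s += w
        m = max(m, j)
    return s, m

n = 10_000
s, m = sum_max(n)
# the classical marginal scalings: a_n = n^(-1/beta), b_n = n^(-1/alpha)
print(n ** (-1.0 / beta) * s, n ** (-1.0 / alpha) * m)
```

With this coupling J_i = W_i^{1/2}, so M(n)^2 ≤ S(n) holds pathwise, a simple sanity check on the dependence between the two components.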

Assume now that there exist constants a_n, b_n > 0 and c_n ∈ R such that

(1.3)    (a_n S(n), b_n (M(n) − c_n)) ⇒ (D, A)   as n → ∞,

where A and D are non-degenerate. We want to answer the following questions:
(i) How can we characterize the joint distribution of (D, A) in (1.3)?
(ii) How can we describe the dependence between D and A?
(iii) Are there necessary and sufficient conditions on (W, J) such that the convergence in (1.3) is fulfilled?
Observe that by the classical theory of sum- or max-stability it follows, by projecting on either coordinate in (1.3), that D has a β sum-stable distribution for some 0 < β < 1 and A has one of the three extreme value distributions. To answer all these questions we use harmonic analysis on the sum/max semigroup and derive a theory that subsumes both the classical theories of sum-stability and max-stability, respectively. This paper is organized as follows: In Section 2, by applying results from abstract harmonic analysis on semigroups to the sum/max semigroup defined by

(1.4)    (x_1, t_1) +∨ (x_2, t_2) := (x_1 + x_2, t_1 ∨ t_2)

for all (x_1, t_1), (x_2, t_2) ∈ R_+ × R, we develop the basic machinery. We will give a Lévy–Khintchine type formula for sum/max infinitely divisible laws based on a modified Laplace transform on the semigroup, as well as convergence criteria for triangular arrays. These methods are then used in Section 3 to answer questions (i), (ii) and


(iii) in the α-Fréchet case, where we additionally assume that A in (1.3) has an α-Fréchet distribution. The general case then follows by transforming the second component in (1.3) to the 1-Fréchet case. In Section 4 we present some examples showing the usefulness of our results and methods. Technical proofs are given in the appendix.

2. Harmonic Analysis on semigroups

Even though the random variables J_i in (1.1) are real valued, in extreme value theory it is more natural to consider random variables with values in the two-point compactification R̄ = [−∞, ∞]. Observe that −∞ is the neutral element with respect to the max operation. The framework for analyzing the convergence in (1.3) is the Abelian topological semigroup (R_+ × R, +∨), where +∨ is defined in (1.4). Observe that the neutral element is (0, −∞). The semigroup operation +∨ naturally induces a convolution ⊛ on M^b(R_+ × R), the set of bounded measures. Indeed, let Π((s_1, y_1), (s_2, y_2)) := (s_1 + s_2, y_1 ∨ y_2). For µ_1, µ_2 ∈ M^b(R_+ × R) we define µ_1 ⊛ µ_2 = Π(µ_1 ⊗ µ_2), where µ_1 ⊗ µ_2 denotes the product measure. Then we have for independent R_+ × R-valued random vectors (W_1, J_1) and (W_2, J_2) that

P_{(W_1,J_1)} ⊛ P_{(W_2,J_2)} = P_{(W_1,J_1) +∨ (W_2,J_2)} = P_{(W_1+W_2, J_1∨J_2)}.

The natural transformation on the space of bounded measures for the usual convolution, which turns the convolution into a product, is the Fourier or Laplace transform. We will now introduce a similar concept on our semigroup (R_+ × R, +∨) and present its basic properties. In order to do so we first recall some basic facts about Laplace transforms on semigroups. On an arbitrary semigroup S a generalized Laplace transform L : µ → L(µ) is defined by

L(µ)(s) = \int_{\hat S} ρ(s) \, dµ(ρ),   s ∈ S,

where Ŝ is the set of all bounded semicharacters on S and µ ∈ M^b(Ŝ) (see 4.2.10 in [2]). A semicharacter on (S, ◦) is a function ρ : S → R with the properties
(i) ρ(e) = 1;
(ii) ρ(s ◦ t) = ρ(s)ρ(t) for s, t ∈ S.
We now consider the topological semigroup S := (R_+ × R, +∧) with neutral element e = (0, ∞), where +∧ is defined as

(2.1)    (x_1, t_1) +∧ (x_2, t_2) := (x_1 + x_2, t_1 ∧ t_2)

for all (x_1, t_1), (x_2, t_2) ∈ R_+ × R. The set of bounded semicharacters on S is given by

(2.2)    Ŝ = \{ e^{−t·} 1_{\{∞\}}(·),\; e^{−t·} 1_{[x,∞]}(·),\; e^{−t·} 1_{(x,∞]}(·) : t ∈ [0, ∞],\, x ∈ [−∞, ∞) \}


with ∞ · s = ∞ for s > 0 and 0 · ∞ = 0; hence for t = ∞ we get e^{−t·} = 1_{\{∞\}}(·). We consider only a subset of Ŝ, which we denote by S̃:

(2.3)    S̃ := \{ ρ_{t,x}(s, y) := e^{−ts} 1_{[−∞,y]}(x) : t ∈ [0, ∞),\, x ∈ [−∞, ∞] \}.

This is again a topological semigroup under pointwise multiplication, and the neutral element is the constant function 1. It is easy to see that this set of semicharacters together with pointwise multiplication is isomorphic to (R_+ × R, +∨). Hence, with a little abuse of notation, by identifying measures on S̃ with measures on R_+ × R we can define a Laplace transform for measures on (R_+ × R, +∨).

Definition 2.1. For bounded measures µ on R_+ × R, the CDF-Laplace transform (short: C-L transform, or C-L function) L : µ → L(µ) is given by

(2.4)    L(µ)(s, y) := \int_{R_+ × R} e^{−st} 1_{[−∞,y]}(x) \, µ(dt, dx),   (s, y) ∈ R_+ × R.

Observe that setting s = 0 yields the CDF of the second component, whereas setting y = ∞ yields the usual Laplace transform of the first component. That is, if we consider a random vector (W, J) on R_+ × R with joint distribution µ and put s = 0 resp. y = ∞, we get

(2.5)    L(µ)(0, y) = µ(R_+ × [−∞, y]) = P\{J ≤ y\} = F_J(y)

resp.

(2.6)    L(µ)(s, ∞) = \int_0^∞ e^{−st} P_W(dt) = E[e^{−sW}] = \tilde{P}_W(s),

where \tilde{P}_W is the Laplace transform of P_W and F_J the distribution function of J, which explains the name CDF-Laplace transform. In the following we collect some important properties of the C-L transform needed for our analysis.

Lemma 2.2. A normalized function ϕ on (R_+ × R, +∧) (meaning that ϕ(0, ∞) = 1) is the C-L transform of a probability measure µ on R_+ × R if and only if ϕ is positive semidefinite, ϕ(0, y) is the distribution function of a probability measure on R and ϕ(s, ∞) is the Laplace transform of a probability measure on R_+.

Proof. See Appendix. □

Proposition 2.3. Let µ_1, µ_2, µ ∈ M^1(R_+ × R) and α, β ∈ R. Then
(a) L(αµ_1 + βµ_2)(s, y) = αL(µ_1)(s, y) + βL(µ_2)(s, y) for all (s, y) ∈ R_+ × R.
(b) L(µ_1 ⊛ µ_2)(s, y) = L(µ_1)(s, y) · L(µ_2)(s, y) for all (s, y) ∈ R_+ × R.
(c) µ_1 = µ_2 if and only if L(µ_1)(s, y) = L(µ_2)(s, y) for all (s, y) ∈ R_+ × R.
(d) 0 ≤ L(µ)(s, y) ≤ 1 for all (s, y) ∈ R_+ × R.
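The multiplicativity in part (b) can be checked numerically: for independent random vectors, the empirical C-L transform of the +∨-convolution should factor into the product of the marginal C-L transforms. The samplers below (an exponential W with a Gaussian-coupled J, and an independent second pair) are arbitrary illustrative choices, not taken from the paper.

```python
import random
import math

random.seed(0)
N = 200_000

def sample_mu1():
    w = random.expovariate(1.0)      # W1 ~ Exp(1)
    j = w + random.gauss(0.0, 1.0)   # J1 dependent on W1 (illustrative coupling)
    return w, j

def sample_mu2():
    w = random.expovariate(2.0)      # W2 ~ Exp(2), independent of (W1, J1)
    j = random.gauss(0.0, 1.0)
    return w, j

def cl_transform(samples, s, y):
    # empirical C-L transform (2.4): E[exp(-s W) 1{J <= y}]
    return sum(math.exp(-s * w) * (j <= y) for w, j in samples) / len(samples)

m1 = [sample_mu1() for _ in range(N)]
m2 = [sample_mu2() for _ in range(N)]
# samples of mu1 ⊛ mu2 = law of (W1 + W2, J1 ∨ J2)
conv = [(w1 + w2, max(j1, j2)) for (w1, j1), (w2, j2) in zip(m1, m2)]

s, y = 0.7, 1.3
lhs = cl_transform(conv, s, y)
rhs = cl_transform(m1, s, y) * cl_transform(m2, s, y)
print(lhs, rhs)
```

Up to Monte Carlo error the two values agree, which is exactly Proposition 2.3(b) for independent vectors.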




Proof. Property (a) is obvious. The proof of (b) is also straightforward, because the convolution is the image measure under the mapping T : Ŝ × Ŝ → Ŝ with T(ρ_1, ρ_2) := ρ_1 ρ_2. Property (c) follows immediately from Theorem 4.2.8 in [2], and (d) is obvious. □

The Laplace transform is a very useful tool for proving weak convergence of sums of i.i.d. random variables via the so-called Continuity Theorem. The next theorem is the analogue of the Continuity Theorem for the C-L transform in the sum/max case.

Theorem 2.4 (Continuity Theorem for the C-L transform). Let µ_n, µ ∈ M^1(R_+ × R) for all n ∈ N. Then we have:
(a) If µ_n →w µ, then L(µ_n)(s, y) → L(µ)(s, y) for all (s, y) ∈ R_+ × R in which L(µ) is continuous. (This is the case for all but countably many y ∈ R.)
(b) If L(µ_n)(s, y) → ϕ(s, y) in all but countably many y ∈ R and lim_{s↓0} ϕ(s, ∞) = 1, then there exists a measure µ ∈ M^1(R_+ × R) with L(µ) = ϕ and µ_n →w µ.

Proof. (a): By the Portmanteau Theorem (see for example [6], Theorem 1.2.2) we know that \int f(t, x) µ_n(dt, dx) → \int f(t, x) µ(dt, dx) as n → ∞ for all real-valued, bounded functions f on R_+ × R with µ(Disc(f)) = 0, where Disc(f) is the set of discontinuities of f. Choosing f as f_{s,y}(t, x) := e^{−st} 1_{[−∞,y]}(x), it follows that L(µ_n)(s, y) → L(µ)(s, y) as n → ∞ for all (s, y) ∈ R_+ × R in which L(µ) is continuous. Because Disc(f_{s,y}) = R_+ × \{y\} and µ(R_+ × ·), being a probability measure, has at most countably many atoms, µ(Disc(f_{s,y})) ≠ 0 for at most countably many y ∈ R.
(b): Let (µ_n)_{n∈N} be a sequence of probability measures on R_+ × R. By Helly's Selection Theorem (see [4], Theorem 8.6.1) we know that for every subsequence (n_k)_{k∈N} there exist a further subsequence (n_{k_l})_{l∈N} and a measure µ ∈ M^{≤1}(R_+ × R) such that µ_{n_{k_l}} →v µ as l → ∞. Then µ is a subprobability measure, i.e. µ(R_+ × R) ≤ 1. With (a) it follows that L(µ_{n_{k_l}})(s, y) → L(µ)(s, y) as l → ∞ for all (s, y) where L(µ) is continuous. By assumption we know that L(µ_{n_{k_l}})(s, y) → ϕ(s, y) as l → ∞ pointwise in all but countably many y ∈ R. By uniqueness of the limit it then follows that L(µ)(s, y) = ϕ(s, y) along every subsequence (n_k)_{k∈N}, so the limits agree for all subsequences. Because of the uniqueness of the C-L transform it follows that

µ_n →v µ   as n → ∞,


where µ(R_+ × R) ≤ 1. Because of the assumption lim_{s↓0} ϕ(s, ∞) = 1 we get

1 = \lim_{s↓0} ϕ(s, ∞) = \lim_{s↓0} L(µ)(s, ∞) = \lim_{s↓0} \int_0^∞ e^{−st} µ(dt, R) = \int_0^∞ 1 \, µ(dt, R) = µ(R_+ × R).

Hence µ(R_+ × R) = 1, i.e. µ ∈ M^1(R_+ × R). □



The following lemma extends the convergence in Theorem 2.4 to a kind of uniform convergence on compact subsets needed later.

Lemma 2.5. Let µ_n, µ ∈ M^1(R_+ × R) for all n ∈ N and assume that L(µ)(s, y) is continuous in y ∈ R. If µ_n →w µ and (s_n, y_n) → (s, y), then L(µ_n)(s_n, y_n) → L(µ)(s, y) as n → ∞.

Proof. See Appendix.



As for any type of convolution structure, there is the concept of infinite divisibility.

Definition 2.6. A probability measure µ ∈ M^1(R_+ × R) is infinitely divisible with respect to +∨ (or short: +∨-infinitely divisible) if for all n ∈ N there exists a probability measure µ_n ∈ M^1(R_+ × R) such that µ_n^{⊛n} = µ.

Trivially, every distribution on R is max-infinitely divisible. The following example shows that sum-infinite divisibility in one component and max-infinite divisibility in the other component do not necessarily imply +∨-infinite divisibility.

Example 2.7. Let (X, Y) be a random vector whose distribution is given by
• P(X = k, Y = 1) = Pois_λ(k) if k ∈ N_0 is even;
• P(X = k, Y = 0) = Pois_λ(k) if k ∈ N_0 is odd;
• P(X = k, Y = l) = 0 for k ∈ N_0, l ≥ 2;
for a λ > 0. In particular, Y takes only the values 0 and 1, with P(Y = 1) = e^{−λ} \cosh λ and P(Y = 0) = e^{−λ} \sinh λ. Y is trivially max-infinitely divisible (every univariate distribution is max-infinitely divisible). The random variable X is Poisson distributed with parameter λ > 0 and hence sum-infinitely divisible. If (X, Y) were +∨-infinitely divisible, there would exist i.i.d. random vectors (X_1, Y_1), (X_2, Y_2) such that

(X, Y) =d (X_1, Y_1) +∨ (X_2, Y_2).

However, no distribution fulfils this. In fact, by necessity the support of (X_1, Y_1) has to be a subset of N_0 × \{0, 1\}, and (X_1, Y_1) has no mass in (0, 0), since otherwise P(X = 0, Y = 0) ≥ P(X_1 = 0, Y_1 = 0)^2 > 0, contradicting P(X = 0, Y = 0) = 0. Consequently there is no distribution for (X_1, Y_1) such that P(X_1 + X_2 = 1, Y_1 ∨ Y_2 = 0) is positive, since this event requires one of the two vectors to take the value (0, 0). But on the other hand we have P(X = 1, Y = 0) = Pois_λ(1) > 0.
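A small numeric companion to Example 2.7 (with the arbitrary choice λ = 1): the distribution places no mass at (0, 0) but strictly positive mass at (1, 0), which is exactly the pair of facts the contradiction rests on.

```python
import math

lam = 1.0  # lambda > 0, illustrative choice

def pois(k: int) -> float:
    # Poisson(lambda) probability mass function
    return math.exp(-lam) * lam ** k / math.factorial(k)

# distribution of (X, Y): mass Pois(k) at (k, 1) for even k, at (k, 0) for odd k
pXY = {(k, 1 if k % 2 == 0 else 0): pois(k) for k in range(50)}

p_00 = pXY.get((0, 0), 0.0)   # (X, Y) = (0, 0) carries no mass ...
p_10 = pXY.get((1, 0), 0.0)   # ... while (X, Y) = (1, 0) does
print(p_00, p_10)
```

Any factorization (X, Y) =d (X_1, Y_1) +∨ (X_2, Y_2) would need mass at (0, 0) to reach the point (1, 0), which the first printed value rules out.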


So (X, Y) cannot be +∨-infinitely divisible.

The next lemma shows that the weak limit of a sequence of +∨-infinitely divisible measures is +∨-infinitely divisible as well.

Lemma 2.8. Let µ_n, µ ∈ M^1(R_+ × R) for all n ∈ N and µ_n →w µ as n → ∞. If µ_n is +∨-infinitely divisible for each n ∈ N, then µ is +∨-infinitely divisible.

Proof. See Appendix.



In the following let x_0 denote the left endpoint of the distribution of A in (1.3), i.e.

x_0 := \inf\{ y ∈ R : F_A(y) > 0 \}.

For F_A there are two possible cases: either F_A(x_0) = 0, or there is an atom in x_0 so that F_A(x_0) > 0. Since the limit distributions of rescaled maxima are the extreme value distributions, which are continuous, in the following we will only consider the case where F_A(x_0) = 0. If ϕ is a C-L transform, we call the function Ψ : S → R with

(2.7)    ϕ = \exp(−Ψ)   and   Ψ(0, ∞) = 0

the C-L exponent (similar to the Laplace exponent in the context of Laplace transforms). The following theorem gives a Lévy–Khintchine representation for the C-L exponent of +∨-infinitely divisible distributions on the semigroup R_+ × R.

Theorem 2.9. A function ϕ is the C-L transform of a +∨-infinitely divisible measure µ on R_+ × R with left endpoint x_0 such that µ(R_+ × \{x_0\}) = 0 if and only if there exist an a ∈ R_+ and a Radon measure η on R_+ × [x_0, ∞] with η(\{(0, x_0)\}) = 0 satisfying the integrability conditions

(2.8)    \int_{R_+} \min(1, t) \, η(dt, [x_0, ∞]) < ∞   and   η(R_+ × (y, ∞]) < ∞ \ ∀ y > x_0,

such that Ψ := −\log(ϕ) has the representation

(2.9)    Ψ(s, y) = \begin{cases} a \cdot s + \int_{R_+} \int_{[x_0,∞]} \big(1 − e^{−st} \cdot 1_{[x_0,y]}(x)\big) \, η(dt, dx) & ∀ y > x_0 \\ ∞ & ∀ y ≤ x_0 \end{cases}

for all (s, y) ∈ R_+ × R. The representation given in (2.9) is unique and we write µ ∼ [x_0, a, η]. We call a measure η which fulfils (2.8) a Lévy measure on the semigroup (R_+ × R, +∨).

Proof. Let ϕ be the C-L transform of a +∨-infinitely divisible measure µ. Since

ϕ(s, y) = \int_{R_+ × R} e^{−st} 1_{[−∞,y]}(x) \, µ(dt, dx),

we have by our assumptions that 0 < ϕ(s, y) ≤ 1 for all (s, y) ∈ R_+ × (x_0, ∞]. On the set R_+ × (−∞, x_0) we have ϕ ≡ 0 and hence Ψ ≡ ∞. Observe further that


ϕ(s, x_0) = 0. In the following we consider ϕ restricted to S_{x_0} := R_+ × (x_0, ∞] with the semigroup operation +∧. The function ϕ is strictly positive, positive semidefinite and +∨-infinitely divisible; consequently the map Ψ : S_{x_0} → R with Ψ := −\log(ϕ) is, due to Theorem 3.2.7 in [2], negative semidefinite. With Theorem 4.3.20 in [2] it then follows that there exist an additive function q : S_{x_0} → [0, ∞) and a Radon measure η̃ ∈ M^+(Ŝ_{x_0} \setminus \{1\}) such that

(2.10)    Ψ(s, y) = Ψ(e) + q(s, y) + \int_{Ŝ_{x_0} \setminus \{1\}} (1 − ρ(s, y)) \, η̃(dρ),

where Ŝ_{x_0} is the set of semicharacters on the semigroup (S_{x_0}, +∧). We now show that the additive function q is of the form q(s, y) = a · s for some a ≥ 0. In view of the fact that ϕ(s, y) is continuous in s for an arbitrary but fixed y ∈ (x_0, ∞], Ψ has to be continuous, and hence so is q for s > 0 (the integral in (2.10) has at most a discontinuity in s = 0). Since q is additive we have q(s_1 + s_2, y_1 ∧ y_2) = q(s_1, y_1) + q(s_2, y_2) for any (s_1, y_1), (s_2, y_2) ∈ S_{x_0}. Because q is continuous in s for arbitrary but fixed y (up to s = 0) and q(s_1 + s_2, y) = q(s_1, y) + q(s_2, y), there exists an a(y) ≥ 0 such that q(s, y) = a(y) · s. Additionally we have

(2.11)    q(2s, y_1 ∧ y_2) = q(s + s, y_1 ∧ y_2) = q(s, y_1) + q(s, y_2) = a(y_1) · s + a(y_2) · s.

First assume y_1 < y_2. Then we have

(2.12)    q(2s, y_1 ∧ y_2) = q(2s, y_1) = a(y_1) · 2s.

Subtracting (2.12) from (2.11) we obtain a(y_1) = a(y_2). Since y_1, y_2 ∈ (x_0, ∞] were chosen arbitrarily, a(y) is independent of y and q has the form q(s, y) = a · s with an a ≥ 0. We divide the set Ŝ_{x_0} of semicharacters into two disjoint sets

Ŝ′_{x_0} = \{ e^{−t·} 1_{[x,∞]} : x ∈ [x_0, ∞],\, t ∈ [0, ∞] \},   Ŝ′′_{x_0} = \{ e^{−t·} 1_{(x,∞]} : x ∈ [x_0, ∞),\, t ∈ [0, ∞] \}.

Accordingly we split the integral in (2.10) and get, due to the fact that Ŝ′_{x_0} and Ŝ′′_{x_0} are isomorphic to [0, ∞] × [x_0, ∞] and [0, ∞] × [x_0, ∞) respectively,

(2.13)    Ψ(s, y) = a · s + \int_{[0,∞]} \int_{[x_0,∞]} \big(1 − e^{−st} \cdot 1_{[x,∞]}(y)\big) \, η_1(dt, dx) + \int_{[0,∞]} \int_{[x_0,∞)} \big(1 − e^{−st} \cdot 1_{(x,∞]}(y)\big) \, η_2(dt, dx),


where η_1 and η_2 are Radon measures on (R_+ × [x_0, ∞], +∨) resp. (R_+ × [x_0, ∞), +∨). Putting s = 0 in (2.13) we get

Ψ(0, y) = \int_{[0,∞]} \int_{[x_0,∞]} (1 − 1_{[x_0,y]}(x)) η_1(dt, dx) + \int_{[0,∞]} \int_{[x_0,∞)} (1 − 1_{[x_0,y)}(x)) η_2(dt, dx)
        = \int_{[x_0,∞]} 1_{(y,∞]}(x) η_1(R_+, dx) + \int_{[x_0,∞)} 1_{[y,∞]}(x) η_2(R_+, dx).

Since ϕ(0, y) is right continuous in y, Ψ(0, y) is right continuous in y, too. Consequently we have η_2(R_+ × \{y\}) = 0 for all y > x_0, or η_2 ≡ 0. If η_2(R_+ × \{y\}) = 0 it follows that η_2(A × \{y\}) = 0 for all A ∈ B(R_+). Hence Ψ has the representation

(2.14)    Ψ(s, y) = a · s + \int_{[0,∞]} \int_{[x_0,∞]} \big(1 − e^{−st} \cdot 1_{[x,∞]}(y)\big) \, η(dt, dx),

where η is a Radon measure on R_+ × [x_0, ∞]. Putting y = ∞ in (2.14), we get

Ψ(s, ∞) = a · s + \int_{[0,∞)} (1 − e^{−st}) η(dt, [x_0, ∞]) + 1_{(0,∞)}(s) · η(\{∞\} × [x_0, ∞]).

Since Ψ(s, ∞) is continuous in every s ∈ R_+ it follows that

(2.15)    η(\{∞\} × [x_0, ∞]) = 0.

Consequently Ψ has the representation

(2.16)    Ψ(s, y) = a · s + \int_{[0,∞)} \int_{[x_0,∞]} \big(1 − e^{−st} \cdot 1_{[x_0,y]}(x)\big) \, η(dt, dx)   ∀ y > x_0,

where η is a Radon measure on R_+ × [x_0, ∞] with η(\{(0, x_0)\}) = 0, possibly with η(R_+ × [x_0, ∞]) = ∞. Since Ψ(s, y) < ∞ for all (s, y) ∈ R_+ × (x_0, ∞], the conditions in (2.8) hold true. Conversely, assume that Ψ has the representation in (2.16) for all y > x_0. In view of the conditions (2.8), we get for all (s, y) ∈ R_+ × (x_0, ∞] that

Ψ(s, y) = a · s + \int_{R_+ × [x_0,∞]} \big(1 − e^{−st} 1_{[x_0,y]}(x)\big) η(dt, dx)
        = a · s + \int_{R_+ × [x_0,∞]} (1 − e^{−st}) η(dt, dx) + \int_{R_+ × [x_0,∞]} e^{−st} 1_{(y,∞]}(x) η(dt, dx)
        ≤ a · s + \int_{R_+} (1 − e^{−st}) η(dt, [x_0, ∞]) + η(R_+ × (y, ∞])
        < ∞.


ˆ x0 by h(t, x) = e−t· 1[x,∞] and We now define a homomorphism h : R+ × [x0 , ∞] − →S write Ψ as Z Ψ(s, y) = Ψ(0, ∞) + q(s, y) +

(1 − ρ(s, y))h(η)(dρ),

ˆ x \{1} S 0

where (0, ∞) is the neutral element on the semigroup (Sx0 , + ∧ ), q an additive function and h(η) the image measure of η under h. Due to Theorem 4.3.20 in [2] is Ψ a negative definite and bounded below function on Sx0 . Hence the function ϕ = exp(−Ψ) is positive definite and due to Proposition 3.2.7 in [2] infinite divisible. The function ϕ(0, y) = exp(−Ψ(0, y)) is an uniquely determined distribution function and ϕ(s, ∞) a Laplace transform due to Z (1 − e−st )η(dt, [x0 , ∞]) and Ψ(s, ∞) = a · s + R+ Z min(1, t) η(dt, [x0 , ∞]) < ∞, R+

Furthermore we have Ψ(0, ∞) = 0. Consequently ϕ is normalized and it follows from Lemma 2.2 that ϕ is the C-L transform of a measure µ ∈ M1 (R+ × [x0 , ∞]). Since ϕ(s, y) = 0 for all (s, y) ∈ R+ × [−∞, x0 ] we get that ϕ is the C-L transform of an + ∨ -infinite divisible probability measure µ on R+ × R with µ(R+ × [−∞, x0 ]) = 0.  Remark 2.10. If ϕ(0, x0 ) > 0 (or equivalently µ(R+ × {xo }) > 0) the case y = x0 in (2.9) has to be included in the case y > x0 . In the following we define the L´evy measure to be zero on R+ × [−∞, x0 ). Hence the C-L exponent in (2.9) can be uniquely represented by (2.17) Z Z (1 − e−st · 1[−∞,y] (x))η(dt, dx) ∀(s, y) ∈ R+ × (x0 , ∞],

Ψ(s, y) = a · s +

R+

R

for all (s, y) ∈ R_+ × [x_0, ∞] in the case ϕ(0, x_0) > 0. In the following we say that a set B ⊂ R_+ × [x_0, ∞] is bounded away from the origin (where by the origin we mean the point (0, x_0)) if dist((0, x_0), B) > 0, which means that there exists an ǫ > 0 such that for all x = (x_1, x_2) ∈ B we have x_1 > ǫ or x_2 > x_0 + ǫ. In view of the conditions (2.8), a Lévy measure assigns finite mass to all sets bounded away from the origin. We say that a sequence (η_n)_{n∈N} of measures converges vaguely to a Lévy measure η (with left endpoint x_0) if

\lim_{n→∞} η_n(B) = η(B)

for all B ∈ B(R_+ × [x_0, ∞]) with η(∂B) = 0 and dist((0, x_0), B) > 0. We write

η_n →v η   as n → ∞

in this case.
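This notion of convergence can be observed empirically for the rescaled laws n · P_{(a_n W, b_n J)} appearing later in Theorem 3.5. In the sketch below W is Pareto with tail index β and the jump is fully coupled as J = W^{β/α}; all of these are our illustrative choices. For this particular coupling a direct computation gives n · P(a_n W > x, b_n J > y) = min(x^{−β}, y^{−α}), so the estimates should stabilize near that value.

```python
import random

random.seed(7)
beta, alpha = 0.6, 1.2          # hypothetical tail indices (illustrative)
x, y = 0.5, 0.8                 # corner of a set bounded away from the origin

def estimate(n: int, reps: int = 400_000) -> float:
    # Monte Carlo estimate of n * P(a_n W > x, b_n J > y)
    an, bn = n ** (-1.0 / beta), n ** (-1.0 / alpha)
    hits = 0
    for _ in range(reps):
        w = random.random() ** (-1.0 / beta)   # Pareto(beta): P(W > w) = w^(-beta)
        j = w ** (beta / alpha)                # fully coupled jump (illustrative)
        if an * w > x and bn * j > y:
            hits += 1
    return n * hits / reps

limit = min(x ** (-beta), y ** (-alpha))       # eta((x,inf) x (y,inf)) for this coupling
e50, e500 = estimate(50), estimate(500)
print(e50, e500, limit)
```

Both estimates hover around the same finite value: this is the vague convergence on sets bounded away from the origin.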


Remark 2.11. Let Ψ_n, Ψ be the C-L exponents of +∨-infinitely divisible laws µ_n, µ, respectively, where µ has left endpoint x_0 ∈ [−∞, ∞] with µ(R_+ × \{x_0\}) = 0. If we want to show the convergence Ψ_n(s, y) → Ψ(s, y) as n → ∞ for all (s, y) ∈ R_+ × R, it is enough to show the convergence for all (s, y) ∈ R_+ × (x_0, ∞]. This is because L(µ)(0, x_0) = 0 and

L(µ_n)(s, y) ≤ L(µ_n)(0, x_0) → L(µ)(0, x_0) = 0   for y ≤ x_0 as n → ∞,

meaning that L(µ_n)(s, y) → 0 = L(µ)(s, y) as n → ∞ for all (s, y) ∈ R_+ × (−∞, x_0].

Lemma 2.12. Let (µ_n)_{n∈N} be a sequence of +∨-infinitely divisible probability measures on R_+ × R with µ_n ∼ [x_n, a_n, η_n] for each n ∈ N. Then µ_n →w µ, where µ ∼ [x_0, a, η] (and either x_n ≤ x_0 for all n ∈ N or x_n → x_0), if and only if
(a) a_n → a for an a ≥ 0,
(b) η_n →v′ η and
(c) \lim_{ǫ↓0} \lim_{n→∞} \int_{\{0≤t<ǫ\}} t \, η_n(dt, R) = 0.

Proof. See Appendix. □

Lemma 2.13. Let µ ∈ M^1(R_+ × [x_0, ∞]) and c > 0. We define a probability measure Π(c, µ) by

Π(c, µ) := e^{−c} \sum_{k=0}^{∞} \frac{c^k}{k!} \, µ^{⊛k}

on R_+ × [x_0, ∞], where µ^{⊛0} = ε_{(0,x_0)}. Then Π(c, µ) is +∨-infinitely divisible with Π(c, µ) ∼ [x_0, 0, c · µ] and L(Π(c, µ))(s, y) > 0 for all (s, y) ∈ R_+ × [x_0, ∞].

Proof. See Appendix. □
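Since L is affine and multiplicative under ⊛ (Proposition 2.3), the C-L transform of Π(c, µ) works out to exp(−c(1 − L(µ))), a compound-Poisson formula that is easy to verify by simulation. The choice of µ below (exponential W, uniform J, left endpoint x_0 = 0) is an illustrative assumption.

```python
import random
import math

random.seed(3)
c = 2.0
N = 200_000
s, y = 0.5, 0.6

def poisson(lam: float) -> int:
    # Knuth's method (fine for small lam)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

def sample_mu():
    return random.expovariate(1.0), random.random()   # (W, J), illustrative choice

# Monte Carlo estimate of the C-L transform of Pi(c, mu):
# draw K ~ Poisson(c), then the K-fold +v-combination of i.i.d. draws from mu
acc = 0.0
for _ in range(N):
    k = poisson(c)
    pairs = [sample_mu() for _ in range(k)]
    S = sum(w for w, _ in pairs)
    M = max((j for _, j in pairs), default=0.0)       # empty max = x0 = 0
    acc += math.exp(-s * S) * (M <= y)
lhs = acc / N

# closed form: L(Pi(c, mu)) = exp(-c * (1 - L(mu)))
Lmu = sum(math.exp(-s * w) * (j <= y) for w, j in
          (sample_mu() for _ in range(N))) / N
rhs = math.exp(-c * (1.0 - Lmu))
print(lhs, rhs)
```

The agreement of the two values illustrates that Π(c, µ) has C-L exponent c(1 − L(µ)), consistent with the Lévy representation [x_0, 0, c · µ].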



Lemma 2.14. Let µ_n, ν ∈ M^1(R_+ × R) for each n ∈ N with left endpoints x_n and x_0, respectively, where either x_n → x_0 or x_n ≤ x_0 for each n ∈ N. Then the following are equivalent:
(i) Π(n, µ_n) →w ν as n → ∞;
(ii) µ_n^{⊛n} →w ν as n → ∞.

Proof. See Appendix. □

Finally, the following theorem gives convergence criteria for triangular arrays on (R_+ × R, +∨).


Theorem 2.15. Let µ_n ∈ M^1(R_+ × R) for each n ∈ N with left endpoint x_n. Then µ_n^{⊛n} →w ν as n → ∞, where ν is +∨-infinitely divisible with ν ∼ [x_0, 0, Φ] (and either x_n → x_0 or x_n ≤ x_0 for all n ∈ N), if and only if
(a) n · µ_n →v Φ and
(b) \lim_{ǫ↓0} \lim_{n→∞} n · \int_{\{0≤t<ǫ\}} t \, µ_n(dt, R) = 0.

Definition 3.1. An R_+ × R-valued random vector (D, A) is called sum-max stable if for every n ∈ N there exist constants a_n, b_n > 0 and c_n ∈ R such that for i.i.d. copies (D_1, A_1), \ldots, (D_n, A_n) of (D, A) we have

(D_1, A_1) +∨ \cdots +∨ (D_n, A_n) = (D_1 + \cdots + D_n, A_1 ∨ \cdots ∨ A_n) =d (a_n^{−1} D, b_n^{−1} A + c_n).
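For independent components the stability identity of Definition 3.1 can be checked by simulation: with D standard one-sided 1/2-stable (so a_n^{−1} = n^2) and A α-Fréchet (so b_n^{−1} = n^{1/α} and c_n = 0), the rescaled n-fold sum/max should reproduce the original laws. All concrete choices below (β = 1/2, α = 1.5, the samplers) are ours for illustration; we compare empirical medians only.

```python
import random
import math
import statistics

random.seed(11)
alpha = 1.5                      # Frechet index (illustrative)
n, N = 3, 100_000                # fold count and Monte Carlo size

def levy():
    # standard one-sided 1/2-stable variable: 1/Z^2 with Z ~ N(0,1)
    return 1.0 / random.gauss(0.0, 1.0) ** 2

def frechet():
    # P(A <= x) = exp(-x^(-alpha)), sampled by inversion
    return (-math.log(random.random())) ** (-1.0 / alpha)

# n-fold sum and max of i.i.d. independent pairs, rescaled as in Definition 3.1
sums = [sum(levy() for _ in range(n)) / n ** 2 for _ in range(N)]
maxs = [max(frechet() for _ in range(n)) / n ** (1.0 / alpha) for _ in range(N)]

ms, mm = statistics.median(sums), statistics.median(maxs)
d_med = statistics.median(levy() for _ in range(N))
a_med = statistics.median(frechet() for _ in range(N))
print(ms, d_med, mm, a_med)
```

The medians of the rescaled n-fold quantities match those of single draws of D and A up to Monte Carlo error, as the identity predicts.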

Theorem 3.2. Let (D, A) be R_+ × R-valued with non-degenerate marginals. Then (D, A) is sum-max stable if and only if (D, A) is a limit distribution in (1.3).

Proof. Trivially, every sum-max stable random vector is a limit distribution in (1.3). Now assume that (D, A) is a non-degenerate limit distribution in (1.3). Fix any k ≥ 2. Then we have

(3.1)    (a_{nk} S(nk), b_{nk} (M(nk) − c_{nk})) ⇒ (D, A)   as n → ∞.


For i = 1, \ldots, k let

(S_n^{(i)}, M_n^{(i)}) = \Big( \sum_{j=1}^{n} W_{n(i−1)+j}, \; \bigvee_{j=1}^{n} J_{n(i−1)+j} \Big)

so that (S_n^{(1)}, M_n^{(1)}), \ldots, (S_n^{(k)}, M_n^{(k)}) are i.i.d. Moreover, by (1.3)

(a_n S_n^{(i)}, b_n (M_n^{(i)} − c_n)) ⇒ (D_i, A_i)   as n → ∞,

where (D_1, A_1), \ldots, (D_k, A_k) are i.i.d. copies of (D, A). Then we have

(a_n S_n^{(1)}, b_n (M_n^{(1)} − c_n)) +∨ \cdots +∨ (a_n S_n^{(k)}, b_n (M_n^{(k)} − c_n)) = (a_n S(nk), b_n (M(nk) − c_n)) ⇒ (D_1, A_1) +∨ \cdots +∨ (D_k, A_k)   as n → ∞.

Hence, in view of (3.1), convergence of types yields

\frac{a_{nk}}{a_n} → ã_k > 0,   \frac{b_{nk}}{b_n} → b̃_k > 0   and   b_n(c_{nk} − c_n) → c̃_k

as n → ∞ and therefore

(D_1, A_1) +∨ \cdots +∨ (D_k, A_k) =d (ã_k^{−1} D, b̃_k^{−1} A + c̃_k),

so (D, A) is sum-max stable. □



Definition 3.3. Let (D, A) be an R_+ × R-valued random vector. We say that the random vector (W, J) belongs to the sum-max domain of attraction of (D, A) if (1.3) holds for i.i.d. copies (W_i, J_i) of (W, J). We write (W, J) ∈ sum-max-DOA(D, A). If c_n = 0 in (1.3), we say that (W, J) belongs to the strict sum-max domain of attraction of (D, A) and write (W, J) ∈ sum-max-DOA_S(D, A).

Corollary 3.4. Let (D, A) be R_+ × R-valued with non-degenerate marginals. Then (D, A) is sum-max stable if and only if sum-max-DOA(D, A) ≠ ∅.

The next theorem characterizes the sum-max domain of attraction of (D, A) in the case where A has an α-Fréchet distribution.

Theorem 3.5. Let (W, J), (W_i, J_i)_{i∈N} be i.i.d. R_+ × R-valued random vectors. Furthermore assume that (D, A) is an R_+ × R-valued random vector, where D is strictly β-stable with 0 < β < 1 and A is α-Fréchet distributed with α > 0. Then the following are equivalent:
(a) (W, J) ∈ sum-max-DOA_S(D, A).
(b) There exist sequences (a_n)_{n∈N}, (b_n)_{n∈N} with a_n, b_n > 0 such that

n · P_{(a_n W, b_n J)} →v′ η   as n → ∞,

where η is a Lévy measure on (R_+ × R, +∨).


Then (D, A) is sum-max stable and has the Lévy representation [0, 0, η]. We can use the same sequences (a_n)_{n∈N} and (b_n)_{n∈N} in (a) and (b). Furthermore, (a_n) is regularly varying with index −1/β and (b_n) is regularly varying with index −1/α.

Remark 3.6. Since the left endpoint of the Fréchet distribution is x_0 = 0, the convergence in (b) means

n · P_{(a_n W, b_n J)}(B) → η(B)   as n → ∞

for all B ∈ B(R_+^2) with η(∂B) = 0 and dist((0, 0), B) > 0.

Proof. That assertion (a) implies (b) follows directly with Theorem 2.15. We assume that for sequences (a_n)_{n∈N}, (b_n)_{n∈N} with a_n > 0 and b_n > 0 we have

(3.2)    (a_n S(n), b_n M(n)) ⇒ (D, A)   as n → ∞.

We denote µ_n := P_{(a_n W, b_n J)} and µ := P_{(W,J)}. Since the (W_i, J_i)_{i∈N} are i.i.d. and distributed as (W, J), equation (3.2) is equivalent to

µ_n^{⊛n} →w P_{(D,A)}   as n → ∞, where P_{(D,A)} ∼ [0, 0, η].

Let F(x) = P\{J ≤ x\} denote the distribution function of J. In case the left endpoint of F is −∞, the left endpoint of F(b_n^{−1} x) is equal to −∞ for each n. If the left endpoint of F is any real number, the left endpoint of F(b_n^{−1} x) converges to x_0 = 0 as n → ∞. With Theorem 2.15 it then follows that

n · P_{(a_n W, b_n J)} →v′ η   as n → ∞.

That (b) implies (a) follows with Theorem 2.15 as well, if we show that

(3.3)    n · P_{(a_n W, b_n J)} →v′ η   as n → ∞

implies that

\lim_{ǫ↓0} \lim_{n→∞} n · \int_{\{0≤t<ǫ\}} t \, µ_n(dt, R) = 0.

Corollary 3.7. Under the assumptions of Theorem 3.5 we have

(3.4)    t · η = t^E η   for all t > 0

with E = diag(1/β, 1/α), where t^E = diag(t^{1/β}, t^{1/α}).


Proof. Since (a_n)_{n∈N} ∈ RV(−1/β) and (b_n)_{n∈N} ∈ RV(−1/α) in Theorem 3.5, we know that diag(a_n, b_n) ∈ RV(−E) in the sense of Definition 4.2.8 of [6]. Observe that

L\big(P_{(a_n \sum_{i=1}^{⌊nt⌋} W_i, \, b_n \bigvee_{i=1}^{⌊nt⌋} J_i)}\big)(ξ, x) = \Big(\big(L(P_{(a_n W_i, b_n J_i)})(ξ, x)\big)^{n}\Big)^{⌊nt⌋/n} → L(P_{(D,A)})^t(ξ, x)   as n → ∞,

so that

P_{(a_n \sum_{i=1}^{⌊nt⌋} W_i, \, b_n \bigvee_{i=1}^{⌊nt⌋} J_i)} →w P_{(D,A)}^t ∼ [0, 0, t · η],

where P_{(D,A)}^t is for t > 0 defined as the distribution whose C-L transform is given by L(P_{(D,A)})^t(ξ, x), and which hence has the Lévy representation [0, 0, t · η]. On the other hand, using a_n a_{⌊nt⌋}^{−1} → t^{1/β} and b_n b_{⌊nt⌋}^{−1} → t^{1/α} as n → ∞, we get

P_{(a_n \sum_{i=1}^{⌊nt⌋} W_i, \, b_n \bigvee_{i=1}^{⌊nt⌋} J_i)} = P_{(a_n a_{⌊nt⌋}^{−1} \cdot a_{⌊nt⌋} \sum_{i=1}^{⌊nt⌋} W_i, \; b_n b_{⌊nt⌋}^{−1} \cdot b_{⌊nt⌋} \bigvee_{i=1}^{⌊nt⌋} J_i)} →w P_{t^E (D,A)} ∼ [0, 0, t^E η].

Because of the uniqueness of the Lévy–Khintchine representation, the assertion follows. □

One of our aims was to describe the possible limit distributions that can appear as limits of the sum and the maximum of i.i.d. random variables; we call these limit distributions sum-max stable. With the harmonic analysis tools of Section 2 we have a method to describe sum-max infinitely divisible distributions, namely the Lévy–Khintchine representation (see Theorem 2.9). The sum-max stable distributions are a special case of the sum-max infinitely divisible distributions, and the next theorem describes the sum-max stable distributions through a representation of their Lévy measure.

Theorem 3.8 (Representation of the Lévy measure). Under the assumptions of Theorem 3.5, there exist constants C ≥ 0, K > 0 and a probability measure ω ∈ M^1(R) with ω(R_+) > 0 and \int_0^∞ x^α ω(dx) < ∞ such that the Lévy measure η of P_{(D,A)} on R_+^2 is given by

(3.5)    η(dt, dx) = ǫ_0(dt) \, C α x^{−α−1} dx + 1_{(0,∞)×R_+}(t, x) \, (t^{β/α} ω)(dx) \, K β t^{−β−1} dt.

Proof. First we define two measures

η_1((r, ∞) × B_1) := η((r, ∞) × B_1)   and   η_2(B_2) := η(\{0\} × B_2)

for all Borel sets B_1 ∈ B(R_+), B_2 ∈ B((0, ∞)) and r > 0. The Lévy measure η on R_+^2 \setminus \{(0, 0)\} of the limit distribution P_{(D,A)} can then be represented by

η(dt, dx) = ǫ_0(dt) η_2(dx) + 1_{(0,∞)×R_+}(t, x) η_1(dt, dx).

With Corollary 3.7 we get for all t > 0, setting E = diag(1/β, 1/α),

(3.6)    t · η_2(B_2) = (t^E η)(\{0\} × B_2) = η(\{0\} × t^{−1/α} B_2) = (t^{1/α} η_2)(B_2).


The measure η_2 is a Lévy measure of a probability distribution on the semigroup (R_+, ∨). If η_2 ≠ 0, there exists a distribution function F on R_+ such that F(y) = \exp(−η_2((y, ∞))) for all y > 0. From (3.6) it follows that F(y)^t = F(t^{−1/α} y) for all t > 0 and y > 0. Hence it follows (see the proof of Proposition 0.3 in [9]) that F(y) = \exp(−C y^{−α}) with C > 0 for all y > 0. So the measure η_2 on B((0, ∞)) is given by

(3.7)    η_2(dx) = C α x^{−α−1} dx.

The measure η_2 can also be the zero measure, so η_2 has the representation (3.7) with C ≥ 0. It remains to show that η_1 has the representation η_1(dt, dx) = (t^{β/α} ω)(dx) K β t^{−β−1} dt. For B_1 ∈ B(R_+) and r > 0 we define the set

T(r, B_1) := \{ (t, t^{β/α} x) : t > r, \, x ∈ B_1 \}.

All sets of this form are a ∩-stable generator of B((0, ∞) × R+ ). This follows because the map (t, x) → (t, tβ/α x) is a homeomorphism from (0, ∞) × R+ onto itself. Furthermore we have T (r, B1 ) = r βE T (1, B1 ) with E = diag(1/β, 1/α) and so we get with equation (3.4) that (3.8)

η1 (T (r, B1 )) = η1 (r βE T (1, B1) = (r −βE η1 )(T (1, B1 )) = r −β · η1 (T (1, B1 )).

Additionally we get for any probability measure ω on R and a constant K > 0 Z Z ∞Z β/α −β−1 (tβ/α ω)(dy)Kβt−β−1dt (t ω)(dy)Kβt dt = T (r,B1 ) r tβ/α B1 Z ∞ = (3.9) ω(B1 )Kβt−β−1 dt = ω(B1 )Kr −β . r

1 η (T (1, B1)) K 1

We define ω(B1 ) := where K is given by K := η1 (T (1, R+ )) > 0, since η1 6= 0, because of non-degeneracy and the fact that T (1, R+ ) is bounded away from zero. It then follows with (3.8) and (3.9) that Z −β −β η1 (T (r, B1 )) = r η1 (T (1, B1 )) = ω(B1 )r K = (tβ/α ω)(dy)Kβt−β−1dt T (r,B1 )

for all r > 0 and B1 ∈ B(R+ ). Altogether it follows that the L´evy measure has the ∨ ), it necessarily satisfies representation (3.5). Since η1 is a L´evy measure on (R+ ×R, + condition (2.8) so that for all y > 0 we have η1 (R+ × (y, ∞) < ∞. Using the above representation of η1 , a simple calculation shows that this is equivalent to Restablished ∞ α x ω(dx) < ∞. This concludes the proof.  0
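The functional equation $F(y)^t = F(t^{-1/\alpha}y)$ obtained from (3.6) is exactly the max-stability relation that forces the Fréchet form $F(y) = \exp(-Cy^{-\alpha})$. A quick numerical sanity check of this relation, with illustrative values of $C$ and $\alpha$ (not part of the proof):

```python
import math

# alpha-Frechet CDF F(y) = exp(-C * y**(-alpha)) coming from
# eta_2(y, infinity) = C * y**(-alpha); illustrative parameters only.
C, alpha = 2.0, 1.5

def F(y):
    return math.exp(-C * y ** (-alpha))

# the scaling relation F(y)**t = F(t**(-1/alpha) * y) from (3.6)
for t in (0.5, 2.0, 7.3):
    for y in (0.1, 1.0, 4.0):
        assert abs(F(y) ** t - F(t ** (-1.0 / alpha) * y)) < 1e-12

print("max-stability relation verified")
```

Any $C > 0$ and $\alpha > 0$ pass this check, which is why the proof can only pin down the Fréchet family rather than particular constants.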

With the next theorem we are able to construct random vectors which are in the sum-max domain of attraction of particular sum-max stable distributions.


Theorem 3.9.
Let $(W_i)_{i\in\mathbb{N}}$ be a sequence of i.i.d. $\mathbb{R}_+$-valued random variables with $W \stackrel{d}{=} W_i$ and $W \in \mathrm{DONAS}(D)$, where $D$ is strictly $\beta$-stable with $0 < \beta < 1$ and $\mathbb{E}\bigl(e^{-sD}\bigr) = \exp\bigl(-K\Gamma(1-\beta)s^\beta\bigr)$ with $K > 0$ for $s \geq 0$; this means that we can choose $a_n = n^{-1/\beta}$ in (1.3). Further let $(\bar J_i)_{i\in\mathbb{N}}$ be i.i.d. $\mathbb{R}$-valued random variables with
$$P\bigl(\bar J_i \in B_2 \mid W_i = t\bigr) = (t^{\beta/\alpha}\omega)(B_2) \quad \forall B_2 \in \mathcal{B}(\mathbb{R}), \tag{3.10}$$
where $\omega$ is a probability measure on $\mathbb{R}$ with $\omega(\mathbb{R}_+) > 0$ and $\int_0^\infty x^\alpha\,\omega(dx) < \infty$. Then the sequence $(W_i,\bar J_i)_{i\in\mathbb{N}}$ fulfils (3.2) with $a_n = n^{-1/\beta}$, $b_n = n^{-1/\alpha}$ and a limit distribution $P_{(D,A)}$ whose Lévy measure $\eta$ has the form (3.5) with $C = 0$. Furthermore, if we choose i.i.d. $(\tilde J_i)_{i\in\mathbb{N}}$ with $P(\tilde J_i \leq x) = \exp(-Cx^{-\alpha})$, $C > 0$, for all $x > 0$, such that $(W_i,\bar J_i)$ and $\tilde J_i$ are independent for all $i \in \mathbb{N}$, and we define $J_i := \tilde J_i \vee \bar J_i$, then $(W_i,J_i)_{i\in\mathbb{N}}$ fulfils (3.2) with $a_n = n^{-1/\beta}$, $b_n = n^{-1/\alpha}$ and a limit distribution $P_{(D,A)}$ whose Lévy measure has the representation (3.5) with $C > 0$.

Proof. We first consider the case $C = 0$. In view of Theorem 3.5 it is enough to show that for any continuity set $B \in \mathcal{B}(\mathbb{R}_+^2)$ with $\operatorname{dist}((0,0),B) > 0$ we have
$$n \cdot P_{(n^{-1/\beta}W,\,n^{-1/\alpha}\bar J)}(B) \xrightarrow[n\to\infty]{} \eta_1(B), \tag{3.11}$$
where $\eta_1$ is given by (3.5) with $C = 0$. First let $r > 0$ and $x \geq 0$. Then we get
\begin{align*}
n\,P_{(n^{-1/\beta}W,\,n^{-1/\alpha}\bar J)}\bigl((r,\infty)\times(x,\infty)\bigr)
&= n \cdot P\bigl(W > n^{1/\beta}r,\ \bar J > n^{1/\alpha}x\bigr) \\
&= n\int_0^\infty P\bigl(\bar J > n^{1/\alpha}x \mid W = t\bigr)\,1_{(r,\infty)}(n^{-1/\beta}t)\,P_W(dt) \\
&= n\int_0^\infty (t^{\beta/\alpha}\omega)(n^{1/\alpha}x,\infty)\,1_{(r,\infty)}(n^{-1/\beta}t)\,P_W(dt) \\
&= n\int_r^\infty (t^{\beta/\alpha}\omega)(x,\infty)\,P_{n^{-1/\beta}W}(dt) \\
&\xrightarrow[n\to\infty]{} \int_r^\infty (t^{\beta/\alpha}\omega)(x,\infty)\,K\beta t^{-\beta-1}\,dt
= \eta_1\bigl((r,\infty)\times(x,\infty)\bigr),
\end{align*}
where the last step follows from Proposition 1.2.20 in [6], since the set $(r,\infty)$ is bounded away from zero and furthermore the map $t \mapsto (t^{\beta/\alpha}\omega)(x,\infty)$ is continuous


and bounded. On the other hand, for $r \geq 0$ and $x > 0$ we get
\begin{align*}
n\,P_{(n^{-1/\beta}W,\,n^{-1/\alpha}\bar J)}\bigl((r,\infty)\times(x,\infty)\bigr)
&= n\,P\bigl(W > n^{1/\beta}r,\ \bar J > n^{1/\alpha}x\bigr) \\
&= n\int_r^\infty (t^{\beta/\alpha}\omega)(x,\infty)\,P_{n^{-1/\beta}W}(dt) \\
&= \int_0^\infty n\,P\bigl(n^{-1/\beta}W > \max(r,(u/x)^{-\alpha/\beta})\bigr)\,\omega(du).
\end{align*}
Observe that
$$n\,P\bigl(n^{-1/\beta}W > \max(r,(u/x)^{-\alpha/\beta})\bigr) \longrightarrow \int_{\max(r,(u/x)^{-\alpha/\beta})}^\infty K\beta t^{-\beta-1}\,dt = K\max\bigl(r,(u/x)^{-\alpha/\beta}\bigr)^{-\beta}$$
as $n \to \infty$. Moreover, since $W \in \mathrm{DONAS}(D)$, we know that there exists a constant $M > 0$ such that $P(W > t) \leq Mt^{-\beta}$ for all $t > 0$. Hence
$$n\,P\bigl(n^{-1/\beta}W > \max(r,(u/x)^{-\alpha/\beta})\bigr) \leq n\,P\bigl(W > n^{1/\beta}(u/x)^{-\alpha/\beta}\bigr) \leq Mx^{-\alpha}u^\alpha.$$
Since by assumption $\int_0^\infty u^\alpha\,\omega(du) < \infty$, dominated convergence yields
$$n\,P_{(n^{-1/\beta}W,\,n^{-1/\alpha}\bar J)}\bigl((r,\infty)\times(x,\infty)\bigr) \longrightarrow \int_0^\infty K\max\bigl(r,(u/x)^{-\alpha/\beta}\bigr)^{-\beta}\,\omega(du) = \eta_1\bigl((r,\infty)\times(x,\infty)\bigr)$$
as $n \to \infty$ again. Hence we have shown that for $r, x \geq 0$ with $\max(r,x) > 0$ we have $n\,P_{(n^{-1/\beta}W,\,n^{-1/\alpha}\bar J)}((r,\infty)\times(x,\infty)) \to \eta_1((r,\infty)\times(x,\infty))$ as $n\to\infty$, which implies (3.11). In view of Theorem 3.5 we therefore have

$$\Bigl(n^{-1/\beta}\sum_{i=1}^n W_i,\ n^{-1/\alpha}\bigvee_{i=1}^n \bar J_i\Bigr) \Longrightarrow (D,\bar A) \quad\text{as } n\to\infty,$$
and the Lévy measure $\eta_1$ of $(D,\bar A)$ is given by (3.5) with $C = 0$.

If we now choose a sequence of i.i.d. $\alpha$-Fréchet distributed random variables $(\tilde J_i)_{i\in\mathbb{N}}$ with $P(\tilde J_i \leq x) := \exp(-Cx^{-\alpha})$ which are independent of $(W_i,\bar J_i)$, it follows that
$$\Bigl[\Bigl(0,\ n^{-1/\alpha}\bigvee_{i=1}^n \tilde J_i\Bigr),\ \Bigl(n^{-1/\beta}\sum_{i=1}^n W_i,\ n^{-1/\alpha}\bigvee_{i=1}^n \bar J_i\Bigr)\Bigr] \Longrightarrow \bigl[(0,\tilde A),\,(D,\bar A)\bigr] \quad\text{as } n\to\infty.$$
The distribution of $(0,\tilde A)$ has the Lévy measure $\eta_2(dt,dx) = \epsilon_0(dt)\,C\alpha x^{-\alpha-1}\,dx$. Since $(W_i,\bar J_i)$ and $\tilde J_i$ are independent, the random vectors $(0,\tilde A)$ and $(D,\bar A)$ are also independent. With the continuous mapping theorem applied to the semigroup operation


$(+,\vee)$ it then follows that
$$\Bigl(n^{-1/\beta}\sum_{i=1}^n W_i,\ n^{-1/\alpha}\bigvee_{i=1}^n J_i\Bigr) \Longrightarrow (D,A) \quad\text{as } n\to\infty,$$
where $A := \tilde A \vee \bar A$. Hence the Lévy measure of the distribution of $(D,A)$ is $\eta := \eta_1 + \eta_2$ and thus has the representation (3.5) with $C > 0$. □

The next corollary characterizes the case of asymptotic independence, i.e. the case in which $D$ and $A$ are independent.

Corollary 3.10. The random variables $A$ and $D$ in Theorem 3.5 are independent if and only if in the Lévy representation (3.5) we have $C > 0$ and $\omega = \epsilon_0$.

Proof. If $A$ and $D$ are independent, the Lévy measure has the representation
$$\eta(dt,dx) = \epsilon_0(dt)\,\Phi_A(dx) + \epsilon_0(dx)\,\Phi_D(dt),$$
where $\Phi_A(dx) = C\alpha x^{-\alpha-1}\,dx$ with $C > 0$, $\alpha > 0$, and $\Phi_D(dt) = K\beta t^{-\beta-1}\,dt$ with $K > 0$, $0 < \beta < 1$. By Theorem 3.8 the Lévy measure also has the representation (3.5). The uniqueness of the Lévy measure implies that $C > 0$ and $t^{\beta/\alpha}\omega = \epsilon_0$, hence we get $\omega = \epsilon_0$. Conversely, if $C > 0$ and $\omega = \epsilon_0$, the Lévy measure is given by
$$\eta(dt,dx) = \epsilon_0(dt)\,C\alpha x^{-\alpha-1}\,dx + \epsilon_0(dx)\,K\beta t^{-\beta-1}\,dt.$$
This implies that the C-L exponent of $(D,A)$ is
$$\Psi(s,y) = \int_{\mathbb{R}_+^2}\bigl(1 - e^{-st}1_{[0,y]}(x)\bigr)\,\epsilon_0(dt)\,C\alpha x^{-\alpha-1}\,dx + \int_{\mathbb{R}_+^2}\bigl(1 - e^{-st}1_{[0,y]}(x)\bigr)\,\epsilon_0(dx)\,K\beta t^{-\beta-1}\,dt = -\log F_A(y) + \Psi_D(s),$$
which implies that $A$ and $D$ are independent. □
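The proof of Theorem 3.9 derives the same tail $\eta_1((r,\infty)\times(x,\infty))$ in two forms: as a $t$-integral and, after Fubini, as an $\omega$-integral. The following sketch compares the two numerically for the illustrative (hypothetical) choice $\omega = \mathrm{Exp}(1)$, for which $(t^{\beta/\alpha}\omega)(x,\infty) = \exp(-x\,t^{-\beta/\alpha})$; all parameter values are arbitrary:

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

beta, alpha, K = 0.6, 1.4, 1.0  # illustrative parameters, 0 < beta < 1
r, x = 0.7, 1.3

# t-integral form of the tail; substituting v = t**(-beta) turns
#   int_r^inf exp(-x * t**(-beta/alpha)) * K*beta*t**(-beta-1) dt
# into the finite-range integral K * int_0^{r**(-beta)} exp(-x * v**(1/alpha)) dv.
I_t = K * simpson(lambda v: math.exp(-x * v ** (1.0 / alpha)), 0.0, r ** (-beta))

# omega-integral form after Fubini:
#   K * int_0^inf max(r, (u/x)**(-alpha/beta))**(-beta) * exp(-u) du,
# truncated at u = 50 (the remaining tail is below exp(-50)).
I_u = simpson(lambda u: K * max(r, (u / x) ** (-alpha / beta)) ** (-beta) * math.exp(-u),
              1e-12, 50.0)

assert abs(I_t - I_u) < 1e-4
print(round(I_t, 6), round(I_u, 6))
```

The agreement of the two numbers mirrors the Fubini step in the proof: $u > t^{-\beta/\alpha}x$ if and only if $t > (u/x)^{-\alpha/\beta}$.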

The following proposition provides a representation for the C-L exponent of the sum-max stable distributions in the $\alpha$-Fréchet case.

Proposition 3.11. The C-L exponent of the limit distribution $P_{(D,A)} \sim [0,0,\eta]$ in Theorem 3.8 is given by
$$\Psi(s,y) = K\Gamma(1-\beta)s^\beta + y^{-\alpha}\Bigl(C + \int_0^\infty e^{-sty^{\alpha/\beta}}\,\omega\bigl(t^{-\beta/\alpha},\infty\bigr)\,K\beta t^{-\beta-1}\,dt\Bigr) \tag{3.12}$$
for all $(s,y) \in \mathbb{R}_+^2$ with $y > 0$.


Proof. For the proof we look at the two additive parts of the Lévy measure in (3.5) separately. For the first part we get
$$\Psi_1(s,y) := \int_{\mathbb{R}_+^2}\bigl(1 - e^{-st}1_{[0,y]}(x)\bigr)\,\epsilon_0(dt)\,C\alpha x^{-\alpha-1}\,dx = \int_0^\infty 1_{(y,\infty)}(x)\,C\alpha x^{-\alpha-1}\,dx = Cy^{-\alpha}.$$
For the second part we compute
\begin{align*}
\Psi_2(s,y) &:= \int_{\mathbb{R}_+^2}\bigl(1 - e^{-st}1_{[0,y]}(x)\bigr)\,(t^{\beta/\alpha}\omega)(dx)\,K\beta t^{-\beta-1}\,dt \\
&= \int_0^\infty\bigl(1 - e^{-st}\bigr)\,K\beta t^{-\beta-1}\,dt + \int_{\mathbb{R}_+^2} e^{-st}1_{(y,\infty)}(x)\,(t^{\beta/\alpha}\omega)(dx)\,K\beta t^{-\beta-1}\,dt \\
&= K\Gamma(1-\beta)s^\beta + \int_0^\infty e^{-st}\,\omega\bigl(t^{-\beta/\alpha}y,\infty\bigr)\,K\beta t^{-\beta-1}\,dt \\
&= K\Gamma(1-\beta)s^\beta + y^{-\alpha}\int_0^\infty e^{-suy^{\alpha/\beta}}\,\omega\bigl(u^{-\beta/\alpha},\infty\bigr)\,K\beta u^{-\beta-1}\,du.
\end{align*}
The C-L exponent $\Psi$ of the limit distribution $P_{(D,A)}$ is $\Psi(s,y) = \Psi_1(s,y) + \Psi_2(s,y)$, which is exactly (3.12). □

After analysing the $\alpha$-Fréchet case above, we now consider the general case, where $A$ in (1.3) can have any extreme value distribution. As before, let $x_0 \in [-\infty,\infty)$ denote the left endpoint of $F_A$; furthermore let $x_1$ denote the right endpoint of $F_A$.

Theorem 3.12. Let $(W,J), (W_i,J_i)_{i\in\mathbb{N}}$ be i.i.d. $\mathbb{R}_+\times\mathbb{R}$-valued random vectors. Furthermore let $(D,A)$ be $\mathbb{R}_+\times\mathbb{R}$-valued with non-degenerate marginals. Then the following are equivalent:
(a) There exist sequences $(a_n), (b_n), (c_n)$ with $a_n, b_n > 0$ and $c_n \in \mathbb{R}$ such that
$$\bigl(a_n S(n),\ b_n M(n) - c_n\bigr) \Longrightarrow (D,A) \quad\text{as } n\to\infty, \tag{3.13}$$

that is, $(W,J) \in \text{sum-max-DOA}(D,A)$.
(b) There exist sequences $(a_n), (b_n), (c_n)$ with $a_n, b_n > 0$ and $c_n \in \mathbb{R}$ such that
$$n \cdot P_{(a_n W,\,b_n(J-c_n))} \xrightarrow[n\to\infty]{v'} \eta, \tag{3.14}$$
where $\eta$ is a Lévy measure on $(\mathbb{R}_+\times\mathbb{R}, +, \vee)$.
Then $(D,A)$ is sum-max stable and has Lévy representation $[x_0,0,\eta]$.

Proof. The proof is similar to the proof of Theorem 3.5 and is left to the reader. □

As in the $\alpha$-Fréchet case it is also possible to describe the Lévy measure $\eta$ in (3.14) in the general case.


Theorem 3.13. Under the assumptions of Theorem 3.12, there exist constants $C \geq 0$, $K > 0$ and a probability measure $\omega \in M^1(\mathbb{R})$ with $\omega(\mathbb{R}_+) > 0$ and $\int_0^\infty x\,\omega(dx) < \infty$ such that the Lévy measure of $(D,A)$ on $\mathbb{R}_+\times[x_0,x_1]\setminus\{(0,x_0)\}$ is given by
$$\eta(dt,dx) = \varepsilon_0(dt)\,C\,\Gamma(x)^{-2}\Gamma'(x)\,dx + 1_{(0,\infty)\times(x_0,x_1)}(t,x)\,\Gamma^{-1}\bigl(t^\beta\omega\bigr)(dx)\,K\beta t^{-\beta-1}\,dt, \tag{3.15}$$
where $\Gamma(x) = 1/(-\log F_A(x))$.

Proof. Observe that $(D,\Gamma(A))$ is sum-max stable, where $\Gamma(A)$ is $1$-Fréchet. In view of Theorem 3.8 the Lévy measure $\tilde\eta$ of $(D,\Gamma(A))$ has the representation
$$\tilde\eta(dt,dx) = \varepsilon_0(dt)\,Cx^{-2}\,dx + 1_{(0,\infty)\times\mathbb{R}_+}(t,x)\,\bigl(t^\beta\omega\bigr)(dx)\,K\beta t^{-\beta-1}\,dt \tag{3.16}$$
with constants $C \geq 0$, $K > 0$ and $\omega \in M^1(\mathbb{R})$ with $\omega(\mathbb{R}_+) > 0$ and $\int_0^\infty x\,\omega(dx) < \infty$. Now let $\tilde\Psi$ denote the C-L exponent of $(D,\Gamma(A))$. Since
$$\mathcal{L}\bigl(P_{(D,A)}\bigr)(s,y) = \mathcal{L}\bigl(P_{(D,\Gamma(A))}\bigr)(s,\Gamma(y)) = \exp\bigl(-\tilde\Psi(s,\Gamma(y))\bigr),$$
the C-L exponent of $(D,A)$ is given by $\Psi(s,y) = \tilde\Psi(s,\Gamma(y))$. Setting $g(t,x) = (t,\Gamma^{-1}(x))$ we therefore get
\begin{align*}
\Psi(s,y) = \tilde\Psi(s,\Gamma(y)) &= \int_{\mathbb{R}_+\times\mathbb{R}_+}\bigl(1 - e^{-st}1_{[-\infty,\Gamma(y)]}(x)\bigr)\,\tilde\eta(dt,dx) \\
&= \int_{\mathbb{R}_+\times\mathbb{R}_+}\bigl(1 - e^{-st}1_{[-\infty,y]}(\Gamma^{-1}(x))\bigr)\,\tilde\eta(dt,dx) \\
&= \int_{\mathbb{R}_+\times[x_0,x_1)}\bigl(1 - e^{-st}1_{[-\infty,y]}(x)\bigr)\,g(\tilde\eta)(dt,dx),
\end{align*}
so $g(\tilde\eta)$ is the Lévy measure of $(D,A)$. Using (3.16) it is easy to see that $g(\tilde\eta)$ has the form (3.15), and the proof is complete. □

4. Examples

In this section we present some examples of random vectors $(W,J)$ which are in the domain of attraction of a sum-max stable distribution, and we calculate the Lévy measures of the corresponding limit distributions as well as the C-L exponents, using the theory developed in Section 3 above. In the following let $(W_i,J_i)_{i\in\mathbb{N}}$ be a sequence of $\mathbb{R}_+\times\mathbb{R}$-valued random vectors with $(W_i,J_i) \stackrel{d}{=} (W,J)$.

Example 4.1.
First we consider the case of complete dependence, that is $W_i = J_i$ for all $i \in \mathbb{N}$. This is the case which was already studied in [3]. We choose $W$ to be in the strict normal domain of attraction (meaning that we have $a_n = n^{-1/\beta}$ in (1.3)) of


a $\beta$-stable random variable $D$ with $0 < \beta < 1$ and $\mathbb{E}\bigl(e^{-sD}\bigr) = \exp(-s^\beta)$. The Lévy measure of $P_D$ is given by
$$\Phi_\beta(dt) = \eta(dt,\mathbb{R}_+) = \frac{\beta}{\Gamma(1-\beta)}\,t^{-\beta-1}\,dt. \tag{4.1}$$
We now choose $a_n = b_n = n^{-1/\beta}$ and $\alpha = \beta$ to get
$$n\cdot P\bigl(n^{-1/\beta}W > t,\ n^{-1/\beta}J > y\bigr) = n\cdot P\bigl(n^{-1/\beta}W > \max(t,y)\bigr) \xrightarrow[n\to\infty]{} \frac{1}{\Gamma(1-\beta)}\max(t,y)^{-\beta}$$
for $t, y > 0$. Thus we know by Theorem 3.5 that the Lévy measure $\eta$ is given by $\eta((t,\infty)\times(y,\infty)) = \frac{1}{\Gamma(1-\beta)}\max(t,y)^{-\beta}$. If we choose $\alpha = \beta$, $\omega = \epsilon_1$, $K = \frac{1}{\Gamma(1-\beta)}$ and $C = 0$ in equation (3.5), we also get
\begin{align*}
\eta\bigl((t,\infty)\times(y,\infty)\bigr) &= \int_t^\infty\int_y^\infty (r\epsilon_1)(dx)\,\frac{\beta}{\Gamma(1-\beta)}\,r^{-\beta-1}\,dr \\
&= \int_0^\infty 1_{(t,\infty)}(r)\,1_{(y,\infty)}(r)\,\frac{\beta}{\Gamma(1-\beta)}\,r^{-\beta-1}\,dr = \frac{1}{\Gamma(1-\beta)}\max(t,y)^{-\beta}.
\end{align*}
Hence the limit distribution in the case of complete dependence is uniquely determined by $P_{(D,A)} \sim [0,0,\eta]$ with $\eta(dt,dx) = 1_{(0,\infty)\times\mathbb{R}_+}(t,x)\,\epsilon_t(dx)\,\Phi_\beta(dt)$. Setting $\alpha = \beta$, $\omega = \epsilon_1$, $K = \frac{1}{\Gamma(1-\beta)}$ and $C = 0$ in (3.12), the C-L exponent in this case is given by
$$\Psi(s,y) = s^\beta + y^{-\beta}\int_1^\infty e^{-sty}\,\frac{\beta}{\Gamma(1-\beta)}\,t^{-\beta-1}\,dt. \tag{4.2}$$
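The leading $s^\beta$ term in (4.2) (and in (3.12) generally) comes from the identity $\int_0^\infty(1-e^{-st})\,\beta t^{-\beta-1}\,dt = \Gamma(1-\beta)\,s^\beta$, which with $K = 1/\Gamma(1-\beta)$ normalizes $K\Gamma(1-\beta)s^\beta$ to $s^\beta$. A numerical sanity sketch of this identity, with illustrative values only:

```python
import math

def simpson(f, a, b, n=200000):
    # composite Simpson's rule on [a, b]
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

beta, s = 0.4, 1.3  # illustrative values, 0 < beta < 1

# Substituting v = t**(-beta) turns int_0^inf (1 - exp(-s*t)) * beta * t**(-beta-1) dt
# into int_0^inf (1 - exp(-s * v**(-1/beta))) dv, which is well behaved near v = 0.
def integrand(v):
    if v <= 0.0:
        return 1.0  # limit for v -> 0+ (i.e. t -> infinity)
    return 1.0 - math.exp(-s * v ** (-1.0 / beta))

lhs = simpson(integrand, 0.0, 1e4)  # the tail beyond v = 1e4 is below 1e-5
rhs = math.gamma(1.0 - beta) * s ** beta

assert abs(lhs - rhs) < 1e-4
print(round(lhs, 5), round(rhs, 5))
```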

Example 4.2.
Again we choose $W$ to be in the strict normal domain of attraction of a $\beta$-stable random variable $D$ with $0 < \beta < 1$ and $\mathbb{E}\bigl(e^{-sD}\bigr) = \exp(-s^\beta)$. Furthermore let $Z$ be a standard normally distributed random variable, i.e. $Z \sim N_{0,1}$, with $Z$ independent of $W$. We define $J := W^{1/2}Z$; hence the conditional distribution of $J$ given $W = t$ is $N_{0,t}$. Define a homeomorphism $T : \mathbb{R}_+\times\mathbb{R} \to \mathbb{R}_+\times\mathbb{R}$ by $T(t,x) = (t,\,t^{1/2}x)$. Then we get for continuity sets $A \subseteq \mathbb{R}_+^2$ that are bounded away from $\{(0,0)\}$
$$n \cdot P_{(n^{-1/\beta}W,\,n^{-1/2\beta}J)}(A) = n \cdot P_{T(n^{-1/\beta}W,\,Z)}(A) = n \cdot \bigl(P_{n^{-1/\beta}W}\otimes P_Z\bigr)\bigl(T^{-1}(A)\bigr) \xrightarrow[n\to\infty]{} \bigl(\Phi_\beta\otimes N_{0,1}\bigr)\bigl(T^{-1}(A)\bigr),$$


where $\Phi_\beta$ is again the Lévy measure of $D$, given by (4.1). Hence the Lévy measure of $(D,A)$ is given by
$$\eta(dt,dx) = T\bigl(\Phi_\beta\otimes N_{0,1}\bigr)(dt,dx) = N_{0,t}(dx)\,\Phi_\beta(dt).$$
This coincides with (3.5) in Theorem 3.8 if we choose $C = 0$, $\alpha = 2\beta$, $\omega = N_{0,1}$ and $K = \frac{1}{\Gamma(1-\beta)}$. For the C-L exponent we get with (3.12) in Proposition 3.11
$$\Psi(s,y) = s^\beta + y^{-2\beta}\int_0^\infty e^{-sty^2}\,N_{0,t}(1,\infty)\,\frac{\beta}{\Gamma(1-\beta)}\,t^{-\beta-1}\,dt. \tag{4.3}$$
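The integral in (4.3) can equivalently be written directly against the Lévy measure $N_{0,t}(dx)\,\Phi_\beta(dt)$, as $\int_0^\infty e^{-st}\,N_{0,t}(y,\infty)\,K\beta t^{-\beta-1}\,dt$; the two forms agree via the substitution $t = uy^2$ and the Gaussian scaling $N_{0,uy^2}(y,\infty) = N_{0,u}(1,\infty)$. A numerical consistency check (illustrative parameter values; the Gaussian tail is computed with `math.erfc`):

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson's rule on [a, b]
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

beta, s, y = 0.7, 1.1, 1.6  # illustrative values, 0 < beta < 1
K = 1.0 / math.gamma(1.0 - beta)

def normal_tail(level, var):
    # N_{0,var}(level, infinity) = P(sqrt(var) * Z > level) for standard normal Z
    return 0.5 * math.erfc(level / math.sqrt(2.0 * var))

# integral term written directly against N_{0,t}(dx) Phi_beta(dt)
direct = simpson(lambda t: math.exp(-s * t) * normal_tail(y, t)
                 * K * beta * t ** (-beta - 1.0), 1e-9, 60.0)

# the same term in the scaled form appearing in (4.3)
scaled = y ** (-2.0 * beta) * simpson(
    lambda u: math.exp(-s * u * y ** 2) * normal_tail(1.0, u)
    * K * beta * u ** (-beta - 1.0), 1e-9, 60.0)

assert abs(direct - scaled) < 1e-5
print(round(direct, 7), round(scaled, 7))
```

The near-origin singularity $t^{-\beta-1}$ is harmless here because the Gaussian tail $N_{0,t}(y,\infty)$ vanishes superexponentially as $t \downarrow 0$.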

Example 4.3.
Again we choose $W$ to be in the strict normal domain of attraction of a $\beta$-stable random variable $D$ with $0 < \beta < 1$ and $\mathbb{E}\bigl(e^{-sD}\bigr) = \exp(-s^\beta)$. Furthermore let $Z$ be a $\gamma$-Fréchet distributed random variable with distribution function $P(Z \leq t) = e^{-C_1 t^{-\gamma}}$, where $C_1 > 0$ and $\gamma > 0$, and let $Z$ be independent of $W$. We define $J := W^{1/\gamma}Z$. Let $T : \mathbb{R}_+\times\mathbb{R} \to \mathbb{R}_+\times\mathbb{R}$ be the homeomorphism with $T(t,x) = (t,\,t^{1/\gamma}x)$. We then have for all continuity sets $B \subseteq \mathbb{R}_+^2$ bounded away from $\{(0,0)\}$
$$n\cdot P_{(n^{-1/\beta}W,\,n^{-\frac{1}{\beta\gamma}}J)}(B) = n \cdot P_{T(n^{-1/\beta}W,\,Z)}(B) = n \cdot P_{(n^{-1/\beta}W,\,Z)}\bigl(T^{-1}(B)\bigr) = n \cdot \bigl(P_{n^{-1/\beta}W}\otimes P_Z\bigr)\bigl(T^{-1}(B)\bigr) \xrightarrow[n\to\infty]{} \bigl(\Phi_\beta\otimes P_Z\bigr)\bigl(T^{-1}(B)\bigr) = T\bigl(\Phi_\beta\otimes P_Z\bigr)(B),$$
where $\Phi_\beta$ denotes the Lévy measure of $P_D$. Consequently the Lévy measure of $(D,A)$ is given by
$$\eta(dt,dx) = \bigl(t^{1/\gamma}P_Z\bigr)(dx)\,\frac{\beta}{\Gamma(1-\beta)}\,t^{-\beta-1}\,dt.$$
This coincides with Theorem 3.8 if we let $\omega = P_Z$, $\alpha = \beta\gamma$, $K = 1/\Gamma(1-\beta)$ and $C = 0$ in (3.5). By Theorem 3.5 we know
$$\bigl(n^{-1/\beta}S(n),\ n^{-1/\beta\gamma}M(n)\bigr) \Longrightarrow (D,A) \quad\text{as } n\to\infty,$$
where $D$ is strictly $\beta$-stable with $0 < \beta < 1$ and $A$ is $\alpha = \beta\gamma$-Fréchet distributed. The condition $\int_0^\infty x^\alpha\,\omega(dx) < \infty$ is fulfilled since $0 < \beta < 1$ implies $\alpha = \beta\gamma < \gamma$ and $\omega$ is a $\gamma$-Fréchet distribution. This means that $(W,J)$ is in the sum-max domain of attraction


of $(D,A)$. With Proposition 3.11 we compute the C-L exponent, with $K := 1/\Gamma(1-\beta)$:
\begin{align*}
\Psi(s,y) &= s^\beta + y^{-\beta\gamma}\int_0^\infty e^{-sty^\gamma}\,\omega\bigl(t^{-1/\gamma},\infty\bigr)\,\beta K t^{-\beta-1}\,dt \\
&= s^\beta + y^{-\beta\gamma}\int_0^\infty e^{-sty^\gamma}\bigl(1 - e^{-C_1 t}\bigr)\,\beta K t^{-\beta-1}\,dt \\
&= s^\beta + y^{-\beta\gamma}\Bigl(\int_0^\infty\bigl(e^{-sty^\gamma} - 1\bigr)\,\beta K t^{-\beta-1}\,dt + \int_0^\infty\bigl(1 - e^{-t(sy^\gamma + C_1)}\bigr)\,\beta K t^{-\beta-1}\,dt\Bigr) \\
&= s^\beta + y^{-\beta\gamma}\bigl(-(sy^\gamma)^\beta + (sy^\gamma + C_1)^\beta\bigr) = y^{-\beta\gamma}(sy^\gamma + C_1)^\beta = (s + C_1 y^{-\gamma})^\beta.
\end{align*}

5. Appendix: Proofs

In this section we give some of the technical proofs of Section 2 above.

Proof of Lemma 2.2. First we assume that $\varphi$ is the C-L transform of a probability measure $\mu \in M^1(\mathbb{R}_+\times\mathbb{R})$. The map
$$h : (\mathbb{R}_+\times\mathbb{R}, +, \vee) \to \hat S, \qquad h(t,x) := e^{-t\,\cdot}\,1_{[-\infty,\,\cdot\,]}(x),$$
is an injective homomorphism, where $\hat S$ is the set of all bounded semicharacters on $S = (\mathbb{R}_+\times\mathbb{R}, +, \wedge)$ in (2.2). We get
$$\varphi(s,y) = \int_{\mathbb{R}_+}\int_{\mathbb{R}} e^{-st}\,1_{[-\infty,y]}(x)\,\mu(dt,dx) = \int_{\hat S}\rho(s,y)\,h(\mu)(d\rho) \quad\text{for all } (s,y) \in \mathbb{R}_+\times\mathbb{R}.$$

Theorem 4.2.5 in [2] implies that $\varphi$ is positive semidefinite. It is obvious that $\varphi$ is also bounded and normalized. If we put $s = 0$ we get $\varphi(0,y) = \mu(\mathbb{R}_+\times[-\infty,y])$ for all $y \in \mathbb{R}$, hence the distribution function of a probability measure on $\mathbb{R}$. If instead we put $y = \infty$ we get
$$\varphi(s,\infty) = \int_{\mathbb{R}_+} e^{-st}\,\mu(dt,\mathbb{R})$$

for all $s \in \mathbb{R}_+$, hence the Laplace transform of a probability measure on $\mathbb{R}_+$. Conversely, suppose now that $\varphi$ is a positive semidefinite, bounded and normalized function on $(\mathbb{R}_+\times\mathbb{R}, +, \wedge)$. Theorem 4.2.8 in [2] implies that there exists exactly one probability


measure $\mu$ on the set $\hat S$ of bounded semicharacters of the semigroup $S = (\mathbb{R}_+\times\mathbb{R}, +, \wedge)$ such that
$$\varphi(s,y) = \int_{\hat S}\rho(s,y)\,\mu(d\rho) \quad\text{for all } (s,y) \in \mathbb{R}_+\times\mathbb{R}.$$
We divide $\hat S$ in (2.2) into the two disjoint subsets
$$\hat S' = \bigl\{e^{-t\,\cdot}\,1_{[x,\infty]}(\cdot) : x \in [-\infty,\infty],\ t \in [0,\infty]\bigr\}
\quad\text{and}\quad
\hat S'' = \bigl\{e^{-t\,\cdot}\,1_{(x,\infty]}(\cdot) : x \in [-\infty,\infty),\ t \in [0,\infty]\bigr\}.$$
We define the isomorphisms $h_1 : \mathbb{R}_+\times\mathbb{R} \to \hat S'$ and $h_2 : \mathbb{R}_+\times\mathbb{R} \to \hat S''$ by
$$h_1(t,x) := e^{-t\,\cdot}\,1_{[x,\infty]}(\cdot) \quad\text{and}\quad h_2(t,x) := e^{-t\,\cdot}\,1_{(x,\infty]}(\cdot).$$
Hence we get
\begin{align*}
\varphi(s,y) &= \int_{[0,\infty]}\int_{\mathbb{R}} e^{-st}1_{[-\infty,y]}(x)\,h_1^{-1}(\mu)(dt,dx) + \int_{[0,\infty]}\int_{\mathbb{R}} e^{-st}1_{[-\infty,y)}(x)\,h_2^{-1}(\mu)(dt,dx) \\
&= \int_{\mathbb{R}} 1_{[-\infty,y]}(x)\Bigl(\int_{[0,\infty)} e^{-st}\,h_1^{-1}(\mu)(dt,dx) + h_1^{-1}(\mu)(\{\infty\},dx)\cdot 1_{\{0\}}(s)\Bigr) \\
&\quad + \int_{\mathbb{R}} 1_{[-\infty,y)}(x)\Bigl(\int_{[0,\infty)} e^{-st}\,h_2^{-1}(\mu)(dt,dx) + h_2^{-1}(\mu)(\{\infty\},dx)\cdot 1_{\{0\}}(s)\Bigr).
\end{align*}
Due to the right continuity of $\varphi(0,y)$ in $y \in \mathbb{R}$ there are only two possible cases: either $h_2^{-1}(\mu)([0,\infty]\times\{y\}) = 0$ for all $y \in \mathbb{R}$, or $h_2^{-1}(\mu)([0,\infty]\times\cdot) \equiv 0$. In the first case we choose $\tilde\mu := h_1^{-1}(\mu) + h_2^{-1}(\mu)$; in the second case the last integral vanishes and we choose $\tilde\mu := h_1^{-1}(\mu)$. Since $\varphi(s,\infty)$ is continuous in $s$, it follows that $h_i^{-1}(\mu)(\{\infty\},dx) = 0$ for $i = 1,2$. Due to the fact that $\varphi$ is normalized, $\tilde\mu$ is a probability measure. Hence we get the desired form in (2.4). □

Proof of Lemma 2.5. We write

=

Z

e−sn t 1[−∞,yn] (x)µn (dt, dx)

R+ ×R

e−sn t µn (dt, [−∞, yn ])

R+

= L(˜ µn )(sn ), where we define the measures µ ˜n (dt) := µn (dt, [−∞, yn ]) and µ ˜ := µ(dt, [−∞, y]). The assertion follows, if we can show that (5.1)

w

µ ˜ n −−−→ µ ˜. n→∞

26

KATHARINA HEES AND HANS-PETER SCHEFFLER

Then due to the uniform convergence of the Laplace transform it follows L(µ˜n )(sn ) −−−→ L(˜ µ)(s). n→∞ w

So it remains to show (5.1). Because µn −→ µ, we know that µn (A1 × A2 ) −−−→ µ(A1 × A2 )

(5.2)

n→∞

for A1 × A2 ∈ B(R+ × R) with µ(∂(A1 × A2 )) = 0. Hence µn (B × [−∞, y]) −−−→ µ(B × [−∞, y])

(5.3)

n→∞

for all B ∈ B(R+ ) with µ(∂B × [−∞, y]) = 0, then (5.3) is fulfilled for all sets B × [−∞, y] with µ(∂(B × [−∞, y])) = 0. It is ∂(B × [−∞, y]) = ∂B × [−∞, y] ∪ B × {y} and because y is a point of continuity of the function µ(R+ × [−∞, y]), it follows from µ(∂B × [−∞, y]) = 0 that µ(∂(B × [−∞, y])) = 0. For a set B ∈ B(R+ ), µn (B × [−∞, y]) is an increasing, right continuous function which is continuous in y, and so an (improper) distribution function. But then it follows that µn (B × [−∞, yn ]) −−−→ µ(B × [−∞, y]) n→∞

if yn −→ y for n → ∞ an (5.1) holds true.



Proof of Lemma 2.8. From Theorem 2.4 we know that $\mathcal{L}(\mu_n)(s,y) \to \mathcal{L}(\mu)(s,y)$ as $n\to\infty$ for all $(s,y) \in \mathbb{R}_+\times\mathbb{R}$ up to countably many $y \in \mathbb{R}$. Since the probability measures $\mu_n$ are $(+,\vee)$-infinitely divisible, for all $n, m \geq 1$ there exists a measure $\mu_{m,n}$ such that $\mu_n = \mu_{m,n}^{\circledast m}$. Because of Proposition 2.3 (b) and (c) this is equivalent to $\mathcal{L}(\mu_n)(s,y) = \mathcal{L}(\mu_{m,n})^m(s,y)$ for all $(s,y) \in \mathbb{R}_+\times\mathbb{R}$. It then follows that
$$\mathcal{L}(\mu_{m,n})(s,y) = \mathcal{L}(\mu_n)^{1/m}(s,y) \xrightarrow[n\to\infty]{} \bigl(\mathcal{L}(\mu)\bigr)^{1/m}(s,y)$$
for all $(s,y) \in \mathbb{R}_+\times\mathbb{R}$ up to countably many $y \in \mathbb{R}$. Since
$$\lim_{s\downarrow 0}\bigl(\mathcal{L}(\mu)\bigr)^{1/m}(s,\infty) = \lim_{s\downarrow 0}\Bigl(\int_0^\infty e^{-st}\,\mu(dt,\mathbb{R})\Bigr)^{1/m} = \bigl(\mu(\mathbb{R}_+\times\mathbb{R})\bigr)^{1/m} = 1,$$
it follows from Theorem 2.4 that there exists a measure $\nu \in M^1(\mathbb{R}_+\times\mathbb{R})$ with $\mathcal{L}(\nu) = (\mathcal{L}(\mu))^{1/m}$. Hence $\mu = \nu^{\circledast m}$, so $\mu$ is $(+,\vee)$-infinitely divisible. □

Proof of Lemma 2.12. In view of Lemma 2.8 we already know that $\mu$ is $(+,\vee)$-infinitely divisible. By Theorem 2.9 and Remark 2.10 we know that the C-L exponent has the form
$$\Psi(s,y) = a\cdot s + \int_{\mathbb{R}_+}\int_{\mathbb{R}}\bigl(1 - e^{-st}\cdot 1_{[-\infty,y]}(x)\bigr)\,\eta(dt,dx) \quad \forall (s,y) \in \mathbb{R}_+\times(x_0,\infty].$$


First we define, for any $h_1 > 0$ and any $h_2 > x_0$, a function $\Psi^*$ on $\mathbb{R}_+\times(x_0,\infty]$ by $\Psi^*(s,y) = \Psi(s+h_1,\,y\wedge h_2) - \Psi(s,y)$. For $\Psi^*$ we get for all $(s,y) \in \mathbb{R}_+\times(x_0,\infty]$
\begin{align*}
\Psi^*(s,y) &= ah_1 + \int_{\mathbb{R}_+\times\mathbb{R}}\bigl(e^{-st}1_{[-\infty,y]}(x) - e^{-(s+h_1)t}1_{[-\infty,y\wedge h_2]}(x)\bigr)\,\eta(dt,dx) \\
&= ah_1 + \int_{\mathbb{R}_+\times\mathbb{R}} e^{-st}1_{[-\infty,y]}(x)\bigl(1 - e^{-h_1 t}1_{[-\infty,h_2]}(x)\bigr)\,\eta(dt,dx) \\
&= ah_1 + \int_{\mathbb{R}_+\times\mathbb{R}} e^{-st}1_{[-\infty,y]}(x)\,K(t,x)\,\eta(dt,dx),
\end{align*}

where K(t, x) := 1 − e−h1 t 1[−∞,h2] (x). By Taylor expansion we get for all x ≤ h2 K(t, x) = h1 t + o(t) as t → 0. Now we define a measure M on R+ × [x0 , ∞] by M(dt, dx) := K(t, x)η(dt, dx) on R+ × [x0 , ∞]\ {(0, x0 )} and M({(0, x0 )}) := ah1 . This is a finite measure, because for 0 < ǫ < 1 we get M(R+ × [x0 , ∞]) Z Z Z Z −h1 t (1 − e−h1 t )η(dt, dx) (1 − e )η(dt, dx) + = ah1 + 0 0 if t > 0 or x > h2 it follows that ηn (S) − → η(S) as n → ∞. Since v′

w

h2 > x0 was chosen arbitrarily, it follows ηn − → η, i.e. (b) is fulfilled. Because µn − →µ implies Ψn (s, y) → Ψ(s, y) as n → ∞ for all (s, y) ∈ R+ × R where Ψ is continuous, we have Ψn (s, ∞) − → Ψ(s, ∞) and it follows, that an − → a as n → ∞. Hence (a) is also fulfilled. It remains to show (c). For all but countable many y > x0 we have   Z −st (1 − e 1[−∞,y](x))ηn (dt, dx) Ψ(s, y) = lim an s + n→∞ R+ ×R Z  Z −st −st = as + lim (1 − e )ηn (dt, dx) + e 1(y,∞) (x)ηn (dt, dx) . n→∞

R+ ×R

R+ ×R

We divide the first integral into two parts and get Z  Z −st −st Ψ(s, y) = as + lim lim (1 − e )ηn (dt, R) + (1 − e )ηn (dt, R) ǫ↓0 n→∞ {t:0≤t 0 for all y > x0 , this is equivalent to n · log L(µn )(s, y) − → log L(ν)(s, y) as n → ∞ in all but countable many y > x0 . Because L(µn )(s, y) → 1 as n → ∞ for all y > x0 and log z ∼ z − 1 as z → 1, this is equivalent to n · (L(µn )(s, y) − 1) − → log L(ν)(s, y) as n → ∞ in all but countable many y > x0 . And this is equivalent to exp(n [L(µn )(s, y) − 1]) − → L(ν)(s, y) as n → ∞ in all but countable many y > x0 and because of Theorem 2.4 and Lemma 2.13, this is equivalent to w Π(n, µn ) − → ν as n → ∞.  References [1] Anderson, K. (1987) Limit Theorems for General Shock Models with Infinite Mean Intershock Times. Journal of applied probability 24, 449176. [2] C. Berg, J.P.R. Christensen and P. Ressel (1984) Harmonic Analysis on Semigroups, Springer. [3] T. L. Chow, J. L. Teugels (1978) The sum and the maximum of i.i.d. random variables. Proc. Second Prague Symp. Asymptotic Statistics 81, 81-92. [4] W. Feller (1971) An Introduction to Probability Theory and Its Applications Vol. 2, Wiley, New York. [5] Gnedenko, B. V. und Kolmogorov, A. (1968) Limit distributions for sums of independent random variables. Addison-Wesley (Amsterdam). [6] M.M. Meerschaert and H.P. Scheffler (2001) Limit Distributions for Sums of Independent Random Vectors: Heavy Tails in Theory and Practice. Wiley Interscience, New York. [7] Meerschaert, M. und Stoev, S. (2009) Extremal Limit Theorems for observations separated by random waiting times. Journal of Statistical Planning and Inference 139, 21751788. [8] Pancheva, E., Mitov, I., und Mitov, K. (2009) Limit theorems for extremal processes generated by a point process with correlated time and space components. Statistics and Probability Letters 79, 390175. [9] S. I. Resnick (1987) Extreme Values, Regular Variation and Point Processes. Springer, New York. 
[10] R. Schumer, B. Baeumer and M. M. Meerschaert (2011) Extremal behavior of a coupled continuous time random walk. Physica A: Statistical Mechanics and its Applications 390, 505171.
[11] J. Shanthikumar and U. Sumita (1983) General shock models associated with correlated renewal sequences. Journal of Applied Probability 20, 600174.
[12] J. Shanthikumar and U. Sumita (1984) Distribution properties of the system failure time in a general shock model. Advances in Applied Probability 16, 363177.
[13] J. Shanthikumar and U. Sumita (1985) A class of correlated cumulative shock models. Advances in Applied Probability 17, 347176.
[14] D.S. Silvestrov and J.L. Teugels (2004) Limit theorems for mixed max-sum processes with renewal stopping. The Annals of Applied Probability 14, 1838-1868.


Katharina Hees, Institut für medizinische Biometrie und Informatik, Universität Heidelberg, 69126 Heidelberg, Germany
E-mail address: [email protected]

Hans-Peter Scheffler, Department Mathematik, Universität Siegen, 57072 Siegen, Germany
E-mail address: [email protected]
