Bounds and Approximations for Sums of Dependent Log-Elliptical Random Variables

Emiliano A. Valdez*    Jan Dhaene†    Mateusz Maj    Steven Vanduffel

October 2, 2008

ABSTRACT

Dhaene, Denuit, Goovaerts, Kaas & Vyncke (2002a,b) have studied convex bounds for a sum of dependent random variables and applied these to sums of log-normal random variables. In particular, they have shown how these convex bounds can be used to derive closed-form approximations for several risk measures of such a sum. In this paper we investigate to what extent their general results on convex bounds can also be applied to sums of log-elliptical random variables, which include sums of log-normals as a special case. Firstly, we show that, unlike in the log-normal case, the convex lower bound no longer yields closed-form approximations for the different risk measures of general sums of log-ellipticals. Secondly, we demonstrate how the weaker stop-loss order can instead be used to derive such closed-form approximations. We also present numerical examples to show the accuracy of the proposed approximations.

Keywords: comonotonicity, bounds, elliptical distributions, log-elliptical distributions.

1 Introduction

Sums of non-independent random variables (r.v.'s) occur in several situations in insurance and finance. As a first example, consider a portfolio of n insurance risks X_1, X_2, ..., X_n. The aggregate claim amount S is defined to be the sum of these individual risks:

S = \sum_{k=1}^{n} X_k,    (1)

where generally the risks are non-negative r.v.’s, i.e. Xk ≥ 0. Knowledge of the distribution of this sum provides essential information for the insurance company and can be used as an input in the calculation of premiums and reserves. A particularly important problem is the determination of stop-loss premiums of S. Suppose that the insurance company agrees to enter into a stop-loss reinsurance contract where total claims beyond a pre-specified ∗ †

Corresponding author. Department of Mathematics, University of Connecticut, Storrs, Connecticut, USA Department of Applied Economics, Katholieke Universiteit Leuven, BELGIUM

2

Introduction

amount d, called the retention, will be covered by the reinsurer. The stop-loss premium with retention d is then defined as Z ∞   F S (x) dx, (2) E (S − d)+ = d

where \bar{F}_S(x) = 1 − F_S(x) = Pr(S > x) and (s − d)_+ = max(s − d, 0).

In classical risk theory, the individual risks X_k are typically assumed to be mutually independent, mainly because computation of the aggregate claims is much more tractable in this case. For special families of individual claim distributions, one may determine the exact form of the distribution of the aggregate claims. Several exact and approximate recursive methods have been proposed for computing the aggregate claims distribution in the case of discrete marginal distributions; see e.g. Dhaene & De Pril (1994) and Dhaene & Vandebroek (1995). Approximating the aggregate claims distribution by a Normal distribution with the same first and second moments is often unsatisfactory for insurance practice, where the third central moment is often substantially different from 0. In this case, approximations based on a translated Gamma distribution or the Normal power approximation will perform better; see e.g. Kaas, Goovaerts, Dhaene & Denuit (2002b).

It is important to note that all standard actuarial methods mentioned above for determining the aggregate claims distribution are only applicable when the individual risks are assumed to be mutually independent. However, there are situations where the independence assumption is questionable, for instance when the individual risks X_k are influenced by the same economic or physical environment.

In finance, a portfolio of n investment positions may be facing potential losses L_1, L_2, ..., L_n over a given reference period, e.g. one month or one year. The total potential loss L for this portfolio is then given by

L = \sum_{k=1}^{n} L_k.    (3)

As the returns on the different investment positions will in general be non-independent, it is clear that L will be a sum of non-independent r.v.'s. Quantities of interest are quantiles of the distribution of (3), which in finance are called Values-at-Risk. Regulatory bodies require financial institutions such as banks and investment firms to meet risk-based capital requirements for their portfolio holdings. These requirements are often expressed in terms of a Value-at-Risk or some other risk measure which depends on the distribution of the sum in (3). For a recent account of risk measures, as well as some discussion of their applicability in insurance and finance, we refer to Dhaene et al. (2006, 2008a), amongst others.

A related problem is determining an investment portfolio's total rate of return. Suppose R_1, R_2, ..., R_n denote the random yearly rates of return of n different assets in a portfolio and suppose w_1, w_2, ..., w_n denote the weights in the portfolio. Then the portfolio's total yearly rate of return is given by

R = \sum_{k=1}^{n} w_k R_k,    (4)

which is clearly a sum of non-independent r.v.'s.

The interplay between actuarial and financial risks is currently gaining increasing attention. To illustrate, consider random payments of X_k to be made at times k for the next n periods. Further, suppose that the stochastic discount factor over the period (0, k) is the r.v. Y_k. Hence, an amount of one unit at time 0 is assumed to grow to a stochastic amount Y_k^{−1} at time k. The present value r.v. S is defined as the scalar product of the payment vector (X_1, X_2, ..., X_n) and the discount vector (Y_1, Y_2, ..., Y_n):

S = \sum_{k=1}^{n} X_k Y_k.    (5)

The present value quantity in (5) is of considerable importance for computing reserves and capital requirements for long term insurance business. The r.v.’s Xk Yk will be non-independent not only because in any realistic


model, the discount factors will be rather strongly positively dependent, but also because the claim amounts X_k can often not be assumed to be mutually independent.

As illustrated by the examples above, it is important to be able to determine the distribution function of a sum of r.v.'s when the individual r.v.'s involved are not assumed to be mutually independent. In general, this task is difficult or even impossible to perform because the dependency structure is unknown or too cumbersome to work with. In this paper, we develop approximations for sums involving non-independent log-elliptical r.v.'s. Dhaene et al. (2002a, 2002b) constructed so-called convex bounds for sums of general dependent r.v.'s and applied them to the log-normal distribution. In this article we investigate whether their results can be extended to sums of general log-elliptical distributions. Firstly, we prove that, unlike in the log-normal case, the construction of a convex lower bound in explicit form appears to be out of reach for general sums of log-elliptical risks. Secondly, we show how stop-loss bounds can be constructed, and we use these to obtain mean-preserving approximations for general sums of log-elliptical distributions in explicit form.

The remainder of the paper is structured as follows. In Sections 2 and 3, we introduce elliptical, spherical and log-elliptical distributions, as in Fang et al. (1990). In order for the paper to be self-contained, we repeat some results from the literature that will be used in later sections. In Section 4, we summarise the ideas developed in Dhaene et al. (2002a, 2002b) regarding the construction of convex upper and lower bounds for sums of non-independent r.v.'s, which were successfully applied to sums of log-normals. In Section 5 we study these bounds for sums of log-ellipticals and show that the convex lower bound cannot be obtained explicitly. In Section 6 we propose new approximations that are based on stop-loss ordering and that allow closed-form calculations. In Section 7 we numerically illustrate the accuracy of these approximations. Finally, Section 8 concludes the paper.

2 Elliptical and Spherical Distributions

2.1 Definition of elliptical distributions

It is well-known that a random vector Y = (Y_1, ..., Y_n)^T is said to have an n-dimensional normal distribution if its characteristic function is given by

E[exp(i t^T Y)] = exp(i t^T µ) exp(−\frac{1}{2} t^T Σ t),    t^T = (t_1, t_2, ..., t_n),    (6)

for some fixed vector µ (n × 1) and some fixed matrix Σ (n × n). Equivalently, one can say that Y is multivariate normal if

Y \overset{d}{=} µ + A Z,    (7)

where Z = (Z_1, ..., Z_m)^T is a random vector consisting of m mutually independent standard normal r.v.'s, A is an n × m matrix, µ is an n × 1 vector and \overset{d}{=} stands for "equality in distribution". For random vectors belonging to the class of multivariate normal distributions with parameters µ and Σ, we will use the notation Y ∼ N_n(µ, Σ). It is well-known that the vector µ is the mean vector and that the matrix Σ is the variance-covariance matrix. Note that the relation between Σ and A is given by Σ = A A^T.

The class of multivariate elliptical distributions is a natural extension of the class of multivariate normal distributions.

Definition 2.1 (Multivariate elliptical distribution). The random vector Y = (Y_1, ..., Y_n)^T is said to have an elliptical distribution with parameters the vector µ (n × 1) and the matrix Σ (n × n) if its characteristic function can be expressed as

E[exp(i t^T Y)] = exp(i t^T µ) φ(t^T Σ t),    t^T = (t_1, t_2, ..., t_n),    (8)

for some scalar function φ and where Σ is given by

Σ = A A^T    (9)

for some matrix A (n × m). If Y has the elliptical distribution as defined above, we shall write Y ∼ E_n(µ, Σ, φ) and say that Y is elliptical. The function φ is called the characteristic generator of Y. It is well-known that the characteristic function of a random vector always exists and that there is a one-to-one correspondence between distribution functions and characteristic functions. Note however that not every function φ can be used to construct the characteristic function of an elliptical distribution. Obviously, this function φ should fulfil the requirement φ(0) = 1. A necessary and sufficient condition for the function φ to be a characteristic generator of an n-dimensional elliptical distribution is given in Theorem 2.2 of Fang et al. (1990).

In the remainder of the paper, we shall denote the elements of µ and Σ by

µ = (µ_1, ..., µ_n)^T    (10)

and

Σ = (σ_{kl})  for k, l = 1, 2, ..., n,    (11)

respectively. Note that (9) guarantees that the matrix Σ is symmetric, positive semidefinite and has non-negative elements on the main diagonal. Hence, for any k and l, one has that σ_{kl} = σ_{lk}, whereas σ_{kk} ≥ 0, which will often be denoted by σ_k^2.
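The stochastic representation (7) together with Σ = A A^T is easy to exercise numerically. The following sketch, our own illustration with an arbitrary µ and A (not taken from the paper), draws Y = µ + AZ for independent standard normal Z and checks that the sample mean and sample covariance approach µ and A A^T:

```python
import random

random.seed(0)

mu = [1.0, -2.0]
A = [[1.0, 0.0],
     [0.5, 0.8]]          # so that Sigma = A A^T

def draw():
    # one realisation of Y = mu + A Z, with Z independent standard normals (eq. (7))
    z = [random.gauss(0.0, 1.0) for _ in range(2)]
    return [mu[i] + sum(A[i][j] * z[j] for j in range(2)) for i in range(2)]

N = 200_000
sample = [draw() for _ in range(N)]
mean = [sum(y[i] for y in sample) / N for i in range(2)]
cov = [[sum((y[i] - mean[i]) * (y[j] - mean[j]) for y in sample) / N
        for j in range(2)] for i in range(2)]
Sigma = [[sum(A[i][k] * A[j][k] for k in range(2)) for j in range(2)]
         for i in range(2)]          # Sigma = A A^T, eq. (9)
```

With the normal generator this is exact multivariate normal sampling; for other elliptical families one would additionally mix Z with a radial r.v.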

It is also known that an n-dimensional random vector Y = (Y_1, ..., Y_n)^T is multivariate normal with parameters µ and Σ if and only if for any vector b (n × 1), the linear combination b^T Y of the different marginals Y_k has a univariate normal distribution with mean b^T µ and variance b^T Σ b. From (8) it follows easily that this characterisation of multivariate normality can be extended to the class of general multivariate elliptical distributions: the n-dimensional random vector Y is elliptical with parameters µ and Σ, notation Y ∼ E_n(µ, Σ, φ), if and only if for any vector b (n × 1), one has that

b^T Y ∼ E_1(b^T µ, b^T Σ b, φ).    (12)

Invoking (8) again, we also find for any matrix B (m × n), any vector c (m × 1) and any random vector Y ∼ E_n(µ, Σ, φ) that

B Y + c ∼ E_m(B µ + c, B Σ B^T, φ),    (13)

see also Theorem 2.16 in Fang et al. (1990). Hence, any random vector with components that are linear combinations of the components of a multivariate elliptical random vector is again elliptical with the same characteristic generator. In particular, it holds for k = 1, 2, ..., n that

Y_k ∼ E_1(µ_k, σ_k^2, φ).    (14)

Thus the marginal components of a multivariate elliptical distribution have an elliptical distribution with the same characteristic generator. As

S = \sum_{k=1}^{n} Y_k = e^T Y,    (15)

where e (n × 1) = (1, 1, ..., 1)^T, it also follows that

S ∼ E_1(e^T µ, e^T Σ e, φ),    (16)

where e^T µ = \sum_{k=1}^{n} µ_k and e^T Σ e = \sum_{k=1}^{n} \sum_{l=1}^{n} σ_{kl}.

Finally, Kelker (1970) proved the interesting result that any multivariate elliptical distribution with mutually independent components must necessarily be multivariate normal; see also Theorem 4.11 in Fang et al. (1990). Moments and densities of elliptical distributions, together with some members of this class, are provided in the Appendix.

2.2 Spherical distributions

An n-dimensional random vector Z = (Z_1, ..., Z_n)^T is said to have a multivariate standard normal distribution if all the Z_i's are mutually independent and standard normally distributed. We will write this as Z ∼ N_n(0_n, I_n), where 0_n is the n-vector with i-th element E(Z_i) = 0 and I_n is the n × n covariance matrix, which equals the identity matrix. The characteristic function of Z is given by

E[exp(i t^T Z)] = exp(−\frac{1}{2} t^T t),    t^T = (t_1, t_2, ..., t_n).    (17)

The class of multivariate spherical distributions is an extension of the class of standard multivariate normal distributions.

Definition 2.2 (Spherical distributions). A random vector Z = (Z_1, ..., Z_n)^T is said to have an n-dimensional spherical distribution with characteristic generator φ if Z ∼ E_n(0_n, I_n, φ).

We will often use the notation S_n(φ) for E_n(0_n, I_n, φ) in the case of spherical distributions. From the definition above, we find as a corollary that Z ∼ S_n(φ) if and only if

E[exp(i t^T Z)] = φ(t^T t),    t^T = (t_1, t_2, ..., t_n).    (18)

Consider an n-dimensional random vector Y such that

Y \overset{d}{=} µ + A Z,    (19)

for some vector µ (n × 1), some matrix A (n × m) and some m-dimensional spherical random vector Z ∼ S_m(φ). Then it is straightforward to prove that Y ∼ E_n(µ, Σ, φ), where the variance-covariance matrix is given by Σ = A A^T.

From equation (18) it immediately follows that, if they exist, the correlation matrices of members of the class of spherical distributions are identical. That is, if we denote by Corr(Z) = (r_{ij}) the correlation matrix, then

r_{ij} = Cov(Z_i, Z_j) / \sqrt{Var(Z_i) Var(Z_j)} = 0 if i ≠ j, and 1 if i = j,

provided that for all i and j the covariances Cov(Z_i, Z_j) exist. The factor −2φ'(0) in the covariance matrix, as shown in (91), enters into the covariance structure but cancels in the correlation matrix. Note that although the correlations between different components are 0, this does not imply that these components are mutually independent. This can only be true if the spherical vector belongs to the family of multivariate normal distributions.

From the characteristic functions of Z and a^T Z, one immediately finds that Z ∼ S_n(φ) if and only if for any n-dimensional vector a, one has

a^T Z / \sqrt{a^T a} ∼ S_1(φ).    (20)

As a special case we find that any component Z_i of Z has a S_1(φ) distribution.

From the results concerning elliptical distributions, we find that if a spherical random vector Z ∼ S_n(φ) has a density f_Z(z), then it will be of the form

f_Z(z) = c g(z^T z),    (21)

where the density generator g satisfies condition (96) and the normalising constant c satisfies (97). Furthermore, the opposite also holds: any non-negative function g(·) satisfying condition (96) can be used to define an n-dimensional density c g(z^T z) of a spherical distribution, with the normalising constant c satisfying (97). One often writes S_n(g) for the n-dimensional spherical distribution generated by the density generator g(·).

2.3 Conditional distributions

It is well-known that if (Y, Λ) has a bivariate normal distribution with σ_Λ > 0 and σ_Y > 0, then the conditional distribution of Y, given that Λ = λ, is normal with mean and variance given by

E(Y | Λ = λ) = E(Y) + r(Y, Λ) \frac{σ_Y}{σ_Λ} [λ − E(Λ)]    (22)

and

Var(Y | Λ = λ) = (1 − r(Y, Λ)^2) σ_Y^2,    (23)

where r(Y, Λ) is the correlation coefficient between Y and Λ:

r(Y, Λ) = \frac{Cov(Y, Λ)}{\sqrt{Var(Y) Var(Λ)}}.    (24)

In the following theorem it is stated that this conditioning result can be generalised to the class of bivariate elliptical distributions. This result will be useful in Section 4, where convex bounds for sums of dependent r.v.'s will be derived.

Theorem 2.1 (Conditional elliptical distribution). (See Fang et al. (1990), Theorem 2.18.) Let the random vector Y = (Y_1, ..., Y_n)^T ∼ E_n(µ, Σ, φ) with density generator denoted by g_n(·). Define Y and Λ to be linear combinations of the variates of Y, i.e. Y = α^T Y and Λ = β^T Y, for some α^T = (α_1, α_2, ..., α_n), β^T = (β_1, β_2, ..., β_n), and σ_Λ > 0, σ_Y > 0. Then we have that (Y, Λ) has an elliptical distribution:

(Y, Λ) ∼ E_2(µ_{(Y,Λ)}, Σ_{(Y,Λ)}, φ),    (25)

where the respective parameters are given by

µ_{(Y,Λ)} = \begin{pmatrix} µ_Y \\ µ_Λ \end{pmatrix} = \begin{pmatrix} α^T µ \\ β^T µ \end{pmatrix},    (26)

Σ_{(Y,Λ)} = \begin{pmatrix} α^T Σ α & α^T Σ β \\ α^T Σ β & β^T Σ β \end{pmatrix} = \begin{pmatrix} σ_Y^2 & r(Y, Λ) σ_Y σ_Λ \\ r(Y, Λ) σ_Y σ_Λ & σ_Λ^2 \end{pmatrix}.    (27)

Furthermore, conditionally given Λ = λ, the r.v. Y has a univariate elliptical distribution:

Y | Λ = λ ∼ E_1( µ_Y + r(Y, Λ) \frac{σ_Y}{σ_Λ} (λ − µ_Λ), (1 − r(Y, Λ)^2) σ_Y^2, g^a ),    (28)

where the density generator g^a(·) is given by

g^a(w) = \frac{g_2(w + a)}{\int_0^∞ u^{−1/2} g_2(u + a) du},    (29)

with a = (λ − µ_Λ)^2 / σ_Λ^2.

If we denote by φ^a the characteristic generator of Y | Λ = λ, then we find from Theorem 2.1 that the characteristic function of Y | Λ = λ can be expressed as

E(exp(i Y t) | Λ = λ) = exp(i µ_{Y|Λ=λ} t) φ^a(σ_{Y|Λ=λ}^2 t^2),    (30)

where

µ_{Y|Λ=λ} = µ_Y + r(Y, Λ) \frac{σ_Y}{σ_Λ} (λ − µ_Λ)    (31)

and

σ_{Y|Λ=λ}^2 = (1 − r(Y, Λ)^2) σ_Y^2.    (32)

Note that (29) and (30) can then be used to determine φ^a, but unfortunately it is often too difficult to derive φ^a or g^a explicitly. An important exception is when Y ∼ N_n(µ, Σ), in which case we easily find that

g^a(w) = φ^a(w) = exp(−\frac{w}{2}),    (33)

which reflects the well-known fact that the conditional distributions of a multivariate normal distribution are again normal.

3 Log-Elliptical Distributions

Multivariate log-elliptical distributions are natural generalisations of multivariate log-normal distributions. For any n-dimensional vector x = (x_1, ..., x_n)^T with positive components x_i, we define

log x = (log x_1, log x_2, ..., log x_n)^T.

Recall that an n-dimensional random vector X has a multivariate log-normal distribution if log X has a multivariate normal distribution. In this case we have that log X ∼ N_n(µ, Σ).

Definition 3.1 (Multivariate log-elliptical distribution). The random vector X is said to have a multivariate log-elliptical distribution with parameters µ and Σ if log X has an elliptical distribution:

log X ∼ E_n(µ, Σ, φ).    (34)

In the remainder of the paper we shall denote log X ∼ E_n(µ, Σ, φ) as X ∼ LE_n(µ, Σ, φ). When µ = 0_n and Σ = I_n, we shall write X ∼ LS_n(φ). Clearly, if Y ∼ E_n(µ, Σ, φ) and X = exp(Y), then X ∼ LE_n(µ, Σ, φ).

If the density of Y = log X ∼ E_n(µ, Σ, φ) exists, then the density of X ∼ LE_n(µ, Σ, φ) also exists. From (95), it follows that the density of X must be equal to

f_X(x) = \frac{c}{\sqrt{|Σ|}} \left[ \prod_{k=1}^{n} x_k^{−1} \right] g( (log x − µ)^T Σ^{−1} (log x − µ) ),    (35)

see Fang et al. (1990). The density of the multivariate log-normal distribution with parameters µ and Σ follows from (100) and (35). Furthermore, any marginal distribution of a log-elliptical distribution is again log-elliptical. This immediately follows from the properties of elliptical distributions.


Theorem 3.1 (Some properties of log-elliptical distributions). Let X ∼ LE_n(µ, Σ, φ). If the mean of X_k exists, then it is given by

E(X_k) = exp(µ_k) φ(−σ_k^2).    (36)

Provided the covariances exist, they are given by

Cov(X_k, X_l) = exp(µ_k + µ_l) [ φ(−(σ_k^2 + σ_l^2 + 2σ_{kl})) − φ(−σ_k^2) φ(−σ_l^2) ].    (37)

Proof. Define the vector a_k = (0, 0, ..., 0, 1, 0, ..., 0)^T to consist of all zero entries, except for the k-th entry, which is 1. Thus, for k = 1, 2, ..., n, we have

E(X_k) = E(exp(Y_k)) = E[exp(a_k^T Y)] = ϕ_{a_k^T Y}(−i) = exp(a_k^T µ) φ(−a_k^T Σ a_k) = exp(µ_k) φ(−σ_k^2),

and the result for the mean immediately follows. For the covariance, first define the vector b_{kl} = (0, 0, ..., 1, 0, ..., 0, 1, 0, ..., 0)^T to consist of all zero entries, except for the k-th and l-th entries, which are each 1. Note that Cov(X_k, X_l) = E(X_k X_l) − E(X_k) E(X_l), where

E(X_k X_l) = E[exp(b_{kl}^T Y)] = ϕ_{b_{kl}^T Y}(−i) = exp(b_{kl}^T µ) φ(−b_{kl}^T Σ b_{kl}) = exp(µ_k + µ_l) φ(−(σ_k^2 + σ_l^2 + 2σ_{kl})),

and the result should now be obvious.

Note that for the different means and covariances of X ∼ LE_n(µ, Σ, φ) to exist, the characteristic generator φ(u), which is defined on [0, ∞), must be positively extendable to intervals of the type [−δ, ∞) for some δ > 0 sufficiently large, and this is not possible in all instances. For example, from (109) we see that, since the modified Bessel function of the third kind is only defined on the positive interval, the characteristic generator of a Student-t distribution cannot be extended to the negative interval at all.

Some risk measures for log-elliptical distributions are derived in the next theorem.

Theorem 3.2 (Risk measures for log-elliptical distributions). Let X ∼ LE_1(µ, σ^2, φ) and Z ∼ S_1(φ) with density f_Z(x) and survival function S_Z(x). Then we find that

F_X^{−1}(p) = exp(µ + σ F_Z^{−1}(p)),    0 < p < 1.    (38)

Moreover, if E[X] is well defined and φ(−σ^2) > 0, then we also find that

E[(X − d)_+] = exp(µ) φ(−σ^2) S_{Z*}\left( \frac{log d − µ}{σ} \right) − d S_Z\left( \frac{log d − µ}{σ} \right),    d > 0,

E[X | X > F_X^{−1}(p)] = \frac{exp(µ) φ(−σ^2) S_{Z*}(S_Z^{−1}(1 − p))}{1 − p},    0 < p < 1,    (39)

where the density of Z* is given by

f_{Z*}(x) = \frac{f_Z(x) exp(σx)}{φ(−σ^2)}.    (40)

Proof. (1) The quantiles of X follow immediately from log X \overset{d}{=} µ + σZ.

(2) Using f_X(x) = \frac{1}{σx} f_Z\left( \frac{log x − µ}{σ} \right) and substituting x by t = \frac{log x − µ}{σ}, we find that

\int_d^∞ x f_X(x) dx = exp(µ) φ(−σ^2) S_{Z*}\left( \frac{log d − µ}{σ} \right),

where S_{Z*} refers to the survival function of Z*. The stated result then follows from

E[(X − d)_+] = \int_d^∞ x f_X(x) dx − d F_Z\left( \frac{µ − log d}{σ} \right).

(3) The expression for the tail conditional expectation follows from

E[X | X > F_X^{−1}(p)] = F_X^{−1}(p) + \frac{1}{1 − p} E[(X − F_X^{−1}(p))_+].

Note that the density of Z* in the theorem above can be interpreted as the Esscher transform with parameter σ of Z. Furthermore, note that the expression for the quantiles holds for any one-dimensional elliptical distribution, whereas the expressions for the stop-loss premiums and tail conditional expectations were only derived for continuous elliptical distributions under some suitable restrictions. We also have that if g is the normalised density generator of Z ∼ S_1(φ), then

F_Z(x) = \int_{−∞}^{x} g(z^2) dz.
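Specialising Theorem 3.1 to the log-normal family, whose characteristic generator is φ(u) = exp(−u/2), gives the familiar log-normal moment formulas. The simulation below is our own check with hypothetical parameters, confirming (36) and (37) against a Monte Carlo sample:

```python
import math, random

random.seed(2)

phi = lambda u: math.exp(-u / 2.0)   # characteristic generator of the normal family

mu1, mu2 = 0.2, -0.1                 # hypothetical log-means
s1, s2, s12 = 0.5, 0.4, 0.1          # std devs of the logs and their covariance

# Theorem 3.1, eqs. (36)-(37), in the log-normal special case
mean_thm = math.exp(mu1) * phi(-s1 ** 2)
cov_thm = math.exp(mu1 + mu2) * (phi(-(s1 ** 2 + s2 ** 2 + 2 * s12))
                                 - phi(-s1 ** 2) * phi(-s2 ** 2))

# simulate (Y1, Y2) bivariate normal with Cov(Y1, Y2) = s12, then X = exp(Y)
N = 300_000
b = s12 / s1
c = math.sqrt(s2 ** 2 - b ** 2)
xs = []
for _ in range(N):
    w1, w2 = random.gauss(0, 1), random.gauss(0, 1)
    xs.append((math.exp(mu1 + s1 * w1), math.exp(mu2 + b * w1 + c * w2)))

m1 = sum(x for x, _ in xs) / N
m2 = sum(y for _, y in xs) / N
cov_mc = sum((x - m1) * (y - m2) for x, y in xs) / N
```

For generators that cannot be positively extended to negative arguments (e.g. the Student-t case mentioned above), `phi(-s1 ** 2)` would simply not be defined, which is exactly the existence caveat of the theorem.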

4 Convex Order Bounds for Sums of Random Variables

This section describes convex bounds for (the distribution function of) sums of r.v.'s as presented in Dhaene et al. (2002a, 2002b). We first introduce some well-known actuarial ordering concepts which are essential ingredients for developing the bounds.

4.1 Actuarial Orderings

In the remainder of the paper we assume all r.v.'s to have a finite mean. In this case, we find

E(X) = \int_0^∞ \bar{F}_X(x) dx − \int_{−∞}^0 F_X(x) dx,    (41)

and also that the stop-loss premium E[(X − d)_+] can be written as

E[(X − d)_+] = \int_d^∞ \bar{F}_X(x) dx,    (42)

which can be interpreted as a measure for the weight of the upper tail of the distribution of X from d on. In actuarial science, it is common to replace a r.v. by another one which is "less attractive", hence "safer", and with a simpler structure, so that the distribution function is easier to determine. The notion of "less attractive" can be translated in terms of the stop-loss and convex orders, as defined below.

Definition 4.1 (Stop-loss order). A r.v. X is said to precede another r.v. Y in stop-loss order, written as X ≤_sl Y, if

E[(X − d)_+] ≤ E[(Y − d)_+]  for all d.    (43)


It can be proven that

X ≤_sl Y ⇔ E[v(X)] ≤ E[v(Y)]    (44)

holds for all increasing convex functions v(x), which also explains why stop-loss order is also called increasing convex order, denoted by ≤_icx. If in addition the r.v.'s X and Y have equal means, we obtain the convex order.

Definition 4.2 (Convex order). A r.v. X is said to precede another r.v. Y in convex order, written as X ≤_cx Y, if X ≤_sl Y and E(X) = E(Y).

It can be proven that

X ≤_cx Y ⇔ E[v(X)] ≤ E[v(Y)]    (45)

for all convex functions v(x). The convex order reflects the common preferences of all risk-averse decision makers when choosing between r.v.'s with equal means. This holds both in the classical utility theory of von Neumann & Morgenstern and in Yaari's dual theory of decision making under risk; see for instance Denuit et al. (1999) for more details.

4.2 Comonotonicity

Consider an n-dimensional random vector X = (X_1, ..., X_n)^T with multivariate distribution function given by F_X(x) = Pr(X_1 ≤ x_1, ..., X_n ≤ x_n), for any x = (x_1, ..., x_n)^T. It is well-known that this multivariate distribution function satisfies the so-called Fréchet bounds:

max\left( \sum_{k=1}^{n} F_{X_k}(x_k) − (n − 1), 0 \right) ≤ F_X(x) ≤ min( F_{X_1}(x_1), ..., F_{X_n}(x_n) ),

see Hoeffding (1940) or Fréchet (1951).

Definition 4.3 (Comonotonicity). A random vector X is said to be comonotonic if its joint distribution is given by the Fréchet upper bound, i.e.,

F_X(x) = min( F_{X_1}(x_1), ..., F_{X_n}(x_n) ).

Alternative characterisations of comonotonicity of a random vector are given in the following theorem, the proof of which can be found in Dhaene et al. (2002a).

Theorem 4.1 (Characterisation of comonotonicity). Suppose X is an n-dimensional random vector. Then the following statements are equivalent:

1. X is comonotonic.

2. X \overset{d}{=} (F_{X_1}^{−1}(U), ..., F_{X_n}^{−1}(U)) for U ∼ Uniform(0, 1), where F_{X_k}^{−1}(·) denotes the quantile function defined by F_{X_k}^{−1}(q) = inf{ x ∈ R | F_{X_k}(x) ≥ q }, 0 ≤ q ≤ 1.

3. There exists a r.v. Z and non-decreasing functions h_1, ..., h_n such that X \overset{d}{=} (h_1(Z), ..., h_n(Z)).


In the sequel, we shall use the superscript c to denote comonotonicity of a random vector. Hence, the vector X^c = (X_1^c, ..., X_n^c) is a comonotonic random vector with the same marginals as the vector (X_1, ..., X_n). The former vector is called the comonotonic counterpart of the latter. Consider the comonotonic random sum

S^c = X_1^c + ··· + X_n^c.    (46)

In Dhaene et al. (2002a), it is proven that each quantile of S^c is equal to the sum of the corresponding quantiles of the marginals involved:

F_{S^c}^{−1}(q) = \sum_{k=1}^{n} F_{X_k}^{−1}(q),    0 ≤ q ≤ 1.    (47)

Furthermore, they showed that in case all marginal distributions F_{X_k} are strictly increasing, the stop-loss premiums of a comonotonic sum S^c can easily be computed from the stop-loss premiums of the marginals:

E[(S^c − d)_+] = \sum_{k=1}^{n} E[(X_k − d_k)_+],    (48)

where the d_k's are determined by

d_k = F_{X_k}^{−1}(F_{S^c}(d)).    (49)

This result can be extended to the case of marginal distributions that are not necessarily strictly increasing; see Dhaene et al. (2002a).
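The quantile additivity (47) is straightforward to verify numerically: for log-normal marginals driven by one common standard normal r.v. (so that the vector is comonotonic, by statement 3 of Theorem 4.1), the empirical quantile of the sum matches the sum of the marginal quantiles. A sketch with hypothetical marginals:

```python
import math, random

random.seed(3)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p):
    # standard normal quantile by bisection (accurate enough for a sketch)
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# hypothetical log-normal marginals X_k = exp(mu_k + sigma_k * Z)
params = [(0.0, 0.3), (0.5, 0.2), (-0.2, 0.4)]
q = 0.95

# eq. (47): the quantile of the comonotonic sum is the sum of the quantiles
quantile_thm = sum(math.exp(mu + s * norm_ppf(q)) for mu, s in params)

# empirical check: a comonotonic vector arises from one common driver Z
N = 100_000
sims = sorted(sum(math.exp(mu + s * z) for mu, s in params)
              for z in (random.gauss(0.0, 1.0) for _ in range(N)))
quantile_emp = sims[int(q * N)]
```

The same construction underlies the explicit quantile formula (66) for the comonotonic upper bound in Section 5.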

4.3 Convex Order Bounds

In this subsection, we present convex upper and lower bounds for the sum

S = X_1 + ··· + X_n.    (50)

Proofs for these bounds can be found in Kaas, Dhaene & Goovaerts (2000) or Dhaene et al. (2002a). It can be shown that

S ≤_cx S^c,    (51)

which implies that S^c is indeed a convex order upper bound for S.

Let us now suppose that we have additional information available about the dependency structure of X, in the sense that there is some r.v. Λ with a known distribution function and such that we also know the distributions of the r.v.'s X_k | Λ = λ for all outcomes λ of Λ and for all k = 1, ..., n. Let F_{X_k|Λ}^{−1}(U) be a notation for the r.v. f_k(U, Λ), where the function f_k is defined by f_k(u, λ) = F_{X_k|Λ=λ}^{−1}(u). Now, consider the random vector X^u = (X_1^u, ..., X_n^u)^T, where X_k^u is given by

X_k^u = F_{X_k|Λ}^{−1}(U).    (52)

The improved upper bound (corresponding to Λ) of the sum S is then defined as

S^u = X_1^u + ··· + X_n^u.    (53)

Notice that the random vectors X^c and X^u have the same marginals. It can be proven that

S ≤_cx S^u ≤_cx S^c,    (54)


which means that the sum S^u is indeed an improved upper bound (in the sense of convex order) for the original sum S. For the proof we refer to Dhaene et al. (2002a).

Finally, consider the random vector X^l = (X_1^l, ..., X_n^l)^T, where X_k^l is given by

X_k^l = E(X_k | Λ).    (55)

Using Jensen's inequality, it is straightforward to prove that the sum

S^l = X_1^l + ··· + X_n^l    (56)

is a convex order lower bound for S:

S^l ≤_cx S.    (57)

For the proof we refer to Dhaene et al. (2002a).
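Since S^l ≤_cx S ≤_cx S^c with all three sums sharing the same mean, they must in particular have ordered variances. The Monte Carlo sketch below, our own illustration for a bivariate log-normal sum with conditioning r.v. Λ = Y_1 + Y_2 and hypothetical parameters, exhibits this ordering:

```python
import math, random

random.seed(4)

# hypothetical bivariate log-normal setup: S = exp(Y1) + exp(Y2), E(Y) = 0
s1, s2, s12 = 0.5, 0.4, 0.05          # std devs of (Y1, Y2) and their covariance
var_L = s1**2 + s2**2 + 2 * s12       # Lambda = Y1 + Y2
sd_L = math.sqrt(var_L)
r1 = (s1**2 + s12) / (s1 * sd_L)      # corr(Y1, Lambda)
r2 = (s2**2 + s12) / (s2 * sd_L)      # corr(Y2, Lambda)

N = 200_000
S, Sl, Sc = [], [], []
b = s12 / s1
c = math.sqrt(s2**2 - b**2)
for _ in range(N):
    w1, w2 = random.gauss(0, 1), random.gauss(0, 1)
    y1 = s1 * w1
    y2 = b * w1 + c * w2
    S.append(math.exp(y1) + math.exp(y2))
    w = (y1 + y2) / sd_L              # standardised conditioning r.v.
    # S^l = E(S | Lambda), eq. (55), using the normal conditional distribution
    Sl.append(math.exp(r1 * s1 * w + 0.5 * s1**2 * (1 - r1**2))
              + math.exp(r2 * s2 * w + 0.5 * s2**2 * (1 - r2**2)))
    Sc.append(math.exp(s1 * w1) + math.exp(s2 * w1))   # comonotonic: common driver

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m)**2 for x in xs) / len(xs)

var_l, var_s, var_c = var(Sl), var(S), var(Sc)
```

The variance ordering is only a necessary consequence of (51) and (57); the convex order itself is the stronger statement about all stop-loss premiums.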

5 Convex Order Bounds for log-Elliptical Sums

In this section we develop convex lower and upper bounds for sums involving log-elliptical r.v.'s. This generalises the results for the log-normal case as obtained in Kaas, Dhaene & Goovaerts (2000). Consider a series of deterministic non-negative payments α_1, ..., α_n, due at times 1, ..., n respectively. The present value r.v. S is defined by

S = \sum_{i=1}^{n} α_i exp[−(Y_1 + ··· + Y_i)],    (58)

where the r.v. Y_i represents the continuously compounded rate of return over the period (i − 1, i), i = 1, 2, ..., n. Furthermore, define Y(i) = Y_1 + ··· + Y_i, the sum of the first i elements of the random vector Y = (Y_1, ..., Y_n)^T, and X_i = α_i exp[−Y(i)]. Using these notations, we can write the present value r.v. in (58) as

S = \sum_{i=1}^{n} α_i exp(−Y(i)) = \sum_{i=1}^{n} X_i.    (59)

We will assume that the return vector Y = (Y_1, ..., Y_n)^T belongs to the class of multivariate elliptical distributions, i.e. Y ∼ E_n(µ, Σ, φ), with parameters µ and Σ given by

µ = (µ_1, ..., µ_n)^T,    Σ = (σ_{kl})  for k, l = 1, 2, ..., n.    (60)

Thus, the random vector X = (X_1, ..., X_n)^T is a log-elliptical random vector. From (16), we find that Y(i) ∼ E_1(µ(i), σ^2(i), φ) with

µ(i) = \sum_{k=1}^{i} µ_k,    (61)

σ^2(i) = \sum_{k=1}^{i} \sum_{l=1}^{i} σ_{kl}.    (62)

In order to develop lower and improved upper bounds for S, we define the conditioning r.v. Λ as a linear combination of the r.v.'s Y_i, i = 1, 2, ..., n:

Λ = \sum_{i=1}^{n} γ_i Y_i.

Using the property of elliptical distributions described in (12), we know that Λ ∼ E_1(µ_Λ, σ_Λ^2, φ) where

µ_Λ = \sum_{i=1}^{n} γ_i µ_i    (63)

and

σ_Λ^2 = \sum_{i,j=1}^{n} γ_i γ_j σ_{ij}.    (64)

Note that if the mean and variance of Λ exist, then they are given by E(Λ) = µ_Λ and Var(Λ) = −2φ'(0) σ_Λ^2, respectively.

Theorem 5.1 (Convex bounds). Let S be the present value sum as defined in (58), with α_i, i = 1, ..., n, non-negative real numbers and Y ∼ E_n(µ, Σ, φ). Let the conditioning r.v. Λ be defined by

Λ = \sum_{i=1}^{n} γ_i Y_i,    (65)

and let r_i denote the correlation between Y(i) and Λ. Then the comonotonic upper bound S^c is given by

S^c = \sum_{i=1}^{n} α_i exp(−µ(i) + σ(i) F_Z^{−1}(U)),    (66)

where U is a uniformly distributed r.v. on (0, 1) and Z ∼ S_1(φ). Furthermore, provided all E(X_i) are well defined and φ^a(−σ^2(i)(1 − r_i^2)) > 0, the convex lower bound S^l is given by

S^l = \sum_{i=1}^{n} α_i exp\left( −µ(i) − r_i σ(i) \frac{Λ − µ_Λ}{σ_Λ} \right) φ^a(−σ^2(i)(1 − r_i^2)).    (67)

Proof. From

X_i \overset{d}{=} α_i exp(−µ(i) − σ(i) Z),

and because the quantile of a comonotonic sum is the sum of the component quantiles, we find that

F_{S^c}^{−1}(p) = \sum_{i=1}^{n} F_{X_i}^{−1}(p) = \sum_{i=1}^{n} α_i exp(−µ(i) + σ(i) F_Z^{−1}(p)),    0 < p < 1.

Hence, the comonotonic upper bound S^c of S is given by (66).

In order to derive the lower bound S^l, we first determine the characteristic function of the bivariate random vector (Y(i), Λ) for i = 1, ..., n. For any 2-vector (t_1, t_2)^T, we find

E[exp(i(t_1 Y(i) + t_2 Λ))] = E[exp(i s^T Y)]

with

s_k = t_1 + t_2 γ_k  for k = 1, ..., i,    and    s_k = t_2 γ_k  for k = i + 1, ..., n.

As Y ∼ E_n(µ, Σ, φ), this leads to

E[exp(i(t_1 Y(i) + t_2 Λ))] = exp(i s^T µ) φ(s^T Σ s) = exp(i t^T µ^*) φ(t^T Σ^* t)

14

Convex Order Bounds for log-Elliptical Sums with µ∗

T

=

(µ (i) , µΛ ) ,  2  σ (i) σ ∗1,2 = σ ∗2,1 σ 2Λ

Σ∗ and

σ ∗1,2 = σ ∗2,1 = ri σ(i)σ Λ . Hence, we can conclude that the bivariate random vector (Y (i) , Λ) is elliptical: T

(Y (i) , Λ) ∼ E2 (µ∗ , Σ∗ , φ) .   Thus, from Theorem (2.1), we know that (Y (I) |Λ = λ ) ∼ E1 µY (i)|Λ , σ 2Y (i)|Λ , φa with µY (i)|Λ = E (Y (i) |Λ = λ ) = µ (i) + ri

σ (i) (λ − µΛ ) σΛ

(68)

and  σ 2Y (i)|Λ = V ar (Y (i) |Λ = λ ) = σ 2 (i) 1 − ri2 .

(69)

Consequently, using the moment generating function of an elliptical distribution and the results in Section 2.5 we have that E (αk exp (−Y (i)) |Λ = λ )

= =

αi E (exp (−Y (i)) |Λ = λ ) = αi MY (i)|Λ (−1)     αi exp −µY (i)|Λ · φa −σ 2Y (i)|Λ .

Hence, we find that a lower bound for the present value random sum in (58) can be written as S l = X1l + · · · + Xnl with Xil

= E (αi exp (−Y (i)) |Λ )    Λ − µΛ · φa −σ 2 (i) 1 − ri2 . = αi exp −µ (i) − ri σ (i) σΛ (70)

Note that

Λ−µΛ σΛ

∼ S1 (φ)

The quantile of $S^c$ then follows from (66):
$$Q_p\left[S^c\right] = \sum_{i=1}^{n}\alpha_i\exp\left(-\mu(i) + \sigma(i)F_Z^{-1}(p)\right), \qquad 0 < p < 1. \qquad (71)$$
While expression (67) for the convex lower bound is elegant, it involves the conditional characteristic generator $\phi_a$, which depends on the conditioning value of $\Lambda$ and is in general not available in explicit form. For example, in the case of the multivariate normal family, we find from (33) that (67) can be expressed in closed form as
$$S^l = \sum_{i=1}^{n}\alpha_i\exp\left(-\mu(i) + \tfrac{1}{2}\sigma^2(i)\left(1 - r_i^2\right) - r_i\sigma(i)\frac{\Lambda - \mu_\Lambda}{\sigma_\Lambda}\right). \qquad (72)$$
Apart from the multivariate normal family, we are not aware of another multivariate elliptical distribution for which $\phi_a$ can be readily obtained, and this limits the practical applicability of the convex lower bound for general sums of log-elliptical risks. This is the reason why we will proceed with developing other approximations.
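In the multivariate normal case, the closed-form expression (72) also yields an explicit quantile for $S^l$. A minimal sketch (our own, not the authors' code), assuming all $r_i \geq 0$ so that $S^l$ is a monotone function of $\Lambda$:

```python
from math import exp
from statistics import NormalDist

def q_lower_bound_normal(p, alpha, mu_i, sig_i, r):
    """p-quantile of the closed-form normal lower bound (72),
    assuming all r_i >= 0 (S^l then decreases with Lambda)."""
    z = NormalDist().inv_cdf(p)  # in the normal case (Lambda - mu_L)/sigma_L ~ N(0,1)
    return sum(a * exp(-m + 0.5 * s * s * (1 - ri * ri) + ri * s * z)
               for a, m, s, ri in zip(alpha, mu_i, sig_i, r))
```

For $r_i = 0$ a term collapses to its expectation $\alpha_i\exp(-\mu(i) + \sigma^2(i)/2)$, independent of $p$, which is a quick consistency check.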

6 Closed-Form Approximations for log-Elliptical Sums

6.1 Approximations based on stop-loss order

As mentioned in the previous section, $\phi_a$ cannot be readily obtained, and this limits the practical applicability of the convex lower bound. This is the reason why in this section we present approximations that allow closed-form calculations.

Theorem 6.1. Using the notation introduced above, let us consider the r.v. $S^{SL} = \sum_{i=1}^{n}\alpha_i\exp\left(E\left[-Y(i)\,|\,\Lambda\right]\right)$. We find that $S^{SL}$ is given by
$$S^{SL} = \sum_{i=1}^{n}\alpha_i\exp\left(-\mu(i) - r_i\sigma(i)\frac{\Lambda - \mu_\Lambda}{\sigma_\Lambda}\right), \qquad (73)$$
and
$$S^{SL} \leq_{sl} S. \qquad (74)$$
Furthermore, we also define a r.v. $S^B$ given by
$$S^B = \sum_{i=1}^{n}\alpha_i\,\frac{\phi\left(-\sigma^2(i)\right)}{\phi\left(-\sigma^2(i)r_i^2\right)}\exp\left(-\mu(i) - r_i\sigma(i)\frac{\Lambda - \mu_\Lambda}{\sigma_\Lambda}\right), \qquad (75)$$
where it is tacitly assumed that $\phi\left(-\sigma^2(i)\right)$ exists and that $\phi\left(-\sigma^2(i)r_i^2\right) > 0$ for all $i$, and we find that
$$E\left[S^B\right] = E[S]. \qquad (76)$$
Finally, let us also consider the r.v. $S^{LN}$ given by
$$S^{LN} = \sum_{i=1}^{n}\alpha_i\exp\left(-\mu(i) + \tfrac{1}{2}\left(1 - r_i^2\right)\sigma^2(i) - r_i\sigma(i)\frac{\Lambda - \mu_\Lambda}{\sigma_\Lambda}\right). \qquad (77)$$
Then, in case $\mathbf{Y} \sim N_n(\mu, \Sigma)$, we have that
$$S^B = S^{LN} = S^l. \qquad (78)$$

Proof. Since
$$E\left(Y(i)\,|\,\Lambda = \lambda\right) = \mu(i) + r_i\frac{\sigma(i)}{\sigma_\Lambda}\left(\lambda - \mu_\Lambda\right), \qquad (79)$$
we immediately find that
$$S^{SL} = \sum_{i=1}^{n}\alpha_i\exp\left(E\left(-Y(i)\,|\,\Lambda\right)\right) = \sum_{i=1}^{n}\alpha_i\exp\left(-\mu(i) - r_i\frac{\sigma(i)}{\sigma_\Lambda}\left[\Lambda - E(\Lambda)\right]\right). \qquad (80)$$
Moreover, since the exponential function is convex, we find from (44) that $S^{SL} \leq_{sl} S$. Furthermore, we also find easily that
$$E[S] = \sum_{i=1}^{n}\alpha_i\exp\left(-\mu(i)\right)\phi\left(-\sigma^2(i)\right) \qquad (81)$$
and
$$E\left[S^{SL}\right] = \sum_{i=1}^{n}\alpha_i\exp\left(-\mu(i)\right)\phi\left(-r_i^2\sigma^2(i)\right), \qquad (82)$$
from which it follows that $E\left[S^B\right] = E[S]$. Finally, when $\mathbf{Y} \sim N_n(\mu, \Sigma)$, we find from (33) that
$$\frac{\phi\left(-\sigma^2(i)\right)}{\phi\left(-\sigma^2(i)r_i^2\right)} = \exp\left(\tfrac{1}{2}\left(1 - r_i^2\right)\sigma^2(i)\right), \qquad (83)$$

and hence $S^B = S^{LN} = S^l$ follows immediately in this case.

Provided all $r_i \geq 0$, so that each of these r.v.'s is a non-increasing function of $\Lambda$, the quantiles of $S^B$, $S^{LN}$ and $S^{SL}$ are given by
$$Q_p\left(S^B\right) = \sum_{i=1}^{n}\alpha_i\,\frac{\phi\left(-\sigma^2(i)\right)}{\phi\left(-\sigma^2(i)r_i^2\right)}\exp\left(-\mu(i) + r_i\sigma(i)F_Z^{-1}(p)\right), \qquad (84)$$
$$Q_p\left(S^{LN}\right) = \sum_{i=1}^{n}\alpha_i\exp\left(-\mu(i) + \tfrac{1}{2}\left(1 - r_i^2\right)\sigma^2(i) + r_i\sigma(i)F_Z^{-1}(p)\right), \qquad (85)$$
$$Q_p\left(S^{SL}\right) = \sum_{i=1}^{n}\alpha_i\exp\left(-\mu(i) + r_i\sigma(i)F_Z^{-1}(p)\right), \qquad (86)$$
where $F_Z^{-1}(p)$ is the $p$-quantile of the spherical distribution $Z \sim S_1(\phi)$.

In the remainder of the paper, we will say that approximations for the risk measures of $S$ that are based on the r.v.'s $S^c$ and $S^l$ are "convex upper bound approximations" and "convex lower bound approximations", respectively. The approximations based on $S^{SL}$ will be called "stop-loss lower bound approximations". Finally, when using $S^B$ and $S^{LN}$, we will use the terms "mean preserving approximations" and "normal based approximations", respectively. Note that apart from the fact that in the multivariate normal case $S^{LN}$ coincides with the convex lower bound $S^l$, there is no other compelling theoretical reason that supports its use.

6.2 Optimal choice of Λ

In the case of approximations based on the r.v. $S^l = E(S\,|\,\Lambda)$, Kaas et al. (2000) have proposed to take the conditioning r.v. $\Lambda$ as
$$\Lambda = \sum_{i=1}^{n}\alpha_i\exp\left(-E\left[Y(i)\right]\right)Y(i). \qquad (87)$$
Indeed, this choice makes $\Lambda$ a linear transformation of a first-order approximation of the sum $S$, so that $S^l = E(S\,|\,\Lambda) \approx S$. This follows from:
$$S = \sum_{i=1}^{n}\alpha_i\exp\left(-E\left[Y(i)\right]\right)\exp\left(-\left(Y(i) - E\left[Y(i)\right]\right)\right) \approx C - \sum_{i=1}^{n}\alpha_i\exp\left(-E\left[Y(i)\right]\right)Y(i), \qquad (88)$$
where $C$ is some appropriate constant. We note that the reasoning that supports the choice (87) for $\Lambda$, initially developed in a log-normal context, is also valid in a log-elliptical world. Furthermore, since it holds as a first approximation that $S^l$ is equal to $S^{SL}$, we suggest the choice (87) also for the approximations based on $S^{SL}$, $S^B$ and $S^{LN}$. For a detailed account of how to choose $\Lambda$ appropriately in a log-normal context, we refer to Vanduffel et al. (2004) and Vanduffel et al. (2008).
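Since $Y(i) = Y_1 + \cdots + Y_i$, the choice (87) corresponds to $\Lambda = \sum_k \gamma_k Y_k$ with $\gamma_k = \sum_{i \geq k} \alpha_i \exp\left(-E[Y(i)]\right)$. A small sketch (ours, assuming the means exist so that $E[Y(i)] = \mu(i)$):

```python
import numpy as np

def lambda_weights(alpha, mu):
    """Weights gamma_k such that the conditioning r.v. of (87),
    Lambda = sum_i alpha_i exp(-mu(i)) Y(i), equals sum_k gamma_k Y_k.
    Assumes the means exist, so E[Y(i)] = mu(i) = mu_1 + ... + mu_i."""
    alpha = np.asarray(alpha, float)
    mu_i = np.cumsum(np.asarray(mu, float))   # mu(i)
    w = alpha * np.exp(-mu_i)                 # coefficient of Y(i) in (87)
    # Y_k appears in Y(i) for every i >= k, so gamma_k = sum_{i>=k} w_i
    return w[::-1].cumsum()[::-1]
```

The resulting $\gamma_k$ can be fed directly into (63)-(64) to obtain $\mu_\Lambda$, $\sigma^2_\Lambda$ and the correlations $r_i$.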

7 Numerical Illustrations

In order to compare the performance of the different approximations presented above, we consider a r.v. $S$ which is defined as the random present value of a series of $n$ deterministic unit payment obligations due at times $1, 2, \ldots, n$, respectively:
$$S = \sum_{i=1}^{n}\exp\left(-\left(Y_1 + Y_2 + \ldots + Y_i\right)\right) \stackrel{\text{def}}{=} \sum_{i=1}^{n}\exp\left(Z_i\right),$$
where $(Y_1, \ldots, Y_n) \sim E_n(\mu, \Sigma, \phi)$, the r.v. $Y_i$ is the random return over the year $[i-1, i]$ and $\exp\left(-\left(Y_1 + Y_2 + \ldots + Y_i\right)\right)$ is the random discount factor over the period $[0, i]$. Let us assume that the yearly returns $Y_i$ are identically distributed and uncorrelated, with mean $\left(\mu - \frac{\sigma^2}{2}\right)$ and variance $\sigma^2$. We compute provisions set up at time 0 and determined as $Q_p(S)$, for some sufficiently high probability level $p$, such that future unit payments can be met. A provision equal to $Q_{0.95}(S)$, for instance, will guarantee that all payments can be made with a probability of 0.95; see also Vanduffel, Dhaene, Goovaerts & Kaas (2003) for a similar problem setting.

We use Monte Carlo simulation results as the benchmark; these are based on generating 500,000 random paths. The tables show the results obtained by Monte Carlo simulation (MC) for $Q_{0.95}(S)$, as well as the deviations of the different approximation methods relative to these Monte Carlo results. These deviations are defined as
$$\frac{Q_p\left(S^{\text{method}}\right) - Q_p\left(S^{MC}\right)}{Q_p\left(S^{MC}\right)} \cdot 100\%,$$
where $S^{\text{method}}$ corresponds to the approximated result using one of the approximation methods based on $S^l$, $S^{SL}$, $S^B$ or $S^{LN}$. The numbers displayed in bold represent the best approximation result, i.e. the smallest deviation from the simulated result.

For many years, the empirical literature in finance has been discussing which distribution is most suitable to model stochastic rates of return. It is well known that short-term returns (daily, weekly, monthly) are sharply peaked and heavy tailed (Cont (2001), Farmer (1999)) and cannot be described by Gaussian distributions. In the literature, many parametric models for short-term returns have been proposed, including stable distributions (Mandelbrot (1963)), the Student-t distribution (Blattberg & Gonedes (1974)) and hyperbolic distributions (Eberlein et al. (1998)), amongst others.
Some empirical studies indicate that as soon as the periodicity of the returns is longer than one year, the assumption of a Gaussian model for the returns is appropriate (Cesari & Cremonini (2003), McNeil (2005)). However, several authors claim that even when the conditions for the Central Limit Theorem hold, the convergence in the tails may be very slow (Bradley & Taqqu (2003)), and the hypothesis of a normal model may then be valid in the central part of the return distribution only. Hence, we suggest the Student-t and Laplace distributions as suitable alternatives to the classical Gaussian framework for modelling the returns. We refer to Kotz & Nadarajah (2004) and Kotz et al. (2000) for algorithms that allow efficient simulation of the Student-t and Laplace distributions, respectively.
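A rough sketch of such a Monte Carlo benchmark for the normal case (our own code, not the authors'; the seed and sample size are arbitrary choices):

```python
import numpy as np

def mc_quantile(p=0.95, n=20, mu=0.075, sigma=0.15, n_paths=100_000, seed=1):
    """Monte Carlo estimate of Q_p(S) for S = sum_i exp(-(Y_1+...+Y_i)),
    with i.i.d. normal yearly log-returns Y_i ~ N(mu - sigma^2/2, sigma^2)."""
    rng = np.random.default_rng(seed)
    Y = rng.normal(mu - 0.5 * sigma ** 2, sigma, size=(n_paths, n))
    S = np.exp(-np.cumsum(Y, axis=1)).sum(axis=1)   # present value per path
    return np.quantile(S, p)
```

With the Table 1 parameters ($\mu = 0.075$, $\sigma = 0.15$, $n = 20$) the estimate should be close to the reported MC value 20.506.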

Method               σ = 0.05          σ = 0.15          σ = 0.25
S^c                  3.26%             7.77%             9.82%
S^l (= S^B = S^LN)   0.00%             -0.25%            0.31%
S^SL                 -0.26%            -2.77%            -6.97%
MC (±s.e.)           12.194 (0.04%)    20.506 (0.10%)    41.409 (0.25%)

Table 1: Approximations for the 0.95-quantile of S for different volatilities in case of normally distributed logreturns (µ = 0.075; 20 yearly payments of 1). The table shows the results obtained by Monte Carlo simulation (MC) as well as the deviation of the different approximation methods relative to these Monte Carlo results. The figure between brackets represents the standard error on the Monte Carlo result.

Method       σ = 0.05          σ = 0.15          σ = 0.25
S^c          3.38%             7.88%             8.18%
S^l          N.A.              N.A.              N.A.
S^SL         -0.31%            -3.15%            -9.08%
S^B          N.A.              N.A.              N.A.
S^LN         -0.05%            -0.63%            -1.92%
MC (±s.e.)   12.311 (0.8%)     21.229 (0.11%)    44.846 (0.26%)

Table 2: Approximations for the 0.95-quantile of S for different volatilities in case of Student-t distributed logreturns with 20 degrees of freedom (µ = 0.075; 20 yearly payments of 1). The table shows the results obtained by Monte Carlo simulation (MC) as well as the deviation of the different approximation methods relative to these Monte Carlo results. The figure between brackets represents the standard error on the Monte Carlo result.

Method       σ = 0.05          σ = 0.15          σ = 0.25
S^c          8.73%             25.89%            43.61%
S^l          N.A.              N.A.              N.A.
S^SL         -0.40%            -4.45%            -11.18%
S^B          -0.14%            -1.60%            1.31%
S^LN         -0.15%            -1.98%            -3.35%
MC (±s.e.)   12.187 (0.8%)     20.734 (0.14%)    42.862 (0.25%)

Table 3: Approximations for the 0.95-quantile of S for different volatilities in case of Laplace-distributed logreturns (µ = 0.075; 20 yearly payments of 1). The table shows the results obtained by Monte Carlo simulation (MC) as well as the deviation of the different approximation methods relative to these Monte Carlo results. The figure between brackets represents the standard error on the Monte Carlo result.

In Table 1 we compare the different approximations for the 95% quantile of S for different levels of the yearly volatility, where the returns are log-normally distributed. The comonotonic upper bound performs reasonably in this case, but the approximations based on S^l, which coincide with the ones based on S^B and S^LN, turn out to fit the quantile best for all values of the parameter σ. Table 2 compares the approximations for the 95% quantile when the logreturns are t-distributed with 20 degrees of freedom. Note that for the Student-t distribution φ(−σ²(i)) does not exist (Student-t distributions have no moment generating function), and consequently the moment matching approximation based on S^B is out of reach. It is interesting to observe that the (naïve) approximation based on S^LN seems to outperform the other methods. Next, in Table 3 we see that in the case of Laplace distributed logreturns the approximations based on S^B outperform the other approximations.

Method               p = 0.995        p = 0.99         p = 0.95         p = 0.90         p = 0.75         p = 0.50         p = 0.25
S^c                  13.42%           11.97%           7.81%            5.50%            1.87%            -2.31%           -6.35%
S^l (= S^B = S^LN)   -0.79%           -0.56%           -0.22%           -0.17%           0.02%            -0.03%           -0.01%
S^SL                 -3.96%           -2.73%           -0.26%           0.53%            1.30%            0.85%            -0.67%
MC (±s.e.)           29.7787 (0.15%)  26.8748 (0.12%)  20.4991 (0.10%)  17.8418 (0.06%)  14.2264 (0.05%)  11.2057 (0.01%)  8.9218 (0.04%)

Table 4: Approximations for some selected quantiles of S in case of normally distributed logreturns (σ = 0.15; µ = 0.075; 20 yearly payments of 1). The table shows the results obtained by Monte Carlo simulation (MC) as well as the deviation of the different approximation methods relative to these Monte Carlo results. The figure between brackets represents the standard error on the Monte Carlo result.

Method       p = 0.995        p = 0.99         p = 0.95         p = 0.90         p = 0.75         p = 0.50          p = 0.25
S^c          13.61%           12.08%           7.88%            5.57%            1.83%            -2.38%            -6.64%
S^SL         -4.89%           -4.28%           -3.15%           -2.84%           -2.48%           -2.38%            -2.42%
S^LN         -2.26%           -1.68%           -0.63%           -0.37%           -0.1%            -0.099%           -0.24%
MC (±s.e.)   33.7305 (0.23%)  29.4721 (0.20%)  21.2288 (0.11%)  18.1762 (0.11%)  14.3077 (0.07%)  11.2136 (0.062%)  8.9053 (0.67%)

Table 5: Approximations for some selected quantiles of S in case of t-distributed logreturns (df = 20; σ = 0.15; µ = 0.075; 20 yearly payments of 1). The table shows the results obtained by Monte Carlo simulation (MC) as well as the deviation of the different approximation methods relative to these Monte Carlo results. The figure between brackets represents the standard error on the Monte Carlo result.

Method       p = 0.995        p = 0.99         p = 0.95         p = 0.90         p = 0.75         p = 0.50         p = 0.25
S^c          12.19%           10.21%           6.26%            3.90%            1.11%            -1.19%           -4.87%
S^SL         -8.28%           -7.72%           -4.27%           -3.22%           -2.07%           -1.17%           -1.82%
S^B          -5.26%           -4.76%           -1.42%           -0.44%           0.63%            1.45%            0.69%
MC (±s.e.)   41.4289 (0.48%)  33.6582 (0.29%)  20.695 (0.14%)   17.0119 (0.01%)  13.2745 (0.06%)  11.0764 (0.02%)  9.4446 (0.05%)

Table 6: Approximations for some selected quantiles of S in case of Laplace distributed logreturns (σ = 0.15; µ = 0.075; 20 yearly payments of 1). The table shows the results obtained by Monte Carlo simulation (MC) as well as the deviation of the different approximation methods relative to these Monte Carlo results. The figure between brackets represents the standard error on the Monte Carlo result.

Tables 4, 5 and 6 compare the different approximations for some selected quantiles of S, with a fixed volatility σ = 0.15 and parameter µ = 0.075, for normal, Student-t, and Laplace distributed logreturns, respectively. We observe that in case of normal or Laplace distributed logreturns the mean-preserving approximations based on S^B almost always outperform the other approximations. When the logreturns are t-distributed, we observe from Table 5 that the approximation based on S^LN appears to produce the most accurate results.

8 Concluding Remarks

In this paper we first focused on developing upper and lower convex order bounds for the distribution function of a sum of non-independent log-elliptical r.v.'s. We have extended results of Dhaene, Denuit, Goovaerts, Kaas & Vyncke (2002a, 2002b), who constructed general bounds for sums of dependent r.v.'s and applied these in the log-normal case. The extension to the class of elliptical distributions makes sense because it makes the ideas developed in Kaas, Dhaene & Goovaerts (2000) applicable also in situations where log-normal distributions do not fit and heavier-tailed distributions such as the Student-t are required. As multivariate log-elliptical distributions share many of the tractable properties of multivariate log-normal distributions, it is not surprising to find that the bounds developed for the log-elliptical case are very similar in form to those developed for the log-normal case. The convex upper bound is based on the sum of comonotonic r.v.'s, while convex lower and improved upper bounds can be constructed by conditioning on some additional available random variable. We have shown that, unfortunately, the convex lower bound cannot be obtained in explicit form in general. Finally, we have shown how the weaker stop-loss order can be used to develop more explicit approximations, and we have proposed three new approximations. We have also shown numerically that these newly proposed approximations are useful to measure satisfactorily the risk of discounted or compounded sums in case the stochastic returns are elliptically distributed.

References

[1] Blattberg, R. and Gonedes, N. (1974). "A Comparison of Stable and Student Distributions as Statistical Models for Stock Prices," Journal of Business 47: 244-280.
[2] Bradley, B. and Taqqu, M. (2003). "Financial Risk and Heavy Tails," in Handbook of Heavy Tailed Distributions in Finance. Amsterdam: Elsevier/North-Holland.
[3] Cambanis, S., Huang, S. and Simons, G. (1981). "On the Theory of Elliptically Contoured Distributions," Journal of Multivariate Analysis 11: 368-385.
[4] Denuit, M., Dhaene, J. and Van Wouwe, M. (1999). "The Economics of Insurance: A Review and Some Recent Developments," Mitteilungen der Schweiz. Aktuarvereinigung 1999(2): 137-175.
[5] Dhaene, J. and De Pril, N. (1994). "On a Class of Approximate Computation Methods in the Individual Risk Model," Insurance: Mathematics & Economics 14: 181-196.
[6] Dhaene, J., Denuit, M., Goovaerts, M.J., Kaas, R. and Vyncke, D. (2002a). "The Concept of Comonotonicity in Actuarial Science and Finance: Theory," Insurance: Mathematics & Economics 31(1): 3-33.
[7] Dhaene, J., Denuit, M., Goovaerts, M.J., Kaas, R. and Vyncke, D. (2002b). "The Concept of Comonotonicity in Actuarial Science and Finance: Applications," Insurance: Mathematics & Economics 31(2): 133-161.
[8] Dhaene, J. and Vandebroek, M. (1995). "Recursions for the Individual Model," Insurance: Mathematics & Economics 16: 31-38.
[9] Dhaene, J., Vanduffel, S., Tang, Q., Goovaerts, M.J., Kaas, R. and Vyncke, D. (2006). "Risk Measures and Comonotonicity: A Review," Stochastic Models 22: 573-606.
[10] Dhaene, J., Laeven, R., Vanduffel, S., Darkiewicz, G. and Goovaerts, M. (2008a). "Can a Coherent Risk Measure Be Too Subadditive?" Journal of Risk and Insurance 75(2): 365-386.
[11] Dhaene, J., Henrard, L., Landsman, Z., Vandendorpe, A. and Vanduffel, S. (2008b). "Some Results on the CTE-Based Capital Allocation Rule," Insurance: Mathematics & Economics 42(2): 855-863.
[12] Farmer, J.D. (1999). "Physicists Attempt to Scale the Ivory Towers of Finance," Computing in Science and Engineering (IEEE), November-December: 26-39.
[13] Eberlein, E., Keller, U. and Prause, K. (1998). "New Insights into Smile, Mispricing and Value at Risk: The Hyperbolic Model," Journal of Business 71: 371-405.
[14] Fang, K.T., Kotz, S. and Ng, K.W. (1990). Symmetric Multivariate and Related Distributions. London: Chapman & Hall.
[15] Fréchet, M. (1951). "Sur les tableaux de corrélation dont les marges sont données et programmes linéaires," Ann. Univ. Lyon, Sect. A 9: 53-77.
[16] Gradshteyn, I.S. and Ryzhik, I.M. (2001). Table of Integrals, Series, and Products. New York: Academic Press.
[17] Gupta, A.K. and Varga, T. (1993). Elliptically Contoured Models in Statistics. Dordrecht: Kluwer Academic Publishers.
[18] Hoeffding, W. (1940). "Masstabinvariante Korrelationstheorie," Schriften Math. Inst. Univ. Berlin 5: 181-233.
[19] Joe, H. (1997). Multivariate Models and Dependence Concepts. London: Chapman & Hall.
[20] Kaas, R., Dhaene, J. and Goovaerts, M.J. (2000). "Upper and Lower Bounds for Sums of Random Variables," Insurance: Mathematics & Economics 27: 151-168.
[21] Kaas, R., Goovaerts, M., Dhaene, J. and Denuit, M. (2001). Modern Actuarial Risk Theory. Boston/Dordrecht/London: Kluwer Academic Publishers.
[22] Kaas, R., van Heerwaarden, A.E. and Goovaerts, M.J. (1994). Ordering of Actuarial Risks. Amsterdam: Institute for Actuarial Science and Econometrics, University of Amsterdam.
[23] Kelker, D. (1970). "Distribution Theory of Spherical Distributions and a Location-Scale Parameter Generalization," Sankhya 32: 419-430.
[24] Kotz, S. and Nadarajah, S. (2004). Multivariate t Distributions and Their Applications. Cambridge: Cambridge University Press.
[25] Kotz, S., Balakrishnan, N. and Johnson, N.L. (2000). Continuous Multivariate Distributions. New York: John Wiley & Sons.
[26] Kotz, S., Kozubowski, T.J. and Podgorski, K. (2000). "An Asymmetric Multivariate Laplace Distribution," Technical Report No. 367, Department of Statistics and Applied Probability, University of California at Santa Barbara.
[27] Landsman, Z. and Valdez, E.A. (2003). "Tail Conditional Expectations for Elliptical Distributions," North American Actuarial Journal 7: 55-71.
[28] Mandelbrot, B. (1963). "The Variation of Certain Speculative Prices," Journal of Business 36: 392-417.
[29] Cont, R. (2001). "Empirical Properties of Asset Returns: Stylized Facts and Statistical Issues," Quantitative Finance 1: 223-236.
[30] Shaked, M. and Shanthikumar, J.G. (1994). Stochastic Orders and Their Applications. New York: Academic Press.
[31] Vanduffel, S., Hoedemakers, T. and Dhaene, J. (2005). "Comparing Approximations for Sums of Non-Independent Lognormal Random Variables," North American Actuarial Journal 9(4): 71-82.
[32] Vanduffel, S., Chen, X., Dhaene, J., Goovaerts, M., Henrard, L. and Kaas, R. (2008). "Optimal Approximations for Risk Measures of Sums of Lognormals Based on Conditional Expectations," Journal of Computational and Applied Mathematics, in press.
[33] Vanduffel, S., Dhaene, J., Goovaerts, M. and Kaas, R. (2003). "The Hurdle-Race Problem," Insurance: Mathematics & Economics 33(2): 405-413.
[34] Witkovsky, V. (2001). "On the Exact Computation of the Density and of the Quantiles of Linear Combinations of t and F Random Variables," Journal of Statistical Planning and Inference 94: 1-13.

APPENDIX

A Moments of elliptical distributions

Suppose that for a random vector $\mathbf{Y}$, the expectation $E\left(\prod_{k=1}^{n}Y_k^{r_k}\right)$ exists for some set of non-negative integers $r_1, r_2, \ldots, r_n$. Then this expectation can be found from the relation
$$E\left(\prod_{k=1}^{n}Y_k^{r_k}\right) = \frac{1}{i^{\,r_1 + r_2 + \ldots + r_n}}\left.\frac{\partial^{\,r_1 + r_2 + \ldots + r_n}}{\partial t_1^{r_1}\,\partial t_2^{r_2}\cdots\partial t_n^{r_n}}\,E\left[\exp\left(i\,\mathbf{t}^T\mathbf{Y}\right)\right]\right|_{\mathbf{t} = \mathbf{0}}, \qquad (89)$$
where $\mathbf{0}$ denotes the $(n \times 1)$ vector $(0, 0, \ldots, 0)^T$.

The moments of $\mathbf{Y} \sim E_n(\mu, \Sigma, \phi)$ do not necessarily exist. However, from (8) and (89) we deduce that if $E(Y_k)$ exists, then it is given by
$$E(Y_k) = \mu_k, \qquad (90)$$
so that $E(\mathbf{Y}) = \mu$, if the mean vector exists. Moreover, if $\mathrm{Cov}(Y_k, Y_l)$ and/or $\mathrm{Var}(Y_k)$ exist, then they are given by
$$\mathrm{Cov}(Y_k, Y_l) = -2\phi'(0)\,\sigma_{kl} \qquad (91)$$
and/or
$$\mathrm{Var}(Y_k) = -2\phi'(0)\,\sigma_k^2, \qquad (92)$$
where $\phi'$ denotes the first derivative of the characteristic generator. In short, if the covariance matrix of $\mathbf{Y}$ exists, then it is given by
$$\mathrm{Cov}(\mathbf{Y}) = -2\phi'(0)\,\Sigma. \qquad (93)$$
A necessary condition for this covariance matrix to exist is
$$\left|\phi'(0)\right| < \infty, \qquad (94)$$
see Cambanis et al. (1981).
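A quick numerical illustration of (93) (ours, not in the paper): in the normal case $\phi'(0) = -\frac{1}{2}$, so the covariance matrix of $\mathbf{Y}$ is simply $\Sigma$, which a large simulated sample should reproduce:

```python
import numpy as np

# Illustration of (93) for the multivariate normal case:
# phi(t) = exp(-t/2), so phi'(0) = -1/2 and Cov(Y) = -2 phi'(0) Sigma = Sigma.
rng = np.random.default_rng(0)
mu = np.array([0.0, 1.0])
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])
Y = rng.multivariate_normal(mu, Sigma, size=200_000)
cov_hat = np.cov(Y, rowvar=False)   # sample covariance, should approximate Sigma
```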

B Multivariate densities of elliptical distributions

An elliptically distributed random vector $\mathbf{Y} \sim E_n(\mu, \Sigma, \phi)$ does not necessarily possess a multivariate density function $f_{\mathbf{Y}}(\mathbf{y})$. A necessary condition for $\mathbf{Y}$ to possess a density is that $\mathrm{rank}(\Sigma) = n$. For elliptical distributions, one can prove that if $\mathbf{Y} \sim E_n(\mu, \Sigma, \phi)$ has a density, then it is of the form
$$f_{\mathbf{Y}}(\mathbf{y}) = \frac{c}{\sqrt{|\Sigma|}}\,g\left[\left(\mathbf{y} - \mu\right)^T\Sigma^{-1}\left(\mathbf{y} - \mu\right)\right] \qquad (95)$$
for some non-negative function $g$ satisfying the condition
$$0 < \int_0^{\infty} z^{n/2 - 1}g(z)\,dz < \infty \qquad (96)$$
and a normalising constant $c$ given by
$$c = \frac{\Gamma(n/2)}{\pi^{n/2}}\left[\int_0^{\infty} z^{n/2 - 1}g(z)\,dz\right]^{-1}. \qquad (97)$$
Also, the opposite statement holds: any non-negative function $g$ satisfying the condition (96) can be used to define an $n$-dimensional density $\frac{c}{\sqrt{|\Sigma|}}\,g\left[\left(\mathbf{y} - \mu\right)^T\Sigma^{-1}\left(\mathbf{y} - \mu\right)\right]$ of an elliptical distribution, with $c$ given by (97). The function $g$ is called the density generator. One sometimes writes $\mathbf{Y} \sim E_n(\mu, \Sigma, g)$ for the $n$-dimensional elliptical distributions generated from the function $g$. A detailed proof of these results, using spherical transformations of rectangular coordinates, can be found in Landsman & Valdez (2003).

Note that for a given characteristic generator $\phi$, the density generator $g$ and/or the normalising constant $c$ may depend on the dimension of the random vector $\mathbf{Y}$. Often one considers the class of elliptical distributions of dimensions $1, 2, 3, \ldots$, all derived from the same characteristic generator $\phi$. In case these distributions have a density, we denote their respective density generators and normalising constants by $g_n$ and $c_n$, where the subscript $n$ denotes the dimension of the random vector $\mathbf{Y}$.

In the following example we consider in more detail the multivariate normal distributions, which are the best-known subclass of elliptical distributions.

Example B.1 (Multivariate normal distribution). The $n$-dimensional random vector $\mathbf{Y}$ has the multivariate normal distribution with parameters $\mu$ and $\Sigma$, notation $\mathbf{Y} \sim N_n(\mu, \Sigma)$, if its characteristic function is given by
$$E\left[\exp\left(i\,\mathbf{t}^T\mathbf{Y}\right)\right] = \exp\left(i\,\mathbf{t}^T\mu\right)\exp\left(-\tfrac{1}{2}\mathbf{t}^T\Sigma\mathbf{t}\right). \qquad (98)$$
From (8) we see that $N_n(\mu, \Sigma)$ is an elliptical distribution with characteristic generator $\phi$ given by
$$\phi(t) = \exp\left(-\frac{t}{2}\right). \qquad (99)$$
Since $\phi'(0) = -\frac{1}{2}$, the matrix $\Sigma$ in (98) is the covariance matrix of $\mathbf{Y}$. In case $\Sigma$ is positive definite, the random vector $\mathbf{Y} \sim N_n(\mu, \Sigma)$ has a density which is given by
$$f_{\mathbf{Y}}(\mathbf{y}) = \frac{1}{(2\pi)^{n/2}\sqrt{|\Sigma|}}\exp\left[-\tfrac{1}{2}\left(\mathbf{y} - \mu\right)^T\Sigma^{-1}\left(\mathbf{y} - \mu\right)\right]. \qquad (100)$$
Comparing (95) and (100), we find that the density generator $g_n$ and the normalising constant $c_n$ of $N_n(\mu, \Sigma)$ are given by
$$g_n(u) = \exp\left(-\frac{u}{2}\right) \qquad (101)$$
and
$$c_n = \frac{1}{(2\pi)^{n/2}}, \qquad (102)$$
respectively.

Next, we study the multivariate Student-t distributions.

Example B.2 (Multivariate Student-t distribution). Let us consider the elliptical Student-t distribution $E_n(\mu, \Sigma, g_n)$, where the density generator $g_n(\cdot)$ is given by $g_n(u) = \left(1 + \frac{u}{m}\right)^{-(n+m)/2}$. We will also denote this multivariate distribution (with $m$ degrees of freedom) by $t_n^{(m)}(\mu, \Sigma)$. Its multivariate density is given by
$$f_{\mathbf{Y}}(\mathbf{y}) = \frac{c_n}{\sqrt{|\Sigma|}}\left(1 + \frac{\left(\mathbf{y} - \mu\right)^T\Sigma^{-1}\left(\mathbf{y} - \mu\right)}{m}\right)^{-(n+m)/2}. \qquad (103)$$

In order to determine the normalising constant, first note from (97) that
$$c_n = \frac{\Gamma(n/2)}{\pi^{n/2}}\left[\int_0^{\infty} z^{n/2 - 1}g(z)\,dz\right]^{-1} = \frac{\Gamma(n/2)}{\pi^{n/2}}\left[\int_0^{\infty} z^{n/2 - 1}\left(1 + \frac{z}{m}\right)^{-(n+m)/2}dz\right]^{-1}.$$
Performing the substitution $u = 1 + (z/m)$, we find
$$\int_0^{\infty} z^{n/2 - 1}\left(1 + \frac{z}{m}\right)^{-(n+m)/2}dz = m^{n/2}\int_1^{\infty}\left(1 - u^{-1}\right)^{n/2 - 1}u^{-m/2 - 1}\,du.$$
Making one more substitution, $v = 1 - u^{-1}$, we get
$$\int_0^{\infty} z^{n/2 - 1}\left(1 + \frac{z}{m}\right)^{-(n+m)/2}dz = m^{n/2}\,\frac{\Gamma(n/2)\,\Gamma(m/2)}{\Gamma\left((n + m)/2\right)},$$
from which we find
$$c_n = \frac{\Gamma\left((n + m)/2\right)}{(m\pi)^{n/2}\,\Gamma(m/2)}. \qquad (104)$$
Furthermore, the marginals of the multivariate elliptical Student-t distribution are again Student-t distributions; hence, $Y_k \sim t_1^{(m)}\left(\mu_k, \sigma_k^2\right)$. The results above lead to
$$f_{Y_k}(y) = \frac{\Gamma\left(\frac{m + 1}{2}\right)}{(m\pi)^{1/2}\,\Gamma\left(\frac{m}{2}\right)}\,\frac{1}{\sigma_k}\left[1 + \frac{1}{m}\left(\frac{y - \mu_k}{\sigma_k}\right)^2\right]^{-(m + 1)/2}, \qquad k = 1, 2, \ldots, n, \qquad (105)$$
which is indeed the well-known density of a univariate Student-t random variable with $m$ degrees of freedom. Its mean is
$$E(Y_k) = \mu_k, \qquad (106)$$
whereas it can be verified that its variance is given by
$$\mathrm{Var}(Y_k) = \frac{m}{m - 2}\,\sigma_k^2, \qquad (107)$$

provided the degrees of freedom $m > 2$. Note that $\frac{m}{m - 2} = -2\phi'(0)$, where $\phi$ is the characteristic generator of the family of Student-t distributions with $m$ degrees of freedom. In order to derive the characteristic function of $Y_k$, note that
$$E\left(\exp\left(itY_k\right)\right) = \exp\left(i\mu_k t\right)\frac{\Gamma\left(\frac{m + 1}{2}\right)}{(m\pi)^{1/2}\,\Gamma\left(\frac{m}{2}\right)}\int_{-\infty}^{\infty}\exp\left(i\sigma_k tz\right)\left(1 + \frac{1}{m}z^2\right)^{-(m + 1)/2}dz$$
$$= \exp\left(i\mu_k t\right)\frac{\Gamma\left(\frac{m + 1}{2}\right)}{(m\pi)^{1/2}\,\Gamma\left(\frac{m}{2}\right)\sigma_k\left(m\sigma_k^2\right)^{-(m + 1)/2}}\,2\int_0^{\infty}\cos(tz)\left(m\sigma_k^2 + z^2\right)^{-(m + 1)/2}dz. \qquad (108)$$
Hence, from Gradshteyn & Ryzhik (2001, p. 907), we find that the characteristic function of $Y_k \sim t_1^{(m)}\left(\mu_k, \sigma_k^2\right)$ is given by
$$E\left(\exp\left(itY_k\right)\right) = \exp\left(i\mu_k t\right)\frac{1}{2^{m/2 - 1}\,\Gamma\left(\frac{m}{2}\right)}\left(|t|\sqrt{m}\,\sigma_k\right)^{m/2}K_{m/2}\left(|t|\sqrt{m}\,\sigma_k\right), \qquad (109)$$
where $K_{\nu}(\cdot)$ is the modified Bessel function of the second kind. For a similar derivation, see Witkovsky (2001). Observe that equation (109) can then be used to find the characteristic generator for the family of Student-t distributions.

Next, we consider the multivariate Laplace distributions, which provide another subclass of elliptical distributions.

Example B.3 (Multivariate Laplace distribution). The random vector $\mathbf{Y}$ is said to have a multivariate Laplace density with parameters $\mu$ and positive definite variance-covariance matrix $\Sigma$ if the density has the form
$$f_{\mathbf{Y}}(\mathbf{y}) = \frac{2}{(2\pi)^{n/2}\sqrt{|\Sigma|}}\left[\frac{1}{2}\left(\mathbf{y} - \mu\right)^T\Sigma^{-1}\left(\mathbf{y} - \mu\right)\right]^{\upsilon/2}K_{\upsilon}\left(\sqrt{2\left(\mathbf{y} - \mu\right)^T\Sigma^{-1}\left(\mathbf{y} - \mu\right)}\right). \qquad (110)$$

Here, $\upsilon = (2 - n)/2$, while $K_{\upsilon}(u)$ is the modified Bessel function of the third kind; see also Abramowitz & Stegun (1965, p. 376). We write $\mathbf{Y} \sim La_n(\mu, \Sigma)$. Furthermore, comparing (95) and (110), we find that $\mathbf{Y}$ is elliptically distributed with density generator $g_n$ and normalising constant $c_n$ given by
$$g_n(u) = 2\left(\frac{u}{2}\right)^{\upsilon/2}K_{\upsilon}\left(\sqrt{2u}\right), \qquad u > 0, \qquad (111)$$
and
$$c_n = \frac{1}{(2\pi)^{n/2}}. \qquad (112)$$
When $n = 1$ we have that
$$K_{1/2}(x) = \sqrt{\frac{\pi}{2x}}\exp(-x), \qquad x > 0, \qquad (113)$$
and we obtain the Laplace (or double exponential) density:
$$f_X(x) = \frac{1}{\sqrt{2}\,\sigma}\exp\left(-\sqrt{2}\,\frac{|x - \mu|}{\sigma}\right). \qquad (114)$$
The characteristic function of $\mathbf{Y} \sim La_n(\mu, \Sigma)$ is given by
$$E\left[\exp\left(i\,\mathbf{t}^T\mathbf{Y}\right)\right] = \exp\left(i\,\mathbf{t}^T\mu\right)\left(1 + \tfrac{1}{2}\mathbf{t}^T\Sigma\mathbf{t}\right)^{-1}, \qquad (115)$$
which implies that the characteristic generator $\phi$ is given by
$$\phi(t) = \frac{1}{1 + \frac{t}{2}}. \qquad (116)$$
Note that since $\phi'(0) = -\frac{1}{2}$, the matrix $\Sigma$ in (110) is indeed a covariance matrix. We remark that some actuarial applications of elliptical distributions are considered in Landsman & Valdez (2003) and Dhaene et al. (2008b).
