The Finite-time and Infinite-time Ruin Probabilities of a Bivariate Lévy-driven Risk Process with Heavy Tails

Xuemiao Hao[a],* and Qihe Tang[b]

[a] Warren Centre for Actuarial Studies and Research, University of Manitoba, 181 Freedman Crescent, Winnipeg, Manitoba R3T 5V4, Canada. E-mail: [email protected]

[b] Department of Statistics and Actuarial Science, The University of Iowa, 241 Schaeffer Hall, Iowa City, IA 52242, USA. E-mail: [email protected]

* Corresponding author. Phone: 1-204-474-8710; fax: 1-204-474-7545.

June 22, 2010
Abstract

Consider an insurance risk model proposed by Paulsen in a series of papers. In this model, the surplus process is described as a general bivariate Lévy-driven risk process in which one Lévy process, representing a loss process in a world without economic factors, is compounded by another Lévy process, describing return on investments. Motivated by a conjecture of Paulsen, we study the finite-time and infinite-time ruin probabilities for the case in which the first Lévy process has a heavy-tailed Lévy measure and the second one fulfills a moment condition. We obtain a simple unified asymptotic formula, which confirms Paulsen's conjecture.

Keywords: Asymptotics; (Extended) regular variation; Finite-time and infinite-time ruin probabilities; Lévy process; Tail probabilities

Mathematics Subject Classification: Primary 91B30; Secondary 60G51, 91B28
1 Introduction and Main Results
Consider a bivariate Lévy-driven risk model in which the surplus process of an insurance company is modeled by
\[
Y_t=x-P_t+\int_0^t Y_{s-}\,dR_s, \tag{1.1}
\]
with $Y_0=x>0$ the initial surplus level, and $P$ and $R$ two independent Lévy processes representing, respectively, a loss process in a world without economic factors and the process describing return on investments in real terms. The solution of (1.1) is given by
\[
Y_t=e^{\widetilde R_t}\Bigl(x-\int_0^t e^{-\widetilde R_s}\,dP_s\Bigr)=e^{\widetilde R_t}\bigl(x-Z_t\bigr), \tag{1.2}
\]
where $\widetilde R$, also a Lévy process, is the logarithm of the Doléans-Dade exponential of $R$. See Paulsen (1998a, 1998b, 2002, 2008) for detailed explanations of this risk model. We shall start with (1.2) instead of (1.1), as has been done by many researchers including Kalashnikov and Norberg (2002) and Klüppelberg and Kostadinova (2008). The stochastic process $Z$ in (1.2) is usually called the discounted net loss process.

We are interested in the asymptotic behavior of the finite-time and infinite-time ruin probabilities of this bivariate Lévy-driven risk model. As usual, the finite-time ruin probability is defined as
\[
\psi(x;T)=\Pr\Bigl(\inf_{0\le t\le T}Y_t<0\ \Big|\ Y_0=x\Bigr),\qquad T>0,
\]
and the infinite-time ruin probability as
\[
\psi(x;\infty)=\lim_{T\to\infty}\psi(x;T)=\Pr\Bigl(\inf_{0\le t<\infty}Y_t<0\ \Big|\ Y_0=x\Bigr).
\]
Introduce
\[
Z_T^*=\sup_{0\le t\le T}Z_t=\sup_{0\le t\le T}\int_0^t e^{-\widetilde R_s}\,dP_s,
\]
where the supremum is taken over $0\le t<\infty$ when $T=\infty$. Since the events $\{\inf_{0\le t\le T}Y_t<0\}$ and $\{Z_T^*>x\}$ are equivalent, we have
\[
\psi(x;T)=\Pr\bigl(Z_T^*>x\bigr),\qquad 0<T\le\infty. \tag{1.3}
\]
For the case that ruin is possible but not certain, we aim at an explicit, unified, asymptotic expression for $\psi(x;T)$ for $0<T\le\infty$ as the initial surplus level $x$ becomes large.
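As a complement to the analytic results that follow, the ruin probability in (1.3) can be estimated by crude Monte Carlo once concrete choices of $P$ and $\widetilde R$ are fixed. The sketch below is only an illustration and is not part of the model analysis: it assumes, purely for concreteness, that $P$ is a compound Poisson process of Pareto claims minus a premium drift and that $\widetilde R$ is a Brownian motion with drift, and it discretizes $Z_t=\int_0^t e^{-\widetilde R_s}\,dP_s$ on a uniform time grid; all parameter names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (illustration only, not from the paper):
# P_t  = sum of Pareto claims arriving at rate lam, minus premium prem*t
# R~_t = r*t + sigma*B_t, so e^{-R~_t} is the stochastic discount factor
alpha, lam, prem = 2.0, 1.0, 2.5   # claim tail index, claim rate, premium rate
r, sigma = 0.06, 0.2               # drift and volatility of the log return process

def ruin_prob_mc(x, T, n_paths=2000, n_steps=500):
    """Crude Monte Carlo estimate of psi(x; T) = Pr(sup_{0<=t<=T} Z_t > x),
    with Z_t = int_0^t e^{-R~_s} dP_s discretized on a uniform grid."""
    dt = T / n_steps
    ruined = 0
    for _ in range(n_paths):
        R, Z, Z_max = 0.0, 0.0, 0.0
        for _ in range(n_steps):
            n_claims = rng.poisson(lam * dt)
            # classical Pareto on (1, inf): 1 + Lomax(alpha)
            claims = rng.pareto(alpha, n_claims).sum() + n_claims
            dP = claims - prem * dt        # increment of the loss process
            Z += np.exp(-R) * dP           # discount with R~ at the left endpoint
            Z_max = max(Z_max, Z)
            R += r * dt + sigma * np.sqrt(dt) * rng.normal()
        if Z_max > x:
            ruined += 1
    return ruined / n_paths

print(ruin_prob_mc(x=20.0, T=5.0))
```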
Before stating our main results, we need to list some important facts regarding the characterization of Lévy processes. For a general Lévy loss process $P$, its Lévy–Itô representation takes the form
\[
P_t=pt+\sigma_P B_{P,t}+\int_0^t\!\int_{|x|\le1}x\,\bigl(N_P(ds,dx)-\nu_P(dx)\,ds\bigr)+\int_0^t\!\int_{|x|>1}x\,N_P(ds,dx)=pt+\sigma_P B_{P,t}+M_{P,t}+S_{P,t}, \tag{1.4}
\]
where $B_P$ is a Brownian motion, $N_P(ds,dx)$ is the random measure associated with the jumps of $P$, and $\nu_P(dx)\,ds$ is the compensator of $N_P$. Actually, $\nu_P$ is the Lévy measure of $P$, $M_P$ is a square-integrable martingale, and $S_P$ is a compound Poisson process; see, for example, Applebaum (2004) and Kyprianou (2006).

Moreover, denote by $\varphi_L(u)=\log E\,e^{-uL_1}$ the Laplace exponent of a Lévy process $L$ if it is finite. Clearly, it holds for every $t\ge0$ that $t\varphi_L(u)=\log E\,e^{-uL_t}$.
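For a quick check of the sign conventions above, suppose for illustration only that $\widetilde R_t=rt+\sigma_R B_t$ is a Brownian motion with drift (this choice is an assumption of the sketch, not of the paper). Then $\varphi_{\widetilde R}(u)=\log E\,e^{-u\widetilde R_1}=-ru+\sigma_R^2u^2/2$, and the identity $t\varphi_{\widetilde R}(u)=\log E\,e^{-u\widetilde R_t}$ can be verified numerically:

```python
import numpy as np

def laplace_exponent_bm(u, r, sigma):
    """phi(u) = log E exp(-u * R_1) for R_t = r*t + sigma*B_t,
    using the normal moment generating function."""
    return -r * u + 0.5 * (sigma * u) ** 2

rng = np.random.default_rng(1)
u, r, sigma, t = 2.0, 0.06, 0.2, 3.0
R_t = r * t + sigma * np.sqrt(t) * rng.normal(size=10**6)  # samples of R_t
print(np.log(np.mean(np.exp(-u * R_t))))   # Monte Carlo estimate of log E e^{-u R_t}
print(t * laplace_exponent_bm(u, r, sigma))  # should be close to the line above
```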
Throughout this paper, we assume $P$ is heavy tailed. When $\overline\nu_P(1)=\nu_P\bigl((1,\infty)\bigr)>0$, introduce
\[
F_P(\cdot)=\bigl(\overline\nu_P(1)\bigr)^{-1}\nu_P\bigl(\cdot\cap(1,\infty)\bigr),
\]
which is a proper probability measure on $(1,\infty)$. We assume that the distribution $F_P$ is of extended regular variation (ERV). By definition, a distribution $F$ belongs to the class $\mathrm{ERV}(-\alpha,-\beta)$ for some $0<\alpha\le\beta<\infty$ if $\overline F(x)=1-F(x)>0$ holds for all $x$ and the relations
\[
v^{-\beta}\le\liminf_{x\to\infty}\frac{\overline F(vx)}{\overline F(x)}\le\limsup_{x\to\infty}\frac{\overline F(vx)}{\overline F(x)}\le v^{-\alpha} \tag{1.5}
\]
hold for all $v\ge1$. Note that relations (1.5) with $\alpha=\beta$ define the famous class $\mathrm R_{-\alpha}$ of distributions of regular variation.

Hereafter, all limit relationships are for $x\to\infty$ unless otherwise stated. For two positive functions $a(\cdot)$ and $b(\cdot)$, we write $a(x)\sim b(x)$ if $\lim a(x)/b(x)=1$, write $a(x)\lesssim b(x)$ or $b(x)\gtrsim a(x)$ if $\limsup a(x)/b(x)\le1$, and write $a(x)\asymp b(x)$ if $0<\liminf a(x)/b(x)\le\limsup a(x)/b(x)<\infty$. For a real number $x$, we write $x_+=x\vee0$ and $x_-=(-x)\vee0$ for the positive and negative parts of $x$, respectively. We also write $\overline\nu_P(x)=\nu_P\bigl((x,\infty)\bigr)$ for $x>0$. Our main result is given below:
Theorem 1.1 Consider the bivariate Lévy-driven risk process $Y$ given by (1.2), where $P$ and $\widetilde R$ are two independent Lévy processes. Assume that $F_P\in\mathrm{ERV}(-\alpha,-\beta)$ for some $0<\alpha\le\beta<\infty$, that $\nu_P\bigl((-\infty,-x)\bigr)=o\bigl(\overline\nu_P(x)\bigr)$, and that $E\bigl[e^{-(\alpha-\varepsilon)\widetilde R_1}\vee e^{-(\beta+\varepsilon)\widetilde R_1}\bigr]<\infty$ for some $0<\varepsilon<\alpha$, with this expectation strictly less than $1$ when $T=\infty$. Then it holds for every $T\in(0,\infty]$ that
\[
\psi(x;T)\sim\overline\nu_P(1)\int_0^T\Pr\bigl(Xe^{-\widetilde R_t}>x\bigr)\,dt, \tag{1.6}
\]
where $X$ is a random variable with distribution $F_P$, independent of $\widetilde R$.

Paulsen (2002) studied the asymptotic behavior
of the infinite-time ruin probability of this risk model and proposed a conjecture that, if $F_P\in\mathrm R_{-\alpha}$ for some $\alpha>0$ and $\varphi_{\widetilde R}(\alpha+\varepsilon)<0$ for some $\varepsilon>0$, then
\[
\psi(x;\infty)\sim\frac{\overline\nu_P(x)}{|\varphi_{\widetilde R}(\alpha)|}. \tag{1.7}
\]
See Theorem 3.2(b) and Remark 3.2(b) of Paulsen (2002). The following corollary of Theorem 1.1 shows that relation (1.7) holds, indeed, under the additional assumption that $\nu_P\bigl((-\infty,-x)\bigr)=o\bigl(\overline\nu_P(x)\bigr)$.

Corollary 1.1 Let $\alpha=\beta$ in Theorem 1.1. Then it holds for every $T\in(0,\infty]$ that
\[
\psi(x;T)\sim\frac{1-e^{\varphi_{\widetilde R}(\alpha)T}}{|\varphi_{\widetilde R}(\alpha)|}\,\overline\nu_P(x).
\]
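To get a feel for the size of the approximation in Corollary 1.1, one can evaluate its right-hand side in a concrete special case. The sketch below assumes, for illustration only, a Pareto-type Lévy tail $\overline\nu_P(x)=c\,x^{-\alpha}$ for $x\ge1$ and the Brownian-with-drift $\widetilde R$ of the earlier sketch, for which $\varphi_{\widetilde R}(\alpha)=-r\alpha+\sigma_R^2\alpha^2/2$; the printed numbers are values of the asymptotic formula, not exact ruin probabilities.

```python
import numpy as np

def corollary_rhs(x, T, c=1.0, alpha=2.0, r=0.06, sigma=0.2):
    """Right-hand side of Corollary 1.1 under the illustrative assumptions
    nu_bar_P(x) = c * x**(-alpha) and R_t = r*t + sigma*B_t."""
    phi = -r * alpha + 0.5 * (sigma * alpha) ** 2   # Laplace exponent at alpha
    if phi >= 0:
        raise ValueError("the formula requires phi(alpha) < 0")
    weight = 1.0 / abs(phi) if np.isinf(T) else (1.0 - np.exp(phi * T)) / abs(phi)
    return weight * c * x ** (-alpha)

for x in (50.0, 100.0, 200.0):
    print(x, corollary_rhs(x, T=5.0), corollary_rhs(x, T=np.inf))
```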
The rest of this paper consists of two sections. Section 2 prepares some lemmas and Section 3 proves the main results.
2 Lemmas
In this section we prepare some lemmas for Theorem 1.1. The first lemma describes some well-known properties of distributions of extended regular variation; see Proposition 2.2.3 of Bingham et al. (1987) and Lemma 3.5 of Tang and Tsitsiashvili (2003).

Lemma 2.1 Suppose $F\in\mathrm{ERV}(-\alpha,-\beta)$ for some $0<\alpha\le\beta<\infty$.
(1) For every $0<\varepsilon<\alpha$ and $b>1$, there is some $x_0>0$ such that the inequalities
\[
b^{-1}\bigl(y^{-(\alpha-\varepsilon)}\wedge y^{-(\beta+\varepsilon)}\bigr)\le\frac{\overline F(xy)}{\overline F(x)}\le b\bigl(y^{-(\alpha-\varepsilon)}\vee y^{-(\beta+\varepsilon)}\bigr) \tag{2.1}
\]
hold whenever $x>x_0$ and $xy>x_0$.
(2) It holds for every $\varepsilon>0$ that $\overline F(x)=o\bigl(x^{-(\alpha-\varepsilon)}\bigr)$ and $x^{-(\beta+\varepsilon)}=o\bigl(\overline F(x)\bigr)$.
A merit of the following lemma is that inequality (2.2) holds uniformly for all such random variables $Y$ satisfying $E\,Y^{\beta+\varepsilon}<\infty$ for some $0<\varepsilon<\alpha$.

Lemma 2.2 Let $X$ and $Y$ be two independent random variables, with $X$ real valued and $Y$ nonnegative. If the distribution of $X$ belongs to the class $\mathrm{ERV}(-\alpha,-\beta)$ for some $0<\alpha\le\beta<\infty$, then, for every $0<\varepsilon<\alpha$ and $b>1$, there is some $x_0>0$ independent of the distribution of $Y$ such that, for all $x>x_0$,
\[
\frac{\Pr(XY>x)}{\Pr(X>x)}\le b\,E\bigl[Y^{\alpha-\varepsilon}\vee Y^{\beta+\varepsilon}\bigr]. \tag{2.2}
\]

Proof. For arbitrarily chosen $b'\in(1,b)$, by Lemma 2.1(1), there is some $x_0'>0$, independent of the distribution of $Y$, such that, for all $x>x_0'$,
\[
\Pr(XY>x)\le\Pr\Bigl(XY>x,\ Y\le\frac{x}{x_0'}\Bigr)+\Pr\Bigl(Y>\frac{x}{x_0'}\Bigr)\le b'\Pr(X>x)\,E\bigl[Y^{\alpha-\varepsilon}\vee Y^{\beta+\varepsilon}\bigr]+\Bigl(\frac{x}{x_0'}\Bigr)^{-(\beta+\varepsilon)}E\,Y^{\beta+\varepsilon}.
\]
Then inequality (2.2) follows since $x^{-(\beta+\varepsilon)}=o\bigl(\Pr(X>x)\bigr)$ by Lemma 2.1(2).
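Lemma 2.2 is a uniform, Breiman-type product bound. As a purely numerical illustration (with distributions chosen here for convenience), take $X$ Pareto with tail $x^{-\alpha}$, so that its distribution lies in $\mathrm R_{-\alpha}\subset\mathrm{ERV}(-\alpha,-\alpha)$, and let $Y$ be lognormal; the simulated ratio $\Pr(XY>x)/\Pr(X>x)$ then settles near $E\,Y^{\alpha}$, comfortably below the bound $b\,E[Y^{\alpha-\varepsilon}\vee Y^{\beta+\varepsilon}]$ for $b$ close to $1$ and small $\varepsilon$:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, n = 2.0, 5 * 10**6
X = rng.pareto(alpha, n) + 1.0                  # Pr(X > x) = x**(-alpha) for x >= 1
Y = rng.lognormal(mean=0.0, sigma=0.5, size=n)  # nonnegative weight, all moments finite

for x in (10.0, 30.0, 100.0):
    print(x, (X * Y > x).mean() / (X > x).mean())

# Breiman's limit E[Y**alpha] for this lognormal weight:
print("E[Y^alpha] =", np.exp(0.5 * (alpha * 0.5) ** 2))
```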
By going along the same lines of the proof of Lemma 4.4.2 of Samorodnitsky and Taqqu (1994), we obtain the following:

Lemma 2.3 Let $X$ and $Y$ be two (not necessarily independent) random variables. If the distribution of $X$ belongs to the class $\mathrm{ERV}(-\alpha,-\beta)$ for some $0<\alpha\le\beta<\infty$ and $\Pr(|Y|>x)=o\bigl(\Pr(X>x)\bigr)$, then $\Pr(X+Y>x)\sim\Pr(X>x)$.
The next lemma describes the tail behavior of randomly weighted infinite sums. Closely related discussions can be found in Resnick and Willekens (1991), Goovaerts et al. (2005), Wang and Tang (2006), Zhang et al. (2009), and Chen and Yuen (2009).

Lemma 2.4 Let $\{X_k;\,k=1,2,\ldots\}$ be a sequence of independent and identically distributed (i.i.d.) random variables with common distribution $F$, and let $\{W_k;\,k=1,2,\ldots\}$ be another sequence of nonnegative, non-degenerate-at-zero random variables independent of $\{X_k;\,k=1,2,\ldots\}$. Assume $F\in\mathrm{ERV}(-\alpha,-\beta)$ for some $0<\alpha\le\beta<\infty$ and that one of two summability conditions on the weights $\{W_k\}$ holds, according as $0<\alpha\le\beta<1$ or $1\le\alpha\le\beta<\infty$. Then
\[
\Pr\Bigl(\sup_{n\ge1}\sum_{k=1}^{n}W_kX_k>x\Bigr)\sim\sum_{k=1}^{\infty}\Pr\bigl(W_kX_k>x\bigr).
\]
Consider the compound Poisson process
\[
S_t=\sum_{k=1}^{N_t}X_k,\qquad t\ge0, \tag{2.9}
\]
where $\{X_k;\,k=1,2,\ldots\}$ is a sequence of i.i.d. random variables with generic random variable $X$ and common distribution $F$, and $\{N_t;\,t\ge0\}$ is a Poisson process with intensity $\lambda>0$ independent of $\{X_k;\,k=1,2,\ldots\}$. A result similar to the lemma below is Theorem 2.1 of Tang et al. (2010), which assumes $\alpha=\beta$, $X$ nonnegative, and $\{N_t;\,t\ge0\}$ a renewal counting process.
Lemma 2.6 Consider the compound Poisson process $S$ described in (2.9). Let $L$ be a Lévy process independent of $S$. Suppose $F\in\mathrm{ERV}(-\alpha,-\beta)$ for some $0<\alpha\le\beta<\infty$ and $E\bigl[e^{-(\alpha-\varepsilon)L_1}\vee e^{-(\beta+\varepsilon)L_1}\bigr]<\infty$ for some $0<\varepsilon<\alpha$. Then, for every $T\in(0,\infty)$, the distributions of $\sup_{0\le t\le T}\int_0^t e^{-L_s}\,dS_s$ and $\int_0^T e^{-L_t}\,dS_t$ belong to the class $\mathrm{ERV}(-\alpha,-\beta)$ and
\[
\Pr\Bigl(\sup_{0\le t\le T}\int_0^t e^{-L_s}\,dS_s>x\Bigr)\sim\Pr\Bigl(\int_0^T e^{-L_t}\,dS_t>x\Bigr)\sim\lambda\int_0^T\Pr\bigl(Xe^{-L_t}>x\bigr)\,dt.
\]
According to Chen and Yuen (2009), two nonnegative random variables $X_1$ and $X_2$ with distributions $F_1$ and $F_2$, respectively, are said to be quasi-asymptotically independent if
\[
\lim_{x\to\infty}\frac{\Pr(X_1>x,\,X_2>x)}{\overline F_1(x)+\overline F_2(x)}=0. \tag{2.10}
\]
More generally, two real-valued random variables $X_1$ and $X_2$ are still said to be quasi-asymptotically independent if relation (2.10) holds with $(X_1,X_2)$ in the numerator replaced by each of $\{X_1^+,X_2^+\}$, $\{X_1^+,X_2^-\}$, and $\{X_1^-,X_2^+\}$. The following lemma is extracted from Theorem 3.1 of Chen and Yuen (2009).

Lemma 2.7 Let $X_1,\ldots,X_n$ be $n$ pairwise quasi-asymptotically independent random variables with distributions belonging to the class $\mathrm{ERV}(-\alpha,-\beta)$ for some $0<\alpha\le\beta<\infty$. It holds that
\[
\Pr\Bigl(\sum_{k=1}^{n}X_k>x\Bigr)\sim\sum_{k=1}^{n}\Pr(X_k>x).
\]
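Lemma 2.7 is a version of the one-big-jump heuristic. A small simulation (illustrative only) with two independent Pareto variables, which are in particular quasi-asymptotically independent, shows the ratio $\Pr(X_1+X_2>x)/\{\Pr(X_1>x)+\Pr(X_2>x)\}$ drifting toward $1$ as $x$ grows:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, n = 2.0, 5 * 10**6
X1 = rng.pareto(alpha, n) + 1.0   # both tails are x**(-alpha) on (1, inf)
X2 = rng.pareto(alpha, n) + 1.0

for x in (5.0, 20.0, 80.0):
    lhs = (X1 + X2 > x).mean()
    rhs = (X1 > x).mean() + (X2 > x).mean()
    print(x, lhs / rhs)   # should approach 1 as x increases
```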
The following lemma slightly extends one statement of Theorem 1 of Grey (1994).

Lemma 2.8 Let $(A,B)$ be a random pair satisfying $E\log(|A|\vee1)<\infty$, $\Pr(B\ge0)=1$, and $-\infty\le E\log B<0$. Let $Q$ be a random variable independent of $(A,B)$. Then there is exactly one distribution for $Q$ such that
\[
Q\stackrel{\mathrm D}{=}A+BQ, \tag{2.11}
\]
where $\stackrel{\mathrm D}{=}$ denotes 'equal in distribution'. If further the distribution of $A$ belongs to the class $\mathrm{ERV}(-\alpha,-\beta)$ for some $0<\alpha\le\beta<\infty$ and $E\bigl[B^{\alpha-\varepsilon}\vee B^{\beta+\varepsilon}\bigr]<1$ for some $0<\varepsilon<\alpha$, then
\[
\Pr(Q>x)\sim\sum_{k=1}^{\infty}\Pr\Bigl(A_k\prod_{i=1}^{k-1}B_i>x\Bigr),
\]
where $(A_k,B_k)$, $k=1,2,\ldots$, are i.i.d. copies of $(A,B)$.
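The fixed point in Lemma 2.8 can be simulated by truncating the backward series $Q=\sum_{k\ge1}A_k\prod_{i=1}^{k-1}B_i$, which converges almost surely under the log-moment conditions of the lemma. The sketch below uses illustrative choices (Pareto $A$ and $B=e^{-\widetilde R_1}$ with a Brownian-with-drift $\widetilde R$, parameters invented for the example) and compares the simulated tail of $Q$ with the heuristic $\Pr(A>x)/(1-E\,B^{\alpha})$ suggested by applying Breiman's theorem to each term of the series; the two columns should be of the same order for large $x$.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, r, sigma = 2.0, 0.5, 0.2   # illustrative choices only
n, K = 10**6, 50                  # number of samples, truncation level of the series

# Backward series for Q =_D A + B*Q with i.i.d. pairs (A_k, B_k):
# Q ~= sum_{k=1..K} A_k * prod_{i<k} B_i
Q = np.zeros(n)
W = np.ones(n)                    # running product of the B_i
first_A = None
for k in range(K):
    A = rng.pareto(alpha, n) + 1.0                 # A_k, tail x**(-alpha)
    B = np.exp(-(r + sigma * rng.normal(size=n)))  # B_k = exp(-R_1)
    if k == 0:
        first_A = A
    Q += W * A
    W *= B

EBa = np.exp(-alpha * r + 0.5 * (sigma * alpha) ** 2)  # E[B**alpha] = e^{phi(alpha)} < 1 here
for x in (10.0, 30.0, 100.0):
    print(x, (Q > x).mean(), (first_A > x).mean() / (1.0 - EBa))
```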
3 Proofs

3.1 Proof of Theorem 1.1 for T < ∞

We first derive the asymptotic upper bound
\[
\psi(x;T)\lesssim\overline\nu_P(1)\int_0^T\Pr\bigl(Xe^{-\widetilde R_t}>x\bigr)\,dt.
\]
The corresponding asymptotic lower bound can be established in a similar way. Obviously,
\[
\psi(x;T)\ge\Pr\bigl(Z_T>x\bigr)=\Pr\Bigl(p\int_0^T e^{-\widetilde R_t}\,dt+\sigma_P\int_0^T e^{-\widetilde R_t}\,dB_{P,t}+\int_0^T e^{-\widetilde R_t}\,dM_{P,t}+\int_0^T e^{-\widetilde R_t}\,dS_{P,t}>x\Bigr).
\]
Thus, going along the same lines as above we obtain
\[
\psi(x;T)\gtrsim\Pr\Bigl(\int_0^T e^{-\widetilde R_t}\,dS_{P,t}>x\Bigr)\sim\overline\nu_P(1)\int_0^T\Pr\bigl(Xe^{-\widetilde R_t}>x\bigr)\,dt.
\]
3.2 Proof of Theorem 1.1 for T = ∞
Recall (1.3) with $T=\infty$, that is, $\psi(x;\infty)=\Pr\bigl(Z_\infty^*>x\bigr)$. To derive the asymptotic upper bound of $Z_\infty^*$, we observe that
\[
Z_\infty^*\le Z_1^*+e^{-\widetilde R_1}\sup_{1\le t<\infty}\int_1^t e^{-(\widetilde R_s-\widetilde R_1)}\,dP_s\stackrel{\mathrm D}{=}Z_1^*+e^{-\widetilde R_1}Z_\infty^*, \tag{3.6}
\]
where on the right-hand side $Z_\infty^*$ is independent of $\bigl(Z_1^*,\widetilde R_1\bigr)$. Consider the random functional equation
\[
U\stackrel{\mathrm D}{=}Z_1^*+e^{-\widetilde R_1}U, \tag{3.7}
\]
where on the right-hand side $U$ is independent of $\bigl(Z_1^*,\widetilde R_1\bigr)$. From the proof of Theorem 1.1 for the case $T<\infty$ we know that the distribution of $Z_1^*$ belongs to the class $\mathrm{ERV}(-\alpha,-\beta)$ and that
\[
\Pr\bigl(Z_1^*>x\bigr)\sim\Pr\bigl(Z_1>x\bigr)\sim\overline\nu_P(1)\int_0^1\Pr\bigl(Xe^{-\widetilde R_t}>x\bigr)\,dt. \tag{3.8}
\]
By relation (3.6) and Lemma 2.8, the unique solution $U$ of (3.7) satisfies
\[
\psi(x;\infty)\le\Pr(U>x)\sim\sum_{k=1}^{\infty}\Pr\Bigl(A_k\prod_{i=1}^{k-1}B_i>x\Bigr), \tag{3.9}
\]
where $(A_k,B_k)$, $k=1,2,\ldots$, are i.i.d. copies of the random pair $\bigl(Z_1^*,e^{-\widetilde R_1}\bigr)$. By (3.8), for every $\delta>0$, there is some $x_0>0$ such that, for all $x>x_0$,
\[
\Pr\bigl(Z_1^*>x\bigr)\le(1+\delta)\,\overline\nu_P(1)\int_0^1\Pr\bigl(Xe^{-\widetilde R_t}>x\bigr)\,dt. \tag{3.10}
\]
It follows from (3.8)–(3.10) that
\begin{align*}
\psi(x;\infty)
&\lesssim\sum_{k=1}^{\infty}\Pr\Bigl(A_k\prod_{i=1}^{k-1}B_i>x,\ \prod_{i=1}^{k-1}B_i\le\frac{x}{x_0}\Bigr)+\sum_{k=1}^{\infty}\Pr\Bigl(\prod_{i=1}^{k-1}B_i>\frac{x}{x_0}\Bigr)\\
&\le(1+\delta)\,\overline\nu_P(1)\sum_{k=1}^{\infty}\int_0^1\Pr\Bigl(Xe^{-(\widetilde R_t+\widetilde R_{k-1})}>x\Bigr)\,dt+\Bigl(\frac{x}{x_0}\Bigr)^{-(\beta+\varepsilon)}\sum_{k=1}^{\infty}E\prod_{i=1}^{k-1}B_i^{\beta+\varepsilon}\\
&=(1+\delta)\,\overline\nu_P(1)\sum_{k=1}^{\infty}\int_{k-1}^{k}\Pr\bigl(Xe^{-\widetilde R_t}>x\bigr)\,dt+\Bigl(\frac{x}{x_0}\Bigr)^{-(\beta+\varepsilon)}\frac{1}{1-E\,e^{-(\beta+\varepsilon)\widetilde R_1}}.
\end{align*}
Since the last term above is negligible and $\delta$ can be arbitrarily small, we obtain
\[
\psi(x;\infty)\lesssim\overline\nu_P(1)\int_0^{\infty}\Pr\bigl(Xe^{-\widetilde R_t}>x\bigr)\,dt.
\]
To derive the corresponding asymptotic lower bound, we use
\[
Z_\infty^*\ge Z_1+e^{-\widetilde R_1}\sup_{1\le t<\infty}\int_1^t e^{-(\widetilde R_s-\widetilde R_1)}\,dP_s\stackrel{\mathrm D}{=}Z_1+e^{-\widetilde R_1}Z_\infty^*, \tag{3.11}
\]
where on the right-hand side $Z_\infty^*$ is independent of $\bigl(Z_1,\widetilde R_1\bigr)$. Let $V$ be the unique solution to the random functional equation
\[
V\stackrel{\mathrm D}{=}Z_1+e^{-\widetilde R_1}V,
\]
where on the right-hand side $V$ is independent of $\bigl(Z_1,\widetilde R_1\bigr)$. To apply Lemma 2.8 again, we need to verify that
\[
\Pr(Z_1\le-x)=o\bigl(\Pr(Z_1>x)\bigr). \tag{3.12}
\]
Actually,
\[
\Pr(Z_1\le-x)\le\Pr\Bigl(|p|\int_0^1 e^{-\widetilde R_t}\,dt+\sigma_P\Bigl|\int_0^1 e^{-\widetilde R_t}\,dB_{P,t}\Bigr|+\Bigl|\int_0^1 e^{-\widetilde R_t}\,dM_{P,t}\Bigr|+\Bigl|\int_0^1 e^{-\widetilde R_t}\,dS_{P,t}\Bigr|>x\Bigr).
\]
By relation (3.4), the tail probabilities of the first three terms in the bracket on the right-hand side above are $o\bigl(\Pr(Z_1>x)\bigr)$. By Lemma 2.5, the tail probability of the last term in the bracket is $o\bigl(\Pr(Z_1>x)\bigr)$ too. Hence, relation (3.12) holds. Then, by relation (3.11) and Lemma 2.8, we have
\[
\psi(x;\infty)\ge\Pr(V>x)\sim\sum_{k=1}^{\infty}\Pr\Bigl(A_k\prod_{i=1}^{k-1}B_i>x\Bigr), \tag{3.13}
\]
where $(A_k,B_k)$, $k=1,2,\ldots$, are i.i.d. copies of the random pair $\bigl(Z_1,e^{-\widetilde R_1}\bigr)$. Similarly as above, using (3.13) and (3.8) we obtain
\[
\psi(x;\infty)\gtrsim\overline\nu_P(1)\int_0^{\infty}\Pr\bigl(Xe^{-\widetilde R_t}>x\bigr)\,dt.
\]

3.3 Proof of Corollary 1.1
Recall that $E\bigl[e^{-(\alpha-\varepsilon)\widetilde R_1}\vee e^{-(\alpha+\varepsilon)\widetilde R_1}\bigr]<\infty$ for some $\varepsilon>0$. For arbitrarily fixed $\delta\in(0,\alpha\wedge\varepsilon)$ and $b>1$, let $x_0>0$ be specified as in Lemma 2.1 such that inequalities (2.1) hold. Split the integral on the right-hand side of (1.6) into two parts as
\[
\int_0^T\Pr\Bigl(Xe^{-\widetilde R_t}>x,\ e^{-\widetilde R_t}\le\frac{x}{x_0}\Bigr)\,dt+\int_0^T\Pr\Bigl(Xe^{-\widetilde R_t}>x,\ e^{-\widetilde R_t}>\frac{x}{x_0}\Bigr)\,dt=I_1(x)+I_2(x).
\]
By (2.1),
\[
b^{-1}\Pr(X>x)\int_0^T E\bigl[e^{-(\alpha-\delta)\widetilde R_t}\wedge e^{-(\alpha+\delta)\widetilde R_t}\bigr]\,dt\lesssim I_1(x)\lesssim b\,\Pr(X>x)\int_0^T E\bigl[e^{-(\alpha-\delta)\widetilde R_t}\vee e^{-(\alpha+\delta)\widetilde R_t}\bigr]\,dt.
\]
Moreover,
\[
I_2(x)\le\Bigl(\frac{x}{x_0}\Bigr)^{-(\alpha+\delta)}\int_0^T E\,e^{-(\alpha+\delta)\widetilde R_t}\,dt=o\bigl(\Pr(X>x)\bigr),
\]
where the last step uses Lemma 2.1(2). It follows that
\[
b^{-1}\Pr(X>x)\int_0^T E\bigl[e^{-(\alpha-\delta)\widetilde R_t}\wedge e^{-(\alpha+\delta)\widetilde R_t}\bigr]\,dt\lesssim\frac{\psi(x;T)}{\overline\nu_P(1)}\lesssim b\,\Pr(X>x)\int_0^T E\bigl[e^{-(\alpha-\delta)\widetilde R_t}\vee e^{-(\alpha+\delta)\widetilde R_t}\bigr]\,dt.
\]
Note that, for every $t>0$ and every $\delta\in(0,\alpha\wedge\varepsilon)$,
\[
E\bigl[e^{-(\alpha-\delta)\widetilde R_t}\vee e^{-(\alpha+\delta)\widetilde R_t}\bigr]\le2\max\Bigl\{E\,e^{-(\alpha-\varepsilon)\widetilde R_t},\ E\,e^{-(\alpha+\varepsilon)\widetilde R_t}\Bigr\}.
\]
First letting $\delta\downarrow0$ by the dominated convergence theorem, then letting $b\downarrow1$, we obtain
\[
\lim_{x\to\infty}\frac{\psi(x;T)}{\overline\nu_P(1)\Pr(X>x)}=\int_0^T E\,e^{-\alpha\widetilde R_t}\,dt=\frac{1-e^{\varphi_{\widetilde R}(\alpha)T}}{|\varphi_{\widetilde R}(\alpha)|}.
\]
Since $\overline\nu_P(1)\Pr(X>x)=\overline\nu_P(x)$ for $x\ge1$, this completes the proof of Corollary 1.1.
References

[1] Applebaum, D. Lévy Processes and Stochastic Calculus. Cambridge University Press, Cambridge, 2004.
[2] Bingham, N. H.; Goldie, C. M.; Teugels, J. L. Regular Variation. Cambridge University Press, Cambridge, 1987.
[3] Chen, Y.; Yuen, K. C. Sums of pairwise quasi-asymptotically independent random variables with consistent variation. Stoch. Models 25 (2009), no. 1, 76–89.
[4] Cline, D. B. H.; Samorodnitsky, G. Subexponentiality of the product of independent random variables. Stochastic Process. Appl. 49 (1994), no. 1, 75–98.
[5] Goldie, C. M. Implicit renewal theory and tails of solutions of random equations. Ann. Appl. Probab. 1 (1991), no. 1, 126–166.
[6] Goovaerts, M. J.; Kaas, R.; Laeven, R. J. A.; Tang, Q.; Vernic, R. The tail probability of discounted sums of Pareto-like losses in insurance. Scand. Actuar. J. (2005), no. 6, 446–461.
[7] Grey, D. R. Regular variation in the tail behaviour of solutions of random difference equations. Ann. Appl. Probab. 4 (1994), no. 1, 169–183.
[8] Kalashnikov, V.; Norberg, R. Power tailed ruin probabilities in the presence of risky investments. Stochastic Process. Appl. 98 (2002), no. 2, 211–228.
[9] Klüppelberg, C.; Kostadinova, R. Integrated insurance risk models with exponential Lévy investment. Insurance Math. Econom. 42 (2008), no. 2, 560–577.
[10] Kyprianou, A. E. Introductory Lectures on Fluctuations of Lévy Processes with Applications. Springer-Verlag, Berlin, 2006.
[11] Liptser, R. Sh.; Shiryayev, A. N. Theory of Martingales. Kluwer Academic Publishers Group, Dordrecht, 1989.
[12] Paulsen, J. Sharp conditions for certain ruin in a risk process with stochastic return on investments. Stochastic Process. Appl. 75 (1998a), no. 1, 135–148.
[13] Paulsen, J. Ruin theory with compounding assets - a survey. The interplay between insurance, finance and control. Insurance Math. Econom. 22 (1998b), no. 1, 3–16.
[14] Paulsen, J. On Cramér-like asymptotics for risk processes with stochastic return on investments. Ann. Appl. Probab. 12 (2002), no. 4, 1247–1260.
[15] Paulsen, J. Ruin models with investment income. Probab. Surv. 5 (2008), 416–434.
[16] Resnick, S. I.; Willekens, E. Moving averages with random coefficients and random coefficient autoregressive models. Comm. Statist. Stochastic Models 7 (1991), no. 4, 511–525.
[17] Samorodnitsky, G.; Taqqu, M. S. Stable Non-Gaussian Random Processes. Stochastic Models with Infinite Variance. Chapman & Hall, New York, 1994.
[18] Tang, Q.; Tsitsiashvili, G. Precise estimates for the ruin probability in finite horizon in a discrete-time model with heavy-tailed insurance and financial risks. Stochastic Process. Appl. 108 (2003), no. 2, 299–325.
[19] Tang, Q.; Wang, G.; Yuen, K. C. Uniform tail asymptotics for the stochastic present value of aggregate claims in the renewal risk model. Insurance Math. Econom. 46 (2010), no. 2, 362–370.
[20] Vervaat, W. On a stochastic difference equation and a representation of non-negative infinitely divisible random variables. Adv. in Appl. Probab. 11 (1979), no. 4, 750–783.
[21] Wang, D.; Tang, Q. Tail probabilities of randomly weighted sums of random variables with dominated variation. Stoch. Models 22 (2006), no. 2, 253–272.
[22] Zhang, Y.; Shen, X.; Weng, C. Approximation of the tail probability of randomly weighted sums and applications. Stochastic Process. Appl. 119 (2009), no. 2, 655–675.