Statistics and Probability Letters 83 (2013) 345–349
On simple representations of stopping times and stopping time sigma-algebras

Tom Fischer*
Institute of Mathematics, University of Wuerzburg, Campus Hubland Nord, Emil-Fischer-Strasse 30, 97074 Wuerzburg, Germany
* Tel.: +49 931 3188911. E-mail address: [email protected].

Article history: Received 20 June 2012; Received in revised form 25 September 2012; Accepted 27 September 2012; Available online 3 October 2012.
MSC: 60G40; 97K50; 97K60
Abstract
There exists a simple, didactically useful one-to-one relationship between stopping times and adapted càdlàg (RCLL) processes that are non-increasing and take the values 0 and 1 only. As a consequence, stopping times are always hitting times. Furthermore, we show how minimal elements of a stopping time sigma-algebra can be expressed in terms of the minimal elements of the sigma-algebras of the underlying filtration. This facilitates an intuitive interpretation of stopping time sigma-algebras. A tree example finally illustrates how these concepts, stopping times and stopping time sigma-algebras, which are notoriously difficult for students, may become easier to grasp by means of our results.
Keywords: Début Theorem; Hitting time; Stopped sigma-algebra; Stopping time; Stopping time sigma-algebra
1. Stopping times are hitting times: a natural representation

Possibly the most popular example of a stopping time is the 'hitting time' of a set by a stochastic process, defined as the first time at which a certain pre-specified set is hit by the process considered. Often this example is considered for a Borel-measurable set and a one-dimensional real-valued adapted process on a discrete time axis. Similar results in more general setups are usually collectively called the Début Theorem; see Bass (2010, 2011) for a proof. Astonishingly, it seems to be less widely taught (and maybe known) that the converse is true as well: for any stopping time there exists an adapted stochastic process and a Borel-measurable set such that the corresponding hitting time will be exactly this stopping time. Furthermore, the stochastic process can be chosen very intuitively: it will be 1 until just before the stopping time is reached, from which time on it will be 0. The 1-0-process therefore first hits the Borel set {0} at the stopping time. Hence, a one-to-one relationship is established between stopping times and adapted càdlàg processes that are non-increasing and take the values 0 and 1 only.

A 1-0-process as described above is obviously akin to a random traffic light which can go through three possible scenarios over time (here, 'green' stands for '1' and 'red' stands for '0'): (i) it stays red forever ('stopped immediately'); (ii) it is green at the beginning, then turns red, and stays red for the rest of the time ('stopped at some stage'); (iii) it stays green forever ('never stopped'). So, the traffic light can never change back to green once it has turned red (stopped once means stopped forever), and, for the adaptedness of the 1-0-process, it can only change on the basis of information up to the corresponding point in
time. This very intuitive interpretation of a stopping time as the time when such a 'traffic light' changes is considerably easier to understand than the concept of a random time which is 'known once it has been reached', one of the verbal interpretations of the usual standard definition of a stopping time (see Definition 2). While this representation and alternative definition of stopping times seems natural and didactically useful, it does not seem to be widely taught, or otherwise one would expect to find it in textbooks. However, there is no mention of it in standard textbooks on probability such as Billingsley (1995) and Bauer (2002) (Bauer as an example of a popular German stochastics textbook), or in standard textbooks on stochastic processes, stochastic calculus and stochastic mathematical finance such as Karatzas and Shreve (1991) and Bingham and Kiesel (2004).

Definition 1. We define the time axis as a set T ⊂ ℝ such that if h ∈ ℝ is an accumulation point of T and h < sup T, then h ∈ T.

The above requirement for the time axis (where we allow sup T = +∞) is very mild, as in most applications evenly spaced discrete points in time or continuous time intervals would be considered, and the required condition would be fulfilled.

Definition 2. A stopping time with respect to a filtration 𝔽 = (F_t)_{t∈T} on a probability space (Ω, F_∞, P) is a random variable τ with values in T ∪ {+∞} such that

{τ ≤ t} ∈ F_t  (t ∈ T).   (1)
Interpreted verbally, (1) means that at any time t ∈ T one knows, on the basis of information up to time t, whether one has been stopped already or not. Note that F_t ⊂ F_∞ is assumed for t ∈ T. In the following, we call a real-valued stochastic process càdlàg if for all ω ∈ Ω the paths X_·(ω) : T → ℝ have the property of being right-continuous with left-hand limits (RCLL). Note that for right-continuity to be well-defined, the condition of Definition 1 must hold on T in order to ensure that any point h ∈ ℝ to which a sequence in T might converge from the right is indeed contained in T and, hence, to ensure that X_h(ω) exists.

Definition 3. We call an adapted càdlàg process X = (X_t)_{t∈T} on (Ω, F_∞, 𝔽, P) a 'stopping process' if

X_t(ω) ∈ {0, 1}  (ω ∈ Ω, t ∈ T)   (2)

and

X_s ≥ X_t  (s ≤ t; s, t ∈ T).   (3)
Obviously, {X_t = 1} ∈ F_t for t ∈ T. For a finite or infinite discrete time axis given by T = {t_k : k ∈ ℕ, t_k ≤ t_j if k ≤ j}, an adapted process fulfilling (2) and (3) is automatically càdlàg. This follows from the observation that in this case T is bounded from below and either (a) is finite as a set, or (b) is infinite and has one accumulation point, sup T, which lies outside T, and is therefore bounded, or (c) has no accumulation point and is unbounded from above. Evidently, such a T also fulfills Definition 1.

Definition 4. For a stopping process X on (Ω, F_∞, 𝔽, P), define

τ^X(ω) = +∞,  if X_t(ω) = 1 for all t ∈ T,
τ^X(ω) = min{t ∈ T : X_t(ω) = 0},  otherwise.   (4)
The minimum in the lower case exists because of the càdlàg property for each path. By definition of τ^X, it is clear that

X_t = 1_{τ^X > t}  (t ∈ T),   (5)

which by adaptedness of X and {τ^X > t} = {τ^X ≤ t}^C implies

{τ^X ≤ t} ∈ F_t  (t ∈ T).   (6)

Therefore, τ^X is a stopping time for any stopping process X. Clearly, τ^X is the first time of X hitting the Lebesgue-measurable set {0}.

Definition 5. For a stopping time τ, define a stochastic process X^τ = (X^τ_t)_{t∈T} by

X^τ_t = 1_{τ > t}  (t ∈ T).   (7)

By {τ > t} = {τ ≤ t}^C and (1), X^τ is an adapted process. One has X^τ_s = 1_{τ > s} ≥ 1_{τ > t} = X^τ_t for s ≤ t. Because lim_{t↓τ(ω)} X^τ_t(ω) = 0 = X^τ_{τ(ω)}, X^τ is càdlàg and hence a stopping process.
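For a finite time axis and a finite sample space, the two constructions of Definitions 4 and 5 can be sketched in a few lines of Python. The encoding below (a stopping process as a dictionary mapping each outcome to its 0-1 path over T, a stopping time as a dictionary with values in T ∪ {+∞}) and the function names tau_of_X and X_of_tau are illustrative choices of ours, not notation from the paper.

import math

T = [0, 1, 2, 3]  # an illustrative finite time axis

def tau_of_X(X):
    """Eq. (4): the first t in T with X_t(w) = 0, or +infinity if there is no such t."""
    return {w: next((t for t in T if path[t] == 0), math.inf)
            for w, path in X.items()}

def X_of_tau(tau):
    """Eq. (7): X^tau_t(w) = 1 if tau(w) > t and 0 otherwise, a non-increasing 0-1 path."""
    return {w: {t: 1 if s > t else 0 for t in T}
            for w, s in tau.items()}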
Theorem 1. The mapping

f : X ↦ τ^X   (8)

is a bijection between the stopping processes and the stopping times on (Ω, F_∞, 𝔽, P) such that

f^{-1} : τ ↦ X^τ,   (9)

and hence

τ^{X^τ} = τ and X^{τ^X} = X.   (10)

Proof. That f maps stopping processes X to stopping times τ^X was seen in (6). That different stopping processes lead to different stopping times under f is obvious from (4); f is therefore an injection. For any stopping time τ, one has by (4) and (7) that

τ^{X^τ}(ω) = +∞,  if 1_{τ > t}(ω) = 1 for all t ∈ T,
τ^{X^τ}(ω) = min{t ∈ T : 1_{τ > t}(ω) = 0},  otherwise.   (11)

Since 1_{τ > t}(ω) = 0 if and only if τ(ω) ≤ t, the right-hand side of (11) is τ(ω). Therefore, τ(ω) = τ^{X^τ}(ω), and f is a surjection and therefore a bijection. From (5) and (7), we obtain X^{τ^X}_t = 1_{τ^X > t} = X_t for t ∈ T, which proves (9). □
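As a toy check of (10), the following self-contained Python snippet (with arbitrarily chosen values of τ) builds X^τ via (7), recovers its hitting time of {0} via (4), and confirms that both compositions are the identity.

import math

T = [0, 1, 2, 3]
tau = {"w1": 1, "w2": math.inf, "w3": 0, "w4": 3}  # arbitrary values in T ∪ {+inf}

X = {w: {t: 1 if s > t else 0 for t in T} for w, s in tau.items()}                   # Eq. (7)
recovered = {w: next((t for t in T if p[t] == 0), math.inf) for w, p in X.items()}   # Eq. (4)

assert recovered == tau                                                              # tau^{X^tau} = tau
assert {w: {t: 1 if s > t else 0 for t in T} for w, s in recovered.items()} == X     # X^{tau^X} = X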
Note that there was no identification of almost surely identical stopping times or stopping processes in Theorem 1 or any of the definitions; however, one can obviously transfer all results to equivalence classes of almost surely identical objects.

2. Minimal elements of stopping time sigma-algebras

Another concept that students of stochastic processes and probability usually struggle with is that of the so-called stopping time σ-algebras.

Definition 6. Let τ be a stopping time on a filtered probability space (Ω, F_∞, 𝔽, P). We define the stopping time σ-algebra with respect to τ as

F_τ = {F ∈ F_∞ : F ∩ {τ ≤ t} ∈ F_t for all t ∈ T}.   (12)

It is well-known and straightforward to show that F_τ is indeed a σ-algebra.
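For finite Ω, (12) can be evaluated by brute force. The Python sketch below does this for a hypothetical two-period example (four outcomes; time 1 only reveals whether the outcome lies in {a, b} or in {c, d}); the encoding of σ-algebras as sets of frozensets and the function name stopped_sigma_algebra are our own illustrative choices.

from itertools import combinations

Omega = frozenset("abcd")
F1 = {frozenset(), frozenset("ab"), frozenset("cd"), Omega}             # sigma-algebra at time 1
F2 = {frozenset(s) for r in range(5) for s in combinations("abcd", r)}  # power set of Omega
F = {1: F1, 2: F2}
tau = {"a": 1, "b": 1, "c": 2, "d": 2}   # a stopping time: {tau <= 1} = {a, b} lies in F1

def stopped_sigma_algebra(F, tau, T):
    """Eq. (12): all F in F_infinity with F ∩ {tau <= t} in F_t for every t in T."""
    F_inf = F[T[-1]]
    event = {t: frozenset(w for w in Omega if tau[w] <= t) for t in T}   # the events {tau <= t}
    return {A for A in F_inf if all((A & event[t]) in F[t] for t in T)}

print(sorted(map(sorted, stopped_sigma_algebra(F, tau, [1, 2]))))
# -> the 8 events that contain the outcomes a and b either both or not at all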
Definition 7. For a measurable space (Ω, F), we define the set of the minimal elements in the σ-algebra F by

A(F) = {A ∈ F : A ≠ ∅, and if F ∈ F and F ⊂ A, then F = A}.   (13)
Eq. (13) means that A ∈ A(F) cannot be 'split' in F, which is why elements of A(F) are also referred to as 'atoms' of F. Obviously, A(F) ⊂ F. For |F| < +∞, it is therefore easy to see that A(F) is a partition of Ω and that

F = σ(A(F))   (14)

since any non-empty F ∈ F can be written as a finite union of elements in A(F).

Definition 8. For t ∈ T ∪ {+∞}, we denote the set of minimal elements in F_t by A_t = A(F_t). Further, we define

A_t^τ = {A ∈ A_t : A ⊂ {τ = t}}  (t ∈ T ∪ {+∞}),   (15)

A_τ = ⋃_{t∈T∪{+∞}} A_t^τ.   (16)
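In the finite case, (13), (15) and (16) are directly computable. The sketch below again uses our own encoding (σ-algebras as collections of frozensets, a filtration as a dictionary indexed by time, with F_∞ taken to be the σ-algebra at the last time point), which is an assumption for illustration rather than part of the paper.

import math

def atoms(F):
    """Eq. (13): the minimal non-empty elements ('atoms') of a finite sigma-algebra F,
    given as a collection of frozensets."""
    nonempty = [A for A in F if A]
    return {A for A in nonempty if not any(B < A for B in nonempty)}

def A_tau(F_by_t, tau, T, Omega):
    """Eqs. (15)-(16): collect, over t in T ∪ {+inf}, the atoms of F_t contained in {tau = t}."""
    out = set()
    for t in list(T) + [math.inf]:
        F_t = F_by_t.get(t, F_by_t[T[-1]])                  # F_inf taken as F at the last time point
        event = frozenset(w for w in Omega if tau[w] == t)  # the event {tau = t}
        out |= {A for A in atoms(F_t) if A <= event}
    return out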
Note that the A_t^τ are disjoint for t ∈ T ∪ {+∞}.

Theorem 2. The elements of A_τ are minimal elements of F_τ, i.e. A_τ ⊂ A(F_τ). If |F_∞| < +∞, then A_τ = A(F_τ) and F_τ = σ(A_τ).

Proof. (i) A_τ ⊂ F_τ: Let A ∈ A_τ, so, for some s ∈ T ∪ {+∞}, A ∈ A_s and A ⊂ {τ = s} and therefore A ∩ {τ = s} = A. Assume now that t < s for some t ∈ T. Then A ∩ {τ ≤ t} = A ∩ {τ = s} ∩ {τ ≤ t} = ∅ ∈ F_t. For t ∈ T, assume now that t ≥ s. Then A ∩ {τ ≤ t} = A ∩ {τ = s} ∩ {τ ≤ t} = A ∩ {τ = s} = A ∈ F_s ⊂ F_t. Hence, A ∩ {τ ≤ t} ∈ F_t for all t ∈ T, and therefore A_τ ⊂ F_τ. (ii) A ∈ A_τ, F ∈ F_τ and F ⊂ A implies F = A: (a) Assume that A ∈ A_∞^τ and F ∈ F_τ with F ⊂ A. Clearly, A ∈ A_∞, implying F = A since F ∈ F_∞. (b) Assume that A ∈ A_t^τ for some t ∈ T and F ∈ F_τ with F ⊂ A. Therefore, A ∈ A_t and F ⊂ A ⊂ {τ = t}, and hence F ∩ {τ = t} = F. As F ∈ F_τ, one has F ∩ {τ ≤ t} ∈ F_t. Since {τ = t} ∈ F_t, F ∩ {τ ≤ t} ∩ {τ = t} = F ∩ {τ = t} = F ∈ F_t, but A ∈ A_t, and therefore F = A. (i) and (ii) prove the first statement of the theorem. Assume now that |F_∞| < +∞. A_τ is then a partition of Ω, because any two distinct sets in A_τ are disjoint, and, since {τ = t} ∈ F_t for t ∈ T ∪ {+∞}, one has ⋃ A_t^τ = {τ = t} and, hence, ⋃ A_τ = Ω. This proves the second statement. The last statement follows from (14). □

Fig. 1. Example.

The following result is well-known.
Proposition 1. If |F_∞| < +∞, then σ(τ) ⊂ F_τ and, in general, σ(τ) ≠ F_τ.

Proof. {τ = t} (t ∈ T) and {τ = +∞} are the minimal elements of σ(τ) if they are non-empty, but it is well-known and straightforward to see that these sets are elements of F_τ, too. Therefore, σ(τ) ⊂ F_τ. It is an easy exercise to find examples where σ(τ) ≠ F_τ (see the example below). □
We can interpret the filtered probability space (Ω, F_∞, 𝔽, P) as a random 'experiment' or 'experience' that develops over time. The common interpretation of the filtration 𝔽 = (F_t)_{t∈T} is that, if it is possible to repeat the experiment arbitrarily often until time t ∈ T, the maximum information that can be obtained about the experiment is F_t. Hence, F_t represents (potentially) available information up to time t. Using Theorem 2 for |F_∞| < +∞, we can interpret the stopping time σ-algebra F_τ as the maximum information that can be obtained from repeatedly carrying out the experiment up to the random time τ. This is straightforward from the definition of the A_t^τ in (15). While this interpretation is, of course, generally known, minimal sets are usually not used to derive it. However, the example below will illustrate that this is a very natural way of interpreting stopping time σ-algebras.

3. An example

In Fig. 1, we see the usual interpretation of a discrete time finite space filtration as a stochastic tree. In such a setting, the set of paths of maximal length represents Ω. In this case, Ω = {ω_1, ..., ω_8}. A path up to some node at time t (here
T = {0, 1, 2, 3} and F_3 = F_∞ = P(Ω)) represents the set of those paths of full length that have this path up to time t in common. The paths up to time t represent the minimal elements (atoms) A_t of F_t. For instance, in the example of Fig. 1,

A_1 = {{ω_1, ω_2}, {ω_3, ω_4}, {ω_5, ..., ω_8}}.   (17)

A stopping time τ (see Fig. 1) and the corresponding stopping process X^τ are given (X^τ is given by the values of the nodes of the tree). The intuitive interpretation of the stopping time as the random time at which X^τ jumps to ('hits') 0 becomes clear (see the boxed zeros). Furthermore, the paths up to those boxed zeros represent the minimal elements A_τ of the stopping time σ-algebra F_τ:

A_τ = {{ω_1, ω_2}, {ω_3}, {ω_4}, {ω_5, ω_6}, {ω_7}, {ω_8}}.   (18)

This follows of course from the fact that for any A ∈ A_t^τ one has A ⊂ {τ = t} by (15) and therefore X^τ_t(A) = 0, but X^τ_s(A) = 1 for s < t. So, in a tree example such as the given one, the 'frontier of first zeros' (or, more precisely, the paths leading up to it) describes the stopping time σ-algebra (as well as the stopping time itself). Therefore, it becomes very obvious in what sense F_τ contains the information in the system that can be obtained up to time τ. It is also easy to see that in this case the minimal elements of σ(τ), denoted by A(σ(τ)), are given by

A(σ(τ)) = {{ω_1, ω_2}, {ω_5, ω_6}, {ω_3, ω_4, ω_7, ω_8}}.   (19)

Therefore, A_τ ≠ A(σ(τ)) and, hence, σ(τ) ≠ F_τ, as stated in Proposition 1.
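The example lends itself to a complete numerical check. The Python script below rebuilds a tree consistent with (17)-(19); the branching at time 2 and the concrete values of τ (τ = 1 on {ω_1, ω_2}, τ = 2 on {ω_5, ω_6}, τ = 3 on {ω_3, ω_4, ω_7, ω_8}) are assumptions made for illustration and need not match Fig. 1 in every detail. Under these assumptions the script recovers (18) and (19), and it confirms A_τ = A(F_τ) (Theorem 2) as well as σ(τ) ⊂ F_τ with σ(τ) ≠ F_τ (Proposition 1).

from itertools import combinations

Omega = ["w1", "w2", "w3", "w4", "w5", "w6", "w7", "w8"]
T = [0, 1, 2, 3]
blocks = {                                                        # atoms A_t of F_t (tree nodes at time t)
    0: [Omega],
    1: [["w1", "w2"], ["w3", "w4"], ["w5", "w6", "w7", "w8"]],    # as in (17)
    2: [["w1", "w2"], ["w3", "w4"], ["w5", "w6"], ["w7", "w8"]],  # assumed branching at time 2
    3: [[w] for w in Omega],                                      # F_3 = F_inf = P(Omega)
}
tau = {"w1": 1, "w2": 1, "w5": 2, "w6": 2, "w3": 3, "w4": 3, "w7": 3, "w8": 3}  # assumed values

def sigma(partition):
    """Sigma-algebra generated by a finite partition: all unions of its blocks."""
    bs = [frozenset(b) for b in partition]
    return {frozenset().union(*c) for r in range(len(bs) + 1) for c in combinations(bs, r)}

def atoms(F):
    """Eq. (13): minimal non-empty elements of a finite sigma-algebra."""
    nonempty = [A for A in F if A]
    return {A for A in nonempty if not any(B < A for B in nonempty)}

F = {t: sigma(blocks[t]) for t in T}
F_inf = F[3]

le = {t: frozenset(w for w in Omega if tau[w] <= t) for t in T}          # the events {tau <= t}
F_tau = {A for A in F_inf if all((A & le[t]) in F[t] for t in T)}        # Eq. (12)

A_tau = set()
for t in T:                                                              # {tau = +inf} is empty here
    eq = frozenset(w for w in Omega if tau[w] == t)
    A_tau |= {A for A in atoms(F[t]) if A <= eq}                         # Eqs. (15)-(16)

sigma_tau = sigma([[w for w in Omega if tau[w] == v] for v in sorted(set(tau.values()))])

print(sorted(map(sorted, A_tau)))              # the six sets in (18)
print(atoms(F_tau) == A_tau)                   # Theorem 2 (finite case): True
print(sigma_tau <= F_tau, sigma_tau == F_tau)  # Proposition 1: True False
print(sorted(map(sorted, atoms(sigma_tau))))   # the three sets in (19)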
References

Bass, R.F., 2010. The measurability of hitting times. Electronic Communications in Probability 15, 99–105.
Bass, R.F., 2011. Correction to "The measurability of hitting times". Electronic Communications in Probability 16, 189–191.
Bauer, H., 2002. Wahrscheinlichkeitstheorie, 5. Auflage. de Gruyter.
Billingsley, P., 1995. Probability and Measure, third ed. Wiley-Interscience.
Bingham, N.H., Kiesel, R., 2004. Risk-Neutral Valuation: Pricing and Hedging of Financial Derivatives, second ed. Springer.
Karatzas, I., Shreve, S.E., 1991. Brownian Motion and Stochastic Calculus, second ed. Springer.