Super-Tree Random Measures

Hassan Allouba, Rick Durrett, Cornell U.
John Hawkes, U. of Swansea
Edwin Perkins, U. of British Columbia

May 18, 1998

Abstract. We use supercritical branching processes with random walk steps of geometrically decreasing size to construct random measures. Special cases of our construction give close relatives of the super-(spherically symmetric stable) processes. However, other cases can produce measures with very smooth densities in any dimension.

1. Introduction and Statement of Results

Let {p_k : k ∈ N} be a probability distribution on {1, 2, ...} with p_1 < 1. Let μ = Σ_k k p_k (> 1) and suppose Σ_k k² p_k < ∞. In this paper we will use a branching process with offspring distribution {p_k} to construct random measures using the following recipe:

(i) We start at time 0 with the "progenitor," one individual of generation 0, who immediately splits into k individuals (of generation 1) with probability p_k.

(ii) The jth individual of generation n is displaced from its parent by an amount a_n X_{n,j}, where the X_{n,j} are i.i.d. random vectors in R^d and the a_n are positive numbers.

(iii) We assign mass μ^{-n} at the location of each individual in generation n to construct a random measure ν_n(ω, dx), and let n → ∞.

Of course the first thing we have to show is that the limit exists. Let S denote a generic random variable equal in law to Σ_{n=1}^∞ a_n X_{n,1}, and let X denote a generic random variable equal in law to each X_{n,j}.

Theorem 1. Suppose Σ_{n=1}^∞ a_n X_{n,1} converges a.s. Then with probability one the sequence of measures ν_n(ω, dx) converges weakly to a random measure ν(ω, dx) on R^d, which has P(ν(R^d) > 0) = 1 and Eν(A) = P(S ∈ A) for all Borel sets A.
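The recipe above is easy to simulate. The sketch below is our own illustration, not from the paper: the offspring law p_1 = p_2 = 1/2 (so μ = 3/2), the choice a_n = λ^n with λ = 1/2, standard normal steps, and d = 1 are all arbitrary sample choices. It builds the generation-n particle cloud carrying mass μ^{-n} per particle; averaging the total mass μ^{-n}Z_n over independent runs gives a value near 1, consistent with it being a nonnegative martingale with mean 1.

```python
import random

def generation_positions(n_gens, lam, seed=None):
    """Simulate the recipe in d = 1 (illustrative choices throughout).

    Offspring law: p_1 = p_2 = 1/2, so mu = 1.5.
    Displacements: a_n * X with a_n = lam**n and X standard normal.
    Returns the positions of the generation-n_gens particles.
    """
    rng = random.Random(seed)
    positions = [0.0]              # generation 0: the progenitor at the origin
    for n in range(1, n_gens + 1):
        nxt = []
        for x in positions:
            k = 1 if rng.random() < 0.5 else 2        # offspring count
            nxt.extend(x + lam ** n * rng.gauss(0.0, 1.0) for _ in range(k))
        positions = nxt
    return positions

MU, LAM, N = 1.5, 0.5, 10

# nu_n assigns mass mu^{-n} to each generation-n particle, so the total mass
# mu^{-n} Z_n has mean 1; average it over independent runs.
masses = [len(generation_positions(N, LAM, seed=s)) * MU ** (-N) for s in range(2000)]
mean_mass = sum(masses) / len(masses)
```

Since p_0 = 0 the tree never dies out, so every run produces a non-trivial measure, matching P(ν(R^d) > 0) = 1 in Theorem 1.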

Our main interest here will be in the case a_n = λ^n where 0 < λ < 1, but it will also be interesting to look at a_n = n^{-q} where q > 0, and some of our results can be proved for general sequences. To explain our motivation for the construction and the title of the paper, consider the special case

a_n = λ^n,    E exp(it·X_{n,j}) = e^{-|t|^α}  where 0 < α ≤ 2.

In this case the convergence hypothesis in Theorem 1 is trivial to check (see the comment following Theorem 7). We claim that when λ = μ^{-1/α} this is a close relative of a super-(spherically symmetric α-stable) process at time τ = Σ_{n=1}^∞ μ^{-n}, when started from a point mass at 0 at time 0. To see this, note that (i) μ^{-n/α} X_{n,j} corresponds to running a symmetric stable process for time μ^{-n}, and (ii) if we go back to time t = τ − μ^{-n} then we find about μ^n = 1/(τ − t) individuals, which is the same as "the number of individuals" in a superprocess (finite variance branching) at time t who have offspring alive at time τ. To make the phrase in quotes precise we must either use the historical process (see Proposition 3.5 of Dawson and Perkins (1991)) or think in terms of approximating branching processes. If λ = μ^{-β/α} then the number of ancestors at time t = Σ_{k=1}^n λ^{αk} < τ = Σ_{k=1}^∞ λ^{αk} scales like (τ − t)^{-1/β}. This corresponds to the number of ancestors at time t of those individuals alive at time τ in a superprocess with critical branching laws in the domain of attraction of a one-sided stable law of index 1 + β for 0 < β ≤ 1 (see Proposition 3.5 in Dawson and Perkins (1991)). Note, however, that we can now have any value of β > 0. Until otherwise indicated (and this won't be until Theorem 7), we will assume

(1.1)  a_n = λ^n for some λ in (0, 1), and S = Σ_{n=1}^∞ λ^n X_{n,1} converges a.s.

Our first results about the measures constructed above compute the Hausdorff dimension of their support. First, we recall the usual heuristic argument for computing such dimensions (see, e.g., Mandelbrot (1982)). At the time the progenitor splits we get a mean number μ of copies of the original measure scaled by λ. Thus scaling space by 1/λ gives mean μ copies of the original, and we expect the Hausdorff dimension to be

(1.2)  D = (log μ)/log(1/λ),

provided of course that D ≤ d. To check the heuristic, note that multiplying a d-dimensional cube by 2 results in 2^d copies and d = (log 2^d)/log 2. Similarly, multiplying the standard Cantor set by 3 results in 2 copies, and its dimension is log 2/log 3. The next result says that the heuristic answer is right as long as the distribution of S is not too singular.

First we need some definitions. Let m_β denote the x^β-Hausdorff measure and let dim(A) denote the Hausdorff dimension of a set A in R^d. The carrying dimension of a measure m on R^d is β (cardim(m) = β) if for any β′ > β there is a Borel set B such that m(B^c) = 0 but m_{β′}(B) = 0, while for any β′ < β and any Borel set B, m(B^c) = 0 implies m_{β′}(B) = ∞. If m is a non-zero finite measure, define the energy dimension of m to be

endim(m) = sup{ β : ∫∫ |x − y|^{-β} dm(x) dm(y) < ∞ }   (sup ∅ = 0).

It follows easily from a well-known density theorem for Hausdorff measures that

(1.3)  m(B) > 0 implies dim(B) ≥ endim(m)

(see 7.2.1 and 7.2.2 of Dawson (1992)), and therefore

(1.4)  endim(m) ≤ cardim(m).

If cardim(m) = endim(m), we call this common value the dimension of m (dim(m)) and say that dim(m) exists. In this case, (1.3) shows that m can be supported by a Borel set of dimension dim(m) but will assign zero mass to any set of dimension less than dim(m). This terminology is not standard but is suitable for our purposes.
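As a quick sanity check on the energy dimension (a numerical aside of ours, not part of the paper's argument), take m to be Lebesgue measure on [0, 1], for which endim(m) = 1: the Riesz β-energy has the closed form 2/((1 − β)(2 − β)) for β < 1 and can be estimated by Monte Carlo.

```python
import random

def riesz_energy_mc(beta, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the Riesz beta-energy of Lebesgue measure on
    [0,1]: E|U - U'|^(-beta) for independent uniform U, U'."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += abs(rng.random() - rng.random()) ** (-beta)
    return total / n_samples

beta = 0.4
estimate = riesz_energy_mc(beta)
exact = 2.0 / ((1.0 - beta) * (2.0 - beta))   # finite precisely because beta < 1
```

For β ≥ 1 the energy integral diverges, which is how the definition detects that the dimension of Lebesgue measure on [0, 1] is 1.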

Theorem 2. Assume (1.1), let S′ denote an independent copy of S, and let P_S denote the law of S.
(a) cardim(ν) ≤ min(D, cardim(P_S)) a.s.
(b) endim(ν) ≥ min(D, endim(P_S)) a.s.
(c) If dim(P_S) exists, then so does dim(ν) a.s., and dim(ν) = min(D, dim(P_S)) a.s.
(d) If S − S′ has a bounded density, then dim(ν) = min(D, d) a.s.

Of course the hypothesis of (d) is satisfied if X − X′ or X has a bounded density (X′ is an independent copy of X) and, in particular, is satisfied if X has a symmetric stable distribution. As an example where (c) applies but (d) does not, suppose d = 1, P(X = 1) = P(X = −1) = 1/2, and λ ∈ (0, 1/2). It is then well known that dim(P_S) = log 2/log(1/λ) (in fact this is also the dimension of the closed support). To see the lower bound for endim(P_S), note that P(|S − S′| ≤ λ^N) 2^N is bounded and bounded away from 0 as N becomes large. The obvious covering argument gives the upper bound for cardim(P_S). Therefore (c) implies that dim(ν) = min(log μ, log 2)/log(1/λ). Note that the heuristic computation leading to (1.2) says nothing about the form of the distribution of X, except for the implicit assumption that the particles don't land on top of each other. If you find the fact that "dimension is independent of the motion" surprising in (d), or think it is wrong, note that when we make our identification with super-(stable processes), λ = μ^{-1/α} (or λ^{-α} = μ). The previous theorem does not assert the existence of a density when dim(ν) = d. However, a simple argument (see, e.g., Theorem 2.8 in Section 2.3 of Kallenberg (1983)) shows:

Theorem 3. If (1.1) holds, S − S′ has a bounded density, and D > d, then almost surely ν is absolutely continuous with respect to Lebesgue measure.

Note that when E exp(it·X_{n,j}) = exp(−|t|^α) and λ = μ^{−β/α}, these conclusions agree with known results for super-(spherically symmetric α-stable) processes with 1 + β-stable branching: the dimension of the support is α/β if d ≥ α/β (see Chapter 7 of Dawson (1992)), while there is a density if and only if d < α/β (see Theorem 8.3.1 in Dawson (1993)). We considered the Borel support in Theorem 2 because the closed support (i.e., the smallest closed set K with ν(K^c) = 0) can be much larger. Let B(z, r) = {y : |y − z| < r} be the open ball of radius r centered at z.

Theorem 4. Suppose that for some γ > 0 and each δ > 0 there are strictly positive constants C(δ) and r(δ) such that

(1.5)  P(X ∈ B(z, δ|z|)) ≥ C(δ)|z|^{−γ}  for |z| > r(δ).

If λ^γ μ ≥ 1, then the closed support of ν is R^d almost surely.

Note that (1.5) holds if the distribution of X is spherically symmetric and, for some 0 < c_1 ≤ c_2 < ∞,

(1.6)  c_1 r^{−γ} ≤ P(|X| > r) ≤ c_2 r^{−γ}  for r large.

The above theorem again agrees with known results for super-(spherically symmetric α-stable) processes. If λ = μ^{−β/α} for some β ∈ (0, 1] and α ∈ (0, 2), and X has a symmetric α-stable law, then (1.6) holds with γ = α, and λ^γ μ = μ^{1−β} ≥ 1 (with equality if and only if β = 1). Therefore the closed support of ν is almost surely R^d. The corresponding result for super-(spherically symmetric α-stable) processes may be found in Evans and Perkins (1991) (Corollary 5.3 and the ensuing Example (i)). Note that for any μ > 1 > λ > 0 we have λ^γ μ ≥ 1 when γ is close enough to 0, so for any D < d there are examples where the dimension of the Borel support is D but the closed support is R^d. One can construct a non-random measure with these properties by taking a fixed non-zero measure ν_0 whose support has dimension D and then considering

Σ_{n=1}^∞ 2^{−n} ν_0 ∘ θ_{q_n},

where θ_x denotes translation by x and {q_n} is a sequence that is dense in R^d. Since a particle of generation n gives rise to a copy of the original measure with mass scaled by μ^{−n}, this is probably a reasonable mental picture. Our next result complements Theorem 4 and gives an upper bound on the dimension of the closed support.

Theorem 5. Suppose P(|X| > r) ≤ Cr^{−γ} for all r > 0 and λ^γ μ < 1. Then with probability one the closed support of ν is compact and has Hausdorff dimension at most

D̄ = log μ / ( log(1/λ) − (1/γ) log μ ).

Note that the assumption λ^γ μ < 1 is the opposite of the one in Theorem 4 and guarantees that the denominator in D̄ is positive. Theorems 2 and 5 imply that when P(|X| > r) → 0 faster than any polynomial (so that γ may be taken arbitrarily large) and dim(P_S) = d (e.g., X Gaussian), the dimension of the closed support is the same as that of the Borel support. We believe that if λ^γ μ < 1, S − S′ has a bounded density, and (1.6) holds, then the closed support has dimension D̄ ∧ d, but proving this seems difficult.
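To illustrate the relationship between the two bounds (our own numerical aside; μ = 3 and λ = 0.4 are arbitrary sample values satisfying λ^γ μ < 1 for the γ's used), note that D̄ decreases to D as the tail exponent γ grows, which is the quantitative content of the remark about faster-than-polynomial tails:

```python
import math

mu, lam = 3.0, 0.4
D = math.log(mu) / math.log(1.0 / lam)        # the heuristic dimension (1.2)

def D_bar(gamma):
    """Closed-support bound of Theorem 5; valid only when lam**gamma * mu < 1."""
    assert lam ** gamma * mu < 1
    return math.log(mu) / (math.log(1.0 / lam) - math.log(mu) / gamma)

bounds = {g: D_bar(g) for g in (2, 5, 50, 500)}   # decreases toward D as gamma grows
```

For heavy tails (small γ) the closed support can be much larger than the Borel support; as γ → ∞ the gap closes.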

Theorem 3 establishes the existence of a density but does not give much information about its properties. The next result shows that as the "dimension" D gets larger, the measure gets smoother.

Theorem 6. Suppose that ∫_{R^d} |t|^k |E e^{it·S}| dt < ∞. If k ∈ Z_+ satisfies k < D/2 − d, then almost surely ν has a density function that has a continuous kth derivative.

The condition in Theorem 6 may look difficult to check, but noting that

|E e^{it·S}| = Π_{n=1}^∞ |E e^{iλ^n t·X_{n,1}}| ≤ |E exp(iλt·X_{1,1})|,

one sees that it holds for the stable laws and a number of other examples with smooth densities. If we suppose, for example, that ∫ |E exp(it·X_{n,1})| dt < ∞, then the inversion formula implies that X_{n,1} (and hence X_{n,1} − X_{n,2}) has a bounded continuous density. In this case Theorem 3 implies there is a density when D > d, while Theorem 6 implies there is a continuous density when D > 2d. This gap comes from the fact that we use the decay of the random characteristic function ν̂(t, ω) = ∫ e^{it·x} ν(ω, dx) as |t| → ∞ to establish the smoothness of ν, and this is not sharp. For example, in the case of one-dimensional super-Brownian motion (d = 1, the X_{n,j} normal, λ = μ^{−1/2}, D = 2), Theorem 6 does not allow us to conclude there is a continuous density, even though results of Konno and Shiga (1988) (see also Reimers (1989)) suggest that it is Hölder continuous of any index < 1/2. In this case the weakness is in the harmonic analysis and not in the probability. Our estimate E(|ν̂(t)|²) ≤ C|t|^{−D} (which follows easily from (5.3) below) is sharp, but this is not enough for the existence of a continuous density in the above case. Roelly-Coppoletta (1986) (see Theorem 1.12 on page 54) had similar troubles with this technique. In the case of the stable laws, we can compute the characteristic function of S = Σ_{n=1}^∞ a_n X_{n,j} exactly, and this enables us to get results for general sequences a_n. We therefore now drop the assumption (1.1).
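For instance, in the Gaussian case (α = 2, d = 1) the infinite product telescopes into a closed form, since the exponents −(λ^n|t|)²/2 sum to a geometric series: |E e^{itS}| = exp(−(t²/2)λ²/(1 − λ²)), which makes the integrability hypothesis of Theorem 6 immediate. A quick numerical check of this identity (our illustration; λ = 0.6 and t = 2.5 are arbitrary):

```python
import math

lam, t = 0.6, 2.5   # arbitrary sample values; the X_n are standard normal, d = 1

# |E exp(itS)| for S = sum_n lam^n X_n: each factor of the product is
# exp(-(lam^n t)^2 / 2), and the exponents form a geometric series.
product = 1.0
for n in range(1, 200):            # 200 factors is plenty: lam**200 is ~1e-44
    product *= math.exp(-((lam ** n) * t) ** 2 / 2.0)

closed_form = math.exp(-(t ** 2 / 2.0) * lam ** 2 / (1.0 - lam ** 2))
```

The Gaussian decay of the closed form makes ∫ |t|^k |E e^{itS}| dt finite for every k, so only the condition k < D/2 − d limits the smoothness obtained from Theorem 6.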

Theorem 7. Suppose E exp(it·X_{n,j}) = e^{−|t|^α} and A_n = Σ_{m=n+1}^∞ a_m^α is finite. If

Σ_{n=0}^∞ μ^{−n/2} A_n^{−(k+d)/α} < ∞,

then almost surely ν has a C^k density.

First note that Σ_{m=1}^∞ a_m^α < ∞ is necessary and sufficient for the sum defining S to converge a.s. (Compute the characteristic function of the sum S and recall that for independent random variables, convergence of the infinite sum in distribution implies that it converges almost surely; see Exercise 3.24 in Chapter 2 of Durrett (1991).) If a_n = λ^n then A_n ≤ Cλ^{αn}, and we have convergence when μ^{−1/2}λ^{−(k+d)} < 1; that is, when (log μ)/2 > (k + d) log(1/λ), or D/2 − d > k, which is the same as the conclusion of Theorem 6. If a_n = n^{−r/α} with r > 1 then A_n ≤ Cn^{1−r}, so we have convergence for all k and the density is C^∞. It is easy to show

Theorem 8. Suppose the X_{n,j} are independent standard normals and a_n = n^{−r/2}. If r > 3, then ν has compact support.

So we have situations in which the random measure ν has a C^∞ density with compact support. Theorem 8 is a simple sufficient condition for compact support. By working harder it is possible (for standard normal {X_n}) to derive a necessary and sufficient condition on {a_n} which, in the context of Theorem 8, shows that ν has compact support if r > 2 and dense support if r ≤ 2. This latter result, which may yet appear in a future paper of two of us, in fact shows that if the condition fails then the support is a.s. dense. The sufficiency is a simple consequence of Dudley's metric entropy condition for the continuity of Gaussian processes, applied to the positions of the particles as a Gaussian process indexed by a family tree. Steve Evans has pointed out to us that in certain related settings one can derive the necessity of the condition from his results on Gaussian processes indexed by local fields (Evans (1988a)).

The last and most complicated thing to describe is the genesis of this paper. Allouba and Durrett "invented" this construction in the course of Allouba's dissertation research. They wrote to Perkins for some help with lower bounds on the dimension of the support, only to find that Hawkes and Perkins had extensive notes written in 1990 which contained proofs of Theorems 1, 2, and the stronger form of 8 discussed above for the special case in which the X_{n,i} are i.i.d. normal with mean 0 and variance 1. Further collaboration, including a visit by Durrett to Vancouver in August 1995, led to the results presented here. The authors would like to express their appreciation to Steve Evans for explaining his results on local fields and thus inspiring us to tackle the non-Gaussian case. They are also grateful for an NSERC Collaborative Projects Grant which supported an informal conference at which the conversations with Evans occurred and which supported Durrett's visit to Vancouver.

2. Construction of the random measure

The notation and approach here follow that of Hawkes (1981). Let Z_+ be the set of positive integers, let F_n = Z_+^{{1,2,...,n}} for n ≥ 1, F_0 = {0}, and F = ∪_{n=0}^∞ F_n. F_n is the set of all possible individuals in generation n. If f ∈ F_n we write |f| = n. Let I = Z_+^{{1,2,...}} be the set of all possible infinite lines of descent. We equip Z_+ with the discrete topology and I with the product topology. Let B(I) be the Borel subsets of I. Our approach will be to first construct a random measure on (I, B(I)) and then transfer the measure to (R^d, R^d), where R^d = the Borel subsets of R^d. Let {p_k : k ≥ 0} be a probability distribution with

(∗)  p_0 = 0,  p_1 < 1,  μ = Σ_k k p_k,  Σ_k k² p_k < ∞.

To construct a branching process we let Z^f, f ∈ F, defined on some probability space (Ω_b, F_b, P_b), be i.i.d. with P(Z^f = k) = p_k. Here b is for branching. Let f ∈ F_m and n ≥ m. If for g ∈ F_n we let g|m denote the first m coordinates of g (with g|0 = 0), then

K_n^f(ω) = { g ∈ F_n : g|m = f, g_{j+1} ≤ Z^{g|j} for m ≤ j < n }

gives the descendants of f in generation n. For J ⊂ F_n let Y_n^f(ω, J) = μ^{−n} |J ∩ K_n^f(ω)|, where |A| denotes the cardinality of A. In words, we assign mass μ^{−n} to each point of K_n^f. Well-known results for branching processes imply that under assumption (∗) we have

(2.1) Theorem. If f ∈ F_m then W_f = lim_{n→∞} μ^{−n} |K_n^f| exists a.s. and in L², and has P(W_f > 0) = 1, EW_f = μ^{−m}.

If f ∈ F_m we let D(f) and D̄(f) be the possible descendants of f in F and I respectively. That is,

D(f) = { g ∈ F : g|m = f },   D̄(f) = { g ∈ I : g|m = f }.

To define a random measure on (I, B(I)) we begin by fixing ω outside a null set Γ^c, off which W_f(ω) exists for all f in F, and setting

Y(ω, D̄(f)) = W_f  if f ∈ K_m^0,  and Y(ω, D̄(f)) = 0 otherwise.

Results of Hawkes (1981) imply that

(2.2) Theorem. For ω ∈ Γ, Y(ω, ·) extends to a finite measure supported by the branching set K = { i ∈ I : i|n ∈ K_n^0 for all n ≥ 1 }.

Let {X^f : f ∈ F} be i.i.d. random variables taking values in R^d. As for the branching random variables, it is convenient to give a name, (Ω_d, F_d, P_d), to the space on which they are defined. Here d is for displacement. We will construct our random measure on the product space

(Ω, F, P) = (Ω_b × Ω_d, F_b × F_d, P_b × P_d).

Let a_n be a fixed sequence of positive real numbers. For f ∈ F_n we let

S^f = Σ_{m=1}^n a_m X^{f|m}

be the spatial location of f. Let 1 = (1, 1, ...) and suppose that

(∗∗)  S = Σ_{m=1}^∞ a_m X^{1|m} converges a.s.

This, of course, implies that for each i ∈ I,

S^i = Σ_{m=1}^∞ a_m X^{i|m}

converges for ω_d ∈ Ω_i with P_d(Ω_i) = 1. Here Ω_i = { ω : (i, ω) ∈ Λ }, where

Λ = { (i, ω) : the series defining S^i converges } ∈ B(I) × F_d.

To have S^i defined everywhere we set S^i = ∞ on Ω_i^c. Here ∞ is a point not in R^d, which we think of as the point "at infinity" usually added to compactify it. We let V = R^d ∪ {∞} and let V be the σ-algebra generated by R^d and {∞}. Note that (i, ω) → S^i(ω) is a B(I) × F_d measurable map from I × Ω to V. We abuse the notation slightly and consider F_b and F_d as the obvious sub-σ-fields of F. For ω_b ∈ Γ, Y(ω_b, ·) defines a measure on (I, B(I)), so we can define a random measure on (V, V) by setting

(2.3)  ν(ω, A) = Y(ω_b, { i : S^i(ω_d) ∈ A })

for A ∈ V and ω ∈ Γ × Ω_d, and setting ν(ω, A) = 0 if ω_b ∉ Γ. The displacement random variables are independent of the branching ones, and we have

∫ Y(ω_b, di) P_d(Ω_i^c) = 0,

so ν(ω, {∞}) = 0 for P_b × P_d a.e. ω. Note that

ν(ω, V) = Y(ω_b, I) = W_0,

so P(ν(ω, R^d) > 0) = 1 and Eν(ω, R^d) = 1. Conditioning on F_b, we have for A ∈ R^d,

E(ν(A) | F_b)(ω_b, ω_d) = E( ∫ 1_A(S^i) Y(di) | F_b )(ω_b, ω_d)
= ∫∫ 1_A(S^i(ω_d′)) Y(ω_b, di) P_d(dω_d′)
= P(S ∈ A) Y(ω_b, I) = P(S ∈ A) ν(ω, R^d).

Now take means to conclude Eν(A) = P(S ∈ A).

(2.3) defines our random measure. Our next task in this section is to show that ν is the measure defined in the Introduction. Let

ν_n(ω, A) = μ^{−n} |{ f ∈ K_n^0(ω_b) : S^f(ω_d) ∈ A }|.

We regard ν_n(ω) as a sequence of random variables taking values in the space M_F(R^d) of finite measures on (R^d, R^d), endowed with the topology of weak convergence. Let G_n be the σ-field generated by {Z^f : |f| < n} and {X^f : |f| ≤ n}, which are the random variables we need to construct the first n generations and determine the locations of the particles.

Proof of Theorem 1. Let T_n = Σ_{m=n+1}^∞ a_m X^{1|m}, and let

ν̄_n(ω, A) = ∫ ν_n(ω, dx) P(x + T_n ∈ A).

To explain this definition, note that

ν̄_n(ω, A) = E(ν(A) | G_n)(ω).

To see this, write ν(A) as an integral with respect to Y, decompose the integral as a sum of integrals over D̄(f), f ∈ K_n^0, condition on G_n ∨ F_b, and use Fubini's theorem as in the calculation of the unconditional mean. A standard result from martingale theory implies that the right-hand side of the above converges a.s. and in L¹ to ν(ω, A). To estimate the difference between ν̄_n and ν_n let

Lip_1 = { ψ : R^d → R : |ψ(x) − ψ(y)| ≤ |x − y| ∧ 1 }.

Recall that if

d(ρ, ρ′) = sup{ | ∫ ψ dρ − ∫ ψ dρ′ | : ψ ∈ Lip_1 },  ρ, ρ′ ∈ M_F(R^d),

then d is a complete metric inducing the weak topology on M_F(R^d); see page 150 of Ethier and Kurtz (1986). If ψ ∈ Lip_1, then

| ∫ ψ dν̄_n − ∫ ψ dν_n | = | ∫ ( ψ(x) − E(ψ(x + T_n)) ) ν_n(ω, dx) |
≤ ∫ E( |ψ(x) − ψ(x + T_n)| ) ν_n(ω, dx) ≤ E(|T_n| ∧ 1) ν_n(ω, R^d).

This shows d(ν̄_n, ν_n) → 0 a.s. Choose a countable set {ψ_i} of bounded continuous functions such that ρ_n → ρ in M_F(R^d) if and only if ρ_n(ψ_i) → ρ(ψ_i) for all i ∈ N. Since ν̄_n(ψ_i) → ν(ψ_i) for all i a.s., we have ν̄_n → ν a.s. It follows from the above that ν_n → ν a.s. The other assertions in Theorem 1 have already been established.

We end this section by calculating the mean measure of ν × ν. This will be important in the proofs of Theorems 3 and 6. Let

S_n = Σ_{m=1}^n a_m X^{1|m},

let T_n be as above, and let T_n′ be independent of (S_n, T_n) and have the same law as T_n.

(2.4) Lemma. Let φ : R^d × R^d → R be bounded and measurable. Then

E ∫∫ φ(x, y) dν(x) dν(y) = (1 − μ^{−1}) E(W_0²) Σ_{n=0}^∞ μ^{−n} E( φ(S_n + T_n, S_n + T_n′) ).

Proof. It suffices to consider φ ≥ 0. Let γ(i, j) = sup{ n : i_m = j_m for all m ≤ n } be the generation number of the last common ancestor of i and j, with γ(i, j) = 0 if i_1 ≠ j_1, and let

p_n(ω) = ∫∫ 1{γ(i,j) ≥ n} Y(ω, di) Y(ω, dj)

be the "probability" that γ(i, j) ≥ n. It is easy to see that

(2.5)  p_n(ω) = Σ_{f ∈ K_n^0} W_f²  and  E p_n(ω) = μ^{−n} E W_0².

The expected value we wish to compute equals

E ∫∫ φ(S^i, S^j) Y(di) Y(dj) = Σ_{n=0}^∞ E( ∫∫ 1(γ(i, j) = n) φ(S^i, S^j) Y(di) Y(dj) ).

Use Fubini's theorem to see that the nth summand equals

∫ [ ∫∫ ( ∫ φ(S^i(ω_d), S^j(ω_d)) dP_d(ω_d) ) 1(γ(i, j) = n) Y(ω_b, di) Y(ω_b, dj) ] dP_b(ω_b) = E( φ(S_n + T_n, S_n + T_n′) ) E(p_n − p_{n+1}).

We now use (2.5) to complete the proof, since E(p_n − p_{n+1}) = (1 − μ^{−1}) μ^{−n} E W_0².

3. Dimension of the Borel support

Assume (1.1). By the converse to the Borel–Cantelli lemma, the a.s. convergence of S implies that Σ_{m=1}^∞ P(|X| > λ^{−m}) is finite, which in turn implies that E(log^+ |X|) < ∞. Markov's inequality now gives

(3.1)  lim_{r→∞} (log r) P(|X| > r) = 0.

Let δ ∈ (0, 1) and let r_n(δ) ↑ ∞ be defined by

r_n(δ) = inf{ r : P(|X| > r) ≤ δ/n² }.

Note that (3.1) implies that, for fixed δ,

(3.2)  r_n(δ) = exp(nε_n) with ε_n → 0.

Imitating definitions from the previous section, we let

K̂_n^f(ω) = { g ∈ K_n^f : |X^{g|m}| ≤ r_m for all m ≤ n },   Ŷ_n^f(ω, J) = μ^{−n} |J ∩ K̂_n^f(ω)|.

The reader should note that these quantities and the other ones wearing hats below depend on δ, even though we have not recorded this dependence in the notation. A straightforward generalization of well-known results for branching processes (see, e.g., page 13 of Harris (1963) or pages 218–9 of Durrett (1991)) implies that under assumption (∗)

(3.3) Theorem. If f ∈ F_m then Ŵ_f = lim_{n→∞} μ^{−n} |K̂_n^f| exists a.s. and in L², and has EŴ_f = μ^{−m} Π_{n=m+1}^∞ P(|X| ≤ r_n(δ)).

As before, we can define a random measure on (I, B(I)) by setting

Ŷ(ω, D̄(f)) = Ŵ_f  if f ∈ K̂_m^0,  and Ŷ(ω, D̄(f)) = 0 otherwise,

for ω ∈ Γ̂, a set of probability 1 on which Ŵ_f exists for all f ∈ F, and results of Hawkes (1981) imply that

(3.4) Theorem. For ω ∈ Γ̂ with P_b(Γ̂) = 1, Ŷ(ω, ·) extends to a finite measure supported by the branching set K̂ = { i ∈ I : i|n ∈ K̂_n^0 for all n ≥ 1 }.

Again, we can define a random measure on (V, V) by setting

(3.5)  ν^δ(ω, A) = Ŷ(ω_b, { i : S^i(ω_d) ∈ A })

for A ∈ V and ω ∈ Γ̂ × Ω_d, and setting ν^δ(ω, A) = 0 if ω_b ∉ Γ̂. We have ν^δ(ω, A) ≤ ν(ω, A) and

(3.6)  E(ν − ν^δ)(ω, R^d) = 1 − Π_{n=1}^∞ P(|X| ≤ r_n(δ)) ≤ δ Σ_{n=1}^∞ n^{−2}.

Let K̂_∞ = ∪_{n=1}^∞ K̂_n^0, Δ̂_n = sup{ |S^g − S^f| : f ∈ K̂_n^0, g ∈ K̂_∞, g ∈ D(f) }, and

b_n = Σ_{m=n+1}^∞ λ^m r_m(δ).

It follows from our definitions that Δ̂_n ≤ b_n, and from (3.2) that

(3.7)  if λ < η < 1 then Δ̂_n ≤ η^n for large n.

Upper bound on the carrying dimension. Let c > μ and 1 > η > λ. From (3.3) and (3.7) it follows that if n ≥ N_0(ω) then |K̂_n^0| ≤ c^n and Δ̂_n ≤ η^n. When this occurs the support of ν^δ can be covered by c^n balls of radius η^n. If β = −(1 + ε)(log c)/log η with ε > 0, then c^n (η^n)^β = c^{−nε} → 0. This shows that ν^δ is supported by a set of zero β-dimensional Hausdorff measure. Letting δ = 1/k², k → ∞, and using (3.6) and Borel–Cantelli, we see that the same is true of ν. Since ε > 0, c > μ, and η > λ are arbitrary, we conclude that the carrying dimension of ν is less than or equal to (log μ)/log(1/λ) = D. Let β′ > cardim(P_S) and let B support P_S and satisfy dim(B) ≤ β′. Then ν(B^c) = 0 a.s. because its mean is zero by Theorem 1. This shows that cardim(ν) ≤ cardim(P_S) a.s. and completes the proof of Theorem 2(a).

Lower bound on the energy dimension. Let 0 < β < min(D, endim(P_S)). If S and S′ are as in Theorem 2 and

E_β(ν) = ∫∫ |x − y|^{−β} dν(x) dν(y)

is the Riesz β-energy of ν, then (2.4) implies

E( E_β(ν) ) = (1 − μ^{−1}) E(W_0²) Σ_{n=0}^∞ μ^{−n} E( |T_n − T_n′|^{−β} ) = (1 − μ^{−1}) E(W_0²) E( |S − S′|^{−β} ) Σ_{n=0}^∞ (μλ^β)^{−n},

since T_n − T_n′ is equal in law to λ^n(S − S′). This is finite by the choice of β: the geometric series converges because β < D, and E(|S − S′|^{−β}) < ∞ because β < endim(P_S). This shows E_β(ν) < ∞ a.s., and we can conclude endim(ν) ≥ β. This proves Theorem 2(b), and (c) is then immediate from this and (a). To prove (d) it suffices to show that if S − S′ has a bounded density f, then endim(P_S) ≥ d. Let β < d and note that

E( |S − S′|^{−β} ) = ∫ f(x) |x|^{−β} dx ≤ c ‖f‖_∞ ∫_0^1 r^{−β+d−1} dr + 1 < ∞.

This completes the proof of Theorem 2, so we turn to the

Proof of Theorem 3. The first step is to let Q(x, r) be the cube of side 2r centered at x and show

(3.8)  sup_n E[ ∫ ν(dx) ν(Q(x, λ^n)) λ^{−nd} ] < ∞.

The quantity in question is

sup_n E ∫∫ 1( y − x ∈ Q(0, λ^n) ) dν(y) dν(x) λ^{−nd}.

By (2.4), this equals

C sup_n Σ_{m=0}^∞ μ^{−m} P( (T_m − T_m′) ∈ Q(0, λ^n) ) λ^{−nd} = C sup_n Σ_{m=0}^∞ μ^{−m} P( (S − S′) ∈ Q(0, λ^{n−m}) ) λ^{−nd}.

As S − S′ has a bounded density, the above is bounded by

C sup_n Σ_{m=0}^∞ μ^{−m} λ^{(n−m)d} λ^{−nd} = C(d) Σ_{m=0}^∞ μ^{−m} λ^{−md},

which is finite because d < D = (log μ)/log(1/λ). To get from (3.8) to the desired conclusion, recall the martingale proof of the Radon–Nikodym theorem. Let ρ be a measure with support in the unit cube. Let Λ be Lebesgue measure on [0, 1]^d. Let ρ_r be the part of ρ absolutely continuous with respect to Λ, and let ρ_s = ρ − ρ_r be the singular part. Subdivide the unit cube into 2^{nd} cubes I_i^n with side 2^{−n} in the obvious way. Then we have (see Durrett (1991), page 209)

(3.13) Theorem. M_n = ρ(I_i^n)/Λ(I_i^n) on I_i^n defines a martingale with

M_n → dρ_r/dΛ  Λ-a.s.,   M_n → ∞  ρ_s-a.s.

Thus if ρ_s ≠ 0, Fatou's lemma implies

∫ lim inf_{n→∞} M_n ρ(dx) = ∞.

Applying this reasoning to the random measure ν restricted to a translate of the unit cube, and noting that if λ^m ≥ 2^{−n} > λ^{m+1} then the cube Q(x, λ^m) contains the cube of side 2^{−n} to which x belongs, we see that if ν has a singular part then the quantity inside the expectation in (3.8) converges to ∞. This contradicts (3.8) (use Fatou's lemma), and Theorem 3 is proved.

4. Properties of the closed support

To prove Theorem 4 it suffices to prove

(4.1) Lemma. Suppose λ^γ μ ≥ 1. If x ∈ R^d and r, ε > 0, then P(ν(B(x, r)) > 0) > 1 − ε.

Proof. Consider the truncated process defined in Section 3, let Ẑ_n = |K̂_n^0| be the associated nonhomogeneous branching process, and let Ω_∞ = {Ẑ_n > 0 for all n} be the survival event. The first step is to pick δ > 0 small enough so that the survival probability satisfies P(Ω_∞) ≥ 1 − ε/4. To see that this is possible, note that since p_0 = 0 we always have (1, 1, ..., 1) ∈ K_n^0, so P(Ω_∞) ≥ P(|X^{1|m}| ≤ r_m(δ) for all m) ≥ 1 − δ Σ_{m=1}^∞ m^{−2}. Next we note that Ẑ_n/μ^n → Ŵ a.s. with P(Ŵ > 0) = P(Ω_∞), so we can pick an a > 0 and a large integer N_1 so that

P(Ẑ_n ≥ aμ^n for all n ≥ N_1) ≥ 1 − ε/2.

Choose N_2 ≥ N_1 such that

(4.2)  Σ_{m=N_2+1}^∞ λ^m r_m(δ) < r/2.

Let n_0(ω) be the first n ≥ N_2 such that Ẑ_n ≥ aμ^n and there is an f in K̂_n^0 such that S^f ∈ B(x, r/2) (n_0 = ∞ if no such n exists). All particles in the nth generation of the truncated process lie in the ball of radius

R_δ = Σ_{m=1}^∞ λ^m r_m(δ) < ∞.

So under our assumption (1.5) on the distribution of X, for n ≥ N_2 (make N_2 larger if necessary) each particle in the nth generation of the truncated process has probability at least C(R_δ, x, r) λ^{nγ} of having a child (in the non-truncated process) in B(x, r/2). On {Ẑ_n ≥ aμ^n for all n ≥ N_1}, for n > N_2 we have

P(n_0 > n + 1 | G_n ∨ F_b)(ω) ≤ 1(n_0(ω) > n) Π_{f ∈ K̂_n^0} ( 1 − P( S^f(ω) + λ^{n+1} X ∈ B(x, r/2) ) )
≤ 1(n_0(ω) > n) (1 − Cλ^{nγ})^{Ẑ_n}  (by the above)
≤ 1(n_0(ω) > n) e^{−aC},

since λ^{nγ} Ẑ_n ≥ aλ^{nγ}μ^n ≥ a when λ^γ μ ≥ 1. It follows that n_0 < ∞ a.s. on the event {Ẑ_n ≥ aμ^n for all n ≥ N_1}. Clearly n_0 is a (G_n)-stopping time, and we may choose a G_{n_0}-measurable random index f such that on {n_0 < ∞}, f ∈ K̂_{n_0}^0 and S^f ∈ B(x, r/2).

We claim now that the descendants of f in the truncated population will produce positive mass in B(x, r). Note that on {n_0 < ∞}, (4.2) shows that

|S^i − S^f| < r/2 for Ŷ-almost every i in D̄(f),

and so ν^δ(B(x, r)) ≥ Ŷ(D̄(f)) = Ŵ_f. Therefore

P( ν(B(x, r)) > 0 ) ≥ P( ν^δ(B(x, r)) > 0, n_0 < ∞ ) ≥ E( P(Ŵ_f > 0 | G_{n_0}) 1(n_0 < ∞) ) ≥ (1 − ε/4)(1 − ε/2) > 1 − ε.

This completes the proof.

Proof of Theorem 5. We begin with a result about the maxima of the displacements for generation n, (4.3), which leads in turn to a bound on the maximum diameter of the set of descendants of individuals in generation n, (4.4), and in turn to an upper bound on the dimension of the closed support.

(4.3) Lemma. lim sup_{n→∞} (1/n) log max_{f ∈ K_n^0} |X^f| ≤ (1/γ) log μ a.s.

Proof. It follows from (2.1) that if c > μ then |K_n^0|/c^n → 0 a.s. Letting κ > c^{1/γ}, using a basic inequality from measure theory and our assumption about the tail of |X|,

P( max_{f ∈ K_n^0} |X^f| > κ^n, |K_n^0| ≤ c^n ) ≤ c^n P(|X| > κ^n) ≤ c^n Cκ^{−γn}.

Our choice of κ implies cκ^{−γ} < 1, so the right-hand side is summable. Since for any ε > 0 we can choose log κ < (1/γ) log μ + ε, the desired result follows from the Borel–Cantelli lemma.

Let K_∞^0 = ∪_{n=1}^∞ K_n^0 and Δ_n = sup{ |S^g − S^f| : f ∈ K_n^0, g ∈ K_∞^0, g ∈ D(f) }. From (4.3) it follows easily that we have

(4.4) Lemma. Suppose λμ^{1/γ} < 1. Then lim sup_{n→∞} (1/n) log Δ_n ≤ log(λμ^{1/γ}) a.s.

Proof. Let κ > μ^{1/γ} with λκ < 1. (4.3) implies that for n ≥ N(ω),

max_{f ∈ K_n^0} |X^f| ≤ κ^n,

and hence if n ≥ N(ω) then Δ_n ≤ Σ_{m=n}^∞ λ^m κ^m. This implies

lim sup_{n→∞} (1/n) log Δ_n ≤ log(λκ) a.s.,

and the desired result follows.

To prove Theorem 5 now, let c > μ and η > λμ^{1/γ}. From (2.1) and (4.4) it follows that if n ≥ N_0(ω) then |K_n^0| ≤ c^n and Δ_n ≤ η^n. When this occurs the closed support of ν can be covered by c^n balls of radius η^n and, in particular, is compact. If β = −(1 + ε)(log c)/log η with ε > 0, then c^n (η^n)^β = c^{−nε} → 0. This shows that the β-dimensional Hausdorff measure of the closed support of ν is 0. Since ε > 0, c > μ, and η > λμ^{1/γ} are arbitrary, the Hausdorff dimension is less than or equal to −(log μ)/log(λμ^{1/γ}) = D̄.

By repeating the last proof with minor modifications we can prove Theorem 8, as we now show.

Proof of Theorem 8. Our first step is to bound the displacements on level n.

(4.5) Lemma. Suppose the X^f are i.i.d. standard normals. Then there is a constant A = A(μ) such that

lim sup_{n→∞} (1/√n) max_{f ∈ K_n^0} |X^f| ≤ A a.s.

Proof. It follows from (2.1) that if c > μ then |K_n^0|/c^n → 0 a.s. Using a basic inequality from measure theory and a standard result about the tail of the normal distribution (see, e.g., Durrett (1991), Theorem (1.3) in Chapter 1),

P( max_{f ∈ K_n^0} |X^f| > A√n, |K_n^0| ≤ c^n ) ≤ c^n P(|X| > A√n) ≤ (Cc^n/(A√n)) e^{−A²n/2}.

If we choose A so that ce^{−A²/2} < 1, then the right-hand side is summable, and the desired result follows from the Borel–Cantelli lemma.

To prove Theorem 8 now, we note that (4.5) implies

(4.6) Lemma. If Σ_{n=1}^∞ a_n √n < ∞ then ν has compact support.

When a_n = n^{−r/2}, a_n √n = n^{(1−r)/2} is summable if and only if r > 3, so Theorem 8 follows.

5. Smoothness Results

The key to the developments here is the random characteristic function

ν̂(t, ω) = ∫ e^{it·x} ν(ω, dx),  t ∈ R^d.

Specifically, our aim will be to show that

E( ∫ |t|^k |ν̂(t, ω)| dt ) < ∞.

Once this is done, the desired conclusions can be obtained from the following well-known result (see the proof of Theorem 7.25 in Rudin (1991)).

(5.1) Lemma. Let φ(t) = ∫ e^{it·x} ρ(dx), where ρ is a finite measure. If ∫ |t|^k |φ(t)| dt < ∞, then ρ(dx) = f(x) dx, where f has continuous kth order partial derivatives given by

f_{x_{i_1} ⋯ x_{i_k}}(x) = (2π)^{−d} ∫ ( Π_{j=1}^k (−i t_{i_j}) ) e^{−it·x} φ(t) dt.

Note that ν̂(t) = ∫ cos(t·x) ν(ω, dx) + i ∫ sin(t·x) ν(ω, dx) and |a + bi|² = a² + b², so using (2.4) and the accompanying notation, we have

E|ν̂(t)|² = E{ ( ∫ cos(t·x) ν(ω, dx) )² + ( ∫ sin(t·x) ν(ω, dx) )² }
= C Σ_{n=0}^∞ μ^{−n} E{ cos(t·(S_n + T_n)) cos(t·(S_n + T_n′)) + sin(t·(S_n + T_n)) sin(t·(S_n + T_n′)) },

where C = (1 − μ^{−1}) E(W_0²). Introducing the notation

c_n(x) = E cos(t·(x + T_n)),  s_n(x) = E sin(t·(x + T_n)),  e_n(x) = E e^{it·(x + T_n)},

we can condition on the value of S_n to write

E{ cos(t·(S_n + T_n)) cos(t·(S_n + T_n′)) + sin(t·(S_n + T_n)) sin(t·(S_n + T_n′)) } = E(c_n(S_n)²) + E(s_n(S_n)²) = E|e_n(S_n)|²,

since E e^{it·(x+T_n)} = E cos(t·(x + T_n)) + iE sin(t·(x + T_n)) and hence c_n(x)² + s_n(x)² = |e_n(x)|². Since e_n(x) = e^{it·x} E e^{it·T_n}, we have |e_n(x)|² = |E e^{it·T_n}|²; i.e., |e_n(x)|² is constant. Combining this with the previous computations we have

(5.3)  E|ν̂(t)|² = C Σ_{n=0}^∞ μ^{−n} |E e^{it·T_n}|².

Using the fact that for positive numbers c_m we have

( Σ_m c_m )^{1/2} ≤ Σ_m c_m^{1/2}

(to prove this, square both sides) and Jensen's inequality, we have

E|ν̂(t)| ≤ ( E|ν̂(t)|² )^{1/2} ≤ C^{1/2} Σ_{n=0}^∞ μ^{−n/2} |E e^{it·T_n}|.

Integrating and using Fubini's theorem, we have

(5.4)  E ∫_{R^d} |t|^k |ν̂(t)| dt ≤ C^{1/2} Σ_{n=0}^∞ μ^{−n/2} ∫_{R^d} |t|^k |E e^{it·T_n}| dt.

Proof of Theorem 6. When a_n = λ^n, clearly T_n/λ^n is equal in law to T_0 = S. Since we have supposed

I = ∫_{R^d} |t|^k |E e^{it·T_0}| dt < ∞,

changing variables t = s/λ^n, dt = ds/λ^{nd}, we have

∫_{R^d} |t|^k |E e^{it·T_n}| dt = ∫_{R^d} |s|^k |E e^{is·T_0}| ds · λ^{−n(k+d)} = I λ^{−n(k+d)}.

So (5.4) becomes

E ∫_{R^d} |t|^k |ν̂(t)| dt ≤ C^{1/2} Σ_{n=0}^∞ μ^{−n/2} λ^{−n(k+d)} I < ∞

when μ^{−1/2} λ^{−(k+d)} < 1. Taking logs and rearranging gives (k + d) log(1/λ) < (log μ)/2, or D/2 − d > k.

Proof of Theorem 7. Suppose E e^{it·X} = e^{−|t|^α}. Then |E e^{it·T_n}| = exp(−A_n |t|^α), where

A_n = Σ_{m=n+1}^∞ a_m^α.

Writing things in polar coordinates and then changing variables r = s/A_n^{1/α}, dr = ds/A_n^{1/α},

∫_{R^d} |t|^k |E e^{it·T_n}| dt = C ∫_0^∞ r^{k+d−1} e^{−A_n r^α} dr = A_n^{−(k+d)/α} C ∫_0^∞ s^{k+d−1} e^{−s^α} ds.

The last integral is finite since we always have k + d ≥ 1. Plugging this result into (5.4) we have

E ∫_{R^d} |t|^k |ν̂(t)| dt ≤ C′ Σ_{n=0}^∞ μ^{−n/2} A_n^{−(k+d)/α},

which proves the desired result.
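The convergence criterion just proved is easy to explore numerically. The sketch below is our own illustration (the values μ = 64, α = 1, d = 1, and λ = 1/2, giving D = 6, are arbitrary): for a_n = λ^n the ratio of successive terms of the series in Theorem 7 tends to μ^{−1/2}λ^{−(k+d)}, which is below 1 exactly when k < D/2 − d = 2, while for a_n = n^{−r/α} it tends to μ^{−1/2} < 1 for every k.

```python
import math

mu, alpha, d = 64.0, 1.0, 1        # with lam = 1/2 this gives D = 6

def term(n, a, k, tail_len=4000):
    """n-th term mu^(-n/2) * A_n^(-(k+d)/alpha) of the series in Theorem 7,
    with A_n = sum_{m>n} a(m)^alpha truncated after tail_len terms."""
    A_n = sum(a(m) ** alpha for m in range(n + 1, n + 1 + tail_len))
    return mu ** (-n / 2.0) * A_n ** (-(k + d) / alpha)

geom = lambda m: 0.5 ** m          # a_n = lam^n with lam = 1/2
poly = lambda m: m ** (-2.0)       # a_n = n^(-r/alpha) with r = 2

def ratio(a, k, n=20):
    return term(n + 1, a, k) / term(n, a, k)

r_geom_ok = ratio(geom, 1)    # k = 1 < D/2 - d = 2: ratio 1/2, series converges
r_geom_bad = ratio(geom, 3)   # k = 3 > 2: ratio 2, series diverges
r_poly = ratio(poly, 5)       # ratio below 1 for every k: C-infinity density
```

The geometric case reproduces the D/2 − d > k threshold of Theorem 6, while the polynomial case shows why slowly decaying a_n yield a C^∞ density.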

REFERENCES

D.A. Dawson (1992) Infinitely divisible random measures and superprocesses. In Proceedings 1990 Workshop on Stochastic Analysis and Related Topics, Silivri, Turkey. Birkhäuser
D.A. Dawson (1993) Measure-valued Markov processes. St. Flour Lecture Notes, Springer Lecture Notes in Math. 1541
D.A. Dawson and E.A. Perkins (1991) Historical processes. Memoirs of the AMS No. 454
R. Durrett (1991) Probability: Theory and Examples. Wadsworth Pub. Co., Belmont, CA
S. Ethier and T. Kurtz (1986) Markov Processes: Characterization and Convergence. John Wiley and Sons, New York
S.N. Evans (1988a) Continuity properties of Gaussian stochastic processes indexed by a local field. Proc. London Math. Soc. 56, 380-416
S.N. Evans (1988b) Sample path properties of Gaussian stochastic processes indexed by a local field. Proc. London Math. Soc. 56, 580-624
S.N. Evans and E.A. Perkins (1991) Absolute continuity results for superprocesses with some applications. Trans. Amer. Math. Soc. 325, 661-681
T.E. Harris (1963) Branching Processes. Springer-Verlag, New York
J. Hawkes (1981) Trees generated by a simple branching process. J. London Math. Soc. 24, 373-384
O. Kallenberg (1983) Random Measures. Third Edition. Akademie-Verlag, Berlin
N. Konno and T. Shiga (1988) Stochastic differential equations for some measure-valued diffusions. Prob. Th. Rel. Fields 79, 201-225
B. Mandelbrot (1982) The Fractal Geometry of Nature. W.H. Freeman, New York
M. Reimers (1989) One dimensional stochastic partial differential equations and the branching measure diffusion. Prob. Th. Rel. Fields 81, 319-340
S. Roelly-Coppoletta (1986) A criterion of convergence of measure-valued processes: application to measure-valued branching. Stochastics 17, 43-65
W. Rudin (1991) Functional Analysis. Second Edition. McGraw-Hill, New York
