Sequential Correlated Sampling Methods for Some Transport Problems

Rong Kong & Jerome Spanier
Department of Mathematics, Claremont Graduate University
925 N. Dartmouth Ave., Claremont, CA 91711, USA
Email: [email protected] & [email protected]

Abstract. In this paper, we will describe how to extend to the solution of certain simple particle transport problems a sequential correlated sampling method introduced by Halton in 1962 for the efficient solution of certain matrix problems. Although the methods apply quite generally, we have so far studied in detail only problems involving planar geometry. We will describe important features of the resulting algorithm, in which random walks are processed in stages, each stage producing a small "correction" to the solution obtained from the previous stage. Issues encountered in the course of implementing such an algorithm for practical transport problems will be discussed, and numerical evidence of the geometric convergence achieved will be presented for several model problems.

1 Introduction

In [1], a broad overview is given of Monte Carlo learning algorithms that give rise to geometric convergence for quite general transport problems. In this paper, one of the methods outlined in [1] (based on correlated sampling applied sequentially) is described in much more detail. Similar detail is presented for the method based on importance sampling in [9]. We will consider the following type of transport problem and introduce a new Monte Carlo method to solve it:

$$\mu \frac{\partial \psi}{\partial x} = -\sigma_t\, \psi(x,\mu) + \frac{\sigma_s}{2}\int_{-1}^{1} \psi(x,\mu')\,d\mu', \quad -1 \le \mu \le 1,\; 0 < x < T, \tag{1}$$
$$\psi(0,\mu) = Q_0(\mu),\; 0 < \mu \le 1; \qquad \psi(T,\mu) = Q_T(\mu),\; -1 \le \mu < 0.$$

A physical description of this problem and its derivation can be found, for example, in [2] and [3]. These references also explore the solution, especially in the context of representation through eigenvalues and eigenfunctions. Here we only describe (Fig. 1) the physical phase space that corresponds to problem (1). In Figure 1, we depict a slab of thickness $T$ filled with a homogeneous material. Random walking particles (for example, neutrons or photons), whose source distributions are restricted to $x = 0$ and $x = T$ and are defined there by $Q_0(\mu)$ and $Q_T(\mu)$, respectively, are injected into the slab from the two boundaries.

Fig. 1. Phase space corresponding to equation (1)

While moving in the slab, the particles can be scattered or absorbed; these events are characterized by the total macroscopic cross section, $\sigma_t$, and the scattering cross section, $\sigma_s$. For later convenience, we define the absorption cross section by $\sigma_a = \sigma_t - \sigma_s$. The unknown function $\psi(x,\mu)$ is the particle flux: the expected number of particles passing the point $x$ along the direction making an angle $\arccos\mu$ with the positive $x$-axis. Our methodology for attacking problem (1) is described in Section 3. Basically, we first estimate the integral occurring on the right-hand side, called the scalar flux, by Monte Carlo methods and then solve the resulting ordinary differential equation for the full (vector flux) solution. The same idea can also be applied to more general anisotropic transport problems (see [1]). Monte Carlo methods can be useful in solving such problems, but only if the slow convergence associated with conventional applications of the method is overcome. Specifically, the usual type of probabilistic random walk simulation leads to the error estimate, for some constant $C$,

$$\text{error} \le \frac{C}{\sqrt{W}}, \tag{2}$$

where $W$ is the number of random walks used. This means, among other things, that each additional significant digit of information requires, on average, a hundredfold increase in the number of individual random walks simulated. Obviously, this is an unsatisfactory situation when a global solution is sought to great accuracy and when millions of random walks (utilizing hours or even days of the fastest computers) might be needed to achieve errors of perhaps 10%.

In 1962, John Halton [4] successfully solved certain matrix problems (which may be thought of as discretized versions of equation (1)) by applying Monte Carlo methods sequentially. Halton's idea was to solve the problem in successive stages, each of which can be estimated using conventional Monte Carlo methods. With sufficient care, the error $E_n$ after the $n$th stage can be bounded by the error $E_{n-1}$ obtained in the previous stage multiplied by a constant $\lambda(W)$ that depends on the number of random walks used; i.e.,

$$E_n \le \lambda(W)\, E_{n-1}. \tag{3}$$

In fact, often with only a relatively small number of random walks, one can achieve $\lambda(W) < 1$. Thus, after $n$ stages of such a process, the error can be estimated by

$$E_n \le \lambda(W)^n\, E_0, \tag{4}$$

thus achieving geometric, or exponential, convergence. In this case, additional digits of accuracy can be obtained with only linear, rather than exponential, increases in computer cost, thus offering enormous potential for improvement over conventional Monte Carlo implementations. In this paper, we apply this idea to the continuous transport problem (1), using a generalization of Halton's methods, concentrating here on the algorithms needed and the numerical results obtained. A probabilistic error analysis of this method can be found in [5].
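To make the staged idea concrete, the following sketch applies Halton-style sequential correction to a small matrix problem $x = Hx + b$ of the kind treated in [4]. This is our illustration, not code from the paper: the plain Monte Carlo solver, the uniform transition probabilities, and the stopping probability `p_stop` are all assumptions.

```python
import numpy as np

def mc_solve(H, b, n_walks, rng, p_stop=0.3, max_steps=50):
    """Crude plain Monte Carlo estimate of x = (I - H)^{-1} b via the
    Neumann series: one absorbing random walk per sample, scored with
    likelihood-ratio weights."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):                      # estimate each component separately
        acc = 0.0
        for _ in range(n_walks):
            state, weight, score = i, 1.0, b[i]
            for _ in range(max_steps):
                if rng.random() < p_stop:   # absorb the walk
                    break
                nxt = rng.integers(n)       # uniform transition to next state
                weight *= H[state, nxt] * n / (1.0 - p_stop)
                state = nxt
                score += weight * b[state]
            acc += score
        x[i] = acc / n_walks
    return x

def sequential_solve(H, b, n_stages, walks_per_stage, seed=0):
    """Halton-style sequential stages: each stage estimates, by plain Monte
    Carlo, the correction driven by the current residual (reduced source),
    so the error contracts by a roughly constant factor per stage."""
    rng = np.random.default_rng(seed)
    x = np.zeros(len(b))
    for _ in range(n_stages):
        r = b + H @ x - x                   # residual of x = Hx + b
        x = x + mc_solve(H, r, walks_per_stage, rng)
    return x
```

With a modest number of walks per stage the error typically shrinks by a roughly constant factor at each stage, mirroring (3) and (4), even though each stage by itself converges only at the $O(W^{-1/2})$ rate of (2).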

2 Sequential Strategy

First, we describe the sequential strategy for solving (1), which is similar to that used in [4] and [6] for matrix problems. Let $\psi_0(x,\mu)$ be an initial approximation of $\psi(x,\mu)$ obtained by any means, and set

$$\psi = \psi_0 + \Psi^0. \tag{5}$$

Substituting (5) into (1) produces a problem, called the 0th stage problem, for obtaining $\Psi^0 \equiv \Psi^0(x,\mu)$:

$$\mu \frac{\partial \Psi^0}{\partial x} = -\sigma_t\, \Psi^0(x,\mu) + \frac{\sigma_s}{2}\int_{-1}^{1}\Psi^0(x,\mu')\,d\mu' + S^0(x,\mu), \quad -1 \le \mu \le 1,\; 0 < x < T, \tag{6}$$
$$\Psi^0(0,\mu) = Q_0^0(\mu),\; 0 < \mu \le 1; \qquad \Psi^0(T,\mu) = Q_T^0(\mu),\; -1 \le \mu < 0,$$

where

$$S^0(x,\mu) = -\left(\mu\frac{\partial\psi_0}{\partial x} + \sigma_t\, \psi_0(x,\mu) - \frac{\sigma_s}{2}\int_{-1}^{1}\psi_0(x,\mu')\,d\mu'\right), \tag{7}$$
$$Q_0^0(\mu) = Q_0(\mu) - \psi_0(0,\mu), \qquad Q_T^0(\mu) = Q_T(\mu) - \psi_0(T,\mu).$$

An approximation to the solution of problem (6) can be obtained using conventional Monte Carlo methods. Assuming the approximate solution is $\tilde\Psi^0 \equiv \tilde\Psi^0(x,\mu)$, set

$$\psi = \psi_0 + \tilde\Psi^0 + \Psi^1. \tag{8}$$

Substituting (8) back into (1) produces a problem, called the 1st stage problem, describing $\Psi^1 \equiv \Psi^1(x,\mu)$:

$$\mu \frac{\partial \Psi^1}{\partial x} = -\sigma_t\, \Psi^1(x,\mu) + \frac{\sigma_s}{2}\int_{-1}^{1}\Psi^1(x,\mu')\,d\mu' + S^1(x,\mu), \quad -1 \le \mu \le 1,\; 0 < x < T, \tag{9}$$
$$\Psi^1(0,\mu) = Q_0^1(\mu),\; 0 < \mu \le 1; \qquad \Psi^1(T,\mu) = Q_T^1(\mu),\; -1 \le \mu < 0,$$

where the "reduced source" $S^1(x,\mu)$ satisfies

$$S^1(x,\mu) = S^0(x,\mu) - \left(\mu\frac{\partial\tilde\Psi^0}{\partial x} + \sigma_t\, \tilde\Psi^0(x,\mu) - \frac{\sigma_s}{2}\int_{-1}^{1}\tilde\Psi^0(x,\mu')\,d\mu'\right), \tag{10}$$
$$Q_0^1(\mu) = Q_0^0(\mu) - \tilde\Psi^0(0,\mu), \qquad Q_T^1(\mu) = Q_T^0(\mu) - \tilde\Psi^0(T,\mu).$$

We can again use conventional Monte Carlo methods to solve (9), obtaining an approximate solution $\tilde\Psi^1 \equiv \tilde\Psi^1(x,\mu)$. If we now assume that we have obtained all the approximations $\tilde\Psi^0, \tilde\Psi^1, \ldots, \tilde\Psi^{n-1}$ up to the $(n-1)$st stage, then to obtain the solution for the $n$th stage, set

$$\psi = \psi_0 + \tilde\Psi^0 + \tilde\Psi^1 + \cdots + \tilde\Psi^{n-1} + \Psi^n. \tag{11}$$

Then the $n$th stage problem describing $\Psi^n \equiv \Psi^n(x,\mu)$ is

$$\mu \frac{\partial \Psi^n}{\partial x} = -\sigma_t\, \Psi^n(x,\mu) + \frac{\sigma_s}{2}\int_{-1}^{1}\Psi^n(x,\mu')\,d\mu' + S^n(x,\mu), \quad -1 \le \mu \le 1,\; 0 < x < T, \tag{12}$$
$$\Psi^n(0,\mu) = Q_0^n(\mu),\; 0 < \mu \le 1; \qquad \Psi^n(T,\mu) = Q_T^n(\mu),\; -1 \le \mu < 0,$$

where

$$S^n(x,\mu) = S^{n-1}(x,\mu) - \left(\mu\frac{\partial\tilde\Psi^{n-1}}{\partial x} + \sigma_t\, \tilde\Psi^{n-1}(x,\mu) - \frac{\sigma_s}{2}\int_{-1}^{1}\tilde\Psi^{n-1}(x,\mu')\,d\mu'\right), \tag{13}$$
$$Q_0^n(\mu) = Q_0^{n-1}(\mu) - \tilde\Psi^{n-1}(0,\mu), \qquad Q_T^n(\mu) = Q_T^{n-1}(\mu) - \tilde\Psi^{n-1}(T,\mu).$$

We again solve this problem using conventional Monte Carlo methods to obtain $\tilde\Psi^n \equiv \tilde\Psi^n(x,\mu)$. This outlines our sequential strategy. Now assume that $\psi(x,\mu)$ is the exact solution of (1) and let

$$\psi_n \equiv \psi_0 + \tilde\Psi^0 + \tilde\Psi^1 + \cdots + \tilde\Psi^{n-1} + \tilde\Psi^n, \qquad E_n = \|\psi_n - \psi\|, \tag{14}$$

where the norm can be specified as desired. The iterative method we have described leads to the following inequality:

$$E_n \le \lambda(W)\, E_{n-1} + \beta(W) \tag{15}$$

for some $\lambda(W) < 1$ and a sufficiently small number $\beta(W)$. We should mention that the quantities $\lambda(W), \beta(W)$ also depend on other parameters defined by the characteristics of the continuous transport problem being solved, as we demonstrate in [5]. In the rest of this paper we will provide a more detailed description of the sequential algorithm outlined above and exhibit numerical evidence of the geometric convergence, as suggested by (15), that it achieves.

3 Basic Algorithms

We limit our discussion to solving the $n$th stage problem (12), since the same ideas can easily be applied to all the other stages. First, we decompose (12) into three different problems, each with a different source. The first problem concerns $\Psi_0^n(x,\mu)$ and involves the source $Q_0^n(\mu),\; 0 < \mu \le 1$, at $x = 0$:

$$\mu \frac{\partial \Psi_0^n}{\partial x} = -\sigma_t\, \Psi_0^n(x,\mu) + \frac{\sigma_s}{2}\int_{-1}^{1}\Psi_0^n(x,\mu')\,d\mu', \quad -1 \le \mu \le 1,\; 0 < x < T, \tag{16}$$
$$\Psi_0^n(0,\mu) = Q_0^n(\mu),\; 0 < \mu \le 1; \qquad \Psi_0^n(T,\mu) = 0,\; -1 \le \mu < 0;$$

the second concerns $\Psi_1^n(x,\mu)$ and involves the source $Q_T^n(\mu),\; -1 \le \mu < 0$, at $x = T$:

$$\mu \frac{\partial \Psi_1^n}{\partial x} = -\sigma_t\, \Psi_1^n(x,\mu) + \frac{\sigma_s}{2}\int_{-1}^{1}\Psi_1^n(x,\mu')\,d\mu', \quad -1 \le \mu \le 1,\; 0 < x < T, \tag{17}$$
$$\Psi_1^n(0,\mu) = 0,\; 0 < \mu \le 1; \qquad \Psi_1^n(T,\mu) = Q_T^n(\mu),\; -1 \le \mu < 0;$$

and the third concerns $\Psi_2^n(x,\mu)$ and involves the reduced internal source $S^n(x,\mu),\; -1 \le \mu \le 1,\; 0 < x < T$:

$$\mu \frac{\partial \Psi_2^n}{\partial x} = -\sigma_t\, \Psi_2^n(x,\mu) + \frac{\sigma_s}{2}\int_{-1}^{1}\Psi_2^n(x,\mu')\,d\mu' + S^n(x,\mu), \quad -1 \le \mu \le 1,\; 0 < x < T, \tag{18}$$
$$\Psi_2^n(0,\mu) = 0,\; 0 < \mu \le 1; \qquad \Psi_2^n(T,\mu) = 0,\; -1 \le \mu < 0.$$

It can easily be verified, according to the principle of superposition, that if we solve (16), (17) and (18) for $\Psi_0^n(x,\mu)$, $\Psi_1^n(x,\mu)$ and $\Psi_2^n(x,\mu)$, respectively, the solution of (12) can be obtained as

$$\Psi^n(x,\mu) = \Psi_0^n(x,\mu) + \Psi_1^n(x,\mu) + \Psi_2^n(x,\mu). \tag{19}$$

On the other hand, since these three problems are very similar, we only discuss (18); the other two can be similarly treated. To solve (18), consider the following problem describing a function $K(x,y;\mu,\nu)$ that depends on the parameters $0 < y < T$ and $-1 < \nu < 1$:

$$\mu \frac{\partial K}{\partial x} = -\sigma_t\, K(x,y;\mu,\nu) + \frac{\sigma_s}{2}\int_{-1}^{1} K(x,y;\mu',\nu)\,d\mu' + \delta(x-y)\,\delta(\mu-\nu), \quad -1 \le \mu \le 1,\; 0 < x < T, \tag{20}$$
$$K(0,y;\mu,\nu) = 0,\; 0 < \mu \le 1; \qquad K(T,y;\mu,\nu) = 0,\; -1 \le \mu < 0.$$

Thus, if $K(x,y;\mu,\nu)$ is the solution of (20), it can easily be verified that the solution of (18) can be expressed as

$$\Psi_2^n(x,\mu) = \int_{-1}^{1}\int_{0}^{T} K(x,y;\mu,\nu)\,S^n(y,\nu)\,dy\,d\nu. \tag{21}$$

Notice that (20) describes a very simple particle transport system in which particles start at $x = y$, their directions of initial motion always make an angle $\arccos\nu$ with the positive $x$-axis, and the "weight" of each particle is 1. This observation is very important when we estimate any integrals involving the function $K(x,y;\mu,\nu)$. Now consider the scalar flux in stage $n$,

$$\varphi_2^n(x) \equiv \int_{-1}^{1} \Psi_2^n(x,\mu)\,d\mu, \tag{22}$$

and assume that

$$\varphi_2^n(x) \approx \sum_{i=0}^{N-1} a_{2i}^n\, P_i\!\left(\frac{2x}{T} - 1\right), \tag{23}$$

where the functions $P_i\!\left(\frac{2x}{T}-1\right)$ are the Legendre polynomials of degree $i$, $0 \le x \le T$. The coefficients $a_{2i}^n$ can be estimated as follows. Using the orthogonality of the Legendre polynomials, we obtain

$$\begin{aligned} a_{2i}^n &\approx \frac{2i+1}{T}\int_0^T \varphi_2^n(x)\,P_i\!\left(\frac{2x}{T}-1\right)dx = \frac{2i+1}{T}\int_{-1}^{1}\!\int_0^T \Psi_2^n(x,\mu)\,P_i\!\left(\frac{2x}{T}-1\right)dx\,d\mu \\ &= \frac{2i+1}{T}\int_{-1}^{1}\!\int_0^T S^n(y,\nu)\,dy\,d\nu \int_{-1}^{1}\!\int_0^T K(x,y;\mu,\nu)\,P_i\!\left(\frac{2x}{T}-1\right)dx\,d\mu. \end{aligned} \tag{24}$$

Now each integral $\int_{-1}^{1}\int_0^T K(x,y;\mu,\nu)\,P_i\!\left(\frac{2x}{T}-1\right)dx\,d\mu$ will be estimated by Monte Carlo methods. There are many types of estimators that can achieve this, as described, for example, in [7]. Of these, the track length estimator (a random variable that simply sums distances traveled by all random walking particles over each track segment between successive collisions) seems the most useful here. However, the appearance of the function $P_i\!\left(\frac{2x}{T}-1\right)$ introduces some complications, since this function is not of constant sign over each track. It was for this purpose that the generalized track length estimator explored in [8] was developed. According to [8], with $W$ particles, we can obtain sample estimates $\tilde a_{2i}^n$ of the coefficients $a_{2i}^n$ by

$$\tilde a_{2i}^n = \frac{2i+1}{T}\int_{-1}^{1}\int_0^T S^n(y,\nu)\,dy\,d\nu \cdot \frac{1}{W}\sum_{w=1}^{W}\left[\sum_{L_k}\int_{L_k} P_i\!\left(\frac{2x}{T}-1\right)dx\right]_{(y,\nu)}, \tag{25}$$

where the integrals of $P_i$ are taken over each individual track segment that makes up every random walk. More detailed information about how these computations of $\tilde a_{2i}^n$ are carried out may be found in the Appendix. Using (25) and (23), an estimate $\tilde\varphi_2^n(x)$ of $\varphi_2^n(x)$ is obtained. Substituting this into (18) produces an ordinary differential equation for $\tilde\Psi_2^n(x,\mu)$, which can be solved to produce an estimate of $\Psi_2^n(x,\mu)$:

$$\mu \frac{\partial \tilde\Psi_2^n}{\partial x} = -\sigma_t\, \tilde\Psi_2^n(x,\mu) + \frac{\sigma_s}{2}\,\tilde\varphi_2^n(x) + S^n(x,\mu), \quad -1 \le \mu \le 1,\; 0 < x < T, \tag{26}$$
$$\tilde\Psi_2^n(0,\mu) = 0,\; 0 < \mu \le 1; \qquad \tilde\Psi_2^n(T,\mu) = 0,\; -1 \le \mu < 0.$$

Solving (26) exactly then gives

$$\tilde\Psi_2^n(x,\mu) = \begin{cases} \dfrac{\sigma_s}{2\mu}\displaystyle\int_0^x e^{-\sigma_t(x-y)/\mu}\,\tilde\varphi_2^n(y)\,dy + \dfrac{1}{\mu}\displaystyle\int_0^x e^{-\sigma_t(x-y)/\mu}\,S^n(y,\mu)\,dy, & \mu > 0, \\[2mm] -\dfrac{\sigma_s}{2\mu}\displaystyle\int_x^T e^{-\sigma_t(x-y)/\mu}\,\tilde\varphi_2^n(y)\,dy - \dfrac{1}{\mu}\displaystyle\int_x^T e^{-\sigma_t(x-y)/\mu}\,S^n(y,\mu)\,dy, & \mu < 0. \end{cases} \tag{27}$$

Combining the solutions of (16) and (17) with (27), the solution of (18), we obtain a Monte Carlo approximation of $\Psi^n(x,\mu)$:

$$\Psi^n(x,\mu) \approx \tilde\Psi_0^n(x,\mu) + \tilde\Psi_1^n(x,\mu) + \tilde\Psi_2^n(x,\mu) = \begin{cases} e^{-\sigma_t x/\mu}\,Q_0^n(\mu) + \dfrac{\sigma_s}{2\mu}\displaystyle\int_0^x e^{-\sigma_t(x-y)/\mu}\,\tilde\varphi^n(y)\,dy \\ \qquad + \dfrac{1}{\mu}\displaystyle\int_0^x e^{-\sigma_t(x-y)/\mu}\,S^n(y,\mu)\,dy, & \mu > 0, \\[2mm] e^{\sigma_t(T-x)/\mu}\,Q_T^n(\mu) - \dfrac{\sigma_s}{2\mu}\displaystyle\int_x^T e^{-\sigma_t(x-y)/\mu}\,\tilde\varphi^n(y)\,dy \\ \qquad - \dfrac{1}{\mu}\displaystyle\int_x^T e^{-\sigma_t(x-y)/\mu}\,S^n(y,\mu)\,dy, & \mu < 0, \end{cases} \tag{28}$$

with

$$\tilde\varphi^n(y) \equiv \tilde\varphi_0^n(y) + \tilde\varphi_1^n(y) + \tilde\varphi_2^n(y), \tag{29}$$

where, as with the solution of (18), $\tilde\Psi_0^n(x,\mu)$ and $\tilde\Psi_1^n(x,\mu)$ are the Monte Carlo approximate solutions of (16) and (17), respectively, while $\tilde\varphi_0^n(y)$ and $\tilde\varphi_1^n(y)$ are the approximate scalar fluxes for the same problems.

Remark. It can easily be checked that all $Q_0^n(\mu)$ and $Q_T^n(\mu)$ vanish except for $n = 0$.
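As a concrete illustration, the explicit solution formula (27) can be evaluated numerically once an estimate of the scalar flux is available as Legendre coefficients. The sketch below is ours, not the authors' code; the midpoint quadrature, the function names, and the `nq` parameter are assumptions.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def psi2_tilde(x, mu, phi_coef, S_of_y, sigma_t, sigma_s, T, nq=2000):
    """Evaluate eq. (27) by midpoint quadrature: the angular-flux correction
    driven by the estimated scalar flux (Legendre coefficients phi_coef,
    mapped onto [0, T]) and the reduced source S_of_y (a function of y)."""
    if mu > 0:                    # integrate over [0, x]
        dy = x / nq
        y = dy * (np.arange(nq) + 0.5)
    else:                         # integrate over [x, T]
        dy = (T - x) / nq
        y = x + dy * (np.arange(nq) + 0.5)
    # Both branches of (27) reduce to a positive, decaying kernel over 1/|mu|.
    kern = np.exp(-sigma_t * (x - y) / mu) / abs(mu)
    phi = legval(2.0 * y / T - 1.0, phi_coef)   # phi(y) from its expansion (23)
    return dy * np.sum(kern * (0.5 * sigma_s * phi + S_of_y(y)))
```

A quick sanity check: for a pure absorber ($\sigma_s = 0$) with constant unit source, (27) reduces to $1 - e^{-\sigma_t x/\mu}$ for $\mu > 0$, which the quadrature reproduces.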

4 Reduced Source Calculation

The main idea of the algorithm described in Sections 2 and 3 is that the $n$th stage involves estimating a correction term, $\Psi^n$, generated by a reduced source, $S^n$, that becomes smaller and smaller as $n$ increases. Unfortunately, the smallness of $S^n$, together with the fact that $S^n$ is not of constant sign as $x, \mu$ vary, produces serious computational difficulties. We now discuss some of these issues in more detail. From (13) we obtain

$$S^n(x,\mu) = S^{n-1}(x,\mu) - \left(\mu\frac{\partial\tilde\Psi^{n-1}}{\partial x} + \sigma_t\, \tilde\Psi^{n-1}(x,\mu) - \frac{\sigma_s}{2}\int_{-1}^{1}\tilde\Psi^{n-1}(x,\mu')\,d\mu'\right) = -\frac{\sigma_s}{2}\left(\tilde\varphi^{n-1}(x) - \int_{-1}^{1}\tilde\Psi^{n-1}(x,\mu')\,d\mu'\right), \tag{30}$$

where in the last equality we have used (26) for $\tilde\Psi_2^{n-1}$ and the analogous equations for $\tilde\Psi_0^{n-1}$ and $\tilde\Psi_1^{n-1}$ in the $(n-1)$st stage. We see that $S^n(x,\mu)$ does not depend on $\mu$; let $S^n(x) = S^n(x,\mu)$. To simplify, assume $\psi_0 \equiv 0$. Substituting (28) into (30) and making use of the remark above, we obtain for $n > 1$

$$S^n(x) = -\frac{\sigma_s}{2}\,\tilde\varphi^{n-1}(x) + \left(\frac{\sigma_s}{2}\right)^2 \int_0^1\!\!\int_0^T \frac{e^{-\sigma_t|x-x_1|/\mu_1}}{\mu_1}\,\tilde\varphi^{n-1}(x_1)\,dx_1\,d\mu_1 + \frac{\sigma_s}{2}\int_0^1\!\!\int_0^T \frac{e^{-\sigma_t|x-x_1|/\mu_1}}{\mu_1}\,S^{n-1}(x_1)\,dx_1\,d\mu_1 \tag{31}$$

and for $n = 1$

$$S^1(x) = -\frac{\sigma_s}{2}\,\tilde\varphi^{0}(x) + \left(\frac{\sigma_s}{2}\right)^2 \int_0^1\!\!\int_0^T \frac{e^{-\sigma_t|x-x_1|/\mu_1}}{\mu_1}\,\tilde\varphi^{0}(x_1)\,dx_1\,d\mu_1 + \frac{\sigma_s}{2}\left(\int_{-1}^{0} e^{\sigma_t(T-x)/\mu_1}\,Q_T(\mu_1)\,d\mu_1 + \int_0^1 e^{-\sigma_t x/\mu_1}\,Q_0(\mu_1)\,d\mu_1\right). \tag{32}$$

For completeness, we write down the following expression for S n(x) which can be obtained from (31) and (32) by recursion:

$$\begin{aligned} S^n(x) = {} & -\frac{\sigma_s}{2}\,\tilde\varphi^{n-1}(x) \\ & + \sum_{j=1}^{n-1}\left(\frac{\sigma_s}{2}\right)^{j+1} \int_0^1\!\!\int_0^T\!\cdots\int_0^1\!\!\int_0^T\, \prod_{k=1}^{j}\frac{e^{-\sigma_t|x_{k-1}-x_k|/\mu_k}}{\mu_k}\, \left[\tilde\varphi^{\,n-j}(x_j)-\tilde\varphi^{\,n-j-1}(x_j)\right] dx_j\,d\mu_j\cdots dx_1\,d\mu_1 \\ & + \left(\frac{\sigma_s}{2}\right)^{n+1} \int_0^1\!\!\int_0^T\!\cdots\int_0^1\!\!\int_0^T\, \prod_{k=1}^{n}\frac{e^{-\sigma_t|x_{k-1}-x_k|/\mu_k}}{\mu_k}\; \tilde\varphi^{\,0}(x_n)\,dx_n\,d\mu_n\cdots dx_1\,d\mu_1 \\ & + \left(\frac{\sigma_s}{2}\right)^{n} \int_0^1\!\!\int_0^T\!\cdots\int_0^1\!\!\int_0^T\, \prod_{k=1}^{n-1}\frac{e^{-\sigma_t|x_{k-1}-x_k|/\mu_k}}{\mu_k} \left[\int_{-1}^{0} e^{\sigma_t(T-x_{n-1})/\mu_n}\,Q_T(\mu_n)\,d\mu_n + \int_{0}^{1} e^{-\sigma_t x_{n-1}/\mu_n}\,Q_0(\mu_n)\,d\mu_n\right] dx_{n-1}\,d\mu_{n-1}\cdots dx_1\,d\mu_1, \end{aligned} \tag{33}$$

with the convention $x_0 \equiv x$.

Now we return to (31). Notice that by (23), $\tilde\varphi^{n-1}(x_1)$ is a linear combination of Legendre polynomials. In order to obtain $S^n(x)$, we need to compute the following integrals:

$$I_1 = \int_0^T e^{-\sigma_t|x-x_1|/\mu_1}\, P_i\!\left(\frac{2x_1}{T}-1\right) dx_1 \tag{34}$$

and

$$I_2 = \int_0^T e^{-\sigma_t|x-x_1|/\mu_1}\, S^{n-1}(x_1)\,dx_1. \tag{35}$$
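Since the closed form derived next turns out to be numerically delicate, an independent quadrature evaluation of (34) is useful as a cross-check. The following sketch is ours (the midpoint rule and the `nq` value are assumptions):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def I1_quad(i, x, mu1, sigma_t, T, nq=20000):
    """Midpoint-rule evaluation of eq. (34):
    I1 = int_0^T exp(-sigma_t |x - x1| / mu1) P_i(2 x1 / T - 1) dx1."""
    x1 = (np.arange(nq) + 0.5) * (T / nq)   # midpoints of nq cells on [0, T]
    c = np.zeros(i + 1)
    c[i] = 1.0                              # coefficient vector selecting P_i
    return (T / nq) * np.sum(np.exp(-sigma_t * np.abs(x - x1) / mu1)
                             * legval(2.0 * x1 / T - 1.0, c))
```

For $i = 0$ this must reproduce $\frac{\mu_1}{\sigma_t}\bigl(2 - e^{-\sigma_t x/\mu_1} - e^{-\sigma_t(T-x)/\mu_1}\bigr)$, which gives a quick sanity check of the kernel handling.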

For $I_1$, we have

$$\begin{aligned} I_1 = \frac{\mu_1}{\sigma_t}\Bigg[\, & 2\left( P_i\!\left(\tfrac{2x}{T}-1\right) + \left(\tfrac{2\mu_1}{\sigma_t T}\right)^{2} P_i''\!\left(\tfrac{2x}{T}-1\right) + \cdots + \left(\tfrac{2\mu_1}{\sigma_t T}\right)^{2[i/2]} P_i^{(2[i/2])}\!\left(\tfrac{2x}{T}-1\right) \right) \\ & - e^{-\sigma_t(T-x)/\mu_1}\left( P_i(1) + \tfrac{2\mu_1}{\sigma_t T}\, P_i'(1) + \cdots + \left(\tfrac{2\mu_1}{\sigma_t T}\right)^{i} P_i^{(i)}(1) \right) \\ & - e^{-\sigma_t x/\mu_1}\left( P_i(-1) - \tfrac{2\mu_1}{\sigma_t T}\, P_i'(-1) + \cdots + (-1)^i \left(\tfrac{2\mu_1}{\sigma_t T}\right)^{i} P_i^{(i)}(-1) \right) \Bigg] \\ = \frac{\mu_1}{\sigma_t}\Bigg[\, & 2\left( P_i\!\left(\tfrac{2x}{T}-1\right) + \left(\tfrac{2\mu_1}{\sigma_t T}\right)^{2} P_i''\!\left(\tfrac{2x}{T}-1\right) + \cdots + \left(\tfrac{2\mu_1}{\sigma_t T}\right)^{2[i/2]} P_i^{(2[i/2])}\!\left(\tfrac{2x}{T}-1\right) \right) \\ & - \left( e^{-\sigma_t(T-x)/\mu_1} + (-1)^i e^{-\sigma_t x/\mu_1} \right) \left( P_i(1) + \tfrac{2\mu_1}{\sigma_t T}\, P_i'(1) + \cdots + \left(\tfrac{2\mu_1}{\sigma_t T}\right)^{i} P_i^{(i)}(1) \right) \Bigg], \end{aligned} \tag{36}$$

where the second equality used the following expressions for $P_i^{(j)}(1)$ and $P_i^{(j)}(-1)$:

$$P_i^{(j)}(1) = \frac{(i+j)!}{2^j\, j!\,(i-j)!}, \qquad P_i^{(j)}(-1) = (-1)^{i+j}\, P_i^{(j)}(1). \tag{37}$$

It can be checked that the absolute values of $P_i^{(j)}(1)$ and $P_i^{(j)}(-1)$ increase rapidly with $i$, but the value of $I_1$ is very small for any $x$. This makes it difficult to represent $I_1$ on a computer for large $i$. Indeed, our experience shows that for $i > 11$, formula (36) cannot be used to represent $I_1$ accurately unless prohibitively many digits of precision are carried throughout the calculation. In this case, using quadrature is an alternative, but this can account for a great deal of computer time. As for $I_2$, quadrature may be a good choice. In addition, we can try to expand each $S^n(x)$ as a linear combination of Legendre polynomials:

$$S^n(x) \approx \sum_{i=0}^{N-1} s_i^n\, P_i\!\left(\frac{2x}{T}-1\right). \tag{38}$$

If we then assume

$$\varphi^n(x) \approx \sum_{i=0}^{N-1} a_i^n\, P_i\!\left(\frac{2x}{T}-1\right), \tag{39}$$

then using (31) we obtain

$$s_i^n = -\frac{\sigma_s}{2}\,\tilde a_i^{n-1} + \frac{\sigma_s}{2}\cdot\frac{2i+1}{T}\sum_{j=0}^{N-1}\left(\frac{\sigma_s}{2}\,\tilde a_j^{n-1} + s_j^{n-1}\right) I_{ij}, \tag{40}$$

where

$$I_{ij} = \int_0^1 \frac{d\mu_1}{\mu_1} \int_0^T P_i\!\left(\frac{2x}{T}-1\right) \left[\int_0^T e^{-\sigma_t|x-x_1|/\mu_1}\, P_j\!\left(\frac{2x_1}{T}-1\right) dx_1\right] dx. \tag{41}$$

Despite their complicated appearance, computing the $I_{ij}$ is no more difficult than computing $I_1$: the integral with respect to $x_1$ is handled in the same way as in (36), while the integral with respect to $x$ is treated using Rodrigues' formula:

$$\int_{-1}^{1} f(t)\,P_i(t)\,dt = \frac{1}{2^i\, i!}\int_{-1}^{1} \frac{d^i f(t)}{dt^i}\,\left(1-t^2\right)^i dt. \tag{42}$$
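Two pieces of this section are easy to code directly: the endpoint derivatives (37), whose rapid growth with $i$ is the root of the precision problem, and the coefficient recursion (40) once the matrix $I_{ij}$ has been precomputed. The Python sketch below is ours, not the authors' code; the vectorized form and all names are assumptions.

```python
import numpy as np
from math import factorial

def legendre_deriv_at_one(i, j):
    """P_i^(j)(1) = (i+j)! / (2^j j! (i-j)!), eq. (37); zero when j > i."""
    if j > i:
        return 0
    return factorial(i + j) // (2**j * factorial(j) * factorial(i - j))

def update_source_coeffs(a_prev, s_prev, Iij, sigma_s, T):
    """Eq. (40): s_i^n = -(sigma_s/2) a_i^{n-1}
       + (sigma_s/2)(2i+1)/T * sum_j ((sigma_s/2) a_j^{n-1} + s_j^{n-1}) I_ij,
    with a_prev, s_prev the previous-stage coefficient vectors and Iij the
    precomputed N-by-N matrix of eq. (41)."""
    c = 0.5 * sigma_s
    i = np.arange(len(a_prev))
    return -c * a_prev + c * (2 * i + 1) / T * (Iij @ (c * a_prev + s_prev))
```

For example, $P_{11}^{(11)}(1) = 21!! \approx 1.37\times 10^{10}$, while $I_1$ itself stays small for every $x$; it is exactly this cancellation of large terms that defeats double precision for $i > 11$.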

Application of these refinements in the computation yields the numerical evidence (shown in the next section) of clear geometric convergence. However, we do not claim to have optimized the calculation with respect to error per unit of computation. Our goal in this paper was the more modest one of developing an effective, geometrically convergent algorithm using correlated sampling methods applied sequentially.

5 Numerical Results

The numerical results we present next demonstrate clear evidence of geometric convergence. Figure 2 contrasts the $O(W^{-1/2})$ rate of convergence of plain (nonsequential) Monte Carlo, where $W$ is the number of random walks simulated, with the much more rapid convergence of our sequential algorithm. In Figures 2 and 3, the convergence shown was obtained by fitting (in a least squares sense) straight lines to the data provided by each algorithm. This was done in order to provide useful estimates of the approximate rates of convergence. In Figure 3, we solved problem (1) for three different cases, each with a different absorption rate, the ratio $\sigma_a/\sigma_t$ (the scattering rate is $1 - \sigma_a/\sigma_t = \sigma_s/\sigma_t$). The convergence rates are measured in terms of the residual, which is the difference of the two sides of equation (1) evaluated at the approximate solution. For each problem we chose $T = 1$ and $N$, the order of the truncated Legendre expansion, equal to 11. Even though the different slopes in Figure 3 indicate different convergence rates for the sequential correlated sampling method, they all show clear geometric convergence. In our case, the values of $\lambda(W)$ appearing in (3) are approximately 0.033 for scattering rate ($\sigma_s/\sigma_t$) 0.1, 0.184 for scattering rate 0.5, and 0.288 for scattering rate 0.9.

Fig. 2. Comparison of Sequential Monte Carlo vs. Plain Monte Carlo: log10(residual) vs. number of random walks (in thousands), for T=1.0, Q0=1.0, coef.=11, scat.rate=50%, TestPoint=(0.75,0.75).

Fig. 3. Geometric convergence for three different transport problems: log10(residual) vs. stage, for scattering rates 10%, 50%, and 90%, with T=1.0, Q0=1.0, coef.=11, R.W./stage=20,000, TestPoint=(0.75,0.75).

6 Summary

We have extended some of the ideas presented in [4] and [6] to continuous problems described by certain kinds of integral equations. The key idea is to expand the solution of the problem as a linear combination of appropriately defined basis functions, truncate the sum, and estimate each coefficient by a sequential algorithm. Clear evidence of geometric convergence is obtained when sufficient care is taken to avoid some of the numerical difficulties that were encountered when a naive choice of basis functions was made. A proof of the geometric convergence is provided in a companion paper [5]. A completely different method, based on an adaptive application of importance sampling, can be found in [9]. In that reference, the importance sampling method is applied to the same transport problem treated here, with equally good results in terms of geometric convergence. However, the inherently more complicated nature of the importance sampling implementation requires more computing time per random walk than the method described here based on correlated sampling. Work continues on extending these ideas to more general transport problems, on making more efficient choices of basis functions, and on refining the error analysis presented in [5] for the method.

References

1. Spanier, J.: Geometrically Convergent Learning Algorithms for Global Solutions of Transport Problems. This volume.
2. Bell, G. I., Glasstone, S.: Nuclear Reactor Theory. Krieger Pub. Co., 1970.
3. Case, K. M., Zweifel, P. F.: Linear Transport Theory. Addison-Wesley Pub. Co., 1967.
4. Halton, J.: Sequential Monte Carlo. Proc. Camb. Phil. Soc. 58 (1962) 57-78.
5. Kong, R., Spanier, J.: Error Analysis of Sequential Monte Carlo Methods for Transport Problems. This volume.
6. Li, L., Spanier, J.: Approximation of Transport Equations by Matrix Equations and Sequential Sampling. Monte Carlo Methods and Applications 3 (1997) 171-198.
7. Spanier, J., Gelbard, E. M.: Monte Carlo Principles and Neutron Transport Problems. Addison-Wesley Pub. Co., 1969.
8. Spanier, J.: Monte Carlo Methods for Flux Expansion Solutions of Transport Problems. To appear in Nuclear Science and Engineering.
9. Lai, Y., Spanier, J.: Adaptive Importance Sampling Algorithms for Transport Problems. This volume.

7 Appendix: Monte Carlo Estimation of $a_{2i}^n$

There are many ways to estimate $a_{2i}^n$ by Monte Carlo. Here we describe only one such method, based on the composite midpoint rule quadrature formula in $x$ and $\mu$. First, we divide the interval $[-1,1]$ into $M_\mu$ equal subdivisions, each of length $\Delta\mu = \frac{2}{M_\mu}$, and $[0,T]$ into $M_x$ equal subdivisions of length $\Delta x = \frac{T}{M_x}$:

$$-1 = \mu_0 < \mu_1 < \mu_2 < \cdots < \mu_{M_\mu} = 1, \qquad 0 = x_0 < x_1 < x_2 < \cdots < x_{M_x} = T, \tag{43}$$

and let the midpoints of the subrectangles so defined be denoted by $(y_j, \nu_m)$:

$$-1 = \mu_0 < \nu_1 < \mu_1 < \nu_2 < \mu_2 < \cdots < \mu_{M_\mu} = 1, \qquad 0 = x_0 < y_1 < x_1 < y_2 < x_2 < \cdots < x_{M_x} = T. \tag{44}$$

The approximation used for (25) is

$$\begin{aligned} \tilde a_{2i}^n &= \frac{2i+1}{T}\sum_{m=1}^{M_\mu}\sum_{j=1}^{M_x} \int_{\mu_{m-1}}^{\mu_m}\!\!\int_{x_{j-1}}^{x_j} S^n(y,\nu)\,dy\,d\nu \cdot \frac{1}{W}\sum_{w=1}^{W}\left[\sum_{L_k}\int_{L_k} P_i\!\left(\tfrac{2x}{T}-1\right)dx\right]_{(y,\nu)} \\ &\approx \frac{2i+1}{T}\sum_{m=1}^{M_\mu}\sum_{j=1}^{M_x} \int_{\mu_{m-1}}^{\mu_m}\!\!\int_{x_{j-1}}^{x_j} S^n(y,\nu)\,dy\,d\nu \cdot \frac{1}{W}\sum_{w=1}^{W}\left[\sum_{L_k}\int_{L_k} P_i\!\left(\tfrac{2x}{T}-1\right)dx\right]_{(y_j,\nu_m)}. \end{aligned} \tag{45}$$

Notice that $(y,\nu)$ has been replaced by $(y_j,\nu_m)$, so that the two integrals appearing in the last approximate equality of (45) can be carried out separately. Let

$$A_{jm} = \frac{2i+1}{T}\int_{\mu_{m-1}}^{\mu_m}\int_{x_{j-1}}^{x_j} S^n(y,\nu)\,dy\,d\nu. \tag{46}$$

Then we have

$$\tilde a_{2i}^n = \sum_{m=1}^{M_\mu}\sum_{j=1}^{M_x} A_{jm}\cdot\frac{1}{W}\sum_{w=1}^{W}\left[\sum_{L_k}\int_{L_k} P_i\!\left(\tfrac{2x}{T}-1\right)dx\right]_{(y_j,\nu_m)}, \tag{47}$$

where we retain the notation $\tilde a_{2i}^n$ in order not to introduce too many symbols. The bracketed sum in (47) is then estimated using the generalized track length estimator. In order to compute (47) we introduce three groups of pseudorandom numbers, $r_{\mathrm{track}}$, $r_{\mathrm{scatt}}$ and $r_{\mathrm{angle}}$, each uniformly distributed in $[0,1]$ and used as follows:

$r_{\mathrm{track}}$: used to decide the length of each track segment;
$r_{\mathrm{scatt}}$: used to decide if the particle is absorbed or scattered when it makes a collision;
$r_{\mathrm{angle}}$: used to decide the cosine of the angle between each track direction and the $x$-axis.

In addition, let $l_s$ denote the starting $x$-coordinate of each track and $l_e$ the ending $x$-coordinate of each track. We describe our strategy as a computer algorithm. First, initialize $\tilde a_{2i}^n$:

$$\tilde a_{2i}^n = 0. \tag{48}$$

Let $m = 1$ and $j = 1$, so we have $\nu = \nu_1$. For the first track $L_1$, set $l_s = y_1$. For the first track, we only need to sample $r_{\mathrm{track}}$ and obtain

$$l_1 = -\frac{\nu \ln(r_{\mathrm{track}})}{\sigma_t}, \tag{49}$$

$$l_e = \begin{cases} \min\{l_s + l_1,\, T\}, & \nu \ge 0, \\ \max\{l_s + l_1,\, 0\}, & \nu < 0. \end{cases} \tag{50}$$

Then

$$\tilde a_{2i}^n = \tilde a_{2i}^n + \frac{A_{11}}{W}\int_{l_s}^{l_e} P_i\!\left(\tfrac{2x}{T}-1\right)dx. \tag{51}$$

We next check to see if the particle left the interval $[0,T]$ (by this we mean $l_e = T$ or $l_e = 0$). If so, we terminate this random walk and proceed to the next one. If not, we generate an $r_{\mathrm{scatt}}$ to determine if the particle is absorbed ($r_{\mathrm{scatt}} < \frac{\sigma_a}{\sigma_t}$). If so, we terminate this random walk and proceed to the next one. If not, we set

$$l_s = l_e \tag{52}$$

to obtain the starting point of the next track $L_2$. At this point, we need to sample an $r_{\mathrm{angle}}$ to decide the direction cosine $\mu_2 = \cos(\text{angle})$ of motion for the next track and set

$$\mu_2 = 2 r_{\mathrm{angle}} - 1. \tag{53}$$

Then we generate an $r_{\mathrm{track}}$ and set

$$l_2 = -\frac{\mu_2 \ln(r_{\mathrm{track}})}{\sigma_t}, \tag{54}$$

which is the projection of $L_2$ onto the $x$-axis, and

$$l_e = \begin{cases} \min\{l_s + l_2,\, T\}, & \mu_2 \ge 0, \\ \max\{l_s + l_2,\, 0\}, & \mu_2 < 0. \end{cases} \tag{55}$$

At this point, we need to check whether $\mu_2$ is positive (forward track) or negative (backward track). If $\mu_2 \ge 0$, we set

$$\tilde a_{2i}^n = \tilde a_{2i}^n + \frac{A_{11}}{W}\int_{l_s}^{l_e} P_i\!\left(\tfrac{2x}{T}-1\right)dx. \tag{56}$$

Otherwise, we set

$$\tilde a_{2i}^n = \tilde a_{2i}^n + \frac{A_{11}}{W}\int_{l_e}^{l_s} P_i\!\left(\tfrac{2x}{T}-1\right)dx. \tag{57}$$

For the third track $L_3$, we set $l_s = l_e$ and proceed as above until the particle disappears, either by leaving the slab or being absorbed. After exhausting all $W$ particles, we consider all other $(j,m)$ pairs. This completes the computations of (25).