Published in "Integrating New Technologies into the Clinic: Monte Carlo and Image-Guided Radiation Therapy," Proceedings of the 2006 AAPM Summer School, pp. 71-91. Medical Physics Publishing, Madison, WI.

Monte Carlo Simulations: Efficiency Improvement Techniques and Statistical Considerations

Daryoush Sheikh-Bagheri, Ph.D.,1 Iwan Kawrakow, Ph.D.,2 Blake Walters, M.Sc.,2 and D. W. O. Rogers, Ph.D.3

1 Department of Radiation Oncology, Allegheny General Hospital, Pittsburgh, Pennsylvania
2 Ionizing Radiation Standards, National Research Council, Ottawa, Ontario, Canada
3 Physics Department, Carleton University, Ottawa, Ontario, Canada

Contents
  Introduction
  The Metrics of Efficiency
  The Condensed History Technique (CHT)
  Efficiency Improvement Techniques Used in Treatment Head Simulations
    Range Rejection and Transport Cutoffs
    Splitting and Russian Roulette
    Uniform Bremsstrahlung Splitting (UBS)
    Selective Bremsstrahlung Splitting (SBS)
    Directional Bremsstrahlung Splitting (DBS)
  Efficiency Improvement Techniques Used in Patient Simulations
    Macro Monte Carlo
    History Repetition
    Boundary-Crossing Algorithms
    Precalculated Interaction Densities
    Woodcock Tracing
    Photon Splitting Combined with Russian Roulette
    Simultaneous Transport of Particle Sets (STOPS)
    Quasi-Random Sequences
    Correlated Sampling
  Statistical Considerations
  Summary
  References

Introduction

A large number of general-purpose Monte Carlo (MC) systems have been developed for simulating the transport of electrons and photons. One of the most popular, which has been extensively used and benchmarked against experimental measurements in a large number of medical physics situations, is the EGS code system (Ford and Nelson 1978; Nelson, Hirayama, and Rogers 1985; Kawrakow and Rogers 2000).

Other code systems, such as ITS (Halbleib and Melhorn 1984; Halbleib 1989; Halbleib et al. 1992), MCNP (Briesmeister 1986, 1993; Brown 2003), PENELOPE (Baró et al. 1995), and GEANT4 (Agostinelli et al. 2003), are also popular among their user groups. The EGS and ITS/ETRAN systems are roughly of the same efficiency for calculations when no variance reduction techniques are used, whereas the other systems tend to be considerably slower. For special-purpose applications the use of sophisticated variance reduction techniques has made some of these codes substantially more efficient than others. For example, the BEAMnrc code (Rogers, Walters, and Kawrakow 2005) is an optimized EGSnrc user-code for modeling radiotherapy accelerators that employs several techniques which significantly enhance its overall efficiency. Nonetheless, it is still not fast enough for routine treatment planning purposes without a substantial farm of computers. Thus, to make routine clinical use a possibility, there have been a number of attempts to develop high-efficiency MC codes. Among the best known of such codes are the Macro Monte Carlo (MMC) code (Neuenschwander and Born 1992; Neuenschwander, Mackie, and Reckwerdt 1995); the PEREGRINE code (Schach von Wittenau et al. 1999; Hartmann Siantar et al. 2001); Voxel Monte Carlo (VMC/xVMC) (Kawrakow, Fippel, and Friedrich 1996; Kawrakow 1997; Fippel 1999; Kawrakow and Fippel 2000b; Fippel et al. 2000); VMC++ (Kawrakow and Fippel 2000a; Kawrakow 2001); MCDOSE (Ma et al. 2000); the Monte Carlo Vista (MCV) code system (Siebers et al. 2000); the Dose Planning Method (DPM) (Sempau, Wilderman, and Bielajew 2000); and other codes (Keall and Hoban 1996a,b; Wang, Chui, and Lovelock 1998).
In another chapter, "Monte Carlo Methods for Accelerator Simulation and Photon Beam Modeling," Ma and Sheikh-Bagheri have discussed the effects of various linac modeling parameters, such as the specifications of the initial electron beam-on-target and the accelerator components, on the accuracy of MC-calculated dose distributions. Our discussion of accuracy assumes all such influencing factors are properly addressed. In this chapter we discuss the fundamental methods of improving the efficiency of MC simulations that are used both in therapy beam simulations and in simulations of the dose distribution in the patient. Throughout the chapter we explicitly distinguish between techniques that do not alter the physics in any way when they increase the efficiency—i.e., true variance reduction techniques (VRT)—and techniques that achieve the improved efficiency through the use of approximations; i.e., approximate efficiency improving techniques (AEIT).

The Metrics of Efficiency

The efficiency, ε, of a Monte Carlo calculation is defined as

    ε = 1 / (σ²T),

where σ² is an estimate of the variance on the quantity of interest and T is the CPU time required to obtain this variance. The goal is to reduce the time it takes to obtain a sufficiently small statistical uncertainty σ on the quantity of interest. The shorter the time needed, the higher the efficiency of the simulation and vice versa. The time T for the simulation is proportional to the number N of statistically independent particles. At the same time σ² decreases as 1/N according to the central limit theorem (Lux and Koblinger 1991). As a result, the product T·σ² is a constant, and the efficiency ε = 1/(σ²T) expresses how fast a certain simulation algorithm can calculate a given quantity of interest at a desired level of statistical accuracy.

The above metric does not define how to calculate the variance of the quantity of interest. In the case of MC simulations related to radiation treatment planning (RTP), one is typically interested in a distributed quantity, such as the three-dimensional (3D) dose distribution in the patient or the spectrum or fluence of particles emerging from the treatment head of the linear accelerator. One therefore must define a measure of the overall uncertainty in order to evaluate the efficiency of a particular simulation algorithm. Rogers and Mohan (2000) proposed to use an average uncertainty as a measure of the uncertainty of a MC patient calculation, given by:

    σ² = (1/n) Σᵢ (ΔDᵢ/Dᵢ)²,  i = 1, …, n,

where Dᵢ is the dose in the i'th voxel, ΔDᵢ is its statistical uncertainty, and the summation runs over all voxels (n) with a dose greater than 50% of the maximum. Using the above ICCR (International Conference on the Use of Computers in Radiation Therapy) benchmark criteria, the most commonly used MC codes in radiotherapy applications were compared against each other (Chetty et al. 2006) and the compilation of results is shown in Table 1. The differences in speed are primarily attributable to how aggressively the various VRTs are employed in the different codes, the inherent speeds of the transport algorithms, and the efficiency of the geometry packages. Before we get into a detailed discussion of VRTs, we first discuss the single most important technique that has made MC simulations of electron transport possible on present-day computers: the condensed history technique.
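Before moving on, the two efficiency metrics above can be illustrated in a few lines of Python. This is a sketch only; the dose and uncertainty arrays are hypothetical, not from any benchmark.

```python
import math

def average_relative_uncertainty(doses, uncertainties):
    """ICCR-style metric: RMS of DeltaD_i/D_i over all voxels whose dose
    exceeds 50% of the maximum dose."""
    d_max = max(doses)
    terms = [(u / d) ** 2
             for d, u in zip(doses, uncertainties) if d > 0.5 * d_max]
    return math.sqrt(sum(terms) / len(terms))

def efficiency(sigma, cpu_time_s):
    """epsilon = 1/(sigma^2 * T): halving sigma at fixed T quadruples epsilon."""
    return 1.0 / (sigma ** 2 * cpu_time_s)

# Hypothetical 4-voxel "distribution"; the 40.0 voxel is below the 50% cut.
sigma = average_relative_uncertainty([100.0, 90.0, 80.0, 40.0],
                                     [1.0, 0.9, 1.6, 2.0])
```

Comparing efficiency(sigma, T) for two codes run to the same σ then reduces to comparing their CPU times, which is how the timing column of Table 1 should be read.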

Table 1. Summary of Timing and Accuracy Results from the ICCR Benchmark. Timing comparisons were performed using 6 MV photons and a 10×10 cm2 field size, and those for the accuracy test using 18 MV photons and a 1.5×1.5 cm2 field size, as detailed in the ICCR benchmark (Rogers and Mohan 2000). All times have been scaled to the time it would take running on a single Pentium IV, 3 GHz processor. Readers should be aware that the timing results, as well as the method used to scale the times, are subject to large uncertainties due to differences in compilers, memory size, cache size, etc. (Reprinted from Chetty et al. 2006 with permission from AAPM.)

Monte Carlo code                   Time estimate (minutes)   % max. diff. relative to EGS4/PRESTA/DOSXYZ
EGS4/PRESTA/DOSXYZ                 42.9                      0, benchmark calculation
VMC++                              0.9                       +1
MCDOSE (modified EGS4/PRESTA)      1.6                       +1
MCV (modified EGS4/PRESTA)         21.8                      +1
RT_DPM (modified DPM)              7.3                       +1
MCNPX                              60.0                      max. diff. of 8% at Al/lung interface (on average +1% agreement)
Nomos (PEREGRINE)                  43.3*                     +1*
GEANT 4 (4.6.1)                    193.3**                   ±1 for homogeneous water and water/air interfaces**

*Note that the timing for the PEREGRINE code also includes the sampling from a correlated-histogram source model and transport through the field-defining collimators.
**See Poon and Verhaegen (2005) for further details.

The Condensed History Technique (CHT)

In the RTP energy range, a typical electron or positron, together with the secondary particles it sets in motion, undergoes on the order of 10^6 elastic and inelastic collisions before it is locally absorbed. It is therefore impossible to simulate each individual collision, as this would result in prohibitively long simulation times. To overcome this difficulty, Berger (1963) introduced the condensed history technique (CHT). The CHT makes use of the fact that most electron interactions result in extremely small changes in energy and/or direction and thus groups them into single "steps" that account for the aggregate effects of scattering on the path of the electron. All general-purpose MC packages, such as the original code by Berger called ETRAN (Berger 1963; Seltzer 1988), ITS (Halbleib and Melhorn 1984; Halbleib 1989; Halbleib et al. 1992), MCNP (Briesmeister et al. 1993) (which both use the ETRAN electron transport algorithms), EGS4 (Nelson, Hirayama, and Rogers 1985), EGS4 with PRESTA (Bielajew and Rogers 1987), EGSnrc (Kawrakow 2000; Kawrakow and Rogers 2000), and PENELOPE (Salvat et al. 1996), use the CHT in one way or another. The CHT introduces an artificial parameter called the "step-size." It is well known that for many implementations of the CHT the results depend on the choice of the step-size (Rogers 1984; Bielajew and Rogers 1987; Seltzer 1991; Rogers 1993). The CHT is therefore an AEIT according to the definition adopted earlier in this chapter. Although every MC algorithm uses the CHT for charged particle transport, it is rarely considered as a technique to improve the efficiency of MC simulations. Yet the two main aspects of a CHT implementation, (1) the "electron-step algorithm," also called "transport mechanics," and (2) the boundary-crossing algorithm, very strongly influence the simulation speed and accuracy. For a more detailed discussion of the condensed history technique the reader is referred to the original Berger work (Berger 1963); the article by Larsen (1992), where a formal mathematical proof is provided that any CHT implementation converges to the correct result in the limit of short steps; the paper by Kawrakow

and Bielajew (1998), which gives a detailed theoretical comparison between different electron-step algorithms, and to Kawrakow (2000a), where the various details of a CHT implementation and their influence on the accuracy are investigated. The article by Siebers, Keall, and Kawrakow (2005) discusses CHT details relevant for fast algorithms used in RTP class simulations.

Efficiency Improvement Techniques Used in Treatment Head Simulations

Range Rejection and Transport Cutoffs

Use of AEITs, such as range rejection and transport cutoffs, can improve the efficiency of treatment head simulations substantially without significantly changing the results (Rogers et al. 1995). In range rejection, an electron's history is terminated whenever its residual range is so low that it cannot escape from the current region or reach the region of interest. Since this ignores the possible creation of bremsstrahlung or annihilation photons while the electrons (negatrons or positrons) slow down, it is not an unbiased technique. However, as long as it is only applied to electrons below a certain energy threshold, it has been shown to have little effect on the results in many situations. Similarly, by increasing the low-energy cutoff for electron transport one can save a lot of time, but this may have an effect on the dose distribution if too high a threshold is used. Playing Russian roulette with particles at energies below a relatively high transport cutoff, or with particles that would be range-rejected, is a comparable variance reduction technique for reducing the simulation time. However, its implementation is typically more difficult, and this has favored the simpler use of range rejection and high transport cutoffs in situations where it is easy to demonstrate that the resulting error is sufficiently small. The particle production and transport cutoff energies and the threshold energy for doing range rejection must all be carefully investigated to avoid significant errors in the results of the simulations. In order to determine an optimum value for the range-rejection cutoff in linear accelerator (linac) photon beam simulations, Sheikh-Bagheri et al. (2000) modified the BEAM code (Rogers et al. 1995) to allow tagging of bremsstrahlung production anywhere outside the target.
They concluded that outside the target, values of the upper energy for doing range rejection of 0.6 MeV for 4 MV beams, and 1.5 MeV for 6 MV and higher energy beams, provide the largest savings in central processing unit (CPU) time (about a factor of 3 for the 10 and 20 MV beams studied), with negligible (less than 0.2%) underestimation of the calculated photon fluence from the linac. In the following sections we discuss the much larger increases in efficiency that can be achieved for photon beams by employing true VRTs, although for electron beams the AEITs may still be the best options available.
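To make the range-rejection test concrete, here is a minimal sketch in Python. The function and parameter names are illustrative, not BEAM's actual variables.

```python
def range_reject(energy_mev, residual_range_cm, dist_to_region_boundary_cm,
                 esave_mev):
    """Terminate a charged-particle history if its energy is below the
    range-rejection threshold (an ESAVE-like parameter) AND its residual
    range is too short to reach the nearest region boundary. Discarding
    the particle also discards any bremsstrahlung it might still have
    emitted, which is why this is an AEIT rather than a true VRT."""
    return (energy_mev < esave_mev
            and residual_range_cm < dist_to_region_boundary_cm)

# A 0.5 MeV electron 1 cm from the boundary with 0.2 cm of range left is
# terminated when the threshold is 0.6 MeV; a 2 MeV electron never is.
```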

Splitting and Russian Roulette

Two very commonly used VRTs are splitting and Russian roulette, which were both originally proposed by J. von Neumann and S. Ulam (Kahn 1956). These are especially useful in simulating an accelerator treatment head (Rogers et al. 1995; Sheikh-Bagheri 1999; Kawrakow, Rogers, and Walters 2004). In the various forms of bremsstrahlung splitting (see Figure 1), each time an electron undergoes a bremsstrahlung interaction, a large number of secondary photons with lower weights are set in motion, the number possibly depending on a variety of factors related to the likelihood of them being directed toward the field of interest (FOI). The energy of the electron is reduced by an amount equal to the energy of one of the emitted photons, which is in violation of conservation of energy on an individual-interaction basis, but results in correct fluctuations in energy loss for electrons and correct expectation values for photon energy and angular distributions. Splitting can save a large amount of time because photon transport is fast, whereas it takes a long time to track an electron in the target. Thus, splitting bremsstrahlung interactions makes optimal use of each electron track. If the number of photons created is selected to minimize those that can never reach the patient plane, then there is a further time saving. Russian roulette can be played whenever a particle resulting from a class of events is of little interest. The low-interest particles are eliminated with a given probability but, to ensure an unbiased result, the weight of the surviving particles is increased by the inverse of that probability. A common example is to play Russian roulette with secondary electrons created in a photon beam.

Figure 1. A simplistic schematic of the splitting routines discussed in this chapter for linac modeling. Without bremsstrahlung splitting many electron tracks (red dashed curved lines) have to be simulated to get a single photon emitted toward the field of interest (FOI). In uniform bremsstrahlung splitting (UBS) (Rogers et al. 1995) any bremsstrahlung event leads to the sampling and transport of N (a constant splitting number) photons, increasing the likelihood of emission toward the FOI. Selective Bremsstrahlung Splitting (SBS) (Sheikh-Bagheri 1999) treats the splitting number as a variable which is calculated as a function of the probability of bremsstrahlung emission into the FOI, and transports all (black arrows) sampled photons. In Directional Bremsstrahlung Splitting (DBS) (Kawrakow, Rogers, and Walters 2004) the photons emitted toward the FOI are transported, but the many that are not emitted toward the FOI are subjected to a game of Russian roulette that many will not survive (gray dotted arrows) and only the few that survive get transported, all leading to further increases in efficiency. (Note that the schematic ignores, for the sake of illustration, that bremsstrahlung emission at high electron energies and shallow depths is more intense and more forward peaked.)
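The weight bookkeeping that makes Russian roulette unbiased is worth making explicit. A minimal sketch, with a hypothetical dictionary-based particle representation:

```python
import random

def russian_roulette(particles, survival_prob, rng=random.random):
    """Eliminate low-interest particles with probability 1 - p; boost the
    weight of each survivor by 1/p so the expected total weight, and hence
    any estimated quantity, is unchanged (this is what makes the game an
    unbiased VRT)."""
    survivors = []
    for particle in particles:
        if rng() < survival_prob:
            kept = dict(particle)
            kept["weight"] = particle["weight"] / survival_prob
            survivors.append(kept)
    return survivors
```

With p = 0.1, on average one in ten secondary electrons survives, carrying ten times its original weight.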

Uniform Bremsstrahlung Splitting (UBS)

When using UBS, in each interaction that produces photons, Nsplit bremsstrahlung or 2Nsplit annihilation photons are sampled instead of one or two photons. To make the game fair, a statistical weight of w0/Nsplit is assigned to the photons, where w0 is the statistical weight of the incident electron or positron. In this way the Nsplit or 2Nsplit photons count statistically for as much as the one or two photons that would be produced in a normal simulation that does not use splitting. Many of the photons set in motion in interactions of the primary electrons impinging on the bremsstrahlung target will undergo additional collisions before emerging from the treatment head or being locally absorbed. Electrons and positrons set in motion in such interactions will inherit the statistical weight of the photons, i.e., they will have a weight of w0/Nsplit if UBS is used. If one were to split bremsstrahlung and annihilation events of such secondary charged particles, one would have photons with weights w0/Nsplit², w0/Nsplit³, etc. This is not desirable, and therefore such higher-order interactions are not split. If one is only interested in the dose beyond the depth of maximum dose in the phantom, where the contribution of contaminant electrons is negligible, or if one wants to obtain another photon-only quantity such as the photon (energy) fluence or spectrum, one can further improve the efficiency of the treatment head simulation by employing Russian roulette with p = 1/Nsplit for charged particles set in motion in photon collisions. In this case, the secondary electrons that survive the Russian roulette game again have a weight of w0, and their bremsstrahlung and annihilation interactions must be split to avoid the production of "fat" photons (Sheikh-Bagheri 1999).
It is important to keep in mind that a UBS simulation that uses Russian roulette for secondary charged particles results in a poor estimate of the contaminant electrons and is therefore not suitable for full dose calculations in the patient. UBS was implemented in the original BEAM version (Rogers et al. 1995) and was refined in Sheikh-Bagheri (1999) and Sheikh-Bagheri et al. (2000). UBS was shown in the article by Kawrakow, Rogers, and Walters (2004) to improve the efficiency of photon treatment head simulations by up to a factor of 8 (without Russian roulette) or 25 (with Russian roulette).
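The UBS weight rule, including the no-higher-order-splitting condition, can be sketched as follows. The data structures and the `sample_photon` sampler interface are illustrative, not BEAM's actual implementation.

```python
def ubs_split(parent, n_split, sample_photon):
    """Uniform bremsstrahlung splitting: sample n_split photons from one
    bremsstrahlung event, each with weight w0/n_split. Only events of
    particles that still carry the primary weight w0 are split, so that
    weights never fall below w0/n_split. `sample_photon` is a caller-
    supplied sampler of photon energy/direction (hypothetical interface)."""
    if parent["is_split_descendant"]:
        # Secondary charged particle already at weight w0/n_split: do not
        # split again, or weights of w0/n_split**2 etc. would appear.
        return [dict(sample_photon(), weight=parent["weight"],
                     is_split_descendant=True)]
    w = parent["weight"] / n_split
    return [dict(sample_photon(), weight=w, is_split_descendant=True)
            for _ in range(n_split)]
```

Note that the total statistical weight of the split photons equals the weight of the single photon an analog simulation would have produced, which is exactly the fairness condition described above.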

Selective Bremsstrahlung Splitting (SBS)

Bremsstrahlung photons can be emitted in all directions by an electron, but those emitted by electrons aiming toward the FOI at the time of emission have a higher chance of reaching it (see Figure 1). In a typical photon beam treatment head simulation most photons produced in electron and positron interactions are absorbed by the primary collimator, the photon jaws, and the linac shielding. For instance, for a 6 MV beam and a 10×10 cm2 field size, only about 2% to 3% of the photons reach the plane underneath the jaws. With this realization, Sheikh-Bagheri and Rogers developed a technique known as selective bremsstrahlung splitting (SBS) (Sheikh-Bagheri 1999). The difference between SBS and UBS is that SBS uses a variable splitting number for bremsstrahlung that depends on the probability of emitting a photon directed toward the FOI. This probability is precalculated for different incident electron directions assuming that the electron position is on the beam axis, and during the simulation the splitting number is selected according to this probability, between a maximum (electron moving forward) and a minimum (electron moving backward). The minimum splitting number is typically 1/10 of the maximum. Although SBS substantially reduces the time needed to simulate photons not reaching the FOI, it introduces a non-uniform distribution of statistical weights, which leads to a lower efficiency than theoretically possible. Nevertheless, SBS improves the efficiency by a factor of 2.5 to 3.5 compared to UBS, for a total efficiency gain compared to a simulation without splitting of ~20 when not using Russian roulette for secondary electrons, or ~65 when Russian roulette for secondary electrons is used (see Figure 2).
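The essence of SBS is the direction-dependent splitting number. A toy version follows; the linear interpolation and the hard factor-of-10 floor are illustrative simplifications, not the exact published scheme.

```python
def sbs_splitting_number(p_emit_into_foi, p_forward_max, n_max):
    """Selective bremsstrahlung splitting: choose the splitting number in
    proportion to the precalculated probability that a photon from an
    electron with this direction enters the FOI, between n_max (electron
    aimed forward) and a minimum of n_max/10 (electron aimed backward)."""
    n_min = max(1, n_max // 10)
    if p_forward_max <= 0.0:
        return n_min
    fraction = min(1.0, p_emit_into_foi / p_forward_max)
    return max(n_min, round(n_max * fraction))

# An electron aimed straight at the FOI gets the full splitting number;
# one aimed away gets the minimum, so little time is wasted on photons
# that would mostly be absorbed in the collimation system anyway.
```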

Directional Bremsstrahlung Splitting (DBS)

As with SBS, the goal of DBS (Kawrakow, Rogers, and Walters 2004) is to reduce the number of transported photons not reaching the FOI. But unlike SBS, only those split photons that are aimed into the FOI are transported. The remaining photons are immediately subjected to a game of Russian roulette in which, on average, only a single photon survives (see Figure 1). This approach leads to uniform statistical weights within the FOI, which improves the efficiency further compared to SBS. DBS uses a complex combination of interaction splitting, Russian roulette for secondary electrons and photons, particle splitting for electrons, and directional biasing, together with the fact that the probability of bremsstrahlung, Compton-scattered, annihilation, and fluorescence photons reaching the FOI can be calculated in advance, thus reducing the number of actual interactions being sampled. DBS improves the efficiency of photon beam treatment head simulations by up to a factor of 8 compared to SBS, for a total gain in efficiency compared to a simulation without any splitting of ~150 when electron splitting is employed (and therefore good statistics are achieved for contaminant electrons), or of ~500 when electron splitting is not employed (in which case the simulation is only useful for photon-only quantities) (see Figure 2).
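Schematically, the DBS decision for each split photon looks like the following. The boolean FOI test is a placeholder; the real code computes whether the photon's direction intersects the field, and the data flow here is an illustration rather than the BEAMnrc implementation.

```python
import random

def dbs_photon_weight(aimed_into_foi, n_split, w0, rng=random.random):
    """Directional bremsstrahlung splitting: a split photon aimed into the
    FOI is always transported with weight w0/n_split; one aimed elsewhere
    plays Russian roulette with survival probability 1/n_split and, if it
    survives, is transported with the full weight w0 (a 'fat' photon).
    Returns the photon's statistical weight, or None if it is eliminated."""
    if aimed_into_foi:
        return w0 / n_split
    if rng() < 1.0 / n_split:
        return w0
    return None
```

Because every photon inside the FOI carries the same weight w0/n_split, the weight distribution there is uniform, which is precisely the property that lets DBS outperform SBS.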

Figure 2. Relative efficiency for calculating photon fluence within the 10×10 cm2 field of a simulated Elekta SL25 6 MV photon beam as a function of bremsstrahlung splitting number (NBRSPL). Efficiencies shown are relative to total photon fluence efficiency with no splitting. For UBS and SBS, efficiencies are shown with Russian roulette on (open circles) and off (solid circles). The field size parameter, FS, used with SBS was 30 cm (Sheikh-Bagheri 1999; Sheikh-Bagheri and Rogers 2002a,b). For DBS, results are shown with electron splitting off (open circles) and electron splitting on with the splitting plane at Z=15.46 cm and the Russian roulette plane at Z=15.2 cm (closed circles). For UBS the minimum splitting number is 20 and for SBS and DBS it is 50. Note the y axis is logarithmic. (Reprinted from Kawrakow, I., D. W. O. Rogers, and B. Walters, "Large efficiency improvements in BEAMnrc using directional bremsstrahlung splitting," Med Phys 31:2883–2898. © 2004, with permission from AAPM.)

To give an idea of the power of these techniques, Figure 3 presents calculated spectra averaged over a 10×10 cm2 field from a typical high-energy photon accelerator, using BEAMnrc on a 1.8 GHz CPU.

Figure 3. An example of the speed of the calculation of photon spectra in a 10×10 cm2 field from a 16 MV beam from a realistic linac with the DBS technique, with electron splitting (blue histogram) and without electron splitting (red histogram), using the BEAMnrc system.

Efficiency Improvement Techniques Used in Patient Simulations

An efficient linac simulation algorithm plays an important role in the process of beam commissioning for a MC-based RTP system, by facilitating the often-iterative process of determining the phase-space of the incident electron beam (at the location of the bremsstrahlung target in photon beams or at the vacuum exit window in electron beams). A substantial improvement in the efficiency of beam modeling for MC treatment planning does not necessarily depend on direct high-efficiency MC methods, since beam modeling is also feasible through a variety of other, not directly MC-based, methods (see the chapter by Ma and Sheikh-Bagheri): recycling phase-space files, designing MC-based beam models, or employing empirical and semi-empirical measurement-based beam models. Therefore, the routine utilization of a MC code in the clinic will very strongly depend on the efficiency of the simulation for each patient. As discussed earlier, the CHT implementation is the most important factor for improved efficiency. The following sections provide a brief discussion of additional VRTs and AEITs employed by MC algorithms for dose calculations in a patient. Dose distribution post-processing (i.e., denoising), which is another approach to significantly reduce CPU time, is discussed in detail in the chapter by Kawrakow and Bielajew ("Monte Carlo Treatment Planning: Interpretation of Noisy Dose Distributions and Review of Denoising Method"). The combined efficiency enhancements thus achieved have made MC patient dose calculations viable for routine clinical use.

Macro Monte Carlo

The Macro Monte Carlo (MMC) approach (Neuenschwander and Born 1992; Neuenschwander, Mackie, and Reckwerdt 1995) was perhaps the first attempt to develop a fast MC code for use in RTP. The idea behind it is simple: one performs simulations of electrons impinging on homogeneous spheres of varying radii and media using a general purpose package such as EGS4 (used in the original MMC implementation) or EGSnrc (used in the current commercial implementation in the Eclipse treatment planning system), and stores the probability distribution of particles emerging from the sphere in a database. In the actual patient simulation electrons are transported using this database. For each electron step the medium and size of the spheres employed for the transport is determined from the properties of the surrounding voxels. The MMC approach is an AEIT because (1) it uses two-dimensional histograms to represent the database probability distributions (in reality the distributions are five-dimensional, but this would require too much data) and (2) the energy is distributed along a straight line between the initial and final electron position. Neuenschwander, Mackie, and Reckwerdt (1995) recognized that because of item (2) above, the radius of the spheres must be limited to about 5 mm to achieve sufficient accuracy for typical RTP electron beam simulations.

History Repetition

The history repetition technique was introduced almost simultaneously in the SMC (Super Monte Carlo) (Keall and Hoban 1996a,b) and VMC (Kawrakow, Fippel, and Friedrich 1996) codes. It is also used by xVMC (Fippel 1999; Kawrakow and Fippel 2000b) and MCDOSE (Ma et al. 2000). In history repetition, one simulates an electron track in an infinite, homogeneous medium (typically water) and then "applies" the track to the actual patient geometry, starting at different positions and directions at the patient surface (electron beams) or from different interaction sites (photon beams). Whereas VMC/xVMC generate the reference track on the fly using their own physics implementation, MCDOSE and SMC use EGS4 for this purpose. The difference between SMC and MCDOSE is that in MCDOSE the track is generated on the fly, whereas SMC uses precalculated tracks stored in a data file. The application of the reference track to the heterogeneous patient geometry requires the appropriate scaling of the step lengths and the multiple elastic scattering angles. The scaling of multiple elastic scattering angles is done within a small-angle approximation. In addition, it is not possible to obtain the correct number of discrete interactions. History repetition is therefore an AEIT. For materials typically found in the patient anatomy the error can be made very small, of the order of 1% to 2%, as discussed in more detail in Kawrakow (1997). Use of history repetition in arbitrary materials, such as high-Z elements, is not possible if one wants to ensure acceptable accuracy. It was originally believed that history repetition is the main reason for the much higher simulation speed of VMC/xVMC, SMC, and MCDOSE compared to general-purpose MC codes. However, a more careful analysis reveals that history repetition only results in a modest gain in efficiency that depends on the voxel size and the details of the CHT implementation (Kawrakow 2001; Siebers, Keall, and Kawrakow 2005). The more accurate the CHT implementation (fewer steps required), the smaller the efficiency gain and vice versa. For the VMC/xVMC code, for instance, the efficiency gain varies between a factor of ~1.5 (1 mm voxels) and ~3 (1 cm voxels).
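The step rescaling at the heart of history repetition can be sketched as follows. This is a simplified model, not the exact VMC/SMC rule: f_stop and f_scat stand for the voxel's stopping-power and scattering-power ratios to water, and the square-root scaling of the angle is a small-angle approximation.

```python
import math

def rescale_reference_step(step_water_cm, theta_water_rad, f_stop, f_scat):
    """Apply one step of a precomputed water track in another medium:
    shrink the path length by the stopping-power ratio so the step's
    energy loss is preserved, and rescale the multiple-scattering angle
    by sqrt(f_scat/f_stop), since the accumulated scattering-angle
    variance is proportional to (scattering power) x (path length)."""
    step = step_water_cm / f_stop
    theta = theta_water_rad * math.sqrt(f_scat / f_stop)
    return step, theta

# In a voxel where both ratios are 2 (e.g., a medium simply twice as
# dense as water in this model), the step halves and the angle is
# unchanged, since the mass thickness traversed is the same.
```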

Boundary-Crossing Algorithms

The algorithm used for geometrical boundary crossing can have a substantial effect on the efficiency of a MC calculation. As an example, BEAMnrc uses the PRESTA boundary-crossing algorithm since it is three to four times faster in phantom or accelerator calculations than the EGSnrc boundary-crossing algorithm, and in these simulations gives the same result. None of the very fast dose-in-phantom codes even stops at boundaries; they use various other techniques to avoid loss of accuracy (Kawrakow, Fippel, and Friedrich 1996; Ma et al. 2000; Sempau, Wilderman, and Bielajew 2000).

Precalculated Interaction Densities

Use of precalculated interaction densities is a true VRT that has been employed in SMC (Keall and Hoban 1996b), MCPAT (Wang, Chui, and Lovelock 1998), and xVMC (Fippel 1999) for photon beam calculations. Instead of tracing the photons incident on the patient again and again, in this technique the interaction densities in all voxels are calculated in advance, and these densities are then used to start the appropriate number of electrons from the different voxels. Fippel (1999) reports a gain in efficiency of about a factor of 2 compared to photon transport not using any VRT. The main limitation of this method is that relatively simple source models are required to calculate the interaction densities accurately.
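A simple 1-D illustration of the idea, for a pencil beam crossing a column of voxels; the numbers and the pure exponential-attenuation model are illustrative only.

```python
import math

def first_interaction_counts(n_photons, mu_per_cm, voxel_cm):
    """Precalculate the expected number of first photon interactions in
    each voxel of a 1-D column: the fraction interacting in voxel i is
    T_i * (1 - exp(-mu_i * dx)), with T_i the transmission down to the
    voxel's front face. Electron histories can then be started from each
    voxel in these proportions instead of retracing every photon."""
    counts, transmission = [], 1.0
    for mu_i in mu_per_cm:
        p_interact = 1.0 - math.exp(-mu_i * voxel_cm)
        counts.append(n_photons * transmission * p_interact)
        transmission *= 1.0 - p_interact
    return counts
```

The remaining transmitted fraction corresponds to photons leaving the column without interacting, which is why the per-voxel counts sum to less than the number of incident photons.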

Woodcock Tracing Woodcock tracing, also known as the “delta scattering method” (Lux and Koblinger 1991) is employed in DPM (Sempau, Wilderman, and Bielajew 2000) and the PEREGRINE code (Hartmann Siantar et al. 2001) for photon transport. This technique is also known as the fictitious cross-section method and can also be used to handle the energy-dependent cross section in electron transport (Nelson, Hirayama, and Rogers 1985). In Woodcock tracing for photon transport, one adds a fictitious interaction, which leaves the energy and direction of the photon unaltered, to the list of possible photon interactions. The cross section for this fictitious interaction is selected to make the total photon cross section constant in the entire geometry, i.e., a smaller fictitious interaction cross section is used in voxels with a larger cross section and vice versa.


One can then transport the photon immediately to the interaction site without the need for tracing through the geometry. The correct number of real interactions is obtained by selecting a fictitious interaction with the appropriate probability, which depends only on the fictitious cross section at the point of interaction. Woodcock tracing is a true VRT. In a detailed investigation of various photon transport VRTs, Kawrakow and Fippel (2000b) reported approximately a 20% efficiency gain when Woodcock tracing was implemented in xVMC; however, no quantitative efficiency gains are reported for DPM or PEREGRINE.
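A minimal sketch of Woodcock tracing in a 1-D row of voxels (illustrative values; real codes work in 3-D and must also sample the interaction type):

```python
import math, random

# Free paths are sampled from the maximum cross section mu_max anywhere in
# the geometry, so no voxel boundaries need to be traced. A candidate site
# in a voxel with cross section mu is accepted as a real interaction with
# probability mu/mu_max; otherwise it is a fictitious interaction that
# changes nothing and the flight simply continues.

def woodcock_first_interaction(mu_of_voxel, voxel_size, rng):
    """Depth of the first real interaction, or None if the photon escapes."""
    mu_max = max(mu_of_voxel)
    x = 0.0
    while True:
        x += -math.log(1.0 - rng.random()) / mu_max     # flight to candidate site
        voxel = int(x / voxel_size)
        if voxel >= len(mu_of_voxel):
            return None                                 # escaped the geometry
        if rng.random() < mu_of_voxel[voxel] / mu_max:  # real vs. fictitious
            return x

rng = random.Random(42)
sites = [woodcock_first_interaction([0.02, 0.2, 0.05], 5.0, rng) for _ in range(2000)]
depths = [s for s in sites if s is not None]
```

Because the middle voxel has the largest cross section, most real interactions cluster there; in the low-density voxels the (mostly fictitious) candidate sites let the photon fly through without any boundary bookkeeping.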

Photon Splitting Combined with Russian Roulette The single most significant gain in efficiency (about a factor of 5) for photon transport in the patient is obtained from a combination of particle splitting and Russian roulette introduced by Kawrakow and Fippel (2000b) and denoted “SPL.” SPL is employed in recent versions of VMC/xVMC (Kawrakow and Fippel 2000b; Fippel et al. 2000) and VMC++ (Kawrakow and Fippel 2000a; Kawrakow 2001), and is also used in DOSXYZnrc (Walters, Kawrakow, and Rogers 2005). In this true VRT, Ns photon interaction sites are sampled for each incident photon using a single pass through the geometry. Secondary photons, resulting from Compton scattering, bremsstrahlung, and annihilation, are subjected to a Russian roulette game with a survival probability of 1/Ns. Surviving secondary photons are transported in the same way as primary photons. For a typical patient anatomy the optimum splitting number Ns is around 40. Efficiency gains from SPL vary between a factor of 5 and a factor of 9 (Kawrakow and Fippel 2000b) when coupled with history repetition (VMC/xVMC) or STOPS (VMC++) to transport the resulting electrons and positrons.
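The flavor of SPL can be sketched as follows. The essential ingredients are the Ns interaction sites sampled in a single pass, each carrying weight 1/Ns, and the 1/Ns roulette on secondaries; the uniform medium and the function names are illustrative assumptions:

```python
import math, random

def split_interaction_sites(mu, depth_max, ns, rng):
    """Return (depth, weight) for Ns interaction sites of one photon.

    The sites are sampled by stratifying the cumulative interaction
    probability 1 - exp(-mu * depth_max) into Ns intervals, so a single
    pass yields all Ns sites, each with statistical weight 1/Ns.
    """
    p_interact = 1.0 - math.exp(-mu * depth_max)
    sites = []
    for i in range(ns):
        u = (i + rng.random()) / ns * p_interact  # stratified sampling
        depth = -math.log(1.0 - u) / mu
        sites.append((depth, p_interact / ns))    # weight 1/Ns each
    return sites

def roulette_secondary(weight, ns, rng):
    """Russian roulette: survive with probability 1/Ns at Ns-fold weight."""
    return weight * ns if rng.random() < 1.0 / ns else 0.0

rng = random.Random(1)
sites = split_interaction_sites(mu=0.05, depth_max=30.0, ns=40, rng=rng)
w = roulette_secondary(1.0 / 40, 40, rng)  # secondary photon: killed or weight ~1
```

Surviving secondaries come back to roughly unit weight, so they can be transported exactly like primary photons, which is what keeps the scheme unbiased.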

Simultaneous Transport of Particle Sets (STOPS) The goal of the STOPS technique (Kawrakow 2001) is to reduce the time spent simulating particle interactions, as in history repetition, but without introducing any approximations. STOPS is a true VRT. To accomplish this, particles are transported in sets. The particles in a set have the same energy and charge but different positions and directions. STOPS saves time by calculating material-independent quantities, such as mean free paths, interpolation indices, azimuthal scattering angles, and cross sections (e.g., Møller and Bhabha), once for all the particles in the set. It still has to sample material-dependent quantities, such as multiple elastic scattering and bremsstrahlung, for each particle in the set. If one or more particles in a set undergo a discrete interaction different from the other particles, the set is separated into subsets and each subset is transported individually. Because the interaction properties of materials typically found in the patient anatomy are similar, separating particle sets is a relatively rare event. Hence, the efficiency gain is almost the same as from history repetition, yet no approximations are involved. This allows the use of STOPS in arbitrary media, not just the low-Z materials to which history repetition is restricted.

Quasi-Random Sequences The successful implementation of the MC technique depends heavily on the generation and use of random numbers. Unlike pseudo-random numbers, which are more frequently encountered in MC simulations, quasi-random numbers are generated with emphasis on filling the multidimensional space of interest as uniformly as possible. The main advantage of this uniformity is that the computation converges faster than the same problem computed with a sequence of pseudo-random numbers. Quasi-random sequences are used in VMC/xVMC for photon transport (Kawrakow and Fippel 2000b) and in VMC++ for electron and photon transport (Kawrakow 2001), and were reported to result in an efficiency gain of about a factor of 2. For a more detailed discussion the reader is referred to Siebers, Keall, and Kawrakow (2005).
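The effect is easy to demonstrate with a self-contained Halton sequence (bases 2 and 3), one of the simplest quasi-random constructions; the sequences used in production codes (e.g., Sobol) behave similarly. The example estimates a smooth 2-D integral whose exact value is 0.25:

```python
import math, random

def halton(index, base):
    """Radical inverse of `index` in the given base (van der Corput)."""
    result, f = 0.0, 1.0 / base
    while index > 0:
        result += f * (index % base)
        index //= base
        f /= base
    return result

def estimate(points):
    """MC estimate of the integral of f(x, y) = x*y over the unit square."""
    return sum(x * y for x, y in points) / len(points)

n = 4096
rng = random.Random(7)
pseudo = [(rng.random(), rng.random()) for _ in range(n)]
quasi = [(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)]

err_pseudo = abs(estimate(pseudo) - 0.25)  # ~1/sqrt(n) convergence
err_quasi = abs(estimate(quasi) - 0.25)    # close to ~1/n for smooth f
```

For smooth integrands the quasi-random error typically shrinks nearly as 1/n rather than 1/sqrt(n), which is the source of the factor-of-2 efficiency gains quoted above.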

Correlated Sampling Correlated sampling is a standard VRT that may lead to substantial efficiency gains but has not received much attention in MC dose calculations for external beam RTP. Substantial efficiency gains from correlated sampling have been reported for the calculation of ion chamber dosimetry correction factors (Ma and Nahum 1993; Buckley, Kawrakow, and Rogers 2004) and for brachytherapy dose calculation (Hedtjärn, Carlsson, and Williamson 2002). The only publication on the use of correlated sampling in external beam RTP calculations that we are aware of is Holmes et al. (1993), where modest efficiency gains were observed for relatively simple heterogeneous geometries. In addition to the papers cited above, the reader is referred to Siebers, Keall, and Kawrakow (2005), and for more theoretical considerations to the book by Lux and Koblinger (1991).
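The principle can be sketched with a toy problem: estimating the ratio of photon transmissions through two slightly different slabs using the same sampled free paths for both geometries (common random numbers), so that the two estimates fluctuate together and their ratio has a much smaller variance than with independent samples. The slab thicknesses and sample counts are illustrative:

```python
import math, random

def transmission(mu_t, samples):
    """Fraction of sampled optical depths exceeding mu_t (transmitted photons)."""
    return sum(1.0 for s in samples if s > mu_t) / len(samples)

rng = random.Random(3)
n = 20000
# One shared set of exponentially distributed optical depths.
common = [-math.log(1.0 - rng.random()) for _ in range(n)]

t_ref = transmission(1.00, common)   # reference slab, mu*t = 1.00
t_pert = transmission(1.05, common)  # perturbed slab, same random numbers
ratio = t_pert / t_ref               # analytic value: exp(-0.05) ~ 0.951

# With common samples t_pert <= t_ref by construction; with independent
# samples the two estimates fluctuate separately and the ratio is noisy.
```

Most of the statistical noise cancels in the ratio because both estimates rise and fall together, which is exactly the effect exploited for ion chamber correction factors and perturbed-geometry calculations.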

Statistical Considerations A reliable estimate of the efficiency of a MC simulation requires a correct estimate of the statistical uncertainties. The following sections discuss the different methods of estimating the variance of a quantity of interest in MC simulations, as well as the pitfalls associated with the recycling of particles. According to the Central Limit Theorem (Lux and Koblinger 1991), the probability distribution of a MC result approaches a Gaussian in the limit of a large number of particle histories. Hence, if one divides the simulation into batches (groups), each containing a large number of particles, one can estimate the uncertainty σ_X of the quantity X using

    σ_X = sqrt[ Σ_{i=1}^{N} (X_i − X̄)² / (N(N−1)) ] ,    (1)

where X_i is the result of the ith batch, X̄ is the mean of the X_i, and N is the number of batches. Equation (1) is the usual way to estimate the uncertainty of the mean of a set of normally distributed observations. The batch method has been widely employed due to its conceptual and numerical simplicity and was used, for instance, in many EGS user codes prior to 2002. Its main disadvantage is the relatively large uncertainty on the uncertainty estimate itself, due to the small number of batches (typically 10 to 40). An alternative way to estimate the uncertainty of a MC computed quantity is to employ history-by-history statistical analysis. The history-by-history method is based on the fact that the uncertainty σ_X is related to the first ⟨X⟩ and second ⟨X²⟩ moments of the single-history probability distribution according to

    σ_X² = ( ⟨X²⟩ − ⟨X⟩² ) / (N − 1) ,    (2)

where now N is the number of particles instead of batches. The moments ⟨X⟩ and ⟨X²⟩ can be estimated within the simulation by keeping track of the sums of the single-history observations x_i, i.e.,

    ⟨X⟩ = (1/N) Σ_{i=1}^{N} x_i ,    ⟨X²⟩ = (1/N) Σ_{i=1}^{N} x_i² .    (3)
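Both estimators are straightforward to code; with one batch per history the two formulas give identical results, which is a convenient consistency check:

```python
import math

def batch_sigma(batch_results):
    """Equation (1): uncertainty of the mean from N batch results X_i."""
    n = len(batch_results)
    mean = sum(batch_results) / n
    ssq = sum((x - mean) ** 2 for x in batch_results)
    return math.sqrt(ssq / (n * (n - 1)))

def history_sigma(history_scores):
    """Equations (2)-(3): history-by-history uncertainty of the mean."""
    n = len(history_scores)
    m1 = sum(history_scores) / n                  # <X>
    m2 = sum(x * x for x in history_scores) / n   # <X^2>
    return math.sqrt((m2 - m1 * m1) / (n - 1))

scores = [1.0, 3.0, 2.0, 2.0]   # illustrative per-history scores
sigma_batch = batch_sigma(scores)
sigma_hist = history_sigma(scores)
```

In practice the history-by-history method keeps only the two running sums per scored quantity, so the full list of x_i never needs to be stored.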

Implementation of the above equations in a MC code is trivial if one is interested in no more than a few quantities. However, for a distributed quantity such as a 3-D dose distribution with the hundreds of thousands of voxels needed in RTP, a straightforward implementation of the summations at the end of each history would result in a very long calculation time. This problem was resolved only recently by a computational trick attributed to Salvat (Sempau, Wilderman, and Bielajew 2000); see also Walters, Kawrakow, and Rogers (2002). For general-purpose codes the computational penalty of history-by-history statistics with the Salvat trick is negligible. However, for fast codes such as VMC++, history-by-history analysis can lead to a 10% to 20% penalty in CPU (central processing unit) time compared to batch analysis (Kawrakow 2001). The main advantage of the history-by-history method is that it provides a more precise estimate of the uncertainty. This is illustrated in Figure 4, which shows the fractional dose uncertainty along the central axis of an 18 MeV electron beam obtained using the history-by-history method and batch methods with different numbers of batches.

Figure 4. Fractional uncertainty along the central axis for an 18 MeV electron beam from a Clinac 2100C (20×20 cm² field at SSD=100 cm) in a water phantom. Fractional uncertainties are estimated using the history-by-history method and the batch method with 10 and 40 batches. A scaled depth-dose curve is also shown for reference. Dose was scored in 1×1×0.5 cm³ voxels. (Reprinted with permission from Walters, B. R. B., I. Kawrakow, and D. W. O. Rogers, “History by history statistical estimators in the BEAM code system.” Med Phys 29:2745–2752. © 2002, AAPM.)

Equations (1) and (2) both assume that the batch or history observations are statistically independent. In a “normal” Monte Carlo simulation that does not employ any VRTs, the statistical independence of individual particle histories is typically guaranteed automatically by the use of a good-quality random number generator that produces an uncorrelated sequence of random numbers. However, when one employs VRTs, or splits the simulation into parts, as is typically done in patient simulations that use a phase-space file from a full treatment head simulation, it is very important to combine particles into statistically independent groups in order to obtain a reliable uncertainty estimate. For instance, when using techniques such as history repetition or STOPS, all repetitions of the same track (history repetition) or all particles within a set (STOPS) are correlated and must therefore be considered as one event for the sake of statistical analysis. Similarly, all particles originating from the same electron incident on the bremsstrahlung target or vacuum exit window belong to the same initial, statistically independent, history. Therefore, when using a phase-space file for the patient calculation, particles must be grouped according to their initial electron history and cannot be considered to be independent. An important conclusion from these considerations is that, when re-using phase-space particles, one must recycle the particles (i.e., use each several times before moving on to the next), rather than restart the phase-space file several times. It is not possible to group the phase-space particles into statistically independent initial histories when restarting a phase-space file, and as a result the uncertainties will be underestimated. For a more detailed discussion of these issues the reader is referred to Walters, Kawrakow, and Rogers (2002).

Another important consideration when using phase-space files is that the uncertainties in the patient simulation cannot be reduced below the variance present in the phase-space file itself, even if particles are recycled a very large number of times. This is most easily understood by considering a phase-space file containing a single particle. If one recycles this particle a very large number of times, the uncertainty in the dose deposited by this particular particle will tend to zero, but the overall uncertainty (variance) of the simulation will remain very large, since each re-use of the particle is, statistically speaking, fully dependent on the others. The variance present in a phase-space file has become known as the “latent variance” (Sempau et al. 2001). A simple technique to demonstrate the presence of, and accurately estimate, the latent variance in a phase-space calculation of dose was proposed by Sempau et al. (2001) and is depicted in Figure 5. In this technique the variance is separated into two components, only one of which depends on the number of recyclings. By increasing the recycling factor K, that term is driven to zero, and in this limit the remaining term is the latent variance.
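The bookkeeping trick described above for scored dose arrays can be sketched as follows (names are illustrative): each voxel keeps a temporary score and the index of the last history that deposited in it, so the moment sums are updated only when a voxel is actually touched by a new history, not at the end of every history for every voxel.

```python
class VoxelScorer:
    """History-by-history scoring with deferred per-voxel accumulation."""

    def __init__(self, n_voxels):
        self.sum_x = [0.0] * n_voxels   # running sum of per-history scores x_i
        self.sum_x2 = [0.0] * n_voxels  # running sum of x_i^2
        self.tmp = [0.0] * n_voxels     # score of the current history
        self.last = [-1] * n_voxels     # last history index that scored here

    def score(self, voxel, history, dose):
        if self.last[voxel] != history:  # first deposit by this history
            self._flush(voxel)           # fold previous history into the sums
            self.last[voxel] = history
        self.tmp[voxel] += dose          # accumulate within the history

    def _flush(self, voxel):
        self.sum_x[voxel] += self.tmp[voxel]
        self.sum_x2[voxel] += self.tmp[voxel] ** 2
        self.tmp[voxel] = 0.0

    def finalize(self):
        """Flush all outstanding temporary scores at the end of the run."""
        for v in range(len(self.tmp)):
            self._flush(v)

scorer = VoxelScorer(3)
scorer.score(0, history=0, dose=1.0)
scorer.score(0, history=0, dose=1.0)  # same history: contributes to the same x_i
scorer.score(0, history=1, dose=1.0)  # new history: previous x_i is flushed
scorer.finalize()
```

The cost per deposit is constant, so the statistical analysis scales with the number of energy depositions rather than with (number of histories) × (number of voxels).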

Figure 5. The dependence of the variance on the inverse of the number of times (K) particles from a phase-space file are recycled. As 1/K approaches zero, the only component of variance remaining is the latent variance present in the phase-space file. (Reprinted with permission from Sempau, J., A. Sánchez-Reyes, F. Salvat, H. Oulad ben Tahar, S. B. Jiang, and J. M. Fernandez-Varea. “Monte Carlo simulation of electron beams from an accelerator head using PENELOPE.” Phys Med Biol 46:1163–1186. © 2001, IOP Publishing.)
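The extrapolation of Figure 5 can be sketched with synthetic numbers (assuming the variance model var(K) = A + B/K; the values below are illustrative, not data from the paper). A straight-line fit of variance against 1/K recovers the latent variance A as the intercept at 1/K → 0:

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

ks = [1, 2, 5, 10, 20]                 # recycling factors used in the runs
latent, per_recycle = 0.4, 2.0         # assumed A and B for the synthetic data
variances = [latent + per_recycle / k for k in ks]

slope, intercept = fit_line([1.0 / k for k in ks], variances)
# The intercept estimates the latent variance A.
```

In a real study each variance would come from an independent patient calculation at the given K, with scatter about the fitted line; the intercept is then the best estimate of the variance floor imposed by the phase-space file.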


Summary The design and special-purpose implementation of several efficiency enhancement techniques, such as the condensed history technique (CHT), splitting, and Russian roulette, have enabled the development of ever faster MC codes for beam modeling and patient dose simulation. Variance reduction techniques (VRTs), in contrast to approximate efficiency enhancement techniques (AEITs), do not bias the physics of the simulation and, when implemented properly, can confidently be used to enhance the efficiency of the simulations. AEITs can be used to enhance efficiency when care is taken to show that they have a small or negligible effect on the results. With the increased popularity of such aggressive implementations of VRTs, it has become more important to take correlations between phase-space particles into account when calculating uncertainties. These correlations can be properly accounted for, using either the history-by-history or the batch technique, by grouping the particles according to the original primary history that generated them. The substantial improvements achieved in the efficiency of MC calculations are paving the way for routine clinical use.

References
Agostinelli, S., et al. (Geant4 Collaboration). (2003). “GEANT4—a simulation toolkit.” Nucl Instrum Methods Phys Res A 506:250–303.
Baró, J., J. Sempau, J. M. Fernandez-Varea, and F. Salvat. (1995). “Penelope—an algorithm for Monte-Carlo simulation of the penetration and energy-loss of electrons and positrons in matter.” Nucl Instrum Methods Phys Res B 100:31–46.
Berger, M. J. “Monte Carlo Calculation of the Penetration and Diffusion of Fast Charged Particles” in Methods in Computational Physics, Vol. 1. B. Alder, S. Fernbach, and M. Rotenberg (eds.). New York: Academic Press, pp. 135–215, 1963.
Bielajew, A. F., and D. W. O. Rogers. (1987). “PRESTA: The parameter reduced electron-step transport algorithm for electron Monte Carlo transport.” Nucl Instrum Methods Phys Res B 18:165–181.
Briesmeister, J. F. (Editor). MCNP—A General Purpose Monte Carlo Code for Neutron and Photon Transport, Version 3A. Los Alamos National Laboratory Report LA-7396-M, Los Alamos, NM, 1986.
Briesmeister, J. F. (Editor). MCNP—A General Monte Carlo N-Particle Transport Code, Version 4A. Los Alamos National Laboratory Report LA-12625-M, Los Alamos, NM, 1993.
Brown, F. B. (Editor). MCNP—A General Monte Carlo N-Particle Transport Code, Version 5. Los Alamos National Laboratory Report LA-UR-03-1987, Los Alamos, NM, 2003.
Buckley, L. A., I. Kawrakow, and D. W. O. Rogers. (2004). “CSnrc: Correlated sampling Monte Carlo calculations using EGSnrc.” Med Phys 31:3425–3435.
Chetty, I. J., B. Curran, J. Cygler, J. J. DeMarco, G. Ezzell, B. A. Faddegon, I. Kawrakow, P. J. Keall, H. Liu, C.-M. Ma, D. W. O. Rogers, D. Sheikh-Bagheri, J. Seuntjens, and J. V. Siebers. (2006). “Issues associated with clinical implementation of Monte Carlo-based treatment planning: Report of the AAPM Task Group No. 105.” Med Phys (submitted).


Fippel, M. (1999). “Fast Monte Carlo dose calculation for photon beams based on the VMC electron algorithm.” Med Phys 26:1466–1475.
Fippel, M., I. Kawrakow, F. Nüsslin, and D. W. O. Rogers. “Implementation of Several Variance Reduction Techniques into the XVMC Monte Carlo Algorithm for Photon Beams” in XIIIth International Conference on the Use of Computers in Radiation Therapy (XIIIth ICCR). W. Schlegel and T. Bortfeld (eds.). Heidelberg: Springer-Verlag, pp. 406–408, 2000.
Halbleib, J. A. “Structure and Operation of the ITS Code System” in Monte Carlo Transport of Electrons and Photons. W. R. Nelson, T. M. Jenkins, A. Rindi, A. E. Nahum, and D. W. O. Rogers (eds.). New York: Plenum Press, pp. 249–262, 1989.
Halbleib, J. A., and T. A. Mehlhorn. ITS: The Integrated TIGER Series of Coupled Electron/Photon Monte Carlo Transport Codes. Sandia National Laboratory Report SAND84-0073, Albuquerque, NM, 1984.
Halbleib, J. A., R. P. Kensek, T. A. Mehlhorn, G. D. Valdez, S. M. Seltzer, and M. J. Berger. ITS Version 3.0: The Integrated TIGER Series of Coupled Electron/Photon Monte Carlo Transport Codes. Sandia National Laboratory Report SAND91-1634, Albuquerque, NM, 1992.
Hartmann Siantar, C. L., R. S. Walling, T. P. Daly, B. Faddegon, N. Albright, P. Bergstrom, A. F. Bielajew, C. Chiang, D. Garnet, R. K. House, D. Knapp, D. J. Wieczorek, and L. J. Verhey. (2001). “Description and dosimetric verification of the PEREGRINE Monte Carlo dose calculation system for photon beams incident on a water phantom.” Med Phys 28:1322–1337.
Hedtjärn, H., G. A. Carlsson, and J. F. Williamson. (2002). “Accelerated Monte Carlo based dose calculations for brachytherapy planning using correlated sampling.” Phys Med Biol 47:351–376.
Holmes, M. A., T. R. Mackie, W. Söhn, P. J. Reckwerdt, T. J. Kinsella, A. F. Bielajew, and D. W. O. Rogers. (1993). “The application of correlated sampling to the computation of electron beam dose distributions in heterogeneous phantoms using the Monte Carlo method.” Phys Med Biol 38:675–688.
Kahn, H. “Use of Different Monte Carlo Sampling Techniques” in Symposium on Monte Carlo Methods. H. A. Meyer (ed.). New York: John Wiley and Sons, pp. 146–190, 1956.
Kawrakow, I. (1997). “Improved modeling of multiple scattering in the voxel Monte Carlo model.” Med Phys 24:505–517.
Kawrakow, I. (2000a). “Accurate condensed history Monte Carlo simulation of electron transport. I. EGSnrc, the new EGS4 version.” Med Phys 27:485–498.
Kawrakow, I. (2000b). “Accurate condensed history Monte Carlo simulation of electron transport. II. Application to ion chamber response simulations.” Med Phys 27:499–513.
Kawrakow, I. “VMC++, Electron and Photon Monte Carlo Calculations Optimized for Radiation Treatment Planning” in Advanced Monte Carlo for Radiation Physics, Particle Transport Simulation and Applications. A. Kling, F. Barao, M. Nakagawa, L. Távora, and P. Vaz (eds.). Proceedings of the Monte Carlo 2000 Meeting, Lisbon. Berlin: Springer-Verlag, pp. 229–236, 2001.
Kawrakow, I., and A. F. Bielajew. (1998). “On the condensed history technique for electron transport.” Nucl Instrum Methods Phys Res B 142:253–280.
Kawrakow, I., and D. W. O. Rogers. The EGSnrc Code System: Monte Carlo Simulation of Electron and Photon Transport. Technical Report PIRS-701, National Research Council of Canada, Ottawa, Canada, 2000.


Kawrakow, I., and M. Fippel. “VMC++, A Fast MC Algorithm for Radiation Treatment Planning” in XIIIth International Conference on the Use of Computers in Radiation Therapy (XIIIth ICCR). W. Schlegel and T. Bortfeld (eds.). Heidelberg: Springer-Verlag, pp. 126–128, 2000a.
Kawrakow, I., and M. Fippel. (2000b). “Investigation of variance reduction techniques for Monte Carlo photon dose calculation using XVMC.” Phys Med Biol 45:2163–2184.
Kawrakow, I., M. Fippel, and K. Friedrich. (1996). “3D electron dose calculation using a voxel based Monte Carlo algorithm (VMC).” Med Phys 23:445–457.
Kawrakow, I., D. W. O. Rogers, and B. Walters. (2004). “Large efficiency improvements in BEAMnrc using directional bremsstrahlung splitting.” Med Phys 31:2883–2898.
Keall, P. J., and P. W. Hoban. (1996a). “Superposition dose calculation incorporating Monte Carlo generated electron track kernels.” Med Phys 23:479–485.
Keall, P. J., and P. W. Hoban. (1996b). “Super-Monte Carlo: A 3D electron beam dose calculation algorithm.” Med Phys 23:2023–2034.
Larsen, E. W. (1992). “A theoretical derivation of the condensed history algorithm.” Ann Nucl Energy 19:701–714.
Lux, I., and L. Koblinger. Monte Carlo Particle Transport Methods: Neutron and Photon Calculations. New York: CRC Press, 1991.
Ma, C.-M., and A. E. Nahum. (1993). “Calculation of absorbed dose ratios using correlated Monte Carlo sampling.” Med Phys 20:1189–1199.
Ma, C.-M., J. S. Li, T. Pawlicki, S. B. Jiang, and J. Deng. “MCDOSE—A Monte Carlo Dose Calculation Tool for Radiation Therapy Treatment Planning” in XIIIth International Conference on the Use of Computers in Radiation Therapy (XIIIth ICCR). W. Schlegel and T. Bortfeld (eds.). Heidelberg: Springer-Verlag, pp. 123–125, 2000.
Nelson, W. R., H. Hirayama, and D. W. O. Rogers. The EGS4 Code System. Stanford Linear Accelerator Report SLAC-265, Stanford, CA, 1985.
Neuenschwander, H., and E. J. Born. (1992). “A macro Monte Carlo method for electron beam dose calculations.” Phys Med Biol 37:107–125.
Neuenschwander, H., T. R. Mackie, and P. J. Reckwerdt. (1995). “MMC—A high-performance Monte Carlo code for electron beam treatment planning.” Phys Med Biol 40:543–574.
Poon, E., and F. Verhaegen. (2005). “Accuracy of the photon and electron physics in GEANT4 for radiotherapy applications.” Med Phys 32(6):1696–1711.
Rogers, D. W. O. (1984). “Low energy electron transport with EGS.” Nucl Instrum Methods 227:535–548.
Rogers, D. W. O. (1993). “How accurately can EGS4/PRESTA calculate ion chamber response?” Med Phys 20:319–323.
Rogers, D. W. O., and R. Mohan. “Questions for Comparison of Clinical Monte Carlo Codes” in XIIIth International Conference on the Use of Computers in Radiation Therapy (XIIIth ICCR). W. Schlegel and T. Bortfeld (eds.). Heidelberg: Springer-Verlag, pp. 120–122, 2000.
Rogers, D. W. O., B. Walters, and I. Kawrakow. BEAMnrc Users Manual. NRC Report PIRS-509(a)(rev I), 2005.
Rogers, D. W. O., B. A. Faddegon, G. X. Ding, C.-M. Ma, J. Wei, and T. R. Mackie. (1995). “BEAM: A Monte Carlo code to simulate radiotherapy treatment units.” Med Phys 22:503–524.
Salvat, F., J. M. Fernandez-Varea, J. Baro, and J. Sempau. PENELOPE, An Algorithm and Computer Code for Monte Carlo Simulation of Electron-Photon Showers. University of Barcelona Report, 1996.


Schach von Wittenau, A. E., L. J. Cox, P. M. Bergstrom, Jr., W. P. Chandler, C. L. Hartmann Siantar, and R. Mohan. (1999). “Correlated histogram representation of Monte Carlo derived medical accelerator photon-output phase space.” Med Phys 26(7):1196–1211.
Seltzer, S. M. “An Overview of ETRAN Monte Carlo Methods” in Monte Carlo Transport of Electrons and Photons. T. M. Jenkins, W. R. Nelson, A. Rindi, A. E. Nahum, and D. W. O. Rogers (eds.). New York: Plenum Press, pp. 153–182, 1988.
Seltzer, S. M. (1991). “Electron-photon Monte Carlo calculations: The ETRAN code.” Int J Appl Radiat Isotopes 42:917–941.
Sempau, J., S. J. Wilderman, and A. F. Bielajew. (2000). “DPM, a fast, accurate Monte Carlo code optimized for photon and electron radiotherapy treatment planning dose calculations.” Phys Med Biol 45:2263–2291.
Sempau, J., A. Sánchez-Reyes, F. Salvat, H. Oulad ben Tahar, S. B. Jiang, and J. M. Fernandez-Varea. (2001). “Monte Carlo simulation of electron beams from an accelerator head using PENELOPE.” Phys Med Biol 46:1163–1186.
Siebers, J. V., P. J. Keall, J. Kim, and R. Mohan. “Performance Benchmarks of the MCV Monte Carlo System” in XIIIth International Conference on the Use of Computers in Radiation Therapy (XIIIth ICCR). W. Schlegel and T. Bortfeld (eds.). Heidelberg: Springer-Verlag, pp. 129–131, 2000.
Siebers, J., P. Keall, and I. Kawrakow. “Monte Carlo Dose Calculations for External Beam Radiation Therapy” in The Modern Technology of Radiation Oncology, Volume 2. J. Van Dyk (ed.). Madison, WI: Medical Physics Publishing, pp. 91–130, 2005.
Sheikh-Bagheri, D. Monte Carlo Study of Photon Beams from Medical Linear Accelerators: Optimization, Benchmark and Spectra. Ph.D. Thesis, Carleton University, Ottawa, 1999.
Sheikh-Bagheri, D., and D. W. O. Rogers. (2002a). “Sensitivity of megavoltage photon beam Monte Carlo simulations to electron beam and other parameters.” Med Phys 29(3):379–390.
Sheikh-Bagheri, D., and D. W. O. Rogers. (2002b). “Monte Carlo calculation of nine megavoltage photon beam spectra using the BEAM code.” Med Phys 29(3):391–402.
Sheikh-Bagheri, D., D. W. O. Rogers, C. K. Ross, and J. P. Seuntjens. (2000). “Comparison of measured and Monte Carlo calculated dose distributions from the NRC linac.” Med Phys 27(10):2256–2266.
Walters, B. R. B., and D. W. O. Rogers. DOSXYZnrc Users Manual. NRC Report PIRS-794(rev B), 2004.
Walters, B. R. B., I. Kawrakow, and D. W. O. Rogers. (2002). “History by history statistical estimators in the BEAM code system.” Med Phys 29:2745–2752.
Walters, B. R. B., I. Kawrakow, and D. W. O. Rogers. DOSXYZnrc Users Manual. NRC Report PIRS-794(rev B), 2005.
Wang, L., C.-S. Chui, and M. Lovelock. (1998). “A patient-specific Monte Carlo dose calculation method for photon beams.” Med Phys 25:867–878.