The Astrophysical Journal, 653:942-953, 2006 December 20. © 2006. The American Astronomical Society. All rights reserved. Printed in U.S.A.

A FAST AND VERY ACCURATE APPROACH TO THE COMPUTATION OF MICROLENSING MAGNIFICATION PATTERNS BASED ON INVERSE POLYGON MAPPING

E. Mediavilla,1 J. A. Muñoz,2 P. López,1 T. Mediavilla,3 C. Abajas,1 C. González-Morcillo,4 and R. Gil-Merino5

Received 2006 February 3; accepted 2006 August 31

ABSTRACT

A new method of calculating microlensing magnification patterns is proposed that is based on the properties of the backward gravitational lens mapping of a lattice of polygonal cells defined at the image plane. To a first-order approximation, the local linearity of the transformation allows us to compute the contribution of each image-plane cell to the magnification by apportioning the area of the inverse image of the cell (transformed cell) among the source-plane pixels covered by it. Numerical studies in the κ = 0.1-0.8 range of mass surface densities demonstrate that this method (provided with an exact algorithm for distributing the area of the transformed cells among the source-plane pixels) is more efficient than the inverse ray-shooting technique (IRS). Magnification patterns with relative errors of 5 × 10⁻⁴ are obtained with an image-plane lattice of only 1 ray per unlensed pixel. This accuracy is, in practice, beyond the reach of IRS performance (more than 10,000 rays should be collected per pixel to achieve this result with the IRS) and is obtained in a small fraction (less than 4%) of the computing time that is used by the IRS technique to achieve an error more than an order of magnitude larger. The computing time for the new method is reduced to below 1% of the IRS computing time when the same accuracy is required of both methods. We have also studied the second-order approximation to control departures from linearity that could induce variations in the magnification within the boundaries of a transformed cell. This approximation is used to identify and control the cells enclosing a critical curve.

Subject headings: gravitational lensing

1. INTRODUCTION

The basic concept of studying flux magnification induced by gravitational lensing is that the magnification associated with a given area in the source plane is proportional to the surface covered by the image(s) of this source area. The practical adaptation of this idea to the computation of magnification patterns consists in shooting backward a regular grid of rays from the image plane to the source plane, making the magnification in a given pixel of the source plane proportional to the number of rays that hit it. However, this zeroth-order approximation hides all the information related to the mapping between areas. We propose to use the properties of the transformation of the cells defined by the lattice of rays to improve the computation. The inverse ray-shooting technique (IRS) consists in collecting light rays in pixels in the source plane. The basic idea of the method that we have developed, the inverse polygon-mapping technique (IPM), is to collect bits of area of the image plane into the source-plane pixels. The technique of tiling the image plane to project the grid boxes in the source plane is not new (Blandford & Kochanek 1987; Kochanek & Blandford 1987; Keeton 2001) and has been extensively used, mainly in the context of solving the lens equation to analyze the lensing properties of arbitrary mass distributions (Keeton 2001). The aim of the present paper is to work out the possibilities of this technique in the computation of microlensing magnification patterns. In § 2 the properties of polygon transformation under the inverse lens equation are discussed from Taylor expansions. In § 3 algorithms and numerical methods are described. Section 4 is dedicated to a comparison between the new method and the inverse ray-shooting technique.
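As a point of reference, the zeroth-order IRS scheme described above (count the rays that hit each source-plane pixel and normalize by the unlensed ray density) can be sketched in a few lines. This is a minimal illustration, not the configuration used in the paper: the single point-mass lens and all grid sizes are assumptions.

```python
import numpy as np

# Minimal inverse ray-shooting (IRS) sketch for a single point-mass lens.
# Lengths are in Einstein radii; grid sizes are illustrative assumptions.

def deflection(x):
    """Scaled deflection angle of a point mass at the origin: alpha = x / |x|^2."""
    r2 = np.sum(x**2, axis=-1, keepdims=True)
    return x / r2

def irs_pattern(n_rays=400, x_half=4.0, n_pix=40, y_half=2.0):
    """Shoot a regular grid of rays; magnification ~ ray counts per source pixel."""
    s = np.linspace(-x_half, x_half, n_rays)
    X = np.stack(np.meshgrid(s, s, indexing="ij"), axis=-1).reshape(-1, 2)
    Y = X - deflection(X)                       # lens equation: y = x - alpha(x)
    # Bin the ray hits into source-plane pixels.
    edges = np.linspace(-y_half, y_half, n_pix + 1)
    counts, _, _ = np.histogram2d(Y[:, 0], Y[:, 1], bins=(edges, edges))
    # Normalize by the ray count expected per pixel without lensing.
    pix = 2 * y_half / n_pix
    rays_per_unlensed_pixel = (n_rays / (2 * x_half))**2 * pix**2
    return counts / rays_per_unlensed_pixel

mu = irs_pattern()
```

The normalization makes an unmagnified region read close to 1; the noise of this estimate is what the polygon-mapping scheme of this paper is designed to remove.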

The granulation of the mass distribution of the lens galaxies in stars or other compact objects induces uncorrelated variability between the different images of a lensed source (quasar or extragalactic microlensing). After its theoretical description (Chang & Refsdal 1979, 1984) and detection (Irwin et al. 1989), this effect was recognized as a potential tool to study the properties and distribution of compact objects in galaxies and to explore the unresolved structure of the active galactic nucleus (continuum source [see Wambsganss 2001, 2006; Kochanek 2004 and references therein] and broad-line region [Abajas et al. 2002; Lewis & Ibata 2004]). The effects of microlensing on the light curves of a lensed object are studied by finding configurations of compact objects at the lens galaxy and source trajectories that reproduce the observations. To obtain statistically acceptable estimates of the physical variables of interest, a large number of random realizations of light curves should be modeled. This implies a high computational cost to evaluate flux magnification as a function of position at the source plane (source-plane magnification patterns). The magnification patterns are usually computed with the inverse ray-shooting technique (Kayser et al. 1986; Schneider & Weiss 1987; Wambsganss 1990, 1999), although other methods (see Schneider et al. 1992; Lewis et al. 1993) have been proposed, some of them for specific scenarios, such as the ray-bundle method (Fluke et al. 1999), which was developed for applications in the weak-lensing limit.

1 Instituto de Astrofísica de Canarias, Vía Láctea S/N, 38200 La Laguna, Tenerife, Spain.
2 Departamento de Astronomía y Astrofísica, Universidad de Valencia, 46100 Burjassot, Valencia, Spain.
3 Departamento de Ciencias Aplicadas, Escuela Universitaria Francisco Tomás y Valiente, Alfonso XI 6, 11201 Algeciras, Cádiz, Spain.
4 Escuela Superior de Informática, Universidad de Castilla-La Mancha, Paseo de la Universidad 4, 13071 Ciudad Real, Spain.
5 Institute of Astronomy, School of Physics, University of Sydney, NSW 2006, Australia.

2. THE METHOD

2.1. First-Order Taylor Expansion: Linear Polygon Mapping

Let us consider a periodic lattice of rays at the image plane. This lattice defines the vertices of congruent polygons (unit cells) that tessellate the entire image plane. The coordinates of any of

the vertices of a given cell can be written as $x^i + \Delta x^i$, where $x^i$ is the center of the cell. The inverse image of the cell center is given by the dimensionless lens equation (see Schneider et al. 1992),

$$ y^i\left(x^i\right) = x^i - \alpha^i\left(x^i\right), \qquad (1) $$

where $y^i$ are the transformed coordinates of the cell center at the source plane and $\alpha^i$ is the scaled deflection angle. To a first order of approximation, the inverse image of a vertex offset by $\Delta x^i$ with respect to the cell center is

$$ y^i\left(x^i + \Delta x^i\right) = y^i\left(x^i\right) + \Delta x^i - \sum_j \frac{\partial \alpha^i}{\partial x^j}\,\Delta x^j, \qquad (2) $$

and the offset between the center and the vertex at the source plane is

$$ \Delta y^i = y^i\left(x^i + \Delta x^i\right) - y^i\left(x^i\right) = \Delta x^i - \sum_j \frac{\partial \alpha^i}{\partial x^j}\,\Delta x^j. \qquad (3) $$

In matrix notation,

$$ \begin{pmatrix} \Delta y^1 \\ \Delta y^2 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} \Delta x^1 \\ \Delta x^2 \end{pmatrix}, \qquad (4) $$

where

$$ a_{11} = 1 - \frac{\partial \alpha^1}{\partial x^1}, \qquad (5a) $$

$$ a_{12} = -\frac{\partial \alpha^1}{\partial x^2}, \qquad (5b) $$

$$ a_{21} = -\frac{\partial \alpha^2}{\partial x^1}, \qquad (5c) $$

$$ a_{22} = 1 - \frac{\partial \alpha^2}{\partial x^2}. \qquad (5d) $$

(Note that the definition of the deflection angle, $\alpha = \nabla \psi$, implies $a_{12} = a_{21}$. See Schneider et al. [1992].) This is an invertible linear transformation (except at the critical curves, where $\det A = 0$); hence, parallelograms are mapped onto parallelograms (see Appendix A). Thus, each of the polygonal cells from the image plane may suffer shearing, dilation, reflection, and rotation at the source plane (note that these transformation properties are true only in the linear approximation; in general, the back-traced image of a square cell can be any tetragon, even non-simply connected, if it encloses a critical curve). Each transformed polygon (inverse image of the cell) covers totally or partially one or more of the pixels defined in the source plane. Let $|S'|$ be the area of the transformed polygon subtended by a source-plane pixel. Then the fraction of area of the image-plane cell collected by the pixel is $|S| = |S'|/|\det A|$ (see Appendix A). Insofar as the transformation may be considered linear, the same proportionality ratio between the areas at the source and image planes holds for all the pixels covered by the transformed polygon. To compute the magnification of a pixel, we sum the areas of all the regions of the image plane mapped onto it according to the following steps: (1) use the lens equation to map the image-plane polygonal cells onto the source plane, (2) study the linearity of the transformation (observe that the ratio between areas, $|S'|/|S|$, is constant within the boundaries of the polygon) to identify out-of-linearity cells, (3) subdivide the out-of-linearity cells at the image plane to fulfill the linearity requirements (if needed), and (4) distribute the image-plane cell area among the pixels covered by the transformed polygon (in proportion to the area of the transformed polygon subtended by each pixel; see § 3.4). Steps 2 and 3 can be obviated in a basic implementation of the method.

In the zeroth-order scheme of the inverse ray-shooting technique, the transformation of area elements is not considered. In fact, to compute the magnification, the entire surface of a unit cell is implicitly assigned to a single pixel (the one hit by the ray). This procedure works consistently only if the number of rays is large enough to ensure that the transformed polygon is considerably smaller than the pixel size at the source plane. This is a very restrictive condition that leads to an inefficient computation.

2.2. Second-Order Taylor Expansion: Tests of Linearity

To estimate the variation of the magnification within a cell, we need to study the linearity of the transformation, which is given by the derivatives of the $a_{ij}$ components of the transformation matrix. We thus include second-order terms in equation (2),

$$ y^i\left(x^i + \Delta x^i\right) = y^i\left(x^i\right) + \Delta x^i - \sum_j \frac{\partial \alpha^i}{\partial x^j}\,\Delta x^j - \frac{1}{2} \sum_{j,k} \frac{\partial^2 \alpha^i}{\partial x^j\,\partial x^k}\,\Delta x^j\,\Delta x^k, \qquad (6) $$

or

$$ y^i\left(x^i + \Delta x^i\right) = y^i\left(x^i\right) + \sum_j a_{ij}\,\Delta x^j + \frac{1}{2} \sum_{j,k} a_{ijk}\,\Delta x^j\,\Delta x^k, \qquad (7) $$

where

$$ a_{ijk} = \frac{\partial a_{ij}}{\partial x^k}. \qquad (8) $$

If we take into account that

$$ a_{12} = a_{21}, \qquad (9a) $$

$$ a_{112} = a_{121} = a_{211}, \qquad (9b) $$

$$ a_{122} = a_{212} = a_{221}, \qquad (9c) $$

equation (7) can be written as

$$ y^1\left(x^i + \Delta x^i\right) = y^1\left(x^i\right) + a_{11}\,\Delta x^1 + a_{12}\,\Delta x^2 + \frac{1}{2}\left[ a_{111}\left(\Delta x^1\right)^2 + 2 a_{112}\,\Delta x^1\,\Delta x^2 + a_{122}\left(\Delta x^2\right)^2 \right], \qquad (10) $$

$$ y^2\left(x^i + \Delta x^i\right) = y^2\left(x^i\right) + a_{12}\,\Delta x^1 + a_{22}\,\Delta x^2 + \frac{1}{2}\left[ a_{112}\left(\Delta x^1\right)^2 + 2 a_{122}\,\Delta x^1\,\Delta x^2 + a_{222}\left(\Delta x^2\right)^2 \right]. \qquad (11) $$

This second-order Taylor expansion has nine independent coefficients,

$$ y^1\left(x^i\right),\ y^2\left(x^i\right),\ a_{11},\ a_{12},\ a_{22},\ a_{111},\ a_{112},\ a_{122},\ \text{and}\ a_{222}, \qquad (12) $$

that should be estimated from the application of the lens equation (eq. [1]) to the vertices (and perhaps the center) of the polygonal cell.
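The first-order mapping of § 2.1 can be illustrated numerically: for a small cell, the area of the back-traced polygon should equal $|\det A|$ times the cell area, which is the content of the relation $|S| = |S'|/|\det A|$. The sketch below is an illustration only: it assumes a single point-mass lens and builds $A$ by finite differences rather than from analytic derivatives.

```python
import numpy as np

# Sketch: check the first-order polygon mapping (eqs. [2]-[5]) for a point-mass
# lens. The matrix A is built from central differences of the lens equation;
# the mapped cell's area should approach |det A| times the cell area.
# (The point-mass lens, cell position, and step sizes are assumptions.)

def deflection(x):
    return x / np.dot(x, x)             # point-mass deflection, Einstein units

def lens_eq(x):
    return x - deflection(x)            # y = x - alpha(x)   (eq. [1])

def jacobian(x, h=1e-6):
    """A_ij = d y_i / d x_j by central differences."""
    A = np.zeros((2, 2))
    for j in range(2):
        dx = np.zeros(2); dx[j] = h
        A[:, j] = (lens_eq(x + dx) - lens_eq(x - dx)) / (2 * h)
    return A

def shoelace(poly):
    """Signed area of a polygon given as an (n, 2) vertex array."""
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

center = np.array([1.7, 0.9])           # cell center, away from the critical curve
d = 0.01                                 # half-side of the square cell
offsets = np.array([[-d, -d], [d, -d], [d, d], [-d, d]])
cell = center + offsets
mapped = np.array([lens_eq(v) for v in cell])

A = jacobian(center)
area_ratio = abs(shoelace(mapped)) / abs(shoelace(cell))
# To first order, area_ratio equals |det A|, so |S| = |S'| / |det A|.
```

The same check also exhibits the symmetry $a_{12} = a_{21}$ of equation (9a), since the point-mass deflection derives from a potential.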


To study the departures from linearity in a given cell, we can compute the contribution of the second-order terms to the separations between the vertices and the center of the transformed polygon, $y^i(x^i + \Delta x^i) - y^i(x^i)$. In the case of the square-centered cell (see Appendix B), the average of this quantity among the four vertices gives the displacement of the centroid of the transformed polygon, $\Delta y^i_{\rm cen} = y^i_{\rm cen} - y^i$, with respect to the transformed center. Its modulus is given by

$$ \Delta y_{\rm cen} = \sqrt{\left(\Delta y^1_{\rm cen}\right)^2 + \left(\Delta y^2_{\rm cen}\right)^2} = \frac{1}{2} \sqrt{\left(a_{111} + a_{122}\right)^2 + \left(a_{222} + a_{112}\right)^2}\,\left(\Delta x\right)^2. \qquad (13) $$

Using the derivatives, we can also estimate the variation of the magnification factor, $\det A = a_{11} a_{22} - \left(a_{12}\right)^2$, within the elementary cell from its gradient,

$$ \frac{\partial \det A}{\partial x^1} = a_{111} a_{22} + a_{11} a_{221} - 2 a_{12} a_{121}, \qquad (14) $$

$$ \frac{\partial \det A}{\partial x^2} = a_{112} a_{22} + a_{11} a_{222} - 2 a_{12} a_{122}. \qquad (15) $$

The variation at a displacement $\Delta x$ from the center of the cell is given by $\nabla(\det A) \cdot \Delta x$, and the maximum variation of $\det A$ within the cell boundaries is

$$ \Delta(\det A) = \left( \left| \frac{\partial \det A}{\partial x^1} \right| + \left| \frac{\partial \det A}{\partial x^2} \right| \right) \Delta x. \qquad (16) $$

3. COMMENTS ON THE NUMERICAL METHOD

A scheme for the computation of magnification patterns based on the inverse mapping of a lattice of rays may include the following steps: (1) initial tessellation of the image plane, (2) transformation of the lattice with the lens equation, (3) linearity control to detect cells whose transformed polygons do not fulfill the linearity condition (out-of-linearity cells), (4) subdivision and reprocessing of the out-of-linearity cells (adaptive tessellation of the image plane; see also Keeton 2001), and (5) apportioning of the cell area among the source-plane pixels covered by the transformed polygons. The inverse ray shooting (steps 1 and 2), the inverse polygon mapping (steps 1, 2, and 5), and the inverse polygon mapping with linearity control (steps 1, 2, 3, 4, and 5) may be considered as the zeroth-, first-, and second-order approaches of this scheme.

Although it is not a grid-based technique, the ray-bundle method (Fluke et al. 1999) is also inspired by the idea of comparing areas at the source and image planes (an infinitesimal bundle of rays that form a circular image is used to compute the magnification). However, this technique considers the contribution of only one image and is more suited to the weak-lensing limit. On the contrary, the inverse polygon mapping is based on the tessellation and apportioning of all the image plane among the source-plane pixels and therefore will not underestimate the magnification.

3.1. Tessellation of the Image Plane: Square and Square-centered Cells

A regular grid of rays is the natural option for the inverse ray-shooting technique. However, in the scheme of the inverse polygonal cell mapping, any tessellation can be selected to cover the image plane.
In fact, an adaptive tessellation changing the shape and/or size of the polygons along the image plane according to the variations of the magnification [inferred from $\nabla(\det A)$] is

Fig. 1.—Periodic square lattice (dotted lines) and conventional square-centered cells (solid lines). Each ray is denoted by a filled circle.

probably the best option. If we restrict ourselves to the simpler case of periodic lattices, we find that the primitive cells have only four vertices, giving eight conditions, which is not enough to determine the nine coefficients of the second-order Taylor expansion. This can be solved by selecting conventional cells such as those that are rectangularly centered (5 × 2 conditions per cell) or hexagonally centered (7 × 2 conditions per cell). In this study we use a conventional square-centered cell defined from the primitive square lattice (see Fig. 1). Note that the side of the square-centered cell is $\sqrt{2}$ times the side of the primitive square cell. In Appendix B we give the expressions for the coefficients of the second-order Taylor expansion in the case of the square-centered cell.

3.2. Transformation of the Polygonal Cells

In the case of square cells, the lattice of rays can be sequentially shot in a way identical to the inverse ray-shooting method. Each new ray in the lattice defines a new polygon (the packing fraction, i.e., the number of rays per unit cell, of the square lattice is 1). Thus, for a given square lattice, the number of rays that hit a pixel in the source plane in the absence of lensing (a reference quantity for computations using the inverse ray-shooting technique) equals the number of cells needed to cover an unlensed pixel. In the case of square-centered cells (see Fig. 1), one additional ray should be shot at the center of each unit cell of the basic square lattice (the packing fraction is 2). This diminishes the computational efficiency, in terms of traced rays, by a factor of 2.

3.3. Reprocessing of Out-of-Linearity Cells

The subdivision at the image plane of a critical cell should be done iteratively by dividing the cell according to some algorithm (see, for instance, Fig. 7) and checking the linearity of the transformation of each subcell until the condition of linearity is achieved in all the divisions of the cell.
In practice, the relatively small number of exceptions and their reduced impact on the evaluation of the magnification (as long as the area is progressively subdivided with each iteration) allow simpler approaches, such as


the establishment of a limit in the iteration. In the numerical experiments described below, we have used an even simpler algorithm consisting of subdividing each rejected cell into a subgrid of 16 × 16 rays that are processed as in the inverse ray-shooting method, assigning to each ray 1/(16 × 16) of the cell area (see Fig. 7).
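The linearity control and the 16 × 16 fallback described above can be sketched as follows. This is an illustration, not the paper's production code: the point-mass lens, the finite-difference steps, and the specific cells tested are assumptions, and the flag applies the $\Delta(\det A)/\det A \geq 1$ criterion later used in § 4.4.

```python
import numpy as np

# Sketch of the linearity control of Sec. 3.3: estimate the gradient of det A
# by finite differences, flag a cell as "out of linearity" when the maximum
# variation of det A across the cell (eq. [16]) is comparable to det A itself,
# and reprocess flagged cells as a 16 x 16 subgrid of plain rays.
# (Point-mass lens and numerical steps are illustrative assumptions.)

def lens_eq(x):
    return x - x / np.dot(x, x)         # point-mass lens, Einstein units

def det_A(x, h=1e-5):
    A = np.zeros((2, 2))
    for j in range(2):
        dx = np.zeros(2); dx[j] = h
        A[:, j] = (lens_eq(x + dx) - lens_eq(x - dx)) / (2 * h)
    return np.linalg.det(A)

def out_of_linearity(center, half_side, h=1e-4):
    """Eq. (16): Delta(det A) = (|d detA/dx1| + |d detA/dx2|) * Delta x."""
    g1 = (det_A(center + [h, 0]) - det_A(center - [h, 0])) / (2 * h)
    g2 = (det_A(center + [0, h]) - det_A(center - [0, h])) / (2 * h)
    delta = (abs(g1) + abs(g2)) * half_side
    return delta >= abs(det_A(center))

def reprocess_rays(center, half_side, n=16):
    """Fallback of Sec. 3.3: an n x n subgrid of rays, each carrying 1/n^2 of the cell area."""
    s = np.linspace(-half_side, half_side, n)
    sub = np.stack(np.meshgrid(s, s, indexing="ij"), axis=-1).reshape(-1, 2)
    return np.array([lens_eq(center + p) for p in sub])  # hits to be binned as in IRS

# A cell far from the Einstein ring passes the test; one straddling it does not.
far_cell  = out_of_linearity(np.array([3.0, 3.0]), 0.05)
crit_cell = out_of_linearity(np.array([1.001, 0.0]), 0.05)
```

For the point mass, $\det A = 1 - |x|^{-4}$ vanishes on the Einstein ring, so a cell near $|x| = 1$ is correctly flagged while a distant cell is not.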

3.4. Apportioning of the Mapped Polygons among the Source-Plane Pixels

A mapped polygon totally or partially covers one or more of the pixels defined in the source plane. We need to know what fraction of the image-plane cell area is collected by each one of the pixels. According to Green's theorem, the area of a simply connected region with boundary $C$ is given by the line integral

$$ \oint_C y^2\,dy^1, \qquad (17) $$

where $(y^1, y^2)$ are the source-plane coordinates. The transformed polygon is defined by four straight lines (the four polygon sides): $y^2_{AB}(y^1)$, $y^2_{BC}(y^1)$, $y^2_{CD}(y^1)$, and $y^2_{DA}(y^1)$. To compute the area of the polygon subtended by a square pixel of center $(I, J)$ and size 1, we should define the boundary of this region from (1) the polygon sides, (2) the pixel sides, and (3) the possible intersections of each polygon side with the pixel sides. For the $y^2_{AB}$ polygon side, the coordinates of the intersections with the bottom and top pixel sides, $(y^1_{AB-}, y^2_{AB-})$ and $(y^1_{AB+}, y^2_{AB+})$, respectively, are obtained by solving

$$ y^2_{AB}\left(y^1_{AB-}\right) = J - 0.5, \qquad (18) $$

$$ y^2_{AB}\left(y^1_{AB+}\right) = J + 0.5, \qquad (19) $$

and the coordinates of the intersections of the other polygon sides can be found in a similar way. Thus, it is straightforward to find that the area of the polygon subtended by the pixel can be written as the sum of four terms, one for each polygon side,

$$ S_{IJ} = S_{AB} + S_{BC} + S_{CD} + S_{DA}, \qquad (20) $$

with

$$ S_{AB} = -\int_{\max\left(y^1_A,\; I - 0.5,\; y^1_{AB-}\right)}^{\min\left(y^1_B,\; I + 0.5,\; y^1_{AB+}\right)} \left[ y^2_{AB} - (J - 0.5) \right] dy^1 \;-\; \int_{\min\left(y^1_B,\; I + 0.5,\; y^1_{AB+}\right)}^{\min\left(y^1_B,\; I + 0.5\right)} dy^1, \qquad (21) $$

where $y^1_A$ and $y^1_B$ are the $y^1$ coordinates of vertices $A$ and $B$, $y^1_A < y^1_B$, $y^1_A < I + 0.5$, $y^1_B > I - 0.5$, and $y^1_{AB-} < y^1_{AB+}$ (if $y^1_A > I + 0.5$ or $y^1_B < I - 0.5$, then $S_{AB} = 0$). The terms corresponding to $y^1_{AB-} > y^1_{AB+}$ and for the other polygon sides can be easily obtained. From these expressions it is simple to compute the fraction of the polygon area corresponding to each pixel that, after normalization to the image-plane cell area, is added to the pixel magnification. This procedure to calculate exactly the polygon area subtended by a pixel can be easily generalized to any simply connected polygon with an arbitrary number of sides. On the other hand, when the transformed tetragon (or polygon, in general) is non-simply connected (as may occur for image-plane cells of any geometry enclosing a critical curve), we divide it into two simply connected triangles and apply the same procedure to each of them (Kochanek & Blandford [1987] and Keeton [2001] also use triangles to divide cells, but at the image plane).

4. RESULTS: COMPARISON BETWEEN THE INVERSE POLYGON-MAPPING AND THE INVERSE RAY-SHOOTING METHODS

4.1. Point-Mass Lens

We study this case to compare both numerical methods, the IRS and IPM, with an exact solution. The computed magnification patterns correspond to a square region of size $y_l = 6$ at the source plane (in Einstein radius units). The number of pixels at the source plane is 2000 × 2000. A square region of size $x_l = 10$ is considered at the image plane. According to the scheme proposed in § 3, the first-order IPM algorithm that we apply consists of (1) tessellation of the image plane using a square lattice, (2) cell transformation, and (5) apportioning of the transformed polygons among the covered pixels. Using the IPM, we compute a magnification pattern considering image-plane cells of approximately the same size as the source-plane pixels. Using the IRS, we obtain several magnification patterns corresponding to 1, 71, 282, and 1024 rays pixel⁻¹ in the absence of gravitational lensing. A pattern from the exact analytical solution, $\mu_{\rm exact}(y^1, y^2)$, is also computed. The relative difference of a given pattern $\mu(y^1, y^2)$ with respect to the exact one is

$$ \epsilon\left(y^1, y^2\right) = \frac{\mu\left(y^1, y^2\right) - \mu_{\rm exact}\left(y^1, y^2\right)}{\mu_{\rm exact}\left(y^1, y^2\right)}. \qquad (22) $$

TABLE 1
Point-Mass Lens

Method | Rays per Unlensed Pixel^a | Rays per Pixel (average)^b | Error (σ)^c | Computing Time^d
IPM | 1.1 | 1.3 | 0.0004 | 1
IRS | 1.1 | 1.3 | 0.4 | 0.3
IRS | 71 | 85 | 0.03 | 5.7
IRS | 282 | 337 | 0.009 | 23
IRS | 1024 | 1224 | 0.004 | 79

^a Number of rays per unlensed pixel (see text).
^b Average number of rays collected per pixel.
^c Dispersion relative to the exact solution (see text).
^d Computing time relative to the IPM computation (see text).

The standard deviation, σ, of $\epsilon(y^1, y^2)$ for each pattern is given in Table 1. The IPM method with 1 ray per unlensed pixel obtains a very high accuracy, an order of magnitude better than that corresponding to the IRS with 1024 rays per unlensed pixel. A dependence of the IRS relative error on the average number of rays collected per pixel, $\sigma = N^{-0.79 \pm 0.01}$, is found (see data in Table 1), which is very close to the result of Kayser et al. (1986): $\sigma = N^{-3/4}$. According to this dependence, more than 25,000 rays pixel⁻¹ should be collected on average by the IRS to match the IPM error. As commented above, the great advantage of the IPM reflects that this method is based on a linear approximation to the lens mapping, while the IRS is based on a numerical simulation with inherent noise. In Table 1 we have also included the computing time of each pattern, normalized to the computing time corresponding to the IPM method. In principle, the IPM should have a great advantage, since the number of rays computed with this method is considerably smaller. However, the IPM includes significant overheads corresponding to the relatively high computational weight of the polygonal area apportioning with respect to the ray tracing.
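For reference, the exact pattern of § 4.1 and the error statistic of equation (22) can be sketched as follows. The closed-form magnification of a point-mass lens, $\mu(u) = (u^2 + 2)/[u\,(u^2 + 4)^{1/2}]$ with $u$ the source offset in Einstein radii, is the standard result; the grid sizes and the mock noisy pattern below are illustrative assumptions, not the paper's 2000 × 2000 computation.

```python
import numpy as np

# Sketch for Sec. 4.1: the exact point-mass magnification pattern and the
# relative-difference statistic of eq. (22). Grid sizes and the mock noisy
# pattern are illustrative assumptions.

def mu_exact(y1, y2):
    """Standard point-mass magnification, mu(u) = (u^2 + 2) / [u sqrt(u^2 + 4)]."""
    u = np.hypot(y1, y2)
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

def relative_error(mu, mu_ref):
    """Eq. (22): epsilon = (mu - mu_exact) / mu_exact."""
    return (mu - mu_ref) / mu_ref

# Exact pattern on a small source-plane grid (offset avoids the u = 0 singularity).
s = np.linspace(-3, 3, 200) + 0.015
Y1, Y2 = np.meshgrid(s, s, indexing="ij")
pattern = mu_exact(Y1, Y2)

# A mock "numerical" pattern with 1% multiplicative noise gives sigma(epsilon) ~ 0.01.
rng = np.random.default_rng(1)
mock = pattern * (1 + 0.01 * rng.standard_normal(pattern.shape))
sigma = np.std(relative_error(mock, pattern))
```

The standard deviation of $\epsilon$ over such a frame is the quantity tabulated in the Error (σ) column of Table 1.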

TABLE 2
Distribution of Microlenses

κ | Method | Lattice | Rays per Unlensed Pixel^a | x_l^b | N | σ_noise^c | Computing Time^d
0.09 | IPM | Square | 1.1 | 10 | 3 | 0.0005 ± 0.0003 | 1.5
0.09 | IRS | Square | 1.1 | 10 | 3 | 0.5 ± 0.1 | 0.4
0.09 | IRS | Square | 71 | 10 | 3 | 0.02 ± 0.009 | 8.6
0.09 | IRS | Square | 283 | 10 | 3 | 0.007 ± 0.004 | 36
0.09 | IPM | Square-centered | 2.2 | 10 | 3 | 0.0005 ± 0.0003 | 1.6
0.3 | IPM | Square | 1.1 | 12.8 | 15 | 0.0005 ± 0.0004 | 3.8
0.3 | IRS | Square | 1.1 | 12.8 | 15 | 0.5 ± 0.2 | 0.9
0.3 | IRS | Square | 71 | 12.8 | 15 | 0.03 ± 0.01 | 44
0.3 | IRS | Square | 257 | 12.8 | 15 | 0.01 ± 0.009 | 162
0.3 | IPM | Square-centered | 2.2 | 12.8 | 15 | 0.0005 ± 0.0004 | 4.7
0.77 | IPM | Square | 1.1 | 30 | 220 | 0.0003 ± 0.0001 | 82
0.77 | IRS | Square | 1.1 | 30 | 220 | 0.3 ± 0.09 | 44
0.77 | IRS | Square | 70 | 30 | 220 | 0.02 ± 0.006 | 2762.3
0.77 | IRS | Square | 266 | 30 | 220 | 0.006 ± 0.003 | 11917
0.77 | IPM | Square-centered | 2.2 | 30 | 220 | 0.0003 ± 0.0001 | 143.1

^a Number of rays per unlensed pixel (see text).
^b Size of the image-plane region (see text).
^c Dispersion relative to the signal (see text).
^d Computing time relative to the IPM computation for a point mass (see Table 1).

Fig. 2.—Magnification profiles computed with the inverse ray-shooting (IRS) and the inverse polygon-mapping (IPM) methods, using κ = 0.1. Comparison among IRS with 1 ray pixel⁻¹ (thin solid line in the bottom panel), IRS with 71 rays pixel⁻¹ (dotted lines), IRS with 283 rays pixel⁻¹ (solid lines of intermediate thickness), and IPM with 1 ray pixel⁻¹ (thick solid lines). Bottom: Caustic crossings (includes a zoom of a single crossing). Middle: Detail of a region inner to the caustics. Top: Detail of a region of relatively low magnification. Pixel size is 0.003 Einstein radii.


Fig. 3.—Magnification profiles computed with the IRS and IPM methods, using κ = 0.3. Comparison among IRS with 1 ray pixel⁻¹ (thin solid line in the bottom panel), IRS with 71 rays pixel⁻¹ (dotted lines), IRS with 257 rays pixel⁻¹ (solid lines of intermediate thickness), and IPM with 1 ray pixel⁻¹ (thick solid lines). Bottom: Caustic crossings (includes a zoom of a double crossing). Middle: Detail of a region inner to the caustics. Top: Detail of a region of relatively low magnification. Pixel size is 0.003 Einstein radii.

In the case of the point-mass particle, the impact of the overheads is maximized. In spite of this, the improvement in computational efficiency of the IPM is extraordinary. For instance, the computing time of an IPM frame with a relative error of σ = 0.0004 is only 4% of the computing time of an IRS frame with a modest σ = 0.01.

4.2. Discrete Distribution of Microlenses

To compare inverse polygon mapping with inverse ray shooting in a range of projected mass densities, we have computed with both methods magnification patterns corresponding to dimensionless surface mass densities, κ, of 0.1, 0.3, and 0.7, covering a square region of size $y_l = 6$ at the source plane (in Einstein radius units). The number of pixels at the source plane is 2000 × 2000. We consider at the image plane a region of dimension

$$ x_l = 1.5\,\frac{y_l}{1 - \kappa}. \qquad (23) $$

(The 1.5 factor is to limit the effect of mapping a finite-sized region of a discrete distribution of microlenses; see Schneider & Weiss [1987].) Distributing 3, 15, and 220 microlenses of 1 $M_\odot$ in this region, we obtain, respectively, surface densities of κ = 0.09, 0.3, and 0.77, slightly different from the nominal values. The microlenses were randomly distributed in the κ = 0.3 and 0.77 cases. For each κ-value we have computed one magnification pattern that applies the inverse polygon mapping to a square lattice corresponding to 1 ray per unlensed pixel and another three different realizations of the magnification pattern that use the inverse ray-shooting method with approximately 1, 64, and 256 rays per unlensed pixel. In Table 2 we detail the parameters of the different magnification patterns. To estimate the noise in each magnification pattern, $\mu(y^1, y^2)$, we have computed the ratio

$$ Q\left(y^1, y^2\right) = \frac{\mu\left(y^1, y^2\right)}{\mu_{\rm filt}\left(y^1, y^2\right)}, \qquad (24) $$

where $\mu_{\rm filt}(y^1, y^2)$ is the magnification pattern after applying a median filter of 5 pixels × 5 pixels. We then measured the standard deviation, $\sigma_{\rm noise}$, in 10 different regions of $Q(y^1, y^2)$ that were not affected by caustics. The estimates made for all the values of κ (see Table 2) imply that the accuracy of the IPM with 1 ray pixel⁻¹ is more than an order of magnitude better than the accuracy of the IRS with 256 rays pixel⁻¹.
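The noise estimate of equation (24) can be sketched as follows. The plain-NumPy 5 × 5 median filter and the synthetic caustic-free pattern are illustrative stand-ins for the patterns actually measured.

```python
import numpy as np

# Sketch of the noise estimate of eq. (24): Q = mu / mu_filt, with mu_filt a
# 5 x 5 median filter, and sigma_noise the standard deviation of Q in a
# caustic-free region. The synthetic pattern is an assumption used only to
# exercise the estimator.

def median_filter5(img):
    """5 x 5 median filter with edge replication (plain-NumPy sketch)."""
    pad = np.pad(img, 2, mode="edge")
    stacks = [pad[2 + di : 2 + di + img.shape[0], 2 + dj : 2 + dj + img.shape[1]]
              for di in range(-2, 3) for dj in range(-2, 3)]
    return np.median(np.stack(stacks), axis=0)

def noise_sigma(mu):
    """sigma_noise: standard deviation of Q = mu / mu_filt   (eq. [24])."""
    q = mu / median_filter5(mu)
    return np.std(q)

rng = np.random.default_rng(0)
smooth = np.ones((64, 64)) * 3.0                 # caustic-free region, mu ~ const
noisy = smooth * (1 + 0.05 * rng.standard_normal(smooth.shape))
```

On a smooth pattern the estimator returns zero; on a pattern with 5% multiplicative noise it recovers a dispersion of about 0.05, which is how the σ_noise column of Table 2 is read.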


Fig. 4.—Magnification profiles computed with the IRS and IPM methods, using κ = 0.8. Comparison among IRS with 1 ray pixel⁻¹ (thin solid line in the bottom panel), IRS with 70 rays pixel⁻¹ (dotted lines), IRS with 265 rays pixel⁻¹ (solid lines of intermediate thickness), and IPM with 1 ray pixel⁻¹ (thick solid lines). Bottom: Caustic crossings (includes a zoom of a double crossing). Middle: Detail of a region inner to the caustics. Top: Detail of a region of relatively low magnification. Pixel size is 0.003 Einstein radii.

In Figures 2, 3, and 4, some magnification profiles (tracks along the magnification patterns) are presented to illustrate these results. Magnification profiles with several caustic crossings are shown in the bottom panels. The middle panels correspond to intermediate-magnification regions. Finally, the top panels represent low-magnification regions. If we look at the bottom panels of Figures 2, 3, and 4, the great improvement of the inverse polygon mapping with respect to the inverse ray shooting is obvious when we apply both methods to the same lattice of 1 ray pixel⁻¹. In the bottom panels, the differences between the inverse polygon mapping and the inverse ray shooting with 64 or 256 rays pixel⁻¹ are imperceptible. If we look at the middle and top panels (Figs. 2, 3, and 4), we see that the inverse ray-shooting method with 64 and 256 rays pixel⁻¹ approximates rather well the inverse polygon-mapping method (1 ray pixel⁻¹), but is noisy in high-magnification regions and much noisier away from the caustics (top panels in Figs. 2, 3, and 4). The IPM magnification profiles present almost inappreciable traces of noise, owing to the great improvement in the signal-to-noise ratio of the IPM with respect to the IRS.

To quantify the differences between the patterns obtained with the inverse ray shooting (256 rays pixel⁻¹) and the inverse polygon mapping (1 ray pixel⁻¹), we have computed the relative differences,

$$ \epsilon\left(y^1, y^2\right) = \frac{\mu_{\rm IPM}\left(y^1, y^2\right) - \mu_{\rm IRS}\left(y^1, y^2\right)}{\mu_{\rm IPM}\left(y^1, y^2\right)}, \qquad (25) $$

where $\mu_{\rm IPM}(y^1, y^2)$ and $\mu_{\rm IRS}(y^1, y^2)$ are the magnification patterns obtained with the inverse polygon-mapping and the inverse ray-shooting methods, respectively. The averages of the absolute values of the relative differences are $\langle|\epsilon|\rangle = 0.006 \pm 0.007$, $0.009 \pm 0.010$, and $0.006 \pm 0.007$ for κ = 0.1, 0.3, and 0.77, respectively. When we take into account the estimates of the dispersion for each magnification pattern (see Table 2), these results imply full statistical consistency between the different pairs of patterns. In addition to the great improvement in accuracy, the computing times (Table 2) also imply a spectacular increase in computational efficiency (about 2 orders of magnitude for κ = 0.3 and 0.77) when using the inverse polygon mapping. However, for a number of microlenses greater than 50, the use of an algorithm to separate the long- and short-range gravitational effects of the microlenses (e.g., Wambsganss 1990, 1999; Kochanek 2004) can


TABLE 3
Comparison with an IRS Hierarchical Tree Code

κ | Method | Lattice | Rays per Unlensed Pixel^a | x_l^b | N | σ_noise^c | Computing Time^d
0.7 | IRS (hierarch. tree code) | Square | 352 | 32 | 1104 | 0.008 ± 0.003 | 5457.7
0.7 | IPM | Square | 1.0 | 32 | 1104 | 0.0007 ± 0.0002 | 196.5
0.7 | IPM | Square | 0.12 | 32 | 1104 | 0.002 ± 0.001 | 54.8
0.7 | IPM | Square-centered | 0.24 | 32 | 1104 | 0.002 ± 0.001 | 100

^a Number of rays per unlensed pixel (see text).
^b Size of the image-plane region (see text).
^c Dispersion relative to the signal (see text).
^d Computing time relative to the IPM computation for a point mass (see Table 1).

substantially speed up the computation of the deflection angles of the rays. In § 4.3 we discuss the impact of this improvement on the comparison between the IRS and the IPM.

4.3. Comparison with an IRS Hierarchical Tree Code

We have used a frame computed with a hierarchical tree code (Wambsganss 1999) for κ = 0.7. As inputs for our IPM code, we have used the microlens positions assigned by the hierarchical tree code and the same square regions for shooting and receiving the rays at the image and source planes, respectively. With the IPM we have considered two lattices, one with 1 ray pixel⁻¹ and the other with 0.12 rays pixel⁻¹. The main parameters of the patterns are presented in Table 3, and two of the patterns (IRS and IPM with 1 ray pixel⁻¹) are shown in Figure 5. In Figure 6 we show the magnification histograms of the three frames, which are almost identical. According to the results of Table 3, the IPM method with 1 ray pixel⁻¹ achieves an order-of-magnitude better accuracy than the hierarchical tree code and uses only 3% of the computing time. With 0.12 rays pixel⁻¹, the IPM still obtains a better accuracy than the IRS and is more than 2 orders of magnitude faster. Insofar as this result has been obtained with an IPM code that uses a plain (nonhierarchical) algorithm to compute the ray deflection, the potential advantage of the IPM with respect to the IRS hierarchical tree code is substantially greater.

4.4. Reprocessing of Critical Cells

As the results of §§ 4.1, 4.2, and 4.3 indicate, inverse polygon mapping applied to a square lattice of 1 ray per unlensed pixel seems to work very well even without control of the variation of the magnification within the polygonal cells. To study the impact of the departures from linearity in the polygon transformation, we have repeated the computations of the magnification patterns for κ = 0.09, 0.3, and 0.77 with the inverse polygon mapping, but using a square-centered lattice (obtained by adding a ray at the middle point of each original square) to estimate the second-order derivatives within each transformed polygonal cell. With this information we calculate the relative variation of det A within the boundaries of the cell (see eq. [16]). Thus, out-of-linearity cells that exceed a given threshold in Δ(det A) (see eq. [16]) can be identified and reprocessed. Here we consider as out of linearity those cells with Δ(det A)/det A ≥ 1 that enclose a critical curve and can be mapped onto non-simply connected polygons. The algorithm for apportioning the area of the transformed polygons among the source-plane pixels identifies and properly distributes these polygons but does not take into account the strong magnification gradient within them. We can reprocess the critical cells in a very simple way by subdividing each one into a grid of 16 × 16 rays, which are processed using an IRS algorithm (see Fig. 7). The resulting hybrid magnification patterns have the caustics computed with the IRS and the rest of the pattern computed with the IPM. With these patterns, it is straightforward

Fig. 5.— Magnification patterns for κ = 0.7. Top left: inverse polygon mapping with 1 ray pixel⁻¹; top right: inverse ray shooting computed with a hierarchical tree code (385 rays pixel⁻¹). Bottom: details of the above magnification patterns (left: IPM; right: IRS). Pixel size is 0.003 Einstein radii.

Fig. 6.— Comparison among the histograms of the magnification patterns for κ = 0.7 obtained with inverse ray shooting with 385 rays pixel⁻¹ (solid line), inverse polygon mapping with 1 ray pixel⁻¹ (squares), and inverse polygon mapping with 0.12 rays pixel⁻¹ (triangles).

950

MEDIAVILLA ET AL.

Vol. 653

to compute the relative differences between the caustics computed with the IPM and with the IRS (256 rays pixel⁻¹), ⟨|ε|⟩_caust (see eq. [25]), which are 0.008, 0.007, and 0.005 for κ = 0.09, 0.3, and 0.77, respectively. These values are similar to those obtained comparing the IRS and IPM frames without reprocessing (see ⟨|ε|⟩ in § 4.2). Thus, the resulting patterns do not show noticeable differences with respect to those obtained without reprocessing of the critical cells.

However, the control and reprocessing of the critical or, in general, out-of-linearity cells could be crucial if less conservative image-plane tessellations (than the square lattice of 1 ray per unlensed pixel adopted by us) were used to optimize the computation. To illustrate this we have computed a pattern using a square-centered lattice of 0.24 rays pixel⁻¹ with reprocessing of the critical cells (see Table 3). This pattern is the reprocessed version of the 0.12 rays pixel⁻¹ one (see Table 3). If we compare both patterns with the one generated with 1 ray pixel⁻¹, we find that in the caustics the relative error improves from 0.02 in the 0.12 rays pixel⁻¹ case to 0.008 when the critical cells are reprocessed (0.24 rays pixel⁻¹ case).

In any case, the control and reprocessing steps are needed to complete a procedure in which the noise of the magnification patterns can be consistently controlled from the variation of the linearity within the cells. This is another improvement with respect to the inverse ray-shooting technique. The cost of obtaining the information regarding the linearity of the transformation at each cell is to multiply the number of rays by a factor of 2. Owing to the computing overheads associated with the polygon apportioning, this has no significant computational impact. On the other hand, the subdivision of each critical cell implies the computation of 256 additional rays. Even so, the number of critical cells is relatively small, even in the κ = 0.77 case (194,352 critical cells out of a total of 110,250,000 cells), and reprocessing affects the computing time by less than a factor of 2 (see Table 2).

Fig. 7.— Example of a reprocessed transformed polygon. Points A, B, C, and D are the vertices of the transformed polygon. Each one of the dots represents one of the 16 × 16 rays corresponding to the rectangular grid into which the cell is subdivided at the image plane.

5. CONCLUSIONS

We have proposed a new approach to computing microlensing magnification patterns that is based on the properties of polygon transformation under the inverse mapping defined by the lens equation. We have studied the method in a range of projected surface densities (κ = 0.09, 0.3, and 0.77) and have compared the results with those obtained with the inverse ray-shooting technique. The following conclusions are worthy of note:

1. The IPM method, provided with an exact algorithm for apportioning polygons among square pixels, gives very good results even for a lattice of square cells corresponding to 1 ray per unlensed pixel (relative error 5 × 10⁻⁴). This precision is very difficult to attain with the IRS.

2. The IPM obtains this high accuracy in only a fraction (less than 4%) of the computing time used by the IRS with a hierarchical tree code algorithm to achieve an error larger by more than an order of magnitude. The improvement in computing speed of the IPM with respect to the IRS can be of more than 2 orders of magnitude when the same accuracy is required of both methods. This improvement in efficiency could be crucial in new fields of research (such as the Bayesian analysis of experimental data based on microlensing simulations, which is currently limited by computational cost; see Kochanek 2004).

3. The IPM allows direct control of the lens-mapping properties at each image-plane cell. In particular, the variations of the magnification within each transformed polygon can be easily controlled when considering a lattice of square-centered cells. In this way, we have identified the cells enclosing a critical curve and subdivided them into 16 × 16 subcells that are processed with the inverse ray-shooting technique. This is a very simple and immediate approximation toward an adaptive tessellation of the image plane, which could start with cells even greater than the unlensed pixel, with subsequent progressive resizing and/or reshaping of the cells according to specific requirements of linearity in the cell transformation (i.e., limits on the variation of the magnification within a given polygon).
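The detection-and-reprocessing step summarized in conclusion 3 can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: it uses a single point-mass lens in Einstein-radius units (for which det A = 1 − 1/|x|⁴) as a stand-in for the full microlens field, and it estimates the variation of det A across a cell by probing the cell corners; the function names are hypothetical.

```python
import numpy as np

def lens_map(x):
    # Point-mass microlens at the origin, Einstein-radius units:
    # y = x - x/|x|^2 (illustrative stand-in for the full lens equation).
    r2 = np.sum(x**2, axis=-1, keepdims=True)
    return x - x / r2

def det_A(x):
    # det A = 1 - 1/|x|^4 for a single point mass.
    r2 = np.sum(x**2, axis=-1)
    return 1.0 - 1.0 / r2**2

def is_critical(center, half):
    # Flag a cell as out of linearity when the variation of det A across
    # the cell is comparable to det A itself (the Delta(det A)/det A >= 1
    # criterion); here the variation is probed at the four cell corners.
    corners = center + half * np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]])
    d0 = det_A(center[None, :])[0]
    return np.abs(det_A(corners) - d0).max() >= np.abs(d0)

def reprocess_irs(center, half, n=16):
    # Subdivide the critical cell into an n x n grid of rays and return
    # their source-plane positions, to be binned IRS-style.
    s = (np.arange(n) + 0.5) / n * 2 - 1        # n offsets in (-1, 1)
    gx, gy = np.meshgrid(s, s)
    rays = center + half * np.stack([gx.ravel(), gy.ravel()], axis=-1)
    return lens_map(rays)
```

A cell straddling the Einstein ring (where det A changes sign) is flagged and subdivided into 16 × 16 = 256 rays, while a cell far from the critical curve is processed with the plain polygon mapping.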

We are grateful to the anonymous referee for his valuable comments and suggestions. We thank Joachim Wambsganss for useful comments. This work was supported by the European Community's Sixth Framework Marie Curie RTN (MRTN-CT-505183 "ANGLES") and by the Ministerio de Educación y Ciencia of Spain through grants AYA2004-08243-C03-01 and AYA2004-08243-C03-03.

APPENDIX A

To a first-order approximation, the lens mapping can be locally seen as a linear, invertible (except at critical curves) transformation of R^2 that maps parallelograms onto parallelograms. Transformations of this kind can in general be written as a composition of a rotation, a dilation (including reflection), and a shear. If we take into account that the linear approximation to the lens mapping is, in addition, symmetric, we can find a relationship among the rotation angle, \phi, the dilation factors, a and b, and the shear factor, n:

n = \frac{a+b}{a}\tan\phi ,  (A1)

No. 2, 2006

EFFICIENT COMPUTATION OF MAGNIFICATION PATTERNS

951

and we can write

A = \begin{pmatrix} a_{11} & a_{12} \\ a_{12} & a_{22} \end{pmatrix}
  = \begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix}
    \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix}
    \begin{pmatrix} 1 & \dfrac{a+b}{a}\tan\phi \\ 0 & 1 \end{pmatrix} ,  (A2)

where

\tan\phi = \frac{a_{12}}{a_{11}} ,  (A3a)

a = \sqrt{a_{11}^2 + a_{12}^2} ,  (A3b)

b = \frac{\det A}{a} .  (A3c)

These parameters can be written, as is usual (see Schneider et al. 1992), in terms of the convergence (\kappa) and the shear (\gamma_1, \gamma_2):

\begin{pmatrix} a_{11} & a_{12} \\ a_{12} & a_{22} \end{pmatrix}
= \begin{pmatrix} 1-\kappa-\gamma_1 & -\gamma_2 \\ -\gamma_2 & 1-\kappa+\gamma_1 \end{pmatrix} ,  (A4)

\tan\phi = \frac{-\gamma_2}{1-\kappa-\gamma_1} ,  (A5a)

a = \sqrt{(1-\kappa-\gamma_1)^2 + \gamma_2^2} ,  (A5b)

b = \frac{(1-\kappa)^2 - \gamma_1^2 - \gamma_2^2}{\sqrt{(1-\kappa-\gamma_1)^2 + \gamma_2^2}} .  (A5c)

Values of \det A < 0 imply reflections (b < 0) between the sides of the parallelogram, as expected for saddle-point images. For values of \det A > 0, the handedness of the parallelogram is preserved. In this case, if \mathrm{trace}(A) = (a+b)/\cos\phi > 0 (image at a minimum), we have -\pi/2 < \phi < \pi/2, and if \mathrm{trace}(A) < 0 (image at a maximum), we have \pi/2 < \phi < 3\pi/2 (see also Schneider et al. 1992). As far as the linear approximation holds, the relation between the areas of a cell and its transformed polygon, or between any surface element within the cell (defined as the wedge product of two vectors, u and v), S = u \wedge v, and its transform, S' = u' \wedge v', is given by

\frac{S}{S'} = \frac{1}{\det A} .  (A6)
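Equation (A6) is the core of the polygon-mapping idea: the magnification contributed by a cell is the ratio of its image-plane area to the area of its transformed polygon. A minimal numerical check, assuming a single point-mass lens (for which \det A = 1 - 1/|x|^4 in Einstein units; this choice of lens is an illustration, not the paper's microlens field), is:

```python
import numpy as np

def lens_map(x1, x2):
    # Point-mass lens at the origin (Einstein units): y = x - x/|x|^2.
    r2 = x1**2 + x2**2
    return x1 - x1 / r2, x2 - x2 / r2

def shoelace(v):
    # Signed area of a polygon given as an (n, 2) array of vertices.
    v = np.asarray(v)
    x, y = v[:, 0], v[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

# A small square cell centered at x = (2, 1), half-side dx:
cx, cy, dx = 2.0, 1.0, 1e-3
cell = np.array([[cx - dx, cy - dx], [cx + dx, cy - dx],
                 [cx + dx, cy + dx], [cx - dx, cy + dx]])
poly = np.array([lens_map(u, v) for u, v in cell])  # transformed cell

S = (2 * dx)**2               # cell area at the image plane
Sp = abs(shoelace(poly))      # area of the transformed polygon

# det A = 1 - 1/|x|^4 for the point mass; eq. (A6): S/S' = 1/det A.
detA = 1.0 - 1.0 / (cx**2 + cy**2)**2
print(S / Sp, 1.0 / detA)     # the two ratios agree to high accuracy
```

The residual difference between the two ratios is of order (dx)^2, i.e., it measures exactly the departure from linearity that Appendix B quantifies.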

APPENDIX B

The transformation of the coordinates of a vertex, x^i + \Delta x^i, defined by its offset, \Delta x^i, with respect to the center of the polygon, x^i, given by the second-order Taylor expansion of the lens equation (see eqs. [10] and [11]), is

y_1(x^i + \Delta x^i) = y_1(x^i) + a_{11}\,\Delta x_1 + a_{12}\,\Delta x_2 + \tfrac{1}{2}\left[ a_{111}(\Delta x_1)^2 + 2a_{112}\,\Delta x_1\,\Delta x_2 + a_{122}(\Delta x_2)^2 \right] ,  (B1)

y_2(x^i + \Delta x^i) = y_2(x^i) + a_{12}\,\Delta x_1 + a_{22}\,\Delta x_2 + \tfrac{1}{2}\left[ a_{112}(\Delta x_1)^2 + 2a_{122}\,\Delta x_1\,\Delta x_2 + a_{222}(\Delta x_2)^2 \right] .  (B2)

In the case of a square of center x^i and side 2\Delta x, the coordinates of the four vertices are given by

x_A^i = x^i - \begin{pmatrix} \Delta x \\ \Delta x \end{pmatrix} ,  (B3a)

x_B^i = x^i + \begin{pmatrix} \Delta x \\ -\Delta x \end{pmatrix} ,  (B3b)

x_C^i = x^i + \begin{pmatrix} \Delta x \\ \Delta x \end{pmatrix} ,  (B3c)

x_D^i = x^i + \begin{pmatrix} -\Delta x \\ \Delta x \end{pmatrix} ,  (B3d)

and the transformations for the four vertices of the square are

y_1^A = y_1(x^i) - (a_{11}+a_{12})\,\Delta x + \tfrac{1}{2}(a_{111}+2a_{112}+a_{122})(\Delta x)^2 ,  (B4a)

y_2^A = y_2(x^i) - (a_{12}+a_{22})\,\Delta x + \tfrac{1}{2}(a_{112}+2a_{122}+a_{222})(\Delta x)^2 ,  (B4b)

y_1^B = y_1(x^i) + (a_{11}-a_{12})\,\Delta x + \tfrac{1}{2}(a_{111}-2a_{112}+a_{122})(\Delta x)^2 ,  (B4c)

y_2^B = y_2(x^i) + (a_{12}-a_{22})\,\Delta x + \tfrac{1}{2}(a_{112}-2a_{122}+a_{222})(\Delta x)^2 ,  (B4d)

y_1^C = y_1(x^i) + (a_{11}+a_{12})\,\Delta x + \tfrac{1}{2}(a_{111}+2a_{112}+a_{122})(\Delta x)^2 ,  (B4e)

y_2^C = y_2(x^i) + (a_{12}+a_{22})\,\Delta x + \tfrac{1}{2}(a_{112}+2a_{122}+a_{222})(\Delta x)^2 ,  (B4f)

y_1^D = y_1(x^i) - (a_{11}-a_{12})\,\Delta x + \tfrac{1}{2}(a_{111}-2a_{112}+a_{122})(\Delta x)^2 ,  (B4g)

y_2^D = y_2(x^i) - (a_{12}-a_{22})\,\Delta x + \tfrac{1}{2}(a_{112}-2a_{122}+a_{222})(\Delta x)^2 .  (B4h)

From these equations we obtain

a_{11} = \frac{y_1^B - y_1^A + y_1^C - y_1^D}{4\,\Delta x} ,  (B5a)

a_{12} = \frac{y_2^B - y_2^A + y_2^C - y_2^D}{4\,\Delta x} ,  (B5b)

a_{22} = \frac{y_2^C - y_2^B + y_2^D - y_2^A}{4\,\Delta x} ,  (B5c)

a_{112} = \frac{y_1^A - y_1^B + y_1^C - y_1^D}{4(\Delta x)^2} ,  (B5d)

a_{122} = \frac{y_2^A - y_2^B + y_2^C - y_2^D}{4(\Delta x)^2} ,  (B5e)

a_{111} = \frac{y_1^A + y_1^B + y_1^C + y_1^D - 4y_1(x^i)}{2(\Delta x)^2} - a_{122} ,  (B5f)

a_{222} = \frac{y_2^A + y_2^B + y_2^C + y_2^D - 4y_2(x^i)}{2(\Delta x)^2} - a_{112} .  (B5g)

From equations (B5f) and (B5g), we have

y_1^A + y_1^B + y_1^C + y_1^D - 4y_1(x^i) = 4y_{\rm cen}^1 - 4y_1(x^i) = 2(a_{111}+a_{122})(\Delta x)^2 ,  (B6a)

y_2^A + y_2^B + y_2^C + y_2^D - 4y_2(x^i) = 4y_{\rm cen}^2 - 4y_2(x^i) = 2(a_{222}+a_{112})(\Delta x)^2 ,  (B6b)

where y_{\rm cen}^i are the coordinates of the centroid of the four vertices of the transformed cell. The absolute value of the displacement of the centroid of the transformed cell with respect to the transformed center, \Delta y_{\rm cen}^i = y_{\rm cen}^i - y_i(x^i), is then

\Delta y_{\rm cen} = \sqrt{(\Delta y_{\rm cen}^1)^2 + (\Delta y_{\rm cen}^2)^2} = \tfrac{1}{2}\sqrt{(a_{111}+a_{122})^2 + (a_{222}+a_{112})^2}\,(\Delta x)^2 .  (B7)
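Equations (B5a)-(B5g) and (B7) can be verified on a mapping that is exactly quadratic, for which the finite-difference formulas recover the coefficients exactly. The coefficient values below are arbitrary test inputs, and the transformed center y(x^i) is taken as the origin for simplicity.

```python
import numpy as np

# Synthetic quadratic mapping with known (arbitrary) coefficients:
a11, a12, a22 = 0.7, -0.2, 1.1
a111, a112, a122, a222 = 0.3, -0.5, 0.4, 0.2

def y(d1, d2):
    # Second-order expansion around the cell center, eqs. (B1)-(B2),
    # with y(x^i) = 0.
    y1 = a11*d1 + a12*d2 + 0.5*(a111*d1**2 + 2*a112*d1*d2 + a122*d2**2)
    y2 = a12*d1 + a22*d2 + 0.5*(a112*d1**2 + 2*a122*d1*d2 + a222*d2**2)
    return np.array([y1, y2])

dx = 0.1
A_, B_, C_, D_ = y(-dx, -dx), y(dx, -dx), y(dx, dx), y(-dx, dx)  # eq. (B3)
yc = y(0.0, 0.0)                                   # transformed center

# First derivatives, eqs. (B5a)-(B5c):
a11_r = (B_[0] - A_[0] + C_[0] - D_[0]) / (4*dx)
a12_r = (B_[1] - A_[1] + C_[1] - D_[1]) / (4*dx)
a22_r = (C_[1] - B_[1] + D_[1] - A_[1]) / (4*dx)

# Second derivatives, eqs. (B5d)-(B5g):
a112_r = (A_[0] - B_[0] + C_[0] - D_[0]) / (4*dx**2)
a122_r = (A_[1] - B_[1] + C_[1] - D_[1]) / (4*dx**2)
a111_r = (A_[0] + B_[0] + C_[0] + D_[0] - 4*yc[0]) / (2*dx**2) - a122_r
a222_r = (A_[1] + B_[1] + C_[1] + D_[1] - 4*yc[1]) / (2*dx**2) - a112_r

# Centroid displacement, eq. (B7):
cen = (A_ + B_ + C_ + D_) / 4 - yc
pred = 0.5 * np.hypot(a111 + a122, a222 + a112) * dx**2
assert np.isclose(np.linalg.norm(cen), pred)
```

Because the test mapping has no terms beyond second order, every recovered coefficient matches its input value to machine precision; for a real lens mapping the residuals would be of order the third derivatives times (\Delta x)^2.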

REFERENCES

Abajas, C., Mediavilla, E., Muñoz, J. A., Popović, L. Č., & Oscoz, A. 2002, ApJ, 576, 640
Blandford, R. D., & Kochanek, C. S. 1987, ApJ, 321, 658
Chang, K., & Refsdal, S. 1979, Nature, 282, 561
———. 1984, A&A, 132, 168
Fluke, C. J., Webster, R. L., & Mortlock, D. J. 1999, MNRAS, 306, 567
Irwin, M. J., Webster, R. L., Hewett, P. C., Corrigan, R. T., & Jedrzejewski, R. I. 1989, AJ, 98, 1989
Kayser, R., Refsdal, S., & Stabell, R. 1986, A&A, 166, 36
Keeton, C. R. 2001, ApJ, submitted (astro-ph/0102340)

Kochanek, C. S. 2004, ApJ, 605, 58
Kochanek, C. S., & Blandford, R. D. 1987, ApJ, 321, 676
Lewis, G. F., & Ibata, R. A. 2004, MNRAS, 348, 24
Lewis, G. F., Miralda-Escudé, J., Richardson, D. C., & Wambsganss, J. 1993, MNRAS, 261, 647
Schneider, P., Ehlers, J., & Falco, E. E. 1992, Gravitational Lenses (Berlin: Springer)
Schneider, P., & Weiss, A. 1987, A&A, 171, 49

Wambsganss, J. 1990, Ph.D. thesis, Ludwig-Maximilians-Univ., Munich
———. 1999, J. Comput. Appl. Math., 109, 353
———. 2001, in ASP Conf. Ser. 237, Gravitational Lensing: Recent Progress and Future Goals, ed. T. G. Brainerd & C. S. Kochanek (San Francisco: ASP), 185
———. 2006, in Gravitational Lensing: Strong, Weak and Micro, ed. G. Meylan, P. Jetzer, & P. North (Berlin: Springer), 453
