Good Lattice Rules in Weighted Korobov Spaces with General Weights

Josef Dick^1, Ian H. Sloan^1, Xiaoqun Wang^{1,2}, and Henryk Woźniakowski^{3,4}

^1 School of Mathematics, University of New South Wales, Sydney 2052, Australia
^2 Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China
^3 Department of Computer Science, Columbia University, New York, NY 10027, USA
^4 Institute of Applied Mathematics and Mechanics, University of Warsaw, Poland

April 22, 2005

Summary. We study the problem of multivariate integration and the construction of good lattice rules in weighted Korobov spaces with general weights. These spaces are not necessarily tensor products of spaces of univariate functions. Sufficient conditions for tractability and strong tractability of multivariate integration in such weighted function spaces are found. These conditions are also necessary if the weights are such that the reproducing kernel of the weighted Korobov space is pointwise non-negative. The existence of a lattice rule which achieves the nearly optimal convergence order is proven. A component-by-component (CBC) algorithm that constructs good lattice rules is presented. The resulting lattice rules achieve tractability or strong tractability error bounds and achieve nearly optimal convergence order for suitably decaying weights. We also study special weights such as finite-order and order-dependent weights. For these special weights, the cost of the CBC algorithm is polynomial. Numerical computations show that the lattice rules constructed by the CBC algorithm give much smaller worst-case errors than the mean worst-case errors over all quasi-Monte Carlo rules or over all lattice rules, and generally smaller worst-case errors than the best Korobov lattice rules in dimensions up to hundreds. Numerical results are provided to illustrate the efficiency of CBC lattice rules and Korobov lattice rules (with suitably chosen weights), in particular for high-dimensional finance problems.

Keywords: Quasi-Monte Carlo methods, lattice rules, multivariate integration.

2000 Mathematics Subject Classification: 65C05, 65D30, 65D32.

Email addresses: [email protected], [email protected], [email protected], [email protected]. Corresponding author: Xiaoqun Wang.

1 Introduction

We consider the multivariate integration problem in a Hilbert space $H_s$ of periodic integrable real functions defined over the $s$-dimensional unit cube,
$$ I_s(f) = \int_{[0,1]^s} f(x)\,\mathrm{d}x \qquad \forall\, f \in H_s. $$

High-dimensional integrals are usually approximated by a quasi-Monte Carlo (QMC) algorithm of the form
$$ Q_{n,s}(f) = Q_{n,s}(f, \{x_k\}) := \frac{1}{n} \sum_{k=0}^{n-1} f(x_k), \qquad (1) $$
where $x_0, \ldots, x_{n-1}$ are points in $[0,1]^s$ chosen in some deterministic manner. Lattice rules are an important type of QMC algorithms; see [10, 12] for the theory of lattice rules.

We assume that $H_s$ is a reproducing kernel Hilbert space and $K_s(x,y)$ is its reproducing kernel. That is, $K_s(\cdot,y) \in H_s$ for all $y \in [0,1]^s$, and
$$ f(y) = \langle f, K_s(\cdot,y)\rangle \qquad \forall\, f \in H_s,\ y \in [0,1]^s, $$
where $\langle\cdot,\cdot\rangle$ is the inner product of $H_s$, and $\|f\|_{H_s} = \langle f,f\rangle^{1/2}$. Define
$$ h_s(x) := \int_{[0,1]^s} K_s(x,y)\,\mathrm{d}y \qquad \forall\, x \in [0,1]^s. $$

We assume that $h_s \in H_s$. Then $I_s$ is a continuous linear functional with the reproducer $h_s$, i.e., $I_s(f) = \langle f, h_s\rangle$ for all $f \in H_s$, and $\|I_s\| = \|h_s\|_{H_s}$.

The worst-case error of the algorithm $Q_{n,s}$ in the space $H_s$ is defined as
$$ e(Q_{n,s}; H_s) := \sup\{|I_s(f) - Q_{n,s}(f)| : f \in H_s,\ \|f\|_{H_s} \le 1\}. $$
For $n = 0$, we do not sample the function, and define the initial error as the error for $Q_{0,s}(f) = 0$,
$$ e(0; H_s) := \sup\{|I_s(f)| : f \in H_s,\ \|f\|_{H_s} \le 1\} = \|I_s\|. $$
For $n \ge 1$, it is well known, see for example [15], that the square worst-case error can be expressed as
$$ e^2(Q_{n,s}; H_s) = \int_{[0,1]^{2s}} K_s(x,y)\,\mathrm{d}x\,\mathrm{d}y - \frac{2}{n} \sum_{k=0}^{n-1} \int_{[0,1]^s} K_s(x_k, x)\,\mathrm{d}x + \frac{1}{n^2} \sum_{k,l=0}^{n-1} K_s(x_k, x_l). \qquad (2) $$
This explicit expression for the worst-case error in terms of $K_s$ is the feature that makes reproducing kernel Hilbert spaces especially useful. Let
$$ n(\varepsilon, H_s) := \min\{n : \exists\, Q_{n,s} \text{ of the form (1) such that } e(Q_{n,s}; H_s) \le \varepsilon \|I_s\|\} $$

be the minimal number of function evaluations needed to reduce the initial error by a factor $\varepsilon \in (0,1)$ by a QMC algorithm (1). Multivariate integration is said to be (QMC-) tractable in the sequence of spaces $\{H_s\}$ if $n(\varepsilon, H_s)$ is bounded by a polynomial in $\varepsilon^{-1}$ and $s$, i.e., if
$$ n(\varepsilon, H_s) \le C \varepsilon^{-p} s^q \qquad \forall\, \varepsilon \in (0,1),\ \forall\, s = 1, 2, \ldots, \qquad (3) $$
for some non-negative $C$, $p$, $q$. The numbers $p$ and $q$ are called $\varepsilon$- and $s$-exponents of tractability; we stress that they are not uniquely defined. The problem is said to be (QMC-) strongly tractable if that bound is only a polynomial in $\varepsilon^{-1}$, i.e., when (3) holds with $q = 0$. The infimum of the numbers $p$ satisfying (3) with $q = 0$ is called the $\varepsilon$-exponent of strong tractability.

We stress that in this paper we restrict ourselves to QMC algorithms. Obviously, the study of tractability and strong tractability when we allow arbitrary algorithms is of great importance. Due to Smolyak's theorem, see for example [18], it is enough to consider linear algorithms, i.e., algorithms of the form $\sum_{k=0}^{n-1} a_k f(x_k)$ for some $a_k$ not necessarily equal to $n^{-1}$ as in the QMC algorithms. The subject of tractability for algorithms with arbitrary $a_k$ for spaces with the so-called product weights has been studied in [11], and is left for future research for general weights. To simplify nomenclature, we will be using tractability and strong tractability as the shortened versions of QMC-tractability and QMC-strong tractability.

The tractability of multivariate integration and the construction of lattice rules that achieve strong tractability error bounds have been extensively studied [8, 9, 15, 16]. Most of the results are obtained under the assumption that the underlying function spaces $H_s$ are tensor product Hilbert spaces, see Section 2. In such a case, if non-negative weights $\gamma_1, \gamma_2, \ldots, \gamma_s$ are introduced to moderate the relative importance of successive variables, with $\gamma_j$ being associated with the $j$th coordinate $x_j$, then the interactions of the group of variables $x_j$ with $j \in u \subseteq \{1, 2, \ldots, s\}$ are automatically associated with weights of the product form $\gamma_u := \prod_{j \in u} \gamma_j$. Such a setting is important. However, the product form assumption is not always realistic.
For example, we may face the situation that the interactions involving two variables are important, but those involving more than two variables can be ignored (this is the typical situation in many financial problems, see [3, 19, 20]). We may also face the situation that only interactions of a specific order are important. In such cases, it is desirable to introduce general weights which describe the relative importance of each distinct group of variables. General weights have been previously used for Sobolev spaces in [6, 7].

We pay special attention to finite-order and order-dependent weights. Finite-order weights are especially suited for functions which mostly depend on interactions of only a few variables, whereas order-dependent weights suit functions whose interactions involving the same number of variables have the same or nearly the same importance. We believe that finite-order and order-dependent weights capture the behaviour of many multivariate functions occurring in practical applications, including finance problems. As we shall see, such weights significantly simplify the computations and guarantee tractability or even strong tractability of multivariate integration.

In a recent paper [5], we studied the question of how to choose the weights in the setting of tensor product weighted Sobolev spaces [15]. A similar problem is studied in [21] in the setting of tensor product weighted Korobov spaces. In the tensor product setting, the correct scaling of the weights turned out to be crucial. In the present work, where we are, in effect, allowed to make a separate choice of weights for each group of variables, the question of scaling turns out to be unimportant; for details see Section 2. The essential difference is that in the product case the only way to describe the relative importance of groups of variables with different cardinality is through scaling. There is no such limitation in the general case, and little or no importance is attached to the way in which the weights are scaled.

The central problem in the theory of lattice rules is to find "good" lattice rules, i.e., rules that have the optimal order of convergence. The component-by-component (CBC) algorithm for constructing good (shifted) lattice rules in spaces with product weights has been studied in a series of recent papers, and some good theoretical properties of such lattice rules have been revealed [4, 8, 9]. A similar CBC algorithm will be developed here for more general spaces with general weights.

The outline of this paper is as follows. In Section 2, we introduce the weighted Korobov space with general weights. The existence of good lattice rules and the sufficient (and in some cases necessary) conditions for tractability or strong tractability in such weighted function spaces are established in Section 3. In Section 4, a CBC algorithm for constructing good rank-1 lattice rules is proposed. The essence of this algorithm is that the generator of a lattice rule is found one component at a time. The corresponding lattice rule is proved to satisfy a strong tractability error bound under an appropriate condition on the weights, and to have optimal order of convergence. Important special choices of weights, namely finite-order and order-dependent weights, are studied in Section 5. For these special weights, the cost of the CBC algorithm is polynomial.
Some numerical tests for specific functions and high-dimensional finance problems are presented in Section 6 which demonstrate the good performance of the lattice rules constructed by the CBC algorithm.
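The CBC idea of choosing one component of the generator at a time can be made concrete with a small sketch. The Python function below is my own illustration, not the paper's pseudocode: it greedily picks each component of the generating vector to minimize the squared worst-case error of the rank-1 rule (given in closed form in Section 3) for product weights and $\alpha = 2$, where the inner Fourier sum reduces to $2\pi^2 B_2(\{t\})$ with $B_2(x) = x^2 - x + 1/6$ (an identity recalled in Section 2).

```python
import math

def cbc_generator(n, s, gamma, ):
    """Component-by-component search (sketch): for d = 1..s choose z_d in
    {1,...,n-1} minimizing the squared worst-case error of the rank-1 rule,
    keeping the earlier components fixed.  Product weights gamma[j], alpha = 2,
    n assumed prime; uses sum_{h != 0} e^{2 pi i h t}/h^2 = 2 pi^2 B_2({t})."""
    def b2(t):
        t -= math.floor(t)
        return t * t - t + 1.0 / 6.0

    z = []
    for d in range(1, s + 1):
        best_z, best_err = None, None
        for cand in range(1, n):
            zd = z + [cand]
            # Squared worst-case error for product weights:
            # e^2 = -1 + (1/n) sum_k prod_j (1 + gamma_j * 2 pi^2 B_2({k z_j / n}))
            err = -1.0
            for k in range(n):
                prod = 1.0
                for j, zj in enumerate(zd):
                    prod *= 1.0 + gamma[j] * 2.0 * math.pi ** 2 * b2(k * zj / n)
                err += prod / n
            if best_err is None or err < best_err:
                best_z, best_err = cand, err
        z.append(best_z)
    return z, best_err
```

For $n = 5$, $s = 1$, $\gamma_1 = 1$ every admissible $z_1$ generates the same point set $\{0, 1/5, \ldots, 4/5\}$, with squared error $2\zeta(2)/5^2 = \pi^2/75$; the exhaustive inner loop makes the sketch exponentially expensive, which is exactly what the fast constructions for the special weights of Section 5 avoid.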

2 Weighted Korobov spaces with general weights

In this section we define a generalization of the weighted Korobov spaces of [16]. First we introduce some notation. Let $\mathbb{Z} = \{\ldots, -2, -1, 0, 1, 2, \ldots\}$ be the set of integers and $\mathbb{Z}_0 = \mathbb{Z} \setminus \{0\}$. Let $\mathbb{Z}^m$ and $\mathbb{Z}_0^m$ be the $m$-fold Cartesian products of $\mathbb{Z}$ and $\mathbb{Z}_0$, respectively (so if $z \in \mathbb{Z}_0^m$ then none of its components is zero). For $x = (x_1, x_2, \ldots, x_s)$ and $u \subseteq \{1, 2, \ldots, s\}$, by $x_u$ we denote the $|u|$-dimensional vector containing the components of $x$ with indices in $u$. Here, $|u|$ is the cardinality of the subset $u$.

For $\alpha > 1$ and arbitrary non-negative $\gamma_{s,u}$, define the weighted Korobov space $H_{s,\alpha}$ to be the Hilbert space with reproducing kernel
$$ K_{s,\alpha}(x, y) = 1 + \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_{s,u}\, K_{u,\alpha}(x_u, y_u), \qquad (4) $$
where
$$ K_{u,\alpha}(x_u, y_u) = \prod_{j \in u} \left( \sum_{h \in \mathbb{Z}_0} \frac{e^{2\pi i h (x_j - y_j)}}{|h|^\alpha} \right) = \prod_{j \in u} 2 D_\alpha(\{x_j - y_j\}), $$
with
$$ D_\alpha(t) := \sum_{h=1}^\infty \frac{\cos(2\pi h t)}{h^\alpha}, \qquad t \in [0,1], \qquad (5) $$

and the notation $\{x\}$ means the fractional part of $x$.

We stress that $\gamma_{s,u}$ may depend on the dimension $s$. We sometimes write $\gamma_{s,u}$ as $\gamma_u$ to simplify the notation. We use the convention that $\gamma_{s,\emptyset} = 1$. Many formulas of this paper make sense only for positive weights $\gamma_u$. If some $\gamma_u$ are zero these formulas should be interpreted as the limiting case of positive $\gamma_u$'s, where we adopt the convention that $\infty \cdot 0 = 0$.

It is convenient to write the kernel $K_{s,\alpha}(x, y)$ as
$$ K_{s,\alpha}(x, y) = \sum_{h \in \mathbb{Z}^s} r(h, \gamma)^{-1} e^{2\pi i h \cdot (x - y)}, $$
where $h \cdot x$ denotes the standard inner product of $h$ and $x$, $h \cdot x = \sum_{j=1}^s h_j x_j$, and
$$ r(h, \gamma) = \begin{cases} 1, & \text{if } h = (0, \ldots, 0), \\ \gamma_{u_h}^{-1} \prod_{j \in u_h} |h_j|^\alpha, & \text{if } h \ne (0, \ldots, 0), \text{ where } u_h = \{j : h_j \ne 0\}. \end{cases} $$
Hence, $r(h, \gamma)$ is well defined if $\gamma_{u_h}$ is not zero. For $\gamma_{u_h} = 0$ we formally set $r(h, \gamma) = \infty$. The inner product of the weighted Korobov space $H_{s,\alpha}$ is
$$ \langle f, g \rangle_{H_{s,\alpha}} = \sum_{h \in \mathbb{Z}^s} r(h, \gamma)\, \hat{f}(h)\, \overline{\hat{g}(h)}, \qquad (6) $$
where $\hat{f}(h) = \int_{[0,1]^s} \exp(-2\pi i h \cdot x)\, f(x)\,\mathrm{d}x$ is the Fourier coefficient of the function $f$. If some $r(h, \gamma) = \infty$ we assume that $\hat{f}(h) = 0$ for all $f$ from $H_{s,\alpha}$. Then the corresponding term in the sum (6) disappears, due to our convention that $\infty \cdot 0 = 0$. For the extreme case when all $\gamma_u = 0$, the space $H_{s,\alpha}$ becomes a one-dimensional space of constant functions.

It is easy to verify that the function $K_{s,\alpha}(x, y)$ is indeed the reproducing kernel of the Hilbert space with the inner product (6). In fact, for all $y \in [0,1]^s$ we have
$$ \langle f, K_{s,\alpha}(\cdot, y) \rangle_{H_{s,\alpha}} = \sum_{h \in \mathbb{Z}^s} r(h, \gamma)\, \hat{f}(h)\, r(h, \gamma)^{-1} e^{2\pi i h \cdot y} = f(y). $$

For the weighted Korobov space $H_{s,\alpha}$ the reproducer of multivariate integration is $h_s = 1$, since
$$ \int_{[0,1]^s} K_{s,\alpha}(x, y)\,\mathrm{d}y = 1 \qquad \forall\, x \in [0,1]^s. \qquad (7) $$
This also implies that the initial error is $e(0; H_{s,\alpha}) = 1$.

For an arbitrary point set
$$ P = \{x_k = (x_{k,1}, \ldots, x_{k,s}) : k = 0, 1, \ldots, n-1\} \subset [0,1]^s, $$
it follows from (2) and (7), see also [16], that the square worst-case error of $Q_{n,s}(f)$ over the unit ball of $H_{s,\alpha}$ is
$$ e^2(Q_{n,s}; H_{s,\alpha}) = \frac{1}{n^2} \sum_{k=0}^{n-1} \sum_{l=0}^{n-1} \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_u \prod_{j \in u} \left( \sum_{h \in \mathbb{Z}_0} \frac{e^{2\pi i h (x_{k,j} - x_{l,j})}}{|h|^\alpha} \right). \qquad (8) $$

Note that if $\alpha$ is an even integer, the reproducing kernel $K_{s,\alpha}(x, y)$ is related to the Bernoulli polynomial $B_\alpha(x)$. For $\alpha$ even, we have
$$ B_\alpha(x) = \frac{(-1)^{\alpha/2+1}\, \alpha!}{(2\pi)^\alpha} \sum_{h \in \mathbb{Z}_0} \frac{e^{2\pi i h x}}{|h|^\alpha} \qquad \forall\, x \in [0,1]. $$
In this case the kernel (4) can be written as
$$ K_{s,\alpha}(x, y) = 1 + \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_u \left( \frac{(2\pi)^\alpha}{(-1)^{\alpha/2+1}\, \alpha!} \right)^{|u|} \prod_{j \in u} B_\alpha(\{x_j - y_j\}).
$$
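For $\alpha = 2$ the identity above gives $D_2(t) = \pi^2 B_2(\{t\})$ with $B_2(x) = x^2 - x + 1/6$. A quick numerical sanity check of this closed form (a sketch; the truncation level and tolerance are arbitrary choices of mine):

```python
import math

def d_alpha_series(t, alpha, terms=100_000):
    """Truncated Fourier series D_alpha(t) = sum_{h>=1} cos(2*pi*h*t)/h^alpha, eq. (5)."""
    return sum(math.cos(2 * math.pi * h * t) / h ** alpha for h in range(1, terms + 1))

def d2_closed(t):
    """Closed form for alpha = 2 via the Bernoulli polynomial B_2(x) = x^2 - x + 1/6:
    D_2(t) = pi^2 * B_2({t})."""
    t -= math.floor(t)
    return math.pi ** 2 * (t * t - t + 1.0 / 6.0)

# Spot-check the identity at a few points; note D_2(0) = zeta(2) = pi^2/6.
for t in (0.0, 0.1, 0.25, 0.5, 0.9):
    assert abs(d_alpha_series(t, 2) - d2_closed(t)) < 1e-4
```

The tolerance reflects the truncation: the tail of the series is bounded by roughly one over the number of retained terms.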

For general $\alpha > 1$ and $f \in H_{s,\alpha}$ we have
$$ \|f\|_{H_{s,\alpha}}^2 = |\hat{f}(0)|^2 + \sum_{0 \ne h \in \mathbb{Z}^s} \frac{1}{\gamma_{u_h}} \prod_{j \in u_h} |h_j|^\alpha\, |\hat{f}(h)|^2, $$
where, as before, $u_h = \{j : h_j \ne 0\}$. From this we get, for $\alpha \le \beta$ and $f \in H_{s,\beta}$, that $\|f\|_{H_{s,\alpha}} \le \|f\|_{H_{s,\beta}}$. This means that the unit ball of $H_{s,\beta}$ is a subset of the unit ball of $H_{s,\alpha}$, and therefore for any algorithm $Q_{n,s}$ we have
$$ e(Q_{n,s}; H_{s,\beta}) \le e(Q_{n,s}; H_{s,\alpha}) \qquad \forall\, \alpha \le \beta. $$
Hence, multivariate integration in $H_{s,\beta}$ is not harder than multivariate integration in $H_{s,\alpha}$.

Definition 1. The weights $\{\gamma_{s,u}\}$ are product iff for all $s = 1, 2, \ldots$, and all $u \subseteq \{1, 2, \ldots, s\}$ we have
$$ \gamma_{s,u} = \prod_{j \in u} \gamma_{s,j} \qquad \text{for some non-negative } \{\gamma_{s,j}\}. \qquad (9) $$

(We use the convention that $\prod_{j \in u} \gamma_{s,j} = 1$ if $u = \emptyset$.) We say that the weights $\{\gamma_{s,u}\}$ are non-product if they are not product weights. □

For product weights, the kernel $K_{s,\alpha}(x, y)$ can be written as the product of kernels:
$$ K_{s,\alpha}(x, y) = \prod_{j=1}^s K_{1,\alpha,\gamma_{s,j}}(x_j, y_j), \qquad (10) $$
where
$$ K_{1,\alpha,\gamma_{s,j}}(x, y) = 1 + \gamma_{s,j} \sum_{h \in \mathbb{Z}_0} \frac{e^{2\pi i h (x - y)}}{|h|^\alpha} = 1 + 2\gamma_{s,j} D_\alpha(\{x - y\}). $$
The Hilbert space of the kernel $K_{s,\alpha}(x, y)$ is then the tensor product of reproducing kernel Hilbert spaces of univariate functions:
$$ H_{s,\alpha} = H(K_{1,\alpha,\gamma_{s,1}}) \otimes H(K_{1,\alpha,\gamma_{s,2}}) \otimes \cdots \otimes H(K_{1,\alpha,\gamma_{s,s}}), $$
where $H(K_{1,\alpha,\gamma_{s,j}})$ is a reproducing kernel Hilbert space of univariate functions with the kernel $K_{1,\alpha,\gamma_{s,j}}(x, y)$. However, for non-product weights the space $H_{s,\alpha}$ is not a tensor product space. In this paper, we are interested in general weights which are not necessarily product weights.

An important case of weights is when they depend on $u$ only through the cardinality of $u$.

Definition 2. The weights $\{\gamma_{s,u}\}$ are order-dependent iff for all $s = 1, 2, \ldots$, and all $u \subseteq \{1, 2, \ldots, s\}$ we have
$$ \gamma_{s,u} := \Gamma_{s,|u|}, \qquad (11) $$
where $\Gamma_{s,0} = 1$ and $\Gamma_{s,1}, \ldots, \Gamma_{s,s}$ are arbitrary non-negative numbers. □
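The equivalence of the general form (4) and the product form (10) for product weights is easy to verify numerically. A small consistency check (helper names are mine), using the $\alpha = 2$ closed form for $D_\alpha$:

```python
import math
from itertools import combinations

def d2(t):
    """D_2(t) = pi^2 * B_2({t}) with B_2(x) = x^2 - x + 1/6 (alpha = 2)."""
    t -= math.floor(t)
    return math.pi ** 2 * (t * t - t + 1.0 / 6.0)

def kernel_general(x, y, weights):
    """Kernel (4): 1 + sum over non-empty u of gamma_u * prod_{j in u} 2 D_2(x_j - y_j).
    `weights` maps frozenset(u) -> gamma_u (0-based indices)."""
    s = len(x)
    val = 1.0
    for r in range(1, s + 1):
        for u in combinations(range(s), r):
            g = weights.get(frozenset(u), 0.0)
            if g:
                prod = 1.0
                for j in u:
                    prod *= 2.0 * d2(x[j] - y[j])
                val += g * prod
    return val

def kernel_product(x, y, gamma):
    """Kernel (10) for product weights gamma[j]."""
    val = 1.0
    for j in range(len(x)):
        val *= 1.0 + 2.0 * gamma[j] * d2(x[j] - y[j])
    return val

# For product weights gamma_u = prod_{j in u} gamma_j, forms (4) and (10) agree.
gamma = [0.9, 0.5, 0.2]
w = {frozenset(u): math.prod(gamma[j] for j in u)
     for r in range(1, 4) for u in combinations(range(3), r)}
x, y = (0.1, 0.7, 0.3), (0.55, 0.2, 0.9)
assert abs(kernel_general(x, y, w) - kernel_product(x, y, gamma)) < 1e-12
```

For non-product weights only `kernel_general` applies, and for finite-order weights of order $q^*$ its subset loop can be truncated at $r = q^*$, which is the source of the $O(s^{q^*})$ kernel size noted below.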

Another important case of weights is when they are zero if the cardinality of $u$ is large.

Definition 3. The weights $\{\gamma_{s,u}\}$ are finite-order if there exists an integer $q$ such that
$$ \gamma_{s,u} = 0 \qquad \text{for all } s \text{ and for all } u \text{ with } |u| > q. \qquad (12) $$
The finite-order weights are of order $q^*$ if $q^*$ is the smallest integer $q$ with this property. □

Note that order-dependent weights are product iff $\Gamma_{s,\ell} = a_s^\ell$ for some non-negative $a_s$. Clearly, order-dependent weights are finite-order iff for all $s$, $\Gamma_{s,j} = 0$ for $j \ge q+1$, and finite-order weights are product iff for all $s$, $\gamma_{s,u} = 0$ for all $u \ne \emptyset$. For weights of order $q^*$, the reproducing kernel $K_{s,\alpha}$ consists of $O(s^{q^*})$ terms. We shall study finite-order weights and order-dependent weights in detail in Section 5.

We finish this section with a remark about the normalization of the weights $\gamma_u$. From the definition of the worst-case error it follows that for $f \in H_{s,\alpha}$ we have the error bound
$$ |I_s(f) - Q_{n,s}(f)| \le e(Q_{n,s}; H_{s,\alpha})\, |f|_{H_{s,\alpha}}, \qquad (13) $$
where $|f|_{H_{s,\alpha}}$ is the semi-norm corresponding to (6):
$$ |f|_{H_{s,\alpha}} = \left( \sum_{0 \ne h \in \mathbb{Z}^s} \gamma_{u_h}^{-1} \prod_{j \in u_h} |h_j|^\alpha\, |\hat{f}(h)|^2 \right)^{1/2}. \qquad (14) $$

The fact that we can use the semi-norm rather than the norm on the right-hand side of (13) is because the QMC rule (1) is exact for constant functions.

Suppose now that each weight $\gamma_u$ for a non-empty subset $u$ of $\{1, 2, \ldots, s\}$ is replaced by $\tau \gamma_u$ for some positive number $\tau$. Then we see from (8) that the corresponding worst-case error is multiplied by $\tau^{1/2}$, while from (14) the semi-norm is divided by the factor $\tau^{1/2}$. Thus the bound on the right-hand side of (13) is unaffected by this change. Although the scaling of the weights does not change the error bound, it changes the sequence of Korobov spaces $H_{s,\alpha}$ through the change of their norms. Since the concepts of tractability and strong tractability refer to a sequence of specific spaces, we may have tractability or strong tractability of multivariate integration depending on the scaling. This point will become clearer when we present the error bounds in terms of the weights.

It is worth emphasizing that the scaling of product weights is quite different. Indeed, we now have $\gamma_u = \prod_{j \in u} \gamma_j$, and if we multiply each underlying weight $\gamma_j$ by $\tau$ then $\gamma_u$ is multiplied by $\tau^{|u|}$. In fact, it is easy to check that for non-zero $\gamma_j$ and $\tau \ne 1$, the weights $\tau \gamma_u$ are no longer product weights. The scaling of product weights therefore affects the relative importance of the weights $\gamma_{u_1}$ and $\gamma_{u_2}$ for two subsets $u_1$ and $u_2$ of different cardinality. For this reason, the paper [5] abandoned the normalization $\gamma_1 = 1$ introduced in [15], instead attempting to choose the normalization so as to minimize the error bound. The paper [21] made another attempt to choose suitable weights for practical applications.

Given that everything of significance for the error bound with weights $\gamma_{s,u}$ is unchanged if the weights are multiplied by a constant factor, it sometimes makes sense to consider weights such that $s$-dimensional integration is no harder than $(s+1)$-dimensional integration. This holds if the unit balls $B_s$ of the spaces $H_{s,\alpha}$ for $s = 1, 2, \ldots$ are non-decreasing:
$$ B_1 \subseteq B_2 \subseteq B_3 \subseteq \cdots, \qquad (15) $$
where $B_s = \{f \in H_{s,\alpha} : \|f\|_{H_{s,\alpha}} \le 1\}$ is the unit ball of the space $H_{s,\alpha}$ of functions of $s$ variables defined on $[0,1]^s$. Here we assume that a function $f \in H_{s,\alpha}$ of $s$ variables is treated as a function of $s+1$ variables which is independent of the $(s+1)$th variable. Then $f \in H_{s+1,\alpha}$ and therefore $H_{s,\alpha} \subseteq H_{s+1,\alpha}$. It is not hard to check that (15) holds iff
$$ \gamma_{s,u} \le \gamma_{s+1,u} \qquad \forall\, s \ge 1 \text{ and } u \subseteq \{1, 2, \ldots, s\}. \qquad (16) $$

Definition 4. The weights $\{\gamma_{s,u}\}$ are nested iff they satisfy (16). □

Obviously, if the weights $\gamma_{s,u}$ are independent of the dimension $s$, i.e., $\gamma_{s,u} = \gamma_u$, then they are nested. Observe that the product weights studied in [16] are of the form
$$ \gamma_{s,u} = \prod_{j \in u} \gamma_j \qquad \forall\, u \subseteq \{1, 2, \ldots, s\},\ \forall\, s = 1, 2, \ldots. $$
Since they do not depend on $s$, they are nested. In this case, $\gamma_{s,u} = \gamma_{s+1,u}$ for all $u \subseteq \{1, 2, \ldots, s\}$, and the norms of $f \in H_{s,\alpha}$ are the same for all $H_{d,\alpha}$ with $d \ge s$. Similarly, order-dependent weights are nested if the $\Gamma_{s,|u|}$ in (11) do not depend on $s$, that is, if
$$ \gamma_{s,u} = \Gamma_{|u|} \qquad \forall\, u \subseteq \{1, 2, \ldots, s\},\ \forall\, s = 1, 2, \ldots. $$

3 The existence of good lattice rules and tractability

We consider the rank-1 lattice point set
$$ P = P(z) := \left\{ \left\{ \frac{k z}{n} \right\} : k = 0, 1, \ldots, n-1 \right\}, \qquad (17) $$
where $z = (z_1, \ldots, z_s)$ is an integer vector with no factor in common with $n$, and the fractional part $\{\cdot\}$ is applied componentwise. In this paper we consider the case when $n$ is a prime number; for a non-prime number $n$, similar results can be established by applying the approach used in [4, 9]. For a prime $n$, each component of $z$ can be chosen from the restricted set $Z_n := \{1, 2, \ldots, n-1\}$. Let $Z_n^s$ denote the $s$-fold copy of $Z_n$.

For the lattice point set (17), the square worst-case error in (8) can be simplified to
$$ e_{n,s}^2(z) := e^2(Q_{n,s}; H_{s,\alpha}) = \frac{1}{n} \sum_{k=0}^{n-1} \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_u \prod_{j \in u} \left( \sum_{h \in \mathbb{Z}_0} \frac{e^{2\pi i h k z_j / n}}{|h|^\alpha} \right). \qquad (18) $$
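For $\alpha = 2$ the inner sum in (18) has the closed form $2\pi^2 B_2(\{k z_j/n\})$, so $e_{n,s}^2(z)$ can be evaluated exactly at cost proportional to $n$ times the number of weighted subsets. A minimal sketch (the weight dictionary and the example generator are arbitrary illustrations of mine):

```python
import math

def d2(t):
    """D_2(t) = pi^2 * ({t}^2 - {t} + 1/6); see (5) and the Bernoulli identity."""
    t -= math.floor(t)
    return math.pi ** 2 * (t * t - t + 1.0 / 6.0)

def sq_worst_case_error(n, z, weights):
    """Squared worst-case error (18) for alpha = 2 and general weights.
    `weights` maps a tuple of 0-based coordinate indices u -> gamma_u."""
    err = 0.0
    for k in range(n):
        for u, gamma_u in weights.items():
            prod = 1.0
            for j in u:
                prod *= 2.0 * d2(k * z[j] / n)
            err += gamma_u * prod
    return err / n

# Example: s = 2 with a non-product choice (only the pair interaction reweighted).
n, z = 31, (1, 12)
weights = {(0,): 1.0, (1,): 1.0, (0, 1): 0.5}
e2 = sq_worst_case_error(n, z, weights)
```

For $s = 1$, $\gamma_{\{1\}} = 1$ and the full grid $\{k/n\}$ this reproduces the exact value $2\zeta(2)/n^2$.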

3.1 The existence of good lattice rules

We show the existence of good lattice rules by an averaging argument. Define
$$ M_{n,s}(\alpha) := \frac{1}{(n-1)^s} \sum_{z \in Z_n^s} e_{n,s}^2(z) \qquad (19) $$
as the mean square worst-case error taken over all possible $z \in Z_n^s$. In [16], $M_{n,s}(\alpha)$ was found for product weights. We now find $M_{n,s}(\alpha)$ for general weights.

Theorem 1 Let $n$ be a prime number, and let $M_{n,s}(\alpha)$ be defined by (19) for $\alpha > 1$.

• We have
$$ M_{n,s}(\alpha) = \frac{1}{n} \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_{s,u} (2\zeta(\alpha))^{|u|} + \frac{n-1}{n} \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_{s,u} (T(\alpha))^{|u|}, $$
where $\zeta$ is the Riemann zeta function, $\zeta(\alpha) = \sum_{h=1}^\infty h^{-\alpha}$, and
$$ T(\alpha) = -\frac{2\zeta(\alpha)(1 - n^{1-\alpha})}{n-1}. \qquad (20) $$

• We have
$$ M_{n,s}(\alpha) \le \frac{1}{n-1} \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_{s,u} (2\zeta(\alpha))^{|u|}. $$

• There exists a generating vector $z \in Z_n^s$ such that
$$ e_{n,s}(z) \le \frac{1}{\sqrt{n-1}} \left( \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_{s,u} (2\zeta(\alpha))^{|u|} \right)^{1/2}. $$

• Let $\mu$ be an equiprobable measure on $Z_n^s$, i.e., $\mu(z) = (n-1)^{-s}$ for all $z \in Z_n^s$. For $c > 1$, define the set
$$ Z_c = \left\{ z \in Z_n^s : e_{n,s}(z) \le \frac{c}{\sqrt{n-1}} \left( \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_{s,u} (2\zeta(\alpha))^{|u|} \right)^{1/2} \right\}. $$

Then $\mu(Z_c) > 1 - c^{-2}$.

Proof. From (18), on averaging $e_{n,s}^2(z)$ over all the $(n-1)^s$ values of $z \in Z_n^s$, we obtain
$$ M_{n,s}(\alpha) = \frac{1}{(n-1)^s} \sum_{z \in Z_n^s} \frac{1}{n} \sum_{k=0}^{n-1} \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_u \prod_{j \in u} \left( \sum_{h \in \mathbb{Z}_0} \frac{e^{2\pi i h k z_j / n}}{|h|^\alpha} \right) $$
$$ = \frac{1}{n} \sum_{k=0}^{n-1} \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_u \frac{1}{(n-1)^{|u|}} \sum_{z_u \in Z_n^{|u|}} \prod_{j \in u} \left( \sum_{h \in \mathbb{Z}_0} \frac{e^{2\pi i h k z_j / n}}{|h|^\alpha} \right) $$
$$ = \frac{1}{n} \sum_{k=0}^{n-1} \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_u \prod_{j \in u} \left( \frac{1}{n-1} \sum_{z_j \in Z_n} \sum_{h \in \mathbb{Z}_0} \frac{e^{2\pi i h k z_j / n}}{|h|^\alpha} \right) $$
$$ = \frac{1}{n} \sum_{k=0}^{n-1} \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_u\, T_\alpha(k, n)^{|u|}, $$
where
$$ T_\alpha(k, n) := \frac{1}{n-1} \sum_{z=1}^{n-1} \sum_{h \in \mathbb{Z}_0} \frac{e^{2\pi i h k z / n}}{|h|^\alpha}. $$
It can be shown that
$$ T_\alpha(k, n) = \begin{cases} 2\zeta(\alpha), & \text{if } k \text{ is a multiple of } n, \\ T(\alpha), & \text{otherwise}, \end{cases} $$
where $T(\alpha)$ is given by (20). Indeed, if $k$ is a multiple of $n$ we need to sum up $\sum_{h \in \mathbb{Z}_0} 1/|h|^\alpha = 2\sum_{j=1}^\infty 1/j^\alpha = 2\zeta(\alpha)$. If $k$ is not a multiple of $n$ then we separate out the terms in the sum over $h$ in which $h$ is a multiple of $n$ and obtain $T(\alpha)$.

We return to the formula for $M_{n,s}(\alpha)$. Separating out the $k = 0$ term in the expression for $M_{n,s}(\alpha)$, we have
$$ M_{n,s}(\alpha) = \frac{1}{n} \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_u (2\zeta(\alpha))^{|u|} + \frac{n-1}{n} \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_u (T(\alpha))^{|u|}. $$

1 n

X

γu (2ζ(α))|u| (1 + R(n, u, α)),

∅6=u⊆{1,...,s}

where 

|u|

R(n, u, α) = (−1) (n − 1)

1 − n1−α n−1

|u| .

If |u| is odd, R(n, u, α) ≤ 0; while if |u| is even then |u| ≥ 2 and  R(n, u, α) ≤ (n − 1)

1 − n1−α n−1

2 ≤

1 . n−1

Therefore, for |u| either even or odd 1 Mn,s (α) ≤ n

X

|u|



γu (2ζ(α))

∅6=u⊆{1,...,s}

1 1+ n−1

 =

1 n−1

X

γu (2ζ(α))|u| .

∅6=u⊆{1,...,s}

This completes the proof of the estimate of $M_{n,s}(\alpha)$. The remaining parts of Theorem 1 follow from an easy application of the mean value theorem and Chebyshev's inequality applied to (19).

Theorem 1 presents the formula and an upper bound for the mean square worst-case error in terms of the number of sample points $n$ and the weights $\gamma_{s,u}$ of the Korobov space. The last part of this theorem states that for large $c$, say $c = 10$, we have a large probability, at least $0.99$ for $c = 10$, that randomly selected points from $Z_n^s$ satisfy the bound on the mean worst-case error modulo a factor $c$.

Before we state and prove the next theorem, which establishes faster convergence than is apparent in Theorem 1, we recall the Jensen inequality:
$$ \sum_k a_k \le \left( \sum_k a_k^r \right)^{1/r} \qquad \text{for } 0 < r \le 1, $$
where the $a_k$ are arbitrary non-negative numbers.

Theorem 2 Let $n$ be a prime number.

• Then there exists a rank-1 lattice rule with a generator $z_0 \in Z_n^s$ such that for all $\tau \in [1, \alpha)$,
$$ e_{n,s}(z_0) \le C(s, \tau)(n-1)^{-\tau/2}, $$
where
$$ C(s, \tau) = \left( \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_{s,u}^{1/\tau} (2\zeta(\alpha/\tau))^{|u|} \right)^{\tau/2}. \qquad (21) $$

e2n,s (α, γ, z)

1X = n k=0 =

X ∅6=u⊆{1,...,s}

X

X exp(2πikhu · zu /n) Q α j∈u |hj | |u|

hu ∈Z0

n−1 X 1X exp(2πikhu · zu /n) Q . α n k=0 j∈u |hj | |u|

γu

∅6=u⊆{1,...,s}

Since

γu

hu ∈Z0

n−1

1X exp(2πikhu · zu /n) = n k=0



1, if hu · zu ≡ 0 (mod n) , 0, otherwise,

we have e2n,s (α, γ, z) =

X

X Q

∅6=u⊆{1,...,s}

|u| hu ∈Z0 ,

γu . α j∈u |hj |

hu ·zu ≡0 (mod n)

Applying the Jensen inequality to the double sum on the right-hand side, we have
$$ e_{n,s}^2(\alpha, \gamma, z) \le \left( \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}}\ \sum_{\substack{h_u \in \mathbb{Z}_0^{|u|} \\ h_u \cdot z_u \equiv 0 \ (\mathrm{mod}\ n)}} \frac{\gamma_u^\lambda}{\prod_{j \in u} |h_j|^{\alpha\lambda}} \right)^{1/\lambda} = \left( e_{n,s}^2(\alpha\lambda, \gamma^\lambda, z) \right)^{1/\lambda} \qquad (22) $$
for $\frac{1}{\alpha} < \lambda \le 1$, where $\gamma^\lambda$ denotes the weight sequence with values $\gamma_u^\lambda$ for each $u \subseteq \{1, \ldots, s\}$. We see from Theorem 1, with $\alpha$ replaced by $\alpha\lambda$ and $\gamma_u$ replaced by $\gamma_u^\lambda$, that for every $\lambda$ there exists a generator $z_\lambda \in Z_n^s$ such that
$$ e_{n,s}^2(\alpha\lambda, \gamma^\lambda, z_\lambda) \le \frac{1}{n-1} \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_u^\lambda (2\zeta(\alpha\lambda))^{|u|}. $$
Now let $z_0 \in Z_n^s$ be such that $e_{n,s}(\alpha, \gamma, z_0) \le e_{n,s}(\alpha, \gamma, z)$ for all $z \in Z_n^s$. Then for all $\frac{1}{\alpha} < \lambda \le 1$ we have
$$ e_{n,s}^2(\alpha, \gamma, z_0) \le e_{n,s}^2(\alpha, \gamma, z_\lambda) \le \left( e_{n,s}^2(\alpha\lambda, \gamma^\lambda, z_\lambda) \right)^{1/\lambda}, $$
and therefore
$$ e_{n,s}^2(\alpha, \gamma, z_0) \le (n-1)^{-1/\lambda} \left( \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_u^\lambda (2\zeta(\alpha\lambda))^{|u|} \right)^{1/\lambda}. \qquad (23) $$
For any $\tau$ with $1 \le \tau < \alpha$, put $\lambda = 1/\tau$ in (23). Then $\frac{1}{\alpha} < \lambda \le 1$ and
$$ e_{n,s}(\alpha, \gamma, z_0) \le (n-1)^{-\tau/2} \left( \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_u^{1/\tau} (2\zeta(\alpha/\tau))^{|u|} \right)^{\tau/2} = C(s, \tau)(n-1)^{-\tau/2}. $$

This completes the proof of the first part. Now we prove the second part. For $\tau \in [1, \alpha)$, based on Theorem 1 we have
$$ \mu \left\{ z \in Z_n^s : e_{n,s}(\alpha/\tau, \gamma^{1/\tau}, z) \le \frac{b^{1/\tau}}{\sqrt{n-1}} \left( \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_{s,u}^{1/\tau} (2\zeta(\alpha/\tau))^{|u|} \right)^{1/2} \right\} > 1 - b^{-2/\tau}. $$
That is,
$$ \mu^* := \mu \left\{ z \in Z_n^s : e_{n,s}(\alpha/\tau, \gamma^{1/\tau}, z) \le \left( b\, C(s, \tau)(n-1)^{-\tau/2} \right)^{1/\tau} \right\} > 1 - b^{-2/\tau}. $$
From (22), we have
$$ e_{n,s}(\alpha, \gamma, z) \le \left( e_{n,s}(\alpha/\tau, \gamma^{1/\tau}, z) \right)^\tau. $$
Thus
$$ \left\{ z \in Z_n^s : \left( e_{n,s}(\alpha/\tau, \gamma^{1/\tau}, z) \right)^\tau \le b\, C(s, \tau)(n-1)^{-\tau/2} \right\} \subseteq Z_b(\tau). $$
Therefore, $\mu(Z_b(\tau)) \ge \mu^* > 1 - b^{-2/\tau}$. This completes the proof.

The essence of Theorem 2 is that for arbitrarily large $s$ there is a rank-1 lattice rule whose error is of order $n^{-\tau/2}$. Since $\tau$ can be arbitrarily close to $\alpha$ we may achieve almost the same speed of convergence as for the univariate case, which is $n^{-\alpha/2}$, and it is known that this bound is sharp, see [12]. Hence, as long as we control the factors $C(s, \tau)$, the difficulty of the $s$-dimensional integration is roughly the same as for the univariate case. Furthermore, if we choose $b$ such that $b^{-2/\tau}$ is small, we have a large probability that the vectors $z$ from $Z_n^s$ satisfy the error bound of order $n^{-\tau/2}$.

The factors $C(s, \tau)$ depend on the weights. It is clear that if $\tau$ approaches $\alpha$, these factors blow up to infinity, since the Riemann zeta function behaves like $\zeta(1 + \delta) \approx \delta^{-1}$ for small positive $\delta$. If we set $\tau = \alpha(1 - \delta)$ we have
$$ C(s, \alpha(1-\delta)) \approx \left( \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_{s,u}^{1/\alpha} (2\delta^{-1})^{|u|} \right)^{\alpha/2}. $$
Hence, everything depends on the weights.
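The rate promised by Theorem 2 can be observed on a toy example. The sketch below (the product weights and the primes are arbitrary choices of mine) exhaustively searches $Z_n^2$ for the best generator at $\alpha = 2$; with $\tau$ close to $\alpha = 2$, the best errors should decay roughly like $n^{-1}$. Exhaustive search is feasible only for tiny $n$ and $s$, which is why a constructive algorithm such as CBC is needed in practice.

```python
import math

def d2(t):
    """D_2 via the Bernoulli polynomial: pi^2 * ({t}^2 - {t} + 1/6), alpha = 2."""
    t -= math.floor(t)
    return math.pi ** 2 * (t * t - t + 1.0 / 6.0)

def best_error(n, gamma):
    """Exhaustive search over all z in Z_n^2 for the smallest worst-case error
    e_{n,2}(z); product weights gamma = (g1, g2), alpha = 2, n prime."""
    best = float("inf")
    for z1 in range(1, n):
        for z2 in range(1, n):
            # e^2 = -1 + (1/n) sum_k prod_j (1 + 2 gamma_j D_2({k z_j / n}))
            e2 = -1.0 + sum(
                (1 + 2 * gamma[0] * d2(k * z1 / n)) * (1 + 2 * gamma[1] * d2(k * z2 / n))
                for k in range(n)
            ) / n
            best = min(best, math.sqrt(max(e2, 0.0)))
    return best

for n in (11, 23, 47):
    print(n, best_error(n, (1.0, 0.5)))
```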

3.2 Tractability for general weights

We are now ready to discuss tractability of multivariate integration in the spaces $H_{s,\alpha}$ for general QMC algorithms and lattice rules. From Theorem 2 we easily conclude sufficient conditions for tractability and strong tractability. We also show matching necessary conditions if the kernel is pointwise non-negative, using the same analysis as in Section 6.2 of [15].

Theorem 3 For $\tau \in [1, \alpha)$ and $q \ge 0$ define
$$ B_{\tau,q} := \sup_{s=1,2,\ldots} \frac{1}{s^q} \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_{s,u}^{1/\tau} (2\zeta(\alpha/\tau))^{|u|}. \qquad (24) $$

• If the weights $\{\gamma_{s,u}\}$ are such that $B_{1,q} < \infty$ for some non-negative $q$ then the integration problem in the spaces $H_{s,\alpha}$ is tractable. If $B_{\tau,q} < \infty$ then the $\varepsilon$-exponent is at most $2/\tau$ and the $s$-exponent is at most $q$, see (3). The corresponding bounds on the worst-case error can be achieved by lattice rules.

• In particular, if $B_{1,0} < \infty$ then the integration problem in the spaces $H_{s,\alpha}$ is strongly tractable. If $B_{\tau,0} < \infty$ then the $\varepsilon$-exponent of strong tractability is at most $2/\tau$. If $B_{\tau,0} < \infty$ for all $1 \le \tau < \alpha$, then the $\varepsilon$-exponent of strong tractability reaches the minimal value $2/\alpha$.

• Assume that the weights $\{\gamma_{s,u}\}$ are chosen such that
$$ K_{s,\alpha}(x, y) \ge 0 \qquad \forall\, x, y \in [0,1]^s. \qquad (25) $$
Then the condition $B_{1,q} < \infty$ for some $q$ is necessary for tractability of the integration problem in the spaces $H_{s,\alpha}$, and the condition $B_{1,0} < \infty$ is necessary for strong tractability.

Proof. We prove the first part. Assume that $B = B_{\tau,q}$ is finite for some $\tau \in [1, \alpha)$ and some non-negative $q$. Then we have
$$ \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_{s,u}^{1/\tau} (2\zeta(\alpha/\tau))^{|u|} \le B s^q \qquad \forall\, s \ge 1. $$
Hence, (21) yields
$$ C(s, \tau) \le B^{\tau/2} s^{q\tau/2} \qquad \forall\, s \ge 1. $$
Therefore, due to Theorem 2 there exists a generator $z_0 \in Z_n^s$ such that
$$ e_{n,s}(z_0) \le C(s, \tau)(n-1)^{-\tau/2} \le B^{\tau/2} s^{q\tau/2} (n-1)^{-\tau/2}. $$
This yields that the minimal number $n(\varepsilon, H_{s,\alpha})$ of function evaluations needed to reduce the initial error, which is $1$, by a factor of $\varepsilon \in (0,1)$ by use of a QMC algorithm satisfies
$$ n(\varepsilon, H_{s,\alpha}) \le \lceil B s^q \varepsilon^{-2/\tau} \rceil + 1. $$

Thus, we have tractability with $\varepsilon$-exponent at most $2/\tau$ and $s$-exponent at most $q$.

To prove the second part, we see that if $B_{\tau,0} < \infty$ for some $\tau \in [1, \alpha)$ then we have strong tractability with the $\varepsilon$-exponent at most $2/\tau$. If $B_{\tau,0} < \infty$ for all $1 \le \tau < \alpha$, then the $\varepsilon$-exponent of strong tractability is at most $\inf\{2/\tau : 1 \le \tau < \alpha\}$, which is $2/\alpha$. Since $n^{-\alpha/2}$ is the best possible rate of convergence for $s = 1$, see [12], we conclude that the $\varepsilon$-exponent of strong tractability is $2/\alpha$ and this value is minimal.

We now turn our attention to the third part. Since the reproducer of multivariate integration is $h_s(x) \equiv 1$, the square of the worst-case error of an arbitrary QMC algorithm $Q_{n,s}$, see (2), satisfies
$$ e^2(Q_{n,s}; H_{s,\alpha}) = -1 + \frac{1}{n^2} \sum_{k,l=0}^{n-1} K_{s,\alpha}(x_k, x_l). $$
We can replace the equal sign with a greater-than-or-equal sign by dropping all terms with $k \ne l$, since $K_{s,\alpha}(x_k, x_l) \ge 0$. Furthermore, for $k = l$ we have
$$ K_{s,\alpha}(x_k, x_k) = K_{s,\alpha}(0, 0) = 1 + \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_{s,u} (2\zeta(\alpha))^{|u|}, $$
and therefore
$$ e^2(Q_{n,s}; H_{s,\alpha}) \ge -1 + \frac{K_{s,\alpha}(0,0)}{n} = -1 + \frac{1}{n} \left( 1 + \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_{s,u} (2\zeta(\alpha))^{|u|} \right) \ge -1 + \frac{1}{n} \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_{s,u} (2\zeta(\alpha))^{|u|}. $$
Assume that we have tractability or strong tractability. Then for some QMC algorithm $Q_{n,s}$ we have $e(Q_{n,s}; H_{s,\alpha}) \le \varepsilon$ for $n = n(\varepsilon, H_{s,\alpha}) \le C \varepsilon^{-p} s^q$ for some non-negative $C, p, q$, where $q = 0$ when we consider strong tractability. Take, say, $\varepsilon = \frac{1}{2}$. Then the lower bound on the worst-case error of $Q_{n,s}$ for $n = n(\frac{1}{2}, H_{s,\alpha})$ yields
$$ \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_{s,u} (2\zeta(\alpha))^{|u|} \le n \left( 1 + e^2(Q_{n,s}; H_{s,\alpha}) \right) \le C\, 2^p s^q (1 + 0.25). $$
Hence
$$ \sup_s \frac{1}{s^q} \sum_{\emptyset \ne u \subseteq \{1,\ldots,s\}} \gamma_{s,u} (2\zeta(\alpha))^{|u|} \le 5\, C\, 2^{p-2} < \infty. $$
This means that $B_{1,q} < \infty$ holds for a specific $q$. Hence the third part is proven.

Theorem 3 states matching necessary and sufficient conditions for tractability and strong tractability of multivariate integration under the assumption (25) that the reproducing kernel is pointwise non-negative, $K_{s,\alpha}(x, y) \ge 0$. We now discuss this assumption for product weights.

For product weights, $\gamma_{s,u} = \prod_{j \in u} \gamma_{s,j}$, from (10) we have
$$ K_{s,\alpha}(x, y) = \prod_{j=1}^s \left( 1 + 2\gamma_{s,j} D_\alpha(\{x_j - y_j\}) \right), $$

and Dα , see (5), is related to a Bernoulli polynomial if α is an even integer. Clearly, Dα (t) ≤ Dα (0) = ζ(α). It has been proven in [2] that the minimum of the function Dα on the interval [0, 1] is attained at 1/2. Since Dα (1/2) is given as an alternating series we have −1 < Dα (1/2) < −1 + 2−α

∀ α > 1.

(26)

For the product weights, we have Ks,α (x, y) ≥ 0 for all x and y ∈ [0, 1]s iff 1+2γs,j Dα (1/2) ≥ 0 iff 1 1 γs,j ≤ aα := < , j = 1, 2, . . . , s. (27) 2|Dα (1/2)| 2(1 − 2−α ) That is, the assumption (25) holds if we take all γs,j no larger than aα . For general weights, it is easy to check that X 1 γs,u (2ζ(α))|u|−1 ≤ 2 ∅6=u∈{1,2,...,s}

implies that Ks,α (x, y) ≥ 0.
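For $\alpha = 2$ the quantities above can be checked numerically. The sketch below (plain Python, our own illustrative code) uses the closed form $D_2(t) = \pi^2 B_2(\{t\})$, which follows from the standard Fourier expansion of the Bernoulli polynomial $B_2(t) = t^2 - t + 1/6$, to verify (26) and the threshold $a_\alpha$ of (27):

```python
import math

def D2(t):
    # For alpha = 2, D_alpha has a closed form:
    # D_2(t) = sum_{h>=1} cos(2*pi*h*t)/h^2 = pi^2 * B_2({t}),  B_2(t) = t^2 - t + 1/6.
    t = t % 1.0
    return math.pi**2 * (t * t - t + 1.0 / 6.0)

zeta2 = math.pi**2 / 6               # D_2(0) = zeta(2)
assert abs(D2(0.0) - zeta2) < 1e-12

# Minimum of D_2 on [0,1] is at t = 1/2, and (26) holds: -1 < D_2(1/2) < -1 + 2^{-2}.
d_half = D2(0.5)                     # equals -pi^2/12
assert -1.0 < d_half < -1.0 + 0.25

# Threshold (27): a_2 = 1/(2|D_2(1/2)|) = 6/pi^2.  With gamma <= a_2 every
# univariate factor 1 + 2*gamma*D_2(t) is non-negative, hence so is the kernel.
a2 = 1.0 / (2.0 * abs(d_half))
assert abs(a2 - 6.0 / math.pi**2) < 1e-12
assert all(1.0 + 2.0 * a2 * D2(i / 1000.0) >= -1e-12 for i in range(1000))
```

The grid check confirms that the product kernel stays non-negative exactly up to the weight bound $a_2 = 6/\pi^2 \approx 0.608$.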

3.3 Tractability for product weights

We now show how to obtain necessary and sufficient conditions for strong tractability and tractability for product weights and the space $H_{s,\alpha}$ with arbitrary $\alpha > 1$, without assuming (27), based on the analysis above. The case $\alpha = 2$ has already been fully analyzed in [5]. For simplicity we assume that the weights $\gamma_{s,j}$ are uniformly bounded. We are going to prove the following theorem.

Theorem 4 Consider spaces $H_{s,\alpha}$ with arbitrary $\alpha > 1$ and product weights $\gamma_{s,j}$. Assume that
$$ A := \sup_{s=1,2,\dots}\ \max_{j=1,2,\dots,s} \gamma_{s,j} < \infty. $$
Integration in the spaces $H_{s,\alpha}$ is strongly tractable iff
$$ \sup_{s=1,2,\dots}\ \sum_{j=1}^{s} \gamma_{s,j} < \infty. \tag{28} $$
Integration in the spaces $H_{s,\alpha}$ is tractable iff
$$ \sup_{s=1,2,\dots}\ \frac{\sum_{j=1}^{s}\ln(1+\gamma_{s,j})}{\ln(s+1)} < \infty. \tag{29} $$

Proof. For product weights, the third part of Theorem 1 states that there is a lattice rule $Q_{n,s}$ such that
$$ e^2(Q_{n,s}; H_{s,\alpha}) \le \frac{1}{n-1}\prod_{j=1}^{s}\big(1+2\zeta(\alpha)\gamma_{s,j}\big). $$
Hence we obtain tractability if
$$ \sup_{s=1,2,\dots}\ \frac{\prod_{j=1}^{s}\big(1+2\zeta(\alpha)\gamma_{s,j}\big)}{(s+1)^{q}} < \infty $$
for some positive $q$, and strong tractability if the condition above holds with $q = 0$. Taking logarithms, this condition is equivalent to
$$ \sup_{s=1,2,\dots}\ \frac{\sum_{j=1}^{s}\ln\big(1+2\zeta(\alpha)\gamma_{s,j}\big)}{1 + \delta_{q,0}\,\ln(s+1)} < \infty, \tag{30} $$
where $\delta_{q,0} = 0$ for $q = 0$, and $\delta_{q,0} = 1$ otherwise. For $c > 0$, consider the function
$$ g_c(x) = \frac{\ln(1+cx)}{\ln(1+x)}, \qquad x \in (0,\infty). $$
Observe that $g_c(0) := \lim_{x\to0}g_c(x) = c$ and similarly $g_c(\infty) = 1$. It is easy to check that $g_c'$ is always negative if $c > 1$, identically zero if $c = 1$, and positive if $c < 1$. Therefore, for $c > 0$,
$$ a_1 := \min(c,1) \le g_c(x) \le \max(c,1) =: a_2 \qquad \forall\,x \in [0,\infty). $$
That is, we have
$$ a_1\ln(1+x) \le \ln(1+cx) \le a_2\ln(1+x) \qquad \forall\,x \in [0,\infty). \tag{31} $$
Taking $c = 2\zeta(\alpha)$, we see that (30) with $\delta_{q,0} = 1$ is equivalent to (29). This proves that (29) implies tractability. To conclude that (28) implies strong tractability it is enough to note that
$$ \sup_{s}\ \sum_{j=1}^{s}\ln(1+\gamma_{s,j}) < \infty \quad\text{iff}\quad \sup_{s}\ \sum_{j=1}^{s}\gamma_{s,j} < \infty. $$
The last equivalence holds since $\gamma_{s,j} \le A$ and there is a positive number $b = b(A)$ such that $b\,x \le \ln(1+x) \le x$ for all $x \in [0,A]$. (In fact, if one of the suprema is finite then all $\gamma_{s,j}$ must be bounded, so the assumption $\gamma_{s,j} \le A$ is not really needed for this part of the proof.)

We now need to show that strong tractability implies (28) and that tractability implies (29). This can be done as follows. We can decrease the weights by switching from $\gamma_{s,j}$ to
$$ \eta_{s,j} = c^{*}\gamma_{s,j} \quad\text{with}\quad c^{*} = \min\Big(1, \frac{1}{2A}\Big) > 0. $$
Clearly, decreasing the weights makes the integration problem no harder. Then
$$ \eta_{s,j} \le \frac{\gamma_{s,j}}{2A} \le \frac12 \le \frac{1}{2|D_\alpha(1/2)|}, $$
since $1 < 1/|D_\alpha(1/2)|$ due to (26). Hence the kernel $K_{s,\alpha}$ for the weights $\eta_{s,j}$ is pointwise non-negative, and we can apply the last part of Theorem 3 to the weights $\eta_{s,j} = c^{*}\gamma_{s,j}$. For such weights, strong tractability implies that $B_{1,0} < \infty$. We now have
$$ B_{1,0} = \sup_{s}\ \prod_{j=1}^{s}\big(1+2\zeta(\alpha)\eta_{s,j}\big) - 1. $$
Hence $B_{1,0} < \infty$ is equivalent to $\sup_s\sum_{j=1}^{s}\eta_{s,j} < \infty$, which in turn is equivalent to $\sup_s\sum_{j=1}^{s}\gamma_{s,j} < \infty$. Therefore strong tractability implies (28).

Tractability in the spaces $H_{s,\alpha}$ with weights $\eta_{s,j}$ implies that $B_{1,q} < \infty$ for some positive $q$. We now have
$$ B_{1,q} = \sup_{s}\ \frac{1}{s^{q}}\Bigg(\prod_{j=1}^{s}\big(1+2\zeta(\alpha)\eta_{s,j}\big) - 1\Bigg). $$
Hence $B_{1,q} < \infty$ is equivalent to
$$ \sup_{s}\ \frac{\sum_{j=1}^{s}\ln\big(1+2\zeta(\alpha)c^{*}\gamma_{s,j}\big)}{\ln(s+1)} < \infty. $$
Taking $c = 2\zeta(\alpha)c^{*}$ and using (31), this last condition is equivalent to (29). Therefore tractability implies (29). This completes the proof. $\Box$

We remark in passing that it is possible to prove Theorem 4 without assuming that the weights $\gamma_{s,j}$ are uniformly bounded, by switching to a Hilbert space with a pointwise non-negative reproducing kernel, but we do not pursue this point here.
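The two-sided bound (31) used twice in this proof is elementary but easy to get wrong by a constant factor; a quick numerical spot-check (plain Python, grid values chosen arbitrarily for illustration) is:

```python
import math

def check_31(c, xs, tol=1e-12):
    # verify min(c,1)*ln(1+x) <= ln(1+c*x) <= max(c,1)*ln(1+x) on a grid of x >= 0
    a1, a2 = min(c, 1.0), max(c, 1.0)
    for x in xs:
        lo, mid, hi = a1 * math.log1p(x), math.log1p(c * x), a2 * math.log1p(x)
        assert lo - tol <= mid <= hi + tol, (c, x)

grid = [10.0**k for k in range(-6, 7)]
for c in (0.1, 0.5, 1.0, 2.0, math.pi**2 / 3):  # last value: c = 2*zeta(2)
    check_31(c, grid)
```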

3.4 Tractability for weights independent of the dimension

We now give another sufficient condition for tractability or strong tractability in the case of weights that are independent of the dimension, i.e., $\gamma_{s,u} = \gamma_u$. In this case the weights are nested, see (16). For $\tau \in [1,\alpha)$, let
$$ D(j,\tau) = \sum_{u\subseteq\{1,\dots,j-1\}} \gamma_{u\cup\{j\}}^{1/\tau}\,(2\zeta(\alpha/\tau))^{|u|+1}, \qquad j = 1,2,\dots,s. $$
Since the weights are independent of the dimension, the factor $C(s,\tau)$, see (21), present in the error bound of Theorem 2 can then be written as
$$ C(s,\tau) = \Bigg(\sum_{\emptyset\ne u\subseteq\{1,\dots,s-1\}} \gamma_{u}^{1/\tau}(2\zeta(\alpha/\tau))^{|u|} + D(s,\tau)\Bigg)^{\tau/2} = \Bigg(\sum_{j=1}^{s} D(j,\tau)\Bigg)^{\tau/2}. $$
Assume that there exists a number $A(\tau)$ such that
$$ D(j,\tau) \le A(\tau)\,j^{q-1}, \qquad j = 1,2,\dots,s. \tag{32} $$
Then we have the following:
• If $q > 0$, then there exists a number $A_1(\tau)$ such that $C(s,\tau) \le A_1(\tau)s^{q\tau/2}$. Arguing as in the proof of Theorem 3, we have tractability with ε-exponent $2/\tau$ and $s$-exponent $q$.
• If $q < 0$, then there exists a number $A_2(\tau)$ such that $C(s,\tau) \le A_2(\tau)$, and we have strong tractability with ε-exponent $2/\tau$.
• If $q = 0$, then there exists a number $A_3(\tau)$ such that $C(s,\tau) \le A_3(\tau)(\log s)^{\tau/2}$, and we have tractability with ε-exponent $2/\tau$ and arbitrarily small $s$-exponent.

An example of weights satisfying condition (32) is
$$ \gamma_{u\cup\{j\}} \le G_j\,w_j^{|u|+1}, \qquad u \subseteq \{1,\dots,j-1\},\ \ j = 2,3,\dots, $$
with $G_j = O(j^{q\tau})$ and $w_j = O(j^{-\tau})$; the weight $\gamma_{\{1\}}$ can be arbitrary. Indeed, for such weights,
$$ D(j,\tau) \le 2\zeta(\alpha/\tau)\,(G_j w_j)^{1/\tau} \sum_{u\subseteq\{1,\dots,j-1\}} \big(2\zeta(\alpha/\tau)\,w_j^{1/\tau}\big)^{|u|} = 2\zeta(\alpha/\tau)\,(G_j w_j)^{1/\tau}\Big(1 + 2\zeta(\alpha/\tau)\,w_j^{1/\tau}\Big)^{j-1} \le A_4(\tau)\,j^{q-1} $$
for some number $A_4(\tau)$, since $w_j^{1/\tau} = O(1/j)$ keeps the last factor uniformly bounded while $(G_j w_j)^{1/\tau} = O(j^{q-1})$. Thus the condition (32) is satisfied, as claimed.

3.5 Worst-case error as α approaches 1

The simple proof technique used for the last part of Theorem 3 can also be applied when the parameter $\alpha$ approaches 1. As we have already seen, the upper error bounds for lattice rules depend on the factor $\zeta(\alpha)$, which tends to infinity as $\alpha$ tends to 1. As $\alpha$ approaches 1 we lose smoothness of the functions in $H_{s,\alpha}$, and the reproducing kernel degenerates. We now prove that the error of any QMC algorithm tends to infinity as $\alpha$ tends to 1, no matter how many function values are used.

Theorem 5 Consider the integration problem in the space $H_{s,\alpha}$ for arbitrary weights $\gamma_{s,u}$ independent of $\alpha$. Assume that at least one $\gamma_{s,u}$ with $u \ne \emptyset$ is non-zero. Then for arbitrary $n$ and an arbitrary QMC algorithm $Q_{n,s}$ using sample points $x_j$ that may depend on $\alpha$, we have
$$ \lim_{\alpha\to1^{+}} e(Q_{n,s}; H_{s,\alpha}) = \infty. $$

Proof. The squared worst-case error of the QMC algorithm $Q_{n,s}$ that uses sample points $x_j = x_j(\alpha)$ satisfies
$$ e^2(Q_{n,s}; H_{s,\alpha}) = -1 + \frac{1}{n^2}\sum_{k,l=0}^{n-1} K_{s,\alpha}(x_k,x_l), $$
which can also be written as
$$ e^2(Q_{n,s}; H_{s,\alpha}) = -(c+1) + \frac{1}{n^2}\sum_{k,l=0}^{n-1}\big(c + K_{s,\alpha}(x_k,x_l)\big) \qquad \forall\,c \in \mathbb{R}. \tag{33} $$
The kernel $K_{s,\alpha}$ is given by (4). We have already noted that $-1 < D_\alpha(t) \le \zeta(\alpha)$ for $t \in [0,1]$. This implies that $K_{u,\alpha}(x_u,y_u) \ge -2(2\zeta(\alpha))^{|u|-1}$ for all $x_u$ and $y_u$, and
$$ K_{s,\alpha}(x,y) \ge c^{*} := 1 - 2\sum_{\emptyset\ne u\subseteq\{1,\dots,s\}} \gamma_{s,u}\,(2\zeta(\alpha))^{|u|-1}. $$
Setting $c = -c^{*}$ in (33), we conclude that the kernel $c + K_{s,\alpha}(x,y)$ is pointwise non-negative, so we can drop the off-diagonal terms ($k \ne l$) in (33) to obtain
$$ e^2(Q_{n,s}; H_{s,\alpha}) \ge 2\Bigg(\sum_{\emptyset\ne u\subseteq\{1,\dots,s\}} \gamma_{s,u}\,(2\zeta(\alpha))^{|u|-1}\Bigg)\Bigg(\frac{\zeta(\alpha)}{n} - \Big(1-\frac1n\Big)\Bigg). $$
Since $\zeta(\alpha)$ tends to infinity as $\alpha$ tends to 1, the error tends to infinity, as claimed. $\Box$

4 Component-by-component construction

Theorems 1-3 in the previous section establish the existence of “good” lattice rules. The globally optimal generator $z_0 \in \mathbb{Z}_n^s$, defined by $e_{n,s}(\alpha,\gamma,z_0) \le e_{n,s}(\alpha,\gamma,z)$ for all $z \in \mathbb{Z}_n^s$, satisfies the optimal error bound of Theorem 2. However, a full search over all $(n-1)^s$ different $z \in \mathbb{Z}_n^s$ is infeasible for large $s$ and $n$. In this section we show that a generator obtained by carrying out the construction one component at a time still satisfies the optimal error bound of Theorem 2.

We define the root mean square of the worst-case error in $H_{s,\alpha}$, averaged over all QMC rules, as
$$ e^{\mathrm{avg}}_{n,s} := \Bigg(\int_{[0,1]^{ns}} e^2\big(Q_{n,s}(\{x_k\}); H_{s,\alpha}\big)\,dx_0\cdots dx_{n-1}\Bigg)^{1/2}. $$
It is shown in [15] that
$$ e^{\mathrm{avg}}_{n,s} = \frac{1}{\sqrt{n}}\Bigg(\int_{[0,1]^{s}} K_{s,\alpha}(x,x)\,dx - \int_{[0,1]^{2s}} K_{s,\alpha}(x,y)\,dx\,dy\Bigg)^{1/2}, $$
which reduces in the present case to
$$ e^{\mathrm{avg}}_{n,s} = \frac{1}{\sqrt{n}}\Bigg(\sum_{\emptyset\ne u\subseteq\{1,\dots,s\}} \gamma_{s,u}\,(2\zeta(\alpha))^{|u|}\Bigg)^{1/2}. \tag{34} $$
This will later serve as a benchmark for the worst-case errors of lattice rules. The CBC construction of lattice rules has proven to be a powerful tool in weighted Korobov spaces with product weights. We now generalize this approach to general weights.

Component-by-Component (CBC) Algorithm. Suppose $n$ is a prime number and the weights $\{\gamma_{s,u}\}$ are given. The generator $z = (1, z_2, \dots, z_s)$ is found as follows:
1. Set the first component $z_1$ of the generator $z$ to 1.
2. For $d = 2,3,\dots,s$ and known $z_1, z_2, \dots, z_{d-1}$, find $z_d \in \mathbb{Z}_n$ such that the worst-case error
$$ e^2_{n,d}(1,z_2,\dots,z_{d-1},z_d) = \frac1n\sum_{k=0}^{n-1}\ \sum_{\emptyset\ne u\subseteq\{1,\dots,d\}} \gamma_{s,u}\prod_{j\in u}\Bigg(\sum_{h\in\mathbb{Z}_0}\frac{e^{2\pi i h k z_j/n}}{|h|^{\alpha}}\Bigg) \tag{35} $$
is minimized.

The cost of the CBC algorithm will be discussed later. We now show that the lattice rule with the generator constructed by this algorithm has good theoretical properties.

Theorem 6 Suppose that $n$ is a prime number and the weights $\{\gamma_{s,u}\}$ are given. Let $z = (1,z_2,\dots,z_s)$ be found by the component-by-component algorithm. Then for $d = 1,2,\dots,s$ and for any $\tau \in [1,\alpha)$, we have
$$ e_{n,d}(1,z_2,\dots,z_d) \le C(d,\tau)\,(n-1)^{-\tau/2}, \tag{36} $$

where $C(d,\tau)$ is given in (21).

Proof. We prove by induction the following equivalent error bound:
$$ e^2_{n,d}(1,z_2,\dots,z_d) \le (n-1)^{-1/\lambda}\Bigg(\sum_{\emptyset\ne u\subseteq\{1,\dots,d\}} \gamma_{s,u}^{\lambda}\,(2\zeta(\alpha\lambda))^{|u|}\Bigg)^{1/\lambda} \tag{37} $$
for all $\frac{1}{\alpha} < \lambda \le 1$. For $d = 1$, the same proof as in [8] can be used to prove (37). Now suppose the generator $z = (1,z_2,\dots,z_d)$ found by the CBC algorithm satisfies (37). We prove that the $(d+1)$-dimensional vector $(z, z_{d+1})$, with $z_{d+1}$ found by the algorithm, satisfies (37) with $d$ replaced by $d+1$. From (18) we have
$$ e^2_{n,d+1}(z, z_{d+1}) = \frac1n\sum_{k=0}^{n-1}\ \sum_{\emptyset\ne u\subseteq\{1,\dots,d+1\}} \gamma_u\prod_{j\in u}\Bigg(\sum_{h\in\mathbb{Z}_0}\frac{e^{2\pi i h k z_j/n}}{|h|^{\alpha}}\Bigg) = e^2_{n,d}(z) + \theta(\alpha,\gamma,z,z_{d+1}), \tag{38} $$

where
$$ \theta(\alpha,\gamma,z,z_{d+1}) = \frac1n\sum_{k=0}^{n-1}\ \sum_{\substack{u\subseteq\{1,\dots,d+1\},\\ d+1\in u}} \gamma_u\prod_{j\in u}\Bigg(\sum_{h\in\mathbb{Z}_0}\frac{e^{2\pi i h k z_j/n}}{|h|^{\alpha}}\Bigg). \tag{39} $$
We need the following lemma.

Lemma 7 Suppose $n$ is a prime number. Let $z = (1,z_2,\dots,z_d)$ and $\alpha$, $\gamma$ be given, and let $\theta(\alpha,\gamma,z,z_{d+1})$ be defined by (39). Then there exists $z^{*}_{d+1} \in \mathbb{Z}_n$ such that
$$ \theta(\alpha,\gamma,z,z^{*}_{d+1}) \le \frac{1}{n-1}\sum_{\substack{u\subseteq\{1,\dots,d+1\},\\ d+1\in u}} \gamma_u\,(2\zeta(\alpha))^{|u|}. $$

Φ(α, γ, z) :=

X 1 θ(α, γ, z, zd+1 ) n − 1 z =1 d+1

=

n−1 n−1 X X e2πikhu ·zu /n 1 1X X Q γu α n − 1 z =1 n k=0 u⊆{1,...,d+1}, j∈u |hj | |u| d+1

hu ∈Z0

d+1∈u

=

X 1 γu n − 1 u⊆{1,...,d+1}, d+1∈u

where

X

Y |u|−1

hu−{d+1} ∈Z0

|hj |−α S(hu , zu ),

j∈u−{d+1}

n−1 n−1 X 1 X X e2πikhu ·zu /n S(hu , zu ) = , n k=0 h ∈Z |hd+1 |α z =1 d+1

d+1

0

and for {d + 1} ⊆ u ⊆ {1, . . . , d + 1}, zu and hu denote the |u|-dimensional vectors containing the components of (z, zd+1 ) and (h, hd+1 ) with indices in u. For u = {d + 1} we use the convention that X Y |hj |−α S(hu , zu ) = S(h{d+1} , z{d+1} ). |u|−1

hu−{d+1} ∈Z0

j∈u−{d+1}

We show that for all hu and zu we have S(hu , zu ) ≤ 2ζ(α).

(40)

Indeed, for $\{d+1\} \subseteq u \subseteq \{1,\dots,d+1\}$ we can write $h_u\cdot z_u = c + h_{d+1}z_{d+1}$ for some integer $c$, and
$$ S(h_u,z_u) = \sum_{z=1}^{n-1}\frac1n\sum_{k=0}^{n-1}\sum_{h\in\mathbb{Z}_0}\frac{e^{2\pi i k(c+hz)/n}}{|h|^{\alpha}} = \sum_{z=1}^{n-1}\ \sum_{\substack{h\in\mathbb{Z}_0,\\ c+hz\equiv0\ (\mathrm{mod}\ n)}} |h|^{-\alpha} = \sum_{z=1}^{n-1}\ \sum_{\substack{m\in\mathbb{Z},\\ mn-cz^{-1}\ne0}} |mn - cz^{-1}|^{-\alpha}, $$
where $z^{-1} \in \mathbb{Z}_n$ is the inverse element of $z$ satisfying $zz^{-1} \equiv 1\ (\mathrm{mod}\ n)$. In the last step we have used the fact that $c + hz \equiv 0\ (\mathrm{mod}\ n)$ is equivalent to $h \equiv -cz^{-1}\ (\mathrm{mod}\ n)$, i.e., $h = mn - cz^{-1}$ for some integer $m$. Now if $c \equiv 0\ (\mathrm{mod}\ n)$, then
$$ S(h_u,z_u) = \sum_{z=1}^{n-1}\sum_{m\in\mathbb{Z}_0}|mn|^{-\alpha} = \frac{n-1}{n^{\alpha}}\sum_{m\in\mathbb{Z}_0}|m|^{-\alpha} < 2\zeta(\alpha). $$
If $c \not\equiv 0\ (\mathrm{mod}\ n)$, then the set $\{cz^{-1}\ (\mathrm{mod}\ n) : z = 1,\dots,n-1\} = \{1,\dots,n-1\}$, and thus
$$ S(h_u,z_u) = \sum_{b=1}^{n-1}\sum_{m\in\mathbb{Z}}|mn-b|^{-\alpha} = \sum_{\substack{h\in\mathbb{Z},\\ h\not\equiv0\ (\mathrm{mod}\ n)}} |h|^{-\alpha} < 2\zeta(\alpha), $$
which proves (40).

Theorem 9 Suppose $n$ is a prime number and the weights $\gamma_{s,u}$ are of finite order $q^{*}$, i.e., $\gamma_{s,u} = 0$ for all $u$ with $|u| > q^{*}$, and suppose that $q^{*}$ is the smallest integer with this property. Suppose also that the $\gamma_{s,u}$ are uniformly bounded: $\gamma_{s,u} \le A$ for all $s$ and all $u$. Then


• The generator $z = (1,z_2,\dots,z_s)$ constructed by the CBC algorithm with weights $\gamma := \{\gamma_{s,u}\}$ satisfies
$$ e_{n,s}(\alpha,\gamma,z) \le C_3(\tau)\,s^{q^{*}\tau/2}\,(n-1)^{-\tau/2} \quad\text{for all } \tau \in [1,\alpha), \tag{43} $$
where $C_3(\tau)$ is independent of $s$ and $n$, and depends on $\tau$. Hence the resulting lattice rule achieves the tractability error bound with ε-exponent $2/\tau$, which can be arbitrarily close to $2/\alpha$, and $s$-exponent $q^{*}$.
• If the generator $z = (1,z_2,\dots,z_s)$ is found by the CBC algorithm for the following “enlarged” order-dependent and finite-order weights $\Gamma := \{\Gamma_{s,j}\}$,
$$ \Gamma_{s,j} := \max_{|u|=j}\gamma_{s,u}, \quad j = 1,\dots,q^{*}; \qquad \Gamma_{s,j} = 0 \ \text{for } j > q^{*}, \tag{44} $$
then the same error bound as (43) is satisfied, with possibly a different factor $C_3(\tau)$, and the same tractability result holds.

Proof. We prove the first part. For any $\tau \in [1,\alpha)$, since the weights are of order $q^{*}$, we have
$$ \sum_{\emptyset\ne u\subseteq\{1,\dots,s\}} \gamma_{s,u}^{1/\tau}\,(2\zeta(\alpha/\tau))^{|u|} \le \sum_{\ell=1}^{q^{*}}\binom{s}{\ell} A^{1/\tau}(2\zeta(\alpha/\tau))^{\ell} = O(s^{q^{*}}), \tag{45} $$
with the implied constant in the big-$O$ notation depending on $\tau$. Therefore, from (21), the corresponding constant $C(s,\tau)$ in the error bound (36) satisfies $C(s,\tau) = O(s^{q^{*}\tau/2})$. The error bound (43) then follows from Theorem 6. Moreover, it is clear from (45) that the condition $B_{\tau,q} < \infty$ is satisfied with $q = q^{*}$ and for all $\tau \in [1,\alpha)$, where $B_{\tau,q}$ is defined in (24). From Theorem 8 the tractability result follows.

Now we prove the second part. If the generator $z = (1,z_2,\dots,z_s)$ is found by the CBC algorithm for the enlarged order-dependent weights (44), then $e_{n,s}(\alpha,\gamma,z) \le e_{n,s}(\alpha,\Gamma,z)$. The results then follow as for the first part. $\Box$

In many applications, although $s$ is huge, functions can be well approximated by sums of functions that depend on groups of just a few variables, up to a given order $q^{*}$. We can model this situation by finite-order weights with small $q^{*}$. For instance, for some finance problems $q^{*} = 2$ or $q^{*} = 3$, see [19, 20]. Moreover, lattice rules constructed for the “enlarged” order-dependent and finite-order weights (44) can achieve the same convergence rate as for the general finite-order weights. As we shall see, the cost of the CBC algorithm for weights that are both finite-order and order-dependent can be much smaller than the cost for general weights. Furthermore, the cost for order-dependent weights can be much smaller than the cost for finite-order weights.
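The CBC algorithm can be realized directly from (35) by enumerating the subsets $u$ explicitly; the enumeration is what makes the cost exponential in the dimension, which is the issue addressed in the next subsection. The following sketch (plain Python, for $\alpha = 2$, using the closed form $\sum_{h\ne0} e^{2\pi i h k z/n}/|h|^2 = 2\pi^2 B_2(\{kz/n\})$; the weight dictionary and all parameter values are our own illustrative choices, not the authors' code) shows the construction for small $d$:

```python
import math
from itertools import combinations

def D2(t):
    # D_2(t) = sum_{h>=1} cos(2*pi*h*t)/h^2 = pi^2 * B_2({t})
    t = t % 1.0
    return math.pi**2 * (t * t - t + 1.0 / 6.0)

def sq_worst_case_error(z, n, gamma):
    # e^2_{n,d} of (35) for alpha = 2; gamma maps frozensets of 1-based
    # coordinate indices to weights gamma_u (missing subsets have weight 0)
    d = len(z)
    total = 0.0
    for k in range(n):
        C = [2.0 * D2(k * zj / n) for zj in z]
        for ell in range(1, d + 1):
            for u in combinations(range(1, d + 1), ell):
                w = gamma.get(frozenset(u), 0.0)
                if w:
                    p = w
                    for j in u:
                        p *= C[j - 1]
                    total += p
    return total / n

def cbc(n, s, gamma):
    # greedy component-by-component search: z_1 = 1, then each z_d minimizes (35)
    z = [1]
    for d in range(2, s + 1):
        z.append(min(range(1, n),
                     key=lambda zd: sq_worst_case_error(z + [zd], n, gamma)))
    return z
```

By Theorem 6 with $\tau = 1$, for prime $n$ the resulting generator satisfies $e^2_{n,s} \le (n-1)^{-1}\sum_u \gamma_u (2\zeta(2))^{|u|}$, which is easy to confirm numerically for small cases.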

5.2 Computational cost

We analyze the total cost of the CBC algorithm for finite-order and for order-dependent weights. For finite-order weights of order $q^{*}$, the corresponding reproducing kernel is a sum of only $O(s^{q^{*}})$ terms. Thus the sum over $u$ in the definition of $e^2_{n,d}(1,z_2,\dots,z_{d-1},z_d)$, see (35), consists of $O(d^{q^{*}})$ terms, and the total cost of the CBC algorithm is $O(n^2 s^{q^{*}+2}\alpha)$. The computational cost is exponential in the order $q^{*}$, but this is not dangerous as long as $q^{*}$ is not large. Further results on multivariate integration in weighted spaces of functions equipped with finite-order weights can be found in [14].

We now turn to the case of order-dependent weights, $\gamma_{s,u} = \Gamma_{s,|u|}$, with $\Gamma_{s,0} = 1$. For simplicity we write $\Gamma_{s,|u|}$ as $\Gamma_{|u|}$. As before, assume that $D_\alpha(t)$ can be computed with cost of order $\alpha$. The squared worst-case error $e^2_{n,d}(z) = e^2_{n,d}(1,z_2,\dots,z_d)$ is given by
$$ e^2_{n,d}(z) = \frac1n\sum_{k=0}^{n-1}\ \sum_{\emptyset\ne u\subseteq\{1,\dots,d\}} \gamma_u \prod_{j\in u} 2D_\alpha\Big(\Big\{\frac{k z_j}{n}\Big\}\Big) = \frac1n\sum_{k=0}^{n-1}\sum_{\ell=1}^{d}\Gamma_\ell \sum_{\substack{u\subseteq\{1,\dots,d\},\\ |u|=\ell}}\ \prod_{j\in u} 2D_\alpha\Big(\Big\{\frac{k z_j}{n}\Big\}\Big) = \sum_{\ell=1}^{d}\Gamma_\ell\, D^{(\ell)}, $$
where
$$ D^{(\ell)} = \frac1n\sum_{k=0}^{n-1}\ \sum_{\substack{u\subseteq\{1,\dots,d\},\\ |u|=\ell}}\ \prod_{j\in u} 2D_\alpha\Big(\Big\{\frac{k z_j}{n}\Big\}\Big), \qquad \ell = 1,2,\dots,d. $$

Note that $D^{(\ell)}$ can be considered as an overall measure of the quality of the $\ell$-dimensional projections of the lattice rule with generator $z$. The quantity $D^{(1)}$ has the same value for all rank-1 lattice rules if $n$ is prime, since every one-dimensional projection of such a lattice rule is just $\{k/n : k = 0,1,\dots,n-1\}$. The formula for $D^{(\ell)}$ involves quantities of the form
$$ \sum_{\substack{u\subseteq\{1,\dots,d\},\\ |u|=\ell}}\ \prod_{j\in u} C_{k,j}, \qquad\text{for } C_{k,j} = 2D_\alpha\big(\{k z_j/n\}\big). $$
We give a recursive formula to compute such quantities. Define
$$ T_k(m,\ell) = \sum_{\substack{u\subseteq\{1,\dots,m\},\\ |u|=\ell}}\ \prod_{j\in u} C_{k,j}, \qquad\text{for } m = 1,2,\dots,d \text{ and } \ell = 1,2,\dots,m. \tag{46} $$
We can view $T_k = (T_k(m,\ell))$ as a $d\times d$ lower triangular matrix. Obviously,
$$ T_k(m,1) = \sum_{j=1}^{m} C_{k,j} \qquad\text{and}\qquad T_k(m,m) = \prod_{j=1}^{m} C_{k,j} $$
for $m = 1,2,\dots,d$. The elements of the $d$th row of $T_k$ are used to compute
$$ e^2_{n,d}(z) = \frac1n\sum_{k=0}^{n-1}\Bigg(\sum_{\ell=1}^{d}\Gamma_\ell\, T_k(d,\ell)\Bigg). $$
From (46) we get
$$ T_k(m,\ell) = T_k(m-1,\ell) + C_{k,m}\,T_k(m-1,\ell-1), \qquad\text{for } m \ge 2,\ \ell \ge 2. $$

for m ≥ 2, ` ≥ 2.

This allows us to compute Tk (m, `) by the following algorithm: Tk (1, 1) = Ck,1 , for m = 2, 3, . . . , d m X Ck,j , Tk (m, 1) =

Tk (m, m) =

j=1

m Y

Ck,j ,

j=1

for ` = 2, 3, . . . , m − 1 Tk (m, `) = Tk (m − 1, `) + Ck,m Tk (m − 1, ` − 1). This algorithm is especially convenient when we successively increase d = 2, 3, . . . , s, and therefore is especially well suited to the CBC algorithm. If the Tk (d − 1, `)’s have been P computed, then the next step to compute all Tk (d, `) as well as d`=1 Γ` Tk (d, `) requires only O(d) operations. The computation of e2n,d+1 (z) requires O(nd) operations, and therefore the total cost of the CBC algorithm is O(n2 s2 ). Finally, if the order-dependent weights are also of finite-order q ∗ , then the total cost of the algorithms is reduced to O(n2 s q ∗ ).
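The recursion takes only a few lines to implement. The sketch below (plain Python, $\alpha = 2$, illustrative parameter values) evaluates $e^2_{n,d}$ for order-dependent weights via the $O(d)$-per-coordinate update and cross-checks it against direct subset enumeration:

```python
import math
from itertools import combinations

def D2(t):
    # D_alpha for alpha = 2: D_2(t) = pi^2 * (t^2 - t + 1/6) on [0, 1)
    t = t % 1.0
    return math.pi**2 * (t * t - t + 1.0 / 6.0)

def e2_order_dependent(z, n, Gamma):
    # e^2_{n,d}(z) = (1/n) sum_k sum_ell Gamma[ell] * T_k(d, ell), with T_k built
    # by the recursion T_k(m,ell) = T_k(m-1,ell) + C_{k,m} * T_k(m-1,ell-1).
    d = len(z)
    total = 0.0
    for k in range(n):
        C = [2.0 * D2(k * zj / n) for zj in z]
        T = [0.0] * (d + 1)
        T[0] = 1.0                       # empty product
        for m in range(1, d + 1):
            for ell in range(min(m, d), 0, -1):   # descend: T[ell-1] is still row m-1
                T[ell] += C[m - 1] * T[ell - 1]
        total += sum(Gamma[ell] * T[ell] for ell in range(1, min(len(Gamma), d + 1)))
    return total / n

def e2_bruteforce(z, n, Gamma):
    # direct evaluation by subset enumeration (exponential in d; for checking only)
    d = len(z)
    total = 0.0
    for k in range(n):
        C = [2.0 * D2(k * zj / n) for zj in z]
        for ell in range(1, min(len(Gamma), d + 1)):
            for u in combinations(range(d), ell):
                p = Gamma[ell]
                for j in u:
                    p *= C[j]
                total += p
    return total / n
```

The in-place descending update of `T` is the usual trick for elementary-symmetric-polynomial recursions: it keeps a single row of the triangular matrix $T_k$, exactly as needed when the CBC algorithm extends $d$ one coordinate at a time.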

6 Numerical comparisons

We present a number of numerical tests. First, we compare the worst-case errors of CBC lattice rules and Korobov lattice rules for several choices of weights. Then we test the CBC algorithm on a specific smooth function for which various choices of order-dependent, finite-order and product weights are appropriate. Finally, we compare the practical performance of various known algorithms with the CBC algorithm on a high-dimensional finance problem.

6.1 Comparisons of worst-case errors

We perform numerical comparisons for $\alpha = 2$, so that $\zeta(\alpha) = \sum_{h=1}^{\infty}h^{-2} = \pi^2/6$. For a given $s$ and a sequence of order-dependent weights $\Gamma_1,\dots,\Gamma_s$, we compare the following:
• The root mean square average of the worst-case error, given by (34) with $\alpha = 2$,
$$ e^{\mathrm{avg}}_{n,s} = \Bigg(\frac1n\sum_{\ell=1}^{s}\binom{s}{\ell}(2\zeta(2))^{\ell}\,\Gamma_\ell\Bigg)^{1/2}. $$
• The root mean square of the worst-case error over all lattice rules, $[M_{n,s}(2)]^{1/2}$, with $M_{n,s}(2)$ defined in (19) for $\alpha = 2$, i.e.,
$$ M_{n,s}(2) = \frac1n\sum_{\ell=1}^{s}\binom{s}{\ell}(2\zeta(2))^{\ell}\,\Gamma_\ell + (1-n^{-1})\sum_{\ell=1}^{s}\binom{s}{\ell}\Big({-\frac{2\zeta(2)}{n}}\Big)^{\ell}\,\Gamma_\ell. $$

• The worst-case error of lattice rules constructed by the CBC algorithm.
• The worst-case error of Korobov lattice rules, i.e., lattice rules with generator of the form $z = (1, a, a^2, \dots, a^{s-1}) \ (\mathrm{mod}\ n)$, where $a \in \mathbb{Z}_n$ is found by minimizing $e_{n,s}(z)$ for each fixed $s$.

For order-dependent weights the worst-case error $e_{n,s}(z)$ is computed using the recursive formula developed in Section 5.2. For a given dimension $s$, the computational cost of searching for the best Korobov lattice rule for the single dimension $s$ and that of the CBC search for all dimensions up to $s$ are approximately the same. CBC lattice rules have the advantage of being extensible in dimension (if the weights are independent of the dimension), whereas Korobov lattice rules are not. The error bound of Korobov lattice rules in weighted spaces has been studied for product weights in [22] (the same analysis applies to non-product weights). In general, the known error bounds for Korobov lattice rules have a stronger dependence on the dimension than those for CBC lattice rules.

We consider two choices of weights, each of which is both finite-order and order-dependent:
• (A): $\Gamma_1 = 1$, $\Gamma_2 = \frac{1}{2\pi^2}$, and $\Gamma_\ell = 0$ for $\ell \ge 3$;
• (B): $\Gamma_\ell = \frac{1}{(2\pi^2)^{\ell-1}}$, $\ell = 1,2,3$, and $\Gamma_\ell = 0$ for $\ell \ge 4$.

According to Theorem 9, the resulting CBC lattice rules achieve tractability error bounds with ε-exponent 1 and $s$-exponent at most 2 for choice (A), or at most 3 for choice (B).

  n     Method      s=4      s=8      s=16     s=32     s=64     s=128
 251    Mean QMC   2.56e-1  4.07e-1  6.87e-1  1.23e00  2.29e00  4.41e00
        Mean LR    1.16e-1  2.49e-1  5.13e-1  1.04e00  2.10e00  4.22e00
        Korobov    3.41e-2  7.40e-2  2.12e-1  5.35e-1  1.38e00  3.33e00
        CBC        3.29e-2  7.71e-2  2.11e-1  5.57e-1  1.43e00  3.33e00
 1009   Mean QMC   1.28e-1  2.03e-1  3.43e-1  6.11e-1  1.14e00  2.20e00
        Mean LR    5.72e-2  1.24e-1  2.56e-1  5.20e-1  1.05e00  2.10e00
        Korobov    9.31e-3  2.19e-2  4.83e-2  1.59e-1  4.25e00  1.08e00
        CBC        8.98e-3  2.10e-2  5.13e-2  1.54e-1  4.14e00  1.10e00
 4001   Mean QMC   6.41e-2  1.02e-1  1.72e-1  3.07e-1  7.54e-1  1.10e00
        Mean LR    2.87e-2  6.20e-2  1.28e-1  2.61e-1  5.26e-1  1.06e00
        Korobov    2.57e-3  6.05e-3  1.51e-2  3.66e-2  1.10e-1  3.03e-1
        CBC        2.44e-3  5.59e-3  1.36e-2  3.69e-2  1.11e-1  3.07e-1
 order  Korobov    0.93     0.90     0.95     0.97     0.91     0.87
        CBC        0.94     0.95     0.99     0.98     0.92     0.86

Table 1. Comparisons of the worst-case errors and the convergence orders, i.e., the value $r$ in $e_{n,s}(z) = O(n^{-r})$, for the weights (A): $\Gamma_1 = 1$, $\Gamma_2 = \frac{1}{2\pi^2}$ and $\Gamma_\ell = 0$ for $\ell \ge 3$. "Mean QMC" and "Mean LR" stand for the root mean square average of the worst-case error over all QMC rules and over all lattice rules, respectively.
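The "Mean QMC" entries of Table 1 can be reproduced directly from formula (34); a minimal check (plain Python, our own code) is:

```python
import math
from math import comb

def e_avg(n, s, Gamma):
    # formula (34) for alpha = 2 and order-dependent weights:
    # e_avg = sqrt( (1/n) * sum_ell binom(s, ell) * (2*zeta(2))^ell * Gamma_ell )
    two_zeta2 = math.pi**2 / 3
    total = sum(comb(s, ell) * two_zeta2**ell * g
                for ell, g in enumerate(Gamma, start=1))
    return math.sqrt(total / n)

# weights (A): Gamma_1 = 1, Gamma_2 = 1/(2*pi^2), all higher Gamma_ell = 0
weights_A = [1.0, 1.0 / (2 * math.pi**2)]
assert abs(e_avg(251, 4, weights_A) - 2.56e-1) < 5e-4   # 'Mean QMC', n = 251,  s = 4
assert abs(e_avg(1009, 4, weights_A) - 1.28e-1) < 5e-4  # 'Mean QMC', n = 1009, s = 4
```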

  n     Method      s=4      s=8      s=16     s=32     s=64     s=128
 251    Mean QMC   2.59e-1  4.32e-1  8.22e-1  1.82e00  4.52e00  1.20e+1
        Mean LR    1.22e-1  2.87e-1  6.84e-1  1.70e00  4.43e00  1.19e+1
        Korobov    4.55e-2  1.57e-1  4.69e-1  1.46e00  4.16e00  1.17e+1
        CBC        4.48e-2  1.49e-1  4.72e-1  1.41e00  4.05e00  1.12e+1
 1009   Mean QMC   1.29e-1  2.15e-1  4.10e-1  9.07e-1  2.25e00  5.98e00
        Mean LR    6.03e-2  1.43e-1  3.41e-1  8.48e-1  2.21e00  5.94e00
        Korobov    1.33e-2  5.12e-2  1.88e-1  6.10e-1  1.91e00  5.65e00
        CBC        1.34e-2  4.84e-2  1.74e-1  5.91e-1  1.83e00  5.36e00
 4001   Mean QMC   6.48e-2  1.08e-1  2.06e-1  4.56e-1  1.13e00  3.00e00
        Mean LR    3.02e-2  7.15e-2  1.71e-1  4.26e-1  1.11e00  2.99e00
        Korobov    4.17e-3  1.69e-2  6.47e-2  2.38e-1  7.70e-1  2.53e00
        CBC        4.09e-3  1.56e-2  6.20e-2  2.27e-1  7.71e-1  2.46e00
 order  Korobov    0.86     0.80     0.71     0.65     0.61     0.55
        CBC        0.86     0.81     0.73     0.66     0.60     0.55

Table 2. The same as Table 1, but for the weights (B): $\Gamma_\ell = 1/(2\pi^2)^{\ell-1}$, $\ell = 1,2,3$, and $\Gamma_\ell = 0$ for $\ell \ge 4$.

Tables 1 and 2 present the comparisons of the worst-case errors for the two choices of weights in dimensions up to $s = 128$ for $n = 251$, 1009 and 4001. The apparent convergence order in each dimension for each method, i.e., the value $r$ in $e_{n,d}(z) = O(n^{-r})$ estimated from linear regression on the empirical data, is also given. We observe that for the two choices of weights above, both the CBC and Korobov lattice rules behave much better than the average values $e^{\mathrm{avg}}_{n,s}$ and $[M_{n,s}(2)]^{1/2}$, with the CBC lattice rules being slightly better than the Korobov rules, especially in case (B). Both averages converge as $O(n^{-0.5})$, as they should. For the weights (A), both the CBC and Korobov lattice rules achieve nearly the optimal convergence order $O(n^{-1})$ and seem to have a very weak dependence on the dimension. For the weights (B), the convergence orders of the CBC and Korobov lattice rules are worse than those for the weights (A) and seem to have a stronger dependence on the dimension, as the theory predicts.

6.2 Test function

We tested the CBC algorithm on the specific function
$$ f(x) = 1 + \sum_{\substack{i,j=1,\\ i\ne j}}^{s} x_i\big(x_i-\tfrac12\big)(x_i-1)\,x_j\big(x_j-\tfrac12\big)(x_j-1). \tag{47} $$
The exact value of the integral of this function is 1. The function $f$ is periodic on $[0,1]^s$. It is easy to show that $f$ belongs to $H_{s,2}$ for any weights $\{\gamma_{s,u}\}$ with positive $\gamma_{s,u}$ for $|u| \le 2$. The integral of $f$ has been approximated using 14 different generating vectors of lattice rules for $\alpha = 2$. These correspond to four sets of both order-dependent and finite-order weights, three sets of product weights, and two values of $n$. The absolute errors are given in Table 3.

  Weights                                           CBC rules
                                             n = 4001     n = 8009
  Γ1 = 1, Γ2 = 1/(2π²)                       9.033e-08    1.453e-08
  Γ1 = 1, Γ2 = 1/(2π²), Γ3 = 1/(4π⁴)        1.329e-05    4.929e-06
  Γ1 = 1, Γ2 = 1/(2π²), Γ3 = 0.5/(4π⁴)      5.791e-09    7.952e-07
  Γ1 = 1, Γ2 = 1/(2π²), Γ3 = 0.1/(4π⁴)      6.508e-07    4.658e-07
  γj = 0.1/(2π²)                             1.968e-07    4.460e-08
  γj = 0.9^j/(2π²)                           2.635e-06    8.274e-07
  γj = 1/(2π² j²)                            1.499e-05    2.801e-07

Table 3. Absolute error of the test function for $s = 64$.

It is clear from the form of $f$ that the first weights are the "correct" weights for this problem. We see this reflected in the more accurate results for the first case. The second, third and fourth cases differ from the first in that an additional weight $\Gamma_3$ is introduced. We see an increase in the recorded error, although the error decreases with $\Gamma_3$. For product weights we obtain results similar to the cases with $\Gamma_3$. This test highlights the important role played by the choice of the weights.
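The flavor of this experiment is easy to reproduce. The sketch below (plain Python; the Korobov-style parameter search, the grid of candidates, and the use of the order-2 criterion with the weights (A) are our own illustrative choices, not the authors' setup) integrates (47) with a rank-1 lattice rule:

```python
import math

def D2(t):
    t = t % 1.0
    return math.pi**2 * (t * t - t + 1.0 / 6.0)

def f(x):
    # test function (47): 1 + sum_{i != j} g(x_i) g(x_j), with g(t) = t(t-1/2)(t-1);
    # the double sum is computed via power sums: sum_{i != j} = (sum g)^2 - sum g^2
    g = [t * (t - 0.5) * (t - 1.0) for t in x]
    s1 = sum(g)
    s2 = sum(v * v for v in g)
    return 1.0 + s1 * s1 - s2

def lattice_rule(func, z, n):
    # rank-1 lattice rule: Q f = (1/n) sum_{k=0}^{n-1} f({k z / n})
    return sum(func([(k * zj / n) % 1.0 for zj in z]) for k in range(n)) / n

def criterion(z, n):
    # squared worst-case error for alpha = 2 and the weights (A):
    # Gamma_1 = 1, Gamma_2 = 1/(2*pi^2), higher orders zero (power-sum trick again)
    G2 = 1.0 / (2.0 * math.pi**2)
    total = 0.0
    for k in range(n):
        C = [2.0 * D2(k * zj / n) for zj in z]
        S1 = sum(C)
        S2 = sum(c * c for c in C)
        total += S1 + G2 * (S1 * S1 - S2) / 2.0
    return total / n

n, s = 251, 8
a_best = min(range(2, n), key=lambda a: criterion([pow(a, j, n) for j in range(s)], n))
z = [pow(a_best, j, n) for j in range(s)]
est = lattice_rule(f, z, n)
assert abs(est - 1.0) < 1e-3          # the exact integral of (47) is 1
```

Because $f$ is smooth and periodic with only second-order interactions, even this small $n$ gives an error far below the assertion threshold once the search has produced good two-dimensional projections.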

6.3 Finance examples

For practical applications the weights should be chosen to reflect the characteristics of the problem at hand. For example, for many problems in finance the underlying functions are nearly a superposition of functions of 2 or 3 variables [19, 20]. The problem of which product weights should be used in practice is studied in [21]; it is pointed out there that unsuitable choices of weights may not give good results. Here we choose the weights (A) and (B) as indicated in Subsection 6.1. These choices generate lattice rules with nearly optimal low-dimensional projections, and thus may be suitable for finance applications. We give examples to illustrate the practical performance of these lattice rules.

Consider the problem of pricing path-dependent options. We assume that the payoff at the expiration time $T$ is $g(S_{t_1},\dots,S_{t_s})$, where $S_t$ is the price of the underlying asset at time $t$. In our examples we take equally spaced times $t_j = j\Delta t$, $j = 1,\dots,s$, $\Delta t = T/s$, with $t_0 = 0$. We assume that under the risk-neutral measure the underlying asset follows geometric Brownian motion (the Black-Scholes model):
$$ dS_t = rS_t\,dt + \sigma S_t\,dB_t, $$
where $r$ and $\sigma$ are the risk-free interest rate and the volatility, and $B_t$ is standard Brownian motion. Based on the risk-neutral valuation principle, the value of the option at $t = 0$ is
$$ C_0 = E_Q\big[e^{-rT}g(S_{t_1},\dots,S_{t_s})\big], $$
where $E_Q[\cdot]$ denotes expectation under the risk-neutral measure $Q$. The dimension of the resulting integral is the number of time steps. We consider three kinds of path-dependent options: arithmetic average Asian options, barrier options and look-back options. With $K$ being the strike price, their payoffs are:
• Arithmetic Asian call option: $g(S_{t_1},\dots,S_{t_s}) = \max\big(0, \frac1s\sum_{j=1}^{s}S_{t_j} - K\big)$.
• Barrier option (down-and-out call option, with barrier $B_a$):
$$ g(S_{t_1},\dots,S_{t_s}) = \begin{cases} \max(0, S_T - K), & \text{if } S_{t_j} > B_a \text{ for all } j = 1,\dots,s,\\[2pt] 0, & \text{if } S_{t_j} \le B_a \text{ for some } j = 1,\dots,s. \end{cases} $$
• Look-back call option: $g(S_{t_1},\dots,S_{t_s}) = S_T - \min\{S_{t_1},\dots,S_{t_s}\}$.

Changing variables, we reduce the computation of $C_0$ to $\int_{[0,1]^s}f(x)\,dx$ with the integrand $f$ depending on the option $g$. We stress that $f$ is neither periodic nor smooth. The lack of periodicity is not so important, since we can use lattice rules with random shifts, and such randomly shifted lattice rules work for non-periodic functions, see [13]. The lack of smoothness is more important, and it is present in practically all finance problems due to the use of min and max in the definition of the options, as well as due to the change of variables. As is typical in this area, the lack of smoothness of finance integrands does not stop us from using QMC algorithms such as randomly shifted lattice rules or other low-discrepancy algorithms. It is remarkable that QMC algorithms are so effective and significantly outperform the Monte Carlo (MC) algorithm even though not all the theoretical assumptions are satisfied for finance integrands. This phenomenon is not yet fully understood and is definitely beyond the scope of this paper. Nevertheless, we join the large group of people testing CBC lattice rules on finance problems. We compare the efficiency of the CBC lattice rules and Korobov lattice rules using the weights (A) or (B) with the efficiency of the Sobol' algorithm [17] and the MC algorithm.

To get an estimate of the accuracy of each method, we compute the sample variance of the randomized QMC estimates. For lattice rules we use the random shift method, see [13]. For Sobol' points we use the digital scrambling method as used in [21]. The relative efficiency ratio of an estimate with respect to the crude MC estimate is defined as the inverse of the ratio of their sample variances; it is thus a measure of the factor by which $n$ would need to be increased to achieve the same error with the crude Monte Carlo method. To reduce the effective dimension, we use the Brownian bridge construction [3] and the construction based on principal component analysis [1]. The relative efficiency ratios are given in Tables 4 and 5.
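A minimal sketch of the Brownian bridge path construction used above (plain Python; the bisection variant below assumes the number of time steps is a power of two, and all names and parameter values are our own illustrative choices). Its point for QMC is that the first normal variate alone determines $B(T)$, so most of the path's variance is carried by the leading coordinates of the point set:

```python
import math

def bb_brownian(normals, T):
    # Brownian bridge construction of B at s = len(normals) equally spaced times:
    # the first normal fixes B(T); later normals fill midpoints conditionally.
    s = len(normals)
    assert s & (s - 1) == 0            # power of two, for the simple bisection scheme
    B = [0.0] * (s + 1)                # B[0] = B(t_0) = 0
    B[s] = math.sqrt(T) * normals[0]
    used, step = 1, s
    while step > 1:
        half = step // 2
        for left in range(0, s, step):
            right, mid = left + step, left + half
            t_l, t_m, t_r = left * T / s, (left + half) * T / s, (left + step) * T / s
            mean = B[left] + (t_m - t_l) / (t_r - t_l) * (B[right] - B[left])
            var = (t_m - t_l) * (t_r - t_m) / (t_r - t_l)
            B[mid] = mean + math.sqrt(var) * normals[used]
            used += 1
        step = half
    return B[1:]                       # B(t_1), ..., B(t_s)

def gbm_from_brownian(B, S0, r, sigma, T):
    # geometric Brownian motion path from a Brownian path (Black-Scholes dynamics)
    s = len(B)
    return [S0 * math.exp((r - 0.5 * sigma * sigma) * (j * T / s) + sigma * B[j - 1])
            for j in range(1, s + 1)]

def asian_call_payoff(S, K):
    return max(0.0, sum(S) / len(S) - K)
```

With all but the first normal set to zero, the construction returns the straight line interpolating $0$ and $B(T)$, which is a handy way to see that the terminal value is driven entirely by the first coordinate.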


                                       Lattice (A)          Lattice (B)
 Strike price         MC     Sobol   Korobov    CBC      Korobov    CBC
  90    STD          1.00      15       77       104        62        57
        BB          0.998     561      235       487       118       502
        PCA         0.997     835      796       927       881       888
  100   STD          1.00       5       22        31        23        17
        BB           1.00     302      115       319        40       305
        PCA         0.998     674      669       734       619       741
  110   STD          1.00       2        9         8         8         8
        BB           1.00     206       50       116        17       154
        PCA          1.00     416      368       425       337       434

Table 4. The relative efficiency ratios with respect to crude MC for an arithmetic Asian option (with $m = 100$ replications, $s = 64$, $n = 4001$; for the Sobol' point set $n = 4096$). The parameters are $S_0 = 100$, $\sigma = 0.2$, $T = 1.0$, $r = 0.1$. The second column gives the path generation method: STD: standard (sequential) construction of the Brownian motion; BB: Brownian bridge construction; PCA: principal component analysis.

                                            Lattice (A)          Lattice (B)
 Option       Strike         MC     Sobol   Korobov    CBC     Korobov    CBC
 Barrier        90    STD   1.00      6       34        38        25       19
 options              BB   0.997    147      135       182       115      209
                      PCA  0.996    131       91       151       106      168
               100    STD   1.00      3        9         9         9        7
                      BB   0.996     16       21        13        18       34
                      PCA  0.995     23       17        21        29       25
 Look-back            STD   1.00      6       31        30        32       19
 options              BB   0.997     43      101       107       111      155
                      PCA  0.997    113       94        99        92      186

Table 5. The same as Table 4, but for barrier options and look-back options. For the barrier options the barrier is $K - 10$, where $K$ is the strike price.

As we can see from these tables, both the CBC lattice rules and the Korobov lattice rules with the weights (A) or (B) improve on the efficiency of MC by a large factor, ranging from about 10 to 1000. There is not much difference in efficiency between the two choices of weights, although the construction using the weights (A) is faster. In most cases the CBC lattice results are better than those of the Korobov lattices, and also better than the Sobol' results. Significant additional variance reduction can be obtained by standard techniques such as antithetic variates and control variates (we omit the details). A remarkable feature of these (CBC and Korobov) lattice rules is their robustness: they can outperform MC on a variety of high-dimensional problems. Thus all these lattice rules are practical for applications.

Acknowledgments We are grateful to Frances Kuo and Dirk Nuyens who told us how to eliminate the extra power of s in the cost of the algorithm for order-dependent weights. We are also thankful to Ben Waterhouse for performing numerical calculations for the function (47). The support of the Australian Research Council under its Centres of Excellence Program is gratefully acknowledged. The third author is pleased to acknowledge the support of the National Science Foundation of China, and the last author the support of the National Science Foundation of USA.

References

[1] P. Acworth, M. Broadie and P. Glasserman, A comparison of some Monte Carlo and quasi-Monte Carlo techniques for option pricing. In Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, P. Hellekalek and H. Niederreiter, eds., 1-18, Springer-Verlag, 1997.
[2] G. Brown, G. A. Chandler, I. H. Sloan and D. C. Wilson, Properties of certain trigonometric series arising in numerical analysis. J. Math. Anal. Appl., 371-380, 1991.
[3] R. E. Caflisch, W. Morokoff and A. B. Owen, Valuation of mortgage-backed securities using Brownian bridges to reduce effective dimension. J. Comp. Finance, 1 (1997), 27-46.
[4] J. Dick, On the convergence order of the component-by-component construction of good lattice rules. J. Complexity, 20 (2004), 493-522.
[5] J. Dick, I. H. Sloan, X. Wang and H. Woźniakowski, Liberating the weights. J. Complexity, 20 (2004), 593-623.
[6] F. J. Hickernell and X. Wang, The error bounds and tractability of quasi-Monte Carlo algorithms in infinite dimension. Math. Comp., 71 (2002), 1641-1661.
[7] F. J. Hickernell and H. Woźniakowski, Integration and approximation in arbitrary dimensions. Adv. Comput. Math., 12 (2000), 25-58.
[8] F. Y. Kuo, Component-by-component constructions achieve the optimal rate of convergence for multivariate integration in weighted Korobov and Sobolev spaces. J. Complexity, 19 (2003), 301-320.
[9] F. Y. Kuo and S. Joe, Component-by-component construction of good lattice points with a composite number of points. J. Complexity, 18 (2002), 943-976.
[10] H. Niederreiter, Random Number Generation and Quasi-Monte Carlo Methods. SIAM, Philadelphia, 1992.
[11] E. Novak and H. Woźniakowski, Intractability results for integration and discrepancy. J. Complexity, 17 (2001), 388-441.
[12] I. H. Sloan and S. Joe, Lattice Methods for Multiple Integration. Oxford University Press, Oxford, 1994.


[13] I. H. Sloan, F. Y. Kuo and S. Joe, Constructing randomly shifted lattice rules in weighted Sobolev spaces. SIAM J. Numer. Anal., 40 (2002), 1650-1665.
[14] I. H. Sloan, X. Wang and H. Woźniakowski, Finite-order weights imply tractability of multivariate integration. J. Complexity, 20 (2004), 46-74.
[15] I. H. Sloan and H. Woźniakowski, When are quasi-Monte Carlo algorithms efficient for high dimensional integrals? J. Complexity, 14 (1998), 1-33.
[16] I. H. Sloan and H. Woźniakowski, Tractability of multivariate integration for weighted Korobov classes. J. Complexity, 17 (2001), 697-721.
[17] I. M. Sobol', On the distribution of points in a cube and the approximate evaluation of integrals. Zh. Vychisl. Mat. i Mat. Fiz., 7 (1967), 784-802.
[18] J. F. Traub, G. W. Wasilkowski and H. Woźniakowski, Information-Based Complexity. Academic Press, New York, 1988.
[19] X. Wang and K. T. Fang, Effective dimensions and quasi-Monte Carlo integration. J. Complexity, 19 (2003), 101-124.
[20] X. Wang and I. H. Sloan, Why are high-dimensional finance problems often of low effective dimension? SIAM J. Sci. Comput., to appear, 2005.
[21] X. Wang and I. H. Sloan, Efficient weighted lattice rules with application to finance. Submitted for publication, 2002.
[22] X. Wang, I. H. Sloan and J. Dick, On Korobov lattice rules in weighted spaces. SIAM J. Numer. Anal., 42 (2004), 1760-1779.
