Math. Z. 211, 671-686 (1992)

Mathematische Zeitschrift
© Springer-Verlag 1992

Average errors for zero finding: Lower bounds

Erich Novak and Klaus Ritter*

Mathematisches Institut, Universität Erlangen-Nürnberg, Bismarckstrasse 1 1/2, W-8520 Erlangen, Federal Republic of Germany

Received October 7, 1991; in final form February 26, 1992
1 Introduction

There are a number of papers studying linear problems of numerical analysis, such as integration or $L_q$-reconstruction, on the average. See, e.g., Traub et al. (1988), Novak (1992), Ritter (1990), and Woźniakowski (1991) for recent surveys and new results. From these references it can also be seen that not much is known about average errors of optimal methods for nonlinear problems such as global optimization or zero finding. In Novak (1989) known upper bounds for the average error of zero finding methods are surveyed. Up to now, lower bounds for the average error were not known for this problem, or for any other nonlinear problem of numerical analysis.

In this paper we prove that the average error for computing a zero using $n$ function values is bounded from below by $\gamma^n$ for some $\gamma > 0$. Here we study the class $F$ of continuous functions on $[0,1]$ with $f(0) = -1$ and $f(1) = 1$ and consider the Brownian bridge on $F$. This result shows that the linear convergence, which holds for the root criterion in the worst case, cannot be beaten in the average case. We feel that our method of proof can also be useful for the proof of lower bounds for other measures (yielding smoother functions) and other nonlinear problems.

The plan of this paper is as follows. In Sect. 2 we give a more detailed formulation of the problem. We also mention some background material and state our main result together with some ideas of its proof. In particular, we briefly discuss the difference between proving lower bounds for the average error for linear and for nonlinear problems. In Sect. 3 we study local average errors. In Sect. 4 we explain our technique of adding auxiliary knots, which is important for the inductive proof of the main result in Sect. 5.
* The second author is supported by the Deutsche Forschungsgemeinschaft (DFG)
2 Problem formulation and main result

We study the approximate solution of a nonlinear equation $f(x) = 0$, assuming that $f$ belongs to a class
$$F \subset \{ f\colon [0,1] \to \mathbb{R} \mid f \text{ continuous},\ f(0) < 0,\ f(1) > 0 \} = F^* .$$
To compute such an approximate solution we use a partial information on $f$ which consists of $n$ adaptively, i.e., sequentially, computed function values $f(x_k)$. Processes which use derivatives of $f$ are also used in practice, and although they are valuable in some situations, they are not treated here. The selection of the knot $x_k$, which may depend on the values $f(x_1), \ldots, f(x_{k-1})$, is given by a measurable mapping
$$x_k\colon \mathbb{R}^{k-1} \to [0,1] .$$
In particular $x_1 \in [0,1]$ is a constant. Hence the information on $f$ after $k$ evaluations is
$$N_k(f) = (y_1, \ldots, y_k) \quad \text{with} \quad y_l = f(x_l(y_1, \ldots, y_{l-1}))$$
for $l = 1, \ldots, k$. Any mapping $N_k = [x_1, \ldots, x_k]\colon F \to \mathbb{R}^k$ of this form is called an adaptive information operator. The approximation $S_n(f)$ to a zero of $f$ is constructed by an algorithm which uses the information $N_n(f)$, i.e.,
$$S_n(f) = \varphi_n(f(x_1), \ldots, f(x_n(\ldots))) = \varphi_n(N_n(f)) ,$$
where $\varphi_n\colon \mathbb{R}^n \to [0,1]$ is measurable. Any mapping $S_n\colon F \to [0,1]$ of this form is called an adaptive zero finding method (using $n$ knots).

We use two different error criteria to measure the quality of $S_n$ for $f \in F$. If $f$ is a function having a unique zero $f^{-1}(0)$, then it is natural to consider the number $|f^{-1}(0) - S_n(f)|$ as the error of $S_n$. In the case of a nonunique zero this generalizes to the "root criterion"
$$d(f^{-1}(0), S_n(f)) = \inf\{ |x - S_n(f)| \mid f(x) = 0 \} .$$
This criterion is used in most theoretical studies on the error of zero finding methods. In most numerical calculations, however, the residual error, defined by
$$|f(S_n(f))| ,$$
is used. This is easy to understand because $d(f^{-1}(0), S_n(f))$ is not available in most cases but $|f(S_n(f))|$ can easily be computed. The worst case performance of $S_n$ on the class $F$ is then expressed by the maximal error
$$e_F^{\mathrm{ro}}(S_n) = \sup_{f \in F} d(f^{-1}(0), S_n(f))$$
and
$$e_F^{\mathrm{re}}(S_n) = \sup_{f \in F} |f(S_n(f))| ,$$
respectively. It is not difficult to prove that the bisection method $S_n^{\mathrm{b}}$ satisfies
$$e_F^{\mathrm{ro}}(S_n^{\mathrm{b}}) = 2^{-n-1}$$
for any class
$$\{ f \in F^* \mid f \in C^\infty([0,1]),\ f' > 0 \} \subset F \subset F^* .$$
Furthermore it is well known that the bisection method is optimal with respect to the maximal error $e_F^{\mathrm{ro}}$ for all these classes, i.e., the bisection method satisfies
$$e_F^{\mathrm{ro}}(S_n^{\mathrm{b}}) \le e_F^{\mathrm{ro}}(S_n)$$
for any adaptive method $S_n$ using $n$ knots. See Traub et al. (1988, p. 190) for similar results. Hence the optimal worst case error bounds do not depend on the degree of smoothness; bisection is optimal even for the class of $C^\infty$-functions with a simple zero.
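As a concrete illustration of the bisection method discussed above, here is a short Python sketch (ours, not from the paper); it uses $n$ adaptively chosen midpoints and returns the midpoint of the final bracketing interval, so its root-criterion error is at most $2^{-n-1}$ for every $f \in F^*$.

```python
def bisection(f, n):
    """Bisection with n function evaluations on [0, 1], assuming f(0) < 0 < f(1)."""
    lo, hi = 0.0, 1.0
    for _ in range(n):
        mid = (lo + hi) / 2.0        # the adaptive knot x_k
        y = f(mid)                   # the observed value y_k = f(x_k)
        if y == 0.0:
            return mid               # exact zero found
        if y < 0.0:
            lo = mid                 # a zero lies in [mid, hi]
        else:
            hi = mid                 # a zero lies in [lo, mid]
    return (lo + hi) / 2.0           # S_n^b(f): midpoint of the final bracket

# Example: a member of F* with its zero at 1/3.
f = lambda x: x - 1.0 / 3.0
print(abs(bisection(f, 10) - 1.0 / 3.0), 2.0 ** (-11))   # error vs. worst case bound 2^{-n-1}
```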
In this paper we study the average performance of adaptive zero finding methods. We always consider the Borel $\sigma$-field on $F^*$ generated by the topology of uniform convergence. If $P$ is a probability measure on a measurable set $F \subset F^*$ then the average error of $S_n$ is given by
$$e_P^{\mathrm{ro}}(S_n) = \int_F d(f^{-1}(0), S_n(f))\,dP(f)$$
and
$$e_P^{\mathrm{re}}(S_n) = \int_F |f(S_n(f))|\,dP(f) ,$$
respectively. The measurability of the above integrands follows from the measurability of the mappings $x_k$ and $\varphi_n$.

Up to now, two different probability measures were used to analyze the zero finding problem. In Novak (1989) and also in this paper the Brownian bridge with $f(0) = -1$ and $f(1) = 1$ is studied. This measure $P$ is Gaussian and its topological support, which is henceforth denoted by $F$, is given by
$$F = \{ f \in F^* \mid f(0) = -1,\ f(1) = 1 \} .$$
Furthermore $P$ is uniquely determined by its mean
$$m(t) = \int_F f(t)\,dP(f) = 2t - 1$$
and its covariance kernel
$$R(s,t) = \int_F (f(s) - m(s)) \cdot (f(t) - m(t))\,dP(f) = s \cdot (1 - t) ,$$
where $0 \le s \le t \le 1$.
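For numerical experiments it is convenient to be able to sample from $P$. The following Python sketch (our own; the grid size and the sample count are arbitrary choices) draws discretized paths with $f(0) = -1$ and $f(1) = 1$ and checks the mean $2t - 1$ and the covariance $s \cdot (1 - t)$ empirically.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_paths(m, n_grid=200):
    """Draw m discretized paths of the Brownian bridge with f(0) = -1, f(1) = 1.

    f(t) = (2t - 1) + B(t), where B is a standard Brownian bridge, so that
    E f(t) = 2t - 1 and Cov(f(s), f(t)) = s (1 - t) for s <= t.
    """
    t = np.linspace(0.0, 1.0, n_grid + 1)
    dW = rng.standard_normal((m, n_grid)) * np.sqrt(1.0 / n_grid)
    W = np.concatenate([np.zeros((m, 1)), np.cumsum(dW, axis=1)], axis=1)
    B = W - t * W[:, -1:]                    # pin the Brownian motion to 0 at t = 1
    return t, (2.0 * t - 1.0) + B

t, paths = sample_paths(20000)
i, j = 50, 150                               # grid indices with t[i] <= t[j]
print(paths[:, i].mean(), 2 * t[i] - 1)      # empirical vs. exact mean
print(np.cov(paths[:, i], paths[:, j])[0, 1], t[i] * (1 - t[j]))   # covariance s(1 - t)
```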
A measure of Dubins, Freedman, and Ulam, which was further studied in Graf et al. (1986), was taken in Graf et al. (1989). This measure is used to analyze the problem of zero finding for increasing functions, since its topological support is the subset of all increasing functions from $F$. For both measures it is proved that the bisection method is not optimal. Actually one can construct methods $S_n$ with
$$e_P^{\mathrm{ro}}(S_n) \le c \cdot \gamma^n ,$$
where $\gamma < 0.5$ and $c > 0$. It should be stressed that lower bounds for the average error were not known for either measure. The aim of this paper is to establish such lower bounds for the Brownian bridge, and our main result is the following.

Theorem. Let $P$ be the Brownian bridge with $f(0) = -1$ and $f(1) = 1$. Then there exist constants $c_1, c_2, \alpha, \beta > 0$ with
$$\inf_{S_n} e_P^{\mathrm{ro}}(S_n) \ge c_1 \cdot \alpha^n$$
and
$$\inf_{S_n} e_P^{\mathrm{re}}(S_n) \ge c_2 \cdot \beta^n$$
for every $n \in \mathbb{N}$, where $S_n$ runs through all adaptive zero finding methods using $n$ knots.

Our proof is constructive, i.e., we get explicit values for $\alpha$ and $\beta$. To get the numerical values $\alpha = 0.001$ and $\beta = 0.005$ we have to perform, however, a numerical evaluation of the minimum of four functions in one variable. See (19) and (20) and the remark following Lemma 3.

Upper bounds of the form $c \cdot \gamma^n$ are trivial for the root criterion, where even the worst case error of the bisection method is $2^{-n-1}$. This also means that the number of function values necessary to get a guaranteed small error $\varepsilon$ with the bisection method is at most
$$\frac{\log 0.001}{\log 0.5} < 10$$
times larger than the number of function values necessary to get an average error $\varepsilon$ with any method. Hence the average case complexity of our problem is only slightly smaller than the worst case complexity. The reader should observe, however, that the worst case error of any method for arbitrary $n$ is infinite on $F^*$ (and at least 1 on $F$) if we use the residual error criterion. Therefore it is not quite trivial that an upper bound of the form $c \cdot \gamma^n$ is valid for the average error with respect to the residual criterion, too. To prove this, we consider a simple modification of the bisection method: we use the same information as for the bisection method but define $\varphi_n$ as for the regula falsi. We get a method with average error bounded by $c \cdot 2^{-n/2}$ (a sketch of this modification is given at the end of this section).

In this paper we study $p$-average errors
$$\left( \int_F d(f^{-1}(0), S_n(f))^p \, dP(f) \right)^{1/p}$$
and
$$\left( \int_F |f(S_n(f))|^p \, dP(f) \right)^{1/p}$$
with $p = 1$, whereas $p = 2$ would yield the mean square error. The upper bound for the residual criterion and, of course, the lower bounds given in the theorem are also valid for $1 \le p < \infty$.

Now we want to explain some ideas of our proof. Lower bounds for the average error are known for many linear problems and Gaussian measures, see Traub et al. (1988). For these problems, the average error can be written in the form
$$\int_{N_n(F)} \int_F \| S(f) - \varphi_n(y) \| \, dP(f \mid N_n = y) \, dN_nP(y) \qquad (1)$$
using conditional probability measures, compare with (4) or (5). Assume for a moment that $N_n$ is nonadaptive, i.e., the knots $x_k$ are constants. Then we have two important differences to our nonlinear problem which make the linear case much simpler:

(a) the optimal $\varphi_n = \varphi_n^*$ in (1) is linear and can easily be characterized as the conditional expectation of $S$ given $N_n$,
(b) the inner integral $e(\varphi_n^*, N_n, y)$ in (1), called local average error, does not depend on $y$, and hence the error (1) for the optimal $\varphi_n^*$ equals $\int_F \| S(f) \| \, dP(f \mid N_n = 0)$.

Clearly these statements must be modified in the case of an adaptive information operator. Nevertheless, even for these operators, the local average error only depends on the knots and not on the respective function values. Hereby one can prove that adaption does not help in this situation. Moreover there is no need to distinguish between favorable information, i.e., the local average error is small, and unfavorable information for given knots.

The situation is quite different for the zero finding problem, which can be seen already in the case $n = 1$. Then the average error can be written in the form (4) or (5), where

(a) the optimal $\varphi_1 = \varphi_1^*$ is not a simple function,
(b) the local average error $e(\varphi_1^*, N_1, y)$ heavily depends on $y$ and is not bounded from below, since $\lim e(\varphi_1^*, N_1, y) = 0$ for $y \to 0$ and for $y \to \pm\infty$; moreover the average error $\int_{\mathbb{R}} e(\varphi_1^*, N_1, y) \, dN_1P(y)$ cannot be given by a simple formula.

To prove lower bounds for the nonlinear case we find a set of unfavorable information depending on the adaptive information operator, i.e., the local average error is sufficiently large on this set. For the zero finding problem, an information $f(t_i) = y_i$ with $t_1 < \ldots < t_n$ is unfavorable if it has the following properties:

(i) there exists exactly one subinterval with $f(t_k) < 0 < f(t_{k+1})$; here $t_{k+1} - t_k$ is not too small and the values $|f(t_k)|$ and $|f(t_{k+1})|$ are not too large (compare with property 1 in Sect. 5),
(ii) the values $|f(t_i)|$ are not too small (compare with properties 2 and 3 in Sect. 5).

Obviously we have to prove that such an unfavorable information occurs with sufficiently large probability. To prove this by induction, we do not study $N_n$ itself but first add some auxiliary knots and obtain an information operator $M_{2n}$. Then we investigate the information $M_{2n}$ and show that even for this expanded information operator there exist enough functions yielding unfavorable information.
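The following Python sketch (ours; the discretization of the paths and all numerical parameters are arbitrary choices) illustrates the modification of the bisection method mentioned above: the knots are the bisection knots, but the final estimate is the regula falsi point of the last bracketing interval. Averaging the residual over simulated paths of the Brownian bridge gives an impression of the $2^{-n/2}$ behaviour of its average residual error.

```python
import numpy as np

rng = np.random.default_rng(1)
n_grid, n, m = 4096, 8, 2000                  # grid size, number of knots, number of paths
t = np.linspace(0.0, 1.0, n_grid + 1)

def bridge_path():
    """One discretized path of the Brownian bridge with f(0) = -1, f(1) = 1."""
    dW = rng.standard_normal(n_grid) * np.sqrt(1.0 / n_grid)
    W = np.concatenate([[0.0], np.cumsum(dW)])
    return (2.0 * t - 1.0) + (W - t * W[-1])

def modified_bisection(path):
    """Bisection knots, but the estimate is the regula falsi point of the final bracket."""
    lo, hi = 0, n_grid                        # invariant: path[lo] < 0 <= path[hi]
    for _ in range(n):
        mid = (lo + hi) // 2
        if path[mid] < 0.0:
            lo = mid
        else:
            hi = mid
    a, b = path[lo], path[hi]
    return t[lo] + (t[hi] - t[lo]) * (-a) / (b - a)

residuals = [abs(np.interp(modified_bisection(p), t, p)) for p in (bridge_path() for _ in range(m))]
print(np.mean(residuals), 2.0 ** (-n / 2.0))  # estimated average residual vs. the 2^{-n/2} scale
```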
3 The local average error
Before we are able to define and study local average errors we need to gather some known facts on the Brownian bridge and its disintegration. Let $a, b \in \mathbb{R}$ and $T > 0$. By $P_{a,b,T}$ we denote the Brownian bridge with $f(0) = a$ and $f(T) = b$. This Borel probability measure on the space $C([0,T])$ is Gaussian, and therefore it is uniquely determined by its mean
$$m(t) = \int f(t)\,dP_{a,b,T}(f) = a + t \cdot (b - a)/T$$
and its covariance kernel
$$R(s,t) = \int (f(s) - m(s)) \cdot (f(t) - m(t))\,dP_{a,b,T}(f) = s \cdot (T - t)/T ,$$
where $0 \le s \le t \le T$, see Billingsley (1968, p. 64). Hence we have $P = P_{-1,1,1}$ for the Brownian bridge used in the theorem.

The Brownian bridge has the following scaling, reflection, and time-inversion properties. If
$$\bar f(t) = \sqrt{T} \cdot f(t/T)$$
then $P_{a,b,T}$ is the image of $P_{a/\sqrt{T},\, b/\sqrt{T},\, 1}$ under the mapping $f \mapsto \bar f$. If
$$\bar f(t) = -f(t)$$
then $P_{a,b,T}$ is the image of $P_{-a,-b,T}$ under the mapping $f \mapsto \bar f$. If
$$\bar f(t) = f(T - t)$$
then $P_{a,b,T}$ is the image of $P_{b,a,T}$ under the mapping $f \mapsto \bar f$. These properties can be verified by computing the means and the covariance kernels of the image measures, because these image measures are Gaussian, too.

Let $N_n = [x_1, \ldots, x_n]$ be an adaptive information operator. Then it is easy to check that $N_n$ is measurable with measurable range $N_n(F)$. For $y = (y_1, \ldots, y_n) \in N_n(F)$ we define a Gaussian measure $P(\cdot \mid N_n = y)$ on $F$ as a connection of independent Brownian bridges in the following way. The mean of $P(\cdot \mid N_n = y)$ is the affine linear interpolation of $-1, y_1, \ldots, y_n, 1$ in the knots $0, x_1, \ldots, x_n(y_1, \ldots, y_{n-1}), 1$. These knots define a partition of $[0,1]$ into at most $n+1$ nonoverlapping subintervals. Function evaluations at $0 \le s \le t \le 1$ are uncorrelated, i.e., independent since $P(\cdot \mid N_n = y)$ is Gaussian, if $s$ and $t$ lie in different subintervals, and the evaluations have covariance $(s - z_1) \cdot (z_2 - t)/(z_2 - z_1)$ if $s$ and $t$ lie in the same subinterval $[z_1, z_2]$. Now the following disintegration formula
$$\int_F H(f)\,dP(f) = \int_{N_n(F)} \int_F H(f)\,dP(f \mid N_n = y)\,dN_nP(y) \qquad (2)$$
holds for any integrable mapping $H\colon F \to \mathbb{R}$. Moreover we have
$$P(N_n^{-1}\{y\} \mid N_n = y) = 1 \qquad (3)$$
for any $y \in N_n(F)$. See, e.g., Ritter (1990) for the similar case of a Brownian motion. The measure $P(\cdot \mid N_n = y)$ is called the regular conditional probability given $N_n = y$. Applying (2) and (3) to the average error of a zero finding method $S_n = \varphi_n \circ N_n$ we get
$$e_P^{\mathrm{ro}}(S_n) = \int_{N_n(F)} \int_F d(f^{-1}(0), \varphi_n(y))\,dP(f \mid N_n = y)\,dN_nP(y) \qquad (4)$$
and
$$e_P^{\mathrm{re}}(S_n) = \int_{N_n(F)} \int_F |f(\varphi_n(y))|\,dP(f \mid N_n = y)\,dN_nP(y) . \qquad (5)$$
The inner integrand is called the local average error of $S_n$ given $N_n = y$ in both cases.
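To make the disintegration concrete, the following Python sketch (ours; the discretization, sample sizes, and the example information are arbitrary choices) samples from $P(\cdot \mid N_n = y)$ by gluing independent Brownian bridges between consecutive knots and uses the samples to estimate the local average errors appearing in (4) and (5) for a given information $y$ and a candidate point $\varphi_n(y)$.

```python
import numpy as np

rng = np.random.default_rng(2)

def conditional_path(knots, values, pts=200):
    """One sample from P(. | N_n = y): independent Brownian bridges between
    consecutive knots, pinned to the prescribed values (-1 at 0 and 1 at 1)."""
    ts, fs = [], []
    for z0, z1, v0, v1 in zip(knots[:-1], knots[1:], values[:-1], values[1:]):
        L = z1 - z0
        s = np.linspace(0.0, L, pts + 1)
        dW = rng.standard_normal(pts) * np.sqrt(L / pts)
        W = np.concatenate([[0.0], np.cumsum(dW)])
        B = W - (s / L) * W[-1]                      # standard bridge on [0, L]
        ts.append(z0 + s)
        fs.append(v0 + (v1 - v0) * s / L + B)
    return np.concatenate(ts), np.concatenate(fs)

def local_errors(knots, values, x, m=5000):
    """Monte Carlo estimates of the local average errors in (4) and (5)
    for the candidate point x = phi_n(y)."""
    root = resid = 0.0
    for _ in range(m):
        t, f = conditional_path(knots, values)
        crossings = t[np.nonzero(np.diff(np.sign(f)))[0]]   # approximate zero set
        root += np.abs(crossings - x).min()                  # d(f^{-1}(0), x), discretized
        resid += abs(np.interp(x, t, f))                     # |f(x)|, interpolated
    return root / m, resid / m

# One observed value f(0.5) = 0.3 and the candidate point x = 0.4 (arbitrary numbers).
knots = np.array([0.0, 0.5, 1.0])
values = np.array([-1.0, 0.3, 1.0])
print(local_errors(knots, values, 0.4))
```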
Observe that in case of the residual criterion these local errors can be computed as integrals with respect to a single Brownian bridge. This, however, is not true for the root criterion. Therefore the following lemma gives a direct estimation for the local error only in the case of the residual criterion.

Lemma 1 There exist constants $c_1 > 0$ and $0 < c_2 < 1$ such that
$$\inf_{0 \le x \le T} \int d(f^{-1}(0), x)\,dP_{a,b,T}(f) \ge c_1 \cdot T$$
and
$$\inf_{0 \le x \le T} \int |f(x)|\,dP_{a,b,T}(f) \ge c_2 \cdot \sqrt{T}$$
hold for all $T > 0$, $a \in [-2\sqrt{T}, -\sqrt{T}]$, and $b \in [\sqrt{T}, 2\sqrt{T}]$.

Lemma 2 Let $a \cdot b > 0$. Then we have
$$\inf_{0 \le x \le T} \int |f(x)|\,dP_{a,b,T}(f) \ge \min(|a|, |b|) .$$
If $\min(|a|, |b|) \ge \sqrt{T}$ and $a \cdot b > 0$ then we get
$$P_{a,b,T}\Bigl(\Bigl\{ \min_{0 \le t \le T} |f(t)| > 0 \Bigr\}\Bigr) \ge P_{0,0,T}(\{ \|f\|_\infty < \min(|a|, |b|) \}) . \qquad (7)$$
Furthermore the inequalities
$$\min(x, \xi(T,x)) \le 2 \cdot \tau(T,x) \qquad (8)$$
and
$$T - \max(x, \xi(T,x)) \le 2 \cdot \tau(T,x) \qquad (9)$$
hold. The next lemma states that Lemma 1 can be applied to the subinterval given by $x$ and $\xi$ at least with probability $c_3$.

Lemma 3 There exists a constant $0 < c_3 < 1/4$ such that
$$P_{-a,b,T}(\{ f(\min(x,\xi)) \in [-2\sqrt{\tau}, -\sqrt{\tau}],\ f(\max(x,\xi)) \in [\sqrt{\tau}, 2\sqrt{\tau}] \}) \ge c_3$$
holds for any $T > 0$, $0 \le x \le T$, and $a, b \in [\sqrt{T}, 2\sqrt{T}]$.

Proof. Due to the scaling, the reflection, and the time inversion property of the
Brownian bridge it is sufficient to prove the statement for $T = 1$, $a, b \in I = [1,2]$ and $0 \le x \le 1/2$. In this case we have
$$\xi = 1 - x(1-x) = (1-x)^2 + x ,$$
which yields $\tau = (1-x)^2$, and we put
$$F(a,b,x) = P_{-a,b,1}(\{ f(x) \in [-2(1-x), -(1-x)],\ f(\xi) \in [1-x, 2(1-x)] \}) .$$
Observe that $F$ is not continuous in $(1,1,0)$, e.g., and that $F(a,b,0) = 1$. In the following we assume $0 < x \le 1/2$. The common distribution of $f(x)/(1-x)$ and $f(\xi)/(1-x)$ with respect to $P_{-a,b,1}$ is normal with mean
$$m = m(a,b,x) = \frac{1}{1-x} \cdot \bigl( -a + x \cdot (b+a),\ b - x(1-x) \cdot (b+a) \bigr)$$
and covariance matrix
$$C = C(x) = \frac{x}{1-x} \cdot D(x) ,$$
where
$$D = D(x) = \begin{pmatrix} 1 & x \\ x & 1 - x(1-x) \end{pmatrix} .$$
If $Q_x$ denotes the zero-mean normal distribution with covariance matrix $C(x)$ and if $A = (-I) \times I$, then we obtain
$$F(a,b,x) = Q_x(A - m(a,b,x)) .$$
Due to Prékopa (1971) the function $Q_x(A - \cdot)$ is logarithmically concave on $\mathbb{R}^2$, and since $m(\cdot,\cdot,x)$ is linear we conclude that $F(\cdot,\cdot,x)$ is logarithmically concave on $I^2$. This implies
$$\min_{a,b \in I} F(a,b,x) = \min_{a,b \in \{1,2\}} F(a,b,x) ,$$
and therefore the minimization of $F$ is reduced to the minimization of four functions defined on $[0, 1/2]$. With $q = q(x) = \sqrt{(1-x)/x}$ we get
$$F(a,b,x) = \frac{1}{2\pi\sqrt{\det C}} \int_A \exp\Bigl( -\tfrac{1}{2} \langle C^{-1}(y - m), y - m \rangle \Bigr)\,dy = \frac{1}{2\pi\sqrt{\det D}} \int_{q \cdot (A - m)} \exp\Bigl( -\tfrac{1}{2} \langle D^{-1} y, y \rangle \Bigr)\,dy .$$
Let $a, b \in \{1,2\}$ and consider the square $q(x) \cdot (A - m(a,b,x))$. Because of
$$\lim_{x \to 0} q(x) = \infty$$
and
$$\lim_{x \to 0} q(x) \cdot \bigl( (-a, b) - m(a,b,x) \bigr) = (0,0) ,$$
its side length tends to infinity while one of its vertices tends to $(0,0)$. Hence we obtain
$$\lim_{x \to 0} F(a,b,x) = \frac{1}{4} .$$
Together with the continuity of $F(a,b,\cdot)$ on $]0, 1/2]$ we get a positive lower bound for any function $F(a,b,\cdot)$ with $a, b \in \{1,2\}$. The numerical minimization of $F(a,b,\cdot)$ for $a, b \in \{1,2\}$ shows that the constant in the above lemma can be chosen as $c_3 = 0.0107$.
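The numerical minimization just mentioned can be reproduced approximately with a few lines of Python (our own sketch; the grid in $x$ and the use of scipy's bivariate normal CDF are implementation choices). It evaluates $F(a,b,x)$ directly as the probability that $(f(x), f(\xi))$ falls into the rectangle $[-2(1-x), -(1-x)] \times [1-x, 2(1-x)]$ under $P_{-a,b,1}$, using the mean $-a + t(b+a)$ and the covariance $s(1-t)$ of that bridge.

```python
import numpy as np
from scipy.stats import multivariate_normal

def F(a, b, x):
    """F(a, b, x) from the proof of Lemma 3, with xi = 1 - x(1 - x)."""
    xi = 1.0 - x * (1.0 - x)
    mean = [-a + x * (b + a), -a + xi * (b + a)]          # E f(t) = -a + t (b + a)
    cov = [[x * (1 - x), x * (1 - xi)],                    # Cov(f(s), f(t)) = s (1 - t), s <= t
           [x * (1 - xi), xi * (1 - xi)]]
    mvn = multivariate_normal(mean=mean, cov=cov)
    lo = [-2.0 * (1 - x), 1.0 - x]                         # rectangle [-2(1-x), -(1-x)] x [1-x, 2(1-x)]
    hi = [-(1.0 - x), 2.0 * (1 - x)]
    # probability of the rectangle by inclusion-exclusion of the CDF
    return mvn.cdf(hi) - mvn.cdf([lo[0], hi[1]]) - mvn.cdf([hi[0], lo[1]]) + mvn.cdf(lo)

xs = np.linspace(0.01, 0.5, 200)
c3 = min(F(a, b, x) for a in (1, 2) for b in (1, 2) for x in xs)
print(c3)   # should come out close to the value c_3 = 0.0107 reported above
```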
In the remaining part of this section we define the extension of an arbitrary adaptive information operator $N_n = [x_1, \ldots, x_n]$ to an adaptive information operator
$$M_{2n} = [z_1, \ldots, z_{2n}]\colon F \to \mathbb{R}^{2n} .$$
We use regular knots from $N_n$ in every odd step and auxiliary knots in every even step. The first two knots of $M_{2n}$ are fixed as
$$z_1 = x_1 \quad \text{and} \quad z_2 = \xi(1, x_1) ,$$
see (6). Suppose that $M_{2k}$ is already defined. Since the knots $z_1, \ldots, z_{2k}(y_1, \ldots, y_{2k-1})$ are not increasingly ordered in general and since some of them may coincide, we introduce the following rearrangement for any information $y \in M_{2k}(F)$. The pairwise different inner knots belonging to $y$ are
$$0 < z_{2k}^{1}(y) < \ldots < z_{2k}^{i(y)}(y) < 1$$
with $0 \le i(y) \le 2k$ and
$$\{ z_{2k}^{1}(y), \ldots, z_{2k}^{i(y)}(y) \} = \{ z_1, \ldots, z_{2k}(y_1, \ldots, y_{2k-1}) \} \cap \,]0, 1[\, .$$
Furthermore we define $y_{2k}^{l} = y_j$ if $z_{2k}^{l}(y) = z_j(y_1, \ldots, y_{j-1})$. Since the function values at $0$ and $1$ are fixed, we put
$$z_{2k}^{0}(y) = 0, \quad y_{2k}^{0} = -1, \quad \text{and} \quad z_{2k}^{i(y)+1}(y) = y_{2k}^{i(y)+1} = 1 .$$
Hence the information $M_{2k}(f) = y$ consists of the function values at $i(y) + 2$ increasingly ordered knots.

Let $1 \le k \le n - 1$ and put
$$u = x_{k+1}(y_1, y_3, \ldots, y_{2k-1}) .$$
In the step $2k+1$ we use the regular knot
$$z_{2k+1}(y_1, \ldots, y_{2k}) = u ,$$
which finally yields
$$N_n(f) = (y_1, y_3, \ldots, y_{2n-1}) \qquad (10)$$
if
$$M_{2n}(f) = (y_1, y_2, \ldots, y_{2n}) .$$
Hence $M_{2n}(f)$ contains at least the information $N_n(f)$. In the step $2k+2$ we consider the location of the last regular knot $u$ with respect to the first $2k$ regular and auxiliary knots, and the known function values at both ends of the subinterval containing $u$. We use the knot
$$z_{2k+2}(y) = \begin{cases} s + \xi(t - s, u - s), & \text{if } u \in \,]s, t[\, = \,]z_{2k}^{l-1}(y_1, \ldots, y_{2k}), z_{2k}^{l}(y_1, \ldots, y_{2k})[ \ \text{ and } y_{2k}^{l-1} < 0 < y_{2k}^{l} , \\ 0, & \text{else} , \end{cases} \qquad (11)$$
see (6), for $y \in M_{2k+1}(F)$ with $M_{2k+1} = [z_1, \ldots, z_{2k+1}]$. Observe that $z_{2k+2}(y_1, \ldots, y_{2k+1})$ does not depend on the function value $y_{2k+1}$ at $u$, and that it is sufficient to define $z_{2k+2}$ on the range $M_{2k+1}(F)$ in order to define $M_{2k+2}$ completely. The evaluation of a function at the auxiliary knot $z_{2k+2}(y) = 0$ does not provide any new information and it is done only for formal reasons.
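A small Python sketch (ours; the function $\xi$ is passed in as a parameter because its general definition (6) is not reproduced here) of the bookkeeping in this section: the rearrangement of the collected knots and values, and the auxiliary-knot rule (11).

```python
def rearrange(knots, values):
    """Sort the pairwise different inner knots of ]0, 1[ and attach the fixed
    boundary values f(0) = -1 and f(1) = 1 (the z^l_{2k} and y^l_{2k} above)."""
    inner = sorted({z: v for z, v in zip(knots, values) if 0.0 < z < 1.0}.items())
    zs = [0.0] + [z for z, _ in inner] + [1.0]
    ys = [-1.0] + [v for _, v in inner] + [1.0]
    return zs, ys

def auxiliary_knot(knots, values, u, xi):
    """The auxiliary knot z_{2k+2} of (11): apply xi inside the subinterval ]s, t[
    that contains the regular knot u, provided the function values change sign there."""
    zs, ys = rearrange(knots, values)
    for s, t, v0, v1 in zip(zs[:-1], zs[1:], ys[:-1], ys[1:]):
        if s < u < t and v0 < 0.0 < v1:
            return s + xi(t - s, u - s)
    return 0.0        # purely formal evaluation at 0 in all other cases

# Toy data; the lambda below is only a plausible rescaling of the special case
# xi(1, x) = 1 - x(1 - x) used in the proof of Lemma 3, not the paper's (6).
print(auxiliary_knot([0.3, 0.7], [-0.2, 0.5], 0.5, lambda T, x: T - x * (T - x) / T))
```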
5 Proof of the theorem

Let $S_n = \varphi_n \circ N_n$ be an adaptive zero finding method using $n$ knots and let $M_{2n} = [z_1, \ldots, z_{2n}]$ denote the extension of $N_n$ by auxiliary knots. Without loss of generality we may assume $0 < x_1 < 1$ for the first knot used by $N_n$. Moreover let $P$ be the Brownian bridge with $f(0) = -1$ and $f(1) = 1$.

I. We define measurable subsets of informations
$$A_{2k} \subset M_{2k}(F) \subset \mathbb{R}^{2k}$$
for $k = 1, \ldots, n$ by the following properties:

1. $y_{2k}^{j-1} < 0 < y_{2k}^{j}$ holds for exactly one $1 \le j \le i(y) + 1$; for this $j$ we have $z_{2k}^{j}(y) - z_{2k}^{j-1}(y) \ge (1/4)^k$ and $|y_{2k}^{j-1}|, |y_{2k}^{j}| \in \sqrt{z_{2k}^{j}(y) - z_{2k}^{j-1}(y)} \cdot [1, 2]$,
2. $|y_{2k}^{l}| \ge 1/\sqrt{2} \cdot \max\bigl( \sqrt{z_{2k}^{l}(y) - z_{2k}^{l-1}(y)},\ \sqrt{z_{2k}^{l+1}(y) - z_{2k}^{l}(y)} \bigr)$ for every $1 \le l \le i(y)$,
3. $|y_{2k}^{l}| \ge (1/2)^k$ for every $1 \le l \le i(y)$.

II. Let $c_3$ be the constant from Lemma 3. In the following we prove
$$P(\{ M_{2k} \in A_{2k} \}) \ge c_3^{\,k} \qquad (12)$$
for $k = 1, \ldots, n$ by induction. Without loss of generality we can assume $z_2^{1}(y) = z_1 < z_2 = z_2^{2}(y)$, and we put $z = z_2 - z_1 \ge 1/4$, see (7). Then the set $A_2$ satisfies
$$\{ M_2 \in A_2 \} = \{ f \in F \mid f(z_1) \in [-2\sqrt{z}, -\sqrt{z}],\ f(z_2) \in [\sqrt{z}, 2\sqrt{z}] \} .$$

III. For $y \in A_{2n}$ the local average error of $S_n$ given $M_{2n} = y$ with respect to the root criterion satisfies
$$e^{\mathrm{ro}}(\psi_{2n}, M_{2n}, y) \ge c_1 \cdot (1/4)^n \cdot (1 - 1/e)^{2n} \qquad (17)$$
using Lemma 1 and 2 and properties 1 and 2, where $\psi_{2n}(y) = \varphi_n(y_1, y_3, \ldots, y_{2n-1})$, so that $S_n = \psi_{2n} \circ M_{2n}$ by (10). Now we consider the residual criterion, where the local average error of $S_n$ given $M_{2n} = y$ is
$$e^{\mathrm{re}}(\psi_{2n}, M_{2n}, y) = \int_F |f(\psi_{2n}(y))| \, dP(f \mid M_{2n} = y) ,$$
see (5). We prove
$$e^{\mathrm{re}}(\psi_{2n}, M_{2n}, y) \ge c_2 \cdot (1/2)^n \qquad (18)$$
for $y \in A_{2n}$, where $c_2$ is the constant from Lemma 1. Fix $y \in A_{2n}$, assume $\psi_{2n}(y) \in [z_{2n}^{l-1}(y), z_{2n}^{l}(y)]$, and put $T = z_{2n}^{l}(y) - z_{2n}^{l-1}(y)$, $x = \psi_{2n}(y) - z_{2n}^{l-1}(y)$, $a = y_{2n}^{l-1}$, and $b = y_{2n}^{l}$. Hence we get
$$e^{\mathrm{re}}(\psi_{2n}, M_{2n}, y) = \int |f(x)| \, dP_{a,b,T}(f) .$$
If $a < 0 < b$ then we have $l = j$, see property 1, and Lemma 1 shows
$$e^{\mathrm{re}}(\psi_{2n}, M_{2n}, y) \ge c_2 \cdot \sqrt{T} \ge c_2 \cdot (1/2)^n .$$
If $a \cdot b > 0$ then Lemma 2 and property 3 yield
$$e^{\mathrm{re}}(\psi_{2n}, M_{2n}, y) \ge \min(|a|, |b|) \ge (1/2)^n ,$$
and (18) is proven.

IV. Finally we consider the average error of $S_n$, and we put
$$\alpha = (1 - 1/e)^2 \cdot c_3 / 4 \qquad (19)$$
and
$$\beta = c_3 / 2 . \qquad (20)$$
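With $c_3 = 0.0107$ from the remark following Lemma 3 this gives $\alpha = (1 - 1/e)^2 \cdot c_3/4 \approx 0.00107$ and $\beta = c_3/2 \approx 0.0054$, so the values $\alpha = 0.001$ and $\beta = 0.005$ quoted in Sect. 2 are admissible.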
From (12) and (17) we get
$$e_P^{\mathrm{ro}}(S_n) = \int_{M_{2n}(F)} e^{\mathrm{ro}}(\psi_{2n}, M_{2n}, y)\,dM_{2n}P(y) \ge \int_{A_{2n}} e^{\mathrm{ro}}(\psi_{2n}, M_{2n}, y)\,dM_{2n}P(y) \ge P(\{M_{2n} \in A_{2n}\}) \cdot c_1 \cdot (1/4)^n \cdot (1 - 1/e)^{2n} \ge c_1 \cdot \alpha^n .$$
From (12) and (18) we get
$$e_P^{\mathrm{re}}(S_n) = \int_{M_{2n}(F)} e^{\mathrm{re}}(\psi_{2n}, M_{2n}, y)\,dM_{2n}P(y) \ge \int_{A_{2n}} e^{\mathrm{re}}(\psi_{2n}, M_{2n}, y)\,dM_{2n}P(y) \ge P(\{M_{2n} \in A_{2n}\}) \cdot c_2 \cdot (1/2)^n \ge c_2 \cdot \beta^n .$$
This completes the proof.
Acknowledgment. We thank Henryk Woźniakowski who urged us to find explicit values for $\alpha$ and $\beta$.
References

1. Billingsley, P.: Convergence of probability measures. New York: Wiley 1968
2. Graf, S., Mauldin, R.D., Williams, S.C.: Random homeomorphisms. Adv. Math. 60, 239-359 (1986)
3. Graf, S., Novak, E., Papageorgiou, A.: Bisection is not optimal on the average. Numer. Math. 55, 481-491 (1989)
4. Novak, E.: Average case results for zero finding. J. Complexity 5, 489-501 (1989)
5. Novak, E.: Some applications of functional integration: average errors of numerical methods. Suppl. Rend. Circ. Mat. Palermo 28, 425-437 (1992)
6. Prékopa, A.: Logarithmic concave measures with application to stochastic programming. Acta Sci. Math. 32, 301-316 (1971)
7. Ritter, K.: Approximation and optimization on the Wiener space. J. Complexity 6, 337-364 (1990)
8. Traub, J.F., Wasilkowski, G.W., Woźniakowski, H.: Information-based complexity. New York: Academic Press 1988
9. Woźniakowski, H.: Average case complexity of multivariate integration. Bull. Am. Math. Soc. 24, 185-194 (1991)