A discrete subdivision scheme for constructing a convex surface, using the Monge-Ampere operator

Fredrik Bruzelius
[email protected]

November 23, 1998
Abstract

Given is an initial data set in R³. A subdivision scheme is an algorithm for creating a denser data set. This paper discusses approximating subdivision schemes for bivariate functions with convexity-preserving properties. The result of the subdivision is a square equidistant lattice. The scheme is proposed from a continuous perspective. The convexity-preserving property is guaranteed by constraining the eigenvalues of the Hessian to be nonnegative. This results in a nonlinear partial differential inequality involving the Monge-Ampere operator. The continuous ansatz is then discretized using the Finite Difference Method. This leads to an optimization problem that is numerically hard to solve, a so-called non-convex quadratically constrained quadratic program (QQP). An alternative way of introducing constraints for convexity preservation is also discussed. It is linear and therefore leads to a simpler but bigger optimization problem. An equivalence between the two methods is derived, and numerical tests are performed. A method for solving the QQP problem above is presented, together with some negative numerical results concerning this method. A standard method for solving nonlinear optimization problems, Sequential Quadratic Programming, is also tested on the problem, likewise with negative numerical results. A third way of generating constraints for convexity preservation, using a factorization of the Monge-Ampere operator, is also presented. These linear constraints are stricter than the original constraints, but give satisfactorily good numerical results.
Contents

1 Introduction
  1.1 The problem formulation
  1.2 Outline

2 Convex surfaces
  2.1 Definition of convexity
  2.2 The continuous ansatz
  2.3 Discretization
    2.3.1 Finite Differences discretization
    2.3.2 Finite Elements Approximation

3 Linear constraints
  3.1 Scott's constraints
  3.2 Factorized Monge-Ampere

4 Algorithms
  4.1 Quadratic programs
    4.1.1 Active set method
  4.2 Quadratically constrained quadratic programs
    4.2.1 QP-iteration
    4.2.2 Sequential Quadratic Programming

5 Tests and Conclusions
  5.1 Convergence of QP-iteration
  5.2 Convergence of Sequential QP
  5.3 Scott's constraints
  5.4 Factorized Monge-Ampere
List of Figures

2.1 The Laplacian "molecule"
2.2 The "molecule" for the approximation of u_xy
2.3 A 6×6 lattice example with corresponding lattice index
3.1 A convex line segment in one dimension
3.2 Example of the smallest minimal triangle in a square lattice
3.3 Areas of coefficients for convexity of quadratic functions
5.1 The function u = sin(x² + y²), on a lattice with 11×11 points and a spacing of h = 0.1
5.2 The Monge-Ampere constraint for the original data
5.3 Convergence pattern of the symmetric QP-iteration with start vector v_i⁽⁰⁾ = 0.2, i = 1,…,N
5.4 Inactive Monge-Ampere constraint, for the symmetric QP-iteration with μ = 0.01 and start vector v_i⁽⁰⁾ = 0.2, i = 1,…,N
5.5 Convergence pattern of the symmetric QP-iteration using the normal solution as start vector. The data is disturbed in lattice point 73
5.6 Convergence pattern of the QP-iteration, with the zero vector as start vector
5.7 The Monge-Ampere constraint for μ = 1 after 50 iterations of the QP-iteration, with disturbed data, 0.1 in point 73
5.8 Convergence of the QP-iteration with disturbed data (added 0.1 in lattice point 73)
5.9 Convergence of the Sequential QP for different values of μ with the normal solution as starting vector. Data disturbed 0.3 in lattice point 49
5.10 Approximated surfaces using Sequential QP and the normal solution as starting vector
5.11 Convergence of the Sequential QP for different values of μ with the zero vector as starting vector. Data disturbed 0.3 in lattice point 49
5.12 Approximated surfaces using Sequential QP and the zero vector as starting vector. Data disturbed 0.3 in lattice point 49
5.13 All minimal linesegments for the center point in a lattice with 9×9 points
5.14 Approximated surfaces using the linear constraints, μ = 10, data disturbed 0.3 in lattice point 73
5.15 The Monge-Ampere constraints for a solution with 12 linear constraints. Data disturbed 0.1 in lattice point 73 and μ = 10
5.16 Constructed surfaces using the factorized Monge-Ampere constraints with different values of μ
5.17 Comparison of factorized Monge-Ampere constraints and the original problem solved with Sequential QP. Data disturbed 0.3 in lattice point 49
Chapter 1 Introduction

This paper is on discrete, approximating, convexity-preserving subdivision schemes, i.e. given data points in R³, the goal is to find a set of convex data points on a discrete lattice. This is further discussed below.
1.1 The problem formulation

A classical problem in approximation theory is to define a smooth function which approximates given values. In certain applications, e.g. in surface design in the car industry, one is looking for a function which should also fulfill certain shape properties. Such properties could be convexity in a part, or the whole, of the domain of definition. In this paper convexity on the whole domain is investigated, on a discrete domain in R², due to the finite dimensional space of the approximation, see [10]. This means: given an arbitrary set of points in R³, we want to find a function that approximates this given set. The function should be convex, and we want to have some control over the curvature.
1.2 Outline

In Chapter 2, we define convexity for continuous functions and for a set of discrete data in R³. A continuous ansatz which poses the convexity preserving subdivision as an optimization problem is also presented. This gives rise to a continuous optimization problem with non-linear constraints (of Monge-Ampere type). Two short introductions to the Finite Element Method and the Finite Difference Method, respectively, which are used further on in the paper, are also given. Here the important issue of how to handle boundary data is also discussed; in particular we consider three different approaches.
In Chapter 3 we first describe the result of Scott, [6]. Further, a linearization of the Monge-Ampere constraints proposed in Chapter 2 is presented. In Chapter 4 we give a brief orientation of the types of optimization problems that arise in Chapters 2 and 3, i.e. so-called quadratic programs (QP) and quadratic programs with quadratic constraints (QQP). Standard algorithms for solving such problems, as well as a new algorithm, are presented. In Chapter 5 we present results of quite extensive numerical tests. E.g. we consider the convergence behavior of the algorithms presented in Chapter 4, and draw conclusions about the different ways of constraining for convexity. An equivalence of the constraints proposed in Chapters 2 and 3 is also shown.
Chapter 2 Convex surfaces

This chapter starts by defining convexity of a function and of a discrete set of data. In Section 2.2 a continuous problem is proposed that solves convexity preserving approximation of given scattered data. This continuous problem is then discretized on a square equidistant lattice, using the Finite Difference Method and the Finite Element Method, respectively.
2.1 Definition of convexity

We need to know what it means to say that a set of data in the plane is convex. We start by giving the definition of convexity for a continuous function.

Definition 2.1.1 A continuous function f ∈ C(Ω), defined on a closed convex domain Ω ⊆ R^s, is said to be convex if, for every two points x_1, x_2 ∈ Ω and every λ, 0 ≤ λ ≤ 1, there holds

$f(\lambda x_1 + (1-\lambda)x_2) \le \lambda f(x_1) + (1-\lambda) f(x_2).$

If, for every 0 < λ < 1 and x_1 ≠ x_2, there holds

$f(\lambda x_1 + (1-\lambda)x_2) < \lambda f(x_1) + (1-\lambda) f(x_2),$

then f is said to be strictly convex.

In the 2-dimensional case this means that, given three arbitrary points on the surface, the planar interpolant to these points must lie above the surface in the convex hull spanned by the points, i.e. the triangle formed by the three points, for strict convexity; for convexity the interpolant may touch the surface in the triangle. We also remind of the following property of a twice continuously differentiable convex function.
Proposition 2.1.1 Let f ∈ C²(Ω). Then f is convex over a convex set Ω containing an interior point if and only if the Hessian matrix f'' of f is positive semidefinite throughout Ω.

For a proof see for example Luenberger in [9]. Now let us define a discrete version of convexity in the plane. This notion can be defined in many ways, but we will use the following.

Definition 2.1.2 A set of data {(x_i, y_i, z_i)}, i = 1,…,N, is called convex if there is a convex continuous function f ∈ C(Ω) such that f(x_i, y_i) = z_i and (x_i, y_i) ∈ Ω, i = 1,…,N. The data set is said to be strictly convex if the interpolating function is strictly convex.
2.2 The continuous ansatz

In this paper we will study a particular approach for approximating scattered data in R³. More precisely, given data (x_i, y_i, z_i), i = 1,2,…,N, a bivariate function u(x,y) is sought such that the difference in 2-norm between the function values at the scattered points and the given values, i.e.

(2.1)  $\sum_{i=1}^{N} \big(u(x_i, y_i) - z_i\big)^2,$

is minimized. Apart from data fidelity, we will also require that the function u(x,y) is convex. One way of doing this is to make the eigenvalues of the Hessian of u(x,y) nonnegative. The Hessian is

(2.2)  $u''(x,y) = \begin{pmatrix} u_{xx} & u_{xy} \\ u_{yx} & u_{yy} \end{pmatrix}, \quad \text{where } u_{xx} = \frac{\partial^2 u(x,y)}{\partial x^2} \text{ etc.}$

By taking the trace of the Hessian (tr(u'') = u_xx + u_yy), which is the sum of the eigenvalues, and the determinant (det(u'') = u_xx u_yy − u_xy²), which is the product of the eigenvalues, and forcing them to be nonnegative, we have enough constraints to guarantee convexity. The product constraint will be called the Monge-Ampere constraint and the sum constraint the Laplacian constraint. We also add a second term to the objective function to make the solution smoother, for example the square of the L²-norm of the Laplacian. Now we
can formulate the full problem as the following optimization problem,

(2.3)  $\min_{u \in H^2(\Omega)} \ \sum_{i=1}^{N} \big(u(x_i, y_i) - z_i\big)^2 + \mu \iint_{\Omega} (u_{xx} + u_{yy})^2 \, dx\,dy$

when

$u_{xx} + u_{yy} \ge 0 \ \text{a.e.}, \qquad u_{xx} u_{yy} - u_{xy}^2 \ge 0 \ \text{a.e.}$

Here Ω is a compact area in R² that contains the scattered data, μ ≥ 0 is the weight of the smoothing term, and H²(Ω) is the Sobolev space of order 2, i.e. u, u' and u'' ∈ L²(Ω) (the space of square integrable functions on Ω).
2.3 Discretization

The continuous problem (2.3) can be discretized in many ways. One way is to convert the function u(x,y) into a vector containing the function values on a finite lattice, v = {u(x_i, y_i)}, i = 1,…,N. The derivatives can then be approximated with differences of neighbouring points in the lattice. Another way is to approximate the function u(x,y) by a linear combination of piecewise planar basis functions, as in the Finite Element Method.
2.3.1 Finite Differences discretization

Let the area (Ω in (2.3)) be discretized into a finite lattice (Ω_h), which includes the scattered data and additional points where the function u(x,y) is to be evaluated. For simplicity, assume that the lattice is square-shaped, with an equidistant grid distance of h in both directions.
Figure 2.1: The Laplacian "molecule"

The derivatives are approximated by taking differences of neighbouring points in the lattice. The Laplacian (u_xx + u_yy) is discretized using the
expression

(2.4)  $u_{xx} + u_{yy} = \frac{u_{i,j+1} + u_{i,j-1} + u_{i+1,j} + u_{i-1,j} - 4u_{i,j}}{h^2} + O(h^2),$

where u_{i,j} = u(x_i, y_j). This is the well-known 5-point approximation, which has second order accuracy. For more on finite differences see for example [4]. The mixed derivative, u_xy, is approximated by

(2.5)  $u_{xy} = \frac{u_{i+1,j+1} + u_{i-1,j-1} - u_{i+1,j-1} - u_{i-1,j+1}}{4h^2} + O(h^2).$

This approximation is also of second order.
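To make the stencils concrete, the following minimal Python/NumPy sketch (an illustration, not part of the original report) applies (2.4) and (2.5) to all interior lattice points and checks the second-order accuracy on a smooth function; the grid and function names are ours.

```python
import numpy as np

def laplacian_5pt(u, h):
    # 5-point stencil (2.4): (u_N + u_S + u_E + u_W - 4 u_C) / h^2, interior points
    return (u[1:-1, 2:] + u[1:-1, :-2] + u[2:, 1:-1] + u[:-2, 1:-1]
            - 4.0 * u[1:-1, 1:-1]) / h**2

def mixed_xy(u, h):
    # cross stencil (2.5): (u_NE + u_SW - u_SE - u_NW) / (4 h^2), interior points
    return (u[2:, 2:] + u[:-2, :-2] - u[2:, :-2] - u[:-2, 2:]) / (4.0 * h**2)

# accuracy check on the test function u = sin(x^2 + y^2) used in Chapter 5
h = 0.1
x = np.arange(-0.6, 0.6 + h / 2, h)
X, Y = np.meshgrid(x, x, indexing="ij")
U = np.sin(X**2 + Y**2)
R = X**2 + Y**2
lap_exact = 4 * np.cos(R) - 4 * R * np.sin(R)      # exact u_xx + u_yy
err = np.abs(laplacian_5pt(U, h) - lap_exact[1:-1, 1:-1]).max()
print(f"max stencil error: {err:.2e}")             # decreases as O(h^2)
```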
Figure 2.2: The "molecule" for the approximation of u_xy.
A problem appears when we wish to approximate derivatives at boundary points, since there will be neighbours outside the lattice. We will discuss three possible solutions to this problem:

(i) Treat the points that are missing in the approximations as unknowns.
(ii) Modify the difference operators (2.4) and (2.5) when applied to boundary points, so that they use only lattice points.
(iii) Assume that the points outside the boundary points are known, and use these as data in the approximation.

The first solution (i) can be realized by extending the lattice with a frame of unknowns, which is cut off in the solution. The second solution (ii) can be realized by making the symmetric difference operators non-symmetric, for example

(2.6)  $u_{xx} = \frac{2u_{i,j} - 5u_{i+1,j} + 4u_{i+2,j} - u_{i+3,j}}{h^2} + O(h^2),$
(2.7)  $u_{xy} = \frac{4u_{i,j} - 5(u_{i+1,j} + u_{i,j+1}) + 6u_{i+1,j+1} + u_{i+2,j} + u_{i,j+2} - u_{i+1,j+2} - u_{i+2,j+1}}{2h^2} + O(h^2).$

These two operators can be rotated to fit any boundary lattice point, with some sign adjustments. Even though these two non-symmetric difference operators have an accuracy of order two, like the symmetric ones, they give a poorer approximation of the second derivatives than the symmetric ones do. This might be explained by the fact that in the symmetric operators (e.g. (2.4)) all odd terms in h, i.e. all terms with h^(2n+1), n = 1,2,…, in the Taylor expansion cancel.

The last solution (iii) is realized using a frame of data outside the lattice. This is for obvious reasons the best, because it has more data as input. For most of our investigation we assume given points outside the boundary points. This has the advantage of singling out the features we will investigate here more clearly.

Finally, the integral in (2.3) is approximated by sampling (u_xx + u_yy)² over all lattice points, i.e. the square sum of the discrete Laplacian. This is essentially the trapezoidal rule.

Let the sampled function u be represented as a vector v, containing the values of u at all lattice points. The lattice is indexed from bottom to top and left to right, see figure 2.3.

Figure 2.3: A 6×6 lattice example with corresponding lattice index.

Let E be the identity matrix but with the rows that correspond to the given data in the z-vector extracted, L the discrete Laplacian difference operator, and W_i the Monge-Ampere matrices for all lattice points i. These are made by forming the three approximation matrices for u_xx, u_yy and u_xy, as discussed above, represented by the matrices
A_xx, A_yy and A_xy; with a_xx, a_yy, a_xy denoting the rows of these matrices corresponding to lattice point i, we set W_i = a_xx^T a_yy − a_xy^T a_xy. For example, with i = 5 in a 3×3 lattice (with h = 1),

$W_i = \begin{pmatrix}
-1/16 & 0 & 1/16 & 0 & 0 & 0 & 1/16 & 0 & -1/16 \\
0 & 0 & 0 & 1 & -2 & 1 & 0 & 0 & 0 \\
1/16 & 0 & -1/16 & 0 & 0 & 0 & -1/16 & 0 & 1/16 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & -2 & 4 & -2 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1/16 & 0 & -1/16 & 0 & 0 & 0 & -1/16 & 0 & 1/16 \\
0 & 0 & 0 & 1 & -2 & 1 & 0 & 0 & 0 \\
-1/16 & 0 & 1/16 & 0 & 0 & 0 & 1/16 & 0 & -1/16
\end{pmatrix}$

These matrices W_i are made symmetric by taking the symmetric part, (W_i + W_i^T)/2. The symmetrization does not influence the constraints but is essential in iterative algorithms, see e.g. method (4.9), discussed in later chapters. The discrete version of (2.3) then becomes,
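As a sanity check, the construction of W_i can be reproduced in a few lines. The sketch below (ours, not the report's code; the lattice indexing follows figure 2.3) builds the three stencil rows for the center point of a 3×3 lattice with h = 1 and verifies that v^T W_i v equals det(u'') = 4ab − c² for a quadratic u = ax² + by² + cxy, for which the difference formulas are exact.

```python
import numpy as np

n, h = 3, 1.0
k = lambda col, row: col * n + row      # bottom-to-top, left-to-right (figure 2.3)
c0 = r0 = 1                             # center point, i = 5 in 1-based numbering

a_xx = np.zeros(n * n); a_yy = np.zeros(n * n); a_xy = np.zeros(n * n)
a_xx[[k(c0 - 1, r0), k(c0, r0), k(c0 + 1, r0)]] = np.array([1, -2, 1]) / h**2
a_yy[[k(c0, r0 - 1), k(c0, r0), k(c0, r0 + 1)]] = np.array([1, -2, 1]) / h**2
a_xy[[k(c0 + 1, r0 + 1), k(c0 - 1, r0 - 1)]] = 1 / (4 * h**2)
a_xy[[k(c0 + 1, r0 - 1), k(c0 - 1, r0 + 1)]] = -1 / (4 * h**2)

W = np.outer(a_xx, a_yy) - np.outer(a_xy, a_xy)
W = (W + W.T) / 2                       # symmetrize, as in the text

a, b, cc = 1.0, 2.0, 0.5                # sample a quadratic on the lattice
xs = np.array([-1.0, 0.0, 1.0])
v = np.array([a * x * x + b * y * y + cc * x * y for x in xs for y in xs])
print(v @ W @ v, 4 * a * b - cc**2)     # both equal det(u'') = 4ab - c^2
```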
(2.8)  $\min_{v \in \mathbb{R}^N} \ \|Ev - z\|_2^2 + \mu \|Lv + X_l\|_2^2$

when

$Lv + X_l \ge 0, \qquad v^T W_i v + c_i^T v + d_i \ge 0, \quad i = 1,\dots,N.$

Here L = A_xx + A_yy is of dimension N×N in the case with given boundary data, case (iii). Further, the matrix E is of dimension N_1×N, with N_1 < N, where N_1 equals the number of given data points in the lattice; c_i, X_l and d_i are vectors and scalars, respectively, which depend on the boundary data. These quantities are all zero for methods (i) and (ii) above, and nonzero only for method (iii).
2.3.2 Finite Elements Approximation
The basic idea of the Finite Element Method is to reformulate a differential equation as an equivalent variational problem, then to construct a discrete finite-dimensional function space, and solve the variational problem for a solution in that function space. Let us show this with an example. We start with the Poisson equation in the plane,

(2.9)  $-\Delta u(x) = f(x) \ \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega.$

Here Ω is a bounded open domain in the plane, with boundary ∂Ω, and f is a given function. If we multiply the first equation in (2.9) with a test
function, integrate over Ω and apply Green's formula, we can formulate the variational problem for (2.9),

(2.10)  $a(u,v) = (f,v) \quad \forall v \in V,$

where $a(u,v) = \int_\Omega \nabla u \cdot \nabla v \, dx$, $(f,v) = \int_\Omega f v \, dx$, and

$V = \{\, v : v \in C(\Omega),\ \text{the first derivatives are piecewise continuous on } \Omega,\ v = 0 \text{ on } \partial\Omega \,\}.$

We now introduce a triangulation lattice in the plane, and a function space V_h that contains, for example, continuous piecewise linear functions,

(2.11)  $V_h = \{\, v : v \in C(\Omega),\ v \text{ linear on every triangle},\ v = 0 \text{ on } \partial\Omega \,\}.$

This function space can be built up by linear combinations of the basis functions

(2.12)  $\varphi_j(N_i) = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \ne j. \end{cases}$

Here φ_i is the basis function corresponding to the lattice point N_i in the triangulation. We now seek a solution u_h to the variational problem (2.10) in this discrete function space V_h, by forming the stiffness matrix and the right hand side vector, and solving a linear system,

(2.13)  $A\xi = b, \quad \text{where } A_{ij} = a(\varphi_i, \varphi_j), \ \xi_i = u_h(N_i), \ b_i = (f, \varphi_i).$

It can easily be shown that for a square equidistant lattice the stiffness matrix for the problem (2.9) above is identical to the Laplacian matrix in the finite difference method. Because the Finite Element Method is so similar to the Finite Difference Method on a square equidistant lattice, and only such lattices are investigated in this paper, the Finite Element Method is not used further here. For a lattice with, for example, randomly distributed points, the Finite Element Method is superior to the Finite Difference Method, because it uses more of the structure of the lattice. For further reading on the Finite Element Method see e.g. [8].
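The claimed identity can be checked numerically. The sketch below (an illustrative check of ours, under the assumption that each grid square is split into two right triangles) assembles the P1 stiffness matrix and compares an interior row with the 5-point stencil; they agree up to the factor −h², since the stiffness matrix is h-independent in two dimensions.

```python
import numpy as np

def p1_stiffness(n):
    # P1 stiffness on an n x n node grid, each square split into two right
    # triangles; the local matrix has the right angle at local vertex 0
    Kloc = np.array([[1.0, -0.5, -0.5],
                     [-0.5, 0.5, 0.0],
                     [-0.5, 0.0, 0.5]])
    A = np.zeros((n * n, n * n))
    idx = lambda i, j: i * n + j
    for i in range(n - 1):
        for j in range(n - 1):
            sw, se = idx(i, j), idx(i + 1, j)
            nw, ne = idx(i, j + 1), idx(i + 1, j + 1)
            for tri in ([sw, se, nw], [ne, nw, se]):   # right angles at sw and ne
                for a in range(3):
                    for b in range(3):
                        A[tri[a], tri[b]] += Kloc[a, b]
    return A

n = 6
A = p1_stiffness(n)
k = 2 * n + 3                       # an interior node
fd_row = np.zeros(n * n)            # 5-point Laplacian row, scaled by h^2
fd_row[[k, k - 1, k + 1, k - n, k + n]] = [-4, 1, 1, 1, 1]
print(np.allclose(A[k], -fd_row))   # True: stiffness row = -h^2 * FD Laplacian row
```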
Chapter 3 Linear constraints

In this chapter two different ways of constructing convex surfaces using only linear constraints are discussed. The first is proposed by Scott in [6] and does not rely on the discretization of the surface; the other is a reformulation of the constraints proposed in Chapter 2, based on the Monge-Ampere and the Laplacian operators.
3.1 Scott's constraints

Another way to approach convexity on a discrete two-dimensional lattice is described by Scott in [6]. He considers generating the piecewise planar interpolant to the triangulated data. If the planes formed by two neighbouring linear interpolant triangles meet at an angle less than π, then the corresponding edge is said to be convex. The goal is to make all edges convex. This is not a problem in itself, but the triangulation of the data is not unique: for one triangulation all edges can be convex, while for another triangulation the data points form non-convex edges. A convex triangulation is one such that the piecewise linear interpolant of the data based on it is convex. If f is a convex interpolant (not necessarily piecewise linear) of the data then there exists a convex triangulation, and vice versa, i.e. if there exists a convex triangulation of the data, then the data are convex, cf. the definition of convex scattered data earlier. This convex triangulation cannot itself be used to construct constraints for convexity; we need something else that guarantees the existence of a convex triangulation.

In the one-dimensional case, z = f(x), every pair of x-points with a third point between them represents a constraint for convexity,

(3.1)  $(x_j - x_i)(f_k - f_i) \le (f_j - f_i)(x_k - x_i),$
where x_i < x_k < x_j and f_i = f(x_i), f_j = f(x_j), f_k = f(x_k). A geometric interpretation of the constraint is that the middle point lies below the line segment connecting the two outer points, see figure 3.1.
Figure 3.1: A convex line segment in one dimension.

If there are N points, there are $\binom{N}{3} = O(N^3)$ linear constraints like (3.1). Most of these are not linearly independent. If we instead consider only pairs of points with exactly one point in between, then it is easy to see that the resulting constraints become linearly independent. The number of such constraints is O(N), or exactly N − 2.

The two-dimensional case is more complicated. Any three points (x, y) whose convex hull contains a fourth point represent a constraint. There are O(N⁴) such linear constraints. These are, as in the one-dimensional case, not linearly independent. Scott makes the following definitions in [6]:

Definition 3.1.1 A minimal triangle is a set of three points in the plane, which forms a triangle containing exactly one point inside the triangle.
Definition 3.1.2 A minimal linesegment is a set of two points in the plane,
which have exactly one point lying on the linesegment between them.
Linearly independent linear constraints for convexity in the plane are represented by both minimal triangles and minimal linesegments. If three points lie on a line, they form a minimal linesegment and a constraint just like in the one-dimensional case (3.1) holds. If the three points do not lie on a line, there exists a fourth point such that the four points form a minimal triangle, and the four points then form a constraint of the type shown below in (3.2).
In a square lattice with N = n² points there are, according to [6], O(N²) minimal linesegments and O(N²) minimal triangles. The smallest possible minimal triangle (see figure 3.2) gives the constraint

(3.2)  $f(x_1) + f(x_2) + f(x_3) - 3 f(x_{\mathrm{inner}}) \ge 0,$

where x_1, x_2 and x_3 are the points that form the triangle and x_inner is the inner point of the triangle.

Figure 3.2: Example of the smallest minimal triangle in a square lattice

This means that the number of linearly independent linear constraints becomes computationally troublesome for big values of N, i.e. for big lattices.
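The minimal linesegments on a square lattice are easy to enumerate programmatically. The following sketch (illustrative Python of ours, covering only the linesegment constraints, not the minimal triangles) builds the rows of a constraint matrix A for Av ≥ 0, one row f(p+d) + f(p−d) − 2f(p) ≥ 0 per lattice point p and primitive direction d; on the lattice, p is the unique point on the open segment between p−d and p+d.

```python
from math import gcd
import numpy as np

def minimal_linesegment_constraints(n):
    """Rows of A for A v >= 0 on an n x n lattice, flattened as v[px*n + py]."""
    rows = []
    # primitive directions, each unordered direction counted once
    dirs = [(dx, dy) for dx in range(-(n - 1), n) for dy in range(0, n)
            if (dy > 0 or dx > 0) and gcd(abs(dx), abs(dy)) == 1]
    for px in range(n):
        for py in range(n):
            for dx, dy in dirs:
                qx, qy, rx, ry = px + dx, py + dy, px - dx, py - dy
                if 0 <= qx < n and 0 <= qy < n and 0 <= rx < n and 0 <= ry < n:
                    row = np.zeros(n * n)
                    row[qx * n + qy] += 1.0    # outer point p + d
                    row[rx * n + ry] += 1.0    # outer point p - d
                    row[px * n + py] -= 2.0    # middle point p
                    rows.append(row)
    return np.array(rows)

A = minimal_linesegment_constraints(9)
print(A.shape)   # the row count grows like O(N^2) for N = n^2 lattice points
```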
3.2 Factorized Monge-Ampere

The difficulty in solving problem (2.3) is of course the nonlinear Monge-Ampere constraint, which, when discretized, spans a non-convex set. However, the Monge-Ampere constraint can be factorized,

(3.3)  $\tfrac{1}{2}(u_{xx} - u_{xy})(u_{yy} + u_{xy}) + \tfrac{1}{2}(u_{xx} + u_{xy})(u_{yy} - u_{xy}) \ge 0.$

If we, instead of using the Monge-Ampere expression as a constraint, use all four factors in (3.3) as constraints,

(3.4)  $u_{xx} + u_{xy} \ge 0, \quad u_{xx} - u_{xy} \ge 0, \quad u_{yy} + u_{xy} \ge 0, \quad u_{yy} - u_{xy} \ge 0,$

then, when discretized, we get linear (thus convex) constraints. The constraints (3.4) are equivalent to

(3.5)  $u_{xx} \ge |u_{xy}| \ge 0, \qquad u_{yy} \ge |u_{xy}| \ge 0.$
We can then see that if the constraints (3.4), from now on called the factorized Monge-Ampere constraints, are fulfilled, then both the Monge-Ampere constraint and the Laplacian constraint are fulfilled, i.e. the factorized Monge-Ampere constraints guarantee convexity, in the sense of no truncation error, cf. Chapter 5. The reverse does not hold. It is hard to get an insight into how the feasible set of the factorized Monge-Ampere constraints looks in comparison with the original set spanned by the Monge-Ampere and the Laplacian constraints, especially when the dimension of the solution space equals the number of lattice points. Let us instead consider what the constraints mean for a class of functions, e.g. quadratic functions. A quadratic function u(x,y) = ax² + by² + cxy + dx + ey + g is convex, i.e. the Monge-Ampere and Laplacian constraints are fulfilled, if

(3.6)  $4ab - c^2 \ge 0, \qquad 2a + 2b \ge 0.$

The factorized Monge-Ampere constraints give, for the same function,

(3.7)  $2a \ge |c|, \qquad 2b \ge |c|.$
Figure 3.3 shows the areas where the respective constraints are fulfilled. The darker nonlinear area is of course that of the Monge-Ampere and Laplacian constraints, and the lighter linear one that of the factorized Monge-Ampere constraints; the corner of the linear region lies at (a, b) = (|c|/2, |c|/2). This means that some convex functions, e.g. u(x,y) = x²/3 + 3y² + 2xy, do not fulfill the factorized Monge-Ampere constraints.

Figure 3.3: Areas of coefficients for convexity of quadratic functions
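For quadratic functions the two regions can be compared directly. The snippet below (illustration only; function names are ours) encodes (3.6) and (3.7) and confirms that the example above is convex but lies outside the factorized region.

```python
def convex_quadratic(a, b, c):
    """Monge-Ampere + Laplacian constraints (3.6) for u = a x^2 + b y^2 + c xy."""
    return 4 * a * b - c**2 >= 0 and 2 * a + 2 * b >= 0

def factorized(a, b, c):
    """Factorized Monge-Ampere constraints (3.7) for the same function."""
    return 2 * a >= abs(c) and 2 * b >= abs(c)

a, b, c = 1 / 3, 3.0, 2.0                     # the example from the text
print(convex_quadratic(a, b, c), factorized(a, b, c))   # True False
```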
Chapter 4 Algorithms

In this chapter we discuss some difficulties with the problems that appear from the discretization of the continuous ansatz, and from the problem with linear constraints. Some algorithms for solving these types of problems are also discussed.
4.1 Quadratic programs

The class of optimization problems having linear constraints and a quadratic objective function is called quadratic programs (QP),

(4.1)  $\min_{x \in \mathbb{R}^N} \{\, q(x) = \tfrac{1}{2} x^T Q x + c^T x \ :\ a_i^T x \ge b_i, \ i \in I \,\}.$

This class varies in difficulty depending on the properties of the matrix Q. Here a_i^T denotes the i-th row of a matrix A and I = {1,…,M}. For the case

(4.2)  $\min_{v \in \mathbb{R}^N} \ \|Ev - z\|_2^2 + \mu \|Lv + X_l\|_2^2 \quad \text{when } Av \ge 0,$

the corresponding Q matrix is of course positive semidefinite, and in our problem with given boundary data, case (iii), the matrix is positive definite. This fact makes the problem much easier to solve numerically. It also guarantees a unique solution to the problem.
4.1.1 Active set method
To solve a quadratic program numerically, one can use an active set method. The basic idea of active set methods is, given a feasible point x_k, to find a
direction d by solving the equality-constrained subproblem

(4.3)  $\min \{\, q(x_k + d) \ :\ a_i^T (x_k + d) = b_i, \ i \in I_k \,\},$

where I_k is the working set, often consisting of the indices of the set of active constraints. Further, as above, a_i^T is the i-th row of the constraint matrix A and b_i the corresponding element of the b-vector. The subproblem (4.3) can be solved using, for example, a null-space method. These methods find a full rank matrix Z whose columns span the null space of the constraint matrix A_k corresponding to the constraints in (4.3). Then all feasible points can be written as x = x_0 + Zw, where w is arbitrary. This is equivalent to the unconstrained problem

(4.4)  $\min_w \{\, \tfrac{1}{2} w^T (Z^T Q Z) w + (Q x_0 + c)^T Z w \,\}.$

The Lagrange multipliers, λ, can be calculated using the first order necessary condition for optimality,

(4.5)  $Q x^* + c + A^T \lambda^* = 0.$

Here x^* denotes the optimal solution and λ^* the corresponding Lagrange multipliers. If the solution to the subproblem (4.3) is d = 0 and the condition

(4.6)  $Q x_k + c + \sum_{i \in I_k} \lambda_i^{(k)} a_i = 0$

holds, then x_k is a local minimum of the original problem (4.1). If not, the working set I_k is updated by deleting one of the indices for which λ_i^{(k)} < 0, and the iteration is continued with this working set.
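A minimal NumPy/SciPy sketch of the null-space solve of (4.3)-(4.4) is given below. It is an illustration under the notation above, not a complete active-set solver: step-length control and the working-set update are omitted.

```python
import numpy as np
from scipy.linalg import null_space, solve

def nullspace_step(Q, c, A_k, b_k, x_k):
    """Solve the equality subproblem (4.3) at x_k:
    min q(x_k + d)  s.t.  a_i^T (x_k + d) = b_i for i in the working set."""
    Z = null_space(A_k)                            # columns span null(A_k)
    g = Q @ x_k + c                                # gradient of q at x_k
    # particular step restoring equality feasibility, then reduced problem (4.4)
    d0 = np.linalg.lstsq(A_k, b_k - A_k @ x_k, rcond=None)[0]
    H = Z.T @ Q @ Z
    w = solve(H, -Z.T @ (g + Q @ d0), assume_a="sym")
    return d0 + Z @ w                              # the search direction d
```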
4.2 Quadratically constrained quadratic programs

The class of problems called quadratically constrained quadratic programs (QQPs), which here are non-convex, is very hard to solve numerically. In (2.8) the objective function is convex, but the constraints are not (i.e. the matrices W_i are indefinite). This means, among other things, that one cannot guarantee a zero duality gap between the QQP and its Lagrangian dual,

(4.7)  $\max_{\lambda \ge 0} \min_{v} \ \|Ev - z\|_2^2 + \mu \|Lv + X_l\|_2^2 - \sum_{i=1}^{n} \lambda_i \big(v^T W_i v + c_i^T v + d_i\big) - \sum_{i=n+1}^{2n} \lambda_i L_i v,$

see [7]. Here L_i denotes the corresponding row of the Laplacian difference matrix L. So a method based on the Lagrange dual relaxation (4.7) would probably only obtain an approximate solution to the original problem.
4.2.1 QP-iteration
To escape the problem with quadratic constraints,

(4.8)  $\varphi_i(v, v) \ge 0, \quad \text{with } \varphi_i(v, w) = v^T W_i w + c_i^T w + d_i,$

one might iterate with an old value v in the first argument of φ_i and a new value in the other. If the iteration converges, then the original constraints are fulfilled. Since in every iteration step we need to solve a quadratic programming problem (QP), we call this iteration QP-iteration. The φ_i-functions are not symmetric in their arguments, but that does not matter when the arguments are the same, as in the original constraints. Used in an iteration, however, the distribution between the arguments is essential; hence the c_i term can be split in different ways,

(4.9)  $\min_{v^{k+1} \in \mathbb{R}^N} \ \|Ev^{k+1} - z\|_2^2 + \mu \|Lv^{k+1} + X_l\|_2^2$

when

$Lv^{k+1} \ge 0, \qquad (v^k)^T W_i v^{k+1} + c_i^T\big((1-\alpha)v^k + \alpha v^{k+1}\big) + d_i \ge 0, \quad i = 1,\dots,N,$

where α is a scalar. The most natural value of α would be α = 1/2, because of the symmetry in the arguments, but for reasons discussed later, α = 1 is also
tested. From the numerical experience presented later, for the case α = 1 one needs v^{k+1} and v^k to be closer, i.e. ‖v^k − v^{k+1}‖ to be smaller, than in the case α = 1/2 (i.e. φ_i(v^k, v^{k+1}) and φ_i(v^k, v^k) are closer in the symmetric case).
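For concreteness, a compact Python sketch of iteration (4.9) follows. It is an illustration, not the code used for the experiments: each subproblem is solved with SciPy's generic SLSQP routine rather than a dedicated QP solver, and the boundary vector X_l is taken to be zero for brevity.

```python
import numpy as np
from scipy.optimize import minimize

def qp_iteration(E, z, L, W, c, d, mu, v0, alpha=0.5, iters=50, tol=1e-8):
    # W: list of symmetrized N x N matrices W_i; c: vectors c_i; d: scalars d_i
    v = v0.copy()
    for _ in range(iters):
        vk = v.copy()
        # linearized constraints of (4.9):
        # (W_i vk)^T u + c_i^T((1 - alpha) vk + alpha u) + d_i >= 0
        M = np.array([W_i @ vk + alpha * c_i for W_i, c_i in zip(W, c)])
        r = np.array([(1 - alpha) * (c_i @ vk) + d_i for c_i, d_i in zip(c, d)])
        cons = [{"type": "ineq", "fun": lambda u, M=M, r=r: M @ u + r},
                {"type": "ineq", "fun": lambda u: L @ u}]
        obj = lambda u: np.sum((E @ u - z) ** 2) + mu * np.sum((L @ u) ** 2)
        v = minimize(obj, vk, method="SLSQP", constraints=cons).x
        if np.linalg.norm(v - vk) <= tol * max(np.linalg.norm(vk), 1e-30):
            break   # rho_k below tolerance, cf. (5.1)
    return v
```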
4.2.2 Sequential Quadratic Programming
There are several known algorithms for solving problems with nonlinear constraints,

(4.10)  $\min_{x \in \mathbb{R}^n} \{\, f(x) \ :\ c_i(x) \ge 0, \ i \in I \,\}.$

One such method is called Sequential Quadratic Programming, see [5]. This algorithm is a generalization of Newton's method for unconstrained optimization, in that it finds a step d^k away from the current point by solving a QP problem that approximates the original problem,

(4.11)  $\min_{d} \ \nabla f(x^k)^T d + \tfrac{1}{2} d^T \nabla^2_{xx} L(x^k, \lambda^k) d$

when

$c_i(x^k) + \nabla c_i(x^k)^T d \ge 0, \quad i \in I.$

Here L(x, λ) denotes the Lagrangian, cf. (4.7). The next iterate is simply x^{k+1} = x^k + d^k, but there are several ways to estimate the Lagrange multipliers λ. One way is to use the Lagrange multipliers from the last QP subproblem (4.11). Another estimate can be obtained by solving an auxiliary problem; this can lead to more accurate estimates, but the Matlab implementation (called constr) that is tested here uses the first way of estimating the Lagrange multipliers. For further reading on sequential quadratic programming see e.g. [5], pages 361-382.
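The constr routine belongs to a long-obsolete Matlab toolbox; a present-day counterpart from the same SQP family is SciPy's SLSQP method. The toy instance of (4.10) below is purely illustrative.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2          # objective f(x)
cons = {"type": "ineq", "fun": lambda x: x[0] * x[1] - 1.0}  # c(x) = x1*x2 - 1 >= 0
res = minimize(f, x0=np.array([2.0, 2.0]), method="SLSQP", constraints=[cons])
print(res.x)
```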
Chapter 5 Tests and Conclusions

Here results from numerical tests, together with some conclusions, are presented. As convergence measure we use the normalized 2-norm of the difference between two consecutive iterates,

(5.1)  $\rho_k = \frac{\|v^{k+1} - v^k\|_2}{\|v^k\|_2}.$
The 2-norm is equivalent to the Frobenius norm when v is represented as a matrix, i.e. the numerator is the Euclidean size of the z-direction differences over all points. It is therefore quite a natural way of measuring convergence. The data was generated by sampling the function u = sin(x² + y²) on a square lattice over [−0.5, 0.5]² with spacing h = 0.1, see figure 5.1. Every 8th point was given as data z, as well as the data points along the boundary frame, e.g. x = 0.6 and y ∈ [−0.6, 0.6]. The function u is in itself convex on this domain.
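For reference, the test data can be regenerated roughly as follows (a sketch; the exact indexing of the "every 8th point" selection and of the boundary frame in the original experiments is an assumption).

```python
import numpy as np

h = 0.1
x = np.arange(-0.5, 0.5 + h / 2, h)           # 11 x 11 lattice
X, Y = np.meshgrid(x, x, indexing="ij")
v_exact = np.sin(X**2 + Y**2).flatten()       # convex on this domain

given = np.arange(0, v_exact.size, 8)         # every 8th lattice point is data
z = v_exact[given].copy()
# A frame of known values at distance h outside the lattice (x or y = +-0.6)
# supplies the boundary data of method (iii). Non-convex data is simulated
# below by disturbances such as adding 0.1 at a single lattice point.
```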
Figure 5.1: The function u = sin(x² + y²), on a lattice with 11×11 points and a spacing of h = 0.1.
However, in some experiments we simulated non-convex data by adding noise to some samples. The nonlinear Monge-Ampere constraints are the critical part in the tests, and are therefore plotted for the resulting surfaces. To get more insight, these constraints are plotted in one dimension, see figure 5.2; the ordering of the constraints in the one-dimensional plot is the one described in earlier chapters. Note, as expected, that the constraints are positive, reflecting the convexity of the test function.
Figure 5.2: The Monge-Ampere constraint for the original data.
5.1 Convergence of QP-iteration

The symmetric QP-iteration (4.9) with α = 1/2 converges very fast, provided that the given data correspond to a convex function. In fact it converges after one iteration for all μ ≤ 0.01, using a constant start vector v⁽⁰⁾. This can be seen in table 5.1, where the minimum of φ_i(v^k, v^k) becomes positive after one iteration and is close to the minimum of φ_i(v^k, v^{k+1}), as well as in figure 5.3. The reason for this behaviour is that for μ ≤ 0.01 the constraints (both the Monge-Ampere and the Laplacian constraints) are inactive at the solution, see figure 5.4. This solution is then the solution of the normal equations formed by the objective function. It is from now on denoted the normal solution, and is used as a starting point in the other tests. For μ > 0.01 the iteration enters a stable oscillating state (see figure 5.3), alternating between two vectors.
Figure 5.3: Convergence pattern of the symmetric QP-iteration with start vector v_i⁽⁰⁾ = 0.2, i = 1,…,N (curves for μ = 10, 1, 0.1, 0.01 and 10⁻⁸).
Figure 5.4: Inactive Monge-Ampere constraint, for the symmetric QP-iteration with μ = 0.01 and start vector v_i⁽⁰⁾ = 0.2, i = 1,…,N.
This method does not generate a feasible solution in every iteration, so the critical part is of course to get the Monge-Ampere constraint nonnegative; the Laplacian constraints are fulfilled in every iteration. The oscillating state for big values of μ can consist of "good" solutions (or one of them can be good), in the sense of fulfilling the Monge-Ampere constraint and fitting the data. But since the method does not converge, there is no guarantee that the constraints are all
k | ρ_k      | [min_i φ_i(v^k, v^k), max_i φ_i(v^k, v^k)] | [min_i φ_i(v^k, v^{k+1}), max_i φ_i(v^k, v^{k+1})]
0 | 0.62     | [0, 6.8e-3]           | [-1.2e-3, 0.14]
1 | 3.02e-16 | [9.78e-6, 7.56e-4]    | [9.78e-6, 7.56e-4]
2 | 0        | [9.78e-6, 7.56e-4]    | [9.78e-6, 7.56e-4]

Table 5.1: Convergence of the symmetric QP-iteration with non-disturbed data, μ = 0.01, v_i⁽⁰⁾ = 0.2, i = 1,…,N.
fulfilled. If the data is disturbed, e.g. by adding 0.1 at lattice point 73 to simulate non-convex data, the convergence pattern looks as in figure 5.5. These experiments clearly reveal, cf. figure 5.3 and figure 5.5, as well as other similar tests not reported here, that the symmetric QP-iteration may fail to converge for every value of μ, except in the trivial case when the normal solution is the solution to (2.8). The QP-iteration is therefore not satisfactory with α = 1/2. If we modify algorithm (4.9) by changing α from 1/2 to 1, we get slow convergence for all μ for non-disturbed data, with v⁽⁰⁾ = 0 as start vector (see figure 5.6). The convergence curve looks super-linear in the beginning but flattens out to linear convergence.
Figure 5.5: Convergence pattern of the symmetric QP-iteration using the normal solution as start vector. The data is disturbed in lattice point 73.
Figure 5.6: Convergence pattern of the QP-iteration, with the zero vector as start vector.

For convex data it converged in all tested cases (i.e. μ ∈ [10⁻⁷, 10⁷]). In table 5.2 the convergence measure ρ_k of the QP-iteration with convex data is listed for a number of iterations, together with the intervals of φ_i(v^k, v^k) and φ_i(v^k, v^{k+1}). We can see that the minimum of φ_i(v^k, v^k) and the minimum of φ_i(v^k, v^{k+1}) stay close to zero from below after many iterations, whereas in the symmetric case they became positive already after one iteration, cf. table 5.2 and table 5.1. This means that convergence is even more important for fulfilling the Monge-Ampere constraints in the non-symmetric method.
k  | ρ_k     | [min_i φ_i(v^k, v^k), max_i φ_i(v^k, v^k)] | [min_i φ_i(v^k, v^{k+1}), max_i φ_i(v^k, v^{k+1})]
2  | 0.015   | [-2.22e-16, 2.80e-4] | [-2.67e-4, 8.01e-2]
4  | 0.027   | [-5.55e-17, 1.40e-3] | [-1.91e-4, 2.0e-2]
10 | 0.037   | [-7.30e-17, 7.43e-4] | [-7.48e-5, 3.20e-4]
20 | 6.67e-4 | [-7.22e-17, 7.35e-4] | [-3.18e-5, 8.10e-4]
30 | 2.85e-4 | [-1.11e-16, 7.07e-4] | [-1.44e-5, 7.08e-4]
40 | 1.58e-4 | [-6.77e-17, 6.94e-4] | [-6.08e-6, 6.96e-4]
50 | 7.42e-5 | [-7.31e-17, 6.89e-4] | [-2.88e-6, 6.89e-4]

Table 5.2: Convergence of the QP-iteration with non-disturbed data, μ = 0.1.

If the data is disturbed as in the symmetric method, i.e. by adding 0.1 at lattice point 73, the algorithm seems to converge for some values of μ, and
starts to oscillate for others, see figure 5.8. The relation between convergence and the value of the parameter μ is hard to pin down; μ does not affect the convergence in a direct way. But when the iteration converges, the solution fulfills the Monge-Ampere constraints quite well, see figure 5.7, and also fits the data sufficiently well.
Figure 5.7: The Monge-Ampere constraint for μ = 1 after 50 iterations of the QP-iteration, with disturbed data, 0.1 in point 73.
Figure 5.8: Convergence of the QP-iteration with disturbed data (added 0.1 in lattice point 73).
5.2 Convergence of Sequential QP

Here we make the same tests, where the 49th lattice point is disturbed by adding 0.3, to make the data set non-convex somewhere in the middle. We also use the normal solution of the objective function as start vector, as in the QP-iteration case. The results are displayed in figure 5.9. The convergence is somewhat erratic. The big difference in the convergence behaviour for different values of μ is probably caused by the difference in the start vector: in these tests, the bigger the value of μ, the "more" convex the normal solution, i.e. the start vector. In fact, using another starting point than the normal
Figure 5.9: Convergence of the Sequential QP for different values of μ with the normal solution as starting vector. Data disturbed 0.3 in lattice point 49.
solution, we observed another solution, i.e. the method does not converge towards a global optimum but to a local one (cf. figure 5.10 and figure 5.12). This can be explained by the non-convexity of the constraints. This probably also explains why the case with μ = 10⁻⁸ converged (ρ_k = 0) after only a few iterations. In figure 5.10 we can see that the surfaces differ much for different values of μ. For the cases where μ ≤ 0.1 the surfaces seem less likely to be a global optimum of the problem, for the reason that the surfaces have negative parts, which the sampled data does not have. Here the critical part is not to get the Monge-Ampere constraints fulfilled, because given that the previous iterate x^k fulfills the constraints, the next iterate also fulfills them up to an approximation error of second order, i.e. of O(‖d^k‖₂²), where d^k is the current step. Despite the fact that the normal solution, which is the start vector here, does not necessarily
CHAPTER 5. TESTS AND CONCLUSIONS µ=10
µ=1
0.4
0.4
0.2
0.2
0 0.5
0.5
0
26
0 0.5
0.5
0
0
0
−0.5 −0.5
−0.5 −0.5
µ=0.1
µ=0.01
0.5
0 −1
0 −2 −0.5 0.5
0.5
0
0 −0.5 −0.5
−3 0.5
0.5
0
0 −0.5 −0.5
Figure 5.10: Approximated surfaces using Sequential QP and the normal solution as starting vector
fulfill the Monge-Ampere constraints, the final results, i.e. after 50 iterations, fulfill the Monge-Ampere and the Laplacian constraints. If we instead of the normal solution take the zero vector as starting vector (remembering that the boundary data is all positive), we get a convex starting vector. This means that the algorithm does not have to spend a lot of iterations finding a feasible vector. The result, displayed in figure 5.11, shows that the convergence then does not depend much on the value of μ. The resulting surfaces, displayed in figure 5.12, also show that the method converges towards a solution that is more reasonable than when we used the normal solution as start vector. These surfaces fulfill the Monge-Ampere constraints and the Laplacian constraints. They also seem to fulfill the linear constraints proposed in [6] (up to the truncation error, as discussed in Chapter 2).
Figure 5.11: Convergence of the Sequential QP for different values of μ with the zero vector as starting vector. Data disturbed 0.3 in lattice point 49.
Figure 5.12: Approximated surfaces using Sequential QP and the zero vector as starting vector. Data disturbed 0.3 in lattice point 49 (panels for μ = 10, 1, 0.1, 0.01).
5.3 Scott's constraints

As we include more and more of the linear constraints given by [6] in the optimization problem (4.2), we get a resulting surface that is closer to convexity. But this holds only up to a point, because of the geometry of a square lattice: as the distance between the outer points of a minimal linesegment increases, the closer the segment gets to its neighbouring minimal linesegments, see figure 5.13.
Figure 5.13: All minimal linesegments for the center point in a lattice with 9×9 points.

For a surface with a bigger curvature, the linesegments that are "close" to each other become inactive. This fact is confirmed in figure 5.14, where there are almost no differences between the surfaces constructed with 4, 8 or 12 constraints at every point, i.e. only the 4 first minimal linesegments, based on the eight neighbouring points, are active. This could be a way of reducing the number of constraints in advance, to make the computation feasible for large data sets. If we check the Monge-Ampere constraints on a solution of the problem with linear constraints, we see that they are almost fulfilled, even with far fewer linear constraints than proposed by Scott in [6], see figure 5.15. We can make the following generalization.
Proposition 5.3.1 Assume that the truncation errors in the Monge-Ampere and the Laplacian constraints are zero. Then, if the Monge-Ampere and the Laplacian constraints are fulfilled at all lattice points, so are the linear constraints as given in [6], and vice versa.

Proof. If the Monge-Ampere and the Laplacian constraints are fulfilled, then there exists a convex interpolating function. Further, there exists by the definition of convexity a triangulation of the points in the (x,y)-plane with a corresponding
Figure 5.14: Approximated surfaces using the linear constraints, μ = 10, data disturbed 0.3 in lattice point 73 (panels with 2, 4, 8 and 12 constraints at every point).
piecewise planar function which is convex, i.e. fulfills the linear constraints given by [6]. If on the other hand the linear constraints are fulfilled, there exists a convex interpolating function f ∈ C¹(Ω), see [1] Lemma 3.1, i.e.
Figure 5.15: The Monge-Ampere constraints for a solution with 12 linear constraints. Data disturbed 0.1 in lattice point 73 and μ = 10.
the Monge-Ampere and the Laplacian constraints are fulfilled.

Unfortunately, we have the truncation errors in the Monge-Ampere and the Laplacian constraints. This makes the Monge-Ampere and Laplacian constraints less reliable, especially when the lattice spacing increases. Another disadvantage of the Monge-Ampere constraints is that in the generalization to higher dimensions (Ω ⊆ R^m, m > 2) the corresponding constraints are polynomials in the partial derivatives, of degree equal to the dimension m. The generalization of the linear constraints proposed by Scott, on the other hand, remains linear, but their number grows with the dimension (i.e. more constraints at every point).
5.4 Factorized Monge-Ampere

The factorized Monge-Ampere constraints of course guarantee that the Monge-Ampere and the Laplacian constraints are fulfilled, but since the factorized Monge-Ampere constraints form a feasible set that is smaller than the original one, it is interesting to see that the standard routine for quadratic programs in Matlab (qp) always finds a feasible solution, for all values of μ > 0. Figure 5.16
Figure 5.16: Constructed surfaces using the factorized Monge-Ampere constraints with different values of μ (panels: the data function z = |sin(x² + y²) − 0.2| and solutions for μ = 10⁻², 1, 10).
shows the constructed surfaces for different values of μ. The data is taken from u = |sin(x² + y²) − 0.2| at every third point of a lattice with spacing h = 0.1. The reason for sampling denser here (every third point instead of every eighth) is that the problem is then harder to solve, due to the non-convexity of u. An interesting feature here is that we get the biggest difference in the solution when we vary the parameter μ around 1. If the value of μ is increased to 10⁴ in figure 5.16, there is no visual difference in the solution compared with the one for μ = 10². The normalized squared 2-norm between the solutions for μ = 10², v₂, and μ = 10⁴, v₄, is ‖v₂ − v₄‖₂²/‖v₂‖₂² ≈ 4·10⁻⁴; the corresponding number for μ = 1 and μ = 10² is 0.12, and practically the same holds for negative powers of μ. The surfaces in figure 5.16 also seem to fulfill the linear constraints proposed by [6], which supports proposition 5.3.1. But since there are truncation errors, there is no guarantee even for a lattice with 3×3 points that fulfillment of the factorized Monge-Ampere constraints implies that the
linear constraints proposed by [6] are fulfilled; consider e.g. the non-convex surface

$\begin{pmatrix} 0 & 2 & 0 \\ 2 & 1 & 2 \\ 0 & 2 & 0 \end{pmatrix}.$
The center point does not fulfill the linear constraints proposed by [6] (along the diagonals), but all four factorized Monge-Ampere constraints are fulfilled there. If we compare the surfaces generated with the Sequential QP algorithm, i.e. the original problem with both the Monge-Ampere and the Laplacian constraints, and the surfaces generated with the quadratic program and the factorized Monge-Ampere constraints, see figure 5.17, we can see differences. Here we also computed the number ‖Ev − z‖₂² and obtained 0.18 and 0.22, respectively.
Figure 5.17: Comparison of the factorized Monge-Ampere constraints and the original problem solved with Sequential QP (left: factorized Monge-Ampere, right: Sequential QP; μ = 10). Data disturbed 0.3 in lattice point 49.

Finally, we consider the following experiment. We took as start vector in the Sequential QP algorithm the left surface in figure 5.17, i.e. the solution of the problem with the factorized Monge-Ampere constraints. We then observed very little difference between the computed solution and the initial one. Also, the result was quite different from the right surface in figure 5.17. We may conclude, as above, that we may not have computed the global solution to our optimization problem. It would have been interesting to extend the setup of this report to also include methods for global optimization.
Bibliography

[1] Andersson, L.-E., Elfving, T., Iliev, G. and Vlachkova, K.: Interpolation of convex scattered data in R³ based upon a convex minimum network. Journal of Approximation Theory 80, pp. 299-320 (1995).
[2] Dahlquist, G., Björck, Å.: Numerical Methods, 2nd edition (working copy, July 30, 1996), Department of Mathematics, Linköping University.
[3] Lindberg, P.O.: Optimeringslära, en introduktion [Optimization theory, an introduction], Department of Mathematics, Linköping University.
[4] Smith, G.D.: Numerical Solution of Partial Differential Equations: Finite Difference Methods, Oxford University Press.
[5] Bertsekas, D.P.: Nonlinear Programming, Athena Scientific, ISBN 1-886529-14-0 (1995).
[6] Scott, D.S.: The complexity of interpolating given data in three space with a convex function of two variables, Journal of Approximation Theory 42, pp. 52-63 (1984).
[7] Anstreicher, K., Chen, X., Wolkowicz, H., Yuan, Y.: Strong duality for a trust-region type relaxation of the quadratic assignment problem, University of Waterloo, Ontario, Canada, Research Report CORR 98-31 (1998).
[8] Johnson, C.: Numerical Solution of Partial Differential Equations by the Finite Element Method, Studentlitteratur, Lund, Sweden (1987).
[9] Luenberger, D.G.: Introduction to Linear and Nonlinear Programming, Second Edition, Addison-Wesley (1984).
[10] Neamtu, M.: On approximation and interpolation of convex functions, Approximation Theory, Spline Functions and Applications, pp. 411-418 (1992).