Quasi-interpolation for Data Fitting by the Radial Basis Functions

Xuli Han and Muzhou Hou

School of Mathematical Sciences and Computing Technology, Central South University, 410083 Changsha, China
[email protected]
Abstract. Quasi-interpolation by radial basis functions is discussed in this paper. We construct an approximate interpolant with the Gaussian function. A suitable value of the shape parameter is suggested. The given approximate interpolants can approximately interpolate, with arbitrary precision, any set of distinct data in one or several dimensions. They approximate the corresponding exact interpolants with the same radial basis functions. The method is simple and does not require solving a linear system. Numerical examples show that the given method is effective.

Keywords: interpolation, approximation, radial basis function.
1 Introduction
Consider a set of ordered pairs (x_i, f_i), i = 0, 1, ..., n, where x_i ∈ R^d and f_i ∈ R. A radial basis function is a function φ_i(x) = φ(a‖x − x_i‖), which depends on a shape parameter a and the distance between x ∈ R^d and a fixed point x_i ∈ R^d. Each function φ_i is radially symmetric about the center x_i. The interpolating RBF approximation is

F(x) = Σ_{i=0}^{n} c_i φ_i(x),    (1)
where the expansion coefficients c_i are chosen so that F(x_i) = f_i, i = 0, 1, ..., n. That is, they are obtained by solving a linear system. For the RBF considered in this paper, φ(t) = e^{−t²}, the interpolation matrix can be shown to be invertible for distinct points [1,2]. Although the interpolation matrix is invertible, the linear system may often be very ill-conditioned, and it may be impossible to solve it accurately using standard floating point arithmetic. The condition number
of the interpolation matrix is influenced by the number of centers, the minimum separation distance of the centers, and the values of the parameters. The shape parameter affects both the accuracy of the approximation and the conditioning of the interpolation matrix. Many researchers (e.g., [3,4]) have attempted to develop algorithms for selecting optimal values of the shape parameter, but the optimal choice of the shape parameter is still an open question. Several different strategies, see [5], have been somewhat successful in reducing the ill-conditioning problem when using RBF methods for PDE problems. The strategies include: variable shape parameters, domain decomposition, preconditioning the interpolation matrix, and optimizing the center locations. Often, more than one of these strategies is used together.

When the number of samples for fitting functions is large, interpolation methods present the typical drawbacks of global methods, since each interpolated value is influenced by all the data. Moreover, the numerical condition of the interpolation matrix depends heavily on the data density and the smoothness of the radial basis functions. This leads to unstable solutions or unacceptable computational costs, see [6]. Compactly supported positive definite radial basis functions have been introduced to overcome these problems and to provide local methods, see [7,8]. Among local methods, the method given in [9,10] seems to match both requirements of efficiency and reproduction quality.

Quasi-interpolation, an approximation method, has some advantages, such as lower computational cost. In [11], [12] and [13], methods are discussed for approximating a function with a neural network whose activation function is sigmoidal. In [14], univariate quasi-interpolants are discussed. In this paper we give a method for approximate interpolation. We present a constructive method for obtaining a family of approximate interpolants.
2 Quasi-interpolants
A quasi-interpolant is constructed by letting the c_i in (1) be given finite linear combinations of the f_j. We consider a simple quasi-interpolant of the form

s(x) := Σ_{i=0}^{n} f_i ϕ_i(x),    (2)

where

ϕ_i(x) = φ(a_i‖x − x_i‖) / Σ_{j=0}^{n} φ(a_j‖x − x_j‖),    a_i = √λ / min_{k≠i} ‖x_k − x_i‖.
Here, we denote by λ a shape parameter.

Remark. If we consider g(x) = f_{i−1} φ_{i−1}(x) + f_i φ_i(x) + f_{i+1} φ_{i+1}(x), with x_{i+1} − x_i = x_i − x_{i−1} = h > 0, then g′(x_i) = (f_{i+1} − f_{i−1})/(2h) when e^λ = 4λ. Indeed, a short computation with a = √λ/h gives g′(x_i) = (f_{i+1} − f_{i−1})(2λ/h)e^{−λ}, which equals the central difference (f_{i+1} − f_{i−1})/(2h) exactly when 4λe^{−λ} = 1, i.e. e^λ = 4λ. This equation has the solutions λ ≈ 2.153292 and λ ≈ 0.357403. Therefore, for the shape-preserving property, we may choose λ ≈ 2 for suitable shape properties of the approximation.
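For illustration, a minimal Python sketch of the quasi-interpolant (2) with the Gaussian φ(t) = e^{−t²} is given below; the function and variable names (quasi_interpolant, centers, values, lam) are ours and not part of the paper. Note that no linear system is solved: s(x) is evaluated directly from the data.

```python
# A minimal sketch of the quasi-interpolant (2) with phi(t) = exp(-t^2).
# Names such as quasi_interpolant, centers, values and lam are illustrative only.
import numpy as np

def quasi_interpolant(x, centers, values, lam=2.0):
    """Evaluate s(x) = sum_i f_i * varphi_i(x) at the points x.

    x       : (m, d) array of evaluation points (1-D arrays are treated as points in R^1)
    centers : (n+1, d) array of nodes x_i
    values  : (n+1,) array of data f_i
    lam     : shape parameter lambda, with a_i = sqrt(lam) / min_{k != i} ||x_k - x_i||
    """
    x = np.asarray(x, dtype=float)
    centers = np.asarray(centers, dtype=float)
    values = np.asarray(values, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    if centers.ndim == 1:
        centers = centers[:, None]
    # separation distance of each node and the resulting shape parameters a_i
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    a = np.sqrt(lam) / d.min(axis=1)
    # Gaussian basis phi(a_i ||x - x_i||) and its normalization varphi_i(x)
    r = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    phi = np.exp(-(a[None, :] * r) ** 2)
    varphi = phi / phi.sum(axis=1, keepdims=True)
    return varphi @ values
```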
3 Approximability
Corresponding to the quasi-interpolant (2), we consider the interpolant

s_0(x) := Σ_{i=0}^{n} c_i ϕ_i(x),    (3)

where the expansion coefficients c_i are chosen so that

s_0(x_i) = f_i,    i = 0, 1, ..., n.    (4)
The system (4) can be written as an (n + 1) × (n + 1) linear system in vector form M c = f, where c = (c_0, c_1, ..., c_n)^T, f = (f_0, f_1, ..., f_n)^T, and the elements of the interpolation matrix are m_ij = ϕ_j(x_i), i, j = 0, 1, ..., n. If the matrix M is invertible, then the coefficients are obtained by solving the linear system.

Theorem 1. If λ > ln(n), then the matrix M is invertible.

Proof. The matrix can be written as M = D^{−1} B, where

D = diag( Σ_{j=0}^{n} b_{0,j}, Σ_{j=0}^{n} b_{1,j}, ..., Σ_{j=0}^{n} b_{n,j} ).

The entries of the matrix B are

b_ij = φ(a_j‖x_i − x_j‖) = e^{−a_j²‖x_i − x_j‖²}.

From this we obtain b_ii = 1 and b_ij ≤ e^{−λ} (i ≠ j). Thus

Σ_{j≠i} |b_ij| = Σ_{j≠i} b_ij ≤ n e^{−λ}.

If λ > ln(n), then

Σ_{j≠i} |b_ij| < |b_ii|,    i = 0, 1, ..., n.

This implies that the matrix B is strictly diagonally dominant, and hence the matrix M is invertible.

The estimate in Theorem 1 is conservative. In particular, we have the following result.

Theorem 2. Let x_i ∈ R with x_{i+1} − x_i = h for all nodes. If λ ≥ ln(3), then the matrix M is invertible.

Proof. For the entries of the matrix B, we have

Σ_{j≠i} |b_ij| = Σ_{j≠i} e^{−λ(i−j)²} < e^{−λ}(2 − e^{−λi} − e^{−λ(n−i)}) / (1 − e^{−λ}).
Therefore, if λ ≥ ln(3), then

Σ_{j≠i} |b_ij| < 1 = b_ii.

This implies the theorem.
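As a small numerical illustration of Theorem 1, the following sketch checks that B is strictly diagonally dominant when λ > ln(n); the helper name gram_matrix_B and the random test nodes are ours, not from the paper.

```python
# Numerical check of Theorem 1: for lambda > ln(n) the matrix B with entries
# b_ij = exp(-(a_j ||x_i - x_j||)^2) is strictly diagonally dominant.
import numpy as np

def gram_matrix_B(centers, lam):
    centers = np.asarray(centers, dtype=float)
    if centers.ndim == 1:
        centers = centers[:, None]
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    sep = np.where(np.eye(len(centers), dtype=bool), np.inf, d).min(axis=1)
    a = np.sqrt(lam) / sep
    return np.exp(-(a[None, :] * d) ** 2)

rng = np.random.default_rng(0)
nodes = np.sort(rng.uniform(0.0, 6.0, size=11))   # n = 10 distinct test nodes
lam = 1.5 * np.log(10)                            # lambda > ln(n)
B = gram_matrix_B(nodes, lam)
row_off = np.abs(B).sum(axis=1) - np.abs(np.diag(B))
print(np.all(row_off < np.abs(np.diag(B))))       # expected: True
```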
Since ln(3) ≈ 1.098612, we may choose λ ≈ 2 to balance the two factors of approximability and the shape-preserving property.

We note that s_0(x) and s(x) differ only in the expansion coefficients, and that s_0(x_i) = f_i for i = 0, 1, ..., n. We will prove that s(x) is an approximate interpolant that is arbitrarily close to the corresponding s_0(x) as the parameter λ increases.

Theorem 3. If λ > ln(n), then

|s(x) − s_0(x)| ≤ 2n e^{−λ} / (1 − n e^{−λ}) ‖f‖_∞.    (5)

Proof. For λ > ln(n), the matrix M is invertible. We can obtain c by solving the linear system (4). Thus, we have c − f = (I − M)c = (I − M)(c − f) + (I − M)f, where I is the identity matrix. Therefore

‖c − f‖_∞ ≤ ‖I − M‖_∞ / (1 − ‖I − M‖_∞) ‖f‖_∞.

From (4), we have

‖I − M‖_∞ = max_{0≤i≤n} 2(1 − 1/Σ_{j=0}^{n} b_ij) = max_{0≤i≤n} 2 Σ_{j≠i} b_ij / (1 + Σ_{j≠i} b_ij) ≤ 2n e^{−λ} / (1 + n e^{−λ}).

For λ > ln(n), we have

2n e^{−λ} / (1 + n e^{−λ}) < 1.

Then, considering that the function g(t) = t/(1 − t) is strictly increasing on (−∞, 1), it follows that

‖c − f‖_∞ ≤ 2n e^{−λ} / (1 − n e^{−λ}) ‖f‖_∞.

On the other hand,

|s(x) − s_0(x)| = |Σ_{i=0}^{n} (c_i − f_i) ϕ_i(x)| ≤ ‖c − f‖_∞ Σ_{i=0}^{n} ϕ_i(x) = ‖c − f‖_∞.
From this we obtain (5) immediately.
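As a rough numerical check of the bound (5), the sketch below builds the exact interpolant s_0 by solving M c = f and compares it with s on a fine grid; it reuses the quasi_interpolant and gram_matrix_B sketches given above, and the test data are arbitrary.

```python
# Rough check of (5): compare s with the exact interpolant s_0 obtained from M c = f.
# Uses the quasi_interpolant and gram_matrix_B sketches above; the data are test values only.
import numpy as np

nodes = np.linspace(0.0, 6.0, 11)                    # n = 10
f = np.exp(-(nodes - 3.0) ** 2) * np.sin(nodes)
lam = 5.0                                            # lambda > ln(10) ~ 2.303

B = gram_matrix_B(nodes, lam)
M = B / B.sum(axis=1, keepdims=True)                 # m_ij = varphi_j(x_i)
c = np.linalg.solve(M, f)                            # coefficients of s_0

xs = np.linspace(0.0, 6.0, 601)
s = quasi_interpolant(xs, nodes, f, lam)             # quasi-interpolant (2)
s0 = quasi_interpolant(xs, nodes, c, lam)            # s_0: same basis, coefficients c
gap = np.max(np.abs(s - s0))

n = len(nodes) - 1
bound = 2 * n * np.exp(-lam) / (1 - n * np.exp(-lam)) * np.max(np.abs(f))
print(gap <= bound)                                  # expected: True
```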
4 Numerical Examples and Conclusions
Example 1. We consider the approximate interpolation corresponding to the function f(x) = e^{−(x−3)²} sin(x). Figure 1 shows the curves (dashed lines) of the interpolant (2) with dimension d = 1 and λ = 2.

Fig. 1. Approximate interpolation curves with n = 10 (left) and n = 20 (right)
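A possible way to reproduce the setting of Example 1 with the quasi_interpolant sketch from Section 2 is shown below; the paper does not state the node distribution explicitly, so the equally spaced nodes on [0, 6] are an assumption.

```python
# Possible reproduction of Example 1 (equally spaced nodes on [0, 6] are an assumption).
import numpy as np

f = lambda x: np.exp(-(x - 3.0) ** 2) * np.sin(x)
nodes = np.linspace(0.0, 6.0, 11)                    # n = 10; use 21 nodes for n = 20
xs = np.linspace(0.0, 6.0, 301)
s = quasi_interpolant(xs, nodes, f(nodes), lam=2.0)  # sketch from Section 2
print(np.max(np.abs(s - f(xs))))                     # maximal deviation from f on [0, 6]
```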
Example 2. We consider the approximate interpolant (2) corresponding to the function

f(x, y) = sin(√(x² + y²)) / √(x² + y²)    (6)

on the 64 points S = {(i, j) : −8 ≤ i ≤ 8, −8 ≤ j ≤ 8}. We have set error = max_{x∈S} |f(x) − s(x)|. The results are given in Table 1. Figure 2 shows the different surfaces of the approximate interpolant for the function (6) with λ = 2 (left) and λ = 5 (right).

Table 1. The approximate errors for the function (6)

λ       1        2        6        12       15
error   0.4322   0.3030   0.0660   0.0037   8.2187 × 10⁻⁴
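A sketch of the error computation in Example 2 is given below; it reuses the quasi_interpolant sketch from Section 2. Since the paper does not spell out the 64-point grid S, the uniform 8 × 8 grid on [−8, 8]² is an assumption, and the printed errors need not match Table 1 exactly.

```python
# Sketch of the error computation for (6); the 8 x 8 grid S is an assumption.
import numpy as np

def f6(p):
    r = np.linalg.norm(p, axis=-1)
    return np.where(r > 0, np.sin(r) / np.maximum(r, 1e-12), 1.0)  # f6(0, 0) = 1

axis = np.linspace(-8.0, 8.0, 8)
grid = np.array([(u, v) for u in axis for v in axis])   # one reading of the 64-point set S
values = f6(grid)
for lam in (1, 2, 6, 12, 15):
    s = quasi_interpolant(grid, grid, values, lam=lam)  # sketch from Section 2
    print(lam, np.max(np.abs(s - values)))              # error = max_{x in S} |f(x) - s(x)|
```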
Example 3. We consider the approximate interpolant (2) corresponding to the function

f(x, y) = y e^{−x²−y²}    (7)

on the 64 points S = {(i, j) : −3 ≤ i ≤ 3, −3 ≤ j ≤ 3}. The results are given in Table 2. Figure 3 shows the different surfaces of the approximate interpolant for the function (7) with λ = 2 (left) and λ = 5 (right).

The numerical examples above show that the given method is effective. With the Gaussian function, the given approximate interpolant can approximately interpolate, with arbitrary precision, any set of distinct data in one or several dimensions without solving a linear system. It can approximate the corresponding exact interpolant with the same radial basis functions. A suitable value of the shape parameter is suggested. We give a rigorous bound on the interpolation error.
Fig. 2. Approximate interpolation surfaces for the function (6)

Table 2. The approximate errors for the function (7)

λ       1        2        5        10       15
error   0.2358   0.1765   0.0619   0.0061   5.0486 × 10⁻⁴
Fig. 3. Approximate interpolation surfaces for the function (7)
For the two factors of approximability and the shape-preserving property, a suitable value of the shape parameter is suggested. Compared with [13], the given approximant has a simple explicit expression and is effective for multivariate functions.
References

1. Micchelli, C.A.: Interpolation of scattered data: Distance matrices and conditionally positive definite functions. Constr. Approx. 2, 11–22 (1986)
2. Wendland, H.: Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree. Adv. Comput. Math. 4, 389–395 (1995)
3. Carlson, R.E., Foley, T.A.: The parameter in multiquadric interpolation. Comput. Math. Appl. 21, 29–42 (1991)
4. Rippa, S.: An algorithm for selecting a good parameter c in radial basis function interpolation. Adv. Comput. Math. 11, 193–210 (1999)
5. Kansa, E.J., Hon, Y.C.: Circumventing the ill-conditioning problem with multiquadric radial basis functions: Applications to elliptic partial differential equations. Comput. Math. Appl. 39, 123–137 (2000)
6. Schaback, R.: Creating surfaces from scattered data using radial basis functions. In: Dæhlen, M., Lyche, T., Schumaker, L. (eds.) Mathematical Methods for Curves and Surfaces, pp. 477–496. Vanderbilt University Press, Nashville (1995)
7. Wu, Z.: Multivariate compactly supported positive definite radial functions. Adv. Comput. Math. 4, 283–292 (1995)
8. Wendland, H.: Piecewise polynomial, positive definite and compactly supported radial basis functions of minimal degree. Adv. Comput. Math. 4, 359–396 (1995)
9. Lazzaro, D., Montefusco, L.B.: Radial basis functions for the multivariate interpolation of large scattered data sets. J. Comput. Appl. Math. 140, 521–536 (2002)
10. Davydov, O., Morandi, R., Sestini, A.: Local hybrid approximation for scattered data fitting with bivariate splines. Computer Aided Geometric Design 23, 703–721 (2006)
11. Debao, C.: Degree of approximation by superpositions of a sigmoidal function. Approx. Theory & its Appl. 9, 17–28 (1993)
12. Mhaskar, H.N., Micchelli, C.A.: Approximation by superposition of sigmoidal and radial basis functions. Adv. Appl. Math. 13, 350–373 (1992)
13. Llanas, B., Sainz, F.J.: Constructive approximate interpolation by neural networks. J. Comput. Appl. Math. 188, 283–308 (2006)
14. Zhang, W., Wu, Z.: Shape-preserving MQ-B-Splines quasi-interpolation. In: Proceedings of Geometric Modeling and Processing, pp. 85–92. IEEE Computer Society Press, Los Alamitos (2004)