COMPUTATIONAL MECHANICS. New Trends and Applications
S. Idelsohn, E. Oñate and E. Dvorkin (Eds.)
©CIMNE, Barcelona, Spain 1998

APPROXIMATION OF FUNCTIONS IN A METHOD OF FINITE POINTS

Juan Jose Benito(1), Luis Gavete(2), Angel Buceta(1), Santiago Falcón(3)

(1) Escuela Tecnica Superior de Ingenieros Industriales, U.N.E.D., Apdo. Correos 60149, 28080 Madrid, Spain. e-mail: [email protected]
(2) Escuela Tecnica Superior de Ingenieros de Minas, Universidad Politécnica, c/ Rios Rosas 21, 28003 Madrid, Spain. e-mail: [email protected]
(3) Union Electrica Fenosa S.A., Madrid, Spain

Key words: finite point method, moving least squares, error indicator

Abstract. The main idea of these methods is to replace the FEM interpolation by a local weighted least squares fitting. It is possible to disconnect the number of nodes from the number of approximation parameters because the least squares fitting replaces the standard FEM interpolation. The approximate function becomes smooth by using continuous weighting functions. To preserve the local character of the approximation it is necessary to choose weighting functions that vanish beyond a certain distance from the point. In this communication an a posteriori error indicator is used to try to distribute the error uniformly over the domain. This procedure allows us to isolate the domain areas with the worst behaviour and refine only those small areas. It is possible to eliminate the elements, which provides very easy preprocessing and very simple adaptivity and mesh refinement. A parametric study of weighting functions is also included.


1 INTRODUCTION

The error made when interpolating a function f(x) by means of a polynomial p(x) is given by e(x) = f(x) - p(x). If we have n+1 distinct points belonging to the interval [a, b] and f(x) is defined and has n+1 continuous derivatives there, then a value ξ exists inside the interval [a, b] such that

e(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=0}^{n} (x - x_i)        (1)

This error formula is of limited practical use because of the information it requires, and the interpolation error term is therefore difficult to apply. To increase the accuracy of the interpolation it would seem a priori convenient to increase its degree, which can lead to an interpolation over many points. The problem this originates is that polynomials of high degree have an oscillatory nature, so they are not acceptable. One idea to avoid such problems is to partition the interval and perform a piecewise interpolation with different polynomials of the same or different degree. The inconvenience is that, although one obtains a good approximation of the function, there is no continuity of the derivatives. To avoid this, Hermite-type interpolations can be used, that is to say, taking as data the value of the function and of the first derivative at each interface. Although the idea is good, in practice it requires too much information. Since, in general, data are known for the function but not for its derivatives, methods have been developed that do not require information on the derivatives, except occasionally at the extreme points of the interval, as is the case of spline interpolation.

Another possible solution to the problem is to use approximation instead of interpolation. We can find the polynomial p for which the functional E(p), the sum of the quadratic deviations (p(xi) - fi)^2, is minimum:

\frac{\partial E}{\partial a_j} = \sum_{i=0}^{N} 2\left[p(x_i) - f_i\right] \frac{\partial p(x_i)}{\partial a_j} = 2 \sum_{i=0}^{N} x_i^{j} \left[p(x_i) - f_i\right] =

= 2\left[ \sum_{i=0}^{N} x_i^{j} (a_0 + a_1 x_i + \dots + a_m x_i^{m}) - \sum_{i=0}^{N} x_i^{j} f_i \right] =

= 2\left[ \left(\sum_{i=0}^{N} x_i^{j}\right) a_0 + \left(\sum_{i=0}^{N} x_i^{j+1}\right) a_1 + \dots + \left(\sum_{i=0}^{N} x_i^{j+m}\right) a_m - \sum_{i=0}^{N} x_i^{j} f_i \right] = 0        (2)

which is equivalent to a system of linear equations that always has a unique solution when the support points xi are distinct [1]. In the standard method of least squares, a set of constant weights wi can be considered to differentiate the importance of each point of the support. In this case the functional is


E(p) = \sum_{i=0}^{N} \big(p(x_i) - f_i\big)^2 w_i        (3)

A variant of great interest of the least squares method just exposed arises when more local than global information is needed at certain points. It seems logical that the value at the point x is more influenced by the values fi corresponding to neighbouring points than by those far away. A way of introducing this idea into the formulation is to associate to the deviations some weighting functions wi(x) that diminish the influence of the values fi as the distance increases

E(p) = \sum_{i=0}^{N} w_i(x) \big[p(x_i) - f_i\big]^2        (4)

where it is assumed that wi(x) is positive, large for the points xi near x, and relatively small or even zero for the more distant points xi. With this in mind, p(x) in the above equation will a priori be a polynomial

p(x) = \sum_{i=0}^{m} a_i x^i        (5)

The normal equations are obtained in the same way, considering the (m+1) necessary conditions

\left(\sum_{i=0}^{N} w_i x_i^{0}\right) a_0 + \left(\sum_{i=0}^{N} w_i x_i^{1}\right) a_1 + \dots + \left(\sum_{i=0}^{N} w_i x_i^{m}\right) a_m = \sum_{i=0}^{N} w_i f_i

\left(\sum_{i=0}^{N} w_i x_i^{1}\right) a_0 + \left(\sum_{i=0}^{N} w_i x_i^{2}\right) a_1 + \dots + \left(\sum_{i=0}^{N} w_i x_i^{m+1}\right) a_m = \sum_{i=0}^{N} w_i x_i f_i        (6)

\vdots

\left(\sum_{i=0}^{N} w_i x_i^{m}\right) a_0 + \left(\sum_{i=0}^{N} w_i x_i^{m+1}\right) a_1 + \dots + \left(\sum_{i=0}^{N} w_i x_i^{2m}\right) a_m = \sum_{i=0}^{N} w_i x_i^{m} f_i

where wi(x) has been abbreviated as wi. It must be pointed out that uniqueness of the solution is guaranteed and that the function associated with the fitting curve is in this case

g(x) = p(x)        (7)

the coefficients ai obtained being functions of x through the weighting functions wi. This function g(x) will not be, in general, a polynomial. The biggest inconvenience is that for the evaluation of g(x) it is necessary to solve a system of equations for each x; this is the reason why this process is not usually applied with polynomials of high degree.
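As an illustration of equations (4)-(7), the following Python sketch evaluates a one-dimensional moving least squares fit g(x). It is a minimal example under stated assumptions, not the authors' code: the quartic spline weight, the support size dm and the node layout are choices made only for the demonstration.

import numpy as np

def weight(r, dm):
    """Quartic spline weight: positive near r = 0, zero for r >= dm (assumed choice)."""
    s = np.abs(r) / dm
    w = 1.0 - 6.0 * s**2 + 8.0 * s**3 - 3.0 * s**4
    return np.where(s < 1.0, w, 0.0)

def mls_fit_1d(x, xi, fi, m=2, dm=0.5):
    """Evaluate g(x): solve the weighted normal equations (6) at the point x."""
    w = weight(x - xi, dm)                      # w_i(x), as in eq. (4)
    P = np.vander(xi, m + 1, increasing=True)   # rows [1, x_i, ..., x_i^m], eq. (5)
    A = P.T @ (w[:, None] * P)                  # (m+1) x (m+1) weighted moment matrix
    b = P.T @ (w * fi)                          # right-hand side of eq. (6)
    a = np.linalg.solve(A, b)                   # coefficients a_i, which depend on x
    return np.polyval(a[::-1], x)               # g(x) = p(x), eq. (7)

# Usage: fit scattered samples of sin(x) and evaluate g at a few points.
xi = np.linspace(0.0, 2.0, 11)
fi = np.sin(xi)
for x in (0.3, 1.0, 1.7):
    print(x, mls_fit_1d(x, xi, fi), np.sin(x))

Note that the linear system is re-solved for every evaluation point x, which is exactly the cost mentioned above for the evaluation of g(x).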


It is important to indicate two properties of interest of the function g(x). In the first place, g(x) has the reproduction property, that is to say, if all the data correspond to a polynomial function and a polynomial p(x) is used, then the function g(x) is also a polynomial. The second property is that g(x) is regular in the sense that it can be differentiated repeatedly (as many times as the weight function w is differentiable). This is, in short, what we are looking for:

a) Continuity of the derivatives of the function.
b) An approximation function that is "open", not restricted to being a polynomial, which introduces a tremendous freedom when adapting to a cloud of points, obtaining not only a global approximation but also one locally adapted to the distribution of points in the cloud.

The problem in finite elements is that, in general, the computed surfaces do not have continuous derivatives across the sides of the elements, only continuity of the function itself (C0 continuity), although in the case of partial differential equations a continuity requirement C^n, n ≥ 1, is often important, depending on the case considered. The least squares method outlined above is very interesting when the data correspond to points distributed randomly in the xy plane. Lancaster and Salkauskas2 have investigated some properties of interest of the surfaces generated by means of moving least squares. Also, as indicated for the case of curve fitting, they have the reproduction property and continuity of the partial derivatives.

2 MOVING LEAST SQUARES

In the method of moving least squares the idea is to replace the piecewise interpolation typical of finite elements by a local least squares fitting. The resulting function is more regular than the FEM one, since the discontinuous coefficients applied to the function minimized in the FEM (unit value inside the element and zero outside it) are replaced by continuous weight functions, which gives C^n continuity, usually with n greater than 1. The value of the field variable u* at a point of the domain is approximated as

u^*(z) \approx u(z) = \sum_{i=1}^{m} P_i(z)\, a_i = \{P(z)\}^T \{a\}        (8)

where z = (x, y), zi = (xi, yi), i = 1, ..., n; {P} is a vector of linearly independent functions and {a} contains the parameters to be determined by the approximation algorithm, that is to say, by minimizing the functional that defines the sum of the weighted quadratic errors

J(a) = \sum_{i=1}^{n} w_i(z, z_i)\,\big(u_i - u(z_i)\big)^2 = \sum_{i=1}^{n} w_i(z, z_i)\,\big(u_i - \{P(z_i)\}^T \{a\}\big)^2 =

= \big(\{u_i\} - [P]\{a\}\big)^T [W(z, z_i)] \big(\{u_i\} - [P]\{a\}\big), \qquad i = 1, \dots, n        (9)


[P] = \begin{bmatrix} \{P(z_1)\}^T \\ \vdots \\ \{P(z_n)\}^T \end{bmatrix}        (10)

[W] = \mathrm{diag}\big(w_1(z), \dots, w_n(z)\big)        (11)

When imposing the minimum condition on the functional, one reaches the relationship

[A]\,\{a\} = [H]\,\{u_i\}        (12)

and therefore

\{a\} = [A]^{-1}\,[H]\,\{u_i\}        (13)

where

[A] = \sum_{i=1}^{n} w(z, z_i)\,\{P(z_i)\}\{P(z_i)\}^T        (14)

[H] = \big[\, w(z, z_1)\{P(z_1)\},\; w(z, z_2)\{P(z_2)\},\; \dots,\; w(z, z_n)\{P(z_n)\} \,\big]        (15)

Taking (13) and (8) into account, the expression that approximates the field variable can be written as

u(z) = \{P(z)\}^T [A]^{-1} [H]\,\{u_i\} = \{\Phi(z)\}^T \{u_i\}        (16)

Keeping in mind the reproduction property, that is to say, the capacity to reproduce exactly any function included in the basis, if the polynomials of degree zero are included we have

\sum_{i=1}^{n} \Phi_i(z) = 1        (17)

which is known as a "partition of unity" in the mathematical literature. The derivatives of the function that approximates the field variable are obtained starting from their definition

u_x = \{P\}_x^T \{a\} + \{P\}^T \{a\}_x        (18)

where the subindices are used to indicate derivatives. If ux is differentiated once again,

u_{xy} = \{P\}_{xy}^T \{a\} + \{P\}_x^T \{a\}_y + \{P\}_y^T \{a\}_x + \{P\}^T \{a\}_{xy}        (19)


Some of these derivatives do not present in general any difficulty, since they are usually simple functions, but the same is not true for the others. Taking (12) into account we have

[A]_x \{a\} + [A]\,\{a\}_x = [H]_x \{u_i\}        (20)

where

[A]_x = \sum_{i=1}^{n} \frac{\partial w(z, z_i)}{\partial x}\,\{P(z_i)\}\{P(z_i)\}^T        (21)

[H]_x = \left[\, \frac{\partial w(z, z_1)}{\partial x}\{P(z_1)\},\; \dots,\; \frac{\partial w(z, z_n)}{\partial x}\{P(z_n)\} \,\right]        (22)

Therefore {a}x is obtained by solving

[A]\,\{a\}_x = [H]_x \{u_i\} - [A]_x \{a\}        (23)

It can be interesting to have the explicit form of the derivatives of the shape functions,

\{\Phi\}_x = \{P\}_x^T [A]^{-1}[H] + \{P\}^T [A]^{-1}[H]_x - \{P\}^T [A]^{-1}[A]_x [A]^{-1}[H]        (24)

Nayroles et al.3 use approximations that ignore some of these derivatives,

\{N\}_x \approx \{P\}_x^T [A]^{-1} [H]        (25)
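As a sketch of equations (12)-(16), the Python fragment below assembles [A] and [H] at an evaluation point and returns the shape functions {Φ(z)}. It is an illustrative implementation under assumed choices (a linear basis {1, x, y} and a truncated Gaussian weight with parameters dm and c), not the authors' code.

import numpy as np

def basis(z):
    """Linear basis {P(z)} = [1, x, y] (an assumed choice of basis)."""
    return np.array([1.0, z[0], z[1]])

def weight(z, zi, dm=1.0, c=0.25):
    """Truncated Gaussian weight w(z, z_i); zero beyond the support radius dm."""
    d = np.linalg.norm(z - zi)
    if d > dm:
        return 0.0
    return (np.exp(-(d / c) ** 2) - np.exp(-(dm / c) ** 2)) / (1.0 - np.exp(-(dm / c) ** 2))

def mls_shape_functions(z, nodes, dm=1.0, c=0.25):
    """Return {Phi(z)} such that u(z) = {Phi(z)}^T {u_i}, following eqs. (14)-(16)."""
    z = np.asarray(z, dtype=float)
    m = basis(z).size
    A = np.zeros((m, m))
    H = np.zeros((m, len(nodes)))
    for i, zi in enumerate(nodes):
        zi = np.asarray(zi, dtype=float)
        w = weight(z, zi, dm, c)
        p = basis(zi)
        A += w * np.outer(p, p)                 # eq. (14)
        H[:, i] = w * p                         # column i of [H], eq. (15)
    return basis(z) @ np.linalg.solve(A, H)     # {P(z)}^T [A]^{-1} [H], eq. (16)

# Usage: shape functions on a small node cloud; they should sum to 1 (eq. 17).
nodes = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5), (0.25, 0.25)]
phi = mls_shape_functions((0.2, 0.3), nodes)
print(phi, phi.sum())

The derivatives of equation (24) could be obtained analogously by differentiating [A] and [H], or approximated as in (25) by differentiating only the basis term.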

The diffuse element method developed by Nayroles et al.3 is a new way of discretizing continuous media. In this method, only a mesh of nodes and a boundary description are needed to develop the Galerkin equations. The approximating functions are polynomials fitted to the nodal values of each local domain by a weighted least squares approximation. Belytschko et al.4,5 developed an alternative implementation using moving least squares approximations as defined by Lancaster and Salkauskas2. Liu et al.6 have recently proposed a different kind of "gridless" multiple-scale methods based on reproducing kernels and wavelet analysis. Oñate et al.7 focused on the application to fluid flow problems with a standard point collocation technique. All these methods can be considered as Finite Point Methods. This paper includes an a posteriori error indicator, in order to minimize the error, and a sensitivity analysis of the various parameters involved. Some of the numerical results given indicate the rate of convergence and the power of the method. Duarte and Oden8 on one hand, and Babuska and Melenk9 on the other, have shown how the so-called meshless methods can be based on the partition of unity. In this line, the first authors have developed a new method that they call h-p clouds, whose idea is in fact the construction of families of functions using the partition of unity, that is to say, they multiply a partition of unity by polynomials or another class of functions. The resulting functions conserve the properties of the partition of unity and their linear


combinations can represent polynomials of the required degree, which makes them interesting for the construction of p-adaptive hierarchical families. A set of functions is called a partition of unity if it has the following properties:

1) \Phi_i^k \in C^\infty, \quad 1 \le i \le n        (26)

2) \sum_{i=1}^{n} \Phi_i^k(\{x\}) = 1 \quad \forall x \in \Omega        (27)

where the order k is the degree of the polynomial up to which one can reproduce by means of linear combinations of the partition of unity. There is not a unique way of building functions that satisfy the conditions of a partition of unity; their choice may depend on whether the problem to be solved is linear or not, on the complexity of the geometry of the domain, on the required regularity (C0, C1, ... etc.), and so on. In this way, for example, in the method of Duarte and Oden8 the partition of unity is formed starting from a moving least squares approximation of order k,

u^n(x) = \sum_{i=1}^{n} \Phi_i^k(x) \left[ u_i + \sum_{j=1}^{m} b_{ji}\, q_j(x) \right]        (28)

where qj(x) is a basis of monomials of order higher than k. If Legendre polynomials are used,

P_p(\xi) = \frac{1}{(p-1)!} \frac{1}{2^{p-1}} \frac{d^p}{d\xi^p}\left[\left(\xi^2 - 1\right)^p\right]        (29)

then the approximation function is

\Phi_{p+1}(\xi) = \int P_p(\xi)\, d\xi = \frac{1}{2^{p-1}(p-1)!}\,\frac{1}{2}\,\frac{d^{p-1}}{d\xi^{p-1}}\left[\left(\xi^2 - 1\right)^p\right]        (30)
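A minimal sketch of the enrichment in equation (28), assuming a set of shape functions Φ_i that form a partition of unity (for example those of the previous sketch) and taking simple monomial enrichment functions q_j; the coefficient array b below is hypothetical data used only to show the evaluation.

import numpy as np

def enriched_approximation(phi, u, b, q_vals):
    """Evaluate u^n(x) = sum_i phi_i(x) * (u_i + sum_j b_ji * q_j(x)), eq. (28)."""
    # phi: (n,) shape functions at x;  u: (n,) nodal values
    # b: (m, n) enrichment coefficients;  q_vals: (m,) enrichment functions at x
    return phi @ (u + b.T @ q_vals)

# Usage with hypothetical data: 5 nodes, quadratic enrichment q = [x^2, x*y, y^2].
phi = np.array([0.1, 0.2, 0.3, 0.25, 0.15])    # assumed to sum to 1
u = np.array([1.0, 1.2, 0.9, 1.1, 1.05])
b = np.zeros((3, 5))                           # zero enrichment recovers sum_i phi_i u_i
x, y = 0.2, 0.3
q_vals = np.array([x**2, x * y, y**2])
print(enriched_approximation(phi, u, b, q_vals))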

One of the biggest problems in the implementation of meshless methods is that the approximation used is not an interpolation, that is to say, the approximation functions do not pass through the nodal values. This creates a difficulty when imposing the essential boundary conditions, which has led to the appearance of different solutions such as, among others, Lagrange multipliers (Belytschko4,5). Another solution consists of forcing the weighting functions to be zero on the boundary where Dirichlet-type conditions are imposed. Although a priori it is an interesting idea, it seems to be less robust than the other ones. According to Krongauz and Belytschko10, the most satisfactory solution is the use of a coupling with finite elements. Another important method to treat essential boundary conditions is given by Mukherjee and Mukherjee11.


3 A POSTERIORI ERROR INDICATOR

An a posteriori error indicator has been developed based on the studies of Ferragut et al.12. The basic idea of this error indicator is to try to distribute the error uniformly over the domain. Each pair of evaluation points is compared in order to estimate the jump in the gradient of the solution. We call error of the pair of evaluation points I and A the expression

\mathrm{error}_{I\text{-}A} = \left(\frac{u_{xA} - u_{xI}}{d(A\text{-}I)}\right)^2 + \left(\frac{u_{yA} - u_{yI}}{d(A\text{-}I)}\right)^2        (31)

where ux is the derivative of u with respect to x, uy the derivative of u with respect to y, and d(A-I) the distance from A to I. The average domain error is

\mathrm{error}_{av} = \frac{\sum_{I} \mathrm{error}_I}{n}        (32)

where n is the total number of pairs of evaluation points. Nodes and evaluation points are added wherever a pair of evaluation points has an error higher than the average domain error. This procedure allows us to isolate the domain areas with the worst behaviour and refine only those small areas. A first advantage is that no complicated refinement algorithms are needed, because there is no element structure (except for numerical integration): it is only necessary to add nodes and evaluation points in the area where they are needed. In the finite element method, several postprocessors are necessary to obtain a smooth field of gradients. In this method, however, complicated postprocessing algorithms are not necessary, because no element structure exists and the solution is already sufficiently regular, with continuity of the derivatives.
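The refinement criterion of equations (31)-(32) can be sketched as follows. This is an illustrative Python fragment under assumed data structures (each pair stores the gradients at its two evaluation points and their distance), not the code used for the examples below.

import numpy as np

def pair_error(grad_I, grad_A, dist):
    """Error of a pair of evaluation points I and A, eq. (31)."""
    gI, gA = np.asarray(grad_I), np.asarray(grad_A)
    return ((gA[0] - gI[0]) / dist) ** 2 + ((gA[1] - gI[1]) / dist) ** 2

def pairs_to_refine(pairs):
    """Select the pairs whose error exceeds the average domain error, eq. (32)."""
    errors = [pair_error(p["grad_I"], p["grad_A"], p["dist"]) for p in pairs]
    error_av = sum(errors) / len(errors)
    return [p for p, e in zip(pairs, errors) if e > error_av]

# Usage with hypothetical pairs: gradients (u_x, u_y) at I and A, and distance d(A-I).
pairs = [
    {"grad_I": (1.0, 0.0), "grad_A": (1.1, 0.0), "dist": 0.5},
    {"grad_I": (1.0, 0.0), "grad_A": (3.0, 2.0), "dist": 0.5},  # large gradient jump
]
print(len(pairs_to_refine(pairs)))

Nodes and evaluation points would then be added in the regions covered by the selected pairs.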


4 NUMERICAL RESULTS

Three examples have been developed in order to check the capabilities of this numerical method.

4.1. Example one: two-dimensional boundary value problem

Let us consider the equation

\Delta u = 2\cos(\pi x)\,\big(4\pi^2 y^4 - 16\pi^2 y^3 + (19\pi^2 - 48)\,y^2 - 6(\pi^2 - 16)\,y - 38\big) \quad \text{for } 0 < x < 2,\; 0 < y < 2

with boundary conditions

u = 0 \quad \text{if } y = 0 \;\vee\; y = 2, \qquad \frac{\partial u}{\partial x} = 0 \quad \text{if } x = 0 \;\vee\; x = 2        (33)

The exact solution is

u = -2\cos(\pi x)\, y\,(y - 2)(2y - 3)(2y - 1)        (34)

Figure 1: Y gradient error for the FPM and the FEM, plotted against the mesh size h (log-log scale)

We obtain better results than with the finite element method, even using formula (25) to calculate the shape function derivatives (see figure 1).

4.2. Example two: two-dimensional boundary value problem with a gradient singularity

Let us consider the equation

-\Delta u = 0 \quad \text{for } 0 < x < 2,\; 0 < y < 2

with boundary condition

u = \rho^{1/2} \sin\frac{\theta}{2} \quad \text{on } \Gamma        (35)


with x = \rho\cos\theta, y = \rho\sin\theta. The exact solution is

u = \rho^{1/2} \sin\frac{\theta}{2}        (36)

The a posteriori error indicator has been used in this problem in order to improve the solution. The areas where the error is higher than the average domain error are shown in figure 2.

Figure 2: Area to be refined

Because of the singularity in the gradients, it is obvious that the most problematic area is the one surrounding the origin. The error indicator shows us where it is necessary to add nodes, so a new mesh has been developed. Figure 3 shows the old and new meshes; developing the new one is as easy as adding nodes at the points indicated by the error indicator. Several runs of this problem have shown that, with one step of refinement as shown in figure 3, the error drops to about 60% of its previous value.


Figure 3: First refinement

4.3. Example three: cantilever beam test

The behaviour of the method was checked with the problem of the cantilever beam, for which Timoshenko and Goodier13 gave the following exact solution for plane stress:

u_x = -\frac{P\,(y - D/2)}{6EI}\,\big[(6L - 3x)\,x + (2 + \nu)\,(y^2 - D y)\big]

u_y = \frac{P}{6EI}\left[3\nu\left(y^2 - D y + \frac{D^2}{4}\right)(L - x) + \frac{(4 + 5\nu)\,D^2 x}{4} + \left(L - \frac{x}{3}\right)3x^2\right]        (37)

In this paper the problem was solved with L = 10, D = 1, E = 1000 and ν = 0.3. The EFG method with linear shape functions and the imposition of essential boundary conditions by Lagrange multipliers was used. Two regular meshes of nodes were considered: a mesh with 21x3 nodes (internodal distance 1) and a mesh with 41x5 nodes (internodal distance 2). The integration space was defined in each case with cells formed by four nodes, and in each cell a 4x4 Gauss quadrature was used.
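As a quick check of the exact solution (37) and of the reference value used below, the following sketch evaluates uy at the free end (x = L, y = 0) with the data of this example. P = 1 is an assumption, since the load value is not stated explicitly, but it reproduces the quoted end deflection of 4.0275.

# Exact Timoshenko cantilever solution (37); P = 1 is assumed here.
L, D, E, nu, P = 10.0, 1.0, 1000.0, 0.3, 1.0
I = D**3 / 12.0   # second moment of area of the unit-width cross section

def u_x(x, y):
    return -P * (y - D / 2) / (6 * E * I) * ((6 * L - 3 * x) * x + (2 + nu) * (y**2 - D * y))

def u_y(x, y):
    return P / (6 * E * I) * (3 * nu * (y**2 - D * y + D**2 / 4) * (L - x)
                              + (4 + 5 * nu) * D**2 * x / 4
                              + (L - x / 3) * 3 * x**2)

print(u_y(L, 0.0))   # 4.0275, the end deflection quoted below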


For both cases, the following two weight functions were tested:

a) Polynomial weight function (spline type):

w_i(d) = 1 - 6\left(\frac{d}{d_m}\right)^2 + 8\left(\frac{d}{d_m}\right)^3 - 3\left(\frac{d}{d_m}\right)^4        (38)

when d ≤ dm, and wi = 0 when d > dm, where

d = \sqrt{(x - x_i)^2 + (y - y_i)^2}        (39)

This study tries to find the best value for the ratio dm/ci, where ci is the mesh nodal parameter; in this case ci is the side of the square cell formed by four nodes.

b) Exponential weight function (Gauss type):

w_i(d) = \frac{e^{-(d/c)^2} - e^{-(d_m/c)^2}}{1 - e^{-(d_m/c)^2}}        (40)

when d ≤ dm, and wi = 0 when d > dm, where

d = \sqrt{(x - x_i)^2 + (y - y_i)^2}, \qquad c = \alpha \cdot c_i        (41)

with ci the side of the square cell. In the above, c is a constant which controls the relative weights, and dm is the size of the support for the weight function, which determines the influence domain of xi.

Figure 4: Values of the exponential weight function for d = 1, plotted against dm/c
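The two weight functions (38) and (40) can be written directly as code. The sketch below is a minimal Python version of both, with the support size dm and the parameter c passed in as arguments; the values used in the example call are only illustrative.

import numpy as np

def spline_weight(d, dm):
    """Polynomial (spline type) weight, eq. (38); zero for d > dm."""
    s = d / dm
    w = 1.0 - 6.0 * s**2 + 8.0 * s**3 - 3.0 * s**4
    return np.where(d <= dm, w, 0.0)

def gauss_weight(d, dm, c):
    """Exponential (Gauss type) weight, eq. (40); zero for d > dm."""
    w = (np.exp(-(d / c) ** 2) - np.exp(-(dm / c) ** 2)) / (1.0 - np.exp(-(dm / c) ** 2))
    return np.where(d <= dm, w, 0.0)

# Illustration: with c = dm/4 (dm = 4c, as advised below) both weights decay to zero at d = dm.
ci = 0.5                       # side of the square integration cell (example value)
dm = 3.0 * ci                  # support size, e.g. dm = 3 ci
d = np.linspace(0.0, dm, 7)
print(spline_weight(d, dm))
print(gauss_weight(d, dm, dm / 4.0))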


For the weight function to be well shaped, it is advisable to take dm = 4c. This can be observed in figure 4, where the value of the exponential weight function for d = 1 is represented for different values of dm/c. Therefore, the paper also tries to find the best value of the ratio dm/ci for this weight function; it is easy to see that α = dm/(4ci). The deflection at the end of the beam (x = 10, y = 0) is compared with the exact result 4.0275 given by Timoshenko and Goodier. The following error is defined:

\%E = \frac{u_{y,\mathrm{exact}} - u_{y,\mathrm{efg}}}{u_{y,\mathrm{exact}}} \cdot 100        (42)

Thus, in figures 5 and 6, the variation of the deflection error with the ratio dm/ci is represented for both meshes and for the two weight functions mentioned before. Figure 5 shows, when the spline-type weight function is used, that any value of dm two or three times larger than ci is enough for a good accuracy (less than 0.5%), and, as expected, the mesh with more nodes gives a flatter curve. As shown in figure 5, in this problem it is not necessary to take a large number of nodes if a good support size is chosen for the weight function (for example dm ≥ 3.5 ci).

Figure 5: Error using the spline weight function (%E against dm/ci for the 63-node and 205-node meshes)

Figure 6 shows, when the exponential-type weight function is used, that dm = 2.6 ci gives a good accuracy for both meshes, and again the mesh of 205 nodes gives a flatter curve. Here the support size (area of influence) of the weight functions is smaller, which gives us a saving of computer time.


Figure 6: Error using the exponential weight function (%E against dm/ci for both meshes)

The influence of the number of integration points was also studied. This influence is shown in figures 7 and 8, which represent the curves for a mesh of 63 nodes with 60 square integration cells of side 0.5 (line 63) and a mesh, also of 63 nodes, but with 200 square integration cells of side 0.25 (line 63R), for the polynomial (figure 7) and exponential (figure 8) weight functions.

Figure 7: Influence of the number of integration points using the spline weight function (%E against dm/ci for lines 63 and 63R)


Thus, in the case of the spline weight function, when the number of integration points increases the curve is smoother, but the change is not important. In the case of the Gauss weight function no change can be appreciated.

Figure 8: Influence of the number of integration points using the exponential weight function (%E against dm/ci for lines 63 and 63R)

Now we can also calculate with the same method the normal stress at different points of the beam. The exact result by Timoshenko and Goodier13 is

S_{xx} = -\frac{P}{I}\,(L - x)\left(y - \frac{D}{2}\right)        (43)

Figure 9: Variation of the normal stress Sxx for y = 0, 0 ≤ x ≤ 10 (exact, spline and Gauss weight functions, 63-node mesh)


First, figure 9 shows the variation of the normal stress with x for y = 0. The exact result and the results for the spline weight function and the Gauss weight function are compared. Both weight functions have the same response here, and both are less accurate at the beginning and at the end of the beam, where the boundary conditions were placed.

Figure 10: Variation of the normal stress Sxx for x = 0, 0 ≤ y ≤ 1 (exact, spline and Gauss weight functions, 63-node mesh)

Figure 10 shows the variation of the normal stress with y at the fixed end of the beam (x = 0). In this case it is observed that the spline weight function works even better than the exponential weight function. For other sections, away from the boundary, the results are good in both cases. Therefore, both weight functions can be used to obtain a good accuracy in the normal stress.

5 CONCLUSIONS

Some aspects of a finite point method in linear analysis have been studied. A high convergence rate has been obtained, together with C^n continuity. The method also seems very effective when an a posteriori error indicator is used; this can provide very simple adaptivity and refinement. The main drawbacks of the method, as compared with the finite element method, are the treatment of the essential boundary conditions and the increase of computer time due to the calculation of the shape function derivatives. The main advantage of the method is the continuity of the gradients, which gives a better approximation as compared with the finite element method; this is particularly important in the presence of singularities. A parametric study comparing two different weight functions was made; it is important to choose a weight function that is stable for the problem under study. Finally, it has been shown that all the stresses are continuous over the cantilever domain.


REFERENCES

[1] E. Isaacson and H. B. Keller, Analysis of Numerical Methods, John Wiley & Sons, Inc. (1966).
[2] P. Lancaster and K. Salkauskas, "Surfaces generated by moving least squares methods", Math. Comput., 37, 141-158 (1981).
[3] B. Nayroles, G. Touzot and P. Villon, "Generalizing the finite element method: diffuse approximation and diffuse elements", Computational Mechanics, 10, 307-318 (1992).
[4] T. Belytschko, Y. Y. Lu and L. Gu, "Element-free Galerkin Methods", International Journal for Numerical Methods in Engineering, 37, 229-256 (1994).
[5] Y. Y. Lu, T. Belytschko and L. Gu, "A New Implementation of the Element-free Galerkin Method", Comput. Methods Appl. Mech. Eng., 113, 397-414 (1994).
[6] W. K. Liu, S. Jun, S. Li, J. Adee and T. Belytschko, "Reproducing Kernel Particle Methods for Structural Dynamics", International Journal for Numerical Methods in Engineering, 38, 1655-1679 (1995).
[7] E. Oñate, S. Idelsohn, O. C. Zienkiewicz and R. L. Taylor, "A Finite Point Method in computational mechanics. Applications to convective transport and fluid flow", International Journal for Numerical Methods in Engineering, 39, 3839-3866 (1996).
[8] A. Duarte and J. T. Oden, "H-p Clouds: an h-p Meshless Method", Numerical Methods for Partial Differential Equations, 12, 673-705 (1996).
[9] I. Babuska and J. M. Melenk, "The Partition of Unity Method", International Journal for Numerical Methods in Engineering, 40, 727-758 (1997).
[10] Y. Krongauz and T. Belytschko, "A Petrov-Galerkin Diffuse Element Method (PG DEM) and its comparison to EFG", Computational Mechanics, 19, 327-333 (1997).
[11] Y. X. Mukherjee and S. Mukherjee, "The Boundary Node Method for potential problems", International Journal for Numerical Methods in Engineering, 40, 797-815 (1997).


[12] L. Ferragut, R. Montenegro and A. Plaza, "Efficient refinement/derefinement algorithm of nested meshes to solve evolution problems", Communications in Numerical Methods in Engineering, 10, 403-412 (1994).
[13] S. Timoshenko and J. N. Goodier, Theory of Elasticity (Spanish edition: Teoría de la Elasticidad), Urmo S.A. (1968).
