The Taylor's nonpolynomial series approximation

Pushpendra Singh, S D Joshi, R K Patney, Kaushik Saha

To cite this version: Pushpendra Singh, Joshi S D, Patney R K, Saha Kaushik. The Taylor’s nonpolynomial series approximation. 2016.

HAL Id: hal-01229594 https://hal.archives-ouvertes.fr/hal-01229594v2 Submitted on 30 Mar 2016

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.


The Taylor's nonpolynomial series approximation

Pushpendra Singh^{1,3,*}, S D Joshi^1, R K Patney^1 and Kaushik Saha^2

^1 Department of EE, Indian Institute of Technology Delhi, India

^2 Samsung R & D Institute India - Delhi, India

^3 Jaypee Institute of Information Technology - NOIDA, India

Abstract. In this paper, we propose a Taylor's nonpolynomial series approximation and apply it to the computation of approximate solutions of differential equations and to the representation of functions. We also present an extension of Taylor's theorem to nonpolynomial series approximation, together with an error analysis.

Keywords: Taylor series, Maclaurin series, polynomial and nonpolynomial approximation.

1 Introduction

Maxwell's equations of electromagnetism, the motion of objects, fluid and heat flow, the bending and cracking of materials, vibrations, electrical circuits, chemical and nuclear reactions, wave propagation, etc. are modelled by systems of differential equations. The differential equation is therefore the most important mathematical tool for modelling engineering systems and physical phenomena [1]. The Taylor series method, also called the recurrent power series method [2-4], is one of the fundamental building blocks of numerical analysis and has a long and rich history. The Taylor polynomial series approximation method is well known and has been used in a variety of applications. Numerical solutions of systems of differential equations, in the context of periodic orbits, have been obtained using the Taylor method [5-7]. The performance of the Taylor series method for ordinary differential equations (ODEs) and differential-algebraic equations (DAEs) has also been studied in [8]. Some recent studies of the Taylor series method address the analytic continuation of the Taylor series [9]; in particular, solutions of a class of nonlinear singular boundary value problems (BVPs), obtained without any special technique for handling the singularity at the origin [10], and ODE applications [11] have been investigated. Other important applications of the Taylor method in the literature include numerical methods for differential equations [12-14] and computer-assisted proofs in dynamical systems, e.g. the existence of periodic orbits and of the Lorenz attractor [15]. The use of nonpolynomial functions, such as exponential and trigonometric functions, in Runge-Kutta methods is presented in [16-18]. The idea of matching the Taylor series of the exact solution

* Corresponding author's E-mail addresses: [email protected]; [email protected] (P. Singh). ^1 http://ee.iitd.ernet.in/ ; ^2 http://www.samsung.com/in/ ; ^3 www.jiit.ac.in/


with the Taylor series of the numerical solution has been used in the construction of Runge-Kutta methods [16-19]. The presence of a parameter ω in a nonpolynomial spline function is shown to produce better results [20-22], in terms of the maximum absolute approximation error, than a polynomial spline function when approximating the numerical solution of BVPs. Some nonpolynomial interpolations with multiple parameters have been discussed in [23], and a nonpolynomial spline function has been used to obtain the upper and lower envelopes in the empirical mode decomposition (EMD) algorithm, reducing mode mixing and detrend uncertainty [24]. Thus, we propose to use a parameterized nonpolynomial series approximation in the Taylor polynomial series method for the solution of differential equations and the representation of functions.

This paper is organized as follows: In section 2, we discuss polynomial and nonpolynomial functions, the classical Taylor series and Taylor's theorem. In section 3, we propose the Taylor nonpolynomial approximation, an extension of Taylor's theorem to nonpolynomial series approximation, and an error analysis. Simulation results are presented in section 4. We present the conclusions in section 5.

2 Preliminaries: Polynomial and Nonpolynomial Series

If a function f(t) has a power series expansion at c (or about c, or centered at c), that is, if

f(t) = \sum_{k=0}^{\infty} c_k (t - c)^k, \quad |t - c| < R, \quad \text{then} \quad c_k = \frac{f^{(k)}(c)}{k!},   (1)

where R is the radius of convergence. The power series in (1) is called the Taylor series of the function f(t) at c. For the special case c = 0, the Taylor series becomes the Maclaurin series. The following theorem, a generalization of the mean value theorem known as the classical Taylor's theorem, is stated here for quick reference:

Theorem 2.1. (Taylor's theorem) Let f(t) \in C^{n+1}[a,b] and let p(t) \in C^{n}[a,b] be the nth order Taylor polynomial approximation of f(t) with center c \in [a,b]. Then \forall \bar{t} \in [a,b], there exists some value \xi between c and \bar{t} such that

f(\bar{t}) = p(\bar{t}) + \frac{f^{(n+1)}(\xi)}{(n+1)!} (\bar{t} - c)^{n+1},   (2)

where p(t) = \sum_{k=0}^{n} c_k (t - c)^k and c_k = \frac{f^{(k)}(c)}{k!}.
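As a quick numerical illustration of Theorem 2.1 (a sketch added here, not part of the original derivation), the following Python snippet builds the fifth-order Taylor polynomial of f(t) = e^t about c = 0 and checks that the actual error respects the remainder bound in (2), using max|f^{(6)}(ξ)| ≤ e on [0, 1]:

```python
import math

def taylor_poly_exp(t, n, c=0.0):
    # p(t) = sum_{k=0}^{n} f^(k)(c)/k! (t - c)^k, with f = exp so f^(k)(c) = e^c
    return sum(math.exp(c) / math.factorial(k) * (t - c) ** k for k in range(n + 1))

n, t_bar = 5, 1.0
p = taylor_poly_exp(t_bar, n)
err = abs(math.exp(t_bar) - p)
# remainder bound (2): max |f^(n+1)(xi)| on [0,1] is e, times |t - c|^(n+1)/(n+1)!
bound = math.e * t_bar ** (n + 1) / math.factorial(n + 1)
print(err, bound, err <= bound)
```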

Taylor's theorem states that the difference between p(t) and f(t) at some point \bar{t} (other than c) is governed by the distance between \bar{t} and c and by the (n+1)th derivative of f(t). Runge-Kutta methods fitted with nonpolynomial (exponential and trigonometric) functions [16-18] use the span of functions of the form

T \in \text{span}\{1, t, t^2, \cdots, t^q, \exp(\pm\omega t), t\exp(\pm\omega t), t^2\exp(\pm\omega t), \cdots, t^n\exp(\pm\omega t)\},   (3)

and when \omega = j\mu, \mu \in \mathbb{R}, the pair \exp(\pm\omega t) is replaced by \sin(\mu t), \cos(\mu t) and the method is referred to as trigonometrically fitted [1]. We consider (3) with (n+1) parameters \{\omega_k\}_{k=0}^{n} of the form

T_{2n} \in \text{span}\{1, t, t^2, \cdots, t^q, \exp(\pm\omega_0 t), t\exp(\pm\omega_1 t), t^2\exp(\pm\omega_2 t), \cdots, t^n\exp(\pm\omega_n t)\}.   (4)

The interrelation between the basis of polynomial functions and the basis of nonpolynomial functions is established in the following manner:

\lim_{\omega \to 0} \text{span}\{1, \sin(\omega t), \cos(\omega t)\} = \lim_{\omega \to 0} \text{span}\Big\{1, \frac{\sin(\omega t)}{\omega}, \frac{2}{\omega^2}[\cos(\omega t) - 1]\Big\}.   (5)

From equation (5) it follows that

\lim_{\omega \to 0} \text{span}\{1, \sin(\omega t), \cos(\omega t)\} = \text{span}\{1, t, t^2\}.   (6)

Similarly, we obtain:

\lim_{\omega_1 \to 0} \text{span}\{1, \exp(\omega_1 t), \exp(\omega_2 t)\} = \lim_{\omega_1 \to 0} \text{span}\Big\{1, \frac{\exp(\omega_1 t) - 1}{\omega_1}, \exp(\omega_2 t)\Big\},
\lim_{\omega_1 \to 0} \text{span}\Big\{1, \frac{\exp(\omega_1 t) - 1}{\omega_1}, \exp(\omega_2 t)\Big\} = \text{span}\{1, t, \exp(\omega_2 t)\},
\lim_{\omega_2 \to 0} \text{span}\{1, t, \exp(\omega_2 t)\} = \lim_{\omega_2 \to 0} \text{span}\Big\{1, t, \frac{2}{\omega_2^2}[\exp(\omega_2 t) - 1 - \omega_2 t]\Big\},
\lim_{\omega_2 \to 0} \text{span}\Big\{1, t, \frac{2}{\omega_2^2}[\exp(\omega_2 t) - 1 - \omega_2 t]\Big\} = \text{span}\{1, t, t^2\},

and hence

\lim_{\omega_1 \to 0, \, \omega_2 \to 0} \text{span}\{1, \exp(\omega_1 t), \exp(\omega_2 t)\} = \text{span}\{1, t, t^2\}.

It is easy to show, for m \ge 1, that:

t^m = \lim_{\omega \to 0} \frac{m!}{\omega^m}\Big[\exp(\omega t) - \sum_{k=0}^{m-1}\frac{(\omega t)^k}{k!}\Big]; \quad
t^{2m} = \lim_{\omega \to 0} \frac{(-1)^m (2m)!}{\omega^{2m}}\Big[\cos(\omega t) - \sum_{k=0}^{m-1}(-1)^k\frac{(\omega t)^{2k}}{(2k)!}\Big];

t = \lim_{\omega \to 0} \frac{\sin(\omega t)}{\omega}; \quad \text{and} \quad
t^{2m+1} = \lim_{\omega \to 0} \frac{(-1)^m (2m+1)!}{\omega^{2m+1}}\Big[\sin(\omega t) - \sum_{k=0}^{m-1}(-1)^k\frac{(\omega t)^{2k+1}}{(2k+1)!}\Big].   (7)
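The limits in (7) are easy to verify numerically. The sketch below (an illustration with our own function name, not from the paper) evaluates the first identity, m!/ω^m [exp(ωt) − Σ_{k=0}^{m−1} (ωt)^k/k!], for shrinking ω and watches it approach t^m:

```python
import math

def poly_from_exp(t, m, w):
    # m!/w^m * [exp(w t) - sum_{k=0}^{m-1} (w t)^k / k!]  ->  t^m  as w -> 0
    partial = sum((w * t) ** k / math.factorial(k) for k in range(m))
    return math.factorial(m) / w ** m * (math.exp(w * t) - partial)

t, m = 2.0, 3
for w in (1.0, 0.1, 0.01):
    print(w, poly_from_exp(t, m, w))  # values approach t**m = 8
```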

The relationship among polynomial, nonpolynomial and Fourier series is established in [23] by the nonpolynomial function with n parameters \{\omega_k\}_{k=1}^{n} of the form

T_{2n} \in \text{span}\{1, \sin(\omega_1 t), \cos(\omega_1 t), \sin(\omega_2 t), \cos(\omega_2 t), \ldots, \sin(\omega_n t), \cos(\omega_n t)\}, or
T_{2n} = c_0 + c_1\sin(\omega_1 t) + c_2\cos(\omega_1 t) + \cdots + c_{2n-1}\sin(\omega_n t) + c_{2n}\cos(\omega_n t),   (8)

and, using (7), it is shown that:

\lim_{\omega_1 \to 0} T_{2n} \in \text{span}\{1, t, t^2, \cos(\omega_2 t), \sin(\omega_2 t), \ldots, \cos(\omega_n t), \sin(\omega_n t)\},
\lim_{\omega_1 \to 0, \, \omega_2 \to 0} T_{2n} \in \text{span}\{1, t, t^2, t^3, t^4, \ldots, \cos(\omega_n t), \sin(\omega_n t)\}, \ldots, and

\lim_{\omega_1 \to 0, \ldots, \omega_n \to 0} T_{2n} \in \text{span}\{1, t, t^2, \ldots, t^{2n-1}, t^{2n}\}.   (9)

In equation (8), the parameters \{\omega_k\}_{k=1}^{n} may or may not be harmonically related to each other. If \omega_k = k\omega_1 for 1 \le k \le n and n \to \infty, then equation (8) converges to the well-known classical Fourier series representation of a periodic signal. We also consider the nonpolynomial function with n parameters \{\omega_k\}_{k=1}^{n} of the form

T_n \in \text{span}\{1, \exp(\omega_1 t), \exp(\omega_2 t), \exp(\omega_3 t), \ldots, \exp(\omega_n t)\}, \quad \text{or} \quad T_n = \sum_{k=0}^{n} c_k \exp(\omega_k t) \ (\text{with } \omega_0 = 0),   (10)

and, using (7), it is easy to show that:

\lim_{\omega_1 \to 0} T_n \in \text{span}\{1, t, \exp(\omega_2 t), \exp(\omega_3 t), \ldots, \exp(\omega_n t)\},
\lim_{\omega_1 \to 0, \, \omega_2 \to 0} T_n \in \text{span}\{1, t, t^2, \exp(\omega_3 t), \ldots, \exp(\omega_n t)\}, \ldots, and

\lim_{\omega_1 \to 0, \ldots, \omega_n \to 0} T_n \in \text{span}\{1, t, t^2, \ldots, t^{n-1}, t^n\}.   (11)

Similarly, we can use combinations of polynomial, sinusoidal and exponential functions to obtain other spans of nonpolynomial functions. From (9) and (11), it is clear that polynomial functions are special cases of nonpolynomial functions, and hence the span of the nonpolynomial functions includes a larger class of functions.

3 Taylor's Nonpolynomial Series Approximation

We propose a Taylor nonpolynomial series expansion of a function f(t) by the nonpolynomial function p(t) = c_0 + c_1\sin(\omega_1(t-c)) + c_2\cos(\omega_1(t-c)) + \cdots + c_{2n}\cos(\omega_n(t-c)), as given in Eq. (8) with c = 0. Since there are (2n+1) constants \{c_i\}_{i=0}^{2n} in the expansion of f(t), we need (2n+1) equations to determine them. From (8), we obtain

f(c) = p(c) = c_0 + c_2 + c_4 + \cdots + c_{2n},   (12)

f^{(2m-1)}(c) = p^{(2m-1)}(c) = (-1)^{m+1}[\omega_1^{2m-1} c_1 + \omega_2^{2m-1} c_3 + \cdots + \omega_n^{2m-1} c_{2n-1}],   (13)

f^{(2m)}(c) = p^{(2m)}(c) = (-1)^{m}[\omega_1^{2m} c_2 + \omega_2^{2m} c_4 + \cdots + \omega_n^{2m} c_{2n}]   (14)

for m = 1, 2, \cdots, n, which together with (12) gives the required (2n+1) equations. Similarly, we propose the Taylor nonpolynomial series expansion of f(t) by the nonpolynomial function p(t) = \sum_{k=0}^{n} c_k \exp(\omega_k(t-c)), as given in Eq. (10) with c = 0; since there are (n+1) constants \{c_i\}_{i=0}^{n} in the expansion of f(t), we need (n+1) equations to determine them. From (10), we obtain

f(c) = p(c) = c_0 + c_1 + c_2 + \cdots + c_n,   (15)

f^{(k)}(c) = p^{(k)}(c) = \omega_1^{k} c_1 + \omega_2^{k} c_2 + \cdots + \omega_n^{k} c_n   (16)
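Equations (15)-(16) form a small linear (Vandermonde-like) system for the coefficients c_k. As an illustrative sketch (the choice f(t) = e^{2t}, the values ω_1 = 1, ω_2 = 3, and the tiny solver are ours, not from the paper), the snippet below matches f(0), f'(0), f''(0) with p(t) = c_0 + c_1 e^{ω_1 t} + c_2 e^{ω_2 t}:

```python
import math

def solve3(A, b):
    # naive Gaussian elimination with partial pivoting; enough for a 3x3 sketch
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, n):
            fac = M[r][i] / M[i][i]
            for j in range(i, n + 1):
                M[r][j] -= fac * M[i][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# match f(0), f'(0), f''(0) of f(t) = exp(2t), i.e. 1, 2, 4, per Eqs. (15)-(16)
w1, w2 = 1.0, 3.0
A = [[1.0, 1.0, 1.0],
     [0.0, w1, w2],
     [0.0, w1**2, w2**2]]
b = [1.0, 2.0, 4.0]
c0, c1, c2 = solve3(A, b)

def p(t):
    return c0 + c1 * math.exp(w1 * t) + c2 * math.exp(w2 * t)

print(c0, c1, c2, abs(p(0.1) - math.exp(0.2)))
```

For these choices the system gives c_0 = -1/3, c_1 = 1, c_2 = 1/3, and the approximation stays close to f near the center.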

for k = 1, 2, \cdots, n. Similarly, we can use combinations of polynomial, sinusoidal and exponential functions to obtain other nonpolynomial expansions of a function f(t) about c. We generalize the existing Taylor result for polynomial approximations and propose the following result for nonpolynomial approximations.

Theorem 3.1. Let f(t) \in C^{n+1}[a,b] and let p(t) \in C^{\infty}[a,b] be the Taylor nonpolynomial approximation of f(t) with center c \in [a,b]. Then \forall \bar{t} \in [a,b], there exists some value \xi between c and \bar{t} such that

f(\bar{t}) = p(\bar{t}) + \frac{f^{(n+1)}(\xi) - p^{(n+1)}(\xi)}{(n+1)!}(\bar{t} - c)^{n+1}.   (17)

Proof. If \bar{t} = c, the proof of the theorem is trivial. Therefore, we consider \bar{t} \ne c. For fixed but arbitrary \bar{t} (with \bar{t} \ne c), we construct the function

F(x) = f(x) - p(x) - \lambda(x - c)^{n+1},   (18)

where the constant \lambda is defined such that F(\bar{t}) = 0. It is easily seen that

\lambda = \frac{f(\bar{t}) - p(\bar{t})}{(\bar{t} - c)^{n+1}}.   (19)

Clearly the function F(x) has the following properties:

F(\bar{t}) = 0, \quad F(c) = 0, \quad F'(c) = 0, \quad F''(c) = 0, \quad \cdots, \quad F^{(n)}(c) = 0.   (20)

Through the mean value theorem and (20), we obtain:

F(c) = 0 and F(\bar{t}) = 0 \Rightarrow \exists \xi_1, between c and \bar{t}, such that F'(\xi_1) = 0.
F'(c) = 0 and F'(\xi_1) = 0 \Rightarrow \exists \xi_2, between c and \xi_1, such that F''(\xi_2) = 0.
\vdots
F^{(n)}(c) = 0 and F^{(n)}(\xi_n) = 0 \Rightarrow \exists \xi, between c and \xi_n, such that F^{(n+1)}(\xi) = 0.

From the above discussion and (18), we obtain

F^{(n+1)}(\xi) = f^{(n+1)}(\xi) - p^{(n+1)}(\xi) - \lambda(n+1)!,   (21)

and, since F^{(n+1)}(\xi) = 0, hence

\lambda = \frac{f^{(n+1)}(\xi) - p^{(n+1)}(\xi)}{(n+1)!}.   (22)

This proves the theorem.

Theorem 3.1 reduces to Taylor's Theorem 2.1 for polynomial approximation if p(t) is a polynomial of degree at most n, i.e. p^{(n+1)}(t) = 0 and hence p^{(n+1)}(\xi) = 0. Thus this result provides an important link between the Taylor polynomial and nonpolynomial approximations and their error analysis. Since the nonpolynomial approximations given in Eqs. (8) and (10) depend on the parameters \{\omega_i\}, it is pertinent to find the values of those parameters which minimize the approximation error. The error in the nonpolynomial approximation can be computed from Eq. (17) as

e(t) = f(t) - p(t) = \frac{f^{(n+1)}(\xi) - p^{(n+1)}(\xi)}{(n+1)!}(t - c)^{n+1}.   (23)

We can derive an overall error bound in the L^{\infty}-norm,

|e(t)| \le \frac{\max_{\xi}|f^{(n+1)}(\xi) - p^{(n+1)}(\xi)| \; \max_{t}|(t - c)^{n+1}|}{(n+1)!},   (24)

and minimize it with respect to \omega, as p^{(n+1)}(\xi) is a function of \omega. We can minimize the error |e(t)| with respect to \omega by minimax approximation and obtain the value of \omega for which the error is minimum, i.e. we obtain the least infinity-norm solution of

\min_{\omega} \max_{\xi} |f^{(n+1)}(\xi) - p^{(n+1)}(\xi)| \quad \text{or} \quad \min_{\omega} \|f^{(n+1)}(\xi) - p^{(n+1)}(\xi)\|_{\infty}.

Using (17), one can obtain a better approximation than (2) by selecting \omega such that

\min_{\omega} \max_{\xi} |f^{(n+1)}(\xi) - p^{(n+1)}(\xi)| \le \max_{\xi} |f^{(n+1)}(\xi)|.

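The minimax choice of ω can be approximated by brute force. In the sketch below (an illustration with our own test function f(t) = cos(2t), not an example from the paper, and a crude grid search in place of a true minimax solver), an order-two trigonometric fit p(t) = c_0 + c_1 sin(ωt) + c_2 cos(ωt) is built by matching f(0), f'(0), f''(0), and the grid search picks the ω minimizing the sampled sup-norm error; for this f the search should recover ω ≈ 2, where the fit is exact:

```python
import math

def f(t):
    return math.cos(2.0 * t)

def nonpoly_approx(w):
    # match f(0) = 1, f'(0) = 0, f''(0) = -4 via Eqs. (12)-(14) with n = 1
    c1 = 0.0 / w          # f'(0) = w * c1
    c2 = 4.0 / w**2       # f''(0) = -w^2 * c2
    c0 = 1.0 - c2         # f(0) = c0 + c2
    return lambda t: c0 + c1 * math.sin(w * t) + c2 * math.cos(w * t)

def max_err(w, T=2.0, steps=200):
    # sampled sup-norm error of the fit over [0, T]
    p = nonpoly_approx(w)
    return max(abs(f(t) - p(t)) for t in (i * T / steps for i in range(steps + 1)))

# crude grid search over omega in [0.5, 4) for the least sup-norm error
best_w = min((w / 100 for w in range(50, 400)), key=max_err)
print(best_w, max_err(best_w))  # best_w lands at 2.0, where p coincides with f
```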

4 Simulations

In this section, we consider two examples, apply the proposed method to obtain approximations of the solution of an ODE and of a function representation, and compare the errors of the polynomial and nonpolynomial approximations.

4.1 Example 1

The basic principle of the Taylor series method is simple. Let the solution y(t) of the initial-value problem

y' = f(t, y), \quad a \le t \le b, \quad y(a) = \alpha,   (25)

have (n+1) continuous derivatives. If we expand the solution y(t) in terms of its Taylor polynomial about t_i and evaluate at t_{i+1}, we obtain

y(t_{i+1}) = y(t_i) + y'(t_i)h + \frac{y''(t_i)}{2!}h^2 + \cdots + \frac{y^{(n)}(t_i)}{n!}h^n,   (26)

where the step size h = \frac{b-a}{N} = t_{i+1} - t_i and t_i = a + ih, for i = 0, 1, \cdots, N. Successive differentiation of the solution y(t) gives y'(t) = f(t, y), y''(t) = f'(t, y), \cdots, y^{(n)}(t) = f^{(n-1)}(t, y). From these results and Eq. (26) we obtain

y(t_{i+1}) = y(t_i) + f(t_i, y(t_i))h + \frac{f'(t_i, y(t_i))}{2!}h^2 + \cdots + \frac{f^{(n-1)}(t_i, y(t_i))}{n!}h^n.   (27)

We consider the initial value problem y' = y - t^2 + 1, 0 \le t \le 2, y(0) = 0.5, with N = 10, and apply the Taylor method of order three to obtain (with y_i = y(t_i))

y_{i+1} = y_i + h(y_i - t_i^2 + 1) + \frac{h^2}{2}(y_i - t_i^2 + 1 - 2t_i) + \frac{h^3}{6}(y_i - t_i^2 - 2t_i - 1).   (28)

Similarly, from Eqs. (8), (12), (13), (14) and the Taylor method of order two, we obtain

y_{i+1} = y_i + \frac{\sin(\omega_1 h)}{\omega_1}(y_i - t_i^2 + 1) + \frac{1 - \cos(\omega_1 h)}{\omega_1^2}(y_i - t_i^2 + 1 - 2t_i),   (29)

Figure 1: Absolute error in the Taylor polynomial (orders 2 and 3) and nonpolynomial (order 2, with L^{\infty}-norm minimization, \omega_1 = 0.69) approximations of y(t) in Example 1.

where y_0 = 0.5. This reduces to the polynomial approximation of order two of the initial value problem in the limit \omega_1 \to 0. For this problem the exact solution is y(t) = (t+1)^2 - 0.5\exp(t), which we use for the error calculation. Figure 1 compares the absolute errors (AE) of the Taylor polynomial (orders 2 and 3) and nonpolynomial (order 2) approximations of the initial-value problem in Example 1. The AE of the polynomial and nonpolynomial approximations are comparable when t is close to 0, whereas, as t moves away from 0, the AE of the polynomial approximation of order two grows at a polynomial rate. The AE of the nonpolynomial approximation of order two increases only by a small amount and is comparable with that of the polynomial approximation of order three.
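The recurrences (28) and (29) can be reproduced in a few lines. The following sketch (our implementation of the stated schemes, using the exact solution for the error as in the text) advances both methods over [0, 2] with N = 10 and ω_1 = 0.69 and prints the end-point errors:

```python
import math

# Taylor method of order 3 (Eq. 28) vs nonpolynomial method of order 2 (Eq. 29)
# for y' = y - t^2 + 1, y(0) = 0.5, on [0, 2] with N = 10.
a, b_end, N, y0 = 0.0, 2.0, 10, 0.5
h = (b_end - a) / N
w1 = 0.69  # parameter value reported for the nonpolynomial scheme

def exact(t):
    return (t + 1.0) ** 2 - 0.5 * math.exp(t)

y_poly, y_np = y0, y0
t = a
for i in range(N):
    d1 = y_poly - t**2 + 1.0                # y'   = y - t^2 + 1
    d2 = y_poly - t**2 + 1.0 - 2.0 * t      # y''  = y' - 2t
    d3 = y_poly - t**2 - 2.0 * t - 1.0      # y''' = y'' - 2
    y_poly += h * d1 + h**2 / 2.0 * d2 + h**3 / 6.0 * d3
    e1 = y_np - t**2 + 1.0
    e2 = y_np - t**2 + 1.0 - 2.0 * t
    y_np += math.sin(w1 * h) / w1 * e1 + (1.0 - math.cos(w1 * h)) / w1**2 * e2
    t += h

print(abs(y_poly - exact(2.0)), abs(y_np - exact(2.0)))
```

As ω_1 → 0 the coefficients sin(ω_1 h)/ω_1 and (1 − cos(ω_1 h))/ω_1² tend to h and h²/2, recovering the order-two polynomial scheme.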

4.2 Example 2

In this example, we approximate the function f(t) = e^{\sin(t)} by the Taylor nonpolynomial and polynomial approximations at c = 0. We obtain f(0) = 1, f'(0) = 1, f''(0) = 1, f'''(0) = 0, f^{(4)}(0) = -3, and hence the Taylor polynomial approximation of order four is

f_p(t) \approx 1 + t + \frac{1}{2}t^2 - \frac{1}{8}t^4.   (30)

From Eqs. (8), (12), (13), (14), we obtain the Taylor nonpolynomial approximation of order two as

f_{np}(t) \approx \Big(1 + \frac{1}{\omega_1^2}\Big) + \frac{1}{\omega_1}\sin(\omega_1 t) - \frac{1}{\omega_1^2}\cos(\omega_1 t).   (31)

This reduces to the polynomial approximation of order two of the function in the limit \omega_1 \to 0. Figure 2 compares the absolute errors (AE), for 0 \le t \le 70, of the Taylor polynomial and nonpolynomial approximations of f(t) in Example 2. The AE in both cases are comparable when t is close to c, whereas, as t moves away from c, the AE of the polynomial approximation grows at a polynomial rate while the AE of the nonpolynomial approximation grows by a small amount and then decreases.
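A direct comparison of the polynomial and nonpolynomial approximations is straightforward to script. The sketch below evaluates both for f(t) = e^{sin t} (note that the fourth-order Maclaurin coefficient of e^{sin t} is −1/8, i.e. f⁗(0) = −3) at a few points, using the single parameter ω_1 = −3.29 reported in Figure 2; far from the center the polynomial error explodes while the nonpolynomial error stays bounded:

```python
import math

def f(t):
    return math.exp(math.sin(t))

def f_poly(t):
    # fourth-order Maclaurin polynomial of exp(sin t)
    return 1.0 + t + t**2 / 2.0 - t**4 / 8.0

def f_nonpoly(t, w1):
    # order-two nonpolynomial fit (Eq. 31): matches f(0) = f'(0) = f''(0) = 1
    return (1.0 + 1.0 / w1**2) + math.sin(w1 * t) / w1 - math.cos(w1 * t) / w1**2

for t in (0.5, 5.0, 50.0):
    print(t, abs(f(t) - f_poly(t)), abs(f(t) - f_nonpoly(t, -3.29)))
```

Since f is bounded in [e^{-1}, e] and the nonpolynomial fit is a bounded trigonometric function, its error stays of order one for all t, whereas the t^4 term drives the polynomial error without bound.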

5 Conclusion

In this paper, we have proposed the Taylor nonpolynomial series approximation and presented an error analysis by extending Taylor's theorem to nonpolynomial approximations. Applications of the proposed method are presented for the approximation of solutions of differential equations and the representation of functions. Simulation results demonstrate the accuracy of the proposed method.

Acknowledgment

The authors would like to thank JIIT Noida for permission to carry out this research at IIT Delhi and for providing all the required resources throughout this study.


Figure 2: Top and middle plots: absolute error (AE, on a log10 scale) and the corresponding parameter ω as functions of t; bottom plot: AE with a single parameter (ω = −3.29), for the Taylor polynomial (order 4) and nonpolynomial (order 2) approximations of f(t) in Example 2 with L^{\infty}-norm minimization.

References

[1] Z. Kalogiratou, Th. Monovasilis, G. Psihoyios, T.E. Simos, Runge-Kutta type methods with special properties for the numerical integration of ordinary differential equations, Physics Reports 536 (2014) 75-146.

[2] A.E. Roy, P.E. Moran, W. Black, Studies in the application of recurrence relations to special perturbation methods I, Celestial Mech. 6 (1972) 468-482.

[3] J.F. Steffensen, On the differential equations of Hill in the theory of the motion of the Moon II, Acta Math. 95 (1956) 25-37.

[4] A. Deprit, R.V.M. Zahar, Numerical integration of an orbit and its concomitant variations by recurrent power series, Z. Angew. Math. Phys. 17 (1966) 425-430.

[5] W.G. Choe, J. Guckenheimer, Computing periodic orbits with high accuracy, Comput. Methods Appl. Mech. Engng. 170 (1999) 331-341.

[6] J. Guckenheimer, B. Meloon, Computing periodic orbits and their bifurcations with automatic differentiation, SIAM J. Sci. Comput. 22 (2000) 951-985.

[7] M. Lara, A. Deprit, A. Elipe, Numerical continuation of families of frozen orbits in the zonal problem of artificial satellite theory, Celestial Mech. Dynam. Astronom. 62 (1995) 167-181.

[8] R. Barrio, Performance of the Taylor series method for ODEs/DAEs, Appl. Math. Comput. 163 (2005) 525-545.

[9] S. Abbasbandy, C. Bervillier, Analytic continuation of Taylor series and the boundary value problems of some nonlinear ordinary differential equations, Appl. Math. Comput. 218 (2011) 2178-2199.

[10] S.H. Chang, Taylor series method for solving a class of nonlinear singular boundary value problems arising in applied science, Appl. Math. Comput. 235 (2014) 110-117.

[11] R. Barrio, M. Rodriguez, A. Abad, F. Blesa, Breaking the limits: The Taylor series method, Appl. Math. Comput. 217 (2011) 7940-7954.

[12] N.S. Nedialkov, K.R. Jackson, G.F. Corliss, Validated solutions of initial value problems for ordinary differential equations, Appl. Math. Comput. 105 (1999) 21-68.

[13] J. Hoefkens, M. Berz, K. Makino, Computing validated solutions of implicit differential equations, Adv. Comput. Math. 19 (2003) 231-253.

[14] M. Kumar, A. Srivastava, A.K. Singh, Numerical solution of singularly perturbed non-linear elliptic boundary value problems using finite element method, Appl. Math. Comput. 219 (1) (2012) 226-236.

[15] W. Tucker, A rigorous ODE solver and Smale's 14th problem, Found. Comput. Math. 2 (2002) 53-117.

[16] T.E. Simos, An exponentially-fitted Runge-Kutta method for the numerical integration of initial-value problems with periodic or oscillating solutions, Comput. Phys. Comm. 115 (1998) 1-8.

[17] B. Paternoster, Runge-Kutta(-Nystrom) methods for ODEs with periodic solutions based on trigonometric polynomials, Appl. Numer. Math. 28 (1998) 401-412.

[18] T.E. Simos, Exponentially fitted Runge-Kutta-Nystrom method for the numerical solution of initial-value problems with oscillating solutions, Appl. Math. Lett. 15 (2002) 217-225.

[19] Z. Kalogiratou, Th. Monovasilis, T.E. Simos, A symplectic trigonometrically fitted modified partitioned Runge-Kutta method for the numerical integration of orbital problems, Appl. Numer. Anal. Comput. Math. (ANACM) 2 (2005) 359-364.

[20] M. Kumar, P.K. Srivastava, Computational techniques for solving differential equations by quadratic, quartic and octic spline, Adv. Eng. Softw. 39 (2008) 646-653.

[21] M.A. Ramadan, I.F. Lashien, W.K. Zahara, Polynomial and nonpolynomial spline approaches to the numerical solution of second order boundary value problems, Appl. Math. Comput. 184 (2007) 476-484.

[22] P.K. Srivastava, M. Kumar, R.N. Mohapatra, Quintic nonpolynomial spline method for the solution of a special second-order boundary-value problem with engineering application, Comput. Math. Appl. 62 (4) (2011) 1707-1714.

[23] P. Singh, R.K. Patney, S.D. Joshi, K. Saha, Some studies on nonpolynomial interpolation and error analysis, Appl. Math. Comput. 244 (2014) 809-821.

[24] P. Singh, P.K. Srivastava, R.K. Patney, S.D. Joshi, K. Saha, Nonpolynomial spline based empirical mode decomposition, 2013 International Conference on Signal Processing and Communication (2013) 435-440.