Research Article

The parameter estimation algorithms based on the dynamical response measurement data

Advances in Mechanical Engineering 2017, Vol. 9(11) 1–12. © The Author(s) 2017. DOI: 10.1177/1687814017730003. journals.sagepub.com/home/ade

Ling Xu1,2

Abstract

This article studies the parameter estimation of the system response from discrete measurement data. By constructing dynamical rolling cost functions and using nonlinear optimization, a gradient identification method is presented for estimating the parameters of a sine response signal with double frequency. In order to overcome the difficulty of determining the step size and to reduce the influence of noise, a stochastic gradient identification method is derived to estimate the signal parameters. For the purpose of improving the accuracy, a multi-innovation stochastic gradient parameter estimation algorithm is presented using the moving window data. Finally, simulation examples are provided to test the algorithm performance.

Keywords

Parameter estimation, system response, rolling optimization, nonlinear optimization, multi-innovation

Date received: 27 June 2016; accepted: 24 July 2017

Handling Editor: Nima Mahmoodi

1 School of Internet of Things Technology, Wuxi Vocational Institute of Commerce, Wuxi, P.R. China
2 School of Internet of Things Engineering, Jiangnan University, Wuxi, P.R. China

Corresponding author: Ling Xu, School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, P.R. China. Email: [email protected]

Creative Commons CC-BY: This article is distributed under the terms of the Creative Commons Attribution 4.0 License (http://www.creativecommons.org/licenses/by/4.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access pages (https://us.sagepub.com/en-us/nam/open-access-at-sage).

Introduction

System identification and parameter estimation are used widely in process control, signal modeling, communication, and electronic technology. The objective of parameter estimation is to obtain the parameter estimates of system models or signal models.1–3 In general, a parameter estimation algorithm can be derived by defining and minimizing a cost function based on the measurement data.4–6 The optimization method is one of the critical factors for a parameter estimation algorithm, and many optimization methods are used in system identification and parameter estimation. Li et al.7 considered parameter estimation algorithms for Hammerstein output error systems using the Levenberg–Marquardt optimization method. Xu et al.8 presented a parameter estimation method based on Newton optimization. Ding et al.9 studied a least squares iterative identification algorithm for multivariate pseudo-linear autoregressive moving average (ARMA) systems. Wang and Xun10 considered a recursive least squares identification for a class of nonlinear multiple-input single-output systems. Wang et al.11 suggested a maximum likelihood estimation method for dual-rate Hammerstein systems. In short, different identification methods can be derived from different optimization techniques. For a linear problem, the least squares method is effective; for a nonlinear problem, however, nonlinear optimization must be used.12,13

System identification aims to obtain system models, and the system model is the basis of system control.14–16 For a control system, the system responses contain abundant information about the system parameters. The system response at discrete times can be collected by means of measurement instruments. In process control, the system response is obtained by applying some typical signals. In system identification, the impulse signal, the step signal, and the sine signal are used widely to generate the measurement data for estimating system parameters, and many identification methods have been presented based on the impulse response, step response, and frequency response experiments. Ahmed et al.17 studied an identification method from step responses with transient initial conditions. Fedele18 considered a method to estimate a first-order plus time-delay model from the step response. Xu and Ding19 presented a damping iterative parameter identification method for dynamical systems based on the sine signal. Hidayat and Medvedev20 studied the Laguerre domain identification of continuous linear time-delay systems from impulse response data. This article studies parameter estimation based on the response signals.

The accuracy, the computational complexity, and the robustness21–23 are the main aspects of identification algorithm performance.24,25 Many identification algorithms focus on enhancing the accuracy and robustness and on reducing the complexity.26–28 Wang and Ding29–31 considered filtering algorithms to reduce the algorithm complexity. Mao and Ding32 applied the filtering technique to Hammerstein controlled autoregressive systems to reduce the complexity. Na et al.33 studied robust adaptive finite-time parameter estimation for robotic systems. Increasing the amount of measurement data used in the estimation computation is an effective way to improve accuracy.34,35 Moreover, some methods based on model decomposition or parameter decomposition can reduce the complexity.36,37 Wang and colleagues38,39 suggested hierarchical estimation algorithms to reduce the complexity. Wang and colleagues40,41 considered a multi-innovation parameter estimation method for Hammerstein nonlinear systems to enhance the accuracy. The objective of this article is to propose identification algorithms with high accuracy and low complexity.

In system identification, recursive computation and iterative computation are used to obtain the parameter estimates.42–45 Wang and Ding46,47 studied recursive identification algorithms for nonlinear systems. Xu48 proposed an iterative Newton method to estimate the system parameters from the step response. Wang and Ding49 studied a recursive parameter and state estimation algorithm for an input nonlinear state space system. In general, recursive computation can use online data; therefore, recursive algorithms can be used in online identification.

The rest of this article is organized as follows. Section ''Problem description'' describes the identification and estimation problem from the system response. Section ''The recursion gradient identification algorithm'' derives the gradient identification method. Section ''The SG identification method based on the dynamical data'' derives the stochastic gradient (SG) parameter estimation method. Section ''The multi-innovation stochastic gradient identification based on the moving window data'' derives the multi-innovation gradient parameter estimation method. Section ''Examples'' gives some examples to illustrate the performance of the proposed methods. Finally, section ''Conclusion'' gives some concluding remarks.

Problem description

In process control, a linear time-invariant system can be described by its transfer function G(s). System modeling is to obtain the system parameters; this problem is called parameter estimation. In fact, it is difficult to obtain the system parameters directly. The system response is the system output generated by applying some excitation signal. The impulse signal, the step signal, and the sine signal are used widely as excitation signals in practical engineering. The excitation signals are the input signals: once such a signal is applied to the system to be modeled, the system generates a response. Therefore, the system responses contain the system characteristic information. In order to obtain the system parameters, we can use the system response information to derive a parameter estimation algorithm; the parameter estimates can then be obtained using this algorithm.

Let R(s) denote the Laplace transform of the input, Y(s) the Laplace transform of the output, and y(t) the system response. According to the definition of the transfer function, we have Y(s) = G(s)R(s). The system response y(t) is the inverse Laplace transform, that is, y(t) = L^{-1}[G(s)R(s)]. For a process control system, different input signals lead to different responses, such as the impulse response, the step response, and the frequency response. In general, for the transfer function G(s), the system response is a function containing exponential terms or sine terms. Therefore, the response y(t) = f(θ, t) is a highly nonlinear function with respect to the system parameters θ. This nonlinear form causes difficulties for estimating the system parameters from the response measurement data. The identification algorithm is realized by constructing and minimizing a cost function of the system parameters, so the problem of minimizing the cost function is converted into a nonlinear optimization.
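As a concrete illustration of why the sampled response is nonlinear in the parameters, the following sketch (not taken from the paper; the first-order model, its gain and time constant, the sampling instants, and the noise level are all assumed for illustration) evaluates the step response of a hypothetical G(s) = K/(Ts + 1) at discrete sampling instants and the associated quadratic cost.

```python
import numpy as np

# Hypothetical first-order model G(s) = K / (T s + 1): its unit-step response is
# y(t) = K * (1 - exp(-t / T)), which is nonlinear in the time constant T.
def step_response(theta, t):
    K, T = theta
    return K * (1.0 - np.exp(-t / T))

# Assumed sampling instants and noise level, for illustration only.
rng = np.random.default_rng(0)
t_k = 0.1 * np.arange(1, 11)                      # t_1, ..., t_10
y_meas = step_response((2.0, 0.5), t_k) + 0.01 * rng.standard_normal(t_k.size)

# Quadratic cost of a trial parameter vector theta = (K, T).
def cost(theta):
    return 0.5 * np.sum((y_meas - step_response(theta, t_k)) ** 2)

print(cost((2.0, 0.5)), cost((1.5, 0.8)))         # the true parameters give the smaller cost
```

Because T enters through an exponential, minimizing this cost is a nonlinear optimization problem; the same situation arises for the sine-type responses considered in the following sections.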

The measurement data contain the parameter information, and the use of the measurement data is an important factor for an identification method. Suppose that the sampling instants are t_k; the measurement data are then represented as y(t_1), y(t_2), ..., and the values of the response function at the sampling instants are f(θ, t_1), f(θ, t_2), .... Because the measurement data contain measurement noise, the measured output is not equal to the response model output. If the model output is very close to the measured output, the model is effective for describing the system characteristics. The model structure and the parameters are the components of the system model; after the model structure is determined, the model parameters are obtained by the parameter estimation method.

According to the different measurement data used in the algorithm, we can construct different cost functions. For the single data point y(t_k) at time t_k, the cost function can be defined as

J(\theta) := \frac{1}{2} [y(t_k) - f(\theta, t_k)]^2    (1)

For a batch of measurement data y(t_k), y(t_{k-1}), ..., y(t_{k-p+1}), where p is the data length of the batch, define the cost function based on the batch data

J(\theta) := \frac{1}{2} \sum_{j=k-p+1}^{k} [y(t_j) - f(\theta, t_j)]^2    (2)

From these cost functions, we can see that they are rolling optimization functions that change with the sampling time, so the real-time measurement data can be used to estimate the system parameters. The purpose of this article is to propose parameter estimation methods based on nonlinear dynamical rolling optimization and the discrete response measurement data.
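To make the rolling criterion concrete, here is a minimal sketch of the batch cost (2) evaluated over a moving window for a generic response model f(θ, t). The sampling grid, the parameter values, and the test model (the double-frequency sine response introduced in the next section) are assumptions chosen only for illustration.

```python
import numpy as np

def rolling_cost(f, theta, y, t, k, p):
    """Batch cost (2): J(theta) = 1/2 * sum_{j=k-p+1}^{k} [y(t_j) - f(theta, t_j)]^2.

    y and t hold the measurements and sampling instants (array index j-1 stores t_j),
    k is the current 1-based sample index and p is the window length.
    """
    idx = np.arange(k - p, k)                     # array indices for j = k-p+1, ..., k
    residual = y[idx] - f(theta, t[idx])
    return 0.5 * np.sum(residual ** 2)

# Assumed test model: f(theta, t) = a*sin(w*t) + d*sin(2*w*t) with theta = (a, d, w).
f = lambda theta, t: theta[0] * np.sin(theta[2] * t) + theta[1] * np.sin(2 * theta[2] * t)
t = 0.05 * np.arange(1, 101)                      # assumed sampling grid
y = f((5.0, 10.0, 2.0), t)                        # noise-free data for the check below

# As k advances the window slides forward, which realizes the rolling optimization.
print(rolling_cost(f, (5.0, 10.0, 2.0), y, t, k=100, p=10))   # ~0 at the true parameters
```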

The recursion gradient identification algorithm

Consider the response of a combined sine signal with double frequency

\hat{y}(t) = a \sin(\omega t) + d \sin(n \omega t)    (3)

where a and d are the amplitudes, ω is the angular frequency, and n is a constant. Suppose that the observation times are t_k, k = 1, 2, ..., and that the measurement data at time t_k are y(t_k). In practice, the measurement data y(t_k) contain measurement error, so y(t_k) is not equal to the model output ŷ(t_k). Define the difference between the model output and the measured output as

v(t_k) := y(t_k) - \hat{y}(t_k) = y(t_k) - a \sin(\omega t_k) - d \sin(n \omega t_k)    (4)

If this error is very small, the signal model is satisfactory, so the problem can be converted into minimizing a cost function. Define the parameter vector θ := [a, d, ω]^T ∈ R^3 and the cost function

J_1(\theta) := J_1(a, d, \omega) = \frac{1}{2} v^2(t_k) = \frac{1}{2} [y(t_k) - a \sin(\omega t_k) - d \sin(n \omega t_k)]^2    (5)

Because the measurement data y(t_k) vary with the sampling time t_k, the observation data in the cost function J_1(θ) vary with the sampling time; that is, dynamical measurement data are used in J_1(θ). The parameter estimate θ̂(t_k) at time t = t_k can be obtained by minimizing J_1(θ). Taking the first-order derivative of J_1(θ) with respect to θ gives the gradient vector

grad[J_1(\theta)] := \frac{\partial J_1(\theta)}{\partial \theta} = \left[ \frac{\partial J_1(\theta)}{\partial a}, \frac{\partial J_1(\theta)}{\partial d}, \frac{\partial J_1(\theta)}{\partial \omega} \right]^T \in R^3    (6)

\frac{\partial J_1(\theta)}{\partial a} = -\sin(\omega t_k) [y(t_k) - a \sin(\omega t_k) - d \sin(n \omega t_k)]    (7)

\frac{\partial J_1(\theta)}{\partial d} = -\sin(n \omega t_k) [y(t_k) - a \sin(\omega t_k) - d \sin(n \omega t_k)]    (8)

\frac{\partial J_1(\theta)}{\partial \omega} = -[a t_k \cos(\omega t_k) + n d t_k \cos(n \omega t_k)] [y(t_k) - a \sin(\omega t_k) - d \sin(n \omega t_k)]    (9)

Let θ̂(t_k) := [â(t_k), d̂(t_k), ω̂(t_k)]^T be the estimate of θ at time t = t_k, that is, the parameter estimate at recursion k. According to the negative gradient search and minimizing the cost function J_1(θ), the gradient algorithm (also called the least mean square (LMS) algorithm) for estimating the sine signal with double frequency is given by

\hat{\theta}(t_k) = \hat{\theta}(t_{k-1}) - \mu(t_k)\, grad[J_1(\hat{\theta}(t_{k-1}))]    (10)

grad[J_1(\hat{\theta}(t_{k-1}))] = \left[ \frac{\partial J_1(\hat{\theta}(t_{k-1}))}{\partial a}, \frac{\partial J_1(\hat{\theta}(t_{k-1}))}{\partial d}, \frac{\partial J_1(\hat{\theta}(t_{k-1}))}{\partial \omega} \right]^T    (11)

\frac{\partial J_1(\hat{\theta}(t_{k-1}))}{\partial a} = -\sin(\hat{\omega}(t_{k-1}) t_k) [y(t_k) - \hat{a}(t_{k-1}) \sin(\hat{\omega}(t_{k-1}) t_k) - \hat{d}(t_{k-1}) \sin(n \hat{\omega}(t_{k-1}) t_k)]    (12)

\frac{\partial J_1(\hat{\theta}(t_{k-1}))}{\partial d} = -\sin(n \hat{\omega}(t_{k-1}) t_k) [y(t_k) - \hat{a}(t_{k-1}) \sin(\hat{\omega}(t_{k-1}) t_k) - \hat{d}(t_{k-1}) \sin(n \hat{\omega}(t_{k-1}) t_k)]    (13)

\frac{\partial J_1(\hat{\theta}(t_{k-1}))}{\partial \omega} = -[\hat{a}(t_{k-1}) t_k \cos(\hat{\omega}(t_{k-1}) t_k) + n \hat{d}(t_{k-1}) t_k \cos(n \hat{\omega}(t_{k-1}) t_k)] [y(t_k) - \hat{a}(t_{k-1}) \sin(\hat{\omega}(t_{k-1}) t_k) - \hat{d}(t_{k-1}) \sin(n \hat{\omega}(t_{k-1}) t_k)]    (14)

\mu(t_k) = \arg\min_{\mu \ge 0} J_1(\hat{\theta}(t_{k-1}) - \mu\, grad[J_1(\hat{\theta}(t_{k-1}))])    (15)

The steps of computing the parameter estimates θ̂(t_k) using the recursion gradient (RG) identification algorithm (10)–(15) are as follows:

1. To initialize: let k = 1 and let θ̂(t_0) = [â(t_0), d̂(t_0), ω̂(t_0)]^T ∈ R^3 be a real vector;
2. Collect the measurement data y(t_k);
3. Compute the terms of the gradient using equations (12)–(14) and form the gradient vector grad[J_1(θ̂(t_{k-1}))] using equation (11);
4. Compute the step size μ(t_k) using equation (15);
5. Compute the parameter estimates θ̂(t_k) using equation (10);
6. Increase k by 1 and go to Step 2.

Remark 1. The step size μ(t_k) at each recursion can be determined by a one-dimensional search. However, it is very difficult to obtain the step size by solving an equation directly. We can use a cut-and-try method to determine the step size, or use an optimization method such as the Newton iterative method or the gradient method.
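The following is a minimal sketch of one RG recursion (10)–(15). The one-dimensional search (15) is replaced here by a coarse grid search over candidate step sizes, which is one of the cut-and-try options mentioned in Remark 1. The sampling period, the initial estimate, and the noise level are assumptions (the paper does not list them), with the true parameters borrowed from Example 1 only for concreteness.

```python
import numpy as np

def model(theta, t, n=2.0):
    a, d, w = theta
    return a * np.sin(w * t) + d * np.sin(n * w * t)

def grad_J1(theta, t, y, n=2.0):
    """Gradient (7)-(9) of J1(theta) = 1/2 [y(t_k) - a sin(w t_k) - d sin(n w t_k)]^2."""
    a, d, w = theta
    e = y - model(theta, t, n)                      # residual at the current sample
    return -e * np.array([np.sin(w * t),
                          np.sin(n * w * t),
                          a * t * np.cos(w * t) + n * d * t * np.cos(n * w * t)])

def rg_step(theta_prev, t_k, y_k, n=2.0, mu_grid=np.logspace(-6, 0, 60)):
    """One RG recursion (10); the step size mu is picked by a coarse grid search in place of (15)."""
    g = grad_J1(theta_prev, t_k, y_k, n)
    costs = [0.5 * (y_k - model(theta_prev - mu * g, t_k, n)) ** 2 for mu in mu_grid]
    mu_best = mu_grid[int(np.argmin(costs))]
    return theta_prev - mu_best * g

# Assumed simulation: true theta = [5, 10, 2], n = 2, sampling period 0.05, noise std 0.1.
rng = np.random.default_rng(0)
theta_hat = np.array([1.0, 1.0, 1.0])               # assumed initial guess
for k in range(1, 601):
    t_k = 0.05 * k
    y_k = model([5.0, 10.0, 2.0], t_k) + 0.1 * rng.standard_normal()
    theta_hat = rg_step(theta_hat, t_k, y_k)
print(theta_hat)                                     # estimates after 600 recursions (illustrative run)
```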

The SG identification method based on the dynamical data

The gradient parameter estimation algorithm needs to optimize the step size at each recursion k. For the sine response signal, it is complicated to obtain the step size by a one-dimensional search, and as the recursion proceeds the step size does not tend to zero, so the algorithm is sensitive to noise. In order to avoid this sensitivity to noise and the complicated computation required to determine the step size, we present the SG parameter estimation method, which determines the step size automatically at each recursion. Define the cost function based on the dynamical data

J_2(\theta) := \frac{1}{2} [y(t_k) - a \sin(\omega t_k) - d \sin(n \omega t_k)]^2    (16)

Define the information vector

\varphi(\theta, t_k) := [\sin(\omega t_k), \sin(n \omega t_k), a t_k \cos(\omega t_k) + n d t_k \cos(n \omega t_k)]^T \in R^3    (17)

Then, the gradient of the cost function J_2(θ) can be represented as

grad[J_2(\theta)] = -\varphi(\theta, t_k) [y(t_k) - a \sin(\omega t_k) - d \sin(n \omega t_k)]    (18)

Let θ̂(t_k) := [â(t_k), d̂(t_k), ω̂(t_k)]^T be the recursive estimate of the parameter vector θ at recursion k. Then we have

grad[J_2(\theta)]|_{\theta = \hat{\theta}(t_{k-1})} = grad[J_2(\hat{\theta}(t_{k-1}))] = -\varphi(\hat{\theta}(t_{k-1}), t_k) [y(t_k) - \hat{a}(t_{k-1}) \sin(\hat{\omega}(t_{k-1}) t_k) - \hat{d}(t_{k-1}) \sin(n \hat{\omega}(t_{k-1}) t_k)]    (19)

In order to reduce the sensitivity to noise, the step size μ(t_k) is set to

\mu(t_k) := \frac{1}{r(t_k)}, \quad r(t_k) = r(t_{k-1}) + \|\varphi(\hat{\theta}(t_{k-1}), t_k)\|^2    (20)

Based on the negative gradient search and minimizing the cost function J_2(θ), we obtain the SG algorithm for estimating the sine response:

\hat{\theta}(t_k) = \hat{\theta}(t_{k-1}) - \mu(t_k)\, grad[J_2(\theta)]|_{\theta = \hat{\theta}(t_{k-1})} = \hat{\theta}(t_{k-1}) + \frac{\hat{\varphi}(t_k)}{r(t_k)} [y(t_k) - \hat{a}(t_{k-1}) \sin(\hat{\omega}(t_{k-1}) t_k) - \hat{d}(t_{k-1}) \sin(n \hat{\omega}(t_{k-1}) t_k)]    (21)

r(t_k) = r(t_{k-1}) + \|\hat{\varphi}(t_k)\|^2, \quad r(t_0) = 1    (22)

\hat{\varphi}(t_k) = [\sin(\hat{\omega}(t_{k-1}) t_k), \sin(n \hat{\omega}(t_{k-1}) t_k), \hat{a}(t_{k-1}) t_k \cos(\hat{\omega}(t_{k-1}) t_k) + n \hat{d}(t_{k-1}) t_k \cos(n \hat{\omega}(t_{k-1}) t_k)]^T    (23)

\hat{\theta}(t_k) = [\hat{a}(t_k), \hat{d}(t_k), \hat{\omega}(t_k)]^T    (24)

The steps of computing the parameter estimates θ̂(t_k) using the SG identification algorithm (equations (21)–(24)) are as follows:

1. To initialize: let k = 1, let θ̂(t_0) = [â(t_0), d̂(t_0), ω̂(t_0)]^T ∈ R^3 be a real vector, and pre-set the recursion length L_e;
2. Collect the measurement data y(t_k);
3. Compute the information vector φ̂(t_k) using equation (23);
4. Compute r(t_k) using equation (22);
5. Compute the parameter estimates θ̂(t_k) using equation (21);
6. If k = L_e, terminate the recursive process and obtain the characteristic parameters from equation (24); otherwise, increase k by 1 and go to Step 2.

Remark 2. The way the step size μ(t_k) is determined differs between the gradient method and the SG method. In the SG method, the step size μ(t_k) is set using the information vector, so it updates automatically as the recursion k increases.
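A minimal sketch of the SG recursion (21)–(24) is given below. The signal and noise level are borrowed from Example 2, while the sampling period and the initial estimate θ̂(t_0) are assumptions; the paper does not specify them.

```python
import numpy as np

def sg_estimate(y, t, n=2.0, theta0=(1.0, 1.0, 1.0)):
    """Stochastic gradient recursion (21)-(24) for y(t) = a sin(w t) + d sin(n w t).

    theta0 is an assumed initial guess; y and t hold the measurements and the
    sampling instants, processed once in recursive (online) fashion.
    """
    a, d, w = theta0
    r = 1.0                                          # r(t_0) = 1
    for t_k, y_k in zip(t, y):
        # information vector (23), built from the previous estimates
        phi = np.array([np.sin(w * t_k),
                        np.sin(n * w * t_k),
                        a * t_k * np.cos(w * t_k) + n * d * t_k * np.cos(n * w * t_k)])
        r += phi @ phi                               # r(t_k) = r(t_{k-1}) + ||phi||^2, eq. (22)
        e = y_k - a * np.sin(w * t_k) - d * np.sin(n * w * t_k)   # innovation
        a, d, w = np.array([a, d, w]) + phi / r * e  # update (21) with step size 1/r(t_k)
    return np.array([a, d, w])

# Assumed usage with the Example 2 signal (n = 3), sampling period 0.02, noise std 0.5.
rng = np.random.default_rng(1)
t = 0.02 * np.arange(1, 101)
y = 2 * np.sin(5 * t) + 3 * np.sin(15 * t) + 0.5 * rng.standard_normal(t.size)
print(sg_estimate(y, t, n=3.0, theta0=(2.2, 3.2, 4.7)))   # theta0 is an assumed starting point
```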

The multi-innovation stochastic gradient identification based on the moving window data

The gradient identification algorithm and the SG algorithm use only the current data at sampling time t_k; because only one data point is used at each step, the accuracy is low. In order to enhance the estimation accuracy, we use a batch of the most recent data to construct the cost function. For a system identification algorithm, the more measurement data are used, the higher the estimation accuracy; however, using too many measurement data leads to a heavy computational load. Therefore, we propose to use moving window measurement data to derive the parameter estimation algorithm. The moving window data are a batch of data that update dynamically, represented as y(t_k), y(t_{k-1}), ..., y(t_{k-p+1}), where p is the length of the moving window. Based on the moving window data and the multi-innovation theory,50 we derive the parameter identification algorithm for the sine signal with double frequency.

Define the cost function J_3(θ) using the moving window data

J_3(\theta) := \frac{1}{2} \sum_{i=0}^{p-1} [y(t_{k-i}) - a \sin(\omega t_{k-i}) - d \sin(n \omega t_{k-i})]^2

Taking the first-order derivative of J_3(θ) with respect to θ = [a, d, ω]^T, the gradient vector is given by

grad[J_3(\theta)] := \frac{\partial J_3(\theta)}{\partial \theta} = \left[ \frac{\partial J_3(\theta)}{\partial a}, \frac{\partial J_3(\theta)}{\partial d}, \frac{\partial J_3(\theta)}{\partial \omega} \right]^T \in R^3    (25)

\frac{\partial J_3(\theta)}{\partial a} = -\sum_{i=0}^{p-1} [y(t_{k-i}) - a \sin(\omega t_{k-i}) - d \sin(n \omega t_{k-i})] \sin(\omega t_{k-i})    (26)

\frac{\partial J_3(\theta)}{\partial d} = -\sum_{i=0}^{p-1} [y(t_{k-i}) - a \sin(\omega t_{k-i}) - d \sin(n \omega t_{k-i})] \sin(n \omega t_{k-i})    (27)

\frac{\partial J_3(\theta)}{\partial \omega} = -\sum_{i=0}^{p-1} [y(t_{k-i}) - a \sin(\omega t_{k-i}) - d \sin(n \omega t_{k-i})] [a t_{k-i} \cos(\omega t_{k-i}) + n d t_{k-i} \cos(n \omega t_{k-i})]    (28)

Define the information vector

\varphi(\theta, t_k) := [\sin(\omega t_k), \sin(n \omega t_k), a t_k \cos(\omega t_k) + n d t_k \cos(n \omega t_k)]^T \in R^3

and the information matrix based on the window data

\Phi(p, \theta, t_k) := [\varphi(\theta, t_k), \varphi(\theta, t_{k-1}), \ldots, \varphi(\theta, t_{k-p+1})] \in R^{3 \times p}

Let θ̂(t_k) := [â(t_k), d̂(t_k), ω̂(t_k)]^T be the estimate of the parameter vector θ at time t = t_k. Define the scalar innovation e(θ̂(t_{k-1}), t_k) := y(t_k) − â(t_{k-1}) sin(ω̂(t_{k-1}) t_k) − d̂(t_{k-1}) sin(nω̂(t_{k-1}) t_k). Expanding the scalar innovation e(θ̂(t_{k-1}), t_k) ∈ R into the innovation vector gives

E(p, t_k) := [e(\hat{\theta}(t_{k-1}), t_k), e(\hat{\theta}(t_{k-1}), t_{k-1}), \ldots, e(\hat{\theta}(t_{k-1}), t_{k-p+1})]^T = Y(p, t_k) - F_1(p, t_k) - F_2(p, t_k)    (29)

Y(p, t_k) = [y(t_k), y(t_{k-1}), \ldots, y(t_{k-p+1})]^T \in R^p    (30)

F_1(p, t_k) = [\hat{a}(t_{k-1}) \sin(\hat{\omega}(t_{k-1}) t_k), \hat{a}(t_{k-1}) \sin(\hat{\omega}(t_{k-1}) t_{k-1}), \ldots, \hat{a}(t_{k-1}) \sin(\hat{\omega}(t_{k-1}) t_{k-p+1})]^T \in R^p    (31)

F_2(p, t_k) = [\hat{d}(t_{k-1}) \sin(n \hat{\omega}(t_{k-1}) t_k), \hat{d}(t_{k-1}) \sin(n \hat{\omega}(t_{k-1}) t_{k-1}), \ldots, \hat{d}(t_{k-1}) \sin(n \hat{\omega}(t_{k-1}) t_{k-p+1})]^T \in R^p    (32)

Using the negative gradient search and minimizing the cost function J_3(θ), the multi-innovation stochastic gradient (MISG) algorithm is given by

\hat{\theta}(t_k) = \hat{\theta}(t_{k-1}) - \frac{1}{r(t_k)}\, grad[J_3(\hat{\theta}(t_{k-1}))] = \hat{\theta}(t_{k-1}) + \frac{\hat{\Phi}(p, t_k)\, E(p, t_k)}{r(t_k)}    (33)

E(p, t_k) = [y(t_k) - \hat{a}(t_{k-1}) \sin(\hat{\omega}(t_{k-1}) t_k) - \hat{d}(t_{k-1}) \sin(n \hat{\omega}(t_{k-1}) t_k), \ldots, y(t_{k-p+1}) - \hat{a}(t_{k-1}) \sin(\hat{\omega}(t_{k-1}) t_{k-p+1}) - \hat{d}(t_{k-1}) \sin(n \hat{\omega}(t_{k-1}) t_{k-p+1})]^T    (34)

r(t_k) = r(t_{k-1}) + \|\hat{\Phi}(p, t_k)\|^2, \quad r(t_0) = 1    (35)

\hat{\Phi}(p, t_k) = \Phi(p, \hat{\theta}(t_{k-1}), t_k) = [\varphi(\hat{\theta}(t_{k-1}), t_k), \varphi(\hat{\theta}(t_{k-1}), t_{k-1}), \ldots, \varphi(\hat{\theta}(t_{k-1}), t_{k-p+1})] = [\hat{\varphi}(t_k), \hat{\varphi}(t_{k-1}), \ldots, \hat{\varphi}(t_{k-p+1})]    (36)

\hat{\varphi}(t_{k-i}) = \varphi(\hat{\theta}(t_{k-1}), t_{k-i}) = [\sin(\hat{\omega}(t_{k-1}) t_{k-i}), \sin(n \hat{\omega}(t_{k-1}) t_{k-i}), \hat{a}(t_{k-1}) t_{k-i} \cos(\hat{\omega}(t_{k-1}) t_{k-i}) + n \hat{d}(t_{k-1}) t_{k-i} \cos(n \hat{\omega}(t_{k-1}) t_{k-i})]^T, \quad i = 0, 1, \ldots, p-1    (37)

\hat{\theta}(t_k) = [\hat{a}(t_k), \hat{d}(t_k), \hat{\omega}(t_k)]^T    (38)

The steps of computing the parameter estimates using the MISG algorithm (equations (33)–(38)) are as follows:

1. To initialize: let k = 1, pre-set the innovation length p and the recursion length L_e, let θ̂(t_0) = [â(t_0), d̂(t_0), ω̂(t_0)]^T be a real vector, and let r(t_0) = 1;
2. Collect the measurement data y(t_k). Compute the information vectors φ̂(t_{k-i}), i = 0, 1, ..., p-1, using equation (37), and form the stacked matrix Φ̂(p, t_k) using equation (36);
3. Compute r(t_k) using equation (35) and the innovation vector E(p, t_k) using equation (34);
4. Update the parameter estimates θ̂(t_k) using equation (33), and obtain the estimates of the characteristic parameters â(t_k), d̂(t_k), and ω̂(t_k) from θ̂(t_k) using equation (38);
5. If k = L_e, terminate the recursive process; otherwise, increase k by 1 and go to Step 2.

Remark 3. The MISG method is derived from the SG method by expanding the scalar innovation into an innovation vector using the moving window measurement data. The moving window data update dynamically with the recursion k, so more data participate in the recursive estimation computation, which improves the estimation accuracy.
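The following sketch implements the MISG recursion (33)–(38) with a moving window of length p. The Example 3 signal is used as the test case, but the sampling period, the noise realization, and the initial estimate are assumptions made only for illustration.

```python
import numpy as np

def misg_estimate(y, t, p=5, n=2.0, theta0=(1.0, 1.0, 1.0)):
    """Multi-innovation SG recursion (33)-(38) with moving window length p.

    The first p-1 samples are used only to fill the window; theta0 is an
    assumed initial guess and n is the (known) frequency ratio.
    """
    theta = np.array(theta0, dtype=float)
    r = 1.0                                              # r(t_0) = 1
    for k in range(p - 1, len(t)):
        a, d, w = theta
        tw = t[k - p + 1:k + 1][::-1]                    # t_k, t_{k-1}, ..., t_{k-p+1}
        yw = y[k - p + 1:k + 1][::-1]
        # stacked information matrix (36)-(37), size 3 x p
        Phi = np.vstack([np.sin(w * tw),
                         np.sin(n * w * tw),
                         a * tw * np.cos(w * tw) + n * d * tw * np.cos(n * w * tw)])
        r += np.sum(Phi ** 2)                            # r(t_k) = r(t_{k-1}) + ||Phi||^2, eq. (35)
        E = yw - a * np.sin(w * tw) - d * np.sin(n * w * tw)   # innovation vector (34)
        theta = theta + Phi @ E / r                      # update (33)
    return theta

# Assumed usage with the Example 3 signal (a=8, d=3, w=0.5, n=2), sampling period 0.2,
# noise standard deviation 4.0 (variance 4.00^2).
rng = np.random.default_rng(2)
t = 0.2 * np.arange(1, 301)
y = 8 * np.sin(0.5 * t) + 3 * np.sin(t) + 4.0 * rng.standard_normal(t.size)
print(misg_estimate(y, t, p=5, n=2.0, theta0=(7.0, 2.5, 0.3)))
```

Increasing p makes each update use more of the window at the cost of more computation per recursion, which is the accuracy/complexity trade-off discussed above.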


Table 1. The RG estimates and estimation errors.

σ²       k     a         d          ω         δ (%)
0.10²    5     5.99699   11.99699   2.90912   20.78537
         10    5.98910   11.98911   2.83020   20.43685
         20    5.97690   11.97694   2.76322   20.08543
         50    5.55476   11.56861   2.36928   14.50865
         100   4.89302   10.97619   2.28766   8.77658
         200   4.62401   10.74657   2.27957   7.60596
         400   5.15293   10.81028   2.27966   7.51556
         600   5.03696   10.82366   2.27966   7.51475
0.80²    5     5.99688   11.99688   2.92257   20.82673
         10    5.98973   11.98974   2.85813   20.52884
         20    5.98466   11.98468   2.82768   20.37081
         50    5.56719   11.57979   2.38064   14.65151
         100   4.97040   11.04183   2.30159   9.29974
         200   4.58357   10.74612   2.29354   7.79463
         400   5.16766   10.82690   2.29367   7.70886
         600   5.02444   10.84434   2.29367   7.71654
2.00²    5     5.99670   11.99671   2.94562   20.89880
         10    5.99080   11.99081   2.90598   20.69168
         20    5.99797   11.99795   2.93829   20.88882
         50    5.58861   11.59908   2.39947   14.89849
         100   5.10329   11.15457   2.32473   10.30981
         200   4.51425   10.74540   2.31673   8.14811
         400   5.19292   10.85543   2.31693   8.04617
         600   5.00297   10.87984   2.31693   8.06935
True values    5.00000   10.00000   2.00000

RG: recursion gradient.

Figure 1. The RG parameter estimation errors δ versus k.

Examples

In this section, we provide some examples to show the performance of the proposed methods.

Example 1. Consider the power signal

y(t) = 5 \sin(2t) + 10 \sin(4t)

where the true values are a = 5, d = 10, and ω = 2. In the simulation, white noise is added to the response signal with noise variance σ² = 0.10², σ² = 0.80², and σ² = 2.00². Using the proposed RG method to estimate the response parameters, the parameter estimates and their estimation errors δ := ‖θ̂(t_k) − θ‖/‖θ‖ are shown in Table 1, and the estimation errors δ versus k are plotted in Figure 1.
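For concreteness, a short sketch of the Example 1 simulation setup and of the error index δ is given below; the sampling period and the random seed are assumptions, since the paper does not report them.

```python
import numpy as np

# Sketch of the Example 1 data generation and of the error index delta; the
# sampling period (here 0.05) is an assumption.
rng = np.random.default_rng(0)
theta_true = np.array([5.0, 10.0, 2.0])                  # a, d, omega
t = 0.05 * np.arange(1, 601)
y = 5 * np.sin(2 * t) + 10 * np.sin(4 * t) + 0.10 * rng.standard_normal(t.size)   # sigma^2 = 0.10^2

def delta(theta_hat):
    """Relative estimation error delta = ||theta_hat - theta|| / ||theta||."""
    return np.linalg.norm(theta_hat - theta_true) / np.linalg.norm(theta_true)

print(delta(np.array([5.04, 10.82, 2.28])))              # error of a hypothetical estimate
```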

Table 2. The SG estimates and estimation errors.

k     a         d         ω         δ (%)
1     2.17973   3.24120   4.65368   7.44126
5     2.17142   3.22462   4.58170   8.18873
10    2.17042   3.22450   4.62856   7.56393
20    2.17067   3.22367   4.67715   6.94695
30    2.17059   3.22361   4.68529   6.84691
50    2.17056   3.22360   4.69062   6.78246
100   2.17055   3.22359   4.68915   6.79994

1     2.18881   3.26753   5.03265   5.33818
5     2.17103   3.23313   4.91726   4.87875
10    2.16979   3.23267   4.97627   4.68840
20    2.16924   3.23136   4.97683   4.66529
30    2.16911   3.23126   4.98486   4.65399
50    2.16906   3.23121   4.99371   4.64754
100   2.16903   3.23120   4.98966   4.64897

True values   2.00000   3.00000   5.00000

SG: stochastic gradient.

Figure 2. The SG parameter estimation errors δ versus k.

Example 2. Consider a power signal with double frequency

y(t) = 2 \sin(5t) + 3 \sin(15t)

where the true values are a = 2, d = 3, and ω = 5. In the simulation, white noise is added to the response signal. In order to test the sensitivity to noise, the noise variance is taken as σ² = 0.50² and σ² = 4.00², respectively; the ratio of the two noise standard deviations is 8. Using the proposed SG method to estimate the parameters of the power signal in Example 2, the parameter estimates and their estimation errors δ := ‖θ̂(t_k) − θ‖/‖θ‖ are shown in Table 2, and the estimation errors δ versus k are plotted in Figure 2.

Example 3. Consider a power signal with double frequency

y(t) = 8 \sin(0.5t) + 3 \sin(t)

where the true values are a = 8, d = 3, and ω = 0.5.


Table 3. The MISG estimates and estimation errors.

p    k     a         d         ω          δ (%)
1    5     7.44140   2.78157   0.05408    8.73253
     10    7.44425   2.77984   0.06366    8.64720
     50    7.44828   2.77983   0.38174    7.07687
     100   7.44832   2.77984   0.43586    6.98053
     150   7.44833   2.77983   0.46594    6.95156
     200   7.44833   2.77983   0.48631    6.94204
     250   7.44833   2.77982   0.50131    6.94025
     300   7.44832   2.77982   0.51286    6.94190
5    5     7.52281   2.82715   -0.38684   11.93881
     10    7.52850   2.82367   -0.36768   11.72066
     50    7.53656   2.82367   0.26848    6.39399
     100   7.53665   2.82367   0.37671    5.96902
     150   7.53666   2.82366   0.43689    5.83930
     200   7.53666   2.82365   0.47762    5.79851
     250   7.53665   2.82365   0.50762    5.79337
     300   7.53665   2.82364   0.53072    5.80387
10   5     7.60421   2.87272   -0.82775   16.25640
     10    7.61275   2.86751   -0.79901   15.91338
     50    7.62485   2.86750   0.15522    6.15136
     100   7.62497   2.86751   0.31757    5.11276
     150   7.62499   2.86750   0.40783    4.77032
     200   7.62498   2.86748   0.46893    4.66142
     250   7.62498   2.86747   0.51392    4.65022
     300   7.62497   2.86746   0.54858    4.68202
20   5     7.76701   2.96387   -1.70959   25.96369
     10    7.78126   2.95518   -1.66169   25.39185
     50    7.80141   2.95517   -0.07131   7.08639
     100   7.80162   2.95519   0.19928    4.24171
     150   7.80164   2.95516   0.34972    2.95448
     200   7.80164   2.95514   0.45156    2.44271
     250   7.80163   2.95512   0.52654    2.39653
     300   7.80162   2.95510   0.58431    2.57260
True values      8.00000   3.00000   0.50000

MISG: multi-innovation stochastic gradient.

In the simulation, the dynamical data with different innovation lengths p = 1, p = 5, p = 10, and p = 20 are used to estimate the response parameters. The recursion length is L_e = 300 and the variance of the white noise is σ² = 4.00². Using the proposed MISG method to estimate the signal parameters, the parameter estimates and their estimation errors δ := ‖θ̂(t_k) − θ‖/‖θ‖ for different p are shown in Table 3, and the estimation errors δ versus k for different p are plotted in Figure 3.

From the simulation results, we can draw the following conclusions:

- The common feature of the proposed RG, SG, and MISG methods is that the estimation errors decrease as the recursion k increases. This implies that the proposed methods are effective for estimating the response parameters.
- The simulation results in Example 1 show that the RG method is sensitive to noise: the curves of the parameter estimates fluctuate strongly. By contrast, the curves of the parameter estimates given by the SG method in Example 2 show that the estimation errors of the SG method exhibit no serious fluctuation even when the noise variance is large.
- The estimation accuracy of the MISG method changes with the innovation length p (i.e. the moving window data length). The innovation length is the number of measurement data participating in each recursion; the larger the innovation length, the higher the estimation accuracy.
- Comparing the three proposed methods, the MISG method is more effective than the RG method and the SG method. Although the RG method and the SG method have lower accuracy, their computational complexity is low.

Figure 3. The MISG parameter estimation errors δ versus k.

Conclusion

This article considers the parameter estimation problem of the system response from discrete measurement data. Using the dynamical measurement data and the moving window measurement data, the RG method, the SG method, and the MISG method are presented for estimating the response parameters. The simulation results show that the proposed algorithms are effective. The proposed methods can be extended to parameter estimation in speech signal processing,51,52 the self-organizing map,53 and applied to other fields.54–65

Declaration of conflicting interests

The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Natural Science Research of Colleges and Universities in Jiangsu Province (No. 16KJB120007), the Qing Lan Project of Jiangsu Province, and the Natural Science Foundation of Jiangsu Province (No. BK20160162).

References

1. Raja MAZ and Chaudhary NI. Two-stage fractional least mean square identification algorithm for parameter estimation of CARMA systems. Signal Process 2015; 107: 327–339. 2. Xu L and Ding F. Recursive least squares and multi-innovation stochastic gradient parameter estimation methods for signal modeling. Circ Syst Signal Pr 2017; 36: 1735–1753.

3. Xu L. The damping iterative parameter identification method for dynamical systems based on the sine signal measurement. Signal Process 2016; 120: 660–667. 4. Ding F, Xu L and Zhu QM. Performance analysis of the generalised projection identification for time-varying systems. IET Control Theory A 2016; 10: 2506–2514. 5. Ding F, Wang FF, Xu L, et al. Parameter estimation for pseudo-linear systems using the auxiliary model and the decomposition technique. IET Control Theory A 2017; 11: 390–400. 6. Wang DQ, Mao L and Ding F. Recasted models based hierarchical extended stochastic gradient method for MIMO nonlinear systems. IET Control Theory A 2017; 11: 476–485. 7. Li JH, Zheng WX, Gu JP, et al. Parameter estimation algorithms for Hammerstein output error systems using Levenberg–Marquardt optimization method with varying interval measurements. J Frankl Inst 2017; 354: 316–331. 8. Xu L, Chen L and Xiong WL. Parameter estimation and controller design for dynamic systems from the step responses based on the Newton iteration. Nonlinear Dynam 2015; 79: 2155–2163. 9. Ding F, Wang FF, Xu L, et al. Decomposition based least squares iterative identification algorithm for multivariate pseudo-linear ARMA systems using the data filtering. J Frankl Inst 2017; 354: 1321–1339. 10. Wang C and Xun J. Novel recursive least squares identification for a class of nonlinear multiple-input single-output systems using the filtering technique. Adv Mech Eng 2016; 8: 1–8. 11. Wang DQ, Zhang Z and Yuan JY. Maximum likelihood estimation method for dual-rate Hammerstein systems. Int J Control Autom 2017; 15: 698–705. 12. Goncalves MLN and Melo JG. A Newton conditional gradient method for constrained nonlinear systems. J Comput Appl Math 2017; 311: 473–483.

13. Witkowski WR and Allen J. Approximation of parameter uncertainty in nonlinear optimization-based parameter estimation schemes. AIAA J 2015; 31: 947–950. 14. Cao X, Zhu DQ and Yang SX. Multi-AUV target search based on bioinspired neurodynamics model in 3-D underwater environments. IEEE T Neur Net Lear 2016; 27: 2364–2374. 15. Chu ZZ, Zhu DQ and Yang SX. Observer-based adaptive neural network trajectory tracking control for remotely operated vehicle. IEEE T Neur Net Lear 2017; 28: 1633–1645. 16. Xu L. A proportional differential control method for a time-delay system using the Taylor expansion approximation. Appl Math Comput 2014; 236: 391–399. 17. Ahmed S, Huang B and Shah SL. Identification from step responses with transient initial conditions. J Process Contr 2008; 18: 121–130. 18. Fedele G. A new method to estimate a first-order plus time delay model from step response. J Frankl Inst 2009; 346: 1–9. 19. Xu L and Ding F. Parameter estimation algorithms for dynamical response signals based on the multi-innovation theory and the hierarchical principle. IET Signal Process 2017; 11: 228–237. 20. Hidayat E and Medvedev A. Laguerre domain identification of continuous linear time-delay systems from impulse response data. Automatica 2012; 48: 2902–2907. 21. Na J, Yang J, Wu X, et al. Robust adaptive parameter estimation of sinusoidal signals. Automatica 2015; 53: 376–384. 22. Na J, Yang J, Ren XM, et al. Robust adaptive estimation of nonlinear system with time-varying parameters. Int J Adapt Control 2015; 29: 1055–1072. 23. Na J, Herrmann G and Zhang KQ. Improving transient performance of adaptive control via a modified reference model and novel adaptation. Int J Robust Nonlin 2017; 27: 1351–1372. 24. Wang XH and Ding F. Convergence of the recursive identification algorithms for multivariate pseudo-linear regressive systems. Int J Adapt Control 2016; 30: 824–842. 25. Wang XH and Ding F. Joint estimation of states and parameters for an input nonlinear state-space system with colored noise using the filtering technique. Circ Syst Signal Pr 2016; 35: 481–500. 26. Mao YW and Ding F. A novel parameter separation based identification algorithm for Hammerstein systems. Appl Math Lett 2016; 60: 21–27. 27. Wang DQ. Hierarchical parameter estimation for a class of MIMO Hammerstein systems based on the reframed models. Appl Math Lett 2016; 57: 13–19. 28. Wang DQ and Zhang W. Improved least squares identification algorithm for multivariable Hammerstein systems. J Frankl Inst 2015; 352: 5292–5370. 29. Wang YJ and Ding F. Novel data filtering based parameter identification for multiple-input multiple-output systems using the auxiliary model. Automatica 2016; 71: 308–313. 30. Wang YJ and Ding F. The filtering based iterative identification for multivariable systems. IET Control Theory A 2016; 10: 894–902.

31. Wang YJ and Ding F. The auxiliary model based hierarchical gradient algorithms and convergence analysis using the filtering technique. Signal Process 2016; 128: 212–221. 32. Mao YW and Ding F. Multi-innovation stochastic gradient identification for Hammerstein controlled autoregressive autoregressive systems based on the filtering technique. Nonlinear Dynam 2015; 79: 1745–1755. 33. Na J, Mahyuddin MN, Herrmann G, et al. Robust adaptive finite-time parameter estimation and control for robotic systems. Int J Robust Nonlin 2015; 25: 3045–3071. 34. Xu L, Ding F, Gu Y, et al. A multi-innovation state and parameter estimation algorithm for a state space system with d-step state-delay. Signal Process 2017; 140: 97–103. 35. Pan J, Jiang X, Wan XK, et al. A filtering based multi-innovation extended stochastic gradient algorithm for multivariable control systems. Int J Control Autom 2017; 15: 1189–1197. 36. Ding F and Wang XH. Hierarchical stochastic gradient algorithm and its performance analysis for a class of bilinear-in-parameter systems. Circ Syst Signal Pr 2017; 36: 1393–1405. 37. Ma JX, Xiong WL, Chen J, et al. Hierarchical identification for multivariate Hammerstein systems by using the modified Kalman filter. IET Control Theory A 2017; 11: 857–869. 38. Wang XH, Ding F, Alsaadi FE, et al. Convergence analysis of the hierarchical least squares algorithm for bilinear-in-parameter systems. Circ Syst Signal Pr 2016; 35: 4307–4330. 39. Wang XH and Ding F. Convergence of the auxiliary model based multi-innovation generalized extended stochastic gradient algorithm for Box-Jenkins systems. Nonlinear Dynam 2015; 82: 269–280. 40. Wang XH and Ding F. Modelling and multi-innovation parameter identification for Hammerstein nonlinear state space systems using the filtering technique. Math Comp Model Dyn 2016; 22: 113–140. 41. Wang XH, Ding F, Hayat T, et al. Combined state and multi-innovation parameter estimation for an input nonlinear state space system using the key term separation. IET Control Theory A 2016; 10: 1503–1512. 42. Li MH, Liu XM and Ding F. Least-squares-based iterative and gradient-based iterative estimation algorithms for bilinear systems. Nonlinear Dynam 2017; 89: 197–211. 43. Li MH, Liu XM and Ding F. The maximum likelihood least squares based iterative estimation algorithm for bilinear systems with autoregressive noise. J Frankl Inst 2017; 354: 4861–4881. 44. Li MH, Liu XM and Ding F. The gradient-based iterative estimation algorithms for bilinear systems with autoregressive noise. Circ Syst Signal Pr 2017; 36: 1–28. 45. Wang DQ and Gao YP. Recursive maximum likelihood identification method for a multivariable controlled autoregressive moving average system. IMA J Math Control I 2016; 33: 1015–1031. 46. Wang YJ and Ding F. Recursive least squares algorithm and gradient algorithm for Hammerstein-Wiener systems using the data filtering. Nonlinear Dynam 2016; 84: 1045–1053.

47. Wang YJ and Ding F. Recursive parameter estimation algorithms and convergence for a class of nonlinear systems with colored noise. Circ Syst Signal Pr 2016; 35: 3461–3481. 48. Xu L. Application of the Newton iteration algorithm to the parameter estimation for dynamical systems. J Comput Appl Math 2015; 288: 33–43. 49. Wang XH and Ding F. Recursive parameter and state estimation for an input nonlinear state space system using the hierarchical identification principle. Signal Process 2015; 117: 208–218. 50. Ding F, Wang XH, Mao L, et al. Joint state and multi-innovation parameter estimation for time-delay linear systems and its convergence based on the Kalman filtering. Digit Signal Process 2017; 62: 211–223. 51. Zhao N, Wu MH and Chen JJ. Android-based mobile educational platform for speech signal processing. Int J Elec Eng Educ 2017; 54: 3–16. 52. Wan XK, Li Y, Xia C, et al. A T-wave alternans assessment method based on least squares curve fitting technique. Measurement 2016; 86: 93–100. 53. Cao X and Zhu DQ. Multi-AUV task assignment and path planning with ocean current based on biological inspired self-organizing map and velocity synthesis algorithm. Intell Autom Soft Co 2017; 23: 31–39. 54. Chu ZZ, Zhu DQ and Yang SX. Adaptive sliding mode control for depth trajectory tracking of remotely operated vehicle with thruster nonlinearity. J Navigation 2017; 70: 149–164. 55. Ji Y and Ding F. Multiperiodicity and exponential attractivity of neural networks with mixed delays. Circ Syst Signal Pr 2017; 36: 2558–2573.

56. Pan J, Yang XH, Cai HF, et al. Image noise smoothing using a modified Kalman filter. Neurocomputing 2016; 173: 1625–1629. 57. Feng L, Wu MH, Li QX, et al. Array factor forming for image reconstruction of one-dimensional nonuniform aperture synthesis radiometers. IEEE Geosci Remote S 2016; 13: 237–241. 58. Ding F. Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling. Appl Math Model 2013; 37: 1694–1704. 59. Ding F. Several multi-innovation identification methods. Digit Signal Process 2010; 20: 1027–1039. 60. Wang DQ and Ding F. Performance analysis of the auxiliary models based multi-innovation stochastic gradient estimation algorithm for output error systems. Digit Signal Process 2010; 20: 750–762. 61. Liu YJ, Yu L and Ding F. Multi-innovation extended stochastic gradient algorithm and its performance analysis. Circ Syst Signal Pr 2010; 29: 649–667. 62. Ding F, Liu G and Liu XP. Parameter estimation with scarce measurements. Automatica 2011; 47: 1646–1655. 63. Fan CL, Li HJ and Ren X. The order recurrence quantification analysis of the characteristics of two-phase flow pattern based on multi-scale decomposition. T I Meas Control 2015; 37: 793–804. 64. Wang TZ, Qi J, Xu H, et al. Fault diagnosis method based on FFT-RPCA-SVM for cascaded-multilevel inverter. ISA T 2016; 60: 156–163. 65. Wang TZ, Wu H, Ni MQ, et al. An adaptive confidence limit for periodic non-steady conditions fault detection. Mech Syst Signal Pr 2016; 72–73: 328–345.
