Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference 2005 Seville, Spain, December 12-15, 2005
Parameter Identification of Hammerstein Models using Elimination Theory

Kaiyu Wang, Marc Bodson, John Chiasson, and Leon M. Tolbert

Abstract— A Hammerstein model is a system model in which the inputs go through a static nonlinearity followed by a linear time-invariant system. Often the static nonlinearity is modeled as a polynomial nonlinearity in the inputs or as a piecewise-constant nonlinearity. Such models are nonlinear in the unknown parameters and therefore present a challenging identification problem. In this work, the authors show that elimination theory can be used to solve exactly for the parameter values that minimize a least-squares criterion. Thus, the approach guarantees that the minimum can be found in a finite number of steps, unlike the iterative methods that are currently used.

Index Terms— Hammerstein models, Parameter identification, Nonlinear least-squares, Resultants.
K. Wang, J. Chiasson, and L. M. Tolbert are with the ECE Department, The University of Tennessee, Knoxville, TN 37996. [email protected], [email protected], [email protected]

M. Bodson is with the EE Department, The University of Utah, Salt Lake City, UT 84112. [email protected]

L. M. Tolbert is also with Oak Ridge National Laboratory, NTRC, 2360 Cherahala Boulevard, Knoxville, TN 37932. [email protected]

Drs. Chiasson and Tolbert would like to thank Oak Ridge National Laboratory for partially supporting this work through the UT/Battelle contract no. 4000023754. Dr. Tolbert would also like to thank the National Science Foundation for partially supporting this work through contract NSF ECS-0093884.

I. INTRODUCTION

Methods to identify the parameters of Hammerstein models have been considered for many years; see, for example, Refs. [1][2][3][4] for some of the early work in the controls literature. In the last few years, there has been renewed interest in the area, as seen in the recent publications [5][6][7][8][9][10][11][12]. With the exception of [3], these all involve iterative methods to solve for the parameters. The noniterative method in [3] (see also [13], pp. 584-585) is achieved by rewriting the system model so that it is linear in the parameters, but overparameterized. In this work, we present a noniterative method that does not overparameterize the system model and guarantees that the parameter vector minimizing the least-squares error criterion is found. The method exploits the fact that the squared error is rational in the unknown parameters. The optimal parameter vector is found using the method of resultants from elimination theory [14]. The approach here is based on previous work by the authors on identifying the parameters of an induction motor [15].

[Fig. 1. Hammerstein model: the input $u(t)$ passes through a static nonlinearity to produce $w(t)$, which drives a linear time-invariant system whose output, corrupted by the noise $v(t)$, is $y(t)$.]

II. HAMMERSTEIN MODEL

A Hammerstein model consists of the input going through a static nonlinear block followed by a linear time-invariant system, as shown in Figure 1 (see [7][13]). Mathematically, a Hammerstein system may be represented as [7]

$$y(t) + \sum_{i=1}^{n} a_i y(t-i) = \sum_{i=1}^{n} b_i w(t-i) + v(t) \qquad (1)$$

$$w(t) = g(u(t)) \qquad (2)$$

where $g(\cdot)$ is a static nonlinearity and $v(t)$ is (typically) a discrete-time random process of independent, identically distributed (i.i.d.) random variables. The static nonlinearity $g(\cdot)$ is often taken to have a polynomial form, that is,

$$g(u) = \sum_{i=1}^{m} c_i u^i. \qquad (3)$$

The unknowns are then $a = (a_1, \dots, a_n)$, $b = (b_1, \dots, b_n)$, $c = (c_1, \dots, c_m)$, where upper bounds on $n$ and $m$ are assumed to be known. Substituting the polynomial nonlinearity (3) into (2), the Hammerstein model (1) and (2) may be rewritten as

$$y(t) = -\sum_{i=1}^{n} a_i y(t-i) + \sum_{i=1}^{n}\sum_{j=1}^{m} b_i c_j u^j(t-i) + v(t)$$

or

$$y(t) = -\begin{bmatrix} a_1 & \cdots & a_n \end{bmatrix}\begin{bmatrix} y(t-1) \\ \vdots \\ y(t-n) \end{bmatrix} + \begin{bmatrix} b_1 & \cdots & b_n \end{bmatrix}\begin{bmatrix} u(t-1) & \cdots & u^m(t-1) \\ \vdots & \ddots & \vdots \\ u(t-n) & \cdots & u^m(t-n) \end{bmatrix}\begin{bmatrix} c_1 \\ \vdots \\ c_m \end{bmatrix} + v(t).$$
This model is linear in the unknown parameters $a_i$, but nonlinear in the unknown parameters $b_i$, $c_i$. It can be made to "look" linear in the parameters, as in [3] and [13], by letting

$$k_i \triangleq \begin{cases} a_i & \text{for } i = 1, \dots, n \\ b_1 c_{i-n} & \text{for } i = n+1, \dots, n+m \\ \quad\vdots & \\ b_n c_{i-n-(n-1)m} & \text{for } i = n+(n-1)m+1, \dots, n+nm \end{cases}$$

$$W_i(t) \triangleq \begin{cases} -y(t-i) & \text{for } i = 1, \dots, n \\ u^{i-n}(t-1) & \text{for } i = n+1, \dots, n+m \\ \quad\vdots & \\ u^{i-n-(n-1)m}(t-n) & \text{for } i = n+(n-1)m+1, \dots, n+nm \end{cases}$$

so that $y(t) = W(t)K$. However, in this representation, the $k_i$'s for $i > n$ are not independent of each other, so the system is overparameterized. To simplify the notation for the remainder of the paper, consider the specific model

$$y(t) = \sum_{i=1}^{2} b_i w(t-i) + v(t) \qquad (4)$$

$$w(t) = g(u(t)) = c_1 u(t) + c_2 u^2(t) \qquad (5)$$

or

$$y(t) = \sum_{i=1}^{2}\sum_{j=1}^{2} b_i c_j u^j(t-i) + v(t) = \begin{bmatrix} b_1 & b_2 \end{bmatrix}\begin{bmatrix} u(t-1) & u^2(t-1) \\ u(t-2) & u^2(t-2) \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \end{bmatrix} + v(t). \qquad (6)$$

Therefore

$$y(t) = b_1 c_1 u(t-1) + b_1 c_2 u^2(t-1) + b_2 c_1 u(t-2) + b_2 c_2 u^2(t-2) + v(t)$$

or, as in [3][13], write

$$y(t) = k_1 u(t-1) + k_2 u^2(t-1) + k_3 u(t-2) + k_4 u^2(t-2) + v(t) \qquad (7)$$

where

$$k_1 = b_1 c_1, \quad k_2 = b_1 c_2, \quad k_3 = b_2 c_1, \quad k_4 = b_2 c_2. \qquad (8)$$

The model (7) is overparameterized, as

$$k_4 = k_2 k_3 / k_1. \qquad (9)$$

In the next section, it is shown how to solve this identification problem explicitly, taking into account the constraint (9).
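Before proceeding, it may help to make the model concrete. The following Python sketch (with illustrative parameter values that are assumed here, not taken from the paper) simulates the specific model (4)-(5) and confirms that the same data satisfies the overparameterized form (7)-(8):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values only (assumed for this sketch, not from the paper)
b1, b2 = 1.0, 2.0        # linear-block coefficients
c1, c2 = 1.0, 0.5        # polynomial-nonlinearity coefficients
N = 1000                 # number of output samples

u = rng.normal(size=N + 2)           # input u(t)
w = c1 * u + c2 * u**2               # static nonlinearity (5): w(t) = g(u(t))
v = 0.1 * rng.normal(size=N)         # i.i.d. noise v(t)

# Model (4): y(t) = b1 w(t-1) + b2 w(t-2) + v(t)
y = b1 * w[1:-1] + b2 * w[:-2] + v

# Overparameterized form (7)-(8): y(t) = W(t) K + v(t)
k = np.array([b1 * c1, b1 * c2, b2 * c1, b2 * c2])
W = np.column_stack([u[1:-1], u[1:-1]**2, u[:-2], u[:-2]**2])
assert np.allclose(y, W @ k + v)
```

Note that the constructed `k` automatically satisfies the constraint (9), since `k[0] * k[3] == k[1] * k[2]`.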
III. LEAST-SQUARES IDENTIFICATION

Let

$$W(t) \triangleq \begin{bmatrix} u(t-1) & u^2(t-1) & u(t-2) & u^2(t-2) \end{bmatrix}, \qquad K \triangleq \begin{bmatrix} k_1 \\ k_2 \\ k_3 \\ k_4 \end{bmatrix}$$

so that equation (7) is written compactly as

$$y(t) = W(t)K + v(t). \qquad (10)$$

Collecting the data set $\{y(t), u(t)\}$ for $t = 1, \dots, N$, the least-squares criterion requires that the parameter vector $K$ be found that minimizes the squared error

$$E^2(K) = \sum_{t=1}^{N} |y(t) - W(t)K|^2 \qquad (11)$$

subject to the constraint (9) and assuming $k_1 \neq 0$. With

$$R_W \triangleq \sum_{t=1}^{N} W^T(t)W(t), \qquad R_{Wy} \triangleq \sum_{t=1}^{N} W^T(t)y(t), \qquad R_y \triangleq \sum_{t=1}^{N} y^2(t),$$

the squared error may be rewritten

$$E^2(K) = \sum_{t=1}^{N} |y(t) - W(t)K|^2 = R_y - 2R_{Wy}^T K + K^T R_W K. \qquad (12)$$

For the moment, assume that $k_1 = b_1 c_1$ is not zero; the problem is then to find $K_S \triangleq (k_1, k_2, k_3)$ that minimizes

$$E^2(K_S) \triangleq \sum_{t=1}^{N} \big| y(t) - W(t)K \big|^2 \Big|_{k_4 = k_2 k_3 / k_1} = R_y - 2R_{Wy}^T K \Big|_{k_4 = k_2 k_3 / k_1} + \big( K^T R_W K \big)\Big|_{k_4 = k_2 k_3 / k_1}. \qquad (13)$$

The standard procedure is then to compute the extrema points, that is, the solutions to

$$r_1(K_S) \triangleq \frac{\partial E^2(K_S)}{\partial k_1} = 0, \qquad r_2(K_S) \triangleq \frac{\partial E^2(K_S)}{\partial k_2} = 0, \qquad r_3(K_S) \triangleq \frac{\partial E^2(K_S)}{\partial k_3} = 0. \qquad (14)$$

These are rational functions of $k_1$, $k_2$, and $k_3$. The solutions to this set of rational equations are contained in the solution set of

$$p_1(K_S) \triangleq k_1^{\ell_1} \frac{\partial E^2(K_S)}{\partial k_1} = 0, \qquad p_2(K_S) \triangleq k_1^{\ell_2} \frac{\partial E^2(K_S)}{\partial k_2} = 0, \qquad p_3(K_S) \triangleq k_1^{\ell_3} \frac{\partial E^2(K_S)}{\partial k_3} = 0 \qquad (15)$$

where $\ell_1$, $\ell_2$, and $\ell_3$ are the smallest positive integers that make the $p_i$ polynomials in $k_1$. As $k_1 \neq 0$ is assumed, the zero sets of (14) and (15) are identical. Systems of polynomial equations can be solved for all of their roots using resultants. This is summarized in the next subsection.
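To make the passage from (12) to the polynomial system (15) concrete, the following sympy sketch (using a small hypothetical set of correlation values in place of the measured $R_W$, $R_{Wy}$, $R_y$) substitutes the constraint (9) into the quadratic form, differentiates, and clears the powers of $k_1$:

```python
import sympy as sp

k1, k2, k3 = sp.symbols('k1 k2 k3')

# Hypothetical correlation data standing in for R_W, R_Wy, R_y computed
# from measurements; any symmetric positive-definite R_W will do here.
RW = sp.Matrix([[4, 1, 0, 1], [1, 6, 1, 0], [0, 1, 5, 1], [1, 0, 1, 7]])
RWy = sp.Matrix([1, 2, 3, 4])
Ry = sp.Integer(10)

# K with the constraint (9) imposed: k4 = k2*k3/k1
K = sp.Matrix([k1, k2, k3, k2 * k3 / k1])

# Squared error (13) as a rational function of (k1, k2, k3)
E2 = Ry - 2 * (RWy.T * K)[0] + (K.T * RW * K)[0]

# Clear denominators: each partial derivative (14) times the smallest
# power of k1 that yields a polynomial gives the system (15)
p = []
for var in (k1, k2, k3):
    num, den = sp.fraction(sp.together(sp.diff(E2, var)))  # den is a power of k1
    p.append(sp.expand(num))

print([sp.degree(pi, k1) for pi in p])
```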
A. Solving Systems of Polynomial Equations via Resultants [14][16]

Given two polynomial equations $a(k_1, k_2) = 0$ and $b(k_1, k_2) = 0$, how does one solve them simultaneously to eliminate (say) $k_2$? A systematic procedure for doing so is known as elimination theory and uses the notion of resultants. Briefly, one considers $a(k_1, k_2)$ and $b(k_1, k_2)$ as polynomials in $k_2$ whose coefficients are polynomials in $k_1$. Then, for example, letting $a(k_1, k_2)$ and $b(k_1, k_2)$ have degrees 3 and 2, respectively, in $k_2$, they may be written in the form

$$a(k_1, k_2) = a_3(k_1)k_2^3 + a_2(k_1)k_2^2 + a_1(k_1)k_2 + a_0(k_1)$$
$$b(k_1, k_2) = b_2(k_1)k_2^2 + b_1(k_1)k_2 + b_0(k_1).$$

The $n \times n$ Sylvester matrix, where $n = \deg_{k_2}\{a(k_1, k_2)\} + \deg_{k_2}\{b(k_1, k_2)\} = 3 + 2 = 5$, is defined by

$$S_{a,b}(k_1) \triangleq \begin{bmatrix} a_0(k_1) & 0 & b_0(k_1) & 0 & 0 \\ a_1(k_1) & a_0(k_1) & b_1(k_1) & b_0(k_1) & 0 \\ a_2(k_1) & a_1(k_1) & b_2(k_1) & b_1(k_1) & b_0(k_1) \\ a_3(k_1) & a_2(k_1) & 0 & b_2(k_1) & b_1(k_1) \\ 0 & a_3(k_1) & 0 & 0 & b_2(k_1) \end{bmatrix}. \qquad (16)$$

The resultant polynomial is then defined by

$$r(k_1) = \mathrm{Res}\big(a(k_1, k_2),\, b(k_1, k_2),\, k_2\big) \triangleq \det S_{a,b}(k_1) \qquad (17)$$

and is the result of eliminating the variable $k_2$ from $a(k_1, k_2)$ and $b(k_1, k_2)$. In fact, the following is true.

Theorem 1: [14][16] Any solution $(k_{10}, k_{20})$ of $a(k_1, k_2) = 0$ and $b(k_1, k_2) = 0$ must satisfy $r(k_{10}) = 0$.

See the Appendix for more discussion on resultants.

B. Solving the Polynomial System (15)

Consider the system (15) to be polynomials in $k_3$ with coefficients that are polynomials in $(k_1, k_2)$. Compute

$$r_{p_1 p_2}(k_1, k_2) \triangleq \mathrm{Res}\big(p_1, p_2, k_3\big) = \det S_{p_1 p_2}(k_1, k_2)$$
$$r_{p_1 p_3}(k_1, k_2) \triangleq \mathrm{Res}\big(p_1, p_3, k_3\big) = \det S_{p_1 p_3}(k_1, k_2)$$

to eliminate $k_3$. Next, $k_2$ is eliminated to obtain

$$r(k_1) \triangleq \mathrm{Res}\big(r_{p_1 p_2}, r_{p_1 p_3}, k_2\big) = \det S_{r_{p_1 p_2} r_{p_1 p_3}}(k_1),$$

which is a polynomial in the single variable $k_1$.
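The computations in this paper were carried out in MATHEMATICA [17]; purely as an illustration, the resultant mechanics of (16)-(17) can be reproduced in any computer algebra system. The toy sketch below (the polynomials are invented for the example and are not the $p_i$ above) exhibits Theorem 1:

```python
import sympy as sp

k1, k2 = sp.symbols('k1 k2')

# Toy pair with common zeros at (k1, k2) = (0, 0) and (1, 1)
a = k2**2 - k1          # degree 2 in k2
b = k2 - k1             # degree 1 in k2

# Eliminate k2: r(k1) = Res(a, b, k2), the Sylvester-matrix determinant
r = sp.resultant(a, b, k2)
print(sp.factor(r))     # k1*(k1 - 1)

# Theorem 1: the k1-coordinates of all common zeros are roots of r
roots_k1 = sp.solve(r, k1)                                  # [0, 1]
sols = [(v, sp.solve(b.subs(k1, v), k2)[0]) for v in roots_k1]
print(sols)             # [(0, 0), (1, 1)]
```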
The roots $k_{1i}$, $i = 1, \dots, n_1$, of $r(k_1) = 0$ are found and then substituted into $r_{p_1 p_2}$. For each fixed $k_{1i}$, $r_{p_1 p_2}(k_{1i}, k_2) = 0$ [or $r_{p_1 p_3}(k_{1i}, k_2) = 0$] is solved for its roots $k_{2ij}$, $j = 1, \dots, n_{2i}$, to obtain the partial solutions $(k_{1i}, k_{2ij})$ for $i = 1, \dots, n_1$, $j = 1, \dots, n_{2i}$. For each $i$ and $j$, the partial solutions $(k_{1i}, k_{2ij})$ are then substituted into $p_1$ (or into $p_2$ or $p_3$), and $p_1(k_{1i}, k_{2ij}, k_3) = 0$ is solved to obtain the solutions $(k_{1i}, k_{2ij}, k_{3ijk})$ for $i = 1, \dots, n_1$, $j = 1, \dots, n_{2i}$, $k = 1, \dots, n_{3ij}$. These solutions are then checked to see which ones satisfy the complete system of polynomial equations (15) and, therefore, constitute the candidate solutions for the minimization. From the set of candidate solutions, the one that results in the smallest squared error is chosen. The computations presented in this work were carried out using MATHEMATICA [17].

Remark: The assumption that $k_1 \neq 0$ was made above. However, one does not need to know this a priori. One simply solves the problem twice: once assuming $k_1 \neq 0$ and a second time assuming $k_4 \neq 0$ [substituting $k_1 = k_2 k_3 / k_4$ into (12)]. A parameter vector $K_S$ is found in each case, and of the two solutions, the one that gives the smaller residual (squared) error is chosen.

C. Sufficient Richness

The system of polynomial equations (15) can be solved using resultants as outlined in the previous subsection, and one may then check each of the finite number of solutions for the one that gives the minimal value of $E^2(K_S)$. However, one needs to know if the solution makes sense. For example, in the linear least-squares problem, there is a unique, well-defined solution provided that the regressor matrix $R_W$ is nonsingular (or, in practical terms, if its condition number is not too large). In the nonlinear case here, a Taylor series expansion about the computed minimum point $K_S^* = [k_1^*, k_2^*, k_3^*]^T$ is done to obtain ($i, j = 1, 2, 3$)

$$E^2(K_S) = E^2(K_S^*) + \frac{1}{2}\,[K_S - K_S^*]^T \left[\frac{\partial^2 E^2(K_S^*)}{\partial k_i \partial k_j}\right] [K_S - K_S^*] + \cdots. \qquad (18)$$

One then checks that the Hessian matrix with elements $\partial^2 E^2(K_S^*)/\partial k_i \partial k_j$ is positive definite and that its condition number is not too large, to ensure that the data was sufficiently rich to identify the parameters.
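Continuing the earlier sympy sketch (same hypothetical correlation values, so all numbers are illustrative), the back-substitution of subsection B and the richness check of (18) might be coded as follows; the tolerances are arbitrary choices:

```python
import numpy as np
import sympy as sp

# Continues the earlier sketch: assumes E2, p = [p1, p2, p3], and k1, k2, k3
r12 = sp.resultant(p[0], p[1], k3)      # eliminate k3
r13 = sp.resultant(p[0], p[2], k3)
r = sp.resultant(r12, r13, k2)          # eliminate k2 -> r(k1)

def real_roots(expr, var):
    """Numeric real roots of a univariate polynomial."""
    return [complex(z).real for z in sp.Poly(expr, var).nroots()
            if abs(complex(z).imag) < 1e-8]

candidates = []
for v1 in real_roots(r, k1):
    for v2 in real_roots(r12.subs(k1, v1), k2):
        for v3 in real_roots(p[0].subs({k1: v1, k2: v2}), k3):
            vals = {k1: v1, k2: v2, k3: v3}
            # keep only the solutions of the complete system (15)
            if all(abs(complex(pi.subs(vals))) < 1e-6 for pi in p):
                candidates.append(vals)

best = min(candidates, key=lambda s: float(E2.subs(s)))

# Richness check via (18): the Hessian at the minimum should be
# positive definite with a moderate condition number
H = np.array(sp.hessian(E2, (k1, k2, k3)).subs(best), dtype=float)
print(np.linalg.eigvalsh(H).min() > 0, np.linalg.cond(H))
```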
D. Solving for the Model Parameters

Once the $k_i$ are found, the $b_i$, $c_i$ are found. If $k_1 \neq 0$ is the minimizing case, then the system [see (8)]

$$k_1 = b_1 c_1, \quad k_2 = b_1 c_2, \quad k_3 = b_2 c_1 \qquad (19)$$

is solved using resultants, while if $k_4 \neq 0$ is the minimizing case, then

$$k_2 = b_1 c_2, \quad k_3 = b_2 c_1, \quad k_4 = b_2 c_2 \qquad (20)$$

is solved via resultants.
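Since (19) comprises three equations in the four unknowns $(b_1, b_2, c_1, c_2)$, and the products $b_i c_j$ are invariant under $(b, c) \mapsto (\lambda b, c/\lambda)$, a norm constraint on $b$ is imposed in the examples below; the remaining ambiguity is a common sign flip of $(b, c)$. A small sketch of this last step, solved here with sympy's general solver rather than resultants, and with invented values for the $k_i$:

```python
import sympy as sp

b1, b2, c1, c2 = sp.symbols('b1 b2 c1 c2', real=True)

# Invented identified values (hypothetical, not from the paper's examples)
k1v, k2v, k3v = sp.Rational(1, 2), sp.Integer(1), sp.Rational(3, 2)

sols = sp.solve(
    [b1 * c1 - k1v,              # (19): k1 = b1*c1
     b1 * c2 - k2v,              #       k2 = b1*c2
     b2 * c1 - k3v,              #       k3 = b2*c1
     b1**2 + b2**2 - 5],         # norm constraint ||b||^2 = 5
    [b1, b2, c1, c2], dict=True)

# Two solutions, differing by an overall sign on (b, c)
for s in sols:
    print({str(v): float(s[v]) for v in (b1, b2, c1, c2)})
```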
IV. SIMULATION EXAMPLES

Here we consider the use of the above methodology to estimate the parameter values of some examples presented in the literature.

A. Example 1

The first example is from [4][8]:

$$y(t) = \begin{bmatrix} b_1 & b_2 \end{bmatrix}\begin{bmatrix} u(t-1) & u^2(t-1) \\ u(t-2) & u^2(t-2) \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \end{bmatrix} + v(t)$$
where $b_1 = 1$, $b_2 = 2$, $c_1 = 0$, $c_2 = 2$, $u(t)$ is zero-mean Gaussian white noise with unit variance, and $v(t)$ is also zero-mean Gaussian white noise, with variance 1.56. Note that $k_1 = b_1 c_1 = 0$. However, one would not know this a priori, so this example is worked out assuming first that $k_1 \neq 0$ and then that $k_4 \neq 0$.

Case 1: $k_1 \neq 0$. A simulation is performed and the input-output data is collected for $t = 1, \dots, 1000$ to compute $R_W$, $R_{Wy}$, and $R_y$. Using equations (13), (14), and (15), the polynomials [$K_S \triangleq (k_1, k_2, k_3)$]

$$p_1(K_S) \triangleq k_1^3 \frac{\partial E^2(K_S)}{\partial k_1} = 0, \qquad p_2(K_S) \triangleq k_1^2 \frac{\partial E^2(K_S)}{\partial k_2} = 0, \qquad p_3(K_S) \triangleq k_1^2 \frac{\partial E^2(K_S)}{\partial k_3} = 0 \qquad (21)$$

are computed, where

$$\begin{array}{c|ccc} & \deg_{k_1} & \deg_{k_2} & \deg_{k_3} \\ \hline p_1 & 4 & 2 & 2 \\ p_2 & 3 & 1 & 2 \\ p_3 & 3 & 2 & 1 \end{array}$$

Using resultants, $r_{p_1 p_2} \triangleq \mathrm{Res}(p_1, p_2, k_2)$ and $r_{p_1 p_3} \triangleq \mathrm{Res}(p_1, p_3, k_2)$ are computed, where it happens that $r_{p_1 p_2} = k_1^2 \bar{r}_{p_1 p_2}$ and $r_{p_1 p_3} = k_1^2 \bar{r}_{p_1 p_3}$. The factors $k_1^2$ in $r_{p_1 p_2}$ and in $r_{p_1 p_3}$ can be removed without changing the solution set, as $k_1$ is assumed to be nonzero. The degrees of $\bar{r}_{p_1 p_2}$ and $\bar{r}_{p_1 p_3}$ are

$$\begin{array}{c|cc} & \deg_{k_1} & \deg_{k_3} \\ \hline \bar{r}_{p_1 p_2} & 5 & 5 \\ \bar{r}_{p_1 p_3} & 6 & 6 \end{array}$$

Finally, $k_3$ is eliminated from $\bar{r}_{p_1 p_2}$ and $\bar{r}_{p_1 p_3}$ to obtain

$$r(k_1) \triangleq \mathrm{Res}\big(\bar{r}_{p_1 p_2}, \bar{r}_{p_1 p_3}, k_3\big) = \det S_{\bar{r}_{p_1 p_2} \bar{r}_{p_1 p_3}}(k_1),$$

where $\deg\{r(k_1)\} = 18$. The optimal solution is $K_{S1} = (0.036, 2.008, 0.072)$, so that $k_4 = k_2 k_3 / k_1 = 4.016$, and the error is computed to be $E^2(K_{S1}) = 1931.6$. For this optimal value of $K_{S1}$, and choosing the norm constraint $\|b\| = \sqrt{b_1^2 + b_2^2} = \sqrt{5}$ as in [8], the solutions to $k_1 = b_1 c_1$, $k_2 = b_1 c_2$, and $k_3 = b_2 c_1$ are

$$(b_1, b_2, c_1, c_2) = \begin{cases} (0.999,\ 2.001,\ 0.036,\ 2.009) \\ (-0.999,\ -2.001,\ -0.036,\ -2.009). \end{cases}$$

Zero measurement noise: If this problem is redone with no measurement noise, then the optimal solution is $K_{S1} = (0.234, 1.975, 0.470)$, so that $k_4 = k_2 k_3 / k_1 = 3.967$, and the error is computed to be $E^2(K_{S1}) = 338.19$. For this optimal value of $K_{S1}$, and again choosing $\|b\| = \sqrt{b_1^2 + b_2^2} = \sqrt{5}$, the solutions to $k_1 = b_1 c_1$, $k_2 = b_1 c_2$, and $k_3 = b_2 c_1$ are

$$(b_1, b_2, c_1, c_2) = \begin{cases} (0.994,\ 2.001,\ 0.235,\ 1.987) \\ (-0.994,\ -2.001,\ -0.235,\ -1.987). \end{cases}$$
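Case 2 below reuses the same data. For reference, data with the statistics stated above can be generated as follows (a sketch; the particular noise realization used in the paper is, of course, not reproducible):

```python
import numpy as np

rng = np.random.default_rng(1)

b1, b2, c1, c2 = 1.0, 2.0, 0.0, 2.0   # true parameters of Example 1
N = 1000

u = rng.normal(0.0, 1.0, size=N + 2)          # unit-variance white input
w = c1 * u + c2 * u**2                        # static nonlinearity
v = rng.normal(0.0, np.sqrt(1.56), size=N)    # noise with variance 1.56
y = b1 * w[1:-1] + b2 * w[:-2] + v

# Regressor rows W(t) = [u(t-1), u^2(t-1), u(t-2), u^2(t-2)]
W = np.column_stack([u[1:-1], u[1:-1]**2, u[:-2], u[:-2]**2])
RW, RWy, Ry = W.T @ W, W.T @ y, y @ y         # correlations in (12)
```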
Case 2: $k_4 \neq 0$. Using the same data as in Case 1, equation (13) is used with $k_1 = k_2 k_3 / k_4$ and $K_S \triangleq (k_2, k_3, k_4)$. The corresponding polynomial system, whose zeros are the extremal points of the least-squares error, is of the form

$$p_1(K_S) \triangleq k_4^2 \frac{\partial E^2(K_S)}{\partial k_2} = 0, \qquad p_2(K_S) \triangleq k_4^2 \frac{\partial E^2(K_S)}{\partial k_3} = 0, \qquad p_3(K_S) \triangleq k_4^3 \frac{\partial E^2(K_S)}{\partial k_4} = 0 \qquad (22)$$

where

$$\begin{array}{c|ccc} & \deg_{k_2} & \deg_{k_3} & \deg_{k_4} \\ \hline p_1 & 1 & 2 & 3 \\ p_2 & 2 & 1 & 3 \\ p_3 & 2 & 2 & 4 \end{array}$$

Using resultants, $r_{p_1 p_2} \triangleq \mathrm{Res}(p_1, p_2, k_2)$ and $r_{p_1 p_3} \triangleq \mathrm{Res}(p_1, p_3, k_2)$ are computed, where it happens that $r_{p_1 p_2} = k_4^2 \bar{r}_{p_1 p_2}$ and $r_{p_1 p_3} = k_4^2 \bar{r}_{p_1 p_3}$; again, the factors $k_4^2$ in $r_{p_1 p_2}$ and in $r_{p_1 p_3}$ can be removed without changing the solution set, as $k_4$ is assumed to be nonzero. The degrees of $\bar{r}_{p_1 p_2}$ and $\bar{r}_{p_1 p_3}$ are

$$\begin{array}{c|cc} & \deg_{k_3} & \deg_{k_4} \\ \hline \bar{r}_{p_1 p_2} & 5 & 5 \\ \bar{r}_{p_1 p_3} & 6 & 6 \end{array}$$

Finally, $k_3$ is eliminated to obtain

$$r(k_4) \triangleq \mathrm{Res}\big(\bar{r}_{p_1 p_2}, \bar{r}_{p_1 p_3}, k_3\big) = \det S_{\bar{r}_{p_1 p_2} \bar{r}_{p_1 p_3}}(k_4),$$

where $\deg\{r(k_4)\} = 18$. The optimal solution is $K_{S2} = (2.008, 0.072, 4.020)$, so that $k_1 = k_2 k_3 / k_4 = 0.036$, and the error is computed to be $E^2(K_{S2}) = 1931.6$. For this optimal value of $K_{S2}$, and choosing the norm constraint $\|b\| = \sqrt{b_1^2 + b_2^2} = \sqrt{5}$ as in [8], the solutions to $k_2 = b_1 c_2$, $k_3 = b_2 c_1$, and $k_4 = b_2 c_2$ are

$$(b_1, b_2, c_1, c_2) = \begin{cases} (0.999,\ 2.001,\ 0.036,\ 2.010) \\ (-0.999,\ -2.001,\ -0.036,\ -2.010). \end{cases}$$

The errors as well as the estimated parameters $b_i$ and $c_i$ are the same in both cases, which is not a surprise, as neither of the estimated values for $k_1$ and $k_4$ is zero.

Zero measurement noise: The optimal solution is $K_{S2} = (2.00, 0.00, 4.00)$, so that $k_1 = k_2 k_3 / k_4 = 0.00$, and the error is computed to be $E^2(K_{S2}) = 0$. For this optimal value of $K_{S2}$, and again choosing $\|b\| = \sqrt{b_1^2 + b_2^2} = \sqrt{5}$, the solutions to $k_2 = b_1 c_2$, $k_3 = b_2 c_1$, and $k_4 = b_2 c_2$ are

$$(b_1, b_2, c_1, c_2) = \begin{cases} (1.00,\ 2.00,\ 0.00,\ 2.00) \\ (-1.00,\ -2.00,\ 0.00,\ -2.00). \end{cases}$$

With zero measurement noise, $E^2(K_{S2}) = 0$ as expected, and the parameter values estimated with $k_4 \neq 0$ are chosen.
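The Remark of Section III (solve once with $k_1 \neq 0$, once with $k_4 \neq 0$, and keep the smaller residual) can also be cross-checked numerically. The sketch below uses an iterative multistart optimizer purely as an independent check (it is not the paper's resultant-based method) and assumes the arrays `W` and `y` from the data-generation sketch above:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def residual(p, expand):
    K = expand(p)
    if not np.all(np.isfinite(K)):
        return np.inf            # guard against division by a tiny parameter
    r = y - W @ K
    return r @ r

def solve_case(expand):
    """Multistart local minimization of (13) in three free parameters."""
    best = None
    for _ in range(20):
        x0 = 1.0 + 0.5 * rng.standard_normal(3)
        res = minimize(residual, x0, args=(expand,))
        if best is None or res.fun < best.fun:
            best = res
    return best

# Case 1: K_S = (k1, k2, k3), with k4 = k2*k3/k1
case1 = solve_case(lambda p: np.array([p[0], p[1], p[2], p[1] * p[2] / p[0]]))
# Case 2: K_S = (k2, k3, k4), with k1 = k2*k3/k4
case2 = solve_case(lambda p: np.array([p[0] * p[1] / p[2], p[0], p[1], p[2]]))

# Per the Remark: keep whichever case yields the smaller residual error
print(case1.fun, case2.fun)
```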
B. Example 2

The second example is from [5], where the discrete-time system to be identified is

$$y(t) + a_1 y(t-T) + a_2 y(t-2T) = b_1 w(t-T) + b_2 w(t-2T) + v(t)$$

$$w(t) = g(u(t)) \triangleq c_1 u + c_2 u^2$$

where the sampling time is $T = 0.6$ sec, $u(t)$ is an i.i.d. random process uniformly distributed on the interval $[-5, 5]$, and $v(t)$ is an i.i.d. random process uniformly distributed on the interval $[-0.1, 0.1]$. The parameter values in the simulation are

$$a = (a_1, a_2) = (-1.8287,\ 0.8353), \quad b = (b_1, b_2) = (0.9397,\ 0.3420), \quad c = (c_1, c_2) = (1,\ 1).$$

Data is collected at $t = T, 2T, \dots, 1000T$. Set

$$k_1 = a_1, \quad k_2 = a_2, \quad k_3 = b_1 c_1, \quad k_4 = b_1 c_2, \quad k_5 = b_2 c_1, \quad k_6 = b_2 c_2 = k_4 k_5 / k_3,$$

where $k_3 \neq 0$ is assumed. The corresponding polynomial system, whose zeros are the extremal points of the least-squares error, is of the form [$K_S \triangleq (k_1, k_2, k_3, k_4, k_5)$]

$$p_1(K_S) \triangleq k_3 \frac{\partial E^2(K_S)}{\partial k_1} = 0, \quad p_2(K_S) \triangleq k_3 \frac{\partial E^2(K_S)}{\partial k_2} = 0, \quad p_3(K_S) \triangleq k_3^3 \frac{\partial E^2(K_S)}{\partial k_3} = 0, \quad p_4(K_S) \triangleq k_3^2 \frac{\partial E^2(K_S)}{\partial k_4} = 0, \quad p_5(K_S) \triangleq k_3^2 \frac{\partial E^2(K_S)}{\partial k_5} = 0$$

where

$$\begin{array}{c|ccccc} & \deg_{k_1} & \deg_{k_2} & \deg_{k_3} & \deg_{k_4} & \deg_{k_5} \\ \hline p_1 & 1 & 1 & 2 & 1 & 1 \\ p_2 & 1 & 1 & 2 & 1 & 1 \\ p_3 & 1 & 1 & 4 & 2 & 2 \\ p_4 & 1 & 1 & 3 & 1 & 2 \\ p_5 & 1 & 1 & 3 & 2 & 1 \end{array}$$

Next, $k_2$ is eliminated to obtain four polynomials in four unknowns as

$$q_1(k_1, k_3, k_4, k_5) \triangleq \mathrm{Res}(p_1, p_2, k_2), \quad q_2(k_1, k_3, k_4, k_5) \triangleq \mathrm{Res}(p_3, p_2, k_2),$$
$$q_3(k_1, k_3, k_4, k_5) \triangleq \mathrm{Res}(p_4, p_2, k_2), \quad q_4(k_1, k_3, k_4, k_5) \triangleq \mathrm{Res}(p_5, p_2, k_2),$$

where $q_1 = k_3 \bar{q}_1$, $q_2 = k_3 \bar{q}_2$, $q_3 = k_3 \bar{q}_3$, $q_4 = k_3 \bar{q}_4$ and

$$\begin{array}{c|cccc} & \deg_{k_1} & \deg_{k_3} & \deg_{k_4} & \deg_{k_5} \\ \hline \bar{q}_1 & 1 & 2 & 1 & 1 \\ \bar{q}_2 & 1 & 4 & 2 & 2 \\ \bar{q}_3 & 1 & 3 & 1 & 2 \\ \bar{q}_4 & 1 & 3 & 2 & 1 \end{array}$$

Next, $k_1$ is eliminated to obtain three polynomials in three unknowns as

$$s_1(k_3, k_4, k_5) \triangleq \mathrm{Res}(\bar{q}_1, \bar{q}_2, k_1), \quad s_2(k_3, k_4, k_5) \triangleq \mathrm{Res}(\bar{q}_1, \bar{q}_3, k_1), \quad s_3(k_3, k_4, k_5) \triangleq \mathrm{Res}(\bar{q}_1, \bar{q}_4, k_1),$$

where $s_1 = k_3 \bar{s}_1$, $s_2 = k_3 \bar{s}_2$, $s_3 = k_3 \bar{s}_3$ and

$$\begin{array}{c|ccc} & \deg_{k_3} & \deg_{k_4} & \deg_{k_5} \\ \hline \bar{s}_1 & 4 & 2 & 2 \\ \bar{s}_2 & 3 & 1 & 2 \\ \bar{s}_3 & 3 & 2 & 1 \end{array}$$

Next, $k_5$ is eliminated to obtain two polynomials in two unknowns as

$$t_1(k_3, k_4) \triangleq \mathrm{Res}(\bar{s}_1, \bar{s}_2, k_5), \quad t_2(k_3, k_4) \triangleq \mathrm{Res}(\bar{s}_1, \bar{s}_3, k_5),$$

where $t_1 = k_3^2 \bar{t}_1$, $t_2 = k_3^2 \bar{t}_2$ and

$$\begin{array}{c|cc} & \deg_{k_3} & \deg_{k_4} \\ \hline \bar{t}_1 & 6 & 6 \\ \bar{t}_2 & 5 & 5 \end{array}$$

Finally, $k_4$ is eliminated to obtain a single polynomial in $k_3$ as

$$r(k_3) = \mathrm{Res}\big(\bar{t}_1(k_3, k_4),\, \bar{t}_2(k_3, k_4),\, k_4\big),$$

where $\deg_{k_3}\{r(k_3)\} = 18$. The calculated optimal solution is

$$K_S = (-1.8288,\ 0.8354,\ 0.9394,\ 0.9397,\ 0.3417)$$

so that $k_6 = k_4 k_5 / k_3 = 0.3418$, and the error is computed to be $E^2(K_S) = 0$. For this optimal value of $K_S$, and choosing $\|b\| = \sqrt{b_1^2 + b_2^2} = 1$, the solutions to $k_3 = b_1 c_1$, $k_4 = b_1 c_2$, and $k_5 = b_2 c_1$ are

$$(a_1, a_2, b_1, b_2, c_1, c_2) = \begin{cases} (-1.8288,\ 0.8354,\ 0.9398,\ 0.3418,\ 0.9996,\ 0.9999) \\ (-1.8288,\ 0.8354,\ -0.9398,\ -0.3418,\ -0.9996,\ -0.9999). \end{cases}$$
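For reference, the simulation setup of this example can be sketched as follows (one possible realization under the stated distributions), together with the overparameterized least-squares fit of the six $k_i$ in the style of [3] for comparison:

```python
import numpy as np

rng = np.random.default_rng(2)

a1, a2 = -1.8287, 0.8353          # linear-block denominator coefficients
b1, b2 = 0.9397, 0.3420           # numerator coefficients, ||b|| = 1
c1, c2 = 1.0, 1.0                 # nonlinearity coefficients
N = 1000                          # samples at t = T, 2T, ..., 1000T

u = rng.uniform(-5.0, 5.0, size=N + 2)
w = c1 * u + c2 * u**2
v = rng.uniform(-0.1, 0.1, size=N)

# y(t) = -a1 y(t-T) - a2 y(t-2T) + b1 w(t-T) + b2 w(t-2T) + v(t),
# started from zero initial conditions
y = np.zeros(N + 2)
for t in range(2, N + 2):
    y[t] = (-a1 * y[t-1] - a2 * y[t-2]
            + b1 * w[t-1] + b2 * w[t-2] + v[t-2])

# Regressor for K = (k1, ..., k6): [-y(t-T), -y(t-2T), u(t-T), u^2(t-T),
#                                   u(t-2T), u^2(t-2T)]
W = np.column_stack([-y[1:-1], -y[:-2],
                     u[1:-1], u[1:-1]**2, u[:-2], u[:-2]**2])
k_ls = np.linalg.lstsq(W, y[2:], rcond=None)[0]   # overparameterized fit
print(k_ls)
```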
V. CONCLUSIONS AND FUTURE RESEARCH

A procedure to identify the parameters in a Hammerstein model was presented and illustrated with examples. The method guarantees that the parameters that minimize the squared error are found. A limitation of the method is that models with a larger number of parameters result in much higher-degree polynomials to be solved using resultant theory. In such cases, the symbolic computation of the determinant of the Sylvester matrix for finding the resultant polynomials becomes an issue. New results in [18][19] for performing such computations will allow one to work problems with a larger number of parameters.

REFERENCES
[1] K. S. Narendra and P. G. Gallman, "An iterative method for the identification of nonlinear systems using a Hammerstein model," IEEE Transactions on Automatic Control, vol. 11, pp. 546-550, July 1966.
[2] P. G. Gallman, "A comparison of two Hammerstein model identification algorithms," IEEE Transactions on Automatic Control, vol. 21, pp. 124-126, February 1976.
[3] F. H. I. Chang and R. Luus, "A noniterative method for identification using Hammerstein model," IEEE Transactions on Automatic Control, vol. 16, pp. 464-468, October 1971.
[4] P. Stoica, "On the convergence of an iterative algorithm used for Hammerstein identification," IEEE Transactions on Automatic Control, vol. 26, pp. 967-969, April 1981.
[5] E.-W. Bai and M. Fu, "A blind approach to Hammerstein model identification," IEEE Transactions on Signal Processing, vol. 50, pp. 1610-1619, July 2002.
[6] E.-W. Bai, "Frequency domain identification of Hammerstein models," IEEE Transactions on Automatic Control, vol. 48, pp. 530-542, April 2003.
[7] E.-W. Bai and D. Li, "Convergence of the iterative Hammerstein system identification algorithm," IEEE Transactions on Automatic Control, vol. 49, pp. 1929-1940, November 2004.
[8] E.-W. Bai and D. Li, "Convergence of the iterative Hammerstein system identification algorithm," in Proceedings of the 43rd IEEE Conference on Decision and Control, pp. 3868-3873, Paradise Island, Bahamas, December 2004.
[9] W. Greblicki, "Continuous-time Hammerstein system identification," IEEE Transactions on Automatic Control, vol. 45, pp. 1232-1236, June 2000.
[10] Z. Hasiewicz and G. Mzyk, "Combined parametric-nonparametric identification of Hammerstein systems," IEEE Transactions on Automatic Control, vol. 49, pp. 1370-1375, August 2004.
[11] J. Vörös, "Iterative algorithm for parameter identification of Hammerstein systems with two-segment nonlinearities," IEEE Transactions on Automatic Control, vol. 44, pp. 2145-2149, November 1999.
[12] J. Vörös, "Modeling and parameter identification of systems with multisegment piecewise-linear characteristics," IEEE Transactions on Automatic Control, vol. 47, pp. 184-188, January 2002.
[13] O. Nelles, Nonlinear System Identification. Springer-Verlag, Berlin, 2001.
[14] D. Cox, J. Little, and D. O'Shea, Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra, 2nd Edition. Springer-Verlag, Berlin, 1996.
[15] K. Wang, J. Chiasson, M. Bodson, and L. M. Tolbert, "A nonlinear least-squares approach for estimation of the induction motor parameters," in Proceedings of the IEEE Conference on Decision and Control, pp. 3856-3861, December 2004.
[16] J. von zur Gathen and J. Gerhard, Modern Computer Algebra. Cambridge University Press, Cambridge, UK, 1999.
[17] S. Wolfram, Mathematica: A System for Doing Mathematics by Computer, 2nd Edition. Addison-Wesley, Reading, MA, 1992.
[18] M. Hromcik and M. Šebek, "New algorithm for polynomial matrix determinant based on FFT," in Proceedings of the European Control Conference ECC'99, Karlsruhe, Germany, August 1999.
[19] M. Hromcik and M. Šebek, "Numerical and symbolic computation of polynomial matrix determinant," in Proceedings of the 1999 Conference on Decision and Control, pp. 1887-1888, Tampa, FL, 1999.
[20] D. Cox, J. Little, and D. O'Shea, Using Algebraic Geometry. Springer-Verlag, Berlin, 1998.

APPENDIX: Resultants (see [14][16][20])

Given two polynomials $a(x_1, x_2)$ and $b(x_1, x_2)$, how does one find their common zeros? To see how this is done, write

$$a(x_1, x_2) = a_3(x_1)x_2^3 + a_2(x_1)x_2^2 + a_1(x_1)x_2 + a_0(x_1)$$
$$b(x_1, x_2) = b_2(x_1)x_2^2 + b_1(x_1)x_2 + b_0(x_1).$$

Next, see if polynomials of the form

$$\alpha(x_1, x_2) = \alpha_1(x_1)x_2 + \alpha_0(x_1)$$
$$\beta(x_1, x_2) = \beta_2(x_1)x_2^2 + \beta_1(x_1)x_2 + \beta_0(x_1)$$

can be found such that

$$\alpha(x_1, x_2)a(x_1, x_2) + \beta(x_1, x_2)b(x_1, x_2) = r(x_1). \qquad (23)$$

Equating powers of $x_2$, this equation may be rewritten in matrix form as

$$\begin{bmatrix} a_0 & 0 & b_0 & 0 & 0 \\ a_1 & a_0 & b_1 & b_0 & 0 \\ a_2 & a_1 & b_2 & b_1 & b_0 \\ a_3 & a_2 & 0 & b_2 & b_1 \\ 0 & a_3 & 0 & 0 & b_2 \end{bmatrix}\begin{bmatrix} \alpha_0 \\ \alpha_1 \\ \beta_0 \\ \beta_1 \\ \beta_2 \end{bmatrix} = \begin{bmatrix} r(x_1) \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}.$$

The matrix on the left-hand side is called the Sylvester matrix and is denoted here by $S_{a,b}(x_1)$. The inverse of $S_{a,b}(x_1)$ has the form

$$S_{a,b}^{-1}(x_1) = \frac{1}{\det S_{a,b}(x_1)}\,\mathrm{adj}\big(S_{a,b}(x_1)\big)$$

where $\mathrm{adj}(S_{a,b}(x_1))$ is the adjoint matrix and is a $5 \times 5$ polynomial matrix in $x_1$. Solving for $\alpha_i(x_1)$, $\beta_i(x_1)$ gives

$$\begin{bmatrix} \alpha_0(x_1) \\ \alpha_1(x_1) \\ \beta_0(x_1) \\ \beta_1(x_1) \\ \beta_2(x_1) \end{bmatrix} = \frac{\mathrm{adj}\, S_{a,b}(x_1)}{\det S_{a,b}(x_1)}\begin{bmatrix} r(x_1) \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}.$$

Choosing $r(x_1) = \det S_{a,b}(x_1)$, this becomes

$$\begin{bmatrix} \alpha_0(x_1) \\ \alpha_1(x_1) \\ \beta_0(x_1) \\ \beta_1(x_1) \\ \beta_2(x_1) \end{bmatrix} = \mathrm{adj}\, S_{a,b}(x_1)\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}$$

and guarantees that $\alpha_0(x_1)$, $\alpha_1(x_1)$, $\beta_0(x_1)$, $\beta_1(x_1)$, $\beta_2(x_1)$ are polynomials in $x_1$. That is, the resultant polynomial defined by $r(x_1) = \det S_{a,b}(x_1)$ is the polynomial required for (23).
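As a closing sanity check of the identity (23), the following sympy sketch builds the Sylvester matrix for an invented pair of polynomials of the same degrees, extracts $\alpha$ and $\beta$ from the first column of the adjugate, and verifies $\alpha a + \beta b = \det S_{a,b}$:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Toy polynomials of the degrees used in the appendix (3 and 2 in x2)
a = x2**3 + x1 * x2 + 1
b = x1 * x2**2 + x2 + x1

# Coefficient lists a0..a3 and b0..b2 in ascending powers of x2
ac = [a.coeff(x2, i) for i in range(4)]
bc = [b.coeff(x2, i) for i in range(3)]

# 5x5 Sylvester matrix, laid out as in (16)
S = sp.Matrix([
    [ac[0], 0,     bc[0], 0,     0    ],
    [ac[1], ac[0], bc[1], bc[0], 0    ],
    [ac[2], ac[1], bc[2], bc[1], bc[0]],
    [ac[3], ac[2], 0,     bc[2], bc[1]],
    [0,     ac[3], 0,     0,     bc[2]],
])

r = S.det()                                    # resultant r(x1)
col = S.adjugate()[:, 0]                       # adj(S) times [1,0,0,0,0]^T
alpha = col[0] + col[1] * x2                   # alpha0 + alpha1*x2
beta = col[2] + col[3] * x2 + col[4] * x2**2   # beta0 + beta1*x2 + beta2*x2^2

# Verify the identity (23): alpha*a + beta*b = r(x1), free of x2
assert sp.expand(alpha * a + beta * b - r) == 0
print(sp.factor(r))
```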