994
IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 44, NO. 5, MAY 1999
Existence of Stationary Points for Pseudo-Linear Regression Identification Algorithms

Phillip A. Regalia, Mamadou Mboup, and Mehdi Ashari
Abstract—The authors prove existence of a stable transfer function satisfying the nonlinear equations characterizing an asymptotic stationary point, in undermodeled cases, for a class of pseudo-linear regression algorithms, including Landau's algorithm, the Feintuch algorithm, and (S)HARF. The proof applies to all degrees of undermodeling and assumes only that the input power spectral density function is bounded and nonzero for all frequencies and that the compensation filter is strictly minimum phase. Some connections to previous stability analyses for reduced-order identification in this algorithm class are brought out.

Index Terms—Identification, pseudo-linear regression, undermodeled stationary points.
Fig. 1. Decomposition of unknown system.

Manuscript received December 5, 1997; revised December 8, 1997. Recommended by Associate Editor, J. C. Spall. P. A. Regalia is with the Département Signal et Image, Institut National des Télécommunications, F-91011 Evry cedex, France (e-mail: [email protected]). M. Mboup is with UFR Mathématiques et Informatique, Université René Descartes, F-75270 Paris cedex 06, France. M. Ashari is with Lucent Technologies, Allentown, PA 18103 USA. Publisher Item Identifier S 0018-9286(99)03937-9.

I. INTRODUCTION

Many convergence results for adaptive identification algorithms are limited to sufficient order settings, in which an unknown system is of finite and known order. Fewer results are available for more realistic undermodeled (or reduced-order) settings, in which the degree of an unknown system is underestimated or prohibitively large. Here we prove, for undermodeled cases, the existence of a stable transfer function satisfying the nonlinear equations governing a mean stationary point for pseudo-linear regression algorithms, including Landau's identifier [1], the Feintuch algorithm [2], and (S)HARF [3], [4].

The importance of examining the behavior of adaptive identification algorithms in undermodeled cases has been recognized repeatedly over the years [5]–[8], since physical systems that one may attempt to identify need not admit exact descriptions using finite-order models. Two contributions by Anderson and Johnson [5], [6] show the absence of divergence for this algorithm class in undermodeled cases, subject to excitation and strict positive real conditions. Those results tacitly assume the existence of a reduced-order model that the algorithm may be attempting to identify, without specifying its structure. As argued in [8], a natural choice for the reduced-order model is the system obtained at a mean stationary point, if one exists.

Section II reviews this question and states our main result asserting the existence of a stationary point giving rise to a stable transfer function. Section III presents the technical construct, with proofs of intermediate lemmas deferred to Section IV. Concluding remarks are synthesized in Section V.

II. PROBLEM STRUCTURE AND MAIN RESULT

We consider a system identification problem in which {u(·)} is a discrete-time, zero-mean stationary stochastic process, and the observed system output {y(·)} is generated according to

y(n) = \sum_{k=0}^{\infty} h_k\, u(n-k) + \nu(n)
where {\nu(\cdot)} is a stationary output disturbance, assumed independent of the input sequence {u(\cdot)}, and {h_k} is the impulse response of the unknown system, assumed stable and causal. Its transfer function is H(z) = \sum_k h_k z^k, where z is the backward delay operator: z\,u(n) = u(n-1). This transfer function may be irrational. The SHARF algorithm [3] adjusts the coefficients of a candidate rational model

\hat{H}(z) = \frac{B(z)}{A(z)} = \frac{b_0 + b_1 z + \cdots + b_M z^M}{1 + a_1 z + \cdots + a_M z^M}

using the following adaptation algorithm:

\hat{y}(n) = -\sum_{k=1}^{M} a_k(n)\, \hat{y}(n-k) + \sum_{k=0}^{M} b_k(n)\, u(n-k)

\epsilon(n) = \sum_{k=0}^{M} c_k\, [y(n-k) - \hat{y}(n-k)], \qquad c_0 = 1

a_k(n+1) = a_k(n) - \mu\, \epsilon(n)\, \hat{y}(n-k), \qquad k = 1, 2, \ldots, M

b_k(n+1) = b_k(n) + \mu\, \epsilon(n)\, u(n-k), \qquad k = 0, 1, \ldots, M.
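As an illustration, the recursions above can be sketched numerically. The following is a minimal simulation sketch; the second-order "unknown" system, the step size, and the compensation filter are illustrative choices of ours, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

M = 2                       # model order
mu = 1e-3                   # adaptation step size (mu > 0, small)
c = np.array([1.0, 0.3])    # compensation filter C(z) = 1 + 0.3 z  (c_0 = 1)

# Illustrative "unknown" system (sufficient-order here, for simplicity):
# y(n) = 0.6 y(n-1) - 0.1 y(n-2) + 0.5 u(n) + 0.2 u(n-1)
N = 20000
u = rng.standard_normal(N)
y = np.zeros(N)
for n in range(2, N):
    y[n] = 0.6 * y[n-1] - 0.1 * y[n-2] + 0.5 * u[n] + 0.2 * u[n-1]

a = np.zeros(M)        # a_1, ..., a_M
b = np.zeros(M + 1)    # b_0, ..., b_M
yhat = np.zeros(N)
e = np.zeros(N)        # output error y(n) - yhat(n)

for n in range(max(M + 1, len(c)), N):
    # yhat(n) = -sum_k a_k(n) yhat(n-k) + sum_k b_k(n) u(n-k)
    yhat[n] = -a @ yhat[n-1:n-1-M:-1] + b @ u[n:n-M-1:-1]
    e[n] = y[n] - yhat[n]
    eps = c @ e[n:n-len(c):-1]           # eps(n) = sum_k c_k [y(n-k) - yhat(n-k)]
    a -= mu * eps * yhat[n-1:n-1-M:-1]   # a_k(n+1) = a_k(n) - mu eps(n) yhat(n-k)
    b += mu * eps * u[n:n-M-1:-1]        # b_k(n+1) = b_k(n) + mu eps(n) u(n-k)

mse_early = np.mean(e[100:2100]**2)
mse_late = np.mean(e[-2000:]**2)
```

For this mild example the associated SPR-type condition appears to hold with the compensation filter chosen above, and the output error power decreases as the coefficients adapt.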
Here \mu > 0 is a small adaptation step size, and C(z) = 1 + c_1 z + \cdots + c_M z^M is a compensation filter with user-chosen coefficients {c_k}. Variants of the basic algorithm include HARF [4], which uses a posteriori filtered regressors; Landau's algorithm [1], which uses a matrix gain sequence in place of \mu; and the Feintuch algorithm, in which C(z) = 1. This algorithm class is globally convergent (strongly [9] or weakly [10]) in the sufficient order case (i.e., when deg H(z) \le M), provided the compensation filter is chosen correctly.

Convergence behavior in more realistic undermodeled (or reduced-order) cases is more difficult to analyze, since the algorithm is not a gradient descent procedure, although results are available in [5] and [6]. With respect to Fig. 1, suppose the larger order system H(z) is split into a reduced-order part, of degree not exceeding M but otherwise unspecified, plus the remaining undermodeled dynamics and measurement noise. If the undermodeled dynamics and measurement noise are momentarily set to zero, we recover a sufficient-order setting, and global asymptotic convergence may be shown from passivity arguments applied to an internal energy function. Once the undermodeled dynamics and measurement noise are added back to the system, the internal energy function is shown to remain asymptotically bounded under the excitation conditions specified in [5] and [6], which implies the absence of divergence.

Let B_*(z)/A_*(z) be the (as yet unspecified) reduced-order subsystem in Fig. 1. The passivity arguments from [5] and [6] assume that the compensation filter C(z) is chosen such that C(z)/A_*(z) is strictly positive real (SPR). To the extent that the structure of the reduced-order part B_*(z)/A_*(z) is unspecified, so is the set of admissible compensation filters which ensure stability of the adaptive system. As argued in [8], some meaning can be given to the decomposition of Fig. 1 provided the reduced-order system is that obtained at a mean asymptotic stationary point of the algorithm, if one exists. If so, then the energy function from [5] and [6] would measure the deviation between the parameter estimates and their mean values, and bounding the energy function would constrain the identifier to stay in a neighborhood of the stationary point.

A mean asymptotic stationary point corresponds, of course, to those parameter values {a_k} and {b_k} which, if held fixed, would render the error \epsilon(n) orthogonal to the regressor signals, i.e.,

E[\epsilon(n)\, u(n-k)] = 0, \qquad k = 0, 1, \ldots, M    (1)

E[\epsilon(n)\, \hat{y}(n-k)] = 0, \qquad k = 1, 2, \ldots, M.    (2)
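The expectations in (1) and (2) can be estimated by long time averages for any fixed candidate model. A small sketch follows; the first-order system, model, and compensator are illustrative choices of ours, deliberately mismatched so that the averages do not vanish:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
u = rng.standard_normal(N)

# illustrative first-order "unknown" system: y(n) = 0.7 y(n-1) + u(n)
y = np.zeros(N)
for n in range(1, N):
    y[n] = 0.7 * y[n-1] + u[n]

# fixed (non-adapting) first-order candidate model and C(z) = 1 + 0.2 z
a1, b0, b1, c1 = -0.6, 0.8, 0.1, 0.2
yhat = np.zeros(N)
for n in range(1, N):
    yhat[n] = -a1 * yhat[n-1] + b0 * u[n] + b1 * u[n-1]

e = y - yhat
eps = e.copy()
eps[1:] += c1 * e[:-1]        # eps(n) = e(n) + c_1 e(n-1)

burn = 10
Eu = [np.mean(eps[burn:] * u[burn-k:N-k]) for k in (0, 1)]   # estimates of E[eps(n) u(n-k)]
Ey = np.mean(eps[burn:] * yhat[burn-1:N-1])                  # estimate of E[eps(n) yhat(n-1)]
print(Eu, Ey)     # nonzero: this arbitrary model is not a stationary point
```

At a stationary point all of these averages would vanish; driving them to zero in the coefficients {a_k} and {b_k} is exactly the nonlinear problem addressed below. (Here E[eps(n) u(n)] equals the lag-zero impulse-response mismatch h_0 - b_0 = 0.2, which the time average recovers.)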
Equations (1) and (2) may be obtained by appealing to averaging theory (e.g., [11]) or stochastic approximation theory (e.g., [10]). Either way, they are nonlinear in the filter coefficients {a_k}, and the existence of a stable transfer function \hat{H}(z) consistent with (1) and (2) has not previously been established for the undermodeled case. Our contribution is summarized in the following theorem.

Theorem 1: Suppose the input power spectral density function

S_u(z) = \sum_{k=-\infty}^{\infty} E[u(n)\, u(n-k)]\, z^k

is nonzero and bounded for all |z| = 1, and that the compensation filter C(z) is strictly minimum phase (no zeros in |z| \le 1). Equations (1) and (2) then admit a solution for which the resulting \hat{H}(z) is a stable transfer function.

Remark 1: In many studies [1], [3]–[6], [9], C(z) generates the numerator of a transfer function which should be SPR for convergence to apply; this constrains C(z) to be strictly minimum phase. The proof of Theorem 1, however, invokes no SPR condition.

III. THE EXISTENCE PROOF

We shall work in the transfer function space L^2, consisting of functions f(z) = \sum_{i=-\infty}^{\infty} f_i z^i having square-summable coefficients {f_i}. The causal projection (onto the Hardy space H^2) is denoted as

[f(z)]_+ = f_0 + f_1 z + f_2 z^2 + f_3 z^3 + \cdots

yielding a function analytic in |z| < 1. If f(z) and g(z) are (column) vector-valued, the inner product

\langle f(z), g(z) \rangle = \frac{1}{2\pi} \int_0^{2\pi} f(e^{j\omega})\, [g(e^{j\omega})]^\dagger \, d\omega = \sum_{i=-\infty}^{\infty} f_i\, g_i^\dagger

in which \dagger denotes conjugate transposition, becomes a matrix containing all pairwise scalar products, with immediate specialization to the scalar case. Introduce now the one-sided z-transform

\sum_{i=0}^{\infty} E[\epsilon(n)\, u(n-i)]\, z^i = [S_u(z)\, C(z)\, [H(z) - \hat{H}(z)]]_+.

If the output noise {\nu(\cdot)} is independent of the input {u(\cdot)}, then it makes no contribution to this expression. The following result on certain zeros of this function is adapted from [8, Sec. 9.7].

Theorem 2: If the rational function \hat{H}(z) fulfills

[S_u(z)\, C(z)\, [H(z) - \hat{H}(z)]]_+ = z^{M+1}\, V_M(z)\, Q(z)    (3)

for some stable and causal Q(z), where

V_M(z) = \frac{a_M + a_{M-1} z + \cdots + a_1 z^{M-1} + z^M}{1 + a_1 z + \cdots + a_{M-1} z^{M-1} + a_M z^M}

is a causal all-pass function whose zeros are the reciprocals of the poles of \hat{H}(z), then (1) and (2) are satisfied.

We show that there always exists a stable rational function \hat{H}(z), of degree not exceeding M, which fulfills this characterization. As this problem is nonlinear in the filter coefficients {a_k}, we reparameterize \hat{H}(z) so that the set of stable and causal functions occupies a convex region in the parameter space, using a normalized lattice parameterization [12]. To this end, we set A_M(z) = 1 + a_1 z + \cdots + a_{M-1} z^{M-1} + a_M z^M and \tilde{A}_M(z) = z^M A_M(z^{-1}), and then determine lower order polynomials from the Schur recursion

\begin{bmatrix} A_{k-1}(z) \\ z\, \tilde{A}_{k-1}(z) \end{bmatrix} = \frac{1}{\cos \theta_k} \begin{bmatrix} 1 & -\sin \theta_k \\ -\sin \theta_k & 1 \end{bmatrix} \begin{bmatrix} A_k(z) \\ \tilde{A}_k(z) \end{bmatrix}    (4)

in which \sin \theta_k = \tilde{A}_k(0)/A_k(0) and \cos \theta_k is nonnegative. It is well known [15] that this procedure will iterate successfully M times, yielding |\sin \theta_k| < 1 for k = M, \ldots, 1, if and only if A_M(z) is strictly minimum phase (all zeros in |z| > 1). Let us set s_k = \sin \theta_k and s = [s_1, \ldots, s_M]; the stability domain becomes the following convex open subset of \mathbb{R}^M:

D = \{ s : |s_k| < 1 \text{ for } k = 1, 2, \ldots, M \}.    (5)

Let V_k(z) = \tilde{A}_k(z)/A_M(z) for k = 0, 1, \ldots, M, and V(z) = [V_0(z), \ldots, V_M(z)]^t. Using the Beurling–Lax theorem [13], [8], one may show that the orthogonal complement in H^2 to the space spanned by V_0(z), \ldots, V_M(z) consists of all shifted versions of z\, V_M(z), i.e.,

0_{M+1} = \langle V(z), f(z) \rangle \;\Longleftrightarrow\; [f(z)]_+ = z\, V_M(z)\, g(z)    (6)

for some g(z) \in H^2. Given any numerator polynomial B(z) = b_0 + b_1 z + \cdots + b_M z^M, we may always determine coefficients \lambda_0, \ldots, \lambda_M such that

\hat{H}(z) = \frac{B(z)}{A_M(z)} = \sum_{k=0}^{M} \lambda_k\, V_k(z)

and then work with the parameters {s_k} and {\lambda_k} rather than the coefficients {a_k} and {b_k}.

We now state the chain of arguments which will establish Theorem 1. We first constrain the zeros of \hat{H}(z) as a function of the poles, in order to obtain a partial fulfillment of the frequency domain characterization (3). For given values {s_k}, which determine the denominator A_M(z), we may find coefficients {\lambda_k} (dependent on {s_k}) for which

[S_u(z)\, C(z)\, [H(z) - \hat{H}(z)]]_+ = z\, V_M(z)\, R(z)    (7)

for some R(z) \in H^2. To verify, we note from (6) that this amounts to choosing the coefficients {\lambda_k} such that

0_{M+1} = \langle V(z), \; S_u(z)\, C(z)\, [H(z) - \hat{H}(z)] \rangle.

Expanding \hat{H}(z) = \sum_{k=0}^{M} \lambda_k V_k(z), this equality reduces to the linear system

\langle V(z), \; S_u(z)\, C(z)\, H(z) \rangle = \langle V(z), \; S_u(z)\, C(z)\, V(z) \rangle \, [\lambda_0, \ldots, \lambda_M]^t.    (8)
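The Schur recursion (4) doubles as a stability test: a polynomial A_M(z) with unit constant term is strictly minimum phase exactly when all M Schur parameters have magnitude below one, i.e., when s lies in the domain D of (5). A NumPy sketch (the function name is ours):

```python
import numpy as np

def schur_parameters(a):
    """Schur parameters s = [s_1, ..., s_M] of A_M(z) = 1 + a[0] z + ... + a[M-1] z^M,
    computed with the recursion (4): sin(theta_k) = tilde A_k(0) / A_k(0)."""
    A = np.concatenate(([1.0], np.asarray(a, float)))   # ascending powers of z
    M = len(A) - 1
    s = np.full(M, np.nan)
    for k in range(M, 0, -1):
        At = A[::-1]                   # tilde A_k(z) = z^k A_k(1/z)
        sk = At[0] / A[0]
        s[k - 1] = sk
        if abs(sk) >= 1.0:             # recursion breaks down: not strictly minimum phase
            break
        ck = np.sqrt(1.0 - sk**2)      # cos(theta_k), nonnegative
        A = (A - sk * At)[:-1] / ck    # A_{k-1}(z); the degree drops by one
    return s

s_good = schur_parameters([-0.9, 0.2])   # zeros of 1 - 0.9 z + 0.2 z^2 at z = 2, 2.5
s_bad = schur_parameters([-2.5, 1.0])    # zeros of 1 - 2.5 z + z^2 at z = 0.5, 2
print(s_good)    # -> [-0.75, 0.2], inside D
print(s_bad)     # breakdown: not all |s_k| < 1
```

The first polynomial has all zeros in |z| > 1 and yields a point of D; the second has a zero inside the unit disk, and the recursion flags it at the first step.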
We show in the Appendix that the matrix on the right-hand side of (8) is invertible whenever C(z) is strictly minimum phase, which yields a unique solution for {\lambda_k}. The zeros of \hat{H}(z) = \sum_{k=0}^{M} \lambda_k V_k(z) are now constrained with respect to the poles, which are set by the Schur parameters s = \{s_k\}; we write V_k(z, s) and \hat{H}(z, s) to emphasize this dependence.

To complete our construction of \hat{H}(z) fulfilling (3), we must now choose the Schur parameters s such that the remainder function R(z) from (7) vanishes in its first M coefficients, i.e., R(z) = z^M Q(z) for some Q(z) \in H^2. This in turn can be written as

0 = \langle z^k, R(z) \rangle, \qquad k = 0, 1, \ldots, M-1.    (9)

But from (7) we can write R(z) as

R(z) = z^{-1}\, V_M(z^{-1}, s)\, [S_u(z)\, C(z)\, [H(z) - \hat{H}(z, s)]]_+

so that (9) becomes

0 = \langle z^k, \; z^{-1}\, V_M(z^{-1}, s)\, [S_u(z)\, C(z)\, [H(z) - \hat{H}(z, s)]]_+ \rangle, \qquad k = 0, 1, \ldots, M-1

or, upon rearranging,

0 = \langle z^k\, V_M(z, s), \; S_u(z)\, C(z)\, [H(z) - \hat{H}(z, s)] \rangle, \qquad k = 1, 2, \ldots, M    (10)
provided such a choice for s exists.

That such a choice indeed exists will be deduced by examining the limiting behavior of V_M(z, s) and \hat{H}(z, s) as s reaches any boundary point of the stability domain (5). Let \partial D denote the set of boundary points (i.e., points where at least one coefficient s_k has reached unit magnitude), such that the closure of D from (5) becomes \bar{D} = D \cup \partial D. We shall say that s belongs to the kth subboundary \partial D_k if

|s_i| \;\begin{cases} \le 1, & i < k \\ = 1, & i = k \\ < 1, & i > k \end{cases}

so that \partial D = \cup_k\, \partial D_k. The zeros of both V_M(z, s) and \hat{H}(z, s) are constrained functions of the poles, and the following two lemmas claim that the constraints in question are sufficient to ensure stability of V_M(z, s) and \hat{H}(z, s) as any poles reach the unit circle.

Lemma 3: If s \in \partial D_k, then V_M(z, s) reduces to a stable and causal all-pass function of degree M - k, parameterized by s_k = \pm 1 and s_{k+1}, \ldots, s_M, and is odd with respect to s along \partial D:

V_M(z, -s) = -V_M(z, s), \qquad \text{for all } s \in \partial D.

Lemma 4: Suppose the input spectral density function S_u(z) is bounded and nonzero for all |z| = 1 and that the compensation filter C(z) is strictly minimum phase. If s \in \partial D_k, then \hat{H}(z, s) reduces to a stable and causal function of degree not exceeding M - k, parameterized by s_k = \pm 1 and s_{k+1}, \ldots, s_M, and is even with respect to s along \partial D:

\hat{H}(z, -s) = \hat{H}(z, s), \qquad \text{for all } s \in \partial D.
The proofs are deferred to Section IV. We now exploit the following Borsuk fixed point theorem [14, p. 46].

Theorem 5: Let \bar{D} be a closed, bounded, symmetric, and convex subset of \mathbb{R}^M, and let F be a continuous mapping from \bar{D} to \mathbb{R}^M. If F is odd along the boundary, i.e.,

F(-s) = -F(s), \qquad \text{for all } s \in \partial D    (11)

then F admits a fixed point in \bar{D}, i.e., F(s_*) = s_* for some s_* \in \bar{D}.

The same conditions on F also ensure the existence of a zero in \bar{D}, by considering the continuous mapping G(s) = s - F(s) from \bar{D} to \mathbb{R}^M; the condition (11) yields G(-s) = -G(s) for all s \in \partial D, which implies G(s_*) = s_* for some s_* \in \bar{D}, or F(s_*) = 0.

To apply this result, we set

F(s) = \left\langle \begin{bmatrix} z\, V_M(z, s) \\ z^2\, V_M(z, s) \\ \vdots \\ z^M\, V_M(z, s) \end{bmatrix}, \;\; S_u(z)\, C(z)\, [H(z) - \hat{H}(z, s)] \right\rangle.
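In the scalar case M = 1 the zero-existence argument reduces to the intermediate value theorem: oddness of F along the boundary {-1, +1} of D = (-1, 1) forces F(-1) = -F(1), so F changes sign over [-1, 1]. A sketch with an arbitrary illustrative map (not the F(s) defined above):

```python
def F(s):
    # continuous on [-1, 1] with F(-1) = -F(1) (odd along the boundary),
    # though not odd in the interior
    return s - 0.3 * (1.0 - s**2)

assert F(-1.0) == -F(1.0)       # oddness along the boundary
lo, hi = -1.0, 1.0
assert F(lo) * F(hi) <= 0       # sign change => a zero exists in [-1, 1]
for _ in range(60):             # bisection locates it
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if F(lo) * F(mid) <= 0 else (mid, hi)
s_star = 0.5 * (lo + hi)
print(s_star)                   # a zero of F in the closed stability domain
```

For M > 1 no sign-change argument is available, which is why the full Borsuk theorem is needed.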
Since inner products are linear functions of their arguments, Lemmas 3 and 4 imply easily that F(s) is odd along the boundary, i.e., F(-s) = -F(s) for all s \in \partial D. Theorem 5 then implies F(s_*) = 0 for some s_* \in \bar{D}, as desired in view of (10).

Remark 2: With s_* denoting a zero of F(s), the function \hat{H}(z, s_*) is the transfer function obtained at a stationary point. Although its existence is established, a closed-form solution for \hat{H}(z, s_*) is not obtained. In [17], however, we show that a constructive procedure can be developed when the input {u(\cdot)} is white noise, using interpolation theory. A corresponding result for nonwhite inputs is still an open issue.

Remark 3: Depending on the unknown system H(z), a stationary point may occur along the boundary \partial D. In this case, Lemma 4 shows that \hat{H}(z, s_*) suffers pole-zero cancellations. As (1) and (2) depend only on external signals, the locations of any such pole-zero cancellations are theoretically indeterminate; see [8, pp. 541–543] for a simulation example of this phenomenon.

IV. PROOFS OF LEMMAS

We examine the functions V_k(z, s) and \hat{H}(z, s) as s \to \partial D. With V(z, s) = [V_0(z, s), \ldots, V_M(z, s)]^t, we may use the Schur recursion (4) to express V(z, s) in terms of z\, V(z, s); solving for V(z, s) leads to
V(z; s) = (I 0 z F(s))01g(s)
(12)
in which the terms appear elementwise (indexed from zero) as
0 i j+1 jl=i+1 cl ; i j < M j =i01 i j < i 0 1 or j = M (13) M [g(s)]i = si cl l=i+1 with the conventions s0 +1; si = sin i ; and cl = cos l . Since hV(z; s); V(z; s)i = I for all s (e.g., [12] and [8]), the pair [F(s)]ij =
ss c; 0;
[F(s); g(s)] is orthogonal
F(s)[F(s)]y + g(s)[g(s)]y = IM +1 ;
One may now check that, as s smoothly to the forms
F(s) =
F1 (s)
F2 (s)
;
: for all s 2 D
! @ Dk ;
g(s) =
(14)
F(s) and g(s) tend
0k ; g2 (s)
s 2 @ Dk
(15) where F1 (s) has dimensions k 2 k, and F2 (s) and g2 (s) depend only on sk = 61 and sk+1 ; 1 1 1 ; sM . Along the subboundary @ Dk , the Lyapunov equation (14) decouples into two equations, namely
F1 F1y = Ik ; F2 F2y + g2 g2y = IM +10k : The first equality shows that F1 is orthogonal and hence contains k eigenvalues on the unit circle, corresponding to the k unit-circle zeros of AM (z; s) when s 2 @ Dk (e.g., [15]). The term F2 must then contain the eigenvalues inside the unit circle, and the second equality shows that (F2 ; g2 ) is a stable controllable pair.
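The elementwise forms (13), the orthogonality property (14), and the boundary behavior (15) are straightforward to check numerically. A sketch building F(s) and g(s) per (13), with c_l = cos(theta_l) = sqrt(1 - s_l^2) (the function name is ours, and all parameter values are arbitrary test points):

```python
import numpy as np

def lattice_Fg(s):
    """F(s) and g(s) from Schur parameters s = [s_1, ..., s_M], per (13);
    entries are indexed from zero, with s_0 = +1 and c_l = cos(theta_l)."""
    M = len(s)
    s = np.concatenate(([1.0], np.asarray(s, float)))   # prepend the convention s_0 = +1
    c = np.sqrt(1.0 - s**2)                             # c[0] is never used
    F = np.zeros((M + 1, M + 1))
    g = np.zeros(M + 1)
    for i in range(M + 1):
        for j in range(M):                   # last column (j = M) stays zero
            if i <= j:
                F[i, j] = -s[i] * s[j + 1] * np.prod(c[i + 1 : j + 1])
            elif j == i - 1:
                F[i, j] = c[i]
        g[i] = s[i] * np.prod(c[i + 1 : M + 1])
    return F, g

# (14): F F^t + g g^t = I at a random point s in the open domain D
rng = np.random.default_rng(0)
s = rng.uniform(-0.99, 0.99, size=4)
F, g = lattice_Fg(s)
err14 = np.linalg.norm(F @ F.T + np.outer(g, g) - np.eye(len(g)))

# (15): on the subboundary with s_2 = 1 (so k = 2), the first k entries of g vanish
Fb, gb = lattice_Fg([0.4, 1.0, -0.3, 0.2])
print(err14, gb[:2])     # err14 near machine precision; gb[:2] = [0, 0]
```

As a cross-check for M = 1, (I - zF)^{-1} g with these entries reproduces V_0 = c_1/(1 + s_1 z) and V_1 = (s_1 + z)/(1 + s_1 z), as expected from V_k = \tilde{A}_k/A_M.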
Now, along \partial D_k, the final M - k + 1 entries V_k(z, s), \ldots, V_M(z, s) from (12) reduce to (I - z\, F_2(s))^{-1} g_2(s). One may check from (13) that F_2(-s) = F_2(s) and g_2(-s) = -g_2(s) for all s \in \partial D_k. This implies that, for all s \in \partial D_k,

V_i(z, -s) = -V_i(z, s), \qquad i = k, k+1, \ldots, M.    (16)

Since the passage from F to F_2 removes k states, each V_i(z, s), for i \ge k, drops in degree from M to M - k. The choice i = M in (16) gives Lemma 3.

We now examine the linear combination \hat{H}(z, s) = \sum_k \lambda_k\, V_k(z, s), and the behavior of the coefficients \{\lambda_k\} obtained from (8), as s \to \partial D_k. Suppose first that s \in D. From (12) we obtain

\langle V(z, s), \; z^i\, V(z, s) \rangle = \begin{cases} F^i, & i > 0 \\ I, & i = 0 \\ (F^\dagger)^{|i|}, & i < 0. \end{cases}    (17)

Since |\alpha_1| = \cdots = |\alpha_k| = 1, the matrix q(F_1) is invertible provided that S_u(z)\, C(z) \ne 0 for all |z| = 1. If we now substitute the forms
APPENDIX

Here we show the invertibility of the nonsymmetric matrix

q(F) = \left\langle \begin{bmatrix} V_k(z, s) \\ \vdots \\ V_M(z, s) \end{bmatrix}, \;\; S_u(z)\, C(z) \begin{bmatrix} V_k(z, s) \\ \vdots \\ V_M(z, s) \end{bmatrix} \right\rangle    (19)

whenever C(z) = 1 + c_1 z + \cdots + c_N z^N is strictly minimum phase, with N an arbitrary integer. We assume s \in D if k = 0 as in (8), or s \in \partial D_k if k > 0 as in (18). We begin by writing

(19) = \sum_{i=0}^{N} c_i \left\langle \begin{bmatrix} V_k(z) \\ \vdots \\ V_M(z) \end{bmatrix}, \;\; S_u(z)\, z^i \begin{bmatrix} V_k(z) \\ \vdots \\ V_M(z) \end{bmatrix} \right\rangle = \sum_{i=0}^{N} c_i\, R_i

in which \{R_0, \ldots, R_N\} is a matrix autocorrelation sequence, because the filters V_i(z) are stable and S_u(z) is a spectral density
function. By stochastic realization theory (e.g., [18]), we may find an asymptotically stable realization of the form

x(n+1) = F\, x(n) + G\, w(n)
y(n) = H\, x(n)

with the property that if E[w(n)\, w^\dagger(m)] = \delta_{n-m}\, I, then E[y(n)\, y^\dagger(m)] = R_{n-m} for 0 \le n - m \le N. If this system is put in input normal form (I = F F^\dagger + G G^\dagger), a calculation shows that

R_i = E[y(n+i)\, y^\dagger(n)] = H\, F^i\, H^\dagger, \qquad i = 0, 1, \ldots, N.

The matrix (19) may then be written as

(19) = H \left( \sum_{i=0}^{N} c_i\, F^i \right) H^\dagger.

Now, R_0 = H H^\dagger > 0, so that H has full row rank. As such, (19) is singular only if \sum_i c_i F^i is singular. If \{\alpha_k\} are the eigenvalues of F, with |\alpha_k| < 1, those of \sum_i c_i F^i become C(\alpha_k). But C(z) \ne 0 for all |z| \le 1 if C(z) is strictly minimum phase, so that (19) must be invertible.

REFERENCES

[1] I. D. Landau, “Unbiased recursive identification using model reference adaptive techniques,” IEEE Trans. Automat. Contr., vol. 21, pp. 194–202, Apr. 1976.
[2] P. L. Feintuch, “An adaptive recursive LMS filter,” Proc. IEEE, vol. 64, no. 11, pp. 1622–1624, Nov. 1976.
[3] M. G. Larimore, J. R. Treichler, and C. R. Johnson, Jr., “SHARF: An algorithm for adapting IIR digital filters,” IEEE Trans. Acoust., Speech, Signal Processing, vol. 28, pp. 428–440, Aug. 1980.
[4] C. R. Johnson, Jr., “A convergence proof for a hyperstable adaptive recursive filter,” IEEE Trans. Inform. Theory, vol. 25, pp. 745–749, Nov. 1979.
[5] C. R. Johnson, Jr. and B. D. O. Anderson, “Sufficient excitation and stable reduced-order adaptive IIR filtering,” IEEE Trans. Acoust., Speech, Signal Processing, vol. 29, pp. 1212–1215, Dec. 1981.
[6] B. D. O. Anderson and C. R. Johnson, Jr., “On reduced-order adaptive output error identification and adaptive IIR filtering,” IEEE Trans. Automat. Contr., vol. 27, pp. 927–933, Aug. 1982.
[7] H. Fan and M. Nayeri, “On reduced-order identification: Revisiting ‘On some system identification techniques for adaptive filtering,’” IEEE Trans. Circuits Syst., vol. 37, pp. 1144–1151, Sept. 1990.
[8] P. A. Regalia, Adaptive IIR Filtering in Signal Processing and Control. New York: Marcel Dekker, 1995.
[9] L. Ljung, “On positive real transfer functions and the convergence of some recursive schemes,” IEEE Trans. Automat. Contr., vol. 22, pp. 539–551, Aug. 1977.
[10] H. Fan, “Application of Benveniste’s convergence results in the study of adaptive IIR filtering algorithms,” IEEE Trans. Inform. Theory, vol. 34, pp. 692–709, July 1988.
[11] V. Solo and X. Kong, Adaptive Signal Processing Algorithms: Stability and Performance. Englewood Cliffs, NJ: Prentice-Hall, 1995.
[12] A. H. Gray, Jr. and J. D. Markel, “A normalized digital filter structure,” IEEE Trans. Acoust., Speech, Signal Processing, vol. 23, pp. 268–277, June 1975.
[13] A. Beurling, “On two problems concerning linear transformations in Hilbert space,” Acta Mathematica, vol. 81, pp. 239–255, 1949.
[14] J. Dugundji and A. Granas, Fixed Point Theory. Warsaw, Poland: PWN—Polish Scientific, 1982.
[15] P. P. Vaidyanathan and S. K. Mitra, “A unified structural interpretation of some well known stability test procedures for linear systems,” Proc. IEEE, vol. 75, no. 4, pp. 478–497, Apr. 1987.
[16] P. A. Regalia, M. Mboup, and M. Ashari, “On the existence of stationary points for the Steiglitz–McBride algorithm,” IEEE Trans. Automat. Contr., vol. 42, pp. 1592–1596, Nov. 1997.
[17] ——, “Existence of stationary points for reduced-order hyperstable adaptive IIR filtering algorithms,” in Proc. ICASSP, Munich, Germany, Apr. 1997, vol. 5, pp. 3601–3604.
[18] P. Delsarte, Y. V. Genin, and Y. Kamp, “Orthogonal polynomial matrices on the unit circle,” IEEE Trans. Circuits Syst., vol. 25, pp. 149–160, Mar. 1978.

A Unified Approach for Stability Analysis of a Class of Time-Varying Nonlinear Systems

Chun-Bo Feng and Shumin Fei

Abstract—It is shown that an integrator with negative feedback from its output terminal through a strictly passive system is still strictly passive. By using successive negative feedback around integrators, the conditions of strict passivity and a unified approach for constructing strictly passive systems are obtained for time-varying nonlinear systems of different orders. A new concept of “dissipation factor” is defined in this paper. The stability of a class of time-varying nonlinear systems is examined by using this passivity analysis.

Index Terms—Passive system, passivity analysis, stability analysis, time-varying nonlinear system.
I. INTRODUCTION

The stability analysis of general time-varying nonlinear systems is a difficult problem of long standing. Up to now, the most important method for treating this problem has been Lyapunov's direct method. The solution of the stability problem by Lyapunov's direct method depends on how the Lyapunov function is constructed, and the usefulness of this method is limited because there is no general method for constructing appropriate Lyapunov functions. The input–output (I/O) analysis of dynamic systems has drawn much attention since the 1970s. The notion of “passivity,” taken from the analogy to electric circuits, has been widely used in analyzing the stability of a general class of dynamic systems, including time-varying nonlinear systems (see [1]). Stability analysis from the point of view of passivity has led to Lyapunov-theoretic counterparts of many results obtained from the I/O point of view [2]–[4]. In [2], the conditions under which a nonlinear system can be rendered passive via smooth state feedback are studied for an affine time-invariant smooth nonlinear system. This problem is fully solved if there exists a C^2 storage function for the system. Yet how to search for the storage function, or to determine the existence of a C^2 storage function, is still unsolved. The coprime factorization of the causal operator and I/O stabilization with state feedback have been studied by Sontag [5], [6] for a general time-invariant nonlinear system. Though the question of when a finite-dimensional time-invariant nonlinear system can be rendered passive via smooth state feedback has been solved, the question of how to design a strictly passive time-varying nonlinear system of different orders is still open. The passivity analysis of nonlinear systems is applicable not only to stability analysis, but also to network synthesis, robustness analysis, and adaptive control of nonlinear systems [7], [8], and so on.

Manuscript received May 14, 1997. Recommended by Associate Editor, A. J. van der Schaft. This work was supported by the NSFC and the National Climbing Project of China. The authors are with the Research Institute of Automation, Southeast University, Nanjing, 210096 China (e-mail: [email protected]). Publisher Item Identifier S 0018-9286(99)02123-6.