Integral-Input-Output to State Stability
Brian Ingalls
Department of Mathematics - Hill Center, Rutgers, The State University of New Jersey, 110 Frelinghuysen Rd., Piscataway, NJ 08854-8019. FAX: 732-445-5530
[email protected]
June 15, 2001
Abstract
A notion of detectability for nonlinear systems is discussed. Within the framework of "input to state stability" (ISS), a dual notion of "output to state stability" (OSS) and a more complete detectability notion, "input-output to state stability" (IOSS), have appeared in the literature. This note addresses a variant of the IOSS property, using an integral norm to measure signals, as opposed to the standard supremum norm that appears in ISS theory.
Keywords: detectability, zero-detectability, input to state stability, Lyapunov function, input-output to state stability, norm observer.

1 Introduction
We consider stability features for the system with output:
$$\dot x(t) = f(x(t), u(t)), \qquad y(t) = h(x(t)), \qquad (1)$$
where $x \in \mathbb{R}^n$. The function $f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$ is assumed jointly continuous in $x$ and $u$, and locally Lipschitz in $x$, uniformly in $u$. The output map $h : \mathbb{R}^n \to \mathbb{R}^p$ is assumed locally Lipschitz, and we suppose $f(0,0) = 0$ and $h(0) = 0$. Inputs $u(\cdot)$ take values in some set $\mathbb{U} \subseteq \mathbb{R}^m$ (where $\mathbb{U} = \mathbb{R}^m$ unless otherwise stated). The notion of input to state stability (ISS), introduced in [22], provides a theoretical framework in which to formulate questions of robustness with respect to inputs (seen as disturbances) acting on a system. An ISS system is, roughly, one which has a "finite nonlinear gain" with respect to inputs and whose transient behavior can be bounded in terms of the size of the initial state; the precise definition is in terms of class $\mathcal{K}$ function gains. The theory of ISS systems now forms an integral part of several texts ([4, 6, 8, 12, 13, 21]) as well as expository and research articles (see e.g. [7, 9, 14, 17] as well as the recent [27]).
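For concreteness, trajectories $x(t,\xi,u)$ and outputs $y(t,\xi,u)$ of a system of the form (1) can be approximated numerically by a forward-Euler scheme. The sketch below is illustrative only; the scalar choices $f(x,u) = -x + u$ and $h(x) = x$ are hypothetical examples, not taken from the text.

```python
import math

def simulate(f, h, xi, u, T, dt=1e-3):
    """Forward-Euler approximation of x(t; xi, u) and y(t; xi, u) for system (1)."""
    x = xi
    xs, ys = [x], [h(x)]
    for k in range(int(T / dt)):
        x = x + dt * f(x, u(k * dt))  # Euler step: x_{k+1} = x_k + dt*f(x_k, u(t_k))
        xs.append(x)
        ys.append(h(x))
    return xs, ys

# Hypothetical example system: xdot = -x + u, y = x, zero input, initial state 1.
xs, ys = simulate(lambda x, u: -x + u, lambda x: x, xi=1.0, u=lambda t: 0.0, T=5.0)
```

With zero input the computed state decays like $e^{-t}$, as expected for the unforced linear dynamics.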
Supported in part by US Air Force Grant F49620-98-1-0242
Within the framework of ISS, a natural notion of detectability, and a dual to the ISS property, is the notion of output to state stability (OSS) addressed in [23, 24]. These references include a characterization of OSS in terms of a Lyapunov (or "storage") function, as well as a discussion of the roles of OSS and the more general property of input-output to state stability (IOSS) in nonlinear observer theory. The IOSS property was further addressed in [11]. In each of the notions mentioned thus far, signals (i.e. inputs and outputs) are measured by a supremum (or $L^\infty$) norm. In many cases, it may be more natural to use an integral (or $L^1$-type) norm, which corresponds to a measure of the "total energy" of the signal. A variant of ISS using this norm, called integral-ISS (iISS), was introduced in [26] and further studied in [1]. This paper addresses a combination of the ideas described above: namely, a notion of detectability making use of integral norms. This property is formulated as a natural combination of the IOSS and iISS properties. It was introduced as integral-input-output to state stability (iIOSS) in [11]. This notion has been called "integral-detectability" by Morse and Hespanha [19] and is closely related to the notion of a "convergent observer" used by Krener in [10]. In addition, all systems which are passive in the sense of [13] automatically satisfy the iIOSS property (cf. Remark 12 in [24]). The main result in this paper is a characterization of the iIOSS property in terms of the existence of an appropriate Lyapunov function. In general, the result provides for the existence of a continuous Lyapunov function, though we indicate an important case where the construction can be extended to show the existence of a smooth function.
Such Lyapunov characterizations for detectability notions are especially insightful, since in some cases the notion of detectability has been defined in terms of the existence of an appropriate Lyapunov (or "storage") function (e.g. [16, 18]). While we refer to IOSS and iIOSS as notions of detectability, they should more precisely be called notions of zero-detectability, as they characterize the property that the information from the output is sufficient to deduce stability of the state to the origin. For linear systems, such a property is equivalent to "full-state detectability", the property which allows construction of an observer which tracks arbitrary trajectories. For nonlinear systems, a zero-detectability condition cannot guarantee the existence of a "complete" observer. Given a nonlinear system which satisfies a zero-detectability property, the most one may expect is to be able to construct a norm observer which provides a bound on how far the state is from the origin. The existence of norm observers for IOSS systems was addressed in [11]. We shall see that a similar construction for iIOSS systems follows immediately from the definitions.

1.1 Basic Definitions and Notation
The Euclidean norm in a space $\mathbb{R}^k$ is denoted simply by $|\cdot|$. For each interval $I \subseteq \mathbb{R}$ and any measurable function $u : I \to \mathbb{R}^k$, we will use $\|u\|_I$ to denote the (essential) supremum norm of $u(\cdot)$ over $I$; that is, $\|u\|_I = \operatorname{ess\,sup}\{|u(t)| : t \in I\}$. An input (or control) will be a measurable, locally essentially bounded function $u : I \to \mathbb{R}^m$, where $I$ is a subinterval of $\mathbb{R}$ which contains the origin, such that $u(t) \in \mathbb{U} \subseteq \mathbb{R}^m$ for almost all $t \in I$. Unless otherwise specified, we assume $I = \mathbb{R}_{\geq 0}$. For each initial state $\xi$ and input $u$ we let $x(t, \xi, u)$ denote the unique maximal solution of (1), and we write the output signal as $y(t, \xi, u) := h(x(t, \xi, u))$. A system is forward complete if each $\xi \in \mathbb{R}^n$ and each input $u$ defined on $\mathbb{R}_{\geq 0}$ produce a solution $x(t, \xi, u)$ which is defined for all $t \geq 0$.

A function $\gamma : \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ is of class $\mathcal{K}$ (or a "$\mathcal{K}$ function") if it is continuous, positive definite, and strictly increasing; it is of class $\mathcal{K}_\infty$ if in addition it is unbounded. A function $\gamma : \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ is of class $\mathcal{L}$ if it is continuous, decreasing, and tends to zero as its argument tends to $+\infty$. A function $\beta : \mathbb{R}_{\geq 0} \times \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ is of class $\mathcal{KL}$ if for each fixed $t \geq 0$, $\beta(\cdot, t)$ is of class $\mathcal{K}$ and for each fixed $s \geq 0$, $\beta(s, \cdot)$ is of class $\mathcal{L}$. To formulate the statement that a nonsmooth function decreases in an appropriate manner, we will make use of the notion of the viscosity subgradient (cf. [3]).
Definition 1.1 A vector $\zeta \in \mathbb{R}^n$ is a viscosity subgradient of the function $V : \mathbb{R}^n \to \mathbb{R}$ at $\xi \in \mathbb{R}^n$ if there exist a function $g : \mathbb{R}^n \to \mathbb{R}$ satisfying $\lim_{h \to 0} g(h)/|h| = 0$ and a neighbourhood $\mathcal{O} \subseteq \mathbb{R}^n$ of the origin so that
$$V(\xi + h) - V(\xi) - \zeta \cdot h \geq g(h)$$
for all $h \in \mathcal{O}$. The (possibly empty) set of viscosity subgradients of $V$ at $\xi$ is called the viscosity subdifferential and is denoted $\partial_D V(\xi)$. We remark that if $V$ is differentiable at $\xi$, then $\partial_D V(\xi) = \{\nabla V(\xi)\}$. $\Box$

2 The Integral-Input-Output to State Stability Property
The main property of interest in this paper is the following.
Definition 2.1 We say that a forward complete system (1) satisfies the integral-input-output to state stability property (iIOSS) if there exist $\alpha \in \mathcal{K}_\infty$, $\beta \in \mathcal{KL}$, and $\gamma_1, \gamma_2 \in \mathcal{K}$ so that for every initial point $\xi \in \mathbb{R}^n$ and every input $u$,
$$\alpha(|x(t,\xi,u)|) \leq \beta(|\xi|, t) + \int_0^t \left[\gamma_1(|y(s,\xi,u)|) + \gamma_2(|u(s)|)\right] ds \qquad (2)$$
for all $t \geq 0$. $\Box$
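To illustrate Definition 2.1 (a hypothetical example, not from the text): for the scalar system $\dot x = -x + u$, $y = x$, variation of parameters gives $|x(t)| \leq |\xi| e^{-t} + \int_0^t |u(s)|\,ds$, so a bound of the form (2) holds with $\alpha(s) = s$, $\beta(s,t) = s e^{-t}$, $\gamma_2(s) = s$, and any $\gamma_1 \in \mathcal{K}$. A crude numerical check along an Euler trajectory:

```python
import math

def check_iioss_bound(xi, u, T, dt=1e-3):
    """Check |x(t)| <= |xi|*exp(-t) + int_0^t |u(s)| ds along an Euler
    trajectory of xdot = -x + u (a hypothetical example system)."""
    x, integral, ok = xi, 0.0, True
    for k in range(int(T / dt)):
        t = k * dt
        ok = ok and (abs(x) <= abs(xi) * math.exp(-t) + integral + 1e-9)
        x = x + dt * (-x + u(t))      # Euler step
        integral += abs(u(t)) * dt    # running integral of gamma_2(|u|) = |u|
    return ok

bound_holds = check_iioss_bound(xi=2.0, u=lambda t: math.sin(3 * t), T=10.0)
```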
Remark 2.2 We note that, by causality, the iIOSS bound (2) can be expressed equivalently as
$$\alpha(|x(t,\xi,u)|) \leq \beta(|\xi|, t) + \int_0^t \gamma_1(|y(s,\xi,u)|)\, ds + \int_0^\infty \gamma_2(|u(s)|)\, ds \qquad (3)$$
for all $t \geq 0$. We will make use of this alternate description. $\Box$
Remark 2.3 Recall that a forward complete system (1) satisfies the input-output to state stability property (IOSS) if there exist $\beta \in \mathcal{KL}$ and $\gamma_1, \gamma_2 \in \mathcal{K}$ so that for every initial point $\xi \in \mathbb{R}^n$ and every input $u$,
$$|x(t,\xi,u)| \leq \beta(|\xi|, t) + \gamma_1(\|y(\cdot,\xi,u)\|_{[0,t]}) + \gamma_2(\|u\|_{[0,t]})$$
for all $t \geq 0$. It is natural to compare the notion of iIOSS to this analogous property. We will show as a consequence of our main result that an IOSS system is in particular an iIOSS system. It has been shown (in [11] and [1] respectively) that the iOSS property is strictly weaker than OSS, and that the iISS property is strictly weaker than ISS. Either of these results shows that iIOSS is a strictly weaker property than IOSS. $\Box$
Remark 2.4 It is an easy exercise to show that for linear systems the iIOSS property is equivalent to detectability. However, as mentioned above, for general systems as in (1), iIOSS is a notion of zero-detectability. Given that a system satisfies the iIOSS property, one cannot hope to build a complete observer for the system, but rather only a norm observer which measures how far the state is from the origin. The construction of norm observers for IOSS systems was addressed in [11], where it was shown that a system satisfies the IOSS property if and only if it admits an appropriate norm observer. For iIOSS systems, the situation is more transparent. It is immediate that if a system satisfies the iIOSS bound (2), then for the function $p : \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ defined by
$$\dot p(t) = \gamma_1(|y(t,\xi,u)|) + \gamma_2(|u(t)|), \qquad p(0) = 0,$$
the system will satisfy
$$\alpha(|x(t,\xi,u)|) \leq \beta(|\xi|, t) + p(t) \qquad \forall t \geq 0.$$
Thus the function $p(\cdot)$ provides an asymptotic upper bound for the size of the state, i.e. it is a norm observer for the system. To extend these ideas to the case where construction of a full-state observer may be possible, one must consider a notion of "complete" detectability for nonlinear systems. Such a notion was introduced in [24] under the name of incremental-IOSS. $\Box$

2.1 iIOSS Lyapunov Functions

Definition 2.5 We call a continuous function $V : \mathbb{R}^n \to \mathbb{R}_{\geq 0}$ an iIOSS Lyapunov function if there exist $\underline\alpha, \overline\alpha \in \mathcal{K}_\infty$, $\sigma_1, \sigma_2 \in \mathcal{K}$, and a continuous positive definite $\alpha : \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ so that
$$\underline\alpha(|\xi|) \leq V(\xi) \leq \overline\alpha(|\xi|) \qquad \forall \xi \in \mathbb{R}^n \qquad (4)$$
and
$$\zeta \cdot f(\xi, \mu) \leq -\alpha(|\xi|) + \sigma_1(|h(\xi)|) + \sigma_2(|\mu|) \qquad \forall \xi \in \mathbb{R}^n,\ \forall \mu \in \mathbb{R}^m \qquad (5)$$
for each $\zeta \in \partial_D V(\xi)$. $\Box$
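The norm observer of Remark 2.4 is simply an integrator driven by the measured signals. A minimal sketch, with the hypothetical gain choices $\gamma_1(s) = \gamma_2(s) = s$; in general one would use the gains from the iIOSS estimate (2):

```python
def norm_observer(ys, us, dt, gamma1=abs, gamma2=abs):
    """Return samples of p(t), where pdot = gamma1(|y|) + gamma2(|u|), p(0) = 0.
    By the iIOSS bound, alpha(|x(t)|) <= beta(|xi|, t) + p(t), so p(.) is an
    asymptotic upper bound on the (alpha-scaled) state norm."""
    p, ps = 0.0, [0.0]
    for y, u in zip(ys, us):
        p += dt * (gamma1(y) + gamma2(u))  # forward-Euler integration of pdot
        ps.append(p)
    return ps

ps = norm_observer(ys=[1.0, 0.5, 0.25], us=[0.0, 0.0, 0.0], dt=1.0)
```

Note that $p(\cdot)$ is nondecreasing; it is informative only in combination with the decaying term $\beta(|\xi|, t)$.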
We remark that the decrease statement (5) can be written equivalently in an integral formulation, using the following standard result.
Proposition 2.6 (e.g. [20], Proposition 14) Given a forward complete system as in (1), a continuous function $V : \mathbb{R}^n \to \mathbb{R}_{\geq 0}$, and a continuous function $w : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$, the following are equivalent:
1. For all $\xi \in \mathbb{R}^n$ and all $\mu \in \mathbb{R}^m$,
$$\zeta \cdot f(\xi, \mu) \leq w(\xi, \mu)$$
for each $\zeta \in \partial_D V(\xi)$.
2. For each $\xi \in \mathbb{R}^n$ and each input $u$, the solution $x(\cdot, \xi, u)$ satisfies
$$V(x(t,\xi,u)) - V(\xi) \leq \int_0^t w(x(s,\xi,u), u(s))\, ds$$
for any $t \geq 0$. $\Box$
Remark 2.7 Applying Proposition 2.6 with
$$w(\xi, \mu) = -\alpha(|\xi|) + \sigma_1(|h(\xi)|) + \sigma_2(|\mu|),$$
we conclude that the decrease statement (5) in the definition of an iIOSS Lyapunov function could be equivalently written as
$$V(x(t,\xi,u)) - V(\xi) \leq \int_0^t \left[-\alpha(|x(s,\xi,u)|) + \sigma_1(|h(x(s,\xi,u))|) + \sigma_2(|u(s)|)\right] ds \qquad (6)$$
for all $\xi \in \mathbb{R}^n$, all inputs $u$, and all $t \geq 0$. This alternative formulation will be used below. $\Box$

3 Lyapunov Characterization
Our main result is the following.

Theorem 1 Suppose system (1) is forward complete. The following are equivalent:
1. The system is iIOSS.
2. The system admits an iIOSS Lyapunov function.
Remark 3.1 The main result in [11] is a Lyapunov characterization of the IOSS property. It is shown in that reference that a system is IOSS if and only if it admits an IOSS Lyapunov function, which can be defined as an iIOSS Lyapunov function for which the function $\alpha$ is of class $\mathcal{K}_\infty$. Thus it is an immediate consequence of Theorem 1 that the IOSS property implies the iIOSS property. $\Box$
Remark 3.2 We will prove a slightly stronger statement than (2) $\Rightarrow$ (1) of Theorem 1. The proof below shows that the existence of a lower semicontinuous iIOSS Lyapunov function implies that a system is iIOSS. $\Box$

It is still an open question whether every iIOSS system admits a smooth iIOSS Lyapunov function. However, as a minor extension of the proof of Theorem 1 we will also show the following.
Lemma 3.3 Suppose system (1) is forward complete and has compact input value set $\mathbb{U}$. Then, if the system is iIOSS, it admits a smooth iIOSS Lyapunov function, i.e. there exist a smooth ($C^\infty$) function $V : \mathbb{R}^n \to \mathbb{R}_{\geq 0}$, functions $\underline\alpha, \overline\alpha \in \mathcal{K}_\infty$, $\sigma_1, \sigma_2 \in \mathcal{K}$, and a continuous positive definite $\alpha : \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ so that (4) holds and
$$\nabla V(\xi) \cdot f(\xi, \mu) \leq -\alpha(|\xi|) + \sigma_1(|h(\xi)|) + \sigma_2(|\mu|) \qquad \forall \xi \in \mathbb{R}^n,\ \forall \mu \in \mathbb{U}.$$

Remark 3.4 Lemma 3.3 provides, in particular, a Lyapunov characterization in terms of a smooth function for the property of integral-output to state stability (iOSS), which is defined as iIOSS for systems with no inputs (or, equivalently, with $\mathbb{U} = \{0\}$). $\Box$
3.1 Sufficiency

We begin with the proof of (2) $\Rightarrow$ (1) (sufficiency) in Theorem 1. Here we follow the sufficiency argument given in [1]. A few preliminary lemmas are needed.
Lemma 3.5 ([1], Lemma 4.1) Let $\alpha : \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ be a continuous positive definite function. Then there exist $\alpha_1 \in \mathcal{K}_\infty$ and $\alpha_2 \in \mathcal{L}$ such that
$$\alpha(s) \geq \alpha_1(s)\,\alpha_2(s) \qquad \forall s \geq 0. \qquad \Box$$

The following comparison result will be needed. This is a generalization of Corollary 4.3 in [1].
Proposition 3.6 Given any continuous positive definite $\alpha : \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$, there exists a $\mathcal{KL}$ function $\beta$ with the following property. For any $0 < \tilde t \leq \infty$, any lower semicontinuous function $y : [0, \tilde t\,) \to \mathbb{R}_{\geq 0}$, and any measurable, locally essentially bounded function $v : [0, \tilde t\,) \to \mathbb{R}_{\geq 0}$, if
$$y(t_2) \leq y(t_1) + \int_{t_1}^{t_2} \left[-\alpha(y(s)) + v(s)\right] ds \qquad \forall\, 0 \leq t_1 \leq t_2 < \tilde t, \qquad (7)$$
then
$$y(t) \leq \beta(y(0), t) + \int_0^t 2 v(s)\, ds \qquad \forall t \in [0, \tilde t\,). \qquad \Box$$
The following lemma will be needed to prove Proposition 3.6.
Lemma 3.7 Suppose given a locally Lipschitz positive definite function $\alpha : \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$, a time $0 < \tilde t \leq \infty$, and a measurable, locally essentially bounded function $v : [0, \tilde t\,) \to \mathbb{R}_{\geq 0}$. Let $y : [0, \tilde t\,) \to \mathbb{R}_{\geq 0}$ be any lower semicontinuous function which satisfies (7). Define $w(\cdot)$ to be the solution of the initial value problem
$$\dot w(t) = -\alpha(w(t)) + v(t), \qquad w(0) = y(0). \qquad (8)$$
Then $w(t)$ is defined for all $t \in [0, \tilde t\,)$ and
$$y(t) \leq w(t) \qquad \forall t \in [0, \tilde t\,).$$
Proof. (We follow the proof of Theorem III.4.1 in [5].) Let $y(\cdot)$ and $w(\cdot)$ be as above for given $\alpha$, $\tilde t$, and $v(\cdot)$. We first note that $w(\cdot)$ exists for all $t \in [0, \tilde t\,)$, since $\alpha$ is nonnegative and $v(\cdot)$ is essentially bounded on each finite interval. For each integer $n \geq 1$, let $w_n(\cdot)$ be the solution of
$$\dot w_n(t) = -\alpha(w_n(t)) + v(t) + \frac{1}{n}, \qquad w_n(0) = y(0), \qquad (9)$$
which is also defined on $[0, \tilde t\,)$. We will show that
$$y(t) \leq w_n(t) \qquad \forall t \in [0, \tilde t\,) \qquad (10)$$
for all $n \geq 1$. Indeed, suppose not. Then there exist $n \geq 1$ and $\tau \in [0, \tilde t\,)$ so that $y(\tau) > w_n(\tau)$. Let
$$t_0 := \sup\{t \in [0, \tau] : y(t) \leq w_n(t)\}.$$
Then, as $y(\cdot)$ is lower semicontinuous and $w_n(\cdot)$ is continuous, $y(t_0) \leq w_n(t_0)$. We claim that in fact $y(t_0) = w_n(t_0)$. If this were not the case, there would be numbers $\delta_1$, $\delta_2$ so that
$$y(t_0) < \delta_1 < \delta_2 < w_n(t_0). \qquad (11)$$
From (7), we have that
$$y(t_0 + t) \leq y(t_0) + \int_{t_0}^{t_0+t} \left[-\alpha(y(s)) + v(s)\right] ds$$
for each $t \in [0, \tilde t - t_0)$. Since
$$\lim_{t \to 0} \int_{t_0}^{t_0+t} \left[-\alpha(y(s)) + v(s)\right] ds = 0,$$
it follows from (11) that there is some $\varepsilon_1 > 0$ so that $y(t_0 + t) < \delta_1$ for all $t \in [0, \varepsilon_1]$. Since $w_n(\cdot)$ is continuous, (11) also gives an $\varepsilon_2 > 0$ so that $w_n(t_0 + t) > \delta_2$ for all $t \in [0, \varepsilon_2]$. Thus $y(t_0 + t) < w_n(t_0 + t)$ for all $t$ sufficiently small, which contradicts the definition of $t_0$. We conclude that $y(t_0) = w_n(t_0)$.
From (7) and Taylor's Theorem, we have that for $\varepsilon \in [0, \tilde t - t_0)$,
$$y(t_0 + \varepsilon) \leq y(t_0) + \int_{t_0}^{t_0+\varepsilon} \left[-\alpha(y(s)) + v(s)\right] ds = y(t_0) - \varepsilon\,\alpha(y(t_0)) + \varepsilon\, v(t_0) + o(\varepsilon),$$
and from (9),
$$w_n(t_0 + \varepsilon) = w_n(t_0) + \int_{t_0}^{t_0+\varepsilon} \left[-\alpha(w_n(s)) + v(s) + \frac{1}{n}\right] ds = w_n(t_0) - \varepsilon\,\alpha(w_n(t_0)) + \varepsilon\, v(t_0) + \frac{\varepsilon}{n} + o(\varepsilon),$$
where $o(\cdot)$ signifies a function satisfying $\lim_{t \to 0} o(t)/t = 0$. Since $w_n(t_0) = y(t_0)$, it follows that $y(t_0 + \varepsilon) \leq w_n(t_0 + \varepsilon)$ for $\varepsilon$ sufficiently small, a contradiction. Thus (10) holds for all $n \geq 1$.
We note that $w_n(t) \to w(t)$ uniformly on each finite time interval (cf. e.g. Theorem 1 in [25]). Thus for any $T \in [0, \tilde t\,)$, as (10) holds for all $n$,
$$y(t) \leq \lim_{n \to \infty} w_n(t) = w(t) \qquad \forall t \in [0, T].$$
As $T > 0$ is arbitrary, we conclude that $y(t) \leq w(t)$ for all $t \in [0, \tilde t\,)$. $\Box$

To complete the proof of Proposition 3.6 we will also need the following statement.
Lemma 3.8 ([1], Corollary 4.3) Given any continuous positive definite $\alpha : \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$, there exists a $\mathcal{KL}$ function $\beta$ with the following property. For any $0 < \tilde t \leq \infty$, any absolutely continuous function $w : [0, \tilde t\,) \to \mathbb{R}_{\geq 0}$, and any measurable, locally essentially bounded function $v : [0, \tilde t\,) \to \mathbb{R}_{\geq 0}$, if
$$\dot w(t) \leq -\alpha(w(t)) + v(t) \qquad (12)$$
for almost all $t \in [0, \tilde t\,)$, then
$$w(t) \leq \beta(w(0), t) + \int_0^t 2 v(s)\, ds \qquad \forall t \in [0, \tilde t\,). \qquad \Box$$
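For a concrete instance of this comparison estimate (hypothetical choices, not from the text): with $\alpha(s) = s$, any solution of $\dot w = -w + v$ satisfies $w(t) = w(0)e^{-t} + \int_0^t e^{-(t-s)} v(s)\,ds$, so the conclusion of Lemma 3.8 holds with $\beta(s,t) = s e^{-t}$ (here even without the factor 2). The sketch below integrates (12) as an equality and verifies the bound at every step:

```python
import math

def check_comparison(w0, v, T, dt=1e-3):
    """Integrate wdot = -w + v(t) (the case alpha(s) = s) by forward Euler and
    check w(t) <= w0*exp(-t) + int_0^t 2*v(s) ds at each step."""
    w, integral, ok = w0, 0.0, True
    for k in range(int(T / dt)):
        t = k * dt
        ok = ok and (w <= w0 * math.exp(-t) + integral + 1e-9)
        w = w + dt * (-w + v(t))     # Euler step for (12) taken with equality
        integral += 2.0 * v(t) * dt  # running integral of 2*v
    return ok

comparison_ok = check_comparison(w0=1.0, v=lambda t: 0.3 * (1 + math.sin(t)), T=8.0)
```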
The proof of Proposition 3.6 is a straightforward combination of Lemma 3.7 and Lemma 3.8.

Proof. (Proposition 3.6) Let a continuous positive definite $\alpha : \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ be given. Without loss of generality, we assume $\alpha$ is locally Lipschitz (otherwise we replace $\alpha$ by a locally Lipschitz positive definite function majorized by $\alpha$). Let $\beta$ be the $\mathcal{KL}$ function given by Lemma 3.8. Suppose $\tilde t$, $y(\cdot)$ and $v(\cdot)$ are as in the statement of the Proposition, so that (7) holds. Let $w(\cdot)$ be the solution of the initial value problem (8). Then Lemma 3.7 gives
$$y(t) \leq w(t) \qquad \forall t \in [0, \tilde t\,).$$
Also, since $w(\cdot)$ satisfies (12) (as an equality), Lemma 3.8 gives
$$w(t) \leq \beta(w(0), t) + \int_0^t 2 v(s)\, ds \qquad \forall t \in [0, \tilde t\,).$$
Since $w(0) = y(0)$, the result follows. $\Box$

We can now give the argument for sufficiency of the Lyapunov characterization. As mentioned earlier, this proof holds for lower semicontinuous Lyapunov functions.

Proof. (Theorem 1, (2) $\Rightarrow$ (1)) Suppose the function $V$ satisfies the definition of an iIOSS Lyapunov function for the forward complete system (1), with functions $\underline\alpha$, $\overline\alpha$, $\alpha$, $\sigma_1$ and $\sigma_2$ satisfying (4) and (5). Let $\alpha_1 \in \mathcal{K}_\infty$ and $\alpha_2 \in \mathcal{L}$ be functions as in Lemma 3.5 for $\alpha$. Let
$$\tilde\alpha(s) := \alpha_1(\overline\alpha^{-1}(s))\, \alpha_2(\underline\alpha^{-1}(s)).$$
By (4) and (6), we have, for each $\xi \in \mathbb{R}^n$ and each input $u$,
$$V(x(t_2,\xi,u)) \leq V(x(t_1,\xi,u)) + \int_{t_1}^{t_2} \left[-\alpha(|x(s,\xi,u)|) + \sigma_1(|h(x(s,\xi,u))|) + \sigma_2(|u(s)|)\right] ds$$
$$\leq V(x(t_1,\xi,u)) + \int_{t_1}^{t_2} \left[-\alpha_1(|x(s,\xi,u)|)\,\alpha_2(|x(s,\xi,u)|) + \sigma_1(|h(x(s,\xi,u))|) + \sigma_2(|u(s)|)\right] ds$$
$$\leq V(x(t_1,\xi,u)) + \int_{t_1}^{t_2} \left[-\tilde\alpha(V(x(s,\xi,u))) + \sigma_1(|h(x(s,\xi,u))|) + \sigma_2(|u(s)|)\right] ds$$
for all $0 \leq t_1 \leq t_2$.
Then, as $\tilde\alpha$ is continuous positive definite, Proposition 3.6 gives the existence of a $\mathcal{KL}$ function $\beta$ so that for each $\xi \in \mathbb{R}^n$ and each input $u$,
$$\underline\alpha(|x(t,\xi,u)|) \leq V(x(t,\xi,u)) \leq \beta(V(\xi), t) + \int_0^t \left[2\sigma_1(|h(x(s,\xi,u))|) + 2\sigma_2(|u(s)|)\right] ds$$
$$\leq \beta(\overline\alpha(|\xi|), t) + \int_0^t \left[2\sigma_1(|h(x(s,\xi,u))|) + 2\sigma_2(|u(s)|)\right] ds$$
for all $t \geq 0$, which is the required bound. $\Box$
3.2 Necessity

We next prove (1) $\Rightarrow$ (2) (necessity) for Theorem 1. We will construct an iIOSS Lyapunov function for a given iIOSS system. The proof combines ideas from the constructions in [28] and [1]. The following result will be needed; it follows directly from Proposition 7 in [26].

Proposition 3.9 For any given $\mathcal{KL}$ function $\beta$, there exists a family of mappings $\{T_r\}_{r \geq 0}$ such that: for each fixed $r > 0$, $T_r : \mathbb{R}_{>0} \to \mathbb{R}_{>0}$ is onto and strictly decreasing; for each fixed $\varepsilon > 0$, $T_r(\varepsilon)$ is strictly increasing as $r$ increases, and $\lim_{r \to \infty} T_r(\varepsilon) = \infty$; the map $(r, \varepsilon) \mapsto T_r(\varepsilon)$ is jointly continuous in $r$ and $\varepsilon$; and
$$\beta(s, t) \leq \varepsilon$$
for all $s \leq r$ and all $t \geq T_r(\varepsilon)$. $\Box$
Before giving the construction, we cite a lemma on boundedness of reachable sets for forward complete systems, which says that the reachable set from a given point over a finite time interval $[0, T]$ is bounded if the inputs are required to satisfy a bound of the type
$$\int_0^T \gamma(|u(s)|)\, ds \leq M < \infty \qquad (13)$$
for an appropriate choice of $\gamma \in \mathcal{K}_\infty$.
Remark 3.10 Note that for arbitrary $\mathcal{K}_\infty$ functions $\gamma$, this need not hold. Take, for example, the one-dimensional system $\dot x = u^2$. With $\gamma(s) = s$, the inputs
$$u_k(t) = \begin{cases} k & 0 \leq t \leq \frac{1}{k}, \\ 0 & \frac{1}{k} < t \leq 1, \end{cases}$$
defined on $[0, 1]$ satisfy $\int_0^1 \gamma(|u_k(s)|)\, ds = 1$ for each $k \geq 1$. However, the solution starting at the origin corresponding to the input $u_k(\cdot)$ satisfies $x(1) = k$, and so clearly one can reach an unbounded set in one time unit using controls satisfying (13). $\Box$

The following lemma shows that one can always choose $\gamma$ so that the bound (13) on inputs implies a bounded reachable set. (In the example above, clearly $\gamma(s) = s^2$ will do.)
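The example of Remark 3.10 is easy to reproduce numerically (a sketch; step sizes and tolerances are arbitrary): each input $u_k$ has unit $L^1$ norm, yet drives $\dot x = u^2$ from the origin to $x(1) = k$, while $\int_0^1 u_k(s)^2\,ds = k$ grows without bound, which is why $\gamma(s) = s^2$ succeeds where $\gamma(s) = s$ fails.

```python
def uk(k, t):
    """The input u_k of Remark 3.10: value k on [0, 1/k], zero afterwards."""
    return float(k) if t <= 1.0 / k else 0.0

def l1_norm(k, dt=1e-4):
    """Riemann-sum approximation of int_0^1 |u_k(s)| ds (equals 1 exactly)."""
    return sum(abs(uk(k, i * dt)) * dt for i in range(int(1.0 / dt)))

def state_at_1(k, dt=1e-4):
    """Euler approximation of x(1) for xdot = u_k^2, x(0) = 0 (equals k exactly)."""
    x = 0.0
    for i in range(int(1.0 / dt)):
        x += dt * uk(k, i * dt) ** 2
    return x
```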
Lemma 3.11 ([2], Corollary 2.13) Suppose system (1) is forward complete. Then there exist functions $\chi_1, \chi_2, \chi_3, \gamma$ of class $\mathcal{K}_\infty$ and a constant $c \geq 0$ such that
$$|x(t,\xi,u)| \leq \chi_1(t) + \chi_2(|\xi|) + \chi_3\!\left(\int_0^t \gamma(|u(s)|)\, ds\right) + c$$
holds for all $\xi \in \mathbb{R}^n$, all inputs $u$, and all $t \geq 0$. $\Box$
We now provide the Lyapunov construction.

Proof. (Theorem 1, (1) $\Rightarrow$ (2)) Suppose the system (1) is forward complete and satisfies the iIOSS property with gains $\alpha$, $\beta$, $\gamma_1$ and $\gamma_2$. Pick any smooth, strictly increasing and bounded function $k : \mathbb{R}_{\geq 0} \to \mathbb{R}_{>0}$ whose derivative is strictly decreasing. Then there are two positive numbers $c_1 < c_2$ so that $k(t) \in [c_1, c_2]$ for all $t \geq 0$. Define $\kappa : \mathbb{R}_{\geq 0} \to \mathbb{R}_{>0}$ by
$$\kappa(t) := \frac{d}{dt}\, k(t).$$
Since the system is forward complete, we may find a function $\gamma \in \mathcal{K}_\infty$ as in Lemma 3.11. Define $\tilde\gamma_2(s) := \max\{\gamma_2(s), \gamma(s)\}$ for all $s \geq 0$. Note that the iIOSS bound (2) holds with $\tilde\gamma_2$ in the place of $\gamma_2$. We define a Lyapunov function as
$$V_0(\xi) := \sup_u\, \sup_{t \geq 0} \left[\alpha(|x(t,\xi,u)|) - \int_0^t \gamma_1(|y(s,\xi,u)|)\, ds - \int_0^\infty 2\tilde\gamma_2(|u(s)|)\, ds\right] k(t)$$
for each $\xi \in \mathbb{R}^n$. It is immediate that this function satisfies (4), as
$$c_1\, \alpha(|\xi|) \leq V_0(\xi) \leq c_2\, \beta(|\xi|, 0) \qquad \forall \xi \in \mathbb{R}^n. \qquad (14)$$
The first of these inequalities follows from considering the trajectory with input $u \equiv 0$ at time $t = 0$, and the second from the iIOSS bound (3): for any $\xi \in \mathbb{R}^n$ and any input $u$,
$$\alpha(|x(t,\xi,u)|) - \int_0^t \gamma_1(|y(s,\xi,u)|)\, ds - \int_0^\infty 2\tilde\gamma_2(|u(s)|)\, ds \leq \beta(|\xi|, t) - \int_0^\infty \tilde\gamma_2(|u(s)|)\, ds \leq \beta(|\xi|, 0) \qquad (15)$$
for all $t \geq 0$. Next, we observe that for each $\xi$, the supremum over inputs in $V_0(\xi)$ can be taken to be a supremum over a restricted set, as follows. From the iIOSS bound (3), we have, for any $\xi \in \mathbb{R}^n$ and any input $u$,
$$\alpha(|x(t,\xi,u)|) \leq \beta(|\xi|, 0) + \int_0^t \gamma_1(|y(s,\xi,u)|)\, ds + \int_0^\infty \tilde\gamma_2(|u(s)|)\, ds$$
for all $t \geq 0$. Suppose now that $\xi$ and $u$ are such that
$$\int_0^\infty \tilde\gamma_2(|u(s)|)\, ds > \beta(|\xi|, 0).$$
In this case it follows that
$$\alpha(|x(t,\xi,u)|) \leq \int_0^t \gamma_1(|y(s,\xi,u)|)\, ds + \int_0^\infty 2\tilde\gamma_2(|u(s)|)\, ds$$
for all $t \geq 0$. Then for all $t \geq 0$,
$$\alpha(|x(t,\xi,u)|) - \int_0^t \gamma_1(|y(s,\xi,u)|)\, ds - \int_0^\infty 2\tilde\gamma_2(|u(s)|)\, ds \leq 0.$$
Since $V_0(\xi) \geq 0$ for each $\xi \in \mathbb{R}^n$, it follows that for each $\xi \in \mathbb{R}^n$
$$V_0(\xi) = \sup_{u \in \mathcal{U}(|\xi|)}\, \sup_{t \geq 0} \left[\alpha(|x(t,\xi,u)|) - \int_0^t \gamma_1(|y(s,\xi,u)|)\, ds - \int_0^\infty 2\tilde\gamma_2(|u(s)|)\, ds\right] k(t),$$
where, for each $r \geq 0$, we define $\mathcal{U}(r) := \{u(\cdot) : \int_0^\infty \tilde\gamma_2(|u(s)|)\, ds \leq \beta(r, 0)\}$. We next make the observation that the supremum in time can be taken over a restricted set as well. Let $T_r(\varepsilon)$ be defined as in Proposition 3.9 for the function $\beta$. From (14) and (15) we have
$$V_0(\xi) = \sup_{u \in \mathcal{U}(|\xi|)}\, \sup_{0 \leq t \leq T_\xi} \left[\alpha(|x(t,\xi,u)|) - \int_0^t \gamma_1(|y(s,\xi,u)|)\, ds - \int_0^\infty 2\tilde\gamma_2(|u(s)|)\, ds\right] k(t),$$
where for each $\xi \in \mathbb{R}^n$ we set $T_\xi := T_{2|\xi|}\!\left(\frac{c_1}{c_2}\, \alpha\!\left(\frac{|\xi|}{2}\right)\right)$. We will show that the function $V_0$ is continuous on $\mathbb{R}^n$ by establishing lower and upper semicontinuity in the next two propositions.
Proposition 3.12 The function $V_0$ is lower semicontinuous on $\mathbb{R}^n$.

Proof. We will show
$$\liminf_{\xi \to \xi_0} V_0(\xi) \geq V_0(\xi_0)$$
for all $\xi_0 \in \mathbb{R}^n$. Fix $\xi_0 \in \mathbb{R}^n$ and let $\varepsilon > 0$ be given. There exist an input $u_0$ and a time $t_0 \geq 0$ so that
$$\left[\alpha(|x(t_0,\xi_0,u_0)|) - \int_0^{t_0} \gamma_1(|y(s,\xi_0,u_0)|)\, ds - \int_0^\infty 2\tilde\gamma_2(|u_0(s)|)\, ds\right] k(t_0) \geq V_0(\xi_0) - \frac{\varepsilon}{2}.$$
By continuity of $x(t_0, \cdot, u_0)$ and $\alpha$, there exists a neighbourhood $U_1$ of $\xi_0$ so that
$$\left|\alpha(|x(t_0,\xi_0,u_0)|) - \alpha(|x(t_0,\xi,u_0)|)\right| \leq \frac{\varepsilon}{4 k(t_0)}$$
for all $\xi \in U_1$. Furthermore, as $\xi \to \xi_0$ implies that $y(t,\xi,u_0)$ converges uniformly to $y(t,\xi_0,u_0)$ on the finite interval $[0, t_0]$, and since $\gamma_1$ is uniformly continuous on a compact set containing an open neighbourhood of $\{y(t,\xi_0,u_0) : t \in [0, t_0]\}$, we can find a neighbourhood $U_2 \subseteq U_1$ of $\xi_0$ so that each $\xi \in U_2$ satisfies
$$\left|\gamma_1(|y(s,\xi_0,u_0)|) - \gamma_1(|y(s,\xi,u_0)|)\right| \leq \frac{\varepsilon}{4\, t_0\, k(t_0)}$$
for all $s \in [0, t_0]$. Then for each $\xi \in U_2$,
$$\left|\left[\alpha(|x(t_0,\xi_0,u_0)|) - \int_0^{t_0} \gamma_1(|y(s,\xi_0,u_0)|)\, ds\right] - \left[\alpha(|x(t_0,\xi,u_0)|) - \int_0^{t_0} \gamma_1(|y(s,\xi,u_0)|)\, ds\right]\right|$$
$$\leq \frac{\varepsilon}{4 k(t_0)} + \int_0^{t_0} \left|\gamma_1(|y(s,\xi_0,u_0)|) - \gamma_1(|y(s,\xi,u_0)|)\right| ds \leq \frac{\varepsilon}{2 k(t_0)}.$$
This gives, for each $\xi \in U_2$,
$$V_0(\xi) \geq \left[\alpha(|x(t_0,\xi,u_0)|) - \int_0^{t_0} \gamma_1(|y(s,\xi,u_0)|)\, ds - \int_0^\infty 2\tilde\gamma_2(|u_0(s)|)\, ds\right] k(t_0)$$
$$\geq \left[\alpha(|x(t_0,\xi_0,u_0)|) - \int_0^{t_0} \gamma_1(|y(s,\xi_0,u_0)|)\, ds - \int_0^\infty 2\tilde\gamma_2(|u_0(s)|)\, ds\right] k(t_0) - \frac{\varepsilon}{2}$$
$$\geq V_0(\xi_0) - \varepsilon.$$
Hence $V_0$ is lower semicontinuous. $\Box$

The next result will be needed to show upper semicontinuity.
Proposition 3.13 For each $T > 0$ and each compact $C \subset \mathbb{R}^n$, there exists $L_{C,T} > 0$ so that for any input $u \in \mathcal{U}(C) := \bigcup_{\xi \in C} \mathcal{U}(|\xi|)$, each pair $\xi, \zeta \in C$ has the property that
$$|x(t,\xi,u) - x(t,\zeta,u)| \leq L_{C,T}\, |\xi - \zeta| \qquad \forall t \in [0, T].$$
That is, the trajectories are Lipschitz in the initial conditions, uniformly over inputs $u \in \mathcal{U}(C)$.

Proof. From Lemma 3.11, we have that the trajectories stay in a bounded set on the interval $[0, T]$. A standard Gronwall's Lemma argument then gives this Lipschitz condition from the local Lipschitz assumption on $f$. $\Box$
Proposition 3.14 The function $V_0$ is upper semicontinuous on $\mathbb{R}^n$.

Proof. We will show
$$\limsup_{\xi \to \xi_0} V_0(\xi) \leq V_0(\xi_0) \qquad (16)$$
for all $\xi_0 \in \mathbb{R}^n$. Suppose (16) fails at some $\xi_0 \in \mathbb{R}^n$. Then there exist $\varepsilon > 0$ and a sequence $\{\xi_j\}_{j=1}^\infty$ so that $\xi_j \to \xi_0$ and
$$V_0(\xi_j) > V_0(\xi_0) + \varepsilon \qquad (17)$$
for all $j \geq 1$. Choose $r > 0$ so that $|\xi_0| \leq r$ and $|\xi_j| \leq r$ for all $j \geq 1$. Then for each $j \geq 1$,
$$V_0(\xi_j) = \sup_{u \in \mathcal{U}(r)}\, \max_{t \in [0, T_0]} \left[\alpha(|x(t,\xi_j,u)|) - \int_0^t \gamma_1(|y(s,\xi_j,u)|)\, ds - \int_0^\infty 2\tilde\gamma_2(|u(s)|)\, ds\right] k(t),$$
where $T_0 := T_r\!\left(\frac{1}{c_2}\,(V_0(\xi_0) + \varepsilon)\right)$. Now, for each $j \geq 1$, there exist $\tau_j \in [0, T_0]$ and $u_j \in \mathcal{U}(r)$ for which
$$\left[\alpha(|x(\tau_j,\xi_j,u_j)|) - \int_0^{\tau_j} \gamma_1(|y(s,\xi_j,u_j)|)\, ds - \int_0^\infty 2\tilde\gamma_2(|u_j(s)|)\, ds\right] k(\tau_j) \geq V_0(\xi_j) - \frac{\varepsilon}{2}.$$
Let $R$ be the reachable set from $B_r := \{\xi \in \mathbb{R}^n : |\xi| \leq r\}$ in time less than or equal to $T_0$ with controls in $\mathcal{U}(r)$. Then Lemma 3.11 tells us that $R$ is bounded. From Proposition 3.13 and the fact that the output map $h$ is locally Lipschitz, we can find $L_x > 0$ and $L_y > 0$ so that
$$|x(t,\xi,u) - x(t,\zeta,u)| \leq L_x\, |\xi - \zeta|, \qquad |y(t,\xi,u) - y(t,\zeta,u)| \leq L_y\, |\xi - \zeta|$$
for all $t \in [0, T_0]$, for any pair $\xi, \zeta \in B_r$ and for any input $u \in \mathcal{U}(r)$. Further, since $\alpha$ is continuous, it is uniformly continuous on the bounded set $R$, so there is a $\delta_x > 0$ so that
$$|\eta_1 - \eta_2| \leq \delta_x \implies \left|\alpha(|\eta_1|) - \alpha(|\eta_2|)\right| \leq \frac{\varepsilon}{4 k(T_0)}$$
if $\eta_1, \eta_2 \in R$. Likewise, since $\gamma_1$ is uniformly continuous on the bounded set $h(R) := \{h(\eta) : \eta \in R\}$, there is a $\delta_y > 0$ so that
$$|\eta_1 - \eta_2| \leq \delta_y \implies \left|\gamma_1(|\eta_1|) - \gamma_1(|\eta_2|)\right| \leq \frac{\varepsilon}{4\, T_0\, k(T_0)}$$
for $\eta_1, \eta_2 \in h(R)$. Then, for each $j$ large enough so that $|\xi_j - \xi_0| \leq \frac{\min\{\delta_x, \delta_y\}}{\max\{L_x, L_y\}}$, we find
$$\left|\left[\alpha(|x(\tau_j,\xi_j,u_j)|) - \int_0^{\tau_j} \gamma_1(|y(s,\xi_j,u_j)|)\, ds\right] - \left[\alpha(|x(\tau_j,\xi_0,u_j)|) - \int_0^{\tau_j} \gamma_1(|y(s,\xi_0,u_j)|)\, ds\right]\right|$$
$$\leq \frac{\varepsilon}{4 k(T_0)} + \int_0^{\tau_j} \left|\gamma_1(|y(s,\xi_j,u_j)|) - \gamma_1(|y(s,\xi_0,u_j)|)\right| ds \leq \frac{\varepsilon}{2 k(T_0)},$$
from which we find, for each $j$ sufficiently large,
$$V_0(\xi_0) \geq \left[\alpha(|x(\tau_j,\xi_0,u_j)|) - \int_0^{\tau_j} \gamma_1(|y(s,\xi_0,u_j)|)\, ds - \int_0^\infty 2\tilde\gamma_2(|u_j(s)|)\, ds\right] k(\tau_j)$$
$$\geq \left[\alpha(|x(\tau_j,\xi_j,u_j)|) - \int_0^{\tau_j} \gamma_1(|y(s,\xi_j,u_j)|)\, ds - \int_0^\infty 2\tilde\gamma_2(|u_j(s)|)\, ds\right] k(\tau_j) - \frac{\varepsilon}{2}$$
$$\geq V_0(\xi_j) - \varepsilon,$$
which contradicts (17). We conclude that $V_0$ is upper semicontinuous. $\Box$

Finally, we show that the function $V_0$ satisfies the decrease statement (5). Let $\xi \in \mathbb{R}^n \setminus \{0\}$ and an input $v$ be given, and consider the resulting trajectory. For $\tau > 0$ small enough, we have $\frac{|\xi|}{2} < |x(\tau,\xi,v)| < 2|\xi|$, so for such $\tau$ the supremum over time in the expression for $V_0(x(\tau,\xi,v))$ may be taken over $[0, T_\xi]$. We find, for such $\tau$ sufficiently small,
$$V_0(x(\tau,\xi,v)) = \sup_u\, \sup_{0 \leq s \leq T_\xi} \left[\alpha(|x(s,x(\tau,\xi,v),u)|) - \int_0^s \gamma_1(|y(r,x(\tau,\xi,v),u)|)\, dr - \int_0^\infty 2\tilde\gamma_2(|u(r)|)\, dr\right] k(s)$$
$$= \sup_u\, \sup_{\tau \leq t \leq \tau+T_\xi} \left[\alpha(|x(t,\xi,v\,\sharp_\tau u)|) - \int_\tau^t \gamma_1(|y(r,\xi,v\,\sharp_\tau u)|)\, dr - \int_\tau^\infty 2\tilde\gamma_2(|(v\,\sharp_\tau u)(r)|)\, dr\right] k(t-\tau)$$
$$\leq \sup_u\, \sup_{0 \leq t \leq \tau+T_\xi} \left[\alpha(|x(t,\xi,v\,\sharp_\tau u)|) - \int_0^t \gamma_1(|y(r,\xi,v\,\sharp_\tau u)|)\, dr - \int_0^\infty 2\tilde\gamma_2(|(v\,\sharp_\tau u)(r)|)\, dr\right] k(t-\tau) + c_2 \int_0^\tau \left[\gamma_1(|y(r,\xi,v)|) + 2\tilde\gamma_2(|v(r)|)\right] dr$$
$$\leq \left(\sup_{\hat u}\, \sup_{0 \leq t \leq \tau+T_\xi} \left[\alpha(|x(t,\xi,\hat u)|) - \int_0^t \gamma_1(|y(r,\xi,\hat u)|)\, dr - \int_0^\infty 2\tilde\gamma_2(|\hat u(r)|)\, dr\right] k(t)\right) \max_{0 \leq t \leq \tau+T_\xi} \frac{k(t-\tau)}{k(t)} + c_2 \int_0^\tau \left[\gamma_1(|y(r,\xi,v)|) + 2\tilde\gamma_2(|v(r)|)\right] dr$$
$$= V_0(\xi)\, \max_{0 \leq t \leq \tau+T_\xi} \frac{k(t-\tau)}{k(t)} + c_2 \int_0^\tau \left[\gamma_1(|y(r,\xi,v)|) + 2\tilde\gamma_2(|v(r)|)\right] dr,$$
where $v\,\sharp_\tau u$ is the concatenation of $u$ with $v$ at time $\tau$, that is,
$$(v\,\sharp_\tau u)(t) = \begin{cases} v(t) & \text{if } 0 \leq t \leq \tau, \\ u(t-\tau) & \text{if } \tau < t. \end{cases}$$
Rewriting, we arrive at
$$V_0(x(\tau,\xi,v)) - V_0(\xi) \leq V_0(\xi)\, \max_{0 \leq t \leq \tau+T_\xi}\left[-1 + \frac{k(t-\tau)}{k(t)}\right] + c_2 \int_0^\tau \left[\gamma_1(|y(r,\xi,v)|) + 2\tilde\gamma_2(|v(r)|)\right] dr$$
for $\tau > 0$ sufficiently small. Recall that $\kappa(t) = \frac{d}{dt} k(t)$ is decreasing, so for $\tau > 0$ small enough,
$$\max_{0 \leq t \leq \tau+T_\xi}\left[-1 + \frac{k(t-\tau)}{k(t)}\right] = -\min_{0 \leq t \leq \tau+T_\xi} \frac{k(t) - k(t-\tau)}{k(t)} \leq -\frac{k(T_\xi+\tau) - k(T_\xi)}{c_2}.$$
So for $\tau > 0$ sufficiently small,
$$V_0(x(\tau,\xi,v)) - V_0(\xi) \leq -V_0(\xi)\, \frac{k(T_\xi+\tau) - k(T_\xi)}{c_2} + c_2 \int_0^\tau \left[\gamma_1(|y(r,\xi,v)|) + 2\tilde\gamma_2(|v(r)|)\right] dr$$
$$= -\frac{V_0(\xi)}{c_2} \int_0^\tau \kappa(T_\xi + r)\, dr + c_2 \int_0^\tau \left[\gamma_1(|y(r,\xi,v)|) + 2\tilde\gamma_2(|v(r)|)\right] dr. \qquad (18)$$
Recall that (18) has been verified for all $\xi \neq 0$. We next note that it also holds for $\xi = 0$. Let an input $v$ be given. The calculation above (with $T_\xi = \infty$) gives, for $\tau > 0$,
$$V_0(x(\tau,0,v)) \leq V_0(0)\, \sup_{t \geq 0} \frac{k(t-\tau)}{k(t)} + c_2 \int_0^\tau \left[\gamma_1(|y(r,0,v)|) + 2\tilde\gamma_2(|v(r)|)\right] dr = c_2 \int_0^\tau \left[\gamma_1(|y(r,0,v)|) + 2\tilde\gamma_2(|v(r)|)\right] dr,$$
as $V_0(0) = 0$. Clearly this gives (18) for $\xi = 0$. Finally, we will make use of the following lemma to formulate the decrease statement (18) in the viscosity sense.
Lemma 3.15 Suppose given a system as in (1), a function $V : \mathbb{R}^n \to \mathbb{R}$, a point $\xi \in \mathbb{R}^n$, and an element $\mu \in \mathbb{R}^m$. Then, if there exist a continuous $\varphi_{\xi,\mu} : \mathbb{R}_{\geq 0} \to \mathbb{R}$ and $\varepsilon > 0$ so that for all $0 \leq \tau < \varepsilon$ [...]

[...] $\rho'(s) > 0$ for all $s > 0$ such that $\rho \circ V_e$ is smooth everywhere. Without loss of generality, we may assume that $\rho'(s) \leq 1$ for all $s > 0$. (If it is not, we may replace $\rho$ by a smooth $\mathcal{K}_\infty$ function $\rho_0$ with the property that $\rho_0'(s) = \rho'(s)$ in a neighbourhood of the origin where $\rho'(s) \leq 1$, and $\rho_0'(s) \leq 1$ everywhere else.) Let $V = \rho \circ V_e$. It follows from (21) and (14) that
$$\underline\alpha(|\xi|) \leq V(\xi) \leq \overline\alpha(|\xi|) \qquad \forall \xi \in \mathbb{R}^n,$$
where $\underline\alpha(s) = \rho\!\left(\frac{c_1}{2}\, \alpha(s)\right)$ and $\overline\alpha(s) = \rho(2 c_2\, \beta(s, 0))$. Let $\alpha_0 : \mathbb{R}_{\geq 0} \to \mathbb{R}_{\geq 0}$ be a continuous positive definite function which satisfies
$$\alpha_0(|\xi|) \leq \rho'(V_e(\xi))\, \tfrac{1}{2}\alpha(|\xi|)$$
for all $\xi \in \mathbb{R}^n$. From (22), we have
$$\nabla V(\xi) \cdot f(\xi,\mu) \leq -\rho'(V_e(\xi))\, \tfrac{1}{2}\alpha(|\xi|) + \rho'(V_e(\xi))\left(c_2\, \gamma_1(|h(\xi)|) + 2 c_2\, \tilde\gamma_2(|\mu|)\right) \leq -\alpha_0(|\xi|) + c_2\, \gamma_1(|h(\xi)|) + 2 c_2\, \tilde\gamma_2(|\mu|) \qquad \forall \xi \in \mathbb{R}^n,\ \forall \mu \in \mathbb{U}.$$
This holds at $\xi = 0$ since $V$ is smooth and has a minimum at the origin, so $\nabla V(0) = 0$. Thus $V$ is a smooth iIOSS Lyapunov function for the system. $\Box$
References
[1] D. Angeli, E. D. Sontag, and Y. Wang, A characterization of integral input to state stability, IEEE Transactions on Automatic Control, 45 (2000), pp. 1082-1097.
[2] D. Angeli and E. D. Sontag, Forward completeness, unboundedness observability, and their Lyapunov characterizations, Systems & Control Letters, 38 (1999), pp. 209-217.
[3] F. H. Clarke, Yu. S. Ledyaev, R. J. Stern, and P. R. Wolenski, Nonsmooth Analysis and Control Theory, Springer, New York, 1998.
[4] R. A. Freeman and P. V. Kokotovic, Robust Nonlinear Control Design, State-Space and Lyapunov Techniques, Birkhauser, Boston, 1996.
[5] P. Hartman, Ordinary Differential Equations, John Wiley & Sons, New York, 1964.
[6] A. Isidori, Nonlinear Control Systems II, Springer-Verlag, London, 1999.
[7] Z.-P. Jiang, A. Teel, and L. Praly, Small-gain theorem for ISS systems and applications, Mathematics of Control, Signals, and Systems, 7 (1994), pp. 95-120.
[8] H. K. Khalil, Nonlinear Systems, Prentice-Hall, Upper Saddle River, NJ, second ed., 1996.
[9] P. V. Kokotovic and M. Arcak, Constructive nonlinear control: progress in the 90's, Invited Plenary Talk, IFAC Congress, in Proc. 14th IFAC World Congress, the Plenary and Index Volume, Beijing, 1999, pp. 49-77.
[10] A. J. Krener, A Lyapunov theory of nonlinear observers, in Stochastic Analysis, Control, Optimization and Applications, Birkhauser, Boston, MA, 1999, pp. 409-420.
[11] M. Krichman, E. D. Sontag, and Y. Wang, Input-output-to-state stability, SIAM Journal on Control and Optimization, to appear.
[12] M. Krstic and H. Deng, Stabilization of Uncertain Nonlinear Systems, Springer-Verlag, London, 1998.
[13] M. Krstic, I. Kanellakopoulos, and P. V. Kokotovic, Nonlinear and Adaptive Control Design, John Wiley & Sons, New York, 1995.
[14] D. Liberzon, E. D. Sontag, and A. Morse, A new definition of the minimum-phase property for nonlinear systems, with an application to adaptive control, in Proc. IEEE Conf. Decision and Control, Sydney, Dec. 2000, IEEE Publications, 2000, pp. 2106-2111.
[15] Y. Lin, E. D. Sontag, and Y. Wang, A smooth converse Lyapunov theorem for robust stability, SIAM Journal on Control and Optimization, 34 (1996), pp. 124-160.
[16] W. M. Lu, A state-space approach to parameterization of stabilizing controllers for nonlinear systems, IEEE Transactions on Automatic Control, 40 (1995), pp. 1576-1588.
[17] R. Marino and P. Tomei, Nonlinear output feedback tracking with almost disturbance decoupling, IEEE Transactions on Automatic Control, 44 (1999), pp. 18-28.
[18] A. S. Morse, Control using logic based switching, in Trends in Control: A European Perspective, A. Isidori, ed., Springer, London, 1995, pp. 69-114.
[19] A. S. Morse and J. P. Hespanha, Supervision of integral-input-to-state stabilizing controllers, Automatica, to appear.
[20] L. Rosier and E. D. Sontag, Remarks regarding the gap between continuous, Lipschitz, and differentiable storage functions for dissipation inequalities appearing in $H_\infty$ control, Systems & Control Letters, 41 (2000), pp. 237-249.
[21] R. Sepulchre, M. Jankovic, and P. V. Kokotovic, Constructive Nonlinear Control, Springer, London, 1997.
[22] E. D. Sontag, Smooth stabilization implies coprime factorization, IEEE Transactions on Automatic Control, 34 (1989), pp. 435-443.
[23] E. D. Sontag and Y. Wang, Detectability of nonlinear systems, in Proceedings of the Conference on Information Sciences and Systems (CISS 96), Princeton, NJ, 1996, pp. 1031-1036.
[24] E. D. Sontag and Y. Wang, Output-to-state stability and detectability of nonlinear systems, Systems & Control Letters, 29 (1997), pp. 279-290.
[25] E. D. Sontag, Mathematical Control Theory, Deterministic Finite Dimensional Systems, Springer-Verlag, New York, 2nd ed., 1998.
[26] E. D. Sontag, Comments on integral variants of ISS, Systems & Control Letters, 34 (1998), pp. 93-100.
[27] E. D. Sontag, The ISS philosophy as a unifying framework for stability-like behavior, in Nonlinear Control in the Year 2000 (Volume 2), Lecture Notes in Control and Information Sciences, A. Isidori, F. Lamnabhi-Lagarrigue, and W. Respondek, eds., Springer-Verlag, Berlin, 2000, pp. 443-468.
[28] E. D. Sontag and Y. Wang, Lyapunov characterizations of input to output stability, SIAM Journal on Control and Optimization, 39 (2001), pp. 226-249.