IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 45, NO. 1, JANUARY 2000

By a suitable choice of the design constants, we can obtain p_i > 0 in (30). Further, choosing the virtual controls \alpha_{ij} and the controls u_i as

\alpha_{ij} = -c_{ij} z_{ij} - M_{ij}, \quad u_i = -c_{i n_i} z_{i n_i} - M_{i n_i}, \quad j = 1, \ldots, n_i - 1 \quad (31)

where c_{ij} > 0, we have

DV \le -\sum_{i=1}^{N} \left[ \sum_{j=1}^{n_i} c_{ij} z_{ij}^4 + p_i \|\tilde{x}_i\|^4 \right] \quad (32)

which is negative definite. Therefore, the following decentralized output-feedback result follows.

Theorem 4.1: Consider the stochastic nonlinear system (14) satisfying Assumption 4.1. The equilibrium of the overall closed-loop large-scale stochastic system, consisting of (14) and the decentralized output-feedback control law (31), is globally asymptotically stable in probability.

Remark 4.1: Theorem 4.1 presents a solution to the decentralized stabilization problem for interconnected nonlinear stochastic systems, based on the simultaneous explicit construction of a dynamic decentralized output-feedback controller and the quartic Lyapunov function (22). The proof of Theorem 4.1 also gives a systematic procedure for decentralized dynamic output-feedback controller design. The result extends the centralized result in [2] to the decentralized dynamic output-feedback stabilization of interconnected stochastic nonlinear systems.

V. CONCLUSION

The problem of decentralized global stabilization was studied in this paper. It has been shown that both the decentralized state-feedback and the decentralized dynamic output-feedback nonlinear control problems for a class of interconnected stochastic nonlinear systems can be solved by a Lyapunov-based recursive design approach. Our results extend existing centralized stabilization results for stochastic nonlinear systems to decentralized control of interconnected stochastic nonlinear systems.

REFERENCES

[1] Y. H. Chen, G. Leitmann, and Z. K. Xiong, "Robust control design for interconnected systems with time-varying uncertainties," Int. J. Contr., vol. 54, pp. 1457–1477, 1991.
[2] H. Deng and M. Krstić, "Output-feedback stochastic nonlinear stabilization," in Proc. 36th IEEE Conf. Decision Contr., San Diego, CA, 1997, pp. 2333–2338.
[3] ——, "Stochastic nonlinear stabilization—I: A backstepping design," Syst. Contr. Lett., vol. 32, pp. 143–150, 1997.
[4] A. Isidori, Nonlinear Control Systems, 3rd ed. New York, NY: Springer-Verlag, 1995.
[5] S. Jain and F. Khorrami, "Decentralized adaptive control of a class of large-scale interconnected systems," IEEE Trans. Automat. Contr., vol. 42, pp. 136–154, Feb. 1997.
[6] R. Z. Khas'minskii, Stochastic Stability of Differential Equations. Rockville, MD: S & N International, 1980.
[7] W. Lin, "Global robust stabilization of minimum-phase nonlinear systems with uncertainty," Automatica, vol. 33, pp. 453–462, 1997.
[8] R. Marino and P. Tomei, "Robust stabilization of feedback linearizable time-varying uncertain nonlinear systems," Automatica, vol. 29, pp. 181–189, 1993.
[9] B. Øksendal, Stochastic Differential Equations: An Introduction with Applications. New York, NY: Springer-Verlag, 1985.
[10] Z. Pan and T. Başar, "Backstepping controller design for nonlinear stochastic systems under a risk-sensitive cost criterion," SIAM J. Contr. Optim., vol. 37, pp. 957–995, 1999.
[11] L. Shi and S. K. Singh, "Decentralized adaptive controller design for large-scale systems with higher order uncertainties," IEEE Trans. Automat. Contr., vol. 37, pp. 1106–1118, 1992.
[12] ——, "Decentralized controller design for interconnected uncertain systems: Extensions to higher order uncertainties," Int. J. Contr., vol. 57, pp. 1453–1468, 1993.
[13] D. D. Siljak, Decentralized Control of Complex Systems. New York, NY: Academic, 1991.

Identification of Unstable Systems Using Output Error and Box–Jenkins Model Structures

Urban Forssell and Lennart Ljung

Abstract—It is well known that the output error and Box–Jenkins model structures cannot be used for prediction error identification of unstable systems: the predictors in these cases will generically be unstable. Typically, this problem is handled by projecting the parameter vector onto the region of stability, which gives erroneous results when the underlying system is unstable. The main contribution of this work is the derivation of modified, but asymptotically equivalent, versions of these model structures that can be applied also in the case of unstable systems.

Index Terms—Closed-loop identification, prediction error methods, system identification.

Manuscript received December 1, 1997; revised January 14, 1999. Recommended by Associate Editor, J. C. Spall. The authors are with the Division of Automatic Control, Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden (e-mail: [email protected]@isy.liu.se). Publisher Item Identifier S 0018-9286(00)01923-1.

0018–9286/00$10.00 © 2000 IEEE

I. INTRODUCTION

In this correspondence, we discuss prediction error identification of unstable systems using the output error and Box–Jenkins model structures. As is well known from textbooks on system identification in the prediction error framework (e.g., [1] and [2]), the predictors are required to be stable. If the parameters are estimated using a numerical search algorithm that requires gradients of the predictor to be computed, these gradient filters must also be stable. With the ARX and ARMAX model structures this is no problem, because the dynamics model and the noise model share denominator polynomials, which cancel when the predictors are formed. For the output error and Box–Jenkins model structures this is not the case: if the underlying system is unstable, the predictors will generically be unstable, which seemingly makes these model structures inapplicable. Also, when identifying stable systems that are "close" to being unstable, the predictors may become unstable in one or more steps of the search algorithm because of numerical problems, so that the search has to be terminated before the global optimum is reached.

Traditionally, the problem of unstable predictors has been handled by projecting the parameter vector into the region of stability. This will, of course, lead to completely useless results if the underlying system is unstable. If the system is stable, but the predictors become unstable in some intermediate step of the numerical search, projecting the parameter vector into the region of stability can lead to convergence problems. As we shall see, however, it is possible to reparameterize the output error and Box–Jenkins model structures to guarantee stability of the predictors, without increasing the total number of parameters to be estimated. With the new versions of the output error and Box–Jenkins model structures we thus gain two things: the model structures can be applied to unstable systems, and the numerical properties of the search algorithm are improved. The price we pay is an increase in the complexity of the search algorithm in case the predictors are unstable. (If they are stable, the standard algorithms can be used.)

If the system is unstable, we will assume that the experimental data are generated under stabilizing feedback. Such feedback will involve some knowledge of the system. This knowledge, however, may be rudimentary, and an obvious need for an improved system model may still exist. One example—among many—is flight testing of unstable aircraft to build accurate dynamic models. With a stabilizing controller in the loop, we are faced with a closed-loop identification problem. Closed-loop identification is often used in connection with so-called control-relevant identification, where the goal is to estimate models suitable for (robust) control design; see, e.g., the surveys [3]–[5]. In that setting it is often only interesting to model the dynamics of the plant, while the noise properties are of less interest, so it would be natural to use an output error model structure. Because unstable plants cannot be handled using standard output error models, however, the conclusion has been that this approach cannot be used when the plant is unstable. Alternative solutions have been suggested in, e.g., [6]–[8]. Unfortunately, these methods are considerably more involved than a direct application of an output error or a Box–Jenkins model to the closed-loop data.

II. SOME BASICS IN PREDICTION ERROR IDENTIFICATION

In prediction error identification, we typically consider linear model structures parameterized in terms of a parameter vector \theta:

y(t) = G(q, \theta) u(t) + H(q, \theta) e(t). \quad (1)

Here, G(q, \theta) and H(q, \theta) are rational functions^1 of q^{-1}, the unit delay operator (q^{-1} u(t) = u(t-1)), parameterized in terms of \theta; y(t) is the output; u(t) is the input; and e(t) is white noise. Typically, \theta ranges over some open subset D_M of R^d (d = \dim \theta):

\theta \in D_M \subset R^d. \quad (2)

The one-step-ahead predictor for (1) is

\hat{y}(t|\theta) = H^{-1}(q, \theta) G(q, \theta) u(t) + [1 - H^{-1}(q, \theta)] y(t). \quad (3)

Here, it is required that the filters H^{-1}(q, \theta) G(q, \theta) and [1 - H^{-1}(q, \theta)] be stable for the predictor to be well defined. This calls for an inversely stable noise model H(q, \theta) and for the unstable poles of G(q, \theta) to be poles of H(q, \theta) as well. These issues will play important roles in this paper. The prediction errors \varepsilon(t, \theta) = y(t) - \hat{y}(t|\theta) corresponding to the predictor (3) are

\varepsilon(t, \theta) = H^{-1}(q, \theta) [ y(t) - G(q, \theta) u(t) ]. \quad (4)

We will also use the following notation for the gradient of \hat{y}(t|\theta):

\psi(t, \theta) = \frac{d}{d\theta} \hat{y}(t|\theta) = -\frac{d}{d\theta} \varepsilon(t, \theta). \quad (5)

^1 In this correspondence, we limit the study to single-input single-output models for ease of exposition.

In the standard case of least-squares prediction error identification, we calculate the parameter estimate as the minimizing argument of the criterion function

V_N(\theta) = \frac{1}{N} \sum_{t=1}^{N} \frac{1}{2} \varepsilon^2(t, \theta). \quad (6)

Typically, we find the estimate through some numerical search routine of the form

\hat{\theta}_N^{(i+1)} = \hat{\theta}_N^{(i)} - \mu_N^{(i)} \left[ R_N^{(i)} \right]^{-1} V_N'\left( \hat{\theta}_N^{(i)} \right) \quad (7)

where V_N' is the gradient of the criterion function, R_N is a matrix that modifies the search direction, and \mu_N^{(i)} is a scaling factor that determines the step length. From (6), we see that

V_N'(\theta) = -\frac{1}{N} \sum_{t=1}^{N} \psi(t, \theta) \varepsilon(t, \theta) \quad (8)

and typically R_N is chosen approximately equal to the Hessian V_N'' [which would make (7) a Newton algorithm]; a standard choice is

R_N = \frac{1}{N} \sum_{t=1}^{N} \psi(t, \theta) \psi^T(t, \theta) + \delta I \quad (9)

where \delta \ge 0 is chosen so that R_N becomes positive definite. This is also called the Levenberg–Marquardt regularization procedure. Clearly, both the predictor (3) and the gradient (5), which have to be computed and used in the search algorithm (7), are required to be stable. When dealing with unstable systems, this introduces constraints on the possible model structures.

III. COMMONLY USED MODEL STRUCTURES

With these stability requirements in mind, let us now discuss some standard choices of model structures. A general model structure is the following [1]:

A(q) y(t) = \frac{B(q)}{F(q)} u(t) + \frac{C(q)}{D(q)} e(t) \quad (10)

where

A(q) = 1 + a_1 q^{-1} + \cdots + a_{n_a} q^{-n_a} \quad (11)

and similarly for the C, D, and F polynomials, while

B(q) = q^{-n_k} \left( b_0 + b_1 q^{-1} + \cdots + b_{n_b} q^{-n_b} \right). \quad (12)

This model structure includes some common special cases, like ARX, ARMAX, output error, and Box–Jenkins; see [1]. In the sequel, we will assume n_k = 0 [which can always be achieved by replacing u(t) by u(t - n_k)]. A sufficient condition for the predictor and gradient filters to be stable is that C(q) \cdot F(q) be stable for all \theta \in D_M (cf. [1, Lemma 4.1]). This condition is automatically satisfied for ARX models, and for ARMAX models it is sufficient that the C-polynomial be stable, which does not impose any stability constraints on the dynamics model. For identification of an unstable system, these model structures are thus natural choices.

The output error model structure has a fixed noise model (H(q, \theta) = 1) and is a natural choice if only a model of the system dynamics is required. In case we also want to model the noise characteristics (e.g., to improve the efficiency), but do not want the noise and dynamics models to be dependent as in the ARX and ARMAX cases, then the Box–Jenkins model structure would be the one to choose. If the underlying system is unstable, however, these model structures cannot be used without modifications, e.g., the ones we propose in this paper.

To see where the problem lies, let us study the output error case. Suppose that we want to identify an unstable system, stabilized by some controller, and that we are only interested in modeling the dynamics, with no modeling effort spent on the noise characteristics. Then, the natural choice would be an output error model structure:

y(t) = \frac{B(q)}{F(q)} u(t) + e(t). \quad (13)

Now, because the system is unstable, the predictor

\hat{y}(t|\theta) = \frac{B(q)}{F(q)} u(t) \quad (14)

as well as the gradient filters

\frac{\partial \hat{y}(t|\theta)}{\partial b_k} = \frac{1}{F(q)} u(t - k) \quad (15a)

\frac{\partial \hat{y}(t|\theta)}{\partial f_k} = -\frac{B(q)}{F^2(q)} u(t - k) \quad (15b)

will generically be unstable. When implementing a parameter estimation algorithm for the output error case, we typically secure stability in every iteration of the algorithm (7) by projecting the parameter vector into the region of stability. For unstable systems, this of course leads to erroneous results. These problems are also present when using the Box–Jenkins model structure

y(t) = \frac{B(q)}{F(q)} u(t) + \frac{C(q)}{D(q)} e(t). \quad (16)

Also in this case, we have to resort to projections onto the region of stability to ensure stability of the predictors and gradient filters, which makes this model structure in its standard form useless for identification of unstable systems. In the following sections, we will describe how to modify these model structures to avoid these problems.

IV. AN ALTERNATIVE OUTPUT ERROR MODEL STRUCTURE

A. Some Additional Notation

Let F_s(q) (F_a(q)) be the stable (antistable), monic part of F(q):

F(q) = F_s(q) F_a(q) \quad (17)

and let the polynomials F_s(q) and F_a(q) be parameterized as

F_s(q) = 1 + f_{s,1} q^{-1} + \cdots + f_{s,n_s} q^{-n_s} \quad (18)

F_a(q) = 1 + f_{a,1} q^{-1} + \cdots + f_{a,n_a} q^{-n_a}. \quad (19)

With the notation

\bar{f}_{a,k} = \begin{cases} 1, & k = 0 \\ f_{a,k}, & 1 \le k \le n_a \\ 0, & \text{else} \end{cases} \quad (20)

and

\bar{f}_{s,k} = \begin{cases} 1, & k = 0 \\ f_{s,k}, & 1 \le k \le n_s \\ 0, & \text{else} \end{cases} \quad (21)

we have

f_k = \sum_{j=0}^{k} \bar{f}_{s,j} \bar{f}_{a,k-j}, \quad k = 1, 2, \ldots, n_f. \quad (22)

Furthermore, let F_a^*(q) denote the monic, stabilized F_a-polynomial; i.e., F_a^*(q) is the monic polynomial whose zeros are equal to the zeros of F_a(q) reflected into the unit disc. In terms of f_{a,i}, the coefficients of F_a(q), we can write F_a^*(q) as

F_a^*(q) = 1 + \frac{f_{a,n_a - 1}}{f_{a,n_a}} q^{-1} + \cdots + \frac{1}{f_{a,n_a}} q^{-n_a}. \quad (23)

Here, we have used the implicit assumption that f_{a,n_a} \ne 0.

B. The Proposed Model Structure

Now, consider the following modified output error model structure:

y(t) = \frac{B(q)}{F(q)} u(t) + \frac{F_a^*(q)}{F_a(q)} e(t) \quad (24)

with the predictor

\hat{y}(t|\theta) = \frac{F_a(q) B(q)}{F_a^*(q) F(q)} u(t) + \left[ 1 - \frac{F_a(q)}{F_a^*(q)} \right] y(t) = \frac{B(q)}{F_a^*(q) F_s(q)} u(t) + \left[ 1 - \frac{F_a(q)}{F_a^*(q)} \right] y(t). \quad (25)

The difference between this model structure and the basic output error model structure (13) is that we have included a noise model F_a^*(q)/F_a(q) and obtained a different "dummy" noise term

\tilde{e}(t) = \frac{F_a^*(q)}{F_a(q)} e(t) \quad (26)

instead of just e(t). At first glance, it may thus seem that the model structures (13) and (24) will give different results, but in fact they are (asymptotically) equivalent, as can be seen from the following result.

Proposition 1: When applying a prediction error method to the model structures (13) and (24), the resulting estimates will asymptotically, as N \to \infty, be the same.

Proof: From classical prediction error theory, we know that under mild conditions the limiting models will minimize the integral of the spectrum of the prediction errors (see, e.g., [1, Theorem 8.2]). Now, if we let \varepsilon_F(t, \theta) denote the prediction errors obtained with the predictor (25) and let \varepsilon(t, \theta) denote the prediction errors corresponding to the predictor (14), the spectrum of \varepsilon_F(t, \theta) is given by

\Phi_{\varepsilon_F}(\omega) = \left| \frac{F_a(e^{i\omega})}{F_a^*(e^{i\omega})} \right|^2 \Phi_\varepsilon(\omega) = | f_{a,n_a} |^2 \Phi_\varepsilon(\omega) \quad (27)

where \Phi_\varepsilon(\omega) denotes the spectrum of \varepsilon(t, \theta). Thus, the spectra differ only by a constant scaling and, hence, the corresponding limiting models will be the same.

As we have seen, the results will asymptotically be the same with both model structures; the difference is, of course, that the predictor (25) will always be stable, along with all its derivatives, even if F(q) is unstable [as opposed to the standard output error case, which requires a stable F(q) for the predictor to be stable]. In (24), the noise model is monic and inversely stable, and the unstable poles of the dynamics model are also poles of the noise model (cf. the discussion in Section II). The basic idea behind the equivalence result in Proposition 1 is really that a constant spectrum may be factorized in infinitely many ways using all-pass functions. Here, we chose convenient pole locations for these all-pass functions so as to obtain stable predictors. This is actually closely related to classical Kalman filter theory, as will be illustrated next.
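The stable/antistable split (17) and the reflected polynomial (23) are straightforward to compute numerically. The following is a minimal numpy sketch (not from the paper; the function name and the second-order test polynomial are illustrative) that forms F_s, F_a, and F_a^* from the coefficients of F and checks the constant-modulus all-pass property underlying Proposition 1:

```python
import numpy as np

def stable_antistable_split(f):
    """Given coefficients f = [1, f1, ..., fn] of a monic F(q) in powers of
    q^{-1}, return (Fs, Fa, Fa_star): the stable and antistable monic factors
    of F, and the stabilized polynomial Fa* whose zeros are the zeros of Fa
    reflected into the unit disc, cf. (17)-(23)."""
    zeros = np.roots(f)                                  # zeros of F in the z-plane
    Fs = np.real(np.poly(zeros[np.abs(zeros) < 1.0]))    # stable part, cf. (18)
    Fa = np.real(np.poly(zeros[np.abs(zeros) >= 1.0]))   # antistable part, cf. (19)
    # For real coefficients, reflecting every zero p of Fa to 1/conj(p) simply
    # reverses the coefficient sequence and renormalizes to a monic polynomial:
    Fa_star = Fa[::-1] / Fa[-1]
    return Fs, Fa, Fa_star

# Test polynomial: F(q) = (1 - 0.5 q^-1)(1 - 1.5 q^-1) = 1 - 2 q^-1 + 0.75 q^-2
Fs, Fa, Fa_star = stable_antistable_split([1.0, -2.0, 0.75])

# |Fa(e^{iw}) / Fa*(e^{iw})| should equal |f_{a,na}| = 1.5 for all w, cf. (27):
w = np.linspace(0.0, np.pi, 64)
z = np.exp(-1j * w)                                      # z stands for q^{-1}
gain = np.abs(np.polyval(Fa[::-1], z) / np.polyval(Fa_star[::-1], z))
assert np.allclose(gain, abs(Fa[-1]))
```

Note that the modified predictor (25) only involves F_s and F_a^*, both of which are stable by construction, which is what guarantees stable predictor and gradient filters.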


C. Connections to the Kalman Filter

A straightforward interpretation of the second-order equivalent (25) of (14) exists. Suppose we realize (13) in state-space form:

x(t+1) = A x(t) + B u(t), \qquad y(t) = C x(t) + e(t). \quad (28)

The steady-state Kalman filter predictor is

\hat{x}(t+1) = (A - KC) \hat{x}(t) + B u(t) + K y(t), \qquad \hat{y}(t|\theta) = C \hat{x}(t) \quad (29)

where K is determined from an algebraic Riccati equation in the usual way: K = A \Pi C^T / (R + C \Pi C^T), \Pi = A \Pi A^T - K (R + C \Pi C^T) K^T. Here, R is the variance of the innovations e(t): R = E e^2(t) > 0 (E denotes mathematical expectation). Now, if A is stable, the solution is K = 0, and (29) corresponds to (14). If A is not stable, however, the solution to the Kalman predictor problem is a nonzero K that makes (29) exactly equal to (25). This holds regardless of R.

We thus have two alternative ways of computing the predictor \hat{y}(t|\theta) in the output error case: (25) and (29). As we have seen, these two variants give identical predictions \hat{y}(t|\theta), although (25) is computationally less demanding. In contrast with the Kalman filter solution (29), it is also straightforward to generalize (25) to other model structures, like the Box–Jenkins model structure. This is briefly discussed in Section V below. Another important issue is how to compute the gradient (5). Using the explicit predictor formula (25), this is relatively straightforward, albeit tedious, as shown in the next section.

D. Computation of the Gradient

As mentioned above, the gradient \psi(t, \theta) is needed for the implementation of the search scheme (7). With the predictor (25), the expression for the gradient is much more involved than (15), but for completeness we will go through these calculations in some detail (after all, the gradient is needed for the implementation of the estimation algorithm). Given the predictor model (25), we have

\frac{\partial \hat{y}(t|\theta)}{\partial b_k} = \frac{1}{F_s(q) F_a^*(q)} u(t - k) \quad (30)

and

\frac{\partial \hat{y}(t|\theta)}{\partial f_k} = \frac{\partial}{\partial f_k} \left[ \frac{B(q)}{F_s(q) F_a^*(q)} \right] u(t) - \frac{\partial}{\partial f_k} \left[ \frac{F_a(q)}{F_a^*(q)} \right] y(t). \quad (31)

Introducing

W_1^k(q) = \frac{\partial}{\partial f_k} F_s(q) \quad (32)

W_2^k(q) = \frac{\partial}{\partial f_k} F_a(q) \quad (33)

W_3^k(q) = \frac{\partial}{\partial f_k} F_a^*(q) \quad (34)

x_1(t) = -\frac{B(q)}{F_s^2(q) F_a^*(q)} u(t) \quad (35)

x_2(t) = -\frac{1}{F_a^*(q)} y(t) \quad (36)

x_3(t) = -\frac{B(q)}{F_s(q) (F_a^*(q))^2} u(t) + \frac{F_a(q)}{(F_a^*(q))^2} y(t) \quad (37)

we may write

\frac{\partial \hat{y}(t|\theta)}{\partial f_k} = W_1^k(q) x_1(t) + W_2^k(q) x_2(t) + W_3^k(q) x_3(t). \quad (38)

What we then finally need in order to compute the gradient (\partial / \partial f_k) \hat{y}(t|\theta) are explicit expressions for the filters W_i^k(q), i = 1, 2, 3. Using (22), we have

W_1^k(q) = \sum_{i=1}^{n_s} w_{1,i}^k q^{-i}, \qquad w_{1,i}^k = \begin{cases} \bar{f}_{a,k-i}, & k - n_a \le i \le k \\ 0, & \text{else} \end{cases} \quad (39)

W_2^k(q) = \sum_{i=1}^{n_a} w_{2,i}^k q^{-i}, \qquad w_{2,i}^k = \begin{cases} \bar{f}_{s,k-i}, & k - n_s \le i \le k \\ 0, & \text{else} \end{cases} \quad (40)

and

W_3^k(q) = \sum_{i=1}^{n_a - 1} w_{3,i}^k q^{-i} + w_{3,0}^k \left( 1 - F_a^*(q) \right), \qquad w_{3,i}^k = \begin{cases} \dfrac{1}{f_{a,n_a}} \bar{f}_{s,k-n_a+i}, & n_a - k \le i \le n_s + n_a - k \\ 0, & \text{else.} \end{cases} \quad (41)

Equations (30)–(41) together constitute a complete and explicit description of the gradient \psi(t, \theta) = (d/d\theta) \hat{y}(t|\theta), which may be used in an implementation of the search algorithm (7).

E. Simulation Example

To illustrate the applicability of the proposed model structure (24) to identification problems involving unstable systems, we present in this section a small simulation study. The "true" system, to be identified, is given by

y(t) = \frac{b_0}{1 + f_1 q^{-1} + f_2 q^{-2}} u(t) + e(t) \quad (42)

with b_0 = 1, f_1 = -1.5, and f_2 = 1.5. This system is unstable, with poles in 0.75 \pm 0.9682i.
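For concreteness, the closed-loop data generation used in this example, i.e., the system (42) driven through the stabilizing feedback u(t) = r(t) + 0.95 y(t-2) of (43) below, can be sketched in numpy as follows; the random seed and the final pole checks are our own illustrative additions, not part of the paper:

```python
import numpy as np

# True system (42): y = b0 / (1 + f1 q^-1 + f2 q^-2) u + e, open-loop unstable.
b0, f1, f2 = 1.0, -1.5, 1.5
N = 200
rng = np.random.default_rng(0)            # illustrative seed
r = rng.normal(0.0, 1.0, N)               # reference, variance 1
e = rng.normal(0.0, np.sqrt(0.01), N)     # noise, variance 0.01

x = np.zeros(N)   # noise-free part of the output, x = (b0/F) u
u = np.zeros(N)
y = np.zeros(N)
for t in range(N):
    # stabilizing feedback law (43): u(t) = r(t) + 0.95 y(t-2)
    u[t] = r[t] + (0.95 * y[t - 2] if t >= 2 else 0.0)
    x[t] = b0 * u[t]
    if t >= 1:
        x[t] -= f1 * x[t - 1]
    if t >= 2:
        x[t] -= f2 * x[t - 2]
    y[t] = x[t] + e[t]

# The open loop is unstable, yet the feedback stabilizes it: the closed-loop
# characteristic polynomial is 1 - 1.5 q^-1 + 0.55 q^-2.
open_poles = np.roots([1.0, f1, f2])
closed_poles = np.roots([1.0, f1, f2 - 0.95 * b0])
assert np.all(np.abs(open_poles) > 1.0)
assert np.all(np.abs(closed_poles) < 1.0)
```

The recorded signals (u, y) then form the closed-loop data set on which the model structures compared below can be estimated.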

To generate identification data, we simulated this system using the feedback law

u(t) = r(t) - (-0.95 q^{-2}) y(t) = r(t) + 0.95 y(t-2) \quad (43)

which places the closed-loop poles in 0.8618 and 0.6382. In the simulation, we used independent, zero-mean, Gaussian white noise reference and noise signals {r(t)} and {e(t)}, with variances 1 and 0.01, respectively. N = 200 data samples were used.

TABLE I — SUMMARY OF IDENTIFICATION RESULTS

In Table I, we have summarized the results of the identification; the numbers shown are the estimated parameter values together with their standard deviations. For comparison, we have, apart from the model structure (24), used a standard output error model structure and a second-order ARMAX model structure. As can be seen, the standard output error model structure gives completely useless estimates, while the modified output error and the ARMAX model structures give similar and accurate results.

V. AN ALTERNATIVE BOX–JENKINS MODEL STRUCTURE

The trick of including a modified noise model in the output error model structure is, of course, also applicable to the Box–Jenkins model structure. The alternative form is in this case

y(t) = \frac{B(q)}{F(q)} u(t) + \frac{F_a^*(q) C(q)}{F_a(q) D(q)} e(t). \quad (44)

Explicit expressions for the gradient filters of the corresponding predictor can be derived similarly to the output error case, albeit the formulas will be even messier. We leave the details to the reader.

VI. CONCLUSIONS

In this paper, we have proposed new versions of the well-known output error and Box–Jenkins model structures that can also be used for identification of unstable systems. The new model structures are equivalent to the standard ones as far as the number of parameters and the asymptotic results are concerned, but they guarantee stability of the predictors.

REFERENCES

[1] L. Ljung, System Identification: Theory for the User, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 1987.
[2] T. Söderström and P. Stoica, System Identification. Englewood Cliffs, NJ: Prentice-Hall International, 1989.
[3] M. Gevers, "Toward a joint design of identification and control," in Essays on Control: Perspectives in the Theory and Its Applications, H. L. Trentelman and J. C. Willems, Eds. Boston, MA: Birkhäuser, 1993, pp. 111–151.
[4] P. M. J. Van den Hof and R. J. P. Schrama, "Identification and control—Closed-loop issues," Automatica, vol. 31, pp. 1751–1770, 1995.
[5] U. Forssell and L. Ljung, "Closed-loop identification revisited—Updated version," Automatica, vol. 35, pp. 1215–1244, 1999.
[6] F. R. Hansen, G. F. Franklin, and R. Kosut, "Closed-loop identification via the fractional representation: Experiment design," in Proc. Amer. Contr. Conf., Pittsburgh, PA, 1989, pp. 1422–1427.
[7] P. M. J. Van den Hof and R. J. P. Schrama, "An indirect method for transfer function estimation from closed loop data," Automatica, vol. 29, pp. 1523–1527, 1993.
[8] P. M. J. Van den Hof, R. J. P. Schrama, R. A. de Callafon, and O. H. Bosgra, "Identification of normalized coprime factors from closed-loop experimental data," Eur. J. Contr., vol. 1, pp. 62–74, 1995.

Control of Nonlinear Chained Systems: From the Routh–Hurwitz Stability Criterion to Time-Varying Exponential Stabilizers

P. Morin and C. Samson

Abstract—We show how any linear feedback that stabilizes the origin of a linear chain of integrators induces a simple, continuous time-varying feedback that exponentially stabilizes the origin of a nonlinear chained-form system. The design method is related to a method developed by M'Closkey and Murray for transforming smooth feedback laws that yield slow polynomial convergence into continuous homogeneous ones that yield exponential convergence. Index Terms—Asymptotic stability, nonholonomic system, time-varying feedback.

I. INTRODUCTION

Control systems in the so-called chained form have been extensively studied in recent years. This research interest partly stems from the fact that the kinematic equations of many nonholonomic mechanical systems, such as those arising in mobile robotics (unicycle-type carts, car-like vehicles with trailers, etc.), can be converted into this form [12], [16], [18]. This paper addresses the problem of asymptotic stabilization of a given equilibrium point (which corresponds to a fixed configuration for a mechanical system). Because chained systems do not satisfy Brockett's necessary condition [1], they cannot be asymptotically stabilized, with respect to any equilibrium point, by means of a continuous pure-state feedback u(x). In [15], one of the authors proposed and derived smooth time-varying feedback laws u(x, t) for the stabilization of a unicycle-type vehicle. This work showed how the topological obstruction raised by Brockett's condition could be dodged and was the starting point of other studies on time-varying feedback. In [3] and [4], Coron established that most controllable systems can be asymptotically stabilized with this type of feedback. The literature on the subject has since then mostly focused on the explicit design of such stabilizing control laws. Smooth feedback laws yielding slow (polynomial) asymptotic convergence were designed first (see, e.g., [13], [15]–[17], and [19]). More recently, properties associated with homogeneous systems have been used to obtain feedback laws that are only continuous but yield an exponential convergence rate [7], [8], [10], [14]. Lately, M'Closkey and Murray have presented in [9] a method for transforming smooth time-varying stabilizers into homogeneous continuous ones. The method is best suited for driftless systems, to which it applies systematically. The construction relies upon the initial knowledge of an adequate Lyapunov function coupled with a smooth stabilizing feedback law. More precisely, the exponential stabilizer is obtained by "scaling" the size of the smooth control inputs on a level set of the Lyapunov function. The feedbacks derived in the present paper have been obtained by adapting this method and combining its core with the control design approach earlier proposed by Samson in [16] for the smooth feedback stabilization of chained systems. Although our approach is specific to chained systems, it carries with it two important improvements with respect to [9]. The first one is that the knowledge

Manuscript received November 18, 1997; revised December 18, 1998. Recommended by Associate Editor, L.-S. Wang. The authors are with INRIA Sophia-Antipolis, 06902 Sophia Antipolis Cédex, France (e-mail: {pmorin, jsamson}@sophia.inria.fr). Publisher Item Identifier S 0018-9286(00)01924-3.

