On Designing of Sliding-Mode Control for Stochastic Jump Systems

Peng Shi, Yuanqing Xia, G. P. Liu, and D. Rees

Abstract—In this note, we consider the problems of stochastic stability and sliding-mode control for a class of linear continuous-time systems with stochastic jumps, in which the jumping parameters are modeled as a continuous-time, discrete-state homogeneous Markov process with right-continuous trajectories taking values in a finite set. By using a linear matrix inequality (LMI) approach, sufficient conditions are proposed to guarantee the stochastic stability of the underlying system. Then, a reaching motion controller is designed such that the resulting closed-loop system can be driven onto the desired sliding surface in finite time. It is shown that the sliding-mode control problem for Markovian jump systems is solvable if a set of coupled LMIs has solutions. A numerical example is given to show the potential of the proposed techniques.

Index Terms—Linear matrix inequality (LMI), Markovian jump parameter, sliding-mode control, stochastic stability.
Manuscript received October 17, 2004; revised April 7, 2005. Recommended by Associate Editor A. E. B. Lim. The work of P. Shi was supported by the K. C. Wong Education Foundation, Hong Kong, the Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, and the Space Control and Inertial Technology Research Center, Harbin Institute of Technology. The work of Y. Xia was supported by the National Natural Science Foundation of China under Grant 60504020.
P. Shi is with the School of Technology, University of Glamorgan, Pontypridd, CF37 1DL, U.K. (e-mail: [email protected]).
Y. Xia is with the School of Electronics, University of Glamorgan, Pontypridd, CF37 1DL, U.K., and also with the Department of Automatic Control, Beijing Institute of Technology, Beijing 100081, China (e-mail: [email protected]; [email protected]).
G. P. Liu is with the School of Electronics, University of Glamorgan, Pontypridd, CF37 1DL, U.K., and also with the CSIS Lab, CASIA, and the NCS Lab, CSU, PRC (e-mail: [email protected]).
D. Rees is with the School of Electronics, University of Glamorgan, Pontypridd, CF37 1DL, U.K. (e-mail: [email protected]).
Digital Object Identifier 10.1109/TAC.2005.861716

I. INTRODUCTION

A large class of physical systems has variable structures subject to random changes, which may result from abrupt phenomena such as component and interconnection failures, parameter shifting, tracking, and the time required to measure some of the variables at different stages. Systems with this character may be modeled as hybrid ones; that is, to the continuous state variable a discrete random variable, called the mode or regime, is appended. The mode describes the random jumps of the system parameters and the occurrence of discontinuities. One of the most important classes of hybrid systems is the so-called Markovian jump system (MJS), in which the mode process is a continuous-time, discrete-state Markov process taking values in a finite set. In engineering applications, dynamical systems that can be represented by different forms depending on the value of an associated Markov chain process are frequently termed jump systems. Research into this class of systems and their applications spans several decades. For some representative prior work on this general topic, we refer the reader to [4]–[6], [14]–[16], [20]–[22], and the references therein. More recently, the robust stabilization of stochastic nonlinear hybrid systems has been considered in [7], robust stability and controllability of stochastic differential delay equations can be found in [29], and nonlinear filtering for state-delayed systems with Markovian switching has been discussed in [24].
In another active research area, sliding-mode control has the attractive feature of keeping systems insensitive to uncertainties on the sliding surface. Sliding-mode control as a general design tool for robust control systems has been well established. Its salient advantages are: i) fast response and good transient performance; ii) robustness against a large class of perturbations or model uncertainties; and iii) the possibility of stabilizing some complex nonlinear systems which are difficult to stabilize by continuous state feedback laws; see, for example, [10], [26]–[28], and the references therein. However, to the best of the authors' knowledge, the problems above have not yet been fully investigated. Note that in the past a great amount of work has been devoted to Markovian jump systems, such as H2, H∞, and mixed H2/H∞ control for Markovian jump systems with known and unknown noise statistics, with either LQ or prescribed H∞ performance. However, to deal with systems that require a fast convergence rate on the switching surfaces and robustness against both internal and external disturbances while still maintaining a desired performance, sliding-mode control has been shown to be an effective and useful tool. This motivated us to study such systems with Markovian jumps. The work conducted in this note not only contributes to the development of theory, but also addresses practical problems that have already been reported in the literature, such as those occurring in manufacturing systems, telecommunication systems, Internet-based implementations, and power systems (see, for example, [4], [13], [25], and the references therein).

In this note, we consider the problem of stochastic sliding-mode control for a class of linear continuous systems with Markovian jump parameters. The jumping parameters are treated as a continuous-time, discrete-state Markov process. Concepts of stochastic stability and stochastic stabilization for the underlying systems are introduced. The sliding surface and the reaching motion controller for the system are then designed. An LMI-type condition for the existence of linear sliding surfaces is derived; its solution can be used to characterize linear sliding surfaces and, by selecting a suitable reaching law, the reaching motion controller is proposed. The above problems are solved in terms of a finite set of coupled linear matrix inequalities (LMIs). Finally, a numerical example is included to demonstrate the effectiveness of the theoretical results obtained.
II. PROBLEM FORMULATION AND PRELIMINARIES

We consider a class of stochastic systems with Markovian jump parameters in a fixed probability space (Ω, F, P)

  ẋ(t) = A(r_t)x(t) + B(r_t)[u(t) + F(r_t)w(t)],   r_0 = i,  t ≥ 0        (2.1)

where x(t) ∈ R^n is the state vector, u(t) ∈ R^m is the control input, and w(t) ∈ R^l is the disturbance, while {r_t, t ∈ [0, T]} is a finite-state Markovian process having state space S = {1, 2, ..., N} and generator Λ = (λ_ij), with transition probability from mode i at time t to mode j at time t + Δ, i, j ∈ S,

  p_ij = Pr(r_{t+Δ} = j | r_t = i) = λ_ij Δ + o(Δ),       if i ≠ j
                                     1 + λ_ii Δ + o(Δ),   if i = j

where Δ > 0 and lim_{Δ↓0} o(Δ)/Δ = 0. For each possible value r_t = i, i ∈ S, we will denote the system matrices associated with mode i by

  A(i) ≜ A(r_t),   B(i) ≜ B(r_t),   F(i) ≜ F(r_t)        (2.2)

where A(i), B(i), and F(i) are known real constant matrices of appropriate dimensions which describe the nominal system. It is assumed that

  λ_ij ≥ 0,   λ_ii = −Σ_{m=1, m≠i}^{N} λ_im,   ∀ i, j ∈ S, i ≠ j        (2.3)

and

  ‖F(i)w(t)‖ ≤ f(i),   i ∈ S        (2.4)

where f(i), i ∈ S, are positive scalars and ‖·‖ denotes the Euclidean norm of a vector and its induced norm for a matrix.

Remark 2.1: The model (2.1) is a hybrid system in which one state, x(t), takes values continuously and another state, r_t, referred to as the mode or operating form, takes values discretely in S. This kind of system can be used to represent many important physical systems subject to random failures and structural changes, such as electric power systems [25], control systems of a solar thermal central receiver [23], communication systems [1], aircraft flight control [17], control of nuclear power plants [19], and manufacturing systems [2], [3].
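As an illustration of the mode dynamics only (not part of the note's development), the process r_t defined by the generator Λ can be simulated directly: the holding time in mode i is exponential with rate −λ_ii, and the next mode is j ≠ i with probability λ_ij/(−λ_ii). The sketch below uses a hypothetical two-mode generator; the values of Lambda and T_end are placeholders.

```python
import numpy as np

def simulate_modes(Lambda, r0, T_end, rng=np.random.default_rng(0)):
    """Simulate a continuous-time Markov chain r_t with generator Lambda.

    Lambda[i, j] (i != j) is the jump rate from mode i to mode j and
    Lambda[i, i] = -sum_{j != i} Lambda[i, j]; returns jump times and modes.
    """
    times, modes = [0.0], [r0]
    t, i = 0.0, r0
    while t < T_end:
        rate = -Lambda[i, i]
        if rate <= 0.0:                       # absorbing mode: no further jumps
            break
        t += rng.exponential(1.0 / rate)      # exponential holding time in mode i
        probs = Lambda[i].copy()
        probs[i] = 0.0
        probs /= rate                         # jump distribution over the other modes
        i = int(rng.choice(len(probs), p=probs))
        times.append(t)
        modes.append(i)
    return np.array(times), np.array(modes)

# Hypothetical two-mode generator (values chosen for illustration only).
Lambda = np.array([[-0.5,  0.5],
                   [ 0.8, -0.8]])
times, modes = simulate_modes(Lambda, r0=0, T_end=20.0)
print(list(zip(times.round(2), modes)))
```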
In order to obtain a regular form of system (2.1), we can choose a nonsingular matrix T(r_t) such that

  T(r_t)B(r_t) = [ 0_{(n−m)×m} ; B_2(r_t) ]

where B_2(r_t) ∈ R^{m×m} is nonsingular. For convenience, let us partition

  T(r_t) = [ U_2^T(r_t) ; U_1^T(r_t) ]

where U_1(r_t) ∈ R^{n×m} and U_2(r_t) ∈ R^{n×(n−m)} are two sub-blocks of a unitary matrix resulting from the singular value decomposition of B(r_t), that is,

  B(r_t) = [ U_1(r_t)  U_2(r_t) ] [ Σ(r_t) ; 0_{(n−m)×m} ] V^T(r_t)

where Σ(r_t) ∈ R^{m×m} is a diagonal positive-definite matrix and V(r_t) ∈ R^{m×m} is a unitary matrix [12]. By the state transformation z = T(r_t)x, system (2.1) has the regular form

  ż(t) = Ā(r_t)z(t) + [ 0_{(n−m)×m} ; B_2(r_t) ] [u(t) + F(r_t)w(t)]        (2.5)

where Ā(r_t) = T(r_t)A(r_t)T^{−1}(r_t). System (2.5) can be written as

  ż_1(t) = Ā_11(r_t)z_1(t) + Ā_12(r_t)z_2(t)        (2.6)
  ż_2(t) = Ā_21(r_t)z_1(t) + Ā_22(r_t)z_2(t) + B_2(r_t)[u(t) + F(r_t)w(t)]        (2.7)

where z_1 ∈ R^{n−m}, z_2 ∈ R^m, and

  B_2(r_t) = Σ(r_t)V^T(r_t)
  Ā_11(r_t) = U_2^T(r_t)A(r_t)U_2(r_t),   Ā_12(r_t) = U_2^T(r_t)A(r_t)U_1(r_t)
  Ā_21(r_t) = U_1^T(r_t)A(r_t)U_2(r_t),   Ā_22(r_t) = U_1^T(r_t)A(r_t)U_1(r_t).

It is obvious that the first equation of system (2.6) represents the sliding motion dynamics of system (2.5); hence, the corresponding sliding surface can be chosen as follows:

  s(t) = [ C̄_1(r_t)  C̄_2(r_t) ] z = C̄_1(r_t)z_1 + C̄_2(r_t)z_2 = 0        (2.8)

where C̄_2(i) is invertible for any i ∈ S = {1, 2, ..., N}. Let C(r_t) = C̄_2^{−1}(r_t)C̄_1(r_t) ∈ R^{m×(n−m)}; substituting z_2 = −C(r_t)z_1 into (2.6) gives the sliding motion

  ż_1(t) = [Ā_11(r_t) − Ā_12(r_t)C(r_t)]z_1(t).        (2.9)
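The transformation just described is purely an SVD computation, so it is easy to reproduce numerically. The sketch below is an illustration under assumed data (the matrices A and B are placeholders, not the note's example): it forms T(i) = [U_2^T; U_1^T] from the SVD of B(i), checks that T(i)B(i) has the required zero top block, and extracts the blocks Ā_11(i), Ā_12(i) that enter the sliding motion (2.9).

```python
import numpy as np

def regular_form(A, B):
    """Regular-form data for one mode: T, B2, and the blocks of T A T^{-1}.

    B (n x m, full column rank) is factored as B = U @ [[S], [0]] @ Vt with
    U = [U1 U2]; then T = [U2^T; U1^T] gives T B = [0; B2] with B2 = S Vt.
    """
    n, m = B.shape
    U, s, Vt = np.linalg.svd(B)            # full SVD: U is n x n, Vt is m x m
    U1, U2 = U[:, :m], U[:, m:]
    B2 = np.diag(s) @ Vt                   # equals U1^T B, nonsingular
    T = np.vstack([U2.T, U1.T])            # orthogonal, so T^{-1} = T^T
    Abar = T @ A @ T.T
    A11, A12 = Abar[:n - m, :n - m], Abar[:n - m, n - m:]
    A21, A22 = Abar[n - m:, :n - m], Abar[n - m:, n - m:]
    return T, B2, (A11, A12, A21, A22)

# Hypothetical single-mode data, for illustration only.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
B = np.array([[0.0], [0.0], [1.0]])
T, B2, (A11, A12, A21, A22) = regular_form(A, B)
assert np.allclose((T @ B)[:A.shape[0] - B.shape[1]], 0.0)   # top (n - m) block is zero
```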
Let us recall the definition of stochastic stability for system (2.9).

Definition 2.1: For system (2.9), the equilibrium point 0 is stochastically stable if, for any z_1(0) and r_0 ∈ S,

  E [ ∫_0^∞ ‖z_1(t; z_1(0))‖² dt ] < +∞.

The following result shows that the stochastic stability of system (2.9) is equivalent to a set of coupled algebraic Lyapunov-type equations having solutions; a numerical sketch of this test is given at the end of this section.

Lemma 2.1 ([9], [20]): Consider system (2.9). Then the following statements are equivalent.
a) System (2.9) is stochastically stable.
b) For any given positive-definite matrices N(k), k ∈ S, there exist positive-definite matrices M(k), k ∈ S, satisfying

  Ã^T(k)M(k) + M(k)Ã(k) + Σ_{j=1}^{N} λ_kj M(j) + N(k) = 0,   k ∈ S        (2.10)

where Ã(k) = Ā_11(k) − Ā_12(k)C(k).

In the following, attention is focused on the design of the gain C(k) ∈ R^{m×(n−m)} and a reaching motion control law u(t) for each k ∈ S such that: 1) the sliding motion (2.9) is stochastically stable; and 2) the system (2.6)–(2.7) is stochastically stable with the reaching control law u(t).
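To illustrate the test in Lemma 2.1 (this sketch and its numbers are not from the note), the coupled equations (2.10) can be stacked into one linear system via vectorization and solved directly; stochastic stability is then indicated when every M(k) obtained is positive definite. A hypothetical two-mode, scalar example is used below.

```python
import numpy as np

def coupled_lyapunov(Atil, Lam, Nmats):
    """Solve Atil(k)^T M(k) + M(k) Atil(k) + sum_j Lam[k, j] M(j) + Nmats[k] = 0.

    Uses vec(A^T M) = (I kron A^T) vec(M) and vec(M A) = (A^T kron I) vec(M),
    with column-major (Fortran-order) vectorization; returns the list of M(k).
    """
    modes, n = len(Atil), Atil[0].shape[0]
    I = np.eye(n)
    blocks = [[None] * modes for _ in range(modes)]
    rhs = []
    for k in range(modes):
        for j in range(modes):
            blk = Lam[k, j] * np.eye(n * n)
            if j == k:
                blk = blk + np.kron(I, Atil[k].T) + np.kron(Atil[k].T, I)
            blocks[k][j] = blk
        rhs.append(-Nmats[k].reshape(n * n, order="F"))
    sol = np.linalg.solve(np.block(blocks), np.concatenate(rhs))
    return [sol[k * n * n:(k + 1) * n * n].reshape(n, n, order="F") for k in range(modes)]

# Hypothetical scalar sliding-motion matrices and generator (illustration only).
Atil = [np.array([[-1.0]]), np.array([[-2.0]])]
Lam = np.array([[-0.3, 0.3], [0.6, -0.6]])
Ms = coupled_lyapunov(Atil, Lam, [np.eye(1), np.eye(1)])
print([bool(np.all(np.linalg.eigvalsh(M) > 0)) for M in Ms])   # [True, True] here
```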
III. MAIN RESULTS

In this section, the design results for the sliding surface and the reaching motion controller are presented. Let us first consider the problem of sliding surface design; the results are in the form of LMIs. The first result, on the design of the sliding surface, can be stated as follows.

Theorem 3.1: The reduced-order system (2.9) is stochastically stable if there exist symmetric positive-definite matrices P(k) ∈ R^{(n−m)×(n−m)}, k ∈ S, and general matrices Q(k) ∈ R^{m×(n−m)}, k ∈ S, such that the following inequalities hold for all k ∈ S:

\begin{bmatrix}
\Pi(k) & \lambda_{k1}^{1/2} P(k) & \lambda_{k2}^{1/2} P(k) & \cdots & \lambda_{k(k-1)}^{1/2} P(k) & \lambda_{k(k+1)}^{1/2} P(k) & \cdots & \lambda_{kN}^{1/2} P(k) \\
\lambda_{k1}^{1/2} P(k) & -P(1) & 0 & \cdots & 0 & 0 & \cdots & 0 \\
\lambda_{k2}^{1/2} P(k) & 0 & -P(2) & \cdots & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\
\lambda_{k(k-1)}^{1/2} P(k) & 0 & 0 & \cdots & -P(k-1) & 0 & \cdots & 0 \\
\lambda_{k(k+1)}^{1/2} P(k) & 0 & 0 & \cdots & 0 & -P(k+1) & \cdots & 0 \\
\vdots & \vdots & \vdots & & \vdots & \vdots & \ddots & \vdots \\
\lambda_{kN}^{1/2} P(k) & 0 & 0 & \cdots & 0 & 0 & \cdots & -P(N)
\end{bmatrix} < 0        (3.1)

where

  Π(k) = P(k)Ā_11^T(k) + Ā_11(k)P(k) + λ_kk P(k) − Ā_12(k)Q(k) − Q^T(k)Ā_12^T(k).

Moreover, the sliding surface of system (2.6) is

  s(t) = C̄_1(k)z_1(t) + C̄_2(k)z_2(t) = 0,   k ∈ S        (3.2)

where C̄_1(k) and C̄_2(k) are obtained from an appropriate factorization of

  C̄_2^{−1}(k)C̄_1(k) = Q(k)P^{−1}(k),   for all k ∈ S.
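The factorization in Theorem 3.1 is not unique. One convenient choice, assuming only that the invertibility requirement on C̄_2(k) from (2.8) is respected, is

  C̄_2(k) = I_m,   C̄_1(k) = Q(k)P^{−1}(k)

so that the surface (3.2) reduces to s(t) = Q(k)P^{−1}(k)z_1(t) + z_2(t) = 0 and the sliding-motion gain is simply C(k) = Q(k)P^{−1}(k). This is only one admissible realization; any pair with C̄_2^{−1}(k)C̄_1(k) = Q(k)P^{−1}(k) works equally well.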
Proof: From Lemma 2.1, (2.9) is stochastically stable if and only if, for any given positive-definite matrices N(k), k ∈ S, there exist positive-definite matrices M(k), k ∈ S, satisfying

  Ã^T(k)M(k) + M(k)Ã(k) + Σ_{j=1}^{N} λ_kj M(j) + N(k) = 0,   k ∈ S        (3.3)

which is equivalent to the following inequalities:

  Ã^T(k)M(k) + M(k)Ã(k) + Σ_{j=1}^{N} λ_kj M(j) < 0,   k ∈ S.        (3.4)

Pre- and postmultiplying inequality (3.4) by M^{−1}(k) gives

  M^{−1}(k)Ã^T(k) + Ã(k)M^{−1}(k) + M^{−1}(k) [ Σ_{j=1}^{N} λ_kj M(j) ] M^{−1}(k) < 0,   k ∈ S.        (3.5)

Let P(k) = M^{−1}(k), k ∈ S; then it yields

  P(k)Ã^T(k) + Ã(k)P(k) + λ_kk P(k) + P(k) [ Σ_{j=1, j≠k}^{N} λ_kj P^{−1}(j) ] P(k) < 0.        (3.6)

Applying the Schur complement formula to (3.6) gives (3.7), where k ∈ S and Δ(k) = P(k)Ã^T(k) + Ã(k)P(k) + λ_kk P(k). Let Q(k) = C(k)P(k); then (3.7) is equivalent to (3.1).

Remark 3.1: Note that Theorem 3.1 provides a sufficient condition for the stochastic stability of system (2.9), which can easily be checked by using the available LMI Toolbox. This result covers Theorem 1 for the nominal system in [28] as a special case, i.e., when N = 1.
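As a complement to the LMI Toolbox route mentioned in Remark 3.1, the feasibility of (3.1) can also be posed in other SDP front ends. The sketch below is a hypothetical two-mode instance written with CVXPY (assumed to be installed together with an SDP solver); the matrices A11, A12 and the generator Lam are placeholders rather than the note's numerical example, and the factorization C̄_2(k) = I, C̄_1(k) = Q(k)P(k)^{-1} discussed after Theorem 3.1 is used to recover the gains.

```python
import numpy as np
import cvxpy as cp

# Hypothetical two-mode data (placeholders, not the note's numerical example).
n, m = 3, 1                                  # so z1 has dimension n - m = 2
A11 = [np.array([[0.0, 1.0], [1.0, -1.0]]), np.array([[0.5, 0.2], [0.0, -2.0]])]
A12 = [np.array([[0.0], [1.0]]), np.array([[1.0], [0.5]])]
Lam = np.array([[-0.4, 0.4], [0.7, -0.7]])   # generator of the two-mode chain

P = [cp.Variable((n - m, n - m), symmetric=True) for _ in range(2)]
Q = [cp.Variable((m, n - m)) for _ in range(2)]
eps = 1e-6
constraints = []
for k in range(2):
    j = 1 - k                                # the only other mode when N = 2
    Pi = (P[k] @ A11[k].T + A11[k] @ P[k] + Lam[k, k] * P[k]
          - A12[k] @ Q[k] - Q[k].T @ A12[k].T)
    lmi = cp.bmat([[Pi, np.sqrt(Lam[k, j]) * P[k]],
                   [np.sqrt(Lam[k, j]) * P[k], -P[j]]])
    lmi = (lmi + lmi.T) / 2                  # structurally symmetric form of (3.1)
    constraints += [P[k] >> eps * np.eye(n - m),
                    lmi << -eps * np.eye(2 * (n - m))]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
if prob.status == cp.OPTIMAL:
    C = [Q[k].value @ np.linalg.inv(P[k].value) for k in range(2)]
    print("sliding-motion gains C(k) = Q(k) P(k)^{-1}:", C)
```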
Next, the result on the design of the reaching motion controller is derived in the following theorem.

Theorem 3.2: Assume that the condition in Theorem 3.1 holds, i.e., inequalities (3.1) have solutions P(k) ∈ R^{(n−m)×(n−m)}, k ∈ S, and Q(k) ∈ R^{m×(n−m)}, k ∈ S, and the linear sliding surface is given by (3.2). Moreover, assume there exist selected ε(k), k ∈ S, such that the inequalities

  −Γ(k)ε(k) − ε^T(k)Γ(k) + Σ_{j=1}^{N} λ_kj Γ(j) < 0,   k ∈ S        (3.8)

have positive-definite matrix solutions Γ(k). Then, the following control makes the sliding surface s(z(t)) = 0 stochastically stable and globally attractive in finite time:

  u(t) = −(C̄_2(k)B_2(k))^{−1} { [ C̄_1(k)  C̄_2(k) ] Ā(k)z(t) + ε(k)s(t) + (ρ(k) + f(k)) sgn(Γ(k)s(t)) }        (3.9)

where ρ(k), k ∈ S, are given positive constants.
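To make the shape of the control law (3.9) concrete, the sketch below evaluates it for the currently active mode k, given the designed C̄_1(k), C̄_2(k), Γ(k), ε(k) and the scalars ρ(k), f(k). It is only an illustration: the variable names and the element-wise convention used for sgn(·) are assumptions, not specifications from the note.

```python
import numpy as np

def reaching_control(z, k, A_bar, B2, C1, C2, Gamma, eps_mat, rho, f):
    """Evaluate the reaching-motion control (3.9) for the active mode k.

    u = -(C2 B2)^{-1} { [C1 C2] A_bar z + eps s + (rho + f) sgn(Gamma s) },
    where s = [C1 C2] z is the sliding variable and sgn acts element-wise.
    All arguments indexed by k are sequences of per-mode arrays/scalars.
    """
    Cz = np.hstack([C1[k], C2[k]])              # [C1(k)  C2(k)]
    s = Cz @ z                                  # sliding variable s(t)
    drive = Cz @ A_bar[k] @ z + eps_mat[k] @ s  # linear reaching terms
    switch = (rho[k] + f[k]) * np.sign(Gamma[k] @ s)
    return -np.linalg.solve(C2[k] @ B2[k], drive + switch)
```

In closed loop, the ε(k)s(t) term sets the linear reaching rate, while the (ρ(k) + f(k)) sgn(Γ(k)s(t)) term is the switching component intended to dominate the matched disturbance bounded as in (2.4).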