Lectures on Multivariable Feedback Control Ali Karimpour Department of Electrical Engineering, Faculty of Engineering, Ferdowsi University of Mashhad (September 2009)
Chapter 4: Stability of Multivariable Feedback Control Systems

4-1 Well-Posedness of Feedback Loop
4-2 Internal Stability
4-3 The Nyquist Stability Criterion
    4-3-1 The Generalized Nyquist Stability Criterion
    4-3-2 Nyquist arrays and Gershgorin bands
4-4 Coprime Factorizations over Stable Transfer Functions
4-5 Stabilizing Controllers
4-6 Strong and Simultaneous Stabilization
Chapter 4
Lecture Notes of Multivariable Control
This chapter introduces the feedback structure and discusses its stability properties. The chapter is arranged as follows. Section 4-1 describes the general feedback configuration and defines well-posedness of the feedback loop. Section 4-2 introduces the notion of internal stability and establishes the relationship between the state-space characterization of internal stability and the transfer-matrix characterization of internal stability. Section 4-3 presents the Nyquist stability criterion for multivariable systems. Stable coprime factorizations of rational matrices are introduced in Section 4-4. Section 4-5 discusses how to obtain stabilizing controllers. Finally, Section 4-6 introduces strong and simultaneous stabilization.
4-1 Well-Posedness of Feedback Loop

We consider the standard feedback configuration shown in Figure 4-1. It consists of the interconnected plant P and controller K forced by command r, sensor noise n, plant input disturbance d_i, and plant output disturbance d. In general, all signals are assumed to be multivariable, and all transfer matrices are assumed to have appropriate dimensions. Assume that the plant P and the controller K in Figure 4-1 are fixed real-rational proper transfer matrices. Then the first question one would ask is whether the feedback interconnection makes sense, i.e., is physically realizable. To be specific, consider the simple example

P(s) = -\frac{s-1}{s+2}, \qquad K(s) = 1

where both are proper transfer functions. However,

u = \frac{s+2}{3}(r - n - d) + \frac{s-1}{3} d_i

i.e., the transfer functions from the external signals r, n, d and d_i to u are not proper. Hence, the feedback system is not physically realizable!

Definition 4-1
A feedback system is said to be well-posed if all closed-loop transfer matrices are well-defined and proper.
Figure 4-1 Standard feedback configuration
Now suppose that all the external signals r, n, d and d_i are specified and that the closed-loop transfer matrices from them to u are respectively well-defined and proper. Then y and all other signals are also well-defined and the related transfer matrices are proper. Furthermore, since the transfer matrices from d and n to u are the same and differ from the transfer matrix from r to u by only a sign, the system is well-posed if and only if the transfer matrix from (d_i, d) to u exists and is proper.

Theorem 4-1
The feedback system in Figure 4-1 is well-posed if and only if

I + K(\infty)P(\infty) \ \text{is invertible.} \qquad (4-1)

Proof. As explained above, the system is well-posed if and only if the transfer matrix from (d_i, d) to u exists and is proper. This transfer matrix is

u = -(I + KP)^{-1} \begin{bmatrix} K & KP \end{bmatrix} \begin{bmatrix} d \\ d_i \end{bmatrix}

Thus well-posedness is equivalent to the condition that (I + KP)^{-1} exists and is proper. But this is equivalent to the condition that the constant term I + K(\infty)P(\infty) is invertible.

It is straightforward to show that (4-1) is equivalent to either one of the following two conditions:

\begin{bmatrix} I & K(\infty) \\ -P(\infty) & I \end{bmatrix} \ \text{is invertible} \qquad (4-2)

I + P(\infty)K(\infty) \ \text{is invertible.}
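As a numerical sanity check, the condition of Theorem 4-1 can be tested directly on the values of P and K at s = infinity. The sketch below (using numpy, which is an assumption of this note, not part of the lecture text) checks the scalar example above, where P(∞) = -1 and K(∞) = 1:

```python
import numpy as np

def is_well_posed(D_p: np.ndarray, D_k: np.ndarray) -> bool:
    """Theorem 4-1: the loop is well-posed iff I + K(inf) P(inf) is invertible."""
    m = np.eye(D_k.shape[0]) + D_k @ D_p
    return bool(abs(np.linalg.det(m)) > 1e-12)

# The scalar example: P = -(s-1)/(s+2) has P(inf) = -1, and K = 1 has K(inf) = 1.
D_p = np.array([[-1.0]])
D_k = np.array([[1.0]])
print(is_well_posed(D_p, D_k))  # I + K(inf)P(inf) = 0, so the loop is ill-posed
```

For this example the function returns False, matching the improper closed-loop transfer functions found above.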
The well-posedness condition is simple to state in terms of state-space realizations. Introduce realizations of P and K:

P \cong \begin{bmatrix} A & B \\ C & D \end{bmatrix}, \qquad K \cong \begin{bmatrix} \hat{A} & \hat{B} \\ \hat{C} & \hat{D} \end{bmatrix} \qquad (4-3)

Then P(\infty) = D and K(\infty) = \hat{D}. For example, the well-posedness condition (4-2) is equivalent to the condition that

\begin{bmatrix} I & \hat{D} \\ -D & I \end{bmatrix} \ \text{is invertible.} \qquad (4-4)

Fortunately, in most practical cases we will have D = 0, and hence well-posedness for most practical control systems is guaranteed.
4-2 Internal Stability

Consider a system described by the standard block diagram in Figure 4-1 and assume the system is well-posed. Furthermore, assume that the realizations for P(s) and K(s) given in equation (4-3) are stabilizable and detectable. Let x and \hat{x} denote the state vectors for P and K, respectively, and write the state equations in Figure 4-1 with r, d, d_i and n set to zero:

\dot{x} = Ax + Bu
y = Cx + Du \qquad (4-5)
\dot{\hat{x}} = \hat{A}\hat{x} - \hat{B}y
u = \hat{C}\hat{x} - \hat{D}y

Definition 4-2
The system of Figure 4-1 is said to be internally stable if the origin (x, \hat{x}) = (0, 0) is asymptotically stable, i.e., the states (x, \hat{x}) go to zero from all initial states when r = 0, d = 0, d_i = 0 and n = 0.
Note that internal stability is a state-space notion. To get a concrete characterization of internal stability, solve equations (4-5) for u and y:

\begin{bmatrix} u \\ y \end{bmatrix} = \begin{bmatrix} I & \hat{D} \\ -D & I \end{bmatrix}^{-1} \begin{bmatrix} 0 & \hat{C} \\ C & 0 \end{bmatrix} \begin{bmatrix} x \\ \hat{x} \end{bmatrix} \qquad (4-6)

Note that the existence of the inverse is guaranteed by the well-posedness condition. Now substitute this into (4-5) to get

\begin{bmatrix} \dot{x} \\ \dot{\hat{x}} \end{bmatrix} = \left( \begin{bmatrix} A & 0 \\ 0 & \hat{A} \end{bmatrix} + \begin{bmatrix} B & 0 \\ 0 & -\hat{B} \end{bmatrix} \begin{bmatrix} I & \hat{D} \\ -D & I \end{bmatrix}^{-1} \begin{bmatrix} 0 & \hat{C} \\ C & 0 \end{bmatrix} \right) \begin{bmatrix} x \\ \hat{x} \end{bmatrix} = \tilde{A} \begin{bmatrix} x \\ \hat{x} \end{bmatrix} \qquad (4-7)

where

\tilde{A} = \begin{bmatrix} A & 0 \\ 0 & \hat{A} \end{bmatrix} + \begin{bmatrix} B & 0 \\ 0 & -\hat{B} \end{bmatrix} \begin{bmatrix} I & \hat{D} \\ -D & I \end{bmatrix}^{-1} \begin{bmatrix} 0 & \hat{C} \\ C & 0 \end{bmatrix} \qquad (4-8)

Thus internal stability is equivalent to the condition that \tilde{A} has all its eigenvalues in the open left-half plane. In fact, this can be taken as a definition of internal stability.
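The construction of \tilde{A} in (4-8) is easy to carry out numerically. The sketch below (numpy assumed; the plant/controller pair is chosen here for illustration) assembles \tilde{A} for minimal realizations of P = (s-1)/(s+1) and K = 1/(s-1), where the unstable pole-zero cancellation shows up as an unstable eigenvalue of \tilde{A}:

```python
import numpy as np

def closed_loop_A(A, B, C, D, Ah, Bh, Ch, Dh):
    """Assemble A-tilde from equation (4-8); hatted arguments belong to K."""
    n, nh = A.shape[0], Ah.shape[0]
    Ad = np.block([[A, np.zeros((n, nh))], [np.zeros((nh, n)), Ah]])
    Bd = np.block([[B, np.zeros((n, Bh.shape[1]))],
                   [np.zeros((nh, B.shape[1])), -Bh]])
    # Well-posedness matrix [I, D-hat; -D, I] from (4-4)
    W = np.block([[np.eye(Dh.shape[0]), Dh], [-D, np.eye(D.shape[0])]])
    Cd = np.block([[np.zeros((Ch.shape[0], n)), Ch],
                   [C, np.zeros((C.shape[0], nh))]])
    return Ad + Bd @ np.linalg.solve(W, Cd)

# P = (s-1)/(s+1) = 1 - 2/(s+1):  A=-1, B=1, C=-2, D=1
# K = 1/(s-1):                    Ah=1, Bh=1, Ch=1, Dh=0
At = closed_loop_A(np.array([[-1.]]), np.array([[1.]]), np.array([[-2.]]), np.array([[1.]]),
                   np.array([[1.]]),  np.array([[1.]]), np.array([[1.]]),  np.array([[0.]]))
print(np.sort(np.linalg.eigvals(At).real))  # one eigenvalue at +1: not internally stable
```

The eigenvalues come out as {-2, +1}; the eigenvalue at +1 means this loop is not internally stable even though the loop transfer PK = 1/(s+1) looks harmless.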
Theorem 4-2
The system of Figure 4-1, with given stabilizable and detectable realizations for P and K, is internally stable if and only if \tilde{A} is a Hurwitz matrix (all eigenvalues in the open left-half plane).

It is routine to verify that the above definition of internal stability depends only on P and K, not on specific realizations of them, as long as the realizations of P and K are both stabilizable and detectable, i.e., no extra unstable modes are introduced by the realizations.

The above notion of internal stability is defined in terms of state-space realizations of P and K. It is also important and useful to characterize internal stability from the transfer-matrix point of view. Note that the feedback system in Figure 4-1 is described, in terms of transfer matrices, by

\begin{bmatrix} I & K \\ -P & I \end{bmatrix} \begin{bmatrix} u_p \\ -e \end{bmatrix} = \begin{bmatrix} d_i \\ -r \end{bmatrix} \qquad (4-9)

Note that we ignore d and n as inputs since they produce transfer matrices similar to those in equation (4-9).
Now it is intuitively clear that if the system in Figure 4-1 is internally stable, then for all bounded inputs (d_i, -r), the outputs (u_p, -e) are also bounded. The following theorem shows that this idea leads to a transfer-matrix characterization of internal stability.

Theorem 4-3
The system in Figure 4-1 is internally stable if and only if the transfer matrix

\begin{bmatrix} I & K \\ -P & I \end{bmatrix}^{-1} = \begin{bmatrix} I - K(I+PK)^{-1}P & -K(I+PK)^{-1} \\ (I+PK)^{-1}P & (I+PK)^{-1} \end{bmatrix} \qquad (4-10)

from (d_i, -r) to (u_p, -e) is proper and stable.

Proof. Let stabilizable and detectable realizations of P and K be given as in (4-3). Then the state-space equations for the system in Figure 4-1 are

\begin{bmatrix} \dot{x} \\ \dot{\hat{x}} \end{bmatrix} = \begin{bmatrix} A & 0 \\ 0 & \hat{A} \end{bmatrix} \begin{bmatrix} x \\ \hat{x} \end{bmatrix} + \begin{bmatrix} B & 0 \\ 0 & -\hat{B} \end{bmatrix} \begin{bmatrix} u_p \\ -e \end{bmatrix}

\begin{bmatrix} y \\ u \end{bmatrix} = \begin{bmatrix} C & 0 \\ 0 & \hat{C} \end{bmatrix} \begin{bmatrix} x \\ \hat{x} \end{bmatrix} + \begin{bmatrix} D & 0 \\ 0 & \hat{D} \end{bmatrix} \begin{bmatrix} u_p \\ e \end{bmatrix}

\begin{bmatrix} u_p \\ -e \end{bmatrix} = \begin{bmatrix} 0 & I \\ I & 0 \end{bmatrix} \begin{bmatrix} y \\ u \end{bmatrix} + \begin{bmatrix} d_i \\ -r \end{bmatrix}

Eliminating [y; u] from the last two equations, we can rewrite them as

\begin{bmatrix} I & \hat{D} \\ -D & I \end{bmatrix} \begin{bmatrix} u_p \\ -e \end{bmatrix} = \begin{bmatrix} 0 & \hat{C} \\ C & 0 \end{bmatrix} \begin{bmatrix} x \\ \hat{x} \end{bmatrix} + \begin{bmatrix} d_i \\ -r \end{bmatrix}

By substituting this into the state equations we have

\begin{bmatrix} \dot{x} \\ \dot{\hat{x}} \end{bmatrix} = \tilde{A} \begin{bmatrix} x \\ \hat{x} \end{bmatrix} + \begin{bmatrix} B & 0 \\ 0 & -\hat{B} \end{bmatrix} \begin{bmatrix} I & \hat{D} \\ -D & I \end{bmatrix}^{-1} \begin{bmatrix} d_i \\ -r \end{bmatrix}

\begin{bmatrix} u_p \\ -e \end{bmatrix} = \begin{bmatrix} I & \hat{D} \\ -D & I \end{bmatrix}^{-1} \begin{bmatrix} 0 & \hat{C} \\ C & 0 \end{bmatrix} \begin{bmatrix} x \\ \hat{x} \end{bmatrix} + \begin{bmatrix} I & \hat{D} \\ -D & I \end{bmatrix}^{-1} \begin{bmatrix} d_i \\ -r \end{bmatrix}

Now suppose that this system is internally stable, so that the eigenvalues of \tilde{A} given in (4-8)
are in the open left-half plane. It follows that the transfer matrix from (d_i, -r) to (u_p, -e) given in (4-10) is stable.

Conversely, suppose that (I + PK) is invertible and the transfer matrix in (4-10) is stable. Then, in particular, (I + PK)^{-1} is proper, which implies that I + P(\infty)K(\infty) = I + D\hat{D} is invertible. Therefore,

\begin{bmatrix} I & \hat{D} \\ -D & I \end{bmatrix}

is nonsingular. Now routine calculations give the transfer matrix from (d_i, -r) to (u_p, -e) in terms of the state-space realizations:

\begin{bmatrix} I & \hat{D} \\ -D & I \end{bmatrix}^{-1} \begin{bmatrix} 0 & \hat{C} \\ C & 0 \end{bmatrix} (sI - \tilde{A})^{-1} \begin{bmatrix} B & 0 \\ 0 & -\hat{B} \end{bmatrix} \begin{bmatrix} I & \hat{D} \\ -D & I \end{bmatrix}^{-1} + \begin{bmatrix} I & \hat{D} \\ -D & I \end{bmatrix}^{-1}

Since this transfer matrix is stable, it follows that

\begin{bmatrix} 0 & \hat{C} \\ C & 0 \end{bmatrix} (sI - \tilde{A})^{-1} \begin{bmatrix} B & 0 \\ 0 & \hat{B} \end{bmatrix}

is stable as a transfer matrix. Finally, since (A, B, C) and (\hat{A}, \hat{B}, \hat{C}) are stabilizable and detectable,

\left( \tilde{A}, \begin{bmatrix} B & 0 \\ 0 & \hat{B} \end{bmatrix}, \begin{bmatrix} 0 & \hat{C} \\ C & 0 \end{bmatrix} \right)

is stabilizable and detectable. It then follows that the eigenvalues of \tilde{A} are in the open left-half plane.

Note that to check internal stability, it is necessary (and sufficient) to test whether each of the four transfer matrices in (4-10) is stable. Stability cannot be concluded even if three of the four transfer matrices in (4-10) are stable. For example, let an interconnected system be given by

P(s) = \frac{s-1}{s+1}, \qquad K(s) = \frac{1}{s-1}

Then it is easy to compute
\begin{bmatrix} u_p \\ -e \end{bmatrix} = \begin{bmatrix} \dfrac{s+1}{s+2} & -\dfrac{s+1}{(s-1)(s+2)} \\[2mm] \dfrac{s-1}{s+2} & \dfrac{s+1}{s+2} \end{bmatrix} \begin{bmatrix} d_i \\ -r \end{bmatrix}
which shows that the system is not internally stable although three of the four transfer functions are stable. This can also be seen by calculating the closed-loop A-matrix with any stabilizable and detectable realizations of P and K.

Remark: Internal stability is a basic requirement for a practical feedback system, because every interconnected system is unavoidably subject to some nonzero initial conditions and some (possibly small) errors, and it cannot be tolerated in practice that errors at some locations lead to unbounded signals at other locations in the closed-loop system. Internal stability guarantees that all signals in a system are bounded provided that the injected signals (at any locations) are bounded.

However, there are some special cases under which determining system stability is simple.

Theorem 4-4
Suppose K is stable. Then the system in Figure 4-1 is internally stable iff (I + PK)^{-1}P is stable.

Proof. The necessity is obvious. To prove the sufficiency, it suffices to show that if Q = (I + PK)^{-1}P is stable, the other three transfer matrices in (4-10) are also stable, since:

I - K(I + PK)^{-1}P = I - KQ
-K(I + PK)^{-1} = -K(I - QK)
(I + PK)^{-1} = I - (I + PK)^{-1}PK = I - QK
This theorem is in fact the basis for classical control theory, where stability is checked only for one closed-loop transfer function, with the implicit assumption that the controller itself is stable.

Theorem 4-5
Suppose P is stable. Then the system in Figure 4-1 is internally stable iff K(I + PK)^{-1} is stable.

Proof. The necessity is obvious. To prove the sufficiency, it suffices to show that if Q = K(I + PK)^{-1} is stable, the other three transfer matrices are also stable, since:
I - K(I + PK)^{-1}P = I - QP
(I + PK)^{-1}P = (I - PQ)P
(I + PK)^{-1} = I - PK(I + PK)^{-1} = I - PQ
Theorem 4-6
Suppose P and K are both stable. Then the system in Figure 4-1 is internally stable iff (I + PK)^{-1} is stable.

Proof. The necessity is obvious. To prove the sufficiency, it suffices to show that if Q = (I + PK)^{-1} is stable, the other three transfer matrices are also stable, since:

I - K(I + PK)^{-1}P = I - KQP
(I + PK)^{-1}P = QP
K(I + PK)^{-1} = KQ
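The identities used in the proofs of Theorems 4-4 to 4-6 are pure matrix algebra, so they can be spot-checked at a sample frequency with random matrices standing in for P(s0) and K(s0). A sketch (numpy assumed; seeded so the check is reproducible):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# Random complex samples standing in for the frequency responses P(s0), K(s0)
P = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
K = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
I = np.eye(n)
inv = np.linalg.inv

# Theorem 4-6 case: Q = (I+PK)^-1
Q = inv(I + P @ K)
assert np.allclose(I - K @ inv(I + P @ K) @ P, I - K @ Q @ P)
assert np.allclose(inv(I + P @ K) @ P, Q @ P)
assert np.allclose(K @ inv(I + P @ K), K @ Q)

# Theorem 4-4 case: Q = (I+PK)^-1 P
Q = inv(I + P @ K) @ P
assert np.allclose(-K @ inv(I + P @ K), -K @ (I - Q @ K))
assert np.allclose(inv(I + P @ K), I - Q @ K)
print("identities verified")
```

This only checks the algebra pointwise; the stability claims of the theorems of course rest on Q itself being a stable transfer matrix.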
4-3 The Nyquist Stability Criterion

Let G(s) be a square, rational transfer matrix. G(s) will often represent the series connection of a plant with a compensator, and we must assume that there are no hidden unstable modes in the system represented by this transfer matrix. In this section we examine the stability of the negative-feedback loop created by inserting the compensator kI into the loop (i.e., an equal gain k in each loop), for various real values of k.
4-3-1 The Generalized Nyquist Stability Criterion

Suppose G(s) is a square, rational transfer matrix. We are going to check the stability of Figure 4-2 for various real values of k. Let det[I + kG(s)] have P_0 poles and P_c zeros in the closed right half-plane. Then, just as with SISO systems, we have, by the principle of the argument, that the Nyquist plot of \varphi(s) = det[I + kG(s)] (the image of \varphi(s) as s goes once round the Nyquist contour in the clockwise direction) encircles the origin P_c - P_0 times in the clockwise direction. The Nyquist contour is shown in Figure 4-3.

Remark: For a stable closed-loop system, det[I + kG(s)] has no zeros in the closed right half-plane, so P_c = 0.
Figure 4-2 System with constant controller kI in negative feedback with process G(s)

Figure 4-3 Nyquist contour
Note that the argument so far has exactly paralleled that for SISO feedback. However, if we stopped here we would have to draw the Nyquist locus of \varphi(s) = det[I + kG(s)] for each value of k in which we were interested, whereas the great virtue of the classical Nyquist criterion is that we draw a locus only once, and then infer stability properties for all values of k. It is easy to show that, if \lambda_i(s) is an eigenvalue of G(s), then k\lambda_i(s) is an eigenvalue of kG(s), and 1 + k\lambda_i(s) is an eigenvalue of I + kG(s). Consequently (since the determinant is the product of the eigenvalues) we have

\det[I + kG(s)] = \prod_i \left[ 1 + k\lambda_i(s) \right] \qquad (4-11)

and hence

\vartheta\{\det[I + kG(s)]\} = \sum_i \vartheta\{1 + k\lambda_i(s)\} \qquad (4-12)

where \vartheta(f(s)) is the total number of encirclements of the origin made by the graph of f(s). So we see that we can infer closed-loop stability by counting the total number of encirclements of the origin made by the graphs of 1 + k\lambda_i(s), or equivalently, by counting the total number of encirclements of -1 made by the graphs of k\lambda_i(s). In this form, the criterion has the same powerful character as the classical criterion, since it is enough to draw the graphs once, usually for k = 1. The graphs of \lambda_i(s) (as s goes once around the Nyquist contour) are called characteristic loci.

Remark: The eigenvalues of a rational matrix are usually irrational functions, so we might have problems counting encirclements. It turns out that in practice there is no problem, since taken together the characteristic loci form a closed curve.

Theorem 4-7 (Generalized Nyquist theorem)
If G(s), with no hidden unstable modes, has P_0 unstable (Smith-McMillan) poles, then the closed-loop system with return ratio -kG(s) is stable if and only if the characteristic loci of kG(s), taken together, encircle the point -1, P_0 times anticlockwise.

Proof: Desoer and Wang proved the theorem in 1980: "On the generalized Nyquist stability criterion," IEEE Transactions on Automatic Control, AC-25.

Example 4-1
In Figure 4-2, let

G(s) = \frac{1}{1.25(s+1)(s+2)} \begin{bmatrix} s-1 & s \\ -6 & s-2 \end{bmatrix}
Suppose that G(s) has no hidden modes; check the stability of the system for different values of k.
Solution: We need to find the eigenvalues of G(s), so we set \det(\lambda I - G(s)) = 0; after some manipulation we find
\lambda_1 = \frac{2s - 3 - \sqrt{1 - 24s}}{2.5(s+1)(s+2)} \qquad \text{and} \qquad \lambda_2 = \frac{2s - 3 + \sqrt{1 - 24s}}{2.5(s+1)(s+2)}
The characteristic loci of this transfer matrix are shown in Figure 4-4. Since G(s) has no unstable poles, we will have closed-loop stability if these loci give zero net encirclements of -1/k when a negative feedback kI is applied. From Figure 4-4 it can be seen that for -\infty < -1/k < -0.8, for -0.4 < -1/k < 0 and for 0.53 < -1/k < \infty there are no encirclements, and hence closed-loop stability is obtained. For -0.8 < -1/k < -0.4 there is one clockwise encirclement, and hence closed-loop instability, while for 0 < -1/k < 0.53 there are two clockwise encirclements, and therefore closed-loop instability again.
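The closed-form eigenvalues above can be cross-checked against a direct numerical eigendecomposition of G(s) at sample points on the Nyquist contour. A sketch (numpy assumed):

```python
import numpy as np

def G(s):
    """The 2x2 transfer matrix of Example 4-1 evaluated at a complex point s."""
    return np.array([[s - 1, s], [-6, s - 2]]) / (1.25 * (s + 1) * (s + 2))

def loci(s):
    """Closed-form characteristic loci, sorted for comparison."""
    r = np.sqrt(complex(1 - 24 * s))
    d = 2.5 * (s + 1) * (s + 2)
    return sorted([(2 * s - 3 - r) / d, (2 * s - 3 + r) / d],
                  key=lambda z: (z.real, z.imag))

s0 = 1j  # a sample point on the imaginary-axis part of the contour
numeric = sorted(np.linalg.eigvals(G(s0)), key=lambda z: (z.real, z.imag))
print(np.allclose(numeric, loci(s0)))  # True: the formulas match the eigenvalues
```

Sweeping s0 along the contour and tracking these eigenvalues is exactly how the characteristic loci in Figure 4-4 would be traced numerically.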
4-3-2 Nyquist arrays and Gershgorin bands

The Nyquist array of G(s) is an array of graphs (not necessarily closed curves), the (i, j)th graph being the Nyquist locus of g_{ij}(s). The key to Nyquist array methods is:

Theorem 4-8 (Gershgorin's theorem)
Let Z be a complex matrix of dimensions m \times m. Then the eigenvalues of Z lie in the union of the m circles, each with centre z_{ii} and radius

\sum_{j \ne i} |z_{ij}|, \qquad i = 1, 2, \ldots, m \qquad (4-13)
They also lie in the union of the m circles, each with centre z_{ii} and radius

\sum_{j \ne i} |z_{ji}|, \qquad i = 1, 2, \ldots, m \qquad (4-14)

Figure 4-4 The two characteristic loci for the system defined in Example 4-1
Proof: See "Advanced Engineering Mathematics," Kreyszig (1972).

Consider the Nyquist array of some square G(s). On the locus of g_{ii}(j\omega) superimpose, at each point, a circle of radius

\sum_{j \ne i} |g_{ij}(j\omega)| \qquad \text{or} \qquad \sum_{j \ne i} |g_{ji}(j\omega)| \qquad (4-15)

(making the same choice for all the diagonal elements at each frequency). The 'bands' obtained in this way are called Gershgorin bands; each is composed of Gershgorin circles. Gershgorin bands for a sample system are illustrated in Figure 4-5.
By Gershgorin's theorem we know that the union of the Gershgorin bands 'traps' the union of the characteristic loci. Furthermore, it can be shown that if the Gershgorin bands occupy distinct regions, then as many characteristic loci are trapped in each region as the number of Gershgorin bands occupying it. So, if all the Gershgorin bands exclude the point -1, then we can assess closed-loop stability by counting the encirclements of -1 by the Gershgorin bands, since this tells us the number of encirclements made by the characteristic loci.

Figure 4-5 A Nyquist array, with Gershgorin bands, for a sample system

If the Gershgorin bands of G(s) exclude the origin, then we say that G(s) is diagonally dominant (row dominant or column dominant, as applicable). Note that to assess stability we need [I + G(s)] to be diagonally dominant. The greater the degree of dominance (of G(s) or I + G(s)), that is, the narrower the Gershgorin bands, the more closely G(s) resembles m non-interacting SISO transfer functions.
4-4 Coprime Factorizations over Stable Transfer Functions
Recall that two polynomials m(s) and n(s), with, say, real coefficients, are said to be coprime if their greatest common divisor is 1 (equivalently, if they have no common zeros). It follows from Euclid's algorithm that two polynomials m(s) and n(s) are coprime if and only if there exist polynomials x(s) and y(s) such that x(s)m(s) + y(s)n(s) = 1; such an equation is called a Bezout identity. Similarly, two transfer functions m(s) and n(s) in the set of stable transfer functions are said to be coprime over the stable transfer functions if there exist x(s) and y(s) in the set of stable transfer functions such that

x(s)m(s) + y(s)n(s) = 1 \qquad (4-16)
More generally, we have the following definition for MIMO systems.

Definition 4-3
Two matrices M and N in the set of stable transfer matrices are right coprime over the set of stable transfer matrices if they have the same number of columns and if there exist matrices X_r and Y_r in the set of stable transfer matrices such that

\begin{bmatrix} X_r & Y_r \end{bmatrix} \begin{bmatrix} M \\ N \end{bmatrix} = X_r M + Y_r N = I \qquad (4-17)

Similarly, two matrices \tilde{M} and \tilde{N} in the set of stable transfer matrices are left coprime over the set of stable transfer matrices if they have the same number of rows and if there exist two matrices X_l and Y_l in the set of stable transfer matrices such that

\begin{bmatrix} \tilde{M} & \tilde{N} \end{bmatrix} \begin{bmatrix} X_l \\ Y_l \end{bmatrix} = \tilde{M} X_l + \tilde{N} Y_l = I \qquad (4-18)

Note that these definitions are equivalent to saying that the matrix \begin{bmatrix} M \\ N \end{bmatrix} is left-invertible in the set of stable transfer matrices and the matrix \begin{bmatrix} \tilde{M} & \tilde{N} \end{bmatrix} is right-invertible in the set of stable transfer matrices. Equations (4-17) and (4-18) are often called Bezout identities.

Now let P be a proper real-rational matrix. A right coprime factorization (rcf) of P is a factorization P = NM^{-1} where N and M are right coprime over the set of stable transfer matrices.
Similarly, a left coprime factorization (lcf) has the form P = \tilde{M}^{-1}\tilde{N} where \tilde{N} and \tilde{M} are left coprime over the set of stable transfer matrices. A matrix P(s) in the set of proper rational transfer matrices is said to have a double coprime factorization if there exist a right coprime factorization
P = NM^{-1}, a left coprime factorization P = \tilde{M}^{-1}\tilde{N}, and X_r, Y_r, X_l and Y_l in the set of stable transfer matrices such that

\begin{bmatrix} X_r & Y_r \\ -\tilde{N} & \tilde{M} \end{bmatrix} \begin{bmatrix} M & -Y_l \\ N & X_l \end{bmatrix} = I \qquad (4-19)
Of course, implicit in these definitions is the requirement that both M and \tilde{M} be square and nonsingular.

Theorem 4-9
Suppose P(s) is a proper real-rational matrix and

P(s) \cong \begin{bmatrix} A & B \\ C & D \end{bmatrix}

is a stabilizable and detectable realization. Let F and L be such that A + BF and A + LC are both stable, and define

\begin{bmatrix} M & -Y_l \\ N & X_l \end{bmatrix} \cong \begin{bmatrix} A + BF & B & -L \\ F & I & 0 \\ C + DF & D & I \end{bmatrix} \qquad (4-20)

\begin{bmatrix} X_r & Y_r \\ -\tilde{N} & \tilde{M} \end{bmatrix} \cong \begin{bmatrix} A + LC & -(B + LD) & L \\ F & I & 0 \\ C & -D & I \end{bmatrix} \qquad (4-21)
Figure 4-6 Feedback representation of right coprime factorization

Then P = NM^{-1} and P = \tilde{M}^{-1}\tilde{N} are an rcf and an lcf of P, respectively.
The coprime factorization of a transfer matrix can be given a feedback-control interpretation. For example, the right coprime factorization comes out naturally from changing the control variable by a state feedback. Consider the state-space equations for a plant P as in Figure 4-6:

\dot{x} = Ax + Bu
y = Cx + Du \qquad (4-22)

Next, introduce a state feedback and change the variable:

u = v + Fx \qquad (4-23)

where F is such that A + BF is stable. Then we get

\dot{x} = (A + BF)x + Bv
u = Fx + v \qquad (4-24)
y = (C + DF)x + Dv

Evidently from these equations, the transfer matrix from v to u is

M(s) \cong \begin{bmatrix} A + BF & B \\ F & I \end{bmatrix} \qquad (4-25)

and that from v to y is

N(s) \cong \begin{bmatrix} A + BF & B \\ C + DF & D \end{bmatrix} \qquad (4-26)
Therefore u(s) = M(s)v(s) and y(s) = N(s)v(s), so

y(s) = N(s)v(s) = N(s)M^{-1}(s)u(s) = P(s)u(s) \quad \Rightarrow \quad P(s) = N(s)M^{-1}(s).
4-5 Stabilizing Controllers

In this section we introduce a parameterization, known as the Q-parameterization or Youla parameterization, of all stabilizing controllers for a plant. By all stabilizing controllers we mean all controllers that yield internal stability of the closed-loop system. We first consider stable plants, for which the parameterization is easily derived, and then unstable plants, where we make use of the coprime factorization. The following theorem forms the basis for parameterizing all stabilizing controllers for stable plants.

Theorem 4-10
Suppose P is stable. Then the set of all stabilizing controllers in Figure 4-1 can be described as

K = Q(I - PQ)^{-1} \qquad (4-27)

for any Q in the set of stable transfer matrices with I - P(\infty)Q(\infty) nonsingular.

Proof. First, if K = Q(I - PQ)^{-1} exists, then

K = Q(I - PQ)^{-1} \ \Rightarrow \ K(I - PQ) = Q \ \Rightarrow \ Q = K(I + PK)^{-1}

Since Q is stable, K(I + PK)^{-1} is stable, so by Theorem 4-5 the system in Figure 4-1 is internally stable. Conversely, we must show that every stabilizing controller can be written as (4-27). Suppose the system in Figure 4-1 is internally stable; since P is also stable, by Theorem 4-5 K(I + PK)^{-1} is stable. Define

Q = K(I + PK)^{-1} = (I + KP)^{-1}K

so that Q + KPQ = K. Since I - P(\infty)Q(\infty) is nonsingular, K is given by K = Q(I - PQ)^{-1}.
Example 4-2
For the plant

P(s) = \frac{1}{(s+1)(s+2)}

suppose that it is desired to find an internally stabilizing controller so that y asymptotically tracks a ramp input.

Solution: Since the plant is stable, the set of all stabilizing controllers is given by K = Q(I - PQ)^{-1}, for any stable Q such that I - P(\infty)Q(\infty) is nonsingular. Let

Q = \frac{as + b}{s + 3}

We must choose a and b such that y asymptotically tracks a ramp input, so the sensitivity transfer function S must have two zeros at the origin (L must be a type 2 system):

S = 1 - T = 1 - PK(I + PK)^{-1} = 1 - PQ = 1 - \frac{as + b}{(s+1)(s+2)(s+3)} = \frac{(s+1)(s+2)(s+3) - (as + b)}{(s+1)(s+2)(s+3)}

So we should take a = 11, b = 6. This gives
Q = \frac{11s + 6}{s + 3}, \qquad K = \frac{11(s+1)(s+2)(s + 6/11)}{s^2(s + 6)}.
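The choice a = 11, b = 6 can be verified by expanding the numerator of S: with numpy's polynomial helpers (an assumption of this sketch), the coefficients of s^1 and s^0 must vanish so that S has a double zero at the origin:

```python
import numpy as np

# numerator of S = [(s+1)(s+2)(s+3) - (a s + b)] / [(s+1)(s+2)(s+3)]
den = np.polymul(np.polymul([1, 1], [1, 2]), [1, 3])   # s^3 + 6 s^2 + 11 s + 6
a, b = 11, 6
num_S = den - np.array([0, 0, a, b])                   # s^3 + 6 s^2
print(num_S)  # trailing two coefficients are zero -> double zero at s = 0
```

Any other (a, b) leaves a nonzero constant or s-term in the numerator, so the loop would fail to track a ramp with zero steady-state error.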
However, if P(s) is not stable, the parameterization is much more complicated. The result can be more conveniently stated using state-space representations. The following theorem shows that a proper real-rational plant may be stabilized irrespective of the location of its RHP-poles and RHP-zeros, provided the plant does not contain unstable hidden modes.

Theorem 4-11
Let P be a proper real-rational matrix and P = NM^{-1} = \tilde{M}^{-1}\tilde{N} be corresponding rcf and lcf over the set of stable transfer matrices. Then there exists a stabilizing controller

K = Y_l X_l^{-1} = X_r^{-1} Y_r

with X_r, Y_r, X_l and Y_l in the set of stable transfer matrices satisfying X_r M + Y_r N = I and \tilde{N} Y_l + \tilde{M} X_l = I.

Proof. See "Multivariable Feedback Design" by Maciejowski J.M. (1989).

Furthermore, suppose

P(s) \cong \begin{bmatrix} A & B \\ C & D \end{bmatrix}
is a stabilizable and detectable realization of P, and let F and L be such that A + BF and A + LC are stable. Then a particular set of state-space realizations for these matrices is given by (4-20) and (4-21). The following theorem forms the basis for parameterizing all stabilizing controllers for a proper real-rational plant.

Theorem 4-12
Let P be a proper real-rational matrix and P = NM^{-1} = \tilde{M}^{-1}\tilde{N} be corresponding rcf and lcf over the set of stable transfer matrices. Then the set of all stabilizing controllers in Figure 4-1 can be described as

K = (X_r - Q_r \tilde{N})^{-1}(Y_r + Q_r \tilde{M}) \qquad (4-28)

or

K = (Y_l + M Q_l)(X_l - N Q_l)^{-1} \qquad (4-29)
where Q_r is any stable transfer matrix such that X_r(\infty) - Q_r(\infty)\tilde{N}(\infty) is nonsingular, and Q_l is any stable transfer matrix such that X_l(\infty) - N(\infty)Q_l(\infty) is nonsingular.

Proof. See "Multivariable Feedback Design" by Maciejowski J.M. (1989).
Example 4-3
For the plant

P(s) = \frac{1}{(s-1)(s-2)}

find a stabilizing controller.

Solution: First of all, it is clear that

P(s) \cong \begin{bmatrix} 0 & 1 & 0 \\ -2 & 3 & 1 \\ 1 & 0 & 0 \end{bmatrix}

is a stabilizable and detectable realization. Now let F = [1 \ -5] and L = [-7 \ -23]^T; clearly A + BF and A + LC are stable. Since the plant is unstable, we use Theorem 4-11 to derive the coprime factorization parameters:
N = \frac{1}{(s+1)^2}, \quad M = \frac{(s-2)(s-1)}{(s+1)^2}, \quad Y_l = \frac{108s - 72}{(s+1)^2}, \quad X_l = \frac{s^2 + 9s + 38}{(s+1)^2}

\tilde{N} = \frac{1}{(s+2)^2}, \quad \tilde{M} = \frac{(s-2)(s-1)}{(s+2)^2}, \quad Y_r = \frac{108s - 72}{(s+2)^2}, \quad X_r = \frac{s^2 + 9s + 38}{(s+2)^2}
Now let Q_r = Q_l = 0; then by Theorem 4-12 one of the stabilizing controllers is

K = Y_l X_l^{-1} = X_r^{-1} Y_r = \frac{108s - 72}{s^2 + 9s + 38}
Example 4-4
For the plant in Figure 4-1 with

P(s) = \frac{1}{(s-1)(s-2)}

the problem is to find a controller such that
1. the feedback system is internally stable;
2. the final value of y equals 1 when r is a unit step and d = 0;
3. the final value of y equals zero when d_i is a sinusoid of 10 rad/s and r = 0.

Solution: The set of all stabilizing controllers is

K = (X_r - Q_r \tilde{N})^{-1}(Y_r + Q_r \tilde{M})

where from Example 4-3 we have

N = \frac{1}{(s+1)^2}, \quad M = \frac{(s-2)(s-1)}{(s+1)^2}, \quad \tilde{N} = \frac{1}{(s+2)^2}, \quad \tilde{M} = \frac{(s-2)(s-1)}{(s+2)^2}, \quad Y_r = \frac{108s - 72}{(s+2)^2}, \quad X_r = \frac{s^2 + 9s + 38}{(s+2)^2}

Clearly, condition 1 is satisfied for any stable Q_r.
To meet condition 2, the transfer function from r to y, which is N(Y_r + Q_r \tilde{M}), must satisfy

N(0)\left(Y_r(0) + Q_r(0)\tilde{M}(0)\right) = 1 \quad \Rightarrow \quad Q_r(0) = 38

To meet condition 3, the transfer function from d_i to y, which is N(X_r - Q_r \tilde{N}), must satisfy

N(10j)\left(X_r(10j) - Q_r(10j)\tilde{N}(10j)\right) = 0 \quad \Rightarrow \quad Q_r(10j) = -62 + 90j
Now define

Q_r(s) = x_1 + \frac{x_2}{s+1} + \frac{x_3}{(s+1)^2}

and then choose x_1, x_2 and x_3 to meet these requirements.
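The two interpolation constraints Q_r(0) = 38 and Q_r(10j) = -62 + 90j amount to three real linear equations (one real, one complex) in x_1, x_2, x_3, which can be solved directly. A sketch of the setup (numpy assumed; the specific solve is left here as a check rather than a worked exercise answer):

```python
import numpy as np

def qr(x, s):
    """Q_r(s) = x1 + x2/(s+1) + x3/(s+1)^2 for a coefficient vector x."""
    x1, x2, x3 = x
    return x1 + x2 / (s + 1) + x3 / (s + 1)**2

# Rows: Q_r(0) = 38, Re Q_r(10j) = -62, Im Q_r(10j) = 90
basis0  = np.array([1, 1, 1], dtype=complex)
basis10 = np.array([1, 1 / (1 + 10j), 1 / (1 + 10j)**2])
A = np.vstack([basis0.real, basis10.real, basis10.imag])
b = np.array([38.0, -62.0, 90.0])
x = np.linalg.solve(A, b)

assert abs(qr(x, 0) - 38) < 1e-9
assert abs(qr(x, 10j) - (-62 + 90j)) < 1e-6
print("constraints satisfied")
```

Since Q_r is a sum of stable first- and second-order terms, any solution of this linear system is automatically a stable Q_r, so condition 1 is preserved.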
4-6 Strong and Simultaneous Stabilization

Practical control engineers are reluctant to use unstable controllers, especially when the plant itself is stable, for if a sensor or actuator fails and the feedback loop opens, overall stability is maintained only if both plant and controller individually are stable. If the plant itself is unstable, the argument against using an unstable controller is less compelling. However, knowledge of when a plant is or is not stabilizable with a stable controller is useful for another problem, namely simultaneous stabilization, meaning stabilization of several plants by the same controller. The issue of simultaneous stabilization arises when a plant is subject to a discrete change, such as when a component burns out. Simultaneous stabilization of two plants can also be viewed as an example of a problem involving highly structured uncertainty.
We say that a plant is strongly stabilizable if internal stabilization can be achieved with a controller that is itself a stable transfer matrix. The following theorem shows that the poles and zeros of P must share a certain property in order for P to be strongly stabilizable.

Theorem 4-13
P is strongly stabilizable if and only if it has an even number of real poles between every pair of real RHP zeros (including zeros at infinity).

Proof. See "Linear feedback control" by Doyle.

Example 4-5
Which of the following plants is strongly stabilizable?
P_1(s) = \frac{s-1}{s(s-2)}, \qquad P_2(s) = \frac{(s-1)^2 (s^2 - s + 1)}{(s-2)^2 (s+1)^3}
Solution: P_1 is not strongly stabilizable since it has one real pole (at p = 2) between the real RHP zeros z = 1 and z = \infty, but P_2 is strongly stabilizable since it has two real poles (a double pole at p = 2) between z = 1 and z = \infty.
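The parity check of Theorem 4-13 is mechanical: list the real RHP zeros (including infinity when the plant is strictly proper) and count the real poles strictly between each adjacent pair. A small sketch for the two plants of Example 4-5 (pole/zero lists entered by hand; the helper is an illustration, not part of the notes):

```python
import math

def strongly_stabilizable(real_poles, real_rhp_zeros):
    """Parity interlacing test of Theorem 4-13: an even number of real poles
    between every pair of real RHP zeros (math.inf marks a zero at infinity).
    Checking adjacent pairs of the sorted zeros suffices."""
    zs = sorted(real_rhp_zeros)
    for lo, hi in zip(zs, zs[1:]):
        if sum(1 for p in real_poles if lo < p < hi) % 2:
            return False
    return True

# P1 = (s-1)/(s(s-2)): real RHP zeros {1, inf}, real poles {0, 2}
print(strongly_stabilizable([0, 2], [1, math.inf]))                 # False
# P2 = (s-1)^2 (s^2-s+1) / ((s-2)^2 (s+1)^3): zeros {1, 1, inf}, real poles {2, 2, -1, -1, -1}
print(strongly_stabilizable([2, 2, -1, -1, -1], [1, 1, math.inf]))  # True
```

Adjacent pairs are enough because the pole count between any pair of zeros is the sum of the counts over the adjacent intervals, and a sum of even numbers is even.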
Exercises
4-1 Find two different lcfs for the following transfer matrix:

G(s) = \begin{bmatrix} \dfrac{s-1}{s+1} & \dfrac{s-2}{s+2} \end{bmatrix}
4-2 Find an lcf and an rcf for the following transfer matrix:

G(s) = \frac{1}{s+2} \begin{bmatrix} s-1 & 4 \\ 4.5 & 2(s-1) \end{bmatrix}
4-3 Find an lcf and an rcf for the following transfer matrix:

G(s) = \frac{1}{(s+2)(s-3)} \begin{bmatrix} s-1 & 4 \\ 4.5 & 2(s-1) \end{bmatrix}
4-4 Derive a feedback control interpretation for the left coprime factorization.
4-5 Show that the equation of all stabilizing controllers for stable plants (eq. 4-27) is a special case of the equation of all stabilizing controllers for proper real-rational plants (eq. 4-28 or 4-29).
4-6 In example 4-4 find x1, x2 and x3.
4-7 In Example 4-4 find the controller and then find the step response of the system. Then suppose the input is zero but a sinusoid with frequency 10 rad/s is applied as a disturbance; find the response of the system.
4-8 Derive a representation for the left coprime factorization (analogous to the right coprime interpretation in Figure 4-6).
4-9 By use of the MIMO rule in Chapter 2, show that Figure 4-7 realizes the controller in equation 4-28.
4-10 By use of the MIMO rule in Chapter 2, show that Figure 4-8 realizes the controller in equation 4-29.
Figure 4-7 Representation of the stabilizing controller 4-28
Figure 4-8 Representation of the stabilizing controller 4-29
References

Maciejowski, J.M. (1989). Multivariable Feedback Design. Addison-Wesley.
Skogestad, Sigurd and Postlethwaite, Ian (2005). Multivariable Feedback Control. England: John Wiley & Sons, Ltd.