Multi-Target Linear-Quadratic Control Problem and Second-order Cone Programming

L. Faybusovich, T. Mouktonglang

Department of Mathematics, 255 Hurley Building, University of Notre Dame, Notre Dame, IN 46556 USA

Abstract

It is shown that a multi-target linear-quadratic control problem can be reduced to the classical tracking problem where the target is a convex combination of the original ones. Finding the coefficients of this convex combination is reduced to solving a second-order cone programming problem, which can be easily solved using modern interior-point algorithms.

Key words: linear-quadratic control problem; second-order cone programming

1 Introduction

Let $(H, \langle \cdot, \cdot \rangle)$ be a Hilbert space, $Z$ its closed vector subspace, and $h_1, \ldots, h_m, c$ vectors in $H$. Consider the following optimization problem:

$$\max_{1 \le i \le m} \|h - h_i\| \to \min, \qquad (1)$$
$$h \in c + Z. \qquad (2)$$

Our major motivation for considering this problem comes from the multi-target linear-quadratic problem which we discuss in more detail below. More generally, (1), (2) describe particular but important cases of multi-criteria analytic design of linear regulators. In the present paper we show how to reduce (1), (2) to a finite-dimensional second-order cone programming problem which can be solved using interior-point algorithms. We analyze connections between (1), (2) and its dual following the approach suggested in [1].


2 Duality Considerations

Given a Hilbert space $(H, \langle \cdot, \cdot \rangle)$, we can define a second-order cone $L \subset V = \mathbb{R} \times H$ as follows:

$$L = \{(t, x) \in \mathbb{R} \times H : t \ge \|x\|\}.$$
Here $\|x\| = \sqrt{\langle x, x \rangle}$. Then

$$\mathrm{int}(L) = \{(t, x) \in \mathbb{R} \times H : t > \|x\|\},$$
and the dual cone $L^*$ of $L$ coincides with $L$ (for details, see e.g. [1]). Let $X = V \times V \times \cdots \times V$ ($m$ times), $K = L \times L \times \cdots \times L$ ($m$ times), $a, b \in X$, and let $Y \subset X$ be a closed vector subspace. Consider the following optimization problem

$$\langle\langle a, \xi \rangle\rangle \to \min, \qquad (3)$$
$$\xi \in (b + Y) \cap K, \qquad (4)$$
and its dual
$$\langle\langle b, \eta \rangle\rangle \to \min, \qquad (5)$$
$$\eta \in (a + Y^\perp) \cap K. \qquad (6)$$

Here, if $\xi^{(i)} = ((t_1^{(i)}, x_1^{(i)}), (t_2^{(i)}, x_2^{(i)}), \ldots, (t_m^{(i)}, x_m^{(i)}))$, $i = 1, 2$, then
$$\langle\langle \xi^{(1)}, \xi^{(2)} \rangle\rangle = \sum_{i=1}^{m} \left[ t_i^{(1)} t_i^{(2)} + \langle x_i^{(1)}, x_i^{(2)} \rangle \right] \qquad (7)$$
and
$$Y^\perp = \{\eta \in X : \langle\langle \eta, \xi \rangle\rangle = 0,\ \forall \xi \in Y\}.$$
The following theorem is a particular case of the result in [1].

Theorem 1 Let $\mathrm{int}(K) \cap (b + Y) \neq \emptyset$ and $\mathrm{int}(K) \cap (a + Y^\perp) \neq \emptyset$. Then both problems (3), (4) and (5), (6) have optimal solutions. Moreover, if $\xi^*$ is a feasible solution to (3), (4) and $\eta^*$ is a feasible solution to (5), (6), then $(\xi^*, \eta^*)$ is a pair of optimal solutions to (3), (4) and (5), (6), respectively, if and only if
$$\langle\langle \xi^*, \eta^* \rangle\rangle = 0. \qquad (8)$$

Our first goal is to rewrite the problem (1), (2) in the equivalent form (3), (4) and then calculate its dual (5), (6). Observe that we can assume that $c = 0$ (otherwise, we can make the change of variables $\tilde h = h - c$). Then (1), (2) is equivalent to:

$$t \to \min, \qquad (9)$$
$$\|h - h_i\| \le t, \quad i = 1, 2, \ldots, m, \qquad (10)$$
$$h \in Z. \qquad (11)$$

Let $V = \mathbb{R} \times H$, $X = V \times V \times \cdots \times V$ ($m$ times). Let, further,
$$Y = \{((t, h), (t, h), \ldots, (t, h)) \in X : h \in Z\}, \qquad (12)$$
$$b = ((0, -h_1), (0, -h_2), \ldots, (0, -h_m)) \in X, \qquad (13)$$
$$a = ((1, 0), (0, 0), \ldots, (0, 0)) \in X. \qquad (14)$$

Obviously, we can rewrite (9)-(11) in the form:
$$\langle\langle a, \xi \rangle\rangle \to \min, \qquad (15)$$
$$\xi \in (b + Y) \cap K, \qquad (16)$$
where $K = L \times L \times \cdots \times L$ ($m$ times).
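Indeed, with $a$, $b$ and $Y$ as in (12)-(14), a generic element of $b + Y$ has the form
$$\xi = ((t, h - h_1), (t, h - h_2), \ldots, (t, h - h_m)), \qquad t \in \mathbb{R},\ h \in Z,$$
so the constraint $\xi \in K$ says precisely that $t \ge \|h - h_i\|$ for $i = 1, \ldots, m$, while $\langle\langle a, \xi \rangle\rangle = t$; this is exactly (9)-(11).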

Proposition 1 We have:
$$Y^\perp = \{((s_1, y_1), (s_2, y_2), \ldots, (s_m, y_m)) \in X : s_1 + s_2 + \cdots + s_m = 0,\ y_1 + y_2 + \cdots + y_m \in Z^\perp\}, \qquad (17)$$
where $Z^\perp = \{g \in H : \langle g, h \rangle = 0,\ \forall h \in Z\}$.

Proof It is obvious that the right-hand side of (17) is contained in $Y^\perp$. Conversely, if $((s_1, y_1), (s_2, y_2), \ldots, (s_m, y_m)) \in Y^\perp$, then according to (7):
$$(s_1 + s_2 + \cdots + s_m)t + \langle y_1 + y_2 + \cdots + y_m, h \rangle = 0$$
for any $t \in \mathbb{R}$ and any $h \in Z$. The result follows. $\Box$

Using (5), (6), (12)-(14) and Proposition 1, we obtain the following dual to (15), (16):
$$-\sum_{i=1}^{m} \langle h_i, y_i \rangle \to \min, \qquad (18)$$
$$\|y_i\| \le s_i, \quad i = 1, 2, \ldots, m, \quad s_1 + s_2 + \cdots + s_m = 1, \qquad (19)$$
$$y_1 + y_2 + \cdots + y_m \in Z^\perp. \qquad (20)$$
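For intuition, weak duality between (9)-(11) and (18)-(20) can be verified directly (recall that we have set $c = 0$, so feasible $h$ lie in $Z$): for any such $h$ and any $(s_i, y_i)$ feasible for (18)-(20),
$$\sum_{i=1}^{m} \langle h_i, y_i \rangle = \sum_{i=1}^{m} \langle h_i - h, y_i \rangle \le \sum_{i=1}^{m} \|h - h_i\| \, \|y_i\| \le \max_{1 \le i \le m} \|h - h_i\| \sum_{i=1}^{m} s_i = \max_{1 \le i \le m} \|h - h_i\|,$$
where the first equality uses $\langle h, \sum_i y_i \rangle = 0$ (since $h \in Z$ and $\sum_i y_i \in Z^\perp$). Thus the quantity maximized by the dual never exceeds the primal objective; the optimality criterion (8) expresses the complementarity that makes this chain tight.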

Our first observation is that the conditions of Theorem 1 are satisfied. Indeed, let
$$t = \max_{1 \le i \le m} \{\|h_i\|\} + 1, \qquad h = 0, \qquad \xi = ((t, h - h_1), (t, h - h_2), \ldots, (t, h - h_m)).$$
Then $\xi \in \mathrm{int}(K) \cap (b + Y)$. Similarly, take $y_1 = y_2 = \cdots = y_m = 0$, $s_i = \frac{1}{m}$, $i = 1, 2, \ldots, m$. Then $\eta = ((s_1, y_1), (s_2, y_2), \ldots, (s_m, y_m)) \in \mathrm{int}(K) \cap (a + Y^\perp)$.

Hence, both problems (15), (16) and (18)-(20) have optimal solutions, and the optimality criterion has the form (8). We now use (8) to derive a characterization of the optimal solutions to (15), (16) and (18)-(20), respectively. We need the following lemma.

Lemma 1 Let $(t_i, x_i) \in L$, $i = 1, 2$. Suppose that
$$t_1 t_2 + \langle x_1, x_2 \rangle = 0.$$
Then
$$t_1 x_2 + t_2 x_1 = 0. \qquad (21)$$

Proof Since $(t_i, x_i) \in L$, we have $t_i \ge \|x_i\|$, $i = 1, 2$. Using the Cauchy-Schwarz inequality, we obtain:
$$0 = t_1 t_2 + \langle x_1, x_2 \rangle \ge t_1 t_2 - \|x_1\| \|x_2\| = t_1 (t_2 - \|x_2\|) + \|x_2\| (t_1 - \|x_1\|) \ge 0. \qquad (22)$$

If $t_1 = \|x_1\|$ but $t_2 > \|x_2\|$, then $t_1 = 0$, hence $x_1 = 0$ and (21) holds. The case $t_1 > \|x_1\|$ and $t_2 > \|x_2\|$ would imply $t_1 = 0$, which is impossible. Finally, if $t_1 = \|x_1\|$ and $t_2 = \|x_2\|$, then if $x_1 = 0$ or $x_2 = 0$, (21) immediately follows. Let $x_1 \neq 0$, $x_2 \neq 0$. The inequalities (22) imply that
$$x_1 = \lambda x_2, \qquad \lambda < 0.$$
In particular, $\|x_1\| = -\lambda \|x_2\|$, i.e. $t_1 = -\lambda t_2$. Thus, $\lambda = -\frac{t_1}{t_2}$. Hence, (21) follows. $\Box$

Theorem 2 Let $\xi^*, \eta^*$ be optimal solutions to problems (15), (16) and (18)-(20), respectively. Then $\xi^*$ is unique. Moreover, if
$$\xi^* = ((t^*, h^* - h_1), (t^*, h^* - h_2), \ldots, (t^*, h^* - h_m)),$$
$$\eta^* = ((s_1^*, y_1^*), (s_2^*, y_2^*), \ldots, (s_m^*, y_m^*))$$
are optimal solutions and
$$I = \{i \in [1, m] : \|h^* - h_i\| = t^*\},$$
then
$$h^* = \sum_{i \in I} \|y_i^*\| \, \pi_Z h_i, \qquad (23)$$
$$y_i^* = -\frac{h^* - h_i}{t^*} \|y_i^*\|, \quad i \in I, \qquad (24)$$
$$y_i^* = 0, \quad i \notin I, \qquad (25)$$
$$s_i^* = \|y_i^*\|, \quad \forall i \in [1, m], \qquad (26)$$
$$\sum_{i \in I} \|y_i^*\| = 1. \qquad (27)$$
Here $\pi_Z h_i$ denotes the orthogonal projection of $h_i$ onto $Z$.

Proof First of all, since the cost function in (1), (2) is strictly convex, its optimal solution is unique. Hence, the same holds true for the equivalent problem (15), (16). Furthermore, it is obvious that (except for the trivial case $m = 1$, $h_1 \in Z$) we have $t^* > 0$.

In our concrete situation the optimality condition (8) has the form:
$$\sum_{i=1}^{m} \left( \langle h^* - h_i, y_i^* \rangle + s_i^* t^* \right) = 0,$$
which is equivalent to (recall that $L^* = L$)
$$\langle h^* - h_i, y_i^* \rangle + s_i^* t^* = 0, \quad i = 1, 2, \ldots, m. \qquad (28)$$
For the analysis of (28), consider two cases. First, if $i \in I$, then (21) and (28) imply
$$s_i^* (h^* - h_i) + t^* y_i^* = 0.$$
Since $t^* > 0$, we obtain
$$y_i^* = -\frac{s_i^*}{t^*} (h^* - h_i).$$
In particular, $\|y_i^*\| = s_i^*$, since $\|h^* - h_i\| = t^*$ by the definition of the set $I$. Hence,
$$y_i^* = -\frac{\|y_i^*\|}{t^*} (h^* - h_i). \qquad (29)$$
If $i \notin I$, then
$$0 = \langle h^* - h_i, y_i^* \rangle + s_i^* t^* \ge -\|h^* - h_i\| \|y_i^*\| + s_i^* t^* = (t^* - \|h^* - h_i\|) s_i^* + \|h^* - h_i\| (s_i^* - \|y_i^*\|) \ge 0.$$
Since $t^* > \|h^* - h_i\|$ for $i \notin I$, we should have $s_i^* = 0$ and hence $\|y_i^*\| = 0$. Observe that in both cases $s_i^* = \|y_i^*\|$. Hence
$$\sum_{i=1}^{m} \|y_i^*\| = \sum_{i \in I} \|y_i^*\| = \sum_{i=1}^{m} s_i^* = 1.$$

In particular, (29) yields:
$$\sum_{i \in I} y_i^* = \frac{-h^* + \sum_{i \in I} \|y_i^*\| h_i}{t^*}. \qquad (30)$$
Since
$$\sum_{i=1}^{m} y_i^* = \sum_{i \in I} y_i^* \in Z^\perp,$$
we obtain from (30) (taking into account $h^* \in Z$): $h^* = \sum_{i \in I} \|y_i^*\| \, \pi_Z h_i$. $\Box$
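Since the orthogonal projection onto $Z$ is linear and, by (27), the weights $\|y_i^*\|$ sum to one, (23) can be read as the reduction announced in the introduction: $h^*$ solves the classical (single-target) tracking problem whose target is the convex combination of the original targets with weights $\|y_i^*\|$,
$$h^* = \sum_{i \in I} \|y_i^*\| \, \pi_Z h_i = \pi_Z\Big( \sum_{i \in I} \|y_i^*\| \, h_i \Big) = \operatorname*{arg\,min}_{h \in Z} \Big\| h - \sum_{i \in I} \|y_i^*\| \, h_i \Big\|.$$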

3 Reduction to the Second-Order Cone Programming Problem

Due to (23), the optimal solution to (15), (16) is contained in the finite-dimensional vector subspace $W = \mathrm{span}(\pi_Z h_1, \ldots, \pi_Z h_m)$ of $Z$. Given $\lambda = (\lambda_1, \ldots, \lambda_m)^T \in \mathbb{R}^m$, consider the vector $h(\lambda) \in W$,
$$h(\lambda) = \sum_{i=1}^{m} \lambda_i \, \pi_Z(h_i).$$
We have
$$\|h(\lambda) - h_i\| = \sqrt{\|h(\lambda) - \pi_Z h_i\|^2 + \rho_i^2},$$
where $\rho_i = \|\pi_{Z^\perp} h_i\|$ is the norm of the orthogonal projection of the vector $h_i$ onto the orthogonal complement $Z^\perp$ of $Z$ in $H$. Furthermore,
$$\|h(\lambda) - \pi_Z h_i\|^2 = e(i)^T \Gamma \, e(i),$$
where $e_j(i) = \lambda_j$ for $j \neq i$, $e_i(i) = \lambda_i - 1$, and $\Gamma = (\langle \pi_Z h_i, \pi_Z h_j \rangle)$. Let $\Gamma = B^T B$ be the Cholesky decomposition of $\Gamma$. Then
$$\|h(\lambda) - \pi_Z h_i\|^2 = \|B e(i)\|_2^2 = \|B \lambda - b_i\|_2^2,$$
where $b_i = B e_i$ is the $i$-th column of $B$. Hence,
$$\|h(\lambda) - h_i\|^2 = \left\| \begin{bmatrix} B\lambda - b_i \\ \rho_i \end{bmatrix} \right\|_2^2,$$
and we can rewrite the original problem (1), (2) in the following equivalent form:
$$t \to \min, \qquad (31)$$
$$\left\| \begin{bmatrix} B\lambda - b_i \\ \rho_i \end{bmatrix} \right\|_2 \le t, \quad i = 1, 2, \ldots, m, \qquad (32)$$
$$\lambda \in \mathbb{R}^m. \qquad (33)$$
The problem (31)-(33) is a second-order cone programming problem and, hence, can be easily solved using interior-point algorithms (see e.g. [4]).
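As an illustration, the following sketch sets up and solves (31)-(33) for a toy finite-dimensional instance ($H = \mathbb{R}^N$, $Z$ the column space of a random matrix). It uses NumPy together with the CVXPY modeling package as one convenient front end to interior-point SOCP solvers; the instance data, variable names and the small regularization of $\Gamma$ are illustrative choices, not taken from the paper.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Toy finite-dimensional instance: H = R^N, Z = column space of S, m targets.
N, dim_Z, m = 40, 6, 5
S = rng.standard_normal((N, dim_Z))
targets = [rng.standard_normal(N) for _ in range(m)]          # h_1, ..., h_m

# Orthogonal projections pi_Z h_i and the distances rho_i = ||pi_{Z^perp} h_i||.
Q, _ = np.linalg.qr(S)                                        # orthonormal basis of Z
proj = [Q @ (Q.T @ h) for h in targets]
rho = [np.linalg.norm(h - p) for h, p in zip(targets, proj)]

# Gram matrix Gamma = (<pi_Z h_i, pi_Z h_j>) and a factor Bfac with Gamma = Bfac^T Bfac.
Gamma = np.array([[pi @ pj for pj in proj] for pi in proj])
Gamma += 1e-10 * np.eye(m)                                    # guard against rank deficiency
Bfac = np.linalg.cholesky(Gamma).T                            # upper-triangular factor
b_cols = [Bfac[:, i] for i in range(m)]                       # b_i = Bfac e_i

# Second-order cone program (31)-(33): minimize t s.t. ||(Bfac*lam - b_i, rho_i)|| <= t.
lam = cp.Variable(m)
t = cp.Variable()
constraints = [cp.norm(cp.hstack([Bfac @ lam - b_cols[i], rho[i]])) <= t for i in range(m)]
prob = cp.Problem(cp.Minimize(t), constraints)
prob.solve()

# Recover h(lambda) = sum_i lambda_i pi_Z h_i and check it against the cost in (1), (2).
h_opt = sum(l * p for l, p in zip(lam.value, proj))
print("optimal value  t* =", prob.value)
print("max_i ||h - h_i|| =", max(np.linalg.norm(h_opt - h) for h in targets))
print("weights lambda    =", np.round(lam.value, 3))
```

Up to the interface, this mirrors the computation carried out with SeDuMi in Section 5; any interior-point SOCP solver could be substituted.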

4 Multi-target Linear Control Problem

Denote by $L_2^n[0, T]$ the vector space of square-integrable functions $f : [0, T] \to \mathbb{R}^n$. Let $H = L_2^n[0, T] \times L_2^l[0, T]$, $T > 0$, and
$$Z = \{(\xi, u) \in H : \xi \text{ is absolutely continuous on } [0, T],\ \dot\xi(t) = A(t)\xi(t) + B(t)u(t),\ t \in [0, T],\ \xi(0) = 0\}.$$
Here $A(t)$ (respectively, $B(t)$) is an $n \times n$ (respectively, $n \times l$) continuous matrix-valued function. Observe that
$$\langle (\xi_1, u_1), (\xi_2, u_2) \rangle = \int_0^T \left[ \xi_1^T(t)\xi_2(t) + u_1^T(t)u_2(t) \right] dt, \qquad (\xi_i, u_i) \in H,\ i = 1, 2.$$
Compare this with [1], [2]. In this setting, the problem (1), (2) admits a natural interpretation as a linear control multi-target problem (with $m$ targets $h_i = (\varphi_i, \psi_i)$, $i = 1, 2, \ldots, m$).

The next proposition summarizes the solution of the classical tracking problem in the form convenient to us (see e.g. [5]).

Proposition 2 Suppose that the matrix Riccati differential equation
$$\dot K = K B(t) B^T(t) K - A^T(t) K - K A(t) - I, \qquad K(T) = 0, \qquad (34)$$
has a solution $K(t)$ on the interval $[0, T]$. Then
$$Z^\perp = \{(\dot p + A^T(t) p,\ B^T(t) p) : p \text{ is absolutely continuous on } [0, T],\ \dot p \in L_2^n[0, T],\ p(T) = 0\}. \qquad (35), (36)$$
More precisely, given $(\varphi, \psi) \in H$, we have:
$$\varphi = \xi + \dot p + A^T p, \qquad (37)$$
$$\psi = u + B^T p, \qquad (38)$$
where
$$\dot\xi = A\xi + Bu, \qquad \xi(0) = 0, \qquad (39)$$
$$p = K\xi + \eta, \qquad (40)$$
$$\dot\eta = -(A^T - K B B^T)\eta + \varphi - K B \psi, \qquad \eta(T) = 0. \qquad (41)$$
In particular,
$$\dot\xi = (A - B B^T K)\xi - B(B^T \eta - \psi), \qquad \xi(0) = 0. \qquad (42)$$

Proof Let $\dot\xi = A\xi + Bu$, $\xi(0) = 0$. We have
$$\langle (\xi, u), (\dot p + A^T p, B^T p) \rangle = \int_0^T \left[ \xi^T(\dot p + A^T p) + u^T(B^T p) \right] dt = \int_0^T \left[ (A\xi + Bu)^T p + \xi^T \dot p \right] dt = \left[ \xi^T p \right]_0^T + \int_0^T (A\xi + Bu - \dot\xi)^T p \, dt = 0,$$

provided $p(T) = 0$. Thus, the vector space on the right-hand side of (35), (36) is orthogonal to $Z$.

Let us now check the decomposition (37), (38). Observe that, given $(\varphi, \psi) \in H$, (34) and (41) determine $K$ and $\eta$. We then find $\xi$ using (42). The expression (40) determines $p$, and by (38), $u = \psi - B^T p$. It remains to check that (37) and (39) are satisfied. We have:
$$u = \psi - B^T p = \psi - B^T(K\xi + \eta).$$
Hence,
$$A\xi + Bu = A\xi + B\psi - B B^T K \xi - B B^T \eta.$$
But then $\dot\xi = A\xi + Bu$ according to (42). Differentiating (40) with respect to $t$, we obtain
$$\dot p = \dot K \xi + K \dot\xi + \dot\eta.$$
Using (34), (41), (42), we obtain:
$$\dot p = (K B B^T K - A^T K - K A - I)\xi + K\left[ (A - B B^T K)\xi - B B^T \eta + B\psi \right] + \varphi - K B \psi - (A^T - K B B^T)\eta = -A^T K \xi - \xi + \varphi - A^T \eta.$$
Hence,

$$\varphi = \xi + \dot p + A^T(K\xi + \eta) = \xi + \dot p + A^T p,$$
which is (37). $\Box$

Remark Observe that Proposition 2 provides a constructive procedure for calculating $\pi_Z(\varphi, \psi)$ for $(\varphi, \psi) \in H$.

Remark The question of solvability of the matrix differential equation (34) is classical in control theory (see e.g. [3]).
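As a rough numerical illustration of the first remark, the sketch below computes $\pi_Z(\varphi, \psi)$ on a uniform time grid by integrating (34) and (41) backward and (42) forward with a plain explicit Euler scheme, then recovering $p$ from (40) and $u$ from (38). The matrices $A$, $B$, the horizon, the step count and the particular inputs $\varphi$, $\psi$ are made-up example data, and the crude integrator is only meant to show the order of the computations, not a production implementation.

```python
import numpy as np

# Illustrative data (not from the paper): a time-invariant pair (A, B) on [0, T].
n, l, T, steps = 2, 1, 5.0, 2000
dt = T / steps
tgrid = np.linspace(0.0, T, steps + 1)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
phi = np.stack([np.sin(tgrid), np.cos(tgrid)], axis=1)   # phi(t) in R^n
psi = np.exp(-tgrid).reshape(-1, 1)                      # psi(t) in R^l

# Backward sweep: Riccati equation (34) and equation (41), with K(T) = 0, eta(T) = 0.
K = np.zeros((steps + 1, n, n))
eta = np.zeros((steps + 1, n))
for k in range(steps, 0, -1):
    Kdot = K[k] @ B @ B.T @ K[k] - A.T @ K[k] - K[k] @ A - np.eye(n)
    etadot = -(A.T - K[k] @ B @ B.T) @ eta[k] + phi[k] - K[k] @ B @ psi[k]
    K[k - 1] = K[k] - dt * Kdot
    eta[k - 1] = eta[k] - dt * etadot

# Forward sweep: the closed-loop equation (42), xi(0) = 0.
xi = np.zeros((steps + 1, n))
for k in range(steps):
    xidot = (A - B @ B.T @ K[k]) @ xi[k] - B @ (B.T @ eta[k] - psi[k])
    xi[k + 1] = xi[k] + dt * xidot

# Recover p from (40) and u from (38); then pi_Z(phi, psi) = (xi, u).
p = np.einsum('kij,kj->ki', K, xi) + eta
u = psi - p @ B                                          # row k holds (B^T p)(t_k)

# Sanity check of (37): phi should equal xi + p' + A^T p, up to discretization error.
pdot = np.gradient(p, dt, axis=0)
res = phi - (xi + pdot + p @ A)
print("max residual of (37):", np.abs(res).max())
```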

The problem (1), (2) for the situation considered in this section takes the form:
$$\max_{i \in [1, m]} \left[ \int_0^T \left[ (\xi - \varphi_i)^T(\xi - \varphi_i) + (u - \psi_i)^T(u - \psi_i) \right] dt \right] \to \min,$$
$$\dot\xi = A\xi + Bu, \qquad \xi(0) = 0.$$

The dual problem (18)-(20) can be rewritten in the form:
$$\sum_{i=1}^{m} \int_0^T \left( \varphi_i^T \chi_i + \psi_i^T \nu_i \right) dt \to \max, \qquad (43)$$
$$\dot p = -A^T p + \sum_{i=1}^{m} \chi_i, \quad p(T) = 0, \quad \sum_{i=1}^{m} \nu_i = B^T p, \qquad (44)$$
$$\sum_{i=1}^{m} \left[ \int_0^T \left( \chi_i^T \chi_i + \nu_i^T \nu_i \right) dt \right]^{1/2} \le 1, \qquad (45)$$
where $y_i = (\chi_i, \nu_i) \in H$, $i = 1, \ldots, m$, are the dual variables of (18)-(20).

One possible interpretation of the dual problem (43)-(45) is that we have a linear filter (44) with $m$ input signals $\chi_i$. We split the output signal $B^T p$ into $m$ components $\nu_i$ in such a way that we match a given input-output pattern $(\varphi_1, \psi_1), (\varphi_2, \psi_2), \ldots, (\varphi_m, \psi_m)$ in the best possible way, provided that the total energy of the signal is bounded (see (45)).

5 Example

Let $H = L_2^2[0, T] \times L_2^1[0, T]$, $T > 0$. Consider the following optimization problem:
$$\max_{1 \le i \le m} \|h - h_i\| \to \min, \qquad (46)$$
$$h \in Z, \qquad (47)$$

where the targets $h_i = [x_{1i}, x_{2i}, u_i]^T$, $i = 1, \ldots, 5$, are fixed elementary functions of $t$: each state component $x_{1i}$, $x_{2i}$ is of the form $e^{-kt}$, $\sin(t)e^{-kt}$ or $\cos(t)e^{-kt}$ with a small integer decay rate $k$, each control component is of the form $u_i = 1/(t^2 + c)$ with $c \in \{3, 5, 8\}$, and

$$Z = \{(\xi, u) \in H : \xi(t) = (\xi_1(t), \xi_2(t)) \text{ is absolutely continuous on } [0, T],\ \xi(0) = 0,\ \dot\xi_1 = \xi_2(t),\ \dot\xi_2 = \xi_1(t) + u(t)\}$$

is a closed Hilbert subspace. Then, by applying Section 3 of this paper, problem (46), (47) can be written in the form of problem (31)-(33), which has been solved using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones (see [4] for more detail). Solutions for a given time $T$ are given in the following table:

T    Opt     λ1      λ2      λ3      λ4      λ5
1    0.522   1.000   0.000   0.000   0.000   0.000
2    0.548   1.000   0.000   0.000   0.000   0.000
3    0.558   1.000   0.000   0.000   0.000   0.000
4    0.561   0.000   0.000   0.000   0.300   0.700
5    0.563   0.000   0.000   0.000   0.472   0.528
6    0.563   0.000   0.000   0.000   0.527   0.473
7    0.563   0.000   0.000   0.000   0.545   0.455
8    0.564   0.000   0.000   0.000   0.552   0.448
9    0.564   0.000   0.000   0.000   0.553   0.447
10   0.564   0.000   0.000   0.000   0.555   0.445
11   0.564   0.000   0.000   0.000   0.556   0.444
12   0.564   0.000   0.000   0.000   0.559   0.441
13   0.564   0.000   0.000   0.000   0.560   0.440
14   0.564   0.000   0.000   0.000   0.552   0.448
15   0.564   0.000   0.000   0.000   0.557   0.443
16   0.564   0.000   0.000   0.000   0.558   0.442
17   0.564   0.000   0.000   0.000   0.556   0.444
18   0.564   0.000   0.000   0.000   0.559   0.441
19   0.564   0.000   0.000   0.000   0.559   0.441
20   0.564   0.000   0.000   0.000   0.556   0.444

where $h = \sum_{i=1}^{5} \lambda_i \, \pi_Z h_i$ is the optimal solution, and $\pi_Z h_i$ is the orthogonal projection of $h_i$ onto $Z$.
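For instance, the row $T = 20$ of the table reads $h = 0.556\,\pi_Z h_4 + 0.444\,\pi_Z h_5$ (up to the displayed rounding); for large horizons only the fourth and fifth targets carry nonzero weight.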

Remark Observe from the solution table that the optimal value is monotonically nondecreasing and the optimal solution tends to stabilize as the time $T$ gets larger, i.e. the nonzero coefficients remain nonzero and tend to have limits. It is quite easy to show that the optimal value is in general monotonically nondecreasing: for $T' > T$, the restriction to $[0, T]$ of any element of $Z$ feasible on $[0, T']$ is feasible on $[0, T]$ and can only decrease each of the norms $\|h - h_i\|$. It is reasonable to conjecture that the optimal solution on the interval $[0, T]$ converges to the optimal solution on the interval $[0, \infty)$, provided the targets are in $L_2[0, \infty)$ and the pair $(A, B)$ is stabilizable.

6 Concluding remarks

In the present paper we have shown that a min-max Multi-Target Linear-Quadratic control problem can be reduced to the classical tracking problem (see e.g. [5]), where the single target is a convex combination of the original ones. Finding the coefficients in this convex combination can be reduced to a (finite-dimensional) second-order cone programming problem which can be easily solved using efficient interior-point algorithms. It would be natural to generalize the results of this paper to the max-min version of the general multi-criteria linear-quadratic control problem, but this will require the more general technique developed in [1].

This paper is based upon work supported by the National Science Foundation Grant No. 0102628.

References

[1] L. Faybusovich and T. Tsuchiya, Primal-dual Algorithms and Infinite-dimensional Jordan Algebra, Preprint, 2001.

[2] L. Faybusovich and J.B. Moore, Infinite-dimensional Quadratic Optimization: Interior-Point Methods and Control Applications, Vol. 36 (1997), pp. 43-66.

[3] M.H.A. Davis, Linear Estimation and Stochastic Control, Chapman and Hall, London, 1977, p. 224.

[4] J.F. Sturm, Using SeDuMi 1.02, a MATLAB Toolbox for Optimization over Symmetric Cones, Optimization Methods and Software, Vol. 11-12 (1999), pp. 625-653.

[5] H. Kwakernaak and R. Sivan, Linear Optimal Control Systems, Wiley Interscience, New York, 1972.

