A Sequential SDP/Gauss-Newton Algorithm for Rank-Constrained LMI Problems

Pierre Apkarian (1) and Hoang Duong Tuan (3)

Abstract

This paper develops a second-order Newton algorithm for finding local solutions of rank-constrained LMI problems in robust synthesis. The algorithm is based on a quadratic approximation of a suitably defined merit function and generates sequences of LMI-feasible iterates. The main thrust of the algorithm is that it inherits the good local convergence properties of Newton methods and thus overcomes the difficulties encountered with earlier methods, such as the Frank & Wolfe or conditional gradient methods, which tend to be very slow in the neighborhood of a local solution. Moreover, it is easily implemented using available Semi-Definite Programming (SDP) codes. The proposed algorithms have proven global and local convergence properties and thus represent improvements over classically used D-K iteration schemes, but also outperform earlier conditional gradient algorithms. Reported computational results demonstrate these facts.

1 Introduction

A number of challenging problems in robust control theory fall within the class of rank minimization problems subject to LMI (convex) constraints. A non-exhaustive catalog of such problems is given in reference [3] and includes reduced- or fixed-order synthesis, robust syntheses with various classes of scalings or multipliers, reduced-order LPV (Linear Parameter-Varying) synthesis, reduction of LFT (Linear Fractional Transformation) representations, and also combinations thereof. Fairly general formulations of such problems are as follows. The rank minimization program is described as

minimize    Rank A(x)                                          (1)
subject to  L(x) < 0,                                          (2)

where x is the vector of decision variables, A(x) is a matrix-valued (not necessarily symmetric) affine function of x, and L(x) stands for the LMI constraints and therefore is a symmetric matrix-valued affine function of x. A commonly encountered variant of this problem is the feasibility problem with prescribed rank r, which is expressed as the search of x such that

Rank A(x) ≤ r                                                  (3)
L(x) < 0.                                                      (4)

(1) ONERA-CERT, Control System Dept., 2 av. Edouard Belin, 31055 Toulouse, FRANCE - Email: [email protected] - Tel: +33 5.62.25.27.84 - Fax: +33 5.62.25.27.64
(3) Department of Electronic-Mechanical Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-01, JAPAN - Email: [email protected]

It is shown in [15, 4, 3] that these problems can be cast as the minimization of suitably defined merit functions whose LMI-constrained zeros correspond to solutions of the problems introduced above. In this setup, conditional gradient and Frank & Wolfe techniques can be used to determine local solutions which are expectedly global. Also, specialized stopping schemes are carried out to truncate the sequence of iterates without reaching global optimality. The work in [4, 3] involves concave merit functions, and this property can be further exploited in a global concave programming technique to enhance the local solution or provide a certificate of global optimality with prescribed accuracy. Well-known advantages of conditional gradient methods are twofold. First, they do not rely on any structure of the constraint set other than convexity. Secondly, they are easily implemented with the help of available codes for Semi-Definite Programming (SDP) [7, 14, 27]. See [11] for a thorough description of these methods. Well-identified weaknesses of these methods are the following:

- similarly to any first-order descent technique, the conditional gradient algorithm can be very slow in the neighborhood of a local solution when the cost function is characterized by elongated level curves;

- the conditional gradient method is prone to zigzagging because the search direction generating map may be subject to discontinuities in the course of the algorithm. This phenomenon has been fully analyzed in [9, 8], where it is shown that for constraint sets that do not satisfy a certain positive curvature property, the convergence rate may significantly deteriorate and does not even achieve a linear rate of convergence.

In this paper, we develop a second-order Newton method which overcomes the local convergence impediments of the conditional gradient method. It is locally linearly convergent and retains the implementation simplicity of the conditional gradient method. As with any Newton method, it is stabilized by a simple scheme to ensure convergence for initial points remote from local solutions. A combination of the conditional gradient method and the proposed Newton-type algorithm appears particularly efficient in this respect. Proceeding this way, one can capture the advantages of the conditional gradient algorithm for remote initial points, and the local efficiency of Newton methods. Obviously, the proposed method is applicable to any of the control problems listed at the very beginning of the paper. We deliberately direct our attention to the robust control problem which, to some extent, is the most difficult in the list, and also to keep descriptions and discussions as concrete as possible. The Newton algorithm is based on a quadratic approximation of an adequate merit function. It is important to notice that such approximations are not useful in the context of the merit functions introduced in [15, 4, 3]. Indeed, in that work the merit functions are either bilinear or concave, and therefore positive definiteness of the Hessian matrix, or of its restriction to some adequate subspace, is generally not satisfied in the neighborhood of a local solution. It follows that local linear convergence cannot be attained using these functions. This has led us to consider different, though quite natural, merit functions for which sufficient optimality conditions can hold locally. As with previously developed techniques, the method here is in many respects superior to traditional D-K iteration schemes, which do not enjoy local convergence properties.

2 Notation

The notation used throughout the paper is fairly conventional. S^n will denote the set of n x n symmetric matrices. The notation Tr(A) is used for the trace of A. The gradient vector at x of a real-valued function f is denoted ∇f(x) and its Hessian at the same point is denoted ∇²f(x). In algorithm descriptions the notation X^k is used to designate the k-th iterate of the variable X. To save space, we shall make use of the matrix notation

diag(A, B) := [ A  0 ; 0  B ].

2.1 Inner and symmetric Kronecker products

In order to facilitate the description and implementation of algorithms, we shall make use of a specific operator 'vec' which maps the set of symmetric matrices S^n into R^{n(n+1)/2}. It is defined as

vec X := [ X11, ..., X1n, X22, ..., X2n, ..., Xnn ]^T,

and simply stores the upper triangle of the matrix into a single vector. If one considers the diagonal transformation

T := diag(1, sqrt(2), ..., sqrt(2), 1, sqrt(2), ..., sqrt(2), ..., 1),

where the unit entries correspond to diagonal terms of X whereas the entries sqrt(2) are associated with terms strictly above the diagonal, then it is readily verified that

Tr(XY) = vec(X)^T T^T T vec(Y) = vec(X)^T T^2 vec(Y).

Also of interest is the matrix representation of the symmetric operator on S^n

X --> (1/2)(U X V^T + V X U^T),

where U and V are arbitrary in R^{n x n}. There are different ways to construct a matrix representation of this operator. One particular instance which is compatible with the inner product of symmetric matrices is the following:

(U ⊛ V) T vec(X) := T vec( (1/2)(U X V^T + V X U^T) ),

where U ⊛ V generalizes the usual Kronecker product U ⊗ V to the set of symmetric matrices. Some useful properties of the symmetric Kronecker product are as follows:

- U ⊛ V = V ⊛ U;
- (U ⊛ V)^T = U^T ⊛ V^T = V^T ⊛ U^T;
- (1/2) < U X V^T + V X U^T, Y > = (1/2) Tr( (U X V^T + V X U^T) Y ) = vec(X)^T T^T (U ⊛ V)^T T vec(Y).
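As an illustration, the 'vec' map, the weighting T, and the symmetric Kronecker product can be sketched in a few lines of Python/NumPy. This is only a sketch of the constructions above; the function names svec, smat, weight_T, and sym_kron are ours, not the paper's:

```python
import numpy as np

def svec(X):
    """Stack the upper triangle of a symmetric matrix row by row (the paper's vec)."""
    n = X.shape[0]
    return np.concatenate([X[i, i:] for i in range(n)])

def smat(v, n):
    """Inverse of svec: rebuild the symmetric matrix from its upper triangle."""
    X = np.zeros((n, n))
    k = 0
    for i in range(n):
        for j in range(i, n):
            X[i, j] = X[j, i] = v[k]
            k += 1
    return X

def weight_T(n):
    """Diagonal weighting T: 1 on diagonal slots, sqrt(2) on strictly-upper slots."""
    w = np.concatenate([[1.0] + [np.sqrt(2.0)] * (n - 1 - i) for i in range(n)])
    return np.diag(w)

def sym_kron(U, V):
    """Matrix representation of X -> (1/2)(U X V^T + V X U^T) in T-weighted svec
    coordinates, built column by column by applying the operator to a basis."""
    n = U.shape[0]
    m = n * (n + 1) // 2
    T = weight_T(n)
    Tinv = np.diag(1.0 / np.diag(T))
    K = np.zeros((m, m))
    for j in range(m):
        e = np.zeros(m)
        e[j] = 1.0
        X = smat(Tinv @ e, n)                      # symmetric X with T svec(X) = e_j
        Y = 0.5 * (U @ X @ V.T + V @ X @ U.T)      # apply the symmetric operator
        K[:, j] = T @ svec(Y)
    return K
```

With these definitions one can check numerically that Tr(XY) = vec(X)^T T^2 vec(Y) and that sym_kron satisfies the commutativity and transpose properties listed above.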

3 Problem presentation

This section provides a brief review of a basic result that will be exploited throughout the paper. We are concerned with the robust control problem of an uncertain plant subject to LFT uncertainty. In other words, the uncertain plant is described as

[ xdot ]   [ A     B_Δ    B_1    B_2  ] [ x   ]
[ z_Δ  ] = [ C_Δ   D_ΔΔ   D_Δ1   D_Δ2 ] [ w_Δ ]
[ z    ]   [ C_1   D_1Δ   D_11   D_12 ] [ w   ]                (5)
[ y    ]   [ C_2   D_2Δ   D_21   0    ] [ u   ]

w_Δ = Δ(t) z_Δ,

where Δ(t) is a time-varying matrix-valued parameter ranging over a polytopic set P, i.e.,

Δ ∈ P := co{Δ_{v_1}, ..., Δ_{v_L}},    0 ∈ P.                  (6)

Hence the plant with inputs w and u and outputs z and y has state-space data entries which are fractional functions of the time-varying parameter (t). Hereafter, we are using the following notation u for the control signal, w for exogenous inputs, z for controlled or performance variables, y for the measurement signal. For the uncertain plant (5)-(6) the robust control problem consists in seeking a Linear Time-Invariant (LTI) controller

such that

x_ K = AK xK + BK y ; u = C K xK + D K y ;

(7)

 the closed-loop system (5)-(6) and (7) is internally

stable,  the L2 -induced gain of the operator connecting w to z is bounded by , for all parameter trajectories (t) dened by (6).

It is now well known that such problems can be handled via a suitable generalization of the Bounded Real Lemma involving an adequately defined class of scalings. The Bounded Real Lemma conditions are then simplified by means of the Projection Lemma [13, 25].

Proposition 3.1 Consider the LFT plant governed by (5), where Δ ranges over the polytopic set P defined in (6). Let K_X and K_Y denote any bases of the null spaces of [C_2, D_2Δ, D_21] and [B_2^T, D_Δ2^T, D_12^T], respectively. Then there exists a controller such that the closed-loop system is well posed and the Bounded Real Lemma conditions hold for all admissible Δ ∈ P and for some L2-gain performance γ if there exist a pair of symmetric matrices (X, Y) and scalings

[ Q    S ]        [ Q̃    S̃ ]
[ S^T  R ],       [ S̃^T  R̃ ],

for which LMI constraints involving all variables and nonlinear constraints involving the scalings only hold simultaneously. Specifically, the following must be satisfied:

[*]^T diag( [0 I; I 0], [Q S; S^T R], [-γI 0; 0 γ^{-1}I] )
      [ I, 0, 0;  XA, XB_Δ, XB_1;  0, I, 0;  C_Δ, D_ΔΔ, D_Δ1;  0, 0, I;  C_1, D_1Δ, D_11 ] K_X < 0        (8)

[*]^T diag( [0 I; I 0], [Q̃ S̃; S̃^T R̃], [-γ^{-1}I 0; 0 γI] )
      [ -YA^T, -YC_Δ^T, -YC_1^T;  I, 0, 0;  -B_Δ^T, -D_ΔΔ^T, -D_1Δ^T;  0, I, 0;  -B_1^T, -D_Δ1^T, -D_11^T;  0, 0, I ] K_Y > 0        (9)

Q < 0,   R > 0,   Q̃ < 0,   R̃ > 0,   [ X  I ; I  Y ] > 0       (10)

[ Δ_{v_i} ; I ]^T [ Q  S ; S^T  R ] [ Δ_{v_i} ; I ] > 0,   for i = 1, ..., L        (11)

[ Q  S ; S^T  R ]^{-1} = [ Q̃  S̃ ; S̃^T  R̃ ]                    (12)

Proof: This result is an excerpt from [25]. The reader is also referred to [23, 22, 1, 2, 16, 26, 17] for related texts.

It is important to remark that constraints (8)-(11) are (convex) LMI constraints; hence the nonlinear inversion constraint (12) is the sole source of the hardness of this problem.
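For intuition, the vertex condition (11) can be checked numerically once a full-block scaling is at hand. The following toy NumPy snippet is our own illustration, with hypothetical scalar data (it is not part of the paper's synthesis machinery):

```python
import numpy as np

# hypothetical 1x1 scaling blocks, with Q < 0 and R > 0 as required by (10)
Q = np.array([[-1.0]])
S = np.array([[0.0]])
R = np.array([[0.5]])
Phi = np.block([[Q, S], [S.T, R]])

# vertices of the polytope P = co{-0.5, +0.5} for a scalar Delta(t)
vertices = [np.array([[-0.5]]), np.array([[0.5]])]

for Dv in vertices:
    M = np.vstack([Dv, np.eye(1)])            # [Delta_{v_i}; I]
    W = M.T @ Phi @ M                         # quadratic form of condition (11)
    assert np.all(np.linalg.eigvalsh(W) > 0)  # must be positive definite
print("vertex condition (11) holds at all vertices")
```

Because Q < 0 makes the quadratic form concave in Δ, positivity at the vertices extends to the whole polytope, which is why (10)-(11) only involve the vertices.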

4 Algorithms

In this section we discuss algorithms for capturing local solutions, in a sense to be defined later, to the problem described in Proposition 3.1. Such algorithms rely on the use of merit functions which vanish when the nonlinear constraint (12) holds exactly. Such an approach is fairly common in general nonlinear optimization, either to generate algorithm models of the problem under consideration or to stabilize algorithms which lack global convergence properties. See [12] and the textbook [24] for comprehensive discussions. In this context, the selection of the merit function is of crucial importance since it determines the properties that the algorithm will possess and strongly impacts its efficiency. In order to simplify the discussions hereafter, we shall make use of the following notations:

- Φ := [ Q  S ; S^T  R ],   Φ̃ := [ Q̃  S̃ ; S̃^T  R̃ ];

- the convex set determined by the LMIs (8)-(11) will be referred to as X_LMI, and x will designate the vector of decision variables involved in the cost functions. When the constraint set X_LMI is characterized by implicit variables y in the LMI L(x, y) < 0, we shall use the notation x ∈ X_LMI to indicate that there exists some y such that the latter inequality holds.

4.1 A concave merit function

In reference [4], we have introduced a concave merit function to solve the problem stated in Proposition 3.1. The associated program is as follows:

minimize    f1(x) := Tr( Z1 - Z3 Z2^{-1} Z3^T )                (13)
subject to  x ∈ X_LMI                                          (14)

            [ Φ  I ; I  Φ̃ ] - [ Z1  Z3 ; Z3^T  Z2 ] ⪰ 0,       (15)

where (15) is a true LMI constraint by a Schur complement argument. It has been shown that solutions to Proposition 3.1 correspond to zero optimal solutions of (13)-(15), and conversely. Hence, we end up with a concave programming problem for which feasible direction descent algorithms are naturally advisable. One can use a conditional gradient or Frank & Wolfe algorithm, as done in [5]. If we use the notation f1(x) for the trace function introduced earlier, such algorithms take the simple form:

1. x^0 ∈ X_LMI
2. x^{k+1} = argmin { ∇f1(x^k)^T x : x ∈ X_LMI }
3. stop if ∇f1(x^k)^T x^{k+1} = ∇f1(x^k)^T x^k (stationary point), else go to 2.

Certainly, the main advantage of these algorithms is their simplicity. They do not rely on any particular property of the constraint set other than convexity. Indeed, step 2 can be solved by efficient SDP methods. Concavity of the cost is exploited to completely bypass the line search phase. The reader is referred to [4, 3] for gradients and implementation details of this algorithm. Factors that can play adversely are the following:

- similarly to any first-order descent technique, the conditional gradient can be very slow in the neighborhood of a local solution;

- as stated in the introduction, the conditional gradient algorithm is prone to zigzagging for general convex constraint sets and may fail to achieve a linear rate of convergence, a property which is highly desirable for practical purposes.

As a matter of fact, we have observed that the algorithm often plateaus in the final steps, and thus solutions to Proposition 3.1 can possibly be missed. Interestingly, however, the algorithm reaches its plateau value after a few iterations and hence is fast even from initial points remote from local solutions. This can be ascribed to the fact that, by virtue of the concavity of f1, the decrease in the cost is always better than predicted by its linear approximation.
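The conditional gradient loop above can be sketched on a toy problem: a concave quadratic minimized over a box, which stands in for X_LMI, with the linear subproblem of step 2 solved in closed form (for a box the minimizer of a linear function is a vertex) rather than by an SDP solver. This is our own illustration, not the paper's implementation:

```python
import numpy as np

# toy concave cost f1(x) = -||x||^2 over the box [-1, 2]^3 (stand-in for X_LMI)
lo, hi = -1.0, 2.0

def f1(x):
    return -float(x @ x)

def grad_f1(x):
    return -2.0 * x

def linear_oracle(g):
    """Step 2: minimize g^T x over the box; the minimizer is a vertex."""
    return np.where(g >= 0, lo, hi)

x = np.array([0.5, -0.2, 1.0])        # step 1: a feasible starting point
for k in range(50):
    g = grad_f1(x)
    x_new = linear_oracle(g)          # step 2: linear subproblem
    if np.isclose(g @ x_new, g @ x):  # step 3: stationarity test
        break
    x = x_new                         # concavity allows the full step, no line search
```

From this starting point the loop stops after two oracle calls at the vertex (2, -1, 2) with f1 = -9, a local (not global) minimizer: the global one over this box is (2, 2, 2). This mirrors the remark that the located solutions are local and only "expectedly" global.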

4.2 A Newtonian feasible direction algorithm

In this section, we introduce a Newtonian algorithm which overcomes most of the aforementioned difficulties. It exploits a second-order Taylor expansion of the natural Frobenius norm merit function

f2(x) := || ΦΦ̃ - I ||_F^2 := Tr( (ΦΦ̃ - I)^T (ΦΦ̃ - I) ).       (16)

As before, this merit function achieves zero value when the nonlinear constraint (12) holds exactly. In contrast to the cost function (13), this merit function can satisfy second-order (local) optimality conditions, which are of primary importance for achieving fast rates of convergence. This is discussed in [10]. The derivation of a second-order algorithm requires the computation of the gradient and Hessian of the cost function (16). All we need is collected in the next proposition.

Proposition 4.1 The second-order Taylor expansion of the function (16) about the point (Φ^k, Φ̃^k) is given as

f(x^k + dx) = f(x^k) + g^{k T} dx + dx^T H^k dx + o(||dx||_F^2),        (17)

where

x^k := [ vec Φ^k ; vec Φ̃^k ],    dx := [ vec dΦ ; vec dΦ̃ ],

g^k := [ T vec( (Φ̃^k)^2 Φ^k + Φ^k (Φ̃^k)^2 - 2 Φ̃^k ) ;
         T vec( (Φ^k)^2 Φ̃^k + Φ̃^k (Φ^k)^2 - 2 Φ^k ) ],                 (18)

and

H^k := diag(T, T) [ (Φ̃^k)^2 ⊛ I ,  (Φ^k Φ̃^k - I) ⊛ I + Φ^k ⊛ Φ̃^k ;
                    [*]          ,  (Φ^k)^2 ⊛ I ] diag(T, T),           (19)

where [*] completes the matrix symmetrically.

Proof: See the full version of the paper.

These results are then used in the Newton method below.

1. x^0 ∈ X_LMI
2. x̃^k = argmin_{x ∈ X_LMI} { g^{k T} (x - x^k) + (x - x^k)^T H^k (x - x^k) }
3. perform a line search to minimize the cost function f2(x) on the feasible segment

   x^{k+1} = x^k + α (x̃^k - x^k),   α ∈ [0, 1],

   and update accordingly the additional (linearly constrained) variables y
4. stop if a stationary point is reached, i.e., ∇f2(x^{k+1})^T (x - x^{k+1}) ≥ 0 for all x ∈ X_LMI, else go to 2.

In practical implementations, instead of checking stationarity of the current iterate, which requires solving an LMI problem, we simply stop the algorithm when the merit function f2 does not show sufficient decrease over a certain number of iterations. A central difficulty with the quadratic subproblem in step 2 arises when the Hessian matrix H^k is not positive definite (or semidefinite). In such a case, the subproblem is at least as difficult to solve as our original problem in Proposition 3.1. Hence, some modifications are required to ensure tractability of the method. Another related issue concerns the global convergence of the method, that is, convergence to a stationary point from any feasible initial point. The global convergence problem has been examined in a number of standard textbooks; see [24, 6] for a sample. The following result provides a very simple characterization.

Assume that the quadratic subproblem in step 2 is replaced with

x̃^k = argmin_{x ∈ X_LMI} { g^{k T} (x - x^k) + (x - x^k)^T Ω^k (x - x^k) },        (20)

where Ω^k is any positive definite matrix such that there exist positive scalars c1 and c2 with

c1 ||x||^2 ≤ x^T Ω^k x ≤ c2 ||x||^2,   for all x and k = 0, 1, ...                 (21)

Then every limit point of the algorithm described above is guaranteed to be stationary. In most cases it is a local minimum, thanks to the line search procedure. This good behavior is partly due to the uniqueness of the solution to (20); see [6] for a simple proof of this fact. Moreover, the method becomes tractable since the quadratic subproblem in step 2 is convex and can be formulated as the SDP problem

minimize    t
subject to  [ t - g^{k T} (x - x^k) ,  (x - x^k)^T (Ω^k)^{1/2} ;
              (Ω^k)^{1/2} (x - x^k) ,  I ]  ⪰ 0,
            x ∈ X_LMI,

where (Ω^k)^{1/2} denotes the symmetric square root of Ω^k.
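Separately, the matrix-calculus identity behind the gradient g^k in (18) can be sanity-checked by finite differences. The snippet below is our own check that, for symmetric Φ and Φ̃, the gradient of ||ΦΦ̃ - I||_F^2 with respect to Φ along symmetric directions is Φ Φ̃^2 + Φ̃^2 Φ - 2 Φ̃:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)); Phi  = A + A.T   # symmetric Phi
B = rng.standard_normal((n, n)); Phit = B + B.T   # symmetric Phi-tilde
I = np.eye(n)

def f2(P, Pt):
    E = P @ Pt - I
    return float(np.trace(E.T @ E))

# claimed gradient of f2 with respect to the symmetric matrix Phi
G = Phi @ Phit @ Phit + Phit @ Phit @ Phi - 2.0 * Phit

# directional finite-difference check along random symmetric directions:
# f2(Phi + h D) - f2(Phi) should equal h * Tr(D G) to first order
h = 1e-6
for _ in range(5):
    C = rng.standard_normal((n, n)); D = C + C.T
    fd = (f2(Phi + h * D, Phit) - f2(Phi - h * D, Phit)) / (2 * h)
    assert np.isclose(fd, np.trace(D @ G), rtol=1e-5)
print("finite differences confirm the gradient formula")
```

The same derivation with the roles of Φ and Φ̃ exchanged yields the second block of g^k; the T factors in (18) only account for the weighted vec coordinates.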

In order to guarantee positive definiteness of Ω^k while taking advantage of the second-order information in the Hessian matrix H^k, one can use a modification suggested by Levenberg and Marquardt [20]. The matrix Ω^k is taken as

Ω^k = (V^k)^T (Σ^k + Γ^k) V^k,    where    H^k = (V^k)^T Σ^k V^k

is an eigenvalue factorization of H^k, and Γ^k, a diagonal matrix, is adjusted dynamically along the iterations to satisfy the global convergence conditions (21). It is also possible to use a modified Cholesky factorization, which is less costly [11]. As a matter of fact, it has been found more efficient to replace the matrix H^k in (19) with

diag(T, T) [ (Φ̃^k)^2 ⊛ I ,  Φ̃^k ⊛ Φ^k ;  Φ^k ⊛ Φ̃^k ,  (Φ^k)^2 ⊛ I ] diag(T, T),        (22)

thus neglecting the term (Φ̃^k Φ^k - I) ⊛ I, and to perform a Levenberg-Marquardt modification on the latter. It is not difficult to see that doing so is equivalent to a Gauss-Newton method applied to the constrained least squares problem

min { Tr( (ΦΦ̃ - I)^T (ΦΦ̃ - I) ) :  x ∈ X_LMI }.

This form provides a natural convex approximation of the pure Newton step 2, since the matrix in (22) is positive semidefinite, and it gets closer to a pure Newton step when the neglected term (Φ̃^k Φ^k - I) ⊛ I comes closer to zero, that is, when the nonlinear constraint (12) nearly holds. Note that a Levenberg-Marquardt modification is still needed in such a case to ensure global convergence to a stationary point. As shown in the application section, the modified Gauss-Newton algorithm also has satisfactory local convergence properties. It is proved in [21] that, when appropriately implemented, such algorithms have a linear local rate of convergence. Experimentally, we find that it is possible to improve their global behavior for remote starting points by using the concave merit function described in Section 4.1. Indeed, the conditional gradient algorithm applied to f1 is poor and possibly inexact locally, but reaches interesting plateau values very quickly after a few iterations. Hence, the overall algorithm that has been implemented takes advantage of the conditional gradient algorithm with merit function f1 for remote starting points and is switched to the modified Gauss-Newton algorithm with merit function f2 when a plateau value is reached. Also, the heuristic stopping rules introduced in [3] can be used to terminate the algorithm before a zero value of the merit functions is attained. This is allowed by the strict nature of the LMI constraints (8)-(11). See [3] and its journal version for a detailed discussion. The final implemented version of the algorithm is as follows:

1. x^0 ∈ X_LMI
2. x^{k+1} = argmin_{x ∈ X_LMI} ∇f1(x^k)^T x
3. go to 4 if plateauing, or terminate if the stopping test [3] is successful, else go to 2
4. x̃^k = argmin_{x ∈ X_LMI} { g^{k T} (x - x^k) + (x - x^k)^T Ω^k (x - x^k) }, where a Levenberg-Marquardt modification of (22) is used
5. perform a line search to minimize the cost function f2(x) on the feasible segment

   x^{k+1} = x^k + α (x̃^k - x^k),   α ∈ [0, 1],

   and update accordingly the additional (linearly constrained) variables y
6. stop if the stopping test [3] is successful or the decrease in the cost function becomes insufficient over a number of past iterations, else go to 4.

Typical behaviors of the algorithm are illustrated in the application section below.
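The Levenberg-Marquardt modification used in step 4 can be sketched in a few lines: eigendecompose the (possibly indefinite) Hessian and add a diagonal shift so that the modified matrix Ω^k satisfies the uniform bounds (21). The snippet below is our own minimal sketch with an arbitrary fixed floor c1; in the paper the shift is adjusted dynamically along the iterations:

```python
import numpy as np

def lm_modify(H, c1=1e-3):
    """Levenberg-Marquardt-type modification: factor H = V diag(sig) V^T,
    then add a diagonal shift gam so that every eigenvalue of the result
    is at least c1, giving a positive definite Omega (cf. condition (21))."""
    sig, V = np.linalg.eigh(H)            # eigenvalues ascending, V orthogonal
    gam = np.maximum(c1 - sig, 0.0)       # shift only the eigenvalues below c1
    return (V * (sig + gam)) @ V.T        # Omega = V diag(sig + gam) V^T

# indefinite example: one negative eigenvalue
H = np.array([[2.0, 0.0],
              [0.0, -1.0]])
Omega = lm_modify(H)
```

Here Omega keeps the well-conditioned eigenvalue 2 untouched and lifts the negative eigenvalue -1 up to the floor c1, so the quadratic subproblem (20) stays convex while retaining as much of the second-order information in H as possible.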

5 Applications to three test problems

In this section, the previously introduced techniques are applied to three test problems.

- Problem #1 is extracted from [19] and is a simplified helicopter model.
- Problem #2 is a rotating inverted pendulum; additional details can be found in [18].
- Problem #3 is a mass-spring arrangement with uncertainties on the masses.

State-space data for each problem are given in the full version of the paper. We also note that lower bounds for these problems are readily computed by ignoring the nonlinear constraint (12). This amounts to performing an LPV synthesis for each problem; see Table 1.

For these problems, the behavior of the conditional gradient algorithm used alone is displayed in Table 2. For the first problem, with the indicated value of γ, the conditional gradient algorithm locates a solution in one iteration. This situation is however uncommon, since the algorithm fails for problems #2 and #3. The merit function f1 tends to increase after 17 and 19 iterations for problems #2 and #3, respectively. It was found impossible to derive a solution even with the specific stopping tests of [3] in place. Correspondingly, the plateauing behavior of the conditional gradient algorithm is shown in Figures 1 and 2. We then reconsider problems #2 and #3 with the hybrid algorithm, which combines a conditional gradient phase with the modified Gauss-Newton phase as described in Section 4. The target values of γ for each problem are given in Table 2. The iteration counts for each problem are given in Table 3. Problem #2 is now solved in two iterations, one of each type, while problem #3 requires one conditional gradient step and 15 modified Gauss-Newton steps. The behavior of the Gauss-Newton phase is depicted in Figure 3. It is seen that the rate of convergence is ultimately linear, a desirable property for the practicality of the approach. For the first problem, the achieved value of γ is nearly optimal by virtue of the lower bound in Table 1. For problems #2 and #3 the question remains open, since our approach is local in nature. As discussed in [3], these local algorithms lead in most applications to results which are close to global optimality, so that they can often be accepted as they are or possibly refined with the global techniques presented in [3]. In the latter case, one must be ready to tolerate longer execution times.

6 Extensions and conclusions

The techniques in this paper are easily generalized to linear objective minimization subject to a mixture of LMI and nonlinear constraints. This can be done using an augmented Lagrangian technique. For performance problems, the merit function f2(x) in (16) should be replaced with the augmented Lagrangian

L_c(x; Λ) = γ + Tr( Λ^T (ΦΦ̃ - I) ) + (c/2) Tr( (ΦΦ̃ - I)^T (ΦΦ̃ - I) ),

where the penalty parameter c and the Lagrange multipliers Λ are adjusted according to specific rules. The inner steps minimize the above function for fixed c and Λ using a Newton or trust-region strategy [10].

References

[1] P. Apkarian and P. Gahinet, A Convex Characterization of Gain-Scheduled H∞ Controllers, IEEE Trans. Aut. Control, 40 (1995), pp. 853-864. See also p. 1681.
[2] P. Apkarian, P. Gahinet, and G. Becker, Self-Scheduled H∞ Control of Linear Parameter-Varying Systems: A Design Example, Automatica, 31 (1995), pp. 1251-1261.
[3] P. Apkarian and H. D. Tuan, Concave Programming in Control Theory, 1998. To appear in Jour. of Global Optimization.
[4] P. Apkarian and H. D. Tuan, Robust Control via Concave Minimization - Local and Global Algorithms, in Proc. IEEE Conf. on Decision and Control, Tampa, Florida, Dec. 1998.
[5] K. P. Bennett and O. L. Mangasarian, Bilinear Separation of Two Sets in n-Space, Computational Optimization and Applications, 2 (1993), pp. 207-227.
[6] D. P. Bertsekas, Nonlinear Programming, Athena Scientific, Belmont, Mass., USA, 1995.
[7] S. Boyd and L. El Ghaoui, Method of Centers for Minimizing Generalized Eigenvalues, Linear Algebra and Appl., 188 (1992), pp. 63-111.
[8] J. C. Dunn, Convergence Rates for Conditional Gradient Sequences Generated by Implicit Step Length Rules, SIAM J. on Control and Optimization, 18 (1979), pp. 473-487.

Figure 1: Conditional gradient algorithm - problem #2, merit function f1.

[9] J. C. Dunn, Rates of Convergence for Conditional Gradient Algorithms Near Singular and Nonsingular Extremals, SIAM J. on Control and Optimization, 17 (1979), pp. 187-211.
[10] B. Fares, P. Apkarian, and D. Noll, An Augmented Lagrangian Method for a Class of LMI-Constrained Problems in Robust Control Theory, 1999. Submitted for publication.
[11] R. Fletcher, Practical Methods of Optimization, John Wiley & Sons, 1987.
[12] M. Fukushima, Merit Functions for Variational Inequality and Complementarity Problems, Plenum Press, New York, 1996.
[13] P. Gahinet and P. Apkarian, A Linear Matrix Inequality Approach to H∞ Control, Int. J. Robust and Nonlinear Control, 4 (1994), pp. 421-448.
[14] P. Gahinet, A. Nemirovski, A. J. Laub, and M. Chilali, LMI Control Toolbox, The MathWorks Inc., 1995.
[15] L. El Ghaoui, F. Oustry, and M. AitRami, An Algorithm for Static Output-Feedback and Related Problems, IEEE Trans. Aut. Control, 42 (1997), pp. 1171-1176.
[16] A. Helmersson, Methods for Robust Gain-Scheduling, Ph.D. Thesis, Linkoping University, Sweden, 1995.
[17] T. Iwasaki and S. Hara, Well-Posedness Theorem: A Classification of LMI/BMI-Reducible Control Problems, in Proc. Int. Symp. Intelligent Robotic Syst., Bangalore, Dec. 1996, pp. 145-157.
[18] H. Kajiwara, P. Apkarian, and P. Gahinet, Wide-Range Stabilization of an Arm-Driven Inverted Pendulum Using Linear Parameter-Varying Techniques, in AIAA Guid., Nav. and Control Conf., 1998. To appear.
[19] I. E. Kose, F. Jabbari, and W. E. Schmitendorf, A Direct Characterization of L2-Gain Controllers for LPV Systems, IEEE Trans. Aut. Control, 43 (1998), pp. 1302-1307.
[20] K. Levenberg, A Method for the Solution of Certain Nonlinear Problems in Least Squares, Quart. Applied Math., (1944), pp. 164-168.
[21] D. G. Luenberger, Linear and Nonlinear Programming, Addison-Wesley, Reading, Mass., 2nd ed., 1984.
[22] A. Packard, Gain Scheduling via Linear Fractional Transformations, Syst. Control Letters, 22 (1994), pp. 79-92.
[23] A. Packard and G. Becker, Quadratic Stabilization of Parametrically-Dependent Linear Systems Using Parametrically-Dependent Linear, Dynamic Feedback, Advances in Robust and Nonlinear Control Systems, DSC-Vol. 43 (1992), pp. 29-36.
[24] E. Polak, Optimization: Algorithms and Consistent Approximations, Applied Mathematical Sciences, 1997.
[25] C. W. Scherer, A Full Block S-Procedure with Applications, in Proc. IEEE Conf. on Decision and Control, San Diego, USA, 1997, pp. 2602-2607.
[26] G. Scorletti and L. El Ghaoui, Improved Linear Matrix Inequality Conditions for Gain-Scheduling, in Proc. IEEE Conf. on Decision and Control, New Orleans, LA, Dec. 1995, pp. 3626-3631.
[27] L. Vandenberghe and S. Boyd, Semidefinite Programming, SIAM Review, 38 (1996), pp. 49-95.

Figure 2: Conditional gradient algorithm - problem #3, merit function f1.

               problem #1   problem #2   problem #3
lower bound       0.11         0.18         1.34

Table 1: Lower bounds γ based on LPV syntheses

               problem #1   problem #2   problem #3
target γ          0.11         0.27         2.50
iterations         1         fails (17)   fails (19)

Table 2: Results for the conditional gradient algorithm - iteration counts to succeed or fail

               Cond. gradient   Gauss-Newton   total
problem #1           1                0           1
problem #2           1                1           2
problem #3           1               15          16

Table 3: Results for the hybrid conditional gradient with Gauss-Newton - iteration counts to solution

Figure 3: Gauss-Newton iterations - problem #3, merit function f2.
