On optimal quadratic Lyapunov functions for polynomial systems

G. Chesi¹, A. Tesi², A. Vicino¹

¹ Dipartimento di Ingegneria dell'Informazione, Università di Siena, Via Roma 56, 53100 Siena, Italy
² Dipartimento di Sistemi e Informatica, Università di Firenze, Via di S. Marta 3, 50139 Firenze, Italy

Abstract

The problem of estimating the Domain of Attraction (DA) of equilibria of polynomial systems is considered. Specifically, the computation of the quadratic Lyapunov function which maximizes the volume of the estimate is addressed. In order to solve this double non-convex optimization problem, a semi-convex approach based on Linear Matrix Inequalities (LMIs) is proposed. Moreover, for the case of odd polynomial systems, a relaxed criterion for obtaining an effective starting candidate of the optimal quadratic Lyapunov function is presented.

1 Introduction

In control systems engineering it is very important to know the domain of attraction (DA) of an equilibrium point, that is, the set of initial states from which the system converges to the equilibrium point itself [9]. Indeed, this problem arises in both systems analysis and synthesis, in order to guarantee stable behaviours in a certain region of the state space. Unfortunately, it is well known that the DA is a very complicated set which, in most cases, does not admit an exact analytic representation [7]. On the other hand, gridding-based techniques for approximating the set are almost always intractable from the computational viewpoint. For this reason, approximating the DA via an estimate of simpler shape has long been a fundamental issue (see [7]). The estimate shape is described by a Lyapunov function, generally quadratic. For a given Lyapunov function, the computation of the optimal estimate of the DA (that is, the largest estimate of the selected shape) amounts to solving a non-convex distance problem.

Within this context, a problem of primary importance is the selection of the quadratic Lyapunov function. In fact, the volume of the optimal estimate strongly depends on the Lyapunov function chosen for approximating the DA. Obviously, it would be useful to single out the function that maximizes this volume, that is, the Optimal Quadratic Lyapunov Function (OQLF). Unfortunately, the computation of the OQLF amounts to solving a double non-convex optimization problem [6, 10].

In this paper, a new technique for computing the OQLF for polynomial systems is presented. Specifically, we propose a Linear Matrix Inequality (LMI) approach based on convexification techniques recently developed for dealing with non-convex distance problems [5, 2, 3]. It is shown how the optimal estimate of the DA for a fixed Lyapunov function can be computed, avoiding local minima, via a one-parameter sequence of LMIs requiring a low computational burden. This allows us to reformulate the computation of the OQLF as a semi-convex optimization problem. Moreover, in order to obtain a good starting point for the non-convex step, a relaxed criterion is proposed for odd polynomial systems, based on maximizing the volume of the region where the time derivative of the Lyapunov function is negative. It is shown how its solution can be computed via a one-parameter sequence of LMIs, that is, with about the same computational burden required by the computation of the optimal estimate of the DA for a fixed Lyapunov function. Simulation results show that this relaxed criterion can provide quite satisfactory candidates for the OQLF.

The notation is as follows. 0_n: origin of R^n; R^n_0: R^n \ {0_n}; I_n: n × n identity matrix; A′: transpose of the matrix A; A > 0 (A ≥ 0): symmetric positive definite (semidefinite) matrix A; s.t.: subject to.

2 Problem formulation

Without loss of generality, let us consider the polynomial system

$$\dot{x} = Ax + \tilde{f}(x), \qquad \tilde{f}(x) = \sum_{i=2}^{m_f} f_i(x), \tag{2.1}$$

where x ∈ R^n, f_i(x) is a vector of homogeneous forms of degree i, and A is assumed to be a Hurwitz matrix, which implies that the origin is a locally asymptotically stable equilibrium point. Let us consider the quadratic Lyapunov function V(P; x) = x′Px, where P > 0 is such that the time derivative

$$\dot{V}(P;x) = 2x'P\left(Ax + \tilde{f}(x)\right) \tag{2.2}$$

is locally negative definite. We refer to such a matrix P as a feasible P and call 𝒫 the set of all feasible P. In particular, 𝒫 can be characterized as

$$\mathcal{P} = \left\{ P = P' \in \mathbb{R}^{n \times n} : PA + A'P = -Q,\ Q > 0 \right\}. \tag{2.3}$$

Let us define the ellipsoidal set induced by V(P; x),

$$\mathcal{V}(P;c) = \{ x \in \mathbb{R}^n : x'Px \le c \}, \tag{2.4}$$

and the negative time derivative region,

$$\mathcal{D}(P) = \left\{ x \in \mathbb{R}^n : \dot{V}(P;x) < 0 \right\} \cup \{0_n\}. \tag{2.5}$$

Then, 𝒱(P; c) is an estimate of the domain of attraction of the origin if 𝒱(P; c) ⊆ 𝒟(P). Moreover, the optimal estimate S(P) for the selected Lyapunov function is given by

$$S(P) = \mathcal{V}(P; \gamma(P)), \qquad \gamma(P) = \sup \{ c \in \mathbb{R} : \mathcal{V}(P;c) \subseteq \mathcal{D}(P) \}. \tag{2.6}$$

Let us observe that the computation of γ(P) requires the solution of a non-convex distance problem. In fact,

$$\gamma(P) = \inf_{x \in \mathbb{R}^n_0} x'Px \quad \text{s.t. } \dot{V}(P;x) = 0. \tag{2.7}$$

Finally, let us define the OQLF as the quadratic Lyapunov function V*(P*; x) = x′P*x that maximizes the volume of the estimate of the DA. Therefore,

$$P^* = \arg\max_{P \in \mathcal{P}} \delta(P), \qquad \delta(P) = \sqrt{\frac{\gamma^n(P)}{\det(P)}}, \tag{2.8}$$

where δ(P) is the volume of S(P) up to a scale factor depending on the state dimension n. It turns out that the computation of the OQLF amounts to solving a double non-convex optimization problem. In fact, the volume function δ(P) can present local maxima in addition to the global one δ(P*). Moreover, each evaluation of δ(P) requires the computation of γ(P), that is, the solution of the non-convex distance problem (2.7).
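To make the non-convexity of (2.7) concrete, the following sketch evaluates γ(P) for system S1 of Section 5 by attacking (2.7) directly with a local constrained solver from several random starting points. This is not the method proposed in the paper; the choice Q = I_2, the solver (SLSQP) and the number of restarts are illustrative assumptions of the sketch.

```python
# Illustrative sketch only: local multistart attack on the non-convex
# distance problem (2.7) for system S1, with the assumed choice Q = I.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import minimize

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
P = solve_continuous_lyapunov(A.T, -np.eye(2))   # P A + A'P = -I, so P is feasible

def xdot(x):                                     # full vector field of S1
    x1, x2 = x
    return np.array([x2, -2*x1 - 3*x2 + x1**2 * x2])

V    = lambda x: x @ P @ x                       # V(P;x)
Vdot = lambda x: 2 * x @ P @ xdot(x)             # time derivative (2.2)

best = np.inf
rng = np.random.default_rng(0)
for _ in range(50):                              # multistart local search
    x0 = rng.normal(scale=2.0, size=2)
    res = minimize(V, x0, method='SLSQP',
                   constraints=[{'type': 'eq', 'fun': Vdot}])
    if res.success and np.linalg.norm(res.x) > 1e-3:   # discard the origin
        best = min(best, res.fun)

print("multistart estimate of gamma(P):", best)
```

Because each run of such a solver can stop at a local minimizer, the returned value is, in general, only an upper bound of γ(P); the LMI machinery of Section 3 provides instead a guaranteed lower bound together with a tightness test.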

3 Semi-convex approach for computing the OQLF

In this section we show how the optimal estimate S(P) in (2.6) can be computed avoiding local minima, and, hence, how the OQLF can be found via a semi-convex approach. In particular, we exploit the convexification techniques developed in [5, 2, 3], which allow us to obtain a lower bound of γ(P) via a one-parameter sequence of LMIs (see also [4]). A simple test procedure is available for assessing the tightness of such a lower bound. Moreover, in some cases tightness can be established a priori.

3.1 Optimal estimate of the DA

Let us first observe that problem (2.7) is equivalent to the canonical distance problem

$$\gamma(P) = \inf_{x \in \mathbb{R}^n_0} x'Px \quad \text{s.t. } w(P;x) = 0, \tag{3.9}$$

where w(P;x) is a locally positive definite polynomial with only terms of even degree, that is, w(P;x) = w_0(P;x) + w_2(P;x) + · · · + w_{2m}(P;x) for suitable homogeneous forms w_{2i}(P;x) of degree 2i. In fact,

• if (2.1) is an odd system, that is, f̃(x) contains only terms of odd degree, then the constraint function V̇(P;x) contains only terms of even degree, and hence w(P;x) = −V̇(P;x) and m = (m_f + 1)/2;

• otherwise, we can define the new constraint function as w(P;x) = V̇(P;x) V̇(P;−x). It turns out that such a polynomial has only terms of even degree; hence, m = m_f + 1.

Our strategy consists of evaluating the constraint function w(P;x) on the sets

$$\mathcal{B}(P;c) = \{ x \in \mathbb{R}^n : x'Px = c \}. \tag{3.10}$$

In fact, it turns out that

$$\gamma(P) = \sup \{ \tilde{c} > 0 : w(P;x) > 0 \ \forall x \in \mathcal{B}(P;c), \ \forall c \in (0, \tilde{c}] \}. \tag{3.11}$$

Let us define the homogeneous form of degree 2m

$$h(P;c;x) = \sum_{i=0}^{m} w_{2i}(P;x) \left( \frac{x'Px}{c} \right)^{m-i}. \tag{3.12}$$

Then, for any c ∈ (0, +∞) we have that

$$w(P;x) > 0 \ \forall x \in \mathcal{B}(P;c) \iff h(P;c;x) > 0 \ \forall x \in \mathbb{R}^n_0. \tag{3.13}$$

From (3.11) and (3.13) it follows that γ(P) can be computed via a sequence of positivity tests on homogeneous forms, that is,

$$\gamma(P) = \sup \{ \tilde{c} > 0 : h(P;c;x) > 0 \ \forall x \in \mathbb{R}^n_0, \ \forall c \in (0, \tilde{c}] \}. \tag{3.14}$$
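Before introducing the machinery needed for these positivity tests, here is a quick sympy sanity check (not from the paper) of the homogenization step: with the assumed choice Q = I_2 for system S1 of Section 5, the matrix P = F(I_2) can be written down exactly, and on the set B(P;c) the constructed form h coincides with w, which is the mechanism behind (3.13).

```python
# Sanity check (not from the paper) of the homogenization step (3.12) for
# system S1 with the assumed choice Q = I, for which P is known exactly.
import sympy as sp

x1, x2, c = sp.symbols('x1 x2 c', positive=True)
x = sp.Matrix([x1, x2])
A = sp.Matrix([[0, 1], [-2, -3]])
P = sp.Matrix([[sp.Rational(5, 4), sp.Rational(1, 4)],
               [sp.Rational(1, 4), sp.Rational(1, 4)]])
assert P * A + A.T * P == -sp.eye(2)                  # P solves P A + A'P = -I

f = sp.Matrix([x2, -2*x1 - 3*x2 + x1**2 * x2])        # vector field of S1
w = sp.expand(-2 * (x.T * P * f)[0])                  # w = -Vdot (odd system)
w2 = sp.Add(*[term for term in w.args
              if sp.Poly(term, x1, x2).total_degree() == 2])
w4 = sp.expand(w - w2)
xPx = sp.expand((x.T * P * x)[0])
h = sp.expand(w2 * xPx / c + w4)                      # (3.12) with m = 2

# on B(P;c) = {x : x'Px = c} the forms h and w coincide, which underlies (3.13)
print(sp.simplify((h - w).subs(c, xPx)))              # prints 0
```

The same construction applies verbatim to the non-odd case once the constraint function w(P;x) = V̇(P;x)V̇(P;−x) is used.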

In order to perform the positivity tests in (3.14), let us introduce the Square Matricial Representation (SMR) of homogeneous forms of even degree (see [5, 2, 3] for details). Let x^{m} ∈ R^d be a base of the homogeneous forms of degree m, where

$$d = \sigma(n,m) = \binom{n+m-1}{n-1}. \tag{3.15}$$

Then, the SMR of the homogeneous form h(P;c;x) is defined as

$$h(P;c;x) = x^{\{m\}'} H(P;c)\, x^{\{m\}} \quad \forall x \in \mathbb{R}^n, \tag{3.16}$$

where H(P;c) ∈ R^{d×d} is a suitable matrix. It is straightforward to verify that any homogeneous form of even degree can be represented via the SMR. Let us observe that, for a fixed base x^{m}, the matrix H(P;c) is not unique. Indeed, all matrices H(P;c) satisfying (3.16) can be parameterized as

$$H(P;c;\alpha) = H(P;c) + L(\alpha), \tag{3.17}$$

where L(α) ∈ R^{d×d} is a linear parameterization of the linear space

$$\mathcal{L} = \left\{ L = L' \in \mathbb{R}^{d \times d} : x^{\{m\}'} L\, x^{\{m\}} = 0 \ \forall x \in \mathbb{R}^n \right\} \tag{3.18}$$

and α ∈ R^{d_L} is a free parameter vector. The dimension d_L of the set 𝓛 is given by

$$d_L = \frac{1}{2} d(d+1) - \sigma(n, 2m). \tag{3.19}$$

Then, the complete SMR of h(P;c;x) is given by

$$h(P;c;x) = x^{\{m\}'} H(P;c;\alpha)\, x^{\{m\}} \quad \forall \alpha \in \mathbb{R}^{d_L},\ \forall x \in \mathbb{R}^n. \tag{3.20}$$
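For concreteness, the following small sketch enumerates one possible base x^{m} and evaluates the dimension counts (3.15) and (3.19); the ordering of the monomials is an arbitrary choice of the sketch, since the paper does not fix a particular base.

```python
# Dimension counts (3.15) and (3.19) and one possible monomial base x^{m}.
from math import comb
from itertools import combinations_with_replacement
import sympy as sp

def sigma(n, m):
    # number of monomials of degree m in n variables, eq. (3.15)
    return comb(n + m - 1, n - 1)

def monomial_base(variables, m):
    # one possible base x^{m}: all monomials of degree m
    return [sp.Mul(*combo) for combo in combinations_with_replacement(variables, m)]

n, m = 2, 2
x1, x2 = sp.symbols('x1 x2')
base = monomial_base((x1, x2), m)
d = sigma(n, m)
d_L = d * (d + 1) // 2 - sigma(n, 2 * m)        # eq. (3.19)

print(base)        # [x1**2, x1*x2, x2**2]
print(d, d_L)      # 3 1  -> H(P;c;alpha) has a single free parameter alpha
```

The value d_L = 1 obtained for n = 2, m = 2 is the single degree of freedom exploited in the sketch given after (3.24) below.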

For any selected base x^{m}, the matrices H(P;c) and L(α) can be easily computed using the algorithms reported in [2]. Let us introduce the quantity

$$c_\eta(P) = \sup \left\{ \tilde{c} : H(P;c;\alpha) > 0 \text{ for some } \alpha \in \mathbb{R}^{d_L},\ \forall c \in (0, \tilde{c}] \right\}. \tag{3.21}$$

It turns out that

$$c_\eta(P) \le \gamma(P). \tag{3.22}$$

The lower bound c_η(P) can be computed via a one-parameter sequence of convex LMI optimizations. In fact,

$$H(P;c;\alpha) > 0 \text{ for some } \alpha \in \mathbb{R}^{d_L} \iff \eta(P;c) > 0, \tag{3.23}$$

where

$$\eta(P;c) = \max_{t \in \mathbb{R},\ \alpha \in \mathbb{R}^{d_L}} t \quad \text{s.t. } H(P;c;\alpha) - t I_d > 0. \tag{3.24}$$
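A concrete sketch of (3.21)–(3.24) for system S1 of Section 5, with Q = I_2 so that P = F(I_2), is given below. The SMR matrix of the quartic h(P;c;x) and the single generator of 𝓛 are worked out by hand for this bivariate case (d = 3, d_L = 1); the use of cvxpy with its default SDP solver, the particular SMR matrix chosen and the grid of c values are assumptions of the sketch, not prescriptions of the paper.

```python
# Sketch of the lower bound (3.21) for system S1 with Q = I (assumptions:
# cvxpy with a default SDP solver; a hand-derived SMR matrix; a finite grid of c).
import numpy as np
import cvxpy as cp
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
P = solve_continuous_lyapunov(A.T, -np.eye(2))      # P A + A'P = -I
p11, p12, p22 = P[0, 0], P[0, 1], P[1, 1]

L1 = np.array([[0.0, 0.0, 1.0],                     # base of L in (3.18), d_L = 1
               [0.0, -2.0, 0.0],
               [1.0, 0.0, 0.0]])

def H0(c):
    # one SMR matrix of h(P;c;x) = (x'x)(x'Px)/c - 2*x1^2*x2*(p12*x1 + p22*x2)
    # in the base x^{2} = [x1^2, x1*x2, x2^2]
    a40 = p11 / c
    a31 = 2 * p12 / c - 2 * p12
    a22 = (p11 + p22) / c - 2 * p22
    a13 = 2 * p12 / c
    a04 = p22 / c
    return np.array([[a40,     a31 / 2, 0.0    ],
                     [a31 / 2, a22,     a13 / 2],
                     [0.0,     a13 / 2, a04    ]])

def eta(c):
    # the LMI optimization (3.24)
    t, alpha = cp.Variable(), cp.Variable()
    prob = cp.Problem(cp.Maximize(t),
                      [H0(c) + alpha * L1 - t * np.eye(3) >> 0])
    prob.solve()
    return prob.value

# one-parameter sweep implementing (3.21) on a grid of c values
c_eta = 0.0
for c in np.linspace(0.05, 10.0, 200):
    val = eta(c)
    if val is None or val <= 0:
        break
    c_eta = c
print("lower bound c_eta(P) ~", c_eta)              # c_eta(P) <= gamma(P)
```

Since n = 2 here, the lower bound is guaranteed to be tight up to the grid resolution, as recalled in the following paragraph.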

We point out that tightness of c_η(P) is strictly related to the property of positive homogeneous forms of being representable as a sum of squares of homogeneous forms [8]. Indeed, it has been proved that c_η(P) is tight if and only if the homogeneous form h(P;c;x) satisfies this property for all c ∈ (0, γ(P)) [2]. For the cases n = 2, ∀m, and n = 3, m = 2, such a property is guaranteed a priori. For the other cases, extensive numerical simulations have shown that the lower bound is almost always tight, except for ad hoc examples [3]. Moreover, tightness of c_η(P) can be checked as follows:

$$c_\eta(P) = \gamma(P) \iff \exists x \in \mathbb{R}^n_0 : x^{\{m\}} \in \ker\left[ H(P; c_\eta; \alpha_\eta) \right], \tag{3.25}$$

where α_η is the optimizing vector of (3.24). Such a test can be performed by solving a simple system of equations of degree equal to the dimension of the found null space minus one. This means that for regular cases, in which the null space has dimension one, the test has an immediate solution. Obviously, if the lower bound is discovered not to be tight, a standard optimization can be performed starting from the found point. This largely helps to find the global minimum while avoiding local ones [3].

Remark 1 If system (2.1) is described by two homogeneous forms, i.e.

$$\dot{x} = Ax + f_{m_f}(x), \tag{3.26}$$

where f_{m_f}(x) is a homogeneous form of degree m_f, then the computation of the lower bound c_η(P) can be simplified, requiring just one LMI convex optimization in the d_η = d_L + 1 variables of (3.24).

3.2 Computation of the OQLF

Exploiting the convexification technique previously described, γ(P), and hence δ(P), can be computed avoiding locally optimal solutions. This leads us to formulate a semi-convex approach for computing the OQLF. In fact, let us introduce a parameterization of the set 𝒫 in (2.3) through the function F(Q) defined by

$$F(Q)A + A'F(Q) = -Q, \tag{3.27}$$

where Q is any symmetric positive definite matrix. Moreover, since δ(P) is not affected by a positive scale factor on P, that is,

$$\delta(P) = \delta(aP) \quad \forall a \in \mathbb{R},\ a > 0, \tag{3.28}$$

and P = F(Q) depends linearly on Q (see (2.3)), the feasible set of matrices Q can be reduced by imposing a scale constraint, for example Q_{1,1} = 1. Therefore, let us define the set

$$\mathcal{Q} = \left\{ Q \in \mathbb{R}^{n \times n} : Q > 0,\ Q_{1,1} = 1 \right\} \tag{3.29}$$

of dimension (n² + n − 2)/2. Then, problem (2.8) can be equivalently rewritten as

$$P^* = F(Q^*), \qquad Q^* = \arg\max_{Q \in \mathcal{Q}} \delta(F(Q)), \qquad \delta(F(Q)) = \sqrt{\frac{\gamma^n(F(Q))}{\det(F(Q))}}, \tag{3.30}$$

where, for each Q ∈ 𝒬, γ(F(Q)) is computed using the technique presented in Section 3.1. We point out that the complete SMR (3.20) can be systematically computed as shown in [2]. Moreover, the function L(α) has to be computed only once, which lightens the computational burden of problem (3.30).
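To illustrate the overall structure of (3.30), the sketch below runs a derivative-free outer search (scipy's Nelder-Mead, the analogue of the FMINSEARCH routine used in Section 5) over the free entries of Q ∈ 𝒬 for system S1. For self-containedness, the inner evaluation of γ(F(Q)) is replaced by a crude ray-gridding estimate; this stand-in, the starting point, the grid sizes and the eigenvalue margin are choices of the sketch, whereas the paper evaluates γ through the LMI lower bound of Section 3.1.

```python
# Sketch of the semi-convex scheme (3.30) for system S1; the inner gamma
# evaluation below is a rough numerical stand-in, not the paper's LMI bound.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.optimize import minimize, brentq

A = np.array([[0.0, 1.0], [-2.0, -3.0]])

def xdot(x):                                   # vector field of S1
    x1, x2 = x
    return np.array([x2, -2*x1 - 3*x2 + x1**2 * x2])

def F(Q):                                      # parameterization (3.27)
    return solve_continuous_lyapunov(A.T, -Q)

def gamma_grid(P, n_dirs=360, r_max=50.0):
    # crude estimate of gamma(P): along each ray find the first sign change
    # of Vdot and take the smallest value of x'Px over all such points
    g = np.inf
    for th in np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False):
        d = np.array([np.cos(th), np.sin(th)])
        vdot = lambda r: 2 * (r * d) @ P @ xdot(r * d)
        if vdot(r_max) > 0:                    # Vdot is negative near the origin
            r0 = brentq(vdot, 1e-6, r_max)
            g = min(g, (r0 * d) @ P @ (r0 * d))
    return g if np.isfinite(g) else 1e6        # no crossing found within r_max

def neg_delta(theta):                          # objective for the outer search
    q12, q22 = theta
    Q = np.array([[1.0, q12], [q12, q22]])     # scale constraint Q11 = 1, eq. (3.29)
    if np.min(np.linalg.eigvalsh(Q)) <= 1e-9:  # keep Q positive definite
        return np.inf
    P = F(Q)
    return -gamma_grid(P) / np.sqrt(np.linalg.det(P))   # -delta(F(Q)) for n = 2

res = minimize(neg_delta, x0=[0.0, 1.0], method='Nelder-Mead')
q12, q22 = res.x
print("approximate volume measure delta:", -res.fun)
print("approximate Q*:", np.array([[1.0, q12], [q12, q22]]))
```

Replacing gamma_grid with an implementation of the lower bound c_η of Section 3.1 yields the semi-convex scheme actually proposed in the paper.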

4 Relaxed solution via LMIs for odd systems

In this section a relaxed criterion for computing an initial candidate of the OQLF for odd polynomial systems is proposed. Our aim is to find, at a moderate computational cost, an effective starting point for initializing the optimization procedure in (3.30). Our criterion consists of finding the matrix P which maximizes the volume of an ellipsoid of fixed shape whose boundary is included in the negative time derivative region 𝒟(P) in (2.5). This criterion is based on the idea that, in order to obtain a large optimal estimate of the DA, a good strategy is to enlarge 𝒟(P), which clearly bounds the optimal estimate itself. Obviously, this criterion is relaxed with respect to (3.30), since the volume is measured via an a priori selected ellipsoidal shape, instead of the unknown one defined by the OQLF, and since only the boundary of the ellipsoid is required to be included in 𝒟(P).

Let us select the ellipsoidal shape for measuring the volume of the region where the time derivative is negative as 𝒱(U;c), where U ∈ R^{n×n} is a given symmetric positive definite matrix. Then, the relaxed criterion described above can be formulated as

$$\max_{P \in \mathcal{P}} \beta(P), \qquad \beta(P) = \sup \{ c : \mathcal{B}(U;c) \subseteq \mathcal{D}(P) \}, \tag{4.31}$$

where 𝓑(U;c) is the boundary of the ellipsoid 𝒱(U;c), whose volume is given by √(β(P)^n / det(U)). In order to check the inclusion of 𝓑(U;c) in 𝒟(P), let us introduce the homogeneous form of degree 2m

$$g(U;P;c;x) = \sum_{i=0}^{m} w_{2i}(P;x) \left( \frac{x'Ux}{c} \right)^{m-i}. \tag{4.32}$$

Then, for any P > 0 and c ∈ (0, +∞) we have that

$$\mathcal{B}(U;c) \subseteq \mathcal{D}(P) \iff w(P;x) > 0 \ \forall x \in \mathcal{B}(U;c) \iff g(U;P;c;x) > 0 \ \forall x \in \mathbb{R}^n_0. \tag{4.33}$$

Therefore, our criterion amounts to maximizing c subject to the following condition:

$$\exists P \in \mathcal{P} : g(U;P;c;x) > 0 \ \forall x \in \mathbb{R}^n_0. \tag{4.34}$$

Let us introduce the complete SMR of g(U;P;c;x),

$$g(U;P;c;x) = x^{\{m\}'} G(U;P;c;\alpha)\, x^{\{m\}} \quad \forall \alpha \in \mathbb{R}^{d_L},\ \forall x \in \mathbb{R}^n. \tag{4.35}$$

Following the strategy described in Section 3.1 for the computation of γ(P), we relax condition (4.34) by substituting it with

$$\exists P \in \mathcal{P},\ \alpha \in \mathbb{R}^{d_L} : G(U;P;c;\alpha) > 0. \tag{4.36}$$

Let us observe that condition (4.36) can be checked with one convex LMI optimization. Indeed, let us introduce the function

$$\mu(U;c) = \max_{Q \in \mathbb{R}^{n \times n},\ t \in \mathbb{R},\ \alpha \in \mathbb{R}^{d_L}} t \quad \text{s.t. } G(U;F(Q);c;\alpha) - t I_d > 0,\ Q \in \mathcal{Q}, \tag{4.37}$$

where G(U;F(Q);c;α) depends linearly on the unknowns Q and α. Then,

$$\exists P \in \mathcal{P},\ \alpha \in \mathbb{R}^{d_L} : G(U;P;c;\alpha) > 0 \iff \mu(U;c) > 0. \tag{4.38}$$

Hence, let us introduce the quantity

$$c_\mu(U) = \sup \{ c : \mu(U;c) > 0 \} \tag{4.39}$$

and let Q̂ be the optimizing matrix Q of problem (4.37) for c = c_µ(U). We define the solution of our relaxed criterion as

$$\hat{P} = F(\hat{Q}). \tag{4.40}$$

Let us observe that P̂ can be computed via a one-parameter sequence of convex LMI optimizations, that is, with about the same computational burden required by each evaluation of the volume function δ(P). More specifically, P̂ requires the optimizations (4.37) in d_µ = d_L + n(n + 1)/2 parameters, while δ(P) requires the optimizations (3.24) in d_η = d_L + 1 parameters.

Remark 2 As for the computation of the optimal estimate of the DA (see Remark 1), the computation of P̂ can be simplified if system (2.1) is described by two homogeneous forms as in (3.26) with m_f odd: it then requires just the solution of a Generalized Eigenvalue Problem (GEVP), which is a quasi-convex optimization whose solution can always be computed (see [1]). The number of variables involved in this GEVP is d_µ, as for (4.37).
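The sketch below works the relaxed criterion out for system S1 with U = F(I_2), the shape matrix used in Section 5. The quartic coefficient bookkeeping for g(U;F(Q);c;x) is derived by hand for this bivariate case; the use of cvxpy with its default SDP solver, the small margin enforcing Q > 0 and the finite grid of c values are assumptions of the sketch, not prescriptions of the paper.

```python
# Sketch of the relaxed criterion (4.31)-(4.40) for system S1 (odd, two
# homogeneous forms), with U = F(I) as in Section 5.  Assumptions: cvxpy
# with a default SDP solver, a hand-derived SMR for this bivariate case,
# a small margin to enforce Q > 0, and a finite grid of c values.
import numpy as np
import cvxpy as cp
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
F = lambda Q: solve_continuous_lyapunov(A.T, -Q)       # F(Q)A + A'F(Q) = -Q

# Q -> F(Q) is linear: precompute F on a basis of {Q = Q' : Q11 = 1}
P0  = F(np.array([[1.0, 0.0], [0.0, 0.0]]))
P12 = F(np.array([[0.0, 1.0], [1.0, 0.0]]))
P22 = F(np.array([[0.0, 0.0], [0.0, 1.0]]))

U = F(np.eye(2))                                       # fixed ellipsoidal shape
u11, u12, u22 = U[0, 0], U[0, 1], U[1, 1]

L1 = np.array([[0.0, 0.0, 1.0],                        # base of L in (3.18), d_L = 1
               [0.0, -2.0, 0.0],
               [1.0, 0.0, 0.0]])

def mu(c):
    # the LMI optimization (4.37) for a fixed c
    q12, q22 = cp.Variable(), cp.Variable()
    alpha, t = cp.Variable(), cp.Variable()
    p12 = P0[0, 1] + q12 * P12[0, 1] + q22 * P22[0, 1]
    p22 = P0[1, 1] + q12 * P12[1, 1] + q22 * P22[1, 1]
    # quartic coefficients of g = (x'Qx)(x'Ux)/c - 2*x1^2*x2*(p12*x1 + p22*x2)
    a40 = u11 / c
    a31 = (2 * u12 + 2 * q12 * u11) / c - 2 * p12
    a22 = (u22 + 4 * q12 * u12 + q22 * u11) / c - 2 * p22
    a13 = (2 * q12 * u22 + 2 * q22 * u12) / c
    a04 = q22 * u22 / c
    G = cp.bmat([[a40,     a31 / 2, 0.0    ],
                 [a31 / 2, a22,     a13 / 2],
                 [0.0,     a13 / 2, a04    ]]) + alpha * L1
    Qmat = cp.bmat([[1.0, q12], [q12, q22]])
    prob = cp.Problem(cp.Maximize(t),
                      [G - t * np.eye(3) >> 0,         # G(U;F(Q);c;alpha) - t I > 0
                       Qmat >> 1e-6 * np.eye(2)])      # Q in the set Q of (3.29)
    prob.solve()
    return prob.value, (q12.value, q22.value)

# one-parameter sweep implementing (4.39); keep the Q of the last feasible c
Q_hat, c_mu = np.eye(2), 0.0                           # fallback values
for c in np.linspace(0.1, 20.0, 100):
    val, (q12, q22) = mu(c)
    if val is None or val <= 0:
        break
    c_mu, Q_hat = c, np.array([[1.0, q12], [q12, q22]])

P_hat = F(Q_hat)                                       # candidate (4.40)
print("c_mu(U) ~", c_mu)
print("P_hat =", P_hat)
```

For transparency the sketch uses the plain sweep on c rather than the GEVP formulation of Remark 2; the candidate P̂ it returns plays the role of the initialization used for the odd systems in Section 5.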

5 Examples

In this section we present some examples of the proposed technique. The matrix P* is calculated by solving (3.30) with the Matlab function FMINSEARCH, which evaluates δ(F(Q)) using the convexification approach described in Section 3.1. The starting candidate for Q* used to initialize FMINSEARCH has been chosen as I_n for non-odd polynomial systems and as −P̂A − A′P̂ for odd ones, where P̂ is computed as in (4.37)-(4.40) with U = F(I_n). Moreover, we have iterated the computation of P̂, setting U at each step equal to the matrix P̂ obtained at the previous step. The matrix so obtained after i iterations is denoted by P̂^(i). The following systems have been considered:

$$\text{(S1)} \quad \begin{cases} \dot{x}_1 = x_2 \\ \dot{x}_2 = -2x_1 - 3x_2 + x_1^2 x_2 \end{cases}$$

$$\text{(S2)} \quad \begin{cases} \dot{x}_1 = -x_1 - 2x_2 + x_1^2 x_2 \\ \dot{x}_2 = x_1 - x_2 - x_2^3 \end{cases}$$

$$\text{(S3)} \quad \begin{cases} \dot{x}_1 = x_2 \\ \dot{x}_2 = -2x_1 - x_2 - x_1^3 + x_1 x_2^4 + x_2^5 \end{cases}$$

$$\text{(S4)} \quad \begin{cases} \dot{x}_1 = -2x_1 + x_2 + x_1^3 + x_2^5 \\ \dot{x}_2 = -x_1 - x_2 + x_1^2 x_2^3 \end{cases}$$

$$\text{(S5)} \quad \begin{cases} \dot{x}_1 = x_2 \\ \dot{x}_2 = x_3 \\ \dot{x}_3 = -4x_1 - 3x_2 - 2x_3 + x_1^2 x_2 + x_1^2 x_3 \end{cases}$$

$$\text{(S6)} \quad \begin{cases} \dot{x}_1 = x_2 \\ \dot{x}_2 = x_3 \\ \dot{x}_3 = -3x_1 - 3x_2 - 2x_3 + x_1^3 + x_2^3 + x_3^3 \end{cases}$$

$$\text{(S7, non-odd)} \quad \begin{cases} \dot{x}_1 = x_2 \\ \dot{x}_2 = -2x_1 - 3x_2 + x_1^2 + x_1 x_2 - x_2^2 \end{cases}$$

Table 1 and Figure 1 show the results obtained for these systems. From these results, we can deduce the following facts:

1. The convex method described in Section 3.1 allows us to evaluate the function δ(F(Q)) (that is, to compute the optimal estimate of the DA for a fixed Lyapunov function) with a low computational burden. Let us consider system S1, for example. The evaluation of δ(F(Q)) requires just the solution of an LMI problem with d_η = 2 parameters (see Remark 1). For comparison, the same evaluation would require the solution of a GEVP with 18 parameters using the recently proposed technique in [11]. Moreover, we evaluate the function δ(F(Q)) avoiding the local minima that problem (2.7) can present, especially when the Lyapunov function is close to the OQLF, as shown in Figure 1, where the optimal estimate of the DA provided by P* is illustrated. Obviously, if a local minimum of (2.7) is found instead of the global one, the computation of P* fails completely.

2. The relaxed criterion can provide quite good approximations P̂^(i) of P* in a few iterations and, at the same time, requires a much lower computational burden than the computation of P*. Let us consider system S3, for example, and observe that the computation of each P̂^(i) requires the solution of a GEVP with d_µ = 6 parameters (see Remark 2), while the computation of P* requires the solution of 289 LMI problems with d_η = 4 parameters. This suggests that P̂^(i) can be used as an initialization of problem (3.30) for computing P*, since a better starting solution is expected to help avoid local maxima. Indeed, for system S5, the computation of P* using the default stopping tolerance of FMINSEARCH terminates at 7.699. Looking at the volume provided by P̂^(7), 7.714, we immediately deduce that this is not the global maximum, which has then been found by adjusting the stopping tolerance.

6 Conclusion

A semi-convex approach for computing the quadratic Lyapunov function which maximizes the volume of the domain of attraction estimate for polynomial systems has been presented. For a fixed Lyapunov function, the proposed technique allows one to compute the optimal estimate, avoiding local minima, via a sequence of convex LMI optimizations with few parameters. This is a necessary step of any procedure for computing optimal quadratic Lyapunov functions (OQLFs). Moreover, a relaxed criterion has been presented for odd polynomial systems, which provides candidates for the OQLF via a one-parameter sequence of convex LMI optimizations. Simulation results have shown that these candidates are very effective starting points for the computation of the OQLF.

References

[1] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia, 1994.

[2] G. Chesi. New convexification techniques and their applications to the analysis of dynamical systems and vision problems. PhD thesis, Università di Bologna, Italy, 2001.

[3] G. Chesi, A. Garulli, A. Tesi, and A. Vicino. LMI-based techniques for convexification of distance problems. In Proc. of 40th IEEE Conference on Decision and Control, Orlando, Florida, 2001.

[4] G. Chesi, R. Genesio, and A. Tesi. Optimal ellipsoidal stability domain estimates for odd polynomial systems. In Proc. of 36th IEEE Conference on Decision and Control, pages 3528–3529, San Diego, California, 1997.

[5] G. Chesi, A. Tesi, A. Vicino, and R. Genesio. An LMI approach to constrained optimization with homogeneous forms. Systems and Control Letters, 42:11–19, 2001.

[6] E.J. Davison and E.M. Kurak. A computational method for determining quadratic Lyapunov functions for nonlinear systems. Automatica, 7:627–636, 1971.

[7] R. Genesio, M. Tartaglia, and A. Vicino. On the estimation of asymptotic stability regions: State of the art and new proposals. IEEE Transactions on Automatic Control, 30:747–755, 1985.

[8] G. Hardy, J.E. Littlewood, and G. Pólya. Inequalities: Second edition. Cambridge University Press, Cambridge, 1988.

[9] H.K. Khalil. Nonlinear Systems. Macmillan Publishing Company, New York, 1992.

[10] A.N. Michel, N.R. Sarabudla, and R.K. Miller. Stability analysis of complex dynamical systems: Some computational methods. Circ. Syst. Signal Processing, 1:561–573, 1982.

[11] B. Tibken. Estimation of the domain of attraction for polynomial systems via LMI's. In Proc. of 39th IEEE Conference on Decision and Control, Sydney, Australia, 2000.

S1
  volumes: δ(P̂) = 8.350, δ(P*) = 10.21, δ(P̂^(11)) = 10.21
  computational burden: computation of P̂: one GEVP with d_µ = 4 parameters; computation of P*: 103 evaluations of δ(P) by FMINSEARCH; evaluation of δ(P): one LMI with d_η = 2 parameters

S2
  volumes: δ(P̂) = 3.308, δ(P*) = 7.001, δ(P̂^(2)) = 6.993
  computational burden: computation of P̂: one GEVP with d_µ = 4 parameters; computation of P*: 176 evaluations of δ(P) by FMINSEARCH; evaluation of δ(P): one LMI with d_η = 2 parameters

S3
  volumes: δ(P̂) = 0.6530, δ(P*) = 0.7853, δ(P̂^(3)) = 0.7110
  computational burden: computation of P̂: one GEVP with d_µ = 6 parameters; computation of P*: 289 evaluations of δ(P) by FMINSEARCH; evaluation of δ(P): one LMI with d_η = 4 parameters

S4
  volumes: δ(P̂) = 1.243, δ(P*) = 1.965, δ(P̂^(19)) = 1.933
  computational burden: computation of P̂: sweep on c for (4.37) with d_µ = 6 parameters; computation of P*: 95 evaluations of δ(P) by FMINSEARCH; evaluation of δ(P): sweep on c for (3.24) with d_η = 4 parameters

S5
  volumes: δ(P̂) = 6.070, δ(P*) = 7.787, δ(P̂^(14)) = 7.737
  computational burden: computation of P̂: one GEVP with d_µ = 12 parameters; computation of P*: 1271 evaluations of δ(P) by FMINSEARCH; evaluation of δ(P): one LMI with d_η = 7 parameters

S6
  volumes: δ(P̂) = 0.1110, δ(P*) = 0.5413, δ(P̂^(3)) = 0.4952
  computational burden: computation of P̂: one GEVP with d_µ = 12 parameters; computation of P*: 967 evaluations of δ(P) by FMINSEARCH; evaluation of δ(P): one LMI with d_η = 7 parameters

S7
  volumes: δ(P*) = 1.445
  computational burden: computation of P*: 167 evaluations of δ(P) by FMINSEARCH; evaluation of δ(P): one LMI with d_η = 4 parameters

Table 1: Volumes provided by P̂, P* and the best P̂^(i) for systems S1–S7, and corresponding computational burden.

[Figure 1: five phase-plane plots with axes x1 (horizontal) and x2 (vertical); panels (a) S1, (b) S2, (c) S3, (d) S4, (e) S7.]

Figure 1: Systems S1–S4 and S7. Optimal estimates of the domain of attraction given by P* (dotted line) and constraint set V̇(P*; x) = 0 (solid line).