Stabilization of Chaotic Dynamics: A Modern Control Approach

A. Hammad, E. Jonckheere, C-Y Cheng, S. Bhajekar, C-C Chien
Department of Electrical Engineering - Systems, University of Southern California, Los Angeles, CA 90089-2563, (213) 740-4457
Abstract
Modern techniques for dynamic control of chaotic systems are introduced. The problem of controlling chaos is addressed in the same conceptual framework as proposed by Ott, Grebogi and Yorke. However, the novelty here is the introduction of LQ and H∞ techniques to address the problem. These techniques have been tested on representative systems, specifically the Logistic map.
Keywords: Chaos, OGY approach, LQ control, H∞ control, periodic Riccati solution, global and local dynamics.
1 Introduction

Over the past few years, great attention has been devoted to the problem of "controlling chaos." This is driven mainly by the fact that the presence of chaos in physical systems is quite common and has been demonstrated extensively. In general, there is no obvious definition of the exact meaning of "control of chaos," and there are many different points of view as to what needs to be achieved in the process of controlling chaos. Traditionally, the performance of a chaotic system could be improved by making some large and possibly costly alteration in the system which completely changes its dynamics in such a way as to achieve the desired behavior. However, in some cases, this option is not available. Ott, Grebogi and Yorke (1990) addressed an important question: Given a chaotic attractor, how can one obtain improved performance by stabilizing the system around a desired time-periodic motion by means of small time-dependent perturbations in an accessible system parameter? To address this question, a new approach (the OGY approach) was proposed. The method is based on the key observation that unstable periodic orbits are dense in a typical chaotic attractor. There is thus great freedom in choosing a periodic orbit around which the attractor is to be stabilized. The main points of our version of the OGY method can be summarized as follows:

1. The dynamical equations describing the system are known (if not, a model can be obtained from an experimentally measured signal starting with a time-delay embedding technique (Mane 1980, Packard et al. 1980, Takens 1981)).

2. There exists an unstable low-period periodic orbit embedded in the chaotic attractor such that the system performance is improved if the process is stabilized about that orbit. For example, in the case of a periodic orbit of period one, the orbit is the fixed point x_F.

3. The position of the periodic orbit depends on some parameter p, i.e., x_F = x_F(p), and the system is structurally stable in the sense that the chaotic behavior of the system is not globally destroyed or drastically altered when small changes in p take place.

4. The parameter p is assumed to be accessible so that the system can be controlled by varying p over a small range from its original nominal value p_0, i.e., p ∈ [p_0 − Δp_max, p_0 + Δp_max]; we therefore write the system iterates as x(n + 1) = f(x(n), p).

5. Preceding the initiation of the control action, the system has to display some chaotic transients.

For simplicity, assume that the desired orbit is just an unstable fixed point, say x_F. The idea is to first try to capture the trajectory in a small neighborhood N_ε(x_F) of the desired orbit. Assuming that x(n) falls close to the desired fixed point, we change the parameter p from its nominal value p_0 to p_c = p_0 + Δp_n, where |Δp_n| ≤ Δp_max, such that the subsequent iterates approach the orbit at a geometric rate. The control is initiated only when the iterates are near the desired orbit. Further, the control action Δp(n) is not allowed to be larger than Δp_max. The overall scheme is illustrated in Figure 1.
Figure 1: Control strategy

The nonlinear dynamics is linearized around the fixed point x_F and the nominal parameter p_0. A linear-quadratic (LQ) control methodology as well as an H∞ control methodology will be used to generate the change in the parameter. We stress that the control effort should be a local one, that is, the control action is in effect only in the neighborhood of the desired orbit. Both the LQ and H∞ designs can be tuned to minimize the tracking error e = x − x_F while keeping Δp = p − p_0 small. In addition, the H∞ design offers the possibility of minimizing the effect of the linearization error. It should be stressed that feedback tuning of a parameter of the chaotic generator seems to be absolutely necessary to achieve control of chaos. If control is understood as delaying the bifurcations, it turns out that no linear time-invariant filter wrapped around the chaotic source can achieve control (Hammad and Jonckheere 1995). If control is understood as decreasing the Kolmogorov-Sinai (KS) entropy, again a linear time-invariant filter does not achieve control of chaos (Jonckheere, Hammad and Wu 1993).
2 Chaotic Dynamics Background

The method described in this paper is well suited for both continuous- and discrete-time systems. We start by introducing the essential notion of an abstract dynamical system, which proves useful in conceptualizing chaos and defining our control strategy.

Definition 1 (Brown 1976) An abstract dynamical system is a quadruple (Ω, Σ, μ, T), where Ω is a nonempty set (the "sample space" or "state space"), Σ is a σ-algebra of subsets of Ω, μ is a finite normalized measure, that is, a nonnegative, σ-additive set function defined on Σ, i.e., μ: Σ → R, such that μ(Ω) = 1, and T: Ω → Ω is a measurable, measure-preserving dynamical shift transformation, that is, μ(A) = μ(T^{-1}A) for any measurable set A ∈ Σ.

For example, consider the Logistic map x(n + 1) = T x(n) = 4x(n)(1 − x(n)). In this case, Ω = [0, 1], Σ is the collection of Lebesgue measurable sets of [0, 1], the invariant measure, absolutely continuous relative to Lebesgue measure, is μ(dx) = dx / (π √(x(1 − x))), and it is readily verified that the measure is preserved, that is,
μ(A) = ∫_a^b μ(dx) = ∫_{a1}^{b1} μ(dx) + ∫_{a2}^{b2} μ(dx) = μ(T^{-1}A),

where A = [a, b] and T^{-1}A = [a1, b1] ∪ [a2, b2].
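As a quick numerical sanity check of this preservation identity (an editorial illustration, not part of the original text), the script below compares μ(A) with μ(T^{-1}A) using the closed form μ([a, b]) = (2/π)(arcsin √b − arcsin √a) and the explicit preimage of an interval under T(x) = 4x(1 − x); Python/NumPy and the test interval [0.2, 0.7] are arbitrary choices.

```python
import numpy as np

def mu(a, b):
    # Invariant measure of [a, b] for T(x) = 4x(1 - x):
    # integral of dx / (pi * sqrt(x(1 - x))) = (2/pi)(arcsin(sqrt(b)) - arcsin(sqrt(a)))
    return (2.0 / np.pi) * (np.arcsin(np.sqrt(b)) - np.arcsin(np.sqrt(a)))

def preimage(a, b):
    # T^{-1}([a, b]) = [a1, b1] U [a2, b2], obtained by solving 4x(1 - x) = y
    lo1, hi1 = (1 - np.sqrt(1 - a)) / 2, (1 - np.sqrt(1 - b)) / 2
    lo2, hi2 = (1 + np.sqrt(1 - b)) / 2, (1 + np.sqrt(1 - a)) / 2
    return (lo1, hi1), (lo2, hi2)

a, b = 0.2, 0.7                              # arbitrary test interval
(a1, b1), (a2, b2) = preimage(a, b)
print(mu(a, b), mu(a1, b1) + mu(a2, b2))     # the two values agree
```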
The proof of global convergence of our control scheme relies on ergodic theory, which we now briefly review.

Definition 2 The abstract dynamical system (Ω, Σ, μ, T) is said to be ergodic iff there does not exist a decomposition of Ω into proper invariant subsets, i.e.,

Ω = A ∪ B,  A ∩ B = ∅,  μ(A), μ(B) ≠ 0, 1,  T^{-1}A = A,  T^{-1}B = B.

(The set equalities are understood to be modulo a set of zero μ-measure.)
The consequence of ergodicity, crucial in our analysis, is the following:

Theorem 1 Let (Ω, Σ, μ, T) be ergodic and let h: Ω → R be a measurable function. Then for almost every ω_0 ∈ Ω,

∫_Ω h(ω) μ(dω) = lim_{n→∞} (1/n) Σ_{k=0}^{n−1} h(T^k ω_0).

Proof: See Halmos (1956).

A few remarks about convergence of the right-hand side are in order. Convergence in the mean of the sequence of functions h_n(ω_0) = (1/n) Σ_{k=0}^{n−1} h(T^k ω_0) is easy to prove, while pointwise convergence (the Birkhoff-Khinchin theorem) is much harder. The above theorem will be used mainly with h = I_A, where I_A denotes the indicator of the set A ∈ Σ. In this case the ergodic theorem yields

μ(A) = lim_{n→∞} (1/n) Σ_{k=0}^{n−1} I_A(T^k ω_0),
so that for almost every ω_0 ∈ Ω the ergodic measure of A is the frequency of recurrence of the state in the set A. It can be shown (Wu 1992) that the Logistic map is ergodic. To control the logistic equation we introduce the parameter p as x(k + 1) = p x(k)(1 − x(k)). A slight variation of p around 4 does not remove chaos, but the unstable orbits vary noticeably with p. The Logistic map is an example of a class of one-dimensional maps of the form

x(n + 1) = f(x(n), p),    (1)

where p is called the control parameter. It is known that the Logistic map displays clear chaotic behavior for p > p_∞ = 3.5699.... For p > p_∞, x_F = (p − 1)/p is an unstable fixed point of period 1, which clearly depends on the control parameter, and x_{F1}, x_{F2} = (p + 1 ± √((p + 1)(p − 3)))/(2p) are unstable two-cycle points, also depending on the control parameter. These unstable periodic points are on the attractor for p greater than about 3.7.
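As an editorial illustration of the ergodic theorem above (not part of the original text), the following sketch compares the empirical visit frequency of the Logistic map iterates at p = 4 to an interval A with the ergodic measure μ(A); the set A = [0.4, 0.6], the initial condition, and the number of iterations are arbitrary choices, and Python/NumPy is assumed.

```python
import numpy as np

def mu(a, b):
    # Invariant measure of [a, b]: (2/pi)(arcsin(sqrt(b)) - arcsin(sqrt(a)))
    return (2.0 / np.pi) * (np.arcsin(np.sqrt(b)) - np.arcsin(np.sqrt(a)))

a, b = 0.4, 0.6            # the set A = [a, b]
x = 0.123                  # arbitrary initial condition
n, hits = 200_000, 0
for _ in range(n):
    x = 4.0 * x * (1.0 - x)        # Logistic map at p = 4
    hits += (a <= x <= b)
print(hits / n, mu(a, b))          # empirical visit frequency vs. ergodic measure
```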
3 Control strategy

The general scheme outlined in the introduction allows for much flexibility in the selection of the action range N_ε(x_F) and the maximum allowable perturbation Δp_max. While it has been experimentally observed that the general scheme works under a broad range of choices of N_ε(x_F) and Δp_max, we have to be more restrictive to build a proof of global stability.
Theorem 2 (Global Stability Theorem) Assume that the open-loop chaotic dynamics is ergodic. Choose Δp_max arbitrarily and let the action range N_ε(x_F) be the Lyapunov-Poincaré region of attraction of the periodic orbit x_F when the loop is closed by the local LQ (or H∞) compensator. Also let the ergodic measure μ(N_ε(x_F)) be non-vanishing. Then for μ-almost every initial condition x_0 ∈ Ω, the scheme converges globally to the desired orbit x_F.
Proof: If the initial condition x_0 is inside the action range, then by definition it is in the Lyapunov-Poincaré region of attraction and hence the LQ (or H∞) controller drives the system to the desired orbit. If x_0 is outside the action range N_ε(x_F), then no control is applied and the motion is purely chaotic. Invoking the Ergodic theorem, for μ-almost every x_0, we have

μ(N_ε(x_F)) = lim_{n→∞} (1/n) Σ_{k=0}^{n−1} I_{N_ε(x_F)}(T^k x_0),    (2)

where I_{N_ε(x_F)} denotes the indicator of the set N_ε(x_F). It follows that for μ-almost every x_0, the state will visit the action range in a finite number of steps. Once the state is within the Lyapunov-Poincaré region of attraction, the feedback is applied and from then on the LQ (or H∞) controller drives the state to the desired orbit.

The controller design is not restricted to a specific routine, and hence all the different control methodologies can be adapted and used for the solution of this problem. In the following, we use two well-established control strategies. The first is based on a standard Linear Quadratic controller, while the second is developed using an H∞ controller. These are just two among many other control strategies that may be used. A test-bed design is introduced to demonstrate the above ideas.
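To make the switching logic of this proof concrete, here is a minimal editorial sketch of the scheme applied to the Logistic map; the function name, the saturation handling, and the use of Python/NumPy are our own choices, and the scalar gain F is left as an input to be supplied by either of the designs in the following subsections.

```python
import numpy as np

def run_controlled_logistic(x0, p0, x_F, F, eps, dp_max, n_steps):
    """Switching scheme of Theorem 2: free chaotic motion outside the action
    range N_eps(x_F), local linear feedback dp = -F*(x - x_F) inside it."""
    x, traj, dp_hist = x0, [], []
    for _ in range(n_steps):
        e = x - x_F
        dp = -F * e if abs(e) < eps else 0.0      # act only inside N_eps(x_F)
        dp = float(np.clip(dp, -dp_max, dp_max))  # respect |dp| <= dp_max
        x = (p0 + dp) * x * (1.0 - x)             # Logistic iterate with perturbed parameter
        traj.append(x)
        dp_hist.append(dp)
    return np.array(traj), np.array(dp_hist)
```

For period-1 stabilization, x_F = (p_0 − 1)/p_0 and F could come from the LQ design of Section 3.0.1 or the H∞ design of Section 3.0.3.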
3.0.1 Time-Invariant Linear-Quadratic Controller

Recall that e = x − x_F and let u = p − p_0 = Δp. Consider a time-invariant discrete-time linear system

e(n + 1) = A e(n) + B u(n).    (3)

Assuming that the pair (A, B) is completely stabilizable, the optimal control input u that minimizes the quadratic cost

Σ_{k=0}^{∞} [ e^T(k + 1) Q e(k + 1) + u^T(k) R u(k) ],

where Q ≥ 0 and R > 0 are weighting matrices, is given by
u(k) = −F e(k),

where F is the optimal feedback gain matrix. The matrix F is given by

F = (B^T P B + R)^{-1} B^T P A,

where P ≥ 0 satisfies the discrete algebraic Riccati equation

A^T P A − A^T P B (B^T P B + R)^{-1} B^T P A + Q − P = 0.

With D any matrix such that D D^T = Q, complete detectability of the pair (A, D) guarantees asymptotic stability of the closed-loop system. For more information about LQ controllers, the reader is referred to Kwakernaak and Sivan (1972).
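As an editorial sketch (not the paper's own code), the LQ gain can be computed for the scalar linearization of the Logistic map about its period-1 fixed point by iterating the Riccati recursion until it settles; p_0 = 3.9 and Q = R = 1 are illustrative choices.

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    # Discrete-time LQ gain via fixed-point iteration of the algebraic Riccati
    # equation; scalars are promoted to 1x1 matrices.
    A, B, Q, R = map(np.atleast_2d, (A, B, Q, R))
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)
        P = A.T @ P @ A - A.T @ P @ B @ K + Q            # Riccati recursion
    F = np.linalg.solve(B.T @ P @ B + R, B.T @ P @ A)    # u = -F e
    return F, P

# Linearized Logistic map about the period-1 fixed point (see Section 4.1):
p0 = 3.9
xF = (p0 - 1.0) / p0
A = p0 * (1.0 - 2.0 * xF)      # = p0 - 2*p0*xF
B = xF * (1.0 - xF)
F, P = dlqr_gain(A, B, Q=1.0, R=1.0)
print(F)                       # scalar feedback gain, dp = -F*(x - xF)
```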
3.0.2 Periodic LQ Control
Consider a nonlinear system x(n + 1) = f(x(n), p) with an unstable period-two orbit, and let x_{F1} and x_{F2} be its two points, that is,

x_{F2} = f(x_{F1}, p_0),   x_{F1} = f(x_{F2}, p_0).

Observe that x_{F1} and x_{F2} are fixed points of the two-fold iterated map x(n + 2) = f(f(x(n), p_0), p_0). Instability of the orbit refers to all eigenvalues of the Jacobians

∂f(f(x, p_0), p_0)/∂x |_{x = x_{F1}},   ∂f(f(x, p_0), p_0)/∂x |_{x = x_{F2}}

having magnitude larger than 1. Two local linearized models of the nonlinear system are obtained in the neighborhoods of x_{F1} and x_{F2}, as follows:
e(2n + 1) := x(2n + 1) − x_{F1} = f(x(2n), p) − f(x_{F2}, p_0)    (4)
⟹ e(2n + 1) = A(2n) e(2n) + B(2n) Δp,    (5)

e(2n + 2) := x(2n + 2) − x_{F2} = f(x(2n + 1), p) − f(x_{F1}, p_0)    (6)
⟹ e(2n + 2) = A(2n + 1) e(2n + 1) + B(2n + 1) Δp.    (7)
Written more compactly, the linearized dynamics is

e(n + 1) = A(n) e(n) + B(n) u(n),    (8)

where A(n) and B(n) are periodic with period 2. The optimal input that minimizes the cost function is given by

u(i) = −F(i) e(i),

where

F(i) = (R + B^T(i)(Q + P(i + 1)) B(i))^{-1} B^T(i)(Q + P(i + 1)) A(i),    (9)

and P(i) satisfies the matrix difference equation

P(i) = A^T(i)(Q + P(i + 1))(A(i) − B(i) F(i)).    (10)

The periodic steady-state solution can be found by simply iterating (9) and (10) from some initial matrices P(1) and P(2) until convergence is observed, taking into account that F(i + 2) = F(i) and P(i + 2) = P(i).
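The iteration just described can be sketched as follows for the period-2 orbit of the Logistic map, using the linearizations (4)-(7); this is an editorial illustration in Python, with p_0 = 3.9 and Q = R = 1 chosen for concreteness.

```python
import numpy as np

p0, Q, R = 3.9, 1.0, 1.0

# Period-2 points of the Logistic map (Section 2): (p+1 +/- sqrt((p+1)(p-3)))/(2p)
s = np.sqrt((p0 + 1.0) * (p0 - 3.0))
xF1, xF2 = (p0 + 1.0 + s) / (2.0 * p0), (p0 + 1.0 - s) / (2.0 * p0)

# Local linearizations A = df/dx, B = df/dp; index 0 plays the role of
# A(2n), B(2n) (linearization at xF2), index 1 that of A(2n+1), B(2n+1)
# (linearization at xF1), as in (4)-(7).
A = [p0 * (1.0 - 2.0 * xF2), p0 * (1.0 - 2.0 * xF1)]
B = [xF2 * (1.0 - xF2), xF1 * (1.0 - xF1)]

# Iterate (9)-(10) with the period-2 constraint P(i+2) = P(i) until convergence.
P = [0.0, 0.0]
F = [0.0, 0.0]
for _ in range(500):
    for i in (1, 0):
        Pn = P[(i + 1) % 2]                               # P(i+1)
        F[i] = B[i] * (Q + Pn) * A[i] / (R + B[i] * (Q + Pn) * B[i])
        P[i] = A[i] * (Q + Pn) * (A[i] - B[i] * F[i])
print(F, P)    # periodic gains and Riccati solutions for the two phases
```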
3.0.3 H∞ Controller

Consider the linear time-invariant system

e(n + 1) = A e(n) + B u(n) + a(n),
z(n) = C_1 e(n) + u(n),    (11)
y(n) = e(n),

where e(n) is the state vector, u(n) is the control input, a(n) is the perturbation vector, z(n) is the generalized error vector, and y(n) is the observation vector. Let F(z) be the transfer function of the linear controller connecting y to u, and let T_za be the closed-loop transfer function from a to z. The H∞ control problem is to choose a controller F(z) that makes the closed loop internally stable and guarantees that ||T_za||_∞ < γ, where γ is larger than the optimal (smallest achievable) value. A control law which ensures that ||T_za||_∞ < γ is given by
u(n) = −B^T P Δ^{-1} A e(n) =: −F_∞ e(n),    (12)

where

Δ = I + (B B^T − γ^{-2} I) P

and P is the positive definite solution to the generalized algebraic Riccati equation

C_1^T C_1 + A^T P Δ^{-1} A − P = 0,

with the existence condition γ^2 I − P > 0. The general block diagram of the above model is shown in Figure 2.
Figure 2: Block Diagram

For more information about H∞ controllers, the reader is referred to Francis (1987).
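For the scalar linearization used later for the Logistic map, the formulas above reduce to a one-dimensional fixed-point problem. The following editorial sketch (Python; C_1 = 1 and the listed γ values are our illustrative choices, with 10.85 taken from the Figure 5 panel titles) iterates the generalized Riccati equation and checks the existence condition γ^2 − P > 0; it is a simplification for illustration, not a substitute for a full H∞ synthesis.

```python
import numpy as np

def hinf_gain_scalar(A, B, C1, gamma, iters=2000):
    # Scalar sketch of the design of Section 3.0.3: iterate
    #   P <- C1^2 + A^2 * P / Delta,  Delta = 1 + (B^2 - gamma^-2) * P,
    # then F_inf = B * P * A / Delta.  Returns None if the existence
    # condition gamma^2 - P > 0 fails or the iteration diverges.
    P = 0.0
    for _ in range(iters):
        Delta = 1.0 + (B**2 - gamma**-2) * P
        if Delta <= 0.0:
            return None
        P = C1**2 + A**2 * P / Delta
        if not np.isfinite(P) or P >= gamma**2:
            return None
    Delta = 1.0 + (B**2 - gamma**-2) * P
    return B * P * A / Delta          # u(n) = -F_inf * e(n)

# Linearization about the period-1 fixed point at p0 = 4 (x_F = 3/4):
p0 = 4.0
xF = (p0 - 1.0) / p0
A, B, C1 = p0 * (1.0 - 2.0 * xF), xF * (1.0 - xF), 1.0
for gamma in (5.0, 10.85, 50.0):
    print(gamma, hinf_gain_scalar(A, B, C1, gamma))   # None when gamma is infeasible
```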
4 Examples and Illustrations

The control strategy outlined in the preceding section has been successfully applied to the Logistic map. The simulation results show how effective the method is. The analysis of the one-dimensional Logistic map revealed some subtle points that may not be obvious from the analysis of higher-dimensional systems, mainly that the size of the allowed perturbation is not independent of the nominal value of the parameter being tuned.
4.1 Logistic Map
Since our interest is limited to a local control effort, we use a linear approximation of the system (1) in the neighborhood of the desired orbit. The approximate linear model around x_F is given by

e(k + 1) = (p_0 − 2 p_0 x_F) e(k) + (x_F − x_F^2) Δp(k),

where e(k) = x(k) − x_F. We assume that p can be varied by a small amount about the nominal parameter value p_0, that is, p can be written as p = p_0 + Δp, where |Δp| ≤ Δp_max and Δp_max is the maximum allowed variation in p.
4.1.1 Stabilizing Unstable Orbit of Period 1
Let x_F denote the fixed point to be stabilized. We start by assigning a control action range (the region over which the control action will be activated), say a neighborhood N_ε(x_F). Starting from a randomly chosen initial condition, we iterate the system for 100 iterations until the chaotic behavior is clear and the iterates are distributed over the attractor. After the transient iterations, we turn on our control action. By the Ergodic Theorem the state will fall in the control action region after finitely many iterates. Once the state falls in the neighborhood of the orbit, the control will stabilize it. The average length of the chaotic transient until the orbit falls within the control action range depends on the initial condition. For a detailed analysis of the transient time, the reader is referred to Ott et al. (1990). Figures 3 and 4 show, for the Logistic map, the orbit plots x(k) versus the iteration number k and the corresponding parameter perturbation Δp versus the iteration number k for the LQ controller case. Figure 5 shows the simulation results when the H∞ controller is used.
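For concreteness, the experiment just described can be reproduced in outline with the short editorial sketch below (Python/NumPy; the settings mirror the p_0 = 3.9 panel of Figure 3, and Q = R = 1 is our choice for the scalar LQ gain).

```python
import numpy as np

# Period-1 experiment of Section 4.1.1 (settings mirror a Figure 3 panel)
p0, dp_max, eps, n_transient, n_total = 3.9, 0.3, 0.2, 100, 300
xF = (p0 - 1.0) / p0
A, B = p0 * (1.0 - 2.0 * xF), xF * (1.0 - xF)     # local linearization

# Scalar LQ gain from the Riccati equation of Section 3.0.1 (Q = R = 1)
P = 1.0
for _ in range(200):
    P = A * P * A - (A * P * B) ** 2 / (B * P * B + 1.0) + 1.0
F = B * P * A / (B * P * B + 1.0)

x = np.random.rand()                               # random initial condition
for k in range(n_total):
    dp = 0.0
    if k >= n_transient and abs(x - xF) < eps:     # control only after the transient,
        dp = float(np.clip(-F * (x - xF), -dp_max, dp_max))  # only inside N_eps(x_F)
    x = (p0 + dp) * x * (1.0 - x)
    if k % 25 == 0:
        print(k, round(x, 4), round(dp, 4))
```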
Figure 3: Logistic maps (p = 3.7, 3.9, 4), Δp_max = 0.3, action radius ε = 0.2: orbit plots, x(k) versus the iteration number k, and the corresponding parameter perturbation Δp versus the iteration number k: LQ controller results
Figure 4: Logistic maps (p = 3.7, 3.9, 4), Δp_max = 0.1, action radius ε = 0.1: orbit plots, x(k) versus the iteration number k, and the corresponding parameter perturbation Δp versus the iteration number k: LQ controller results
4.1.2 Stabilizing Unstable Orbits of Higher Periods - Fixed Point Approach
The extension of the proposed control method to higher-period orbits is straightforward. If the desired orbit that we wish to stabilize is of period two, the control action is applied around one of the iterates, and the system is then allowed to drift freely around the next iterate with no control applied. In other words, we stabilize a fixed point of the two-fold iterated map using the previous technique (a minimal sketch of this design is given at the end of this subsection). Recall that controlling the state in the neighborhood of any of the points on the orbit results in the stabilization of the entire orbit. Figures 6 and 7 demonstrate the results obtained by applying this strategy to stabilize a period-two orbit of the Logistic map. The extension of this work to orbits with very large periods may not be as effective as in the case of orbits with small periods. This is due in part to the fact that the control is only applied in the neighborhood of one of the points on the orbit, which in turn may allow the system to drift away from the neighborhoods of the other desired orbit points if the period is large. A better strategy may be to apply control more frequently by invoking the control at several points on the orbit. The modified strategy, which is capable of addressing the problem of stabilizing high-period orbits more efficiently, is based on the LQ control strategy along with the solution of periodic Riccati equations. For simplicity, the modified method is outlined in the case of a period-2 orbit.
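The sketch referenced above follows; it is an editorial illustration (Python/NumPy, p_0 = 3.9, Q = R = 1) of designing a scalar LQ gain for the fixed point x_{F1} of the two-fold iterated map, with the perturbation Δp applied only on the first of each pair of steps.

```python
import numpy as np

p0, Q, R = 3.9, 1.0, 1.0
s = np.sqrt((p0 + 1.0) * (p0 - 3.0))
xF1, xF2 = (p0 + 1.0 + s) / (2.0 * p0), (p0 + 1.0 - s) / (2.0 * p0)

dfdx = lambda x: p0 * (1.0 - 2.0 * x)     # df/dx at the nominal p0
dfdp = lambda x: x * (1.0 - x)            # df/dp

# Linearize the two-fold map about xF1, with dp applied only on the first of
# the two steps (the second step runs at the nominal p0):
A2 = dfdx(xF2) * dfdx(xF1)                # derivative of f(f(x, p0), p0) at xF1
B2 = dfdx(xF2) * dfdp(xF1)                # sensitivity of the two-fold map to dp

# Scalar LQ gain for the pair (A2, B2), as in Section 3.0.1
P = 1.0
for _ in range(200):
    P = A2 * P * A2 - (A2 * P * B2) ** 2 / (B2 * P * B2 + R) + Q
F2 = B2 * P * A2 / (B2 * P * B2 + R)
print(A2, B2, F2)   # apply dp = -F2*(x - xF1) once every two iterations near xF1
```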
4.1.3 Stabilizing Unstable Orbits of Higher Periods - Periodic Riccati Approach

The simulation results obtained by applying LQ control and the solution of the periodic Riccati equations to the present problem are shown in Figure 8.
Figure 5: Logistic maps (p = 3.8, 3.9, 4; panel γ values 10.16, 9.496, 10.85), Δp_max = 0.3, action radius ε = 0.2: orbit plots, x(k) versus the iteration number k, and the corresponding parameter perturbation Δp versus the iteration number k: H∞ controller results
Figure 6: Logistic map (p = 3.9, Δp_max = 0.2, action radius ε = 0.1), extension of the control strategy to control a period-2 orbit: LQ controller results
Figure 7: Stabilizing an orbit of period 2 using Δp_max = 0.25 and action radius ε = 0.07746: H∞ controller results (γ = 33.7)
Figure 8: Logistic map (p = 3.9, Δp_max = 0.2, action radius ε = 0.1), control of a period-2 orbit using the periodic Riccati equation solution

Clearly, the best action would be to apply control at every point on the orbit. However, this might be very costly, and a compromise should be reached between the frequency of applying the control action and the cost of applying such control. The main idea is that the control should be applied frequently enough to prevent the iterates from escaping from the neighborhood of the controlled orbit.
4.1.4 Parameter dependence
The study of the control strategy in the case of the one-dimensional map reveals the important fact that the maximum allowed perturbation and the choice of the control action range depend on the system parameter to be perturbed. If the maximum allowed perturbation is too large, or the control is applied too early, i.e., the control action range is too large, the result may be the stabilization of the wrong orbit. In the case of the Logistic map, if p = 3.8, Δp_max = 0.1, and the action range is chosen to be a neighborhood of length 0.1 centered around x_F, then the control action is successful and stabilization of the orbit is achieved rapidly, as seen in Figure 9. On the other hand, if Δp_max is increased to 0.3 and the action range is increased to 0.2, then the control is not successful at stabilizing the desired orbit, or even fails to stabilize any orbit at all, as seen in Figure 9.
Figure 9: p = 3.8, control strategy applied after the chaotic transients in the first 100 iterations

This by no means implies that the latter choice will not work for all values of p; the only implication is that it will not work in the neighborhood of p = 3.8. Figure 3 shows a successful control action for the same settings when p = 4. We note that care should be taken in choosing the settings based on the value of the parameter.
4.2 Comments and Comparisons
Although the proposed control scheme is designed based on a local linearized model of the system, we emphasize that the control methodology introduced above provides a solution to the global stability problem rather than local stability in the sense of Lyapunov. The LQ method provides great flexibility, since the choice of Q and R is left as design parameters to be chosen according to the objective to be achieved. If it is more important to guarantee small controls, more weight can be given to u by choosing a large R. On the other hand, if the departure from x_F is what matters most, then a heavier weight on the first term in the cost index can be imposed by a larger Q. The H∞ approach also promises to be a good approach to control chaos in this problem setup. This is because a linearized model of a highly nonlinear plant is being used to design the controller, and the H∞ controller is tuned to handle the linearization uncertainty. Though a very conservative result, this can be deduced from the small gain theorem interpretation of the H∞ control problem. Figure 10 shows the control of chaos of the period-1 orbit using the LQ and H∞ regulators. The case p = 4.0 is considered. The time when the control action takes place is zoomed in for better comparison. Observing the perturbation of the plant parameter p suggests that the H∞ regulator handles plant uncertainty well in comparison to the LQ regulator: the control action in the H∞ regulator is swift and dies down almost instantly, there are no oscillations of Δp about the steady state in the H∞ case, and, contrary to the LQ case, the H∞ design does not overshoot. Similar results were obtained with different values of γ and also while stabilizing the unstable orbit of period 2.
Figure 10: Comparison between the performance of the LQ and H∞ regulators (p = 4, Δp_max = 0.3, action radius ε = 0.2; H∞ controller with γ = 10.86). The time frame for the control effort has been zoomed in for better comparison
5 Conclusion

Two modern control techniques, LQ and H∞, have been applied to stabilize unstable periodic orbits of a chaotic attractor. The proposed control action stabilized the desired orbit once the system state fell into the neighborhood of the orbit. An application of the LQ technique to stabilization of the Hénon map is available in Jonckheere et al. (1994).
References

[1] B.D.O. Anderson and J.B. Moore, 1991, Optimal Control: Linear Quadratic Methods, Prentice Hall.

[2] T. Basar and P. Bernhard, 1991, H∞-Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach, Systems & Control: Foundations & Applications, Birkhauser.

[3] J.R. Brown, 1976, Ergodic Theory and Topological Dynamics, New York: Academic Press.

[4] R.S. Bucy and L.A. Campbell, 1987, "Determination of Steady State Behavior for Periodic Filtering Problems".

[5] W.L. Ditto, S.N. Rauseo and M.L. Spano, 1990, "Experimental Control of Chaos," Physical Review Letters, Vol. 65, No. 26.

[6] U. Dressler and G. Nitsche, 1992, "Controlling Chaos Using Time Delay Coordinates," Physical Review Letters, Vol. 68, No. 1.

[7] B. Francis, 1987, A Course in H∞ Control Theory, Springer-Verlag, Berlin - New York.

[8] K. Glover and J. Doyle, 1988, "State-space formulae for all stabilizing controllers that satisfy an H∞-norm bound and relations to risk sensitivity," Systems and Control Letters, Vol. 11, pp. 167-172.

[9] P.R. Halmos, 1956, Lectures on Ergodic Theory, New York: Chelsea Publishing Company.

[10] A. Hammad, 1994, "Control of Chaos in Dynamical Systems", Ph.D. Dissertation, Department of Electrical Engineering - Systems, University of Southern California.

[11] A. Hammad and E.A. Jonckheere, 1995, "Taming Chaos - A Numerical Experiment Using Low-Order LTI Filters", IEEE Transactions on Circuits and Systems, to appear Feb. 1995.

[12] E.A. Jonckheere, A. Hammad and B.F. Wu, 1993, "Chaotic Disturbance Rejection - A Kolmogorov-Sinai Entropy Approach", Proc. 32nd Conf. Decis. Contr., San Antonio, TX, pp. 3578-3583.

[13] E. Jonckheere, A. Hammad, C-Y Cheng and C-C Chien, 1994, "LQ Control of Chaos", IFAC Symposium on Robust Control Design, Rio de Janeiro, Sep. 14-16, pp. 174-179.

[14] H. Kwakernaak and R. Sivan, 1972, Linear Optimal Control Systems, New York: Wiley.
[15] R. Mane, 1981, in Dynamical Systems and Turbulence, Warwick 1980, edited by D. Rand and L.-S. Young, Lecture Notes in Mathematics, Vol. 898, Springer-Verlag, p. 230.

[16] E. Ott, C. Grebogi and J.A. Yorke, 1990, "Controlling Chaos," Physical Review Letters, Vol. 64, No. 11.

[17] N. Packard, J. Crutchfield, J. Farmer and R. Shaw, 1980, "Geometry from a Time Series," Physical Review Letters, Vol. 45, pp. 712-716.

[18] S. Bhajekar, E.A. Jonckheere and A. Hammad, 1994, "H∞ Control of Chaos," Proc. 33rd Conf. Decis. Contr., Lake Buena Vista, FL, Dec. 14-16, pp. 3285-3286.

[19] F. Takens, 1981, in Dynamical Systems and Turbulence, Warwick 1980, edited by D. Rand and L.-S. Young, Lecture Notes in Mathematics, Vol. 898, Springer-Verlag, p. 366.

[20] B.F. Wu, 1992, Ph.D. Dissertation, Department of Electrical Engineering - Systems, University of Southern California.