IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 35, NO. 8, AUGUST 1990

Stabilizing a Linear System with Quantized State Feedback

Abstract—This paper addresses the problem of stabilizing an unstable time-invariant discrete-time linear system by means of state feedback when the measurements of the state are quantized. It is found that there is no admissible control strategy that stabilizes the system in the traditional sense of making all closed-loop trajectories tend asymptotically to zero. Still, if the system is not excessively unstable, one can implement feedback strategies that bring closed-loop trajectories arbitrarily close to zero for an arbitrarily long time. It is found that when ordinary "linear" feedback of quantized state measurements is applied, the resulting closed-loop system behaves chaotically. When the state is one-dimensional, a quantitative statistical analysis of the resulting closed-loop dynamics reveals the existence of an invariant probability measure on the state space that is absolutely continuous with respect to Lebesgue measure and with respect to which the closed-loop system is ergodic. The asymptotically pseudorandom closed-loop system dynamics differ substantially from what would be predicted by a conventional signal-plus-noise analysis of the quantization's effects. Probabilistic reformulations of the stabilization problem in terms of the invariant measure are then considered.

I. INTRODUCTION

The role that digital systems play in signal processing and feedback control applications continues to expand rapidly. Since a digital system uses finite-precision arithmetic to process finite-wordlength specifications of real numbers, the successful application of such a system in a feedback control context hinges on the existence of a harmonious relationship between the digital system and the "analog" real world with which it interacts. This digital/analog interface comprises substantially more than just the analog-to-digital and digital-to-analog converters that turn real numbers into "words" from a finite alphabet or take such words and turn them into real numbers. In an essential way, a host of other finite-precision features of a digital system (e.g., memory, arithmetic precision, and architecture) have substantial ramifications with regard to the system's performance in any situation requiring that it generate real-valued inputs to an "analog" system by means of processing the analog system's (real-valued) outputs.

In this paper, we concentrate on what is probably the most familiar and most widely studied facet of the digital/analog interface, namely, the finite-wordlength representation, or quantization, of real numbers. There is extensive literature on the effects of quantization in digital signal processing and control systems. The approach taken here, however, differs substantially from that of most earlier investigations. With some notable exceptions, research into the effect of quantization has proceeded roughly as follows. A system whose implementation will involve quantization of real numbers is designed as if perfect measurements or representations of these numbers were available. An assessment is then made of the influence that quantization will have on the performance of the system once it is implemented. This assessment is usually based upon bounds on certain deviations from "nominal performance" that arise because of the quantization; these bounds are computed by a variety of techniques, many of which model quantization "errors" as noise and produce estimates of "mean squared errors" incurred because of the quantization. Any such approach adopts the attitude that a quantized measurement of a real number is an approximation of that number and has associated with it a calculable numerical error; although this attitude would seem to be in accord with many practical manifestations of quantization, it tends to inhibit one from looking at measurement quantization in other useful ways. We shall see in what follows that these alternative viewpoints, versions of which feature prominently in the ergodic theory of dynamical systems, can be quite illuminating. In essence, they depart from traditional approaches to measurement quantization in that they regard a quantized measurement q(x) of a real number x simply as a partial observation of x or as an entity containing a limited amount of information about x; the relationship between x and q(x) is seen in this way as being more qualitative than quantitative.

Over the years, several researchers have obtained some interesting results by treating quantization in unconventional ways. An early investigation along these lines appears in Curry's book [5], which considers least mean square state estimation and linear quadratic Gaussian optimal control in the presence of quantized outputs; as in the present paper, quantizers in [5] are modeled as deterministic memoryless nonlinearities. The papers by Schweppe [28] and Bertsekas and Rhodes [1] discuss related work that adopts a view somewhat akin to our own by modeling the information about a real number contained in a quantized measurement as a set membership constraint. More recently, Moroney et al. (see [21], [22]) considered ways in which one can specify system structures that mitigate the deleterious effects of quantization and finite-precision arithmetic in digital feedback control systems. D. Williamson's paper [33] describes related results; see also [20]. In the digital signal processing community there has been a fair amount of research of a similar nature in the context of digital filter synthesis and sigma-delta modulation (see, for example, [11], [23], and [26]). Quite recently (for example, in [30], [31]), it has been demonstrated that quantization can induce chaotic behavior in digital feedback control systems, but little work has been done on formulating and answering design questions while keeping in mind the possibility of such a seemingly anomalous occurrence.

In its most basic form, the question we wish to answer in this paper can be phrased as follows: under what circumstances and in what sense can we stabilize an unstable discrete-time linear system by choosing a feedback control that depends only on past quantized measurements of the system's state? Moreover, in what fashion does the answer to this question depend on properties of the system and on properties of the "quantizer" generating the measurements to which we have access? Although these questions are very broad, we shall see in what follows that a variety of interesting complications attend even the simplest examples that fit into the general framework.

Manuscript received June 1, 1989; revised January 25, 1990. Paper recommended by Past Associate Editor B. H. Krogh. This work was supported in part by the National Science Foundation under Grant ECS-8352211, by the U.S. Air Force Office of Scientific Research under Contract FQ8671-880048, and by the Army Research Office through the Mathematical Science Institute at Cornell University, Ithaca, NY. The author is with the School of Electrical Engineering, Cornell University, Ithaca, NY 14853. IEEE Log Number 9036243.



II. FEEDBACK STABILIZATION WITH QUANTIZED MEASUREMENTS: GENERAL CONSIDERATIONS

In this section, we set up explicitly the problem of stabilizing an unstable discrete-time state-space linear system by means of feeding back quantized measurements of the system's state trajectory. We find, as might be expected, that some very mild assumptions preclude the existence of a feedback control of the required form that stabilizes the system in the usual way. We are prompted to attempt to rephrase the stabilization problem so that it makes sense to try to solve it in the presence of quantized measurements; it happens that if the system is reachable and not excessively unstable, and if the quantization is uniform, then for each ε > 0 a feedback strategy can be chosen that causes every state trajectory eventually to stay within ε of the origin for an arbitrarily long time K_0. Still, the time it takes a trajectory to get to within ε of zero grows linearly in K_0, the length of time we wish it to stay there.

Begin by letting J be a finite or countable set, and q: R^n → J an arbitrary mapping. The mapping q is intended to represent quantization of the real n-vectors in its domain; the idea is that {U_j = q^{-1}({j}): j ∈ J} is a partition of R^n into quantization blocks, and any two n-vectors x and y that lie in the same U_j are indistinguishable, at least as far as the quantizer is concerned. It is important to observe that instead of viewing q(x) as an approximation of the real n-vector x, we regard it merely as an entity that gives us a limited amount of information about x. Now let A and B be real matrices having respective sizes (n × n) and (n × m). Consider the difference equation

x(k+1) = Ax(k) + Bu(k),   k ≥ 0;   (1)

given x(0) = x_0 and u(k), k ≥ 0, the mapping k → x(k), k ≥ 0, represents the time evolution of the state of some controlled discrete-time state-space linear system whose state space is R^n. The input function u: Z^+ → R^m is constrained to be a nonanticipative function of the past quantized measurements of the state x; more precisely, we require that for each k ≥ 0 there be a mapping f^(k): J^{k+1} → R^m such that for any x_0 ∈ R^n

u(k) = f^(k)(q(x_0), q(x(1)), ..., q(x(k)))   (2)

when k → x(k) evolves according to the difference equation (1) subject to x(0) = x_0. In what follows, we often refer to control strategies of the very general form (2) as admissible control strategies.

The significance of our allowing u(k) to depend on the entire measurement history {q(x(l)): l ≤ k} should not be underestimated. If an exact measurement of each x(l) were available at each time l, then the single quantity x(k) would contain as much useful information for the controller as the entire history {x(l): l ≤ k}. On the other hand, the presence of the quantization q makes the "present" measurement q(x(k)) somewhat more like a partial observation of the state x(k) whose role as an input to the controller is analogous to that of a linear partial observation y(k) = Cx(k) in linear control theory. The specification (2) corresponds roughly with the idea of dynamic compensation in the linear theory; permitting u(k) to depend explicitly on measurements made before time k is tantamount to allowing memory to be built into the controller, which is in essentially the same spirit as allowing dynamic compensators for linear systems. These considerations reflect our desire to regard q as generating measurements containing limited information rather than straight approximations (cf. Section I).

The measurability constraint (2) on the feedback control law makes it fairly clear that if the quantizer q is reasonable (e.g., locally constant except on a "thin" set), then it will be impossible to specify a control law of the form (2) that makes x(k) go to zero as k → ∞ for arbitrary x_0. One expects that if the origin lies in the interior of the quantization block U_0 = q^{-1}(q(0)) ⊂ R^n, then the controller will have the tendency to "turn off" if it observes a long enough string of q(0)'s in the sequence {q(x(k))}. The following technical result formalizes our intuitive appraisal.

Proposition 2.1: In (1), suppose that A has at least one eigenvalue with magnitude greater than 1, and that 0 ∈ R^n lies in the interior of U_0 = q^{-1}(q(0)) ⊂ R^n; suppose also that U_0 is bounded. Then for every feedback control strategy of the form (2), the set of all x_0 ∈ R^n whose closed-loop trajectories k → x(k) tend to zero as k → ∞ has Lebesgue measure zero.

Proof: Given a control strategy of the form (2), assume it is in force. For each N ≥ 0 and each N-tuple (j_0, ..., j_N) ∈ J^N, let W(j_0, ..., j_N) be the set of all initial conditions x_0 ∈ R^n whose closed-loop trajectories k → x(k), k ≥ 0, have quantized measurement histories q(x_0) = j_0, q(x(1)) = j_1, ..., q(x(N)) = j_N, followed by an infinite string of q(0)'s. Because J is countable, there are for each N only countably many W(j_0, ..., j_N)'s, and the union over N of all these sets contains the set of x_0 ∈ R^n whose closed-loop trajectories tend to zero as k → ∞, since the origin lies in the interior of its own quantization block. We show that each W(j_0, ..., j_N) has Lebesgue measure zero, which proves Proposition 2.1. If x_0 ∈ R^n is an initial condition whose closed-loop trajectory enters U_0 at time N + 1 and stays there forever after, then x_0 ∈ W(j_0, ..., j_N) for some N-tuple (j_0, ..., j_N). If x̂_0 is another initial condition in W(j_0, ..., j_N), then its trajectory k → x̂(k), k ≥ 0, gives rise to the same sequence of quantized measurements, and it is easy to see from (1) and (2) that x̂(k+1) − x(k+1) = A(x̂(k) − x(k)) for every k ≥ 0. Since x̂(k) − x(k) is bounded by assumption, it follows that x̂_0 − x_0 must lie in E^s, the generalized eigenspace of A corresponding with eigenvalues whose magnitudes do not exceed 1; E^s has Lebesgue measure zero by hypothesis. Fixing x_0 and varying x̂_0, we see that W(j_0, ..., j_N) is contained in a parallel translate of E^s, and therefore has Lebesgue measure zero for every (j_0, ..., j_N) ∈ J^N. □

Having established that there exists no admissible feedback control law that stabilizes the system in the traditional sense, it seems reasonable to try and ascertain what kinds of "stabilization" can be accomplished. To this end, we restrict our attention to the case where the quantizer q is uniform in each state direction. Given Δ > 0, define the uniform quantizer q_Δ: R → Z with sensitivity Δ as follows: q_Δ(x) = m if x ≥ 0 and x ∈ [(m − 1/2)Δ, (m + 1/2)Δ], and q_Δ(x) = −q_Δ(−x) if x < 0. Given positive numbers Δ_1, ..., Δ_n, define q_Δ: R^n → J = Z^n by means of

q_Δ(x) = (q_{Δ_1}(x_1), ..., q_{Δ_n}(x_n)).

The quantizer q_Δ partitions R^n into rectilinear "quantization blocks" whose edges are parallel to the coordinate axes. For obvious reasons, uniform quantizers of the kind just described are frequently used as models for finite-precision fixed-point representations of real numbers and vectors.
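The directionwise uniform quantizer and its block centers are straightforward to express in code. The following is a rough Python sketch (not from the paper); the function and variable names are ours, and the treatment of block boundaries is arbitrary to within a set of measure zero.

```python
import numpy as np

def q_uniform(x, delta):
    """Scalar uniform quantizer with sensitivity delta.

    Returns the integer m such that x lies in the block
    [(m - 1/2)*delta, (m + 1/2)*delta); q(-x) = -q(x).
    """
    x = np.asarray(x, dtype=float)
    return np.sign(x) * np.floor(np.abs(x) / delta + 0.5)

def q_vector(x, deltas):
    """Directionwise uniform quantizer q_Delta: R^n -> Z^n."""
    return np.array([q_uniform(xi, di) for xi, di in zip(x, deltas)])

def block_center(j, deltas):
    """Center of the rectilinear quantization block with index j in Z^n."""
    return np.asarray(j, dtype=float) * np.asarray(deltas, dtype=float)

# Example: a 2-D state quantized with different sensitivities per direction.
x = np.array([0.74, -1.31])
deltas = np.array([0.5, 1.0])
j = q_vector(x, deltas)              # -> [ 1., -1.]
print(j, block_center(j, deltas))    # block center (0.5, -1.0)
```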

In the modified stabilization problem we are about to consider, we ask when it is possible given ε > 0 to design a control strategy of the form (2) that causes the trajectory of every x_0 ∈ R^n to get to within ε of the origin and stay there for a "very long time" K_0 > 0. Proposition 2.2 generalizes a result of G. Williamson (see [34]); its proof illustrates the manner in which the quantized measurement history of the state trajectory tells quite a bit about where the state of the system is at each time. For the statement of Proposition 2.2, recall that the infinity norm ||A||_∞ of an (n × n) matrix A is the maximum over i of Σ_j |[A]_{ij}|; it is the matrix norm induced by the maximum norm ||x||_∞ = max{|x_i|: 1 ≤ i ≤ n} on R^n. Since every induced norm of an (n × n) matrix A is bounded below by the spectral radius of A, any upper bound on ||A||_∞ also bounds from above the magnitude of every eigenvalue of A. Although the hypothesis of Proposition 2.2 does not require that A have [...]

Proposition 2.2: Suppose that (A, B) in (1) is a reachable pair and that ρ = ||A||_∞ satisfies ρ^n Δ_max < 2 Δ_min, where Δ_max and Δ_min denote the largest and smallest of the sensitivities {Δ_i}. Then for every ε > 0 and integer K_0 > 0 there exists a K_1 > 0 and a control strategy of the form (2) that leads to a closed-loop system all of whose trajectories are within ε of the origin for times k between K_1 and K_1 + K_0. K_1 increases linearly in K_0 and logarithmically in ε^{-1}.

Proof: Suppose ε > 0 and K_0 > 0 are given; it remains to specify the control strategy. Let ρ = ||A||_∞; we assume that ε ≤ Δ_min. Suppose that the measurement q_Δ(x_0) indicates that x_0 lies in the (rectilinear) quantization block U_1; we can specify a similar box in which Ax_0 lies, and the edges of this second box will have lengths at most {ρΔ_i}; this second box contains {Ay: y ∈ U_1}. Similarly, we can specify a box having edges at most {ρ^n Δ_i} in which A^n x_0 lies. By the reachability assumption, we can select a control sequence u(0), ..., u(n − 1) so that the center of the box U_1 is steered at time n to a corner point, say, ((1/2)Δ_1, ..., (1/2)Δ_n). In this way, the corner point lies at the center of a rectilinear box having edges at most {ρ^n Δ_i} in which we know that x(n) lies. A glance at Fig. 1 might prove helpful. It is important to observe that u(l) for each l ≤ n − 1 depends only on q_Δ(x_0). The assumption that ρ^n Δ_max < 2 Δ_min and our placement of the center of the "box at time n" guarantee that the measurement q_Δ(x(n)) allows us to specify a rectilinear box with edges at most {(1/2)ρ^n Δ_i} in which x(n) lies. The last box has edges with lengths strictly smaller than the lengths of the edges of quantization blocks; if we repeat the "dividing" procedure with this box in place of the quantization block U_1 that contained x_0, then at time 2n we "know" x(2n) to within a box of sides {(1/4)ρ^{2n} Δ_i}. Note that the inputs applied between times n and 2n − 1, inclusive, depend only on q_Δ(x(n)) and q_Δ(x_0). We may continue in this fashion so that for every integer L > 0, we can specify a box in which x(Ln) lies whose sides have lengths at most {(1/2)^L ρ^{Ln} Δ_i}. If we pick L so that

L > [(n + K_0) log ρ − log ε + log Δ_max + log(√n/2)] / (log 2 − n log ρ),

then it is not hard to show that at time nL we will know where x(nL) lies to within a rectilinear box with edges less than {ε ρ^{-(n + K_0)}}; taking time n to steer the center of this box to the origin puts x(nL + n) in a box centered at the origin with edges small enough so that setting u(k) = 0 for nL + n < k < nL + n + K_0 will not cause x(k) to stray farther than ε from zero during that period. Setting K_1 = nL + n completes the proof. □
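To make the mechanics of the proof concrete, here is a minimal one-dimensional sketch of the measurement-refinement idea, assuming n = m = 1, 1 < a < 2 (the scalar analogue of ρ^n Δ_max < 2 Δ_min), and exact knowledge of a. It is an illustration of the idea only, not the construction used in the paper; the names and the particular steering rule are ours.

```python
import math

a, Delta = 1.6, 1.0            # open-loop pole and quantizer sensitivity (ours)
eps, K0 = 1e-2, 40             # target accuracy and dwell time

def q(x):                      # uniform quantizer with sensitivity Delta
    return math.copysign(math.floor(abs(x) / Delta + 0.5), x) if x else 0.0

x = 7.37                       # true state, unknown to the controller
m = q(x)                       # initial quantized measurement
lo, hi = (m - 0.5) * Delta, (m + 0.5) * Delta   # interval known to contain x

# Refine until the uncertainty is small enough to coast for K0 steps.
target = eps / a ** (K0 + 1)
while hi - lo > target:
    # Steer the interval's center onto a block boundary (a "corner point").
    u = 0.5 * Delta - a * (lo + hi) / 2.0
    x = a * x + u
    lo, hi = a * lo + u, a * hi + u          # propagate the uncertainty interval
    m = q(x)                                  # the new measurement halves the interval
    lo = max(lo, (m - 0.5) * Delta)
    hi = min(hi, (m + 0.5) * Delta)

# Steer the interval's center to the origin, then turn the control off.
u = -a * (lo + hi) / 2.0
x = a * x + u
for k in range(K0):
    assert abs(x) <= eps, "state strayed farther than eps"
    x = a * x                                 # u = 0 during the dwell period

print("dwell completed; final |x| =", abs(x))
```

The uncertainty interval shrinks by a factor a/2 < 1 per step, so the number of refinement steps grows linearly in K_0 and logarithmically in 1/ε, in line with the proposition.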

For very large K_0, then, α is approximately equal to 1 + (n log ρ)/(log 2). It is interesting that this limiting value for α does not depend on ε or the {Δ_i} but only on the degree of expansiveness of the original uncontrolled system as measured by ||A||_∞.

We make one additional comment concerning the "stabilization" scheme outlined in the proof of Proposition 2.2. One can think of the sensitivities {Δ_i} of the quantizer q_Δ as measuring the accuracy of the sensors with which measurements of the state variables are made. The result of Proposition 2.2 can be interpreted as saying that even if sensor accuracies are severely limited, so that measurements are very rough, a long record of such measurements can be used intelligently, along with knowledge of the system's dynamics, so as to obtain a much sharper specification of where the state of the system lies than is available from a single measurement. If computer memory and speed are cheaper than extremely accurate sensors, then a control strategy such as the one described in the proof of Proposition 2.2 would seem to make sense. More general results based on these observations appear in [7]-[9]; these results have the flavor of a control systems version of some of Shaw's work (see [29]) and are also vaguely reminiscent of the so-called successive approximation method of analog-to-digital conversion in signal processing (see [26], for example). Connections with the theory of differential inclusions and guaranteed state estimation (cf. [14]) are currently being explored.

It is natural to ask what happens if we attempt "naively" to implement a feedback law that would stabilize the system in the absence of quantization. More precisely, suppose once again that q in (2) is a directionwise uniform quantizer on R^n as in Proposition 2.2; suppose also that (A, B) is a reachable (or discrete-time stabilizable) pair, and that the (m × n) matrix F has been chosen so that (A − BF) has eigenvalues strictly inside the unit disk. Define a feedback control law of the form (2) via

u_i(k) = −Σ_{j=1}^{n} [F]_{ij} Δ_j q_{Δ_j}(x_j(k)),   1 ≤ i ≤ m.   (3)

Specifying u by (3) amounts to substituting for x(k) the central point in a given quantization block when q_Δ indicates that x(k) lies in that block; in other words, (3) represents the control strategy u(k) = −F x̂(k), where

x̂(k) = (Δ_1 q_{Δ_1}(x_1(k)), ..., Δ_n q_{Δ_n}(x_n(k))).   (4)

Such a strategy is in keeping with the orthodoxy of the quantization-is-just-approximation dogma. It is easy to show in this case that there is a bounded region that every trajectory of the closed-loop system enters eventually and from which no trajectory escapes.

Proposition 2.3: In the context above, let γ < 1 exceed the magnitude of the largest eigenvalue of (A − BF) and let Δ be the largest of the {Δ_i}. Then there exists an ellipsoid D centered at the origin in R^n such that for every x_0 ∈ R^n there is an N > 0 depending on x_0 for which x(k) ∈ D for all k ≥ N. If x_0 ∈ D, then x(k) ∈ D for every k ≥ 0.

Proof: We can find an invertible (n × n) matrix P so that ||P^{-1}(A − BF)Px|| < γ||x|| for every x ∈ R^n; for example, pick P so that P^{-1}(A − BF)P has a Jordan-type form with small numbers off the diagonal instead of ones. (Here, || || stands for the Euclidean norm on R^n.) It follows directly that when strategy (3) is invoked, the closed-loop trajectory k → x(k) subject to x(0) = x_0 satisfies

||P^{-1} x(k+1)|| ≤ γ ||P^{-1} x(k)|| + c

for every k ≥ 0, where c is a constant determined by P, B, F, and Δ that bounds ||P^{-1}BF(x(k) − x̂(k))||. Hence, if x_0 is in the ellipsoid D defined by

D = {x ∈ R^n: ||P^{-1} x|| ≤ c/(1 − γ)},

then x(k) ∈ D for every k ≥ 0. Moreover, it is clear that every closed-loop trajectory eventually enters D, and must stay there forever by our earlier work. □

It is worth noting the following simple extension of Proposition 2.3 to nonlinear feedback laws: if f: R^n → R^m is a uniformly Lipschitz function for which the solutions to x(k+1) = Ax(k) + Bf(x(k)) decay uniformly and exponentially to zero, then the solutions to x(k+1) = Ax(k) + Bf(x̂(k)), in the notation of (4), eventually enter a bounded positively invariant region D containing the origin in R^n. While an "eventual boundedness" assertion along the lines of the not-surprising Proposition 2.3 might be the end of some stability studies, for us it is the beginning; techniques for investigating precisely what happens inside the region D will be the focus of the next two sections. We shall see in some simple examples that the dynamics of the closed-loop system restricted to D, while generally quite complicated, are indeed amenable to analysis.

III. NAIVE STABILIZATION: DETERMINISTIC ANALYSIS

The results of Section II imply that under suitable conditions there exist many control laws of the form (2) that cause closed-loop trajectories of (1) to remain bounded. We shall see in this section that the long-term behavior of these trajectories under the influence of such control laws is quite complicated, and is not assessed accurately by standard difference equation stability theory or by signal-plus-noise analysis; nonetheless, there are certain cases in which we are able to make quantitative assertions about it by using techniques from the ergodic theory of dynamical systems. Specifically, we investigate the dynamics of the closed-loop system one obtains from (1) when A is unstable by applying a feedback control law of the form (3) with "stabilizing" F. Proposition 2.3 guarantees the existence of a region D ⊂ R^n that "attracts" every closed-loop trajectory and is invariant under the closed-loop dynamics. Denote by G: R^n → R^n the mapping that takes x(k) to x(k+1) in the closed-loop system; thus

x(k+1) = G(x(k)) = Ax(k) − BF x̂(k)   (5)

in the notation of (4). The piecewise affine map G is likely to have many fixed points other than zero; it will generally have many periodic points as well. These last observations will be true (see [7] and [27]) even when all of A's eigenvalues are stable, in which case every equilibrium and periodic solution of (5) will be locally asymptotically stable; such a situation is reminiscent of the existence of constant-input limit cycles in digital filters ([24]). In the present context, none of the equilibria or periodic solutions of (5) is stable, since the linearization of G about such a trajectory is A or a power of A. The mapping G is, in some sense, locally expanding but globally contracting, and one expects that the trajectories of the difference equation (5) will behave chaotically.
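As a concrete illustration of the closed-loop map (5), the following Python sketch applies the naively quantized feedback to a sample unstable system; the particular A, B, F, and sensitivities are ours, chosen only so that A − BF has eigenvalues inside the unit disk.

```python
import numpy as np

# Example data (ours): unstable A, single input, F placing the
# eigenvalues of A - BF at 0.4 and 0.5.
A = np.array([[2.0, 1.0],
              [0.0, 0.5]])
B = np.array([[1.0],
              [0.0]])
F = np.array([[1.6, 1.0]])
deltas = np.array([0.5, 0.5])           # quantizer sensitivities

def q(xi, d):                            # scalar uniform quantizer
    return np.sign(xi) * np.floor(abs(xi) / d + 0.5)

def xhat(x):                             # block center containing x, as in (4)
    return np.array([d * q(xi, d) for xi, d in zip(x, deltas)])

def G(x):                                # closed-loop map (5)
    return A @ x - B @ (F @ xhat(x))

x = np.array([3.7, -2.2])
for k in range(12):
    print(k, x)
    x = G(x)
```

Because the feedback sees only the block center x̂(k), the iterates bounce around a neighborhood of the origin instead of decaying to zero, as the rest of this section examines.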

It is indeed the case that the dynamics of (5) inside D are chaotic no matter how one chooses the unstable A and stabilizing F in (5). It might be interesting to proceed, as in many other investigations (e.g., [30], [31]) of chaotic behavior, to attempt to analyze the fine geometric properties of the chaos; we choose instead to concentrate on its statistical features. Although the system is entirely deterministic, its asymptotic behavior is pseudorandom, and it is therefore arguable that the macroscopically "observable" properties of the asymptotic behavior of (5) are purely statistical in character. The paper [13], which is particularly remarkable in that it was written long before anyone had sought to make rigorous the notion of chaotic behavior, draws similar conclusions about nonlinear sampled-data control systems; one can regard the results of Section IV below as carrying the program outlined in [13] a few steps further.

One time-honored approach to understanding the long-term statistical behavior of (5) begins by modeling the sequence {e(k) = x(k) − x̂(k)} as a uniform white noise sequence, in which case (5) becomes a stable linear system driven by white noise:

x(k+1) = (A − BF)x(k) + BFe(k).

As it happens, modeling the quantization "errors" in this fashion as independent uniform random variables can be quite misleading. We observe for future reference that the last equation leads one to expect that the state x(k) of the closed-loop system will have an asymptotic probability distribution resembling that of the random variable

Z = Σ_{k=0}^{∞} (A − BF)^k BF e(k).

Unless A − BF = 0, Z's probability density is "bell-shaped," is centered on the origin in R^n, and is nonzero as far away from the origin as (1/2) Δ_min ||BF||_∞ ||(I − (A − BF))^{-1}||_∞, where Δ_min is the smallest of the quantizer sensitivities {Δ_j: 1 ≤ j ≤ n}.

For the remainder of the paper, we specialize to the case where the state in (5) has dimension 1; many interesting features of the higher dimensional case appear even when n = 1, and setting n = 1 enables us to compute exactly many important quantities that we would only be able to approximate if n were larger. Accordingly, suppose that a ∈ R has absolute value greater than 1 and b is a nonzero row m-vector. Given the difference equation

x(k+1) = ax(k) + bu(k),   k ≥ 0,

let f ∈ R^m be such that |a − bf| ≤ 1, and consider the "closed-loop" difference equation obtained by setting u(k) = −f Δ q_Δ(x(k)), k ≥ 0, where q_Δ is the uniform quantizer with sensitivity Δ. As in Proposition 2.3, this difference equation determines the evolution of the state k → x(k) when the feedback control u = −f x(k) is "naively" implemented by substituting for x(k) the center point of the quantization block in which q_Δ(x(k)) determines x(k) to lie. The precise closed-loop difference equation is

x(k+1) = G(x(k)) = ax(k) − bf Δ q_Δ(x(k)),   k ≥ 0.   (6)

The mapping G: R → R is piecewise linear, and has a jump discontinuity at each point (k + 1/2)Δ, k ∈ Z. Taking P = 1 in Proposition 2.3, we find that G maps the interval D into itself, where

D = [−Δ|bf| / (2(1 − |a − bf|)),  Δ|bf| / (2(1 − |a − bf|))].   (7)

One can show easily that D is precisely the interval that supports the density of the random variable Z defined above; so far, in other words, we have no reason to suspect the inadequacy of signal-plus-white-noise techniques for analyzing (6).
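A few lines of Python make it easy to watch (6) in action and to confirm the eventual boundedness guaranteed by Proposition 2.3. The parameter values below are ours, and the bound used for D is the interval written in (7).

```python
import numpy as np

a, bf, Delta = 2.5, 1.7, 1.0          # |a| > 1, 0 < a - bf < 1 (example values, ours)

def q(x):                              # uniform quantizer with sensitivity Delta
    return np.sign(x) * np.floor(np.abs(x) / Delta + 0.5)

def G(x):                              # closed-loop map (6)
    return a * x - bf * Delta * q(x)

r_D = Delta * abs(bf) / (2.0 * (1.0 - abs(a - bf)))   # half-length of D as in (7)

rng = np.random.default_rng(0)
for x0 in rng.uniform(-20.0, 20.0, size=10):           # initial conditions, some outside D
    x = x0
    entered = None
    for k in range(400):
        if entered is None and abs(x) <= r_D:
            entered = k
        if entered is not None:
            assert abs(x) <= r_D + 1e-9, "left D after entering it"
        x = G(x)
    print(f"x0 = {x0:7.2f}: entered D (|x| <= {r_D:.3f}) at step {entered}")
```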


We can, however, do quite a bit better in terms of specifying an interval centered on zero that is invariant under the closed-loop dynamics of (6) and "supports" the long-term behavior of x(k). To this end, let x^* ∈ R denote the infimum of all positive values of x that satisfy G([−x, x]) ⊂ [−x, x]. Thus, [−x^*, x^*] is the smallest interval symmetric about zero that is invariant under the closed-loop dynamics. A "reflected" version of the following lemma holds when a < −1.

Lemma 3.1: In the above context, suppose that a > 1. If 0 ≤ a − bf < 1, then

x^* = [a/2 + N_+ (a − bf)] Δ,

where N_+ is the smallest nonnegative integer N satisfying

N ≥ [a − 3 + 2a^{-1}(a − bf)] / (2[1 − (a − bf)]);

when −1 < a − bf < 0, an analogous formula holds with an integer N_− determined by the "lower peaks" of the graph of G.

Proof: A simple examination of the graph of G is enough to prove the lemma. First, suppose that (a − bf) > 0. The graph of G is linear on each interval ((N − 1/2)Δ, (N + 1/2)Δ) with slope a. The "upper peaks" of the graph lie on a line of slope (a − bf) passing through the point ((1/2)Δ, (1/2)aΔ). It is not hard to show that x^* is the smallest x such that G(y) ≤ x for all y ∈ [0, x]. The picture in Fig. 2 shows how N_+ is determined, and shows that x^* has the value given in the conclusion of the lemma. The basic idea is that x^* is the image of the right end point of the first interval (counting out from zero) whose image under G does not engulf the fixed point of G lying in the next interval. If (a − bf) < 0, then the graph of G is again linear on each interval ((N − 1/2)Δ, (N + 1/2)Δ), but it is the lower peaks that determine x^*; these peaks lie on a line of slope (a − bf) < 0 that [...].

The next lemma treats the case |a| > 2; when 1 < |a| < 2, the situation is more complicated.

Lemma 3.2: With notation as in Lemma 3.1, suppose that |a| > 2. If x → |G(x)|, with G as in (6), has no fixed points outside of D^* = [−x^*, x^*], then every trajectory of (6) enters D^* eventually. If x → |G(x)| has fixed points outside of D^*, then the set of initial conditions for (6) whose trajectories never enter D^* is contained in a Cantor set whose Lebesgue measure is zero.

Proof: Note that if x → |G(x)| has no fixed points outside of D^*, then |G(x)| < x for every x outside of D^*, and a simple ω-limit set argument (see, e.g., [12]) shows that for every x_0 ∈ R, G^k(x_0) ∈ D^* for some k > 0 that depends upon x_0. Now, suppose that x → |G(x)| does have a fixed point outside of D^*; assuming that a > 2 and 0 < a − bf < 1 (a similar argument works for other choices of the signs of a and a − bf) implies that fixed points of x → |G(x)| are fixed points of G. Let x̄^(1) be the fixed point of G closest to x^* on the left; the graph of G to the right of x̄^(1) looks like Fig. 4(a), where we have labeled with x̄^(2) the next largest fixed point of G. Evidently, the set A of initial conditions in [x̄^(1), x̄^(2)] whose trajectories never enter D^* is contained in the set of those whose trajectories remain in [x̄^(1), x̄^(2)] forever; observe that A is given by

A = ∩_{k>0} G^{-k}([x̄^(1), x̄^(2)]).

Consider now the graph in Fig. 4(b), which represents the restriction to [x̄^(1), x̄^(2)] of a slightly different mapping G̃ obtained by "extending" the left branch of G so it "peaks out" at x̄^(2). Trajectories under G̃ emanating from initial conditions in I_1 leave [x̄^(1), ∞) in one time step; trajectories originating in I_01 and I_02 leave in two time steps; and so on. Thus, the set

Ã = ∩_{k≥0} G̃^{-k}([x̄^(1), x̄^(2)])

of initial conditions in [x̄^(1), x̄^(2)] whose trajectories under G̃ remain in [x̄^(1), x̄^(2)] forever is a Cantor set; its Lebesgue measure λ(Ã) is easily found to be zero, where λ denotes Lebesgue measure. Returning to the mapping G, we see that

G^{-k}([x̄^(1), x̄^(2)]) ⊂ G̃^{-k}([x̄^(1), x̄^(2)])

for every k ≥ 0; it follows that the set A of x_0 ∈ [x̄^(1), x̄^(2)] whose trajectories under G remain forever to the right of x̄^(1) is a subset of the Cantor set Ã. The proof of the lemma is completed by a simple induction on the number of fixed points of G lying to the right of D^*. □
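The quantities N_+ and x^* in Lemma 3.1 (in the form reconstructed above) are easy to compute and to sanity-check numerically; the Python sketch below does both for a sample parameter set of our choosing.

```python
import math
import numpy as np

def n_plus(a, c):
    """Smallest nonnegative integer N with N >= (a - 3 + 2c/a) / (2(1 - c)),
    for a > 1 and 0 <= c = a - bf < 1 (Lemma 3.1 as reconstructed)."""
    t = (a - 3.0 + 2.0 * c / a) / (2.0 * (1.0 - c))
    return max(0, math.ceil(t))

def x_star(a, bf, delta):
    c = a - bf
    return (a / 2.0 + n_plus(a, c) * c) * delta

a, bf, Delta = 2.5, 1.7, 1.0        # example values (ours); a - bf = 0.8
xs = x_star(a, bf, Delta)
print("N_+ =", n_plus(a, a - bf), " x* =", xs)

# Sanity check on a dense grid: G maps [-x*, x*] into itself, and a slightly
# smaller symmetric interval is not invariant.
def G(x):
    qx = np.sign(x) * np.floor(np.abs(x) / Delta + 0.5)
    return a * x - bf * Delta * qx

ys = np.linspace(-xs, xs, 200001)
assert np.max(np.abs(G(ys))) <= xs + 1e-9
shrunk = 0.98 * xs
ys = np.linspace(-shrunk, shrunk, 200001)
assert np.max(np.abs(G(ys))) > shrunk
print("invariance of [-x*, x*] (and minimality, on this grid) verified")
```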

92 1

DELCHAMPS: STABILIZING A LINEAR SYSTEM

One can construe Lemma 3.2 as justifying our assertion that when |a| > 2 the long-term dynamics of (6) are exhibited entirely in the dynamics of G restricted to D^* = [−x^*, x^*]. Before proceeding with the analysis of precisely what happens inside D^*, a few remarks are in order about how Lemmas 3.1 and 3.2 begin to unmask the signal-plus-white-noise model as an unreliable predictor of the long-term behavior of (6). The interval D in (7) contains the interval D^* by construction; in fact, D contains all the fixed points of G, including those that lie outside of D^*. Thus, if G has fixed points outside of D^*, then the interval D "overestimates" the eventual excursions of the trajectories of (6). It is surprising how much D and D^* can differ. If we are given bounds on a > 0 and bf, and let a − bf approach 1 from below, then the interval D, the variance of the random variable Z, and the number of fixed points of G grow without bound. Nonetheless, it is quite possible to let a − bf approach 1, keeping bounds on a and bf, without changing the value of N_+ in Lemma 3.1; not changing N_+ and keeping a and bf bounded imply together that D^* stays bounded.

Consider the special case a = 2; from Lemma 3.1, we see that N_+ = 0 for every choice of a − bf ∈ [0, 1), which means that D^* = [−Δ, Δ] for all a − bf ∈ [0, 1) when a = 2. More generally, it turns out that given a specific value of N_+, there are choices of a ∈ (2, 3) and a − bf ∈ (0, 1), leading to the given N_+ via Lemma 3.1, for which G in (6) has arbitrarily many fixed points. The argument for this last assertion is somewhat tedious, but the result seems plausible when one reflects on how N_+ varies when a is close to 2. For more details, see [7]. We shall see below that in many cases when |a| > 2, almost every trajectory of (6) is dense in the interval D^* = [−x^*, x^*]. Under these circumstances, the number 3 + 2N_+ or 3 + 2N_− indicates the number of quantization blocks to which the typical closed-loop trajectory makes infinitely many visits; it is interesting that N_+ and N_− depend on a, b, and f but not on the quantizer sensitivity Δ.

It is also interesting to see what happens to N_+, N_−, and D^* when one varies some of the parameters in (6) while keeping others fixed. Suppose, for example, that a > 2 is fixed and f is allowed to vary subject to 0 < a − bf < 1; a straightforward calculation shows that x^* increases linearly with a − bf except at a sequence {α_k}, 0 ≤ k < ∞, of "closed-loop poles;" at these points, the value of x^* jumps in response to a change in N_+. The precise formula for α_k is

α_k = [ka − (1/2)a(a − 3)] / (ka + 1),

and {α_k} accumulates on 1 as k → ∞. When the feedback gain f is adjusted smoothly so that the closed-loop pole a − bf passes through one of the α_k's, the qualitative behavior of the closed-loop system undergoes a significant change. The {α_k}, in this fashion, mark bifurcation points of a sort. Fig. 5 depicts a somewhat more elaborate bifurcation diagram for (6). Plotted on the horizontal axis is the open-loop pole a over the range [1, 11], and on the vertical axis the closed-loop pole a − bf over [0, 1). The curves demarcate parameter regions of constant N_+; the numbers in the regions are the values of N_+ associated with the corresponding parameter regimes.

IV. NAIVE STABILIZATION REVISITED: INVARIANT MEASURES

It is time now to introduce the mathematical constructions we will need in order to analyze the long-term statistical behavior of trajectories of (6). Some of the tools we will be using are of fairly recent vintage; the ergodic theory of dynamical systems has advanced rapidly over the last decade or so, and some of its newest developments are especially well tailored to the applications considered here. Suppose that X is a set equipped with a probability measure λ defined on a σ-field B of subsets of X; let G: X → X be a mapping that is measurable with respect to B. With notation as in Section III, it is useful to keep in mind that in what follows we will be taking X to be the interval D^*, B to be the Borel field of D^*, λ to be normalized Lebesgue measure on D^*, and G to be the mapping G from (6) restricted to D^*. If μ: B → [0, 1] is a probability measure on X, then we can define a new such measure Ĝ(μ) by setting

Ĝ(μ)(V) = μ(G^{-1}(V))

for each V ∈ B. Probabilistically speaking, if x is a random variable that takes values in X and is distributed according to μ, then the random variable G(x) is distributed according to Ĝ(μ). If Ĝ(μ) = μ, then the measure μ is said to be invariant under G; hence, if μ is invariant under G and x is an X-valued random variable distributed according to μ, then {G^k(x): k ≥ 0} is a stationary X-valued random process.

We say that G is nonsingular with respect to λ if λ(V) = 0 implies that Ĝ(λ)(V) = 0 for V ∈ B. It follows readily that if G is nonsingular with respect to λ and μ: B → [0, 1] is a probability measure on X that is absolutely continuous with respect to λ, then Ĝ(μ) is also absolutely continuous with respect to λ. In this case, if μ has density φ_μ ∈ L^1(λ) with respect to λ, then we denote by P_G(φ_μ) the density φ_{Ĝ(μ)} of Ĝ(μ) with respect to λ. The mapping φ → P_G(φ) extends to a linear transformation P_G: L^1(λ) → L^1(λ); this transformation is called the Perron-Frobenius operator corresponding to the nonsingular transformation G: X → X. One can check easily that the Perron-Frobenius operator P_G corresponding to G maps nonnegative functions to nonnegative functions, and preserves their integrals with respect to λ. In fact, ||P_G|| = 1, where || || denotes the operator norm induced by the usual norm on L^1(λ). If μ is absolutely continuous with respect to λ and is also invariant under G, then the density φ_μ of μ with respect to λ is a fixed point of the Perron-Frobenius operator P_G.

Observe that the mapping G: D^* → D^* defined by (6) is Borel measurable and is nonsingular with respect to normalized Lebesgue measure on D^*. The Perron-Frobenius operator P_G: L^1(λ) → L^1(λ) corresponding to G is therefore well defined, and takes the simple form

P_G(φ)(x) = (d/dx) ∫_{G^{-1}([−x^*, x))} φ(t) dλ(t),   x ∈ D^*.
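Fixed points of P_G can also be approximated numerically. The sketch below uses a crude Ulam-type discretization (a standard device, not taken from the paper): it partitions D^* into small bins, estimates how G moves mass between bins, and power-iterates the resulting stochastic matrix. The parameter values, bin counts, and names are ours.

```python
import numpy as np

a, bf, Delta = 3.0, 2.5, 1.0       # a - bf = 0.5; example values (ours)
x_star = 2.0                        # x* from Lemma 3.1 for these parameters

def G(x):
    qx = np.sign(x) * np.floor(np.abs(x) / Delta + 0.5)
    return a * x - bf * Delta * qx

M = 400                                         # number of bins over D* = [-x*, x*]
edges = np.linspace(-x_star, x_star, M + 1)
width = edges[1] - edges[0]

# Row-stochastic Ulam matrix: P[i, j] ~ fraction of bin i mapped into bin j.
P = np.zeros((M, M))
samples_per_bin = 200
for i in range(M):
    pts = np.linspace(edges[i], edges[i + 1], samples_per_bin + 2)[1:-1]
    dest = np.clip(np.searchsorted(edges, G(pts), side="right") - 1, 0, M - 1)
    for j in dest:
        P[i, j] += 1.0 / samples_per_bin

# Power-iterate a probability vector to approximate an invariant one.
p = np.full(M, 1.0 / M)
for _ in range(2000):
    p = p @ P
p /= p.sum()

density = p / width                 # approximate invariant density on D*
centers = 0.5 * (edges[:-1] + edges[1:])
print("approx. mass of the zero block (-Delta/2, Delta/2):",
      density[np.abs(centers) < 0.5 * Delta].sum() * width)
```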

Modern ergodic theory enables one to rephrase questions about the long-term statistical behavior of trajectories of (6) in terms of various properties of the corresponding Perron-Frobenius operator P_G. Lucid expositions of the results that we will be employing appear in the recent books [15], [19], and [32] and in some key papers that have been published during the last 15 years, most notably [16]-[18]. Our principal objective is to determine whether there is a probability measure μ^* on D^* that reflects the long-term statistical properties of (6) in the sense that

lim_{L→∞} (1/L) Σ_{k=0}^{L−1} 1_V(x(k)) = μ^*(V)

for "typical" trajectories k → x(k), k ≥ 0, of (6), where 1_V denotes the characteristic function of an arbitrary Borel set V ⊂ D^*. Any such measure μ^* will have to be invariant under G; furthermore, in order to be able to make statements about long-term time averages, G will have to be ergodic with respect to μ^*. There are, in general, many measures on D^* that are invariant under G and with respect to which G is ergodic. For example, if P ∈ D^* is a fixed point of G, then G preserves and is ergodic with respect to the measure that assigns probability one to any Borel set containing P and probability zero to any set that does not contain P. In the present context, such a pure point measure does not make "physical sense;" all the fixed points of G are unstable, so the measure will reflect the asymptotic behavior of "atypical" trajectories only. Our intuition compels us to believe that most trajectories of (6) will be spread out across the interval D^*; what we seek, therefore, is a measure μ^* on D^* that is absolutely continuous with respect to λ, is preserved by G, and with respect to which G is ergodic. By applying the main result of [17], we obtain the following preliminary assertion along these lines.

Theorem 4.1: In (6), let Δ > 0 be given; suppose |a| > 1 and |a − bf| < 1. Let D^* = [−x^*, x^*] be defined as in Lemma 3.1, and let λ be normalized Lebesgue measure on D^*. If φ ∈ L^1(λ), then the sequence {(1/L) Σ_{i=0}^{L−1} P_G^i(φ)} converges in the L^1(λ)-norm as L → ∞ to a function φ_∞ ∈ L^1(λ). Moreover, φ_∞ is a fixed point of P_G. □

If we take φ to be the density with respect to λ of some Borel measure μ on D^*, then Theorem 4.1 attests to the existence of a measure μ_∞ on D^* that is absolutely continuous with respect to λ and is also invariant under G. In particular, we can obtain an absolutely continuous invariant measure λ_∞ by setting φ ≡ 1. In the absence of an assertion about ergodicity, the mere existence of such an invariant measure is not enough to enable us to make any strong statements about long-term behavior of trajectories of (6). As it happens, there sometimes exist several absolutely continuous measures on D^* that are preserved by G. For example, let a = 3/2 and bf = 5/8 in (6). In this case, x^* = (3/4)Δ, and the graph of G on D^* looks like Fig. 6. It is evident that every closed-loop trajectory hits the interval (−Δ/8, Δ/8) finitely often, and subsequently becomes trapped either in [Δ/8, (3/4)Δ] or in [−(3/4)Δ, −Δ/8]. Any measure μ^* invariant under G must therefore assign measure zero to the interval (−Δ/8, Δ/8). If we "initialize" P_G in Theorem 4.1 with φ = 1_{[0, (3/4)Δ]}, then the limiting φ_∞ will be concentrated on [Δ/8, (3/4)Δ], whereas the limiting φ_∞ arising from Theorem 4.1 initialized with φ = 1_{[−(3/4)Δ, 0]} is concentrated on [−(3/4)Δ, −Δ/8]. (It is worth mentioning that all trajectories of the closed-loop system in this example are eventually bounded away from zero by one eighth of a quantization block width, creating a scenario that signal-plus-noise analysis would never predict.)
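The trapping behavior in this example is easy to confirm numerically; the short Python sketch below (ours) simulates (6) with a = 3/2 and bf = 5/8 and checks that, after a transient, every sampled trajectory stays in one of the two intervals [Δ/8, (3/4)Δ] or [−(3/4)Δ, −Δ/8].

```python
import numpy as np

a, bf, Delta = 1.5, 0.625, 1.0
x_star = 0.75 * Delta                        # D* = [-3*Delta/4, 3*Delta/4]

def G(x):
    qx = np.sign(x) * np.floor(abs(x) / Delta + 0.5)
    return a * x - bf * Delta * qx

rng = np.random.default_rng(1)
for x0 in rng.uniform(-x_star, x_star, size=20):
    if abs(x0) < 1e-3:                       # skip a tiny neighborhood of the fixed point 0
        continue
    x, tail = x0, []
    for k in range(400):
        x = G(x)
        if k >= 200:
            tail.append(x)
    tail = np.array(tail)
    assert np.all(np.abs(tail) >= Delta / 8 - 1e-12)
    assert np.all(np.abs(tail) <= 0.75 * Delta + 1e-12)
    assert np.all(tail > 0) or np.all(tail < 0)   # trapped on one side of zero
print("all sampled trajectories trapped away from zero, as claimed")
```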


It is, in general, quite difficult to prove that G is ergodic with respect to a given measure μ^* on D^*. One convenient criterion is the following one (see, e.g., [15]): if there exists only one invariant probability measure μ^* on D^* that is absolutely continuous with respect to normalized Lebesgue measure λ, and if the density φ^* of μ^* is positive λ-almost everywhere in D^*, then G is ergodic with respect to μ^*. In order to apply this condition to the problem at hand, we will need [18, Theorem 1], whose relevant implications can be summarized as follows.

Theorem 4.2 [18]: Under the assumptions of Theorem 4.1, there exists an integer r > 0 and absolutely continuous invariant probability measures {μ_i^*: 1 ≤ i ≤ r} such that any absolutely continuous invariant probability measure μ^* on D^* can be expressed as a convex combination of the {μ_i^*}. Furthermore, the {μ_i^*} are singular with respect to each other; each μ_i^* is supported on a set Λ_i that is a finite union of disjoint subintervals of D^*, and the density of μ_i^* with respect to λ is positive λ-almost everywhere in Λ_i. Moreover, each Λ_i is the ω-limit set (under G) of μ_i^*-almost every (and hence λ-almost every) x ∈ Λ_i. □

Our strategy for proving Theorem 4.3 below, which is the main result of this section, is essentially to demonstrate that r = 1 in Theorem 4.2 under the given conditions on the various parameters in (6). From Theorems 4.1 and 4.2 it follows that there is a unique probability measure μ^* on D^* that is absolutely continuous with respect to λ, and from Theorem 4.2 that the density of μ^* with respect to λ is λ-almost everywhere positive; consequently, G is ergodic with respect to μ^*, and μ^* summarizes neatly the long-term statistical behavior of trajectories of (6). (It is our strong belief that Theorem 4.3 holds for general |a| > 2 and |a − bf| < 1 in (6), but we have not managed yet to prove it.)

Theorem 4.3: In (6), suppose either that a = 2 or that a > 2 is an integer and (a − bf)(N_+ + 1) ≤ (1/2)a, in the notation of Lemma 3.2. Then: i) there exists on D^* exactly one probability measure μ^* that is both invariant under G and absolutely continuous with respect to normalized Lebesgue measure λ on D^*. The density of μ^* is positive λ-almost everywhere in D^*, and G is ergodic with respect to μ^*; ii) almost every trajectory of (6) is dense in D^*; and iii) lim_{L→∞} {(1/L) Σ_{i=0}^{L−1} P_G^i(φ)} = φ^*, where φ^* is the density of μ^* with respect to λ and φ ∈ L^1(λ) is the density of an arbitrary probability measure on D^* absolutely continuous with respect to λ.

Proof: Our argument is inspired by a classic manipulation performed by Renyi in [25].


The condition on a − bf ensures that each interval I_j = ((j − 1/2)Δ, (j + 1/2)Δ) intersecting D^* contains a zero of G; the zero that lies in I_j is bf j Δ/a. We show first that the fixed point zero has a dense set of "eventual inverse images" in D^*; in other words, there exists a dense subset of points x_0 ∈ D^* for which G^k(x_0) = 0 for some k ≥ 0 depending on x_0. Given integers {d_m: 0 ≤ m ≤ l} less than a in absolute value, consider an x_0 ∈ D^* that takes the form

x_0 = Σ_{m=0}^{l} d_m bf Δ / a^{m+1}.

Such an x_0 is a scaled "a-ary rational;" the results of [25] imply that such x_0's are dense in D^*. It is a simple matter to verify that for such an x_0, G^k(x_0) = 0 for some k ≤ l + 1. To see this, note that if l > 1 and d_0 ≠ 0, then

G(x_0) = a x_0 − bf Δ q_Δ(x_0) = a Σ_{m=1}^{l} d_m bf Δ / a^{m+1} + bf Δ [d_0 − q_Δ(x_0)],

where d_0 − q_Δ(x_0) is an integer. Thus, G(x_0) is another scaled a-ary rational whose "length" is l − 1 if x_0's "length" is l. Applying G repeatedly shows that G^k(x_0) has the form bf j Δ/a for some k ≤ l, implying G^{k+1}(x_0) = 0.

Having established the existence of a dense subset of initial conditions in D^* leading to trajectories that eventually go to zero, it is easy to show that if U ⊂ D^* is any open interval, then G^k(U) = D^* for some k ≥ 0. If 0 ∈ U, then G^k(U) contains the "zero quantization block" Z_0 = (−(1/2)Δ, (1/2)Δ) for sufficiently large k. By construction (cf. the proof of Lemma 3.1), G(Z_0) contains the fixed points bf Δ (±1)/(a − 1) ∈ Z_{±1}; it follows that G^k(U) contains the fixed points in Z_{±1} for k large enough. Continuing in this fashion, observing that a repeated application of G maps Z_j onto Z_{j±1} in much the same manner, we see that G^k(U) = D^* for sufficiently large k. If U ⊂ D^* is an arbitrary interval, since U contains interior points that map eventually to zero, we see that G^k(U) contains an open interval about zero for k large enough, so that for some still larger value of k, G^k(U) = D^*. Thus, if U ⊂ D^* is an arbitrary interval, the points on the trajectories emanating from initial conditions in U fill out all of D^* eventually. Consequently, r = 1 in Theorem 4.2, and there can be at most one Borel probability measure μ^* on D^* that is both absolutely continuous with respect to λ and invariant under G. Such a μ^* exists by Theorem 4.1; its density with respect to λ is positive λ-almost everywhere by Theorem 4.2, and so G is ergodic with respect to μ^*, and the proof of i) is complete. Item iii) holds by Theorem 4.1, and ii) follows from Theorem 4.2, since D^* is the only interval in the family {Λ_i: 1 ≤ i ≤ r = 1}. □

Observe that in the proof of Theorem 4.3 we demonstrated that a set of highly "atypical" initial conditions for (6), namely, those whose trajectories hit zero eventually, was dense in D^*. One encounters this sort of phenomenon quite frequently in the ergodic theory of dynamical systems since there are many dense subsets of R^n that have measure zero for reasonable measures. The existence of such sets should make one hesitant to trust blindly the data obtained from computer simulations of systems such as (6), since the finite-precision arithmetic in a computer makes it easy for digitized trajectories to stumble into sets that are measure-theoretically exceptional. An interesting study of related questions is presented in [2].

Theorem 4.3 says that the asymptotic behavior of (6), under many circumstances, has "stable" statistical properties and that these properties are well-reflected in the excursions of typical trajectories. In essence, Theorem 4.3 is an assertion about existence and uniqueness; it gives no hint as to how one might compute the special invariant measure μ^*, or how the properties of μ^* change in response to changes in the parameters a, b, f, and Δ in (6). The authors of [4] prove a strong existence and uniqueness result along the lines of Theorem 4.3 that applies to certain transformations τ: D^* → D^* that, unlike G, are not piecewise linear but merely piecewise twice continuously differentiable. For any transformation τ to which the main result of [4] applies, there exists a special partition of D^* similar to the Lebesgue-Markov partitions defined in [7] and [10] for the piecewise linear transformation G; the section in [19] on Markov transformations contains a lucid exposition of closely related work. It is pointed out in [4] (see also [6]-[8] and [10]) that the existence of an appropriate partition of D^* guarantees that the measure μ^* in Theorem 4.3 can be computed exactly by means of linear algebra and the classical Perron-Frobenius Theorem; in this case, the density of μ^* with respect to λ turns out to be piecewise constant. Even when such a partition does not exist, it turns out (but is difficult to prove) that the convergence in iii) of Theorem 4.3 occurs in the following strong way:

lim_{L→∞} ||P_G^L(φ) − φ^*|| = 0,

where || || is the L^1(λ) norm and φ is an arbitrary initial probability density in L^1(λ). The rate of convergence is geometric; hence, it is to be expected that by iterating a convenient initial density (say, for example, φ ≡ 1) through P_G, one can obtain good approximations to the limiting density φ^*.

Theorem 4.3 gives rise to a host of prescriptive questions, as well. Specifically, suppose we know that many feedback control schemes for a system such as (1) lead to closed-loop systems with well-defined asymptotic statistical properties that are reflected in a limiting invariant measure μ^*. How might we choose from among a family of such schemes so as to optimize some of these asymptotic features? Since our particular concern in the present paper has been stabilization, a statistical rephrasing of the stabilization problem would seem to be in order. For example, given ε > 0, we might consider the problem of choosing f ∈ R^m so as to maximize μ^*([−ε, ε]). Alternatively, we might try to maximize μ^*((−(1/2)Δ, (1/2)Δ)) through choice of feedback control law. It should be remarked that some obvious candidate solutions to these problems are incorrect; for instance, one might expect that choosing f in (6) to minimize the extent of the invariant region D^* would cause the asymptotic measure μ^* to be concentrated more heavily near the origin. Note, however, that D^* = [−Δ, Δ] when a = 2 regardless of the stabilizing f that we choose; if bf = 2, so that the closed-loop system would be "deadbeat" in the absence of quantization, it turns out (see [7]) that μ^* = λ on D^*, so that typical closed-loop trajectories are distributed uniformly over D^*. On the other hand, if bf = 3/2 then μ^*((−(1/2)Δ, (1/2)Δ)) = 2/3, so that μ^* is concentrated more heavily near zero.

Also of evident importance is the problem of "stabilizing" x(k+1) = ax(k) + bu(k) when u(k) is allowed to depend on the entire past history of quantized measurements of x as in Section II. We are currently working on a generalization of Theorem 4.3 asserting that under certain conditions there exists an invariant measure μ^*_{{f^{(k)}}} [with notation as in (2)] that reflects the distribution of closed-loop trajectories for such systems. An interesting statistical definition of stabilizability for a one-dimensional system with quantized measurements might read something like this: for every ε > 0, there exists a feedback control strategy of the form (2) such that μ^*_{{f^{(k)}}}([−ε, ε]) > 1 − ε.

V. CONCLUDING REMARKS

Three central themes of this paper transcend the specific problems we have considered. First and probably most fundamental is the idea that quantization of real numbers can (and sometimes probably should) be viewed as something more than just approximate measurement or specification. Second is the recognition that the presence of quantization can be taken into account explicitly during the control system design process; in other words, a description of the quantization and some knowledge of system dynamics serve together to enhance one's understanding of what closed-loop dynamical behaviors are possible and, in particular, what kinds of information will be (or can be made) available to a controller once it is implemented. Third is the observation that even if one adopts a more traditional approximation-oriented view of measurement quantization, then the long-term behavior


one sees in a feedback control system containing quantizers, although amenable to probabilistic analysis, can be quite different from what a "white noise" model for quantization "errors" would predict. It is precisely the modern ergodic theory of dynamical systems, which has developed rapidly during the last 20 years in response to the burgeoning interest among applied scientists in complicated dynamics and chaos, that makes feasible a careful statistical study.

We are currently investigating various generalizations and extensions of the results described in Sections I-IV. Of particular interest to us are higher dimensional generalizations of the measure-theoretic results of Section IV; the ergodic theory of dynamical systems on R^n, although not nearly as well-developed as the corresponding theory for one-dimensional maps, is well-suited to the analysis of piecewise linear systems such as (5). We are also seeking connections between the results of Section II and the theory of variable structure systems along with the growing volume of research on differential inclusions and guaranteed state estimation (see [14]), and are searching for a nexus between our results and some very recent work in discrete event systems such as that described in [24].

We close with some comments about the robustness of the various results of Sections II-IV. As we remarked at the end of Section IV, it can be somewhat dangerous to rely on computer simulation to figure out approximations to the ergodic invariant measure μ^*. As it happens, the results of all but the most carefully designed simulations of this kind are suspect. The reason is that since computers deal only with rational numbers, a computer-simulated trajectory of the closed-loop system is likely to be a periodic trajectory rather than a dense one (i.e., the numbers with which the computer can deal are likely to be in the "set of measure zero" of initial conditions whose trajectories do not fill D^*). We are currently studying the relationships between lengthy periodic trajectories of (6) and the measure μ^*. Such relationships are well known for hyperbolic diffeomorphisms of compact manifolds (see, for example, [3]), but have not been studied too closely in the context of piecewise linear discontinuous mappings of intervals; we anticipate that the results of [27] on piecewise linear difference equations will provide important insights. In Proposition 2.2, it is clear that even if there are some small errors in the "steering strategies" for placing centers of blocks on corner points in the quantization grid, the sides of the blocks will be divided roughly in half every n time steps, and the result of the proposition will still hold. Lemmas 3.1 and 3.2 are particularly interesting from the standpoint of robustness; observe that N_+ and N_− are both locally constant functions of a and bf, and therefore will be the same for fine enough finite precision specification of the system coefficients. In this sense, N_+ and N_− are "robust with respect to computer simulation."

REFERENCES

[1] D. P. Bertsekas and I. B. Rhodes, "Recursive state estimation for a set-membership description of uncertainty," IEEE Trans. Automat. Contr., vol. AC-16, pp. 117-128, 1971.
[2] M. L. Blank, "Ergodic properties of discretizations of dynamical systems," Soviet Math. Dokl., vol. 30, pp. 449-452, 1984.
[3] R. Bowen, Equilibrium States and the Ergodic Theory of Anosov Diffeomorphisms. New York: Springer-Verlag, 1975.
[4] A. Boyarsky and M. Scarowsky, "On the class of transformations which have unique absolutely continuous invariant measures," Trans. Amer. Math. Soc., vol. 255, pp. 243-262, 1979.
[5] R. E. Curry, Estimation and Control with Quantized Measurements. Cambridge, MA: M.I.T. Press, 1970.
[6] D. F. Delchamps, "Asymptotic statistical properties of linear systems operating under quantized feedback," in Proc. 1989 Allerton Conf. Commun., Contr., Comput., Urbana, IL, 1989.
[7] -, "Controlling the flow of information in feedback systems with quantized measurements," in Proc. 28th IEEE Conf. Decision Contr., Tampa, FL, Dec. 1989, pp. 2355-2360.
[8] -, "Expanding maps and the statistical stabilization of linear systems with quantized measurements," in Proc. 1989 Conf. Inform. Sci. Syst., Johns Hopkins University, Baltimore, MD, Mar. 1989, pp. 112-118.
[9] -, "Extracting state information from a quantized output record," Syst. Contr. Lett., vol. 13, pp. 365-372, 1989.
[10] -, "The 'stabilization' of linear systems with quantized feedback," in Proc. 27th IEEE Conf. Decision Contr., Austin, TX, Dec. 1988, pp. 405-410.
[11] R. M. Gray, "Oversampled sigma-delta modulation," IEEE Trans. Commun., vol. COM-35, pp. 481-489, 1987.
[12] J. Guckenheimer and P. J. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. New York: Springer-Verlag, 1983.
[13] R. E. Kalman, "Nonlinear aspects of sampled-data control systems," in Proceedings of the Symposium on Nonlinear Circuit Theory, Volume VII. Brooklyn, NY: Polytechnic Press, 1956.
[14] A. B. Kurzhanski, "Identification - A theory of guaranteed estimates," IIASA Working Paper WP-88-55, Laxenburg, Austria, July 1988.
[15] A. Lasota and M. C. Mackey, Probabilistic Properties of Deterministic Systems. Cambridge, U.K.: Cambridge University Press, 1985.
[16] A. Lasota and J. A. Yorke, "Exact dynamical systems and the Frobenius-Perron operator," Trans. Amer. Math. Soc., vol. 273, pp. 375-384, 1982.
[17] -, "On the existence of invariant measures for piecewise monotonic transformations," Trans. Amer. Math. Soc., vol. 186, pp. 481-488, 1973.
[18] T. Li and J. A. Yorke, "Ergodic transformations from an interval into itself," Trans. Amer. Math. Soc., vol. 235, pp. 183-192, 1978.
[19] R. Mañé, Ergodic Theory and Differentiable Dynamics. New York: Springer-Verlag, 1987.
[20] R. K. Miller, A. N. Michel, and J. A. Farrell, "Quantizer effects on steady-state error specifications of digital feedback control systems," IEEE Trans. Automat. Contr., vol. 34, pp. 651-654, 1989.
[21] P. Moroney, Issues in the Implementation of Digital Compensators. Cambridge, MA: M.I.T. Press, 1983.
[22] P. Moroney, A. S. Willsky, and P. K. Houpt, "The digital implementation of control compensators: The coefficient wordlength issue," IEEE Trans. Automat. Contr., vol. AC-25, pp. 621-630, 1980.
[23] C. T. Mullis and R. A. Roberts, "Synthesis of minimum roundoff noise fixed point digital filters," IEEE Trans. Circuits Syst., vol. CAS-23, pp. 551-562, 1976.
[24] P. J. Ramadge, "On the periodicity of symbolic observations of piecewise smooth discrete-time systems," in Proc. 28th IEEE Conf. Decision Contr., Tampa, FL, Dec. 1989, pp. 125-126.
[25] A. Renyi, "Representations for real numbers and their ergodic properties," Acta Math. Acad. Sci. Hungar., vol. 8, pp. 477-493, 1957.
[26] R. A. Roberts and C. T. Mullis, Digital Signal Processing. Reading, MA: Addison-Wesley, 1987.
[27] M. Scarowsky and A. Boyarsky, "On n-dimensional piecewise-linear difference equations," Nonlinear Anal., Theory, Methods, Appl., vol. 4, pp. 715-731, 1980.
[28] F. C. Schweppe, "Recursive state estimation: Unknown but bounded errors and system inputs," IEEE Trans. Automat. Contr., vol. AC-13, pp. 22-28, 1968.
[29] R. S. Shaw, "Strange attractors, chaotic behavior, and information flow," Zeitschrift für Naturforschung, vol. 36a, pp. 80-112, 1981.
[30] T. Ushio and K. Hirai, "Chaotic behavior in piecewise-linear sampled-data control systems," Int. J. Nonlinear Mech., vol. 20, pp. 493-506, 1985.
[31] T. Ushio and C. S. Hsu, "Chaotic rounding error in digital control systems," IEEE Trans. Circuits Syst., vol. CAS-34, pp. 133-139, 1987.
[32] P. Walters, An Introduction to Ergodic Theory. New York: Springer-Verlag, 1982.
[33] D. Williamson, "Finite wordlength design of digital Kalman filters for state estimation," IEEE Trans. Automat. Contr., vol. AC-30, pp. 930-939, 1985.
[34] G. A. Williamson, "On the effect of output quantization in control systems," M.S. thesis, Cornell Univ., Ithaca, NY, 1987.

David F. Delchamps (M'82) was born in Morristown, NJ, on August 31, 1954. He received the B.S.E. degree from Princeton University, Princeton, NJ, in 1976, and the S.M. and Ph.D. degrees from Harvard University, Cambridge, MA, in 1977 and 1982, respectively. Since January 1982, he has been on the faculty of Cornell University, Ithaca, NY, where he is currently Associate Professor in the School of Electrical Engineering and a member of the Center for Applied Mathematics. He does research primarily in the area of systems and control theory, with particular emphasis on complicated dynamical phenomena and finite-precision effects in feedback control systems. He is the author of the book State Space and Input-Output Linear Systems (New York: Springer-Verlag, 1988) along with several technical articles. Dr. Delchamps has received several departmental and college-level teaching awards at Cornell University.
