Computational Solution of the Algebraic Riccati Equation

Angelika Bunse-Gerstner
Universität Bremen, Fachbereich Mathematik/Informatik, Postfach 330440, D-28334 Bremen, FRG
email: [email protected]
Abstract
The numerical solution of the algebraic Riccati equation is required in a large number of applications such as linear quadratic optimal control problems, differential games and the computation of Kalman filters. This paper gives a survey of computational methods developed and investigated over the last three decades. In particular we discuss Newton's method, the matrix sign function method, defect correction and methods using eigenvalue computations.
§1 Introduction
The algebraic Riccati equation (ARE)

(1.1)  $\mathcal{R}(X) = XA + A^T X + Q - XBR^{-1}B^T X = 0$

arises frequently in control problems. Here $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times m}$, $Q = Q^T \in \mathbb{R}^{n\times n}$ is positive semidefinite, $R = R^T \in \mathbb{R}^{m\times m}$ is positive definite and $X \in \mathbb{R}^{n\times n}$ is the unknown matrix we have to compute. Usually, the desired solution is stabilizing in the sense that the eigenvalues of $A - BR^{-1}B^T X$ have negative real parts. Under mild assumptions (see Section 2) such a stabilizing solution exists and is unique. A typical problem is the linear quadratic optimal control problem, where the control $u$ for the system

(1.2)  $\dot{x}(t) = Ax(t) + Bu(t), \qquad x(0) = x_0$

is sought, which minimizes the cost functional
(1.3)  $\int_0^\infty x^T(t)Qx(t) + u^T(t)Ru(t)\,dt.$
If the system (1.2) is stabilizable and detectable (see Section 2), then the minimizing control $u(t)$ is given by

$u(t) = -R^{-1}B^T X x(t),$

where $X$ is the stabilizing solution of (1.1). Many numerical methods have been developed for solving the ARE and this is still a subject of active research, where new aspects are also coming into play, such as the treatment of large dimensional or singular systems and the development of algorithms for parallel computers. More detailed surveys on numerical methods for the ARE are given in [7],[19]. They contain extensive lists of references which for the sake of brevity cannot be given here. This paper discusses Newton's method for the solution of the ARE, the sign function method, methods based on eigenvalue computations and their variants which exploit the Hamiltonian and symplectic structure of the related eigenvalue problem, as well as an iterative refinement technique.
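As a concrete point of reference for the methods surveyed below, the following minimal sketch (an illustration added here, not part of the original exposition; it assumes Python with NumPy and SciPy, whose routine solve_continuous_are is one off-the-shelf solver for (1.1)) computes the stabilizing solution for a small system and forms the optimal feedback gain.

    # Illustration only: solve (1.1) with SciPy's ARE solver and form the
    # optimal feedback u(t) = -R^{-1} B^T X x(t).
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])      # a double integrator: stabilizable and detectable
    B = np.array([[0.0],
                  [1.0]])
    Q = np.eye(2)                   # Q = Q^T positive semidefinite
    R = np.array([[1.0]])           # R = R^T positive definite

    X = solve_continuous_are(A, B, Q, R)   # stabilizing solution of the ARE (1.1)
    K = np.linalg.solve(R, B.T @ X)        # optimal feedback gain

    # the closed-loop matrix A - B R^{-1} B^T X must have stable eigenvalues
    print(np.linalg.eigvals(A - B @ K))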
§2 From the Theoretical Background
Existence and uniqueness theorems for the solution of (1.1) are well established, see e.g. [19]. A typical result is the following. The pair of matrices $(A, B)$ from (1.1) and the control system (1.2) are called stabilizable if $\mathrm{rank}\,[\lambda I - A,\; B] = n$ for all $\lambda \in \mathbb{C}$ with nonnegative real part. $(A, Q)$ and the system (1.2) are called detectable if $(A^T, Q^T)$ is stabilizable. Stabilizability and detectability guarantee the existence of a stabilizing solution of the ARE:

2.1 Theorem. If $(A, B)$ is stabilizable and $(A, Q)$ is detectable, then (1.1) has a unique stabilizing solution $X \in \mathbb{R}^{n\times n}$. Moreover $X$ is symmetric and positive semidefinite.

Proof: See e.g. [13].

Solving the Riccati equation can be considered as finding an invariant subspace of a $2n \times 2n$ matrix built from the matrices involved in (1.1):

2.2 Theorem. Equation (1.1) holds if and only if

(2.3)  $\begin{bmatrix} A & BR^{-1}B^T \\ Q & -A^T \end{bmatrix} \begin{bmatrix} I & 0 \\ -X & I \end{bmatrix} = \begin{bmatrix} I & 0 \\ -X & I \end{bmatrix} \begin{bmatrix} A - BR^{-1}B^T X & BR^{-1}B^T \\ 0 & -A^T + XBR^{-1}B^T \end{bmatrix},$

where $I$ is the $n \times n$ identity matrix and $0$ is the $n \times n$ zero matrix.

Proof: See e.g. [19]. This is actually very easy to see by just checking the equations which (2.3) establishes for the $n \times n$ submatrices.

Now let $\begin{bmatrix} Z_{11} \\ Z_{21} \end{bmatrix}$ be a $2n \times n$ full rank matrix whose columns span an invariant subspace of

(2.4)  $H = \begin{bmatrix} A & BR^{-1}B^T \\ Q & -A^T \end{bmatrix},$

i.e. there exists an $n \times n$ matrix $\Lambda$, such that

(2.5)  $H \begin{bmatrix} Z_{11} \\ Z_{21} \end{bmatrix} = \begin{bmatrix} Z_{11} \\ Z_{21} \end{bmatrix} \Lambda.$

If $Z_{11}$ is an $n \times n$ invertible matrix, then (2.5) can be postmultiplied by $Z_{11}^{-1}$, resulting in

(2.6)  $H \begin{bmatrix} I \\ Z_{21}Z_{11}^{-1} \end{bmatrix} = \begin{bmatrix} I \\ Z_{21}Z_{11}^{-1} \end{bmatrix} Z_{11}\Lambda Z_{11}^{-1},$
and from Theorem 2.2 we can see that $X = -Z_{21}Z_{11}^{-1}$ solves (1.1). If the system (1.2) is stabilizable and detectable, Theorem 2.1 and Theorem 2.2 together imply that $H$ has exactly $n$ eigenvalues in the open left half plane, called the stable eigenvalues. The corresponding invariant subspace is called the stable invariant subspace. It can be shown [21] that for any $2n \times n$ matrix $\begin{bmatrix} Z_{11} \\ Z_{21} \end{bmatrix}$ containing basis vectors of the stable invariant subspace as its columns, the $n \times n$ matrix $Z_{11}$ is invertible. Thus we can compute the stabilizing solution of the ARE via computing a basis for the stable invariant subspace of the $2n \times 2n$ matrix $H$ from (2.4).

A question that also has to be discussed when numerically solving the ARE is: how much does a slight perturbation of the data in (1.1) perturb the solution $X$? For problems in applications, data are rarely known exactly, e.g. due to the limitation of measurement precision or the neglect of small influences in the modelling. So instead of the ideal matrices $A, B, Q$ and $R$ we have the slightly perturbed matrices $\tilde{A}, \tilde{B}, \tilde{Q}$ and $\tilde{R}$ available on our computer. Moreover, when carrying out an algorithm on a computer to solve the ARE numerically, we will have rounding errors in each basic numerical manipulation due to the limited precision with which real numbers can be represented on this machine. They may add up during the course of the computation to produce a result $\tilde{X}$ on the machine which is not too close to the theoretically correct result $X$. The effect of these rounding errors on the solution is often described by proving that for this special algorithm the computed solution $\tilde{X}$ is the theoretically exact solution of the problem with perturbed data $A + \Delta A$, $B + \Delta B$, $R + \Delta R$ and $Q + \Delta Q$, where bounds can then be given for the magnitude of the entries in the perturbation matrices $\Delta A, \Delta B, \Delta Q$ and $\Delta R$ for this special algorithm. Obviously one can be satisfied if these perturbations are not larger than rounding errors in $A, B, R$ and $Q$ themselves or the above mentioned uncertainty in the data, because this cannot be avoided anyway. A numerical method is called backward stable if the perturbation in the data of the problem corresponding to the effect of rounding errors in the solution is rounding-error-small.

Here again the question is how much such a perturbation will alter the solution. The ARE for the data $A, B, Q$ and $R$ in (1.1) is called ill-conditioned if small relative perturbations in the data can cause large relative deviations in the solution. If the perturbation in the solution is always of the size of the data perturbations, then the problem is called well-conditioned. There is a theoretical measure for the sensitivity of the solution to perturbations in the data, called the condition number of the ARE (see [9]). It is a complicated expression, but it can be explicitly calculated for small $n$ and $m$ by standard package software [9]. Note that being well-conditioned or ill-conditioned is a property of the problem (1.1) with the given data itself and not of the computational method we have chosen for its numerical solution. If the problem is ill-conditioned, even applying the best numerical method may result in a solution which is far away from the theoretically correct one. For a well-conditioned ARE, however, a backward stable numerical method will produce a computed solution $\tilde{X}$ whose error is equivalent to rounding-error-small perturbations in the data and is therefore, because of the small condition number, guaranteed to be small itself. With respect to numerical stability, two computational methods to solve a problem are best compared by comparing the magnitudes of their equivalent perturbations in the data, because thus we can separate the effect of the condition of the problem itself on the error from the rounding error effects of the special algorithm.

Another important aspect for comparing numerical methods is their cost. If we assume that the cost is determined by the computing time, then the number of arithmetic operations which a method requires is a reasonable measure. Usually only the order is given as a function of the dimension in the form $O(n^k)$, meaning that the number of arithmetic operations is bounded by $Cn^k$, where $C$ is a constant independent of $n$.
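The invariant subspace characterization of Theorem 2.2 is easy to verify numerically. The following sketch (an illustration added here, assuming NumPy; the model data are the same small example as in Section 1) builds $H$ from (2.4), extracts a basis of the stable invariant subspace from an eigendecomposition, forms $X = -Z_{21}Z_{11}^{-1}$ and checks the residual of (1.1). An eigenvector basis is used only for illustration; Section 4 discusses the numerically sound Schur approach.

    # Numerical check of Theorem 2.2 (illustration only, assuming NumPy).
    import numpy as np

    n = 2
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([[0.0], [1.0]])
    Q = np.eye(n)
    R = np.array([[1.0]])
    G = B @ np.linalg.solve(R, B.T)          # G = B R^{-1} B^T

    H = np.block([[A, G], [Q, -A.T]])        # the 2n x 2n matrix from (2.4)

    w, V = np.linalg.eig(H)
    Z = V[:, w.real < 0]                     # basis of the stable invariant subspace
    X = (-Z[n:, :] @ np.linalg.inv(Z[:n, :])).real   # X = -Z21 Z11^{-1}

    residual = X @ A + A.T @ X + Q - X @ G @ X       # R(X) from (1.1)
    print(np.linalg.norm(residual))          # of the order of rounding errors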
§3 Newton's Method
Equation (1.1) is a quadratic matrix equation for the unknown symmetric matrix $X$, to which we can apply Newton's method. In this setting it can be introduced in the following way. Let $X_0$ be a symmetric "starting guess" for $X$. A first interesting observation is that the defect $P = X - X_0$ also satisfies an ARE of the same type.

3.1 Theorem. Let $X$ be a symmetric solution of (1.1) and let $X_0 \in \mathbb{R}^{n\times n}$ be a symmetric approximation of $X$. Then $P = X - X_0$ satisfies the ARE

(3.2)  $\mathcal{R}(X_0) + P\hat{A} + \hat{A}^T P - PBR^{-1}B^T P = 0,$

where $\hat{A} = A - BR^{-1}B^T X_0$ and the residual $\mathcal{R}(X_0)$ is defined in (1.1).

Proof: See [20].

If we assume that $X_0$ is a good approximation of $X$ and therefore $P$ is small, then we can neglect quadratic terms in $P$ and get $\mathcal{R}(X_0) + P\hat{A} + \hat{A}^T P \approx 0$. We can thus assume that the symmetric solution $\hat{P}$ of the equation $\mathcal{R}(X_0) + \hat{P}\hat{A} + \hat{A}^T \hat{P} = 0$ gives a good approximation to $P$, such that the updated approximation $X_1 = X_0 + \hat{P}$ is (hopefully) closer to $X$ than $X_0$. For $X_1$ the defect $X - X_1$ will then be even smaller than $P$, and we can repeat the process with $X_1$ instead of $X_0$. This is the idea of Newton's method in this case, and the algorithm can be sketched as follows.

3.3 Algorithm. Newton's Method for the ARE
INPUT: Matrices $A, B, Q, R$ from (1.1) satisfying the hypothesis of Theorem 2.1 and a symmetric starting guess $X_0 \in \mathbb{R}^{n\times n}$.
OUTPUT: Approximation to a symmetric solution $X$ of (1.1).
FOR $k = 0, 1, 2, \ldots$ UNTIL stopping criterion satisfied
  Set $R_k := \mathcal{R}(X_k) = X_k A + A^T X_k + Q - X_k BR^{-1}B^T X_k$
  Set $A_k := A - BR^{-1}B^T X_k$
  Solve for $P_k$ in the equation
  (3.4)  $R_k + P_k A_k + A_k^T P_k = 0$
  Update $X_{k+1} := X_k + P_k$
END FOR
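A direct transcription of Algorithm 3.3 might look as follows (a sketch added for illustration, not production code, assuming NumPy/SciPy; scipy.linalg.solve_continuous_lyapunov solves the Lyapunov equation (3.4) along the lines of the Bartels-Stewart approach described below, and $X_0$ must be stabilizing).

    # Sketch of Algorithm 3.3 (illustration only, assuming NumPy/SciPy).
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    def newton_are(A, B, Q, R, X0, tol=1e-12, maxit=50):
        G = B @ np.linalg.solve(R, B.T)                  # B R^{-1} B^T
        X = (X0 + X0.T) / 2                              # keep the iterate symmetric
        for k in range(maxit):
            Rk = X @ A + A.T @ X + Q - X @ G @ X         # residual R(X_k)
            if np.linalg.norm(Rk) <= tol * max(np.linalg.norm(Q), 1.0):
                break                                    # stopping criterion
            Ak = A - G @ X
            # Lyapunov equation (3.4): A_k^T P_k + P_k A_k = -R_k
            Pk = solve_continuous_lyapunov(Ak.T, -Rk)
            X = X + Pk                                   # update X_{k+1}
        return X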
It can be shown [17],[19] that if $X_0$ is chosen such that all eigenvalues of $A - BR^{-1}B^T X_0$ are stable, then the iteration converges to the desired stabilizing solution $X$ of (1.1). Ultimately the convergence is quadratic. At each step the eigenvalues of $A - BR^{-1}B^T X_k$ are stable, and after the first step convergence is monotone, see [17],[19].

Equation (3.4) is a Lyapunov equation. It can be written as a linear system of equations for the $n(n+1)/2$ unknown entries of the symmetric matrix $P_k$, stretched out as a vector, and as such be solved by any linear system solver. But the numerically most efficient way of solving this equation is to first perform the QR algorithm, see e.g. [15], on $A_k$ to obtain the real Schur form $T_k = Q_k^T A_k Q_k$, where $Q_k$ is an orthogonal matrix and $T_k$ is a quasi-upper triangular matrix, i.e. a block upper triangular matrix having only $1 \times 1$ or $2 \times 2$ diagonal blocks. Then multiplying (3.4) by $Q_k^T$ and $Q_k$ from the left and right, respectively, yields

$Q_k^T R_k Q_k + Q_k^T P_k Q_k T_k + T_k^T Q_k^T P_k Q_k = 0.$

This can now easily be solved for $\tilde{P}_k = Q_k^T P_k Q_k$ by the Bartels-Stewart algorithm [4]. Essentially this means subsequently evaluating the columns of this matrix equation, taking into account that $T_k$ is a quasi-triangular matrix. Solving equation (3.4) is the main work for each step. It requires $O(n^3)$ arithmetic operations.

There are stabilization procedures like [2],[22] to compute a symmetric stabilizing starting guess. Such a starting guess may, however, be far from the solution $X$, so that many iterations are needed to reach the region of ultimately rapid quadratic convergence. Ten to twenty iterations may be needed even on well-conditioned Riccati equations. Thus Newton's method by itself can be a very expensive way to solve the Riccati equation. The invariant subspace methods and the sign function method described hereafter need about the same amount of work as two to four Newton steps to deliver the solution. Moreover, if the Lyapunov equation is ill-conditioned [9], then it is difficult to compute $P_k$ precisely. This, or a poor choice of $X_0$, may cause Newton's method to converge to a solution which is not stabilizing, or may lead to no convergence at all. Because of these problems Newton's method is often combined with other methods which give a good approximate solution; one or two Newton steps are then used as refinement. Newton's method can be used in a variation which works with the Cholesky factor of $X_k$ rather than with $X_k$ itself [16], which is sometimes more accurate. Also a sort of step size control can be applied to accelerate convergence [6].

A refinement of an approximate solution $\tilde{X}$ can also be based on Theorem 3.1. In [20] the following concept of a defect correction method is proposed.

3.5 Algorithm. Defect Correction
INPUT: Matrices $A, B, Q, R$ from (1.1) satisfying the hypothesis of Theorem 2.1.
OUTPUT: An approximate solution $X_0$ of (1.1) and an error estimate $P \in \mathbb{R}^{n\times n}$.
Step 1: Using any method, compute an approximate stabilizing solution $X_0$ for (1.1).
Step 2: Set $P := X_0$, $X_0 := 0$
WHILE $\|P\|$ is too large
  Step 2a: Set $X_0 := X_0 + P$
  Step 2b: Using any method, compute an approximate stabilizing solution $P$ of (3.2)
END WHILE

The costs of this algorithm depend on the particular Riccati solvers chosen for Step 1 and Step 2. Under the given hypothesis the stabilizing solution in Step 2b always exists. The algorithm is backward stable in the sense that the computed solution $X_0$ is the exact solution of the perturbed equation

$XA + A^T X + (Q - \mathcal{R}(X_0)) - XBR^{-1}B^T X = 0.$

If the Riccati solvers used in this algorithm are sufficiently accurate to get the first significant digits of the entries in $P$ correct, $X_0$ converges to the solution $X$. In practice, as the iteration converges, the accuracy of the computed residual $\mathcal{R}(X_0)$ declines due to the subtractive cancellation in the final "$-Q$" subtraction. But at this point $\mathcal{R}(X_0)$ is a rounding-error-small perturbation of $Q$. Eventually, the errors in the residual affect the most significant digits of the correction $P$. Even after this limiting accuracy has been reached, it is unusual for $P$ to overestimate $X - X_0$ by more than a factor of ten. It often takes only one or two iterations to reach limiting accuracy.

Defect correction is a very helpful technique. Even a poor numerical method in Step 1 can be made to deliver high accuracy results if we combine it with a defect correction in Step 2. Note that the methods used to solve the ARE in Steps 1 and 2 are not required to be the same, so Algorithm 3.5 does not automatically double the cost of solving the ARE. With a good approximation from Step 1, one or two steps of Newton's method are a good and inexpensive choice for the refining steps. It is advisable to let a standard procedure for solving the Riccati equation be followed by at least one step of defect correction.
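In code, the defect correction loop of Algorithm 3.5 could be sketched as follows (illustration only, assuming NumPy; `solver` stands for any hypothetical routine that returns an approximate stabilizing solution of an ARE with the given coefficient matrices, e.g. the Schur method of Section 4 or one or two Newton steps).

    # Sketch of Algorithm 3.5 (illustration only; `solver` is a placeholder).
    import numpy as np

    def defect_correction(A, B, Q, R, solver, tol=1e-14, maxit=5):
        G = B @ np.linalg.solve(R, B.T)
        X0 = solver(A, B, Q, R)                     # Step 1: initial approximation
        P = X0
        for _ in range(maxit):
            if np.linalg.norm(P) <= tol * max(np.linalg.norm(X0), 1.0):
                break                               # WHILE ||P|| is too large
            Res = X0 @ A + A.T @ X0 + Q - X0 @ G @ X0   # residual R(X0)
            Ahat = A - G @ X0
            # Step 2b: the defect satisfies the ARE (3.2), i.e. an ARE with
            # data (Ahat, B, Res, R); note that Res may be indefinite, so the
            # chosen solver must cope with an indefinite constant term
            P = solver(Ahat, B, Res, R)
            X0 = X0 + P                             # Step 2a
        return X0, P                                # solution and error estimate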
§4 Eigenvalue Methods
In Section 2 we saw that the stable invariant subspace of the $2n \times 2n$ matrix $H$ from (2.4) yields the stabilizing solution of the ARE. The QR algorithm, see e.g. [15], is the best numerical method to compute an invariant $k$-dimensional subspace of a nonsymmetric matrix when $k$ is not too small. It is a backward stable numerical method which computes an orthogonal matrix $U \in \mathbb{R}^{n\times n}$ such that $U^T H U = T$ is a quasi-upper triangular matrix. $T$ is called the (real) Schur form of $H$ and $H = UTU^T$ is referred to as the Schur decomposition. The eigenvalues of $H$ are the union of the eigenvalues of the $1 \times 1$ or $2 \times 2$ diagonal blocks of $T$. They can be arranged in any order along the diagonal of $T$ by the Bartels-Stewart reordering technique, see e.g. [15]. This method requires $O(n^3)$ operations. If we split the matrices into $n \times n$ submatrices as

(4.1)  $U^T H U = \begin{bmatrix} U_{11} & U_{12} \\ U_{21} & U_{22} \end{bmatrix}^T \begin{bmatrix} A & BR^{-1}B^T \\ Q & -A^T \end{bmatrix} \begin{bmatrix} U_{11} & U_{12} \\ U_{21} & U_{22} \end{bmatrix} = \begin{bmatrix} T_{11} & T_{12} \\ 0 & T_{22} \end{bmatrix} = T,$
where the eigenvalues of $T$ are arranged such that $T_{11}$ has the $n$ stable eigenvalues of $H$ as its eigenvalues, then the columns of $\begin{bmatrix} U_{11} \\ U_{21} \end{bmatrix}$ span the stable invariant subspace of $H$, and according to what we saw in Section 2 the matrix $X = -U_{21}U_{11}^{-1}$ solves the Riccati equation. $X$ should be computed by solving the linear system $U_{11}^T X = -U_{21}^T$ (equivalent, by the symmetry of $X$, to $XU_{11} = -U_{21}$), which is more efficient than computing $U_{11}^{-1}$ explicitly. We can thus sketch the algorithm as follows.

4.2 Algorithm. Schur Method
INPUT: $H$ given in (2.4) with $A, B, Q, R$ satisfying the hypothesis of Theorem 2.1.
OUTPUT: An approximate stabilizing solution $X$ of (1.1) and an error estimate $P \in \mathbb{R}^{n\times n}$.
Step 1: Compute the Schur decomposition $H = UTU^T$ with the QR algorithm, subsequently applying the reordering technique, such that

$U^T H U = \begin{bmatrix} T_{11} & T_{12} \\ 0 & T_{22} \end{bmatrix}, \qquad U = \begin{bmatrix} U_{11} & U_{12} \\ U_{21} & U_{22} \end{bmatrix},$

and the eigenvalues of the $n \times n$ submatrix $T_{11}$ are all stable.
Step 2: Solve $U_{11}^T X = -U_{21}^T$ with the QR (or LR) decomposition (with pivoting), see e.g. [15].
Step 3: Use defect correction (Algorithm 3.5, Step 2) to refine $X$ and compute an error estimate $P$.
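In terms of standard library routines, Algorithm 4.2 can be sketched as follows (illustration only, assuming NumPy/SciPy; scipy.linalg.schur with sort='lhp' covers Step 1, computing the real Schur form and reordering the stable eigenvalues into the leading block).

    # Sketch of Algorithm 4.2 (illustration only, assuming NumPy/SciPy).
    import numpy as np
    from scipy.linalg import schur

    def schur_are(A, B, Q, R):
        n = A.shape[0]
        G = B @ np.linalg.solve(R, B.T)
        H = np.block([[A, G], [Q, -A.T]])            # the matrix from (2.4)
        # Step 1: real Schur form, the n stable eigenvalues ordered first
        T, U, sdim = schur(H, sort='lhp')
        assert sdim == n, "H should have exactly n stable eigenvalues"
        U11, U21 = U[:n, :n], U[n:, :n]
        # Step 2: solve U11^T X = -U21^T instead of inverting U11;
        # Step 3 (defect correction) would follow as in Algorithm 3.5
        X = np.linalg.solve(U11.T, -U21.T)
        return (X + X.T) / 2                         # symmetrize the computed X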
The costs are $O(n^3)$ operations. This way of computing the solution of the ARE was introduced in [18]. Nowadays it is the most frequently used numerical method to compute the stabilizing solution of (1.1), and it is implemented in software packages like NAG or the MATLAB control toolbox. Even though the QR algorithm is a numerically stable method to solve the eigenvalue problem, in this context there are still problems. Under the hypothesis of Theorem 2.1, i.e. if we have a stabilizable and detectable system, the theory guarantees that $H$ has exactly $n$ stable eigenvalues, that $U_{11}$ is invertible and that $U_{21}U_{11}^{-1}$ is symmetric. But in the presence of rounding errors these properties may get lost when manipulating $H$ step by step to transform it into $T$. If for instance $H$ has some stable eigenvalues close to the imaginary axis, rounding errors may force them across the axis into the right half plane during the computation. The computed $T$ may thus not even have $n$ stable eigenvalues, and we may not be able to identify the stable invariant subspace. The computed $U_{11}$ may be close to a non-invertible matrix, in which case solving $U_{11}^T X = -U_{21}^T$ can cause serious problems. In any case the computed $X$ will rarely be exactly symmetric. Many additional details and a defect correction step are necessary to make this approach work to satisfaction.

The QR algorithm applied here treats the matrix $H$ from (2.4) like an arbitrary unstructured one, whereas in fact $H$ exhibits certain symmetries. Its special structure can be characterized by the equation

(4.3)  $JH = (JH)^T, \qquad \text{where } J = \begin{bmatrix} 0 & I \\ -I & 0 \end{bmatrix},$

with $I$ and $0$ being the $n \times n$ identity and zero matrix, respectively. A matrix with this property is called Hamiltonian, and it always exhibits the block structure $\begin{bmatrix} K & N \\ M & -K^T \end{bmatrix}$, where $M, N$ are symmetric $n \times n$ matrices. A Hamiltonian matrix is thus determined by only $2n^2 + n$ parameters instead of $4n^2$ for a general $2n \times 2n$ matrix. A matrix $S$ is called symplectic if $S^T J S = J$. Symplectic matrices are non-singular and products of symplectic matrices are symplectic. Moreover, if $H$ is a Hamiltonian matrix and $S$ is symplectic, then $S^{-1}HS$ is again Hamiltonian. An important property of Hamiltonian matrices is the fact that their eigenvalues occur in pairs $\lambda, -\lambda$, i.e. if $\lambda$ is an eigenvalue of $H$ then so is $-\lambda$. Thus if the hypothesis of Theorem 2.1 holds, then the Hamiltonian matrix from (2.4) has exactly $n$ stable eigenvalues, and each must have a partner in the open right half plane.
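These structural statements are easy to check numerically; the following small sketch (an illustration added here, assuming NumPy) builds a random Hamiltonian matrix from its block structure, verifies (4.3) and displays the eigenvalue pairing.

    # Check of the Hamiltonian structure (4.3) on a random example (illustration only).
    import numpy as np

    n = 3
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])

    K = np.random.randn(n, n)
    M = np.random.randn(n, n); M = M + M.T           # symmetric
    N = np.random.randn(n, n); N = N + N.T           # symmetric
    H = np.block([[K, N], [M, -K.T]])                # Hamiltonian block structure

    print(np.allclose(J @ H, (J @ H).T))             # True: (4.3) holds
    w = np.linalg.eigvals(H)
    print(np.sort_complex(w))                        # eigenvalues come in pairs
    print(np.sort_complex(-w))                       #   lambda, -lambda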
If we could transform the Hamiltonian matrix $H$ step by step as $S_1^{-1}HS_1$, then $S_2^{-1}(S_1^{-1}HS_1)S_2$, etc., with symplectic matrices $S_1, S_2, \ldots$, such that we finally reach something like a Schur form, then all intermediate matrices and the resulting matrix would be Hamiltonian. The computation could then be arranged to work only on the $2n^2 + n$ parameters, and each actually computed intermediate matrix would then be Hamiltonian, thus preserving the pairing of eigenvalues even in the presence of rounding errors. Identifying the stable invariant subspace would therefore almost never be a problem, and we can hope for lower costs because only half the number of matrix parameters have to be modified in each step. There exists a Schur-like canonical form for Hamiltonian matrices.

4.4 Theorem. If $H \in \mathbb{R}^{2n\times 2n}$ is a Hamiltonian matrix with no eigenvalue on the imaginary axis, then there exists an orthogonal and symplectic matrix $Z \in \mathbb{R}^{2n\times 2n}$ such that

$Z^T H Z = \begin{bmatrix} T & R \\ 0 & -T^T \end{bmatrix},$

where $T$ is a quasi-triangular matrix with stable eigenvalues only.

Proof: See [21].

There has been active research trying to develop a QR-like method to compute this Hamiltonian Schur-like form with a sequence of transformations which are symplectic and orthogonal. Only for the special case of single input/single output control systems has a complete analogue of the Schur method been developed [10]. Dropping the requirement of orthogonality for the transforming matrices, an algorithm is given in [8] which works for arbitrary Hamiltonian matrices and can take full advantage of the Hamiltonian structure. Unfortunately it is not numerically stable, and the growth of the entries in the intermediate matrices has to be carefully monitored. A recent development is the multishift algorithm OSMARE [1], a modification of these ideas. It computes only the $n$ eigenvalues $\lambda_1, \ldots, \lambda_n$ of $H$ in the open right half plane, the "unstable" ones (without the invariant subspace), by an inexpensive but numerically safe method. Then it performs a multishift step with these eigenvalues and an orthogonal and symplectic transformation, i.e.

$x = (H - \lambda_1 I)(H - \lambda_2 I)\cdots(H - \lambda_n I)\,e_1$

is computed, where $e_1$ is the first unit vector. Then an orthogonal and symplectic matrix $S_1 \in \mathbb{R}^{2n\times 2n}$ is constructed such that $S_1^T x = \alpha e_1$ with $\alpha = \|x\|$, and $S_1^T H S_1$ is computed. Roughly speaking, we can then construct an orthogonal and symplectic matrix $S_2 \in \mathbb{R}^{2n\times 2n}$ such that $S_2^T S_1^T H S_1 S_2$ has the Hamiltonian Schur-like form. For details see [1]. For ill-conditioned problems this algorithm produces the best results. It is numerically stable and preserves the Hamiltonian structure, but it is much more costly than the Schur method for the ARE.
§5 Sign Function Method
The sign function method is a simple and elegant method to solve the ARE, see [5],[11],[14]. It is particularly well suited for parallel computation [12],[3]. It can be described with the Jordan canonical form. Let $A$ be a complex $n \times n$ matrix, i.e. $A \in \mathbb{C}^{n\times n}$, having Jordan canonical form $A = M(D + N)M^{-1}$, where $M \in \mathbb{C}^{n\times n}$ is a matrix whose columns are eigenvectors and principal vectors, $D$ is a diagonal matrix of eigenvalues and $N$ is a nilpotent matrix ($N^k = 0$ for a suitable $k$) that commutes with $D$. The matrix $\mathrm{sign}(A)$ is defined by $\mathrm{sign}(A) = MSM^{-1}$, where $S = \mathrm{diag}(s_1, s_2, \ldots, s_n)$ and

$s_i = \begin{cases} +1 & \text{if } \mathrm{Re}(d_{ii}) > 0 \\ -1 & \text{if } \mathrm{Re}(d_{ii}) < 0 \end{cases}$

for $i = 1, \ldots, n$. If $A$ has an eigenvalue on the imaginary axis, and therefore some $\mathrm{Re}(d_{ii})$ is zero, then $\mathrm{sign}(A)$ is undefined. If we define a sequence of matrices $Z_0, Z_1, Z_2, \ldots$ by

(5.1)  $Z_0 = A, \qquad Z_{k+1} = Z_k - \tfrac{1}{2}\left(Z_k - Z_k^{-1}\right) \quad \text{for } k = 0, 1, 2, \ldots$

then it can be shown that $\lim_{k\to\infty} Z_k = \mathrm{sign}(A)$. (This is actually the Newton iteration for $Z^2 - I = 0$.)

Let $X$ be the stabilizing solution of the Riccati equation. Then all eigenvalues of $A - BR^{-1}B^T X$ have negative real parts, and therefore from equation (2.3) we get

$W = \begin{bmatrix} W_{11} & W_{12} \\ W_{21} & W_{22} \end{bmatrix} = \mathrm{sign}(H) = \begin{bmatrix} I & 0 \\ -X & I \end{bmatrix} \begin{bmatrix} -I & K \\ 0 & I \end{bmatrix} \begin{bmatrix} I & 0 \\ -X & I \end{bmatrix}^{-1}$

for a suitable matrix $K$. But then we find for $X$

$\begin{bmatrix} W_{11} & W_{12} \\ W_{21} & W_{22} \end{bmatrix} \begin{bmatrix} I \\ -X \end{bmatrix} = \begin{bmatrix} -I \\ X \end{bmatrix},$

which implies $-\begin{bmatrix} W_{12} \\ W_{22}+I \end{bmatrix} X + \begin{bmatrix} W_{11}+I \\ W_{21} \end{bmatrix} = 0$ and thus

(5.2)  $\begin{bmatrix} W_{12} \\ W_{22}+I \end{bmatrix} X = \begin{bmatrix} W_{11}+I \\ W_{21} \end{bmatrix}.$

Thus $X$ solves the overdetermined, consistent system of linear equations (5.2), whose solution can be computed with the QR decomposition, see e.g. [15].
If the iteration (5.1) is applied to a Hamiltonian matrix $Z_0 = H$, all iterates $Z_k$ are Hamiltonian. Replacing the non-symmetric inversion $Z_k^{-1}$ by the symmetric inversion $(JZ_k)^{-1}$ cuts the work and storage requirements in half. It is advisable to scale the iterates $Z_k$ to have determinant of modulus 1, see [11]. We can sketch the algorithm as follows.

5.3 Algorithm. Sign Function Method
INPUT: $H$ given in (2.4) with $A, B, Q, R$ satisfying the hypothesis of Theorem 2.1.
OUTPUT: An approximate stabilizing solution $X$ of the ARE (1.1) and an error estimate $P \in \mathbb{R}^{n\times n}$.
Step 1: $Z_0 := H$
Step 2: FOR $k = 0, 1, 2, \ldots$ UNTIL $Z_k$ "converges"
  $Z_k := |\det Z_k|^{-1/(2n)} Z_k$
  $Z_{k+1} := Z_k - \tfrac{1}{2}\left(Z_k - (JZ_k)^{-1}J\right)$
END FOR
Step 3: Solve for $X$ in the overdetermined, consistent system (5.2).
Step 4: Use defect correction (Algorithm 3.5, Step 2) to refine $X$ and compute an error estimate $P$.

The determinant in Step 2 is a by-product of the factorization used to invert the symmetric matrix $JZ_k$, so its computation is essentially free. The costs are $O(n^3)$ operations. The iteration usually converges to working precision in eight or nine steps. The sign function method by itself is not numerically stable. There are serious numerical problems if eigenvalues of $H$ are close to the imaginary axis. Therefore it should always be combined with defect correction. It is a frequently used method because it can take advantage of the Hamiltonian structure and is relatively fast.
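A sketch of Algorithm 5.3 in code follows (illustration only, assuming NumPy; the convergence test, the iteration limit, and the least squares solution of (5.2) via numpy.linalg.lstsq in place of an explicit QR decomposition are ad hoc choices).

    # Sketch of Algorithm 5.3 (illustration only, assuming NumPy).
    import numpy as np

    def sign_are(A, B, Q, R, tol=1e-12, maxit=60):
        n = A.shape[0]
        G = B @ np.linalg.solve(R, B.T)
        H = np.block([[A, G], [Q, -A.T]])
        J = np.block([[np.zeros((n, n)), np.eye(n)],
                      [-np.eye(n), np.zeros((n, n))]])
        Z = H
        for _ in range(maxit):
            _, logdet = np.linalg.slogdet(Z)
            Z = Z * np.exp(-logdet / (2 * n))    # scale so that |det Z| = 1
            Zinv = np.linalg.solve(J @ Z, J)     # (J Z)^{-1} J = Z^{-1}, J Z symmetric
            Znew = Z - 0.5 * (Z - Zinv)
            if np.linalg.norm(Znew - Z) <= tol * np.linalg.norm(Z):
                Z = Znew
                break
            Z = Znew
        W = Z                                    # W ~ sign(H)
        # Step 3: the overdetermined, consistent system (5.2)
        lhs = np.vstack([W[:n, n:], W[n:, n:] + np.eye(n)])
        rhs = np.vstack([W[:n, :n] + np.eye(n), W[n:, :n]])
        X, *_ = np.linalg.lstsq(lhs, rhs, rcond=None)
        # Step 4 (defect correction) would follow as in Algorithm 3.5
        return (X + X.T) / 2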
§6 Concluding Remarks
This paper gave an overview of recent numerical methods to solve the ARE: Newton's method, defect correction, eigenvalue methods and the sign function method, each having advantages and disadvantages. For each method discussed here there is an analogue for the discrete-time algebraic Riccati equation

$A^T X A - X + Q - A^T X B (R + B^T X B)^{-1} B^T X A = 0,$

which arises when (1.2) is a system of difference equations instead of differential equations and the cost functional (1.3) is a corresponding sum, see e.g. [7],[19].
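For completeness, a discrete-time counterpart is also available in the library used in the earlier sketches (illustration only, assuming SciPy; A, B, Q, R as in the earlier examples):

    # solve_discrete_are solves the discrete-time ARE
    # A^T X A - X + Q - A^T X B (R + B^T X B)^{-1} B^T X A = 0.
    from scipy.linalg import solve_discrete_are
    Xd = solve_discrete_are(A, B, Q, R)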
There is a huge number of publications on this topic. An extensive list of papers is kept as a TeX file and is constantly updated at the Technical University of Chemnitz in Germany. To receive the file by e-mail, contact P. Benner ([email protected]). Some recent publications, as well as a collection of benchmark examples to test numerical methods on, can be obtained by anonymous ftp from ftp.tu-chemnitz.de in the directory pub/Local/mathematik/Benner. Fortran 77 codes can be obtained by anonymous ftp from netlib.att.com and elib.zib-berlin.de, both in the directory netlib/control. MATLAB codes are available from the MathWorks ftp site (ftp.mathworks.com) in pub/contrib/control, and there is a software package, SLICOT from NAG, under development, from which codes can be obtained from [email protected].
References

[1] G.S. Ammar, P. Benner and V. Mehrmann, A Multishift Algorithm for the Numerical Solution of the Algebraic Riccati Equation, ETNA 1 (1993), 33-48.
[2] E.S. Armstrong, An Extension of Bass' Algorithm for Stabilizing Linear Continuous Constant Systems, IEEE Trans. Automat. Control AC-20 (1975), 153-154.
[3] Z. Bai and Q. Qian, An Inverse Free Parallel Method for the Numerical Solution of Algebraic Riccati Equations, Proc. Fifth SIAM Conf. Appl. Lin. Alg., Snowbird, UT, June 1994, J.G. Lewis, ed., SIAM, Philadelphia, PA (1994), 167-171.
[4] R.H. Bartels and G.W. Stewart, Solution of the Matrix Equation AX + XB = C: Algorithm 432, Comm. ACM 15 (1972), 820-826.
[5] A.N. Beavers and E.D. Denman, A Computational Method for Eigenvalues and Eigenvectors of a Matrix with Real Eigenvalues, Numer. Math. 21 (1973), 389-396.
[6] P. Benner and R. Byers, Step Size Control for Newton's Method Applied to Algebraic Riccati Equations, Proc. Fifth SIAM Conf. Appl. Lin. Alg., Snowbird, UT, J.G. Lewis, ed., SIAM, Philadelphia, PA (1994), 177-181.
[7] A. Bunse-Gerstner, R. Byers and V. Mehrmann, Numerical Methods for Algebraic Riccati Equations, Proc. Workshop on the Riccati Equation in Control, Systems, and Signals, Como, Italy, S. Bittanti, ed., Pitagora Editrice, Bologna (1989), 107-116.
[8] A. Bunse-Gerstner and V. Mehrmann, A Symplectic QR-Like Algorithm for the Solution of the Real Algebraic Riccati Equation, IEEE Trans. Autom. Control AC-31 (1986), 1104-1113.
[9] R. Byers, Numerical Condition of the Algebraic Riccati Equation, Contemp. Math. 47 (1985), 35-49.
[10] R. Byers, A Hamiltonian QR-Algorithm, SIAM J. Sci. Stat. Comp. 7 (1986), 212-229.
[11] R. Byers, Solving the Algebraic Riccati Equation with the Matrix Sign Function, Lin. Alg. Appl. 85 (1987), 267-279.
[12] J.P. Charlier and P. Van Dooren, A Systolic Algorithm for Riccati and Lyapunov Equations, Math. of Control, Signals, Systems 2 (1989), 109-136.
[13] W.A. Coppel, Matrix Quadratic Equations, Bull. Austral. Math. Soc. 10 (1974), 377-401.
[14] E.D. Denman and A.N. Beavers, The Matrix Sign Function and Computations in Systems, Appl. Math. Comp. 2 (1976), 63-94.
[15] G.H. Golub and C.F. Van Loan, Matrix Computations, 2nd edition, Johns Hopkins University Press, Baltimore (1989).
[16] S.J. Hammarling, Newton's Method for Solving the Algebraic Riccati Equation, NPL Report DITC 12/82 (1982).
[17] D.L. Kleinman, On an Iterative Technique for Riccati Equation Computations, IEEE Trans. Autom. Control AC-13 (1968), 114-115.
[18] A.J. Laub, A Schur Method for Solving Algebraic Riccati Equations, IEEE Trans. Autom. Control AC-24 (1979), 913-921.
[19] V. Mehrmann, The Autonomous Linear Quadratic Control Problem: Theory and Numerical Solution, Lecture Notes in Control and Information Sciences 163, M. Thoma and A. Wyner, eds., Springer, Heidelberg (1991).
[20] V. Mehrmann and E. Tan, Defect Correction for the Algebraic Riccati Equation, IEEE Trans. Autom. Control AC-33 (1988), 695-698.
[21] C. Paige and C. Van Loan, A Schur Decomposition for Hamiltonian Matrices, Lin. Alg. Appl. 41 (1981), 11-32.
[22] V. Sima, An Efficient Schur Method to Solve the Stabilization Problem, IEEE Trans. Autom. Control AC-26 (1981), 724-725.
Biography
Angelika Bunse-Gerstner received the diploma in mathematics in 1975 from the University of Erlangen-Nürnberg, Germany, and received her Ph.D. in 1978 and her Habilitation in 1986, both from the University of Bielefeld, Germany, where from 1978 to 1989 she held a position as Wissenschaftliche Assistentin and from 1989 through 1991 a position as Hochschuldozentin at the department of mathematics. In 1991 she joined the department of mathematics and computer science at the University of Bremen, Germany, where she has since been Professor of Mathematics, leading the numerical analysis group. Her research interests are in the area of numerical linear algebra methods, especially for problems in signals, systems and control. Dr. Bunse-Gerstner is associate editor of the SIAM Journal on Matrix Analysis and Applications and of the Journal of Mathematical Systems, Estimation and Control.