A PARALLEL ALGORITHM FOR THE EIGENVALUE ASSIGNMENT PROBLEM IN LINEAR SYSTEMS

Murilo G. Coutinho*

Amit Bhaya†

Abstract

A parallel algorithm for the eigenvalue assignment problem (EAP) in single-input linear systems is presented. The algorithm is based on the solution of the observer matrix equation, and it has a time complexity of O(n³/p), where n is the order of the system and p is the number of processors. The algorithm has been implemented on the hypercube parallel computer INTEL iPSC 860, with 8 processors, and the expected speed-up has been obtained. A comparison of the algorithm with one of the best known sequential algorithms shows the efficiency and accuracy of the algorithm for matrices of large order. This also demonstrates that a parallel algorithm such as the one presented in this paper can substantially ease the computational burden of a sequential algorithm for large systems arising in practical applications such as the design of large space structures, control of power systems, etc.

Keywords: Parallel Processing, Eigenvalue Assignment, Observer Matrix Equation, Hessenberg Form.

1. Introduction

Consider the linear time-invariant system:

ẋ(t) = Ax(t) + Bu(t)

where A ∈ R^{n×n} and B ∈ R^{n×m} are given matrices, and let Λ be a set of complex numbers λ_1, ..., λ_n symmetric with respect to the real axis. The eigenvalue assignment problem (EAP), more commonly known in control theory as the pole assignment problem, is the problem of choosing a feedback matrix F such that A − BF has the desired spectrum Λ. In fact, the EAP can be considered an inverse eigenvalue problem, which requires the determination of a matrix with given eigenvalues. It is well known that the EAP has a solution if and only if

* Dept of Electrical Eng - Systems, Univ of Southern Calif, Los Angeles, CA 90089, E-mail: [email protected]
† Dept of Electrical Eng, COPPE/UFRJ, P.O. Box 68504, Fed Univ of Rio de Janeiro, RJ 21945-970, Brazil, E-mail: [email protected] The research of the first two authors was partially supported by RHAE/CNPq and CNPq, the Brazilian National Council for Scientific and Technological Development.
‡ The research of this author was supported by an NSF grant under contract #DMS-9212629, and he is with the Dept of Math Sciences, Northern Illinois Univ, DeKalb, IL 60115, E-mail: [email protected]

Biswa N. Datta‡

the pair (A, B) is controllable. In the single-input case, F is unique when it exists. There are several sequential methods available in the literature for this important problem, but most of them, although efficient and numerically effective, do not seem to be suitable for large practical problems such as the design of large space structures, control of large power systems, etc., where the computational burden of the solution via sequential methods makes these algorithms impractical or even infeasible. Clearly, there is a need for specialized algorithms for large-scale and parallel computations that are not minor modifications of those for the small and dense case. Among the sequential methods, the best known approaches are the simple recursive algorithms of [6, 1, 4, 3], the implicit QR methods [13, 14, 15], the solution via Schur form [17], the Singular Value Decomposition approach [12], the eigenvalue-eigenvector approach [16], and the matrix equation approach [2]. The last approach is equivalent to implicitly constructing certain nonsingular solutions to the matrix equation AL − LΛ = R after the matrices Λ and R (or part of R) have been chosen appropriately. Here we focus on a specialized parallel algorithm to solve the single-input EAP for the case of real and complex eigenvalues with multiplicity greater than or equal to one. Our parallel algorithm is an improvement of the algorithm presented in [7], which uses a sequential approach and therefore has some limitations. The paper is organized as follows. Section 2 briefly discusses the theory behind the algorithm. The proposed parallel algorithm is presented in detail in Section 3. Numerical experiments were performed on systems of various dimensions, and the results are presented in Section 4. Concluding remarks are made in Section 5.
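The matrix equation approach can be made concrete with a small numerical sketch. The code below (our illustration, not the paper's implementation; the random test problem and the choice R = BG are ours) picks Λ = diag of the desired eigenvalues, solves AL − LΛ = BG for L with a randomly chosen G, and recovers F = GL⁻¹, so that (A − BF)L = LΛ and A − BF has spectrum Λ:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
n, m = 5, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
desired = np.array([-1.0, -2.0, -3.0, -4.0, -5.0])  # target spectrum
Lam = np.diag(desired)

# solve_sylvester(A, -Lam, Q) solves A L + L (-Lam) = Q, i.e. A L - L Lam = Q.
# For a controllable pair (A, B), a random G generically yields nonsingular L.
G = rng.standard_normal((m, n))
L = solve_sylvester(A, -Lam, B @ G)
F = G @ np.linalg.inv(L)          # then (A - B F) L = L Lam

closed = np.sort(np.linalg.eigvals(A - B @ F).real)
```

The identity behind the last step: (A − BGL⁻¹)L = AL − BG = LΛ, so A − BF is similar to Λ.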

2. The Sylvester-Observer Matrix Equation and the Eigenvalue Assignment Problem

Consider the Sylvester matrix equation:

AL − LB = R    (1)

where A, B and R are given matrices with elements in R and L is the matrix solution of (1). A solution L may
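The existence/uniqueness condition can be checked numerically: column-stacking (1) gives (I ⊗ A − Bᵀ ⊗ I) vec(L) = vec(R), and the coefficient matrix has eigenvalues λ_i(A) − μ_j(B), so it is singular exactly when A and B share an eigenvalue. A minimal sketch (illustrative; not from the paper):

```python
import numpy as np

def sylvester_operator(A, B):
    """Coefficient matrix of AL - LB = R under column-stacking vec()."""
    n, m = A.shape[0], B.shape[0]
    return np.kron(np.eye(m), A) - np.kron(B.T, np.eye(n))

A = np.diag([1.0, 2.0])
B_shared = np.diag([2.0, 5.0])    # shares eigenvalue 2 with A -> singular operator
B_disjoint = np.diag([3.0, 5.0])  # no common eigenvalue -> unique solution L

rank_shared = np.linalg.matrix_rank(sylvester_operator(A, B_shared))
rank_disjoint = np.linalg.matrix_rank(sylvester_operator(A, B_disjoint))
print(rank_shared, rank_disjoint)  # 3 4: rank-deficient vs. full rank 4
```

In the diagonal example, the operator's eigenvalues are exactly the differences λ_i − μ_j, which makes the rank drop visible by inspection.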

or may not exist. If a solution exists, it is unique if and only if A and B do not have any common eigenvalues. A variant of the Sylvester matrix equation, known as the Sylvester-observer matrix equation, arises in the design of the Luenberger observer. In this variation, the matrix A and a part of the matrix R are known, and the matrices B, L, and the unknown part of R need to be found, satisfying certain constraints of controllability, observability, and nonsingularity [9]. The following theorem and its corollary completely characterize the existence and uniqueness of a nonsingular solution to the observer matrix equation.

Theorem 2.1. [10] Let Λ be a nonderogatory matrix and let A be an arbitrary matrix. Let r_1, r_2, ..., r_{n-1} be (n − 1) vectors in column n-space. Then there always exists an L such that:

R := AL − LΛ    (2)

has as its first (n − 1) columns r_1 through r_{n-1}. Furthermore, L is uniquely determined by its first column l_1. Denoting the last column of R as r_n, the following holds:

φ(A) l_1 = D_{11} r_1 + D_{21} r_2 + ... + D_{n-1,1} r_{n-1} + γ r_n

where φ(x) is the characteristic polynomial of Λ, γ is a nonzero scalar, and the D_{ij} are certain n × n matrices.

Corollary 2.2. Let the first (n − 1) columns of R be zero. Then a solution L of (2) is nonsingular if and only if l_1 is chosen so that (A, l_1) is controllable.

Given a controllable system (A, B) and a set of desired eigenvalues Λ = {λ_1, λ_2, ..., λ_n}, the first step towards the solution of the EAP is to transform the system to controller-Hessenberg form (H, B̃) using orthogonal transformations, which can always be done [18]. Thus, without loss of generality, the EAP is first solved for the pair (H, B̃) (we will call this the Hessenberg EAP, HEAP) and then the original solution is retrieved via multiplication by the previously determined orthogonal transformations. Let X be a given matrix with spec(X) = Λ. Then, finding a solution for the HEAP for the pair (H, B̃) is equivalent to finding a matrix F̃ and a nonsingular matrix N such that:

N(H − B̃F̃)N⁻¹ = X

that is:

NH − XN = NB̃F̃

which is the transpose of equation (2) with A = Hᵀ, L = Nᵀ, Λ = Xᵀ and R = F̃ᵀB̃ᵀNᵀ. Therefore, we have the following Sylvester-observer matrix equation:

HᵀNᵀ − NᵀXᵀ = F̃ᵀB̃ᵀNᵀ

Setting Nᵀ = L and F̃ᵀB̃ᵀNᵀ = R, we get:

HᵀL − LXᵀ = R    (3)

R = F̃ᵀB̃ᵀL    (4)

Hence, the procedure to solve the EAP via the matrix equation consists of choosing suitable matrices X and R in order to guarantee the existence of a nonsingular solution L of equation (3), which, together with the full rank condition on B̃ (implying that the reduced system is controllable), guarantees the existence of F̃. In the general case, a complete characterization of nonsingular solutions for a general matrix equation is not yet known; however, for the case when the matrix B is a column vector, the solution is unique when it exists [11]. Let X be a nonderogatory matrix; then Theorem 2.1 ensures the existence of L in equation (3), provided that the first (n − 1) columns of R are chosen in advance. The main point of using Theorem 2.1 is that it does not require any prior knowledge of the eigenvalues of H, and a nonsingular solution can always be guaranteed by satisfying certain conditions for controllability. We now present our parallel algorithm for the single-input EAP, based on the solution of an appropriate Sylvester-observer equation; the program corresponding to the algorithm is called SIPP (from Single Input Pole Placement).

3. The Proposed Parallel Algorithm

In the single-input case, the matrix R should be chosen as a rank-one matrix. Let R = udᵀ and let Λ = {λ_1, λ_2, ..., λ_n} be the set of n desired eigenvalues, symmetric with respect to the real axis. Then, according to Theorem 2.1, the matrix equation:

HL − LX = udᵀ    (5)

will have a nonsingular solution L if and only if (H, u) is controllable and (X, d) is observable. Clearly, the choice u = (1, 0, ..., 0)ᵀ = e_1 (first column of the identity matrix of order n) will satisfy the controllability condition for (H, u). In order to satisfy the observability condition on (X, d), the choice of d must be related to the set Λ of desired eigenvalues. For instance, in the case that all λ_i ∈ Λ, i ≤ n, are real and distinct, the simplest choice for the matrix X is X = diag(λ_1, λ_2, ..., λ_n), and an obvious choice for d is (1, 1, ..., 1, 1)ᵀ. Now consider the case when Λ consists only of real eigenvalues with multiplicity greater than or equal to one. Let m_i be the multiplicity of the real eigenvalue λ_i ∈ Λ. Let X be constructed as follows:
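For the distinct-real case just described, the procedure collapses to n independent shifted Hessenberg solves followed by one triangular-like solve for the feedback vector. A small dense sketch (our Python illustration of the scheme, not the paper's FORTRAN code; we work directly with an unreduced Hessenberg pair (H, e_1)):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# Unreduced upper Hessenberg H (nonzero subdiagonal) and u = e1.
H = np.triu(rng.standard_normal((n, n)), k=-1)
H[np.arange(1, n), np.arange(n - 1)] += 3.0   # keep h(i+1, i) away from zero
e1 = np.eye(n)[:, 0]

desired = np.array([-1.0, -2.0, -3.0, -4.0, -5.0, -6.0])
d = np.ones(n)

# Column i of H L - L X = e1 d^T with X = diag(desired):
#   (H - lam_i I) l_i = d_i e1   -- n independent shifted Hessenberg solves.
L = np.column_stack([np.linalg.solve(H - lam * np.eye(n), d_i * e1)
                     for lam, d_i in zip(desired, d)])

# From e1 d^T = H L - L X:  H - e1 (d^T L^{-1}) = L X L^{-1},
# so f = L^{-T} d assigns the desired spectrum to H - e1 f^T.
f = np.linalg.solve(L.T, d)
closed = np.sort(np.linalg.eigvals(H - np.outer(e1, f)).real)
```

Note that each column solve touches only one shifted copy of H, which is what makes the later parallel distribution of the columns natural.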

X = diag(J_{m_1×m_1}, J_{m_2×m_2}, ..., J_{m_k×m_k})    (6)

where J_{m_i×m_i} is a Jordan block with m_i λ_i's on the main diagonal and m_i − 1 ones on the superdiagonal, corresponding to the eigenvalue λ_i with multiplicity m_i, i = 1, 2, ..., k ≤ n. With this choice for X, if d is chosen as (1, 1, ..., 1, 1)ᵀ, then the pair (X, d) is observable.
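The Jordan-block construction and the observability claim are easy to verify numerically. The sketch below (our illustration; the helper names are ours) builds X from (λ_i, m_i) pairs as in equation (6) and checks the rank of the observability matrix of (X, d) with d = (1, ..., 1)ᵀ:

```python
import numpy as np
from scipy.linalg import block_diag

def jordan_block(lam, m):
    """m x m Jordan block: lam on the diagonal, ones on the superdiagonal."""
    return lam * np.eye(m) + np.diag(np.ones(m - 1), k=1)

def build_X(pairs):
    """X = diag(J_{m1}, ..., J_{mk}) as in equation (6)."""
    return block_diag(*[jordan_block(lam, m) for lam, m in pairs])

pairs = [(-1.0, 2), (-3.0, 1), (-4.0, 3)]   # multiplicities 2, 1, 3
X = build_X(pairs)
n = X.shape[0]
d = np.ones(n)

# Observability matrix of (X, d): rows d^T X^k, k = 0 .. n-1.  Each Jordan
# block's eigenvector is its first unit vector, on which d is nonzero, and
# the lam_i are distinct, so the pair is observable (full rank n).
O = np.vstack([d @ np.linalg.matrix_power(X, k) for k in range(n)])
print(np.linalg.matrix_rank(O))  # 6
```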

Since the complex eigenvalues to be assigned occur in conjugate pairs, the use of complex arithmetic can be avoided if we assign them pairwise: that is, each one is assigned together with its conjugate. For clarity and brevity, here we will only consider the case when Λ consists only of complex eigenvalues with multiplicity one. Let λ_i ∈ Λ and λ̄_i ∈ Λ be a pair of eigenvalues with multiplicity m_i = 1. Let a_i and b_i be the real and imaginary parts of λ_i, respectively. The matrix X can still be constructed as in equation (6), but this time each Jordan block will be of order two, that is:

J_{2λ_i} = [  a_i  b_i ]
           [ -b_i  a_i ]    (7)

in contrast with the real case with multiplicity one, where each block is of order one. Since we want to avoid complex arithmetic, and at the same time satisfy the observability condition, a suitable choice for d is:

d = (0, 1/h(2,1), 0, 1/h(2,1), ..., 0, 1/h(2,1))ᵀ.

Note that the nonzero elements of d have to be scaled by 1/h(2,1); otherwise we would not have u = e_1. Also, h(2,1) is different from zero because H is unreduced upper Hessenberg. Finally, in the case where Λ has only complex eigenvalues with multiplicity greater than or equal to one, the matrix X can again be chosen as in equation (6); however, each Jordan block is now block triangular, with matrices J_{2λ_i} on the main block diagonal and 2 × 2 identity matrices on the block superdiagonal. The vector d can still be chosen as (0, 1/h(2,1), 0, 1/h(2,1), ..., 0, 1/h(2,1))ᵀ.

Assuming the original system is defined by the pair (A, b) and

Λ = {λ_1, ..., λ_1 (m_1 times), ..., λ_r, λ̄_r, ..., λ_r, λ̄_r (2m_r times), ..., λ_k, ..., λ_k (m_k times)}

with Σ_{i=1}^{k} m_i = n, is the set of real and complex eigenvalues to be assigned, the complete parallel algorithm for the case where we have p processors is summarized below.

Step 1: Transform the original system (A, b) to its upper Hessenberg form (H, e_1). (A parallel implementation of this step was done initially, but numerical experiments showed that a sequential implementation has better performance; see Section 4 for details.)

Step 2: Solve in parallel the n Hessenberg systems. Let t be the largest integer less than k/p; then for each processor p_i do:

  if p_i = 1 then a = 1; b = t
  else if p_i = p then a = (p − 1)t; b = n
  else a = (p_i − 1)t; b = p_i t

  for i = a, ..., b do:
    c_1 = Σ 2m_j over complex eigenvalues with labels j ≤ (i − 1)
    c_2 = Σ m_j over real eigenvalues with labels j ≤ (i − 1)
    c = c_1 + c_2
    if λ_i ∈ R then
      if m_i = 1 then
        solve (H − λ_i I) l_c = e_1;  set d_c = 1
      else
        solve (H − λ_i I) l_c = e_1;  set d_c = 1
        for j = c + 1, ..., c + m_i − 1 do:
          solve (H − λ_i I) l_j = e_1 + l_{j−1};  set d_j = 1
    else
      if m_i = 1 then
        solve (H − λ_i I)(H − λ̄_i I) l_c = e_1
        solve (H − λ_i I)(H − λ̄_i I) l_{c+1} = e_2
        set d_c = 0;  d_{c+1} = 1/h(2,1)
      else
        solve (H − λ_i I)(H − λ̄_i I) l_c = e_1
        solve (H − λ_i I)(H − λ̄_i I) l_{c+1} = e_2
        set d_c = 0;  d_{c+1} = 1/h(2,1)
        for j = c + 2, c + 4, ..., c + 2(m_i − 1) do:
          solve (H − λ_i I)(H − λ̄_i I) l_j = e_1 + l_{j−2}
          solve (H − λ_i I)(H − λ̄_i I) l_{j+1} = e_2 + l_{j−1}
          set d_j = 0;  d_{j+1} = 1/h(2,1)

Step 3: Compute f = (1/γ)(L⁻¹)ᵀ d. Then (H − e_1 fᵀ) has the desired eigenvalues.

An optimized routine HESSOLV (supplied by Dr. Allan Hindmarsh) was used to solve the Hessenberg systems. A solution of a linear system Ax = b of order n using a LINPACK or LAPACK routine has a time complexity of O(n³), in contrast with the O(n²) time complexity of the optimized algorithm. The time complexity of the Hessenberg decomposition performed in parallel is O(n³/p + c), where p is the number of processors used and c is the time due to the interprocessor communications. Practical experiments demonstrated that c >> n³/p, and therefore a sequential approach to the Hessenberg decomposition was used to obtain better performance. The overall time complexity of the parallel program is therefore O(n³/p).
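The pairwise real-arithmetic idea for a complex conjugate pair can be checked in isolation. The sketch below is our own derivation rather than the paper's exact recipe: instead of the 1/h(2,1) scaling, we derive the normalization directly from HL − LX = e_1 dᵀ with the 2 × 2 block (7), which gives d_c = 0 and d_{c+1} = −1/b_i when l_c solves the real system (H − λ_i I)(H − λ̄_i I) l_c = e_1 and l_{c+1} = −(1/b_i)(H − a_i I) l_c:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
H = np.triu(rng.standard_normal((n, n)), k=-1)
H[np.arange(1, n), np.arange(n - 1)] += 2.0   # unreduced upper Hessenberg
I = np.eye(n)
e1 = I[:, 0]

# Desired spectrum: the conjugate pairs -1 +/- 2i and -3 +/- 1i.
pairs = [(-1.0, 2.0), (-3.0, 1.0)]            # (a_i, b_i)

L = np.zeros((n, n))
d = np.zeros(n)
c = 0
for a, b in pairs:
    # Real matrix (H - lam I)(H - conj(lam) I) = (H - a I)^2 + b^2 I.
    M = (H - a * I) @ (H - a * I) + b * b * I
    lc = np.linalg.solve(M, e1)               # column c, real arithmetic only
    L[:, c] = lc
    L[:, c + 1] = -(1.0 / b) * (H - a * I) @ lc
    d[c], d[c + 1] = 0.0, -1.0 / b            # makes H L - L X = e1 d^T exact
    c += 2

f = np.linalg.solve(L.T, d)                   # f = L^{-T} d, H - e1 f^T = L X L^{-1}
closed = np.linalg.eigvals(H - np.outer(e1, f))
target = np.array([-1 + 2j, -1 - 2j, -3 + 1j, -3 - 1j])
key = lambda z: (np.round(z.real, 6), np.round(z.imag, 6))
closed_sorted = np.array(sorted(closed, key=key))
target_sorted = np.array(sorted(target, key=key))
```

Only real systems are solved, yet the closed-loop matrix H − e_1 fᵀ receives the complex pairs, which is exactly the point of assigning each eigenvalue together with its conjugate.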

4. Numerical Results

The proposed parallel algorithm was implemented on the hypercube parallel computer INTEL iPSC 860, with eight processors and distributed memory of 8 Mb per processor. The programming language used was FORTRAN 77, enhanced with special commands to deal with communication between the processors. Several examples were tested, both for ill-conditioned and well-conditioned matrices, and a comparison was made with an algorithm that is considered to be one of the best sequential algorithms (coded as program DSEVAS) [13, 14]. The tests can be divided into two categories:
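The parallel structure exploited in these experiments is simply that the n column solves of Step 2 are independent. The sketch below mimics that distribution with a thread pool standing in for the hypercube nodes (illustrative only; it is not the FORTRAN/iPSC implementation, and the worker count p is arbitrary):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(3)
n, p = 8, 4                           # system order, number of "processors"
H = np.triu(rng.standard_normal((n, n)), k=-1)
H[np.arange(1, n), np.arange(n - 1)] += 2.0   # unreduced upper Hessenberg
e1 = np.eye(n)[:, 0]
desired = -np.arange(1.0, n + 1.0)    # distinct real targets -1, ..., -8

def solve_column(lam):
    # One independent shifted Hessenberg system per eigenvalue
    # (the O(n^2) kernel that HESSOLV optimizes in the paper).
    return np.linalg.solve(H - lam * np.eye(n), e1)

with ThreadPoolExecutor(max_workers=p) as pool:
    L = np.column_stack(list(pool.map(solve_column, desired)))

f = np.linalg.solve(L.T, np.ones(n))  # f = L^{-T} d with d = ones
closed = np.sort(np.linalg.eigvals(H - np.outer(e1, f)).real)
```

Because the workers share nothing but the read-only H, the per-column work scales as O(n³/p) overall, matching the complexity claimed for the parallel step.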

No. proc.           | total time | reallocation time | Hess. decomp. time
1                   | 310.840    | 261.862           | 42.925
8                   |  81.815    |  32.830           | 42.930
speed-up (8 procs.) |   3.799    |   7.976           | xxxxxx

Table 1: System (NH_200(2, -1, 5), b_200(0)): performance test with a sequential Hessenberg decomposition.

- Test #1: In this test, the first row of H is set to zero and the resulting matrix is called H'. The eigenvalues of H' to be assigned are taken as those of H. In this case, the feedback vector f should be the same as the first row of the matrix H originally given. The closer f is to the first row, the better the performance of the algorithm.

- Test #2: In this case, we assign an arbitrarily given set of eigenvalues. The eigenvalues of (A − bf) were computed using the EISPACK routines and then compared with the given set. The range of variation of the values of the elements of the feedback vector f was also studied, since the bigger the range, the harder the practical implementation of such a feedback vector.

Example 1: EAP for Wilkinson bidiagonal matrices.

Consider the well-known ill-conditioned Wilkinson matrix of order 20 given by:

A = [ 20   0   0  ...   0   0 ]
    [ 20  19   0  ...   0   0 ]
    [  0  20  18  ...   0   0 ]
    [  0   0  20  ...   0   0 ]
    [  :   :   :  ...   :   : ]
    [  0   0   0  ...  20   1 ]    (8)

with eigenvalues λ(A) = 20, 19, 18, 17, ..., 3, 2, 1. In this case, it is not possible to perform Test #1 since A is triangular. Thus Test #2 was carried out, assigning the eigenvalues to their original values offset by 0.5, that is, assigning them to the set D = 20.5, 19.5, ..., 2.5, 1.5. The calculated assigned eigenvalues of the closed-loop system are: D_calc = {20.5000, 19.5000, 18.5000, 17.5000, 16.5000, 15.5000, 14.4999, 13.5001, 12.5000, 11.5000, 10.5000, 9.5000, 8.5000, 7.5000, 6.5000, 5.5000, 4.5000, 3.5000, 2.5000, 1.5000}. In the case of Wilkinson matrices of order greater than 30, the intrinsic ill-conditioning of the eigenvalues limits the performance of the program, and there are significant errors in the eigenvalues of the feedback system compared with the desired eigenvalues. We also performed tests on the sequential program DSEVAS, and similar results were obtained.

Example 2: Well-conditioned non-Hessenberg systems.

In these tests we considered quasi-tridiagonal, symmetric, non-Hessenberg systems of the form (NH_n(a_1, a_2, a_3), b_n(a_4)), where

NH_n = [ a_1  a_2   0   ...  a_3 ]        b_n = [  1  ]
       [ a_2  a_1  a_2  ...   0  ]              [ a_4 ]
       [  0   a_2  a_1  ...   0  ]              [ a_4 ]
       [  0    0   a_2  ...   0  ]              [  :  ]
       [  :    :    :   ...   :  ]              [ a_4 ]
       [ a_3   0   ...  a_2  a_1 ]              [ a_4 ]    (9)

For these systems, the pair (NH_n, b_n) needs to be transformed into a controller-Hessenberg system before allocating the eigenvalues. Both sequential and parallel codes were experimented with for this task, reaching the conclusion that the Hessenberg decomposition using the Householder or Givens method [8] is an implicitly sequential computation, and therefore these algorithms perform better if executed on a single processor, rather than distributing the computations over several processors. Table 1 shows the performance of the program for a system of order 200 with 1 and 8 processors, using the sequential code for the Hessenberg decomposition.

Example 4: Well-conditioned Hessenberg systems.

In these tests we considered tridiagonal, symmetric, Hessenberg systems of the form (H_n(a_1, a_2), b_n), where

H_n = [ a_1  a_2   0   ...   0  ]        b_n = [ 1 ]
      [ a_2  a_1  a_2  ...   0  ]              [ 0 ]
      [  0   a_2  a_1  ...   0  ]              [ 0 ]
      [  :    :    :   ...   :  ]              [ : ]
      [  0    0   ...  a_2  a_1 ]              [ 0 ]    (10)
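For reproducing experiments of this kind, the test matrices are straightforward to generate. A sketch (our code, not the paper's FORTRAN; note that since the Wilkinson matrix is triangular, its exact spectrum is its diagonal, while its eigenvalue conditioning degrades rapidly with n, which is why the paper sees errors beyond order 30 — so the numerical check below uses a benign small order):

```python
import numpy as np

def wilkinson_bidiagonal(n):
    """Lower-bidiagonal Wilkinson matrix as in (8): diag n, n-1, ..., 1; subdiagonal n."""
    return np.diag(np.arange(n, 0, -1.0)) + np.diag(np.full(n - 1, float(n)), k=-1)

def H_tridiag(n, a1, a2):
    """Tridiagonal symmetric Hessenberg test matrix H_n(a1, a2) as in (10)."""
    return (a1 * np.eye(n)
            + a2 * np.diag(np.ones(n - 1), 1)
            + a2 * np.diag(np.ones(n - 1), -1))

# At n = 10 the Wilkinson eigenvalues are still mildly conditioned, so a
# general-purpose eigensolver recovers the diagonal 10, 9, ..., 1 accurately.
A = wilkinson_bidiagonal(10)
eigs = np.sort(np.linalg.eigvals(A).real)
ok = np.allclose(eigs, np.arange(1.0, 11.0), atol=1e-6)
```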

No. proc.           | total time | reallocation time | Hess. decomp. time
1                   | 3.302      | 3.219             | 0.082
2                   | 1.703      | 1.620             | 0.082
4                   | 0.962      | 0.879             | 0.082
8                   | 0.611      | 0.528             | 0.082
speed-up (8 procs.) | 5.404      | 6.097             | xxxxxx

Table 2: Performance test for the system (H_100(3, -1), b_100).

No. proc.           | total time | reallocation time | Hess. decomp. time
1                   | 54.221     | 53.031            | 1.190
8                   |  8.086     |  6.896            | 1.190
speed-up (8 procs.) |  6.706     |  7.690            | xxxxxx

Table 3: Performance test for the system (H_260(2, -1), b_260).

system tested          | time DSEVAS | time SIPP | maximum speed-up
(H_50(3, -1), b_50)    | 0.489       | 0.463     | 1.056
(H_70(3, -1), b_70)    | 1.260       | 0.226     | 5.575
(H_100(3, -1), b_100)  | 3.521       | 0.611     | 5.763
(H_150(2, -1), b_150)  | 11.543      | 1.585     | 7.283
(H_200(1, -10), b_200) | 26.983      | 3.278     | 8.232
(H_240(2, -1), b_240)  | 46.393      | 6.326     | 7.334
(H_260(2, -1), b_260)  | xxxxxx      | 8.086     | xxxxx

Table 5: Performance comparison between the programs DSEVAS and SIPP, for Hessenberg systems.

Here a_1 and a_2 are arbitrary constants and n is the order of the system. Tables 2 and 3 show the performance of the algorithm for systems of order 100 and 260, respectively.

Comparison with the sequential program DSEVAS. In order to get a fair evaluation of our algorithm, we performed several tests on both our parallel program SIPP and the sequential program DSEVAS [13, 14]. The main objective was to determine the speed-up of our program using 8 processors relative to the sequential program running on one processor of the hypercube. Table 4 shows the performance results for a non-Hessenberg system of order 100.

system tested          | time DSEVAS | time SIPP | maximum speed-up
(H_100(3, -1), b_100)  | 4.342       | 8.325     | 0.522

Table 4: Performance comparison between the programs DSEVAS and SIPP, for a non-Hessenberg system.

The slowness of the parallel program SIPP (Table 4) compared to the sequential program DSEVAS for non-Hessenberg matrices can be easily explained by examining the logic of the latter program. Firstly, the eigenvalues are assigned two at a time: that is, each time the program runs on the system, its order is reduced by two. Therefore it runs faster and faster as the eigenvalues are assigned, whereas our SIPP program does not have an on-line "reduction" of the order of the system. Secondly, the DSEVAS program does not calculate the Hessenberg form of the system explicitly. In fact, the Hessenberg decomposition is done at the same time that each pair of eigenvalues is reallocated, considerably reducing the overall computational effort. Table 5 confirms this analysis, showing the much improved performance of the SIPP program for Hessenberg matrices of orders varying from 50 to 260. According to the data of Table 5, the speed-up of the SIPP program over the DSEVAS program is small for low-order systems, and it increases as the order of the system increases, until it reaches a reasonable value of 8.232 for systems of order 200. The original system was taken to be in Hessenberg form so that the advantage of DSEVAS over SIPP due to optimizations in the Hessenberg decomposition has no influence on the overall performance. It was noticed that the precision of the reallocated eigenvalues obtained by the DSEVAS program starts to deteriorate for systems of order greater than 240, whereas the SIPP program performed very well for systems of order up to 260. Due to memory limitations, we did not perform tests on systems of order greater than 260. We should remark that in practical control applications, it is more important to obtain a realizable feedback vector f than to obtain a precise value for the reallocated eigenvalues. In the great majority of the cases, the SIPP program obtained a feedback vector f with its elements varying over a small range, whereas the elements of the feedback vector f calculated using the DSEVAS program show a considerably greater variation, much greater than that obtained using the SIPP program (see Tables 6 and 7).

5. Conclusion

We presented a parallel algorithm that solves the EAP for a single-input controllable system. The algorithm

is based on the solution of a Sylvester-observer matrix equation and involves solving several independent Hessenberg systems, which is performed in parallel. A comparison of the method with one of the best known sequential algorithms was performed, and the suitability of our parallel approach was demonstrated for the case of large-scale systems. More details on the work reported in this paper can be found in the first author's Master's Thesis [5], and a longer version of this paper is available on request.

system tested          | value of f(1) | value of f(2) | value of f(3:n)
(H_70(3, -1), b_70)    | -3            | 1             | 10^-13
(H_100(3, -1), b_100)  | -3            | 1             | 10^-12
(H_150(2, -1), b_150)  | -2            | 1             | 10^-11
(H_200(1, -10), b_200) | -0.999...     | 9.999...      | 10^-12
(H_240(2, -1), b_240)  | -1.999...     | 0.999...      | 10^-13
(H_260(2, -1), b_260)  | -2            | 1             | 10^-6

Table 6: Accuracy of the components of the feedback vector f calculated by the SIPP program, with respect to the first row of the original system. Notation: f(1), f(2) and f(3:n) are, respectively, the first element, the second element, and the greatest magnitude among the elements from 3 to n.

system tested          | value of f(1) | value of f(2) | value of f(3:n)
(H_50(3, -1), b_50)    | -2.99...      | 0.999...      | 10^-12
(H_70(3, -1), b_70)    | -2.99...      | 0.999...      | 10^-11
(H_100(3, -1), b_100)  | -2.99...      | 0.999...      | 10^-11
(H_150(2, -1), b_150)  | -2            | 1             | 10^-6
(H_200(1, -10), b_200) | -0.999...     | 9.999...      | 10^-8
(H_240(2, -1), b_240)  | -1.457...     | 2.672...      | 10^-2

Table 7: Accuracy of the components of the feedback vector f calculated by the DSEVAS program, with respect to the first row of the original system. Notation: f(1), f(2) and f(3:n) are, respectively, the first element, the second element, and the greatest magnitude among the elements from 3 to n.

6. Acknowledgments

We would like to thank Dr. George Miminis and Dr. Allan Hindmarsh for kindly making available the FORTRAN source code of the sequential programs DSEVAS and HESSOLV, respectively, and Professor F. C. Mota for help during the implementation of the program.

7. References

[1] M. Arnold and B. N. Datta, "An Algorithm for the Multi-input Eigenvalue Assignment Problem," IEEE Trans. Automatic Control, vol. AC-35, pp. 1149-1152, 1990.

[2] S. P. Bhattacharyya and E. deSouza, "Pole Assignment via Sylvester's Equation," Systems & Control Letters, pp. 261-263, 1983.
[3] R. Bru, J. Cerdan, and A. M. Urbano, "An Algorithm for the Multi-input Pole Assignment Problem," Lin. Alg. Appl., vol. 199, pp. 427-444, 1994.
[4] R. Bru, J. Mas, and A. Urbano, "An Algorithm for the Single-input Pole Assignment Problem," SIAM J. Matrix Anal. Appl., vol. 15, no. 2, pp. 393-407, 1994.
[5] Murilo G. Coutinho, "A Parallel Algorithm for the Eigenvalue Assignment Problem in Linear Systems," M.Sc. Dissertation, Federal University of Rio de Janeiro, July 1992 (in Portuguese).
[6] Biswa N. Datta, "An algorithm to assign eigenvalues in a Hessenberg matrix: Single-input case," IEEE Trans. on Automatic Control, vol. AC-32, no. 5, pp. 414-417, 1987.
[7] Biswa N. Datta, "Large-Scale and Parallel Matrix Computations in Linear Control: a Tutorial," Proc. American Control Conf., vol. 1, pp. 137-141, 1992.
[8] Biswa N. Datta, Numerical Linear Algebra and Applications, Brooks/Cole Publishing Company, Pacific Grove, 1995.
[9] Biswa N. Datta, "Linear and Numerical Linear Algebra Problems in Control Theory: Some Research Problems," Lin. Alg. Appl., vol. 197/198, pp. 755-790, 1994.
[10] Karabi Datta, "The Matrix Equation XA − BX = R and its Applications," Lin. Alg. Appl., vol. 109, pp. 91-105, 1988.
[11] E. DeSouza and S. P. Bhattacharyya, "Controllability, Observability and the Solution of AX − XB = C," Lin. Alg. Appl., vol. 39, pp. 167-188, 1981.
[12] J. Kautsky, N. K. Nichols and P. Van Dooren, "Robust Pole Assignment in Linear State Feedback," Int. J. Control, vol. 41, pp. 1129-1155, 1985.
[13] G. S. Miminis and C. C. Paige, "An Algorithm for Pole Assignment of Time Invariant Linear Systems," Int. J. Control, vol. 35, no. 2, pp. 341-354, 1982.
[14] G. S. Miminis and C. C. Paige, "A Direct Algorithm for Pole Assignment of Time-Invariant Multi-input Linear Systems using State Feedback," Automatica, vol. 24, no. 3, pp. 343-356, 1988.
[15] R. V. Patel and P. Misra, "Numerical Algorithms for Eigenvalue Assignment by State Feedback," Proc. IEEE, vol. 72, pp. 1755-1764, 1984.
[16] P. Petkov, N. Christov and M. Konstantinov, "Synthesis of Linear Systems with Desired Equivalent Form," J. Assoc. Comput. Mach., vol. 6, pp. 27-35, 1980.
[17] A. Varga, "A Schur Method for Pole Assignment," IEEE Trans. on Automatic Control, vol. AC-26, no. 2, pp. 517-519, 1981.
[18] P. Van Dooren and M. Verhaegen, "On the use of unitary state-space transformations," Contemp. Math., vol. 47, pp. 447-463, 1981.