Extended abstract appeared in the proceedings of ACM ISSAC'94.

Algorithms for computing selected solutions of polynomial equations

Dinesh Manocha

Department of Computer Science, University of North Carolina, Chapel Hill, NC 27599, USA. [email protected]

Abstract

We present efficient and accurate algorithms to compute solutions of zero-dimensional multivariate polynomial equations in a given domain. The total number of solutions corresponds to the Bezout bound for dense polynomial systems or the Bernstein bound for sparse systems. In most applications the actual number of solutions in the domain of interest is much lower than the Bezout or Bernstein bound. Our approach is based on a global symbolic formulation of the problem using resultants and matrix computations, localized to find selected solutions using numerical computations. The problem of finding roots is reduced to computing eigenvalues of a generalized companion matrix, and we use the structure of the matrix to compute only the solutions in the domain of interest. The resulting algorithm combines symbolic preprocessing with numerical iterations and works well in practice. We discuss its performance on a number of applications.

1 Introduction

Finding roots of polynomial systems is a classical problem in the computational literature. Various algorithms have been proposed using symbolic and numeric methods. However, when it comes to practice, none of these algorithms gives the level of performance desired in applications involving robotics, computer graphics and geometric modeling, molecular modeling, computer vision etc. In most of these applications, we are interested in finding roots in a specific domain of interest. Typically this is a subset of the real domain. For example, in computer graphics and geometric modeling, piecewise rational functions are used for spline representations of curves, surfaces and volumes, and many geometric operations on the models, like intersection, ray-tracing and computation of closest features, reduce to computing the solutions of a multivariate polynomial system in a given domain. For Bezier surfaces, the domain for each parameter defining a surface is [0, 1] (Hoffmann 1989). Similarly, for inverse kinematics of a robot manipulator, the domain of each of the joint variables is restricted to its joint range. Other examples of such systems arise in problems related to conformation search in molecular modeling (Go and Scheraga 1970), structure from motion in computer vision (Faugeras 1993) and calibration problems in computer vision and virtual reality (Faugeras 1993, Bishop and Fuchs et al. 1992). It turns out that the total number of solutions of these equations (say M), which typically corresponds to the Bezout bound for dense polynomial systems or the Bernstein bound for sparse polynomial systems, is high, but the total number of solutions in the associated domain is rather low. For example, the average number of real solutions of a dense polynomial system is √M, and in most applications we are interested in the real solutions only.

In this paper we present efficient and accurate algorithms for finding solutions of a zero-dimensional multivariate polynomial system in a given domain. Given n equations in n unknowns, the domain is typically a subset of C^n, where C represents the set of complex numbers. The algorithm uses resultants to linearize a non-linear polynomial system and reduces the problem to computing eigenvalues of a non-linear matrix polynomial. In particular, the solutions of the non-linear system are directly related to the eigenvalues and eigenvectors of the matrix, and the problem of computing selected roots reduces to computing selected eigenvalues of the corresponding matrix polynomial. The resulting algorithm makes use of the matrix formulation, performs subspace and inverse iterations, and computes all the solutions in the corresponding domain. The overall approach is iterative in nature. In practice, we have been able to obtain speed-ups of the order of M/k over the best known earlier algorithms, where k is the total number of solutions in the given domain.

Previous Work: The problem of finding roots of a multivariate system has been extensively studied in the literature. However, the current viewpoint is that there are no good, general methods for solving systems of more than one nonlinear equation (Press et al. 1990). For systems of linear equations, good algorithms and their implementations, in the form of linear algebra packages like LINPACK and LAPACK, are known and widely used. At the same time, a great deal of work has been done in the symbolic and numeric literature on finding roots of a univariate polynomial.
Current algorithms for solving multivariate zero-dimensional polynomial systems can be classified into iterative methods, homotopy methods, or symbolic elimination followed by root isolation. Iterative techniques like Newton's method need a good initial guess to each solution, which is non-trivial for most of these applications. Homotopy methods are based on tracing paths in the complex space (Garcia and Zangwill 1979). Their complexity for dense polynomial systems has been analyzed by Shub and Smale (1993), and recently homotopy algorithms for sparse systems have been described by Huber and Sturmfels (1992). Asymptotically speaking, the complexity of homotopy methods on well-conditioned inputs is linear in the number of solutions M. However, their weakness is on singular problems, where they may diverge or run intolerably slowly. This has often been observed in practice, as summarized by Horn (1991), who applied them to the structure from motion problem in computer vision: "one problem with continuation methods is that, while in theory paths of roots should never cross, in practice they often come close enough to permit path jumping, unless the path is followed with impractically tight tolerances". Moreover, one needs to trace all M paths, and it is difficult to restrict the computation to selected solutions. They are relatively slow in practice even on systems with small M. For example, the best algorithm based on homotopy methods for the inverse kinematics of 6R manipulators (where M = 16) takes about 10 seconds on an IBM 3090 (Wampler and Morgan 1991).

Techniques based on symbolic elimination reduce the problem to finding roots of a univariate polynomial and can be combined with algorithms for computing roots of a polynomial in a given domain. There are three fundamental techniques for symbolic elimination: multipolynomial resultants, Gröbner bases and Ritt-Wu's method. However, on systems with large M, algorithms based on symbolic elimination suffer from accuracy and efficiency problems. The problem of finding roots of high degree univariate polynomials can be numerically ill-conditioned (Wilkinson 1959), and therefore the resulting algorithm needs to use exact or multiple-precision arithmetic, which slows it down considerably. Algorithms based on multipolynomial resultant computation have been presented in (Canny 1988, Manocha 1993), based on classical formulations of resultants for dense systems (Macaulay 1902, Salmon 1885, Jouanolou 1991) and recent formulations for sparse systems (Sturmfels and Zelevinsky 1994, Canny and Emiris 1993). They work well on systems with small M. Over the last two decades, Gröbner bases have been extensively studied in the literature and specialized to zero-dimensional systems as well (Buchberger 1985, Lakshman 1990, Lazard 1992, Faugere et al. 1993). Gröbner basis algorithms have a complexity that depends on the effective degree, and so they work well on systems with few roots. This is one reason they have been considered seriously as a practical equation-solving tool. But when their complexity is measured as a function of the number of solutions, their performance is poor. Moreover, in practice, computing the Gröbner basis of 3-4 polynomials in 3-4 variables of moderate degrees can sometimes take an incredible amount of time, if the system is able to complete the computation at all. Many a time the computation goes on until the machine runs out of all the virtual space (which is of the order of a few gigabytes). This fact has been highlighted in the context of geometric modeling applications by Hoffmann (1990).

Techniques combining resultants with matrix computations to solve polynomial systems have been presented in (Lazard 1981, Auzinger and Stetter 1986, Manocha 1992). In particular, the problem of root finding is reduced to the eigendecomposition of a generalized companion matrix.
The methods of (Lazard 1981, Auzinger and Stetter 1986) rederive a resultant formulation equivalent to Macaulay's formulation (Macaulay 1902) and result in a matrix pencil. Macaulay's resultant is expressed as a ratio of two determinants, Det(A)/Det(D). Many a time Det(D) is zero even though the resultant is not zero. The algorithm in (Auzinger and Stetter 1986) only works if D is non-singular; otherwise the roots can be computed by computing n distinct eigenvalue formulations (Cellini et al. 1991), and the overall algorithm takes O(nN^3) time, where n is the number of unknowns. This method is relatively expensive. The algorithm in (Manocha 1992) makes use of single determinant resultant formulations for polynomial systems with up to 5-6 equations, and of general formulations for sparse and dense polynomial systems expressed as a ratio of two determinants. The resulting algorithm corresponds to computing the eigenvalues of a non-linear matrix polynomial. On problems with small M, it performs very well. For example, it takes only 11 milliseconds to compute all the solutions for the inverse kinematics of a general 6R manipulator (Manocha and Canny 1992). In this paper, we improve this algorithm for systems with large M by computing only the solutions in the domain of interest and making use of the structure of the matrices.

It is also possible to combine multivariate Sturm theory (Pedersen 1991, Milne 1992) with interval arithmetic or iterative methods to compute all the roots in a subset of the real domain. However, not much is known about their performance on large systems. In general, techniques based on interval arithmetic are slow due to linear convergence. Furthermore, the computation of Sturm functions requires exact arithmetic and symbolic elimination, which can be slow in practice. Methods based purely on interval analysis (Moore 1979) are slow in practice as well.

The rest of the paper is organized in the following manner. In Section 2 we review the literature on resultant formulations and linear algebra. We show the relationship between the roots of a multivariate system and the eigenvalues and eigenvectors of a generalized companion matrix in Section 3. In Section 4, we demonstrate how inverse power iterations can be used to compute the eigenvalues of this matrix and at the same time prune the domain. We present the overall algorithm in Section 5, making use of the structure of the matrices. We describe its implementation and performance in Section 6.

2 Multipolynomial Resultants and Matrix Formulation

The relationship between the roots of a univariate polynomial and the eigenvalues of its associated companion matrix is well known in linear algebra. Given a monic univariate polynomial

    f(x) = a_0 + a_1 x + a_2 x^2 + \ldots + a_{n-1} x^{n-1} + x^n,

its roots correspond to the eigenvalues of its associated companion matrix:

    M = \begin{pmatrix}
        0 & 1 & 0 & \cdots & 0 & 0 \\
        0 & 0 & 1 & \cdots & 0 & 0 \\
        \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
        0 & 0 & 0 & \cdots & 0 & 1 \\
        -a_0 & -a_1 & -a_2 & \cdots & -a_{n-2} & -a_{n-1}
    \end{pmatrix}.
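To make the correspondence concrete, the following small Python/numpy sketch (the helper name is ours, introduced only for illustration) forms this companion matrix for a monic polynomial and returns the roots as eigenvalues:

    import numpy as np

    def companion_roots(a):
        """Roots of the monic polynomial x^n + a[n-1] x^{n-1} + ... + a[0],
        computed as the eigenvalues of its companion matrix."""
        n = len(a)
        M = np.zeros((n, n))
        M[:-1, 1:] = np.eye(n - 1)    # 1's on the superdiagonal
        M[-1, :] = -np.asarray(a)     # last row: -a_0, ..., -a_{n-1}
        return np.linalg.eigvals(M)

    # Example: x^2 - 3x + 2 = (x - 1)(x - 2), coefficients (a_0, a_1)
    print(sorted(companion_roots([2.0, -3.0]).real))   # -> [1.0, 2.0]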

We present a similar formulation for a zero-dimensional multivariate system in terms of a generalized companion matrix.


2.1 Matrix Computations

A square matrix M is singular if Det(M) = 0. Moreover, the kernel of a singular matrix is the set of vectors v such that Mv = 0; given an n × n matrix, the vectors in the kernel form a vector space. The dimension of the kernel, say k, and the rank of the matrix, say r, are related by

    n = k + r.

Given a square matrix M, its eigenvalues λ are the roots of the equation

    M x = \lambda x.

Similarly, given two square matrices M_1 and M_2, the eigenvalues of the matrix pencil \lambda M_1 - M_2 are the roots of

    \lambda M_1 x = M_2 x.

In case Det(\lambda M_1 - M_2) vanishes identically, the given pencil is a singular pencil; otherwise it is a regular pencil. The eigenvalues also correspond to the roots of the characteristic polynomial of a matrix, Det(M - \lambda I). The algebraic multiplicity of an eigenvalue λ_0 is the multiplicity of λ_0 as a root of Det(M - \lambda I); the geometric multiplicity of λ_0 is the dimension of the kernel of M - λ_0 I.
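For illustration, such generalized eigenvalues can be computed directly with standard library routines. In the following Python/scipy sketch, eig(a, b) solves a v = λ b v, so the eigenvalues of the pencil \lambda M_1 - M_2 are obtained from eig(M2, M1):

    import numpy as np
    from scipy.linalg import eig

    M1 = np.array([[2.0, 0.0], [0.0, 1.0]])
    M2 = np.array([[2.0, 1.0], [0.0, 3.0]])

    # Generalized eigenvalues of the pencil lambda*M1 - M2 satisfy
    # M2 v = lambda M1 v, i.e., det(lambda*M1 - M2) = 0.
    vals, vecs = eig(M2, M1)
    print(np.sort(vals.real))   # -> [1. 3.]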

2.2 Resultant Formulations

In this section, we review Macaulay's formulation for computing the resultant of a system of polynomial equations (Macaulay 1902). It is based on the Bezout bound of the given system of equations. Let us consider the system of equations:

    f_1(x_1, x_2, \ldots, x_n) = 0
    f_2(x_1, x_2, \ldots, x_n) = 0
        \vdots                                      (1)
    f_n(x_1, x_2, \ldots, x_n) = 0.

Moreover, let f_i be a homogeneous polynomial of degree d_i and

    d = 1 + \sum_{i=1}^{n} (d_i - 1).

Let us use a vector α to denote the exponents of a monomial in x_1, x_2, ..., x_n. For example, if α = (α_1, α_2, ..., α_n), then the corresponding monomial is

    x^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}.

The set of all monomials of degree d in n variables is

    X_d = \{ x^\alpha \mid \alpha_1 + \alpha_2 + \ldots + \alpha_n = d \}.

The total number of monomials in the set corresponds to N, where

    N = \binom{n + d - 1}{n - 1}.

For example, for a system of three quadrics (n = 3, all d_i = 2) we have d = 4 and N = \binom{6}{2} = 15. Given the monomials in the set X_d, we partition them into n sets:

    X_d^i = \{ x^\alpha \mid \alpha_i \ge d_i \text{ and } \alpha_j < d_j \text{ for all } j < i \}.

Given the sets of monomials X_d^i, corresponding to each monomial x^\alpha \in X_d^i we construct the polynomial

    \frac{x^\alpha}{x_i^{d_i}} f_i(x_1, x_2, \ldots, x_n),

and denote by F_i the resulting set of polynomials. Each polynomial in the set F_i is a homogeneous polynomial of degree d, and in all there are N such polynomials obtained from the original system of equations. Given these polynomials, Macaulay's formulation constructs an N × N matrix A whose columns are indexed by the monomials in X_d and whose rows correspond to the polynomials in the F_i. It turns out that the resultant of the original system of polynomial equations divides Det(A). In fact, the extraneous factor in Det(A) can be expressed as the determinant of a minor of A. We will represent this minor as D; details of its construction are given in (Macaulay 1902).
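As a small illustration of this partition (the helper below is ours, not part of Macaulay's construction), the following Python sketch enumerates X_d and assigns each monomial x^α to the set X_d^i with the smallest index i satisfying α_i ≥ d_i, which is exactly the membership condition above:

    from itertools import product

    def macaulay_partition(degrees):
        """Partition the exponent vectors of degree d into the sets X_d^i.
        degrees = (d_1, ..., d_n); returns d and the n sets."""
        n = len(degrees)
        d = 1 + sum(di - 1 for di in degrees)
        monos = [a for a in product(range(d + 1), repeat=n) if sum(a) == d]
        parts = [[] for _ in range(n)]
        for a in monos:
            # smallest i with alpha_i >= d_i; one always exists since |alpha| = d
            i = next(i for i in range(n) if a[i] >= degrees[i])
            parts[i].append(a)
        return d, parts

    d, parts = macaulay_partition((2, 2, 2))   # three quadrics in 3 variables
    print(d, [len(p) for p in parts])          # -> 4 [6, 5, 4]; sizes sum to N = 15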

2.3 Sparse Polynomial Systems

A polynomial system is sparse if many of its coefficients, compared to a generic polynomial of that degree, are zero. Sparse polynomial systems are characterized by low effective degree: the total number of solutions is much lower than the Bezout bound. In the last two decades, Gel'fand and his colleagues began the study of discriminants and resultants of sparse polynomial systems (Gelfand et al. 1988, Gelfand et al. 1990). Sparseness leads to a lowering of the effective degree, and sparse elimination theory provides direct methods for proving bounds on the number of solutions and computing them efficiently (Sturmfels 1991). Many of the polynomial systems arising from problems in robotics, modeling, vision and molecular modeling are in fact sparse, and their total number of solutions is much lower than the Bezout bound. An upper bound on the number of solutions, derived from the set of non-zero coefficients, is called a Bernstein bound. Bernstein showed that this bound is exact if all the coefficients of the polynomial system are generic (Bernstein 1975). Resultant formulations based on the Bernstein bound are highlighted for multi-homogeneous systems in (Sturmfels and Zelevinsky 1994) and for general polynomial systems in (Canny and Emiris 1993). In particular, Canny and Emiris (1993) derive a matrix equivalent to the A of Macaulay's formulation; the extraneous factor can be computed from the GCD of the determinants of n such matrices.


3 Generalized Companion Matrix

In this section, we highlight the construction of the generalized companion matrix of a system of polynomial equations. Furthermore, we show how the roots of the original polynomial system can be extracted from the eigendecomposition of the generalized companion matrix. The analysis presented in this section is based on Macaulay's resultant formulation and the Bezout bound of the given polynomial system; it generalizes easily to resultants of sparse polynomial systems.

Consider a system of n homogeneous polynomial equations in n + 1 unknowns:

    g_1(x_1, x_2, \ldots, x_n, x_{n+1}) = 0
    g_2(x_1, x_2, \ldots, x_n, x_{n+1}) = 0
        \vdots                                      (2)
    g_n(x_1, x_2, \ldots, x_n, x_{n+1}) = 0.

Let g_i be a polynomial of degree d_i, and assume that this system is zero-dimensional. Furthermore, it is assumed that all d_i ≥ 2. In case we are given a linear equation, it can be used to eliminate one of the variables; after elimination we obtain a system of n − 1 equations in n unknowns such that each equation has degree at least two. Given the system of equations (2), we add an additional polynomial:

    g_{n+1}(x_1, x_2, \ldots, x_{n+1}) = u_1 x_1 + u_2 x_2 + \ldots + u_{n+1} x_{n+1}.

Given the system of n + 1 equations g_i, we compute the (numerator) matrix A of Macaulay's formulation (as described in the previous section). Let N correspond to the order of the matrix; its entries are polynomials in the u_i's. The resultant of g_1, ..., g_{n+1} is the U-resultant of the equations (2) (Waerden 1950). Based on the properties of Macaulay's formulation, the original system of non-linear equations has been linearized into the form

    A X_d = 0,                                      (3)

where X_d is a vector consisting of all monomials of degree d. Furthermore, in this case A can be expressed as

    A = \begin{pmatrix} P & Q \\ G(u_1, \ldots, u_n) & H(u_1, u_2, \ldots, u_{n+1}) \end{pmatrix},

where the entries of P and Q correspond to the coefficients of g_1, ..., g_n, and the entries of G(u_1, ..., u_n) and H(u_1, u_2, ..., u_{n+1}) are linear polynomials in the unknowns. Let us specialize u_1 = 1, u_2 = 0, ..., u_n = 0, and define R = G(1, 0, ..., 0) and S(u_{n+1}) = H(1, 0, ..., 0, u_{n+1}). The matrix S(u_{n+1}) can be expressed as

    S(u_{n+1}) = u_{n+1} I + T,

where I is the identity matrix and the entries of T consist of 0's and 1's only. S(u_{n+1}) and T are square matrices of order M = d_1 d_2 \cdots d_n, the Bezout bound of the original system. P is a square matrix of order N − M; Q and R are rectangular matrices of appropriate order. Let us also partition the set of monomials X_d into two parts,

    \bar{X}_d = X_d^1 \cup X_d^2 \cup \ldots \cup X_d^n

and X_d^{n+1}. As a result, the linear system (3) can be expressed as

    \begin{pmatrix} P & Q \\ R & u_{n+1} I + T \end{pmatrix}
    \begin{pmatrix} \bar{X}_d \\ X_d^{n+1} \end{pmatrix} = 0.      (4)

The lower matrix D of Macaulay's formulation is a minor of P. Let us initially consider the case when P is non-singular, and let

    M = T - R P^{-1} Q.

The determinant of A is a polynomial in u_{n+1} of degree M, the specialized U-resultant. Furthermore, it can be decomposed into M linear factors of the form

    Det(A) = (\alpha_{1,1} + \alpha_{n+1,1} u_{n+1}) \cdots (\alpha_{1,M} + \alpha_{n+1,M} u_{n+1}),

where each (α_{1,i}, α_{n+1,i}) corresponds to the (x_1, x_{n+1}) components of a solution of g_1, ..., g_n. This follows from the properties of the U-resultant (Waerden 1950). We compute these roots from the eigendecomposition of M, based on the following theorem:

Theorem 3.1 There is a one-to-one correspondence between the solutions of a generic polynomial system g_1, ..., g_n and the eigenvalues and eigenvectors of M.

Proof: The original system has M roots in the complex projective space, counted properly with respect to multiplicity. Let (α_1, α_2, ..., α_n, α_{n+1}) be a solution of the original system; therefore, g_i(α_1, α_2, ..., α_n, α_{n+1}) = 0. Let us substitute this solution into the set of monomials X_d and obtain the resulting vector

    V = \begin{pmatrix} V_1 \\ V_2 \end{pmatrix}
      = \begin{pmatrix} \bar{X}_d \\ X_d^{n+1} \end{pmatrix}\Big|_{x_1 = \alpha_1,\, x_2 = \alpha_2,\, \ldots,\, x_{n+1} = \alpha_{n+1}}.

It follows from the resultant formulation that

    P V_1 + Q V_2 = 0,

and from the properties of the U-resultant,

    \alpha_1 R V_1 + \alpha_{n+1} V_2 + \alpha_1 T V_2 = 0.

After eliminating V_1 from these two sets of equations we obtain

    -\alpha_1 R P^{-1} Q V_2 + \alpha_{n+1} V_2 + \alpha_1 T V_2 = 0.

This is equivalent to

    (\alpha_{n+1} I + \alpha_1 (T - R P^{-1} Q)) V_2 = 0.

Thus, (−α_{n+1} : α_1) are the projective coordinates of an eigenvalue of M, and V_2 is the corresponding eigenvector. As a result, corresponding to each solution of the original system of equations, M has a corresponding eigenvalue and eigenvector. The total number of solutions of the original system corresponds exactly to the number of eigenvalues of M; as a result, there is a one-to-one relationship between the roots of the original system of equations and the eigendecomposition of M. This relationship is obvious in both directions if each eigenvalue has algebraic multiplicity one: in that case, there is a unique eigenvector corresponding to each eigenvalue and its components are related to the roots as shown above. If the algebraic multiplicity is greater than one and the geometric multiplicity is one, it follows that the corresponding root is a higher multiplicity root of the original system g_1, g_2, ..., g_n; in fact, the multiplicity of the root is equal to the algebraic multiplicity of the eigenvalue. For generic polynomial systems obtained after linear transformations, the geometric multiplicity should not be greater than one. In case it is greater than one, the corresponding eigenvectors span a space of dimension greater than one, and all the vectors of the form V_2 obtained as a function of the roots (as shown above) belong to that vector space. Q.E.D.

An analysis similar to the one in the above proof has been presented in (Auzinger and Stetter 1986) as well. The formulation shown in Theorem 3.1 is constructive and can be used to compute the roots of the original polynomial system. Let us assume that each eigenvalue has geometric multiplicity one. The eigenvalue can be used to extract the values of x_1 and x_{n+1} for each root of the given system. Furthermore, the rest of the components of each root satisfy the relation:

    X_d^{n+1} = p V_2,

where V_2 is the corresponding eigenvector and p is an unknown scalar. Since each of the g_i is a polynomial of degree greater than one, X_d^{n+1} contains the monomials x_1 x_{n+1}^{d-1}, x_2 x_{n+1}^{d-1}, ..., x_n x_{n+1}^{d-1} (as well as x_{n+1}^d). Thus, x_2, x_3, ..., x_n can be computed by taking ratios of the corresponding elements of the eigenvector.

Let us consider the case when an eigenvalue has geometric multiplicity k > 1, and let a basis of the space of eigenvectors be represented as V_{2,1}, V_{2,2}, ..., V_{2,k}. In such a case, the actual relationship between all the components of a root and the basis is given by

    X_d^{n+1}\big|_{x_1 = \alpha_1,\, x_{n+1} = \alpha_{n+1}} = p_1 V_{2,1} + p_2 V_{2,2} + \ldots + p_k V_{2,k},

where the p_i's are scalars. The entries of X_d^{n+1} are unknowns in x_2, x_3, ..., x_n, and the p_i's are unknown scalars as well. To solve this system we need to formulate a system of k + n − 1 equations in the k + n − 1 unknowns by picking k + n − 1 independent rows of this vector equation.

Given a polynomial system, a generic transformation will result in eigenvalues of geometric multiplicity one. Another way to circumvent this problem is to use a generic specialization in the U-resultant polynomial of the form u_1 = γ_1, u_2 = γ_2, ..., u_n = γ_n, where the γ_i are chosen randomly. In such cases, each eigenvalue corresponds to the associated combination of the coordinates of a root, and based on the eigenvalue we know whether the corresponding root is in the affine space or at infinity. All the coordinates x_1, x_2, ..., x_n can be extracted from the eigenvector as shown above.

This transformation of a non-linear polynomial system to an eigenvalue problem works well if the submatrix P in A is non-singular. In case P is singular, we obtain a singular matrix pencil. Auzinger and Stetter (1986) suggest perturbing the input polynomial system such that the corresponding submatrix P in the resulting Macaulay matrix is non-singular. The roots of the original polynomial system are then obtained using homotopy methods, with the roots of the perturbed system, computed using the eigendecomposition, serving as the starting points. A generic perturbation should in theory result in a non-singular submatrix P; in practice, however, one may have to try a few perturbations before obtaining a system for which Macaulay's formulation results in non-singular matrices. Furthermore, homotopy methods have been found to be relatively slow in practice, and it is non-trivial to generalize this approach to sparse polynomial systems.
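The extraction step can be sketched as follows, assuming geometric multiplicity one. The index maps idx_xi_w and idx_w are hypothetical bookkeeping of ours: they record where the monomials x_i x_{n+1}^{d-1} and x_{n+1}^d ended up in the ordering of X_d^{n+1} when A was constructed.

    import numpy as np

    def roots_from_eigvecs(M, idx_xi_w, idx_w, n):
        """Sketch: recover affine roots from the eigendecomposition of the
        generalized companion matrix M, assuming each eigenvalue has
        geometric multiplicity one and dehomogenizing with x_{n+1} = 1."""
        vals, vecs = np.linalg.eig(M)
        roots = []
        for t, v in zip(vals, vecs.T):
            # x_i is a ratio of the eigenvector entries for x_i*x_{n+1}^{d-1}
            # and x_{n+1}^d, since both monomials belong to X_d^{n+1}
            x = [v[idx_xi_w[i]] / v[idx_w] for i in range(n)]
            roots.append((t, x))
        return roots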

3.1 Singular Matrix Pencil

In this section we consider the case when the matrix A, shown in (3), is singular. In case the total number of non-trivial solutions of a polynomial system is less than the Bezout bound, Macaulay's formulation always results in such matrices. The fact that A is singular implies that it has a non-trivial kernel; let its kernel be of dimension k and represented as W, so that if a vector w ∈ W, then Aw = 0. Let us specialize the u_i's as shown in the previous section, represent A as the matrix pencil

    A = \begin{pmatrix} P & Q \\ R & T \end{pmatrix} + u_{n+1} \begin{pmatrix} 0 & 0 \\ 0 & I \end{pmatrix},

and compute its eigenvalues and eigenvectors by treating it as a generalized eigenvalue problem (Golub and Van Loan 1989). The fact that A has a non-trivial kernel implies that the resulting eigenvectors are not unique, and therefore their elements may not correspond to power products of the components of a root. As a result, we cannot recover all the roots by taking ratios of the elements of the eigenvectors. In this section, we highlight a different construction of the generalized companion matrix based upon taking minors of A. Let the kernel of A have dimension k, and let {W_1, W_2, ..., W_k} be a basis for that kernel. We define a minor of A in the following manner:

    \bar{A} = \begin{pmatrix} \bar{P} & \bar{Q} \\ \bar{R} & u_{n+1} I + T \end{pmatrix}.      (5)

The fact that A is singular implies that P is singular as well. Let \bar{P} be a maximal non-vanishing square minor of P; \bar{Q} is a minor of Q and \bar{R} is a minor of R, obtained by deleting the corresponding rows and columns.

Theorem 3.2 If the kernel of A is independent of u_{n+1}, then the roots of the original polynomial system can be computed by taking ratios of the elements of the eigenvectors of \bar{A}.

Proof: Let us express any vector in the kernel of A as

    W_i = \begin{pmatrix} W_{i,1} \\ W_{i,2} \end{pmatrix},

where W_{i,2} is an M × 1 vector. Furthermore, the entries of W_i are independent of u_{n+1}. This implies that

    P W_{i,1} + Q W_{i,2} = 0

and

    R W_{i,1} + u_{n+1} W_{i,2} + T W_{i,2} = 0.

The second equation is valid for all such u_{n+1}, and that is possible if and only if W_{i,2} = 0. Therefore,

    P W_{i,1} = 0,    R W_{i,1} = 0,

and W_{i,1} is in the kernel of P. Given a solution (α_1, α_2, ..., α_n, α_{n+1}) of the original system, it follows that

    V = \begin{pmatrix} V_1 \\ V_2 \end{pmatrix}
      = \begin{pmatrix} \bar{X}_d \\ X_d^{n+1} \end{pmatrix}\Big|_{x_1 = \alpha_1,\, x_2 = \alpha_2,\, \ldots,\, x_{n+1} = \alpha_{n+1}}

is an eigenvector of A. Since A corresponds to a singular pencil, its eigenvectors have the form

    V + \sum_{i=1}^{k} \gamma_i W_i
      = \begin{pmatrix} V_1 + \sum_{i=1}^{k} \gamma_i W_{i,1} \\ V_2 \end{pmatrix}.

Thus, the lower M entries of any eigenvector of A correspond to V_2. Q.E.D.

This theorem is constructive and useful for computing the solutions of the original equations. Furthermore, the eigenvectors of \bar{M} = T - \bar{R} \bar{P}^{-1} \bar{Q} correspond to V_2 and can be used for computing the roots.

3.2 Single Determinant Resultant Formulations

Many resultant formulations are expressed as a single determinant as opposed to a ratio of determinants. This includes Sylvester's, Cayley's and Bezout's formulations for two polynomial equations (Salmon 1885), Dixon's formulation for three or four polynomial equations (Dixon 1908) and many other cases highlighted in (Morley and Coble 1927). Similarly, for multi-homogeneous systems, a single determinant formulation based on the Bernstein bound is highlighted in (Sturmfels and Zelevinsky 1994). Given a system of n equations in n unknowns, we eliminate n − 1 unknowns from the given equations and express the resultant as a matrix polynomial in one unknown. This can be expressed as

    M(x) = M_d x^d + M_{d-1} x^{d-1} + M_{d-2} x^{d-2} + \ldots + M_0,

where M_i is an m × m matrix with numeric entries and x is one of the unknowns. Typically

    M = md

corresponds to the Bezout or Bernstein bound of the given system. The value of d is a function of the degrees of the polynomial equations and of the resultant formulation being used. Given a non-linear matrix polynomial, we linearize the problem using the following theorem:

Theorem 3.3 (Manocha 1992) Given the matrix polynomial M(x), the roots of the polynomial corresponding to its determinant are the eigenvalues of the generalized system C_1 x + C_2, where

    C_1 = \begin{pmatrix}
        I_m & 0 & \cdots & 0 & 0 \\
        0 & I_m & \cdots & 0 & 0 \\
        \vdots & \vdots & \ddots & \vdots & \vdots \\
        0 & 0 & \cdots & I_m & 0 \\
        0 & 0 & \cdots & 0 & M_d
    \end{pmatrix}      (6)

and

    C_2 = \begin{pmatrix}
        0 & -I_m & 0 & \cdots & 0 \\
        0 & 0 & -I_m & \cdots & 0 \\
        \vdots & \vdots & \vdots & \ddots & \vdots \\
        0 & 0 & 0 & \cdots & -I_m \\
        M_0 & M_1 & M_2 & \cdots & M_{d-1}
    \end{pmatrix},      (7)

where 0 and I_m are m × m null and identity matrices, respectively.

In case the matrix M_d is non-singular, this problem can be reduced to a standard eigenvalue problem. It turns out that the eigenvalues of the matrix pencil C_1 x + C_2 correspond to the x-coordinates of the solutions. Moreover, for regular matrix pencils, the rest of the coordinates can generically be computed from the eigenvectors.
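For illustration, the following Python sketch builds C_1 and C_2 from the coefficients of M(x) exactly as in (6) and (7) and computes the eigenvalues with a generalized eigensolver (the function name is ours):

    import numpy as np
    from scipy.linalg import eig

    def linearize(Ms):
        """Build C1, C2 of Theorem 3.3 from the coefficient list
        Ms = [M0, M1, ..., Md] of an m x m matrix polynomial M(x)."""
        d, m = len(Ms) - 1, Ms[0].shape[0]
        N = m * d
        C1 = np.eye(N)
        C1[-m:, -m:] = Ms[-1]              # C1 = diag(I, ..., I, Md), as in (6)
        C2 = np.zeros((N, N))
        C2[:-m, m:] = -np.eye(N - m)       # -I_m blocks above the diagonal, as in (7)
        for i in range(d):
            C2[-m:, i*m:(i+1)*m] = Ms[i]   # last block row: M0, M1, ..., M_{d-1}
        return C1, C2

    # Sanity check with the scalar (m = 1) polynomial x^2 - 3x + 2:
    C1, C2 = linearize([np.array([[2.0]]), np.array([[-3.0]]), np.array([[1.0]])])
    vals, _ = eig(-C2, C1)                 # (C1*x + C2) v = 0  <=>  -C2 v = x C1 v
    print(np.sort(vals.real))              # -> [1. 2.]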

4 Computing Selected Solutions

In this section we make use of the matrix formulations presented in the last section to compute the solutions of a polynomial system. We are given the system of equations (1) and an associated domain (a hypercube):

    x_1 \in [a_1, b_1],\; x_2 \in [a_2, b_2],\; \ldots,\; x_n \in [a_n, b_n].

The a_i's and b_i's are not restricted to real numbers only. In case the domain does not correspond to a hypercube, we compute the smallest enclosing hypercube and find all the solutions in it. In our case the problem reduces to computing the eigenvalues of a matrix polynomial in the corresponding domain. For non-linear matrix polynomials M(x), x corresponds to one of the variables, say x_i, and we are therefore interested in computing the eigenvalues of C_1 x + C_2 in [a_i, b_i] and the corresponding eigenvectors. As a matter of fact, we compute a superset of the solutions in the given hypercube. Similarly, for the U-resultant formulation, where we used a specialization of the form u_i = r_i, the problem is reduced to computing all the eigenvalues of M(u_{n+1}) (or \bar{M}(u_{n+1}) for a singular pencil) in the domain

    [-(r_1 b_1 + r_2 b_2 + \ldots + r_n b_n),\; -(r_1 a_1 + r_2 a_2 + \ldots + r_n a_n)].

We make use of inverse power iterations, matrix structure and domain pruning to compute all these eigenvalues. In case the domain corresponds to computing all the real solutions, we decompose the problem into two parts. First, we substitute x = y/(1 − y) and compute all the eigenvalues of (C_1 − C_2) y + C_2 in [0, 1). Second, we substitute x = y/(1 + y) and compute all the eigenvalues of (C_1 + C_2) y + C_2 in (−1, 0]. For large systems we can also use parallel algorithms for computing the eigenvalues of a nonsymmetric matrix in a given domain (Bai and Demmel 1993).
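A small sketch of these two substitutions (the helper name is ours); a real eigenvalue y of a transformed pencil in its bounded interval maps back to the original variable through the stored function:

    def real_line_split(C1, C2):
        """Sketch: substituting x = y/(1-y) and x = y/(1+y) turns the search
        for real eigenvalues of C1*x + C2 over [0, inf) and (-inf, 0] into
        searches over the bounded intervals [0, 1) and (-1, 0]."""
        pos = (C1 - C2, C2, lambda y: y / (1.0 - y))   # pencil B1*y + B2 on [0, 1)
        neg = (C1 + C2, C2, lambda y: y / (1.0 + y))   # pencil B1*y + B2 on (-1, 0]
        return pos, neg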

4.1 Power Iterations

Power iterations are a fundamental technique to compute the eigenvalues and eigenvectors of a matrix. Consider a diagonalizable matrix A such that

    X^{-1} A X = \mathrm{Diag}(\lambda_1, \lambda_2, \ldots, \lambda_N),

with X = [x_1, x_2, ..., x_N] and |λ_1| > |λ_2| ≥ ... ≥ |λ_N|. Given a unit vector q_0, the power method produces a sequence of vectors q_k as follows:

    for k = 1, 2, ...
        z_k = A q_{k-1};
        q_k = z_k / ||z_k||;
        s_k = q_k^T A q_k;
    end.

After a few iterations s_k converges to λ_1 and q_k converges to x_1. Moreover, the convergence is a function of the ratio |λ_1|/|λ_2|; λ_1 is the dominant eigenvalue of A. The power method is described in detail in (Golub and Van Loan 1989, Wilkinson 1965). In case A is not diagonalizable or the corresponding eigenvector is ill-conditioned, the accuracy and convergence of the method are discussed in (Wilkinson 1965) as well.

In our applications, we use power iterations to compute the eigenvalue of a matrix pencil of the form C_1 x + C_2 closest to a given shift x_0; this corresponds to the largest eigenvalue of (C_1 x_0 + C_2)^{-1} C_1. As opposed to computing the inverse explicitly, the resulting algorithm of inverse power iteration is (given q_0):

    for k = 1, 2, ...
        Solve (C_1 x_0 + C_2) z_k = C_1 q_{k-1};
        q_k = z_k / ||z_k||;
        x_k = -(q_k^T C_2 q_k) / (q_k^T C_1 q_k);
    end.

To efficiently solve the system, we compute the LU decomposition of C_1 x_0 + C_2 using Gaussian elimination. Furthermore, the vector q_0 is chosen randomly. Given the RHS vector C_1 q_{k-1}, the resulting linear system can be solved in O(N^2) steps by solving lower triangular and upper triangular systems.

The convergence of the inverse power iteration is a function of the starting guess x_0 and the distance of the eigenvalues from x_0. In particular, let the eigenvalues of C_1 x + C_2 be λ_1, λ_2, ..., λ_N, in an order such that

    |x_0 - \lambda_1| \le |x_0 - \lambda_2| \le \ldots \le |x_0 - \lambda_N|.

The convergence is a function of |x_0 − λ_1| / |x_0 − λ_2|. If two of the eigenvalues, λ_1 and λ_2, are at almost the same distance from x_0, the overall convergence can be fairly slow. In such cases, the convergence can be further improved using the following two-sided procedure (given q_0 and u_0):

    for k = 1, 2, ...
        Solve (C_1 x_0 + C_2) z_k = C_1 q_{k-1};
        Solve (C_1 x_0 + C_2)^T v_k = C_1 u_{k-1};
        q_k = z_k / ||z_k||;    u_k = v_k / ||v_k||;
        x_k = -(u_k^T C_2 q_k) / (u_k^T C_1 q_k);
    end.

This process is locally cubically convergent (Wilkinson 1965).
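As an illustration, the following Python/numpy sketch implements the basic one-sided iteration above. The single LU factorization of C_1 x_0 + C_2 is reused in every step; the 2-norm and the simple stopping test are our own choices, not prescribed by the description above.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def inverse_power(C1, C2, x0, iters=100, tol=1e-12, seed=0):
        """Sketch: eigenvalue of the pencil C1*x + C2 closest to the shift x0.
        Assumes that eigenvalue is real, simple and well separated; complex
        conjugate pairs require the procedure of Section 5.1."""
        lu = lu_factor(C1 * x0 + C2)           # one LU decomposition, reused
        q = np.random.default_rng(seed).standard_normal(C1.shape[0])
        q /= np.linalg.norm(q)
        x = np.inf
        for _ in range(iters):
            z = lu_solve(lu, C1 @ q)           # solve (C1*x0 + C2) z = C1 q
            q = z / np.linalg.norm(z)
            x_new = -(q @ C2 @ q) / (q @ C1 @ q)
            if abs(x_new - x) < tol * (1.0 + abs(x_new)):
                return x_new, q
            x = x_new
        return x, q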

5 Domain Decomposition

The main idea behind our algorithm is to use inverse power iterations to find some eigenvalues in the domain and, at the same time, prune out portions of the domain that cannot contain any solutions. The remaining domain is decomposed and the algorithm is applied recursively. We initially present the algorithm for computing a subset of the real solutions; it can easily be extended to compute a set of complex solutions. Without loss of generality, we assume that the domain for real solutions is [0, 1].

In particular, we start with a guess x_0 ≈ 0.5, which corresponds to the midpoint of the domain. Using inverse power iteration, we find the eigenvalue closest to x_0; let that eigenvalue be t. It is possible that t is complex, in which case we compute the complex conjugate pair of eigenvalues. Assuming that we chose random start vectors q_0 and u_0, t is the eigenvalue of C_1 x + C_2 closest to x_0. As a result, there are no other eigenvalues of the pencil in the circle centered at x = x_0 of radius R = |t − x_0|. Based on the convergence of inverse power iterations we draw the following conclusions:

• If t ∈ [0, 1], t corresponds to a real root. The rest of the unknowns may be computed from the corresponding eigenvector.
• There are no other eigenvalues in the real interval (x_0 − R, x_0 + R).
• The technique is recursively applied to find all the roots in the following domains:
  1. [0, x_0 − R], if x_0 − R ≥ 0.
  2. [x_0 + R, 1], if x_0 + R ≤ 1.

Therefore, we are able to compute one of the solutions in the domain and prune the domain as well. The algorithm is applied recursively to each subdomain. Typically there are very few solutions in the domain; in those cases the algorithm converges to all the solutions quickly and we need to apply this procedure only a few times.
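A sketch of this recursion, in terms of the inverse_power sketch above (ours): the small eps guards against revisiting an eigenvalue that lies on the boundary of the pruned disk, and complex conjugate pairs are assumed to be handled as in Section 5.1.

    def solve_in_domain(C1, C2, lo, hi, found, eps=1e-8):
        """Sketch of the recursive search of Section 5 over [lo, hi]."""
        if hi - lo < eps:
            return
        x0 = 0.5 * (lo + hi)                 # shoot from the midpoint
        t, _ = inverse_power(C1, C2, x0)     # eigenvalue of the pencil nearest x0
        R = abs(t - x0)                      # no other eigenvalue within R of x0
        if lo <= t <= hi:
            found.append(t)                  # one solution found in the domain
        if x0 - R - eps > lo:                # recurse on what remains of the domain
            solve_in_domain(C1, C2, lo, x0 - R - eps, found, eps)
        if x0 + R + eps < hi:
            solve_in_domain(C1, C2, x0 + R + eps, hi, found, eps)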

5.1 Computation of Multiple Solutions

In the previous section, we used inverse power iterations for computing a root and decomposing the domain. Many a time the power iteration may not converge to any real solution t, or the convergence may be slow. The former can be due to the fact that the closest eigenvalue corresponds to a pair of complex conjugate eigenvalues; the latter is due to the fact that there are two or more real eigenvalues at roughly the same distance from x_0. In this section, we modify the inverse power iteration to compute more than one root at the same time, including complex conjugate pairs. We highlight the technique for computing two solutions at the same time; it can easily be extended to find more than two solutions.

Given the approximation x = x_0, let the two closest eigenvalues be t_1 and t_2, let the corresponding eigenvectors be x_1 and x_2, and let

    A = (C_1 x_0 + C_2)^{-1} C_1.

Based on the formulation, the two dominant eigenvalues μ_1 and μ_2 of A correspond to t_1 and t_2. As a result, if we start with a random unit vector u_0 and solve for p and q such that

    \lim_{s \to \infty} (A^{s+2} - p A^{s+1} - q A^s) u_0 = 0,

then μ_1 and μ_2 are the solutions of the equation

    \mu^2 - p\mu - q = 0,

from which t_1 and t_2 are recovered as shown below. Whether the two closest eigenvalues to x_0 are real or complex depends on the sign of p^2 + 4q. The overall algorithm is iterative, and at each stage it computes p_k and q_k until they converge. This is performed as:

    for k = 1, 2, ...
        Solve (C_1 x_0 + C_2) v_k = C_1 u_{k-1};
        s_k = ||v_k||;    u_k = v_k / s_k;
        Solve for p_k, q_k:

            s_k s_{k-1} \begin{pmatrix} u_k^T u_{k-1} \\ u_k^T u_{k-2} \end{pmatrix}
            = \begin{pmatrix} u_{k-1}^T u_{k-1} & u_{k-1}^T u_{k-2} \\
                              u_{k-2}^T u_{k-1} & u_{k-2}^T u_{k-2} \end{pmatrix}
              \begin{pmatrix} p_k s_{k-1} \\ q_k \end{pmatrix};
    end.

After p_k and q_k converge, we compute the closest eigenvalues in the following manner. Let D = p^2 + 4q. If D > 0, the two closest eigenvalues are

    t_1 = x_0 - \frac{2}{p + \sqrt{D}},    t_2 = x_0 - \frac{2}{p - \sqrt{D}}.

Furthermore, the radius R for decomposition corresponds to the minimum of |t_1 − x_0| and |t_2 − x_0|. In case D < 0, the closest pair of eigenvalues is a complex conjugate pair given by

    \mathrm{Real}(t) = x_0 - \frac{2p}{p^2 - D},    \mathrm{Imag}(t) = \frac{2\sqrt{-D}}{p^2 - D},

and the radius R for pruning the domain is given by

    R = \sqrt{(\mathrm{Real}(t) - x_0)^2 + \mathrm{Imag}(t)^2}.
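A sketch of this recovery step (the helper name is ours). Since the two dominant eigenvalues μ of A satisfy μ^2 = pμ + q and t = x_0 − 1/μ, the same expressions cover both the real and the complex-conjugate case:

    import cmath

    def closest_pair(x0, p, q):
        """Sketch: the two eigenvalues of the pencil nearest x0, recovered
        from the converged coefficients p, q, plus the pruning radius R."""
        sq = cmath.sqrt(p * p + 4.0 * q)    # sqrt(D); imaginary when D < 0
        t1 = x0 - 2.0 / (p + sq)
        t2 = x0 - 2.0 / (p - sq)
        return t1, t2, min(abs(t1 - x0), abs(t2 - x0))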

5.2 Use of Matrix Structure

In the algorithm based on inverse power iterations, the two main computations are the LU decomposition of the matrix C_1 x_0 + C_2 and the solution of the resulting triangular systems. The matrix pencil defined in Theorem 3.3 corresponds to a pencil of order N = md; in particular, it has a block companion structure, being linearized from an m × m matrix polynomial of degree d. The LU decomposition is computed using Gaussian elimination and typically takes about (1/3)N^3 operations (without pivoting); solving each triangular system costs about (1/2)N^2 operations. As a result, the inverse power iteration takes about (1/3)N^3 + kN^2 operations, where k is the number of iterations. In this section we make use of the structure of the matrices to reduce the number of operations for the LU decomposition as well as for solving the triangular systems. This is based on the following properties. Given x_0, let A = C_1 x_0 + C_2. It can be shown that A is a matrix of the form

    A = \begin{pmatrix}
        \beta_1 I_m & \beta_2 I_m & 0 & \cdots & 0 \\
        0 & \beta_1 I_m & \beta_2 I_m & \cdots & 0 \\
        \vdots & & \ddots & \ddots & \vdots \\
        0 & 0 & \cdots & \beta_1 I_m & \beta_2 I_m \\
        P_1 & P_2 & P_3 & \cdots & P_d
    \end{pmatrix},

where β_1 and β_2 are functions of x_0 (for the pencil of Theorem 3.3, β_1 = x_0 and β_2 = −1), and the P_i are m × m matrices which are functions of the M_i's and x_0. The LU decomposition of A has a structure of the form A = LU, where

    L = \begin{pmatrix}
        \beta_1 I_m & 0 & \cdots & 0 & 0 \\
        0 & \beta_1 I_m & \cdots & 0 & 0 \\
        \vdots & & \ddots & & \vdots \\
        0 & 0 & \cdots & \beta_1 I_m & 0 \\
        R_1 & R_2 & \cdots & R_{d-1} & L_d
    \end{pmatrix}

and

    U = \begin{pmatrix}
        I_m & \frac{\beta_2}{\beta_1} I_m & 0 & \cdots & 0 \\
        0 & I_m & \frac{\beta_2}{\beta_1} I_m & \cdots & 0 \\
        \vdots & & \ddots & \ddots & \vdots \\
        0 & 0 & \cdots & I_m & \frac{\beta_2}{\beta_1} I_m \\
        0 & 0 & \cdots & 0 & U_d
    \end{pmatrix},

with

    R_1 = P_1,  R_2 = P_2 - \frac{\beta_2}{\beta_1} R_1,  \ldots,
    R_{d-1} = P_{d-1} - \frac{\beta_2}{\beta_1} R_{d-2},  R_d = P_d - \frac{\beta_2}{\beta_1} R_{d-1}.

Moreover, L_d and U_d correspond to the LU decomposition of R_d. This formulation is constructive, and the resulting algorithm for the LU decomposition of A takes (1/3)m^3 + (d − 1)m^2 operations. Furthermore, given the LU decomposition, solving the lower triangular system takes (d − 1/2)m^2 operations and solving the upper triangular system takes (1/2)m^2 + (d − 1)m operations. As a result, at each iteration the total number of operations for solving the linear system corresponding to the inverse power iteration is dm^2 + (d − 1)m.

As far as Macaulay's formulation for dense systems is concerned, the matrices P, Q, R and T in (5) are sparse, and algorithms based on power iterations exploit this sparsity well in terms of matrix-vector multiplications. In particular, the sparsity of Macaulay's formulation has been used in (Canny et al. 1989) to compute its determinant.
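A sketch of the structured solve (the function name is ours): only the trailing m × m block R_d is LU-factored, and the block-bidiagonal structure is used for both substitutions, in line with the operation counts above.

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def structured_solve(Ps, b1, b2, rhs):
        """Sketch: solve A z = rhs for the block matrix of this section,
        A = [[b1*I, b2*I, 0, ...], ..., [P1, ..., Pd]] with m x m blocks."""
        d, m = len(Ps), Ps[0].shape[0]
        Rs = [Ps[0]]
        for j in range(1, d):                      # last block row of L
            Rs.append(Ps[j] - (b2 / b1) * Rs[-1])
        lu = lu_factor(Rs[-1])                     # only one m x m LU is needed
        y = [rhs[i*m:(i+1)*m] / b1 for i in range(d - 1)]
        z = [None] * d
        z[-1] = lu_solve(lu, rhs[(d-1)*m:] - sum(Rs[j] @ y[j] for j in range(d - 1)))
        for i in range(d - 2, -1, -1):             # back substitution through U
            z[i] = y[i] - (b2 / b1) * z[i + 1]
        return np.concatenate(z)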

6 Implementation and Performance

We have implemented this algorithm and applied it to applications involving curve and surface intersections and the inverse kinematics of serial robot manipulators. We used LAPACK Fortran routines for matrix computations (Anderson et al. 1992) and interfaced them with our implementation in C. The overall performance of the algorithm is determined by the computation of the resultant matrix entries and by the computation of the eigenvalues in the specified domain. The algorithms for computing the matrix entries corresponding to resultants of dense polynomial systems are straightforward. However, the algorithms for sparse systems involve computation of the mixed volume of the given system (Bernstein 1975, Canny and Emiris 1993) and can be relatively expensive for systems consisting of more than seven or eight equations. Most of the sparse systems arising in various applications, however, have a pre-defined structure: we can compute the resultant matrix by treating the coefficients as symbolic variables (as part of preprocessing) and substitute the numerical values for a given instance of the problem. Therefore, almost all the time is spent in the eigendecomposition routines. This approach has been successfully applied to the inverse kinematics problem (described in the next section). The performance of the eigendecomposition algorithm is governed by the following criteria:

• The number of eigenvalues in the given domain.
• The distribution of the eigenvalues in the neighborhood of the domain.

The first criterion determines the size of the output and therefore the complexity of the algorithm. The second determines the convergence of the iterative algorithm to each solution. A generic linear transformation on the original system of polynomial equations distributes the solutions uniformly in the real and complex space and therefore improves the speed of the overall algorithm. As such, the number of eigenvalues of C_1 x + C_2 in the domain [a_i, b_i] is greater than the number of solutions of the original system in its associated domain (a hypercube); after generic transformations, however, the two numbers are almost equal. Applying generic transformations on dense polynomial systems is relatively simple. As far as sparse polynomial systems are concerned, the transformations should preserve the Bernstein bounds; as a result, we are restricted to sparsity-preserving transformations. In most applications involving robotics, vision and modeling, the sparse polynomial systems arise from geometric constraints, and these constraints can be used to formulate the appropriate transformations.

The actual performance of the algorithm is governed by the choice of certain parameters at run time. These parameters are:

• Given a domain [a_i, b_i], the choice of the first guess x_0 for the eigenvalue. We chose it to be close to the midpoint.
• The number of eigenvalues of the matrix in the neighborhood of x_0 used in the computation of multiple solutions.

It is rather difficult to make a good guess for an eigenvalue corresponding to a solution. In some cases we know about the structure of the geometric problem giving rise to the system of equations (like the control polytopes of the spline surfaces). As far as the second parameter is concerned, we decide iteratively on the number of eigenvalues that need to be used. Initially we start with a linear formulation and perform a few inverse iterations. In case the eigenvalue does not converge to a few digits of accuracy in the first few iterations (typically 3, 4 or 5), we refine the algorithm and start with a higher order formulation (as explained in Section 5.1). This degree is gradually increased, and soon the algorithm converges to the dominant eigenvalue(s).

To improve the overall speed of the solution, the algorithm follows a two-phase process. In the first phase, the ε value used for termination of the power iterations is chosen coarsely so that the iterations converge quickly; typically we have used ε ≈ 0.01. The converged value is used as the guess to the actual eigenvalue, and after a few more iterations we are able to compute the solution to high accuracy (about 4 or 5 digits). Further improvements are obtained by applying a few iterations of Newton's method on the original system of equations.

The performance of the overall algorithm is determined by the choice of the parameters described above. However, in most geometric applications, where the total number of solutions in the domain is small compared to N, we have observed an order of magnitude improvement in the running time. We highlight two applications in the following sections.


    Algorithm                        Reference                    Machine        Average Time
    Continuation                     (Wampler and Morgan 1991)    IBM 370-3090   10 sec.
    Resultant - QR algorithm         (Manocha and Canny 1992)     IBM RS/6000    0.011 sec.
    Resultant - Selected Solutions   Section 4                    IBM RS/6000    0.0043 sec.

Table 1: Relative performance of various algorithms on inverse kinematics.

6.1 Applications to Inverse Kinematics

The inverse kinematics problem for general serial mechanisms is a fundamental problem in robotics and molecular modeling (Craig 1989). In particular, we compute all the joint angles of a 6R manipulator, given the pose of the end-effector. Given the pose, the problem reduces to solving six polynomial equations in six unknowns; each equation has total degree 12. However, the resulting system is rather sparse, and it has been shown that the total number of solutions is bounded by 16 (Lee and Liang 1988, Raghavan and Roth 1989), assuming that the system is zero-dimensional. The derivation of the sparse resultant in (Raghavan and Roth 1989) has been combined with matrix computations in (Manocha and Canny 1992) and reduced to a 24 × 24 eigenvalue problem; the algorithm in (Manocha and Canny 1992) uses the QR or QZ algorithm to compute all the eigenvalues and eigenvectors of the resulting matrix. We applied our algorithm to compute all the real solutions of the equations corresponding to inverse kinematics. In particular, we used the 21 problem instances highlighted in (Wampler and Morgan 1991). The average performance of three algorithms based on continuation methods and matrix computations is highlighted in Table 1.

6.2 Curve and Surface Intersections

The problems of curve and surface intersection are fundamental in computer graphics and geometric modeling. They arise in various applications like ray-tracing, computing boundary representations from CSG models, and hidden surface elimination (Hoffmann 1989). We applied our algorithms to the intersection of Bezier curves and surfaces. We are given a Bezier curve F(s) = (X(s), Y(s), Z(s), W(s)) and a Bezier surface G(u, v) = (X(u, v), Y(u, v), Z(u, v), W(u, v)), each represented in homogeneous coordinates. Let the curve be of degree d and the surface be a tensor product surface whose highest degree term has the form u^m v^n. The problem of intersection corresponds to computing all the solutions of

    X(u, v) W(s) = W(u, v) X(s)
    Y(u, v) W(s) = W(u, v) Y(s)
    Z(u, v) W(s) = W(u, v) Z(s)

in the domain (s, u, v) ∈ [0, 1] × [0, 1] × [0, 1]. The Bezout bound of the system is (m + n + d)^3 and the Bernstein bound is 2mnd. Using Dixon's resultant (Dixon 1908) and Theorem 3.3, this problem can be reduced to computing the eigendecomposition of a 2mnd × 2mnd matrix. In Table 2, we compare the performance of the algorithm in (Manocha 1994), based on QR iterations, with the improved algorithm presented in Section 4. The entry k corresponds to the number of solutions in the given domain for each case. All the running times are in seconds on a DEC 5000/25 workstation.

    Num.   N     k    (Manocha 1994)   Section 4
    1      54    1    1.2832 secs.     0.2769 secs.
    2      54    2    1.2001 secs.     0.3365 secs.
    3      96    3    4.967 secs.      0.5324 secs.
    4      150   4    28.193 secs.     1.892 secs.

Table 2: Performance of different algorithms on curve-surface intersection (all timings on a DEC 5000/25).

7 Conclusion

We have presented an algorithm to compute selected solutions of a zero-dimensional polynomial system over a domain. The algorithm makes use of resultant formulations and reduces the problem to computing the eigendecomposition of a generalized companion matrix. We make use of the structure of the matrix along with inverse power iterations to compute all the solutions in the domain. The resulting algorithm is iterative, and its application to intersection and kinematics problems resulted in a significant speed-up over previous algorithms.

References

[1] E. Anderson, Z. Bai, C. Bischof, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, and D. Sorensen. LAPACK User's Guide, Release 1.0. SIAM, Philadelphia, 1992.

[2] W. Auzinger and H.J. Stetter. An elimination algorithm for the computation of all zeros of a system of multivariate polynomial equations. In International Series of Numerical Mathematics, volume 86, pages 11-30, 1986.

[3] Z. Bai and J. Demmel. Design of a parallel nonsymmetric eigenroutine toolbox, Part I. In Proceedings of the Sixth SIAM Conference on Parallel Processing for Scientific Computing. SIAM, 1993.

[4] D.N. Bernshtein. The number of roots of a system of equations. Funktsional'nyi Analiz i Ego Prilozheniya, 9(3):1-4, 1975.

[5] G. Bishop and H. Fuchs et al. Research directions in virtual environments. Computer Graphics, 26(3), 1992.

[6] B. Buchberger. Groebner bases: An algorithmic method in ideal theory. In N.K. Bose, editor, Multidimensional Systems Theory, pages 184-232. D. Reidel Publishing Co., 1985.

[7] J. Canny and I. Emiris. An efficient algorithm for the sparse mixed resultant. In Proceedings of AAECC, pages 89-104. Springer-Verlag, 1993.

[8] J. Canny, E. Kaltofen, and Y. Lakshman. Solving systems of nonlinear polynomial equations faster. In Proceedings of International Symposium on Symbolic and Algebraic Computation, 1989.

[9] J.F. Canny. The Complexity of Robot Motion Planning. ACM Doctoral Dissertation Award. MIT Press, 1988.

[10] P. Cellini, P. Gianni, and C. Traverso. Algorithms for the shape of semialgebraic sets: A new approach. In Proceedings of Applied Algebra, Algebraic Algorithms and Error-Correcting Codes, pages 1-18, 1991. Lecture Notes in Computer Science, vol. 539, Springer-Verlag.

[11] J.J. Craig. Introduction to Robotics: Mechanics and Control. Addison-Wesley Publishing Company, 1989.

[12] A.L. Dixon. The eliminant of three quantics in two independent variables. Proceedings of London Mathematical Society, 6:49-69, 209-236, 1908.

[13] O. Faugeras. Three-Dimensional Computer Vision: A Geometric Viewpoint. MIT Press, Cambridge, Mass., 1993.

[14] J.C. Faugere, P. Gianni, D. Lazard, and T. Mora. Efficient computation of zero-dimensional Groebner bases by change of ordering. Journal of Symbolic Computation, 16(4):329-344, 1993.

[15] N. Go and H.A. Scheraga. Ring closure and local conformational deformations of chain molecules. Macromolecules, 3(2):178-187, 1970.

[16] C.B. Garcia and W.I. Zangwill. Finding all solutions to polynomial systems and other systems of equations. Math. Prog., 16:159-176, 1979.

[17] I.M. Gelfand, M.M. Kapranov, and A.V. Zelevinsky. Equations of hypergeometric type and Newton polyhedra. Doklady AN SSSR, 300:529-534, 1988.

[18] I.M. Gelfand, M.M. Kapranov, and A.V. Zelevinsky. Discriminants of polynomials in several variables and triangulations of Newton polytopes. Algebra i analiz, 2:1-62, 1990.

[19] G.H. Golub and C.F. Van Loan. Matrix Computations. Johns Hopkins Press, Baltimore, 1989.

[20] C.M. Hoffmann. Geometric and Solid Modeling. Morgan Kaufmann, San Mateo, California, 1989.

[21] C.M. Hoffmann. Algebraic and numeric techniques for offsets and blends. In W. Dahmen, M. Gasca, and C. Micchelli, editors, Computations of Curves and Surfaces, pages 499-528. Kluwer Academic Publishers, 1990.

[22] B.K.P. Horn. Relative orientation revisited. Journal of the Optical Society of America, 8(10):1630-1638, 1991.

[23] B. Huber and B. Sturmfels. A polyhedral method for solving sparse polynomial systems. Cornell University, manuscript, 1992.

[24] J.-P. Jouanolou. Le Formalisme du Resultant, volume 90 of Advances in Mathematics. 1991.

[25] Y.N. Lakshman. On the complexity of computing Groebner bases for zero dimensional polynomial ideals. PhD thesis, Rensselaer Polytechnic Institute, Troy, NY, 1992.

[26] D. Lazard. Resolution des systemes d'equations algebriques. Theoretical Computer Science, 15:77-110, 1981.

[27] D. Lazard. Solving zero-dimensional algebraic systems. Journal of Symbolic Computation, 13(2):117-131, 1992.

[28] H.Y. Lee and C.G. Liang. A new vector theory for the analysis of spatial mechanisms. Mechanisms and Machine Theory, 23(3):209-217, 1988.

[29] F.S. Macaulay. On some formula in elimination. Proceedings of London Mathematical Society, 1(33):3-27, May 1902.

[30] D. Manocha. Algebraic and Numeric Techniques for Modeling and Robotics. PhD thesis, Computer Science Division, Department of Electrical Engineering and Computer Science, University of California, Berkeley, May 1992.

[31] D. Manocha. Efficient algorithms for multipolynomial resultant. The Computer Journal, 36(5):485-496, 1993.

[32] D. Manocha. Solving systems of polynomial equations. IEEE Computer Graphics and Applications, pages 46-55, March 1994. Special Issue on Solid Modeling.

[33] D. Manocha and J.F. Canny. Real time inverse kinematics of general 6R manipulators. In Proceedings of IEEE Conference on Robotics and Automation, pages 383-389, 1992.

[34] P.S. Milne. On the solutions of a set of polynomial equations. In Symbolic and Numerical Computation for Artificial Intelligence, pages 89-102, 1992.

[35] R.E. Moore. Methods and Applications of Interval Analysis. SIAM Studies in Applied Mathematics. SIAM, 1979.

[36] F. Morley and A.B. Coble. New results in elimination. American Journal of Mathematics, 49:463-488, 1927.

[37] P. Pedersen. Multivariate Sturm theory. In Proceedings of AAECC, pages 318-332. Springer-Verlag, 1991.

[38] W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, 1990.

[39] M. Raghavan and B. Roth. Kinematic analysis of the 6R manipulator of general geometry. In International Symposium on Robotics Research, pages 314-320, Tokyo, 1989.

[40] G. Salmon. Lessons Introductory to the Modern Higher Algebra. G.E. Stechert & Co., New York, 1885.

[41] M. Shub and S. Smale. Complexity of Bezout's theorem, I: Geometric aspects. Journal of the American Mathematical Society, 6(2):459-501, 1993.

[42] B. Sturmfels. Sparse elimination theory. In D. Eisenbud and L. Robbiano, editors, Computational Algebraic Geometry and Commutative Algebra. Cambridge University Press, 1991.

[43] B. Sturmfels and A. Zelevinsky. Multigraded resultants of Sylvester type. Journal of Algebra, 1994. To appear.

[44] B.L. Van Der Waerden. Modern Algebra (third edition). F. Ungar Publishing Co., New York, 1950.

[45] C. Wampler and A.P. Morgan. Solving the 6R inverse position problem using a generic-case solution methodology. Mechanisms and Machine Theory, 26(1):91-106, 1991.

[46] J.H. Wilkinson. The evaluation of the zeros of ill-conditioned polynomials, Parts I and II. Numer. Math., 1:150-166 and 167-180, 1959.

[47] J.H. Wilkinson. The Algebraic Eigenvalue Problem. Oxford University Press, Oxford, 1965.

