Multipolynomial Resultants and Linear Algebra 1 ... - CiteSeerX

0 downloads 0 Views 896KB Size Report
Multipolynomial. Resultants and. Linear. Algebra. Dinesh. Manocha and. John. F. Canny ... a set of polynomial equations arises in many symbolic and numeric applications. The three main ..... Each elementary row operation is of the form: Pki.
Multipolynomial

Resultants

Dinesh

Manocha

and

Computer

and

Algebra

F. Canny

John

Science

University

Linear

Division

of California

Berkeley,

CA

94720

USA

The problem

Abstract: from

a set of polynomial

symbolic

and numeric

approaches

of eliminating equations

applications.

are resultants,

for

variables

vanishing

bases and the

tem

Wu-Ritt method. In practice, resultant based algorithms have been shown to be most effectice for certain applications. The result ant of a set of polynomial equations can be expressed in terms of matrices

of

The

problems. of a single

mogeneous

The three main

Grobner

these

struction

arises in many

equations

result

is the

polynomial

in n unknowns,

is an exact equations

main

resultant condition

to

have

con-

of n ho-

such that

for

the

given

a non–trivial

its sys-

solution

[Mac64, Sa185, Wae50]. The resultant is a polynomial in the coefficients of the equations. We refer resuttant of the this resultant as the multipo!ynomiai

and determinants and the bottleneck in its comput ation is the symbolic expansion of determinants. In

given system of equations

this paper we present to compute symbolic

tem of linear equations is widely known. Most of the theory expresses the conditions in terms of ma-

acteristic from

interpolation determinants.

of the algorithms

linear

ity of the computation uations.

These

is the use of techniques

and number

include

The process of eliminating

based algorithms The main char-

to reduce the symbolic

algebra

a matrix

eval-

linear

polyno-

from

a sys-

and reducing

the problem

to solv-

equations. The most familiar is the Sylvester’s formulation

for two nonlinear equations, which expresses it as determinant of a matrix. However, a single determinant formulation may not exist for all systems of nonlinear equations. The most general formulation,

rithms. These algorithms have been implemented as part of a package for resultant computation and we discuss their performance for certain applications.

to the best tant

of our knowledge,

as a ratio

of two

expresses

determinants

the resul-

as formulated

by Macaulay [Mac02, Mac21]. Many a times both the determinants evaluate to zero and the resultant

Introduction

Computational methods to manipulate nomial equations are gaining importance and numeric

equations

ing system of linear form of the resultant

mial to its companion form and similarity transformations for reduction to upper Hessenberg form followed by reduction to Frobenius canonical form. We consider dense as well as sparse interpolation algo-

1

variables

trices and determinants. Elimination theory deals with generalizing these techniques to system of non-

complex-

of function

linearizing

[MC91b].

computation.

The

sets of polyin symbolic

fundamental

is being

computed

guments

[Can88].

the characteristic

prob-

by perturbation This

and limiting

corresponds

polynomials

ar-

to the ratio

of two matrices

of

and is

lems include simultaneous elimination of one or more variables to obtain a “symbolically smaller” system

Characteristic termed as the Generalized by Canny [Can90]. as introduced

and computing the numeric solutions of a system of equations. Elimination theory, a branch of classical algebraic geometry, presents constructive approaches

There are two other algorithmic approaches known in the literature for eliminating variables from a system of equations. The first one is based on pol ynomial ideal theory and generates special bases for polynomial ideals, called Grobner bases. The algorithm for Grobner bases is due to Buchberger and is surveyed in [Buc85, Buc89]. Eliminating a set of variables is a special application of Grobner bases. The second approach for variable elimination has been

Permission granted

to

copy

provided

direct

commercial

title

of the

that

copying

that

the

and/or

specific

ISSAC @ 1992

‘92-7/92/CA, AcM 0.8979

all or part are

otherwise,

not

the ACM

and its date

is by permission To copy

fee

copies

advantage,

publication

Machinery.

without

of this

made

or

copyright

appear,

and

of the Association or to republish,

material

distributed

notice notice

is for

and the is given

for Computing requires

a fee

permission. USA ~.49Q_2/92/00Q7/Q~

developed 58...$1,50

158

by Wu Wen-Tsun

[WU78]

polynomial

using

an idea

proposed on Ritt’s fully

applied

ingby

als for a system of equations

by Ritt [Rit50]. This approach is based characteristic set construction and success-

Wu.

to automated It is referred

geometry

theorem

as the Ritt-Wu’s

The

prov-

lowing

algorithm

[Can90].

rest of the paper

manner.

In Section

view of multipolynomial

is organized

in the fol-

2 we present

a brief

resultants

and linear

realge-

[GC90]. All these methods of elimination are surveyed in [KL92]. When it comes to practice resul-

bra. In Section 3 we present

tants have been shown to be more effective for applications like implicitization [MC9 la]. Moreover, tlhe resulting algorithms are fast in practice and it is pcE-

nant, whose entries are polynomials in one variable. This algorithm is based on techniques from linear algebra as opposed to interpolation. The algorithm is

sible to obtain a tight bound on their running time. On the other hand Grobner bases and characteristic sets computations may be slow for even small prob-

extended to multivariate symbolic determinants using dense and sparse algorithms for multivariate interpolation in Section 4. We describe its implemen-

lems. As a result

tation and performance for implicitization Section 5. In particular, we obtain considerable speed-ups as

on the running The

it is difficult

to derive tight

for

resultant

volves the use of suitable

resultant

reduces the problem

to expanding

nants.

of the matrices

functions tivariate

entries

in terms

polynomial

of the

equations. interpolation

integer number and manipulating In particular,

computation

in-

formulation

and

symbolic

2

techniques

numeric

minant computations for function evaluation ing sparse or dense multivariate interpolation sult ant polynomial evaluation

computation.

It turns

is relatively

for

century. equations

deterand usfor re-

we present

improved

and for

polynomial

to an equivalent

panion matrix followed by similarity for reduction to upper Hessenberg to Frobenius

canonical

are used along

with

form. sparse

In this

are represented

as floating

point

num-

In case

oft he result ant becomes a nec-

essary and sufficient condition for the given system of equations to have a non–trivial solution. There are many specialized formulations of resultants for lower values of n (up to six), which express it as the determinant

of a matrix,

as opposed

to the general

formulation expressing it as a ratio of two determinants [Dix08, Jou89, MC27, Sa185, Stu91, SZ91]. In the rest of the paper we use upper face letters to denote matrices and lower

com-

face letters

transformaticms form and finally

to represent

the other symbols

l–dimensional

represent

case bold case bold

vectors.

All

scalar variables.

These transformaticms and dense interpolation

2.1

algorithms and as a result the interpolation probh?m is divided into smaller problems of lower complexit y. They also lead to improved and faster algorithms for solving system of non–linear equations based on u–result ants as highlighted computing the generalized

the coefficients

m = O, the vanishing

algorithms

blcok

in yl, . . . . ym.

section of the zero sets of the n + 1 equations.

for computing symbolic determinants using multivariate polynomial interpolation. We use a variety of techniques from linear algebra to decrease the number of function evaluations and the symbolic complexity of the resulting comput at ion. These include linearizing a matrix

is a polynomial

bers, we treat them as rational numbers and use exact arithmetic for the rest of the computation. Geometrically, the zero set of the resultant represents the projection of the algebraic set defined by the inter-

algorithm. paper,

are the oldest and by now

paper, we assume that the coefficients of the polynomials are rational numbers, though the results are extendible to any field of characteristic zero. In case,

out th a,t

expensive

resultants

Given a system of n + 1 affine algebraic unknowns ~1, . . ..xn. yl, y~, ,y~, in n +m

the resultant

a result, a great deal of time is spent in numeric determinant evaluations and thereby slowing down the

In this

algorithms.

Background

Multipolynomial

many applications accounts for almost half of the running time. Moreover, Macaulay’s formulation generates two matrices whose order is much greater than the degree of the resultant [Mac02, Mac21]. As

interpolation

al-

determi-

the best known methodology for eliminating variables. Most of their development began in the last

crunching w opposed to generating intermediate symbolic expressions. involves

and efficient

symbolic

of the given

determinants [MC91a]. Multi~converts the computation into

the algorithm

to earlier

a simple

univariate

determi-

and Canny use mull-

and modular

for computing

compared

are polynomial

coefficients

Manocha

computing symbolic variate interpolation

function

bounds

time of the computation.

algorithm

The

gorithm

Linear

Algebra

Algorithms for resultant computation deal with matrices and determinants. As a result, we utilize many properties from linear algebra for efficient reIn this section we review sultant computation.

in [Can88, MC9 lb] and characteristic polynomi-

some techniques

159

from

linear

algebra

and use them

in the following [GLR82, Matrix numeric

sections.

Wi165]. Polynomials matrices on

More

details

If AD, Al, ..., A~ are m x m rationals, then the matrix–

valued function defined on the rationals by L(A) = polynomial of degree k. Z$=OA;~i is called a matrix When

Ak = I, the identity

nomial

matrix,

the matrix

Let

us consider matrix.

~(~)

= A~lL(~),

the

case when

A~

S1,2

... ...

Sl,m-l s2,m_l

Sl,m

S2,2

o o

S3,2 (1

... ...

s3,m-1 ...

s3,m

o

0

:::

sm,;_l ]

Sm,m “

responds

: and xi

exactly

= A~lA,,

O ~ i < k.

to the characteristic

o 0

Q 0

:

:

O

0

“O”

I –Xo

–11

–X2

c=

is a non–

Let

~(~) is a monic matrix polynomial. Theorem 1.1 [GLR82], the determinant

Sz,m s4,m

o 1~

Accgrding to of L(A) corpolynomial

--...

.:.

of

(-1)’’’[A~ (1)

Then

In other

spond exactly

words,

to~he

. “1 ..

and identity

the eigenvalues

(L()))

= O

A~ is a singular

matrix,

the

O Im

0 o

... ...

0 0

:

;

...

;

:

o

0

...

Im

O

0

0

...

0

Ak

[

0 00

1

r

O

.. .

0

-&

---

0

... ... ‘

0

0

A.

Al

.. .

Ak_2

is identical

to the

The matrix

C is the companion

ist ic polynomial. defined in terms

for similarity is of the eigenval-

corres~onds .

exact lv. to the determinant

The exact criterion of the multiplicity

matrix

C. of order

r is a matrix

of

the form

1 Cr-l c~-z 10...00

...

c1

co

Cr=ol”””oo

1“

(3)

1:: 00:::10 Every matrix transformation

of a the

of Frobenius

M are invariant under simof the form ~ = H-l MH,

transformations

of M.

polynomial.

1

where H is a non-singular matrix. As a result characteristic polynomial of the matrix obtained similarity

polynomial

matn”x of the charac-

ues of M and the structure of its Jordan canonical form [Wi165]. A matrix is called derogatory if it is not similar to the companion matrix of its characteristic

1 ~+

The roots of the characteristic polynomial matrix correspond to its eigenvalues. Moreover,

ter applying

. .I 1 may sometimes of its character-

We will represent this linearization as PA + Q. The determinant of CL (~) = PA+ Q corresponds exactly to the determinant of L(A).

eigenvalues of a matrix ilarity transformations

characteristic

1

co

teristic polynomial of M. A matrix be similar to the companion matrix

–Im Ah-l‘

c1

00:::10

(2)

0

...

r

A Frobenius –Im

cm-z

C=ol”””oo

of C corre-

Im 0 [

of the matrix

10...00

[

matri-

matrix polynomial L(A) is linearized into a companion polynomial CL(A) of the form [GLR82]:

CL(A)=

– . . . – co] = o.

polynomial

cm-l

= O

In the case where

– cm_2A~-2

the characteristic

–Kk.l

roots of Determinant

or Determinant(L(A))

- cm_,A~-l

Let the characteris-

I;

m null

x

Frobenius Canonical Form tic polynomial of M be

0 0 ,

O and 1~ are m

An upper Hessenberg matrix is ~educed, if any of the sub diagonal elements, Si,~_ 1, is zero.

ces, respectively. We refer C as the b!ock companassociated with the matrix polynomial ion matrir ~(~).

Sl,l S2,1 s=

poly-

Hessen-

is said to be monic.

singular

where

An upper

Upper Hessenberg Matrix berg matrix is of the form

are given in

M may to the matrices

‘: be reduced direct sum

denoted

by a similarity of a number s

as C., , Cr,,

. . . . C.=.

The characteristic polynomial of each C,, divides the characteristic polynomial of all the preceding Crj.

the af-

In the case of a non–derogatory

to C in (1)

rl

=

n.

is called

of ~(~). ,)

160

The the

direct I’robenius

matrix,

sum of the Frobenius canonical

form.

s = 1 and matrices The

trans-

formed

matrix

is represented

The linearization

as

companion trix

follows

of a matrix from

Ak is non–singular.

polynomial

into

(1), in case the leading Lets consider

a

ma-

the case when

‘=1c~!2117’L!l the leading

matrix

is singular.

We use the reduction, (2.1), to linearize the matrix polynomial into a companion matrix of the form PA + Q. This matrix can be transformed into an equivalent companion matrix using row operations,

where O denotes

a null matrix

of appropriate

similar

order.

to the ones used in Gaussian

elimination.

Let the rows of P and Q be represented 2.2

Univariate

In this section ng

Determinants we consider

a univariate

symbolic

of the m x m matrix,

the problem determinant.

L(A),

of expandi-

perform

Each entr,y

it to an upper triangular

is a polynomial

elementary

operation

of degree

operations,

although

can be improved gorithm

the

In many

At each step perform performed

and Laksman)s

applications

As a result

al-

constant,

of result anta,

form

algebra

and present

fcm of 0(d3 + n3 ) complexity This algorithm is repeatedly

used in the multivariate

interpolation

algorithms

We refer to the algorithm

as univariate is:

a!gon”thm.

L(A)

= Z~=OAi~i

(like Gaussian

After

polyno-

d using

the

transformations to construct 3. Use similarity upper Hessenberg matrix, S, similar to C.

an

presented

canonical

form us-

where

6. Multiply matrix

the Frobenius

canonical

P(A) by the determinant

Q are of the

of order nl, P2

As a result,

we construct

the matrix

R =

Q;

is a non-singular

square

matrix

of order

r. In the case nl > 0, r = O, PA + Q is a singular matrix. The rest of the algorithm analysis involves

polynomial of C, P(A), is 5. The characteristic computed by the product of the of the characteristic polynomials of Frobenius matrices constituting

P and

[Q3Q41Of order nl x n. Using row operations find the rank of R. Let the rank be r and represent its submatrix structure as

in (1).

S to its Frobenius 4. Transform ing similarity transformations.

a

row op-

and Q2 are matrices of order nl x n — nl. Moreover, Q4 and the corresponding null matrix in P are square matrices of order n — nl. Let us consider the case when Q4 is a singular matrix.

x

with

elimination).

the row operations

where PI and Q1 are square matrices

to an equivalent

d

of PA + Q remains

(4)

The algorithm

as a matrix

2. Linearize the matrix polynomial companion matrix C of order reduction

is

pre-

sented in the next section.

1. Express mial.

the determinant

where P and Q are updated

a simple

and improved algorithm most cases in practice.

determinant

operation

However,

d < m. As a result, the running time of the algorithm In this section we use is 0(d4) finite field operations. linear

a corresponding

on Q.

+ dz ) finite

erations

from

row

Pki

each entry of the matrix is a linear polynomial in the coefficients of the nonlinear equations and therefore,

techniques

on P to reduce

Each elementary

cost of interpolation

by using Kaltofen

[KL88].

form.

Pk = Pk – ~Pi

ant for A = pi for O ~ i ~ d, where p is a prime number. This is followed by Vandermonde interpolation to compute the symbolic determinant. the running cost of the method is 0(dm3

row operations

is of the form:

k in A and let the determinant be a polynomial clf A degree d. The degree d is bounded by n = km. simple algorithm is evaluating the numeric determin-

field

as pi and

~, respectively. Each matrix is of order n. Moreover, their elements are denoted as Pij and q~j. We

reorganizing the submatrices P and Q, such that P 1 and Q 1 are square matrices of order n — r. The resulting system is similar to P and Q is (4), such that

form.

Q4 (which has order r after these row operations and readjustment of elements in the submatrices) is non-

of the leading

singular.

of L(A).

161

represent an n x 1 column vector whose first k elements are zero. Let B k represent an n X n matrix equivalent to the identity matrix for all but the k+ l$t

Given P and Q corresponding to (4), the determinant of PA + Q is equal to determinant of

G(A)

PIJ+

=

o

Q1 – (P2~+Q2)Q;1Q3

Q3

[

column.

Q4

1“

The k + Ist column

It is easy to see that that

Let

B~l

the k + 1’~ column For each iteration

from

It follows

–bk+l.

of the algorithm

for k ranging

and B~l

Bk

and perform

C = BkCB~l.

det.(G(A))

= det.(GIA

+ G2) x det.(Q4).

At the end of the last iteration senberg matrix. The matrix

the problem

step corresponds

In case, G1 is non–singular, duced to an equivalent the above algorithm

companion

can be re-

matrix,

for computing

otherwise

a companion

ma-

The

fact

that

running

G1 and G2 are matri-

ces of order less than n implies terminates after a finite number

constant

that the algorithm of steps.

We will

of the overall matrix

transformation

is 0(d3).

is bounded

in the

3

of the

In this section

use this bound

by a column

and row

are O((cl + 1 – k)z ) finite The at the M h iteration. to upper

Moreover,

the leading

by ~. A series of similarit

y trans-

formations reduce the upper Hessenberg matrix to its Frobenius canonical form [Wi165]. It involves about Z ds finite field operations. 3

The complexity of this step can be as high as 0(n4) in the worst case. However, such inst ante are very rare in practice and for most cases its complexity by 0(n3).

time

Hessenberg

C = H, an upper Hesmultiplication at each

to multiplying

vector. As a result, there field operations performed

trix of a system of the form PA + Q can be used recursively by substituting G1 and G2 for P and Q,

is bounded

is

bk+l.

to Bk, except

transformation

that

respectively.

of B~l

1 to d– 1 we compute

the similarity

of Bk is equal to

is equivalent

Multivariate

Interpolation

rest of the analysis. For most unz’variaie

practical

determinant

the reduction

to upper

cases the complexity aigorithm

is dominated

Hessenberg

matrix

by

followed

we extend

the algorithm

ing univariate

symbolic

determinants.

The algorithm

polation

and modular

determinants

for expand-

to multivariate

uses multivariate

techniques

as opposed

interto sym-

by the transformation to a similar Frobenius canonical form. As a matter of fact, both of these steps

bolic

can be performed tions. Furthermore

in the computer algebra systems [M C91a]. Let us assume that each entry of the n x n ma-

in 0(d3 + n3) finite the leading constant

field operais small and

trix,

is bounded by two for most cases. For most instances of the problem we expect that the Frobenius canonical form consists of one or two Frobenius matrices and as a result the polynomial multiplication is performed only a constant number of times. We present an 0(d3

+ n3) algorithm

The algorithm berg form proceeds matrix. ELMHES

the element

C.

the kth

At

[o,...

in the ith row and jth

,0,1,

–-, c~+l,~

For most

resultant

in yl, yz, . . . . ym.

The

of the form

. . . . ym).

formulations

F’(yl,

the entries

de-

are lin-

puted by adding the degrees of y; in the rows or columns of M. As a result, F can have at most

be

that

column ck+l,~

....

This

di.

degree

bound

can

always

be

com-

+ 1) . . . (L + 1) terms. In some ql = (dl + l)(~z applications it is simpler to compute the total degree of E’(y~ ,. ... Y~). Let that degree be d and the resulting polynomial can have atmost ql = C(rn, d)

coefficients,

of

3.1

Cn ~

where

c(md)=(m+f-l)

# O.

Otherwise we perform row and column interchange corresponding to similarity transformation by a permut at ion matrix. Let b~+l=

is a polynomial

available

to

any matrix

for reduction to upper Hesseniteratively on the columns of the

step it is assumed

is a polynomial

terminant

determinants

transformat ions. 0(d3 + n3) algocanonical form is

This algorithm is a simplified version of in EISPACK library [G BDM77]. Let ci,j

denote

M,

for computing

ear polynomials. Furthermore, we assume that we have a tight bound on the degree of the polynomial. Let the maximum degree of y~ in ~(yl, vz, . . . . Y~)

for reducing

upper Hessenberg form by similarity Given an upper Hessenberg matrix, rithm for reducing it to Frobenius given in [Wi165].

methods

Dense

Interpolation

In this section we consider

“-IT %+ltk

terpolation

162

algorithm

the dense multivariate

for determinant

computation

in-

and improve gorithm

it using

presented

the univariate

determinant

in the previous

section.

We express F as a polynomial

of the

F(yl,.

..

,Ym)=clm

+c2~2+.

-.+

al-

This

steps

(dl + 1)...

are repeated

for

(dm-l

Each of the Fi’s is cominterpolation and has upto

+ 1).

puted using Vandermonde

=

1, q, where

di = n, q = (n + I)m - 1 and the overall wd~zredz? cor~ponds

to

ql or

Y1’ Y2’ . .. Ymm” are distinct the corresponding coefficients. minant

The

qz.

monomials

rni

=

is 0(nm+2

and Ci are of deter-

formulation

comput at ion reduces to enumerating

the mi’s

is a linear

for ci ‘s. The latter

using Vandermonde

step is performed

interpolation

by

polynomial

problem

a q x g Vandermonde

system.

time

is 0(qn3

+ qz). This Consider

values of m. q=qz=(n+l)m.

ning

time

tion

evaluations

is O(nm+3

running time. tion di < n/m in function

is reduced The

method

overall

running

for more

than

the funchalf

as compared

M= Mdmy:m+ . ..+

M

Sparse

M.

ments are polynomial

functions

rest of the algorithm

proceeds

mial.

To circumvent

determinant

whose ele-.

number

computation

is difficult

in yl, . . . . y~_ 1. The

hand,

At the ii h

[Zip90],

of terms

[MC91b].

require

(yl,..

.,ym_l)

= (p~, . . .,p~_l)in

it as a univariate

matrix

polynomial

for

Zippel’s

determinants.

probabilistic

algorithm, Techniques

of success using modular

sented in [MC91b].

In this section

iln

variate

algorithm

determinant

include The de[BoT88],

bound

on the which

On the other [Zip90],

works

to improve methods

its

are pre-

we use the uni-

to improve

its perfor-

mance.

2. Compute

the

determinant

of the matrix

pol:p3.2.1

nomial using the algorithm highlighted in the previous section. The resulting polynomial cor. . responds to F(p~, pj, . . . . p~_l, y~ ) Let us express F as a polynomial Y2, ...,

Ym)

=

able in the resulting

is:

performance the output.

. . .> Yrn-l)Yk.

. PA_l),

F~m(Pl,

P\,,,PL.

Zippel’s

Algorithm

Zippel’s probabilistic algorithm proceeds inductively. It only expects a bound on the degree of each vari-

details

polynomial,

di. Furthermore,

its

is a function of the number of terms in We present a brief outline here and more

are given in [Zip90].

Choose m random numbers rl, . . . . Tm from the coefficient field used for defining the polynomial coef-

we know the values of

. p), --.,

in ym. That

‘LOFi(YIJ

At the end of ii h iteration

.

and

polynomial,

M

Ym .

F(Y1,

Manocha algorithms These

an upper

in the resulting

for symbolic

probability

Fo(pi,

these problems

deterministic and probabilistic algorithms. terministic versions of Ben-C)r and Tiwari,

iteration:

and treat

the a sparse polyno-

sparse interpolation

well for our applications.

1. Substitute

d~. Furthermore,

. . . . ym ), maybe

F(yl,

Canny considered

Mlym+Mo.

iteratively.

the

algorithm is practical for three or four). This is due to

q is of the order of

the fact that determinant,

to Vandermonde

are matrices

+iii,

Interpolation

The dense interpolation small values of m (upto

and Zippel, Ml,

= Mlym

where Ml is a numeric matrix and entries of ~ are polynomials in yl, . . . , Y~- 1. We either multiply fi by Ml-l or use the transformations presented in the previous section to reduce it to characteristic poly-

3.2

Without loss of generality we assume that rim = max(dl, d2, . . . . dm). Express M as a matrix polynomial in ym as

...,

as

of the

In the case of Macaulay’s formulaand as a result more time is spent

Mdm,

of the given

we express M

as compared to determinant, we have reduced symbolic complexity by one variable.

interpolation.

where

As a result

nomial computation problem. Since computing the characteristic polynomial involves a constant factor

to solving

is useful for lower

In practice

+ nzm).

evaluation

of the matrix

in [MC91 a]. More

the case when CZi = n and As a result the overall rum

account

time

and some of Dixon’s

each entry

in the coefficients

system of equations.

interpolation are given in primes, choose m distinct all,, dz,z Distinct PI, . -., Pm and let bi = PI P2 . . . P~m”. monomials, mi and mj, evaluate to different val.ues. Let a? = F’(p~, p\, . . .lp~~ ) , i = l,q, be the value of the function computed using Gauss elimi The resulting

Sylvester’s

of resultants,

details on Vandermonde [Zip90]. In particular,

nation.

running

+ nzm-l).

In Macaulay’s,

The problem

and solving

q =

q terms. The running time of the resulting algorithm where p = max(n, din). In the event is O(qp3+dmq2),

(q

f4%,

i

.

ficients.

.-, PL-l).

163

The algorithm

introduces

a variable

at each

stage.

At the k + lth

nomial

in k variables

Fk(yl,

stage it has interpolated

3.2.2

a poly-

Improved

Sparse

Interpolation

and is of the form: To improve

. . . ,Yk)=~(Yl, = cl,~ml,~

+..

the performance

use the univariate

Y2, .-. ,Yk, ~k+l,7m),7m)

k + lth stage our algorithm manner:

. + %h,kmcxk,kl

a monomial in ml, k, 1 = 1, CZk, represents in k As a result, Fk is a polynomial Yl, Y2, . . ..Yk. variables having ak terms, where ak ~ q. To com-

of Zippel’s

determinant

algorithm

algorithm.

At

we the

proceeds in the following

where

pute ~~+l(Yl, ---, Y~+l) from F~(Yl, algorithm represents Fk+l as

-..,



Forl~i~ak

do:

F(p~, pj, . . . ,~; ,yk+l, Compute using the univariate determinant

y~), ‘Zippel’s

corresponds F~+l(yl,. =

...

y~+l)

= F(Y1,

k(Yk+l)ml,k

+..

. . .. Yk+l.

~k+2,

each each hi (yh+ 1) is a polynomial

dk+l.

Let us represent

hi(y~+l)

Each

hi(yk+l

= h,,o + hi,ly~+l

hl(yk+l)bl,k,i

using



Vandermonde

do:



Substitute

hi (yh+l ) ‘S tO compute

the

Fk+l.

of the The running time of the k + lth iteration improved algorithm is O(@kp3 + dh+l a!), where p =

unknowns.

F(p;

,..

. )PkTP~+17~k+27.--prrn i ~

It corresponds

)

to the value of the polyno-

4

mial ~l(p~+~)~l,k,i

+

. . . +

Lk(dk+l)hlk,

bl,k,i = ml,k

,.

developed

(u1=P:!v2=P;j...! uk=Pj)

– solve the resulting syst e,m hl(d~+l),hz(~i+l),

in C++

on Sun-4’s and the code has been

ported on an IBM RS/6000 for performance analysis. Many routines for linear algebra are available from

a~ x ~k Vandermonde for . . vha,

Implementation

The improved algorithms for result ant computation have been implemented as part of our package for result ant computation [MC9 lb]. The package has been

k,i,

where



For O ~ j ~ dh+l

max(n, dh+l). In general, dh+l is bounded by n as each entry of the matrix is a linear polynomial in the

do:

Compute



ha, (yk+l)ba,,k,i.

– Compute hl,j, hz,j . . . . h~h, j by solving ~k X ~k Vandermonde system.

in-

For O < j ~ dh+l do: –Forl~i~ak

+ . ..+

.

terpolation. In other words, the algorithm requires to know the value of hi (p~+”l) for 1-~ i ~ ak ~ O ~ j~ dk+l. The algorithm proceeds iteratively on i manner: and j in the following ●

It

of the form

of degree

) as + --- + hi,~,+,y$~;

hi (yb+l ) is computed

.- .,~m)

-.->~n)

. + ‘f%(yk+l)mak,kl

where

to a polynomial

~k+2>

algorithm.

standard libraries. mented in Fortran

(@i+l)-

However, they are mostly impleand used with floating point arith-

h~(yk+l) by solving for a (dk+l + 1) x (dk+~ + 1) Vandermonde system. This is re-

metic as opposed to exact or modular integer arithmetic. One of the nice features of these algorithms

peated

is their storage requirement. The total amount of space is a linear function of the input and output size. That includes the symbolic matrices and the

Compute

for 1 ~ i ~ @k.

Substitute hi (y~+l ) to represent ,., nomial in k+l variables. Fk+l(yl, have at most The

computes

algorithm the

~k+l,—as a poly-. . . . . ZY+l) can

multivariate polynomials. Vandermonde interpolation

(ak * (d~+l + 1)) s q terms. starts

n stages.

with At

the

F(rl,

symbolic

and

. . . . rn)

k + lth

lation

stage the

the worst

case, rzk is bounded

the running

time

corresponding

and the output

to resultant

formu-

polynomial.

In this section we compare the performance of different elimination algorithms for implicitizing parametric surfaces. More experiments are planned for other applications. These include comparisons with implementations of Grobner basis in Cocoa and

algorithm involves ~k * (dk+l + 1) function evaluasystems tions and solving (dk+ 1 + 1) Vandermonde of ak unknowns and ak Vandermonde systems with

dk+l + 1 unknowns. Thus, k + lth stage is 0(a~d~+ln3

matrices

The space requirement for is linear in the size of the

at the In

+ d~+la~ + akd~+l). by q.

Macaulay

164

computer

algebra

systems.

Algorithm Grobner Ritt-Wu’s

Table

bases resultants

multipolynomial

138 sec. \2’;’;;~24 sec. -

resultants

1: The performance

algorithms

of different

lBM

liS/fWU

IBM

RS/600

implicitization

on (6) This

4.1

problem

in geometric

algorithms.

pressed in projective

the

the parametric

parametrizations

4.2

with

Implicitizing with Base

t), I“v(s,t )),

A bsse point

implicit

degrees.

Parametrizations Points

of a parametrization

is the

common

wY(s,

t) – yrv(s,

t) =

o

result ants or Grobner bases fail to implicitize parametrizations. Modified algorithms using

Wz(s,

t) – Zw(s,

i) =

o

tants and Grobner basis are presented by Manocha and Canny, [MC92] and Kalkbrener, [Ka191], respec-

of implicitization

for implicitization

–3t(t

– 1)2+

tively. better.

include

Grobner

parametric

–(2s3 According

to resultant

of implicitization

sur-

where

3s

tant

reduces

l)t

+

+

S(S



algorithms, to expanding

Wx(s,

t) – rw(s,

t) =

o

wY(s,

t) – yw(s,

t) =

o

G(s, t) is a random

is a polynomial

l)). the problem

sented in Section

give us the best performance

rithm

in A and say

y, z)G(z, y), where and G(x7 y) corre-

takes about

using dense interpolation

1180 sec. on an IBM

RS/6000

algofor

a bicubic parametrization with base points [MC92]. Using the improved interpolation algorithms pre-

of clif-

3, computation

of P(z,

y, z)modp

takes about 58 sec. on IBM RS/6000 for the same parametrization. Thus, we see a speed-up of 20.5

1.

Thus, we see that the multipolynomial

resul-

sponds to the projection of seam curves (images of base points) [MC92]. Given a prime p, the comput a-

Section

The performance

The

R(x, y, z, A). Let us

as a polynomial

decomposed as P(x, y, z) = F(z, F’($, y, Z) is the implicit equation

a symbc}lic

tion of F’(x1 y, z)modp

is given in Table

t) = O,

polynomial.

of the form

the matrix is a linear homogeneous polynomial in Z,y, .z, w. This is equivalent to a non–homogenecms polynomial in ~, y, z. We used the improved dense multivariate interpolation algorithm, presented in 3, for this example.

con-

the coefficient of the lowest degree term is P(z, y, z). We are only interested in P(z, y, z) as opposed to the entire polynomial R(x, y, z, ~). P(z, y, z) can be

whose order is equal to the degree of equation. Furthermore, each entry of

algorithms

perform

system of the form:

express the resultant

6s2 – 9s + I)tz

(S3

such resul-

in [MC92]

bases

(6)

+ 3s2 – 6s +

based algorithms

the algorithm

wZ(s, t) – zW(s, t) + AG(s,

(s – 1)3+

– 5s + 5)t3 +

the resultant

In particular,

siders a perturbed

Other

[Hof90]:

3s(s–l)2+t3+3t –3(s(s2

Again

Hoffmann has surveyed A particular benchmamk

has been a bicubic

face given by Hoffmann,

to

by consid-

corresponds

in s and t [MC92].

for implicitization

ferent

higher

of for

o

the result ant of these equations

determinant the implicit

for

equations

and Ritt- Wu’s algorithm. these techniques in [Hof92].

z=

the us a

t) =

algorithms

=

per-

t) – Zw(s,

as polynomials

y

relative

Wx(s,

ering them

a?=

the

root of X(.st), Y(s, t), Z(s, t), W(s, t). They also include the points at infinity. Direct applications of

problem

computing

account

ex-

coordinates,

(x(s, t), Y(s, t), Z(s,

(z, y, z, w) = we formulate

benchmark

Given a parametrization,

into

this case, the implicit equation is a polynomial degree 18. The relative performance improves

and solid mcd-

eling and also seems to be a standard elimination

taking

of different machines. Furthermore, dense interpolation algorithm gives

speed-up of 5.5 ae compared to the previous algorithm present ed in [MC9 la] on this benchmark. In

We applied the results of this algorithm to implicitizing rational parametric surfaces. Implicitization is a fundamental

is after

formance improved

Implicitization

and

Reference

algorithm

Multipolynomial Improved

Machine

Time

result ants

by using algorithms

for implicitization.

165

presented

in this paper over the

earlier algorithm ate interpolation.

s

based on resultants

[Hof90]

and multivari-

Acknowledgements

We are grateful cussions.

and Surfaces, pages 499–528. Academic Publishers, 1990.

to James Demmel

This

research

for productive

was supported

dis-

in part

[Hof92]

by

David and Lucile Packard Fellowship and National Science Foundation Presidential Young Investigator Award (# IRI-8958577). The first author is also supported

by IBM

Graduate

C.M. Hoffmann. Algebraic and numeric techniques for offsets and blends. In W. Dahmen, M. Gasca, and C. Micchelli, editors, Computations of Curves

C. Hoffmann. Implicit curves and surfaces in computer aided geometric design.

Technical

Computer University,

Fellowship.

[Jou89]

M. Ben-or and P. Tiwari. istic algorithm for sparse polynomial

interpolation.

osium

Theory

on

301-309,

Groebner

method

Jean-Pierre

in

Jouanolou.

Publishing

MIT

J.F.

Canny.

polynomials. utation,

184-232.

The

6:49–69,

[KL92]

geom-

‘of Symbolic

209–236,

– EISPA

CK

volume

51 of Lecture

Notes

Springer-Verlag,

[Mac02]

Disser-

X.S. Gao and S.C. Chou.

Lakshman.

Im-

polynomial In

Lecture

volume

Springer-Verlag,

B. Donald

358,

1988.

Laksman.

An

Symbolic

Elimi-

introduction.

In

and J. Mundy,

and

Numerzcal

ed-

Compu-

Academic

an Integration.

F.S. Macaulay. matical

[Mac21]

Comp-

Press,

On some formula

in elim-

of London Mathepages 3–27, May 1902.

Proceedings Society,

F.S. Macaulay.

Note on the resultant

of

a number of polynomials of the same deof London Mathematigree. Proceedings pages 14–21, June 1921.

cal Society,

Pro-

[Mac64]

Soci-

F.S. Macaulay.

Extension, Computer

1977.

[MC91a]

D. Manocha

tzona[ gebraic

and J .F. Canny.

D.

of

Ser-

Manocha

Press, New York,

on

Proceedings Intelligent

Efficient resultant

Proceedings on

and

Symbolic

of Internaand AL

pages 86–95,

Computation,

tipolynomial

166

In

Symposzum

I. Gohberg, P. Lancaster, and L. RodMatrix Poiynomia!s. Academic man. 1982.

Theory

for multipolynomial

algorithms.

[MC91b]

Aigebratc

Stechert–Hafner New York, 1964.

F. Morley and A.B. Coble. New results in elimination. Amerzcan Journal 49:463–488, 1927. of Mathematics, techniques

Implicitization

The

Systems.

vice Agency, [MC27]

Eigensystem in

Linz,

Sctence,

and Y.N.

ination.

J. Dongarra,

Berlin,

Universitat,

algortihms.

methods:

Modular

Guide

thesis, Fakul-

1992. To Appear.

of three quan-

Matrix

Contributions

multivariate

Computer

Kapur

itors,

1908.

Boyle,

in

D. Kapur

1990.

Routines

D.

nation

of rational parametric equations. Technical report, Department of Computer Science TR-90-34, University of Texas at Austin, 1990. [GLR82]

Univer-

PhD

and Y.N.

sparse

pages 467-474.

characteristic

The eliminant

B.S. Garbow, J.M. and C.B. Moler.

E. Kaltofen

Notes

Press, 1988.

Journal

Kepler

interpolation

of Robot

Doctoral

Generalized

9:241-250,

Dixon.

Science.

[GC90]

In

of groebner

Complexity

Three Theory.

Johannes

tation:

ACM MIT

Kalkbrener.

proved

D. Reidel

tics in two independent variables. of London Mathematical ceedings [GBDM77]

[KL88]

Press, 1989.

Award.

J.F.

ety,

report,

1991.

Sys-

comput ational

Planning.

tation

A.L.

theory.

Applications

Canny,

Motion

[Dix08]

Le formalism

Technical

Elimination

tat,

In D. Kapur and J. Mundy, edReasoning, pages 415– Geometric

447.

[Can90]

Symp-

Co., 1985.

B. Buchberger.

bases in non-linear

[Can88]

of

Purdue

Technisch–Naturwissenschaftliche

bases: An al-

ideal

M. to

pages

Mu!tidimenstonal

pages

Theory,

etry. itors,

In ACM

of Computing,

N .K. Bose, editor, tems

[Ka191]

A determinmult ivariate

1988.

B. Buchberger. gorithmic

[Buc89]

Department

sity Louis Pasteur, Department of Mathematics, France, 1989. Manuscript.

References

[BUC85]

report,

Science CER-92-002, 1992.

du r&mltant.

[BOT88]

Kluwer

J.F.

resultant of International Robotzcs,

Canny. algorithms.

1991.

MulIn

Symposium

pages

348–358,

1991. Also available as Technical port UCB/CSD 91/632, Computer ence Division,

University

ReSci-

of California,

Berkeley. [MC92]

D. Manocha

and J.F.

Canny.

The

im-

plicit representation of rational parametric surfaces. Journa! of Symbolic Computation, 1992. To appear. [Rit50]

J.F.

[Sa185]

Colloquium G. Salmon.

Ritt.

Modern

Diflerentiai Lessons

Higher

[stu91]

B. Sturmfels. ory. Technical

[Wae50]

University,

and A. Zelevinsky.

result ants

of

Multitype.

Mathematical Technical report, ences Institute, Cornell University,

1991.

Manuscript. B.L. Van Der Waerden. F. Ungar 1950.

edztion).

New York,

[WU78]

Cornell

sylvester

(third

[Wi165]

&

Sparse elimination th,eReport 91-32, Mathemati-

1991. B. Sturmfels grade

to the

G.E. Stechert

1885.

cal Sciences Institute,

[SZ91]

1950.

Introductory

Algebra.

Co., New York,

AMS

Algebra.

Publications,

J.H. Wilkinson.

Modern

problem. Oxford ford, 1965.

University

W.

decision

Wu.

On the

the mechanization elementary 21:150-172,

1978.

eigenvalue

problem

Oxand

proving

$cientia

Also

Co.,

Press,

of theorem

geometry.

A~gebra

Publishing

algebraic

The

Sci-

in

Sinica,

in Bledsoe

and

Proving: A[fLoveland, eds. Theorem Mathematter 25 years, Contemporary ics,

[Zip90]

29,

213–234.

R. Zippel. from their Computation,

Interpolating values.

polynomials

Journal

9:375-403,

of Symbokc

1990.

167