Primary Decomposition: Algorithms and Comparisons

Wolfram Decker (1), Gert-Martin Greuel (2), and Gerhard Pfister (2)

(1) Universität des Saarlandes, Fachbereich 9 Mathematik, Postfach 151 150, D-66041 Saarbrücken
(2) Universität Kaiserslautern, Fachbereich Mathematik, Erwin-Schrödinger-Strasse, D-67663 Kaiserslautern
1 Introduction

Primary decomposition of an ideal in a polynomial ring over a field belongs to the indispensable theoretical tools in commutative algebra and algebraic geometry. Geometrically it corresponds to the decomposition of an affine variety into irreducible components and is, therefore, also an important geometric concept. The decomposition of a variety into irreducible components is, however, slightly weaker than the full primary decomposition, since the irreducible components correspond only to the minimal primes of the ideal of the variety, which is a radical ideal. The embedded components, although invisible in the decomposition of the variety itself, are, however, responsible for many geometric properties, in particular if we deform the variety slightly. Therefore, they cannot be neglected, and the knowledge of the full primary decomposition is important also in a geometric context.

In contrast to the theoretical importance, one can find in mathematical papers only very few concrete examples of non-trivial primary decompositions, because carrying out such a decomposition by hand is almost impossible. This experience corresponds to the fact that providing efficient algorithms for primary decomposition of an ideal $I \subset K[x_1,\dots,x_n]$, $K$ a field, is also a difficult task and still one of the big challenges for computational algebra and computational algebraic geometry. All known algorithms require Gröbner bases respectively characteristic sets and multivariate polynomial factorization over some (algebraic or transcendental) extension of the given field $K$. The first practical algorithm for computing the minimal associated primes is based on characteristic sets and the Ritt-Wu process ([R1], [R2], [Wu], [W]); the first practical and general primary decomposition algorithm was given by Gianni, Trager and Zacharias [GTZ]. New ideas from homological algebra were introduced by Eisenbud, Huneke and Vasconcelos in [EHV]. Recently, Shimoyama and Yokoyama [SY] provided a new algorithm, using Gröbner bases, to obtain the primary decomposition from the given minimal associated primes.

In the present paper we present all four approaches together with some improvements and with detailed comparisons, based upon an analysis of 34 examples using the computer algebra system SINGULAR [GPS]. Since primary
decomposition is a fairly complicated task, it is best explained by dividing it into several subtasks, in particular since sometimes only one of these subtasks is needed. The paper is organized in such a way that we consider the subtasks separately and present the different approaches of the above-mentioned authors, with several tricks and improvements incorporated. Some of these improvements, and the combination of certain steps from the different algorithms, are essential for improving the practical performance.

Section 2 contains the algorithms. After explaining some important splitting tools, we explain two different approaches for computing the radical of $I$, respectively the radical of the equidimensional hull. In Subsection 2.2 we present two algorithms for computing the equidimensional hull itself and a weak, that is, up to radical, decomposition of the equidimensional hull. The algorithms of [GTZ] and [EHV] both reduce the general problem to primary decomposition of zero-dimensional ideals. We, therefore, consider the zero-dimensional case, together with some theoretical background, in Subsection 2.3. In Subsection 2.4 we describe the three algorithms of [GTZ], [EHV] and [SY] for the general case, together with an algorithm to compute the minimal associated primes of $I$. The algorithm of [EHV] uses the normalization of $K[x_1,\dots,x_n]/I$, and we present a new algorithm ([J]), based on a criterion of Grauert and Remmert, in Subsection 2.5. Another algorithm for computing the minimal associated primes is based on characteristic sets; this is presented, together with some basic facts about characteristic sets, in Subsection 2.6.

Section 3 is devoted to the examples and comparisons of the different approaches. The examples were taken from a still larger list, and they demonstrate our present knowledge about the relative performance. Our table on the last page shows that a general best strategy does not exist. Generally speaking, the characteristic set method has problems if the examples require too many factorizations over extension fields, while [GTZ] has problems if the examples require going to general position by a random coordinate change. So far, we can only recommend a combination of the different subalgorithms, depending on the example. In contrast to the opinion of some authors, our experience is that one should use factorization as often as possible, since usually the Gröbner bases computations are the hardest part. This is, in particular, true for the algorithm computing the minimal primes, where we use the factorizing Gröbner basis algorithm, but also have our exceptions.

We are aware of the fact that comparison of algorithms by examples is certainly affected by the choice of the examples and by tricky implementation features. On the other hand, the present paper appears to be the first systematic comparison of the four, so far, most important algorithms under equal conditions. Almost all algorithms presented in this paper are implemented in SINGULAR, with options for the user to combine his own favourite subalgorithms, and are available in the library primdec.lib, distributed with the programme (cf. [GPS]).
Throughout this paper, we assume that Gröbner bases computations and multivariate polynomial factorization are possible over all fields considered. All Gröbner bases are minimal, if not mentioned otherwise. For some assertions and algorithms we need to assume char$(K) = 0$ or char$(K) = p \gg 0$.

Acknowledgement: the authors were supported by the Deutsche Forschungsgemeinschaft through projects within the Schwerpunktprogramm.
2 The Algorithms

In this section, $K$ is a field, $R = K[x_1,\dots,x_n]$, and $I \subset R$ is an ideal. Our aim is to explain how to compute several decompositions of $I$, its radical $\sqrt{I}$, and the normalization of the factor ring $R/I$. Our main tools are Gröbner bases, but, for a complete primary decomposition, we also need multivariate polynomial factorization. Almost all algorithms presented in this note are implemented in SINGULAR.

If $I = \bigcap_{i=1}^{r} Q_i$ is a minimal primary decomposition (that is, $r$ is minimal) with associated primes $P_i = \sqrt{Q_i}$, then we write
$$E_v(I) := \bigcap_{\operatorname{codim}(Q_i)=v} Q_i$$
for the equidimensional part of $I$ of codimension $v$ (if $\operatorname{codim}(Q_i) \neq v$ for all $i$, let $E_v(I) = R$). We are interested in solving the following problems:

1. Compute the equidimensional hull $E_c(I)$, $c = \operatorname{codim}(I)$.
2. More generally, compute the equidimensional decomposition of $I$, that is, for $v \geq c$ compute $E_v(I)$.
2'. To solve a weaker problem, compute equidimensional ideals $I_v$ such that $\sqrt{I_v} = \sqrt{E_v(I)}$, $v \geq c$.
3. Compute $\operatorname{Ass}(I) = \{P_1,\dots,P_r\}$ and $\operatorname{minAss}(I) = \{P_i \in \operatorname{Ass}(I) \mid P_i \not\supset P_j \text{ for } i \neq j\}$.
4. Compute the radical $\sqrt{I} = \bigcap_{i=1}^{r} P_i = \bigcap_{P \in \operatorname{minAss}(I)} P$ and the equidimensional radical $\sqrt{I}^{\,\mathrm{equi}} = \sqrt{E_c(I)}$, $c = \operatorname{codim}(I)$.
5. Compute, for $I$ radical, the normalization of $R/I$, that is, the integral closure of $R/I$ in its quotient ring $Q(R/I)$.
6. Compute a minimal primary decomposition of $I$.
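Most of these problems can be attacked directly with the procedures of the SINGULAR library primdec.lib mentioned in the introduction. The following session is only an illustrative sketch: the ideal is made up for the purpose of the example, and the exact names and calling conventions of the procedures may differ between SINGULAR versions.

    LIB "primdec.lib";                // primary decomposition, minimal primes, radical
    ring r  = 0, (x,y,z), dp;         // Q[x,y,z], degree reverse lexicographical ordering
    ideal I = intersect(ideal(x,y), ideal(y,z), ideal(x^2,y^2,z));   // made-up example
    primdecGTZ(I);                    // problem 6: list of pairs [primary component, associated prime]
    minAssGTZ(I);                     // problem 3: the minimal associated primes
    radical(I);                       // problem 4: the radical of I
    equidim(I);                       // problems 1, 2, 2': an equidimensional decomposition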
Splitting tools may allow the reduction of a given problem to a problem involving ideals which are easier to handle.
Lemma 1 (splitting tools). Let $I \subset R$ be an ideal.
1. If $I : f = I : f^2$ for some $f \in R$, then $I = (I : f) \cap \langle I, f\rangle$.
2. If $fg \in I$ and $\langle f, g\rangle = R$, then $I = \langle I, f\rangle \cap \langle I, g\rangle$.
3. If $fg \in I$, then $\sqrt{I} = \sqrt{\langle I, f\rangle} \cap \sqrt{\langle I, g\rangle}$.
4. If $f^n \in I$, then $\sqrt{I} = \sqrt{\langle I, f\rangle}$.
5. If $J \subset R$ is an ideal, then $\sqrt{I} = \sqrt{I : J} \cap \sqrt{I + J} = \sqrt{I : J} \cap \sqrt{I : (I : J)}$.
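As a small made-up illustration of part 1, consider $I = \langle x^2 y\rangle \subset K[x,y]$ and $f = y$: here $I : y = \langle x^2\rangle = I : y^2$, so
$$I = (I : y) \cap \langle I, y\rangle = \langle x^2\rangle \cap \langle y\rangle = \langle x^2 y\rangle,$$
which splits $I$ into its two primary components.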
Remark 2. Our experience shows that in all algorithms one should use Lemma 1 to split the ideal as often as possible.

Remark 3. Polynomials $f$ as in Lemma 1, 1. can be found via saturation: if $I : h^\infty = I : h^N$, then $I = (I : h^N) \cap \langle I, h^N\rangle$. If $h_1 \cdots h_s$ is the square-free part of $h$, then we may replace $h^N$ by $h_1^{k_1} \cdots h_s^{k_s}$, where $I : h^\infty = I : h_1^{k_1} \cdots h_s^{k_s}$. In fact, we may compute $I : h^\infty$ via ideal quotients by successively increasing the powers of the $h_i$. This idea applies, in particular, in the following case.

Lemma 4. Let $I \subset K[x]$, $x = \{x_1,\dots,x_n\}$, be an ideal, and let $u \subset x$ be a subset of variables. Fix a block ordering $<$ on $K[x]$ with $u$ ...

... while $a > d$ do
      while $\dim J_a(I) = d$ do $I := I : J_a(I)$;
      $a := a - 1$;
– return $I : J_d(I)$
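The saturation $I : h^\infty$ from Remark 3 is available in SINGULAR as the procedure sat from elim.lib; the following lines are only an illustrative sketch on a made-up ideal, assuming that sat returns the saturated ideal together with the saturation exponent.

    LIB "elim.lib";
    ring r  = 0, (x,y), dp;
    ideal I = x*y^2;                  // made-up example
    list  L = sat(I, ideal(y));       // assumed: L[1] = I : y^infinity, L[2] = saturation exponent
    ideal S = L[1];                   // here <x>
    int   N = L[2];                   // here 2, since I : y^2 = I : y^3 = <x>
    intersect(S, I + ideal(y)^N);     // I = (I : y^N) intersected with <I, y^N>; gives back <x*y^2>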
2.2 Equidimensional Hulls and Equidimensional Decompositions

Again we present two different approaches. The first approach, which is used in several papers ([GTZ], [KL], ...), is based on Lemma 4.
Algorithm 5.
Equidimensional(I )
Input: an ideal $I$ in $K[x_1,\dots,x_n]$
Output: two ideals, the equidimensional hull $E_c(I)$ of $I$ ($c = \operatorname{codim} I$), and an ideal $W$ of codimension $> c$ such that $I = E_c(I) \cap W$
– choose any admissible term ordering $<$ on $K[x_1,\dots,x_n]$;
– compute $c := \operatorname{codim}(I)$;
– compute $X_I^<$ ...

... A maximal ideal $P \subset K[x_1,\dots,x_n]$ is called in general position with respect to the lexicographical ordering induced from $x_1 > \dots > x_n$, if the reduced Gröbner basis of $P$ is of type $\{x_1 - f_1(x_n), \dots, x_{n-1} - f_{n-1}(x_n), f_n(x_n)\}$ with $f_i \in K[x_n]$.

Remark 16. Notice that automatically $f_n$ is irreducible and $\deg f_i < \deg f_n$, $i < n$. Every $a = (a_1,\dots,a_{n-1}) \in K^{n-1}$ defines an automorphism $\varphi_a$ of $K[x_1,\dots,x_n]$ by $\varphi_a(x_i) = x_i$ if $i < n$, and $\varphi_a(x_n) = x_n + \sum_{i=1}^{n-1} a_i x_i$.
Proposition 17. Let $P \subset K[x_1,\dots,x_n]$ be a maximal ideal. Then there exists a dense open subset $U \subset K^{n-1}$ such that every $\varphi_a(P)$, $a \in U$, is in general position with respect to the lexicographical ordering induced from $x_1 > \dots > x_n$.

Definition 18. Let $I \subset K[x_1,\dots,x_n]$ be a zero-dimensional ideal. $I$ is called in general position with respect to the lexicographical ordering induced from $x_1 > \dots > x_n$, if the following holds for the minimal primary decomposition $I = Q_1 \cap \dots \cap Q_s$ with associated primes $P_1,\dots,P_s$:
1. $P_1,\dots,P_s$ are in general position with respect to the lexicographical ordering induced from $x_1 > \dots > x_n$.
2. $P_1 \cap K[x_n],\dots,P_s \cap K[x_n]$ are coprime.

Proposition 19. Let $I \subset K[x_1,\dots,x_n]$ be a zero-dimensional ideal. Then there is a dense open subset $U \subset K^{n-1}$ such that every $\varphi_a(I)$, $a \in U$, is in general position with respect to the lexicographical ordering induced from $x_1 > \dots > x_n$.

Theorem 20. Let $I \subset K[x_1,\dots,x_n]$ be a zero-dimensional ideal in general position with respect to the lexicographical ordering induced from $x_1 > \dots > x_n$, $G$ a corresponding minimal Gröbner basis of $I$, and $\{f\} = G \cap K[x_n]$. Let $f = f_1^{\nu_1} \cdots f_s^{\nu_s}$ be the decomposition of $f$ into a power product of pairwise non-associated irreducible factors $f_k$. Then the minimal primary decomposition of $I$ is given by
$$I = \bigcap_{k=1}^{s} \langle I, f_k^{\nu_k}\rangle.$$
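For instance (a made-up toy case), $I = \langle x - y,\ y^2 - 1\rangle \subset \mathbb{Q}[x,y]$ is zero-dimensional and in general position with respect to the lexicographical ordering induced from $x > y$; its reduced Gröbner basis is $G = \{y^2 - 1,\ x - y\}$, so $f = y^2 - 1 = (y-1)(y+1)$, and Theorem 20 gives
$$I = \langle I, y - 1\rangle \cap \langle I, y + 1\rangle = \langle x - 1, y - 1\rangle \cap \langle x + 1, y + 1\rangle,$$
a decomposition into two maximal ideals.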
Theorem 20 yields the following algorithm.
Algorithm 8.
ZeroPrimDec(I [, check])
Input: a zero-dimensional ideal in $K[x_1,\dots,x_n]$
Output: $\{Q_1, P_1, \dots, Q_s, P_s\}$, $Q_i$ primary, $\sqrt{Q_i} = P_i$, $P_i \neq P_j$ for $i \neq j$, and $I = Q_1 \cap \dots \cap Q_s$
# The ideal check and all commands involving check are optional;
# check is needed later on for the higher dimensional decomposition
# in order to avoid redundant components.
– Result := $\emptyset$, Rest := $\emptyset$;
– [if check $\subset I$, then return Result;]
– choose $a \in K^{n-1}$ at random, and let $J := \varphi_a(I)$;
– [check $:= \varphi_a$(check);]
– compute a Gröbner basis $G$ of $J$ with respect to the lexicographical ordering induced from $x_1 > \dots > x_n$;
– let $G \cap K[x_n] = \{f\}$;
– factorize $f = f_1^{\nu_1} \cdots f_s^{\nu_s}$;
– for $k := 1$ to $s$ do
    [if check $\not\subset \langle J, f_k^{\nu_k}\rangle$, then] test whether $\langle J, f_k^{\nu_k}\rangle$ is primary and in general position, that is, compute a Gröbner basis $S$ of $\langle J, f_k^{\nu_k}\rangle$ with respect to the lexicographical ordering induced from $x_1 > \dots > x_n$, and check whether $S$ contains $h_1^{(k)},\dots,h_n^{(k)}$ such that
      1. $h_n^{(k)} = f_k^{\nu_k}$;
      2. $h_i^{(k)} \equiv \bigl(x_i - g_i^{(k)}(x_n)\bigr)^{\nu_i} \bmod \langle h_{i+1}^{(k)},\dots,h_n^{(k)}\rangle$, $i < n$;
    [if check $\not\subset \langle J, f_k^{\nu_k}\rangle$, then]
      if $\langle J, f_k^{\nu_k}\rangle$ is primary and in general position, then
        $P_k := \langle x_1 - g_1^{(k)},\dots,x_{n-1} - g_{n-1}^{(k)}, f_k\rangle$ is the prime associated to $Q_k := \langle J, f_k^{\nu_k}\rangle$;
        Result := Result $\cup \{\varphi_a^{-1}(Q_k), \varphi_a^{-1}(P_k)\}$;
      else
        [check $:= \varphi_a^{-1}$(check);]
        Rest := Rest $\cup \{\langle I, \varphi_a^{-1}(f_k^{\nu_k})\rangle\}$;
– for $L \in$ Rest do Result := Result $\cup$ ZeroPrimDec($L$ [, check]);
– return Result.
Remark 21. To make this algorithm really efficient, it is necessary to do some preprocessing in order to avoid as many random coordinate changes as possible. A random coordinate change destroys sparseness, and usually makes the subsequent Gröbner basis computations very difficult. Therefore, we use the splitting tools
1. $I = (I : f) \cap \langle I, f\rangle$ if $I : f = I : f^2$,
2. $\langle I, fg\rangle = \langle I, f\rangle \cap \langle I, g\rangle$ if $\langle f, g\rangle = \langle 1\rangle$

to split the ideal as often as possible before starting Algorithm 8 (if in 2. the condition $\langle f, g\rangle = \langle 1\rangle$ is not fulfilled, we still can apply 1. to a suitable power of $f$). In order to use 1. and 2., we produce as many reducible elements as possible. This leads to the following preprocessing algorithm.
Algorithm 9. Split(I )
Input: a zero-dimensional ideal $I$ in $K[x_1,\dots,x_n]$
Output: two sets of ideals, Primary $= \{Q_1, P_1, \dots, Q_s, P_s\}$ and Rest $= \{I_1,\dots,I_k\}$, such that $I = (\cap Q_i) \cap (\cap I_i)$, $Q_i$ primary, and $\sqrt{Q_i} = P_i$

– Primary := $\emptyset$, Rest := $\emptyset$;
– for $i := 1$ to $n$ do
    compute $\langle F_i\rangle := I \cap K[x_i]$;
    enlarge the system of generators of $I$ by $F_i$;
– factorize all the generators of $I$ and split the ideal and the resulting ideals as often as possible;
– compute for all ideals obtained in this way a Gröbner basis with respect to the lexicographical ordering induced from $x_1 > \dots > x_n$;
– test whether the ideals are primary and in general position with respect to the lexicographical ordering induced from $x_1 > \dots > x_n$; put the detected primary ideals and their associated primes to Primary and the other ideals to Rest;
– return Primary, Rest

Remark 22. Each ideal in Rest comes with a set of generators (which in fact is a Gröbner basis with respect to the lexicographical ordering induced from $x_1 > \dots > x_n$) such that every generator is a power of an irreducible element.

Remark 23. The preprocessing for a zero-dimensional ideal which we know to be radical is simpler than in the general case: we can use the fact that
$$\sqrt{\langle I, fg\rangle} = \sqrt{\langle I, f\rangle} \cap \sqrt{\langle I, g\rangle},$$
which holds without the assumption $\langle f, g\rangle = \langle 1\rangle$. In particular, we can use the factorizing Gröbner basis algorithm to split the ideal. Also the prime test for a zero-dimensional ideal is simpler than the primary test: $I$ is prime if there is an irreducible $g \in I \cap K[x_i]$ for some $i$ such that $\deg(g) = \dim_K K[x_1,\dots,x_n]/I$. Especially, we obtain: $I$ is prime and in general position with respect to the lexicographical ordering induced from $x_1 > \dots > x_n$ if and only if, for a corresponding minimal Gröbner basis $G$ and $\{g\} = G \cap K[x_n]$, we have $\deg(g) = \dim_K K[x_1,\dots,x_n]/I$ and $g$ is irreducible.
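The prime test from Remark 23 can be carried out with standard SINGULAR commands; the following lines are only a sketch on a made-up zero-dimensional ideal.

    ring r  = 0, (x,y), lp;          // lexicographical ordering x > y
    ideal I = x-y, y^2-2;            // made-up zero-dimensional example
    I = std(I);
    int   d = vdim(I);               // = dim_K K[x,y]/I, here 2
    ideal E = eliminate(I, x);       // I intersected with K[y]
    poly  g = E[1];                  // here y^2-2
    factorize(g);                    // g is irreducible over Q ...
    deg(g) == d;                     // ... and deg(g) = 2 = d, so I is prime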
The following probabilistic algorithm, proposed by Eisenbud, Huneke, and Vasconcelos ([EHV]), also goes to general position.
Algorithm 10.
DecompEHV(I )
Input: a zero-dimensional radical ideal $I$ in $K[x_1,\dots,x_n]$
Output: the associated prime ideals
– choose a generic $f \in K[x_1,\dots,x_n]$, and test whether $f$ is a zero-divisor mod $I$ (that is, check whether $I : f \supsetneq I$);
– if $f$ is a zero-divisor mod $I$ (which implies $I = (I : f) \cap \langle I, f\rangle$), then return DecompEHV($I : f$) $\cup$ DecompEHV($\langle I, f\rangle$);
– choose $m$ minimal such that $1, f, \dots, f^m$ are linearly dependent mod $I$, and denote by $F \in K[T]$ the corresponding dependence relation;
– if $m < \dim_K K[x_1,\dots,x_n]/I$, restart the algorithm with another $f$;
– if $F$ is irreducible, then return $\{I\}$;
– if $F$ factors as $F = G_1 \cdot G_2$, then return DecompEHV($\langle I, G_1(f)\rangle$) $\cup$ DecompEHV($\langle I, G_2(f)\rangle$)

2.4 Higher Dimensional Primary Decomposition

The minimal associated primes
One approach, proposed by Eisenbud, Huneke, and Vasconcelos ([EHV]), starts with a radical ideal, computes all associated primes, and uses normalization. The normalization algorithm presented later on in 2.5 has, as input, a radical ideal $I \subset R = K[x_1,\dots,x_n]$ and, as output, $r$ polynomial rings $R_1,\dots,R_r$, $r$ prime ideals $I_1 \subset R_1,\dots,I_r \subset R_r$, and $r$ maps $\pi_i : R \longrightarrow R_i$ such that the induced map
$$\pi : R/I \longrightarrow R_1/I_1 \times \dots \times R_r/I_r, \qquad \pi(f) = \bigl(\pi_1(f),\dots,\pi_r(f)\bigr),$$
is the normalization of $R/I$. In fact, if we plug in the computation of idempotents as explained in 2.5, then the result of the normalization algorithm is the minimal prime decomposition $I = \pi_1^{-1}(I_1) \cap \dots \cap \pi_r^{-1}(I_r)$ of $I$ (recall that normalization commutes with localization). Notice, however, that the computation of the idempotents still needs zero-dimensional prime decomposition.

Another possibility, also reducing the problem to the zero-dimensional case, does not necessarily need a radical ideal to start with. This approach, relying on Lemma 4, goes back to Gianni, Trager, and Zacharias ([GTZ]).
Algorithm 11.
MinAssPrimes(I )
Input: an ideal $I$ in $K[x_1,\dots,x_n]$
Output: the minimal associated prime ideals of $I$
– Result := $\emptyset$;
– choose any admissible term order $<$ on $K[x_1,\dots,x_n]$;
– use the factorizing Gröbner basis algorithm to split $I$;
  the result $m$ is a set of ideals given by Gröbner bases such that
    1. all elements of the Gröbner bases are irreducible;
    2. the radical of the intersection of the elements of $m$ is the radical of $I$;
– for $J \in m$ do
    compute $X_J^<$;
    for $u \in X_J^<$ do
      compute $\operatorname{Ass}(JK(u)[x \setminus u])$ by using zero-dimensional prime decomposition;
      for $P \in \operatorname{Ass}(JK(u)[x \setminus u])$ do Result := Result $\cup \{P \cap K[x]\}$;
      compute $h$ such that $JK(u)[x \setminus u] \cap K[x] = J : h$;
      $J := \langle J, h\rangle$;
    Result := Result $\cup$ MinAssPrimes($J$);
– return Result
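In SINGULAR, minimal associated primes can be computed along these lines with minAssGTZ from primdec.lib (and with minAssChar for the characteristic set based method mentioned below); the following sketch uses a made-up ideal, and the exact procedure names may differ between versions.

    LIB "primdec.lib";
    ring r  = 0, (x,y,z), dp;
    ideal I = x*y, x*z;               // made-up example, I = <x> intersected with <y,z>
    facstd(I);                        // factorizing Groebner basis algorithm (the splitting step above)
    minAssGTZ(I);                     // minimal associated primes along the lines of Algorithm 11
    minAssChar(I);                    // the characteristic set based alternative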
A third possibility, also starting not necessarily with a radical ideal, is based on characteristic sets. We will treat this approach later.
Associated Primary Ideals
The first approach, proposed by Eisenbud, Huneke, and Vasconcelos ([EHV]), is based on the following lemma:
Lemma 24. Let $I$ be an ideal, $P \in \operatorname{minAss}(I)$, and $m$ an integer satisfying $I : P^m \not\subset P$. Then the equidimensional hull of $I + P^m$ is a $P$-primary ideal of a decomposition of $I$.
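As a made-up illustration, take $I = \langle x^2, xy\rangle = \langle x\rangle \cap \langle x^2, xy, y^2\rangle \subset K[x,y]$ and $P = \langle x\rangle \in \operatorname{minAss}(I)$. Here $I : P = \langle x, y\rangle \not\subset P$, so $m = 1$ works, and the equidimensional hull of $I + P = \langle x\rangle$ is $\langle x\rangle$ itself, the $P$-primary component of $I$.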
Remark 25. If $P \in \operatorname{Ass}(I)$ is an embedded prime, then one can obtain a $P$-primary ideal $Q$ of a decomposition of $I$ as
$Q = $ Equidimensional$(I + P^m)$ for some $m$. In this case, it is more difficult to estimate $m$ (cf. [EHV]): let $I_{[P]} = \{b \in R \mid I : b \not\subset P\}$. Then $Q$ is a $P$-primary ideal of a decomposition of $I$ if and only if the map $(I_{[P]} : P^\infty)/I_{[P]} \longrightarrow R/Q$ is injective.
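Continuing the made-up example above, the embedded prime of $I = \langle x^2, xy\rangle$ is $P = \langle x, y\rangle$, and $m = 2$ already works: $I + P^2 = \langle x^2, xy, y^2\rangle$ is $P$-primary (hence equal to its equidimensional hull), and indeed $I = \langle x\rangle \cap \langle x^2, xy, y^2\rangle$.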
The Algorithm of Eisenbud, Huneke, and Vasconcelos

Algorithm 12. PrimarydecEHV(I)
Input: an ideal $I$ in $R = K[x_1,\dots,x_n]$
Output: a set Result $= \{Q_1, P_1, \dots, Q_s, P_s\}$ such that $I = \cap Q_v$ is a minimal primary decomposition and $\sqrt{Q_v} = P_v$, $v = 1,\dots,s$.
– $E := \{\operatorname{ann}\operatorname{Ext}^v_R(R/I, R) \mid v \geq \operatorname{codim}(I)\}$;
– $m := \{\text{Equiradical}(J) \mid J \in E,\ J \neq R\}$;
– compute $\operatorname{Ass}(I) = \{P_1,\dots,P_s\} := \bigcup_{L \in m} \operatorname{Ass}(L)$ (by using the normalization algorithm; notice that here all associated primes of $I$ are computed);
– for $i := 1$ to $s$ do
    compute $Q_i :=$ Equidimensional$(I + P_i^m)$ with $m$ as in Lemma 24 or Remark 25;
– return $\{Q_1, P_1, \dots, Q_s, P_s\}$
A second approach, based on Lemma 4, is due to Gianni, Trager, and Zacharias ([GTZ]).
The Algorithm of Gianni, Trager, and Zacharias

Algorithm 13. PrimarydecGTZ(I [, check])
Input and Output as in the previous algorithm
# the input check is optional and needed for recursion
– Result := $\emptyset$;
– if check is not defined, then check := $\langle 1\rangle$;
– choose any admissible term ordering $<$ on $K[x_1,\dots,x_n]$;
– if check $\subset I$, then return Result;
– compute $X_I^<$;
– for $u \in X_I^<$ do
    $m :=$ ZeroPrimDec($IK(u)[x \setminus u]$, check);
    Result := Result $\cup \{Q \cap K[x] \mid (Q, P) \in m\}$;
    compute $h$ such that $IK(u)[x \setminus u] \cap K[x] = I : h = I : h^2$;
    $I := \langle I, h\rangle$;
    for $(Q, P) \in m$ do check := check $\cap\ Q$;
– Result := Result $\cup$ PrimarydecGTZ($I$, check);
– return Result
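The [GTZ] approach is available in SINGULAR as the procedure primdecGTZ from primdec.lib; the following minimal sketch, on a made-up ideal, shows a typical call.

    LIB "primdec.lib";
    ring r  = 0, (x,y), dp;
    ideal I = x^2, x*y;               // made-up example with an embedded component
    list  L = primdecGTZ(I);
    L;                                // a list of pairs [primary component, associated prime],
                                      // here <x> (prime <x>) and an <x,y>-primary component
    size(L);                          // number of primary components, here 2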
A third approach, proposed by Shimoyama and Yokoyama ([SY]), is based on the following two lemmata:
Lemma 26. Let $I$ be an ideal and $\operatorname{minAss}(I) = \{P_1,\dots,P_r\}$. Assume there are $f_1,\dots,f_r$ such that
– $f_i \in \bigcap_{j \neq i} P_j$;
– $f_i \notin P_i$.
Let $k_i$ be defined by $I : f_i^\infty = I : f_i^{k_i}$, let $\bar Q_i := I : f_i^\infty$, and let $J := I + \langle f_1^{k_1},\dots,f_r^{k_r}\rangle$. Then
1. $\sqrt{\bar Q_i} = P_i$, that is, $\bar Q_i$ is pseudo-primary with associated prime $P_i$;
2. $I = \bigcap_{i=1}^{r} \bar Q_i \cap J$;
3. $\operatorname{codim}(J) > \operatorname{codim}(I)$;
4. let $\bar Q_i = \bigcap_j Q_j^{(i)}$ be a minimal primary decomposition of $\bar Q_i$, $i = 1,\dots,r$. Then $\bigcap_{i,j} Q_j^{(i)}$ is a minimal primary decomposition of $\bigcap_{i=1}^{r} \bar Q_i$ (no redundant components!) and $\bigcup_i \operatorname{Ass}(\bar Q_i) \cap \operatorname{Ass}(J) = \emptyset$.
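As a small made-up illustration, let $I = \langle x^2 y, x y^2\rangle \subset K[x,y]$, with $\operatorname{minAss}(I) = \{\langle x\rangle, \langle y\rangle\}$. Separators are $f_1 = y$ and $f_2 = x$, with $k_1 = k_2 = 2$: $\bar Q_1 = I : y^\infty = \langle x\rangle$, $\bar Q_2 = I : x^\infty = \langle y\rangle$, and $J = I + \langle y^2, x^2\rangle = \langle x^2, y^2\rangle$, so
$$I = \langle x\rangle \cap \langle y\rangle \cap \langle x^2, y^2\rangle$$
with $\operatorname{codim}(J) = 2 > 1 = \operatorname{codim}(I)$; here the remaining component $J$ happens to be exactly the embedded $\langle x, y\rangle$-primary component.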
Remark 27. Let $I$ be an ideal and $\operatorname{minAss}(I) = \{P_1,\dots,P_r\}$. Assume that $G_1,\dots,G_r$ are Gröbner bases of $P_1,\dots,P_r$. Since $P_i$ is minimal in $\operatorname{Ass}(I)$, there are always elements $t_j$ in $G_j$ not being in $P_i$ for $i \neq j$. Now define $f_i := \prod_{j \neq i} t_j$. Then $f_1,\dots,f_r$ satisfy the assumptions of Lemma 26.

Lemma 28. Let $\bar Q$ be pseudo-primary with $\sqrt{\bar Q} = P$ prime and $u \subset x$ a maximal independent set mod $\bar Q$. Then $\bar Q K(u)[x \setminus u] \cap K[x] =: Q$ is $P$-primary. Let $h \in K[u]$ be chosen such that $\bar Q K(u)[x \setminus u] \cap K[x] = \bar Q : h = \bar Q : h^2$, and set $J := \langle \bar Q, h\rangle$. Then
1. $\bar Q = Q \cap J$;
2. $\operatorname{codim} J > \operatorname{codim}(\bar Q)$.
Definition 29. 1. Polynomials $f_i$ as in Lemma 26 are called separators.
2. A decomposition as in Lemma 26, 2. is called a pseudo-primary decomposition, with remaining component $J$ and pseudo-primary components $\bar Q_i$.
3. A decomposition as in Lemma 28, 1. is called extraction of $Q$ from $\bar Q$, with remaining component $J$.
We obtain the following two procedures:
Algorithm 14.
PseudoPrimaryDecomp(I )
Input: an ideal $I$ in $K[x_1,\dots,x_n]$
Output: a set Result $= \{(\bar Q_1, P_1, f_1), \dots, (\bar Q_r, P_r, f_r), J\}$ with $\bar Q_i$, $P_i$, $f_i$, and $J$ as in Lemma 26
– compute $\operatorname{minAss}(I) := \{P_1,\dots,P_r\}$ (use your favourite algorithm);
– if $r = 1$, then return $\{(I, P_1, 1), \langle 1\rangle\}$;
– Result := $\emptyset$;
– $J := I$;
– compute separators $f_1,\dots,f_r$;
– for $i = 1$ to $r$ do
    compute $k_i$ such that $I : f_i^\infty = I : f_i^{k_i} =: \bar Q_i$;
    Result := Result $\cup \{(\bar Q_i, P_i, f_i)\}$;
    $J := \langle J, f_i^{k_i}\rangle$;
– return Result $\cup \{J\}$

Algorithm 15. Extraction($\bar Q$)
Input: a pseudo-primary ideal $\bar Q$ in $K[x_1,\dots,x_n]$, and $P = \sqrt{\bar Q}$
Output: $(Q, J)$ as in Lemma 28

– choose any