Some methods to solve linear differential equations in closed form.

Felix Ulmer
IRMAR, Université de Rennes 1, F-35042 Rennes Cedex
[email protected]

Jacques-Arthur Weil
XLIM, Université de Limoges, 123 avenue Albert Thomas, F-87060 Limoges Cedex
[email protected]

In this article, we review various methods to find closed form solutions of linear differential equations; implementations of these methods can, for example, be found in the symbolic computation package Maple. We focus on the presentation of the methods and not on the underlying mathematical theory, which is differential Galois theory (see [Sin07, vdPS03]). The text contains many examples which have been computed using Maple†. We motivate the search for solutions by an integration problem, and all the algorithms are presented through this integration problem.

0.1 Introduction

0.1.1 Integrating via linear differential equations

In 1832-33, Joseph Liouville publishes two articles on the determination of integrals which are algebraic functions [Lio33]. His goal is to design an algorithm which decides whether the integral of an algebraic function is algebraic (and computes it when it is) or proves that the integral is not algebraic. In this study, we will consider algebraic functions over C(x), i.e. functions which are a root of a polynomial with coefficients in C(x).

Example 1 Consider the function f given by

    f = \frac{1}{4}\,\sqrt{\frac{(x-1)^3\,\bigl(\sqrt{x-1}-1\bigr)}{x-2}}.

It may be defined by its minimal polynomial P(f) := \sum_{i=0}^{n} \phi_i f^i = 0 with \phi_i \in C(x); in our example:

    P(f) = f^4 + \frac{1}{8}\,\frac{(x-1)^3}{x-2}\,f^2 - \frac{1}{256}\,\frac{(x-1)^6}{x-2}.

Liouville shows that if the integral of f is an algebraic function, then f must have a primitive in the algebraic field extension K := C(x, f) defined by the equation P(f) = 0, i.e. K = C(x)[Y]/(P) (we will give a "modern" proof of this in section 0.5). So, if there exists an algebraic primitive of f, then it can be written in a unique way as

    \int f\,dx = \alpha_0 + \alpha_1 f + \alpha_2 f^2 + \dots + \alpha_{n-1} f^{n-1}, \quad \alpha_i \in C(x),    (0.1)

in the basis (1, f, \dots, f^{n-1}) of K/C(x). Formal differentiation of P(f) = 0 shows that the derivative of an algebraic function f is given by

    \frac{df}{dx} = -\,\frac{\sum_{i=0}^{n} \phi_i'\, f^i}{\sum_{i=1}^{n} i\,\phi_i\, f^{i-1}}.

In particular \frac{df}{dx} belongs to the same algebraic extension K = C(x, f).

Differentiating the expression (0.1) of \int f\,dx, the relation (\int f\,dx)' - f = 0 yields a differential system of order one for the \alpha_i. Here, we obtain

    \left(4\,\frac{\alpha_1}{(x-1)^4} + \frac{3}{4}\,\frac{(5x-12)\,\alpha_3}{(x-1)(x-2)} + \alpha_3'\right) f^3
    + \left(\frac{1}{2}\,\frac{(5x-12)\,\alpha_2}{(x-1)(x-2)} + \alpha_2'\right) f^2
    + \left(-1 + \alpha_1' + \frac{5}{4}\,\frac{\alpha_1}{x-1} + \frac{3}{64}\,\frac{(x-1)^2\,\alpha_3}{x-2}\right) f
    + \alpha_0' + \frac{1}{32}\,\frac{(x-1)^2\,\alpha_2}{x-2} = 0.

† A Maple file containing all these computations and illustrating their use can be found at http://www.unilim.fr/pages perso/jacques-arthur.weil/ulmer weil methods to solve diff eqns.mws or http://www.unilim.fr/pages perso/jacques-arthur.weil/ulmer weil methods to solve diff eqns.mpl.

As P is the minimal polynomial of f, all coefficients of the above polynomial expression in f must be zero, which in turn gives us the following linear differential system:

    (S) :  \alpha_0' = -\frac{(x-1)^2}{32\,(x-2)}\,\alpha_2(x)

           \alpha_1' = -\frac{5}{4\,(x-1)}\,\alpha_1(x) - \frac{3}{64}\,\frac{(x-1)^2}{x-2}\,\alpha_3(x) + 1

           \alpha_2' = -\frac{5x-12}{2\,(x-1)(x-2)}\,\alpha_2(x)

           \alpha_3' = -\frac{4}{(x-1)^4}\,\alpha_1(x) - \frac{3}{4}\,\frac{5x-12}{(x-1)(x-2)}\,\alpha_3(x)

This will be our leitmotiv example. We will show in section 0.3.2 that this system admits the following rational solutions in (C(x))^4:

    \alpha_0 = c, \quad
    \alpha_1 = \frac{4\,(35x^3 - 102x^2 + 147x + 48)}{315\,(x-1)^2}, \quad
    \alpha_2 = 0, \quad
    \alpha_3 = -\frac{64\,(x-2)(5x^2 + 6x - 139)}{315\,(x-1)^5},

where c ∈ C is a constant. We infer that:

    \int f\,dx = \frac{4\,(35x^3 - 102x^2 + 147x + 48)}{315\,(x-1)^2}\,f - \frac{64\,(x-2)(5x^2 + 6x - 139)}{315\,(x-1)^5}\,f^3 + c,

or equivalently

    \int f\,dx = \frac{\left(-5x^2 - 6x + 139 + \sqrt{x-1}\,(35x^2 - 62x + 91)\right)\sqrt{-1+\sqrt{x-1}}}{315\,\sqrt{x-2}} + c.

We see that Liouville's method reduces the calculation of an integral to the (simpler) problem of computing rational solutions of a linear differential system. In what follows, we will show how to compute such solutions; elaborating, we will in fact show that the more general problem of solving (in a sense that will be defined) a linear differential equation L(y) = \sum_{i=0}^{n} a_i \frac{d^i y}{dx^i} = 0 (with a_i ∈ C(x)) reduces to computing rational solutions of ancillary linear differential systems.

In the two aforementioned memoirs, Liouville already considers the issue of computing algebraic solutions of L(y) = 0. He notes that the main theoretical problem is to bound the algebraic degree of such a solution. In 1872, H. Schwarz determines the values of the parameters a, b, c of the hypergeometric equation

    H_{a,b,c}(y) = x(x-1)\,y'' + \{c - (1+a+b)x\}\,y' - ab\,y = 0

for which H_{a,b,c}(y) = 0 has an algebraic solution. In a long forgotten first memoir of 1862 (corrected in 1881), P. Pepin shows how to compute algebraic solutions of second order differential equations. This problem is then handled by F. Klein and L. Fuchs (1875), who introduce the use of linear groups and their invariants in this problem. In his 1878 Mémoire sur les équations différentielles linéaires à intégrales algébriques (J. für Math. 49), C. Jordan establishes methods for solving third order equations. Although this does not constitute a full algorithm, his ideas are a foundation of modern methods. Interested readers may consult [Ves15, Mar98, Gra86, Poo60, Sin99] for more historical details. After the works of Vessiot or Marotte [Mar98], this problem seems to wither and get forgotten, probably because the corresponding calculations are infeasible by hand; it resurfaces in 1948 with the work of E. Kolchin on algebraic groups and later with the appearance of computers and of computer algebra. In 1977, J. Kovacic gives an algorithm for solving second order equations; in 1981, M.F. Singer gives a decision procedure for solving linear differential equations of any order. Since then, many improvements have been developed and these have been implemented in computer algebra systems, so that users can now really apply them without being specialists. We will describe the state of some of these improvements in what follows. General references for this are, for example, [Sin99, vdP99, vdPS03] and references therein.

0.1.2 Solutions of Linear Differential Equations

In what follows, we consider a field k (of characteristic 0) equipped with a derivation D (typically, the reader may think of k = C(x) and D = \frac{d}{dx}). The set C = {a ∈ k | D(a) = 0} of constants of k is a subfield of k; for technical commodity, we will assume it to be algebraically closed (in general, C = \mathbb{C} or C = \overline{\mathbb{Q}}). Our aim is to describe some (algorithmic) methods for solving homogeneous ordinary linear differential equations

    L(y) = y^{(n)} - a_{n-1}\,y^{(n-1)} - \dots - a_1\,y' - a_0\,y = 0 \qquad (a_i \in k).    (0.2)

The space of solutions of such an equation is a vector space over C of dimension at most n ([Inc26, Poo60]). To "solve" such an equation, we will construct n functions (in general in some field extension of the field of coefficients), linearly independent over C, which satisfy L(y) = 0. Similarly, to solve a non-homogeneous equation L(y) = b, one will construct a particular solution and adjoin a basis (a fundamental system) of solutions of L(y) = 0. To continue with our leitmotiv example of the integration of an algebraic function, we will first show that solving linear differential equations is equivalent to solving linear differential systems.

0.2 Linear differential equations versus linear differential systems

The transformation of a linear differential equation L(y) = b into a first order linear differential system in companion form

    \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_{n-1} \\ y_n \end{pmatrix}'
    =
    \begin{pmatrix}
      0 & 1 & 0 & \cdots & 0 \\
      0 & 0 & 1 & \ddots & \vdots \\
      \vdots & & \ddots & \ddots & 0 \\
      0 & \cdots & \cdots & 0 & 1 \\
      a_0 & a_1 & \cdots & a_{n-2} & a_{n-1}
    \end{pmatrix}
    \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_{n-1} \\ y_n \end{pmatrix}
    +
    \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ b \end{pmatrix}

is well known. It is then possible to go from the solution space of one to the solution space of the other. It is however also possible to associate a linear equation L(y) = b to a given arbitrary first order linear differential system Y' = AY + B, where A ∈ M_n(k), B ∈ k^n.
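Building the companion system from a scalar equation is purely mechanical. The following sketch (plain Python, not the article's Maple code; the function name is ours) assembles A and B from the coefficients a_i of equation (0.2); the entries may be numbers or symbolic objects.

```python
def companion_system(a, b):
    """Companion form of y^(n) = a[0]*y + a[1]*y' + ... + a[n-1]*y^(n-1) + b.

    Returns (A, B) with Y' = A Y + B for Y = (y, y', ..., y^(n-1))^t.
    """
    n = len(a)
    A = [[0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = 1          # row i encodes (y^(i))' = y^(i+1)
    A[n - 1] = list(a)           # last row encodes the equation itself
    B = [0] * (n - 1) + [b]
    return A, B
```

For instance, y'' = 2y + 3y' + b yields the 2x2 companion matrix with bottom row (2, 3), exactly the shape displayed above.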

Example 2 In the system (S) obtained from our integration example, we can set z = α_0 + α_3. Taking derivatives (and using the relations given by (S) between the α_i and their derivatives), it is possible to express the derivatives z^{(i)} as linear combinations of the four unknown functions α_j (j = 0, \dots, 3): z^{(4)} must therefore depend linearly on z, \dots, z^{(3)}, so that z satisfies the inhomogeneous linear differential equation L(z) = b, where

    L(z) = \frac{45\,(146965x^4 - 1342369x^3 + 4634709x^2 - 7115419x + 4081618)}{32\,(x-2)(x-1)^3\,(5525x^3 - 37260x^2 + 84801x - 64586)}\; z'
         + \frac{5\,(927095x^4 - 8377567x^3 + 28568871x^2 - 43330157x + 24580270)}{16\,(x-2)(x-1)^2\,(5525x^3 - 37260x^2 + 84801x - 64586)}\; z''
         + \frac{160225x^4 - 1425785x^3 + 4781433x^2 - 7135291x + 3988058}{2\,(x-2)(x-1)\,(5525x^3 - 37260x^2 + 84801x - 64586)}\; z^{(3)}
         + z^{(4)},

    b = -6\,\frac{11050x^4 - 90555x^3 + 284742x^2 - 412627x + 230430}{(5525x^3 - 37260x^2 + 84801x - 64586)\,(x-1)^6\,(x-2)}.

To each solution z of this equation, one can associate the following solution of the first order differential system (S), where we abbreviate Q(x) = 5525x^3 - 37260x^2 + 84801x - 64586:

    (α) :

    \alpha_0(x) = z - \frac{8\,(275x^2-1213x+1322)(x-1)(x-2)}{Q(x)}\, z'
                - \frac{16\,(375x^2-1633x+1762)(x-1)^2(x-2)}{3\,Q(x)}\, z''
                - \frac{64\,(5x-11)(x-1)^3(x-2)^2}{3\,Q(x)}\, z^{(3)}
                + \frac{128\,x\,(145x^2-611x+658)}{3\,Q(x)\,(x-1)^2}

    \alpha_1(x) = - \frac{(1235x^3-9219x^2+22284x-17564)(x-1)^4}{2\,Q(x)}\, z'
                - \frac{(1885x^3-13749x^2+32644x-25364)(x-1)^5}{3\,Q(x)}\, z''
                - \frac{3\,(65x^2-323x+390)(x-2)(x-1)^6}{4\,Q(x)}\, z^{(3)}
                + \frac{3\,(1495x^3-10197x^2+23166x-17536)(x-1)}{4\,Q(x)}

    \alpha_2(x) = -\frac{32\,(11305x^3-75012x^2+167229x-124642)(x-2)}{(x-1)^2\,Q(x)}\, z'
                - \frac{256\,(1870x^3-12123x^2+26287x-19034)(x-2)}{3\,(x-1)\,Q(x)}\, z''
                - \frac{512\,(85x^2-367x+402)(x-2)^2}{3\,Q(x)}\, z^{(3)}
                + \frac{2048\,(x-2)(170x^3-1047x^2+2175x-1538)}{3\,Q(x)\,(x-1)^5}

    \alpha_3(x) = \frac{8\,(275x^2-1213x+1322)(x-1)(x-2)}{Q(x)}\, z'
                + \frac{16\,(375x^2-1633x+1762)(x-1)^2(x-2)}{3\,Q(x)}\, z''
                + \frac{64\,(5x-11)(x-1)^3(x-2)^2}{3\,Q(x)}\, z^{(3)}
                - \frac{128\,x\,(145x^2-611x+658)}{3\,Q(x)\,(x-1)^2}

This shows that solving the first order differential system can be reduced to solving the linear differential equation L(z) = b, and the above relation gives a bijection between the two solution spaces. This approach can be turned into an algorithm, as we shall see next.

0.2.1 Equivalent differential systems

In order to attach a linear differential equation to a given first order differential system Y' = AY + B, where A ∈ M_n(k) and B ∈ k^n, the idea is to transform the system into companion form using a basis transformation Z = PY + β (where P ∈ GL_n(k), β ∈ k^n). Such a transformation is also called a gauge transformation. Such a basis transformation produces a new system

    Z' = PY' + P'Y + β' = P[A]\,Z + (PB + β' - P[A]\,β)    (0.3)

where P[A] = P A P^{-1} + P' P^{-1}.

Two systems Y' = AY + B and Z' = \tilde{A}Z + \tilde{B} are said to be equivalent, or of the same type, if there exist an invertible matrix P ∈ GL_n(k) and β ∈ k^n such that \tilde{A} = P[A] := P A P^{-1} + P' P^{-1} and \tilde{B} = PB + β' - P[A]β. In this case the relation Z = PY + β is a bijection between the two solution spaces.

0.2.2 From systems to scalar equations: the cyclic vector approach

In order to transform the system into companion form using a basis transformation, we consider an arbitrary solution (y_1, y_2, \dots, y_n)^t of Y' = AY + B and use the Ansatz z_1 = λ_1 y_1 + \dots + λ_n y_n. Computing successively z_2 = z_1', \dots, z_{n+1} = z_1^{(n)} using the relation Y' = AY + B, we obtain n + 1 relations between the n variables y_i. Therefore the variables z_i must be linearly dependent over k, showing that z_1 is a solution of an inhomogeneous linear differential equation L(z_1) = \sum_{i=0}^{n} b_i z_1^{(i)} = b. In this way we obtain Z = PY + β and Z' = \tilde{A}Z + \tilde{B}. If the matrix P is invertible, then (λ_1, \dots, λ_n)^t is called a cyclic vector for the initial system. In the previous section, our choice corresponds to (1, 0, 0, 1)^t, which indeed turned out to be a cyclic vector. However, (1, 0, 0, 0)^t or (0, 1, 0, 0)^t are examples of vectors that are not cyclic for our system.

Since z_2 = z_1', \dots, z_{n+1} = z_1^{(n)}, the resulting system in Z is in companion form and z_1 satisfies a linear differential equation L(z_1) = \sum_{i=0}^{n} b_i z_1^{(i)} = b. It can be proved (e.g. [Ram85, vdPS03]) that almost any choice of (λ_1, \dots, λ_n)^t produces a cyclic vector ("almost" means up to an algebraic sub-variety). This justifies the probabilistic approach consisting in trying random vectors in order to find a cyclic vector (in fact, non-cyclic vectors are rare but very useful, because they provide a factorization of the differential system). Some of the algorithms that we will present in the remaining sections are easier to explain for linear differential equations than for first order systems, making the cyclic vector approach a useful tool. A drawback of the transformation is the possible swell of the coefficients of the resulting equation. This motivated the search for algorithms that are directly applicable to first order systems, therefore avoiding the possibly costly transformation ([Bar99] and the references therein).
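In the special case of a constant matrix A (so that no derivatives of the entries appear), testing whether a candidate λ is cyclic is a pure rank computation: z^{(j)} = ((A^t)^j λ) · Y, so λ is cyclic exactly when λ, A^tλ, ..., (A^t)^{n-1}λ are independent. The sketch below (plain Python with exact rationals; an illustration of this special case only, not the general k = C(x) algorithm) makes this concrete.

```python
from fractions import Fraction

def is_cyclic(A, lam):
    """Decide whether lam is a cyclic vector of Y' = A*Y for a CONSTANT matrix A.

    Since z = lam . Y gives z^(j) = ((A^t)^j lam) . Y, cyclicity is the rank
    condition on the Krylov-type matrix with rows lam, A^t lam, (A^t)^2 lam, ...
    (Over k = C(x) one would also differentiate lam at each step.)
    """
    n = len(A)
    rows, v = [], [Fraction(c) for c in lam]
    for _ in range(n):
        rows.append(v)
        v = [sum(Fraction(A[i][j]) * v[i] for i in range(n)) for j in range(n)]  # A^t v
    # rank via Gaussian elimination over Q
    rank, M = 0, [r[:] for r in rows]
    for c in range(n):
        p = next((i for i in range(rank, n) if M[i][c] != 0), None)
        if p is None:
            continue
        M[rank], M[p] = M[p], M[rank]
        for i in range(n):
            if i != rank and M[i][c] != 0:
                f = M[i][c] / M[rank][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[rank])]
        rank += 1
    return rank == n
```

For a companion matrix, the first coordinate vector is always cyclic; for the identity matrix no vector is cyclic (when n > 1), mirroring the remark that non-cyclic vectors reveal structure in the system.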

0.3 Rational solutions

A rational solution is a solution in the ground field, i.e. the field k to which the coefficients belong. In the following we will focus on the classical case k = C(x) (where C = \mathbb{C}, \overline{\mathbb{Q}}, \dots). Our goal in this section is to find a basis of those solutions of L(y) = \sum_{i=0}^{n} a_i(x)\,y^{(i)} = 0 (with polynomial a_i(x) ∈ C[x]) that belong to C(x). To achieve this, we will use local information given by the Laurent series expansion of solutions at some point c. If we write

    a_i(x) = a_{i,0}\,(x-c)^{\beta_i} + a_{i,1}\,(x-c)^{\beta_i+1} + \dots

and evaluate L(y) at a Laurent series expansion of a solution

    \frac{p(x)}{d(x)} = b_0\,(x-c)^{\lambda} + b_1\,(x-c)^{\lambda+1} + \dots,

then the coefficient with the lowest valuation γ = \min_i\{λ + β_i - i\} must be zero. This shows that the order λ of the series must be a root of the indicial equation

    \mathrm{ind}_c(λ) = \sum_{\{i \,\mid\, λ+β_i-i=γ\}} λ(λ-1)\cdots(λ-(i-1))\,a_{i,0} = 0.

A root of the indicial equation is called an exponent of the linear differential equation at the point c, and the above shows that there are at most n exponents at each point c. A point c ∈ C is a regular point if the exponents at this point are 0, 1, \dots, n-1 (and there is a basis of analytic solutions); it is a singular point otherwise. It is easily checked that the singular points are zeros of the highest coefficient a_n(x) (and, possibly, ∞ if we work over the Riemann sphere).
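The indicial equation can be computed directly from the valuations β_i and leading coefficients a_{i,0}. The sketch below (plain Python, coefficient lists over the integers, restricted to the point c = 0 for simplicity; the function name is ours, not from the article) mirrors the formula above.

```python
from math import prod

def indicial_poly_at_0(coeffs):
    """coeffs[i] is the coefficient polynomial a_i(x) of y^(i), given as a
    list [c0, c1, ...] of coefficients in x (lowest degree first).
    Returns a function lam -> ind_0(lam), the indicial polynomial at x = 0."""
    data = []
    for i, p in enumerate(coeffs):
        if not any(p):
            continue                                     # a_i identically zero
        beta = next(j for j, c in enumerate(p) if c != 0)  # valuation of a_i at 0
        data.append((i, beta, p[beta]))                    # a_{i,0} = leading Laurent coeff
    gamma = min(beta - i for i, beta, _ in data)           # lowest possible order gamma - lam
    def ind(lam):
        return sum(a0 * prod(lam - j for j in range(i))    # lam(lam-1)...(lam-(i-1))
                   for i, beta, a0 in data if beta - i == gamma)
    return ind
```

For the Euler equation x^2 y'' + x y' - y = 0 this returns ind_0(λ) = λ(λ-1) + λ - 1 = λ^2 - 1, with exponents ±1, so x = 0 is a singular point.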

0.3.1 Computation of the rational solutions

The following method appears already in [Lio33]. For k = C(x), a rational solution is of the form \frac{p(x)}{d(x)}. We start by computing the denominator d(x) of the fraction. Since the roots of d(x) are all singular points, those roots must also be roots of a_n(x). This leads to the ansatz

    \frac{p(x)}{d(x)} = \frac{p(x)}{\prod_{i=1}^{s}(x-c_i)^{-\lambda_i}} = p(x)\,\prod_{i=1}^{s}(x-c_i)^{\lambda_i}

where the c_i ∈ C are zeroes of a_n(x) and the λ_i ∈ Z are exponents of the linear differential equation. A necessary condition for the existence of a rational solution is that at each singular point there exists an exponent in Z. Instead of trying all the possible combinations of exponents, we choose for λ_i the smallest exponent in Z at c_i. Our rational solution will be of the form

    \frac{p(x)}{d(x)} = \frac{p(x)}{\prod_{j=1}^{s}(x-c_j)^{-\lambda_j}} = p(x)\,\prod_{j=1}^{s}(x-c_j)^{\lambda_j}

with known denominator (the other possibilities for the exponents can be put into this form by multiplying the numerator and the denominator by a common factor).

Substitution of this expression into L(y) = 0 leads to a new linear differential equation for the yet unknown numerator p(x):

    \tilde{L}(p(x)) = L\!\left(p(x)\,\prod_{j=1}^{s}(x-c_j)^{\lambda_j}\right)
                    = \sum_{i=0}^{n} a_i(x)\left(\frac{p(x)}{\prod_{j=1}^{s}(x-c_j)^{-\lambda_j}}\right)^{(i)}
                    = \sum_{i=0}^{n} \tilde{a}_i(x)\,(p(x))^{(i)}.

We can clear denominators in order to obtain again an equation \tilde{\tilde{L}}(y) = \sum_{i=0}^{n} \tilde{\tilde{a}}_i(x)\,y^{(i)} with \tilde{\tilde{a}}_i(x) ∈ C[x]. As we search for a polynomial solution, we may look at the action of \tilde{\tilde{L}} on a polynomial p(x) = \sum_{i=0}^{N} γ_i x^i with unknown coefficients, using

    \tilde{a}_i(x) = \left(\frac{1}{x}\right)^{\beta_i}\left(a_{i,\infty,0} + O\!\left(\frac{1}{x}\right)\right).

Since we must have \tilde{\tilde{L}}(p(x)) = 0, the coefficient of the highest term must be zero. As for the indicial equation above, this leads to a polynomial

    \mathrm{ind}_\infty(λ) = \sum_{\{i \,\mid\, λ+β_i-i=γ\}} λ(λ-1)\cdots(λ-(i-1))\,a_{i,\infty,0} = 0.

The possible degrees N of p(x) are the positive integer roots of this polynomial. Again it is sufficient to consider the maximal positive integer root N, which leads to the ansatz p(x) = \sum_{i=0}^{N} γ_i x^i with known degree N.

From \tilde{\tilde{L}}(\sum_{i=0}^{N} γ_i x^i) = 0 we obtain a homogeneous linear system for the N + 1 unknowns γ_i. Solving this linear system will produce a C-basis {p_1(x), \dots, p_m(x)} of the polynomial solutions of \tilde{\tilde{L}}(y) = 0, and therefore a C-basis of rational solutions of L(y) = 0 of the form

    \left\{ \frac{p_1(x)}{\prod_{i=1}^{s}(x-c_i)^{-\lambda_i}}, \dots, \frac{p_m(x)}{\prod_{i=1}^{s}(x-c_i)^{-\lambda_i}} \right\}.

The computational bottleneck in this algorithm resides in the way this last linear system is handled. Algorithms exploiting the structure of this system are presented in [AK91, ABP95, Bar99], and an optimal (up to date) version is given in [BCS05]. Note that the calculation of the degree N can be understood in a slightly more conceptual way as follows. We look for a solution p(x) = x^N + p_{N-1}x^{N-1} + \dots = (\frac{1}{x})^{-N}(1 + p_{N-1}\frac{1}{x} + O(\frac{1}{x})), so we see that we have a pole of order N at infinity; hence -N must be an exponent at infinity, i.e. a root of the indicial equation at infinity. Exponents are easier to compute for linear differential equations than for first order systems. However, it is also possible to compute the rational solutions of Y' = AY directly ([Bar99] and references therein) without converting the system to a linear differential equation via a cyclic vector computation.
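The final linear-algebra step can be made concrete: apply L to each monomial x^j, collect the coefficient of every power of x, and compute the nullspace. The sketch below (plain Python with exact rationals; a toy version of the undetermined-coefficients step, not the optimized methods of [AK91, ABP95, BCS05]) returns the polynomial solutions of degree at most N.

```python
from fractions import Fraction

def diffp(p):
    return [Fraction(k) * c for k, c in enumerate(p)][1:]

def mulp(p, q):
    if not p or not q:
        return []
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += Fraction(a) * Fraction(b)
    return r

def addp(p, q):
    n = max(len(p), len(q))
    get = lambda v, i: v[i] if i < len(v) else Fraction(0)
    return [get(p, i) + get(q, i) for i in range(n)]

def apply_op(a, p):
    """L(p) for L = sum_i a[i](x) D^i; all polynomials are coefficient lists."""
    out, d = [], [Fraction(c) for c in p]
    for ai in a:
        out = addp(out, mulp(ai, d))
        d = diffp(d)
    return out

def poly_solutions(a, N):
    """C-basis of polynomial solutions of L(y) = 0 of degree <= N."""
    cols = [apply_op(a, [0] * j + [1]) for j in range(N + 1)]  # L(x^j)
    rows = max((len(c) for c in cols), default=1)
    M = [[(c[k] if k < len(c) else Fraction(0)) for c in cols] for k in range(rows)]
    piv, r = [], 0
    for c in range(N + 1):                      # reduced row echelon form
        pr = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        M[r] = [v / M[r][c] for v in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [u - M[i][c] * v for u, v in zip(M[i], M[r])]
        piv.append(c)
        r += 1
    basis = []
    for c in range(N + 1):                      # one basis vector per free column
        if c in piv:
            continue
        v = [Fraction(0)] * (N + 1)
        v[c] = Fraction(1)
        for i, pc in enumerate(piv):
            v[pc] = -M[i][c]
        basis.append(v)
    return basis
```

For example, (1-x^2) y'' - 2x y' + 6y = 0 has the one-dimensional space of polynomial solutions spanned by 3x^2 - 1 (up to a constant), which this sketch recovers.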

0.3.2 Non homogeneous equations and solution of the leitmotiv example

The previous method can be adapted to non homogeneous equations L(y) = b (cf. [Bro92] and references therein), but instead we will transform the inhomogeneous equation into a homogeneous equation of higher order:

    \bar{L}(y) = b\,(L(y))' - b'\,L(y) = 0.

Note that any rational solution of the first will also be a rational solution of the second. In our leitmotiv example, assuming Liouville's theorem, we have reduced the integration problem of an algebraic integral to the problem of finding the rational solutions of the inhomogeneous equation L(y) = b. Using the above trick we are left with the following homogeneous equation:

    \bar{L}(y) = \frac{45\,(440895x^5 - 4528916x^4 + 19124660x^3 - 41917722x^2 + 47163053x - 21498482)}{16\,(x-2)\,(11050x^4 - 90555x^3 + 284742x^2 - 412627x + 230430)\,(x-1)^4}\; y'
              + \frac{5\,(17478890x^5 - 180447867x^4 + 761665199x^3 - 1654880375x^2 + 1838740539x - 827751650)}{32\,(x-1)^3\,(x-2)\,(11050x^4 - 90555x^3 + 284742x^2 - 412627x + 230430)}\; y''
              + \frac{22088950x^5 - 228088635x^4 + 958764595x^3 - 2062761247x^2 + 2264278659x - 1007458642}{16\,(x-1)^2\,(x-2)\,(11050x^4 - 90555x^3 + 284742x^2 - 412627x + 230430)}\; y^{(3)}
              + \frac{453050x^5 - 4656315x^4 + 19428215x^3 - 41370911x^2 + 44911683x - 19779482}{2\,(x-1)\,(x-2)\,(11050x^4 - 90555x^3 + 284742x^2 - 412627x + 230430)}\; y^{(4)}
              + y^{(5)} = 0
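The homogenisation \bar{L}(y) = b (L(y))' - b' L(y) is mechanical on coefficient lists: since (L(y))' has i-th coefficient a_{i-1} + a_i', the new coefficients are b (a_{i-1} + a_i') - b' a_i. A minimal sketch (plain Python over Q[x], assuming polynomial coefficients and right hand side; an illustration of the trick, not the article's code):

```python
from fractions import Fraction

def pdiff(p):
    return [Fraction(k) * c for k, c in enumerate(p)][1:]

def pmul(p, q):
    if not p or not q:
        return []
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, c in enumerate(q):
            r[i + j] += Fraction(a) * Fraction(c)
    return r

def padd(p, q):
    n = max(len(p), len(q))
    get = lambda v, i: v[i] if i < len(v) else Fraction(0)
    return [get(p, i) + get(q, i) for i in range(n)]

def psub(p, q):
    return padd(p, [-Fraction(c) for c in q])

def homogenize(a, b):
    """Coefficients of bar(L)(y) = b*(L(y))' - b'*L(y), from those of L and b.
    Every solution of L(y) = b (and of L(y) = 0) solves bar(L)(y) = 0."""
    db = pdiff(b)
    bar = []
    for i in range(len(a) + 1):
        prev = a[i - 1] if i >= 1 else []            # a_{i-1}
        ai = a[i] if i < len(a) else []              # a_i
        dol_i = padd(prev, pdiff(ai) if ai else [])  # i-th coefficient of (L(y))'
        bar.append(psub(pmul(b, dol_i), pmul(db, ai)))
    return bar
```

For L(y) = y' and b = x this produces x y'' - y' = 0; indeed y = x^2/2 solves y' = x and also x y'' - y' = x - x = 0.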

The expressions look more and more terrifying, but they are easily handled by a computer. The (finite) singular points are c_1 = 2, c_2 = 1, and the roots c_i (i > 2) of the irreducible equation

    x^4 - \frac{18111}{2210}\,x^3 + \frac{142371}{5525}\,x^2 - \frac{412627}{11050}\,x + \frac{23043}{1105} = 0.

Computing the exponents (for example using the command gen_exp in Maple), we find (0, 1, 2, 3, \frac{3}{2}) at x = 2, (-5, -2, 0, -\frac{1}{2}, -\frac{9}{2}) at x = 1, and (0, 1, 2, 3, 5) at the c_i (for i > 2). A rational solution of the homogeneous equation must be of the form p(x)\,(x-2)^0\,(x-1)^{-5}\,\prod_{i>2}(x-c_i)^0, or simply \frac{p(x)}{(x-1)^5}. We compute the differential equation \tilde{L} for p with \tilde{L}(p(x)) = L\!\left(\frac{p(x)}{(x-1)^5}\right). According to our method, the degree N of p is such that N ∈ {3, 5}. The solution p(x) we are looking for must be of the form

    p(x) = γ_0 + γ_1 x + γ_2 x^2 + γ_3 x^3 + γ_4 x^4 + γ_5 x^5.

              

Plugging this ansatz into the relation \tilde{L}(p(x)) = 0, we obtain the following linear system:

    \begin{pmatrix}
    0 & 0 & -1125 & -900 & 53959 & 267545 \\
    0 & 0 & 0 & 0 & -1 & -5 \\
    22732650 & 51866405 & 239145235 & 493746279 & 463316707 & -466026280 \\
    11443680 & 16808410 & -44942425 & -164608566 & -313297728 & -442425600 \\
    0 & 0 & 0 & 0 & 1 & 5 \\
    0 & 61880 & 3442015 & 4622388 & -39011582 & -207171040 \\
    1868670 & 13070140 & 162697775 & 420978396 & 691480368 & 811113600 \\
    -3729375 & -8503865 & -47608495 & -87550269 & 34987468 & 613145030 \\
    -56964465 & -140390815 & -733512410 & -1659388287 & -2308356996 & -1638036600
    \end{pmatrix}
    \begin{pmatrix} γ_0 \\ γ_1 \\ γ_2 \\ γ_3 \\ γ_4 \\ γ_5 \end{pmatrix} = 0.

Solving the system produces the following rational solutions of our linear differential equation:

    y = γ_2\,\frac{-\frac{5}{4}x^3 + x^2 + \frac{151}{4}x - \frac{139}{2}}{(x-1)^5}
      + γ_5\,\frac{x^5 - 5x^4 - \frac{5}{2}x^3 + \frac{765}{2}x - 696}{(x-1)^5},

where γ_2 and γ_5 are arbitrary constants in C. Note that {γ_5 = 1, γ_2 = -10} corresponds to the obvious solution y = 1. In order to obtain the rational solutions of our inhomogeneous equation, we simply substitute y into this inhomogeneous equation and obtain

    z = -\frac{64\,(x-2)(5x^2+6x-139)}{315\,(x-1)^5} + γ,

where γ is an arbitrary constant. Now, to solve the integration problem of our leitmotiv example, we substitute this solution into the relations (α) of Example 2. The resulting solution of the system (S) is the one given in the introduction, and we recover the integral computed there.

0.4 Factorization and Reduction of Order of a Differential Equation

One way to simplify the solving of L(y) = 0 is to find a factorization in the form of a composition L(y) = L_1(L_2(y)) of operators. Solutions of L_2(y) = 0 would then be solutions of L, and we would have reduced the order of the equation that we want to solve. However, this factorization is not unique. This factorization can be given an algebraic (and computable) meaning in an appropriate setting.

0.4.1 Differential operators

First let us write L as a differential operator

    L(y) = (a_n(x)\,D^n + \dots + a_1(x)\,D + a_0(x))\,(y),

where the symbol D represents \frac{d}{dx}. To endow the set of differential operators with a ring structure (of non-commutative polynomials, or Ore polynomials), we note that

    (Dx)(y) = \frac{d}{dx}(xy) = x\,\frac{d}{dx}(y) + y = (xD + 1)(y).

Using this rule Dx = xD + 1, one can show that the set D := k[D] of differential polynomials, with the usual addition and the above multiplication rule (for any a ∈ k, D·a := aD + a'), is a non-commutative ring. As hinted above, multiplication in this ring corresponds to the composition of differential operators. Furthermore, one can show that D is a left (resp. right) Euclidean ring, which means that one has notions of computable gcds (left or right), least common (left or right) multiples, etc.
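The commutation rule extends to higher powers via Leibniz: D^i · a = Σ_t C(i,t) a^{(t)} D^{i-t}. A small sketch of this non-commutative multiplication (plain Python over Q[x]; an illustration only, far less general than real implementations such as Maple's DEtools):

```python
from fractions import Fraction
from math import comb

# An operator sum_i p_i(x) D^i is a list of polynomials [p_0, p_1, ...],
# each polynomial itself a list of rational coefficients in x (low degree first).

def pdiff(p):
    return [Fraction(k) * c for k, c in enumerate(p)][1:]

def pmul(p, q):
    if not p or not q:
        return []
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, c in enumerate(q):
            r[i + j] += Fraction(a) * Fraction(c)
    return r

def padd(p, q):
    n = max(len(p), len(q))
    get = lambda v, i: v[i] if i < len(v) else Fraction(0)
    return [get(p, i) + get(q, i) for i in range(n)]

def ore_mul(L, M):
    """Operator product L*M in k[D], using D^i a = sum_t C(i,t) a^(t) D^(i-t)."""
    res = {}
    for i, p in enumerate(L):
        for j, q in enumerate(M):
            qt = q                                   # successively q, q', q'', ...
            for t in range(i + 1):
                term = pmul(p, [Fraction(comb(i, t)) * c for c in qt])
                res[i - t + j] = padd(res.get(i - t + j, []), term)
                qt = pdiff(qt)
    deg = max(res) if res else 0
    return [res.get(d, []) for d in range(deg + 1)]
```

This reproduces the defining relation: multiplying D by (multiplication by) x gives 1 + xD, and D^2 · x gives 2D + xD^2.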

0.4.2 Factorization of L

There exist several factoring algorithms; we will display them on our leitmotiv example.

Example 3 In our leitmotiv example, we remark that D is a factor on the right of our operator L. Using one of the available factoring algorithms (e.g. the command DFactor in Maple), we have a "complete" factorization of L in the form

    L = (D + f_1)\,(D + f_2)\,L_{2,1}\,D,    with f_1, f_2 ∈ C(x),

where†

    L_{2,1} = D^2 + \frac{1870x^3 - 12123x^2 + 26287x - 19034}{2\,(85x^2-367x+402)(x-2)(x-1)}\,D
            + \frac{3\,(11305x^3 - 75012x^2 + 167229x - 124642)}{16\,(x-2)(x-1)^2\,(85x^2-367x+402)}.

When L = M · N, suppose that we know a solution y of N(y) = 0; then L(y) = M(N(y)) = M(0) = 0, so solutions of N are solutions of L. There is a kind of converse to this result: if N is an irreducible operator of order lower than the order of L, and if there exists y such that both N(y) = 0 and L(y) = 0, then N is a right factor of L. This is a consequence of the facts that D is left Euclidean and that an operator of some order r possesses at most r linearly independent solutions.

We will now show that L admits many different factorizations.

Example 4 In our example, we observe many other factorizations; they can be computed via the eigenring method introduced by Singer (see [Sin96, vH96, BP98]) using the commands DFactorLCLM or eigenring in Maple. For example,

    L = (D + \tilde{f}_1)\,(D + \tilde{f}_2)\,(D + \tilde{f}_3)\,\left(D^2 + \frac{3\,(6x-13)}{2\,(x^2-3x+2)}\,D + \frac{3\,(85x^2-367x+402)}{16\,(x^2-3x+2)^2}\right),

† The full expressions of f_1 and f_2 are heavy and uninteresting here; however, the reader is encouraged to check them with her/his favorite computer algebra system.

with \tilde{f}_i ∈ k. One can show (and we verify it on our examples) that the number of factors (and their orders, i.e. their degrees in D) is unique (up to "isomorphisms" and permutation of factors)†. But there can still be infinitely many factorizations. In order to exhibit infinitely many factorizations, recall that in part 0.3.2 we had found a basis of the rational solutions of L(y) = 0:

    y = γ_2\,\frac{-\frac{5}{4}x^3 + x^2 + \frac{151}{4}x - \frac{139}{2}}{(x-1)^5}
      + γ_5\,\frac{x^5 - 5x^4 - \frac{5}{2}x^3 + \frac{765}{2}x - 696}{(x-1)^5}.

If we denote by f = \frac{y'}{y} the logarithmic derivative of such a solution, then D - f is a right-hand factor of L (because (D - f)(y) = 0). Now f depends on two parameters (arbitrary constants), so we have infinitely many right-hand factors, which will be of the form

    D - \frac{(10x^3 + 3x^2 - 612x + 1239)\,(γ_2 + 10\,γ_5)}{\bigl((4x^5 - 20x^4 - 10x^3 + 1530x - 2784)\,γ_5 - (x-2)(5x^2+6x-139)\,γ_2\bigr)\,(x-1)}.

0.4.3 Decomposition of L

We have seen that finding right-hand factors amounts to finding distinguished subspaces of the solution space of L. To study solutions of L, one may ask for something stronger than a factorization of L, namely a decomposition of L as the least common left multiple (LCLM) of lower-order operators L_i. When this is the case, the solution space of L is the direct sum (as a vector space) of the solution spaces of the L_i. This simplifies greatly the operation of solving L, of course; it is roughly what the eigenring method (mentioned above) computes (when possible). We may sketch this method as follows. The eigenring E(L) consists of operators R, of order less than that of L, such that there exists an operator S with L·R = S·L. We then have that, if L(y) = 0, then L(R(y)) = S(L(y)) = S(0) = 0, so R induces an endomorphism of the solution space V(L). For an eigenvalue λ of this endomorphism R, there exists a y ∈ V(L) (i.e. a solution of L) such that (R - λ)(y) = 0. This means that L and (R - λ) have a solution in common, so the right gcd of L and R - λ is a non-trivial factor. The decomposition of V(L) as a direct sum of characteristic subspaces of R induces a decomposition of L as the LCLM of the corresponding factors. This method is developed in [Sin96, vH96, BP98].

0.4.4 Exponential Solutions

We may give an idea of how to factor by focusing on first-order factors (in fact, the search for factors of higher order may be reduced to this question; see [Sin99, vdPS03] for more details and references). A first order factor is of the form D - f with f ∈ k. The operator L admits such a factor on the right if and only if the equation L(y) = 0 admits a solution y such that y' = f y. Because such a y is an exponential (of an integral of f), one calls such a solution an exponential solution of L(y) = 0. These solutions will also prove useful in part 0.5.

The algorithm for computing such solutions is roughly similar (though quite longer in practice) to the algorithm for computing rational solutions: we first compute what could be the behavior at the singularities and then search for a polynomial that would "glue together" these local informations. For our leitmotiv equation L(y) = 0, one can show ([Sin81, Sin99, SU95], as our equation is Fuchsian) that any exponential solution of our equation will be of the form y = \prod_{j=1}^{s}(x-c_j)^{λ_j}\,p(x), where p is a polynomial, the c_j are the singularities, and the λ_j are exponents (not necessarily integers, as opposed to what happened with rational solutions) at the singularities c_j. For each combination of the λ_j, we set y = \prod_{j=1}^{s}(x-c_j)^{λ_j}\,z, we compute the linear differential equation satisfied by z, and look whether it admits polynomial solutions. This is hence similar to the situation for rational solutions; however, this is much more costly, because many combinations of the λ_j (their number is exponential in the number of singularities) have to be checked.

In part 0.3.2, we had already found two rational solutions of L(y) = 0. Studying the exponents at the singularities which we had already computed, we see that there remain three types of putative exponential solutions:

    \frac{1}{(x-1)^5}\,p_1(x), \qquad \frac{1}{(x-1)^{1/2}}\,p_2(x), \qquad \frac{(x-2)^{3/2}}{(x-1)^{1/2}}\,p_3(x).

We have found a basis of two possibilities for the polynomial p_1, which gave us a vector space of rational solutions of dimension 2. For the third possibility, calculation shows that no polynomial p_3 fits. For the second possibility, calculation again shows that p_2 = 1 is the only polynomial solution, up to multiplication by a constant, and we obtain the exponential solution y_3 = \frac{1}{\sqrt{x-1}}; the corresponding right hand factor of L is D + \frac{1}{2(x-1)}. You could also retrieve this result using the command expsols in Maple. A more sophisticated algorithm for exponential solutions, using a smart mix of local informations and reductions modulo primes, is given in [CvH04].

† it is a consequence of the Jordan-Hölder theorem for D-modules.

0.5 Liouvillian solutions

Since our leitmotiv example L(y) = 0 is of order five, we need to compute a basis of solutions of dimension five over C in order to obtain all solutions. We have already computed a two dimensional space of rational solutions and one exponential solution. In order to find the remaining two solutions, we will have to look for solutions in a larger class of functions. Those two solutions will be solutions of the second order left factor of the second factorization of L computed in the previous section:

    L_2(y) = y''(x) + \frac{3\,(6x-13)}{2\,(x-2)(x-1)}\,y'(x) + \frac{3\,(85x^2-367x+402)}{16\,(x-2)^2(x-1)^2}\,y(x).

We will now turn to a more general class of solutions, consisting of those functions that can be written using the symbols \int, e^{\int} and algebraic functions. The so-called Liouvillian solutions are formally defined in the following way.

Definition 1 A function is Liouvillian over a (differential) field k if the function belongs to a (differential) field extension K of k where k = K_0 ⊂ K_1 ⊂ \dots ⊂ K_N = K such that for i ∈ {0, \dots, N-1} we have K_{i+1} = K_i(t_i), where

(i) t_i is algebraic over K_i (algebraic extension), or
(ii) t_i' ∈ K_i (extension by an integral, here t_i = \int u, where u ∈ K_i), or
(iii) t_i'/t_i ∈ K_i (extension by the exponential of an integral).

The following function is Liouvillian over C(x):

    \frac{\sqrt{x}\,\exp\left(\int \sqrt{x+1}\,dx\right)}{\int \exp(x^2)\,dx}.

0.5.1 Differential Galois theory and algorithms In classical Galois theory one shows the existence of a splitting field of a polynomial which is the smallest field extension containing all the roots. The classical Galois group sends a root into another root and therefore permutes the roots. Properties of the polynomial and its roots are mirrorred in the structure of classical Galois group. For exemple the polynomial is irreducible over the coefficient field if and only

if the permutation action on the roots is transitive, and the polynomial is solvable by radicals if and only if the group is solvable. A similar theory exists for linear differential equations as a result of the work of Picard, Vessiot and Kolchin. Note that the solution space of L(y) = 0 is a vector space over the field of constants, and therefore the field of constants C will play an important role in the theory. It is convenient to assume that C is algebraically closed of characteristic zero, and we will make this assumption throughout the remainder of this section. Another new ingredient we will have to take into account is the notion of a derivative. One shows that there exists a differential splitting field, called the Picard-Vessiot extension (in short, PV extension), which is the smallest differential field extension containing a full basis of solutions as well as all derivatives of those solutions, but no new constants.

Example 5 The Airy equation A(y) = y'' − xy over k = C(x) has a C-basis of solutions given by

$$y_1 = 1 + \sum_{i=1}^{\infty} \frac{x^{3i}}{\prod_{j=1}^{i} 3j(3j-1)}, \qquad y_2 = x + \sum_{i=1}^{\infty} \frac{x^{3i+1}}{\prod_{j=1}^{i} 3j(3j+1)}.$$
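As an illustration (our SymPy sketch, not part of the original text), truncations of these series satisfy the Airy equation up to the truncation order, and their Wronskian is 1 up to that order:

```python
import sympy as sp
from math import prod

x = sp.symbols('x')
K = 10  # truncation order of the two series

y1 = 1 + sum(x**(3*i)/prod(3*j*(3*j - 1) for j in range(1, i + 1)) for i in range(1, K + 1))
y2 = x + sum(x**(3*i + 1)/prod(3*j*(3*j + 1) for j in range(1, i + 1)) for i in range(1, K + 1))

# the residual y'' - x*y is only the single tail term lost by truncation
for y in (y1, y2):
    res = sp.expand(sp.diff(y, x, 2) - x*y)
    assert len(sp.Add.make_args(res)) == 1 and sp.degree(res, x) >= 3*K

# the Wronskian relation y1*y2' - y1'*y2 = 1 holds up to the truncation order
wr = sp.expand(y1*sp.diff(y2, x) - sp.diff(y1, x)*y2)
assert wr.coeff(x, 0) == 1 and all(wr.coeff(x, d) == 0 for d in range(1, 3*K))
```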

The PV extension is C(x, y_1, y_2, y_1', y_2'). Note that higher derivatives of the y_i can be expressed using the y_i and y_i', e.g. y_i'' = x y_i. We have y_1 y_2' − y_1' y_2 = 1 as a differential relation among the solutions.

Definition 2 The differential Galois group G of the equation L(y) = 0 (in fact, of the PV extension K/k associated to L(y) = 0) is the set of automorphisms of K that leave the elements of k fixed and commute with the derivation:

$$G = \{\sigma \in \mathrm{Aut}(K/k) \mid \sigma D = D\sigma\}.$$

Since G fixes the coefficients a_i ∈ k of L(y) = \sum_{i=0}^{n} a_i y^{(i)} and commutes with the derivation, we get for σ ∈ G and any solution y_j of L(y) = 0 that

$$0 = \sigma(0) = \sigma\left(L(y_j)\right) = \sigma\left(\sum_{i=0}^{n} a_i\, y_j^{(i)}\right) = \sum_{i=0}^{n} a_i\, \left(\sigma(y_j)\right)^{(i)} = L(\sigma(y_j)).$$

Therefore G sends a solution of L(y) = 0 to a solution. If y_1, y_2, . . . , y_n is a C-basis of the solution space in the PV extension, then σ(y_i) = \sum_{j=1}^{n} c_{i,j} y_j where c_{i,j} ∈ C. This shows that G has a faithful representation as a group of n × n matrices (c_{i,j}) over the field of constants C (while the classical Galois group is a permutation group, the differential Galois group is a linear group). Like the classical Galois group, the differential Galois group mirrors the properties of the linear differential equation and its solutions.

Proposition 1 ([Kol48]) Under our standard assumptions and notations, the following holds:

(i) All solutions of L(y) = 0 are rational (i.e. in k) if and only if G = {id}.
(ii) All solutions of L(y) = 0 are algebraic over k if and only if G is a finite group.
(iii) L(y) has a non trivial factorization L_1(L_2(y)) if and only if G ⊂ GL_n(C) is a reducible linear group (there exists a non trivial subspace W such that σ(W) = W for all σ ∈ G).

In fact the differential Galois group has an important additional property: it is a linear algebraic group. This means that the set of matrices (c_{i,j}) of the group G, where each matrix is viewed as a point in the space C^{n^2}, is an algebraic variety over C. This imposes strong restrictions, as we shall now see in connection with our integration problem.

Example 6 The computation of an integral ∫a for a ∈ k corresponds to the resolution of the inhomogeneous differential equation y' = a. With our previous trick, we can transform this to a homogeneous equation L(y) = y'' − \frac{a'}{a}\,y'. The C-basis of the solution space of the latter is {1, ∫a}, showing that the PV extension in this case is simply K = k(∫a). In order to find the matrix of an element σ ∈ G

in the basis {1, ∫a}, we note that σ(1) = 1 (because 1 ∈ k) and, since σ commutes with the derivation, we obtain:

$$\left(\sigma\left(\int a\right) - \int a\right)' = \sigma(a) - a = a - a = 0.$$

This shows that σ(∫a) − ∫a is a constant c_σ ∈ C and therefore σ(∫a) = ∫a + c_σ. The matrix of σ is thus

$$\begin{pmatrix} 1 & c_\sigma \\ 0 & 1 \end{pmatrix}.$$

The group G consists of the "points" (c_{i,j}) ∈ C^4 which are zeros of the polynomial system c_{1,1} = c_{2,2} = 1, c_{2,1} = 0. This variety is isomorphic to C, and we find that G is a subgroup of the additive group (C, +). Algebraic sub-varieties of C must be zero sets of one polynomial and are therefore either finite sets or C itself. Since an additive subgroup G of (C, +) is either {0} or contains infinitely many elements, the only possibilities for G are {0} and C. If G = {0}, then all solutions, and in particular ∫a, are rational and therefore belong to k. Otherwise G = (C, +) and not all solutions are algebraic, which implies that ∫a is not algebraic (and therefore must be transcendental over k).

The above example gives a proof of a result very similar to Liouville's theorem on algebraic integrals stated in the introduction:

Theorem 1 (Liouville) Let k be a differential field with algebraically closed field of constants of characteristic zero. The primitive ∫a of a ∈ k is either in k or is transcendental over k. In particular, an integral that is algebraic over k must be in k.

Liouville's original proof of the above is easier, and more linked to the algebraic step of the Liouvillian extensions than to the integral step:

Example 7 Suppose that f is algebraic over the differential field (k, D) and that the minimal polynomial of f is p(X) = X^m + \sum_{i=0}^{m-1} a_i X^i. A derivation ∆ on the algebraic field extension k(f) which extends D must satisfy

$$0 = p(f)' = m f' f^{m-1} + \sum_{i=1}^{m-1} i\, a_i\, f' f^{i-1} + \sum_{i=0}^{m-1} a_i'\, f^i.$$

This shows that

$$f' = -\,\frac{\sum_{i=0}^{m-1} a_i' f^i}{\;m f^{m-1} + \sum_{i=1}^{m-1} i\, a_i f^{i-1}\;}$$

and that there is at most one such derivation. One can verify that this formula indeed defines a derivation on k(f). If σ ∈ Aut(k(f)/k) is an automorphism, then σDσ^{−1} is also a derivation of k(f) and therefore σDσ^{−1} = D, showing that σD = Dσ for any automorphism σ. Thus the differential Galois group and the classical Galois group coincide for algebraic extensions. Liouville now noted that if f' = a (i.e. f = ∫a), then σ(f)' = a for all σ ∈ Aut(k(f)/k). Therefore the trace of f (the sum of all conjugates) lies in k and, up to the factor m, is again an integral of a. Another consequence of f' ∈ k(f) is that all higher derivatives f'', · · · , f^{(m)} of f also belong to k(f). Since [k(f) : k] = m, the m + 1 elements f, f', f'', · · · , f^{(m)} must be linearly dependent over k, showing that f is the solution of a linear differential equation over k of order at most m.

The existence of a Liouvillian solution is also reflected in the differential Galois group. First we note that the set of Liouvillian solutions is a G-invariant subspace of the solution space, and therefore that there is a right factor whose solutions are exactly the Liouvillian solutions of L(y) = 0 (cf. Examples 3 and 4). For

this reason most algorithms first factor (and, possibly, decompose as an LCLM) the given differential equation and then look for Liouvillian solutions of irreducible right-hand factors. For irreducible equations, the existence of a Liouvillian solution is equivalent to G ⊂ GL_n(C) having a finite subgroup H leaving invariant a one dimensional subspace of the solution space, generated by a solution z ([Ulm92]). In more sophisticated terms, an irreducible equation L(y) = 0 has a Liouvillian solution if and only if the component of the identity G° of the linear algebraic group G is solvable ([Kol48]). Using a theorem of C. Jordan and a bound of I. Schur, it is possible to bound the index [G : H] of the largest group H with the above property ([Sin81]):

$$[G : H] \;\le\; f(n) \;\le\; \left(\sqrt{8n}+1\right)^{2n^2} - \left(\sqrt{8n}-1\right)^{2n^2}.$$

Note that the bound is independent of the actual equation L(y) = 0 and depends only on its order. Since H ⊂ G, its elements σ commute with the derivation, and H leaves the line generated by z invariant; so we find that σ(z) = c_σ z and σ(z') = c_σ z' for all σ ∈ H. In particular H, being maximal with this property, is the stabilizer of z'/z, and we get that the orbit of u = z'/z is finite of length [G : H] ≤ f(n). This shows that u is algebraic of degree at most [G : H] ≤ f(n) and that L(y) = 0 has a solution z of the form z = e^{∫u} where u is algebraic of degree at most [G : H] ≤ f(n). We obtain that if L(y) = 0 has a Liouvillian solution, i.e. a solution that can be written using ∫, e^∫ and algebraic functions, then it has a solution of the form z = e^{∫u} where u is algebraic of bounded degree ([Kol48, Sin81]). The existing algorithms to compute Liouvillian solutions will attempt to compute the minimal polynomial of u, of the form

$$P(u) = u^m + b_{m-1}u^{m-1} + \dots + b_1 u + b_0 = 0 \qquad (b_i \in k),$$

by trying degrees less than the above bound. If no polynomial is found, then there are no Liouvillian solutions. A classification of the subgroups of GL_n(C) allows one to give a more precise list of the possible degrees (many other degrees are possible, but this is the list of the smallest degrees):

(i) For n = 2, ∃u = z'/z such that [k(u) : k] ∈ {1, 2, 4, 6, 12} (cf. [Kov86]).
(ii) For n = 3, ∃u = z'/z such that [k(u) : k] ∈ {1, 3, 6, 9, 21, 36} (cf. [SU93a, Ulm92]).
(iii) For n = 4, ∃u = z'/z such that [k(u) : k] ∈ {1, 4, 5, 8, 10, 12, 16, 20, 24, 40, 48, 60, 72, 120} (cf. [Cor01]).
(iv) For n = 5, ∃u = z'/z such that [k(u) : k] ∈ {1, 5, 6, 10, 15, 30, 40, 55} (cf. [Cor01]).

A similar finite list exists for any order. In order to compute the minimal polynomial P(u) of u = z'/z, we note that G permutes the roots of P(u) and sends the logarithmic derivative of a solution to the logarithmic derivative of a solution. This gives the following form:

$$P(u) = u^m + b_{m-1}u^{m-1} + \dots + b_1 u + b_0 = \prod_{i=1}^{m}\left(u - \frac{z_i'}{z_i}\right) = u^m - \frac{\left(\prod_{i=1}^{m} z_i\right)'}{\prod_{i=1}^{m} z_i}\,u^{m-1} + \dots$$

where the z_i are solutions of L(y) = 0. In particular, the coefficient b_{m−1} is the logarithmic derivative of a product of m solutions, and one can show that this condition is a necessary and sufficient condition for the existence of a Liouvillian solution [Kov86, UW96, SU97]. In order to compute b_{m−1}, one constructs a linear differential equation L^{ⓢm}(y) whose solution space is spanned by the products of length m of solutions of L(y) = 0. We start with an arbitrary product z = z_1 z_2 · · · z_m where the z_i are solutions of L(y) = 0. Taking derivatives of z, we obtain expressions of the form z_1^{(i_1)} z_2^{(i_2)} · · · z_m^{(i_m)}. Using L(y) = 0 as a rewrite rule, we can replace derivatives of the z_i of order ≥ n by linear combinations over k of lower order derivatives. The expressions z^{(i)} are thus linear combinations of at most \binom{n+m-1}{n-1} such products, so after \binom{n+m-1}{n-1} + 1 derivations of z they must be linearly dependent over k. The linear combination involving the smallest order of derivation of z is called the m-th symmetric power of L(y) and is denoted L^{ⓢm}(y).

The order of the linear differential equation L^{ⓢm}(y) grows very fast and it can be difficult to compute. For second order equations L(y), however, the order of L^{ⓢm}(y) is always m + 1 and therefore maximal. In this case, L^{ⓢm}(y) can be computed using a simple recurrence [BMW97]:

Example 8 Consider L(y) = y'' + ry (where r ∈ k). Then setting

$$L^{0}(y) = y, \qquad L^{1}(y) = y', \qquad L^{i+1}(y) = \left(L^{i}(y)\right)' + i\,(m-i+1)\,r\,L^{i-1}(y),$$

we obtain L^{ⓢm}(y) = L^{m+1}(y).
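The recurrence is short to implement. The following SymPy sketch (ours; the paper itself works in Maple) reproduces the classical cases: m = 1 returns L itself and m = 2 the well-known second symmetric power y''' + 4ry' + 2r'y:

```python
import sympy as sp

x = sp.symbols('x')
r = sp.Function('r')(x)   # generic coefficient of y'' + r y
z = sp.Function('z')(x)

def sym_power(m):
    # L_0 = z, L_1 = z'; L_{i+1} = L_i' + i(m-i+1) r L_{i-1}; the result is L_{m+1}
    L = [z, sp.diff(z, x)]
    for i in range(1, m + 1):
        L.append(sp.diff(L[i], x) + i*(m - i + 1)*r*L[i - 1])
    return L[m + 1]

# m = 1 gives back the operator itself: z'' + r z
assert sp.simplify(sym_power(1) - (sp.diff(z, x, 2) + r*z)) == 0
# m = 2 gives the classical second symmetric power: z''' + 4 r z' + 2 r' z
assert sp.simplify(sym_power(2) - (sp.diff(z, x, 3) + 4*r*sp.diff(z, x) + 2*sp.diff(r, x)*z)) == 0
```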

0.5.2 Liouvillian solutions of second order equations

The solution of second order equations is simplified by the fact that symmetric powers are always of maximal order and that any solution of a symmetric power is a product of solutions. A very efficient algorithm is due to Kovacic [Kov86]; we present a slight improvement of this algorithm [UW96] where the computation of exponential solutions is replaced by the computation of rational solutions whenever possible:

(i) Factor L(y) by computing exponential solutions. A non trivial exponential solution gives a Liouvillian solution; a second independent Liouvillian solution can be obtained by the method of variation of constants.
(ii) If L(y) is not of the form y'' + ry (i.e. if a_1 ≠ 0), replace y by y · e^{−∫ a_1/2} in order to obtain an equation of the form y'' + ry (where r ∈ k), which we denote again by L(y). (This change of variable is a bijection which transforms Liouvillian solutions into Liouvillian solutions.)
(iii) For m ∈ {4, 6, 8, 12} (in this order!):
  • Compute rational solutions f of L^{ⓢm}(y) = 0.
  • If a non trivial rational solution f exists, then
    – set b_{m−1} = −f'/f,
    – compute the other coefficients b_i of P(u) using the recurrence

$$b_m = 1, \qquad b_{m-1} = -\frac{f'}{f}, \qquad b_{i-1} = \frac{-\,b_i' + b_{m-1}\,b_i + r\,(i+1)\,b_{i+1}}{m-i+1} \quad (m-1 \ge i \ge 0), \qquad (♯m)$$

    – return an irreducible factor of the polynomial P(u).
  Otherwise try the next m.
(iv) If no m produces a Liouvillian solution, then there are no such solutions.

For higher order equations, the computation is more difficult; methods can for example be found in [vHRUW99, SU97, Ulm03]. An important observation is that for m ∈ {6, 8, 12}, the solutions are in fact algebraic of known algebraic degree. In this case, it is possible to compute the minimal polynomial of a solution instead of the minimal polynomial of a logarithmic derivative [Fak97, Ulm05].
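As an illustration of step (iii), here is a SymPy sketch (ours) of the recurrence (♯m), run on the data of the example of the next section (m = 4, f = x(x−2)(x−1), r = 3(x²−3x+3)/(16(x−2)²(x−1)²)); the coefficient b_2 it produces can be checked against its closed form:

```python
import sympy as sp

x, u = sp.symbols('x u')
r = 3*(x**2 - 3*x + 3)/(16*(x - 2)**2*(x - 1)**2)
m = 4
f = x*(x - 2)*(x - 1)   # a rational solution of the m-th symmetric power

# recurrence (#m): b_m = 1, b_{m-1} = -f'/f,
# b_{i-1} = (-b_i' + b_{m-1} b_i + r (i+1) b_{i+1}) / (m - i + 1)
b = {m: sp.Integer(1), m - 1: sp.cancel(-sp.diff(f, x)/f)}
for i in range(m - 1, 0, -1):
    b[i - 1] = sp.cancel((-sp.diff(b[i], x) + b[m - 1]*b[i] + r*(i + 1)*b[i + 1])/(m - i + 1))

P = sum(c*u**k for k, c in b.items())   # candidate minimal polynomial P(u)
assert sp.cancel(b[2] - 3*(9*x**3 - 35*x**2 + 43*x - 16)/(8*x*(x - 1)**2*(x - 2)**2)) == 0
```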

An alternative efficient algorithm, using the above and Klein's method for computing algebraic solutions of second order differential equations via hypergeometric functions, is given in [Ber04, vHW05].

Example 9 In the decomposition of the operator L of our leitmotiv example, the following second order equation remained to be solved:

$$L_2(y) := y'' + \frac{3\,(6x-13)}{2\,(x-2)(x-1)}\,y' + \frac{3\,(85x^2-367x+402)}{16\,(x-2)^2(x-1)^2}\,y = 0.$$

The steps in the above algorithm are:

(i) L_2(y) is irreducible (it was one of the irreducible factors in section 0.4.1).
(ii) L_2(y) is not of the form y'' + ry, therefore we replace y by

$$y \cdot e^{-\int \frac{a_1}{2}} = y\,(x-2)^{3/4}\,(x-1)^{-21/4} \qquad (0.4)$$

and obtain the equation

$$L(y) = y'' + \frac{3\,(x^2-3x+3)}{16\,(x-2)^2(x-1)^2}\,y.$$

(iii) We consider the first case m = 4 and compute L^{ⓢ4}(y):

$$y^{(5)} + \frac{15\,(x^2-3x+3)}{4\,(x-2)^2(x-1)^2}\,y^{(3)} - \frac{45\,(2x^3-9x^2+17x-12)}{8\,(x-2)^3(x-1)^3}\,y'' + \frac{45\,(2x^4-12x^3+33x^2-45x+24)}{4\,(x-2)^4(x-1)^4}\,y' - \frac{45\,(2x^5-15x^4+54x^3-108x^2+113x-48)}{4\,(x-2)^5(x-1)^5}\,y.$$
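This operator can be tested directly (our SymPy sketch): substituting x(x−2)(x−1) and (x−2)(x−1) into this fourth symmetric power gives zero:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
d = lambda k: sp.Derivative(y(x), (x, k))

# the fourth symmetric power displayed above
Ls4 = (d(5) + 15*(x**2 - 3*x + 3)/(4*(x - 2)**2*(x - 1)**2)*d(3)
       - 45*(2*x**3 - 9*x**2 + 17*x - 12)/(8*(x - 2)**3*(x - 1)**3)*d(2)
       + 45*(2*x**4 - 12*x**3 + 33*x**2 - 45*x + 24)/(4*(x - 2)**4*(x - 1)**4)*d(1)
       - 45*(2*x**5 - 15*x**4 + 54*x**3 - 108*x**2 + 113*x - 48)/(4*(x - 2)**5*(x - 1)**5)*y(x))

# both rational solutions are annihilated by Ls4
for f in (x*(x - 2)*(x - 1), (x - 2)*(x - 1)):
    val = Ls4.subs(y(x), f).doit()
    assert sp.simplify(val) == 0
```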

There is a two dimensional space of rational solutions, with basis [x(x−2)(x−1), (x−2)(x−1)]. Taking f = x(x−2)(x−1), the recurrence (♯m) constructs the following irreducible polynomial:

$$P_4 = u^4 - \frac{3x^2-6x+2}{x\,(x-2)(x-1)}\,u^3 + \frac{3\,(9x^3-35x^2+43x-16)}{8\,x\,(x-1)^2(x-2)^2}\,u^2 - \frac{27x^4-153x^3+319x^2-288x+94}{16\,x\,(x-2)^3(x-1)^3}\,u + \frac{81x^5-594x^4+1723x^3-2466x^2+1737x-480}{256\,x\,(x-2)^4(x-1)^4}.$$

Note that for any choice of (c_1, c_2), not both zero, the recurrence will construct a polynomial for u of degree 4 from f = c_1 x(x−2)(x−1) + c_2 (x−2)(x−1). For almost all values of these c_i, the resulting polynomial of degree 4 will be irreducible and define an algebraic function u. However, for 3 families of values of (c_1, c_2), the resulting polynomial will be reducible and in fact will be the square of an irreducible polynomial of degree two ([HvdP95, Ulm94, UW96]). Here, those values (computed via the discriminant of P with respect to the variable u) are {c_1 = 0}, {c_2 = −2c_1}, {c_2 = −4c_1}. Indeed, for c_1 = 0, we get

$$P = \left(u^2 - \frac{2x-3}{2\,(x-1)(x-2)}\,u + \frac{3x^2-9x+7}{16\,(x-2)^2(x-1)^2}\right)^{2}.$$
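The degenerate case can be rechecked with the recurrence (♯m) (our SymPy sketch): for f = (x−2)(x−1), the recurrence indeed produces the square of the quadratic above:

```python
import sympy as sp

x, u = sp.symbols('x u')
r = 3*(x**2 - 3*x + 3)/(16*(x - 2)**2*(x - 1)**2)
m = 4
f = (x - 2)*(x - 1)   # the degenerate choice c1 = 0

b = {m: sp.Integer(1), m - 1: sp.cancel(-sp.diff(f, x)/f)}
for i in range(m - 1, 0, -1):
    b[i - 1] = sp.cancel((-sp.diff(b[i], x) + b[m - 1]*b[i] + r*(i + 1)*b[i + 1])/(m - i + 1))
P = sum(c*u**k for k, c in b.items())

# P is the square of the degree two polynomial displayed above
Q = u**2 - (2*x - 3)/(2*(x - 1)*(x - 2))*u + (3*x**2 - 9*x + 7)/(16*(x - 2)**2*(x - 1)**2)
assert sp.cancel(sp.expand(P - Q**2)) == 0
```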

Both the degree two irreducible factor of P and the irreducible polynomial P_4 can be used to proceed; in the following we will use P_4. In order to undo the variable change (0.4) and produce a solution of our original equation L, we replace u by u = v + \frac{3}{4}\,\frac{6x-13}{(x-2)(x-1)} in the equation P(u) = 0. Computing the expression exp(∫v) by computing the integral and performing the simplification, we obtain two algebraic solutions of L(y) = 0 which are a basis of the solution space:

$$y_4 = \frac{(x-2)\,\sqrt[4]{\,x-2\sqrt{x-1}\,}}{(x-1)^5} \qquad \text{and} \qquad y_5 = \frac{(x-2)\,\sqrt[4]{\,x+2\sqrt{x-1}\,}}{(x-1)^5}.$$
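These closed forms can be validated numerically (our SymPy sketch): plugging y_4 and y_5 into L_2 and evaluating at the sample point x = 37/10 gives a residual at the working precision:

```python
import sympy as sp

x = sp.symbols('x')
a1 = 3*(6*x - 13)/(2*(x - 2)*(x - 1))
a0 = 3*(85*x**2 - 367*x + 402)/(16*(x - 2)**2*(x - 1)**2)
L2 = lambda y: sp.diff(y, x, 2) + a1*sp.diff(y, x) + a0*y

y4 = (x - 2)*(x - 2*sp.sqrt(x - 1))**sp.Rational(1, 4)/(x - 1)**5
y5 = (x - 2)*(x + 2*sp.sqrt(x - 1))**sp.Rational(1, 4)/(x - 1)**5

# both residuals vanish up to the 60-digit working precision
for y in (y4, y5):
    res = L2(y).subs(x, sp.Rational(37, 10)).evalf(60)
    assert abs(res) < 1e-40
```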

0.6 Brief conclusion

We have presented and illustrated algorithms for rational, exponential and Liouvillian solutions of linear differential equations. Those are precisely the solutions that can be expressed using solutions of differential equations of order at most one. One may push further the class of admissible solutions, for example using solutions of equations of other orders and, eventually, compute the differential Galois group (i.e. all differential relations between the solutions). But this is beyond the modest purpose of this survey; interested readers are referred to the lovely set of notes [Sin07] and the corresponding fundamental book [vdPS03]. These methods have applications in handling holonomic functions, establishing transcendence properties of numbers, and many others; a striking application today is the application of constructive differential Galois theory to integrability of dynamical systems [MR99]. It should also be mentioned that the theory extends beautifully to difference or q-difference equations; interested readers may enjoy consulting [vdPS97] or [DVRSZ03].

Bibliography

[ABP95] S.A. Abramov, M. Bronstein, and M. Petkovšek, On polynomial solutions of linear operators, Proceedings of International Symposium on Symbolic and Algebraic Computation ISSAC'95, ACM Press, 1995.
[AK91] S.A. Abramov and K.Yu. Kvashenko, Fast algorithms for the rational solutions of linear differential equations with polynomial coefficients, Proceedings of International Symposium on Symbolic and Algebraic Computation ISSAC'91, ACM Press, 1991.
[Bar99] M.A. Barkatou, On rational solutions of linear differential systems, J. Symbolic Comput. 28 (1999), no. 4-5, 547–567.
[BCS05] A. Bostan, T. Cluzeau, and B. Salvy, Fast algorithms for polynomial solutions of linear differential equations, Proceedings of International Symposium on Symbolic and Algebraic Computation ISSAC'05 (New York), ACM, 2005, pp. 45–52 (electronic).
[Ber04] M. Berkenbosch, Algorithms and moduli spaces for differential equations, Ph.D. thesis, Rijksuniversiteit Groningen, 2004.
[Beu92] F. Beukers, Differential Galois theory, From Number Theory to Physics (Waldschmidt, Moussa, Luck, Itzykson, eds.), Springer-Verlag, 1992.
[BMW97] M. Bronstein, T. Mulders, and J.A. Weil, On symmetric powers of differential operators, Proceedings of International Symposium on Symbolic and Algebraic Computation ISSAC'97 (New York), ACM Press, 1997.
[BP98] M.A. Barkatou and E. Pflügel, On the equivalence problem of linear differential systems and its application for factoring completely reducible systems, Proceedings of International Symposium on Symbolic and Algebraic Computation ISSAC'98 (New York), ACM Press, 1998, pp. 268–275 (electronic).
[BP99] M. Barkatou and E. Pflügel, An algorithm computing the regular formal solutions of a system of linear differential equations, J. Symbolic Comput. 28 (1999), no. 4-5, 569–587.
[Bro92] M. Bronstein, On solutions of linear differential equations in their coefficient field, J. Symb. Comp. 13 (1992), 413–439.
[CLO92] D. Cox, J. Little, and D. O'Shea, Ideals, varieties, and algorithms, Undergraduate Texts in Mathematics, Springer, 1992.
[Clu03] T. Cluzeau, Factorization of differential systems in characteristic p, Proceedings of International Symposium on Symbolic and Algebraic Computation ISSAC'03 (New York), ACM, 2003, pp. 58–65 (electronic).
[Cor01] O. Cormier, Résolution des équations différentielles linéaires d'ordre 4 et 5, Thèse, Université de Rennes 1, 2001.
[CvH04] T. Cluzeau and M. van Hoeij, A modular algorithm for computing the exponential solutions of a linear differential operator, J. Symbolic Comput. 38 (2004), no. 3, 1043–1076.
[DLR92] A. Duval and M. Loday-Richaud, Kovacic's algorithm and its application to some families of special functions, Appl. Algebra Engrg. Comm. Comput. 3 (1992), no. 3, 211–246.

[DVRSZ03] L. Di Vizio, J.-P. Ramis, J. Sauloy, and C. Zhang, Équations aux q-différences, Gaz. Math. (2003), no. 96, 20–49.
[Fak97] W. Fakler, On second order homogeneous linear differential equations with Liouvillian solutions, Theoretical Computer Science 187 (1997), 27–48.
[GCL92] K.O. Geddes, S.R. Czapor, and G. Labahn, Algorithms for computer algebra, Kluwer Academic Publishers, Dordrecht, 1992.
[Gra86] J. Gray, Linear differential equations and group theory from Riemann to Poincaré, Birkhäuser, 1986.
[HvdP95] P. Hendriks and M. van der Put, Galois action on solutions of a differential equation, J. Symbolic Comput. 19 (1995), 559–576.
[Inc26] E.L. Ince, Ordinary differential equations, Dover, 1926.
[Kap57] I. Kaplansky, An introduction to differential algebra, Hermann, 1957.
[Kol48] E.R. Kolchin, Algebraic matrix groups and the Picard-Vessiot theory of homogeneous ordinary differential equations, Annals of Math. 49 (1948).
[Kov86] J. Kovacic, An algorithm for solving second order linear homogeneous differential equations, J. Symb. Comp. 2 (1986), 3–43.
[Lio33] J. Liouville, Second mémoire sur la détermination des intégrales dont la valeur est algébrique, Journal de l'École polytechnique 14 (1833), 149–193.
[Mar98] F. Marotte, Les équations différentielles linéaires et la théorie des groupes, Annales École normale supérieure, Gauthier-Villars, 1898.
[MR99] J.J. Morales Ruiz, Differential Galois theory and non-integrability of Hamiltonian systems, Progress in Mathematics, vol. 179, Birkhäuser Verlag, Basel, 1999.
[Poo60] E.G.C. Poole, Introduction to the theory of linear differential equations, Clarendon Press, Oxford, 1936; reprinted Dover, 1960.
[Ram85] J.-P. Ramis, Théorèmes d'indice Gevrey pour les équations différentielles linéaires, AMS Coll. Publications, 1985.
[Rit66] J.F. Ritt, Differential algebra, 1950; reprinted Dover, 1966.
[Sin81] M.F. Singer, Liouvillian solutions of n-th order linear differential equations, Amer. J. Math. 103 (1981), 661–682.
[Sin90] ——, An outline of differential Galois theory, Computer Algebra and Differential Equations (E. Tournier, ed.), Academic Press, New York, 1990.
[Sin96] ——, Testing reducibility of linear differential operators: a group theoretic perspective, Applicable Algebra in Engineering, Communication and Computing 7 (1996), 77–104.
[Sin99] ——, Direct and inverse problems in differential Galois theory, Selected Works of Ellis Kolchin with Commentary (Bass, Buium, Cassidy, eds.), American Mathematical Society, 1999, pp. 527–554.
[Sin07] ——, Introduction to the Galois theory of linear differential equations, this volume, 2007.
[Stu93] B. Sturmfels, Algorithms in invariant theory, Texts and Monographs in Symbolic Computation, Springer-Verlag, Wien/New York, 1993.
[SU93a] M.F. Singer and F. Ulmer, Galois groups of second and third order linear differential equations, J. Symb. Comp. 16 (1993), 9–36.
[SU93b] ——, Liouvillian and algebraic solutions of second and third order linear differential equations, J. Symb. Comp. 16 (1993), 37–73.
[SU95] ——, Necessary conditions for Liouvillian solutions of (third order) linear differential equations, Applied Algebra in Engineering, Communication and Computing 6 (1995), 1–22.
[SU97] ——, Linear differential equations and products of linear forms, J. Pure Appl. Algebra 117-118 (1997), 549–563.
[Ulm92] F. Ulmer, On Liouvillian solutions of linear differential equations, Appl. Algebra in Eng. Comm. and Comp. 226 (1992), 171–193.
[Ulm94] ——, Irreducible linear differential equations of prime order, J. Symbolic Comput. 18 (1994), no. 4, 385–401.
[Ulm03] ——, Liouvillian solutions of third order differential equations, J. Symb. Comp. 36 (2003), 855–889.
[Ulm05] ——, Note on algebraic solutions of differential equations with known finite Galois group, Appl. Algebra in Eng. Comm. and Comp. 16 (2005), 205–218.
[UW96] F. Ulmer and J.A. Weil, Note on Kovacic's algorithm, J. Symb. Comp. 22 (1996), 179–200.
[vdP99] M. van der Put, Symbolic analysis of differential equations, Some Tapas of Computer Algebra, Algorithms Comput. Math., vol. 4, Springer, Berlin, 1999, pp. 208–236.
[vdPS97] M. van der Put and M.F. Singer, Galois theory of difference equations, Lecture Notes in Mathematics, vol. 1666, Springer-Verlag, Berlin, 1997.
[vdPS03] M. van der Put and M.F. Singer, Galois theory of linear differential equations, Springer-Verlag, 2003.
[Ves15] E. Vessiot, Méthodes d'intégration explicites, Éditions Jacques Gabay, 1915.
[vH96] M. van Hoeij, Rational solutions of the mixed differential equation and its application to factorization of differential operators, Proceedings of International Symposium on Symbolic and Algebraic Computation ISSAC'96 (New York), ACM Press, 1996.
[vHRUW99] M. van Hoeij, J.-F. Ragot, F. Ulmer, and J.A. Weil, Liouvillian solutions of linear differential equations of order 3 and higher, J. Symbolic Comput. 28 (1999), no. 4-5, 589–609.

[vHW97] M. van Hoeij and J.A. Weil, An algorithm for computing invariants of differential Galois groups, J. Pure Appl. Algebra 117-118 (1997), 353–379.
[vHW05] M. van Hoeij and J.A. Weil, Solving second order linear differential equations with Klein's theorem, Proceedings of International Symposium on Symbolic and Algebraic Computation ISSAC'05 (Beijing), ACM, New York, 2005.
