INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET EN AUTOMATIQUE
A Simplex-Like Algorithm for Interval Linear Systems. O. Beaumont
N° 3153, April 1997, Thème 4
ISSN 0249-6399
Rapport de recherche
A Simplex-Like Algorithm for Interval Linear Systems. O. Beaumont. Thème 4: Simulation and optimization of complex systems. Projet Aladin. Research report no. 3153, April 1997, 18 pages.
Abstract: In this paper, we show how convex polyhedra can be used to solve interval linear systems without preconditioning. We first show how to derive, from an enclosure of $\square([A],[b])$, a polyhedron which contains the convex hull of the solution set. Then, a simplex-like method enables us to find a new outer inclusion. Moreover, the constraints obtained may be used to compute an inner inclusion of $\square([A],[b])$.
Key-words: interval linear systems, simplex algorithm
Institut de Recherche en Informatique et Systèmes Aléatoires - email: [email protected]
Unit´e de recherche INRIA Rennes IRISA, Campus universitaire de Beaulieu, 35042 RENNES Cedex (France) T´el´ephone : (33) 02 99 84 71 00 – T´el´ecopie : (33) 02 99 84 71 71
A simplex-like algorithm for solving interval linear systems.

Résumé: In this article, we show how convex polyhedra can be used to solve interval linear systems without preconditioning. We first show how to find, from a superset of $\square([A],[b])$, a polyhedron which contains the solution set. We then use a simplex-like algorithm to determine a new superset of the solution. Moreover, the constraints obtained can be used to determine a subset of $\square([A],[b])$.
Key-words: interval linear systems, simplex algorithm
1 Introduction

A lot of work has been done on solving interval linear systems $[A]x = [b]$ [6]. Rohn and Kreinovich [10],[4] have proved that the computation of $\square([A],[b])$, the smallest box that contains all the solutions of $[A]x = [b]$, is an NP-hard problem. On the other hand, several algorithms obtain good results, especially when the diameter of $[A]$ is small [8],[6]. In this paper, we propose a new algorithm based on linear programming. It consists of an iterative scheme which takes as input an enclosure of $\square([A],[b])$ and returns a (usually better) enclosure. This algorithm converges toward a superset of the convex hull of the united solution set $\Sigma([A],[b])$. In this paper, we use the following notations:
$$\Sigma([A],[b]) = \{x : \exists A \in [A],\ \exists b \in [b],\ Ax = b\},$$
$$\Sigma^{c}([A],[b]) = \mathrm{Co}(\Sigma([A],[b])), \quad\text{where Co denotes the convex hull}.$$
We also denote by $\square([A],[b])$ the interval hull of the solution set. The interval linear systems we consider in this paper are defined by the following notations:
$$[A] = [A - \Delta A,\ A + \Delta A], \qquad [b] = [\underline{b}, \overline{b}] = [b - \Delta b,\ b + \Delta b], \qquad b = \frac{\underline{b} + \overline{b}}{2}.$$
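Throughout the paper, membership of a point in the united solution set can be tested with the Oettli-Prager inequality recalled in Theorem 1 below. A minimal NumPy sketch (the function name and the 2x2 data are illustrative, not taken from the paper):

```python
import numpy as np

def in_solution_set(x, A, dA, b, db):
    """Oettli-Prager test: x belongs to the united solution set of
    [A - dA, A + dA] x = [b - db, b + db] iff |A x - b| <= dA |x| + db
    holds componentwise (A, b are the midpoints; dA, db the radii)."""
    return bool(np.all(np.abs(A @ x - b) <= dA @ np.abs(x) + db))

# Hypothetical 2x2 data: the midpoint system A x = b has solution (1, 0)
A = np.array([[2.0, 1.0], [1.0, 3.0]])
dA = 0.1 * np.ones((2, 2))
b = np.array([2.0, 1.0])
db = 0.1 * np.ones(2)

print(in_solution_set(np.array([1.0, 0.0]), A, dA, b, db))  # True
print(in_solution_set(np.array([5.0, 5.0]), A, dA, b, db))  # False
```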
2 How to find a polyhedron that contains the convex hull of the united solution set

It is known that the solution set of the problem $[A]x = [b]$ is in general not convex [6]. As far as we are only interested in $\square([A],[b])$, we may use $\Sigma^{c}([A],[b])$ as an intermediate set. Oettli and Prager [7] proved the following theorem:
Theorem 1
$$(\exists A \in [A],\ \exists b \in [b],\ Ax = b) \iff |Ax - b| \le \Delta A\,|x| + \Delta b.$$
This expression does not lead directly to a polyhedron, because of the absolute value, which underlines the fact that the solution set is usually not convex [6]. Let us assume that we know an enclosure of $\square([A],[b])$, which may be obtained, for instance, by the algorithm proposed by Rump [8]. We first give a result which provides a way to get rid of the absolute values.

Lemma 1 If $\square([A],[b]) \subseteq [\underline{x}, \overline{x}]$
Figure 1: Situation when both $\underline{x}_i$ and $\overline{x}_i$ are positive
and if
$$\alpha_j = \frac{|\overline{x}_j| - |\underline{x}_j|}{\overline{x}_j - \underline{x}_j} \quad\text{and}\quad \beta_j = \frac{\overline{x}_j|\underline{x}_j| - \underline{x}_j|\overline{x}_j|}{\overline{x}_j - \underline{x}_j},$$
where $x_j$ denotes the $j$-th component of $x$, we have: $\forall x_j \in [\underline{x}_j, \overline{x}_j]$, $|x_j| \le \alpha_j x_j + \beta_j$.

Proof:
First case: $\overline{x}_j \le 0$. Then $\alpha_j = -1$, $\beta_j = 0$, and $\alpha_j x_j + \beta_j = -x_j = |x_j|$.
Second case: $\underline{x}_j \ge 0$. Then $\alpha_j = 1$, $\beta_j = 0$, and $\alpha_j x_j + \beta_j = x_j = |x_j|$.
Third case: $\underline{x}_j \le 0 \le \overline{x}_j$. Then $\alpha_j = \frac{\overline{x}_j + \underline{x}_j}{\overline{x}_j - \underline{x}_j}$ and $\beta_j = \frac{-2\underline{x}_j\overline{x}_j}{\overline{x}_j - \underline{x}_j}$.
If $0 \le z \le \overline{x}_j$, then $\alpha_j z + \beta_j - |z| = \frac{-2\underline{x}_j}{\overline{x}_j - \underline{x}_j}(\overline{x}_j - z) \ge 0$, thus $|z| \le \alpha_j z + \beta_j$.
If $\underline{x}_j \le z \le 0$, then $\alpha_j z + \beta_j - |z| = \frac{2\overline{x}_j}{\overline{x}_j - \underline{x}_j}(z - \underline{x}_j) \ge 0$, thus $|z| \le \alpha_j z + \beta_j$. $\Box$

In fact, several situations may occur:

Lemma 2 The convex hull of the set $S = \{(x,y) \in \mathbb{R}^2 : x \in [\underline{x}_j, \overline{x}_j],\ 0 \le y \le |x|\}$ is the polyhedron defined by $P = \{(x,y) : x \in [\underline{x}_j, \overline{x}_j],\ 0 \le y \le \alpha_j x + \beta_j\}$.
Figure 2: Situation when both $\underline{x}_i$ and $\overline{x}_i$ are negative

Figure 3: Situation when $\underline{x}_i \le 0$ and $\overline{x}_i \ge 0$
Proof: If $\underline{x}_j \ge 0$ or $\overline{x}_j \le 0$, then $S = P$ and the property is trivial. We now assume $\underline{x}_j \le 0 \le \overline{x}_j$. The set $P$, which is defined by linear inequalities, is convex, and the previous lemma implies $S \subseteq P$; therefore, $\mathrm{Co}(S) \subseteq P$. Conversely, let us consider $(a,b) \in \{(x,y) : x \in [\underline{x}_j, \overline{x}_j],\ y \le \alpha_j x + \beta_j\} \setminus S$. Consider the line $y = \alpha_j x + (b - \alpha_j a)$, which intersects $y = x$ and $y = -x$ at the points
$$(a_1, b_1) = \frac{b - \alpha_j a}{1 - \alpha_j}(1, 1) \quad\text{and}\quad (a_2, b_2) = \frac{b - \alpha_j a}{1 + \alpha_j}(-1, 1),$$
respectively. Since $b - \alpha_j a \le \beta_j$, we obtain $\underline{x}_j \le a_2 \le 0 \le a_1 \le \overline{x}_j$ and, therefore, $(a_1, b_1) \in S$, $(a_2, b_2) \in S$ and $(a, b) \in \mathrm{Co}(S)$. We have proved that $\mathrm{Co}(S) = P$. $\Box$
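The chord coefficients of Lemma 1 are cheap to compute; the following sketch (our own naming, not from the paper) checks numerically that the chord dominates $|x|$ on the interval and is tight at the endpoints:

```python
import numpy as np

def chord_coeffs(xl, xu):
    """alpha_j, beta_j of Lemma 1: the chord of |x| over [xl, xu],
    so that |x| <= alpha*x + beta on the whole interval (requires xl < xu)."""
    alpha = (abs(xu) - abs(xl)) / (xu - xl)
    beta = (xu * abs(xl) - xl * abs(xu)) / (xu - xl)
    return alpha, beta

alpha, beta = chord_coeffs(-1.0, 3.0)   # interval straddling zero
xs = np.linspace(-1.0, 3.0, 101)
print(np.all(np.abs(xs) <= alpha * xs + beta + 1e-12))  # True: chord dominates |x|
print(alpha * (-1.0) + beta, alpha * 3.0 + beta)        # 1.0 3.0: tight at endpoints
```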
Therefore, we can expect that the following polyhedron $\Pi([A],[b],[\underline{x},\overline{x}])$ represents a good approximation of $\Sigma^{c}([A],[b])$. If we denote by $D_\alpha$ the diagonal matrix whose diagonal entries are the $\alpha_j$'s and by $\beta$ the vector of the $\beta_j$'s, then we define $\Pi([A],[b],[\underline{x},\overline{x}])$ as follows:
$$\Pi([A],[b],[\underline{x},\overline{x}]) : \begin{cases} Ax - \Delta A\, D_\alpha x \le \overline{b} + \Delta A\, \beta \\ Ax + \Delta A\, D_\alpha x \ge \underline{b} - \Delta A\, \beta \end{cases}$$
We can now state the main theorem of this paper.

Theorem 2 Let $\square([A],[b]) \subseteq [\underline{x},\overline{x}]$. Then we have
$$\Sigma([A],[b]) \subseteq \{x : Ax - \Delta A\, D_\alpha x \le \overline{b} + \Delta A\, \beta,\ Ax + \Delta A\, D_\alpha x \ge \underline{b} - \Delta A\, \beta\},$$
where $\alpha_j = \frac{|\overline{x}_j| - |\underline{x}_j|}{\overline{x}_j - \underline{x}_j}$ and $\beta_j = \frac{\overline{x}_j|\underline{x}_j| - \underline{x}_j|\overline{x}_j|}{\overline{x}_j - \underline{x}_j}$.

Proof: This theorem is a direct consequence of Theorem 1 and Lemma 1. $\Box$
We have seen above how an enclosure of $\square([A],[b])$ leads to a polyhedron that describes a superset of $\Sigma^{c}([A],[b])$. The set of problems $\min_{x \in \Pi([A],[b],[\underline{x},\overline{x}])} x_i$ and $\max_{x \in \Pi([A],[b],[\underline{x},\overline{x}])} x_i$ can be solved, for instance, by applying the simplex method $2n$ times [2]. Therefore, a new enclosure of $\square([A],[b])$ is obtained, and an iterative scheme can be developed. The limit of this iterative algorithm is usually a good enclosure of $\square([A],[b])$ (not too large), as shown in the last section. Unfortunately, it is difficult to find a characterization of this limit in order to study its accuracy, and the limit is usually not equal to $\square([A],[b])$. For instance, when we know an enclosure of $\square([A],[b])$ in which each $x_i$ keeps a constant sign, it is easy to prove that the algorithm described above provides the exact solution $\square([A],[b])$ after only one step. Indeed, in this case, the sets $\{(x,y) : x \in [\underline{x}_j, \overline{x}_j],\ 0 \le y \le \alpha_j x + \beta_j\}$ and $\{(x,y) : x \in [\underline{x}_j, \overline{x}_j],\ 0 \le y \le |x|\}$ are equal, and $\Pi([A],[b],[\underline{x},\overline{x}])$ is an exact representation of the solution set of $[A]x = [b]$, which is convex in this case. In the following section, we present a modification of the algorithm presented above, in order to perform it in a reasonable amount of time.
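One step of the scheme just described can be sketched as follows, with SciPy's `linprog` standing in for a hand-rolled simplex (illustrative names and data, and no attempt at the $O(n^3)$ acceleration of the next section):

```python
import numpy as np
from scipy.optimize import linprog

def refine_enclosure(Ac, dA, bl, bu, xl, xu):
    """One refinement step: build the polyhedron of Theorem 2 from the
    current enclosure [xl, xu] of the interval hull, then minimise and
    maximise each coordinate over it (2n linear programs)."""
    n = len(xl)
    alpha = (np.abs(xu) - np.abs(xl)) / (xu - xl)
    beta = (xu * np.abs(xl) - xl * np.abs(xu)) / (xu - xl)
    Da = np.diag(alpha)
    # Constraints  A x - dA Da x <= bu + dA beta  and  A x + dA Da x >= bl - dA beta
    A_ub = np.vstack([Ac - dA @ Da, -(Ac + dA @ Da)])
    b_ub = np.concatenate([bu + dA @ beta, -(bl - dA @ beta)])
    bounds = list(zip(xl, xu))
    new_xl, new_xu = xl.copy(), xu.copy()
    for i in range(n):
        c = np.zeros(n); c[i] = 1.0
        new_xl[i] = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun
        new_xu[i] = -linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun
    return new_xl, new_xu

# Hypothetical 2x2 interval system whose midpoint solution is (1, 0)
Ac = np.array([[2.0, 1.0], [1.0, 3.0]])
dA = 0.05 * np.ones((2, 2))
bl, bu = np.array([1.95, 0.95]), np.array([2.05, 1.05])
xl, xu = refine_enclosure(Ac, dA, bl, bu, np.array([-2.0, -2.0]), np.array([2.0, 2.0]))
print(xl, xu)  # a much tighter box around (1, 0)
```

Iterating the call with the returned box as the new input reproduces the scheme of the text.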
3 Algorithm for the outer inclusion

In the previous section, we presented an iterative method which solves interval linear systems. This method requires the execution of $2n$ simplex algorithms during each step. Since one execution of the simplex algorithm requires roughly $O(n^3)$ flops [2],[12], the total amount of work per iteration is therefore of order $O(n^4)$. The algorithms proposed by Rump [8] and Neumaier [6] require an amount of work of order $5n^3$. We show in this section how to perform a step of the algorithm in time $O(n^3)$. We propose a three-step algorithm in order to obtain an outer inclusion. The first step consists in solving an approximate problem; the conditions that define extremal points of the approximate problem are expected to be close to those which define extremal points of the exact problem. The second step consists in using the results of the first step to solve the exact problem. The last step is a correction step, and leads to a proved outer inclusion.
3.1 First step
In order to apply general theorems of linear programming, we perform the following change of variables, which yields nonnegative variables. We set $y = x - \underline{x}$:
$$\Pi([A],[b],[\underline{x},\overline{x}]) : \begin{cases} Ay - \Delta A\, D_\alpha y \le \overline{b} + \Delta A\,\beta - (A - \Delta A\, D_\alpha)\underline{x} = \overline{b}' \\ Ay + \Delta A\, D_\alpha y \ge \underline{b} - \Delta A\,\beta - (A + \Delta A\, D_\alpha)\underline{x} = \underline{b}' \\ y \ge 0 \end{cases}$$
The first step consists in solving the problem $Ay = [\underline{b}', \overline{b}']$. The polyhedron $\Pi'$ which corresponds to the problem $Ay = [\underline{b}', \overline{b}']$ can be considered as a modification of the polyhedron which defines $\Pi([A],[b],[\underline{x},\overline{x}])$:
$$\Pi' : \begin{cases} Ay \le \overline{b}' \\ Ay \ge \underline{b}' \\ y \ge 0 \end{cases} \qquad\text{and}\qquad \Pi([A],[b],[\underline{x},\overline{x}]) : \begin{cases} Ay - \Delta A\, D_\alpha y \le \overline{b}' \\ Ay + \Delta A\, D_\alpha y \ge \underline{b}' \\ y \ge 0 \end{cases}$$
As long as $\Delta A$ is small compared to $A$, we may expect the following situation, obtained for:
$$A = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad \Delta A = \frac{1}{4}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \quad \overline{b} = \frac{5}{4}\begin{pmatrix} 1 \\ 1 \end{pmatrix} \quad\text{and}\quad \underline{b} = \frac{3}{4}\begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
Figure 4: $\Pi$ (dotted line) and $\Pi'$ (plain line)

In this case, we can notice the correspondence between the sets of constraints of $\Pi$ and $\Pi'$ that define the extremal points. In fact, we do not solve the problems $\max_{y \in \Pi'} y_i$ and $\min_{y \in \Pi'} y_i$ entirely, but only determine the sets of linear equations that are saturated when solving each problem $\max_{y \in \Pi'} y_i$ and $\min_{y \in \Pi'} y_i$. These results will be used during the next steps. Suppose that we know $B = (B_{i,j})$, an approximation of the inverse of $A$. Moreover, let
$$A = \begin{pmatrix} L_1 \\ L_2 \\ \vdots \\ L_n \end{pmatrix} \quad\text{and}\quad \Delta A\, D_\alpha = \begin{pmatrix} L'_1 \\ L'_2 \\ \vdots \\ L'_n \end{pmatrix}$$
be the row partitions of the matrices $A$ and $\Delta A\, D_\alpha$. If $B$ were the exact inverse of $A$, we would obtain, when applying it to the equality $Ay = b'$ where $b' \in [\underline{b}', \overline{b}']$:
$$y = Bb', \qquad b' \in [\underline{b}', \overline{b}'].$$
Thus, in order to minimize $y_i$: if $B_{i,j} \le 0$, we have to consider $b'_j = \overline{b}'_j$ and the corresponding constraint $L_j y \le \overline{b}'_j$; if $B_{i,j} > 0$, we have to consider $b'_j = \underline{b}'_j$ and the constraint $-L_j y \le -\underline{b}'_j$. Obviously, the opposite choices apply when considering the system associated with the maximization of $y_i$. Since $B$ is only an approximation of the inverse of $A$, the results obtained are not guaranteed; as we shall see in the next sections, we only use the results obtained for $\Pi'$ as indications in
order to solve the problem over $\Pi([A],[b],[\underline{x},\overline{x}])$. The total cost of this step consists in the inversion of $A$, i.e. $n^3$ flops when using an LU factorization [3]. The determination of the systems associated with $\max_{y \in \Pi'} y_i$ and $\min_{y \in \Pi'} y_i$ only requires $O(n^2)$ comparisons.
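The sign inspection of $B$ described above fits in a few lines (NumPy sketch, our own naming and data):

```python
import numpy as np

def sign_pattern(B, i, minimize=True):
    """Predict, from the approximate inverse B of A, which bound of
    [bl', bu'] is saturated when optimising y_i over {bl' <= A y <= bu'}:
    since y ~ B b', minimising y_i takes b'_j at its lower bound where
    B[i, j] > 0 and at its upper bound where B[i, j] <= 0.  Returns d with
    d_j = +1 (upper bound saturated) or d_j = -1 (lower bound saturated)."""
    d = np.where(B[i, :] > 0, -1.0, 1.0)
    return d if minimize else -d

A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.linalg.inv(A)          # B = [[0.6, -0.2], [-0.2, 0.4]]
print(sign_pattern(B, 0))     # minimising y_0: [-1.  1.]
```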
3.2 Second step
3.2.1 Corresponding constraints over $\Pi([A],[b],[\underline{x},\overline{x}])$
This step consists in solving approximately the exact problem $\min_{y \in \Pi} y_i$. For the sake of clarity, we consider from now on the minimization of $y_1$. If $b'^{c} = \mathrm{mid}([\underline{b}', \overline{b}'])$ and $\Delta b' = \mathrm{rad}([\underline{b}', \overline{b}'])$, the set of saturated constraints that defines the minimizer of $y_1$ over $\Pi'$ can be expressed as
$$\begin{cases} (d_1 L_1)y = d_1 b'^{c}_1 + \Delta b'_1 \\ (d_2 L_2)y = d_2 b'^{c}_2 + \Delta b'_2 \\ \vdots \\ (d_n L_n)y = d_n b'^{c}_n + \Delta b'_n \end{cases} \quad\text{or}\quad DAy = Db'^{c} + \Delta b',$$
where $D$ is a diagonal matrix such that $|D| = I_n$ and $d_i = \pm 1$ depending on the saturated bound. The corresponding set of constraints over $\Pi$ is:
$$\begin{cases} (d_1 L_1 - L'_1)y = d_1 b'^{c}_1 + \Delta b'_1 \\ (d_2 L_2 - L'_2)y = d_2 b'^{c}_2 + \Delta b'_2 \\ \vdots \\ (d_n L_n - L'_n)y = d_n b'^{c}_n + \Delta b'_n \end{cases} \quad\Leftrightarrow\quad (DA - \Delta A\, D_\alpha)y = Db'^{c} + \Delta b'.$$
3.2.2 Checking the optimality of the constraint set
Our aim is to check the optimality of the above set of constraints with respect to the minimization of $y_1$ over $\Pi$. We use the following fundamental theorem of linear programming [2]:

Theorem 3 Let $u^t$ be the solution of $u^t(DA - \Delta A\, D_\alpha) = e_1^t$, where $e_1$ is the first canonical basis vector. If $u \ge 0$, then:
the set of constraints defined above is optimal for the problem $\min_{y \in \Pi} y_1$;
the point $y$ corresponding to the minimization satisfies $(DA - \Delta A\, D_\alpha)y = Db'^{c} + \Delta b'$ and
$$\min_{y \in \Pi} y_1 = e_1^t y = u^t(DA - \Delta A\, D_\alpha)y = u^t(Db'^{c} + \Delta b').$$
3.2.3 Algorithm
We now present an algorithm to solve approximately the equation $u^t(DA - \Delta A\, D_\alpha) = e_1^t$. Let $M$ be defined as $M = \Delta A\, D_\alpha B$, where $B$ is the computed inverse of $A$. Note that $M$ does not depend on the extremal-point problem we consider. We use the following iterative scheme:
$$\lambda_1^t = e_1^t B, \qquad \lambda_{i+1}^t = (\lambda_i^t D)M.$$
We stop the iteration scheme when $\|\lambda_{k+1}\| \le \varepsilon_1$, a small threshold, and we set $\tilde{u}^t = (\sum_{i=1}^{k} \lambda_i^t)D$. We consider that the set of constraints is optimal if $\tilde{u} \ge -\varepsilon_2$, where $\varepsilon_2$ is a small positive vector. Therefore, if $\tilde{u} \ge -\varepsilon_2$, we go to the third step; otherwise, we perform a step of the simplex algorithm and then go back to step 2. Note that if $\tilde{u}'$ is the vector associated with the maximization of $y_1$, then $\tilde{u}'^t = (\sum_{i=1}^{k} (-1)^{i+1}\lambda_i^t)D$. Therefore, the computation of $\tilde{u}'$ only involves $k$ additions of vectors.
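The truncated iteration can be sketched as follows (NumPy, our own naming; in the toy data $D_\alpha$ is taken as the identity, as for an enclosure of constant positive sign):

```python
import numpy as np

def approx_dual(B, M, D, i=0, eps1=1e-12, itmax=200):
    """Truncated Neumann-series scheme of Section 3.2.3 for the dual
    vector: lam_1^t = e_i^t B, lam_{k+1}^t = (lam_k^t D) M, and
    u~^t = (sum_k lam_k^t) D, stopping once ||lam_{k+1}|| <= eps1."""
    d = np.diag(D)
    lam = B[i, :].copy()              # lam_1^t = e_i^t B
    total = lam.copy()
    for _ in range(itmax):
        lam = (lam * d) @ M           # lam_{k+1}^t = (lam_k^t D) M
        if np.linalg.norm(lam) <= eps1:
            break
        total = total + lam
    return total * d                  # u~^t = (sum_k lam_k^t) D

# Toy data: A, its computed inverse B, and M = dA D_alpha B with D_alpha = I
A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.linalg.inv(A)
dA = 0.05 * np.ones((2, 2))
M = dA @ B
D = np.diag([-1.0, 1.0])
u = approx_dual(B, M, D)
# u approximately solves u^t (D A - dA D_alpha) = e_1^t
print(u @ (D @ A - dA))
```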
3.3 Third step
3.3.1 Computing sure bounds
We are looking for a proved outer inclusion of the interval hull of the solution set. We therefore need to perform a correction step, since all previous computations are not guaranteed. At this stage, we know that $\tilde{u} \ge -\varepsilon_2$. Let $\hat{u} = \max(\tilde{u}, 0)$,
$$S = \begin{pmatrix} -(A - \Delta A\, D_\alpha) \\ A + \Delta A\, D_\alpha \end{pmatrix} \quad\text{and}\quad s = \begin{pmatrix} -\overline{b}' \\ \underline{b}' \end{pmatrix},$$
so that $\Pi = \{y : y \ge 0,\ Sy \ge s\}$. If $\tau^t = \hat{u}^t S - e_1^t$, then we have
$$\hat{u}^t s \le \max\{z^t s;\ z \ge 0,\ z^t S \le e_1^t + \tau^t\} \le \min\{(e_1 + \tau)^t y;\ y \ge 0,\ Sy \ge s\} \quad\text{(duality theorem of L.P.)}$$
$$\le \min\{e_1^t y;\ y \ge 0,\ Sy \ge s\} + \max\{\tau^t y;\ y \ge 0,\ Sy \ge s\}$$
and, therefore,
$$\min_{y \in \Pi} y_1 \ge \hat{u}^t s - \max\{\tau^t y;\ 0 \le y \le \overline{x} - \underline{x}\}.$$
The expression above gives a proved lower bound for $\min_{y \in \Pi} y_1$ and therefore for $\min_{y \in \Sigma} y_1$.
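Since the feasible region of the last maximum is just a box, $\max\{\tau^t y : 0 \le y \le \overline{x} - \underline{x}\}$ has the closed form $\sum_j \max(\tau_j, 0)(\overline{x}_j - \underline{x}_j)$, so the whole correction step is a few vector operations. A floating-point sketch (our own naming; a truly proved bound would additionally need outward rounding):

```python
import numpy as np

def certified_lower_bound(u_tilde, S, s, width):
    """Correction step of Section 3.3.1: from a near-feasible dual vector
    u~, take u^ = max(u~, 0) and tau^t = u^t S - e_1^t; weak LP duality
    then gives min{y_1 : y >= 0, S y >= s} >= u^t s - sum_j max(tau_j, 0)*width_j."""
    u_hat = np.maximum(u_tilde, 0.0)
    tau = u_hat @ S
    tau[0] -= 1.0                          # tau^t = u^t S - e_1^t
    return u_hat @ s - np.maximum(tau, 0.0) @ width

# Toy polyhedron: y in [0,1]^2 written as S y >= s (true minimum of y_1 is 0).
# A slightly inexact dual vector still yields a valid lower bound.
S = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
s = np.array([-1.0, -1.0, 0.0, 0.0])
lb = certified_lower_bound(np.array([0.0, 0.0, 0.9, 0.05]), S, s, np.ones(2))
print(lb)  # -0.05, a valid lower bound on min y_1 = 0
```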
3.3.2 Practical implementation
In the sequel, we show how to bypass the computation of $\tau$. Let $A'$ denote the exact inverse of the computed inverse $B$ of $A$ (whenever $A'$ does not exist,
we cannot apply what follows). Let us set $\Delta = A' - A$, $C = DA - \Delta A\, D_\alpha$ and $\mu^t = (\tilde{u} - \hat{u})^t$, and define $u$ and $v$ by
$$\tilde{u}^t D = e_1^t B \sum_{i=0}^{k-1}(DM)^i,$$
$$u^t D = e_1^t B \sum_{i=0}^{\infty}(DM)^i \iff u^t(C + D\Delta) = e_1^t,$$
$$v^t D = e_1^t A^{-1} \sum_{i=0}^{\infty}(DM)^i \iff v^t C = e_1^t.$$

Theorem 4 If $\|M\| \le \frac{1}{2}$ and $\varepsilon_0\|A\|\,\|B\| \le \frac{1}{2}$, where $\varepsilon_0$, defined below, depends only on the machine accuracy, then:
$$\|\tau\| \le (\|A\| + \|\Delta A\|)(2\|\lambda_{k+1}\| + \|\mu\|) + 4\varepsilon_0\|A\|^2\|B\|\,(\|\tilde{u}\| + \|\lambda_{k+1}\|).$$

Proof:
Evaluation of $\|(\tilde{u} - u)^t C\|$: $(u - \tilde{u})^t D = e_1^t B \sum_{i=k}^{\infty}(DM)^i$. Since $\|M\| \le \frac{1}{2}$, $\|(\tilde{u} - u)^t C\| \le 2\|\lambda_{k+1}\|(\|A\| + \|\Delta A\|)$ and $\|(\tilde{u} - u)^t\| \le 2\|\lambda_{k+1}\|$.
Evaluation of $\|(v - u)^t C\|$: $(v - u)^t C = u^t D\Delta$ and therefore $\|(v - u)^t C\| \le \|\Delta\|(\|\tilde{u}\| + 2\|\lambda_{k+1}\|)$.
Evaluation of $\|(v - \tilde{u})^t C\|$: $\|(v - \tilde{u})^t C\| \le \|(v - u)^t C\| + \|(\tilde{u} - u)^t C\| \le 2\|\lambda_{k+1}\|(\|A\| + \|\Delta A\|) + \|\Delta\|(\|\tilde{u}\| + 2\|\lambda_{k+1}\|)$.
Evaluation of $\|\Delta\|$: if the computed inverse $B$ of $A$ is obtained with an LU factorization, we know [5] that $BA = I + E$ where $\|E\| \le \varepsilon_0\|A\|\,\|B\|$, and $\varepsilon_0$ depends on the machine accuracy. Thus $A' = A(I + E)^{-1}$ and, since $\|E\| \le \varepsilon_0\|A\|\,\|B\| \le \frac{1}{2}$,
$$A' - A = AE\sum_{k=0}^{\infty}(-1)^{k+1}E^k \quad\text{and finally}\quad \|\Delta\| \le 2\varepsilon_0\|A\|^2\|B\|.$$
Since $\tau^t = \hat{u}^t C - e_1^t = (\tilde{u} - v)^t C + (v^t C - e_1^t) - \mu^t C$, combining the bounds above (and using $\|\tilde{u}\| + 2\|\lambda_{k+1}\| \le 2(\|\tilde{u}\| + \|\lambda_{k+1}\|)$) proves the theorem. The crucial point is that this majoration of $\tau$ does not require additional computations. $\Box$
Figure 5: $\sigma_1$. Figure 6: $\sigma_2$. Figure 7: not centered, $\mathrm{norm}(\Delta A) = 0.1\,\mathrm{norm}(A)/\mathrm{cond}(A)$.
4 Numerical results

In this section, we describe numerical results obtained with random matrices $A$ and $\Delta A$ and random vectors $b$ and $\Delta b$. We compare the results of the proposed algorithm after one iteration with the results of the algorithm proposed by Rump [8]. The initial enclosure we consider is the result of Rump's algorithm. It is known [9] that, for Rump's algorithm, the ratio
$$\frac{\text{volume of Rump's outer inclusion}}{\text{volume of the interval hull}}$$
depends on $\mathrm{norm}(\Delta A)\,\mathrm{cond}(A)/\mathrm{norm}(A)$. We therefore display results according to $\mathrm{norm}(\Delta A)\,\mathrm{cond}(A)/\mathrm{norm}(A)$. As the quality of the enclosure of $\Sigma$ in $\Pi$ depends on the position of the solution set with respect to $0$, we also display the results for $\Sigma$ centered or not centered (that is to say, containing $0$ or not intersecting any axis). The x-axis represents the size of the matrices. In what follows, we display $\sigma_1$, the average number of steps of the simplex algorithm necessary for the computation of an extremal point (step 2), and
$$\sigma_2 = 1 - \frac{\text{width(simplex method)}}{\text{width(Rump's algorithm)}}.$$
Figure 8: $\sigma_1$. Figure 9: $\sigma_2$. Figure 10: centered, $\mathrm{norm}(\Delta A) = 0.1\,\mathrm{norm}(A)/\mathrm{cond}(A)$.
Figure 11: $\sigma_1$. Figure 12: $\sigma_2$. Figure 13: not centered, $\mathrm{norm}(\Delta A) = 0.01\,\mathrm{norm}(A)/\mathrm{cond}(A)$.
Figure 14: $\sigma_1$. Figure 15: $\sigma_2$. Figure 16: centered, $\mathrm{norm}(\Delta A) = 0.01\,\mathrm{norm}(A)/\mathrm{cond}(A)$.
5 Algorithm for the inner inclusion

We now present an algorithm which computes an inner inclusion of the solution set. By inner inclusion, we mean an interval vector $[\underline{y}, \overline{y}]$ such that $[\underline{y}, \overline{y}] \subseteq \square([A],[b]) \subseteq \Pi([A],[b],[\underline{x},\overline{x}])$. Note that it is a completely different problem to solve $[\underline{y}, \overline{y}] \subseteq \Sigma([A],[b])$. The main interest of the inner inclusion we compute is that it allows us to estimate the accuracy of the outer inclusion and to enclose the interval hull of the solution set between two interval vectors. Moreover, numerical results indicate that the inner inclusion is very close to the interval hull (in fact, it is very often the exact interval hull). The algorithm is based on the results for the outer inclusion. It computes $2n$ points of the solution set which are expected to be close to extremal points.
5.1 How to find inner "extremal" points?
Let us consider again the problem of the minimization of $y_1$. We know that the extremal point of $\Pi$ which realizes this minimization is defined by:
$$\begin{cases} (d_1 L_1 - L'_1)y = d_1 b'^{c}_1 + \Delta b'_1 \\ (d_2 L_2 - L'_2)y = d_2 b'^{c}_2 + \Delta b'_2 \\ \vdots \\ (d_n L_n - L'_n)y = d_n b'^{c}_n + \Delta b'_n \end{cases} \iff (DA - \Delta A\, D_\alpha)y = Db'^{c} + \Delta b' \iff D(Ax - b) = \Delta A(D_\alpha x + \beta) + \Delta b.$$
Let us now consider a point on the frontier of $\Sigma$.
Lemma 3 (Rohn) $x$ belongs to the frontier of $\Sigma$ $\iff$ $|Ax - b| = \Delta A\,|x| + \Delta b$, and
$$|Ax - b| = \Delta A\,|x| + \Delta b \iff \exists D' \text{ diagonal},\ |D'| = I_n,\ D'(Ax - b) = \Delta A\,|x| + \Delta b.$$
This lemma is a direct application of the Oettli-Prager theorem [7]. Since $D_\alpha x + \beta$ represents an approximation of $|x|$, we can notice an analogy between the definitions of $x$ in both expressions. We therefore consider, as the extremal point associated with the minimization of $x_1$ for the inner inclusion, the point $x$ defined by
$$D(Ax - b) = \Delta A\,|x| + \Delta b,$$
where $D$ is the matrix associated with the minimization of $y_1$ over $\Pi$.
5.2 Algorithm
The algorithm consists in solving the equation
$$D(Ax - b) = \Delta A\,|x| + \Delta b \iff x = M|x| + a, \quad\text{where } M = A^{-1}D\,\Delta A \text{ and } a = A^{-1}(b + D\,\Delta b).$$
Since such a point belongs to the frontier of $\Sigma$, the algorithm leads to an inner inclusion of $\square([A],[b])$. If we suppose that $[A]$ is strongly regular, that is to say $\rho(|A^{-1}|\Delta A) < 1$, then $\rho(M) < 1$ and we can use the algorithm proposed by Rohn:
Theorem 5 If $\rho(M) < 1$, then, for every $a$, the equation $x = M|x| + a$ has a unique solution, and the iteration $x^{l+1} := M|x^l| + a$ $(l = 0, 1, 2, \ldots)$ converges to this solution for every choice of the starting vector $x^0$.
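The iteration of Theorem 5 is immediate to implement; a sketch with hypothetical contraction data (our own naming):

```python
import numpy as np

def rohn_iteration(M, a, x0=None, tol=1e-12, itmax=1000):
    """Fixed-point iteration of Theorem 5: x_{l+1} = M|x_l| + a.
    Converges to the unique solution of x = M|x| + a under strong
    regularity (here ||M|| < 1 makes the map a contraction)."""
    x = np.zeros_like(a) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(itmax):
        x_new = M @ np.abs(x) + a
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

# Hypothetical data with ||M||_inf = 0.4 < 1
M = np.array([[0.2, -0.1], [0.1, 0.3]])
a = np.array([1.0, -0.5])
x = rohn_iteration(M, a)
print(np.allclose(x, M @ np.abs(x) + a))  # True: x is the fixed point
```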
In order to obtain a good starting vector, we can solve the equation
$$D(Ax - b) = \Delta A(D_\alpha x + \beta) + \Delta b.$$
We therefore start from the vector which corresponds to the minimization of $x_1$ over $\Pi$. When $\rho(M)$ is not small enough, it is known that the convergence may be slow. Rohn therefore proposed the sign-accord algorithm, which does not require strong regularity of $[A]$. It consists in solving $x = MD'x + a$ for different matrices $D'$.
Sign-accord algorithm (Rohn)
Step 1: Select $D'$ with $|D'| = I_n$.
Step 2: For $s = 1, \ldots, 2^n$ do: solve $x = MD'x + a$; if $D'x \ge 0$, terminate (success); otherwise compute $k := \min\{j \in \{1, \ldots, n\} : D'_{jj}x_j < 0\}$ and change the sign of $D'_{kk}$.
Step 3: Terminate (failure).
Rohn [11] has proved that the algorithm is finite when $[A]$ is regular. Although the number of steps may be exponential, it is generally reasonable. If we know the signs of the solution of $D(Ax - b) = \Delta A(D_\alpha x + \beta) + \Delta b$, then we can start with the matrix $D'$ such that $D'x \ge 0$. In the cases where the points realizing the minimization of $x_1$ over $\Pi$ and over $\Sigma$ have the same sign vector, no change of sign is needed. In fact, none of the cases we considered for the outer inclusion required the execution of more than one step.
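A sketch of the sign-accord loop (NumPy, our own naming; the data are hypothetical and the regularity of $[A]$ is simply assumed via an iteration cap):

```python
import numpy as np

def sign_accord(M, a, d0=None, tol=0.0):
    """Rohn's sign-accord algorithm: find x and a sign vector d with
    x = M diag(d) x + a and d*x >= 0, so that diag(d) x = |x| and x
    solves x = M|x| + a.  Flips the first sign in disaccord each round."""
    n = len(a)
    d = np.ones(n) if d0 is None else d0.astype(float).copy()
    for _ in range(2 ** n):
        x = np.linalg.solve(np.eye(n) - M * d, a)   # solve x = M D' x + a
        bad = np.flatnonzero(d * x < -tol)
        if bad.size == 0:
            return x, d
        d[bad[0]] *= -1.0       # change the sign of D'_{kk}, k minimal
    raise RuntimeError("sign accord not reached")

M = np.array([[0.2, -0.1], [0.1, 0.3]])
a = np.array([1.0, -0.5])
x, d = sign_accord(M, a)
print(np.allclose(x, M @ np.abs(x) + a))  # True
```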
5.3 Numerical results.
We display results for matrices of the same kind as those used for the outer inclusion. We only consider the centered case, since the results for inner and outer inclusions are the same when $\Sigma$ does not intersect any axis. Each figure represents the evolution with $n$ of the two quantities
$$\sigma_1 = 1 - \frac{\text{width(inner inclusion)}}{\text{width(outer inclusion)}} \quad\text{and}\quad \sigma_2 = 1 - \frac{\text{width(outer inclusion)}}{\text{width(Rump's algorithm)}}.$$
Figure 17: $\sigma_1$. Figure 18: $\sigma_2$. Figure 19: centered, $\mathrm{norm}(\Delta A) = 0.1\,\mathrm{norm}(A)/\mathrm{cond}(A)$.
Figure 20: $\sigma_1$. Figure 21: $\sigma_2$. Figure 22: centered, $\mathrm{norm}(\Delta A) = 0.01\,\mathrm{norm}(A)/\mathrm{cond}(A)$.
We can see that the inner and outer inclusions are usually very close. Even in the case where $\mathrm{norm}(\Delta A) = 0.1\,\mathrm{norm}(A)/\mathrm{cond}(A)$, we usually obtain
$$1 - \frac{\text{width}(\square([A],[b]))}{\text{width(outer inclusion)}} \le 1\%.$$
Moreover, we can expect that the inner inclusion we obtain is usually the right one, since $\Pi$ and $\Pi'$ are close. More precisely, if we denote by $x^+$ the vector corresponding to the minimization of $x_1$ for the outer inclusion, by $x^-$ the vector corresponding to the minimization of $x_1$ for the inner inclusion, and if we suppose that $x^+$ and $x^-$ have the same sign vector, then:
$$\exists D, D',\ |D'| = |D| = I_n, \qquad D(Ax^+ - b) = \Delta A(D_\alpha x^+ + \beta) + \Delta b, \qquad D(Ax^- - b) = \Delta A\, D'x^- + \Delta b.$$
Therefore, if $x_{\mathrm{diff}} = x^+ - x^-$,
$$DAx_{\mathrm{diff}} = \Delta A\, D'x_{\mathrm{diff}} - \Delta A(|x^+| - D_\alpha x^+ - \beta)$$
and thus
$$x_{\mathrm{diff}} = -(DA - \Delta A\, D')^{-1}\Delta A(|x^+| - D_\alpha x^+ - \beta).$$
Hence
$$\frac{\mathrm{norm}(x_{\mathrm{diff}})}{\mathrm{norm}(\overline{x} - \underline{x})} \le 2\,\frac{\mathrm{norm}(\Delta A)\,\mathrm{cond}(A)}{\mathrm{norm}(A)}\,\frac{\mathrm{norm}(|x^+| - D_\alpha x^+ - \beta)}{\mathrm{norm}(\overline{x} - \underline{x})}.$$
This allows us to compute a majoration of the difference, and therefore to determine the quality of the outer inclusion, without computing the inner inclusion exactly.
6 Conclusion

In this paper, we showed how to derive from the simplex algorithm an efficient method to compute inner and outer inclusions of the united solution set. The outer inclusion is obtained in $O(n^3)$ flops and does not require preconditioning of the system. The inner inclusion very often coincides with the exact interval hull of the united solution set, although we have no way to prove this property. However, an estimation of the quality of the outer inclusion can be obtained, even without computing the inner inclusion.
References
[1] Alefeld, G. Inclusion methods for systems of nonlinear equations - the interval Newton method and modifications. In Topics in Validated Computations, J. Herzberger (Ed.), Elsevier Science (1994).
[2] Chvátal, V. Linear Programming. W.H. Freeman and Company (1983).
[3] Gill, P., Murray, W., Wright, M. Numerical Linear Algebra and Optimization. Addison-Wesley Publishing Company (1991).
[4] Kreinovich, V., Rohn, J. Computing exact componentwise bounds on solutions of linear systems with interval data is NP-hard. SIAM J. Matrix Anal. Appl. 16(2) (1995).
[5] Lascaux, P., Théodor, R. Analyse Numérique Matricielle Appliquée à l'Art de l'Ingénieur. Masson.
[6] Neumaier, A. Interval Methods for Systems of Equations. Cambridge University Press (1990).
[7] Oettli, W., Prager, W. Compatibility of approximate solution of linear equations with given error bounds for coefficients and right-hand sides. Numer. Math. 6, 405-409 (1964).
[8] Rump, S.M. On the solution of interval linear systems. Computing 47, 337-353 (1992).
[9] Rump, S.M. Verification methods for dense and sparse systems of equations. In Topics in Validated Computations, J. Herzberger (Ed.), Elsevier Science (1994).
[10] Rohn, J. NP-hardness results for linear algebraic problems with interval data. In Topics in Validated Computations, J. Herzberger (Ed.), Elsevier Science (1994).
[11] Rohn, J. Checking bounds on solutions of linear interval equations is NP-hard. Linear Algebra and Its Applications, 223-224:589-596.
[12] Schrijver, A. Theory of Linear and Integer Programming. John Wiley & Sons (1986).