4ccurate 4lgorithms in Floating Point 4rithmetic

Philippe LANGLOIS
DALI Research Team, University of Perpignan (UPVD), France
SCAN 2006 at Duisburg, Germany, 26-29 September 2006
Compensated algorithms in floating point arithmetic

1. Compensated algorithms: why and how?
2. Basic bricks towards compensated algorithms
3. Compensated Horner algorithm
4. Performance: arithmetic and processor issues
5. Conclusion
Algorithm accuracy vs. condition number

For a given working precision u:

Backward stable algorithms:
- The solution accuracy is no worse than condition number × computing precision (× O(problem dimension)).
- Example: arithmetic operators, classic algorithms for summation, dot product, Horner's method, backward substitution, GEPP, ...

How to manage ill-conditioned cases, i.e., when condition number > 1/u?

Highly accurate or even faithful algorithms:
- The solution accuracy is independent of the condition number (for a range of it).
- A faithfully rounded result is sometimes available.
- Example: Priest's algorithm, distillation algorithms, Rump-Ogita-Oishi for summation and dot product, iterative refinement with extra precision, polynomial evaluation of a canonical form (needs the roots!), ...
Compensated algorithms

A compensated algorithm corrects the generated rounding errors:
- the solution accuracy is no worse than computing precision + condition number × (computing precision)^K;
- the computation is performed only with the working precision u;
- the performance is no worse than expansion libraries (double-double, quad-double) and even more general ones (ARPREC, MPFR, ...).
It also returns a validated dynamic error bound (bonus).

Accuracy as if computed in twice the working precision:
- an alternative to double-double in, e.g., XBLAS;
- compensated summation: Neumaier (74), Sum2 in Ogita-Rump-Oishi (05);
- compensated Horner algorithm and compensated backward substitution.

Accuracy as if computed in K times the working precision:
- compensated summation: SumK in Ogita-Rump-Oishi (05);
- compensated Horner algorithm.
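As a concrete instance of the "twice the working precision" case, here is a minimal Python sketch of Sum2-style compensated summation (function names are mine, not from the talk; Python floats are IEEE-754 doubles with round to nearest):

```python
def two_sum(a, b):
    """Knuth's TwoSum: s = fl(a + b) and a + b = s + sigma exactly."""
    s = a + b
    b_app = s - a
    a_app = s - b_app
    sigma = (a - a_app) + (b - b_app)
    return s, sigma

def sum2(xs):
    """Compensated summation (Sum2, Ogita-Rump-Oishi): result as accurate
    as if the sum were computed in twice the working precision."""
    s = 0.0
    e = 0.0                      # running sum of the generated rounding errors
    for x in xs:
        s, sigma = two_sum(s, x)
        e = e + sigma            # correcting term, working precision only
    return s + e

data = [1.0, 1e16, -1e16]
print(sum(data))                 # naive recursive summation: 0.0 (the 1.0 is lost)
print(sum2(data))                # compensated: 1.0 (exact)
```

The whole computation stays in the working precision; only ordinary +, − are used, which is exactly what makes these algorithms fast on current processors.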
Algorithm accuracy vs. condition number

[Figure: relative forward error (from u to 1) vs. condition number (from 1 to 1/u^4) for backward stable algorithms, compensated algorithms, and highly accurate / faithful algorithms.]
Introducing example: rounding errors when solving Tx = b

The classic substitution algorithm computes xi = (bi − Σj=1..i−1 ti,j xj) / ti,i, for i = 1 : n.

Algorithm (... and its generated rounding errors, inner loop)
  si,0 = bi
  for j = 1 : i − 1 loop
    pi,j = ti,j ⊗ x̂j          { generates πi,j }
    si,j = si,j−1 ⊖ pi,j       { generates σi,j }
  end loop
  x̂i = si,i−1 ⊘ ti,i           { generates δi }

The global forward error in the computed x̂ is (∆xi)i=1:n, with

  ∆xi = xi − x̂i = (1/ti,i) Σj=1..i−1 (σi,j − πi,j − ti,j × ∆xj) + δi
Algorithm (Inlining the computation of the correcting term ∆x̂)
  for i = 1 : n loop
    si,0 = bi
    ui,0 = 0
    for j = 1 : i − 1 loop
      (pi,j , πi,j) = TwoProd(ti,j , x̂j)
      (si,j , σi,j) = TwoSum(si,j−1 , −pi,j)
      ui,j = ui,j−1 ⊖ (ti,j ⊗ ∆x̂j ⊕ πi,j ⊖ σi,j)
    end loop
    (x̂i , δi) = ApproxTwoDiv(si,i−1 , ti,i)
    ∆x̂i = ui,i−1 ⊘ ti,i ⊕ δi
  end loop

Algorithm (Compensating x̂ by adding the correction ∆x̂)
  for i = 1 : n loop
    x̄i = x̂i ⊕ ∆x̂i
  end loop
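The inlined correction can be sketched compactly in Python. This is my own transcription, not the author's code: the FMA-free realization of ApproxTwoDiv (via Dekker's exact product) and all helper names are assumptions, and the sketch targets a lower triangular system solved by forward substitution:

```python
def two_sum(a, b):
    s = a + b
    b_app = s - a
    return s, (a - (s - b_app)) + (b - b_app)

def two_prod(a, b):
    # Dekker's exact product with Veltkamp splitting (no FMA needed)
    p = a * b
    c = 134217729.0 * a                  # 2**27 + 1
    a_hi = c - (c - a); a_lo = a - a_hi
    c = 134217729.0 * b
    b_hi = c - (c - b); b_lo = b - b_hi
    return p, a_lo * b_lo - (((p - a_hi * b_hi) - a_lo * b_hi) - a_hi * b_lo)

def approx_two_div(a, b):
    """d = fl(a / b) and delta, a close approximation of the division error."""
    d = a / b
    p, pi = two_prod(d, b)               # d * b = p + pi exactly
    delta = ((a - p) - pi) / b
    return d, delta

def comp_forward_subst(T, b):
    """Compensated forward substitution for a lower triangular matrix T
    (list of rows); returns the corrected solution of T x = b."""
    n = len(b)
    x, dx = [0.0] * n, [0.0] * n
    for i in range(n):
        s, u = b[i], 0.0
        for j in range(i):
            p, pi = two_prod(T[i][j], x[j])
            s, sigma = two_sum(s, -p)
            u -= (T[i][j] * dx[j] + pi) - sigma   # accumulate correcting term
        x[i], delta = approx_two_div(s, T[i][i])
        dx[i] = u / T[i][i] + delta
    return [xi + dxi for xi, dxi in zip(x, dx)]
```

On T = [[1, 0], [3, 1]] and b = [1/3, 1], plain substitution returns x2 = 0 (total cancellation), while the compensated version recovers the exact solution x2 = 2**-54 of the stored floating point data.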
Accuracy as in doubled precision for Tx = b

dtrsv: IEEE-754 double precision substitution algorithm;
dtrsv_cor: compensated substitution algorithm;
dtrsv_x: XBLAS substitution with inner computation in double-double arithmetic.

[Figure: relative forward error vs. condition number for dtrsv, dtrsv_cor and dtrsv_x, together with the bounds u·n·cond and u + u²·n·cond; dtrsv_cor and dtrsv_x coincide.]

Compensated substitution is as accurate as if computed in doubled precision. The actual accuracy of compensated substitution is bounded as

  ‖x − x̄‖∞ / ‖x‖∞ ≲ u + n × cond(T, x) × u².
Measured running times are interesting

The general behavior:
1. Overhead to double the precision: measured running times are significantly better than expected from the theoretical flop count.
2. Compensated versus double-double: the compensated algorithm runs about two times faster than its double-double counterpart.

An example:
Env. 1: Intel Celeron, 2.4 GHz, 256 kB L2 cache, GCC 3.4.1
Env. 2: Intel Pentium 4, 3.0 GHz, 1024 kB L2 cache, GCC 3.4.1

  ratio                          env.   measured times (min / mean / max)   theoretical #flops
  compensated / double            1      2.17 /  2.95 /  9.70                13.5
                                  2      4.50 /  8.14 /  9.95
  double-double / double          1      4.02 /  5.81 / 19.66                22.5
                                  2      8.82 / 16.10 / 19.60
  double-double / compensated     1      1.81 /  1.95 /  2.07                 1.67
                                  2      1.87 /  1.98 /  2.03
Our first step towards compensated algorithms (1998-2003)

CENA: a software tool for automatic linear correction of rounding errors.

Motivation: automatic improvement and verification of the accuracy of a result computed by a program considered as a black box.

Principle: compute a linear approximation of the global absolute error with respect to the generated elementary rounding errors and use it as a correcting term.

CENA = algorithmic differentiation + computation of elementary rounding errors + running error analysis + overloading facilities (e.g., Fortran 90).

It should be considered as a development tool: easy to use, and it helps identify where to use more precision. Step 2: inlining the correction! More details in BIT 00, JCAM 01, TCS 03.
2. Basic bricks towards compensated algorithms: classic error free transformations.
Classic error free transformations

Error free transformations (EFT) are properties and algorithms to compute the generated elementary rounding errors at the current working precision: for entries a, b in F,

  a ◦ b = fl(a ◦ b) + ε,   with the error term ε in F.

Related works:
- EFT are so named by Ogita-Rump-Oishi (SISC, 05); sum-and-roundoff pair and product-and-roundoff pair in Priest (PhD, 92); no special name before.
- Recent and interesting discussion in Nievergelt (TOMS, 04).
- Algorithms to compute the error term depend strongly on the properties of the floating point arithmetic, e.g., the radix of representation, the rounding mode, ...
Classic error free transformations

Be careful: some of the following relations only hold for IEEE-754 arithmetic with rounding to the nearest.

Summation: Møller (65), Kahan (65), Knuth (74), Priest (91).
  (s, σ) = TwoSum(a, b) is such that a + b = s + σ and s = a ⊕ b.

Product: Veltkamp and Dekker (72), ...
  (p, π) = TwoProd(a, b) is such that a × b = p + π and p = a ⊗ b.

FMA: Boldo+Muller (05).
  (p, δ1, δ2) = ErrFMA(a, b, c) is such that p = FMA(a, b, c) and a × b + c = p + δ1 + δ2.

Division and square root: Pichat (77), Markstein (90).
  If d = a ⊘ b, then r = a − d × b is a floating point value. EFT exist for the remainder and for the square root but are not useful hereafter.
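The classic FMA-free TwoProd relies on Veltkamp's splitting and Dekker's product. A Python sketch under the same assumptions as above (IEEE-754 doubles, round to nearest, no overflow or underflow); the error-free property is checked exactly over the rationals:

```python
from fractions import Fraction

SPLIT = 134217729.0              # 2**27 + 1, Veltkamp's constant for doubles

def veltkamp_split(a):
    """a = a_hi + a_lo with a_hi, a_lo fitting in about half the mantissa."""
    c = SPLIT * a
    a_hi = c - (c - a)
    return a_hi, a - a_hi

def two_prod(a, b):
    """Dekker's TwoProd: p = fl(a * b) and a * b = p + pi exactly."""
    p = a * b
    a_hi, a_lo = veltkamp_split(a)
    b_hi, b_lo = veltkamp_split(b)
    pi = a_lo * b_lo - (((p - a_hi * b_hi) - a_lo * b_hi) - a_hi * b_lo)
    return p, pi

# Check the error-free property exactly over the rationals:
p, pi = two_prod(0.1, 0.3)
assert Fraction(0.1) * Fraction(0.3) == Fraction(p) + Fraction(pi)
```

With a hardware FMA, the same pair is obtained in 2 flops as p = a ⊗ b, π = FMA(a, b, −p); the 17-flop version above is what remains portable without it.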
Not error free transformations

Approximate division: Langlois+Nativel (98)
  (d, δ̂) = ApproxTwoDiv(a, b) is such that d = a ⊘ b, a/b = d + δ and |δ̂ − δ| ≤ u|δ|.

Approximate square root: Langlois+Nativel (98).
Algorithms to compute EFT: two examples

Algorithm (fast-two-sum, Dekker (71))
  function [s, σ] = FastTwoSum(a, b)   % requires |a| ≥ |b|
    s := a ⊕ b
    bapp := s ⊖ a
    σ := b ⊖ bapp
  Cost: 3 flops and 1 test.

Algorithm (two-sum, without pre-ordering, Knuth (74))
  function [s, σ] = TwoSum(a, b)
    s := a ⊕ b
    bapp := s ⊖ a
    aapp := s ⊖ bapp
    δb := b ⊖ bapp
    δa := a ⊖ aapp
    σ := δa ⊕ δb
  Cost: 6 flops.
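A direct Python transcription of the two algorithms above (⊕ and ⊖ are ordinary double precision + and − in round-to-nearest; function names are mine):

```python
def fast_two_sum(a, b):
    """Dekker's FastTwoSum, 3 flops; requires |a| >= |b|."""
    s = a + b
    b_app = s - a            # the part of b actually absorbed into s
    sigma = b - b_app        # what was lost: a + b = s + sigma exactly
    return s, sigma

def two_sum(a, b):
    """Knuth's TwoSum, 6 flops; no pre-ordering of a and b needed."""
    s = a + b
    b_app = s - a
    a_app = s - b_app
    delta_b = b - b_app
    delta_a = a - a_app
    return s, delta_a + delta_b

# a + b = s + sigma holds exactly: the absorbed 1.0 is recovered in sigma
print(two_sum(1e16, 1.0))        # (1e16, 1.0)
print(fast_two_sum(1e16, 1.0))   # (1e16, 1.0)
```

TwoSum costs twice as many flops as FastTwoSum, but it avoids the branch on |a| ≥ |b|, which is usually the better trade on pipelined processors.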
Error free transformations: summary

EFT for proving theorems on error bounds:
- standard model: a ◦ b = fl(a ◦ b)(1 + η) with |η| ≤ u;
- EFT: a ◦ b = fl(a ◦ b) + ε;
- mathematical equalities between floating point entries.

Algorithms to compute EFT:
- yield the generated rounding errors,
- are performed at the working precision,
- require no change of rounding mode,
- benefit better from hardware units: pipelining.

Cost in #flops:
  TwoSum: 6    FastTwoSum: 3 (+1 test)    TwoProd: 17    TwoProdFMA: 2    ThreeFMA: 17
3. Compensated Horner algorithm: compensated Horner algorithm, validated error bounds, faithful rounding with compensated Horner algorithm.
Accuracy of the Horner algorithm

We consider the polynomial p(x) = Σi=0..n ai x^i, with ai ∈ F and x ∈ F, and its condition number

  cond(p, x) = ( Σi=0..n |ai x^i| ) / |p(x)| ≥ 1.

Algorithm (Horner)
  function r0 = Horner(p, x)
    rn = an
    for i = n − 1 : −1 : 0
      ri = ri+1 ⊗ x ⊕ ai
    end

Relative accuracy of the evaluation with the Horner scheme:

  |Horner(p, x) − p(x)| / |p(x)| ≤ γ2n cond(p, x),   with γ2n ≈ 2nu.

Here u is the computing precision: IEEE-754 double, 53-bit mantissa, rounding to the nearest ⇒ u = 2^−53.
EFT for the Horner scheme

Let p be a polynomial of degree n with coefficients in F. If no underflow occurs,

  p(x) = Horner(p, x) + (pπ + pσ)(x),

with pπ(X) = Σi=0..n−1 πi X^i, pσ(X) = Σi=0..n−1 σi X^i, and πi and σi in F.

Algorithm (Horner scheme)
  function r0 = Horner(p, x)
    rn = an
    for i = n − 1 : −1 : 0 loop
      pi = ri+1 ⊗ x          % rounding error πi ∈ F
      ri = pi ⊕ ai           % rounding error σi ∈ F
    end loop

Algorithm (EFT for Horner)
  function [r0, pπ, pσ] = EFTHorner(p, x)
    rn = an
    for i = n − 1 : −1 : 0 loop
      [pi, πi] = TwoProd(ri+1, x)
      [ri, σi] = TwoSum(pi, ai)
      pπ[i] = πi
      pσ[i] = σi
    end loop
The compensated Horner algorithm and its accuracy

(pπ + pσ)(x) is the global error affecting Horner(p, x)
⇒ we compute an approximation of (pπ + pσ)(x) as a correcting term.

Algorithm (Compensated Horner algorithm)
  function r̄ = CompHorner(p, x)
    [r̂, pπ, pσ] = EFTHorner(p, x)    % r̂ = Horner(p, x)
    ĉ = Horner(pπ ⊕ pσ, x)
    r̄ = r̂ ⊕ ĉ

Theorem
Given a polynomial p with floating point coefficients and a floating point value x,

  |CompHorner(p, x) − p(x)| / |p(x)| ≤ u + γ2n² cond(p, x),   with γ2n² ≈ (2nu)².
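Putting the bricks together, here is a self-contained Python sketch of EFTHorner and CompHorner (TwoSum and a Dekker-style TwoProd are inlined; this is my transcription, not the author's code):

```python
def two_sum(a, b):
    s = a + b
    b_app = s - a
    return s, (a - (s - b_app)) + (b - b_app)

def two_prod(a, b):
    p = a * b
    c = 134217729.0 * a                  # Veltkamp splitting, 2**27 + 1
    a_hi = c - (c - a); a_lo = a - a_hi
    c = 134217729.0 * b
    b_hi = c - (c - b); b_lo = b - b_hi
    return p, a_lo * b_lo - (((p - a_hi * b_hi) - a_lo * b_hi) - a_hi * b_lo)

def horner(p, x):
    """p = [a0, a1, ..., an]: plain Horner evaluation of sum(ai * x**i)."""
    r = p[-1]
    for a in reversed(p[:-1]):
        r = r * x + a
    return r

def eft_horner(p, x):
    """EFT for Horner: p(x) = r + (p_pi + p_sigma)(x) exactly."""
    r = p[-1]
    p_pi, p_sigma = [], []
    for a in reversed(p[:-1]):
        q, pi = two_prod(r, x)           # rounding error pi
        r, sigma = two_sum(q, a)         # rounding error sigma
        p_pi.append(pi); p_sigma.append(sigma)
    return r, p_pi[::-1], p_sigma[::-1]  # error coefficients in degree order

def comp_horner(p, x):
    """Compensated Horner: as accurate as Horner in doubled precision."""
    r, p_pi, p_sigma = eft_horner(p, x)
    c = horner([pi + sg for pi, sg in zip(p_pi, p_sigma)], x)  # correcting term
    return r + c

# (x - 1)**5 expanded, evaluated close to its root x = 1 (huge condition number)
p5 = [-1.0, 5.0, -10.0, 10.0, -5.0, 1.0]
x = 1.0 + 2.0 ** -10                     # exact value of p5(x) is 2**-50
```

At x = 1 + 2^−10 the condition number is about 3.6·10^16, so plain Horner returns essentially noise, while the compensated result matches the exact value 2^−50 to within the theorem's bound.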
. . . as if computed in doubled precision

[Figure: accuracy of polynomial evaluation (n = 25): relative forward error vs. condition number for Horner, DDHorner and CompHorner, together with the a priori bounds γ2n·cond and u + γ2n²·cond.]

- Expected accuracy improvement compared to Horner.
- The a priori bound is correct but pessimistic.
- The same accuracy is provided by CompHorner and DDHorner (Horner with inner computations in double-double arithmetic).
How to validate the accuracy of the computed evaluation?

Classic dynamic approaches:
- compensated RoT, or an approximate Wilkinson-style running error bound;
- interval arithmetic.
A dynamic and validated error bound

Theorem
Given a polynomial p with floating point coefficients and a floating point value x, consider res = CompHorner(p, x). The absolute forward error affecting the evaluation is bounded by

  |CompHorner(p, x) − p(x)| ≤ fl( u|res| + ( γ4n+2 HornerSum(|pπ|, |pσ|, |x|) + 2u²|res| ) ).

- The dynamic bound is computed entirely in floating point arithmetic, using rounding to the nearest only.
- The dynamic bound improves the a priori bound.
- The proof uses EFT and the standard model of floating point arithmetic (as in the Ogita-Rump-Oishi paper).
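The dynamic bound is cheap to transcribe. Below is a hedged Python sketch of CompHorner together with its bound (my own code under the stated assumptions: HornerSum is realized as a plain Horner evaluation of the coefficients |πi| + |σi| at |x|, and the fine rounding-direction details of the proof are not reproduced):

```python
def two_sum(a, b):
    s = a + b
    b_app = s - a
    return s, (a - (s - b_app)) + (b - b_app)

def two_prod(a, b):
    p = a * b
    c = 134217729.0 * a                  # Veltkamp splitting, 2**27 + 1
    a_hi = c - (c - a); a_lo = a - a_hi
    c = 134217729.0 * b
    b_hi = c - (c - b); b_lo = b - b_hi
    return p, a_lo * b_lo - (((p - a_hi * b_hi) - a_lo * b_hi) - a_hi * b_lo)

def horner(p, x):
    r = p[-1]
    for a in reversed(p[:-1]):
        r = r * x + a
    return r

def comp_horner_with_bound(p, x):
    """CompHorner result plus the dynamic error bound of the theorem above."""
    u = 2.0 ** -53
    n = len(p) - 1
    # EFT for Horner: collect the elementary rounding errors pi_i, sigma_i
    r = p[-1]
    p_pi, p_sigma = [], []
    for a in reversed(p[:-1]):
        q, pi = two_prod(r, x)
        r, sigma = two_sum(q, a)
        p_pi.append(pi); p_sigma.append(sigma)
    correction = horner([pi + s for pi, s in zip(p_pi, p_sigma)][::-1], x)
    res = r + correction
    # HornerSum(|p_pi|, |p_sigma|, |x|): same evaluation on absolute values
    hs = horner([abs(pi) + abs(s) for pi, s in zip(p_pi, p_sigma)][::-1], abs(x))
    g = (4 * n + 2) * u / (1.0 - (4 * n + 2) * u)   # gamma_{4n+2}
    bound = u * abs(res) + (g * hs + 2.0 * u * u * abs(res))
    return res, bound

res, bound = comp_horner_with_bound([-1.0, 5.0, -10.0, 10.0, -5.0, 1.0],
                                    1.0 + 2.0 ** -10)
```

For (x − 1)^5 near x = 1 the returned bound is tiny (it scales with the rounding errors actually generated), whereas the a priori bound must cover the worst case over all data of the same size.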
The dynamic bound improves the a priori bound

Example: evaluating p5(x) = (x − 1)^5 at 1024 points around x = 1.

[Figure: absolute forward error for x in [0.995, 1.005]: actual forward error, a priori error bound and dynamic error bound; the dynamic bound lies between the actual error and the a priori bound.]
Motivation for faithfully rounding a real value

A faithfully rounded result is one of the two floating point values neighbouring the exact result.
Interest: sign determination, e.g., for geometric predicates.
[Figure: accuracy of the polynomial evaluation with CompHorner (n = 50): relative forward error vs. condition number, with the bound u + γ2n²·cond; full accuracy (error ≈ u) is observed while cond(p, x) remains below about 1/u.]
Two sufficient conditions for faithful rounding

Let p be a polynomial of degree n, and let x be a floating point value.

Theorem
CompHorner(p, x) is a faithful rounding of p(x) whenever one of the next two bounds is satisfied.
An a priori bound for the condition number: cond(p, x)