EULER-q ALGORITHM FOR ATTITUDE DETERMINATION FROM VECTOR OBSERVATIONS
Daniele Mortari*
Università degli Studi "La Sapienza" di Roma, 00138 Italy
A new cost function for the definition of the optimal attitude, and the Euler-q algorithm based on it, are presented. The optimality criterion is derived from the rotational property of the Euler axis and allows a fast and reliable computation of the optimal eigenaxis. The mathematical procedure leads to the eigenanalysis of a 3×3 symmetric matrix whose eigenvector associated with the smallest eigenvalue is the optimal Euler axis. This eigenvector is evaluated by a simple cross product, and the singularity is avoided using the method of sequential rotations. The rotational error is then analyzed and defined, and an accuracy comparison test is performed between a previously accepted criterion of optimal attitude and the proposed one. Results show that the earlier definition of optimality is slightly more precise than Euler-q, which, in turn, demonstrates a clear gain in computational speed.
Nomenclature
A, T = estimated and true attitude matrices (direction-cosine matrices)
e, Φ = Euler axis (eigenaxis, principal axis) and Euler angle (principal angle)
s, v = observed and reference directions (unit vectors)
S    = true direction of the observed s (unit vector)
ξ    = relative precision of the (v − s) direction
β*   = precision of the (v − s) direction (deg)
α    = relative precision of an attitude sensor
β    = precision of an attitude sensor (deg)
B, z = attitude data matrix and vector
λ    = eigenvalue, Lagrangian multiplier
L, G = loss and gain functions

Paper 96-173 presented at the 6th Annual AAS/AIAA Space Flight Mechanics Meeting, Austin, TX, Feb. 11-15, 1996. This work was supported in part by the Italian Space Agency Contract RS-82 #051602017.
*Assistant Professor, Aerospace Engineering School. Member of the American Astronautical Society.
Introduction
The estimation of the orientation of a "Spacecraft Body System" (SBS) with respect to another reference system, such as an "Inertial Reference System" (IRS), that is, the estimation of the spacecraft attitude, is an important task in spacecraft navigation, dynamics, and control. This task, which has to be executed in the fastest and most precise manner, is often accomplished using the n unit vectors si observed by the attitude sensors in the SBS together with the same directions evaluated by proper codes in the IRS, that is, the unit vectors vi. In the SBS the unit vector vi is represented by the unit vector A vi, where A is the unknown attitude matrix (to be computed) estimating the spacecraft orientation, which can be expressed in terms of the Euler axis e (eigenaxis, principal axis) and the Euler angle Φ (principal angle) by

A = I cosΦ + (1 − cosΦ) e eᵀ − [e×] sinΦ     (1)
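To make Eq. (1) concrete, the following minimal sketch (Python with NumPy; the function names are illustrative, not from the paper) builds the direction-cosine matrix from a given Euler axis and angle.

```python
import numpy as np

def cross_matrix(e):
    """Skew-symmetric matrix [e x] such that cross_matrix(e) @ x == np.cross(e, x)."""
    return np.array([[0.0, -e[2], e[1]],
                     [e[2], 0.0, -e[0]],
                     [-e[1], e[0], 0.0]])

def attitude_matrix(e, phi):
    """Direction-cosine matrix of Eq. (1): A = I cos(phi) + (1 - cos(phi)) e e^T - [e x] sin(phi)."""
    e = np.asarray(e, dtype=float)
    e = e / np.linalg.norm(e)                      # the Euler axis must be a unit vector
    return (np.cos(phi) * np.eye(3)
            + (1.0 - np.cos(phi)) * np.outer(e, e)
            - np.sin(phi) * cross_matrix(e))
```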
where I and [e×] represent, respectively, the 3×3 identity matrix and the 3×3 skew-symmetric matrix performing the cross product with e. Let βi be the accuracy of the ith sensor; this means that the true direction Si can be displaced from the observed si by an angle smaller than βi. The sensor relative precisions αi are derived from the βi as follows

αi = 1 / [ βi Σk (1/βk) ]     (2)
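Since 1/(βi Σk 1/βk) = (1/βi)/Σk(1/βk), the weights of Eq. (2) can be computed as in the following sketch (Python/NumPy; illustrative names).

```python
import numpy as np

def sensor_weights(beta):
    """Relative precisions alpha_i of Eq. (2); they are positive, sum to one,
    and are larger for the more accurate sensors (smaller beta_i)."""
    inv = 1.0 / np.asarray(beta, dtype=float)
    return inv / inv.sum()
```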
As is easily observed, the relative precisions αi are positive, smaller than unity, larger for the more accurate sensors, and satisfy the condition Σi αi = 1. The optimal attitude estimation problem, known as the Wahba problem, is to estimate the spacecraft attitude by satisfying an optimality criterion and using the n ≥ 2 unit-vector pairs (si, vi). Up to now the optimality criterion [1], originally introduced by Wahba in 1965 and then completed by including the weights αi as well as the 1/2 multiplier, defines as optimal the attitude matrix A that minimizes the loss function

LW(A) = (1/2) Σi αi ||si − A vi||² = 1 − Σi αi siᵀ A vi     (3)
Equivalently, optimality is reached when the gain function

GW(A) = 1 − LW(A) = Σi αi siᵀ A vi = tr[ A Bᵀ ]     (4)
is maximal, where the attitude data matrix B has the form

B = Σi αi si viᵀ     (5)
Now, substituting for the attitude matrix its expression given by Eq. (1), the Wahba gain function expressed in terms of the Euler axis and angle becomes

GW(e,Φ) = cosΦ tr[B] + (1 − cosΦ) eᵀ B e + sinΦ zᵀ e     (6)
where the vector z = Σi αi si × vi, which accounts for the non-symmetric part of B, can be expressed directly using the off-diagonal elements of B as z = {b23 − b32, b31 − b13, b12 − b21}ᵀ. All of the existing algorithms that fully comply with Wahba's optimality criterion of Eqs. (3,4) achieve the same accuracy. Thus, those algorithms (e.g., the q-Method [2], QUEST [3], SVD [4], the Direct-Method [5], EAA [6], Euler-2, TRIAD-2, and Euler-n [7]) differ from each other only in their computational speed.
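The attitude data matrix B of Eq. (5), the vector z, and the Wahba gain of Eq. (4) can be sketched as follows (Python/NumPy; the names are illustrative).

```python
import numpy as np

def attitude_data(alpha, s_list, v_list):
    """B = sum_i alpha_i s_i v_i^T of Eq. (5) and z = {b23-b32, b31-b13, b12-b21}^T."""
    B = np.zeros((3, 3))
    for a, s, v in zip(alpha, s_list, v_list):
        B += a * np.outer(s, v)
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    return B, z

def wahba_gain(A, B):
    """Gain function of Eq. (4): G_W(A) = tr(A B^T)."""
    return np.trace(A @ B.T)
```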
Particular attention was therefore given to the development of faster attitude determination techniques, mainly because an increase of the computational speed allows the attitude information to be provided to the control system at a higher rate. Ref. [7] analyzes the possibility of computing separately the optimal Euler axis and angle (eopt, Φopt) using the Wahba definition of optimality. This led to the development of the Euler-2 (when n = 2) and Euler-n (when n > 2) algorithms. Euler-n evaluates the optimal Euler axis eopt and the optimal Euler angle Φopt in two consecutive steps of an iterative procedure. However, it turns out that it is possible to compute eopt and Φopt separately when n > 2, and without iterative procedures, provided that a different criterion of optimality for the attitude determination is adopted, as shown in the next section.
Euler-q Algorithm
As already stated, in the ideal case of no noise (βi = 0, i = 1, …, n), the attitude matrix satisfies all the conditions A vi = si. Premultiplying A vi = si by eᵀ, with A given by Eq. (1), the identities

eᵀ vi = eᵀ si     (7)

are derived. Note that Eq. (7) is a consequence of the attitude matrix being a rotational operator, which rotates the whole set of reference vectors vi about the axis e by the angle −Φ until it overlaps the set of observed vectors si. In the real case, however, because of the measurement errors, the conditions eᵀvi = eᵀsi, which imply eᵀ(vi − si) = (vi − si)ᵀe = 0, no longer hold; therefore, introducing the unit vectors di = (vi − si)/||vi − si||, the small differences

δi = eᵀ di = diᵀ e     (8)
arise in the real case. Let us define the loss function

LM(e) = Σi ξi δi² = Σi ξi eᵀ di diᵀ e = eᵀ H e     (9)
where the symmetric matrix

H = Σi ξi di diᵀ     (10)
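A minimal sketch of Eqs. (8)-(10) (Python/NumPy; illustrative names) builds the unit vectors di and the symmetric matrix H from the weighted pairs.

```python
import numpy as np

def h_matrix(xi, s_list, v_list):
    """H = sum_i xi_i d_i d_i^T of Eq. (10), with d_i = (v_i - s_i)/||v_i - s_i|| of Eq. (8)."""
    H = np.zeros((3, 3))
    for w, s, v in zip(xi, s_list, v_list):
        diff = np.asarray(v, dtype=float) - np.asarray(s, dtype=float)
        norm = np.linalg.norm(diff)
        if norm > 0.0:                      # a noiseless pair (v_i == s_i) gives no constraint
            d = diff / norm
            H += w * np.outer(d, d)
    return H
```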
is introduced, while the relative weights ξi are defined as follows. The more nearly the di direction is perpendicular to the Euler axis, the better the condition given by Eq. (7) is satisfied. The deviation of di from the perpendicularity condition depends on the sensor errors βi; therefore, the relative weight ξi, which establishes the relative precision of di, is a function of the βi angle. Referring to the spherical triangle identified by the directions vi, si, and Si, shown in Fig. 1, the worst case for the di direction deviation (that is, the βi* angle) occurs when the true observed direction Si is displaced from the measured si by the angle βi and the spherical triangle has a right angle at Si. In this position the βi* angle, which represents the maximum deviation of di, is maximized, and its value is obtained from

sinωi sinβi* = sinβi     (11)
where ωi is the angle between the si and vi directions. The relative weights ξi are then derived from the βi* in the same way as the αi are obtained from the βi, that is,

ξi = 1 / [ βi* Σk (1/βk*) ]     (12)
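The weights of Eqs. (11)-(12) can be sketched as follows (Python/NumPy; the clipping that guards against sin βi > sin ωi is an implementation choice of this sketch, not part of the paper).

```python
import numpy as np

def d_weights(beta, s_list, v_list):
    """xi_i of Eq. (12), using beta_i* from Eq. (11): sin(omega_i) sin(beta_i*) = sin(beta_i)."""
    beta_star = []
    for b, s, v in zip(beta, s_list, v_list):
        omega = np.arccos(np.clip(np.dot(s, v), -1.0, 1.0))   # angle between s_i and v_i
        beta_star.append(np.arcsin(np.clip(np.sin(b) / np.sin(omega), 0.0, 1.0)))
    inv = 1.0 / np.asarray(beta_star)
    return inv / inv.sum()
```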
The minimization of LM(e) can be obtained by means of the Lagrangian multiplier technique. To this end, the augmented cost function, which includes the constraint that the Euler axis be a unit vector, is defined as

LM*(e) = eᵀ H e − λ ( eᵀ e − 1 )     (13)

where λ is the Lagrangian multiplier. The stationarity condition implies

d LM*(e)/de = 2 H e − 2 λ e = 0     (14)

which leads to

H e = λ e     (15)
Equation (15) states that the sought Euler axis is an eigenvector of the H matrix associated with the eigenvalue λ. This eigenvalue is computed by premultiplying Eq. (15) by eᵀ; thus,

eᵀ H e = λ = LM(e)     (16)

Now, since LM(e) has to be a minimum, the unknown eigenvalue λ has to be the smallest one. Therefore, the optimal Euler axis eopt is the eigenvector of H associated with its smallest eigenvalue λmin:

H eopt = λmin eopt     (17)
Although Eq. (17) provides the solution to the problem, the complete eigenanalysis is unnecessary, as is demonstrated hereafter. In fact, it is possible to compute λmin directly from the third-order characteristic equation of the H matrix

λ³ + a λ² + b λ + c = 0     (18)

where the a, b, and c coefficients can be expressed in terms of the H-matrix elements as

a = −tr[H] = −h11 − h22 − h33
b = tr[adj(H)] = h11h22 + h11h33 + h22h33 − h12² − h13² − h23²
c = −det(H) = −( h11h22h33 + 2 h12h13h23 − h11h23² − h22h13² − h33h12² )     (19)
Since H is symmetric, its eigenvalues are real; therefore, the solution of the third-order Eq. (18) providing three real roots has to be chosen. This is accomplished [9,10] by setting

p² = (a/3)² − b/3,     q = (a/3)( b/2 − a²/9 ) − c/2     (20)

and

w = (1/3) cos⁻¹( q / p³ )     (21)

Then, the condition of having real roots leads to the solution

λ1 = −p ( √3 sin w + cos w ) − a/3
λ2 =  p ( √3 sin w − cos w ) − a/3
λ3 = 2 p cos w − a/3     (22)
Since 0 ≤ w ≤ π/3, it is easy to demonstrate that the λi computed by Eq. (22) satisfy the conditions

0 ≤ λ1 ≤ λ2 ≤ λ3     (23)
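The closed-form solution of Eqs. (18)-(22) can be sketched as follows (Python/NumPy; the guard on the degenerate p = 0 case, where the three roots coincide, is an implementation choice of this sketch).

```python
import numpy as np

def h_eigenvalues(H):
    """Eigenvalues of the symmetric 3x3 matrix H from Eqs. (19)-(22), ordered as in Eq. (23)."""
    a = -np.trace(H)
    b = (H[0, 0] * H[1, 1] + H[0, 0] * H[2, 2] + H[1, 1] * H[2, 2]
         - H[0, 1]**2 - H[0, 2]**2 - H[1, 2]**2)
    c = -np.linalg.det(H)
    p = np.sqrt(max((a / 3.0)**2 - b / 3.0, 0.0))              # Eq. (20)
    q = (a / 3.0) * (b / 2.0 - a * a / 9.0) - c / 2.0
    if p == 0.0:                                               # degenerate case: triple root
        return (-a / 3.0,) * 3
    w = np.arccos(np.clip(q / p**3, -1.0, 1.0)) / 3.0          # Eq. (21)
    lam1 = -p * (np.sqrt(3.0) * np.sin(w) + np.cos(w)) - a / 3.0
    lam2 = p * (np.sqrt(3.0) * np.sin(w) - np.cos(w)) - a / 3.0
    lam3 = 2.0 * p * np.cos(w) - a / 3.0
    return lam1, lam2, lam3                                    # Eq. (22): lam1 <= lam2 <= lam3
```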
When n = 2, the coefficient c = −det(H) = 0. In this case the three roots of Eq. (18) become

λ1 = 0
λ2 = −a/2 − √( (a/2)² − b )
λ3 = −a/2 + √( (a/2)² − b )     (24)

which still satisfy the conditions of Eq. (23). Hence, when n = 2, the computation of λmin is unnecessary because it results in λmin = λ1 = 0. Obviously, since λmin is very small [λmin = LM(e) ≈ 0, see Eq. (16)], it can also be computed in a very fast way by applying the Newton-Raphson iterative procedure to Eq. (18) with λ0 = 0 as the starting point. The first iteration gives λ1 = −c/b, while the subsequent values can be obtained recursively from

λi+1 = λi − ( λi³ + a λi² + b λi + c ) / ( 3 λi² + 2 a λi + b )     (25)

The iterative procedure converges so rapidly that a single application of Eq. (25) already yields a solution accurate to about 10⁻¹⁵. With respect to the closed-form solution, this iterative procedure has the double advantage of requiring fewer operations (that is, a gain in computational speed) and of avoiding the computation of square roots and inverse cosines [see Eqs. (20-22)], which reduces the numerical errors, because typical space-borne microprocessors have limited word length. However, the closed-form solution is still suggested because it allows the computation of (λ2 − λmin) and (λ3 − λmin), which, in turn, indicate the robustness of the solution as well as the distance from the singularity.
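A sketch of the Newton-Raphson alternative of Eq. (25) (Python; illustrative). Following the text, the iteration is started from λ = 0, so the first step gives λ = −c/b; the sketch assumes b ≠ 0.

```python
def lambda_min_newton(a, b, c, iterations=2):
    """Smallest root of Eq. (18) via the recursion of Eq. (25), started at lambda = 0."""
    lam = 0.0
    for _ in range(iterations):
        lam -= (lam**3 + a * lam**2 + b * lam + c) / (3.0 * lam**2 + 2.0 * a * lam + b)
    return lam
```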
Once λmin is computed, eopt is evaluated in an easy and fast manner. To this end, let us write Eq. (17) as follows

                               [ m1ᵀ ]          [ ma  mx  my ]
( H − λmin I ) eopt = M eopt = [ m2ᵀ ] eopt  =  [ mx  mb  mz ] eopt = 0     (26)
                               [ m3ᵀ ]          [ my  mz  mc ]
This equation states that all the row vectors of M are perpendicular to eopt. Therefore, the direction of the optimal Euler axis can be computed by a simple cross product between two row vectors of the M matrix. To maximize the accuracy, however, one must decide which pair of row vectors should be selected for the cross product. To this end the following three choices are available

e1 = m2 × m3 = { mb mc − mz²,  my mz − mx mc,  mx mz − my mb }ᵀ
e2 = m3 × m1 = { my mz − mx mc,  ma mc − my²,  mx my − mz ma }ᵀ
e3 = m1 × m2 = { mx mz − my mb,  mx my − mz ma,  ma mb − mx² }ᵀ     (27)

where all the ei are parallel and, therefore, differ from each other only in modulus. To evaluate the ei with the highest reliability it is necessary to identify which one has the greatest modulus. It is easy to demonstrate that this requirement is satisfied by finding which of the following

p1 = mb mc − mz²,     p2 = ma mc − my²,     p3 = ma mb − mx²     (28)

is the greatest. Let pk be the greatest; then the most reliable Euler axis is given by

eopt = ek / ||ek||     (29)
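A sketch of the Euler-axis extraction of Eqs. (26)-(29) (Python/NumPy; illustrative names). Note that the indicators p1, p2, p3 of Eq. (28) are simply the first, second, and third components of e1, e2, e3, respectively.

```python
import numpy as np

def optimal_euler_axis(H, lam_min):
    """e_opt from Eqs. (26)-(29): cross product of the two most reliable rows of M = H - lam*I."""
    M = H - lam_min * np.eye(3)
    m1, m2, m3 = M[0], M[1], M[2]
    e_candidates = [np.cross(m2, m3), np.cross(m3, m1), np.cross(m1, m2)]   # Eq. (27)
    p = [e_candidates[0][0], e_candidates[1][1], e_candidates[2][2]]        # Eq. (28)
    e = e_candidates[int(np.argmax(p))]                                     # Eq. (29)
    return e / np.linalg.norm(e)
```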
The Euler angle can be computed by differentiating Eq. (6) with respect to the Euler angle. This leads to

∂GW(eopt,Φ)/∂Φ = ( eoptᵀ B eopt − tr[B] ) sinΦ + zᵀ eopt cosΦ = 0     (30)

Therefore, the optimal Euler angle can be derived from the conditions

sinΦopt = (1/D) zᵀ eopt,     cosΦopt = (1/D) ( tr[B] − eoptᵀ B eopt )     (31)

where

D² = ( zᵀ eopt )² + ( tr[B] − eoptᵀ B eopt )²     (32)
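The optimal Euler angle of Eqs. (30)-(32) follows directly; the optimal attitude matrix can then be assembled through Eq. (1), e.g. reusing the attitude_matrix helper sketched earlier. In the sketch below (Python/NumPy), atan2 replaces the explicit division by D, which is an equivalent formulation since D > 0.

```python
import numpy as np

def optimal_euler_angle(B, z, e_opt):
    """Phi_opt from Eq. (31): sin(Phi) proportional to z^T e_opt,
    cos(Phi) proportional to tr(B) - e_opt^T B e_opt."""
    return np.arctan2(z @ e_opt, np.trace(B) - e_opt @ B @ e_opt)
```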
Singularity Discussion
It should be noted that the computation of the Euler axis by means of a cross product fails in two limiting cases for the M row vectors: 1) when they become parallel, which occurs either when all of the observed vectors are parallel or nearly so (a-singularity), or when the Euler axis and all of the observed vectors lie approximately in the same plane (b-singularity); and 2) when they become zero vectors, which occurs if Φ→0 (c-singularity). The a-singularity, which occurs when the observed vectors are all parallel, implies that the di vectors are parallel as well and, therefore, the matrix H results as if it were built using only one vector. Such a matrix is positive semidefinite (its eigenvalues are nonnegative) and has rank 1; thus λ3 > 0 whereas λ2→λ1→0. This singularity cannot be removed (even by using other existing algorithms) because it is an intrinsically unresolvable case for attitude determination. The b-singularity also implies that the di vectors are parallel and, therefore, the same consequences arise as for the a-singularity, that is, λ3 > 0 whereas λ2→λ1→0. Unlike the a-singularity, however, the b-singularity can be removed by using additional vectors, as explained in the next section. Finally, the c-singularity, which occurs for Φ→0, implies first A→I, then si→vi, then (vi − si)→0 and, finally, H→0, which in turn implies that all of the eigenvalues of H tend to zero, that is, λ3→λ2→λ1→0 and, consequently, M→H→0, i.e., zero row vectors for the matrix M. To avoid the b- and c-singularities, the Method of Sequential Rotations (MSR) [3], hereafter explained, can successfully be applied. The MSR is a simple and elegant tool which can be invoked in almost all of the singular cases encountered by attitude estimation algorithms subject to a singularity. The MSR states that, if the n unit-vector pairs (si, vi) imply the optimal attitude matrix A, then the n unit-vector pairs (si, wi), where
wi = R vi and R is any rotation matrix, imply the optimal attitude matrix F = [f1 f2 f3] = A Rᵀ, related to A. Therefore, if the (si, vi) data set implies a singularity for the method, the (si, wi) set will not, in general, imply a singularity as well. Hence, the MSR evaluates the attitude F (or the quaternion p) by using the rotated unit vectors wi (in place of the vi) and then computes the sought optimal attitude matrix as A = F R. Particular attention is given to those matrices R which rotate about one of the coordinate axes c1, c2, and c3 by an angle π. These matrices, for which the rules given in Table 1 are valid, are

R1 = diag(1, −1, −1),     R2 = diag(−1, 1, −1),     R3 = diag(−1, −1, 1)     (33)
With respect to other matrices, these save computation time: applying the MSR with them does not involve extra computational load but only sign changes. When the MSR is applied by a π rotation about one of the coordinate axes, the computation of the new data matrix B* is unnecessary, because B* is obtained from B = [b1 b2 b3] simply by changing the sign of two columns; the attitude matrix A is likewise obtained from F by sign changes; and the sought optimal quaternion q = {q1 q2 q3 q4}ᵀ, associated with B, is obtained from the computed quaternion p = {p1 p2 p3 p4}ᵀ, associated with B*, by sign changes and component rearrangements. All these properties are summarized in Table 1.

Table 1  Sequential Rotation Properties

MSR axis   MSR angle   w vectors (components of R v)   B* matrix        A matrix         q quaternion
c1         π           {v(1)  −v(2)  −v(3)}ᵀ           [b1 −b2 −b3]     [f1 −f2 −f3]     {p4 −p3 p2 −p1}ᵀ
c2         π           {−v(1)  v(2)  −v(3)}ᵀ           [−b1 b2 −b3]     [−f1 f2 −f3]     {p3 p4 −p1 −p2}ᵀ
c3         π           {−v(1)  −v(2)  v(3)}ᵀ           [−b1 −b2 b3]     [−f1 −f2 f3]     {−p2 p1 p4 −p3}ᵀ
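The sign-change rules of Table 1 translate into a few lines of code (Python/NumPy; function names are illustrative, and the quaternion component ordering is taken verbatim from the last column of Table 1).

```python
import numpy as np

def rotate_data_matrix(B, axis):
    """B* for a pi rotation about coordinate axis `axis` (0, 1, 2): flip the other two columns."""
    B_star = B.copy()
    for j in range(3):
        if j != axis:
            B_star[:, j] *= -1.0
    return B_star

def quaternion_back(p, axis):
    """Map the quaternion p = {p1 p2 p3 p4}, computed with the rotated data, back to the
    quaternion q associated with the original B (last column of Table 1)."""
    p1, p2, p3, p4 = p
    return np.array({0: [ p4, -p3,  p2, -p1],
                     1: [ p3,  p4, -p1, -p2],
                     2: [-p2,  p1,  p4, -p3]}[axis])
```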
The conditions under which the MSR is invoked are related to the values of the computed root differences, the expressions of which are provided in Table 2. This represents another advantage of computing the λi by the closed-form solution instead of the iterative procedure.

Table 2  Root Differences

            n = 2                        n > 2
λ2 − λ1     [ −a − √(a² − 4b) ] / 2      2 √3 p sin w
λ3 − λ1     [ −a + √(a² − 4b) ] / 2      p ( 3 cos w + √3 sin w )
λ3 − λ2     √(a² − 4b)                   p ( 3 cos w − √3 sin w )
The condition λ2 − λ1 > ζ, where ζ is a small positive tolerance to be derived from numerical tests by limiting the attitude error, implies an optimal Euler axis that is sufficiently well identifiable; in this case the MSR is not invoked. When λ2 − λ1 ≤ ζ, the MSR is applied until the condition λ2 − λ1 > ζ is satisfied (the condition λ3 − λ1 ≤ ζ corresponds instead to the c-singularity, which the MSR also removes). If the condition λ2 − λ1 > ζ is not obtained for any of the three possible sequential rotations, then the a-singularity is occurring; when this happens, obviously, the attitude estimation is aborted.
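The MSR logic just described can be sketched as follows (Python/NumPy, reusing the h_matrix, h_eigenvalues, and optimal_euler_axis helpers sketched earlier; for brevity the weights ξi are kept fixed here, although strictly ωi, and hence βi*, change when the reference vectors are rotated).

```python
import numpy as np

def euler_axis_with_msr(xi, s_list, v_list, zeta=1e-2):
    """Try the data as given and, if lam2 - lam1 <= zeta, retry with a pi rotation of the
    reference vectors about each coordinate axis in turn; abort if all attempts fail."""
    rotations = [np.eye(3), np.diag([1., -1., -1.]),
                 np.diag([-1., 1., -1.]), np.diag([-1., -1., 1.])]
    for R in rotations:
        w_list = [R @ v for v in v_list]
        H = h_matrix(xi, s_list, w_list)
        lam1, lam2, lam3 = h_eigenvalues(H)
        if lam2 - lam1 > zeta:
            # this is the Euler axis of F = A R^T; the final attitude is recovered as A = F R
            return optimal_euler_axis(H, lam1), R
    raise RuntimeError("a-singularity: the observed directions are (nearly) parallel")
```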
Additional Vectors
The b-singularity implies that the di vectors are parallel and, therefore, the same consequences arise as for the a-singularity, that is, λ3 > 0 whereas λ2→λ1→0. It is possible, however, to completely avoid the b-singularity by using the "additional vectors" described hereafter.
In fact, since the attitude matrix represents a rotational operator, not only does each single vi describe a cone about e, but so does any other vector constructed from the vi, as, for instance, the cross products vi × vj. This means that the vector si × sj can be considered as an additional observed vector, whose corresponding reference vector is vi × vj. Therefore, in the ideal case, the conditions eᵀ(si × sj) = eᵀ(vi × vj) are satisfied, which imply that the directions provided by the unit vectors dij = (vi × vj − si × sj)/||vi × vj − si × sj|| are perpendicular to the Euler axis. Consequently, the small terms

ξij δij² = ξij eᵀ dij dijᵀ e     (34)

where the ξij, whose expressions are more complicated than those of the ξi, are provided in the next paragraphs, can be added to the loss function LM(e) of Eq. (9). These additional vectors completely avoid the occurrence of the b-singularity. In fact, when the directions si, sj, and e lie in the same plane, the vector (vi × vj − si × sj) is perpendicular to both the Euler axis and the parallel vectors di and dj.
When an additional vector is considered, the true Dij direction is displaced from the observed dij because Si × Sj differs from si × sj in both direction and modulus. The variation in direction is derived by considering the vector Si × Sj as describing a cone about si × sj by the angle βij(d). The maximum value of this angle, which is reached in the condition shown in Fig. 2, can be computed from the relationship

sin²γij sin²βij(d) = sin²βi + sin²βj + 2 sinβi sinβj cosγij     (35)

where γij is the angle between si and sj. Equation (35) is derived by applying known trigonometric identities to the spherical figure identified by the si, Si, sj, and Sj directions. This means that the vector si × sj is displaced from Si × Sj by a maximum angle of βij(d) and, therefore, the dij direction can be displaced from Dij by the angle βij(d)*, which can be computed from the relationship

sinωij sinβij(d)* = sinβij(d)     (36)

where ωij is the angle between the si × sj and vi × vj directions.
The contribution caused by the variation of the modulus ||si × sj|| can be considered approximately perpendicular to that caused by the variation in direction. Referring to the triangle shown in Fig. 3, it is possible to compute the error angle βij(m)* associated with the variation of ||si × sj||. In Table 3 the minimum and maximum values of m(s) = ||si × sj|| = sin(γij + β), where the variable β ranges from −βs to βs (with βs = βi + βj), are given as a function of the value of γij.
Table 3  Range of m(s) = ||si × sj||

γij from       γij to         m(s) minimum       m(s) maximum
0              βs             0                  sin(γij + βs)
βs             π/2 − βs       sin(γij − βs)      sin(γij + βs)
π/2 − βs       π/2            sin(γij − βs)      1
π/2            π/2 + βs       sin(γij + βs)      1
π/2 + βs       π − βs         sin(γij + βs)      sin(γij − βs)
π − βs         π              0                  sin(γij − βs)
The angle 2βij(m)*, which is the angle spanned by the vector (vi × vj − si × sj) during the β variation, can be evaluated from the equation

2 z1 z2 cos( 2βij(m)* ) = z1² + z2² − ( mmax(s) − mmin(s) )²     (37)

where

z1² = ( mmax(s) )² + ||vi × vj||² − 2 ||vi × vj|| mmax(s) cosωij
z2² = ( mmin(s) )² + ||vi × vj||² − 2 ||vi × vj|| mmin(s) cosωij     (38)

Once the precision angles βij(d)* and βij(m)* are computed, then βij* = max[ βij(m)*, βij(d)* ].
The relative weights ξij, which appear in Eq. (34), are then evaluated in the same way as the αi were evaluated from the βi in Eq. (2). For instance, when n = 2 and one additional vector is considered, the three weights are

ξ1 = 1/( D β1* ),     ξ2 = 1/( D β2* ),     ξ3 = 1/( D β12* )     (39)

where

D = 1/β1* + 1/β2* + 1/β12*     (40)

and where β1* and β2* are computed using Eq. (11), whereas β12* is evaluated using Eqs. (35-38). The use of the additional vectors can be restricted to the n = 2 case only (which means only one additional vector) for the following three reasons: first, because the b-singularity is completely avoided using the MSR; second, because the occurrence of the b-singularity decreases strongly as n increases; and third, because additional vectors imply an additional computational load, especially for the computation of the relative weights ξij. It should be noted, however, that the use of the additional vectors implies a small improvement of the solution accuracy.
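For the n = 2 case, the direction part of the additional-vector weight can be sketched as follows (Python/NumPy; only Eqs. (35), (36), (39), and (40) are implemented, the modulus contribution of Eqs. (37)-(38) being omitted for brevity, so β12* is approximated here by β12(d)* alone).

```python
import numpy as np

def n2_weights(beta1, beta2, beta1_star, beta2_star, s1, s2, v1, v2):
    """Weights xi_1, xi_2, xi_3 of Eqs. (39)-(40) for n = 2 with one additional vector.
    beta1_star and beta2_star come from Eq. (11); beta12_star from Eqs. (35)-(36) only."""
    gamma = np.arccos(np.clip(np.dot(s1, s2), -1.0, 1.0))          # angle between s1 and s2
    sin_bd = np.sqrt(np.sin(beta1)**2 + np.sin(beta2)**2
                     + 2.0 * np.sin(beta1) * np.sin(beta2) * np.cos(gamma)) / np.sin(gamma)  # Eq. (35)
    c_s, c_v = np.cross(s1, s2), np.cross(v1, v2)
    omega = np.arccos(np.clip(np.dot(c_s, c_v) / (np.linalg.norm(c_s) * np.linalg.norm(c_v)),
                              -1.0, 1.0))                          # angle between s1 x s2 and v1 x v2
    beta12_star = np.arcsin(np.clip(sin_bd / np.sin(omega), 0.0, 1.0))   # Eq. (36)
    D = 1.0 / beta1_star + 1.0 / beta2_star + 1.0 / beta12_star          # Eq. (40)
    return 1.0 / (D * np.array([beta1_star, beta2_star, beta12_star]))   # Eq. (39)
```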
Rotation Error
Any criterion used to establish the optimality of the spacecraft attitude implies a corresponding optimal attitude matrix. Typically, this matrix does not coincide perfectly with the true attitude matrix describing the spacecraft orientation and, therefore, any estimation of the spacecraft attitude is affected by an error that needs to be quantified. Since the attitude matrix is a rotation matrix, the problem is to identify which mathematical means is apt to measure the "distance" between two rotation matrices, that is, the error of an estimated attitude matrix with respect to the true one. To this end, let T and A be the true and the estimated attitude matrix, respectively. A can even be chosen randomly, provided that the conditions for being an attitude matrix are met (AᵀA = I, det(A) = 1). The problem is to evaluate numerically, in an unambiguous manner, how far A is from T. To that end, different criteria have been introduced in the past literature. One of them [5] is based on the computation of the Euclidean (Frobenius) norm of the matrix δA = T − A, and another [8] introduces the "attitude error vector", which can be considered correct only for infinitesimal deviations between A and T. Both of these criteria compute quantities which, although related to the error, are not the error itself. In the following it is shown that the attitude error, that is, the rotational error, has an assigned shape which can be described by a single parameter that does not depend on the attitude parameterization used. The matrix ∆ = T Aᵀ, being the product of rotation matrices, is a rotation matrix as well. This matrix represents the corrective rotation that, when applied to the estimated attitude, rotates it onto the true attitude, that is, ∆A = T Aᵀ A = T. Being a rotation matrix, ∆ has its own Euler axis and angle (e, Φ), which implies ∆e = e. Postmultiplying ∆ = T Aᵀ by the Euler axis e, the relationship Tᵀe = Aᵀe is obtained. This equation states that an observed direction identified by the unit vector e has the same corresponding reference direction in both the true and the estimated orientation. In other words, an observed direction identified by e is errorless for the estimation matrix A, no matter which matrix A is chosen. On the contrary, the direction identified by a unit vector w, angularly separated from the Euler axis e by the angle ϑ, is affected by the error ε defined by

cosε = cos²ϑ (1 − cosΦ) + cosΦ     (41)

where

cosϑ = eᵀ w = eᵀ ∆ w,     cosε = wᵀ ∆ w     (42)
Equation (41) is derived from the isosceles spherical triangle shown in Fig. 4, whose vertices are defined by the unit vectors e, w, and ∆w. The distribution of the rotational error ε provided by Eq. (41) implies a maximum error of ε = Φ for directions perpendicular to the Euler axis (ϑ = π/2) and a minimum error of ε = 0 for directions parallel to the Euler axis, that is, for ϑ = 0 and ϑ = π. Thus, since the rotation error has to follow the assigned shape provided by Eq. (41), what characterizes it is its maximum value. This value, which is the Euler angle Φ of the matrix ∆, is also the maximum error obtained in the definition of a direction. This parameter identifies how far the matrix A is displaced from T and, therefore, describes the attitude error. The Euler angle Φ of the rotation matrix ∆ is given by

cosΦ = ( tr[∆] − 1 ) / 2 = ( tr[ T Aᵀ ] − 1 ) / 2     (43)
This equation is easily demonstrated because the trace of a matrix is invariant under similarity transformations and, therefore, equals the sum of the eigenvalues, that is, tr[T Aᵀ] = Σi λi = λ1 + λ2 + λ3, and the eigenvalues of a rotation matrix are λ1 = 1, λ2 = cosΦ + i sinΦ, and λ3 = cosΦ − i sinΦ. Therefore, it follows that tr[T Aᵀ] = 1 + 2 cosΦ. Consequently, the A matrices "farthest" from the T matrix imply Φ = π, that is, tr[T Aᵀ] = 1 + 2 cosΦ = −1, while the "closest" ones entail Φ = 0, for which tr[T Aᵀ] = 3.
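A sketch of the rotation-error evaluation (Python/NumPy; illustrative names): Eq. (43) gives the single parameter that characterizes the error, while Eq. (42) gives the error on a particular direction w.

```python
import numpy as np

def rotation_error(T, A):
    """Euler angle Phi of the corrective rotation Delta = T A^T, Eq. (43)."""
    return np.arccos(np.clip((np.trace(T @ A.T) - 1.0) / 2.0, -1.0, 1.0))

def direction_error(T, A, w):
    """Error on the observed direction w, Eq. (42): cos(eps) = w^T Delta w.
    It ranges from 0 along the Euler axis of Delta to Phi perpendicular to it, Eq. (41)."""
    return np.arccos(np.clip(w @ (T @ A.T) @ w, -1.0, 1.0))
```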
Unlike a displacement error, which has an open range (it can grow without bound), the rotation error is confined to a closed interval and, furthermore, has a predefined shape which implies a zero error along one direction and a maximum error in the directions perpendicular to it. Based on this definition of the rotation error, the next section, which is dedicated to the accuracy tests, compares the optimal estimation matrices obtained from the Wahba criterion and from the proposed new one with respect to the true attitude matrix.
Accuracy Tests
Accuracy tests have been performed on the basis of 1000 random attitude data sets and for n ranging from 2 to 10. Each data set has been produced as follows: 1) generation of a true attitude matrix T using Eq. (1) with a random choice of the Euler axis e and the Euler angle Φ; 2) generation of n random precision angles βi for the three cases βMAX = 0.001°, 0.01°, and 0.1° (where 0 ≤ βi ≤ βMAX); 3) generation of n random reference unit vectors vi; 4) evaluation of the n true observed unit vectors Si = T vi; 5) evaluation of the n observed unit vectors si, which are affected by a noise obtained by rotating Si by a random angle φi (0 ≤ φi ≤ βi, i = 1, …, n) about a random axis perpendicular to Si; and 6) computation of the relative weights αi and ξi according to Eq. (2) and Eqs. (11,12), respectively. For each attitude data set, which consists of the n data [ξi, αi, vi, si], the attitude estimation matrices AW and AM are obtained using the q-Method algorithm (which fully complies with the Wahba cost function) and the Euler-q algorithm, respectively. Then, according to Eq. (43), the errors εW and εM provided by

εW = cos⁻¹ [ ( tr[ T AWᵀ ] − 1 ) / 2 ],     εM = cos⁻¹ [ ( tr[ T AMᵀ ] − 1 ) / 2 ]     (44)
are computed. Figure 5 plots the average values of the εW and εM errors obtained from the tests. This plot shows that the Wahba cost function is, on average, better than the proposed one. However, the precision loss, indicated by the difference εM − εW, decreases as n increases and as the sensor errors βi decrease. For instance, in the worst case of n = 2, the averages of the maximum attitude errors are: 1) εW = 0.07° and εM = 0.08° for βMAX = 0.1° (inaccurate sensors); 2) εW = 0.006° and εM = 0.007° for βMAX = 0.01° (accurate sensors); and 3) εW = 0.0007° and εM = 0.0008° for βMAX = 0.001° (very accurate sensors). We emphasize again that these values represent the maximum attitude errors in the worst case of n = 2. Hence, the Euler-q precision loss can be considered acceptable for many practical applications.
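For reference, the data-set generation of steps 1)-5) can be sketched as follows (Python/NumPy; attitude_matrix is the helper sketched after Eq. (1), and the random-number handling is illustrative).

```python
import numpy as np

rng = np.random.default_rng()

def random_unit_vector():
    u = rng.normal(size=3)
    return u / np.linalg.norm(u)

def make_data_set(n, beta_max_deg):
    """One synthetic data set following steps 1)-5): returns (T, beta, v_list, s_list)."""
    T = attitude_matrix(random_unit_vector(), rng.uniform(0.0, 2.0 * np.pi))   # step 1
    beta = rng.uniform(0.0, np.radians(beta_max_deg), size=n)                  # step 2
    v_list = [random_unit_vector() for _ in range(n)]                          # step 3
    S_list = [T @ v for v in v_list]                                           # step 4
    s_list = []
    for S, b in zip(S_list, beta):                                             # step 5
        axis = np.cross(S, random_unit_vector())                               # random axis perpendicular to S
        axis /= np.linalg.norm(axis)
        s_list.append(attitude_matrix(axis, rng.uniform(0.0, b)) @ S)
    return T, beta, v_list, s_list
```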
Speed Tests
As with the accuracy tests, the speed tests were performed using MATLAB [11] on a Pentium PC. The index used to evaluate the computational speed of each algorithm is the MATLAB flops function, which returns the approximate cumulative number of floating-point operations. Figure 6 shows the averages of the speed-test indices obtained over 1000 tests, for n = 2-10 and 0 ≤ βi ≤ βMAX = 0.1°. Four different optimal attitude determination algorithms have been compared with Euler-q (tolerance ζ = 10⁻²; closed-form solutions for the λi, and no additional vectors used). These algorithms are: 1) the q-Method [2], 2) QUEST [3] (MSR tolerance = 10⁻³), 3) SVD [4], and 4) EAA [6]. This figure shows Euler-q to be the fastest algorithm for the optimal attitude estimation when n