
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 24, NO. 5, MAY 2002

Orthogonal Distance Fitting of Implicit Curves and Surfaces

Sung Joon Ahn, Wolfgang Rauh, Hyung Suck Cho, Member, IEEE, and Hans-Jürgen Warnecke

Abstract: Dimensional model fitting finds its applications in various fields of science and engineering and is a relevant subject in computer/machine vision and coordinate metrology. In this paper, we present two new fitting algorithms, a distance-based and a coordinate-based algorithm, for implicit surfaces and plane curves, which minimize the square sum of the orthogonal error distances between the model feature and the given data points. Each of the two algorithms has its own advantages and is to be purposefully applied to a specific fitting task, considering the implementation and memory space cost and the possibilities of observation weighting. With the new algorithms, the model feature parameters are grouped and simultaneously estimated in terms of form, position, and rotation parameters. The form parameters determine the shape of the model feature, and the position/rotation parameters describe the rigid body motion of the model feature. The proposed algorithms are applicable to any kind of implicit surface and plane curve. In this paper, we also describe the algorithm implementation and show various examples of orthogonal distance fitting.

Index Terms: Implicit curve, implicit surface, curve fitting, surface fitting, orthogonal distance fitting, geometric distance, orthogonal contacting, nonlinear least squares, parameter estimation, Gauss-Newton method, parameter constraint, parametric model recovery, object segmentation, object classification, object reconstruction.

1 INTRODUCTION

With image processing, pattern recognition, and computer/machine vision, dimensional model (curve and surface) fitting to a set of given data points is a very common task, e.g., in edge detection, information extraction from 2D images or 3D range images, and object reconstruction. For the purpose of dimensional model fitting, we can consider three methods, namely, the moment method [15], [32], [35], [42], the Hough transform [8], [19], [28], and the least-squares method (LSM) [23]. The moment method and the Hough transform are efficient for fitting relatively simple models, while their application to a complex object model or to an object model with a large number of model parameters is not encouraged. In this paper, we consider LS-fitting algorithms for implicit model features. In data modeling and analysis in various disciplines of science and engineering, implicit features are used very often because of their compact description in the form $f(\mathbf{a}, \mathbf{x}) = 0$ and because of the possibility of a simple on/off and inside/outside decision. We review the current LS-fitting algorithms for implicit model features and introduce new approaches in this section. Section 2 contains two algorithms for finding the

- S.J. Ahn and W. Rauh are with the Department of Information Processing, Fraunhofer Institute for Manufacturing Engineering and Automation (IPA), Nobelstr. 12, 70569 Stuttgart, Germany. E-mail: {sja, wor}@ipa.fhg.de.
- H.S. Cho is with the Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Kusung-dong 373-1, Yusung-gu, 305-701 Taejeon, South Korea. E-mail: [email protected].
- H.-J. Warnecke is with the Fraunhofer Society, Leonrodstr. 54, 80636 Munich, Germany. E-mail: [email protected].

Manuscript received 3 Apr. 2001; revised 25 Oct. 2001; accepted 29 Jan. 2002. Recommended for acceptance by S. Sarkar.
For information on obtaining reprints of this article, please send e-mail to: [email protected], and reference IEEECS Log Number 113920.

shortest distance point on an implicit model feature from a given point. Section 3 describes two new algorithms, the distance-based and coordinate-based algorithms, for orthogonal distance fitting of implicit model features. Various fitting examples are given in Section 4 and Section 5, including examples of real data fitting. This paper ends with a summary in Section 6.

1.1 Least-Squares Dimensional Model Fitting

The least-squares method [23] is one of the best-known and most often applied mathematical tools in various disciplines of science and engineering. In applications of the LSM to dimensional model fitting, the natural and best choice of the error distance is the shortest distance between the given point and the model feature. This error definition is prescribed in coordinate metrology guidelines [16], [30]. A century ago, Pearson [32] elegantly solved the orthogonal distance fitting problem of line and plane in space by using the moment method (a variant of the LSM). However, except for some simple model features, computing and minimizing the square sum of the shortest error distances are not simple tasks for general features. For applications, an algorithm for dimensional model fitting would be very advantageous if the estimation parameters are grouped and simultaneously estimated in terms of form, position, and rotation parameters [24]. Furthermore, it would be helpful if the reliability (variance) of each parameter could be tested. There are two main categories of LS-fitting problems for implicit features, algebraic and geometric fitting, differentiated by their respective definitions of the error distances involved [21], [36]. Algebraic fitting is a procedure whereby the model feature is described by an implicit equation $F(\mathbf{b}, \mathbf{X}) = 0$ with algebraic parameters $\mathbf{b}$, and the error distances are defined as the deviations of the functional values from the expected value (i.e., zero) at each given point. If $F(\mathbf{b}, \mathbf{X}_i) \neq 0$, the given point $\mathbf{X}_i$ does not lie on the model feature (i.e., there is

0162-8828/02/$17.00 © 2002 IEEE


some error-of-fit). Most publications about LS-fitting of implicit features have been concerned with the square sum of algebraic distances or their modifications [12], [36], [40]:

$$\sigma_0^2 = \sum_{i=1}^{m} F^2(\mathbf{b}, \mathbf{X}_i) \quad\text{or}\quad \sigma_0^2 = \sum_{i=1}^{m} \big[ F(\mathbf{b}, \mathbf{X}_i) / \|\nabla F(\mathbf{b}, \mathbf{X}_i)\| \big]^2 . \qquad (1)$$
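As an illustration of the two error measures in (1) (a minimal sketch of our own, not taken from the paper; the circle and test point are arbitrary choices), consider the circle $F(\mathbf{b}, \mathbf{X}) = X^2 + Y^2 - r^2$ with $r = 2$ and a point whose true geometric distance to the circle is 1:

```python
import numpy as np

# Our own illustration of the two algebraic error measures in (1),
# for the circle F(b, X) = X^2 + Y^2 - r^2 with r = 2.
def F(X, r=2.0):
    return X[0]**2 + X[1]**2 - r**2

def grad_F(X):
    return np.array([2.0 * X[0], 2.0 * X[1]])

Xi = np.array([3.0, 0.0])   # true geometric (orthogonal) distance to the circle: 1
algebraic = F(Xi)                                # 3^2 - 2^2 = 5
normalized = F(Xi) / np.linalg.norm(grad_F(Xi))  # 5/6, much closer to 1
print(algebraic, normalized)
```

The plain algebraic distance (5) overstates the geometric distance (1) considerably, while the gradient-normalized distance (5/6) approximates it much better, which is why the second formula of (1) is preferred in the literature.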


1.2 New Approaches

We will now introduce two versatile and very efficient orthogonal distance fitting algorithms for implicit surfaces and plane curves. The general goal of orthogonal distance fitting of a model feature to $m$ given points is the estimation of the feature parameters $\mathbf{a}$ minimizing the performance index

Despite its advantages in implementation and computing cost, algebraic fitting has drawbacks in accuracy. The estimated parameters are not invariant to coordinate transformation (e.g., a simple parallel shift of the given point set causes changes not only in position but also in form and rotation of the estimated model feature) and, generally, we cannot find a physical interpretation of an algebraic error definition. Finally, resolving the algebraic parameters $\mathbf{b}$ into the physical parameters (e.g., in terms of form, position, and rotation parameters) is not a simple task for general features [24]. In geometric fitting, also known as best fitting or orthogonal distance fitting, the error distance is defined as the shortest distance (geometric distance) from the given point to the model feature. However, contrary to its clear and preferable definition, the geometric distance is nonlinear in the model parameters and, as said above, computing and minimizing the square sum of geometric distances are not easy tasks for general features. Thus, including the normalized algebraic distance $F(\mathbf{b}, \mathbf{X}_i)/\|\nabla F(\mathbf{b}, \mathbf{X}_i)\|$ shown in the second formula of (1) [36], [40], various approximating measures of the geometric distance are used for dimensional model fitting [14], [29], [31]. Because approximating measures of the geometric distance cannot satisfactorily fill the gap between algebraic and geometric fitting, the development of geometric fitting algorithms has been pursued in the literature. While there are geometric fitting algorithms for explicit [11] and parametric features [5], [13], [21], [38], [41], we consider in this paper only fitting algorithms for implicit features. Sullivan et al. [39] presented a geometric fitting algorithm for implicit algebraic features

$$F(\mathbf{b}, \mathbf{X}) = \sum_{j=1}^{q} b_j X^{k_j} Y^{l_j} Z^{m_j} = 0,$$

minimizing the square sum of the geometric distances $d(\mathbf{b}, \mathbf{X}_i) = \|\mathbf{X}_i - \mathbf{X}'_i\|$, where $\mathbf{X}_i$ is a given point and $\mathbf{X}'_i$ is the shortest distance point on the model feature $F$ from $\mathbf{X}_i$. In order to locate $\mathbf{X}'_i$, they iteratively minimized the Lagrangian function $L(\lambda, \mathbf{X}) = \|\mathbf{X}_i - \mathbf{X}\|^2 + \lambda F(\mathbf{b}, \mathbf{X})$. To obtain the partial derivatives $\partial d(\mathbf{b}, \mathbf{X}_i)/\partial\mathbf{b}$ (these are necessary for the nonlinear iteration updating the algebraic parameters $\mathbf{b}$), they utilized the orthogonal contacting equations defined in the machine coordinate system XYZ:

$$\mathbf{F}(\mathbf{b}, \mathbf{X}_i, \mathbf{X}) = \begin{pmatrix} F \\ \nabla F \times (\mathbf{X}_i - \mathbf{X}) \end{pmatrix} = \mathbf{0}. \qquad (2)$$

A weakness of Sullivan et al.'s algorithm, as in the case of algebraic fitting, is that the physical parameters are combined into an algebraic parameter vector $\mathbf{b}$. And, as we will show in Section 3.1, they obtained the partial derivatives $\partial d(\mathbf{b}, \mathbf{X}_i)/\partial\mathbf{b}$ from (2) unnecessarily expensively. Helfrich and Zwick described a similar algorithm in [27].

$$\sigma_0^2 = \mathbf{d}^{\mathrm{T}} \mathbf{P}^{\mathrm{T}} \mathbf{P}\, \mathbf{d} \qquad (3)$$

or

$$\sigma_0^2 = (\mathbf{X} - \mathbf{X}')^{\mathrm{T}} \mathbf{P}^{\mathrm{T}} \mathbf{P}\, (\mathbf{X} - \mathbf{X}'), \qquad (4)$$

where

- $\mathbf{X}$: coordinate column vector of the $m$ given points, $\mathbf{X}^{\mathrm{T}} = (\mathbf{X}_1^{\mathrm{T}}, \ldots, \mathbf{X}_m^{\mathrm{T}})$, $\mathbf{X}_i = (X_i, Y_i, Z_i)^{\mathrm{T}}$;
- $\mathbf{X}'$: coordinate column vector of the $m$ corresponding points on the model feature;
- $\mathbf{d}$: distance column vector, $\mathbf{d}^{\mathrm{T}} = (d_1, \ldots, d_m)$, $d_i = \|\mathbf{X}_i - \mathbf{X}'_i\| = \sqrt{(\mathbf{X}_i - \mathbf{X}'_i)^{\mathrm{T}}(\mathbf{X}_i - \mathbf{X}'_i)}$;
- $\mathbf{P}^{\mathrm{T}}\mathbf{P}$: weighting matrix or error covariance matrix (positive definite);
- $\mathbf{P}$: nonsingular symmetric matrix [17].

If we assume independent identically distributed (i.i.d.) measurement errors, the two performance indexes (3) and (4) have the same value up to the a priori standard deviation of the measurement errors. We call the new algorithms minimizing the performance indexes (3) and (4) the distance-based algorithm and the coordinate-based algorithm, respectively. Our algorithms are generalized extensions of an orthogonal distance fitting algorithm for implicit plane curves [3], [4]. Because no assumption is made about the mathematical handling of implicit features, the proposed algorithms can be applied to any kind of implicit dimensional feature. Each of the two algorithms consists of two nested iterations:

$$\min_{\mathbf{a}} \; \min_{\{\mathbf{X}'_i\}_{i=1}^{m} \in F} \; \sigma_0^2\big(\{\mathbf{X}'_i(\mathbf{a})\}_{i=1}^{m}\big). \qquad (5)$$
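The i.i.d. remark can be checked numerically (our own sanity check, not from the paper): with $\mathbf{P} = \mathbf{I}$, the distance-based index (3), $\sum_i d_i^2$, coincides with the coordinate-based index (4), $(\mathbf{X}-\mathbf{X}')^{\mathrm{T}}(\mathbf{X}-\mathbf{X}')$, since $d_i = \|\mathbf{X}_i - \mathbf{X}'_i\|$:

```python
import numpy as np

# Our own sanity check: with P = I (i.i.d. errors), the performance
# indexes (3) and (4) have the same value.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))            # five given points
Xp = rng.normal(size=(5, 3))           # five corresponding points on the feature
d = np.linalg.norm(X - Xp, axis=1)     # distance vector d of (3)
index3 = d @ d                                       # d^T d
index4 = (X - Xp).ravel() @ (X - Xp).ravel()         # (X - X')^T (X - X')
print(np.isclose(index3, index4))      # True
```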

The inner iteration of (5), carried out at each outer iteration cycle, finds the shortest distance points on the current model feature from each given point (Section 2), and the outer iteration cycle updates the model feature parameters (Section 3). Once appropriate break conditions are satisfied, e.g., when no more significant improvement of the performance indexes (3) and (4) can be gained through parameter updating, the outer iteration terminates. With the new algorithms, the estimation parameters $\mathbf{a}$ are grouped in three categories and simultaneously estimated. First, the form parameters $\mathbf{a}_{\mathrm{g}}$ (e.g., radius $r$ for a circle/cylinder/sphere, three axis lengths $a$, $b$, $c$ for an ellipsoid) describe the shape of the standard model feature $f$ defined in the model coordinate system xyz (Fig. 1):

$$f(\mathbf{a}_{\mathrm{g}}, \mathbf{x}) = 0 \quad\text{with}\quad \mathbf{a}_{\mathrm{g}} = (a_1, \ldots, a_l)^{\mathrm{T}}. \qquad (6)$$

The form parameters are invariant to the rigid body motion of the model feature. The second and the third parameter groups, the position parameters $\mathbf{a}_{\mathrm{p}}$ and the rotation parameters $\mathbf{a}_{\mathrm{r}}$, respectively, describe the rigid body motion of the model feature $f$ in the machine coordinate system XYZ:

$$\mathbf{X} = \mathbf{R}^{-1}\mathbf{x} + \mathbf{X}_{\mathrm{o}} \quad\text{or}\quad \mathbf{x} = \mathbf{R}(\mathbf{X} - \mathbf{X}_{\mathrm{o}}), \qquad (7)$$

where

$$\mathbf{R} = \mathbf{R}_{\kappa}\mathbf{R}_{\varphi}\mathbf{R}_{\omega} = (\mathbf{r}_1 \;\; \mathbf{r}_2 \;\; \mathbf{r}_3)^{\mathrm{T}}, \quad \mathbf{R}^{-1} = \mathbf{R}^{\mathrm{T}}, \quad\text{and}\quad \mathbf{a}_{\mathrm{p}} = \mathbf{X}_{\mathrm{o}} = (X_{\mathrm{o}}, Y_{\mathrm{o}}, Z_{\mathrm{o}})^{\mathrm{T}}, \quad \mathbf{a}_{\mathrm{r}} = (\omega, \varphi, \kappa)^{\mathrm{T}}. \qquad (8)$$

We characterize the form parameters as intrinsic parameters and the position/rotation parameters as extrinsic parameters of the model feature, according to their context. In this paper, we intend to simultaneously estimate all these parameters:

$$\mathbf{a}^{\mathrm{T}} = (\mathbf{a}_{\mathrm{g}}^{\mathrm{T}}, \mathbf{a}_{\mathrm{p}}^{\mathrm{T}}, \mathbf{a}_{\mathrm{r}}^{\mathrm{T}}) = (a_1, \ldots, a_l, X_{\mathrm{o}}, Y_{\mathrm{o}}, Z_{\mathrm{o}}, \omega, \varphi, \kappa) = (a_1, \ldots, a_q).$$

For plane curve fitting, we simply ignore all terms concerning $z$, $Z$, $\omega$, and $\varphi$. We may try to describe the model feature $F$ with algebraic parameters $\mathbf{b}$ in the machine coordinate system XYZ,

$$F(\mathbf{b}(\mathbf{a}_{\mathrm{g}}, \mathbf{a}_{\mathrm{p}}, \mathbf{a}_{\mathrm{r}}), \mathbf{X}) = f(\mathbf{a}_{\mathrm{g}}, \mathbf{x}(\mathbf{a}_{\mathrm{p}}, \mathbf{a}_{\mathrm{r}}, \mathbf{X})) = 0,$$

although this is of no interest to us from the viewpoint of our model fitting algorithms. In comparison with other dimensional model fitting algorithms, our algorithms simultaneously estimate the physical parameters $\mathbf{a}$ of the model feature in terms of form $\mathbf{a}_{\mathrm{g}}$, position $\mathbf{a}_{\mathrm{p}}$, and rotation parameters $\mathbf{a}_{\mathrm{r}}$, thus providing very useful algorithmic features for various applications, e.g., robot vision, motion analysis, and coordinate metrology. For such applications, algebraic parameters $\mathbf{b}$ must otherwise be converted into physical parameters $\mathbf{a}$, even though the conversion is not an easy task for general model features. The novelty of our algorithms, as will be described in Section 3, is that the necessary derivative values for the nonlinear iteration are obtained from the standard model feature equation (6) and from the coordinate transformation equations (7). For application of our algorithms to a new model feature, we need merely provide the first and the second derivatives of the standard model feature equation (6), which usually has only a few form parameters. Functional interpretations and treatments of the position/rotation parameters concerning the coordinate transformation (7) are identical for all implicit model features.

2 ORTHOGONAL CONTACTING POINT

The time-consuming part of the proposed orthogonal distance fitting algorithms in this paper is the inner iteration of (5): finding the shortest distance points (orthogonal contacting points) $\{\mathbf{X}'_i\}_{i=1}^{m}$ on a general model feature from each given point $\{\mathbf{X}_i\}_{i=1}^{m}$. For a given point $\mathbf{x}_i$ in frame xyz,

$$\mathbf{x}_i = \mathbf{R}(\mathbf{X}_i - \mathbf{X}_{\mathrm{o}}), \qquad (9)$$

we determine the orthogonal contacting point on the standard model feature (6). Then, the orthogonal contacting point $\mathbf{X}'_i$ in frame XYZ from the given point $\mathbf{X}_i$ is obtained through a backward transformation of $\mathbf{x}'_i$ into XYZ. For some implicit features (e.g., circle, sphere, cylinder, cone, and torus), the orthogonal contacting point $\mathbf{x}'_i$ can be determined in closed form at a low computing cost. However, in general, we must iteratively find $\mathbf{x}'_i$ satisfying appropriate orthogonal contacting conditions. We describe two algorithms for finding the orthogonal contacting point $\mathbf{x}'_i$ in frame xyz. The first algorithm is derived from the general properties of the shortest distance point. The second algorithm is based on a constrained minimization method, the method of Lagrange multipliers [20]. Within the following sections, the orthogonal contacting point is marked with a prime, as $\mathbf{X}'_i$ in frame XYZ and as $\mathbf{x}'_i$ in frame xyz, once it has been determined and is ready to be used. Otherwise, it is denoted by the general coordinate vector $\mathbf{X}$ or $\mathbf{x}$.

Fig. 1. Implicit features and the orthogonal contacting point $\mathbf{x}'_i$ in frame xyz from the given point $\mathbf{X}_i$ in frame XYZ: (a) plane curve and (b) surface.
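The transformation pair (7)/(9) and its inverse can be sketched as follows (our own code; the paper's exact definitions of $\mathbf{R}_\omega$, $\mathbf{R}_\varphi$, $\mathbf{R}_\kappa$ are not reproduced in this excerpt, so a common x-y-z Euler-angle convention is assumed here):

```python
import numpy as np

# Sketch of the rigid-body transformation (7)/(9). The rotation factors
# below assume a common x-y-z Euler-angle convention; the paper's exact
# convention for R_omega, R_phi, R_kappa is defined elsewhere.
def rot(omega, phi, kappa):
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rw = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])   # rotation about x
    Rp = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])   # rotation about y
    Rk = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])   # rotation about z
    return Rk @ Rp @ Rw                                     # R = R_kappa R_phi R_omega

Xo = np.array([1.0, 2.0, 3.0])     # position parameters a_p
R = rot(0.1, -0.2, 0.3)            # rotation parameters a_r = (omega, phi, kappa)
X = np.array([4.0, 5.0, 6.0])      # a point in the machine frame XYZ
x = R @ (X - Xo)                   # machine frame -> model frame, eq. (9)
X_back = R.T @ x + Xo              # backward transformation, using R^-1 = R^T
print(np.allclose(X_back, X))      # True
```

Whatever the angle convention, $\mathbf{R}$ is orthonormal, so the backward transformation needs only the transpose, as stated in (8).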

2.1 Direct Method

A necessary condition for the shortest distance point $\mathbf{x}$ on the implicit model feature (6) from the given point $\mathbf{x}_i$ is that the line connecting $\mathbf{x}_i$ with $\mathbf{x}$ be parallel to the feature normal $\nabla f$ at $\mathbf{x}$ (Fig. 1):

$$\nabla f \times (\mathbf{x}_i - \mathbf{x}) = \mathbf{0}, \quad\text{where}\quad \nabla = (\partial/\partial x, \partial/\partial y, \partial/\partial z)^{\mathrm{T}}. \qquad (10)$$

Equation (10) is equivalent to

$$\frac{\partial f/\partial x}{x_i - x} = \frac{\partial f/\partial y}{y_i - y} = \frac{\partial f/\partial z}{z_i - z},$$

and only two of the three equation rows of (10) are independent. Equation (10) describes a space curve as the intersection of two implicit surfaces defined in frame xyz by two independent equation rows of (10). We can see that (10) generally expresses the parallel condition of the vector $(\mathbf{x}_i - \mathbf{x})$ to the feature normal $\nabla f$ of the iso-feature of (6),

$$f(\mathbf{a}_{\mathrm{g}}, \mathbf{x}) - \mathrm{const} = 0. \qquad (11)$$

As the constant in (11) varies continuously, the trajectory of the points lying on (11) and satisfying (10) draws the curve described by (10). We name the curve (10) the centrortho-curve about the point $\mathbf{x}_i$ to the iso-features (11). Our aim is to find the intersection of the centrortho-curve (10) with the implicit model feature (6). In other words, we would like to find the point $\mathbf{x}$ which simultaneously satisfies (6) and (10) (orthogonal contacting condition; see Fig. 2 for the case of a plane curve, an ellipse):


Fig. 3. Iterative search (direct method) for the orthogonal contacting point $\mathbf{x}'_i$ on $f(\mathbf{a}_{\mathrm{g}}, \mathbf{x}) = 0$ from the given point $\mathbf{x}_i$. The points $\mathbf{x}_{i,1}$ and $\mathbf{x}_{i,2}$ are, respectively, the first and the second approximation of $\mathbf{x}'_i$.

Fig. 2. Isocurves $x^2/64 + y^2/16 - \mathrm{const} = 0$ ($\mathrm{const} > 0$) and the trajectories (centrortho-curves) $(3x - 8)(3y + 2) + 16 = 0$ (thick line) and $(3x - 8)y = 0$ (broken line) of the orthogonal contacting points on the isocurves from the points (2, 2) and (2, 0), respectively.

$$\mathbf{f}(\mathbf{a}_{\mathrm{g}}, \mathbf{x}_i, \mathbf{x}) = \begin{pmatrix} f \\ \nabla f \times (\mathbf{x}_i - \mathbf{x}) \end{pmatrix} = \mathbf{0} \quad\text{with}\quad \mathbf{x}_i = \mathbf{R}(\mathbf{X}_i - \mathbf{X}_{\mathrm{o}}). \qquad (12)$$

The orthogonal contacting equation (12) governs the variational behavior of the orthogonal contacting point in frame xyz relative to the differential change of the feature parameters $\mathbf{a}$ and will play an important role in the coordinate-based algorithm to be described in Section 3.2. We directly solve (12) for $\mathbf{x}$ by using a generalized Newton method starting from the initial point $\mathbf{x}_0 = \mathbf{x}_i$ (Fig. 3); how to compute the matrix $\partial\mathbf{f}/\partial\mathbf{x}$ will be shown in Section 3.2:

$$\left.\frac{\partial\mathbf{f}}{\partial\mathbf{x}}\right|_k \Delta\mathbf{x} = -\mathbf{f}(\mathbf{x})\big|_k, \quad \mathbf{x}_{k+1} = \mathbf{x}_k + \Delta\mathbf{x}. \qquad (13)$$

If necessary, especially in order to prevent the point update $\Delta\mathbf{x}$ from heading for a divergence zone with an implicit feature having very high local curvature, we underweight the on-the-feature condition in (12) as below:

$$\mathbf{f}(\mathbf{a}_{\mathrm{g}}, \mathbf{x}_i, \mathbf{x}) = \begin{pmatrix} w f \\ \nabla f \times (\mathbf{x}_i - \mathbf{x}) \end{pmatrix} = \mathbf{0}, \quad\text{with}\quad 0 < w \le 1.$$

In the first iteration cycle with the starting point $\mathbf{x}_0 = \mathbf{x}_i$, the linear system (13) and its solution (i.e., the first moving step) are equivalent to

$$\begin{pmatrix} \nabla^{\mathrm{T}} f \\ \nabla f \times \end{pmatrix}\bigg|_{\mathbf{x}_i} \Delta\mathbf{x} = -\begin{pmatrix} f \\ \mathbf{0} \end{pmatrix}\bigg|_{\mathbf{x}_i} \quad\text{and}\quad \Delta\mathbf{x} = -\frac{f}{\|\nabla f\|^2}\,\nabla f\,\bigg|_{\mathbf{x}_i}.$$

It should be noted that the first moving step is the negative of the normalized algebraic error vector shown in the second formula of (1), which has been regarded in the literature [36], [40] as a good approximation of the geometric error vector. For plane curves in the xy-plane (i.e., $z = 0$), we simply ignore the second and the third row of (12), which contain terms in the coordinate $z$ in the cross product and are automatically satisfied; the fourth row of (12), without terms in $z$ in the cross product, is then nothing other than the orthogonality condition of the error vector $(\mathbf{x}_i - \mathbf{x})$ to the isocurve tangent in the xy-plane [3], [4]. From this fact, and for the sake of a common program module for curves and surfaces, we interchange the second and the fourth row of (12). Then, for the case of plane curves, we activate only the first two rows of the modified equation (12) in the program module (see Appendix A and Fig. 2 for an ellipse).
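The direct method can be sketched for the Fig. 2 ellipse $f = x^2/64 + y^2/16 - 1$ and the given point $\mathbf{x}_i = (2, 2)$ (our own minimal Python sketch of iteration (13) for a plane curve, using the two active rows $wf = 0$ and $f_x(y_i - y) - f_y(x_i - x) = 0$ of the modified equation (12)):

```python
import numpy as np

# Sketch of the direct method (Sec. 2.1) for a plane curve: generalized
# Newton iteration (13) on the two active rows of the modified orthogonal
# contacting equation (12), for the ellipse f = x^2/64 + y^2/16 - 1.
a2, b2 = 64.0, 16.0

def f(p):
    return p[0]**2 / a2 + p[1]**2 / b2 - 1.0

def grad(p):
    return np.array([2.0 * p[0] / a2, 2.0 * p[1] / b2])

def contact_point(xi, w=1.0, tol=1e-12, itmax=50):
    x = xi.copy()                           # start the iteration at x0 = xi
    hxx, hyy = 2.0 / a2, 2.0 / b2           # Hessian of f (diagonal for this ellipse)
    for _ in range(itmax):
        fx, fy = grad(x)
        # residual: weighted on-the-feature condition and orthogonality condition
        r = np.array([w * f(x), fx * (xi[1] - x[1]) - fy * (xi[0] - x[0])])
        # Jacobian of the residual with respect to x
        J = np.array([
            [w * fx, w * fy],
            [hxx * (xi[1] - x[1]) + fy, -hyy * (xi[0] - x[0]) - fx],
        ])
        dx = np.linalg.solve(J, -r)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

xi = np.array([2.0, 2.0])
xp = contact_point(xi)
print(xp, f(xp))
```

Starting from $\mathbf{x}_0 = \mathbf{x}_i$, the iteration converges to a contacting point in the same quadrant as $\mathbf{x}_i$, which also lies on the centrortho-curve $(3x-8)(3y+2)+16 = 0$ shown in Fig. 2.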

2.2 Constrained Minimization

Alternatively, we can find the orthogonal contacting point by minimizing the Lagrangian function [20]

$$L(\lambda, \mathbf{x}) = (\mathbf{x}_i - \mathbf{x})^{\mathrm{T}}(\mathbf{x}_i - \mathbf{x}) + \lambda f. \qquad (14)$$

The first order necessary condition for a minimum of the Lagrangian function (14) is

$$\begin{pmatrix} \nabla L \\ \partial L/\partial\lambda \end{pmatrix} = \begin{pmatrix} -2(\mathbf{x}_i - \mathbf{x}) + \lambda\nabla f \\ f \end{pmatrix} = \mathbf{0}. \qquad (15)$$

We iteratively solve (15) for $\mathbf{x}$ and $\lambda$ [27], [39]:

$$\begin{pmatrix} 2\mathbf{I} + \lambda\mathbf{H} & \nabla f \\ \nabla^{\mathrm{T}} f & 0 \end{pmatrix} \begin{pmatrix} \Delta\mathbf{x} \\ \Delta\lambda \end{pmatrix} = \begin{pmatrix} 2(\mathbf{x}_i - \mathbf{x}) - \lambda\nabla f \\ -f \end{pmatrix}, \quad\text{where}\quad \mathbf{H} = \frac{\partial}{\partial\mathbf{x}}\nabla f. \qquad (16)$$

In practice, we have observed that, once the second term of the Lagrangian function (14) becomes negative (the first term is always nonnegative), it is occasionally prone to become more negative (i.e., diverging) in the succeeding iteration cycles of (16) minimizing (14), although the second term is commanded to be zero. Consequently, the convergence of iteration (16) is strongly affected by the selection of the initial values. From this observation, and with a hint for determining the first moving step size taken from the direct method described in Section 2.1, we use the following initial values for $\mathbf{x}$ and $\lambda$:

$$\lambda_0 = \begin{cases} 2f/\|\nabla f\|^2\,\big|_{\mathbf{x}_i} & : \mathbf{x}_0 = \mathbf{x}_i \\ +1 & : \mathbf{x}_0 \ne \mathbf{x}_i \text{ and } f(\mathbf{a}_{\mathrm{g}}, \mathbf{x}_0) \ge 0 \\ -1 & : \mathbf{x}_0 \ne \mathbf{x}_i \text{ and } f(\mathbf{a}_{\mathrm{g}}, \mathbf{x}_0) < 0. \end{cases} \qquad (17)$$

Iteration (16) with the initial values in (17) converges relatively fast. Nevertheless, if the convergence of iteration (16) fails, we fall back on the direct method.
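Iteration (16) with the initial values (17) can be sketched for the same ellipse and given point as above (our own minimal implementation; the fallback to the direct method mentioned in the text is not included here):

```python
import numpy as np

# Sketch of the constrained-minimization search (Sec. 2.2): Newton steps on
# the Lagrangian conditions (15) via the linear system (16), initial values
# per (17), for the ellipse f = x^2/64 + y^2/16 - 1 and xi = (2, 2).
a2, b2 = 64.0, 16.0

def f(p):
    return p[0]**2 / a2 + p[1]**2 / b2 - 1.0

def grad(p):
    return np.array([2.0 * p[0] / a2, 2.0 * p[1] / b2])

H = np.diag([2.0 / a2, 2.0 / b2])        # Hessian of f (constant for a quadric)

def contact_point_lagrange(xi, tol=1e-12, itmax=100):
    x = xi.copy()                        # x0 = xi
    g = grad(x)
    lam = 2.0 * f(x) / (g @ g)           # lambda_0 of (17) for x0 = xi
    for _ in range(itmax):
        g = grad(x)
        A = np.zeros((3, 3))             # coefficient matrix of (16)
        A[:2, :2] = 2.0 * np.eye(2) + lam * H
        A[:2, 2] = g
        A[2, :2] = g
        rhs = np.concatenate([2.0 * (xi - x) - lam * g, [-f(x)]])
        d = np.linalg.solve(A, rhs)
        x, lam = x + d[:2], lam + d[2]
        if np.linalg.norm(d) < tol:
            break
    return x

xi = np.array([2.0, 2.0])
xp = contact_point_lagrange(xi)
print(xp, f(xp))
```

For this well-conditioned example, the iteration converges to the same contacting point as the direct method; for features with high local curvature, the fallback described in the text remains necessary.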


2.3 Verification of the Orthogonal Contacting Point

If no closed form solution of the orthogonal contacting point $\mathbf{x}'_i$ is available and we have to find $\mathbf{x}'_i$ through iteration, we would like to start the inner iteration described in Section 2.1 and Section 2.2 preferably from the initial point $\mathbf{x}_0 = \mathbf{x}_i$, because we then need not provide a special initial point (blind search). And, as another advantage, the initial point $\mathbf{x}_0 = \mathbf{x}_i$ almost always leads iterations (13) and (16) to the global minimum distance point, that is, the nearest point on the model feature from $\mathbf{x}_i$. However, despite this advantageous initial point selection, the global minimum distance of $\mathbf{x}'_i$ from $\mathbf{x}_i$ cannot be guaranteed in general. Conditions (12) and (15) merely constrain the line connecting the point $\mathbf{x}$ on $f(\mathbf{a}_{\mathrm{g}}, \mathbf{x}) = 0$ with the given point $\mathbf{x}_i$ to be parallel to the feature normal $\nabla f$ at $\mathbf{x}$. Thus, the orthogonal contacting point $\mathbf{x}'_i$ found by iteration (13) or (16) is, in principle, only a local extreme (maximum or minimum) distance point. For example, there is a maximum of four local extreme distance points on an ellipse for a given point (Fig. 2). While there is no general sufficient condition for the global minimum distance of $\mathbf{x}'_i$ from $\mathbf{x}_i$, we check the correctness of the point $\mathbf{x}'_i$ by testing multiple independent necessary conditions. With many well-known model features having axis or plane symmetry (e.g., ellipse, parabola, hyperbola, ellipsoid, paraboloid, etc.), the two points $\mathbf{x}'_i$ and $\mathbf{x}_i$ must lie in the same half-plane/space of the model coordinate frame xyz [4]. From this fact, we can simply check whether $\mathbf{x}'_i$ is not the global minimum distance point from $\mathbf{x}_i$ by comparing the coordinate signs of $\mathbf{x}'_i$ and $\mathbf{x}_i$. If necessary, we selectively invert the coordinate signs of $\mathbf{x}'_i$ and repeat iteration (13) or (16) starting from the modified $\mathbf{x}'_i$. We can also check the correctness of the point $\mathbf{x}'_i$ by examining the direction of the feature normal $\nabla f$ at $\mathbf{x}'_i$. Once iteration (13) or (16) is successfully terminated, we verify the direction of the feature normal by using another orthogonal contacting condition (Fig. 4):

$$\left.\frac{\nabla f}{\|\nabla f\|}\right|_{\mathbf{x}'_i} \cdot \frac{\mathbf{x}_i - \mathbf{x}'_i}{\|\mathbf{x}_i - \mathbf{x}'_i\|} - \operatorname{sign}\big(f(\mathbf{a}_{\mathrm{g}}, \mathbf{x}_i)\big) = 0. \qquad (18)$$

If, rarely, (18) is not satisfied and takes a value (probably $-2$) other than zero, then the line connecting the two points $\mathbf{x}_i$ and $\mathbf{x}'_i$ pierces the model feature an odd number of times (i.e., at least, $\mathbf{x}'_i$ is not the global minimum distance point). In this case, we repeat iteration (13) or (16) with an adjusted starting point (e.g., $\mathbf{x}_{i,1}$ on the line connecting the two points $\mathbf{x}_i$ and $\mathbf{x}'_i$ in Fig. 4).
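Condition (18) can be illustrated numerically (our own example on the circle $f = x^2 + y^2 - 4$ with the outside point $\mathbf{x}_i = (3, 0)$): the near point $(2, 0)$ passes the test with residual 0, while the antipodal local extreme point $(-2, 0)$ yields exactly the residual $-2$ mentioned above.

```python
import numpy as np

# Our own numerical illustration of the verification condition (18)
# on the circle f = x^2 + y^2 - 4.
def f(p):
    return p[0]**2 + p[1]**2 - 4.0

def grad(p):
    return np.array([2.0 * p[0], 2.0 * p[1]])

def residual_18(xi, xp):
    n = grad(xp) / np.linalg.norm(grad(xp))   # unit feature normal at x'_i
    e = (xi - xp) / np.linalg.norm(xi - xp)   # unit error direction
    return float(n @ e - np.sign(f(xi)))

xi = np.array([3.0, 0.0])
print(residual_18(xi, np.array([2.0, 0.0])))    # 0.0  -> accepted
print(residual_18(xi, np.array([-2.0, 0.0])))   # -2.0 -> rejected
```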

2.4 Acceleration of Finding the Orthogonal Contacting Point

The use of the initial point $\mathbf{x}_0 = \mathbf{x}_i$ for the iterative search of the orthogonal contacting point $\mathbf{x}'_i$ is very convenient and preferable; however, it demands a relatively high computing cost. Fortunately, the parameter update $\Delta\mathbf{a}$ generally becomes moderate in the second half-phase of the outer iteration (after the first 5-10 outer iteration cycles with most fitting tasks), and the orthogonal contacting point $\mathbf{X}'_i$ in frame XYZ then changes its position only incrementally between two consecutive outer iteration cycles. We can thus dramatically save the computing cost of the inner iteration and, consequently, the overall computing cost

Fig. 4. Verification of the orthogonal contacting point. The point $\mathbf{x}'_{i,1}$ satisfies (18), while $\mathbf{x}'_{i,2}$ does not: (a) $f(\mathbf{a}_{\mathrm{g}}, \mathbf{x}_i) > 0$; (b) $f(\mathbf{a}_{\mathrm{g}}, \mathbf{x}_i) < 0$.

of the orthogonal distance fitting if we use the current orthogonal contacting point $\mathbf{X}'_i$ as a reasonable initial point for the inner iteration inside the next outer iteration cycle:

$$\mathbf{x}_{0,\,\mathbf{a}+\Delta\mathbf{a}} = \mathbf{R}_{\mathbf{a}+\Delta\mathbf{a}}\big(\mathbf{X}'_{i,\,\mathbf{a}} - \mathbf{X}_{\mathrm{o},\,\mathbf{a}+\Delta\mathbf{a}}\big). \qquad (19)$$

However, a pitfall of using (19) must be mentioned. If the current orthogonal contacting point $\mathbf{X}'_i$ once becomes a local (not global) minimum distance point after an update of the parameters $\mathbf{a}$, it can be inherited into the next outer iteration as long as we use (19) (Fig. 5). In order to interrupt this unintended development, we periodically (e.g., every 5-10th outer iteration cycle) start the inner iteration from the given point $\mathbf{X}_i$ during the second half-phase of the outer iteration:

$$\mathbf{x}_{0,\,\mathbf{a}+\Delta\mathbf{a}} = \mathbf{R}_{\mathbf{a}+\Delta\mathbf{a}}\big(\mathbf{X}_i - \mathbf{X}_{\mathrm{o},\,\mathbf{a}+\Delta\mathbf{a}}\big). \qquad (20)$$
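The two initial-point choices (19) and (20) amount to transforming either the previous contacting point or the given point into the updated model frame (our own 2D illustration with hypothetical pose values; `R2` is an assumed plane-rotation helper):

```python
import numpy as np

# Our own 2D illustration of the warm start (19) and the periodic
# restart (20); the pose values below are hypothetical.
def R2(kappa):
    c, s = np.cos(kappa), np.sin(kappa)
    return np.array([[c, s], [-s, c]])

Xi = np.array([3.0, 1.0])          # given point, machine frame
Xp_prev = np.array([2.5, 0.8])     # contacting point X'_i from the previous cycle
R_new = R2(0.05)                   # rotation of the updated pose a + da
Xo_new = np.array([0.1, -0.2])     # translation of the updated pose a + da

x0_warm = R_new @ (Xp_prev - Xo_new)   # (19): warm start from X'_i
x0_cold = R_new @ (Xi - Xo_new)        # (20): restart from the given point
print(x0_warm, x0_cold)
```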

We summarize the strategy for efficient and inexpensive finding of the orthogonal contacting point $\mathbf{x}'_i$:

- Use the closed form solution of $\mathbf{x}'_i$, if available;
- Start the inner iteration from the initial point $\mathbf{x}_0$ of (20) in the first half-phase of the outer iteration, during which the parameter update $\Delta\mathbf{a}$ is generally wild;
- In the second half-phase of the outer iteration, start the inner iteration from the initial point $\mathbf{x}_0$ of (19);
- Periodically, in the second half-phase of the outer iteration, start the inner iteration from the initial point $\mathbf{x}_0$ of (20).

3 ORTHOGONAL DISTANCE FITTING

With the algorithms described in Section 2 (inner iteration), the shortest distance points $\{\mathbf{X}'_i\}_{i=1}^{m}$ on the current model feature from each given point $\{\mathbf{X}_i\}_{i=1}^{m}$ are found and are now available for the parameter updating (outer iteration) in the nested iteration scheme (5). The inner iteration and the parameter updating are alternately repeated until appropriate break conditions are satisfied. In this section, we present two algorithms for updating the model feature parameters: the distance-based algorithm and the coordinate-based algorithm, minimizing the performance indexes (3) and (4), respectively.

Fig. 5. Orthogonal contacting points $\mathbf{X}'_{i,1}$ and $\mathbf{X}'_{i,2}$ on the updated feature $F(\mathbf{a}+\Delta\mathbf{a}, \mathbf{X}) = 0$ from the given point $\mathbf{X}_i$. The iterative search of the orthogonal contacting point may converge to $\mathbf{X}'_{i,1}$ if the iteration starts from the current orthogonal contacting point $\mathbf{X}'_i$ on $F(\mathbf{a}, \mathbf{X}) = 0$ (19), while starting from the given point $\mathbf{X}_i$ it converges to $\mathbf{X}'_{i,2}$ (20), with $\|\mathbf{X}_i - \mathbf{X}'_{i,1}\| > \|\mathbf{X}_i - \mathbf{X}'_{i,2}\|$.

3.1 Distance-Based Algorithm

The first order necessary condition for a minimum of the performance index (3) as a function of the parameters $\mathbf{a}$ is

$$\frac{\partial\sigma_0^2}{\partial\mathbf{a}} = 2\mathbf{J}^{\mathrm{T}}\mathbf{P}^{\mathrm{T}}\mathbf{P}\mathbf{d} = \mathbf{0}, \quad\text{where}\quad \mathbf{J} = \frac{\partial\mathbf{d}}{\partial\mathbf{a}}. \qquad (21)$$

We may iteratively solve (21) for $\mathbf{a}$ by using the Newton method,

$$\big(\mathbf{J}^{\mathrm{T}}\mathbf{P}^{\mathrm{T}}\mathbf{P}\mathbf{J} + \mathbf{H}^{\mathrm{T}}\mathbf{P}^{\mathrm{T}}\mathbf{P}\mathbf{d}\big)\,\Delta\mathbf{a} = -\mathbf{J}^{\mathrm{T}}\mathbf{P}^{\mathrm{T}}\mathbf{P}\mathbf{d}, \quad\text{where}\quad \mathbf{H} = \frac{\partial^2\mathbf{d}}{\partial\mathbf{a}^2}, \qquad (22)$$

or, more conveniently, using the Gauss-Newton method, ignoring the second derivatives term in (22):

$$\mathbf{J}^{\mathrm{T}}\mathbf{P}^{\mathrm{T}}\mathbf{P}\mathbf{J}\big|_k\,\Delta\mathbf{a} = -\mathbf{J}^{\mathrm{T}}\mathbf{P}^{\mathrm{T}}\mathbf{P}\mathbf{d}\big|_k, \quad \mathbf{a}_{k+1} = \mathbf{a}_k + \Delta\mathbf{a}. \qquad (23)$$

In this paper, we iteratively estimate the parameters $\mathbf{a}$ by using the direct form of the Gauss normal equation (23),

$$\mathbf{P}\mathbf{J}\big|_k\,\Delta\mathbf{a} = -\mathbf{P}\mathbf{d}\big|_k, \quad \mathbf{a}_{k+1} = \mathbf{a}_k + \Delta\mathbf{a}, \qquad (24)$$

with break conditions for iteration (24)

$$\|\mathbf{J}^{\mathrm{T}}\mathbf{P}^{\mathrm{T}}\mathbf{P}\mathbf{d}\| \approx 0 \quad\text{or}\quad \|\Delta\mathbf{a}\| \approx 0 \quad\text{or}\quad \sigma_0^2\big|_k - \sigma_0^2\big|_{k+1} \approx 0.$$

The first break condition denotes that the error distance vector and its gradient vectors in the linear system (24) should be orthogonal, and it is equivalent to condition (21). Once it is satisfied, there will be practically no further parameter update ($\|\Delta\mathbf{a}\| \approx 0$) and no improvement of the performance index $\sigma_0^2$. In order to complete the linear system (24), we have to provide the Jacobian matrix of each distance $d_i$ between the given point $\mathbf{X}_i$ and the orthogonal contacting point $\mathbf{X}'_i$ on the current model feature:

$$d_i = \|\mathbf{X}_i - \mathbf{X}'_i\| = \sqrt{(\mathbf{X}_i - \mathbf{X}'_i)^{\mathrm{T}}(\mathbf{X}_i - \mathbf{X}'_i)}. \qquad (25)$$
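A minimal self-contained instance of the Gauss normal equation (24) (our own sketch, with $\mathbf{P} = \mathbf{I}$ and signed residuals for convenience) is circle fitting, where the orthogonal distances (25) are available in closed form, $d_i = \|\mathbf{X}_i - \mathbf{X}_{\mathrm{c}}\| - r$, and $\mathbf{J}$ follows by differentiating with respect to $\mathbf{a} = (r, X_{\mathrm{c}}, Y_{\mathrm{c}})$:

```python
import numpy as np

# Our own minimal instance of the distance-based iteration (24), P = I,
# for a circle with form parameter r and position parameters (Xc, Yc).
def fit_circle(X, a0, itmax=100, tol=1e-12):
    a = np.array(a0, dtype=float)             # a = (r, Xc, Yc)
    for _ in range(itmax):
        r, c = a[0], a[1:]
        diff = X - c                          # m x 2 residual coordinates
        rho = np.linalg.norm(diff, axis=1)    # distances to the center
        d = rho - r                           # signed orthogonal distances (25)
        # Jacobian J = dd/da: dd/dr = -1, dd/dc = -(X - c)/rho
        J = np.column_stack([-np.ones(len(X)), -diff[:, 0] / rho, -diff[:, 1] / rho])
        da = np.linalg.lstsq(J, -d, rcond=None)[0]   # Gauss normal equation (24)
        a += da
        if np.linalg.norm(da) < tol:
            break
    return a

# points on the circle r = 2 centered at (1, -1)
t = np.linspace(0.0, 2 * np.pi, 20, endpoint=False)
X = np.column_stack([1 + 2 * np.cos(t), -1 + 2 * np.sin(t)])
print(fit_circle(X, a0=(1.0, 0.0, 0.0)))
```

For a circle, the inner iteration of (5) is unnecessary; for a general implicit feature, `d` and `J` would instead come from the iteratively found contacting points and from (27).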


From (7) and (25), we get

$$\mathbf{J}_{d_i,\mathbf{a}} = \frac{\partial d_i}{\partial\mathbf{a}} = -\frac{(\mathbf{X}_i - \mathbf{X}'_i)^{\mathrm{T}}}{\|\mathbf{X}_i - \mathbf{X}'_i\|}\left.\frac{\partial\mathbf{X}}{\partial\mathbf{a}}\right|_{\mathbf{X}=\mathbf{X}'_i} = -\frac{(\mathbf{X}_i - \mathbf{X}'_i)^{\mathrm{T}}}{\|\mathbf{X}_i - \mathbf{X}'_i\|}\left(\frac{\partial\mathbf{R}^{-1}}{\partial\mathbf{a}}[\mathbf{x}] + \frac{\partial\mathbf{X}_{\mathrm{o}}}{\partial\mathbf{a}} + \mathbf{R}^{-1}\frac{\partial\mathbf{x}}{\partial\mathbf{a}}\right)\bigg|_{\mathbf{x}=\mathbf{x}'_i}$$
$$= -\frac{(\mathbf{x}_i - \mathbf{x}'_i)^{\mathrm{T}}}{\|\mathbf{x}_i - \mathbf{x}'_i\|}\left.\frac{\partial\mathbf{x}}{\partial\mathbf{a}}\right|_{\mathbf{x}=\mathbf{x}'_i} - \frac{(\mathbf{X}_i - \mathbf{X}'_i)^{\mathrm{T}}}{\|\mathbf{X}_i - \mathbf{X}'_i\|}\left(\mathbf{0} \;\Big|\; \mathbf{I} \;\Big|\; \frac{\partial\mathbf{R}^{-1}}{\partial\mathbf{a}_{\mathrm{r}}}[\mathbf{x}'_i]\right), \qquad (26)$$

where

$$\mathbf{a}^{\mathrm{T}} = (\mathbf{a}_{\mathrm{g}}^{\mathrm{T}}, \mathbf{a}_{\mathrm{p}}^{\mathrm{T}}, \mathbf{a}_{\mathrm{r}}^{\mathrm{T}}), \quad \frac{\partial\mathbf{R}}{\partial\mathbf{a}_{\mathrm{r}}} = \left(\frac{\partial\mathbf{R}}{\partial\omega} \;\; \frac{\partial\mathbf{R}}{\partial\varphi} \;\; \frac{\partial\mathbf{R}}{\partial\kappa}\right), \quad\text{and}\quad [\mathbf{x}] = \begin{pmatrix} \mathbf{x} & & \mathbf{0} \\ & \ddots & \\ \mathbf{0} & & \mathbf{x} \end{pmatrix}.$$

Furthermore, from the facts $(\mathbf{x}_i - \mathbf{x}'_i) \parallel \nabla f\,\big|_{\mathbf{x}=\mathbf{x}'_i}$ and

$$\nabla^{\mathrm{T}} f\,\frac{\partial\mathbf{x}}{\partial\mathbf{a}} = \frac{\partial f}{\partial\mathbf{x}}\frac{\partial\mathbf{x}}{\partial\mathbf{a}} = -\frac{\partial f}{\partial\mathbf{a}} = -\left(\left(\frac{\partial f}{\partial\mathbf{a}_{\mathrm{g}}}\right)^{\mathrm{T}} \; \mathbf{0}^{\mathrm{T}} \; \mathbf{0}^{\mathrm{T}}\right) \quad\text{by}\quad f(\mathbf{a}, \mathbf{x}) = f(\mathbf{a}_{\mathrm{g}}, \mathbf{x}) = 0,$$

the Jacobian matrix $\mathbf{J}_{d_i,\mathbf{a}}$ in (26) eventually becomes

$$\mathbf{J}_{d_i,\mathbf{a}} = \frac{\operatorname{sign}\big((\mathbf{x}_i - \mathbf{x}'_i)^{\mathrm{T}}\nabla f\big)}{\|\nabla f\|}\left(\left(\frac{\partial f}{\partial\mathbf{a}_{\mathrm{g}}}\right)^{\mathrm{T}} \; \mathbf{0}^{\mathrm{T}} \; \mathbf{0}^{\mathrm{T}}\right)\bigg|_{\mathbf{x}=\mathbf{x}'_i} - \frac{(\mathbf{X}_i - \mathbf{X}'_i)^{\mathrm{T}}}{\|\mathbf{X}_i - \mathbf{X}'_i\|}\left(\mathbf{0} \;\Big|\; \mathbf{I} \;\Big|\; \frac{\partial\mathbf{R}^{-1}}{\partial\mathbf{a}_{\mathrm{r}}}[\mathbf{x}'_i]\right). \qquad (27)$$

In comparison with Sullivan et al.'s algorithm [39], our algorithm, comprising (24) and (27), simultaneously estimates the parameters $\mathbf{a}$ in terms of form, position, and rotation parameters. Moreover, Sullivan et al.'s algorithm obtains its Jacobian matrix unnecessarily expensively from (2). If we apply a procedure similar to (26) and (27) to $F(\mathbf{b}, \mathbf{X}) = 0$ with algebraic parameters $\mathbf{b}$ defined in the machine coordinate system XYZ, the Jacobian matrix for Sullivan's algorithm can be obtained as below at a lower implementation cost, without using (2):

$$\mathbf{J}_{d_i,\mathbf{b}} = \frac{\operatorname{sign}\big((\mathbf{X}_i - \mathbf{X}'_i)^{\mathrm{T}}\nabla F\big)}{\|\nabla F\|}\left.\frac{\partial F}{\partial\mathbf{b}}\right|_{\mathbf{X}=\mathbf{X}'_i}, \quad\text{with}\quad \nabla = (\partial/\partial X, \partial/\partial Y, \partial/\partial Z)^{\mathrm{T}}.$$

3.2 Coordinate-Based Algorithm

In a similar way to the procedure (21)-(24) carried out in Section 3.1, we construct the Gauss-Newton iteration minimizing the performance index (4),

$$\mathbf{P}\mathbf{J}\big|_k\,\Delta\mathbf{a} = \mathbf{P}(\mathbf{X} - \mathbf{X}')\big|_k, \quad \mathbf{a}_{k+1} = \mathbf{a}_k + \Delta\mathbf{a}, \quad\text{where}\quad \mathbf{J} = \frac{\partial\mathbf{X}'}{\partial\mathbf{a}}, \qquad (28)$$

with break conditions

$$\|\mathbf{J}^{\mathrm{T}}\mathbf{P}^{\mathrm{T}}\mathbf{P}(\mathbf{X} - \mathbf{X}')\| \approx 0 \quad\text{or}\quad \|\Delta\mathbf{a}\| \approx 0 \quad\text{or}\quad \sigma_0^2\big|_k - \sigma_0^2\big|_{k+1} \approx 0,$$

and with the Jacobian matrices of each orthogonal contacting point $\mathbf{X}'_i$ on the current model feature, directly derived from (7):

$$\mathbf{J}_{\mathbf{X}'_i,\mathbf{a}} = \left.\frac{\partial\mathbf{X}}{\partial\mathbf{a}}\right|_{\mathbf{X}=\mathbf{X}'_i} = \left(\frac{\partial\mathbf{R}^{-1}}{\partial\mathbf{a}}[\mathbf{x}] + \frac{\partial\mathbf{X}_{\mathrm{o}}}{\partial\mathbf{a}} + \mathbf{R}^{-1}\frac{\partial\mathbf{x}}{\partial\mathbf{a}}\right)\bigg|_{\mathbf{x}=\mathbf{x}'_i} = \mathbf{R}^{-1}\left.\frac{\partial\mathbf{x}}{\partial\mathbf{a}}\right|_{\mathbf{x}=\mathbf{x}'_i} + \left(\mathbf{0} \;\Big|\; \mathbf{I} \;\Big|\; \frac{\partial\mathbf{R}^{-1}}{\partial\mathbf{a}_{\mathrm{r}}}[\mathbf{x}'_i]\right). \qquad (29)$$

The derivative matrix $\partial\mathbf{x}/\partial\mathbf{a}$ at $\mathbf{x} = \mathbf{x}'_i$ in (29) describes the variational behavior of the orthogonal contacting point $\mathbf{x}'_i$ in frame xyz relative to the differential changes of the parameter vector $\mathbf{a}$. Purposefully, we obtain $\partial\mathbf{x}/\partial\mathbf{a}$ from the orthogonal contacting equation (12). Because (12) has an implicit form, its derivatives lead to

$$\frac{\partial\mathbf{f}}{\partial\mathbf{x}}\frac{\partial\mathbf{x}}{\partial\mathbf{a}} + \frac{\partial\mathbf{f}}{\partial\mathbf{x}_i}\frac{\partial\mathbf{x}_i}{\partial\mathbf{a}} + \frac{\partial\mathbf{f}}{\partial\mathbf{a}} = \mathbf{0} \quad\text{or}\quad \frac{\partial\mathbf{f}}{\partial\mathbf{x}}\frac{\partial\mathbf{x}}{\partial\mathbf{a}} = -\left(\frac{\partial\mathbf{f}}{\partial\mathbf{x}_i}\frac{\partial\mathbf{x}_i}{\partial\mathbf{a}} + \frac{\partial\mathbf{f}}{\partial\mathbf{a}}\right), \qquad (30)$$

where $\partial\mathbf{x}_i/\partial\mathbf{a}$ is, from (9),

$$\frac{\partial\mathbf{x}_i}{\partial\mathbf{a}} = \frac{\partial\mathbf{R}}{\partial\mathbf{a}}[\mathbf{X}_i - \mathbf{X}_{\mathrm{o}}] - \mathbf{R}\frac{\partial\mathbf{X}_{\mathrm{o}}}{\partial\mathbf{a}} = \left(\mathbf{0} \;\Big|\; -\mathbf{R} \;\Big|\; \frac{\partial\mathbf{R}}{\partial\mathbf{a}_{\mathrm{r}}}[\mathbf{X}_i - \mathbf{X}_{\mathrm{o}}]\right).$$

The other three matrices $\partial\mathbf{f}/\partial\mathbf{x}$, $\partial\mathbf{f}/\partial\mathbf{x}_i$, and $\partial\mathbf{f}/\partial\mathbf{a}$ in (13) and (30) are to be directly derived from (12). The elements of these three matrices are composed of simple linear combinations of components of the error vector $(\mathbf{x}_i - \mathbf{x})$ with elements of the following three vector/matrices $\nabla f$, $\mathbf{H}$, and $\mathbf{G}$ (FHG matrix):

$$\nabla f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z}\right)^{\mathrm{T}}, \quad \mathbf{H} = \frac{\partial}{\partial\mathbf{x}}\nabla f, \quad \mathbf{G} = \frac{\partial}{\partial\mathbf{a}_{\mathrm{g}}}\nabla f. \qquad (31)$$

In compact form, with the cross-product matrix $[\mathbf{v}]_\times$ defined by $[\mathbf{v}]_\times\mathbf{u} = \mathbf{v}\times\mathbf{u}$ and with the rows of (12) ordered as above (before the interchange mentioned below),

$$\frac{\partial\mathbf{f}}{\partial\mathbf{x}} = \begin{pmatrix} \nabla^{\mathrm{T}} f \\ -[\nabla f]_\times - [\mathbf{x}_i - \mathbf{x}]_\times\mathbf{H} \end{pmatrix}, \quad \frac{\partial\mathbf{f}}{\partial\mathbf{x}_i} = \begin{pmatrix} \mathbf{0}^{\mathrm{T}} \\ [\nabla f]_\times \end{pmatrix}, \quad \frac{\partial\mathbf{f}}{\partial\mathbf{a}} = \begin{pmatrix} (\partial f/\partial\mathbf{a}_{\mathrm{g}})^{\mathrm{T}} & \mathbf{0}^{\mathrm{T}} & \mathbf{0}^{\mathrm{T}} \\ -[\mathbf{x}_i - \mathbf{x}]_\times\mathbf{G} & \mathbf{0} & \mathbf{0} \end{pmatrix}.$$

Now, (30) can be solved for $\partial\mathbf{x}/\partial\mathbf{a}$ at $\mathbf{x} = \mathbf{x}'_i$ and, consequently, the Jacobian matrix (29) and the linear system (28) can be completed and solved for the parameter update $\Delta\mathbf{a}$. For the sake of a common program module for surfaces and plane curves, we have interchanged, without loss of generality, the second and the fourth row of (12). Then, for plane curve fitting, we consider only the first two rows of the modified equation (12); in other words, we consider only the upper-left block of the FHG matrix in (31) (see Appendix A).

3.3 Comparison of the Two Algorithms

Each of the distance-based and coordinate-based algorithms for orthogonal distance fitting of implicit features is versatile and very efficient from the viewpoint of application to a new model feature. We would like to stress that only the standard model feature equation (6), without involvement of the position/rotation parameters, is required in (31) for implementation of either of the two algorithms for a new model feature (the second derivatives $\partial\nabla f/\partial\mathbf{a}_{\mathrm{g}}$ are required only for the coordinate-based algorithm). The overall structure of our algorithms remains unchanged for all dimensional fitting problems of implicit features. All that is necessary for orthogonal distance fitting of a new implicit model feature is to derive the FHG matrix of (31) from (6) of the new model feature and to supply a proper set of initial parameter values $\mathbf{a}_0$ for iterations (24) and (28). This fact makes possible a realization of versatile and very efficient orthogonal distance fitting algorithms for implicit surfaces and plane curves. An overall schematic information flow is shown in Fig. 6 (coordinate-based algorithm). We compare the two algorithms from the viewpoint of implementation and computing cost as follows:

- Common features:
  - For implementation with a new model feature, only the standard model feature equation (6) is eventually required;
  - Computing time and memory space cost are proportional to the number of data points;
  - The two algorithms demand about the same computing cost, because the time-consuming part of the overall procedure is the finding of the orthogonal contacting points.
- Different features:
  - The distance-based algorithm demands smaller memory space (with plane curve fitting one half, and with surface fitting one third, of what the coordinate-based algorithm does);
  - For a new model feature, the implementation of the distance-based algorithm is relatively easy because it does not require the second derivatives $\partial\nabla f/\partial\mathbf{a}_{\mathrm{g}}$ of the model feature;
  - With the coordinate-based algorithm, each coordinate $X_i$, $Y_i$, $Z_i$ of a measurement point can be individually weighted, while only pointwise weighting is possible with the distance-based algorithm. This algorithmic feature of the coordinate-based algorithm is practical for measuring equipment whose measuring accuracy is not identical between measuring axes. In this respect, the coordinate-based algorithm is more general than the distance-based algorithm;
  - A comparison between (26) and (29) reveals $d_i = \mathbf{N}_i^{\mathrm{T}}(\mathbf{X}_i -$

X0i †;

where Ni ˆ

Jdi ;a ˆ Xi kXi

NT i JX0i ;a ;

X0i : X0i k
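The outer Gauss-Newton update shared by both algorithms can be made concrete with a minimal, hypothetical sketch that is not from the paper: a distance-based fit of a plain circle, the one model feature whose orthogonal contacting point, and hence d_i, has a closed form. The function names and the small normal-equation solver are our own illustration, not the authors' implementation.

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(3):
        p = max(range(k, 3), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, 3):
            f = M[i][k] / M[k][k]
            for j in range(k, 4):
                M[i][j] -= f * M[k][j]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x

def fit_circle_odf(points, a0, max_iter=50, tol=1e-10):
    """Gauss-Newton orthogonal distance fit of a circle, parameters
    a = (Xc, Yc, r).  For a circle the orthogonal distance is available
    in closed form, d_i = |X_i - c| - r, so no inner iteration is needed."""
    Xc, Yc, r = a0
    for _ in range(max_iter):
        J, d = [], []
        for (X, Y) in points:
            rho = math.hypot(X - Xc, Y - Yc)
            d.append(rho - r)  # signed orthogonal distance
            J.append([-(X - Xc) / rho, -(Y - Yc) / rho, -1.0])
        # normal equations of the Gauss-Newton step: (J^T J) da = -J^T d
        m = len(d)
        JTJ = [[sum(J[i][p] * J[i][q] for i in range(m)) for q in range(3)]
               for p in range(3)]
        JTd = [sum(J[i][p] * d[i] for i in range(m)) for p in range(3)]
        da = solve3(JTJ, [-v for v in JTd])
        Xc, Yc, r = Xc + da[0], Yc + da[1], r + da[2]
        if max(abs(v) for v in da) < tol:
            break
    return Xc, Yc, r
```

Starting the iteration from the gravitational center and the rms central distance of the points, as the paper suggests for circles, makes this toy version converge in a few cycles.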


Fig. 6. Information flow for orthogonal distance fitting of implicit features (coordinate-based algorithm).

3.4 Parameter Constraint
If any parameter constraint (e.g., on size, area, volume, position, or orientation of the model feature)

$$
f_{cj}(\mathbf{a}) - \text{const}_j = 0, \qquad j = 1, \ldots, n,
\tag{32}
$$

or, in vector form, f_c(a) − const = 0, is to be additionally applied, we append the following n equations (33) with large weighting values w_c to the linear systems (24) and (28):

$$
w_{cj}\,\frac{\partial f_{cj}}{\partial \mathbf{a}}\,\Delta\mathbf{a}
= -\,w_{cj}\bigl(f_{cj}(\mathbf{a}) - \text{const}_j\bigr), \qquad j = 1, \ldots, n.
\tag{33}
$$

Considering the parameter constraints (32), we can rewrite the performance indexes (3) and (4) and the linear systems (24) and (28) as follows:

$$
\sigma_0^2 = \begin{pmatrix} \mathbf{P}\mathbf{d} \\ \mathbf{W}_c(\mathbf{f}_c(\mathbf{a}) - \mathbf{const}) \end{pmatrix}^{\!\mathrm T}
\begin{pmatrix} \mathbf{P}\mathbf{d} \\ \mathbf{W}_c(\mathbf{f}_c(\mathbf{a}) - \mathbf{const}) \end{pmatrix},
\quad
\begin{pmatrix} \mathbf{P}\mathbf{J} \\ \mathbf{W}_c\mathbf{J}_c \end{pmatrix}\Delta\mathbf{a}
= -\begin{pmatrix} \mathbf{P}\mathbf{d} \\ \mathbf{W}_c(\mathbf{f}_c(\mathbf{a}) - \mathbf{const}) \end{pmatrix},
\quad
\mathbf{J} = \frac{\partial \mathbf{d}}{\partial \mathbf{a}},\;
\mathbf{J}_c = \frac{\partial \mathbf{f}_c}{\partial \mathbf{a}},
\tag{34}
$$

for the distance-based algorithm and

$$
\sigma_0^2 = \begin{pmatrix} \mathbf{P}(\mathbf{X} - \mathbf{X}') \\ \mathbf{W}_c(\mathbf{f}_c(\mathbf{a}) - \mathbf{const}) \end{pmatrix}^{\!\mathrm T}
\begin{pmatrix} \mathbf{P}(\mathbf{X} - \mathbf{X}') \\ \mathbf{W}_c(\mathbf{f}_c(\mathbf{a}) - \mathbf{const}) \end{pmatrix},
\quad
\begin{pmatrix} \mathbf{P}\mathbf{J} \\ \mathbf{W}_c\mathbf{J}_c \end{pmatrix}\Delta\mathbf{a}
= \begin{pmatrix} \mathbf{P}(\mathbf{X} - \mathbf{X}') \\ -\mathbf{W}_c(\mathbf{f}_c(\mathbf{a}) - \mathbf{const}) \end{pmatrix},
\quad
\mathbf{J} = \frac{\partial \mathbf{X}'}{\partial \mathbf{a}},\;
\mathbf{J}_c = \frac{\partial \mathbf{f}_c}{\partial \mathbf{a}},
\tag{35}
$$

for the coordinate-based algorithm.

3.5 Parameter Test and Object Classification
The Jacobian matrix in (34) and (35) will be decomposed by SVD [25], [33],

$$
\begin{pmatrix} \mathbf{P}\mathbf{J} \\ \mathbf{W}_c\mathbf{J}_c \end{pmatrix}
= \mathbf{U}\mathbf{W}\mathbf{V}^{\mathrm T}
\qquad\text{with}\quad
\mathbf{U}^{\mathrm T}\mathbf{U} = \mathbf{V}^{\mathrm T}\mathbf{V} = \mathbf{I},
\quad
\mathbf{W} = \mathrm{diag}(w_1, \ldots, w_q),
$$

and now the linear systems (34) and (35) can be solved for Δa. After a successful termination of iterations (34) and (35), along with the performance index σ0², the Jacobian matrix provides useful information about the quality of the parameter estimation as follows:

- Covariance matrix:

$$
\mathrm{Cov}(\hat{\mathbf{a}}) = \left[\begin{pmatrix} \mathbf{P}\mathbf{J} \\ \mathbf{W}_c\mathbf{J}_c \end{pmatrix}^{\!\mathrm T}\begin{pmatrix} \mathbf{P}\mathbf{J} \\ \mathbf{W}_c\mathbf{J}_c \end{pmatrix}\right]^{-1} = \mathbf{V}\mathbf{W}^{-2}\mathbf{V}^{\mathrm T};
$$

- Parameter covariance:

$$
\mathrm{Cov}(\hat{a}_j, \hat{a}_k) = \sum_{i=1}^{q}\frac{V_{ji}V_{ki}}{w_i^2}, \qquad j, k = 1, \ldots, q;
\tag{36}
$$

- Variance of parameters:

$$
\sigma^2(\hat{a}_j) = \frac{\sigma_0^2}{m + n - q}\,\mathrm{Cov}(\hat{a}_j, \hat{a}_j), \qquad j = 1, \ldots, q;
$$

- Correlation coefficients:

$$
\rho(\hat{a}_j, \hat{a}_k) = \frac{\mathrm{Cov}(\hat{a}_j, \hat{a}_k)}{\sqrt{\mathrm{Cov}(\hat{a}_j, \hat{a}_j)\,\mathrm{Cov}(\hat{a}_k, \hat{a}_k)}}, \qquad j, k = 1, \ldots, q.
$$

Using the above information, we can test the reliability of the estimated parameters â and the propriety of the model selection (object classification). For example, we may try to fit an ellipsoid to a set of measurement points of a sphere-like object surface. Then, besides â ≈ b̂ ≈ ĉ and the existence of strong correlations between the rotation parameters and the other parameters, we get relatively large variances of


Fig. 7. Orthogonal distance fit to the points set in Table 1: (a) Circle fit. (b) Ellipse fit. (c) Superellipse fit. (d) Convergence of the fit. Iteration number 0-6: circle, 7-21: ellipse, and 22-: superellipse fit.

the rotation parameters. In this case, the rotation parameters are redundant (overparameterized) and, although the fitting of an ellipsoid to the points set has better performance than a sphere fitting according to the performance index σ0², the proper (reliable) model feature for the points set is a sphere. The parameter covariance (36) with the distance-based algorithm is generally a little larger than that with the coordinate-based algorithm. If σ0² is small enough to justify ignoring the second-derivative terms in (24) and (28), there is practically no difference in the parameter covariance (36) between the two algorithms.
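As a hedged illustration of how the quality measures above can be evaluated in practice, the following sketch computes the covariance, variances, and a correlation coefficient from a final Jacobian and residual vector. It forms (JᵀJ)⁻¹ directly, which equals the V W⁻² Vᵀ of the SVD route, and is restricted to q = 2 unconstrained parameters for brevity; the function name and this simplification are our own, not the paper's code.

```python
import math

def parameter_quality(J, residuals, n_constraints=0):
    """Covariance matrix, parameter variances, and the correlation
    coefficient of a 2-parameter estimate from the final Jacobian.
    Follows the structure of (36) with sigma_0^2 / (m + n - q) scaling."""
    m, q = len(J), 2
    s0sq = sum(r * r for r in residuals)  # performance index sigma_0^2
    a = sum(J[i][0] * J[i][0] for i in range(m))
    b = sum(J[i][0] * J[i][1] for i in range(m))
    c = sum(J[i][1] * J[i][1] for i in range(m))
    det = a * c - b * b
    cov = [[c / det, -b / det], [-b / det, a / det]]  # (J^T J)^{-1}
    var = [s0sq / (m + n_constraints - q) * cov[j][j] for j in range(q)]
    rho = cov[0][1] / math.sqrt(cov[0][0] * cov[1][1])
    return cov, var, rho
```

For a straight-line fit y = p0 + p1·x with abscissas symmetric about zero, the two parameters decorrelate (ρ = 0), which is the kind of diagnostic the text uses for model selection.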

4 FITTING EXAMPLES

4.1 Superellipse Fitting
A superellipse [22] is a generalization of an ellipse and is described by

$$
f(a, b, \varepsilon; \mathbf{x}) = (|x|/a)^{2/\varepsilon} + (|y|/b)^{2/\varepsilon} - 1 = 0,
$$

where a, b are the axis lengths and the exponent ε is the shape coefficient. The use of superellipses in applications of image

processing and pattern recognition is very desirable because a superellipse can represent various 2D features, e.g., a rectangle (ε ≪ 1), an ellipse (ε = 1), and a star shape (ε > 2). Many researchers have tried to fit a superellipse by using the moment method [10], [43] or using the LSM with algebraic distances [34]. In order to demonstrate the outstanding performance of our geometric fitting algorithms, we fit an extreme shape of a superellipse, a rectangle. We obtain the initial parameter values with ε = 1 from a circle fitting and an ellipse fitting, successively. In detail, we use the gravitational center and rms central distance of the given points set as the initial parameter values for the circle fitting, the circle parameters for the ellipse fitting [3], [4], and, finally, the ellipse parameters for the superellipse fitting (Fig. 7, Table 2). Superellipse fitting to the eight points in Table 1 representing a rectangle terminated after seven iteration cycles for ‖Δa‖ = 7.8 × 10⁻⁹ (Fig. 7d, Table 2).

TABLE 1
Eight Coordinate Pairs Representing a Rectangle
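The internally supplied initial values mentioned above, the gravitational center and the rms central distance of the points, can be sketched as follows. This is an assumed helper of our own, not code from the paper.

```python
import math

def initial_circle_parameters(points):
    """Initial values for the iteration: the gravitational (arithmetic)
    center of the points and their rms central distance as radius."""
    m = len(points)
    Xc = sum(p[0] for p in points) / m
    Yc = sum(p[1] for p in points) / m
    r0 = math.sqrt(sum((p[0] - Xc) ** 2 + (p[1] - Yc) ** 2 for p in points) / m)
    return Xc, Yc, r0
```

These values then seed the circle fit, whose result seeds the ellipse fit, and so on up to the superellipse.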


TABLE 2 Results of the Orthogonal Distance Fitting to the Points Set in Table 1

4.2 Cone Fitting
The standard model feature of a cone in frame xyz can be described below:

$$
f(\psi; \mathbf{x}) = x^2 + y^2 - \bigl(z \tan(\psi/2)\bigr)^2 = 0,
$$

where ψ, the only form parameter, is the vertex angle of the cone. The position X_o, the origin of frame xyz, is defined at the vertex. The orientation of the cone is represented by the direction cosines of the z-axis, r_3 in (8). However, from the viewpoint of coordinate metrology [30], a better parameterization of a cone is

$$
f(\psi, r; \mathbf{x}) = x^2 + y^2 - \bigl(r - z \tan(\psi/2)\bigr)^2 = 0,
$$

with a constraint on the position and rotation parameters (see (32))

$$
f_c(\mathbf{a}_p, \mathbf{a}_r) = (\mathbf{X}_o - \bar{\mathbf{X}})^{\mathrm T}\,\mathbf{r}_3(\omega, \varphi) = 0,
$$

where the second form parameter r is the sectional radius of the cone cut by the xy-plane (z = 0) and X̄ is the gravitational center of the given points set. As one of the test data sets used in an authorized certification [18], [30] process of our algorithms, the 10 points in Table 3 representing only a quarter of a cone slice were prepared by the German federal authority PTB (Physikalisch-Technische Bundesanstalt). We have obtained the initial parameter values with ψ = 0 from a 3D-circle fitting [5] and a cylinder fitting, successively (Fig. 8, Table 4). The cone fitting to the points set in Table 3 with the constraint weighting value w_c = 1 terminated after 60 iteration cycles for ‖Δa‖ = 2.3 × 10⁻⁷. The convergence is somewhat slow because the initial cone parameters (i.e., the cylinder parameters) seem to already provide a local minimum, and it is especially slow if

the constraint weighting value w_c is large and thus locks the parameters from changing. If we perturb this quasi-equilibrium state, similarly to the random walking technique, by adding a small artificial error to the initial parameter values, then we can expect a faster convergence. With a nonzero initial vertex angle value of ψ = π/10, the iteration terminates after only 13 cycles for ‖Δa‖ = 2.9 × 10⁻⁶ (Fig. 8d).

4.3 Torus Fitting
A torus is a quartic surface and is described by

$$
f(r_1, r_2; \mathbf{x}) = x^4 + y^4 + z^4 + 2(x^2y^2 + y^2z^2 + z^2x^2) - 2(r_2^2 + r_1^2)(x^2 + y^2) + 2(r_2^2 - r_1^2)z^2 + (r_2^2 - r_1^2)^2 = 0,
$$

where r_1 is the radius of the circular section of the tube and r_2 is the mean radius of the ring. We obtain the initial parameter values from a 3D-circle fitting [5] (Fig. 9) with

$$
r_1 = \sqrt{\sigma_{0,\text{circle}}^2 / m}, \qquad r_2 = r_{\text{circle}}.
$$

Torus fitting to the 10 points in Table 5 representing a half torus terminated after nine iteration cycles for ‖Δa‖ = 1.4 × 10⁻⁷ (Fig. 9d, Table 6).
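As a quick sanity check on the quartic, one can verify numerically that it equals the product of the two tube-distance expressions (√(x² + y²) ∓ r₂)² + z² − r₁², which is the standard factorization of the implicit torus. The sketch below, with helper names of our own, is an illustration under that assumption, not code from the paper.

```python
import math

def torus_quartic(r1, r2, x, y, z):
    """Quartic implicit torus polynomial as in Section 4.3."""
    s2 = x * x + y * y
    return (x**4 + y**4 + z**4
            + 2 * (x*x*y*y + y*y*z*z + z*z*x*x)
            - 2 * (r2*r2 + r1*r1) * s2
            + 2 * (r2*r2 - r1*r1) * z * z
            + (r2*r2 - r1*r1) ** 2)

def torus_product_form(r1, r2, x, y, z):
    """Equivalent product of the inner/outer tube-distance expressions."""
    u = math.sqrt(x * x + y * y)
    inner = (u - r2) ** 2 + z * z - r1 * r1
    outer = (u + r2) ** 2 + z * z - r1 * r1
    return inner * outer
```

A point on the outer equator of the torus, such as (r₂ + r₁, 0, 0), makes both expressions vanish.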

4.4 Superquadric Fitting
A superquadric (superellipsoid) [9] is a generalization of an ellipsoid and is described by

$$
f(a, b, c, \varepsilon_1, \varepsilon_2; \mathbf{x}) = \Bigl((|x|/a)^{2/\varepsilon_1} + (|y|/b)^{2/\varepsilon_1}\Bigr)^{\varepsilon_1/\varepsilon_2} + (|z|/c)^{2/\varepsilon_2} - 1 = 0,
$$

where a, b, c are the axis lengths and the exponents ε1, ε2 are the shape coefficients. In comparison with the algebraic fitting algorithm [37], our algorithms can also fit extreme shapes of

TABLE 3 Ten Coordinate Triples Representing a Quarter of a Cone Slice


Fig. 8. Orthogonal distance fit to the points set in Table 3: (a) 3D-circle fit. (b) Circular cylinder fit. (c) Circular cone fit. (d) Convergence of the fit. Iteration number 0-4: 3D-circle, 5-16: cylinder, and 17-: cone fit with the initial value of ψ = π/10.

a superquadric (e.g., a box with ε₁,₂ ≪ 1 or a star shape with ε₁,₂ > 2). After a series of experiments with numerous sets of data points, we have concluded that our algorithms safely converge within the parameter zone

$$
0.002 \le (\varepsilon_1 \text{ and } \varepsilon_2) \le 10 \qquad\text{and}\qquad \varepsilon_1/\varepsilon_2 \le 500.
$$

Otherwise, there is a danger of data overflow destroying the Jacobian matrices of the linear systems (13), (16), (24), and (28). Additionally, a sharp increase of the computing cost of the inner iteration is unavoidable. We obtain the initial parameter values with ε1 = ε2 = 1 from a sphere fitting and an ellipsoid fitting, successively (Fig. 10, Table 8).

Superquadric fitting to the 30 points in Table 7 representing a box terminated after 18 iteration cycles for ‖Δa‖ = 7.9 × 10⁻⁷ (Fig. 10d). The intermediate ellipsoid fitting (a ≥ b ≥ c) showed a slow convergence in its second half-phase (iteration number 10 to 42 in Fig. 10d) because of a relatively large variance of the estimated major axis length â of the ellipsoid caused by the distribution of the given points. In other words, the performance index σ0² is not changed very much by the variation of â (compare the standard deviation σ(â) of the intermediate ellipsoid, σ(â)_ellip. = 13.4393, with that of the superquadric, σ(â)_SQ = 0.1034).

TABLE 4 Results of the Coordinate-Based Orthogonal Distance Fitting to the Points Set in Table 3


Fig. 9. Orthogonal distance fit to the points set in Table 5: (a) 3D-circle fit. (b) Initial torus. (c) Torus fit. (d) Convergence of the fit. Iteration number 0-6: 3D-circle, and 7-: torus fit.

5 REAL DATA FITTING

5.1 Data Acquisition
In order to show how our algorithms perform on real data, a sphere and an ellipsoid will be fitted to the same set of measurement points of a sphere-like object surface (Fig. 11a). The points set is acquired by our stripe projecting 3D-measurement system with the Coded-Light-Approach (CLA) [7], [44] using the Gray code [26]. The measuring system consists of a standard CCD camera and an LCD stripe projector of ABW [1] and is calibrated by photogrammetric bundle adjustment using circular coded targets for automation of the calibration procedure [2], [6]. Our 3D-measurement software determines the coordinates of the object surface points along the stripe edges. The number of the measured surface points is about 29,400 (Fig. 11b), from which every eighth point (subtotal 3,675 points) is used for the sphere and the ellipsoid fitting in this section (Fig. 12).

TABLE 5
Ten Coordinate Triples Representing a Half Torus

5.2 Segmentation, Outlier Detection, and Model Fitting
With real data fitting, we have to find an effective search (segmentation) method for the measurement points which potentially belong to the target model feature. And outliers should be detected and excluded from the model fitting because they distort the fitting results. Conceptually, our algorithms possess very advantageous algorithmic features for segmentation and outlier detection of dimensional measurement data, namely, the geometric error definition and the parameters grouping in terms of form, position, and rotation. In this paper, utilizing these algorithmic features, we used the following interactive procedure between segmentation, outlier detection, and fitting of the measurement points:

1. From the following procedure, exclude the measurement points which belong to known objects, e.g., the measuring stage or already detected and identified objects.
2. Initialize the model feature parameters through a model fitting to a well-conditioned object portion or through a fitting of a simple model feature, e.g., a sphere instead of the target ellipsoid. Also, initialize the safety margin t of the point searching domain volume to be used in the next step, e.g., t0 = r/2 for a sphere.
3. For the current model feature, determine the orthogonal error distances d_i of each measurement point lying inside the domain volume (see the next section for a detailed description) and evaluate the rms error distance

$$
\sigma_{\text{error}} = \sqrt{\frac{1}{m}\sum_{i=1}^{m} d_i^2},
$$

where m is the number of the measurement points lying inside the domain volume.


TABLE 6 Results of the Coordinate-Based Orthogonal Distance Fitting to the Points Set in Table 5

Fig. 10. Orthogonal distance fit to the points set in Table 7: (a) Sphere fit. (b) Ellipsoid fit. (c) Superquadric fit. (d) Convergence of the fit. Iteration number 0-42: ellipsoid and 43-: superquadric fit.

4. Determine the inliers set of measurement points lying inside the domain volume and having error distances of d_i ≤ t_threshold · σ_error, where t_threshold is the outlier threshold factor (we used t_threshold = 2.5, i.e., we assumed 1.24 percent outlier probability in the domain volume according to the Gaussian error distribution). Optionally, if there is no change in the inliers set, terminate the procedure.
5. Fit the model feature to the inliers set and save the resulting rms error distance σ_fit error. If σ_fit error is smaller than a given error level concerning the accuracy of the measuring system, terminate the procedure.
6. Update the safety margin t ← t_growing · t_threshold · σ_fit error of the domain volume, where t_growing is the domain growing factor (we used t_growing = 1.0).
7. Repeat again from the third step.

TABLE 7
Thirty Coordinate Triples Representing a Box
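The fit/reject iteration of steps 3-7 can be sketched with a deliberately simple stand-in model, a constant level fitted by the mean of the inliers, so that the loop structure stands out. Everything below (names, the toy model, the omission of the domain-volume update) is a hypothetical illustration, not the paper's implementation.

```python
import math

def robust_mean_level(zs, t_threshold=2.5, max_rounds=20):
    """Iterative fit / outlier rejection in the spirit of steps 3-7.
    The 'model feature' is a constant level c (fit = mean of inliers),
    the 'orthogonal distance' is |z - c|; points farther than
    t_threshold * rms from the current fit are treated as outliers."""
    inliers = list(zs)
    for _ in range(max_rounds):
        c = sum(inliers) / len(inliers)                 # step 5: fit inliers
        d = [abs(z - c) for z in zs]                    # step 3: distances
        rms = math.sqrt(sum(v * v for v in d) / len(d))  # sigma_error
        new_inliers = [z for z, di in zip(zs, d) if di <= t_threshold * rms]
        if new_inliers == inliers:                      # step 4: set unchanged
            break
        inliers = new_inliers
    return c, inliers
```

With enough genuine inliers, a gross outlier is rejected in the first round and the procedure then stabilizes.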

5.3 Domain Volume for Measurement Points
In the first step of the procedure described in the previous section, 256 of the 3,675 measurement points were detected and removed from the points list as they belong to the measuring stage (Z ≈ 0). In order to not only exclude the outliers from the model fitting but also avoid unnecessary determination of the orthogonal contacting points in the third step of the procedure, we applied a two-step domain volume criterion. First, we restrict the measurement points domain within a box with a safety margin t containing the interest portion of the standard model feature (6) defined in frame xyz. In particular, for an ellipsoid, the domain box can be defined below:

$$
|x| \le a + t, \qquad |y| \le b + t, \qquad |z| \le c + t.
$$

With the domain box criterion, the potential measurement points of the target object surface can be screened from a


TABLE 8 Results of the Coordinate-Based Orthogonal Distance Fitting to the Points Set in Table 7

large set of measurement points at minimal computing cost (for each measurement point, we need only a few multiplications for the coordinate transformation). Next, of the measurement points lying inside the domain box, we actually determine the orthogonal contacting point only if the measurement point also lies between two iso-features (11) with const_inner < 0 and const_outer > 0, respectively. In particular, for an ellipsoid, we use const_inner = f(a_g; x_inner) with

$$
\mathbf{x}_{\text{inner}} =
\begin{cases}
(\max(a/2,\, a - t),\, 0,\, 0)^{\mathrm T} & : a = \min(a, b, c) \\
(0,\, \max(b/2,\, b - t),\, 0)^{\mathrm T} & : b = \min(a, b, c) \\
(0,\, 0,\, \max(c/2,\, c - t))^{\mathrm T} & : c = \min(a, b, c),
\end{cases}
$$

and const_outer = f(a_g; x_outer) with

$$
\mathbf{x}_{\text{outer}} =
\begin{cases}
(a + t,\, 0,\, 0)^{\mathrm T} & : a = \min(a, b, c) \\
(0,\, b + t,\, 0)^{\mathrm T} & : b = \min(a, b, c) \\
(0,\, 0,\, c + t)^{\mathrm T} & : c = \min(a, b, c).
\end{cases}
$$

The above two-step criterion, defining the domain volume at a low computing cost by utilizing the parameters

grouping in terms of form, position, and rotation, makes possible a simple and inexpensive search for the measurement point candidates which potentially belong to the model feature. The time-consuming determination of the orthogonal contacting points, which is necessary for the outlier detection, is to be carried out only for the measurement point candidates lying inside the domain volume.
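The two-step criterion for an ellipsoid can be sketched as follows, assuming the ellipse convention of Appendix A extends to the ellipsoid, f = (x²/a² + y²/b² + z²/c² − 1)/2, and using helper names of our own invention.

```python
def f_ellipsoid(a, b, c, x, y, z):
    """Standard ellipsoid model in frame xyz (assumed convention of (6))."""
    return ((x / a) ** 2 + (y / b) ** 2 + (z / c) ** 2 - 1.0) / 2.0

def probe_point(a, b, c, value):
    """Place the probe value on the shortest semi-axis, per Section 5.3."""
    k = min(a, b, c)
    if k == a:
        return (value, 0.0, 0.0)
    if k == b:
        return (0.0, value, 0.0)
    return (0.0, 0.0, value)

def domain_test(a, b, c, t, point):
    """Two-step domain volume criterion: cheap bounding-box screening,
    then the inner/outer iso-feature band of (11)."""
    x, y, z = point
    if not (abs(x) <= a + t and abs(y) <= b + t and abs(z) <= c + t):
        return False  # first step: domain box
    k = min(a, b, c)
    const_inner = f_ellipsoid(a, b, c, *probe_point(a, b, c, max(k / 2.0, k - t)))
    const_outer = f_ellipsoid(a, b, c, *probe_point(a, b, c, k + t))
    return const_inner <= f_ellipsoid(a, b, c, x, y, z) <= const_outer
```

Only points passing both steps would be handed to the expensive orthogonal-contacting computation.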

5.4 Fitting Results
With the ellipsoid fitting, we have applied the parameter constraints of ω = 0 and φ = 0 with a weighting value of w_c = 1.0 × 10⁸ (see Section 3.4) since the object surface seems to be the top face of a horizontally lying ellipsoid. The computing time with a Pentium III 866 MHz PC was a total of 7.8 s for 17 rounds of sphere fitting with a final parameter update size of ‖Δa‖ = 3.7 × 10⁻⁷ and a total of 94.1 s for 26 rounds of ellipsoid fitting with ‖Δa‖ = 6.5 × 10⁻⁷. The number of the outliers (including the out-of-domain points) was 727 for σ_error = 0.0596 mm and 1,005 for σ_error = 0.0309 mm with the sphere and the ellipsoid fitting, respectively (Fig. 12d). If we compare Fig. 12b and Fig. 12c, an ellipsoid seems to be a better model feature than a sphere for the measured object surface. However, a look at Table 9 and Table 11 reveals the

Fig. 11. Sphere-like object surface: (a) Stripe projection. (b) Measured points cloud.


Fig. 12. Results of the model fitting to the points cloud (3,419 = 3,675 − 256 points) from Fig. 11b. Distance error bars are 10 times elongated. (a) Sphere fit without and (b) with outlier elimination. (c) Ellipsoid fit with outlier elimination under the constraints of ω = 0 and φ = 0. (d) Convergence of the fit with outlier elimination. Iteration number 0-51: 17 rounds of sphere fitting and 52-162: 26 rounds of ellipsoid fitting. The initial sphere parameters are taken from the gravitational center and the rms central distance of the data points. After the first round of the sphere fitting, the initial number of the outliers (including the out-of-domain points), N_outlier = 2,515, is reduced to 177 and, thereafter, increased.

relatively large variances and very strong correlations of the axis lengths of the estimated ellipsoid. From this fact, a sphere could rather be considered the proper model feature for the measured object surface. The maximal anticorrelation between the position parameter Z_o and the sphere radius a (also the axis length c of the ellipsoid, Table 10 and Table 11) should be noted, which is caused by the one-sided distribution of the data points on the top face of the sphere/ellipsoid. On the other hand, there is no correlation between the first two rotation parameters (ω and φ) and the other parameters of the ellipsoid, which is conditioned by the parameter constraints of ω = 0 and φ = 0.

6 SUMMARY

Dimensional model fitting finds its applications in various fields of science and engineering and is a relevant subject in computer/machine vision, coordinate metrology, and computer-aided geometric design. In this paper, we have presented two new algorithms for orthogonal distance fitting of implicit surfaces and plane curves which minimize the square sum of the orthogonal error distances between the model feature and the given data points. Easily understandable optimization methods, but with a fresh idea, are applied to the problem of geometric fitting, a widely

TABLE 9 Results of the Coordinate-Based Orthogonal Distance Fitting to the Points Cloud of Fig. 12b (Sphere Fitting) and Fig. 12c (Ellipsoid Fitting)


TABLE 10 Correlation Coefficients of the Sphere Parameters in Table 9

recognized difficult problem. The new algorithms are versatile and very efficient from the viewpoint of implementation and application to a new model feature. Our algorithms converge very well and do not necessarily require a good initial parameter set, which can also be supplied internally (e.g., the gravitational center and rms central distance of the given points set for sphere fitting, the sphere parameters for ellipsoid fitting, and the ellipsoid parameters for superquadric fitting, etc.). Memory space and computing time cost are proportional to the number of the given data points. The estimation parameters are grouped into form/position/rotation parameters and are simultaneously estimated, thus providing useful algorithmic features for various applications. With the new algorithms, the quality of the parameter estimation (including the propriety of the model selection) can be simply tested by utilizing the parameter covariance matrix. Together with other algorithms for orthogonal distance fitting of parametric features [5], our algorithms are certified by the German federal authority PTB [18], [30], with a certification grade stating that the parameter estimation accuracy is higher than 0.1 µm for the length unit and 0.1 µrad for the angle unit for all parameters of all tested model features with all test data sets.

TABLE 11
Correlation Coefficients of the Ellipsoid Parameters in Table 9

APPENDIX A
IMPLEMENTATION EXAMPLE WITH ELLIPSE FITTING

In this paper, the geometric fitting algorithms for implicit curves and surfaces are quite generally described and, in practice, they are implemented in a highly modular manner. For application to a new curve/surface, the user merely needs to supply the FHG matrix (31) of the standard model feature (6). Nevertheless, in order to help the interested reader realize his/her own implementation, we give, in this section, an example of how some key intermediate equations really look with a particular fitting task, the ellipse fitting.

- Parameters vector of an ellipse in the XY-plane:

$$
\mathbf{a} = (a, b, X_o, Y_o, \theta)^{\mathrm T}.
$$

- Standard model (6) in model coordinate frame xy:

$$
f(a, b; \mathbf{x}) = \frac{1}{2}\left(\frac{x^2}{a^2} + \frac{y^2}{b^2} - 1\right) = 0.
$$

- Rigid body motion (7) of the model in machine coordinate frame XY:

$$
\begin{pmatrix} X \\ Y \end{pmatrix} = \begin{pmatrix} C & -S \\ S & C \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} X_o \\ Y_o \end{pmatrix},
\qquad\text{with}\quad C = \cos\theta,\; S = \sin\theta.
$$

- FHG matrix (31):

$$
\nabla f = \begin{pmatrix} x/a^2 \\ y/b^2 \end{pmatrix},
\qquad
\mathbf{H} = \begin{pmatrix} 1/a^2 & 0 \\ 0 & 1/b^2 \end{pmatrix},
\qquad
\mathbf{G} = \begin{pmatrix} -x^2/a^3 & -y^2/b^3 \\ -2x/a^3 & 0 \\ 0 & -2y/b^3 \end{pmatrix}.
$$

- Orthogonal contacting equation (12):

$$
\mathbf{f}(\mathbf{a}, \mathbf{x}_i, \mathbf{x}) = \begin{pmatrix} (x^2/a^2 + y^2/b^2 - 1)/2 \\ (y_i - y)\,x/a^2 - (x_i - x)\,y/b^2 \end{pmatrix} = \mathbf{0}.
$$

- Linear system equation (13) of the direct method (Section 2.1):

$$
\begin{pmatrix} x/a^2 & y/b^2 \\ (y_i - y)/a^2 + y/b^2 & -\,x/a^2 - (x_i - x)/b^2 \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \end{pmatrix}
= -\begin{pmatrix} (x^2/a^2 + y^2/b^2 - 1)/2 \\ (y_i - y)\,x/a^2 - (x_i - x)\,y/b^2 \end{pmatrix}.
$$

- Linear system equation (16) of the constrained minimization (Section 2.2):

$$
\begin{pmatrix} 2 + \lambda/a^2 & 0 & x/a^2 \\ 0 & 2 + \lambda/b^2 & y/b^2 \\ x/a^2 & y/b^2 & 0 \end{pmatrix}
\begin{pmatrix} \Delta x \\ \Delta y \\ \Delta\lambda \end{pmatrix}
= \begin{pmatrix} 2(x_i - x) - \lambda x/a^2 \\ 2(y_i - y) - \lambda y/b^2 \\ -(x^2/a^2 + y^2/b^2 - 1)/2 \end{pmatrix}.
$$
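The ellipse FHG entries can be verified against central differences with a few lines of code; the sketch below uses helper names of our own and is merely an illustration of the appendix formulas.

```python
def f_ellipse(a, b, x, y):
    """Standard ellipse model f = (x^2/a^2 + y^2/b^2 - 1) / 2."""
    return (x * x / (a * a) + y * y / (b * b) - 1.0) / 2.0

def fhg_ellipse(a, b, x, y):
    """FHG matrix of the ellipse: gradient of f, Hessian H, and the
    derivative G of (f, grad f) with respect to the form parameters a, b."""
    grad = (x / a**2, y / b**2)
    H = ((1.0 / a**2, 0.0), (0.0, 1.0 / b**2))
    G = ((-x * x / a**3, -y * y / b**3),
         (-2.0 * x / a**3, 0.0),
         (0.0, -2.0 * y / b**3))
    return grad, H, G

def num_diff(fun, v, h=1e-6):
    """Central-difference derivative for a scalar function of one variable."""
    return (fun(v + h) - fun(v - h)) / (2.0 * h)
```

Checking a generic point confirms, e.g., that the first column of G is indeed ∂(f, ∇f)/∂a.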


- Linear system equation (24) for the distance-based algorithm (Section 3.1):

$$
\mathbf{P}\begin{pmatrix} \mathbf{J}_{d_1,\mathbf{a}} \\ \vdots \\ \mathbf{J}_{d_m,\mathbf{a}} \end{pmatrix}\Delta\mathbf{a}
= -\mathbf{P}\begin{pmatrix} d_1 \\ \vdots \\ d_m \end{pmatrix},
\qquad\text{with}\quad
d_i = \sqrt{(X_i - X'_i)^2 + (Y_i - Y'_i)^2}.
$$

- Jacobian matrix J_{d_i,a} in (27) for the distance-based algorithm (Section 3.1):

$$
\mathbf{J}_{d_i,\mathbf{a}} = -\bigl(g_i\,{x'_i}^2/a^3,\;\; g_i\,{y'_i}^2/b^3,\;\; 0,\;\; 0,\;\; 0\bigr)
- \frac{1}{d_i}\begin{pmatrix} X_i - X'_i \\ Y_i - Y'_i \end{pmatrix}^{\!\mathrm T}
\begin{pmatrix} 0 & 0 & 1 & 0 & -(Y'_i - Y_o) \\ 0 & 0 & 0 & 1 & X'_i - X_o \end{pmatrix},
$$

with

$$
g_i = \frac{\mathrm{sign}\bigl[(x_i - x'_i)\,x'_i/a^2 + (y_i - y'_i)\,y'_i/b^2\bigr]}{\sqrt{(x'_i/a^2)^2 + (y'_i/b^2)^2}}.
$$

- Linear system equation (28) for the coordinate-based algorithm (Section 3.2):

$$
\mathbf{P}\begin{pmatrix} \mathbf{J}_{X'_1,\mathbf{a}} \\ \mathbf{J}_{Y'_1,\mathbf{a}} \\ \vdots \\ \mathbf{J}_{X'_m,\mathbf{a}} \\ \mathbf{J}_{Y'_m,\mathbf{a}} \end{pmatrix}\Delta\mathbf{a}
= \mathbf{P}\begin{pmatrix} X_1 - X'_1 \\ Y_1 - Y'_1 \\ \vdots \\ X_m - X'_m \\ Y_m - Y'_m \end{pmatrix}.
$$

- Jacobian matrix J_{X'_i,a} in (29) for the coordinate-based algorithm (Section 3.2):

$$
\mathbf{J}_{\mathbf{X}'_i,\mathbf{a}}
= -\begin{pmatrix} C & -S \\ S & C \end{pmatrix}
\left[\left(\frac{\partial \mathbf{f}}{\partial \mathbf{x}}\right)^{\!-1}
\left(\frac{\partial \mathbf{f}}{\partial \mathbf{x}_i}\frac{\partial \mathbf{x}_i}{\partial \mathbf{a}}
+ \frac{\partial \mathbf{f}}{\partial \mathbf{a}}\right)\right]_{\mathbf{x} = \mathbf{x}'_i}
+ \begin{pmatrix} 0 & 0 & 1 & 0 & -(Y'_i - Y_o) \\ 0 & 0 & 0 & 1 & X'_i - X_o \end{pmatrix},
$$

with

$$
\frac{\partial \mathbf{f}}{\partial \mathbf{x}}
= \begin{pmatrix} x/a^2 & y/b^2 \\ (y_i - y)/a^2 + y/b^2 & -\,x/a^2 - (x_i - x)/b^2 \end{pmatrix},
\qquad
\frac{\partial \mathbf{f}}{\partial \mathbf{x}_i}
= \begin{pmatrix} 0 & 0 \\ -y/b^2 & x/a^2 \end{pmatrix},
$$

$$
\frac{\partial \mathbf{x}_i}{\partial \mathbf{a}}
= \begin{pmatrix} 0 & 0 & -C & -S & y_i \\ 0 & 0 & S & -C & -x_i \end{pmatrix},
\qquad
\frac{\partial \mathbf{f}}{\partial \mathbf{a}}
= \begin{pmatrix} 1 & 0 & 0 \\ 0 & y_i - y & -(x_i - x) \end{pmatrix}
\begin{pmatrix} -x^2/a^3 & -y^2/b^3 & 0 & 0 & 0 \\ -2x/a^3 & 0 & 0 & 0 & 0 \\ 0 & -2y/b^3 & 0 & 0 & 0 \end{pmatrix}.
$$

APPENDIX B
GRADIENT AND HESSIAN MATRIX OF SUPERELLIPSE AND SUPERQUADRIC

Except for superellipse and superquadric, the derivation of the FHG matrix (31) is straightforward with the curves and surfaces shown in this paper. Because the power function and the logarithm function can cause data overflow and domain error, a careful program source coding of the FHG matrix is essential for the superellipse fitting and the superquadric fitting.

B.1 Superellipse
For the sake of a compact description and efficient source coding of the FHG matrix, we use the following superellipse function:

$$
f(\mathbf{a}_{\mathrm g}; \mathbf{x}) = f(a, b, \varepsilon; \mathbf{x}) = \frac{1}{2}\left(\left(\frac{|x|}{a}\right)^{2/\varepsilon} + \left(\frac{|y|}{b}\right)^{2/\varepsilon} - 1\right);
$$

then the FHG matrix of this superellipse function will be

$$
\nabla f = \frac{1}{\varepsilon}\begin{pmatrix} A/x \\ B/y \end{pmatrix},
\qquad
\mathbf{H} = \frac{2 - \varepsilon}{\varepsilon^2}\begin{pmatrix} A/x^2 & 0 \\ 0 & B/y^2 \end{pmatrix},
$$

$$
\mathbf{G} = -\frac{1}{\varepsilon}\begin{pmatrix}
A/a & B/b & (A\ln A + B\ln B)/2 \\
2A/(\varepsilon a x) & 0 & A(1 + \ln A)/(\varepsilon x) \\
0 & 2B/(\varepsilon b y) & B(1 + \ln B)/(\varepsilon y)
\end{pmatrix},
$$

with

$$
A = (|x|/a)^{2/\varepsilon} \qquad\text{and}\qquad B = (|y|/b)^{2/\varepsilon}.
$$

From the fact that lim_{x→0} x ln x = 0, we prevent the domain error of ln(0) by treating [0 · ln(0)] = 0 as a group.
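The 0 · ln(0) grouping can be coded as a small guard function; the sketch below applies it to the shape-coefficient derivative ∂f/∂ε = −(A ln A + B ln B)/(2ε) of the superellipse. The helper names are our own illustration, not the authors' source code.

```python
import math

def xlogx(v):
    """Evaluate v * ln(v) with the continuous extension 0 * ln(0) = 0."""
    return 0.0 if v == 0.0 else v * math.log(v)

def superellipse_f(a, b, eps, x, y):
    """Superellipse function of Appendix B.1, f = (A + B - 1) / 2."""
    A = (abs(x) / a) ** (2.0 / eps)
    B = (abs(y) / b) ** (2.0 / eps)
    return (A + B - 1.0) / 2.0

def superellipse_df_deps(a, b, eps, x, y):
    """df/deps = -(A ln A + B ln B) / (2 eps), safe on the axes where
    A or B vanishes thanks to the xlogx guard."""
    A = (abs(x) / a) ** (2.0 / eps)
    B = (abs(y) / b) ** (2.0 / eps)
    return -(xlogx(A) + xlogx(B)) / (2.0 * eps)
```

On an axis point (y = 0) the naive ln(B) would raise a domain error, while the guarded version stays finite.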

B.2 Superquadric
For the superquadric, we use the following function:

$$
f(\mathbf{a}_{\mathrm g}; \mathbf{x}) = f(a, b, c, \varepsilon_1, \varepsilon_2; \mathbf{x})
= \frac{1}{2}\left(\left(\left(\frac{|x|}{a}\right)^{2/\varepsilon_1} + \left(\frac{|y|}{b}\right)^{2/\varepsilon_1}\right)^{\varepsilon_1/\varepsilon_2} + \left(\frac{|z|}{c}\right)^{2/\varepsilon_2} - 1\right).
$$

In order to reduce the danger of a data overflow by the power function, we introduce some intermediate variables cancelling similarly sized values below:

$$
A = (|x|/a)^{2/\varepsilon_1}/d, \qquad B = (|y|/b)^{2/\varepsilon_1}/d, \qquad C = (|z|/c)^{2/\varepsilon_2}, \qquad D = d^{\varepsilon_1/\varepsilon_2},
$$

with

$$
d = (|x|/a)^{2/\varepsilon_1} + (|y|/b)^{2/\varepsilon_1}.
$$

Then, the FHG matrix of the above superquadric function will be

$$
\nabla f = \frac{1}{\varepsilon_2}\begin{pmatrix} AD/x \\ BD/y \\ C/z \end{pmatrix},
$$

$$
H_{xx} = \frac{AD}{\varepsilon_1\varepsilon_2 x^2}\left(2A\,\frac{\varepsilon_1 - \varepsilon_2}{\varepsilon_2} + 2 - \varepsilon_1\right),
\qquad
H_{yy} = \frac{BD}{\varepsilon_1\varepsilon_2 y^2}\left(2B\,\frac{\varepsilon_1 - \varepsilon_2}{\varepsilon_2} + 2 - \varepsilon_1\right),
$$

$$
H_{zz} = \frac{C}{\varepsilon_2^2 z^2}(2 - \varepsilon_2),
\qquad
H_{xy} = H_{yx} = \frac{2ABD}{\varepsilon_1\varepsilon_2 xy}\,\frac{\varepsilon_1 - \varepsilon_2}{\varepsilon_2},
\qquad
H_{yz} = H_{zy} = H_{zx} = H_{xz} = 0,
$$

$$
\left(\frac{\partial f}{\partial \mathbf{a}_{\mathrm g}}\right)^{\!\mathrm T}
= -\frac{1}{\varepsilon_2}\begin{pmatrix}
AD/a \\ BD/b \\ C/c \\ D(A\ln A + B\ln B)/2 \\ (C\ln C + D\ln D)/2
\end{pmatrix},
$$

$$
\frac{\partial}{\partial \mathbf{a}_{\mathrm g}}\left(\frac{\partial f}{\partial x}\right)^{\!\mathrm T}
= -\frac{AD}{\varepsilon_1\varepsilon_2 x}\begin{pmatrix}
\dfrac{2}{a}\left(A\,\dfrac{\varepsilon_1 - \varepsilon_2}{\varepsilon_2} + 1\right) \\[6pt]
\dfrac{2B}{b}\,\dfrac{\varepsilon_1 - \varepsilon_2}{\varepsilon_2} \\[6pt]
0 \\[2pt]
(A\ln A + B\ln B)\,\dfrac{\varepsilon_1 - \varepsilon_2}{\varepsilon_2} + \ln A \\[6pt]
\dfrac{\varepsilon_1}{\varepsilon_2}(1 + \ln D)
\end{pmatrix},
$$

$$
\frac{\partial}{\partial \mathbf{a}_{\mathrm g}}\left(\frac{\partial f}{\partial y}\right)^{\!\mathrm T}
= -\frac{BD}{\varepsilon_1\varepsilon_2 y}\begin{pmatrix}
\dfrac{2A}{a}\,\dfrac{\varepsilon_1 - \varepsilon_2}{\varepsilon_2} \\[6pt]
\dfrac{2}{b}\left(B\,\dfrac{\varepsilon_1 - \varepsilon_2}{\varepsilon_2} + 1\right) \\[6pt]
0 \\[2pt]
(A\ln A + B\ln B)\,\dfrac{\varepsilon_1 - \varepsilon_2}{\varepsilon_2} + \ln B \\[6pt]
\dfrac{\varepsilon_1}{\varepsilon_2}(1 + \ln D)
\end{pmatrix},
\qquad
\frac{\partial}{\partial \mathbf{a}_{\mathrm g}}\left(\frac{\partial f}{\partial z}\right)^{\!\mathrm T}
= -\frac{C}{\varepsilon_2^2 z}\begin{pmatrix} 0 \\ 0 \\ 2/c \\ 0 \\ 1 + \ln C \end{pmatrix}.
$$
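The intermediate variables A, B, C, D and the resulting gradient can be sketched as follows. The helper names are hypothetical, and the test below assumes positive coordinates; for the gradient itself the 1/x, 1/y, 1/z factors carry the correct sign automatically.

```python
def superquadric_f(a, b, c, e1, e2, x, y, z):
    """Superquadric function of Appendix B.2 (naive evaluation)."""
    d = (abs(x) / a) ** (2.0 / e1) + (abs(y) / b) ** (2.0 / e1)
    return (d ** (e1 / e2) + (abs(z) / c) ** (2.0 / e2) - 1.0) / 2.0

def superquadric_grad(a, b, c, e1, e2, x, y, z):
    """Gradient via the intermediate variables A, B, C, D, which cancel
    similarly sized powers and so reduce the overflow danger:
    grad f = (A D / x, B D / y, C / z) / e2."""
    alpha = (abs(x) / a) ** (2.0 / e1)
    beta = (abs(y) / b) ** (2.0 / e1)
    d = alpha + beta
    A, B = alpha / d, beta / d
    C = (abs(z) / c) ** (2.0 / e2)
    D = d ** (e1 / e2)
    return (A * D / x / e2, B * D / y / e2, C / z / e2)
```

A central-difference check against the naive function confirms the cancellation does not change the value.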

REFERENCES [1] [2]

ABW GmbH, http://www.abw-3d.de/home_e.html, Feb. 2002. S.J. Ahn, ªCalibration of the Stripe Projecting 3D-Measurement System,º Proc. 13th Korea Automatic Control Conf. (KACC '98), pp. 1857-1862, 1998 (in Korean). [3] S.J. Ahn and W. Rauh, ªGeometric Least Squares Fitting of Circle and Ellipse,º Int'l J. Pattern Recognition and Artificial Intelligence, vol. 13, no. 7, pp. 987-996, 1999. [4] S.J. Ahn, W. Rauh, and H.-J. Warnecke, ªLeast-Squares Orthogonal Distances Fitting of Circle, Sphere, Ellipse, Hyperbola, and Parabola,º Pattern Recognition, vol. 34, no. 12, pp. 2283-2303, 2001. [5] S.J. Ahn, E. WestkaÈmper, and W. Rauh, ªOrthogonal Distance Fitting of Parametric Curves and Surfaces,º Proc. Int'l Symp. Algorithms for Approximation IV: Huddersfield 2001, I. Anderson and J. Levesley, eds., 2002. [6] S.J. Ahn, W. Rauh, and S.I. Kim, ªCircular Coded Target for Automation of Optical 3D-Measurement and Camera Calibration,º Int'l J. Pattern Recognition and Artificial Intelligence, vol. 15, no. 6, pp. 905-919, 2001. [7] M.D. Altschuler, B.R. Altschuler, and J. Taboada, ªMeasuring Surfaces Space-Coded by a Laser-Projected Dot Matrix,º Proc. SPIE Conf. Imaging Applications for Automated Industrial Inspection and Assembly, vol. 182, pp. 187-191, 1979. [8] D.H. Ballard, ªGeneralizing the Hough Transform to Detect Arbitrary Shapes,º Pattern Recognition, vol. 13, no. 2, pp. 111-122, 1981. [9] A.H. Barr, ªSuperquadrics and Angle-Preserving Transformations,º IEEE Computer Graphics and Applications, vol. 1, no. 1, pp. 1123, 1981. [10] M. Bennamoun and B. Boashash, ªA Structural-Description-Based Vision System for Automatic Object Recognition,º IEEE Trans. Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 27, no. 6, pp. 893-906, 1997. [11] P.T. Boggs, R.H. Byrd, and R.B. Schnabel, ªA Stable and Efficient Algorithm for Nonlinear Orthogonal Distance Regression,º SIAM J. Scientific and Statistical Computing, vol. 8, no. 6, pp. 1052-1078, 1987.

637



Sung Joon Ahn received the BS degree in mechanical design and production engineering from Seoul National University in South Korea in 1985. He received the MS degree in production engineering from the Korea Advanced Institute of Science and Technology (KAIST) in 1987. He worked as a junior research scientist at the research center of LG Electronics from 1987 to 1990. Since 1991, he has been working as a research scientist for the Fraunhofer Institute for Manufacturing Engineering and Automation (IPA) in Stuttgart, Germany.
His research interests include pattern recognition, optical 3D-measurement, close-range photogrammetry, camera calibration, and 3D-information processing.

Wolfgang Rauh received the Diploma degree in mechanical engineering from the University of Karlsruhe in 1984 and the Doctorate degree from the University of Stuttgart in 1993. He worked at the University of Stuttgart from 1984 until 1989 and then became head of the Industrial Image Processing Group at the Fraunhofer Institute for Manufacturing Engineering and Automation (IPA). In 1991, he was made head of the Information Processing Department at the same institute, where a wide variety of projects in the area of image processing are carried out.


Hyung Suck Cho received the BS degree from Seoul National University, Korea, in 1971, the MS degree from Northwestern University, Evanston, Illinois, in 1973, and the PhD degree from the University of California at Berkeley in 1977. From 1977 to 1978, he was a postdoctoral fellow in the Department of Mechanical Engineering, University of California at Berkeley. Since 1978, he has been a professor at the Korea Advanced Institute of Science and Technology (KAIST), Korea, first in the Department of Production Engineering and the Department of Automation and Design, Seoul Campus, and currently in the Department of Mechanical Engineering. From 1984 to 1985, he was a visiting scholar at the Fraunhofer Institute for Manufacturing Engineering and Automation (IPA), Germany, where he carried out research on robot-based assembly. He has been invited as a short-term visiting scholar to several universities: Ritsumeikan University, Japan; the University of Paderborn, Germany; and the New Jersey Institute of Technology, in 1987, 1992, and 1998, respectively. From 1995 to 1996, as a visiting professor, he participated in graduate school education for the Advanced Manufacturing Program (AMP) of the University of California, San Diego. Dr. Cho's research interests are focused on environment perception and recognition for mobile robots, machine vision and pattern classification, and applications of artificial intelligence/machine intelligence. During his research career, he has published six book chapters and more than 346 research papers (276 in international journals and conferences and 66 in Korean journals). He serves on the editorial boards of five international journals, including Control Engineering Practice (IFAC). In addition to his academic activities, he serves and/or has served on several technical committees (TCs) of IFAC and IMEKO: the TC on Robotics and the TC on Advanced Manufacturing (IFAC), and Measurements on Robotics (IMEKO).
He has been general chair and/or cochair of several international conferences, including SPIE Opto-Mechatronic Systems (2000, 2001) and IEEE/RSJ IROS (1999). For his achievements in research in the fields of robotics and automation, Dr. Cho has held a POSCO professorship endowed by Pohang Steel Company, Korea, since 1995. In 1998, he was awarded the Thatcher Brothers Prize from the Institution of Mechanical Engineers, United Kingdom, and in 1984 he received a fellowship of the Alexander von Humboldt Foundation of Germany. He is a member of the IEEE.

Hans-Jürgen Warnecke studied mechanical engineering at the Technical University of Braunschweig, Germany. He was a research engineer at the Institute for Machine Tools and Manufacturing and, afterward, from 1965 to 1970, a managing director for planning and manufacturing at the Rollei-Werke, Braunschweig, a camera factory. Since 1971, he has been a full professor for industrial manufacturing and management at the University of Stuttgart and the head of the Fraunhofer Institute for Manufacturing Engineering and Automation (IPA) of the Fraunhofer Society for Applied Research. His main fields of research and development are management and organization of industrial companies, computer-integrated manufacturing, computer-aided planning of material flow, simulation, production planning and control, manufacturing processes, flexible manufacturing systems, industrial robots, automation of handling and assembly, quality control, industrial measurement, cleanroom manufacturing, semiconductor manufacturing automation, and maintenance. He has published 10 books and many papers in the fields of production, industrial engineering, and automation. Since 1 October 1993, he has been the president of the Fraunhofer Society, the largest institution for applied research and development in Europe, with a budget of EUR 0.9 billion, of which EUR 300 million are proceeds from 3,000 private enterprises.
The society has 11,000 employees in 56 institutes. From 1995 to 1997, he was also the president of VDI, the Association of German Engineers, the largest institution of its kind in Europe.
