Local curvature scale: a new concept of shape description

Sylvia Rueda^a, Jayaram K. Udupa^b, and Li Bai^a

^a Collaborative Medical Image Analysis on Grid Group, School of Computer Science, University of Nottingham, Jubilee Campus, Wollaton Road, Nottingham, NG8 1BB, U.K.
^b Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Fourth Floor, Blockley Hall, 423 Guardian Drive, Philadelphia, PA 19104-6021, USA.

Further author information: (Send correspondence to Sylvia Rueda.)
Sylvia Rueda: E-mail: [email protected], Telephone: +44 (0) 115 95 14236
Jayaram K. Udupa: E-mail: [email protected], Telephone: (215) 662-6781
Li Bai: E-mail: [email protected], Telephone: +44 (0) 115 95 13312

ABSTRACT

Shape description plays a fundamental role in computer vision and pattern recognition, especially in the fields of shape analysis, image segmentation, and registration. Shape representations must be unique and complete, and should be able to reflect the differences between similar objects while abstracting from detail and keeping the basic features. Although many methods for shape description exist, they are usually application dependent. The proposed method of boundary shape description is based on the notion of curvature scale, a new local scale concept defined at each boundary element. From this representation, we can extract special points of interest such as convex and concave corners, straight lines, circular segments, and inflection points. This method is different from existing methods of curvature estimation and can be directly applied to digital boundaries without requiring prior approximation of the boundary. The results show that it produces a complete boundary shape description capable of handling different levels of shape detail. It also has numerous potential applications, such as automatic landmark tagging, which becomes necessary to build model-based approaches toward the goal of organ modeling and segmentation. The method is applicable to spaces of any dimensionality, although we have focused in this paper on 2D shapes.

Keywords: Shape description, curvature, scale, boundary shape, digital boundary.

Medical Imaging 2008: Image Processing, edited by Joseph M. Reinhardt, Josien P. W. Pluim, Proc. of SPIE Vol. 6914, 69144Q, (2008) 1605-7422/08/$18 · doi: 10.1117/12.770444

1. INTRODUCTION

Shape description plays a fundamental role in computer vision and pattern recognition, especially in the fields of automatically defining landmarks, shape analysis, image segmentation, and registration. A shape descriptor aims at characterizing a shape uniquely while being invariant to rotation, scale, and translation1. Shape representations must be unique and complete, and should be able to reflect the differences between similar objects while abstracting from detail and keeping the basic features. Although many methods for shape description exist, there is none that could be labeled as a general method, in the sense that they are usually application dependent. The existing methods can be classified into contour-based and region-based approaches2, 3. The contour-based class uses the shape boundary, whereas region-based methods consider the inside of the shape to extract the features. Our aim is to represent a shape by a certain number of landmarks characterizing the contour of the object. Therefore, we will focus on contour-based approaches, especially those allowing the detection of dominant points in a shape. The most common contour-based techniques are shape signatures4, boundary moments5, polygonal and curve decomposition6, syntactic analysis7, scale space analysis8, 9, spectral transforms (e.g. Fourier descriptors10 and wavelet descriptors11), and shape invariants defined from boundary primitives12.

Curvature is one of the most important features used for characterizing shapes. It plays an important role in identifying different primitives. The curvature of a smooth curve is defined as the inverse of the radius of its osculating circle at each point. Although curvature estimation from a digital contour would seem a rather simple task, not many methods exist that are simultaneously easy to implement, fast, and robust to noise.

A local absolute maximum in the curvature corresponds to a generic corner in the shape. If this maximum is positive, then we have a convex corner, and if it is negative, the corner is concave. Straight line segments are represented by constant zero curvature. Circular segments correspond to constant non-zero curvature. Zero crossings of the curvature function represent inflection points in the shape. Therefore, from curvature alone, we can fully describe the shape and extract its dominant points as well as their associated characteristics. Let the shape of a 2D object be represented by its external boundary B. The elements bi of B will be called bels (an abbreviation for "boundary elements") or points from now on. To estimate the curvature of a digital boundary, four main approaches have been used13: orientation-based, derivative-based, osculating circle-based, and angle-based. In orientation-based methods, the curvature is estimated from the change of slope of the tangent at each point of the boundary, using chain codes to approximate the boundary and different tangent estimations. The derivative-based approach involves calculating the curvature from derivatives. To achieve this, the boundary is approximated by second-order curves such as splines to obtain a parametric definition of the curve. Then, the curvature is estimated by using

κ = (x'y'' − x''y') / (x'² + y'²)^(3/2).    (1)
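For concreteness, the derivative-based estimation just described can be sketched as follows using a periodic spline fit to the boundary. This is only an illustration of that prior approach, not part of the c-scale method proposed in this paper; the function name, the smoothing parameter, and the use of scipy are our assumptions.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def spline_curvature(boundary, smoothing=1.0):
    """Curvature of a closed digital boundary via Eq. (1) on a spline approximation.

    boundary : (n, 2) array of bel coordinates; smoothing : illustrative fitting
    parameter (assumed). Returns one curvature value per parameter sample.
    """
    x, y = boundary[:, 0].astype(float), boundary[:, 1].astype(float)
    tck, u = splprep([x, y], s=smoothing, per=True)   # periodic parametric spline
    dx, dy = splev(u, tck, der=1)                     # first derivatives x', y'
    ddx, ddy = splev(u, tck, der=2)                   # second derivatives x'', y''
    return (dx * ddy - ddx * dy) / (dx**2 + dy**2) ** 1.5
```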

In this case, the curvature is calculated on sampled points of the approximated boundary, and the results should be mapped back to the digital boundary using interpolation or averaging14. For the osculating circle approach, after smoothing, the boundary is fitted by a circular disc of a certain radius. Then, the curvature is computed as the inverse of these radii at each bel. To avoid digital effects, in all these curvature estimators some continuous approximation is attempted when estimating the tangents at the boundary, when the boundary is approximated to obtain a parametric definition, or when a circle is fitted to a smoothed boundary. This may introduce errors in the boundary or even in the shape.

The angle-based approach15, 16 is widely used in the literature17–20 for extracting significant curvature maxima and minima directly on digital curves. The curvature at each bel bi is estimated by considering the cosine of the angle formed by the vectors Aik (from bi to bi−k) and Bik (from bi to bi+k). The k-vectors at bi are defined as Aik = (xi − xi−k, yi − yi−k) and Bik = (xi − xi+k, yi − yi+k). Consequently, the k-cosine cik at bi is

cik = cos θ = (Aik · Bik) / (‖Aik‖ ‖Bik‖),    (2)

where −1 ≤ cik ≤ 1. cik is close to 1 when the curve is turning rapidly, and close to −1 when the curve is relatively straight. The curvature estimation depends on the value of k; therefore, an appropriate value of k at each point in the curve needs to be found. Let n be the total number of points forming the digital curve. An arbitrary value v is set, so that m = n/v, and ci1, ci2, ..., cim are computed for each bi. The optimal cih is selected such that

cim < ci,m−1 < · · · < cih ≮ ci,h−1.    (3)

Each bi has an associated cih, denoted ci. A bel bi is selected as a local curvature maximum if ci ≥ cj for all j such that |i − j| ≤ h/2. A similar operation is applied to detect curvature minima on the curve. The main drawback of this method is that it cannot extract all curvature maxima/minima. Also, it does not allow a "level-of-detail" control. It is desirable to be able to detect dominant points at different scales so that it becomes possible to vary the level of detail depending on the application.

In image processing, the concept of scale evolved from the need to handle variable object size in different parts of the scene. Global scale methods process the scene at each of various fixed scales and combine the results, as in the scale space approaches9. Local scale approaches define the largest homogeneous region at each point, and treat these regions as fundamental units. The c-scale concept, introduced in this paper, brings the idea of local morphometric scale (such as ball-, tensor-, and generalized scale)21, 22 developed for images to the realm of boundaries.


All previous methods of extracting dominant points lack this concept of a local scale. In this paper, we present a novel method of estimating curvature without modifying the boundary, taking into account digital effects and noise, and considering scale in the curvature calculation, to obtain a complete description of shapes at different levels of detail. Furthermore, a byproduct of our method is that a shape can be represented by its dominant points. We first present a theory for local curvature scale estimation in Section 2. In Section 3, we propose a method for shape description using local curvature scale. In Section 4, we demonstrate the utility of these methods based on shape examples drawn from different areas. Conclusions and future work are stated in Section 5.
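For reference, the angle-based selection of Equations (2)-(3) can be sketched as follows for a closed digital boundary stored as an (n, 2) array of point coordinates. The function names and the array representation are our assumptions, and the subsequent non-maximum suppression over |i − j| ≤ h/2 is only indicated in a comment.

```python
import numpy as np

def k_cosines(boundary, i, m):
    """k-cosines c_i1, ..., c_im at bel i (Eq. (2)); boundary is closed, indices wrap."""
    n = len(boundary)
    b = boundary[i].astype(float)
    cos = np.empty(m)
    for k in range(1, m + 1):
        a = boundary[(i - k) % n] - b          # vector A_ik from b_i to b_{i-k}
        c = boundary[(i + k) % n] - b          # vector B_ik from b_i to b_{i+k}
        cos[k - 1] = np.dot(a, c) / (np.linalg.norm(a) * np.linalg.norm(c))
    return cos

def select_cih(cosines):
    """Pick h and c_ih per Eq. (3): walk down from m while the cosines keep decreasing."""
    h = len(cosines)
    while h > 1 and cosines[h - 1] < cosines[h - 2]:
        h -= 1
    return h, cosines[h - 1]

# A bel i would then be retained as a curvature maximum if its c_i is >= c_j
# for all j with |i - j| <= h/2 (and analogously for minima).
```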

2. LOCAL CURVATURE SCALE

We define the local curvature scale segment, or c-scale segment, at any point b on a boundary B as the largest connected set of points of B containing b such that no point in that set is farther than t from the straight line connecting the two end points of the set. In this manner, "homogeneity of region" is expressed in terms of "straightness" at every point. Let b1, ..., bn be the points (or bels) defining a boundary B. We will associate with each point b = bi its c-scale segment C(b) (Fig. 1).

Figure 1. c-scale estimation.

This set is an indirect indicator of the curvature at b. To determine C(bi), we progressively examine the neighbors, first the set of points bi−2, ..., bi+2, then the set bi−3, ..., bi+3, and so on (Fig. 1). At each examination, we calculate the distance of the points in the set from the straight line connecting the two end points of the set. If the maximum distance of these points from the line is greater than a threshold t, we define the c-scale segment C(b) of b as the last set of connected points found for which the distance was still within the threshold. The c-scale value we assign to bi, denoted Ch(b), is the chord length corresponding to C(b), i.e., the length of the straight-line segment between the end points bb and bf of C(b). If Ch(b) is large, it indicates small curvature at b, and if it is small, it indicates high curvature. c-scale values are very helpful in estimating actual segments and their curvature, independent of digital effects.
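A minimal sketch of this growth procedure is given below for a closed digital boundary stored as an (n, 2) array. The function names and the stopping policy at half the boundary length are our assumptions, not the authors' reference implementation.

```python
import numpy as np

def _distances_to_chord(pts, p0, p1):
    """Perpendicular distances of pts from the straight line through p0 and p1."""
    d = p1 - p0
    norm = np.linalg.norm(d)
    if norm == 0:
        return np.linalg.norm(pts - p0, axis=1)
    return np.abs((pts[:, 0] - p0[0]) * d[1] - (pts[:, 1] - p0[1]) * d[0]) / norm

def c_scale_value(boundary, i, t):
    """Chord length Ch(b_i) of the c-scale segment C(b_i) at bel i.

    The neighborhood around b_i is grown symmetrically while every point of the
    candidate segment stays within distance t of the line joining its end points;
    the chord of the last admissible segment is returned.
    """
    n = len(boundary)
    chord = 0.0
    for k in range(1, n // 2):
        idx = [(i + j) % n for j in range(-k, k + 1)]
        seg = boundary[idx].astype(float)
        p0, p1 = seg[0], seg[-1]                       # candidate end points b_b, b_f
        if _distances_to_chord(seg, p0, p1).max() > t:
            break                                      # "straightness" violated
        chord = np.linalg.norm(p1 - p0)                # keep last admissible chord length
    return chord
```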

Relation Between c-scale Value Ch(b) and Radius r

From a knowledge of Ch(b), and by assuming that C(b) locally represents a circular arc, we will now arrive at the actual arc length A(b) at b corresponding to the c-scale segment C(b). Let b be a point on a circular arc A. Any segment joining the two ends of the circular arc is a chord P2P3 of a circle C with radius r and centre o (Fig. 2). The radius and centre of the circle can be obtained by using the following chord property: the perpendicular bisector of a chord passes through the centre of the circle. If we trace the perpendicular bisector of the chord, we divide the chord into two equal segments. Let h be one of these segments and s the distance between b and the middle point of the chord. A right triangle of sides h, s and u can then be defined (Fig. 2), so u = √(s² + h²). If we trace the right triangle P1P2b, then we can calculate r using Thales' theorem:

cos θ = s/u = u/(2r),    (4)

and

r = u²/(2s) = (s² + h²)/(2s).    (5)

Figure 2. Geometric properties of the chord in a circle.

In our case, we have h = Ch(b)/2. If we substitute h in (5), we obtain

r = (4s² + Ch(b)²)/(8s).    (6)

Relation Between c-scale Ch(b) and Arc Length A(b)

The perimeter of a circle is defined as P = 2πr. Therefore, the length of a circular arc A(b) with an angle 2α at the centre (Fig. 2) is given by A(b) = 2πr·(2α)/360 = πrα/90, with α expressed in degrees. From tan α = h/(r − s) = Ch(b)/(2(r − s)), the relation between the chord and the arc length is

A(b) = πr·arctan(Ch(b)/(2(r − s)))/90,    (7)

where the arctangent is taken in degrees.

Relation Between c-scale Ch(b) and Curvature κ(b)

The curvature κ of a circular segment of radius r is κ = 1/r. Therefore, the relationship between curvature and c-scale value is

κ(b) = 8s/(4s² + Ch(b)²),    (8)

where Ch (b) represents the length of the chord of the osculating circle matching the c-scale segment C(b) at b, and s is the distance between the mid-point of the arc and the mid-point of the chord.
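The three relations above chain together directly. The small sketch below (variable names assumed) evaluates r, A(b) and κ(b) from a measured chord length Ch(b) and the distance s.

```python
import numpy as np

def circle_quantities(ch, s):
    """Radius, arc length and curvature from a c-scale chord, per Eqs. (6)-(8).

    ch : chord length Ch(b); s : distance from the arc mid-point to the chord mid-point.
    """
    r = (4.0 * s**2 + ch**2) / (8.0 * s)                  # Eq. (6)
    alpha = np.degrees(np.arctan(ch / (2.0 * (r - s))))   # half-angle at the centre, degrees
    arc = np.pi * r * alpha / 90.0                        # Eq. (7)
    kappa = 8.0 * s / (4.0 * s**2 + ch**2)                # Eq. (8), equal to 1/r
    return r, arc, kappa

# Sanity check on a 90-degree arc of a unit circle:
# ch = sqrt(2), s = 1 - sqrt(2)/2  ->  r = 1, arc ~= pi/2, kappa = 1.
```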

Orientation

The orientation at each point b is generally defined as the angle Ψ, in degrees, that the tangent to the boundary at b makes with the x-axis. In our digital setting, we assume the tangent at b to be the line through b parallel to the chord connecting the end points bb and bf of the c-scale segment C(b) at b. The boundary is followed anticlockwise, with the inside of the object always kept on the left of the boundary at any bel b. Using the chord found previously for each bel, we calculate the unit vector u of the chord in the direction of the boundary traversal (Fig. 3). u has two components: the direction cosines ux and uy. From the direction cosines, we can write down the orientation at b as


Figure 3. Orientation using direction cosines.

Ψ(b) = 360° − arccos(ux(b))   if uy < 0,
Ψ(b) = arccos(ux(b))          if uy ≥ 0.    (9)

For a variety of reasons, it is better to measure the orientation with respect to the starting point b1 as

O(b) = Ψ(b) − Ψ(b1).    (10)

To have a continuous orientation along the boundary, we need to detect the discontinuities caused when the angle wraps from 360° to 0° and vice versa, keeping track of how many turns the chord makes. In this way we can also describe shapes, such as spirals, that have several turns, and represent the true cumulative angle at each point of the boundary, which may be greater than 360°.
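A small sketch of Equations (9)-(10), including the wrap-around handling just described, is given below. The input format (one chord vector per bel, taken in the traversal direction) and the use of numpy's unwrap are our assumptions.

```python
import numpy as np

def orientations(chord_vectors):
    """Psi(b), O(b) and O'(b) from the chord direction at each bel.

    chord_vectors : (n, 2) array of chords (b_b -> b_f), taken along the
    anticlockwise traversal. Returns Psi in [0, 360), the unwrapped relative
    orientation O (which may exceed 360 degrees), and its derivative O'.
    """
    u = chord_vectors / np.linalg.norm(chord_vectors, axis=1, keepdims=True)
    ux, uy = u[:, 0], u[:, 1]
    psi = np.degrees(np.arccos(np.clip(ux, -1.0, 1.0)))
    psi = np.where(uy < 0, 360.0 - psi, psi)               # Eq. (9)
    o = np.degrees(np.unwrap(np.radians(psi)))             # remove 360 -> 0 jumps
    o = o - o[0]                                           # Eq. (10), relative to b_1
    o_prime = np.gradient(o)                               # curvature-like signal O'(b)
    return psi, o, o_prime
```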

Relation Between Orientation O(b) and Curvature

Curvature is the rate of change of the orientation of the tangent at each point along the boundary; this is exactly the first derivative O'(b) of the orientation O(b). By using O'(b), the sign of the curvature is captured along with its magnitude, and therefore we have more information than using only the magnitude of the curvature. Using this signal we can locate special points of interest: local positive maxima of O'(b) correspond to convex corners, local negative minima to concave corners, constant zero curvature to straight lines, constant non-zero curvature to circular segments, and zero crossings to inflection points. In this manner, we obtain a more complete description of the boundary by using O'(b).

Example

The above theory is illustrated using an example shape in Fig.4. This shape is constructed from theoretical functions. It includes different parts such as a rotated rectangle, circular arcs of different radii (convex and concave), and a sine wave. The starting point of the boundary b1 is represented in the figure by a cross. The boundary is oriented and follows the direction of the arrow, always leaving the inside of the object to the left. The order of the bels is defined using this direction, and the coordinates of the bels are computed by using the functions that define the different sections. To this shape, we apply our c-scale calculation method with t = 0.02. This parameter controls the level of detail or global scale. For digital boundaries, we usually set t ≈ 3, which works well for all the shapes we tested. This value of t is able to preserve appropriate boundary details and at the same time guard against digitization noise. The values of the chord lengths (i.e., c-scale) for each bel are represented in Fig.5.a.


Figure 4. A general shape formed by straight lines, circular arcs, and a sine wave.

Due to the symmetry of the c-scale method, each straight segment on the boundary corresponds to a peak in the Ch(b) plot, and the location of the peak corresponds to the midpoint of the segment. Chords of the same length are obtained when we have a circular shape with constant curvature, and valleys in the Ch(b) plot represent curved regions of the boundary. Using Equation (7), a representation of the arc length A(b) (as shown in Fig.5.c) can be made. In this case, A(b) and Ch(b) are very similar, but this may not be the case for digital boundaries.

Figure 5. c-scale related entities for the shape in Fig. 4, starting with b1 at the origin. (a) The c-scale value Ch(b). (b) Curvature κ(b). (c) Arc length A(b) and its derivative A'(b). (d) Orientation O(b) and its derivative O'(b).

The orientation of each bel is defined in degrees. The first element of the boundary is set to 0° and the rest are defined with respect to this first element. The orientation O(b), as well as its derivative O'(b), for each bel of the shape in Fig.4 is shown in Fig.5.d. O'(b) is very useful for shape description as it constitutes a good estimate of curvature, additionally allowing us to distinguish between convex and concave regions of the boundary. According to Equation (8), we plot the curvature of the shape under consideration in Fig.5.b.


Straight lines on the boundary have a curvature equal to zero, circular regions have constant curvature, and sine waves and corners have high curvature values.

3. A METHOD OF SHAPE DESCRIPTION

In this section we present the method of boundary shape description that uses c-scale concepts. Given a (digital) boundary B and a scale parameter t, our goal is to obtain a partition PB of B into segments, a set sL of landmarks (or characteristic dominant points), and a shape description assigned to every element of PB. The method is summarized in Fig.6.

Figure 6. The method of boundary shape description: from B, the c-scale computation yields A(b) and O(b); median filtering gives Af(b) and Of(b); peak/valley detection on Af(b) and the first derivative O'(b) feed landmark tagging and the shape description (convex, concave, flat).

First, we determine at each point b of B the c-scale value Ch(b), from which we estimate the arc length A(b) via Equation (7) and the orientation O(b) via Equation (10). Second, we smooth A(b) and O(b) using a median filter of width 2w + 1 centered at every element b, where w is the half-width of the window used for filtering, specified in terms of the number of points considered on either side of b. We repeat this process k times to get a smoothed version of A(b), called Af(b) (and, similarly, Of(b) for O(b)). This is necessary for automatically detecting the peaks and the valleys of Af(b) by using mathematical morphology. The peaks correspond to straight line segments in the boundary and the valleys to curved segments. Mathematical morphology5 is based on set theory and provides powerful tools for image analysis. The fundamental operations are erosion, dilation, opening, and closing. An opening consists of an erosion followed by a dilation; a closing is a dilation followed by an erosion. A structuring element defines the size and shape of the transformation to be done. In our case, we use a structuring element of size se applied to the signal Af(b). Opening and closing are the transformations we need to detect the peaks and the valleys of Af(b) (see Fig.7).
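The repeated median filtering step can be sketched as follows; mode='wrap' treats the signal as circular (closed boundary), and the default values of w and k are illustrative only, not values prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import median_filter

def smooth_boundary_signal(signal, w=2, k=3):
    """Apply a median filter of width 2*w + 1 to a boundary signal, k times.

    Used to turn A(b) into Af(b) (and O(b) into Of(b)) before peak/valley detection.
    """
    out = np.asarray(signal, dtype=float)
    for _ in range(k):
        out = median_filter(out, size=2 * w + 1, mode='wrap')
    return out
```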

Figure 7. Closing and opening applied on Af(b) for the shape in Fig. 4. (a) Valley detection. (b) Peak detection.

In particular, to find the valleys, we apply to Af(b) a bottom-hat filtering operation, which is the difference between Af(b) and its closing. This filter extracts only the valleys of Af(b). Once we have detected all the valleys in Af(b), we find the minimum value within each detected valley. These local minima correspond to the valleys in Af(b) and represent points with high curvature in B. Similarly, to find the peaks, we need a top-hat filtering operation, which is the difference between Af(b) and its opening. This extracts only the peaks of Af(b). These maxima correspond to the peaks in Af(b), which are the middle points of the straight segments in B. By selecting different sizes (sev and sep) for the structuring elements, we can vary the number of valleys and peaks detected. Let θp be the parameter controlling the size of the detected peaks, and θv the parameter controlling the size of the detected valleys. We can avoid spurious valleys and peaks, or select the most prominent ones, by keeping only those peaks and valleys that are greater than a certain value. This allows us to fully control the number of dominant points we want for a given application.


Figure 8. Dominant points detected on the shape in Fig. 4. (a) Detection of peaks and valleys. (b) c-scale landmarks. (c) Angle-based landmarks.
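A sketch of this top-hat / bottom-hat step on the 1D signal Af(b) is shown below. The structuring-element sizes and thresholds play the role of sep, sev, θp and θv in the text, while the grouping of above-threshold samples into connected regions is our own way of reducing each detected bump to a single representative point.

```python
import numpy as np
from scipy.ndimage import grey_opening, grey_closing, label

def peaks_and_valleys(af, se_p=5, se_v=5, theta_p=0.0, theta_v=0.0):
    """Detect peak and valley locations in Af(b) with morphological hat transforms.

    Returns two boolean masks: peak positions (mid-points of straight segments)
    and valley positions (points of high curvature).
    """
    af = np.asarray(af, dtype=float)
    top_hat = af - grey_opening(af, size=se_p, mode='wrap')      # isolates peaks
    bottom_hat = grey_closing(af, size=se_v, mode='wrap') - af   # isolates valleys

    peaks = np.zeros(af.shape, dtype=bool)
    valleys = np.zeros(af.shape, dtype=bool)
    for response, mask, thr, take_max in ((top_hat, peaks, theta_p, True),
                                          (bottom_hat, valleys, theta_v, False)):
        labels, count = label(response > thr)                    # connected bumps
        for region in range(1, count + 1):
            idx = np.flatnonzero(labels == region)
            best = idx[np.argmax(af[idx])] if take_max else idx[np.argmin(af[idx])]
            mask[best] = True                                    # one landmark per bump
    return peaks, valleys
```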

For the example of Fig.4, we used a structuring element of size 5 for both valley and peak detection. No further selection by specifying θv and θp was necessary in this case. After the bottom-hat and top-hat filtering operations, we detect the peaks and valleys as shown in Fig.8.a. Once we have detected the peaks P and the valleys V in Af(b), we can locate the landmarks sL on the boundary, as shown in Fig.8.b, by simply identifying the points where the local minima/maxima occurred. From O'(b) (Fig.5.d) we can extract a complete description of each bel in B: local positive maxima correspond to convex corners, local negative minima to concave corners, constant zero curvature to straight line segments, constant non-zero curvature to circular segments, and zero crossings to inflection points.
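Finally, a coarse labelling of bels from O'(b) can be sketched as below; the tolerance eps is illustrative (the paper does not specify one), and corners would additionally correspond to the local extrema of O'(b) among the convex/concave bels.

```python
import numpy as np

def classify_bels(o_prime, eps=1e-3):
    """Label each bel as convex, concave or flat from O'(b), and find inflections."""
    o_prime = np.asarray(o_prime, dtype=float)
    labels = np.full(o_prime.shape, 'flat', dtype=object)        # ~zero curvature: straight
    labels[o_prime > eps] = 'convex'                              # positive curvature
    labels[o_prime < -eps] = 'concave'                            # negative curvature
    inflections = np.flatnonzero(np.diff(np.sign(o_prime)) != 0)  # zero crossings
    return labels, inflections
```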

4. RESULTS

We have illustrated in the previous section the curve description process on a mathematical object (Fig.4). In this section, we will focus on the extraction of dominant points and present the results for digital boundaries of natural and medical objects. We will compare the results obtained with the angle-based approach15, 16 described in Section 1. The result of applying the angle-based method to the mathematical object (Fig.4) is shown in Fig.8.c for v = 10, which is the optimal parameter found for this shape. As we can see, not all the corners (maxima) and inflection points (minima) are found on the contour, and some of them are detected away from where they should appear. Also, some unreasonable results appear, such as the detection of two minima in the corners of one of the circular arcs, and the detection of points in the circular arcs where the curvature is constant and no dominant points exist. In the following description, we assume that the shape boundaries are extracted after segmentation of appropriate images.

Natural objects

First, we illustrate the influence of the scale parameter t on a shape in the c-scale method (Fig.9.a-c). We compare these results with the angle-based method, where the parameter v was selected to obtain a similar number of landmarks on the same shape for corresponding values of t (Fig.9.d-f). For the c-scale method, we used a structuring element of size 15 to detect peaks and 10 to detect valleys, with no element selection. We applied the method for t = 2, t = 5, and t = 10. As t increases, the number of landmarks decreases and the level of detail captured diminishes. At a higher scale, the most representative corners and inflection points are captured. For the angle-based method, as we decrease the parameter m (or increase v), we decrease the number of landmarks obtained as well as the accuracy of their placement (see the nose and ears of the rabbit in Fig.9.d-f). The location of the landmarks in this method depends on the value selected for m, whereas in the c-scale method, the position found for the same landmark at different t is the same. The accuracy does not depend on the scale, and therefore the method is more robust to scale changes.


We can also notice the same drawbacks noted before for the mathematical object, such as some important corners not being detected and certain landmarks being misplaced. If we keep decreasing m, clusters appear around some of the detected landmarks, and the result is not acceptable for landmark selection since some of the landmarks coincide. This effect never happens with c-scale: a landmark can be detected only once, and it is either a corner or an inflection point.

Figure 9. (a-c) Results of varying the scale parameter t in the rabbit shape for the c-scale method: (a) t = 2, (b) t = 5, (c) t = 10. (d-f) Results of varying the parameter v in the rabbit shape for the angle-based method: (d) m = n/35, (e) m = n/20, (f) m = n/10.

Figure 10. Results of applying the c-scale and the angle-based methods on a snowflake shape. (a) Corners detected with c-scale. (b) Inflection points detected with c-scale. (c) Angle-based landmarks.

The corners detected by using the c-scale method on a snowflake shape are shown in Fig.10.a, and the inflection points in Fig.10.b. In this case, t = 1 and the structuring elements are set to 5 for both valley and peak detection. The result obtained with the angle-based method is shown in Fig.10.c. In this case a parameter


v = 66 was selected to obtain the maximum number of landmarks we could get without clusters of landmarks. As we can see, both methods perform well in this case, but the angle-based method is unable to detect some of the corners and inflection points on the extrema of the flake, whereas c-scale detects all of them correctly.

Medical objects

We apply the two methods to medical objects, such as the liver and the talus bone. The results are shown in Fig.11. In this case, in order to get a sufficient number of landmarks with the angle-based method, we allowed clustering of the detected dominant points, purely for comparison purposes.

Figure 11. (a-b) c-scale valley and peak detection on liver with t = 3.5, θv = 3, θp = 3, sev = 20, sep = 20. (c) Angle-based method on liver with v = 25. (d-e) c-scale valley and peak detection on talus with t = 3.1, θv = 0, θp = 0, sev = 5, sep = 5. (f) Angle-based method on talus with v = 20.

We can see that c-scale performs better and is able to accurately capture the details of the shapes even with a small number of landmarks.

5. CONCLUSIONS

We have presented a new theory and method for shape description based on the novel concept of c-scale, and we have compared it with an existing, popular shape descriptor. The method performs significantly better, is simple, and produces a comprehensive description of shape, with numerous potential applications. For each boundary element b, the arc length of the largest homogeneous-curvature region is estimated, as well as the orientation of the tangent at b. The method is different from previous methods of curvature estimation and can be directly applied to digital boundaries without requiring prior approximations of the boundary, giving robust and accurate results at different levels of detail by considering the local morphometric scale of the object. We have shown that this method is useful for shape description as well as for the extraction of dominant points. This work focuses on 2D shapes, but it can be extended to shapes of any dimensionality. In future work, we will use this method to automatically build Point Distribution Models for creating Active Shape Models (ASMs)23 toward the goal of organ modeling and segmentation.


ACKNOWLEDGMENTS

This research is funded by the European Commission FP6 Marie Curie Action Programme (MEST-CT-2005-021170). The medical images were provided by the Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, USA.

REFERENCES

1. I. L. Dryden and K. V. Mardia, Statistical Shape Analysis, Wiley, 1998.
2. D. Zhang and G. Lu, "Review of shape representation and description techniques," Pattern Recognition 37, pp. 1–19, January 2004.
3. S. Loncaric, "A survey of shape analysis techniques," Pattern Recognition 38, pp. 983–1001, August 1998.
4. E. R. Davies, Machine Vision: Theory, Algorithms, Practicalities, Academic Press, 1997.
5. M. Sonka, V. Hlavac, and R. Boyle, Image Processing, Analysis, and Machine Vision, Brooks/Cole, 1999.
6. U. Ramer, "An iterative procedure for the polygonal approximation of plane curves," Computer Graphics and Image Processing 1, pp. 244–256, 1972.
7. K. S. Fu, Syntactic Pattern Recognition and Applications, Prentice Hall, 1982.
8. H. Asada and M. Brady, "The curvature primal sketch," IEEE Transactions on Pattern Analysis and Machine Intelligence 8(1), pp. 2–14, 1986.
9. F. Mokhtarian and A. Mackworth, "Scale-based description and recognition of planar curves and two-dimensional shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence 8(1), pp. 34–43, 1986.
10. R. Chellappa and R. Bagdazian, "Fourier coding of image boundaries," IEEE Transactions on Pattern Analysis and Machine Intelligence 6(1), pp. 102–105, 1984.
11. Q. Tieng and W. Boles, "Recognition of 2D object contours using the wavelet transform zero-crossing representation," IEEE Transactions on Pattern Analysis and Machine Intelligence 19(8), pp. 910–916, 1997.
12. D. M. Squire and T. M. Caelli, "Invariance signature: characterizing contours by their departures from invariance," Computer Vision and Image Understanding 77, pp. 284–316, 2000.
13. M. Worring and A. Smeulders, "The accuracy and precision of curvature estimation methods," International Conference on Pattern Recognition III, pp. 139–142, 1992.
14. S. Herman and R. Klette, "A comparative study on 2D curvature estimators," International Conference on Computer Theory and Applications, pp. 584–589, 2007.
15. A. Rosenfeld and E. Johnston, "Angle detection on digital curves," IEEE Transactions on Computers 22, pp. 875–878, 1973.
16. A. Rosenfeld and J. Weszka, "An improved method of angle detection on digital curves," IEEE Transactions on Computers 24, pp. 940–941, 1975.
17. L. Davis, "Understanding shape: angles and sides," IEEE Transactions on Computers 26(3), pp. 236–242, 1977.
18. B. Ray and K. Ray, "An algorithm for detection of dominant points and polygonal approximation of digitized curves," Pattern Recognition Letters 13, pp. 849–856, 1992.
19. R. Neumann and G. Teisseron, "Extraction of dominant points by estimation of the contour fluctuations," Pattern Recognition 35, pp. 1447–1462, 2002.
20. A. Carmona-Poyato, N. Fernandez-Garcia, R. Medina-Carnicer, and F. Madrid-Cuevas, "Dominant point detection: a new proposal," Image and Vision Computing 23, pp. 1226–1236, 2005.
21. P. Saha, J. Udupa, and D. Odhner, "Scale-based fuzzy connected image segmentation: theory, algorithms, and validation," Computer Vision and Image Understanding 77, pp. 145–174, 2000.
22. A. Madabhushi, "Generalized scale: theory, algorithms, and applications in image analysis," Ph.D. Thesis, Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania, 2004.
23. T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, "Active shape models - their training and application," Computer Vision and Image Understanding 61, pp. 38–59, 1995.
