COMPARISON AND EVALUATION OF EDGE AND CORNER DETECTORS
G. ZAPRYANOV, I. NIKOLOVA

Assist. Prof. Georgi Zapryanov, Computer Systems Department, Technical University of Sofia, Bul. “Kl. Ohridsky” No. 8, 1100 Sofia, BG, e-mail: [email protected]
Assist. Prof. Iva Nikolova, Computer Systems Department, Technical University of Sofia, Bul. “Kl. Ohridsky” No. 8, 1100 Sofia, BG, e-mail: [email protected]

Abstract: A large number of image processing applications rely on the extraction of important features from image data, from which a description, interpretation, or understanding of the scene can be provided by a machine (automatically). Some of the most notable examples are stitching of panoramic photographs, extraction of metric information from a scene, 3D reconstruction, object detection, object recognition, motion tracking, etc. Such applications often require relating two or more images in order to extract information from them. One of the key steps in this process is the selection of a number of image features and the establishment of correspondences between them. The type of features selected depends on the type of image provided. The features used are geometric features such as corners, lines, curves, templates, regions, and patches. This paper addresses the problems of feature detection and extraction from images. The objective is to introduce and compare a few commonly used feature detectors (edge and corner detectors) and to assess their efficiency by means of experimental studies with different types of images. Several methods are investigated in order to evaluate the geometrical stability of feature detection under different geometric transformations. The results obtained during the experiments are presented and discussed, and the best performing methods are highlighted.

Keywords: Image Analysis, Image Registration, Feature Extraction, Edge Detection, Corner Detection, Computer Vision

1. INTRODUCTION

Many applications require comparing two or more images in order to extract information from the differences between them. Examples are the identification of objects in scenes, motion detection when processing consecutive frames from a camera, image identification, and others. A method which compares all image pixels is unacceptable in most such applications, since they work in real time and speed is of great significance. Consequently, it is necessary to compare only the “key” image areas, that is, the ones which carry the information about the image structure. Restricting the comparison to these areas considerably reduces the computation time. An initial stage of finding such image areas is the detection of characteristic geometric features in them, such as edges and corners. Edges and corners are detected by analyzing changes in the brightness of the image elements. Detecting these features considerably reduces the quantity of image data while keeping its important structural properties intact. This article reviews some of the most often used algorithms for edge and corner detection and compares their performance on different kinds of images. Three kinds of images are used for this comparison: planar (vector-created) images, photographs of geometric objects, and real photographic images. The paper is organized as follows. Section II briefly presents some of the methods for edge and corner detection. The test images used and the motivation for selecting them are


The research reported in this paper is partially supported by the Bulgarian Ministry of Education and Science under project VU-MI-204/2006 “Intelligent sensor systems for security enhancement”.


given in Section III. Section IV discusses results obtained in our experimental studies. Finally, concluding remarks are provided in the last section.

2. EXPLORED ALGORITHMS – A BRIEF DESCRIPTION

2.1 ALGORITHMS FOR EDGE DETECTION

An edge in the image forms as a result of differences in intensity caused by changes in the structure of the scene (the image): changes at the borders between objects; shadows; changes occurring where one object occludes another; the presence of texture, noise, and others. The main requirements for an edge detection algorithm are:
1. Good detection. The number of false and missing edges should be reduced to the minimum. The signal-to-noise ratio should be maximal.
2. Proper positioning. The detected edges should coincide as closely as possible with the real edges in the input image.
3. Each edge should be detected only once, that is, there should be only one response per edge.
Most edge detection algorithms can be grouped into two categories: those based on the first derivative and those based on the second derivative of the intensity function. At the borders between surfaces there is a change in the value of the intensity function f(x, y), which gives the intensity of the reflected light at point (x, y). The gradient of the function at these points will be high. The methods operating on the first derivative detect the minima and maxima of the first derivative of the function. The methods from the second-derivative group look for its zero crossings.

2.1.1 OPERATORS BASED ON THE FIRST DERIVATIVE OF THE INTENSITY FUNCTION

TWO-MASK EDGE DETECTORS

The first step is the determination of the gradient magnitude and the angle of orientation of the edge. The gradient magnitude is given by:

J(x, y) = |∇f(x, y)| = sqrt(Jx² + Jy²), where Jx = ∂f(x, y)/∂x and Jy = ∂f(x, y)/∂y  (1)

The angle of orientation of the edge giving rise to the spatial gradient is given by:

θ = arctan(Jy / Jx) = arctan((∂f/∂y) / (∂f/∂x))  (2)

After the gradient value is calculated for all image pixels, a threshold t is applied. The values under this threshold are nullified, and the rest are marked with “one”. The result is a binary representation of the image in which the detected edges are marked with “1”. The choice of threshold is important for the correct detection of edges. If the threshold is too low, false edges resulting from noise or texture may be marked. If the threshold is too high, significant edges may be missed. The studied two-mask operators are those of Sobel and Prewitt. There is also the option to set custom coefficients (masks) in order to determine the optimum for concrete applications.
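The two-mask scheme of equations (1) and (2) can be sketched in NumPy. This is only an illustration, not the implementation used in the experiments; the plain loop over a 3x3 window and the example threshold are placeholder choices.

```python
import numpy as np

# Sobel masks for the horizontal and vertical derivatives.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def apply_mask(image, mask):
    """Windowed correlation of a 3x3 mask over the image interior
    (borders are left at zero for simplicity)."""
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.sum(image[y-1:y+2, x-1:x+2] * mask)
    return out

def sobel_edges(image, threshold):
    """Gradient magnitude J = sqrt(Jx^2 + Jy^2) per eq. (1),
    binarized at `threshold`; orientation per eq. (2)."""
    jx = apply_mask(image, SOBEL_X)
    jy = apply_mask(image, SOBEL_Y)
    magnitude = np.sqrt(jx**2 + jy**2)
    orientation = np.arctan2(jy, jx)
    return (magnitude > threshold).astype(np.uint8), orientation
```

A vertical intensity step, for example, yields a column of "1" pixels along the step when a moderate threshold is used.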

COMPASS OPERATORS

Another method for calculating the gradient magnitude and the edge direction uses eight masks with which each pixel is processed. A 3x3 aperture is used. The group of eight masks is created from an initial mask by rotating the coefficients on its periphery (Robinson, 1977). After convolution with all masks, the highest value obtained is accepted as the gradient value of the function at the given point. The eight masks define eight possible edge directions. The edge direction (horizontal, vertical, or diagonal) is determined by finding the mask which yields the highest calculated value. After calculating the gradient, a suitable threshold should again be applied in order to reject the values under it as false edges. The studied groups of masks, which are also the most often used ones of this type, are those of Kirsch and Robinson.

CANNY’S ALGORITHM FOR EDGE DETECTION

Canny’s algorithm (Canny, 1986) for edge detection is considered optimal and is used in many applications. Its basic steps are:
Step 1: The image is filtered in order to eliminate noise. To this end, the input image is convolved with a Gaussian filter. The aperture size and the standard deviation are set according to the noise in the image.
Step 2: Using intensity differences (calculating the first derivative of the intensity function), the gradient of the image and the edge directions are computed. Sobel’s operator can be used. The result is the gradient image J(x, y) and the edge direction θ.
Step 3: Thinning of the gradient image obtained in the previous step, in which “hills” of intensity change can be found. The edges must be extracted from these “hills”.
Step 4: Application of two thresholds to the image with thinned edges in order to eliminate the false edges caused by noise. If only one threshold is used, the gradient of points which are not part of an edge could surpass the threshold and get into the output image; if the threshold is too high, interruptions in the real edges could occur. For this reason, two thresholds t1 and t2 are applied to the obtained image, their approximate ratio being t2 = 2t1. Each image pixel whose value is higher than t2 is considered part of an edge. All pixels connected to it whose values are higher than t1 are also considered part of the same edge. After that, the neighbours of the newly found edge pixels are checked, and this process continues until a pixel whose value is lower than t1 is reached. The final result is an image with thin edges connected into contours, while the presence of false edges is reduced to the minimum.
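The double-threshold edge tracking of Step 4 can be sketched as follows. This is an illustrative NumPy version; the breadth-first growth over 8-connected neighbours is one way to implement the connectivity check described above, not necessarily the one used in the experiments.

```python
import numpy as np
from collections import deque

def hysteresis(grad, t1, t2):
    """Keep pixels above t2 as edge seeds, then grow each edge through
    8-connected neighbours whose gradient exceeds t1 (typically t2 ~= 2*t1)."""
    strong = grad > t2
    weak = grad > t1
    out = np.zeros_like(grad, dtype=bool)
    out[strong] = True
    queue = deque(zip(*np.nonzero(strong)))   # start from all strong pixels
    h, w = grad.shape
    while queue:
        y, x = queue.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and weak[ny, nx] and not out[ny, nx]:
                    out[ny, nx] = True        # weak pixel connected to an edge
                    queue.append((ny, nx))
    return out.astype(np.uint8)
```

A weak response connected to a strong one survives, while an isolated weak response is rejected, which is exactly the behaviour Step 4 asks for.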

2.1.2 OPERATORS BASED ON THE SECOND DERIVATIVE OF THE INTENSITY FUNCTION – THE LAPLACIAN OPERATOR

The Laplacian operator is used to calculate the second derivative of the intensity function. The methods for obtaining the gradient image based on the second derivative are highly sensitive to noise. That is why, before applying the Laplacian operator, the image is smoothed with a Gaussian filter. The result is:

J(x, y) = ∇²[G ⊗ f(x, y)],  (3)

where ∇² is the Laplacian operator and G is the Gaussian operator. In other words, the image is first convolved with a Gaussian filter and the obtained image is then convolved with the Laplacian operator. Since convolution is associative, this is equivalent to convolving the Laplacian operator with the Gaussian filter and then convolving this result with the image. The convolution of the Laplacian operator and the Gaussian filter defines a new operator, called the Laplacian of Gaussian (LoG). It is calculated according to the following formula:

LoG = ∇²G(x, y) = -(1 / (π·σ⁴)) · [1 - (x² + y²) / (2σ²)] · e^(-(x² + y²) / (2σ²))  (4)
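Equation (4) can be sampled directly to build a discrete LoG mask. This is a sketch; the zero-mean correction at the end is a common practical adjustment (so that flat regions give zero response), not part of the formula itself.

```python
import numpy as np

def log_kernel(size, sigma):
    """Sample the Laplacian-of-Gaussian of eq. (4) on a size x size grid."""
    assert size % 2 == 1, "odd aperture expected"
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    s2 = (x**2 + y**2) / (2.0 * sigma**2)
    k = -(1.0 / (np.pi * sigma**4)) * (1.0 - s2) * np.exp(-s2)
    return k - k.mean()   # zero-mean so a constant image yields zero
```

The resulting mask has its most negative value at the centre and is convolved with the image as a single pass, replacing the two-step Gaussian-then-Laplacian filtering.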

2.2 ALGORITHMS FOR CORNER DETECTION

Another group of points which can describe the structure of the image, besides the edges, are the corners. A corner is obtained where two or more edges cross; the edges define the border between two objects or between parts of one and the same object. A corner detector should satisfy a number of important criteria:
1. All true corners should be detected.
2. No false corners should be detected.
3. Corner points should be well localized.
4. The corner detector should be robust with respect to noise.
5. The corner detector should be efficient.

MORAVEC’S ALGORITHM

Moravec (Moravec, 1980) proposed measuring the intensity variation by placing a small square window (typically 3x3, 5x5, or 7x7 pixels) and then shifting this window by one pixel in each of the eight principal directions (horizontally, vertically, and along the four diagonals). The intensity variation for a given shift is calculated as the sum of squared differences of corresponding pixels in the two windows. The intensity variation at the central pixel is the minimum variation over the eight principal directions. Applying the Moravec operator to each pixel of an image creates a cornerness map. The corners are the local maxima of the cornerness map. The next step of the algorithm is to apply a threshold to the elements of the obtained map, so that all elements whose values are above the threshold are marked as corners (with one) and the rest are nullified. Finally, non-maximum suppression is applied in order to obtain a single point per corner.

HARRIS-STEPHENS’S ALGORITHM

Another algorithm from the same group is the one proposed by Harris and Stephens (Harris, 1988). Since the window used by the Moravec operator is square and binary, the estimate of the intensity variation is considered noisy. This drawback is eliminated by using a Gaussian filter. Let the intensities of the original and the shifted image be denoted by A and B, let the difference of the intensities be denoted by V, and denote the intensity at a point (x, y) by I. For a shift in the horizontal direction and a kernel of 3x3 pixels the result is:

V_x = Σ_{i=1..9} w_i·(A_i − B_i)² = Σ_{i=1..9} w_i·(B_i − A_i)² ≈ Σ_{i=1..9} w_i·(∂I_i/∂x)²,  (5)

where w_i is a Gaussian mask centered at position i. The intensity variation can be written as a function of the gradient of the image; for a general shift (u, v), equation (5) can be rewritten as:

V_{u,v}(x, y) = Σ_{i=1..n} w_i·(u·∂I_i/∂x + v·∂I_i/∂y)²
             = Σ_{i=1..n} w_i·(u²·(∂I_i/∂x)² + 2uv·(∂I_i/∂x)(∂I_i/∂y) + v²·(∂I_i/∂y)²)
             = A·u² + 2C·uv + B·v²,  (6)

where A = (∂I/∂x)² ⊗ w, B = (∂I/∂y)² ⊗ w, and C = (∂I/∂x · ∂I/∂y) ⊗ w. Harris and Stephens noticed that this can be written as a matrix equation:

V_{u,v}(x, y) = A·u² + 2C·uv + B·v² = [u v]·M·[u v]^T, where M = [[A, C], [C, B]].  (7)
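The entries A, B, and C of M, and the Harris corner response det(M) − k·trace²(M) built from them, can be sketched as follows. This is illustrative only: the gradients ix and iy would in practice come from a derivative operator such as Sobel, and the weighting window here stands in for the Gaussian mask w.

```python
import numpy as np

def harris_response(ix, iy, window, k=0.05):
    """Corner measure det(M) - k*trace(M)^2 from windowed products of
    the gradient images ix, iy; `window` is a normalized weighting mask."""
    def smooth(img):
        # windowed weighted sum over the image interior
        h, w = img.shape
        r = window.shape[0] // 2
        out = np.zeros_like(img, dtype=float)
        for y in range(r, h - r):
            for x in range(r, w - r):
                out[y, x] = np.sum(img[y-r:y+r+1, x-r:x+r+1] * window)
        return out
    a = smooth(ix * ix)   # A = (dI/dx)^2 (x) w
    b = smooth(iy * iy)   # B = (dI/dy)^2 (x) w
    c = smooth(ix * iy)   # C = (dI/dx * dI/dy) (x) w
    det = a * b - c * c   # det(M) = lambda1 * lambda2
    trace = a + b         # trace(M) = lambda1 + lambda2
    return det - k * trace**2
```

Strong gradients in both directions inside the window give a positive response (corner), while gradients in only one direction give a negative response (edge).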

Matrix M contains all the differential operators describing the geometry of the image surface at a given point (x, y). The eigenvalues of M will be proportional to the changes of the image surface. When the position is in the interior of an object, the image intensity is assumed to be relatively constant within the window; since there is little curvature in the surface, both eigenvalues will be relatively small. For local windows straddling an edge, there is significant curvature perpendicular to the edge and very little curvature along the edge, so one eigenvalue will be large and the other small. The positions corresponding to a corner or an isolated pixel have significant curvature in both directions, so both eigenvalues will be large. Let the eigenvalues of M be denoted by λ1 and λ2. The above analysis indicates that the plane described by λ1 and λ2 can be divided into three distinct regions:
- λ1 ≈ 0 and λ2 ≈ 0: the point is in an area without change in intensity (background or inside an object);
- λ1 >> λ2 or λ2 >> λ1: the point belongs to an edge;
- λ1 and λ2 are both large: the point belongs to a corner.
On this basis, a value is calculated showing to what extent a given point is a corner of the image. Let this value for point (x, y) be denoted by C(x, y). Harris defines the following:

C(x, y) = det(M) − k·trace²(M), where det(M) = λ1·λ2 = A·B − C² and trace(M) = λ1 + λ2 = A + B,  (8)

and k is an experimentally determined constant (Harris’s suggestion is 0.04 – 0.06). To tell the difference between a corner and a point inside an object, it is necessary to apply a threshold. The points from the inside of an object obtain low values. In practice, the threshold should be high enough to prevent finding false corners caused by noise. The corners occur as local maxima in the obtained corner map.

SMITH-BRADY’S ALGORITHM (SUSAN ALGORITHM)

Another approach to corner detection is the one proposed by Smith and Brady (Smith, 1997). The algorithm is named SUSAN (Smallest Univalue Segment Assimilating Nucleus). It assumes that the pixels situated in a comparatively small round area and belonging to one object in the image have similar intensities. The algorithm counts the number of pixels whose intensity is similar to that of the pixel at the centre of the mask (the nucleus of the mask). The number of these pixels is called the USAN (Univalue Segment Assimilating Nucleus) of the corresponding mask, i.e., the pixels similar to the nucleus. The mask is applied to all image pixels. For each pixel, the number of pixels similar to it within the mask is recorded. After that, the local minima in the obtained array are sought. Figure 1 depicts a round mask applied at different points of the image.
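The USAN counting just described can be sketched as follows. This is a minimal version: the mask radius and the similarity threshold t are illustrative values, and the hard intensity comparison stands in for the smoother similarity function used in the original SUSAN formulation.

```python
import numpy as np

def usan_map(image, radius=3, t=27):
    """For each interior pixel, count the mask pixels whose intensity is
    within t of the nucleus (mask centre); corners give the smallest counts."""
    h, w = image.shape
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (yy**2 + xx**2) <= radius**2          # circular mask
    counts = np.full((h, w), int(disk.sum()), dtype=int)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = image[y-radius:y+radius+1, x-radius:x+radius+1]
            similar = np.abs(patch - image[y, x]) <= t
            counts[y, x] = int(np.count_nonzero(similar & disk))
    return counts
```

For a bright square on a dark background, the count is full inside the square, roughly halved on an edge, and smallest at the corner, which is exactly the minimum the detector looks for.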

FIGURE 1. A MASK APPLIED IN DIFFERENT IMAGE POINTS.

The pixels whose values are close to that of the nucleus are marked in red. As can be seen from the figure, when approaching an edge, the number of pixels whose values are close to that of the nucleus decreases. Their number is lowest at the corners of the object.

TRAJKOVIC-HEDLEY’S ALGORITHM

Trajkovic and Hedley (Trajkovic, 1998) propose an algorithm which operates by calculating the differences in the intensities along all lines crossing a particular point. Trajkovic believes that geometric corners are more stable than texture corners, so texture corners should be eliminated. That is why the studied image is first reduced by a factor of two or four, which averages the intensities of adjacent pixels. Since texture corners are small areas with changes in intensity, this averaging eliminates them in the reduced image. The Trajkovic operator considers a small circular window and all the lines which pass through the center of this circle (fig. 2). Let the center of the circle be denoted by C. Consider an arbitrary line that passes through C and intersects the boundary of the circular window at P and P’. Denote the intensity at a point X by I_X: I_C is the intensity at the center of the circle, and I_P and I_P’ are the intensities at the intersection points P and P’.

FIGURE 2. NOTATION FOR TRAJKOVIC OPERATOR

The cornerness measure for the Trajkovic operator is then given as:

C(x, y) = min[(I_P − I_C)² + (I_P’ − I_C)²] over all pairs (P, P’)  (9)

This cornerness measure can be understood by considering its response to the different cases shown in Figure 3.
Cases 1 and 2: The majority of the circular window is within an interior region. There will be at least one line where I_P = I_C = I_P’.
Cases 3 and 4: The center of the circle lies on an edge. There will be exactly one line where I_P = I_C = I_P’.
Case 5: The center of the circle is on a corner. For every line, at least one of I_P or I_P’ will differ from I_C.
Case 6: Isolated pixel. For every line, both I_P and I_P’ will differ from I_C.





FIGURE 3. TRAJKOVIC ALGORITHM – DIFFERENT CASES OF MASK POSITION
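The cornerness measure (9), restricted to the 4-neighbour ring (horizontal and vertical diameters only), can be sketched as follows. This is illustrative; the ring radius and border handling are simplified.

```python
import numpy as np

def trajkovic4(image, r=1):
    """Cornerness of eq. (9): for each diameter P-P' of a small circular
    window, accumulate (Ip-Ic)^2 + (Ip'-Ic)^2 and keep the minimum."""
    h, w = image.shape
    out = np.zeros((h, w))
    # opposite point pairs on the ring: horizontal and vertical diameters
    pairs = [((0, -r), (0, r)), ((-r, 0), (r, 0))]
    for y in range(r, h - r):
        for x in range(r, w - r):
            ic = image[y, x]
            responses = []
            for (dy1, dx1), (dy2, dx2) in pairs:
                ip = image[y + dy1, x + dx1]
                ipp = image[y + dy2, x + dx2]
                responses.append((ip - ic)**2 + (ipp - ic)**2)
            out[y, x] = min(responses)   # one "quiet" line suppresses the point
    return out
```

On an axis-aligned step corner the minimum over the diameters stays positive, while on a straight edge the line running along the edge drives the minimum to zero, matching Cases 3-5 above.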

The Trajkovic corner detection algorithm makes use of a low-resolution version of the input image in order to reduce the number of reported texture corners and to increase the speed of the algorithm. C1(x, y) is calculated for the pixels of the reduced image and the potential corners are marked: only the ones whose values are higher than a threshold t1 are considered corners. The position of each potential corner in the original image is then calculated, and C2(x, y) is calculated at these positions in order to confirm or reject the presence of a corner. The obtained value is compared with a new threshold t2, and the elements whose values are above this threshold are marked as corners. It should also be taken into account that the reduction of the image leads not only to the disappearance of texture corners, but also reduces the number of geometric corners. The Trajkovic algorithm for 8 neighbours is identical to the algorithm for 4 neighbours except for how it calculates the simple and interpixel cornerness measures C1 and C2: it considers all 8 neighbours of the central element.

3. TEST IMAGES FOR COMPARISON OF ALGORITHMS

Two types of images are used: artificially created images and real photographs. Test image1 (256x256 pixels) contains many corner types: L-, Y-, T-, Arrow- and X-junctions, and is widely used to evaluate how a corner detector responds to each of these corner types. This test image also contains corners formed at a range of different grayscale values, to test operation on corners of varying intensity. Test image2 (similar to test image1) is created with Adobe Illustrator in three different resolutions (256x256, 512x512 and 1024x1024 pixels), after which it is converted to a raster image with Adobe Photoshop. Test image3 (512x720 pixels) tests the operation of the corner detectors on a real image where the location of the corners is intuitively clear, the background is relatively uniform, and each object has a nearly uniform color and texture. Test images 4-7 are also photographs of the real world (512x512, 820x820, 820x1200 and 820x1200 pixels, respectively), used for the various geometric objects they contain. These examples illustrate why a strict definition of a corner has not been established: it is unlikely that different individuals would agree on exactly which parts of an image contain a corner, and they would certainly not agree on the precise location of a given corner.

FIGURE 4. COLLECTION OF 7 TEST IMAGES (IMAGES ARE NUMBERED FROM (1) TO (7) IN THE ORDER OF LEFT-TO-RIGHT AND TOP-TO-BOTTOM).

In order to study the stability (repeatability) of the corner detection algorithms, test image2 is rotated clockwise by 15° and 30° with Adobe Illustrator. In order to study noise stability, Gaussian noise is applied to test image3. All algorithms are also studied when the contrast and the brightness of the same image are reduced.

Results from the edge and corner detection algorithms described in Section II, applied to the test images, are shown in Figs. 5-6 (edge detection) and Figs. 7-10 (corner detection).


One of the parameters of each algorithm is the threshold with which the values obtained after processing are compared. The choice of a suitable threshold is of great significance for the final result of the algorithm. Using the same threshold value with different algorithms does not lead to equal results on one image (fig. 5). Figure 6 shows results from applying the edge detection algorithms with thresholds chosen so that the visual results are approximately the same. Setting the threshold is specific both to the algorithm and to the image itself.

FIGURE 5. RESULTS FOR TEST IMAGE2. IMAGES ARE LABELLED FROM (A) TO (H) IN THE ORDER LEFT-TO-RIGHT AND TOP-TO-BOTTOM: (A) – SOBEL, THRESHOLD (T) = 50; (B) – ROBINSON, T = 50; (C) – PREWITT, T = 50; (D) – KIRSCH, T = 50; (E) – LAPLACIAN OF GAUSSIAN (LOG), σ = 0.7, KERNEL SIZE N = 7, T = 50; (F) – CANNY, σ = 3, N = 3, T1 = 5, T2 = 50; (G) – LOG, σ = 0.1, N = 3, T = 0; (H) – CANNY, WITHOUT GAUSSIAN FILTER, T1 = 20, T2 = 40.

FIGURE 6. RESULTS FOR TEST IMAGE4. IMAGES ARE LABELLED FROM (A) TO (F) IN THE ORDER LEFT-TO-RIGHT AND TOP-TO-BOTTOM: (A) – SOBEL, T = 100; (B) – PREWITT, T = 80; (C) – ROBINSON, T = 80; (D) – KIRSCH, T = 300; (E) – LOG, σ = 3, N = 7, T = 25; (F) – CANNY, WITHOUT GAUSSIAN FILTER, T1 = 30, T2 = 60.


FIGURE 7. RESULTS FOR TEST IMAGES 1, 2. IMAGES ARE LABELLED FROM (A) TO (F) IN THE ORDER LEFT-TO-RIGHT AND TOP-TO-BOTTOM: (A) – MORAVEC, N = 3, T = 400; (B) – HARRIS, σ = 0.4, N = 3, K = 0.16, T = 10; (C) – SUSAN, T1 (GEOMETRIC) = 18, T2 (INTENSITY) = 10; (D) – MORAVEC, N = 3, T = 250; (E) – TRAJKOVIC, 8 NEIGHBORS, SCALE FACTOR = 1/4, T1 (LOW-RESOLUTION IMAGE) = 8, T2 (ORIGINAL IMAGE) = 20; (F) – TRAJKOVIC, 8 NEIGHBORS, SCALE FACTOR = 1/4, T1 = 160, T2 = 300.

FIGURE 8. RESULTS FOR TEST IMAGES 1, 2. IMAGES ARE LABELLED FROM (A) TO (F) IN THE ORDER LEFT-TO-RIGHT AND TOP-TO-BOTTOM. TOP ROW – ROTATION OF 15°, BOTTOM ROW – ROTATION OF 30°. (A) – MORAVEC, N = 5, T = 800; (B) – HARRIS, σ = 1, N = 3, K = 0.2, T = 40000; (C) – SUSAN, T1 = 40, T2 = 33; (D) – TRAJKOVIC, 8 NEIGHBORS, SCALE = 1/2, T1 = 2000, T2 = 39000; (E) – HARRIS, σ = 1, N = 3, K = 0.2, T = 40000; (F) – SUSAN, T1 = 40, T2 = 33.


FIGURE 9. RESULTS FOR TEST IMAGE3. IMAGES ARE LABELLED FROM (A) TO (D) IN THE ORDER LEFT-TO-RIGHT AND TOP-TO-BOTTOM: (A) – MORAVEC, N = 7, T = 3000; (B) – HARRIS, σ = 0.7, N = 3, K = 0.12, T = 40000; (C) – TRAJKOVIC, 8 NEIGHBORS, SCALE = 1/4, T1 = 500, T2 = 2200; (D) – SUSAN, T1 = 90, T2 = 33.

FIGURE 10. RESULTS FOR LOW-CONTRAST TEST IMAGE3. IMAGES ARE LABELLED FROM (A) TO (D) IN THE ORDER LEFT-TO-RIGHT AND TOP-TO-BOTTOM: (A) – MORAVEC, N = 7, T = 3000; (B) – HARRIS, σ = 1, N = 3, K = 0.18, T = 1310; (C) – TRAJKOVIC, 4 NEIGHBORS, SCALE FACTOR = 1/2, T1 = 20, T2 = 65; (D) – SUSAN, T1 = 55, T2 = 30.

The main experimental results obtained with the edge and corner detection algorithms are:
1) The choice of a suitable threshold depends not only on the algorithm but on the image as well. In most of the algorithms the threshold has no influence on the number of operations and thus on the speed. Exceptions are the Canny and Trajkovic algorithms, where in some cases two thresholds are used.
2) The results of Canny’s algorithm show that it is one of the best for real images. It finds most of the true edges, and in most cases the width of the edges is one pixel. Reducing the value of the standard deviation σ (or omitting the Gaussian filter) maintains finer edges in the image (fig. 5h), but for noisy images a bigger value of σ is necessary.
3) Of the corner detection algorithms, those of Trajkovic are the fastest, while Harris’s is the slowest due to the repeated application of the Gaussian filter. The Trajkovic algorithm for 8 neighbours has a better detection ratio (the ratio between the detected and the false and/or missing corners) than the algorithm for 4 neighbours. Both Trajkovic algorithms find false corners along diagonal edges. For applications requiring a computationally efficient operator and working on a restricted set of images where the detection rate of the Trajkovic operator can be shown to be acceptable, this operator may be useful.
4) The SUSAN algorithm finds all corners in test image1 (fig. 4c), but for real scenes it finds false corners along diagonal edges.
5) Moravec’s algorithm does not have a good detection ratio. However, both the Moravec and Trajkovic operators show good localization on all junction types for test image1 and test image2 (fig. 4). Moravec’s algorithm works very well on low-contrast images (fig. 10a).
6) Harris’s algorithm has the best detection ratio for real scene images (fig. 9). Besides, it has the best repeatability of all the algorithms under rotation (fig. 8), noise, and low image contrast (fig. 10b). Reducing the value of the standard deviation σ increases the number of detected corners, but corners on the edges are marked as well. Very good results for the constant k are obtained when k = 0.12 – 0.2: these higher values eliminate most of the false corners, but not all corners are detected. The highest number of corners is found when k = 0.04 – 0.06, but corners on the edges are also detected. The main problem is setting the algorithm parameters for different types of images.

5. CONCLUSION

A comparative study of several edge and corner detectors was presented. Experimentally, it was found that the Canny and Harris-Stephens algorithms are best suited for most images. No perfect corner detector exists: the Harris algorithm sometimes suffers from poor localization and is computationally expensive. However, it has the best detection rate and has been shown to have a good repeatability rate, and localization is not critical for many applications. Overall, the characteristics of Harris’s algorithm make it the most suitable for use in real applications.

6. REFERENCES

1. Robinson, G., “Edge Detection by Compass Gradient Masks”, Computer Graphics and Image Processing, vol. 6, pp. 492-501, 1977.
2. Canny, J., “A Computational Approach to Edge Detection”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-698, 1986.
3. Moravec, H., “Obstacle Avoidance and Navigation in the Real World by a Seeing Robot Rover”, Technical Report CMU-RI-TR-3, Carnegie-Mellon University, Robotics Institute, 1980.
4. Harris, C. and Stephens, M., “A Combined Corner and Edge Detector”, Proc. Alvey Vision Conference, pp. 147-151, 1988.
5. Smith, S.M. and Brady, M., “SUSAN – A New Approach to Low Level Image Processing”, International Journal of Computer Vision, vol. 23, no. 1, pp. 45-78, 1997.
6. Trajkovic, M. and Hedley, M., “Fast Corner Detection”, Image and Vision Computing, vol. 16, no. 2, pp. 75-87, 1998.
