Evaluation of Edge Detectors: Critics and Proposal

Marc Salotti

Fabrice Bellet and Catherine Garbay

Groupe VISIA, CMCS, Faculté des Sciences, BP 52, 20250 Corte, FRANCE

Lab. TIMC-IMAG, Institut Bonniot, Faculté de Médecine, Domaine de la Merci, 38706 La Tronche, FRANCE

Abstract

It is not possible to have a fair evaluation of edge detectors as long as they are assessed on simple synthetic images or on real images that entail lots of semantic objects. We propose to elaborate a human expertise of edge detection by looking at zooms of small areas and to let humans build reference maps manually. It is then possible to evaluate edge detectors and to undertake a classification of errors. This evaluation is rich in experience and leads to a better understanding of the problems occurring in edge detection.

1 Introduction

Many different edge detection techniques have already been proposed; those of Sobel, Marr and Hildreth, Canny or Haralick, for instance, are well known [1],[5],[8],[12]. However, since several techniques exist and new algorithms are proposed each year, a consistent evaluation of detectors becomes more and more necessary. Performance characterization is thus a critical issue for the evolution of the field, as pointed out by Haralick [6],[7]. The basic question that has to be answered is "what is the most appropriate edge detector for a specific application?". Some criteria have already been suggested, for example "the average risk" of Spreeuwers and Heijden [13], but they do not clearly show the weak points of edge detectors. In particular, while the behavior of a specific edge detector may be well understood on typical synthetic images, its results can generally not be anticipated on real images, and so far the best detector for a specific application remains unknown. Our aim is to propose a pertinent evaluation by determining reference maps of real images based on human expertise and by classifying the errors according to local criteria. Our paper is organized as follows. First, we discuss the general problems of performance characterization and we present different evaluation techniques. Secondly, we present our method in four parts: the protocol used to build reference maps, the human expertise of edge detection, the method used to count spurious and missing edges, and the classification of errors. Then, an evaluation is presented with two real images and two edge detectors.

2 Problem Statement

2.1 The goal of edge detection

Edge detection is defined as the process aimed at describing the image with small patterns that are usually sets of pixels. The goal of edge detection is not to detect the outlines of objects, as Pavlidis pointed out [9]; it is rather to detect discontinuities. Such a definition implies that edges should be detected without taking into account information on objects. As an example, when the contour of an object is blurred and not distinguishable from the background, it is not the role of an edge detector to detect its boundary. Conversely, when a shadow or a highlight is well contrasted, the discontinuity should be detected. However, the problem is not that simple. In practice, the goal of edge detection cannot be accurately defined, for two main reasons:

- First, in two-dimensional images, there exist many types of discontinuities, and the frontier between an acceptable discontinuity and a non-significant transition is not clear. Consider a human face: due to surface curvature and shading, there exist many blurred transitions that are difficult to classify as edge or non-edge. Even in the blocks world, with sharp edges, the presence of small spots, texture, highlights, shadows, or simply of other edges in the close neighborhood makes it difficult to decide about the presence and the location of discontinuities. Another problem concerns the scale of edges. It is a common idea that a good edge detector should detect edges at all scales. However, when the transition is too large, it is not really a discontinuity; it is rather a region with shading gradients. The problem is to define an upper bound on the width of an acceptable discontinuity, and this depends on the resolution of the image and on the application.

- Secondly, the goal of edge detection is sometimes simplified in order to define a model of edge that admits a rigorous mathematical solution. For example, the model of Canny does not take into account the possible presence of other edges in the neighborhood [1]. Important differences can also be noted in the stages of the detection. For example, when applying Sobel masks [12], several pixels may be marked for the same transition, and a technique of suppression of non-local maxima of gradient has to be applied to get thin edges. However, several methods of suppression exist and some errors may occur during this stage. The problem is that a specific thinning operation is sometimes included in the method, or does not even exist, as for edge-following techniques, and the evaluation is therefore biased. Another important problem is the gradient thresholding technique. As pointed out by Fleck [4], most differences between two edge maps are due to the gradient thresholds rather than to the way the gradient is computed. But how should the influence of the threshold be taken into account in the evaluation?

The goal of edge detection is therefore complex and not unique. A fair and reliable evaluation should take into account the richness and diversity of discontinuities, the possible biases introduced by the decomposition of the detection, and the needs of the application.

2.2 Short overview of evaluation techniques

The simplest way to present the results of the detection is to display edge maps. If the approach looks naive, the underlying assumption is that the human eye is the best judge. The problem is that edge detection is a local process, and we may lose our objectivity by considering more or less contextual or global information when looking at the image. In the usual approach, synthetic images are used and reference maps are based on the model of the image. Comparisons are made between the results and the reference, and statistical evaluations are then computed:

- Peli and Malah used images of squares and circles more or less corrupted by noise [10]. Each edge is classified as "Perfect edge", "Broken edge" or "Perfect but broken at critical points". Error functions are computed to get a performance characterization.

- Pratt proposed a performance measure called the "figure of merit" [11]. This measure is equal to 1 when the result is perfect and decreases as the number of missed edges or mislocations increases.

- Spreeuwers and Heijden used test images based on Voronoi tessellation [13]. They propose 6 types of errors: isolated missed, clustered missed, isolated spurious, clustered spurious, displaced and thickened. They present a statistical approach that consists in estimating error probabilities. A cost function called "the average risk" is used to get a global performance measure.

However, even if the basic idea of the approach is sound, some criticisms can be made:

- First, the richness and diversity of the local configurations encountered in synthetic images are in no way comparable to those present in real images. In his proposals for a rigorous performance characterization, Haralick claims that edge models should be representative of the edge configurations present in real images [6],[7]. This may well be true. Nevertheless, as pointed out by several authors replying to Haralick, this is a very difficult undertaking [6].

- Secondly, it is our opinion that there is a misleading underlying assumption in considering that the reference map is the same for an image and its noisy versions. In particular, it is well known that the precision of location decreases as noise increases, and on some occasions, with strong disruptions, the exact

90 90 90 90 90 90 90 90
90 90 90 90 90 90 90 90
90 90 90 90 90 90 90 90
100 100 100 100 100 100 100 100
110 110 110 110 110 110 110 110
110 110 110 110 110 110 110 110
110 110 110 110 110 110 110 110
110 110 110 110 110 110 110 110

Table 1: Sample 1

90 90 90 90 90 90 90 90
90 90 90 90 90 90 90 90
90 90 90 90 90 90 90 90
93 97 98 97 93 90 93 97
102 105 106 105 102 100 102 105
110 110 110 110 110 110 110 110
110 110 110 110 110 110 110 110
110 110 110 110 110 110 110 110

Table 2: Sample 2

location of the edge cannot be recovered. In such a case, we believe that the reference map is wrong and that care should be taken before penalizing detectors. To illustrate this problem, we present two samples, tables 1 and 2, extracted from two synthetic gray-level images. In the first one the edge curve is straight, while in the second it is very irregular. The reference maps of the samples have been determined by hand; they are displayed in tables 3 and 4. Edges are marked 255 and non-edges are marked 0. Gaussian noise has then been added to the first sample and the new gray-level values are presented in table 5. The edges are still distinguishable but there is too much loss of information to recover their exact location. A new reference map, displayed in table 6, has been determined approximately by hand. This reference differs from the one presented in table 3. In fact, both sample 1 and sample 2 could have originated the noisy version. For this reason, it is our opinion that a reference map is not necessarily the same before and after adding noise. It should be noticed that this problem has not been addressed in the evaluation methods proposed so far. Moreover, when adding noise, strong disruptions may even lead to the loss of the contour. This may particularly occur when a bordering region is very small: the gray-level values inside the region may not be

0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
255 255 255 255 255 255 255 255
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0

Table 3: Reference map of sample 1

0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
255 255 0 0 0 255 0 255
0 0 255 255 255 0 255 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0

Table 4: Reference map of sample 2

97 102 96 86 89 90 91 89
91 89 88 100 99 82 90 80
86 90 90 85 90 101 90 93
101 100 105 96 94 100 108 95
111 112 113 102 104 110 110 115
110 109 111 110 110 118 110 110
107 110 115 109 99 110 111 114
111 110 109 111 110 109 110 106

Table 5: Noisy version of sample 1

0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
255 255 255 0 0 255 255 255
0 0 0 255 255 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0

Table 6: Reference map of the noisy version of sample 1

representative any more, and the discontinuity may become invisible.

- Thirdly, except for specific applications, noise is not responsible for most detection errors, as explained by De Micheli et al. [2]. If discontinuities are only weakly distorted by noise, good results on noisy synthetic images do not necessarily mean good results on real images. Most problems rather concern typical configurations with low gradient values, shading gradients, irregular transitions or the proximity of other edges.

What emerges from all this is that it is very difficult to define a fair and reliable performance characterization in edge detection. Interesting ideas have been suggested, but very few conclusions can be drawn from the methods proposed so far.

3 Methodology

3.1 Reference maps

In order to determine reference maps automatically, accurate models of edges and regions have to be defined. This suggests that synthetic images be used. However, the discussion of the evaluation methods raised two important problems. First, it is difficult to create synthetic images from realistic models of edges. Second, the reference map may not be reliable when noise is added to the original image. The conclusion is that the use of complex synthetic images does not allow reliable, automatically built reference maps. To overcome this problem, one solution could be to determine reference maps by hand. However, as we have already explained, contextual or semantic information should not be used. To this end, the whole image should not be presented to the human operator. In order to avoid misleading impressions due to high-level knowledge, our proposal is to zoom the image and to look only at small parts of it. A reference map of edges can then be created by hand. We propose the following stages:

- (1) The human operator zooms the image so as to see a window of exactly n by n pixels. In practice, with the images presented at the end of the paper, we propose n = 16. The presentation of the window is particular: each pixel is a small square with its gray level at the top and the numeric gray value below.

- (2) The human operator looks at the region and marks edge pixels. For the purpose of the evaluation, a specific program displays the part of the image, analyses each mouse click and marks edge pixels in white in the same window.

- (3) Using scroll bars, the human operator moves the window in order to process another part of the image. In most cases, the operator tries to follow a discontinuity, so that information on previous edge pixels can help take the right decision. The process carries on until the whole image is processed.

3.2 Human expertise of edge detection

Our proposal is consistent only if a human is able to determine a reliable reference map of edges. Clearly, this cannot be proved. However, it is our opinion that human expertise in edge detection is the most sophisticated tool available for building reference maps. In the following, we present a broad outline of this human expertise, trying to explain why it is powerful and reliable. It should first be noted that the numeric gray values of the pixels are sometimes very useful for deciding about the presence of edges, in particular when the contrast is low. For this reason, we propose to argue about arrays of numeric gray values instead of small windows of gray-level images.

- The most important point may be the expertise on the location of edges. In many cases, it is not possible to decide which pixel is the edge. The uncertainty is not negligible, and several decisions could be acceptable. The quantization stage, the shape and thickness of the transition and the presence of other edges nearby make it very difficult to locate the edge. An example is presented in table 7. There is a dark homogeneous region on the right (gray values 55-65) and a light region on the left. The problem is that the left region looks noisy and the location of the frontier is not clear. Several edge candidates, outlined in the table, are therefore acceptable. Arbitrary rules could be chosen to decide about the correct edges. However, this would lead to arbitrary penalties when comparing edge maps to the reference. In fact, if such an uncertainty exists for a large set of edges, any evaluation of the location based on any reference

134 136 146 152 154 154 142 132 113 84 65 58 57 57 56 57
148 153 157 164 179 173 146 128 105 78 64 58 57 57 56 56
145 146 154 161 171 175 162 141 109 77 63 58 57 56 56 56
153 166 174 175 172 172 157 135 102 74 63 59 56 57 56 56
161 171 167 166 159 147 124 108 90 72 64 60 57 56 57 56
160 170 173 166 151 139 116 105 91 71 64 60 57 56 56 56
161 148 141 142 136 129 121 107 87 71 64 60 57 56 56 56
161 152 156 148 132 125 111 100 79 69 64 60 57 56 55 55
156 156 150 151 148 134 114 95 74 67 64 60 58 56 55 55
157 160 160 154 138 117 109 93 73 66 65 61 59 57 56 55
162 161 151 133 129 128 113 88 69 65 65 62 59 57 56 56
162 161 160 142 120 111 101 83 67 63 63 62 59 56 56 56
170 163 162 146 130 113 103 79 63 60 61 63 60 57 56 55
165 164 154 133 124 118 109 82 64 59 61 62 60 57 55 55
159 156 153 158 131 112 107 80 63 58 58 59 59 57 56 55
170 156 142 134 126 123 104 78 62 57 57 59 60 58 56 55

Table 7: The problem of edge location

map would be misleading. One may even ask whether an acceptable evaluation of the location exists at all for such images. For practical reasons, we have decided not to estimate mislocation errors. Moreover, we will consider an edge at (x,y) correct if there exists at least one edge in the reference map located in the 3x3 window centered at (x,y).

- Different profiles of edge exist: step, roof, ramp and so on. The problem is that most detectors focus on a single specific profile. Let us consider that two basic transitions exist:

  - The first type of transition is the frontier between two regions with different gray-level averages. In the following, we will call this type of transition "step edge". Among the possible candidates located on the transition, we propose to mark as edge the pixel whose gray-level value is closest to the mean of the two gray-level averages. Mostly, this pixel corresponds to the one with the highest gradient value. However, in the case of large transitions, this property is not necessarily respected.

  - The second type of transition is a line going through a single region. In the following, we will call this type of transition "line edge". Since most detectors are sensitive to the step edge profile, we propose to split the transition in two parts and to mark as edges the two pixels on each side of the line. A line edge is in fact considered as two step edges facing one another. In this way, detectors that concentrate on step edges are not necessarily penalized.

- We define a shading gradient area as a small region with a gradation of the gray-level values. This configuration is not really taken into account in the models of edge (or non-edge) that are commonly proposed. If the region is narrow, it might be a standard step edge; but what if the region is large? In that case, it might be difficult to decide about the presence of an edge, depending on the strength and consistency of the gradation and the thickness of the region.
For instance, table 8 shows a shading gradient area that corresponds to the bottom of the cheek of the human face displayed at the end of the paper. The use of an edge detector could be criticized for such images. However, shading gradient areas are common configurations in most types of images, and it would be an error to avoid processing them. The problem is to determine the rules that enable deciding about the presence and location of edges. A large number of shading gradient areas have been observed. Sometimes it is a rounded edge of a polyhedron, a part of a cylinder, a blurred transition, shadows or parts of texture. According to this experience, and in order to maximize the probability that edges correspond to meaningful contours, we propose to mark pixels if a significant acceleration of the gradation is visible. It is difficult to explain what is "significant" and "visible" because our expertise is more qualitative than quantitative. Nevertheless, we give the main idea of our expertise. Let SGA be

115 112 120 127 132 139 143 136 145 133 138 139 144 145 145 157
110 117 119 121 134 134 140 143 136 139 132 141 146 145 146 154
110 111 121 115 128 134 136 144 141 139 132 139 145 145 148 155
108 115 112 116 122 128 142 143 138 138 133 142 144 144 147 150
108 110 109 116 117 127 138 140 141 136 139 136 140 142 147 151
105 107 106 114 110 121 130 135 138 141 129 138 141 144 149 151
109 106 108 108 106 113 122 134 137 131 144 139 147 141 151 150
105 109 106 111 106 111 122 124 135 136 142 137 151 139 153 144
109 109 106 107 107 111 119 124 132 134 139 143 144 147 144 144
107 112 104 108 106 115 119 117 135 125 139 137 145 143 141 144
111 110 104 109 109 113 116 117 127 125 137 140 138 144 141 147
110 109 111 106 108 116 107 122 120 124 127 130 135 141 140 141
105 112 110 109 110 114 110 119 116 118 121 120 127 141 139 143
100 113 103 111 109 109 120 111 121 110 122 117 133 134 139 136
103 100 107 103 109 107 116 114 114 113 123 115 125 130 134 135
100 105 99 106 104 104 108 110 107 115 116 118 118 122 130 127

Table 8: A shading gradient area

a shading gradient area and D the direction of the gradation. Let G(i) be the gray-level value of a pixel i in SGA and G(i+1) the gray-level value of its successor in direction D. Let A and S be the average and standard deviation of the differences between each G(i) and G(i+1) in SGA. An acceleration of the gradation is "significant" at pixel i if the gray-level difference |G(i+1) - G(i)| is much greater than A + S. The acceleration is "visible" if there exist other accelerations of the gradation among the neighboring pixels located transversally. It is difficult to be more accurate because the decision depends on too many factors. In table 8, the shading gradient area is large and complex. According to our qualitative expertise, only seven pixels can really be considered as edge pixels. They are outlined in the table and can be identified in the reference map of the face image presented in the results. Clearly, in some places the ambiguity is important. However, it can be observed that this edge element corresponds to an acceleration of the gradation visually present in the image (even if the contrast is not evident on paper).

- When small texture elements or spots are present, it is sometimes difficult to decide about the presence of edges. It is a problem of quantization and scale. In order to learn the right strategy, a long time has been spent looking first at images and then at the corresponding small windows of numeric gray values, trying to determine a limit of visibility. According to this expertise, and also for practical reasons, we consider that 4 pixels is the minimum length of a meaningful edge element. However, if the contrast is low, an edge element must be longer than 4 pixels to be visible. Table 9 shows a highly textured region with a small dark region and a white line edge configuration. Edges have been outlined.
The two small objects are distinguishable from the complex background because the gray-level averages of the small regions are significantly different. In that case, we propose to mark as edges the pixels located at the frontier of the regions.

- All other problems, in particular junctions or badly contrasted discontinuities, are relatively easily solved. The human expertise enables a very powerful and adaptive detection strategy:

  - When a complex configuration is present, our expertise takes great care of the shape of the discontinuity a few pixels further. An edge-following technique is in that case a powerful detection technique.

  - We can easily identify junctions and adapt the detection process. In particular, the gradient direction is misleading because it follows the most contrasted transition. To help make the right decision and correctly locate the edge, we can try to follow the border of the region instead of the transition itself.

  - Using a large window (we propose 16x16), we can better appreciate the gray-level averages of the local regions and confirm the presence of a possible badly contrasted discontinuity.
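The "significant acceleration" test described for shading gradient areas can be sketched on a 1-D profile of gray values taken along the gradation direction D. The factor 1.5 standing in for "much greater than A + S" is our own hypothetical choice; the paper deliberately leaves it qualitative.

```python
import statistics

def significant_accelerations(profile, factor=1.5):
    """Sketch of the 'significant acceleration' test on a 1-D profile
    along the gradation direction of a shading gradient area.
    A and S are the mean and standard deviation of the successive
    gray-level differences; 'factor' is our hypothetical stand-in
    for 'much greater than'."""
    diffs = [abs(b - a) for a, b in zip(profile, profile[1:])]
    A = statistics.mean(diffs)
    S = statistics.pstdev(diffs)
    return [i for i, d in enumerate(diffs) if d > factor * (A + S)]

# A smooth gradation with one abrupt jump between indices 4 and 5:
print(significant_accelerations([100, 102, 104, 106, 108, 130, 132, 134, 136]))  # -> [4]
```

The "visible" half of the rule would additionally require similar accelerations at transversally neighboring pixels, which a 1-D sketch cannot show.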

131 123 108 102 110 95 113 69 96 113 113 129 128 135 135 136
127 127 117 110 130 93 119 93 97 108 103 126 143 137 146 136
114 119 118 121 106 96 117 107 72 94 98 124 152 133 142 131
116 124 125 127 127 98 95 62 89 93 104 129 149 128 129 112
112 124 129 133 115 91 96 76 92 105 115 138 129 119 117 100
109 115 130 129 105 106 98 63 104 114 132 153 120 105 113 96
115 112 133 114 126 100 114 86 104 98 143 136 125 101 116 98
114 113 127 98 77 115 115 109 111 97 148 136 127 112 111 100
117 120 117 110 105 112 106 43 98 94 151 136 113 123 106 102
115 129 112 121 50 90 105 71 91 102 145 134 116 123 104 106
113 127 111 111 97 117 89 78 108 118 134 129 124 117 116 110
106 119 115 108 45 110 95 91 109 108 118 122 141 115 123 114
93 108 115 113 80 127 110 104 119 103 113 170 142 126 123 120
77 96 99 110 44 121 109 45 96 95 121 178 127 139 125 121
88 94 92 129 78 87 96 77 93 95 122 177 147 136 135 135
119 103 104 117 60 111 89 91 109 116 116 166 127 130 147 155

Table 9: Small objects

It can be argued that our way of working is too hazardous. The ambiguity really exists: the uncertainty is inherent to the problem, and different reference maps would probably be obtained if the task were performed by different persons. However, we believe that in most cases the decision of the human operator is consistent and reliable because of his experience and his ability to appreciate the situation qualitatively. Is that sufficient to be confident in the reference map? When comparing a reference map proposed by a human operator with the results of an edge detector, the reference looks better (see for example the edge maps at the end of the paper). In fact, there are more meaningful edges and fewer meaningless ones in the reference map than in the other edge maps. It is our opinion that the reference map is not perfect, but it can be used as the best possible reference.

3.3 Rough evaluation

We propose a simple method to distinguish missing edges from spurious ones:

- A missing edge pixel is a pixel p(x,y) marked in the reference map with no matching element in the 3x3 window centered at (x,y) of the resulting edge map.

- A spurious edge pixel is a pixel p(x,y) marked in the resulting edge map with no matching element in the 3x3 window centered at (x,y) of the reference map.

Some bias may be introduced when the same edge pixel of the reference map matches several different pixels of the resulting edge map, or conversely. However, these problematic configurations are rare and most multiple matchings are justified.
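These two counts follow directly from the definitions; a minimal NumPy sketch, assuming binary edge maps:

```python
import numpy as np

def count_errors(reference, result):
    """Count missing and spurious edge pixels with the 3x3 tolerance
    defined above: a pixel matches if any edge of the other map lies
    in the 3x3 window centered on it (binary maps, True = edge)."""
    def has_match(mask, y, x):
        h, w = mask.shape
        return mask[max(0, y - 1):min(h, y + 2), max(0, x - 1):min(w, x + 2)].any()

    missing = sum(1 for y, x in np.argwhere(reference) if not has_match(result, y, x))
    spurious = sum(1 for y, x in np.argwhere(result) if not has_match(reference, y, x))
    return missing, spurious

ref = np.zeros((5, 5), dtype=bool)
ref[2, 1:4] = True                 # reference edge on row 2
res = np.zeros((5, 5), dtype=bool)
res[3, 1:4] = True                 # detected one row lower: still matches
res[0, 0] = True                   # isolated detection: spurious
print(count_errors(ref, res))  # -> (0, 1)
```

The example also illustrates the multiple-matching bias mentioned above: each shifted pixel matches several reference pixels, yet none is counted as an error.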

3.4 Classi cation of errors

A rough evaluation gives an idea of the errors being made. It is interesting to go into details and to classify errors into several categories:

- Problems are often encountered when the discontinuity is badly contrasted and small gradient values are present. We propose to distinguish between high gradient values and low gradient values.

- Line edge configurations might be the source of systematic errors; we propose to identify them.

- Shading gradient areas are also interesting configurations.

- When other edges are present in a close neighborhood, the configuration is sometimes complex. We propose to take them into account.

Finally, we propose the 6 following classes:

0  1  1  1  0
0  1  1  1  0
0  0  0  0  0
0 -1 -1 -1  0
0 -1 -1 -1  0

[the vertical mask is the transpose of the horizontal one above; the two diagonal step masks are garbled in the source extraction and are not reproduced]

Table 10: Masks for step edges

0  1  1  1  0
0  0  0  0  0
0 -2 -2 -2  0
0  0  0  0  0
0  1  1  1  0

0  0  0  0  0
1  0 -2  0  1
1  0 -2  0  1
1  0 -2  0  1
0  0  0  0  0

0  1  0  0  0
1  1  0 -2  0
0  0 -2  0  0
0 -2  0  1  1
0  0  0  1  0

0  0  0  1  0
0 -2  0  1  1
0  0 -2  0  0
1  1  0 -2  0
0  1  0  0  0

Table 11: Masks for line edges

- a) Step edge configuration, low gradient values
- b) Step edge configuration, high gradient values
- c) Line edge configuration, low gradient values
- d) Line edge configuration, high gradient values
- e) Shading gradient areas
- f) Other edges nearby

The identification of the configuration can be manual or automatic. The manual method would probably be more reliable, but it is tedious work; we therefore propose an automatic classification of errors. In order to identify line edge configurations and to compute gradient values, specific template masks are used. The first set of masks, presented in table 10, is sensitive to step edge configurations; the second, presented in table 11, is sensitive to line edge configurations. The 8 masks are successively applied, the strongest response determining the final gradient value, orientation and type (step or line) of each pixel. This method may not be perfect, but the purpose is just to find, for each pixel, the main characteristics of the configuration. More accurately, the 6 classes are defined by the following expressions:
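The mask-response step can be sketched on a single 5x5 patch. Only the two horizontal masks of tables 10 and 11 are shown here, whereas the method applies all 8; the function name is our own.

```python
import numpy as np

# Horizontal step mask (table 10) and horizontal line mask (table 11);
# the full method uses 4 orientations of each type, 8 masks in total.
STEP_H = np.array([[0,  1,  1,  1, 0],
                   [0,  1,  1,  1, 0],
                   [0,  0,  0,  0, 0],
                   [0, -1, -1, -1, 0],
                   [0, -1, -1, -1, 0]])
LINE_H = np.array([[0,  1,  1,  1, 0],
                   [0,  0,  0,  0, 0],
                   [0, -2, -2, -2, 0],
                   [0,  0,  0,  0, 0],
                   [0,  1,  1,  1, 0]])

def classify_patch(patch, masks):
    """Apply every mask to a 5x5 patch; the strongest absolute response
    gives the gradient value and the configuration type of the pixel."""
    responses = {name: float(abs((patch * m).sum())) for name, m in masks.items()}
    best = max(responses, key=responses.get)
    return best, responses[best]

masks = {"step": STEP_H, "line": LINE_H}
step_patch = np.array([[110] * 5] * 2 + [[100] * 5] + [[90] * 5] * 2)
print(classify_patch(step_patch, masks))  # -> ('step', 120.0)
```

Note that both masks sum to zero, so a homogeneous patch yields a zero response regardless of its gray level.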

- (a) Step edge configuration, low gradient values. An edge pixel located at (x,y) belongs to this class if there is no pixel labeled "line edge" in the 5x5 window centered at (x,y) and the gradient value at (x,y) is inferior to 20. There is no real justification for the threshold 20, except that a choice had to be made. Nevertheless, one simple reason could be that for edge detectors based on the gradient magnitude, the thresholds are often chosen around this value.

- (b) Step edge configuration, high gradient values. An edge pixel located at (x,y) belongs to this class if there is no pixel labeled "line edge" in the 5x5 window centered at (x,y) and the gradient value at (x,y) is superior to 20.

- (c) Line edge configuration, low gradient values. An edge pixel located at (x,y) belongs to this class if there is at least one pixel labeled "line edge" in the 5x5 window centered at (x,y) and the gradient value at (x,y) is inferior to 20. The test in the 5x5 window is justified because we consider that a line edge configuration is large and that pixels located at the beginning or the end of the transition must be classified as line edge, even if the strongest response at (x,y) is obtained with a step edge mask.

- (d) Line edge configuration, high gradient values. An edge pixel located at (x,y) belongs to this class if there is at least one pixel labeled "line edge" in the 5x5 window centered at (x,y) and the gradient value at (x,y) is superior to 20.

- (e) Shading gradient areas. While an edge pixel necessarily belongs to one and only one class of the set {a, b, c, d}, its membership in class (e) is independent. Let a be the gradient direction at (x,y), and let P1(x1,y1) and P2(x2,y2) be the points defined respectively by:

P1: x1 = x + 3 cos(a),  y1 = y + 3 sin(a)
P2: x2 = x - 3 cos(a),  y2 = y - 3 sin(a)

Let a1 be the gradient direction at (x1,y1) and a2 the gradient direction at (x2,y2). There is a shading gradient configuration at (x,y) if at least one of the two following conditions is respected:

(1)  Grad(x1,y1)/2 < Grad(x,y) < 2 Grad(x1,y1)   and   a1 ∈ [a - π/4, a + π/4]

(2)  Grad(x2,y2)/2 < Grad(x,y) < 2 Grad(x2,y2)   and   a2 ∈ [a - π/4, a + π/4]

- (f) Other edges nearby. Membership in this class does not depend on the previous classification. Let P1 and P2 be the same points defined in (e). There is a complex configuration at (x,y) if an edge pixel is present in at least one of the two 3x3 windows of the reference map centered at P1 and P2. The windows under study are relatively far from (x,y) in order to avoid looking at the same discontinuity.

4 Application

We propose to apply our method and to evaluate the edge maps obtained with the Deriche (derived from Canny's criteria [1]) and Sobel [3],[12] operators. After gradient computation, pixels with a local maximum of gradient (in the gradient direction) are marked in the edge map. Then, a hysteresis thresholding technique is used to select only the most significant edges, and edge elements smaller than six pixels are finally removed.
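The post-processing chain just described (hysteresis thresholding followed by removal of components shorter than six pixels) can be sketched as follows. The low/high thresholds in the example are hypothetical, and 8-connectivity is assumed.

```python
import numpy as np

def hysteresis(grad, low, high, min_length=6):
    """Sketch of the post-processing described above (thresholds are
    hypothetical): keep weak edge pixels only when 8-connected to a
    strong one, then drop connected components shorter than min_length."""
    strong, weak = grad >= high, grad >= low
    h, w = grad.shape
    keep = np.zeros_like(strong)
    stack = [tuple(p) for p in np.argwhere(strong)]
    while stack:                                  # grow strong seeds through weak pixels
        y, x = stack.pop()
        if keep[y, x]:
            continue
        keep[y, x] = True
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and weak[ny, nx] and not keep[ny, nx]:
                    stack.append((ny, nx))
    seen = np.zeros_like(keep)                    # remove components < min_length pixels
    for y, x in [tuple(p) for p in np.argwhere(keep)]:
        if seen[y, x]:
            continue
        comp, stack = [], [(y, x)]
        while stack:
            cy, cx = stack.pop()
            if seen[cy, cx]:
                continue
            seen[cy, cx] = True
            comp.append((cy, cx))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = cy + dy, cx + dx
                    if 0 <= ny < h and 0 <= nx < w and keep[ny, nx] and not seen[ny, nx]:
                        stack.append((ny, nx))
        if len(comp) < min_length:
            for cy, cx in comp:
                keep[cy, cx] = False
    return keep

grad = np.zeros((8, 8))
grad[2, 0:7] = 15        # weak chain of 7 pixels ...
grad[2, 3] = 30          # ... anchored by one strong pixel: kept
grad[5, 0:3] = 30        # strong but only 3 pixels long: removed
print(int(hysteresis(grad, low=10, high=20).sum()))  # -> 7
```

The example shows why hysteresis matters for the class (a)/(c) errors above: weak pixels survive only through their connection to a strong seed, so a low-gradient discontinuity with no strong anchor disappears entirely.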

4.1 Local maximum selection

There are several ways to select the pixels with a local maximum of gradient. It is important to give the details of ours, since it has a non-negligible influence on the results. Assume that G(x,y) and D(x,y) are respectively the gradient value and its orientation at pixel (x,y); then there is a local maximum at (x,y) if the following expression is respected:

2 4 19 70 90
1 5 64 93 68
3 37 92 60 28
42 85 71 42 9
80 64 34 9 4

Table 12: Gradient values for a small window with diagonal gradient directions

if (D(x,y) is horizontal within 30 deg) then G(x-1,y) < G(x,y) and G(x+1,y)
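The source text breaks off in the middle of this expression. A generic sketch of the comparison it begins is given below; this is standard directional non-maximum suppression with the direction quantized to four orientations, not necessarily the authors' exact rule.

```python
import math
import numpy as np

def local_maximum(G, D, y, x):
    """Generic sketch of directional non-maximum suppression: compare
    G(x,y) with its two neighbours along the gradient direction,
    quantized to horizontal, vertical or one of the two diagonals
    (out-of-image neighbours count as 0)."""
    h, w = G.shape
    a = D[y, x] % math.pi
    if a < math.pi / 8 or a >= 7 * math.pi / 8:
        dy, dx = 0, 1                 # roughly horizontal direction
    elif a < 3 * math.pi / 8:
        dy, dx = 1, 1                 # first diagonal
    elif a < 5 * math.pi / 8:
        dy, dx = 1, 0                 # roughly vertical
    else:
        dy, dx = 1, -1                # second diagonal
    n1 = G[y - dy, x - dx] if 0 <= y - dy < h and 0 <= x - dx < w else 0
    n2 = G[y + dy, x + dx] if 0 <= y + dy < h and 0 <= x + dx < w else 0
    # strict on one side only, so a flat plateau keeps a single pixel
    return bool(G[y, x] > n1 and G[y, x] >= n2)

G = np.array([[10.0, 30.0, 20.0]])
D = np.zeros((1, 3))                  # horizontal gradient direction
print(local_maximum(G, D, 0, 1))      # -> True
```

On diagonal configurations such as table 12, the choice of quantization sector decides which of two adjacent candidates survives, which is exactly why the authors stress that the details of their selection influence the results.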