Detection of Lines, Line Junctions and Line Terminations

F. Deschênes, D. Ziou and M.-F. Auclair-Fortier
Département de mathématiques et d'informatique
Université de Sherbrooke, Québec, Canada, J1K 2R1
email: fdeschene, ziou, [email protected]

Technical Report No. 259

Abstract

This paper describes an optimal line detector for the one-dimensional case, derived from Canny's criteria, and an efficient approach for the detection of line junctions and line terminations. The line detector is extended to the two-dimensional case by operating separately in the x and y directions. An efficient implementation using an infinite impulse response (IIR) filter is provided. This implementation has the additional advantage that increasing the filter scale affects neither temporal nor spatial complexity. The detection algorithm for junctions and terminations is divided into two steps. First, given the lines extracted from the original image, a local measure of line curvature is estimated. Two different measures of curvature were tried out: the rate of change of direction of the orientation vector along the line, and the mean of the dot products of orientation vectors within a given neighborhood. The second step involves the localization of junctions and terminations. Experimental results using several synthetic and real images demonstrate the validity of the two methods.

Keywords: Feature extraction, Line detection, Junction and termination detection, Local line curvature, IIR filter
1 Introduction

(Corresponding author. Tel.: +819-821-8000 x2859; fax: +819-821-8200)

In edge detection, the word "lines" refers to curvilinear image events in which the intensity surface forms a roof, a valley or a ridge with a narrow width. These edges result from mutual illumination, from the placement of thin objects against a background, or from roads and rivers in remotely sensed images. Most existing edge detection algorithms are generally adapted to step edges (Ziou and
Tabbone, 1998), which are the most common. However, it is also necessary to consider lines, as they are widely used in numerous applications, including the automatic updating of geographical databases from high resolution images (Auclair Fortier et al., 2000) and the extraction of anatomical features in medical imaging (Coppini et al., 1993). The intersection of several edges (e.g. steps, lines) constitutes a junction or a corner. The endpoints of an edge are commonly called terminations. Junctions and terminations represent robust information. Junctions prove to be useful in many computer vision and image-understanding tasks such as optical flow estimation (Park and Han, 1997), stereo matching (Deriche and Faugeras, 1990) and detection of road intersections in remotely sensed images (Auclair Fortier et al., 2000). Terminations are used to detect dead ends of roads in high resolution images (Auclair Fortier et al., 2000) and to fill gaps between aligned edges (Mokhatarian and Suomela, 1998; Nitzberg et al., 1993). Traditional corner detectors mainly deal with step-edge junctions (Cooper et al., 1993; Deriche and Giraudon, 1993; Kitchen and Rosenfeld, 1982; Mokhatarian and Suomela, 1998; Tabbone, 1994). These detectors make use of an implicit or explicit search for local grey-level variations that produce step edges. However, such variations differ from those that produce lines (Ziou and Tabbone, 1998; Ziou, 2000). Therefore, most existing corner detectors are not suitable for line junctions. Their response to a single line junction is not unique, as can be seen in Figures 1 and 2. Recently, Steger (1998) proposed a line detector using a Gaussian filter. The results are of good quality, but the algorithm is time consuming. We propose a faster line detector using Canny’s performance criteria with an IIR implementation. We also propose a detection method for grey-level line junctions and line terminations which is suitable for L, X, Y and T junctions.
Figure 1: Outputs of Tabbone’s detector for step edge junctions (Tabbone, 1994). a) Synthetic junctions. b) Lines. c) Road intersections. In Section 2, an overview of line detection and some existing junction detectors is presented. Section 3 is devoted to our line detector. Section 4 presents the line junction and termination detector. Experimental results are shown in Section 5.
Figure 2: Outputs of CSS detector for step edge junctions (Mokhatarian and Suomela, 1998). a) Synthetic junctions. b) Lines. c) Road intersections.
2 Related Work This section briefly reviews some existing techniques for detecting lines, junctions and terminations. Few line detectors have been proposed (Haralick, 1983; Steger, 1998; Ziou, 1991; Ziou and Tabbone, 1998). Recently, Steger (1998) proposed a line detector using a Gaussian filter. The first and second-order partial derivatives of the image smoothed by the Gaussian are computed. A line is detected at the zero-crossing of the first derivative of the image taken in the direction of the eigenvector of the Hessian that corresponds to the greatest eigenvalue. The plausibility of the line is the greatest eigenvalue. The results are of good quality, but the algorithm is time consuming. To detect junctions, Beaudet (1978) proposed a rotationally invariant operator named DET which corresponds to the determinant of the Hessian matrix. He suggested that junction detection can be done by thresholding the absolute values at the extrema of this operator. Kitchen and Rosenfeld (1982) presented four techniques for detecting grey-level junctions. These are based on the magnitude of the gradient of the gradient direction, the difference between gradient directions along the edge within a local neighborhood, the angle between most similar neighbors, and the curvature of a second-order polynomial surface fitted to square neighborhoods. Deriche and Giraudon (1993) studied the behavior of some classical detectors (Beaudet, 1978; Dreschler and Nagel, 1982; Kitchen and Rosenfeld, 1982) in the scale space using ideal L and T junction models. They showed that junctions extracted with these detectors are not localized precisely. Since the exact location of a junction corresponds to a stable zero-crossing of the Laplacian in the scale space, they proposed an approach that combines the Laplacian and DET operators.
Tabbone (1994) proposed a junction detection algorithm that uses a multi-scale representation and the elliptic extremum (local maximum or local minimum) of the Laplacian of Gaussian (LOG) operator. He showed that this elliptic extremum is always located inside the junction, and it moves in scale space on the bisector of the junction. Cooper et al. (1993) presented two junction detectors. The first uses the dissimilarity between image patches along the edge direction to detect junctions in the image edge. The second detector estimates image curvature using the second derivative of the image along the edge direction. Lacroix and Acheroy (1998) proposed a junction detection method that uses the cross product of gradient vectors to detect local change of orientation in the edge direction. Mokhatarian and Suomela (1998) proposed a method based on the curvature scale-space (CSS) technique. Given the edges extracted using Canny's edge detector, they compute curvature at the highest scale and locate junction candidates. As the scale decreases, they track junctions to improve localization. Concerning terminations, Nitzberg et al. (1993) defined an edge termination as a weak form of T junction. This means that a termination corresponds to a T junction formed by the intersection of a very short edge (e.g. approximately one pixel) and a longer one. They thus suggested that edge terminations may be extracted using existing junction detectors. Therefore, they did not propose any explicit detection method for edge terminations. All of these junction detection methods are devoted to the detection of step-edge corners, but none of them addresses the problems of line junction and line termination detection. We propose a junction and termination detector specially designed for lines. Step-edge junctions are usually located using a Laplacian operator (zero-crossings or extrema) or a curvature measure based on the gradient direction. However, the gradient direction of a line does not exist, and a line junction does not correspond to a zero-crossing of the Laplacian. Consequently, step-edge junction detectors are not suitable for line junctions and terminations.
3 An Optimal Line Detector Using Canny's performance criteria, we propose an optimal line detector which operates separately in the x and y directions. We use the IIR implementation method, which preserves the initial filter performance and is economical with regard to temporal complexity. Furthermore, increasing the filter scale affects neither temporal nor spatial complexity. It should be noted that the IIR implementation method has been widely used in the case of step edges (Deriche, 1990; Shen and Castan, 1986; Sarkar and Boyer, 1991) and in the case of line edges (Ziou, 1991; Petrou and Kolomvas, 1992). Section 3.1 is devoted to the problem of finding an optimal detector for a given edge model in one dimension, using Canny's criteria. The extension of the detector and its implementation in the two-dimensional case are presented in Sections 3.2 and 3.3. The algorithm is summarized in Section 3.4.
3.1 1D Detector Derivation
Let us consider an image I as the sum of two components: an edge profile without noise, F(x), and white noise, N(x):

I(x) = F(x) + N(x),    (1)

where F(x) = (1 + p|x|) e^{-p|x|} and p is a positive constant. The choice of this function is motivated by the fact that its first and second derivatives exist. According to Canny, a good edge detector f(x) [1] must maximize the signal-to-noise ratio, the localization, and the multiple-response criterion. In this way, he transformed the search for a good detector f into an optimization problem,

[1] In this paper, edge detection is performed only by using a filtering operation; thus the terms filter and detector as used are synonymous.
by maximizing the quantity:

C(f) = [∫_{-∞}^{+∞} F(-x) f(x) dx · ∫_{-∞}^{+∞} F(-x) f''(x) dx] / [√(∫_{-∞}^{+∞} f²(x) dx) · √(∫_{-∞}^{+∞} f''²(x) dx)].    (2)
The problem in Equation 2, or one of its variants, has been resolved by many authors for the case of the step edge (Canny, 1986; Spacek, 1986; Petrou and Kittler, 1991; Sarkar and Boyer, 1991). For the line case, we propose an analytical solution. We want to determine the filter f(x) which detects the line point at the origin by first convolving it with the input signal I(x) and then identifying the maximum of the output signal. Among the class of functions satisfying the above requirement, we want to pick the function f(x) which has the following four properties:

1) f(±∞) = 0; it must be infinitely extended. It has been shown in Ziou (1991) that an infinitely extended filter implemented by recursive filtering preserves its properties and is less sensitive to noise than the same filter implemented by masks.

2) f(x) = f(-x); it must be symmetrical, so that the line is given by the maximum of the convolved image.

3) f(0) < 0; we would like to detect bright lines.

4) ∫_{-∞}^{+∞} f(x) dx = 0; it must produce a zero response to a constant signal.

To solve the problem in Equation 2, we choose to minimize ∫_{0}^{+∞} f²(x) dx, assuming that ∫_{0}^{+∞} F(-x) f(x) dx = c1, ∫_{0}^{+∞} F(-x) f''(x) dx = c2, and ∫_{0}^{+∞} f''²(x) dx = c3, where c1, c2, and c3 are real constants. By using Lagrange multipliers, we have to minimize the integral of the following functional:

Ψ = f²(x) + λ1 F(-x) f(x) + λ2 F(-x) f''(x) + λ3 f''²(x),    (3)
where λ1, λ2, and λ3 are arbitrary constants. This functional satisfies the Euler equation given by:
2 f(x) + 2 λ3 f''''(x) = -λ1 F(-x) - λ2 F''(-x).    (4)
It has been shown in Ziou (2000) that the particular solution of this equation has the best performance compared to the homogeneous solution or the combination of both:

f(x) = α e^{-s|x|} (s|x| - 1).    (5)

The parameter α will be obtained by normalization.
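As a quick numerical sanity check of the derived filter, the sketch below evaluates Equation 5 and verifies the four design properties. The scale s and the normalization α are arbitrary here; the closed form of the normalization is not reproduced in this report excerpt, so α = 1 is an assumption made only for illustration.

```python
import numpy as np

def line_filter(x, s=1.0, alpha=1.0):
    """Equation 5: f(x) = alpha * exp(-s|x|) * (s|x| - 1).
    alpha is left as a free normalization constant (assumption)."""
    ax = np.abs(x)
    return alpha * np.exp(-s * ax) * (s * ax - 1.0)

x = np.linspace(-20.0, 20.0, 400001)
f = line_filter(x, s=1.2)

# Property 2: symmetry, f(x) = f(-x).
assert np.allclose(f, line_filter(-x, s=1.2))
# Property 3: f(0) < 0, as required for detecting bright lines.
assert line_filter(np.array([0.0]), s=1.2)[0] < 0.0
# Property 4: zero response to a constant signal (Riemann sum of the integral).
dx = x[1] - x[0]
assert abs(f.sum() * dx) < 1e-4
```

Property 1 (infinite extent) is what motivates the recursive implementation of Section 3.3, since a truncated mask would necessarily approximate the filter.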
3.2 The 2D Detector
In order to detect lines in an image, we need to extend the filter f(x), which is expressed in one dimension, to two dimensions. This is done by using two directional filters, limiting ourselves to the case of separable filters. The filter in the x direction (y respectively) is the product of f (Equation 5) in the x direction (y respectively) and g (Equation 6) in the y direction (x respectively). Usually, g(x) is defined from the normalized f(x) by g(x) = ∫_{-∞}^{x} ∫_{-∞}^{t} f(u) du dt = (1/2) (1 + s|x|) e^{-s|x|}. In order to have a constant image as the response to a constant image, we must modify this filter slightly and rewrite it as follows:

g(x) = k (1 + s|x|) e^{-s|x|},    (6)
where k is a normalization constant. As we will show below, the computation of the orientation and plausibility of the line requires the use of another filter, which operates in both the x and y directions. This filter is given by h(x) = ∫_{-∞}^{x} f(t) dt = β x e^{-sx}. In order to have a null response to a constant signal, we modify this filter slightly and rewrite it as follows:

h(x) = c x e^{-s|x|},    (7)
where c is a normalization constant. The filter in the x direction is X(x, y) = f(x) g(y). Symmetrically, the filter in the y direction is Y(x, y) = f(y) g(x). The filter in both the x and y directions is Z(x, y) = h(x) h(y). The extraction of lines requires the estimation of plausibility and orientation at each pixel of the image. Since the local maxima in the output signal correspond to the sought edges, the edge orientation can be defined as the orientation of the eigenvector of the following matrix which corresponds to the greatest eigenvalue (Steger, 1998). The plausibility is the value of that eigenvalue. The matrix is:

    | (I*X)(x, y)   (I*Z)(x, y) |
    | (I*Z)(x, y)   (I*Y)(x, y) |    (8)

The eigenvalues can be easily calculated by:

λ(x, y) = (1/2) (I*X + I*Y ± √((I*X - I*Y)² + 4 (I*Z)²)).    (9)
If the eigenvalue with the greatest absolute value is negative, the line is bright on a dark background; if it is positive, the line is dark on a light background. The normalized direction perpendicular to the line is:

ñ = (I*Z, λmax - I*X) / √((I*Z)² + (λmax - I*X)²),    (10)

where λmax is the eigenvalue with the maximum absolute value at point (x, y). A pixel (x, y) is labelled as a line pixel if λmax is a maximum in the ñ direction at that point. The non-maximum suppression algorithm (Canny, 1986) is used to detect maxima in the image λmax.
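The eigen-analysis of Equations 8-10 can be sketched as follows. As a stand-in for the paper's separable filters X, Y and Z (an assumption made only to keep the example self-contained), second-order Gaussian derivatives are used; only the eigenvalue and orientation computations follow the equations above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def line_response(image, sigma=1.5):
    """Sketch of Equations 8-10; Gaussian derivatives stand in for I*X, I*Y, I*Z."""
    IX = gaussian_filter(image, sigma, order=(0, 2))  # ~ (I*X): 2nd deriv. in x
    IY = gaussian_filter(image, sigma, order=(2, 0))  # ~ (I*Y): 2nd deriv. in y
    IZ = gaussian_filter(image, sigma, order=(1, 1))  # ~ (I*Z): mixed derivative

    # Equation 9: the two eigenvalues of the 2x2 matrix of Equation 8.
    root = np.sqrt((IX - IY) ** 2 + 4.0 * IZ ** 2)
    lam1 = 0.5 * (IX + IY + root)
    lam2 = 0.5 * (IX + IY - root)
    lam_max = np.where(np.abs(lam1) > np.abs(lam2), lam1, lam2)

    # Equation 10: unit direction perpendicular to the line.
    nx, ny = IZ, lam_max - IX
    norm = np.sqrt(nx ** 2 + ny ** 2) + 1e-12
    return lam_max, nx / norm, ny / norm

# A dark horizontal line on a bright background gives a positive plausibility,
# matching the sign convention stated in the text.
img = np.full((32, 32), 200.0)
img[16, :] = 50.0
plaus, nx, ny = line_response(img)
assert plaus[16, 16] > 0.0
```

Non-maximum suppression along ñ would then be applied to the plausibility image, exactly as in Canny's step-edge case.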
3.3 Implementation of the 2D Detector

The infinite impulse response (IIR, or recursive) filter is a specialized implementation method compared to the existing ones (i.e., FFT, "classical" convolution masks), and is only useful for certain functions, i.e., causal, stable functions. However, in some cases recursive filtering is even more computationally efficient and preserves the initial detector performance (Ziou, 1991). We are now faced with the problem of implementing 2D filters. We use the same procedure as in (Deriche, 1990; Shen and Castan, 1986; Ziou, 1991; Sarkar and Boyer, 1991; Petrou and Kolomvas, 1992). Since the filters X, Y, and Z are separable, we can reduce the 2D filter design problem to a 1D filter design problem. Let us look closely at the properties of the three component filters f, g, and h. We can write the underlying equations in a form that can be coded.
Implementation of f(x): Regarding the properties of the filter f(x) (Equation 5), the method we use for designing the IIR filter consists in digitizing f by a simple substitution and Z-transforming the resulting digital filter. Let us consider the discrete function f(n) obtained by sampling f(x). To make f(n) causal, we decompose it into two filters, f-(n) and f+(n):

f-(n) = { α(-sn - 1) e^{sn}   if n < 0
        { 0                   if n ≥ 0

and

f+(n) = { α(sn - 1) e^{-sn}   if n ≥ 0
        { 0                   if n < 0.
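The causal/anticausal split can be illustrated with a simpler first-order kernel. The actual difference-equation coefficients for f, g and h follow from their Z transforms and are not reproduced in this excerpt; the sketch below instead implements recursive convolution with exp(-s|n|), which is enough to show that the per-sample cost is a fixed number of multiply-adds, independent of the scale s.

```python
import numpy as np
from scipy.signal import lfilter

def exp_smooth_iir(signal, s=0.5):
    """Recursive convolution with exp(-s|n|) via a causal and an
    anticausal first-order pass. Illustrative stand-in for the f-, f+
    decomposition; not the paper's actual coefficients."""
    a = np.exp(-s)
    # causal pass: y[n] = x[n] + a*y[n-1]
    fwd = lfilter([1.0], [1.0, -a], signal)
    # anticausal pass, run on the reversed signal: y[n] = a*x[n+1] + a*y[n+1]
    bwd = lfilter([0.0, a], [1.0, -a], signal[::-1])[::-1]
    return fwd + bwd  # unnormalized response to exp(-s|n|)

x = np.zeros(101)
x[50] = 1.0                      # unit impulse
y = exp_smooth_iir(x, s=0.5)
n = np.arange(101) - 50
assert np.allclose(y, np.exp(-0.5 * np.abs(n)), atol=1e-6)
```

Whatever the value of s, each output sample costs the same four multiply-adds, which is the source of the scale-independent complexity claimed for the detector.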
δ(Δx, Δy) = { e^{-(Δx² + Δy²)}   if (x, y) and (x + Δx, y + Δy) are linked by pixels having non-zero plausibility
            { 0                  otherwise,

and N represents the number of points (x + Δx, y + Δy) such that δ ≠ 0. δ ensures that the influence of faraway neighbors of (x, y) on C is weaker than the influence of closer neighbors. It also ensures that only line points are taken into account, which means that line strength does not need to be considered explicitly in Equation 15. Thus C(x, y) is large for line points lying in the neighborhood of a junction. Notice that we have experimentally shown in Deschênes and Ziou (2000) that Equation 15 is superior to Equation 13. Its response to a given junction is unique and its localization is more accurate. This difference is mainly due to the imprecision of the numerical differentiation used in Equation 13, which increases the delocalization error. In Equation 15, there is no numerical differentiation. Furthermore, δ(Δx, Δy) ensures that only contiguous line points are taken into account. Consequently, in the next sections, we will only use C(x, y) from Equation 15.
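The dot-product curvature measure can be sketched as below. Equation 15 itself is not reproduced in this excerpt, so the exact combination is an assumption: each line pixel averages the distance-weighted orientation disagreement 1 - |di·dj| over neighboring line pixels, with the weight δ above (the connectivity test is simplified to "the neighbor is a line pixel").

```python
import numpy as np

def curvature_dot(orient, mask, radius=2):
    """Hedged sketch of the dot-product curvature measure.
    orient: HxWx2 array of unit orientation vectors; mask: boolean line map."""
    H, W = mask.shape
    C = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            if not mask[y, x]:
                continue
            acc, n = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if (dx == 0 and dy == 0) or not (0 <= yy < H and 0 <= xx < W):
                        continue
                    if mask[yy, xx]:
                        w = np.exp(-(dx * dx + dy * dy))   # the weight delta
                        dot = abs(np.dot(orient[y, x], orient[yy, xx]))
                        acc += w * (1.0 - dot)             # orientation disagreement
                        n += 1
            if n:
                C[y, x] = acc / n
    return C

# An L-shaped line: curvature peaks near the corner, stays zero mid-segment.
mask = np.zeros((9, 9), bool)
orient = np.zeros((9, 9, 2))
mask[4, 0:5] = True; orient[4, 0:5] = (1.0, 0.0)   # horizontal arm
mask[0:4, 4] = True; orient[0:4, 4] = (0.0, 1.0)   # vertical arm
C = curvature_dot(orient, mask)
assert C[4, 4] > C[4, 1]
```

This reproduces the behavior the text relies on: C is large only where neighboring orientation vectors disagree, i.e. near junctions and terminations.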
4.2 Localization of Line Junctions

We mentioned before that local maxima of line curvature correspond to line junctions. However, this property is not always valid in practice. First, smoothing and numerical differentiation affect the estimation of curvature in the vicinity of a junction. They also influence line location and orientation. Secondly, a line junction is formed by the intersection of several lines. Hence, the orientation of a junction point is not unique (Figure 4). However, the line detector provides only one orientation vector per pixel. Therefore, there is no guarantee that local maxima of curvature coincide with junction locations, as can be seen in Figure 5. In order to circumvent these problems, orientation vectors of junctions are extrapolated from those of surrounding line points. Since smoothing modifies orientation vectors, especially in high curvature regions such as the vicinity of a junction (Figure 5c), the orientation vectors of the immediate neighbors of a junction must not be taken into account. For this purpose, two classes of line points are created: line points having low
Figure 4: Local maximum of line curvature of a discretized T junction. Extracted T junction and orientation vectors.
curvature value (less than a global threshold tc) and those having a high curvature value (greater than or equal to tc). Let low curvature endpoints (LCE) be defined as line points having a low curvature value and belonging to the neighborhood of a high curvature point, as shown in Figure 6. Based on this definition, the junction detector extracts low curvature endpoints in order to predict junction locations. Since lines are assumed to be one pixel wide, we consider only one LCE per line within a local neighborhood. For this purpose, all non-maximum LCE within a 3 × 3 neighborhood are removed using a measure of continuity p(x, y) given by:

p(x, y) = [1 - C(x, y)] L(x, y),    (16)

where C(x, y) represents the estimated curvature and L(x, y) corresponds to the line strength. p(x, y) is maximum for LCE having the lowest curvature value and the highest line strength. Notice that in the case of Equation 13, line strength has already been considered explicitly, but considering it again does not influence the final result.

Figure 5: Example of curvature estimation influenced by smoothing. a) Smoothed junction. b) Extracted lines. c) Estimated orientation vectors. d) Estimated curvature (Equation 15). e) X junction. f) Extracted lines. g) Estimated orientation vectors. h) Estimated curvature (Equation 15).
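The continuity measure of Equation 16 and the per-neighborhood suppression of non-maximum endpoints can be sketched as follows. The assumption that C and L are normalized to [0, 1] is ours, made so that p stays in a comparable range.

```python
import numpy as np

def continuity(C, L):
    """Equation 16: p(x,y) = [1 - C(x,y)] * L(x,y); large for endpoints
    with low curvature and high line strength (C, L assumed in [0,1])."""
    return (1.0 - C) * L

def suppress_non_max_lce(lce_mask, p):
    """Keep, per 3x3 neighborhood, only the LCE with maximal continuity."""
    H, W = p.shape
    keep = np.zeros_like(lce_mask)
    for y in range(H):
        for x in range(W):
            if not lce_mask[y, x]:
                continue
            y0, y1 = max(0, y - 1), min(H, y + 2)
            x0, x1 = max(0, x - 1), min(W, x + 2)
            window = np.where(lce_mask[y0:y1, x0:x1], p[y0:y1, x0:x1], -np.inf)
            keep[y, x] = p[y, x] >= window.max()
    return keep

# Tiny example: the endpoint with lowest curvature wins its neighborhood.
C = np.array([[0.1, 0.2], [0.9, 0.1]])
L = np.ones((2, 2))
p = continuity(C, L)
keep = suppress_non_max_lce(np.ones((2, 2), dtype=bool), p)
assert keep[0, 0] and not keep[0, 1]
```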
Figure 6: Example of low curvature endpoints. a) High curvature line pixels (black pixels). b) Low curvature endpoints (black squares).

Using low curvature endpoints (LCE), the orientation vectors of every junction are recovered in order to update the curvature estimate. Starting from every LCE, high curvature neighbors in the line direction are visited. Each time a pixel (x, y) is visited, the orientation vector of the current LCE, d̃i(x, y), is assigned to it, as shown in Figures 7b and 7c.

Figure 7: Example of propagated orientation vectors and curvature update. a) Original vectors. b) Propagated vectors from the first endpoint (upper black square). c) Given the propagated vectors, the curvature estimate is updated using the orientation vector of the second endpoint (lower black square).

Then, the curvature C(x, y) of every high curvature pixel is updated according to the following rule:
C(x, y) = Co(x, y) + Σ_{i=1}^{n-1} Σ_{j=i+1}^{n} s(d̃i(x, y), d̃j(x, y)) + k,    (17)

where n is the number of orientation vectors d̃i(x, y) assigned to the pixel (x, y), Co(x, y) corresponds to the original curvature estimate, s(d̃i(x, y), d̃j(x, y)) is defined in Equation 14, and k is a constant that corresponds to the minimum curvature increase. k is used to distinguish line points that have been visited at least once from those that have not been. Such a rule ensures that maximum values of line curvature correspond to line junctions. Figure 8 shows the resulting curvature estimates for Figures 5a and 5e. Consequently, junctions can be located by extracting line points corresponding to the maximum curvature value within a local neighborhood.
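The per-pixel update of Equation 17 can be sketched as below. Equation 14's pairwise measure s is not reproduced in this excerpt, so a dissimilarity 1 - |di·dj| is used here as a stand-in; the structure of the update (pairwise sum over assigned vectors plus the minimum increase k) follows the equation.

```python
import numpy as np

def update_curvature(Co, assigned_vecs, s, k=0.1):
    """Equation 17 for one pixel: C = Co + sum over pairs s(di, dj) + k.
    assigned_vecs: orientation vectors propagated to this pixel from LCE;
    s: pairwise measure (Equation 14 in the paper, assumed here);
    k: minimum curvature increase, marking visited pixels."""
    n = len(assigned_vecs)
    total = Co
    for i in range(n):
        for j in range(i + 1, n):
            total += s(assigned_vecs[i], assigned_vecs[j])
    return total + k

dissim = lambda a, b: 1.0 - abs(float(np.dot(a, b)))  # assumed form of s

v1, v2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
# A pixel reached from two endpoints with orthogonal directions scores
# strictly higher than one reached from a single endpoint.
assert update_curvature(0.0, [v1, v2], dissim) > update_curvature(0.0, [v1], dissim)
```

A pixel visited once gains only k, while a pixel reached from several endpoints with disagreeing directions gains the full pairwise sum, so the updated maxima land on the junctions.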
4.3 Detection Method for Line Terminations

Recall that a termination, as defined by Nitzberg et al. (1993), corresponds to a weak form of T junction. This means that it is a T junction formed by the intersection of a very short edge (e.g.
approximately one pixel) and a longer one. The response of a line detector to a termination, as shown in Figure 9, confirms this definition. In this figure, the real termination is on the right. Due to smoothing, the extracted line strength is a bit lower at the termination location (Figure 9b). However, a change of orientation is detected (Figure 9c). Based on this property, line terminations may be located using the detection method described in Sections 4.1 and 4.2. First, the change of direction of the orientation vectors within the local neighborhood of a line termination ensures that the curvature estimate is large (Equation 15). Figure 10a gives an example of estimated curvature. Secondly, a given termination has only one low curvature endpoint. The orientation vectors are thus propagated in a unique direction, which is the line direction. Consequently, the line point with the maximum curvature value in the line direction corresponds to a termination, as shown in Figure 10b.

Figure 8: Resulting curvature. a) For Figure 5a. b) For Figure 5e.
Figure 9: Response of a line detector to a line termination. a) Original line. The real termination is on the right. b) Extracted line. c) Estimated orientation vectors.
4.4 Summary of the Algorithm

Let us assume that the orientation, plausibility and location of lines have been extracted from the original image using the line detector presented in Section 3. The implemented algorithm is straightforward and is divided into two main steps:

1. Estimation of local line curvature using either Equation 13 or Equation 15.

2. Localization of line junctions and line terminations: Given the curvature estimate (C) from step 1, additional processing is required in order to localize junctions and terminations.
Figure 10: Example of curvature estimation at termination location. a) Local curvature estimate. b) Updated curvature estimate.

- Extraction of low curvature endpoints: Line points having low curvature values (< tc) and belonging to the neighborhood of a high curvature point are extracted (Section 4.2). Since lines are expected to be one pixel wide, only one low curvature endpoint may exist within a local neighborhood. If this is not the case, suppression of non-maximum endpoints is applied within a 3 × 3 neighborhood using the measure of continuity given by Equation 16.
- Updating of curvature estimate: Starting from every low curvature endpoint, orientation vectors are propagated to contiguous high curvature points in the line direction, as described in Section 4.2. Simultaneously, the curvature estimate is updated using Equation 17. - Extraction of local maxima: Local maxima of the updated curvature coincide with junction and termination locations. Line points having the maximum curvature value within a given neighborhood are extracted.
5 Experimental Results and Discussion

5.1 Line Detector

We have performed experiments with our detector on various grey-level images. To demonstrate the benefits of our detector, we will also show the lines obtained by the second partial derivatives of the Gaussian, as used by Steger. The Gaussian and its derivatives are implemented by using convolution masks. Without approximation, they cannot be implemented by difference equations with constant coefficients. Figure 11a shows an original image with 256 × 256 pixels and 256 grey levels. The scene contains polyhedral objects placed against a grid background. This image contains lines corresponding to the grid and to the mutual illumination. Figures 11b and 11c show the lines detected by our optimal line detector at the scales s = 1.6 and 0.7. There has been no thresholding to suppress false lines. We have also extracted the lines from a satellite image with 331 × 277 pixels and 256 grey levels, shown in Figure 12a, at s = 0.9 and 0.4 (Figures 12b and 12c).
Figure 11: Polyhedral objects. a) Original image (256 × 256). b) and c) Lines produced by our detector at two scales (s = 1.6 and 0.7, resp.).
The lines produced by the Gaussian filter at σ = 1.33 and 3.33 are shown in Figures 12d and 12e. Note that s ≃ 1.41/σ. At the higher scale (σ = 3.33), the Gaussian detector misses part of the road. Figure 13 presents the run time (measured on a Sparc 10) for both the Gaussian detector and ours, as a function of the scale. We used the satellite image in Figure 12. As shown, the resulting detector is faster than the Gaussian used by Steger. The temporal and spatial complexity of our algorithm is not affected by the scale. For example, when the Gaussian is used at the large scale σ = 3, our detector is 33 times faster; it is 5 times faster at σ = 1.
5.2 Line Junction and Termination Detector

The detection method for line junctions and line terminations described in this paper was tested using several synthetic and real images containing line terminations and L, T, Y, X line junctions. Lines were first extracted from the original images using the algorithm presented in Section 3. During the experimentation of the junction detector, it was shown that this line detector determines line location accurately and can be scaled to lines of arbitrary width. The following results were obtained with four different grey-level images. First, a synthetic image composed of L, T, Y and X junctions was created. In order to make it more realistic, this image was first smoothed with a Gaussian filter (standard deviation = 1.0) and significant Gaussian noise (stdev = 6.0) was then added. Figure 14 shows the results of the algorithm, including intermediate steps. The line detector parameters are the scale s = 1.1 and the threshold t = 36.0. The values of these parameters are selected in order to reduce noise and remove low strength lines before computing local line curvature. The curvature estimate (Equation 15) was computed within a 5 × 5 neighborhood (M = 2) and thresholded using tc = 0.25. As shown in Figure 14e, all line junctions and line terminations were localized precisely using the curvature estimate from Equation 15. Figure 15 shows the results obtained with the real image of polyhedral objects (Figure 11a). The line detector parameters used are s = 1.1 and t = 36.0. The junction detector parameters used are M = 2 and tc = 0.25. As shown in this figure, all line junctions within the grid have been extracted. Results obtained from remotely sensed images containing road intersections and dead ends are shown in Figure 16 (for the line detector: s = 0.35, t = 55.0; for Equation 15: M = 3, tc = 0.25) and
Figure 12: Satellite image, courtesy of CTI, Geomatics Canada. a) Original image. b) and c) Lines produced by our detector at two scales (s = 0.9 and 0.4, resp.). d) and e) Lines produced by the Gaussian detector at two scales (σ = 1.33 and 3.33, resp.).
Figure 13: Run time, expressed in seconds and measured on Sparc 10, of the Gaussian detector and our detector as a function of the scale. Horizontal curve (thick) corresponds to our detector.
Error type                    SNR=10   SNR=5   SNR=3   SNR=2
Omission error                0%       0%      6%      0%
Localization error (pixels)   0.28     0.44    0.72    1.22
Multiple-response error       0%       0%      0%      0%
False positives               0%       6%      29%     68%

Table 1: Detection performance and robustness to noise for different signal-to-noise ratios (SNR).
Figure 17 (for the line detector: s = 0.6, t = 52.0; for Equation 15: M = 2 and tc = 0.5). As shown in these figures, all road intersections and dead ends have been localized. The detection performance of the algorithm as well as its robustness to noise were evaluated. In order to do so, we used the synthetic image in Figure 14a and added to it different amounts of noise such that the signal-to-noise ratio (SNR = contrast/σnoise) ∈ [2, 10]. The detector parameters are set to s = 0.5, t = 50, tc = 0.14 and M = 3. Overall, the omission and multiple-response errors are both at most 6%, and the localization error varies from 0.28 pixels (SNR=10) to 1.22 pixels (SNR=2). Concerning sensitivity, there are less than 6% false positives when SNR = 5, 29% when SNR = 3 and 68% when SNR = 2. Recall that these error values combine the errors accumulated from both the line extraction process and the junction detection. These results are summarized in Table 1. The results above, obtained with four different grey-level images, show that the detection method described in this paper localizes line junctions and terminations accurately.
5.3 Effect of parameters Let us now examine the effect of the line detector parameters. As mentioned previously, s and t represent the scale and the line plausibility threshold, respectively. Decreasing s reduces noise, as can be seen in Figure 18. However, experiments have shown that it also causes real lines to be flattened out. Furthermore, smoothing influences the estimation of orientation vectors. The changes of direction are smoothed. The estimate of local curvature may thus decrease as s decreases. In order to circumvent this problem, smaller values for t c have to be selected. Varying the junction detector parameter M may influence the accuracy of the results. According to Equation 15, a larger value for M ensures that local curvature is not null at junction locations even if the change of direction of the orientation vectors of immediate neighbors is smooth. As shown in Figure 19, increasing M may avoid multiple responses to a single junction. However, using large values for M may lead to additional problems. First, it may cause contiguous junctions to be merged together. Also, it tends to decrease the mean curvature at junction locations, as can be seen in Figure 20. Consequently, the curvature threshold t c must be set to a smaller value in order to avoid removing real junctions and terminations.
Figure 14: Synthetic image composed of L, T, Y and X junctions. a) Original image. b) Thresholded line plausibility (s = 1.1 and t = 36). c) High curvature points (in black) given by Equation 15. d) Extracted low curvature endpoints based on curvature estimation using Equation 15. e) Extracted junctions and terminations based on Equation 15 (M = 2 and t_c = 0.25).

Figure 15: Real image containing lines. a) Thresholded line plausibility (s = 1.1 and t = 36) of Figure 11. b) Extracted junctions and terminations based on Equation 15 (M = 2 and t_c = 0.25).
6 Conclusion

Line information is completely different from the information localized by step edge detectors. It represents different aspects of the visual world. The response of a step edge detector to a single
Figure 16: Real image containing road intersections and dead ends. Courtesy of CTI, Geomatics Canada. a) Original image. b) Thresholded line plausibility (s = 0.35 and t = 55). c) Extracted junctions and terminations based on Equation 15 (M = 3 and t_c = 0.25).

Figure 17: Real image containing road intersections and dead ends. a) Original image. b) Thresholded line plausibility (s = 0.6 and t = 52). c) Extracted junctions and terminations based on Equation 15 (M = 2 and t_c = 0.5).
line is represented by two parallel edges close together, while the line detector described in this paper produces only a single edge. Junctions and terminations of lines are different from step edge junctions and terminations. The response of a step edge junction detector to a single line junction is not unique, as shown in Figures 1 and 2. Consequently, step edge corner detectors are not suitable for line junctions. Using Canny's criteria, we have derived an optimal line detector which, when convolved with a grey-level image, gives a high value at the location of a line. We have described a two-dimensional detector which is separable and implemented recursively to reduce the computational cost. The line plausibility and orientation have been defined. Our detector is faster than the Gaussian one used by Steger (1998). The junction detection method described in this paper is suitable for L, X, Y and T line junctions as well as line terminations. This method is divided into two steps. First, given the lines extracted from the original image with the line detector presented above, a local measure of
Figure 18: Extracted lines from a noisy image. a) Noisy image. b), c) and d) Detected lines for s = 1.1, s = 0.5 and s = 0.3, respectively.
Figure 19: Example of multiple response improvement. a) Smoothed L junction. b) Extracted line. c) Estimated orientation vectors. d) Extracted junctions and terminations using M = 1. e) M = 2. f) M = 3.

curvature is computed. For this purpose, two different measures of curvature were developed: the rate of change of direction of the orientation vector along the line (Equation 13) and the mean of the dot products of orientation vectors within a given neighborhood (Equation 15). Then, since junction and termination locations do not necessarily correspond to local maxima of curvature,
Figure 20: The influence of M on curvature estimation. Mean of the local line curvature C as a function of M. Dashed lines represent a confidence interval corresponding to C plus or minus one standard deviation.

additional processing based on low curvature endpoints is done to localize junctions and terminations, as defined in Section 4.2. The performance of the line junction and termination detector was analyzed. We have shown that this detector provides accurate results as long as real lines are not flattened out by smoothing. Further, the influence of the detector parameters on localization error and curvature estimation was evaluated. This junction detector has been integrated into systems that extract road networks in remotely sensed images (Auclair Fortier et al., 2000). The two detectors have been implemented in C++. The sources are available on request.
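For comparison with the neighborhood measure of Equation 15, the first measure (Equation 13, the rate of change of direction of the orientation vector along the line) also admits a small sketch. The discrete angle-difference form below is an assumption, since the exact equation appears earlier in the report; it is meant only to show the quantity being thresholded.

```python
import math

def direction_change(orientations, i):
    """Absolute change in orientation angle between the two neighbors of
    position i, wrapped into [0, pi]: a discrete rate of change of direction."""
    a = math.atan2(orientations[i - 1][1], orientations[i - 1][0])
    b = math.atan2(orientations[i + 1][1], orientations[i + 1][0])
    d = abs(b - a) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

# An L junction: straight run, then a 90-degree turn.
line = [(1.0, 0.0)] * 5 + [(0.0, 1.0)] * 5
print(direction_change(line, 2))  # straight part: 0.0
print(direction_change(line, 4))  # spanning the turn: pi/2
```

Unlike the mean-dot-product measure, this form looks only at immediate neighbors, which is why it responds sharply at crisp turns but can miss junctions whose direction change has been spread out by smoothing.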
References

Auclair Fortier, M. F., Ziou, D., Armenakis, C., Wang, S., 2000. Automated Correction and Updating of Road Databases from High-Resolution Imagery. Accepted for publication in Canadian Journal of Remote Sensing.

Beaudet, P. R., 1978. Rotationally Invariant Image Operators. In: Proceedings of the International Joint Conference on Pattern Recognition.

Canny, J., 1986. A Computational Approach to Edge Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 8 (6), 679–698.

Cooper, J., Venkatesh, S., Kitchen, L., August 1993. Early Jump-Out Corner Detectors. IEEE Transactions on Pattern Analysis and Machine Intelligence 15 (8), 823–828.

Coppini, C., Demi, M., Poli, R., Valli, G., February 1993. An Artificial Vision System for X-Ray Images of Human Coronary Trees. IEEE Transactions on Pattern Analysis and Machine Intelligence 15 (2), 156–162.
Deriche, R., 1990. Fast Algorithms for Low-Level Vision. IEEE Transactions on Pattern Analysis and Machine Intelligence 12 (1), 78–87.

Deriche, R., Faugeras, O., June 1990. 2-D Curve Matching Using High Curvature Points: Application to Stereo. In: Proceedings of the 10th International Conference on Pattern Recognition.

Deriche, R., Giraudon, G., 1993. A Computational Approach for Corner and Vertex Detection. International Journal of Computer Vision 10 (2), 101–124.

Deschênes, F., Ziou, D., 2000. Detection of Line Junctions and Line Terminations Using Curvilinear Features. Pattern Recognition Letters 21 (6-7), 637–649.

Dreschler, L., Nagel, H., 1982. On the Selection of Critical Points and Local Curvature Extrema of Region Boundaries for Interframe Matching. In: Proceedings of the International Conference on Pattern Recognition.

Haralick, R. M., 1983. Ridges and Valleys on Digital Images. Computer Vision, Graphics, and Image Processing 22, 28–38.

Kitchen, L., Rosenfeld, A., December 1982. Gray-level Corner Detection. Pattern Recognition Letters 1, 95–102.

Lacroix, V., Acheroy, M., 1998. Feature Extraction Using the Constrained Gradient. ISPRS Journal of Photogrammetry and Remote Sensing 53, 85–94.

Mokhtarian, F., Suomela, R., December 1998. Robust Image Corner Detection Through Curvature Scale Space. IEEE Transactions on Pattern Analysis and Machine Intelligence 20 (12), 1376–1381.

Nitzberg, M., Mumford, D., Shiota, T., 1993. Filtering, Segmentation and Depth. Lecture Notes in Computer Science 662.

Park, J., Han, J., 1997. Estimating Optical Flow by Tracking Contours. Pattern Recognition Letters 18 (7), 641–648.

Petrou, M., Kittler, J., 1991. Optimal Edge Detector for Ramp Edges. IEEE Transactions on Pattern Analysis and Machine Intelligence 13 (5), 483–491.

Petrou, M., Kolomvas, A., 1992. The Recursive Implementation of the Optimal Filter for the Detection of Roof Edges and Thin Lines. In: Vandewalle, J., et al. (Eds.), Signal Processing VI: Theories and Applications.
Sarkar, S., Boyer, K., 1991. On Optimal Infinite Impulse Response Edge Detection Filters. IEEE Transactions on Pattern Analysis and Machine Intelligence 13 (11), 1154–1171.

Shen, J., Castan, S., 1986. An Optimal Linear Operator for Edge Detection. In: IEEE Computer Vision and Pattern Recognition, 109–114.
Spacek, L., 1986. Edge Detection and Motion Detection. Image and Vision Computing 4, 43–56.

Steger, C., 1998. An Unbiased Detector of Curvilinear Structures. IEEE Transactions on Pattern Analysis and Machine Intelligence 20 (2), 113–125.

Tabbone, S., 1994. Detecting Junctions Using Properties of the Laplacian of Gaussian Detector. In: Proceedings of the 12th International Conference on Pattern Recognition.

Ziou, D., 1991. Line Detection Using an Optimal IIR Filter. Pattern Recognition 24 (6), 465–478.

Ziou, D., 2000. Optimal Line Detector. In: International Conference on Pattern Recognition. Vol. 1.

Ziou, D., Tabbone, S., 1998. Edge Detection Techniques: An Overview. International Journal of Pattern Recognition and Image Analysis 8 (4), 537–559.
List of Tables

1. Detection performance for the line junction detector

List of Figures

1. Tabbone's detector for step edge junctions
2. CSS detector for step edge junctions
3. Ex. of theoretical line curvature at junction locations
4. Local maximum of line curvature of a discretized T junction
5. Ex. of curvature estimation influenced by smoothing
6. Example of low curvature endpoints
7. Ex. of propagated orientation vectors and curvature update
8. Curvatures
9. Response of a line detector to a line termination
10. Example of curvature estimation at termination location
11. Polyhedral objects
12. Satellite image
13. Run time for line detector
14. Synthetic image composed of L, T, Y and X junctions
15. Real image containing lines
16. Real image containing road intersections and dead ends
17. Real image containing road intersections and dead ends
18. Extracted lines from a noisy image
19. Example of multiple response improvement
20. Influence of M on curvature estimation