EDGE DETECTION AND SEGMENTATION OF SAR IMAGES IN HOMOGENEOUS REGIONS

A. LOPES, R. FJØRTOFT AND D. DUCROT

Centre d'Études Spatiales de la Biosphère (CESBIO), UMR 5639 CNES/CNRS/UPS, 18 avenue Édouard Belin, bpi 2801, 31401 Toulouse cedex 4, France. E-mail: [email protected] ; [email protected] ; [email protected]

P. MARTHON AND C. LEMARÉCHAL

Laboratoire d'Informatique et de Mathématiques Appliquées (LIMA), École Nationale Supérieure d'Électrotechnique, d'Électronique, d'Informatique et d'Hydraulique de Toulouse (ENSEEIHT), Institut de Recherche en Informatique de Toulouse (IRIT), UMR 5505 UPS/INP/CNRS, 2 rue Camichel, BP 7122, 31071 Toulouse cedex 7, France. E-mail: [email protected] ; [email protected]

Edge detection and segmentation are fundamental issues in image analysis. Due to the presence of speckle, which is generally modeled as a strong, multiplicative noise, edge detection in synthetic aperture radar (SAR) images is extremely difficult, and edge detectors developed for optical images are inefficient. Several robust operators have been developed specifically for SAR images, based on the restrictive assumptions of isolated step edges and detected images with uncorrelated speckle. As SAR images at full spatial resolution are intrinsically complex, we here present detectors for the complex correlated speckle field observed in single look complex (SLC) images. By means of the spatial whitening filter (SWF), we can thus obtain more accurate estimates of the local mean radar reflectivities, and consequently, better edge detection and edge localization than with operators known from the literature. We propose two different approaches to edge detection in the presence of multiple contours, and describe how multidirectional analyzing windows are used to obtain an edge strength map. Robust extraction of closed, skeleton boundaries from the edge strength map is obtained by the watershed algorithm in combination with thresholding of the basin dynamics or region merging. Estimators of the edge position are developed, and two-dimensional implementations of the edge localization stage are proposed, based on Gibbs random fields and active contours. Finally, we show that the segmentation scheme facilitates contextual classification of agricultural parcels using multitemporal spaceborne SAR data.

1 Introduction

Segmentation is defined as the decomposition of an image into regions, i.e., spatially connected, non-overlapping sets of pixels sharing a certain property. A region may, for example, be characterized by constant reflectivity or texture. Region-based segmentation schemes, such as histogram thresholding and split-and-merge algorithms, try to define regions directly by their content, whereas edge-based methods try to identify the transitions between different regions. In images with no texture, an edge can be defined as an abrupt change in reflectivity between adjacent homogeneous regions. In the case of optical images, an edge is usually defined as a local maximum of the gradient magnitude in the gradient direction, or equivalently, as a zero-crossing of the second derivative in the direction of the gradient. Due to the presence of additive noise, smoothing is necessary prior to differentiation, as differential operators are sensitive to noise. The smoothing and differentiation operations are

merged and implemented by two-dimensional filters. Gradient-based edge detection basically consists in calculating the difference of the local radiometric means on opposite sides of the central pixel. This is done for every pixel position in the vertical and horizontal directions. Finally, local maxima of the gradient magnitude image are extracted. In synthetic aperture radar (SAR) images, a particular kind of granularity known as speckle can be observed, even in zones of constant reflectivity. Speckle is a consequence of the coherent illumination and the interferences between the waves scattered by numerous randomly located point targets within each resolution cell covering a rough surface. Speckle cannot be considered as an additive noise. It is generally modeled as a strong, multiplicative noise with Gamma distributed intensity. 1 Due to the multiplicative nature of speckle, the usual differential edge detectors are inefficient when applied to SAR images. In particular, the false alarm rate varies with the underlying reflectivity. It is well known that a logarithmic transformation of the data turns speckle into an additive noise, in which case the differential operators have constant false alarm rate (CFAR). However, this approach is shown to be suboptimal in terms of noise reduction. Several CFAR edge detectors have therefore been developed specifically for SAR intensity images with Gamma distributed speckle. Edge detectors have also been proposed for Weibull distributed multiplicative noise, 2 but these operators will not be presented here. Among the detectors developed for Gamma distributed speckle, the generalized likelihood ratio (GLR) 3,4 and the normalized ratio (NR) 5,6 operators are the most well-known examples. The GLR is an approximation of the likelihood ratio (LR), which realizes the optimal Neyman-Pearson simple hypothesis test. It should be noted that the LR only applies to detection problems where all statistical parameters are known a priori. In our case, where the local radar reflectivities are unknown and must be estimated, we can only use the GLR, which is optimal in a more restricted sense. 7 The GLR for an arbitrary edge direction derived by Frost et al. 3 actually yields poor performance as compared to the multidirectional NR detector. 5 However, the unidirectional GLR derived by Oliver et al. 4 has been shown to outperform all other unidirectional edge detectors proposed for SAR intensity images with uncorrelated, Gamma distributed speckle. The highest detection rate is obtained when the analyzing window is split in two equally sized parts, in which case the GLR and NR performances coincide. 4 It has been shown that these parametric tests yield better edge detection than non-parametric tests. 8,9 The NR can be used in conjunction with a heterogeneity measure, such as the variation coefficient, in order to detect edges, lines and isolated point targets, and thereby improve adaptive speckle filters. 10 The NR and GLR operators proposed in the literature work on detected (intensity) data, and the speckle is supposed to be Gamma distributed and spatially uncorrelated. They rely on the maximum likelihood (ML) estimator of the local mean reflectivity, which is the arithmetic mean intensity (AMI) in this case. SAR data generally has correlated speckle, in which case an underlying hypothesis of the NR and GLR criteria described above is violated. As a consequence, the statistics of the operators will not be exact, and the performance becomes suboptimal.
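As a simple illustration of the ratio principle underlying the NR and GLR detectors (as opposed to a difference-based gradient), the following sketch computes the normalized ratio of the arithmetic mean intensities of two half-windows on a one-dimensional intensity profile. The window geometry and the speckle simulation are illustrative assumptions of this sketch, not the configuration used by the authors.

```python
import numpy as np

def normalized_ratio(intensity_line, center, half_width):
    """Normalized ratio of the arithmetic mean intensities of the two
    half-windows on either side of `center` (a CFAR edge strength)."""
    left = intensity_line[center - half_width:center].mean()
    right = intensity_line[center + 1:center + 1 + half_width].mean()
    return max(left / right, right / left)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    reflectivity = np.where(np.arange(60) < 30, 1.0, 2.0)      # 3 dB step edge
    profile = reflectivity * rng.exponential(1.0, 60)          # single-look speckle
    strengths = [normalized_ratio(profile, c, 8) for c in range(8, 52)]
    print("strongest response at index:", 8 + int(np.argmax(strengths)))
```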
We shall here develop similar criteria for edge detection in single look complex (SLC)

Figure 1. One-dimensional (a) monoedge model and (b) multiedge model.

images, which give optimal results even when the speckle is spatially correlated. The theoretical study in Sec. 2 is restricted to the monoedge case, illustrated in Fig. 1 (a), where we suppose that there is never more than one edge within the analyzing window. A certain edge direction is also assumed. A major drawback of the GLR and NR operators is that they are optimal only in the unidirectional monoedge case. Moreover, simple thresholding of the edge strength maps produced by such operators typically yields isolated edge segments that are several pixels wide and do not define any segmentation of the image. In Sec. 3 we propose two methods for multiedge detection. The first one, developed for detected data and uncorrelated speckle, uses a linear filter which is optimal in the minimum mean square error (MMSE) sense under a stochastic multiedge model, which is illustrated in Fig. 1 (b). It computes a normalized ratio of exponentially weighted averages (ROEWA) of the local intensities on opposite sides of the central pixel. The second approach is a statistical multiresolution method, based on the NR 5,11 or GLR 12 computed on a series of window sizes. The generalization of the edge detectors to the multidirectional case is discussed in Sec. 4.1. To obtain thinner edges and to determine the edge positions more precisely, Bovik combined the Laplacian of Gaussian (LoG) operator with the NR. 6 Simple morphological operators have also been used to thin the edges obtained by plain thresholding, but they do not produce closed edges. 5,8 In Sec. 4.2, this problem is solved with the watershed algorithm. However, a modified and more robust version of the algorithm must be used in order to avoid a strong over-segmentation. Another way of reducing the over-segmentation is to merge adjacent regions that are radiometrically close. The optimal criterion for region merging in complex and detected SAR images is given in Sec. 4.3. The extracted edges are not always in the correct positions, mainly due to local speckle patterns or to the use of big analyzing windows which impose regular edge shapes. In the unidirectional case, Oliver et al. 4 propose a two-stage algorithm, which first detects step edges using the GLR operator, and then determines the maximum likelihood (ML) edge position. In fact, the same radiometric criterion can be used in both stages, but the window configuration is different: a scanning-window central-edge (SWCE) configuration is used for detection, whereas a fixed-window scanning-edge (FWSE) configuration is used to improve the edge positions.

Figure 2. 7×7 (a) SWCE configuration and (b) FWSE configuration for vertical edges.

In Sec. 5, we present ML and suboptimal criteria for complex SAR images with correlated speckle, and we show that regularized, two-dimensional edge localization can be realized with Gibbs random field techniques or active contours (snakes). The different stages of the segmentation scheme are illustrated on a simulated SAR image in Sec. 6. Finally, segmentation followed by regionwise classification of a multitemporal series of ERS images is compared to contextual classification with a sliding window in Sec. 7.

2 Unidirectional Detection of Isolated Step Edges

In this section, we study the edge detection problem under the hypothesis that only one edge is present within the analyzing window, as illustrated in Fig. 1 (a). We also assume a certain edge direction. Detection in the presence of multiple edges and without a priori knowledge about the edge direction will be addressed in Sec. 3.

2.1 Vectorial Probability Distributions

Let us consider a window centered on a given pixel in a SLC image. The window is split in two parts, containing N1 and N2 pixels, respectively. Let X1 and X2 be the complex signal vectors corresponding to the two half-windows, and let X0 be the signal vector containing the N0 pixels of the entire window. If the speckle is fully developed, the probability density functions (PDFs) of the different signal vectors are circular complex Gaussian distributions: 13,10

p(X_i) = \frac{1}{\pi^{N_i} |C_{X_i}|} \exp\left( -X_i^\dagger C_{X_i}^{-1} X_i \right), \quad i = 0, 1, 2,    (1)

where C_{X_i} is the N_i x N_i complex spatial covariance matrix corresponding to signal vector X_i, i = 0, 1, 2, and the dagger denotes the conjugate transpose. If we suppose that the underlying reflectivity R_i is constant for each of the signal vectors X_i (homogeneous regions), the multiplicative speckle model allows us to express the covariance matrix of signal vector X_i as 10

C_{X_i} = R_i \, C_{S_i},    (2)

where C_{S_i} represents the spatial covariance due to speckle.

The spatial covariance matrix C_S of the SLC speckle only depends on sensor and SAR processor parameters. In principle, it should therefore be possible to obtain exact values from the data provider. If C_S is unknown, it can be estimated on any part of the SLC image where the speckle is fully developed, except where the mean reflectivity is so low that the thermal noise becomes dominant. The underlying reflectivity does not necessarily need to be constant. 14 If ground range images are used, however, the correlations are not exactly the same in near range and far range. This can be taken into account by using a slowly varying matrix when processing over the full swath. To produce the matrices C_{S_i}, i = 0, 1, 2, we simply rearrange the terms of C_S in accordance with the corresponding signal vectors.

2.2 ML Estimator of the Reflectivity

Let us now consider the estimation of the unknown local reflectivity R_i. From Eqs. (1) and (2) it can easily be shown that the ML estimator of reflectivity for SLC images is the spatial whitening filter (SWF), which is given by

\hat{R}_i = \frac{1}{N_i} X_i^\dagger C_{S_i}^{-1} X_i.    (3)

Novak and Burl 13 used a whitening filter (WF) to obtain an intensity image with minimal speckle variance from polarimetric data. A combined spatial and polarimetric WF was proposed by Larson et al. 15 for improved target detection. Lopes et al. 16,17 showed that the WF is a ML estimator of texture. Taking the AMI of N pixels does not reduce the variance by a factor N, as in the case of uncorrelated speckle, but by a factor N' < N, and the computed mean values are only approximately Gamma distributed. 18 However, if the data is available in SLC format, the optimal variance reduction factor N can be attained using the SWF. Moreover, the SWF output is truly Gamma distributed. If the covariance matrix C_{S_i} in Eq. (3) is not perfectly known, we have to use an estimated matrix \hat{C}_{S_i}. The expectation of the estimated reflectivity R̂_i computed by the SWF is then given by 17

E[\hat{R}_i] = \frac{1}{N_i} \, \mathrm{trace}\left( \hat{C}_{S_i}^{-1} C_{S_i} \right) R_i,    (4)

which means that the SWF is an unbiased estimator only when \hat{C}_{S_i} = C_{S_i}. Hence, any inaccuracy in \hat{C}_{S_i} will introduce a bias on R̂_i. This bias may be observed when using estimated covariance matrices to compute the SWF on very large windows, especially when the bandwidth of the complex signal is significantly reduced. 12 The number of multiplications per pixel for the SWF is about N^2 + N, so the computational cost becomes considerable for very large windows. A practical solution to both problems is to calculate the SWF on maximally overlapping smaller windows within the big window, and then average the results in intensity. The computational complexity of this hybrid filter is only slightly higher than that of the SWF for the small window. If the small window is big enough to contain all pixels with which the central pixel is strongly correlated, the performance loss will be modest. For the hybrid filters the bias will also be negligible.

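The sketch below illustrates how the SWF estimate of Eq. (3) and the hybrid variant (intensity average of overlapping small-window SWF outputs) could be computed. The exponential covariance model and the simulation parameters are assumptions of this illustration, not the authors' implementation or ERS values.

```python
import numpy as np

def swf_reflectivity(x, C_S_inv):
    """ML reflectivity estimate of Eq. (3): R_hat = (1/N) x^H C_S^{-1} x."""
    N = x.size
    return np.real(np.vdot(x, C_S_inv @ x)) / N   # vdot conjugates its first argument

def hybrid_reflectivity(window, C_S_small_inv, k=3):
    """Hybrid filter: average in intensity the SWF outputs of all k x k
    sub-windows inside the larger analyzing window (row-major pixel order
    assumed for C_S_small_inv)."""
    rows, cols = window.shape
    estimates = []
    for r in range(rows - k + 1):
        for c in range(cols - k + 1):
            sub = window[r:r + k, c:c + k].ravel()
            estimates.append(swf_reflectivity(sub, C_S_small_inv))
    return float(np.mean(estimates))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k = 3
    # Illustrative speckle covariance for a k x k neighbourhood: exponential
    # decay with the inter-pixel distance (an assumption, not ERS values).
    coords = np.array([(i, j) for i in range(k) for j in range(k)])
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C_S = 0.5 ** d
    # Simulate correlated single-look complex speckle with reflectivity R = 2.
    L_chol = np.linalg.cholesky(C_S)
    z = (rng.standard_normal(k * k) + 1j * rng.standard_normal(k * k)) / np.sqrt(2)
    x = np.sqrt(2.0) * (L_chol @ z)
    print("SWF estimate:", swf_reflectivity(x, np.linalg.inv(C_S)))
```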

[Plot: equivalent number of independent looks as a function of the number of pixels in the analyzing window, for the SWF, the AMI, and the hybrid filters (mean of 3×3 and of 7×7 SWF outputs).]

Figure 3. Speckle reduction capacity of the SWF, the AMI and hybrid filters, computed for speckle correlation coefficients close to those of ERS SLC images.

The performances of the SWF, the AMI and hybrid filters, computed on simulated complex speckle with correlation coefficients close to those of ERS SLC images, are illustrated in Fig. 3. The speckle reduction capacity is given in terms of the equivalent number of independent looks, which is proportional to the inverse of the normalized speckle intensity variance. 1 As predicted, the speckle variance is reduced by a factor N when the SWF is computed on a sliding window covering N pixels of the simulated SLC image, whereas the AMI applied to the corresponding single-look intensity image reduces the speckle variance by a factor N', which is here about 60% lower than N, due to the speckle correlation. Hybrid filters, which require far less computation than the SWF, yield intermediate results. Averaging the results of the 3×3 SWF doubles the speckle reduction compared to the AMI. Starting from the result of the 7×7 SWF, we obtain a speckle reduction factor that is only slightly below the optimum, and the computational complexity of the hybrid filter is still acceptable.

2.3 Neyman-Pearson Test

We want to test the hypothesis H1, saying that X1 and X2 are separated by an edge and cover homogeneous regions with different reflectivities R1 and R2, against the null hypothesis H0, which says that X0 corresponds to a zone of constant reflectivity R0. Hence, the likelihood ratio (LR) for edge detection is

\Lambda = \frac{p(X_0 | H_1)}{p(X_0 | H_0)}.    (5)

The null hypothesis H0 is rejected if Λ exceeds a threshold t_Λ.

Obviously, p(X_0 | H_0) = p(X_0 | R_0). If X1 and X2 are independent, p(X_0 | H_1) = p(X_1 | R_1) p(X_2 | R_2), and Eq. (5) becomes

\Lambda = \frac{p(X_1 | R_1) \, p(X_2 | R_2)}{p(X_0 | R_0)}.    (6)

As the speckle is correlated, X1 and X2 must be separated spatially by a distance which is greater than the correlation length, in order to be independent. For most sensors, the speckle correlation becomes insignificant for lags of more than one or two pixels. This requirement is therefore not very restrictive in the case of edge detection with the SWCE approach, where a window configuration like the one shown in Fig. 2 (a) is used.

2.4 Likelihood Ratio Operator

If the pixels in the band which assures the independence between X1 and X2 are excluded from X0, N0 = N1 + N2, and it can easily be shown that |C_{S_0}| = |C_{S_1}| |C_{S_2}|. Using Eq. (2), and in particular the fact that |R_i C_{S_i}| = R_i^{N_i} |C_{S_i}|, we may rewrite Eq. (6) as

\log \Lambda = N_0 \log R_0 - N_1 \log R_1 - N_2 \log R_2 + N_0 \frac{\hat{R}_0}{R_0} - N_1 \frac{\hat{R}_1}{R_1} - N_2 \frac{\hat{R}_2}{R_2},    (7)

where R̂_i is computed by the SWF given by Eq. (3). The LR is an optimal detector, in the sense that it maximizes the probability of detection (PD) for a given probability of false alarm (PFA). The PFA is the probability of detecting an edge in a zone of constant reflectivity. However, we see from Eq. (7) that Λ depends on the reflectivities R_i, which are unknown in our case. Hence, the LR is only of theoretical interest here, and it will not be analyzed any further.

2.5 Generalized Likelihood Ratio Operator

If we replace the unknown reflectivities R_i in the LR by the ML estimates R̂_i, we obtain the GLR. From Eq. (7) we see that the GLR is given by

\log \hat{\Lambda} = N_0 \log \hat{R}_0 - N_1 \log \hat{R}_1 - N_2 \log \hat{R}_2.    (8)

While the LR is optimal in the general case, the GLR can only be shown to be optimal in the minimax sense for Gaussian variables. 7 This means that the GLR given by Eq. (8) maximizes the PD for a given PFA when R1 and R2 are close. The GLR defined by Eq. (8) was developed under the hypothesis that the half-windows can be made independent by introducing a band of pixels between them. If X1 and X2 cannot be made independent, due to the correlation length or restrictions on the window configuration, the GLR operator can be based directly on Eq. (5), by substituting estimated parameters into the PDFs, similar to the edge localization method described in Sec. 5.1. The PDF of the GLR operator given by Eq. (8) has not been developed analytically. In order to compute the theoretical PFA corresponding to a threshold t_Λ̂


or vice versa, a simple solution is to use the relation between the GLR and the double-sided ratio (DR) of two Gamma-distributed variables R̂_1 and R̂_2, 5

r = \frac{\hat{R}_1}{\hat{R}_2},    (9)

which is the ML estimator of the edge contrast C = R_1 / R_2, 12 and whose distribution is 5,4

p(r | R_1, R_2, N_1, N_2) = \frac{\Gamma(N_1 + N_2)}{\Gamma(N_1)\Gamma(N_2)} \left( \frac{N_1 R_2}{N_2 R_1} \right)^{N_1} \frac{r^{N_1 - 1}}{\left( 1 + \frac{N_1 R_2}{N_2 R_1} r \right)^{N_1 + N_2}}.    (10)

For two thresholds t_1 < 1 and t_2 > 1 of the DR, which are used when R̂_1 < R̂_2 and when R̂_1 > R̂_2, respectively, the PD is given by

p_{det} = 1 - \int_{t_1}^{t_2} p(r | R_1, R_2, N_1, N_2) \, dr,    (11)

which may be computed by numerical integration or by an analytical expression 19 based on the hypergeometric function, which here reduces to a finite series. 4 The relation between Λ̂ and r is found by rearranging the terms of Eq. (8):

\log \hat{\Lambda} = -N_1 \log r + N_0 \log \frac{N_1 r + N_2}{N_0}.    (12)

In practice, we first fix the PFA that can be accepted. The next step is to find the appropriate threshold t_Λ̂ for the GLR, which will be used to decide whether or not an edge is present. We guess a first value for t_Λ̂, solve Eq. (12) with respect to r to obtain the two corresponding thresholds t_1 and t_2 for the DR, and compute the PFA by introducing t_1, t_2 and R_1 = R_2 into Eq. (11). The threshold t_Λ̂ is adjusted and the procedure is repeated until the computed PFA is sufficiently close to the desired one. If we measure Λ̂ > t_Λ̂ in a given pixel position, it indicates the presence of an edge with a risk of error equal to the specified PFA. Another measure that can be mapped to the DR is the normalized ratio (NR) 5,6:

r_m = \max\left\{ \frac{\hat{R}_1}{\hat{R}_2}, \frac{\hat{R}_2}{\hat{R}_1} \right\} = \max\left\{ r, \frac{1}{r} \right\}.    (13)

We here consider a version of the NR operator where R̂_1 and R̂_2 are computed by the SWF, so that problems due to speckle correlation are eliminated. A given threshold t_m for the NR corresponds to the thresholds t'_1 = 1/t_m and t'_2 = 1/t'_1 of the DR, which in general are different from the optimal thresholds t_1 and t_2 found through Eq. (12). For the SWCE configuration, however, where N_1 = N_2, it can easily be shown that t_2 = 1/t_1, so that the GLR and NR performances coincide. When R_1 > R_2 and R_1 < R_2 have the same probability, it can be shown 4,19 that the highest PD is obtained in the central edge position, where the NR has the same performance as the GLR. However, the NR rapidly becomes suboptimal when the presumed edge position is moved away from the center, especially for weak edges.
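As a numerical illustration of this threshold-selection procedure, the sketch below maps a candidate log-GLR threshold to the two DR thresholds via Eq. (12), integrates the ratio PDF of Eq. (10) with R1 = R2 to obtain the PFA, and searches for the threshold that matches a desired PFA. The bisection search, the integration routine and the window sizes are implementation choices of this sketch, not prescribed by the text.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import gammaln

def dr_pdf(r, R1, R2, N1, N2):
    """Double-sided ratio PDF of Eq. (10)."""
    a = N1 * R2 / (N2 * R1)
    logp = (gammaln(N1 + N2) - gammaln(N1) - gammaln(N2)
            + N1 * np.log(a) + (N1 - 1) * np.log(r)
            - (N1 + N2) * np.log1p(a * r))
    return np.exp(logp)

def log_glr_of_r(r, N1, N2):
    """log GLR as a function of the ratio r, Eq. (12)."""
    N0 = N1 + N2
    return -N1 * np.log(r) + N0 * np.log((N1 * r + N2) / N0)

def dr_thresholds(log_glr_threshold, N1, N2):
    """Invert Eq. (12): find t1 < 1 and t2 > 1 giving the same log GLR."""
    f = lambda r: log_glr_of_r(r, N1, N2) - log_glr_threshold
    t1 = brentq(f, 1e-9, 1.0 - 1e-9)
    t2 = brentq(f, 1.0 + 1e-9, 1e9)
    return t1, t2

def pfa(log_glr_threshold, N1, N2):
    """PFA: probability that the GLR exceeds the threshold when R1 = R2, cf. Eq. (11)."""
    t1, t2 = dr_thresholds(log_glr_threshold, N1, N2)
    inside, _ = quad(dr_pdf, t1, t2, args=(1.0, 1.0, N1, N2))
    return 1.0 - inside

if __name__ == "__main__":
    N1 = N2 = 21   # e.g. 3x7 half-windows of a 7x7 window with a one-pixel band (assumed)
    target_pfa = 0.01
    t_glr = brentq(lambda t: pfa(t, N1, N2) - target_pfa, 1e-6, 50.0)
    print("log GLR threshold for PFA = 1%:", t_glr)
```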

[ROC plot: measured PD versus measured PFA for edge contrasts of 1, 2, 3 and 6 dB, with curves for the 7×7 SWF-based GLR (GLR-SWF) and the 7×7 AMI-based GLR (GLR-AMI).]

Figure 4. Experimental receiver operating characteristics (ROC) for different edge contrasts for the SWF-based and AMI-based GLR operators, computed on simulated complex SAR images where the speckle correlation is close to that of ERS SLC images.

The improvement in detection performance obtained by working on complex data is illustrated in Fig. 4 for a spatial speckle correlation close to that of ERS SLC images. The receiver operating characteristics (ROC) show the PD as a function of the PFA, and we see that the SWF-based GLR operator yields a considerably better result than the AMI-based GLR operator. This is due to the decorrelation of the speckle, which permits better estimation of the local reflectivities.

2.6 Monoedge Detection in Intensity Images with Uncorrelated Speckle

It is interesting to note that Eq. (8) is similar to the expression derived by Oliver et al. for intensity images. 4 The only difference resides in the way the local mean reflectivities are estimated. When the speckle is uncorrelated, C_{S_i} in Eq. (2) becomes an identity matrix, and the SWF given by Eq. (3) reduces to the AMI. The GLR presented in 4 is therefore a special case of the GLR given by Eq. (8). Hence, the GLR for intensity images is also an optimal detector in the minimax sense, when the hypothesis of uncorrelated speckle is verified. This is also the case for the AMI-based NR when N1 = N2. With this observation in mind, we shall now compare the NR computed on an intensity image with uncorrelated speckle, with a gradient operator applied to the logarithm of the intensity image, using the same window size in both cases. It is easy to show that taking the absolute difference (AD) of the arithmetic mean of the logarithm (AML) of the pixel values is equivalent to computing the NR of the geometric mean intensities of the two half-windows. The geometric mean intensity is a biased estimator of the reflectivity, but an unbiased estimator can easily be realized. However, the variance of this unbiased estimator is much greater than


[ROC plot: measured PD versus measured PFA for edge contrasts of 1, 2, 3 and 6 dB, with curves for the 7×7 AMI-based NR (NR-AMI) and the 7×7 AD-AML operator.]

Figure 5. Experimental ROC curves for different edge contrasts for the AMI-based NR operator, computed on simulated SAR intensity images with uncorrelated speckle, and for the AD-AML operator, which is computed on the logarithm of the simulated SAR images.

that of the AMI estimator. 20 The simulation results in Fig. 5 confirm that the differential operator computing the AD of the AMLs performs significantly poorer than the NR of the AMIs. Moreover, it can be shown that the arithmetic mean amplitude (AMA) has a slightly lower speckle reduction capacity than the AMI. 20 The performance of detectors based on the AMA will therefore be inferior to that of the corresponding AMI-based operators.

3 Edge Detection in the Presence of Multiple Edges

For most scene types, the large windows that we use to detect edges in SAR images are likely to contain several edges simultaneously. In this case, we need to estimate the local mean values {R_i} of a signal which undergoes abrupt transitions at random intervals, as illustrated by Fig. 1 (b).

3.1 Ratio Detector Optimized under a Stochastic Multiedge Model

When the monoedge hypothesis is not verified, the AMI is no longer the optimal estimator of the local reflectivity in SAR images with uncorrelated speckle. Estimators with non-uniform weighting should be considered. The filter coefficients decide the weighting of the pixels as a function of the distance to the central pixel. For our application, they should optimize the tradeoff between noise suppression and spatial resolution, based on a priori knowledge of image and noise characteristics. We restrict ourselves to a separable image model. In the horizontal as well as in the vertical direction we suppose that the reflectivity image (ideal image) R is a stationary random process composed of piecewise constant segments of reflectivity {R_i}, with mean value μ_R and standard deviation σ_R. The localization of the reflectivity jumps {x_i} follows a Poisson distribution with parameter λ corresponding to the mean jump frequency, i.e., the probability of k jumps in the interval x is given by

p_k(x) = \frac{(\lambda x)^k}{k!} \exp(-\lambda x).    (14)

The reflectivities {R_i} and the jump localizations {x_i} are supposed to be independent. Fig. 1 (b) illustrates the multiedge model in the one-dimensional case. Although it is idealized, this model is a good approximation for important scene types, such as agricultural fields. Assuming the speckle to be uncorrelated, it can then be shown 21 that the linear, non-causal estimator minimizing the mean square error (MSE) is the infinite symmetric exponential filter (ISEF) given by

f(x) = \frac{\gamma}{2} \exp(-\gamma |x|),    (15)

where

\gamma^2 = \frac{2 \lambda L}{1 + (\mu_R / \sigma_R)^2} + \lambda^2.    (16)

In the discrete case, f can be implemented very efficiently by a pair of recursive filters f_1 and f_2, realizing the normalized causal and anti-causal part of f, respectively. 22,21 Based on the linear minimum mean square error (MMSE) filters described above, we propose a new ratio-based edge detector: the ratio of exponentially weighted averages (ROEWA) operator. 21 To compute the horizontal edge strength component, the intensity image I is first smoothed column by column using the one-dimensional smoothing filter f. Next, the causal and anti-causal filters f_1 and f_2 are employed line by line on the result of the smoothing operation to obtain the local mean intensities:

\hat{\mu}_{x1}(x, y) = f_1(x) * (f(y) \star I(x, y)),
\hat{\mu}_{x2}(x, y) = f_2(x) * (f(y) \star I(x, y)).

Here * denotes convolution in the horizontal direction and ⋆ denotes convolution in the vertical direction. The normalized ROEWA in the horizontal direction is given by

r_{x\max}(x, y) = \max\left\{ \frac{\hat{\mu}_{x1}(x-1, y)}{\hat{\mu}_{x2}(x+1, y)}, \frac{\hat{\mu}_{x2}(x+1, y)}{\hat{\mu}_{x1}(x-1, y)} \right\}.    (17)

The vertical edge strength component can be obtained in the same manner, except that the directions are interchanged. It is even possible to compute the ROEWA in the diagonal directions, but the implementation becomes more delicate. The exponentially weighted averages μ̂_1 and μ̂_2 are normalized to be unbiased, and as the standard deviation remains proportional to the mean value, the ROEWA operator has the CFAR property. There is no simple generalization of this method to complex images with correlated speckle.
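A possible discrete implementation of the horizontal ROEWA component is sketched below: the ISEF smoothing is realized by a causal and an anti-causal first-order recursive filter, as suggested above, and the edge strength of Eq. (17) is the normalized ratio of the two exponentially weighted means. The filter constant, the use of scipy.signal.lfilter and the handling of image borders are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np
from scipy.signal import lfilter

def causal_exp(x, b, axis):
    """Normalized causal exponential filter: y[n] = (1-b)*x[n] + b*y[n-1]."""
    return lfilter([1.0 - b], [1.0, -b], x, axis=axis)

def anticausal_exp(x, b, axis):
    """Anti-causal counterpart, obtained by filtering the reversed signal."""
    return np.flip(causal_exp(np.flip(x, axis=axis), b, axis=axis), axis=axis)

def isef(x, b, axis):
    """Symmetric exponential smoothing (ISEF), the sum of the two halves."""
    return 0.5 * (causal_exp(x, b, axis) + anticausal_exp(x, b, axis))

def roewa_horizontal(intensity, b=0.73):
    """Horizontal component of the ROEWA operator, cf. Eq. (17)."""
    I = np.asarray(intensity, dtype=float)
    smoothed = isef(I, b, axis=0)              # smooth each column with f
    mu1 = causal_exp(smoothed, b, axis=1)      # exponentially weighted mean to the left
    mu2 = anticausal_exp(smoothed, b, axis=1)  # exponentially weighted mean to the right
    eps = 1e-12
    # Ratio of the means one pixel to the left and one pixel to the right of (x, y);
    # np.roll wraps around at the borders, which this sketch simply ignores.
    left = np.roll(mu1, 1, axis=1) + eps
    right = np.roll(mu2, -1, axis=1) + eps
    return np.maximum(left / right, right / left)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    R = np.ones((64, 64)); R[:, 32:] = 4.0     # 6 dB vertical step edge
    I = R * rng.exponential(1.0, R.shape)      # single-look intensity speckle
    strength = roewa_horizontal(I)
    print("strongest column:", strength[:, 1:-1].mean(axis=0).argmax() + 1)
```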

[Two surface plots of exponentially decaying impulse responses over the (x, y) plane.]

Figure 6. Impulse responses of the exponential filters estimating (a) μ̂_{x2} and (b) μ̂_{x1}, which are used to compute the horizontal component of the ROEWA operator.

3.2 Multiresolution GLR Operator

Rather than using a fixed-size window, a statistical multiresolution approach 5,11,12 can be used. Weak edges separating large regions can then be detected with large windows, while strong edges between small regions are detected with small windows. The fact that GLRs computed on different resolution levels have different statistical significance is taken into account: 12 Thresholds corresponding to a given PFA are first computed over the whole range of relevant window sizes. The GLR on each level is then divided by the appropriate threshold. The maximum normalized GLR across the resolution levels, which can be considered as the strongest indication of the presence of an edge, is retained as the edge strength of the pixel. Alternatively, we can take the mean of the normalized GLRs across the scales. Experimental results 12 indicate that the latter method yields slightly better results. Both the AMI-based and the SWF-based GLR can be used, but it is of course preferable to use the SWF if the speckle is correlated and complex data is available. When the SWCE configuration is used, we can compute the NR rather than the GLR at each level. It has also been proposed to use the continuous wavelet transform and the Haar wavelet to detect edges degraded by multiplicative noise. 23 The magnitudes of the wavelet coefficients obtained on different scales are averaged pixel by pixel, similar to the multiresolution GLR operator. However, wavelets are basically differential operators, so the wavelet transform is not well suited for edge detection in SAR images.
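A minimal sketch of this multiresolution normalization, assuming uncorrelated speckle and an AMI-based GLR for vertical edges: the GLR is computed for several window sizes, each response is divided by its own PFA threshold (here estimated by Monte Carlo rather than by the analytical procedure of Sec. 2.5), and the normalized responses are combined by their mean or maximum. The window configuration and the Monte-Carlo calibration are assumptions of this illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def glr_vertical(I, w):
    """AMI-based log GLR for vertical edges, using two w x w half-windows
    immediately to the left and right of the central pixel (simplified SWCE)."""
    N1 = N2 = w * w
    N0 = N1 + N2
    box = uniform_filter(I, size=w, mode="reflect")   # centred w x w means
    shift = (w + 1) // 2
    R1 = np.roll(box, shift, axis=1)                  # mean of the left half-window
    R2 = np.roll(box, -shift, axis=1)                 # mean of the right half-window
    R0 = 0.5 * (R1 + R2)                              # N1 = N2
    return N0 * np.log(R0) - N1 * np.log(R1) - N2 * np.log(R2)

def mc_threshold(w, pfa, looks=1, trials=200, size=64, seed=0):
    """Monte-Carlo log-GLR threshold on homogeneous Gamma-speckled scenes."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(trials):
        I = rng.gamma(looks, 1.0 / looks, (size, size))
        samples.append(glr_vertical(I, w)[w:-w, w:-w].ravel())
    return np.quantile(np.concatenate(samples), 1.0 - pfa)

def multiresolution_glr(I, sizes=(3, 5, 9), pfa=0.01, combine="mean"):
    """Normalize the GLR at each window size by its PFA threshold and combine."""
    maps = [glr_vertical(I, w) / mc_threshold(w, pfa) for w in sizes]
    stack = np.stack(maps)
    return stack.mean(axis=0) if combine == "mean" else stack.max(axis=0)
```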

4 Two-dimensional Edge Detection and Edge Extraction

The description so far is basically one-dimensional. The two-dimensional implementation poses some additional problems: Firstly, the edge orientation is in general unknown, so that several edge directions must be considered in the edge detection stage. Secondly, the monoedge model is not always verified. Depending on the scene type, several edges may co-occur within the analyzing window, especially if a large window is used. Finally, we must make sure that the edges that we extract are closed and skeleton if we want them to define a segmentation of the image.

4.1 Multidirectional Edge Detection

To detect edges with unknown orientation, a simple approach consists in splitting the analyzing window in several different directions. The main issues are the number of directions to examine, the shapes of the multidirectional analyzing window, and the way that the edge strengths obtained in different directions are combined. Partial responses to these questions have been obtained through simulations. 12 In the idealized monoedge case, it is advantageous to use long and narrow windows, as they impose very weak constraints on the edge shape. Using large and short windows, on the contrary, with many pixels adjacent to the supposed edge, makes it necessary to test an important number of shapes and directions. In the more realistic multiedge case, and if the scene characteristics are the same in azimuth and range, it is preferable to use square windows. When the resolution or the scene characteristics are different in azimuth and range, rectangular windows reflecting this difference should be used. The higher the number of directions and shapes that are tested, the higher is the probability of a good fit between the real edge and one of the windows. However, not only the PD increases, but also the PFA. When too many directions are examined, the increase in the PFA becomes higher than that of the PD, and the overall performance decreases. As a compromise, windows split in four directions are frequently used. When only two perpendicular directions are tested, it seems natural to take the magnitude of the two components, by analogy with the gradient magnitude of differential operators. Simulation results indicate that this is a good choice. 12 When the edge strength is computed in four directions, across the vertical, horizontal, and diagonal axes of a square window, it is better to take the maximum of the components. If there were no overlap between the half-windows for different directions, the overall PFA would be related to the theoretical unidirectional one through

p_4 = 1 - (1 - p_1)^4.    (18)

For square windows, where the half-windows for the different directions are highly overlapping, the empirical relation

p'_4 = 1 - (1 - p_1)^3    (19)

is a good approximation. 5 For long and narrow windows, which give a lower degree of overlap between the masks for different directions, the PFA of the maximum of the four components tends towards Eq. (18). 24 It is, however, preferable to take the average of the four components, rather than the maximum, for such windows. 12

4.2 Robust Extraction of Closed Skeleton Edges

Plain thresholding of the edge strength map computed by the multidirectional GLR operator generally produces isolated edge segments that are several pixels wide. Closed, skeleton boundaries defining a segmentation of the image can be obtained with the watershed algorithm. 25 The edge strength map is here considered as a surface, and the edges are extracted through an immersion simulation.

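A possible realization of this extraction step with off-the-shelf building blocks is sketched below: shallow minima of the edge strength surface are suppressed with the h-minima transform, which plays the role of the basin-dynamics thresholding discussed in the next paragraph, and the remaining minima serve as markers for the immersion-based watershed. The scikit-image and SciPy calls are assumptions of this illustration, not the implementation used by the authors.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import h_minima
from skimage.segmentation import watershed

def watershed_segmentation(edge_strength, dynamics_threshold):
    """Closed, one-pixel-wide boundaries from an edge strength map.

    Minima whose dynamic (depth relative to the lowest saddle) is below
    `dynamics_threshold` are suppressed, so that only significant basins
    generate regions."""
    # One marker label per remaining regional minimum.
    markers, n_regions = ndi.label(h_minima(edge_strength, dynamics_threshold))
    # Immersion simulation on the edge strength surface; watershed lines kept.
    labels = watershed(edge_strength, markers, watershed_line=True)
    boundaries = labels == 0          # pixels on the watershed lines
    return labels, boundaries, n_regions
```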

The watershed algorithm usually yields a massively over-segmented image. To reduce the number of false or irrelevant edges to an acceptable level, we can threshold the basin dynamics of the edge strength map. 26 The concept of edge dynamics 27,28,29 permits a compact, hierarchical representation of the segmentations obtained by applying different thresholds to the basin dynamics. The user can then choose the best threshold for his application interactively. Refer to 30 for more details on the use of basin and edge dynamics thresholding in SAR image segmentation.

4.3 Region Merging

Another way of reducing the over-segmentation is to merge adjacent regions whose mean reflectivities are not significantly different. Several merging criteria have been proposed, including the Student's t-test 31 and the unequal variance Student's t-test. 32 The GLR can also be used to decide whether or not two regions should be merged, and again constitutes an optimal criterion. In fact, the hypotheses H0 and H1 are simply inverted compared to the GLR for edge detection. Assuming the adjacent regions to be independent, the logarithm of the GLR for region merging therefore becomes

\log \hat{\Lambda}_m = N_1 \log \hat{R}_1 + N_2 \log \hat{R}_2 - N_0 \log \hat{R}_0.    (20)

It can easily be seen that log Λ̂_m ≤ 0, and that a value close to zero suggests that the two regions together form a Gamma-homogeneous region. In practice, negative thresholds are used. The more irregularities we accept within the regions, the further the threshold can be from zero. Just like for edge detection, the threshold of the GLR for region merging can be related to the PFA. Geometrical considerations, such as region size 32,21 and edge regularity, 31 may also be taken into account in the merging process, based on a priori knowledge about the size and shape of the regions. The order in which the regions are merged has a strong influence on the final result. Finding the globally optimal merging order requires much time-consuming sorting. The iterative pairwise mutually best merge criterion 33 is a locally optimal approach which is much quicker: First all regions are compared with their neighbors in terms of the merging criterion, and the results are stored in a dynamic array. The array is then traversed sequentially, and a region A is merged with an adjacent region B if and only if B is the closest neighbor of A according to the merging criterion, and if A is also the closest neighbor of B. When two regions are merged, the local statistics of the resulting region must be computed and the comparison with all its neighbors must be redone before continuing. The array is traversed repeatedly until no adjacent regions satisfy the merging criterion.
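The merging criterion of Eq. (20) and the iterative pairwise mutually best merge strategy can be sketched as follows; the simple region representation (pixel counts and summed reflectivity estimates per label, dictionary-based adjacency) is an assumption of this illustration, not the authors' data structures.

```python
import numpy as np

def merge_gain(n1, s1, n2, s2):
    """log GLR for merging two regions, Eq. (20); always <= 0."""
    r1, r2 = s1 / n1, s2 / n2
    n0, r0 = n1 + n2, (s1 + s2) / (n1 + n2)
    return n1 * np.log(r1) + n2 * np.log(r2) - n0 * np.log(r0)

def mutually_best_merge(counts, sums, adjacency, threshold):
    """Iterative pairwise mutually best merge.

    counts[a], sums[a] : pixel count and summed reflectivity estimate of region a
    adjacency[a]       : set of labels adjacent to region a
    threshold          : negative log GLR threshold; merge only if gain > threshold
    Returns a dict mapping every original label to its final label."""
    parent = {a: a for a in counts}
    changed = True
    while changed:
        changed = False
        for a in list(counts):
            if a not in counts:
                continue
            gain = lambda b: merge_gain(counts[a], sums[a], counts[b], sums[b])
            best = max(adjacency[a], key=gain, default=None)
            if best is None or gain(best) <= threshold:
                continue
            # Mutually best: a must also be the best neighbour of best.
            back = lambda b: merge_gain(counts[best], sums[best], counts[b], sums[b])
            if max(adjacency[best], key=back) != a:
                continue
            # Merge best into a and update statistics and adjacency.
            counts[a] += counts.pop(best)
            sums[a] += sums.pop(best)
            for b in adjacency.pop(best):
                if b != a:
                    adjacency[b].discard(best)
                    adjacency[b].add(a)
                    adjacency[a].add(b)
            adjacency[a].discard(best)
            for lbl, p in parent.items():
                if p == best:
                    parent[lbl] = a
            changed = True
    return parent
```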

5 Edge Localization

Edge localization is an estimation problem rather than a detection problem. Assuming that an edge is present within the window, we want to determine the most probable edge position. Edge localization can therefore be considered as a refinement stage following the edge detection procedure described above. The question is how to divide X0 into X1 and X2.

5.1 ML Estimator of the Edge Position

ML estimation of the edge position consists in maximizing the probability density function p(X_0) given by Eq. (1) with respect to the edge position, or equivalently, by computing

\hat{X}_{edge} = \arg\min_{X_0 \to X_1, X_2} U(X_1, X_2),    (21)

where the energy function U is given by

U(X_1, X_2) = \ln|C_{X_0}| + X_0^\dagger C_{X_0}^{-1} X_0.    (22)

Using the FWSE configuration illustrated in Fig. 2 (b), we center the window on the detected edge pixel and split the window in all possible edge positions. As the ML estimator must use the same data set for all positions, we cannot introduce any band of pixels separating the two parts of the window here. The signal vectors X1 and X2 will consequently be dependent. For each pair of signal vectors the SWF defined in Eq. (3) is used to compute R̂_1 and R̂_2. Let Q be a column vector where the N_1 elements which correspond to X_1 are set to \sqrt{\hat{R}_1} and the N_2 other elements are set to \sqrt{\hat{R}_2}. We can now write

C_{X_0} = \left( Q Q^t \right) \odot C_{S_0},    (23)

where ⊙ denotes the element-by-element product of two equally sized matrices. To estimate the edge position using Eq. (21), the matrix C_{X_0}, its determinant and its inverse must be computed for each new window position, and for all possible edge positions within the window. If we denote the number of possible edge positions by M, the total number of multiplications per edge pixel is about M(N_0^3 + 2N_0^2 + N_1^2 + N_2^2 + 3N_0).

5.2 Approximate ML Solution

If the window is big and the speckle correlation is relatively weak, we may consider X1 and X2 as approximately independent. In this case p(X_0) ≈ p(X_1 | R_1) p(X_2 | R_2). As p(X_0 | R_0) is constant for a fixed window position, p(X_0) is proportional to Λ̂ in Eq. (6). Hence, the most probable edge position can be computed by Eq. (21) with the suboptimal energy function

U'(X_1, X_2) = N_1 \log \hat{R}_1 + N_2 \log \hat{R}_2.    (24)

Computing Eq. (21) with the new cost function U' requires about M(N_1^2 + N_2^2 + N_0) multiplications per edge pixel, which is far less than for the true ML estimator based on Eq. (22). When the speckle is uncorrelated, Eq. (24) coincides with the optimal solution, and the SWF can be replaced by the AMI. Simulation results 19,12 show that the true ML estimator yields better results than the approximate ML estimator in terms of bias and MSE. However, the computational complexity of the true ML estimator is prohibitive, so only the approximate solution is of practical interest. As the MSE increases dramatically towards the extremities, one should exclude the extreme positions from the test, and rather concentrate on positions closer to the center.
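A one-dimensional illustration of the FWSE scan with the suboptimal energy of Eq. (24): the window is fixed, every admissible split position is tried, and the split minimizing U' is retained. The use of the arithmetic mean (valid for uncorrelated speckle, as noted above) and the exclusion margin are assumptions of this sketch.

```python
import numpy as np

def approximate_ml_edge_position(intensity_line, margin=2):
    """Approximate ML edge localization, Eq. (24), on a 1-D intensity profile.

    Each candidate split d divides the fixed window into X1 = line[:d] and
    X2 = line[d:]; the split minimizing U' = N1*log(R1_hat) + N2*log(R2_hat)
    is returned. Extreme positions are excluded, as recommended in the text."""
    line = np.asarray(intensity_line, dtype=float)
    n = line.size
    best_d, best_u = None, np.inf
    for d in range(margin, n - margin):
        r1 = line[:d].mean()     # AMI estimate of R1 (uncorrelated speckle assumed)
        r2 = line[d:].mean()     # AMI estimate of R2
        u = d * np.log(r1) + (n - d) * np.log(r2)
        if u < best_u:
            best_d, best_u = d, u
    return best_d

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    reflectivity = np.where(np.arange(21) < 13, 1.0, 4.0)   # true edge after index 12
    profile = reflectivity * rng.exponential(1.0, 21)       # single-look speckle
    print("estimated edge position:", approximate_ml_edge_position(profile))
```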

5.3 Two-dimensional Edge Localization and Regularization

We shall now suppose that the actual contours have been extracted with the methods described above, but that the detected edge pixels are not necessarily in the correct positions. The segmentation is initially represented by an edge map, but we also create a label image, in which every pixel carries the number ℓ of the region it belongs to. From a radiometric point of view, the optimal edge position is the one that minimizes the cost function in Eq. (22). However, constraints on the regularity of the edges must be included in the energy function, as the radiometric criterion alone tends to create very irregular edges which follow local speckle patterns. We have considered two ways of introducing regularity constraints: a simple method based on Gibbs random fields, and a more sophisticated one based on active contours.

Contextual regularization using Gibbs random fields

Gibbs random fields have been introduced in image processing to model regularity constraints in the entire image. Geman 34 used them in the field of pattern recognition. Descombes 35 and Sigelle 36 studied their links to statistical physics. The Potts model is a simple and frequently used model, which has constant and anisotropic potential functions. It permits modeling geometric regularity efficiently without having to specify complicated local configurations.

Figure 7. Illustration of the Potts model, defined by Eq. (25). (a) The indices i of the labels ℓ_i within a 3×3 window. (b) A regular configuration of labels which yields V = 2β/T. (c) An irregular spatial configuration for which V = −4β/T.

Let the labels ℓ within a 3×3 window have indices as shown in Fig. 7 (a). The energy term V corresponding to the Potts model is here given by 36,37

V(X_1, X_2) = \frac{\beta}{T} \sum_{i=1}^{8} \left[ 2\,\delta(\ell_0, \ell_i) - 1 \right],    (25)

where δ is the Kronecker delta and, in our case, β is a negative constant which controls the weight given to the regularization constraint compared to the radiometric criterion. The parameter T decreases exponentially from the initial value T_0 during the global optimization stage, similar to the temperature in simulated annealing. The sum in Eq. (25) simply calculates the number of pixels in the 8-neighbourhood which have the same label as the central pixel, minus the number of pixels which have a different label. The energy function, including the regularization term, becomes

U_V(X_1, X_2) = U(X_1, X_2) + V(X_1, X_2),    (26)

where the radiometric term U is given by Eq. (22) or (24). We traverse the image several times and compute the minimum of Eq. (26) with respect to the edge position for each detected edge pixel, similar to what is done in the iterated conditional modes (ICM) algorithm. 38 The role of the regularization term is to privilege regular shapes, which for many scene types are more likely than very irregular ones.

Active contours

The edge localization algorithm based on active contours is preceded by a vectorization of the edges, as shown in Fig. 8. The edges are represented by nodes and arcs. We have to set the maximum distance that can be accepted between an edge pixel and the corresponding arc. The higher the required precision is, the higher the number of nodes will be. The refinement of the edge position consists in traversing the list of nodes iteratively, and for each node calculating an energy function for a series of new node positions in its neighbourhood. When a new edge position is considered, we have to take the corresponding changes for the arcs and the label statistics into account to compute the energy function. If a position with a lower cost is found, the new position is accepted and the concerned labels and label attributes are updated before continuing.

Figure 8. Illustration of the edge localization algorithm based on active contours. (a) Initial contours. (b) Vectorization process. (c) Vectorized contours where the node positions are modified to improve the edge localization.

The radiometric term in Eq. (22) or (24) can again be used. If we use Eq. (24), the method is similar to the one proposed by Réfrégier et al. 39 for intensity images. Nodes where several regions meet need special treatment. The energy function in Eq. (24) can easily be generalized to more than two adjacent regions. The fact that we work on vectorized edges constitutes a regularity constraint in itself. However, regularity constraints based on the angle between adjoining arcs or form parameters describing the complexity of the contours of entire regions can be introduced.
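To make the contextual regularization of Eqs. (25)-(26) concrete, the following sketch evaluates the combined energy for candidate relabellings of a detected edge pixel in a single ICM-like pass at fixed T. The two-region setting, the AMI-based radiometric term and the simple label bookkeeping are assumptions of this illustration, not the authors' implementation.

```python
import numpy as np

def potts_term(labels, y, x, candidate, beta, T):
    """Potts energy of Eq. (25) for assigning `candidate` to pixel (y, x),
    computed over its 8-neighbourhood (pixel assumed strictly inside the image)."""
    v = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            v += 2.0 * (labels[y + dy, x + dx] == candidate) - 1.0
    return (beta / T) * v

def radiometric_term(intensity, labels, label_a):
    """Suboptimal radiometric energy of Eq. (24) for a two-region labelling
    (AMI estimates, i.e. uncorrelated speckle assumed)."""
    mask = labels == label_a
    n1, n2 = mask.sum(), (~mask).sum()
    return n1 * np.log(intensity[mask].mean()) + n2 * np.log(intensity[~mask].mean())

def icm_step(intensity, labels, edge_pixels, beta=-0.8, T=6.0):
    """One ICM-like pass: each detected edge pixel takes the label (out of the
    two adjacent regions, assumed to be labelled 1 and 2) minimizing U' + V."""
    for (y, x) in edge_pixels:
        current = labels[y, x]
        candidates = (current, 3 - current)      # assumes labels are 1 and 2
        best_label, best_energy = current, np.inf
        for candidate in candidates:
            labels[y, x] = candidate
            energy = (radiometric_term(intensity, labels, 1)
                      + potts_term(labels, y, x, candidate, beta, T))
            if energy < best_energy:
                best_label, best_energy = candidate, energy
        labels[y, x] = best_label
    return labels
```

In the experiments of Sec. 6, several such passes are made while T decreases from T_0; this sketch keeps T fixed for simplicity.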

Figure 9. (a) Ideal reflectivity image with 6 dB edge contrast. (b) Corresponding image with single-look correlated speckle. (c) Edge strength map obtained with the 11×11 GLR operator on the speckled image. (d) Inverted edge dynamics of the edge strength map, where a dark edge segment indicates a strong edge. (e) Edges obtained by thresholding the edge dynamics, superposed on the ideal image. (f) Edges after the Gibbs random field localization refinement stage, using β = −0.8, T_0 = 6 and 5 iterations.

6 Segmentation of a Simulated SAR Image

Fig. 9 illustrates the different stages of the segmentation scheme, from the creation of the edge strength map, via edge extraction by thresholding of the edge dynamics, to the edge position refinement. The ideal reflectivity image in Fig. 9 (a) is composed of regions of constant reflectivity. The edge contrast is 6 dB. This synthetic image was multiplied by simulated complex speckle with correlation characteristics close to those of ERS images, to obtain the speckled image in Fig. 9 (b). The SWF-based GLR operator was applied to this image using an 11×11 window split along the horizontal, vertical and diagonal axes. Retaining the maximum GLR for each pixel gave the edge strengths in Fig. 9 (c). The edge dynamics computed on this edge strength map are inverted in Fig. 9 (d), so that a dark edge segment indicates a strong edge. Using an interactive visualization tool, we observed the segmentation results for a wide range of thresholds. With the most suitable one, all real edges and only two false ones were detected. This is mainly due to the fact that we used a relatively big analyzing window when computing the edge strength map.

However, the edge localization is not always perfect, as can be seen from Fig. 9 (e), where the detected edges have been superposed on the ideal image. When the segmentation is superposed on the speckled image, it is much more difficult to assess such small deviations from the correct edge positions by eye. The edge localization was carried out with the approximate ML criterion given by Eqs. (21) and (24), using a hybrid SWF filter, and with the contextual regularization term based on the Potts model. Several combinations of parameters were used in Eq. (25). The most satisfactory result, obtained with β = −0.8, T_0 = 6 and 5 iterations, is shown in Fig. 9 (f). The fit to the correct edge positions has become significantly better, but the contours are slightly more irregular. Stronger regularization created problems at the extremities of narrow structures. A better preservation of fine structures may be obtained with more sophisticated image models, such as the Chien model. 37

7 Contextual Classification of Multitemporal ERS Data

Contextual classification consists in comparing local statistics, calculated on a neighborhood of the pixel to be classified, with the statistics of each of the classes. The classification methods employed here are based on Bayes' rule. Let Z be the feature vector for a given pixel and its neighborhood. We seek the class ω which maximizes the a posteriori probability

p(\omega | Z) = \frac{p(Z | \omega) \, p(\omega)}{p(Z)},    (27)

where p(Z | ω) is the conditional joint PDF of the vector Z relative to the class ω. As p(Z) is independent of the class, it is not necessary to take it into account in the optimization stage. Supervised contextual classification of speckled images into a predefined set of classes is traditionally based on estimates obtained on a fixed-size sliding window. This approach is here compared with regionwise classification, based on a preliminary segmentation obtained with the scheme described above, for multitemporal classification of an agricultural area. A series of 6 ERS-1 SLC images and 2 SPOT images of the agricultural region of Bourges, France, were available. The ERS-1 images were acquired monthly from March to September 1993. A 4×1 SWF was applied to the co-registered SLC images, followed by downsampling by a factor of 3 in the azimuth direction, to facilitate the co-registration of the SPOT data on the SAR data. The equivalent number of independent looks L of the SAR intensity images obtained in this way is exactly 4, and the speckle is truly Gamma distributed and only weakly correlated. However, the secondary lobes of strong scatterers are more apparent than in ERS PRI data. In an earlier study, 40 we considered multisource classification of the scene, based on both ERS-1 and SPOT images. The SPOT images were here only used for validation of the class samples, as a supplement to the ground truth, whereas the segmentation and the classification were carried out on the 4-look SAR intensity images. An extract of a color composite of the first three dates is shown in Fig. 10. The image is mainly composed of agricultural parcels, but there are also some forested areas.
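A minimal sketch of the regionwise Bayesian rule of Eq. (27), assuming Gamma-distributed region mean intensities, independence between dates and an order equal to the number of pixels times the number of looks; these distributional assumptions and the class parameters are illustrative, not the ones estimated from the Bourges data set.

```python
import numpy as np
from scipy.special import gammaln

def log_gamma_pdf(z, mean, order):
    """Log PDF of a Gamma variable with the given mean and order (number of looks)."""
    return (order * np.log(order / mean) + (order - 1) * np.log(z)
            - order * z / mean - gammaln(order))

def classify_region(mean_intensities, n_pixels, class_means, priors, looks=4):
    """MAP classification of one region, Eq. (27).

    mean_intensities : (D,) region mean intensity at each of the D dates
    class_means      : (K, D) mean reflectivity of each class at each date
    priors           : (K,)  prior probabilities p(omega)
    The dates are treated as independent and the region mean as Gamma
    distributed with order n_pixels * looks (an approximation that ignores
    residual speckle correlation)."""
    order = n_pixels * looks
    log_post = np.log(np.asarray(priors, dtype=float))
    for k in range(class_means.shape[0]):
        log_post[k] += log_gamma_pdf(mean_intensities, class_means[k], order).sum()
    return int(np.argmax(log_post))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    class_means = np.array([[1.0, 1.2, 2.0], [3.0, 2.5, 1.5]])   # 2 classes, 3 dates
    region = rng.gamma(40 * 4, class_means[1] / (40 * 4))        # 40-pixel region of class 1
    print("assigned class:", classify_region(region, 40, class_means, [0.5, 0.5]))
```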

The modulus of the horizontal and vertical components of the ROEWA operator was calculated for each of the 6 dates. The pixel-by-pixel average of these edge strengths constituted the edge strength map. This means that we privileged edges that are present in several images. If we wanted to detect abrupt changes in reflectivity which concern only one date, it would be better to take the maximum of the edge strengths of the different dates. Closed, skeleton edges were extracted by means of the watershed algorithm with thresholding of the edge dynamics. A relatively low threshold was used, in order to detect a maximum of significant edges. The number of false edges was reduced by region merging based on the GLR criterion given by Eq. (20). Finally, the edge localization was improved by minimizing Eq. (24), using the approximate ML criterion and the regularization term based on the Potts model. The resulting edges are superposed on the first three dates in Fig. 11. The segmentation scheme successfully distinguishes agricultural fields which have clearly different mean reflectivities or different temporal evolution. The forested areas are also well separated from the surrounding fields. Concerning the forests, it should be stressed that the segmentation algorithm is not made for textured regions, so that adjacent zones with different texture, but the same mean reflectivity, are generally gathered in the same region. Some regions that seem homogeneous by eye are split into several regions, but these will generally be regrouped in the classification step. It should be noted that these splits may be due to differences present only in the three last dates, which are not visualized here. In its current form, the segmentation scheme does not detect narrow linear structures, such as roads and rivers. Based on the ground truth and the co-registered SPOT data, we determined a set of zones in the SAR images serving as training samples for the 13 different classes. An independent set of test samples was also defined. The result of the supervised Bayesian classification computed on a 7×7 sliding window is shown in Fig. 12, whereas the result obtained by regionwise classification, using the segmentation in Fig. 11, is shown in Fig. 13. Classification errors occur for both methods, mainly due to the similarity between the temporal signatures of certain classes, texture and phenological variations within some of the classes, and the uncertainty related to the presence of speckle. The average proportion of correctly classified pixels in the test samples is about 75% for both methods. Also in terms of the inter-class confusion, the two approaches are relatively close. The visual aspect is, however, quite different. The method relying on a sliding window yields a very heterogeneous result, with numerous small zones attributed to other classes than their surroundings, and a high degree of inter-class confusion near the boundaries of the fields and the forests. Regionwise classification produces a very regular result, with no inter-class confusion near the boundaries. As the training and test samples are generally situated well inside the regions, the improvement brought by the region-based method near the boundaries is not reflected by the percentages of correctly classified pixels. Obviously, details that are not detected in the segmentation stage cannot be discriminated in the classification stage, and a higher number of pixels are affected if a region is wrongly classified.
Nevertheless, regionwise classification yields more meaningful results from a cartographic point of view, and it permits fast and efficient classification of agricultural areas with ERS 40 and Radarsat 41 data.

Figure 10. Extract of a color composite of three 4-look SAR images covering an agricultural scene near Bourges, France. © ESA - ERS-1 data - 1993. Distribution SPOT Image.


Figure 11. Edges obtained with the segmentation scheme, superposed on the color composite of the first three dates shown in Fig. 10.


Figure 12. Classification result obtained with feature vectors computed on a 7×7 sliding window.


Figure 13. Classification result obtained with feature vectors computed on entire regions defined by the segmentation in Fig. 11.


8 Conclusion

We here develop optimal criteria for edge detection, edge localization and region merging in SLC images with correlated speckle, and relate them to the corresponding criteria for intensity images known from the literature. Working on complex SAR data combines the advantages of full spatial resolution and optimal radiometric estimators, which permit us to realize improved statistical tests. The principal difference between these new methods and those developed for intensity images resides in the way that the local reflectivities are estimated. The SWF decorrelates the speckle in SLC images and yields optimal speckle reduction. In the case of detected images, this optimum can only be attained by the AMI when the speckle is uncorrelated. For large neighborhoods, the SWF implies an important number of multiplications, but near-optimal speckle reduction can be obtained with hybrid filters at a much lower computational cost. We propose an efficient two-dimensional implementation of the segmentation scheme, where the edge detection is carried out by the GLR and the watershed algorithm with thresholding of the basin dynamics, and where the edge localization stage is based on Gibbs random fields or active contours. Precise segmentations facilitate the estimation of scene parameters for adaptive speckle filtering 42 or classification of SAR data. To further improve the results, the segmentation scheme should be completed with criteria for textural segmentation 43 and operators for the detection of point targets and linear structures. 10,44

Acknowledgments

The authors thank the French Space Agency (CNES) for financial support. This work is part of contracts 833/CNES/94/1022/00 and 833/CNES/96/0574/00.

References

1. F. T. Ulaby, R. K. Moore, and A. K. Fung. From Theory to Applications, volume 3 of Microwave Remote Sensing: Active and Passive. Artech House Inc., Dedham, MA, 1986.
2. R. Brooks and A. Bovik. Robust techniques for edge detection in multiplicative Weibull image noise. Pattern Recognition, 23(10):1047-1057, 1990.
3. V. S. Frost, K. S. Shanmugan, and J. C. Holtzman. Edge detection for synthetic aperture radar and other noisy images. In Proc. International Geoscience and Remote Sensing Symposium, volume FA2, pages 4.1-4.9, Munich, Germany, June 1982.
4. C. J. Oliver, D. Blacknell, and R. G. White. Optimum edge detection in SAR. IEE Proc. Radar, Sonar and Navigation, 143(1):31-40, February 1996.
5. R. Touzi, A. Lopes, and P. Bousquet. A statistical and geometrical edge detector for SAR images. IEEE Trans. Geoscience and Remote Sensing, 26(6):764-773, November 1988.
6. A. C. Bovik. On detecting edges in speckle imagery. IEEE Trans. Acoustics, Speech, and Signal Processing, 36(10):1618-1627, October 1988.

7. M. Basseville and I. V. Nikiforov. Detection of Abrupt Changes: Theory and Application. Prentice-Hall, Englewood Cliffs, 1993.
8. M. Adair and B. Guindon. Statistical edge detection operators for linear feature extraction in SAR images. Canadian Journal of Remote Sensing, 16(2):10–19, 1990.
9. M. Beauchemin, K. P. B. Thomson, and G. Edwards. On nonparametric edge detection in multilook SAR images. IEEE Trans. Geoscience and Remote Sensing, 36(5):1828–1831, September 1998.
10. A. Lopes, R. Touzi, E. Nezry, and H. Laur. Structure detection and statistical adaptive speckle filtering in SAR images. International Journal of Remote Sensing, 14(9):1735–1758, June 1993.
11. R. Fjørtoft, A. Lopes, P. Marthon, and E. Cubero-Castan. Different approaches to multiedge detection in SAR images. In Proc. International Geoscience and Remote Sensing Symposium, volume 4, pages 2060–2062, Singapore, 3–8 August 1997.
12. R. Fjørtoft. Détection de contours pour la segmentation d'images radar à synthèse d'ouverture en régions homogènes. PhD thesis, Institut National Polytechnique, Toulouse, France, 1999. http://www-sv.cict.fr/cesbio/gti
13. L. M. Novak and M. C. Burl. Optimal speckle reduction in polarimetric SAR imagery. IEEE Trans. Aerospace and Electronic Systems, 26(2):293–305, 1990.
14. S. Nørvang Madsen. Spectral properties of homogeneous and nonhomogeneous radar images. IEEE Trans. Aerospace and Electronic Systems, 23(4):583–588, 1987.
15. V. Larson, L. M. Novak, and C. Stuart. Joint spatial-polarimetric whitening filter to improve SAR target detection performance for spatially distributed targets. In Proc. Algorithms for SAR Imagery, volume SPIE 2230, pages 285–301, April 1994.
16. J. Bruniquel and A. Lopes. Multi-variate optimal speckle reduction in polarimetric SAR imagery. International Journal of Remote Sensing, 18(3):603–627, 1997.
17. A. Lopes and F. Sery. Optimal speckle reduction for the product model in multilook polarimetric SAR imagery and the Wishart distribution. IEEE Trans. Geoscience and Remote Sensing, 35(3):632–647, May 1997.
18. J. Bruniquel and A. Lopes. On the true multilook intensity distribution in SAR imagery. In Proc. International Geoscience and Remote Sensing Symposium, Seattle, Washington, USA, July 1998.
19. R. Fjørtoft, J. C. Cabada, A. Lopes, and P. Marthon. Optimal edge detection and segmentation of SLC SAR images with spatially correlated speckle. In Proc. SAR Image Analysis, Modelling, and Techniques III, volume SPIE 3497, Barcelona, Spain, 21–24 September 1998.
20. D. S. Zrnic. Moments of estimated input power for finite sample averages of radar receiver outputs. IEEE Trans. Aerospace and Electronic Systems, 11(1):109–113, January 1975.
21. R. Fjørtoft, A. Lopes, P. Marthon, and E. Cubero-Castan. An optimum multiedge detector for SAR image segmentation. IEEE Trans. Geoscience and Remote Sensing, 36(3):793–802, May 1998.

22. J. Shen and S. Castan. An optimal linear operator for step edge detection. CVGIP: Graphical Models and Image Processing, 54(2):112–133, March 1992.
23. M. Chabert. Détection et estimation de ruptures noyées dans un bruit multiplicatif – Approches classiques et temps-échelle. PhD thesis, Institut National Polytechnique, Toulouse, France, December 1997.
24. R. G. Caves. Automatic matching of features in synthetic aperture radar data to digital map data. PhD thesis, University of Sheffield, England, June 1993.
25. L. Vincent and P. Soille. Watersheds in digital spaces: An efficient algorithm based on immersion simulations. IEEE Trans. Pattern Analysis and Machine Intelligence, 13:583–598, May 1991.
26. M. Grimaud. A new measure of contrast: Dynamics. In Proc. Image Algebra and Morphological Processing, volume SPIE 1769, pages 292–305, San Diego, USA, July 1992.
27. L. Najman and M. Schmitt. Geodesic saliency of watershed contours and hierarchical segmentation. IEEE Trans. Pattern Analysis and Machine Intelligence, 18:1163–1173, December 1996.
28. C. Lemaréchal, R. Fjørtoft, P. Marthon, and E. Cubero-Castan. Comments on "Geodesic saliency of watershed contours and hierarchical segmentation". IEEE Trans. Pattern Analysis and Machine Intelligence, 20(7):762–763, July 1998.
29. M. Schmitt. Response to the comment on "Geodesic saliency of watershed contours and hierarchical segmentation". IEEE Trans. Pattern Analysis and Machine Intelligence, 20(7):764–766, July 1998.
30. C. Lemaréchal, R. Fjørtoft, P. Marthon, A. Lopes, and E. Cubero-Castan. SAR image segmentation by morphological methods. In Proc. SAR Image Analysis, Modelling, and Techniques III, volume SPIE 3497, Barcelona, Spain, 21–24 September 1998.
31. R. Cook and I. McConnell. MUM (Merge Using Moments) segmentation for SAR images. In Proc. SAR Data Processing for Remote Sensing, volume SPIE 2316, pages 92–103, Rome, Italy, 1994.
32. R. Fjørtoft, P. Marthon, A. Lopes, and E. Cubero-Castan. Edge detection in radar images using recursive filters. In Proc. Second Asian Conference on Computer Vision, volume 3, pages 87–91, Singapore, 5–8 December 1995.
33. A. Baraldi and F. Parmiggiani. Segmentation driven by an iterative pairwise mutually best merge criterion. In Proc. International Geoscience and Remote Sensing Symposium, pages 89–92, Firenze, Italy, July 1995.
34. S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. Pattern Analysis and Machine Intelligence, 6:721–741, November 1984.
35. X. Descombes. Champs Markoviens en analyse d'images. PhD thesis, École Nationale Supérieure des Télécommunications, Paris, France, December 1993.
36. M. Sigelle. Champs Markoviens en traitement d'images et modèles de la physique statistique : applications en relaxation d'images de classification. PhD thesis, École Nationale Supérieure des Télécommunications, Paris, France, December 1993.


37. X. Descombes, J.-F. Mangin, E. Pechersky, and M. Sigelle. Fine structures preserving Markov model for image processing. In Proc. 9th Scandinavian Conference on Image Analysis, pages 349–356, Uppsala, Sweden, June 1995.
38. J. Besag. On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society, Series B, 48:259–302, 1986.
39. P. Réfrégier, O. Germain, and T. Gaidon. Optimal snake segmentation of target and background with independent Gamma density probabilities, application to speckled and preprocessed images. Optics Communications, 137:382–388, May 1997.
40. F. Sery, D. Ducrot, A. Lopes, R. Fjørtoft, E. Cubero-Castan, and P. Marthon. Multisource classification of SAR images with the use of segmentation, polarimetry, texture and multitemporal data. In Proc. European Symposium on Satellite Remote Sensing III, Image and Signal Processing for Remote Sensing III, volume SPIE 2955, pages 186–197, Taormina, Italy, 23–26 September 1996.
41. D. Ducrot and H. Sassier. Contextual methods for multisource land cover classification with application to Radarsat and SPOT data. In Proc. Image and Signal Processing for Remote Sensing, volume SPIE 3500, Barcelona, Spain, 21–24 September 1998.
42. R. Fjørtoft, F. Lebon, F. Sery, A. Lopes, P. Marthon, and E. Cubero-Castan. A region-based approach to the estimation of local statistics in adaptive speckle filters. In Proc. International Geoscience and Remote Sensing Symposium, volume 1, pages 457–459, Lincoln, Nebraska, USA, 27–31 May 1996.
43. C. J. Oliver, I. McConnell, and D. Stewart. Optimum texture segmentation of SAR clutter. In Proc. European Conference on Synthetic Aperture Radar, pages 81–84, Königswinter, Germany, 26–28 March 1996.
44. F. Tupin, H. Maître, J.-F. Mangin, J.-M. Nicolas, and E. Pechersky. Detection of linear features in SAR images: Application to road network extraction. IEEE Trans. Geoscience and Remote Sensing, 36(2):434–453, March 1998.

