
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 6, NO. 6, JUNE 1997


Image Enhancement Based on a Nonlinear Multiscale Method

Farook Sattar, Lars Floreby, Göran Salomonsson, and Benny Lövström

Abstract—An image enhancement method that reduces speckle noise and preserves edges is introduced. The method is based on a new nonlinear multiscale reconstruction scheme that is obtained by successively combining each coarser scale image with the corresponding modified interscale image. Simulation results are included to demonstrate the performance of the proposed method.

I. INTRODUCTION


In many imaging systems, there is noise that is inherently signal dependent. One such kind of noise is speckle, which appears, e.g., in acoustic and laser imaging and which is often modeled by a random process. Noise in images can be suppressed by conventional linear or nonlinear filtering. Linear techniques are simple but blur the edges and perform poorly in the presence of signal-dependent noise [1]. One nonlinear method is the Nagao filter [2], which uses masks of different shapes in order to find the neighborhood of least variance, after which a local mean is computed over the selected mask.

When dealing with signal and image processing problems, it is often attractive to perform a multiscale decomposition in which a sequence of approximations at successively coarser resolutions is generated. Such a scheme is mathematically described by the dyadic wavelet transform [3], in which a series expansion of the signal represents the approximation at a resolution that is a power of two. The difference in information between approximations at two consecutive resolutions, the detail signal, can be represented by another series expansion. The reversal of the multiresolution decomposition, i.e., reconstruction, is the synthesis form of the scheme, where a finer representation is constructed via coarse-to-fine scale recursion. The multiresolution representation can be implemented as a subband coding scheme, in which a quadrature mirror filter (QMF) pair gives the lowpass and detail signals. Another implementation is the multiresolution pyramid [4], in which the detail signal at a particular resolution is obtained by subtracting the lowpass signal from the original; such a scheme requires no QMF pair.

In this correspondence, we introduce a multiscale image enhancement method that reduces speckle noise while the sharpness of edges is preserved. The method is based on pyramid reconstruction where, at each resolution, the output from the previous stage is interpolated and combined with a modified version of the interscale (detail) image. The significance of edges as information-bearing highpass features in images has been emphasized by Graham [5]. Useful information with a high-frequency content should therefore be preserved in the reconstruction while noise should be suppressed. In our scheme, edge detection is applied to each lowpass image, thus introducing nonlinearity into the method. The binary edge image is then used to select which pixels of the interscale image are retained. Three different approaches to edge detection are used: the Laplacian-of-Gaussian (LoG) based (linear) technique [1], the ratio-of-averages (RoA) method [6], and a new scheme based on a variance-weighted mean estimator (VWME). An artificial test image degraded by speckle is used to illustrate the performance, and quantitative measures are presented for comparison between different enhancement methods; in this context, the performances of the average filter and the Nagao filter are also given. Furthermore, the method is applied to real ultrasonic images acquired from measurements on a phantom and from echocardiographic scanning.

Manuscript received July 5, 1995; revised July 19, 1996. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Moncef Gabbouj. F. Sattar, L. Floreby, and G. Salomonsson are with the Department of Applied Electronics, Signal Processing Group, Lund University, S-221 00 Lund, Sweden (e-mails: [email protected]; [email protected]; [email protected]). B. Lövström is with the Department of Signal Processing, University of Karlskrona/Ronneby, S-372 25 Ronneby, Sweden (e-mail: [email protected]). Publisher Item Identifier S 1057-7149(97)03894-3.


Fig. 1. One decomposition and one reconstruction stage in multiresolution pyramid.

II. THE NONLINEAR MULTISCALE METHOD

A. Pyramid Decomposition and Reconstruction

The multiscale decomposition of a two-dimensional (2-D) signal, such as an image, can be carried out using separable filters derived from horizontally and vertically directed one-dimensional (1-D) filters. Although this approach has the advantage of requiring only 1-D filters, it has one major disadvantage for the current purpose: the image is decomposed into one lower resolution image and three interscale images, which all represent different components of the detail information. Moreover, separable filters emphasize the horizontal and vertical directions at the expense of other directions. Feauveau proposed a multiscale scheme that has the sharpness of the separable dyadic wavelets but does not suffer from their disadvantages [7]. The method creates a multiresolution pyramid by iteratively applying a nonseparable lowpass filter and subsampling the output by a factor of $\sqrt{2}$ in each dimension. The subsampling operation is achieved by retaining every second pixel in the image; in order to make the pixels of the lower resolution image fit the Cartesian grid, subsampling includes a rotation of the image by $45^{\circ}$. In Fig. 1, one decomposition stage in the multiresolution pyramid is shown together with the corresponding reconstruction stage. The lower resolution image $u_m$ is acquired by applying the lowpass filter $g$ to the image $u_{m-1}$ and subsampling the output. The coarser image $u_m$ is interpolated to give the image $a_m$, which has the same size as $u_{m-1}$ but contains only its lowpass information. The interscale image is given by the difference $d_m = u_{m-1} - a_m$.

Fig. 2. (a) Structure of the multiresolution pyramid. (b) Illustration of decomposition into lowpass and detail images.

A perfect reconstruction of the original image can be achieved by interpolating the low-resolution image component and adding it to the interscale component. In this case, the dashed parts of Fig. 1 are discarded and the elements of $w_m$ are set to one. Multiscale decomposition and reconstruction are sketched schematically in Fig. 2(a); iterative decomposition in four stages is illustrated in Fig. 2(b). In contrast to the algorithm given by Feauveau, the size of each interscale image is equal to that of the input image to the corresponding decomposition stage. This approach is less economical in terms of data storage, but it leads to perfect reconstruction with few demands on the filter: no matching highpass filter is needed and the lowpass filter does not have to be halfband. Moreover, we can process the interscale image without introducing aliasing after reconstruction.

B. The Modified Reconstruction Scheme

An interscale image can be assumed to contain both noise and the outlines of objects. Moreover, an object that covers a large number of pixels has a frequency content that is essentially lowpass. Therefore, most of its edge information is preserved when the image is subject to decimation. Thus, it is easier to detect the boundary of an object in an image of low resolution, because the noise variance is lower (although the localization of the edges is less precise, or the edges have a larger width). These observations lead to a modification of the reconstruction process for the purpose of image enhancement. The modified scheme for one reconstruction stage is obtained by including the dashed parts in Fig. 1. A binary mask $w_m$ at the $m$th scale is determined as the result of edge detection applied to the interpolated lowpass component $\tilde{a}_m$. Its pixel value is 1 if the corresponding pixel in $\tilde{a}_m$ is determined to belong to an edge, and 0 otherwise. Thus, $w_m$ indicates the part of the image where the useful high-frequency components (edges) are preserved. Finally, the interscale image $d_m$ is multiplied by $w_m$ to remove the noise located inside homogeneous regions, and the output of the reconstruction stage is obtained using the modified interscale image.
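To make the reconstruction recursion concrete, the following sketch implements a simplified version of the scheme in Python, assuming a dyadic (factor-of-two) pyramid with bilinear interpolation instead of the quincunx $\sqrt{2}$ decomposition described above; the function names, the use of scipy.ndimage, and the externally supplied lowpass kernel and edge-mask function are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve, zoom

def decompose(img, kernel, stages):
    """Pyramid decomposition: returns coarse images u_0..u_M and details d_1..d_M."""
    u = [img.astype(float)]
    d = []
    for _ in range(stages):
        smoothed = convolve(u[-1], kernel, mode="reflect")   # lowpass filter g
        coarse = smoothed[::2, ::2]                          # dyadic subsampling (paper: quincunx, sqrt(2))
        a = zoom(coarse, 2.0, order=1)[: u[-1].shape[0], : u[-1].shape[1]]
        d.append(u[-1] - a)                                  # interscale image d_m = u_{m-1} - a_m
        u.append(coarse)
    return u, d

def reconstruct(u_coarsest, details, edge_mask_fn):
    """Modified reconstruction: detail pixels are kept only where edges are detected."""
    out = u_coarsest
    for d in reversed(details):
        a = zoom(out, 2.0, order=1)[: d.shape[0], : d.shape[1]]  # interpolated lowpass a~_m
        w = edge_mask_fn(a)                                      # binary mask w_m
        out = a + w * d                                          # suppress detail in homogeneous regions
    return out
```

With an edge_mask_fn that returns an all-ones mask, reconstruct reproduces its input exactly, mirroring the perfect-reconstruction property; a binary edge mask instead suppresses the detail inside homogeneous regions, as in the modified scheme.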


Fig. 3. Filter design. (a) Passband support of the desired filter. (b) Frequency response of the designed filter. (c) Coefficients of the filter $g(n_1, n_2)$.

C. Filter Design

Using the frequency sampling method [1], the desired frequency response $G_d(\omega_1, \omega_2)$, $\omega_1, \omega_2 \in [0, \pi]$, is sampled at equally spaced points, yielding $G_d(k_1, k_2)$, $0 \le k_1, k_2 \le N-1$ ($N$ even). Fig. 3(a) shows, for $G_d(k_1, k_2)$, the diamond-shaped passband, the transition band, and the stopband (marked by "1," "1/2," and "0," respectively) in the frequency plane. The filter is designed to prevent aliasing, and its passband is given by $|\omega_1| + |\omega_2| \in [0, \pi - 2\pi M/N]$, where $M$ ($0 \le M \le N/2$) is an integer [see Fig. 3(a)]. The desired impulse response $g_d(n_1, n_2)$ is defined as the inverse discrete Fourier transform of $G_d(k_1, k_2)$. In order to obtain a computationally more manageable filter, $g_d(n_1, n_2)$ is approximated using a Hamming window of size $L \times L$ ($L \le N$). In our case, the parameters are set to $N = 96$, $M = 16$, and $L = 11$. Fig. 3(b) shows the frequency response of the lowpass filter, and the coefficients of the corresponding filter are given in Fig. 3(c).
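A possible rendering of this design procedure is sketched below; the exact extent of the transition band (taken here to run from the passband edge to $|\omega_1| + |\omega_2| = \pi$) and the final normalization to unit DC gain are assumptions read off Fig. 3(a) rather than details given in the text.

```python
import numpy as np

def design_diamond_lowpass(N=96, M=16, L=11):
    """Frequency-sampling design of the diamond-shaped lowpass filter (a sketch)."""
    k = np.arange(N)
    w = 2.0 * np.pi * (k - N // 2) / N                 # frequency samples in [-pi, pi)
    W1, W2 = np.meshgrid(w, w, indexing="ij")
    dist = np.abs(W1) + np.abs(W2)                     # diamond-shaped |w1| + |w2|
    edge = np.pi - 2.0 * np.pi * M / N                 # passband edge
    Gd = np.zeros((N, N))
    Gd[dist <= edge] = 1.0                             # passband samples ("1")
    Gd[(dist > edge) & (dist < np.pi)] = 0.5           # assumed transition band ("1/2")
    # Gd is real and symmetric, so the inverse DFT gives a (nearly) zero-phase response.
    gd = np.fft.fftshift(np.real(np.fft.ifft2(np.fft.ifftshift(Gd))))
    c, h = N // 2, L // 2
    window = np.outer(np.hamming(L), np.hamming(L))    # L x L Hamming window
    g = gd[c - h:c + h + 1, c - h:c + h + 1] * window  # truncate to L x L
    return g / g.sum()                                 # normalize to unit DC gain (added choice)
```

The returned kernel can be passed directly as the lowpass filter in the pyramid sketch given after Section II-B.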


D. Edge Detectors

A step edge in an image is defined as the transition between regions of different intensities [8]. Here, edge detection is used only to determine which pixels in each interscale image to include in the corresponding reconstruction stage. For each stage in the image decomposition, the resolution is decreased by a factor of $\sqrt{2}$ in each spatial dimension. As the resolution is decreased, the number of pixels used for representing the image is decreased by the corresponding factor. Thus, using the same edge detector for all stages, the spatial precision relative to an object's size varies through the multiresolution pyramid.

For the linear edge detector [1], edges can be defined as the zero crossings of the output when an image is filtered through a LoG filter. It corresponds to a bandpass filter whose passband is determined by the spatial dispersion $\sigma_G$ of the underlying Gaussian pulse. Among linear filters, it can be said to have the best ability to localize edges at a given scale. In our algorithm, the images are convolved with a sampled version of the LoG filter, and the zero crossings of the output are declared to be edges only when the magnitude of the brightness gradient exceeds a threshold.

The second edge detector uses the RoA as a measure of the contrast between different image regions. Within the processing window, a median line is defined by the center row, column, or diagonal. The average gray level on each side of each such line is computed, and the maximum ratio is selected. In Fig. 4, the regions in which the averages are computed are marked by "A" and "B," while the pixels of the median lines are marked by filled circles. Making some assumptions about the image statistics, the threshold selection problem can be considered equivalent to setting a constant false alarm rate (CFAR). Using this approach, however, one can only expect to detect an edge delimiting two regions if the contrast between them significantly exceeds the selected threshold.
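The RoA rule can be sketched as follows for a 3 × 3 window; the pixel groupings on either side of the diagonal median lines are inferred from the description of Fig. 4 and are therefore an assumption, as are the hypothetical function name and the small constant guarding against division by zero.

```python
import numpy as np

def roa_edge_mask(img, threshold=2.0):
    """Ratio-of-averages edge mask over 3 x 3 windows (an unoptimized sketch)."""
    p = np.pad(img.astype(float), 1, mode="reflect")
    H, W = img.shape
    mask = np.zeros((H, W), dtype=np.uint8)
    # Offsets of the two three-pixel regions on either side of each median line.
    splits = [
        ([(-1, -1), (-1, 0), (-1, 1)], [(1, -1), (1, 0), (1, 1)]),    # center row
        ([(-1, -1), (0, -1), (1, -1)], [(-1, 1), (0, 1), (1, 1)]),    # center column
        ([(-1, 0), (-1, 1), (0, 1)],   [(0, -1), (1, -1), (1, 0)]),   # main diagonal
        ([(-1, -1), (-1, 0), (0, -1)], [(0, 1), (1, 0), (1, 1)]),     # anti-diagonal
    ]
    eps = 1e-12                                        # guard against division by zero
    for i in range(H):
        for j in range(W):
            r = 0.0
            for side_a, side_b in splits:
                a = np.mean([p[i + 1 + di, j + 1 + dj] for di, dj in side_a]) + eps
                b = np.mean([p[i + 1 + di, j + 1 + dj] for di, dj in side_b]) + eps
                r = max(r, a / b, b / a)               # keep the largest contrast ratio
            mask[i, j] = 1 if r > threshold else 0
    return mask
```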


Fig. 4. Regions in which averages are computed for the 3 × 3-pixel RoA edge detector.

The third method for constructing the mask $w_m$ is based on a new nonlinear edge detection technique. It uses a VWME computed in a square window centered at pixel $(n_1, n_2)$. The pixels within the window are arranged into eight groups, $\{x(n_1 - i, n_2 - j) \mid (i, j) \in s_p\}$, $p = 1, 2, \ldots, 8$, where the $s_p$ are the subsets of the window indicated in Fig. 5. The mean of the image intensity in subset number $p$ is denoted by $\mu_p(n_1, n_2)$, and the corresponding sample variance by $\sigma_p^2(n_1, n_2)$. The variance-weighted mean estimate is given by

$$\mu_{\mathrm{VWME}}(n_1, n_2) = \frac{\sum_{p=1}^{8} b_p(n_1, n_2)\, \mu_p(n_1, n_2)}{\sum_{p=1}^{8} b_p(n_1, n_2)} \qquad (1)$$

where $b_p(n_1, n_2) = 1/\sigma_p^2(n_1, n_2)$. In the degenerate case where $\sigma_p^2$ approaches zero for some $p$, (1) should be interpreted in the limit sense. For edge detection, $\mu_{\mathrm{VWME}}(n_1, n_2)$ is compared with the arithmetic mean $\mu_{\mathrm{AM}}(n_1, n_2)$ computed over the entire window. If the window lies in a homogeneous region, the VWME performs similarly to an arithmetic mean operator. If, on the other hand, an edge is included in the window, the weights vary due to the abrupt change in the variance, and the edge can be located. For each pixel $(n_1, n_2)$, the decision statistic is given by $R(n_1, n_2) = \ln[\mu_{\mathrm{VWME}}(n_1, n_2)] - \ln[\mu_{\mathrm{AM}}(n_1, n_2)]$. In a homogeneous region, this quantity can be expected to be close to zero, which is not the case, e.g., at an edge. Thus, the mask elements are set to one for those pixels where the magnitude of $R$ exceeds a threshold.
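A sketch of the VWME detector is given below. Since Fig. 5, which defines the subsets $s_p$, is not reproduced here, the eight groups are assumed to be the eight 3 × 3 sub-windows surrounding the center of the 5 × 5 window; the regularizing constant eps is likewise an added safeguard for the degenerate zero-variance case mentioned above.

```python
import numpy as np

def vwme_edge_mask(img, threshold=0.1):
    """VWME edge mask on a 5 x 5 window (the eight subsets s_p are assumed here
    to be the eight 3 x 3 sub-windows surrounding the window center)."""
    p = np.pad(img.astype(float), 2, mode="reflect")
    H, W = img.shape
    mask = np.zeros((H, W), dtype=np.uint8)
    centers = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    eps = 1e-12                                            # regularizes 1/sigma^2 in flat subsets
    for i in range(H):
        for j in range(W):
            win = p[i:i + 5, j:j + 5]
            num = den = 0.0
            for di, dj in centers:
                sub = win[1 + di:4 + di, 1 + dj:4 + dj]    # 3 x 3 subset around (2+di, 2+dj)
                b = 1.0 / (sub.var() + eps)                # b_p = 1 / sigma_p^2
                num += b * sub.mean()                      # b_p * mu_p
                den += b
            mu_vwme = num / den
            mu_am = win.mean()                             # arithmetic mean over the whole window
            R = np.log(mu_vwme + eps) - np.log(mu_am + eps)
            mask[i, j] = 1 if abs(R) > threshold else 0
    return mask
```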

Fig. 5. Window of size 5 × 5 used in the VWME approach.

III. SIMULATION RESULTS

A. Test Image and Noise Model

Ultrasonic speckle can be modeled as the result of a number of independent random-phase scatterings accumulating coherently at the transducer face [9]. These scatterings originate from inhomogeneities of size comparable to the wavelength of the ultrasound, and their density in a particular region determines the average speckle magnitude. From a reference image containing different constant-level regions, a speckle image is thus obtained after multiplication by a random field with unit mean. A simulated speckle image is represented as

$$I(n_1, n_2) = s(n_1, n_2) \cdot v(n_1, n_2) \qquad (2)$$

where $s(n_1, n_2)$ is the reference image and $v(n_1, n_2)$ is a random process representing the variations in speckle amplitude. The 256 × 256-pixel reference image $s(n_1, n_2)$ shown in Fig. 6(a) is generated as follows. A $45^{\circ}$-rotated square, a ring, a rectangle, and a solid rectangle having gray levels 100, 150, 175, and 200, respectively, are placed at different locations in the image. Near the image border, there is a stripe with gray level 200. The background gray level is set to 50. The gray level at one point in a speckle image is modeled as Rayleigh distributed, while its second-order behavior is described by a spatial autocorrelation function, which can be derived from a model or estimated from measured data. We generate spatially correlated speckle noise $v(n_1, n_2)$ by lowpass filtering a complex Gaussian random field and taking the magnitude of the filtered output. The speckle image $I(n_1, n_2)$ is shown in Fig. 6(b). Since the local average of the speckle image in a homogeneous region approximates the reference image, the latter can be used in quantitative comparisons. The image obtained from the speckle model can be said to simulate a cross section containing regions with various scatterer densities; a region with different speckle magnitude can thus represent, e.g., a lesion in a medical echogram. Thus, the test image of Fig. 6(b) contains no specular echoes from any boundaries, and the objects are completely defined by their different speckle levels. The point signal-to-noise ratio (SNR) of a speckle image has been defined and, in the case of fully developed speckle, it equals 1.91 (5.63 dB) [9].
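The speckle model of (2) can be simulated along the following lines; the moving-average lowpass filter and its size are illustrative stand-ins for the (unspecified) correlation model used by the authors, and the unit-mean property is enforced by an explicit normalization.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def simulate_speckle(reference, corr_size=3, seed=0):
    """Multiply a reference image by spatially correlated, unit-mean speckle, as in (2)."""
    rng = np.random.default_rng(seed)
    shape = reference.shape
    # Lowpass-filtered complex Gaussian field; its magnitude is Rayleigh-like and correlated.
    re = uniform_filter(rng.standard_normal(shape), corr_size)
    im = uniform_filter(rng.standard_normal(shape), corr_size)
    v = np.hypot(re, im)
    v /= v.mean()                                   # enforce unit-mean speckle amplitude
    return reference.astype(float) * v              # I(n1, n2) = s(n1, n2) * v(n1, n2)
```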

B. Quality Measures

For the quantitative evaluation, a region of interest (ROI) is selected in the test image. This ROI has size 206 × 128 pixels and is marked in Fig. 6(a). As a measure of speckle suppression, the mean square error (MSE) is perhaps the one closest at hand. Relating it to the reference signal's energy, we obtain the output SNR, which in its "pointwise normalized" version is given by

$$\mathrm{SNR}_{\mathrm{out}} = \frac{\text{Number of pixels in ROI}}{\displaystyle\sum_{(i,j) \in \mathrm{ROI}} \left[ 1 - \hat{s}(i,j)/s(i,j) \right]^2} \qquad (3)$$

where $\hat{s}(i,j)$ is the estimated pixel intensity. This quantity has the property of giving every pixel in the image equal importance, irrespective of the corresponding gray-level value in the reference image. The SNR is not always an accurate measure of noise suppression in images, since it does not well reflect, e.g., the preservation of edges. Therefore, we propose a supplementary performance evaluation based on correlation. Thus, as a measure of the noise suppression, we use

$$\frac{\Gamma(s - \bar{s},\, \hat{s} - \bar{\hat{s}})}{\sqrt{\Gamma(s - \bar{s},\, s - \bar{s}) \cdot \Gamma(\hat{s} - \bar{\hat{s}},\, \hat{s} - \bar{\hat{s}})}} \qquad (4)$$

where $\bar{s}$ and $\bar{\hat{s}}$ are the mean values in the ROI of $s(i,j)$ and $\hat{s}(i,j)$, respectively, and

$$\Gamma(s_1, s_2) = \sum_{(i,j) \in \mathrm{ROI}} s_1(i,j) \cdot s_2(i,j). \qquad (5)$$


Fig. 6. Simulation results. (a) Reference image. (b) Corresponding speckle image. (c) Image enhanced by the multiscale method (LoG approach). (d) Image enhanced by the multiscale method (RoA approach). (e) Image enhanced by the multiscale method (VWME approach). (f) Image enhanced by the Nagao filter.


Fig. 7. (a) Map of the region of the phantom used. (b) Measured echographic image of phantom. (c) Image after multiscale enhancement (LoG approach). (d) Image after multiscale enhancement (RoA approach). (e) Image after multiscale enhancement (VWME approach).

Furthermore, as a measure of the edge preservation, we use

$$\frac{\Gamma(\Delta s - \overline{\Delta s},\, \Delta\hat{s} - \overline{\Delta\hat{s}})}{\sqrt{\Gamma(\Delta s - \overline{\Delta s},\, \Delta s - \overline{\Delta s}) \cdot \Gamma(\Delta\hat{s} - \overline{\Delta\hat{s}},\, \Delta\hat{s} - \overline{\Delta\hat{s}})}} \qquad (6)$$

where $\Delta s(i,j)$ is a highpass filtered version of $s(i,j)$, obtained with a 3 × 3-pixel standard approximation of the Laplacian operator, and $\Delta\hat{s}(i,j)$ is defined analogously for $\hat{s}(i,j)$. The correlation measures of (4) and (6) should be high, i.e., close to unity, when the estimated image is similar to the reference image.
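The three measures might be computed as sketched below, treating the ROI as a boolean mask; the particular 4-neighbor Laplacian kernel and the ROI handling are assumptions, and SNR_out is returned as a plain ratio (10 log10 of it gives dB figures such as those quoted in Table I).

```python
import numpy as np
from scipy.ndimage import convolve

def snr_out(s, s_hat, roi):
    """Pointwise normalized output SNR of (3); roi is a boolean mask."""
    err = 1.0 - s_hat[roi] / s[roi]
    return roi.sum() / np.sum(err ** 2)

def norm_corr(x, y, roi):
    """Normalized correlation of (4): Gamma applied to mean-removed fields in the ROI."""
    xr = x[roi] - x[roi].mean()
    yr = y[roi] - y[roi].mean()
    return np.sum(xr * yr) / np.sqrt(np.sum(xr * xr) * np.sum(yr * yr))

def edge_preservation(s, s_hat, roi):
    """Edge-preservation measure of (6): correlation of Laplacian-filtered images."""
    lap = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])   # assumed 3 x 3 Laplacian
    return norm_corr(convolve(s.astype(float), lap, mode="reflect"),
                     convolve(s_hat.astype(float), lap, mode="reflect"), roi)
```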

C. Results

The test image of Fig. 6(b) was enhanced using the present multiscale method with the three described approaches to edge detection.


Fig. 8. (a) Echocardiographic image. (b) Image after multiscale enhancement (LoG approach). (c) Image after multiscale enhancement (RoA approach). (d) Image after multiscale enhancement (VWME approach).

The images resulting from five-stage reconstruction are shown in Fig. 6(c)–(e). For the LoG-based (linear) edge detector, we used $\sigma_G = 1.0$, which implies a "reasonably" small amount of aliasing. By truncating this filter to 7 × 7 pixels, less than 0.003% of its energy was discarded. The zero crossings of the filtered image were defined as those pixels for which the 4-neighbors indicated a simple sign shift. To be declared an edge, the image gradient magnitude also had to exceed a global threshold, which was automatically selected at each scale as the first local minimum of the smoothed histogram. The resulting image is shown in Fig. 6(c). In fact, we have found that a 3 × 3-pixel standard approximation of the Laplacian [1] yields results comparable to those of the LoG filter.
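A sketch of this LoG procedure is given below; SciPy's gaussian_laplace and gaussian_gradient_magnitude stand in for the sampled 7 × 7 LoG filter and the gradient computation, and details such as the number of histogram bins, the smoothing length, and taking the histogram over the gradient magnitude are assumptions not stated in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, gaussian_gradient_magnitude

def log_edge_mask(img, sigma=1.0, bins=256, smooth=5):
    """LoG zero-crossing edge mask with a histogram-based gradient threshold (a sketch)."""
    img = img.astype(float)
    log_out = gaussian_laplace(img, sigma)
    grad = gaussian_gradient_magnitude(img, sigma)
    # A sign change toward a 4-neighbor marks a zero crossing.
    zc = np.zeros(img.shape, dtype=bool)
    zc[:-1, :] |= np.signbit(log_out[:-1, :]) != np.signbit(log_out[1:, :])
    zc[:, :-1] |= np.signbit(log_out[:, :-1]) != np.signbit(log_out[:, 1:])
    # Global threshold: first local minimum of the smoothed gradient-magnitude histogram.
    hist, edges = np.histogram(grad, bins=bins)
    hist = np.convolve(hist, np.ones(smooth) / smooth, mode="same")
    mins = np.where((hist[1:-1] < hist[:-2]) & (hist[1:-1] <= hist[2:]))[0] + 1
    thr = edges[mins[0]] if mins.size else grad.mean()
    return (zc & (grad > thr)).astype(np.uint8)
```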

A window of size 3 × 3 pixels was used with the RoA edge detector. Thus, three-pixel neighborhoods were compared pairwise, and edges were declared if the maximum ratio exceeded the threshold $T_{\mathrm{RoA}} = 2.0$. The result of image enhancement is shown in Fig. 6(d). The value of the threshold, 2.0, was equal to the ratio between the gray levels of the rotated square and the background; this object was therefore partly distorted. A lower threshold would, however, result in a larger number of false edges and, thereby, poorer noise suppression.

Nonlinear VWME edge detection was performed using a window of size 5 × 5. For every reconstruction stage, the threshold was set to $T_{\mathrm{VWME}} = 0.1$, which was high enough to reject most of the detail information in homogeneous regions [see Fig. 6(e)]. The choice of threshold, however, did not seem to be crucial for the quality of the enhanced image.

For comparison with another edge-preserving smoothing technique, the simulated speckle image was processed by a 5 × 5-pixel Nagao filter. The resulting image is displayed in Fig. 6(f). Visual examination of the images in Fig. 6 shows that, for a speckle image, the present multiscale method performs edge-preserving smoothing more efficiently than the Nagao filter.

The performance measures of (3), (4), and (6) were evaluated for images enhanced by the described multiscale method. The highest performance was achieved using the RoA edge detector. A comparison of the present method with some other techniques in terms of $\mathrm{SNR}_{\mathrm{out}}$ and the correlation measures (4) and (6) is given in Table I.


TABLE I. Quantitative results from the simulation example: the quality measures of (4) and (6) (in percent) and $\mathrm{SNR}_{\mathrm{out}}$ (in dB).


All results concern five-stage processing; however, similar performance was achieved using four or six stages. The number of stages is limited by scale considerations: the resolution imposes a lower limit on the object size that can be represented, and the finite support of the images makes boundary effects more severe at coarse resolutions. At scales larger than that of the innermost stage, no highpass information, desired or undesired, is suppressed. If too many stages are used, an edge may be missed at a coarse scale and thereby at each finer scale.

In the enhanced images, a high gray-level variance remained at the boundaries of objects, where no detail information was suppressed. In order to further improve the performance of the method, postprocessing can be applied to the output from the final reconstruction stage. In our case, this was carried out by means of oriented smoothing, where the average gradient in a 3 × 3-pixel window was first computed. If a significant direction was at hand, the new pixel value was determined by smoothing in an approximately perpendicular direction; otherwise, nonoriented smoothing was applied. Improved results in terms of $\mathrm{SNR}_{\mathrm{out}}$ and the measures (4) and (6) were obtained by oriented smoothing, as can be observed in Table I.
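The oriented smoothing step might look as follows; the gradient estimate, the magnitude threshold deciding whether a "significant direction" is at hand, and the quantization of the perpendicular direction into four cases are all illustrative choices rather than the authors' exact procedure.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def oriented_smoothing(img, mag_thresh=10.0):
    """Oriented smoothing postprocessing (a sketch; threshold and direction handling assumed)."""
    img = img.astype(float)
    gx = uniform_filter(sobel(img, axis=1), 3)     # gradient averaged over a 3 x 3 window
    gy = uniform_filter(sobel(img, axis=0), 3)
    mag = np.hypot(gx, gy)
    # Direction perpendicular to the gradient, quantized to 0, 45, 90, or 135 degrees.
    perp = np.round((np.arctan2(gy, gx) + np.pi / 2) / (np.pi / 4)).astype(int) % 4
    steps = {0: (0, 1), 1: (-1, 1), 2: (-1, 0), 3: (-1, -1)}
    p = np.pad(img, 1, mode="reflect")
    out = uniform_filter(img, 3)                   # default: nonoriented 3 x 3 smoothing
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            if mag[i, j] > mag_thresh:             # a "significant direction" is at hand
                di, dj = steps[int(perp[i, j])]
                out[i, j] = (p[1 + i - di, 1 + j - dj]
                             + p[1 + i, 1 + j]
                             + p[1 + i + di, 1 + j + dj]) / 3.0
    return out
```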

IV. APPLICATION TO REAL IMAGES

A. Echographic Phantom Image

In the first example, we used an echographic image of a phantom from Nuclear Associates (model 84-317). Fig. 7(a) shows a map of the cross section, 96 mm × 65 mm, in which the measurement was made. The large bright "discs" represent "high scatter" regions with diameters 8 mm and 12 mm, respectively, while the three dark "discs" indicate "low scatter" regions with diameters 2, 4, and 6 mm, respectively. In addition, the phantom contains a number of fine reflecting targets for calibration purposes. The rectangular region was reproduced by scanning a 2.9-MHz ultrasonic beam through 108 parallel lines and sampling each envelope-detected echo at 440 points. In order to give nearly square pixels, interpolation was performed in the lateral direction. The acquired image is shown in Fig. 7(b). The echogram was processed using the present multiscale method, with the same parameters and number of stages as in Section III. The resulting images from the three different approaches to edge detection are shown in Fig. 7(c)–(e). It can be observed that the image is smoothed except near the edges of objects; the background, as well as the area inside the objects, becomes more homogeneous without being significantly blurred at the edges.

B. Echocardiographic Image

As a second example, the multiscale method was tested on an ultrasonic image of the heart. Transesophageal scanning was used to acquire a short-axis cross section of the left ventricle. The received echo data were scan converted onto a Cartesian grid and displayed as a fan-shaped image. The ventricular wall causes echoes, which appear as bright spots in the echocardiogram. However, these regions have a grainy texture, and sometimes parts of the tissue are almost invisible in the image. Furthermore, there is speckle noise, which in this case can be considered undesirable. An echocardiographic image is shown in Fig. 8(a). The resulting enhanced images using the different methods for edge detection are presented in Fig. 8(b)–(d). The parameters were the same as those used for the phantom image. In all cases, the speckle patterns were considerably suppressed, and the sharpness of the heart wall boundary was preserved in most parts.

V. CONCLUSIONS

A nonlinear multiscale method for image enhancement was presented. It worked as a space-varying, speckle-suppressing filter by removing high-frequency components at each scale only where no edges were detected. The results showed that the method was able to bring out homogeneous areas of constant speckle level while the edges were preserved. In the multiscale scheme, we used three different approaches to edge detection, each of which required some kind of threshold selection. For the linear (LoG) approach, this was accomplished automatically at each scale, and the method worked satisfactorily on the considered set of images. The RoA method gave the highest performance among the three edge detectors, but required a trade-off between a low rate of false edges and the ability to discriminate edges of limited contrast. The VWME method involved essentially the same compromise, but could handle smaller image contrasts than the RoA approach.

REFERENCES

[1] J. S. Lim, Two-Dimensional Signal and Image Processing. Englewood Cliffs, NJ: Prentice-Hall, 1990.
[2] M. Nagao and T. Matsuyama, "Edge preserving smoothing," Comput. Graphics Image Processing, vol. 9, pp. 394–407, 1979.
[3] S. G. Mallat, "A theory for multiresolution signal decomposition: The wavelet representation," IEEE Trans. Pattern Anal. Machine Intell., vol. 11, pp. 674–693, July 1989.
[4] P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code," IEEE Trans. Commun., vol. COMM-31, pp. 532–540, Apr. 1983.
[5] D. N. Graham, "Image transmission by two-dimensional contour coding," Proc. IEEE, vol. 55, pp. 336–346, Mar. 1967.
[6] R. Touzi, A. Lopes, and P. Bousquet, "A statistical and geometrical edge detector for SAR images," IEEE Trans. Geosci. Remote Sensing, vol. 26, pp. 764–773, Nov. 1988.
[7] J. C. Feauveau, "Analyse multirésolution pour les images avec un facteur de résolution √2," Traitement du Signal, vol. 7, no. 2, pp. 117–128, 1990.
[8] J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-8, pp. 679–698, Nov. 1986.
[9] J. T. M. Verhoeven, J. M. Thijssen, and A. G. M. Theeuwes, "Improvement of lesion detection by echographic image processing: Signal-to-noise ratio imaging," Ultrason. Imaging, vol. 13, no. 3, pp. 238–251, 1991.
