Disparity Analysis: A Wavelet Transform Approach

Jean-Pierre Djamdji and Albert Bijaoui

Abstract—We describe a new method for the computation of a disparity map between a couple of stereo images. The disparities are computed along the x and y axis, respectively, at each point of the image. In order to compute the disparity field, first a set of ground control points is detected in both images. Next, a mapping of the disparities over the entire image is done using the kriging method. Finally, the stereo couple of images is registered using the disparity maps.

I. INTRODUCTION

DIFFERENCES in images of real world scenes may be induced by the relative motion of the camera and the scene, by the relative displacement of two cameras, or by the motion of objects in the scene. These differences are important because they contain enough information to allow a partial reconstruction of the three-dimensional structure of the scene from its two-dimensional projections. When such differences occur between two images, we say that there is a disparity between them, which may be represented by a vector field mapping one image onto the other [2]. The evaluation of the disparity field has been called the correspondence problem [8]. Time-varying images of the real world can provide kinematical, dynamical, and structural information [23]. The disparity field can be interpreted into meaningful statements about the scene, such as depth, velocity, and shape.

Disparity analysis may be broadly defined as the evaluation of the geometric differences between two or more images of the same or similar scenes. The differences in remote sensing are mainly the result of different imaging directions. The goal of the analysis is to assign disparities, which are represented as two-dimensional vectors in the image plane, to a collection of points in one of the images. Disparity analysis is useful for image understanding in several ways. There is information in a disparate pair of images that is difficult or even impossible to find in any single image. Disparity is therefore a very general property of images which may be used in a variety of situations.

Our purpose is to determine the disparities ε_x(i, j) and ε_y(i, j), respectively in the x and y direction, at each point (i, j) of the image. Our approach relies on two main steps:
- the detection of a set of ground control points (GCP's), using a multiresolution approach [6], over which the disparities are computed;
- a mapping of the disparities over the entire image by the kriging method.
An example of a situation where the disparities are useful, the geometrical registration of a stereo pair of images, will be presented.

Manuscript received October 19, 1993. J.-P. Djamdji was with the Observatoire de la Côte d'Azur, Nice, France, and the Ecologie et Phytosociologie, Université de Nice, France. He is now with the University of California, Riverside, College of Engineering, Riverside, CA 92521-0425 USA. A. Bijaoui is with the Observatoire de la Côte d'Azur, Nice Cedex 04, France. IEEE Log Number 9406428.

II. DISPARITY DEFINITION

Let P be a point in the real world, and P_i^1 and P_i^2 the images of this point in frames 1 and 2, respectively. These two points are similar, being the image plane projections of the same real world surface point. Consequently, matching P_i^1 with P_i^2 is the same as assigning to P_i^1 a disparity with respect to image 2 of

d_i = (x_i^2 - x_i^1, y_i^2 - y_i^1).   (1)

We shall modify this classical definition by getting rid of the deformation polynomial model underlying the geometrical registration. Instead, we shall consider the disparity as being the divergence between two identical points with respect to the deformation model considered. Let (X_i, Y_i) and (x_i, y_i) be the coordinates of an identical point in the reference and the working frames, respectively; then

(x_i, y_i) = (X_i, Y_i).   (2)

If the viewing angles were the same, (X_i, Y_i) and (x_i, y_i) would be related by

x_i = f(X_i, Y_i)
y_i = g(X_i, Y_i)   (3)

where f and g are polynomials that take into account the deformation between the two frames. But when the viewing angles are different, the model considered previously is no longer valid, and a correction term has to be introduced in order to take into account the deformation introduced by the viewing angles. Thus the previous relationship can be rewritten as

x_i = f(X_i, Y_i) + ε_{x_i}(X_i, Y_i)
y_i = g(X_i, Y_i) + ε_{y_i}(X_i, Y_i)   (4)

where ε_{x_i} and ε_{y_i} describe the new deformation. The disparity at the point (X_i, Y_i) along the x and y axis is then given by ε_{x_i} and ε_{y_i}, respectively.

III. EXTRACTING GCP's

The extraction of the GCP's is obtained via a multiscale approach, and its implementation is given in [6]. The method is based on a special implementation of the discrete wavelet transform, the "à trous" algorithm [11], [20]. A brief description of the algorithm is given in Appendix B. The matched GCP's are the maxima of the structures detected in the wavelet images over a set of dyadic scales, the multiscale scheme starting from the coarsest resolution and proceeding to the finest one. In the case of scenes acquired under the same viewing angle, we have shown [6], [7] that we were able to detect and correctly match the GCP's, which in turn enables us, through this multiscale scheme, to register the images with a subpixel accuracy. For images taken under different viewing angles, a residue remains, due to the disparity between these images. Nevertheless, we shall use this multiscale procedure for the GCP extraction.
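By way of illustration, the detection step described above might be sketched as follows for a single wavelet plane W_nl; the function name, the 3 × 3 neighborhood used for the local-maximum test, and the way the constant k is supplied are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_gcp_candidates(w_plane, k=3.0, neighborhood=3):
    """Return (row, col) local maxima of a wavelet plane above k * sigma.

    w_plane:      one wavelet plane W_nl produced by the "a trous" transform
    k:            threshold constant (increases at coarser resolutions)
    neighborhood: window size used for the local-maximum test
    """
    sigma = w_plane.std()                      # standard deviation of the plane
    significant = w_plane > k * sigma          # keep only significant structures
    local_max = (w_plane == maximum_filter(w_plane, size=neighborhood))
    rows, cols = np.nonzero(significant & local_max)
    return list(zip(rows.tolist(), cols.tolist()))
```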

IV. DATA INTERPOLATION WITH THE KRIGING METHOD

At any given step of the matching procedure we have a set of matched points, which leads to the disparities ε_x, ε_y. Our aim is to fully map these functions, and for this purpose we shall use an interpolation technique known as kriging, based upon the theory of regionalized variables [13]. The theory of regionalized variables was developed by G. Matheron in the late 1950's. Matheron demonstrated that spatially dependent variables can be estimated on the basis of their spatial structure and known samples [14]. A random variable distributed in space is said to be regionalized. These variables, because of their spatial aspect, possess both random and structured components. Two regionalized variables separated by a distance vector h are not independent, but are correlated by a relation dependent upon h [5]. Usually, as the length of h increases, the similarity between two regionalized variables decreases. One way of examining the spatial structure of a regionalized variable is to analytically relate the change of the variable as a function of the separating distance h. The function which defines the spatial correlation or structure of a regionalized function is the variogram, given by

\gamma(h) = \frac{1}{2} E\{[f(x) - f(x + h)]^2\}   (5)
\gamma(h) = C(0) - C(h)   (6)

where C(h) is the covariance function, E the mathematical expectation, and h the lag or separating distance. Equation (6) holds only if the covariance function is defined. The shape of the variogram reflects the degree of correlation between samples. A variogram function that rises as h increases indicates that the spatial correlation decreases as more distant samples are chosen, until a separation distance is reached at which knowledge of one sample tells us nothing about the others (uncorrelated) [9]. Once the spatial structure of a regionalized variable has been demonstrated through computation of the variogram, this structure can be used to estimate the value of the variable at unsampled locations. This estimation process is known as kriging. Kriging is optimal because it makes use of information on the spatial dependence of the property of interest, represented either as the variogram or as the covariance function. In order to use the variogram for kriging, a mathematical model must be fitted [15], [16]. This model must meet certain criteria, and several so-called "authorized models" are available that do so [12], [17]. In practice, a good numerical approximation of a two-dimensional image variogram is given by [18], [19]

\gamma^*(h) = \frac{1}{2 N_l (N_c - h)} \sum_{i=1}^{N_l} \sum_{j=1}^{N_c - h} [f(i, j + h) - f(i, j)]^2   (7)

where N_l and N_c are respectively the number of lines and columns of the image, and h is a distance expressed in pixels.
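In the procedure of this paper the real variogram is actually computed from the scattered GCP disparities rather than from a full image grid. A minimal scattered-data estimator of that kind, binned by pixel distance, might look like the sketch below; the function name, the unit bin width, and the 300-pixel default lag (echoing the choice made in Section VI) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def empirical_variogram(coords, values, max_lag=300, bin_width=1.0):
    """Scattered-data variogram: gamma(h) = 0.5 * mean[(f(p) - f(q))^2]
    over point pairs whose separation falls into each distance bin (pixels)."""
    coords = np.asarray(coords, dtype=float)   # shape (n, 2): GCP positions
    values = np.asarray(values, dtype=float)   # one disparity component per GCP
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)     # count each pair once
    d, sq = d[iu], sq[iu]
    bins = np.arange(bin_width, max_lag + bin_width, bin_width)
    gamma = np.full(bins.shape, np.nan)
    for b, h in enumerate(bins):
        mask = (d > h - bin_width) & (d <= h)
        if mask.any():
            gamma[b] = sq[mask].mean()
    return bins, gamma
```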

Kriging is a means of weighted local averaging in which the weights λ_i, i = 1, ..., n, are chosen so as to give an unbiased estimate f* at a point x_0 while at the same time minimizing the estimation variance. Thus kriging can be thought of as a special kind of linear smoothing filter [1]. Often, the weights are all the same and are therefore equal to 1/n. When the property of interest is spatially dependent, however, a more precise estimate is obtained if the weights are chosen according to their influence on the point to be estimated. Kriging provides a means by which this is achieved. This estimate is given by

f^*(x_0) = \sum_{i=1}^{n} \lambda_i f(x_i).   (8)

In practice, a neighborhood of N points is defined outside which the observations carry so little weight that they can be ignored. We shall call this neighborhood the kriging search window. Minimizing the variance and introducing a Lagrange multiplier μ in order to obtain unbiasedness leads to the following system of linear equations:

\sum_{j=1}^{n} \lambda_j \gamma(x_i - x_j) + \mu = \gamma(x_i - x_0), \quad i = 1, \ldots, n; \qquad \sum_{j=1}^{n} \lambda_j = 1.   (9)

A more detailed derivation can be found in [12]. The kriging variance, which is the estimation error, is then given by

E[\{f^*(x_0) - f(x_0)\}^2] = \sum_{i=1}^{n} \lambda_i \gamma(x_i - x_0) + \mu.   (10)
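An ordinary-kriging estimator with a search window restricted to the nearest samples, following the system (9) and the variance expression (10), might be sketched as follows; the function name, the callable variogram model, and the neighborhood handling are assumptions rather than the authors' implementation.

```python
import numpy as np

def ordinary_kriging(coords, values, x0, variogram, n_neighbors=10):
    """Ordinary kriging estimate at x0 from scattered samples.

    variogram:   callable h -> gamma(h), the fitted theoretical model
    n_neighbors: size of the kriging search window (nearest samples kept)
    Solves  sum_j lambda_j gamma(|xi - xj|) + mu = gamma(|xi - x0|),
            sum_j lambda_j = 1,   then returns  sum_i lambda_i f(xi).
    """
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    x0 = np.asarray(x0, dtype=float)
    # keep the n nearest samples (the "kriging search window")
    idx = np.argsort(np.linalg.norm(coords - x0, axis=1))[:n_neighbors]
    pts, obs = coords[idx], values[idx]
    n = len(pts)
    # left-hand side: gamma between samples, bordered by the unbiasedness row
    a = np.ones((n + 1, n + 1))
    a[:n, :n] = variogram(np.linalg.norm(pts[:, None] - pts[None, :], axis=-1))
    a[n, n] = 0.0
    # right-hand side: gamma between each sample and the target point
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(pts - x0, axis=1))
    sol = np.linalg.solve(a, b)
    lam, mu = sol[:n], sol[n]
    estimate = float(lam @ obs)
    variance = float(lam @ b[:n] + mu)      # kriging variance, cf. (10)
    return estimate, variance
```

A linear theoretical model, as used later in Section VI, can then be passed as, e.g., `lambda h: c0 + b * h` once c0 and b have been fitted to the real variogram.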

V. DISPARITY MAPPING WITH THE WAVELET TRANSFORM

Our aim is to compute the disparity at each point of a couple of stereo images. For this purpose we shall introduce an iterative process for the computation of the disparity values, based on a multiresolution approach.

We begin by defining some of the terminology used in the following. We shall call:
- real disparities: the disparities computed on the discrete set of GCP's;
- disparity map: the disparities estimated at each point of the image;
- real variogram: the variogram computed from the discrete set of GCP's;
- theoretical variogram model: the theoretical model used in the kriging procedure, based on a least squares fit of the real variogram values.

Let I_n, n ∈ (1, N), N = 2, be the two stereo images to be processed. Let us consider I_1 and I_2 as the reference and the working image, respectively. Let M be the largest


distance in the pixel space between two identical features. The matching must first be processed at the largest scale L, 2^{L-1} < M ≤ 2^L, in order to automatically match identical features without errors [4], [6].

On each image I_n, we compute the wavelet transform with the so-called "à trous" algorithm up to the scale L. We then obtain N × L smoothed images S_nl(i, j) and N × L wavelet images W_nl(i, j), n ∈ (1, 2) and l ∈ (1, L). The smoothed images are not used in the disparity computation procedure. The reference image corresponds to n = 1. With L being the initial dyadic step, we perform on W_nl(i, j) a detection procedure in order to detect the structures in the wavelet images and keep only those structures above a threshold of (k × σ_nl), k being a constant which increases when the resolution decreases and σ_nl being the standard deviation of W_nl [6]. From these structures we only retain their local maxima, which then act as GCP's. Our objective is to obtain the largest number of matched points in order to have a real disparity map as dense as possible.

Let (X, Y) be the coordinates of a maximum in the reference image and (x, y) the coordinates of the corresponding point in the working image. Let (x_1, y_1) be the coordinates (in the working frame) of the point corresponding to (X, Y) after applying the deformation model. If the model used correctly describes the geometrical deformation, (x_1, y_1) must be very close or equal to (x, y). On the other hand, when the polynomial model does not model the deformations adequately, due to the difference in the viewing angles, (x_1, y_1) is different from (x, y) and a corrective term, the disparity, has to be introduced. We have

x_1 = f(X, Y)
y_1 = g(X, Y).   (11)

The disparity (ε_x, ε_y) is then computed at every point (X, Y) by

ε_x(X, Y) = x - f(X, Y) = x - x_1
ε_y(X, Y) = y - g(X, Y) = y - y_1.   (12)
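As a minimal sketch of (11)-(12), assuming the matched GCP lists are stored as NumPy arrays and the deformation model is supplied as two callables f and g (names chosen here for illustration only):

```python
import numpy as np

def real_disparities(ref_pts, wrk_pts, f, g):
    """Real disparities at matched GCP's, following (11)-(12).

    ref_pts: (n, 2) array of (X, Y) maxima in the reference image
    wrk_pts: (n, 2) array of the corresponding (x, y) maxima in the working image
    f, g:    callables giving the deformation model x1 = f(X, Y), y1 = g(X, Y)
    """
    X, Y = ref_pts[:, 0], ref_pts[:, 1]
    x, y = wrk_pts[:, 0], wrk_pts[:, 1]
    x1, y1 = f(X, Y), g(X, Y)           # model-predicted working coordinates (11)
    eps_x = x - x1                      # disparity along x at (X, Y)          (12)
    eps_y = y - y1                      # disparity along y at (X, Y)
    return eps_x, eps_y
```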

At step l, l ≠ L, we carry out a new detection procedure over W_nl. As previously, we detect the coordinates of the local maxima (X, Y) and (x, y) in the reference and the working image, respectively. We then compute the variogram of the real disparities of step l − 1. The theoretical variogram model is then adjusted by least squares over these values. The coordinates (X, Y) of the GCP's from step l are then transformed into the working frame using the deformation model (f_{l-1}, g_{l-1}) in order to get the new set of coordinates (x_1, y_1). The disparities (ε_x, ε_y) are estimated at each GCP (X, Y) between the points (x, y) and (x_1, y_1) (12) by kriging with the theoretical variogram model of step l − 1. These values are then used to correct the values (x_1, y_1) of step l for the distortions due to the difference in the viewing angles. The corrected coordinates (x_2, y_2) (in the working frame) are therefore obtained from

x_2 = x_1 + ε_x(X, Y)
y_2 = y_1 + ε_y(X, Y).   (13)

The points (x_2, y_2) are then matched with (x, y), and a new deformation model is computed between (X, Y) and (x, y).
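A sketch of the correction step (13), reusing the hypothetical ordinary_kriging() helper from the Section IV sketch; all names and interfaces are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def correct_with_kriged_disparities(ref_pts, f_prev, g_prev,
                                    gcp_prev, eps_x_prev, eps_y_prev,
                                    variogram_prev, n_neighbors=10):
    """One correction step of the coarse-to-fine procedure, cf. (13).

    ref_pts:               (X, Y) maxima detected at the current scale (n, 2) array
    f_prev, g_prev:        deformation model fitted at the previous step (callables)
    gcp_prev, eps_*_prev:  GCP positions and real disparities of the previous step
    variogram_prev:        theoretical variogram fitted at the previous step
    """
    X, Y = ref_pts[:, 0], ref_pts[:, 1]
    x1, y1 = f_prev(X, Y), g_prev(X, Y)          # transform into the working frame
    x2 = np.empty_like(x1)
    y2 = np.empty_like(y1)
    for i, p in enumerate(ref_pts):
        ex, _ = ordinary_kriging(gcp_prev, eps_x_prev, p, variogram_prev, n_neighbors)
        ey, _ = ordinary_kriging(gcp_prev, eps_y_prev, p, variogram_prev, n_neighbors)
        x2[i] = x1[i] + ex                       # corrected coordinates, cf. (13)
        y2[i] = y1[i] + ey
    return x2, y2
```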

Next, the new real disparities associated with each GCP (X, Y) are computed, and the process is reiterated until we reach the finest resolution. At the lowest resolution, generally one, the real variogram is computed from the disparities at this resolution. The theoretical variogram is then adjusted over these values, and the final disparity map is computed by kriging.

This approximation process can be seen as an inverse problem whose solution can be computed iteratively, the inverse problem being formulated as follows: I_1 and I_2 being the two images, by wavelet transform, thresholding, and maxima detection we get the lists of maxima L(I_1) and L(I_2^n):

L(I_1) = (Max ∘ Thresh ∘ WT)(I_1)
L(I_2^n) = (Max ∘ Thresh ∘ WT)(I_2^n)   (14)

with
WT      the wavelet transform operator,
Thresh  the thresholding operator,
Max     the maxima detection operator.

The goal is to obtain an image I_2^n in the same frame as I_1 and whose list L(I_2^n) is identical to L(I_1), i.e.,

Distance{L(I_1), L(I_2^n)} → minimum.   (15)

I_2^n is obtained by the application of an operator O^n to I_2:

I_2^n = O^n(I_2),   n ≥ 1   (16)

O^n being the geometrical operator to be determined, and n the iteration number. The estimation of O^n must be refined until it converges toward a stable solution. The working image, at a given iteration n, is then registered using the deformation model associated with the final disparity map for that iteration. The entire process is then reiterated using the registered image as image I_2. The convergence is fast and is reached after a few iterations, three in our case. Once the iterative process has been done, the resulting disparity maps should be established for a given deformation model. We shall use the deformation model (f_1, g_1) of the first iteration. For a procedure with only two iterations (Fig. 1), we have the final expression

x_0 = f_1( f_2(x'_0, y'_0) + ε_{x_2}(x'_0, y'_0), g_2(x'_0, y'_0) + ε_{y_2}(x'_0, y'_0) ) + ε_{x_1}(x'_0, y'_0)
y_0 = g_1( f_2(x'_0, y'_0) + ε_{x_2}(x'_0, y'_0), g_2(x'_0, y'_0) + ε_{y_2}(x'_0, y'_0) ) + ε_{y_1}(x'_0, y'_0)   (17)

with (x'_0, y'_0) the coordinates of a point in the frame of the second iteration. This model can be extended to any given number N of iterations. The outcome is the final disparity maps associated with the deformation model (f_1, g_1). The flowchart of this algorithm is given in Figs. 2-4.
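Read literally, the two-iteration composition (17) as reconstructed above could be transcribed as the following sketch; the argument passed to the outer disparity terms follows that reading and, like the function names, is an assumption.

```python
def compose_two_iterations(xp, yp, f1, g1, eps_x1, eps_y1,
                           f2, g2, eps_x2, eps_y2):
    """Compose the models of two iterations into a single mapping, cf. (17).

    (xp, yp):                coordinates in the frame of the second iteration
    f1, g1, eps_x1, eps_y1:  deformation model and disparity maps of iteration 1
    f2, g2, eps_x2, eps_y2:  deformation model and disparity maps of iteration 2
    All model and disparity arguments are callables of two variables.
    """
    u = f2(xp, yp) + eps_x2(xp, yp)     # iteration-2 model plus its disparity
    v = g2(xp, yp) + eps_y2(xp, yp)
    x0 = f1(u, v) + eps_x1(xp, yp)      # iteration-1 model plus its disparity
    y0 = g1(u, v) + eps_y1(xp, yp)
    return x0, y0
```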

VI. APPLICATION TO REAL IMAGES

This procedure was applied to the two following SPOT scenes:
- Scene nr 148-319, dated 05 February 1991, taken at 07 h 51 mn 04 s, composed of 3000 rows and 3000 columns, level 1a;
- Scene nr 148-319, dated 02 April 1988, taken at 07 h 34 mn 40 s, composed of 3003 rows and 3205 columns, level 1b.

Fig. 1. Iterative procedure: two iterations.

Fig. 2. Flowchart of the disparity computation algorithm.

Fig. 3. Flowchart of the disparity computation algorithm (continued).

These two scenes, from the eastern region of Marib in the Republic of Yemen, were taken under different imaging directions. The level 1a scene [21] (Fig. 5) was taken with an incidence of 25.8 degrees left, while the level 1b scene [21] (Fig. 6) was taken with an incidence of 6.3 degrees right. Two subscenes of 512 × 512 pixels were then extracted. Image 1b will be considered as the reference image and image 1a as the working one. The noise level in the SPOT images being low, we have reduced the threshold in the matching procedure in order to obtain the maximum number of GCP's. A four-iteration procedure is then applied to these two images with a kriging search window of ten points.

Fig. 4. Flowchart of the disparity computation algorithm (continued).

Fig. 5. Working SPOT image, level 1a.

Fig. 6. Reference SPOT image, level 1b.

Fig. 7. Isocontour of the final disparity map along the X axis.

For each iteration, we compute the real disparity maps in x and y by kriging. These maps, together with the associated deformation model, allow us to register the working image by correcting the distortions due to the viewing angles. The corrected working image is then used as the input working image for the next iteration, and the process is reiterated until convergence toward a stable solution. A stability criterion can be determined from a statistical study of the real disparity maps for each iteration: the iterative process can be stopped if the standard deviation of the disparity maps (after kriging) reaches a certain threshold. The convergence is nevertheless quite fast, and three iterations are sufficient. In this case, the yemen1b image will be the reference frame and the yemen1a_i image the working one at iteration i. The working image at iteration i, i ≠ 1, will be the one of iteration (i − 1), corrected for the distortions and registered. The resulting final disparity maps in x and y (Figs. 7-9) are then built up from the disparity maps in x and y of every iteration and from the associated deformation model, which is in this case the deformation model of the first iteration. We have used a wavelet decomposition over six scales and a linear model for the theoretical variogram.

Fig. 8. Isocontour of the final disparity map along the Y axis.

The variogram was computed only over the last five resolutions, the number of GCP's in the first one (6) being insufficient for a variogram computation. The theoretical variogram model was fitted over the first 300 distance values h in pixels (h ∈ [1, 300]) of the real variogram γ(h) for the three resolutions (5, 4, 3), and over a distance of 100 pixels (h ∈ [1, 100]) for the last two resolutions (2, 1). In Tables I-IV we give a summary of the different parameters used to compute the variogram for each iteration. Looking at those tables, it can easily be seen that the fourth iteration was unnecessary and that three iterations would have sufficed. The real variogram is noisy, as can be seen in Figs. 10 and 11, especially for the coarsest resolutions. This is due to the small number of points used for its estimation. Examples of the real and theoretical variograms in x and y are shown in Figs. 10 and 11.

The parameters of the second order polynomial deformation model

x' = a_x X^2 + b_x Y^2 + c_x XY + d_x X + e_x Y + f_x   (18)
y' = a_y X^2 + b_y Y^2 + c_y XY + d_y X + e_y Y + f_y   (19)

are given in Tables V and VI.
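For illustration, the model (18)-(19) can be fitted to the matched point lists by ordinary linear least squares; the helper names below are assumptions, not the authors' code.

```python
import numpy as np

def fit_second_order_model(ref_pts, wrk_pts):
    """Least-squares fit of the second order polynomial model (18)-(19).

    Returns the coefficient vectors (a, b, c, d, e, f) for x' and for y'
    such that  x' = a X^2 + b Y^2 + c XY + d X + e Y + f  (and similarly y').
    """
    X, Y = ref_pts[:, 0], ref_pts[:, 1]
    design = np.column_stack([X**2, Y**2, X * Y, X, Y, np.ones_like(X)])
    coef_x, *_ = np.linalg.lstsq(design, wrk_pts[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(design, wrk_pts[:, 1], rcond=None)
    return coef_x, coef_y

def apply_model(coef, X, Y):
    """Evaluate one fitted polynomial at the reference coordinates (X, Y)."""
    a, b, c, d, e, f = coef
    return a * X**2 + b * Y**2 + c * X * Y + d * X + e * Y + f
```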

Fig. 9. Perspective view of the final disparity maps along the Y and X axis, respectively, and plane view of the reference image.

The accuracy of the final disparity maps must be checked. In order to estimate their quality, we have selected manually, on both the reference and the working image, 49 test ground control points (TGCP) uniformly distributed over the entire image (regions with high and low elevations). We have then computed the disparities (ε_test_x, ε_test_y) at the TGCP using the classical definition (1)

ε_test(x,i) = x_(ref,i) - x_(wrk,i)
ε_test(y,i) = y_(ref,i) - y_(wrk,i)   (20)

with (x_(ref,i), y_(ref,i)) and (x_(wrk,i), y_(wrk,i)) the coordinates of the ith TGCP in the reference and the working image. We have then computed the kriged disparities (ε_krige_x, ε_krige_y) at the TGCP, from the final disparity maps (Idisp_x and Idisp_y) and the associated deformation model (f_1, g_1), following the classical definition, by

ε_krige(x,i) = x_(ref,i) - {f_1(x_(ref,i), y_(ref,i)) + Idisp_x(x_(ref,i), y_(ref,i))}
ε_krige(y,i) = y_(ref,i) - {g_1(x_(ref,i), y_(ref,i)) + Idisp_y(x_(ref,i), y_(ref,i))}   (21)

and the residue (ρ_x, ρ_y) between the values (ε_test_x, ε_test_y) and the corresponding values after kriging (ε_krige_x, ε_krige_y) for each TGCP

ρ(x,i) = ε_test(x,i) - ε_krige(x,i)
ρ(y,i) = ε_test(y,i) - ε_krige(y,i).   (22)

On these residues, we have computed the minimum (Min), the maximum (Max), the mean (μ), and the standard deviation (σ). The results are given in Table VII. We can see from these results that the final disparity maps are well estimated, very few disparity pixels being inaccurate, the precision of the TGCP selection being of ±1 pixel.

Another way of estimating the final accuracy is to use the disparity maps in order to register the two images. This is done by registering the working image using the deformation polynomials (f_1, g_1) and by adding the disparity maps (see (4)). We have therefore registered the working image using the resulting final disparity maps and the deformation model associated with them; the result is shown in Fig. 12. We have also added the reference and the registered image in order to have a visual confirmation of the accuracy of the registration procedure (Fig. 13), which in our case is very good. We just recall that we were unable to obtain a good registration of these images using a classical polynomial deformation model [6].
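The accuracy check (20)-(22) can be sketched as follows, assuming the TGCP coordinates are stored as NumPy arrays and the final disparity maps are exposed as callables; names and interfaces are illustrative only, not the authors' implementation.

```python
import numpy as np

def tgcp_residues(ref_pts, wrk_pts, f1, g1, idisp_x, idisp_y):
    """Residues between measured and kriged disparities at the TGCP, cf. (20)-(22).

    ref_pts, wrk_pts:  (n, 2) TGCP coordinates in the reference / working image
    f1, g1:            deformation model of the first iteration (callables)
    idisp_x, idisp_y:  final disparity maps sampled as callables (X, Y) -> value
    """
    X, Y = ref_pts[:, 0], ref_pts[:, 1]
    x_wrk, y_wrk = wrk_pts[:, 0], wrk_pts[:, 1]
    eps_test_x = X - x_wrk                                   # (20)
    eps_test_y = Y - y_wrk
    eps_krige_x = X - (f1(X, Y) + idisp_x(X, Y))             # (21)
    eps_krige_y = Y - (g1(X, Y) + idisp_y(X, Y))
    rho_x = eps_test_x - eps_krige_x                         # (22)
    rho_y = eps_test_y - eps_krige_y
    stats = lambda r: (r.min(), r.max(), r.mean(), r.std())  # Min, Max, mean, sigma
    return stats(rho_x), stats(rho_y)
```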

TABLE I. Summary of the different parameters used in the iterative procedure: first iteration variogram computation.

TABLE II. Summary of the different parameters used in the iterative procedure: second iteration variogram computation.

TABLE III. Summary of the different parameters used in the iterative procedure: third iteration variogram computation.

TABLE IV. Summary of the different parameters used in the iterative procedure: fourth iteration variogram computation.

Fig. 10. Real and theoretical disparity variogram along the X axis for the first iteration, as a function of the distance in pixels.


TABLE V. Parameters of the second order polynomial deformation model in x for each iteration.

TABLE VI. Parameters of the second order polynomial deformation model in y for each iteration.

TABLE VII. Residues between the test and kriged disparities at the TGCP (pixels)

Disparity   N    Min(ρ)    Max(ρ)    mean(ρ)   σ(ρ)
X           49   -7.147    2.135     -0.199    1.419
Y           49   -3.012    4.823     -0.073    1.128

Fig. 11. Real and theoretical disparity variogram along the Y axis for the first iteration, as a function of the distance in pixels.

Fig. 12. Registered working image with the help of the disparity maps.

Fig. 13. The addition of the reference and the registered working image.

VII. CONCLUSION

We have presented a new method that allows the computation of the disparity maps along the x and y axes, at each point of the image, between a couple of stereo images, with good accuracy and without knowledge of the position parameters of the satellite. The procedure is fully automated and converges quite fast, three iterations being sufficient to achieve a good estimation of the disparity maps. The method has many other applications, one of which is the geometrical registration of images obtained under different viewing angles, a process which is achieved quite readily and with good accuracy. A pyramidal implementation [7] of this procedure is possible and may reduce the processing time as well as the disk space needed.

APPENDIX
THE WAVELET TRANSFORM

A. The Continuous Wavelet Transform

The continuous wavelet transform of a 1D signal f(x) with respect to the analyzing wavelet ψ(x) is the 2D set defined as

W(a, b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} f(x)\, \psi^*\!\left(\frac{x - b}{a}\right) dx   (23)

where a is the scale factor. The wavelet coefficients W(a, b) give information on the signal at the location b and for the scale a. The function ψ(x) must obey the admissibility condition

C = \int_{0}^{+\infty} \frac{|\hat{\psi}(\nu)|^2}{\nu}\, d\nu < +\infty.   (24)

It has also been shown [10] that an inversion formula exists. The wavelet transform can then be easily interpreted in the Fourier space as a set of bandpass filters. The signal is examined both in direct space, pixel by pixel, and in frequency space, band by band. The filtering is determined by the basic wavelet function.

B. The Discrete Wavelet Transform: The "à trous" Algorithm

In order to process observed images, a discrete approach must be used. The discrete approach we used is based on the so-called "à trous" algorithm [11], [20]. We assume that the sampled data {f_i^(0)} are the scalar products, at the pixels {i}, of the function f(x) with a given scaling function φ(x), which corresponds to a low-pass filter and must satisfy the dilation equation [22]. The first filtering is then performed at a twice magnified scale, leading to the set {f_i^(1)}. The signal difference {f_i^(0)} − {f_i^(1)} contains the information between these two scales and is the discrete set associated with the wavelet transform corresponding to φ(x); the associated wavelet ψ(x) follows from this difference [3]. The distance between two samples increases by a factor of two from the scale (k − 1) to the next one, so f_i^(k) is obtained by applying the same low-pass kernel to samples spaced 2^{k-1} pixels apart (Fig. 15), and the discrete wavelet transform w(i, k) is given by

w(i, k) = f_i^{(k-1)} - f_i^{(k)}.   (30)

Fig. 15. Filtering with an increasing factor distance of 2 between samples.

The linear piecewise continuous scaling function (Fig. 14) defined by

φ(x) = 1 - |x|  if x ∈ [-1, 1],   φ(x) = 0  if x ∉ [-1, 1]   (31)

was used in our calculations, which leads to the kernel coefficients (1/4, 1/2, 1/4).

Fig. 14. Linear scaling function φ and its associated wavelet ψ.

The algorithm allowing one to rebuild the data frame is the following: we add the last smoothed array f_i^{(n)} to all the differences w(i, k), k = 1 to n:

f_i^{(0)} = f_i^{(n)} + \sum_{k=1}^{n} w(i, k).   (33)

This works independently of the number of scales. The transformation is overdetermined, and the number of points increases with the number of scales.
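A minimal 1D sketch of the decomposition and reconstruction just described, assuming the (1/4, 1/2, 1/4) kernel derived from the linear scaling function (31) and periodic border handling; for images the same separable kernel would be applied along rows and columns. The function names are illustrative, not the authors' implementation.

```python
import numpy as np

def a_trous_1d(signal, n_scales):
    """1D "a trous" decomposition with the (1/4, 1/2, 1/4) kernel.

    Returns the wavelet planes w(., k), k = 1..n, and the last smoothed
    array, whose sum restores the input signal, cf. (30) and (33).
    """
    c = np.asarray(signal, dtype=float)
    planes = []
    for k in range(1, n_scales + 1):
        step = 2 ** (k - 1)                  # sample spacing doubles at each scale
        # low-pass filtering with holes ("trous"); periodic borders for simplicity
        smooth = 0.25 * np.roll(c, step) + 0.5 * c + 0.25 * np.roll(c, -step)
        planes.append(c - smooth)            # w(i, k) = f^(k-1) - f^(k), cf. (30)
        c = smooth
    return planes, c

def reconstruct(planes, smooth):
    """Rebuild the data frame: last smoothed array plus all the differences (33)."""
    return smooth + sum(planes)
```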

ACKNOWLEDGMENT

The authors would like to thank Dr. R. Manière for providing the images, L. Laurore for some fruitful discussions on the kriging method, and Dr. F. Djamdji for his help in improving the presentation of the manuscript.

REFERENCES

[1] P. M. Atkinson, "Optimal ground-based sampling for remote sensing investigations: Estimating the regional mean," Int. J. Remote Sensing, vol. 12, no. 3, pp. 559-567, 1991.
[2] S. T. Barnard and W. B. Thompson, "Disparity analysis of images," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-2, pp. 333-340, Feb. 1980.
[3] A. Bijaoui, "Algorithmes de la transformation en ondelettes: Applications à l'imagerie astronomique," Ondelettes et Paquets d'ondelettes—Cours CEA/EDF/INRIA, pp. 1-26, June 1991.
[4] A. Bijaoui and M. Giudicelli, "Optimal image addition using the wavelet transform," Experimental Astron., vol. 1, pp. 347-363, 1991.
[5] J. R. Carr and D. E. Myers, "Application of the theory of regionalized variables to the spatial analysis of Landsat data," in Proc. Pecora 9: Spatial Information Technologies for Remote Sensing Today and Tomorrow, 1984, pp. 55-61.
[6] J.-P. Djamdji, A. Bijaoui, and R. Manière, "Geometrical registration of images: The multiresolution approach," Photogrammetric Eng. & Remote Sensing, vol. 59, no. 5, pp. 645-653, May 1993.
[7] J.-P. Djamdji, A. Bijaoui, and R. Manière, "Geometrical registration of remotely sensed images with the use of the wavelet transform," in SPIE's Int. Symp. Optical Eng. Photon., Orlando, FL, Apr. 12-16, 1993.
[8] R. O. Duda and P. E. Hart, Pattern Recognition and Scene Analysis. New York: Wiley, 1973.
[9] C. Glass, J. Carr, H. Yang, and D. Myers, "Application of spatial statistics to analyzing multiple remote sensing data sets," in A. I. Johnson and C. B. Patterson, Eds., Geotechnical Applications of Remote Sensing and Remote Data Transmission. Philadelphia: American Society for Testing and Materials, 1991, pp. 136-150.
[10] A. Grossmann, R. Kronland-Martinet, and J. Morlet, "Reading and understanding continuous wavelet transform," in Wavelets: Time-Frequency Methods and Phase Space. Berlin: Springer, 1989, pp. 2-20.
[11] M. Holschneider, R. Kronland-Martinet, J. Morlet, and P. Tchamitchian, "A real-time algorithm for signal analysis with the help of the wavelet transform," in Wavelets: Time-Frequency Methods and Phase Space. Berlin: Springer, 1989, pp. 286-297.
[12] A. G. Journel and C. J. Huijbregts, Mining Geostatistics. London: Academic, 1978.
[13] G. Matheron, Les Variables Régionalisées et leur Estimation. Paris: Masson, 1965.
[14] G. Matheron, "La théorie des variables régionalisées et ses applications," Cahiers du Centre de Morphologie Mathématique de Fontainebleau, École des Mines, 1970.
[15] A. B. McBratney, R. Webster, and T. M. Burgess, "The design of optimal sampling schemes for local estimation and mapping of regionalized variables I: Theory and method," Computers & Geosci., vol. 7, no. 4, pp. 331-334, 1981.
[16] A. B. McBratney, R. Webster, and T. M. Burgess, "The design of optimal sampling schemes for local estimation and mapping of regionalized variables II: Program and examples," Computers & Geosci., vol. 7, no. 4, pp. 335-365, 1981.
[17] D. E. Myers, "Interpolation and estimation with spatially located data," Chemometrics Intell. Lab. Syst., vol. 11, pp. 209-228, 1991.
[18] G. Ramstein, "Structures spatiales irrégulières dans les images de télédétection," Ph.D. dissertation, Université Louis Pasteur, Strasbourg, France, Sept. 13, 1989.
[19] G. Ramstein and M. Raffy, "Analysis of the structure of radiometric remotely-sensed images," Int. J. Remote Sensing, vol. 10, no. 6, pp. 1049-1073, 1989.
[20] M. J. Shensa, "The discrete wavelet transform: Wedding the à trous and Mallat algorithms," IEEE Trans. Signal Process., vol. 40, pp. 2464-2482, Oct. 1992.
[21] Spotimage, Ed., Guide des Utilisateurs de Données SPOT, vol. 2. CNES et SPOT IMAGE, 1986.
[22] G. Strang, "Wavelets and dilation equations: A brief introduction," SIAM Rev., vol. 31, no. 4, pp. 614-627, Dec. 1989.
[23] J. Weng, T. Huang, and N. Ahuja, "Motion and structure from two perspective views: Algorithms, error analysis, and error estimation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 11, pp. 451-476, May 1989.

Jean-Pierre Djamdji was born in Beirut, Lebanon, in 1965. He received the M.E. degree from the Lebanese University, Faculty of Science II, in 1987, and the D.E.A. in Automatique et Traitement du Signal and the Doctorat en Sciences de l'Ingénieur from the University of Nice-Sophia Antipolis, Nice, France, in 1989 and 1993, respectively.
Since 1994, he has been a Postgraduate Researcher at the College of Engineering, University of California, Riverside. His research interests are in wavelet analysis, multiresolution image registration techniques, disparity analysis, image processing, pattern recognition, and remote sensing.

Albert Bijaoui was born in Monastir, Tunisia, in 1943. He has been with the University of Nice-Sophia Antipolis, Nice, France, since 1974.
He has engaged in research in astronomy at the Paris Observatory on the applications of electronography. At Nice Observatory, he built with collaborators a processing system for astronomical images, especially adapted to large images, which led him to different applications for the study of the galactic structure and for observational cosmology. He has published many papers in international scientific reviews on electronography, data, signal and image processing, and astrophysics. He has also published a book on image and information.
