IMAGE FUSION USING MULTI DECOMPOSITION LEVELS OF DISCRETE WAVELET TRANSFORM

M. A. Berbar 1, S. F. Gaber 2, and N. A. Ismail 1

1 Menoufia University; 2 National Authority for Remote Sensing and Space Sciences (NARSS), Egypt.

1. ABSTRACT

Many researchers are concerned with using powerful image processing tools to achieve high-quality images for their applications. Recently, great interest has arisen in using wavelet transforms [1-10] to analyse multi-resolution images and to fuse remote sensing images. Image fusion, especially in remote sensing applications, is a field that is growing continuously, and many methods have been proposed to fuse panchromatic and multi-spectral images [1-7]. This work presents a proposed scheme to fuse the Landsat-7 low-resolution (30 m) multi-spectral images with the corresponding high-resolution (15 m) panchromatic images. The proposed image fusion method is based on the Two-Dimensional Discrete Wavelet Transform (2D-DWT) and aims to fuse the two images in order to produce a high-resolution (15 m) multi-spectral image. The paper also presents a comparison between the implemented 2D-DWT method and other conventional image fusion methods, using the correlation between the output image and the two input images, to determine which technique gives the best results for multi-sensor image fusion. The output image of the 2D-DWT technique contains 86.6 % of the spectral content of the multi-spectral image and 97.08 % of the spatial content of the panchromatic image.

2. BACKGROUND

The most common conventional methods for image fusion are Principal Component Analysis (PCA), as in Li et al [1], the Laplacian pyramid, as in Hui, Manjunath, and Sanjit [2], Intensity-Hue-Saturation (IHS), as in Bretschneider and Kao [3], and the Brovey transform, as in Sanjeevi et al [4]. These conventional methods have some defects: the fused output image contains blocking artifacts in the regions where the multi-sensor data differ significantly, and the methods normally improve the spatial resolution while distorting the colour composite. The 2D-DWT has been compared with the Laplacian pyramid transform method and the discrete wavelet frame method, as shown in Zhong [13]; the tested images there were digital camera images, which do not need registration before the fusion process. The 2D-DWT improves the spatial resolution of the fused images while preserving the colour appearance of the images for interpretation purposes, as in Zhijun et al [5]. The 2D-DWT is described by the block diagram in Figure 1.

Figure 1: The block diagram of the 2D-DWT.

The output of the 2D-DWT is four images, each of size equal to half the size of the original image, called the HH, HL, LH, and LL images. LH means that a low-pass filter is applied along x, followed by a high-pass filter along y. The LL image contains the approximation coefficients, the LH image contains the horizontal detail coefficients, the HL image contains the vertical detail coefficients, and the HH image contains the diagonal detail coefficients. The wavelet transform can be performed for multiple levels: the next level of decomposition is applied only to the LL image, producing four sub-images, each of size equal to half the LL image size. This process can be continued until the required frequency resolution is reached.

Landsat-7 was launched on April 15, 1999. It produces six low-resolution multi-spectral bands with 30 m resolution and one high-resolution panchromatic band with 15 m resolution. In this work, bands 7, 4, and 2 have been used as a colour composite for the low-resolution (30 m) multi-spectral image. A sample of the two types of images is shown in Figure 2.
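As an illustration of this decomposition, the following is a minimal sketch using the PyWavelets library; the library, the random test array, the 'db4' wavelet (filter length 8, matching the 8-coefficient Daubechies filter used in the methodology), and the three-level depth are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the single-level and multi-level 2D-DWT described above.
# PyWavelets ('pywt'), the random test array, and the 'db4' wavelet are
# illustrative assumptions, not the authors' implementation.
import numpy as np
import pywt

image = np.random.rand(512, 512)  # placeholder for a single band

# Single-level decomposition: LL (approximation) plus the three detail
# images; each output is roughly half the size of the input image.
LL, (LH, HL, HH) = pywt.dwt2(image, 'db4')  # details: horizontal, vertical, diagonal
print(LL.shape, LH.shape, HL.shape, HH.shape)

# Multi-level decomposition: at each further level only the LL image is
# decomposed again, as described in the text.
coeffs = pywt.wavedec2(image, 'db4', level=3)
# coeffs = [LL_3, (LH_3, HL_3, HH_3), (LH_2, HL_2, HH_2), (LH_1, HL_1, HH_1)]
```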

Figure 2: (a) The high-resolution panchromatic image; (b) the low-resolution multi-spectral image.

3. METHODOLOGY

Image fusion using the 2D-DWT can be performed at three different processing levels, according to the stage at which the fusion takes place: the pixel, feature, and decision levels. The implemented method is concerned with pixel-level image fusion, where the combination mechanism works directly on the pixels of the images. The method is based on replacing the wavelet detail coefficients of the multi-spectral image with the wavelet detail coefficients of the panchromatic image at multiple levels of wavelet decomposition; up to eight decomposition levels have been performed. The main steps of the image fusion technique using the 2D-DWT method are listed below; a code sketch of the fusion steps follows the list.

1- Perform accurate co-registration between the low-resolution (30 m) multi-spectral image and the high-resolution (15 m) panchromatic image. Then resample the multi-spectral image so that it has the same number of pixels as the panchromatic image.

2- Match the histogram of the panchromatic image to the intensity component (I) of the multi-spectral image to adjust the contrast and the brightness.



3- Decompose the low-resolution (30 m) multi-spectral image and the high-resolution (15 m) panchromatic image, each of size 2Nx2N, into their LL, LH, HL, and HH images using the 2D-DWT with Daubechies filters with 8 coefficients.

4- Replace the detail images (LH, HL, and HH) of the low-resolution multi-spectral image with the detail images (LH, HL, and HH) of the high-resolution panchromatic image.


5- Repeat steps 3 and 4 for M levels, using the two resulting LL images in place of the multi-spectral and panchromatic images. This process can be continued for up to N levels.

6- Perform the inverse wavelet transform to obtain the fused image, which contains both the spatial and the spectral information of the two input images.
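The following is a minimal sketch of steps 3-6 for a single band, assuming the PyWavelets library. Steps 1 and 2 (co-registration, resampling, and histogram matching) are assumed to have been carried out already; the array names, the 'db4' wavelet, and the choice of five levels are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of the detail-coefficient replacement fusion (steps 3-6) for one
# band, assuming pre-registered, resampled, histogram-matched inputs.
import numpy as np
import pywt

def fuse_band(ms_band, pan, levels=5, wavelet='db4'):
    """Fuse one multi-spectral band with the panchromatic image."""
    ms_coeffs = pywt.wavedec2(ms_band, wavelet, level=levels)
    pan_coeffs = pywt.wavedec2(pan, wavelet, level=levels)

    # Keep the multi-spectral approximation (LL) at the coarsest level, but
    # take all detail images (LH, HL, HH) from the panchromatic image.
    fused_coeffs = [ms_coeffs[0]] + list(pan_coeffs[1:])

    # Inverse wavelet transform gives the fused band (step 6).
    return pywt.waverec2(fused_coeffs, wavelet)

# Hypothetical usage: fuse each of the three colour-composite bands.
ms = np.random.rand(3, 512, 512)   # placeholder for multi-spectral bands 7, 4, 2
pan = np.random.rand(512, 512)     # placeholder for the panchromatic band
fused = np.stack([fuse_band(band, pan, levels=5) for band in ms])
```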

4. EVALUATION OF RESULTS

The proposed method, the PCA (Principal Component Analysis) method, and the IHS (Intensity-Hue-Saturation) method were applied to the images shown in Figure 2. The results are shown in Figure 3.

Figure 3: The results of applying the conventional methods and the DWT method to the images of Figure 2: (a) fused image using the PCA method; (b) fused image using the IHS method; (c) fused image using the wavelet transform with M = 1; (d) M = 3; (e) M = 5; (f) M = 6; (g) M = 7; (h) M = 8.

The three methods were also applied to another type of images, as shown in Figure 4. The size of the grey high-resolution image is 512 x 512 pixels, whereas the RGB low-resolution image is 256 x 256 pixels. The two images were resampled to each other, so that the RGB image has the same number of pixels as the grey image.

Figure 4: (a) The high-resolution panchromatic image; (b) the low-resolution coloured image.


The wavelet transform was performed for eight levels (M = 8). The results of fusing the Lena images are shown in Figure 5.

Figure 5: Results of image fusion of the Lena images using different methods: (a) fused image using the PCA method; (b) fused image using the IHS method; (c)-(j) fused images using the DWT method with M = 1, 2, 3, 4, 5, 6, 7, and 8 decomposition levels, respectively.

The different methods were applied to more test images from Landsat-7 and a comparison between them was made. The comparison is based on the correlation coefficient, which measures the correlation between two images; the higher the correlation coefficient, the more strongly the two images are correlated. The correlation coefficients computed between the multi-spectral image and the output image are shown in Table 1; they directly indicate the amount of spectral content preserved.

Table 1: The correlation coefficients computed for the various merging methods.

Merging method        A        B
PCA                   0.8662   0.9098
IHS                   0.8750   0.9670
Wavelet (1 level)     0.8909   0.8558
Wavelet (3 levels)    0.8948   0.9454
Wavelet (4 levels)    0.8776   0.9609
Wavelet (5 levels)    0.8660   0.9708
Wavelet (7 levels)    0.8488   0.9822
Wavelet (8 levels)    0.8374   0.9825

In Table 1, column A shows the correlation coefficients between the multi-spectral image and the output image, and column B shows the correlation coefficients between the panchromatic image and the output image. It should be noted that the highest correlation values occur when using the 2D-DWT method for image fusion. Also, as the number of wavelet decomposition levels increases, the correlation between the panchromatic image and the output image increases while the correlation between the multi-spectral image and the output image decreases. The proposed method overcomes the problem of blocking artifacts and the problem of misregistration between the input images.
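The evaluation measure in Table 1 can be computed as the Pearson correlation between the fused output and a reference image. Below is a minimal sketch of this measure, assuming NumPy; the function and array names are illustrative and not taken from the paper.

```python
# Sketch of the correlation-coefficient evaluation measure used in Table 1.
import numpy as np

def correlation_coefficient(img_a, img_b):
    """Pearson correlation between two images of the same size."""
    a = img_a.astype(np.float64).ravel()
    b = img_b.astype(np.float64).ravel()
    return np.corrcoef(a, b)[0, 1]

# Hypothetical usage, corresponding to columns A and B of Table 1:
# corr_spectral = correlation_coefficient(fused_band, multispectral_band)  # column A
# corr_spatial  = correlation_coefficient(fused_band, panchromatic_band)   # column B
```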


The PCA method failed to maintain the colour information of the multi-spectral image. More spatial information appears in the resulting image as the number of wavelet decomposition levels increases, but the spectral information decreases at level six, and the result starts to look like the panchromatic image at level eight. The best image, containing the maximum detail from both the multi-spectral image and the panchromatic image, results from performing five wavelet decomposition levels (i.e. M = N/2+1). This output image contains 86.6 % of the spectral content of the multi-spectral image and 97.08 % of the spatial content of the panchromatic image.


5. CONCLUSION


This paper has demonstrated the potential of the image fusion technique as a tool for improving the interpretation of low-resolution images. The correlation coefficient is used as the evaluation measure, and the evaluation confirms the superiority of the 2D-DWT method over conventional methods: multi-level image fusion based on the 2D-DWT is more reliable for remote sensing image fusion than conventional techniques such as the IHS and PCA methods. The proposed method overcomes the problem of blocking artifacts and the problem of misregistration between the input images, especially for remote sensing images. The output image of the 2D-DWT method contains 86.6 % of the spectral content of the multi-spectral image and 97.08 % of the spatial content of the panchromatic image. The experimental results show the reliability of the proposed method in improving the spatial resolution while at the same time maintaining the colour composite of the image.

6. REFERENCES

[1] Li J., Zhou Y., and Deren L., 1999, “PCA and wavelet transform for fusing panchromatic and multi-spectral images,” SPIE—The International Society for Optical Engineering, 3719, 369-377.

[2] Hui L., Manjunath B. S., and Sanjit K. M., 1994, “Multi-sensor image fusion using the wavelet transform,” Proc. ICIP—International Conference on Image Processing, 1, 51-55.

[3] Bretschneider T., and Kao O., 2000, “Image fusion in remote sensing,” Proceedings of the 1st Online Symposium of Electronic Engineers.

[4] Sanjeevi S., Vani K., and Lakshmi K., 2001, “Comparison of conventional and wavelet transform techniques for fusion of IRS-1C LISS-III and Pan images,” Proc. ACRS—22nd Asian Conference on Remote Sensing, 1, 140-145.

[5] Zhijun W., Deren L., and Qingquan L., 2000, “Wavelet analysis based image fusion of Landsat-7 panchromatic image and multi-spectral images,” International Conference on Geographic Information Science and Technology, 21-23.

[6] Pohl C., 1999, “Tools and methods for fusion of images of different spatial resolution,” International Archives of Photogrammetry and Remote Sensing, 32, 76-82.

[7] Nunez J., Otazu X., Fors O., and Prades A., 1999, “Multiresolution-based image fusion with additive wavelet decomposition,” IEEE Transactions on Geoscience and Remote Sensing, 37, 1204-1211.

[8] Jishuang O., and Chao W., 2001, “A wavelet package-based data fusion method for multitemporal remote sensing image processing,” Proc. ACRS—22nd Asian Conference on Remote Sensing, 1, 164-169.

[9] Guojin H., Kelu L., and Deyong H., 1998, “A fusion approach of multi-sensor remote sensing data based on wavelet transform,” 19th Asian Conference on Remote Sensing, 16-20.

[10] Pohl C., 1996, “Geometric aspects of multisensor image fusion for topographic map updating in the humid tropics,” Ph.D. Thesis, International Institute for Aerospace Survey and Earth Sciences (ITC).

[11] Sonka M., Hlavac V., and Boyle R., 1999, “Image processing, analysis, and machine vision,” Brooks/Cole Publishing Company.

[12] Li S., Kwok J. T., and Wang Y., 2002, “Using the discrete wavelet frame transform to merge Landsat TM and SPOT panchromatic images,” Information Fusion, 3, 17-23.

[13] Zhang Z., and Blum R. S., 1999, “A categorization and study of multiscale-decomposition-based image fusion schemes,” Proceedings of the IEEE, 1315-1328.
