
Vol. 27, No. 6 / June 2010 / J. Opt. Soc. Am. A


Satellite image fusion based on principal component analysis and high-pass filtering

Mohamed R. Metwalli,1 Ayman H. Nasr,1 Osama S. Farag Allah,2 S. El-Rabaie,3 and Fathi E. Abd El-Samie3,*

1 Data Reception, Analysis and Receiving Station Affairs Division, National Authority for Remote Sensing and Space Sciences, 23 Joseph Broz Tito St., El-Nozha El-Gedida, Cairo, Alf-Maskan 1564, Egypt
2 Department of Computer Science and Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
3 Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
*Corresponding author: [email protected]

Received July 23, 2009; revised March 30, 2010; accepted March 31, 2010; posted April 2, 2010 (Doc. ID 114606); published May 19, 2010

This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high-spatial-resolution, or simply high-resolution (HR), panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with high spectral and spatial resolutions. Some image fusion methods, such as the intensity, hue, and saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method, provide HR MS images, but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates on the basis of the injection of high frequency components from the HR pan image into the MS image. This family of methods provides less spectral distortion. In this paper, we propose the integration of the PCA method and the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the pan-sharpened image.

© 2010 Optical Society of America
OCIS codes: 100.0100, 100.2980, 280.0280.

1. INTRODUCTION
For optical sensor systems, image spatial resolution and spectral resolution are contradictory factors. For a given signal-to-noise ratio, a higher spectral resolution (narrower spectral band) is often achieved at the cost of a lower spatial resolution. Some satellite sensors supply the spectral bands needed to distinguish features spectrally but not spatially, while other satellite sensors supply the spatial resolution needed to distinguish features spatially. For many applications, the combination of data from multiple sensors provides more comprehensive information. Several commercial earth observation satellites carry dual-resolution sensors of this kind, which provide high-resolution (HR) pan images and low-resolution (LR) multi-spectral (MS) images [1,2]. Image fusion techniques are therefore useful for integrating a high-spectral-resolution image with a high-spatial-resolution image to produce a fused image with high spectral and spatial resolutions.

Some image fusion methods, such as the intensity, hue, and saturation (IHS), the Brovey transform (BT), and the principal component analysis (PCA) methods, provide visually superior HR MS images but ignore the requirement of high-quality synthesis of spectral information [1,3]. While these methods are useful for visual interpretation, high-quality synthesis of spectral information is very important for most remote sensing applications based on spectral signatures, such as lithology and soil and vegetation analysis [4]. Multi-resolution analysis (MRA) employing the discrete wavelet transform (DWT) [5–7], wavelet frames [8], or the Laplacian pyramid [9] can be adopted for the extraction of the spatial details from the pan image. Pairs of subbands of corresponding spatial frequency content are merged, and the fused image is synthesized by taking the inverse transform. On the other hand, in MRA approaches, the filtering operations may produce ringing artifacts, which can reduce the visual quality of the fused product considerably [10]. In an attempt to overcome this limitation, another family of methods was developed. These methods operate on the basis of the injection of high frequency components from the high-spatial-resolution pan image into the MS image. This family of methods was initiated by the HPF method, which provides less spectral distortion [11].

In this paper, we propose the integration of the PCA method and the high-pass-filtering (HPF) method to provide a pan-sharpened image with superior spatial resolution and less spectral distortion. The proposed fusion method is assessed on two types of remote sensing data with different spatial and spectral properties. The experiments show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the obtained pan-sharpened image.


2. TRADITIONAL IMAGE FUSION METHODS
This section describes some of the traditional image fusion methods for satellite images.

A. PCA Method
The PCA fusion method is based on the principal component transform (PCT), which converts an MS image with correlated bands into a set of uncorrelated components. The first component resembles the pan image. It is therefore replaced by the HR pan image in the fusion. Before the replacement, the pan image needs to be matched to the first component. The pan image is fused into the LR MS bands by performing an inverse PCT. The block diagram of the PCA image fusion method is shown in Fig. 1. Replacing the spatial component of the MS image with the pan image allows the spatial details of the pan image to be incorporated into the MS image [11]. The PCA fusion method can be summarized in the following steps:

1. The bands of the MS image, resampled to the same resolution as the pan image, are transformed with the PCT.
2. The pan image is histogram matched to the first principal component. This compensates for the spectral differences between the two images, which arise from differences in sensors or in acquisition dates and angles.
3. The first principal component of the MS image is replaced by the histogram-matched pan image.
4. The new merged MS image is obtained by computing the inverse PCT.
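As a concreteness check, the four PCA fusion steps above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the function name is hypothetical, and simple mean/std matching stands in for the full histogram matching of step 2.

```python
import numpy as np

def pca_fuse(ms, pan):
    """Sketch of PCA fusion: ms is (bands, H, W) resampled to the pan
    resolution, pan is (H, W). Returns the pan-sharpened MS cube."""
    bands, H, W = ms.shape
    X = ms.reshape(bands, -1).astype(float)        # one row per band
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    # Step 1: principal component transform via the band covariance matrix
    cov = Xc @ Xc.T / (Xc.shape[1] - 1)
    eigval, eigvec = np.linalg.eigh(cov)
    V = eigvec[:, np.argsort(eigval)[::-1]]        # columns sorted, PC1 first
    pcs = V.T @ Xc                                 # (bands, H*W) components
    # Step 2: match the pan image to PC1 (mean/std matching stands in
    # for full histogram matching here)
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / p.std() * pcs[0].std() + pcs[0].mean()
    # Step 3: replace PC1 with the matched pan image
    pcs[0] = p
    # Step 4: inverse PCT
    return (V @ pcs + mean).reshape(bands, H, W)
```

Because `V` is orthonormal, the inverse PCT in step 4 is simply multiplication by `V`.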

The radiometric accuracy of the first principal component is greater than the radiometric accuracy of the HR pan image. This results in a loss of radiometric accuracy in the pan-sharpened image when the first principal component is replaced with the matched pan image [12].

Fig. 1. Image fusion with the PCA method.

B. HPF Method
If the spectral bands are not perfectly spectrally overlapped with the pan band, as happens with the Spot4, Ikonos, and Quick-Bird images, the PCA method yields poor results in terms of the spectral fidelity of color representations [13]. To overcome this limitation, methods based on injecting high frequency spatial details taken from the pan image have been introduced and have demonstrated superior performance. Figure 2 shows the block diagram of the HPF method, in which the high frequency spatial content of the pan image is extracted using a high-pass filter and transferred to the resampled MS image. The mathematical model of this method for each band is given by

DN^H_MS = DN^L_MS + (DN^H_PAN − DN^L_PAN),   (1)

where DN denotes the digital numbers or pixel values, DN^L_PAN = DN_PAN ⊗ h is the smoothed version of the pan image, and h is a low-pass filter such as the boxcar filter. DN^H_PAN is the pan image with high frequency spatial details, DN^L_MS is a certain band of the LR MS image, and DN^H_MS is the corresponding band of the pan-sharpened MS image [1]. This method preserves a high percentage of the spectral characteristics of the MS image, since the spectral information is associated with the low spatial frequencies of the MS image. The cutoff frequency of the filter has to be chosen such that the injected data do not disturb the spectral information of the MS image [14].

Fig. 2. Image fusion with the HPF method.

C. Gram–Schmidt Method
In the Gram–Schmidt (GS) method, as described by its inventors [12], the spatial resolution of the MS image is enhanced by merging the HR pan image with the low-spatial-resolution MS bands. The block diagram of this method is shown in Fig. 3. The main steps of this method are as follows:

1. A lower-spatial-resolution pan image is simulated.
2. The Gram–Schmidt transformation (GST) is performed on the simulated lower-spatial-resolution pan image together with all the lower-spatial-resolution spectral bands. The simulated lower-spatial-resolution pan image is employed as the first band in the GST.
3. The statistics of the higher-spatial-resolution pan image are adjusted to match the statistics of the first transform band that results from the GST, producing a modified higher-spatial-resolution pan image.
4. The modified higher-spatial-resolution pan image is substituted for the first transform band that results from the GST to produce a new set of transform bands. The inverse GST is performed on the new set of transform bands to produce the enhanced-spatial-resolution MS image.

In this method, the spectral characteristics of the lower-spatial-resolution MS data are preserved in the higher-spatial-resolution pan-sharpened MS image.

Fig. 3. Image fusion with the GS method.

D. DWT Method
Multiresolution image fusion methods are effective for the pan sharpening of MS images [11]. The DWT-based image fusion method involves three steps [15]:

1. Forward transform of the input images to obtain their approximation and detail components.
2. Combination of the input images' coefficients according to a certain fusion rule (substitutive or additive) to obtain the fused wavelet coefficients.
3. Inverse discrete wavelet transform (IDWT) of the fused wavelet coefficients to obtain the fused image.

The block diagram of this method is shown in Fig. 4.

Fig. 4. Image fusion with the wavelet transform method.

E. Stationary Wavelet Transform Method
The image fusion method based on the DWT is not translation invariant because of the underlying down-sampling process. As a result, misalignments between the wavelet coefficients of the MS and pan subbands may lead to block artifacts and aliasing effects in the fused image. One approach to solving this problem is to employ the stationary wavelet transform (SWT) for image fusion. Because the SWT performs no dyadic decimation at any decomposition level, it yields an overcomplete wavelet decomposition and guarantees a result that is both aliasing free and translation invariant. The choice of the wavelet parameters, such as the wavelet filters and the decomposition level, can affect the quality of the fused image [16].
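The HPF model of Eq. (1) in Subsection B can be sketched in NumPy as follows. This is an illustrative version, not the authors' code; the boxcar window size is an arbitrary choice for the example, not a value taken from the paper.

```python
import numpy as np

def hpf_fuse_band(ms_band, pan, size=5):
    """Eq. (1) for one band: add the pan high-pass detail to the
    resampled MS band. The boxcar low-pass filter is built by hand."""
    pad = size // 2
    padded = np.pad(pan.astype(float), pad, mode="edge")
    # Boxcar smoothing: local mean over a size x size window
    low = np.zeros_like(pan, dtype=float)
    for dy in range(size):
        for dx in range(size):
            low += padded[dy:dy + pan.shape[0], dx:dx + pan.shape[1]]
    low /= size * size
    detail = pan.astype(float) - low        # DN^H_PAN - DN^L_PAN
    return ms_band.astype(float) + detail   # DN^H_MS = DN^L_MS + detail
```

A flat pan image carries no high frequency detail, so fusing with it leaves the MS band unchanged, which matches the intent of Eq. (1).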
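The substitutive DWT rule of Subsection D can likewise be sketched with a one-level Haar transform. This is a simplification for illustration only: the experiments in this paper use biorthogonal filters, and the helper names are hypothetical.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT (even-sized input assumed)."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    ll = (a + b + c + d) / 2   # approximation
    lh = (a - b + c - d) / 2   # horizontal detail
    hl = (a + b - c - d) / 2   # vertical detail
    hh = (a - b - c + d) / 2   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    H, W = ll.shape
    out = np.empty((2 * H, 2 * W))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out

def dwt_fuse_band(ms_band, pan):
    """Substitutive rule: keep the MS approximation, take pan details."""
    ms_ll, _, _, _ = haar_dwt2(ms_band.astype(float))
    _, lh, hl, hh = haar_dwt2(pan.astype(float))
    return haar_idwt2(ms_ll, lh, hl, hh)
```

Because the transform is perfectly invertible, fusing an image with itself returns it unchanged, which is a useful sanity check on the forward/inverse pair.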

3. PROPOSED INTEGRATED PCA AND HPF FUSION METHOD
To improve the injection of the spatial details extracted from the pan image into the MS image, we propose the integration of the PCA and HPF fusion methods as shown in Fig. 5. The steps of the proposed integrated fusion method can be summarized as follows:

1. The bands of the MS image, resampled to the same resolution as the pan image, are transformed with the PCT.
2. The pan image is smoothed with a Gaussian low-pass filter (GLPF).
3. The spatial details of the pan image are extracted as the difference between the original pan image and the smoothed one.
4. A linear combination of the extracted spatial details of the pan image and the first principal component is performed using a gain factor, estimated as the ratio between the standard deviation of the first principal component PC1 and the standard deviation of the pan image. The gain factor compensates for the difference in radiometry between the pan image and PC1.
5. The new first principal component and the other principal components are transformed with the inverse PCT to obtain the pan-sharpened MS image.

The mathematical model for the injection of the spatial details of the pan image into the MS image is given by

DN^H_PC1 = DN^L_PC1 + α(DN^H_PAN − DN^L_PAN),   (2)

where DN^L_PC1 is the first principal component of the MS image, DN^H_PC1 is the pan-sharpened first principal component, and α is the gain parameter.

Fig. 5. Image fusion with the proposed method.
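Steps 2–4 and Eq. (2) can be sketched in NumPy as follows. This is a minimal illustration under stated assumptions: the Gaussian kernel is built by hand and edge padding is used at the borders, neither of which is specified in the paper.

```python
import numpy as np

def inject_details(pc1, pan, sigma=1.5, size=5):
    """Eq. (2): add Gaussian high-pass pan details to PC1, scaled by
    the gain alpha = std(PC1) / std(pan)."""
    # Build the Gaussian low-pass kernel and normalize it to unit sum
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    g /= g.sum()
    # Step 2: smooth the pan image with the GLPF (edge padding assumed)
    pad = size // 2
    padded = np.pad(pan.astype(float), pad, mode="edge")
    low = np.zeros_like(pan, dtype=float)
    for dy in range(size):
        for dx in range(size):
            low += g[dy, dx] * padded[dy:dy + pan.shape[0],
                                      dx:dx + pan.shape[1]]
    # Steps 3-4: extract the details and inject them with the gain factor
    alpha = pc1.std() / pan.std()
    return pc1 + alpha * (pan.astype(float) - low)
```

The gain factor makes the injected detail proportional to the dynamic range of PC1; in the degenerate case of a zero-variance PC1, alpha vanishes and no detail is injected.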

A. Gaussian Low-Pass Filter
The GLPF used to smooth the pan image has the following kernel:

G(x, y) = [1 / (2πσ²)] exp[−(x² + y²) / (2σ²)].   (3)

The degree of smoothing is determined by the standard deviation σ of the kernel and the window size of the filter. Smoothing with the GLPF produces no ringing effect, in contrast to the Butterworth low-pass filter [17]. The difference between the original pan image and the smoothed pan image is taken as the high frequency detail image that is injected into the first principal component of the MS image.
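For illustration, the discrete kernel of Eq. (3) can be built as follows. Normalizing the sampled kernel to unit sum is an implementation choice (not stated in the paper) so that smoothing preserves the mean digital number.

```python
import numpy as np

def glpf_kernel(size, sigma):
    """Discrete Gaussian kernel of Eq. (3), normalized to unit sum."""
    ax = np.arange(size) - size // 2       # coordinates centered on zero
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return g / g.sum()
```

The kernel is symmetric and peaks at its center; a larger σ spreads the weights and smooths more strongly, matching the behavior discussed above.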

4. EVALUATION METRICS
Since the goal of pan sharpening is to enhance the spatial quality of the MS image while preserving its spectral properties, two sets of metrics need to be used: spectral and spatial quality metrics [3].


Table 1. Spectral and Spatial Characteristics of the Spot4 Images

Band   Wavelength (μm)       Resolution (m)
1      0.50–0.59 (Green)     20
2      0.61–0.68 (Red)       20
3      0.78–0.89 (NIR)       20
4      1.58–1.75 (MIR)       20
PAN    0.61–0.68 (PAN)       10

Table 2. Spectral and Spatial Characteristics of the Quick-Bird Images

Band   Wavelength (μm)       Resolution (m)
1      0.45–0.52 (Blue)      2.8
2      0.52–0.60 (Green)     2.8
3      0.63–0.69 (Red)       2.8
4      0.76–0.90 (NIR)       2.8
PAN    0.45–0.90 (PAN)       0.7

A. Spectral Quality Metrics
The spectral quality metrics measure the amount of change in the bands of the pan-sharpened image in comparison with the original MS image. The root mean square error (RMSE) between the bands of the original MS image and the corresponding sharpened bands is used for this purpose to measure the spectral fidelity [5]. It is calculated for each band as follows:

RMSE = √(E[(DN^L_MS − DN^H_MS)²]).   (4)

B. Spatial Quality Metrics
The high-spatial-resolution information missing in the MS image is present in the high frequencies of the pan image. Thus, the correlation coefficient between the high-pass-filtered pan image and the bands of the pan-sharpened MS image indicates how much spatial information from the pan image has been incorporated into the MS image. A higher correlation implies that more spatial information has been injected into the MS image. This correlation coefficient is defined as follows [6]:

crh = Cov(DN′^H_MS, DN′^H_PAN) / √(Cov(DN′^H_MS, DN′^H_MS) · Cov(DN′^H_PAN, DN′^H_PAN)),   (5)

where DN′^H_MS and DN′^H_PAN are the high-pass-filtered versions of a certain band of the pan-sharpened MS image and of the original pan image, respectively.
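Both metrics can be sketched in NumPy. The 3×3 Laplacian used as the high-pass filter in crh is an assumption for the example; this section does not fix a specific high-pass kernel.

```python
import numpy as np

def rmse(ms_band, fused_band):
    """Spectral metric of Eq. (4): per-band root mean square error."""
    d = ms_band.astype(float) - fused_band.astype(float)
    return np.sqrt(np.mean(d**2))

def crh(fused_band, pan):
    """Spatial metric of Eq. (5): correlation between high-pass
    versions of a fused band and the pan image."""
    def highpass(img):
        # 3x3 Laplacian (an assumed kernel, edge padding at borders)
        img = img.astype(float)
        p = np.pad(img, 1, mode="edge")
        return (4 * img - p[:-2, 1:-1] - p[2:, 1:-1]
                - p[1:-1, :-2] - p[1:-1, 2:])
    a = highpass(fused_band).ravel()
    b = highpass(pan).ravel()
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))
```

As expected from the definitions, a band compared with itself gives an RMSE of zero and a crh of one.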

5. RESULTS AND COMPARISON
In this section, simulation experiments are performed to test the proposed fusion method and compare it with the traditional fusion methods. Two different types of data are used in these experiments:

1. Spot4 data for a part of Cairo, Egypt, with the spectral and spatial properties shown in Table 1. Figure 6 shows the MS and pan images of the Spot4 data. The ratio between the spatial resolution of the pan image and that of the MS image is 1/2.
2. Quick-Bird data for a part of Sundarbans, India, with the spectral and spatial properties shown in Table 2. Figure 8 shows the MS and pan images of the Quick-Bird data. The ratio between the spatial resolution of the pan image and that of the MS image is 1/4.

The test areas include different types of ground cover, such as urban areas, vegetation, and water bodies, to enable analysis of the fusion methods for a variety of spectral and spatial contents.

A. Effect of the GLPF
The proposed image fusion method is tested on the Spot4 data shown in Fig. 6, and the fusion result is given in Fig. 7. Similarly, it is tested on the Quick-Bird data shown in Fig. 8, and the fusion result is given in Fig. 9. Figure 10 shows the variation of the RMSE of each band in the pan-sharpened MS image with the σ and the window size of the GLPF for the fusion of the Spot4 images.

Fig. 6. (Color online) Spot4 images. (a) MS image. (b) Pan image.


Fig. 7. (Color online) Fusion results for the Spot4 images. (a) Original MS image resampled to the Pan resolution. (b) Proposed method. (c) PCA method. (d) HPF method. (e) GS method. (f) DWT method. (g) SWT method.

Fig. 8. (Color online) Quick-Bird images. (a) MS image. (b) Pan image.


Fig. 9. (Color online) Fusion results for the Quick-Bird images. (a) Original MS image resampled to the Pan resolution. (b) Proposed method. (c) PCA method. (d) HPF method. (e) GS method. (f) DWT method. (g) SWT method.

Figure 11 shows the variation of the crh of each band in the pan-sharpened MS image with the σ and the window size of the GLPF for the fusion of the Spot4 images. Figures 12 and 13 show the corresponding variations of the RMSE and the crh for the fusion of the Quick-Bird images. From these figures, we notice that increasing the filter window size beyond 5×5 for the Spot4 images and beyond 7×7 for the Quick-Bird images does not add to the spatial quality of the pan-sharpened MS images but leads to more spectral distortion. As σ increases, the spatial quality of all bands increases until a certain limit is reached. Beyond this limit, increasing σ does not enhance the spatial quality of the pan-sharpened MS image but leads to more spectral distortion. In general, it was concluded from the experiments that when the ratio between the spatial resolution of the pan image and that of the MS image is 1/2, a GLPF with a window size of 5×5 and a σ of about 1.5 produces a pan-sharpened MS image with minimal spectral distortion and good spatial resolution.


Fig. 10. (Color online) Variation of the RMSE of each band in the pan-sharpened MS image with the σ and the window size of the GLPF for the fusion of the Spot4 images.

Fig. 11. (Color online) Variation of the crh of each band in the pan-sharpened MS image with the σ and the window size of the GLPF for the fusion of the Spot4 images.


Fig. 12. (Color online) Variation of the RMSE of each band in the pan-sharpened MS image with the σ and the window size of the GLPF for the fusion of the Quick-Bird images.

Fig. 13. (Color online) Variation of the crh of each band in the pan-sharpened MS image with the σ and the window size of the GLPF for the fusion of the Quick-Bird images.


Table 3. RMSE Values of All Bands with the Different Fusion Methods for the Spot4 Images

Method     Band1    Band2    Band3    Band4
Proposed   2.6306   3.0115   2.1153   2.2102
PCA        9.6062   11.868   12.8428  12.2177
HPF        2.7091   3.2653   3.4897   3.0482
GS         3.0464   3.5907   3.9875   3.3386
DWT        3.3282   3.418    3.9032   3.5378
SWT        2.5614   2.4945   2.7992   2.6689

Table 6. crh Values of All Bands with the Different Fusion Methods for the Quick-Bird Images

Method     Band1    Band2    Band3    Band4
Proposed   0.7928   0.9087   0.9596   0.9783
PCA        0.7456   0.9082   0.9901   0.99
HPF        0.6643   0.7893   0.8936   0.9374
GS         0.7231   0.8941   0.9628   0.99
DWT        0.9649   0.9732   0.9691   0.9754
SWT        0.9692   0.9599   0.9818   0.9677

Table 4. RMSE Values of All Bands with the Different Fusion Methods for the Quick-Bird Images

Method     Band1     Band2     Band3     Band4
Proposed   3.5855    8.4124    8.3750    18.4019
PCA        20.331    45.407    47.1685   118.119
HPF        5.7710    11.8481   12.7072   31.8660
GS         4.6335    10.1132   10.0769   28.1732
DWT        17.7391   20.4836   18.9723   29.9498
SWT        17.7760   18.7424   16.9048   18.7581

When the ratio between the spatial resolution of the pan image and that of the MS image is 1/4, a GLPF with a window size of 7×7 and a σ of about 1.5 produces a sharpened MS image with good spatial and spectral properties.

B. Performance Comparison
Simulation experiments are also carried out to compare the performance of the proposed fusion method with the PCA, HPF, GS, DWT, and SWT methods. For the proposed fusion method, a window of size 5×5 is used for the Spot4 data and of size 7×7 for the Quick-Bird data. The value of σ used in the simulations is 1.5. For the DWT and SWT methods, we use biorthogonal filters with a single decomposition level for the Spot4 data and two decomposition levels for the Quick-Bird data. Figures 7 and 9 show the results of the different fusion methods for the Spot4 and Quick-Bird data, respectively. Tables 3 and 4 show the RMSE between the corresponding bands of the original MS image and the pan-sharpened MS image using the different fusion methods for the Spot4 and the Quick-Bird data, respectively. The obtained results show that the proposed fusion method gives the lowest RMSE values for the different bands. Tables 5 and 6 show the values of crh for all bands of the obtained MS images with the different fusion methods. The results in these two tables indicate the spatial improvements obtained with the different fusion methods. The HPF method has the lowest spatial improvement. The proposed fusion method gives spatial enhancement comparable to that of the PCA, GS, DWT, and SWT methods, but it has the best preservation of spectral characteristics.

6. CONCLUSION
This paper presented an integrated method for the fusion of pan and MS images. The method merges the HPF method and the PCA method by injecting the high frequency spatial details of the pan image into the first principal component of the MS image. Simulation experiments were carried out to optimize the GLPF used to smooth the pan image. It was concluded from the experiments that when the ratio between the spatial resolution of the pan image and that of the MS image is 1/2, a GLPF with a window size of 5×5 and a σ of about 1.5 produces a sharpened MS image with the least spectral distortion and the best spatial quality; when the ratio is 1/4, a GLPF with a window size of 7×7 and a σ of about 1.5 gives the best results. The experiments have shown that the proposed fusion method significantly reduces the spectral distortion in comparison with the PCA, HPF, GS, DWT, and SWT fusion methods. The spatial quality of the proposed fusion method is higher than that of the HPF fusion method and comparable to that of the PCA, GS, DWT, and SWT fusion methods.

Table 5. crh Values of All Bands with the Different Fusion Methods for the Spot4 Images

Method     Band1    Band2    Band3    Band4
Proposed   0.9363   0.9408   0.892    0.9171
PCA        0.9241   0.9421   0.9412   0.936
HPF        0.8441   0.8697   0.8564   0.8639
GS         0.9033   0.9545   0.899    0.8956
DWT        0.8912   0.8780   0.8727   0.8643
SWT        0.8918   0.8704   0.8718   0.8955

REFERENCES
1. Z. Wang, D. Ziou, C. Armenakis, D. Li, and Q. Li, "A comparative analysis of image fusion methods," IEEE Trans. Geosci. Remote Sens. 43, 1391–1402 (2005).
2. Y. Yang, C. Han, X. Kang, and D. Han, "An overview on pixel-level image fusion in remote sensing," in Proceedings of the IEEE International Conference on Automation and Logistics (IEEE, 2007), pp. 2339–2344.
3. X. Otazu, M. González-Audícana, O. Fors, and J. Núñez, "Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods," IEEE Trans. Geosci. Remote Sens. 43, 2376–2385 (2005).
4. J. Liu, "Smoothing filter-based intensity modulation: a spectral preserve image fusion technique for improving spatial details," Int. J. Remote Sens. 21, 3461–3472 (2000).
5. D. A. Yocky, "Multiresolution wavelet decomposition image merger of Landsat Thematic Mapper and SPOT panchromatic data," Photogramm. Eng. Remote Sens. 62, 1067–1074 (1996).
6. J. Zhou, D. Civco, and J. Silander, "A wavelet transform method to merge Landsat TM and SPOT panchromatic data," Int. J. Remote Sens. 19, 743–757 (1998).
7. P. Scheunders and S. De Backer, "Fusion and merging of multispectral images with use of multiscale fundamental forms," J. Opt. Soc. Am. A 18, 2468–2477 (2001).
8. A. Garzelli and F. Nencini, "PAN-sharpening of very high resolution multispectral images using genetic algorithms," Int. J. Remote Sens. 27, 3273–3292 (2006).
9. B. Aiazzi, L. Alparone, S. Baronti, and A. Garzelli, "Context-driven fusion of high spatial and spectral resolution data based on oversampled multiresolution analysis," IEEE Trans. Geosci. Remote Sens. 40, 2300–2312 (2002).
10. L. Alparone, S. Baronti, A. Garzelli, and F. Nencini, "Remote sensing image fusion using the curvelet transform," Inf. Fusion 8, 143–156 (2007).
11. T. Stathaki, Image Fusion: Algorithms and Applications, 1st ed. (Elsevier, 2008).
12. C. Laben and B. Brower, "Process for enhancing the spatial resolution of multispectral imagery using pan-sharpening," U.S. Patent 6,011,875 (January 4, 2000).
13. B. Aiazzi, L. Alparone, S. Baronti, A. Garzelli, and M. Selva, "MTF-tailored multiscale fusion of high-resolution MS and pan imagery," Photogramm. Eng. Remote Sens. 72, 591–596 (2006).
14. T. Bretschneider and O. Kao, "Image fusion in remote sensing," in Proceedings of the Online Symposium for Electronic Engineers (2000).
15. A. Das and K. Revathy, "A comparative analysis of image fusion techniques for remote sensed images," in Proceedings of the World Congress on Engineering (WCE 2007), London, 2–4 July 2007, Vol. I, pp. 639–644.
16. L. Shutao, "Multisensor remote sensing image fusion using stationary wavelet transform: effects of basis and decomposition level," Int. J. Wavelets Multiresolution Inf. Process. 6, 37–50 (2008).
17. R. Gonzalez and R. Woods, Digital Image Processing, 3rd ed. (Prentice-Hall, 2007).
