J Indian Soc Remote Sens DOI 10.1007/s12524-016-0554-9

RESEARCH ARTICLE

A New Pansharpening Approach Based on NonSubsampled Contourlet Transform Using Enhanced PCA Applied to SPOT and ALSAT-2A Satellite Images

Soumya Ourabia 1 & Youcef Smara 1

Received: 23 August 2015 / Accepted: 11 January 2016
© Indian Society of Remote Sensing 2016

Abstract The pansharpening process aims to merge the high spatial resolution of the panchromatic (Pan) image with the spectral information of the multispectral (MS) images. The fused images should present an enhanced spatial resolution while preserving the spectral information. In the last two decades, many pansharpening algorithms have been proposed in the literature, such as IHS, PCA and HPF. In comparison with these conventional methods, our contribution is the design of a new fusion scheme combining two different approaches: the Principal Component Analysis (PCA) and the NonSubsampled Contourlet Transform (NSCT). The idea behind this combination is to use PCA first, as a statistical approach, to extract the main information from the MS bands, followed by the NSCT, as a robust multiresolution and multidirectional approach that gives an optimal representation of the image characteristics compared to the classical methods (wavelets), in order to overcome the spectral distortion caused by PCA. The focus of this study is to show a new way of combining those two approaches, different from the usual one, to find a compromise between enhancing the spatial resolution and preserving the spectral information at the same time. The quality of the resulting images has been evaluated by

* Soumya Ourabia ([email protected]); Youcef Smara ([email protected])

1 Faculty of Electronics and Computer Science, Image Processing and Radiation Laboratory, University of Sciences and Technology Houari Boumediene (USTHB), BP N° 32, El-Alia, 16111 Bab Ezzouar, Algiers, Algeria

the visual interpretation and the statistical assessment, to prove its efficiency compared to other conventional methods.

Keywords Pansharpening · NSCT · PCA · Spatial and spectral resolutions · ALSAT-2A · SPOT

Introduction

Many earth observation satellites like SPOT, IKONOS and, more recently, the Algerian ALSAT-2A, provide two different types of images: multispectral (MS) and panchromatic (Pan). The pansharpening process aims to merge two or more images of different sources and resolutions, representing the same geographic area, in order to create a single composition containing the spatial features of the Pan image while preserving the spectral information of the MS bands. A large number of pansharpening techniques and applications have been proposed in the literature to enhance the spatial quality of multispectral images (Pohl and Van Genderen 1998; Ranchin and Wald 2000). Among the widely used methods in the image fusion field, we find the intensity-hue-saturation (IHS), a well-known fusion method initially introduced by Haydn et al. (1982), and the Principal Component Analysis (PCA) presented by Jolliffe (1986). The IHS method is a common tool for the fusion process; it has been employed by Gillespie et al. (1986) to improve the colors in the image, and by Chavez and Bowell (1988), who compared the spectral content between two different satellites in different regions. IHS is a transform that is originally executed on exactly three MS modalities. When more than three bands are available, as in the case of the IKONOS satellite, Tu et al. (2004) presented a Generalized IHS (GIHS). In 2010, Rahmani et al. proposed two new modifications, using an adaptive IHS pansharpening method to obtain more


accurate spectral resolution and also to increase the spectral fidelity beyond the edges. Other techniques used in the pansharpening field include the PCA transformation, a multi-band orthogonal linear statistical transformation. Several researchers have developed PCA-based methods, for example Chavez and Kwarteng (1989), who used selective PCA to extract the spectral contrast in Landsat Thematic Mapper image data, and also Ehlers (1991) and Yesou et al. (1993), who used PCA in their schemes, described the pansharpening process based on this approach in detail, and gave the different steps required to implement the algorithm. Usually, IHS and PCA give a good spatial resolution; the images look very clear and have sharp edges. The main drawback of these methods is the presence of spectral distortion in the pansharpened results (Shettigara 1992; Zhou et al. 1998). Many methods based on multiscale or multiresolution image representations, including the Laplacian pyramid (Burt and Adelson 1983), the morphological pyramid (Toet et al. 1989) and the wavelet transform (Zhang and Blum 1999), have been proposed for pansharpening algorithms and proved their efficiency at that time. The 2D Discrete Wavelet Transform (DWT), as proposed by Mallat (1989), and the "à trous" algorithm, developed by Ranchin (1993), represent the most popular image representations applied in the remote sensing domain, thanks to their suitability for Multi-Resolution Analysis (MRA). Wavelets are useful for preserving spectral information, but they are weak at expressing spatial characteristics because of their limited ability to represent anisotropic edges and contours in the image. Therefore, many other image transforms that provide a good representation of the geometrical characteristics are used, such as Ridgelets and Curvelets (Sveinsson and Benediktsson 2007) and Contourlets (Do and Vetterli 2001).
In the last few years, pansharpening algorithms based on the contourlet transform, a kind of multiscale analysis tool very different from wavelets, have become a promising technique to produce images with both high spatial and spectral resolutions (Choi et al. 2005). The Standard Contourlet Transform (SCT), introduced by Do and Vetterli (2005), is an extension of the DWT in two dimensions. The SCT is constructed by applying two successive stages: a Laplacian pyramid (LP) and a directional filter bank (DFB). While this approach can represent smooth edges and contours in many orientations, the DWT lacks this ability and can only catch point discontinuities in a limited number of directions. The majority of works in the pansharpening domain deal with the NonSubsampled Contourlet Transform (NSCT) (Cunha et al. 2006) because of its shift-invariance and high frequency selectivity properties. In this context, we find the study realized by Shah et al. (2007), which proposes a new alternative fusion algorithm based on the merger of PCA and the contourlet

transform. In 2008, the same authors extended their work and combined the contourlet transform with an adaptive PCA approach (Shah et al. 2008). More recently, El-Mezouar et al. (2014) estimated the number of decomposition levels in each modality and the position of the upsampling process applied to the multispectral bands, before or after the contourlet transform procedure. The contourlet transform has been used not only for the fusion of optical images, but has also taken part in the fusion of multispectral bands with synthetic aperture radar (SAR) imagery (Yang et al. 2009; Roberts et al. 2011). In this work, we present a new pansharpening scheme inspired by the method proposed by Minhayenud et al. (2008). The proposed technique takes advantage of PCA to keep the maximum of the information present in the multispectral bands; it also benefits from the multiresolution analysis and the high directionality offered by the NSCT, in order to extract the main features, detect the contours of objects in several orientations and reduce the spectral distortion. The fused multispectral bands are obtained using a fast linear reconstruction, which shows the simplicity and originality of our fusion scheme. Initially, the authors combined the PCA approach and the fast IHS transform in order to extract the information of the low resolution and incorporate it with the IHS, which is appropriate to inject the high resolution (Minhayenud et al. 2008). Our idea, using an MRA tool, designs a new way to merge remotely sensed images and enhance their spatial and spectral aspects. This method is evaluated on two databases (SPOT and ALSAT-2A images); the main difference between the two sensors is the resolution ratio produced by each one. This paper is organized as follows: in the next section, we describe the theoretical basis of the NSCT and PCA. Then, we present the algorithm of our contribution based on the enhanced PCA and the NSCT transform.
After that, we give a qualitative and quantitative assessment accompanied by a discussion of the obtained results. Finally, we conclude with a comparison of the merged images obtained by several fusion techniques, illustrating the contribution of our fusion scheme to the improvement of optical satellite images.

Theoretical Basis

In this section, we introduce the theoretical aspects of the NSCT and PCA used in this work to realize the pansharpening process.

The Nonsubsampled Contourlet Transform

In order to build a flexible and efficient transform, the NonSubsampled Contourlet Transform (NSCT) has been proposed (Cunha et al. 2006). It has a fast implementation and provides a better representation of contours.


The NSCT is a non-decimated version of the standard contourlet transform (SCT) (Do and Vetterli 2001; Do and Vetterli 2005): a fully shift-invariant, multiscale and multidirectional expansion whose core is the nonseparable two-channel NonSubsampled Filter Bank (NSFB). The less stringent design conditions of the NSFB lead to an NSCT with better frequency selectivity and regularity than the SCT. To achieve the shift-invariance property, the NSCT is built by coupling a NonSubsampled Pyramid (NSP) with a NonSubsampled Directional Filter Bank (NSDFB).

The Principal Component Analysis

Usually, the PCA transform is applied to multidimensional data in order to extract the main information and express it in a new coordinate system of Principal Components (PCs), ordered according to the variance of each component. The first coordinate carries the largest variance (and is called the first principal component), the second coordinate the second largest variance, and so on (Duda and Hart 1973).
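As a concrete illustration of the band-wise PCA described above, the following is a minimal NumPy sketch (the function names and the (N, H, W) array layout are our own assumptions, not from the paper): it computes the PCs of a stack of MS bands from the N × N covariance of the flattened bands and provides the inverse transform for reconstruction.

```python
import numpy as np

def pca_forward(ms_bands):
    """Project an (N, H, W) stack of MS bands onto its principal components.

    Returns (pcs, mean, eigvecs); pcs[0] is PC1, the component carrying the
    largest variance, which is the band used for fusion with Pan.
    """
    n, h, w = ms_bands.shape
    flat = ms_bands.reshape(n, -1).astype(np.float64)   # one row per band
    mean = flat.mean(axis=1, keepdims=True)
    centered = flat - mean
    cov = centered @ centered.T / (flat.shape[1] - 1)   # N x N band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)              # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]                   # re-sort: variance desc.
    eigvecs = eigvecs[:, order]
    pcs = (eigvecs.T @ centered).reshape(n, h, w)
    return pcs, mean, eigvecs

def pca_inverse(pcs, mean, eigvecs):
    """Invert pca_forward (useful once PC1 has been modified)."""
    n, h, w = pcs.shape
    return (eigvecs @ pcs.reshape(n, -1) + mean).reshape(n, h, w)
```

In the proposed scheme only PC1 enters the NSCT stage; the inverse is shown for completeness, since the final fusion in this paper is done by the fast linear reconstruction of Eq. (5) rather than by an inverse PCA.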

Enhanced PCA-NSCT Pansharpening Process

In this section, we present our pansharpening scheme to merge the MS and Pan images. This method incorporates the properties of the PCA with the great advantage of the multiple resolutions and orientations established by the NSCT. The steps followed to implement our pansharpening scheme are listed below:

1. The original Pan and MS images are geometrically registered to each other.
2. Upsample the MS bands to the size of the Pan image, using bicubic interpolation, so that they can be superimposed.
3. Perform the PCA on the upsampled MS bands to obtain the principal components, whose number depends on the number of multispectral bands provided by the corresponding satellite.
4. Apply histogram matching to adjust the brightness of Pan to correspond with that of PC1.
5. Apply the NSCT to both PC1 and the histogram-matched Pan' to generate, respectively, the approximation (app_pc, app_pan) and detail (det_pc_{i,j}, det_pan_{i,j}) coefficients that we need to merge, where (i, j) are subband indices corresponding to scale i and direction j. In this step, we assume two and three decomposition levels on each

corresponding image separately, in order to test the efficiency of our technique. For two and three decomposition levels, {2, 4} and {2, 4, 8} directions are used, respectively, from the coarser to the finer scale during the decomposition process.
6. Introduce the fusion rule between the coefficients stemming from each element, to provide the new approximation (Im_app) and new detail (Im_det_{i,j}) images, differently from usual, as follows:

Im_app = app_pan    (1)

Im_det_{i,j} = det_pc_{i,j}    (2)

7. Apply the inverse contourlet transform to reconstruct the new image "reconst_im":

reconst_im = Im_app ∪ Im_det_{i,j}    (3)

8. Calculate the delta parameter to perform the fast calculation leading to the fused images:

δ = Pan′ − reconst_im    (4)

[bf1, bf2, bf3, bf4]^T = [b1 + δ, b2 + δ, b3 + δ, b4 + δ]^T    (5)

{b1, b2, b3, b4} and {bf1, bf2, bf3, bf4} represent, respectively, the original and the fused images. To implement our method, we have adopted for the NSCT the "9-7" filter banks, generated by the one-dimensional biorthogonal filters, to produce the LP decomposition stage, while the DFB decomposition uses the "PKVA" filter banks with a ladder structure, proposed by Phoong et al. (1995). In order to test our method, we have introduced different numbers of decomposition levels. In this fusion scheme, we use the histogram matching procedure to adjust the brightness of the Pan image to correspond with that of PC1 obtained from the original MS bands. The main change introduced by this method is the joining of the approximation obtained from the adjusted Pan with the details stemming from PC1 via the NSCT, contrary to the usual methods. Therefore, this way of fusion retains a large amount of information in the reconstructed image. The δ image represents the difference defined in expression (4). It is added to the upsampled MS bands, due to its richness in edges and contours (see Fig. 3), in order to obtain the fused images with a high degree of spectral and spatial information simultaneously.
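The steps above can be sketched end-to-end in NumPy. Since no NSCT implementation exists in the standard scientific Python stack, this illustration substitutes a simple "à trous" (undecimated) pyramid for the NSCT's nonsubsampled pyramid stage and omits the directional filter bank entirely; all function names are ours, and the sketch only mirrors the structure of Eqs. (1)-(5), not the exact filters used in the paper.

```python
import numpy as np

# Stand-in for the NSCT pyramid stage: an "a trous" decomposition with a
# dilated B3-spline kernel. Directional (NSDFB) filtering is omitted here.
def _sep_blur(img, k):
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)

def atrous(img, levels=2):
    base = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    approx, details = img.astype(np.float64), []
    for lev in range(levels):
        k = np.zeros(4 * 2 ** lev + 1)
        k[:: 2 ** lev] = base                      # dilate the kernel per scale
        blur = _sep_blur(approx, k)
        details.append(approx - blur)              # band-pass "detail" plane
        approx = blur
    return approx, details

def hist_match(src, ref):
    """Step 4: remap src's grey levels so its histogram matches ref's."""
    out = np.empty(src.size)
    out[np.argsort(src.ravel())] = np.sort(ref.ravel())
    return out.reshape(src.shape)

def fuse(pan, ms_up):
    """Steps 3-8 for ms_up, an (N, H, W) stack already on the Pan grid."""
    flat = ms_up.reshape(ms_up.shape[0], -1)
    flat_c = flat - flat.mean(axis=1, keepdims=True)
    _, vecs = np.linalg.eigh(flat_c @ flat_c.T)
    pc1 = (vecs[:, -1] @ flat_c).reshape(pan.shape)   # largest-variance PC
    pan_adj = hist_match(pan, pc1)                    # step 4
    app_pan, _ = atrous(pan_adj)                      # Eq. (1): Pan approximation
    _, det_pc = atrous(pc1)                           # Eq. (2): PC1 details
    reconst = app_pan + sum(det_pc)                   # Eq. (3)
    delta = pan_adj - reconst                         # Eq. (4)
    return ms_up + delta                              # Eq. (5): inject delta
```

Note that, as in Eq. (5), the same δ plane is added to every upsampled band, which is what makes the final reconstruction a fast linear step.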


Fusion Results

Study Area and Image Data Sets

In this study, two test regions are selected. They are located in the northern part of Algeria and imaged by different satellites: SPOT and ALSAT-2A (Fig. 1).

a) SPOT dataset:

The SPOT images were taken over an urban area close to the bay of Algiers: a Pan image (512 × 512 pixels) with high spatial resolution and three MS images (256 × 256 pixels), representing three high spectral resolution channels (Fig. 2). These images come from the HRV sensor of the SPOT satellite and were acquired on April 1, 1997. The ratio between the two resolutions is 1/2.

b) ALSAT-2A dataset:

The selected scene is located in the region of Oran, in the west of Algeria. It was acquired by the Algerian satellite ALSAT-2A on February 9, 2011 (Fig. 2). Two kinds of images are produced by this satellite: a Pan image (1024 × 1024 pixels) and MS images (256 × 256 pixels). The ratio between the spatial resolution of the Pan image and that of the MS images is 1/4. Table 1 presents more details about each dataset. The figures represent different heterogeneous areas. They have been chosen where the main coverage is either vegetation or urban, with a large quantity of contours and lines

such as streets and roads and different shapes of structures and buildings. For a clear visualization, we only display a small region of interest (Fig. 2c, Fig. 2f), to better differentiate and compare the results obtained using the different fusion methods.

Subjective Assessment

In this part, we start by showing in Fig. 3 below the intermediate results obtained during our pansharpening process, as mentioned in the last section. After that, we present the fused results corresponding to the SPOT and ALSAT-2A satellites in Fig. 4. Visually, it appears that our method applied to the SPOT bands (Fig. 4e, Fig. 4g) preserves the spectral information and improves the spatial aspect simultaneously, compared to the other methods. Lines, contours, roads, buildings and different structures are easy to distinguish in heterogeneous areas, while the colors seem to be preserved in homogeneous parts, as presented in those figures. The same impression is noticed when dealing with the ALSAT-2A images (Fig. 4n, Fig. 4p). In contrast, the scene seems fuzzy, and the lines and contours surrounding objects on the ground are a little blurred, in the results of the usual techniques such as PCA combined with NSCT (Fig. 4d, Fig. 4f, Fig. 4m, Fig. 4o). The pansharpened images in (Fig. 4c, Fig. 4l) show the PCA based IHS method applied to the SPOT and ALSAT-2A datasets, respectively. They illustrate some improvement in the spatial content to the detriment of the spectral information when compared to the original images (Fig. 4b, Fig. 4k). In (Fig. 4h, Fig. 4i) and (Fig. 4q, Fig. 4r), we can see that the structures are highly injected but the spectral information is poorly handled. The NSCT alone is able to improve the quality of the fused image; for that reason, we have used the NSCT in our technique to take advantage of its properties, which lead to very good results compared to the others presented in this paper. When we use the NSCT with three LP decomposition levels (Fig. 4g, Fig. 4p), the spatial quality of the image seems to be improved. However, we notice a slight spectral distortion, which we consider as a decrease of the spectral information due to the better separation of structures in the synthetic images. Details seem to be clearer than those produced with two LP decomposition levels, as shown in (Fig. 4e, Fig. 4n).

Objective Assessment

To complete the quality assessment of the pansharpened images and judge the goodness of their spatial enhancement and spectral preservation, we have used some quality metrics, proposed by several researchers, that are well known in the pansharpening field.

Fig. 1 Synoptic scheme of the proposed method

a) Spectral Quality Assessment:

Fig. 2 a) SPOT Pan image, b) the colored composition of MS images, c) zoom on contour region, d) ALSAT-2A Pan image, e) the colored composition of MS images, f) zoom on contour region.

Six spectral quality metrics were applied; their calculation is based only on the original MS and the pansharpened bands: the Deviation Index (DI) (Costantini et al. 1997), the Spectral Discrepancy (SD) (Li et al. 2002), the Root-Mean-Square Error (RMSE) (Dobson et al. 1995), the Correlation Coefficient (CC) (Robert et al. 1973), the Relative Average Spectral Error (RASE) index and the ERGAS, the abbreviation of the relative dimensionless global error in synthesis (Wald 2000).

b) Spatial Quality Assessment:

Usually, the spatial indices are less used to evaluate the fused image than the spectral ones; commonly, visual inspection alone is considered sufficient for the assessment of pansharpening methods. In this paper, we have used three evaluation parameters: the High-Pass correlation coefficient (HP-CC) (Zhou et al. 1998), the Canny Edge Correspondence (CEC) (Canny 1986) and the RMSE of the Sobel-filtered Pan and fused images (S_RMSE) (Pradhan et al. 2006). These spectral and spatial quality metrics can be found in (Witharana et al. 2013). To confirm the performance of our fusion method and support the visual inspection, the quantitative evaluation is presented in this part. We have used, from left to right, six spectral quality metrics and three spatial factors, listed in Table 2 for SPOT images and in Table 3 for ALSAT-2A images. According to these tables, we notice high and low variations in the values obtained for each method. In Table 2, we can observe that the spectral quality values of the proposed method are lower than those obtained from PCA based IHS and close to 0, knowing that zero is assumed to be the best value, in terms of DI, SD and RMSE in
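For illustration, some of the reference-based spectral metrics above can be written in a few lines of NumPy. This is a hedged sketch: the function names are ours, and the RASE/ERGAS formulas follow their commonly published definitions rather than any particular implementation used by the authors.

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square error between two bands of the same size."""
    return float(np.sqrt(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)))

def cc(a, b):
    """Correlation coefficient between two bands (1.0 is ideal)."""
    return float(np.corrcoef(np.ravel(a), np.ravel(b))[0, 1])

def rase(ms, fused):
    """Relative Average Spectral Error; ms and fused are (N, H, W) stacks."""
    m = float(np.mean(ms))
    return 100.0 / m * float(np.sqrt(np.mean([rmse(ms[i], fused[i]) ** 2
                                              for i in range(len(ms))])))

def ergas(ms, fused, ratio):
    """Relative dimensionless global error in synthesis (Wald 2000).

    ratio is the Pan/MS pixel-size ratio, e.g. 1/2 for SPOT, 1/4 for ALSAT-2A,
    which is why ERGAS differs between the two sensors for similar RMSEs.
    """
    terms = [(rmse(ms[i], fused[i]) / float(np.mean(ms[i]))) ** 2
             for i in range(len(ms))]
    return 100.0 * ratio * float(np.sqrt(np.mean(terms)))
```

All four metrics reach 0 (or 1 for CC) when the fused bands equal the originals, which matches the "close to 0 is best" reading used in the tables.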

Table 1  Characteristics of SPOT and ALSAT-2A images: Pan and multispectral bands

| Satellite | Modality | Band  | Spectral Resolution | Spatial Resolution | Radiometric Resolution | Band Size   |
|-----------|----------|-------|---------------------|--------------------|------------------------|-------------|
| SPOT      | MS       | XS1   | 0.50-0.59 μm        | 20 m × 20 m        | 8 bits                 | 256 × 256   |
| SPOT      | MS       | XS2   | 0.61-0.68 μm        | 20 m × 20 m        | 8 bits                 | 256 × 256   |
| SPOT      | MS       | XS3   | 0.79-0.89 μm        | 20 m × 20 m        | 8 bits                 | 256 × 256   |
| SPOT      | Pan      | Pan   | 0.51-0.73 μm        | 10 m × 10 m        | 8 bits                 | 512 × 512   |
| ALSAT-2A  | MS       | Blue  | 0.45-0.52 μm        | 10 m × 10 m        | 10 bits                | 256 × 256   |
| ALSAT-2A  | MS       | Green | 0.53-0.59 μm        | 10 m × 10 m        | 10 bits                | 256 × 256   |
| ALSAT-2A  | MS       | Red   | 0.62-0.69 μm        | 10 m × 10 m        | 10 bits                | 256 × 256   |
| ALSAT-2A  | MS       | NIR   | 0.76-0.89 μm        | 10 m × 10 m        | 10 bits                | 256 × 256   |
| ALSAT-2A  | Pan      | Pan   | 0.45-0.745 μm       | 2.5 m × 2.5 m      | 10 bits                | 1024 × 1024 |

Fig. 3 Example of intermediate results of our proposed method: a) adjusted Pan image from the SPOT satellite, b) the reconstructed image obtained by the inverse NSCT, c) the difference image δ.

all bands, whereas the RASE and ERGAS rates decrease from 46.017 and 24.326 (PCA_IHS) to 0.709 and 0.413 (proposed method), respectively. This proves the ability of our method to enhance the spectral quality that is lacking in the PCA based IHS technique.

Fig. 4 Results of different pansharpening methods using SPOT and ALSAT-2A images: a) SPOT_Pan image, b) colored composition of SPOT_MS bands, c) SPOT_PCA_IHS, d) SPOT_PCA_NSCT_2levels, e) SPOT_Enhanced PCA_NSCT_2levels, f) SPOT_PCA_NSCT_3levels, g) SPOT_Enhanced PCA_NSCT_3levels, h) SPOT_IHS, i) SPOT_HPF, j) ALSAT2A_Pan image, k) colored composition of ALSAT2A_MS bands, l) ALSAT2A_PCA_IHS, m) ALSAT2A_PCA_NSCT_2levels, n) ALSAT2A_Enhanced PCA_NSCT_2levels, o) ALSAT2A_PCA_NSCT_3levels, p) ALSAT2A_Enhanced PCA_NSCT_3levels, q) ALSAT2A_IHS, r) ALSAT2A_HPF

Table 2  Spectral and spatial evaluations of SPOT results (two and three decomposition levels)

| Method                    | Band | DI    | Spec_D | RMSE   | CC    | RASE   | ERGAS  | SCC   | CEC    | S_RMSE |
|---------------------------|------|-------|--------|--------|-------|--------|--------|-------|--------|--------|
| IHS                       | b1   | 0.288 | 30.086 | 29.890 | 0.762 | 25.800 | 12.503 | 0.988 | 69.777 | 15.619 |
|                           | b2   | 0.254 | 15.225 | 16.643 | 0.941 |        |        | 0.992 | 84.422 | 8.081  |
|                           | b3   | 0.305 | 29.706 | 19.927 | 0.065 |        |        | 0.988 | 72.055 | 15.017 |
| HPF                       | b1   | 0.543 | 54.949 | 54.957 | 0.897 | 61.429 | 32.167 | 0.991 | 88.636 | 10.372 |
|                           | b2   | 0.817 | 53.339 | 53.337 | 0.826 |        |        | 0.910 | 60.982 | 25.764 |
|                           | b3   | 0.599 | 54.902 | 54.967 | 0.832 |        |        | 0.932 | 35.265 | 38.005 |
| PCA_IHS                   | b1   | 0.372 | 39.491 | 41.095 | 0.959 | 46.017 | 24.326 | 0.936 | 71.069 | 22.912 |
|                           | b2   | 0.573 | 39.596 | 41.519 | 0.963 |        |        | 0.972 | 76.185 | 13.204 |
|                           | b3   | 0.438 | 39.594 | 39.675 | 0.605 |        |        | 0.923 | 62.624 | 26.875 |
| PCA_NSCT_2levels          | b1   | 0.007 | 0.627  | 0.212  | 0.998 | 0.249  | 0.143  | 0.420 | 18.834 | 56.453 |
|                           | b2   | 0.025 | 1.726  | 0.282  | 0.987 |        |        | 0.957 | 70.927 | 26.875 |
|                           | b3   | 0.023 | 2.477  | 0.146  | 0.985 |        |        | 0.947 | 62.771 | 21.586 |
| Enhanced PCA_NSCT_2levels | b1   | 0.035 | 3.220  | 0.599  | 0.953 | 0.709  | 0.413  | 0.901 | 35.165 | 41.296 |
|                           | b2   | 0.043 | 2.983  | 0.834  | 0.965 |        |        | 0.965 | 70.941 | 21.133 |
|                           | b3   | 0.029 | 3.086  | 0.360  | 0.977 |        |        | 0.965 | 66.288 | 18.584 |
| PCA_NSCT_3levels          | b1   | 0.007 | 0.642  | 0.212  | 0.998 | 0.458  | 0.272  | 0.421 | 19.031 | 56.328 |
|                           | b2   | 0.035 | 2.392  | 0.578  | 0.977 |        |        | 0.959 | 71.883 | 25.476 |
|                           | b3   | 0.031 | 3.340  | 0.341  | 0.973 |        |        | 0.953 | 78.827 | 17.894 |
| Enhanced PCA_NSCT_3levels | b1   | 0.047 | 4.309  | 1.108  | 0.919 | 1.295  | 0.750  | 0.910 | 34.323 | 39.663 |
|                           | b2   | 0.059 | 4.097  | 1.505  | 0.939 |        |        | 0.968 | 66.216 | 20.063 |
|                           | b3   | 0.039 | 4.176  | 0.678  | 0.959 |        |        | 0.971 | 82.636 | 14.202 |

To assess the spatial quality, we have seen that the values obtained by our method for every spatial factor are good for each band. They are only slightly different from those computed for the PCA based IHS technique in the case of two decomposition levels, thanks to the capacity of both techniques to reinforce the spatial content with structures. The SCC and CEC computed for our method using three decomposition levels show increasing values compared to PCA_IHS: we have obtained respectively (0.971, 0.968, 0.910) and (82.636, 66.216, 34.323) for the single bands, while we noticed a decrease of S_RMSE from (18.584, 21.133, 41.296) in the two-decomposition-level case, presented in Table 2, to (14.202, 20.063, 39.663) in the three-level case. This validates the high level of detail injection and the clear appearance of edges and contours in the fused image obtained by our method. The spatial quantities increase in the case of three decomposition levels compared to two decomposition levels, which confirms the slight degradation of the colors. The evaluation of ALSAT-2A images is presented in Table 3. Through the values, we have practically observed the same behaviour as for the SPOT results. The only difference between the two datasets is the increase of the RASE and ERGAS values due to the dissimilar resolution ratio and number of bands provided by each sensor; these two characteristics are used in the calculation of the RASE and ERGAS parameters. In general, the PCA based NSCT methods provide good spectral quality, as the values in the tables show, but do not provide the spatial quality as well as our method.

In addition to the quality metrics listed before, we have used a new index, the Quality Not requiring Reference (QNR), recently proposed by Alparone et al. (2008), which is one of the few tools available for evaluating the quality of pansharpened images at the desired high resolution. The QNR factor combines two indexes, one appropriate to the spectral distortion Dλ and the other to the spatial distortion Ds. However, keeping the two indexes separated is essential for comparisons with the proposed protocol (Khan et al. 2009).

QNR = (1 − Dλ)^α · (1 − Ds)^β    (6)

The product is weighted by the parameters α and β applied to the two separate values Dλ and Ds, respectively. The higher the QNR index, the better the quality of the fused product. The maximum theoretical value of this index is 1, reached when both Dλ and Ds are equal to 0 (Vivone et al. 2015).

Dλ = [ (1 / (N(N−1))) · Σ_{i=1..N} Σ_{j=1..N, j≠i} | Q(b̂_i, b̂_j) − Q(b_i, b_j) |^p ]^(1/p)    (7)

Ds = [ (1/N) · Σ_{i=1..N} | Q(b_i, Pan) − Q(b̂_i, P̂an) |^q ]^(1/q)    (8)
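The QNR protocol of Eqs. (6)-(8) can be sketched in NumPy as follows. This is an illustrative assumption-laden version: it uses the global (single-window) Wang-Bovik Q index in place of the sliding-window Q used in practice, the argument names (`fused`, `ms`, `pan`, `pan_lr` for the degraded Pan) are ours, and `fused`/`ms` are (N, H, W) and (N, h, w) band stacks.

```python
import numpy as np

def q_index(x, y):
    """Wang-Bovik universal image quality index, computed globally here
    (real QNR implementations evaluate it over sliding blocks)."""
    x, y = np.ravel(x).astype(float), np.ravel(y).astype(float)
    mx, my, vx, vy = x.mean(), y.mean(), x.var(), y.var()
    cxy = np.mean((x - mx) * (y - my))
    return 4.0 * cxy * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

def d_lambda(fused, ms, p=1):
    """Eq. (7): spectral distortion from inter-band Q differences."""
    n = len(fused)
    acc = sum(abs(q_index(fused[i], fused[j]) - q_index(ms[i], ms[j])) ** p
              for i in range(n) for j in range(n) if j != i)
    return (acc / (n * (n - 1))) ** (1.0 / p)

def d_s(fused, ms, pan, pan_lr, q=1):
    """Eq. (8): spatial distortion against Pan and its degraded version."""
    n = len(fused)
    acc = sum(abs(q_index(fused[i], pan) - q_index(ms[i], pan_lr)) ** q
              for i in range(n))
    return (acc / n) ** (1.0 / q)

def qnr(fused, ms, pan, pan_lr, alpha=1, beta=1):
    """Eq. (6): 1 is the ideal value (no spectral or spatial distortion)."""
    return ((1 - d_lambda(fused, ms)) ** alpha
            * (1 - d_s(fused, ms, pan, pan_lr)) ** beta)
```

With α = β = p = q = 1, a perfect result (fused bands identical to the references, degraded Pan consistent with Pan) yields Dλ = Ds = 0 and QNR = 1, matching the interpretation given above.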

Table 3  Spectral and spatial evaluations of ALSAT-2A results (two and three decomposition levels)

| Method                    | Band | DI    | Spec_D | RMSE   | CC    | RASE   | ERGAS  | SCC   | CEC    | S_RMSE  |
|---------------------------|------|-------|--------|--------|-------|--------|--------|-------|--------|---------|
| GIHS                      | b1   | 0.185 | 17.448 | 12.965 | 0.768 | 14.045 | 3.547  | 0.999 | 79.645 | 14.157  |
|                           | b2   | 0.171 | 17.448 | 13.924 | 0.716 |        |        | 0.999 | 85.388 | 11.943  |
|                           | b3   | 0.176 | 17.448 | 16.270 | 0.596 |        |        | 0.999 | 85.369 | 13.869  |
|                           | b4   | 0.163 | 17.448 | 10.638 | 0.778 |        |        | 0.998 | 80.448 | 17.941  |
| HPF                       | b1   | 0.274 | 26.319 | 23.292 | 0.573 | 25.775 | 6.488  | 0.896 | 67.080 | 108.638 |
|                           | b2   | 0.256 | 26.344 | 25.266 | 0.512 |        |        | 0.896 | 67.108 | 109.059 |
|                           | b3   | 0.264 | 26.290 | 28.939 | 0.392 |        |        | 0.894 | 64.484 | 108.944 |
|                           | b4   | 0.246 | 26.363 | 21.762 | 0.611 |        |        | 0.896 | 63.711 | 110.600 |
| PCA_IHS                   | b1   | 1.098 | 93.887 | 94.580 | 0.944 | 97.662 | 24.506 | 0.914 | 79.055 | 38.641  |
|                           | b2   | 0.991 | 93.886 | 94.588 | 0.923 |        |        | 0.913 | 81.955 | 37.852  |
|                           | b3   | 0.993 | 93.886 | 94.667 | 0.865 |        |        | 0.912 | 81.280 | 37.266  |
|                           | b4   | 0.971 | 93.885 | 94.447 | 0.942 |        |        | 0.911 | 75.932 | 41.652  |
| PCA_NSCT_2levels          | b1   | 0.027 | 2.418  | 0.815  | 0.993 | 0.436  | 0.109  | 0.904 | 55.936 | 61.071  |
|                           | b2   | 0.021 | 2.063  | 0.755  | 0.992 |        |        | 0.898 | 56.582 | 69.580  |
|                           | b3   | 0.014 | 1.375  | 0.554  | 0.993 |        |        | 0.869 | 47.576 | 86.829  |
|                           | b4   | 0.026 | 2.613  | 0.764  | 0.993 |        |        | 0.901 | 52.615 | 60.076  |
| Enhanced PCA_NSCT_2levels | b1   | 0.040 | 3.511  | 0.526  | 0.984 | 0.474  | 0.119  | 0.907 | 65.411 | 43.982  |
|                           | b2   | 0.035 | 3.434  | 0.362  | 0.978 |        |        | 0.905 | 65.810 | 45.644  |
|                           | b3   | 0.034 | 3.299  | 0.109  | 0.957 |        |        | 0.903 | 62.795 | 49.981  |
|                           | b4   | 0.036 | 3.605  | 0.541  | 0.987 |        |        | 0.905 | 60.626 | 47.189  |
| PCA_NSCT_3levels          | b1   | 0.045 | 3.937  | 0.460  | 0.980 | 0.753  | 0.189  | 0.914 | 72.282 | 51.799  |
|                           | b2   | 0.034 | 3.315  | 0.493  | 0.980 |        |        | 0.908 | 68.621 | 62.018  |
|                           | b3   | 0.023 | 2.162  | 0.395  | 0.981 |        |        | 0.880 | 61.900 | 82.138  |
|                           | b4   | 0.044 | 4.410  | 0.479  | 0.981 |        |        | 0.918 | 66.997 | 50.022  |
| Enhanced PCA_NSCT_3levels | b1   | 0.078 | 6.913  | 0.705  | 0.941 | 1.330  | 0.337  | 0.921 | 77.159 | 35.020  |
|                           | b2   | 0.072 | 6.962  | 1.061  | 0.917 |        |        | 0.918 | 76.209 | 37.286  |
|                           | b3   | 0.073 | 7.035  | 2.201  | 0.840 |        |        | 0.917 | 71.459 | 41.790  |
|                           | b4   | 0.071 | 7.069  | 0.403  | 0.951 |        |        | 0.918 | 74.399 | 38.107  |

b̂_i and P̂an are the filtered and downsampled versions corresponding, respectively, to the MS and Pan images (Aiazzi et al. 2006). α, β, p and q are usually set to 1 (Alparone et al. 2008), and the block sizes used in the calculation of QNR are:

- SPOT images: 256 × 256 to divide the high resolution and 128 × 128 applied to the low resolution;
- ALSAT-2A images: 512 × 512 on the high resolution and 128 × 128 on the low resolution.

Table 4  Evaluation of SPOT and ALSAT-2A results using the no-reference quality metric (QNR)

| Method                    | SPOT Dλ | SPOT Ds | SPOT QNR | ALSAT-2A Dλ | ALSAT-2A Ds | ALSAT-2A QNR |
|---------------------------|---------|---------|----------|-------------|-------------|--------------|
| IHS/GIHS                  | 0.194   | 0.292   | 0.571    | 0.037       | 0.120       | 0.848        |
| HPF                       | 0.102   | 0.209   | 0.717    | 0.198       | 0.042       | 0.768        |
| PCA_IHS                   | 0.165   | 0.211   | 0.659    | 0.066       | 0.146       | 0.798        |
| PCA_NSCT_2levels          | 0.001   | 0.023   | 0.976    | 0.006       | 0.011       | 0.983        |
| Enhanced PCA_NSCT_2levels | 0.062   | 0.085   | 0.858    | 0.030       | 0.053       | 0.919        |
| PCA_NSCT_3levels          | 0.002   | 0.031   | 0.967    | 0.008       | 0.032       | 0.960        |
| Enhanced PCA_NSCT_3levels | 0.073   | 0.102   | 0.833    | 0.038       | 0.078       | 0.888        |

According to Table 4, the QNR value calculated for the proposed method is close to 1, while a balance between the spectral and spatial distortions (Dλ and Ds) is achieved, e.g. (QNR = 0.919, Dλ = 0.030, Ds = 0.053). Consequently, the


values obtained by both spectral and spatial evaluations prove the efficiency of our pansharpening process in the preservation of the spectral quality and the enhancement of the spatial content of the multispectral images.

Conclusions

In this work, an efficient pansharpening algorithm based on enhanced PCA using the NSCT, applied to SPOT and ALSAT-2A imagery, was presented. The main objective of our method is to merge the high spectral quality of the MS bands and the high spatial content of the Pan image into a single image via a new pansharpening process different from the usual methods. The results obtained by applying our method show a good preservation of the spectral information and a noticeable improvement in the spatial content of the fused image, thanks to the advantages offered by both the PCA and NSCT approaches. Therefore, the fused image produced by our method represents edges, contours and any structure shapes on the ground better than the other methods, as shown in the last section. This is achieved without affecting and distorting the color of objects, or at most by degrading the spectral quality very slightly, as we have revealed in the case of three decomposition levels used for the NSCT. The statistical evaluation confirms the accuracy of our visual statement, while the QNR factor favors the quantitative assessment, which is principally based on both spectral and spatial distortions, over the visual inspection.

Acknowledgments The authors would like to acknowledge the Algerian Space Agency (ASAL) for providing the ALSAT-2A images related to our region of interest.

References

Aiazzi, B., Alparone, L., Baronti, S., Garzelli, A., & Selva, M. (2006). MTF-tailored multiscale fusion of high-resolution MS and Pan imagery. Photogrammetric Engineering & Remote Sensing, 72(5), 591–596.
Alparone, L., Aiazzi, B., Baronti, S., Garzelli, A., Nencini, F., & Selva, M. (2008). Multispectral and panchromatic data fusion assessment without reference. Photogrammetric Engineering & Remote Sensing, 74(2), 193–200.
Burt, P. J., & Adelson, E. H. (1983). The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, 31(4), 532–540.
Canny, J. (1986). A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8(6), 679–698.
Chavez, P. S. J., & Bowell, J. A. (1988). Comparison of the spectral information content of Landsat Thematic Mapper and SPOT for three different sites in the Phoenix, Arizona region. Photogrammetric Engineering & Remote Sensing, 54(12), 1699–1708.
Chavez, P. S., & Kwarteng, A. Y. (1989). Extracting spectral contrast in Landsat Thematic Mapper image data using selective principal component analysis. Photogrammetric Engineering & Remote Sensing, 55(3), 339–348.
Choi, M., Kim, R. Y., Nam, M. R., & Kim, H. O. (2005). Fusion of multispectral and panchromatic satellite images using the curvelet transform. IEEE Geoscience and Remote Sensing Letters, 2(2), 136–140.
Costantini, M., Farina, A., & Zirilli, F. (1997). The fusion of different resolution SAR images. Proceedings of the IEEE, 85(1), 139–146.
da Cunha, A. L., Zhou, J., & Do, M. N. (2006). The nonsubsampled contourlet transform: theory, design, and applications. IEEE Transactions on Image Processing, 15(10), 3089–3101.
Do, M. N., & Vetterli, M. (2001). Contourlets. In J. Stoeckler & G. V. Welland (Eds.), Beyond Wavelets (pp. 1–27). New York: Academic Press.
Do, M. N., & Vetterli, M. (2005). The contourlet transform: an efficient directional multiresolution image representation. IEEE Transactions on Image Processing, 14(12), 2091–2106.
Dobson, J. E., Bright, E. A., Ferguson, R. L., Field, D. W., Wood, L. L., Haddad, K. D., Iredale, H., III, Jensen, J. R., Klemas, V. V., Orth, R. J., & Thomas, J. P. (1995). NOAA Coastal Change Analysis Program (C-CAP): guidance for regional implementation. NOAA Technical Report, April 1995.
Duda, R., & Hart, P. (1973). Pattern classification and scene analysis. New York: John Wiley & Sons.
Ehlers, M. (1991). Multisensor image fusion techniques in remote sensing. ISPRS Journal of Photogrammetry and Remote Sensing, 46(1), 19–30.
El-Mezouar, M. C., Kpalma, K., Taleb, N., & Ronsin, J. (2014). A pan-sharpening based on the non-subsampled contourlet transform: application to WorldView-2 imagery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 7(5), 1806–1815.
Gillespie, A. R., Kahle, A. B., & Walker, R. E. (1986). Color enhancement of highly correlated images. Decorrelation and HSI contrast stretch. Remote Sensing of Environment, 20(3), 209–235.
Haydn, R., Dalke, G. W., Henkel, J., & Bare, J. E. (1982). Application of IHS color transform to the processing of multisensor data and image enhancement. In Proceedings of the International Symposium on Remote Sensing of Arid and Semi-Arid Lands, Cairo, Egypt, Environmental Research Institute of Michigan, 599–616.
Jolliffe, I. T. (1986). Principal component analysis. New York: Springer.
Khan, M. M., Alparone, L., & Chanussot, J. (2009). Pansharpening quality assessment using the modulation transfer functions of instruments. IEEE Transactions on Geoscience and Remote Sensing, 47(11), 3880–3891.
Li, S., Kwok, J. T., & Wang, Y. (2002). Using the discrete wavelet frame transform to merge Landsat TM and SPOT panchromatic images. Information Fusion, 3(1), 17–23.
Mallat, S. G. (1989). Multifrequency channel decompositions of images and wavelet models. IEEE Transactions on Acoustics, Speech and Signal Processing, 37(12), 2091–2110.
Minhayenud, S., Chitwong, S., & Cheevasuvit, F. (2008). A fast intensity-hue-saturation fusion approach via principal component analysis for IKONOS imagery. ASPRS (American Society for Photogrammetry and Remote Sensing) 2008 Annual Conference.
Phoong, S. M., Kim, C. W., Vaidyanathan, P. P., & Ansari, R. (1995). A new class of two-channel biorthogonal filter banks and wavelet bases. IEEE Transactions on Signal Processing, 43(3), 649–665.
Pohl, C., & Van Genderen, J. L. (1998). Multisensor image fusion in remote sensing: concepts, methods, and applications. International Journal of Remote Sensing, 19(5), 823–854.
Pradhan, P. S., King, R. L., Younan, N. H., & Holcomb, D. W. (2006). Estimation of the number of decomposition levels for a wavelet-based multiresolution multisensor image fusion. IEEE Transactions on Geoscience and Remote Sensing, 44(12), 3674–3686.
Application of IHS color transform to the processing of multisensor data and image enhancement. In Proceeding International Symposium on Remote Sensing of Arid and Semi-Arid Lands, Cairo, Egypt, Environmental Research Institute of Michigan: 599–616. Jolliffe, I. T. (1986). Principal component analysis. NewYork: Springer. Khan, M. M., Alparone, L., & Chanussot, J. (2009). Pansharpening quality assessment using the modulation transfer functions of instruments. IEEE Transactions on Geoscience and Remote Sensing, 47(11), 3880–3891. Li, S., Kwok, J. T., & Wang, Y. (2002). Using the discrete wavelet frame transform to merge Landsat TM and SPOT panchromatic images. Information Fusion, 3(1), 17–23. Mallat, S. G. (1989). Multifrequency channel decomposition of images and wavelet models. IEEE Transactions on Acoustics, Speech and Signal Processing, 37(12), 2091–2110. Minhayenud, S., Chitwong, S., & Cheevasuvit, F., (2008). a fast intensityhue-saturation fusion approach via principal component analysis for ikonos imagery. ASPRS American Society for Photogrammetry and Remote Sensing 2008 Annual. Phoong, S. M., Kim, C. W., Vaidyanathan, P. P., & Ansari, R. (1995). A New Class of Two-Channel Biorthogonal Filter Banks and Wavelet Bases. IEEE Transactions on Signal Processing, 43(3), 649–665. Pohl, C., & Van Genderen, J. L. (1998). Multisensor image fusion in remote sensing: Concepts, methods, and applications. International Journal. Remote Sensing., 19(5), 823–854. Pradhan, P. S., King, R. L., Younan, N. H., & Holcomb, D. W. (2006). Estimation of the number of decomposition levels for a waveletbased multiresolution multisensor image fusion. IEEE Transactions on Geoscience and Remote Sensing, 44(12), 3674– 3686.

J Indian Soc Remote Sens Rahmani, S., Strait, M., Merkurjev, D., Moeller, M., & Wittman, T. (2010). An adaptive IHS pan-sharpening method. IEEE Transactions on Geoscience and Remote Sensing Letters, 7(4), 746–750. Ranchin, T. (1993). Applications de la transformée en ondelettes et de l’analyse multiresolution au traitement des images de télédétection (110 p). Thèse de Doctorat en Sciences de l’Ingénieur: Nice-Sophia Antipolis University, France. Ranchin, T., & Wald, L. (2000). Fusion of high spatial and spectral resolution images: The ARSIS concept and its implementation. Photogrammetric Engineering and Remote Sensing, 66(1), 49–61. Robert, M., Haralick, K., Shanmugam, & Its’Hak Dinstein, (1973). Textural features for image classification. IEEE Transactions on Systems, Man and Cybernetics, 3(6), 610–621. Roberts, J. W., Van Aardt, J. A. N., & Ahmed, F. B. (2011). Image fusion for enhanced forest structural assessment. International Journal of Remote Sensing, 32(1), 243–266. Shah, V.P., Younan, N.H., & King, R.L., (2007). Pan-sharpening via the Contourlet Transform. In Geoscience and remote sensing symposium Proceedings, IGARSS’07, IEEE 2007 International, 2007 (pp. 310–313), doi:10.1109/IGARSS.2007.4422792. Shah, V. P., Younan, N. H., & King, R. L. (2008). An Efficient PanSharpening Method via a Combined Adaptive PCA Approach and Contourlets. IEEE Transactions on Geoscience and Remote Sensing, 46(5), 1323–1335. Shettigara, V. K. (1992). A generalized component substitution technique for spatial enhancement of multispectral images using a higher resolution dataset. Photogrammetric Engineering & Remote Sensing, 58(5), 561–567. Sveinsson, J.R., & Benediktsson, J.A., (2007). Combined wavelet and curvelet denoising of SAR images using TV segmentation. In Geoscience and remote sensing symposium Proceedings, IGARSS’07, IEEE 2007 International, 2007 (pp. 503–506). IEEE.

Toet, A., van Ruyven, L. J., & Valeton, J. M. (1989). Merging thermal and visual images by a contrast pyramid. Optical Engineering, 28(7), 789–792. Tu, T. M., Huang, P. S., Hung, C. L., & Chang, C. P. (2004). A fast intensityhue-saturation fusion technique with spectral adjustment for IKONOS imagery. IEEE Transactions on Geoscience and Remote Sensing Letters, 1(4), 309–312. Vivone, G., Alparone, L., Chanussot, J., Dalla Mura, M., Garzelli, A., Licciardi, G. A., Restaino, R., & Wald, L. (2015). A Critical Comparison Among Pansharpening Algorithms. IEEE Transactions on Geoscience and Remote Sensing, 53(5), 2565– 2586. Wald, L., (2000). Quality of high resolution synthesised images: Is there a simple criterion? In: Ranchin, T., Wald L., (Editors) Fusion of Earth data: merging point measurements, raster maps and remotely sensed images. SEE/URISCA,Nice, Sophia Antipolis, France 166. Witharana, C., Civco, D. L., & Meyer, T. H. (2013). Evaluation of pansharpening algorithms in support of earth observation based rapid-mapping workflows. Applied Geography, 37(1), 63–87. Yang, S., Wang, M., Lu, Y., & Jiao, L. (2009). Fusion of multiparametric SAR images based on SW-nonsubsampled Contourlet and PCNN. Signal Processing, 89(2), 2596–2608. Yesou, H., Besnus, Y., & Rolet, Y. (1993). Extraction of spectral information from Landsat TM data and merger with SPOT panchromatic imagery-A contribution to the study of Geological structures. ISPRS Journal of Photogrammetry and Remote Sensing, 48(5), 23–36. Zhang, Z., & Blum, R.S,. (1999). A categorization of multiscale decomposition-based image fusion schemes with a performance study for a digital camera application. Proceedings of the IEEE, 87(8), 1315–1326. Zhou, J., Civco, D. L., & Silander, J. A. (1998). A wavelet transform method to merge landsat TM and SPOT panchromatic data. International Journal of Remote Sensing, 19(4), 743–757.