
Remote Sensing for a Changing Europe, D. Maktav (Ed.), IOS Press, 2009. © 2009 The authors and IOS Press. All rights reserved. doi:10.3233/978-1-58603-986-8-214

Comparison of pixel based and feature based fusion of high resolution optical and SAR imagery

G. Atay
Karadeniz Technical University, Engineering Faculty, Dept. of Geodesy and Photogrammetry, Trabzon, Turkey, [email protected]

J.D. Wegner, P. Lohmann, P. Hofmann, U. Sörgel
Institute of Photogrammetry and Geoinformation (IPI), Leibniz University of Hannover, Germany

Keywords: SAR, pixel based, feature based, image fusion, ITK, OTB, coregistration

ABSTRACT: The number of high resolution imaging remote sensing systems is increasing rapidly. However, because of technological limitations, no single system can produce images with both very good quality and all desired features. Images obtained with different sensors may carry complementary information, and fusing these data with a suitable image fusion approach can provide better information about the observed scene. Two common complementary information sources are optical and Synthetic Aperture Radar (SAR) images. Each of these products has specific advantages and disadvantages due to its sensor characteristics. Combining the advantages of the two images can mitigate their individual disadvantages and, moreover, provide additional information. The study consists of two parts: first, pixel based image fusion of SAR and optical images; second, fusion of the same data with a feature based method. Registration strategies for both the pixel based and the feature based method were defined and implemented for our optical and SAR images, and the results were compared. The results clearly show that the feature based method performed better than the pixel based method.

1 INTRODUCTION

Lately, the number of imaging remote sensing systems has been increasing rapidly. However, because of technological limitations, no single system can produce images with all of the desired quality and features. Therefore, especially in recent years, combining images taken by various sensors has attracted increasing attention. Image fusion is a process to obtain a more useful joint image from more than one input image. With the availability of multi-sensor, multi-temporal, and multi-resolution image data, the fusion of digital image data has become an important tool in remote sensing. The objective of this study is to register one high resolution optical image and one high resolution SAR image. A pixel based method and a feature based method are developed and their results are compared. The pixel based method accomplishes the fusion at the lowest level. It is applied to raster data in which every pixel and its grey value are examined; the values are modified by mathematical calculations and operators (Schmitz 2007). The feature based method requires the extraction of objects recognized in the optical and the SAR image. Features correspond to characteristics extracted from the initial images which depend on their environment, such as extent, shape and neighborhood (Pohl & Van Genderen 1998). In this study a SAR and an optical image were used. The SAR image was acquired by the EMISAR sensor (Electromagnetic Institute Synthetic Aperture Radar Sensor). EMISAR is a dual frequency (L- and C-band) polarimetric SAR sensor built by the Technical University of Denmark for radar and remote sensing research. It is installed on a Danish Air Force Gulfstream G3. For this study a C-band SAR image was used (Fig. 1, right). The optical image is a high resolution aerial image (Fig. 1, left). Panchromatic data and color information have been captured covering the same area as the SAR image. The optical image was resampled to the pixel size of the corresponding SAR image; hence the resolution of both images is four meters.

Figure 1: Optical (left) and SAR (right) images

The viewing geometry of optical and SAR images as well as the wavelength domain are different. Hence, geometric and radiometric differences occur. Geometric differences appear because SAR sensors measure distances in slant range geometry, whereas optical sensors measure angles with usually small off-nadir angles. Radiometric differences occur because optical sensors capture the terrain reflectivity response to visible sunlight, while SAR sensors image the terrain response to actively emitted microwaves. The higher the resolution of the images becomes, the greater the differences become that have to be accounted for during the registration process.

2 METHODOLOGY

First of all, two registration strategies, one pixel based and one feature based, were developed in this study. The steps of the strategies are shown in Figure 2. The feature based image registration strategy was developed by Wegner (2007). It can be observed from Figure 2 that the common steps of pixel based and feature based image fusion are orthorectification, preprocessing and registration. The first common step is orthorectification, i.e. the optical and the SAR image are projected from sensor space to object space on the ground. It is carried out in order to decrease the geometric differences between optical and SAR imagery. Both the optical and the SAR image had already been orthorectified before the study; hence, this step did not have to be implemented. The second common step is preprocessing, which is very important in order to reduce undesirable signals in the images. After the images are preprocessed, some additional processing is needed for the feature based registration strategy before the registration step. First, features must be extracted from both the optical and the SAR image; features from the optical image are needed in order to register them with the feature image of the SAR image. In this study edges are used as features for both the optical and the SAR image. The best results for the optical image were achieved with the Canny edge detection algorithm (Canny 1986). Lines in the SAR image are extracted using the algorithm developed by Touzi et al. (1988), followed by a threshold operation. After extracting features in the images, distance maps of the optical and the SAR image are produced. A distance map is an image in which the grey value of each pixel is the distance to the nearest pixel of a set of objects (Cuisenaire & Macq 1999). This step transforms one-dimensional lines into two-dimensional continuous information. In our study, distance maps display the Euclidean distance between a background pixel and the nearest line pixel; the numerical value of the distance is then translated to a grey value. The approach developed by Danielsson (1980) was chosen for the computation of the Euclidean distances.
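As an illustration, the following C++ sketch strings this feature extraction chain together with ITK/OTB filters: Canny edges for the optical image, the Touzi ratio edge detector with a subsequent threshold for the SAR image, and a Danielsson distance map. It is a minimal sketch, not the study's code; file names, thresholds and window sizes are illustrative assumptions.

```cpp
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkCannyEdgeDetectionImageFilter.h"
#include "itkBinaryThresholdImageFilter.h"
#include "itkDanielssonDistanceMapImageFilter.h"
#include "otbTouziEdgeDetectorImageFilter.h"

typedef itk::Image<float, 2>         FloatImageType;
typedef itk::Image<unsigned char, 2> BinaryImageType;

int main()
{
  itk::ImageFileReader<FloatImageType>::Pointer opticalReader = itk::ImageFileReader<FloatImageType>::New();
  itk::ImageFileReader<FloatImageType>::Pointer sarReader     = itk::ImageFileReader<FloatImageType>::New();
  opticalReader->SetFileName("optical_preprocessed.tif");  // assumed file name
  sarReader->SetFileName("sar_preprocessed.tif");          // assumed file name

  // Canny edge detection on the optical image (Canny 1986).
  typedef itk::CannyEdgeDetectionImageFilter<FloatImageType, FloatImageType> CannyType;
  CannyType::Pointer canny = CannyType::New();
  canny->SetInput(opticalReader->GetOutput());
  canny->SetVariance(2.0);        // assumed Gaussian smoothing variance
  canny->SetLowerThreshold(0.5);  // assumed hysteresis thresholds
  canny->SetUpperThreshold(1.5);

  // Touzi ratio edge detector on the SAR image (Touzi et al. 1988); its edge
  // strength output is thresholded to obtain a binary line image.
  typedef otb::TouziEdgeDetectorImageFilter<FloatImageType, FloatImageType> TouziType;
  TouziType::Pointer touzi = TouziType::New();
  touzi->SetInput(sarReader->GetOutput());
  FloatImageType::SizeType radius;
  radius.Fill(3);  // assumed analysis window radius
  touzi->SetRadius(radius);

  typedef itk::BinaryThresholdImageFilter<FloatImageType, BinaryImageType> ThresholdType;
  ThresholdType::Pointer threshold = ThresholdType::New();
  threshold->SetInput(touzi->GetOutput());
  threshold->SetLowerThreshold(0.4f);  // assumed edge strength threshold
  threshold->SetInsideValue(255);
  threshold->SetOutsideValue(0);

  // Euclidean distance map (Danielsson 1980): every background pixel receives
  // the distance to the nearest line pixel. The Canny edge image is processed
  // the same way to obtain the distance map of the optical image.
  typedef itk::DanielssonDistanceMapImageFilter<BinaryImageType, FloatImageType> DistanceType;
  DistanceType::Pointer distance = DistanceType::New();
  distance->SetInput(threshold->GetOutput());
  distance->InputIsBinaryOn();  // treat all non-zero pixels as line pixels
  distance->Update();
  return 0;
}
```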


[Figure 2 flowchart: both strategies start from the optical and the SAR image and share the steps orthorectification, preprocessing and coregistration with the ITK registration framework; the feature based strategy additionally performs edge detection and computes distance maps before coregistration. The results of both strategies are compared.]

Figure 2: Comparison of the pixel based and the feature based registration strategies

For the implementation of the registration strategy, algorithms already existing in the open source library ORFEO Toolbox were used. The ORFEO Toolbox (OTB) is distributed as an open source library of image processing algorithms. It is based on the medical image processing library ITK and offers particular functionality for remote sensing image processing in general and for high spatial resolution images in particular. OTB was set up by the French Space Agency (CNES) in order to prepare for the exploitation of high resolution images derived from the Pleiades (PHR) and Cosmo-SkyMed (CSK) systems (CNES 2008).

3 PREPROCESSING

Before the data obtained by satellite systems or digital airborne systems can be analyzed, it is often necessary to preprocess them in order to correct defects (Gibson & Power 2000). A preprocessing step is necessary for both the pixel and the feature based fusion method: noise has to be removed and the speckle effect contained in the images has to be reduced before the further processing steps. For this purpose two edge preserving smoothing filters were used. An anisotropic diffusion filter (Perona & Malik 1990) was applied to the optical image and a Frost filter (Frost et al. 1982) to the SAR image.
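A minimal sketch of this preprocessing stage, assuming the two filters named above as provided by ITK and OTB, could look as follows. File names and all numeric parameter values are illustrative assumptions, not the study's settings.

```cpp
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkGradientAnisotropicDiffusionImageFilter.h"
#include "otbFrostImageFilter.h"

typedef itk::Image<float, 2> ImageType;

int main()
{
  itk::ImageFileReader<ImageType>::Pointer opticalReader = itk::ImageFileReader<ImageType>::New();
  itk::ImageFileReader<ImageType>::Pointer sarReader     = itk::ImageFileReader<ImageType>::New();
  opticalReader->SetFileName("optical.tif");  // assumed file name
  sarReader->SetFileName("sar.tif");          // assumed file name

  // Edge preserving smoothing of the optical image (Perona & Malik 1990).
  typedef itk::GradientAnisotropicDiffusionImageFilter<ImageType, ImageType> DiffusionType;
  DiffusionType::Pointer diffusion = DiffusionType::New();
  diffusion->SetInput(opticalReader->GetOutput());
  diffusion->SetNumberOfIterations(5);      // assumed value
  diffusion->SetTimeStep(0.125);            // stability bound for 2-D images
  diffusion->SetConductanceParameter(3.0);  // assumed value

  // Speckle reduction of the SAR image (Frost et al. 1982).
  typedef otb::FrostImageFilter<ImageType, ImageType> FrostType;
  FrostType::Pointer frost = FrostType::New();
  frost->SetInput(sarReader->GetOutput());
  ImageType::SizeType radius;
  radius.Fill(3);         // assumed 7x7 filter window
  frost->SetRadius(radius);
  frost->SetDeramp(0.1);  // assumed damping factor

  itk::ImageFileWriter<ImageType>::Pointer writer = itk::ImageFileWriter<ImageType>::New();
  writer->SetInput(diffusion->GetOutput());
  writer->SetFileName("optical_preprocessed.tif");
  writer->Update();
  writer->SetInput(frost->GetOutput());
  writer->SetFileName("sar_preprocessed.tif");
  writer->Update();
  return 0;
}
```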

4 IMAGE REGISTRATION

Image registration is the task of finding a spatial transform mapping one image onto another. In ITK, registration is performed within a framework of pluggable components that can easily be interchanged. This flexibility means that a combinatorial variety of registration methods can be created, allowing users to pick and choose the right tools for their specific application (Ibáñez et al. 2005).

[Figure 3 diagram: the fixed image and the moving image are the inputs to the registration framework, whose pluggable components are the metric, the optimizer, the interpolator and the transform.]

Figure 3: The ITK registration framework (Ibáñez et al. 2005)

The ITK registration method requires two input images, a transform, a metric, an interpolator and an optimizer (Fig. 3). Input data to the registration framework are two images: one is called the fixed image and the other one the moving image. Registration is interpreted as an optimization problem with the goal of finding the optimum spatial mapping that will align the moving image with the fixed image. In this study the distance map of the optical image is the fixed image and the distance map of the SAR image is the moving image. The transform component represents the spatial mapping of points from fixed image space to points in moving image space. The interpolator is used to evaluate moving image intensities at non-grid positions, and the metric component provides a measure of how well the fixed image is matched by the transformed moving image. This measure is the quantitative criterion to be optimized by the optimizer over the search space defined by the parameters of the transform (Ibáñez et al. 2005).

In this study a two-dimensional translation transform was used as the transformation component for simplicity reasons. It shifts the origin of the current coordinate system horizontally and vertically by a specific amount. For simplicity and computational cost reasons, bilinear interpolation was used as the interpolator component. It has been shown extensively that metrics based on the evaluation of mutual information are well suited for overcoming the difficulties of multi-modality registration (Ibáñez et al. 2005). Therefore, a mutual information metric was implemented as the metric component. A gradient descent optimizer was used for the optimizer component. It searches for the set of transformation parameters that maximizes the metric value by iteratively changing the transformation parameters. ITK offers several alternatives for each of these components; for the metric, for example, mean squares or normalized correlation could be chosen instead of mutual information. In summary, we used a translation transform, bilinear interpolation, a mutual information metric and a gradient descent optimizer.
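This component setup maps directly onto ITK's registration framework. The following C++ sketch plugs the four components together in the classic ITK style (cf. Ibáñez et al. 2005); it is a minimal illustration, not the study's code. File names and all numeric parameter values are assumptions, and the Viola-Wells mutual information metric additionally expects standard deviation and sample count settings (in practice the input images are usually normalized beforehand, omitted here for brevity).

```cpp
#include <iostream>
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageRegistrationMethod.h"
#include "itkTranslationTransform.h"
#include "itkMutualInformationImageToImageMetric.h"
#include "itkLinearInterpolateImageFunction.h"
#include "itkGradientDescentOptimizer.h"

typedef itk::Image<float, 2>                                             ImageType;
typedef itk::TranslationTransform<double, 2>                             TransformType;
typedef itk::MutualInformationImageToImageMetric<ImageType, ImageType>   MetricType;
typedef itk::LinearInterpolateImageFunction<ImageType, double>           InterpolatorType;
typedef itk::GradientDescentOptimizer                                    OptimizerType;
typedef itk::ImageRegistrationMethod<ImageType, ImageType>               RegistrationType;

int main()
{
  // Fixed image: distance map of the optical image.
  // Moving image: distance map of the SAR image.
  itk::ImageFileReader<ImageType>::Pointer fixedReader  = itk::ImageFileReader<ImageType>::New();
  itk::ImageFileReader<ImageType>::Pointer movingReader = itk::ImageFileReader<ImageType>::New();
  fixedReader->SetFileName("optical_distance_map.tif");  // assumed file name
  movingReader->SetFileName("sar_distance_map.tif");     // assumed file name
  fixedReader->Update();
  movingReader->Update();

  // Plug the four interchangeable components into the framework.
  TransformType::Pointer    transform    = TransformType::New();
  MetricType::Pointer       metric       = MetricType::New();
  InterpolatorType::Pointer interpolator = InterpolatorType::New();  // linear = bilinear in 2-D
  OptimizerType::Pointer    optimizer    = OptimizerType::New();
  RegistrationType::Pointer registration = RegistrationType::New();

  // Parameters required by the Viola-Wells mutual information metric.
  metric->SetFixedImageStandardDeviation(0.4);   // assumed value
  metric->SetMovingImageStandardDeviation(0.4);  // assumed value
  metric->SetNumberOfSpatialSamples(100);        // assumed value

  // Mutual information has to be maximized, not minimized.
  optimizer->MaximizeOn();
  optimizer->SetLearningRate(15.0);       // assumed value
  optimizer->SetNumberOfIterations(200);  // assumed value

  registration->SetTransform(transform);
  registration->SetMetric(metric);
  registration->SetInterpolator(interpolator);
  registration->SetOptimizer(optimizer);
  registration->SetFixedImage(fixedReader->GetOutput());
  registration->SetMovingImage(movingReader->GetOutput());
  registration->SetFixedImageRegion(fixedReader->GetOutput()->GetBufferedRegion());

  // Start from a zero translation.
  RegistrationType::ParametersType initial(transform->GetNumberOfParameters());
  initial.Fill(0.0);
  registration->SetInitialTransformParameters(initial);

  registration->Update();  // StartRegistration() in older ITK versions

  // The result is the (tx, ty) shift that aligns the moving image.
  RegistrationType::ParametersType final = registration->GetLastTransformParameters();
  std::cout << "Translation: " << final[0] << ", " << final[1] << std::endl;
  return 0;
}
```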

5 RESULTS AND DISCUSSION

As mentioned before, both the optical and the SAR image were already orthorectified and coregistered and thus fit very well. Therefore, for this project, their relative position was altered by applying a shift of known magnitude. Since the amount of the translation is known, the results of both the pixel based and the feature based algorithm can be compared to the original coregistered images for evaluation purposes.

Figure 4: Test region of the optical image (a), test region of the SAR image (b), real translations between images in pixels (c): 16 pixels in x and 17 pixels in y

In Figure 5 the registration result of the pixel based approach is shown. The ellipses mark some salient features that illustrate the result well: the red ellipses display the situation before the registration and the yellow ellipses the situation after the registration. The program also provides numerical results about the registration, namely the transformation parameters, which in this study are only the translations between the two images. These computed translations were used to calculate a Euclidean distance value. The Euclidean distance between the real translation values and the computed translations was 2.98 pixels. In Figure 5 both the optical and the SAR image can be observed in (a) and (b); a checkerboard scheme was used for better visualization.
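Such a checkerboard visualization can be produced, for example, with ITK's CheckerBoardImageFilter, which interleaves two images in alternating tiles so that offsets along the tile borders reveal the quality of the alignment. The following sketch is an assumption of how this could be done; file names and the pattern size are illustrative.

```cpp
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkImageFileWriter.h"
#include "itkCheckerBoardImageFilter.h"

typedef itk::Image<float, 2> ImageType;

int main()
{
  itk::ImageFileReader<ImageType>::Pointer opticalReader = itk::ImageFileReader<ImageType>::New();
  itk::ImageFileReader<ImageType>::Pointer sarReader     = itk::ImageFileReader<ImageType>::New();
  opticalReader->SetFileName("optical.tif");     // assumed file name
  sarReader->SetFileName("sar_registered.tif");  // assumed file name

  // Interleave the two images in a checkerboard pattern.
  typedef itk::CheckerBoardImageFilter<ImageType> CheckerType;
  CheckerType::Pointer checker = CheckerType::New();
  checker->SetInput1(opticalReader->GetOutput());
  checker->SetInput2(sarReader->GetOutput());
  CheckerType::PatternArrayType pattern;
  pattern.Fill(8);  // assumed 8x8 tiles
  checker->SetCheckerPattern(pattern);

  itk::ImageFileWriter<ImageType>::Pointer writer = itk::ImageFileWriter<ImageType>::New();
  writer->SetInput(checker->GetOutput());
  writer->SetFileName("checkerboard.tif");
  writer->Update();
  return 0;
}
```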


Figure 5: Checkerboard before the registration (a), checkerboard after the registration (b)

Feature extraction results of the Canny and Touzi algorithms and the distance maps of both the optical and the SAR image can be observed in Figure 6.

Figure 6: Extracted lines from the optical image (a), extracted lines from the SAR image (b), distance map of the optical image (c), distance map of the SAR image (d)

The registration framework was then applied to these distance maps; that is to say, the main inputs of the ITK registration framework are the distance maps. In Figure 8 (a) and (b) the results of the registration are displayed. The Euclidean distance between the real translation values and the computed translations was 0.47 pixels.

Figure 8: Checkerboard before the registration (a), checkerboard after the registration (b)

Axis                 Real value (pixel)   Pixel based fusion (pixel)   Feature based fusion (pixel)
x                    16                   18.3021                      15.5427
y                    17                   15.1134                      16.8995
Euclidean distance   -                    2.98                         0.47

Table 1: Numerical values of the registration processes

The numerical values of the two approaches are shown in Table 1. In Figure 9 the results for a selected detail are shown. There it is even clearer that the feature based approach gives better results than the pixel based method.


Figure 9: Pixel based registration result (a), feature based registration result (b)

6 CONCLUSION

In conclusion, the feature based registration approach works better on high resolution optical and SAR imagery than the pixel based approach. In terms of both visual results and numerical values, the feature based registration algorithm aligns the SAR image better to the optical image. Due to the high resolution and the multi-modality of the imagery (different sensors), the pixel based method does not work as well. OTB proved to be very useful for this image analysis and registration task. For further work it is important to implement and test additional metrics, optimization methods and also different images in order to be able to make more general statements.

ACKNOWLEDGEMENT

The authors thank the Scientific and Technical Research Council of Turkey (TUBITAK) for supporting the stay of the first author Gülçin Atay at the Institute of Photogrammetry and Geoinformation (IPI), Leibniz University of Hannover, Germany.

REFERENCES

Canny, J. 1986. A Computational Approach to Edge Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 8(6):679-698.
CNES 2008. Pleiades: ORFEO Accompaniment Program. Accessed 21.05.2007. http://smsc.cnes.fr/PLEIADES/A_prog_accomp.htm
Cuisenaire, O. & Macq, B. 1999. Fast and Exact Signed Euclidean Distance Transformation with Linear Complexity. IEEE International Conference on Acoustics, Speech, and Signal Processing 6:3293-3296.
Danielsson, P.-E. 1980. Euclidean Distance Mapping. Computer Graphics and Image Processing 14:227-248.
Frost, V.S., Stiles, J.A., Shanmugan, K.S. & Holtzman, J.C. 1982. A Model for Radar Images and Its Application to Adaptive Digital Filtering of Multiplicative Noise. IEEE Transactions on Pattern Analysis and Machine Intelligence 4(2):157-166.
Gibson, P.J. & Power, C.H. 2000. Introductory Remote Sensing: Digital Image Processing and Applications. New York: Taylor & Francis.
Ibáñez, L., Schroeder, W., Ng, L. & Cates, J. 2005. The ITK Software Guide. 2nd edition, version 2.4.
Perona, P. & Malik, J. 1990. Scale-Space and Edge Detection Using Anisotropic Diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence 12(7):629-639.
Pohl, C. & Van Genderen, J.L. 1998. Multisensor Image Fusion in Remote Sensing: Concepts, Methods and Applications. International Journal of Remote Sensing 19(5):823-854.
Touzi, R., Lopes, A. & Bousquet, P. 1988. A Statistical and Geometrical Edge Detector for SAR Images. IEEE Transactions on Geoscience and Remote Sensing 26(6):764-773.
Wegner, J.D. 2007. Automatic Fusion of SAR and Optical Imagery. Diploma Thesis.
Wegner, J.D., Inglada, J. & Tison, C. 2008. Image Analysis of Fused SAR and Optical Images Deploying the Open Source Software Library OTB. To be published in Proceedings of EUSAR 2008.
Zhang, Y. 2004. Understanding Image Fusion. Photogrammetric Engineering and Remote Sensing 70(6):657-661.
