Deepali A. Godse et al. / International Journal of Engineering Science and Technology (IJEST)

WAVELET BASED IMAGE FUSION USING PIXEL BASED MAXIMUM SELECTION RULE

DEEPALI A. GODSE
Associate Professor, Information Technology Department, University of Pune
Bharati Vidhyapeeth's College of Engineering for Women, Pune-43 (MS), India
[email protected]

DATTATRAYA S. BORMANE
Principal, JSPM's Rajarshi Shahu College of Engineering, University of Pune
Tathawade, Pune (MS), India
[email protected]

Abstract: Image fusion is a technique that integrates complementary information from multiple images so that the resulting image is more suitable for processing tasks. It combines perfectly registered images to produce a single high-quality fused image containing both spatial and spectral information, giving a better visual picture of a scene. The fused image carries more complete information, which is useful for human or machine perception, and such rich information improves the performance of image analysis algorithms. In this paper, we propose a wavelet based image fusion algorithm using a pixel based maximum selection rule.

Keywords: fusion; registered; spatial; spectral; wavelet.

1. Introduction

Information is used in many forms to solve problems and monitor conditions. When information from multiple sources is combined, the aim is to derive or infer more reliable information. However, there is usually a point of diminishing returns after which additional information provides little improvement in the final result. Deciding which information to use and how to combine it is the area of research called data fusion [1]. In many cases, the problem is ill defined when data is collected; more information is gathered in the hope of better understanding the problem and ultimately arriving at a solution. Large amounts of information are hard to organize, evaluate, and utilize, so obtaining the same or a better answer from less information is desirable. Data fusion attempts to combine data such that more information can be derived from the combined sources than from the separate sources.
Data fusion techniques combine data and related information from associated databases to achieve improved accuracy and more specific inferences.

2. Preprocessing of Image Fusion

Two images of a scene taken from different angles can exhibit distortion: most objects are the same, but their shapes change slightly. Before fusing the images, we must ensure that each pixel in one image corresponds to the correct pixel in the other; image registration fixes this distortion. Two images of the same scene can be registered using software that connects several control points. After registration, resampling is performed so that every image to be fused has the same dimensions. Several interpolation approaches can be used for resampling; this step is needed because most of the fusion approaches used here operate pixel by pixel.

ISSN : 0975-5462

Vol. 3 No. 7 July 2011

5572

Deepali A. Godse et al. / International Journal of Engineering Science and Technology (IJEST)

Images of the same size are easy to fuse. After resampling, the fusion algorithm is applied. Depending on the algorithm, the image may first have to be transferred into a different domain; an inverse transfer is then necessary to bring the fused result back. Fig. 1 summarizes these steps, called the preprocessing of image fusion [7].
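As an illustrative sketch of the resampling step (not the authors' implementation), two registered images can be brought onto a common grid with simple nearest-neighbour interpolation; `resample_nearest` is a hypothetical helper name:

```python
import numpy as np

def resample_nearest(img, out_shape):
    # Nearest-neighbour resampling so that both source images
    # share the same dimensions before pixel-by-pixel fusion.
    rows = (np.arange(out_shape[0]) * img.shape[0] / out_shape[0]).astype(int)
    cols = (np.arange(out_shape[1]) * img.shape[1] / out_shape[1]).astype(int)
    return img[np.ix_(rows, cols)]
```

In practice, higher-order interpolation (bilinear, bicubic) gives smoother results; nearest-neighbour is shown only because it makes the index mapping explicit.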

Fig. 1: Preprocessing of image fusion

3. Wavelet Based Image Fusion Using Pixel Based Maximum Selection Rule

Wavelets are localized waves [3]. They have finite energy and are well suited to the analysis of transient signals. They are finite-duration oscillatory functions with zero average value [6]. Their irregularity and good localization properties make them a better basis for analysing signals with discontinuities. Wavelets can be described using two functions, viz. the scaling function φ(t), also known as the 'father wavelet', and the wavelet function or 'mother wavelet'. The mother wavelet ψ(t) undergoes translation and scaling operations to give self-similar wavelet families, as given by Eq. (1).

ψ_{a,b}(t) = (1/√a) ψ((t − b)/a),  (a, b ∈ R), a > 0        (1)

where a is the scale parameter and b the translation parameter [7].
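Eq. (1) can be illustrated numerically. The sketch below (an assumption for illustration, not part of the paper) uses the Haar mother wavelet, which is +1 on [0, 0.5), −1 on [0.5, 1) and 0 elsewhere, and builds the scaled and translated family ψ_{a,b}:

```python
import numpy as np

def haar_mother(t):
    # Haar mother wavelet: +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere;
    # an oscillatory function of finite duration with zero average value.
    return np.where((t >= 0) & (t < 0.5), 1.0,
                    np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def psi_ab(t, a, b):
    # Scaled and translated wavelet family from Eq. (1):
    # psi_{a,b}(t) = (1/sqrt(a)) * psi((t - b) / a), a > 0.
    return haar_mother((t - b) / a) / np.sqrt(a)
```

The 1/√a factor keeps the energy of every family member equal to that of the mother wavelet.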

Fig. 2: Wavelet based image fusion

The source image is decomposed along rows and columns by low-pass (L) and high-pass (H) filtering, with downsampling at each level, to obtain the approximation (LL) and detail (LH, HL and HH) coefficients. The scaling function is associated with smoothing (low-pass) filters and the wavelet function with high-pass filtering. Wavelet transforms provide a framework in which an image is decomposed, with each level corresponding to a coarser resolution band.
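A minimal single-level decomposition can be sketched with the Haar filters (an illustrative assumption; the paper does not fix a particular wavelet): filter and downsample the rows, then the columns, yielding the LL, LH, HL and HH sub-bands:

```python
import numpy as np

def haar_dwt2(img):
    # Single-level 2-D Haar decomposition: low-pass (averages) and
    # high-pass (differences) filtering of rows, downsampled by 2,
    # then the same along columns, giving LL, LH, HL, HH sub-bands.
    img = np.asarray(img, dtype=float)
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)   # row low-pass
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)   # row high-pass
    LL = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)     # approximation
    LH = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)     # vertical detail
    HL = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)     # horizontal detail
    HH = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)     # diagonal detail
    return LL, (LH, HL, HH)
```

Applying `haar_dwt2` again to LL gives the next, coarser decomposition level; a library such as PyWavelets provides the same decomposition for arbitrary wavelets and levels.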



4. Proposed Algorithm

The steps involved in this algorithm are as follows:

Step 1: Read the two source images, image I and image II, to be fused and apply them as input for fusion.

Step 2: Perform independent wavelet decomposition of the two images up to level L to get the approximation (LL_L) and detail (LH_l, HL_l, HH_l) coefficients for l = 1, 2, ..., L.

Step 3: Apply the pixel based rule to the approximations, fusing by taking the maximum valued pixel from the approximations of source images I and II:

LL_Lf(i, j) = maximum(LL_LI(i, j), LL_LII(i, j))        (2)

Here, LL_Lf is the fused approximation and LL_LI, LL_LII are the input approximations; i and j denote pixel positions in the sub-images. LH_lf, LH_lI, LH_lII are the vertical high frequencies, HL_lf, HL_lI, HL_lII the horizontal high frequencies, and HH_lf, HH_lI, HH_lII the diagonal high frequencies of the fused and input detail sub-bands, respectively.

Step 4: Based on the maximum valued pixels between the approximations from Eq. (2), a binary decision map is formulated. Eq. (3) gives the decision rule D_f for fusion of the approximation coefficients of the two source images I and II:

D_f(i, j) = 1, if d_I(i, j) > d_II(i, j)
          = 0, otherwise                                (3)

Step 5: The final fused transform corresponding to the approximations is thus obtained through the maximum selection pixel rule.

Step 6: Concatenating the fused approximations and details gives the new coefficient matrix.

Step 7: Apply the inverse wavelet transform to reconstruct the resultant fused image and display the result.

Results obtained by applying the above algorithm are shown in Fig. 3 and Fig. 4. Fig. 3(a) shows image 1, captured with the focus on the vertical book, and Fig. 3(b) shows image 2, captured with the focus on the horizontal book; Fig. 3(c) shows the fused image obtained by applying our algorithm. Similarly, Fig. 4(a) shows image 1, captured with the focus on the left part, and Fig. 4(b) shows image 2, captured with the focus on the right part; Fig. 4(c) shows the resultant fused image.
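The maximum selection rule of Eqs. (2) and (3) can be sketched as follows (a minimal illustration with the hypothetical helper name `fuse_max`, not the authors' code):

```python
import numpy as np

def fuse_max(approx1, approx2):
    # Binary decision map (Eq. 3): 1 where image I's coefficient
    # exceeds image II's, 0 otherwise.
    decision = (approx1 > approx2).astype(np.uint8)
    # Pixel based maximum selection (Eq. 2): pick the larger
    # coefficient at every position.
    fused = np.where(decision == 1, approx1, approx2)
    return fused, decision
```

The same selection can be applied to the detail sub-bands; the fused coefficient matrix is then passed to the inverse wavelet transform (Step 7) to reconstruct the fused image.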


Fig. 3: (a) Source image 1: focus on the vertical book; (b) source image 2: focus on the horizontal book; (c) fused image obtained by applying the pixel based maximum selection rule to (a) and (b).


Fig. 4: (a) Source image 1: focus on the left part; (b) source image 2: focus on the right part; (c) fused image obtained by applying the pixel based maximum selection rule to (a) and (b).


5. Applications

5.1 Non-military:
• Air traffic control
• Law enforcement
• Homeland security
• Medical diagnosis
• Robotics (manufacturing and hazardous workplaces)
• Remote sensing: crops, weather patterns, environment, mineral resources, buried hazardous waste

5.2 Military:
• Detection, location, tracking and identification of military entities.
• Sensors: radar, sonar, infrared, synthetic aperture radar (SAR), electro-optic imaging sensors, etc.

6. Conclusion

The work done in this paper forms the basis for image analysis algorithms. Image fusion seeks to combine information from different images and is widely recognized as an efficient tool for improving overall performance in image based applications. Among the many variations of image fusion techniques, the most popular and effective methods include wavelet transforms. Wavelet transforms provide a framework in which an image is decomposed, with each level corresponding to a coarser resolution band, and wavelet-sharpened images have very good spectral quality. The Fourier transform provides only frequency information, and the short-time Fourier transform gives only a constant resolution, so the wavelet transform is preferred over both since it provides multiresolution. The spatial quality of the sharpened images varies with the data used for sharpening. Hence there is a need to investigate different combination models in the wavelet domain to make wavelet-based systems more robust in spatial quality.

7. References

[1] David Hall and James Llinas, "An introduction to multisensor data fusion", Proceedings of the IEEE, vol. 85, no. 1, January 1997.
[2] J. Llinas and E. Waltz, Multisensor Data Fusion. Boston, MA: Artech House, 1990.
[3] S. Mallat, "Wavelets for a vision," Proceedings of the IEEE, vol. 84, no. 4, pp. 604-614, April 1996.
[4] M.A. Mohamed and B.M. El-Den, "Implementation of Image Fusion Techniques Using FPGA", IJCSNS International Journal of Computer Science and Network Security, vol. 10, no. 5, May 2010.
[5] Ibrahim Saeed Koko and Herman Agustiawan, "Two-dimensional Discrete Wavelet Transform Memory Architectures", International Journal of Computer and Electrical Engineering, vol. 1, no. 1, April 2009.
[6] K.P. Soman and K.I. Ramachandran, "Insight into Wavelets: From Theory to Practice", PHI Publication.
[7] Susmitha Vekkot and Pancham Shukla, "A Novel Architecture for Wavelet based Image Fusion", World Academy of Science, Engineering and Technology, vol. 57, 2009.
