Image Fusion and Its Real-time Processing in Dual-band Infrared Night Vision System

Qin Qingwang, Gao Kun∗, Ni Guoqiang
(Department of Optical Engineering, Beijing Institute of Technology, Beijing, China 100081)

ABSTRACT

MWIR and LWIR are normally used in night vision systems. A pseudo-color dual-band infrared image fusion algorithm based on image features in the wavelet domain is proposed in this paper. After the source images are wavelet decomposed, edge features are extracted from each low-frequency component. The fusion rules are defined by this edge information: a local modulus maxima rule for edge pixels and their sub-band neighboring pixels, and a weighted mean coefficient rule for non-edge pixels. The fusion result is mapped into a pseudo-color image using a special color look-up table in YUV color space. An embedded fusion system for night vision is designed around the TI multimedia processor DM642, and the algorithm is validated on this platform. Experiments show that the fusion result has good visual effect and also performs well in objective evaluation and noise suppression.

Keywords: image fusion, dual-band infrared, night vision, DSP, real-time processing

1. INTRODUCTION

Future enhancement of night vision systems will require multicolor sensors. Detection of targets in clutter with reduced false alarm rates, operation in an expanded set of atmospheric conditions, and improved situational awareness are all expected to improve significantly with multicolor sensors. Mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) are normally used in night vision systems. The contrast between target and background differs in these dual-band images because of the difference in radiation between the bands, so fusing them can provide richer and more accurate information and improve the performance of a night vision system.

One of the earliest multi-sensor image fusion methods, proposed by Toet [1], was based on multi-resolution image analysis using contrast pyramids. The discrete wavelet transform (DWT) was first used for image fusion by Li [2], and a thorough investigation of DWT and other related multi-scale and multi-resolution fusion methods was provided by Zhang and Blum. Multi-scale fusion for visual display, with additional orientation sensitivity, was considered by Peli [3], and an efficient real-time framework was given by Petrović and Xydeas [4]. More recently, a fusion system based on wavelet transform modulus maxima was proposed by Qu [5], and a multi-scale method based on morphological towers was presented by Mukhopadhyay and Chanda [6] for fusion of medical images. However, almost all fusion methods presented so far are based on relatively basic processing techniques and do not consider subjectively relevant information from higher levels of abstraction. As some recent studies have shown, this does not always satisfy the complex demands of a human observer, and a more subjectively meaningful approach is required.

Because the wavelet transform has special time-frequency characteristics, it can focus on an image's local structure and represent image contours and details through its coefficients, so its application to image fusion achieves good results. In addition, the human eye recognizes color far better than gray scale, which means targets and background can be differentiated more easily in a color image; color and pseudo-color image fusion algorithms have therefore become popular in recent years. However, the pseudo-color fusion results of these methods usually have overly bright colors that are poorly suited to long observation, especially for recognizing dim and small targets [7].

In view of the above questions, after analyzing the radiation characteristics of a blackbody in MWIR and LWIR, this paper proposes a pseudo-color dual-band infrared image fusion algorithm based on image features in the wavelet domain. An embedded fusion system for night vision is designed around the TI multimedia processor DM642, and the algorithm is validated on this platform. Experiments show that the fusion result has good visual effect and also performs well in objective evaluation: more details are kept in the fused image and object recognition can be done more efficiently. Moreover, real-time operation of the algorithm is verified on the DSP fusion platform.

∗ E-mail: [email protected]; Phone: +86-10-6891-2560-803.

2008 International Conference on Optical Instruments and Technology: Optical Systems and Optoelectronic Instruments, edited by Yunlong Sheng, Yongtian Wang, Lijiang Zeng, Proc. of SPIE Vol. 7156, 71562P · © 2009 SPIE · CCC code: 0277-786X/09/$18 · doi: 10.1117/12.807341

2. CHARACTERISTICS OF MWIR & LWIR IMAGES

An infrared image reflects the radiation characteristics of targets and background. Wien's displacement law states that the hotter an object is, the shorter the wavelength at which it emits most of its radiation; the wavelength of peak radiation power is found by dividing Wien's constant by the temperature in kelvins:

$$\lambda_{\max} T = b \qquad (1)$$

where $\lambda_{\max}$ is the wavelength of peak blackbody radiation, T is the blackbody temperature, and the constant b = 2898.8 µm·K. The peak radiation wavelength at any temperature can be calculated from Wien's law. The blackbody temperature range corresponding to MWIR (3~5 µm) is 579.8~966.3 K (306.8~693.3 ℃), a span of 386.5 K; the range corresponding to LWIR (8~12 µm) is 241.6~362.4 K (-31.4~89.4 ℃), a span of 120.8 K. According to these data, targets whose radiation peaks in the MWIR band are hotter and cover a wider temperature range, while those peaking in the LWIR band are colder and cover a narrower range.
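As a quick numerical check of these ranges, Wien's law can be evaluated directly at the band edges (a minimal Python sketch; the band limits 3~5 µm and 8~12 µm are those used later in this section):

```python
# Wien's displacement law (Eq. 1): lambda_max * T = b, with b = 2898.8 um*K.
# Convert the MWIR (3-5 um) and LWIR (8-12 um) band edges into the
# blackbody temperatures whose radiation peaks inside each band.
b = 2898.8  # um * K

for name, (lo, hi) in {"MWIR": (3.0, 5.0), "LWIR": (8.0, 12.0)}.items():
    t_hi, t_lo = b / lo, b / hi          # shorter wavelength -> hotter peak
    print(f"{name}: {t_lo:.1f} K to {t_hi:.1f} K (span {t_hi - t_lo:.1f} K)")
# MWIR: 579.8 K to 966.3 K (span 386.5 K)
# LWIR: 241.6 K to 362.4 K (span 120.8 K)
```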

More specifically, Planck's law describes the spectral radiance of electromagnetic radiation at all wavelengths from a blackbody at temperature T, as shown in Equation 2:

$$M_{\lambda,bb} = \frac{2\pi h c^2}{\lambda^5} \cdot \frac{1}{e^{hc/(\lambda k_B T)} - 1} \qquad (2)$$

where $M_{\lambda,bb}$ is the spectral radiant exitance (W/(m²·µm)), $\lambda$ is the wavelength, T is the absolute temperature (K), h is Planck's constant, c is the speed of light (m/s), and $k_B$ is the Boltzmann constant (J/K). Figure 1 plots the blackbody radiation as a function of wavelength for temperatures between 300 K and 600 K. The peak radiation wavelength clearly decreases as the blackbody temperature rises, and the short-wave proportion of the radiation increases rapidly.

It is difficult to distinguish a target from the background in an infrared image when their radiation levels differ only slightly. The target-to-background radiation contrast, which describes this difference, is defined as

$$C = \frac{M_T - M_B}{M_B} \qquad (3)$$

where C is the target-to-background radiation contrast, $M_T = \int_{\lambda_1}^{\lambda_2} M_\lambda(T_T)\,d\lambda$ is the radiation of the target in the band $\lambda_1 \sim \lambda_2$, and $M_B = \int_{\lambda_1}^{\lambda_2} M_\lambda(T_B)\,d\lambda$ is the radiation of the background in the same band. Figure 2 shows the radiation of the target as its temperature varies from 300 K to 1000 K in the 3~5 µm MWIR and 8~12 µm LWIR bands, together with the radiation contrast against a 300 K background in these two bands.
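The behavior plotted in Figure 2 can be reproduced by numerically integrating Equation 2 over each band and applying Equation 3. A sketch, assuming a 300 K background as in the figure (the sample target temperatures are arbitrary):

```python
import numpy as np

# Physical constants (SI units)
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
kB = 1.381e-23  # Boltzmann constant, J/K

def band_exitance(T, lam_lo_um, lam_hi_um, n=2000):
    """Integrate Planck's law (Eq. 2) over a band; returns W/m^2."""
    lam = np.linspace(lam_lo_um, lam_hi_um, n) * 1e-6  # wavelength in meters
    M = 2 * np.pi * h * c**2 / lam**5 / (np.exp(h * c / (lam * kB * T)) - 1.0)
    return np.trapz(M, lam)

T_background = 300.0
for name, band in {"MWIR": (3.0, 5.0), "LWIR": (8.0, 12.0)}.items():
    M_B = band_exitance(T_background, *band)
    for T_target in (400.0, 600.0, 1000.0):
        M_T = band_exitance(T_target, *band)
        C = (M_T - M_B) / M_B        # Eq. 3
        print(f"{name}, target {T_target:.0f} K: contrast C = {C:.1f}")
```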

[Figure 1 shows blackbody spectral radiation M (W/(m²·µm)) versus wavelength λ (0~15 µm) for temperatures of 300 K, 400 K, 500 K, and 600 K.]

Fig.1 The curves of blackbody radiation

[Figure 2 shows the radiation of a blackbody target in MWIR and LWIR (W/m²) and the target-to-background radiation contrast against a 300 K background, for target temperatures from 300 K to 1000 K.]

Fig.2 Radiation contrast at different temperatures

Figure 2 shows that the MWIR radiation is less than the LWIR radiation in the lower temperature region (< 580 K) but exceeds it in the higher temperature region (> 580 K). As temperature increases, the MWIR radiation grows much faster than the LWIR radiation. Clearly, the target-to-background radiation contrast in the MWIR image is higher than that in the LWIR image at the same target temperature. Note that at lower temperatures the target radiation in MWIR is smaller yet the contrast is higher; this is because the low-temperature background radiation in MWIR is far smaller than that in LWIR. Considering the target-to-background radiation contrast, MWIR is suited to detecting higher-temperature targets, and LWIR to detecting lower-temperature targets.

In most cases the source images used for fusion have the same quantization bit depth. On this premise, MWIR and LWIR images have the following characteristics. (1) In the MWIR image the gray-level difference between hot targets and the colder background is large, so hot targets are conspicuous; conversely, the gray-level difference between cold targets and the background is small, so such targets may not be obvious. (2) The LWIR image has more details than the MWIR image. (3) The gray-level distribution of the MWIR image is scattered, and the fewer hot areas the scene contains, the darker the MWIR image; the gray-level distribution of the LWIR image is more regular, and the LWIR image is usually bright.

It should be noted that the infrared image reflects the radiation characteristics of target and background, but it also depends on the temperature distribution, material emissivity, imaging angle, distance, environment, atmospheric attenuation, the spectral response of the imager, and other factors. The above analysis should therefore be taken as a general guideline.

3. THE PSEUDO-COLOR FUSION ALGORITHM

3.1 Algorithm structure

The most important part of a wavelet multi-resolution fusion method is the definition of its fusion rules. The common rule uses modulus maxima in the high-frequency domain and weighted mean coefficients in the low-frequency domain (referred to here as the WMM method); other rules exist, such as one based on the regional energy of coefficients [8] (the WRE method). Because the high-frequency coefficients after wavelet decomposition reflect singular points and detail information, the above rules preserve maximum detail but inevitably also bring in undesirable noise. This paper instead uses fusion rules based on image features: according to the edge features of the source images, local modulus maxima are selected at edge pixels and their sub-band neighboring pixels, but not at non-edge pixels, even in the high-frequency sub-bands. With these rules the fusion result keeps the details of the source images while suppressing noise.

Hot targets and moving targets are automatically extracted from the infrared images and image sequences during fusion processing. The extracted targets are marked in the fusion result with pseudo-color, which helps the human eye distinguish targets from background. The structure of the target pseudo-color image fusion algorithm in the wavelet domain is shown in Figure 3. The differences from other wavelet fusion methods are as follows: (1) the fusion rules are defined by the edge features of the source images; (2) the targets of interest are extracted by a threshold segmentation method and a moving target detection method and are marked into the fusion result with pseudo-color, which helps the human eye distinguish dim and small targets.


Fig.3 The structure of the target pseudo-color fusion algorithm

3.2 Image feature extraction in the wavelet domain


3.2.1 Edge detection

Many algorithms exist for image edge extraction; the Canny operator is one that performs well: it is insensitive to noise, does not easily introduce phase offsets, and rarely generates false edges. Suppose that A and B are source images with resolution M × N. The edge map of image A is denoted $E_A$, as in Equation 4, and $E_B$ has the same form:

$$E_{A(m,n)} = \begin{cases} 1 & \text{edge} \\ 0 & \text{non-edge} \end{cases} \quad (m \in M,\ n \in N) \qquad (4)$$

During wavelet decomposition, the energy of singular points and edges in the source image is diffused into the sub-band neighborhood, so $E_A$ and $E_B$ must be dilated with each edge pixel as the center of a neighborhood. The dilated edge maps $E_A^D$ and $E_B^D$ are used to define the fusion rules; $E_A^D$ is expressed as Equation 5, and $E_B^D$ has the same form:

$$E^D_{A(m,n)} = \mathrm{Dilate}(E_A,\, k \times k)_{(m,n)} \qquad (5)$$

where k × k is the size of the dilation neighborhood, which is related to the basis function used in the wavelet decomposition. The db3 basis function, whose vanishing moment is 3, is used here, so the dilation neighborhood is 3 × 3.
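As an illustration of Equations 4~5, a minimal sketch using OpenCV's Canny detector and a 3 × 3 dilation; the Canny threshold values are illustrative assumptions, not taken from the paper:

```python
import cv2
import numpy as np

def dilated_edge_map(img, k=3, low=50, high=150):
    """Binary edge map (Eq. 4) dilated over a k x k neighborhood (Eq. 5).
    The Canny thresholds `low` and `high` are illustrative choices."""
    edges = cv2.Canny(img, low, high)     # 0 or 255 per pixel
    e = (edges > 0).astype(np.uint8)      # E_A in {0, 1}
    kernel = np.ones((k, k), np.uint8)
    return cv2.dilate(e, kernel)          # E_A^D

# e.g. E_AD = dilated_edge_map(mwir_img); E_BD = dilated_edge_map(lwir_img)
```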

The targets of interest are extracted from the MWIR and LWIR images by an automatic threshold segmentation method and a moving target detection method.

3.2.2 Threshold segmentation

Hot targets are extracted using the maximum inter-class variance method proposed by Otsu [9]. Otsu's algorithm is derived from least-squares decision analysis: it uses the gray-level histogram of the image to compute the within-class and inter-class variance between targets and background and thereby determine the segmentation threshold. Because the threshold maximizes the inter-class variance, this method guarantees maximum separation between target and background. Owing to the different radiation characteristics of hot targets and background, hot targets are easily identified in an infrared image, and Otsu's method segments them well in the MWIR image. The segmented hot targets $T_{IR}$ are given by Equation 6:

$$T_{IR(m,n)} = \begin{cases} 1 & IR_{(m,n)} \ge \tau_{IR} \\ 0 & \text{otherwise} \end{cases} \quad (m \in M,\ n \in N) \qquad (6)$$

where $\tau_{IR}$ is the segmentation threshold; points whose value equals 1 in $T_{IR}$ belong to segmented targets. In realistic applications the threshold needs to be adjusted for complicated backgrounds.
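A sketch of this segmentation step using OpenCV's built-in Otsu thresholding as a stand-in for the paper's implementation:

```python
import cv2
import numpy as np

def hot_target_mask(ir_img):
    """Segment hot targets (Eq. 6): Otsu picks tau_IR from the histogram;
    pixels at or above it are marked 1."""
    tau, mask = cv2.threshold(ir_img, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return (mask > 0).astype(np.uint8), tau   # T_IR and the threshold
```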

3.2.3 Moving target detection

Moving targets can be detected from both source images and then fused into integrated moving targets. An inter-frame difference method is used. Suppose that $f_1(m,n)$ and $f_2(m,n)$ are adjacent frames of an image sequence; the difference image is $D(m,n) = |f_1(m,n) - f_2(m,n)|$, where regions that change little can be taken as background and regions that change greatly as targets. The difference image is then binarized with a threshold $\tau_M$, and after further processing such as de-noising, the moving target information $T_M$ is extracted as in Equation 7:

$$T_{M(m,n)} = \begin{cases} 1 & D_{(m,n)} \ge \tau_M \\ 0 & \text{otherwise} \end{cases} \quad (m \in M,\ n \in N) \qquad (7)$$

The hot targets and moving targets are fused into the final target map by an "OR" operation, as in Equation 8:

$$T_F = T_{MWIR} \,|\, T_{LWIR} \,|\, T_{M(MWIR)} \,|\, T_{M(LWIR)} \qquad (8)$$

where $T_F$ is the fused target map, and $T_{M(MWIR)}$ and $T_{M(LWIR)}$ are the moving targets detected from the MWIR and LWIR sequences.
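A sketch of the frame-difference detection (Equation 7) and the "OR" combination (Equation 8); the threshold value and the morphological de-noising step are illustrative choices:

```python
import cv2
import numpy as np

def moving_target_mask(frame1, frame2, tau_m=25):
    """Inter-frame difference (Eq. 7). tau_m is an illustrative threshold."""
    d = cv2.absdiff(frame1, frame2)
    mask = (d >= tau_m).astype(np.uint8)
    # simple de-noising: remove isolated pixels with a morphological opening
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

# Eq. 8: combine hot and moving targets from both bands with logical OR
def fuse_targets(t_mwir, t_lwir, tm_mwir, tm_lwir):
    return t_mwir | t_lwir | tm_mwir | tm_lwir
```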

3.3 Fusion rules based on image features

The edge maps are classified into three types according to spatial position: the special component $E_A^{D*}$ of source image A, the special component $E_B^{D*}$ of source image B, and the common component $E_{AB}^D$, given by Equations 9~11:

$$E_A^{D*} = (E_A^D \ \mathrm{XOR}\ E_B^D)\ \mathrm{AND}\ E_A^D \qquad (9)$$
$$E_B^{D*} = (E_A^D \ \mathrm{XOR}\ E_B^D)\ \mathrm{AND}\ E_B^D \qquad (10)$$
$$E_{AB}^D = E_A^D \ \mathrm{AND}\ E_B^D \qquad (11)$$

To keep as many details of the source images as possible, the fusion rules for the wavelet coefficients are defined as follows: use the local modulus maxima rule on the common edge $E_{AB}^D$; use the wavelet coefficients of source image A on its special edge $E_A^{D*}$; use the wavelet coefficients of source image B on its special edge $E_B^{D*}$; and use the weighted mean coefficient rule at non-edge points, since those indicate similarity between the two source images. At non-edge points noise is conspicuous in the high-frequency sub-bands but effectively suppressed in the low-frequency sub-band, so different strategies are applied there: the weighted mean is used for the low-frequency coefficients to draw information evenly from both sources, while an attenuation factor is applied to the high-frequency coefficients to suppress noise. The fusion rules based on image edges are expressed in Equations 12~13:

$$C_{F(m,n)} = \begin{cases} C_{A(m,n)} & (m,n) \in E_{AB}^D \ \text{and}\ |C_{A(m,n)}| \ge |C_{B(m,n)}| \\ C_{B(m,n)} & (m,n) \in E_{AB}^D \ \text{and}\ |C_{A(m,n)}| < |C_{B(m,n)}| \\ C_{A(m,n)} & (m,n) \in E_A^{D*} \\ C_{B(m,n)} & (m,n) \in E_B^{D*} \\ C_{NE(m,n)} & \text{otherwise} \end{cases} \qquad (12)$$

$$C_{NE(m,n)} = \begin{cases} (A_{A(m,n)} + A_{B(m,n)})/2 & (m,n) \in \text{approximation coefficients} \\ (D_{A(m,n)} + D_{B(m,n)})/a & (m,n) \in \text{detail coefficients} \end{cases} \qquad (13)$$

where $C_{F(m,n)}$ are the wavelet coefficients of the fused image, and $C_{A(m,n)}$ and $C_{B(m,n)}$ are the wavelet coefficients of the source images at edge points. $A_{A(m,n)}$ and $A_{B(m,n)}$ are the low-frequency (approximation) sub-band coefficients of the source images at non-edge points, $D_{A(m,n)}$ and $D_{B(m,n)}$ are the high-frequency (detail) sub-band coefficients at non-edge points, and a is the attenuation factor with $a \ge 2$.
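A condensed sketch of these rules with PyWavelets (db3, as in the paper). The paper extracts edge features at each scale; here, as a simplifying assumption, a single full-resolution edge map is resized to each coefficient level:

```python
import numpy as np
import pywt
import cv2

def fuse_band(cA, cB, e_common, e_a, e_b, a=2.0, is_detail=True):
    """Apply Eqs. 12-13 to one pair of coefficient arrays."""
    out = (cA + cB) / (a if is_detail else 2.0)        # Eq. 13 (non-edge)
    out = np.where(e_a, cA, out)                       # special edge of A
    out = np.where(e_b, cB, out)                       # special edge of B
    maxi = np.where(np.abs(cA) >= np.abs(cB), cA, cB)  # modulus maxima
    return np.where(e_common, maxi, out)               # common edge (Eq. 12)

def resize_mask(mask, shape):
    """Nearest-neighbor resize of a binary mask to a coefficient shape."""
    m = cv2.resize(mask.astype(np.uint8), (shape[1], shape[0]),
                   interpolation=cv2.INTER_NEAREST)
    return m > 0

def fuse_images(imgA, imgB, e_ad, e_bd, a=2.0, levels=3):
    e_ad, e_bd = e_ad.astype(bool), e_bd.astype(bool)
    masks = (e_ad & e_bd, e_ad & ~e_bd, e_bd & ~e_ad)  # Eqs. 9-11
    wA = pywt.wavedec2(imgA.astype(float), "db3", level=levels)
    wB = pywt.wavedec2(imgB.astype(float), "db3", level=levels)
    fused = []
    for cA, cB in zip(wA, wB):
        if isinstance(cA, np.ndarray):                 # approximation band
            ms = [resize_mask(m, cA.shape) for m in masks]
            fused.append(fuse_band(cA, cB, *ms, a=a, is_detail=False))
        else:                                          # (cH, cV, cD) details
            ms = [resize_mask(m, cA[0].shape) for m in masks]
            fused.append(tuple(fuse_band(dA, dB, *ms, a=a, is_detail=True)
                               for dA, dB in zip(cA, cB)))
    return pywt.waverec2(fused, "db3")                 # gray-scale result
```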

3.4 Pseudo-color image fusion in the wavelet domain

Targets extracted according to the features of the source images can be marked with a special color and exported into the fusion result, greatly helping the human eye detect targets of interest. Since only the local areas containing targets are mapped into pseudo-color, the mapping does not cause visual fatigue. Among the many color spaces, YUV reduces the complexity of color image processing and suits the DSP fusion system, so the pseudo-color mapping is performed in YUV space. The color mapping relations are given by Equations 14~16:

$$FC_{Y(m,n)} = F_{G(m,n)} \qquad (14)$$

$$FC_{U(m,n)} = \begin{cases} m_1 \cdot LWIR_{(m,n)} - m_2 \cdot MWIR_{(m,n)} & (m,n) \in T_F \\ 128 & \text{otherwise} \end{cases} \qquad (15)$$

$$FC_{V(m,n)} = \begin{cases} m_3 \cdot MWIR_{(m,n)} - m_4 \cdot LWIR_{(m,n)} & (m,n) \in T_F \\ 128 & \text{otherwise} \end{cases} \qquad (16)$$

where $FC_Y$, $FC_U$, and $FC_V$ are the luminance and chrominance components of the pseudo-color fusion image, $F_G$ is the gray-scale fusion result, and $m_1 \ldots m_4$ are coefficients that determine the fusion color. A target appears red when its gray value in the MWIR image is higher than in the LWIR image, and blue in the opposite case; the different colors tell the observer which source image a target comes from. The luminance of the fusion result is unchanged by the mapping.

The steps of the fusion algorithm are as follows: (1) Decompose the source images with the Mallat algorithm after selecting the wavelet basis function and decomposition level. (2) Extract the image edge features in the low-frequency component at every scale, and define the fusion rules from the edge maps (Equations 4~5). (3) Extract hot targets by threshold segmentation of the source images and moving targets by detection in the image sequences (Equations 6~8). (4) Fuse the source images by the fusion rules at every scale to obtain the multi-resolution representation of the fused image, then obtain the gray-scale fusion result through the IDWT. (5) Map the extracted targets into the gray-scale fusion result in YUV color space (Equations 14~16) to obtain the final pseudo-color fusion image.
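A sketch of the mapping in Equations 14~16. The coefficient values m1~m4, and the +128 offset applied so that target chroma is centered on the neutral value used elsewhere, are assumptions for illustration:

```python
import numpy as np

def pseudo_color_yuv(f_gray, mwir, lwir, t_f, m=(0.5, 0.5, 0.5, 0.5)):
    """Map the extracted target regions T_F into pseudo-color (Eqs. 14-16).
    m1..m4 and the +128 chroma offset are illustrative assumptions."""
    m1, m2, m3, m4 = m
    Y = f_gray.astype(np.uint8)               # Eq. 14: luminance unchanged
    U = np.full(f_gray.shape, 128, np.uint8)  # neutral chroma off-target
    V = np.full(f_gray.shape, 128, np.uint8)
    u = 128 + m1 * lwir.astype(np.int32) - m2 * mwir.astype(np.int32)
    v = 128 + m3 * mwir.astype(np.int32) - m4 * lwir.astype(np.int32)
    on = t_f > 0
    U[on] = np.clip(u, 0, 255).astype(np.uint8)[on]   # Eq. 15 on targets
    V[on] = np.clip(v, 0, 255).astype(np.uint8)[on]   # Eq. 16 on targets
    return Y, U, V  # V > 128 (MWIR hotter) renders reddish, U > 128 bluish
```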

4. REAL-TIME FUSION SYSTEM

4.1 Hardware design of the system

The real-time fusion system is composed of a forward-looking infrared camera platform, a video capture and processing board, and a video display module. The camera platform includes a precision pan/tilt and the MWIR and LWIR imagers; the dual-band images can be registered by adjusting the pan/tilt. The source images are captured and processed by a DSP mini-module, and the board also handles data communication and video display. TI DSPs are widely used for digital signal processing in many systems, and the video port is a new type of peripheral in the digital media DSP series. The TMS320DM642 is a powerful digital media processor with one C64x DSP core, three video ports, and other peripherals, making it well suited to building a versatile miniature image processing platform. A color TFT monitor displays the fusion result.

[Figure 4 block diagram: two TVP5150 decoders feed video ports VP0 and VP1 of the TMS320DM642; an SAA7121 encoder on VP2 drives the display; the EMIF connects 32 Mbyte SDRAM, 8 Mbyte flash, and an XC95144 CPLD; JTAG and a power module complete the board.]

Fig.4 Block diagram of the DSP board

Fig.5 Hardware of the DSP fusion processing module

The mini DSP module includes a 600 MHz DM642 processor, 256 Mbit of SDRAM, and 32 Mbit of flash memory. The nonvolatile flash stores the program, and the DM642 boots from it at power-on. The fusion system has two video capture channels and one display channel, using VP0, VP1, and VP2 respectively. The capture channels use the TVP5150 as decoder, and the display channel uses the SAA7121 as encoder. The system requires multiple supply rails, including 1.4 V and 3.3 V; TI SWIFT TPS5431x dc/dc regulators are used for the power module. The block diagram of the DSP board is shown in Figure 4, and the hardware of the DSP fusion processing module is shown in Figure 5.


4.2 Algorithm optimization on the DSP platform

Although the DM642 has high performance, it is still necessary to optimize the algorithm and allocate memory sensibly, because the amount of data to process is huge and there are many intermediate results. The optimization must follow the characteristics of the DM642 architecture; the strategies include, but are not limited to, the following (a sketch of the ping-pong scheme in item 3 follows this list):

(1) Process data in on-chip RAM as much as possible. Allocate temporary buffers in on-chip RAM and move the data to be processed into them; this reduces CPU access time efficiently.

(2) Choose an appropriate buffer size. The buffer cannot be too large, limited by the size of on-chip RAM, but it should not be too small either, which would increase the number of data moves and reduce program efficiency.

(3) Use ping-pong buffer schemes to run EDMA transfers in parallel with CPU execution, hiding most or all of the overhead associated with data movement.

(4) Make the best use of the DSP cache. Cache problems can be detected with the TI cache analysis tool and then fixed; note that it is the application's responsibility to ensure cache coherency.

(5) Optimize the key functions in linear assembly. Making full use of the DM642's multi-pipeline structure and optimizing key functions with parallel assembly language improves system performance considerably.
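The ping-pong scheme of item (3) can be modeled conceptually as follows; on the DM642 the filler would be an EDMA channel and the code would be written in C, so this sketch only illustrates the scheduling, with illustrative names throughout:

```python
import threading

def run_pingpong(read_block, process_block, num_blocks, size=4096):
    """Conceptual ping-pong double buffering: the filler (EDMA stand-in)
    and the consumer (CPU stand-in) each alternate between two buffers,
    overlapping transfer with computation."""
    bufs = [bytearray(size), bytearray(size)]
    full = [threading.Semaphore(0), threading.Semaphore(0)]
    free = [threading.Semaphore(1), threading.Semaphore(1)]

    def filler():                       # models the EDMA channel
        for i in range(num_blocks):
            k = i % 2
            free[k].acquire()           # wait until buffer k is reusable
            read_block(bufs[k])         # "transfer" into buffer k
            full[k].release()           # mark buffer k ready

    threading.Thread(target=filler, daemon=True).start()
    for i in range(num_blocks):         # models the CPU loop
        k = i % 2
        full[k].acquire()               # wait for the finished transfer
        process_block(bufs[k])          # compute while the other refills
        free[k].release()
```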

5. EXPERIMENTS AND DISCUSSION

db3, which is orthogonal and compactly supported, is used as the wavelet basis function in the experiments, with a decomposition level of 3. The attenuation factor has a strong effect on the fusion result. We set a = 2 when the quality of the source images is good: the high-frequency sub-band coefficients are then not attenuated, preserving as much source detail as possible. The attenuation factor can be increased to suppress noise when the source images are polluted by Gaussian noise. To verify the performance of our algorithm, it is compared with the WMM and WRE methods (Section 3.1) in the experiments. The product of image structural similarity and mutual information (named MWMS) is used as the objective index to evaluate fusion results; its calculation method and formula are introduced in reference [10]. The signal-to-noise ratio (SNR) is used as a second objective index to demonstrate the de-noising effect of our algorithm.

5.1 Attenuation factor a = 2

Figure 6 shows an LWIR and an MWIR image of a ground scene with some hot targets: several small bright targets, probably vehicles, and features such as roads and buildings. The LWIR image is brighter overall and has more details than the MWIR image, but the hot targets are more obvious in the MWIR image. The WMM method, the WRE method, and our method all give good results in Figure 6. In the pseudo-color fusion image, hot targets from the MWIR image are marked in a warm color and hot targets from the LWIR image in a cold color, which is more vivid and better for human observation; the different colors tell the observer which source image a target comes from, and this is very important.

Tab.1 Objective evaluation results of experiment 1

            WMM Method    WRE Method    Our Method
  MWMS        1.3989        1.7052        1.7151
  SNR (dB)   43.0527       47.4635       52.1702

The objective evaluation results of experiment 1 are shown in Table 1. They lead to the conclusion that the WRE method is better than the WMM method, and our method is better still: the fusion result obtained by our method retains more structure and detail information than the other two, and its SNR is improved at the same time.

5.2 Attenuation factor a > 2

To examine how the attenuation factor suppresses noise, we add Gaussian noise with zero mean and variance 0.003 to the source images. The attenuation factor should not be too large; here a = 2.5. The source images show a dam scene; there are some obvious point targets in the MWIR image, but parts of them are slightly fuzzy in the LWIR image. The fusion results of this experiment are shown in Figure 7. The noise from the source images is amplified by the WMM and WRE methods and is very obvious in both of their results, but it is suppressed effectively in our method, as the SNR values in Table 2 confirm. However, the MWMS value is affected by the attenuation factor, because attenuation suppresses noise at non-edge points and so loses some high-frequency sub-band detail at those points. Achieving the best fusion result therefore requires selecting a proper attenuation factor according to the quality of the source images.

Tab.2 Objective evaluation results of experiment 2

            WMM Method    WRE Method    Our Method
  MWMS        0.7943        0.9643        1.0400
  SNR (dB)   17.4107       18.7531       20.4277

6. CONCLUSIONS

A pseudo-color dual-band infrared image fusion algorithm based on image features in the wavelet domain is proposed in this paper. The fusion rules are defined by the edge information of the source images. Targets of interest are extracted by threshold segmentation and moving target detection and are mapped locally into the pseudo-color fusion image in YUV color space; the resulting image is more vivid and better for human observation. An embedded fusion system for night vision is built around the TI multimedia processor DM642, and the algorithm runs on this platform in real time. Note that the attenuation factor strongly affects the fusion result and should be adjusted to balance de-noising against preserving fine high-frequency sub-band details at non-edge points.

ACKNOWLEDGEMENT

This work was supported by the National Natural Science Foundation of China (No. 60702017) and the Aero Science Foundation of China (No. 20060112106).

REFERENCES

[1] A. Toet, "Hierarchical Image Fusion," Machine Vision and Applications, Vol. 3, 3-11 (1990).
[2] H. Li, B. Manjunath, S. Mitra, "Multisensor Image Fusion Using the Wavelet Transform," Graphical Models and Image Processing, Vol. 57(3), 235-245 (1995).
[3] T. Peli, E. Peli, K. Ellis, R. Stahl, "Multi-Spectral Image Fusion for Visual Display," Proc. SPIE, Vol. 3719, 359-368 (1999).
[4] V. Petrović, C. Xydeas, "Computationally Efficient Pixel-level Image Fusion," Proceedings of EuroFusion99, Stratford-upon-Avon, 177-184 (1999).
[5] G. Qu, D. Zhang, P. Yan, "Medical image fusion by wavelet transform modulus maxima," Optics Express, Vol. 9(4), 184-190 (2001).
[6] S. Mukhopadhyay, B. Chanda, "Fusion of 2D grayscale images using multiscale morphology," Pattern Recognition, Vol. 34, 1939-1949 (2001).
[7] Guanghua Huang, Guoqiang Ni, Bin Zhang, "Visual and infrared dual-band false color image fusion method motivated by Land's experiment," Optical Engineering, Vol. 46(2), 027001 (2007).
[8] Wang Jiang-an, Xiao Wei-an, "A Method for Image Fusion Based on Area Feature Measure," Optics & Optoelectronic Technology, Vol. 1(2), 57-59 (2003).
[9] N. Otsu, "A Threshold Selection Method from Gray-Level Histograms," IEEE Transactions on Systems, Man, and Cybernetics, Vol. 9(1), 62-66 (1979).
[10] Di Hongwei, Liu Xianfeng, "Image Fusion Quality Assessment Based on Structural Similarity," Acta Photonica Sinica, Vol. 35(5), 766-770 (2006).


(a) LWIR source image (b) MWIR source image (c) Result by WMM method
(d) Result by WRE method (e) Gray-scale result by our method (f) Pseudo-color result by our method

Fig.6 Fusion results of experiment 1

(a) LWIR source image (b) MWIR source image (c) Result by WMM method
(d) Result by WRE method (e) Gray-scale result by our method (f) Pseudo-color result by our method

Fig.7 Fusion results of experiment 2
