2017 3rd IEEE International Conference on Computer and Communications
Single Fog Image Restoration via Multi-scale Image Fusion
Yin Gao, Yijing Su, Qiming Li, Jun Li*
Quanzhou Institute of Equipment Manufacturing, CAS, Quanzhou, China
e-mail: [email protected], [email protected]
Abstract—Images and videos captured in bad weather usually suffer from degraded quality, with reduced contrast and faded colors, which makes it very difficult to achieve promising visual performance. Traditional prior-based techniques are not sufficient to address the halo artifacts and brightness distortion that arise in this challenging problem. In this paper, we propose a multi-scale fusion method for single fog image restoration. By dividing the image into two regions, the global atmospheric light can be effectively estimated from the sky region. To properly optimize the transmission, our method employs a new Kirsch operator with an adaptive boundary constraint. With a new multi-scale image fusion method, the haze can then be effectively removed by fusing the resulting images. The proposed method reduces halo artifacts by adaptively constraining the boundary of an arbitrary hazy image, and the new multi-scale image fusion produces a more natural visual recovery. Experimental results show that the method outperforms state-of-the-art haze removal methods in terms of both efficiency and dehazing visual effect.

Keywords—image restoration; histogram analysis; adaptive boundary constraint; multi-scale image fusion
I. INTRODUCTION
In object detection, recognition, and navigation, it is important to understand the natural environment in order to successfully execute visual tasks [1]. However, the quality of captured images and videos usually degrades in bad weather, which is characterized by reduced contrast and faded colors, and it is very difficult to achieve promising visual performance in this case [2-3]. Early dehazing methods mainly relied on additional assumptions or extra inputs for performance improvement, and many methods have been proposed to solve the haze problem in recent years [4-12]; representative works include [7], [10], [11], and [12]. He et al. notice that the transmission tends to zero as the scene depth tends to infinity; based on this observation, they develop the dark channel prior (DCP) [7]. Meng et al. propose an efficient regularization method in which the transmission is optimized from a boundary constraint map [10]. Nishino et al. introduce a Bayesian probabilistic method that dehazes an image using a factorial Markov random field [11]. Kim et al. notice that a hazy image generally exhibits low contrast and, based on this observation, propose a fast and optimized dehazing method for hazy images and video [12].
Single image dehazing has attracted increasing attention in a variety of practical applications and is a more challenging problem, since the limited information about the scene structure increases the difficulty of restoring the observed hazy image [17-23]. Recently, significant advances have also been achieved [17], [18], [19], [22] and [23], benefiting much from insightful explorations of new fusion-based techniques. Schaul et al. propose a method to dehaze images using both visible and near-infrared (NIR) images [17]: a dehazed color image is obtained by fusing a visible and a near-infrared image of the same scene. Ancuti et al. propose a multi-scale fusion-based technique for dehazing a single degraded image [18-19]. Their method can generate better results, especially in edge regions with haze; however, since it is not a physics-based method, the obtained image often suffers from remaining haze. Wang et al. present a novel adaptive local similarity-based wavelet fusion method for single image dehazing [22]. This fusion method can achieve quite compelling results, especially in regions with dense haze. Galdran et al. propose a fusion-based variational image dehazing method, in which a series of images is generated with an image energy method [23]; each generated iterate implicitly carries useful information on the degree of enhancement each region needs. These fusion-based methods can recover a haze-free image with better visual effects, but the results often suffer from remaining haze and distorted colors.

The most challenging problem in recovering a single hazy image is how to achieve the best visual effects. In this paper, we introduce a single-image strategy that is able to dehaze accurately using only the original degraded image. This work mainly addresses the challenging problems of halo artifacts and brightness distortion in image dehazing. Our goal is to develop a simple and effective method, and all the fusion processing steps are therefore designed to support these important features. We mainly focus on improving the visual effects of fog image restoration based on multi-scale image fusion while decreasing the computational complexity. To solve the brightness distortion problem, we segment the sky regions so as to obtain the global atmospheric light accurately. To reduce the halo artifacts, we develop a new Kirsch operator with adaptive boundary constraint (KA) to optimize the transmissions. With DCP processing, three dehazed images can be obtained from the different global atmospheric lights and transmissions. Finally, to improve the visual effects of fog image restoration, we fuse the three input images with a new multi-scale image fusion method. The main contributions of the proposed method can be summarized as follows. Firstly, the global atmospheric light is, for the first time, estimated via histogram-based sky segmentation to solve the brightness distortion problem. Secondly, a new Kirsch operator with adaptive boundary constraint is constructed to optimize the transmissions and reduce the halo artifacts. Finally, a new multi-scale image fusion method is introduced to improve the visual effects of fog image restoration.
II. ATMOSPHERIC SCATTERING MODEL

In the fields of computer vision and graphics computing, the atmospheric scattering model can usually be described as follows:

I(x) = J(x) t(x) + A (1 − t(x)),   (1)
where I(x) is the observed image from a camera, J(x) is the scene radiance without fog, t(x) is the medium transmission describing the portion of the light that is not scattered and reaches the camera, and A represents the global atmospheric light. In the DCP, the transmission in (1) can be refined with guided image filtering [8]:

t(x) = 1 − ω · I_dark(x),   (2)
where I_dark(x) = min_{c∈{r,g,b}} ( min_{y∈Ω(x)} I^c(y)/A^c ), and t(x) is the result of the guided image filtering process. I^c(y) is the intensity of one channel of the RGB image, and ω ∈ [0,1] is a constant parameter that keeps a small amount of haze for a natural output appearance. The final scene radiance is recovered by

J(x) = (I(x) − A) / max(t(x), t_0) + A,   (3)
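To make the model concrete, the following is a minimal NumPy sketch of equations (1)–(3), assuming an H×W×3 RGB image normalized to [0, 1] and a 15×15 patch for Ω(x); the guided-filter refinement of [8] is omitted, and all function and parameter names are ours rather than the paper's.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """I_dark(x): per-pixel minimum over RGB, then a local minimum
    filter over the patch Omega(x) (the inner term of eq. (2))."""
    return minimum_filter(img.min(axis=2), size=patch)

def recover_scene(img, A, omega=0.95, t0=0.1, patch=15):
    """Eqs. (2)-(3) for an HxWx3 float image in [0, 1] with scalar or
    3-vector atmospheric light A. The paper further refines t(x) with
    guided image filtering [8]; that refinement is omitted here."""
    A = np.asarray(A, dtype=float)
    t = 1.0 - omega * dark_channel(img / A, patch)   # eq. (2)
    t = np.maximum(t, t0)                            # lower bound t0 of eq. (3)
    J = (img - A) / t[..., None] + A                 # invert eq. (1)
    return np.clip(J, 0.0, 1.0)
```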
where t_0 is a lower bound on the transmission t(x).

III. PROPOSED METHOD

In this section, we describe how multi-scale image fusion is utilized for image dehazing. There are three major components. First, according to a histogram analysis of the three color channels, the fog image is divided into two regions (the sky and non-sky regions) to estimate the global atmospheric lights (A_1, A_2 and A_3). Subsequently, to properly optimize the transmission, the hazy image is processed with an adaptive boundary constraint under each global atmospheric light, and a new Kirsch operator with adaptive boundary constraint (KA) is then applied to the constrained images; accurate transmissions are obtained by these two steps. Finally, after processing with the classical atmospheric scattering model, a new multi-scale image fusion method is used to obtain the final dehazed image.
Figure 1. Sky region segmentation by our method. (a) The hazy image. (b) Our dehazing result. (c-e) The corresponding segmented images of the R, G and B channels.
A. Estimation of the Global Atmospheric Light with Histogram Analysis

To solve the brightness distortion problem, we propose a new global atmospheric light estimation method based on a histogram analysis of the observed fog image. In our analysis of the DCP, we find that the estimation precision of the global atmospheric light is closely related to the brightness of the output dehazed image. Moreover, when the transmission is estimated over the whole fog image, halo artifacts may be observed at the edges. It is therefore important to estimate the global atmospheric light effectively in order to improve the visual effects of fog image restoration. This inspires us to analyze various hazy images for their statistical characteristics and to seek a new global atmospheric light estimation method for single image dehazing. We estimate the global atmospheric light from the characteristics of the sky regions in a fog image. When we draw the histogram of an observed fog image, we find that the threshold in each channel that segments the fog image into sky and non-sky regions lies exactly at the first trough from right to left in the histogram. Based on this property, we can use these thresholds to determine the global atmospheric light in the R, G and B channels of a fog image. Fig. 1 gives an example with a natural scene showing where the threshold is found in the R, G and B channels. As illustrated in Fig. 1(c-e), the value at the first trough from 255 to 0 in the histogram can be used as a threshold to segment the image of each channel (see the corresponding segmented images; the sky region is efficiently segmented in each channel). Fig. 1(a-b) shows a hazy image and our dehazing result.

To find the threshold that segments the sky regions in each channel of the observed image, we develop a new method. We first smooth the histogram h_c(x) of each channel with Gaussian filtering, and then obtain the threshold by

s_c(x) = h_c(x) ∗ g(x),
T_c = argmax_{x∈[0,255]} { x | s_c′(x) = 0, s_c″(x) > 0 },  c ∈ {r, g, b},   (4)

where g(x) is a Gaussian kernel and T_c is the threshold for segmenting the sky regions in each channel of the fog image. For each channel, the threshold T_c yields the corresponding segmented image (see Fig. 1(c-e)).
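The trough search of equation (4) can be sketched as follows; the histogram smoothing scale (sigma) is our assumption, since the paper does not state the Gaussian kernel width. The three per-channel thresholds feed the atmospheric-light estimates of equation (5) below.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def channel_trough_threshold(channel, sigma=4.0):
    """Eq. (4): Gaussian-smooth the 256-bin histogram of one 8-bit
    channel and return the right-most local minimum (the first trough
    scanning from 255 down to 0), used as the sky threshold T_c."""
    hist, _ = np.histogram(channel.ravel(), bins=256, range=(0, 256))
    s = gaussian_filter1d(hist.astype(float), sigma)   # s_c = h_c * g
    for x in range(254, 0, -1):                        # bright to dark
        if s[x] < s[x - 1] and s[x] < s[x + 1]:        # s' = 0, s'' > 0 (discrete)
            return x
    return 0  # no trough found: treat the whole channel as non-sky
```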
To estimate the global atmospheric light A more effectively, we reformulate A as

A_1 = max(T_c),  A_2 = mean(T_c),  A_3 = min(T_c),   (5)

where A_1, A_2 and A_3 are the maximum, mean and minimum of the three thresholds, respectively; they are the three solved values of the global atmospheric light.

B. The Transmission Optimization

With fixed boundary constraint operations, the recovered image still suffers from hue and brightness distortion, mainly in sky regions with dense haze and under low-light conditions [10]. To solve this problem, we propose a new Kirsch operator with adaptive boundary constraint to improve the accuracy of the transmission optimization. According to the radiance cube definition, we define an adaptive boundary constraint of an arbitrary hazy image as

t_b(x) = min_{i∈[1,2,3]} { max_{c∈{r,g,b}} ( (A_i − I^c(x)) / (A_i − C_0^c(x)), (A_i − I^c(x)) / (A_i − C_1^c(x)) ) },   (6)
where t_b(x) is the transmission with the boundary constraint under each global atmospheric light A_i. C_0^c(x) is the minimum value over the color channels of a pixel, C_0^c(x) = min_{c∈{r,g,b}} I^c(x), and C_1^c(x) is the maximum value over the color channels, C_1^c(x) = max_{c∈{r,g,b}} I^c(x). A_i takes the three values (A_1, A_2, A_3) obtained by (5). Generally, pixels in sky regions exhibit abrupt depth jumps, leading to significant halo artifacts in dehazing results, and so far there has been no effective way to deal with this problem. To suppress insignificant abrupt depth jumps while maintaining major edges in the transmission, we replace the bank of high-order filters with a new Kirsch operator with adaptive boundary constraint. The solution is based on Meng et al.'s method [10]. After the adaptive boundary constraint optimization, the transmission in (6) is refined by

t̃_i(x) = K_A(t_b(x)),  i ∈ [1,2,3],   (7)
where K_A is our new Kirsch operator with adaptive boundary constraint, and t̃_i(x) is the transmission refined by it under each global atmospheric light A_i.
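The following sketch shows, under our reading of equations (6)–(7), the per-A_i boundary-constraint map and the eight Kirsch compass kernels that the KA step substitutes for Meng et al.'s high-order filter bank [10]; the contextual-regularization solver that couples them is not reproduced here, and the small epsilon guard is ours.

```python
import numpy as np

def boundary_constrained_t(img, A_i, eps=1e-6):
    """Per-A_i boundary-constraint map of eq. (6): img is HxWx3 in
    [0, 1], A_i a scalar atmospheric light; C0/C1 are the per-pixel
    minimum/maximum over the RGB channels."""
    C0 = img.min(axis=2, keepdims=True)      # C_0(x) = min_c I^c(x)
    C1 = img.max(axis=2, keepdims=True)      # C_1(x) = max_c I^c(x)
    num = A_i - img
    ratio = np.maximum(num / (A_i - C0 + eps), num / (A_i - C1 + eps))
    return np.clip(ratio.max(axis=2), 0.0, 1.0)   # max over colour channels

def kirsch_kernels():
    """Eight 3x3 Kirsch compass kernels, used by the KA step as the
    weighting filters when refining t_b(x) into each t~_i(x) (eq. (7))."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [5, 5, 5, -3, -3, -3, -3, -3]     # north-facing prototype
    kernels = []
    for s in range(8):                       # rotate around the border ring
        k = np.zeros((3, 3))
        for i, (r, c) in enumerate(ring):
            k[r, c] = vals[(i - s) % 8]
        kernels.append(k)
    return kernels
```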
C. Image Fusion

Substituting A_i and t̃_i(x) into (3), we obtain the different dehazing results:

J_i(x) = (I(x) − A_i) / (max(t̃_i(x), t̃_0))^{δ_t} + A_i,  i ∈ {1,2,3},   (8)

where J_i(x) is the dehazed result for each A_i and t̃_i(x), δ_t is an adjustable parameter in the range [0,1], and t̃_0 is the lower bound of the transmission t_b(x). To achieve better visual effects, the new multi-scale fusion method is employed to improve the image quality:

J̃(x) = F_M(J_i(x)),   (9)
where J̃(x) is the final haze-free image obtained by our new fusion method, and F_M denotes the new multi-scale fusion method in the gradient domain, which we describe in more detail below. This method operates in the YCbCr color space. The luminance channel (Y) carries the image brightness information, and it is in this channel that variations and details are most visible, since the human visual system is more sensitive to luminance (Y) than to chrominance (Cb, Cr). This has two main consequences for the proposed fusion method. On the one hand, the proposed luminance fusion is performed in the gradient domain. We first compute the gradients of the luminance channels of the three images (∇Y_1, ∇Y_2, ∇Y_3). The criterion of the proposed luminance fusion is defined as

∇Y_F = max_{i∈[1,2,3]} (∇Y_i),   (10)

where ∇Y_F is the fused luminance gradient and ∇Y_i = [∇Y_i^x, ∇Y_i^y] denotes the gradient of the luminance channel of the i-th input image, with ∇Y_i^x and ∇Y_i^y the x and y gradient components. We then reconstruct the fused luminance with the gradient reconstruction technique of Paul et al. [24]. After reconstruction from the gradient domain, some pixels may have intensity values outside the standard range of the luminance component. This is because the fused gradient is obtained by merging multiple image gradients; large differences between neighboring gradient values may lead to a reconstructed image with a high dynamic range of pixel intensities. A linear mapping of the pixel intensities of the reconstructed luminance channel is therefore applied so that the resulting intensities lie within the required range [24]. The final fused luminance image is obtained by

Y_F′ = (Y_F − Y_min) / (Y_max − Y_min) × a + b,   (11)

where Y_max and Y_min are the maximum and minimum intensity values in Y_F, and Y_F′ is the result of the final luminance fusion. For the luminance component of a color image, a = 216 and b = 19.

On the other hand, the proposed chrominance (Cb, Cr) fusion is performed with a new multi-scale method. In the chrominance channels, the criterion of the proposed fusion is

Cb_F = Σ_{i=1}^{3} w_i^{Cb} · Cb_i,  where w_i^{Cb} = Cb_i / Σ_{j=1}^{3} Cb_j,
Cr_F = Σ_{i=1}^{3} w_i^{Cr} · Cr_i,  where w_i^{Cr} = Cr_i / Σ_{j=1}^{3} Cr_j,   (12)

where Cb_i and Cr_i are the three input chrominance images, w_i^{Cb} and w_i^{Cr} the corresponding weights, and Cb_F and Cr_F the results of the final chrominance fusion. From the three fused channels (Y_F′, Cb_F and Cr_F), the final haze-free image is obtained with the inverse color-space transformation.
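A self-contained sketch of the fusion stage (equations (10)–(12)) follows. The inputs are assumed to be three dehazed images already converted to 8-bit-scale YCbCr floats; the gradient-domain reconstruction of Paul et al. [24] is replaced by a simple per-pixel stand-in that keeps the luminance of the image with the largest gradient magnitude, which mimics the max-gradient criterion of (10) without the Poisson solve.

```python
import numpy as np

def fuse_ycbcr(Js_ycbcr, a=216.0, b=19.0):
    """Fuse three HxWx3 YCbCr float images (8-bit scale).
    Luminance: pick, per pixel, the Y of the image with the largest
    gradient magnitude (stand-in for eq. (10) + the solver of [24]),
    then remap linearly as in eq. (11). Chrominance: self-normalized
    weighted average as in eq. (12)."""
    Y = np.stack([J[..., 0] for J in Js_ycbcr])            # 3 x H x W
    gy, gx = np.gradient(Y, axis=(1, 2))
    mag = np.hypot(gx, gy)
    pick = mag.argmax(axis=0)                              # eq. (10) criterion
    Yf = np.take_along_axis(Y, pick[None], axis=0)[0]
    Yf = (Yf - Yf.min()) / (Yf.max() - Yf.min() + 1e-6) * a + b   # eq. (11)
    out = np.empty_like(Js_ycbcr[0])
    out[..., 0] = Yf
    for ch in (1, 2):                                      # Cb, Cr: eq. (12)
        C = np.stack([J[..., ch] for J in Js_ycbcr])
        w = C / (C.sum(axis=0, keepdims=True) + 1e-6)      # w_i = C_i / sum C_j
        out[..., ch] = (w * C).sum(axis=0)
    return out
```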
IV. EXPERIMENTAL RESULTS AND ANALYSIS

To evaluate our method, we compare its performance to that of state-of-the-art methods: He et al. [7], Meng et al. [10], Sulami et al. [13], Chen et al. [14], Cai et al. [15], and Berman et al. [16]. Our dataset consists of 500 fog images of different sizes and backgrounds, containing both natural and synthetic images. For quantitative evaluation of the restoration performance, three objective indicators are employed [25-26]: the rate of new visible edges e, the mean ratio r̄ of the gradients at visible edges, and the percentage Σ of pixels that become completely black or completely white after restoration. The parameters of the proposed method are initialized as t̃_0 = 0.001 and δ_t = 0.85; in our experiments, the typical parameter values ω = 0.95 and t_0 = 0.1 are used. The ultimate goal of dehazing is to restore the characteristics of the original image. All the compared dehazing methods achieve good results, but they find it difficult to obtain the best visual effects. To verify the effectiveness of the proposed method, we run it on various natural hazy images with large white or gray regions.
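For reference, a loose approximation of the three blind indicators of [25] can be sketched as follows; the visibility-level edge definition of [25] is replaced here by a plain Sobel-gradient threshold (`thresh`), which is our simplification, so the absolute numbers will differ from those reported in Figs. 3–5.

```python
import numpy as np
from scipy.ndimage import sobel

def blind_assessment(I_hazy, I_restored, thresh=0.1):
    """Approximate blind indicators of [25] on grayscale floats in [0, 1]:
    e     - rate of edges visible after but not before restoration,
    r_bar - geometric-mean ratio of gradient magnitudes on visible edges,
    sigma - fraction of pixels that become fully black or fully white."""
    g0 = np.hypot(sobel(I_hazy, 0), sobel(I_hazy, 1))
    g1 = np.hypot(sobel(I_restored, 0), sobel(I_restored, 1))
    e0, e1 = g0 > thresh, g1 > thresh                 # crude "visible edge" maps
    e = (e1.sum() - e0.sum()) / max(e0.sum(), 1)
    r_bar = (np.exp(np.log((g1[e1] + 1e-6) / (g0[e1] + 1e-6)).mean())
             if e1.any() else 1.0)
    sigma = np.mean((I_restored <= 0.0) | (I_restored >= 1.0))
    return e, r_bar, sigma
```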
Figure 2. Qualitative comparison with different dehazing methods on natural environment images (columns: Pumpkins, Cones, Train, Florence). (a) The hazy images. (b) He et al.'s results [7]. (c) Meng et al.'s results [10]. (d) Sulami et al.'s results [13]. (e) Chen et al.'s results [14]. (f) Cai et al.'s results [15]. (g) Berman et al.'s results [16]. (h) Our results.
Fig. 2 shows a qualitative comparison with six state-of-the-art dehazing methods [7], [10], [13], [14], [15], [16] on challenging natural environment images (named Pumpkins, Cones, Train, and Florence). Fig. 2(a) shows the hazy images. Fig. 2(b-g) depicts the results of He et al. [7], Meng et al. [10], Sulami et al. [13], Chen et al. [14], Cai et al. [15], and Berman et al. [16], respectively, and the results of the proposed method are given in Fig. 2(h). For these natural environment images, all six methods can remove the haze. As shown in Fig. 2(b), the results of He et al. remove most of the haze but suffer significantly from over-enhancement (for instance, the sky regions of Pumpkins and Cones exhibit halo artifacts). This is because He et al.'s method has an inherent tendency to overestimate the transmission. The results of Meng et al. and Sulami et al. have a similar problem, as these methods also tend to over-enhance the local contrast of the image. As we can observe in Fig. 2(c) and Fig. 2(d), the restored images are oversaturated and distorted (in the sky regions of Florence, the color of the clouds is changed to azure and blood red, respectively), and the sky regions are dimmer than they should be (see the object in Train). Chen et al.'s results have a better visual effect, with no halo artifacts in the sky regions. Nevertheless, the dense haze in the distance is not effectively removed in Fig. 2(e) (see the objects in Cones): although Chen et al.'s method suppresses visual artifacts by gradient residual minimization, it does not address the ambiguity between the image color and the haze. In terms of image visibility, Cai et al.'s results are close to those of Chen et al., as displayed in Fig. 2(f). The method of Cai et al. performs well but shows some limitations in visibility (see the building in Cones), mainly because it is a trainable end-to-end system that relies on a limited set of hazy images to estimate the transmission. In terms of brightness, the results of Berman et al. are visually much better (see Fig. 2(g)): the dense haze in the distance is well removed, and there are no halo artifacts. Nevertheless, over-enhancement appears in regions with white objects, such as the sky regions in Florence. The reason is that, in their new haze model based on a non-local prior, the global atmospheric light is estimated with He et al.'s method, which is not effective when the scene brightness is similar to the global atmospheric light, and the estimated transmission is thus not reliable enough in some cases.
Figure 3. The value of e with the different dehazing methods.

Figure 4. The value of r̄ with the different dehazing methods.

Figure 5. The value of Σ with the different dehazing methods.
Compared with the results of the six methods, our results are free from oversaturation. As we can see in Fig. 2(h), the four images achieve the best visual effects: the sky regions are clear and the details are enhanced moderately (in the sky regions of Florence, the clouds are white, the sky is blue, and the river is also clearly visible). For a quantitative evaluation of the restoration performance, we use the three objective indicators to assess the results in Fig. 2 [25-26]. Higher values of e and r̄, or a lower value of Σ, indicate that a method achieves better performance. The results are plotted in Fig. 3, Fig. 4 and Fig. 5, where the values of the three objective indicators (e, r̄ and Σ) are shown for the seven methods including the proposed one. As can be seen in Fig. 3, our results produce the highest values among the seven methods on three images (Pumpkins, Cones and Train), while Cai et al.'s results perform worst on three images (Pumpkins, Cones and Florence), followed by He et al.'s results, which are similar to those of Meng et al.; Chen et al.'s results outperform Sulami et al.'s only on Cones. In Fig. 4, Berman et al.'s results are similar to ours, which produce the highest values overall for the mean ratio of the gradients at visible edges (r̄). Nevertheless, Berman et al.'s results also have the highest values overall for the percentage of saturated pixels (Σ); their high r̄ values are mainly due to the over-enhancement mentioned earlier. Meng et al.'s results outperform those of He et al. overall on the four images, followed by Sulami et al.'s results. The results of Chen et al. and Cai et al. perform consistently on the four images and rank second from the bottom in total performance. As for the percentage of saturated pixels (Σ) in Fig. 5, the seven methods including the proposed one perform consistently on Pumpkins and Cones; on the remaining two images (Train and Florence), the other six methods have higher values, while our results remain at 0.
V. CONCLUSION
In this work, we proposed a single fog image restoration method based on multi-scale image fusion. Guided by a histogram analysis of fog-degraded images, the global atmospheric light is estimated from the pixels in the sky regions, which effectively solves the brightness distortion problem. For the transmission optimization, we designed a new Kirsch operator with adaptive boundary constraint to reduce the halo artifacts. Finally, a new multi-scale image fusion method was proposed to blend the three dehazed images obtained with the different global atmospheric lights and transmissions, improving the visual effects of fog image restoration. In contrast to previous methods, ours is the first to construct three images from a single observed hazy image to solve the haze problem, which makes the method more efficient at gathering image information for improving the visual effects. As shown in the experiments, our method performs favourably against state-of-the-art methods in terms of image visibility, color distortion, and computational cost.
ACKNOWLEDGMENT

This work was supported by the National Key Research and Development Program of China (Grant No. 2016YFC11000502).

REFERENCES

[1] S. K. Nayar and S. G. Narasimhan, "Vision in bad weather," in Proc. IEEE Int. Conf. Comput. Vis., Sept. 1999, pp. 820–827, doi: 10.1109/ICCV.1999.790306.
[2] G. D. Moro and L. Halounova, "Haze removal for high-resolution satellite data: a case study," Int. J. Remote Sens., vol. 28, May 2007, pp. 2187–2205, doi: 10.1080/01431160600928559.
[3] S. G. Narasimhan and S. K. Nayar, "Contrast restoration of weather degraded images," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, June 2003, pp. 713–724, doi: 10.1109/TPAMI.2003.1201821.
[4] R. Fattal, "Single image dehazing," ACM Trans. Graph., vol. 27, Aug. 2008, pp. 1–9, doi: 10.1145/1360612.1360671.
[5] R. T. Tan, "Visibility in bad weather from a single image," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Aug. 2008, pp. 1–8, doi: 10.1109/CVPR.2008.45876431.
[6] J.-P. Tarel and N. Hautiere, "Fast visibility restoration from a single color or gray level image," in Proc. IEEE Int. Conf. Comput. Vis., July 2009, pp. 2201–2208, doi: 10.1109/ICCV.2009.5459251.
[7] K. He, J. Sun, and X. Tang, "Single image haze removal using dark channel prior," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Aug. 2009, pp. 2341–2353, doi: 10.1109/CVPR.2009.5206515.
[8] K. He, J. Sun, and X. Tang, "Guided image filtering," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, June 2013, pp. 1397–1409, doi: 10.1109/TPAMI.2012.213.
[9] Z. Zhang and R. S. Blum, "A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application," Proc. IEEE, vol. 87, Aug. 1999, pp. 1315–1326, doi: 10.1109/5.775414.
[10] G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan, "Efficient image dehazing with boundary constraint and contextual regularization," in Proc. IEEE Int. Conf. Comput. Vis., Dec. 2013, pp. 617–624, doi: 10.1109/ICCV.2013.82.
[11] K. Nishino, L. Kratz, and S. Lombardi, "Bayesian defogging," Int. J. Comput. Vis., vol. 98, July 2012, pp. 263–278, doi: 10.1007/s11263-011-0508-1.
[12] J. H. Kim, W. D. Jang, J. Y. Sim, and C. S. Kim, "Optimized contrast enhancement for real-time image and video dehazing," J. Vis. Commun. Image Represent., vol. 24, Apr. 2013, pp. 410–425, doi: 10.1016/j.jvcir.2013.02.004.
[13] M. Sulami, I. Glatzer, R. Fattal, and M. Werman, "Automatic recovery of the atmospheric light in hazy images," in Proc. IEEE Int. Conf. Comput. Photogr., May 2014, pp. 1–11, doi: 10.1109/ICCPHOT.2014.6831817.
[14] C. Chen, M. N. Do, and J. Wang, "Robust image and video dehazing with visual artifact suppression via gradient residual minimization," in Proc. Eur. Conf. Comput. Vis., Oct. 2016, pp. 576–591, doi: 10.1007/978-3-319-46475-6_36.
[15] B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, "DehazeNet: an end-to-end system for single image haze removal," IEEE Trans. Image Process., vol. 25, Nov. 2016, pp. 5187–5198, doi: 10.1109/TIP.2016.2598681.
[16] D. Berman, T. Treibitz, and S. Avidan, "Non-local image dehazing," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., June 2016, pp. 1674–1682, doi: 10.1109/CVPR.2016.185.
[17] L. Schaul, C. Fredembach, and S. Süsstrunk, "Color image dehazing using the near-infrared," in Proc. IEEE Int. Conf. Image Process., Nov. 2009, pp. 1629–1632, doi: 10.1109/ICIP.2009.5413700.
[18] C. O. Ancuti, C. Ancuti, and P. Bekaert, "Effective single image dehazing by fusion," in Proc. IEEE Int. Conf. Image Process., Sept. 2010, pp. 3541–3544, doi: 10.1109/ICIP.2010.5651263.
[19] C. O. Ancuti and C. Ancuti, "Single image dehazing by multi-scale fusion," IEEE Trans. Image Process., vol. 22, Aug. 2013, pp. 3271–3282, doi: 10.1109/TIP.2013.2262284.
[20] F. Guo, J. Tang, and Z. Cai, "Fusion strategy for single image dehazing," Int. J. Digit. Content Technol. Appl., vol. 7, Jan. 2013, pp. 19–28, doi: 10.4156/jdcta.vol7.issue1.3.
[21] C. Ancuti, C. O. Ancuti, C. De Vleeschouwer, and A. C. Bovik, "Night-time dehazing by fusion," in Proc. IEEE Int. Conf. Image Process., Sept. 2016, pp. 2256–2260, doi: 10.1109/ICIP.2016.7532760.
[22] W. Wang, W. Li, Q. Guan, and M. Qi, "Multiscale single image dehazing based on adaptive wavelet fusion," Math. Probl. Eng., vol. 2015, Oct. 2015, pp. 1–14, doi: 10.1155/2015/131082.
[23] A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmio, "Fusion-based variational image dehazing," IEEE Signal Process. Lett., vol. 24, Feb. 2017, pp. 151–155, doi: 10.1109/LSP.2016.2643168.
[24] S. Paul, I. S. Sevcenco, and P. Agathoklis, "Multi-exposure and multi-focus image fusion in gradient domain," J. Circuits Syst. Comput., vol. 25, Oct. 2016, p. 1650123, doi: 10.1142/S0218126616501231.
[25] N. Hautière, J. P. Tarel, D. Aubert, and É. Dumont, "Blind contrast enhancement assessment by gradient ratioing at visible edges," Image Anal. Stereol., vol. 27, June 2008, pp. 87–95, doi: 10.5566/ias.v27.p87-95.
[26] L. K. Choi, J. You, and A. C. Bovik, "Referenceless prediction of perceptual fog density and perceptual image defogging," IEEE Trans. Image Process., vol. 24, Nov. 2015, pp. 3888–3901, doi: 10.1109/TIP.2015.2456502.