Research Article
Vol. 56, No. 4 / February 1, 2017 / Applied Optics

Real-time image haze removal using an aperture-division polarimetric camera

Wenfei Zhang,1,2,3 Jian Liang,1,3 Liyong Ren,1,* Haijuan Ju,1,3 Enshi Qu,1 Zhaofeng Bai,1 Yao Tang,4 and Zhaoxin Wu2

1Research Department of Information Photonics, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China
2Department of Electronics Science and Technology, School of Electronic & Information Engineering, Xi'an Jiaotong University, Xi'an 710049, China
3University of Chinese Academy of Sciences, Beijing 100049, China
4Xi'an University of Posts & Telecommunication, Xi'an 710121, China
*Corresponding author: [email protected]

Received 2 November 2016; revised 13 December 2016; accepted 27 December 2016; posted 3 January 2017 (Doc. ID 279988); published 26 January 2017
Polarimetric dehazing methods have been proven effective in enhancing the quality of images acquired in turbid media. We report a new full-Stokes polarimetric camera based on the division-of-aperture structure. We design an automatic polarimetric dehazing algorithm and load it into the field-programmable gate array (FPGA) modules of our camera, achieving real-time image haze removal at an output rate of 25 fps. We demonstrate that the image quality can be significantly improved together with good color restoration. This technique might be attractive for a range of real-time outdoor imaging applications, such as navigation, monitoring, and remote sensing. © 2017 Optical Society of America

OCIS codes: (110.0113) Imaging through turbid media; (100.2980) Image enhancement; (110.5405) Polarimetric imaging.

https://doi.org/10.1364/AO.56.000942
1. INTRODUCTION

Many outdoor applications, such as navigation, monitoring, and remote sensing, rely considerably on the quality of the images that visual systems obtain, including their visibility and detailed information. However, such images are sometimes degraded by a variety of turbid media, e.g., rain, haze, and fog. In turbid media, vast numbers of particles are suspended in the atmosphere. These particles not only scatter and absorb the radiance reflected from the objects (known as the direct transmission), but also scatter undesired atmospheric radiance (known as the airlight) toward the detector [1,2]. Consequently, images obtained in such conditions usually show poor visibility and low contrast, which is fatal for automatic outdoor visual systems. There is therefore demand for techniques that can improve the quality of such images, especially dehazing algorithms and equipment suited to real-time applications.

With the development of computer vision and optical technology, dehazing techniques have attracted much attention and have been widely developed [3–23]. These techniques can be roughly classified into two categories: digital image dehazing techniques and optical image dehazing techniques. Digital image dehazing techniques remove the airlight effect and highlight the objects by
analyzing the haze properties based on image degradation models [3–20]. Although digital image dehazing techniques have advantages in preserving image color and detailed information, most of them are unfortunately not suitable for real-time dehazing, owing to two kinds of limitations. One is low computational efficiency, as in Fattal's method [3], Tan's method [4], and He's method [5]; the other is the difficulty of acquiring the required images simultaneously, as in Narasimhan's method [6] and Liang's method [7]. Besides these methods, some simple digital algorithms designed specifically for real-time dehazing have been proposed for enhancing the quality of video sequences [8–11]; these have been reviewed in [20]. On the other hand, optical image dehazing techniques are mainly based on infrared imaging, exploiting the fact that near-infrared light propagates farther than visible light in turbid media [12,13]. An infrared detector can thus obtain better images than a visible-light detector under the same conditions, but infrared images are typically monochrome and of low resolution, and infrared cameras usually cost much more than ordinary visible-light cameras.

In recent years, polarimetric dehazing methods have been proven effective in image quality enhancement [14–22]. Their principle is to first estimate the airlight radiance and then recover the object radiance based on the classical physical degradation model. To realize real-time polarimetric dehazing, cameras that can simultaneously obtain images with different polarization orientations, so-called polarimetric cameras, are essential. Various polarimetric cameras intended for hazy-image quality enhancement have therefore been developed with diverse structures [23–25]. Mudge and Virgen reported dehazing results based on a near-infrared polarimetric camera [23], in which the image quality is obviously improved.

In this paper, we first describe our fabricated polarimetric camera based on the division-of-aperture structure. We then propose an optimized automatic polarimetric dehazing method based on our previously proposed polarimetric dehazing method [22], which can be loaded into field-programmable gate array (FPGA) modules to realize a real-time dehazing process. The experimental results demonstrate that good real-time dehazing performance can be achieved.

2. PRINCIPLES AND CONFIGURATION

A. Principles
The classical physical degradation model of images acquired in turbid media is mathematically expressed as [1,2]

$$I(x,y) = L(x,y)\,t(x,y) + A_\infty\,[1 - t(x,y)], \tag{1}$$
where $I$ is the total radiance received by the camera, and $L$ and $A_\infty$ are the light radiance reflected from the objects without any attenuation and the global atmospheric light radiance, respectively. It should be pointed out that $A_\infty$ is a global constant, which can be estimated from the brightest pixels in the sky region. $t$ is the medium transmittance, which denotes the proportion of light reaching the camera. The airlight radiance $A$, i.e., the atmospheric light scattered by suspended particles, is defined as

$$A(x,y) = A_\infty\,[1 - t(x,y)]. \tag{2}$$
Combining Eqs. (1) and (2), the dehazed image can be represented as the object radiance $L$, calculated by

$$L(x,y) = \frac{I(x,y) - A(x,y)}{1 - A(x,y)/A_\infty}. \tag{3}$$
As Eq. (3) shows, the key step of the dehazing process is to estimate $A$ accurately. Fortunately, the airlight is partially linearly polarized, so it can be precisely estimated by the polarimetric imaging method [14]. The polarimetric dehazing method estimates the airlight radiance from multiple images with different polarization orientations. For simplicity, the directions of 0° and 90° are defined as the x- and y-axis, respectively. First, three images are captured with polarization orientations of 0°, 45°, and 90°, with intensities $I_0(x,y)$, $I_{45}(x,y)$, and $I_{90}(x,y)$, respectively. The linear Stokes parameters can then be written as [26]

$$S_0(x,y) = I_0(x,y) + I_{90}(x,y),$$
$$S_1(x,y) = I_0(x,y) - I_{90}(x,y),$$
$$S_2(x,y) = 2\,I_{45}(x,y) - S_0(x,y), \tag{4}$$
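Eq. (4) is a pixelwise linear map, so it vectorizes over whole images. A minimal sketch (function name is our own):

```python
import numpy as np

def linear_stokes(I0, I45, I90):
    """Linear Stokes parameters from the 0°, 45°, and 90° images, Eq. (4)."""
    S0 = I0 + I90        # total radiance
    S1 = I0 - I90        # horizontal minus vertical component
    S2 = 2.0 * I45 - S0  # +45° minus -45° component
    return S0, S1, S2

# Fully horizontally polarized light: all intensity passes the 0° polarizer,
# half passes the 45° polarizer, none passes the 90° polarizer.
S0, S1, S2 = linear_stokes(1.0, 0.5, 0.0)
```

The scalar example gives $S_0 = 1$, $S_1 = 1$, $S_2 = 0$, i.e., fully polarized light along the x-axis, as expected.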
where $S_0$ represents the total light radiance, $S_1$ represents the intensity difference between the horizontal and vertical linearly polarized components, and $S_2$ represents the intensity difference between the 45° and −45° linearly polarized components with respect to the x-axis. According to Eq. (4), the degree of polarization (DOP) $p$ and the angle of polarization (AOP) $\theta$ of the light are given by

$$p(x,y) = \frac{\sqrt{S_1^2(x,y) + S_2^2(x,y)}}{S_0(x,y)}, \tag{5}$$

$$\theta(x,y) = \frac{1}{2}\arctan\frac{S_2(x,y)}{S_1(x,y)}. \tag{6}$$
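Eqs. (5) and (6) also translate directly; using `arctan2` in place of a plain arctangent is our choice, to keep the AOP quadrant unambiguous when $S_1$ changes sign:

```python
import numpy as np

def dop_aop(S0, S1, S2, eps=1e-12):
    """Degree of polarization, Eq. (5), and angle of polarization, Eq. (6)."""
    p = np.sqrt(S1**2 + S2**2) / np.maximum(S0, eps)
    theta = 0.5 * np.arctan2(S2, S1)  # arctan2 resolves the quadrant
    return p, theta

# Fully polarized light at +45°: S0 = 1, S1 = 0, S2 = 1.
p, theta = dop_aop(1.0, 0.0, 1.0)
```

The example returns $p = 1$ and $\theta = \pi/4$, i.e., complete polarization at 45°.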
The DOP and AOP of the airlight (denoted $p_A$ and $\theta_A$, respectively) can be estimated as the mean values over the pixels without object radiance. These pixels can be identified by the sky-region identification algorithm we reported recently [22]. After obtaining $p_A$ and $\theta_A$, $A$ can be calculated as follows. We use $A^p$ to denote the polarized radiance of the airlight; its components in the x and y directions can then be expressed as

$$A^p_x(x,y) = A^p(x,y)\,\cos^2\theta_A,$$
$$A^p_y(x,y) = A^p(x,y)\,\sin^2\theta_A. \tag{7}$$

Meanwhile, $A^p_x$ and $A^p_y$ can also be obtained from

$$A^p_x(x,y) = I_0(x,y) - S_0(x,y)\,[1 - p(x,y)]/2,$$
$$A^p_y(x,y) = I_{90}(x,y) - S_0(x,y)\,[1 - p(x,y)]/2, \tag{8}$$

where $p(x,y)$ is the DOP of the total radiance at each pixel over the whole image. Based on Eqs. (7) and (8), $A^p$ can be derived as

$$A^p(x,y) = \frac{I_0(x,y) - S_0(x,y)\,[1 - p(x,y)]/2}{\cos^2\theta_A} = \frac{I_{90}(x,y) - S_0(x,y)\,[1 - p(x,y)]/2}{\sin^2\theta_A}. \tag{9}$$

Then $A$ can be precisely estimated by

$$A(x,y) = A^p(x,y)/p_A. \tag{10}$$
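Eqs. (4)–(10), followed by the inversion of Eq. (3), form a complete single-channel pipeline. The sketch below is our own illustration, with three stated simplifications: the sky mask is taken as given (its detection is the separate algorithm of [22]); $A_\infty$ is taken from the brightest sky pixels, as noted after Eq. (1); and we sum the two components of Eq. (8), which eliminates $\theta_A$ via $\cos^2\theta_A + \sin^2\theta_A = 1$, instead of dividing by possibly small $\cos^2$ or $\sin^2$ terms as in Eq. (9):

```python
import numpy as np

def dehaze_channel(I0, I45, I90, sky_mask, eps=1e-6):
    """Single-channel polarimetric dehazing: Eqs. (4)-(10), then Eq. (3).
    sky_mask marks pixels without object radiance; for color images this
    function is applied to the R, G, and B channels separately."""
    S0 = I0 + I90                                     # Eq. (4)
    S1 = I0 - I90
    S2 = 2.0 * I45 - S0
    p = np.sqrt(S1**2 + S2**2) / np.maximum(S0, eps)  # Eq. (5)
    p_A = p[sky_mask].mean()                          # airlight DOP
    # Polarized airlight radiance: summing the two components of Eq. (8)
    # uses cos^2 + sin^2 = 1 in Eq. (7), so theta_A drops out.
    Ap = (I0 - S0 * (1 - p) / 2) + (I90 - S0 * (1 - p) / 2)
    A = Ap / max(p_A, eps)                            # Eq. (10)
    A_inf = S0[sky_mask].max()                        # brightest sky pixels
    t = np.clip(1.0 - A / A_inf, eps, 1.0)            # Eq. (2)
    return (S0 - A) / t                               # Eq. (3)

# Toy scene: top row is sky (t = 0), bottom row has L = 0.8 and t = 0.6,
# with airlight DOP 0.5 oriented along x (values forward-modeled by hand).
I0 = np.array([[0.75, 0.75], [0.54, 0.54]])
I45 = np.array([[0.50, 0.50], [0.44, 0.44]])
I90 = np.array([[0.25, 0.25], [0.34, 0.34]])
sky = np.array([[True, True], [False, False]])
L = dehaze_channel(I0, I45, I90, sky)
```

On this hand-built scene the pipeline recovers the true object radiance of 0.8 on the bottom (non-sky) row.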
Finally, the dehazed image $L$ can be obtained according to Eq. (3). It should be pointed out that, for color dehazing, we first execute the dehazing process on the R, G, and B channels separately, and then construct the color image from the dehazed data of the three channels.

B. Polarimetric Camera
A polarimetric camera is a device that can simultaneously obtain four different linear polarization images (a linear-Stokes polarimetric camera), or three linear polarization images and one circular polarization image (a full-Stokes polarimetric camera), with optimally designed structures [27]. Combined with the corresponding algorithms, a polarimetric camera can realize different functions, such as measuring the Stokes parameters of objects (DOP and AOP), measuring the Mueller matrix of objects, performing the polarimetric dehazing process on hazy images, and so on. The polarimetric camera we fabricated can achieve all of these functions.

Our fabricated polarimetric camera is based on the division-of-aperture structure. A polarimetric camera with such an optical structure has the advantages of high integration and fixed misregistration for a specific camera. The incident light reflected from the scene projects onto the focusing lens, and the beam is then split into four optical channels. These channels are modulated by different polarizers: three linear polarizers (0°, 45°, and 90°) and one circular polarizer. Eventually, the beams from the four optical channels project onto four designated regions of the same detector. The distribution of the polarization states on the detector is shown in Fig. 1(a). The four optical channels are identical except for the polarizers, ensuring the same optical path in every channel. The detector is a complementary metal-oxide semiconductor (CMOS) device with 2048 × 2048 pixels. Using one detector to receive the four polarization images avoids the response differences that arise between separate detectors, e.g., quantum noise. Because the four images share the same detector, each image has 1024 × 1024 pixels. The camera is precisely calibrated to correct the intensity mismatch among the three linear polarization images and the angular errors of the polarizers. The polarimetric dehazing algorithm proposed above is loaded into the FPGA modules to realize a real-time haze removal process; the camera can process images at a rate of 25 fps. The assembled polarimetric camera is shown in Fig. 1(b). Although the circular polarization image is useful in many applications, the circular polarization of the airlight is ignored in our dehazing process, so only the three linear polarization images are used.

Fig. 1. (a) Schematic of the polarization-state distribution on the detector. The bottom-left region has the polarization orientation of 0°, the bottom right 90°, the upper left 45°, and the upper right the circular polarization state. (b) Photo of our full-Stokes polarimetric camera.
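Extracting the four sub-images from one detector frame is pure array slicing. A minimal sketch following the layout of Fig. 1(a), with row 0 taken as the top of the image (an assumption about the readout orientation):

```python
import numpy as np

def split_quadrants(frame):
    """Split one detector frame (e.g., 2048 x 2048) into the four sub-images
    of Fig. 1(a): upper left 45°, upper right circular, lower left 0°,
    lower right 90°. Each sub-image is half the frame size per dimension."""
    h, w = frame.shape[0] // 2, frame.shape[1] // 2
    return {
        "I45":  frame[:h, :w],  # upper left
        "circ": frame[:h, w:],  # upper right
        "I0":   frame[h:, :w],  # lower left
        "I90":  frame[h:, w:],  # lower right
    }

# Tiny 4x4 stand-in for the 2048x2048 frame.
parts = split_quadrants(np.arange(16.0).reshape(4, 4))
```

NumPy slices are views, not copies, so this split costs no extra memory per frame.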
3. EXPERIMENTS AND DISCUSSION

Some dehazing results are given to demonstrate the real-time dehazing effectiveness of our polarimetric camera. The original video was captured on the rooftop of our laboratory building by slowly moving the camera manually. The main scenes of the video are buildings, making it suitable for demonstrating the improvement in video quality after haze removal. First, we use one frame to illustrate the whole dehazing process. The original image is shown in Fig. 2.

Fig. 2. Original image obtained directly from the detector without any additional processing. The whole image has 2048 × 2048 pixels.

Obviously, the four sub-images are somewhat mismatched. This is mainly imputed to the mechanical mismatch among the different channels, a defect that is unavoidable in aperture-division polarimetric cameras [27]. Fortunately, the images only need to be registered once, through a single registration procedure, since the misregistration is inherently fixed after the camera is packaged. Many methods have been developed to register images accurately, such as the scale-invariant feature transform (SIFT) [28] and speeded-up robust features (SURF) [29], but their computational efficiency is insufficient for real-time dehazing applications. Considering that the misregistration of our camera mainly arises from the mechanical structures, shift and rotation effects dominate. Therefore, a simple registration method, namely the rigid transformation, is employed in the camera, and the results demonstrate that this method achieves the required registration accuracy. The registered images of the original image are shown in Fig. 3.

The next step is to process the images with the polarimetric dehazing algorithm introduced in Section 2.A. The dehazing result is shown in Fig. 4. Compared with Fig. 3, the quality of the dehazed image is improved significantly. The main building in the left region of the image is much clearer, and its concrete structure is easily seen. Furthermore, the building just behind it on the right side, hardly visible in Fig. 3, also becomes much clearer. In particular,
Fig. 3. Registered images of the original image in Fig. 2 using the rigid transformation.
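The rigid-transformation registration can be sketched as an inverse-mapped warp. Everything below is our own illustration: the rotation angle and shift would come from a one-time calibration of the packaged camera, and nearest-neighbor sampling stands in for whatever interpolation the FPGA implementation actually uses:

```python
import numpy as np

def rigid_warp(img, angle, shift):
    """Apply a rigid transform (rotation about the image center plus a
    (row, col) shift) by inverse nearest-neighbor mapping, so that every
    output pixel is filled from the source image."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Inverse map: undo the shift, then rotate back about the center.
    ys, xs = ys - shift[0] - cy, xs - shift[1] - cx
    c, s = np.cos(angle), np.sin(angle)
    src_y = np.clip(np.round(c * ys - s * xs + cy), 0, h - 1).astype(int)
    src_x = np.clip(np.round(s * ys + c * xs + cx), 0, w - 1).astype(int)
    return img[src_y, src_x]

img = np.arange(9.0).reshape(3, 3)
```

With `angle = 0` and `shift = (0, 0)` the warp is the identity; a shift of one row moves every pixel down by one. Because the transform is fixed for a packaged camera, the two index maps `src_y` and `src_x` can be precomputed once and reused for every frame.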
Fig. 4. Dehazing result of Fig. 3 using the real-time dehazing polarimetric camera.
Fig. 5. Histograms. (a) and (b) are the histograms of the hazy image I 0 in Fig. 3 and the dehazed image in Fig. 4, respectively.
the characters on the top of this building can be distinguished without any auxiliary equipment.

We use histograms to visualize the quality improvement of the dehazed image. Figures 5(a) and 5(b) show the histograms of the hazy image $I_0$ in Fig. 3 and of the dehazed image in Fig. 4, respectively. From Fig. 5(a), the gray-level distribution is concentrated in a narrow range, implying that most of the detailed information is submerged by the airlight; it can also be roughly seen that the airlight is mainly located around the gray level of 150. In contrast, the histogram in Fig. 5(b) is much broader than that in Fig. 5(a), meaning that the dehazed image contains more information than the hazy image: the gray levels are distributed over the whole range from 0 to 255. Note that the obvious reduction in the number of pixels around the gray level of 150 means that the airlight is effectively removed. It should be pointed out that the intensity is inhomogeneous at the margins of the dehazed image; this is caused by the optical halo of the camera, an intrinsic defect of aperture-division polarimetric cameras [27].

Six frames are given in Fig. 6 to verify the consistency of the real-time dehazing process. The interval between displayed neighboring frames is 12 frames (equivalent to 0.5 s). The images in the left column are the hazy frames, and those in the right column are the corresponding dehazed frames. The image quality is evaluated using the contrast, calculated as

$$C(I) = \frac{\sqrt{\frac{1}{N}\sum_{x,y}\left[I(x,y) - \bar{I}\right]^2}}{\bar{I}}, \tag{11}$$
Fig. 6. Real-time dehazing results of different frames. The images of the left column are original hazy frames and those of the right column are the dehazed frames.
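The contrast of Eq. (11) is the ratio of the population standard deviation of the image to its mean intensity, which is a one-liner in NumPy (`ndarray.std` already computes the 1/N form of the variance); the improvement-ratio helper mirrors the percentage figures reported in Table 1:

```python
import numpy as np

def contrast(img):
    """Contrast of Eq. (11): population std of the image over its mean."""
    return img.std() / img.mean()

def improvement_percent(hazy, dehazed):
    """Relative contrast gain, in percent, as tabulated in Table 1."""
    c_hazy = contrast(hazy)
    return 100.0 * (contrast(dehazed) - c_hazy) / c_hazy
```

For example, the two-pixel image `[1, 3]` has mean 2 and standard deviation 1, hence a contrast of 0.5.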
where $N$ is the number of pixels in the image, $\bar{I}$ is the mean intensity of the image, and $I(x,y)$ is the intensity of pixel $(x, y)$. The evaluation results are shown in Table 1, together with the improvement ratio calculated from the contrast values. The quality of the hazy frames is improved significantly, and more information is recovered in the dehazed frames. Meanwhile, Table 1 shows that the improvement ratios of all frames are almost the same. This indicates that the dehazing capacity of the polarimetric dehazing algorithm does not fluctuate seriously during video capture, an advantage of extracting the polarization information in real time from the three linear polarization images obtained simultaneously by our camera.

Table 1. Image Quality of Fig. 6

Figure No.   Contrast (Hazy)   Contrast (Dehazed)   Improvement (%)
Fig. 6(a)    0.2270            0.5004               120.44
Fig. 6(b)    0.2309            0.5118               121.65
Fig. 6(c)    0.2455            0.5392               119.63
Fig. 6(d)    0.2298            0.5046               119.58
Fig. 6(e)    0.2417            0.5224               116.14
Fig. 6(f)    0.2618            0.5784               120.93

The images in Fig. 6 are almost monochrome, so we give another group of images to demonstrate the color restoration capacity of the polarimetric camera. Figure 7(a) is one of another series of original images acquired in a dense haze condition; the dehazed image output by the polarimetric camera is shown in Fig. 7(b). One can see that the dehazed image is much clearer and more pleasant to view. Meanwhile, the color of the buildings is recovered well, e.g., the red and yellow parts. Figures 8(a) and 8(b) give the color component distributions of the hazy image in Fig. 7(a) and of the dehazed image in Fig. 7(b), respectively. The distribution in Fig. 8(a) is concentrated in a small range, meaning that the image is fuzzy. In contrast, the distribution in Fig. 8(b) is extended, indicating that the image shows good contrast and ample color. We can therefore conclude that the polarimetric camera shows good universality in real-time dehazing of moving scenes.

Fig. 7. (a) The registered image of $I_0$ taken by the polarimetric camera; (b) the image dehazed using the real-time polarimetric camera.

Fig. 8. (a) and (b) are the distributions of the RGB color components in Figs. 7(a) and 7(b), respectively.

4. CONCLUSIONS

In this paper, we detail a polarimetric camera based on the division-of-aperture structure, which can obtain the full-Stokes parameters of a scene simultaneously. We also design an automatic polarimetric dehazing algorithm and load it into the FPGA modules, making the camera capable of real-time image haze removal. The dehazing results demonstrate that the image quality is improved significantly while the color information is well restored, and more details are visible after the dehazing process. Moreover, the polarimetric camera shows good universality for real-time dehazing while in motion. Such a real-time dehazing system will be useful in outdoor target detection and remote sensing.

Funding. National Natural Science Foundation of China (NSFC) (61505246, 61275149, 61535015).
REFERENCES

1. R. C. Henry, S. Mahadev, S. Urquijo, and D. Chitwood, "Color perception through atmospheric haze," J. Opt. Soc. Am. A 17, 831–835 (2000).
2. S. G. Narasimhan and S. K. Nayar, "Vision and the atmosphere," Int. J. Comput. Vis. 48, 233–254 (2002).
3. R. Fattal, "Single image dehazing," ACM Trans. Graph. 27, 1 (2008).
4. R. T. Tan, "Visibility in bad weather from a single image," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008).
5. K. M. He, J. Sun, and X. O. Tang, "Single image haze removal using dark channel prior," IEEE Trans. Pattern Anal. Mach. Intell. 33, 2341–2353 (2011).
6. S. G. Narasimhan and S. K. Nayar, "Contrast restoration of weather degraded images," IEEE Trans. Pattern Anal. Mach. Intell. 25, 713–724 (2003).
7. J. Liang, W. F. Zhang, L. Y. Ren, H. J. Ju, and E. S. Qu, "Polarimetric dehazing method for visibility improvement based on visible and infrared image fusion," Appl. Opt. 55, 8221–8226 (2016).
8. J. P. Tarel and N. Hautiere, "Fast visibility restoration from a single color or gray level image," in Proceedings of IEEE Conference on Computer Vision (ICCV) (IEEE, 2009), pp. 2201–2208.
9. J. H. Kim, W. D. Jang, J. Y. Sim, and C. S. Kim, "Optimized contrast enhancement for real-time image and video dehazing," J. Vis. Commun. Image R. 24, 410–425 (2013).
10. H. M. Lu, Y. J. Li, S. Nakashima, and S. Serikawa, "Single image dehazing through improved atmospheric light estimation," Multimedia Tools Appl. 75, 17081–17096 (2015).
11. S. Archa and A. Abdul, "A novel method for video dehazing by multi-scale fusion," Int. J. Sci. Eng. Technol. Res. 3, 4808–4813 (2014).
12. L. Schaul, C. Fredembach, and S. Susstrunk, "Color image dehazing using the near-infrared," in Proceedings of IEEE International Conference on Image Processing (ICIP) (IEEE, 2009), paper LCAV-CONF-2009-026.
13. C. Feng, S. Zhuo, X. Zhang, and L. Shen, "Near-infrared guided color image dehazing," in Proceedings of IEEE Conference on Image Processing (IEEE, 2013), pp. 2363–2367.
14. Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, "Polarization-based vision through haze," Appl. Opt. 42, 511–525 (2003).
15. S. Fang, X. S. Xia, H. Xing, and C. W. Chen, "Image dehazing using polarization effects of objects and airlight," Opt. Express 22, 19523–19537 (2014).
16. M. Boffety, H. F. Hu, and F. Goudail, "Contrast optimization in broadband passive polarimetric imaging," Opt. Lett. 39, 6759–6762 (2014).
17. B. J. Huang, T. G. Liu, H. F. Hu, J. H. Huang, and M. X. Yu, "Underwater image recovery considering polarization effects of objects," Opt. Express 24, 9826–9838 (2016).
18. J. Liang, L. Y. Ren, E. S. Qu, B. L. Hu, and Y. L. Wang, "Method for enhancing visibility of hazy images based on polarimetric imaging," Photon. Res. 2, 38–44 (2014).
19. J. Liang, L. Y. Ren, H. J. Ju, E. S. Qu, and Y. L. Wang, "Visibility enhancement of hazy images based on a universal polarimetric imaging method," J. Appl. Phys. 116, 173107 (2014).
20. Y. Xu, J. Wen, L. K. Fei, and Z. Zhang, "Review of video and image defogging algorithms and related studies on image restoration and enhancement," IEEE Access 4, 165–188 (2016).
21. J. Liang, L. Y. Ren, H. J. Ju, W. F. Zhang, and E. S. Qu, "Polarimetric dehazing method for dense haze removal based on distribution of angle of polarization," Opt. Express 23, 26146–26157 (2015).
22. W. F. Zhang, J. Liang, H. J. Ju, L. Y. Ren, E. S. Qu, and Z. X. Wu, "A robust haze-removal scheme in polarimetric dehazing imaging based on automatic identification of sky region," Opt. Laser Technol. 86, 145–151 (2016).
23. J. Mudge and M. Virgen, "Real-time polarimetric dehazing," Appl. Opt. 52, 1932–1938 (2013).
24. F. Goudail and M. Boffety, "Optimal configuration of static polarization imagers for target detection," J. Opt. Soc. Am. A 33, 9–16 (2016).
25. H. F. Hu, E. G. Caurel, G. Anna, and F. Goudail, "Simplified calibration procedure for Mueller polarimeter in transmission configuration," Opt. Lett. 39, 418–421 (2014).
26. D. Goldstein, Polarized Light, 3rd ed. (Taylor & Francis, 2011).
27. J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, "Review of passive imaging polarimetry for remote sensing applications," Appl. Opt. 45, 5453–5469 (2006).
28. D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vis. 60, 91–110 (2004).
29. H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "Speeded up robust features (SURF)," Comput. Vis. Image Underst. 110, 346–359 (2008).