Image Deblurring with Blur Kernel Estimation from a Reference Image Patch Po-Hao Huang Yu-Mo Lin Shang-Hong Lai Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan {even, g9562568, lai}@cs.nthu.edu.tw Abstract In this paper, we propose a new approach for image deblurring from two images, one non-blurred and one blurred, taken in different poses, by exploiting a planar object co-existing in both views. We focus on the problem of aligning the corresponding image patches, which contain the co-existing planar object, in both images, and propose an iterative two-stage algorithm for patch alignment and kernel estimation. In the first stage, we extend the intensity-based alignment method to find the geometric transformation between the patches; the aligned image patches are then used for blur kernel estimation in the second stage. These two stages are repeated until convergence. Furthermore, the proposed algorithm can also be used when the geometric relationship between the two images is a homography or an approximate homography, such as for images from image mosaics. Experimental results on real images are given to demonstrate its performance.
1. Introduction The blind-deconvolution problem [1] has been researched for a long time in the image and signal processing areas. However, it is still a challenging problem to correctly estimate the motion blur kernel from a single image, even with the recent impressive works [2,3]. In contrast, image deblurring from a well-aligned blurred and non-blurred image pair has been shown to produce excellent restoration results [4,5]. However, these methods need to align the pair of images very accurately, which was done with specially designed hardware in [4,5]. For image alignment, a general tutorial on previous works can be found in [7]. There are two main image alignment approaches, namely the intensity-based and the feature-based approaches.
978-1-4244-2175-6/08/$25.00 ©2008 IEEE
(a) (b) Figure 1. An example of (a) blurred and (b) non-blurred image pair. The yellow rectangle indicates the co-existing planar object in both images.
However, if one of the images to be aligned is blurred, image features, such as corners or SIFT features, cannot be extracted correctly. Some previous works [9,10] aimed to extract features that are invariant to blur and geometric transformation. Nevertheless, these works are restricted by the assumption that the blur kernels are centro-symmetric. Recently, Yuan et al. [6] proposed a new approach, which is not limited by the above assumption, to align a blurred and non-blurred image pair with the geometric relation between the images assumed to be an affine transformation. They exhaustively search around given initial parameters, and the best alignment is determined based on the criterion of the sparsest blur kernel. In this paper, we focus on the problem of image deblurring from two images, one non-blurred and one blurred, of the same scene. The main difference between this work and the previous methods [4,5] is that we assume the two images of the same scene may be acquired at different poses, with both images containing a common planar object, as shown in Figure 1. This paper aims to find the geometric transformation between two image patches, with different degrees of blurring, of a co-existing planar object, and to estimate the blur kernel from the aligned image patches. Furthermore, the proposed approach can also be applied to restore
images from image mosaics or video sequences without being restricted to a co-existing planar object, because in these cases the geometric transformation between images can be approximated by a homography. Yuan et al. [6] solved the image restoration problem under a similar setting with an exhaustive search around given initial geometric parameters. In contrast, we apply a modified intensity-based matching method to iteratively align the image patches with different degrees of blurring and estimate the blur kernel. The rest of this paper is organized as follows: Section 2 describes how to iteratively align the blurred and non-blurred image patches with the intensity-based alignment method. Experimental results of image restoration by the proposed method are given in Section 3. Finally, we conclude this paper in Section 4.
2. Blurred/Non-blurred image alignment Under the assumption of a spatially-invariant blurring process, the relationship between the blur kernel K, the blurred image B, and the non-blurred image N can be expressed as follows:

B = K ⊗ H⁻¹(N)    (1)

where H⁻¹(N) is the transformed image of N by H⁻¹. In equation (1), both K and H are unknowns, and the equation is non-linear in them. Hence, we propose an iterative two-stage estimation algorithm to solve the problem. In the first stage, the geometric transformation H is determined by using the intensity-based alignment method. Then, the transformed non-blurred image H⁻¹(N) and the blurred image B are used to determine the blur kernel K in the second stage. The estimated kernel is applied in the first stage to refine the estimation of the geometric transformation, which subsequently updates the estimate of the blur kernel. This process is repeated until the estimated kernel converges. Similar to [8], we derive the intensity-based image alignment algorithm from the following assumption:

αI0(x, y) = I1(x + u, y + v)    (2)

where I0 and I1 are the intensity functions of the two images to be aligned, (u,v) is the displacement vector at location (x,y), and α is the intensity variation factor between the images. If the displacement (u,v) between the images is small, I1 can be approximated by its first-order Taylor series and equation (2) becomes:

αI0(x, y) = I1(x, y) + Ix(x, y)⋅u + Iy(x, y)⋅v    (3)

where Ix and Iy are the partial derivatives of I1 with respect to x and y. The displacement vector (u,v) is determined from the geometric transformation, denoted by H, between
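The linearization in equation (3) holds only for small displacements, and its accuracy can be checked numerically. Below is a minimal NumPy sketch; the analytic image function and the displacement values are illustrative, not from the paper:

```python
import numpy as np

# Synthetic smooth image defined analytically, so that the warped image
# I1(x+u, y+v) can be evaluated exactly (function and values illustrative).
def intensity(x, y):
    return np.sin(0.1 * x) * np.cos(0.08 * y)

H, W = 64, 64
Y, X = np.mgrid[0:H, 0:W].astype(float)
I1 = intensity(X, Y)

# Small displacement, as assumed by the first-order expansion in eq. (3).
u, v = 0.3, 0.2
shifted = intensity(X + u, Y + v)        # exact I1(x+u, y+v)

# Image gradients approximated by finite differences.
Iy, Ix = np.gradient(I1)                 # np.gradient returns d/drow, d/dcol
linearized = I1 + Ix * u + Iy * v        # right-hand side of eq. (3), alpha = 1

err_linear = np.abs(shifted - linearized).max()
err_zeroth = np.abs(shifted - I1).max()
print(err_linear, err_zeroth)            # linearization error is far smaller
```

For sub-pixel displacements the linearized model is much closer to the warped image than the unwarped image is, which is what makes the least-squares formulation below meaningful.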
images. For a planar object, the transformation can be well represented by an affine transformation when the perspective effect is small. Therefore, H in the affine case is represented by the parameter vector:

h = (h0, h1, ..., h5)    (4)

With equation (4), the vector (u,v) can be written as

(u, v) = ((h0 − 1)⋅x + h1⋅y + h2, h3⋅x + (h4 − 1)⋅y + h5)    (5)

Substituting equation (5) into equation (3), we obtain the constraint equation:

fi⋅h = gi    (6)

where Ix,i = Ix(xi, yi), Iy,i = Iy(xi, yi), and

gi = αI0(xi, yi) − I1(xi, yi) + Ix,i⋅xi + Iy,i⋅yi    (7)

fi = (Ix,i⋅xi, Ix,i⋅yi, Ix,i, Iy,i⋅xi, Iy,i⋅yi, Iy,i)    (8)
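Equations (6)-(8) yield one linear constraint per pixel, so h follows from a single least-squares solve. A minimal sketch, fixing α = 1 and using a synthetic translation so that the true affine parameters are known (all names and values are illustrative):

```python
import numpy as np

# Smooth analytic test image (illustrative); I0 is I1 sampled at shifted
# coordinates, so the true affine parameters are h = (1, 0, tx, 0, 1, ty).
def intensity(x, y):
    return np.sin(0.1 * x) * np.cos(0.08 * y)

H, W = 64, 64
Y, X = np.mgrid[0:H, 0:W].astype(float)
I1 = intensity(X, Y)
tx, ty = 0.5, 0.3
I0 = intensity(X + tx, Y + ty)           # I0(x, y) = I1(x + u, y + v)

Iy, Ix = np.gradient(I1)                 # partial derivatives of I1

# One constraint f_i . h = g_i per pixel (eqs. 6-8), with alpha = 1 here.
f = np.stack([Ix * X, Ix * Y, Ix, Iy * X, Iy * Y, Iy], axis=-1).reshape(-1, 6)
g = (I0 - I1 + Ix * X + Iy * Y).reshape(-1)

h, *_ = np.linalg.lstsq(f, g, rcond=None)
print(h)   # h[2], h[5] close to (tx, ty); h[0], h[4] near 1
```

In practice the estimate is refined iteratively by re-warping I1 with the current h, since the constraint is only first-order accurate.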
With enough constraints, the parameter vector h can be obtained by the least-squares method. Since the constraint is derived under the assumption of small displacement, an iterative process can be applied to refine the estimation [8]. In the first stage, we extend the intensity-based alignment algorithm from the affine transformation to a homography. The homography H is represented by the following vector:

ĥ = (h0, h1, ..., h7)    (9)

Therefore, the displacement vector (u,v) becomes

(u, v) = ((h0⋅x + h1⋅y + h2)/(h6⋅x + h7⋅y + 1) − x, (h3⋅x + h4⋅y + h5)/(h6⋅x + h7⋅y + 1) − y)    (10)

With a derivation similar to the above, we can derive the constraint equation as follows:

f̂i⋅ĥ = gi    (11)

where

f̂i = (fi, −gi⋅xi, −gi⋅yi)    (12)
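The homography constraint can be sanity-checked numerically: for a known near-identity homography, the true parameter vector ĥ should satisfy f̂i⋅ĥ ≈ gi up to linearization error. A NumPy sketch under that assumption (the image function and parameter values are illustrative):

```python
import numpy as np

def intensity(x, y):
    return np.sin(0.1 * x) * np.cos(0.08 * y)

H, W = 64, 64
Y, X = np.mgrid[0:H, 0:W].astype(float)
I1 = intensity(X, Y)

# A known homography close to the identity (small projective terms h6, h7).
h_true = np.array([1.0, 0.0, 0.6, 0.0, 1.0, -0.4, 2e-4, -1e-4])
D = h_true[6] * X + h_true[7] * Y + 1.0
u = (h_true[0] * X + h_true[1] * Y + h_true[2]) / D - X   # eq. (10)
v = (h_true[3] * X + h_true[4] * Y + h_true[5]) / D - Y
I0 = intensity(X + u, Y + v)             # I0(x, y) = I1(x + u, y + v)

Iy, Ix = np.gradient(I1)
g = (I0 - I1 + Ix * X + Iy * Y).reshape(-1)               # eq. (7), alpha = 1
f = np.stack([Ix * X, Ix * Y, Ix, Iy * X, Iy * Y, Iy], axis=-1).reshape(-1, 6)
f_hat = np.column_stack([f, -g * X.reshape(-1), -g * Y.reshape(-1)])  # eq. (12)

residual = np.abs(f_hat @ h_true - g).max()
print(residual, np.abs(g).max())         # residual is small relative to g
```

Note that gi appears inside f̂i, which is why a reasonable initial estimate matters: the constraint rows themselves depend on how well the current alignment already matches.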
In these equations, I0 is regarded as the blurred image B and I1 corresponds to the non-blurred image N. Thus, we can compute the homography between the corresponding patches in the blurred and non-blurred images by least-squares estimation. However, the homography estimation algorithm may not converge to the true solution without a good initial solution. In practice, we first apply the algorithm to estimate the affine approximation and then use it as an initial guess for computing the homography with the extended algorithm. To find the corresponding patches, simple user interactions, such as selecting and dragging the image patch and placing it at the rough corresponding position in the other image, can be used to provide an initial rough alignment for the proposed iterative image alignment algorithm. Some factors, such as the initial position, the scale of the patch size, and the in-plane rotation, will influence the alignment results. Nevertheless, these issues can be resolved by locally shifting the
Figure 2. (a) Blurred and non-blurred patches. Top of (b)-(d): Results after the first stage at different iterations. Bottom of (b)-(d): Results of blur kernel estimation after the second stage. (e) Restored image of Figure 1(a). (f) The top and bottom rows are corresponding patches of the blurred and restored images. See text for details. Enlarge the electronic paper to see clearer results.
Figure 3. (a) Blurred and (b) non-blurred image pair, where the estimated kernel is shown at the top right of (b). (c) The restored image. (d) The corresponding image patches. (e) The aligned image patch. (f)-(g): The corresponding blurred and restored image patches. See text for details.
patch positions, multi-scaling the patch size, and rotating the patches within a pre-defined range. Note that these processes are performed within a reasonably small range, which is different from an exhaustive search over a large parameter space. In the second stage, we apply the estimated geometric transformation H to the non-blurred image N to obtain the aligned image patch H⁻¹(N). The blur kernel estimation algorithm described in [5] is then used to determine the blur kernel K, which is applied to the transformed image H⁻¹(N) to bring the non-blurred image closer to the blurred image in the next iteration. Example results are depicted in Figure 2. Figure 2(a) shows the blurred and non-blurred image patches. The intensity of the blurred patch is adjusted according to the non-blurred patch. From Figure 2(b) to 2(d), the top row shows the alignment results after the first stage at different iterations. In each image, the left part is the blurred patch and the right part is the transformed non-blurred patch under the determined transformation H. The bottom row depicts the estimated blur kernels obtained in the second stage.
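The kernel estimator of [5] is more elaborate than what follows; as a minimal stand-in, kernel estimation from an aligned pair can be sketched as Tikhonov-regularized least squares on the model B = K ⊗ H⁻¹(N), which has a per-frequency closed form under circular boundary conditions (all names and values below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64
N_aligned = rng.random((H, W))           # stands in for H^{-1}(N)

# Ground-truth kernel: a small horizontal motion blur, zero-padded to H x W.
K_true = np.zeros((H, W))
K_true[0, :5] = 1.0 / 5.0

# Forward model (eq. 1) with circular convolution: B = K (*) N_aligned.
B = np.real(np.fft.ifft2(np.fft.fft2(K_true) * np.fft.fft2(N_aligned)))

# Kernel estimation: minimize ||K (*) N - B||^2 + lam * ||K||^2, whose
# per-frequency solution is conj(Nf) * Bf / (|Nf|^2 + lam).
Nf, Bf = np.fft.fft2(N_aligned), np.fft.fft2(B)
lam = 1e-6
K_est = np.real(np.fft.ifft2(np.conj(Nf) * Bf / (np.abs(Nf) ** 2 + lam)))

print(np.abs(K_est - K_true).max())      # near-exact in this noiseless toy
```

With noise and imperfect alignment, the regularization weight and a sparsity prior on K (as in [5] and [6]) become important; this sketch only illustrates the linear structure of the problem.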
After the estimated kernel converges, we apply the Richardson-Lucy deconvolution algorithm [11] to restore the blurred image. The restored image of Figure 1(a) is shown in Figure 2(e). In Figure 2(f), some corresponding blurred and restored image patches are shown in each column. In this paper, we focus on determining the geometric transformation between the blurred and non-blurred image patches. Therefore, no other image enhancement methods were applied to obtain the results.
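The Richardson-Lucy iteration applies the multiplicative update x ← x ⋅ Kᵀ(B / K(x)), where K(⋅) is convolution with the kernel and Kᵀ(⋅) is correlation (convolution with the flipped kernel). A minimal NumPy sketch using circular convolution; the test image and kernel are illustrative, and practical implementations ([11] and later work) handle boundaries and noise more carefully:

```python
import numpy as np

def conv(img, kf):
    # Circular convolution via FFT (kf is the kernel's 2-D FFT).
    return np.real(np.fft.ifft2(np.fft.fft2(img) * kf))

def richardson_lucy(B, K, n_iter=50, eps=1e-12):
    # Multiplicative RL update: x <- x * K^T( B / K(x) ).
    kf = np.fft.fft2(K)
    x = np.full_like(B, B.mean())        # flat non-negative initial guess
    for _ in range(n_iter):
        ratio = B / (conv(x, kf) + eps)
        x = x * np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(kf)))
    return x

# Toy example: blur a simple non-negative image with a known kernel.
H, W = 64, 64
Y, X = np.mgrid[0:H, 0:W].astype(float)
orig = np.exp(-((X - 32) ** 2 + (Y - 40) ** 2) / 50.0) + 0.1
K = np.zeros((H, W)); K[0, :7] = 1.0 / 7.0   # horizontal motion blur
B = conv(orig, np.fft.fft2(K))

restored = richardson_lucy(B, K)
mse_blurred = np.mean((B - orig) ** 2)
mse_restored = np.mean((restored - orig) ** 2)
print(mse_blurred, mse_restored)         # restoration reduces the error
```

The multiplicative form keeps the estimate non-negative and preserves total intensity, which is why RL is a common choice once the kernel is known.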
3. Experimental results In the experiments, we apply the proposed algorithm to restore different kinds of blurred images, including general blurred images (Figure 3) and images from image mosaics (Figure 4). Figures 3(a) and 3(b) are a pair of blurred and non-blurred images. Figure 3(d) shows the blurred and non-blurred image patches. Figure 3(e) depicts the alignment result, where the left part of the image is the original blurred patch with intensity adjusted according to the corresponding
Figure 4. Image mosaic (a) from blurred/non-blurred images and (b) from restored/non-blurred images. (c) The corresponding patches. (d) The aligned image patch. (e)-(f): The corresponding blurred and restored image patches.
non-blurred patch, and the right part is the transformed non-blurred patch convolved with the estimated kernel. Figure 3(c) is the restored image, and Figures 3(f) and 3(g) are the corresponding cropped patches of the blurred and restored images. Figure 4 shows the experiment on images from an image mosaic, whose geometric transformation can be expressed as a homography. In this experiment, the patch is no longer restricted to a planar object because of the special camera motion. Figure 4(a) is the panoramic mosaic stitched from two images, one blurred and one non-blurred, and Figure 4(b) is obtained from the restored and non-blurred images; the rest of the arrangement is similar to that in Figure 3.
4. Conclusion In this paper, we proposed a new approach for blind image deblurring that exploits an additional non-blurred reference image containing a co-existing planar object for blur kernel estimation. The problem is focused on determining the geometric transformation between the blurred and non-blurred image patches. Different from exhaustively searching a large parameter space, we proposed an iterative two-stage intensity-based matching algorithm for patch alignment. The proposed algorithm can also be applied to images from image mosaics or video sequences, where it is no longer restricted by the assumption of a co-existing planar object.
References
[1] D. Kundur and D. Hatzinakos, “Blind image deconvolution”, IEEE Signal Processing Magazine, 13(3):43–64, 1996.
[2] R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis and W. T. Freeman, “Removing camera shake from a single photograph”, ACM Trans. on Graphics, 25:787–794, 2006.
[3] J. Jia, “Single image motion deblurring using transparency”, Proc. Computer Vision and Pattern Recognition, 1–8, 2007.
[4] J. Jia, J. Sun, C.-K. Tang and H.-Y. Shum, “Bayesian correction of image intensity with spatial consideration”, Proc. of European Conf. on Computer Vision, 3:342–354, 2004.
[5] L. Yuan, J. Sun, L. Quan, and H.-Y. Shum, “Image deblurring with blurred/noisy image pairs”, ACM Trans. on Graphics, 26(3):1–10, 2007.
[6] L. Yuan, J. Sun, L. Quan, and H.-Y. Shum, “Blurred/non-blurred image alignment using sparseness prior”, Proc. Int’l Conf. on Computer Vision, 2007.
[7] R. Szeliski, “Image alignment and stitching: A tutorial”, Foundations and Trends in Computer Graphics and Computer Vision, 2(1):1–104, 2006.
[8] S.-H. Lai, “Robust image matching under partial occlusion and spatially varying illumination change”, Computer Vision and Image Understanding, 78:84–98, 2000.
[9] J. Flusser, J. Boldys, and B. Zitova, “Moment forms invariant to rotation and blur in arbitrary number of dimensions”, IEEE Trans. on Pattern Analysis and Machine Intelligence, 25(2):234–246, 2003.
[10] Y. Zhang, C. Wen, Y. Zhang, and Y. C. Soh, “Determination of blur and affine combined invariants by normalization”, Pattern Recognition, 35:211–221, 2002.
[11] W. H. Richardson, “Bayesian-based iterative method of image restoration”, J. Opt. Soc. Am., 62(1):55–59, 1972.