SKETCH-GUIDED TEXTURE-BASED IMAGE INPAINTING

Yan Chen¹, Qing Luan², Houqiang Li², Oscar Au¹

¹ Dept. EEE, Hong Kong University of Science and Technology
² University of Science and Technology of China, Hefei

Email:
[email protected] [email protected] [email protected] [email protected] ABSTRACT In this paper, we propose a novel framework for image inpainting, named sketch-guided texture-based image inpainting. Inspired by the well-known primal sketch model, we present a penetrating perspective into the process of image formation, where each image is seen as a variety of texture organized by some underlying structure. Based on this conceptual foundation, our approach of image inpainting integrates two unified stages: it first reconstructs the image structure with the sketch model, and then guided by the structure, it restores the missing region by patchbased texture synthesis. The major superiority of the framework over other ones consists in its capability of simultaneously recovering the structure and texture in the missing regions. Comprehensive experiments are performed to compare our method with other state-of-the-art ones; the encouraging results obtained convincingly demonstrate the effectiveness of our method. Index Terms: Image restoration, image edge analysis, image texture analysis. 1. INTRODUCTION Inpainting is method restoring the missing region in image. Given the known information, we should predict the unknown information in an undetectable way. Our analysis shows that mainly three types of priors are involved to accomplish this task, they are: 1. Generic priors: concerning the type of priors that work in any types of image, including the high smoothness assumption, low total curvature or variation assumptions and etc. 2. Image-specific priors: concerning the information summarized by analyzing current image. Including texture patterns, etc. 3. Object-specific priors: this type of priors is obtained based on observer’s previous visual experience. It’s a type of prior associated with high level image understanding, which is at present hard to apply to ordinary inpainting algorithm. 
Local inpainting, defined in [5], concerns approaches that inpaint the missing region using information locally around it. Level-line-based methods such as [4] and PDE-based methods such as [2, 3] belong to this type. By
assuming high smoothness and least variation in the missing region, this approach makes great progress in restoring the global structure of the image; however, texture details are inevitably blurred. It uses only the first type of priors; the information provided by image patterns is totally ignored.

Texture synthesis concerns methods that generate a texture image of any size by learning from an input image [6-10]. Patch-matching based algorithms [6, 9] are currently the most effective methods in this realm. [8] shows that the patch-matching order plays a core role in reconstructing faithful texture. [11] develops this into large-block image inpainting by reordering the synthesis process according to the gradient. This approach inherits the ability of texture synthesis to produce vivid details and can propagate linear structure into the missing region; however, it fails on other types of structure. Generally speaking, texture synthesis uses only the second type of priors and thus inevitably suffers from a lack of global structural guidance. We call this "blind patch-matching".

Efforts have been made to hybridize these two types of priors so as to restore both structure and texture well. Most such methods [12, 13] decompose the image into two components representing structure and texture and reconstruct each separately. Significant progress has been made; however, we notice that blurring or unbounded texture growth may occur around the main structure. The reason is that the reconstructions of texture and structure are isolated: no information is interchanged during restoration. According to the previous analysis, the key problem is to find a method that reasonably decomposes the image into structure and texture, capturing not only the two components but also their relationship. [14] relaxed this problem by defining the structure manually.
In this paper, we utilize the primal sketch model [1] and propose a novel approach called sketch-guided texture-based image inpainting. In this method, the image is synthesized as texture directed by global structure. First, the global structure is generated automatically by the primal sketch model. Then, based on smoothness, least variation, and other well-studied visual assumptions, the sketches in the missing region are reconstructed; after that, the textures of the image are synthesized by reordering the patch-matching process according to these sketches. To ensure faithful restoration of the global structure, patch matching first takes
ICIP 2006
Fig. 1 Overview of the proposed algorithm. (a) The source image with missing region. (b) The sketch image. (c) The reconstructed sketch image. (d) Interim result. (e) Final result of the proposed algorithm.
place along the reconstructed sketches. Constrained by the pixels on the structure, the rest of the region is synthesized using the method proposed in [11]. Experiments show the encouraging results of our algorithm, in which both the structure and texture are well reconstructed.

The rest of the paper is organized as follows. The proposed scheme is described in Section 2. Section 3 shows the experimental results. Finally, Section 4 concludes the paper.

2. SKETCH-GUIDED TEXTURE-BASED ALGORITHM

2.1. Overview of the proposed framework

The core of our proposed algorithm is the sketch-guided patch-matching texture synthesis process. Given an image with a missing region, the algorithm first generates the sketch using the primal sketch model. As shown in Fig. 1(b), L1, L2, L3, L4 are the generated sketch lines. By learning the characteristics of the sketch lines, the "lost" sketch lines in the missing region are reconstructed by the proposed sketch reconstruction method. Then, guided by the sketch lines, the information surrounding them is restored by the improved patch-matching algorithm (e.g., Fig. 1(d)). Finally, constrained by the information surrounding the sketch lines, the rest of the missing region is restored by the method in [11]. The following subsections introduce each part of the algorithm in detail.

2.2. Sketch Generation and Reconstruction

2.2.1. Sketch Generation

Our aim is to find a model that divides the image in a way consistent with our notion of image understanding. Guo et al. [1] propose the primal sketch model, which labels an image into structure regions and texture regions. This model pursues the structure of the image and ensures that the regions beside the defined structure can be well reconstructed as texture. Accordingly, under this model it is theoretically reasonable to reconstruct the image by synthesizing texture conditioned on the pixels on the structure, which constitutes the core notion of our algorithm.
In this paper, the inpainting region is first selected manually by the user. Then the primal sketch model proposed in [1] is utilized to generate the sketch, which represents the main global structure.
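The overall flow described in Section 2.1 can be expressed as a small driver. This is our own illustrative skeleton, not the paper's code; the four stage functions are placeholders supplied by the caller.

```python
# Illustrative skeleton of the overall flow (Sec. 2.1). The stage functions
# are caller-supplied placeholders, not the paper's implementation.
def sketch_guided_inpaint(image, mask, extract_sketch, reconstruct_sketch,
                          fill_along_sketch, fill_rest):
    """Run the four stages of sketch-guided inpainting in order."""
    sketch = extract_sketch(image, mask)               # primal-sketch lines (Fig. 1b)
    full_sketch = reconstruct_sketch(sketch, mask)     # complete the "lost" lines (Fig. 1c)
    interim = fill_along_sketch(image, mask, full_sketch)  # patch-match along lines (Fig. 1d)
    return fill_rest(interim, mask, full_sketch)       # exemplar-based fill [11] (Fig. 1e)
```

The point of the skeleton is the ordering: texture is synthesized only after the structure has been completed, so every later stage is conditioned on the reconstructed sketch.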
2.2.2. Sketch Reconstruction

After finding all the main sketches across the missing region, based on the rules described in [15], we propose a simple algorithm to reconstruct the "lost" sketches. First, all the sketches around the missing region are generated and denoted as lines L1, L2, ..., Ln (e.g., Fig. 1(b)). For each line Li, the corresponding matching line MLi is generated according to the following rules:

- The texture around each matched pair should be similar.
- The curvature of each matched pair should be similar.
- The matched pair should be near in distance.
- No intersection is allowed.

Then the lost line between Li and its matching line MLi is reconstructed in a smooth way by a restrict function learned from the coordinates of the points on Li and MLi. In detail, for each line Li, the restrict function f(x) must satisfy

    min_f { Σ_{i=0}^{k-1} [ y_{p_i} − f(x_{p_i}) ]² + Σ_{i=0}^{m-1} [ y_{mp_i} − f(x_{mp_i}) ]² }    (1)

where (x_{p_i}, y_{p_i}) and (x_{mp_i}, y_{mp_i}) are the coordinates of the pixels belonging to Li and MLi, respectively. The function f(x) can be a line, a conic, or another curve function. In this paper, only line and conic functions are utilized, and we find them sufficient to provide reasonable results; for more accurate results, other functions can be used.

2.3. Sketch-Guided Patch-Matching Algorithm

Traditionally, image restoration was done by professional restorers. Different restorers may complete the task in different ways, but the underlying methodology includes two steps: first, the main global structure is observed and prolonged into the target region; then the petty structure and texture are filled in. In this paper, we simulate this procedure by reordering the patch-matching process. Guided by the reconstructed sketch, patch matching first takes place along the global structure in the missing region. Then, conditioned on the pixels on the structure, the rest of the missing region is restored using the method proposed in [11].
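The restrict-function fit of Eq. (1) in Sec. 2.2.2 can be illustrated with ordinary least squares. This is a minimal sketch under our own naming (`fit_restrict_function` and `reconstruct_lost_line` are not from the paper): it pools the points of Li and MLi and fits a line (degree 1) or a conic (degree 2), the two cases used in the paper.

```python
# Minimal sketch of the Eq. (1) fit: least-squares over the pooled points of
# a sketch line Li and its matching line MLi. Names are ours, not the paper's.
import numpy as np

def fit_restrict_function(pts_li, pts_mli, degree=1):
    """Fit f(x) minimising the summed squared error of Eq. (1).

    degree=1 gives a line, degree=2 a conic, as used in the paper."""
    pts = list(pts_li) + list(pts_mli)
    xs = np.array([p[0] for p in pts], dtype=float)
    ys = np.array([p[1] for p in pts], dtype=float)
    coeffs = np.polyfit(xs, ys, degree)  # least-squares polynomial fit
    return np.poly1d(coeffs)

def reconstruct_lost_line(f, x_start, x_end, n=50):
    """Sample the fitted curve across the gap between Li and MLi,
    giving the pixels of the reconstructed 'lost' sketch line."""
    xs = np.linspace(x_start, x_end, n)
    return np.column_stack([xs, f(xs)])
```

Sampling the fitted curve over the x-range of the gap yields the "lost" sketch pixels that the patch-matching stage is then anchored to.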
2.3.1. Improved Patch-Matching Algorithm

Since a sketch denotes the boundary between two different textures, it is reasonable to assume that each side carries the same texture along the continuous sketch. Inspired by this understanding, we utilize a patch-matching algorithm to reconstruct the information surrounding the sketch in the missing region by searching along the corresponding sketch orientation. As Fig. 2(a) shows, the white region denotes the missing region and the gray line denotes the reconstructed sketch line. Given the sketch line, the information surrounding the sketch along its orientation is first extracted and rearranged into a rectangle Φ with the sketch as its middle line, as shown in Fig. 2(b). Then two patches R_{p1} and R_{p2} along the sketch orientation are selected (e.g., Fig. 2(c)). For each patch R_p, the most similar patch R̂_p is found by searching in Φ according to

    R̂_p = arg min_{R_q ∈ Φ} D(R_p, R_q)    (2)
Here, R_p is the patch to be matched, R_q is a candidate matching patch, R̂_p is the resulting match, and Φ is the search range. The function D(R_p, R_q) is defined as the sum of squared differences over the known pixels of the two patches. Note that all pixels of the candidate matching patch must be known. Having found the most similar patch, the unknown pixels in R_p are recovered from the corresponding pixels of R̂_p, while the known pixels, including previously filled ones, remain unchanged (e.g., Fig. 2(d)). The missing region is then updated and the above steps are repeated until all the information surrounding the sketches is restored (e.g., Fig. 2(e)). Finally, the pixels in the rectangle Φ are set back into the original image (e.g., Fig. 2(f)).

2.3.2. The Rest of the Missing Region

Through the step above, the information surrounding the sketches in the missing region is restored. What remains consists of petty structures and textures, and the exemplar-based image inpainting algorithm [11] is utilized to restore it.
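The search of Eq. (2) and the copy step of Sec. 2.3.1 can be sketched with a masked SSD and a brute-force scan over the band Φ. The function names and the exhaustive search are our own illustration, not the paper's implementation.

```python
# Minimal sketch of Eq. (2): masked SSD over known pixels plus a brute-force
# scan of the band Phi. Names are ours, not the paper's.
import numpy as np

def ssd_known(patch, cand, known_mask):
    """D(Rp, Rq): sum of squared differences over the known pixels of Rp."""
    diff = (patch.astype(float) - cand.astype(float)) ** 2
    return float(diff[known_mask].sum())

def best_match(phi, patch, known_mask):
    """Scan every candidate patch in the fully known band Phi and return the
    top-left corner of the one minimising D (Eq. 2)."""
    size = patch.shape[0]
    h, w = phi.shape
    best_d, best_pos = None, None
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            d = ssd_known(patch, phi[y:y + size, x:x + size], known_mask)
            if best_d is None or d < best_d:
                best_d, best_pos = d, (y, x)
    return best_pos

def fill_patch(patch, match, known_mask):
    """Copy match pixels into the unknown positions; known pixels unchanged."""
    out = patch.copy()
    out[~known_mask] = match[~known_mask]
    return out
```

A production version would restrict the scan to fully known candidates and iterate along the sketch, but the two primitives above are the core of one matching step.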
Fig. 2 Improved patch-matching algorithm.
3. EXPERIMENTAL RESULTS

The proposed algorithm is tested on a variety of images, ranging from synthetic images to real-scene photographs. The experimental results convincingly demonstrate the effectiveness of our method.

The first experiment is performed on the double-ellipse image to show how the proposed algorithm works on a synthetic image with curved structure. As shown in Fig. 3, guided by the reconstructed sketch image, the patch-matching algorithm performs well in recovering the information surrounding the sketches. The results in Fig. 3 demonstrate that our method can restore curved structure, whereas the method in [11] can only propagate linear structure into the missing region.

Fig. 4 shows the results of applying our algorithm to a real-scene photograph. As shown in (a), there are three kinds of texture organized by curved structures surrounding the missing region. Our algorithm first reconstructs the curved structures and the texture along their orientation. Then, constrained by the structures, the rest of the missing region is well reconstructed by texture synthesis. As shown in (d), both the curved structures and the three kinds of texture are restored in an undetectable way by our method, whereas the method in [11] fails to keep the boundaries between the different textures and causes serious visible artifacts.

Fig. 5 shows the results of removing a large object using different methods. As Fig. 5(c) shows, serious visible blur propagates into the target region, which shows that traditional image inpainting techniques fail when applied to a large inpainting region. Figs. 5(d) and (e) show the results generated by the method in [11] and by our method, respectively. Compared with the method in [11], the proposed algorithm restores the global structure well.

4.
CONCLUSIONS

In this paper, based on the understanding of an image as a variety of textures organized by some underlying structure, we proposed a novel sketch-guided texture-based image inpainting algorithm. The proposed algorithm first generates the sketch image using the primal sketch model and reconstructs the "lost" sketch lines in the missing region by learning the characteristics of the existing sketch lines. Then, guided by the sketch lines, the information surrounding them is restored by the improved patch-matching algorithm. Finally, constrained by this information, the rest of the missing region is restored by the method in [11]. Unlike previous hybrid inpainting methods, our algorithm not only decomposes the original image into structure and texture but also captures their relationship, so that structure and texture are restored simultaneously in the missing region; as a result, both are well reconstructed, and fewer blurs and visual artifacts are left behind.
5. ACKNOWLEDGEMENT

This work has been supported in part by the Innovation and Technology Commission (projects no. ITS/122/03 and GHP/033/05) and the Research Grants Council (DAG04/05.EG34) of the Hong Kong Special Administrative Region, China.

6. REFERENCES

[1] C. E. Guo, S. C. Zhu, and Y. N. Wu, "Towards a mathematical theory of primal sketch and sketchability," in Proc. ICCV, 2003.
[2] M. Bertalmio et al., "Image inpainting," in Proc. SIGGRAPH, 2000.
[3] T. Chan and J. Shen, "Mathematical models for local nontexture inpaintings," SIAM J. Appl. Math., 62(3):1019–1043, 2002.
[4] S. Masnou and J.-M. Morel, "Level lines based disocclusion," in Proc. Int. Conf. Image Processing, Chicago, 1998.
[5] A. Levin, A. Zomet, and Y. Weiss, "Learning how to inpaint from global image statistics," in Proc. ICCV, 2003.
[6] A. Efros and T. Leung, "Texture synthesis by non-parametric sampling," in Proc. Int. Conf. on Computer Vision, 1999.
[7] D. Heeger and J. Bergen, "Pyramid-based texture analysis and synthesis," Computer Graphics, 1995.
[8] P. Harrison, "A non-hierarchical procedure for re-synthesis of complex texture," in Proc. Int. Conf. Central Europe Comp. Graphics, Visual. and Comp., Feb. 2001.
[9] L.-Y. Wei and M. Levoy, "Fast texture synthesis using tree-structured vector quantization," in Proc. SIGGRAPH, 2000.
[10] L. Liang, C. Liu, Y.-Q. Xu, B. Guo, and H.-Y. Shum, "Real-time texture synthesis by patch-based sampling," ACM Transactions on Graphics, 2001.
[11] A. Criminisi, P. Pérez, and K. Toyama, "Object removal by exemplar-based inpainting," in Proc. IEEE CVPR, 2:721–728, 2003.
[12] M. Bertalmio et al., "Simultaneous structure and texture image inpainting," in Proc. CVPR, 2003.
[13] H. Yamauchi, J. Haber, and H.-P. Seidel, "Image restoration using multiresolution texture synthesis and image inpainting," in Proc. Computer Graphics International, July 2003.
[14] J. Sun, L. Yuan, J. Jia, and H.-Y. Shum, "Image completion with structure propagation," in Proc. SIGGRAPH, 2005.
[15] F. Crick, The Astonishing Hypothesis: The Scientific Search for the Soul. Scribner, 1994.
Fig. 3 Filling in the double-ellipse image. (a) The source image with missing region. (b) The sketch image with missing region. (c) The reconstructed sketch image. (d) Interim result. (e) Final result of the proposed algorithm; the curve structures of the ellipses are well reconstructed. (f) Result of the method in [11]: since that method can only reconstruct linear structure, a serious corner artifact is visible.
Fig. 4 Filling in the road image. (a) The source image with missing region. (b) Interim result. (c) Final result of the proposed algorithm; both the curve structure and the texture of the missing region are well reconstructed in an undetectable way. (d) Result of the method in [11].
Fig. 5 Object removal. (a) Original image (from [11]). (b) The source image with missing region. (c) The result generated by traditional image inpainting. (d) The result generated by the method in [11]. (e) The final result generated by our method. Notice that the main structures are well reconstructed.