FEATURE-BASED TEXTURE SYNTHESIS AND EDITING USING VORONOI DIAGRAMS

Muath Sabha
Department of Multimedia Technology
Arab American University-Jenin, Palestine
email: [email protected]

Philip Dutré
Department of Computer Science
Katholieke Universiteit Leuven, Belgium
email: [email protected]

Abstract

In this paper, we present a technique to synthesize featured textures easily and interactively. The main idea is to synthesize a new texture by copying irregular patches from the source to the target texture, each of which contains a complete feature. The interior part of each feature is left untouched, while the cutting and stitching is performed on the background texture between the features. The technique starts with the user selecting a feature in the source texture, after which the algorithm finds the positions of the other features, generates a similar distribution of features, and finally synthesizes the target texture by copying and stitching patches, shaped like the target's Voronoi cells, from the source texture. The technique is fast enough to be used interactively to edit textures in a simple and easy way.

KEY WORDS
Texture editing, Voronoi diagrams, Image synthesis.

1 Introduction

Computer graphics applications often use textures to decorate virtual objects, increasing realism without adding geometric detail. In recent years, texture synthesis has become a useful tool to generate large textures starting from a sample texture. It is also used to fill holes in damaged pictures, or to generate limitless textures in interactive applications such as games and simulators. Sample-based texture synthesis schemes analyze the sample texture to create visually similar images. From a sample image taken from nature by a photographer or made by an artist, our technique generates an infinite aperiodic texture by copying irregular patches from the sample to the target texture. Figure 1 outlines the stages of the technique. Using a suitable editor, the user can modify the generated texture easily: features from different textures can be added, moved, removed and merged, as shown in figure 6.

This paper starts with a quick overview of the recent contributions most related to our work in the field of texture synthesis in section 2. In section 3, our technique is explained in detail. Our work is composed of two main components: (1) the first is analyzing the source texture, extracting the features' positions and then generating a similar distribution in the target texture, which is explained in subsections 3.1 and 3.2; (2) the second and major component, which we consider the main contribution of our work, is the shape of the patch and its placement in the target texture, which is discussed in subsection 3.3. Texture editing is an important application of our technique, and is explained in subsection 3.4. In section 4, we show some results generated from different textures, followed by a discussion. Finally, section 5 concludes this work and gives some recommendations for future work.

2 Related Work

There are various sample-based texture synthesis approaches which start from an exemplar to generate new textures. Texture synthesis by analysis generates a new texture from statistics extracted from the sample texture [2, 15, 27]. Pixel-based texture synthesis techniques rely on neighborhood matching to generate the target texture pixel by pixel: for each pixel, the already synthesized spatial neighborhood is compared to exemplar neighborhoods to identify the best matching pixel [6, 24, 19, 11, 8]. Most of the pixel-based techniques follow the Markov Random Field model. Texture morphing [14] works better with pixel-based techniques, and [20] uses a texton mask for texture morphing. [22] and [25] used the pixel-based approach to synthesize texture on surfaces, and [28] used a data structure called a jump map to synthesize pixels over surfaces.

Efros and Freeman [5], Xu et al. [26] and Liang et al. [12] suggested copying one patch at a time instead of one pixel at a time, a patch-based texture synthesis model. This maintains the interior part of each patch. The idea was to stitch random patches from the sample texture together and modify them in a consistent way. The GraphCut [9] and GrabCut [17] techniques improved the way the patches are cut and stitched. Liu et al. [13] worked on synthesizing near-regular textures; there, user input is important in assigning the lattices, and the technique extracts the deformations in color and geometry. The lapped texture technique [16] stitches random patches over 2D and 3D meshes, while [18] tried to speed up the patch selection process, and [21] uses wavelets to select suitable patches.

Figure 1. A summary of our technique in stages: the source texture, the distribution of the detected features in the source texture, the generated distribution, and finally the texture synthesized according to the generated distribution.

Figure 2. (a) The source texture, with the selected feature shown in (b) together with its radius Rmin. (c) The result of cross-correlation between the feature and the source texture; blue peaks indicate high correlation, and a suitable threshold is needed to separate the features. (d) The distribution of the detected features. The detected roses in (a), marked in yellow as R1 to R12, are numbered in the order of detection; the incomplete feature at the edge near R2 is not considered.

Some techniques build different square tiles from the source texture [3, 10] and then perform the synthesis by matching edges or corners. The main drawback of tile-based texture synthesis is the limited variety due to the fixed tile set; the tiling often appears regular when the texture is viewed from afar, especially for non-homogeneous textures.

A new class of image editing tools has emerged which employs ideas from texture synthesis to perform sophisticated image editing operations, including texture-by-numbers (image analogies) [7], region filling and object removal from scenes [4], and photomontage [1].

3 Our Technique

We synthesize a range of textures in which "clear and relatively big" features are present in front of a background, which in our examples is often a stochastic texture. Such textures are commonly found in nature: flower fields, sand with stone pebbles, animals and birds in fields, on water and in the sky. We target this category of textures because most existing texture synthesis algorithms do not work well with input textures that combine two different categories (featured and stochastic), or they would need different parameters for each part of the texture. Our approach is also motivated by the observation that the human eye mostly notices the main features in an image, while at the same time being sensitive to cuts in these features [23].

Previous patch-based texture synthesis techniques do not take the patch shape or its position relative to the features into account. Most previous techniques (e.g. [5, 17, 18, 13]) use rectangular patches, which are mostly suitable for near-regular textures, while others use randomly irregular patch shapes obtained by finding minimal cuts [9]. In this paper, the term feature refers to the big repeated parts of the texture that attract the user's attention at first glance.

3.1 Feature positions

Our technique starts by finding the centers of the main features in the source texture. The user manually selects a feature from the source texture; based on the CIE L*a*b* color space, the L2 norm is then calculated between the selected feature and the sample texture, and the technique finds the positions of similar features, as shown in figure 2.

The user is given the chance to assign the main feature in order to simplify and speed up the feature detection. The user does not have to select the feature accurately: in most of the textures we have examined, a simple box, rather than a polygon, is enough for the feature position finder. The effort and time taken from the user are negligible compared to the time an automatic program would need to decide which kind of features are the main features and which should be treated as background. To our knowledge, there is no technique that can detect the repeated feature in a general texture; some may succeed on regular or near-regular textures.

To improve the result, the user can interactively add, move or delete feature centers to make the resulting source distribution as accurate as possible: if the cross-correlation does not detect all the features, or misses features near the edges whose presence helps in generating the target distribution, those feature positions can be added. The time taken to modify the result is also negligible, because in most of the examined textures the positions do not have to be exact; it is enough to put each feature near the center of its Voronoi cell.
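As an illustration of this step, the sketch below (our reconstruction, not the authors' code) finds feature centers by template matching in CIE L*a*b*: the sum of squared differences plays the role of the L2-norm comparison, and the threshold and suppression radius are hypothetical parameters chosen for the example.

```python
# A minimal sketch of the feature-position finder, assuming OpenCV/NumPy.
# The relative threshold and the suppression radius are illustrative
# parameters, not values taken from the paper.
import cv2
import numpy as np

def find_feature_centers(source_bgr, feature_box, rel_threshold=0.15):
    """Return (x, y) centers of patches similar to the selected feature.

    source_bgr : H x W x 3 uint8 image (OpenCV BGR order).
    feature_box: (x, y, w, h) box the user drew around one feature.
    """
    x, y, w, h = feature_box
    # Compare in CIE L*a*b*, as the technique does, so the L2 norm better
    # matches perceived color differences.
    lab = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    template = lab[y:y + h, x:x + w]

    # Sum of squared differences == squared L2 norm between the template
    # and every window of the source.
    ssd = cv2.matchTemplate(lab, template, cv2.TM_SQDIFF)
    ssd /= ssd.max()                      # normalize to [0, 1]

    # Keep strong responses, then suppress non-minima within one feature
    # radius so each feature yields a single center.
    r = max(w, h) // 2
    centers = []
    candidates = np.argwhere(ssd < rel_threshold)
    for cy, cx in candidates[np.argsort(ssd[tuple(candidates.T)])]:
        if all((cx - px) ** 2 + (cy - py) ** 2 > r * r for px, py in centers):
            centers.append((cx, cy))
    # Shift from window corners to window centers.
    return [(cx + w // 2, cy + h // 2) for cx, cy in centers]
```

A window that does not fit entirely inside the source image produces no response, which is consistent with the incomplete edge feature near R2 in figure 2 being ignored.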

3.2 Distribution

After detecting the distribution of features in the source texture, a similar distribution is generated for the target. The generated point distribution will be used in synthesizing the target texture, where each point will be the position of a feature.

The first step in generating the target distribution is to construct the Voronoi diagram of the source distribution; we call this the source space. Each point in the source space is a cell generator in the Voronoi diagram. The first thing learned from the Voronoi diagram is the set of direct neighbors. For example, the direct neighbors of the point SP7 in figure 3(a) are {SP2, SP3, SP8, SP10, SP9, SP6}. The Voronoi diagram also shows which points are edge points (edge cells) and which are internal points: SP12 is an edge point and SP5 is a corner point, while SP7 is an internal point.

The regularity is calculated from the vectors to the direct neighbors: we compute the standard deviation of the angles and the standard deviation of the distances to the direct neighbors. From those standard deviations, and based on lattice theory, the regularity of the distribution is determined. The minimum and maximum distances between neighbors are also calculated.

Generating the distribution starts with a node near the top left corner. This first node is randomly placed at a distance between the minimum and maximum cell radius from the x-axis and the y-axis, taking the upper left corner as the origin; that is the blue node in figure 3(b). A random point SPn, which is not an edge point, is taken from the source space, and the neighbors of this point are copied to the target space with distances to the target point equal to the corresponding distances from SPn in the source space.
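The analysis just described can be sketched with SciPy's Voronoi diagram. The helper below is an assumption-laden illustration, not the paper's code: it extracts direct neighbors from shared ridges, separates internal cells from unbounded edge cells, and measures regularity as the standard deviations of neighbor distances and angular gaps.

```python
# Source-space analysis sketch, assuming scipy.spatial.Voronoi.
import numpy as np
from scipy.spatial import Voronoi

def analyze_source_distribution(points):
    """points: N x 2 NumPy array of detected feature centers."""
    vor = Voronoi(points)

    # Direct neighbors: two generators are neighbors iff their cells share
    # a Voronoi ridge.
    neighbors = {i: set() for i in range(len(points))}
    for a, b in vor.ridge_points:
        neighbors[a].add(b)
        neighbors[b].add(a)

    # Internal points have bounded cells; SciPy marks unbounded (edge)
    # regions with a -1 vertex.
    internal = [i for i in range(len(points))
                if -1 not in vor.regions[vor.point_region[i]]]

    dists, angle_gaps = [], []
    for i in internal:
        vecs = points[list(neighbors[i])] - points[i]
        dists.extend(np.hypot(vecs[:, 0], vecs[:, 1]))
        # Angular gaps between successive neighbor directions.
        ang = np.sort(np.arctan2(vecs[:, 1], vecs[:, 0]))
        angle_gaps.extend(np.diff(np.append(ang, ang[0] + 2 * np.pi)))

    # Small standard deviations indicate a near-regular (lattice-like)
    # distribution; the distance bounds are reused during generation.
    return {
        "neighbors": neighbors,
        "internal": internal,
        "min_dist": float(np.min(dists)),
        "max_dist": float(np.max(dists)),
        "dist_std": float(np.std(dists)),
        "angle_std": float(np.std(angle_gaps)),
    }
```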

Figure 3. Generating a target distribution from the source distribution: (a) the source distribution with points SP1 to SP12; (b) the target distribution. The blue point in the target distribution is the initial point, the green points are the points fixed in previous steps, and the yellow points TP1 to TP5 are the points suggested at the current step, which finds the next neighbors of the red current point CP. Three of them (marked with red crosses) are discarded because the distances to their fixed neighbors are less than the minimum distance; the other two, TP3 and TP4, are accepted. The source distribution is the one detected in figure 2.

Those placed points have to lie within the target space dimensions. The point in the target space whose direct neighbors are to be found is called the current point CP. A list of all the direct neighbors of the point SPn in the source is constructed, called the SurroundingsList. Each point TPn from the SurroundingsList is tested for addition to the target space at the same position relative to CP as its position relative to SPn. If the tested point satisfies the conditions, it is added to the TargetQueue; otherwise it is rejected and the technique continues with the next point from the SurroundingsList. The Voronoi diagram is reconstructed for testing each point to be added to the target space. When the SurroundingsList is empty, a new point is popped from the TargetQueue to become the new CP. This procedure runs until the TargetQueue is empty, which means the whole target space is covered.

Let us illustrate these steps with the example in figure 3. A target distribution (figure 3(b)) is to be generated from a given source distribution (figure 3(a)); the source distribution is the one in figure 2(d), resulting from the source texture in figure 2(a). Suppose we have reached the situation in figure 3(b), where the current point is CP (the red spot) and the green points have already been placed successfully in previous steps and added to the fixed list. We want to generate the next points based on CP. A random point from the source space is picked; suppose it is SP10 from figure 3(a). The surroundings of SP10 are {SP7, SP8, SP11, SP12, SP9}, with relative spacings from SP10 indicated by blue lines. When these 5 surroundings are placed in the target space, they are given the names TP1 to TP5; these are the yellow spots in figure 3(b). The new TPn points are tested one by one for addition to the fixed list. In the testing procedure, the Voronoi diagram is created and the distances between the newly added point and its surroundings are tested. If the radius of the cell matches (equals or exceeds) the radius of a random cell taken from the source (which may differ from SP10's cell in our example), the point is added to the target space; otherwise it is discarded. In the example of figure 3(b), the points TP1, TP2 and TP5 are discarded, while TP3 and TP4 are kept and added to the TargetQueue. Each point in the TargetQueue is popped in turn and acts as CP.

The user can control the density of the generated distribution. This is done by introducing a factor α for the ratio of the distances in the target space to the distances in the source space, $d_T = \alpha\, d_S$. If this factor is less than 1.0, the resulting distribution becomes denser; if it is more than 1.0, the target space becomes sparser than the source space. Different results are shown in figure 6. The factor must not fall below the value at which the minimum target Voronoi cell radius equals Rmin, the radius of the selected feature shown in figure 2(b). This ensures that complete features fit inside each cell.
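The generation loop can be condensed as follows. This is a simplification under stated assumptions: it reuses `analyze_source_distribution` from the previous sketch, and the paper's Voronoi-cell-radius acceptance test is replaced by a plain minimum-spacing check against all fixed points.

```python
# Simplified target-distribution generator; not the paper's implementation.
import random
from collections import deque
import numpy as np

def generate_target(points, info, width, height, alpha=1.0):
    """points: N x 2 source feature centers; info: result of
    analyze_source_distribution. alpha scales spacing, d_T = alpha * d_S
    (alpha < 1 gives a denser target distribution)."""
    min_d = alpha * info["min_dist"]
    max_d = alpha * info["max_dist"]

    # Seed one node near the top-left corner, between the minimum and
    # maximum cell radius (about half the neighbor distance) from each axis.
    first = np.array([random.uniform(min_d / 2, max_d / 2),
                      random.uniform(min_d / 2, max_d / 2)])
    fixed = [first]
    queue = deque([first])

    while queue:
        cp = queue.popleft()                   # current point CP
        sp = random.choice(info["internal"])   # random internal source point
        for nb in info["neighbors"][sp]:
            tp = cp + alpha * (points[nb] - points[sp])   # candidate TP
            if not (0 <= tp[0] < width and 0 <= tp[1] < height):
                continue
            # Simplified acceptance: keep the minimum spacing to every
            # fixed point (the paper instead compares the candidate's
            # Voronoi cell radius against a random source cell radius).
            if all(np.linalg.norm(tp - f) >= min_d for f in fixed):
                fixed.append(tp)
                queue.append(tp)
    return np.array(fixed)
```

Since every accepted point is enqueued exactly once and the spacing test bounds how many points fit in the target area, the loop necessarily terminates.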

3.3 Patch selection and stitching

After generating the distribution for the target texture, its Voronoi diagram is constructed, and each cell generator (point) indicates a feature center. Starting with the first cell in the top left corner, a feature position from the source texture is selected randomly, and a patch with the shape (and size) of the target cell is copied from the source and placed in the target texture. The patch copied from the source texture covers not only the target cell area, but also a region around the selected

Figure 4. The procedure of pasting a cell. The patch in (b), taken from the source texture, is the cell with a surrounding band (outlined in blue in (a)). The overlapping region (outlined in red in (a)) is stitched with the already synthesized part of the target texture after finding the best cut. (c) shows the resulting texture after stitching the cell.

cell, so that the patch overlaps the surrounding cells, as shown in figure 4. The width of this region can be set as a parameter, but in all our examples we used a relative band width of 0.25 of the distance between the Voronoi cell's generator point and its neighbor, again to ensure that features are not cut. In order to blend the new patch, the L2 norm over the overlap region is computed and a cutting algorithm is used to find the best cut; in our implementation we used the GraphCut [9] algorithm.

Another parameter, the perspective flow, can be applied to improve the result by placing the copied cells in the same relative position in source and destination. The geese in figure 6 were copied with priority given to the perspective flow of the features in the source picture. Applying the perspective flow to the source texture stretches the texture in a very smooth way, enlarging the texture without losing the perspective distances.
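A rough sketch of pasting one cell follows. The function and its first-writer-wins overlap rule are illustrative assumptions only: the actual technique cuts the overlap band with a GraphCut seam [9], which this placeholder does not implement.

```python
# Sketch of copying one Voronoi cell plus its band; seam finding omitted.
import cv2
import numpy as np

def paste_cell(target, written, source, cell_poly, src_center, tgt_center,
               band=0.25):
    """Copy one Voronoi cell (plus a surrounding band) from `source` into
    `target`. `written` is a bool mask of already synthesized pixels;
    `band` widens the cell by a fraction of the center-to-edge distance.
    cell_poly, src_center, tgt_center are NumPy arrays in (x, y) order."""
    poly = np.asarray(cell_poly, dtype=np.float64)
    # Widen the cell by scaling it about its generator point.
    grown = tgt_center + (1.0 + band) * (poly - tgt_center)

    mask = np.zeros(target.shape[:2], np.uint8)
    cv2.fillPoly(mask, [np.round(grown).astype(np.int32)], 1)
    ys, xs = np.nonzero(mask)

    # Source pixel for each target pixel: same offset from the feature
    # center. Clipping stands in for choosing source features whose band
    # fits inside the image.
    sy = np.clip(ys - int(round(tgt_center[1] - src_center[1])),
                 0, source.shape[0] - 1)
    sx = np.clip(xs - int(round(tgt_center[0] - src_center[0])),
                 0, source.shape[1] - 1)
    patch = source[sy, sx]

    # Placeholder seam: keep pixels synthesized by earlier cells and fill
    # only the rest; the technique instead computes a minimum-cost
    # GraphCut seam through the overlap band so features are never cut.
    fill = ~written[ys, xs]
    target[ys[fill], xs[fill]] = patch[fill]
    written[ys, xs] = True
```

Replacing the placeholder with a real seam computation only changes how the overlapping pixels are partitioned; the cell mask and the source-to-target offset logic stay the same.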

3.4 Texture Editing

In order to provide more flexibility, the technique enables the user to edit textures in an easy and interactive way. The user can control the distribution of the target texture by adding, moving, resizing or deleting features. The technique also enables the user to merge textures (especially if the backgrounds are easily matched), as shown in figure 6.


Figure 5. A result together with its false-color image, showing where each patch was selected from and the sizes and shapes of the copied patches.

Figure 6. Some of the results (right) together with their sample textures (left). Different density control factor (α) values were used.

4 Results and Discussion

The main objective of our feature-based texture synthesis and editing technique is to copy complete features from the source to the target texture and to make the cuts on the background between features, without touching the features themselves. The results in figure 6 show that this aim is achieved: features are not affected while being copied from source to destination. The false-color image in figure 5 shows that patches of irregular shapes have been copied from random places in the source to the target texture, and that complete features are copied within these patches.

5 Conclusions and Future Work

We discussed a technique which improves the synthesis procedure for an important category of natural textures. The main contribution of the technique is choosing irregular patches such that the interior parts of the features are not affected by any discontinuity caused by copying and stitching patches from source to target textures. Our technique generates distributions which are congruous to the source distributions, with the ability to change the density. Our technique is fast enough to edit and synthesize textures of high resolutions interactively, in a flexible, easy and efficient way.

We believe that the technique can be extended to work with the full range of feature-based textures. The technique can also be extended to map textures directly onto 3D surfaces: the inevitable deformation can be confined to the background between the features, and the features can be scaled to match the target mesh. The same texture editing can be done over 3D surfaces.

References

[1] Aseem Agarwala, Mira Dontcheva, Maneesh Agrawala, Steven Drucker, Alex Colburn, Brian Curless, David Salesin, and Michael Cohen. Interactive digital photomontage. ACM Trans. Graph., 23(3):294–302, 2004.

[2] Jeremy S. De Bonet. Multiresolution sampling procedure for analysis and synthesis of texture images. In SIGGRAPH '97, pages 361–368, New York, NY, USA, 1997. ACM Press/Addison-Wesley Publishing Co.

[3] Michael Cohen, Jonathan Shade, Stefan Hiller, and Oliver Deussen. Wang tiles for image and texture generation. ACM Trans. Graph., 22(3):287–294, 2003.

[4] A. Criminisi, P. Pérez, and K. Toyama. Region filling and object removal by exemplar-based image inpainting. IEEE Trans. Image Process., 13(9):1200–1212, 2004.

[5] Alexei A. Efros and William T. Freeman. Image quilting for texture synthesis and transfer. In Proceedings of SIGGRAPH 2001, pages 341–346, August 2001.

[6] Alexei A. Efros and Thomas K. Leung. Texture synthesis by non-parametric sampling. In ICCV (2), pages 1033–1038, 1999.

[7] Aaron Hertzmann, Charles E. Jacobs, Nuria Oliver, Brian Curless, and David H. Salesin. Image analogies. In SIGGRAPH '01: Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 327–340. ACM Press, 2001.

[8] Vivek Kwatra, Irfan Essa, Aaron Bobick, and Nipun Kwatra. Texture optimization for example-based synthesis. ACM Trans. Graph., 24(3):795–802, 2005.

[9] Vivek Kwatra, Arno Schödl, Irfan Essa, Greg Turk, and Aaron Bobick. Graphcut textures: Image and video synthesis using graph cuts. ACM Trans. Graph., 22(3):277–286, July 2003.

[10] Ares Lagae and Philip Dutré. An alternative for Wang tiles: colored edges versus colored corners. ACM Trans. Graph., 25(4):1442–1459, 2006.


[11] Sylvain Lefebvre and Hugues Hoppe. Appearance-space texture synthesis. In SIGGRAPH '06: ACM SIGGRAPH 2006 Papers, pages 541–548, New York, NY, USA, 2006. ACM Press.

[12] Lin Liang, Ce Liu, Ying-Qing Xu, Baining Guo, and Heung-Yeung Shum. Real-time texture synthesis by patch-based sampling. ACM Trans. Graph., 20(3):127–150, 2001.

[13] Yanxi Liu, Wen-Chieh Lin, and James Hays. Near-regular texture analysis and manipulation. ACM Trans. Graph., 23(3):368–376, 2004.

[14] Wojciech Matusik, Matthias Zwicker, and Frédo Durand. Texture design using a simplicial complex of morphable textures. ACM Trans. Graph., 24(3):787–794, 2005.

[15] Javier Portilla and Eero P. Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. Int. J. Comput. Vision, 40(1):49–70, 2000.

[16] Emil Praun, Adam Finkelstein, and Hugues Hoppe. Lapped textures. In Proceedings of ACM SIGGRAPH 2000, pages 465–470, 2000.

[17] Carsten Rother, Vladimir Kolmogorov, and Andrew Blake. "GrabCut": interactive foreground extraction using iterated graph cuts. ACM Trans. Graph., 23(3):309–314, 2004.

[18] Muath Sabha and Philip Dutré. Image welding for texture synthesis. In Vision, Modeling, and Visualization 2006, pages 97–104. IEEE Computer Society, 2006.

[19] Muath Sabha, Pieter Peers, and Philip Dutré. Texture synthesis using exact neighborhood matching. Comput. Graph. Forum, 26(2):131–142, 2007.

[20] Leandro Tonietto and Marcelo Walter. Morphing textures with texton masks. In SIBGRAPI '04: Proceedings of the XVII Brazilian Symposium on Computer Graphics and Image Processing, pages 348–353, Washington, DC, USA, 2004. IEEE Computer Society.

[21] Leandro Tonietto, Marcelo Walter, and Claudio Jung. Patch-based texture synthesis using wavelets. In SIBGRAPI '05, page 383, Washington, DC, USA, 2005. IEEE Computer Society.

[22] Greg Turk. Texture synthesis on surfaces. In SIGGRAPH '01: Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 347–354, New York, NY, USA, 2001. ACM Press.

[23] Bruce Walter, Sumanta N. Pattanaik, and Donald P. Greenberg. Using perceptual texture masking for efficient image synthesis. Comput. Graph. Forum, 21(3), 2002.

[24] Li-Yi Wei and Marc Levoy. Fast texture synthesis using tree-structured vector quantization. In SIGGRAPH '00, pages 479–488, New York, NY, USA, 2000. ACM Press/Addison-Wesley Publishing Co.

[25] Li-Yi Wei and Marc Levoy. Texture synthesis over arbitrary manifold surfaces. In Eugene Fiume, editor, SIGGRAPH 2001, Computer Graphics Proceedings, pages 355–360. ACM Press / ACM SIGGRAPH, 2001.

[26] Y. Q. Xu, B. Guo, and H. Shum. Chaos mosaic: Fast and memory efficient texture synthesis. Technical Report MSR-TR-2000-32, Microsoft Research, 2000.

[27] Alexey Zalesny and Luc Van Gool. A compact model for viewpoint dependent texture synthesis. Lecture Notes in Computer Science, 2018:124–143, 2001.

[28] Steve Zelinka and Michael Garland. Interactive texture synthesis on surfaces using jump maps. In EGRW '03: Proceedings of the 14th Eurographics workshop on Rendering, pages 90–96, Aire-la-Ville, Switzerland, 2003. Eurographics Association.