ADAPTIVE PARAMETER CONTROL FOR IMAGE MOMENT-BASED PAINTERLY RENDERING

Michio SHIRAISHI and Yasushi YAMAGUCHI
University of Tokyo, Komaba, Tokyo, Japan

ABSTRACT

Nonphotorealistic rendering is recognized as one of the major topics in the recent computer graphics community. The general issue in this research area is to develop methods that imitate artists' rendering styles, such as oil painting, technical illustration, etc. Painterly rendering is a nonphotorealistic rendering method that deals with effects like those of oil paintings. It is usually achieved by painting many brush-stroke textures on a canvas image. The image moment-based painterly rendering algorithm controls each stroke to fit the local source image. In this paper, we propose a method that incorporates depth information into the image moment-based painterly rendering algorithm. The original algorithm uses several constant parameters; by changing these parameters according to the depth information, more satisfactory results can be obtained.

1. INTRODUCTION

Nonphotorealistic rendering (NPR) is an emerging research area in computer graphics that pursues various artistic styles other than photorealism. The goals of this research area include images that emulate the styles of paintings, artistic illustrations, technical illustrations, etc. Painterly rendering techniques yield painting-like images with a hand-crafted look. As introduced in Section 2, painterly rendering generally includes a process of placing many brush-stroke textures on the canvas. These brush strokes have attributes such as orientation, size, etc. The key to these algorithms is how to determine the attributes of each stroke so that they deliver a pleasing result. In this paper, we propose a new parameter control method for the image moment-based algorithm (Shiraishi and Yamaguchi, 2000). This method determines the attributes so that each stroke fits the local features of the source image. The paper proceeds as follows: Section 2 gives an overview of painterly rendering. Section 3 briefly describes the image moment-based painterly rendering algorithm. Section 4 introduces the parameter control method by depth. Finally, we conclude the paper in Section 5.

Figure 1: (a) The source image. (b) The resulting image.

2. PAINTERLY RENDERING

The input to the painterly rendering process can take several forms: some methods take still images (Haeberli, 1990) (Hertzmann, 1998), some three-dimensional objects (Meier, 1996), and some video sequences (Litwinowicz, 1997). The output likewise spans various media, such as still images and video sequences. In the system we present, both the input and the output are two-dimensional still images. The system converts the input image (Figure 1(a)) into the painterly rendered output image (Figure 1(b)). The final painterly-looking image is composited by painting many brush strokes on the canvas. As shown in Figure 2, we must determine the stroke distribution, the attributes of each stroke, and the painting order before the strokes are actually painted. This preparatory step is organized into three sequential substeps. First, the stroke distribution is determined in order to fix the locations where strokes are placed on the canvas; a jittered grid is generally adopted in previous work (Litwinowicz, 1997) (Hertzmann, 1998). The following substep determines the attributes of every stroke: its color, location, orientation, and size. These attributes are used when a brush stroke is rendered on the canvas. Since these attributes control the look-and-feel of

Figure 2: A process of painterly rendering (Shiraishi and Yamaguchi, 2000). Preparation determines the stroke distribution, the stroke attributes (color (r,g,b), location (xc,yc), orientation θ, and size (w,l)), and the painting order; painting and composition then follow.

Figure 4: A process of stroke attributes determination: (a) A source image. (b) A stroke color. (c) A local source image. (d) A color difference image. (e) An image of the equivalent rectangle. (f) A rendered brush stroke.

Figure 3: Stroke attributes: location (xc, yc), orientation θ, width w, length l, and color C = (r, g, b).

This image becomes the target to be approximated by one brush stroke. The size s of the window is given by the user.

4. A color difference image is generated by examining the color of each pixel in the local source image (see Figure 4(d)). Each pixel value of the color difference image represents similarity to the stroke color, based on the difference from it in the CIE Luv color space (Glassner, 1995).

5. The equivalent rectangle is calculated using image moments so that it approximates the color difference image; the image of the equivalent rectangle is shown in Figure 4(e). It has the same zeroth, first, and second image moments as the color difference image (Freeman et al., 1998).

6. Finally, the remaining stroke attributes (i.e., location, size, and orientation) are taken from those of the equivalent rectangle. The resulting stroke is shown in Figure 4(f).
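Step 5 can be sketched with the standard equivalent-rectangle formulas based on image moments (Freeman et al., 1998); the function name and NumPy array conventions below are our own illustration, not the original implementation:

```python
import numpy as np

def equivalent_rectangle(diff):
    """Equivalent rectangle of a grayscale color difference image.

    `diff` is a 2D array whose pixel values weight how strongly each
    pixel should attract the stroke.  Returns the rectangle center
    (xc, yc), orientation theta, width w and length l, computed from
    the zeroth, first, and second image moments.
    """
    ys, xs = np.mgrid[0:diff.shape[0], 0:diff.shape[1]]
    m00 = diff.sum()                              # zeroth moment
    xc = (xs * diff).sum() / m00                  # centroid (first moments)
    yc = (ys * diff).sum() / m00
    # normalized second central moments
    a = ((xs - xc) ** 2 * diff).sum() / m00
    b = 2.0 * ((xs - xc) * (ys - yc) * diff).sum() / m00
    c = ((ys - yc) ** 2 * diff).sum() / m00
    theta = 0.5 * np.arctan2(b, a - c)            # orientation
    root = np.sqrt(b ** 2 + (a - c) ** 2)
    l = np.sqrt(6.0 * (a + c + root))             # length (major axis)
    w = np.sqrt(6.0 * (a + c - root))             # width (minor axis)
    return xc, yc, theta, w, l
```

For a uniform axis-aligned rectangular region of the difference image, the recovered width, length, and orientation match the region itself, which is why the rectangle is called "equivalent".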

each stroke, the strategy for this step is crucial to every painterly rendering algorithm. The last substep sets the rendering order of the strokes. In the composition step, the strokes are overpainted on the canvas one after another, so previously drawn strokes are hidden by strokes drawn later. Therefore, the rendering order of the strokes should be carefully planned.
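The preparatory substeps described above can be sketched as follows; the grid spacing, the dictionary representation of a stroke, and the size-based sort key (width times length, largest first) are illustrative choices, not the paper's code:

```python
import random

def jittered_grid(width, height, spacing, seed=0):
    """Stroke distribution: one tentative location per grid cell,
    jittered to a random position inside the cell."""
    rng = random.Random(seed)
    return [(gx + rng.uniform(0.0, spacing), gy + rng.uniform(0.0, spacing))
            for gy in range(0, height, spacing)
            for gx in range(0, width, spacing)]

def painting_order(strokes):
    """Painting order: large strokes first, so smaller strokes
    painted later refine the detail on top of them."""
    return sorted(strokes, key=lambda s: s["w"] * s["l"], reverse=True)
```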

3. IMAGE MOMENT-BASED PAINTERLY RENDERING

3.1 Overview

The image moment-based painterly rendering (Shiraishi and Yamaguchi, 2000) is based on the idea that a stroke approximates the corresponding region of the input source image. The main contribution of this work is a new method for determining the attributes of rectangular brush strokes. Figure 3 shows the stroke attributes: color (r, g, b), location (xc, yc), orientation θ, width w, and length l. The attributes are determined for each stroke by the following procedure:

1. The position given in the stroke distribution step is used as the tentative location for the stroke attributes determination algorithm. Figure 4(a) shows an example of the tentative location; it is set to the center of the white rectangle.

2. The color attribute is determined by sampling the source image at that location, as shown in Figure 4(b).

3. A square region is cropped from the source image, as shown in Figure 4(c). This region is called the window.

4. ADAPTIVE PARAMETER CONTROL WITH DEPTH

One of the important properties that real paintings possess but current synthesized images lack is the sense of depth. Since the canvas lies in a 2D plane, the image cannot represent depth explicitly, and artists have long struggled to preserve a sense of depth on the canvas. In this paper, we suggest a method to utilize depth information for painterly image synthesis. The representation of the depth information is described in Section 4.1. The following three sections show how to incorporate the depth information into the painterly rendering.

Figure 6: (a) A resulting image. (b) The magnified image corresponding to the upper black square. (c) The magnified image corresponding to the lower square.

Figure 5: (a) A source image. (b) The depth image corresponding to (a).

4.1 DEPTH IMAGE

larger strokes. In the image moment-based painterly rendering algorithm, the stroke size is limited by the window size s. Therefore, the size of the resulting stroke can be controlled by changing the window size for each stroke. In order to obtain the sense of depth, the window size s_stroke is controlled by the depth value as follows:

s_stroke = s_min + (d_stroke - d_min) / (d_max - d_min) × (s_max - s_min),

As shown in Section 3, the original image moment-based painterly rendering algorithm takes a 2D image as its input. In addition, we assume that a depth image is available as an input. Figure 5 shows an example of a depth image: the intensity value of each pixel represents the distance from the viewpoint to the object. The depth image can easily be obtained through stereo matching, image synthesis, 3D digitizers, and so on. In this paper, we use a depth image synthesized by a simple ray-tracing method.

where s_min and s_max are predefined values that form the range of the window size, d_min and d_max are the minimum and maximum values in the depth image, and d_stroke is the depth value at the initial location of the stroke. A larger d_stroke therefore results in a larger window size s_stroke. Figure 6 shows an example of window-size control by depth. Figure 6(b) shows a magnified view of the background region of Figure 6(a); it is painted with larger strokes than other regions, such as the one in Figure 6(c). As this example shows, the sense of depth is enhanced by varying the stroke sizes.
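The window-size rule above can be sketched directly; the function name is ours, and d_stroke is assumed to lie within [d_min, d_max] as in the paper's formula:

```python
def window_size(d_stroke, d_min, d_max, s_min, s_max):
    """Window size for a stroke at depth d_stroke (Section 4.2).

    Linearly interpolates between s_min and s_max according to the
    stroke's normalized depth, so farther strokes get larger windows.
    """
    t = (d_stroke - d_min) / (d_max - d_min)
    return s_min + t * (s_max - s_min)
```

A stroke in the far background (d_stroke = d_max) thus receives the full window s_max, while the nearest stroke receives s_min.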

4.2 WINDOW-SIZE ADAPTATION BY DEPTH

4.3 COLOR DIFFERENCE IMAGE MASKING BY DEPTH IMAGE

The sense of depth in paintings is enhanced in many ways, such as by the occlusion of objects. One of the elements contributing to the sense of depth is the stroke size: objects closer to the viewer are painted with smaller strokes, while objects far away are painted with

The original image moment-based method assumes a square window. A stroke is supposed to approximate one object within the window; therefore, a problem arises when the window contains two or more objects of the same color.

Figure 7: (a) Source image. The black square indicates the window. (b) The cropped source image. (c) The color difference image. (d) The depth difference image. (e) The modified color difference image.

Figure 8: (a) Sort by depth. (b) Sort by size.

In such cases, the strokes become inappropriately large because these objects are treated as one object. This problem can be partly solved by using the depth image. Objects with depth differences in the same window are separated by the depth information: only the pixels whose depth values are close to the depth at the center are used to calculate the equivalent rectangle. Figure 7 shows images generated by this depth-masking technique. Figure 7(a) is the source image, with a black square indicating a window. A close-up of the window is shown in Figure 7(b). In this case, since the two objects have similar colors, the color difference image inevitably covers both objects, as shown in Figure 7(c), and the resulting equivalent rectangle lies over the two objects. The depth information can solve this problem. The depth difference image is generated in just the same way as the color difference image: the pixel value is larger when the depth at the pixel is close to that at the center pixel. Figure 7(d) shows an example of the depth difference image. In this case, the object to be painted is the right column, so the pixels corresponding to the right column are chosen. The color difference image can be improved by using this depth difference image: Figure 7(e) is generated by masking the default color difference image (Figure 7(c)) with the depth difference image (Figure 7(d)). By calculating the equivalent rectangle of this modified color difference image, a more appropriate stroke can be obtained.
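A minimal sketch of this masking step, assuming both images are 2D float arrays; the exponential falloff and the depth_scale parameter are our own illustrative choices, since the paper does not specify the exact form of the depth difference image:

```python
import numpy as np

def masked_color_difference(color_diff, depth, center, depth_scale=32.0):
    """Mask a color difference image with a depth difference image
    (Section 4.3).

    The depth difference image is larger where a pixel's depth is
    close to the depth at the window center, so pixels belonging to
    other objects are suppressed before the equivalent rectangle is
    computed.
    """
    cy, cx = center
    depth_diff = np.exp(-depth_scale * np.abs(depth - depth[cy, cx]))
    return color_diff * depth_diff
```

Feeding the masked result to the equivalent-rectangle computation keeps the stroke on the single object at the window center.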

of the final image. The authors' previous work adopted sorting by stroke size, i.e., width × length. When depth information is available, there is another approach: sorting by depth. This idea is also employed in the work by Meier (Meier, 1996). After all stroke attributes are determined, the strokes can be sorted from far to near. Figure 8 shows a comparison between the two methods. Figure 8(a) is created by sorting brush strokes by their depth; it successfully eliminates the artifacts around the contours of the objects.
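The far-to-near ordering is a one-line sort, assuming each stroke stores the depth sampled at its location and that larger depth values mean farther from the viewer, as in the depth images of Section 4.1:

```python
def sort_by_depth(strokes):
    """Order strokes far-to-near (Section 4.4), so that strokes near
    the viewer are painted last and overpaint the distant ones."""
    return sorted(strokes, key=lambda s: s["depth"], reverse=True)
```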

5. CONCLUSION AND DISCUSSION

This paper presented a method to incorporate depth information into the image moment-based painterly rendering method. By using the depth information, the following improvements are achieved:

- The average brush stroke size changes according to depth, so that the resulting image conveys a sense of depth. This is done by controlling the window size based on the depth value at the stroke location.

- The depth can be used to separate objects of similar color, while the original method distinguishes objects by color only.

- The painting order of the brush strokes can be determined by depth, which preserves the smooth contours of objects.

4.4 BRUSH STROKE SORTING BY DEPTH

REFERENCES

As introduced in Section 2, the composition order of brush strokes is one of the main factors that contribute to the quality

Michio Shiraishi and Yasushi Yamaguchi, 2000. An Algorithm for Automatic Painterly Rendering based on Local Source Image Approximation. First International Symposium on Non-Photorealistic Computer Graphics Proceedings (NPAR 2000).

Paul Haeberli, 1990. Paint by Numbers: Abstract Image Representations. Computer Graphics, 24(4):207-214.

Aaron Hertzmann, 1998. Painterly Rendering with Curved Brush Strokes of Multiple Sizes. In Michael F. Cohen, editor, SIGGRAPH 98 Conference Proceedings, Annual Conference Series, pages 453-460.

Barbara J. Meier, 1996. Painterly Rendering for Animation. In Holly Rushmeier, editor, SIGGRAPH 96 Conference Proceedings, Annual Conference Series, pages 477-484.

Peter Litwinowicz, 1997. Processing Images and Video for an Impressionist Effect. In Turner Whitted, editor, SIGGRAPH 97 Conference Proceedings, Annual Conference Series, pages 407-414.

Andrew S. Glassner, 1995. Principles of Digital Image Synthesis. The Morgan Kaufmann Series in Computer Graphics and Geometric Modeling, Morgan Kaufmann.

William T. Freeman, David B. Anderson, Paul A. Beardsley, Chris N. Dodge, Michal Roth, Craig D. Weissman, William S. Yerazunis, Hiroshi Kage, Kazuo Kyuma, Yasunari Miyake and Ken-ichi Tanaka, 1998. Computer Vision for Interactive Computer Graphics. IEEE Computer Graphics and Applications, 18(3):42-53.

Figure 9: A result.

ABOUT THE AUTHORS

Michio Shiraishi is a PhD student in the Division of International and Interdisciplinary Studies, the University of Tokyo. His research interests are computer graphics, especially image synthesis. He received a BE and an MA in systems science from the University of Tokyo in 1997 and 1999. He can be reached by e-mail: [email protected]; by fax: +81-3-5454-6890; or through the postal address: Yasushi Yamaguchi Lab., The Division of International and Interdisciplinary Studies, The University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo, Japan.

Yasushi Yamaguchi is an associate professor in the Department of Graphics and Computer Sciences of the University of Tokyo. His research interests lie in computer-aided design and geometric modeling, including parametric modeling, topology models for B-reps, surface/surface intersection, and surface interrogation. He is a member of ACM SIGGRAPH, the IEEE Computer Society, and SIAM. He received a BE in precision machinery engineering and a Dr. Eng. in information engineering from the University of Tokyo in 1983 and 1988, respectively. He was an assistant professor at Tokyo Denki University from 1989 to 1993. He can be reached by e-mail: [email protected]; by fax: +81-3-5454-6890; or through the postal address: Department of Graphics and Computer Sciences, The University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo 153-8902, Japan.
