Brushstroke Rendering Algorithm for a Painting Robot

Artur I. Karimov, Dmitriy O. Pesterev, Valerii Y. Ostrovskii, Denis N. Butusov, Ekaterina E. Kopets
Department of Computer Aided Design
Saint Petersburg Electrotechnical University "LETI"
Saint Petersburg, Russia
[email protected]

Abstract—The painting robot ARTCYBE is a machine developed to imitate the ability of a human painter to create realistic images with acrylic paints and a brush. In this paper a brushstroke rendering algorithm for this robot is presented. The algorithm converts a source bitmap image into a set of brushstrokes which we call a brushstroke map. The brushstroke map can be unambiguously transformed into a list of commands driving the painting robot. Simulated paintings obtained by the proposed algorithm are presented. Advantages of the described algorithm in comparison to other known brushstroke rendering algorithms are discussed.

Keywords—painting machine; robotics; brushstroke generation; computer art
I. INTRODUCTION

At present, a variety of brushstroke rendering approaches have been developed. The main application of existing algorithms is non-photorealistic image rendering, mimicking artistic paintings with the use of source photographic images. Only a few techniques are aimed at implementation in painting machines. In the latter field, notable results were achieved by T. Lindemeier's team at the University of Konstanz [1]. This team developed a painterly rendering algorithm based on visual feedback and validated it on the e-David robotic setup. Another team whose impact on the subject is worth mentioning is the research group headed by H. Lipson, whose robotic painter PIX18 won the 2017 Robot Art contest [2].

The most common idea behind painterly rendering algorithms suitable for realization in a painting robot is to place brushstrokes along gradient isolines detected in some way in the source image. This approach allows generating curved brushstrokes that realistically imitate the manner of a human artist in various painting styles, from impressionism to pointillism. One of the first algorithms based on this idea was described by A. Hertzmann [3], who used a vector field derived from a photograph to drive brushstroke generation. More sophisticated algorithms extract a tensor field which, coupled with a manual editing tool, can yield more expressive and somewhat more precise results [4]. In these approaches the length of a brushstroke is usually limited by a user setting, and the length of an elementary brushstroke segment is taken proportional to the brush width. Various brush sizes allow detailing the image as precisely as needed.
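As a minimal illustration of this idea (not the exact procedure of any cited work), a stroke direction field can be obtained by rotating the image gradient by 90 degrees, so that strokes follow gradient isolines. A Python/NumPy sketch, assuming a grayscale source array `img`:

```python
import numpy as np

def stroke_direction_field(img, eps=1e-8):
    """Unit vectors perpendicular to the image gradient (i.e. along isolines).

    img : 2-D array of luminance values.
    Returns the (dx, dy) components of the local stroke direction.
    """
    gy, gx = np.gradient(img.astype(float))  # gradients along rows (y) and columns (x)
    mag = np.hypot(gx, gy) + eps              # gradient magnitude, avoid division by zero
    # Rotating the gradient (gx, gy) by 90 degrees gives the isoline direction.
    return -gy / mag, gx / mag
```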

Another approach to brushstroke map generation consists in using optimization techniques. C. Aguilar and H. Lipson proposed a genetic algorithm that considers each brushstroke as an individual in a population, with the color distance between source image pixels and the brushstroke taken as the fitness function [5]. The researchers illustrate the convergence of the described algorithm, but the resulting image appearance is far from realistic. O. Deussen [6] proposed using Lloyd's relaxation algorithm to generate uniformly distributed stipples represented by Voronoi cells covering the generated image. Later, T. Lindemeier adapted this approach to brushstroke rendering [5]. The main disadvantages of optimization algorithms are their high computational cost and the fact that their convergence is not always guaranteed.

An essential problem in brushstroke rendering is brush simulation, which is used both to create a virtual effect of human painting and to drive the feedback in brushstroke generating algorithms. The surveys by Hegde [7] and Hertzmann [8] present several approaches, including texture replication, geometric brushstroke simulation, 3D brush simulation, etc. In the e-David painting robot this problem is solved by introducing visual feedback from a photo camera, which eliminates the need for an accurate brush simulation [1].

Image preparation is also a subject of study, because brushstrokes of a certain finite width usually cannot express very small details. Additionally, high-frequency patterns and noise in a source image can affect rendering quality. A common solution is blurring the source image with a blur radius proportional to the radius of the brush [7].

However, some important problems of machine painting are missed or insufficiently covered in the literature, probably because most researchers focus on computer image simulation rather than on a real painting implementation.

A. Quantity and density of brushstrokes

The more brushstrokes are generated, the more time the painting machine needs. Lindemeier's estimate shows that the Lloyd's relaxation based algorithm generates 20% fewer brushstrokes than Hertzmann's algorithm, which saves about 3 hours of working time [1]. For realistic paintings an algorithm should generate a dense brushstroke map with minimal gaps between brushstrokes but also with minimal overlaps.



The latter requirement is not addressed in Lindemeier's algorithm because of the visual feedback, which allows covering the unpainted background wherever needed without special algorithm modifications, and because of a special approach to color generation based on a form of dithering: strokes of primary colors placed near each other constitute regions of a certain intermediate color. This feature is introduced because no color mixing device is provided in the e-David robot.


B. Quantity of colors

Using a palette and a brush, a skilled human artist can easily obtain almost every color allowed by the properties of the pigments in the paints he utilizes. As every brushstroke can have its unique tint, the number of colors in a human painting is theoretically unlimited, but to date there is no painting robot able to mix colors in the manner of a living artist. Thus, the problem of colorful robotic painting remains unsolved. Our painting machine, ARTCYBE, is intended to be able to utilize arbitrary colors like a human painter thanks to a special color mixer. However, mixing paints is a lengthy process and, moreover, it leads to additional paint consumption. Therefore, the quantity of colors in the final image should be restricted to the minimal feasible value. A possible solution to the color minimization problem is reducing the color depth of the source image before further processing [9]. In this paper we show that reducing the color depth of the final image instead can provide more precise results.
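For illustration only (the present paper instead clusters brushstroke colors, as described in Section V), a straightforward way to reduce the color depth of a source image is uniform per-channel quantization. A hypothetical NumPy sketch:

```python
import numpy as np

def reduce_color_depth(img, levels=4):
    """Uniformly quantize each RGB channel to `levels` values.

    img : uint8 array of shape (H, W, 3).
    The result uses at most levels**3 distinct colors.
    """
    step = 255.0 / (levels - 1)
    return (np.round(img.astype(float) / step) * step).astype(np.uint8)
```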

Fig. 1. Geometrical interpretation of the chosen brushstroke model

C. Order of brushstroke application

While the order in which brushstrokes are applied in a single-layered human or computer-simulated painting can be arbitrary, there are some limitations for a painting machine. The most general requirement follows from the fact that during the painting process residuals of previously used paints contaminate the color mixer and the brush. Thus, although our painting machine has a cleaning subsystem, it is preferable to compose each next color from primary paints. Also, the first strokes should be composed of "weak" paints (e.g. yellow and white), whose traces do not affect the colors of the following brushstrokes much, while mixtures of "strong" paints (e.g. blue and black) should be applied later.

Therefore, the generalized problem considered in this paper is to design a brushstroke rendering algorithm satisfying three main requirements:

1. The algorithm should generate dense brushstroke maps with no gaps and minimal overlaps.
2. The algorithm should use the minimal possible number of colors.
3. The order of brushstroke application should be defined by the properties of the paints and the painting machine.

II. BRUSHSTROKE MODEL

Our painting machine uses a brush with a round-shaped tip, opaque acrylic paints and a white canvas. The drying time of the paints is about an hour, so paints can mix with each other at the borders of brushstrokes, but their viscosity prevents them from diffusing between different strokes. This allows us to exclude flow effects from the simulation and to use a simple model of a brushstroke (Fig. 1). In our algorithm a brushstroke is represented by the trace of a circle moving along a path consisting of straight segments connected sequentially to each other.

In order to let brushstrokes densely cover the canvas without excessive overlap, we attribute to every brushstroke a core part and a border part, as shown in Fig. 2 (a). The radius of the core is about half of the brush radius. We place the initial seed points on a regular grid with cell side equal to 2R, where R is the brush radius. The seed point distribution is illustrated in Fig. 2 (b).

Fig. 2. Brushstroke model: (a) border and core, (b) seed points distribution

III. THE PAINTING ALGORITHM

The first stage of our algorithm is based on the ideas described by Hertzmann [3]. If the current seed point has not been overpainted by one of the previously applied strokes, a new brushstroke starts from it and continues along the curve normal to the image gradient until it reaches a maximal length or until the color distance between the next intended brushstroke segment and the corresponding region of the source image exceeds a predefined value. We allow brushstroke borders to overlap freely, but their cores are prohibited from overlapping by more than 20%. The latter value was obtained experimentally.

After execution of the first stage, the canvas is covered with uniformly distributed brushstrokes lying in accordance with the image gradient. However, a significant number of unpainted gaps remains. Their size is often comparable to the brushstroke width, and they have arbitrary orientation with no relation to the gradient.
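The first stage can be sketched as follows. This is an illustrative Python reading of the procedure, not the actual implementation; `direction_field` and `color_error` are hypothetical helpers (the former could be built as in the sketch of Section I), and the core-overlap check is omitted for brevity.

```python
import numpy as np

def trace_stroke(seed, src, painted_mask, brush_r, max_len, err_tol,
                 direction_field, color_error):
    """Illustrative sketch of the first stage for a single seed point.

    seed            : (x, y) seed point on the canvas
    src             : source image array, shape (H, W, 3)
    painted_mask    : boolean (H, W) array, True where stroke cores already lie
    direction_field : callable (x, y) -> unit vector normal to the image gradient
    color_error     : callable (p0, p1) -> color distance between the candidate
                      segment and the corresponding region of the source image
    """
    x, y = seed
    if painted_mask[int(y), int(x)]:
        return None                          # seed already overpainted, skip it
    path = [(x, y)]
    step = brush_r                           # elementary segment length ~ brush radius
    while (len(path) - 1) * step < max_len:
        dx, dy = direction_field(x, y)       # follow the gradient isoline
        nx, ny = x + step * dx, y + step * dy
        if not (0 <= nx < src.shape[1] and 0 <= ny < src.shape[0]):
            break                            # the stroke would leave the canvas
        if color_error((x, y), (nx, ny)) > err_tol:
            break                            # segment no longer matches the source
        path.append((nx, ny))
        x, y = nx, ny
    return path
```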



To paint over these gaps, the second stage of the algorithm is applied. In each unpainted pixel, a new brushstroke is generated with a color obtained by averaging the colors of the source image pixels inside a circle centered at the pixel with a radius equal to the brush radius (this corresponds to the round brush tip). After that, several random directions and brushstroke segment lengths are tested, and the candidate with the minimal average color error is taken. The second stage of the algorithm can be performed several times to minimize the number of unpainted pixels, but usually two iterations are enough.

There can be no perfect correspondence between the simulated canvas and the real painted image due to painting machine imperfections and discretization effects in the simulated image. We use a relatively low resolution (e.g. 512 x 512 pixels) to obtain reasonable program running times, and brushstroke radii of 2–8 pixels, similar to those used by Hertzmann [3].
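A sketch of this gap-filling stage in Python; the helper `segment_error`, which scores a candidate segment against the source image, is hypothetical, and the parameter values are illustrative rather than those used in our experiments.

```python
import numpy as np

def fill_gap_stroke(px, py, src, brush_r, segment_error, n_trials=16, rng=None):
    """Illustrative sketch of the gap-filling (second) stage for one unpainted pixel.

    The stroke color is the average of source pixels inside a brush-sized disc
    around the pixel; several random directions and lengths are tried and the
    candidate with the lowest color error is kept.
    """
    rng = rng or np.random.default_rng()
    h, w = src.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    disc = (xx - px) ** 2 + (yy - py) ** 2 <= brush_r ** 2
    color = src[disc].mean(axis=0)           # average color under the round brush tip

    best_err, best_end = np.inf, (px, py)
    for _ in range(n_trials):
        angle = rng.uniform(0.0, 2.0 * np.pi)           # random direction
        length = rng.uniform(brush_r, 4.0 * brush_r)    # random segment length
        end = (px + length * np.cos(angle), py + length * np.sin(angle))
        err = segment_error((px, py), end, color)
        if err < best_err:
            best_err, best_end = err, end
    return {"start": (px, py), "end": best_end, "color": color}
```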


In the algorithm described, two error measures were tested. The first is the 1-norm in the RGB colorspace:

$$ e_1 = \frac{1}{n}\sum_{j=1}^{n}\;\sum_{i\in\{R,G,B\}} \left| c_i - p_{ij} \right| $$

where $c_i$ is a component of the brushstroke color, $p_{ij}$ is the corresponding component of a pixel color, and $n$ is the number of engaged pixels in the source image. The second measure is the 2-norm (Euclidean norm) in the CIE-Lab colorspace:

$$ e_2 = \frac{1}{n}\sum_{j=1}^{n} \sqrt{\sum_{i\in\{L,A,B\}} \left( c_i - p_{ij} \right)^2 } $$
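For reference, both measures can be computed as follows. This is a minimal Python sketch assuming scikit-image is available for the RGB-to-CIELAB conversion and that `c` and `pixels` hold 8-bit RGB values.

```python
import numpy as np
from skimage.color import rgb2lab

def e1_rgb(c, pixels):
    """Mean 1-norm color error in RGB; c is (3,), pixels is (n, 3)."""
    c = np.asarray(c, dtype=float)
    pixels = np.asarray(pixels, dtype=float)
    return np.abs(pixels - c).sum(axis=1).mean()

def e2_lab(c, pixels):
    """Mean Euclidean color error in CIE-Lab (inputs are 8-bit RGB)."""
    c_lab = rgb2lab(np.asarray(c, dtype=float).reshape(1, 1, 3) / 255.0)[0, 0]
    p_lab = rgb2lab(np.asarray(pixels, dtype=float).reshape(-1, 1, 3) / 255.0)[:, 0]
    return np.linalg.norm(p_lab - c_lab, axis=1).mean()
```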


Though the CIE-Lab colorspace was designed to be more precise in the sense of human perception [10], we found that the subjective difference between the resulting images depends more on the other algorithm settings than on the chosen error measure.

IV. EXPERIMENTAL RESULTS

All experiments were carried out in the MATLAB environment. Several common test images were considered. Since our painting machine is incapable of changing the brush automatically for now, only one brush size was used. For an image with a maximum size of 13x19 cm, 1–2 mm brushes should be selected to achieve fine detail and 3–4 mm brushes to mimic a rough etude. In Fig. 3, a 3 mm brush was used to "paint" a 13x13 cm image "Peppers" (top) and a 4 mm brush was used to paint a 13x13 cm picture "House" (bottom).

Fig. 3. Simulated paintings generated by the algorithm; (a) is a source image, (b) is a rendered image

V. BRUSHSTROKE MAP POST-PROCESSING

Three additional options, or subroutines, should be added to the above-mentioned algorithm to make it practically applicable. First, the quantity of colors should be minimized. In our realization, this quantity k is usually set by the user. The colors of the brushstrokes are clustered with the k-means algorithm. Then, after determining which brushstroke belongs to which cluster, the colors are averaged within the clusters. Thus, each resulting brushstroke color belongs to one of the k final colors subsequently used by the painting robot. Earlier, color clusterization over the pixels of the source image was proposed [9]. That allowed performing image segmentation before brushstroke rendering and made the algorithm slightly less computationally expensive, but the second stage of the algorithm described here necessarily increases the number of colors used, so a repeated clusterization procedure would be needed anyway.

The second option is that the brushstrokes should be sorted by their positions to optimize the trajectory of the machine's painting tool. A proper technique is to order them in a head-by-tail manner, such that the next brushstroke in the brushstroke list is the one whose starting point is closest to the ending point of the previous brushstroke. This sorting is required for every color cluster. After that, the quantity of brushstrokes can be reduced by connecting those brushstroke pairs where the distance between ending and starting points is less than the brush width and whose total length is less than the maximal allowed length. This also saves working time.
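The head-by-tail ordering can be sketched as a simple greedy procedure. An illustrative Python version operating on one color cluster (stroke merging is omitted):

```python
import numpy as np

def order_head_by_tail(strokes):
    """Greedy head-by-tail ordering of the strokes of one color cluster.

    strokes : list of dicts with 'start' and 'end' points (x, y).
    The next stroke in the ordered list is always the one whose starting point
    is closest to the ending point of the previous stroke.
    """
    remaining = list(strokes)
    ordered = [remaining.pop(0)]
    while remaining:
        tail = np.asarray(ordered[-1]["end"], dtype=float)
        dists = [np.linalg.norm(np.asarray(s["start"], dtype=float) - tail)
                 for s in remaining]
        ordered.append(remaining.pop(int(np.argmin(dists))))
    return ordered
```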

The final subroutine needed is converting the brushstroke list into a PLT file, a simple text file format used by plotters. Compared to file formats such as SVG, it has a simpler syntax and is compatible with many existing pen plotters as well as with the Dobot robot used in our experimental setup.
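A minimal sketch of such a conversion, assuming HPGL-style commands (IN, SP, PU, PD) and the conventional 40 plotter units per millimetre; the exact command subset accepted by a particular plotter or by the Dobot setup may differ.

```python
def strokes_to_plt(strokes, filename, units_per_mm=40):
    """Write an ordered brushstroke list as a minimal HPGL-style PLT file.

    Each stroke is a dict with a 'path' of (x, y) points in millimetres.
    Uses the classic convention of 40 plotter units per mm and the basic
    IN/SP/PU/PD commands.
    """
    def u(value_mm):                     # millimetres -> integer plotter units
        return int(round(value_mm * units_per_mm))

    with open(filename, "w") as f:
        f.write("IN;SP1;\n")             # initialize plotter, select pen 1
        for stroke in strokes:
            (x0, y0), rest = stroke["path"][0], stroke["path"][1:]
            f.write(f"PU{u(x0)},{u(y0)};\n")        # travel move with pen up
            for x, y in rest:
                f.write(f"PD{u(x)},{u(y)};\n")      # drawing move with pen down
        f.write("PU0,0;\n")              # return to origin with pen up
```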

VI. CONCLUSION AND DISCUSSION

In this paper an algorithm for brushstroke map generation is proposed. It is used to drive the ARTCYBE painting robot, which mechanically applies acrylic paints to a canvas with a brush. The paintings created by the robot are intended to resemble those of human painters. The results obtained by the algorithm are presented, and we believe it can achieve satisfactory painting "aesthetics". A numerical error between the source and the painted image is not given, as it cannot adequately evaluate the painting "quality" as perceived by humans, so expert evaluation will be involved in future work. The mechanical part of the robot is now under development and we hope to obtain acrylic paintings soon.

Machine painting differs from painting performed by a human being in several major aspects.


First, a human uses image semantics. Though the algorithm cannot "understand" what is depicted in a source image, a number of researchers have shown that additional information, such as the separation of foreground planes from the background, face recognition, etc., can help the algorithm obtain more human-like results. This feature will be taken into account in the next version of our algorithm. Second, a human artist applies brushstrokes covering each other multiple times even in the one-layer "alla prima" technique. The machine acts more rationally, requiring fewer movements and less paint. On the other hand, human artists use many effects of the medium, but the machine can handle only a few of them, as the experience of e-David shows.


The color minimization requirement can seem slightly artificial, as the negative effects of reducing color depth are well known. Our experience shows that color clusterization gives satisfactory results, but different images need different numbers of colors to retain a realistic appearance, and this choice is not a trivial problem. In the presented realization it is made by the machine operator, but we plan to automate this procedure in future research.

REFERENCES

[1] Lindemeier T., Metzner J., Pollak L., Deussen O. Hardware-Based Non-Photorealistic Rendering Using a Painting Robot. Computer Graphics Forum. 2015. Vol. 34, No. 2. pp. 311–323. DOI: 10.1111/cgf.12562
[2] Robotart: Creative Machines Lab (2015–2017). Available at: https://robotart.org/archives/2017/team/pix18-creative-machines-lab/ (accessed 5 August 2017)
[3] Hertzmann A. Painterly rendering with curved brush strokes of multiple sizes. Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques. ACM. 1998. pp. 453–460. DOI: 10.1145/280814.280951
[4] Zhang E., Hays J., Turk G. Interactive tensor field design and visualization on surfaces. IEEE Transactions on Visualization and Computer Graphics. 2007. Vol. 13, No. 1. DOI: 10.1109/TVCG.2007.16
[5] Aguilar C., Lipson H. A robotic system for interpreting images into painted artwork. International Conference on Generative Art. 2008. Vol. 11.
[6] Deussen O., Hiller S., Van Overveld C., Strothotte T. Floating points: A method for computing stipple drawings. Computer Graphics Forum. 2000. Vol. 19, No. 3. pp. 41–50. DOI: 10.1111/1467-8659.00396
[7] Hegde S., Gatzidis C., Tian F. Painterly rendering techniques: a state-of-the-art review of current approaches. Computer Animation and Virtual Worlds. 2013. Vol. 24, No. 1. pp. 43–64. DOI: 10.1002/cav.1435
[8] Hertzmann A. A survey of stroke-based rendering. IEEE Computer Graphics and Applications. 2003. pp. 70–81. DOI: 10.1109/MCG.2003.1210867
[9] Karimov A., Ostrovskii V., Butusov D. Teoreticheskie i prakticheskie aspekty mashinnoy zhivopisi [Theoretical and practical aspects of machine painting]. Programmnye sistemy i vychislitelnye metody (in Russian). 2016. No. 4. pp. 403–414. DOI: 10.7256/2305-6061.2016.4.21188
[10] McGuire R.G. Reporting of objective color measurements. HortScience. 1992. Vol. 27, No. 12. pp. 1254–1255.
