Video-based Nonphotorealistic and Expressive Illustration of Motion

Byungmoon Kim ([email protected])    Irfan Essa ([email protected])
GVU Center and College of Computing, Georgia Institute of Technology

[Teaser figure: Time-Lapse (left) and Temporal-Flare (right)]

ABSTRACT

We present a semi-automatic approach for adding expressive renderings to images and videos that highlight motion and movement. Our technique relies on motion analysis of video, where the motion information extracted from the image sequence is used to add the expressive information. The first step in our approach is to extract a moving region of the video by segmenting the frame and then grouping regions of compatible motion. In the second step, a user can interactively choose or refine a grouped region that represents the moving object of interest. In the third and final step, the user can apply various visual effects such as temporal-flares, time-lapse, and particle-effects. We have implemented a prototype system that can be used to illustrate and expressively render motions in videos and images with simple user interaction. Our system can deal with most translational and rotational motions without the need for a fixed background.

Keywords: Nonphotorealistic Rendering, Video Processing, Image-based Rendering.

1 INTRODUCTION

Most forms of drawings, illustrations, and cel animations use techniques to effectively illustrate and accentuate the motion of moving objects. Animators choose among possible variations of speed-lines, motion-lines, time-lapse, and temporal-flares to show movement. In videos, these types of effects are often overlooked, as they compromise the photorealism associated with the images and video. Possible exceptions are photographic effects like motion blur [10] and strobe effects [4]. Nonphotorealistic and expressive illustrations are used to show specific types of motion. For example, to show a car moving very fast, a comet tail is attached to it to exaggerate or highlight the motion. This is usually done manually in video editing packages by segmenting the moving objects (using, for example, rotoscoping techniques) and then assigning an effect to the region. Our goal in this paper is to present an approach that seeks to automate parts of this type of motion-effect generation, while allowing for user input to achieve the desired effects.

[Teaser figure: Particle-Effect (left) and Speed-Lines (right)]

We do this by using methods for detecting the motion, segmenting the moving object, and then synthesizing the effect to illustrate the motion. Our approach works in scenarios with moving backgrounds and with hand-held cameras. In addition to showing the motions in different styles on video, our approach allows the rendering of a single image from a video clip that illustrates the motion.

Extracting a moving object and its motion from a video clip provides the essential parameters for various motion effects. While this form of motion segmentation problem has been studied broadly and a number of methods have been proposed, identifying moving objects correctly in various situations still remains a research goal. To this end, we develop an approach for image segmentation (based on [2]) that is simple and, more importantly, effective and efficient for our goals. We start by segmenting, in the first frame, the region to which we want to add expressive motion. We then compute the motion of each segment to the second frame, grouping adjacent segments that have similar velocities. This provides an initial estimate of the moving regions where the motion can be depicted (Figure 1 (a)). We feel that a user needs to be involved to ensure that the rendered effects are as desired; for this purpose, we have added a simple user interface that allows the user to pick the region. The user can also refine the region by adding or deleting segments. This requires a few additional mouse clicks, but it compensates for weaknesses of the automation caused by errors in motion segmentation, primarily due to occlusions, lighting changes, and large motions (Figure 1 (b)). The user interface in our current implementation is simple and demonstrates that user-assisted segmentation can provide useful renderings.
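As a rough illustration of the grouping step, the sketch below assumes that segmentation has already produced, for each segment, a per-segment displacement to the second frame and an adjacency list; adjacent segments are then merged by a breadth-first flood whenever their velocities are close. The function name, data layout, and threshold are illustrative assumptions, not our implementation.

```python
import numpy as np
from collections import deque

def group_by_motion(velocities, adjacency, tol=1.5):
    """Greedily group adjacent segments whose displacement vectors are similar.

    velocities : (N, 2) array of per-segment displacements to the next frame
    adjacency  : list of sets; adjacency[i] = indices of segments touching segment i
    tol        : hypothetical velocity-difference threshold in pixels
    Returns an (N,) array of group labels.
    """
    n = len(velocities)
    labels = -np.ones(n, dtype=int)
    group = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = group
        queue = deque([seed])
        while queue:  # breadth-first flood over the segment adjacency graph
            i = queue.popleft()
            for j in adjacency[i]:
                if labels[j] == -1 and np.linalg.norm(velocities[i] - velocities[j]) < tol:
                    labels[j] = group
                    queue.append(j)
        group += 1
    return labels
```

In practice the grouping criterion also accounts for rotation, as in the compatibility test of Section 3 (Eq. (6)).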

2 RELATED WORK

While the manual generation of effects to accentuate motion is widely used in post-production, only a few methods have been introduced for showing motion automatically. For example, Masuch et al. [9] introduce a method for generating speed-lines from the given motion of a 3D model. This is a technique for creating still images in pen-and-ink styles to show motion on a nonphotorealistically rendered 3D model. We are more interested in extracting motion from a sequence of frames to illustrate motion on video and images, and are to a certain extent motivated by efforts like "Motion without Movement" [5]. Existing video post-processing approaches to create nonphotorealistic


Figure 1: Outline of the algorithm on an example video of 320×200 pixels and 16 frames, on a 2.5 GHz Pentium 4. (a) Segmentation and grouping by motion compatibility (6 sec). Colored dots are segment centers, where the color represents the segment group (notice that most of the background is grouped into a single group); the white line is the displacement of the segment. (b) The user refines the initial segment group (top) by removing 13 segments and adding 2 segments (bottom). (c) The motion is tracked over the following frames (0.5 sec/frame). (d) The speed-lines (top) and motion-lines (bottom) effects are applied.

For each adjacent segment $s_j$, we test whether the residual
$$\mathbf{v}_{s_j} - (\mathbf{v} + \omega\,\mathbf{k} \times \mathbf{s}_j) = \mathbf{v}_{s_j} - \mathbf{v} - \omega(-y_j\,\mathbf{i} + x_j\,\mathbf{j}) \qquad (6)$$
is small, where $\mathbf{i}$ and $\mathbf{j}$ are orthonormal basis vectors of the image plane. If it is, we add $s_j$ to $g$, update $\mathbf{v}$ and $\omega$, and continue to the adjacent segments. This automates some of the manual template refinement.
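A minimal sketch of the test in (6), assuming the segment center $\mathbf{s}_j$ is expressed relative to the group's rotation center and that $\mathbf{v}$ and $\omega$ are the group's current translational and angular velocity estimates; the function name and the threshold are illustrative, not our implementation.

```python
import numpy as np

def is_compatible(v_sj, v, omega, s_j, tol=1.0):
    """Test whether segment velocity v_sj agrees with the rigid motion (v, omega).

    v_sj  : (2,) measured displacement of segment s_j
    v     : (2,) translational velocity of the group
    omega : scalar angular velocity about the out-of-plane axis k
    s_j   : (2,) segment center (x_j, y_j), relative to the rotation center
    tol   : hypothetical residual threshold in pixels
    """
    # omega * (k x s_j) = omega * (-y_j, x_j) in the image plane, as in Eq. (6)
    predicted = v + omega * np.array([-s_j[1], s_j[0]])
    residual = np.asarray(v_sj, dtype=float) - predicted
    return np.linalg.norm(residual) < tol
```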

3.4.1 Tracking the Object

Once we have a segment group that represents an object, we need to track its motion in all the following frames. We assume rigid motion with no occlusion. Since we know the motion from the first frame to the second frame, we can translate and rotate all segments of the object to the second frame. We then search for the matching displacement using the SSD measure in (1). Since the search is performed only over the segments that belong to the object, this step is fast if the object is small. We also assume small rotations per frame and simply search for the best matching displacement of each segment; we do not search over rotations. Instead, the rotation angle is computed from the displacement profile using (4). The example in Fig. 3 shows such a displacement field and the tracking result. The most time-consuming part is the exhaustive search for the minimum SSD value; a possible speed-up is to narrow the search space using velocity extrapolation.
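The exhaustive search could look roughly like the sketch below, which slides one segment's pixel set over a small window in the next frame and keeps the displacement with the lowest sum of squared differences; the search radius of 14 pixels matches the example in Fig. 3, but the function name and interface are assumptions.

```python
import numpy as np

def best_displacement(frame0, frame1, ys, xs, radius=14):
    """Exhaustive SSD search for the displacement of one segment.

    frame0, frame1 : 2-D grayscale arrays (float)
    ys, xs         : integer arrays of the segment's pixel coordinates in frame0
    radius         : half-width of the search window in pixels (assumed value)
    Returns the (dy, dx) displacement with the minimum SSD.
    """
    h, w = frame1.shape
    ref = frame0[ys, xs]
    best, best_ssd = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ty, tx = ys + dy, xs + dx
            valid = (ty >= 0) & (ty < h) & (tx >= 0) & (tx < w)
            if not valid.any():
                continue
            diff = frame1[ty[valid], tx[valid]] - ref[valid]
            ssd = np.mean(diff * diff)  # mean keeps partial overlaps comparable
            if ssd < best_ssd:
                best_ssd, best = ssd, (dy, dx)
    return best
```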

3.4.2 Object Mask and the Leading and Trailing Edges

Once a moving region is chosen, a binary mask can be built by collecting all pixels that belong to the segments of the object. We also allow the user to apply morphological open and close operations to enhance the mask boundary. The leading and trailing edges are important for depicting speed lines and motion lines, and they are relatively easy to build once the object mask is obtained. First, the boundary of the mask is identified: starting from an internal pixel, we traverse in any direction until we hit a boundary pixel and then start following the boundary. Notice that some boundary pixels may be connected to multiple boundary pixels, in which case multiple following directions are possible; we then recursively start a new boundary-following procedure and, when it returns, keep the longest path. This is admittedly crude, but it is simple to implement and yields an ordered circular list of boundary pixels. The second step is computing the outward normals from this list. The last step is identifying whether each boundary pixel lies on the leading or the trailing edge, using the dot product between its outward normal and the velocity of the object. Figure 3 illustrates the leading and trailing edges in blue and red, respectively.
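Given the ordered boundary list and the object velocity, the leading/trailing classification is a dot-product test. The sketch below estimates each outward normal from neighboring boundary points and assumes the boundary is stored in counter-clockwise order; for the opposite orientation, the normal's sign flips. Names are illustrative.

```python
import numpy as np

def classify_boundary(boundary, velocity):
    """Split an ordered boundary into leading and trailing edges.

    boundary : (N, 2) array of (x, y) boundary pixels, assumed counter-clockwise
    velocity : (2,) object velocity
    Returns (leading_idx, trailing_idx) index arrays into `boundary`.
    """
    nxt = np.roll(boundary, -1, axis=0)
    prv = np.roll(boundary, 1, axis=0)
    tangent = nxt - prv                                # central-difference tangent
    # Rotate the tangent by -90 degrees; for a CCW boundary this points outward.
    normal = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)
    dots = normal @ np.asarray(velocity, dtype=float)
    leading = np.nonzero(dots > 0)[0]                  # normal agrees with the motion
    trailing = np.nonzero(dots < 0)[0]                 # normal opposes the motion
    return leading, trailing
```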

4 RENDERING THE MOTIONS

After a moving object is identified, various nonphotorealistic (NPR) styles can be used to illustrate its motion. We have implemented a few styles at present, and the addition of other styles is possible.

Particle-Effects: We randomly generate particles near the trailing edges of the object. The lifetimes of the particles are chosen at random, and their velocities are the negative of the object velocity with small random perturbations; as a result, particles travel in the direction opposite to the object during their lifetime. The colors of the particles are chosen as the average color of the object scaled up by a factor, yielding a brighter color that produces a flame-like effect. The opacity of a particle is proportional to its remaining lifetime, so it fades as it ages. These particles can be rendered in numerous ways, and parameters such as particle density, lifetime, and the magnitude of the random variations may depend on the rendering method. We have implemented two particle rendering models.
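A possible spawning routine is sketched below, assuming the trailing-edge pixel coordinates, the object velocity, and the object's mean color are already available; the density, lifetime range, perturbation scale, and brightness factor are made-up parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def spawn_particles(trailing_pixels, obj_velocity, obj_color,
                    n=30, life_range=(5, 20), jitter=0.5, brighten=1.4):
    """Spawn particles near the trailing edge of the moving object.

    trailing_pixels : (M, 2) array of (x, y) trailing-edge positions
    obj_velocity    : (2,) object velocity in pixels/frame
    obj_color       : (3,) average RGB color of the object
    Returns a list of particle dictionaries.
    """
    idx = rng.integers(0, len(trailing_pixels), size=n)
    particles = []
    for i in idx:
        particles.append({
            "pos": trailing_pixels[i].astype(float),
            # travel opposite to the object, with a small random perturbation
            "vel": -np.asarray(obj_velocity, float) + rng.normal(0.0, jitter, 2),
            "life": int(rng.integers(*life_range)),
            "age": 0,
            # brighter than the object's average color for a flame-like look
            "color": np.clip(np.asarray(obj_color, float) * brighten, 0, 255),
        })
    return particles
```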

Figure 3: Tracking a rotating object. Red is the trailing edge and blue is the leading edge. The displacement of each segment is also shown. On a 2.4 GHz Pentium 4, tracking took 2 sec/frame for 320×240 pixels; the displacement search space is [-14, 14] pixels in both the x and y directions.

Gaussian Splat: This method renders the particles with opacity maps defined by small Gaussian blobs centered at the particle positions, with the center opacities proportional to the remaining lifetimes (a compositing sketch is given below). The resulting video shows a smoky trail behind the moving object.

Line Stroke: Instead of a Gaussian blob, a line stroke can be placed at the particle position, aligned with the velocity of the particle. The line has maximum opacity at the particle position and becomes fully transparent at each end; it is anti-aliased to reduce jaggies. The result is shown in Figure 4.
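The Gaussian-splat renderer can be sketched as below, compositing each particle as a small Gaussian opacity blob whose peak alpha is proportional to the remaining lifetime; the blob radius and sigma are assumptions, and the particle dictionaries follow the spawning sketch above.

```python
import numpy as np

def render_gaussian_splats(frame, particles, radius=6, sigma=2.5):
    """Alpha-composite a Gaussian opacity blob for each particle onto the frame.

    frame     : (H, W, 3) float image in [0, 255]
    particles : list of dicts with "pos", "color", "life", "age"
    """
    h, w, _ = frame.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    blob = np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2))   # peak value 1 at the center
    out = frame.astype(float).copy()
    for p in particles:
        x, y = int(round(p["pos"][0])), int(round(p["pos"][1]))
        if not (radius <= x < w - radius and radius <= y < h - radius):
            continue                                       # skip blobs near the border
        peak = max(0.0, 1.0 - p["age"] / p["life"])        # alpha ~ remaining lifetime
        alpha = (peak * blob)[..., None]
        patch = out[y - radius:y + radius + 1, x - radius:x + radius + 1]
        out[y - radius:y + radius + 1, x - radius:x + radius + 1] = (
            (1.0 - alpha) * patch + alpha * p["color"])
    return out
```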


Figure 4: The particle effects applied to a moving car.

Time-Lapse: Time-lapse is another often-used scheme to describe motion in a still image. Since we have the object mask and its positions for all frames, we can achieve this effect easily: starting from the oldest position, the portion of the mask that does not overlap with its next position is drawn with reduced transparency (a compositing sketch is given after this list of styles).

Temporal-Flare: Temporal flare is an effect well suited to photorealistic images. Whereas speed lines and motion lines are a primary tool in cartoons, temporal flare seems to be a more natural choice for a photorealistic scene. We achieve this effect with a simple advection of color, i.e., by pushing the flare color away from the trailing edge of the object. The temporal-flare effect is shown on the first page.

Speed-lines/Motion-lines: In a photorealistic image, speed-line streaking does not seem to be as effective as in a cartoon; however, it still creates interesting results. We provide two types of speed-lines in this paper: cartoon-style solid line strokes for the speed-lines and motion-lines shown in Fig. 1, and blurring of colors aligned with the motion, shown on the first page.
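The time-lapse composite described above can be sketched as follows: the object, cut out by its mask at several earlier frames, is blended onto the final frame, with each older instance restricted to the part of its mask not covered by the next instance. The per-instance opacity schedule is an assumption; the function and parameter names are illustrative.

```python
import numpy as np

def time_lapse(frames, masks, alphas=None):
    """Composite earlier object instances onto the last frame (time-lapse still).

    frames : list of (H, W, 3) float images, oldest first
    masks  : list of (H, W) boolean object masks, one per frame
    alphas : per-instance opacities, oldest first; defaults to a linear ramp (assumed)
    """
    n = len(frames)
    if alphas is None:
        alphas = np.linspace(0.3, 0.9, n - 1)   # older instances drawn more faintly
    out = frames[-1].astype(float).copy()
    for i in range(n - 1):                      # every instance except the newest
        # draw only the part of the mask not covered by the next instance
        visible = masks[i] & ~masks[i + 1]
        a = alphas[i]
        out[visible] = (1.0 - a) * out[visible] + a * frames[i][visible]
    return out
```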

5 CONCLUSION AND FUTURE WORK

We introduce a semi-automatic method for adding expressive illustrations of motion to video. Our approach relies on the construction of a template from the first two frames and then computing its motion in all other frames. This provides sufficient information for depicting motion on photorealistic video by applying four different styles: time-lapse, temporal-flare, particle-effects, and speed-line/motion-line streaking. Future efforts include other stylistic depictions of motion on photorealistic raw video or on nonphotorealistic renderings such as cartoon or painting styles. Further research may also be needed to deal with deforming objects. Our method can easily be used for multiple objects. Tracking objects that are temporarily occluded in a video sequence would also be an interesting direction.

REFERENCES

[1] Gabriel J. Brostow and Irfan Essa. Image-based motion blur for stop motion animation. In Proceedings of ACM SIGGRAPH, pages 561–566. ACM Press, 2001.
[2] Dorin Comaniciu and Peter Meer. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5), May 2002.
[3] Doug DeCarlo and Anthony Santella. Stylization and abstraction of photographs. ACM Transactions on Graphics, 21(3):769–776, July 2002 (Proceedings of ACM SIGGRAPH 2002).
[4] Harold Edgerton. Electronic Flash, Strobe. MIT Press, 3rd edition, 1990.
[5] William T. Freeman, Edward H. Adelson, and David J. Heeger. Motion without movement. In Proceedings of the 18th Annual Conference on Computer Graphics and Interactive Techniques, pages 27–30. ACM Press, 1991.
[6] James Hays and Irfan Essa. Image and video based painterly animation. In NPAR 2004: The 3rd International Symposium on Non-Photorealistic Animation and Rendering, June 2004. To appear.
[7] Aaron Hertzmann and Ken Perlin. Painterly rendering for video and interaction. In NPAR 2000: First International Symposium on Non-Photorealistic Animation and Rendering, pages 7–12, June 2000.
[8] Allison W. Klein, Peter-Pike J. Sloan, Adam Finkelstein, and Michael F. Cohen. Stylized video cubes. In Symposium on Computer Animation. ACM SIGGRAPH, July 2002.
[9] Maic Masuch, Stefan Schlechtweg, and Ronny Schulz. Speedlines: Depicting motion in motionless pictures. In SIGGRAPH 99 Conference Abstracts and Applications, page 227. ACM Press, 1999.
[10] Azriel Rosenfeld and Avinash C. Kak. Digital Picture Processing. Academic Press, 1982.
