Reactive Pedestrian Path Following from Examples

Ronald A. Metoyer†
Computer Science Department
Oregon State University
Corvallis, OR 97330
[email protected]

Jessica K. Hodgins†
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213-3890
[email protected]

Abstract

To present an accurate and compelling view of a new environment, architectural and urban planning applications both require animations of people. Ideally, these animations would be easy for a non-programmer to construct, just as buildings and streets can be modeled by an architect or artist using commercial modeling software. In this paper, we explore an approach for generating reactive path following based on the user's examples of the desired behavior. The examples are used to build a model of the desired reactive behavior. The model is combined with reactive control methods to produce natural 2D pedestrian trajectories. The system then automatically generates 3D pedestrian locomotion using motion capture resequencing algorithms. We discuss the accuracy of the model of pedestrian motion and show that simple direction primitives can be recorded and used to build natural, reactive, path-following behaviors.

1. Introduction

Three-dimensional (3D) models of architectural and urban designs are increasingly used to visualize expensive construction concepts before full production, plan complex urban areas, and visualize pedestrian evacuation patterns. Architects and engineers evaluate options during the design process with 2D simulations and 3D visualizations. Land developers use visualizations to produce compelling marketing graphics, and contractors use them to communicate with construction estimators, sub-contractor bidders, and building owners[45]. These visualizations can save time and money for those involved in the construction and urban planning process. To identify potential problems, the scenes should be accurately produced with realistic structure models as well as realistic human inhabitants.

† This work was done while both authors were at the Georgia Institute of Technology.

Figure 1. Architectural visualization of a crowd scene in a model of the College of Computing at Georgia Tech. Models provided by the Imagine Lab.

Presently, these scenes often do not include human agents because natural human motion is difficult to create. The animation of the high-level behaviors of humans is particularly difficult and time-consuming to produce. For example, a scene with human characters in a courtyard would require that the user generate locomotion for each character while taking into account collision avoidance (Figure 1). For a scene of even ten characters, this task becomes difficult. Animators and programmers have developed skills and techniques for generating strikingly realistic human characters. Unfortunately, those who wish to generate animated figures are often not experts in animation or computer programming. We are interested in generating more natural reactive path planning by building on user expertise. In this paper, we develop a system that allows naive users to control the reactive path planning of human characters. We present an approach that computes most of the character motion automatically while still giving the user control over the resulting animation. While the gross character motion is

specified by the user, the fine details of the navigation, such as obstacle avoidance and path following, are implemented with automatic reactive navigation techniques. The user can refine the motion by directing the characters with reactive navigation primitives. The system uses this direction along with other information about the scene to build a model of desired reactive behavior for use in similar situations. This 2D simulation produces 2D time-stamped trajectories, but we ultimately want to produce 3D motion that tracks the 2D trajectories. We use a motion capture resequencing algorithm that pieces together frames, or poses, to track the 2D trajectories in time and space while maintaining natural transitions. We test the model quantitatively by computing error rates for the training data as well as unseen data. We also qualitatively compare the resulting motion to that of models using only reactive control and models using random choice.

2. Background

Behavior control has been a topic of interest in several fields including computer graphics, robotics, and urban planning. Early interest in the computer graphics community was sparked by the seminal work of Reynolds, which introduced the Boids model for flocks, schools, and herds[37]. Several others in the graphics area have since made progress in creating realistic behaviors for human-like characters, fish, and dinosaurs[44, 14, 36, 46]. Most of these solutions focused primarily on autonomous behavior. In 1995, Blumberg introduced a model for directing character behavior at multiple levels, giving the user the ability to control the behavior[8]. More recently, Blumberg presented a framework for designing a dog character that learns herding behaviors based on user clicker training[7]. Our aim is also to add the human into the behavior control and training loop. In the area of intelligent agents, Barnes takes a more direct approach to character training by designing a virtual environment for interactive visual programming of agents. The user specifies preconditions, postconditions, and the corresponding actions visually within the environment[4]. In other work, we have focused on generating natural reactive sports behaviors via demonstration of full trajectories[30]. Our approach aims to indirectly collect information about the user's desires in order to build a model of user preference and produce motions that more closely match it. Several researchers in the graphics area have focused specifically on generating pedestrian behaviors such as navigation planning and natural locomotion[24, 9, 21, 35, 42]. Others have focused on designing simulation environments for generating realistic, directable crowds[32, 43, 12, 33, 34]. Ashida and his colleagues analyze actual crowd

video to build statistical models of higher-level pedestrian behavior[3]. Goldenstein and his colleagues take a dynamical systems approach to generate reactive behaviors and crowd behaviors for autonomous agents[16]. Pedestrian behavior modeling is also of interest in urban planning. Urban planning researchers build models of human behavior that are valuable tools for the planning and design of urban areas such as shopping malls and urban centers[6, 5, 15, 10, 27]. Pedestrian models have been developed at several levels of granularity, ranging from coarse fluid-flow motion models to fine-grained inter-pedestrian interaction models such as that proposed by Helbing and Molnar[17, 19, 18]. Helbing's model of social interactions is based on attractive and repulsive potential fields very similar to those used by roboticists for mobile robot control[20, 23, 28, 2]. Using their social forces model, Helbing and Molnar are able to recreate phenomena such as lane formation in halls, queuing, and turn-taking at doorways. Others have used cellular automata approaches for modeling pedestrian motion at a similar granularity and have observed similar emergent behaviors[6, 38, 10]. Each of these approaches generates motion that is particularly useful for observing patterns and collecting statistics, but they typically generate robotic-looking motion that is not suitable for realistic visualization. BDI's PeopleShop system also provides an interface for novices to design pedestrian content for 3D training and visualization scenes. It allows users to specify paths, sensor regions, and behaviors for characters in a terrain and allows for run-time control of characters using a joystick. Rather than design fully autonomous characters, they implement "Intelligence Amplification" to build on the user's intelligence[11]. We are also interested in simplifying the content creation process for novices by providing simple interfaces and characters that improve performance with time.

3. 2D Intelligence Model

To populate a 2D scene with animated pedestrians, the user first describes the motion on a 2D floor plan of the scene to be animated. To relieve the user of some of the tedious details involved in human navigation, we provide a low-level character intelligence model. Character intelligence provides a basic level of behavior for the character via reactive path following in the presence of obstacles, desired paths, and other pedestrians. Reactive control using potential fields is a well-studied area in mobile robotics and in other fields such as pedestrian modeling. We use the social forces model of Helbing and Molnar to model the reactive intelligence of a pedestrian in a 2D representation of an architectural environment. This approach defines obstacles as repulsive potentials and goals as attractive potentials, and combines all potentials to produce a composite potential field (Figure 2)[18].


Figure 2. A and B represent attractive and repulsive fields centered on the circle. C represents a repulsive pedestrian field moving in the positive direction. D represents a boundary field caused by the wall.

In 2D space, each pedestrian is modeled with point-mass dynamics. The update equation for a point mass is

$x_{t+\Delta t} = x_t + \dot{x}_t \, \Delta t + \frac{F_x}{2m} \, \Delta t^2$

where the force $F_x$ is obtained from the behavior control potential fields, $m$ is the character's mass, and $\Delta t$ is the simulation time step. Similar equations hold for $y$. The velocity of the point-mass pedestrians is clamped at a limit of 2.0 m/s to allow for fast walking. The social forces model described above provides the user with the ability to specify desired goal locations. In order to allow for more control over the path to the goal location, we provide the user with the ability to supply a natural path. People, as experts in navigating physical environments, can visualize and draw natural paths between two points in a scene in the absence of moving obstacles. The user supplies these paths by drawing directed lines across the floor plan of the environment. These paths can also be generated using global path planning algorithms such as visibility roadmaps[25]. These paths provide general path guidelines while the reactive control accounts for potential collisions along them. The user-supplied paths are converted into forces:

$\mathbf{F}_{path} = k_{path} \left[ \left(1 - \frac{s}{d_{max}}\right) \hat{p} + \frac{s}{d_{max}} \, \hat{p}_{\perp} \right] \qquad (1)$

where $\hat{p}$ is a unit vector along the path starting at the nearest point on the path, $\hat{p}_{\perp}$ is a unit vector perpendicular to $\hat{p}$ pointing from the pedestrian toward the path, $s$ is the perpendicular distance from the nearest point on the path, $d_{max}$ is the largest allowed perpendicular distance, and $k_{path}$ is a path gain.

Figure 3. Force diagram for a character at several points in time along the user-specified natural path. Initially, the character is on the path. Later, the character has been forced off course and must return to the path. As the distance from the spline increases, the force direction begins to point toward the spline. When the distance is small, the force direction is aligned with the spline.

The force on the pedestrian is based on the pedestrian's perpendicular distance from the spline that represents the desired natural path it is following (Figure 3). These reactive models provide a simple system for the user to populate an environment with pedestrians and direct them from one point to another within the environment.
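As a concrete illustration, the path-following force of Equation (1) and the point-mass update might be implemented roughly as sketched below. This is only a sketch: the polyline representation of the path, the linear blend between the two force components, and the specific gain, mass, and time step values are assumptions rather than parameters taken from the system described here.

```python
import numpy as np

def path_following_force(pos, path, k_path=1.0, d_max=2.0):
    """Sketch of Equation (1): blend a component along the path with a
    component back toward it, weighted by the perpendicular distance s.
    'path' is an (N, 2) array of densely sampled points along the
    user-drawn natural path; gains and limits are illustrative."""
    dists = np.linalg.norm(path - pos, axis=1)
    i = int(np.argmin(dists))                      # nearest sample on the path
    tangent = path[min(i + 1, len(path) - 1)] - path[max(i - 1, 0)]
    p_hat = tangent / (np.linalg.norm(tangent) + 1e-9)

    to_path = path[i] - pos                        # vector back toward the path
    perp = to_path - np.dot(to_path, p_hat) * p_hat
    s = np.linalg.norm(perp)
    p_perp_hat = perp / (s + 1e-9)

    w = min(s / d_max, 1.0)                        # 0 on the path, 1 at d_max
    return k_path * ((1.0 - w) * p_hat + w * p_perp_hat)

def step_point_mass(pos, vel, force, mass=70.0, dt=1.0 / 30.0, v_max=2.0):
    """Euler update of the point-mass pedestrian with the 2.0 m/s speed clamp."""
    vel = vel + (force / mass) * dt
    speed = np.linalg.norm(vel)
    if speed > v_max:
        vel = vel * (v_max / speed)
    return pos + vel * dt, vel
```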

4. Behavior Models from Direction Primitives

The intelligence model alone should produce correct 2D motion in terms of avoiding the defined obstacles and reaching goals, but it will not necessarily produce natural motion for navigating complex scenes. When the motion resulting from the intelligence model does not meet the user's goals, the user is able to interactively direct the character with navigation primitives. As a 2D pedestrian simulation progresses in time, potential collisions may arise and the user is visually alerted to the situation. The user can then stop the simulation and provide direction from the following set of navigation primitives: yield, cut-in-front, go-around-right, go-around-left, and no-action. The user observes the motion and accepts


Figure 4. The circles with arrows represent pedestrians and their velocity. The yield navigation primitive chooses a velocity that will put the pedestrian at a 0.75m cushion before the potential collision along the desired path. The cut-in-front primitive chooses a velocity that will put the pedestrian at a 2.25m cushion beyond the potential collision spot along the path.

the direction and continues in the animation process, or revises the direction. The set of navigation primitives was chosen based on research from the traffic planning field. There are five main tasks a pedestrian undertakes while navigating: monitoring, yielding, checkerboarding, streamlining, and avoiding perceptual objects[13]. In this paper, we are concerned with two of these tasks, monitoring and yielding, because they are relevant to pedestrian-pedestrian collision avoidance. Monitoring refers to the act of observing pedestrians in the nearby area to determine their navigation intentions. Yielding refers to the act of adjusting velocity (magnitude or direction) in order to avoid a potential collision. The yield primitive is designed to alter the velocity of the pedestrian character slightly so that it allows another pedestrian to pass safely in front. The system chooses a target point along the pedestrian's desired natural path before the predicted collision point. A new desired velocity is computed that will put the pedestrian at this point at the predicted time of impact (Figure 4). Once the collision danger has passed, the pedestrian resumes his original desired velocity. The cut-in-front primitive is implemented in a similar manner, choosing a target point beyond the predicted point of impact. The go-around primitive is designed to generate an alteration to the desired path that takes the pedestrian around the potential collision spot and back to the natural path. The path around the collision depends on the relative travel direction of the other pedestrian.

Figure 5. The circles with arrows represent pedestrians and their velocity. The colliding pedestrian is shown from both sides to demonstrate the two avoidance targets. The around right navigation primitive chooses a desired path offset based on the approach angle of the potentially colliding pedestrian. If the pedestrian is approaching from the left, A, the offset is chosen with respect to the desired path. If the pedestrian is approaching from the right, B, the offset is chosen with respect to that pedestrian himself.

An offset position is chosen based on the other pedestrian's approach angle, and a curve through this position and back to the natural path makes up the path adjustment (Figure 5). A similar procedure produces a path for going around to the left.
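As an illustration only, the yield and cut-in-front primitives could compute their temporary desired velocity along the following lines. The cushion distances follow Figure 4; the representation of the predicted collision (a point on the natural path plus the path's unit travel direction there) and the clamp on the time to impact are assumptions, not details given in the text.

```python
import numpy as np

def yield_or_cut_velocity(pos, collision_point, path_dir, t_collision,
                          primitive, yield_cushion=0.75, cut_cushion=2.25):
    """Temporary desired velocity for the yield / cut-in-front primitives.

    'collision_point' is the predicted collision spot on the pedestrian's
    natural path, 'path_dir' the unit travel direction of the path there,
    and 't_collision' the predicted time to impact."""
    if primitive == "yield":
        # Aim for a point 0.75 m short of the collision spot (Figure 4).
        target = collision_point - yield_cushion * path_dir
    elif primitive == "cut-in-front":
        # Aim for a point 2.25 m beyond the collision spot (Figure 4).
        target = collision_point + cut_cushion * path_dir
    else:
        raise ValueError("only yield / cut-in-front are handled in this sketch")

    # Velocity that places the pedestrian at the target at the predicted
    # time of impact; the original desired velocity resumes afterwards.
    return (target - pos) / max(t_collision, 1e-3)
```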

4.1. Generalizing Direction Examples

One of our goals in this work is to ease the burden on a naive animator who is designing a scene. Rather than discard the naive user's direction examples, we use them to aid the user in future situations. Generalization of direction primitives requires that the system record not only the direction primitives themselves, but also the features of the scene. The feature vector describes the aspects of the scene that affect the path planning of the character, and computation of these features represents a pedestrian's monitoring behavior. Ideally, features are fully descriptive of the situation while easy and fast to compute. A pedestrian's motion at any point in time may depend on several obstacles or several other pedestrians as well as the architectural situation (wide or narrow hall, etc.). We have chosen a set of seven discrete features to describe the situation of a pedestrian in an urban scene. The seven features are:

 

1. Is the path around to the left blocked by other pedestrians or obstacles? (Y or N)
2. Is the path around to the right blocked by other pedestrians or obstacles? (Y or N)
3. Relative speed of the colliding pedestrian (5)
4. Approach direction of the colliding pedestrian (8)
5. Colliding pedestrian's distance to collision (5)
6. Pedestrian's distance to collision (5)
7. Desired travel direction (3)

where the numbers in parentheses represent the number of discretized values for each feature. Speed and direction are clearly important when trying to negotiate a collision. The "blocked left" and "blocked right" features allow us to account for potential collisions other than the one under consideration. A single value for each of these variables makes up a feature vector. We use a naive Bayes classifier to model the decision-making process of the pedestrian character. The four primitives described above (go-around-left, go-around-right, yield, cut-in-front) and a fifth, no-action primitive, serve as the choices or hypotheses for the naive Bayes classifier, while the seven variables above make up the features used as its input. We treat each potential collision situation as a choice: the system must classify it as one of five possible alternatives represented by the direction primitives. The naive Bayes classifier is defined as

$v_{NB} = \operatorname{argmax}_{v_j \in V} \; P(v_j) \prod_i P(a_i \mid v_j) \qquad (2)$

where the $v_j$ are members of $V$, the set of five possible actions, and the $a_i$ are the seven attributes, or features, described above. The probability of a particular hypothesis, $P(v_j)$, is

$P(v_j) = \frac{n_j}{n} \qquad (3)$

where $n$ represents the number of examples seen thus far and $n_j$ represents the number of examples where the $j$th primitive was chosen. The conditional probabilities for the attributes can be computed by counting the occurrences of each attribute value given each particular hypothesis. To avoid a biased underestimate, we compute the m-estimate of the conditional probabilities[31].
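A minimal sketch of how this classifier could be maintained incrementally from the user's direction examples, with the m-estimate used for the conditional probabilities, is shown below. The feature cardinalities follow the list above; the value of m and the uniform prior inside the m-estimate are illustrative assumptions.

```python
from collections import Counter, defaultdict

ACTIONS = ["go-around-left", "go-around-right", "yield", "cut-in-front", "no-action"]

class NaiveBayesDirection:
    """Naive Bayes over the seven discrete scene features, with the
    m-estimate for conditional probabilities (Mitchell [31])."""

    def __init__(self, n_values=(2, 2, 5, 8, 5, 5, 3), m=1.0):
        self.n_values = n_values                    # values per feature
        self.m = m                                  # equivalent sample size
        self.action_counts = Counter()              # n_j per action
        self.feature_counts = defaultdict(Counter)  # (action, feature index) -> value counts

    def observe(self, features, action):
        """Record one user direction example (features: tuple of 7 ints)."""
        self.action_counts[action] += 1
        for i, v in enumerate(features):
            self.feature_counts[(action, i)][v] += 1

    def classify(self, features):
        """Return the action maximizing P(v_j) * prod_i P(a_i | v_j) (Eq. 2)."""
        n = sum(self.action_counts.values())
        best, best_score = None, -1.0
        for action in ACTIONS:
            n_j = self.action_counts[action]
            score = n_j / max(n, 1)                 # P(v_j) = n_j / n  (Eq. 3)
            for i, v in enumerate(features):
                n_c = self.feature_counts[(action, i)][v]
                p = 1.0 / self.n_values[i]          # uniform prior for the m-estimate
                score *= (n_c + self.m * p) / (n_j + self.m)
            if score > best_score:
                best, best_score = action, score
        return best
```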

4.2. Run-time Behavior

Using the probabilistic model, we can now compute the probabilities of the five possible decisions or hypotheses: go-around-left, go-around-right, yield, cut-in-front, and no-action. The hypothesis with the highest probability is chosen. The model can be used in later situations to reduce the amount of direction necessary from the user. At run time, the pedestrians compute potential collisions by observing the velocities of nearby pedestrians to determine their future locations. When a potential collision arises, the pedestrian consults the behavior model to determine the proper navigation action to take. The chosen action runs to completion or until another possible collision occurs. The reactive intelligence model is still used to account for any collisions not avoided by the learned model and to track the desired path. Both the learned model and the reactive model produce forces, which are summed to generate a final force for the pedestrian.
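The collision prediction used at run time could be as simple as a constant-velocity check of the time of closest approach, sketched below. The personal-space radius and look-ahead horizon are illustrative values, not parameters reported in the text. When this check fires, the feature vector is computed, the classifier above is consulted, and the force from the chosen primitive is summed with the reactive force.

```python
import numpy as np

def predict_collision(p1, v1, p2, v2, radius=0.5, horizon=5.0):
    """Constant-velocity collision check between two pedestrians.
    Returns the time of closest approach if the two will pass within
    2*radius of each other inside the look-ahead horizon, else None."""
    dp, dv = p2 - p1, v2 - v1
    denom = float(np.dot(dv, dv))
    t = 0.0 if denom < 1e-9 else -float(np.dot(dp, dv)) / denom
    if not (0.0 <= t <= horizon):
        return None
    return t if np.linalg.norm(dp + dv * t) < 2.0 * radius else None
```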

5. 3D Motion Generation

For visual display, we need 3D motion that tracks the trajectories from the 2D simulation. We first capture multiple locomotion trajectories such as walking straight at comfortable and slow speeds, turning by various amounts, starting, and coming to a complete stop. Due to limitations of our motion capture system, we were able to capture only short segments of motion in a 9x9 foot capture region. We build a transition matrix that defines a distance, in pose space, between any two poses in our set of motion sequences. We then use a beam search combined with a cost function to determine the sequence of poses that tracks the trajectory while maintaining natural transitions. This technique is similar to a number of recent approaches in which motion is treated as a directed graph of poses from motion capture sequences. Kovar, Arikan, and Lee each presented techniques along these lines at Siggraph 2002 in which the graph is searched for a semi-optimal path using a cost function that incorporates user constraints such as desired poses, paths, and tasks[26, 1, 22, 41, 39, 40, 29].
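The beam search over the pose transition matrix might look roughly like the following sketch. The way the pose-space transition cost and the 2D tracking error are combined into a single cost (a weighted sum) is an assumption; only the overall structure, keeping the best partial pose sequences frame by frame, follows the description above.

```python
import heapq
import numpy as np

def beam_search_poses(transition_cost, track_error, start_pose, n_frames,
                      beam_width=50, w_track=1.0):
    """Pick a pose sequence that tracks the 2D trajectory while keeping
    pose-to-pose transitions natural.  'transition_cost[i, j]' is the
    pose-space distance between poses i and j; 'track_error(frame, j)'
    is the 2D tracking error of pose j at that frame."""
    transition_cost = np.asarray(transition_cost)
    n_poses = transition_cost.shape[0]
    beam = [(0.0, [start_pose])]                  # (accumulated cost, pose sequence)
    for frame in range(1, n_frames):
        candidates = []
        for cost, seq in beam:
            i = seq[-1]
            for j in range(n_poses):
                c = cost + transition_cost[i, j] + w_track * track_error(frame, j)
                candidates.append((c, seq + [j]))
        # Keep only the best beam_width partial sequences.
        beam = heapq.nsmallest(beam_width, candidates, key=lambda x: x[0])
    return min(beam, key=lambda x: x[0])[1]
```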

6. Results

Movies can be downloaded from http://www.cs.orst.edu/~metoyer/casa03/. We measure the performance of the system both quantitatively and qualitatively. A user provided a total of 28 direction examples over several scenes. The learned model from each scene was carried over to the next scene. To evaluate the generalization of the learned behavior model, we compute a sample error rate that represents the accuracy of the

Test                 | # Examples | % Correct
---------------------|------------|----------
Sample Error Rate    | 28         | 72
Generalization Set 1 | 8          | 50
Generalization Set 2 | 7          | 57

Table 1. Performance tests for the probabilistic model learned from the user direction. We computed both a sample error rate to determine the correctness of the model for known situations and generalization tests to determine accuracy for unseen situations. In all cases, a random guess would produce 20% accuracy.

Figure 6. In these images, the pedestrian's velocity vector has been extended for clarity. A pair of white vectors indicates that two pedestrians are in danger of colliding, while a pair of gray vectors indicates that two pedestrians are on a less-urgent collision course. Black vectors indicate no apparent danger of a collision. The top image shows a snapshot of a simulation using only the social forces model to guide the motion of the pedestrians. The middle image shows the same point in time for a system using a random choice of navigation primitives for potential collisions. The bottom image represents a simulation using the social forces model combined with the learned probabilistic model.

model for the data that was used to train the naive Bayes classifier. We take a random sample of the existing 28 direction examples for which we have both features and the user’s direction, or actions. We then pass these sampled features to the model and compare the model’s output to

the user's actual direction for each of these examples. This rate measures how often the probabilistic model is correct, where correct is defined as choosing the direction primitive that the user actually chose for each of the sampled feature vectors. A random guess of the correct navigation primitive would result in 20% accuracy. The model achieved 72% accuracy over the sampled sets. To determine how well the model generalizes to situations other than the training data, we also performed tests on two sets of data that were not included in the training set. These tests measure the model's ability to generalize to new, unseen situations. Again, a random guess would result in 20% accuracy. For these two sets, the model achieved 50% and 57% accuracy. These tests are summarized in Table 1. We also qualitatively compared a crowd scene in which all pedestrians used the probabilistic model combined with the reactive model to navigate (2DPedsLearned.mpg) to a scene in which all pedestrians made random choices (2DPedsRandom.mpg). The model-guided pedestrians produced much more natural motion than the random pedestrians, and more natural motion than pedestrians using only the reactive navigation model (2DPedsSocialForces.mpg) (Figure 6).
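The sample error rate described above amounts to re-scoring a random subset of the recorded (feature vector, user direction) pairs with the learned classifier. A sketch, with the sample size and random seed as illustrative values:

```python
import random

def sample_accuracy(model, examples, sample_size=20, seed=0):
    """Fraction of sampled direction examples for which the model picks
    the same primitive the user chose.  'examples' is a list of
    (features, user_action) pairs."""
    rng = random.Random(seed)
    sample = rng.sample(examples, min(sample_size, len(examples)))
    correct = sum(1 for feats, action in sample if model.classify(feats) == action)
    return correct / len(sample)
```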

7. Discussion

We have used our approach to animate pedestrians in a 3D visualization of an architectural scene with simple user input while maintaining user control (3DPedsScene.mpg) (Figure 1). In 2D, we allow the user to populate and direct a simple representation of pedestrians in an architectural scene. We further utilize the user's direction to build a model of character behavior for future similar situations. In 3D, we have generated automatic trajectory tracking for articulated 3D pedestrian characters. Although we have chosen a small set of direction primitives, we are able to demonstrate the utility of using this

direction rather than discarding it after the animation is produced. When the user must animate a new scene with similar conditions, the model will in many cases produce the correct behavior, reducing the number of times the user has to provide direction. Although the resulting model produces motion that looks natural, there is still a fair amount of error in the model's predictions. We believe this error is due to the inability of the system to distinguish between situations that the user considers different when directing the character. This inability could be caused by the coarse granularity of the features, which may prevent the system from identifying small but significant differences between two situations. We believe that a finer feature space could result in better predictions, although it would require more examples from the user. We also believe that with more data, the model predictions would improve in accuracy. We will also explore other behavior representations, such as decision trees, that may perform better with limited training data. Our examples have typically shown 10-15 pedestrian characters in a scene, allowing us to show a reasonably sized crowd that is not too congested. Without congestion, the pedestrians enter situations where they must make strategic decisions about navigating. As scenes get more congested, fewer options exist and navigation strategy becomes more reactive. In our experience, the social forces model alone produces fairly natural results for large scenes with more congestion because most motion is reactive within a very small planning area. A useful result of our 3D motion approach is the automatic choice of proper sequences for velocity changes as well as path curvature changes. For example, if the 2D simulated character comes to a stop in order to avoid a collision and then continues along its path, the 3D algorithm produces a sequence of motion capture poses that brings the 3D character to a complete stop and eventually back to a forward walking sequence (3DPedsAvoid.mpg). The motion data and search structure enforce natural transitions because the actor who was recorded performed only natural motions, and transitions are made only between similar poses. Our 3D motion can be further improved by collecting more data. In the 3D crowd scene (3DPedsScene.mpg), for example, the animator created a situation where two characters met in the middle and appeared to be conversing. Unfortunately, we had no motion capture data of a character standing still; the closest motion to standing still was a sequence of a person turning in place. Therefore, the characters appear to turn their backs on one another. This problem could be solved by adding relevant sequences to the database of motion. More data would also allow us to vary the motion between pedestrians. Currently, all pedestrians draw from the same motion library.

We are currently investigating several extensions of this work. First, we are implementing a tangible table-based interface as a simple means for a naive user to interact with the simulated pedestrians for animation and ultimately for teaching characters to behave naturally. We also hope to use this table-based interface to provide a tool for domain experts, such as a security specialist, to explore what-if scenarios for important situations such as evacuation during an emergency. Second, we are investigating real-time motion capture sequencing algorithms. We hope to develop tools and techniques that allow naive users to produce compelling scenes of pedestrians in 3D environments for use in training, visualization, and entertainment.

Acknowledgments

The authors would like to thank Dr. Peter Molnar and the Georgia Tech College of Computing for their invaluable input and support during the course of this work.

References

[1] O. Arikan and D. A. Forsyth. Interactive motion generation from examples. In Proceedings of Siggraph 2002, pages 483–490, 2002.
[2] R. Arkin. Motor schema based navigation for a mobile robot. In Proceedings of the 1987 IEEE Conference on Robotics and Automation, pages 264–271, 1987.
[3] K. Ashida, S. Lee, J. Allbeck, H. Sun, N. Badler, and D. Metaxas. Pedestrians: Creating agent behaviors through statistical analysis of observation data. In Proceedings of Computer Animation, 2001.
[4] C. Barnes. Visual programming for virtual environments. Proceedings of the 2000 AAAI Spring Symposium on Smart Graphics, 2000.
[5] M. Batty, B. Jiang, and M. Thurstain-Goodwin. Local movement: Agent-based models of pedestrian flow. Center for Advanced Spatial Analysis Working Paper Series, 4, 1998.
[6] V. Blue and J. Adler. Cellular automata model of emergent collective bi-directional pedestrian dynamics. Artificial Life VII: Proceedings of the Seventh International Conference on Artificial Life, pages 437–445, August 2000.
[7] B. Blumberg, M. Downie, Y. Ivanov, M. Berlin, M. Johnson, and B. Tomlinson. Integrated learning for interactive synthetic characters. In Proceedings of Siggraph 2002, pages 417–426. ACM Press, 2002.
[8] B. Blumberg and T. A. Galyean. Multi-level direction of autonomous creatures for real-time virtual environments. Proceedings of SIGGRAPH 95, pages 47–54, August 1995.
[9] M. Choi, J. Lee, and S. Shin. Planning biped locomotion using motion capture data and probabilistic roadmaps. To appear in ACM Transactions on Graphics, 22(2), 2003.
[10] J. Dijkstra and H. Timmermans. Towards a multi-agent system for visualizing simulated behavior within the built environment. Proceedings of the Design and Decision Support Systems in Architecture and Urban Planning Conference: DDSS'2000, pages 101–117, 2000.

[11] B. Dynamics. PeopleShop. http://www.bdi.com/PeopleShop.html, 2000.
[12] N. Farenc, S. Musse, E. Schweiss, M. Kallmann, O. Aune, R. Boulic, and D. Thalmann. A paradigm for controlling virtual humans in urban environment simulations. Applied Artificial Intelligence, 14(1):69–91, 2000.
[13] F. Feurtey. Simulation of collision avoidance behavior for pedestrians. Master's thesis, The University of Tokyo, School of Engineering, 2000.
[14] J. Funge, X. Tu, and D. Terzopoulos. Cognitive modeling: Knowledge, reasoning and planning for intelligent characters. Proceedings of SIGGRAPH 99, pages 29–38, August 1999.
[15] G. Gipps and B. Marksjo. A micro-simulation model for pedestrian flows. Mathematics and Computers in Simulation, 27:95–105, 1985.
[16] S. Goldenstein, E. Large, and D. Metaxas. Non-linear dynamical system approach to behavior modeling. The Visual Computer, 15(7/8):349–364, 1999.
[17] D. Helbing. A fluid dynamic model for the movement of pedestrians. Complex Systems, 6:391–415, 1992.
[18] D. Helbing and P. Molnar. Social force model for pedestrian dynamics. Physical Review E, 51(5):4282–4286, 1995.
[19] L. Henderson. On the fluid mechanics of human crowd motion. Transportation Research, 8:509–515, 1974.
[20] O. Khatib. Real-time obstacle avoidance for manipulators and mobile robots. International Journal of Robotics Research, 5(1):90–98, 1986.
[21] H. Ko and J. Cremer. VRLoco: Real-time human locomotion from positional input streams. Presence, 5(4):367–380, 1996.
[22] L. Kovar, M. Gleicher, and F. Pighin. Motion graphs. In Proceedings of Siggraph 2002, pages 473–482. ACM Press, 2002.
[23] B. Krogh and C. Thorpe. Integrated path planning and dynamic steering control for autonomous vehicles. In Proceedings of the IEEE Conference on Robotics and Automation, pages 1664–1669, 1986.
[24] J. Kuffner. Goal-directed navigation for animated characters using real-time path planning and control. In CAPTECH '98: Workshop on Modelling and Motion Capture Techniques for Virtual Environments, pages 171–186, Nov. 1998.
[25] J. Latombe. Robot Motion Planning. Kluwer Academic Publishers, 1991.
[26] J. Lee, J. Chai, P. Reitsma, J. Hodgins, and N. Pollard. Interactive control of avatars animated with human motion data. In Proceedings of Siggraph 2002, pages 491–500, 2002.
[27] G. G. Lovas. Modeling and simulation of pedestrian traffic flow. In Modeling and Simulation: Proceedings of the 1993 European Simulation Multiconference, 1993.
[28] D. Lyons. Tagged potential fields: An approach to specification of complex manipulator configurations. In Proceedings of the IEEE Conference on Robotics and Automation, pages 1749–1754, 1986.
[29] R. Metoyer. Building behaviors with examples. Ph.D. dissertation, Georgia Institute of Technology, 2002.
[30] R. Metoyer and J. Hodgins. Animating athletic motion planning by example. In Proceedings of Graphics Interface, pages 61–68, 2000.

[31] T. Mitchell. Machine Learning. McGraw-Hill, New York, 1997.
[32] S. Musse, F. Garat, and D. Thalmann. Guiding and interacting with virtual crowds in real-time. In Computer Animation and Simulation '99, pages 23–33. Eurographics, Springer-Verlag, 1999.
[33] S. Musse and D. Thalmann. A model of human crowd behavior: Group inter-relationship and collision detection analysis. In Proceedings of the Eurographics CAS '97 Workshop on Computer Animation and Simulation, 1997.
[34] S. Musse and D. Thalmann. Hierarchical model for real time simulation of virtual human crowds. 7(2):152–164, 2001.
[35] H. Noser, O. Renault, D. Thalmann, and N. Magnenat-Thalmann. Navigation for digital actors based on synthetic vision, memory and learning. Computers and Graphics, 19(1), 1995.
[36] K. Perlin and A. Goldberg. Improv: A system for scripting interactive actors in virtual worlds. Proceedings of SIGGRAPH 96, pages 205–216, August 1996.
[37] C. W. Reynolds. Flocks, herds, and schools: A distributed behavioral model. Proceedings of SIGGRAPH 87, 21(4):25–34, July 1987.
[38] A. Schadschneider. Cellular automaton approach to pedestrian dynamics. In Pedestrian and Evacuation Dynamics, pages 75–84, 2002.
[39] A. Schödl and I. Essa. Machine learning for video-based rendering. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems, volume 13. The MIT Press, 2000.
[40] A. Schödl and I. Essa. Controlled animation of video sprites. Proceedings of the First ACM Symposium on Computer Animation, pages 121–127, July 2002.
[41] A. Schödl, R. Szeliski, D. Salesin, and I. Essa. Video textures. Proceedings of SIGGRAPH 2000, pages 489–498, July 2000.
[42] H. Sun and D. Metaxas. Automating gait generation. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pages 261–270. ACM Press, 2001.
[43] F. Tecchia, C. Loscos, R. Conroy, and Y. Chrysanthou. Agent behaviour simulator (ABS): A platform for urban behaviour development. In Proceedings of Games Technology Conference 2001, 2001.
[44] X. Tu and D. Terzopoulos. Artificial fishes: Physics, locomotion, perception, behavior. Proceedings of SIGGRAPH 94, pages 43–50, July 1994.
[45] VISARC. Professional visualization services for the building industry. http://www.visarc.com/, 1999.
[46] B. Webber and N. Badler. Animation through reactions, transition nets and plans. In Proceedings of the International Workshop on Human Interface Technology, 1995.
