3D Visualization and Animation of Crowd Simulation Using a Game Engine
Kabilen SORNUM, Yuanxi LIANG, Wentong CAI, Malcolm Yoke Hean LOW and Suiping ZHOU
School of Computer Engineering, Nanyang Technological University, Nanyang Avenue, Singapore 639798
[email protected]
Abstract
Crowd simulation is an essential component of military training and operations. In this paper, we present an approach for generating 3D visualizations and animations from simulation results in the area of agent-based crowd simulation for military operations. The main contribution is a simulation bridge between a federated agent-based crowd simulation architecture and a game engine, which drives the visualization and animation of a virtual crowd in 3-dimensional space. In our simulation bridge framework, we present a method for data acquisition to create the virtual environment. An interfacing methodology is introduced between the 3D virtual environment maps and the federated agent-based crowd simulator. Our method comprises converting the 3D maps to multi-level 2D maps that feed the 2D agent-based crowd simulator's virtual world representation. A tool was implemented for defining objects in the virtual environment based on 2D maps and a color coding scheme. Our bridge framework uses a game engine as the visualization medium. The 3D virtual environment maps are reconstructed within the game engine and human avatars are created to represent the agents. Motion, expression and behavioral animations are attached to each virtual agent. At run-time, data is fed from the 2D crowd simulator to our framework via an event-driven file buffer. Virtual agents are created at run-time based on the social aspects of the crowd composition. Behavioral animation is triggered on each agent based on the commands represented in the simulation results. Our framework also provides visualization of emergency situations and uses particle dynamics to generate more realistic visuals of disaster situations and of human crowd behavior in an emergency.
Keywords
Game Simulation, Crowd Simulation, Visualization, Virtual Reality, Animation, Game Engine
1. Introduction
Crowd simulation is an essential component of military training and operations. With the rapid growth of urbanized social environments, there is an increasing need for systems to study and monitor crowd behavior in order to help control and manage crowds during disaster or emergency situations. To help overcome the military challenges and potential risks posed by crowds, a federated agent-based crowd simulation architecture was developed using Modeling & Simulation techniques to create virtual crowds with agent-based behaviors [1]. The system consists mainly of a cognitive layer, a physical layer and a visualization component. The cognitive layer is the sensory
and behavioral system of the crowd simulation architecture. It defines a behavior model for virtual humans in a crowd simulation and also holds a knowledge repository of ontology and knowledge-base data. The virtual environment is represented in the physical layer, where path planning and navigation are performed. The animation and visualization component is based on a 2-dimensional representation of the observable states of the agents. Preliminary 3-dimensional visualization was tested using the Unreal Tournament® engine [2], with communication performed through a socket connection to the engine using a tool called GameBots [3]. Although the technique reported in [1] can be used to support multiple bots through a single socket, the total number of bots and the map size are too limited for large-crowd simulation.
In this paper, we describe an alternative visualization methodology which uses a bridge framework to turn simulation results from the 2D simulator into 3-dimensional visuals and animations. Several 3D game engines were evaluated to find one that could support a large number of dynamic entities. Virtual 3D maps are created using a polygonal-mesh 3D modeling package, and real-life scenarios have been used to build the virtual maps. Our method includes converting these 3D maps to multi-layered 2D maps to be fed into the agent-based crowd simulation framework. We introduce a tool that uses a color coding scheme to denote the different objects present in the virtual environment in a form readable by the crowd simulation architecture. The virtual 3D maps are reconstructed in the game engine's world representation to match the spatial definitions of the engine's core units. Human avatars are used to represent the agents, and multiple animations are applied to them to simulate a virtual agent. During run-time, these 3D human avatars are created dynamically, and their physical appearance is chosen according to the social aspects of the crowd composition. Commands from the 2D simulation results are used to drive the 3D avatars in the virtual environment. For a more realistic visualization, we use particle dynamics and animation to generate visuals of disaster situations, such as bomb explosions, within our virtual world.
In this paper, we provide an overview of the 2D agent-based crowd simulation architecture in Section 2. In Section 3, we introduce our simulation bridge framework and the steps involved in creating the communication between the 2D simulator and the 3D visualization. Section 4 shows an application of our framework, in which the visualization performance is evaluated for a range of entity counts. Finally, the work is concluded and future work is discussed in Section 5.
2. Overview of the 2D Agent-Based Crowd Simulation Architecture
In this section, we give an overview of the agent-based crowd simulation architecture. As shown in Figure 1, the system comprises mainly a cognitive layer, a physical layer, an ontology & knowledge base, a virtual environment and a visualization component. The system is implemented using RePast [4], an agent-based simulation toolkit.
Figure 1: 2D Agent-Based Crowd Simulation Architecture
The behavior and cognitive layer aims to reflect the cognitive processes involved in real humans' decision making and behavior execution, following the perceive-decide-act paradigm [5]. It incorporates the important cognitive components and mechanisms that are necessary for human decision making and behavior in daily-life situations. The ontology and knowledge base is used to keep track of the dynamically-changing environment and agent behaviors in the simulation.
The physical layer comprises the representation of the virtual environment together with agent path planning and navigation throughout the simulation. The virtual environment is represented using a framed quad-tree [6]. Path planning and collision avoidance are performed using a combination of D* [7] and a force-based mechanism [8] for steering the agents throughout the simulation.
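To make the combination of waypoint-based path planning and force-based steering concrete, the following minimal Python sketch blends a goal-seeking force towards the next planned waypoint with a Reynolds-style separation force from nearby agents. The function name, parameters and weights are illustrative assumptions, not the simulator's actual implementation.

```python
import math

def steering_force(agent_pos, agent_vel, waypoint, neighbours,
                   max_speed=1.4, separation_radius=0.8, separation_weight=2.0):
    """Combine goal-seeking (towards the next planned waypoint) with a
    repulsive separation force from nearby agents (illustrative only)."""
    # Seek force: steer towards the next waypoint at maximum speed.
    to_goal = (waypoint[0] - agent_pos[0], waypoint[1] - agent_pos[1])
    dist = math.hypot(*to_goal) or 1e-9
    desired = (to_goal[0] / dist * max_speed, to_goal[1] / dist * max_speed)
    seek = (desired[0] - agent_vel[0], desired[1] - agent_vel[1])

    # Separation force: push away from neighbours inside the separation radius.
    sep = [0.0, 0.0]
    for n in neighbours:
        offset = (agent_pos[0] - n[0], agent_pos[1] - n[1])
        d = math.hypot(*offset)
        if 0.0 < d < separation_radius:
            # Weight inversely with distance so closer agents repel more strongly.
            sep[0] += offset[0] / (d * d)
            sep[1] += offset[1] / (d * d)

    return (seek[0] + separation_weight * sep[0],
            seek[1] + separation_weight * sep[1])
```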
A 2D visualization component is used to display the simulation results of agents in the virtual environment. Figure 2 shows a screenshot of the simulation results at simulation time t = 5.0. The figure is a snapshot of a normal situation at the subway station, where different colors represent the different groups and social ties.
Figure 2: 2D Screenshot at Simulation time 5.0
3. Simulation Bridge Framework
For realistic 3D visualization of the simulation results from the 2D simulator, we have created a simulation bridge framework between the 2D simulator and a game engine. Figure 3 illustrates our framework.
Figure 3: The Simulation Bridge Framework
3.1 Choice of Game Engine
The game engine used is Conitec's Game Studio A7 [9]. This choice was made after we compared the performance of several reviewed engines [10] with a high number of entities. Game Studio outperformed the other engines under test, with better support for a large number of interactive entities. Figure 4 shows the performance of the tested engines [11] [12] [13] under the same conditions and constraints for 100 and 1000 agents respectively.
Figure 4: Comparison of engine performance
3.2 Data Acquisition
Our 3D virtual environment is built to scale in Maya [14] using polygonal geometry. Real-life textures are captured with a digital camera and mapped onto the 3D model. The polygonal meshes are then triangulated before being exported to FBX format using the Autodesk® FBX exporter [15]. The scene is then reconstructed in the game engine's world editor, where additional textures, lights and cameras are added to create a complete 3D map. Figure 5(a) shows a screenshot of a fully textured model of an underground train station in Singapore.
3.3 Conversion to 2D Maps
The 3D map is converted to multi-level 2D maps to feed the 2D agent-based crowd simulator's virtual world representation. A tool called the VE Tool (Figure 6) was implemented for defining objects in the virtual environment based on 2D maps and a color coding scheme. Figure 5(b) shows the corresponding 2D map of the underground station.
Figure 5: Conversion of 3D map to 2D maps. (a) 3D model of the underground station; (b) corresponding 2D map.
3.4 VE Tool
The VE Tool was built in C# and is used to define areas and objects in the virtual environment. This process is also called environment denotation, where modelers translate the information from the 3D model and 2D maps into a format that the simulation model can retrieve and understand. According to the structure of the virtual environment used in the simulation, the VE Tool provides the following steps to help generate the different information.
1. Define layers: Users can specify the number of planes in the virtual environment and associate a 2D map with each plane. For example, an underground station may consist of multiple levels, and hence the virtual environment will have multiple planes, where one plane represents one layer (one physical level).
2. Define the grid in each plane: Each plane is defined as a grid, and each grid cell represents a specific area in the real world. The modeler defines the number of cells in the grid by specifying the cell length, which is the actual length in the real world that each cell represents. The grid is used to define the different components in the virtual environment (a minimal sketch of reading a color-coded grid is given after this list).
3. Define accessible regions in each plane: Users can specify the coordinate system and the accessible regions of the environment. The accessible regions refer to the areas which an agent can access.
4. Define topological areas in each plane: Users can specify the name and other attributes of the accessible topological areas, and modify the attributes of individual cells in the grid if necessary.
5. Define objects in each plane: Users can define objects in the environment and specify the position and other attributes of each object.
6. Define the relationship between topological areas: Users can define the relationships between all topological areas in the environment, both within the same plane and across different planes.
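As an illustration of steps 2 and 3, the sketch below shows one way a color-coded 2D map could be sampled into a grid of cell types. The color-to-cell mapping, the Pillow-based image loading, and the function names are assumptions made for illustration only; the actual VE Tool (written in C#) may use a different scheme.

```python
from PIL import Image  # Pillow, used here only for illustration

# Hypothetical color scheme; the actual VE Tool mapping may differ.
COLOR_TO_CELL = {
    (255, 255, 255): "ACCESSIBLE",   # white: walkable floor
    (0, 0, 0):       "WALL",         # black: inaccessible structure
    (0, 0, 255):     "ESCALATOR",    # blue: link between planes
    (255, 0, 0):     "EXIT",         # red: exit/entrance area
}

def decode_plane(map_path, cell_length_px):
    """Sample the 2D map once per grid cell and return a grid of cell types."""
    img = Image.open(map_path).convert("RGB")
    cols = img.width // cell_length_px
    rows = img.height // cell_length_px
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Sample the centre pixel of each cell and look up its type.
            px = img.getpixel((c * cell_length_px + cell_length_px // 2,
                               r * cell_length_px + cell_length_px // 2))
            row.append(COLOR_TO_CELL.get(px, "UNKNOWN"))
        grid.append(row)
    return grid
```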
Figure 6: Screenshot of VE Tool
3.5 Agent Behavioral Animation
Different kinds of 3D human avatars are designed to represent agents according to the social composition of the crowd, e.g. child, adult, tourist, staff, etc. For each type, a different model is used to reflect the characteristics defined in the behavior layer of the 2D simulation architecture. Human-like motion and crowd behavior animations such as walk, run, fall, topple and death are created and attached to the 3D human avatars. Figure 7(a) shows a snapshot of the crowd at a particular time and illustrates the crowd composition.
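A minimal sketch of how an avatar model could be selected from the social attribute carried in the simulation results is given below. The model file names and the animation list are hypothetical placeholders; the actual assets and engine calls differ.

```python
# Hypothetical mapping from social attribute to avatar model and animation set.
AVATAR_MODELS = {
    "CHILD":   "child.mdl",
    "ADULT":   "adult.mdl",
    "TOURIST": "tourist.mdl",
    "STAFF":   "staff.mdl",
}
ANIMATIONS = ("walk", "run", "fall", "topple", "death")

def spawn_avatar(social_id, position):
    """Pick the avatar model matching the agent's social attribute (sketch only)."""
    model = AVATAR_MODELS.get(social_id, "adult.mdl")  # fall back to a default appearance
    return {"model": model, "position": position, "animations": ANIMATIONS}
```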
3.6 Event-Driven Visualization
The results from the 2D simulator are logged in a file buffer at a regular time interval. Our 3D map is loaded into the game engine's rendering engine, the trace file is read from the file buffer, and the 3D visualization is driven by the commands it contains. Common events stored in the trace file are the creation of agents, the motion of agents, the removal of agents and emergency situations. The motion of agents in the 3D environment is produced using the animation frames stored within the 3D model. Orientation and speed are calculated from the direction vector, and the speed of motion also scales the animation cycles (a minimal motion-update sketch is given after Figure 7). Visualization of emergency situations is made possible by the particle systems of the game engine: dynamic explosions are created and the agents' behavioral animation is updated to reflect the situation. Figure 7(b) shows an emergency situation and the death animation of agents.
Figure 7: (a) Crowd composition representation; (b) Emergency visualization.
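The sketch below illustrates how orientation, speed and an animation-rate scale could be derived from two consecutive trace samples, as described above. The reference walking speed and the function signature are assumptions; engine-specific calls for placing and animating the avatar are omitted.

```python
import math

def motion_update(prev_pos, prev_time, next_pos, next_time, walk_cycle_speed=1.4):
    """Derive orientation, speed and an animation-rate scale from two
    consecutive trace samples (illustrative; engine calls omitted)."""
    dx = next_pos[0] - prev_pos[0]
    dy = next_pos[1] - prev_pos[1]
    dt = max(next_time - prev_time, 1e-6)

    # Orient the avatar along the direction vector between the two samples.
    heading_deg = math.degrees(math.atan2(dy, dx))

    # Speed drives both the translation and the playback rate of the walk cycle.
    speed = math.hypot(dx, dy) / dt
    anim_rate = speed / walk_cycle_speed  # scale the walk animation accordingly

    return heading_deg, speed, anim_rate
```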
A typical log in the trace file is in the format "Action Name, AgentID, x-position, y-position, z-position, social_id, group_id, timestamp". For example, "CREATE, 02, -799.021, 300.034, 10.0, ADULT, INDIVIDUAL, 12.75" means "create an agent with ID 02 at the position vector (-799.021, 300.034, 10.0) at time 12.75, with the social attributes Adult and Individual". A list of common actions and their explanations is given in Table 1.
Log Action Name | Resulting Action in 3D World
CREATE          | Create an entity at position (x, y, z) with an appearance matching the social attributes.
RUN             | Move the entity from its last stored position to the position in the command and orient it towards the direction vector. The motion is made over the interval between the two timestamps.
DELETE          | Remove the particular agent from the simulation.
EMERGENCY       | Generate the emergency particle animation at position (x, y, z) at the timestamp given in the command.
DEATH           | The particular agent has died and the death animation is played at that timestamp.
Table 1: Common actions logged in the trace file
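The following sketch shows how a trace record in the format above could be parsed and dispatched to per-action handlers. The handler names and the trace file name are hypothetical; the actual framework runs inside the game engine's scripting environment rather than in Python.

```python
def parse_record(line):
    """Split one trace record of the form
    'Action Name, AgentID, x, y, z, social_id, group_id, timestamp'."""
    action, agent_id, x, y, z, social_id, group_id, ts = \
        (field.strip() for field in line.split(","))
    return {
        "action": action, "agent_id": agent_id,
        "position": (float(x), float(y), float(z)),
        "social_id": social_id, "group_id": group_id,
        "timestamp": float(ts),
    }

def dispatch(record, handlers):
    """Route the record to the handler registered for its action name."""
    handler = handlers.get(record["action"])
    if handler is not None:
        handler(record)

# Example usage with hypothetical handler functions:
# handlers = {"CREATE": on_create, "RUN": on_run, "DELETE": on_delete,
#             "EMERGENCY": on_emergency, "DEATH": on_death}
# for line in open("trace.log"):
#     dispatch(parse_record(line), handlers)
```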
4. Application of Framework
As a case study, we have applied this framework to the simulation of a crowd in an underground station. Figure 5(a) shows the 3D map of level 1 of the station, and the corresponding 2D color-coded map is illustrated in Figure 5(b). Human avatars, based on their social aspects, are created during run-time as shown in Figure 7. Visualization of emergency situations is also illustrated in the figure. The visualization performance was evaluated for a range of agent counts; Figure 8 shows the frame rates obtained for different numbers of agents within the visible viewport.
Figure 8: Frame rate performance for visible entities in the current viewport
5. Conclusion and Future Work
The bridge framework described in this paper was created to provide a fast method of creating virtual environments for use in a simulation engine. Using our tool, virtual environments can be defined rapidly, and the resulting multi-level 2D maps allow the 2D simulation engine to simulate a 3D environment on a 2D platform. This paper outlines our approach for creating this bridge framework to solve more general simulation problems. As a possible extension and future work, network support can be added to the framework to make it HLA [16] compliant, thereby eliminating the need for trace-based visualization.
Acknowledgements
This research is supported under the Defense Science and Technology Agency (DSTA) grant POD0613456. The authors would like to thank the other team members of COSMOS: Luo Linbo, Wang Yongwei David, Xiong Muzhou, Xiao Xian and Michael Lees.
References
[1] M. Y. H. Low, W. Cai, S. Zhou. A federated agent-based crowd simulation architecture. In Proceedings of the 2007 European Conference on Modeling and Simulation, pages 188-194, June 2007.
[2] Epic Games Inc., Unreal Tournament Game Engine, http://www.unrealtechnology.com, 2008.
[3] R. Adobbati, A. N. Marshall, A. Scholer, S. Tejada, G. A. Kaminka, S. Schaffer, C. Sollitto. GameBots: A 3D virtual world test bed for multi-agent research. In Proceedings of the Second International Workshop on Infrastructure for Agents, MAS, and Scalable MAS, May 2001.
[4] R. Minson and G. Theodoropoulos. Distributing RePast agent-based simulations with HLA. In European Simulation Interoperability Workshop 2004, paper 04E-SIW-046, June 2004.
[5] L. Luo, S. Zhou, W. Cai, M. Y. H. Low, F. Tian, Y. Wang, X. Xiao, and D. Chen. Agent-based human behavior modeling for crowd simulation. Computer Animation and Virtual Worlds, 19(3-4):271-281, 2008.
[6] D. Z. Chen, R. J. Szczerba, J. J. Uhran Jr. A framed-quadtree approach for determining Euclidean shortest paths in a 2-D environment. IEEE Transactions on Robotics and Automation, 13(5):668-681, Oct 1997.
[7] A. Stentz. Optimal and efficient path planning for partially-known environments. In Proceedings of the 1994 IEEE International Conference on Robotics and Automation, May 1994.
[8] C. W. Reynolds. Steering behaviors for autonomous characters. In Proceedings of the Game Developers Conference 1999, San Jose, California, pages 763-782, 1999.
[9] Conitec Datasystems, 3D GameStudio Game Engine A7, http://www.3dgamestudio.com/, 2008.
[10] DevMaster.net, Game Engines Review, http://www.devmaster.net/engines/, 2008.
[11] Delta3D Game and Simulation Engine (open source), http://www.delta3d.org/, 2008.
[12] Irrlicht 3D Engine (open source), http://irrlicht.sourceforge.net/, 2008.
[13] GarageGames, Torque Game Engine, http://www.garagegames.com/, 2008.
[14] Autodesk, Maya, http://usa.autodesk.com/adsk/servlet/index?siteID=123112&id=7635018, 2008.
[15] Autodesk, Autodesk FBX Technology, http://usa.autodesk.com/adsk/servlet/index?siteID=123112&id=683747, 2009.
[16] F. Kuhl, R. Weatherly, J. Dahmann. Creating Computer Simulation Systems: An Introduction to the High Level Architecture. Prentice Hall, ISBN 0-13-022511-8, Oct 1999.
[17] F. Tecchia, C. Loscos, and C. Yiorgos. Visualizing crowds in real-time. Computer Graphics Forum, 21(4):753-765, 2002.
[18] T. While. A survey of 3-D urban mapping and visualization capabilities from an army perspective. In 5th International Symposium on Remote Sensing of Urban Areas (URS 2005), Tempe, AZ, USA, March 2005.