Simulating the C2SM ‘fast’ robot

R. Codd-Downey¹, M. Jenkin¹, M. Ansell², H.-K. Ng², and P. Jasiobedzki²

¹ Computer Science and Engineering, York University, Canada
² MDA Corporation, Brampton, Canada

Abstract. A critical problem in the deployment of commercial teleoperated robots is the development of effective training tools and methodologies. This paper describes our approach to providing such training to robot operators tasked with investigating contaminated environments. The Vanguard MK2 and C2SMFast Sensor Simulator – a virtual reality based training system – provides an accurate simulation of the C2SM Crime Scene Modeler, an autonomous robotic system developed for the investigation of contaminated crime scenes. The training system simulates both the underlying robotic platform and the C2SMFast sensor suite, allowing operators to train on the system without physically deploying the robot or CBRNE contaminants. Here we describe the basic structure of the simulator and the software components used to construct it.

1 Introduction

Investigating crime scenes where narcotics or explosives may be found, or responding to man-made or natural disasters, puts police and first responders at significant safety risk, as the nature of the event may introduce hazardous contaminants into the scene. The extreme version of this problem occurs when environments have been contaminated by Chemical, Biological, Radiological-Nuclear or Explosive (CBRNE) substances. Initial response to such events involves using unmanned robotic platforms to identify the hazardous substances and their distribution, and to define threat levels in order to plan the remediation operations before manned teams are deployed. Such robotic platforms are equipped with cameras and detectors selected for specific threats and are teleoperated in the scene, providing live images and sensor data.

Fig. 5. A snippet of a scene environment description with robot position, orientation and Lua scene manager script.

Fig. 6. Definition of a spot light from a scene file. Extremely complex lighting models are possible.

Fig. 7. A snippet of a scene description. Ogre’s dotScene XML scene description is augmented with state information for the physics engine.

Figure 7 illustrates how a single object (here a sphere) is coded in the simulator. Each object has a name (here ellipsoid) that provides a unique label for later manipulation by scene scripts. The visual appearance of an object is defined in terms of a Blender mesh, which may be modified by a set of transformations. In addition to its visible appearance, an object may also have a physical existence in terms of the physics engine, including a mass and a physical extent. Here the object has a mass of 1 and a physical extent that matches its visual appearance, making it a movable object within the scene. Static physical objects are signalled by omitting the mass property.
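A minimal sketch of what such an entry might look like is given below. The node, position, scale and entity elements follow Ogre's dotScene conventions, but the physics markup (the mass and shape attributes) is illustrative only, as the paper does not give the exact schema used by the simulator's scene loader.

<node name="ellipsoid">
  <!-- visual appearance: a Blender-exported mesh plus transformations -->
  <position x="2.0" y="0.5" z="-3.0"/>
  <scale x="0.5" y="0.5" z="0.5"/>
  <entity name="ellipsoidEntity" meshFile="sphere.mesh"/>
  <!-- hypothetical physics extension; omitting mass would make the object static -->
  <physics mass="1.0" shape="sphere"/>
</node>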

3.2 Simulating other visual sensors

Simulating the infrared camera on board the robot relies on two key features of the rendering system. The first is Ogre3D’s shader framework, which permits fine-grained control over the rendering pipeline; we exploit this to transform light in the scene into “heat”, with the absence of light rendered as “cold”. The second is Ogre’s material schemes, which allow top-level switching of rendering techniques; this is used to switch from the traditional rendering pipeline to our own shaders that produce the desired effect. Since our shaders convert a grayscale light spectrum to colours that represent the infrared spectrum of light, we wish to give some objects a heat value irrespective of their “true” colours in the visual environment. This is accomplished by manipulating the material attributes of the object: a material scheme named thermal gives the object a high light value when rendering the thermal scene, and this value is then converted to a heat signature by the shader. Figure 8 illustrates how this thermal information is integrated into the rendering pipeline for an object.

material spherehot
{
    technique
    {
        pass
        {
            diffuse 1.000000 1.000000 1.000000
            specular 0.500000 0.500000 0.500000 12.500000
            texture_unit
            {
                texture mda.jpg
            }
        }
    }
    technique
    {
        scheme thermal
        pass
        {
            ambient 1.0 1.0 1.0 1.0
        }
    }
}

Fig. 8. The infrared camera pipeline. An object coloured with the spherehot material will appear as a normally lit sphere with the texture defined in mda.jpg applied to it. In the infrared camera, however, the thermal rendering scheme will be applied, which identifies the object as hot; it will then appear coloured red in the thermal camera.

Different levels of ambient noise can be introduced within this process. Figure 4(c) shows the simulated infrared camera in action: as the robot’s arm is perpendicular to the lighting direction, it shows as “hot”. The infrared camera is further illustrated in the sample scenario below.

The range camera is implemented in a similar fashion to the infrared camera, using a set of shaders that read each pixel’s value from the z-buffer and convert it to a grayscale colour value. Unlike the infrared camera, no object needs to be marked as special, i.e., there are no heat sources; rather, objects obtain their “colour values” based on how far they are from the camera. Instead of assigning special material schemes to individual objects, at render time all objects are assigned a common material that colours them based on their depth.

This customizable pipeline approach to scene rendering can be used to simulate a variety of different sensors. A night vision camera could be simulated by coupling a shader with a material and applying it to all objects at render time. Another possible sensor is a camera that only picks up ultraviolet light; this could be implemented by adding a UV material scheme and light values to each object that emits light in the ultraviolet spectrum, coupled with a shader much like the infrared shader described above.
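As a sketch of the ultraviolet case, such a material could mirror the spherehot material of Figure 8. The sphereuv name, the uv scheme name, and the light values below are illustrative assumptions, not part of the simulator as described.

material sphereuv
{
    technique
    {
        pass
        {
            diffuse 1.000000 1.000000 1.000000
        }
    }
    technique
    {
        // hypothetical scheme sampled only by the simulated UV camera
        scheme uv
        pass
        {
            ambient 0.8 0.8 0.8 1.0
        }
    }
}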

radiationData = function()
    -- robot pose: t[1..3] position, t[4..7] orientation quaternion
    t = {getObjectState("robotvanguard_body")}
    -- find the closest radiation source in the targets table
    best = 1
    x = {getObjectState(targets[1])}
    dbest = dist(x[1], x[2], x[3], t[1], t[2], t[3])
    for i, v in ipairs(targets) do
        x = {getObjectState(v)}
        d = dist(x[1], x[2], x[3], t[1], t[2], t[3])
        if d < dbest then
            dbest = d
            best = i
        end
    end
    -- bearing to the closest source relative to the robot's heading
    x = {getObjectState(targets[best])}
    dir = norml({x[1] - t[1], x[2] - t[2], x[3] - t[3]})
    head = quatRotate(t[4], t[5], t[6], t[7], 0, 0, -1)
    dotp = head[1] * dir[1] + head[2] * dir[2] + head[3] * dir[3]
    angle = math.acos(dotp)
    cp = crossProduct(dir, head)
    if cp[2] > 0 then
        angle = -angle
    end
    -- signal magnitude falls off with the square of the distance
    spectrum = {0, 0, 0, 0, 0, 0, 0, 0}
    magn = 1000 / (dbest * dbest + 1)
    return magn, angle, #spectrum, spectrum
end

Fig. 9. Script snippet illustrating how non-visual sensors are defined procedurally. This snippet defines the radiation landscape: the locations of a number of sources are known, and the closest source is identified. Given this source, a simulated magnitude, direction and signal spectrum are generated. The radiation sensor on board the robot adheres to this model of radiological measurement.

3.3 Simulating non-visual sensors

In addition to providing a realistic visual and physical environment, the C2SM simulator must also simulate a number of non-visual events, including radiation and chemical distributions. Whereas the visual and physical simulations can leverage existing software systems designed to simulate the visual and physical world, this is not the case for other classes of events. In order to provide developers of training scenarios the greatest flexibility in defining the interaction between a non-visual event and the environment, the C2SM simulator uses a procedural definition of these events, written as a script in Lua. Figure 9 shows how a simple radiation source model can be scripted within the simulator. (Of course, more sophisticated radiation landscapes can be defined using the same interface.) A Lua script file associated with a simulation defines the function radiationData, which must return the radiation signal in a specific format (here a magnitude and direction to the source, along with information about the signal spectrum). The state of the simulated world is exposed to the scripting language through the function getObjectState, which returns a vector describing the current state of an object in the simulation (its position, its orientation and its visibility status). A similar function setObjectState allows a script to manipulate the state of an object.
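As an illustration of how this interface generalizes, a simple chemical sensor could be scripted in the same style as Figure 9. Here getObjectState, dist and the targets table follow Figure 9, while the chemicalData name, its single-value return format and the inverse-square falloff are illustrative assumptions.

chemicalData = function()
    -- robot position (t[1..3]), as in Fig. 9
    t = {getObjectState("robotvanguard_body")}
    -- a single chemical source; "targets" follows Fig. 9
    x = {getObjectState(targets[1])}
    d = dist(x[1], x[2], x[3], t[1], t[2], t[3])
    -- concentration falls off with the square of the distance (illustrative)
    concentration = 500 / (d * d + 1)
    return concentration
end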

4 Sample scenario

Figure 10 illustrates a sample scenario with the C2SM Fast simulator. The operator’s goal is to identify the ‘hot’ infrared source in the environment. The environment is populated with a large number of textured spheres. These spheres all have an identical visual appearance, as seen in the operator’s console. Finding the ‘hot’ sphere involves using the C2SM infrared camera to distinguish between the various spheres in the environment. After scanning the environment using the Vanguard mobile base, the operator identifies the ‘hot’ sphere and then localizes it with the stereo cameras mounted on the C2SM sensor. From this stereo pair the operator can build a depth map of the hot object.

5 Discussion and future work

For sophisticated robotic systems – even teleoperated ones – to find widespread deployment, it is essential that effective training technologies be developed. With advances in video game engines and physical modelling systems it is now relatively straightforward to build realistic simulation systems using open source or easily licensed software components. When coupled with appropriate embedded scripting languages and tools for content generation, this provides a powerful set of inexpensive tools for the development of effective robotic simulation systems. Ongoing work involves developing realistic training scenarios and integrating live data from the robot into 3D models that can be loaded into the simulator. This will allow the simulation system to be used for task rehearsal when planning missions involving contaminated crime scenes.

Acknowledgments. This project is supported by Defence Research and Development Canada's Centre for Security Science, CBRNE Research and Technology Initiative, and by the Natural Sciences and Engineering Research Council of Canada.

Fig. 10. Sample scenario seeking a hot source (here a large textured sphere in the environment). The operator’s goal is to find and localize the ‘hot’ sphere in an environment of similar ‘cold’ spheres. Panels: (a) Time 0; (b) Time 1; (d) Time 2; (e) Time 2 stereo pair.

References

1. Cantoni, L., Kalbaska, N.: The waiter game: structure and development of an hospitality training game. In: 2nd Int. Conf. on Games and Virtual Worlds for Serious Applications (VS-GAMES), pp. 83–86 (2010)
2. Hashimoto, N., Kato, H., Matsui, K.: Evaluation of training effect of tooth scaling simulator by measurement of operation force. In: Proc. IEEE Int. Conf. on Virtual Reality, pp. 273–274 (2010)
3. Hess, R.: The Essential Blender: Guide to 3D Creation with the Open Source Suite Blender. No Starch Press (2007)
4. Ierusalimschy, R.: Programming in Lua, 2nd edn. Lua.org (2006)
5. Jarez, J., Suero, A.: Newton Game Dynamics. http://newtondynamics.com
6. Jasiobedzki, P., Ng, H.K., Bondy, M., McDiarmid, C.H.: C2SM: a mobile system for detecting and 3D mapping of chemical, radiological and nuclear contamination. In: Carapezza, E.M. (ed.) Proc. SPIE Conf. on Sensors, and Command, Control, Communications and Intelligence (C3I) Technologies for Homeland Security and Homeland Defense VIII, vol. 7305, pp. 730509-1–730509-10 (2009)
7. Jaspers, H.: Restoring and operating historical aviation trainers. In: Flight Simulation 1929–2029: A Centennial Perspective. London, UK (2004)
8. Junker, G.: Pro OGRE 3D Programming. Apress (2006)
9. Khattak, S., Sabri, H., Hogan, M., Kapralos, B., Finney, K., Dabrowski, A.: A serious game for community health nursing education and training. Journal of Health Professions Education 1 (2009)
10. Kwietniewski, M., Wilson, S., Topol, A., Gill, S., Gryz, J., Jenkin, M., Jasiobedzki, P., Ng, H.K.: A multimedia event database for 3D crime scene representation and analysis. In: Proc. 24th Int. Conf. on Data Engineering. Cancun, Mexico (2008)
11. Martynowicz, Z.E.: Digital avionics simulation for trainers. In: Proc. 1992 IEEE/AIAA Digital Avionics Systems Conference, pp. 434–439 (1992)
12. Petrasova, A., Dzanner, S., Chalmers, A., Farrer, J.V., Wolke, D.: The playability evaluation of virtual baby feeding application. In: 2nd Int. Conf. on Games and Virtual Worlds for Serious Applications (VS-GAMES), pp. 95–100 (2010)
13. Topol, A., Jenkin, M., Gryz, J., Wilson, S., Kwietniewski, M., Jasiobedzki, P., Ng, H.K., Bondy, M.: Generating semantic information from 3D scans of crime scenes. In: Proc. Computer and Robot Vision (CRV). Windsor, ON (2009)
14. Wang, W., Li, G.: Virtual reality in the substation training simulator. In: 14th Int. Conf. on Computer Supported Cooperative Work in Design (CSCWD), pp. 438–443 (2010)
15. White, S., Prachyabrued, M., Baghi, D., Aglawe, A., Reiners, D., Borst, C., Chambers, T.: Virtual welder trainer. In: Proc. IEEE Int. Conf. on Virtual Reality, p. 303 (2009)