Proceedings of the ASME 2015 International Mechanical Engineering Congress & Exposition IMECE2015 November 13-19, 2015, Houston, Texas
IMECE2014-37149

A SMART METHOD FOR DEVELOPING GAME-BASED VIRTUAL LABORATORIES

Zhou Zhang Stevens Institute of Technology Hoboken, New Jersey, USA
Mingshao Zhang Stevens Institute of Technology Hoboken, New Jersey, USA
Sven K. Esche Stevens Institute of Technology Hoboken, New Jersey, USA
Yizhe Chang Stevens Institute of Technology Hoboken, New Jersey, USA
Constantin Chassapis Stevens Institute of Technology Hoboken, New Jersey, USA
1. ABSTRACT
Virtual laboratories are one popular form of implementation of virtual reality. They are now widely used at various levels of education. Game-based virtual laboratories created using game engines take advantage of the resources of these game engines. While providing convenience to developers of virtual laboratories, game engines also exhibit the following shortcomings: (1) They are not realistic enough. (2) They require long design and modification periods. (3) They lack customizability and flexibility. (4) They are endowed with limited artificial intelligence. These shortcomings render game-based virtual laboratories (and other virtual laboratories) inferior to traditional laboratories.
This paper proposes a smart method for developing game-based virtual laboratories that overcomes these shortcomings. In this method, 3D reconstruction and pattern recognition techniques are adopted. 3D reconstruction techniques are used to create a virtual environment, which includes virtual models of real objects and a virtual space. These techniques can render this virtual environment fairly realistic, can reduce the time and effort of creating the virtual environment and can increase the flexibility in its creation. Furthermore, pattern recognition techniques are used to endow game-based virtual laboratories with general artificial intelligence. The scanned objects can be recognized, and certain attributes of real objects can be added automatically to their virtual models. In addition, the emphasis of the experiments can be adjusted according to the users’ abilities in order to achieve better training results. As a prototype, an undergraduate student laboratory was developed and implemented. Finally, additional improvements in the approach for creating game-based virtual laboratories are discussed.
Keywords: virtual laboratories, game-based virtual laboratories, Garry’s Mod, Kinect, artificial intelligence
2. INTRODUCTION
Virtual reality (VR) is commonly described as a computer-simulated environment in which the users can interact with the corresponding virtual representation of the real world [1]. VR is a complex interdisciplinary system, and thus it has remained an active research topic for several decades. VR has several important characteristics: (1) Virtual systems lead to a reduction of costs and resource consumption. Creating virtual systems instead of physical ones makes multiple copies of physical devices unnecessary, thus reducing the consumption of natural resources. In addition, virtual systems can also alleviate the requirements for human resources during training. (2) Virtual systems are inherently safer and less failure prone than physical ones. What happens in a virtual environment does not threaten the users physically. For example, a fire fighter training system can never hurt the fire fighters but instead can warn the trainees about potential dangers. (3) VR systems can be shared locally and remotely by multiple users simultaneously. Users can visit the same VR server and collaborate to complete the same tasks through the Internet, which provides them with significant flexibility and convenience. Moreover, VR is an integrated combination of immersion, interaction and imagination (3 I’s). Therefore, improvements to VR systems focus mainly on innovations in these 3 I’s [2].
Unfortunately, present VR systems have several shortcomings which keep them from gaining further popularity. Firstly, the virtual components cannot reproduce the corresponding real parts precisely. Virtual environments (VEs) (including models, avatars, maps and plots) are too artificial. Secondly, the design and implementation are costly in labor and time. The development of a mature VR platform usually takes several person-years. In addition, the common approach to creating a VE based on a specific VR platform is also laborious. Usually, this task is divided into several sub-tasks: data acquisition for the geometry and the texture of the real objects, virtual model creation and format conversion. Finally, the absence of natural interfaces prevents users from truly immersing themselves in the VE. Nobody feels comfortable when wearing heavy VR devices.
Virtual laboratories (VLs) are one of the most popular implementations of VR. They have all of the common characteristics of VR but also inherit all of its disadvantages. Therefore, a smart method for developing VLs is proposed here. This method reduces the development time, enables the adjustment of the training emphasis according to the level of the trainees, and provides for natural interfaces, immersion, enhanced performance, etc. In order to make the proposed method smarter, the inherent shortcomings of the VL platform must be overcome. Therefore, game-based virtual laboratories (GBVLs) were devised. GBVLs are created based on game engines (GEs), and these GEs provide a relatively mature platform for implementing GBVLs. Thus, the effort necessary for developing a new VR platform can be avoided. In Figure 1, the main components of a GBVL are depicted. A GBVL includes 5 basic components. The experimental tasks are assigned to the users (a group composed of students and instructors). Next, the users immerse themselves into the VL environment through I/O interfaces and interact with each other as well as with the VE. In this step, all of the necessary experimental data are integrated into the GE by middleware in order to generate the immersive VL environment.
Certainly, the implementation of a GE causes some side effects. Firstly, GEs lack customizability and flexibility. In essence, they are closed platforms, and any modification of GEs must comply with their rules and standards. In addition, they offer limited artificial intelligence (AI). In games, AI is only used to realize the logic that triggers pre-scripted game events. These disadvantages combined with the inherited shortcomings of VR render GBVLs (and other VLs) inferior to traditional laboratories. Some work aiming to improve the efficiency of the design and implementation of GBVLs has been reported. At first, more natural interfaces were developed [3]. Moreover, in order to reduce the work required for surveying the real world, a measurement tool was applied. This method significantly reduces the time needed for acquiring the information describing the real world [4]. Other efforts aimed at developing the VE automatically in real time are presented in this paper, including ones addressing the VE creation and scanned object recognition. In order to realize these objectives, a series of methods are introduced here. First, 3D reconstruction techniques are adopted to create VEs for GBVLs. 3D reconstruction techniques can be used to create models of real objects by scanning their surfaces with one or multiple scanners. 3D reconstruction involves several steps: (1) obtaining the surface data of real objects using scanners, (2) processing the raw surface data, (3) generating an improved point cloud and (4) creating the final model. Models for VLs created through 3D reconstruction methods need to be stored in valid 3D model files. Then, these models are imported into and rendered in the specific GE. In addition, during the creation of VEs, the acquisition and processing of the surface shape information of real objects are the most important aspects. The ability to adjust the training emphasis based on the trainees’ level is a much more advanced requirement. VLs should be able to assess the trainees’ weaknesses and provide them with customized training plans. In turn, customized training plans necessitate different training tools, which means that more virtual models of these training tools must be created to meet the various trainees’ preferences. Once the necessary virtual models are available, the GE can adjust the experimental tasks according to the trainees’ preferences and the necessary clues.
Here, the recognition of scanned objects is discussed. In Figure 2, the comparison between the traditional and proposed methods for creating VLs is depicted. Basically, the VL creation with the traditional method involves a series of complicated and error-prone manual procedures. First, measurements with manual tools must be performed, which is laborious. Secondly, the fact that there are several necessary intermediate steps increases the complexity of the VL creation. In the proposed procedure, the VL creation can be performed automatically in 3 basic steps. In addition, the reduction in intermediate steps decreases the systematic error, and the reduction in manual labor lowers the random error from manual operations. Thus, the precision of the created VE can be improved considerably.
FIGURE 1: MAIN COMPONENTS OF A GAME-BASED VIRTUAL LABORATORY
FIGURE 2: COMPARISON BETWEEN TRADITIONAL AND PROPOSED METHODS OF CREATING VIRTUAL LABORATORIES
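The three-step automated procedure contrasted in Figure 2 (scan, process, generate) can be sketched as a simple pipeline. The function names and placeholder bodies below are illustrative assumptions only, standing in for the actual Kinect- and GMod-based tool chain described later in this paper.

```python
# Illustrative skeleton of the proposed three-step VL creation pipeline
# (scan -> process -> generate). All bodies are placeholder assumptions.

def scan_scene(frames):
    """Step 1: fuse scanned frames into one raw point list (placeholder)."""
    return [pt for frame in frames for pt in frame]

def process_point_cloud(points):
    """Step 2: clean the raw cloud (placeholder: drop duplicate points)."""
    return sorted(set(points))

def generate_ve(points):
    """Step 3: mesh, texture and compile into engine-ready files (placeholder)."""
    return {"vertices": points, "compiled": True}

if __name__ == "__main__":
    frames = [[(0, 0, 1), (1, 0, 1)], [(1, 0, 1), (0, 1, 2)]]
    ve = generate_ve(process_point_cloud(scan_scene(frames)))
    print(len(ve["vertices"]))  # the duplicate point has been removed
```

Because each step only consumes the previous step’s output, the intermediate manual conversions of the traditional workflow disappear, which is the source of the error reduction claimed above.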
3. CREATING VIRTUAL ENVIRONMENTS WITH A 3D RECONSTRUCTION METHOD
Currently, the efficient creation of VEs is hampered by the conventional methods for acquiring the geometric parameters of real-world artifacts. Therefore, if sensors can replace the traditional measuring devices, the laborious work of surveying the real world can be completed very quickly. Moreover, middleware that can process the acquired raw data and generate ready-to-use VEs automatically would simplify the whole process of creating VEs significantly. The first problem can be solved using non-contact 3D scanners, and the second problem can be solved with the procedures proposed here.
3.1. Virtual Environment Creation: Model Creation and Virtual Space Creation
VEs represent implementations of VR. Generally, VEs are composed of models, avatars, a virtual space (VS) and plots. The creation of VEs with a 3D reconstruction method only focuses on the creation of models and the VS. The models refer to all virtual objects that can be manipulated within a VE. A virtual space is also known as a map. It provides a space where all of the interactions between the models and avatars take place. There are some differences between the model creation and the VS creation. While the model creation is only concerned with the surface reconstruction of the real objects, the VS must facilitate the interactions between the models implemented within it. Moreover, the models in VEs are impenetrable solids, and thus collision detection prevents the avatars and other models from penetrating into them. By contrast, VSs must be penetrable by the objects and avatars. The VS creation must overcome this constraint. Therefore, different creation procedures for models and VSs are presented here, even though they share some of the same processing steps.
3.2. 3D Reconstruction and 3D Scanners
3D reconstruction is the process of obtaining the shape information of real objects. It can directly generate a corresponding 3D model of a real object in a specific rendering system, or it can export a 3D model file that can be recognized by 3D modeling software, for example Autodesk 3ds Max, Pro/Engineer, CATIA, etc. The most important aspect of 3D reconstruction is the acquisition of the information of the scanned object surfaces. 3D reconstruction in real time has already been presented in the context of some special applications, which inspired the idea of creating VEs in real time. It has been a few years since 3D reconstruction techniques were first introduced, but their potential for implementation in various fields of application (e.g. archaeological research [5, 6], medical applications [7, 8], reverse engineering [9] and recovery of 3D shapes of deformable surfaces [10, 11]) makes them the subject of ongoing research.
The data used in 3D reconstruction is often obtained by 3D scanners, which are devices used to collect the spatial coordinates and color information of real objects. 3D scanners are divided into three main types according to the form of scanning: contact, non-contact active and non-contact passive.
Contact 3D scanners obtain the surface data of the object through physical touch (e.g. coordinate measuring machines). Non-contact active scanners emit some kind of radiation or light and detect the radiation passing through the object or the light reflected by the object. The possible types of emission include light, ultrasound and x-ray. Non-contact passive scanners detect the ambient radiation from objects. Passive methods can be very cheap, and they do not necessitate special hardware but rather only simple digital cameras. In addition, the differences between contact and non-contact scanning methods contribute to the diversity of 3D reconstruction approaches: active methods and passive methods. With the development of structured light cameras (laser scanners, infrared cameras, etc.), 3D reconstruction techniques have recently been improving quickly. Many innovative applications of 3D reconstruction have been reported, for instance the reconstruction of 3D faces of people [12, 13], 3D reconstruction of human motions [14, 15] and reconstruction of urban scenes from videos [16]. There are two main approaches for obtaining the shape information of real objects. The first method is to register the surface information of the real objects with dynamic scanning devices. For instance, ‘ReconstructMeQt’ can be used to build 3D surfaces with a Kinect and to export 3D surface files to other software [17]. The second method is to retrieve the surface information from 2D still images. For instance, ‘3DSOM Pro’ and ‘insight3d’ can be utilized to produce 3D models from a series of aligned pictures [18, 19]. A real-time 3D reconstruction system is ‘KinectFusion’. It uses a ‘Microsoft Kinect for Windows’ to acquire the shape information of objects and to process the resulting raw data to generate the corresponding 3D models in real time [20, 21].
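As a concrete illustration of the acquisition step, a depth image such as the one delivered by a structured-light scanner can be back-projected into a 3D point cloud with the standard pinhole camera model. The intrinsic parameters below are rough assumptions, not calibrated values for any particular device.

```python
import numpy as np

# Back-project a depth image (in mm) into a 3D point cloud using the
# pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
# fx, fy, cx, cy are assumed intrinsics for a 640x480 depth camera;
# a real system would substitute calibrated values.

def depth_to_point_cloud(depth, fx=580.0, fy=580.0, cx=319.5, cy=239.5):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop invalid (zero) depth

if __name__ == "__main__":
    depth = np.zeros((480, 640))
    depth[240, 320] = 1000.0                         # one valid sample at ~1 m
    cloud = depth_to_point_cloud(depth)
    print(cloud.shape)                               # (1, 3)
```

Repeating this back-projection for every frame, and registering the frames against each other, yields the raw point cloud that the subsequent processing steps consume.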
3.3. Selection of Game Engine and 3D Scanners
GEs provide the developers with various basic functions such as graphics rendering, sound generation, physics modeling, game logic, artificial intelligence, user interactions and networking [22, 23]. These characteristics of GEs render GBVLs collaborative, distributed and immersive. The purpose of GEs is to provide a suite of development tools for games that makes the common components of computer games reusable and adaptable. The GBVL presented here was built based on Garry’s Mod (GMod), one of the modifications of the ‘Source’ game engine. It is a user-friendly multi-player computer game platform that permits users to create their own applications, program these applications with G_LUA and plot their own stories [24, 25]. These advantages provide enough flexibility for building GBVLs.
Different scanners have different advantages and disadvantages. 3D scanners can acquire depth information of real objects directly, thus avoiding the complicated recovery of the depth information from 2D images. However, 3D scanners are usually very expensive. 2D scanners are much more affordable and popular, but they can only provide 2D information. In order to balance cost and efficiency, the Microsoft Kinect for Windows was employed here as the 3D scanner for acquiring the surface information of real objects. The application of the Kinect as a data acquisition device in laboratory environments was introduced previously [4, 26, 27]. The Kinect for Windows is composed of an accelerometer, a structured light projector, a microphone array and two cameras. One of the two cameras is an ‘infrared depth camera’, which captures the depth information of objects with a resolution of up to 640×480 pixels. The other camera is a ‘color (RGB)’ camera, which captures the color information of objects with a resolution of up to 1280×960 pixels. The absolute error of the depth values exceeds 1 mm, which implies that the accuracy of the models created with the Kinect is no better than 1 mm. Although the accuracy of the Kinect is not comparable with that of laser scanners, it meets the requirements of GBVLs since the main objective of GBVLs is to let students explore the underlying theories of the experiments and familiarize themselves with the operation of the laboratory equipment. Therefore, the quality of the VEs for GBVLs is not the main concern. The obvious advantages of the Kinect, as described above, are its affordability and its user-friendliness. Moreover, the corresponding software development kit provided by Microsoft supports the users in development and application.
FIGURE 3: VE CREATION FLOW CHART
3.4. Proposed VE Creation Procedures
There are two possible ways for scanning an object with the Kinect. One is to move the object relative to the Kinect, and the other is to move the Kinect relative to the object. Since it is not convenient to move the objects to be scanned, a hand-held Kinect was used here. In order to obtain the pose of the Kinect, a fast stereo matching algorithm was employed. A 3D point cloud of the real scene was obtained once the coordinates of the Kinect had been determined. The details of tracking the pose of the Kinect were described elsewhere [28].
The next problem to be solved is to create the VE from this point cloud. In Figure 3, the procedures for creating the VE are shown. As illustrated in this flow chart, a set of sequential color frames is captured to determine the pose of the Kinect using the ‘parallel tracking and mapping’ (PTAM) method [29, 30]. The PTAM method splits the tracking and mapping process into two threads that run in parallel. The model creation and VS creation share the same flow chart, but in order to create a penetrable map, the point cloud used to create the VS first has to be divided into several sub-point clouds. Each sub-point cloud represents one of the 6 orientations of the world. Then, every sub-point cloud is meshed and textured separately. During the model creation, a real object may be occluded by another object. Therefore, if an occlusion occurs, the k-means clustering algorithm is used to segment the two objects [31]. In addition, in the method proposed here, the scanned objects can be recognized. Hence, the processed point cloud is delivered to the classifier for classification. Finally, the point cloud is meshed.
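The division of the map point cloud into six directional sub-point clouds can be sketched as follows. Assigning each point to the nearest face of the cloud's axis-aligned bounding box is one simple assumption for how the six orientations might be obtained; the paper's actual partitioning criterion may differ.

```python
import numpy as np

# Split a room point cloud into 6 sub-point clouds, one per axis-aligned
# side of the room, by assigning each point to the nearest face of the
# cloud's bounding box. Column order of d below is (-x, -y, -z, +x, +y, +z).
# This nearest-face rule is an assumed stand-in, not the authors' method.

def split_into_six(points):
    points = np.asarray(points, dtype=float)
    lo, hi = points.min(axis=0), points.max(axis=0)
    # distance of every point to each of the 6 bounding-box faces
    d = np.hstack([points - lo, hi - points])
    face = d.argmin(axis=1)
    return {f: points[face == f] for f in range(6)}

if __name__ == "__main__":
    pts = [[0.1, 5, 5], [9.9, 5, 5], [5, 0.1, 5], [5, 9.9, 5],
           [5, 5, 0.1], [5, 5, 9.9], [0, 0, 0], [10, 10, 10]]
    subs = split_into_six(pts)
    print([len(subs[f]) for f in range(6)])
```

Each of the six resulting sub-point clouds can then be meshed and textured independently, which is what makes the map penetrable: no closed solid is ever generated around the avatar.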
3.5. Point Cloud Processing and VE Generation
In GMod, the model and VS files have different formats. Thus, there are two different flow branches when processing the point cloud and generating the final VE. Figure 4 illustrates the entire procedure of point cloud processing and VE generation. In the branch of model creation, the information of the point cloud is stored in StudioMDL data (SMD) files [32]. The RGB information of the point cloud is mapped onto the surfaces of a unit cube to generate six patches of texture files. The texture files are converted into VTF texture files with ‘VTFEdit.EXE’. Finally, the SMD, VTF and QC files are compiled into ready-to-use model files for GMod [33, 34]. In the branch of creating the VS, the point cloud is divided into six independent sub-point clouds. Each sub-point cloud is meshed separately. Then, these meshed sub-point clouds are written into the ‘Valve’ map format (VMF) file [35]. After that, the VMF file is compiled into a BSP map using three commands: VBSP.EXE processes the geometry, VVIS.EXE computes the visibility data and VRAD.EXE computes the lighting [25, 28]. The VS creation procedure shares the procedures for generating the VTF texture files with the model creation procedure. The Windows shell provides an API that allows these EXE files to be executed automatically [36], so that the VE creation can be accomplished without manual intervention.
The VS creation process is different, though. The most straightforward method for creating the VS is to detect all surfaces in the real world, mesh all of them into triangles and texture these triangles. However, this approach is too costly in terms of time and computer resources, especially considering the many unnecessary details of the world. In the map illustrated in Figure 5, every side of the laboratory was simplified into 2 large triangles, and the image of every side of the real laboratory was directly attached to the corresponding side. This processing method for the sides of the VS saves a lot of time and resources. Therefore, in order to decrease the number of vertices used for meshing and texturing, the keypoints of the laboratory were extracted from the sub-point clouds (e.g. the corner coordinates of one wall or one large cabinet of the laboratory). These keypoints coarsely represent the 6 sides of the laboratory from 6 directions. Although this map is not perfect, it is sufficient for the purpose of the implementation of experiments. More details about the creation of VSs can be found elsewhere [28].
4. VALIDATION BY CREATION OF GAME-BASED VIRTUAL LABORATORY
In order to demonstrate that the method proposed here is feasible, a GBVL was created. A flow rig experiment implemented in this GBVL is used in an undergraduate fluid mechanics course. Figure 5 shows the basic setup of this experiment, with the physical laboratory and physical experiment devices depicted on the left and the virtual models and map on the right. The virtual experiment set includes a step motor, a Pitot tube, a test tube and a test tube base. The physical laboratory is virtualized into the map of the GBVL. A more detailed description of this experiment was given elsewhere [37, 38].
In this prototype GBVL, the larger parts of the experimental devices, including the test tube and its base, were created by the traditional method, while the step motor and the Pitot tube were created by the method proposed here. The main reason for this is that the model creation requires a higher precision than the VS creation. Therefore, the pixel density of the models is much higher than that of the VSs. This means that large scanned objects require significant amounts of computer resources, which exceeded the capabilities of the hardware used to develop the GBVL presented here. Moreover, the PTAM method is only suitable for small spaces, and the dimensions of the test tube and its base also exceed this limitation. Therefore, they had to be built with the conventional modeling process (see Figure 1).
During the scanning of objects, it is a common occurrence that, from certain perspectives, some objects are occluded by other objects, as shown in Figure 6. In order to generalize the VE creation, segmentation and recognition of the obtained 3D point cloud become necessary. 3D point cloud segmentation is a method used to divide a point cloud into different clusters according to some specific criteria (e.g. color, intensity value, depth value, etc.). Therefore, the objects shown in Figure 6 (for instance the step motor, pump and Xplorer GLX) should be segmented and then reconstructed.
The scenario in Figure 6 was scanned with a hand-held Kinect. The segmentation was implemented based on the k-means clustering method, which divides the point cloud into k clusters [31, 39]. In Figure 6, there are 3 items and thus this point cloud was divided into 3 point clusters. In addition, even if the point cloud is segmented perfectly, there is usually at least one side of an object that is occluded by its support. Therefore, a point cloud database of various objects would be desirable, but this is left for future work.
For the scenario depicted in Figure 6, the accuracy in the object recognition was 83.4% for the step motor, 87.6% for the pump and 90.4% for the Xplorer GLX. The low accuracy in recognizing the step motor and the pump was attributable to their metal surfaces. This is due to the fact that the Kinect uses infrared light, and thus specular reflection in mirrors or metal causes problems for the Kinect. Therefore, the Kinect cannot capture the surface information of reflective surfaces very accurately [40].
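The k-means segmentation applied to the scene of Figure 6 (with k = 3 for the step motor, pump and Xplorer GLX) can be sketched as a plain Lloyd's-algorithm implementation. This is an illustrative sketch, not the authors' implementation from [31, 39], and the synthetic test cloud stands in for real scan data.

```python
import numpy as np

# Minimal k-means (Lloyd's algorithm) for segmenting a point cloud into
# k clusters, sketching the segmentation step described above.

def kmeans(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    centers = pts[rng.choice(len(pts), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return labels, centers

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # three well-separated synthetic "objects"
    cloud = np.vstack([rng.normal(c, 0.1, (50, 3))
                       for c in ([0, 0, 0], [5, 0, 0], [0, 5, 0])])
    labels, _ = kmeans(cloud, 3)
    print(len(set(labels.tolist())))  # 3 clusters recovered
```

In a real scan, color and depth values would typically be appended to the coordinate features, since, as noted above, geometry alone is unreliable on reflective metal surfaces.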
5. CONCLUSIONS
In this paper, a smart method for creating VLs was introduced. GMod was selected as the basic development platform for a pilot VL, and a 3D reconstruction technique was applied to improve the efficiency of creating VEs. In addition, a pattern recognition technique was employed. Three devices (step motor, pump, Xplorer GLX) were selected as examples for demonstrating the classification and recognition. In order to validate the proposed method, a GBVL was developed and implemented. Although this GBVL is not perfect, it shows that creating VLs in this manner is possible and promising. Until now, the method proposed here has only been applied for creating some small objects since the algorithm used to track the pose of the Kinect is limited to applications in small spaces. In the future, feature recognition capabilities would be desirable in order to broaden the range of applications of VLs. Therefore, future work will be focused on improving the precision of the model creation, the quality of the models and the ability to recognize objects including their features.
FIGURE 4: POINT CLOUD PROCESSING AND VE CREATION
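The compilation chain shown in Figure 4 (VBSP, VVIS, VRAD executed in sequence via the Windows shell) can be sketched as a small script. The tool directory and the bare `tool <mapfile>` invocation are assumptions for illustration; the Source SDK documentation specifies the exact flags.

```python
import subprocess
from pathlib import Path

# Sketch of automating the Source map compilation chain
# (VBSP -> VVIS -> VRAD) from a script instead of the Windows shell API.
# Tool paths and argument forms are illustrative assumptions.

TOOLS = ["vbsp.exe", "vvis.exe", "vrad.exe"]

def build_compile_commands(vmf_path, tool_dir="C:/SourceSDK/bin"):
    """Return the three commands that turn a VMF file into a BSP map."""
    return [[str(Path(tool_dir) / tool), str(vmf_path)] for tool in TOOLS]

def compile_map(vmf_path, tool_dir="C:/SourceSDK/bin"):
    for cmd in build_compile_commands(vmf_path, tool_dir):
        subprocess.run(cmd, check=True)   # abort the chain on any failure

if __name__ == "__main__":
    for cmd in build_compile_commands("lab.vmf"):
        print(cmd)
```

Separating command construction from execution keeps the chain testable and makes it easy to insert the shared VTF texture-generation step before the map compile.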
FIGURE 5: GBVL WITH FLUID EXPERIMENT SETUP
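The wall simplification used for the map in Figure 5, where each side of the laboratory is reduced to two large textured triangles spanned by its corner keypoints, can be sketched as follows. The corner ordering and the (triangle, uv) output layout are illustrative assumptions.

```python
# Sketch of the virtual-space wall simplification: each side of the
# laboratory is reduced to two triangles spanned by its four corner
# keypoints, with one photo of the wall mapped onto them.

def wall_to_triangles(corners):
    """corners: 4 (x, y, z) keypoints in order a, b, c, d around the wall."""
    a, b, c, d = corners
    tris = [(a, b, c), (a, c, d)]          # two triangles cover the quad
    uvs = [((0, 0), (1, 0), (1, 1)),       # texture coordinates so that one
           ((0, 0), (1, 1), (0, 1))]       # image spans the whole wall
    return list(zip(tris, uvs))

if __name__ == "__main__":
    wall = [(0, 0, 0), (4, 0, 0), (4, 0, 3), (0, 0, 3)]   # a 4 m x 3 m wall
    for tri, uv in wall_to_triangles(wall):
        print(tri, uv)
```

Two triangles per side means at most 12 triangles for the whole room shell, which is why this representation is so much cheaper than meshing every detected surface.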
FIGURE 6: OCCLUSION IN REAL SCENE
6. REFERENCES
[1] Earnshaw, R. A., 2014, “Virtual Reality Systems”, Academic Press.
[2] Burdea, G. & Coiffet, P., 2003, “Virtual Reality Technology”, John Wiley & Sons, Inc.
[3] Chang, Y., Aziz, E.-S., Zhang, Z., Zhang, M., Esche, S. K. & Chassapis, C., 2014, “A platform for mechanical assembly education using the Microsoft Kinect”, Proceedings of 2014 ASME International Mechanical Engineering Congress & Exposition, Montreal, Quebec, Canada, November 14-20, 2014.
[4] Zhang, M., Zhang, Z., Aziz, E.-S., Esche, S. K. & Chassapis, C., 2013, “Kinect-based universal range sensor for laboratory experiments”, Proceedings of 2013 ASME International Mechanical Engineering Congress & Exposition, San Diego, CA, USA, November 15-21, 2013.
[5] Fiz, I. & Orengo, H. A., 2007, “The application of 3D reconstruction techniques in the analysis of ancient Tarraco’s urban topography”, Proceedings of 35th International Conference on Computer Applications and Quantitative Methods in Archaeology, Berlin, Germany, April 2-6, 2007.
[6] De Reu, J., De Smedt, P., Herremans, D., Van Meirvenne, M., Laloo, P. & De Clercq, W., 2014, “On introducing an image-based 3D reconstruction method in archaeological excavation practice”, Journal of Archaeological Science, Vol. 41, pp. 251-262.
[7] Hibbard, L. S., Grothe, R. A., Arnicar-Sulze, T. L., Dovey-Hartman, B. J. & Page, R. B., 1993, “Computed three-dimensional reconstruction of median eminence capillary modules”, Journal of Microscopy, Vol. 171, pp. 39-56.
[8] McInerney, T. & Terzopoulos, D., 1996, “Deformable models in medical image analysis: a survey”, Medical Image Analysis, Vol. 1, No. 2, pp. 91-108.
[9] Werghi, N., Fisher, R., Robertson, C. & Ashbrook, A., 1999, “Object reconstruction by incorporating geometric constraints in reverse engineering”, Computer-Aided Design, Vol. 31, No. 6, pp. 363-399.
[10] Salzmann, M., Urtasun, R. & Fua, P., 2008, “Local deformation models for monocular 3D shape recovery”, Proceedings of 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, June 23-28, 2008, pp. 1-8.
[11] Varol, A., Shaji, A., Salzmann, M. & Fua, P., 2012, “Monocular 3D reconstruction of locally textured surfaces”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 34, No. 6, pp. 1118-1130.
[12] Choi, J., Medioni, G., Lin, Y., Silva, L., Regina, O., Pamplona, M. & Faltemier, T. C., 2010, “3D face reconstruction using a single or multiple views”, Proceedings of International Conference on Pattern Recognition, Istanbul, Turkey, August 23-26, 2010.
[13] Kemelmacher-Shlizerman, I. & Basri, R., 2011, “3D face reconstruction from a single image using a single reference face shape”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 33, No. 2, pp. 394-405.
[14] Cheung, G. K., Kanade, T., Bouguet, J. Y. & Holler, M., 2000, “A real-time system for robust 3D voxel reconstruction of human motions”, Proceedings of 2000 IEEE Conference on Computer Vision and Pattern Recognition, Hilton Head, SC, USA, June 13-15, 2000.
[15] Baak, A., Müller, M., Bharaj, G., Seidel, H.-P. & Theobalt, C., 2013, “A data-driven approach for real-time full body pose reconstruction from a depth camera”, Consumer Depth Cameras for Computer Vision, Springer London, pp. 71-98.
[16] Pollefeys, M. et al., 2008, “Detailed real-time urban 3D reconstruction from video”, International Journal of Computer Vision, Vol. 78, No. 2, pp. 143-167.
[17] http://reconstructme.net/projects/reconstructmeqt, accessed in August 2015.
[18] http://www.3dsom.com/, accessed in August 2015.
[19] http://insight3d.sourceforge.net, accessed in August 2015.
[20] http://msdn.microsoft.com/en-us/library/dn188670.aspx, accessed in August 2015.
[21] Izadi, S., Kim, D., Hilliges, O., Molyneaux, D., Newcombe, R. A., Kohli, P., Shotton, J., Hodges, S., Freeman, D., Davison, A. J. & Fitzgibbon, A. W., 2011, “KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera”, Proceedings of 24th Annual ACM Symposium on User Interface Software and Technology, Santa Barbara, CA, USA, October 16-19, 2011, pp. 559-568.
[22] Baba, S. A., Hussain, H. & Embi, Z. C., 2007, “An overview of parameters of game engine”, IEEE Multidisciplinary Engineering Education Magazine, Vol. 2, No. 3, pp. 10-12.
[23] Thorn, A., 2010, “Game Engine Design and Implementation”, Chapter 1, 1st Ed., Jones & Bartlett Publishers.
[24] http://garrysmod.com/, accessed in August 2015.
[25] http://source.valvesoftware.com/, accessed in August 2015.
[26] Zhang, M., Zhang, Z., Esche, S. K. & Chassapis, C., 2013, “Universal range data acquisition for educational laboratories using Microsoft Kinect”, Proceedings of 2013 ASEE Annual Conference and Exposition, Atlanta, GA, USA, June 23-26, 2013.
[27] Zhang, M., Zhang, Z., Chang, Y., Esche, S. K. & Chassapis, C., 2015, “Kinect-based universal range sensor and its application in educational laboratories”, International Journal of Online Engineering, Vol. 11, No. 2, pp. 26-35.
[28] Zhang, Z., Zhang, M., Chang, Y., Esche, S. K. & Chassapis, C., 2014, “An efficient method for creating virtual spaces for virtual reality”, Proceedings of 2014 ASME International Mechanical Engineering Congress & Exposition, Montreal, Quebec, Canada, November 14-20, 2014.
[29] Klein, G. & Murray, D., 2007, “Parallel tracking and mapping for small AR workspaces”, Proceedings of 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, November 13-16, 2007, pp. 225-234.
[30] Newcombe, R. A., Lovegrove, S. J. & Davison, A. J., 2011, “DTAM: dense tracking and mapping in real-time”, Proceedings of 2011 IEEE International Conference on Computer Vision, Barcelona, Spain, November 6-13, 2011, pp. 2320-2327.
[31] Hartigan, J. A. & Wong, M. A., 1979, “Algorithm AS 136: A k-means clustering algorithm”, Applied Statistics, Vol. 28, No. 1, pp. 100-108.
[32] https://developer.valvesoftware.com/wiki/Studiomdl_Data, accessed in August 2015.
[33] Zhang, Z., Zhang, M., Chang, Y., Esche, S. K. & Chassapis, C., 2013, “Real-time 3D model reconstruction and interaction using Kinect for a game-based virtual laboratory”, Proceedings of 2013 ASME International Mechanical Engineering Congress & Exposition, San Diego, CA, USA, November 15-21, 2013.
[34] Zhang, Z., Zhang, M., Chang, Y., Esche, S. K. & Chassapis, C., 2015, “Real-time 3D reconstruction for facilitating the development of game-based virtual laboratories”, Proceedings of 122nd ASEE Annual Conference & Exposition, Seattle, WA, USA, June 14-17, 2015.
[35] https://developer.valvesoftware.com/wiki/Valve_Hammer_Editor, accessed in August 2015.
[36] http://msdn.microsoft.com/en-us/library/windows/desktop/bb762154(v=vs.85).aspx, accessed in August 2015.
[37] Zhang, Z., Zhang, M., Tumkor, S., Chang, Y., Esche, S. K. & Chassapis, C., 2013, “Integration of physical devices into game-based virtual reality”, International Journal of Online Engineering, Vol. 9, No. 5, pp. 25-38.
[38] Tumkor, S., Zhang, Z., Zhang, M., Chang, Y., Esche, S. K. & Chassapis, C., 2012, “Integration of a real-time remote experiment into a multi-player game laboratory environment”, Proceedings of 2012 ASME International Mechanical Engineering Congress & Exposition, Houston, TX, USA, November 9-15, 2012.
[39] Kanungo, T., Mount, D. M., Netanyahu, N. S., Piatko, C. D., Silverman, R. & Wu, A. Y., 2002, “An efficient k-means clustering algorithm: analysis and implementation”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 7, pp. 881-892.
[40] Alt, N., Rives, P. & Steinbach, E., 2013, “Reconstruction of transparent objects in unstructured scenes with a depth camera”, Proceedings of IEEE International Conference on Image Processing, Melbourne, Australia, September 15-18, 2013.