A Mixed Reality for Virtual Assembly Ulises Zaldívar-Colado, Samir Garbaya, Paúl Tamayo-Serrano, Xiomara Zaldívar-Colado and Pierre Blazevic
Abstract— Mixed reality (MR) is a hybrid reality in which real and virtual objects are merged to produce an enriched interactive environment. Virtual Reality (VR) has been used to simulate production processes such as product assembly and the execution of industrial tasks. Augmented reality (AR) has been widely used as an instructional tool that helps the user perform tasks in real-world conditions. Most of these works focused on solving technical problems specific to the type of application, but they did not take advantage of the achievements realized in both VR and AR technologies. This paper presents a mixed reality system that integrates a virtual assembly environment with augmented reality. The approach is mainly based on the development of a hybrid tracking system for the synchronization of the virtual and the real hand of the user. The evaluation of this mixed reality approach showed a statistically significant improvement of user performance in the assembly task, compared to the same task performed in a virtual environment.
I. INTRODUCTION
Product assembly is one of the most important activities performed by humans in industrial processes. It is present mainly in the design, training, final production and maintenance stages. Its relevance in the production process is such that it represents up to 40% of the total production cost [1]. In any of these stages there may be setbacks, such as damage to the parts used, shortage of available stock, part design errors, or even manipulation that poses direct or indirect health risks to the users [2]. Virtual Assembly (VA) has been used to reduce these problems. Different definitions of Virtual Assembly have been given by different authors. In this work, for completeness, we use the definition proposed by Seth et al. [3]: “the capability to assemble virtual representations of physical models through simulating realistic environment behavior and part interaction to reduce the need for physical assembly prototyping resulting in the ability to make more encompassing design/assembly decisions in an immersive computer-generated environment”. The advantages of VA are: no physical parts are needed for design validation and assembly learning; damage to parts due to lack of experience is reduced; the stock of parts is not an issue in the virtual assembly learning phase; and ergonomics and human factors can be considered from the design stage onwards.
Ulises Zaldívar-Colado is with the University of Sinaloa, México, and with Laboratoire END-ICAP, U1179 INSERM, UVSQ, France (e-mail: [email protected]). Samir Garbaya is with Laboratoire END-ICAP, U1179 INSERM, ENSAM - ParisTech, Paris, 75013, France (e-mail: [email protected]). Paúl Tamayo-Serrano is with Laboratoire END-ICAP, U1179 INSERM, University of Versailles Saint-Quentin-en-Yvelines, Versailles, 78000, France (e-mail: [email protected]). Xiomara P. Zaldívar-Colado is with the University of Sinaloa, México (e-mail: [email protected]). Pierre Blazevic is with Laboratoire END-ICAP, U1179 INSERM, University of Versailles Saint-Quentin-en-Yvelines, Versailles, 78000, France (e-mail: [email protected]).
Augmented Reality (AR) is considered less obstructive than VR, where the user is immersed in the synthetic environment while being disconnected from the real world. AR allows the user to see the real world with virtual objects added over it. Therefore, AR replaces parts of the real world instead of replacing it completely [4, 5]. This feature of merging the real world with virtual objects makes AR very appealing to users. It allows AR to make up for one of the shortcomings of VA by providing the user with access to both the real and the virtual environments. AR has been used as an assistive means for the operator performing an assembly task. Some examples of virtual assembly approaches are described in the next section. Mixed reality (MR) is a hybrid reality where the real and virtual worlds are merged to produce an enriched interactive environment in which physical and digital objects co-exist. MR is a mix of reality and virtual reality, encompassing both augmented reality and augmented virtuality [23]. In this paper, we present a mixed reality system for virtual assembly. The following section presents the related work. Section III is dedicated to the system architecture. The system features are described in Section IV. Section V presents the experimental design. The data analysis and results are presented in Section VI. Finally, the conclusion and the research perspectives are presented in Section VII.
II. RELATED WORK
Since the emergence of augmented reality, it has been used as a tool in industry. In the case of assembly, two main applications have been addressed. The first presents a set of instructions on how to assemble a product; this will be referred to as augmented reality with an instructional focus. The second presents elements that the user can directly manipulate and assemble; this will be called the non-instructional focus of augmented reality. In the instructional focus, AR is used to show the user instructions on which actions to perform at each step of the assembly process. This focus varies from simply showing instructions, where the user has to select the next step to be displayed, to more advanced systems that can identify the parts to assemble, how to assemble them, and with which tools. Instructional AR has its advantages for the user performing the task;
however, this use of AR for assembly is different from the work described in this paper. The non-instructional focus of AR can include instructions for the user, but its main feature is the facility of interaction, manipulation and assembly of virtual parts. This application of AR is more complex to develop than the instructional focus, but it has a greater impact on the user performance in executing the assembly task. Pier Paolo Valentini [6] presented one of the first works on augmented reality for virtual assembly, a non-instructional focused system. It is based on a head-mounted display with a mounted webcam. The user wears a data glove over which a fiducial marker is attached to track the user's hand. Another fiducial marker is located at the center of the assembly desk to generate the virtual world frame of reference. The system has three grasping poses, cylindrical, spherical and pinch, which correspond to the conditions of grasping cylindrical, spherical and thin objects. The grasping technique is based on proximity: if the hand has a grasping pose within a minimum predefined distance, the system assumes that the user intends to grasp the part and grasping takes place. The assembly algorithm is based on proximity and velocity: if a part is close enough to its mating part and the travel speed is less than a predefined value, the parts are assembled. The scene is visualized at 25 frames per second. The system includes the features required for interaction, but its limitations are due to the hardware used, such as the data glove, the webcam and the computer processing power. The system does not implement natural grasping of virtual objects, since it has only three predefined poses for the fingers of the hand. In addition, the system suffers from visual obstruction between the camera and the fiducial markers: when a marker is occluded, the corresponding virtual objects temporarily cannot be rendered. Finally, the system does not include the dynamic behavior of parts, which prevents the representation of natural and realistic interaction. Wu et al. [7] created a well-defined workspace in their system. This space is delimited as a box in which the user can perform virtual assembly. The system uses an infrared pen as the interaction tool. Two cameras with infrared filters follow the trajectory of the pen and allow its three-dimensional position to be obtained. With these settings, it is possible to use the pen to select, move and rotate the virtual objects to assemble. To cycle between the possible actions that the user can perform with the pen, virtual buttons are included in the simulation; the color of the buttons indicates the command to be executed with the pen at the appropriate time. The system has multimodal interaction; it includes voice and gesture recognition as commands from the user. The system also makes use of multi-markers for creating the simulated scene; these markers allow a visual obstruction-free environment. This system has good features for virtual assembly; however, compared to the user's hand, the pen does not provide natural interaction. Ong et al. [8, 9] developed a system for virtual assembly with augmented reality that makes use of the user's bare hands. The system allows bimanual interaction to manipulate the parts in assembly and disassembly tasks. The algorithm identifies the palm and the fingers of the user, giving more importance to the thumb and index finger, which are used for interaction with the virtual objects. When the program
is launched, the system loads the CAD models to be used in the assembly simulation. During the assembly process, the interaction data is saved by the system for later analysis of the user performance and of the feasibility of the generated assembly sequence. The assembly algorithm can identify planar, cylindrical and conical part shapes. Additionally, the system includes the virtual tools necessary to assemble parts. The system was tested at a refresh rate of 15 frames per second and a resolution of 512 x 384 pixels. The main advantage of this system is the ability to use the user's hands in a natural way, without extra tools. However, the visualization of finger movements and hand gestures is limited because motion detection by the camera requires a direct line of sight. A very good feature of this system is the possibility of using virtual tools to perform the assembly task. Radkowski et al. [10] present a system that uses a Kinect® camera to identify and locate the user's hand, which is used as a cursor. The system can identify five hand gestures: closed fist, open hand, closed hand, index finger and waving hand. It allows using the hand as a cursor to interact with the virtual objects. The cursor is represented in the simulation as a yellow sphere. A menu of buttons is shown in the lower part of the screen, allowing the execution of actions such as move, rotate and scale, which can be applied to the virtual objects. This system includes a visual guide for performing the assembly task: when the user initiates the assembly of two parts, two small diamonds are displayed at the exact positions where the two parts have to be joined. When one diamond is close enough to the other, a vector is defined and the parts can only be moved along the axis defined by that vector. This prevents incorrect assembly operations from taking place. Ng et al. [11] extended the work previously presented in [8, 9], proposing ARDnA (Augmented Reality Design and Assembly). One of its novel features is the facility for the user to import and use basic parts to assemble a product. This avoids the need for a CAD system to create complex 3D models of parts. However, once the structure is created with basic parts, it can be exported and modified in a CAD system to include more details in the model. Another new feature is the use of real parts in the simulation; this is achieved by attaching fiducial markers to the parts to identify them and project their CAD models in the simulation. The virtual model of the real part is transparent, so both the real and the virtual object can be seen at the same time. In addition to working with the user's bare hand without any invasive hardware since early versions of the system, the current version includes a gesture recognition algorithm that allows the user to modify the models of basic parts and to issue commands. The gestures are based on the computation of the distance between the fingers and the center of the hand. In an example described in the paper, the user created a car using basic parts and then added a real-life motor as another virtual part. Murakami et al. [12] developed an augmented reality system with portable haptic devices. The system has a physics engine for collision detection. The authors conducted an experiment with 10 participants, asking them to assemble a rail vehicle 10 times with haptics and 10 times without haptics. They found that the use of haptics decreases the number of errors in the assembly process.
III. SYSTEM ARCHITECTURE
The developed system runs on an HP® WorkStation Z420 PC with an Intel® Xeon™ E5 processor @ 2.80 GHz, 16 GB of RAM and an Nvidia® GeForce® GTX 660 graphics card with 2048 MB of memory. For the capture of the real-world workspace and the recognition of the fiducial markers, a Logitech™ QuickCam® Pro 9000 webcam with a resolution of 640 x 480 pixels at 30 frames per second was used. A Polhemus™ Fastrak® tracker was used to capture the position and orientation of the user's hands. The tracker supports up to four sensors with six degrees of freedom. To capture the movements of the fingers and the wrist, a CyberGlove II® data glove with 22 sensors was used. The assembly scene is visualized on a 50-inch LCD TV screen and the auditory feedback is provided by conventional stereo speakers. The hardware architecture is presented in Fig. 1.
Figure 1. Hardware architecture
Figure 2. Software architecture
The application was developed in the C++ programming language. The system is made of two modules: the augmented reality module and the motion capture module. The augmented reality module was developed with the ARToolKit library, version 2.72. This module manages the video capture: it detects the webcam connected to the PC and opens the port to deliver a flow of images to the system at 30 images per second and a resolution of 640 x 480 pixels. The augmented reality module detects the fiducial markers in the scene captured by the webcam and computes the position and orientation of these markers relative to the webcam, so that virtual images can be mapped correctly onto the real scene captured by the camera. The latency of the display of the MR scene depends on the graphics processing and the physics engine; it is kept to a minimum so that it does not exceed the time interval between two successive images of the video stream. The motion capture module was developed with the VHT ToolKit® of CyberGlove Systems®; this toolkit allows retrieving both the geometric data of the Fastrak® and the data of the glove. The dynamic behavior of the objects was obtained by using the PhysX™ physics engine, version 2.8.1, of the Nvidia® Corporation. The engine was used to define the physical properties of the worktable and to provide the virtual parts with gravity and collision properties. For auditory feedback, a sound module using the Windows® Multimedia libraries was implemented. Finally, the visualization of the scene is performed using OpenGL®. The software architecture is presented in Fig. 2.
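The per-frame marker detection step of the augmented reality module can be summarized with the classic ARToolKit 2.x calls. The following is a minimal sketch, assuming that the camera parameters and the marker pattern have already been loaded elsewhere (the globals g_patt_id, g_patt_width and g_patt_center are placeholders for that initialization and are not part of the library):

```cpp
// Sketch: per-frame fiducial marker detection with ARToolKit 2.x.
#include <AR/ar.h>
#include <AR/video.h>
#include <AR/gsub.h>

extern int    g_patt_id;          // pattern id returned by arLoadPatt()
extern double g_patt_width;       // marker width in millimetres
extern double g_patt_center[2];   // usually {0.0, 0.0}

bool detectWorkspaceMarker(double gl_para[16])
{
    ARUint8 *dataPtr = arVideoGetImage();            // grab the current video frame
    if (dataPtr == NULL) return false;

    ARMarkerInfo *marker_info;
    int marker_num;
    if (arDetectMarker(dataPtr, 100, &marker_info, &marker_num) < 0)
        return false;                                // detection failed

    // Keep the best-matching visible marker for the loaded pattern.
    int best = -1;
    for (int i = 0; i < marker_num; i++) {
        if (marker_info[i].id != g_patt_id) continue;
        if (best < 0 || marker_info[best].cf < marker_info[i].cf) best = i;
    }
    if (best < 0) return false;                      // marker not visible this frame

    double patt_trans[3][4];                         // marker pose relative to the camera
    arGetTransMat(&marker_info[best], g_patt_center, g_patt_width, patt_trans);
    argConvGlpara(patt_trans, gl_para);              // convert to an OpenGL modelview matrix
    return true;
}
```

The resulting gl_para matrix can then be loaded as the OpenGL modelview matrix before drawing the virtual parts, which is how the virtual scene is registered with the captured video frame.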
A. Determination of the Virtual World's Coordinate System
To identify the position and orientation of the worktable and the center of the virtual world's coordinate system, a multi-marker was used. The multi-marker is a special kind of fiducial marker: while a fiducial marker is a single marker, a multi-marker is a matrix of individual markers working as a single marker. When using a multi-marker, if one marker is detected, the poses of the other markers can be computed. If more than one marker is detected, the mean of the transformation matrices of the detected markers is computed and the application receives a single transformation matrix. The use of a multi-marker has two main advantages over a single marker. First, changes in the illumination that are not perceivable by the human eye are perceived by the camera. With a single marker, these changes in illumination are continuously perceived and appear as a flickering of the generated virtual objects. With a multi-marker, this flickering is reduced whenever more than one marker is detected. Second, with a single marker, the virtual object cannot be displayed when the line of sight between the camera and the marker is obstructed. Since the multi-marker is a group of markers working as one, the visualization of objects is not affected by occlusions as long as at least one marker is visible. Thanks to these advantages, a tracking system based on multi-markers is more reliable than one based on a single marker.
B. Dynamic Behavior of Objects
Jayaram et al. [13] state the importance of simulating physical behavior in the virtual assembly task. This was confirmed by Burdea [14], who stated that modeling physical behaviors can significantly increase the sense of immersion and interactivity of the user, especially in applications that require a high level of manipulation. The physics engine allows creating shapes and geometric models and assigning them to the virtual models of the scene. The dynamic behaviors assigned to the virtual objects are: 1. collision detection between objects and between the objects and the worktable, and 2. the gravity of the objects. In order to perform an assembly task, the parts and their dynamic behavior must be simulated using non-convex shapes. However, the physics engine uses convex shapes by default. A convex shape is a primitive such as a sphere, a capsule or a rectangular prism. To implement exact collision detection of non-convex shapes, the triangular mesh of the virtual parts must be used. Simulation using non-convex shapes is computationally more expensive than using convex shapes; however, it provides more realistic behavior of the parts during the assembly manipulation. Fig. 3 presents convex and non-convex shapes of two models of parts.
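To make this trade-off concrete, the following is a minimal, library-agnostic C++ sketch of how a part could be registered with either an exact (triangle-mesh) or an approximate (convex) collision shape. The PhysicsWorld, TriangleMesh and addPart names are hypothetical placeholders, not the PhysX 2.8 API used in the system.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

struct TriangleMesh {                      // non-convex geometry of a CAD part
    std::vector<Vec3>     vertices;
    std::vector<unsigned> indices;         // three indices per triangle
};

struct PartShape {                         // internal record kept by the wrapper
    TriangleMesh mesh;
    float        mass;
    bool         exactCollision;           // true: triangle mesh, false: convex hull
};

// Hypothetical wrapper standing in for the physics engine.
class PhysicsWorld {
public:
    void setGravity(const Vec3& g) { gravity_ = g; }

    // Registers a part with gravity and collision enabled.  When exactCollision
    // is true the full triangle mesh is used for collision detection (more
    // accurate, slower); otherwise a convex approximation would be built.
    int addPart(const TriangleMesh& mesh, float mass, bool exactCollision) {
        parts_.push_back({mesh, mass, exactCollision});
        return static_cast<int>(parts_.size()) - 1;   // handle of the new part
    }

private:
    Vec3 gravity_{0.0f, -9.81f, 0.0f};
    std::vector<PartShape> parts_;
};

// All assembly parts are registered with exact (non-convex) collision shapes.
inline void setupAssemblyScene(PhysicsWorld& world, const std::vector<TriangleMesh>& parts) {
    world.setGravity({0.0f, -9.81f, 0.0f});
    for (const TriangleMesh& m : parts)
        world.addPart(m, /*mass=*/1.0f, /*exactCollision=*/true);
}
```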
IV. SYSTEM FEATURES
The system integrates the specific features required to perform assembly tasks while providing the user with natural interaction in a realistic environment.
Figure 3. A) Convex shapes. B) Non-Convex shapes.
C. Motion Capture of the User's Hands
A user cannot directly manipulate virtual objects with their real hands; virtual objects can only interact with other virtual objects. It is therefore necessary to synchronize the real hands of the user with the virtual hands, to give the user the illusion of manipulating the virtual parts with their real hands in a natural way. The synchronization of virtual hands with real hands while allowing free movement of the user is a problem that has not yet been completely solved. This is due to the technique used for hand tracking in augmented reality systems. The typical implementation is camera-based and is therefore subject to occlusion of the direct line of sight between the camera and the user's hands. This problem limits the movement of the user's hand and hence does not allow natural manipulation of the virtual parts. In order to solve this problem, we previously proposed a solution described in [15]. This solution is a hybrid approach that uses two tracking techniques: a vision-based method using fiducial markers and an electromagnetic tracker of the Fastrak type. The vision-based method is implemented with fiducial markers attached to both hands of the user. The advantage of this method is that the transformation matrix of the marker is used to position and rotate the virtual hands exactly where the real hands are. However, this method has two main disadvantages: 1. the visual obstruction of the markers by any object, for example by the user's head or simply by the orientation of the hand in a posture where the camera's line of sight to a given marker is obstructed; and 2. the velocity of the hand movement is limited by the maximum speed at which a clear image of the marker can still be captured. These two limitations constrain the user's activities in the manipulation of the virtual parts. The electromagnetic tracker has the advantage of being immune to optical occlusion and it does not limit the velocity of the hand movement. However, the accuracy of the signal of this tracker can be affected by other electromagnetic signals present in the workspace and by the presence of metallic objects such as the frame of the physical worktable. Additionally, in order to obtain an exact synchronization, it is necessary to position the electromagnetic emitter at the center of the workspace; this might constrain the movement of the user's hand when executing the assembly task. The hybrid method integrates both systems, the vision-based system and the electromagnetic tracking system, and takes advantage of both technologies. In practice, the receiver of the electromagnetic tracker and a fiducial marker are attached to each hand of the user. The process is made of two stages. In the first stage, the method uses the fiducial marker to synchronize the position and orientation of the virtual hand with the real hand. If the camera does not detect the fiducial marker, the method proceeds to the second stage. The detection of the marker by the camera can be interrupted when the line of sight between the marker and the camera is obstructed or when the velocity of the hand movement is so high that the camera cannot detect the marker.
In the second stage, the method uses the last detected coordinates of the marker, saved as a transformation matrix containing the position and orientation of the hand ($M_{marker}^{t_0}$), together with the coordinates of the receiver of the electromagnetic tracker ($M_{receiver}^{t_0}$). Then, at each simulation step, the offset between the coordinates of the receiver at time $t_0$ ($M_{receiver}^{t_0}$) and the coordinates of the receiver in its current position at time $t_i$ ($M_{receiver}^{t_i}$) is computed as a new transformation matrix ($M_{offset}^{t_i}$). This offset is finally added to the last known coordinates of the marker ($M_{marker}^{t_0}$), giving the current coordinates at which the virtual hand must be placed ($M_{hybrid}^{t_i}$). This method maintains the synchronization of the virtual hand with the real hand even when the markers are not detected. When the camera detects the marker again, the method returns to the first stage.
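One plausible formulation of this update, assuming that all poses are expressed as 4x4 homogeneous transformation matrices and that the camera and electromagnetic tracker reference frames are aligned (the text above only states that the offset is “added” to the last known marker pose), is:

$$M_{offset}^{t_i} = M_{receiver}^{t_i}\left(M_{receiver}^{t_0}\right)^{-1}, \qquad M_{hybrid}^{t_i} = M_{offset}^{t_i}\, M_{marker}^{t_0}$$

Under this reading, the offset is the rigid motion of the receiver since the last successful marker detection, and applying it to the stored marker pose propagates the virtual hand while the marker remains invisible.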
The hybrid tracking system solves the problems of obstruction and of the limitation of the velocity of the hand movement. Positioning the electromagnetic emitter at the center of the workspace is no longer necessary. In addition, this method does not require aligning the marker with the tracker receiver in a particular position and orientation on the user's hand: since the offset is computed between two instants in time, any fixed misalignment is compensated for.
D. Motion Capture of the User's Fingers
The flexion of the fingers of the user's hands is captured by the data glove and processed by the VHT ToolKit®. This library delivers these values as the flexion angle of each phalanx, the abduction and adduction movements of the fingers, and the flexion of the wrist. These data are used to animate the virtual hand according to the movement of the user's real hand.
E. Assembly Technique
The mating phase of the assembly operation is performed with the Snap-Fitting technique, originally developed by Dewar et al. [16] and subsequently improved by Zaldivar et al. [17]. Snap-Fitting is a proximity-based assembly technique. During the assembly operation of two parts, the manipulated part is called the primary part and its mating part is called the receiving part. If the two parts are within a predefined minimum distance, a geometric transformation is applied to the primary part to move it to its final position and orientation relative to the receiving part. Additionally, a vector is defined along the centerline of each part. The distance between the two parts is measured by computing the distance between the extremities of these vectors, which allows the parts to be fitted together correctly [17]. A minimal sketch of this proximity test is given below.
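The following C++ sketch illustrates the proximity test behind this technique; the Part structure, the choice of comparing both vector extremities and the snapThreshold parameter are illustrative assumptions rather than the exact implementation.

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    float length() const { return std::sqrt(x * x + y * y + z * z); }
};

struct Part {
    Vec3 axisStart;   // extremities of the centerline vector used for mating
    Vec3 axisEnd;
};

// Returns true when the primary (manipulated) part is close enough to the
// receiving part for the snap-fit transformation to be applied.
bool snapFitReady(const Part& primary, const Part& receiving, float snapThreshold)
{
    // Distance between corresponding extremities of the two centerline vectors:
    // both must be small so that position AND orientation are roughly correct.
    float dStart = (primary.axisStart - receiving.axisStart).length();
    float dEnd   = (primary.axisEnd   - receiving.axisEnd).length();
    return dStart < snapThreshold && dEnd < snapThreshold;
}
```

When snapFitReady returns true, the system would apply the final snap transformation that moves the primary part to its mating position and orientation.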
F. Visual Cues
Carlson et al. [18] and Gomes de Sá et al. [19] confirmed that visual cues can help the user to perform the assembly task, especially in the learning phase. With the aim of reducing the cognitive load of the user and consequently facilitating the completion of the assembly task, two visual cues are provided. These cues take the form of a diagram that represents the parts that can be assembled together, and of lines that indicate the orientation in which the parts must be assembled. The assembly environment is presented in Fig. 4. In the upper right part of the screen, a diagram with all the virtual parts is visualized. Lines in the diagram indicate which parts can be assembled with which others. At this stage, the diagram shows all the possible connections between the parts, but it indicates neither the assembly sequence nor the orientation of the parts. When the user grabs two parts that can be assembled together, the second visual cue is presented. It takes the form of small yellow lines visualized between the parts. These lines help the user by indicating the distance between the points that must be joined to execute the assembly operation, and they provide information about the adequate orientation of the parts for a correct assembly operation.
Figure 4. The mixed reality environment with the assembly diagram on the upper right part of the screen
G. Auditory Cues
Gupta et al. [20] stated that the sound of collisions is a simple form of auditory feedback in a virtual environment; it can be used as a cue to indicate the contact between parts and the intensity of the impact between them. To enrich the simulation and provide the user with auditory cues, three different sounds were implemented in the system reported in this paper. They are triggered when a part falls on the worktable, when a part collides with another part, and when two parts are joined together at their final assembly position and orientation. Fig. 5 illustrates the mixed reality setup of the assembly environment.
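As a minimal sketch, such event-driven cues could be triggered through the Windows Multimedia API as shown below; the .wav file names and the event enumeration are illustrative placeholders, not the actual sound module.

```cpp
// Requires linking against winmm.lib.
#include <windows.h>
#include <mmsystem.h>

enum class AssemblyEvent { PartDroppedOnTable, PartCollision, PartsSnapped };

// Plays the cue asynchronously so the simulation loop is not blocked.
void playAuditoryCue(AssemblyEvent e)
{
    const char* file = nullptr;
    switch (e) {
        case AssemblyEvent::PartDroppedOnTable: file = "drop.wav";      break;
        case AssemblyEvent::PartCollision:      file = "collision.wav"; break;
        case AssemblyEvent::PartsSnapped:       file = "snap.wav";      break;
    }
    if (file != nullptr)
        PlaySoundA(file, NULL, SND_FILENAME | SND_ASYNC);
}
```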
Figure 5. The mixed reality system
V. EXPERIMENTAL DESIGN
In order to evaluate the effectiveness of the implemented mixed reality assembly environment, an experimental study was conducted. This study is based on a performance metric represented by the time taken to assemble parts A and H of the product presented in Fig. 6 and Fig. 7. This product is a mechanical transmission system made of eight cylindrical parts. To assemble the product, the operator has to take into account the geometric constraints and the precedence relationships between the parts. In the complete version of this work, the operator will have the possibility to assemble the mechanical system by finding feasible assembly sequences.
Figure 6. CAD models of parts of the product
Figure 7. Cross section of the transmission system
The task defined in this experiment consists of assembling parts A and H of the product in the mixed reality environment and comparing the user performance with the scores obtained in the virtual environment and in the real world. In the experiment carried out in the virtual environment, the user manipulated the parts using the same data glove, in a desktop setup, with audio feedback and without haptic sensation. The application was developed using the same software platform, including the VHT ToolKit® and the PhysX™ engine, and it ran on a computer with equivalent processing performance. The parts have the same graphics and dynamic properties as described for the mixed reality environment. The visualization was maintained at 30 frames per second. Eleven subjects from the university community participated in the experiment. All were volunteers with a mechanical engineering background, male, right-handed and aged between 25 and 43 years. The experimenter explained the MR environment (Fig. 8) and, after a 15-minute training session to become familiar with the mixed reality interaction devices, each subject was asked to assemble part A with part H. The time taken to complete the task was automatically recorded by the computer for further analysis. The virtual parts were located on the worktable at a distance of 30 cm from each other. During the assembly operation, only one part can be manipulated at a time; the mating part is constrained to remain fixed on the worktable. The user is requested to grasp the part with his right hand and insert it into the part fixed to the table.
Figure 8. Task condition in MR
Figure 9. Task condition in VR
Figure 10. Task condition in RW

TABLE I. RESULTS OF THE EXPERIMENTS

Subject | VR Environment TCT (s) | MR Environment TCT (s) | Real World TCT (s)
1       | 34.08 | 8.84  | 6.13
2       | 20.84 | 5.43  | 3.07
3       | 27.08 | 8.36  | 5.19
4       | 38.69 | 16.18 | 4.62
5       | 16.92 | 8.66  | 4.95
6       | 15.81 | 12.51 | 4.52
7       | 24.75 | 13.18 | 4.00
8       | 10.89 | 9.66  | 5.47
9       | 23.42 | 17.16 | 4.61
10      | 18.31 | 14.65 | 6.04
11      | 18.39 | 6.36  | 2.70
Mean    | 22.65 | 10.99 | 4.66
STDEV   | 8.01  | 3.90  | 1.00
The user performance was measured in terms of the Task Completion Time (TCT). This includes the grasping time and the time taken to move a part and locate it in its final assembly position and orientation. The TCT recorded for each subject in the MR environment was compared to the data collected for the same experiment carried out in the VR environment (Fig. 9) and in the real world (RW) (Fig. 10). The collected data is presented in Table I.
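For reference, a worked comparison of the mean completion times reported in Table I (simple arithmetic on the reported means):

$$\frac{\overline{TCT}_{VR} - \overline{TCT}_{MR}}{\overline{TCT}_{VR}} = \frac{22.65 - 10.99}{22.65} \approx 0.51$$

i.e. the mean task completion time in the MR condition is roughly half of that measured in the VR condition.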
VI. DATA ANALYSIS AND RESULTS
In order to determine the added value and the effect of the mixed reality approach developed in this work, a between-groups analysis of variance (ANOVA) was carried out, with the task completion time (TCT) as the dependent variable and the experimental condition as the factor. The results of the ANOVA showed a statistically significant main effect of the experimental condition on the task completion time: [F(2, 32) = 32.74, p < 0.001] at the 95% confidence level.

TABLE II. RESULTS OF THE ANOVA ANALYSIS
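For completeness, the F statistic of a one-way between-groups ANOVA with $k$ conditions and $N$ observations in total follows the standard formulation (not specific to the software used in this study):

$$F = \frac{SS_{between}/(k-1)}{SS_{within}/(N-k)}$$

where $SS_{between}$ and $SS_{within}$ are the between-group and within-group sums of squares, and $(k-1,\, N-k)$ are the corresponding degrees of freedom.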
The analysis of the means showed that the task performance recorded in the mixed reality condition is better than that obtained in the virtual environment: 10.99 s versus 22.65 s, respectively. Despite the shorter task execution time recorded in the real world, due to the natural sensations during the manipulation of real parts, the mixed reality approach allowed better performance than the virtual environment. The analysis of the standard deviation of the task completion time confirms the better data stability obtained in the MR environment compared to VR. These results show the advantage of MR based on the hybrid tracking system and on the mapping of real objects, such as the worktable and the user's hand, into the digital environment.
VII. CONCLUSIONS AND PERSPECTIVES
This paper presents a virtual assembly approach with mixed reality based on a hybrid tracking system. The system allows the visualization of the assembly scene by mapping objects from the real world onto the digital environment. The implemented system includes a technique for the real-time synchronization of the user's real hand and the associated virtual hand; this allows constraint-free movement of the user's real hand. The manipulation tests showed that the MR environment allows natural interaction with virtual objects. The validation of the approach, through the experiment of assembling two parts, showed the benefit of the MR setup compared to VR in terms of user performance represented by the task completion time. In future work, additional experiments will be carried out to evaluate this approach by assembling different samples of parts. In order to determine the skills that could be acquired in generating assembly sequences from the MR assembly environment, the task of assembling the complete mechanical system presented in Fig. 6 will be experimented with a representative group of subjects. A subjective evaluation based on user feedback about the perception of the MR system will also be conducted.
REFERENCES
[1] A. Delchambre, "A pragmatic approach to computer-aided assembly planning," Proceedings of the 1990 IEEE International Conference on Robotics and Automation, 3(1):1600-1605, May 1990.
[2] V. N. Rajan, K. Sivasubramanian and J. E. Fernandez, "Accessibility and ergonomic analysis of assembly product and jig designs," International Journal of Industrial Ergonomics, 23(5-6):473-487, March 1999.
[3] A. Seth, J. M. Vance and J. H. Oliver, "Virtual reality for assembly methods prototyping: a review," Virtual Reality, 15(1):5-20, March 2011.
[4] R. T. Azuma, "A Survey of Augmented Reality," Presence: Teleoperators and Virtual Environments, 6(4):355-385, 1997.
[5] D. W. F. Van Krevelen and R. Poelman, "A Survey of Augmented Reality Technologies, Applications and Limitations," The International Journal of Virtual Reality, 9(2):1-20, 2010.
[6] P. P. Valentini, "Interactive virtual assembling in augmented reality," International Journal on Interactive Design and Manufacturing (IJIDeM), 3(2):109-119, 2009.
[7] C. Wu and H. Wang, "A Multi-Modal Augmented Reality Based Virtual Assembly System," Proceedings of the International Conference on Human-centric Computing 2011 and Embedded and Multimedia Computing 2011, 102(1):65-72, January 2011.
[8] S. K. Ong and Z. B. Wang, "Augmented assembly technologies based on 3D bare-hand interaction," CIRP Annals - Manufacturing Technology, 60(1):1-4, 2011.
[9] Z. B. Wang, S. K. Ong and A. Y. C. Nee, "Augmented reality aided interactive manual assembly design," The International Journal of Advanced Manufacturing Technology, 69(5-8):1311-1321, November 2013.
[10] R. Radkowski and C. Stritzke, "Interactive hand gesture-based assembly for augmented reality applications," The Fifth International Conference on Advances in Computer-Human Interactions, 303-308, 2012.
[11] L. X. Ng, Z. B. Wang, S. K. Ong and A. Y. C. Nee, "Integrated product design and assembly planning in an augmented reality environment," Assembly Automation, 33(4):345-359, 2013.
[12] K. Murakami, R. Kiyama, T. Narumi, T. Tanikawa and M. Hirose, "Poster: A wearable augmented reality system with haptic feedback and its performance in virtual assembly tasks," 2013 IEEE Symposium on 3D User Interfaces (3DUI), 161-162, March 2013.
[13] S. Jayaram, H. I. Connacher and K. W. Lyons, "Virtual assembly using virtual reality techniques," Computer-Aided Design, 29(8):575-584, 1997.
[14] G. C. Burdea, "Invited review: the synergy between virtual reality and robotics," IEEE Transactions on Robotics and Automation, 15(3):400-410, 1999.
[15] P. Tamayo-Serrano, U. Zaldívar-Colado, X. Zaldívar-Colado, R. Bernal-Guadiana and S. Tamayo-Serrano, "Desarrollo y Evaluación de una Técnica Hibrida Opto-Electromagnética, para la Manipulación de Objetos Virtuales en Sistemas de Ensamblaje Virtual con Realidad Aumentada," XVII COMRob 2015, 404-411, 2015.
[16] R. G. Dewar, I. D. Carpenter, J. M. Ritchie and J. E. L. Simmons, "Assembly planning in a virtual environment," PICMET '97: Portland International Conference on Management and Technology, 664-667, July 1997.
[17] U. Zaldivar-Colado and S. Garbaya, "Virtual assembly environment modeling," ASME-AFM 2009 World Conference on Innovative Virtual Reality, 157-163, 2009.
[18] P. Carlson, A. Peters, S. B. Gilbert, J. M. Vance and A. Luse, "Virtual Training: Learning Transfer of Assembly Tasks," IEEE Transactions on Visualization and Computer Graphics, 21(6):770-782, 2015.
[19] A. Gomes de Sá and G. Zachmann, "Virtual reality as a tool for verification of assembly and maintenance processes," Computers & Graphics, 23(3):389-403, 1999.
[20] R. Gupta, D. Whitney and D. Zeltzer, "Prototyping and design for assembly analysis using multimodal virtual environments," Computer-Aided Design, 29(8):585-597, 1997.
[21] S. Garbaya and U. Zaldivar-Colado, "The affect of contact force sensations on user performance in virtual assembly tasks," Virtual Reality, 11(4):287-299, October 2007.
[22] U. Zaldivar-Colado, J. Lizárraga-Reyes, S. Garbaya, X. Zaldívar-Colado, D. Murillo-Campos and C. G. Martínez-Tirado, "Técnica de Interacción para el Agarrado y Manipulación de Objetos en Ensamble Virtual," COMRob XII, 40-47, November 2010.
[23] P. Milgram and F. Kishino, "A taxonomy of mixed reality visual displays," IEICE Transactions on Information and Systems (Special Issue on Networked Reality), vol. E77-D, no. 12, pp. 1321-1329, 1994.