Joint Virtual Reality Conference of ICAT, EGVE and EuroVR, 2012 Current and Future Perspectives of Virtual Reality, Augmented Reality and Mixed Reality: Industrial and Poster Track
Keywords: Virtual reality, augmented reality, mixed reality, industrial applications
Joint Virtual Reality Conference of ICAT, EGVE and EuroVR, 2012 Current and Future Perspectives of Virtual Reality, Augmented Reality and Mixed Reality: Industrial and Poster Track, 17-19th October, 2012 Madrid, Spain Edited by Angélica de Antonio (Universidad Politécnica de Madrid)
ISBN: 978-84-695-5470-8 Copyright © Universidad Politécnica de Madrid 2012
Technical editing: Diego Riofrío
Madrid, 2012
Preface

The Joint Virtual Reality Conference (JVRC2012) of ICAT, EGVE and EuroVR is an international event which brings together people from industry and research, including end-users, developers, suppliers and all those interested in virtual reality (VR), augmented reality (AR), mixed reality (MR) and 3D user interfaces (3DUI). It continues a successful collaboration begun in 2009, merging the 18th Eurographics Symposium on Virtual Environments, the 9th EuroVR Conference, and the 22nd International Conference on Artificial Reality and Telexistence (ICAT). This year it was held in Madrid, Spain, hosted by the Intelligent Virtual Environments Group at the “Decoroso Crespo” Laboratory (Computer Science School) and the Virtual Reality Lab at the Center for Smart Environments and Energy Efficiency (CeDInt), both at the Universidad Politécnica de Madrid (UPM).

The aim of JVRC2012 is to provide an opportunity for all to exchange knowledge and share experiences of new results and applications, interact with live demonstrations of current and emerging technologies, and form collaborations for future work.

This publication is a collection of the industrial papers and poster presentations of the conference. It provides an interesting perspective on current and future industrial applications of VR/AR/MR. The Industrial Track is an opportunity for industry to tell the research and development communities what they use the technologies for, what they really think, and what their needs are now and in the future. The Poster Track is an opportunity for the research community to describe current and completed work, or unimplemented and/or unusual systems or applications. There are presentations from large and small industries, universities and research institutions from all over the world.

We would like to warmly thank the industrial and poster chairs for their great support and commitment to the conference.
Industrial chairs
Claudio Feijoo (Universidad Politécnica de Madrid, Spain)
Yuichi Itoh (Osaka University, Japan)
Jérôme Perret (Haption, France)
Dennis Saluäär (Volvo, Sweden)
Poster chairs
Carlos Andújar (Universitat Politècnica de Catalunya, Spain)
Giannis Karaseitanidis (ICCS, Greece)
Haruo Noma (ATR, Japan)
Jaime Ramírez (Universidad Politécnica de Madrid, Spain)
Angélica de Antonio, Sabine Coquillart, Yoshifumi Kitamura
JVRC2012 General Chairs
Contents
Preface
3
Poster Papers
9
3DPublish: a web application for building dynamic 3D virtual spaces
11
Pablo Aguirrezabal, Sara Sillaurren, Rubén Rodríguez (Tecnalia, Miñano Álava, Spain)
An experiment-based learning support system which presents haptic and thermal senses for thermodynamics
13
Shou Yamagishi, Jun Murauama, Tetsuya Harada (Tokyo university of science, Noda-Shi, Chiba, Japan), Yukihiro Hirata (Tokyo university of science, Chino-Shi, Nagano, Japan), Makoto Sato (Tokyo institute of technology, Yokohama-Shi, Kanagawa, 226-8503, Japan)
Interaction in Augmented Reality using Non-rigid Surface Detection with a Range Sensor
15
Goshiro Yamamoto, Yuki Uranishi, Hirokazu Kato (Graduate School of Information Science, Nara Institute of Science and Technology, Takayama, Ikoma, Nara, JAPAN)
Biomechanically Valid Arm Posture Reconstruction Exploiting Clavicle Movement
17
Eray Molla, Ronan Boulic (Immersive Interaction Group, EPFL, Lausanne, Switzerland)
Cup-embedded Information Device for Supporting Interpersonal Communication
19
Kazuki Takashima (Tohoku University, Japan), Yusuke Hayashi, Kosuke Nakajima, Yuichi Itoh (Osaka University, Japan)
Support for a personalized 3D visualization and exploration of document collections via gestural interaction
21
Angelica de Antonio, Martín Abente, Cristian Moral, Daniel Klepel (Universidad Politécnica de Madrid, Boadilla del Monte, Spain)
User customized directional-view and sound device in a single display
23
Youngmin Kim, Ji-In Kwon, Yanggeun Ahn, Byoung-Ha Park, and Kwang-Mo Jung (Realistic Media Platform Research Center, Korea Electronics Technology Institute, Seoul, Korea)
Wind Direction Perception Using a Fan-based Wind Display: Effect of Head Position and Wind Velocity on Discrimination Performance
25
Yuya Yoshioka, Takuya Nakano, Yasuyuki Yanagida (Graduate School of Science and Technology, Meijo University, Nagoya, Japan)
3DRecon, a utility for 3D reconstruction from video
27
T.W. Duckworth, D.J. Roberts (University of Salford, UK)
A Mobile System for Collaborative Design and Review of Large Scale Virtual Reality Models
29
Pedro Campos, Duarte Gouveia, Hildegardo Noronha (University of Madeira, Funchal), Joaquim Jorge (VIMMI Group, INESC-ID Lisbon, Lisbon, Portugal)
Augmented Dining Table for Affecting Our Food Consumption
31
Sho Sakurai, Yuki Ban, Takuji Narumi, Tomohiro Tanikawa, Michitaka Hirose (The University of Tokyo, Tokyo, Japan)
Monitoring a Realistic Virtual Hand using a Passive Haptic Device to Interact with Virtual Worlds
33
J.-R. Chardonnet (Arts et Métiers ParisTech, Chalon-sur-Saône, France), J.-C. Leon (Grenoble University, Grenoble, France)
Industrial Papers
35
Discrete Event Simulation using Immersive Virtual Reality for Factory Simulation
37
C. Freeman, S. Reddish, R. W. Scott (Virtual Reality Group, Nuclear AMRC, Rotherham, UK)
Augmented Reality Pipe Layout Planning in the Shipbuilding Industry
41
Harald Wuest, Manuel Olbrich, Patrick Riess, Sabine Webel, Urlich Bockholt (Department Virtual and Augmented Reality Fraunhofer IGD, Darmstadt, Germany)
Virtual Assessment Meeting: a 3D Virtual Meeting Tool Integrated with the Factory World
45
Maria Di Summa, Gianfranco Modoni, Marco Sacco (Institute of Industrial Technologies and Automation National Research Council of Italy, Bari, Italy), Gabriela Cande, Ciprian Radu (Ropardo SRL, Sibiu, Romania), Ruggero Grafiti (Alenia Aermacchi, Grottaglie, Italy)
A Multi-View Display System using a QDA Screen
49
Shiro Ozawa, Satoshi Mieda, Yasuhiro Yao, Hideaki Takada (NTT Corporation, Yokosuka, Kanagawa, Japan), Tohru Kawakami, Senshi Nasu, Takahiro Ishinabe, Mitsuru Kano, Yoshito Suzuki, Tatsuo Uchida (Tohoku University, Sendai, Miyagi, Japan)
The development and usefulness of an automatic physical load evaluation tool
53
Tim Bosch, Reinier Könemann, Gu van Rhijn (TNO Healthy Living, Hoofddorp, The Netherlands), Harshada Patel, Sarah Sharples (University of Nottingham, Nottingham, UK)
Innovation in space domain multidisciplinary engineering mixing MBSE, VR and AR
57
Valter Basso, Lorenzo Rocci, Mauro Pasquinelli (Thales Alenia Space Italia S.p.A. , Torino, Italy), Carlo Vizzi, Christian Bar, Manuela Marello (Sofiter System Engineering S.p.A. , Torino, Italy), Michele Cencetti (Politecnico di Torino, Torino, Italy), Francesco Becherini (Ortec s.r.l, Torino, Italy)
Integrating production scheduling with Discrete Event Simulation on a manufacturing line within the Virtual Factory Framework
61
L. Usatorre, S. Alonso, U. Martinez de Estarrona, A. Díaz de Arcaya (TECNALIA Research & Innovation, Parque Tecnológico de Álava, Spain)
Virtual Factory Framework – HOMAG Industrial Use Case
65
Omar Abdul-Rahman, Günther Riexinger (Fraunhofer Institute for Manufacturing Engineering and Automation – IPA, Stuttgart, Germany), Ulrich Doll (HOMAG Holzbearbeitungssysteme GmbH, Schopfloch, Germany)
VFF Industrial Scenario: the COMAU case study
69
M. Sacco, W. Terkaj, C. Redaelli (ITIA-CNR, Milano, Italy), S. Temperini, S. Sadocco (COMAU, Grugliasco, TO, Italy)
A Full-Body Virtual Mirror System for Phantom Limb Pain Rehabilitation
73
Eray Molla, Ronan Boulic (Immersive Interaction Group, EPFL, Lausanne, Switzerland)
A3R :: A new insight into Augmented Reality. Transporting the Augmented Reality user into another dimension through the sound
77
Jorge R. López Benito, Enara Artetxe González, Aratz Setién Gutierrez (CreativiTIC Innova S.L., La Rioja Technological Centre, Logroño (La Rioja), Spain)
A haptic paradigm to learn how to drive a non-motorised vehicle manipulated through an articulated mechanism
81
Pierre Martin, Nicolas Férey, Céline Clavel, Patrick Bourdot (VENISE & CPU teams, Orsay, France)
Validation of a haptic virtual reality simulation in the context of industrial maintenance
85
M. Poyade, L. Molina-Tanco, A. Reyes-Lecuona (Dpto. Tecnologia Electronica, E.T.S.I. de Telecomunicacion, Universidad de Malaga, Malaga, Spain), A. Langley, M. D'cruz (Human Factors Research Group, Innovative Technology Research Centre Dept. of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, University Park, Nottingham, United Kingdom) E. FRUTOS, S. FLORES (Tecnatom S.A., San Sebastián de los Reyes, Spain)
Haptic Motion: an Academic and Industrial achievement
89
Nizar Ouarti (Institut des Systèmes Intelligents et de Robotique, Université Pierre et Marie Curie, Paris, France)
Experimental Prototype Merging Stereo Panoramic Video and Interactive 3D Content in a 5-sided CAVE™
93
F.P. Luque, L. Piovano, I. Galloso, D. Garrido, E. Sánchez, C. Feijóo (Center for Smart Environments and Energy Efficiency (CEDINT), Madrid, Spain)
Tracking multiple humans in large environments: proposal for an indoor markerless tracking system.
97
C. A. Ortiz, B. Rios, D. Garrido, C. Lastres, I. Galloso (Center for Smart Environments and Energy Efficiency (CEDINT), Madrid, Spain)
Usage of Haptics for Ergonomic Studies
101
J. Perret (Haption, Laval, France)
Using airplane seats as front projection screens
105
Panagiotis Psonis, Nikos Frangakis, Giannis Karaseitanidis (Institute of Communication and Computer System, Greece)
A Comparative Evaluation of Two 3D Optical Tracking Systems
109
Tugrul Tasci, Nevzat Tasbasi (Sakarya University, Faculty of Informatics, Sakarya, Turkey), Anton Velichkov, Uwe Kloos, Gabriela Tullius (Reutlingen University, Fakultät Informatik, Reutlingen, Germany)
System Effectiveness vs. Task Complexity in Low Cost AR Solutions for Maintenance: a Study Case
113
M. Hincapié (Universidad de Medellín, Colombia), A. Caponio (University of Bari, Italy), M. Ortega, M. Contero, M. Alcañiz (LabHuman - Universidad Politécnica de Valencia, Valencia, Spain), J. L. Alcazar, E. González Mendivil (Instituto Tecnológico de Estudios Superiores de Monterrey)
POSTER PAPERS
3DPublish: a web application for building dynamic 3D virtual spaces
Pablo AGUIRREZABAL, Sara SILLAURREN, Rubén RODRÍGUEZ
Tecnalia, Albert Einstein 28 – Parque Tecnológico de Álava – 01510 Miñano Álava, Spain
{pablo.aguirrezabal, sara.sillaurren, rodriguez.lorenzo}@tecnalia.com
Abstract. Today, virtual spaces on the World Wide Web offer content through two basic methods: a simple view of objects (pictures, videos, presentations, 3D objects, etc.) using a content viewer, or a custom-designed 2D or 3D virtual scene in which both the objects and the scene are static. This paper describes the 3DPublish tool, which represents an alternative to these two static solutions by giving the possibility to dynamically manage a 3D virtual scene (real or imaginary) and the objects that compose it, in order to create 3D web-based virtual spaces for museum exhibitions, corporate presentations, galleries, etc.
Keywords: Virtual Reality, Interactive Exhibition, Web-Communication, Multimedia.
1. Project aims and scope

The main objective of 3DPublish is to provide a common framework for virtual-space administrators to design final exhibitions/presentations dynamically, independently of the initial empty virtual scene. This means that the administrator can choose a 3D modelled scene (real or imaginary) or use one of the basic default scenes provided by the application. A secondary goal involves the ability to manage the contents of that scene externally, feeding the application with objects in different digital formats (jpg, png, ppt, pdf, avi, etc.). Once the empty virtual scene and the objects are registered, the third goal is to allow positioning of the objects inside the scene, including the option to create new spaces (which do not exist in the base scenario) with new walls of different heights and thicknesses on which, of course, it will also be possible to place objects. Despite the current instability of the global economic situation, creating virtual spaces is still a profitable idea: for instance, it is known that museums offering collection information and images on their web sites do not reduce visits to the physical museum, and will likely enhance interest in making in-person visits [1].
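As a rough illustration of the kind of data such a tool has to manage, the sketch below models an exhibition as a base scene plus externally registered objects and extra walls. All class and field names are hypothetical assumptions for illustration; they are not taken from the 3DPublish implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ExhibitObject:
    """A registered content item (jpg, png, ppt, pdf, avi, ...)."""
    name: str
    file_path: str
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    rotation_y: float = 0.0  # yaw in degrees, e.g. for wall-mounted items

@dataclass
class Wall:
    """An extra wall added on top of the base scene."""
    start: Tuple[float, float]   # floor-plan coordinates (x, z)
    end: Tuple[float, float]
    height: float = 2.5
    thickness: float = 0.2

@dataclass
class Exhibition:
    base_scene: str                                   # modelled or default scene
    objects: List[ExhibitObject] = field(default_factory=list)
    walls: List[Wall] = field(default_factory=list)

# Example: an administrator builds a small exhibition on a default scene
expo = Exhibition(base_scene="default_gallery")
expo.walls.append(Wall(start=(0, 0), end=(4, 0), height=3.0))
expo.objects.append(ExhibitObject("poster", "media/poster.jpg", position=(2.0, 1.5, 0.1)))
```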
2. System Description

The application is focused on two different aspects: the part where the space administrator is responsible for the management, and the part where the visitor simply enjoys the result. 3DPublish has been designed to make all processes simple and intuitive, using a single tab-based web interface for the space administrator and the free Unity 3D engine's web plugin for moving through the 3D scene. Figure 1 illustrates the general layout of the 3DPublish application.
Figure 1. 3DPublish Architecture Diagram.
References [1] Thomas,W., & Carey, S. (2005). Actual/virtual visits: what are the links? In J. Trant, & D. Bearman (Eds.), Museums and the web 2005: Proceedings, Toronto: Archives & museum informatics. http://www.archimuse.com/mw2005/papers/thomas/thomas.html
An experiment-based learning support system for thermodynamics which presents haptic and thermal senses
SHOU YAMAGISHI 1), JUN MURAUAMA 1), YUKIHIRO HIRATA 2), MAKOTO SATO 3) and TETSUYA HARADA 1)
1) Tokyo University of Science, 2641 Yamazaki, Noda-Shi, Chiba, 278-8510, Japan
2) Tokyo University of Science, Suwa, 5000-1 Toyohira, Chino-Shi, Nagano, 391-0292, Japan
3) Tokyo Institute of Technology, 4259 Nagatsuda-Cho, Midori-Ku, Yokohama-Shi, Kanagawa, 226-8503, Japan
1) [email protected], {murayama, harada}@te.noda.tus.ac.jp
2) [email protected]
3) [email protected]
Abstract. This research developed an experiment-based learning support system which presents visual, haptic, and thermal feedback to help the user intuitively understand changes in the state of a gas. The proposed system displays changes in molecular movement visualized in VR space. The system is equipped with the haptic interface SPIDAR and a thermal interface using a Peltier device. The system maps pressure and temperature to safe values and presents them to the user. With this system, the user can experience the four changes of state of a gas (i.e., isochoric, adiabatic, isothermal and isobaric) continuously and intuitively.
Keywords: learning support system, thermodynamics, multimodal interface.
1. Introduction

In mechanical engineering, thermodynamics is as important as the strength of materials, hydrodynamics, and the dynamics of machinery, and is indispensable to the industrial world. However, since molecular movement, pressure, and temperature are invisible, it is difficult for students to understand the phenomena. Moreover, real experiments involving the exchange of heat carry risks. In many experiment-based learning support systems, learners welcome devices presenting not only visual and auditory but also haptic senses. Kim, Hamza-Lup and others developed learning content using the PHANToM device [1, 2]. In thermodynamics, gas pressure, volume, and temperature are the important physical quantities. The system proposed in this research maps them to safe values, presents visual, haptic, and thermal feedback to the user, and helps the user intuitively understand the changes of these physical quantities in virtual space.
2. An experiment-based learning support system for thermodynamics
The operation of the system is shown in Figure 1, and a system execution screen in Figure 2. The system is equipped with the haptic interface SPIDAR [3] and a thermal interface using a Peltier device. The user operates the piston in VR space through SPIDAR to change the volume of the gas; SPIDAR presents the user with a force related to the pressure of the gas, while the thermal interface simultaneously presents the change in gas temperature as a thermal sensation on the user's fingertip. The proposed system allows the user to experience the four changes of state of a gas (i.e., isochoric, adiabatic, isothermal, and isobaric) continuously and intuitively. The user chooses a change of state from the selection dialogs. According to the selection, the visualized molecular movement of the gas is displayed in VR space. When the user chooses an isochoric change, the dialog “endothermic and exothermic” for operating the heat quantity is displayed on the left side of the screen, and the user can make the temperature of the gas rise and fall by 20 K by clicking it. In the other states, the user moves the piston in VR space using SPIDAR to compress and expand the gas, experiencing the force with which the gas molecules collide with the piston and feeling the change of pressure. The user feels the change of temperature by touching the Peltier device surface with a fingertip. For a quantitative understanding of the phenomena, the user can observe graphs of the physical quantities, which change dynamically according to the operation.
Figure 1. Operation of the system.
Figure 2. System execution screen.
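To make the mapping concrete, the sketch below illustrates one way such a system could compute the gas state for each of the four processes from the ideal-gas relations and rescale the resulting pressure and temperature into safe force and Peltier commands. The process equations are standard thermodynamics; the device ranges and function names are illustrative assumptions, not the actual implementation of the system described above.

```python
import numpy as np

GAMMA = 1.4  # heat-capacity ratio of a diatomic ideal gas

def gas_state_after(process, V0, P0, T0, V1, dT=0.0):
    """Return (P1, T1) of an ideal gas after a quasi-static change.

    process: 'isochoric', 'isothermal', 'adiabatic' or 'isobaric'.
    For the isochoric case the volume stays at V0 and dT (e.g. +/-20 K per
    click of the endothermic/exothermic dialog) drives the change.
    """
    if process == "isochoric":
        T1 = T0 + dT
        return P0 * T1 / T0, T1              # P/T constant at fixed V
    if process == "isothermal":
        return P0 * V0 / V1, T0              # P*V constant
    if process == "adiabatic":
        return P0 * (V0 / V1) ** GAMMA, T0 * (V0 / V1) ** (GAMMA - 1.0)
    if process == "isobaric":
        return P0, T0 * V1 / V0              # V/T constant at fixed P
    raise ValueError(process)

def to_device(P, T, P_range=(5e4, 4e5), F_range=(0.0, 4.0),
              T_range=(250.0, 450.0), peltier_range=(15.0, 40.0)):
    """Linearly rescale gas pressure/temperature into safe device commands
    (SPIDAR string force in N, Peltier surface temperature in deg C)."""
    return np.interp(P, P_range, F_range), np.interp(T, T_range, peltier_range)

# Example: adiabatic compression from 25 L to 15 L, starting at 1 atm and 300 K
P1, T1 = gas_state_after("adiabatic", 0.025, 101325.0, 300.0, 0.015)
force_N, peltier_degC = to_device(P1, T1)
```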
References
[1] Kim, Y.; Park, S.; Kim, H.; Jeong, H.; Ryu, J.; (2011) Effects of Different Haptic Modalities on Students' Understanding of Physical Phenomena. In Proc. of IEEE World Haptics Conference; pp. 379-384.
[2] Hamza-Lup, F.G.; Adams, M.; (2008) Feel the Pressure: E-learning Systems with Haptic Feedback. In Proc. of Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems; pp. 445-450.
[3] Sato, M.; Hirata, Y.; Kawarada, H.; (1991) SPace Interface Device for Artificial Reality - SPIDAR -. IEICE Trans. Inf. & Syst. (Japanese Edition); J74-D-II; 7; pp. 887-894.
Interaction in Augmented Reality using Non-rigid Surface Detection with a Range Sensor
Goshiro Yamamoto, Yuki Uranishi, Hirokazu Kato
Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5, Takayama, Ikoma, Nara, JAPAN
{goshiro, uranishi, kato}@is.naist.jp
Abstract. This paper describes an interaction system for computer graphics presented in augmented reality (AR) that uses the deformation of a target object as an input method. Although some studies using only single cameras have achieved non-rigid surface detection in real time, a range image sensor has the potential to make the detection easier. Because range image sensors have become widespread recently, it is more reasonable to use them for non-rigid surface detection than a single camera. The proposed detection method based on a range sensor would be a suitable interaction technology, since a surface that has no feature points can also be measured. As a practical use case, the interaction system will be applied to an AR picture book.
Keywords: augmented reality, human computer interaction, non-rigid surface.
1. Introduction

Augmented reality (AR) technology has become quite widespread recently, and thus the general population has experienced AR through games, applications for smartphones, state-of-the-art technologies in exhibitions, and so forth. In such AR experiences, many users try to interact with the computer graphics (CG) that do not exist in the real world; for example, users try to touch the CG images or shake the marker. This means people instinctively hope to interact with visible information, which is an important point for AR interface design. Gesture recognition and tool-usage studies have addressed such interaction methods. However, there are few studies that use deformation of the marker objects as an input interface. This paper describes an interaction method that changes CG animation based on the deformation of the target objects as an input interface. Technology for detecting deforming non-rigid surfaces with feature points has been applied to AR using a single camera [1, 2, 3]. These studies enable displaying CG on deformed markers as AR representations, because the shape of the marker is estimated from feature points. Recently, single-camera non-rigid surface detection has reached real-time performance and has become one of the available interaction technologies.
On the other hand, a range image sensor is well suited to measuring scene structure, since it delivers a measured result in a moment. Hence, the range-sensor approach should detect a non-rigid surface more easily than image-based non-rigid marker recognition. This study tackles such a detection method using a range sensor. Additionally, a range sensor can detect textureless surfaces, which image-based methods cannot.
2. Method

The proposed system consists of a color camera, a range image sensor, and a computer. It is a simple configuration that obtains color and depth information from the camera and the range sensor, respectively. The transformation matrix between the camera coordinate system and the range sensor coordinate system is assumed to be known, by calibrating them in advance. The proposed system sets one condition: the target surface does not change its own size. Under this condition, a curved surface that fits the shape obtained by the range sensor can be calculated. First, feature points on the target surface are detected via the color camera. Then, the three-dimensional coordinates of these points are calculated based on the corresponding range data. The curved target surface, which has a constant size, is estimated using these data. At the same time, the system can measure the structure of the real-world space where the target exists. Finally, the CG animation changes depending on the shape of the target surface and the real-world space around the target. For practical use, the system will be applied as an interaction method for an AR picture book. Figure 1 shows the proposed interaction method compared with a conventional AR picture book.
Figure 1. The left figure shows an AR picture book made of hard cardboard, the middle figure shows the configuration of the proposed system, and the right one is an example of the interaction system.
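As a rough sketch of the first step of this pipeline, the snippet below lifts feature points detected in the colour image to 3D camera coordinates using range data registered to the colour image and a pinhole camera model. The function and parameter names are illustrative assumptions; the constant-size surface fitting performed by the actual system is not shown.

```python
import numpy as np

def backproject(points_px, depth_map, K):
    """Lift detected 2D feature points to 3D camera coordinates using range
    data registered to the colour image (pinhole model, intrinsics K)."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    pts3d = []
    for u, v in points_px:
        z = depth_map[int(round(v)), int(round(u))]   # metres
        if z <= 0:                                    # invalid range sample
            continue
        pts3d.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return np.asarray(pts3d)
```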
References [1] Zhu, J.; Lyu, M. R.; (2007) Progressive Finite Newton Approach To Real-time Nonrigid Surface Detection. In Proc. Computer Vision and Pattern Recognition [2] Pilet, J.; Lepetit, V.; Fua, P.; (2007) Fast Non-Rigid Surface Detection, Registration and Realistic Augmentation. Int. Journal of Computer Vision 76(2), 109-122 [3] Gay-Bellile, V.; Bartoli, A.; Sayd, P.; (2010) Direct Estimation of Non-Rigid Registrations with Image-Based Self-Occlusion Reasoning. IEEE Trans. On Pattern Analysis and Machine Intelligence 32(1) 87-104
Biomechanically Valid Arm Posture Reconstruction Exploiting Clavicle Movement
Eray Molla, Ronan Boulic
Immersive Interaction Group, EPFL, Lausanne, Switzerland
[email protected], [email protected]
Abstract. Even though the seven degrees of freedom (7-DOF) limb model has been intensively exploited by the analytical inverse kinematics (IK) community, the contribution of the clavicle to visually satisfactory arm posture reconstruction is usually ignored. In this work, a simple technique for capturing the motion of the clavicle is demonstrated. In addition, a closed-form solution is presented for parameterizing biomechanically meaningful joint limits as swivel angle intervals, relying on spherical polygon geometry, swing-twist parameterization and quaternion algebra.
Keywords: Analytical IK, Clavicle Reconstruction, Joint Limits.
1. Capturing Clavicle Movement

To capture the movement of the clavicle, three markers are attached around the chest to build a local coordinate system (CS). Another marker is placed just above the acromion. Note that there is no significant movement between the shoulder and that region other than the rotation of the sternoclavicular joint. Therefore, by estimating the relative movement of the shoulder marker in the local chest CS, we can infer the orientation of the clavicle. Please refer to Fig. 1 for a visual illustration.
Fig 1 - Left: Any clavicle movement affects the shoulder and the marker above it in the same way. Middle: ϕ vs. θ curve. Right: Clavicle movement is captured with the introduced method.
2. Constraining Swivel Angle (φ) With Arm Joint Limits

Finding all possible φ can be done in three steps. First, the valid intervals of the swivel constrained by the shoulder joint's swing and twist limits are estimated. Then, thanks to the anatomical symmetry of the joints, the problem from shoulder to wrist can be reformulated as a reverse one from wrist to shoulder, treating the wrist as the shoulder [1]. In this way, allowable ranges are obtained for the reverse problem. Finally, these solutions are transformed into the same domain and their intersection yields all valid φ. Here, only the first step is presented due to limited space; the second and third steps are straightforward to deal with [1].

The position of the elbow is determined by the swing movement of the shoulder. As noted in [2], spherical polygons are commonly preferred for limiting the range of this motion. Therefore, all φ leading to a valid shoulder swing can be estimated by intersecting the elbow circle with this spherical polygon. To find the relationship between the shoulder twist (θ) and φ, we express all rotations bringing the end joint to its target position in terms of φ. This can easily be done via unit quaternions. Before starting, the estimated elbow flexion is applied. Let p denote the rotation bringing the elbow to where φ = 0 and the wrist onto the target position. Then, q_v(φ) = q(n_φ, φ) p represents all valid rotations, where n_φ is the normal of the elbow circle. Finally, θ for any given φ can be estimated by decomposing q_v(φ) into its swing and twist components: q_v(φ) = q_twist(θ) q_swing. Working out the math results in the following equations (see Fig. 2, middle):

θ = 2 atan2( a cos(φ/2) + b sin(φ/2), c cos(φ/2) + d sin(φ/2) )
φ(θ) = −2 atan( (a − c tan(θ/2)) / (b − d tan(θ/2)) )
Finally, the valid intervals of the swivel restricted by shoulder-twist can easily be found by analyzing the behavior of 𝜙(𝜃) in θmin ≤ θ ≤ θmax, thanks to its monotonicity.
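As a rough illustration of how these relations can be used in practice, the sketch below numerically samples θ(φ) and extracts the swivel ranges whose twist stays within the shoulder limits. The coefficients a, b, c, d are assumed to have been precomputed from the decomposition of q_v(φ), and the numerical scan stands in for the closed-form interval analysis used in the paper.

```python
import numpy as np

def theta_of_phi(phi, a, b, c, d):
    """Shoulder twist theta as a function of the swivel angle phi (first equation above)."""
    h = phi / 2.0
    return 2.0 * np.arctan2(a * np.cos(h) + b * np.sin(h),
                            c * np.cos(h) + d * np.sin(h))

def valid_swivel_intervals(a, b, c, d, theta_min, theta_max, n=3600):
    """Sample theta(phi) over a full turn and return the swivel ranges whose
    twist stays inside [theta_min, theta_max]."""
    phis = np.linspace(-np.pi, np.pi, n)
    thetas = theta_of_phi(phis, a, b, c, d)
    ok = (thetas >= theta_min) & (thetas <= theta_max)
    intervals, start = [], None
    for angle, good in zip(phis, ok):
        if good and start is None:
            start = angle
        elif not good and start is not None:
            intervals.append((start, angle))
            start = None
    if start is not None:
        intervals.append((start, phis[-1]))
    return intervals
```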
References [1] J.U. Korein (1985); "A Geometric Investigation of Reach", The MIT Press, Cambridge. [2] Baerlocher P, Boulic R. (2000). Parametrization and range of motion of the ball-and-socket joint. In Proceedings of the AVATARS conference Lausanne, Switzerland.
Cup-embedded Information Device for Supporting Interpersonal Communication
Kazuki Takashima 1, Yusuke Hayashi 2, Kosuke Nakajima 2, Yuichi Itoh 2
1 Tohoku University, 2 Osaka University
[email protected]
Abstract. We are proposing a novel concept of a cup-embedded information device for supporting interpersonal communication. The cup device can be held by the user’s hand, and it can be used not only for drinking but also to estimate conversational states through sensors, and to present external or private information during the conversation through an embedded display. We design the device and discuss the applicability of the concept based on the results of a questionnaire-based study.
Keywords: Ambient sensor and display, wearable computing
1. Introduction

There are many studies on supporting conversation using information systems. Most of them use sensors to recognize speakers' conversational states, and it is common to provide information (e.g., visual feedback [1], common interests) on an external display system in order to enhance the conversation. Generally, these methods require speakers to wear sensors near their face (e.g., a microphone), which can restrict their natural behaviors and impression formation. Additionally, holding and focusing on a personal display (e.g., a mobile phone) during conversation is impolite behavior, especially at formal activities like parties. Thus, we propose a more feasible and natural way to sense users' conversational states and present personalized information to them, using a cup-embedded information device that naturally fits typical conversational situations. For example, at buffet-style parties people usually hold their own cup in their hand, and its position is usually not far from their face. Also, it is a known fact that eating and drinking together makes people more intimate [2]. These are the reasons why we adopt a cup device to sense the user's utterances and to allow the user to naturally access personalized information.

Figure 1. Cup-embedded device
2. Design and Discussion

Fig. 1 shows the designed cup device. The sensors and displays need to be implemented in a way that does not disturb the user's drinking. An iPod touch (MC547J/A) is disassembled and its microphone, touch display and main systems are embedded in an acrylic 300 ml cup specially designed using a 3D printer. In particular, the display and microphone are embedded in the front of the cup so that they do not disturb drinking while still keeping good visibility of the display. No device part appears on the rear side of the cup, which is the part that a partner sees during face-to-face communication. Also, an accelerometer is used to recognize the user's hand gestures and drinking actions. The measured data are sent to a server over a Wi-Fi network. A speaker and a vibrator are also used to present information, and the touch display allows users to interact smoothly with the cup. The device therefore realizes both sensing and displaying simply by being held. Through controlled evaluations, we confirmed that the sensing system correctly gathers the user's utterances and drinking actions with a sufficient accuracy of 98%. We also conducted a small user study with 6 participants (average age 24 years) to investigate the functionality and impressions of the cup device in a conversational scene. In the study, two persons at a time talked over a cup of tea in a face-to-face situation and answered questions on a 5-point subjective rating scale (1: bad, 5: good). We got positive results on the device weight of 230 g (average rating: 3.5), naturalness and fitness to the conversational scene (4.1), ease of accessing personal information (4.3), and the partner's gaze movement to the cup during conversation (4.1, meaning no problem). We also got a neutral result on ease of drinking (3.16). Overall, the design of the cup device naturally fits a drink-sharing conversational scene, and it is expected to be effective as an ambient sensor and a private display. Many scenarios exist for actual use of this system. The most promising one is to activate the conversation when talking with strangers: based on the estimation of the conversational activity from the sensed data, the system could individually provide a variety of information (e.g., recent news and common interests) in order to help activate the conversation. Interest information can be exchanged through a data transfer application by the simple and formal gesture of clinking cups together. Users can also use the device in a business meeting to record a speaking log and to privately see information such as email (or YouTube videos) in a natural way. On the other hand, there is a limitation on sensing users' utterances due to the weak microphone design. In future work, we plan to combine a unidirectional microphone with movements of the user's mouth captured by the iPod's camera, and we will explore additional sensors, such as heartbeat sensors embedded on the surface of the handle, for a better understanding of the social activity of conversations.
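A hypothetical sketch of the kind of drinking-action detection the embedded accelerometer enables is shown below: a sustained tilt of the cup beyond a threshold is treated as a drinking event. The thresholds, sampling rate and logic are illustrative assumptions, not the system's actual recognizer.

```python
import numpy as np

def detect_drinking(accel_xyz, fs=50.0, tilt_thresh_deg=45.0, min_duration_s=1.0):
    """Toy detector: a 'drinking' event is a sustained tilt of the cup.

    accel_xyz : (N, 3) accelerometer samples in g, cup upright -> z ~ +1
    Returns a list of (start_time, end_time) tuples in seconds.
    """
    g = accel_xyz / np.linalg.norm(accel_xyz, axis=1, keepdims=True)
    tilt = np.degrees(np.arccos(np.clip(g[:, 2], -1.0, 1.0)))  # angle from vertical
    tilted = tilt > tilt_thresh_deg
    events, start = [], None
    for i, t in enumerate(tilted):
        if t and start is None:
            start = i
        elif not t and start is not None:
            if (i - start) / fs >= min_duration_s:
                events.append((start / fs, i / fs))
            start = None
    if start is not None and (len(tilted) - start) / fs >= min_duration_s:
        events.append((start / fs, len(tilted) / fs))
    return events
```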
References [1] Kim, T., et al.; (2008) Meeting Mediator: Enhancing group collaboration using sociometric feedback. In Proc. CSCW '08. 457-466. [2] Mille, L., et al.; (1998) Food sharing and feeding another person suggest intimacy; two studies of American college students. European Journal of Social Psychology. 423-436.
Support for a personalized 3D visualization and exploration of document collections via gestural interaction
ANGELICA DE ANTONIO, MARTÍN ABENTE, CRISTIAN MORAL, DANIEL KLEPEL
Universidad Politécnica de Madrid, Facultad de Informática, Campus de Montegancedo, 28660 Boadilla del Monte, Spain
[email protected]
1. Introduction

In spite of the powerful tools available nowadays to ease access to the information contained in huge document collections such as the WWW, no satisfactory solutions have yet been found that not only make it easy to locate potentially interesting documents but also: 1) help users build a mental model of a set of documents (the whole collection or a query result), 2) allow users to explore a collection, interact with it in an intuitive way, and reorganize it according to their interests and preferences, or 3) dynamically adapt on the basis of user features, observed user behavior, or task features. Three-dimensional representations of document collections have shown their usefulness in helping users visualize the thematic structure of a collection [1]. However, a 3D visualization is not enough. New interaction paradigms and techniques need to be investigated, as well as more intelligent user support.
Figure 1. Gestural interface for the exploration of document collections
The approach we propose has three main axes: 1) new approaches for the conceptual modeling of document collections which incorporate thematic similarity analysis (as in the classical 3D visualization approaches), semantic knowledge (ontologies) about document content, structure and features, excitatory and inhibitory relations among documents, and automated learning and evolution mechanisms; 2) combining 3D virtual reality technologies, augmented reality for
mobile devices and touch interfaces to design new more intuitive, powerful, versatile and adaptive interaction mechanisms; 3) Incorporation of user-modeling mechanisms (taking advantage of both virtual and augmented reality technologies to observe and track the user actions), task-modeling mechanisms, and dynamic adaptation.
2. The prototype system

A prototype has been developed that fully automates the visualization pipeline for unstructured document collections. A set of plain text or pdf documents is provided as input. Pre-processing the documents implies transforming them into a conceptual representation that can be further analyzed. The Vector Space Model has been adopted [2] for this preliminary representation: a multidimensional vector is computed for each document in which each position corresponds to one concept appearing in the collection. Several options have been proposed in the literature to compute the thematic similarity between any two documents based on this vector representation [3,4]. The next step is reducing this n-dimensional representation into a 3D representation, and assigning visual characteristics to the significant document features to be visualized. A Force Directed Placement [5] approach has been selected for this step, with a proposed improvement to enhance the user's capability to adapt the visualization according to their current goal and personal preferences. A clustering algorithm is also applied to the conceptual representation in order to group documents into thematic clusters visualized in different colors. For this step the k-means algorithm [6] has been selected because it allows the user to select the number k of clusters to generate. A gestural interaction mechanism has been developed to allow users to navigate throughout the 3D space, select a single document or a whole cluster, invert a selection (select whatever was not selected), hide the selection, open an excerpt of a single document, change the number of desired clusters, strengthen the attraction force among similar documents, or strengthen the intra-cluster attraction force (see figure 1). An experimental user evaluation is currently underway.
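The first stages of such a pipeline can be sketched in a few lines. The snippet below, an illustrative sketch rather than the prototype's actual code, builds TF-IDF document vectors, computes pairwise cosine similarities of the kind that could drive the attraction forces of a force-directed layout, and groups the documents into k thematic clusters with k-means.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def analyse_collection(texts, k=4, seed=0):
    """Vector Space Model + cosine similarity + k-means thematic clusters."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(texts)
    similarity = cosine_similarity(vectors)   # pairwise thematic similarity
    labels = KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(vectors)
    return similarity, labels

# Tiny illustrative collection; the user chooses k interactively in the prototype
docs = ["force directed graph drawing", "vector space retrieval model",
        "graph layout algorithms", "text clustering with k-means"]
sim, clusters = analyse_collection(docs, k=2)
```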
References [1] Teyseyre, A. R.; Campo, M. R.; (2009). An overview of 3d software visualization. IEEE Transactions on Visualization and Computer Graphics, 15(1):87-105. [2] Salton, G., Wong, A., and Yang, C. S. (1975). A vector space model for automatic indexing. Communications of the ACM, 18(11):613-620. [3] Yu, C. T.; Salton, G.; (1976). Precision Weighting--An Effective Automatic lndexing Method. Journal of the ACM, 23(1):76-88 [4] Jing, J.; Zhou, L.; Ng, M. K.; Huang, Z.; (2006). Ontology-based distance measure for text clustering. Proceedings of SIAM SDM workshop on text mining, Bethesda, Maryland, USA. [5] Fruchterman, T. M. J. and Reingold, E. M. (1991). Graph drawing by force directed placement. Software: Practice and experience,21(11):1129-1164. [6] Dhillon, I. S. and Modha, D. S. (2001). Concept decompositions for large sparse text data using clustering. Machine Learning, 42(1):143-175.
User customized directional-view and sound device in a single display
Youngmin Kim, Ji-In Kwon, Yanggeun Ahn, Byoung-Ha Park, and Kwang-Mo Jung
Realistic Media Platform Research Center, Korea Electronics Technology Institute, Sangam-dong, Mapo-gu, Seoul, 121-835, Korea
[email protected]
Abstract. We propose a user-customized display device using a polarizing-glasses method. The proposed method provides directional views and corresponding sounds to individual observers. The method can be implemented on a stereoscopic display, using either a polarizing-glasses or a shutter-glasses method.
Keywords: directional-view, directional sound, stereoscopic display, polarizing glasses method.
1. Introduction

Recently, three-dimensional (3D) displays have become the most promising way to present natural images. Among 3D display methods, directional-view displays that can deliver separate view images to pre-determined observers from a single display have been studied extensively during the past years. Several companies have introduced directional-view display panels using autostereoscopic methods that can display multiple images simultaneously, depending on the observer positions [1]. So far, very little has been done on the use of 3D sound content, because only one sound track is needed in a typical 3D display. In the directional-view case, however, individual observers need to experience different views with different sound content at their corresponding positions. As a complementary effort toward this need, we recently proposed a directional-view and sound system using a tracking system [2]. However, that method requires expensive ultrasound speakers and a tracking system. Here, we propose and investigate a user-customized display device that provides a directional view and corresponding sound to each user, using a polarizing-glasses method and modified earphones. We believe that the proposed method may be helpful for training observers in virtual reality environments as well as in e-learning.
2. Principle of the proposed method

We used a stereoscopic display with a polarizing-glasses method to implement a user-customized display with sound. Instead of using normal polarizing glasses, we made modified polarizing glasses to provide a directional-view image: both lenses of a given pair of glasses have the same polarizing direction, while each observer wears a different type of polarizing glasses. Therefore, each observer experiences a different view with a different type of polarizing glasses. This method is applicable to any type of stereoscopic display. In the case of the polarizing-glasses method, two perspective views can be provided because of the two polarizing directions (x and y linear polarization, or left and right circular polarization). To provide sound content with the directional-view image, the polarizing glasses were equipped with a stereo earphone. The earphone works over Bluetooth, so each observer can experience a different view image with different sound regardless of their position. We used a stereoscopic monitor with polarizing glasses (LG D2743). Figure 1 shows the experimental results of the proposed method. The images were captured through the polarizing glasses. Each directional-view image was composed of 1280 (H) × 360 (V) pixels. As shown in Fig. 1(a) and (b), each directional-view image was well separated at the corresponding glasses. As predicted, two observers can experience different videos with different sound content, regardless of their positions.
Figure 1. Experimental results (left: view 1 with left circular polarized glasses, centre: view 2 with right circular polarized glasses, right: without glasses).
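Assuming a line-interleaved passive 3D monitor (consistent with the 1280 × 360 per-view resolution reported above), composing the dual-view frame amounts to interleaving the two view images row by row, as in the illustrative sketch below. The function name and interleaving convention are assumptions, not the authors' implementation.

```python
import numpy as np

def compose_dual_view(view_a, view_b):
    """Row-interleave two full-frame view images for a line-by-line passive 3D
    monitor: even rows go to polarisation channel A, odd rows to channel B.
    Each observer's glasses (both lenses with the same polarisation) pass only
    one of the two channels, so each observer sees their own video."""
    assert view_a.shape == view_b.shape
    frame = np.empty_like(view_a)
    frame[0::2] = view_a[0::2]   # rows visible through 'type A' glasses
    frame[1::2] = view_b[1::2]   # rows visible through 'type B' glasses
    return frame
```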
Acknowledgement This research was supported by a grant from the R&D Program (Industrial Strategic Technology Development) funded by the Ministry of Knowledge Economy (MKE), Republic of Korea. Also, the authors are deeply thankful to all interested persons of MKE and KEIT (Korea Evaluation Institute of Industrial Technology).
References [1] Kuo, W.-H.; Chou, W.-B.; Cheng, T.-C.; Yeh. P.-C.; Jeng, Y.-S., Hu, C.-J., and Huang, W.M.; (2008) 2D/3D dual-image switchable display. Presented at the SID Int. Symp. Digest Tech. Papers. [2] Kim, Y.; Hahn, J.; Kim. Y.-H.; Kim. J.; Park. G.; Min. S.-W. and Lee. B.; (2011) A directional-view and sound system using a tracking method. IEEE Trans. on Broadcasting.
Wind Direction Perception Using a Fan-based Wind Display: Effect of Head Position and Wind Velocity on Discrimination Performance
Yuya YOSHIOKA, Takuya NAKANO, and Yasuyuki YANAGIDA
Graduate School of Science and Technology, Meijo University, 1-501, Shiogamaguchi, Tenpaku-ku, Nagoya 468-8502, Japan
{{123430042, 113430024}@ccalumni., yanagida@}meijo-u.ac.jp
Abstract. In this study, we examined the effect of head position on the human ability to discriminate between wind directions by using a fan-based wind display system. We conducted an experiment to measure the just noticeable difference of wind direction for three target positions: tip of the nose, base of the nose, and center of the head. In addition, we examined the effect of wind velocity on wind direction perception, by using a fan-based wind display system.
Keywords: haptics, wind display, wind direction, perception, wind velocity
1. Introduction

Use of wind is expected to enhance users’ sensation of being involved in virtual environments. Researchers have recently built wind display systems, such as WindCube, developed by Moon et al. [1]. However, in many systems, fans are arranged rather sparsely and such a configuration has not been proved to have sufficient ability to convey wind direction. In order to find the optimal configuration of fan-based wind displays, we have examined human discrimination characteristics for wind direction aimed at the face. We measured the just noticeable difference (JND) of wind direction perception based on the method of constant stimuli [2]. In our previous study, however, a few subjects were almost unable to discriminate between wind directions. We suspected that the target position where the wind sources are directed might affect wind direction perception, as a slight offset in the fan arrangement could result in a change in the part of the face being stimulated (i.e., wind from the right blows onto the left cheek). In the present study, we examined the effect of head position relative to the target position of the wind display system by measuring JND for three head positions. In our previous experiment, wind velocity was fixed to the maximum value that the fan could provide. However, it has been suggested that haptic sensations provided by the wind differ depending on wind velocity. Therefore, in this study, we set up the system so that the fans were directed to the center of subject’s head, and measured JND values of wind direction perception for two wind velocities, based on the method of constant stimuli.
2. Experiment

We used the same wind display system as the one used in our previous study [2]. In the previous experiment, a standard stimulus was given by the front fan (0 degrees), and a comparison stimulus was given by one of the other fans, within a range of +30 to −30 degrees at intervals of 10 degrees (Figure 1). A chin and forehead support was used to fix the subject’s head. The wind speed was 1.3 m/s. The subject was asked to state whether the comparison stimulus was to the left or to the right of the standard stimulus. We measured JND for 3 conditions in which the center of the arc-shaped frame (target position of the wind display) was aligned at the tip of the nose, base of the nose, or the center of the head. Ten male subjects volunteered for this experiment. The averaged JND for tip-of-the-nose, base-of-the-nose, and center-of-the-head conditions was 21.1 degrees, 12.9 degrees, and 6.1 degrees, respectively. A t-test showed that the difference was statistically significant (p < .01) for every combination of these conditions. Next, we measured the JND of two wind velocities by using the same experimental method. The center of the arc-shaped frame (target position of the wind display) was set at the center of the head. Ten new male subjects who had not participated in the previous experiments volunteered for this experiment. Average JND was 11.32 degrees for 0.3 m/s and 7.95 degrees for 1.3 m/s (Figure 2). The difference was not significant at the 5% level (two-sided p-value = 0.0986). However, we found a trend that subjects discriminated wind directions better when the wind velocity was faster.
Figure 1. Setup
Figure 2. JND (head position)
Figure 3. JND (wind velocity)
In conclusion, we measured the JND of wind direction perception for different head positions. The results imply that the alignment of the user’s head is important when using compact-fan-based wind display systems. We also measured the JND of wind direction perception for two different wind velocities, and found that the JND value was lower for 1.3 m/s than for 0.3 m/s. These results suggest that increased wind velocity improves perception of wind direction.
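In the method of constant stimuli, the JND is typically derived from a psychometric function fitted to the proportion of "comparison judged to the right of the standard" responses. The sketch below, using hypothetical data, shows one common way to do this (cumulative Gaussian fit, JND taken as the 50%-75% distance); it illustrates the analysis technique in general, not the authors' actual code or data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def jnd_from_constant_stimuli(directions_deg, p_right):
    """Fit a cumulative Gaussian to 'judged right' proportions and return the
    JND as the distance between the 50% and 75% points of the fitted curve."""
    def psi(x, mu, sigma):
        return norm.cdf(x, loc=mu, scale=sigma)
    (mu, sigma), _ = curve_fit(psi, directions_deg, p_right, p0=(0.0, 10.0))
    return norm.ppf(0.75) * sigma   # ~0.674 * sigma

# Hypothetical data: comparison directions -30..+30 deg in 10 deg steps
x = np.array([-30, -20, -10, 0, 10, 20, 30], dtype=float)
p = np.array([0.02, 0.10, 0.30, 0.50, 0.72, 0.91, 0.98])
print(round(jnd_from_constant_stimuli(x, p), 1), "deg")
```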
References [1] Moon, T., Kim, G. J. (2004) Design and Evaluation of a Wind Display for Virtual Reality. Proc. ACM Symposium on Virtual Reality Software and Technology. pp. 122–128. [2] Nakano, T., Saji, S. Yanagida, Y. (2012) Indicating Wind Direction Using a Fan-Based Wind Display. Proc. EuroHaptics 2012, Vol. II, pp. 97–102.
3DRecon, a utility for 3D reconstruction from video
T.W. Duckworth, D.J. Roberts
University of Salford, UK
[email protected]
Abstract. Video based 3D reconstruction (VBR) has applications including telepresence and free-viewpoint video for television and film. There has been significant research into VBR algorithm performance and quality, but little analysis of the impact camera placement and calibration quality have on the reconstructed form. 3DRecon is a utility application that enhances understanding of how camera placement and parameters affect the quality of reconstruction. It can be used in conjunction with a set of live calibrated cameras, pre-recorded datasets from disk, or in an entirely simulated context using virtual cameras and objects. Such a tool could be of great benefit to those wishing to implement a VBR system; from using the simulated setting to decide on the number and placement of cameras before deployment, to analyzing the impact of each camera on form and texture genesis in a live system.
Keywords: 3D Reconstruction, 4D imaging, multiple image modeling.
1. Motivation

VBR is a technique that can be used to create 4D models of objects from multiple video streams; it may be regarded as an extension into four dimensions of image-based reconstruction (IBR), for which numerous approaches exist [1] [2]. Cameras placed around the object being modeled each provide a partial contribution to the form and texture of the reconstruction. Reconstruction quality can be defined in terms of three measures: spatial quality defines the faithfulness of the form, visual quality provides a measure of how well the reconstruction resembles the original, and temporal quality measures how quickly the reconstruction is achieved. In general, as spatial and visual quality increase, temporal quality decreases; therefore, for real-time applications such as telepresence these qualities must be balanced to achieve the desired faithfulness of representation and frame rate. Spatial and visual quality are largely determined by the complexity of inputs to a reconstruction system, namely the number of cameras and their resolution. Adding cameras will increase spatial quality by providing more constraints to the reconstruction algorithm. Increasing camera resolution will improve both spatial and visual quality by capturing more detail that can be used for reconstruction and texturing respectively. However, a careless approach to increasing input complexity may significantly reduce temporal quality whilst having a negligible effect on spatial or visual quality. For example, when
adding cameras to a reconstruction system careless placement may result in an increased processing burden without an appreciable improvement in faithfulness of the reconstruction. Furthermore, specific VBR system infrastructures will have inherent data bandwidth constraints resulting in an effective ceiling to the possible number of cameras.
2. Contributions

Given the challenges of balancing qualities to achieve the most faithful reconstructions at the lowest processing overhead, we present an interactive utility application, 3DRecon, that can be used to further understanding of many aspects of the VBR process, including the impact of camera placement and parameters. The tool allows in-depth analysis of individual frames and sequences of frames from disk, as well as from a live camera set. Similar tools already exist [3] for the analysis of pre-reconstructed geometry, but do not include the reconstruction back-end containing the algorithm that reconstructs the form of the object from images, and hence only the texture contribution from cameras may be analysed. To the best of our knowledge, 3DRecon is the first tool that includes the reconstruction back-end, enabling interactive experimentation with the camera set in terms of which cameras contribute to the generation of form and texture. Furthermore, since the reconstruction back-end is included, 3DRecon is able to provide a simulator in which synthetic objects can be reconstructed from virtual cameras. This enables rapid prototyping and evaluation of different camera arrangements, which can be a time-consuming process when using real cameras [4]. The principal contributions of 3DRecon over similar tools [3] are:
• It includes the reconstruction back-end, enabling interactive control over camera selection for both form reconstruction and texture genesis.
• It runs from pre-recorded data on disk or live cameras via a network connection.
• Its built-in simulator enables rapid prototyping and evaluation of camera placement and parameters using virtual cameras and synthetic objects.
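To illustrate why simulating virtual cameras is useful when deciding on camera count and placement, the toy sketch below carves a voxel grid against the silhouette cones of a synthetic sphere seen from a ring of virtual cameras, showing how each added camera tightens the reconstructed visual hull. This is an illustrative simulation in the spirit of the tool's simulator, not 3DRecon's actual back-end (which reconstructs from silhouette images, e.g. [1]).

```python
import numpy as np

def inside_silhouette(voxels, cam, sphere_c, sphere_r):
    """A voxel projects inside a sphere's silhouette iff the angle between its
    viewing ray and the ray to the sphere centre is below the cone half-angle."""
    to_s = sphere_c - cam
    dist = np.linalg.norm(to_s)
    half_angle = np.arcsin(min(sphere_r / dist, 1.0))
    rays = voxels - cam
    cosang = (rays @ to_s) / (np.linalg.norm(rays, axis=1) * dist)
    return np.arccos(np.clip(cosang, -1.0, 1.0)) <= half_angle

def visual_hull(voxels, cameras, sphere_c, sphere_r):
    """Keep voxels that fall inside the silhouette seen by every camera."""
    keep = np.ones(len(voxels), dtype=bool)
    for cam in cameras:
        keep &= inside_silhouette(voxels, cam, sphere_c, sphere_r)
    return voxels[keep]

# Toy experiment: how much excess volume remains as cameras are added?
grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 40)] * 3), axis=-1).reshape(-1, 3)
centre, radius = np.zeros(3), 0.5
cams = [np.array([3 * np.cos(a), 3 * np.sin(a), 0.0])
        for a in np.linspace(0, np.pi, 8, endpoint=False)]
for n in (2, 4, 8):
    hull = visual_hull(grid, cams[:n], centre, radius)
    print(n, "cameras ->", len(hull), "voxels kept")
```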
References
[1] Franco, J.: (2009) Efficient Polyhedral Modeling from Silhouettes. Pattern Analysis and Machine Intelligence, IEEE Transactions on 31, 3 (2009), pp. 414-427.
[2] Cheung: (2000) A real time system for robust 3D voxel reconstruction of human motions. Computer Vision and Pattern Recognition, 2000. IEEE Conference on 2 (2000), pp. 714-720.
[3] INRIA: Lucyviewer. http://4drepository.inrialpes.fr/lucy_viewer
[4] Mitchelson, J., Hilton, A.: (2003) Wand-based multiple camera studio calibration. Technical Report VSSP-TR-2/2003, University of Surrey, CVSSP, 2003.
A Mobile System for Collaborative Design and Review of Large Scale Virtual Reality Models
Pedro Campos 1,2, Duarte Gouveia 1, Hildegardo Noronha 1 and Joaquim Jorge 2
1 University of Madeira, Campus Universitário da Penteada, 9000-390 Funchal
2 VIMMI Group, INESC-ID Lisbon, R. Alves Redol, 9, 1000-029 Lisbon, Portugal
[email protected], [email protected], [email protected], [email protected]
Abstract. Several tools and research prototypes have been developed with the goal of improving the visualization, manipulation, design and review of 3D virtual reality models. However, most of the interactive technologies deployed in real world engineering contexts are still difficult to use. We present a novel virtual reality system specifically designed to support the needs of engineering teams working at oil platforms. CEDAR (Collaborative Engineering Design And Review) is based on multitouch and accelerometer input, and was designed and evaluated in close cooperation with researchers and engineers of a large oil industry company. The system allows the navigation, reviewing and annotation of 3D CAD (Computer-Aided Design) models in a mobile, collaborative context.
Keywords: virtual reality, interface design, collaborative interfaces.
1. Collaboratively Reviewing 3D CAD Models

Virtual Reality (VR) user interfaces have revolutionized the way we work in many respects, including the combination of different input modalities [1]. On the other hand, multitouch technology has become mainstream and tablet-based multitouch has emerged as a mobile interaction style standard, especially due to the success of products such as the iPad. Despite these significant advances, most of the VR tools deployed in real-world design and engineering contexts are still regarded as difficult to use, especially when engineering teams need to collaboratively visualize and review large-scale 3D CAD (Computer-Aided Design) models. This is precisely what happens in the oil platform industry, which necessarily involves large teams that review, manipulate and discuss around large CAD models, which are sometimes difficult to visualize and navigate through. In this paper, we argue that the manipulation of CAD models can benefit significantly from so-called natural interaction techniques [2]. More specifically, we present a new mobile-based system that employs multitouch and accelerometer inputs. This tablet-based solution can be useful for engineering teams that are interested in design and review tasks. This new VR multimodal interface was
designed to support those tasks in a mobile context of usage (e.g., one engineer at the offshore oil platform, another engineer at the central office on the mainland). The system allows the navigation, reviewing and annotation of 3D CAD models in a mobile, collaborative context, coupling a fast OpenGL-based framework with an efficient communication protocol. Annotations are performed by touching a specific 3D point relevant to any of the engineering objects. There are two variants of the CEDAR mobile user interface: multitouch-only, and multitouch coupled with accelerometer-based input. The multitouch-only user interface uses two “joysticks” to navigate through the 3D platform: the left button controls the displacement along the Z-axis (i.e., moving forwards or backwards), and the right button simultaneously controls the X and Y position of the camera (i.e., where the user is looking). In the second version, we replaced the right “joystick” button with accelerometer-based input, so that users move forwards or backwards using the left button, but simply tilt the tablet device left/right or up/down in order to control where they are looking (see Figure 1).
Figure 1. The multitouch + accelerometer-based (MT+A) interface uses a “joystick” at the left side of the screen, but adds the accelerometer to control where the user is looking at.
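A minimal sketch of the kind of tilt-to-look mapping such an interface could use is given below, deriving roll and pitch from the accelerometer's gravity vector and converting them to a camera look direction. Axis conventions, gains and function names are assumptions for illustration, not CEDAR's actual implementation.

```python
import numpy as np

def tilt_to_look(accel, yaw_gain=1.5, pitch_gain=1.5):
    """Map tablet tilt (from the gravity vector measured by the accelerometer)
    to camera look angles; holding the tablet flat gives a neutral view."""
    ax, ay, az = accel / np.linalg.norm(accel)
    roll = np.arctan2(ax, az)    # left/right tilt -> yaw (look left/right)
    pitch = np.arctan2(ay, az)   # up/down tilt    -> pitch (look up/down)
    return yaw_gain * roll, pitch_gain * pitch

def look_direction(yaw, pitch):
    """Unit forward vector for the camera given yaw (about Y) and pitch (about X)."""
    return np.array([np.sin(yaw) * np.cos(pitch),
                     np.sin(pitch),
                     -np.cos(yaw) * np.cos(pitch)])
```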
References [1] Bolt R. A.(1998). “Put-that-there”: voice and gesture at the graphics interface. In Readings in intelligent user interfaces (San Francisco, CA, USA), Maybury M. T., Wahlster W., (Eds.), Morgan Kaufmann Publishers Inc., pp. 19–28. [2] McMahan R. P., Alon A. J. D., Lazem S., Beaton R. J., Machaj D., Schaeffer M., Silva M. G., Leal A., Hagan R., Bowman D. A. (2010). Evaluating natural interaction techniques in video games. In Proceedings of the 2010 IEEE Symposium on 3D User Interfaces (Washington, DC, USA, 2010), 3DUI ’10, IEEE Computer Society, pp. 11–14.
Augmented Dining Table for Affecting Our Food Consumption Sho Sakurai, Yuki Ban, Takuji Narumi, Tomohiro Tanikawa, Michitaka Hirose The University of Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo, Japan {sho | ban | narumi | tani | hirose}@cyber.t.u-tokyo.ac.jp
Abstract. In this paper, we propose a tabletop system for influencing food consumption by controlling the size of a projected image around the food. Recent psychological studies have revealed that the amount of food consumed is influenced by both its actual volume and external factors during eating. Among these factors, our research focuses on the apparent volume of food as a means of affecting eating behavior. Given that estimating portion size is often a relative judgment, apparent food volume is assessed according to the size of neighboring objects such as dishes and cutlery. Therefore, we expect that we can influence eating behavior and control energy intake appropriately by changing the size of a projected image around the food. Based on this hypothesis, we constructed a tabletop system which projects virtual dishes around the food placed on it, in order to change the assessed apparent food volume interactively.
Keywords: Human Food Interaction, Cross-modal Interaction, Augmented reality, Food Consumption.
1. Affecting Our Food Consumption by Environmental Factors To decrease rates of obesity, many researchers have developed systems to promote physical activity using human-computer interaction techniques [1]. Conversely, there are few studies on how such techniques can be used to modify food intake although computer-mediated human-food interactions are attracting recent research attention [2]. One limitation of these methods is that they are based on conscious education. This requires continuous effort on the part of the consumer in order to change their eating habits. Sustaining highly conscious effort when performing an intended behavior can be difficult. However, if aspects of eating behavior are controlled subconsciously, we may be able to modify food intake with minimal effort. Thus, our research focuses upon controlling food intake implicitly by using human-computer interaction techniques. Recent studies in psychology have revealed that the apparent volume of food can influence eating behavior. Given that estimating portion size is often a relative judgment, apparent food volume is assessed according to the size of neighboring objects such as dishes and cutlery [3, 4].
We hypothesize that this effect can be applied to a method for controlling food intake. Therefore, we propose a tabletop system that influences food consumption by inducing this effect with projected images around the food.
2. A Tabletop Display for Affecting Our Food Consumption
Figure 3: Projecting a virtual dish around the plate with AR marker.
We propose a method for controlling food consumption by changing the size of a projected image of a dish around the food, using a tabletop display (Fig. 2). As described in the previous section, the comparison of objects' sizes is a relative judgment, and this relativity can evoke a change in perceived food volume. We therefore aim to affect eating behavior by changing the size of the projected dishes and thereby the relative size of the food on them. This gives users the impression that there is a difference in food volume on a virtual dish even though the amount of food is the same. Each transparent plastic plate is round and has an AR marker attached (Fig. 3) for detecting the position of the plate and the food on it. The AR markers can be seen from under the table through the semi-transparent milky acrylic board when the plates are placed face up on the board. A web camera placed under the table captures the AR markers and the system calculates their positions using ARToolKit. This enables the tabletop system to be aware of the position of each plate and the food on it, and to project virtual dishes around each plate. A mirror provides a sufficient optical path length to project the dish images around each plate on the tabletop, which is parallel to the floor. The apparent food volume is changed by changing the size of the virtual dishes.
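The geometric idea behind resizing the projected dishes can be sketched as follows; the function name, the parameterisation and the example values are illustrative assumptions rather than the authors' implementation:

```python
def virtual_dish_radius(plate_radius_px, desired_fullness):
    """Return the radius (in projector pixels) of the virtual dish to draw
    around a detected plate.

    The idea follows the paper's hypothesis: the same portion looks larger on a
    small dish and smaller on a large dish. 'desired_fullness' is the fraction
    of the virtual dish area that the real plate should appear to occupy
    (e.g. 0.8 to make the portion look generous, 0.3 to make it look small).
    Both the name and the parameterisation are assumptions, not the authors' code.
    """
    if not 0.0 < desired_fullness <= 1.0:
        raise ValueError("desired_fullness must be in (0, 1]")
    # Equal apparent fullness means equal area ratio, so radii scale with sqrt.
    return plate_radius_px / desired_fullness ** 0.5

# Example: a plate detected with a 120 px radius, made to look like a small
# portion by surrounding it with a much larger virtual dish.
print(virtual_dish_radius(120, 0.3))   # ~219 px
```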
References [1] J. Maitland and M. Chalmers: Designing for peer involvement in weight management. In Proc. of the 2011 conf. on Human factors in computing systems, pp. 315-324, 2011. [2] A. Grimes and R. Harper: Celebratory technology: new directions for food research in HCI. In Proc. of the 2008 conf. on Human factors in computing systems, pp. 467-476, 2008. [3] B. Wansink and M. M. Cheney: Super Bowls: serving bowl size and food consumption. The Journal of the American Medical Association, 293(14):1727-1728, 2005. [4] B. Wansink, K. Van Ittersum and J. E. Painter: Ice cream illusions: bowls, spoons, and self-served portion sizes. American Journal of Preventive Medicine, 31(3):240-243, 2006.
Monitoring a Realistic Virtual Hand using a Passive Haptic Device to Interact with Virtual Worlds J.-R. CHARDONNET(1) and J.-C. LEON(2)
(1) Arts et Métiers ParisTech, CNRS, Le2i, Institut Image, 71100 Chalon-sur-Saône, France (2) Grenoble University, INRIA, Jean Kuntzmann Laboratory, 38041 Grenoble Cedex 9, France
[email protected] [email protected]
Abstract. We present a prototype of a hands-on immersive peripheral device for controlling a virtual hand with high dexterity. This prototype is as easy to use as a mouse and allows the control of a high number of degrees of freedom (dofs) with tactile feedback. The design goals, which address both design issues and physiological behavior, include the choice of sensor technology and sensor placement on the device, low forces exerted while using the device, relevant multi-sensory feedback, and the performance of the achieved tasks.
Keywords: navigation device, passive haptics, interaction, manipulation.
1. Introduction Our hands are an essential tool for interacting with our environment, for example to manipulate objects. However, such interactions are not straightforward because, for the same object, there are generally several possible grasping configurations. This issue remains a great challenge in virtual reality, as current devices, e.g. [1,2], and software do not allow a grasping motion to be reproduced in a natural and realistic way in virtual environments, and they are not intended for general audience use. Our proposed solution is based on passive haptic feedback and hands-on interaction. We present an extension of the HandNavigator described in [3]. Unlike [1], we conducted validation tests of the existing prototypes on grasping-task scenarios to derive the requirements for a new HandNavigator design with a more ergonomic shape and sensors that improve dexterity and interaction, while integrating tactile feedback for enhanced immersion.
2. Design of the new prototype Validation tests on previous versions highlighted several issues: the choice of sensor technology, the minimization of muscular effort, and the inclusion of tactile
feedback. We designed a new prototype, called V4, in which several sensors are used to control each finger: a lever switch for controlling the virtual finger in free-motion mode with a small displacement of the real finger, a vibrator to give the user tactile feedback when a finger touches a virtual object, and a pressure sensor for controlling the virtual finger in a constrained-motion configuration (typically when the user holds a virtual object) and to convey a feeling of grasp. All these sensors are integrated in the module depicted in Figure 1; there is one module for each finger, four modules in total. A spring between the lever switch and the pressure sensor separates the free-motion mode from the constrained-motion mode. We added a silicone-based coating on the device as damping material to isolate each finger and prevent vibrations from propagating through the whole structure, so that the user immediately knows which finger touches a virtual object. We tested our device on a simple scenario in which the goal is to position a virtual hand on a part from a food processor. A comparison between the real hand configuration and the virtual one achieved using our device is shown in Figure 2.
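To make the per-finger control logic more concrete, the sketch below models one sensor module as a small two-mode controller; the gains, the step size and the contact test are illustrative assumptions, not the V4 firmware:

```python
from dataclasses import dataclass

@dataclass
class FingerModule:
    """One per-finger module: lever switch, pressure sensor, vibrator (illustrative model)."""
    closure: float = 0.0          # virtual finger closure, 0 = open, 1 = fully closed
    FREE_STEP = 0.02              # closure change per update while the lever is held (assumed)
    PRESSURE_GAIN = 0.5           # how strongly pressure drives grasp force (assumed)

    def update(self, lever, pressure, finger_in_contact):
        """Advance the virtual finger one control step.

        lever             -- -1, 0 or +1: lever switch pushed to open, idle, or close
        pressure          -- reading of the pressure sensor in [0, 1]
        finger_in_contact -- True when the simulation reports the virtual finger
                             touching a virtual object
        Returns (grasp_force, vibrate): the force to apply in the simulation and
        whether the vibrator should be pulsed for tactile feedback.
        """
        if not finger_in_contact:
            # Free-motion mode: the lever switch drives the finger with small real motions.
            self.closure = min(1.0, max(0.0, self.closure + lever * self.FREE_STEP))
            return 0.0, False
        # Constrained-motion mode: the pressure sensor modulates grasp force,
        # and the vibrator signals the contact to this finger only.
        return pressure * self.PRESSURE_GAIN, True
```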
Figure 1. Our new prototype (on the left, a module for one finger).
Figure 2. Configuration of a real/virtual hand for object grasping.
References [1] SCHLATTMAN M., KLEIN R.: (2007) Simultaneous 4 gestures 6 dof real-time two-hand tracking without any markers. In ACM symposium on Virtual Reality Software and Technology, pp. 39–42. [2] BOUZIT M., BURDEA G., POPESCU G., BOIAN R.: (2002) The Rutgers Master II-new design force-feedback glove. IEEE/ASME Transactions on Mechatronics 7, 2. [3] CHARDONNET J.-R., LÉON J.-C.: (2010) Design of an immersive peripheral for object grasping. In ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference (IDETC/CIE).
INDUSTRIAL PAPERS
Discrete Event Simulation using Immersive Virtual Reality for Factory Simulation MR C. FREEMAN Virtual Reality Group. Nuclear AMRC, Rotherham UK. email:
[email protected]
MR S. REDDISH Virtual Reality Group. Nuclear AMRC, Rotherham UK. email:
[email protected]
DR R. W. SCOTT Virtual Reality Group. Nuclear AMRC, Rotherham UK. email:
[email protected]
Abstract. Discrete Event Simulations (DES) of assembly processes were converted into Immersive Virtual Reality (IVR) environments to aid the client in the decision making stages of factory design, layout and assembly process. The work was commissioned by Rolls-Royce as part of Project Power, a factory build for reactor pressure vessels, and conducted at the Nuclear Advanced Manufacturing Research Centre (Nuclear AMRC). The work also falls within the Cooperation Environment for Rapid Design, Prototyping and New Integration Concepts (Copernico), as part of the European Framework Project 7 (FP7). Simulation was used to visualise work flow, process management, and Health, Safety, & Environment (HS&E) issues within the factory. It is shown that IVR is well suited to the visualisation of the interaction of processes and factory layout, and has proven to be an intelligent tool for use in the decision making process of both factory and process planning.
Keywords: Virtual Reality, Factory simulation, Copernico, Discrete Event Simulation, Small Medium Enterprise
1. Introduction A Discrete Event Simulation (DES) model was developed after it was predicted that transportation within the manufacturing facility would take up an estimated 20% of the manufacturing lead time. Of this time, an estimated 70% was predicted to consist of complex lifts, causing both increased disruption within the plant and greater Health, Safety and Environment (HS&E) risks. A primary driver was to reduce the number of
complex lifts, and to gain a greater understanding of the influences and effects of HS&E on factory flow. In a factory where there is no steady flow, each component has its own routing, often intersecting the routes of other parts. Paper documentation describing these routes can be difficult to visualize. The generation of the DES model within WITNESS helped create a statistically accurate simulation, enabling predictive experimentation. A number of different permutations were also added by testing the system with 'what if?' scenarios that included: re-work, shift patterns, number of operators, safe working distance under the crane, crane speed (loaded and unloaded), quantity of resources, breakdowns, and planned maintenance. This helped to identify the long-term load and future capacity of the factory. However, the DES model could only provide a limited amount of information and understanding of the process flow and scheduling. After discussion with the Nuclear AMRC, an immersive 3D Virtual Reality (VR) model of the facility was developed, and the results of the DES simulation were linked to, and demonstrated in, this virtual environment. This approach, whereby the numerical simulation of the DES model and the architectural and mechanical constraints of the built environment were integrated into one environment, has provided a greater degree of understanding than the two simulations alone.
2. Methods The project was undertaken within the Nuclear AMRC [1] as part of FP7, Copernico [2], and with data provided by Rolls-Royce. DES data from Lanner Witness [3] was integrated with the VR-capable PTC Division MOCKUP2000i2 [4] to display the simulations on hardware supplied by Virtalis. The simulation was used to identify issues with work flow, process management, HS&E, and collision detection. Rolls-Royce commissioned the development of the DES model in order to optimize transportation and minimize non-value-added activities such as complex lifts. As part of the DES model, nine main processes (Transport; Welding; Heat Treatment; Machining; Non Destructive Examinations (NDE); Work in Progress (WIP); Dress and Deburr; Inspection; and Setup) were identified [Figure 1], with each process having variants depending on the component and the stage of manufacture. The nine processes were imported into the VE and represented in a Cartesian space-time coordinate system, (x1, y1, z1, t1) in state one and (x2, y2, z2, t2) in state two. This gave information about the position, the applied process and the time of each individual component in the system. The data was separated into assembly sequences for the component parts and the completed components. The parts simulations were then combined to create vessel simulations. These were finally amalgamated into a construction schedule for the full eighteen months of the factory cycle. The simulation adjusted the timings of each process so that twelve hours of real time were represented by one second of simulation time. This condensed the simulation to just over twenty minutes of run time.
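The space-time records and the time compression described above can be illustrated with a short sketch; the record structure, field names and the example component are assumptions, with only the twelve-hours-to-one-second scaling taken from the text:

```python
from dataclasses import dataclass

# One compression factor: twelve hours of factory time become one second of playback.
REAL_SECONDS_PER_SIM_SECOND = 12 * 60 * 60

@dataclass
class ProcessState:
    """A single space-time record exported from the DES model (illustrative structure)."""
    component: str
    process: str          # e.g. "Welding", "Heat Treatment"
    x: float
    y: float
    z: float
    t_real: float         # seconds since the start of the factory schedule

    def playback_time(self) -> float:
        """Time (in seconds) at which this state is shown in the VR playback."""
        return self.t_real / REAL_SECONDS_PER_SIM_SECOND

# Example: a (hypothetical) vessel arriving at the welding bay 30 days into the
# schedule appears 60 seconds into the condensed VR run.
state = ProcessState("RPV-01", "Welding", x=12.0, y=3.5, z=0.0, t_real=30 * 24 * 3600)
print(state.playback_time())   # 60.0
```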
Figure 1: Simulation with colour code and time frame
The programming of both the part movements and the assembly process was achieved using a sequence of movements. The sequencing worked by recording the state of all the components in the timeline either frame by frame, or continuously once the record button was pressed. The result was a long sequence of start times and subsequent time codes that set the Cartesian coordinates and orientation of the parts. One of the main drivers of the model was to visualize the working environment required for each process and the impact of this on other nearby areas. To achieve this, translucent 'cages' were created around each component which changed colour to represent the scheduled processes. The size of the cage was dictated by the area required for the component and the equipment and fixtures of each process. The results showed the region of space around the component that must not be impinged upon by any other component or HS&E field.
3. Results The goal of the project was to prove that VR simulation of production sequences was possible and could be of practical use to SME companies. The simulation allows the viewer to watch and record where parts interact and potentially cause delays, which is particularly useful in factories with non-linear routed flow. Factory process simulation is possible, and below is a discussion of problems that were highlighted during the project. Certain areas of the factory were very closely packed, often with more than one vessel undergoing a process such as heat treatment or assembly in the same bay at the same time. This significantly reduced the amount of equipment and the number of people that could be at work in the area at one time, and could also have caused problems with cross-process pollution and safety concerns. Other areas were identified where vessel movement was restricted by reduced working areas. These challenges have since been overcome through iterative review and reconfiguration within the VE.
HS&E considerations meant that while a load was craned over another process within the factory, that process had to be suspended for the duration of the load transit. The simulation demonstrated the full extent of the disruption that this would cause within the factory. Challenges were highlighted in many areas of the factory that had the potential to become congested. The air ventilation system itself vastly reduced the available working space by restricting movements in the assembly bays and blocking movements into the furnace. The vents were later removed to allow greater access. The building's support pillars, which housed the gas extractors and the electricity and gas lines, prevented direct movement between the assembly bays and machining areas. This required all components to be craned or moved by air-skate to the access road, which caused movements to bisect the inspection and unloading area; as this area was often full of components, movements were heavily restricted. Air-skate and crane movements were subsequently improved to mitigate this problem.
4. Conclusion Since the completion of the project, the simulation has proved an invaluable tool for members of Rolls-Royce looking to inform and educate all affected areas of the business. It helps to demonstrate the advanced techniques the company is using in its factory design process and to maintain its position at the forefront of technological development. The potential to save money, increase safety, and visualize problems both before and after building is limited only by the designers. It is the authors' recommendation that in future the process be adopted by design teams creating any factory, at a substantially reduced cost. Issues identified in the virtual factory are cheaper to resolve in the VE than on the building site or on the factory floor. By showcasing and developing this capability, it is hoped that SMEs will be encouraged to embrace the technology and take part in its further development.
Acknowledgements The authors acknowledge the support of the European Commission through the 7th Framework Programme under NMP-2008-3.4-1 Rapid Design and Virtual Prototyping of Factories (COPERNICO: contract number 229025-2). The authors would also like to thank the Nuclear AMRC for its ongoing support.
References [1] Nuclear AMRC - http://www.shef.ac.uk/amrc [2] Copernico - http://copernico.co/default.aspx [3] Lanner Witness - http://www.lanner.com/en/witness.cfm [4] PTC Division Mockup2000i2 - http://www.ptc.com/product/division
Augmented Reality Pipe Layout Planning in the Shipbuilding Industry Harald Wuest, Manuel Olbrich, Patrick Riess, Sabine Webel, Ulrich Bockholt Department Virtual and Augmented Reality Fraunhofer IGD, Darmstadt, Germany
[email protected]
Abstract. As large ships are never produced in large quantities, it often occurs that the construction process and production process overlap in time. Many shipbuilding companies have problems with discrepancies between the construction data and the real built ship, and the assembly department often has to modify CAD data for a successful installation. We present an augmented reality system where a user can visualize the construction data of pipes and modify these in the case of misalignment, collisions or any other conflict. The modified pipe geometry can be stored and further used as an input for CNC pipe bending machines. To guarantee an exactly orthogonal passage of the pipes through aligning bolt holes, we integrated an LED-based optical measurement tool into the pipe alignment process.
Keywords: augmented reality, mixed reality, industrial applications
1. Introduction Discrepancy checks between CAD data and real objects are of high industrial interest in the areas of prototype development and planning of industrial plants. When large ships are built the production is often carried out while the construction and the design are still in progress. Therefore a closer relation between planning and final installation can be of great advantage. The use of an augmented reality application, where CAD-models are superimposed on partial manufactured objects, can facilitate the assembly and visualize potential conflicts before the actual installation. The application we present in this paper handles the planning and the installation of pipes in large ships. So far in the current manufacturing process a so-called wire model is produced according to the pipe geometry, and used to check the accurate fit of the pipeline. The operator manually adjusts this wire by bending until it fits. This modified wire is then digitalized with a measurement system, and the resulting geometry data is used to bend a real pipe with a CNC bending machine. To speed up this process we developed an augmented reality application with which pipes can be virtually visualized, modified and tested for accuracy in fitting before the actual installation. A key feature of our application is a measurement tool, which can be used to automatically align a pipe along two 3D points.
Figure 1. The left image shows the virtual pipe planning application running on a tablet-PC. A user can interactively modify virtual pipe segments by touch interaction. On the right a typical screen of the user interface can be seen.
The problem of detecting differences between virtual and real objects in industrial scenarios has recently been addressed, but most of these approaches are only used for inspection and not for modifying and processing the augmenting data [1,3]. In contrast to these applications, our system targets not only the visualization of differences between virtual and real models, but also the modification and adaptation of virtual content so that it fits exactly into an existing, partially manufactured industrial object.
2. AR Pipe Editor The editor can load, modify and save data stored in a CAD format which contains parameterized geometrical descriptions rather than 3D geometries. The application runs on instantReality, a framework for mixed reality applications based on the X3D standard [2]. Possible changes to the pipe include the movement of segments as well as the insertion and deletion of bending points. When a pipe segment is moved, all connected elements are modified as well. To keep the model as close as possible to the underlying file format, all modifications and validity checks are done in the format's description space. The main editing controls for a single pipe are located on the right of the tablet PC's screen. A large slider with an axis selector on top is used to move pipe elements; the slider modifies the position in the selected direction. The element below the slider is used to select the current pipe segment, which can also be done by directly touching the desired segment. The lock is used to prevent direct and indirect modification of a single pipe segment. This is helpful if a segment is already in the desired position, but neighboring segments still need to be modified. Below is a button to trigger the pipe alignment process, which is described later in section 3. The descriptive file format provides a set of component types which are used to construct the complete pipe. The most common types are pipe components (which refer to a straight piece of pipe) and bend components. For each component type, a point-set representation is calculated, which describes a line along the element. To generate a three-dimensional object from this point set, an X3D Extrusion node is used.
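As a rough sketch of how parameterized components can be turned into a circular cross-section and a point-set spine for an Extrusion-style node (this is illustrative Python, not the instantReality/X3D implementation, and a real editor would derive bends from the exact CAD bend radius):

```python
import math

def circle_cross_section(diameter, n=16):
    """2D circle used as the extrusion cross-section (the pipe diameter), as (x, y) pairs."""
    r = diameter / 2.0
    return [(r * math.cos(2 * math.pi * i / n), r * math.sin(2 * math.pi * i / n))
            for i in range(n)]

def bend_spine(p_start, p_corner, p_end, n=8):
    """Sample a smooth point-set spine through a bend.

    The bend is approximated here by a quadratic Bezier between the two straight
    segments meeting at p_corner; a production editor would use the exact arc
    defined by the CAD bend radius.
    """
    def lerp(a, b, t):
        return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))
    pts = []
    for i in range(n + 1):
        t = i / n
        a = lerp(p_start, p_corner, t)
        b = lerp(p_corner, p_end, t)
        pts.append(lerp(a, b, t))   # de Casteljau step for a quadratic Bezier
    return pts

# The cross-section and spine together are what an Extrusion-style node sweeps:
section = circle_cross_section(diameter=0.05)
spine = bend_spine((0, 0, 0), (1, 0, 0), (1, 1, 0))
```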
Figure 2. Occlusion based on invisible objects (left) and aligning a pipe element to 2 input positions (right)
This node can extrude a two-dimensional shape along a point set. The two-dimensional shape for this process is a simple circle with the size of the pipe's diameter. Mixed reality applications need to take many steps to visually embed a virtual object in a real environment. One problem in this area is the correct occlusion of virtual objects by the real environment. Since we already have a model of the surrounding geometry, we can use it to mask out parts of the virtual pipe which are occluded by real structures. The top and bottom parts of the left image in Figure 2 show the same scene, but in the bottom image the model is masked out. This image can be rendered directly over the camera image, giving the impression of real structures occluding the virtual pipe. A key feature of the presented editor is the ability to adjust pipe elements according to a position determined with a special tool, which can be fixed to connection passages. The user can touch the align button to move and rotate the element into the right spot. Figure 2 (right) shows a pipe before and after this process. Not only the selected pipe element is modified in this action, but also the connected bends and the pipes connected to them.
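The occlusion masking described above corresponds to the standard "phantom geometry" technique: render the model of the real surroundings into the depth buffer only, then render the virtual pipe. The sketch below shows this in PyOpenGL-style code; the draw callbacks are placeholders, the snippet must run inside an active OpenGL context with the background camera image and camera pose already applied, and it is not claimed to be the editor's actual rendering code:

```python
from OpenGL.GL import (glColorMask, glDepthMask, glClear,
                       GL_TRUE, GL_FALSE, GL_DEPTH_BUFFER_BIT)

def render_with_occlusion(draw_real_world_model, draw_virtual_pipe):
    """Render one frame so that known real structures occlude the virtual pipe.

    draw_real_world_model -- callable issuing the draw calls for the CAD model of
                             the surrounding (real) geometry
    draw_virtual_pipe     -- callable issuing the draw calls for the augmenting pipe
    Generic 'phantom geometry' sketch; callback names are placeholders.
    """
    glClear(GL_DEPTH_BUFFER_BIT)

    # Pass 1: write the real-world model into the depth buffer only,
    # leaving the camera image visible in the colour buffer.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE)
    glDepthMask(GL_TRUE)
    draw_real_world_model()

    # Pass 2: draw the virtual pipe normally; fragments behind the phantom
    # geometry fail the depth test, so real structures appear to occlude it.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)
    draw_virtual_pipe()
```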
3. Tracking System Overview Our tracking system mostly relies on a structure-from-motion framework which was presented in previous work [4]. This framework creates a reconstruction of natural feature points and uses those for frame-to-frame tracking and initialization. From an application perspective there are two challenges. First the generation of precise 3D reference data, which is performed in a pre-processing setup step, and second, a fast and robust real-time tracking, which is used when the actual application is running. To retrieve a feature map in the coordinate system of a given CAD model, first a structure-from-motion based reconstruction is carried out on a video sequence, which shows the industrial object from relevant viewing positions. Then, by manually defining at least 3 correspondences between the reconstructed point cloud and the CAD model, a transformation consisting of rotation, translation and scale can be estimated. In the frame-to-frame tracking stage, not only the reference feature map serves as input for the camera pose estimation, but also additional feature points, which are reconstructed during the runtime of the application.
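The alignment of the reconstructed point cloud to the CAD coordinate system from at least three correspondences can be computed with a standard closed-form similarity fit (Umeyama-style). The following numpy sketch shows one way to do it; it is an illustration of the idea, not the authors' code:

```python
import numpy as np

def estimate_similarity(src, dst):
    """Estimate scale s, rotation R and translation t with dst ≈ s * R @ src + t.

    src, dst -- (N, 3) arrays of at least 3 corresponding points
                (reconstructed cloud and CAD model, respectively).
    Standard closed-form (Umeyama-style) fit; illustrative only.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    # Cross-covariance and SVD give the optimal rotation.
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:          # avoid reflections
        D[2, 2] = -1
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```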
Figure 3. Calibration tool (left). In the middle image the reconstructed 3D points of the calibration tool are visualized by two spheres. The result after aligning the selected pipe segment along the calibration tool is shown in the image on the right.
To simplify the pipe planning and verification process within the pipe editor application, we developed a calibration tool which can be used to align a straight pipe segment along an axis that passes orthogonally through a bolt hole. This calibration tool consists of an iron body which can be screwed into a pipe hole, and an attached stick with two illuminating bodies located on the orthogonal axis through the center of the bolt hole. After the illuminated balls have been detected by color segmentation in several image frames, the resulting points are used for an online reconstruction of the calibration tool. Figure 3 shows two screenshots of the application before and after the alignment step. The application was tested on a tablet PC with a basic consumer webcam as the video device. A typical application workflow is demonstrated in a video, which can be seen at http://youtu.be/HJIbcIYWiVc.
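Detecting the two illuminated bodies by colour segmentation could look roughly like the OpenCV sketch below; the HSV thresholds are placeholder values, the OpenCV 4.x API is assumed, and the real system additionally reconstructs the detected points in 3D over several frames:

```python
import cv2
import numpy as np

# Placeholder HSV range for the illuminated markers (would be tuned to the LEDs).
LOWER = np.array([40, 80, 200])
UPPER = np.array([80, 255, 255])

def detect_marker_centroids(bgr_frame, max_markers=2):
    """Return up to 'max_markers' image-space centroids of brightly coloured blobs."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # OpenCV 4.x return signature (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)[:max_markers]
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```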
4. Conclusion We presented a pipe planning application, which can be used to virtually inspect and modify pipes before the actual manufacturing and installation in a real ship. A measurement tool with two illuminating points was used to align virtual straight pipe segments in such a way that they fit exactly through existing bolt holes.
References [1] P. Georgel, P. Schroeder, S. Benhimane, S. Hinterstoisser, M. Appel, and N. Navab; (2007) An industrial augmented reality solution for discrepancy check. In Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR). [2] Web3D Consortium, X3D; http://www.web3d.org/x3d/. [3] S. Webel, M. Becker, D. Stricker, and H. Wuest; (2007) Identifying differences between CAD and physical mock-ups using AR. In Proceedings of the International Symposium on Mixed and Augmented Reality (ISMAR). [4] F. Wientapper, H. Wuest, and A. Kuijper; (2011) Reconstruction and accurate alignment of feature maps for augmented reality. In 3DIMPVT: The First Joint 3DIM/3DPVT Conference.
Virtual Assessment Meeting: a 3D Virtual Meeting Tool Integrated with the Factory World MARIA DI SUMMA(1), GIANFRANCO MODONI(1), GABRIELA CANDEA(2), CIPRIAN RADU(2), RUGGERO GRAFITI(3), MARCO SACCO(1)
(1) Institute of Industrial Technologies and Automation, National Research Council of Italy, Via Paolo Lembo, 38/F, Bari, 70124, Italy; (2) Ropardo SRL, R&D Department, 2A Reconstructiei, Sibiu, 550129, Romania; (3) Alenia Aermacchi, S.P. 83, Grottaglie, Italy. {mara.disumma, gianfranco.modoni}@itia.cnr.it, {gabriela.candea, ciprian.radu}@ropardo.ro,
[email protected],
[email protected]
Abstract. Stand-up meetings have become a common ritual for many factory teams: they provide a status update to the team members, allow problems and solutions to be communicated, and promote team focus. This paper concentrates on the Virtual Assessment Meeting (VAM), a Virtual Reality application for virtual meetings, developed for Alenia Aermacchi as part of the Virtual Factory Framework project. VAM is an integrated set of software modules in which the virtual structure of an industrial complex is represented. Its added value is its interoperability with other software tools, providing at the same time both data reliability and constantly updated information. A particular focus has been placed on the integration of Giove Virtual Meeting (Giove VM), a tool for 3D representation of and interaction within the Virtual Factory, with iPortal, a virtual location offering a collaborative workplace, a central information point and a Decision Support System. The purpose of this work is to demonstrate how the effectiveness of a virtual meeting is greatly improved when it is no longer only a 3D representation of a real physical space, but a shared space in which the factory's information, managed and updated automatically, can be accessed in real time.
Keywords: Virtual Reality, Virtual Meeting, Virtual Factory, interoperability, real-time update, decision support.
1. Introduction The coordinated evolution of the product/process and factory worlds, and of the related data management and tools, is the current challenge in manufacturing engineering, aiming at creating a flexible system that is able to adapt to sudden market changes [1]. This paper presents an integrated set of software tools, developed for the Alenia Aermacchi Company, named the Virtual Assessment Meeting (VAM). It adopts the approach proposed by the Virtual Factory Framework (VFF) project [2], consisting of a holistic virtual environment that integrates several decoupled functional tools
sharing the same semantic data model, to support the design and management of factories [3]. The integration of the different tools within the VFF platform is obtained using the Virtual Factory Manager (VFM) that handles a common and shared communication layer between already existing and newly developed software tools to support the factory design and management [4] [5]. In Alenia Aermacchi a stand-up meeting is an important time to monitor all activities within the plant, thus providing a status update to the team. In this context, we propose the use of VAM to support the Alenia Aermacchi stand-up meeting process.
2. Virtual Assessment Meeting VAM is a software platform based on the integration of three tools: iPortal, Giove Virtual Meeting and iDS. All stand-up meeting participants first log in to the iPortal web application. From there, they can manage the documents related to the meeting and the activities assigned to them as a result of different meetings. The meeting place and the whole Alenia Aermacchi factory can be visualized in 3D using GIOVE VM. This tool allows the entire manufacturing process to be monitored through a traffic-light system. A Decision Support tool such as the action plan can be used to create and assign activities as a result of the meeting, or to create new meetings. People to whom activities are assigned can be notified automatically by email.
2.1 iPortal iPortal [7] is a virtual location: a web-based application, accessible via the Internet through a browser, with a dashboard-like interface, offering the user a collaborative workplace and a central information point. It provides conceptual integration of the business processes related to different digital activities, and functional integration of informatics tools through Portlet technology. Some of its features are: single point of entry, customization, user & project management, email & message management, document management, calendar, real-time update system (of the information in the library), news/announcements, and project versioning.
2.2 Giove Virtual Meeting GIOVE Virtual Meeting (GIOVE VM) is a 3D virtual working environment which provides users with realistic, simultaneous navigation as well as immersion and interaction capabilities. Its main functionality is the virtualization of the factory layout, buildings, infrastructure, paths and the specific area where the stand-up meeting takes place. The stand-up meeting zone includes a monitoring system based on traffic lights, giving an immediately updated status of the factory's activities. Alarms are reported if there is any deviation between the planned time and the time actually spent performing a specific task. During a multi-user collaborative session, each participant has his or her own copy of the graphical user interface, which presents a
rendered 3D view of the virtual production system. All users can interact with the virtual environment at any time. The integration of a voice conference system, using the Skype API, also allows continuous communication among the online participants. Any number of users can join a collaborative session using TCP/IP over Local or Wide Area Networks. GIOVE VM is built on a set of software modules based on GIOVE (Graphics and Interaction for OpenGL-based Virtual Environments), a set of C++ software libraries developed by ITIA for the creation of Virtual Environments [6].
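The traffic-light monitoring could, for example, map the deviation between the planned and the actual task time to a colour, as in the following sketch; the thresholds and names are assumptions, not GIOVE VM's actual rules:

```python
def activity_status(planned_hours, actual_hours, warn_ratio=1.1, alarm_ratio=1.3):
    """Map a task's schedule deviation to a traffic-light colour.

    green  -- on or ahead of schedule
    yellow -- moderately late (more than warn_ratio of the planned time)
    red    -- badly late (more than alarm_ratio), raising an alarm in the meeting zone
    The thresholds are illustrative assumptions.
    """
    if planned_hours <= 0:
        raise ValueError("planned_hours must be positive")
    ratio = actual_hours / planned_hours
    if ratio <= warn_ratio:
        return "green"
    return "yellow" if ratio <= alarm_ratio else "red"

print(activity_status(planned_hours=8, actual_hours=11))   # red
```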
2.3 iDS iDecisionSupport (iDS) [8] is a collaborative working environment where team members attend different types of meetings. Individual and group decisions are facilitated by different decision support tools. Some of iDS's features are: remote work, synchronous and asynchronous collaboration, reduced decision-process time, email notifications (about the meeting and its protocol), direct access to the meeting (without any knowledge of the whole system), and decision support tools (vote, brainstorming, action plan, mind map, etc.). During an Alenia Aermacchi stand-up meeting, actions related to certain activities are decided upon and assigned to specific people using the action plan tool.
3. The Integration Through VFM The aforementioned software tools interoperate by exploiting the Virtual Factory Manager (VFM), which acts as a server supporting the I/O communications and providing access to the data repository, thereby guaranteeing data consistency. An ontology-based Virtual Factory Data Model (VFDM) has been adopted to model the data and the relationships between different pieces of information, in this way providing semantics [9]. The VFDM has been decomposed into macro areas, creating a hierarchical structure of ontologies, thus decomposing the problem and reducing its complexity while keeping a holistic approach. Each VFDM area consists of one or more ontologies, expressed in the Web Ontology Language (OWL). The VFDM defines only the so-called metadata, whereas the actual instances are stored in the VF Data Repository of the VFM. The adoption of a file-based system instead of a Database Management System (DBMS) is justified by its flexibility and by the possibility of applying a versioning system. Furthermore, to interact with the VF Manager, a specific component, called the VFM Connector, has been implemented: the VFM Connector takes care of the Web Service client implementation and the connection state mechanism. The VFM Connector can parse, create and modify the ontologies thanks to an internal mapping between OWL classes and internal classes. Moreover, it can import/export ontologies serialized in RDF/XML format. Briefly put, each VAM software component is a VFF module that uses a VFM Connector to read and write semantic data from and to the VFF ontology. The VAM tools communicate through the VFF ontology.
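The kind of ontology handling described for the VFM Connector (parsing, modifying and re-serializing RDF/XML ontologies) can be illustrated with the small rdflib sketch below; the file name, namespace and class names are hypothetical, and this is not the VFM Connector's implementation:

```python
from rdflib import Graph, Namespace, Literal, RDF

# Hypothetical names: the real VFDM ontologies and namespaces differ.
VF = Namespace("http://example.org/vfdm#")

g = Graph()
g.parse("factory_layout.owl", format="xml")        # read an RDF/XML-serialized ontology

# Create an instance of a (hypothetical) Activity class with a planned duration.
activity = VF["Activity_042"]
g.add((activity, RDF.type, VF.Activity))
g.add((activity, VF.plannedDurationHours, Literal(8)))

g.serialize(destination="factory_layout_updated.owl", format="xml")   # write it back
```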
4. Conclusions The goal of this application is to support the stand-up meeting, a moment of undeniable importance in the daily functioning of factory operations. The most important result that we expect from the use of this environment is continually updated information, immediately and easily accessible from anywhere in the virtual factory thanks to the interoperability guaranteed by the VFM. The testing of this environment within an industrial plant such as Alenia Aermacchi will provide significant feedback for its optimization.
Acknowledgement The research reported in this paper has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement NMP2 2010-228595, Virtual Factory Framework (VFF).
References [1] TOLIO, T.; CEGLAREK, D.; ELMARAGHY, H. A.; FISCHER, A.; HU, S.; LAPERRIERE, L.; NEWMAN, S. and VANCZA, J.; (2010) SPECIES -- Co-evolution of Products, Processes and Production Systems. CIRP Annals - Manufacturing Technology, vol. 59, no. 2, pp. 672--693. [2] [Online]. http://www.vff-project.eu/. (2008) VFF, Holistic, extensible, scalable and standard Virtual Factory Framework. (FP7-NMP 2008-3.4-1, 228595). [3] SACCO, M.; PEDRAZZOLI, P. and TERKAJ, W.; (2010) VFF: Virtual Factory Framework. Proceedings of ICE - 16th International Conference on Concurrent Enterprising, Lugano, Svizzera. [4] BOER, C. R. and al.; (2011) Virtual Factory Manager of Semantic Data. Proceedings of DET2011 7th International Conference on Digital Enterprise Technology, Athens, Greece. [5] SACCO, M.; DAL MASO, G.; MILELLA, F.; PEDRAZZOLI, P.; ROVERE, D. and T TERKAJ, W.; (2011)Virtual Factory Manager. HCI International, Orlando, USA. [6] VIGANO, G. P.; GRECI, L. and SACCO, M.; (2009) GIOVE Virtual Factory: the digital factory for human oriented production systems. Proceedings of CARV 3rd CIRP International Conference on Changeable, Munich, Germany. [7] CANDEA, C.; GEORGESCU, A. and CANDEA, G.; (2008) iPortal-Management Framework for Mobile. Proceedings of the International Conference on Manufacturing Science and Education (MSE), Sibiu, Romania. [8] CANDEA, C.; CANDEA, G. and FILIP, F.; (2012) iDecisionSupport – Web-Based Framework for Decision Support Systems. 14th IFAC Symposium on Information Control Problems in Manufacturing, Bucharest, Romania. [9] GHIELMINI, G.; PEDRAZZOLI, P.; ROVERE, D.; TERKAJ, W.; BOER, C. R.; DAL MASO, G.; MILELLA, F. and SACCO, M.; (2011) Virtual Factory Manager of Semantic Data. Proceedings of DET2011 - 7th International Conference on Digital Enterprise Technology, Athens.
A Multi-View Display System using a QDA Screen Shiro Ozawa(1)(†), Satoshi Mieda(1), Yasuhiro Yao(1), Hideaki Takada(1), Tohru Kawakami(2), Senshi Nasu(2)(3), Takahiro Ishinabe(2), Mitsuru Kano(2), Yoshito Suzuki(2) and Tatsuo Uchida(2)(3)
(1) NTT Media Intelligence Lab., NTT Corporation, Yokosuka, Kanagawa, Japan (2) New Industry Creation Hatchery Center, Tohoku University, Sendai, Miyagi, Japan (3) Sendai National College of Technology, Sendai, Miyagi, Japan
(†) [email protected]
Abstract. We have developed a multi-view display system that uses “Quantized-Diffusion-Angle screen (QDA screen)” and multiple projectors. The use of the QDA screen offers the benefit of a deep, wide viewing area that enables many observers to watch the display at the same time. It also provides convenience in that it is unnecessary to accurately orient the projectors, which facilitates easy system construction.
Keywords: multi-view display, projection display, QDA screen.
1. Introduction Multi-view display systems provide multiple images in different viewing regions, and are expected to trigger the emergence of new display applications such as displaying 3D-CG objects in museums, explaining virtual spaces, remote conferencing and so on. A typical example of a multiple-directional-viewing display is the triple-view LCD using a liquid crystal TFT panel and a parallax barrier [1, 2]. However, this method has difficulty increasing the number of viewing directions beyond four because of the attendant decrease in resolution and the high alignment accuracy required in production. In addition, this method also suffers from a very shallow viewing area because of the diamond shape of the individual viewing areas, as shown by A, B and C in the left image of Figure 1; outside of these diamond areas, a mixture of the individual images is perceived, leading to the cross-talk problem. To address these problems, a multi-projection technique has been proposed that uses a lens array and many projectors. However, this technique has limited image quality and resolution because it places a projection screen between the projectors and the lens array. To solve these problems, we developed a new system using a Quantized-Diffusion-Angle screen (QDA screen) [3], as shown on the right side of Figure 1. The multiple, different images created by projectors from different directions are diffused into quantized diffusing-angle regions, which yields multiple (different) images in different viewing areas with no limit placed on the distance from the screen.
Figure 1. Left: Observable area of the conventional display [3]. Right: Observable areas of the QDA screen [3].
Even when using a QDA screen, there are two problems that need to be addressed. The first involves projection distance and the second concerns view deformation. First, we describe the projection distance problem and our method of addressing it. In principle, the QDA screen requires narrow-angle projectors because the angle of incidence is limited in each projection area. This requirement is overly restrictive in that the required projection distance becomes longer as the screen size becomes larger. Second, all multi-view display systems share a common problem in that users who stand outside the center area cannot observe geometrically correct images [4]. In this paper, we propose a tiled image method and a perspective transform method to solve these problems.
2. Multi-view display system 2.1 QDA screen The QDA screen is one of the screen types used for projection displays. The structure of our optical device is fundamentally composed of two lenticular lens sheets. When using a QDA screen, multiple, different images created by projectors from different directions are diffused into quantized diffusing-angle regions, yielding multiple, different images in different viewing areas with no limit placed on the distance from the screen. There is no "diamond restriction" on the viewing area as in conventional multi-view displays using a parallax barrier or lenticular lens.
2.2 Tiled image method In this method, an image to be displayed in one viewing area is divided into parts and the parts are generated by different projectors. Figure 2 illustrates the concept of the method when three projectors are used. The left block shows original images in the
center and left areas for purposes of illustration. The left side projector A shows the left half of image 1. The center projector B shows the right half of image 1 and the left half of image 2. The right projector C shows the right half of image 2. In this way, both images are synthesized by fusing the image portions generated by adjoining projectors. This makes it possible to shorten the projector distance to the minimum depth of field as given in the projector specifications, as long as the number of projectors is sufficient to fill the area of the screen.
Figure 2. Concept of the tiled image method.
2.3 Perspective transform method Figure 3 illustrates the concept of the method. The left block shows the difference in views obtained by observers within and outside the center area. Observer A, who is in the center area, gets a frontal view of the images and the display frame looks rectangular. However, observer B, who is left of center, gets a distorted image. This is not a correct view for that observer. To correct the distorted view outside the center area, we use the perspective transform method. The method can generate the image of a plane as it would be seen from an arbitrary viewpoint. Therefore the shape of the original view image is transformed into a rectangle, and the user can obtain a correct frontal view from each viewing area.
Figure 3. Concept of the perspective transform method.
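The correction itself is an ordinary planar perspective (homography) warp. The OpenCV sketch below illustrates the idea in a simplified form; the corner coordinates are placeholders, and a real system would derive them from the screen and viewing-area geometry:

```python
import cv2
import numpy as np

def correct_view(view_image, observed_frame_corners):
    """Warp the image shown to an off-center viewing area so that it appears
    rectangular (a correct frontal view) from that area.

    observed_frame_corners -- 4 x 2 array: where the corners of the display frame
                              appear to that observer (top-left, top-right,
                              bottom-right, bottom-left), in pixel coordinates.
    The example corner values are placeholders.
    """
    h, w = view_image.shape[:2]
    target = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(np.float32(observed_frame_corners), target)
    return cv2.warpPerspective(view_image, H, (w, h))

# Example with placeholder corner positions for an observer left of center:
img = np.zeros((480, 640, 3), np.uint8)
corners = [[40, 30], [600, 0], [600, 480], [40, 450]]
corrected = correct_view(img, corners)
```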
3. Results and Conclusion Figure 4 shows the results of our approach. The left block depicts the prototype of the multi-view display system we constructed. In this implementation, the QDA screen has three observable areas and consequently there are three projectors. The screen is 68.7 cm high and 52.5 cm wide. The three LCD monitors in the upper image of the middle block show the images of projectors A, B, and C from left to right. The bottom image of the middle block shows the images the observer gets in the center and left areas. These results confirm that the method is able to shorten the projector distance for a given screen size; in this case, the distance was shortened from about 180 cm to 90 cm. The right block shows a result obtained using the perspective transform method. The left-side picture shows the original views and the right-side one shows those obtained with our method. It can be seen that the horizontal sides are parallel and that the displayed area is rectangular.
Figure 4. Results of our approach.
We have developed a multi-view display system that uses a QDA screen and multiple projectors. To address the problem of large projector-to-screen separation, we also developed a method we call the "Tiled Image Method". Furthermore, we developed a perspective transform method to address the problem in multi-view displays of users standing outside the center area not being able to observe geometrically correct images. By implementing these methods in a prototype system, we confirmed that these problems were resolved.
References [1] T. Takaya; (2007) Sharp Technical Journal, No. 96, pp.21-23. [2] Sharp Triple Directional Viewing LCDs, http://sharp-world.com/products/device/about/lcd/dual/ [3] T. Kawakami, B. Katagiri, T. Ishinabe and T. Uchida; (2010) Multiple Directional Viewing Projection Display Based on the Incident-Angle-Independent, Diffusion-Angle-Quantizing Technology, Proc. IDW 2010, pp.1479-1482. [4] M. A. Nacenta, S. Sakurai, T. Yamaguchi, Y. Miki, Y.Itoh, Y. Kitamura, S. Subramanian, and C.Gutwin; (2007) “E-conic: a perspective-aware interface for multi-display environments,” Proc. UIST '07, pp.279-288.
The development and usefulness of an automatic physical load evaluation tool Tim Bosch, Reinier Könemann, Gu van Rhijn, TNO Healthy Living, Hoofddorp, The Netherlands Harshada Patel, Sarah Sharples University of Nottingham, Nottingham, UK Tim Bosch, PhD, PO Box 718, NL-2130 AS Hoofddorp, The Netherlands,
[email protected]
Abstract. Recent technology developments present new ways of assessing risk and make it possible to assess physical activities automatically. This paper describes the development of a modular automatic physical load evaluation system. The usefulness of the system was evaluated by different users (SME management and ergonomic experts) and future development plans are proposed.
Keywords: Risk assessment, Human movement tracking, Physical load
1. Introduction The prevention of work-related musculoskeletal disorders (MSDs) has become a national priority in many countries. In some, 40% of the costs of workers’ compensation are caused by MSDs. The assessment of physical workload is one way of identifying risks and preventing the onset of musculoskeletal disorders. Observational risk assessment is probably the most often used approach to evaluate physical workload in order to identify hazards at work and monitor the effects of ergonomic changes [1]. However, studies showed that these observational methods may be subject to inter-observer variability [2] and lead to posture misclassification. In particular, wrist postures appear to be difficult to assess from observations and tended to be significantly underestimated by expert observations. Although observational methods have been shown to be a good alternative to self-reports, observations and subsequent analysis are still time-consuming and thereby expensive to use in SMEs. Recent technology developments present new ways of assessing risk and make it possible to assess physical activities automatically. Ray and Teizer [3] recently developed a system, based on 3D range camera technology, to monitor a worker in motion, recognizing and evaluating global body postures when performing construction work. However, to our knowledge, the automatic evaluation of repetitive upper extremity movements has not been reported before.
In the current paper we introduce a modular automatic physical load evaluation system. Different users (SME management and ergonomic experts) evaluated the usability/usefulness of this system.
2. Modular tool development A modular and flexible software application was developed based on human factors methods and technology (Figure 1). The fundamental technological elements of this workload assessment system were: (1) a motion capture system (i.e. the Xsens MVN suit [4]); (2) a software module converting 3D anatomical landmarks (based on the MVN biomechanical model) into relevant time series of body segment angles, with angles based on definitions provided by relevant guidelines on physical workload (i.e. ISO 11228-3); (3) an algorithm deriving movements and postures from the angle time series; (4) a software module with guidelines on physical load; (5) a visualization element to present the outcomes (i.e. traffic lights for individual body parts, and detailed data on movement frequencies and posture duration) to the different users of the system. All individual elements were connected using middleware (ICE).
Figure 1. Capturing of posture and movements with the Xsens MVN system (left), Digital human model representation of the operator (middle), Workload assessment outcomes (right).
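Steps (3)-(5) of the pipeline can be illustrated with a short sketch that turns an angle time series into a traffic-light rating; the angle limit and the amber/red thresholds are placeholder values, not figures from ISO 11228-3 or the authors' implementation:

```python
import numpy as np

def shoulder_elevation_status(elevation_deg, fs_hz,
                              angle_limit=60.0, amber_frac=0.1, red_frac=0.3):
    """Derive a traffic-light rating from a shoulder-elevation angle time series.

    elevation_deg -- 1D array of shoulder elevation angles (degrees) over the task
    fs_hz         -- sampling frequency of the angle time series
    The exposure measure is the fraction of task time spent above 'angle_limit';
    all three thresholds are placeholders for illustration only.
    """
    angles = np.asarray(elevation_deg, float)
    frac_above = float(np.mean(angles > angle_limit))
    duration_s = len(angles) / fs_hz
    if frac_above >= red_frac:
        light = "red"
    elif frac_above >= amber_frac:
        light = "amber"
    else:
        light = "green"
    return {"light": light, "fraction_above_limit": frac_above,
            "task_duration_s": duration_s}

# Example: one minute of 60 Hz data, mostly below the limit.
angles = np.concatenate([np.full(3300, 30.0), np.full(300, 75.0)])
print(shoulder_elevation_status(angles, fs_hz=60))   # green, ~8% above the limit
```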
The iterative development of the tool was tested in three laboratory trials. Participants performed simulated manual assembly work while their postures and movements were recorded. The development and improvement of the tool was supported by a self-report questionnaire examining common issues, structured observation of participants interacting with the technology, expert heuristic evaluation, and post-task interviews with participants focusing on any negative responses given in the questionnaires.
3. Evaluation of a prototype The evaluation of a prototype of the tool consisted of a field demonstration in a manufacturing company, an evaluation of usefulness by ergonomic experts and a workshop for SME managers. The field demonstration of the tool was performed during real time operations at an office chair manufacturing company in the Netherlands. Three workstations were
chosen (cycle time 1 minute) and at each station three operators were evaluated. For this purpose, continuous registration of movements and posture using the Xsens MVN suit was performed; every operator was measured during 1 hour of operations. The ergonomic experts were shown a demonstration of the assessment tool, with an operator completing a task. Following this, they were asked to complete a questionnaire which collected information on their opinions of the application and its potential usefulness. The questionnaire was composed of both open-ended questions (which captured participants' general views on the application) and Likert-scale rating questions (which focused on specific parts of the application and its usability). The field and expert evaluation was followed by a workshop for SMEs from industry. During the demonstration workshop the tools were evaluated by 22 participants (managers and engineers) from 15 manufacturing companies.
4. Results 4.1 Ergonomics experts The ergonomic experts generally liked the application, in particular the fact that it is easy to use and produces fast, objective assessments of physical load. The majority of the experts also believed that ergonomic assessment of workstation design would be more accurate using the application; however, there were some reservations about the current range of functionality (in terms of the breadth of guidelines used and the omission of force measurements). Most of the participants also thought that, for detailed interpretation of the results, users must have knowledge of ergonomics. However, they acknowledged that, at an overview level, an important benefit of this application is that it can be used to clearly communicate workload issues to managers.
4.2 SME workshop The majority of the 22 participants found the application to be useful. It offers a quick, easy-to-use and objective evaluation of physical load. However they acknowledged the need for a continuing role of an ergonomist in interpreting the results and finding solutions to address high-risk work situations.
5. Conclusion and future plans This short industrial paper describes a novel system to assess physical load in an occupational setting. The system works well and the different users of the system were enthusiastic about its potential. However, there are some improvements that need to be considered based on the results from the different evaluation trials: more
guidelines on physical load (e.g. lifting or carrying) need to be implemented; force measurements need to be included; and a personalized feedback module with feedback on posture and movement might be useful, as well as more detailed information on the major bottlenecks in workstation design. Finally, whereas the Xsens suit provides detailed information, a low-cost and less intrusive system might be worth exploring. The MS Kinect may be a good alternative with advantages over most other devices. Like most computer-vision-based approaches, however, the Kinect is sensitive to variation in body and face appearance; furthermore, both body and face can appear different from themselves due to pose or environmental noise (e.g. lighting changes). Nevertheless, several studies appear to have solved many of these problems rapidly, and a recent study by van Teijlingen [5] showed good results in a direct comparison between the MS Kinect and the Xsens MVN suit. Future tests are planned with assembly and logistics companies to test the system's reliability, robustness and usability in more detail.
Acknowledgements We would like to thank Anti Kolu (Tampere University of Technology, Finland), Sauli Kiviranta and Boris Krassi (VTT Technical Research Centre, Tampere Finland) and Nikos Frangakis (ICCS, Athens Greece) for their assistance in the development of this system within the framework of the ManuVAR project. This work was financially supported by TNO and by the FP7 Programme (FP7/2007-2013) under grant agreement no. 211548 ManuVAR.
References [1] TAKALA E.P.; PEHKONEN I.; FORSMAN M.; HANSSON G.A.; MATHIASSEN S.E.; ET AL. (2010) Systematic evaluation of observational methods assessing biomechanical exposures at work. Scandinavian Journal of Work Environment and Health,36 (1), 3-24 [2] BURDORF A. (1992) Sources of variance in exposure to postural load on the back in occupational groups. Scandinavian Journal of Work and Environmental Health,18, 361-367. [3] RAY SJ AND TEIZER J (2012). Real-time construction worker posture analysis for ergonomics training. Advanced Engineering Informatics, 26(2), 439-455. [4] ROETENBERG D.; LUINGE H.; SLYCKE P. (2008) 6 DOF motion analysis using inertial sensors. In: A.J. Spink, M.R. Ballintijn, N.D. Bogers, F. Grieco, L.W.S. Loijens, L.P.J.J. Noldus, G. Smit, and P.H. Zimmerman (Eds.), Proceedings of Measuring Behavior 2008: 6th International Conference on Methods and Techniques in Behavioral Research (Maastricht, The Netherlands, 26-29 August 2008), 14-15. Wageningen, The Netherlands: Noldus IT. [5] TEIJLINGEN VAN W.; VAN DEN BROEK E.L.; KÖNEMANN R.; SCHAVEMAKER J.G.M. (in press, 2012) Towards sensing behavior using the Kinect.
Innovation in space domain multidisciplinary engineering mixing MBSE, VR and AR
Valter Basso, Lorenzo Rocci, Mauro Pasquinelli, Carlo Vizzi*, Christian Bar*, Manuela Marello*, Michele Cencetti**, Francesco Becherini***
Thales Alenia Space Italia S.p.A. / Sofiter System Engineering S.p.A.* / Politecnico di Torino** / Ortec s.r.l.***
Torino, Italy
[email protected]
Abstract. Thales Alenia Space Italia (TAS-I) is researching new and innovative solutions to constantly improve the integration of the activities of its internal engineering disciplines, oriented to the design and verification of its products during the whole lifecycle, in compliance with the existing information systems in a B2B, extended-enterprise organization (with the customer in the loop). The modeling, analysis and representation of multi-disciplinary data play a main role in the space domain, but also in general engineering processes, due to the variability of the factors involved. TAS-I strongly believes that the use of virtual product technologies would dramatically enhance the current industrial processes and concentrates its R&D activities on building an innovative framework called DEVICE (Distributed Environment for Virtual Integrated Collaborative Engineering). This environment is based on Model Driven Engineering (MDE) methodologies derived from the context of software engineering, where the definition phase of a system model is integrated with its physical and functional simulation in a Service-Oriented Architecture (SOA) during the entire lifecycle. This makes it possible to perform deeper analyses of the system, to verify feasibility, to improve the multidisciplinary system knowledge, and to represent large amounts of data in interactive 2D, 3D or 4D (3D + time), desktop or immersive solutions using Virtual and Augmented Reality.
Keywords: MBSE, Virtual Reality, Augmented Reality, Concurrent Engineering, Collaborative Environment
1. Introduction TAS-I represents a worldwide reference for space development, from navigation to telecommunications, from meteorology to environmental monitoring, from defense to science and observation, and constantly has to deal with multidisciplinary data. This leads to the need to cope with the following issues: − Data definition and collection: the design and development phases play an increasingly key role, due to the growing call for more complex missions alongside reduced development time and budget.
− Data parsing and conversion into suitable formats for graphical representation: a large amount of heterogeneous data should be integrated through a scalable and generic solution, avoiding ad hoc approaches, which are barely maintainable. − Investigation of a clear and complete graphical representation: complex data have to be represented in a clear and exhaustive way, in order to be meaningful both for specialized users and for the general public. These issues can be tackled by means of a distributed environment where Concurrent Engineering (CE) and Virtual and Augmented Reality technologies cooperate. The TAS-I idea represents the evolution of the present concurrent facilities and tools, which are mainly used to assess the feasibility of projects and studies. The TAS-I approach extends this situation by managing not only preliminary studies, but also further phases of a project such as the engineering design, where the conceptual idea is transformed into prototype models, up to the development and qualification of the flight model.
2. DEVICE DEVICE is a TAS-I model-based distributed environment aimed at supporting project teams composed of people belonging to engineering and/or scientific areas and having different background knowledge and skills. In this context, Virtual Reality (VR) and Augmented Reality (AR) become useful technologies for interfacing the user with the available data, providing an effective collaboration means between different disciplines. VR and AR allow direct viewing of data and information that are often difficult to read for those who do not have the right technical background, although they are involved in the design process of a system. DEVICE has the following objectives: − To facilitate the key multidisciplinary design phase − To facilitate the integration/utilization of large amounts of data and to allow customized, discipline-specific views of these data (e.g. 2D, 3D, or filtered by parameters) − To be as COTS-independent as possible − To ease the implementation and control of changes and the maintenance of data consistency − To allow synchronous and asynchronous user interaction, from stand-alone desktops to dedicated facilities (e.g. CDF or Cave) − To avoid changing the tools used by the disciplines, by creating adapters − To allow disciplines to optimize their internal processes to be compliant with the real system needs
2.1 VR/AR for collaborative engineering
Figure 2. Functional Modeling: example of product composition
Model-Based System Engineering (MBSE) methodologies show promising methods
to manage the increasing system complexity, making integration easier across modeling domains and also improving Collaborative Design patterns (e.g. through the traceability of requirements). MBSE is the term currently used to denote the transition from system engineering (SE) data management through documents (e.g. specifications, technical reports, interface control documents) to standards-based, semantically meaningful models that can be processed and interfaced by engineering software tools. MBSE methodologies enable a smoother use of VR and AR in support of engineering teams. The core of an MBSE approach is the so-called System Model, which is the collection of different models, handled by a Web Editor in the DEVICE case and compatible with current data exchange standardization efforts. This should be less error-prone than the traditional document-centric view, still widely used for system design. As an example, the topological information of a product is associated with numerical values corresponding to physical properties of the object. Such data may be derived from a dedicated interface between the Web Editor and a CAD application, and then converted into a format readable by the VR applications [see Figure 2]. The system model provides a meaningful and representative view of the object itself (e.g. its state, requirements and verification state, functions). The basic idea of functional modeling is that the physical object results from the composition of its functions with its topological definitions (i.e. structure and interface) [see Figure 1]. A bottom-up approach is used to model specific aspects while, at the same time, a top-down model is defined to rapidly assess the system behavior (e.g. during a trade-off phase, when the degree of approximation is high). The main objective is to find a methodological basis for modeling physical systems within the structured framework of the system model [see Figure 3]. While MBSE methodologies provide the necessary tools to formally associate the possible aspects of a given system, VR and AR allow an extended definition of the system architecture, ensuring greater availability of information and access to the most up-to-date representation of the system. In the last ten years TAS-I has put considerable effort into researching and developing VR and AR technologies in its Virtual Reality Laboratory (VR-Lab) in order to obtain an ever more faithful representation of reality. In this way it is possible to avoid the construction of a real mock-up and to follow the trend towards the reduction of costs, in particular in the aerospace industry, where the complexity of the systems involved, the high number of changes to be handled and the hostile operational scenarios (difficult to simulate on Earth) all limit the physical prototypes that can be built. VERITAS (Virtual Environment Research in TAS-I) is a 4D framework (space and time variable) that allows the generation of virtual environments and scenarios, usually not reproducible on Earth, setting realistic scenes where the system can operate [see Figure 4]. The Virtual Environment also makes it possible to avoid situations that would be unsafe for the users (e.g. evaluating the radiation dose absorbed by the astronauts or the spacecraft during an interplanetary mission).
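The system model described above links a product's topology, physical properties and functions in a single structure. Purely as an illustrative sketch (the class and field names below are hypothetical and are not the DEVICE data model), such a composition could be represented as follows:

```python
from dataclasses import dataclass, field
from typing import List, Dict

@dataclass
class Function:
    """A function the element fulfils (e.g. 'provide thermal control')."""
    name: str
    verified: bool = False

@dataclass
class SystemElement:
    """One node of a hypothetical system model: topology plus physical
    properties plus the functions composed with it, following the
    functional-modeling idea sketched in the text."""
    name: str
    properties: Dict[str, float] = field(default_factory=dict)   # e.g. mass [kg]
    functions: List[Function] = field(default_factory=list)
    children: List["SystemElement"] = field(default_factory=list)

    def total_mass(self) -> float:
        """Bottom-up aggregation over the product tree."""
        return self.properties.get("mass_kg", 0.0) + sum(
            c.total_mass() for c in self.children)

# Example: a tiny product composition
panel = SystemElement("solar_panel", {"mass_kg": 12.0},
                      [Function("generate power")])
bus = SystemElement("spacecraft_bus", {"mass_kg": 150.0},
                    [Function("provide structure", verified=True)], [panel])
print(bus.total_mass())   # 162.0
```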
Figure 4. VR integration through the system model
Figure 3. Integrated system framework
Figure 5. Example of VERITAS application
The simulation capabilities are also used in TAS-I facilities to support AIT (Assembly Integration and Test) activities, in particular for planning and defining assembly integration procedures, or for training purposes. Procedures can be created and validated in VR and then made available in AR format to guide users during hands-free execution of assembly tasks. The use of virtual models also brings advantages when alternative options need to be analysed or when several disciplines/engineering teams need to cooperate while located in different facilities, which also enhances TAS-I processes.
3. Conclusions DEVICE is an innovative, highly technological framework whose aim is to combine an MBSE environment (back-end) with Virtual and Augmented Reality (front-end), while also developing new applications and tools to support internal research, disciplines and engineering teams at TAS-I. This approach makes it possible to improve the quality of the whole engineering process, from the collection and validation of the requirements to the final realization of the product itself. It allows, first, having an integrating tool in all the decision-making phases of a project, by supporting engineering tasks and other well-known instruments (e.g. CAD) and overcoming their limitations; and second, realistically reproducing hostile, extra-terrestrial environments, thereby supporting disciplines in properly understanding and trading off operational behavior under extreme conditions. In a nutshell, this collaborative environment is a centre of attraction for improving knowledge, technical skill and know-how. The TAS-I objective is to keep improving this environment in several directions: by implementing new features and applications according to the needs of the engineering teams and allowing natural interaction with them through specific devices, and by involving a higher number of disciplines in order to achieve as complete a vision as possible of the simulated environment. This approach should also be beneficial for other domains in which multidisciplinary specialists are engaged, e.g. scientific cooperation (medical science, weather science, etc.), civil protection and other industries.
References [1] BECHERINI, F.; CENCETTI, M.; PASQUINELLI, M.; (2012) System Model Optimization through Functional Models Execution: Methodology and Application to System-level Analysis. CoMetS'12, Toulouse (FR). [2] BASSO, V., PASQUINELLI, M.; ROCCI, L.; BAR, C.; MARELLO, M.; (2010) Collaborative System Engineering Usage at Thales Alenia Space Italia, System and Concurrent Engineering for Space Applications. SECESA 2010, Lausanne (CH).
Integrating production scheduling with Discrete Event Simulation on a manufacturing line within the Virtual Factory Framework
L. Usatorre, S. Alonso, U. Martinez de Estarrona, A. Díaz de Arcaya
TECNALIA Research & Innovation, Parque Tecnológico de Álava, C/Albert Einstein 28, 01510 Miñano, Spain
e-mail: {luis.usatorre, silvia.alonso}@tecnalia.com
Abstract This paper proposes a virtual factory modelling tool that validates the impact of a modification of the production schedule through discrete event simulation. The tool will support designers and decision makers in rapid prototyping and reconfiguration of schedule designs. In this way, a high-quality customer service is provided and customers are helped to position themselves and their new products in the current complex market. The tool has been developed within the European project “Virtual Factory Framework” (VFF), which focuses on developing an integrated framework to implement next-generation virtual factories.
Keywords: production, simulation, semantic, maintenance.
1. Introduction Traditional methods are extremely time-consuming and unable to support decision making. Simulation can help companies make strategic business decisions for the design and operation of their production lines, and simulation software can potentially be a strategic decision-making tool for process redesign and continuous improvement. Notwithstanding these proposed alternatives, the lack of integration tools between enterprise information systems yields large time losses and wasted resources in design or redesign activities related to manufacturing processes. If all the professionals involved in the design or redesign process use the same ontology to describe their data, integration of the data is more cost-efficient. This work follows this rationale and proposes to integrate discrete event simulation with a preliminary production schedule covering different types of machines, parts, consumptions and cycle times, through a semantic database, in order to create a whole production-line simulation in terms of quantity produced, time to reach a fixed production, energy consumed and levels of storage. An industrial case study is used to validate the tool.
2. Virtual Factory Framework (VFF) The Virtual Factory Framework consists of an integrated simulation environment that considers the factory as a whole and provides advanced planning, decision support and validation capabilities. VFF promotes major time and cost savings while improving collaborative design, management, (re)configuration and evaluation of new or existing facilities. This requires the capability to simulate dynamic complex behaviour over the entire life cycle of the factory that is considered as a complex and long living product [6]. This framework lies on four key pillars: (I) Reference Model, (II) Virtual Factory (VF) Manager, (III) Functional modules and (IV) Integration of Knowledge. The module presented in this manuscript belongs to Pillar IV. The EUVEDES module is a discrete event simulator to simulate production chains and to evaluate the performance of the factory. It is driven by the VF Manager through the interaction with other modules that access and modify the VFDM [6].
3. Discrete Event Simulator (EUVEDES) Discrete Event Simulation (DES) infers the behaviour of the real manufacturing process by using a mathematical model of the process. The input parameters are related to the process itself, such as the cycle time of the machines to produce each kind of piece, the mean time between failures (MTBF) of the machines and their mean time to repair (MTTR). This module aims at integrating analytical modelling for non-complex elements with simulation-based modelling, taking into account the interaction among the many parts of the production system. Discrete mathematical models are used to represent production processes and logistics, including downtimes for preventive maintenance. Thus, the total production volume can be accurately predicted. In order to attain a more realistic result, the EUVEDES tool is driven by three key factors: (1) Scheduling of pieces entering the production chain: the flexibility of the module is given by the possibility of entering a production schedule. In real manufacturing lines not all pieces have the same production time on a given machine, so it is deemed essential to model this feature in the simulator. (2) Maintenance operations and machine failure times (MTBF and MTTR). (3) Approximate energy consumption of the machines: the tool renders an estimate of the consumption of the whole line 1) when machines are working and 2) when machines are idle. By considering these two estimated figures, manufacturers can increase their productivity, minimise maintenance operation costs and enhance the corporation’s competitiveness.
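To make the role of these inputs concrete, the following minimal sketch (not the EUVEDES implementation; the machine parameters, failure model and energy figures are illustrative assumptions) simulates a single machine processing a scheduled sequence of parts with per-part cycle times, random breakdowns drawn from MTBF/MTTR, and a simple working/idle energy model:

```python
import random

def simulate_machine(schedule, cycle_times, mtbf, mttr,
                     p_work_kw=5.0, p_idle_kw=1.0, seed=0):
    """Simulate one machine processing `schedule` (a list of part types).

    cycle_times: dict part type -> processing time [min]
    mtbf, mttr:  mean time between failures / to repair [min] (exponential)
    Returns makespan [min], throughput [parts] and energy [kWh].
    """
    rng = random.Random(seed)
    t = 0.0                       # simulation clock [min]
    busy = 0.0                    # accumulated processing time [min]
    next_failure = rng.expovariate(1.0 / mtbf)
    for part in schedule:
        remaining = cycle_times[part]
        while remaining > 0:
            if t + remaining < next_failure:     # part finishes before failure
                t += remaining
                busy += remaining
                remaining = 0.0
            else:                                # breakdown interrupts the part
                worked = next_failure - t
                busy += worked
                remaining -= worked
                repair = rng.expovariate(1.0 / mttr)
                t = next_failure + repair        # machine down during repair
                next_failure = t + rng.expovariate(1.0 / mtbf)
    idle = t - busy                              # idle/down time [min]
    energy_kwh = (busy * p_work_kw + idle * p_idle_kw) / 60.0
    return t, len(schedule), energy_kwh

# Example: a schedule mixing two part types with different cycle times
makespan, parts, energy = simulate_machine(
    schedule=["A", "B", "A", "A", "B"] * 20,
    cycle_times={"A": 3.0, "B": 5.0},
    mtbf=240.0, mttr=15.0)
print(f"{parts} parts in {makespan:.1f} min, {energy:.1f} kWh")
```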
3.1 Outputs/Inputs The EUVEDES module calculates different parameters and graphs depending on the input variables introduced. The outputs produced by the EUVEDES module are 1) efficiency of the production system (time to reach a fixed production); 2) resource utilization (idle times of the machines); 3) levels of storage (production volumes or throughput of the simulated production line); 4) time to produce a fixed production volume; and 5) energy consumption of the line. All these outputs are driven by several inputs specified by the use case at hand. There are two types of inputs: automatic and manual. Automatic inputs come directly from the Virtual Factory Manager Data Model through a semantic connector (production schedule, types of parts, cycle time per part and per machine), while the manual inputs are selected by the user during the simulation (MTTR, MTBF, energy consumption, ...). It is important to bear in mind that the simulation itself is also an important way to understand a process, so the user must have the freedom to modify data.
3.2 Connector The VF Data Model can be regarded as the core of the framework. It assembles a common data model to which all the VF modules have to adhere, and it defines the procedures and formats in which the data exchanged between the VF Manager and the VFF modules are structured. This enables broad interoperability between modules and removes the need for duplicate data and stand-alone processing. A specific connector has been developed for the communication between the VF Manager and the EUVEDES module. The connector obtains the data required by the module from the VF Manager; the output works in the opposite direction: the EUVEDES module generates the agreed data and the connector places the results obtained from the simulation in the VF Manager.
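As an illustration of this two-way exchange (a sketch only; the class, method and field names below are hypothetical and are not the actual VF Manager API), a connector could look like this:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SimulationInputs:
    schedule: List[str]                 # sequence of part types to produce
    cycle_times: Dict[str, float]       # part type -> cycle time [min]

@dataclass
class SimulationResults:
    makespan_min: float
    throughput: int
    energy_kwh: float

class EuvedesConnector:
    """Hypothetical connector: pulls inputs from a VF-Manager-like store and
    pushes simulation results back (the store is mocked as a dict here)."""

    def __init__(self, vf_store: dict):
        self.vf_store = vf_store

    def fetch_inputs(self) -> SimulationInputs:
        data = self.vf_store["production_data"]
        return SimulationInputs(data["schedule"], data["cycle_times"])

    def upload_results(self, results: SimulationResults) -> None:
        self.vf_store["simulation_results"] = results.__dict__

# Usage with a mocked data store
store = {"production_data": {"schedule": ["A", "B"],
                             "cycle_times": {"A": 3.0, "B": 5.0}}}
conn = EuvedesConnector(store)
inputs = conn.fetch_inputs()
conn.upload_results(SimulationResults(makespan_min=8.0, throughput=2, energy_kwh=0.7))
print(store["simulation_results"])
```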
4. HOMAG use case In order to assess the feasibility of the developed module in a realistic application scenario, a use case involving HOMAG (considered one of the world’s market leaders in woodworking machinery) has been performed. The purpose of the HOMAG industrial use case is to demonstrate the possibilities, limitations and benefits of an integrated, holistic planning and engineering process for a new production line. It deals with the management of KPIs and requirements, planning activities, as well as the visualization and simulation of manufacturing lines. To meet these objectives, the EUVEDES module is in charge of simulating some of the optimized requirements, such as factory performance planning.
5. Concluding Remarks The ability of a discrete event simulation for manufacturing lines to read the production schedule automatically, combined with the user’s ability to modify machine uptimes and other characteristics, makes it possible to obtain useful values, e.g. the time to reach a fixed production and the energy consumption of the whole manufacturing line. Although there are similar tools to simulate manufacturing lines, the innovation of EUVEDES lies in two aspects: a) the friendliness of the tool, focused on SME companies, and b) the connection to the Virtual Factory Framework. This connection allows EUVEDES to collect the data needed for the simulation, after which the results are uploaded to the VF Manager.
Acknowledgements The research activities reported in this paper are funded by the European Commission under the project “VFF - Holistic, extensible, scalable and standard Virtual Factory Framework”, FP7-NMP-2008-3.4-1. The authors would also like to thank the VFF project partners for the opportunity to participate in the project. VFF Project Coordinator: Marco Sacco from the Institute of Industrial Technologies and Automation (ITIA).
References [1] L. Mönch, P. Lendermann, L.F. McGinnis, A. Schirrmann, (2011), “A survey of challenges in modelling and decision-making for discrete event logistics systems”, Computers in Industry, Vol. 62, pp. 557-567. [2] M. Semini, F. Hakon, S. Jan Ola, (2006), “Applications of Discrete-Event Simulation to Support Manufacturing Logistics Decision-Making: A Survey”, Simulation Conference, WSC 06, Proceedings of the Winter, pp. 1946-1953. [3] M. d’Aquin, N.F. Noy, (2012), “Where to publish and find ontologies? A survey of ontology libraries. Web Semantics: Science, Services and Agents on the World Wide Web, Vol 11, pp. 96-111. [4] J. Bathelt, D.P. Politze, N. Jufer, A.K. Jönsson, A., (2010) “Factory of the Future enabled by the Virtual Factory Framework (VFF)”, 7th International DAAAM Baltic Conference, "INDUSTRIAL ENGINEERING", Tallinn, Estonia. [5] Marco Sacco, Paolo Pedrazzoli, Walter Terkaj, (2010), “VFF: Virtual Factory Framework”, ICE 2010, Advances in Concurrent Engineering. [6] H. VFF, extensible, scalable and standard Virtual Factory Framework (FP7-NMP-2008-3.41,228595), WWW page. http://www.vff-project.eu/.
Virtual Factory Framework – HOMAG Industrial Use Case
OMAR ABDUL-RAHMAN
Fraunhofer Institute for Manufacturing Engineering and Automation – IPA, Stuttgart, Germany
[email protected]
GÜNTHER RIEXINGER
Fraunhofer Institute for Manufacturing Engineering and Automation – IPA, Stuttgart, Germany
[email protected]
ULRICH DOLL
HOMAG Holzbearbeitungssysteme GmbH, Schopfloch, Germany
[email protected]
Abstract: This paper describes an industrial use case within the scope of the European project Virtual Factory Framework (VFF) for HOMAG, a manufacturer of machines and equipment for the woodworking industry. The overall goal of this industrial use case is to improve and enable the holistic planning and implementation of manufacturing lines, from the requirements analysis to the designed, optimized and finally simulated planning status. To reach this goal, several functional modules are developed and integrated within the VFF to support the planning phases of this use case.
Keywords: Virtual Factory Framework, holistic planning, simulation.
1. Introduction to VFF and HOMAG In order to be sustainably successful in the manufacturing of machines and equipment, the efficiency and quality of all processes for planning and implementing production lines have to be improved. Therefore, industrial use cases are developed within the EU project “VFF - Holistic, extensible, scalable and standard Virtual Factory Framework”, FP7-NMP-2008-3.4-1. The objective of VFF is the research and implementation of a new conceptual framework designed to implement the next-generation Virtual Factory, constantly synchronized with the real production. One of the VFF industrial use cases is developed with the world’s market leader in woodworking machinery, “HOMAG Holzbearbeitungssysteme GmbH”. HOMAG currently employs more than 1500 people (250 in R&D). Targeting the furniture industry and its suppliers as well as cabinet shops and interior fitters, the core competences of HOMAG lie in machinery for sizing and edge banding, soft forming and post forming, as well as stationary CNC machines. HOMAG is part of the HOMAG Group AG, whose product range covers the complete process chain from sawing of the raw material to robot-based handling, assembly and packaging of the machined parts. HOMAG has broad expertise in modular machine design and customer-specific development of machinery. Next to the stand-alone machine business, the sale and distribution of complete production lines made up of several machines together with handling and transportation is becoming more and more important. The use case scenario “Next Factory” is intended to demonstrate the possibilities, limitations and benefits of an integrated, holistic planning and engineering process for a new production line. It deals with the management of KPIs and customer requirements, planning activities concerning the performance of the production line, as well as the visualization and simulation of HOMAG manufacturing lines.
2. Scope and Objectives of the “Next Factory” Scenario The overall objective of HOMAG in the Virtual Factory Framework (VFF) project is the improvement of the planning and implementation of manufacturing lines, from the requirements analysis to the designed, optimized and finally simulated planning status. For this, planning performance in terms of planning time and planning effort shall be improved by using an integrated planning framework with optimized modules for each planning task. Planning quality shall also be improved by a clear definition of the requirements and their continuous monitoring throughout the complete planning and implementation process, even into factory operation. In this way, a closed loop between requirements and planning results shall be built up. The continuous exchange of information based on the factory data model shall also allow for better feedback between the different planning instances and shall avoid discontinuities in the information flow. Finally, the VFF HOMAG “Next Factory” scenario shall help to reduce the variety of different technical solutions by offering a best-practice knowledge base and the re-use of previous solutions. These objectives will be achieved by setting up an optimized planning process based on the Virtual Factory Manager and newly developed or attached functional modules covering the following planning tasks: • Support for external (e.g. from the customer) and internal requirements management, with identification of suitable combinations of requirements; • Reuse of similar past manufacturing line configurations; • Design and optimization of manufacturing lines with respect to defined performance indicators; • Generation of production operation steps concerning customer orders as well as production requirements, and fine planning of the production sequence within the HOMAG manufacturing line; • Evaluation of the manufacturing lines using discrete event simulation based on the production sequence, with respect to defined criteria (e.g. throughput and energy consumption); • Fine calculation of KPIs referred to the cycle time using hardware-in-the-loop; • Interoperation of the various modules (intra-process and inter-process interoperability). The following table shows the different functional modules needed to support the different phases, as well as their developers and goals for the HOMAG scenario.
Module | Acronym | Developer | Goal
Requirement Management and Planning | RMP | ETHZ | Capturing requirements, describing rough solutions, linking KPIs and defining target values
Factory Performance and Process Planning | FP³ | FhG-IPA | Production Resource Planning, Performance of the Production Line (calculation of defined performance indicators of the manufacturing line)
Fine Production Planning | FPS | PSI | Generation of production operation steps concerning assigned customer orders and planning of its production sequence
Discrete Event Simulation | DES | Tecnalia (EUVE) | Calculation of throughput and energy consumption using DES
Closed Loop based on Process Automation Designer | PAD | Tecnalia (Fatronik) | Fine calculation of KPI referred to the cycle time in one machine using HIL
Knowledge Repository and Knowledge Association Engine | KR–KAE | LMS | Retrieval and Reuse of past solutions based on sample past manufacturing lines
Table 1. Scenario Module Overview and Planning Goals
2.1 HOMAG “Next Factory” Scenario Planning Phases The HOMAG scenario consists of four different phases for the planning of manufacturing lines. The first phase is the management of key performance indicators and requirements to capture the functional requirements and to identify key performance indicators for tracing their fulfillment. This phase includes two different activities, starting with a clear understanding of the strategic goals. In a next step, the customer requirements and demands related to the product to be produced as well as the HOMAG specific requirements will be identified and managed. Based on the results, the needed performance indicators will be defined and thus an accurate performance measurement frame for the subsequent steps is set up. In this way internal and external functional requirements may be captured and their fulfillment could be traced. The requirements and the needed performance indicators of the new manufacturing line are compared with the requirements and the performance indicators of manufacturing lines produced in the past. The most similar past line configurations are retrieved utilizing similarity measurement algorithms and inference rules. The layout of the retrieved manufacturing lines can be used as an additional input for the activities of the second phase. Based on the results of the first phase, the manufacturing line down to the single machine and the machine units will be designed and optimized. A rough process list sets the basis for a detailed design of machines and their production units. In a further step, the performance of the manufacturing line will be calculated bottom up by taking into account the defined performance indicators. Finally the manufacturing line gets optimized. The planning and optimization of the manufacturing line units as well as the calculation of their performance belong to the second phase production design. After a successful planning and optimization of
the manufacturing line, the production operation steps concerning assigned customer orders and production requirements for the HOMAG manufacturing line are generated. Based on these orders, the production sequence can be planned and the manufacturing line can be evaluated using the discrete event simulation as well as the closed-loop simulation. The planned production sequence will be part of the input for the discrete event simulation. The planning of the production sequence and the evaluation of the line are the activities of the third planning phase, simulation. The last phase is factory operation. Within this phase, the real data of the manufacturing line will be used to monitor the defined key performance indicators (KPIs). Finally, a fine calculation of the cycle time concerning the handover machine using hardware-in-the-loop will be performed.
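The retrieval of similar past line configurations mentioned in the first planning phase relies on similarity measurement between requirement/KPI profiles. As a purely illustrative sketch (the indicator names, weights and line identifiers are assumptions, not the KR–KAE implementation), such similarity-based retrieval could be expressed as:

```python
import math

def similarity(req_a, req_b, weights=None):
    """Weighted cosine similarity between two requirement/KPI profiles,
    given as dicts mapping indicator name -> target value."""
    keys = set(req_a) | set(req_b)
    w = weights or {k: 1.0 for k in keys}
    dot = sum(w.get(k, 1.0) * req_a.get(k, 0.0) * req_b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(w.get(k, 1.0) * req_a.get(k, 0.0) ** 2 for k in keys))
    nb = math.sqrt(sum(w.get(k, 1.0) * req_b.get(k, 0.0) ** 2 for k in keys))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_similar(new_requirements, past_lines, top_k=3):
    """Rank past line configurations (name -> profile) by similarity."""
    ranked = sorted(past_lines.items(),
                    key=lambda item: similarity(new_requirements, item[1]),
                    reverse=True)
    return ranked[:top_k]

# Example with made-up indicators
new_line = {"cycle_time_s": 12.0, "availability": 0.95, "output_per_shift": 1800}
past = {
    "line_2009_a": {"cycle_time_s": 14.0, "availability": 0.93, "output_per_shift": 1600},
    "line_2010_b": {"cycle_time_s": 30.0, "availability": 0.90, "output_per_shift": 700},
}
print(retrieve_similar(new_line, past, top_k=1))
```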
3. VFF Use Case Validation HOMAG defined a number of KPIs to measure the fulfillment of the objectives. The first group of KPIs describes the technical features and attributes of the production line itself. These KPIs correspond to the requirements of the customers and must be fulfilled by the planned manufacturing line. The performance indicators defined by the customer requirements of the manufacturing lines are, e.g., the cycle time, technical availability and output of manufactured parts. The performance indicators concerning the manufacturing line should improve, since VFF enables HOMAG to design a better technical solution thanks to the holistic approach. Furthermore, the VFF framework will improve the planning process significantly; therefore, KPIs such as planning time, cost and quality will also demonstrate the success of the project.
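The line-level KPIs named above are related by simple formulas. As a hedged illustration (the formulas follow common practice, and the numbers are invented, not HOMAG figures), technical availability and output can be estimated as:

```python
def technical_availability(mtbf_min, mttr_min):
    """Classical steady-state availability: uptime / (uptime + downtime)."""
    return mtbf_min / (mtbf_min + mttr_min)

def output_per_shift(cycle_time_s, shift_min, availability):
    """Parts produced in one shift, assuming one part per cycle."""
    return int((shift_min * 60.0 / cycle_time_s) * availability)

# Example with invented figures
a = technical_availability(mtbf_min=240.0, mttr_min=15.0)   # ~0.94
print(a, output_per_shift(cycle_time_s=12.0, shift_min=450.0, availability=a))
```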
References and Acknowledgements The research activities conducted in this paper are funded by the European Commission under the Project: “VFF - Holistic, extensible, scalable and standard Virtual Factory Framework”, FP7-NMP-2008-3.4-1. The Project Partners in the VFF HOMAG Scenario, who contributed to the Scenario Development and Scenario Description, are: Fraunhofer Institute IPA, Germany with Omar Abdul-Rahman, Günther Riexinger and Axel Bruns, Swiss Federal Institute of Technology Zürich (ETHZ), Switzerland with Daniel Patrick Politze, PSI Production GmbH, Germany with Heinrich Weiß, TECNALIA, (Vitoria Alava) Spain with Luis Usatorre and Silvia Alonso Arin, TECNALIA, (San Sebastián) Spain with Mildred J. Puerto, Josu Larrañaga Leturia and Jon Agirre Ibarbia, Laboratory for Manufacturing Systems and Automation (LMS), Greece with Kostas Efthymiou. VFF Project Coordinator: Institute of Industrial Technologies and Automation (ITIA) with Marco Sacco.
VFF Industrial Scenario: the COMAU case study
M. Sacco1, W. Terkaj1, C. Redaelli1, S. Temperini2, S. Sadocco2
1 ITIA-CNR, via Bassini 15, Milano, Italy
[email protected], [email protected], [email protected]
2 COMAU, via Rivalta 49, Grugliasco, TO, Italy
[email protected], [email protected]
Abstract. Virtual Factory Framework is a research project funded by the European Commission that aims at realizing a Holistic, extensible, scalable and standard Virtual Factory Framework. After completing the collection of requirements from the industrial partners and the development of software tools to meet these requirements, the project is currently dealing with the implementation of the industrial demonstration scenario. In particular, the COMAU industrial case is focused on a large industrial company and the possibilities offered by new technologies, communication protocols and simulation tools to support the business processes.
Keywords: Virtual Factory Framework (VFF), Discrete Event Simulation, Automotive Industry, Case Study, Interoperability
1. Introduction Virtual Factory Framework (VFF) is a European research project involving many industrial partners that differ in size, needs, products and market sector [1]. The Virtual Factory Framework (VFF) can be defined as “An integrated collaborative virtual environment aimed at facilitating the sharing of resources, manufacturing information and knowledge, while supporting the design and management of all the factory entities, from a single product to networks of companies, along all the phases of their lifecycles” [2]. Since the final goal of VFF is to improve the performance of real factories, the cooperation of industrial companies was necessary to define demonstration scenarios that aim at testing and validating the framework: 1. Factory Design and Optimisation in the machining sector, with the cooperation of the industrial partners Compa S.A. and Ficep S.p.A. 2. Factory Ramp-up and Monitoring phases in the automotive and aerospace sectors, with the cooperation of the industrial partners Volkswagen Autoeuropa and Alenia Aeronautica S.p.A. 3. Factory Reconfiguration and Logistics in the automotive and white-goods sectors, with the cooperation of the industrial partners Audi Hungaria Motor Kft. and Frigoglass S.A.I.C.
4. The final scenario is named Next Factory and aims at demonstrating the applicability of the VFF over the entire factory lifecycle. This integrated scenario focuses on the wood-working and automotive sectors, represented by HOMAG AG and COMAU Powertrain S.p.A. This paper presents the COMAU industrial case, starting from the requirements and the current situation of the company. Then the software tools adopted during the project are analyzed. Finally, the application of the VFF solution to a specific pilot case is described.
2. COMAU industrial case COMAU is a worldwide company, a leader in sustainable automation and service solutions. With more than 40 years of experience, COMAU originates from a merger of small manufacturing companies in Turin (Italy) and nowadays runs its business in 13 countries. The COMAU industrial use case in VFF deals with the design, implementation and monitoring of business processes for a production system that is sold to a customer operating in the automotive market. The production system consists of one or more manufacturing and/or assembly lines decomposed into resources (both machines and operators). The design, implementation and monitoring processes are complex since several activities and actors are involved and have to cooperate. The complexity of the problem is highlighted in the “As Is” situation by describing the business processes to be addressed by COMAU: • Proposal: COMAU receives a bid inquiry from a customer and works on one or more technical and commercial bids. Several departments are involved in bringing in the order and consequently different documents are generated. One of the most important goals of the Proposal phase is to create alternative solutions for the customer, in order to increase the chance of winning the order. Therefore it is necessary to improve how the data are managed and shared among the COMAU departments. • Design and Development: after receiving the order from the customer, COMAU starts working on the specifications and the final design of the production system. This phase is technically more detailed than the Proposal, but the same actors are involved. Concurrent design is fundamental to continuously updating the project design. • Build and Install: after an agreement between both parties, COMAU orders parts and components to assemble the production system and then installs it at the customer’s site. • Run and Monitor: the production line starts working. From the beginning it is important to monitor the status of the line and compare the expected and actual performance. Data retrieval during monitoring is of key importance. • Performance Improvement: the data coming from the shop floor are analysed and exploited to improve the performance of the production system by implementing reconfigurations. Currently COMAU does not manage the project of a production line in a concurrent way, since each process is characterized by different requirements and deadlines, thus leading to inefficiencies. COMAU aims at improving the efficiency and effectiveness of its processes thanks to the interoperability between the software tools that are integrated in VFF [3] to address specific problems while referring to a common data model [4]. The software tools integrated into VFF become Virtual Factory (VF) modules that work on a shared Data Repository.
3. VF Modules for the COMAU industrial case Four VF modules have been adopted to support the COMAU industrial case: • GIOVE Virtual Factory • ARENA Simulation Module • Design Synthesis Module (DSM) • Dysfunction Analysis Module (DAM) GIOVE Virtual Factory (GIOVE-VF) [5] is a 3D virtual environment providing a friendly interface to support the design of a production system while evaluating the layout, the transport system and the allocation of operators. GIOVE-VF will support the Proposal and Design processes. Discrete Event Simulation (DES) is a common solution to evaluate the performance of a production system [6]. The commercial simulation software tool ARENA has been integrated into VFF by developing the ARENA Simulation Module [7], which can be used to estimate the main performance indicators such as throughput, resource utilization, average system time, etc. The cooperation between GIOVE-VF and ARENA helps to reduce the time needed to carry out the system reconfiguration and evaluation loop, thus enabling more alternative solutions to be designed and submitted to the customer in the same time period. The Design Synthesis Module (DSM) [8] allows the current configuration of the production line, machines and components to be shared between designers. DSM connects to the shared Data Repository and to external databases to retrieve detailed information regarding machine components. Access to the data repository makes it possible to reduce inefficiencies related to the generation and sharing of documents. DSM will support the Proposal, Design and Performance Improvement processes. The Dysfunction Analysis Module (DAM) receives data from the monitoring system of the shop floor and supports the analysis of the production system performance during the Performance Improvement process, in particular during the ramp-up phase. The data extracted by DAM will be compared to the estimates available in DSM. The ramp-up phase will be shortened thanks to an analysis of failures and malfunctioning of the line.
4. COMAU pilot case The VFF applied to the COMAU industrial case will be validated on a specific pilot case derived from the FIAT production plant in Bielsko-Biala (Poland), which has recently received the gold medal in World Class Manufacturing and where engines and cylinder heads are produced. This plant is one of the most important mechanical FIAT plants, where COMAU has been a supplier for several years. The pilot case deals with the production of cylinder heads and consists of a machining line and an assembly line. COMAU has supplied 100% of the assembly line, whereas in the machining line non-COMAU machines can also be found (e.g. washing machines, and auxiliary machines such as seat and guide press machines). The machining line consists of modern single-spindle machining centres and is highly automated. For both lines it is possible to gather data from the monitoring system. Attention will be focused on the assembly line, which is characterized by both automated and manual stations. The line can be decomposed into several sub-lines and was optimized for logistics operations. The whole assembly line of the engine is quite large; therefore its performance was evaluated by developing three separate simulation models for the sub-systems producing the short block, the long block and the cylinder head.
Acknowledgements The research reported in this paper has been funded by the European Union Seventh Framework Programme (FP7/2007-2013) under the grant agreement No: NMP2 2010-228595, Virtual Factory Framework (VFF).
References [1] VFF, Holistic, extensible, scalable and standard Virtual Factory Framework (FP7-NMP2008-3.4-1, 228595). [Online]. http://www.vff-project.eu/ [2] Sacco M, Pedrazzoli P, Terkaj W (2010) VFF: Virtual Factory Framework. Proceedings of ICE - 16th International Conference on Concurrent Enterprising, Lugano, Svizzera. [3] Sacco M, Dal Maso G, Milella F, Pedrazzoli P, Rovere D, Terkaj W (2011) Virtual Factory Manager. Lecture Notes in Computer Science, 2011, Volume 6774/2011, 397-406. Springer. [4] Terkaj W, Pedrielli G, Sacco M (2012) Virtual Factory Data Model. Proceedings of OSEMA 2012 Workshop, 7th International Conference on Formal Ontology in Information Systems, Graz, Austria, July 24-27, 2012. [5] Viganò GP, Greci L, Mottura S, Sacco M (2011) GIOVE Virtual Factory: A New Viewer for a More Immersive Role of the User During Factory Design," in Digital Factory for Human-oriented Production Systems, L., Redaelli, C., Flores, M. Canetta, Ed.: Springer, 2011, pp. 201-216. [6] Pedrielli G, Sacco M, Terkaj W, Tolio T (2012) Simulation of complex manufacturing systems via HLA-based infrastructure. Journal Of Simulation. [7] Terkaj W, Urgo M (2012) Virtual Factory Data Model to support Performance Evaluation of Production Systems. Proceedings of OSEMA 2012 Workshop, 7th International Conference on Formal Ontology in Information Systems, Graz, Austria, July 24-27, 2012. [8] Hints R, Vanca M, Terkaj W, Marra E, Temperini S, Banabic D (2011) A Virtual Factory tool to enhance the integrated design of production lines. Proceedings of DET2011 7th International Conference on Digital Enterprise Technology, Athens, Greece, 28-30 September 2011.
A Full-Body Virtual Mirror System for Phantom Limb Pain Rehabilitation Eray Molla, Ronan Boulic Immersive Interaction Group, EPFL, Lausanne - Switzerland
[email protected],
[email protected]
Abstract. After amputation, people usually experience vivid sensations in their absent body part as if it were still present. In the majority of cases, this is a painful experience known as phantom limb pain. Several studies have demonstrated that providing the patient with convincing visual feedback imitating the movement of the absent limb can stimulate muscle sensations from the phantom limb, and thus the pain can be alleviated. In this project, we develop a system that furnishes the patient with full-body visual feedback, using immersive virtual reality techniques, for the treatment of this phenomenon.
Keywords: Immersive VR Based Pain Treatment, Phantom Limb Pain
1. Introduction Phantom limb pain [1] is a common consequence of amputation. Amputees suffer from severe pain, which may even limit their daily lives. Several methods have been proposed for its treatment; however, most of them have been reported as ineffective [2]. One notable success has been achieved by Ramachandran et al., who developed a device, called the mirror box, for appropriate manipulation of the visual input [3]. It is a simple box separated into two parts by a mirror. The patient places both his intact and absent limbs into the box. When viewed from an angle, the reflection of the remaining limb in the mirror is superimposed on the felt position of the phantom limb in the participant’s visual field. Therefore, movement of the intact limb creates a visual illusion such that the patient feels as if the amputated limb moves too, and so the pain is reduced. Despite its considerable contribution, some limitations of mirror-box therapy have been pointed out [4]. For instance, during the therapy, the participant has to keep his intact limb inside the constrained space of the box. Moreover, the viewpoint and the head direction can change only subtly, since the patient has to stay focused on the mirror. To deal with such limitations, several augmented [5, 6] and virtual reality (VR) [7, 8] setups offering pain treatment therapies have been proposed. All these therapies aim at letting the patient control the phantom limb to achieve tasks in the virtual world, like touching a target or holding a ball, in order to relieve the pain. They differ in the way the amputated arm is represented in the immersive environment. Whereas [5, 6, 8] track the movement of the valid limb and mirror the captured joint
angles to the phantom one, [7] relies on capturing the remaining portion of the amputated limb for reconstructing the posture of the absent limb in the virtual world. The main drawback of the former is that driving a mirrored arm is not intuitive. In addition, it only supports experimental tasks relying on bilateral movement or task symmetry. Although the latter is not limited to such tasks, it does not seem applicable on patients whose limbs are paralyzed. A thorough review of those systems can be found in [9].
2. Our Approach A compact summary of our experimental setup can be seen in Fig. 1a. In the virtual environment, the patient is assigned a task which he/she carries out in a seated position. An immobile sphere is displayed in front of the patient’s avatar, within his/her field of view. The participant has to reach and grab this target with the hands of his/her avatar. Only when both hands are in contact with the object is it attached to the valid hand and allowed to move. At this moment, another, slightly bigger sphere pops up at a different place. The task is achieved once the first sphere is carried inside the second one (Fig. 1b). An important feature to note is that, sometimes, the virtual representation of the absent arm cannot reach the target sphere. In this case, the first sphere remains immobile. The patient has to orient and position his body appropriately to let both virtual arms reach the target. This is intended to encourage greater involvement of the patient in the therapy. Our experimental setup has crucial distinctions from the systems introduced above. First of all, we capture the movements of the full upper body by placing visual markers around the pelvis (4), chest (4), head (5) and the intact wrist (4), elbow (2) and shoulder (1), rather than only the limbs (see Fig. 3). We use analytical inverse kinematics (IK) techniques for full-body reconstruction, including the legs. Leg reconstruction is done without capturing their movement; it relies on positioning the feet at valid positions on the ground.
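As a minimal sketch of the task logic just described (illustrative only, not the authors' implementation; the contact test, dimensions and spawning rule are simplified assumptions), the grab-and-carry rule can be written as:

```python
import random

class ReachAndCarryTask:
    """Simplified task logic: a sphere becomes movable only when both virtual
    hands touch it; it must then be carried into a second, larger sphere."""

    def __init__(self, target_pos, radius=0.05, goal_radius=0.08):
        self.target_pos = list(target_pos)
        self.radius = radius
        self.goal_radius = goal_radius
        self.goal_pos = None          # second sphere appears after a grab
        self.attached = False
        self.done = False

    @staticmethod
    def _dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def update(self, valid_hand, phantom_hand):
        """Call once per frame with the two virtual hand positions."""
        if self.done:
            return
        if not self.attached:
            touching = (self._dist(valid_hand, self.target_pos) < self.radius and
                        self._dist(phantom_hand, self.target_pos) < self.radius)
            if touching:
                self.attached = True
                # spawn the bigger goal sphere at a (random) different place
                self.goal_pos = [random.uniform(-0.4, 0.4), 1.0, 0.4]
        else:
            self.target_pos = list(valid_hand)        # sphere follows the valid hand
            if self._dist(self.target_pos, self.goal_pos) < self.goal_radius:
                self.done = True

# Example frame update
task = ReachAndCarryTask(target_pos=(0.0, 1.0, 0.3))
task.update(valid_hand=(0.01, 1.0, 0.3), phantom_hand=(-0.01, 1.0, 0.31))
print(task.attached, task.done)
```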
Figure 1.a Training setup: The patient is seated in front of a projection screen and equipped with stereo glasses. His body movements are tracked by a motion capture system.
Figure 1.b The task: The patient’s avatar is displayed at a seated position in front of a mirror where he can observe his reflection. Intact (dark) body parts are motion captured and amputated (light) arm is automatically reconstructed. Left: The patient is to reach and grab the small sphere with both hands. Right: The patient moves it into the big sphere.
Figure 2 – System Overview
One question which may arise at this point is whether full-body reconstruction makes sense for increasing self-body awareness, given the restricted field of view. In other words, it is not possible to see all of one’s body from a first-person perspective. To overcome this issue, we place a mirror in the virtual scene where the patient can see the reflection of his/her avatar (see Fig. 1b). In this way, a better correlation between the body movements of the patient and the visual feedback he obtains is achieved. This is a key factor for tricking the brain. Please note that, in order to avoid any occlusion between the patient’s field of view in the virtual world and the mirror, we propose the use of transparent objects for the tasks. Another important difference with respect to the existing systems is the way we drive the absent limb in the virtual world. From the object’s position, we infer how the end joint, wrist or ankle, should be positioned and oriented to grab it. Then, analytical IK techniques are used to reconstruct the absent arm so as to satisfy these constraints.
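To make the analytical IK step concrete, the following sketch (a standard two-link planar solution via the law of cosines, given here only as an illustration; the actual system works in 3D with orientation constraints and the segment lengths are invented) computes shoulder and elbow angles that place the wrist at a desired target:

```python
import math

def two_link_ik(target_x, target_y, upper_len=0.30, fore_len=0.25):
    """Analytic 2-link planar IK: returns (shoulder, elbow) angles in radians
    so that the wrist reaches (target_x, target_y) in the shoulder frame.
    Raises ValueError if the target is out of reach."""
    d2 = target_x ** 2 + target_y ** 2
    d = math.sqrt(d2)
    if d > upper_len + fore_len or d < abs(upper_len - fore_len):
        raise ValueError("target out of reach")
    # elbow angle from the law of cosines (elbow-down solution)
    cos_elbow = (d2 - upper_len ** 2 - fore_len ** 2) / (2 * upper_len * fore_len)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # shoulder angle: direction to target minus the offset caused by the elbow bend
    k1 = upper_len + fore_len * math.cos(elbow)
    k2 = fore_len * math.sin(elbow)
    shoulder = math.atan2(target_y, target_x) - math.atan2(k2, k1)
    return shoulder, elbow

# Example: place the virtual wrist of the reconstructed arm at a target point
s, e = two_link_ik(0.35, 0.20)
print(math.degrees(s), math.degrees(e))
```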
3. Design Considerations and System Overview (Fig. 2) One of the most important aspects of such a setup is the comfort of the patients. Depending on the type and the place of the amputation, the body characteristics of the patients may differ. Therefore, the use of an industry-standard motion capture suit can be uncomfortable or even impossible. Considering that these suits are too costly to tailor, we decided to produce personalized suits ourselves from inexpensive materials such as a tight t-shirt, straps and velcro tape. A person equipped with a handmade suit can be seen in Figure 3. For the immersive display, we initially planned to use a VR1280 HMD with stereoscopic rendering. However, due to the patients’ possible sensitivity to pain, we decided also to support a more lightweight technology: the rendering is done with quad-buffering and the scene is projected onto a large screen in front of the patient, who is equipped with NVIDIA 3D Vision® Pro glasses for the stereo effect. For a better immersive experience, CAVE rendering can be considered as an alternative.
Figure 3 – Left: An equipped user. Middle, Right: A person using the system.
4. Results and Discussion In this paper we have introduced a new type of VR setup for phantom limb pain treatment. It provides the patient with full-body visual feedback and offers more intuitive control over the phantom limb than previous works. The use of analytic IK for full-body reconstruction guarantees responsiveness and accuracy, which are crucial for gaining agency and a better immersive experience. We demonstrated it on a reach, grab and move task. However, more entertaining tasks can easily be designed using the setup we described. This work has focused on the phantom limb pain phenomenon, but the system can easily be adapted to other types of pain rehabilitation, for example after surgeries restricting the mobility of the limbs. In this way, we would be able to reach more people and alleviate their pain. We may perform a pilot study with a patient suffering from a brachial plexus injury, an injury of the nerves controlling the muscles of the arm. At the end of such a study we will be able to provide a qualitative comparison of our setup with respect to others.
References [1] Flor H (2002) Phantom limb pain: characteristics, causes and treatment. Lancet, 1, 182-189. [2] Sherman RA, Sherman CJ, Gall NG (1980). A survey of current phantom limb pain treatment in the United States. Pain; 8: 85–99. [3] Ramachandran, V. S.; Rogers-Ramachandran, D. C. (1996), "Synaesthesia in phantom limbs induced with mirrors", Proceedings of the Royal Society of London 263: 377–386. [4] Murray CD, Pettifer S, Caillette F, Patchick E, Howard T (2005) Immersive virtual reality as a rehabilitative technology for phantom limb experience. University of Southern California, Los Angeles, CA, USA Proc Fourth Int Workshop Virtual Real:144–151. [5] O’Neill K, de Paor A, MacLachlan M, McDarby G (2003) An investigation into the performance of a virtual mirror-box for the treatment of phantom limb pain in amputees using augmented reality technology. In: Human-computer-interaction international 2003, conference proceedings. [6] Desmond D, O’Neill K, De Paor A, McDarby G, MacLachlan M (2006) Augmenting the realityof phantom limbs: Three case studies using an augmented mirror-box procedure. J Prosthet Orthot 18(3):74–79. [7] Cole J, Crowle S, Austwick G, Henderson Slater D (2009) Exploratory findings with virtual reality for phantom limb pain; from stump motion to agency and analgesia. Disabil Rehabil 31(10):846–854. [8] Murray CD, Patchick EL, Pettifer S, Howard T, Caillette F, Kulkarni J, Bamford C (2006c) Investigating the efficacy of a virtual mirror-box in treating phantom limb pain in a sample of chronic sufferers. Int J Disabil Hum Dev 5:227–234. [9] C. D. Murray, S. Pettifer, T.L.J. Howard, F. Caillette E. Patchick, and Joanne Murray (2010). Virtual solutions to phantom problems: Using immersive virtual reality to treat phantom limb pain. In C.D. Murray, editor, Amputation, Prosthesis and Phantom Limb Pain, pages 175-197.
A3R :: A new insight into Augmented Reality. Transporting the Augmented Reality user into another dimension through the sound JORGE R. LÓPEZ BENITO, ENARA ARTETXE GONZÁLEZ, ARATZ SETIÉN GUTIERREZ CreativiTIC Innova S.L. [1] La Rioja Technological Centre, Avda. Zaragoza 21, 26071, Logroño (La Rioja), Spain
[email protected],
[email protected],
[email protected]
Abstract. Based on research and libraries developed both at the University of La Rioja and at AHOLAB, the signal processing department of the University of the Basque Country, the A3R project aims to include a new input in the creation of Augmented Reality applications: audio. The recognition of surrounding sounds and their representation as images will not only provide invaluable help for hearing-impaired people but will also open the door to a whole new world in fields like tourism, marketing or education.
Keywords: Augmented Reality, audio, R&D, innovation, CreativiTIC, audiopositioning, diversity.
1. Introduction The project presented here is in-progress R&D work which builds on a previous collection of studies developed by the Department of Mathematics and Computer Science of the University of La Rioja, Spain. These studies have dealt with the following topic: “Academic adjustments and useful tools for students with severe disabilities in communication production in computer science degrees” [2], referenced in the “International Journal for Knowledge, Science and Technology, Nº2 Vol2, October 2010”. The idea behind this project is the development of a technology that will integrate an audio input into Augmented Reality (AR) applications. The A3R technology is defined as “a core that provides a specific audio-positioning interface to be used in augmented reality applications (own or third parties’) that detects, identifies and analyses the nature of the sound and its parameters (height, intensity, length and pitch) in the frequencies of the audible spectrum”. This new technology will take the AR user to a new experience, combining the visual experience of augmented reality with new sound experiences.
Figure 1. A3R diagram.
2. State of the art In general, audio recognition is based on statistical techniques (Gaussian Mixture Models, GMMs) that work properly for nearly everything (voice recognition, voice/music classification). In acoustic event classification, as sounds are of a very different nature (breaking glass, paper movement, doors opening/closing, bells), other classifiers are also used (Support Vector Machines, SVMs, and detectors for FFT-based stationary sounds). The use of TTS (Text to Speech) algorithms is widespread and is the most commonly used technique for audio treatment; this is the least differentiating part of the R&D project. On the other hand, nowadays we can find different frameworks that make AR application development easier on many technologies. However, none of these commercial tools currently covers the analysis and integration of sound events. The development of languages for AR and their integration with sound is the most differentiating part of the project. It is worth mentioning that, on the research side, the Polytechnic University of Madrid has a patent pending for an experiment with an AR visual system focused on hearing-impaired people, which is able to capture acoustic signals with microphones situated in the position of a human’s ears (such as on glasses arms) and incorporate them visually, resulting in “AR glasses” [3].
3. Methodology This R&D project consists of two different phases: Reception and study of the surrounding sound: This method, called diarization, consists in the study of the different sound events from a same source generated in our environment through algorithms that look for characteristic points. In this phase, all the information coming from those sound events is analysed according
to different parameters, such as frequency range or pitch, so the different sources (music, voice, ambient noise…) can be identified. Integration of the audiopositioning with AR: To achieve this, a new platform-independent framework is being developed that includes the sound recognition functionalities, allowing developers and users of A3R to add to and expand the capacities and functionalities of current and future AR applications on any existing platform, such as mobile environments or web systems. In a first stage, a library core (SDK) is to be implemented and released as a beta version for use in internal and third parties’ developments. After that, the idea is to create our own development environment, starting from a free development environment. This new environment would generate AR applications for different devices, integrating the new capacities of the A3R technology.
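As an illustration of the kind of source identification this phase relies on, the hedged sketch below trains one Gaussian mixture model per source type on coarse short-time spectral features and labels a clip by maximum likelihood; the feature extraction, class set and parameters are illustrative assumptions, not the AHOLAB libraries used in A3R.

```python
# Illustrative GMM-based sound-source classifier (voice / music / ambient noise, ...).
import numpy as np
from sklearn.mixture import GaussianMixture

def spectral_features(signal, frame=1024, hop=512, n_bands=20):
    """Frame a mono signal and return log band energies per frame."""
    feats = []
    for start in range(0, len(signal) - frame, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame]))
        bands = np.array_split(spectrum, n_bands)            # coarse frequency bands
        feats.append(np.log([b.sum() + 1e-9 for b in bands]))
    return np.array(feats)

class SoundEventClassifier:
    def __init__(self, n_components=4):
        self.n_components = n_components
        self.models = {}

    def fit(self, labelled_clips):
        """labelled_clips: dict mapping a source label to a list of mono signals."""
        for label, clips in labelled_clips.items():
            feats = np.vstack([spectral_features(c) for c in clips])
            self.models[label] = GaussianMixture(self.n_components).fit(feats)

    def predict(self, clip):
        """Return the label of the model giving the clip the highest likelihood."""
        feats = spectral_features(clip)
        return max(self.models, key=lambda lbl: self.models[lbl].score(feats))
```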
4. R&D work Building on the studies made by AHOLAB (a signal processing group of the University of the Basque Country, Spain), we work with libraries that provide the algorithms carrying the information necessary to characterise the sound events. These libraries have already been tested in numerous studies and doctoral theses. At the same time, we are investigating the existing AR platforms and SDKs and contacting the leading companies in AR software development, in order to decide whether we can develop our own platform on top of an existing one or whether we will design a whole new one.
5. Impact As mentioned above, A3R was first conceived as a branch of a bigger educational project we are developing together with the University of La Rioja, aimed at improving the conditions of people with special needs. During this research, the educational strategies used with Computer Science students with severe disabilities in communication production were analysed, as well as the aid that ICTs offer for the continuation of their studies. In this context the A3R technology is a very powerful social tool, and it is especially helpful for people with hearing problems. Used together with visual support, it can warn them of imminent dangers, improve their communication, and help them when attending classes or visiting places such as museums. At the same time, A3R can add value in many other fields such as video games (giving them another dimension), marketing (revolutionising the current concept of advertising and personal branding) and audio-visual spectacles.
Figure 2. Example of A3R use for helping hearing impaired people in lectures/conferences.
6. Conclusion In conclusion, the inclusion of sound recognition into AR applications, combined with visual display and geopositioning, fills a current gap. Not only does it provide an opportunity to release new applications onto the market, but it is also a huge step in improving the quality of life of hearing-impaired people. In addition, once implemented, the core of A3R can be adapted to each personal case, giving personalised help and thus being much more effective than any global solution.
References [1] CreativiTIC is a micro-SME ICT engineering start-up that offers solutions at the crossroads of Augmented Reality (AR) and innovative audiovisual technologies. The company has two interdependent areas of specialization: Augmented Reality (AR interfaces and AR development frameworks and tools (SDK, API, IDE)) and audiovisual production (visual effects (VFX), post-production, management of audiovisual projects). Its open R&D lines are: - a European project in the call FP7-ICT-2011-8 under the topic “Technology enhanced learning”; - applications of AR for overcoming disability hurdles in learning environments: development platforms and tools with a view towards final applications in technology-enhanced learning based on AR, 3D and immersive environments; - implementation of audio inputs in AR: processing of audio inputs and merging AR with cybernetics to enhance the perception capacities of hearing-impaired people by translating the acoustic information into visual representations and alerts. [2] Muniozguren L., Domínguez C., Jaime A.; (2010) JENUI, Department of Mathematics and Computer Science, University of La Rioja, Spain. [3] Patent request Nº ES2347517, Spain.
A haptic paradigm to learn how to drive a non-motorised vehicle manipulated through an articulated mechanism Pierre Martin, Nicolas Férey, Céline Clavel, Patrick Bourdot VENISE & CPU teams, CNRS/LIMSI, B.P. 133, 91403 Orsay cedex (France)
[email protected]
Abstract. Virtual Environments are sometimes used to learn how to manipulate complex and/or dangerous equipment. In this context, we designed a virtual simulator dedicated to learning to drive a specific vehicle. In our case, the driving task is performed through an articulated mechanism and requires a physical involvement of the driver. This paper explains how we designed a new haptic paradigm that aims to provide a realistic sensorimotor stimulation, especially in terms of physical involvement.
Keywords: Virtual Reality, Haptic Navigation and Driving, Learning.
1. Introduction Virtual Reality (VR) technologies provide solutions for simulating realistic situations in a controlled and secure context. In this paper we explain the simulation techniques and tools used to design an interaction paradigm for the virtual driving of a specific vehicle. We especially focus on a non-motorised forklift controlled by an articulated handle. The manipulation of this articulated mechanism allows the driver to push/pull the vehicle and to control its direction (Figure 1). Some works also focus specifically on VR applications dedicated to motorised forklifts. Safety is one of the application fields. [1] studied the impacts of forklifts occurring on drive-in racking structures and proposed a general method for calculating the forces generated under forklift truck impact. In [2] the prevention of forklift capsize is considered: based on the observation that driver training and proper safety procedures are not sufficient to reduce accidents, they developed an intelligent control system, embedded on the forklift, which analyses on-board sensor data and proposes corrections. Finally, [3] and [4] described a prototype of a fully immersive simulation of forklift truck operations for safety training. To provide a realistic sensorimotor stimulation during the driving of a non-motorised vehicle manipulated through an articulated mechanism, we designed a new haptic navigation paradigm. It simulates and haptically renders the articulated mechanism of the handle used to manipulate the forklift's velocity and direction, providing feedback of the ballistic properties, such as inertia, through the virtual handle.
Figure 1. The real and virtual non-motorised forklift and articulated handle
2. A haptic virtual mechanism simulating an articulated handle As explained in the previous section, we focus on the virtual driving of a non-motorised vehicle manipulated through an articulated mechanism. Thus, to simulate the real forklift's handle, we designed a virtual mechanism based on a rigid body with hinge constraints (Figure 2 - right). We used a software setup based on Virtools™ (a VR platform from Dassault Systèmes) and a specific plug-in, called IPP™, developed by Haption, and a hardware setup composed of a Virtuose™ 6D haptic device, also provided by Haption (http://www.haption.com). A first hinge (blue sphere, red arrow) controls the orientation of the forklift's wheels. A second hinge (green arrow) allows the user to raise or lower the forklift's handle.
Figure 2. The Virtuose™ 6D 35-45 haptic arm from Haption (left). The virtual mechanism of the forklift handle (right), implemented with the Interactive Physical Pack (IPP™/IPSI™) developed by Haption for the Virtools platform used for the visual immersion.
3. The Simulation of the ballistic behaviour of the vehicle This section describes how we compute the ballistic behaviour of the virtual forklift. In the real context, the user walks behind the forklift and controls its velocity by
applying a pulling/pushing force through the handle. The velocity of the forklift therefore has to be simulated according to this force. We also have to take into account the weight (400 kg), as well as the friction/damping observed in the real condition, obtained empirically from velocity and deceleration measurements between two points (stopping distance 4.25 m, initial velocity 1 m/s). We express the new velocity of the vehicle according to these factors. Equation 1 gives the expression of the new velocity according to F, where Δt is the time variation, t the current time, V(t) the velocity at t, β the damping factor, m the object weight, and α a control constant. (1) All parameters of Equation 1 are known except β: this variable takes into account all the forces applied to the object, including the friction forces. In our physical simulation of the forklift, we assume that β is constant, its variation being negligible. To determine its value, we ran the simulation with the forklift moving at a speed of 1 m/s and adapted the value of that constant so that the virtual vehicle stopped at the right distance (the one measured on the real forklift) from its starting point. The only parameter left to define is α, a constant factor that we can choose arbitrarily in order to scale the force applied to the simulation. Moreover, we determined the direction of the forklift from the angle between the green bar and the blue horizontal one (Figure 2 - right). We empirically found a linear mapping between this angle and the inverse of the curvature radius. Using this curvature radius, the current velocity and Equation 1, we were able to compute the next position and orientation of the virtual forklift.
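A minimal numerical sketch of this step is given below; it assumes Equation 1 takes the standard damped form V(t+Δt) = V(t) + (Δt/m)(αF(t) − βV(t)), which is consistent with the parameters listed above, and the calibration routine mimics the empirical determination of β from the measured stopping distance. The names and the search procedure are illustrative, not the authors' implementation.

```python
# Hedged sketch: velocity update assuming the damped form described above, plus an
# empirical calibration of beta against the measured stopping distance (4.25 m from 1 m/s).
def next_velocity(v, force, dt, m=400.0, alpha=1.0, beta=94.0):
    """One integration step of the forklift's longitudinal velocity."""
    return v + dt / m * (alpha * force - beta * v)

def calibrate_beta(stop_distance=4.25, v0=1.0, m=400.0, dt=0.01, lo=1.0, hi=1000.0):
    """Bisect beta so that a freewheeling forklift starting at v0 stops near the measured distance."""
    def stopping_distance(beta):
        v, x = v0, 0.0
        while v > 1e-3:                                   # integrate until the forklift stops
            v = next_velocity(v, 0.0, dt, m=m, beta=beta)
            x += v * dt
        return x
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if stopping_distance(mid) > stop_distance:        # travels too far -> needs more damping
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```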
4. Combining a virtual mechanism and a neutral referential technique to control the forklift velocity The previous sections explain how we haptically render the mechanical behaviour of the forklift, how the direction of the forklift is controlled using the handle angle, and finally how we compute the next position and orientation of the virtual forklift from the current forklift velocity using Equation 1. This section explains how the force used in Equation 1 to control the velocity of the virtual forklift is computed, and which force feedback is provided to the user to simulate a physical involvement during the driving task. We were inspired by a non-haptic navigation technique based on non-isomorphic rate control, which compares a currently tracked referential with an initial neutral referential [5]. This technique is especially well adapted to performing navigation tasks with a haptic device, because the base of the haptic device is fixed, and the neutral referential can be chosen as the centre of the haptic workspace. This concept was thus extended haptically. An elastic force allows the end of the virtual handle described in the previous section (the blue sphere in Figure 2, right) to return to the neutral referential placed at the centre of the green bar. The blue sphere is also constrained along this green bar. The opposite of this elastic force is scaled and
used to control the forklift velocity according to Equation 1. This elastic force is also scaled and used as the force feedback provided to the user. It is indeed not possible to provide through the haptic device the same intensity of force feedback as in the real context (device limitation). The elastic force feedback can also be dynamically and linearly scaled according to the current velocity, in order to let the user perceive the inertia of the forklift. In this way, the further the blue sphere moves forward along the green bar, the larger the force applied by the user to the vehicle. The elastic force feedback, depending on the force applied to the virtual forklift and on its current velocity, allows our haptic driving paradigm to simulate a physical involvement of the user.
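The sketch below illustrates this elastic rate-control loop with hypothetical gains; in the actual system the coupling runs inside IPSI/Virtools with the Virtuose 6D device.

```python
# Illustrative elastic rate control: the handle end is pulled back to the neutral point,
# the opposite of that elastic force drives the forklift (input to Eq. 1), and the feedback
# sent to the user is scaled with the current velocity so the inertia can be felt.
def rate_control_step(handle_pos, neutral_pos, velocity,
                      k_spring=200.0, k_drive=1.0, k_inertia=0.5):
    displacement = handle_pos - neutral_pos            # constrained along the green bar
    elastic_force = -k_spring * displacement           # restores the handle to neutral
    drive_force = -k_drive * elastic_force             # opposite of the elastic force
    feedback_force = elastic_force * (1.0 + k_inertia * abs(velocity))
    return drive_force, feedback_force
```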
5. Conclusion We designed an innovative driving paradigm dedicated to learning to drive a specific vehicle requiring the manipulation of an articulated mechanism and a physical involvement of the driver. On the one hand, the rigid-body model provided by IPP™ (Haption) within Virtools™ allows users to haptically perceive the physical and mechanical behaviour of the forklift and of the handle used to control its velocity and direction. On the other hand, the velocity control of the virtual forklift is simulated with a push/pull interaction technique using elastic feedback to convey the forklift's inertia and to simulate a physical involvement of the user. According to a previous ergonomic study we led [6], our haptic technique clearly appeared more realistic than the joystick one, considering various dimensions of realism, especially in terms of performance and in terms of behavioural and psychological process transfer.
References [1] Gilbert, B. P., Rasmussen, K. J. R.; (2011) Determination of accidental forklift truck impact forces on drive-in steel rack structures. In Engineer. Struct., 33(5), pp. 1403-1409. [2] Rinchi, M., Pugi, L., Bartolini, F. and Gozzi, L.; (2010) Design of control system to prevent forklift capsize. Int. J. of Vehicle Systems Modelling and Testing, 5(1), pp. 35–58. [3] Yuen, K.K., Choi, S.H., and Yang, X.B.; (2010) A full-immersive CAVE-based VR simulation system of forklift truck operations for safety training. In Computer-Aided Design and Applications, 7(2), pp. 235-245. [4] Bergamasco, M., Perotti, S., Avizzano, C.A., Angerilli, M., Carrozzino, M., Facenza, G., and Frisoli, A.; (2005) Forklift truck simulator for training in industrial environments, Research in Interactive Design: Proc. of Virtual Concept. [5] Bourdot, P. and Touraine, D.; (2002) Polyvalent Display Framework to Control Virtual Navigations by 6DOF Tracking. In Proc. of IEEE Virtual Reality, pp. 277-278. [6] Martin, P., Férey, N., Clavel, C., Darses, F. and Bourdot, P.; (2012) Sensorimotor Feedback for Interactive Realism: Evaluation of a Haptic Driving Paradigm for a Forklift Simulator, Lecture Notes in Computer Science, 7282/2012, pp. 314-325.
Validation of a haptic virtual reality simulation in the context of industrial maintenance M. POYADE1, L. MOLINA-TANCO1, A. REYES-LECUONA1, A. LANGLEY2, M. D'CRUZ2, E. FRUTOS3, S. FLORES3 1 Dpto. Tecnologia Electronica, E.T.S.I. de Telecomunicacion Universidad de Malaga, Campus Universitario de Teatinos, 29071 Malaga, Spain (matthieu.poyade,areyes,lmtanco)@uma.es 2 Human Factors Research Group, Innovative Technology Research Centre Dept. of Mechanical, Materials and Manufacturing Engineering University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom (Alyson.Langley, Mirabelle.Dcruz)@nottingham.ac.uk 3 Tecnatom S.A., Avda. de Montes de Oca 1 28703, San Sebastián de los Reyes, Spain (efrutos, sflores)@tecnatom.es
Abstract. This paper presents a VR simulator enhanced with haptic force feedback that enables training in the performance of fine grinding and polishing tasks, together with some of the results from an experimental study conducted during the demonstration phase of the ManuVAR project.
Keywords: Haptics, Virtual Reality, training simulator, motor skills training.
1. Introduction In order to prevent failures in manufacturing processes, industrial plants conduct maintenance campaigns during which engineering services companies such as Tecnatom S.A. are engaged for the inspection of operating components. The metallographic replica is a non-destructive inspection technique that requires prior fine grinding and polishing operations to remove oxide scale from material surfaces [1]. The performance of these tasks requires advanced skills in angling power tools and applying pressure with them on the surface of the material. However, training unskilled workers is difficult, in that supervisors cannot provide accurate feedback on the ongoing performance and movement characteristics. Virtual Reality (VR) enhanced with haptic force feedback (FF) would enable training for complex mechanical tasks and thereby supplement conventional training. It has been demonstrated that VR training enhanced with FF and augmented feedback (AF) enables learning the force skills required in mechanical operations [2]. However, no study has reported the effectiveness of VR training for acquiring tool inclination skills. In this paper, we present and assess a VR training system enhanced with FF and AF for fine grinding and polishing tasks.
2. Description of the VR system The VR training system consists of a simulator that enables practicing fine grinding and polishing tasks in a virtual environment (VE). A haptic device was used to simulate the functioning of a portable flexi-drive tool (vibrations and tangential forces) and to interact within the VE. The training simulator provides the VE, which consisted of a 3.0 × ~4.5 cm area located on the lateral of a pipe on a factory floor. The training was enhanced with AF in the form of a colour map that used a colour scale (from red to green) to indicate in real time the completion of the task in the metallographic replica area. The colour map was overlaid on the metallographic replica area and magnified in a right lateral window. It consisted of a texture of 64×64 pixels, whose pixels were identified as the elements of a 64×64 matrix A(t). Each element of A(t) stored the time of correct performance of the task on the corresponding pixel, as in Eq. 1. Correct performance of the fine grinding task consisted in generating scratches by flattening the rotating wheel on the material surface while maintaining the applied angle and force within ranges of correctness set to 75º–90º and 1 N–5 N respectively. These thresholds defined a segment area S+ on the flattened rotating wheel in which the produced scratches had the desired direction; outside these thresholds, an area S− is defined, in which the flattened rotating wheel generated scratches with an inappropriate direction. Correct performance of the polishing task consisted in flattening the rotating wheel while maintaining the applied angle and force within ranges of correctness set to 0º–20º and 1 N–5 N respectively. These thresholds defined a segment area S+ on the rotating wheel, considered as the optimal flattening on the material surface; in the case of the polishing task, there was no area S−.
a_{i,j}(t) = a_{i,j}(t − 1) + b_{i,j}(t),   ∀ i ∈ [1,64], ∀ j ∈ [1,64]   (Eq. 1)
where b_{i,j}(t) is expressed as follows (Eq. 2):
b_{i,j}(t) = +ΔT if pixel a_{i,j} is covered by the S+ area; −ΔT if it is covered by the S− area; 0 if it is covered by neither S+ nor S−.   (Eq. 2)
Where 𝛥𝑇 is the elapsed time between two graphical frames.
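As an illustration, the sketch below (not the simulator's code, which runs inside Virtools) updates the 64×64 colour-map matrix of Eqs. 1–2 once per graphical frame; the contact mask, the way the whole contact patch is treated as S+ or S−, and the completion target are simplifying assumptions.

```python
# Illustrative update of the colour-map matrix A (Eqs. 1-2); a simplification in which
# the whole contact patch counts as S+ or S- depending on the current angle/force,
# rather than splitting the wheel into S+ and S- segment areas.
import numpy as np

GRID = 64
A = np.zeros((GRID, GRID))        # accumulated time of correct work per pixel

def within_s_plus(angle_deg, force_n, task):
    """Angle/force ranges of correctness quoted in the text."""
    if task == "fine_grinding":
        return 75.0 <= angle_deg <= 90.0 and 1.0 <= force_n <= 5.0
    return 0.0 <= angle_deg <= 20.0 and 1.0 <= force_n <= 5.0   # polishing

def update_colour_map(contact_mask, angle_deg, force_n, dt, task="fine_grinding"):
    """contact_mask: boolean 64x64 array of pixels under the flattened wheel this frame."""
    if within_s_plus(angle_deg, force_n, task):
        A[contact_mask] += dt                  # b_{i,j} = +dT on S+
    elif task == "fine_grinding":              # the polishing task defines no S- area
        A[contact_mask] -= dt                  # b_{i,j} = -dT on S-

def completion_fraction(target_time=2.0):
    """Fraction of pixels worked correctly for at least target_time seconds (hypothetical threshold)."""
    return float(np.mean(A >= target_time))
```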
3. Validation of the VR system An experiment was conducted during the demonstration phase of the EU-funded project ManuVAR [3], to (1) evaluate the effectiveness of the VR system for training in fine grinding and polishing tasks and (2) study the external validity of the system. Six trainees (1 female and 5 males) aged from 30 to 55 and two experts (2 males) aged 31 and 35 took part in our experiment. All participants were workers at Tecnatom S.A. One trainee reported being skilled in performing the metallographic replica technique, four others had little knowledge, and one was a complete novice. Participants reported no physical disabilities, but one trainee was colour-blind. All participants received prior part-task training of angle and force skills in VR.
The VR system ran on the ManuVAR platform distributed over two workstations. PC 1 supported the ManuVAR components that managed the platform and a haptic server [1]. A Phantom Desktop, able to render a maximum force of 7.9 N on 3 DOF, enabled interaction within the VE. PC 2 displayed the simulator on a 3D screen (W: 1.5 × H: 1.2 m) with a resolution of 1280 × 960 pixels. The 3DVia Virtools VR player rendered the VE at a 60 Hz refresh rate. The VE was visualised through passive stereoscopic glasses tracked by 6 infrared cameras from NaturalPoint OptiTrack. A separate laptop located in an adjoining room was used to display the instructions. The experimental procedure consisted of a within-subject design involving experts during the first day and trainees during the following two days. Trainees were randomly distributed into two groups: one trained on fine grinding (FG) and the other on polishing (POL). Before starting, participants received instructions about the task objective and were told how to interpret the augmented feedback. Participants stood about 1 m in front of the screen. The haptic device was placed in front of them and elevated so that the haptic workspace physically matched the manipulation workspace in the VE. Participants handled the haptic device as a real portable tool. Trainees performed a pre-evaluation step composed of two items of 3 minutes: (1) a familiarisation item during which they were asked to perform their assigned task assisted by the colour map; (2) an evaluation item during which they performed the task with no visual aid. Then, trainees practised the task during two items of 3 minutes with the colour map displayed on demand. Finally, trainees were evaluated while performing the task for 3 minutes with no colour map. Experts went through the training procedure described for trainees for both tasks.
4. Results The average completion rate of the task reported on the colour map was measured for each participant during the pre- and post-evaluation steps in order to be compared. All participants from FG (Figure 2.a) and two participants from POL improved their performance after training (Figure 2.b), even though one of them was colour-blind. However, for one participant, training resulted in a negative effect, probably due to poor visual cues dominating haptic cues for contact information. Thus, enhanced visual cues may improve this aspect.
Figure 2. Completion of (a) fine grinding and (b) polishing tasks before and after training.
The difference in the performance of the fine grinding (Figure 3.a) and polishing (Figure 3.b) tasks between experts and trainees during the pre-evaluation step validates the system as a good representation of the real-world task.
Figure 3. Experts performed (a) fine grinding and (b) polishing tasks better than trainees.
5. Conclusion This study reported the effectiveness of a VR training system in improving the performance of mechanical tasks simulated in a VE. Moreover, the performance measures made it possible to differentiate levels of expertise; thus, external validity of the simulator was established. In the future, we will adapt the colour map to colour-blindness.
Acknowledgements The above mentioned research has received funding from the EC Seventh Framework Programme FP7/2007-2013 under grant agreement 211548 “ManuVAR”.
References [1] Poyade, M., Reyes-Lecuona, A., Frutos, E., Flores, S., Langley, A., D'Cruz, M., Valdina, A., Tosolin, F.; (2011) Using Virtual Reality for the Training of the Metallographic Replica Technique Used to Inspect Power Plants by TECNATOM S.A. Joint VR Conference of euroVR and EGVE, Nottingham, UK. [2] Morris, D., Tan, H., Barbagli, F., Chang, T., Salisbury, K.; (2007) Haptic Feedback Enhances Force Skill Learning. Proceedings of the 2nd Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (World Haptics 2007, WHC'07), Tsukuba, Japan, pp. 21-26. [3] Krassi, B., D'Cruz, M., Vink, P.; (2010) ManuVAR: a framework for improving manual work through virtual and augmented reality. In Proc. of the 3rd Int. Conf. on Applied Human Factors and Ergonomics (AHFE), Miami, FL.
Haptic Motion: an Academic and Industrial achievement Nizar OUARTI Institut des Systèmes Intelligents et de Robotique, Université Pierre et Marie Curie, CNRS UMR 7222, 4 Place Jussieu, 75252 Paris Cedex, France
[email protected]
Abstract. We present a different usage of haptic feedback that can produce a sensation of self-motion by stimulating only the hands. This technology was developed with the support of the company Haption. A first demonstrator, using the Virtuose haptic device and a virtual world, was created. Based on the international patent resulting from this concept, different technological applications are in progress.
Keywords: haptic, self-motion, vection.
1. Introduction In a simulator, an important issue is to render the displacement of the user in a virtual world. Usually this is achieved by techniques such as visual stimulation. For instance, watching a large screen, or wearing a head-mounted display that covers most of the field of view, is classical in virtual reality systems. This kind of technology can induce spatial presence, and one of its components can be vection. Vection is a well-known illusion of self-motion [1,2,3,4]. Most people have experienced, at least once in a train, a visually induced illusion of self-motion caused by the optic flow of another train, observed through the window, as it starts to move. However, visually induced self-motion is limited, as vision is well adapted to detecting velocity information but not acceleration [2]. Another type of stimulation consists in producing a real motion of the whole body; this is the strategy of hexapod and rail-based devices. Stewart platforms [5,6], also called hexapods, and their variants are widely used in flight or driving simulators. The main principle underlying this technology is to move the platform in six degrees of freedom (3 rotations and 3 translations) with six hydraulic cylinders (hexapod). However, classical hexapods are very limited in terms of workspace. This is one of the reasons why the hexapod on rails was invented: platforms on rails give the possibility of a wider range of linear accelerations. Typical examples, developed for the automotive industry, are the Ultimate Platform developed by Renault and the Toyota driving simulator. Another manner of inducing self-motion in industry is based on vibrotactile actuators. The vibrotactile actuators used in current motion simulators are characterised by a narrow range of frequencies [7] or are related to the sound of an engine [8]. Finally, walking on a treadmill or pedaling on a bicycle [9] can
also be cited as ways to induce self-motion. All the technologies developed so far to generate a self-motion sensation in virtual environments suffer from intrinsic limitations. The principal limitations are the workspace requirement, the price of the devices, the duration of the illusion, and the directionality of the stimulation. We developed, with the support of Haption, a new solution called “haptic motion”, using the Virtuose haptic device from Haption. In this paper, we explain the concept of the technology involved in haptic motion, how Haption interacted with us to develop a proper demonstrator and, finally, how we patented our invention.
2. The Technology 2.1 Concept We propose a change in the usage of haptic devices. Haptic feedback has mainly been used to interact with objects in a virtual environment: haptic devices make it possible to give the sensation of touching or moving objects, and to feel their weight or texture. The radical change in our concept is that haptic feedback is used to produce the illusion that the user's body is moving in 3D space. The key idea of our approach is to send to the user a force which is proportional to, and in the same orientation as, the inertial acceleration vector visually perceived. Applying such a force to the subject's hands is expected to produce a sensation of whole-body self-motion (Figure 1).
Figure 1. General principle of haptic motion. Virtual 3D acceleration is converted to a 3D force.
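A minimal sketch of this conversion is given below; the gain and the force clamp are hypothetical values chosen to respect typical device limits, not Haption's parameters.

```python
# Haptic motion principle: the force command is proportional to, and aligned with,
# the inertial acceleration perceived visually in the virtual world.
import numpy as np

def acceleration_to_force(accel_xyz, gain=2.0, max_force=10.0):
    """Convert a virtual 3D acceleration (m/s^2) into a clamped 3D force command (N)."""
    force = gain * np.asarray(accel_xyz, dtype=float)
    norm = np.linalg.norm(force)
    if norm > max_force:                     # stay within the device's force limit
        force *= max_force / norm
    return force
```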
2.2 Demonstrator For the development of the demonstrator two key elements were important. The first one was to have a way of grasping the haptic device with both hands; Haption developed a specific apparatus that allows this (Figure 2). The second important factor was to produce an immersive environment that can provide an impressive sensation to the user.
Figure 2. Device adapted to be grasped with both hands. The 3D force stimulates only the hands to provide a sensation of self-motion.
Among the different factors involved in immersion, we designed a 3D virtual environment consisting of a configurable tunnel with physical properties such as gravity and friction (Figure 3). This demonstrator was presented at JVRC 2010.
Figure 3. The virtual tunnel developed to provide a more realistic immersion. Left: the textured version; right: the wireframe version.
2.3. Results We showed in two experiments that the haptic feedback improved the sensation of self-motion. The frequency of occurrences was compared for the visuo-haptic condition versus the visual-only condition; the difference was significant (Wilcoxon signed-rank test, p = 0.039). The duration of the illusion was also increased and its onset latency decreased.
3. Patent After the realization of our demonstrator, we applied for a patent. Our first application was national only. However, growing interest in our technology from different companies motivated our application for an international patent [10]. Our patent primarily targets entertainment companies, but it can also be applied to teleoperation. With Haption as partner, we envisage an impact on many different markets.
4. Conclusion This article shows how an academic concept can be implemented as an industrial solution. The interaction with companies like Haption was important for obtaining the right feedback and good support. Finally, for academics, the patent process, even if it is time-consuming, makes it possible to promote their research with an industrial partner.
References [1] A. Berthoz, B. Pavard, and L. R. Young. Perception of linear horizontal self-motion induced by peripheral vision (linearvection): basic characteristics and visual-vestibular interactions. Exp Brain Res, 23(5):471–489, Nov 1975. [2] B. Pavard and A. Berthoz. Linear acceleration modifies the perceived velocity of a moving visual scene. Perception, 6(5):529–540, 1977. [3] J. Huang and L. R. Young. Sensation of rotation about a vertical axis with a fixed visual field in different illuminations and in the dark. Exp Brain Res, 41(2):172–183, 1981. [4] L. C. Trutoiu, B. Mohler, J. Schulte-Pelkum, and H. H. Bulthoff. Circular, linear, and curvilinear vection in a large-screen virtual environment with floor projection. In Proc. IEEE Virtual Reality Conference, pages 115–120, 8–12 March 2008. [5] D. Stewart. A platform with 6 degrees of freedom. In Proc. of the Institution of Mechanical Engineers, 1965. [6] B. Dasgupta and T. Mruthyunjaya. The Stewart platform manipulator: a review. Mechanism and Machine Theory, 35(1):15–40, 2000. [7] A. Valjamae, P. Larsson, D. Vastfjall, and M. Kleiner. Vibrotactile enhancement of auditory-induced self-motion and spatial presence. Journal of the Audio Engineering Society, 54:954–963, 2006. [8] B. E. Riecke, J. Schulte-Pelkum, F. Caniard, and H. H. Bulthoff. Towards lean and elegant self-motion simulation in virtual reality. In Proc. IEEE Virtual Reality VR, pages 131–138, 2005. [9] N. Ouarti, A. Berthoz, and V. Lecuyer (2010). Haptic Motion Simulator: Improved Sensation of Motion in Virtual Worlds with Force Feedback. JVRC (Joint Virtual Reality Conference), 2010. [10] N. Ouarti, A. Lecuyer, and A. Berthoz (2011). Method for simulating specific movements by haptic feedback, and device implementing the method (WO/2011/032937).
Experimental Prototype Merging Stereo Panoramic Video and Interactive 3D Content in a 5-sided CAVE™ F.P. Luque1, L. Piovano1, I. Galloso1, D. Garrido1, E. Sánchez1, C. Feijóo1
1 Center for Smart Environments and Energy Efficiency (CEDINT), Technical University of Madrid, Madrid, Spain [franluque, lpiovano, iris, dgarrido, esanchez, cfeijoo]@cedint.upm.es
Abstract. Immersion and interaction have been identified as key factors influencing the quality of experience in stereoscopic video systems. An experimental prototype designed to explore the influence of these factors in 3D video applications is described here.1 The focus is on the real-time insertion algorithm of new 3D models into the original video streams. Using this algorithm, our prototype aims to explore a new interaction paradigm – similar to the augmented reality approach – with 3D video applications.
1 This work has been partially funded by the Spanish Ministry of Industry, Tourism and Commerce through the project TSI-020302-2010-61.
Keywords: 3D panoramic video, stereo matching, real-time insertion algorithm, immersive and interactive 3D video application.
1. Introduction A widely pursued objective of entertainment systems has been to combine the realism of video or cinema with natural interaction with scene contents, in order to provide an immersive user experience overcoming media limitations [1]. In this context, this paper proposes a prototype developed as part of a research project called ImmersiveTV [2]. Our system aims to provide multimedia experiences that merge stereoscopic videos and interactive 3D content over a 5-sided CAVE™ infrastructure (I-Space). The main functionalities identified in our system are: (i) estimating depth from a stereoscopic video; (ii) merging additional synthetic content supporting real-time interaction.
2. Stereo depth map estimation Without loss of generality, both synthetic and real scenes can be used for our display purposes (see Figures 1 and 2). In both cases, a fundamental requirement is
that the 3D content has to be provided through a so-called depth map, that is, a bi-dimensional image where each real point of the scene (represented by a pixel in the image) is assigned a value expressing how far it is from the observer.
Figure 1. Case study frames of synthetic videos with their depth maps.
Figure 2. Panoramic frame (about 230° of visual horizon) of the city of Barcelona, resulting from stitching three different left views together. This is the sample used throughout our experiments.
While the 3D information inherently stored in a synthetic scene model can easily be extracted through geometrical projection, real scenes (commonly a pair of stereo images) must be further processed in order to extract their depth content and infer their spatial geometry. To do so, it is mandatory to rely on suitable stereo matching algorithms (see, e.g., the Middlebury stereo evaluation, http://vision.middlebury.edu/stereo/). Even if stereo matching issues have been extensively analysed in the last decades (see [3,4,5,6]), this research field is still open to innovation, as several challenges are far from being solved. In particular, outdoor scenes (see Figure 2), due to the complex dynamics of the environment, exhibit a number of them (e.g., different weather conditions and light effects, as well as moving objects, multiple occlusions and larger disparity ranges) that heavily affect the final results [7]. In this context, we rely on the stereo algorithm described in [8], which is explicitly tuned for outdoor environments. However, we add a pre-processing step in order both to limit the effect of different illumination conditions and to detect reliable features (e.g., corners, segments) that enhance the disparity estimation process. Our rationale considers competing information sources, once integrated in a cooperative framework, as a winning strategy to limit the presence of outliers and therefore make the final result more robust and visually appealing.
3. Objects insertion and application management Using the depth information of both the computed depth-map sequence and the synthetic objects, we have implemented an algorithm that performs the merging. For each frame cycle, a new image pair with the additional objects inserted is generated to replace the original frames. In order to prevent unwanted performance drops, the whole process must be completed within the time imposed by the source video frame rate (25 fps sets a maximum processing time of 40 ms per cycle). The components of the algorithm are represented in the next diagram (Figure 3) and are described below.
Figure 3. One cycle of the insertion algorithm and preliminary results in each module.
The video player module takes the stereo recording as input, with the depth information and the separate views for each eye. As a result of the whole processing chain, special care must be taken to avoid possible mismatches between the two streams of the stereoscopic video. For that purpose, the module has been programmed to change the playback speed of the affected video stream until it becomes synchronised again. The 3D engine manages all the synthetic objects, which can be dynamically modified and repositioned through user interactions. Consequently, new object occlusions are generated and have to be computed in real time according to the video scene being played. This issue has been solved by modelling a planar displacement surface whose mesh is morphed according to the current depth-map texture. As a result, objects appear hidden in those areas whose vertices occupy a position further away than the displaced plane. Finally, the texture renderer creates a picture of the view camera frustum with the merged objects, which is superimposed over the original frame for each eye perspective.
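The following CPU-side sketch illustrates the idea of hiding inserted objects behind the depth-displaced video surface; the real implementation works on the displaced plane mesh inside the 3D engine, and the near/far range below is an assumption.

```python
# Illustrative occlusion test against a depth-displaced video surface.
import numpy as np

def displaced_surface_distance(depth_map, near=1.0, far=50.0):
    """Map a normalised depth map (0 = near, 1 = far) to metric distances from the camera."""
    return near + depth_map * (far - near)

def inserted_object_visibility(depth_map, object_depth, near=1.0, far=50.0):
    """Per-pixel visibility mask: an inserted object is drawn only where it is closer
    than the video geometry reconstructed from the depth map."""
    scene_distance = displaced_surface_distance(depth_map, near, far)
    return object_depth < scene_distance
```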
4. Results and conclusions Preliminary tests have been run on an Intel Xeon 3.2 GHz CPU with 6 GB of RAM and an nVidia GTX 580 graphics card. Under these conditions, the object insertion algorithm has achieved, for synthetic videos, an output of 75 frames per second, which is approximately three times the frame rate needed to prevent video stutter. On the other hand, the stereo reconstruction still suffers from a lack of precision in some regions, which makes the object insertion more difficult. This is mainly due to the lighting conditions and to noisy, low-contrast frames. For these reasons, our next steps will focus on a double objective: enhancing the depth estimation algorithm and performing this task in real time.
References [1] F. Isgro, E. Trucco, P. Kauff, and O. Schreer, “Three-Dimensional Image Processing in the Future of Immersive Media,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 3, pp. 288–303, Mar. 2004. [2] http://www.immersivetv.es/ [3] D. Scharstein, R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” International Journal of Computer Vision, no. 1, pp. 131–140, 2002. [4] B. Zitová and J. Flusser, “Image registration methods: a survey,” Image and Vision Computing, vol. 21, no. 11, pp. 977–1000, Oct. 2003. [5] M. Gong, R. Yang, L. Wang, and M. Gong, “A performance study on different cost aggregation approaches used in real-time stereo matching”, International Journal of Computer Vision, 75(2), pp. 283-296, 2007. [6] S. M. Seitz, B. Curless, J. Diebel, D. Scharstein, and R. Szeliski, “A Comparison and Evaluation of Multi-View Stereo Reconstruction Algorithms”, International Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 519-528, 2006. [7] L. Nalpantidis and A. Gasteratos, “Stereo vision for robotic applications in the presence of non-ideal lighting conditions,” Image and Vision Computing, vol. 28, no. 6, pp. 940–951, Jun. 2010. [8] A. Geiger, M. Roser, and R. Urtasun, “Efficient Large-Scale Stereo Matching,” in Computer Vision – ACCV 2010, Lecture Notes in Computer Science, 2011, vol. 6492, pp. 25–38.
Tracking multiple humans in large environments: proposal for an indoor markerless tracking system. C. A. Ortiz, B. Rios, D. Garrido, C. Lastres, I. Galloso Center for Smart Environments and Energy Efficiency (CEDINT), Madrid, Spain
[email protected]
Abstract. In this work, a design for a markerless tracking system based on cheap, off-the-shelf technology is proposed. The main goal is the accurate detection of target position and body movements even in adverse lighting conditions, with a focus on indoor areas. The architecture is designed to support large zones with arbitrary geometry, thus making it suitable for different scenarios.
Keywords: Interaction techniques, Kinect, Motion tracking, Motion capture
1. Introduction Human motion capture, or human tracking, is the process whereby the movements of one or more users are detected, processed and registered over a period of time. When real-time processing is supported by the system, the term pose estimation is used, while human motion analysis is employed when movements are processed over time. The system presented in this paper is conceived as a markerless tracking system based on cheap, off-the-shelf technology, capable of capturing and storing human movements for later use in any kind of human tracking application. The main goal is the accurate detection of target position and body movements in any lighting condition, minimising environment occlusions, with a focus on indoor areas. The architecture is designed to support large zones with arbitrary geometry and rooms of different dimensions, a feature that allows easy installation in different scenarios.
2. System design Human tracking is a widely exploited research field backed by the great number and variety of its applications. Moeslund and Granum in [1] and Moeslund, Hilton and Kruger in [2] classify these applications into three areas: surveillance, control and analysis.
The surveillance area covers applications related to automatic detection and tracking in crowded locations monitored for specific actions, so a wide-area tracking system is needed. Control applications are those where movements are used for human–machine interaction, so a real-time tracking system is mandatory in order to keep the interaction fluid. Finally, the analysis area contains applications which make a detailed analysis of detected movements, focusing on small areas and high-precision human motion capture. The features of the environments in which our system will be placed – large zones with different spatial arrangements – make a modular, flexible, configurable architecture, applicable to diverse design contexts, necessary. In order to track these zones and overcome the limited coverage of the sensor chosen for the project, it is necessary to create a grid of sensors with several nodes. When a user is outside the field of view of a specific sensor, he will be captured by another camera and the system will keep the data coherent, registering his movements accurately over time. Finally, it is intended to capture the movements in a non-invasive way, to allow users to move in a natural manner. This project does not initially cover the implementation of any tool for data processing, but it will be capable of generating an output file containing all the information required by other specific analysis software. The proposed architecture is based on a grid of multiple computers, with several depth-processing cameras (Kinect™) connected to each computer, as can be seen in Figure 1. Every computer works as a single tracker, independently of the number of Kinects™ attached to it. Communication between computers is done using VRPN [3]; the use of this standard makes possible the coexistence and collaboration with other systems, independently of the technology they use. Kinect™ (and any device based on PrimeSense™ technology, such as the Asus Xtion™) is an active range camera based on structured light. A structured infrared laser projector emits a pre-generated pattern of infrared dots, and a monochrome CMOS camera computes the depth of the scene by comparing the reflected pattern with a hard-coded pattern stored in the sensor's chip logic.
Figure 1: System structure.
Kinect™ is designed as an accessory for Xbox 360™ console games, but the availability of a cheap range camera has fostered many independent developments, usually amateur software intended to test the device's capabilities.
Kinect™ has two different development platforms: the main distributor's platform (Microsoft drivers and Microsoft SDK), and the hardware developer's platform (PrimeSense™ drivers and OpenNI™). The general framework structure and completeness, and the less restrictive licence, are our main reasons for using OpenNI™ as our development platform, despite a simplified skeleton and sometimes problematic Kinect™ drivers; a Microsoft® SDK version is not ruled out.
3. Current state. Preliminary work has focused on the design and viability evaluation of the system. As previously mentioned, Kinect™ is designed as a videogame device, developed for a specific use scenario, so some hardware and software restrictions must be overcome to adapt it to other uses. The first restriction in a single-computer, multiple-Kinects™ configuration is the USB 2.0 bus bandwidth limitation. Kinect™ data streams saturate the USB transmission capacity, so each device must be connected to a different USB controller. Further testing using USB PCI cards and/or USB 3.0 must be done. The main source of infrared interference in an indoor multiple-Kinect™ environment is the Kinects™ themselves. Quality degradation of depth images in a multiple-Kinects™ environment has been studied, and hardware solutions are proposed in [4]. However, interference between the Kinects'™ infrared emitters, specifically in our multiple-device scenario, has been tested and no significant loss in tracking speed or accuracy has been found. Although raw depth data can easily be obtained in a multiple-Kinects™ configuration (as can be seen in Figure 2), bugs in the PrimeSense drivers make it impossible to associate a specific depth stream with a specific scene analysis module in a single-program environment. In order to overcome this limitation, an independent process is used for each device connected to the computer.
Figure 2: Raw OpenNI™ depth stream, with a recognised target; its skeleton joints drawn. Same frame in frontal (left) and upper-right rotated (right) positions.
Communication between processes is also carried out using VRPN. As a result, the architecture loses the distinction between local and remote levels; the tracker client can be used to connect local devices or remote trackers.
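The sketch below illustrates the one-process-per-device workaround; it is not the project's code. Device access is omitted, and a multiprocessing queue stands in for the VRPN transport actually used between processes.

```python
# One OS process per depth sensor; each process would run its own OpenNI/NITE user
# tracker and publish joint data to the grid (here via a queue, in the real system via VRPN).
import multiprocessing as mp
import time

def device_tracker(device_id, out_queue):
    # open the depth sensor and start user tracking here (driver-specific, omitted)
    while True:
        joints = {"device": device_id, "user": 1, "head": (0.0, 1.7, 2.5)}  # placeholder sample
        out_queue.put(joints)
        time.sleep(1.0 / 30.0)                   # roughly one skeleton update per depth frame

if __name__ == "__main__":
    queue = mp.Queue()
    workers = [mp.Process(target=device_tracker, args=(i, queue), daemon=True) for i in range(3)]
    for w in workers:
        w.start()
    for _ in range(10):                          # the grid-level client merges all device streams
        print(queue.get())
```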
4. Further development. This paper presents a simple, modular design for markerless tracking systems oriented to large indoor areas. Currently, the basic one-computer multiple-Kinects™ tracker has been successfully implemented, and the integration of trackers into the grid architecture is starting. The system is going to be validated in two different environments: a laboratory for studying the mobility of elderly people, designed according to the Living-Lab "SeniorLab®" premises, and a surgical training centre. The first one is a "social research home" created to carry out studies of habits in normal living conditions. In the second scenario, the movements of surgeons and assistants operating in a room will be captured. In both cases, the markerless tracking system will be used to evaluate the accessibility of the environment and the ergonomics of the furniture and/or surgical equipment. Some development lines are currently being evaluated:
• The system is expected to be used in very large tracking areas with many targets. More accurate identification is an interesting addition in those scenarios, so the development and integration of a more powerful system based on facial identification is needed.
• Target identification between trackers remains unresolved. A simple biometric identification system based on the skeleton proportions reported by Kinect™ is being designed. This recognition module is mainly intended to test the stability of the NITE™ joint positions, but a performance good enough for use in environments with a low number of targets (4-6 persons) is expected.
• The occlusion-avoiding capability of a multiple-camera architecture allows an improvement in the tracking of complex objects. Hand and face motion capture are obvious candidates for extending the system.
References [1] Moeslund, T. B.; Granum, E.; (2001) A survey of computer vision-based human motion capture. Comput. Vis. Image Underst. 81(3), 231–268. [2] Moeslund, T. B.; Hilton, A.; Krüger, V.; (2006) A survey of advances in vision-based human motion capture and analysis. Comput. Vis. Image Underst. 104(2), 90–126. [3] Taylor II, R. M.; Hudson, T. C.; Seeger, A.; Weber, H.; Juliano, J.; Helser, A. T.; (2001) VRPN: a device-independent, network-transparent VR peripheral system. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology (VRST '01), ACM, pp. 55–61. [4] Schröder, Y.; Scholz, A.; Berger, K.; Ruhl, K.; Guthe, S.; Magnor, M.; (2011) Multiple Kinect Studies. Tech. Rep. 09-15, ICG.
Usage of Haptics for Ergonomic Studies J. Perret Haption, Laval, France
Abstract: Digital human model simulation is increasingly used in industry for ergonomic studies. By combining it with motion capture systems, it is possible to analyse postures in real time, thus making an interactive assessment possible. However, the effect of forces is missing. In this paper, we describe how haptic technology can be implemented in combination with digital human simulation, in order to achieve a realistic ergonomic evaluation of activities.
Categories and Subject Descriptors (according to ACM CCS): H.5.2 [User Interfaces]: Haptic I/O; I.3.6 [Methodology and Techniques]: Ergonomics – Interaction techniques
1. Introduction Today, most ergonomic studies in industry are still performed with a video camera recording the motion of real operators inside a real workshop. In this process, the quality of the results depends heavily on the ability of the ergonomist to assess the postures from the video footage. With the introduction of digital human simulation into commercial CAD/CAM (Computer Aided Design / Computer Aided Manufacturing) products, ergonomic analyses can be performed much earlier during manufacturing process planning, before any piece of real equipment is available. The combination with motion capture systems and VR (Virtual Reality) displays creates the possibility of interactive ergonomic assessment sessions, where all stakeholders (design engineers, ergonomists, manufacturing planners, factory managers) can interact and optimise factory design in real time. However, in such simulations the ergonomic assessments are still posture-based, i.e. only postural information (joint angles, vertical position of the arms with respect to the heart, etc.) is taken into account. In some cases, the load carried by the operators can be added in a later post-processing phase and used for computing forces in the operator's lower back. In this paper, we show how haptic technology can improve the interactive simulation process and lead to real-time evaluation of joint forces.
2. Method 2.1 Digital human model The method proposed here is based on work previously done with the RTI plug-in for Delmia V5, which relies on our IPSI (Interactive Physics Simulation Interface) technology [Hap12]. As a consequence, the human model is considered as a force-controlled robot, built of rigid segments attached with joints. The typical way to build the human model is to start from the waist, considered as the root segment, and describe a hierarchical kinematic structure ending with five "end-effectors" (two hands, two feet, one head). The information coming from the mocap system is attached to individual segments through a virtual coupling (6-DOF spring/damper system), thus delivering a force/torque input to the robot. The weight of the robot segments is set to zero for the interactive part of the simulation, otherwise the robot could never reach the posture of the operator.
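A reduced sketch of such a coupling is given below, limited to the translational part for brevity; the stiffness and damping values are hypothetical, not those used in IPSI.

```python
# Translational part of a virtual coupling: a spring/damper pulling a human-model
# segment towards its mocap target, delivering a force input to the force-controlled robot.
import numpy as np

def coupling_force(target_pos, segment_pos, segment_vel, k=500.0, d=20.0):
    """Return the force applied to the segment (N), given positions (m) and velocity (m/s)."""
    target_pos, segment_pos, segment_vel = map(np.asarray, (target_pos, segment_pos, segment_vel))
    return k * (target_pos - segment_pos) - d * segment_vel
```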
2.2 Explicit forces Here we designate as “explicit” the forces coming from the environment, which can be expressed analytically with no more information than the current relative positions of the objects in the physical scene. For example, the weight of the object carried by the operator is an explicit force. The same is valid for a weight-lifting device using springs or counter-weights. The explicit forces do not need any specific treatment and can be introduced directly into the haptic rendering stage.
2.3 Force profiles When working on an assembly task, the operator manipulates many objects which are not easily described with a simple physical model. For example, the forces and torques displayed by a plastic clip or an electric screwdriver do not derive from their geometry and mass. In order to simulate their behaviour in an accurate way, a multiphysics model would be necessary, which creates new difficulties. A simple way to handle such objects is to describe a "force profile", applicable along specific degrees of freedom and within specific circumstances. The force profile can be measured on a real part or piece of equipment, as shown below. But it can also be derived from a catalogue of typical force profiles and adapted with a small number of parameters, such as the maximum force and the total displacement. Force profiles are not always simple; therefore they do not fall into the category of "explicit forces". Because of possible hysteresis and on/off controls, it is often necessary to manage a small state automaton for each force profile. Some profiles might also include damping or inertia effects. However, in a "human-centred" approach, only those force profiles which can actually be felt by the real operator are of interest. Therefore, we consider that the output of the force profiles can be fed directly into the haptic rendering stage.
Figure 1: Force profile of a push-button, based on real measurement (courtesy of Valeo)
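The sketch below shows the kind of small state automaton such a profile requires, using a snap-fit clip as an example; the breakpoints and force values are invented for illustration and are not the measured data of Figure 1.

```python
# Hypothetical force profile with hysteresis for a snap-fit clip along its insertion axis.
def clip_force(displacement_m, state):
    """Return (resistive force in N, new state) given the insertion displacement."""
    if state == "free":
        force = 4000.0 * displacement_m              # resistance ramps up while pushing in
        if displacement_m > 0.004:                   # past 4 mm the clip snaps
            state = "clipped"
    else:  # "clipped": only a small residual force, until the clip is pulled back out
        force = 500.0 * max(displacement_m - 0.004, 0.0)
        if displacement_m < 0.001:                   # hysteresis: release below 1 mm
            state = "free"
    return force, state
```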
2.4 Implicit forces Contrary to the above, "implicit" forces cannot be expressed analytically or defined within a profile. Contact forces fall into that category, as well as constraint forces (joints). Implicit forces need to be computed by the physics engine solver. We will not enter into a discussion of problem-solving algorithms in this article; we will just point out that, in most physics engines, the implicit forces which are computed are not "realistic" at all. Basically, their values tend to be whatever is necessary to enforce the physical constraints (non-interpenetration or bilateral joint), no more. Therefore, implicit forces should never be fed into the haptic rendering stage. On the contrary, we ignore implicit forces and instead use a simple force control scheme: we attach the haptic device to a virtual coupling, which delivers a control force to the human model in one direction, and feeds back information about physical constraints (such as contact with obstacles) in the other.
2.5 System architecture The resulting software architecture is shown in the graph below (Figure 2). The physics engine is responsible for detecting collisions between objects in the scene (including the human model), and for solving the unilateral (contact) and bilateral (joint) constraints. Explicit forces and force profiles are added in the haptic rendering stage, which can run at a higher frame rate.
3. Results The first result of the method proposed here is a higher immersion for the operator performing a task in an industrial context (assembly, maintenance, etc.). As a consequence, the operator can give a better feedback on the difficulty of the operation to be performed. For example, if he needs to insert a clip with his arm in an
uncomfortable position, he might be unable to exert the needed force, and the force profile of the clip will never reach the "clipped" state. A more interesting result for our purpose here is the force output to the haptic device, which reflects the work load for the human model. That information is available in real time, and it can be used to feed a biomechanical model of the human body in order to compute joint torques. One caveat of the method is the co-localisation of the haptic and mocap spaces: if the position of the haptic device in the mocap space is not measured precisely, then the position errors will add up and create large force residuals. The best way to solve the problem is to add a mocap target on the haptic device and to insert it as a position offset in the haptic rendering stage.
Figure 2: System architecture
4. Conclusion In this article, we have proposed a method for implementing haptics in the context of digital human simulation for ergonomic studies. Our method can not only improve the immersion for the operator performing the task, but also provide work load values in real time for biomechanical analysis.
References [Hap12] http://www.haption.com, follow “Products/Software/RTI Delmia V5”
Using airplane seats as front projection screens
Panagiotis Psonis, Nikos Frangakis, Giannis Karaseitanidis
Institute of Communication and Computer Systems
panpsonis, frangakis, [email protected]
Abstract. A case study on the feasibility of using airplane seats as projection screens. Projector position, image correctness and quality are investigated.
Keywords: back projection, airplane, projector.
1. Introduction Within the framework of the EU project VR-Hyperspace, which investigates technologies that could be deployed in the aircraft of the future in 2050, we are performing a case study for the future air cabin based on using projectors to transform the backs of the seats into interactive screens. The rationale behind this research is that in the near future computer screens will be ubiquitous and everyday objects will be used as interaction devices. This trend will certainly reach the airplane interior, although the adoption of new technologies in the aviation field always lags technology advances by a few years. Moreover, the revolution of foldable screens has already begun, and we believe that in the near future screens will be able to cover entire surfaces and transform them into interactive displays. However, taking into consideration the aviation industry's lag in adopting new technologies, the current immaturity of the technology and the probably high cost of flexible screens, we believe that a more feasible, intermediate solution is the use of projectors able to project onto the back part of the airplane seats.
2. Technical Setup Our setup includes two rows of double seats, where the users are seated only in the back row, while the back part of the front row is used for the projection. The two seat rows are placed in front of a powerwall that can be used to display the entire cabin for better user immersion. The airplane seats were taken from a thirty-year-old Boeing 747. For the projector, we chose a wide-field, 3D-capable BenQ projector with a brightness of 2500 lumens, which is able to project an image of up to 55 inches from a distance of one meter. The seats were positioned at the usual distances found in most airliners.
The projector was positioned on a pole, half a meter behind and half a meter above the back row of seats, and was tilted 30 degrees. Using this setup we managed to make the projector cover the full back part of the two front seats. Finally, we included an optical tracker in the setup, positioned at a height of two meters, to the side and in front of the front row; it will be used in future work to adjust the projected view to the end user's field of view.
Figure 1. Hardware Setup.
3. Advantages and Disadvantages of using projectors As mentioned in the previous section, there are two main advantages of using projectors to transform the back of a seat into a computer screen and then into an interaction device. The first advantage is that no modification of current airplane seats is needed, since the projection is done directly onto them. The shape and inclination of the seat result in a deformation of the projected image, which has to be corrected in software. Specifically, by using the 3D model of the seat and by tracking its inclination, it is possible to correct the projection skew. The second advantage is that there is no need to place expensive foldable screens on the seats. A single wide-angle projector could cover up to three seats, making this an affordable solution. The main disadvantages of using projectors are, firstly, their need for cooling, because of the lamp heating up, and secondly the image quality, which is inherently degraded compared to a normal computer screen.
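For a locally planar approximation of the seat back, the skew correction can be sketched as a simple homography pre-warp. The snippet below is only an illustration of that idea using OpenCV; the corner coordinates, resolution and file names are made up, and the real correction would use the full 3D seat model and the tracked inclination.

    import cv2
    import numpy as np

    # Simplified sketch of the skew correction, assuming the seat back is locally
    # planar and that the positions where the four corners of the projector image
    # land on the seat are known (e.g. from the seat model plus the tracked
    # inclination). All corner values below are made up for illustration.
    W, H = 1280, 720                      # projector resolution (assumed)
    src = np.float32([[0, 0], [W, 0], [W, H], [0, H]])                  # ideal image corners
    dst = np.float32([[90, 40], [1190, 80], [1150, 700], [130, 680]])   # where they land on the seat

    # Pre-warp the content with the inverse mapping so that, after the physical
    # projection distorts it, it appears rectangular on the seat back.
    H_prewarp = cv2.getPerspectiveTransform(dst, src)

    frame = cv2.imread("content.png")     # hypothetical content frame
    if frame is not None:
        corrected = cv2.warpPerspective(frame, H_prewarp, (W, H))
        cv2.imwrite("content_prewarped.png", corrected)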
4. Future Work Our future work involves using the head tracking system to implement a virtual window application, which will allow users to use the back of the front seat as a window to another world (see the sketch below). Head tracking is crucial in such an application, because the user's viewpoint changes as the head moves around. Moreover, tracking will be used to detect the seat's inclination and shape and then to compensate the image projection, so that it appears correct on the seat's back. This process should be automated, so that the inclination, which can be changed manually by the passenger, is tracked automatically. We also aim to experiment with alternative projector positions, including the armrest and the space between the upper parts of the seats. Another field of applications we aim to explore is gesture-based interaction, which will allow the user to control the environment and the application using only hand gestures. Last but not least, we will fit touch screens to the projection area in order to investigate this interaction metaphor as well. The latter two applications will be coupled with the development of proper use-case scenarios drawn from either the communication or the gaming field.
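A rough sketch of the geometry behind such a virtual window is given below: an off-axis perspective frustum is recomputed from the tracked head position relative to the projection surface. This is a generic illustration under assumed screen dimensions, not the project's implementation.

    import numpy as np

    # Minimal sketch of the head-tracked "virtual window" idea: an off-axis
    # perspective frustum computed from the eye position, expressed in the frame
    # of the projection surface on the seat back (origin at the surface centre,
    # x right, y up, z towards the viewer). Width/height in metres are assumed.
    def off_axis_frustum(eye, screen_w=0.5, screen_h=0.35, near=0.05, far=100.0):
        """Return (left, right, bottom, top, near, far) for a glFrustum-style projection."""
        ex, ey, ez = eye                 # eye position relative to screen centre, ez > 0
        scale = near / ez                # project the screen edges onto the near plane
        left   = (-screen_w / 2.0 - ex) * scale
        right  = ( screen_w / 2.0 - ex) * scale
        bottom = (-screen_h / 2.0 - ey) * scale
        top    = ( screen_h / 2.0 - ey) * scale
        return left, right, bottom, top, near, far

    # As the passenger's head moves to the right, the frustum shifts and the
    # "window" parallax follows.
    print(off_axis_frustum(np.array([0.0, 0.0, 0.6])))
    print(off_axis_frustum(np.array([0.1, 0.0, 0.6])))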
Figure 2. Calibrating the projector
Figure 3. Test Pattern Appearance
References [1] VR-Hyperspace (2011-2013), http://www.vr-hyperspace.eu. “The research leading to these results has received funding from the European Community's Seventh Framework Programme FP7/2007-2013 under grant agreement no. AAT-285681 "VR-Hyperspace” [2] POLkEA (2012), http://www.polkeoa.gr/ The Airplane seats are a kind offer of POLkEA.
A Comparative Evaluation of Two 3D Optical Tracking Systems
TUGRUL TASCI1, NEVZAT TASBASI1, ANTON VELICHKOV2, UWE KLOOS2, GABRIELA TULLIUS2
1 Sakarya University, Faculty of Informatics, Sakarya, Turkey
2 Reutlingen University, Fakultät Informatik, Reutlingen, Germany
1 ttasci; [email protected]
2 Anton.Velickhov; Uwe.Kloos; [email protected]
Abstract. Accuracy evaluation of optical tracking systems is typically performed by comparing positions calculated from sensory data with a known coordinate system. A special measuring environment is usually required in order to obtain data from these systems. In this study, a comparative evaluation of two 3D optical tracking systems is presented: WorldViz Precision Position Tracking (PPT) and Low Cost Tracking (LCT). For each 3D optical tracking system under evaluation, an extraction algorithm was implemented to ensure the acquisition of exact positions. The results indicated that the WorldViz PPT system achieved more precise, accurate and robust tracking performance than its peer. The findings showed that the evaluation set-up is valid in general and can be used for further investigations of such systems.
Keywords: artificial, augmented and virtual realities, evaluation/methodology.
1. Motivation This study was performed in the virtual reality laboratory (VRLab) of Reutlingen University (RU), which was established in 2005 for educational and research purposes. The VRLab was equipped with a commercial system called WorldViz for 3D optical tracking of objects. It has been argued that the WorldViz tracking system is too expensive and that its use for demonstrations is not very convenient [1]. The authors of [1] therefore presented a low cost tracking (LCT) system aiming to perform 3D object tracking with low-cost hardware. LCT is a stereo tracking system developed by RU VRLab researchers for educational purposes. The main goal of the LCT system is to reconstruct the 3D coordinates of multiple infrared markers which are moved by the subject in a pre-defined scene. The system is based on calibration, undistortion and rectification of the camera images. In this study, a special measuring environment was designed to comparatively evaluate the accuracy of these two 3D optical tracking systems.
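The reconstruction step of such a stereo tracker can be sketched roughly as follows: the detected infrared blobs are undistorted using the camera calibration and then triangulated into a 3D point. This is an illustrative sketch with placeholder calibration values, not the LCT code itself.

    import cv2
    import numpy as np

    # Rough sketch of stereo reconstruction for an infrared marker (illustrative,
    # not the VRLab implementation). All calibration values are placeholders.
    K1 = K2 = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])   # intrinsics
    d1 = d2 = np.zeros(5)                                               # distortion coefficients
    R = np.eye(3)                                                       # cam2 rotation w.r.t. cam1
    T = np.array([[-0.2], [0.0], [0.0]])                                # 20 cm baseline, cam2 to the right

    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                       # [I | 0] in normalized coords
    P2 = np.hstack([R, T])                                              # [R | T]

    def triangulate(blob1, blob2):
        """blob1/blob2: pixel coordinates of the same IR marker in the two images."""
        n1 = cv2.undistortPoints(np.float32([[blob1]]), K1, d1).reshape(2, 1)  # normalized coords
        n2 = cv2.undistortPoints(np.float32([[blob2]]), K2, d2).reshape(2, 1)
        X = cv2.triangulatePoints(P1, P2, n1, n2)                       # homogeneous 4x1
        return (X[:3] / X[3]).ravel()                                   # metric 3D point in cam1 frame

    print(triangulate((340.0, 250.0), (300.0, 250.0)))                  # roughly [0.1, 0.05, 3.5]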
2. Experimental Setup The experimental setup was placed in a 4 x 4 m stage lying within the sight of both systems. A special mechanism capable of moving along three orthogonal directions (Fig. 1) was developed in order to obtain the real-world locations of the 3D point coordinates tracked by the systems. The mechanism consists of a board that can be moved along a rail and an infrared LED panel mounted on this board, which can itself be moved in a planar fashion (Fig. 2). The measurements were performed by tracking the positions of the 16 evenly spaced LEDs mounted on this panel. An electronic circuit located behind the infrared LED panel controls the flashing of the LEDs.
Figure 1. Illustration of the measurement setup
Figure 2. Measurement board with infrared panel
3. Method 3.1 Data Acquisition The data used in measuring the accuracy of the two tracking systems was obtained by capturing known positions of the LEDs on the infrared panel. In order to process accurate data and obtain accurate results, it was verified beforehand that both the LCT and the WorldViz PPT tracking systems were calibrated [1][3]. Both systems under evaluation support the Virtual Reality Peripheral Network (VRPN), which provides a device-independent and network-transparent interface to virtual reality peripherals [2]. The VRPN messages coming from these servers were received simultaneously and stored in a database using a client application developed for this work.
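A rough Python sketch of such an acquisition client is given below (the actual client was written in C# with Rx, see Section 5). It assumes the Python bindings distributed with VRPN and invented server names; the callback payload layout may differ.

    import sqlite3
    import vrpn   # Python bindings distributed with VRPN (assumed available)

    # Illustrative acquisition client: positions reported by the two VRPN servers
    # are written into one SQLite table so they can be compared later. Device
    # names and the payload keys are assumptions.
    db = sqlite3.connect("tracking.db")
    db.execute("CREATE TABLE IF NOT EXISTS samples "
               "(system TEXT, sensor INTEGER, x REAL, y REAL, z REAL)")

    def make_handler(system_name):
        def on_position(userdata, data):
            x, y, z = data["position"]                     # payload layout is an assumption
            db.execute("INSERT INTO samples VALUES (?,?,?,?,?)",
                       (system_name, data["sensor"], x, y, z))
        return on_position

    trackers = []
    for name, device in (("worldviz", "PPT0@ppt-server"), ("lct", "Tracker0@lct-server")):
        t = vrpn.receiver.Tracker(device)                  # hypothetical device names
        t.register_change_handler(None, make_handler(name), "position")
        trackers.append(t)

    while True:                                            # poll both servers in one loop
        for t in trackers:
            t.mainloop()
        db.commit()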
3.2 Data Extraction In the data extraction phase, the aim was to extract the precise positions of the 16 infrared LEDs mounted on the panel. Because the acquired data contains changeover positions between consecutive LEDs and unwanted light blobs close to the region of interest, additional processing had to be performed in order to remove redundant data and to obtain the essential positions. For this purpose, a separate position extraction algorithm was implemented for each system (see the sketch below).
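The following is only an illustration of the kind of processing involved (the actual per-system algorithms were implemented in T-SQL, see Section 5): consecutive samples are grouped into dwell segments, short changeover segments are discarded, and each remaining segment is reduced to its mean position. The thresholds and the synthetic data are assumptions.

    import numpy as np

    def extract_led_positions(samples, motion_eps=0.005, min_samples=20):
        """samples: (N, 3) array of consecutive 3D positions; returns one mean per dwell."""
        positions, segment = [], [samples[0]]
        for prev, cur in zip(samples[:-1], samples[1:]):
            if np.linalg.norm(cur - prev) < motion_eps:    # still dwelling on the same LED
                segment.append(cur)
            else:                                          # changeover to the next LED
                if len(segment) >= min_samples:
                    positions.append(np.mean(segment, axis=0))
                segment = [cur]
        if len(segment) >= min_samples:
            positions.append(np.mean(segment, axis=0))
        return np.array(positions)

    # Synthetic example: two LEDs 5 cm apart, with a little jitter on each sample.
    rng = np.random.default_rng(0)
    led_a = np.array([0.0, 0.0, 1.0]) + rng.normal(0, 5e-4, (50, 3))
    led_b = np.array([0.05, 0.0, 1.0]) + rng.normal(0, 5e-4, (50, 3))
    print(extract_led_positions(np.vstack([led_a, led_b])).round(3))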
4. Results According to the measurements, it can be seen that WorldViz PPT tracks all the points within the target volume (Fig. 3), while LCT tracks only part of the same volume. Also, for a single measurement along the Z direction (Fig. 4), the WorldViz data forms a shape similar to a square, whereas the LCT data looks like a distorted rectangle. Additionally, for changes along the X and Y directions, the WorldViz data produces a step-function-like graph (Fig. 5.a and 5.b), while the LCT data graph contains some distortions (Fig. 6.a and 6.b).
Figure 3. Measured volume (LCT: blue; WorldViz PPT: red + blue)
Figure 4. Tracked points in 2D, LCT vs. WorldViz PPT
Figure 5.a. WorldViz PPT - Changes in X direction
Figure 5.b. WorldViz PPT - Changes in Y direction
Figure 6.a. LCT - Changes in X direction
Figure 6.b. LCT - Changes in Y direction
5. Implementation The implementation of this work was carried out in C# with Visual Studio .NET. A VRPN client application was developed and used with the support of the Microsoft Reactive Extensions (Rx) library [4] in order to ensure simultaneous data acquisition from multiple VRPN servers. The data extraction process was implemented using T-SQL scripts with a nested cursor structure.
6. Conclusion In this study, two 3D optical tracking systems (WorldViz PPT and LCT) were evaluated. An experimental setup was built in the virtual reality laboratory (VRLab) of Reutlingen University (RU) for the evaluation process. A pre-defined volume was targeted for tracking 3D points. In total, 312 separate measurements were carried out in order to obtain 4,992 distinct 3D point coordinates. The measured 3D point position data was transmitted through VRPN over the network and subsequently saved to a database. For each 3D optical tracking system, an extraction algorithm was implemented to acquire the exact positions. The results indicated that the WorldViz PPT system achieved more precise, accurate and robust tracking performance than its peer. It was also found that, while producing recoverable distortions, the LCT system was able to fully track points within a particular region of the scanned volume and to partially track points located in the neighborhood of this region. However, it was concluded that more satisfactory results might be obtained for the LCT system by relocating the cameras and modifying the underlying stereo reconstruction algorithm. The findings showed that the evaluation set-up is valid in general and can be used for further investigations of such systems.
References
[1] Hermann E., Meißner C., Tullius G., Kloos U., "Low Cost Tracking", (2011), JVRC
[2] Russell M. Taylor et al., "VRPN: A Device-Independent, Network-Transparent VR Peripheral System", (2001), VRST '01
[3] WorldViz Web Site, "http://www.worldviz.com", June 2012
[4] MSDN Web Site, "http://msdn.microsoft.com", June 2012
System Effectiveness vs. Task Complexity in Low Cost AR Solutions for Maintenance: a Study Case
M. HINCAPIÉ1, A. CAPONIO2, M. ORTEGA3, J. L. ALCAZAR4, E. GONZÁLEZ MENDIVIL4, M. CONTERO3 AND M. ALCAÑIZ3
1 Universidad de Medellín, Carrera 87 No 30 - 65
2 University of Bari
3 LabHuman - Universidad Politécnica de Valencia, Camino de Vera s/n, 46022, Valencia, Spain
4 Instituto Tecnológico de Estudios Superiores de Monterrey
[email protected]
Abstract. In this paper we present a novel analysis of Augmented Reality effectiveness versus task complexity in maintenance. Augmented Reality has proved to be greatly beneficial in maintenance, considerably speeding up the completion of several kinds of tasks. But how beneficial Augmented Reality is also depends on the complexity of the task it is used for: intuitively, the benefits introduced by Augmented Reality become more and more marked as task complexity increases. We offer a test case analysis not only to show that Augmented Reality effectiveness is strongly related to task complexity, but also that, as the task becomes simpler, Augmented Reality risks becoming harmful and actually delaying task completion. Two different assembly tasks were considered and illustrated both by an Augmented Reality based guide and by a Multimedia Design Guide. Several people were then selected and divided into two control groups according to their psycho-technical abilities. Each group performed both test cases using either the Augmented Reality based guide or the Multimedia Design Guide. The analysis of the results supported our hypothesis, showing that for the simplest assembly case Augmented Reality actually slowed down the test subjects.
Keywords: Information Interfaces and Presentation [H.5.1]:Multimedia Information Systems—Artificial, augmented, and virtual realities; Evaluation/methodology; Computers and Education [K.3.1]:Computer Use in Education—Computer-assisted instruction.
1. Introduction Maintenance is the process of properly attending to the upkeep of a machine or a system. The effectiveness and longevity of a system depend on the way we maintain it and, for this reason, maintenance is an important item in the budget of any kind of industry. Augmented Reality (AR) has proved to be an excellent tool to speed up and improve maintenance processes [1]: the main advantages of AR systems are that they reduce head and eye movement, context switching and the time of repair sequence transitions, and that they provide real-time collaboration, historical documentation, and complementary training information. Summing up, we could say not only that AR can improve maintenance effectiveness, but also that AR-based maintenance can be cheaper than ordinary maintenance. However, while AR allows maintenance procedures to be considerably sped up, it is also an expensive technology. There must therefore be a balance point where the gain due to faster maintenance meets the expenses due to the AR technology: when the speed-up is lower than this balance point, AR technology is unproductive, as it costs more than it saves; when the speed-up is greater, AR technology becomes highly beneficial. The main objective of this article is to show that AR becomes more effective as the assembly task becomes more complex. To support our hypothesis we designed two different assembly tasks and then tested two groups of subjects: one used AR to complete both tasks, while the other used a Multimedia Design Guide (MMDG), which is essentially an advanced multimedia version of common paper instructions.
2. Analysis of AR Effectiveness To test our hypothesis we will use two assembly procedures of different complexity. Both procedures will be carried out by two groups of people using two different methods: one group will use the Multimedia Design Guide (MMDG), which consists of multimedia and interactive instructions, and the other group will use the AR interactive guide. By analyzing the performance of both groups, we will show that AR effectiveness increases as the assembly task becomes more complex.
2.1. Test Cases In our study we considered two assembly procedures of different complexity levels. The first one is simpler, made up of a few components that are easy to identify, and does not require the user to follow a specific assembly order: a toy construction-block assembly, as in [2], building a small toy plane. The second test case is more complex due to the number and characteristics of its parts, which are more difficult to identify, and to the need to use specific tools for the assembly: the fuel-propelled model aircraft engine RCGF 45 cc.
2.2. The AR Solution In order to determine the position and orientation of the different parts of the engine in physical space, we used optical marker tracking. Specifically, we used a software library called Aumentaty [3]. The AR guide for test case A was realized as a series of animations of all the assembly steps, showing how to place the corresponding piece in the assembly sequence. For test case B, the most distinctive feature of the components is their shape, while their color is fairly uniform. For this reason the virtual components were painted in different colors, so that the user can better distinguish among the pieces.
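Aumentaty is a proprietary library, so the snippet below only illustrates the underlying idea of optical marker tracking, using OpenCV's ArUco module (API as in opencv-contrib 4.6; names differ in newer versions). The camera intrinsics, marker size and file name are placeholder values.

    import cv2
    import numpy as np

    # Illustrative marker-based pose estimation (not the Aumentaty API): the pose
    # of a printed marker anchors the virtual, colour-coded engine parts and the
    # animation of the current assembly step.
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics
    dist = np.zeros(5)
    marker_len = 0.04                      # marker side length in metres (assumed)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

    frame = cv2.imread("engine_step.png")  # hypothetical camera frame
    if frame is not None:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
        if ids is not None:
            rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(corners, marker_len, K, dist)
            print("marker", ids[0][0], "at", tvecs[0][0])   # pose used to register the virtual parts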
2.3. The Multimedia Design Guide The MMDG consists of a series of interactive multimedia instructions meant to give enough information to successfully complete simple or complex assembly procedures. The guide itself depends on the assembly procedure of interest: the MMDG for test case A simply consists of a series of 16 slides showing the list of all the pieces used in the experiment. The MMDG for test case B is made up of two documents: the first one is a list of all the components needed to assemble the motor, showing different information for each component. The second document is an interactive exploded 3D view of the motor that the user can manipulate to look at the components and the whole assembly from better perspectives, checking mutual positions and references.
3. Results We invited several bachelor students to perform our experiment and divided them into two groups of 7 people each. Group 1 performed the assemblies by means of the AR guide and, to avoid resistance to the new technology, people in this group were given some material explaining what AR is and how to use the AR guide properly. Group 2 performed the assemblies using the MMDG and did not need any special training before starting the test. Before test case A started, we gave the subject the base on which the assembly would be built and a set of 40 construction blocks; it is important to underline that not all the blocks were needed in this assembly process. Once everything was set, the AR guide or the MMDG was launched, the test started, and the subjects were timed and video-recorded. A similar approach was used for test case B: subjects received a base on which to build the motor and all the pieces and tools they would need; in this case, though, all components received by the subjects were needed to complete the assembly.

                              Case A (simple)        Case B (complex)
                              AR        MMDG         AR         MMDG
Pick and place time           83.28     60.16        283.59     568.97
Time to assemble              131.65    93.09        997.92     1253.87
Average pick time             3.19      1.96         8.56       13.98
Average place time            2.76      2.34         14.62      19.20
Average pick and place time   5.95      4.30         23.18      33.17

Table 1: Time comparison between the AR guide and the MMDG
Table 1 offers an analysis of the average times recorded during the experiments. The first row of the table gives the pick and place time, which is the time needed by users to pick the components and place them in the assembly. The second row shows the time to assemble, which is the average of the total time needed by each subject to complete the assembly. The average pick time, in the third row, represents the time needed on average to locate and pick a specific component at each step, while the average place time, in the fourth row, is the time needed to place the component in the correct position at each step. Row five shows the sum of the average pick and the average place times. Looking at these times, it is clear that the assembly times of group 1 were longer for test case A and shorter for test case B. This means that the AR guide considerably improved time performance for test case B, the more complex one, while slightly slowing down the assembly for test case A, the simpler one. As for the error analysis, both groups made the same number of errors while performing test case A. For test case B, on the contrary, group 1 committed 1.5 errors per person on average, while group 2 committed 3.5 errors per person on average: i.e. AR also improves task results for the more complex test case.
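For clarity, the sketch below shows how the per-row figures of Table 1 can be derived from timed recordings; the step times used here are invented for illustration and are not the experiment's data.

    # Each subject log is a list of (pick_time, place_time) pairs, one per assembly step.
    subject_logs = [
        [(3.1, 2.9), (3.4, 2.6), (2.8, 2.7)],   # subject 1 (invented values)
        [(3.3, 2.8), (3.0, 2.9), (3.2, 2.5)],   # subject 2 (invented values)
    ]

    def table_row(logs):
        picks  = [p for log in logs for p, _ in log]
        places = [q for log in logs for _, q in log]
        totals = [sum(p + q for p, q in log) for log in logs]
        return {
            "pick and place time": sum(totals) / len(totals),   # mean total time per subject
            "average pick time": sum(picks) / len(picks),
            "average place time": sum(places) / len(places),
            "average pick and place time": sum(picks) / len(picks) + sum(places) / len(places),
        }

    print(table_row(subject_logs))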
4. Conclusions In this article we showed that the effectiveness of AR in the maintenance industry strongly depends on the task at hand. This means that the use of AR technology is not always beneficial; on the contrary, for simple or trivial tasks AR can actually be a drawback. When tasks become more complex, though, AR starts to be beneficial and can considerably speed up maintenance processes and improve overall results. On the other hand, AR is also a cost, so it is important to weigh its effectiveness from an economic point of view: the time gained by a worker in a maintenance process thanks to AR must be more valuable than the AR technology itself, otherwise the whole maintenance process will become more expensive. In the future we will offer practical methods to evaluate when AR technology becomes beneficial and when, instead, it would just increase the cost of maintenance: these methods will consider not only the speed-ups introduced by AR, but also the increased quality of the overall maintenance process.
References
[1] FRIEDRICH, W. (2002) ARVIKA: augmented reality for development, production and service. In Proceedings of the 1st International Symposium on Mixed and Augmented Reality.
[2] TANG, A.; OWEN, C.; BIOCCA, F.; MOU, W. (2003) Comparative effectiveness of augmented reality in object assembly. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
[3] MARTÍN-GUTIÉRREZ, J.; SAORÍN, J. L.; CONTERO, M.; ALCAÑIZ, M.; PÉREZ-LÓPEZ, D. C.; ORTEGA, M. (2010) Design and validation of an augmented book for spatial abilities development in engineering students. Computers & Graphics 34, 1; 77-91.