Using Google Glass for mobile maintenance and calibration tasks in the AUDI A8 production line

Sebastian Rauh∗, Daniel Zsebedits∗, Efim Tamplon∗, Stephan Bolch† and Gerrit Meixner∗

∗UniTyLab, Heilbronn University, 74081 Heilbronn, Germany
Email: [email protected], [email protected], [email protected], [email protected]

†N/PN-48E, AUDI AG, 74148 Neckarsulm, Germany
Email: [email protected]

Abstract—The integration of new technologies into existing work tasks requires the involvement of the users in all steps of the development process. In this work the human-centred design process according to ISO 9241-210 is applied to achieve the necessary participation of the users. The Head Worn Display Glass of Google Inc. is used prototypically to improve the periodical calibration of the driver assistance system testing bay of the AUDI A8 production line. To this end the users were observed and interviewed while performing the calibration and were furthermore highly involved in the iterative design of the user interface, the human-computer interaction and the program flow. At the end a final prototype was tested.

I. INTRODUCTION

Fig. 1. Google Glass (left) and Vuzix M100 (right). While Google uses a semi-transparent display, Vuzix relies on a non-transparent casing for the display.

Improving work instructions and documentation by introducing state-of-the-art software and hardware is a challenge companies are facing. The rising pace of innovation and miniaturisation demands increasing attention to current developments. New trends have to be evaluated for their suitability for industrial processes. Besides keeping an eye on recent advances in the relevant fields, feasibility studies need to be performed with promising technologies to explore limits, application areas and points of contact for further research. The showcase integration of the Head Worn Display (HWD) Glass of Google Inc.1 (referred to below as Google Glass) in the final check process of the AUDI A8 production line at the AUDI facility in Neckarsulm, Germany was initiated by integrating the device into the manual functional check and initial start-up of the cars [1]. In this subsequent work Google Glass is integrated into a periodical maintenance process with recurrent work tasks. For this purpose the calibration of the driver assistance system testing bay at the very end of the final check was selected. In this use case a worker of the Monday morning shift has to perform several calibration steps to guarantee the stable quality of the driver assistance system testing bay. All sensors and actuators are checked at determined intervals (weekly, monthly, yearly). Google Glass is used to replace the current paper-based work instructions and documentation, enabling the worker to receive the necessary information and to document executed test steps on site without interrupting the work flow.

1 http://www.google.com/about/

978-1-4673-7929-8/15/$31.00 © 2015 IEEE

II. STATE OF THE ART

Head Worn Displays (HWD) are systems which bring complex visual information into the user's line of sight. HWD are usually supported by a mobile or even wearable computer. In the following, the term HWD may include Head Mounted Displays (HMD), since they are considered a predecessor technology. HWD extend HMD in that they can be put on easily, i.e. they are worn like a pair of glasses. HWD which provide one display in front of one of the user's eyes are called monocular; devices that provide one display in front of each eye are called binocular. There are devices with semi-transparent displays which enable the user to see what is behind them, called optical see-through HWD. Furthermore, there are devices using a non-transparent display which can stream a camera image to enable the user to see what is behind the display. If streaming is used, these devices are called video see-through HWD.

Google Glass (depicted in Figure 1, left) is a monocular optical see-through device which brings the display into the upper right corner of the line of sight of the right eye. Currently the device is not available for purchase. Google Inc. distributed the Explorer Edition (XE) with an invite system which was closed in April 2014; from that day on the device was sold as a beta version. In January 2015 Google Inc. announced the end of the Explorer Program in order to incorporate the user feedback and enhance the device in a conventional department instead of the Google X research laboratory.

Developers at Vuzix Corporation2, in contrast, used a non-transparent display in the M100 (depicted in Figure 1, right), a HWD which was released in December 2013. Combined with the device's camera, a video see-through HWD is possible. Both devices come with constraints based on the display size and the position at the edge of the user's field of view. Testing both devices showed that, owing to the small displays and their position, the scope of mixed reality applications is very limited. Mostly these devices seem to be useful to support the user by showing information like message notifications. Hence these devices are mainly intended to extend the user experience and functionality of conventional smartphones.

Fig. 2. Google Glass teardown by Scott Torborg and Star Simpson (retrieved from [2]).

The components of the Google Glass Explorer Edition (XE) are depicted in Figure 2. Scott Torborg and Star Simpson [2] disassembled the device to examine its components. A selection which characterises the device is listed below.

1: Side Touchpad Module, based on a Synaptics touchpad controller (Synaptics T1320A).
2: Display Frame, among others holding the display and the prism (the basic components of the "semi-transparent display" with a resolution of 640 x 360 pixels) and the camera (with a resolution of up to 5 MP for pictures and 720p for videos).
3: Main CPU Board, among others holding a System-on-a-Chip component (Texas Instruments OMAP4430, including an ARM Cortex-A9, operating at up to 1 GHz), a flash storage component (SanDisk SDIN5C2-16, 16 GB storage space) and a RAM component (ELPIDA B816B3PF-8D-F, 1 GB DRAM).
4: Bone Conduction Speaker.
5: Single-Cell Lithium Polymer Battery with a capacity of 2.1 Wh (about 570 mAh).
C: Parts of the device's case (orange plastic parts).

Within the initiative "Industrie 4.0", BMW Group3 presented a Google Glass application to document deviations occurring in the quality check of pre-series vehicles. They aim to improve the communication between the employees of the quality check and the development engineers by using messages and photos. The project is supposed to be extended to enable video telephony for direct conversations. BMW points out that the possible hands-free usage is a considerable benefit compared with traditional systems based on hand-held devices, clipboards or fixed terminals [3].

In 2013 metaio GmbH4 presented a first mixed reality application for Google Glass at their in-house exhibition insideAR in Munich [4]. The user is guided through maintenance and operating steps of his car. For example, if the oil level of the car's engine is low, the user is notified and guided through the refill process. There has also been previous work on using HWD and mixed reality in industrial maintenance tasks. For example, Oda et al. present a 3D referencing technique to improve remote maintenance tasks with a more accurate and efficient augmentation [5].

2 http://www.vuzix.com/
3 https://www.bmwgroup.com/
4 https://www.metaio.com/

III. PROCEEDINGS

The integration of Google Glass into the working routine of the periodical calibration of the driver assistance system testing bay was performed using the human-centred development approach based on ISO 9241-210 [6]. This approach is based on iterations which are separated into an analysis and evaluation phase and an implementation phase. Each iteration starts with an analysis of the current state of the application for Google Glass. In these analyses the users are highly involved. On the one hand they can introduce the developers to their work routine and thereby point them to the critical success factors. On the other hand they can evaluate the developed software by tentatively integrating it into their work processes. The results of these methods of collecting data are evaluated by the developers in collaboration with the stakeholders and the necessary tasks are derived, after which the next implementation phase starts. The results of the implementation phase are the basis for the next iteration, starting again with an analysis.

Within the preceding project at AUDI [1] the available HWD Google Glass and Vuzix M100 were compared. Back then, the Vuzix M100 only had very limited interaction possibilities, which were reduced to three small and difficult to distinguish buttons on the top of the casing. Hence Google Glass, already offering speech recognition and touch gestures, was chosen. This decision is maintained in the described project.

A. Analysis and Evaluation


Based on the limited project duration of four months and the appraisal of the authors as well as of the users that it is adequately optimised, the general work flow remains unchanged. Furthermore, the workers interviewed are new to the concepts of the human-computer interaction of Google Glass, so the first design drafts are created without the involvement of the users.


The analysis starts with an examination of the current working routines, the stakeholders and the work environment.



For this purpose the employee who performs the calibration of the driver assistance system testing bay is observed while operating. To prevent ambiguities and misinterpretations the worker is invited to describe the work steps ("thinking aloud") and his motivation to execute the tasks in the way he does. The observer takes notes and asks further questions if necessary. The users of the system are the workers of the Monday morning shift of the driver assistance system testing bay, who are included in the analysis phase. In addition, employees that are not related to the testing bay are recruited to validate the results.

The activity is subdivided into recurrent tasks which follow a determined flow. It is performed every Monday morning in varying scopes; there are weekly, monthly and annual calibration tasks. At the moment the work is structured by a paper-based work instruction and the results are also documented on paper. Data of the driver assistance system testing bay calibration system is displayed on various screens around the testing bay. The target values (from the work instruction) can be compared with this data (the actual values) to adjust deviations or to inform the person in charge, for example the maintenance staff. The performed tasks are documented by the employee by stamping them in the protocol book. Some tasks, like manual distance measurements with an optical range finder, are not documented and hence are performed on the employee's own responsibility. The individual tasks vary considerably between one another. This is a serious challenge, especially when the trained employee is absent and someone less experienced or even untrained has to fill in for him. In this case a routine flow is not possible, but the calibration has to be finished before the shift starts to prevent the production line from being delayed. In addition, the initial analysis delivers the following results:

• All steps within the calibration are based on information processing for selecting the next steps. Every task is designed to enable the employees to retrace the necessary information (from the work instruction, surrounding displays etc.) and then carry out the required tasks. There is an inevitable dependency on the entirety of the task and especially on the individual work steps. Hence it can be assumed that focussing on design, comprehensibility, consistency and the relations among the particular pieces of information in the implementation phases will improve the resulting application.

• The execution of the calibration process should be designed as a learning process. This is based on the necessary training of each employee when performing the calibration for the first time and will be the main goal to be considered in the implementation phase.

• Abstracting components (elements to calibrate), tasks (sub-tasks of one calibration task) and sequences (all sub-tasks of one calibration task executed in the right order) and aggregating them supports the generation of schemata. Providing schemata leads to a reduced cognitive load. A data-model sketch following this idea is shown after this list.

• Symbols and icons should be designed to visually represent the associated tasks in an easy to understand manner.

Fig. 3. Conditional view asking the user to mark the calibration step "Fender: wheel arch heights and centre of the wheel" as "OK", "not OK" or "postponed".



• The position of each component has to be depicted by the assisting system.

• The calibration is performed between the shifts, hence the reduced noise level allows using the voice control feature of Google Glass. This enables a touchless interaction in every step of the calibration process.

• To optimise the user's field of view the background of the application should be transparent.
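To illustrate the abstraction of components, tasks and sequences referenced in the list above, the following Java sketch shows one possible data model. It is an editorial illustration only: all class and member names (Component, CalibrationTask, isWithinTolerance etc.) are hypothetical and are not taken from the actual application described in this paper.

import java.util.ArrayList;
import java.util.List;

enum Interval { WEEKLY, MONTHLY, YEARLY }

enum Result { OK, NOT_OK, POSTPONED, UNDOCUMENTED }

/** A single sub-task of a calibration task, e.g. "measure wheel arch height". */
class CalibrationTask {
    final String instruction;  // text shown to the worker
    final double targetValue;  // target value from the work instruction
    final double tolerance;    // permitted deviation
    Result result = Result.UNDOCUMENTED;

    CalibrationTask(String instruction, double targetValue, double tolerance) {
        this.instruction = instruction;
        this.targetValue = targetValue;
        this.tolerance = tolerance;
    }

    /** Compare an actual value read from the testing bay screens with the target. */
    boolean isWithinTolerance(double actualValue) {
        return Math.abs(actualValue - targetValue) <= tolerance;
    }
}

/** A component of the testing bay (element to calibrate) with its ordered sequence of sub-tasks. */
class Component {
    final String name;
    final String position;    // where to find the component, as demanded above
    final Interval interval;  // weekly, monthly or yearly calibration
    final List<CalibrationTask> sequence = new ArrayList<>();

    Component(String name, String position, Interval interval) {
        this.name = name;
        this.position = position;
        this.interval = interval;
    }
}

Aggregating sub-tasks into one ordered sequence per component mirrors the schema generation mentioned above: the worker always encounters the same structure regardless of the concrete component being calibrated.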

In the interim analyses, performed at the beginning of each iteration, the method "thinking aloud" is used as well. The users are introduced to the application in its current state of development and are able to accustom themselves to the device and the interaction concepts. To evaluate the user interface an Android-based tablet PC is used. This comes with some advantages compared to directly using Google Glass. Not only the user can see the interface but also the developer: both can discuss what is understandable in the current design and what needs revision, and can point out missing elements or the like. In addition, the observer can follow the line of action of trained and untrained employees. This allows a better understanding of how the users are thinking and working.

Since Google Glass is an "Explorer Edition", which means that the device is officially sold with known and unknown weaknesses, reasonable areas of application are limited. Especially noteworthy are the limited battery life and the lack of adequate heat conduction combined with the concentrated design (cf. Figure 2), which both lead to a highly reduced operating time. A tablet PC does not exhibit these weaknesses. In addition, the used version of Google Glass cannot be put on by some persons who wear medical glasses with a wide and angular frame, because these persons are forced to wear Google Glass above the medical glasses. Especially users whose medical glasses have a frame wider than that of Google Glass are not able to use the device. The interaction via the touch pad on the right frame temple also appears to be problematic for unskilled users. It is assumed that this is based on transferring the mental model of a conventional (laptop) touch pad to the one of Google Glass.

TABLE I. TOUCH GESTURES FOR INTERACTING WITH THE CALIBRATION SYSTEM OF THE DRIVER ASSISTANCE SYSTEM TESTING BAY.

gesture                     possible interaction
"one finger tap"            confirm / mark step as "OK"
"two finger tap"            show contextual help, i.e. the system shows tips according to the step the user is performing
"swipe forward/backward"    multifunctional, can be derived from the view's context

Fig. 4. Informative view with status bar and symbols showing the request "inform the maintenance" and offering contextual help via a two finger tap on the touch pad.

A swipe forward or backward on the touch pad of Google Glass's right temple corresponds to the familiar behaviour of a swipe to the right or left on a conventional touch pad. This mental model first has to be recognised and learned in order to use the system. Hence it is recommended, and asked for by the users, to limit the set of gestures to a minimum to diminish the training effort. A sketch of such a reduced gesture mapping is given below.
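The following sketch indicates how the reduced gesture set of Table I could be wired up with the GestureDetector of the Glass Development Kit (GDK) available for the Explorer Edition. It is a minimal sketch, not the authors' implementation: the GDK classes Gesture and GestureDetector are documented API, while the activity and its helper methods are hypothetical placeholders.

import android.app.Activity;
import android.os.Bundle;
import android.view.MotionEvent;
import com.google.android.glass.touchpad.Gesture;
import com.google.android.glass.touchpad.GestureDetector;

public class CalibrationActivity extends Activity {

    private GestureDetector gestureDetector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Restrict the recognised gestures to the three of Table I.
        gestureDetector = new GestureDetector(this)
                .setBaseListener(new GestureDetector.BaseListener() {
                    @Override
                    public boolean onGesture(Gesture gesture) {
                        switch (gesture) {
                            case TAP:         // one finger tap: confirm / mark step as "OK"
                                confirmStep();
                                return true;
                            case TWO_TAP:     // two finger tap: contextual help for the step
                                showContextualHelp();
                                return true;
                            case SWIPE_RIGHT: // swipe forward: meaning derived from the view's context
                                nextView();
                                return true;
                            case SWIPE_LEFT:  // swipe backward
                                previousView();
                                return true;
                            default:
                                return false; // ignore all other gestures to keep the training effort low
                        }
                    }
                });
    }

    @Override
    public boolean onGenericMotionEvent(MotionEvent event) {
        // Forward touch pad events from the right frame temple to the detector.
        return gestureDetector.onMotionEvent(event);
    }

    // Hypothetical placeholders for the application logic.
    private void confirmStep() { }
    private void showContextualHelp() { }
    private void nextView() { }
    private void previousView() { }
}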

B. Implementation

The analysis results are implemented as a prototype. Each preliminary result is evaluated with the users (human-centred design approach) in the following analysis phase to iteratively push the development forward. The aim is to create a digital work instruction and documentation system based on a Google Glass application which is able to completely substitute the existing paper-based system. Within the periodical calibration of the sensors of the driver assistance system testing bay and the corresponding calibration system, the user is guided by an application to document his work progress and the calibration results.

The application is controlled with a limited set of gestures on the touch pad on the right frame temple of Google Glass. These gestures are represented in the user interface using symbols where available (depicted in Figure 3 and Figure 4). Table I shows the used gestures and the resulting possible interpretation by the system. There are conditional views (depicted in Figure 3) and informative views (depicted in Figure 4). While conditional views require a decision by the user, for example marking a calibration step as "OK", "not OK" or "postponed", informative views are used to give instructions like "Inform the Maintenance". Informative views can demand additional activities from the user which are not part of the system functionality, but they also function as simple instructions to maintain the work flow. To offer a chronological overview of the calibration process a progress bar is introduced at the top of the view. This progress bar is hidden in the conditional views to increase their clarity. A sketch of this view distinction follows below.
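A minimal sketch of the distinction between conditional and informative views, including the hiding of the progress bar, could look as follows. The layout handling is simplified and the names (ViewType, ViewRenderer) are hypothetical; only the standard Android ProgressBar API is assumed, not the actual application code.

import android.view.View;
import android.widget.ProgressBar;

enum ViewType {
    CONDITIONAL,  // requires a decision: "OK", "not OK" or "postponed"
    INFORMATIVE   // gives an instruction, e.g. "Inform the Maintenance"
}

class ViewRenderer {
    private final ProgressBar progressBar;

    ViewRenderer(ProgressBar progressBar) {
        this.progressBar = progressBar;
    }

    /** Render one step of the calibration sequence. */
    void show(ViewType type, int currentStep, int totalSteps) {
        if (type == ViewType.CONDITIONAL) {
            // Hide the progress bar in conditional views to increase their clarity.
            progressBar.setVisibility(View.GONE);
        } else {
            // Informative views show the chronological overview at the top.
            progressBar.setVisibility(View.VISIBLE);
            progressBar.setMax(totalSteps);
            progressBar.setProgress(currentStep);
        }
    }
}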

IV. CONCLUSION

Within this project an application for Google Glass was developed that optimises the work and training processes at the production line of the AUDI A8 at the AUDI facility in Neckarsulm, Germany. Any employee, independent of his training level, is now able to perform the calibration of the driver assistance system testing bay in an acceptable time span. The application was developed based on the human-centred design approach following ISO 9241-210. Initially a first draft of the user interface was presented to the users, who were then included in the iterative advancement process. Interaction concepts have been evaluated, selected and optimised.

Especially when using Google Glass for live tests, its weak points became apparent. The two main problems identified are the low battery capacity and battery life, and the fast heat development. These difficulties prevent an introduction of Google Glass into the regular work flow. Furthermore, Google Glass cannot be worn by spectacle wearers if the medical glasses used have a wide and angular frame (for example horn-rimmed glasses). Hence the idea of adapting the work processes to improve the usage of Google Glass was abandoned. But it could be shown that devices of the HWD type receive positive feedback from the assembly employees in the automotive industry. It can be concluded that an understandable and consistent interaction concept and a salient user interface are the key features for introducing HWD into everyday working processes.

REFERENCES


[1] G. Meixner, S. Rauh, M. Koller, D. Kalem, M. Wöhr, S. Schwager, and S. Bolch, "Einsatz der Google Glass zur Optimierung der manuellen Inbetriebnahme und Funktionsprüfung in der Audi A8 Fertigung," in VDI-Bericht 2258: 16. Branchentreff der Mess- und Automatisierungstechnik, AUTOMATION 2015, Benefits of Change - the Future of Automation. Düsseldorf, Germany: VDI Verlag GmbH, 2015.
[2] S. Torborg and S. Simpson, "Google Glass Teardown," http://www.catwig.com/google-glass-teardown, 2013.
[3] BMW Group, "Visual inspection with memory function: BMW Group tests smart eyewear for quality assurance in production," https://www.press.bmwgroup.com, 2014.
[4] T. Lord, "The Case for Wearable Computing," insideAR: The Augmented Reality Magazine, no. 6, pp. 16-18, October 2013.
[5] O. Oda, M. Sukan, S. Feiner, and B. Tversky, "Poster: 3D referencing for remote task assistance in augmented reality," in 2013 IEEE Symposium on 3D User Interfaces (3DUI), Orlando, FL, USA: IEEE, March 2013, pp. 179-180.
[6] ISO, ISO 9241-210:2010 Ergonomics of human-system interaction - Part 210: Human-centred design for interactive systems, 1st ed. Geneva, Switzerland: International Organization for Standardization, 2010.
