Winspect: A Case Study for Wearable Computing-Supported Inspection Tasks

Michael Boronowsky, Tom Nicolai, Christoph Schlieder and Ansgar Schmidt
Center for Computing Technologies, University of Bremen, P.O.B. 33 04 40, D-28334 Bremen, Germany
Tel.: ++49-421-218 4554
{mb, nicolai, cs, ansi}@tzi.de

Abstract

This paper introduces the Winspect project, an application of wearable computing in an industrial inspection process, with a focus on its user interface. We present a case study to demonstrate the benefit of wearable input devices and the use of implicit interaction as a complementary technique. Two largely independent tasks from the application domain are addressed: entering findings for inspected components in a harsh environment, and a technique for overcoming display resolution limits when browsing hypertext-like documentation.
1. Introduction

Ever since the emergence of wearable computing there has been a continuous discussion about how to unlock the full potential of this technology for inspection and maintenance tasks. This paper addresses the issue from the perspective of industrial use. We report on results obtained in the Winspect project (Wearable Computers in Inspection), in which wearable computing supports an inspection process at our industrial partner, Stahlwerke Bremen GmbH, a steel plant in Bremen, Germany. The production of steel requires a great deal of transportation, carried out by hundreds of cranes. To reduce the risk of failure, an important part of the company's zero-fault strategy is so-called preventive maintenance, performed periodically while the machines are running. For the inspection, the staff have to climb onto the moving cranes. An abnormality observed during the inspection can be recorded only after leaving a crane or after reaching a firm stand. The findings are written down with a pencil on a sheet of paper and have to be typed into the computer system at the end of an inspection cycle. The cranes to be inspected are located all over the company premises, each of them containing hundreds of different components with various sub-components for parts such as engines, brakes, or linkages. These sub-components may adopt several qualitative states which the inspector has to inspect and record. Furthermore, it should be possible to browse the technical documentation of a crane during the inspection process.
2. System Requirements

With these circumstances of the inspection process in mind, the following problems have to be solved:

• Directing attention towards real-world objects. The system should not bind too much attention, in either the visual or the auditory channel.
• Recording qualitative states of sub-components. Winspect needs an appropriate method to select one of the many sub-components prior to entering its state.
• Displaying large technical drawings. Size limitations of computer screens are an obstacle when technical drawings have to be viewed.
• Wearing protective gloves. The inspector has to wear protective gloves, which do not allow the use of common input devices.
3. Winspect's Interaction Concept

Human-computer interaction techniques can be distinguished by the type of cognitive control involved in the interaction: explicit interaction relies on conscious control, whereas implicit interaction does not (see e.g. [1]). When designing the interaction concept, our goal was to reduce the complexity of the user interface and to find an alternative to the mouse pointer, similar to the outline in [3]. A pointing device serves at least two different functions: first, selecting a widget on the screen, and second, performing some direct action on that widget. The Winspect approach is based on this distinction.

Entering Findings

A combination of different interaction principles in Winspect allows the inspector to enter his assessment of a specific sub-component. A static to-do list would be incompatible with a flexible inspection workflow that can be altered by the inspector. Selecting the sub-component of interest from the list of all available components is not feasible either, because this list is far too long. With implicit interaction, both problems can be avoided if the computer system knows the position of the inspector: the system can narrow down the search space by letting the user select only among components within his current range. A wearable RFID (radio frequency identification) system provides Winspect with this kind of position information [2, 6]. It consists of a reader unit and numerous passive tags attached to the components of the cranes. As soon as the inspector approaches one of these, the computer reads the unique code stored in the tag. Winspect then displays a list of all sub-components of the recognized component, sorted by priority, and waits for further explicit input via the list widget. The process of entering a finding is thus reduced to two successive selections from short lists: the first lists sub-components, the second possible findings. The selection of items is easily mapped onto a one-dimensional motor process of the user. The Winspect system works with a one-dimensional tilt sensor and a button to trigger the selection. The tilt sensor is attached to the inspector's wrist, enabling him to control the selection process by rotating the wrist.

Browsing Technical Documentation

The documentation is hypertext-structured and consists of large technical drawings which are likely to exceed the limits of the HMD. A well-tried approach to overcoming resolution limits is body-stabilized presentation, such as the "one hundred million pixel display" [5], in which the viewport moves according to the changing viewpoint orientation of its wearer. Instead of head tracking, Winspect tracks the position of the user's hand and maps its three-dimensional movement onto the movement of the viewport over the whole page.
The user may also release the page, in order to have his hand free, and grasp it again by pressing a button. The third dimension can be used both to magnify a detail in a drawing and to navigate along hyperlinks. The cursor is fixed to the centre of the screen; the documentation page moves instead.
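The two-step entry of findings described above can be illustrated with a short sketch. This is not the project's actual code: the component table, tag codes, finding labels, and angle range are all assumptions made for illustration; only the principle (RFID narrows the list, a one-dimensional wrist angle selects an item) comes from the text.

```python
# Illustrative sketch of Winspect-style finding entry. The data and all
# names are hypothetical; only the interaction principle is from the paper.

# Sub-components per RFID tag code, sorted by inspection priority (assumed data).
COMPONENTS = {
    "tag-0042": ["brake pad", "brake linkage", "brake engine"],
}
FINDINGS = ["ok", "worn", "needs lubrication", "defective"]


def tilt_to_index(angle_deg, n_items, max_angle=90.0):
    """Map a one-dimensional wrist rotation onto a list index."""
    angle = max(0.0, min(angle_deg, max_angle))      # clamp to sensor range
    return min(int(angle / max_angle * n_items), n_items - 1)


def enter_finding(tag_code, tilt_1, tilt_2):
    """Two successive selections: first the sub-component, then the finding."""
    sub_components = COMPONENTS[tag_code]            # implicit step: RFID narrows the list
    part = sub_components[tilt_to_index(tilt_1, len(sub_components))]
    state = FINDINGS[tilt_to_index(tilt_2, len(FINDINGS))]
    return (part, state)


print(enter_finding("tag-0042", 10.0, 85.0))         # → ('brake pad', 'defective')
```

In this reading, each button press would confirm the item currently indicated by the wrist angle, so a complete finding costs two rotations and two presses.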
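The hand-tracked viewport for browsing drawings could be sketched as follows. All dimensions, scale factors, and zoom limits are assumptions; the sketch only demonstrates the mapping described in the text: x/y hand movement pans the page, the third dimension changes magnification, and a button grabs or releases the page.

```python
# Hypothetical sketch of the hand-controlled viewport; constants are assumed.

class Viewport:
    def __init__(self, page_w, page_h, view_w=640, view_h=480):
        self.page_w, self.page_h = page_w, page_h
        self.view_w, self.view_h = view_w, view_h
        self.x = self.y = 0.0        # top-left corner of the viewport on the page
        self.zoom = 1.0
        self.grabbed = False
        self._last = None            # last hand position while the page is held

    def on_button(self):
        """Toggle between holding the page and freeing the hand."""
        self.grabbed = not self.grabbed
        self._last = None

    def on_hand_move(self, hx, hy, hz):
        """Map 3-D hand movement onto panning (x, y) and magnification (z)."""
        if not self.grabbed:
            return                   # released: hand movement is ignored
        if self._last is not None:
            dx, dy, dz = hx - self._last[0], hy - self._last[1], hz - self._last[2]
            self.x = min(max(self.x + dx, 0), self.page_w - self.view_w / self.zoom)
            self.y = min(max(self.y + dy, 0), self.page_h - self.view_h / self.zoom)
            self.zoom = min(max(self.zoom * (1.0 + 0.01 * dz), 0.5), 4.0)
        self._last = (hx, hy, hz)
```

Fixing the cursor at the screen centre, as the paper does, then amounts to always interpreting the viewport's midpoint as the selection point for hyperlinks.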
4. Winspect Hardware

The hardware of the Winspect prototype is based on the Xybernaut MA IV [4] with the XyberView HMD. The equipment (computer, battery, etc.) is fitted into a pair of dungarees. For the interaction we developed a special working glove by integrating an RFID reader, buttons, a tilt sensor, and an ultrasonic device. The glove contains three finger-operated buttons: magnetic reed contacts are sewn into the tips of the index, middle, and ring fingers, and a small magnet is sewn into the thumb of the glove. The position of the user's hand is tracked by an ultrasonic sensor; its receiver is fitted into the front pocket of the dungarees, and the transmitter is sewn inside the glove (Figure 1).
Figure 1: The Winspect hardware
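The reed-contact buttons described above could be turned into discrete events with a sketch like the following. The contact states are assumed to arrive as boolean tuples from some sensor interface; the finger names and the polling scheme are illustrative, not the project's actual implementation.

```python
# Hedged sketch: deriving button-press events from the glove's three
# reed contacts by detecting open -> closed transitions between two reads.

FINGERS = ("index", "middle", "ring")


def button_events(prev, curr):
    """Return the fingers whose contact just closed (thumb touched the fingertip)."""
    return [f for f, was, now in zip(FINGERS, prev, curr) if now and not was]


print(button_events((False, False, False), (True, False, True)))
# → ['index', 'ring']
```

Comparing consecutive reads rather than raw states means a finger held against the thumb fires only once, which matches the button semantics of the selection interface.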
5. Summary

This paper has presented current results of the Winspect project. Although the focus of the project is on software aspects, questions related to the hardware should not be ignored when designing an appropriate interaction concept. At the centre of Winspect is the combined working and data glove, and its application in two use cases, data collection and document browsing, both of which can be carried out completely without taking off the gloves. The solution presented in this paper is a wearable one, because the worker wears the computer hardware and input devices on his body. These can be used hands-free in the sense that he does not need to put his hands explicitly on an input device in order to operate the system.
6. References

[1] Schmidt, A., "Implicit Human Computer Interaction Through Context", Personal Technologies, Vol. 4(2), June 2000, pp. 191-199.
[2] Schmidt, A., Gellersen, H.-W., Merz, C., "Enabling Implicit Human Computer Interaction: A Wearable RFID-Tag Reader", Proceedings of the International Symposium on Wearable Computers, Atlanta, GA, USA, 2000, pp. 193-194.
[3] Schmidt, A., Gellersen, H.-W., Beigl, M., Thate, O., "Developing User Interfaces for Wearable Computers: Don't Stop to Point and Click", Intelligent Interactive Assistance & Mobile Multimedia Computing (IMC'2000), Rostock-Warnemünde, Germany, November 9-10, 2000.
[4] Xybernaut. http://www.xybernaut.com
[5] Reichlen, B., "SparcChair: A One Hundred Million Pixel Display", Proceedings of IEEE VRAIS '93, Seattle, WA, September 18-22, 1993, pp. 300-307.
[6] Boronowsky, M., Werner, A., "Applying Wearable Computers in an Industrial Context", Proceedings of the 5th International Conference on Wearable Computing, Fairfax, 2000.