Intl. Journal of Human–Computer Interaction, 31: 337–349, 2015
Copyright © Taylor & Francis Group, LLC
ISSN: 1044-7318 print / 1532-7590 online
DOI: 10.1080/10447318.2014.994194

Augmented Reality-Based Manual Assembly Support With Visual Features for Different Degrees of Difficulty

Rafael Radkowski, Jordan Herrema, and James Oliver
Virtual Reality Applications Center, Iowa State University, Ames, Iowa, USA

Address correspondence to Rafael Radkowski, Virtual Reality Applications Center, Iowa State University, 1620 Howe Hall, Ames, IA 50011, USA. E-mail: [email protected]

This research investigates different visual features for augmented reality (AR)–based assembly instructions. Since the beginning of AR research, one of its most popular application areas has been manual assembly assistance. A typical AR assembly application indicates the necessary manual assembly operations by generating visual representations of parts that are spatially registered with, and superimposed on, a video representation of the physical product to be assembled. Research in this area indicates the advantages of this type of assembly instruction presentation. This research investigates different types of visual features for different assembly operations. The hypothesis is that, in order to gain an advantage from AR, the visual features used to explain a particular assembly operation must correspond to its relative difficulty level. The final goal is to associate different types of visual features with different levels of task complexity. A user study was conducted to compare different visual features at different operation complexity levels. The results support the hypothesis.

1. INTRODUCTION
Augmented reality (AR) technology is a type of human–computer interaction that superimposes the natural visual perception of a human user with computer-generated information (i.e., three-dimensional [3D] models, annotations, and text; Azuma, 1997). AR presents this information in a context-sensitive way that is appropriate for a specific task and, typically, relative to the user's physical location. The general approach to realizing AR is to merge the physical and virtual worlds by exploiting rapid video processing, precise tracking, and computer graphics. In a typical AR system, a video camera is used to capture the physical world. To determine the location of the camera and the user, AR software systems use fast image-processing techniques to identify one or more markers placed in the scene. Using the optical properties of the camera, the position and orientation of the markers are calculated precisely. Then, rather than presenting the raw video to the user, the system composites the video image with computer-generated images of virtual objects in positions relative to the physical markers. The effect, from a user's point of view, is a representation of the physical world that has been "augmented" with virtual objects.

One of the early applications of AR research is manual assembly assistance. An AR application is used to superimpose physical parts with computer-generated visual features. Here, the term "visual features" refers to all virtual objects (virtual 3D models and animations, icons, text, etc.) that are used to present information on screen. This information indicates, for example, which part to pick next, where to assemble a particular part, or which tool should be used next. Because the computer-generated information is context sensitive, spatially registered with, and superimposed on the physical part, the information is easier to comprehend, and the user does not need to seek information elsewhere (Neumann & Majoros, 1998; Sausman, Samoylov, Harkness Regli, & Hopps, 2012; Tang, Owen, Biocca, & Mou, 2003). The first application of this type was introduced by Caudell and Mizell (1992). Since then, several studies have been conducted that indicate advantages of AR in comparison to typical instruction media such as paper (e.g., Tang et al., 2003; Wiedenmaier, Oehme, Schmidt, & Luczak, 2003). The goal of former studies was to investigate the feasibility of AR applications for manual assembly assistance and to show their effectiveness. Most of this research compared AR applications with computer terminals (instructions shown on a display) and paper manuals. The results of these studies show that AR reduces the number of assembly errors, the time to identify and locate parts, and the assembly time in general. In addition, hand–eye coordination effort and mental workload can be minimized. However, technical limitations of the required hardware remain an obstacle to broad usage.

In summary, research has shown the feasibility of AR for manual assembly assistance. However, there is huge diversity among these studies with respect to interface setups, visual features, and the level of complexity of the product. Thus, the factors that influence the success of an AR application for assembly assistance remain unclear. An analysis of former studies suggests two factors that may influence the effectiveness of AR applications: the complexity of the visual features and the complexity of the product.

In general, the more complex the visual features used for assembly instructions, the more time the user needs to complete an assembly task. Previous studies in the field of assembly assistance used a variety of visual features to guide the user, ranging from simple text on a display to animated 3D models that mimic the physical components to be assembled. It is known that simpler visual features are recognized faster and are easier to understand, and that the number of symbols an AR display depicts at a time should be as small as possible in order to optimize performance (Zarraonandia, Aedo, Díaz, & Montero Montes, 2014). In comparison, a user needs more time to understand complex 3D models, which is one reason why their use for displaying instructions is not recommended. Findings from Pathomaree and Charoenseang (2005) and Aguzzi and Lamborelle (2012) support this assertion.

The complexity of the product and, thus, the type of information that needs to be presented on-screen affect the assembly time. Manual assembly incorporates three major operations: identification of the part to assemble, alignment at the assembly location, and installation/fastening of the part. An AR application for a real-world use case must present information for all three types of operation. Several studies do not comply with this requirement. Studies have been conducted using LEGO bricks, simple puzzles, or computer motherboards. Systems focused on these types of products assist the user in only two of the operations just described: identification of the next part to assemble and alignment at the assembly location. The installation itself is simple; LEGO bricks, for example, simply have to be pushed together. This leads to the next point: The difficulty of the installation step can also differ. It ranges from simple push operations (LEGO, computer parts) to installation steps that require difficult handholds and several substeps (engineering products). Wiedenmaier et al. (2003) found that a user does not benefit from AR instructions when the difficulty level of the installation step is too low.

Our hypothesis is that the complexity of the visual feature must comply with the difficulty level of the assembly step. When performing a simple installation task, a user can gain an advantage from AR only if the displayed visual feature is also simple. For instance, the installation of a mechanical clip should be indicated by a simple visual feature such as text or an arrow. A 3D model of the clip might provide more information than is necessary for this level of difficulty; the user would then require more time to understand the visualization than to install the clip. This research investigates this hypothesis. We compared different visual features for different assembly tasks at different difficulty levels. We use a mechanical pump, which represents a relatively complex product that incorporates multiple substeps. We prepared an AR application with two different sets of visual features. Our baseline for the comparison was a paper-based manual. Users were asked to assemble the pump using either one of the two sets of visual features or the paper manual (between-subjects design).

This article is structured as follows. The next section introduces the related work. In section 3, we explain the metric that associates visual features with different assembly tasks. The user study, hardware setup, and procedure are presented in section 4. We close this article in section 5 with a conclusion.

2. RELATED WORK
AR assistance for manual assembly tasks was one of the first application areas investigated in research. The first application of this type was introduced by Caudell and Mizell (1992), who presented an AR application for wiring harness assembly. Users wear a head-mounted display (HMD), which marks the assembly path for individual wires on a large mounting plate. A subsequent study was conducted by Curtis, Mizell, Gruenbaum, and Janin (1998), who showed the feasibility of this application. Nevertheless, they encountered several usability issues due to hardware and software limitations.

Reiners, Stricker, Klinker, and Müller (1998) introduced an AR application for door lock assembly. The authors' primary focus was on the technical realization of the AR application, in particular on the visualization of 3D parts and paths. Thus, it remains unclear what additional visual features were used. They deployed a proof-of-concept application and did not conduct a user study.

Raghavan, Molineros, and Sharma (1999) developed an AR assembly application that facilitates the assessment of assembly sequences. The tool is designed for manufacturing planning engineers who define and analyze an appropriate assembly sequence. It visualizes 3D models from an assembly planning tool superimposed on physical parts. The assembly planner can test different sequences by performing the assembly tasks; during this process, the planner is guided by the visual features. The authors developed a prototype as a proof-of-concept.

Baird and Barfield (1999; Baird, 1999) investigated the effectiveness of an AR application that assists during a manual assembly task. Subjects were asked to assemble a motherboard using four types of instruction media: paper, model on display (PowerPoint), a video see-through HMD, and an optical see-through HMD. The assembly time and error rate were measured. The results provide a strong indication of the advantages of AR: The assembly time decreased, and the operators made fewer assembly errors.

Boud, Haniff, Baber, and Steiner (1999) investigated whether virtual reality or AR is better for training manual assembly tasks. The authors compared a paper-based instruction (baseline) with a desktop, a virtual reality, and an AR application. The AR application showed virtual 2D frames to indicate the next part to assemble and the assembly location. Users were asked to assemble a contrived mechanical construction, and completion time was measured. The results show that AR yields the fastest completion time. However, the authors had a small sample, and the mechanical parts did not pose an assembly challenge: the assembly consisted of components that simply had to be stacked on a pivot.

Friedrich (2002) presented an overview of the ARVIKA project. The goal was to investigate use cases for AR in product development, production, and service. One use case is assembly assistance and training, which was tested within the ARVIKA project. The research deployed several AR systems as proofs-of-concept. The results indicate that AR reduces the error rate and improves assembly training.

Tang et al. (2003) investigated the effectiveness of spatially overlaid assembly instructions for AR. To this end, the authors compared an AR application with common paper-based instructions, as well as instructions shown on an LCD display. Users were asked (between-subjects design) to assemble Duplo bricks. The error rate, assembly time, and mental workload were measured. Although the results indicate a decrease in error rate and mental workload when using AR, the assembly time was similar to the baseline.

Zauner, Haller, Brandl, and Hartmann (2003) introduced an AR application to aid furniture assembly. Their application indicates the next part to assemble using semitransparent 3D models that cover the physical part. In addition, 3D models and virtual characters also show the assembly location. The authors built a proof-of-concept application that demonstrates feasibility. ARToolkit tracking (Kato & Billinghurst, 1999) was used to track the individual parts, and an HMD was used as the output device.

Reinhart and Patron (2003) introduced an AR assembly system that is connected to a CAD/PDM tool in order to gain access to the relevant assembly data. Their tool uses 3D models and text features to show assembly information. However, the authors focused only on the technical realization of this approach and deployed a prototype. Liverani, Amati, and Caligiana (2004) demonstrated a similar interface between an AR application and a CAD tool, which also addresses the area of assembly. However, neither of these integration exercises included a user study.

Wiedenmaier et al. (2003) investigated the effectiveness of an AR application for assembly assistance. They compared an AR application with a paper manual and an instructor. One goal of their research was to investigate different assembly tasks (e.g., wiring, panel installation) with different difficulty levels. A user study was conducted to assess effectiveness. Users were asked to assemble parts of a vehicle door, and the assembly time was measured. The results show that AR can reduce assembly time when the assembly task is difficult. There was no difference between paper- and AR-based instructions when the tasks were simple.

Pathomaree and Charoenseang (2005) presented an AR assembly assistance system that provides assembly-related graphical information superimposed on top of physical parts that need to be assembled. The authors conducted a user study to compare a visual 2D frame with a 3D frame to indicate the assembly location and to identify the next part to assemble. Users were asked to assemble a simple 2D puzzle. They measured the assembly time and the steps the users needed. The results indicate a decreasing assembly time when 2D features are used instead of 3D features/models.

Yuan, Ong, and Nee (2008) introduced a virtual interactive tool for assembly guidance. The tool is a tracked pen that gives access to assembly information. Their application offers button icons on screen and within the virtual world, which the user can select to get assembly-related information: text features and images of the assembly steps that appear as billboards. This AR assembly tool provides a somewhat different method of guidance in that the user can access additional information on demand. A prototype implementation using a tool train was described to demonstrate their approach, but no user study was presented.

Pang, Nee, Ong, Yuan, and Youcef-Toumi (2006) also introduced an AR application for assembly assistance. Their application indicates the assembly sequence with virtual numbers superimposed on the physical parts. They deployed a proof-of-concept application to demonstrate feasibility.

Siltanen et al. (2007) presented a multimodal AR interface for manual assembly assistance. The AR application provides a speech and gesture interface for interaction. The authors deployed a prototype to demonstrate the capabilities of the system. Users were asked to test this prototype and to compare it with a paper manual; they were asked to assemble a small puzzle-like assembly. Virtual arrows were used to indicate the next part to assemble, and 3D models of the parts indicated the assembly location. There were no installation steps, because the users were only required to align the parts. The authors received qualitative feedback reporting advantages of the AR-based assembly support. However, they did not carry out a formal study.

Seok and Kim (2008) also investigated the efficiency of AR-based assembly instructions. They compared an AR application with a paper manual and a web application. The paper manual and web application used sketches and text to explain the assembly steps. The AR application indicates the location of computer components using virtual 2D frames and text, both superimposed on a motherboard. Users were asked to assemble auxiliary components onto the computer motherboard. The results show the efficiency of AR: The assembly time decreased by up to 60% compared to the other methods.

Hakkarainen, Woodward, and Billinghurst (2008) investigated the capabilities of a mobile AR application on a cell phone for assembly support. The application shows 3D models that indicate the next part to assemble and the assembly location. They conducted a user study to assess the feasibility. Users were asked to use this system to assemble a small puzzle and to answer a qualitative questionnaire (7-point Likert scale). The results show the feasibility of the presented application. However, they did not compare the mobile AR solution with other instruction media.

Song, Jian, Sun, and Gao (2009) developed an AR assembly application with a focus on perception support mechanisms, that is, methods to deal with object occlusion and collision between virtual and physical objects. The article describes the technical realization. A prototype implementation was described to demonstrate the feasibility of the presented mechanisms.

Webel and colleagues (Webel et al., 2013; Webel, Bockholt, & Keil, 2011) investigated the capabilities of tablet computers for assembly and maintenance assistance. The application addresses training for these tasks. It shows simplified 3D models on screen, superimposed on physical objects, which indicate the parts to assemble. Additional visual objects provide handling information. The authors also suggested design criteria for AR assembly applications and proposed several visual widgets as best practice. They conducted a study that asked users to assess an application that follows their criteria. The results indicate the applicability of the criteria. However, they did not compare their suggestions with different possible design criteria.

Chimienti, Iliano, Dassisti, Dini, and Failli (2010) introduced guidelines that facilitate the implementation and design of an AR system for assembly training. The article describes a general procedure for the design of a related AR application, addressing the fields of software design and assembly instruction design. The presented AR application used mostly virtual arrows to indicate the assembly location, text for instructions, and photos to show the parts to assemble. The authors conducted a user evaluation to assess the guidelines. Users were asked to assemble a gearbox, and a qualitative questionnaire was used to obtain the users' opinions. The results indicate the feasibility of the guidelines. The authors also report that AR-supported assembly did not decrease the assembly time. However, objective user data are not presented in the article.

Peniche, Treffetz, Diaz, and Paramo (2012) investigated the efficiency of a combined VR/AR assembly training. They suggested that users start with VR-based assembly training, followed by AR-based training. They conducted a between-subjects study. Users were asked to assemble an electro-mechanical milling machine either with AR support or with instructor support. The results show that the combined VR/AR training was as efficient as conventional instructor-based training. A reduction of assembly time was not observed. However, the authors used 3D models, as well as text and images of the final assembly, as instruction; the users needed time to read the text and to understand the content of the images. The authors also recorded a learning curve, but it does not show differences between the AR-supported and the instructor-supported assembly.

Westerfield, Mitrovic, and Billinghurst (2013) investigated the combination of an AR application with an intelligent tutoring system to train manual assembly steps. The user sees visual information and receives additional information that fosters assembly training. They conducted a user study (between-subjects design) to compare the outcome of the assembly training with and without the intelligent tutor support. Users were asked to assemble auxiliary cards (graphics, network) on a computer motherboard. Simplified semitransparent 3D models showed the assembly location of these parts. The results indicate that the use of an intelligent tutor decreased the assembly time (by up to 30%) in comparison to the conventional AR application. However, they used a computer motherboard, for which the installation of hardware is relatively simple compared to the installation steps in a complex mechanical assembly.

Recently, Hou and Wang (2013) presented a study that investigates the influence of gender in an AR assembly task. Their hypothesis was that the effectiveness of AR-based assembly training depends on gender. They carried out a user study in which subjects were asked to assemble a LEGO robot using either a paper manual or AR instructions. The authors measured the assembly time, the error rate, and the mental workload. The results show that AR-aided assembly results in a decrease in assembly time and error rate. Nevertheless, the results do not indicate a difference between genders. The authors also recorded a learning curve; users were asked to assemble the robot four times. The results indicate better learning when AR instructions are used.

AR usage has also been investigated in related fields such as maintenance and service. For instance, Raczynski and Gussmann (2004) presented an AR system for service and training, mostly for large installations. Their system is part of the STAR project, which investigated the capabilities of AR for training and service tasks. Their AR application addresses service support: it indicates the next service step by superimposing visual features as an overlay on the physical product. They built a prototype and asked users to test it, obtaining qualitative opinions via a questionnaire. The results indicate advantages of the AR solution.

Henderson and Feiner (2009) explored the efficiency of an AR maintenance system for maintenance tasks inside the turret of an armored vehicle. AR was used to direct the user's attention and to provide maintenance-related information. The AR application mostly showed virtual arrows to direct the user's attention, as well as 3D models and features to identify a part and to present maintenance information. The authors conducted a within-subject study with professional maintenance personnel. The results indicate that task localization is faster and that the number of head movements can be reduced. Radkowski, Fiorentino, and Uva (2012) investigated the efficiency of natural interaction for maintenance tasks; in addition to previous research, the feasibility of natural gestures for document navigation and selection was demonstrated.


Recently, Markov-Vetter and Staadt (2013) introduced an AR system for procedural guidance support on the International Space Station, which incorporates the assembly/disassembly of test rigs. Their experiment focused on the reduction of mental workload, and the results show this reduction. Ong, Yuan, and Nee (2008) presented a survey that addresses the area of AR for manufacturing; assembly is part of this survey. It provides an overview of the area and introduces the research.

In summary, the related research presents strong evidence for the effectiveness of AR applications for manual assembly support. The studies presented in Baird and Barfield (1999), Tang et al. (2003), Hou and Wang (2013), Boud et al. (1999), Hakkarainen et al. (2008), Sausman et al. (2012), and Webel et al. (2011) support this assumption. Nevertheless, there is a broad diversity in methods and techniques. Different products were used as assembly examples; some of them did not reflect products of mechanical engineering and were too easy to assemble. Using a vehicle door, Wiedenmaier et al. (2003) found evidence that an operator obtains an advantage from AR support only when the assembly task is difficult, although the user obtains useful information from the content shown. However, the authors did not compare different visual features. It may be possible that the user's performance will increase if a simple visual feature is used during a simple assembly task. The results of Pathomaree and Charoenseang (2005) and Seok and Kim (2008) support this assumption: these authors compared different visual features and showed that simpler visual features performed better. However, they used a puzzle and a computer motherboard, products in which part installation is considered simple. In our research, we compared different visual features in tasks with different difficulty levels. We use a mechanical pump as the subject product to assemble, whose assembly incorporates several substeps and difficult installation steps.

3. VISUAL FEATURES FOR MANUAL ASSEMBLY
This section presents the theoretical framework of the research. The objective is to identify plausible visual features for assembly support, which can be assessed in a user study. It addresses the fields of manual assembly and visualization. In addition, our approach for this study is presented.

3.1. Manual Assembly
The term manual assembly refers to a manufacturing process that involves the use of various alignment and fastening methods in order to attach two or more mechanical parts and/or subassemblies together (Ikeuchi & Suehiro, 1992). Assembly tasks incorporate the manual operations of manipulation, alignment, joining, adjustment, and checking, as well as supporting operations (e.g., identification, carrying). This includes processes for permanent or nonpermanent assembly. The single tasks are performed in a sequence, step by step; the result is the final product: a mechanical assembly, a composition of single parts, or of single parts and subassemblies. Modern products also incorporate parts and/or subassemblies from several domains such as electronics.

The degree of difficulty of a manual assembly operation is, in general, a parameter that indicates the mental and physical effort required to accomplish a task. The degree of difficulty for manual assembly is not clearly defined in the literature. Most work addresses a common concept of difficulty and complexity that expresses complexity as a factor depending on the number of items involved in an action and the number of associations between those items. Adapted to the field of manual assembly and manufacturing, this difficulty concept applies to three aspects of manual assembly operations: the types of manual operations, the involved parts, and ergonomic aspects.

The types of manual operations address the different manual actions that an operator performs at his or her workstation; these include (Boothroyd, Dewhurst, & Knight, 2010; Nof, Wilhelm, & Warnecke, 1997): identification, handling, alignment, joining, adjustment, and checking.

• Identification (supporting action): The operator needs to identify the parts to assemble, as well as the tools needed to assemble a part (in mechanical engineering, this task is considered a supporting action because it contributes only indirectly to the assembly operation).
• Handling: This task incorporates all manual material moving operations, for example, carrying a part from storage to a workbench.
• Alignment: The operator locates the active surface (the surface where the function of a part comes into effect, e.g., it transfers force) of a mechanical part relative to the active surface of a second part.
• Joining: The operator creates a fixed or detachable connection between two parts.
• Adjustment: The operator adjusts the setting or the location of a part or a connection, for example, changing the torque of a nut-bolt connection.
• Checking: The operator assesses the quality of a connection, adjustment, or alignment.

The single tasks in these classes are well defined in the industrial assembly and manufacturing literature. Although different models exist to estimate the manufacturing time, quality, and difficulty in advance (Nof et al., 1997; Zhu, Hu, Koren, & Marin, 2008), as well as for productivity measurement (Cocca & Alberti, 2010; Jamil & Mohamed, 2011), these models are intended to optimize the layout of the work area to maximize assembly efficiency. Their feasibility for interface design is unknown.

For interface development, a separate assessment is required because an AR interface developer may select several visual features to support a particular task. For instance, frame or arrow features could be used to support part identification, 3D models for the alignment task, and abstract features (e.g., animated arrows that show a movement direction) for the joining operations. To enhance a visual feature for one task, the user's performance in that task needs to be known.

The level of difficulty can also be assessed by considering the number of involved parts, the interaction between parts, and the interactions between parts and the operator (tool handling). In particular:

• Maximum possible orientations: a count of the number of possible moving directions of a part (Boothroyd et al., 2010). The higher the number of possible orientations, the higher the difficulty level.
• Number of connection/contact points: the number of active surfaces that the operator needs to keep aligned during a joining procedure. As the number of contact points to be considered increases, the difficulty of part alignment increases.
• Number of involved parts to assemble: The more parts and tools the user has to handle, the more difficult the assembly task (Nof et al., 1997). As the number of parts per assembly task increases, the chance of assembly errors increases.
• Level of hierarchy of parts and operations: Users consider assembly operations as a hierarchy; the hierarchy addresses operations as well as parts and subassemblies (Tversky & Hemenway, 1984). Operations on a lower level must be performed before operations on a higher level. Subassemblies are considered assemblies on a lower level of the hierarchy; they must be assembled before the entire product can be assembled. The larger the hierarchy, the more complex and difficult the assembly.

The ergonomic issues address human factors of work and workload:

• Visibility: The part to assemble can be covered or partially hidden (Boothroyd et al., 2010). The more hidden the part, the more difficult the assembly task.
• Posture of the operator: According to Rapid Upper Limb Assessment and Rapid Entire Body Assessment, two common models to assess the posture of an operator, an assembly task is considered more difficult if the operator has to assemble a part in a lying, crouching, sitting, standing, or overhead position (Jovanovic, Tomovic, Cosic, Miller, & Ostojic, 2007).

These parameters are difficult to quantify. However, interface prototyping with formal user studies can facilitate assessment; a rough scoring sketch is given below. To our knowledge, the design of AR interfaces corresponding to assembly task difficulty has not been addressed by previous research.
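The article assigns difficulty levels by expert judgment rather than by formula. Purely as an illustration of how the factors above could be combined into the two-level scale used later in this study, consider the following sketch. All field names, weights, and the cutoff are our own assumptions, not values from the paper or from Boothroyd et al. (2010).

```python
from dataclasses import dataclass

@dataclass
class AssemblyStep:
    """Difficulty-related factors discussed above (Boothroyd et al., 2010; Nof et al., 1997)."""
    possible_orientations: int  # count of possible moving directions of the part
    contact_points: int         # active surfaces to keep aligned while joining
    involved_parts: int         # parts and tools handled in this step
    hierarchy_depth: int        # nesting level of subassemblies
    part_hidden: bool           # visibility of the assembly location
    awkward_posture: bool       # RULA/REBA-style posture penalty

def difficulty_level(step: AssemblyStep) -> str:
    """Collapse the factors into the two-level scale (low/high) used in this study.
    The weights and the cutoff of 10 are illustrative assumptions only."""
    score = (step.possible_orientations
             + 2 * step.contact_points
             + step.involved_parts
             + step.hierarchy_depth
             + (3 if step.part_hidden else 0)
             + (2 if step.awkward_posture else 0))
    return "high" if score >= 10 else "low"

# Example: a hidden insertion with two contact surfaces rates as difficult.
print(difficulty_level(AssemblyStep(4, 2, 2, 1, True, False)))  # -> high
```

Such a score would only be a starting point; in the study reported below, the low/high levels were instead assigned by inspecting each assembly step of the test object.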

3.2. Graphical Interfaces and Visual Features
One purpose of a graphical interface is to enhance user understanding of data and its related information. The components of a graphical interface are graphical elements or visual features (e.g., an arrow or a 3D model), and their appearance is defined by visual attributes (e.g., color, texture, and brightness). The ideal interface is designed in a way that presents information for a particular task without overwhelming the user with complexity (Agrawala et al., 2003; Zarraonandia et al., 2014). An obstacle to achieving this goal is the limited mental capacity of the prospective user and the limited attention a user can devote to his or her environment. It is impossible to see everything and to react to everything at the same time (Tsotsos, 1990). This limited mental capacity restricts the amount of information a user can obtain from a graphical interface (Wolfe & Horowitz, 2004). These limitations mainly affect the time a user needs to understand the information and the accuracy of that understanding (i.e., how well the user's understanding matches the message the author wants to convey). This limits the user's task performance.

To cope with this limitation, research in the field of scientific and information visualization has investigated the most appropriate visual features for several use cases. In general, it is suggested to use 2D interfaces and 2D visual features if precision and accuracy are a goal of a computer graphics application. In contrast, 3D interfaces and 3D visual features should be used only if the goal is to provide a brief overview of information or to assess spatial structures (Springmeyer, Blattner, & Max, 1992; Tory, Kirkpatrick, Atkins, & Möller, 2006). Several studies (John, Cowen, Smallman, & Oonk, 2001; Smallman, John, & Oonk, 2001; Wickens, Merwin, & Lin, 1994) support this suggestion. Two issues are considered reasons for the limited accuracy of 3D interfaces. First, the user needs time to perceive and process the 3D data. Second, the distorted perception of distances, positions, and angles in 3D interfaces causes uncertainty for the user, which delays the user's actions.

However, research also suggests that the task difficulty and the user's experience must be considered when designing an interface. In general, we can assume that a difficult task requires more information than a simple task. In this case, a 2D interface might be more appropriate for showing information than a 3D interface. Sebrechts and colleagues demonstrated that task difficulty affects users' performance with a graphical interface (Sebrechts, Vasilakis, Miller, Cugini, & Laskowski, 1999). Users need less time to understand complex information if they are shown, for example, a 2D sketch. Study results from Rodriguez (2002) also indicate the advantages of 2D sketches, especially when they are used without text. Zarraonandia et al. (2014) demonstrated similar results for an AR application: their research indicates that AR increases communication performance when simple interfaces with a reduced number of symbols and visual features are used. Nevertheless, users gain advantages from 3D interfaces and 3D visual features when they are experienced computer users (Sebrechts et al., 1999; Swan & Allan, 1998). The more familiar users are with computers, the more easily they map spatial 3D interfaces to their mental model and, thus, the more easily they understand the presented information.

In summary, we can conclude that difficult tasks require visual features that can be easily understood; in this case, 2D visual features and 2D interfaces should be used. Furthermore, 3D visual features can be used when the difficulty level of the task is low.

3.3. Visual Features for Assembly Support
We suggest the use of two different types of visual features for alignment and joining tasks, depending on the degree of difficulty of a task and the complexity of the information that needs to be presented to foster understanding. Figure 1 shows a metric with different interface designs. The columns represent the degree of difficulty of the task, whereas the rows indicate the assumed degree of difficulty of understanding the presented information. The cells a) through d) show the proposed interface designs; each incorporates a different set of visual features.

Interface a) incorporates text on screen and static (virtual) 3D models. The text information describes the task required of a user for a particular assembly step. In general, textual information is omnipresent; it is known that no special training or computer experience is necessary to understand it (Wolfe & Horowitz, 2004). The static 3D model indicates the assembly location. Interface b) uses text to explain the assembly step. To simplify the visual feature, one or multiple 3D arrows are used to indicate the assembly location. The arrows show the fixation point or, if several arrows are used, indicate an assembly path.

[Figure 1 is a 2 × 2 metric. Horizontal axis: degree of task difficulty (low to high). Vertical axis: complexity of the visual interface (high to low), corresponding to the concrete (top) and abstract (bottom) interfaces. Cell a): text on display plus a static 3D model superimposed on the physical part (concrete, low difficulty); cell b): text on display plus 3D arrows (abstract, low difficulty); cell c): text on display plus an animated 3D model (concrete, high difficulty); cell d): text on display plus 3D arrows and a 2D sketch (abstract, high difficulty).]

FIG. 1. Metric of the different visual features for the alignment and joining tasks in manual assembly operations. Note. The indicated part represents the physical part shown in the video image. All other graphical elements present virtual information.
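Read as a lookup table, the metric in Figure 1 maps an (interface style, task difficulty) pair to a set of on-screen features. A minimal sketch of this mapping follows; the feature names are ours, chosen for illustration, and are not identifiers from the study software.

```python
# The four cells of Figure 1 as a lookup table. Keys are
# (interface style, degree of task difficulty); feature names are illustrative.
FEATURE_SETS = {
    ("concrete", "low"):  ("text_on_display", "static_3d_model"),         # cell a)
    ("abstract", "low"):  ("text_on_display", "3d_arrows"),               # cell b)
    ("concrete", "high"): ("text_on_display", "animated_3d_model"),       # cell c)
    ("abstract", "high"): ("text_on_display", "3d_arrows", "2d_sketch"),  # cell d)
}

def features_for(style: str, difficulty: str) -> tuple:
    """Return the visual features to render for one assembly step."""
    return FEATURE_SETS[(style, difficulty)]

print(features_for("abstract", "high"))  # features used for difficult steps in the abstract setup
```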


Interface c) adds information to interface a) by animating the 3D models; the animation shows the assembly method and assembly direction. For the same purpose, a 2D sketch is added to interface d) in addition to the 3D arrow. Interfaces a) and b) are intended to support mechanics in tasks with a low degree of difficulty. Interfaces c) and d) are designed to support assembly steps that are considered difficult. If a task is difficult, we assume that interfaces b) and d) are more efficient than a) and c): the user needs more time to understand the information when a 3D model of the part is presented. To simplify the following discussion, we refer to the upper two interfaces as concrete interfaces and to the lower two interfaces as abstract interfaces.

Note that although the degree of difficulty for actual assemblies ranges continuously between low and high, we divide it into only two distinct levels, low and high. There are two reasons for this: interface usability and the study design. From a usability point of view, interfaces should present similar information in a consistent way in order not to distract the user. Because this rule contradicts the need for different visual features for different degrees of difficulty, we decided to use the minimum separation of two levels. In addition, a finer granularity of the degree of difficulty might result in nondistinguishable results. From a study design point of view, we need significant differences to be able to conclude the validity of our hypothesis.

Assembly instructions for a whole product are presented as a sequence of single steps; every step explains a single assembly operation. According to research in this area (Novick & Morse, 2000), this is what users prefer.

4. USER STUDY
The goal of the user study was to investigate the efficiency of the visual features for AR assembly applications and to validate the hypothesis. We deployed an AR application for assembly support and asked users to assemble a mechanical axial piston motor either with AR support or with a paper manual, which was the baseline for this study. This section outlines the user study by describing the study subject, the experimental hypotheses, the interface setup, the study structure, and the hardware setup.

4.1. Test Setup and Hypotheses
The test object utilized by this study was an axial piston motor that the users were required to assemble (Figure 2). The entire motor measures approximately 7 in. tall, 7 in. wide, and 9 in. long, including the output shaft, and weighs 34 pounds. For this study, the motor consists of 30 parts that have to be assembled. Two additional parts (a ball bearing and a sealing ring) had already been preassembled, as special tools and training are required to assemble them. These tools were not available, and the steps were too difficult to handle for untrained subjects.


FIG. 2. The axial piston motor assembly from Sauer Danfoss.

An 8 mm hex key and a set of snap-ring pliers are required, both within one assembly step. In total, 16 manual assembly steps are necessary to assemble the entire motor.

We assessed the degree of difficulty of the entire assembly of the motor as low, except for two steps: the assembly of a swashplate (referred to as Step 2) and the assembly of a slipper retainer guide (referred to as Step 6). The assembly of the swashplate was difficult because the user had to change the moving direction during installation. The part also has to be inserted into the pump body; thus, the entire process was hidden from view. It also requires that a previously installed part fit correctly into the body: a hole in a piston must be correctly aligned with the body. The assembly of the slipper retainer guide was difficult because the user had to align it loosely on top of three pins; for an inexperienced user, it was not obvious whether the assembly step had been carried out correctly. For all other steps, the degree of difficulty was considered low because the motor provides mechanical guidance that helps the user notice when a step has been completed successfully.

To verify the overall hypothesis, the abstract AR (AAR) interface, the concrete AR (CAR) interface, and the paper-based instructions (PBI) need to be compared in a user study that measures task performance. Figure 3 shows the AAR interface setup. The AAR interface implements the visual features of the type shown in Figure 1b and d, in which a 3D arrow is used to indicate the assembly steps when the difficulty level is considered low, and the arrow plus a 2D sketch is used for steps considered difficult (Steps 2 and 6). The CAR interface setup is displayed in Figure 4. It implements the visual features of the type shown in Figure 1a and c, where interface setup a) is used for simple tasks and c) for all tasks considered difficult (Steps 2 and 6). The colors for all visual features of both interface types were chosen from a set of colors that is considered highly distinguishable (Healey, 1996; Ware & Beatty, 1988).

Paper-based instructions are used as the baseline for the study. The paper instructions show 2D images of the parts that need to be assembled, as well as the appearance of the entire pump after a part has been assembled.

FIG. 3. The abstract AR setup uses text, 3D arrows, and 2D schemas to present the assembly information. [Callouts: spatially registered 3D arrows indicate the assembly location; a 2D schema shows how a part needs to be assembled; a 3D frame encases the next part to assemble; text explains the next step.]

FIG. 4. The concrete AR setup incorporates 2D text, 3D models, and animations to provide assembly information. [Callouts: spatially registered 3D models indicate the assembly location; text explains the next step.]

We assume that the CAR setup is more efficient than the AAR setup, except in Step 2 and Step 6. In these two steps, which are considered difficult, the AAR interface should lead to a more efficient assembly than the CAR interface, as we expect abstract visual features to be faster to recognize. Our hypotheses for this experiment are as follows:

Hypothesis 1 posits that CAR features are more suitable for alignment and joining tasks than AAR elements, except in the two difficult steps. The rationale behind this hypothesis is that 3D models and animations have been shown to be extremely concise indicators of manipulations, whereas abstract elements provide accurate information that is easier to understand.


Hypothesis 2 posits that AAR elements are more suitable for part and tool identification than CAR elements. This is based on the logic that although visually comparing 3D models to physical parts is very intuitive, bounding the physical parts with a frame requires no visual comparison and is therefore both faster and more reliable.

4.2. User Study Structure
We conducted a between-subjects study with 33 participants, who were split into three test cases via random assignment. The participants were students from Iowa State University, mainly with a mechanical engineering background; a few students were from industrial engineering and agricultural engineering. The average age of the participants was 22.5 years, and the gender breakdown was 63.6% male and 36.4% female.

Upon arrival, each volunteer was asked to read and sign an informed consent document. Next, every volunteer was given a brief prequestionnaire to record age, sex, level of education, and field of study or profession. Depending on the test case to which the volunteer had been assigned, he or she was told that assembly instructions would be presented either on the LCD screen in front of them or via a paper instruction manual. They were also told that each on-screen step or each page corresponded to a single assembly step and that their goal was to finish the entire assembly process as efficiently as possible while taking as much time as necessary to feel that the information being presented was fully understood.

The time each participant needed to complete the task was recorded. An experimenter also observed each participant and noted all errors. The errors were classified according to Gharsellaoui, Oliver, and Garbaya (2011): errors were designated as either errors of parts (EoP), in which the subject grasps the wrong assembly component or tool, or errors of orientation (EoO), in which the subject attempts to place a component or use a tool in an incorrect location or with the wrong orientation. Participants were also asked to complete a postquestionnaire in which they were asked how confident they were about the tasks before and after they completed the study. A 5-point Likert scale was used to record the confidence level.

4.3. Hardware and Software Setup
The user study was implemented with a tabletop AR workstation (Figure 5). The primary components were a 24-in. LCD display for the subject and a video camera to observe the working area. The LCD display was mounted at a height of 3.5 ft (center) above the ground to provide an ergonomic working height for a seated position. The video camera (a Logitech Pro 9000) was mounted on a stand 3 ft above the table so that all relevant areas of the working area were inside its field of view. The working area, a table, had a rectangular size of 3 × 3 ft and was 28 in. above the ground.
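As an aside, the measured quantities and the two error classes lend themselves to a simple per-participant record; the sketch below shows one possible structure. The field names are ours, and the article does not describe the actual logging format used in the study.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class ErrorType(Enum):
    EOP = "error of parts"        # wrong component or tool grasped
    EOO = "error of orientation"  # wrong location or orientation attempted

@dataclass
class TrialRecord:
    participant_id: int
    condition: str                      # "CAR", "AAR", or "PBI"
    completion_time_s: Optional[float]  # None if the assembly was not completed
    errors: List[ErrorType] = field(default_factory=list)
    confidence_before: int = 3          # 5-point Likert ratings
    confidence_after: int = 3

    def error_count(self, kind: ErrorType) -> int:
        return sum(1 for e in self.errors if e is kind)

trial = TrialRecord(1, "CAR", 1450.0, [ErrorType.EOO])
print(trial.error_count(ErrorType.EOO))  # -> 1
```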

FIG. 5. The setup for the user study. Note. A tabletop AR application with a display as the output device. [Callouts: the video camera is mounted above the display and aimed at the table; the parts of the pump lie in the working area.]

The entire setup was operated using a desktop computer with an Intel Xeon 3.47 GHz CPU, 6 GB RAM, an NVIDIA Quadro 5000 video card, and a Windows 7 Enterprise 64-bit operating system. The software used for the study is named ARMaker, a self-developed AR application designed for research projects. It relies on OpenSceneGraph, OpenCV, and ARToolkit. OpenSceneGraph (www.openscenegraph.org) is an open source computer graphics application framework that is used for 3D graphics rendering. OpenCV (opencv.org) is an open source computer vision programming framework; it supports video handling within the application. ARToolkit is a marker-based tracking library (Kato & Billinghurst, 1999) that uses fiducial markers for tracking. Markers were attached to several major parts and to the table in order to track the objects that need to be assembled. Untracked parts were placed at fixed locations in the working area.
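The ARMaker software itself is not published with the article. As a rough functional analogue only, the sketch below shows the core loop of marker-based tracking using OpenCV's ArUco module (OpenCV 4.7 or later) in place of ARToolkit: detect a fiducial marker, estimate its pose, and hand the pose to a renderer, which would draw the virtual feature over the video image. The camera intrinsics, marker dictionary, and marker size are placeholder assumptions.

```python
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # placeholder intrinsics
dist = np.zeros(5)                                           # assume no lens distortion
MARKER_LEN = 0.05                                            # marker side length in meters (assumed)

# 3D corners of a square marker centered at its own origin (z = 0).
half = MARKER_LEN / 2
obj_pts = np.array([[-half, half, 0], [half, half, 0],
                    [half, -half, 0], [-half, -half, 0]], dtype=np.float32)

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
    cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None:
        for marker in corners:
            img_pts = marker.reshape(4, 2).astype(np.float32)
            # Marker pose relative to the camera; a renderer (OpenSceneGraph
            # in the original system) would place the virtual feature, e.g.,
            # an arrow, frame, or 3D model, using rvec/tvec.
            _, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
            cv2.drawFrameAxes(frame, K, dist, rvec, tvec, MARKER_LEN)
    cv2.imshow("AR preview", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```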

4.4. Results
The results for average total time and average overall error rate are depicted in Figure 6 and Figure 7, respectively. Both figures show the three test cases along the horizontal axis and either the total time in seconds or the total number of errors committed on the vertical axis, with minimum and maximum values illustrated using error bars. Figure 6 considers only results of users who completed the entire assembly. In total, 27 out of 33 volunteers were able to complete all tasks. The remaining six made assembly mistakes that prevented completion of the assembly, as subsequent parts did not fit into their designated positions; these subjects had to stop the assembly assignment. Figure 7 distinguishes between EoP (slashed pattern) and EoO. The first refers to improperly selected parts; the second refers to assembly errors in which users misaligned or did not correctly fix a part.

FIG. 6. Average completion times for correct assemblies. Note. CAR = concrete augmented reality; AAR = abstract augmented reality; PBI = paper-based instruction. [Y axis: time (s).]

FIG. 7. Average overall error rates. Note. EoO = errors of orientation; EoP = errors of parts; CAR = concrete augmented reality; AAR = abstract augmented reality; PBI = paper-based instruction. [Y axis: number of errors.]

An initial appraisal of the results shows that the AAR setup has the highest average error rates, as well as the longest average completion times. To evaluate the hypotheses, t tests and analyses of variance were applied.

Hypothesis 1 posited that concrete AR elements are more suitable for indicating part and tool manipulation than abstract AR elements. This hypothesis was evaluated by specifically examining EoO. The EoO of the CAR and AAR setups were used to calculate significance. An F test was conducted, yielding F(8, 13) = 0.190 and p > .9, indicating homoscedasticity. Consequently, a one-tailed t test for two-sample, homoscedastic data was conducted, t(21) = 4.264, p < .01. Thus, it can be concluded with 99% confidence that the true average EoO rate for the CAR test case is lower than that for the AAR test case. In addition, this hypothesis was evaluated by examining the total time spent on Assembly Steps 2 and 6, which were observed to be especially prone to EoO (Figure 8). Two F tests found F(4, 4) = 1.645, p > .3 for Step 2 and F(4, 4) = 0.270, p > .8 for Step 6. Consequently, two-tailed t tests for two-sample, homoscedastic data were conducted for both steps, yielding t(8) = 0.585, p > .5 for Step 2 and t(8) = 0.376, p > .7 for Step 6. Thus, this part of the result does not support the hypothesis.
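The procedure above, a variance check followed by a pooled t test, can be retraced with the sketch below. The per-subject arrays are hypothetical stand-ins shaped only to match the reported degrees of freedom, F(8, 13) and t(21); the article reports the test statistics but not the raw data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject EoO counts (9 CAR subjects, 14 AAR subjects).
eoo_car = np.array([0, 1, 0, 0, 1, 0, 2, 0, 1], dtype=float)
eoo_aar = np.array([2, 3, 1, 4, 2, 3, 1, 2, 3, 4, 2, 1, 3, 2], dtype=float)

# F test for equality of variances decides homo- vs. heteroscedasticity.
f = np.var(eoo_car, ddof=1) / np.var(eoo_aar, ddof=1)
dfn, dfd = len(eoo_car) - 1, len(eoo_aar) - 1
p_f = 2 * min(stats.f.cdf(f, dfn, dfd), stats.f.sf(f, dfn, dfd))

# With compatible variances, use a pooled (homoscedastic) two-sample t test;
# halve the two-sided p value for the one-tailed hypothesis (CAR < AAR).
t, p_two = stats.ttest_ind(eoo_car, eoo_aar, equal_var=(p_f > .05))
p_one = p_two / 2 if t < 0 else 1 - p_two / 2
print(f"F({dfn},{dfd}) = {f:.3f} (p = {p_f:.2f}), "
      f"t({dfn + dfd}) = {t:.3f}, one-tailed p = {p_one:.4f}")
```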

FIG. 8. Results for Steps 2 and 6; both steps are considered difficult. Note. CAR = concrete augmented reality; AAR = abstract augmented reality. [Y axis: time (s).]

FIG. 9. Results of the questionnaire. Note. CAR = concrete augmented reality; AAR = abstract augmented reality; PBI = paper-based instruction. [Y axis: average ranking difference; left panel: confidence increase for the same task; right panel: confidence increase for another task.]

Hypothesis 2 posited that abstract AR elements are more suitable for part and tool identification than concrete AR elements. This hypothesis was evaluated by specifically examining EoP. The EoP average was lower for the CAR case than for the AAR case. An F test was conducted between the two EoP data sets, with F(8, 13) = 1.223, p > .3, dictating that the data be treated as homoscedastic. A two-tailed t test for two-sample, homoscedastic data was conducted, t(21) = 0.123, p > .8, which is insufficient to support our hypothesis.

Figure 9 shows the results of the questionnaire. The results on the left show the confidence in performing the same task again; the results on the right show the confidence in applying the knowledge to a similar but different task. The values indicate the difference between the confidence reported before and after the study. The results show that both AR setups result in a higher confidence level. Two F tests found F(9, 13) = 0.986, p > .4 (CAR vs. PBI) and F(8, 9) = 0.206, p > .9 (AAR vs. PBI), indicating homoscedasticity for both. Two t tests were applied, which found t(22) = 2.254, p < .02 (CAR vs. PBI) and t(17) = 0.709, p > .2 (AAR vs. PBI). Thus, the results show a significant difference between the CAR and PBI setups; there is no significant difference between AAR and PBI.


4.5. Discussion
The goal of the study was to evaluate the dependency between the level of difficulty of a manual assembly task and the visual feature used to present the assembly steps. Wiedenmaier et al. (2003) concluded that a user gains an advantage from AR only when the assembly step is difficult. However, results from Pathomaree and Charoenseang (2005) and Seok and Kim (2008) indicated that simpler visual features can be used when 3D models overwhelm the user. The study presented in this article was designed to investigate the assumption of Wiedenmaier et al. and considers different visual interfaces for different levels of task difficulty.

The results indicate that users gain advantages when using a concrete AR setup in assembly steps that are considered simple. Comparing the abstract AR interface with the concrete AR interface, the latter reduces the overall error rate and assembly time. This complies with the results of Pathomaree and Charoenseang (2005) and Seok and Kim (2008). In comparison to paper-based instruction, users may not realize time or error advantages from either the CAR or the AAR interface, which does not comply with the results presented in most of the reported studies. One reason for this may be the amount of information presented in the paper manual: pictures of the parts to assemble and of the assembly after completing a step were shown. Thus, users with average mechanical skills were able to reproduce the result shown in the image. The pictures in the instructions were color images, which also provide additional information about the structure of the overall assembly. We assume that these details and the additional color information may have influenced the results. A second reason may be the subject of the study, the axial piston motor. In comparison to studies that employ LEGO and Duplo bricks or computer parts as the assembly subject, the motor can be considered more difficult to assemble than computer interface cards and Duplo bricks.

Figure 8 shows ambiguous results. We expected a decrease in assembly time in Step 2 and Step 6 when using the AAR interface due to the higher accuracy of the 2D sketches. The time measurement of Step 6 complies with our expectation. In Step 2, however, CAR outperforms AAR; at this point the data do not comply with our hypothesis. Nevertheless, we feel that more data could yet validate this hypothesis. During the experiments, we observed several alignment problems within the two "difficult" steps. We also observed that the additional information (2D sketches and animations) was helpful for understanding the task. These observations suggest that our hypothesis may still hold. One reason for the results may be a mistaken initial assumption about the degree of difficulty. The assembly procedures for Step 2 and Step 6 are different: although the final position of the swashplate in Step 2 was hidden, mechanical guidance actually indicated the correct alignment, whereas in Step 6, no mechanical guidance or mechanical marks indicated the correct alignment. Perhaps Step 6 was more difficult than Step 2.


Figure 9 indicates that AR generates more confidence. This is probably due to the graphical representation of the visual feature, which is coaligned with the physical part: the user knows the correct alignment position, which helps to establish confidence. The paper-based instructions show only the part and the assembly after installing the parts; there is no information that indicates how to align a part. In addition, the user can interact with the physical parts when using AR: he or she can rotate and move a part and see the instructions from different viewing angles. However, the user may be mistaken, as the increase in confidence may not be justified and could be based only on the user's gut feeling. On the other hand, the decrease in error rate and time when comparing CAR with AAR underpins the higher confidence.

5. CONCLUSION AND FUTURE WORK
This research investigated different visual interfaces to support manual assembly with AR. The hypothesis was that the visual interface must comply with the degree of difficulty of a particular task. An AR application with two different interface approaches (CAR and AAR) was deployed, as well as a paper-based setup. For CAR and AAR, two different sets of visual features were prepared to correspond with the expected level of difficulty. Users were asked to assemble an axial piston motor.

The results indicate that CAR outperforms paper and abstract AR. Nevertheless, the results are not significant; thus, they do not support our hypothesis, and we conclude that the difficulty level of the task does not affect the user's assembly performance. Nevertheless, we can conclude that a CAR setup outperforms an AAR setup. In addition, the results of the questionnaire show a significant increase in the users' confidence when using AR. Thus, we conclude that an AR application increases the user's confidence to carry out the same task again and to transfer the learned skills to other tasks.

In future research, we will enhance the study design and conduct a similar study. First, we need to reconsider the assessment of the degree of difficulty of the assembly tasks; we will also consider switching to a different assembly, which could make the assessment more obvious. Second, the paper manual's content will be adapted to comply with manuals that are used on the factory floor in many companies. Typical factory instructions provide assembly information as text and 2D sketches instead of photos; photos may simplify the task for untrained volunteers such as college students. It is likely that the photos and the detailed information shown in these pictures affected the study.

REFERENCES

Agrawala, M., Phan, D., Heiser, J., Haymaker, J., Klingner, J., Hanrahan, P., & Tversky, B. (2003). Designing effective step-by-step assembly instructions. ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH 2003, 22, 828–837.

Aguzzi, M., & Lamborelle, O. (2012). A proposal of visual guidelines for onboard procedures. Proceedings of the 63rd International Astronautical Federation Congress, Naples, Italy, 4962–4971.
Azuma, R. (1997). A survey of augmented reality. Presence: Teleoperators and Virtual Environments, 6, 355–385.
Baird, K. M. (1999). Evaluating the effectiveness of augmented reality and wearable computing for a manufacturing assembly task (Unpublished master's thesis). Virginia Polytechnic Institute and State University, Blacksburg.
Baird, K. M., & Barfield, W. (1999). Evaluating the effectiveness of augmented reality displays for a manual assembly task. Virtual Reality, 4, 250–259.
Boothroyd, G., Dewhurst, P., & Knight, W. A. (2010). Product design for manufacture and assembly (3rd ed.). Boca Raton, FL: CRC Press.
Boud, A., Haniff, D., Baber, C., & Steiner, S. (1999). Virtual reality and augmented reality as a training tool for assembly tasks. Proceedings of the 1999 IEEE International Conference on Information Visualization, 32–36.
Caudell, T., & Mizell, D. (1992). Augmented reality: An application of heads-up display technology to manual manufacturing processes. Proceedings of the International Conference on System Sciences, 659–669.
Chimienti, V., Iliano, S., Dassisti, M., Dini, G., & Failli, F. (2010). Guidelines for implementing augmented reality procedures in assisting assembly operations. Proceedings of the 5th IFIP WG 5.5 International Precision Assembly Seminar, 174–179.
Cocca, P., & Alberti, M. (2010). A framework to assess performance measurement systems in SMEs. International Journal of Productivity and Performance Management, 59, 186–200.
Curtis, D., Mizell, D., Gruenbaum, P., & Janin, A. (1998). Several devils in the details: Making an AR application work in the airplane factory. Proceedings of the International Workshop on Augmented Reality, 47–60.
Friedrich, W. (2002). ARVIKA—Augmented reality for development, production and service. Proceedings of the International Symposium on Mixed and Augmented Reality, 3–4.
Gharsellaoui, A., Oliver, J., & Garbaya, S. (2011). Benchtop augmented reality interface for enhanced manual assembly. Proceedings of the IEEE Simulation in Aerospace Conference, 1–10.
Hakkarainen, M., Woodward, C., & Billinghurst, M. (2008). Augmented assembly using a mobile phone. Proceedings of the IEEE International Symposium on Mixed and Augmented Reality, 167–168.
Healey, C. G. (1996). Choosing effective colours for data visualization. Proceedings of the Seventh Annual IEEE Visualization '96, 263–270.
Henderson, S., & Feiner, S. (2009). Evaluating the benefits of augmented reality for task localization in maintenance of an armored personnel carrier turret. Proceedings of the 2009 8th IEEE International Symposium on Mixed and Augmented Reality, 135–144.
Hou, L., & Wang, X. (2013). A study on the benefits of augmented reality in retaining working memory in assembly tasks: A focus on differences in gender. Automation in Construction, 32, 38–45.
Ikeuchi, K., & Suehiro, T. (1992). Towards an assembly plan from observation. Proceedings of the 1992 IEEE International Conference on Robotics and Automation, 2171–2177.
Jamil, C. M., & Mohamed, R. (2011). Performance measurement system (PMS) in small medium enterprises (SMEs): A practical modified framework. World Journal of Social Sciences, 1, 200–212.
John, M. S., Cowen, M., Smallman, H., & Oonk, H. (2001). The use of 2D and 3D displays for shape-understanding versus relative-position tasks. Human Factors, 43, 79–98.
Jovanovic, V., Tomovic, M. M., Cosic, I., Miller, C., & Ostojic, G. (2007). Ergonomic design of manual assembly workplaces. 2007 ASEE Illinois/Indiana Section Conference, 1–8.
Kato, H., & Billinghurst, M. (1999). Marker tracking and HMD calibration for a video-based augmented reality conferencing system. Proceedings of the 2nd International Workshop on Augmented Reality, 85–94.
Liverani, A., Amati, G., & Caligiana, G. (2004). A CAD-augmented reality integrated environment for assembly sequence check and interactive validation. Concurrent Engineering: Research and Applications, 12, 67–77.
Markov-Vetter, D., & Staadt, O. (2013). A pilot study for augmented reality supported procedure guidance to operate payload racks on-board the International Space Station. Proceedings of the International Symposium on Mixed and Augmented Reality, 1–6.
Neumann, U., & Majoros, A. (1998). Cognitive, performance, and systems issues for augmented reality applications in manufacturing and maintenance. Proceedings of the Virtual Reality Annual International Symposium, 1–8.
Nof, S. Y., Wilhelm, W. E., & Warnecke, H.-J. (1997). Industrial assembly. London, UK: Chapman & Hall.
Novick, L. R., & Morse, D. L. (2000). Folding a fish, making a mushroom: The role of diagrams in executing assembly procedures. Memory & Cognition, 28, 1242–1256.
Ong, S., Yuan, M., & Nee, A. (2008). Augmented reality applications in manufacturing: A survey. International Journal of Production Research, 46, 2707–2742.
Pang, Y., Nee, A. Y., Ong, S. K., Yuan, M., & Youcef-Toumi, K. (2006). Assembly feature design in an augmented reality environment. Assembly Automation, 26, 34–43.
Pathomaree, N., & Charoenseang, S. (2005). Augmented reality for skill transfer in assembly task. Proceedings of the 14th IEEE International Workshop on Robots and Human Interactive Communication, 500–504.
Peniche, A., Treffetz, H., Diaz, C., & Paramo, G. (2012). Combining virtual and augmented reality to improve the mechanical assembly training process in manufacturing. Proceedings of the 2012 American Conference on Applied Mathematics, 292–297.
Raczynski, A., & Gussmann, P. (2004). Services and training through augmented reality. 1st European Conference on Visual Media Production, 263–271.
Radkowski, R., Fiorentino, M., & Uva, A. E. (2012). 2D/3D technical documentation navigation using natural interaction and augmented reality for maintenance. Proceedings of the International Conference for Tools and Methods in Competitive Engineering, 841–850.
Raghavan, V., Molineros, J., & Sharma, R. (1999). Interactive evaluation of assembly sequences using augmented reality. IEEE Transactions on Robotics and Automation, 15, 435–449.
Reiners, D., Stricker, D., Klinker, G., & Müller, S. (1998). Augmented reality for construction tasks: Doorlock assembly. Proceedings of the 1st International Workshop on Augmented Reality, 31–46.
Reinhart, G., & Patron, C. (2003). Integrating augmented reality in the assembly domain—fundamentals, benefits and applications. CIRP Annals—Manufacturing Technology, 52, 5–8.
Rodriguez, M. A. (2002). Development of diagrammatic procedural instructions for performing complex one-time tasks. International Journal of Human–Computer Interaction, 14, 405–422.
Sausman, J., Samoylov, A., Harkness Regli, S., & Hopps, M. (2012). Effect of eye and body movement on augmented reality in manufacturing domain. IEEE International Symposium on Mixed and Augmented Reality 2012, 315–316.
Sebrechts, M., Vasilakis, J., Miller, M., Cugini, J., & Laskowski, S. (1999). Visualization of search results: A comparative evaluation of text, 2D, and 3D interfaces. Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 3–10.
Seok, K.-H., & Kim, Y. S. (2008). A study on providing prompt assembly information using AR manual. Third 2008 International Conference on Convergence and Hybrid Information Technology, 693–695.
Siltanen, S., Hakkarainen, M., Korkalo, O., Salonen, T., Sääski, J., Woodward, C., . . . Potamianos, A. (2007). Multimodal user interface for augmented assembly. 2007 International Workshop on Multimedia Signal Processing, 78–81.
Smallman, H., John, M., & Oonk, H. (2001). Information availability in 2D and 3D displays. IEEE Computer Graphics and Applications, 21(5), 51–57.
Song, J., Jian, Q., Sun, H., & Gao, X. (2009). Study of the perception mechanisms and method of virtual and real objects in augmented reality assembly environments. 4th IEEE Conference on Industrial Electronics and Applications, 1452–1456.
Springmeyer, R., Blattner, M., & Max, N. (1992). A characterization of the scientific data analysis process. Proceedings of the IEEE Visualization Conference 1992, 235–242.
Swan, R., & Allan, J. (1998). Aspect windows, 3-D visualization, and indirect comparison of information retrieval systems. Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 173–181.
Tang, A., Owen, C., Biocca, F., & Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proceedings of the Conference on Human Factors in Computing Systems, 73–80.
Tory, M., Kirkpatrick, A., Atkins, S., & Möller, T. (2006). Visualization task performance with 2D, 3D and combined displays. IEEE Transactions on Visualization and Computer Graphics, 12, 2–13.
Tsotsos, J. (1990). Analyzing vision at the complexity level. Behavioral and Brain Sciences, 13, 423–469.
Tversky, B., & Hemenway, K. (1984). Objects, parts, and categories. Journal of Experimental Psychology: General, 113(2), 169–193.
Ware, C., & Beatty, J. C. (1988). Using color dimensions to display data dimensions. Human Factors, 30, 127–142.
Webel, S., Bockholt, U., Engelke, T., Gavish, N., Olbrich, M., & Preusche, C. (2013). An augmented reality training platform for assembly and maintenance skills. Robotics and Autonomous Systems, 61, 398–403.
Webel, S., Bockholt, U., & Keil, J. (2011). Design criteria for AR-based training of maintenance and assembly tasks. In R. Shumaker (Ed.), Virtual and mixed reality, Part I: Proceedings of the Human Computer Interaction International Conference (pp. 123–132).
Westerfield, G., Mitrovic, A., & Billinghurst, M. (2013). Intelligent augmented reality training for assembly tasks. In H. Chad Lane, K. Yacef, J. Mostow, & P. Pavlik (Eds.), The 16th International Conference on Artificial Intelligence in Education (pp. 542–551). Memphis, Tennessee.
Wickens, C., Merwin, D., & Lin, E. (1994). Implications of graphics enhancement for the visualization of scientific data: Dimension, integrality, stereopsis, motion, and mesh. Human Factors, 36, 44–61.
Wiedenmaier, S., Oehme, O., Schmidt, L., & Luczak, H. (2003). Augmented reality (AR) for assembly processes design and experimental evaluation. International Journal of Human–Computer Interaction, 16, 497–514.
Wolfe, J., & Horowitz, T. (2004). What attributes guide the deployment of visual attention and how do they do it? Nature Reviews Neuroscience, 5, 495–501.
Yuan, M., Ong, S., & Nee, A. (2008). Augmented reality for assembly guidance using a virtual interactive tool. International Journal of Production Research, 46, 1745–1767.


Zarraonandia, T., Aedo, I., Díaz, P., & Montero Montes, A. (2014). Augmented presentations: Supporting the communication in presentations by means of augmented reality. International Journal of Human–Computer Interaction, 30, 829–838.
Zauner, J., Haller, M., Brandl, A., & Hartmann, W. (2003). Authoring of a mixed reality assembly instructor for hierarchical structures. The Second International Symposium on Mixed and Augmented Reality, 273–282.
Zhu, X., Hu, S. J., Koren, Y., & Marin, S. P. (2008). Modeling of manufacturing complexity in mixed-model assembly lines. Journal of Manufacturing Science and Engineering, 130, 1–10.

ABOUT THE AUTHORS

Rafael Radkowski is Assistant Professor in the Department of Mechanical Engineering at Iowa State University. His research addresses augmented reality, in particular computer vision-based tracking and human–computer interaction, for use cases such as manual assembly assistance, inspection/non-destructive evaluation, and product development.

Jordan Herrema works as an engineer at Honeywell. He received a master's degree from Iowa State University in 2013. His research addressed augmented reality applications for assembly support, with a focus on interface design.

James Oliver directs the Virtual Reality Applications Center and its Interdepartmental Graduate Program in Human–Computer Interaction at Iowa State University. His research spans a wide array of emerging interface technologies, encompassing computer graphics, geometric modeling, virtual and augmented reality, and collaborative networks for applications in product development and manufacturing.