Computers & Graphics 25 (2001) 755–763
A tangible AR desktop environment

Holger Regenbrecht*, Gregory Baratoff, Michael Wagner

DaimlerChrysler AG, Research and Technology 3, Virtual Reality Competence Center, FT3/EV, Wilhelm-Runge-Strasse 11, 89013 Ulm, Germany

*Corresponding author. Tel.: +49-731-505-4307; fax: +49-731-505-4224. E-mail address: [email protected] (H. Regenbrecht).
Abstract

We present an AR desktop environment which integrates the standard 2D computer desktop into an augmented 3D space. The underlying physical space is given by the standard office desk used in everyday work. Instead of sitting only in front of a computer screen, the user wears a video see-through head mounted display and interacts with the environment in both tangible and virtual ways. In this augmented environment, standard 2D application windows are attached to physical clipboards and can be freely positioned in the working space by simply moving the clipboard. Additionally, 3D content can be brought into and manipulated within the same space. To support interaction with 3D content we have experimented with a circular platform, on which a 3D model can be placed, and which the user can turn in a natural tangible way. By fusing familiar desktop utensils, two-dimensional computer desktop applications, and three-dimensional models, we provide a seamless transition from the traditional 2D computer desktop to a 3D augmented working environment. The paper describes the concept and implementation of the system and illustrates some interfaces and interaction techniques used in the context of CAD engineering. © 2001 Published by Elsevier Science Ltd.

Keywords: Augmented reality; Human-computer interface; Tangible user interface; Virtual reality; Computer aided design
1. Introduction

Augmented reality (AR) attempts to enrich a user's real environment by adding virtual objects (3D models, 2D textures, textual annotations, etc.) to it (see [1,2] for definitions). The goal is to create the impression that the virtual objects are part of the real environment. This will arguably be the case if they look, sound, feel, and move like physical objects. In AR, this is achieved by visual, acoustic, and haptic rendering of the virtual objects in register with the real world. The promise of AR in the workplace is that by believably augmenting the workplace with task-related virtual objects the user's productivity can be increased. The users of the AR system experience the augmented environment through special display devices, which are either worn on the body or are placed in the working environment.
For example, head mounted displays (HMDs) or projection surfaces are used for visual rendering, headphones or surround sound systems for acoustic rendering, and force feedback devices for haptic rendering. From a technical point of view, AR faces three major challenges: (1) to generate high quality rendering, (2) to precisely register (in position and orientation) the virtual objects with the real environment, and (3) to do so in interactive real-time. Additionally, ergonomics and usability aspects must be taken into account. The AR gear should be comfortable to wear and not too encumbering, especially if it is to be used over prolonged periods of time. Our general goal is to improve work processes in industrial design and production by introducing novel digital tools. Since it is neither feasible nor desirable to replace existing tools with new ones from one day to the next, it is important to integrate existing tools and work practices with new ones, so as to provide a smooth transition to the new methodology. In the research presented in this article, our focus is on designing the future workplace for office workers in general and for
CAD designers in particular. Such a workplace currently consists of a traditional office desk with various desktop utensils, and of a desktop computer operated using keyboard and mouse through a 2D user interface. More specialized CAD workplaces add some limited form of 3D interaction and visualization, based on special 3D input and output devices. In the next section, we motivate the need for an integration of 2D and 3D CAD information and interfaces by analyzing the current processes in CAD engineering and illustrating the possibilities offered by our approach. The third section describes the concept of the MagicDesk, followed by the implementation details of the first system set up in our laboratory. In Section 5, we discuss this implementation and outline future extensions.
2. Motivation

In order to understand the potential for improvement it is helpful to first analyze the current work processes and habits of traditional office desk workers, of computer desktop users, and of CAD engineers.

2.1. The traditional office desk environment

The traditional office desk environment is characterized by a natural user interface that is tangible and spatial. A host of desktop utensils (pens, erasers, rulers, notepads, PostIt™ notes, clipboards, files, telephone, desk lamp, etc.) populate the desk. During regular office work, the user has to reach for these utensils, grasp them, and bring them in spatial relation to, or in contact with, each other. This forms the tangible aspect of the interface. The user will grasp a pencil to write on a piece of paper, and will additionally use a ruler to draw a precise sketch. He might open a book and place it next to the notepad in order to double check a formula, and might reach for the calculator to perform some calculations related to his or her sketch. An important aspect of such desk work is that it usually involves two-handed interaction. Typically, there will be a central place on the desk where things get done, tangibly: documents get written, sketches drawn, lists checked, etc. In contrast, the peripheral areas of the desk serve as more or less temporary repositories that can be visually scanned and from which information or objects can be retrieved when needed. So, once the sketch or document is completed, the user will move the piece of paper to the side, possibly on top of one of several piles on his desk, or he might attach it to a clipboard, so he can pass it to a colleague or take it with him to a meeting. To remind himself to take care of some important matter, he might write a PostIt™ note and stick it someplace where he can easily
visually locate it again. This spatial aspect of the interface is central to productive office work, since it allows the user to take advantage of the desk as an external working memory that can be efficiently accessed, both visually and tangibly.

2.2. The 2D computer desktop environment

The introduction of the computer has tremendously simplified many office-related tasks, such as filing, searching, letter writing, report generation, and many others. The modern day desktop computer is equipped with a 2D user interface that in many ways mimics the traditional office desk. Windows containing documents can be freely positioned on the computer monitor, can be iconified, closed, opened, and moved around using a 2D mouse. Since these and other operations are similar to operations performed with the hands at the traditional office desk, this type of interaction has been named "direct manipulation". Some of the naturalness of the interaction has been lost, however, since the tangibility present in the physical direct manipulation of objects has been abstracted away in the process. But perhaps the main problem with this user interface is that it was designed to be a replacement for the traditional office desk. As a consequence, it does not integrate the desk space, bypassing the traditionally used work and repository areas. Since a computer monitor usually has a small area compared to the entire space above the surface of an office desk, the spatial aspect of the office desk environment is largely lost. For this reason, and because not all traditional desktop utensils have an analogue in the computer, most office workers perform some of their tasks with traditional tools at the desk. Sometimes this is even the case for tasks that could be performed with the computer. Given a document in electronic form to proofread or to correct, quite often people will print out a copy, place the document on the desk surface to read it, make corrections with a pen, and finally integrate the changes into the electronic document. This example and other similar ones show that the 2D WIMP (Windows, Icons, Menus, Pointer) interface cannot (optimally) support all office tasks, and that it would be desirable to integrate it with the traditional office desk environment.

2.3. The 3D CAD workplace environment

The CAD engineer uses specialized programs to create, visualize, and modify 3D models of objects. When working with 3D models, he might switch to a special 3D input device such as a Spacemouse™. In contrast to the 2D user interface, where the mouse is the standard input device, there is no standard 3D input device, so the engineer may well use a plethora of different devices, each for a special purpose. In order to
get a good impression of the 3D shape of object models, the engineer will use some form of stereo-enabled display, such as an active stereo graphics system with shutter glasses. Sometimes it is necessary to visualize a life-size model of a fairly large object, such as a car. In this case the engineer would use a large-size VR display, such as a CAVE-like environment or a large projection screen. Since such displays will most probably be located in a specially set up room, he will have to leave behind his office desk with all its useful and familiar utensils. While working with 3D models, the engineer might need to get some information from a 2D application. Since the devices employed when working in the 3D mode are different from those for working in 2D, accessing information from the 2D application is a time-consuming task, and places a physical (grasping a different input device, taking off the shutter glasses) and cognitive burden (context switch to a different application and interaction form) on him. There have been previous attempts at combining 2D WIMP applications with 3D environments [3,4]. In [3], 2D X windows are transparently displayed on an HMD using an overlay technique, their positions corresponding to tracked 3D locations. Because of the overlaying, a window is always oriented perpendicular to the user's line of sight and has a fixed size. In contrast, [4] uses texture mapping to map the portion of the frame buffer corresponding to a window onto an arbitrarily oriented plane in 3D space. This is also the approach taken in our system. While the 1 Hz update rate of their system was not sufficient for interactive work, we achieve interactive update rates by profiting from faster hardware and by employing the compression techniques described in [14]. In contrast to [3,4], which exclusively target the X Window System, we have based our system on the Microsoft Windows platform to reflect today's migration of CAD working environments towards PC-based solutions. The main limitation of the systems presented in [3,4] is their lack of support for comprehensive and intuitive interaction techniques. For example, they do not provide a natural way of positioning 2D windows in 3D space. In our system we have addressed this issue by adapting the tangible user interface techniques described in [7,8]. One principal question is how to interact with 2D applications in 3D space. The two main approaches are: using a 3D pointing device [4] or using the conventional 2D mouse [3]. In our system we use a single physical device, the 2D/3D mouse, to support both forms of interaction. While these early approaches simply combine 2D applications with 3D environments, we feel that a stronger interaction between these two worlds is necessary. For example, mechanisms should be provided for accessing the same data through either 2D or 3D
interfaces, switching dynamically to whichever is more appropriate for a specific task.

2.4. Evaluation

In summary, we have shown that the 2D WIMP interface is not well integrated with the traditional desktop on the one hand and with 3D user interfaces on the other. Although the 2D computer desktop introduces functionality not available with, or superior to, the traditional office desk, and although a 3D user interface has specific advantages over the 2D computer desktop user interface, they form three separate worlds, since they are based on different representations, and each offers different interaction devices and metaphors to manipulate objects and documents. For these reasons it is difficult to exchange documents and objects between them. Moreover, even when an exchange is possible, it is not seamless, since it forces the user to consciously perform a context switch. This reduces his effectiveness, since it restricts his ability to access the functionality afforded by the system as a whole. Bringing together the different dimensions of interfaces [5] into one integrated configuration therefore seems to be a very promising approach. The next section presents our concept of a system which combines all three of these environments in one.
3. Concept

To address the needs described above, we have initiated a concept called the MagicDesk, which provides some first tools and metaphors for a seamless integration of virtual 2D and 3D information into the real world. Although the concept is based on, and limited by, the device technology available today, it serves as a testbed for a number of possible application scenarios. The starting point for the concept is an ordinary CAD desk of an engineer as found in most design and development companies. The engineer sits in front of one or two computer screens and operates his applications with keyboard and mouse. The main applications are: CAD programs, spreadsheet applications, text processors, email programs, group- and org-ware, electronic catalogues, and a web browser. On the desk itself many traditional office desk utensils can be found that the engineer uses in his daily desk work. Of particular interest for the core work of the CAD engineer are: an electronic calculator, reference books and catalogues, tables or table books, folders with documents, and of course paper, pencils, rulers, etc. With the MagicDesk we augment this existing environment with virtual and real elements in two ways: (1) by extending the 2D application space to the whole desk space (instead of limiting it to the monitor), and
(2) by placing a virtual 3D object world into the desk space. None of the traditional utensils are removed; only new objects are added to the desk. In contrast to a projection-based, ubiquitous augmented interface, as for instance introduced by H. Fuchs and colleagues [6], we decided to focus on a head-mounted display (HMD) setup because of the availability of the technology. An HMD can easily be taken off or clipped upwards when convenient, such as when the user wants to work for a longer period of time with the monitor/keyboard/mouse combination. This will especially be the case in the early phases of using the MagicDesk. The concept described here can be used in almost any configuration and does not depend on any particular HMD model. The main goals of the MagicDesk are: (a) providing 3D content and interaction in a desktop environment by integration of VR technology, (b) providing 2D content and interaction in a fairly large working space, (c) inclusion of the standard desktop environment, without substitution of the 2D workflow, (d) seamless integration of the real world, the 2D computer world, and the 3D (VR) world, and (e) natural, intuitive, and consistent interaction techniques within the three domains.

3.1. 2D application space

Besides the two-dimensional working area on the monitor, which is still present, the system allows the main applications to be placed in space using a clipboard metaphor (Fig. 1). Physical clipboards hold standard 2D Windows applications and can be placed on the desk wherever needed. To keep things simple, exactly one 2D application (e.g. Microsoft Word) is attached to each clipboard. The clipboard/application can be laid down on the table, can be put on a document holder, can be exchanged with other users, or can be stashed away in a bag or in a drawer. This tangible way of moving applications is very natural, easy to use, and immediately understandable (see [7,8] for tangible user interfaces). Instead of clicking and dragging with a mouse on the (limited) surface of a computer monitor, the user simply
grabs the clipboard with the application attached to it and moves it to the desired location. In an investigation recently undertaken by Lindeman, Sibert, and Templeman [9], task performance with a tangible interface in VR was better than with non-tangible interfaces. We think that this benefit will also transfer to AR applications such as ours. In addition to the concept of clipping 2D windows to the clipboard, 3D content can also be placed on physical clipboards, thus providing it with the same tangible interface.

3.2. 3D object space

Analyzing the main tasks of a CAD engineer in using 3D geometrical models, one observes that 3D objects are not usually placed in space in an unconstrained manner. Rather, most placement is done relative to different planes of reference. Either the frame of the monitor or the edges of some objects close to the 3D object to be displayed are used to orient the objects in (quasi-) 3D space. In our desktop setup the table surface serves as such a reference frame, since it defines a consistent upward direction. We have developed a 3D object viewing device that allows 3D object models to be viewed from different sides by rotating them around the vertical direction (see Fig. 2). This device, called the "cake platter" because it actually is one, possesses a circular top surface on which virtual 3D object models are placed. The platter is turned simply by rotating the outer ring with one or both hands. Since this is a natural interaction metaphor, novice users intuitively know how to use the cake platter right away.

3.3. Main interaction

One of the goals of our concept is that 2D interaction techniques should not only be allowed, but actively supported by the MagicDesk. Therefore, we support 2D interaction in three different ways: (1) by using the standard 2D metaphors with the monitor on the desk, controlling the WIMP applications with
Fig. 1. A real and an augmented clipboard.
Fig. 2. The real and augmented cake platter.
Fig. 3. Mouse in 2D and 3D mode.
keyboard and mouse, (2) by selecting one of the applications running on a clipboard and controlling it in the familiar way, and (3) by interacting with the clipboard applications using a 3D ray-cast device. These procedures are integrated into one interface which allows a transition from old to new metaphors. The computer mouse on the desktop can be turned upside down (actually downside up) and becomes a 3D ray-cast device (Fig. 3), in which case a virtual ray emerges from the front side of the mouse. In this mode, the mouse functions either as a 6DOF device in 3D space, or as a 2D device operating on the surface of a WIMP application. Due to the visual feedback of the interface the user has no difficulty in determining the current mode of operation. 3D operations on the cake platter model can be performed in the 3D mouse mode, where the device is used either as a 3D position device or as a 6DOF ray-cast device. In the 3D position mode only the tip point of the ray (where it intersects a selected object) is used for interaction, e.g. for moving 3D points within the virtual model. In the 6DOF mode the ray is used for manipulating more remote objects or for more complex operations, like turning the whole model or parts of it. It is also used for all selection tasks in 2D and 3D. Two special devices are used in the conceptual setup for tasks identified as primary for CAD engineering: clipping planes and lighting props. A clipping plane slices the virtual object for better interpretation of the
inside structure of the model. The MagicDesk realizes this by providing a rectangular transparent plate with a tangible interface. Moving the plate through the model clips it at the corresponding location and orientation (Fig. 4). The lighting concept follows the same principle of simplicity. A real desk lamp serves as a prop for the virtual light. Moving and turning the real lamp moves and turns a virtual light, which illuminates the virtual model on the cake platter (Fig. 5). In addition to these two 3D interaction techniques, a very basic form of system control, namely file or model selection, is implemented using the MagicBook metaphor introduced by Billinghurst et al. [10] and discussed in [20]. Different 3D models can be chosen out of a book and placed on the cake platter (Fig. 6). Besides the interfaces described here, a couple of other techniques are used in experimental setups within the MagicDesk concept. These techniques will be published in detail in [11].
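The prop-based techniques above all follow the same pattern: a tracked physical object delivers a 6DOF pose, and that pose drives a virtual counterpart (a clipping plane, a light source, a window plane). The following C++ sketch illustrates this pattern for the clipping plane prop; the 4x4 pose matrix, the plane representation, and the function names are our own illustrative assumptions, not the actual MagicDesk code.

```cpp
// Illustrative sketch (not the MagicDesk source): turning a tracked prop pose
// into a virtual clipping plane. We assume the tracker delivers a column-major
// 4x4 rigid transform from prop space to world space.
#include <array>

struct Vec3 { double x, y, z; };

// Plane in Hessian normal form: dot(n, p) = d for points p on the plane.
struct Plane { Vec3 normal; double d; };

using Mat4 = std::array<double, 16>;  // column-major, as in OpenGL/Open Inventor

static Vec3 transformPoint(const Mat4& m, const Vec3& p) {
    return { m[0]*p.x + m[4]*p.y + m[8]*p.z  + m[12],
             m[1]*p.x + m[5]*p.y + m[9]*p.z  + m[13],
             m[2]*p.x + m[6]*p.y + m[10]*p.z + m[14] };
}

static Vec3 transformDirection(const Mat4& m, const Vec3& v) {
    // Rotation only (valid because the pose is a rigid transform).
    return { m[0]*v.x + m[4]*v.y + m[8]*v.z,
             m[1]*v.x + m[5]*v.y + m[9]*v.z,
             m[2]*v.x + m[6]*v.y + m[10]*v.z };
}

// The transparent plate defines a plane through its local origin with its
// local +z axis as normal. Every tracking update simply re-derives the
// world-space clipping plane from the current prop pose.
Plane clippingPlaneFromPropPose(const Mat4& propToWorld) {
    const Vec3 origin = transformPoint(propToWorld, {0.0, 0.0, 0.0});
    const Vec3 normal = transformDirection(propToWorld, {0.0, 0.0, 1.0});
    const double d = normal.x*origin.x + normal.y*origin.y + normal.z*origin.z;
    return { normal, d };
}
```

The same pose-to-parameter mapping drives the lamp prop (pose to light position and direction) and the clipboard (pose to window plane transform).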
4. Implementation

4.1. General setup

We have implemented the MagicDesk concept on PC platforms running Microsoft Windows for an easy
integration into the working processes of today’s CAD engineers. Two different setups of HMD/camera combinations are used, depending on the budget of the targeted customer: a more expensive solution with a Sony Glasstron LDI-100BE HMD (comparable to the PLM-S700) and a Toshiba IK-CU50 mini camera, or a less expensive version with an Olympus EyeTrek FMD-700 HMD and a Visual Pacific PC-605 mini camera. The quality of the configurations differs
Fig. 4. Real clipping plane.
significantly: the Olympus version does not really allow reading of text, but it is only a third as expensive as the Sony version. The camera signal is captured by a bt878 frame grabber card (WinTV Go!) in full PAL/NTSC resolution (interlaced). This card is built into the MagicDesk computer, which is currently a Dual-Pentium 933 MHz PC running Windows 2000. The graphics card (Elsa Synergy III) supports two distinct render areas, each with its own VGA output, which permits us to run the system on a single PC. The augmented reality view (live camera signal overlaid with virtual 2D and 3D content) is drawn into the first render area, with the associated VGA output connected to the HMD. The second render area serves as the off-screen 2D area (Fig. 7). Its VGA output can be connected to a monitor for test purposes, but is usually left disconnected during normal operation. In addition to the single-PC solution, we have run the system in a distributed mode on up to four PCs. The software is based on DBView, our in-house virtual reality software, which has been developed over the past five years. In particular, its network distribution and rendering facilities (based on OpenInventor) are used extensively. Although other techniques have been used in the past, 6DOF tracking in the current version of the MagicDesk is implemented on top of the ARToolKit library [12,13], with some major extensions regarding tracking fidelity and multi-marker tracking. All objects in real space are tracked by colored markers printed out on an ordinary printer. Each object in the real world is tracked by at least two markers for reliable accuracy. At this stage of the setup, a total of 24 markers is tracked in real time.
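As an illustration of the marker-based tracking layer, the following sketch shows how a per-object 6DOF pose can be obtained from several markers using the publicly documented ARToolKit 2.x C API (arDetectMarker, arMultiGetTransMat). It is a minimal example of the general approach, not the extended tracking code used in the MagicDesk; the file names and the capture call are placeholders.

```cpp
// Minimal ARToolKit 2.x multi-marker tracking sketch (assumptions: standard
// ARToolKit C API; the camera parameter and marker configuration files are
// placeholders, and grabFrame() stands in for the bt878 capture code).
#include <AR/ar.h>
#include <AR/param.h>
#include <AR/arMulti.h>

extern ARUint8* grabFrame();  // placeholder: returns the current camera image

ARMultiMarkerInfoT* g_config = nullptr;

void initTracking(int xsize, int ysize) {
    ARParam wparam, cparam;
    arParamLoad("camera_para.dat", 1, &wparam);       // intrinsic camera parameters
    arParamChangeSize(&wparam, xsize, ysize, &cparam);
    arInitCparam(&cparam);
    // One configuration file lists all markers attached to one physical object
    // (e.g. a clipboard), each with its known offset on the object.
    g_config = arMultiReadConfigFile("clipboard_markers.dat");
}

// Returns true and fills objectToCamera (3x4 transform in ARToolKit
// convention) if at least one of the object's markers is visible.
bool trackObject(double objectToCamera[3][4], int threshold = 100) {
    ARUint8* image = grabFrame();
    ARMarkerInfo* markers;
    int markerNum;
    if (arDetectMarker(image, threshold, &markers, &markerNum) < 0)
        return false;
    // Fuses all detected markers of the configured set into a single pose;
    // using several markers per object improves robustness against occlusion.
    if (arMultiGetTransMat(markers, markerNum, g_config) < 0.0)
        return false;
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 4; ++c)
            objectToCamera[r][c] = g_config->trans[r][c];
    return true;
}
```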
Fig. 5. Real and virtual lamp.
4.2. Software architecture

The MagicDesk relies on a system of distributed software components communicating through event channels (publish/subscribe) as well as by peer-to-peer mechanisms. We distinguish two domains in which an application can run: a 2D domain and a 3D domain (Fig. 8). 2D
Fig. 6. Real and virtual MagicBook for model selection.
Fig. 7. Render areas for clipboard technique.
Fig. 8. Schematic overview of the software architecture.
applications can run in either domain: as standard applications on the Windows 2000 desktop, or in 2D "areas" (see below) on planes in the 3D domain.

4.2.1. 2D domain

The 2D domain consists of the typical MS Windows desktop with normal 2D applications running, plus an additional MagicDesk application daemon. The daemon monitors desktop activities such as the launching and closing of applications or window output (redraw, resize, etc.), and posts these activities to an event channel. The window output is captured within the 2D domain and is transmitted in the form of compressed bitmaps over the bitmap channel. If an application creates new windows, the 2D application daemon arranges them side by side using a simple layout
algorithm to prevent overlapping regions. This ensures that window bitmaps can simply be copied from the screen buffer, without having to switch Windows desktop contexts or reconstruct them from cached information. Since 2D areas in the 3D domain can publish 2D device events, the daemon subscribes to these events on the device channel and forwards them to the appropriate applications. We are currently using a modified VNC (Virtual Network Computing) server with hextile encoding [14] for the application daemon.
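To make the update path concrete, the following self-contained sketch shows the daemon-side flow in the spirit described above: capture a window's dirty region, compress it, and publish it on the bitmap channel. The event struct, the zlib compression, and the capture/publish helpers are our own illustrative stand-ins; the actual daemon is a modified VNC server using hextile encoding.

```cpp
// Illustrative daemon-side sketch (not the MagicDesk code): publish a window's
// changed pixels as a compressed bitmap update. zlib stands in for the hextile
// encoding of the real VNC-based daemon; captureRegion() and publishBitmap()
// are placeholders for screen capture and the bitmap event channel.
#include <cstdint>
#include <vector>
#include <zlib.h>

struct BitmapUpdateEvent {
    uint32_t windowId;              // which 2D application window changed
    int x, y, width, height;        // dirty rectangle in window coordinates
    std::vector<uint8_t> zPixels;   // compressed RGB pixel data
};

extern std::vector<uint8_t> captureRegion(uint32_t windowId,
                                          int x, int y, int w, int h); // placeholder
extern void publishBitmap(const BitmapUpdateEvent& ev);                // placeholder

// Called by the daemon whenever it observes a redraw of a window region.
void onWindowRedraw(uint32_t windowId, int x, int y, int w, int h) {
    const std::vector<uint8_t> raw = captureRegion(windowId, x, y, w, h);

    // Compress the raw pixels before putting them on the bitmap channel.
    uLongf zSize = compressBound(static_cast<uLong>(raw.size()));
    std::vector<uint8_t> z(zSize);
    if (compress(z.data(), &zSize, raw.data(),
                 static_cast<uLong>(raw.size())) != Z_OK)
        return;                     // skip this update on compression failure
    z.resize(zSize);

    publishBitmap({windowId, x, y, w, h, std::move(z)});
}
```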
4.2.2. 3D domain

The 3D domain has several components which are based on the DBView virtual reality platform, the main components being the area manager and the device manager. Other components are custom application components of the VR platform. The 3D domain follows a so-called area concept, which includes two- and three-dimensional areas handled by an area manager. "Areas" are functional and visual entities of the virtual environment which can be seen as three-dimensional counterparts of the window metaphor of desktop applications. Areas can be visualized and they can receive events from the device manager. We distinguish between 2D areas and 3D areas. 3D areas contain three-dimensional content such as CAD models or volumetric data sets. 2D areas are a special case of 3D areas since they have no depth. The class responsible for rendering 2D applications in 3D is the 2D application area. It derives from the 2D area and listens on the bitmap channel for updates. When an update arrives, it renders the 2D application window by decompressing the received bitmap and applying it as a texture to a plane in 3D. All areas are registered with an area manager, which controls the position and orientation of the areas in its world space. Since the world space of an area manager is itself a 3D area, we can build up recursive structures with multiple area managers, although in our current system there is just one top-level manager. By default, the area manager subscribes itself for pointer events on the device channel and delegates them to the associated areas. When delegating a pointer event to an area, the area manager converts it to a 2D mouse motion/button event and publishes it on the device channel. At the other end, the 2D application daemon retrieves the event from this channel and propagates it to the 2D desktop application, which updates its window. The 2D application daemon monitors the update and generates a new bitmap update event on the bitmap channel. This in turn causes the textured plane to be redrawn in 3D, as described above. This illustrates the interaction between a 2D area and a 2D application. All input devices of the virtual environment are registered with a device manager, which is the interface to the device event channel. For example, many 3D input devices produce pointer events consisting of a 3D point, a direction (pointer-motion), and optionally one or more button states (pointer-button). If a component of the virtual environment is interested in device events, it registers a device event handler with the device manager. In our case the area manager registers itself for pointer-motion, button, keyboard, and speech events. Based on the current pointer event, the area manager determines the active area, to which it subsequently routes all keyboard and speech events.
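A minimal sketch of how a 2D application area might apply an incoming bitmap update as a texture follows. It assumes the standard Open Inventor SoTexture2 node (the paper states that rendering is based on OpenInventor) and reuses the illustrative BitmapUpdateEvent and zlib compression from the previous sketch; the class and member names are hypothetical.

```cpp
// Illustrative 2D application area sketch: decompress a bitmap update and
// apply it as a texture to the window plane. Assumes standard Open Inventor
// (SoTexture2/SoSFImage); the event struct mirrors the previous sketch.
#include <cstdint>
#include <vector>
#include <zlib.h>
#include <Inventor/SbLinear.h>
#include <Inventor/nodes/SoTexture2.h>

struct BitmapUpdateEvent {
    uint32_t windowId;
    int x, y, width, height;
    std::vector<uint8_t> zPixels;   // zlib-compressed RGB data
};

class TwoDApplicationArea {
public:
    explicit TwoDApplicationArea(SoTexture2* windowTexture)
        : texture_(windowTexture) {}

    // Called by the bitmap channel subscription whenever a window changed.
    // For simplicity this sketch replaces the whole texture; the real system
    // would only update the dirty rectangle.
    void onBitmapUpdate(const BitmapUpdateEvent& ev) {
        uLongf rawSize = static_cast<uLongf>(ev.width) * ev.height * 3;
        std::vector<uint8_t> rgb(rawSize);
        if (uncompress(rgb.data(), &rawSize,
                       ev.zPixels.data(),
                       static_cast<uLong>(ev.zPixels.size())) != Z_OK)
            return;

        // Uploading the pixels into the SoTexture2 node retextures the plane
        // on the next render traversal.
        texture_->image.setValue(SbVec2s(ev.width, ev.height), 3, rgb.data());
    }

private:
    SoTexture2* texture_;   // texture node attached to the window plane
};
```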
4.2.3. Inter-process communication

Applications in the 2D and in the 3D domain can interact with each other by communicating via the application channel. For example, one could write a 2D control application to interact with the VR environment, or one could implement a 3D application that controls a 2D application through a 3D user interface. We have used the Adaptive Communication Environment (ACE) [15] to implement a simple publish-subscribe mechanism similar to the CORBA event channel architecture. In order to base our system on a standardized middleware layer, we plan to use a CORBA Notification Service for the event channels and the TAO [16] real-time object request broker.
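To show the pattern behind the event channels, here is a minimal in-process publish/subscribe sketch. It is a generic stand-in for illustration only, not the ACE-based implementation or a CORBA event channel; the type and method names are our own.

```cpp
// Minimal in-process publish/subscribe channel (illustrative stand-in for the
// ACE-based event channels; not thread-safe and without network transport).
#include <cstdint>
#include <functional>
#include <vector>

template <typename Event>
class EventChannel {
public:
    using Handler = std::function<void(const Event&)>;

    // Components such as the area manager or the application daemon register
    // handlers; the channel does not need to know its subscribers in advance.
    void subscribe(Handler handler) { handlers_.push_back(std::move(handler)); }

    // Publishing delivers the event to every registered subscriber.
    void publish(const Event& ev) const {
        for (const auto& h : handlers_) h(ev);
    }

private:
    std::vector<Handler> handlers_;
};

// Example usage: a device channel carrying converted 2D mouse events, as
// exchanged between the area manager and the 2D application daemon.
struct MouseEvent { uint32_t windowId; int x, y, button; bool pressed; };

int main() {
    EventChannel<MouseEvent> deviceChannel;

    // The daemon side subscribes and would forward the event to the window.
    deviceChannel.subscribe([](const MouseEvent& ev) {
        // forward ev to the Windows application owning ev.windowId ...
        (void)ev;
    });

    // The area manager side publishes an event converted from a 3D pointer ray.
    deviceChannel.publish({42u, 120, 64, 1, true});
    return 0;
}
```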
5. Conclusion and outlook

In this paper we have presented MagicDesk, an augmented reality system which combines the traditional workplace of a modern office worker or CAD engineer with new tangible interfaces for 2D and 3D information. One main characteristic of the system is the integration of 2D computer desktop applications, 3D CAD programs, and VR technology within a single augmented 3D environment embedded in the real three-dimensional office desk workspace. The proposed tangible user interface, based on various props (clipboard, cake platter, 2D/3D mouse, clipping plane, lamp, MagicBook), leads to natural and consistent interaction within real and virtual 2D and 3D domains. This consistency of interaction improves the interoperability and data exchange between the 2D and 3D domains and should markedly improve the user's productivity. A further advantage of the integrated environment is that it brings VR technology, i.e., 3D content and interaction, to the desktop, reducing the need to move to large-scale display environments such as a CAVE. Finally, it extends the 2D application space to the whole desk space, freeing it from the confines of the computer monitor, while still retaining the normal workflow of a 2D desktop user interface. At the time of writing, no formal user studies have been undertaken. First tests within our laboratories show that the MagicDesk is a very promising concept for bringing AR technology to the desk. The current implementation is fast and robust enough to evaluate the concept. For more comprehensive user studies, the application setup has to be improved to get closer to the needs of a CAD engineer in automotive styling and design, which is our main application scenario.

5.1. Outlook

After a positive evaluation of the current implementation we intend to (a) fully integrate the MagicDesk into the Virtual and Augmented Reality System DBView/VPE, (b) implement a standardized communication
infrastructure (CORBA, TAO), (c) add new interaction devices like Anoto [17], Wacom [18], or the MagicPen [19], and (d) specialize the system to particular customer needs, perform usability tests with it, and transfer the whole system to the customer for everyday use. We are also considering extending the system to a multi-user setting. Since the MagicDesk is embedded in the physical workplace, other users besides the primary one can already interact with the system by making use of the tangible interface: they can hold and move clipboards with 2D applications, they can turn the 3D model on the cake platter, and they can move the virtual lamp. This represents a first form of collaborative AR work. Of course, only the primary user can see the world from his own perspective. For the other users, we currently set up an additional monitor to which we send the same VGA signal as to the primary user's HMD. As a next step, we plan to extend the system to cover an office with multiple users, each at his own MagicDesk. In this scenario, users would be able to exchange 2D application documents and 3D models by simply exchanging clipboards. The MagicDesk application daemons would then be responsible for synchronizing the data over the local network connecting the individual systems with each other. A further interesting variation of the multi-user scenario would be a remote collaboration setting. Of course, a physical exchange of clipboards is not possible in such a scenario. However, the feeling of a shared common workspace could be created by installing an additional camera overlooking the desk of each user at each site, with the camera images being sent to the remote site via a video-conferencing link. This would provide the remote collaborator with an augmentable view of the partner's desk.
Acknowledgements

We would especially like to thank Steffen Cersowsky and Muriel David for their contributions to the implementation and calibration of the project.

References

[1] Milgram P, Takemura H, Utsumi A, Kishino F. Augmented reality: a class of displays on the reality-virtuality continuum. Proceedings of Telemanipulator and Telepresence Technologies, SPIE, 1994;2351:282–92.
[2] Azuma R. A survey of augmented reality. Presence: Teleoperators and Virtual Environments 1997;6(4):355–85.
[3] Feiner S, MacIntyre B, Haupt M, Solomon E. Windows on the world: 2D windows for 3D augmented reality. Proceedings of UIST '93 (ACM Symposium on User Interface Software and Technology), Atlanta, GA, November 3–5, 1993. p. 145–55.
[4] Dykstra P. X11 in virtual environments. Proceedings of the IEEE 1993 Symposium on Research Frontiers in Virtual Reality, San Jose, CA, October 25–26, 1993.
[5] Schmalstieg D, Fuhrmann A, Hesina G. Bridging multiple user interface dimensions with augmented reality systems. Proceedings of ISAR 2000. New York: IEEE, 2000. p. 20–9.
[6] Raskar R, Welch G, Cutts M, Lake A, Stesin L, Fuchs H. The office of the future: a unified approach to image-based modeling and spatially immersive displays. Proceedings of SIGGRAPH 98, Orlando, FL, July 19–24, 1998.
[7] Ullmer B, Ishii H. The metaDesk: models and prototypes for tangible user interfaces. Proceedings of UIST '97. New York: ACM, 1997. p. 223–32.
[8] Ishii H. Tangible bits: coupling physicality and virtuality through tangible user interfaces. In: Ohta Y, Tamura H, editors. Mixed reality: merging real and virtual worlds. Berlin: Ohmsha/Springer, 1999.
[9] Lindeman R, Sibert J, Templeman J. The effect of 3D widget representations and simulated surface constraints on interaction in virtual environments. Proceedings of IEEE Virtual Reality 2001, Yokohama, Japan, March 13–17, 2001.
[10] Billinghurst M, Poupyrev I, Kato H, May R. Mixed realities in shared space: an augmented reality interface for collaborative computing. Proceedings of ICME 2000. New York: IEEE, 2000. p. 1641–4.
[11] Regenbrecht H, Wagner M. Prop-based interaction in mixed reality applications. Paper accepted for Human–Computer Interaction International 2001, New Orleans, LA, August 2001.
[12] Kato H, Billinghurst M. Marker tracking and HMD calibration for a video-based augmented reality conferencing system. Proceedings of the Second International Workshop on Augmented Reality, 1999. p. 85–94.
[13] Billinghurst M, Kato H. Real world teleconferencing. Proceedings of CHI '99 Abstracts and Applications, 1999.
[14] VNC. http://www.uk.research.att.com/vnc/index.html.
[15] ACE. http://www.cs.wustl.edu/~schmidt/ACE.html.
[16] TAO. http://www.cs.wustl.edu/~schmidt/TAO.html.
[17] Anoto. http://www.anoto.com.
[18] Wacom. http://www.wacom.com.
[19] Regenbrecht H, Baratoff G, Poupyrev I, Billinghurst M. A cable-less interaction device for AR and VR environments. Proceedings of ISMR 2001, Yokohama, Japan, March 14–15, 2001.
[20] Billinghurst M, Kato H, Poupyrev I. The Magic Book: a transitional AR interface. Computers & Graphics (Special Issue on "Mixed Realities: Beyond Conventions").