Chapter 8
Active Tangible Interactions

Masahiko Inami, Maki Sugimoto, Bruce H. Thomas, and Jan Richter
Abstract

This chapter explores active tangible interactions, an extension of tangible user interactions. Active tangible interactions employ tangible objects that have some form of self-automation, such as robotics or locomotion. Tangible user interfaces employ physical objects to form graspable physical interfaces through which a user controls a computer application. Two example forms of active tangible interactions are presented: Local Active Tangible Interactions and Remote Active Tangible Interactions. Local Active Tangible Interactions (LATI) is a concept that allows users to interact locally with actuated physical interfaces such as small robots. The Remote Active Tangible Interactions (RATI) system is a fully featured distributed version of multiple LATIs. The underlying technology, the Display-Based Measurement and Control System, is employed to support our instantiations of Local Active Tangible Interactions and Remote Active Tangible Interactions.
Introduction

Active tangible interactions [1, 2] are an extension of tangible user interactions [3] that employ tangible objects with some form of self-automation, such as robotics or locomotion. Tangible user interfaces (TUIs) are graspable physical interfaces that employ physical objects such as blocks, miniature models, and cardboard cut-outs for controlling a computer system. A TUI does not have the user manipulate GUI control elements on a display, such as buttons or sliders, through a traditional mouse and keyboard combination. Instead, a TUI encourages users to manipulate physical objects that either embody virtual data or act as handles for virtual data. Such physical interactions are very natural and intuitive for users for the following reasons:
1. they enable two-handed input, and
2. they provide us with spatial and haptic feedback that aids our understanding and thinking [4, 5].

The physical objects that constitute a TUI are referred to as tangibles, and can be categorized as passive or active. TUIs have a natural relationship with tabletop systems [6–11], if for no other reason than that the objects require a surface to set them upon. The interaction with the surface and with other objects on the table is an active area of current investigation [12–16]. The ability to track the physical objects and to provide these objects with computer-controlled self-locomotion is the main research goal presented in this chapter.

Tangible user interfaces feature many benefits over traditional GUIs. Fitzmaurice, Ishii and Buxton [4] identified the following TUI advantages:

1. they allow for more parallel user input, thereby improving the expressiveness or the communication capacity with the computer;
2. they leverage our well-developed, everyday skills of prehensile behaviours for physical object manipulations;
3. they externalize traditionally internal computer representations;
4. they facilitate interactions by making interface elements more “direct” and more “manipulable” through physical artefacts;
5. they take advantage of our keen spatial reasoning skills;
6. they offer a space-multiplexed design with a one-to-one mapping between control and controller; and
7. they afford multi-person collaborative use.

As previously mentioned, active tangible interactions are concerned with tangible objects that have some form of self-propulsion. A good example of support for active tangible interactions is the Display-Based Measurement and Control System (DMCS) [2], a tabletop robot tracking and control technology. This system provides a suitable research platform for an active tangible user interface. DMCS has the following advantages: it is easily scalable, controls orientation, requires minimal calibration, tracks robustly, allows tangibles to be lifted up to 20 cm off the table while still being tracked reliably, and supports both top- and rear-projected environments (so occlusion by top-projection can be avoided if need be).

This chapter will provide an overview of the existing active tangible interactions research concerned with tabletop surfaces. In particular, the DMCS technology will be explored. Two main forms of active tangible interactions are presented: Local Active Tangible Interactions and Remote Active Tangible Interactions. We have built a number of systems based on these technologies, and this chapter will explore these systems in detail. Finally, the chapter will conclude with a discussion of this user interaction paradigm and future directions.
Background

Two main concepts underpin our investigations. The first, as previously mentioned, is tangible user interfaces. The second is TUIs employed for distributed collaboration. This section provides an overview of these concepts.
Tangible User Interfaces

An early example of a TUI is the Active Desk [4], an application supporting interaction through simple LegoTM-like bricks on a table surface that act as grips for manipulating graphical objects. The application merges space-multiplexed I/O (every input device has just one function) and time-multiplexed I/O (a single input device has multiple functions at different points in time). Other early work on TUIs includes Ullmer and Ishii’s metaDESK [17] and Rauterberg et al.’s BUILD-IT system [18]. The metaDESK goal is to complement (not substitute) conventional GUIs with physical interactions. These interactions enable users to shift two building models to navigate, zoom and warp a map. The map orients and positions itself so that the two building models always correspond with their locations on the map. BUILD-IT is a comparable TUI, in which users manipulate bricks as handles to lay out a factory plant. Neither system is appropriate for active tangible interaction, as the bricks are passive (they have no means of self-propulsion). Furthermore, the bricks cannot be moved automatically by a controlling system.

A useful extension to tangible user interfaces is to couple them with augmented reality (AR) [19] or mixed reality (MR) [20]. Kato et al. [21] combine TUIs with AR (called tangible augmented reality (TAR)) to overlay tangibles with virtual images. A TAR removes the need for users to remember which tangibles represent which set of digital data.
TUIs in Distributed Collaboration

The connected rocking chairs developed by Fujimura, Nishimura and Nakamura [22] share rocking motions with each other, allowing the users rocking the chairs to feel connected. The two networked rocking chairs are controllable via a linear motor attached to the base of each chair. Both the connected rocking chairs and the inTouch system of Brave, Ishii and Dahley [23] concern themselves with the concept of distributed haptics. The synchronization of physical interfaces has a similar motivation to active tangible interactions; however, the above-mentioned systems differ in that they do not concentrate on keeping remote TUIs synchronized. Their main purpose is to provide haptic feedback over remote physical interfaces.
Brave, Ishii and Dahley [23] first proposed the concept of distributing a TUI. An early exploration is their PSyBench, developed to substitute conventional video/audio conferencing with tangible interfaces. With the use of Synchronized Distributed Physical Objects (SDPO), PSyBench gives users the impression that they are sharing matching objects even though they are remote. PSyBench is based on two networked motorized chessboards. A major limitation of the hardware is its inability to control the orientation of the tangibles.

A comparable concept is Rosenfeld et al.’s [5] bidirectional UI, where tangibles can be employed for both input and output, and can be controlled equally by the computer and the users. Rosenfeld et al.’s Planar Manipulator Display (PMD) implements “dumb robots” to produce a bidirectional user interface. The two-wheel motorized robots can move freely around a table surface. Two pulsing LEDs (for position and orientation) underneath each robot enable tracking. To evaluate their investigations, the authors developed a furniture layout application, in which seven robots depicting furniture automatically and simultaneously move to form one of a number of pre-set room layouts. The PMD has not been deployed as a distributed TUI, but it is a suitable technology for one.

One of a small number of tangible interfaces that support duplex input and output (IO) is the Actuated Workbench project [24]. Magnetic pucks (or non-magnetic objects outfitted with magnets) operate on a table surface with a built-in array of electromagnets that are energized to shift the objects. Users can manually slide the magnetic pucks, and these pucks are fitted with infrared (IR) LEDs that are sensed by an IR vision system. The number of pucks concurrently movable is restricted by the intricate magnetic array required to move them, and puck orientation is not presently controllable. The objective of the Actuated Workbench is to allow computer control of the tangibles.

Everitt et al. [25] explicitly investigated the practicability of tangible user interfaces for distributed collaboration activities. They employ networked smart boards onto which Post-itTM notes can be fixed for design purposes. A high-resolution camera captures the information on any newly added note and stores it as an image on a central server. The other client then displays the image with a projector on its smart board. Users can rearrange both tangible and digital notes, by physically moving them or through computer interaction via a pen or mouse. As the physical notes are not electronic (a limitation), faint grey shadows are projected under each physical note. These annotations turn red when a physical note needs to be shifted to a new position. New notes can be added in parallel, but existing notes can only be moved one at a time. Simple but elegant shadow silhouettes supply a sense of distributed presence while requiring far less bandwidth than full video-conferencing.
Display-Based Measurement and Control System

The Display-Based Measurement and Control System (DMCS) [2] is a well-suited enabling technology for implementing active tangible interactions.
Fig. 8.1 Example robot
The DMCS removes the need for an external system to track the robots in a projection space. The system uses a display device as a dynamic ruler to track each robot. Rather than implementing the tracking and controlling of the robots as disconnected systems (traditionally via a separate camera and data communications to the robot, respectively), the DMCS combines both concepts and encapsulates them inside the robot. Communication with the robot is two-way. DMCS is a closed-loop solution, ensuring the robots are in the correct position on the table.

Each robot is fitted with five phototransistors to sense light intensity, see Fig. 8.1. A unique fiducial marker featuring a gradient from black to white is centred over the set of phototransistors, see Fig. 8.2. The fiducial marker is designed so that relative positions and directions between the marker and the robot can be measured independently. Each robot transmits the brightness values of its phototransistors back to a central computer, which employs this data to keep the fiducial marker centred over the robot’s phototransistors, see Fig. 8.3. Controlling the robot is performed by sending signals to it via a cable or radio. The DMCS tracking system is closed-loop: the phototransistors determine the robot’s position relative to the marker, and the system self-adjusts to keep them overlapping dynamically [26]. The tracking algorithm continuously employs feedback from each robot to adjust the coordinates of that robot’s fiducial tracking marker.

The attribute of the DMCS that is critical for active tangible interactions is that it enables input and output via the robot. The robot can be manipulated by a human to adjust the system’s internal state, while at the same time changes to information inside the system can be visualized in the physical world by automatically moving the robots. Each robot is fitted with two separately controlled wheels for this function. Figure 8.4 depicts our new robot with retractable wheels for easier manipulation and an integrated sound system. The range of robot sizes is shown in Fig. 8.5. The robots can be augmented with additional features, such as grasping pincers as shown in Fig. 8.6.
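To make this feedback loop concrete, the following minimal Python sketch shows one iteration of such a closed-loop tracker. It is illustrative only: the robot and marker objects, the brightness() method, the sensor layout, and the gain value are assumptions, as the chapter does not give implementation details.

    def track_marker(robot, marker, gain=0.5):
        # Read the five phototransistor brightness values reported by the
        # robot (hypothetical API: centre, left, right, front, back).
        centre, left, right, front, back = robot.brightness()
        # Because the fiducial marker carries a black-to-white gradient, a
        # brightness imbalance between opposite sensors indicates how far
        # the marker has drifted off-centre along each axis.
        dx = gain * (right - left)
        dy = gain * (front - back)
        # Redraw the marker at the corrected position; running this every
        # frame keeps the marker locked over the robot, so the marker's
        # screen coordinates double as the robot's tracked pose.
        marker.move_by(dx, dy)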
Fig. 8.2 DMCS marker system
Fig. 8.3 Overall layout
Fig. 8.4 New robot with sound and retractable wheels
Fig. 8.5 Example of two different sizes of tabletop robots
Local Active Tangible Interactions

Local Active Tangible Interactions (LATI) is a concept that allows users to interact locally with actuated physical interfaces such as small robots [2]. This section describes the concept of LATI and provides an example application, Augmented Coliseum.
LATI

LATI brings dynamic physical expressions, haptic feedback and intuitive manipulation to the interactions between users and interfaces.
Fig. 8.6 A robot with a grasping device on the front to move physical objects
We employ DMCS as a useful technology to develop these kinds of interactions. This technology can control the actuated interfaces simply by drawing fiducial markers on a computer display. Examples of using Adobe Flash (http://www.adobe.com/products/flashplayer/) to control robots have been developed, such as a set of robots dancing to music in synchronised motion. These Flash-controlled robot applications do not require any knowledge of the underlying technology: just move and rotate the markers on the display, and the robots follow.

This concept is extended to combine TUIs and dynamically changing virtual environments. The input/output nature of DMCS makes it a natural technology for LATI. Furthermore, graphics can contribute to dynamic expressions with the actuated physical interfaces. Movements within the interfaces can be enhanced by animated background graphics. When the background graphics move with the TUIs, the users perceive movements in both the real and virtual environments. Relative Motion Racing [27] is one example of this effect, see Fig. 8.7. The system controls small robots and background graphics simultaneously. It can express high-velocity motions of the robots by using a combination of slow robot motions and large-magnitude graphics motions. This method can expand the virtual size of workspaces in active tangible systems.
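The following minimal sketch illustrates the relative-motion principle under stated assumptions: the robot and background objects, their drive() and scroll() methods, and the speed limit are hypothetical.

    def apply_relative_motion(robot, background, virtual_velocity,
                              robot_max=0.05):
        # Clamp the physical component to the robot's safe top speed (m/s).
        physical = max(-robot_max, min(robot_max, virtual_velocity))
        # Render the remainder as background motion in the opposite
        # direction, so the perceived velocity is the sum of the two.
        background.scroll(-(virtual_velocity - physical))
        robot.drive(physical)

Because the workspace scrolls underneath the robot, the tangible appears to travel much faster and farther than the physical table allows.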
Augmented Coliseum

The LATI property of input and output via the same physical object is demonstrated in an AR game, Augmented Coliseum [28], a game based on the Display-Based Measurement and Control System. The game environment employs AR technology to enable virtual functions for playing a game with small robots, similar to how children can use their imaginations to transform a normal room into the sky for playing games with toy airplanes.
Fig. 8.7 Relative motion racing
Augmented Coliseum embodies this concept of imagination virtually by superimposing computer graphics onto toys (in the form of robots) in the physical world. Playing with toys such as small robots and vehicles is improved and enhanced by projecting images onto the areas corresponding to the actual positions and directions of the toys. In addition, the objects can express the phenomena of this augmented reality world by controlling actuators according to the results of simulations. In the environment that we propose, the games are played with small robots and vehicles that can be treated as actual toys in the real world.

Augmented Coliseum is played by two users interacting with standard game controllers to move (see Fig. 8.8) and fire on opponents (see Fig. 8.9). Two robots are connected to the same projector/computer [29] system and each is operated by a human player. Each user attempts to “blow up” their opponent’s robot, see Fig. 8.10. The robots represent physical battle tanks, which can shoot virtual missiles and lasers at each other. The projectiles are represented visually in the virtual world. Each robot is driven by a human player, which embodies the input in the form of the battle tank’s placement in the game space. The playing area (see Fig. 8.11) can be tabletop size or quite large, approximately 3 by 4 m.

A novel feature of the combined virtual and physical representations is that the robots react to both physical and virtual interactions on the game board. The black circles are regions the robots cannot enter. A simple physics model detects collisions between the robots and other virtual objects inside the game, and these collisions are physically represented by making the robots rock as they collide. This constitutes output in the form of physical feedback. The concept of input and output via the same physical object is missing from a TUI, but a LATI adds this extra sensation of interaction to the system.
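A minimal sketch of this collision handling follows; the circular obstacle model, the robot's position attribute and its rock() method are assumptions made for illustration.

    import math

    def check_collisions(robot, obstacles, robot_radius=0.04):
        # 'obstacles' holds the virtual no-entry regions as (x, y, radius).
        rx, ry = robot.position
        for ox, oy, orad in obstacles:
            if math.hypot(rx - ox, ry - oy) < robot_radius + orad:
                # Briefly drive the two wheels in opposite directions so
                # the robot rocks, physically expressing the virtual hit.
                robot.rock()
                return True
        return False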
Fig. 8.8 Example robot moving in the game
Fig. 8.9 Example of one robot firing on another
Remote Active Tangible Interactions

The Remote Active Tangible Interactions (RATI) [1] system extends the concept of LATI to a fully remote distributed system. This section first highlights the major components of RATI, and finishes with an explanation of an interior design application developed with RATI.
RATI

The RATI system is an extension of the original DMCS system. The DMCS system was extended to allow multiple displays, each with its own application, to communicate and synchronise with each other over a network using a custom XML messaging protocol.
Fig. 8.10 Example of an explosion on a robot
Fig. 8.11 Overview of the game space
The original RATI consisted of two connected DMCS clients with two robots paired together (one on each table). Our latest version of RATI has been tested on three tables with six robots on each table, see Fig. 8.4. The robots on any table can be translated and/or rotated while a common state is maintained across all clients. A set of virtual obstacles and collision-avoidance techniques was developed to prevent robots from colliding with other robots or with the obstacles.

For reasons of scalability, a client/server architecture was chosen over peer-to-peer as the method of uniting the DMCS systems. The server permits essential data to be stored, such as the coordinates of virtual obstacles shared by all clients. In addition, data on all connected clients (number of robots connected, physical screen size, resolution, etc.) is also stored.
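The protocol itself is described next; as a preview, the following sketch shows how a client might publish a move to the server. The XML element and attribute names are invented for illustration, since the chapter does not specify the actual schema.

    def send_move(sock, robot_id, x, y, theta):
        # Hypothetical movement message; the real RATI schema may differ.
        msg = '<move robot="{}" x="{}" y="{}" theta="{}"/>'.format(
            robot_id, x, y, theta)
        # Each client holds a TCP connection to the server, which then
        # rebroadcasts translate/rotate commands to all other clients.
        sock.sendall(msg.encode('utf-8'))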
XML was selected to support the protocol because messages are plain text, platform independent, and extendable. The present protocol defines message structures for sending and receiving translate and rotate commands, querying any client’s environment data (e.g. screen resolution, number of robots connected), and querying exact data about any robot (e.g. coordinates, orientation). The server interprets each XML message and forwards it accordingly: queries are delivered to the appropriate client, while translate and rotate commands are broadcast to all other clients. The DMCS clients are synchronized by sending a movement XML message to the server each time a robot/tangible is moved by a user.

Three distinct movement modes are integrated into the RATI system. The selected mode determines what movement data is published. The first mode publishes only the end coordinate of a move. DMCS clients synchronizing the state of their tangibles ignore the path along which the tangible was moved, and simply move to the target coordinate in a straight line. In this mode the client is required to calculate a path that avoids any virtual or physical obstacles. The second mode publishes the exact path of the tangible that was moved. This is realized by publishing the tangible’s coordinates in a real-time succession of data packets. In mode three, a finite number of waypoints from a tangible’s path are supplied. This mode can be used to approximate the path that the robot traversed. The spacing of the waypoints is left to the client, and depends on how often the coordinates are published. The selection of mode is application specific. For example, if the tangibles represent individual sessions in a schedule, then the tangible’s position is important, not how the tangible moved to that position. On the other hand, if the tangibles are deployed in an action game, then the exact path of each tangible is very important.

Physical and virtual obstacle avoidance is included to increase the functionality of the system. The obstacles are assumed to be static (i.e. of fixed location) when the collision avoidance algorithm is executed, so that it only has to be executed once for each robot move. Obstacle avoidance is determined by the breadth-first search (BFS) algorithm, which operates on a graph that represents the robot and all obstacles. The graph is constructed by dividing the screen into a grid. All robots and obstacles are associated with their closest grid nodes, which are then connected together to form an undirected cyclic graph. This graph is given to the BFS algorithm. The resulting BFS path is pruned to remove any redundant nodes, after which the robot can be driven along the obstacle-free path.
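The following Python sketch shows the grid-based BFS described above; the grid construction and the pruning step are simplified, and the four-connected neighbourhood is an assumption.

    from collections import deque

    def bfs_path(start, goal, blocked, cols, rows):
        # Nodes are (column, row) grid cells; 'blocked' is the set of
        # cells occupied by robots or obstacles.
        queue, parent = deque([start]), {start: None}
        while queue:
            node = queue.popleft()
            if node == goal:
                path = []
                while node is not None:      # walk parents back to start
                    path.append(node)
                    node = parent[node]
                return list(reversed(path))  # obstacle-free path to drive
            x, y = node
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= nxt[0] < cols and 0 <= nxt[1] < rows
                        and nxt not in blocked and nxt not in parent):
                    parent[nxt] = node
                    queue.append(nxt)
        return None                          # no path exists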
Furniture Application

An interior design application is described as an example of the suitability of the DMCS technology for remote active tangible interactions. The furniture placement application helps users visualise the layout of a room of furniture, see Fig. 8.12. The application depicts six pieces of furniture on a tabletop display as a bird’s-eye view of a floor plan.
Fig. 8.12 Furniture layout application
The following six furniture items are represented: chair, couch, fish tank, lamp, table, and TV. A furniture item is selected by moving the robot on top of the appropriate image, upon which the image snaps to the robot and the furniture movement is slaved to the robot’s movement. Only one piece of furniture can be attached to a robot at a time. A small threshold is included to avoid any unwanted snapping that could occur while moving the robot around during normal interaction. This threshold only allows a furniture image to snap to a robot if the robot is within 10 pixels of the centre of the representation of the piece of furniture; the threshold is easily adjustable for different applications. A sketch of this snapping logic is given below. Once a piece of furniture is selected, the robot takes on the role of that piece of furniture and can be moved around and oriented as if it were a physical model of that furniture item.

This embodies a costume metaphor, in which a robot can represent different pieces of furniture at different points in time. The costume metaphor was forced upon us by the limited number of robots in our original system, which allowed only one tangible per environment. We have overcome this limitation in our new version of RATI. Ideally one robot would be available for each piece of furniture, permanently bound to that identity for the duration of the application. This is referred to as space-multiplexed input and output (IO), because each device is dedicated to a single task only. The alternative, time-multiplexed IO, “uses one device to control different functions at different points in time” [30]. Another appropriate metaphor for the interface could be a bulldozer metaphor, implemented with robots fitted with forklift arms, see Fig. 8.6. In bulldozer fashion, a single robot could move all the passive furniture models, albeit one at a time.

A real-time 3D view of the furniture arrangement provides a third-person perspective for users, on a vertical screen across the table from the user. The 3D visualization provides the users with a front-on perspective of the room. Figure 8.12 depicts the 3D view of an example layout.
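Below is a minimal sketch of the snapping logic, assuming hypothetical furniture objects with a centre attribute; only the 10-pixel threshold comes from the text.

    import math

    SNAP_THRESHOLD = 10  # pixels

    def try_snap(robot_x, robot_y, furniture_items):
        for item in furniture_items:
            cx, cy = item.centre
            if math.hypot(robot_x - cx, robot_y - cy) <= SNAP_THRESHOLD:
                return item   # this image now follows the robot's movement
        return None           # no snap; the robot moves freely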
Each DMCS client is connected to the server using TCP/IP. The 3D visualization is rendered by a separate computer, which receives a continuous stream of data via UDP.
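A sketch of the pose stream to the visualization computer is shown below; the host, port, and message layout are assumptions, as the chapter only states that UDP is used.

    import json
    import socket

    def stream_poses(robots, host='192.168.0.10', port=5005):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # One datagram per update: UDP suits this continuous, loss-tolerant
        # stream, while TCP is reserved for client/server control traffic.
        payload = json.dumps([
            {'id': r.id, 'x': r.x, 'y': r.y, 'theta': r.theta}
            for r in robots
        ]).encode('utf-8')
        sock.sendto(payload, (host, port))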
Future Trends

An interesting future trend is an asymmetric use of the robots in a RATI-enhanced application. The Hand of God (HOG) is an indoor system that provides an intuitive means for a wider bandwidth of human communication between indoor experts and users in the field [31]. Figure 8.13 depicts an indoor expert employing the HOG by pointing to locations on a map. The indoor and outdoor users have a supplementary voice communication channel. An outdoor field worker employing an outdoor AR wearable computer can, for example, visualize a 3D model of the indoor expert’s hand geo-referenced at a point on the map, as depicted in Fig. 8.13. The indoor expert is thus able to communicate promptly and naturally with the outdoor field operative. As depicted in Fig. 8.13, the indoor expert can point to a precise location on a map and give the outdoor user a visual waypoint to navigate to, see Fig. 8.14.

Physical props may be placed onto the HOG table; an example is the placement of a signpost onto a geo-referenced point, as shown in Fig. 8.15. These props currently operate as a TUI, but the outdoor user is able to virtually move the signpost. The use of active tangible interactions would enable the physical signpost on the HOG table to move and reflect the new location at which the outdoor user has placed the virtual signpost. This would keep the physical and virtual signposts synchronized.

Breaking the RATI one-to-one relationship will open a number of interesting possibilities. Physical representations of virtual worlds such as Second Life (http://secondlife.com/) would become possible.
Fig. 8.13 An indoor expert employing the Hand of God interface
Fig. 8.14 Head mounted display view seen by the outdoor participant
Fig. 8.15 Physical props as signposts for the outdoor user
Conclusion

This chapter explored active tangible interactions, an extension of tangible user interactions. Active tangible interactions employ tangible objects with some form of self-automation, such as robotics or locomotion. Tangible user interfaces (TUIs) are graspable physical interfaces that employ physical objects. Two example forms of active tangible interactions were presented: Local Active Tangible Interactions and Remote Active Tangible Interactions. Local Active Tangible Interactions (LATI) is a concept that allows users to interact locally with actuated physical interfaces such as small robots. The Remote Active Tangible Interactions (RATI) system supports fully featured distributed active tangible interactions. The underlying technology, the Display-Based Measurement and Control System, which supports our instantiations of Local Active Tangible Interactions and Remote Active Tangible Interactions, was presented.
Two applications exploring these concepts were given: Augmented Coliseum (a LATI game) and a RATI-enhanced furniture layout application.
References

1. Richter J, Thomas BH, Sugimoto M, Inami M (2007) Remote active tangible interactions. In: Proceedings of the 1st international conference on tangible and embedded interaction (TEI ’07), ACM Press, New York, pp 39–42, doi: 10.1145/1226969.1226977
2. Sugimoto M, Kodama K, Nakamura A, Kojima M, Inami M (2007) A display-based tracking system: Display-based computing for measurement systems. In: Proceedings of the 17th international conference on artificial reality and telexistence (ICAT 2007), IEEE Computer Society, Los Alamitos, CA, pp 31–38, doi: 10.1109/ICAT.2007.50
3. Ishii H (1999) Tangible bits: Coupling physicality and virtuality through tangible user interfaces. In: Ohta Y, Tamura H (eds) Mixed reality: Merging real and virtual worlds, Ohmsha Ltd, Tokyo, pp 229–247
4. Fitzmaurice GW, Ishii H, Buxton WAS (1995) Bricks: Laying the foundations for graspable user interfaces. In: Proceedings of the SIGCHI conference on human factors in computing systems (CHI ’95), ACM Press, New York, pp 442–449, doi: 10.1145/223904.223964
5. Rosenfeld D, Zawadzki M, Sudol J, Perlin K (2004) Physical objects as bidirectional user interface elements. IEEE Computer Graphics and Applications 24(1):44
6. Kato H, Billinghurst M, Poupyrev I, Imamoto K, Tachibana K (2000) Virtual object manipulation on a table-top AR environment. In: Proceedings of the international symposium on augmented reality (ISAR 2000), IEEE Computer Society, Los Alamitos, CA, pp 111–119, doi: 10.1109/ISAR.2000.10013
7. Dietz P, Leigh D (2001) DiamondTouch: A multi-user touch technology. In: Proceedings of UIST ’01, ACM Press, New York, pp 219–226, doi: 10.1145/502348.502389
8. Hachet M, Guitton P (2002) The interaction table: A new input device designed for interaction in immersive large display environments. In: Proceedings of the workshop on virtual environments 2002 (EGVE ’02), Eurographics Association, Aire-la-Ville, pp 189–196
9. Shen C, Vernier FD, Forlines C, Ringel M (2004) DiamondSpin: An extensible toolkit for around-the-table interaction. In: Proceedings of the SIGCHI conference on human factors in computing systems (CHI 2004), ACM Press, Vienna, pp 167–174
10. Chen F, Close B, Eades P, Epps J, Hutterer P, Lichman S, Takatsuka M, Thomas B, Wu M (2006) ViCAT: Visualisation and interaction on a collaborative access table. In: Proceedings of the 1st IEEE international workshop on horizontal interactive human-computer systems (TABLETOP ’06), IEEE Computer Society, Adelaide, pp 59–60, doi: 10.1109/TABLETOP.2006.36
11. Yang F, Baber C (2006) MapTable: A tactical command and control interface. In: Proceedings of the 11th international conference on intelligent user interfaces (IUI ’06), ACM Press, New York, pp 294–296, doi: 10.1145/1111449.1111515
12. Fjeld M, Voorhorst F, Bichsel M, Lauche K, Rauterberg M, Krueger H (1999) Exploring brick-based navigation and composition in an augmented reality. In: Proceedings of the 1st international symposium on handheld and ubiquitous computing (HUC ’99), Springer-Verlag, London, pp 102–116
13. Raskar R, Welch G, Chen WC (1999) Table-top spatially-augmented reality: Bringing physical models to life with projected imagery. In: Proceedings of the 2nd IEEE and ACM international workshop on augmented reality (IWAR ’99), IEEE Computer Society, Washington, DC, p 64
14. Jacob RJK, Ishii H, Pangaro G, Patten J (2002) A tangible interface for organizing information using a grid. In: Proceedings of the SIGCHI conference on human factors in computing systems (CHI ’02), ACM Press, New York, pp 339–346, doi: 10.1145/503376.503437
15. Kurata T, Oyabu T, Sakata N, Kourogi M, Kuzuoka H (2005) Tangible tabletop interface for an expert to collaborate with remote field workers. In: Proceedings of the 1st international conference on collaboration technology (CollabTech 2005), Tokyo, pp 58–63
16. Toney A, Thomas BH (2006) Considering reach in tangible and table top design. In: Proceedings of the 1st IEEE international workshop on horizontal interactive human-computer systems (TABLETOP ’06), IEEE Computer Society, Adelaide, pp 57–58, doi: 10.1109/TABLETOP.2006.9
17. Ullmer B, Ishii H (1997) The metaDESK: Models and prototypes for tangible user interfaces. In: Proceedings of the 10th annual ACM symposium on user interface software and technology (UIST ’97), ACM Press, New York, pp 223–232
18. Rauterberg M, Fjeld M, Krueger H, Bichsel M, Leonhardt U, Meier M (1997) BUILD-IT: A computer vision-based interaction technique for a planning tool. In: HCI 97: Proceedings of HCI on people and computers XII, Springer-Verlag, London, pp 303–314
19. Azuma R, Baillot Y, Behringer R, Feiner S, Julier S, MacIntyre B (2001) Recent advances in augmented reality. IEEE Computer Graphics and Applications 21(6):34–47
20. Milgram P, Colquhoun H (1999) A taxonomy of real and virtual world display integration. In: Mixed reality – merging real and virtual worlds, Springer Verlag, Berlin, pp 1–16
21. Kato H, Billinghurst M (1999) Marker tracking and HMD calibration for a video-based augmented reality conferencing system. In: Proceedings of the 2nd IEEE and ACM international workshop on augmented reality (IWAR ’99), IEEE Computer Society, Washington, DC, p 85
22. Fujimura N (2004) Remote furniture: Interactive art installation for public space. In: ACM SIGGRAPH 2004 emerging technologies (SIGGRAPH ’04), ACM Press, New York, p 23, doi: 10.1145/1186155.1186179
23. Brave S, Ishii H, Dahley A (1998) Tangible interfaces for remote collaboration and communication. In: Proceedings of the ACM conference on computer supported cooperative work (CSCW 1998), ACM Press, New York, pp 169–178
24. Pangaro G, Maynes-Aminzade D, Ishii H (2002) The actuated workbench: Computer-controlled actuation in tabletop tangible interfaces. In: Proceedings of the 14th annual ACM symposium on user interface software and technology (UIST ’02), Paris, pp 181–190
25. Everitt KM, Klemmer SR, Lee R, Landay JA (2003) Two worlds apart: Bridging the gap between physical and virtual media for distributed design collaboration. In: Proceedings of the SIGCHI conference on human factors in computing systems (CHI ’03), ACM Press, New York, pp 553–560, doi: 10.1145/642611.642707
26. Bajura M, Neumann U (1995) Dynamic registration correction in video-based augmented reality systems. IEEE Computer Graphics and Applications 15(5):52–60
27. Inami M, Tomita M, Nagaya N (2007) Relative motion racing. The National Museum of Emerging Science and Innovation, Tokyo
28. Kojima M, Sugimoto M, Nakamura A, Tomita M, Nii H, Inami M (2006) Augmented coliseum: An augmented game environment with small vehicles. In: Proceedings of the 1st IEEE international workshop on horizontal interactive human-computer systems (TABLETOP ’06), IEEE, Adelaide, vol 1, pp 3–8
29. Bimber O, Raskar R (2005) Spatial augmented reality: Merging real and virtual worlds, A K Peters, Wellesley, MA
30. Hauber J, Regenbrecht H, Hills A, Cockburn A, Billinghurst M (2005) Social presence in two- and three-dimensional videoconferencing. In: Proceedings of the 8th annual international workshop on presence, London, pp 189–198
31. Stafford A, Piekarski W, Thomas B (2006) Implementation of god-like interaction techniques for supporting collaboration between outdoor AR and indoor tabletop users. In: Proceedings of the 5th IEEE and ACM international symposium on mixed and augmented reality (ISMAR ’06), IEEE Computer Society, Washington, DC, pp 165–172, doi: 10.1109/ISMAR.2006.297809