Proceedings of the Second International Conference on Tangible and Embedded Interaction (TEI'08), Feb 18-20 2008, Bonn, Germany
HYUI - A Visual Framework for Prototyping Hybrid User Interfaces

Christian Geiger, Robin Fritze, Anke Lehmann
University of Applied Sciences Düsseldorf
Josef-Gockeln-Str. 9, 40474 Düsseldorf, Germany
[email protected], [email protected], [email protected]

Jörg Stöcklein
University of Paderborn
Fürstenallee 11, 33100 Paderborn, Germany
[email protected]
ABSTRACT
This paper describes a pragmatic approach for the design of hybrid user interfaces based on a number of extensions of an existing 3D authoring system. We present the design and realization of a visual framework dedicated to the prototyping of hybrid user interfaces. The rapid development environment was applied in a teaching context during lectures on advanced user interface design. The results showed that our framework provides a suitable tool to quickly design and test hybrid user interfaces.

Author Keywords
Hybrid user interfaces, 3D authoring, prototyping.

ACM Classification Keywords
H5.1 Multimedia Information Systems; H5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.

INTRODUCTION
Advances in virtual and augmented reality, tangible computing and embedded interaction allow the design of intuitive deictic interaction techniques, e.g. based on direct manipulation. Despite the growing number of applications that successfully employ embedded, tangible, and 3D techniques for intuitive interaction, established 2D HCI concepts are still valid for most user tasks. Universal 3D interaction tasks can be classified into selection, manipulation, navigation (travel, wayfinding), and system control. System control refers to executing a command or changing the state of the system. As suggested by [5], system control is well understood in WIMP-based 2D graphical user interfaces but poorly supported in 3D and tangible user interfaces, where it is often cumbersome and needlessly complicated. Thus, different combinations of 2D and 3D interface elements are often used: 2D elements separated from 3D elements are used in 3D modeling programs, 2D on top of 3D refers to head-up display (HUD) techniques, and 2D fully registered in 3D worlds is often used in immersive virtual environments. Hybrid user interfaces combine 2D, 3D and real object interaction and may use multiple input and output devices and different modalities. The term was originally coined by Feiner et al. [1] for a mixture of 2D and 3D representations. We extended the definition towards real objects to include mixed reality and tangible interaction techniques. 3D user interfaces, mixed reality UIs and tangible interaction are still areas of active research, and hybrid user interface design, which uses combinations of these elements, cannot expect to have established design techniques and standard design tools in the near future. Where available, design approaches are adapted from software engineering and do not address the special requirements of hybrid UI design. However, a simple "implement-test-throw away" cycle is not viable because the implementation of different working prototypes is expensive and time consuming. This limits the number of concepts and designs that can possibly be explored.
Figure 1. Table-Sized L-Shaped HYUI Application
To address these problems we developed a design process and corresponding tool support based on the idea of a "testable design representation" [22]. The process aims at the fast development and evaluation of iterative prototypes throughout the design process. To support this workflow, a technical framework for prototyping hybrid user interfaces is needed. This paper describes a pragmatic approach for the structured design of hybrid user interfaces based on an extension of an existing 3D authoring system. The current prototype was used in a lecture "Advanced User Interface Design" attended by media technology students. During the lab sessions students designed and implemented hybrid interaction techniques for a game-like application. The modular structure of the game, using a number of quests, allows concentrating on selected interaction design tasks.
RELATED WORK
A clear understanding of such systems is only possible if the underlying interaction model is precisely specified. Beaudouin-Lafon described an interaction model for traditional WIMP (windows, icons, menus, pointer) and post-WIMP interfaces based on the notion of instrumental interaction [2]. The key idea is to use tools (or instruments) as mediators for the direct manipulation of domain objects by transforming user actions into commands. Coutrix and Nigay extended this model towards mixed interaction [3]. Their approach links modalities between physical and digital properties. With the use of mixed objects, existing approaches like augmented reality, augmented virtuality, tangible user interfaces, and classical 2D interaction can be described in a unified way. In [23] we presented our initial approach for mixed reality authoring, based on an actor-based conceptual model, an iterative design process and a set of tools for modeling and animation (Maya) and real-time 3D programming (Java, Java3D, ARToolKit). Although it provides a high-level view of an AR application, it did not offer a visual authoring environment and was not well suited for multimodal and hybrid user interface design.

A number of different approaches consider the efficient design of 2D, 3D and real-world objects and their combination. Sandor et al. presented an immersive authoring approach for hybrid user interfaces in [4]. End users can configure hybrid user interfaces using an augmented reality overlay that features a head-tracked, see-through HMD, a number of different input devices and 3D objects presented on multiple screens. The augmented reality representation is used to configure input devices to perform simple 3D operations. A simple visual language was developed to visualize the relationship between objects, devices and operations. Thekla, a system proposed by Pirchheim et al. [5], also features a visual programming environment. This approach is based on Qt Designer and is mainly restricted to 2D GUIs. Thekla provides the glue between 2D graphical user interfaces and 3D components based on the Studierstube framework. The described approaches do not provide a complete visual programming solution.

The idea of constructing complex interaction techniques from building blocks by composition dates back to Foley et al. [6]. Since then a number of projects have extended this approach. Card et al. use sensor data and composition operators to characterize interaction techniques and guide the design [7]. Card's design space of input devices is based on the physical properties that are used by input devices (absolute and relative position, absolute and relative force, both in linear and rotary form) and composition operators (merge, layout, connection). The European research project AMIRE provided different reusable mixed reality elements [25]. Low-level "MR gems" and high-level "MR components" defined a mixed reality repository that can be used by content experts to design mixed reality applications. In [8] this concept was extended towards story patterns, and a 2D visual data flow notation was used to describe a non-linear story path. TUI design is not addressed in this work. The DART system extends a commercial authoring system by AR and tracking components [9]. DART has been used in large projects at Georgia Tech, USA and allows content experts to build AR worlds using Adobe Director's theatre metaphor for designing multimedia applications. Director's features for 3D graphics and physical simulation provide powerful means for AR authoring. Cavazza presented the Alterne platform, a new way of creating virtual worlds [26]. It is based on qualitative reasoning, an AI technique used to reason about the behaviour of a physical system without precise quantitative information. With this technique, alternative laws of physics have been developed for alternate reality worlds. Similar to our approach, the representation was realized by extending an existing graphics engine. Alterne uses the Unreal Tournament 2003 game engine and extends it to run in VR environments such as CAVEs.

A number of approaches exist for prototyping ubiquitous computing applications with physical components. The iStuff toolkit provides a number of physical devices and flexible software to explore the design of new interaction techniques [10]. Smart-Its is a similar approach that augments physical objects with embedded processing and interaction to develop augmented artefacts [11]. The EiToolkit uses the concept of stubs to enable and combine a number of underlying technologies [17]. Papier-Mâché is a toolkit that eases the effort to develop tangible user interfaces for users who are not input hardware experts [31]. Through a set of input abstractions and the support of several types of physical input (e.g. RFID, computer vision, barcodes), Papier-Mâché provides design flexibility, allowing developers to retarget an application to a different input technology with minimal code changes. The tool has been evaluated in coursework. Hartmann et al. presented a toolkit that embodies an iterative, design-centered approach to the prototyping of physical prototypes [12]. Three types of hardware extensibility were introduced at the intra-hardware, communication and circuit level. The authoring environment d.tools offers a number of options for the design of physical prototypes. A state-chart based visual notation supports early prototyping, and a design-test-analyze cycle supports a user-centered design workflow.

REQUIREMENTS

The design of our HYUI framework should help users to develop task-specific hybrid user interfaces with all their benefits, while supporting reuse and making the development cost effective. The system is designed for users with a media-related background, i.e. people that do not necessarily have in-depth programming knowledge. This target group needs technical support for rapid prototyping and for evaluating the "look and feel" of initial ideas. This requires three basic elements:

• a structured design process based on a suitable conceptual model,
• tool support for multimodal interaction and hybrid user interfaces, and
• an extendible system and reusable components based on the idea of building blocks.

Based on previous work in mixed 2D/3D user interfaces and mixed reality [23], we identified the following detailed requirements for an easy-to-use framework:

• a defined iterative process model for the development
• support for prototype tests in every step of the process model based on a testable design representation
• rapid prototyping support using script-based tools for input/output mapping of devices
• an appropriate visual programming notation and component model
• arbitrary combination of 2D, 3D and tangible interaction components
• a repository of devices/sensors that can be selected, configured and combined according to the task

The HYUI framework should address all these requirements. However, in this paper we present only the technical aspects of this project. The main focus is to describe a set of extensions we have built for a 3D authoring system and its application in a game-like example used for teaching advanced user interface design.
HYUI FRAMEWORK

We found that a suitable conceptual model and a corresponding iterative design approach are essential for designing hybrid user interfaces. These aspects are not covered in this paper; we focus on the technical details of the authoring system and a validating example. In contrast to other projects, we chose to extend an existing commercial authoring system. This pragmatic decision constrained some of the design decisions but allowed us to concentrate on selected design aspects with our limited resources. Our support for hybrid user interface design should equally support 2D, 3D and physical interface components. Coming from the computer graphics and media design domain, we (historically) concentrated on 2D and 3D elements first.

Base System
The technical base for our projects is the Virtools system from Dassault Systèmes [27], a 3D authoring system for rapidly developing interactive, real-time 3D content and applications. In Virtools, content is visually created by linking logical components (building blocks) into a workflow. Input and output ports of these building blocks (BBs) are connected by links and implement a data and control flow within a visual scripting environment. The authoring system provides a large set of predefined elements, stored in a repository, for creating high-level 3D graphics. It is possible to connect input and output devices via MIDI and OSC interfaces. Virtools allows the import of complete 3D scenes created with animation tools. For procedural components a textual notation is provided that supports the development of complex program sequences. The textual scripting language and visual building blocks are not well suited for runtime-intensive computations. For this purpose Virtools provides a Software Development Kit (SDK) that gives programmers access to the low-level functionality, enabling them to write software components that use this functionality directly within their own building blocks. The integration of existing software libraries is well supported. The simple example of the visual notation in Figure 2 shows a script that rotates its owner 45 deg/sec around the X axis.
Figure 2. Simple visual script
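For computations that are too heavy for the visual notation, the same behavior can be implemented against the SDK. The following C++ sketch mirrors the script in Figure 2 as such a callback; it follows the general structure of Virtools behavior code, but the identifiers and parameter layout should be read as a simplified illustration rather than verbatim SDK usage.

```cpp
// Schematic sketch of a custom building block, loosely following the
// Virtools SDK pattern (behavior = C++ callback with typed ports).
#include "CKAll.h"  // main Virtools SDK header (assumption)

int RotateOwnerCallback(const CKBehaviorContext& ctx)
{
    CKBehavior* beh = ctx.Behavior;

    // Read the rotation speed (deg/sec) from input parameter 0.
    float degPerSec = 45.0f;
    beh->GetInputParameterValue(0, &degPerSec);

    // Rotate the owner entity around its local X axis,
    // frame-rate independent via the elapsed time in milliseconds.
    CK3dEntity* owner = (CK3dEntity*)beh->GetOwner();
    VxVector axis(1.0f, 0.0f, 0.0f);
    float angle = degPerSec * (3.14159265f / 180.0f) * (ctx.DeltaTime / 1000.0f);
    owner->Rotate(&axis, angle, owner);

    // Activate output 0 so the visual data flow continues.
    beh->ActivateOutput(0);
    return CKBR_ACTIVATENEXTFRAME;  // re-schedule for the next frame
}
```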
A number of additional modules exist for physical simulation based on the Havok game physics engine, virtual reality set-ups (multi-sided projections and VR tracking), a multi-user server and AI functionality. To use the 3D authoring system for the design of hybrid user interfaces, we developed a number of custom extensions for our HYUI framework that can be easily used as additional visual building blocks:
2D GUI based on Flash
Rapid 2D GUI design is not well supported in Virtools, and Flash content cannot be combined with it out of the box. We integrated a Flash player into the system and can now use interactive Flash movies on top of a 3D scene (head-up display), in separate windows, and within a 3D scene (rendered to an interactive texture).
Optical Tracking
Marker-based tracking is the primary base technology for tangible interaction and augmented reality. ARToolKitPlus [15] is a software library for building augmented reality applications. It is based on optical tracking and uses fiducial markers (square black-and-white patterns). It is similar to the well-known ARToolKit but does not include rendering and video capturing. Advantages are the integrated marker library, more accurate pose estimation, and automatic threshold adjustment for changing light conditions. It can easily be used to compute the transformation matrix between camera and markers. This matrix is then used to register 3D objects in an augmented scene.
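A minimal sketch of how such a tracking building block can drive the registration is given below, assuming the ARToolKitPlus 2.x single-marker interface. The constructor arguments, calibration file and call signatures are approximations from that version, and applyToSceneNode() is a hypothetical hand-over into the scene graph.

```cpp
// Sketch of the tracking building block's core loop; ARToolKitPlus call
// names follow version 2.x and should be treated as assumptions.
#include <ARToolKitPlus/TrackerSingleMarker.h>

void applyToSceneNode(const ARFloat* modelview);  // hypothetical hook

// Image size, pattern and BCH settings are illustrative.
ARToolKitPlus::TrackerSingleMarker tracker(640, 480, 8, 6, 6, 6, 0);

bool initTracking()
{
    // Camera calibration file plus near/far clipping planes.
    if (!tracker.init("camera_calib.cal", 1.0f, 1000.0f))
        return false;
    tracker.activateAutoThreshold(true);  // adapt to changing lighting
    return true;
}

void onCameraFrame(const unsigned char* pixels)
{
    // Detect markers in the grabbed image and estimate the pose of the
    // best one; calc() updates the internal modelview matrix.
    tracker.calc(pixels);

    // Camera-to-marker transform (OpenGL-style 4x4 matrix); this is what
    // registers a virtual object on the physical marker.
    applyToSceneNode(tracker.getModelViewMatrix());
}
```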
reacTIVision is a popular 2D tracking system based on fiducial markers and widely used in table-top applications [16]. It provides very fast, flexible and reliable tracking of markers. We developed a simple building block that starts and configures the reacTIVision system from within Virtools. OptiTrack [18] is a low-cost optical tracking system that uses USB cameras surrounded by eight IR LEDs and passive reflective markers. These markers are recognized by the camera, and the 2D position of each marker in the current camera frame can be queried from the system using the OptiTrack API. In addition, the 3D position of three markers can be computed if they are arranged in a predefined triangle. A vector clip for such an arrangement is available and can be used for head tracking, etc. An initial prototype of a set of OpenCV building blocks is also available for prototyping interaction techniques based on this computer vision library.

Figure 3. Mixed Reality Interaction

Figure 3 shows a sample application developed with these extensions: a mixed reality board game that is controlled via tangible interaction, speech and gestures, see [24]. A virtual sorcerer's position is controlled by the orientation of a plate based on the movement of the real objects. After moving towards the lowest point, the user orients the virtual figure by pointing a wand with a mounted IR reflection dot, which is tracked by the OptiTrack component. Once the desired orientation is achieved, the user commands the sorcerer to cast a spell by shouting "fire". The goal is to hit the enemy's castle five times and win the game. After each turn the opponent player commands the sorcerer.

Haptics and Gestures

Vibrational feedback with game controllers is supported by our extension for DirectInput/XInput and the Wiimote. We integrated a public domain library for the Wiimote [28] and extended it to recognize gestures. The use of custom gestures for the Wiimote and Nunchuk controller has been implemented following a simple pattern-matching approach. The Wiimote continuously sends an acceleration vector of the device. We separate the X/Y/Z components and store a sample of n values of each component in three discrete functions. This sample is compared with a set of prerecorded gesture samples. If the difference is small enough, the gestures are considered equal. This works well for simple gestures, but may fail for complex motions prerecorded by someone other than the user. In these cases we use a set of variations of a gesture as the pattern. This pragmatic solution provided us with a fairly robust and flexible component for gesture recognition with the Wiimote. Gestures can easily be recorded and recognized within the HYUI framework.
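The matching step itself is small enough to sketch in full. The following self-contained C++ fragment implements the described comparison of a live n-sample window against prerecorded templates; the distance metric and names are our illustrative choices, not the exact HYUI implementation.

```cpp
// Pattern matching on windowed Wiimote acceleration data, as described above.
#include <cmath>
#include <string>
#include <vector>

struct GestureTemplate {
    std::string name;
    std::vector<float> x, y, z;   // n prerecorded samples per axis
};

// Mean absolute difference between a live window and one template axis.
static float axisDistance(const std::vector<float>& live,
                          const std::vector<float>& tmpl)
{
    float sum = 0.0f;
    for (size_t i = 0; i < live.size() && i < tmpl.size(); ++i)
        sum += std::fabs(live[i] - tmpl[i]);
    return sum / static_cast<float>(live.size());
}

// Returns the best-matching gesture name, or "" if nothing is close enough.
std::string recognize(const std::vector<float>& liveX,
                      const std::vector<float>& liveY,
                      const std::vector<float>& liveZ,
                      const std::vector<GestureTemplate>& templates,
                      float threshold)
{
    std::string best;
    float bestDist = threshold;   // only accept matches below the threshold
    for (const GestureTemplate& t : templates) {
        float d = (axisDistance(liveX, t.x) +
                   axisDistance(liveY, t.y) +
                   axisDistance(liveZ, t.z)) / 3.0f;
        if (d < bestDist) { bestDist = d; best = t.name; }
    }
    return best;  // several recorded variations per gesture improve robustness
}
```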
We realized a more advanced solution for haptic interaction with the Phantom Omni device [29]. The Phantom Omni is a 3D input device that supports point-based haptic feedback for a mounted pen-based handle (see Figure 4, bottom). We used the low-level part of the Phantom Omni's OpenHaptics SDK and developed custom building blocks for reading tracking data from the device and setting the corresponding forces. Additional building blocks help during the initialization of haptic interaction techniques. Figure 4 (top) shows parts of the visual script for controlling a virtual car model with this haptic device. This distributed multi-user application allows two users to explore a physical environment with their vehicles. One player can knock over the stacked virtual objects while the other reconstructs the piles. Figure 4 (bottom) shows the "builder" user with a Phantom Omni. Although not visible in this screenshot, the application also uses 2D graphical interfaces.

Figure 4. Haptic Interaction in 3D
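At its core, such a building block wraps the HDAPI servo loop of OpenHaptics: read the pen position, compute a force, write it back at roughly 1 kHz. The sketch below shows this pattern with a simple spring force as a stand-in for whatever the visual script requests; the anchor point and stiffness are illustrative assumptions.

```cpp
// Sketch of the low-level OpenHaptics (HDAPI) loop behind our Phantom Omni
// building blocks.
#include <HD/hd.h>
#include <HDU/hduVector.h>

HDCallbackCode HDCALLBACK forceCallback(void* /*userData*/)
{
    hdBeginFrame(hdGetCurrentDevice());

    hduVector3Dd pos;
    hdGetDoublev(HD_CURRENT_POSITION, pos);   // pen tip position (mm)

    // Simple spring pulling the pen towards an anchor point; stiffness is
    // an illustrative value, tuned per interaction technique.
    const hduVector3Dd anchor(0.0, 0.0, 0.0);
    const double k = 0.25;                    // N/mm (assumption)
    hduVector3Dd force = (anchor - pos) * k;
    hdSetDoublev(HD_CURRENT_FORCE, force);

    hdEndFrame(hdGetCurrentDevice());
    return HD_CALLBACK_CONTINUE;              // keep running at servo rate
}

void startHaptics()
{
    hdInitDevice(HD_DEFAULT_DEVICE);
    hdEnable(HD_FORCE_OUTPUT);
    hdStartScheduler();                       // ~1 kHz servo loop
    hdScheduleAsynchronous(forceCallback, 0, HD_MAX_SCHEDULER_PRIORITY);
}
```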
Scripting Tools and Hardware Interface

The initial phases of our iterative workflow often require experiments with new combinations of sensors and input/output mappings that are not yet implemented as building blocks. For initial prototypes we use existing tools that allow connecting input and output signals via scripting. We provide components for controlling Max/MSP [19] and GlovePie [20] from within our framework. These tools are well suited for this task. The prototyping of new IO devices with different sensors is another focus in our projects. Easy-to-use sensor hardware like the MIDI-based I-CubeX sensors [13] or the open source hardware project Arduino [14] is readily available. Currently, we use a set of I-CubeX sensors that are connected via MIDI to our system, but also DMX-based set-ups. DMX512 is a standard that describes a method of digital data transmission between controllers and lighting equipment and accessories [30]. It is the primary method for controlling stage lighting and effects. We developed a building block that allows controlling DMX devices (e.g. dimmer packs, fog machines, light set-ups) via the HYUI framework.
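Conceptually, such a DMX building block maintains a 512-channel universe and pushes it to the interface hardware. The sketch below frames a universe for a serial USB-DMX widget following the widely used Enttec "DMX USB Pro" message format; our actual hardware interface may differ, so treat the framing as an assumption.

```cpp
// Sketch of a DMX building block's core: a 512-channel universe plus one
// serial frame per update, following the Enttec-style widget protocol.
#include <cstdint>
#include <vector>

class DmxUniverse {
public:
    DmxUniverse() : channels_(512, 0) {}

    // DMX channels are 1-based; values 0..255 (dimmer level, fog output...).
    void set(int channel, uint8_t value) {
        if (channel >= 1 && channel <= 512) channels_[channel - 1] = value;
    }

    // Build one "send DMX packet" message for the serial widget.
    std::vector<uint8_t> frame() const {
        std::vector<uint8_t> msg;
        const uint16_t len = 1 + 512;        // start code + channel data
        msg.push_back(0x7E);                 // message start delimiter
        msg.push_back(6);                    // label: output-only send DMX
        msg.push_back(len & 0xFF);           // length LSB
        msg.push_back(len >> 8);             // length MSB
        msg.push_back(0);                    // DMX start code
        msg.insert(msg.end(), channels_.begin(), channels_.end());
        msg.push_back(0xE7);                 // message end delimiter
        return msg;                          // write this to the serial port
    }

private:
    std::vector<uint8_t> channels_;
};
```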
EXAMPLE
The interactive 3D application "Marv Island" is a table-top, adventure-based game. It was developed with our HYUI framework to be used in lectures to teach the basics of human-computer interaction in a virtual reality context. The idea is to provide users with a complex HYUI application that can easily be extended by new interaction techniques. Students can choose from a set of predefined building blocks in order to rapidly build special interaction scenarios and evaluate them inside the Marv Island application. In addition, they learn how to use input devices and sensors by playing the game, i.e. they have to assemble sensor devices in order to fulfill particular tasks of the game, so-called quests. Marv Island consists of a small set of islands that are inhabited by a swarm of birds called "Marvs". The goal is to control the swarm to reach the final island and leave it. The Marvs can be moved over the islands almost freely by manipulating the appropriate swarm parameters, restricted only by topology and water. All Marvs can take damage, and if their health value reaches zero they die. The Marvs align themselves: they try to stay in range of their neighbors and follow the leader, who can be selected throughout the game. If the minimum distance between Marvs falls below a certain value, the "discipline" of the Marvs decreases. Only with a sufficient level of discipline will they follow their leader, i.e. remain close together. Furthermore, the Marvs are attracted to objects of interest, specified in a landmarks array. The quests are intended for the integration of new technologies within this project. These are small program sections that are independent of the actual application. The user recognizes a quest by a red pulsating ring that can be placed anywhere on the islands. For the duration of a quest, new program routines can be introduced, for example physical simulations or optical tracking scenarios. The HYUI framework makes it possible to extend the Marv application with hybrid interface components.
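These swarm rules translate directly into a per-frame update step. The following sketch condenses them into plain C++; all parameter values are illustrative, and the shipped behavior is of course realized as visual building blocks rather than this code.

```cpp
// Condensed sketch of the described swarm rules: follow the leader,
// keep spacing, and lose "discipline" when the swarm gets too crowded.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };
static Vec2  sub(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
static float len(Vec2 v)         { return std::sqrt(v.x * v.x + v.y * v.y); }

struct Marv { Vec2 pos; float health; };

void updateSwarm(std::vector<Marv>& marvs, int leader,
                 float& discipline, float dt)
{
    const float neighborRange = 3.0f;   // stay in range of the swarm
    const float minDistance   = 0.5f;   // crowding threshold
    const float speed         = 1.2f;   // illustrative values

    // Crowding reduces discipline; spacing slowly restores it.
    float closest = 1e9f;
    for (size_t i = 0; i < marvs.size(); ++i)
        for (size_t j = i + 1; j < marvs.size(); ++j)
            closest = std::min(closest, len(sub(marvs[i].pos, marvs[j].pos)));
    discipline += (closest < minDistance ? -0.2f : 0.05f) * dt;

    for (size_t i = 0; i < marvs.size(); ++i) {
        if ((int)i == leader || marvs[i].health <= 0.0f) continue;
        // Only a sufficiently disciplined swarm follows its leader.
        if (discipline > 0.5f) {
            Vec2 toLeader = sub(marvs[leader].pos, marvs[i].pos);
            float d = len(toLeader);
            if (d > neighborRange) {
                marvs[i].pos.x += toLeader.x / d * speed * dt;
                marvs[i].pos.y += toLeader.y / d * speed * dt;
            }
        }
        // Topology, water and landmark attraction would be applied here.
    }
}
```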
Figure 5. L-shaped table-top display and hybrid screen layout (3D view, 2D GUI, interactive area)
We built a simple table-based environment with an L-shaped display for the game (see also Figure 1). Figure 5 illustrates the set-up and the chosen hybrid screen layout used in the MARV application. The table-top set-up uses a 30" horizontal display and a projector image that is deflected 90° by a mirror onto a semi-transparent acrylic plate. The recognition of multi-finger interaction is currently realized with retro-reflective IR dots that are stuck on the user's finger tips. A more advanced FTIR solution [21] is under development. The MARV swarm is controlled with multi-finger interaction on the table's interaction area. A new leader can be selected at any time during the game, and the swarm adjusts itself to the leading Marv character. Selection is done by finger-tapping on a Marv representation in the interactive area. Moving the leader in a given direction is realized by moving the finger after selecting the leader. Rotation of a Marv is done with two fingers. The swarm can be widened with all fingers and a gesture as presented in the illustration in Figure 6.
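The mapping from tracked IR dots to swarm commands reduces to a dispatch on the number of visible fingers, sketched below; the hook functions are hypothetical stand-ins for the corresponding building blocks.

```cpp
// Finger-count dispatch for the interactive area: the tracking delivers
// 2D dot positions, which are mapped onto swarm commands.
#include <algorithm>
#include <cmath>
#include <vector>

struct Dot { float x, y; };

// Hypothetical hooks into the Marv application (not part of the paper's API).
void selectOrMoveLeader(float x, float y);
void rotateSelectedMarv(float angleRad);
void widenSwarm(float spread);

void dispatchFingers(const std::vector<Dot>& dots)
{
    if (dots.empty()) return;                       // no interaction

    if (dots.size() == 1) {
        // A tap selects a new leader; dragging the finger moves it.
        selectOrMoveLeader(dots[0].x, dots[0].y);
    } else if (dots.size() == 2) {
        // Two fingers rotate the selected Marv: the angle of the vector
        // between both dots drives its orientation.
        float angle = std::atan2(dots[1].y - dots[0].y,
                                 dots[1].x - dots[0].x);
        rotateSelectedMarv(angle);
    } else {
        // All fingers spread apart widen the swarm: use the maximum
        // pairwise dot distance as the spread parameter.
        float spread = 0.0f;
        for (size_t i = 0; i < dots.size(); ++i)
            for (size_t j = i + 1; j < dots.size(); ++j)
                spread = std::max(spread,
                    std::hypot(dots[j].x - dots[i].x, dots[j].y - dots[i].y));
        widenSwarm(spread);
    }
}
```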
Figure 6. Controlling the swarm with multi-finger interaction (moving a Marv with one and two fingers; widening the swarm with all fingers)

Quests

We present three quests of the Marv Island environment that test several interactions using special input devices, e.g. the Wiimote, ARToolKit, OptiTrack and I-CubeX sensors. These were developed as part of student projects during the lectures.

Palm chopping: The goal of this quest is to chop down a palm tree and use it as a bridge that allows the Marvs to pass the gap between two neighboring islands. Once the "palm chopping" quest has been started, the leader of the Marvs leaves the swarm and moves towards a couple of palm trees located at the edge of one island. The player is advised to take the Wiimote for cutting the trees. In order to do so, she performs a chopping gesture with the Wiimote, and the Marv moves its head according to the gesture (Figure 7). When the beak hits the trunk, the palm starts swinging. The user has to try to match the rate of her arm movements with the frequency of the swinging until the palm collapses. When the quest is finished, the Marvs can pass on to the second island. This quest was implemented using the Wii component of the HYUI framework: one building block waits for button input of the device so that the user can confirm the quest, and another one detects shaking movements and analyzes the gesture.

Figure 7. Interactive quest "Palm Chopping" (a Wiimote shake makes the Marv move its head; the palms swing until they fall down)

Boulder shifting: The goal of this quest is to pass the Marvs between two islands by the use of movable rocks placed in the water. There are two boulders that can be manipulated directly by moving corresponding physical blocks placed on the TUI area of the table surface (Figure 8). The Marvs can jump onto one of the rocks while the other one is moved. If a boulder is moved while Marvs are on it, they fall into the water. The blocks have to be shifted alternately in order to pass the gap between the islands.

Figure 8. Interactive quest "Boulder Shifting" (moving the physical cubes moves the virtual rocks)

Sailing: There is a raft for navigating the Marvs over large distances on the water surface. Precise control of the raft is necessary in order to avoid collisions with obstacles in the water. The sailing speed is controlled by the strength of the wind, which is generated by the user blowing into an I-CubeX air sensor that measures the pressure of the blow. The user also controls the direction of navigation with her head, which is tracked via the OptiTrack building block. Head tracking is appropriate here, as the position and orientation of the sensor is directly dependent on head movement.

Figure 9. Interactive quest "Raft Sailing" (blow for wind; turn the head to steer; the raft moves according to the wind)
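The input/output mapping of this quest is a good example of the kind of sensor combination the framework supports. A condensed sketch, with illustrative scaling constants, could look as follows.

```cpp
// Sketch of the sailing quest's mapping: the I-CubeX air sensor's MIDI
// value drives the wind strength, the OptiTrack head yaw steers the raft.
#include <cmath>

struct RaftState { float heading; float speed; };

void updateRaft(RaftState& raft, int blowMidiValue, float headYawDeg, float dt)
{
    // MIDI controller values arrive as 0..127; map them to wind strength.
    float wind = static_cast<float>(blowMidiValue) / 127.0f;

    // The raft accelerates with the wind and slows down without it.
    const float maxSpeed = 2.0f, inertia = 0.8f;   // illustrative values
    raft.speed = inertia * raft.speed + (1.0f - inertia) * wind * maxSpeed;

    // Head orientation steers; small angles are ignored to avoid jitter.
    if (std::fabs(headYawDeg) > 5.0f)
        raft.heading += headYawDeg * 0.5f * dt;
}
```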
EVALUATION AND CONCLUSION

The described quests were realized as part of a lecture assignment without major difficulties. The modular structure of the application allowed the students to concentrate on the selected interaction techniques. First prototypes were built using available building blocks as substitutes if the desired functionality was not yet implemented. In a second step, scripting tools like GlovePie or Max/MSP were used to better simulate and test the interaction. Finally, new building blocks were realized and integrated into the authoring system. For prototyping, the current framework proved to be suitable even for non-programmers. The design of complex applications is more difficult due to the missing object-oriented programming approach that experienced developers would prefer. However, for the purpose of rapid prototyping of multimodal and hybrid user interfaces, a visual authoring framework seems to be of significant value. The development time for a quest ranged from a couple of days, if all needed building blocks existed in the system, to three to four weeks if new building blocks had to be developed by the students. Feedback from evaluations after the lectures showed that students with a strong affinity to media design were very satisfied with the high-level support of the framework. Students with a focus on media programming complained that the visual notation was unusual at first and that the missing object-oriented development approach was hard to accept. However, after they got involved with this prototyping approach, they too were satisfied with the rapid development of their work.

In this paper we described the technical aspects of the HYUI framework, which is dedicated to the visual prototyping of hybrid user interfaces. For this purpose, we extended a commercial 3D authoring system with a number of multimodal devices and tracking technologies, 2D Flash, and scripting tools for input/output mapping. A non-trivial game-based scenario was realized that easily allows experimenting with new interaction techniques. Future work will include more interaction devices. We are planning to use the Arduino hardware in future projects. Moreover, the integration of MATLAB will allow for a model-based design approach.

ACKNOWLEDGEMENTS

We thank Florian Klompmaker and Björn Wöldecke for their implementation support of various extensions. The student project group "MARV Island" and Jan Bodenstein designed and implemented the MARV Island example during a semester project organized by Christian Geiger at the University of Applied Science Harz.

REFERENCES

1. S. Feiner, A. Shamash. Hybrid user interfaces: breeding virtually bigger interfaces for physically smaller computers. Proc. UIST '91, ACM Press (1991).

2. M. Beaudouin-Lafon. An Interaction Model for Designing Post-WIMP User Interfaces. Proc. CHI 2000, The Hague, Netherlands, ACM Press (2000).

3. C. Coutrix, L. Nigay. Mixed Reality: A Model of Mixed Interaction. Proc. Advanced Visual Interfaces (AVI '06), May 23-26, 2006, Italy.

4. C. Sandor, A. Olwal, B. Bell, S. Feiner. Immersive Mixed-Reality Configuration of Hybrid User Interfaces. Proc. ISMAR '05, IEEE and ACM Int. Symposium on Mixed and Augmented Reality (2005).

5. C. Pirchheim, D. Schmalstieg, A. Bornik. Visual Programming for Hybrid User Interfaces. Proc. 2nd International Workshop on Mixed Reality User Interfaces (MRUI '07).

6. J. D. Foley, L. V. Wallace, P. Chan. The human factors of computer graphics interaction techniques. IEEE Computer Graphics and Applications 4, 11 (Nov. 1984), pp. 13-48.

7. S. K. Card, J. D. Mackinlay, G. G. Robertson. A Morphological Analysis of the Design Space of Input Devices. ACM Transactions on Information Systems, Vol. 9, No. 2, April 1991, pp. 99-122.

8. D. F. Abawi, S. Reinhold, R. Dörner. A Toolkit for Authoring Non-linear Storytelling Environments Using Mixed Reality. TIDSE 2004: Technologies for Interactive Storytelling, Springer (2004).

9. B. MacIntyre, M. Gandy, S. Dow, J. D. Bolter. DART: A Toolkit for Rapid Design Exploration of Augmented Reality Experiences. Proc. UIST '04, ACM Symposium on User Interface Software and Technology (2004).
10. R. Ballagas, M. Ringel, M. Stone, J. Borchers. iStuff: A Physical User Interface Toolkit for Ubiquitous Computing Environments. Proc. CHI '03, ACM Press (2003).

11. H. Gellersen, G. Kortuem, M. Beigl, A. Schmidt. Physical Prototyping with Smart-Its. IEEE Pervasive Computing Magazine, 3/04.

12. B. Hartmann, S. Klemmer, M. Bernstein, L. Abdulla, B. Burr, A. Robinson-Mosher, J. Gee. Reflective physical prototyping through integrated design, test, and analysis. Proc. UIST 2006, ACM Symposium on User Interface Software and Technology (2006).

13. I-CubeX. http://www.infusionsystems.com

14. Arduino. http://www.arduino.cc

15. D. Wagner, D. Schmalstieg. ARToolKitPlus for Pose Tracking on Mobile Devices. Proc. CVWW '07.

16. M. Kaltenbrunner, R. Bencina. reacTIVision: A Computer-Vision Framework for Table-Based Tangible Interaction. Proc. TEI '07, ACM Press (2007).

17. EiToolkit. http://www.eitoolkit.de

18. OptiTrack. http://www.naturalpoint.com

19. Max/MSP. http://www.cycling74.com

20. GlovePie. http://carl.kenner.googlepages.com/glovepie

21. J. Han. Low-cost multi-touch sensing through frustrated total internal reflection. Proc. UIST 2005, ACM Press (2005).

22. C. Geiger, V. Paelke, C. Reimann, W. Rosenbach. Testable design representations for mobile augmented reality authoring. Proc. ISMAR 2002, IEEE Computer Society (2002).

23. C. Geiger, J. Stöcklein, T. Pflug. Towards Structured Design of Mixed Reality Content. eProceedings, HCI International 2005.

24. C. Geiger, F. Klompmaker, J. Stöcklein, R. Fritze. Development of an Augmented Reality Game by Extending a 3D Authoring System. Proc. ACE 2007, ACM Press (2007).

25. AMIRE. http://www.amire.net/

26. M. Cavazza, J.-L. Lugrin, S. Hartley, P. Libardi, M. J. Barnes, M. Le Bras, M. Le Renard, L. Bec, A. Nandi. New Ways of Worldmaking: The Alterne Platform for VR Art. Proc. ACM Multimedia 2004, ACM Press (2004).

27. Virtools. http://www.virtools.com

28. CWiimote. http://www.wiili.org/index.php/CWiimote

29. Phantom Omni. http://www.sensable.com

30. DMX512-A protocol. http://www.usitt.org/standards/DMX512.html

31. S. R. Klemmer, J. Li, J. Lin, J. A. Landay. Papier-Mâché: Toolkit Support for Tangible Input. Proc. CHI 2004, ACM Press (2004), pp. 399-406.