Rapid Prototyping of Mobile Input Devices Using Wireless Sensor Nodes

James Carlson, Richard Han, Shandong Lao, Chaitanya Narayan, Sagar Sanghani
University of Colorado at Boulder, Boulder, Colorado 80309-0530
{james.carlson, richard.han}@colorado.edu

Abstract

Many options exist for prototyping software-based human-computer interfaces, but there are few comparable technologies that allow application developers to create ad-hoc hardware interfaces with corresponding ease and flexibility. In this paper, we present a system for rapidly constructing low-cost prototypes of mobile input devices by leveraging wireless sensor nodes. We demonstrate two proof-of-concept applications, a conductor's baton and a scene navigation controller, that are prototyped using wireless sensor networks. These applications illustrate that wireless sensor technology can be used to quickly and inexpensively prototype diverse physical user interfaces. We believe that this technology will be of value in many areas, including the study of ergonomics, haptic interfaces, collaborative design, low-cost VR systems, and usability research.
1. Introduction

One of the most challenging and pervasive problems in the field of immersive visualization (more commonly known as virtual reality, or VR) is determining which type of input device is most appropriate for an application. Unlike desktop PC environments, where industry standards in user interface design have led to the dominance of the keyboard and mouse as primary input devices, such standards have yet to be established for immersive 3D environments [14][13]. In order to facilitate the exploration of novel controllers, the developers of VR software should be able to create working prototypes of input devices concurrently with software application development, rather than designing applications around the limitations of existing input devices. Unfortunately, building such prototypes is a costly and time-consuming venture, requiring experience in both electrical engineering and software engineering, as well as domain-specific expertise in immersive visualization.
Our goal is to provide an enabling technology that helps bridge the gap between hardware interface design and software interface design. We demonstrate how wireless sensor nodes can be used as an abstraction layer, allowing for a system by which novel wireless input devices can be rapidly prototyped, at very low cost, by anyone who has an elementary understanding of electronics. By focusing on applications in immersive visualization, this work points to new considerations for the design of wireless sensor network technology that are relatively unexplored by similar efforts in ubiquitous computing and embedded interface research.
2. Design requirements

The testbed for this research was the immersive visualization environment (IVE) at the BP Center for Visualization at CU, Boulder. Unlike VR systems that rely on a head-mounted display, the IVE is a form of VR display in which the user is surrounded by three 10 x 12 foot walls and a 12 x 12 foot floor onto which a stereoscopic image is projected (Figure 1). A position and orientation tracker is attached to the user's stereoscopic glasses, allowing the visualization software to recalculate the perspective of the 3D graphics based on the user's proximity to the walls. The main advantage of an IVE is that it facilitates collaborative work: multiple people can use the facility simultaneously in order to view the same data. This usage scenario, coupled with the fact that the primary user is mobile, motivates the strong preference for wireless input devices.

Our decision to focus on applications in immersive environments poses some other unique challenges. We are concentrating on providing real-time (or near real-time) interaction, and therefore system latency is a primary concern. As a general guideline, a delay of less than 150ms between manipulation of the input device and the resultant change in the software interface is acceptable [9], but there are applications for which a finer resolution (or more accurate timing of the user's input) is required. The physical characteristics of the IVE require omnidirectional wireless communication at an effective range of at least 12 feet, and our desire to incorporate this technology into handheld devices imposes a size restriction on the hardware. Finally, it must be possible to connect the system with the computer that controls the visualization. It cannot be assumed that a direct physical connection is available, since VR environments are often powered by supercomputers which must be housed in an air-conditioned room that is separate from the graphical display.

There are also a number of usability concerns that must be addressed. Our target audience is the software development community, and therefore we cannot assume that the users of this system have a strong background in electrical engineering; it should be possible to build useful input devices using very simple circuitry. Conversely, it is undesirable to limit the input sensors to a predefined set of simple "plug and play" circuits, since our primary goal is to allow developers to think freely, without the restrictions of externally enforced preconceptions. The software interfaces should share this characteristic of simplicity while maintaining flexibility: the developer should not be required to learn a new programming language in order to use the system, and our API should integrate well with the code of existing immersive applications. The platform must also be sufficiently inexpensive to justify its use in preference to building new input devices from scratch.

To summarize, our prototyping system should have the following characteristics:

• Makes hardware-based input device prototyping accessible to software engineers with little background in electrical engineering.
• Addresses specific requirements for use in the IVE: low latency, omnidirectional wireless communication, and small size.
• Allows for easy reconfiguration of both hardware and software for use in many different applications.
• Provides a user interface and/or API that is compatible with current standards for 3D input device libraries.
• Costs less than alternative approaches.
3. Selection of the wireless sensor platform
Figure 1. Within an immersive visualization environment (IVE), wireless sensor nodes are used to prototype novel mobile input devices, e.g. a conductor's wand.
Rapid prototyping of physical computer interfaces is not a new concept, and there are many options for developing wireless input devices for IVEs. Here we discuss existing approaches from the fields of immersive visualization and ubiquitous computing, and the rationale behind our decision to base our technology on wireless sensor networks.
3.1. Related work
A system for prototyping wired VR input devices was first proposed in [3] using Lego™ bricks. More recently, a solution provided by InterSense (a leading manufacturer of tracked input devices for immersive environments) exposes an I2C bus on their wireless modules to allow customization by OEMs [20]. This approach handles the problem of low-level connectivity to the visualization server, but the developer must still have experience in analog and digital circuit design in order to take advantage of this facility.

One interesting approach to creating rapidly reconfigurable mobile interfaces is to build a software-based GUI on a PDA or tablet PC that interfaces with the visualization server via wireless Ethernet. There are many positive aspects to this approach (instant reconfiguration via software, for example), but it is important to note that it also suffers from significant limitations when used in an immersive 3D environment. The physical form factor of the device is inflexible, and a touch screen interface provides no tactile differentiation between the controls. In an immersive environment, the best interface is often one that does not require the user to look away from the data, and the lack of a physical representation of the PDA's graphical buttons precludes their use in many applications [15].

Several research projects have emerged from the ubiquitous computing field that are specifically targeted at physical interface prototyping. The Phidgets [11] project aims to create a system for developing physical input devices and actuators, primarily in a wired environment. More applicable solutions are proposed in [4] and [10], which focus on the use of wireless networks of mobile sensors to form the infrastructure for reconfigurable interactive devices.
It is important to recognize that the goals of ubiquitous computing and immersive visualization are very different, and in some cases, almost in opposition: immersive visualization attempts to completely supplant the user’s environment for a finite period of time, whereas the objective of ubiquitous computing is, arguably, to permanently integrate computers into everyday objects in order to augment the user’s environment without an intrusive computer interface. For this reason, some considerations that are of great importance in ubiquitous computing applications–such as power efficiency, security, distributed communication, and context awareness–are of marginal interest in VR interfaces. On the other hand, greater attention must be paid to issues such as system latency [4], range of wireless communication [2], and the ability to integrate with legacy software and hardware.
Figure 2. The MANTIS Nymph wireless sensor node.
3.2. Wireless sensor networks

Wireless sensor nodes provide an excellent compromise between simplicity and utility, and are sufficiently inexpensive that experimentation with these devices does not require a major investment. We examined several sensor node technologies before settling on the MANTIS Nymph, a new wireless sensor network platform that is being developed independent of this project at the University of Colorado.

The first platform that we investigated was the Berkeley Mica sensor node running TinyOS [12]. The Mica node fits many of our requirements, such as omnidirectional wireless communication, small size, and the ability to connect a diverse range of sensors. However, a separate circuit board must be attached to the Mica in order to connect simple sensors, which increases both the size and complexity of this solution. TinyOS itself, while well established in the wireless sensor network community, requires the developer to learn an unfamiliar event-driven programming model in order to customize the system infrastructure, which also raises the barrier to entry.

Our first prototype was based on the MIT HandyCricket [8], which was designed specifically to support simple hardware prototyping and robotics applications [17]. The HandyCricket's onboard analog-to-digital converter (ADC) allows resistive sensor circuits to be connected directly without an external circuit board, and the Cricket Logo language is trivial to learn. Unfortunately, the HandyCricket proved to be too limited for our use: wireless communication is handled via infrared transceivers, which restricts communication to line-of-sight operation over a very short range. The computational abilities of the HandyCricket are very limited as well; its slow clock speed, combined with a tiny (less than 4 kilobyte) program memory, would restrict future expansion of the platform.
MANTIS [1] is a new mobile sensor node platform that provides a sophisticated, multi-threaded operating system running on a small (3.5 x 5.5 x 2 cm) hardware device known as the MANTIS Nymph (Figure 2). The Nymph has three on-board sensor ports, wireless communication via a 900 MHz radio, and direct serial port connectivity. Like the HandyCricket, the Nymph has an onboard ADC that allows direct connection of sensor circuits, without requiring the user to design and build a custom circuit board. The MANTIS operating system (MOS) was designed from the ground up to provide a familiar, UNIX-like programming environment with a simple C API. MOS is also being ported to run on the Berkeley Mica hardware, allowing us the flexibility to experiment with sensor nodes other than the Nymph. Due to these advantages, we decided to build our system around the MANTIS platform for this stage in our research.
4. System architecture

Figure 3 shows the six hardware components that comprise our system. The first components are the sensor circuits themselves, which form the buttons, knobs, wheels, etc., that provide the user with a physical interface to the immersive application. The second component is the mobile Nymph to which the sensor circuits are connected. The third component is a base station Nymph, which acts as the collection point for the data that are generated by the mobile Nymph. Connected to the base station via the serial port is the PC, the fourth component, which reads the incoming data and forwards them to the server which drives the VR display. This visualization server is the fifth component, which receives the data stream from the PC and interprets the sensor readings so that the immersive application can respond to the user's manipulation of the custom input device.
Finally, the sixth component is the display device itself (most commonly, a stereo-capable projection system): the server updates the graphics based on the changes in the immersive application and transmits the image to the display, thus completing the cycle.

Figure 3. Data flow diagram of the MANTIS-based input device prototyping system.

Figure 4. Conceptual stages of the software.
4.1. Software design

Our software architecture is divided into three stages:

• MANTIS stage: The mobile MANTIS Nymph collects sensor data which is transmitted wirelessly to the PC via an intermediate, wired Nymph (or base station).
• PC stage: Acts as a conduit between the Nymph stage and the Server stage by reading the sensor data from the serial port and retransmitting via TCP/IP to the Server stage.
• Server stage: Daemon running on the visualization server which receives the incoming TCP packets and interprets the sensor data based on per-application configuration files.

4.1.1. MANTIS stage. Two separate programs are used in this stage, one which runs on the mobile Nymph and transmits sensor readings, and another which runs on the base station Nymph and listens for incoming packets. For both ends of the MANTIS stage, simplicity is of primary concern, since mobile sensor nodes are severely resource-constrained in comparison with PCs and servers; therefore, we have offloaded as much of the data processing as possible to the Server stage, allowing the server to handle the interpretation of the sensor readings.

As stated in Section 2, communication latency is a primary concern for this effort. We can divide applications
of input devices into two categories: those that require a response that is as fast as possible, and those that require accurate measurement of the timing of events. The first category would apply to controllers such as joysticks, button pads, or any other type of device where the user expects an immediate response when the controller is manipulated. The second category encompasses devices that are used to support gestural interfaces (such as the conductor's baton, discussed in Section 5.2), or scientific research applications for which the precise time of a user's reaction must be recorded.

The only solution for the first category of timing requirements is to ensure that the latency is within acceptable limits: the hardware and software architecture must be capable of supporting fast communication between the wireless device and the base station, and between the PC and the visualization server. To confirm that our architecture achieves our required latency of less than 150ms, we tested both the round-trip time of communication between the mobile device and the server and the timing of communication between each stage in the system. Our analysis shows that our system has a total latency of approximately 120ms for a 6 byte packet, with the overwhelming majority of the latency caused by the Nymph-to-Nymph wireless communication. (The latencies for the Nymph-to-PC and PC-to-server communication were found to be 11ms and 3.5µs, respectively.) While this latency falls within our stated requirements (and anecdotal reports from users suggest that many people find this delay acceptable), it is still much slower than desirable, especially considering delays that will be introduced by the visualization software itself. We intend to investigate this issue further to find the exact source of this slowdown, and to determine what steps can be taken to reduce the latency.

The second class of applications requires a different approach, especially in conditions where the system latency is unacceptably slow, or variable due to radio interference. If all that is required is an accurate indicator of the relative timing of sensor events, a simple time stamp can be included
in the packet that is sent from the mobile Nymph to the base station; this technique was used effectively in the conductor's baton application. If the time stamp must be accurate relative to the system time on the visualization server, a domain-specific version of the network time protocol (NTP) can be used to synchronize the clock on the mobile Nymph with that of the server, giving us global time synchronization across all levels of the system. We are currently exploring the application of this technique to an input device that will be used in a study of Alzheimer's Disease, where an accurate measure of human response time must be recorded.

To address both of these concerns, the software on the mobile Nymph continuously reads and forwards sensor data, sending a 6 byte packet consisting of a packet sequence number, a 16 bit time stamp (measured in centiseconds), and three 8 bit sensor readings; we leave the issue of Nymph-server time synchronization for future work. The source code that implements this functionality is shown in Figure 5. At present we are also not addressing the issue of unreliable communication. MOS does support a stop-and-wait reliable networking protocol, but it was determined to be inappropriate for our application given the required maximum latency of 150ms. The software on the base station Nymph is quite simple: a single thread listens to the designated radio channel and forwards packets directly to the PC via the serial port.

4.1.2. PC stage. The PC plays the role of a bridge between the Nymph stage and the Server stage. This bridge is necessary for several reasons, the most important of which is the issue of physical separation of the visualization server from the IVE, as discussed in Section 2. The practical range of a serial connection is only a few meters, so a range extender would be required to connect the base station Nymph directly to the server. The second reason is platform independence: although our system is initially being targeted at SGI-based VR environments, we hope to make this technology available on a wide variety of platforms. The APIs for reading data from a serial port vary widely across operating systems, but the TCP/IP APIs are much more standardized across platforms, which will make porting the technology much more straightforward.

The PC stage consists of a simple program that listens on the serial port for data from the Nymphs and forwards the raw data to the server via TCP. This program also performs the function of adjusting for wrap-around of the mobile Nymph's 16 bit time stamp, and converts this value to a 32 bit integer before forwarding the data.

It may seem that the PC stage adds significantly to the expense of our architecture. While it is true that using a PC as a bridge is not as inexpensive as a serial cable, we would argue that the cost is not prohibitive. The TINI board [19], for example, is a Java-based device that provides serial and 10 Base-T Ethernet connectivity, and could serve as the serial-to-Ethernet bridge for a cost of $50.
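To make the data path concrete, the following is a minimal sketch of one way the 6 byte packet and the PC-stage time stamp widening could be expressed. The struct layout and helper function are illustrative assumptions: the paper specifies only the field sizes and ordering, and the 8-bit width of the sequence number is inferred from the 6 byte total.

#include <cstdint>

// Assumed layout of the 6-byte packet described in Section 4.1.1; field
// names are illustrative, not taken from the MANTIS sources.
#pragma pack(push, 1)
struct SensorPacket {
    uint8_t  seq;          // packet sequence number
    uint16_t timestampCs;  // time stamp in centiseconds (wraps every ~11 min)
    uint8_t  sensor[3];    // three 8-bit sensor readings
};
#pragma pack(pop)

// PC-stage helper: widen the wrapped 16-bit centisecond stamp to 32 bits
// before forwarding, assuming at most one wrap between consecutive packets.
uint32_t widenTimestamp(uint16_t raw, uint32_t previous) {
    uint32_t widened = (previous & 0xFFFF0000u) | raw;
    if (widened < previous)   // the 16-bit counter rolled over
        widened += 0x10000u;
    return widened;
}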
Figure 5. The compact application source code that runs on the remote MANTIS sensor Nymph.
4.1.3. Server stage. The main objective for the Server stage is to enable the programmer to integrate MANTIS-based input devices into immersive applications in a way that is not awkward or significantly different from existing input device APIs. The de facto standard for VR software development is VRCO's CAVELib library [7], so we measure the acceptability of our API by its similarity to that of CAVELib. The design of our API mirrors that of CAVELib in two ways: the use of a configuration file to establish library settings upon initialization, and simple "call-and-return" queries of the input device, instead of a more elaborate approach (e.g. an event-driven system using callbacks).

The API for the MANTIS-based input device is implemented as a C++ object class that acts as a TCP/IP server. Upon instantiation, it establishes a thread which listens for client (PC) communication on a designated port, and interprets the packets received according to the settings in a configuration file. Since a Nymph has only three sensor ports, we anticipate that a voltage divider circuit will commonly be used to support multiple switches on a single sensor port,
so our API includes provisions for automatically interpreting switch states based on impedance-to-switch mappings specified in the configuration file. Methods are also provided for reading the 8 bit value of each sensor and the time at which the latest sensor reading was taken.
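As an illustration of how such a mapping might be applied, the sketch below assigns each switch on a voltage-divider ladder a band of the 8-bit ADC reading. The structure name and the idea of expressing the configuration-file mappings as explicit bands are our assumptions, not the API's actual representation.

#include <cstdint>
#include <vector>

// One entry of an assumed impedance-to-switch mapping: a switch is reported
// as pressed when the 8-bit ADC reading falls inside its band.
struct SwitchBand {
    uint8_t low;   // inclusive lower bound of the band
    uint8_t high;  // inclusive upper bound of the band
    int     id;    // logical switch number reported to the application
};

// Returns the id of the switch whose band contains the reading, or -1 if no
// switch is pressed. The bands would be loaded from the configuration file
// at initialization.
int decodeSwitch(uint8_t reading, const std::vector<SwitchBand>& bands) {
    for (const SwitchBand& band : bands)
        if (reading >= band.low && reading <= band.high)
            return band.id;
    return -1;
}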
5. Applications

In order to assess the feasibility of our system, we constructed two novel input devices, corresponding to each of the two timing problems discussed in Section 4.1.1. The first input device is a simple gamepad-like controller that allows the user to navigate and manipulate a virtual scene, and tests our claim that the latency of our system is acceptable in practice for an application that demands fast interaction. The second device shows how a gestural interface can be constructed that uses the relative timing of sensor events, demonstrating our initial approach to the second class of timing problems. The goal of both applications is to serve as proofs of concept for the ability to rapidly prototype novel controllers, and to provide initial evidence of the effectiveness of the input devices in practice.
5.1. Model navigation controller

For the first controller, we constructed a very simple immersive application that allows the user to navigate a virtual room and to adjust the level of lighting in the room (Figure 6). Since this project was our first attempt to use our system in practice, we focused on building a suitable input device as rapidly as possible, rather than on the novelty of that device.

The input device that we chose to construct is modeled after the control pads commonly found on video game consoles. An inexpensive plastic box was used to house the Nymph, and four buttons were arranged in a diamond pattern on the surface, allowing the user to move forward, backward, left, and right (these buttons are connected to a voltage divider circuit, as discussed in Section 4.1.3). A potentiometer was mounted on the opposite side of the box, which acts as the lighting controller. The completed input device, shown in Figure 7, was constructed entirely from off-the-shelf components in approximately four hours, at a total cost of approximately $150 (not including the cost of the PC conduit). Although we did not perform rigorous user testing with this input device, anecdotal evidence from users suggests that the latency is acceptable.
Figure 7. Input device for model navigation demo. The bottom image shows the construction of the device.
Figure 6. Screenshot of the model navigation demo.
An existing CAVELib application was used as the framework for our model navigation application. Apart from the changes that were required to display the desired 3D model, less than 20 lines of code were modified, and only 7 lines were added to support the custom input device. Figure 8 shows the related sections of code: the top section in the figure shows the code that is used to initialize the controller, while the bottom section shows the controller’s code integrated with the per-frame update code for the CAVELib application.
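Since the code in Figure 8 is not reproduced in this text, the following is only a rough sketch of the shape such an integration might take. The stand-in class, method names, and configuration file name are hypothetical placeholders rather than the actual controller API, and the CAVELib-specific calls are reduced to comments.

#include <cstdint>
#include <string>

// Hypothetical stand-in for the controller object described in Section
// 4.1.3, included only so the sketch is self-contained.
struct NavPad {
    explicit NavPad(const std::string& /*configFile*/) {}
    bool    switchPressed(int /*port*/, int /*sw*/) const { return false; }
    uint8_t sensorValue(int /*port*/) const { return 0; }
};

static NavPad* pad = nullptr;
static float   lighting = 1.0f;

void appInit() {
    // ... existing CAVELib initialization ...
    pad = new NavPad("navpad.cfg");   // added: load the controller settings
}

void appFrameUpdate() {
    // ... existing CAVELib per-frame navigation code ...
    if (pad->switchPressed(0, 0)) { /* move forward; likewise for the
                                       other three buttons */ }
    lighting = pad->sensorValue(1) / 255.0f;  // potentiometer -> light level
}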
5.2. Conductor's baton: a gestural interface

Our second input device demonstrates the utility of our system for experimentation with gestural interfaces. Instead of building a new application from scratch, we incorporated our device into an algorithmic music application that had been written for a Master's thesis at the University of Colorado [16]. The application generates, in real time, a variation on a MIDI sequence by mapping musical segments to regions of a three-dimensional chaotic attractor. A new trajectory, beginning from a different initial condition, is generated using the same chaotic system. For each new point generated, the technique efficiently finds a containing region in the original attractor and triggers the corresponding music segment. The result is that the notes of the original piece are played in a new, nonrandom order.

Inspired by work such as the virtual orchestra conductor shown in [5], we built an electronic "conductor's baton" that uses an accelerometer to detect the motion of the user's hand. A two-axis accelerometer was attached to the end of a wooden stick, with the X and Y axes connected to different sensor ports on the mobile Nymph (Figure 9). In order to calculate a tempo based on the values measured from the accelerometer, the server-side program stores the time stamp that is sent when the acceleration of the wand's tip reaches a peak. At the next peak, the application takes the difference between the old time stamp and the new time stamp, and calculates the tempo in beats per second from the interval between the peaks.
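As a minimal sketch of that calculation (the function name and state handling are ours, and we assume the peak detector supplies the packet time stamp already widened to 32 bits by the PC stage):

#include <cstdint>

static uint32_t lastPeakCs = 0;   // time stamp of the previous detected peak

// Returns the tempo, in beats per second, implied by two successive peaks
// of the baton's acceleration, or 0 if this is the first peak observed.
double tempoOnPeak(uint32_t peakCs) {
    double beatsPerSecond = 0.0;
    if (lastPeakCs != 0 && peakCs > lastPeakCs) {
        double periodSeconds = (peakCs - lastPeakCs) / 100.0;  // cs -> s
        beatsPerSecond = 1.0 / periodSeconds;
    }
    lastPeakCs = peakCs;
    return beatsPerSecond;
}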
Figure 9. Conductor’s baton.
Figure 8. Server-side source code for the model navigation demo. Lines in bold are specific to the controller’s API.
The construction of the baton took approximately 3 hours, including two hours to write the software that interprets the tempo and an additional hour to fit the accelerometer to the Nymph. The total cost was roughly $180.
6. Future directions
The work that we have discussed above is only the first step in creating an ideal technology for developing custom input devices for immersive applications. There are a number of directions that we intend to explore, including on-board localization, collaborative interfaces using multiple sensor nodes, and support for haptic feedback.

At present, input devices created with our system are limited to applications where the physical location of the device is not important, because we lack the ability to detect the position of the Nymph inside the IVE. It is possible to build a tracked device by mounting an external tracking module onto the enclosure [3], but this requires expensive tracking hardware such as the systems produced by InterSense [20]. We would like to investigate the practicality of on-board localization using ultrasound, as described in [18].

Haptic feedback has been found to be important for immersive interfaces [15]. While the MANTIS Nymph currently lacks the ability to drive any form of actuator, we are looking into the possibility of adding support for servo control.
The applications that we have developed thus far do not extend to designs that would require more than one sensor node, but there are many contexts in which such an interface would be appropriate. The MANTIS platform supports multi-hop routing and conflict avoidance amongst multiple Nymphs, but it remains to be seen how much performance will suffer when multiple nodes are in competition for the attention of the base station. Finally, we must perform usability testing of the prototyping system itself. We claim in this paper that we have reduced the complexity of building custom input devices sufficiently that hardware prototyping is accessible to software developers who have limited knowledge of electrical engineering, but we have little evidence beyond our own experiences to justify this claim. We will also need to test the performance requirements of these devices in terms of the maximum acceptable response time; as was found in the study of 2D interfaces in [9], the requirements for responsiveness of an interface vary based on the nature of the input device, and it stands to reason that the same variations may be found in 3D interfaces.
7. Conclusion

Current input device hardware for immersive 3D environments is expensive, fragile, and usually not wireless, and the variety is limited to a few general-purpose controllers such as wands and pinch gloves. Since the field is still in its infancy, it is not known whether or not these controllers are actually the best solutions for the domain, and more experimentation with novel controllers is needed.

We have presented a simple, inexpensive solution for rapid prototyping of input devices using the MANTIS wireless sensor network platform. We discussed the constraints on the overall system and the strengths and limitations of the Nymphs, and showed an effective three-stage approach to interfacing a MANTIS with the computer that drives the VR display. The demonstration input devices and VR applications confirm the feasibility of integration with immersive applications using standard cave programming libraries. Wireless sensor node-based input devices show promise for real applications, but more rigorous usability and timing experiments are needed. Enhancements such as tactile feedback and collaborative interfaces using multiple Nymphs are also under consideration for continued research.
References

[1] H. Abrach, S. Bhatti, J. Carlson, H. Dai, J. Rose, A. Sheth, B. Shucker, R. Han, "MANTIS: System Support for MultimodAl NeTworks of In-situ Sensors," 2nd ACM International Workshop on Wireless Sensor Networks and Applications (WSNA), 2003 (to appear).
[2] D. Avrahami, S. Hudson, "Forming Interactivity: A Tool for Rapid Prototyping of Physical Interactive Products," Proceedings of the conference on Designing interactive systems: processes, practices, methods, and techniques, June 2002.
[3] M. Ayers, R. Zeleznik, "The Lego interface toolkit," Proceedings of the 9th annual ACM symposium on User interface software and technology, pages 97-98, November 1996.
[4] R. Ballagas, M. Ringel, M. Stone, J. Borchers, "iStuff: A Physical User Interface Toolkit for Ubiquitous Computing Environments," Proceedings of the conference on Human factors in computing systems, April 2003.
[5] J. Borchers, W. Samminger, M. Mühlhäuser, "Conducting a realistic electronic orchestra," Proceedings of the 14th annual ACM symposium on User interface software and technology, pages 161-162, November 2001.
[6] BP Center for Visualization website, http://www.bpvizcenter.com/
[7] CAVELib Programming Reference, http://vrco.com/CAVE_USER/
[8] "Crickets: Tiny Computers for Big Ideas," http://web.media.mit.edu/~fredm/projects/cricket/
[9] J. Dabrowski, E. Munson, "Is 100 Milliseconds Too Fast?," CHI '01 extended abstracts on Human factors in computing systems, pages 317-318, March 2001.
[10] H. Gellersen, A. Schmidt, M. Beigl, "Multi-Sensor Context-Awareness in Mobile Devices and Smart Artefacts," Mobile Networks and Applications, Volume 7, Issue 5, October 2002.
[11] S. Greenberg, C. Fitchett, "Phidgets: Easy Development of Physical Interfaces through Physical Widgets," Proceedings of the 14th annual ACM symposium on User interface software and technology, November 2001.
[12] J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, K. Pister, "System Architecture Directions for Networked Sensors," Proceedings of the ninth international conference on Architectural support for programming languages and operating systems, pages 93-104, November 2000.
[13] K. Hinckley, R. Pausch, J. Goble, N. Kassell, "A survey of design issues in spatial input," Proceedings of the 7th annual ACM symposium on User interface software and technology, pages 213-222, November 1994.
[14] R. Jacob, "Human-computer interaction: input devices," ACM Computing Surveys (CSUR), pages 177-179, March 1996.
[15] R. Lindeman, J. Sibert, J. Hahn, "Towards usable VR: an empirical study of user interfaces for immersive virtual environments," Proceedings of the SIGCHI conference on Human factors in computing systems: the CHI is the limit, pages 64-71, May 1999.
[16] J. Marbach, "Real-Time Chaotic Variation of Symbol Sequences," Master's thesis, University of Colorado at Boulder, July 2003.
[17] F. Martin, B. Mikhak, B. Silverman, "MetaCricket: A designer's kit for making computational devices," IBM Systems Journal, Vol. 39, Nos. 3&4, 2000.
[18] A. Savvides, C. Han, M. Srivastava, "Dynamic Fine-Grained Localization in Ad-Hoc Networks of Sensors," Proceedings of the seventh annual international conference on Mobile computing and networking, pages 166-179, July 2001.
[19] "TINI Board," http://www.ibutton.com/TINI/hardware/index.html
[20] D. Wormell, E. Foxlin, "Advancements in 3D Interactive Devices for Virtual Environments," Proceedings of the workshop on Virtual environments, pages 47-56, May 2003.