Augmenting Reality Through The Coordinated Use of Diverse Interfaces

Chris Greenhalgh, Steve Benford, Tom Rodden, Rob Anastasi, Ian Taylor, Martin Flintham, Shahram Izadi, Paul Chandler, Boriana Koleva, Holger Schnädelbach

The Mixed Reality Laboratory, The University of Nottingham, Nottingham NG7 2RD, UK
Tel: +44 115 951 4203
{cmg, sdb, tar, rma, imt, mdf, sxi, pmc, bnk, hms}@cs.nott.ac.uk

ABSTRACT


We present seven diverse augmented reality interfaces including audio tunnels from a virtual environment to fixed and mobile phones; digital activity meters for locating hotspots of activity in a parallel virtual world; a tripod-mounted display for small groups to view virtual events; and public projections of shadows and sound from a virtual world. An analysis of how these match different design goals and real world constraints demonstrates the potential utility of each. We explore how the use of a shared underlying virtual world enables multiple interfaces such as these to be coordinated to provide a rich and coherent augmented reality experience.


Keywords

Augmented reality, mixed reality, mobile applications.

INTRODUCTION

The archetypal approach to augmented reality (AR) uses a wearable or handheld device to supplement a single user's experience of a physical environment. For example, the user may don a wearable computer with tracking and specialized IO devices (such as a see-through head-mounted display). This allows them to receive or recall additional context-relevant information superimposed on their normal experience of physical spaces and/or artifacts [1]. Alternatively the user may carry a handheld device. A typical application for this kind of system has been the production of electronic guides, where users are presented with information about their current location. This class of system ranges from museum-based systems [3] to broader town and city guides [5]. The design, construction and operation of these devices is strongly influenced by the practical problems of augmented reality, especially when intended for outdoor use [7]. For example, Azuma [2] discusses:

- displays being hard to read in sunlight;
- tracking having variable accuracy;
- portability being limited, especially as a function of power requirements.

This paper introduces seven augmented reality interfaces that respond to these – and other – practical constraints in quite different ways. The interfaces outlined in this paper are intended to be used in tandem to allow a diverse population of users – such as the inhabitants of a city – to experience events that take place within a parallel virtual world. Thus, rather than provide a single point of contact between the physical and the virtual, we wish to realize a much broader augmented reality experience.

Using these interfaces as a starting point, we identify and chart the factors that determine each interface's appropriateness for overlaying digital information on a physical environment as a function of intended use and context. We then discuss how the use of an underlying shared virtual environment enables diverse collections of such interfaces to work together in a concerted way to provide rich and coherent augmented reality experiences.

PROTOTYPE INTERFACES

We begin with seven prototype interfaces that illustrate new and diverse approaches to augmented reality. Each establishes a different relationship between digital information (in our case, a virtual world) and a physical environment. These are:

- The use of fixed and public telephones to create audio tunnels between physical and virtual worlds;
- The extension of these to mobile phones;
- The combination of a PDA, GPS device and wireless networking to create a digital activity meter, an interface for locating hotspots of activity in a parallel virtual world and displaying these on a radar display;
- A second digital activity meter that produces an audio sonification rather than a visual display;

- A portable tripod-mounted display called an augurscope through which users may view virtual activity when outdoors;
- The projection of a virtual world into public space as virtual shadows;
- A second projection of a virtual world as an ambient soundfield.

Audio tunnels using fixed or public telephones

Augmented or mixed reality does not necessarily require the development of novel devices. In fact, it can exploit devices that already exist and are already in use in the physical world as a means of augmentation. One example of this is the use of public payphones (and other fixed phones). Payphones are an established component of many urban landscapes, providing a potential bridge between physical and virtual space. The locations of public payphones can be determined in advance of an experience, and these can then be used to allow activities within the virtual world to be heard from corresponding locations within the physical world. This communication can also be two-way, with the audio information from the payphone link made available to the virtual users. The result is to create an audio tunnel between the digital and physical worlds. As an aside, if the identity of the person who answers or makes the call can be determined, then the system also has a precise, if momentary, fix on their physical location. In other words, telephones can be used as a coarse tracking system.

In our prototype, an on-line (virtual-only) user moving around a parallel virtual world can approach a virtual representation of a payphone. The system responds by phoning the corresponding payphone and establishing an audio channel to it from the corresponding part of the parallel virtual world. Figure 1 shows an avatar approaching a payphone in the virtual world. This automatically triggers a phone call to transmit the avatar's audio to the equivalent physical payphone.

Figure 1: A virtual user approaches a payphone to establish an audio tunnel.
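The following is a minimal sketch of how such a proximity trigger might work; it is illustrative only, not code from the prototype, and all names, coordinates and the trigger radius are our own assumptions:

```python
# Hypothetical sketch: proximity-triggered audio tunnel for fixed payphones.
# PAYPHONES, dial() and the trigger radius are illustrative assumptions; the
# prototype's actual MASSIVE-3 and telephony integration is not shown here.
import math

# Surveyed payphones: virtual-world coordinates -> phone number.
PAYPHONES = {
    (120.0, 45.0): "+44-115-555-0101",
    (300.0, 210.0): "+44-115-555-0102",
}

TRIGGER_RADIUS = 5.0  # metres in the virtual world (assumed threshold)

def dial(number: str) -> None:
    """Placeholder for the telephony call-out (e.g. via a PSTN gateway)."""
    print(f"Dialling {number} to open an audio tunnel...")

def on_avatar_moved(avatar_pos, active_calls) -> None:
    """Open a call when an avatar comes within range of a virtual payphone."""
    for (px, py), number in PAYPHONES.items():
        dist = math.hypot(avatar_pos[0] - px, avatar_pos[1] - py)
        if dist <= TRIGGER_RADIUS and number not in active_calls:
            dial(number)
            active_calls.add(number)

active = set()
on_avatar_moved((122.0, 47.5), active)  # close enough -> call placed
```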

Audio tunnels using mobile phones

Mobile phones are carried and used in vast numbers, especially in Europe, North America and the Pacific Rim. As with fixed phones, they are an established pre-existing technology that can be appropriated to support augmented reality, rather than a completely new device. Obviously it is the nature of mobile phones that they move within the physical world, although they tend to remain linked to one person or a small number of related people (e.g. family members or business colleagues). In our prototype system a mobile phone can be accessed from the virtual world exactly like a fixed phone, above, although the spatial correspondence between the physical and parallel virtual worlds is then lost. However, the mobile phone can be supplemented with some form of tracking technology in order to place it correctly within the virtual world. In this case, the mobile phone user (as well as the virtual user) may also trigger the creation of the audio tunnel by bumping into a virtual user.

There is ongoing work on positioning phones using only the phone network's radio signal strength, for example to support emergency services in locating callers; however, this information is not generally available (for reasons of security and privacy). The approach that we have prototyped is to use a GPS receiver and a PDA (a Palm Pilot) connected to the mobile phone, which notifies the virtual world via an SMS message when the mobile phone is physically moved. This allows the phone's virtual representation to be kept up to date. Figure 2 shows the hardware carried by the mobile user (left) and a corresponding image of their avatar in a virtual environment (right) in which an audio tunnel is active (shown by the presence of the yellow pyramid).
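The sketch below illustrates the general shape of this SMS position-update path: the PDA packs a GPS fix into a short text payload, and the virtual-world side parses it and moves the phone's avatar. The message format and all function names are assumptions, not the prototype's actual protocol:

```python
# Illustrative sketch of an SMS position-update path (format is assumed).

def encode_fix(lat: float, lon: float) -> str:
    """PDA side: pack a GPS fix into an SMS-sized payload."""
    return f"POS,{lat:.5f},{lon:.5f}"

def handle_sms(payload: str, avatars: dict, phone_id: str) -> None:
    """World side: parse the payload and update the phone's avatar position."""
    tag, lat, lon = payload.split(",")
    if tag != "POS":
        return  # ignore unrelated messages
    avatars[phone_id] = (float(lat), float(lon))

avatars = {}
handle_sms(encode_fix(52.95339, -1.18740), avatars, "phone-07")
print(avatars)  # {'phone-07': (52.95339, -1.1874)}
```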


Figure 2: Mobile phone with PDA and GPS receiver.

A digital activity meter with a radar display


There are various devices in fiction and fact that are specifically tailored to locate objects, places and activities within the physical world. For example, Geiger counters are used to locate sources of radioactivity, psycho-kinetic energy meters (PK-meters) are used by paranormal investigators to detect otherworldly presence and activity, and resistivity meters are used by archeologists to locate historical artifacts and buildings.


Inspired by these, we have created two handheld "digital activity meters". These alert the user to the presence of nearby digital activity, such as avatars or virtual objects in a parallel virtual world. They could be used to support a user who is searching for a particular AR experience (such as a virtual artifact or event) within a larger but less augmented space. These interfaces are designed to support searching by individual users. The first example combines a GPS receiver (to determine the user's physical position), a wireless network (based on WaveLAN) and a PDA (a Compaq iPAQ with colour display). It presents the user with a radar-style display, indicating the relative positions of nearby artifacts and avatars in the virtual world. Figure 3 shows the radar indicating the presence of two nearby avatars as dots in the central circle.

Figure 3: Digital activity meter with virtual radar display showing nearby avatars.
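A sketch of the core radar mapping follows: each nearby entity's offset from the user's GPS-derived position is scaled onto the on-screen radar circle. This is a simplified illustration under assumed range and screen values (and it ignores rotating the display to the user's heading), not the prototype's code:

```python
# Map virtual-world offsets onto radar pixels; range and radius are assumed.
import math

RADAR_RANGE_M = 50.0   # assumed detection radius in metres
RADAR_RADIUS_PX = 100  # radius of the on-screen radar circle in pixels

def to_radar(user, entity):
    """Return the entity's radar pixel offset, or None if out of range."""
    dx, dy = entity[0] - user[0], entity[1] - user[1]
    if math.hypot(dx, dy) > RADAR_RANGE_M:
        return None
    scale = RADAR_RADIUS_PX / RADAR_RANGE_M
    return (round(dx * scale), round(dy * scale))

user_pos = (10.0, 20.0)  # from GPS, mapped into world coordinates
for avatar in [(15.0, 25.0), (200.0, 20.0)]:
    print(to_radar(user_pos, avatar))  # (10, 10), then None (out of range)
```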

A digital activity meter with a sonic display

Our second example uses an abstract audio presentation rather than 2D graphics to give the user proximity information about multiple nearby virtual objects. Each virtual object is associated with its own audio tone. As they move around the physical environment, the user hears a mix of tones that indicates the relative proximities of the objects (each tone increases in volume and frequency as the object gets closer). Searching is typically a single element of a guide-type or general-purpose AR system (e.g. [5]). In contrast, these interfaces support searching as an activity in itself, whereby searching may be as significant as finding. This has been used to support a virtual archeology experience, in which users search for "hidden" virtual artifacts, which they "take back" to a fixed installation for detailed viewing [4]. Figure 4 shows two users in the physical world on the left who are searching for a virtual object (a fragment of a bowl) in the parallel virtual world on the right. The avatar on the right shows their current position from the GPS.

Figure 4: Locating part of a virtual bowl.
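The distance-to-tone mapping might look like the sketch below: closer objects sound both louder and higher. The ranges, base frequencies and linear curves are illustrative assumptions, not the values used in the prototype:

```python
# Distance-to-tone mapping for a sonification meter (assumed parameters).
MAX_RANGE_M = 50.0  # beyond this, an object is silent

def tone_for(distance_m, base_hz=220.0, max_hz=880.0):
    """Return (frequency_hz, volume_0_to_1) for one virtual object."""
    if distance_m >= MAX_RANGE_M:
        return base_hz, 0.0                     # out of range: silent
    closeness = 1.0 - distance_m / MAX_RANGE_M  # 1.0 when on top of it
    freq = base_hz + closeness * (max_hz - base_hz)
    return freq, closeness

for d in (5.0, 25.0, 60.0):
    print(d, tone_for(d))  # nearer objects get higher, louder tones
```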

The Augurscope – a portable, tripod-mounted display

The augurscope (figure 5) is a portable augmented reality interface for use by small groups in open (indoor or outdoor) locations. It is used to directly view the parallel virtual world, for example after particular content has been located using a digital activity meter. Its goals, detailed design, and initial application and evaluation are described in a sister paper to this one, also submitted to CHI 2002. The following is a brief summary for completeness.


Figure 5: The Augurscope in use.

The augurscope is based on a tripod-mounted laptop computer. A GPS receiver (for outdoor use) and electronic compass provide global location information. An onboard accelerometer and rotary encoder allow the virtual viewpoint to be interactively manipulated by panning and tilting the physical device on its tripod. As the scope is moved, the laptop's display changes to show the corresponding view of the parallel virtual world, allowing users to view the virtual world alongside the corresponding part of the physical world. The augurscope is a public device, designed to allow a small group of users to cluster around the view of the virtual world.
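Conceptually, each frame the sensor readings are composed into a virtual camera aim, as in this sketch. The axis conventions and function names are our assumptions; the actual sensor fusion in the prototype is not reproduced here:

```python
# Compose compass heading and tilt into a unit view vector for the camera.
import math

def view_direction(heading_deg, tilt_deg):
    """Convert heading/tilt into a view vector (x = east, y = up, z = north).
    Axis conventions are assumed for illustration."""
    h, t = math.radians(heading_deg), math.radians(tilt_deg)
    return (math.cos(t) * math.sin(h),   # east component
            math.sin(t),                 # up component
            math.cos(t) * math.cos(h))   # north component

# Each frame: read compass + accelerometer, then re-aim the virtual camera.
print(view_direction(90.0, 10.0))  # facing east, tilted slightly upwards
```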

Virtual shadows as public projections

Our next prototype interface has been inspired by the presence of shadows in the everyday world. Shadows provide indirect projections of physical objects and activity onto public surfaces, typically outdoors, in a way that is at once familiar and distorted (and potentially aesthetically pleasing). Various VR artists have previously incorporated shadows as secondary displays of activity (e.g. Char Davies' groundbreaking 1995 installation Osmose [9]).

We have recently experimented with virtual shadows: projections of a virtual world into a public space that are deliberately simplified and distorted (like a shadow) so as to convey a sense of virtual presence and activity without the need for accurate positioning or overlaid 3D graphics. The primary goal is to create an ambient or impressionistic display, particularly aimed at bystanders and larger groups or crowds, who are not typically addressed by current AR interfaces. A shadow projection can be realized as a viewpoint at a particular location within a virtual world that is then projected into a (public) place that normally corresponds to the virtual location. As users and objects in the virtual world move, the shadows projected into the physical world change accordingly. Figure 6 shows an example of projecting digital shadows of avatars onto the side of a large building. These shadows were projected over a distance of approximately 200 meters using a projector with a long-throw lens.

Unlike most of the interfaces described so far, the devices that produce shadow projections are typically fixed and embedded within the environment, rather than being mobile. However, we have also experimented with an intermediate (semi-mobile) approach, running projectors and PCs from the back of a parked van, using a generator for power. Another possibility would be to use steerable projectors and cameras as described in [10].
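One way to get the deliberately distorted, shadow-like flattening is a simple oblique projection of avatar geometry along a low "light" direction, as in this sketch. This is purely illustrative geometry under assumed values, not the prototype's renderer:

```python
# Project avatar points along a fixed light direction onto a flat plane,
# giving the elongated silhouettes of a low light source. Values are assumed.
LIGHT_DIR = (1.5, -1.0, 0.5)  # low, oblique light (x, y = up, z)

def flatten(point):
    """Slide a 3D point along LIGHT_DIR until it reaches the plane y = 0."""
    x, y, z = point
    t = -y / LIGHT_DIR[1]  # parameter where the ray meets the plane
    return (x + t * LIGHT_DIR[0], z + t * LIGHT_DIR[2])

# A crude stick-figure avatar (head and feet); the low light stretches it.
avatar_points = [(1.0, 1.8, 3.0), (1.0, 0.0, 3.0)]
print([flatten(p) for p in avatar_points])  # [(3.7, 3.9), (1.0, 3.0)]
```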

Figure 6: Virtual shadows projected onto a building.

Ambient sound fields as public projections

Like shadow projections, this form of interface uses devices embedded within the physical environment to provide some awareness of the activity within the parallel virtual world. Whereas virtual shadows use abstract projected graphics, the ambient sound field creates a (one-way or two-way) spatialised audio link between the virtual and physical worlds. In our initial prototype we created a virtual model of an indoor location (our laboratory space) and used three desktop PCs, each driving two speakers, to render the audio from three locations in the virtual world into the corresponding locations in the physical space. As the avatars of on-line users moved around the virtual laboratory, their audio activity was played out from the corresponding speakers into the physical laboratory.
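A minimal sketch of the routing idea, assuming one PC-driven speaker pair at each known spot in the room: each avatar's audio is sent to the nearest pair, with gain falling off with distance. The names, positions and rolloff are invented for illustration:

```python
# Route an avatar's audio to the nearest speaker pair (assumed layout).
import math

SPEAKER_PAIRS = {"pc-north": (0.0, 5.0), "pc-east": (5.0, 0.0), "pc-door": (-5.0, 0.0)}

def route_audio(avatar_pos, rolloff_m=10.0):
    """Pick the closest speaker pair and a distance-based gain in [0, 1]."""
    name, pos = min(SPEAKER_PAIRS.items(),
                    key=lambda kv: math.dist(kv[1], avatar_pos))
    gain = max(0.0, 1.0 - math.dist(pos, avatar_pos) / rolloff_m)
    return name, round(gain, 2)

print(route_audio((4.0, 1.0)))  # ('pc-east', 0.86)
```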

Clearly these seven prototypes constitute a diverse collection of interfaces. Table 1 (below) summarizes them in terms of their technical operation and realization (PW = Physical World, VW = Virtual World).

| | Payphone | Mobile phone | Hand-held 'radar' | Hand-held sonification | Augurscope | Virtual shadows | Ambient audio |
|---|---|---|---|---|---|---|---|
| CPU | - | Phone | PDA | PDA | Hi-spec laptop | PC(s) | PC(s) |
| Network | PSTN | GSM, SMS | WaveLAN* | WaveLAN* | Optional WaveLAN | Wired (Ethernet) | Wired (Ethernet) |
| Power | Wired/phone | Battery | Battery | Battery | Battery | Mains or generator | Mains or generator |
| Mobility | Fixed, infrastructure | Hand-held | Hand-held | Hand-held | Mobile/wheeled | Fixed (or van) | Fixed (or van) |
| Tracking | Known position | Optional GPS | GPS* | GPS* | GPS*, compass, tilt, internal turn | Known position* | Known position* |
| PW media | Mono audio | Mono audio | 2D graphics | Mono audio | 3D graphics and audio | Abstract 3D graphics | Multichannel audio |
| VW data to PW | Audio | Audio | Positions | Distances | 3D graphics and audio | Simple graphics | Audio |
| PW data to VW | (future: audio) | Position (GPS) (future: audio) | Position | Position and audio | Position, orientation, tilt, zoom | - | - |
| VW presentation | Fixed object (plus audio) | Avatar (plus audio) | Mobile avatar | Mobile avatar plus audio | Mobile avatar | Optional virtual cameras | Optional virtual mics |
| PW users | 1 | 1 | 1+ | 1+ | 1 or small group | Large group | Large group |
| Interaction | Check/listen | (PW) Check/listen, or (VW) notify | Search | Search | Explore, view | Be(come) aware | Be(come) aware |

* As implemented; other approaches are possible (e.g. GSM or GPRS for networking; indoor ultrasonic tracking).

Table 1. Technical choices embodied in the prototype interfaces.

ANALYSIS

By considering these various interfaces we can identify a broad range of factors that influence the potential utility and applicability of each. In turn, this requires a deeper exploration of the design constraints and design goals of augmented reality in the physical world. As already noted, Azuma [2] raises a number of practical constraints:

- Tracking having variable accuracy.
- Portability being limited – portability is largely determined by a tradeoff between processing capability and power requirements, which in turn dictates size. There are also other hard constraints; for example, it has only recently become possible to buy laptops with 3D graphics acceleration, and this is still not available for PDAs.
- Displays being hard to read in sunlight.

To these can be added other issues related to the physical environment of use:

- Noise, e.g. from traffic or other people, interferes with audio-based presentations.
- Weather – in our particular locations, even at the height of summer, we experienced rain on more than half of the days on which we worked outdoors. In other regions humidity and/or temperature (low or high) must also be considered.

Further constraints arise when working both indoors and outdoors:


- GPS typically only works outdoors (indoor tracking solutions exist, but are not yet widely used), so multiple tracking solutions must be used in concert.

For applications that include real-time information and/or collaboration (which is central to the development of a large-scale augmented experience) we must also consider:

- Networking – for wireless networking, as with fixed networking, there is generally an inverse relationship between distance and bandwidth. For example, WaveLAN may provide several Mbit/s of bandwidth, but only over a very limited distance of at best a couple of hundred meters, and it is also subject to interference and occlusion. Wide-area networking technologies such as SMS, GSM and GPRS offer much more limited bandwidth, and some (e.g. GPRS) are only beginning to be available commercially.

As well as these environmental and technical issues, we can also identify broader issues that become significant when you wish to place devices in the physical world to be used by the general public as part of an augmented experience.

- Use of bespoke versus commodity technologies and, related to this, the use of technologies that are already in place (e.g. owned by potential users) versus technologies that must be acquired or provided specifically to support an application or trial. Use of commodity technologies already in place allows an augmented experience to be made available to many more users than a system that requires users (or organizers) to purchase entirely new devices.
- Use of mobile versus embedded technologies, i.e. a user carrying a device that augments their own activities versus placing devices within the built or found environment that augment the activities taking place within those locations. Note that mobile approaches presume that users can be entrusted with the relevant devices (if they do not already own them), while embedded approaches presume that it is possible and permitted to embed devices within the physical environment.
- Supporting different numbers of users, ranging from an individual, through a small group, to a larger group or crowd.
- Supporting different relationships with the augmented experience, ranging from total un-involvement (and potential ignorance, e.g. a random passer-by on a city street), through various levels of engagement and commitment, to full involvement. This relationship varies between "users", but may also vary over time for a single user, as their interests and other activities change.

| | Payphone | Mobile phone | Hand-held 'radar' | Hand-held sonification | Augurscope | Virtual shadows | Ambient audio |
|---|---|---|---|---|---|---|---|
| No tracking / Low-res | Y / - | ? / Y | N / Y (rough) | N / Y (rough) | N / ? | Y / - | Y / - |
| Portable | N | Y | Y | Y | Y (limited) | N (or van) | N (or van) |
| Sunlight / Dark | Y / Y | Y / Y | ? / Y | Y / Y | ? / Y (but why?) | N / Y!! | Y / Y |
| Noise | ? | N | Y | N | ? (no audio) | Y | N |
| Rain | Y | Y | ? | Y | ? | Y | Y |
| No network / Low BW | N / PSTN only | N / Y | N (preset?) / Y* | N (preset?) / Y* | Standalone / N | N / Y* | N / N |
| In place / Commodity | Y / Y | Y / Y | N / Y | N / Y | N / N | N (part?) / N (part?) | N (part) / N (part) |
| Mobile / Embedded | Embedded | Mobile | Mobile | Mobile | Part-mobile | Embedded | Embedded |
| Crowd / Small group | N / N | N / N | N / ? | N / ? | N / Y | Y / Y | Y / Y |
| Uninvolved / Joining | ? / Y | ? (ethics?) / Y | N / Y (if given) | N / Y (if given) | Y (if "found") / Y (ditto) | Y / Y | Y / Y |

* In principle, although not demonstrated in the prototypes to date.

Table 2. Issues addressed by each interface.

Table 2 shows how the seven interfaces described in this paper relate to these design constraints and goals. The simple or ideal situation for each issue (not shown) is as follows: high-resolution tracking; non-portable; artificial lighting; quiet; dry; high-bandwidth network (wired or wireless); bespoke technology; single user; fully involved. This is the simplest context to address, and also the one that prevails under typical lab conditions! Note that none of the approaches can address all of the possible contexts of use. From this table we can identify a number of key strengths of the various interfaces described in this paper:

- Use of in-place commodity technologies, such as fixed and mobile telephones, supports involvement of potentially large numbers of users at little additional cost. These technologies could also be used to make contact with potential users who are not yet engaged with a particular augmented experience.

- Simple hand-held devices supporting abstract displays (such as the digital activity meters) can cope relatively well with reduced tracking accuracy and bandwidth. They may also support users who are in the process of becoming involved, with less cost and/or complexity than a general-purpose device or system.
- Medium-scale devices, such as the augurscope, may support small groups better than smaller hand-held devices. The augurscope also exemplifies a device that is neither fully mobile nor fully embedded within a particular location. As such it might support different patterns of use.
- Approaches relying on embedded devices, such as virtual shadows and ambient sound fields, associate technology with a place rather than a person. As such they avoid some issues of mobile technology such as tracking (at least of the devices themselves), trust (of the users carrying the devices), and the need for wireless networking. They are also particularly well suited to augmenting the experiences of people who are otherwise uninvolved: simply passing through a particular (presumably public) space results in an augmented experience.

In summary, each of our prototype interfaces fills a particular niche in delivering augmented reality experiences.


COORDINATING MULTIPLE INTERFACES

However, it is not through these individual interfaces that the real advantage of this diversity will be realized. Rather, like the pieces of a jigsaw puzzle, they need to be fitted together to produce a larger picture. This section therefore explores how our interfaces can operate together in a coordinated way to deliver rich and coherent augmented reality experiences. Each device can be considered to create a "window" (or other more abstract channel) between the physical and the parallel virtual world. However, each interface does this using different media and different kinds of devices, for different target users, and in different contexts of use. There are several reasons why we might wish many or all of the above interfaces (and multiple instances of each) to operate in a coordinated manner:

- As already argued, each interface has its own unique capabilities and is best fitted to particular contexts of use. Consequently a user may want to move between different interfaces according to their current activity, interest, etc. To do this, the interfaces must be appropriately coordinated. For example, if someone finds a virtual object or event using a digital activity meter then they may want to move to an augurscope to find out more about it.
- Similarly, a user present in a space augmented with virtual shadows or ambient audio may expect these to be related to the other augmented experiences that are being accessed within the same physical space using personal mobile devices.
- Some of the interfaces described are non-mobile (embedded), and users encounter them at different stages of their activities. Consequently the user – and the activities that they are engaged in – may be mobile even though the individual devices supporting that activity are fixed. In this case the various embedded devices must be coordinated with one another in order to provide a coherent experience for the user(s) moving between them.

We are tackling this requirement for interface coordination in two ways. First, each interface augments the physical world with information from a common parallel virtual world that is conceptually overlaid onto the physical world, as shown in figure 7.

Figure 7: Augmenting the Physical World with data from a parallel Virtual World.

The interfaces described in this paper all use MASSIVE-3 [6], a collaborative virtual environment system, to realize this parallel virtual world. Because MASSIVE-3 is networked, each device (provided it has adequate network connectivity) can connect to the same virtual world, and therefore reflect the same virtual activities on a moment-by-moment basis. One way to look at this is to consider the MASSIVE-3 virtual world as a common content-delivery channel which, if used consistently, will yield a coordinated experience across all of the interfaces. MASSIVE-3 is tailored to supporting mobile avatars, virtual objects, and real-time audio and video. Consequently these are the natural forms of content for augmented experiences generated using it. MASSIVE-3 also supports "temporal links", which allow virtual activity to be recorded and replayed in flexible ways. This mechanism can be used to create pre-recorded material (scripted, generated or improvised) for repeated playback in the virtual world, and hence in the AR experience.

Note that the MASSIVE-3 virtual world can also be used as a coordination and management facility. For example, a person with privileged access to the virtual world could use it to monitor physical and virtual activity (in so far as it is represented within the virtual world). They could then intervene remotely – through the virtual world – to tailor the augmented experience. This builds on studies of previous mixed reality experiences such as Desert Rain [8], which revealed how performers and crew orchestrate participants' experiences in such a way as to engage them with the content and then to maintain that engagement throughout, even in the face of various difficulties and failures.
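The coordination principle – one shared world state, many subscribed interfaces – can be illustrated with the toy sketch below. This is a conceptual model only; it is not the MASSIVE-3 API, and all class and function names are invented:

```python
# Conceptual sketch of coordination through a shared world: a single state
# store notifies every subscribed interface of each change, so one avatar
# movement is reflected simultaneously on all displays. Names are invented.

class SharedWorld:
    def __init__(self):
        self.positions = {}    # entity id -> (x, y)
        self.subscribers = []  # callables invoked on every change

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def move(self, entity, pos):
        self.positions[entity] = pos
        for notify in self.subscribers:
            notify(entity, pos)

world = SharedWorld()
world.subscribe(lambda e, p: print(f"[radar]      blip for {e} at {p}"))
world.subscribe(lambda e, p: print(f"[augurscope] redraw view, {e} moved"))
world.subscribe(lambda e, p: print(f"[shadows]    re-project silhouette of {e}"))
world.move("avatar-1", (12.0, 7.0))  # one update, three coordinated displays
```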

The second way in which we support coordination of multiple interfaces is through the consistent use of EQUIP, a new platform for integrating various software components, IO devices, and wireless and mobile interfaces. For example, all of the interfaces described in this paper use EQUIP as an intermediate layer between their component devices (e.g. GPS receiver, PDA, accelerometer) and MASSIVE-3. EQUIP allows the various elements of the software system to be easily re-used and re-configured. This allows for fine-grained coordination and customization of the interfaces that would not be possible within the MASSIVE-3 virtual world. For example, this might be used to allow a future version of a digital activity meter to "dock" with an augurscope, causing both to reconfigure themselves into a compound interface.

SUMMARY

This paper has introduced a set of devices which provide different interfaces linking a virtual world with the physical world. We have suggested a set of properties that allowed us, as designers, to reason about the way in which these different devices can be used to realize an outdoor augmented experience for the general public. Two points are particularly worth emphasizing.

First, delivering AR in the 'real world' involves meeting many challenging and interdependent design goals and constraints. There is currently no single technology that is able to do this, and so we propose broadening the focus of AR to make use of a far more diverse set of interfaces, each of which is appropriate to a particular use and environment. We have introduced seven prototype interfaces to illustrate some of the possible ways in which we might diversify the augmentation of the physical environment.

Second, these devices have the potential to deliver much richer AR experiences if they are used in concert. This can be achieved by having them access a shared underlying virtual world that integrates audio, graphics, video and other kinds of digital information into a common framework, and in which the presence and locations of multiple AR interfaces can be represented to support content creation and orchestration.

We are currently developing a large-scale citywide AR experience to be staged in 2002. Our digital world is likely to be a representation of the city, but altered in various ways, for example shifted temporally (into the apparent future or past) or distorted spatially (to meet the physical city at various key locations). Participants on the streets of the city will see and hear this environment (and the avatars and objects within it) via interfaces such as those described here. The avatars might be played by actors and by other on-line participants, who would in turn be able to see and hear the participants in the physical city (at least at key locations and times). This project is still in its conceptual stages, although some of the interfaces presented are early outcomes from planning workshops. We expect it to provide a driving application for further development of these interfaces and approaches.

REFERENCES

1. Azuma, R. T., "A Survey of Augmented Reality", Presence: Teleoperators and Virtual Environments, 6(4): 355-385, Aug. 1997.
2. Azuma, R., "The Challenge of Making Augmented Reality Work Outdoors", in Mixed Reality: Merging Real and Virtual Worlds (Yuichi Ohta and Hideyuki Tamura, eds.), Springer-Verlag, 1999.
3. Benelli, G., Bianchi, A., Marti, P., Not, E. and Sennati, D., "HIPS: Hyper-Interaction within Physical Space", Proc. IEEE ICMCS '99, Florence, June 1999.
4. Benford, S., Bowers, J., et al., "Unearthing virtual history: using diverse interfaces to reveal hidden virtual worlds", Proc. Ubicomp 2001, Atlanta, 2001.
5. Cheverst, K., Davies, N., Mitchell, K., Friday, A. and Efstratiou, C., "Developing a Context-Aware Electronic Tourist Guide: Some Issues and Experiences", Proc. CHI 2000, 17-24, The Hague, Netherlands, 2000.
6. Greenhalgh, C., Purbrick, J. and Snowdon, D., "Inside MASSIVE-3: Flexible Support for Data Consistency and World Structuring", Proc. Third International Conference on Collaborative Virtual Environments (CVE 2000), San Francisco, Sept. 2000, pp. 119-127, ACM, New York.
7. Höllerer, T., Feiner, S., Terauchi, T., Rashid, G. and Hallaway, D., "Exploring MARS: Developing Indoor and Outdoor User Interfaces to a Mobile Augmented Reality System", Computers and Graphics, 23(6), Elsevier, Dec. 1999, pp. 779-785.
8. Koleva, B., Taylor, I., Benford, S., Row-Farr, J., Adams, M., et al., "Orchestrating a Mixed Reality Performance", Proc. CHI 2001, Seattle, April 2001.
9. http://www.immersence.com/ (verified Sept. 2001).
10. Pinhanez, C., "Using a Steerable Projector and Camera to Transform Surfaces into Interactive Displays", Proc. CHI 2001 Extended Abstracts, 369-370, April 2001.
