REAL-WORLD DEPLOYMENTS

Public Ubiquitous Computing Systems: Lessons from the e-Campus Display Deployments

Lessons learned from building and deploying three experimental public display systems have general applicability to many types of public ubicomp deployments.

Oliver Storz, Adrian Friday, Nigel Davies, Joe Finney, Corina Sas, and Jennifer G. Sheridan, Lancaster University

Lancaster University's e-Campus project is exploring the creation of large-scale networked displays as part of a public, interactive pervasive computing environment. For the project, we built and deployed three experimental display systems that vary in technology, location, scale, and user community, and they've given us a rich set of experiences. We've summarized 13 lessons learned from this experience. The lessons certainly apply to researchers planning similar deployments. We also believe they will generalize to other public ubicomp installations.

Deployments

The e-Campus deployments consisted of two technology probes and one permanent interactive display installation.

WMCSA 2004 conference signage

Our first deployment was a digital signage solution at the sixth IEEE Workshop on Mobile Computing Systems and Applications (http://wmcsa2004.lancs.ac.uk). The WMCSA system consisted of four displays, one stationed outside each of four entrances to the main auditorium and demo room.

The system featured rolling displays of information about ongoing conference activities, tailored to each display's location and the time of day. The information listed talks being presented in adjacent rooms and activities in the wider locale, along with navigation symbols directing delegates to refreshments at appropriate times (see figure 1).

With the WMCSA deployment, we explored the key issue of how to simplify injecting content into the system and mapping the content to displays. We did this by exploiting a separation of concerns: authors could create content items (images, Web pages, RSS feeds, and videos) and use a constraint-based scheduler to request that these be mapped dynamically to the network of displays. This approach reduced programming the display of WMCSA content to a set of scheduling requests. A scheduler associated with each display observed the requests and attempted to construct a timeline for the display that best matched the requested set of constraints.
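To make this separation of concerns concrete, the following Python sketch shows one possible shape for a scheduling request and a per-display timeline builder. The field names and the simple priority-then-round-robin policy are our illustration of the idea, not the actual e-Campus interfaces.

```python
# Illustrative sketch only: a constraint-based scheduling request and a per-display
# timeline builder, loosely modeled on the description above.
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class SchedulingRequest:
    content: str        # URL or path of an image, Web page, RSS feed, or video
    displays: set[str]  # displays on which the content may appear
    not_before: time    # earliest time of day the content is relevant
    not_after: time     # latest time of day the content is relevant
    duration_s: int = 30
    priority: int = 0   # higher wins when slots are contended

def build_timeline(display_id: str, requests: list[SchedulingRequest],
                   now: datetime, slots: int = 10) -> list[SchedulingRequest]:
    """Construct a rolling timeline for one display from the requests matching it now."""
    eligible = [r for r in requests
                if display_id in r.displays and r.not_before <= now.time() <= r.not_after]
    eligible.sort(key=lambda r: r.priority, reverse=True)
    timeline: list[SchedulingRequest] = []
    while eligible and len(timeline) < slots:
        for request in eligible:  # cycle through eligible items in priority order
            timeline.append(request)
            if len(timeline) == slots:
                break
    return timeline
```

A request for the demo-room display, for instance, might constrain a listing of morning talks so that it's eligible only before lunchtime.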


Figure 1. A rolling display for the IEEE Workshop on Mobile Computing Systems and Applications.

Figure 2. The gallery space during setup of the Brewery Arts Centre exhibition.

Brewery Arts Centre exhibition

The second installation took place at the Brewery Arts Centre in Kendal, Cumbria, as part of their celebrations of the 60th anniversary of Victory in Europe Day. Specifically, the installation was one element of an interactive exhibition of local wartime memorabilia designed to raise awareness of life in the region during World War II. It consisted of four main components: a set of three large projected public displays (see figure 2), a video diary booth, a Web-based diary, and "the Kirlian Table," an interactive art exhibit created by BigDog Interactive (www.bigdoginteractive.co.uk). The public displays showed news footage and radio broadcasts evocative of the era, interspersed with images captured of objects placed on the Kirlian Table and video diary entries. The video diary application let visitors record their own war-related memories, which they could then access through a local, Web-based content management system. Additionally, if the contributors gave their consent, the system scheduled and displayed their diary entries on the projected public displays.

Prior to deployment, we developed and tested individual parts of the Brewery system in one of our labs. Installation, integration, and final testing of the components took place in the exhibition space over a 48-hour period just prior to the official opening night, a highly visible event.

The system remained active for the entire 14-day exhibition, attended by more than 1,700 visitors. We provided questionnaires and encouraged visitors to leave feedback about the system. We also collected information through observations. Results suggest that visitors generally found the exhibition informative, innovative, interesting, and appealing.

The Underpass

We installed the third deployment in a campus underground bus station called the Underpass. The installation provided a mixture of information and interactive content to people waiting for buses. In contrast to our other technology probes, this installation was intended to be a long-term deployment, lasting for at least several months and possibly a few years. We completely rewrote our software to reflect our experiences from the first two deployments.

To fit the space's physical dimensions, we deployed three large-scale projected displays, aligning them side by side. Each display could project content independently or in wide-screen combinations of two or three displays. From the start, we aimed to employ a mixture of content, including artistic material, textual information, and videos. The installation opened with an interactive video and sensor installation, "Metamorphosis," commissioned by the University. The installation consisted of three side-by-side video displays (see figure 3), controlled by a Max/MSP script (www.cycling74.com/products/maxmsp).

The installation was essentially self-contained and distributed across four dedicated Mac Mini machines. Three machines rendered the videos; the fourth controlled the video playback according to events triggered by passing traffic and reported through external sensors.

To support content besides Metamorphosis, we deployed a PC with a multiheaded graphics card that could render either different content on each head or content that spanned two or more heads. An audiovisual matrix switch (www.sierravideo.com) and an embedded AMX controller (www.amx.com) let us switch between content rendered on the PC and content rendered on the dedicated Macs.

We installed most of the hardware in a rack in a garage in the Underpass. For technical as well as health and safety considerations (we had to make sure not to blind the drivers in passing traffic), we mounted the projectors on the tunnel ceiling above the road, with special lenses fitted to correct for the oblique projection angle. The system deployment therefore required a complete closure of the Underpass for several days.
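As a rough sketch of the routing logic involved (our illustration, not the code that runs on the AMX controller or the Sierra switch), the following Python fragment models the decision of which source feeds which projector. The send_route callback stands in for whatever protocol a real switch speaks.

```python
# Hypothetical model of the Underpass routing logic: a matrix switch maps content
# sources (PC heads or Mac Minis) to projector outputs.
from typing import Callable

class MatrixSwitchController:
    def __init__(self, send_route: Callable[[str, str], None]):
        self.send_route = send_route        # stand-in for the real switch protocol
        self.routes: dict[str, str] = {}    # projector -> current source

    def route(self, projector: str, source: str) -> None:
        """Route one projector output to a source."""
        self.send_route(source, projector)
        self.routes[projector] = source

    def route_all(self, projectors: list[str], source_for: Callable[[str], str]) -> None:
        """Switch a group of projectors in one pass, e.g. from PC-rendered e-Campus
        content to the dedicated Macs running Metamorphosis."""
        for projector in projectors:
            self.route(projector, source_for(projector))

# Example: hand the three Underpass projectors back to the Metamorphosis Macs.
switch = MatrixSwitchController(send_route=lambda src, out: print(f"route {src} -> {out}"))
switch.route_all(["proj-1", "proj-2", "proj-3"], lambda p: f"mac-{p[-1]}")
```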

Lessons in deploying ubicomp in public spaces

Table 1 summarizes the lessons learned with these deployments and categorizes them according to five different aspects of our experience: technology and deployment, monitoring and management, content creation and management, orchestrating ubiquitous computing experiences, and working in public spaces. The "Related Work on Display Deployments" sidebar describes related work.


Figure 3. The opening of Metamorphosis running on the Underpass displays. The inset shows members of BigDog Interactive monitoring the system console.

Lesson 1: Deployments are costly

Never underestimate the effort involved in creating real system deployments.

Over the years, our group has amassed considerable experience in deploying mobile and ubiquitous computing applications, for example in the GUIDE project.1 However, we were still surprised by the sheer volume of work involved in creating the deployments. For example, the Brewery deployment required a team of almost a dozen people to create the system and physically install it. After the installation, staff were needed on site more or less permanently during the exhibition. The Underpass deployment required many months of nontechnical activities such as liaising with local transport companies and university bodies. Even on the technical front, the audiovisual installation's complexity surprised us. The hardware occupies a full-height rack and includes specialized components.

For example, the AMX controller has its own programming language and development tools to master.

Researchers planning public deployments should carefully consider their project's resource implications. Deployments tend to be extremely resource intensive, and research results such as experiences and reflections tend to return over the medium rather than the short term. In our projects, we've tried to address this issue by carefully reviewing the arguments for and against a particular deployment and whether the same results might be achieved using a less resource-intensive approach. Once a decision to deploy has been made, we recommend budgeting for contingencies.

Lesson 2: Environmental challenges can be significant

Never underestimate the impact of environmental factors on a deployment.

Both the Brewery and the Underpass installations posed significant technical challenges. At the Brewery, these related principally to the absence of adequate high-speed network access and the lack of technical exhibition staff. However, other environmental factors also affected our deployment. For example, the Brewery tests its fire alarms weekly, during which time all power is shut off to the room in which the exhibition was taking place.

In the Underpass, the physical environment created enormous difficulties for our work. The Underpass is a cold, dark, damp, public space that's heavily polluted with diesel fumes. The projectors had to be housed in specially designed cases that could withstand the elements and physical damage while simultaneously providing adequate cooling for the projectors and filtration of the diesel fumes. To give some idea of the scale of these problems, our first attempt at an install ran for less than one week before the projector filters were so clogged with diesel that the projectors themselves overheated and shut down. This was despite custom-designed housings with filters that were supposed to remove the diesel particles.

It's critical to design and develop ubicomp deployments with the target deployment environment in mind. In the past, we've successfully addressed this issue by rapidly prototyping technology probes that can be placed in the intended environment to gain early experience.

Lesson 3: After deployment comes maintenance

Deploy for maintainability and change.

While the need for maintainability may sound obvious, it's worth providing a concrete example of the trade-offs involved. The projectors in our Underpass installation are located above a road, and maintaining them requires closing the road. During university terms, this is practically impossible, so any component failure incurs on average a five-week downtime (terms are 10 weeks long, so in the worst case the system could be out of commission for 10 weeks). With hindsight, we should have mounted the projectors on a moving platform that we could mechanically pull clear of the road. At the time, we decided the extra expense was unjustified, but this was probably bad judgment on our part.

Issues affecting maintainability will differ for each system. However, for public deployments, it's crucial to ensure adequate physical access to, and control of, the space in which the system is deployed. For example, it's often important to be able to test systems without members of the public being present. Creating duplicate deployments in the lab is obviously good practice, but it's tempting to create a single deployment in the lab and then move it into the field. We recommend keeping a duplicate in the lab to ease testing of proposed maintenance procedures. We believe that in most cases the extra costs will be more than justified in terms of ease of testing and maintenance.

Lesson 4: Follow the rules

Anticipate and plan for regulatory compliance issues.

Numerous regulations governed our deployments. In some cases, the requirements were clear, such as safety standards for electrical installations; in other cases, they weren't. Providing access for disabled people is one example: avoiding implicit discrimination in the displays and information takes great care. Complying with health and safety regulations when carrying out system maintenance is another: the time it adds to completing even apparently simple tasks can be significant. We found that involving the local building or site manager of the intended deployment space early in the project helped ensure that we followed the correct procedures. In the Underpass, for example, we worked closely with a single university-estates representative, who was responsible for the target deployment spaces and could approve installations.

TABLE 1. Categorization and summary of lessons learned.

Technology and deployment
1. Never underestimate the effort involved in creating real system deployments.
2. Never underestimate the impact of environmental factors on a deployment.
3. Deploy for maintainability and change.
4. Anticipate and plan for regulatory compliance issues.

Monitoring and management
5. Ensure that it's possible to remotely monitor the system's output as the user perceives it.
6. Provide tools and abstractions that let individuals reason about the system's internal state.

Content creation and management
7. Never underestimate the importance of content.
8. Set aside adequate resources for content creation.
9. Managing content for pervasive computing is a major task for which existing tools are poorly suited.

Orchestrating ubiquitous computing experiences
10. Ensure that public deployments can support orchestrated performances.
11. Ensure that users don't experience partial or inconsistent changes in system state.

Working in public spaces
12. Managing user expectations is crucial in public ubicomp deployments.
13. Prepare yourself, your team, and your work for public scrutiny.

Lesson 5: See what the public sees

Ensure that it's possible to remotely monitor the system's output as the user perceives it.

During the WMCSA deployment, members of our team were also workshop delegates. Spotting and rectifying display system failures was therefore relatively easy. The Brewery Arts Centre deployment was scheduled to run for two weeks, so we decided to try monitoring the system remotely by periodically checking its activities on the event channel.

We also planned to monitor logs from individual components, which were accessible by logging into the systems from a remote location. We quickly learned, however, that the viewing public might not see the system as functional even if all the distributed components appeared to be perfectly healthy in their logs. For example, the information we obtained didn't include any clues about the three projectors' health. We received several phone calls during the first few days of deployment about malfunctions that turned out to result from a projector overheating or a bulb burning out. In the end, we reverted to the monitoring approach we had used for WMCSA: putting people on site to visually monitor the system's health by looking at the display output.

In our more recent deployments, we've been careful to provide adequate monitoring facilities, such as cameras pointed at the displays. We recommend that public research deployments include such facilities.
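A minimal sketch of the kind of camera-based watchdog we have in mind appears below, assuming an OpenCV-accessible camera (the cv2 and numpy packages); the thresholds, camera index, and check interval are illustrative rather than values we have validated.

```python
# Hypothetical watchdog: periodically grab frames from a camera pointed at a display
# and flag output that looks black (projector off) or frozen (renderer hung).
import time
import cv2
import numpy as np

BLACK_THRESHOLD = 10.0    # mean pixel intensity below this looks like a blank display
FROZEN_THRESHOLD = 2.0    # mean frame-to-frame difference below this looks frozen
CHECK_INTERVAL_S = 30

def check_display(camera_index=0, previous=None):
    """Return (status, frame) where status is 'ok', 'black', 'frozen', or 'no-camera'."""
    capture = cv2.VideoCapture(camera_index)
    ok, frame = capture.read()
    capture.release()
    if not ok:
        return "no-camera", previous
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if gray.mean() < BLACK_THRESHOLD:
        return "black", gray
    if previous is not None and np.abs(gray.astype(int) - previous.astype(int)).mean() < FROZEN_THRESHOLD:
        return "frozen", gray
    return "ok", gray

if __name__ == "__main__":
    last_frame = None
    while True:
        status, last_frame = check_display(previous=last_frame)
        if status != "ok":
            print(f"display check failed: {status}")  # in practice, raise an alert
        time.sleep(CHECK_INTERVAL_S)
```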


Lesson 6: Build white boxes, not black

Provide tools and abstractions that let individuals reason about the system's internal state.

Remote monitoring of the deployment as the user perceives it is important, but equally important is the capability to monitor and reason about the system's internal state. For example, early in the Brewery deployment we encountered difficulty determining each display's content schedule. Watching the projectors didn't help because individual pieces of content would often be displayed more than once during each schedule cycle. Inspecting the system's output on the console and the event channel didn't help either, as it provided no data about the currently displayed content's position relative to other content. Complicating matters even more, the schedule contained periods during which no content would be displayed, giving the projectors time to cool down.

In e-Campus, coordination through a centralized event channel let us observe state changes as they occurred. However, inferring component state from these exchanges is often difficult. To let us inspect component states even when no changes are occurring, we updated the infrastructure design so that each component generates periodic status messages. We employed these messages to build visual tools that inform us about system state.

In summary, it's critical to design public ubicomp deployments such that the researchers who use them can easily build tools and models for remote monitoring and inspection of system state.
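The following sketch captures the pattern, though not our implementation: components push periodic status messages onto a shared event channel, and a monitor flags any component whose heartbeat goes stale. For illustration the event channel is an in-process queue; in a real deployment it would be the distributed channel itself.

```python
# Illustrative heartbeat pattern: periodic status reports plus a staleness monitor.
import threading
import time
import queue

event_channel = queue.Queue()   # stand-in for the distributed event channel
STATUS_PERIOD_S = 5
STALE_AFTER_S = 15

def publish_status(component_id, get_state):
    """Periodically report a component's internal state, even when nothing changes."""
    while True:
        event_channel.put({"component": component_id,
                           "state": get_state(),
                           "timestamp": time.time()})
        time.sleep(STATUS_PERIOD_S)

def monitor():
    """Track the latest status per component and warn when heartbeats stop arriving."""
    last_seen = {}
    while True:
        try:
            msg = event_channel.get(timeout=1)
            last_seen[msg["component"]] = msg
        except queue.Empty:
            pass
        now = time.time()
        for component, msg in last_seen.items():
            if now - msg["timestamp"] > STALE_AFTER_S:
                print(f"{component} has gone quiet; last state was {msg['state']}")

if __name__ == "__main__":
    # Example: a display component reporting which (made-up) content item it is showing.
    threading.Thread(target=publish_status,
                     args=("display-1", lambda: {"showing": "item-42"}),
                     daemon=True).start()
    monitor()
```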

Lesson 7: Content is king

Never underestimate the importance of content.

Systems such as GUIDE,1 Can You See Me Now,2 and Uncle Roy All Around You3 have clearly demonstrated the user benefits that accrue from using high-quality content in ubicomp applications. For most users of our e-Campus deployments, the content is the system. When we didn't have content and turned off the displays, people assumed the system was broken. Similarly, poor content reflected badly on the system as a whole.

Considering content too late in the project cycle leads to problems because a mismatch occurs between the content support requirements and the facilities that research prototypes tend to offer. For example, for installations such as the Underpass, Max/MSP is the natural choice for many content creators, but early versions of the e-Campus hardware-software platform couldn't support it. We made content the center of our attention in the Underpass deployment. Indeed, we crafted substantial parts of the system specifically to meet the requirements that content providers put forward in the project's early stages. In general, plans for public ubicomp deployments should reflect the critical importance of content.

Lesson 8: Content is expensive

Set aside adequate resources for content creation.

Resources encompass both money and appropriately skilled staff. Generating compelling content is a nontrivial task, and in our experience, computer scientists typically lack the necessary expertise. This isn't simply a matter of artistic talent: producing content often requires specialized training and knowledge of tools and processes.

Furthermore, while many students and staff can justify working on systems infrastructure or user studies as part of their research, finding time to work on content generation, which is often (incorrectly) regarded as a luxury, is more problematic.

We've used a wide range of techniques to develop content in our deployments. These include employing students and commissioning content from professional artists. However, even these techniques incur significant management overheads. Most recently, we've begun to establish long-term relationships with experts in curating mixed-media content, recognizing the specialist skills and domain knowledge required for this task.

Lesson 9: Manage your assets

Managing content for pervasive computing is a major task for which existing tools are poorly suited.

Although we had made provision for the difficulties of content creation in our deployments, we hadn't anticipated the difficulties of managing the content once it was created. For example, in the Brewery deployment, we ended up with many copies and versions of our video diary entries. This reflected the need to convert the media into different encodings for different parts of the system. We also needed to transfer the material off-site for previewing, approval, and archiving.

Many of our content management problems stemmed from the need to manage content-associated workflows, such as which content needed archiving and which needed approval before being scheduled for display. Projects must document such workflows and have appropriate tool support available.

Our experiences have let us identify a significant mismatch between existing content management system capabilities and the CMS requirements of pervasive computing environments. We haven't yet found a CMS suitable for ubicomp deployments, although we've examined systems from both the broadcast media and Web communities. We have therefore begun a new initiative to develop an appropriate tool set.
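The sketch below illustrates the kind of workflow tracking we have in mind, with stages and transitions of our own invention rather than those of any existing CMS.

```python
# Hypothetical content workflow: each item moves through approval, encoding, and
# archiving stages before it may be scheduled for display.
from enum import Enum, auto

class Stage(Enum):
    CAPTURED = auto()
    AWAITING_APPROVAL = auto()   # e.g., a video diary entry pending contributor consent
    APPROVED = auto()
    ENCODED = auto()             # converted into the encodings each renderer needs
    SCHEDULABLE = auto()
    ARCHIVED = auto()

ALLOWED = {
    Stage.CAPTURED: {Stage.AWAITING_APPROVAL, Stage.ARCHIVED},
    Stage.AWAITING_APPROVAL: {Stage.APPROVED, Stage.ARCHIVED},
    Stage.APPROVED: {Stage.ENCODED},
    Stage.ENCODED: {Stage.SCHEDULABLE},
    Stage.SCHEDULABLE: {Stage.ARCHIVED},
    Stage.ARCHIVED: set(),
}

class ContentItem:
    def __init__(self, name: str):
        self.name = name
        self.stage = Stage.CAPTURED
        self.encodings: dict[str, str] = {}   # target (e.g., "projector") -> file path

    def advance(self, new_stage: Stage) -> None:
        """Move to a new stage only if the workflow permits it."""
        if new_stage not in ALLOWED[self.stage]:
            raise ValueError(f"{self.name}: cannot go from {self.stage.name} to {new_stage.name}")
        self.stage = new_stage

# Example: a diary entry must be approved before it can be encoded and scheduled.
item = ContentItem("diary-entry")
item.advance(Stage.AWAITING_APPROVAL)
item.advance(Stage.APPROVED)
```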

Related Work on Display Deployments

The largest public display deployments to date are in the commercial sector. These systems provide information and advertising specifically tailored to audiences in spaces such as airports, train stations, and shopping centers. Such deployments are usually supported by commercial signage software and hardware such as Sony's Ziris system. Around 300 Ziris-based displays have recently been deployed to a small number of a UK supermarket chain's stores as part of a six-month system trial.

A growing number of displays are also being deployed for entertainment. For example, the BBC's "Big Screens" (www.bbc.co.uk/bigscreens) are 25-square-meter, daylight-viewable screens installed directly at central locations in six UK cities. The screens provide a rich set of media that's partly adapted to, and contributed by, the local community. While the BBC has always encouraged content contributions to the big screens (for example, from visual artists), experiments are under way to let the general public interact with the screens and provide content. At this time, people can send messages to the screens using their mobile phones. They can also participate in interactive collaborative games based on crowd movement in front of the screens.

Finally, many experimental deployments of public display systems investigate specific research questions.1–8 Most deployments in this context tend to be small scale and short lived, but some, notably those reported by Elizabeth Churchill4 and by Dan Fitton9 and their colleagues, have been subjected to larger-scale longitudinal studies.

REFERENCES

1. D.M. Russell and R. Gossweiler, "On the Design of Personal & Communal Large Information Scale Appliances," Proc. 3rd Int'l Conf. Ubiquitous Computing (UbiComp 01), Springer, 2001, pp. 354–361.
2. S. Izadi et al., "Dynamo: A Public Interactive Surface Supporting the Cooperative Sharing and Exchange of Media," Proc. 16th Ann. ACM Symp. User Interface Software and Technology (UIST 03), ACM Press, 2003, pp. 159–168.
3. B. Johanson, A. Fox, and T. Winograd, "The Interactive Workspaces Project: Experiences with Ubiquitous Computing Rooms," IEEE Pervasive Computing, vol. 1, no. 2, 2002, pp. 159–168.
4. E.F. Churchill et al., "Sharing Multimedia Content with Interactive Public Displays: A Case Study," Proc. Conf. Designing Interactive Systems (DIS 04), ACM Press, 2004, pp. 7–16.
5. K. O'Hara, M. Perry, and S. Lewis, "Social Coordination Around a Situated Display Appliance," Proc. SIGCHI Conf. Human Factors in Computing Systems (CHI 03), ACM Press, 2003, pp. 65–72.
6. J.F. McCarthy, T.J. Costa, and E.S. Liongosari, "UniCast, OutCast & GroupCast: Three Steps Toward Ubiquitous, Peripheral Displays," Proc. 3rd Int'l Conf. Ubiquitous Computing (UbiComp 01), Springer, 2001, pp. 332–345.
7. S. Greenberg and M. Rounding, "The Notification Collage: Posting Information to Public and Personal Displays," Proc. SIGCHI Conf. Human Factors in Computing Systems (CHI 01), ACM Press, 2001, pp. 514–521.
8. M. Wichary et al., "Vista: Interactive Coffee-Corner Display," Proc. CHI Extended Abstracts on Human Factors in Computing Systems (CHI 05), ACM Press, 2005, pp. 1062–1077.
9. D. Fitton et al., "Rapid Prototyping and User-Centered Design of Interactive Display-Based Systems," IEEE Pervasive Computing, vol. 4, no. 4, 2005, pp. 58–66.

Lesson 10: Define the user experience

Ensure that public deployments can support orchestrated performances.

As computer scientists motivated by the ubicomp vision, we've often found ourselves tempted to distribute system intelligence to as many low-level components as possible. Our system architectures therefore typically appear as collections of fairly autonomous entities with no obvious central point of control. In the e-Campus project, these entities are displays and associated components.


However, our experiences suggest that any system architecture must be able to support carefully coordinated (orchestrated) performances. The performances define the user experience over a given period. Supporting them was a key requirement of both the Brewery and Underpass deployments.

We identified a clear need to be able to develop and test performances offline, that is, away from the physical infrastructure and, crucially, outside real time. This lets developers step through or fast-forward performances to tweak the user's experience. The Brewery lacked such facilities, so we couldn't test and refine a day's worth of content in advance. We added these features to the system deployed in the Underpass.4

In general, we recommend making sure it's possible to precisely define user experiences and test them before deploying public installations.
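The following fragment sketches the underlying idea, under assumptions of our own rather than as the e-Campus scheduler itself: drive the performance from an injectable clock so that a day's schedule can be stepped through or fast-forwarded at a desk, without the physical displays or real time.

```python
# Sketch of offline performance testing driven by a simulated clock.
from datetime import datetime, timedelta

class SimulatedClock:
    def __init__(self, start: datetime):
        self.now = start

    def advance(self, seconds: int) -> datetime:
        self.now += timedelta(seconds=seconds)
        return self.now

def run_performance(schedule, clock, until: datetime, step_s: int = 60, show=print):
    """Replay a schedule against a clock. 'schedule' maps a datetime to the content
    that should be visible; 'show' is the renderer (here just print, for desk-checking)."""
    while clock.now < until:
        show(clock.now.strftime("%H:%M"), schedule(clock.now))
        clock.advance(step_s)

# Fast-forward an evening's worth of (made-up) content in well under a second.
opening = datetime(2006, 3, 1, 18, 0)
run_performance(lambda t: "Metamorphosis" if t.hour >= 19 else "welcome loop",
                SimulatedClock(opening), until=opening + timedelta(hours=4), step_s=600)
```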

Lesson 11: Provide transactions

Ensure that users don't experience partial or inconsistent changes in system state.

Performances typically span multiple components in the ubicomp deployment, in our case multiple displays. We found it necessary to provide transaction-like semantics for manipulating groups of displays. For example, we needed to be able to allocate content to a collection of displays and display the content if and only if we could do so on all the displays simultaneously. To ensure a satisfactory experience, partial transitions must remain invisible. Essentially, this means we had to support transaction-level atomicity and isolation, or at least a variation on isolation that we term visual isolation.4

We suspect that this requirement to manipulate ubicomp component groups and to constrain the visibility of state changes generalizes beyond display networks such as e-Campus. This is an area of future research, however, and we don't yet have specific guidelines on how to achieve these semantics in the general case.
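The sketch below shows the shape of this all-or-nothing behavior, assuming each display can stage content invisibly and then reveal it. It illustrates the idea rather than reproducing our visual-isolation implementation.

```python
# Illustrative all-or-nothing content switch: prepare on every display, commit only
# if all of them succeed, so viewers never see a partial transition.
class Display:
    def __init__(self, name: str):
        self.name = name
        self.visible = None
        self._staged = None

    def prepare(self, content: str) -> bool:
        """Load and decode content off-screen; return False if this display can't show it."""
        self._staged = content
        return True

    def commit(self) -> None:
        self.visible = self._staged   # the only point at which viewers see a change

    def abort(self) -> None:
        self._staged = None

def show_everywhere(displays: list[Display], content: str) -> bool:
    """Display content on every display, or on none of them."""
    prepared = []
    for display in displays:
        if display.prepare(content):
            prepared.append(display)
        else:
            for other in prepared:    # roll back silently; no partial transition is visible
                other.abort()
            return False
    for display in prepared:
        display.commit()
    return True
```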



Lesson 12: Manage expectations

Managing user expectations is crucial in public ubicomp deployments.

Several problems during our deployments resulted directly from our failure to properly manage user expectations. Three examples illustrate this issue. First, we found that once the system was operational for a short period, people assumed it would continue to be operational for the foreseeable future. In most of our deployments, we needed test periods that displayed information on the actual displays in situ while the system itself was still under development. This testing period confused the viewing public. Second, we failed to adequately communicate our access policies. Consequently, some people didn't contribute content because they didn't realize they could, and others asked for inappropriate content to be displayed. Finally, we note that blank displays implicitly create an expectation of content to come (or, more troublesome, the perception of a broken system). This is one area where projections on surfaces already present in the environment have a significant advantage over conventional displays.

We recommend taking time to develop a plan for managing user expectations as you introduce a system. You might have to involve experts in this discipline or collaborate with local information providers and opinion leaders. In our deployments, we worked closely with the local media to make users aware of the research and involve them in it.

Lesson 13: Be accountable

Prepare yourself, your team, and your work for public scrutiny.

Research deployments of public displays are, almost by definition, visible to the general public. As such, they can generate significant press and public interest. In practice, this manifested itself for us in terms of numerous press interviews. We also had to respond to one Freedom of Information Act request. Such requests stem from UK legislation that allows the public to request information from any public body. In our case, the request was for details of our spending on the Underpass deployment. In the end, the recipients used all the information we provided in a very positive fashion, which was most gratifying.


Researchers should be aware that entering the public domain inevitably makes their work subject to increased levels of scrutiny. For projects involving public deployments, we found it best to assume that all emails, paperwork, decisions, and expenditures will be open to the public and that the project will be audited. Once again, we benefited from involving outside experts with experience in such projects.


The e-Campus project is embarking on a major new set of deployments. We're using these lessons to help guide our work, and we believe the lessons learned will be valuable to all researchers deploying ubicomp systems in public spaces.

ACKNOWLEDGMENTS

We acknowledge the generous support of the EPSRC (grant no. 15986: The Equator project). Our thanks to Lancaster University for supporting the e-Campus initiative; to the Brewery Arts Centre, Welfare State International, BigDog Interactive (formerly .:the Pooch:.), Andrew Scott, Dave Ingles, and the e-Campus team for all their hard work on the technology probes; and to the Lancaster Friends Programme for a grant that partially funded Metamorphosis.


REFERENCES

1. K. Cheverst et al., "Experiences of Developing and Deploying a Context-aware Tourist Guide: The GUIDE Project," Proc. 6th Ann. Int'l Conf. Mobile Computing and Networking (MobiCom 00), ACM Press, 2000, pp. 20–31.
2. A. Crabtree et al., "Moving Out of the Control Room: Decentralizing Orchestration of a Mixed Reality Game," Proc. CHI Conf. Human Factors in Computing Systems, 2004.
3. M. Flintham et al., "Uncle Roy All Around You: Mixing Games and Theatre on the City Streets," Proc. 1st Int'l Conf. Digital Games Research Assoc. (DIGRA 03), 2003.
4. O. Storz, A. Friday, and N. Davies, "Supporting Content Scheduling on Situated Public Displays," accepted for publication in Computers & Graphics.

the AUTHORS

Oliver Storz is a research associate and part-time PhD student in the Computing Department at Lancaster University. His research concerns distributed systems support for deployable mobile and ubiquitous computing applications, with a current focus on public display networks. He has a Diplom-Informatiker (MSc) in computer science from the University of Karlsruhe. He's a member of the EPSRC-funded Equator IRC. Contact him at the Computing Dept., InfoLab21, South Dr., Lancaster Univ., Lancaster, LA1 4WA, UK; [email protected].

Adrian Friday is a senior lecturer in computer networking in the Computing Department at Lancaster University. His research interests include distributed systems support for mobility and deployable ubiquitous computing. He has a PhD in computer science from Lancaster University. He is a coinvestigator on the EU-funded SMS project and manager for Lancaster's contribution to the EPSRC-funded Equator IRC. He's a program cochair for Ubicomp 2006. Contact him at the Computing Dept., InfoLab21, South Dr., Lancaster Univ., Lancaster, LA1 4WA, UK; [email protected].

Nigel Davies is a professor of computer science at Lancaster University and an adjunct associate professor of computer science at the University of Arizona. His research interests include systems support for mobile and pervasive computing, focusing in particular on the challenges of creating deployable mobile and ubiquitous computing systems that can be used and evaluated "in the wild." He's an associate editor of IEEE Pervasive Computing. Contact him at the Computing Dept., InfoLab21, South Dr., Lancaster Univ., Lancaster, LA1 4WA, UK; [email protected].

Joe Finney is a lecturer in embedded systems and multimedia at Lancaster University. His research interests include networked mobile systems, network support for embedded ubiquitous systems, and novel mobile applications. He has a PhD in computer science from Lancaster University. He is a member of the IEEE, the ACM, and the British Computer Society. Contact him at the Computing Dept., InfoLab21, South Dr., Lancaster Univ., Lancaster, LA1 4WA, UK; [email protected]; www.comp.lancs.ac.uk/computing/staff/joe.

Corina Sas is a lecturer in human-computer interaction in the Computing Department at Lancaster University. Her research interests include user modeling, adaptive systems, data mining, spatial cognition, user studies, and individual differences. She has a PhD in computer science from University College Dublin. Contact her at the Computing Dept., InfoLab21, South Dr., Lancaster Univ., Lancaster, LA1 4WA, UK; [email protected].

Jennifer G. Sheridan is a research associate in the Computing Department at Lancaster University and the director of BigDog Interactive, a digital live art computing company. Her research explores the performative interactions that arise from non-task-based interactive uses of technology in performance and installation arts, particularly in playful arenas and unanticipated performance spaces. She has a PhD in computer science from Lancaster University. Contact her at the Computing Dept., InfoLab21, South Dr., Lancaster Univ., Lancaster, LA1 4WA, UK; [email protected].