Using Actuated Devices in Location-Aware Systems
Mike Fraser, Kirsten Cater and Paul Duff
Department of Computer Science, University of Bristol, Merchant Venturers Building, Woodland Road, Bristol, BS8 1UB, UK.
{fraser, cater, duff}@cs.bris.ac.uk
+44 (0)117 954 5144
ABSTRACT
Location-aware systems have traditionally left mobility to the user through carrying, supporting and manipulating the device itself. This design choice has limited the scale and style of device to corresponding weight and form constraints. This paper presents a project introducing school children to location aware systems. We observed that it is hard to notice, physically grasp and simultaneously share these small personal devices in groups. These behaviours are partly grounded in the physical device design, but also in the location awareness model itself, which provides information ‘right here’ while the children are looking around and about them. These observations lead us to suggest the alternative model of pointing at locations so that they can be noticed and experienced by groups in public places. We further build this location model into the device itself by introducing actuated components from robotics to make a location-aware device called ‘Limbot’ that can be physically pointed. A preliminary study of the Limbot with the school children indicates rich sharing behaviours, but that user control of actuation at all points is critical to the ultimate success of our approach, and further exploration of our location model is required. Keywords
Location awareness, actuators, pointing, physicality, robotics, human-robot interaction.
INTRODUCTION
Devices and application designs increasingly promote mobile interaction through techniques in ubiquitous computing [33]. Particularly prominent in Ubicomp has been the idea that changes in a device’s situation can be used to drive content delivery. The most common sensor cue for context-aware applications has been location [29], determined through sensors using GPS (e.g. [8, 25]), ultrasound [23] or image processing techniques [10]. Under these approaches, location remains intrinsically tied to the idea that the current location of the device is the most interesting piece of information for it to determine [8, 9], and that the location is varied by the user’s movement. Under this paradigm, determining location is a precision problem which changes with the variable (in)accuracy of
the sensing system. A good example of this ‘traditional’ location awareness paradigm is described by Hull et al. [17] who outline techniques for rapidly authoring located experiences by drawing areas on a digital map which can have media associated with them. These digital spaces can then be experienced on off-the-shelf mobile devices, typically GPS-enabled PDAs. An important feature of this idea is that the location-aware application is a primarily digital experience off-site, whereas using it is a physical experience on-site. The authors suggest that designing an experience off-site can lead to the physical world (noise, tracking error, etc.) emerging in unforeseen ways. In this respect it has been suggested that the sensor-driven location is overly removed from the contextual cues we use in everyday interaction [1, 12]. Recent work has expanded our understanding of location by describing difficulties with location error boundaries [4], rules for aggregation [32] and even potential for error to be useful, given the right design strategies [14, 7]. Authors of this work have started to suggest devices should break location metaphors based directly in sensor data in favour of social descriptions of location such as transitory groupings that might be determined based on sensor thresholding [4]. In this paper, we expand on these explorations of location by describing a project we have been undertaking in which school children have been designing location-aware experiences. Some difficulties the children encountered lead us to suggest an alternative location model alongside an actuated device. This leads us to explore the convergence of location-aware devices and robotics. BACKGROUND
We have been working with a school located near Bristol, UK, that has specialist status in Media and Technology. The school was recently selected to be one of the schools in a programme called Building Schools for the Future (BSF), funded by the UK government, which will provide a new £30m state-of-the-art school within three years. It will be the largest building project ever undertaken by the local council and will be delivered within very tight deadlines. It is therefore important to the school to engage the community, including its own pupils, in the design of the new buildings. We have helped the children articulate their thoughts and opinions on the new school using a location-aware experience authoring tool.
The children created experiences in pairs using the Mobile Bristol Mediascape authoring software [17], which is now called Create-a-Scape (http://www.createascape.org.uk/), and off-the-shelf GPS-tracked PDAs to experience their located audio, images and videos. The tool is freely available and designed to be very simple to use for children. Results can be seen and experienced rapidly, encouraging the children to get involved. Over twelve sessions we introduced the children to the software, discussed the different stakeholders involved in the new school (e.g. builders, visitors, pupils) and what designs and facilities they might like to see. During the sessions we took fieldnotes and recorded videos for later analysis. The children then used the tool to locate sounds and graphics on the build site to illustrate their ideas. A group of 15 children attended the sessions on a regular basis, two from year 12 (ages 16-17), two from year 13 (ages 17-18) and eleven from year 9 (ages 13-14).
Figure 1. Authoring tool with regions representing a new school design (right bottom); in use in the school terminal room (right top); children test their design on the school site (left)
Figure 1 shows the authoring tool in use. We will only describe the pertinent details of its operation here; for a full description see [17]. Briefly, the tool allows users to develop arbitrarily shaped media-tagged regions on a map. Each region can have many different media files of a variety of types (audio, image, video etc.), and can also have logic associated with it created in a scripting language. Scripts can manipulate the relationship between sensor and region details, events and media behaviour. For example, a typical operation would be to loop a sound file while GPS reports that a region is entered and stop it playing on exiting the region. The tool also allows the children to emulate GPS signals around the map using their mouse to test the design without having to visit the site each time. This approach assumes that media files should be triggered when the user moves the device into the region. We have come to think of this approach as a 'here' location model, where 'here' is determined by the GPS tracking, and the regions are delimited by correlating GPS signals with map
coordinates. When the children were creating their experiences on the map, this model was simple for them to understand. Regions represented different kinds of artefact in their new school design. Many of their regions represented the physical size and shape of a building, and associated media would indicate that the user had entered the building (for example, images of its contents, sounds to give the ambience and so on). However, some regions represented places in which certain activities would occur (e.g. entering the school's land), or even decorative objects (e.g. a line of trees). The children initially found no difficulty in reconciling these disparate semantics into the GPS regions while they were designing their experience indoors. However, when we asked how these regions would be encountered outside by users, they struggled to reconcile the 'here' location model with their goals.

To give an example, the 'line of trees' included a number of aligned green circles which lined an entrance drive to the school. The author had planned to associate a particular image of a tree with each region. At this point, the child's primary concern was that the inaccurate GPS would struggle to retain the trees' linear alignment, which was important to the design. When discussing the design it was described in terms of 'looking at the line of trees'. The design was meant to be observed, clearly not from within where the trees stood, but from some distance away from which the line could be perceived. We found that the 'here' location model of associated media is very rarely the only one to be considered, because the children often think about how the features would appear rather than what it would be like to be inside them. Ultimately, this constrained their designs to what could usefully be demarcated as 'here'. Instead, what was required was a 'there' location model.

This issue was even more pronounced when the children were outside testing their experiences. In pairs, they used an HP iPAQ with a Bluetooth GPS receiver and an audio splitter cable to explore the future site of the building. Views of the environment which were unexplored on the map suddenly became potentially relevant on-site. However, as soon as children encountered media 'here', they stopped and looked at their feet or into space while listening, or at their PDA screen while viewing images. The model of information located 'here' created a fracture between attending to the digital information and stepping back to look at the physical, real-world surroundings.

We were also struck by issues of how the PDAs featured in testing. The bird's-eye map view of the GPS emulator hides away the device on which the children actually experience their design, giving the impression of a seamless tool. When the pairs of children went outdoors for the first time, unanticipated effects occurred. They wore a pair of headphones each with an audio splitter plugged into the PDA, whose screen revealed any located images. Unfortunately this configuration tended to make it difficult for the children to interact with one another.
There were at least two reasons for this isolation. Firstly, the screen of the PDA is difficult to share simultaneously – one person has to hold the device and notice the image appearing as a region is entered, and then show it to another by tilting the screen towards them. This is particularly difficult to achieve outdoors because of the light on the screen, so often the PDA is handed over, which can cause the headphone cables to get tangled. Secondly, the headphones prevent the pair from talking properly to one another, which means they have a sense of what the other is hearing but find this difficult to share and discuss. When all the children were wandering around the site at the same time, we observed virtually no sharing between pairs. In short, both the location model and the device the children were using were isolating them from one another and their surroundings, rather than supporting a social experience. The PDA provides poor support for sharing between pairs of users and lacks the physical presence of digital encounters at a distance necessary to encourage spectator attention and serendipitous group formation [25]. We therefore became interested in exploring a 'there' location model and how to design a device to support such an approach. This led us to consider how to get a device to physically point at the landscape so that it could be seen, noticed and shared by the children.
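To make the 'here' model concrete, the following minimal sketch (our illustration only, with hypothetical names; it is not taken from the Mediascape/Create-a-Scape toolkit) shows the kind of trigger logic described above: media loops while the GPS fix falls inside an authored region and stops again on exit.

```python
import math

class Region:
    """A circular media-tagged region authored on the map (the 'here' model)."""
    def __init__(self, name, lat, lon, radius_m, media):
        self.name, self.lat, self.lon = name, lat, lon
        self.radius_m, self.media = radius_m, media

def distance_m(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; adequate over region-sized distances.
    r = 6371000.0
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return r * math.hypot(x, y)

def update(regions, gps_fix, active, player):
    """Start a region's media on entry and stop it on exit."""
    lat, lon = gps_fix
    for region in regions:
        inside = distance_m(lat, lon, region.lat, region.lon) <= region.radius_m
        if inside and region.name not in active:
            player.loop(region.media)       # the user has arrived 'here'
            active.add(region.name)
        elif not inside and region.name in active:
            player.stop(region.media)       # the user has left 'here'
            active.discard(region.name)
```

Nothing in this logic represents what can be seen from the current fix, which is precisely the limitation the children ran into.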
RELATED WORK
Our interest in physical pointing brings into question the relationship between Ubicomp location-aware systems and robotics. There are a number of crossovers between the ways in which robots and systems to support social interaction operate [3]. Both location-aware systems and robotic devices tend to determine their own location. Positioning in robotics has often been achieved by reasoning over radar or computer vision data, using SLAM (Simultaneous Localisation and Mapping) techniques [10], although there are also tele-embodiment solutions where the user remotely navigates the robot (e.g. [22]). There are some cases of location-aware systems using related probabilistic approaches [19]. In most location-aware systems, however, positioning has used GPS (e.g. [17]) or ultrasound tracking indoors (e.g. [24]). Location-aware systems can also differ from robotics in their mobility. For a located experience, the user typically reconfigures and moves the device around locations. Robotics, however, has often designed devices which self-configure their location, both re-positioning in the environment and physically actuating local parts of the device itself, such as limbs for grasping objects. Nonetheless, there has been growing interest in self-actuating devices beyond core robotics concerns. Interest in non-mobile devices which self-actuate has appeared in shape displays in tangible computing [23], force communication technologies [5] and haptics [6], and in mixed reality systems design [21]. Related work on Phidgets [15] includes a range of servo-based functions to
support rapid authoring of actuated experiences, although none of these approaches explicitly engages with location awareness. Luk et al. [20] describe one of the first mobile tactile feedback systems, but use small-scale electronic devices to stretch skin rather than investigating human-hardware interaction on a scale which is capable of producing gestures or being noticed in groups. Where tangible interaction is shared across co-present groups it tends to be embodied in existing physical objects by users rather than actuated (e.g. [27]), even when children are designing their own tangible experiences [28, 31]. In location-aware systems we are motivated by the physical presence and mobility of the Augurscope [30] in supporting a shared, located application in a physical, mobile device. Although the Augurscope can't self-actuate or self-move, the authors discovered that the physical presence of the device provided significant support for embodying and sharing the located experience. We also wanted our device to be usable at standing height and have shared physical capabilities. We also drew on the design of GestureMan [18] to explore the idea of producing pointing gestures towards spatial locations in the device itself. Although GestureMan is a directly controlled telepresence device designed to support remote instruction, the authors' investigations into the production of pointing led us to explore an articulated arm so that a range of referential gestures could be designed. Finally, we were inspired by Curlybot [13], which allows recording and replaying physical movements in the same space. Although Curlybot is designed for other uses, it illustrates that coherent experiences can be physically demonstrated and recorded using the device itself, which could match the authoring and experiencing phases of location-aware systems. According to Hardian et al. [16], each device balances user control against device control. Some authors explicitly describe techniques for sharing control between a device and its user; for example, Dey and Mankoff [9] explain how location-based systems could provide 'mediation' allowing users to assist devices in context interpretation. Equally, some devices are designed to work autonomously despite being carried around by users, such as in the case of mobile medical monitoring [2], where the user's mobile behaviour is captured and interpreted by autonomic sensors without any direct user control. We were acutely aware that location-aware systems normally provide direct control over the physical design and operation of the system (e.g. [17]), whereas autonomous robotic systems assume that the device remains in control of itself. Introducing actuation into the design of a location-aware device makes design of its 'adjustable autonomy' [11] all the more critical because safety factors become more immediate and problematic. Our review illustrates that it is unusual for a mobile device to be both self-mobile and yet remain directly under a user's control. However, we suggest that actuating mobile devices will provide clear means for users to appropriate
serendipitous, emergent, sharable and publicly visible device properties that traditional robots have provided. This approach could free up Ubicomp research to develop location-aware applications and services based on location models which more closely integrate located digital information with the surrounding physical space. Finally it could allow devices to exhibit public actions, giving them visibility relative to human body movement and therefore overcoming some current usability issues with the use of attention-hungry personal devices in shared public places. We have therefore designed a robotic device that supports recording (for authoring) and replaying (for experiencing) movements and pointing gestures. As an engineering problem, we were faced with many functional requirements. Our first attempt is certainly not an optimal solution to these goals, but it demonstrates the potential for such devices to be constructed. In the next section we describe in more depth how we constructed our prototype. ACTUATING LOCATION
We have drawn on our workshop sessions in the school and previous work to explore the design of a new device that could be used to author and experience locations through physical pointing gestures to 'there'. Our first prototype, which we have called 'Limbot', attempts to address difficulties the children encountered referring to what can be seen from the current position. In the following sections we therefore describe in depth how we constructed a first large-scale prototype solution using off-the-shelf components. This allows us to explore the feasibility of our suggested device properties in advance of miniaturizing and integrating the relevant components.

Figure 2. Limbot design (labelled components: Phantom haptic arm, tripod mounting, laptop computer, laminate base, motor controllers, batteries & inverter, wheels, motors & gears, castor)
Pointing Arm
Our first design decision was to use a Sensable Phantom 1.5/6 DOF haptic feedback arm as the pointing arm. Three degree-of-freedom robot arms are usually designed primarily as actuated output devices, whereas the Phantom is primarily an actuated input device. This means it contains very accurate rotary encoders for detecting self-orientation, as well as a handle at its tip with a small button and three gimbals measuring roll, pitch and yaw. The arm can be translated and rotated around all axes, or made to push against a user (and gravity) by applying suitable forces using motors in each of three joints. This is generally used to physically render a virtual surface, but we have used it to point the arm in particular directions by applying appropriate forces to the motors over time. The arm works in two major modes: an input mode where the arm is used to control movement of the wheels in a particular direction, and to point, recording a sequence of movements and gestures under the user's direction; and an output mode where the software replays the same sequence of movement and pointing. These 'modes' correspond to the two stages of traditional located experience design that the children undertook, authoring and experiencing. However, with the Limbot the same device is used for both authoring and experiencing the located gestures, which means they must be designed in situ with the author immersed in the physical environment which the user would later encounter. Thus, while we haven't yet added sound, image and graphics capabilities by integrating with the Create-a-Scape software, the merger of these two would provide pointing alongside existing media features.

We decided to mount the arm approximately 1m from the ground, the normal height for a UK door handle and hence a known comfortable height. Drawing directly on the design of the Augurscope [30], we mounted the Phantom arm to the base using a modified steel camera tripod. The tripod is bolted securely to the Phantom via expanding screw bolts through a rigid wooden support plate. The 10kg arm's placement at this height makes toppling more likely, so we positioned four heavy 44Ah lead-acid batteries as low as possible on the base for counterweight, which could also power the arm for several hours. The Phantom is designed to run from AC power and required an inverter to convert the DC battery supply. Initially we tried to run the inverter and the wheel motor controllers from the same 24V DC supply, but the inverter generated interference on the controller data lines. The solution we found was to power the inverter and controllers from independent supplies, giving a total of four 12V batteries in the system. The Limbot moves smoothly and relatively slowly to add to stability, approximately at human walking pace. Jolting movements are minimised by position profiling using the motor controllers, giving the robot a stopping distance of around 10cm at a normal speed of 0.3m/s. The arm operates in its own dedicated thread, since it requires a continuous 1kHz refresh rate to reliably compensate for the overall
device movement, gravity and force applied by users. We use a spring damping physics model to minimise vibrations as the arm attempts to stabilise itself at an orientation.
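As a rough illustration of that stabilisation loop (a sketch under our own assumptions; the gains and interface are illustrative rather than the values used on the Limbot), a spring-damper model computes a restoring torque towards the target joint angle at each update:

```python
def stabilising_torque(angle, target, velocity, k_spring=120.0, k_damp=8.0):
    """Spring-damper restoring torque for one joint (angles in radians).

    Called at roughly 1 kHz per joint; gravity and base-movement
    compensation would be added on top of this term.
    """
    error = target - angle
    return k_spring * error - k_damp * velocity
```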
Base and Wheels
The base is a 1cm-thick, 60cm-radius circular wood laminate, leaving access to the arm and avoiding corners catching during rotation. Two wheels are arranged on a common axis, with a further free-moving castor providing stability. Driving the wheels in a common direction translates the device and driving them in opposite directions rotates it. The wheels have a solid rubber tyre for grip. Each wheel is driven by a brushed DC 37W motor powered by the batteries in the base, chosen for its precision and high torque, with a gearhead ratio of 54.2:1 and a maximum speed of approximately 0.6m/s. The weight tolerances of the motors support 80kg, which sufficiently covers the batteries and the Phantom. The motor controllers are linked to each other using the CANopen protocol and connected to the laptop by a serial cable. Movement commands run asynchronously on the controllers so that processing can continue during movement.
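The base is therefore a conventional differential drive, so forward and turning commands mix onto the two wheels in the usual way; the sketch below is ours, and the wheel separation is an assumed figure rather than a measurement of the Limbot.

```python
WHEEL_SEPARATION_M = 0.9   # assumed value for illustration only

def wheel_speeds(linear_mps, angular_radps):
    """Common wheel speeds translate the base; opposite speeds rotate it."""
    left = linear_mps - angular_radps * WHEEL_SEPARATION_M / 2.0
    right = linear_mps + angular_radps * WHEEL_SEPARATION_M / 2.0
    return left, right

# wheel_speeds(0.3, 0.0) drives forwards at roughly walking pace;
# wheel_speeds(0.0, 0.5) rotates the base on the spot.
```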
Positioning
The laptop and the positioning system carry their own independent batteries. We deliberately placed the laptop in sight on top of the base to support debugging and maintenance. We designed the robot so that it could use three different tracking systems, although we have only tested two thus far. The first tracking method is to continuously read rotational odometry data from the wheels. This method drifts slightly over time due to wheel slip, but this is negligible over short distances and times. We also tested a room-scale indoor ultrasonic positioning system with a passive receiver on the base, which calculates position from signals transmitted by eight transmitters mounted on walls and the ceiling of the room [24]. This configuration provides a continuous minimum accuracy of 10cm. We also plan to substitute the ultrasound receiver with a GPS receiver for outdoor tracking with mutual odometry correction, but have not tested this configuration.
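The odometry-only method amounts to dead reckoning from the wheel encoders; a minimal sketch of the standard pose update (our illustration, not the Limbot source, and again assuming a nominal wheel separation) is:

```python
import math

def integrate_odometry(x, y, heading, d_left, d_right, wheel_sep=0.9):
    """Update (x, y, heading) from incremental wheel travel in metres."""
    d_centre = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_sep
    x += d_centre * math.cos(heading + d_theta / 2.0)
    y += d_centre * math.sin(heading + d_theta / 2.0)
    return x, y, heading + d_theta
```

Because wheel slip accumulates in this estimate, it drifts over longer runs, which is what motivates the environment-wide ultrasonic or GPS references above.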
Operation
The control software implements a state machine to move between recording and replaying modes of operation and to read data from and write data to the wheel motors and Phantom arm. We inserted stages of free operation between these modes to allow flexibility of use. A double click on the button on the arm cycles between recording and replaying modes, and the laptop beeps to give feedback. Initialisation and calibration modes run at start-up and zero the orientation and position of the robot base (with respect to measurement in the wheels) and arm. As the Phantom only has a 180° range of movement in the plane parallel to the floor, it is only possible to point the arm in front of the robot. To enable users to point behind the robot, the wheels rotate in the appropriate direction while the arm is in the outermost 10° of this range.
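The behaviour described in this section can be pictured as a small state machine cycled by double clicks on the arm's button; the following is our own simplification for illustration, not the actual control software.

```python
import time

MODES = ["free", "recording", "replaying"]

class LimbotController:
    def __init__(self):
        self.mode = "free"
        self.trace = []   # recorded (timestamp, base_pose, arm_pose) samples

    def on_double_click(self):
        """Cycle to the next mode and beep for feedback."""
        self.mode = MODES[(MODES.index(self.mode) + 1) % len(MODES)]
        print("\a mode:", self.mode)

    def tick(self, base_pose, arm_pose):
        """One control cycle: store poses when recording, emit them when replaying."""
        if self.mode == "recording":
            self.trace.append((time.time(), base_pose, arm_pose))
            return None
        if self.mode == "replaying" and self.trace:
            _, target_base, target_arm = self.trace.pop(0)
            return target_base, target_arm   # drive wheels and arm towards these
        return None
```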
DEPLOYMENT AND EVALUATION
We introduced the Limbot device to the children in the tenth and eleventh workshop sessions to gauge their views on its possible use in the experiences that they were authoring. We demonstrated the Limbot indoors in the classroom and outdoors on the playground where the new school buildings were to be built, and asked them to use it in pairs and groups. We then interviewed them to gather opinions on the design and video-recorded these sessions. We only used dead reckoning positioning from the wheels for these school sessions. The odometry proved accurate enough for trial use on the flat surfaces over short periods of time, as the rotary encoders on the wheel motors supply a precision of 1/60th of a degree and there was only a little slippage between the floor surface and the rubber wheel tyres. We have not yet developed the Limbot into a full location-awareness authoring system with equivalent features to, for example, Create-a-Scape, because we wanted to understand whether our approach was feasible and would support a future combination of these technologies. Our evaluation highlighted three primary issues. These relate to the physical mobility of the device; the juxtaposition of physical and digital media; and the co-production of locations.
Physical Mobility
The device is both mobile and self-mobile; in other words, it can be moved and can move itself at different stages. Broadly, our design separates these capabilities into different time frames, such that in recording mode and free movement mode the device's motors are mobile (both the wheels and the arm), but during replaying mode it is self-mobile. The only exception to this is that during recording mode, when pointing, the arm sustains its position against gravity. We wanted to find out from our group whether this was an acceptable design solution. Extracts 1 and 2 are taken from interviews with two of our groups who had used the Limbot for the first time.

Extract 1
Yeh I think its a good idea, I think it would be good on a tour cos there's actually something physical there that could you know guide you, so you know that … well as long as you trust the technology to take you where you're supposed to go, there's times where it's trundling away on its own and then you have to press the button, is that, I don't know, it's not too like fast so you feel like you're sort of are in control of it, but, yeh there's always sort of a worry like is it going to stop, like bump into things
Extract 2
Make it a bit faster so you could walk round and get it pointing at things … like there’s too much going on
These two quotes illustrate a recurrent theme within our discussions on the movement of the robot. Many of our group were keen for the Limbot to speed up to get round an experience more quickly, and yet they were also a little concerned about the independent movement. This is partly because the robot never moves under constant direct control by the user: either the user is pointing it in a particular
direction and telling it to move forwards (free movement and recording), or it is moving entirely autonomously. The typical robotics solution to this problem would be to engineer wayfinding and collision detection in the robot using radar or image processing techniques, and indeed one group suggested this to us as a partial solution. However, parsing the above quotes indicates that part of the issue is a deeper one of trusting in and sharing mobility control. Our group did not notice much difference between recording and replaying in terms of being ‘in control’, because both require mobility contributions from the user and the robot. In both modes, the robot moves independently of the user, but the user starts and stops each step. There were a number of moments where a group did ‘lose control’ of the robot, for example accidentally driving it into a desk during recording. This caused the immediate problem of stopping the robot, normally by picking it up and moving it whilst still on (which is not easy) or turning it off at the power switch and moving it. Neither solution was graceful, nor did it allow the recording session to continue, as tracking the robot purely through wheel rotation meant that disengaging the wheels from the floor also effectively disabled the robot’s location system until a new origin was set. This confirmed to us that an environment-wide tracking system such as GPS would be more suitable. More importantly, however, the root cause of difficulties was in understanding which mode the Limbot is currently in. Recording and replaying of moving and pointing creates a minimum of four modes which need to be more clearly noticeable and controllable. Most importantly, user-responsiveness needs to be at its most acute at the same time as the device is at its most ‘autonomous’ (i.e. actuating its wheels). Juxtaposing Media
Our intention was to produce a device which could improve the presence and quality of location-based systems. Given that our group had already been using a located experience design toolkit, they were quick to note the importance of the robot for the multimedia experience. Extract 3 is taken from a conversation between a pair who had just finished using the robot.

Extract 3
A: Put like a voice package with it so it'll point at something and say "this is the (.) blahdeyblahdeyblah" that if you're doing a [cool school
B: [that's that's it is cos if you could integrate this with your [PDA the the sound that you're attaching to spaces and the images
A: [sound
B: that you're putting up as well f- for regions you kind of have all of that in one it'd be quite nice I reckon
Despite excitement about the ability to point at features of the environment, this pair reflected on the fact that the relationship between the gesture and the media will be of utmost importance, and indeed all of our group noted at some point that referential gestures made by the robot made less sense without the juxtaposition of additional media.
However, the pair's conversation also highlights another point. Our group all treated the pointing arm as having the same status in a located application as located sounds or images: one should be able to leave pointing gestures in particular places for others to experience later on. This raises an interesting distinction: should the arm device be treated by the application as a representation of location information onto which multimedia can be attached, for example pointing shows an image or sound that represents what is being pointed at? Or should pointing have the status of multimedia itself, such that a 'point' can be left in a particular location alongside other media types? The two options are to retain a GPS-driven model under which pointing gestures can also be placed in individual locations, or to extend our location model to include deixis as a fundamental location mapping. The first model requires leaving and playing media and pointing gestures 'here'; the second requires playing media located over 'there' by pointing to there from here. We have yet to determine which of these models is superior, but we can report that the children presumed the former 'here' model, although this may be because they had been using that model with Create-a-Scape rather than because it was the model that naturally made sense to them.
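To illustrate what the second, deictic option might involve, a stored 'point' could simply be a target coordinate; when replayed, the device computes the bearing from wherever it currently is to that target and aims the arm along it. The sketch below is a hypothetical illustration of this 'there' model, not a feature of Create-a-Scape or of the current Limbot software.

```python
import math

def arm_azimuth(here_x, here_y, device_heading, there_x, there_y):
    """Azimuth (radians, relative to the device) that points at 'there' from 'here'."""
    world_bearing = math.atan2(there_y - here_y, there_x - here_x)
    relative = world_bearing - device_heading
    return (relative + math.pi) % (2 * math.pi) - math.pi   # wrap to [-pi, pi)
```

Under this model a located 'point' is authored once but re-aimed continuously as the device moves, rather than being triggered only when the user arrives 'here'.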
Co-Producing References
Our final section reviewing the use of our device looks at how our group used and shared the pointing feature. Below we present an extract which is taken from a group of four using the pointing arm to explore how the Limbot works. We present this conversation in some detail because it illustrates many of the issues that were raised individually elsewhere. At the start of the extract, B is aiming the arm at A as though it were a weapon (shown in Fig. 3 (top), which is from the reverse angle to show B clearly). A points at his own forehead and speaks.

Extract 4
A: Let it point right right there
B: its not (.) that's gonna point up there
((both trace to arm's orientation, B from the end effector, A from the arm itself))
A: That's the pointy thing ((points at arm)), not that ((points at end))
B: oh yeh, I spose
A: If you double clicked again it would point yeh?
R: err when you drive it, when you stop driving it it'll point again
A: ((to B)) redrive
B: ((drives, stops, points at 'A' again))
A: ((sees it is directly at him, moves & laughs))
This interchange highlights several points. Firstly, the way in which B aims the arm (Fig. 3, top) and the subsequent debate between A and B regarding the difference in orientation brings to light the difference between pointing with a robot arm and pointing with your own arm. Because the sense of orientation, distance and perspective are much harder to discern, the arm is aimed rather than pointed, and even after this its orientation is treated as uncertain due to the end effector not actuating. Secondly, A's comment about double clicking illustrates wider confusion about the status or mode of the robot and its capabilities at any one time. Finally, during the end of the sequence B successfully re-attempts to point at A. On this occasion, A moves so that the robot is not pointing at him. All the group join in and laugh, illustrating the benefit the children can find in manipulating the robot's pointing under their control. Pointing at nothing is not a 'technical failure' but a fun, notable event.

Figure 3. (top) B points at A; (a) "right there"; (b) "gonna point up there"; (c & d) "its not"; (e) A moves and laughs

Implications
Despite introducing free movement modes into our design to alleviate pressure on recording and replaying sessions, only rarely did the children appear to be comfortable with control. This implies that users need to potentially have arm and wheel control of the device at any point during its use. We suggest that at least three factors could improve control of our device: (i) have clearly discernable mappings between appearance and status; (ii) at any actuation point, provide a means of returning to a steady state as quickly as possible while still maintaining engineering safety; and (iii) design control for points of failure and operational transitions as well as successes. We have systematised these aims by looking at relationships and transitions between actuated events and user control. Across all operations including failures this inspection results in Table 1. This table gives a clear indication of which control features were missing in our design, and also highlights what worked well: the event which gave best coverage to these considerations (rotating) was also by far the easiest to perform for our group (no-one even commented on it, despite the complexity of our combined base and arm rotation design). It is clear that for other events, our user control focus means that we need to anticipate and design for
Event | Actuating? | Stopping possible? | Stopping mechanism | Immediate?
Waiting | No | N/A | No | N/A
Align arm | Yes | No | No | N/A
Rotate | Yes | Yes | Move arm | Yes
Forwards | Yes | Yes | Button | No
Arm points | Yes | Yes | Button | No
Move to start | Yes | No | Intervention | No
Arm drops | Yes | No | No | N/A
Collision | Yes | Yes | Intervention | No

Table 1. User control of actuated events
operational mistakes and failures during the device's self-movement. We suggest that each actuated component of our design would have benefited from a more instantaneous user-controlled 'off switch' of some kind: the pointing arm should become loose immediately when switched off, because the user is holding the arm to press the button anyway. The wheels also need an 'off switch'; for example, a handle to push the Limbot containing a clutch to disengage the wheel gears would improve both the speed and agility of the robot whilst also removing the necessity for additional 'free movement' modes of actuated operation. However, it will also require that we provide more visible state on the device itself. How this device behaviour translates to a location model is less clear. Our design certainly requires genuine media juxtaposition to become useful for featuring in a located media application. For example, using sound to support the physical group would allow information and physical pointing to make sense of one another. However, we have not yet managed to parse whether and how the status of physical spaces maps appropriately onto the location representation or its located multimedia.
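One way to realise this kind of disengagement is to test a user-override input before every actuation step, so that any mode can drop immediately into a passive state; the sketch below illustrates the principle only, with a hypothetical device interface rather than our current implementation.

```python
def actuation_step(limbot, override_pressed):
    """Run one control cycle, but let the user take control at any point."""
    if override_pressed():
        limbot.release_arm()     # arm goes loose immediately
        limbot.stop_wheels()     # wheels stop driving (or a clutch disengages them)
        limbot.mode = "free"     # 'it's my turn to be in control', not 'turn off'
        return
    limbot.tick_current_mode()   # otherwise continue recording or replaying
```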
CONCLUSION
Our Limbot device was designed to be user-controlled, but at some points it remains autonomous. There are things we believe in retrospect we could do to make it less autonomous, such as the introduction of a hand-operated clutch for the wheels. Given that the children found it hard enough with our limited and predictable software modes to understand what the robot was currently doing, we suggest that additional complexity or intelligence is not an important or missing feature of our design. Rather, we believe our robot fails at points where it continues to operate autonomously regardless of the children's desire to make it do something else. This occurs despite our original design aim to provide points of freeform use at each stage of recording and replaying. Our conclusion is that at every point of potential actuation, location-aware devices should provide a clear disengagement option, and that this should not be assumed to mean 'turn off', but rather for the user to tell the hardware and software 'it's my turn to be in control'. This is an important distinction: we are not suggesting that control should revert towards users and
away from the robot: the users are always in control, and will pick the robot up, lift it to shift its path or turn off the power when it is not operating in a sufficiently pliable manner. Rather, we are suggesting that an understanding of the existing predominance of user control should be built more closely into the design of the robot itself. Finally, in future work we plan to integrate different forms of digital media and physical actuation together into the design, to determine whether 'here' or 'there' location models are the most appropriate ways of integrating physically shared pointing into location-aware experiences.
ACKNOWLEDGEMENTS
We gratefully acknowledge funding for this work from UK EPSRC grants GR/S58485/01 and EP/D504708/1.
REFERENCES
1. Barkhuus, L., Context Information vs. Sensor Information: A Model for Categorizing Context in Context-Aware Mobile Computing, in Proc. Collaborative Technologies and Systems, San Diego, 2003.
2. Barratt, C., Brogni, A., Chalmers, M., Cobern, W. R., Crowe, J., Cruickshank, D., Davies, N., de Roure, D., Friday, A., Hampshire, A., Gibson, O. J., Greenhalgh, C., Hayes-Gill, B., Humble, J., Muller, H., Palethorpe, B., Rodden, T., Setchell, C., Sumner, M., Storz, O. and Tarassenko, L., Extending the Grid to Support Remote Medical Monitoring, in Proc. UK e-Science AHM 2003, EPSRC.
3. Bartneck, C. and Forlizzi, J., Shaping human-robot interaction: understanding the social aspects of intelligent robotic products, in Adjunct Proc. CHI'04, pp. 1731-1732.
4. Benford, S., Rowland, D., Flintham, M., Drozd, A., Hull, R., Reid, J., Morrison, J. and Facer, K., Life on the edge: supporting collaboration in location-based experiences, in Proc. CHI 2005, pp. 721-730.
5. Brave, S., Ishii, H. and Dahley, A., Tangible Interfaces for Remote Collaboration and Communication, in Proc. CSCW 1998, pp. 169-178, ACM.
6. Brewster, S. A. and Murray-Smith, R., Haptic Human-Computer Interaction, Glasgow, UK, Springer 2001.
7. Chalmers, M., A Historical View of Context, in Computer Supported Cooperative Work, 13 (3), pp. 223-247, 2004.
8. Cheverst, K., Mitchell, K. and Davies, N., Design of an object model for a context sensitive tourist GUIDE, in Computers and Graphics, 23 (6), pp. 883-891, 1999.
9. Dey, A. K. and Mankoff, J., Designing mediation for context-aware applications, in Transactions on Computer-Human Interaction, 12 (1), pp. 53-80, 2005, ACM.
10. Dissanayake, M., Newman, P., Durrant-Whyte, H., Clark, S. and Csorba, M., An Experimental and Theoretical Investigation into Simultaneous Localization and Map Building, in Proc. International Symposium on Experimental Robotics, pp. 265-274, 1999.
11. Dorais, G., Bonasso, R., Kortenkamp, D., Pell, P. and Schreckenghost, D., Adjustable autonomy for human-centered autonomous systems on Mars, in Proc. Mars Society Conference, 1998.
12. Dourish, P., What we talk about when we talk about context, in Personal & Ubiquitous Computing, 8 (1), pp. 19-30, 2004.
13. Frei, P., Su, V., Mikhak, B. and Ishii, H., Curlybot: designing a new class of computational toys, in Proc. CHI 2000, pp. 129-136, ACM.
14. Gaver, W. W., Beaver, J. and Benford, S., Ambiguity as a resource for design, in Proc. CHI 2003, pp. 233-240, ACM.
15. Greenberg, S. and Fitchett, C., Phidgets: Easy development of physical interfaces through physical widgets, in Proc. UIST 2001, pp. 209-218, ACM.
16. Hardian, B., Indulska, J. and Henricksen, K., Balancing Autonomy and User Control in Context-Aware Systems - a Survey, in PerCom Workshops 2006, pp. 51-56.
17. Hull, R., Clayton, B. and Melamed, T., Rapid Authoring of Mediascapes, in Proc. Ubicomp'04, pp. 125-142.
18. Kuzuoka, H., Oyama, S., Yamazaki, K., Suzuki, K. and Mitsuishi, M., GestureMan: a mobile robot that embodies a remote instructor's actions, in Proc. CSCW'00, pp. 155-162.
19. LaMarca, A., Hightower, J., Smith, I. E. and Consolvo, S., Self-Mapping in 802.11 Location Systems, in Proc. Ubicomp 2005, pp. 87-104.
20. Luk, J., Pasquero, J., Little, S., MacLean, K. E., Lévesque, V. and Hayward, V., A role for haptics in mobile interaction: initial design using a handheld tactile display prototype, in Proc. CHI 2006, pp. 171-180, ACM.
21. Ng, K. H., Benford, S. and Koleva, B., PINS push in and POUTS pop out: creating a tangible pin-board that ejects physical documents, in CHI Abstracts '05, pp. 1981-1984.
22. Paulos, E. and Canny, J. F., Social Tele-Embodiment: Understanding Presence, in Autonomous Robots, 11 (1), pp. 87-95, 2001.
23. Poupyrev, I., Nashida, T. and Okabe, M., Actuation and Tangible User Interfaces: Vaucanson duck, Robots and Shape Displays, in Proc. TEI'07, pp. 205-212, ACM.
24. Randell, C. and Muller, H. L., The Shopping Jacket: Wearable Computing for the Consumer, in Personal and Ubiquitous Computing, 4 (4), pp. 241-244, 2000.
25. Reeves, S., Benford, S., O'Malley, C. and Fraser, M., Designing the Spectator Experience, in Proc. CHI 2005, pp. 741-750, ACM.
26. Reid, J., Geelhoed, E., Hull, R., Cater, K. and Clayton, B., Parallel Worlds: Immersion in Location-based Experiences, in CHI Extended Abstracts 2005, pp. 1733-1736, ACM.
27. Regenbrecht, H., Wagner, M. and Baratoff, G., MagicMeeting: A Collaborative Tangible Augmented Reality System, in Virtual Reality, 6 (3), pp. 151-166, 2002.
28. Revelle, G., Zuckerman, O., Druin, A. and Bolas, M., Tangible user interfaces for children, in CHI Extended Abstracts 2005, pp. 2051-2052, ACM.
29. Schilit, B., Adams, N. and Want, R., Context-aware computing applications, in Proc. International Workshop on Mobile Computing Systems and Applications, 1994, IEEE.
30. Schnädelbach, H., Koleva, B., Flintham, M., Fraser, M., Izadi, S., Chandler, P., Foster, M., Benford, S., Greenhalgh, C. and Rodden, T., The augurscope: a mixed reality interface for outdoors, in Proc. CHI 2002, pp. 9-16, ACM.
31. Stanton, D., Bayon, V., Neale, H., Ghali, A., Benford, S., Cobb, S., Ingram, R., Wilson, J., Pridmore, T. and O'Malley, C., Classroom Collaboration in the Design of Tangible Interfaces for Storytelling, in Proc. CHI'01, pp. 482-489.
32. Strohbach, M., Gellersen, H-W., Kortuem, G. and Kray, C., Cooperative Artefacts: Assessing Real World Situations with Embedded Technology, in Proc. Ubicomp'04, pp. 250-267.
33. Weiser, M., Some Computer Science Issues in Ubiquitous Computing, in CACM, pp. 75-84, 1993, ACM.