Int. J. Human-Computer Studies 72 (2014) 111–125
Distributed user interfaces in public spaces using RFID-based panels

Ricardo Tesoriero a,*, Pedro G. Villanueva a, Habib M. Fardoun b, Gabriel Sebastián Rivera a

a University of Castilla-La Mancha, ISE Research Group, Information Systems Department, Campus Universitario de Albacete, 02071 Albacete, Spain
b King Abdulaziz University, P.O. Box: 80200 and 21589, Jeddah, Kingdom of Saudi Arabia
Article history: Received 21 September 2012; received in revised form 17 August 2013; accepted 28 August 2013; available online 14 September 2013. Communicated by E. Motta.

Abstract
The combination and integration of services between mobile computing and context-aware applications defines a wide range of distributed user interfaces to support social activities. In this paper, we propose a novel solution that combines social software with context awareness to improve users' interaction in public spaces. The approach is based on the concept of collaborative interactive panels, where users share their opinions and ideas about environmental issues by performing natural gestures. By taking advantage of physical resources already available in public spaces, combined with well-known technologies such as mobile devices and RFID, we extend the concept of social software from the Web to physical public scenarios, such as bus stations, squares, etc. As an example, we present a case study that encourages citizens' participation in decisions related to community environmental issues, reducing the gap between social software and users.
© 2013 Elsevier Ltd. All rights reserved.
Keywords: Distributed user interfaces; RFID technology; Mobile devices
1. Introduction

Software solutions based on mobile technology (smartphones, tablets, PDAs, etc.) are becoming more and more common because of the exponential growth in their number of users over the last decade. One of the most important factors behind this expansion is the growing spread of communication networks, such as Wi-Fi, GPRS, 3G and so on, which provide users with connectivity almost everywhere. This fact leads to the deployment of Internet-based solutions that provide mobile users with new capabilities and services. Keeping this scenario in mind, users are able to interact with the environment using mobile devices, which gives them the ability to establish an interactive dialog with it while keeping in touch with other users at the same time.
Turning to another subject, the challenge of building a sustainable environment begins with each individual person, because social responsibility lies on individuals. Consequently, people need information about environmental issues in order to encourage their participation, as active agents in society, in the solution of these issues. The participation of society is fundamental to discuss these problems as well as to establish concrete actions. Town councils, public administrations, and cultural and educational institutions use blackboards, maps or different kinds of panels to present citizens with different ecological problems, such as pollution levels, the
* Corresponding author. Tel.: +34 967599200 ext. 2295; fax: +34 967995224.
E-mail addresses:
[email protected],
[email protected] (R. Tesoriero),
[email protected] (P.G. Villanueva),
[email protected] (H.M. Fardoun),
[email protected] (G. Sebastián Rivera).
http://dx.doi.org/10.1016/j.ijhcs.2013.08.010
distribution of recycling containers, the disposal of toxic waste, firefighting strategies, and so on.
Up to now, traditional informative panels have offered poor information that is not well organized, or is difficult to understand, because the information is confined to a small portion of physical space. Besides, these panels provide users with static information that tends to become obsolete very quickly. As a result, citizens experience confusion and disorientation when trying to obtain information from these panels. Therefore, interest in the information provided by these panels decreases substantially, making them useless.
To address this problem, we propose a new type of panel that provides users with a novel interaction mechanism. It takes advantage of the widespread use of mobile devices in order to increase citizens' social responsibility regarding environmental problems by means of mobile technologies. The panel has been conceived as a new device that supports interaction among different users, individually or in groups, providing them with a simple interaction mechanism based on gestures. Therefore, users are able to retrieve dynamic and up-to-date information from the panel on their mobile devices just by approaching the device to a physical point of interest in the panel. Thus, users do not need any technology background beyond what they already have as users of mobile devices.
This device has been developed with "easy-to-use" and "easy-to-learn" requirements as the main goal. Therefore, it provides users with a friendly and simple gesture-based interface intended to support natural interaction. The implementation is based on RFID technology, which is currently broadly used in industry and commerce, as a reliable medium to achieve gestural interaction. The RFID technology
The problem this paper tackles is how to turn static informative panels into dynamic and interactive panels. As we mentioned in the introduction, the solution is based on three concepts: context-awareness, gesture-based interaction on a distributed user interface, and social software. This section explains how these concepts are related to our proposal.
tourists to get context-aware information in outdoor environments by means of the user's position and orientation, which are obtained using GPS technology. Another interesting approach is GUIDE (Davies et al., 1998; Cheverst et al., 1999, 2000). It is a hybrid system that allows users to retrieve information using different location systems depending on whether they are in an indoor or outdoor scenario. It employs WaveLAN technology in indoor environments and GPS technology in outdoor environments. The Smart Sight Tourist Assistant application (Yang et al., 1999) provides users with information based on users' GPS coordinates and marks that identify objects. An interesting characteristic of this system is the possibility of sharing information among the users visiting the space. These solutions are suitable when the required precision is above 10 m; GPS does not work properly when natural or gestural interaction is to be provided, which requires a precision under 2 m. Some approaches, such as Becker et al. (2008), Ni et al. (2004) and Hähnel et al. (2004), achieve "coarse-grained" location-awareness (Tesoriero et al., 2012), but they do not deal with "fine-grained" location-awareness (Tesoriero et al., 2012), such as gestures. In Kukka et al. (2011), the authors employ Bluetooth as an alternative to RFID; thus, they employ an "active" technology that requires an external power supply, unlike passive RFID. This fact leads to higher maintenance and deployment costs. Another interesting approach is presented in Hosio et al. (2010); however, it does not employ RFID technology to perform the interaction. Thus, none of these approaches is able to interact with a 3D real location, due to the granularity of interaction they are able to achieve.
2.1. Context-awareness
2.2. Gesture-based interaction on a distributed user interface
According to Dey and Abowd (2000), context-aware applications are systems that use context to provide relevant information and/or services to the user, where the importance of such information depends on the task the user is performing. The concept of context includes all the information that can be used to characterize the situation of an entity, such as a person, a place or an object, which is considered relevant to the interaction between the user and the application, including the user and the application themselves.
One of the classical applications of context-awareness is the development of location-aware applications. Mobile computing can benefit from location-awareness to improve two important aspects of software applications: (a) the user interface and (b) automatic adaptation to a changing environment. Taking advantage of user mobility, we have the opportunity to create new user interfaces that provide more human and natural interaction with the physical context, sparing users from having to learn the non-natural interfaces of new devices. Consequently, these applications are more flexible and are able to adapt themselves to users' needs, using the location as part of the context to enrich their experience.
The canonical application of location-awareness is the electronic museum guide provided to museum visitors in order to retrieve information about museum pieces. These guides, running on mobile devices, allow users to visit a cultural space and retrieve information related to their location inside the cultural space they are visiting (i.e. museum, art exhibition, etc.). Some illustrative references to this kind of application can be found in Butz et al. (2000), Ciavarella and Paternò (2003, 2004) and Lozano et al. (2007). All these applications have a common limitation: they work in indoor scenarios only; they do not work properly in open spaces.
However, there are some tourist guide applications, such as Cyberguide (Long et al., 1996; Abowd et al., 1997), which allow
In Välkkynen et al. (2003, 2006), an RFID-based positioning system that retrieves information from physical objects is described. It also proposes the use of this technology not only to retrieve information from objects but also to control them. Thus, Välkkynen et al. (2003) proposes a user interaction paradigm to browse information through physical objects using RFID technology. We propose an extension of this idea of person-to-object interaction that includes interaction among people through physical objects, introducing the idea of person-object-person interaction. For instance, we can use any object to communicate with other people using mobile devices. The main difference between traditional approaches and our idea is the ability to interact with other people using the surrounding environment directly, instead of the mobile device itself.
Another aspect to highlight about our proposal is the use of natural gestures. In recent years, we have witnessed emerging applications that use the concept of natural interaction, or gesture-based interaction. The Nintendo Wii,1 the Microsoft Kinect2 and the PlayStation Move3 are the most relevant examples of devices controlled by user gestures. These systems have been designed to work in a controlled indoor space (in terms of light intensity and space); therefore, the deployment of these controllers in public spaces is not trivial.
The work by Müller et al. (2010) discusses the foundations for creating exciting public displays and multimedia experiences to enable new forms of engagement with digital content. The limitation of this study is that users can only interact with the environment through digital public displays. Our proposal goes further by
is based on Near Field Communication (NFC), which enables users to get information stored on tags (assigned to a physical location) just by bringing the NFC reader near the tag area. Through this technology and the communication service provided by mobile devices, these panels become dynamic by allowing users to reach multimedia information. Users' participation is motivated by the use of these devices and the inclusion of social software concepts, which use the Internet as a platform. In fact, these panels take social software from the Internet and put it into the streets. To validate this proposal, we have performed a usability evaluation based on the ISO/IEC 9126-4 international standard metrics (ISO/IEC 9126-4, 2001).
The article is organized as follows. Section 2 presents a brief overview of social and interactive context-aware applications. Section 3 describes the Interactive Panels as a solution to the problem presented in Section 2. Section 4 presents the evaluation of the system. Section 5 describes Interactive Panels in terms of distributed user interface characteristics, and classifies them according to the multi-display ecosystem taxonomy. Finally, Section 6 presents the conclusions and future work.
2. Related work
1 The Nintendo Wii official site: http://www.nintendo.com/wii
2 Microsoft Kinect for Windows: http://www.microsoft.com/en-us/kinectforwindows/
3 The PlayStation Move site: http://us.playstation.com/ps3/playstation-move/
allowing users to interact with any type of physical object in the environment.
An extension to Lozano et al. (2007), where RFID technology is used to improve visitors' experience in art museums, is presented in Tesoriero et al. (2008). This extension also contemplates the idea of collaboration among users, although it was not implemented in the prototype. Therefore, the proposal presented in this work goes a step further because it shows how users' collaboration can be achieved using RFID technology as a public Distributed User Interface (DUI).
Regarding interaction on DUIs, according to Vanderdonckt (2010), the main characteristics are related to the repartition of one or many elements from one or many user interfaces in order to support one or many users carrying out one or many tasks on one or many domains in one or many contexts of use, each context of use consisting of users, platforms, and environments. Based on this definition, our proposal is considered a distributed user interface since:
- It divides the user interface into different physical spaces of interaction: the panel and the mobile device screen. While the mobile device screen reproduces multimedia resources, the panel provides users with controls to retrieve augmented information from physical objects.
- It supports interaction among different users by means of social software tools, such as RSS feeds or chats, enabling users to carry out collaborative tasks.
- It provides users with tools to carry out different kinds of tasks (not only sending and receiving information).
- It is embedded into different contexts, environments and platforms (i.e. mobile device and RFID tag).
Another important aspect of our proposal that is worth highlighting is the translation of the social software concept from the Web 2.0 to a physical environment. Based on Web 2.0 technology, whose platform is the Internet, social software provides users with a set of communication tools that allow interaction and collaboration among people by means of social and informal agreements. These tools include email, distribution mailing lists, Usenet, IRC, instant messaging, blogging, wikis, news, social bookmarks, folksonomies, and so on. The proposed system extends the concept of social software to provide interaction mechanisms beyond the computer, and to bring them to public spaces, as near to people as possible. The interaction mechanisms that we propose are designed to be easy to use, by supporting natural interaction between the user and the existing physical environment.
3. Interactive panels by RFID

In this section, we describe the implementation details of the proposed system, known as Interactive Panels. To explain the implementation of the system, we start by presenting the set of Tools, Commands and Parameters that the user is able to manipulate through the panel. Then we use a CTT task model to describe the functionality of the prototype that is employed to perform the preliminary usability evaluation of the Interactive Panels. Finally, we explain the software infrastructure in terms of the RFID Interactive Panel Framework (Fardoun et al., 2012) that was employed to develop the application, and the hardware infrastructure that supports the prototype employed to perform the preliminary usability evaluation of the system in Section 4.

3.1. System tools, commands and parameters
From another perspective, according to the taxonomy presented in Terrenghi et al. (2009), the system can be seen as an inch-to-yard (or even inch-to-perch) display ecosystem (smartphones or tablets) immersed in a many-to-many social interaction environment. Besides, this approach can be classified as a non-traditional distributed user interface which combines displays of different sizes to achieve social tasks. Thus, our proposal is a particular kind of distributed user interface that involves a wide range of users (from the social interaction perspective) and spaces (from the contexts of use). Finally, from the hardware perspective, this proposal combines different types of displays: the panel, which can be considered a passive device due to the lack of a power supply, and the mobile device.

2.3. Social software

The Mirrored Message Wall (MMW) (Yeom and Tan, 2010) describes a public display to share thoughts and messages among large groups of users. This work enables users to obtain information as well as to add new information to the system, which is shared with other users at the same time. The main difference from our proposal is that MMW users interact with their mobile devices, whereas in our proposal users interact through their mobile devices with objects in the physical world. Other research works where users retrieve information from RFID tags were presented in Oppermann and Specht (2000) and Paternò and Santoro (2007). The main difference between these works and our approach is that these applications are limited to information retrieval only. Our approach implements the concept of command, allowing users to perform any number of functions on the system, such as retrieving information, introducing information, getting notifications, posting information, physical selection of objects, etc.
The conceptual model of Interactive Panels is very simple and is built on three main concepts: the Tool, the Command and a set of Parameters. A Tool defines a "sub-application" in the system which is controlled through the execution of Commands that take user information through a set of Parameters.

3.1.1. System tools
The system provides users with the following set of tools:
- Multimedia content reproduction: It allows users to reproduce different types of multimedia resources (i.e. photo gallery, audio, audio-text and video). For each type of media, we define a different tool.
- Statistics: It allows users to analyze users' involvement with the subject of the panel.
- Help: It introduces new users to how to use the system. As the system is easy to use, this tool is not used very often.
- Blog: It allows users to exchange experiences and express themselves by posting or retrieving posts.
- RSS feeds: It allows users to receive notifications about a virtual or physical activity related to the panel, or a special point of interest on it.

3.1.2. System commands
Tools are manipulated through Commands that differ according to the Tool. As a proof of concept, we have defined the following set of commands:
- Next: The user goes a "step" forward. For instance, if the user executes this Command when reproducing a video, it starts playing the next video in the play list.
- Previous: The user goes a "step" back. For instance, if the user executes this Command when reproducing a video, it starts playing the previous video in the play list.
- Post: The user "sends" or "shares" information. For instance, if the user executes this Command when composing a message for the Blog, it posts the message.
- Read: The user "retrieves" or "reads" information. For instance, if the user executes this Command when a Blog is selected, blog messages are retrieved.
- Select Area: The user selects a Parameter, or a set of them, for a Command. For instance, if the panel shows a map, users are able to select a zone in the map.

It is worth noticing that some Commands are shared by more than one Tool; for instance, the Next and Previous Commands are shared by the audio and video players as well as the photo gallery Tool.
3.1.3. System parameters
Sometimes, commands need extra information to be executed; for instance, suppose that we have to choose the set of information to be reproduced (i.e. select a play list from a set of play lists). To cope with this situation, parameters are provided to commands through a "set parameter" Command; for instance, the Next Command executed in the Photo Gallery Tool is parametrized with the play list from which it should take the next resource. In this case, if the panel shows a map where users are able to choose the area whose multimedia resources they want to reproduce, the "set parameter" command is the Select Area Command. There is one "Select Area" Command for each play list.
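The Tool/Command/Parameter model can be sketched as follows. This is a minimal illustration, not the framework's actual code: the class and method names (`Tool`, `PhotoGallery`, `set_parameter`, the play-list keys) are our own, chosen to mirror the concepts described above.

```python
# Minimal sketch of the Tool/Command/Parameter model (illustrative names only).

class Tool:
    """A 'sub-application' (e.g. photo gallery, blog) driven by Commands."""
    def __init__(self, name):
        self.name = name
        self.params = {}  # values supplied by "set parameter" Commands

    def set_parameter(self, key, value):
        """Stands in for a 'set parameter' Command, e.g. Select Area."""
        self.params[key] = value

class PhotoGallery(Tool):
    """Photo gallery Tool: Next/Previous Commands browse the selected play list."""
    def __init__(self, playlists):
        super().__init__("photo-gallery")
        self.playlists = playlists  # e.g. {"zone-1": ["p1.jpg", ...]}
        self.index = 0

    def next(self):
        photos = self.playlists[self.params["zone"]]
        self.index = (self.index + 1) % len(photos)
        return photos[self.index]

    def previous(self):
        photos = self.playlists[self.params["zone"]]
        self.index = (self.index - 1) % len(photos)
        return photos[self.index]

gallery = PhotoGallery({"zone-1": ["p1.jpg", "p2.jpg", "p3.jpg"]})
gallery.set_parameter("zone", "zone-1")  # Select Area Command for zone 1
print(gallery.next())                    # Next Command -> "p2.jpg"
```

As in the text, the Next Command is parametrized by the last Select Area gesture: the zone is stored on the Tool and reused by subsequent Commands.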
3.2. User interaction

The user interaction with the environment is based on the "approach and remove" metaphor (see Fig. 1). The interaction starts when users bring the NFC reader attached to the mobile device close to an RFID tag that represents the hot-spot in the panel the user is interested in.
As a result of this approach, the reader is able to get the ID of the RFID tag that is just below it. This ID is sent to a Web service hosted on a server deployed on the Internet. The server processes the request and then sends the information associated with the RFID tag ID back to the mobile device. This information is interpreted in one of two ways: as a Tool or as a Command/Parameter. In either case, the last command that was received is executed on the last tool that was received in order to perform the action on the mobile device. The result of this execution may be the transference of a resource (i.e. video, photo, audio, text, etc.) or information stored in the database system (i.e. messages belonging to the Blog). In this way, users perceive the interaction on their mobile devices as an interaction between themselves and the panel, instead of an interaction between themselves and a Web application. Fig. 2 depicts the whole process.
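The client-side dispatch just described — remember the last Tool and the last Command received, and execute the latter on the former — can be sketched as below. The names and the tag IDs are hypothetical, and the local `TAG_TABLE` merely stands in for the Web service the prototype actually queries.

```python
# Sketch of the client-side tag dispatch (illustrative; the prototype asks a
# remote Web service for the meaning of each tag ID instead of a local table).

# Server-side mapping from RFID tag IDs to Tools or Commands (hypothetical IDs).
TAG_TABLE = {
    "04:A1": ("tool", "photo-gallery"),
    "04:B2": ("command", "next"),
    "04:C3": ("command", "previous"),
}

class Client:
    def __init__(self):
        self.last_tool = None
        self.last_command = None
        self.log = []  # records each (tool, command) execution

    def on_tag_read(self, tag_id):
        kind, value = TAG_TABLE[tag_id]  # stands in for the Web service call
        if kind == "tool":
            self.last_tool = value
        else:
            self.last_command = value
        # Execute the last command received on the last tool received.
        if self.last_tool and self.last_command:
            self.log.append((self.last_tool, self.last_command))

client = Client()
client.on_tag_read("04:A1")  # user approaches the photo-gallery hot-spot
client.on_tag_read("04:B2")  # user approaches the "next" arrow
print(client.log)            # -> [('photo-gallery', 'next')]
```

The point of the sketch is the statefulness of the dispatch: approaching a Command hot-spot alone is meaningful because the Tool selected by an earlier gesture is still in effect.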
3.3. The decision-making scenario

The system can be easily extended with new Commands, Tools and Parameters by associating information with new RFID tags (see Section 3.5). This set of tools enables users to interact among themselves by means of the Interactive Panels, as they can retrieve text and multimedia information uploaded by other users. A decision-making scenario where the town council decides where to locate a new green zone in the city depicts a concrete example where panels are a suitable solution to gather citizens' opinions. A possible implementation of the panel is shown in Fig. 3 and the set of PDA screens showing a set of photos from a region is depicted in Fig. 4. This panel could be exposed in the entrance hall of any public building, such as town councils, or in other public areas of the city: bus stations, squares, etc. This panel offers four zones (identified by the numbers 1, 2, 3 and 4) which show the candidate locations for the green zone in
Fig. 1. The “approach and remove” metaphor.
Fig. 2. The Interactive Panel under the hood: (1 and 2) the RFID reader generates an electromagnetic field that is used by the tag to send the tag ID back to the reader; (3 and 4) the ID is sent to the Resource Server and, as result, (5 and 6) it sends back the information related to the tag ID to the mobile device.
Fig. 3. A sample of a real Interactive Panel (297 mm × 420 mm) exposing information to citizens about possible locations for green zones in Albacete, Spain. (For interpretation of the references to color in this figure caption, the reader is referred to the web version of this article.)
Fig. 4. PDA screens. (a) Photo selection, (b) first photo, (c) second photo and (d) third photo.
the city. The panel also offers a set of commands that allow users to interact with different multimedia resources and social software tools provided by the system. For instance, if users approach the mobile device to the Blog icon (Fig. 3, upper left icon), they can see on the mobile device the comments left by other users of the system, and write down their own opinion. Besides, citizens are able to subscribe to the panel using the RSS feed in order to be notified about related events (see the upper right icon in Fig. 3). In this case, if there is news about the green area planning, the system notifies those users who have shown interest in this information.
The RSS tool activates actions using a single gesture. However, Interactive Panels also allow users to employ command-based interaction, where users manipulate the information of the panel using different sequences of gestures. An example of command-based interaction is the use of the Photo Gallery Tool. To use this Tool, users bring the device close to the photo icon represented by the camera in Fig. 3 (Show Photo Tool and First Photo Command), and then select the zone where the photos were taken (Parameter). In order to browse photos from the gallery, users bring the device close to the Previous and Next commands, represented by the left and right arrow icons shown in Fig. 3, respectively. Note that the zone is implicitly selected from the last Select Zone Command/Parameter. A video demonstration of Interactive Eco Panels by RFID can be seen in Demonstration Interactive Eco Panels (2008).

3.4. System functionality

In Section 3.1 we described the Tools, Commands and Parameters the system provides to users, and an example of how users interact with these elements. In this section we describe which tasks users are able to perform with these tools. Tasks modeled using the ConcurTaskTrees (CTT) notation (Paternò, 1999) are shown in Fig. 5.

3.4.1. The Main task model
Fig.
5(a) shows the main task model, made up of five abstract tasks: Blogging, RSS, Show Resource, Help and Get Statistics. Note that the abstract root task has an asterisk decorator to indicate that it is an iterative task (the task can be repeated indefinitely). The temporal relationships among these tasks belong to the Choice type; therefore, only one task can be performed at a time. The only exception is the Exit task that allows
users to end the Main task. This model is the root of a set of sub-models representing each tool.

3.4.2. The Blog task model
The Blog task model is a sub-model of the Main task model. It represents how users read blog posts or post messages to the blog. Fig. 5(b) shows how the user selects the Blog icon in the panel, and how the system shows two possible options: (a) write a post or (b) read blog posts. If the user decides to write a new post, a text entry is enabled allowing the user to write the post and eventually post the message to the blog. However, if the user decides to read blog posts, the system shows the blog posts to the user.

3.4.3. The RSS task model
The RSS task model is a sub-model of the Main task model. It represents how users are able to subscribe to/unsubscribe from the panel information feed. Fig. 5(c) shows how users subscribe/unsubscribe to/from the RSS depending on the user state. If the user is not subscribed, then the system subscribes the user; otherwise, the user is unsubscribed.

3.4.4. The Show Resource task model
The Show Resource task model is a sub-model of the Main task model. It represents the reproduction of multimedia resources related to a set of defined zones on the map (panel). Fig. 5(d) shows how the user chooses the map zone he/she is interested in, in order to retrieve multimedia information according to the type of multimedia resource (audio, video, audio-text, photos, etc.) he/she has selected. The user is also able to browse resources of the same type by means of the left and right arrows that represent the go to Previous and Next resource Commands, respectively.

3.4.5. The Help task model
The Help task model is a sub-model of the Main task model. It offers users a brief explanation of how the interactive panel can be used with the mobile device. Fig. 5(f) shows how the user selects the Help tool to retrieve the Help information.

3.4.6. The Statistics task model
The Statistics task model is a sub-model of the Main task model.
It offers users a set of statistics related to the use of the panel (accesses, posts, etc.). Fig. 5(e) shows how the user selects the Statistics tool to show statistical information.
Fig. 5. The Interactive Panel task models.
3.5. Software architecture

We decided to use a Service Oriented Architecture (SOA) to allow users to receive multimedia resources while avoiding problems related to the portability of the service provider or the data exchange format. By employing this architecture, we map users' natural gestures into internal commands that the system's model is able to interpret and associate with concrete events and actions.
Therefore, we based our implementation on the framework to develop web applications based on RFID panels, explained in detail in Fardoun et al. (2012). This approach is highly flexible and scalable, and allows the application to be easily adapted to future extensions or modifications derived from the potential incorporation of new contexts. This framework divides application development into two steps: (a) the application definition and (b) the application configuration.
3.5.1. The application definition
The application definition step of developing applications with the framework described in Fardoun et al. (2012) is devoted to the definition of the application logic in terms of Tools, Commands and Parameters, which are implemented as sub-classes of the Tool and Command classes (see Fig. 6). This framework is based on a distributed adaptation of the Model-View-Controller (MVC) software architecture pattern. As in most distributed client-server systems, the framework locates the view-controller part of the architecture in the client, while the model is located on the server side of the application.
In the client, we delegate the controller responsibility to the RFIDReader class and the RFIDEvent and IRFIDListener interfaces. In this case, we have used a combination of two design patterns (Gamma et al., 1993) to solve the design problem. First, we applied the Adapter design pattern to insulate the specific RFID technology to be used (active, passive, different implementations, etc.) from the rest of the application. Therefore, the RFIDReader plays the role of Adapter in this pattern. Besides, this class also plays the Observable role in the Observer design pattern, which we applied in order to provide other software components with the ability to attach to this controller through the RFIDEvent class and the IRFIDListener interface. Thus, the controller sends the tag ID that was read to the model (located at
the server side of the application) using the IEcoPanelModel interface, which is implemented by the ECOPanelModel class. The communication between the client and the server is carried out by means of the Proxy design pattern, where the ServerProxy class plays the role of proxy. On the server side, the model processes the tag ID by mapping it to a concrete Command or a ToolModel that is sent back to the client. As a result of this process, the client receives the serialized representation of the tool or the command/parameter from the proxy, executes the command on the tool, and shows the result using the appropriate view.

3.5.2. The application configuration
The application configuration step of developing applications with the framework described in Fardoun et al. (2012) is devoted to the mapping between physical information (RFID tag IDs) and application tools, commands and parameters. This configuration is written in XML in order to improve the flexibility of the system configuration. Fig. 7 shows: (a) the configuration of the application prototype (Fig. 7(a)), and (b) the schema used to validate the configuration file (Fig. 7(b)). As we can see in Fig. 7(a), some commands are missing (i.e. PostBlog and ReadBlog). This is because they are not associated with any RFID tag; they are related to button controls on the graphical user interface instead.

3.6. Hardware infrastructure

Once we have described the system software architecture, we describe the hardware infrastructure that supports the prototype that was the subject of the preliminary usability evaluation described in Section 4. The hardware infrastructure is composed of four basic elements:

1. The communication network.
2. The mobile device.
Fig. 6. Interactive Panel variant of the distributed Model-View Controller implementation. The client is defined by the View and Controller component jointly with a ServerProxy used to reach information from the server running the model of the application.
R. Tesoriero et al. / Int. J. Human-Computer Studies 72 (2014) 111–125
119
Fig. 7. The XML configuration file and XSD configuration schema. (a) The prototype XML configuration and (b) the application XML schema.
3. The panel. 4. The service provider.
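Returning to the client-side controller of Section 3.5.1, the combination of the Adapter and Observer patterns can be sketched as follows. The class names (RFIDReader, RFIDEvent, IRFIDListener) follow the paper, but the method names and the driver interface are illustrative assumptions, and the sketch is written in Python rather than the .NET languages used in the actual prototype:

```python
class RFIDEvent:
    """Carries the tag ID captured by the reader."""
    def __init__(self, tag_id):
        self.tag_id = tag_id

class IRFIDListener:
    """Observer interface implemented by interested components."""
    def on_tag_read(self, event):
        raise NotImplementedError

class RFIDReader:
    """Adapter over a concrete RFID technology; also the Observable role."""
    def __init__(self, driver):
        self.driver = driver      # concrete technology hidden behind the adapter
        self.listeners = []

    def add_listener(self, listener):
        self.listeners.append(listener)

    def poll(self):
        # Adapted driver call (assumed API); notifies every listener.
        tag_id = self.driver.read_tag()
        if tag_id is not None:
            event = RFIDEvent(tag_id)
            for listener in self.listeners:
                listener.on_tag_read(event)
        return tag_id

# Minimal usage with a fake driver standing in for real RFID hardware.
class FakeDriver:
    def read_tag(self):
        return "TAG-42"

class PrintingListener(IRFIDListener):
    def on_tag_read(self, event):
        print("read:", event.tag_id)

reader = RFIDReader(FakeDriver())
reader.add_listener(PrintingListener())
reader.poll()  # prints "read: TAG-42"
```

Swapping FakeDriver for a different adapter (passive RFID, NFC, etc.) leaves the listeners untouched, which is the point of insulating the technology behind the Adapter.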
3.6.1. The communication network

The application prototype employs a wireless access point as the bridge between the service provider and the mobile clients. While the connection between the service provider and the wireless access point is a wired connection that follows the IEEE 802.3 standard, the connection between the wireless access point and the mobile clients follows the IEEE 802.11g standard.

3.6.2. The service provider

The service provider is responsible for interpreting the user input information and answering accordingly. It runs the Microsoft Windows Server 2003 R2 operating system, Microsoft SQL Server 2005 as the database server and Microsoft Internet Information Server 7.0 as the Web server. The application was developed with Microsoft ASP.NET 3.5.

3.6.3. The mobile device

The mobile device is the tool that allows users to interact with the panel services according to their intentions. The mobile device is an HP iPAQ 214 Enterprise Handheld PDA that runs the Microsoft Windows Mobile 5.0 operating system; the application was developed on the Microsoft .NET Compact Framework 3.5. The prototype was deployed on the PDA device shown in Fig. 8(a), which was equipped with an RFID reader attached to the Compact Flash expansion slot (see Fig. 8(b)).

As the reader may have noticed, the technology of the prototype may seem out of date. The main reason to use this technology was to reuse the framework described in Fardoun et al. (2012) to generate a rapid prototype and perform the preliminary usability evaluation of the system. However, according to Fardoun et al. (2012), the same approach can be applied to a different implementation technology. For instance, NFC technology is currently integrated into several mobile devices, such as the Nokia 5140, Nokia 6212, Nokia 6131, Nokia 3220 + NFC Shell, Nokia EcoSensor, Samsung SGH-X700 NFC, Samsung D500E, SAGEM my700X Contactless, LG 600V contactless, BenQ T80, Samsung Galaxy Note II, etc. Besides, APIs supporting this technology are available for popular operating systems, such as Android.
3.6.4. The panel

The panel is the medium by which the user interacts with the system. It defines a set of hot-spots representing panel areas where users are able to approach their mobile devices to select a tool (i.e. vote, show photos, play videos, play audio tasks, etc.) or execute a command (i.e. select a zone on the map).

As panels are widely disseminated in public spaces (i.e. parks, monuments, the underground, public buildings, etc.), we have enriched them with passive RFID technology to improve the interaction between users and the environment. The main reasons to use passive RFID tags are their low acquisition and maintenance costs, derived from the lack of batteries.

Fig. 8. The mobile client hardware configuration. (a) The PDA prototype, (b) the RFID reader and (c) the RFID tag.

Fig. 8(c) shows a sample 4 cm × 4 cm passive tag that represents a hot-spot, or part of one, because RFID tags can be grouped to create a single entity of information by sharing the same reference. Fig. 9 shows how RFID tags are deployed on an interactive panel. Thus, the transition from a traditional panel to an Interactive Panel is easy and cheap.
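The grouping of several tags into a single hot-spot can be illustrated with a minimal lookup in which multiple tag IDs resolve to the same reference. The tag identifiers and reference names below are invented for this sketch and are not taken from the prototype:

```python
# Several physical tags can back one logical hot-spot: reading any of
# them must resolve to the same reference. All identifiers are invented.
HOTSPOT_OF_TAG = {
    "tag-001": "zone-north",   # three tags tiled over one map region
    "tag-002": "zone-north",
    "tag-003": "zone-north",
    "tag-101": "tool-vote",    # a single tag backing a tool selector
}

def resolve(tag_id):
    """Map a raw RFID read to the hot-spot reference it belongs to."""
    return HOTSPOT_OF_TAG.get(tag_id)

print(resolve("tag-002"))  # → zone-north
print(resolve("tag-101"))  # → tool-vote
```

Because the mapping lives in the configuration rather than in the tags themselves, a hot-spot can be enlarged or relocated by re-binding tag IDs, without touching the panel artwork.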
4. Preliminary usability evaluation

Although the Interactive Panel project won second place in the Spanish finals of the Microsoft Imagine Cup in 2008, we validated the idea by performing a preliminary usability evaluation based on the nine steps for conducting a qualitative test according to Sauro (2010):

1. Determine what you want to test.
2. Why are you testing?
3. Write a minimum of 3–5 tasks.
4. Recruit a set of users.
5. Test your users.
6. Collect as many metrics as possible.
7. Code your data and analyze it.
8. Generate confidence intervals.
9. Summarize the results in graphs and a report or briefing.

Fig. 9. The RFID panel implementation.
To carry out this task, we followed the ISO 9126-4 (ISO/IEC 9126-4, 2001) international standard guidelines to evaluate software usability, which is defined as the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.
4.1. Determine what you want to test

We want to evaluate the application user interfaces and the interaction mechanism that controls them. The functions we want to test are:

1. Reproduce multimedia resources from specific regions.
2. Browse multimedia resources from a gallery.
3. Post blog comments.
4. Get help from the panel.
4.2. Why are you testing?

The main reasons are: find and fix potential usability problems; provide designers with information about how users interact with
the system; gather good and bad aspects related to the interaction mechanism; assess how easy it is to interact with the Interactive Panels using RFID technology; and assess how easy it is to exchange information among users through the Interactive Panels.

4.3. Write a minimum of 3–5 tasks

All participants were asked to perform the same six tasks under the same conditions:

1. Show a video of Region 1.
2. Go to the next video.
3. Show a photo of Region 2.
4. Write down a comment on the blog.
5. Show the help screen related to the Show Resource Tool.
6. Exit from the help mode.
4.4. Recruit a set of users

According to Sauro (2010), 14 users are enough to find 85% of the usability problems that may affect 30% of future users. The profile of the users targeted by the application is broad. Therefore, we selected a user population of 13 users, aged from 13 to 58 years old (37 years old on average): 6 female and 7 male. Fig. 10 shows the distribution of user ages according to four groups: Teenagers (12–18 years old), Young people (19–29 years old), Adults (30–50 years old) and Elders (above 50 years old). The technological background of the users is that of people familiar with mobile phones; in addition, one of them is familiar with RFID technology.

4.5. Test your users

The testing period spanned two weeks from the beginning to the end of the experiment, and the testing was conducted in a usability laboratory (controlled environment). Each testing session was about 15–20 min long. After each session, users filled in a System Usability Scale (SUS) questionnaire (Brooke, 1996) containing the following questions:

1. I think that I would like to use this system frequently.
2. I found the system unnecessarily complex.
3. I thought the system was easy to use.
4. I think that I would need the support of a technical person to be able to use this system.
5. I found the various functions in this system were well integrated.
6. I thought there was too much inconsistency in this system.
7. I would imagine that most people would learn to use this system very quickly.
8. I found the system very cumbersome to use.
9. I felt very confident using the system.
10. I needed to learn a lot of things before I could get going with this system.

4.6. Collect as many metrics as possible
The efficiency evaluation is based on the measurement of task completion times. The effectiveness evaluation is based on the measurement of task completion rates (typically measured as a binary value: 1 = passed the task, 0 = failed the task) and the number of user errors. The satisfaction evaluation is based on the analysis of a SUS questionnaire where each question is rated on a scale from 1 to 5 (1 = Strongly disagree, 5 = Strongly agree).

4.7. Code your data and analyze it

As a result of the test sessions we collected the information shown in Tables 1 and 2. Regarding task completion, all users completed all proposed tasks. On the one hand, Table 1 shows users' time to complete each task and how many mistakes they made before successfully performing each task. On the other hand, Table 2 shows how many times they asked for help to perform each task. Finally, the results of the SUS questionnaire are depicted in Table 3.

4.8. Generate confidence intervals

According to Sauro (2010), the confidence intervals around the average time for each task assume the data to be approximately normally distributed. To compute the confidence intervals using the t-distribution we use formula (1), where $\bar{X}$ is the mean of the log-task times, $s/\sqrt{n}$ is the standard error of the mean, $n$ is the sample size and $t$ is the critical value from the t-distribution for the given number of degrees of freedom and desired confidence level:

$$\bar{X} \pm t\,\frac{s}{\sqrt{n}} \qquad (1)$$
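Formula (1) can be applied to the raw Task 1 times of Table 1 as follows. The sketch follows Sauro's log-transform recommendation and hard-codes the two-sided 95% critical value t ≈ 2.179 for n − 1 = 12 degrees of freedom; since the interval is computed on log-times and then transformed back, it is asymmetric around the geometric mean and will not reproduce Table 4 exactly:

```python
import math

# Task 1 completion times in seconds (from Table 1).
times = [117, 90, 20, 95, 150, 27, 10, 160, 90, 10, 18, 15, 57]

def log_time_ci(samples, t_crit):
    """Confidence interval for the average task time computed on
    log-times, then transformed back to seconds."""
    logs = [math.log(x) for x in samples]
    n = len(logs)
    mean = sum(logs) / n
    var = sum((x - mean) ** 2 for x in logs) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                             # standard error of the mean
    lo, hi = mean - t_crit * se, mean + t_crit * se
    # Back-transform: the centre is the geometric mean of the raw times.
    return math.exp(lo), math.exp(mean), math.exp(hi)

# t critical value for 12 degrees of freedom at the 95% confidence level.
low, centre, high = log_time_ci(times, t_crit=2.179)
print(low, centre, high)
```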
The confidence intervals around the average time for each task are depicted in Table 4.

Fig. 10. User age distribution.

4.9. Summarize the results in graphs and a report or briefing

Regarding task completion, all users completed all tasks. However, comparing the average number of errors per task of our approach (presented in Table 1) to Sauro's benchmarks (Sauro, 2010, p. 67), we conclude that: Task 1 matches the 0–10th percentile, Task 2 matches the 50th percentile, Task 3 matches the 90–99th percentile, Task 4 matches the 20th percentile, Task 5 matches the 40th percentile and Task 6 matches the 90–99th percentile.
Table 1
Completion time (in seconds) and errors for each task.

| ID | Age | T1 time | T1 err. | T2 time | T2 err. | T3 time | T3 err. | T4 time | T4 err. | T5 time | T5 err. | T6 time | T6 err. | Total time | Total err. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 58 | 117 | 7 | 50 | 3 | 5 | 0 | 35 | 5 | 16 | 3 | 5 | 0 | 228 | 18 |
| 2 | 55 | 90 | 5 | 15 | 0 | 15 | 0 | 25 | 0 | 10 | 0 | 5 | 0 | 150 | 5 |
| 3 | 54 | 20 | 0 | 10 | 0 | 2 | 1 | 7 | 0 | 13 | 0 | 10 | 0 | 25 | 1 |
| 4 | 52 | 95 | 3 | 18 | 2 | 5 | 0 | 43 | 3 | 34 | 4 | 1 | 0 | 196 | 12 |
| 5 | 51 | 150 | 5 | 104 | 3 | 2 | 0 | 15 | 0 | 80 | 5 | 2 | 0 | 353 | 13 |
| 6 | 50 | 27 | 1 | 9 | 0 | 10 | 0 | 30 | 1 | 21 | 1 | 7 | 0 | 104 | 3 |
| 7 | 32 | 10 | 0 | 11 | 1 | 6 | 0 | 30 | 15 | 20 | 0 | 1 | 0 | 321 | 16 |
| 8 | 26 | 160 | 6 | 14 | 1 | 8 | 1 | 7 | 0 | 10 | 0 | 1 | 0 | 191 | 8 |
| 9 | 24 | 90 | 6 | 10 | 0 | 1 | 0 | 60 | 2 | 49 | 2 | 1 | 0 | 202 | 10 |
| 10 | 21 | 10 | 0 | 15 | 0 | 1 | 0 | 16 | 0 | 6 | 0 | 7 | 0 | 32 | 0 |
| 11 | 21 | 18 | 0 | 19 | 0 | 5 | 0 | 12 | 0 | 13 | 0 | 1 | 0 | 30 | 0 |
| 12 | 17 | 15 | 0 | 8 | 0 | 5 | 0 | 30 | 0 | 14 | 0 | 4 | 0 | 76 | 0 |
| 13 | 16 | 57 | 4 | 20 | 0 | 3 | 0 | 53 | 5 | 10 | 0 | 2 | 0 | 118 | 9 |
| Sum | | 859 | 37 | 303 | 10 | 122 | 2 | 363 | 31 | 306 | 15 | 47 | 0 | | |
| Average | | 66.08 | 2.85 | 23.31 | 0.77 | 9.38 | 0.15 | 27.92 | 2.38 | 23.54 | 1.15 | 3.62 | 0.00 | | |

Table 2
Help needed according to task and user.

| ID | Age | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 |
|---|---|---|---|---|---|---|---|
| 1 | 58 | Yes | Yes | No | Yes | Yes | No |
| 2 | 55 | Yes | No | No | No | No | No |
| 3 | 54 | No | No | No | No | No | No |
| 4 | 52 | Yes | No | No | No | No | No |
| 5 | 51 | Yes | No | No | No | No | No |
| 6 | 50 | No | No | No | Yes | No | Yes |
| 7 | 32 | No | No | No | Yes | No | No |
| 8 | 26 | Yes | No | No | No | No | No |
| 9 | 24 | Yes | No | No | No | No | No |
| 10 | 21 | No | No | No | Yes | No | No |
| 11 | 21 | No | No | No | No | No | No |
| 12 | 17 | Yes | Yes | No | Yes | Yes | No |
| 13 | 16 | No | No | No | No | No | No |

Table 3
SUS questionnaire results.

| User ID | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9 | Q10 | SUS | Usab. | Learn. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 3 | 4 | 4 | 1 | 4 | 5 | 1 | 4 | 1 | 1 | 45.0 | 31.3 | 100.0 |
| 2 | 3 | 2 | 4 | 2 | 2 | 2 | 4 | 2 | 4 | 2 | 67.5 | 65.6 | 75.0 |
| 3 | 3 | 1 | 4 | 2 | 1 | 1 | 5 | 1 | 5 | 2 | 76.4 | 76.7 | 75.0 |
| 4 | 3 | 3 | 5 | 2 | 3 | 4 | 2 | 3 | 2 | 2 | 52.5 | 46.9 | 75.0 |
| 5 | 4 | 5 | 5 | 2 | 5 | 4 | 2 | 5 | 2 | 2 | 50.0 | 43.8 | 75.0 |
| 6 | 4 | 3 | 4 | 4 | 3 | 2 | 4 | 3 | 4 | 4 | 57.5 | 65.6 | 25.0 |
| 7 | 3 | 5 | 4 | 2 | 5 | 5 | 1 | 5 | 1 | 2 | 37.5 | 28.1 | 75.0 |
| 8 | 3 | 3 | 4 | 2 | 3 | 3 | 3 | 3 | 3 | 2 | 57.5 | 53.1 | 75.0 |
| 9 | 3 | 3 | 5 | 2 | 3 | 3 | 3 | 3 | 3 | 2 | 60.0 | 56.3 | 75.0 |
| 10 | 3 | 1 | 4 | 2 | 1 | 1 | 5 | 1 | 5 | 2 | 77.5 | 78.1 | 75.0 |
| 11 | 3 | 1 | 5 | 1 | 1 | 1 | 5 | 1 | 5 | 1 | 85.0 | 81.3 | 100.0 |
| 12 | 3 | 1 | 4 | 4 | 1 | 1 | 5 | 1 | 5 | 4 | 67.5 | 78.1 | 25.0 |
| 13 | 2 | 1 | 5 | 1 | 1 | 3 | 3 | 1 | 3 | 1 | 67.5 | 59.4 | 100.0 |
| Average | | | | | | | | | | | 61.6 | 58.8 | 73.1 |

This information shows that Task 1 has a bad score because the average number of errors for this task is in the 10% of tasks with the lowest scores among the 719 tasks evaluated in Sauro's benchmark (Sauro, 2010). Besides, Task 2 is better than 50% of Sauro's dataset, which shows that users get used to the application quickly. The number of errors of Task 3 shows that
the user has learned how to use the system (it matches the 90–99th percentile). However, the number of errors of Task 4 shows that this task is not easy to carry out, even once the user has become accustomed to the system. This is due to the fact that writing a comment involves a change of interaction paradigm, from "approach and remove" to touch-screen interaction. Therefore, interaction mechanism changes are not welcome. Tasks 5 and 6 mark a good score because the user has experience with the interaction mechanism (going back to the "approach and remove" interaction paradigm).

The confidence intervals around the average time for each task are depicted in Fig. 11. It shows that the average task completion time decreases as the user gets used to the system. An exception occurs when performing Task 4, due to the interaction paradigm change we have described.

Fig. 12 shows task completion time according to user age. We can deduce that, although elder people take more time to complete tasks than younger people, they get used to the system quickly. Besides, knowledge of the technology is a key factor in improving system usability, because when a user knows the technology (RFID) the task completion time is lower (i.e. User 3).

The satisfaction results are shown in Table 3. They denote an average satisfaction of 61.6, which matches the 50th percentile of Sauro's dataset (Sauro, 2010).
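The SUS scores in Table 3 follow Brooke's standard scoring rule: odd-numbered (positive) items contribute (response − 1) points, even-numbered (negative) items contribute (5 − response) points, and the sum is multiplied by 2.5 to yield a 0–100 score. A minimal sketch, using the responses of User 1 from Table 3:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (Brooke, 1996) from
    ten responses on a 1-5 scale, questions 1-10 in order."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items are positively worded, even items negatively worded.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Responses of User 1 from Table 3 (questions 1-10).
print(sus_score([3, 4, 4, 1, 4, 5, 1, 4, 1, 1]))  # → 45.0
```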
5. Discussion

In this section we present the benefits of our proposal compared with the works mentioned in Section 2.

The "approach and remove" interaction mechanism replaces the traditional pointer, usually controlled by a trackball, because user input is captured through users' gestures. The only exception, which is inherent to our prototype implementation, is the message post action on the blog (triggered by a button widget on the UI). Therefore, this approach reduces the cost required to learn how to control environment-embedded devices, such as trackballs, which are currently not very popular. Besides, as we explained in Section 2, it tackles the problem of motion-capture cameras and human interaction devices (e.g. Microsoft Kinect) in outdoor public spaces, which are not easy to deploy due to environmental issues such as light intensity. Finally, this interaction mechanism does not require any extra hardware, such as the Wii Remote or the PlayStation Move; users employ smartphones or tablets to interact with the environment.
Table 4
Average time, margin of error and confidence intervals for each task.

| Results | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 |
|---|---|---|---|---|---|---|
| Average task time | 66.10 | 23.30 | 5.20 | 27.90 | 22.80 | 3.60 |
| Margin of error | 32.79 | 16.04 | 2.42 | 10.19 | 12.53 | 1.82 |
| Confidence interval | [33.26–98.89] | [7.27–39.34] | [2.84–7.62] | [17.76–38.09] | [10.21–35.33] | [1.81–5.42] |
Fig. 11. The confidence intervals around the average time for each task in seconds.
Fig. 12. Task completion time according to the user age.
Regarding the panel, it can be easily deployed in different locations, such as public buildings, or even in outdoor environments across the city. For example, bus stops or metro stations may offer information to users through Interactive Panels. By using these panels, people are able to interact in situ with other people or social institutions while waiting for the bus or the metro. The panel is also easy to update or relocate because of the mechanical simplicity of this approach, which is based on passive RFID tags that do not require any extra hardware to operate. As an example, the same approach used to choose the location of the next green zone in the city may be employed to choose the location of garbage containers or of public lights in a square, or to gather information about the public transport system.

The most relevant benefits of this approach are summarized below:

• The implementation of a novel gestural user interface for mobile devices, where RFID technology allows designers to define a new interaction mechanism based on the integration of natural and intuitive gestures.
• The use of an open software architecture, such as SOA, by means of a native user interface to interact with physical objects in the environment.
• Bringing the Web 2.0 to outdoor public spaces by means of physical object interaction in the environment.
• Considering RFID technology as an approach to improve Web 2.0 accessibility through physical object interaction in public spaces. For instance, the use of audio-texts improves access to public information for visually impaired people.
• Panel information internationalization, through the use of gestures as a universal language; this is a difficult goal to achieve with traditional panels.
• Flexibility to update the panel contents and to adapt them to different application domains.
• Low infrastructure costs. Interactive Panels are easy to deploy because no external power supply is needed, and manufacturing costs are low due to the popularity of passive RFID tags. Besides, mobile devices are progressively integrating RFID readers, which further reduces costs because no external reader is needed to read RFID tags.
• Passive RFID is a reliable technology that has been demonstrated to be robust and safe. For instance:
○ Compared to touch screens or QR codes, passive RFID tags proved to be a better solution when dealing with adverse climatological conditions, such as rain or sunlight.
○ The risk of fire is extremely low compared to other approaches, such as touch screens, that require an external power supply.
6. Conclusions and future work

This work describes a solution based on the successful integration of social software and context-aware applications. The system offers users a collaborative application that allows them to share comments and ideas about topics related to environmental issues by performing natural and intuitive gestures. Thus, this approach brings the Web 2.0 from the virtual space to public physical spaces by means of "person-object-person" interaction. Besides, it defines a completely new type of distributed user interface that employs heterogeneous media to support user interaction (physical panels, RFID technology and mobile devices). It enables the participation of society and serves as a tool to increase social responsibility for creating a sustainable environment.

The Interactive Panels allow citizens both to retrieve information and to express opinions without the creation of a completely new infrastructure, by recycling the informative panels that are already present in our squares, streets, bus stops, etc. Besides, the deployment and maintenance costs are extremely low due to the widespread propagation of RFID technology and the low cost of passive tags, which do not need a permanent external power supply to work.

The proposal integrates some of the most promising technologies: mobile devices, Wi-Fi networks, RFID technology and the SOA architecture. Besides, the use of RFID has improved user interaction through the definition of a natural interaction mechanism. This novel interaction mechanism is not limited to information retrieval, as presented in many previous works; it also provides users with the ability to execute parametrized commands and to add information to the system.

The usability of the system has been evaluated using both quantitative and qualitative metrics, showing an acceptable degree of usability, as users performed the proposed tasks with an acceptable degree of effectiveness, efficiency and satisfaction.

Regarding future work, we are incorporating interfaces to existing services, such as Google Maps. Besides, the design of an RFID-based keyboard to avoid interaction paradigm changes while writing seems to be a good direction to follow. The extrapolation of this application concept to other domains is almost trivial, and we are now working on the development of interactive panels oriented to e-learning systems. Finally, we also plan to improve the content management application, which is currently at a preliminary stage. This is an important issue to take into account to have a full-fledged system.
Acknowledgments

We thank the CICYT-TIN 2011-27767-C02-01 Spanish project, and the PPII10-0300-4174 and PII2C09-0185-1030 JCCM projects for supporting this research. We would also like to thank the Programa de Potenciación de Recursos Humanos from the Scientific Research, Technological Development and Innovation Regional Plan 2011-2015 (PRINCET).

References

Abowd, G.D., Atkeson, C.G., Hong, J.I., Long, S., Kooper, R., Pinkerton, M., 1997. Cyberguide: a mobile context-aware tour guide. Wireless Networks 3 (5), 421–433. http://dx.doi.org/10.1023/A:1019194325861.

Becker, B., Huber, M., Klinker, G., 2008. Utilizing RFIDs for location aware computing. In: Sandnes, F.E., Zhang, Y., Rong, C., Yang, L.T., Ma, J. (Eds.), UIC, Lecture Notes in Computer Science, vol. 5061. Springer, pp. 216–228. http://dx.doi.org/10.1007/978-3-540-69293-5_18.

Brooke, J., 1996. SUS: a quick and dirty usability scale. In: Jordan, P.W., Weerdmeester, B., Thomas, A., Mclelland, I.L. (Eds.), Usability Evaluation in Industry. Taylor and Francis, London.

Butz, A., Baus, J., Kruger, A., 2000. Augmenting buildings with infrared information (January 2000). http://citeseer.ist.psu.edu/645312.html; http://www1.cs.columbia.edu/graphics/courses/mobwear/resources/butz-isar00.pdf.

Cheverst, K., Mitchell, K., Davies, N., 1999. Design of an object model for a context sensitive tourist guide. In: Computers and Graphics, pp. 883–891.

Cheverst, K., Davies, N., Mitchell, K., Friday, A., 2000. Experiences of developing and deploying a context-aware tourist guide: the GUIDE project. In: Proceedings of the 6th Annual International Conference on Mobile Computing and Networking, MobiCom '00. ACM, New York, NY, USA, pp. 20–31. http://dx.doi.org/10.1145/345910.345916.

Ciavarella, C., Paternò, F., 2003. Design criteria for location-aware, indoor, PDA applications. In: Chittaro, L. (Ed.), Human–Computer Interaction with Mobile Devices and Services, 5th International Symposium, Mobile HCI 2003, Udine, Italy, September 8–11, 2003, Proceedings, Lecture Notes in Computer Science, vol. 2795. Springer, pp. 131–144. http://dx.doi.org/10.1007/978-3-540-45233-1_11.

Ciavarella, C., Paternò, F., 2004. The design of a handheld, location-aware guide for indoor environments. Personal and Ubiquitous Computing 8 (2), 82–91. http://dx.doi.org/10.1007/s00779-004-0265-z.

Davies, N., Mitchell, K., Cheverst, K., Blair, G., 1998. Developing a context sensitive tourist guide. In: Proceedings of the First Workshop on Human–Computer Interaction with Mobile Devices, Glasgow, Scotland, 1998-05-21/1998-05-22, p. 10. http://www.dcs.gla.ac.uk/johnson/papers/mobile/HCIMD1.html#_Toc420818986.

Demonstration Interactive Eco Panels, 2008. http://www.youtube.com/watch?v=Z9DKYC2_gqs.

Dey, A.K., Abowd, G.D., 2000. Towards a better understanding of context and context-awareness. In: Workshop on the What, Who, Where, When, and How of Context-Awareness, as part of the 2000 Conference on Human Factors in Computing Systems (CHI 2000).

Paternò, F., 1999. Model-Based Design and Evaluation of Interactive Applications. Applied Computing. Springer-Verlag.

Fardoun, H.M., Altalhi, A.H., Villanueva, P.G., Tesoriero, R., Gallud, J.A., 2012. A framework to develop web applications based on RFID panels. In: Fardoun, H.M. (Ed.), Proceedings of the 14th International Conference on Enterprise Information Systems ICEIS 2012. 1st International Workshop on Interaction Design in Educational Environments IDEE 2012, June 28, 2012, Wroclaw, Poland, vol. 707. INSTICC, pp. 37–46.

Gamma, E., Helm, R., Vlissides, J., Johnson, R.E., 1993. Design patterns: abstraction and reuse of object-oriented design. In: Nierstrasz, O. (Ed.), Proceedings ECOOP '93, LNCS, vol. 707. Springer-Verlag, Kaiserslautern, Germany, pp. 406–431. ftp://st.cs.uiuc.edu/pub/papers/patterns/ecoop93-patterns.ps.

Hähnel, D., Burgard, W., Fox, D., Fishkin, K.P., Philipose, M., 2004. Mapping and localization with RFID technology. In: ICRA, IEEE, pp. 1015–1020. http://dx.doi.org/10.1109/ROBOT.2004.1307283.
Hosio, S., Jurmu, M., Kukka, H., Riekki, J., Ojala, T., 2010. Supporting distributed private and public user interfaces in urban environments. In: Dalton, A., Want, R. (Eds.), HotMobile. ACM, pp. 25–30. http://doi.acm.org/10.1145/1734583.1734590.

ISO/IEC 9126-4, 2001. Software engineering – software product quality – part 4: quality in use metrics.

Kukka, H., Kruger, F., Kostakos, V., Ojala, T., Jurmu, M., 2011. Information to go: exploring in-situ information pick-up "in the wild". In: Campos, P., Graham, T.C.N., Jorge, J.A., Nunes, N.J., Palanque, P.A., Winckler, M. (Eds.), INTERACT (2), Lecture Notes in Computer Science, vol. 6947. Springer, pp. 487–504. http://dx.doi.org/10.1007/978-3-642-23771-3.

Long, S., Kooper, R., Abowd, G.D., Atkeson, C.G., 1996. Rapid prototyping of mobile context-aware applications: the Cyberguide case study. In: Proceedings of the 2nd Annual International Conference on Mobile Computing and Networking (MobiCom '96), Rye, New York, USA, pp. 97–107. http://www.cc.gatech.edu/fce/cyberguide/pubs/mobicom96-cyberguide.ps.

Lozano, M.D., Tesoriero, R., Gallud, J.A., Penichet, V.M.R., 2007. A mobile software developed for art museums: conceptual model and architecture. In: Filipe, J., Cordeiro, J., Encarnação, B., Pedrosa, V. (Eds.), WEBIST 2007 – Proceedings of the Third International Conference on Web Information Systems and Technologies, Volume IT, Barcelona, Spain, March 3–6, 2007. INSTICC Press, pp. 469–474.

Müller, J., Alt, F., Michelis, D., Schmidt, A., 2010. Requirements and design space for interactive public displays. In: Proceedings of the International Conference on Multimedia, MM '10. ACM, New York, NY, USA, pp. 1285–1294. http://doi.acm.org/10.1145/1873951.1874203.

Ni, L.M., Liu, Y., Lau, Y.C., Patil, A.P., 2004. LANDMARC: indoor location sensing using active RFID. Wireless Networks 10 (6), 701–710. http://dx.doi.org/10.1023/B:WINE.0000044029.06344.dd.

Oppermann, R., Specht, M., 2000. A context-sensitive nomadic exhibition guide. In: Thomas, P.J., Gellersen, H.-W. (Eds.), Handheld and Ubiquitous Computing, Second International Symposium, HUC 2000, Bristol, UK, September 25–27, 2000, Proceedings, Lecture Notes in Computer Science, vol. 1927. Springer, pp. 127–142. http://dx.doi.org/10.1007/3-540-39959-3_10.

Paternò, F., Santoro, C., 2007. Exploiting mobile devices to support museum visits through multi-modal interfaces and multi-device games. In: Filipe, J., Cordeiro,
J., Encarnação, B., Pedrosa, V. (Eds.), WEBIST 2007 – Proceedings of the Third International Conference on Web Information Systems and Technologies, Volume IT, Barcelona, Spain, March 3–6. INSTICC Press, pp. 459–465.

Sauro, J., 2010. A Practical Guide to Measuring Usability. CreateSpace.

Terrenghi, L., Quigley, A.J., Dix, A.J., 2009. A taxonomy for and analysis of multi-person-display ecosystems. Personal and Ubiquitous Computing 13 (8), 583–598. http://dx.doi.org/10.1007/s00779-009-0244-5.

Tesoriero, R., Gallud, J.A., Lozano, M.D., Penichet, V.M.R., 2008. A location-aware system using RFID and mobile devices for art museums. In: ICAS, IEEE Computer Society, pp. 76–81. http://doi.ieeecomputersociety.org/10.1109/ICAS.2008.38.

Tesoriero, R., Gallud, J., Lozano, M., Penichet, V., 2012. Enhancing visitors' experience in art museums using mobile technologies. Information Systems Frontiers, 1–25. http://dx.doi.org/10.1007/s10796-012-9345-1.

Välkkynen, P., Korhonen, I., Plomp, J., Tuomisto, T., 2003. A user interaction paradigm for physical browsing and near-object control based on tags. In: Physical Interaction (PI03) Workshop, 2003. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.123.4120&rep=rep1&type=pdf.

Välkkynen, P., Niemelä, M., Tuomisto, T., 2006. Evaluating touching and pointing with a mobile terminal for physical browsing. In: Proceedings of the 4th Nordic Conference on Human–Computer Interaction: Changing Roles, NordiCHI '06. ACM, New York, NY, USA, pp. 28–37. http://doi.acm.org/10.1145/1182475.1182479.

Vanderdonckt, J., 2010. How to distribute user interface elements across users, platforms, and environments. In: INTERACCION 2010 – Proceedings of the 10th Congreso Internacional de Interacción Persona-Ordenador, Interacción '2010, Valencia, Spain, pp. 3–14.

Yang, W., Denecke, M., Waibel, A., 1999. Smart sight: a tourist assistant system. In: ISWC, IEEE Computer Society, pp. 73–78. http://www.computer.org/csdl/proceedings/iswc/1999/0428/00/index.html.

Yeom, J.-H., Tan, B.-K., 2010. Mirrored message wall: sharing between real and virtual space. In: CHI '10 Extended Abstracts on Human Factors in Computing Systems, CHI EA '10. ACM, New York, NY, USA, pp. 4783–4788. http://doi.acm.org/10.1145/1753846.1754231.