Modeling Context through Identification: An Approach to Embedded Interaction

Bravo, J., Hervás, R., Chavira, G. & Nava, S.
Castilla-La Mancha University
Paseo de la Universidad, 4
10071 – Ciudad Real - Spain
[email protected]

Abstract. Pervasive computing needs scenarios and applications capable of detecting users and providing them with good-quality contextual information. In this sense we should take full advantage of technologies which make it possible to identify and locate people in a way that is adapted to their requirements. Our aim is to create non-intrusive applications for handling the daily activities of users. If possible, these applications should operate without any extra effort in terms of interaction. In this work we present an approach to context awareness through RFID technology, identifying users as implicit inputs to the system and offering services as implicit outputs. The particular contexts studied here are the classroom, the research laboratory and the lecturer's office.

1. Introduction

New forms of computing are appearing that aim at natural interfaces, making the computer easier to use by distributing it among artefacts placed in the environment around us. In this sense we speak of pervasive computing, referring to the omnipresence of computers: a multitude of devices carrying out everyday computational activities invisibly and unobtrusively, freeing people to a large extent from tedious routine tasks. The IST Advisory Group (ISTAG) of the European Union formulated its vision of "Ambient Intelligence" (AmI) in 1999. It refers to a new paradigm of information technology in which people are empowered through a digital environment that is aware of their presence and context, and is sensitive, adaptive and responsive to their needs, habits, gestures and emotions. In AmI the technology becomes invisible, embedded, present whenever we need it, enabled by simple interactions, attuned to all our senses and adaptive to users and contexts [1]. AmI proposes a shift in computing from the traditional computer to a whole set of devices placed around us, providing users with an intelligent background. One of the most significant challenges in AmI/pervasive computing is to create user-friendly interfaces which recognize and respond to the presence of individuals in a given environment in a way that is not openly visible.


Ambient Intelligence is based on three key technologies: first, Ubiquitous Computing, integrating microprocessors into everyday objects; then Ubiquitous Communication, which allows these objects to communicate with each other and with users; and finally natural interfaces, for interacting with the environment in an easier and more personalized way. However, for this vision to become a reality it is necessary to handle context-aware information. A. Dey defines context as "any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and application themselves" [2]. The same author defines a system as context-aware if "it uses context to provide relevant information and/or services to the user, where relevancy depends on the user's task". In order to use context effectively it is necessary to identify certain types of context-aware information [3]. The user profile and situation are essential, that is, identity-awareness (IAw). The relative location of people is location-awareness (LAw). Time-awareness (TAw) is another main type of context-awareness that is always present. The task which the user carries out, and everything he wants to do, is captured by activity-awareness (AAw). Finally, it is important to bear in mind why the user wants to carry out a task in a certain place; this is objective-awareness (OAw). All these types of awareness answer the five basic questions (Who, Where, What, When and Why) that provide the guidelines for context modelling. If we want computers to perform everyday activities, these tasks should be carried out without any extra user interaction wherever possible. We aim to use new artefacts placed around us and to complement traditional explicit interaction.
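The five awareness types above can be grouped into a single context record. The following is a minimal sketch in Python (the paper gives no code); the class and field names are illustrative, not taken from the authors' implementation.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record grouping the five context-awareness types
# (IAw, LAw, TAw, AAw, OAw); field names answer the five W's.
@dataclass
class ContextRecord:
    who: str        # identity-awareness (IAw): user id / profile
    where: str      # location-awareness (LAw): room or zone
    when: datetime  # time-awareness (TAw)
    what: str       # activity-awareness (AAw): current task
    why: str        # objective-awareness (OAw): purpose of the task

record = ContextRecord(
    who="lecturer-42", where="classroom-B1",
    when=datetime(2006, 3, 14, 9, 0),
    what="lecture", why="HCI course session 5",
)
print(record.who, record.where, record.what)
```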
So, new forms of interaction, which are more natural and closer to the user, need to be available. Albrecht Schmidt [4][5] proposes a definition of implicit human-computer interaction (iHCI): "iHCI is the interaction of a human with the environment and with artefacts, which is aimed to accomplish a goal. Within this process the system acquires implicit input from the user and may present implicit output to the user". Schmidt defines implicit input as user perceptions of interaction with the physical environment, allowing the system to anticipate the user by offering explicit outputs. This agrees with the idea that users should concentrate on the task and not on the tool. In addition, this author defines embedded interaction in two senses: the first is embedding technologies into artefacts, devices and environments; the second, at a conceptual level, is the embedding of interaction in user activities (tasks or actions) [6]. With these ideas in mind, our main goal is to achieve natural interaction, as the implicit interaction concept proposes. The present work sets out the identification process as an implicit and embedded input to the system, perceiving the user's identity, his profile and other kinds of dynamic data. This information, complemented by the context in which the user is found, as well as by the schedule and time, makes it possible to anticipate the user's needs. An additional contribution is to complement system inputs with sensorial capabilities as well. We should point out that embedded interaction is also present when the user approaches visualization devices and obtains adaptive information.

A good source of input to such systems is devices that identify and locate people. Want and Hopper locate individuals by means of active badges which transmit signals that are picked up by sensors situated in the building [7]. IBM developed the BlueBoard experiment, with a display and RFID tag for user collaboration [8]. A similar project is IntelliBadge [9], developed for the academic conference context. The present work focuses on the search for context-aware situations, allowing users to obtain system outputs through natural interactions on campus in three contexts: the classroom, the office of the lecturer and the lab of the research group. Under the next heading we present the identification process using RFID technology and the way we applied the context concepts. In section three, architectures for the contexts mentioned before are described. Section 4 presents services by identification: those offered by RFID technology itself and our contribution of Context-Aware Visualization-Based Services, called "Mosaics". In the section following that, a proposal for complementing implicit and embedded interaction by id-sensor fusion is shown. Finally, conclusions and future work are set out.

2. Towards Natural Interaction

The main goal of this work is to investigate how users' interaction with computers becomes easier thanks to the identification process. This interaction is further improved when it is done without any explicit interaction, as Schmidt proposes. In this sense we are adapting RFID technology embedded in different objects such as credit cards, watches, etc., modeling the context-aware environment and trying to build on the user's daily actions so as to obtain the necessary, unobtrusive input to the system. The identification of individuals is a very good implicit input to the computer. The simple action of walking near the antenna allows the system to read and write information such as an id. number, a profile and other very useful items of information which may vary depending on the context where the users find themselves. We have focused the context aspects mentioned before on the identification process [10][11][12][13]. We are therefore placing these concepts strategically in order to obtain visualization services for the user, as shown in figure 1. We aim to make the interaction transparent, non-intrusive and included in the everyday activities of the user. Thus, a user wearing tags and walking through a building can interact with the context without any explicit interaction. He or she can also obtain typical identification services such as location, access, presence, inventory, phone call routing, etc. All of these are obtained with a combination of the "who", "where" and "when" concepts of context. In the following section we present the applied technology and our idea of implicit and embedded interaction, which comes about as a result of a user's everyday actions.


Fig. 1.- Context Concepts through Identification.

2.1 The technology: RFID

This technology is commonly used to identify objects. Our purpose is to use it to identify people, with the added advantage of being able to use the small amount of dynamic information stored in the tags as context for the interaction. If we identify people and objects using the same components, there is a consequent saving in readers and antennas, and services can be added as new needs appear. Figure 2 shows two types of device. The one on the left is a reader and antenna with read-and-write capability and a reach of over 75 cm. It has been especially designed for placement on classroom doors or near boards. It can read several tags simultaneously when identifying people entering the classroom, and can also identify the teacher or the students who approach the board. The one on the right is a contact reader including an antenna with a reach of only 10 cm; a model of the tag is also shown. This identification system is intended especially for individual use. We also use another kind of RFID set, offering more distance between reader and tags (2 or 3 meters), to control entry to and exit from each context. This system is called HFKE (Hands Free Keyless Entry) and has a semi-passive tag using a battery along with 32 Kbytes of EPROM memory for user data. The reader (HFRDR) continuously transmits a low-frequency (125 kHz) wave. When a tag detects this wave it activates its microcontroller, which sends the required information back at UHF frequency. There is also another device, called HFCTR: a control unit for up to eight readers within a distance of up to 1000 meters, connected to the network via TCP/IP.
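When a tag passes an antenna, the system must decode the small payload it carries (an id, a profile and some dynamic data, as described above). The following Python sketch illustrates this step; the byte layout and field encoding are assumptions for illustration, not the actual HFKE tag format.

```python
# Minimal sketch of decoding a tag-read event from a door antenna.
# Hypothetical layout: 4-byte big-endian id, 1-byte profile code,
# remainder free-form dynamic data (e.g. a lesson reference).

def parse_tag(payload: bytes) -> dict:
    uid = int.from_bytes(payload[:4], "big")
    profile = {0: "student", 1: "lecturer"}.get(payload[4], "unknown")
    dynamic = payload[5:].decode("utf-8", errors="replace")
    return {"id": uid, "profile": profile, "data": dynamic}

event = parse_tag(b"\x00\x00\x01\x2a\x01lesson:HCI-05")
print(event)  # {'id': 298, 'profile': 'lecturer', 'data': 'lesson:HCI-05'}
```

The dynamic-data field is what makes the tag more than an identifier: the classroom server can rewrite it as the user moves between contexts.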


Fig. 2.- RFID devices

2.2 The user actions

This technology offers some traditional services such as access control to a building, lab, classroom or office. The embedded interaction consequently comes from user actions such as walking near doors or boards; the reaction of the context could be location, access control, etc. There are some situations where it is very important to locate people quickly. To do that, interfaces that are closer to the user are vital, so that the task can be carried out with, for example, a simple click of the mouse; in other words, without much effort in terms of interaction. Other actions, such as going into or leaving the classroom and approaching the board ("proximity"), could be considered embedded interaction. Both of these could put visualization services for users into operation. This would mean, for instance, visualization for the class when a lecturer comes into the classroom (perhaps a presentation, documentation, etc.). We could also think of interaction with students during the class by means of actions done in the proximity of the board (such as solving problems), or of uses for the technology in the time between classes (for news, notices and so on). In other contexts too, visualization services are a good addition to already-existing services for those using a university: take, for example, the research lab or the lecturer's office.
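The mapping from user actions to context reactions described above can be sketched as a small dispatch table. This is an illustrative Python fragment; the action and service names are placeholders, not the system's actual vocabulary.

```python
# Sketch: embedded-interaction events (walking through a door,
# approaching the board) mapped to the services the context offers.

SERVICES = {
    ("enter", "classroom"): ["attendance", "access_control"],
    ("approach", "board"):  ["show_presentation"],
    ("exit", "classroom"):  ["store_lesson_status"],
}

def react(action: str, place: str) -> list:
    """Return the services triggered by a user action in a place."""
    return SERVICES.get((action, place), [])

print(react("enter", "classroom"))  # ['attendance', 'access_control']
```

In the deployed system the reaction would also depend on the "who" and "when" concepts (profile and schedule), which are omitted here for brevity.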

3. Context-aware by identification

We are going to consider three different contexts: the classroom, the research laboratory and the lecturer's office. All these places contain likely situations in which embedded interaction is possible when the people involved are wearing tags. Lecturers and learners can manage the information contained in their tags dynamically while they walk around contexts where visualization services are present. Figure 3 shows how the lecturer, in his office, prepares the lesson which will be presented in the classroom. Where the lesson presentation is located should be controlled by including its reference in the tag. Other kinds of information contained could be links or documents that the teacher has selected just before class. When he or she finishes the class, the lesson status is stored. By status we mean, for example, the slide number where the class ended, the questions proposed for that day, etc.

A good way to describe everyday activities at university is through a scenario complementing figure 3. Chris arrives at the university faculty in the morning. As he walks into the building, an antenna placed near the door detects him. From that moment he will be locatable. Once in his office, while he is hanging up his coat, a virtual board shows information about news, notices, his schedule and headlines from e-mails. When Chris has finished the first tasks of the day, he prepares the class on "Computer-Human Interaction Systems" which will take place in 45 minutes. He goes over the presentation, documentation and the problem proposed for the students to answer. When he leaves his office, passing near an antenna situated on the door, a reader connected to the computer stores the information for the class mentioned above in a tag that he is wearing. When he arrives in the classroom, the presentation starts. It begins with the last slide from the day before, along with the documentation he had prepared beforehand. When the class finishes and Chris returns to his office, two doctoral students are waiting for a meeting that had been arranged two days before. As they go into the office, information about their meeting agenda is shown on the virtual board: subject, documents, links, tasks, etc. A user interface for this meeting also appears on the computer display, helping interaction among those present. Two hours later, Chris has to go to the lab to solve some problems with members of the research group.
When he reaches the lab he can see information to do with the group on the virtual board: news, notices, deadlines for congresses and journals, week schedules, etc. He then approaches the computer of one of the members of the group, and the issues that Chris has to talk about with this person appear on the computer display. This is an approach to managing information in the specified contexts simply by walking around while wearing tags.


Fig. 3.- Walking while wearing tags

3.1 The classroom context

The identification architecture for classroom support is shown in figure 4. The classroom is equipped with a computer, a reader and antennas. In this way, the lecturer and students wearing tags can manage the class contents. When teachers and students wearing tags enter the classroom, the location (attendance) and access control services are activated automatically.

Fig. 4.- Architecture for the classroom

In addition, the visualization services come up on the board. This is a mosaic of information which changes according to the context (classroom) and time (schedule). This mosaic, as shown in figure 5, is different for each attendance profile. In the time between classes the information is about news and announcements for students. During class time the mosaic changes to present information prepared by the lecturer for the session in hand.
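The selection of a mosaic from context, schedule and profile can be sketched as follows. This Python fragment is illustrative only: the schedule format and tile names are assumptions (the tile names loosely follow the parts of the board described for figure 5), not the system's actual data model.

```python
# Sketch: choose which mosaic tiles to show from the time of day
# (schedule) and the attendance profile of the person detected.
from datetime import time

# Hypothetical schedule: (start, end, lesson) tuples for one classroom.
SCHEDULE = [(time(9, 0), time(10, 0), "HCI lecture")]

def select_mosaic(now: time, profile: str) -> list:
    in_class = any(start <= now < end for start, end, _ in SCHEDULE)
    if in_class and profile == "lecturer":
        # Full class-time mosaic prepared by the lecturer.
        return ["attendance", "plan", "floor_plan",
                "schedule", "presentation", "documents"]
    if in_class:
        return ["presentation", "documents"]
    # Between classes: news and announcements for students.
    return ["news", "notices"]

print(select_mosaic(time(9, 30), "lecturer"))
```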


Fig. 5.- Mosaic of Information

The second activity takes place when someone in the classroom wants to get information onto the board by coming near its surface (proximity). When it is the lecturer who approaches the board, his lesson presentation, proposed problems and their solutions, documentation, etc. are shown. Figure 5 shows this mosaic, which could also appear when the lecturer comes into the classroom. In the illustration we can observe different parts of the board showing information about attendance (1), the lecturer's plan (2) for controlling the order of these activities, the plan of the building/floor (3), the schedule (4), the lesson presentation (5) and documents (6 & 7). The lecturer can also interact with the board by means of a set of sensors placed strategically near it, and in this way work through the plan for each class. If it is a student who approaches the board, the answer he has given to the previously proposed problem can be displayed; another possibility is for a presentation, or indeed any other kind of information, to be shown.

The third activity consists of storing the information of the lesson in tags. To do so, a database including this documentation must exist. It is indexed by codes which are put onto the tags. Teachers and students carry this information when they go through the door on leaving the classroom. These activities are completed by work at home, supported by a contact reader which reads the information included in tags and/or writes more for the next class. All transactions are managed by the classroom server. Between classes, the board puts up information of relevance to the whole group, as well as to individual students.
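Because the tags hold little memory, only short database codes travel on them, as the third activity describes. The sketch below illustrates this indirection in Python; the code scheme, document paths and packing format are all hypothetical.

```python
# Sketch of the third activity: documents live in a database and only
# short index codes are written to the tag on leaving the classroom.

# Hypothetical database: short code -> document location.
DOCUMENT_DB = {
    "L05": "slides/hci-lesson-05.ppt",
    "P03": "problems/problem-03.pdf",
}

def codes_for_tag(doc_paths: list) -> bytes:
    """Look up the short codes for the given documents and pack them
    into a small payload that fits in the tag's memory."""
    inverse = {path: code for code, path in DOCUMENT_DB.items()}
    codes = [inverse[p] for p in doc_paths]
    return ",".join(codes).encode("ascii")

payload = codes_for_tag(["slides/hci-lesson-05.ppt",
                         "problems/problem-03.pdf"])
print(payload)  # b'L05,P03'
```

At home, the contact reader would resolve the codes back through the same database to retrieve the documents for the next class.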


3.2 The lab and office contexts

Figure 6 shows the architecture for the other two contexts: the research lab and the office. In both of them, readers and antennas can be seen situated next to the boards and doors. More than one virtual board is placed in the lab; these display different information related to the research group. There is another virtual board in the office too. All the information is managed by a server.

Fig. 6.- Research Lab and Office Architecture.

To manage tag information in the everyday activities of members of the research group, an appropriate user interface must be provided, with no excessive cost in interaction. Figure 7(a) shows an individual interface for each member of the group.

Fig. 7(a).- Individual Schedule

Fig. 7(b).- Interface for meetings.

The location zone can be seen at the top. Using this, everyone can locate people in the group quickly. In addition, actions addressed to other members of the group can be managed by dragging them onto the photo. On the right, "search", "link", "make", "plan", "proposal" and other actions are displayed in a set of buttons, making it easy to take note of group tasks. In figure 7(b) the interface for meetings is presented. It is similar to the individual interface and includes a form for notes on meetings.

Fig. 8.- Virtual Board at Lab

Finally, one of the virtual boards placed in the lab could present information like that illustrated in figure 8. Here, location is offered at the top. Below that, the schedule for a week (month or year), deadlines of events and news are accessible. Notices for members and documentation are available at the bottom. Other mosaics are more specialized, dealing with papers in progress or projects, displaying how the work is advancing and other relevant information. So, if we want to obtain optimized information services for users, an ontology covering the concepts mentioned before is needed. It is by means of the "who" question that we can then decide the "what" and the "where" (location), all of which are essential. Other aspects such as size, latency, priority (profile) and schedule are kept in mind to optimize each mosaic.


In order to change the contents of the mosaic automatically it is essential to administer the information which will be presented. This information depends on factors such as attendance, context, schedule, time, the information in users' tags, etc. This responsibility falls to the server manager. Each member really needs to be able to introduce information easily. This is done by interacting with the mosaics. Consequently, mosaics are not only a way to present information on the virtual board; they are also an interface, on a computer display, which can be used to manage it.

3.3 The sensor complement to interaction

There are some cases where interaction supported only by the identification process on virtual boards needs to be complemented. When that happens, we attempt to merge identification with other kinds of input. To do this we have located some sensors near the visualization devices. We interact with them by passing a hand across them quickly, or by holding a hand steady near the sensor for a few seconds. The interaction required for each hand movement is a combination of identification and sensors. Each member defines the functionality he or she wants for each sensor and movement, by inserting in the tag the appropriate sensor purpose for each context that he usually visits. So, combined with identification, profile, context and schedule, the sensor interaction is multi-functional. Each sensor action is presented at the bottom of the virtual board to help users interact with it.
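The two hand movements above (a quick pass versus a steady hold) and their resolution against the purposes carried on the user's tag can be sketched as follows. The duration threshold and action table in this Python fragment are assumptions for illustration.

```python
# Sketch of id-sensor fusion: classify the hand movement by how long
# the hand stays near the sensor, then resolve the action from the
# per-context purposes the user carries on his or her tag.

HOLD_THRESHOLD_S = 1.5  # assumed boundary between "pass" and "hold"

# Hypothetical per-user table, as read from the tag for each context.
TAG_ACTIONS = {
    ("lab", "pass"): "next_mosaic",
    ("lab", "hold"): "show_my_schedule",
}

def classify(duration_s: float) -> str:
    return "hold" if duration_s >= HOLD_THRESHOLD_S else "pass"

def sensor_action(context: str, duration_s: float) -> str:
    return TAG_ACTIONS.get((context, classify(duration_s)), "no_action")

print(sensor_action("lab", 0.3))  # next_mosaic
print(sensor_action("lab", 2.0))  # show_my_schedule
```

Because the table is keyed by context and read from the tag, the same physical gesture takes on a different meaning for each user and each room, which is the multi-functionality described above.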

4. Conclusions and future work

We have modeled three different contexts using an available technology, making it possible for that technology to be closer to the user. This improvement comes from interaction with the system carried out in a way that is not "visible", by means of normal everyday actions. To do all this we take as a basis studies on context and, from our particular point of view, the most important concepts of context: those that derive from answering who?, when?, where? and what?. At the same time, we make use of the benefits of RFID technology, thereby complementing the identification itself and improving the services the system offers to users as a result. Our aim is to model familiar contexts like the classroom, lab or office. We have to improve the reaction of the system when no explicit user interaction is available, by optimizing the visualization services. We also have to explore new forms of explicit and implicit interaction, the main goal being to make them invisible. The ideal would be to reach the point where just wearing a tag is enough for these interactions to take place.


5. Acknowledgments

This work has been financed by the 2005-2006 SERVIDOR project (PBI-05-034) of the Junta de Comunidades de Castilla-La Mancha and by the Ministerio de Ciencia y Tecnología through the Mosaic Learning project 2005-2007 (TSI2005-08225-C07-07).

6. References

[1] ISTAG. Scenarios for Ambient Intelligence in 2010. Feb. 2001. http://www.cordis.lu/ist/istag.htm
[2] Dey, A. (2001). "Understanding and Using Context". Personal and Ubiquitous Computing 5(1), pp. 4-7.
[3] Brooks, K. (2003). "The Context Quintet: narrative elements applied to Context Awareness". In Human Computer Interaction International Proceedings, Crete, Greece. Erlbaum Associates.
[4] Schmidt, A. (2000). "Implicit Human Computer Interaction Through Context". Personal Technologies 4(2&3), pp. 191-199.
[5] Schmidt, A. (2005). "Interactive Context-Aware Systems Interacting with Ambient Intelligence". In Ambient Intelligence. G. Riva, F. Vatalaro, F. Davide & M. Alcañiz (Eds.).
[6] Schmidt, A., Kranz, M. & Holleis, P. (2005). "Interacting with the Ubiquitous Computer – Towards Embedding Interaction". In Smart Objects & Ambient Intelligence (sOc-EuSAI 2005), Grenoble, France.
[7] Want, R. & Hopper, A. (1992). "The Active Badge Location System". ACM Transactions on Information Systems 10(1), pp. 91-102.
[8] Russell, D. M., Trimble, J. P. & Dieberger, A. (2004). "The use patterns of large, interactive display surfaces: Case studies of media design and use for BlueBoard and MERBoard". In Proceedings of the 37th Hawaii International Conference on System Sciences.
[9] Cox, D., Kindratenko, V. & Pointer, D. (2003). In Proc. of the 1st International Workshop on Ubiquitous Systems for Supporting Social Interaction and Face-to-Face Communication in Public Spaces, 5th Annual Conference on Ubiquitous Computing, October 2003, Seattle, WA, pp. 41-47.
[10] Bravo, J., Hervás, R., Sánchez, I. & Crespo, A. "Servicios por identificación en el aula ubicua". In Avances en Informática Educativa. J. M. Sánchez et al. (Eds.). Servicio de Publicaciones, Universidad de Extremadura. ISBN 84-7723-654-2.
[11] Bravo, J., Hervás, R. & Chavira, G. (2005). "Ubiquitous computing at classroom: An approach through identification process". Journal of Universal Computer Science. Special Issue on Computers and Education: Research and Experiences in eLearning Technology, 11(9), pp. 1494-1504.
[12] Bravo, J., Hervás, R., Chavira, G. & Nava, S. (2006). "Modeling Contexts by RFID-Sensor Fusion". Accepted paper, 3rd Workshop on Context Modeling and Reasoning (CoMoRea 2006), Pisa, Italy.
[13] Bravo, J., Hervás, R., Chavira, G., Nava, S. & Sanz, J. (2005). "Display-based services through identification: An approach in a conference context". Ubiquitous Computing & Ambient Intelligence (UCAmI'05). Thomson. ISBN 84-9732-442-0, pp. 3-10.
