How to Reuse Existing Interactive Applications in Ubiquitous Computing Environments

Tatsuo Nakajima
Department of Computer Science
Waseda University

[email protected]

ABSTRACT

In ubiquitous computing environments, we will access various devices and appliances from a variety of mobile devices such as mobile phones, PDAs and wearable devices. However, we need to reuse existing interactive applications built with traditional GUI toolkits that assume the use of mice and keyboards, and these applications should be operable from the mobile interaction devices. Our approach enables existing GUI-based interactive applications to be used while a variety of interaction devices are adopted to control them. The approach therefore allows us to use traditional GUI toolkits to build ubiquitous computing applications that choose appropriate interaction devices dynamically. This paper describes the design and implementation of our middleware realizing the approach, and presents some examples to show its effectiveness.

Categories and Subject Descriptors D.2.10 [Design]: Methodologies; D.2.13 [Reusable Software]: Reusable Libraries

Keywords Interaction, Middleware, Ubiquitous Computing

1. INTRODUCTION

In ubiquitous computing environments, a large number of computers will be embedded in our surroundings, and applications running on these computers will be operated from a variety of mobile interaction devices such as mobile phones, PDAs and wearable devices. Several projects offer new interaction management systems for developing ubiquitous computing applications[11, 16, 17]. However, we also need to reuse existing interactive applications that adopt existing GUI toolkits. It is desirable that these applications be operable from the mobile interaction devices without modification, because the new interaction management systems require adopting new models for developing interactive applications, and it is hard to modify existing applications to fit the new models.

Future mobile interaction devices will enable us to interact with embedded computers more naturally. Current standard middleware infrastructures for networked home appliances have adopted traditional graphical user interface systems such as Java AWT and GTK+. It is not easy to control home appliances from advanced interaction devices such as PDAs, cellular phones, or the variety of research prototypes like [14, 8]. Moreover, what counts as natural interaction changes with a user's current situation. The most appropriate interaction device should be chosen dynamically according to the user's current situation and preference, and the selection of interaction devices should be consistent whether s/he is at home, in the office, or outdoors. For example, if a user is cooking lunch, s/he may prefer to control appliances by voice, but if s/he is watching TV on a sofa, a remote controller may be better. In the future, the diversity of interaction devices must be taken into account, because various new interaction devices will be developed and a user will want to choose his/her favorite device according to the situation.

In this paper, we describe the design and implementation of a middleware infrastructure that allows existing interactive applications to be operated from various mobile interaction devices without modifying the applications. Programmers can use traditional GUI toolkits such as GTK+ and Java Swing to develop interactive applications that choose appropriate interaction devices dynamically. Our current work focuses on home computing environments where home appliances such as TVs and VCRs are connected by a home network. Most current home appliances do not assume a diversity of mobile interaction devices, and standard middleware for home computing has adopted traditional GUI toolkits such as Java AWT. Our middleware is attractive because it reuses existing standard middleware infrastructures without modifying the standard specifications.

The remainder of this paper is structured as follows. In Section 2, we describe the characteristics of ubiquitous computing applications, and in Section 3 we describe related work. Section 4 presents the design and implementation of our user interface system. Section 5 discusses our current system, and Section 6 presents three scenarios that show the effectiveness of our approach. Finally, we conclude the paper in Section 7.

2. APPLICATIONS FOR UBIQUITOUS COMPUTING

In ubiquitous computing environments, one of the most important problems is how to interact with a variety of objects with embedded computers. New interaction techniques between users and the computers embedded in these objects have been developed by several research groups like [6]. It is necessary to consider how to manage these interaction devices.


Consider a user who uses a mobile device to control home appliances or to retrieve information of interest. In this case, applications running on the device need to take its mobility into account. We also want to use various display devices to show the control panels of home appliances. For example, when a user sits on a sofa, s/he will use his/her PDA to control the appliances, and the control panel is displayed on the PDA. If s/he finds a large display nearby, s/he may prefer to use it to control the appliances; the control panel should then be shown on the large display, and the panel may be navigated from his/her PDA. If the display is a touch panel, s/he navigates the panel by touching it. We believe that it is important to choose among a variety of interaction devices dynamically according to the user's situation, since one interaction device is not enough to cover all situations. However, the dynamic selection of interaction devices is complex, and it is desirable to hide the complexity from application programmers.

Dynamic reconfiguration is a very important topic in ubiquitous computing. The behavior of a dynamically configurable application changes according to its user's current situation. To reuse existing interactive applications in ubiquitous computing environments, the dynamic reconfiguration should be hidden in our middleware infrastructure. An application programmer need not take into account which mobile interaction devices are used to operate his/her application. This approach makes it much easier to develop interactive applications for ubiquitous computing.

There has been much research on developing dynamically reconfigurable applications[4, 13], and there are generic frameworks for developing such applications like [2]. We think that two important issues should be taken into account when building dynamically reconfigurable applications. The first issue is how to retrieve context information. We need a common interface offering a higher-level abstraction over various kinds of sensor information. For example, the context toolkit[2] and the sentient information framework[5] provide high-level abstractions that hide the details of context information retrieved from sensors. The second issue is a mechanism for adapting the software structure according to the retrieved context information. Traditionally, most dynamically reconfigurable applications change their software structures in an ad hoc way, and it is not easy to extend them to deal with new context information. We believe that it is important to develop a framework for building adaptive applications in a systematic way[10].

Although much research has attacked the problems described above, it is still not easy to develop dynamically reconfigurable applications, because implementing adaptive applications remains ad hoc, and programmers need to learn advanced programming techniques such as design patterns and aspect-oriented programming to build flexible, dynamically reconfigurable applications that satisfy future requirements. We also need to reimplement existing applications if we want to modify them to support dynamic reconfiguration. For example, if a dynamically reconfigurable application needs to display video streams according to the location of a user, the application should implement the redirection of the video streams so that they are transmitted to the computers on which they should be displayed[1].
We believe that our middleware infrastructure makes two contributions. The first contribution is a simple approach that allows existing GUI-based interactive applications to be operated from various mobile interaction devices, which makes it easy to reuse existing interactive applications in ubiquitous computing environments. The second contribution is to show the importance of hiding dynamic reconfiguration in middleware infrastructures to make it easy to develop applications for ubiquitous computing. In the future, the diversity of interaction devices must be taken into account, and our middleware infrastructure allows applications to be operated through various new interaction techniques without modifying the applications.

3. RELATED WORK

There are several approaches that are close to our research efforts. In this section, we describe these systems and compare them with ours.

The first system is the Pebble system[9], which enables us to control desktop applications on the MS-Windows operating system through a PalmPilot, the most popular PDA in the world. For example, a cursor on the desktop can be moved by touching the screen of the PalmPilot. The system is close to our current prototype, but the focus is different: Pebble focuses on the usability of the system, while our system focuses on the system architecture. Our system enables us to use a variety of input devices to navigate a graphical user interface on an output display, and input and output interaction devices can be switched according to a user's preference. We think that this flexibility makes our system more suitable for ubiquitous computing environments.

The second system is the UIML (User Interface Markup Language) system[18], an XML-based language that permits a declarative description of a user interface in a highly device-independent way. If an application describes its user interface as a UIML document, the document is rendered according to the respective input/output interaction devices. For example, a UIML document can be rendered on a PalmPilot to use it as an input/output device, or rendered to VoiceXML to support voice interaction. The approach is difficult to adopt when input and output devices are separated, and there is no support for dynamically switching these input/output devices according to a user's situation. There are also similar document-based approaches like [3]. In [17], a more generic approach is described to deal with a diversity of interaction devices, but these approaches do not take into account the reusability of existing interactive applications.

The CUES system[7] provides a framework to control a variety of appliances from a mobile device. In the system, each appliance has Java bytecode that is transmitted to a mobile device. The code contains a graphical user interface, which is displayed on the mobile computer, and a user can control the appliance through this graphical user interface. The approach enables us to use an appropriate graphical user interface for each appliance. However, it assumes that the mobile computer has a medium-sized display for showing a graphical user interface implemented with Java AWT/Swing, as well as a pointing device and a keyboard. The approach also does not support the customization of the user interface or the dynamic switching of interaction devices.

HAVi[15] provides two ways to support the interaction with a user. The first is Data Driven Interaction (DDI), which provides a declarative way to describe a graphical user interface: the user interface is described as a document like UIML, and the document is rendered according to the characteristics of a display device, although UIML is more general than HAVi's DDI since UIML is based on XML. The second way is similar to the CUES system: Java bytecode containing a user interface written in Java AWT is downloaded into a HAVi device, and the graphical user interface is displayed on the display of the device. The problem with this approach is the same as with CUES.


ICrafter[11] is a framework for services and their user interfaces in interactive workspaces. In the framework, each service provides a service specification that contains a list of operations the service can accept. The user interface is customized according to the characteristics of interaction devices. The framework can aggregate several service specifications: if one service provides a producer interface and another offers a consumer interface, the two interfaces can be merged. The approach is attractive since service aggregation is important to make future complex appliances easy to use. A problem is that the approach requires us to modify existing applications, since ubiquitous computing applications need to be written with the framework.

BEACH[16] offers a uniform model for developing synchronous collaboration applications for future office environments. The system allows a user to share documents on DynaWall, CommChair, and InteracTable, which are computer-mediated walls, chairs, and tables in future offices. A problem is that an application programmer must use the BEACH system to develop applications for operating these devices, and it is not easy to reuse existing applications. The system also does not take into account the diversity of interaction devices.

4. DESIGN AND IMPLEMENTATION

This section first describes some design issues for building our system. Then, we present the architecture and implementation of our system.

4.1 Design Issues

There are three requirements to solve the problems described in Section 2. The first requirement is that input interaction devices and output interaction devices be chosen independently according to a user's situation and preference. For example, users can select their PDAs for their input/output interaction, or they may choose their cellular phones as input interaction devices and television displays as output interaction devices.

The second requirement is that our approach enable us to choose suitable input/output interaction devices according to a user's preference. These interaction devices are dynamically changed according to the user's current situation. For example, a user who controls an appliance with his/her cellular phone as an input interaction device may switch to a voice input system because both hands are currently busy with other work.

The third requirement is that any application executed in an appliance can use traditional GUI toolkits as long as it speaks a uniform interaction protocol. This requirement is very important because it is desirable to reuse existing GUI-based applications in order to migrate to future ubiquitous computing environments smoothly.

Our proposed architecture, shown in Figure 1, satisfies the above three requirements. In the current implementation, an application generates bitmap images containing information such as control panels, photo images and video images. The approach is simple because popular operating systems provide a mechanism to retrieve bitmap images generated by applications. These applications can receive keyboard and mouse events to be controlled. The user interface middleware receives bitmap images from applications and transmits keyboard and mouse events. These bitmap images and keyboard/mouse events are used as the uniform interaction protocol in the current implementation; in our system, the uniform protocol is called the universal interaction protocol. The role of the middleware is to select appropriate interaction devices according to context information. Also, input/output events are converted to keyboard/mouse events and bitmap images according to the characteristics of interaction devices.

The approach enables us to use traditional graphical user interface toolkits such as Java AWT, GTK+, and Qt for interfacing with a variety of interaction devices. In fact, many middleware standards for consumer electronics, such as HAVi, have adopted Java AWT for their GUI standards. Our approach will allow us to control various future consumer electronics from various interaction devices without modifying their application programs. This characteristic is desirable because it is difficult to change the implementation of existing GUI standards.

Figure 1: Overview of Our System

4.2 System Structure

Our system consists of the following four components, as shown in Figure 2:

• Home Computing Application
• UniInt Server
• UniInt Proxy
• Input/Output Interaction Devices

Home computing applications[15] generate graphical user interfaces for the currently available home appliances to control them. For example, if a TV is currently available, the application generates a user interface for the TV. On the other hand, the application generates a composed GUI for the TV and VCR if both are currently available.

The UniInt server transmits bitmap images generated by a window system to a UniInt proxy using the universal interaction protocol. It forwards mouse and keyboard events received from a UniInt proxy to the window system. In our current implementation, any application running on a window system supporting a UniInt server (currently, MS-Windows and the X Window System) can be controlled by our system without modification.
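To make the boundary between these components concrete, the following sketch outlines the two flows that the universal interaction protocol carries: bitmap frames from the application side, and keyboard/mouse events toward it. The type and method names are illustrative only; the paper does not define this API.

import java.awt.image.BufferedImage;

// Kinds of events carried by the universal interaction protocol.
enum EventKind { POINTER_MOVE, BUTTON_PRESS, BUTTON_RELEASE, KEY_PRESS, KEY_RELEASE }

// A keyboard/mouse event in universal form, independent of the input device.
final class UniversalEvent {
    final EventKind kind;
    final int x, y;    // pointer position on the application's framebuffer
    final int code;    // button number or key symbol
    UniversalEvent(EventKind kind, int x, int y, int code) {
        this.kind = kind; this.x = x; this.y = y; this.code = code;
    }
}

// Application side: exposes the GUI as bitmaps and accepts universal events.
interface UniIntServer {
    BufferedImage grabScreen();            // current bitmap image of the GUI
    void deliver(UniversalEvent event);    // forwarded to the window system
}

// Middleware side: adapts frames and events to the selected interaction devices.
interface UniIntProxy {
    void pushFrame(BufferedImage frame);             // to the selected output device
    UniversalEvent translate(byte[] rawDeviceInput); // from the selected input device
}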


Figure 2: System Structure

The UniInt proxy is the most important component in our system. The UniInt proxy converts bitmap images received from a UniInt server according to the characteristics of output devices. It converts events received from input devices into mouse or keyboard events that are compliant with the universal interaction protocol. The UniInt proxy chooses the currently appropriate input and output interaction devices for controlling appliances. To convert interaction events according to the characteristics of interaction devices, the selected input device transmits an input specification, and the selected output device transmits an output specification to the UniInt proxy. These specifications contain the information that allows a UniInt proxy to convert input and output events.

The last component is the input and output interaction devices. An input device supports the interaction with a user; its role is to deliver the commands issued by the user to control home appliances. An output device has a display that shows the graphical user interface used to control appliances.
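As an illustration of the event conversion performed by the UniInt proxy, a converter for a cellular phone that can only issue discrete directional commands might step an emulated cursor and emit events in the universal form. The helper below is hypothetical, not part of our implementation, and reuses the UniversalEvent and EventKind types from the earlier sketch.

// Hypothetical converter for a cellular phone that only issues discrete
// directional commands; it emulates a pointer by stepping the cursor.
final class PhoneInputConverter {
    private static final int STEP = 16;   // pixels moved per directional command
    private int cursorX, cursorY;

    UniversalEvent convert(String command) {
        switch (command) {
            case "right": cursorX += STEP; break;
            case "left":  cursorX -= STEP; break;
            case "up":    cursorY -= STEP; break;
            case "down":  cursorY += STEP; break;
            case "select":
                // A "select" link becomes a button press at the current position.
                return new UniversalEvent(EventKind.BUTTON_PRESS, cursorX, cursorY, 1);
            default:
                throw new IllegalArgumentException("unknown command: " + command);
        }
        return new UniversalEvent(EventKind.POINTER_MOVE, cursorX, cursorY, 0);
    }
}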

4.3 Implementation of UniInt Proxy

The current version of the UniInt proxy is written in Java, and the implementation contains four modules, as shown in Figure 2. The first module is the universal interaction protocol module, which executes the universal interaction protocol to communicate with a UniInt server. The second module is the plug and play management module, which collects information about currently available interaction devices and builds a database containing information about each interaction device. The third module is the input management module, which selects a suitable input interaction device by using the database maintained by the plug and play management module. The last module is the output management module, which selects a suitable output interaction device and also converts bitmap images received from the universal interaction protocol module according to the output specification of the currently selected output interaction device.

4.3.1 Management of Available Interaction Devices

The plug and play management module detects currently available input and output devices according to context information. The module implements the Universal Plug and Play protocol to detect currently available interaction devices. An interaction device transmits advertisement messages using the Simple Service Discovery Protocol (SSDP). When a UniInt proxy detects the messages, it learns the network address of the interaction device and transmits a request to retrieve information about the device. In our system, information about interaction devices is represented as XML documents. If the interaction device is an input device, the document contains various attributes of the device, which are used for selecting the most suitable device. For an output device, the document contains information about the display size and the attributes of the device. The plug and play management module maintains a database containing all information about currently detected interaction devices.

The selection of interaction devices is determined as follows in the current implementation. Each interaction device has a device type ID. A user registers a list of interaction devices, and the order in the list indicates the user's preference. When several interaction devices are detected, the device with the highest preference among the currently detected devices is selected. If several detected interaction devices have the same device type ID, other attributes are examined: for input interaction devices, the most recently used device is selected; for output interaction devices, the device whose location attribute is closest to the currently selected input interaction device is chosen. A user can also choose an interaction device explicitly by sending an event to the UniInt proxy from the device.
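A minimal sketch of this preference-ordered selection might look as follows; the class and method names are ours, and the tie-breaking by recency and proximity is only indicated in a comment rather than implemented.

import java.util.List;
import java.util.Map;
import java.util.Optional;

// Illustrative sketch of the preference-ordered device selection described above.
final class DeviceSelector {
    // Device type IDs ordered by the user's preference, most preferred first.
    private final List<String> preferenceOrder;

    DeviceSelector(List<String> preferenceOrder) {
        this.preferenceOrder = preferenceOrder;
    }

    // 'detected' maps a device type ID to the devices of that type currently
    // advertised via SSDP. Ties within a type would be broken by other attributes
    // (recency for input devices, proximity for output devices), omitted here.
    Optional<String> select(Map<String, List<String>> detected) {
        for (String typeId : preferenceOrder) {
            List<String> candidates = detected.get(typeId);
            if (candidates != null && !candidates.isEmpty()) {
                return Optional.of(candidates.get(0));
            }
        }
        return Optional.empty();   // nothing from the preference list is available
    }
}

For example, with a preference list of voice input, PDA, and game console, a PDA would be selected only when no voice input device is currently advertised.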

4.3.2 Adaptation of Input and Output Events

The role of the input management module and the output management module is to determine the policies for selecting interaction devices. As described in the previous section, all information about currently available interaction devices is stored in the database of the plug and play management module. The database provides a query interface for retrieving information about interaction devices. Each entry in the database contains a pair of a network address and a list of attributes for an interaction device; the entry whose attributes match the user's preference provided in a query is returned.

The current implementation of the input management module receives all input events from every currently available input device. When a new input device is detected, a new thread is created for receiving input events from the newly detected device. All input events are delivered to the universal interaction protocol module, and they are eventually processed by applications. Each input event contains a device type ID, and the conversion of events is processed according to the device type ID. In the current implementation, we assume that the input management module knows how to convert events for all device type IDs. In the future, when an unknown device type ID is detected, the corresponding conversion module will be downloaded to the UniInt proxy from the Internet incrementally.

The output management module converts bitmap images received from the universal interaction protocol module according to the display size of the output device. The size is stored in the database of the plug and play management module. When an output device is selected, its display size is retrieved from the database. The bitmap image is converted according to the retrieved information and then transmitted to the selected output device.
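The scaling step of the output management module could, for example, be expressed with standard Java 2D operations as below; this is a plausible sketch of the behavior described above, not the actual implementation.

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

// A plausible way to adapt a framebuffer image to the selected output device's
// display size, as the output management module is described to do.
final class FrameAdapter {
    static BufferedImage fitToDisplay(BufferedImage frame, int displayWidth, int displayHeight) {
        // Preserve the aspect ratio of the original GUI while fitting the display.
        double scale = Math.min(displayWidth / (double) frame.getWidth(),
                                displayHeight / (double) frame.getHeight());
        int w = Math.max(1, (int) (frame.getWidth() * scale));
        int h = Math.max(1, (int) (frame.getHeight() * scale));

        BufferedImage scaled = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = scaled.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(frame, 0, 0, w, h, null);
        g.dispose();
        return scaled;
    }
}

Further conversions, such as the monochrome reduction used for low-bandwidth wireless links in Section 6.1, could be applied in the same place.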

4.4 Current Status

Our system is built on the AT&T VNC system[12]; the VNC server can be used as the UniInt server without modification. Also, the RFB protocol defined by the VNC system is adopted as the current version of the universal interaction protocol. The current prototype has been used in our HAVi-based home computing system[15], where HAVi is a standard specification for digital audio and video appliances. Currently, we have developed two home appliances: the first is a DV viewer and the second is a digital TV emulator. Our application shows a graphical user interface according to the currently available appliances. The cursor on a screen that displays a graphical user interface can be moved from a Compaq iPAQ; when that device is turned off, the cursor is controlled by other devices such as a game console. It is also possible to show the graphical user interface on the PDA according to a user's preference. The current system has also integrated cellular phones to control home appliances: NTT DoCoMo's i-mode phones have Web browsers, which makes it possible to move a cursor by clicking special links displayed on the phones.

Figure 3 contains several photos demonstrating our system. Currently, our home computing applications are executed on HAVi, and a control panel is written using Java AWT. In the demonstration, if both a DV camera and a digital TV tuner are simultaneously available, their control panels are combined into one control panel, as shown in the photo (top-left). The control panel can be navigated by both a cellular phone (top-right) and a game console (bottom-left). Also, the control panel can be displayed and navigated on a PDA (bottom-right).
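Because the universal interaction protocol is currently RFB, the cursor movements described above ultimately arrive at the VNC server as RFB PointerEvent messages. The sketch below writes one such message on an already established and authenticated RFB session; the handshake and framebuffer handling are omitted, and the class is ours, not part of the VNC code.

import java.io.DataOutputStream;
import java.io.IOException;

// Writes an RFB PointerEvent on an already established RFB session.
final class RfbPointer {
    private final DataOutputStream out;

    RfbPointer(DataOutputStream out) {
        this.out = out;
    }

    // buttonMask bit 0 = left button; x and y are framebuffer coordinates.
    void sendPointerEvent(int buttonMask, int x, int y) throws IOException {
        out.writeByte(5);            // message type 5: PointerEvent
        out.writeByte(buttonMask);   // currently pressed buttons
        out.writeShort(x);           // x position, big-endian U16
        out.writeShort(y);           // y position, big-endian U16
        out.flush();
    }
}

A click coming from an i-mode link, for instance, would become two such messages at the current cursor position: one with bit 0 set (press) and one with it cleared (release).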

5. DISCUSSIONS

We have been improving our system for one year since the first prototype implementation. During this time, we have discussed several issues that should be considered in future work. In this section, we present some of the important issues discussed in our group.

The first issue is the limitation of our architecture. Our system assumes that an output device is available for interacting with appliances. The assumption may not be realistic in some conditions, but we believe that future rooms will provide many display devices, and a user may carry a personal display device such as a PDA, a cellular phone or a wearable display. Therefore, our assumption is not unrealistic in a real environment. In our system, a bitmap image that contains a graphical user interface is transferred from a UniInt server to a UniInt proxy. Since the image does not contain semantic information about its content, the UniInt proxy does not understand the content. For example, it is difficult to extract the layout of each GUI component from the image, so it is not easy to change the layout according to the characteristics of output devices or a user's preference. Our system can also deal only with mouse and keyboard events: the navigation of a graphical user interface can be done only by emulating cursor movement or pressing keyboard and mouse buttons. If this limitation hurts the usability of a system, other approaches should be chosen. However, navigating a graphical user interface from a PDA or a cellular phone provides flexible interaction with home appliances. Our experience shows that home appliances usually allow us to use a large display and render graphical user interfaces on that display to control the appliances. We believe that our system is powerful enough to make future middleware infrastructures for home appliances flexible. However, we need to consider what the right common abstraction is for communication between applications and interaction devices as a universal interaction protocol. The current abstraction of bitmap images and keyboard and mouse events is low level, but it enables us to use existing applications. In the future, we should look for a better abstraction for using various interaction devices in a uniform way. The keyboard and mouse events adopted in our current system are too low level to customize the GUI according to the characteristics of display devices. At the same time, the abstraction should not hide the advanced characteristics of individual interaction devices.

The second issue is how to represent context information in our system. Currently, the use of context information is ad hoc, and we need a more general model of the real world. For example, identifying the display device nearest to us requires modeling both the location of the display device and our own location. It is also important to model the room where a display device is located, because if there is a wall between us and the display, the device cannot be used for the interaction.

The third issue is related to how our system is used. The selection policy for devices changes depending on which appliance a user wants to control. Since the system should take into account the differences among environments, it is important to be able to change the selection policy of interaction devices for each environment. Our system will provide a high-level API to control the policy. We believe that it is impossible to hide dynamic reconfiguration in a middleware infrastructure completely, so it is desirable to provide a high-level API to customize dynamic reconfiguration for each situation.

The fourth issue is security. It is unrealistic to assume that any user can control any appliance, so we are considering identifying the user who tries to control appliances. There are two topics that should be solved in the future. The first topic is how to personalize the user interface to protect appliances. For example, an appliance should not display a button that a user is not allowed to operate. We consider that an application provides multiple control panels that differ for each security level, and a UniInt proxy transmits the bitmap images suitable for a user to an output device. Although the approach requires modifying application programs, the modification is not difficult. The second topic is how to authenticate a user who controls appliances. If a user uses his/her personal device to control the appliances, it is not difficult to identify the user, but we need another mechanism when an interaction device is shared by many users.

The last issue is whether interaction devices are selected manually or automatically.
In our approach, the most suitable interaction device is selected automatically according to the location information of a user. We found that automatic selection is not comfortable in all cases. While a user navigates a control panel, it should not be moved to other displays. We believe that the movement should be predictable by the user according to his/her mental model; if the system behaves in an unexpected way, the user becomes nervous. When the system cannot determine automatically whether to change devices, it is desirable to show a list of possible interaction devices to the user and let him/her select the most suitable one from the list.

Figure 3: Controlling Our HAVi-based Home Computing System

6. SCENARIOS OF OUR APPROACH

In this section, we present three scenarios to show the effectiveness of our approach. The first scenario enables us to interact with existing applications running on MS-Windows or Linux in a location-aware fashion. The second scenario is a location-aware video phone. The third scenario controls appliances from wearable devices.

6.1 Location-Aware Interaction

The system enables a user to control a home appliance in a location-aware way. In the future, our homes will have many display devices that can show control panels for controlling home appliances. A user usually prefers to use the display device nearest to him/her to control a variety of home appliances. For example, if a user sits on a sofa, the control panel of a home application is displayed on his/her PDA. On the other hand, if s/he is in front of a display device, the control panel is shown on that display, and s/he navigates it from his/her PDA or a game console.

Figure 4 shows our demonstration system realizing location-aware interaction. In the system, we use X10 motion sensors to detect the movement of a user. We have implemented a Jini-based laser disc player whose graphical user interface is written in Java using the Swing library. If a user is in front of a large display device, the control panel is shown on that display, and the user can navigate the control panel from his/her PDA. However, when s/he moves in front of a mobile computer, the motion sensor near the computer detects the movement, and the control panel is moved to the display of the mobile computer. The image of the control panel is reduced and converted to a monochrome image to reduce the bandwidth used on the wireless network. Currently, we are replacing the motion sensors with RFID tags to detect a user's movement.

Figure 4: Location-Aware Interaction
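A rough sketch of this location-triggered switching is shown below: when a motion sensor fires, the control panel is redirected to the display registered for that sensor. The sensor IDs, the Display interface, and the class itself are illustrative assumptions, not part of the demonstration system's code.

import java.awt.image.BufferedImage;
import java.util.Map;

// Redirects the control panel to the display associated with the sensor that
// most recently reported movement.
final class LocationAwareSwitch {
    interface Display { void show(BufferedImage panel); }

    private final Map<String, Display> displayBySensor;  // e.g. "sofa" -> large display
    private Display current;

    LocationAwareSwitch(Map<String, Display> displayBySensor, Display initial) {
        this.displayBySensor = displayBySensor;
        this.current = initial;
    }

    // Called when an X10 motion sensor (or, later, an RFID reader) reports movement.
    void onMotion(String sensorId, BufferedImage panel) {
        Display target = displayBySensor.get(sensorId);
        if (target != null && target != current) {
            current = target;        // move the control panel to the nearby display
        }
        current.show(panel);
    }
}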

6.2 Ubiquitous Video Phones

The second example is a ubiquitous video phone that enables us to use a video phone in various ways. In this example, we assume that a user speaks with his/her friend using a future broadband phone. The phone has a receiver like traditional phones, but it also has a small display. When the phone is used as a video phone, the small display renders video streams transmitted from other phones. The display is also able to show various information, such as photos, pictures, and HTML files, shared by the speakers. Our user interface system makes the phone more attractive, and we believe that the extension is a useful application in ubiquitous computing environments.

When the user needs to start preparing dinner, s/he will go to the kitchen, but s/he wants to keep talking with the friend. The traditional phone receiver is not appropriate for continuing the conversation in the kitchen because both hands may be used for cooking. In this case, a microphone and a speaker in the kitchen are used so that s/he can use both hands for preparing the dinner while talking with the friend. In the future, various home appliances such as refrigerators and microwaves will provide displays, and a kitchen table may have a display to show a recipe. These displays can be used by the video phone to show a video stream. In a similar way, a video phone can use various interaction devices for interacting with a user. The approach enables us to use a telephone in a more seamless way.

Our system allows us to use a standard VoIP application running on Linux that provides a graphical user interface on the X Window System. A user can choose various interaction styles according to his/her situation, and when the situation changes, the current interaction style is changed according to his/her preference.

6.3 Controlling a TV Appliance Using a Wearable Device

In this example, we show that our approach enables us to use advanced wearable devices to control various home appliances. Let us assume that a user wears a head-mounted display that cannot be distinguished from prescription lenses, and s/he wants to control a television. In this case, the graphical user interface of the television is displayed on his/her glasses, and the user navigates the graphical user interface by voice. The UniInt proxy converts the image to a size suitable for displaying on the glasses. If the user takes off the glasses, the graphical user interface is automatically displayed on the display nearest to the user. Voice is also used to move the cursor on the graphical user interface: it is translated to keyboard/mouse events in the UniInt proxy, and these events are delivered to the application executed in the television. In the future, our clothes will embed displays and control panels, and the clothes can be used to control a variety of appliances. Such future wearable computing is a very attractive setting for our system.

7. CONCLUSION

In this paper, we have proposed a new approach to building ubiquitous computing applications. Our middleware enables us to access existing GUI-based interactive applications from various mobile interaction devices. In the future, we may use new interaction management systems such as ICrafter and BEACH for developing ubiquitous computing applications, but we will still use existing GUI toolkits to develop interactive applications. We believe that it is important to develop a system that reuses existing interactive applications so that they can be used in ubiquitous computing environments.

In the future, we believe that it is important to model our mental states and to change an application's behavior by accessing the model, so that the behavior of the application and the user's mental state remain consistent. We are currently discussing which abstraction is suitable for modeling our mental states.

8. REFERENCES

[1] J. Bacon, J. Bates, and D. Halls, "Location-Oriented Multimedia", IEEE Personal Communications, Vol. 4, No. 5, 1997.
[2] A. K. Dey, D. Salber, and G. D. Abowd, "A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications", Human-Computer Interaction, 16, 2001.
[3] T. Hodes and R. H. Katz, "A Document-based Framework for Internet Application Control", In Proceedings of the Second USENIX Symposium on Internet Technologies and Systems, 1999.
[4] A. Harter, A. Hopper, P. Steggles, A. Ward, and P. Webster, "The Anatomy of a Context-Aware Application", In Proceedings of the 5th International Conference on Mobile Computing and Networking, 1999.
[5] D. L. de Ipina, "Building Components for a Distributed Sentient Framework with Python and CORBA", In Proceedings of the 8th International Python Conference, 2000.
[6] H. Ishii and B. Ullmer, "Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms", In Proceedings of the Conference on Human Factors in Computing Systems, 1997.
[7] K. Kangas and J. Roning, "Using Code Mobility to Create Ubiquitous and Active Augmented Reality in Mobile Computing", In Proceedings of the 5th International Conference on Mobile Computing and Networking, 1999.
[8] N. Khotake, J. Rekimoto, and Y. Anzai, "InfoStick: An Interaction Device for Inter-Appliance Computing", Workshop on Handheld and Ubiquitous Computing (HUC'99), 1999.
[9] B. A. Myers, H. Stiel, and R. Gargiulo, "Collaboration Using Multiple PDAs Connected to a PC", In Proceedings of CSCW'98: ACM Conference on Computer-Supported Cooperative Work, 1998.
[10] T. Nakajima, "A Framework for Building Adaptive Continuous Media Applications using Service Proxies", In Handbook of Internet and Multimedia Systems and Applications, CRC Press, 1998.
[11] S. R. Ponnekanti, B. Lee, A. Fox, P. Hanrahan, and T. Winograd, "ICrafter: A Service Framework for Ubiquitous Computing Environments", In Proceedings of UBICOMP 2001, 2001.
[12] T. Richardson, et al., "Virtual Network Computing", IEEE Internet Computing, Vol. 2, No. 1, 1998.
[13] B. N. Schilit, N. Adams, and R. Want, "Context-Aware Computing Applications", In Proceedings of the Workshop on Mobile Computing Systems and Applications, IEEE, 1994.
[14] I. Siio, T. Masui, and K. Fukuchi, "Real-world Interaction using the FieldMouse", In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST'99), 1999.
[15] K. Soejima, et al., "Building Audio and Visual Home Applications on Commodity Software", IEEE Transactions on Consumer Electronics, Vol. 47, No. 3, 2001.
[16] P. Tandler, "The BEACH Application Model and Software Framework for Synchronous Collaboration in Ubiquitous Computing Environments", Journal of Systems and Software, October 2003.
[17] D. Thevenin and J. Coutaz, "Plasticity of User Interfaces: Framework and Research Agenda", Interact'99, 1999.
[18] User Interface Markup Language, http://www.uiml.org/

