O'Hare, G.M.P., Keegan, S., O'Grady, M.J., Interaction for Intelligent Mobile Systems. Proceedings of 10th International Conference on Knowledge-Based Intelligent Information & Engineering Systems (KES2006), Bournemouth, UK, October 2006. Lecture Notes in Artificial Intelligence (LNAI), Vol. 4252, 686-693. Springer-Verlag.
Interaction for Intelligent Mobile Systems
G.M.P. O'Hare, S. Keegan and M.J. O'Grady
Adaptive Information Cluster (AIC), School of Computer Science & Informatics, University College Dublin (UCD), Belfield, Dublin 4, Ireland.
{gregory.ohare, Stephen.keegan, michael.j.ogrady}@ucd.ie
Abstract. Mobile computing poses significant new challenges due to the disparity of the environments in which it may be deployed and the difficulties in realizing effective software solutions within the computational constraints of the average mobile device. Likewise, enabling seamless and intuitive interaction is a process fraught with difficulty. Embedding intelligence into the mobile application or the physical environment, as articulated by the AmI vision, is one potential strategy that software engineers could adopt. In this paper, some pertinent issues concerning the deployment of intelligent agents on mobile devices for certain interaction paradigms are discussed and illustrated in the context of an m-commerce application.
1 Introduction

Ambient Intelligence (AmI) is motivated by an awareness that the increasing proliferation of embedded computing within the environment may become a source of frustration to users if appropriate interaction modalities are not identified. Technically, AmI is closely related to the ubiquitous computing initiative articulated by Weiser over a decade earlier. Both envisage everyday artifacts being augmented with computational technologies. Both acknowledge the need for seamless and intuitive interaction. However, as developments in the necessary hardware and software continue, the issue of interaction has become increasingly important; hence the AmI initiative, which advocates the development of intelligent user interfaces that would mediate between the user and the embedded artifact.

Historically, text entry has been the traditional modality of interaction with computers, starting with the QWERTY-style keyboards of the 1970s. However, developments in mobile telecommunications led to alternative interaction modalities being considered. At present, the default layout of mobile phone keypads conforms to an ISO standard [1]. Numeric keys are overloaded with alphabetic characters and other symbols, and though the interface is not at first sight intuitive, the success of SMS suggests that significant numbers of people use keypad text entry without a thought. However, any difficulties that arise pale into insignificance when it is considered that a sensor is deemed to have a sophisticated user interface if it supports three LEDs! Thus, significant obstacles must be overcome if intelligent user interfaces are to be realistically deployed in AmI environments.
This paper is structured as follows: Section 2 considers interaction from a traditional mobile computing perspective. In Section 3, the intelligent agent paradigm is examined. In Section 4, the use of intelligent agents for interaction monitoring is illustrated through a brief discussion of EasiShop, an m-commerce application. Some related research is presented in Section 5 after which the paper is concluded.
2 Observations on Interaction
Fig. 1. The Interaction Continuum.
In recent years, a number of different modalities of interaction have been researched and described in the literature. Multi-modal interaction, where a number of different modalities, for example voice and gesture, are used for both input and output, is a case in point. For the purposes of this discussion, however, the input modality is of particular interest and is viewed as ranging from explicit to implicit (Fig. 1). One could also classify the input interaction modality as ranging from event-based to streaming-based [2].

Explicit interaction occurs when a user tells a computer quite clearly what it is to do. This may be accomplished by manipulating a GUI, running a command in a command window or using a voice recognition system, to mention but a few. In short, the user performs some action, thus unleashing a series of events, resulting in an expected outcome. Most interaction with computers is of the explicit variety.

Implicit interaction [3] is, from a computational perspective, a more recent development. In itself, it is not a new concept, as people also communicate implicitly, for example by subconsciously smiling or frowning. Interaction is considered implicit when an action is performed that a user would not regard as input but which a computer interprets as such. A classic example is that of a museum visitor: moving towards certain exhibits and viewing them for a significant time span would be interpreted by a mobile electronic tourist guide as an indication of a strong interest in those exhibits. The guide could therefore deduce, with a reasonably high degree of certainty, that the visitor might welcome some additional information about the exhibit in question.

Implicit interaction is closely associated with a user's context, and some knowledge of this is almost essential if designers want to incorporate it into their applications. A potentially serious problem arises when only an incomplete model of the prevailing context is available. At some stage, a decision has to be made as to whether to initiate an explicit interaction with the user. However, when to do this depends on the nature
of the application in question. Using techniques such as machine learning, Bayesian probability and so on, a threshold can be identified that, if exceeded, would prompt an explicit interaction. The value of this threshold is essentially a judgment call by the software engineer and may even change dynamically. Suffice it to say that a health monitoring application would have a lower threshold than one that recommends local hostelries. Thus arises the need for a degree of intelligence in the application. In the next section, one solution, intelligent agents, is considered.
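As a minimal sketch of such a threshold mechanism (the class, method and parameter names below are hypothetical, not drawn from any system described in this paper), an application could maintain a Bayesian estimate of the probability that the user would welcome an interruption and initiate explicit interaction only when that estimate exceeds the application-specific threshold:

    // Hypothetical sketch: deciding when implicit context justifies an
    // explicit interaction. All names and values are illustrative only.
    public class InteractionArbiter {

        private final double threshold; // application-dependent; may be adapted at runtime

        public InteractionArbiter(double threshold) {
            this.threshold = threshold;
        }

        // Bayes' rule: posterior probability that the user would welcome
        // an interruption, given the observed context.
        public double posterior(double prior, double pContextIfWelcome,
                                double pContextIfUnwelcome) {
            double evidence = pContextIfWelcome * prior
                            + pContextIfUnwelcome * (1.0 - prior);
            return (pContextIfWelcome * prior) / evidence;
        }

        public boolean shouldInteractExplicitly(double prior,
                                                double pContextIfWelcome,
                                                double pContextIfUnwelcome) {
            return posterior(prior, pContextIfWelcome, pContextIfUnwelcome) >= threshold;
        }
    }

Under such a scheme, the health monitoring application above might set the threshold as low as 0.2, accepting occasional unwanted interruptions, while a hostelry recommender might require 0.9 before disturbing the user.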
3 Intelligent Agent Architectures

Intelligent agents continue to be the subject of intense research in academia. Though their uses are varied and many, agents are perceived as offering alternative strategies for software development in areas where traditional techniques have not proved effective. Examples include domains that are inherently dynamic and complex. Three broad categories of agent architecture have been identified:

1. Reactive agents act in a simple stimulus-response fashion and are characterized by a tight coupling between event perception and subsequent actions. Such agents may be modeled and implemented quite easily. The Subsumption Architecture [4] is a classic example of a reactive architecture.

2. Deliberative agents can reason about their actions. Fundamental to such agents is the maintenance of a symbolic model of their environment. One popular implementation of the deliberative stance is the belief-desire-intention (BDI) model [5], which has found its way into commercial products, for example JACK [6]. In the BDI scheme, agents maintain a model of their environment through a set of beliefs. Each agent has a set of objectives or tasks that it seeks to fulfill, referred to as desires. By continuously monitoring its environment, the agent will detect opportunities when it is appropriate to carry out some of its desires. Such desires are formulated as intentions, which the agent proceeds to realize. A minimal sketch of this cycle is given after this list.

3. Hybrid architectures seek to adopt the best aspects of each approach. One strategy would be to use the reactive component for event handling and the deliberative component for longer-term goals.

A number of characteristics are traditionally associated with agents. Reactivity, proactivity, autonomy and social ability are characteristics of so-called weak agents, while strong agents augment these further with rationality, benevolence, veracity and mobility [7]. Of course, not all of these will be possessed by every individual agent.
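The following sketch renders the BDI cycle described above in schematic form; the interface and method names are invented for illustration and are not drawn from JACK or any other platform:

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Illustrative BDI-style control loop: beliefs are revised from
    // percepts, applicable desires are promoted to intentions, and
    // intentions are executed one step at a time.
    public abstract class BdiAgent<P> {

        protected final Queue<Runnable> intentions = new ArrayDeque<>();

        protected abstract P sense();                // observe the environment
        protected abstract void reviseBeliefs(P p);  // update the symbolic model (beliefs)
        protected abstract void deliberate();        // enqueue intentions from current desires

        public final void run() {
            while (true) {
                reviseBeliefs(sense());
                deliberate();
                Runnable step = intentions.poll();
                if (step != null) {
                    step.run(); // act on the current intention
                }
            }
        }
    }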
4 Capturing Interaction through Agents

On completing requirements analysis, and as part of the initial design stage, the software designer, possibly in conjunction with an HCI engineer, must decide on the necessity of supporting the implicit interaction modality for the proposed
application. Should they decide favorably, then a mechanism for monitoring and capturing both explicit and implicit interaction must be identified. A number of options exist, and the final decision will be influenced by factors that are application-domain dependent. However, one viable strategy involves intelligent agents.

From the previous discussion on interaction and intelligent agents, it can be seen that certain characteristics of the agent paradigm are particularly suited to capturing interaction. Practically all applications must support explicit interaction. The reactive nature of agents ensures that they can handle this common scenario, though whether it is prudent to incur the computational overhead of intelligent agents just to capture explicit user input is debatable, particularly in a mobile computing scenario. Of more interest is a situation where implicit interaction needs to be captured and interpreted. Implicit interaction calls for continuous observation of the end-user. As agents are autonomous, this does not present any particular difficulty. Though identifying some particular aspect or combination of the user's context may be quite straightforward technically, interpreting what constitutes an implicit interaction, and judging the appropriateness of explicitly responding to it in a timely manner, may be quite difficult. Hence the need for a deliberative component.

As an illustration of the issues involved, the interaction modalities supported by EasiShop, a system based on the agent paradigm, are now considered. EasiShop [8] is a functioning prototype mobile computing application, developed to illustrate the validity of the m-commerce paradigm. By augmenting m-commerce with intelligent and autonomous components, the significant benefits of convenience and added value may be realized for the average shopper as they wander their local shopping mall or high street. In the rest of this section, the synergy between agents and interaction is demonstrated through an archetypical EasiShop usage scenario.

4.1 EasiShop Usage Scenario

EasiShop is a suite of software deployed in a tripartite architecture to support m-commerce transactions. There are three distinct stages of interaction within EasiShop.

1. List construction and profile construction (explicit interaction). EasiShop is initiated when the user constructs a seed user profile. This comprises a set of bounded personal information such as age and gender. A further set of generalized information, alluding to the types of product classes which are of interest, is then obtained from the user. This last type of data is obtained when the user constructs a shopping list. The list details what products are sought by the user and, to a certain extent, under what terms and in what context their acquisition is permissible.

2. Movement (implicit interaction). EasiShop is primarily an automated shopping system. Once the user has specified their profile information and has constructed a shopping list (Fig. 2a), the various components collaborate in a transparent and unobtrusive manner to satisfy the requirements of the user. To manage this process, a certain degree of coordination is required, hence the use of mobile agents.
As the user wanders their local high street, a proxy agent migrates transparently from the user's device into a proximal store. From there, this entity may migrate to an open marketplace where representative agents from a myriad of stores (including the current proximal store) may vie for the custom of the user's agent. This process entails a reverse auction in which the user's requirements (and profile) are presented to the marketplace. Interested parties request to enter the ensuing auction, and a set of the most appropriate selling candidates (if any) is chosen by the user's agent upon completion of that auction. At this point the user's agent is ready to return to the user's device and will attempt to do so, though it may return from a different point.
Fig. 2. Shoppers must explicitly inform EasiShop about their requirements (a). EasiShop implicitly monitors shopper behavior and autonomously negotiates with nearby stores (b).
3. Decision (explicit interaction). When the agent has returned to the user's device, it is laden with the set of product offerings resulting from the auction process. This set is presented to the user, from whom an explicit decision is requested as to which offering (if any) is the most acceptable (Fig. 2b). Once this indication has been made, the user is free to collect the item from the relevant shop. The decision is also used as reinforcement feedback: selection data is garnered to determine what kind of choice is likely in the future. Should the result of the auction not meet the agent's initial requirements, the user is not informed of what has taken place; instead, the process is repeated until the shopping list is empty. The shape of this auction-and-selection cycle is sketched below.
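The following sketch illustrates the overall shape of the reverse-auction cycle just described. All names are invented for illustration; EasiShop's actual protocol and interfaces are not reproduced here.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Hypothetical sketch of the reverse-auction cycle: the buyer agent
    // announces a requirement, seller agents respond with offers, and a
    // shortlist of the best offers is carried back to the device for an
    // explicit user decision.
    public class ReverseAuction {

        public record Offer(String storeId, String product, double price) {}

        public interface Seller {
            // Returns an offer for the requested product, or null to abstain.
            Offer bid(String product, double maxPrice);
        }

        public List<Offer> run(String product, double maxPrice,
                               List<Seller> sellers, int shortlistSize) {
            List<Offer> offers = new ArrayList<>();
            for (Seller s : sellers) {
                Offer o = s.bid(product, maxPrice);
                if (o != null && o.price() <= maxPrice) {
                    offers.add(o);
                }
            }
            // Rank by price alone here; the real buyer agent would also
            // weigh preferences learned from earlier user decisions.
            offers.sort(Comparator.comparingDouble(Offer::price));
            return offers.subList(0, Math.min(shortlistSize, offers.size()));
        }
    }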
4.2 Architecture of EasiShop
Fig. 3. Architecture of EasiShop.

EasiShop functions on a three-tiered distributed architecture (Fig. 3). From a center-out perspective, the first tier is a special centralized server called the Marketplace. The design of the Marketplace permits trade, in the form of reverse auctions, to occur. The second tier is termed the Hotspot. This is a hybrid hardware and software suite, situated at each participating retail outlet, which is permanently connected to the Marketplace and which enables representative (selling) agents from the retailers, together with (buying) agents representing shoppers, to migrate to the Marketplace. The final and outermost tier in this schema is a collection of device nodes, typically smart mobile phones or PDAs.
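To make the topology concrete, the following fragment (names invented for illustration) records the round trip a buyer agent attempts across the three tiers; as noted in the usage scenario, the agent may re-enter the device tier through a Hotspot other than the one it departed from.

    // Hypothetical sketch of EasiShop's three-tier topology. A device node
    // reaches a proximal Hotspot wirelessly; each Hotspot holds a fixed
    // link to the central Marketplace.
    public class BuyerItinerary {

        public enum Tier { DEVICE, HOTSPOT, MARKETPLACE }

        // The round trip a buyer agent attempts: out to the Marketplace to
        // trade, then back to a (possibly different) device-adjacent Hotspot.
        public static final Tier[] ROUND_TRIP = {
            Tier.DEVICE, Tier.HOTSPOT, Tier.MARKETPLACE,
            Tier.HOTSPOT, Tier.DEVICE
        };
    }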
4.3 Utilization of Mobile Intelligent Agents

The principal challenge when devising appropriate agents in a preference-based auctioneering domain is to deliver personality embodiment. These personalities represent traits of buyers (users) and sellers (retailers) and are required to encapsulate dynamism (in that traits change over time) and persistence (in that a record of the traits needs to be accessible to the agents). These requirements are satisfied through file-based XML data storage containing sets of values upon which
traits depend. For example, the characteristics of a store are encapsulated in an XML file containing rules which formalize attributes such as preferences for certain types of shopper (of a certain age or gender), temporal constraints (such as pricing products according to time of day) and stock restrictions (such as pricing products according to current stock levels). The mechanism that delivers agent mobility is implemented in a separate subsystem and uses both fixed (LAN) and wireless (Bluetooth) carriers.
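A store's rule file might therefore take a shape along the following lines. This is purely illustrative: the element and attribute names are invented for this sketch and do not reproduce the actual EasiShop file format.

    <!-- Illustrative only: names do not reproduce the EasiShop format. -->
    <store id="store-042">
      <!-- Preference for a shopper demographic: a small discount. -->
      <shopperPreference ageMin="18" ageMax="30" gender="any" priceFactor="0.95"/>
      <!-- Temporal constraint: cheaper during the early-evening lull. -->
      <temporalRule from="17:00" to="19:00" priceFactor="0.90"/>
      <!-- Stock restriction: raise prices when stock runs low. -->
      <stockRule product="umbrella" belowLevel="10" priceFactor="1.10"/>
    </store>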
5 Related Research

Intelligent agents encompass a broad and dynamic research area. However, deploying agents on mobile devices has, until recently, been unrealistic, primarily due to hardware limitations. Ongoing developments are increasingly rendering these limitations obsolete, and a number of agent environments have been described in the literature. In some cases, existing platforms have been extended. For example, LEAP [9] has evolved from the JADE [10] platform. Likewise, microFIPA-OS [11] is an extension of the well-known open source platform FIPA-OS [12]. In the case of BDI agents, Agent Factory Lite [13] is one environment that supports such agents.

One classification of agents that is closely associated with interfaces is the aptly named Interface Agents. Maes [14] has done pioneering work in this area and regards interface agents as potential collaborators with users in their everyday work, to whom certain tasks could be delegated. Ideally, interface agents would take the form of conversational characters which would interact with the user in a social manner.

Recently, the use of intelligent agents for ambient intelligence applications has been explored. Satoh [15] describes a framework, based on mobile agents, that allows personalized access to services for mobile users. Grill et al. [16] describe an environment that supports the transfer of agent functionality into everyday objects. Finally, Hagras et al. [17] describe iDorm, an intelligent dormitory that uses embedded agents to realize an AmI environment; the system uses fuzzy techniques to derive models of user behaviors.
6 Conclusion

Mobile computing scenarios offer a different set of challenges from those traditionally experienced in networked workstation environments. Implicit interaction offers software designers an alternative model for capturing user intent and significant opportunities to proactively aid the user in the fulfillment of their tasks. However, capturing and interpreting implicit user interaction requires that some intelligence either be hosted on the user's device or embedded in their surroundings. In this paper, the use of intelligent agents has been discussed as a promising approach to managing both the traditional explicit interaction modality and, where necessary, the implicit interaction modality. Such an approach offers software designers an intuitive method of incorporating the implicit interaction modality into
their designs. Attention is frequently at a premium in mobile computing environments; thus the use of autonomous agents for managing user interaction in an intelligent manner offers a significant opportunity for enhancing the user experience, a fundamental prerequisite if mobile computing is to fulfill its potential.
References

1. ISO/IEC 9995-8:1994, Information technology -- Keyboard layouts for text and office systems -- Part 8: Allocation of letters to the keys of a numeric keypad.
2. Obrenovic, Z., Starcevic, D., Modeling Multimodal Human-Computer Interaction, IEEE Computer, vol. 37, no. 9, 2004, pp. 65-72.
3. Schmidt, A., Implicit Human Computer Interaction through Context, Personal Technologies, vol. 4, no. 2&3, June 2000, pp. 191-199. Springer-Verlag.
4. Brooks, R.A., Intelligence without Representation, Artificial Intelligence, vol. 47, 1991, pp. 139-159.
5. Rao, A.S., Georgeff, M.P., Modelling Rational Agents within a BDI Architecture, Proceedings of Principles of Knowledge Representation & Reasoning, San Mateo, CA, 1991.
6. JACK - The Agent Oriented Software Group, http://www.agent-software.com.
7. Wooldridge, M., Jennings, N.R., Intelligent Agents: Theory and Practice, The Knowledge Engineering Review, vol. 10, no. 2, 1995, pp. 115-152.
8. Keegan, S., O'Hare, G.M.P., EasiShop: Enabling uCommerce through Intelligent Mobile Agent Technologies, Proceedings of the 5th International Workshop on Mobile Agents for Telecommunication Applications (MATA'03), Marrakesh, Morocco, 2003.
9. Bergenti, F., Poggi, A., LEAP: A FIPA Platform for Handheld and Mobile Devices, Proceedings of the 8th International Workshop on Agent Theories, Architectures and Languages (ATAL-2001), Seattle, WA, USA, August 2001.
10. Bellifemine, F., Rimassa, G., Poggi, A., JADE - A FIPA Compliant Agent Framework, Proceedings of the 4th International Conference and Exhibition on the Practical Application of Intelligent Agents and Multi-Agents, London, 1999.
11. Tarkoma, S., Laukkanen, M., Supporting Software Agents on Small Devices, Proceedings of AAMAS, Bologna, Italy, July 2002.
12. Foundation for Intelligent Physical Agents (FIPA), http://www.fipa.org.
13. Muldoon, C., O'Hare, G.M.P., Collier, R.W., O'Grady, M.J., Agent Factory Micro Edition: A Framework for Ambient Applications, Proceedings of Intelligent Agents in Computing Systems, Reading, UK, May 2006.
14. Maes, P., Agents that Reduce Work and Information Overload, Communications of the ACM, vol. 37, no. 7, 1994, pp. 30-40.
15. Satoh, I., Software Agents for Ambient Intelligence, Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC'2004), 2004, pp. 1147-1150.
16. Grill, T., Ibrahim, I.K., Kotsis, G., Agents Visualization in Smart Environments, Proceedings of the 2nd International Conference on Mobile Multimedia (MOMM2004), Bali, Indonesia, 2004.
17. Hagras, H., Callaghan, V., Colley, M., Clarke, G., Pounds-Cornish, A., Duman, H., Creating an Ambient-Intelligence Environment Using Embedded Agents, IEEE Intelligent Systems, vol. 19, no. 6, 2004, pp. 12-20.