Mobile User Interfaces and their Utilization in a Smart City

Bertrand David, Yun Zhou, Tao Xu, René Chalon
Université de Lyon, CNRS, Ecole Centrale de Lyon, LIRIS, UMR 5205, 36 avenue Guy de Collongue, F-69134 Ecully Cedex, France
Contact: [email protected], [email protected], [email protected], [email protected]
Abstract - Today's Web 2.0 is collaborative, mobile and contextual. It takes into account human actors as well as things: location-based services (LBS) and the Internet of Things are now an integral part of the Internet. We refer to this approach by the acronym MOCOCO (MObility, COntextualization, COllaboration). Our research work is generic, with multiple applications in working, learning and societal situations. We take into account professional and home working situations, contextual mobile learning situations for professionals and teenagers, and Smart City applications: transportation, goods distribution, and local sport and cultural activities. In this paper we present our approach to contextual mobile user interfaces and their application to the Smart City.

Keywords: Mobile web, location-based services, Internet of Things, mobility and nomadism, in-environment user interface, environment-dependent and independent user interface, mobility, contextualization, collaboration
1 Introduction
Ambient intelligence (AmI) refers to electronic environments that are sensitive and responsive to the presence of people [1]. In an ambient intelligence world, devices work together to support people in carrying out their everyday activities, tasks and rituals in an easy and natural way, using information and intelligence hidden in the network connecting these devices. AmI is a distributed system [2]. It represents a new vision of daily life in which many kinds of sensors and computing devices provide pervasive intelligence in the surrounding environment, supporting user activities and interaction. Computing technology will exist in everything that surrounds us (devices, appliances, objects, clothes, materials), and everything will be interconnected by a ubiquitous network. The system formed by all these intelligent things (also called the Internet of Things) will interface with humans by means of more advanced interfaces that are natural and flexible and adapt to the needs and preferences of each user.
The final goal is to obtain an adaptive and "intelligent" system that assists humans in their day-to-day activities. Attaining Ambient Intelligence first requires a mechanism that enables sensors and computing devices to cooperate. This mechanism must also detect changes in context that could affect the applications running on user devices. In computing, the discipline that deals with this kind of problem is "context-aware computing" [3]. Context-aware computing, first discussed by Schilit and Theimer [4] in 1994, refers to a general class of mobile systems that can sense their physical environment and adapt their behavior accordingly. Inside this area, a special kind of application has emerged: context-aware middleware solutions. In general, they provide the same communication functionality as traditional middleware, but they also provide context management and adaptation to deal with the different resources present in the environment [5]. From a device point of view, desktops and laptops are no longer unique in their kind: mobile devices such as tablet PCs, PDAs and smartphones, known as handheld or wearable computers, are increasingly used. In this paper we explain our proposal for a context-aware middleware for ambient intelligence and a taxonomy of user interfaces allowing mobile and nomadic actors to participate in this real augmented environment. We also describe a particular situation, a subset of the Smart City, i.e. City 2.0 as a real augmented environment for digital-born actors [6].
2 Context-aware middleware for ambient intelligence
The goal of our research is to develop a context-aware middleware for ambient intelligence. This middleware collects contextual information from various interaction devices (data gloves, Wiimote, etc.), techniques (gesture recognition, marker or object recognition, etc.) and sensors (RFID, QR codes, etc.), and uses these contexts to provide relevant information and/or services to the user, where relevancy depends on the user's task [7]. We propose a context-aware middleware for building and rapidly prototyping context-aware services in ambient intelligence environments, which consist of various computational entities (smartphones, computers and tablets) with appropriate UI devices and various sensor-based devices (RFID, camera and marker). The architecture is made up of the following components: sensor data fusion, reasoning engine, Context Knowledge Base (KB), context database and context query engine. OWL is used to express a context model taking into account mainly, but not only, our laboratory sensors. Its main component services are (Figure 1):
• Sensor data fusion: collects sensor information and transforms it into OWL (we use the OWL ontology language for context description);
• Reasoning engine and Context KB: check context consistency and deduce high-level context from low-level context;
• Context database: stores the context data;
• Context query engine: handles queries from the application.
[Figure 1. Architecture of the context-aware middleware: an Application Layer on top of a Middleware Layer comprising the Sensor Data Fusion, Reasoning Engine, Context KB, Context Database and Context Query Engine components.]
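To make this pipeline concrete, the listing below is a minimal sketch in Python with the rdflib library; the ontology terms (ctx:Person, ctx:detectedBy, ctx:installedIn, ctx:isLocatedIn) and the namespace URI are illustrative assumptions, not our actual OWL context model. It shows how the sensor data fusion component could turn a low-level RFID event into triples, how the reasoning engine could deduce a high-level context, and how the context query engine could answer an application query with SPARQL.

# Minimal, illustrative sketch of the middleware pipeline: sensor
# reading -> OWL/RDF triples -> rule-based deduction -> SPARQL query.
# The ontology terms below are assumptions, not the paper's model.
from rdflib import Graph, Namespace, RDF

CTX = Namespace("http://example.org/context#")  # hypothetical namespace
g = Graph()
g.bind("ctx", CTX)

def fuse_rfid_reading(person_id, reader_id, room_id):
    """Sensor data fusion: turn a low-level RFID event into triples."""
    person, reader, room = CTX[person_id], CTX[reader_id], CTX[room_id]
    g.add((person, RDF.type, CTX.Person))
    g.add((reader, RDF.type, CTX.RFIDReader))
    g.add((reader, CTX.installedIn, room))
    g.add((person, CTX.detectedBy, reader))

def deduce_locations():
    """Reasoning engine: derive high-level context (a person's location)
    from low-level context (detection events and reader placement)."""
    for person, _, reader in g.triples((None, CTX.detectedBy, None)):
        for _, _, room in g.triples((reader, CTX.installedIn, None)):
            g.add((person, CTX.isLocatedIn, room))

# Context query engine: answer an application query over the context data.
fuse_rfid_reading("Tao", "reader_hall", "hall")
deduce_locations()
for row in g.query("SELECT ?room WHERE { ctx:Tao ctx:isLocatedIn ?room }",
                   initNs={"ctx": CTX}):
    print(row.room)  # -> http://example.org/context#hall

In a full deployment, the hand-written deduction rule would be replaced by the OWL reasoner over the Context KB, and the triples would be persisted in the context database rather than kept in memory.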
A simple scenario illustrates how this system works (Fig. 2(b)). In the morning, Tao enters the laboratory; a friend behind him says hello, and Tao turns round and smiles at him. The camera therefore misses Tao's face, yet the RFID reader precisely identifies Tao from his card and transmits his location to the server. Tao hangs his coat up in the hall before going to his room, leaving his card in his coat pocket. This does not matter: the marker in his room, captured by his wearable camera, can locate him and update the data. In this way, multiple sensors enable more robust context recognition.
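The scenario suggests a simple fallback policy among sensors. The following sketch (Python; the sensor names, confidence weights and data structures are illustrative assumptions rather than our implementation) shows one way the fusion component could fall back from a failed face detection to the RFID or marker reading, as in the scenario above.

from dataclasses import dataclass
from typing import List, Optional

# Hypothetical per-sensor reliability weights; the paper gives no
# concrete values, so these are illustrative assumptions.
CONFIDENCE = {"face_camera": 0.9, "rfid": 0.8, "marker": 0.7}

@dataclass
class Reading:
    sensor: str     # e.g. "rfid"
    user: str       # e.g. "Tao"
    location: str   # e.g. "hall"
    valid: bool     # did the sensor actually produce a result?

def locate(user: str, readings: List[Reading]) -> Optional[str]:
    """Return the location from the highest-confidence sensor that
    produced a valid reading for this user, or None if none did."""
    candidates = [r for r in readings if r.valid and r.user == user]
    if not candidates:
        return None
    return max(candidates, key=lambda r: CONFIDENCE[r.sensor]).location

# Scenario from the text: the camera misses Tao's face (invalid
# reading), but the RFID reader still identifies him in the hall.
readings = [Reading("face_camera", "Tao", "", valid=False),
            Reading("rfid", "Tao", "hall", valid=True)]
print(locate("Tao", readings))  # -> hall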