Passive identification and control of arbitrary devices in smart environments

Andreas Braun, Felix Kamieth

Fraunhofer Institute for Computer Graphics Research - IGD, Darmstadt, Germany

{andreas.braun, felix.kamieth}@igd.fraunhofer.de

Abstract. Modern smart environments are comprised of multiple interconnected appliances controlled by a central system. Pointing at devices in order to control them is an intuitive way of interaction, often performed unconsciously when switching TV channels with an infrared remote, even though it is usually not required. However, only a limited number of devices provide the facilities required for this kind of interaction, since it requires attaching transceivers and often results in having to use multiple remote controls. We propose a system that gives a user the ability to intuitively control arbitrary devices in smart environments by identifying the appliance an interaction device is pointed at and providing means to manipulate it. The system is based on identifying the position and orientation of said interaction device and registering these values to a virtual representation of the physical environment, which is used to identify the selected appliance. We have created a prototype interaction device that manipulates the environment using gesture-based interaction.

Keywords: Pointing device, gesture-based interaction, smart environments

1 Introduction

Smart environments are comprised of various interconnected devices, e.g. lighting and heating. They can be roughly grouped into sensors, actuators, output devices and computational elements. Typically, these devices are heterogeneous concerning their purpose, interaction metaphors and technological basis. Modern home automation and middleware systems are able to abstract the technological basis to a certain extent, but heterogeneous input modalities remain an issue. Such systems are usually controlled by a central server, and the user interacts with them using remote controls or network-enabled systems like smartphones or tablets. Gestural interaction is a natural, intuitive way of controlling programs that has enjoyed considerable market success in recent years with devices like the Nintendo Wii.

We have developed a methodology that allows appliances in smart environments to be controlled using sensor-equipped interaction devices. Based on determining the position and orientation of an input device, it is possible to register those values into a virtual representation of the smart environment and identify the appliance the input device is currently pointing at. Based on this methodology we have created a first prototype acting as a proof-of-concept device for this kind of user-system interaction. This prototype focuses on the sensor-equipped interaction device and is integrated into a simulated smart environment instead of a real one. It allows the user to modify the virtual appliances by performing gestures with the actual prototype.

2 Related Work

Virtual representations of smart environments have been commonly used in research, e.g. as a user interface metaphor. A three-dimensional model of the smart environment is often used directly as the graphical user interface, which either allows direct manipulation of appliances [1] or is augmented with classical user interface items [2], distinguishing between selection and manipulation. Another line of research has investigated virtual representations of smart environments to allow rapid prototyping and user-centered testing of apartments and buildings in the virtual realm [3], e.g. in the context of ambient assisted living, creating apartments that are suited for persons with physical impairments.

There are various principles of indoor localization available. A common method is triangulation or multilateration of radio-frequency signals, in which a device is tracked using geometrical information of several senders/receivers and time-of-flight data [4]. However, precision is usually limited and therefore this method is mostly suited for detecting the occupation of a room, not the exact location within it. Optical tracking is another method for indoor localization that uses feature extraction in combination with camera parameters and the known positions of several cameras to calculate the position of an object in 3D coordinates, e.g. using a stereo camera system and blob detection to track multiple persons [5]. An unobtrusive but technically complex solution is the use of active floor systems. These are based on pressure or presence sensing equipment in the floor that is able to determine the position of one or more persons. SensFloor [6] is such a system, relying on capacitive sensors that are embedded into the floor covering and communicate wirelessly with a central system.

Research regarding pointing at devices in smart environments has resulted in the development of the gesturePen [7], a line-of-sight based system that is able to identify appliances. It allows interaction with devices that are equipped with wireless RF-tags. It therefore allows manipulation based on pointing without the use of a virtual environment, but requires every controllable appliance to be equipped with additional hardware.

3 Appliance identification

As previously mentioned, the appliance identification method is based on two premises: the smart environment needs the ability to reliably track the position and orientation of the interaction device, and the system needs a virtual representation of the smart environment in which all appliances and their respective positions are modeled.

3.1 Methodology

The basic idea of this method is to first determine the position and orientation of an interaction device in the smart environment. These values are registered into the virtual representation of the latter, making it possible to apply discovery methods to find the appliance the device is pointed at in the real world. The system is then able to give commands to the connected appliance. The general method can be grouped into four main steps leading from detecting the interaction device to sending commands to the devices placed in the smart environment. The first step is, as noted in the prerequisites, to provide means for the smart environment to register the position and orientation of the interaction device. The second step is to register orientation and position into the virtual representation of the environment. In the third phase we use this data to identify the appliance via discovery methods in the virtual environment. Finally, the performed user interaction is registered and the selected device is controlled accordingly. The whole process is shown in Fig. 1. A more detailed description of the different steps and how they have been realized in the created prototype is given in the following subsections.

Fig. 1. Process to manipulate appliances via interaction device
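As an illustration, the four steps can be pictured as a simple control loop. The following sketch uses hypothetical interface names (PoseTracker, VirtualEnvironment, GestureRecognizer, ApplianceController) and is not the actual implementation:

```java
// Illustrative sketch of the four-step process; all types are hypothetical placeholders.
public final class PointingControlLoop {

    interface PoseTracker         { double[] position(); double[] orientation(); }      // step 1
    interface VirtualEnvironment  { String applianceAt(double[] pos, double[] dir); }   // steps 2 and 3
    interface GestureRecognizer   { String recognizedGesture(); }                       // may return null
    interface ApplianceController { void send(String applianceId, String gesture); }    // step 4

    public static void run(PoseTracker tracker, VirtualEnvironment env,
                           GestureRecognizer gestures, ApplianceController controller) {
        while (true) {
            double[] position    = tracker.position();      // step 1: track the interaction device
            double[] orientation = tracker.orientation();
            // steps 2 and 3: register the pose in the virtual representation and discover the appliance
            String appliance = env.applianceAt(position, orientation);
            String gesture   = gestures.recognizedGesture();
            if (appliance != null && gesture != null) {
                controller.send(appliance, gesture);         // step 4: control the selected appliance
            }
        }
    }
}
```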

3.2 Interaction device position and orientation detection

The position and orientation of any object can be described using six degrees of freedom. Most commonly, Cartesian coordinates are used to define the position and Tait-Bryan angles (roll, pitch and yaw) to describe the orientation. These values can be detected combined or independently of each other. Our proposed system uses an independent approach, in which the position is determined using an indoor localization system based on an active floor, able to sense the presence and position of a person, while the orientation is determined by sensors integrated into the interaction device. The position of the device has to be tracked with a certain precision in order to achieve sufficient accuracy of the appliance selection algorithm. RF-based tracking of devices using time-of-flight data to track the position of a sender usually suffers from errors in the range of several meters [9].

There are various options to determine the device position within the smart environment. It is possible to use optical markers on the interaction device that a camera can detect and calculate the position from [8]. Another option is to use a generic indoor localization method and estimate the device position using average body parameters, e.g. shoulder height and arm length. This requires factoring in an amount of uncertainty that has to be considered when designing the identification of the virtual appliances and the feedback that output devices give to the user.

A common method to determine the orientation of a device is an IMU (inertial measurement unit), comprised of accelerometers, gyroscopes and magnetometers, which is able to determine the orientation and velocity of an object. These sensor units are nowadays installed in many multimedia devices, ranging from game console controllers to smartphones. The velocity data can additionally be used to improve the position detection of the device, using dead-reckoning1 to estimate device location changes, thus creating more samples for the localization.
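As an illustration of how the Tait-Bryan angles can be derived from such sensors, the following sketch assumes a device held approximately still, a three-axis accelerometer and a three-axis magnetometer (the prototype in section 4 uses only a single-axis magnetometer); the exact signs depend on the sensor's axis conventions, and this is not the implementation used in our system:

```java
public final class OrientationSketch {

    /** Returns {roll, pitch, yaw} in radians for a (nearly) static device. */
    public static double[] estimate(double ax, double ay, double az,
                                    double mx, double my, double mz) {
        // Roll and pitch follow from the gravity vector measured by the accelerometer.
        double roll  = Math.atan2(ay, az);
        double pitch = Math.atan2(-ax, Math.sqrt(ay * ay + az * az));

        // Tilt-compensate the magnetometer reading, then derive yaw
        // (heading relative to magnetic north).
        double bx = mx * Math.cos(pitch)
                  + my * Math.sin(pitch) * Math.sin(roll)
                  + mz * Math.sin(pitch) * Math.cos(roll);
        double by = my * Math.cos(roll) - mz * Math.sin(roll);
        double yaw = Math.atan2(-by, bx);

        return new double[] { roll, pitch, yaw };
    }
}
```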

3.3 Registration in virtual environment representation

Transforming position and orientation to the virtual representation is trivial using simple geometric transformations, based on the system's knowledge about the real environment. We use the virtual position to project a ray along the virtual orientation and use ray intersection methods with the bounding volumes (BV) of the modeled appliances. There are various strategies to modify this ray intersection. The obvious solution is to select the first bounding volume that is hit. Other options include using metadata to prioritize important appliances or giving the user the option to choose between all detected objects on the path of the ray via the user interface.

Another option to modify this process is to use virtual appliances. These are regions within the virtual representation that have been assigned to a certain appliance without a physical reference. This is useful to model very small appliances like light switches or hidden appliances like heating.

1 Method to estimate the location relative to a known position, e.g. by integrating over velocity values generated by an IMU

For example, a region at the ceiling of the virtual representation can be assigned to manipulate the lighting, or a region on the floor can be assigned to heating control. In either case the user needs feedback from the user interface to successfully select these strictly virtual regions. Very small or distant appliances result in small BVs, and occluded appliances cause ambiguous intersections; both cases have to be taken into account using various strategies of modifying BVs in the virtual realm. Examples are distance-based scaling or morphing the shapes of occluded BVs. Similar strategies can be applied to increase the amount of used space if the appliances are sparsely distributed in the smart environment.
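A minimal sketch of the first-hit strategy, using axis-aligned bounding volumes and the standard slab intersection test, is given below; the class and field names are illustrative and not those of our implementation:

```java
import java.util.List;

public final class ApplianceSelector {

    /** Axis-aligned bounding volume of a modeled appliance (illustrative type). */
    public static final class Aabb {
        final double[] min, max;   // opposite corners, in room coordinates
        final String applianceId;  // identifier of the associated appliance
        public Aabb(double[] min, double[] max, String applianceId) {
            this.min = min; this.max = max; this.applianceId = applianceId;
        }
    }

    /** Returns the id of the closest bounding volume hit by the ray, or null if none is hit. */
    public static String select(double[] origin, double[] dir, List<Aabb> volumes) {
        String best = null;
        double bestT = Double.POSITIVE_INFINITY;
        for (Aabb bv : volumes) {
            double tNear = 0.0, tFar = Double.POSITIVE_INFINITY;
            boolean hit = true;
            for (int i = 0; i < 3 && hit; i++) {           // slab test per axis
                if (Math.abs(dir[i]) < 1e-9) {
                    hit = origin[i] >= bv.min[i] && origin[i] <= bv.max[i];
                } else {
                    double t1 = (bv.min[i] - origin[i]) / dir[i];
                    double t2 = (bv.max[i] - origin[i]) / dir[i];
                    tNear = Math.max(tNear, Math.min(t1, t2));
                    tFar  = Math.min(tFar, Math.max(t1, t2));
                    hit = tNear <= tFar;
                }
            }
            if (hit && tNear < bestT) { bestT = tNear; best = bv.applianceId; }
        }
        return best;
    }
}
```

Prioritization by metadata or user-driven disambiguation can be layered on top by collecting all intersected volumes instead of returning only the closest one.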

4 Interaction device prototype

We have created a first proof-of-concept interaction device (Fig. 2), equipped with a minimal IMU system and a gestural interaction modality. The system is based on the Arduino2 platform and uses a three-axis accelerometer and a single-axis magnetometer to determine orientation. The accelerometers are used to determine roll and pitch of the device; the magnetometer detects yaw relative to the earth's magnetic field. The Arduino board is additionally equipped with a wireless communication chip based on the ZigBee protocol that is used to transfer the generated data to the system. Signal processing is implemented on the Arduino microcontroller. This includes calibration of the sensors and scaling of the acquired data to generate radian values that can be used in the subsequent processing steps. This prototype possesses no facilities that aid the localization (with the exception of supporting dead reckoning) and therefore has to rely on the smart environment to track the position of the user and extrapolate the device position.

User interaction is realized using a single button and interpretation of the accelerometer data using the wiigee3 library for Java, which supports learning gestures from collected training data using machine learning methods. In the prototype, appliance selection and appliance manipulation through gestures are separated. If the attached button is pushed, the device stops appliance identification, assumes the most recently selected appliance as the desired one and subsequently uses accelerometer data to detect performed gestures that activate certain events. A small GUI-based program supports the learning process and stores gestures into readable files. These can be freely associated with different manipulations of the various appliances.

2 www.arduino.cc - open-source electronics prototyping platform

3 www.wiigee.org - created for interfacing with the Nintendo Wii Remote

For example, a generic left-to-right gesture could be used to switch channels if the television is selected, or to increase the brightness if the interaction device is pointing at the room lighting.
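The association between learned gestures and appliance manipulations can be pictured as a lookup from (selected appliance, recognized gesture) pairs to commands. The sketch below uses illustrative names only; it does not reproduce the wiigee API or the file format used by our prototype:

```java
import java.util.HashMap;
import java.util.Map;

public final class GestureCommandMap {

    private final Map<String, String> commands = new HashMap<>();

    /** Associate a gesture, performed while a given appliance is selected, with a command. */
    public void associate(String applianceId, String gestureId, String command) {
        commands.put(applianceId + "/" + gestureId, command);
    }

    /** Look up the command for the current selection and gesture, or null if none is defined. */
    public String commandFor(String applianceId, String gestureId) {
        return commands.get(applianceId + "/" + gestureId);
    }

    public static void main(String[] args) {
        GestureCommandMap map = new GestureCommandMap();
        // The same generic gesture triggers different manipulations
        // depending on which appliance is currently selected.
        map.associate("tv",       "left-to-right", "nextChannel");
        map.associate("lighting", "left-to-right", "increaseBrightness");
        System.out.println(map.commandFor("tv", "left-to-right")); // prints: nextChannel
    }
}
```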

Fig. 2. Image of the Arduino-based prototype interaction device with accelerometers, magnetometer and button visible on the top board

5 Simulated environment

In order to realize this proof of concept we have established a minimal simulated environment equipped with various virtual appliances that change their state based on performed gestures. At this point the virtual environment is not based on a real room and is not connected to any indoor localization method. However, it is possible to manually control the position of the person in order to test the device identification from various points within the simulated environment. An example scene of this virtual environment is shown in Fig. 3. The user is in a virtual room and can control four appliances: two different windows, a TV set and a radio. In this scene the virtual person points at the TV, causing the associated BV to be highlighted. This way the user gets concise visual feedback, allowing him to easily determine that he has successfully selected the desired device. Various gestures can then be performed, which in this prototype state are limited to changing the textures of the appliances. The system can be connected to a home automation system or middleware at a later point in time to allow manipulation of actual appliances in a real smart environment; instead of mapping detected gesture events to the virtual devices, they can then be sent to the home automation system controlling the actual device.

Fig. 3. Simulated environment with ray projected from the manually controlled person, bounding volumes of all integrated appliances and the green-tinted selected bounding volume

The acquired orientation data is registered to the simulated environment and used to cast a ray in the desired direction. We use the trivial method of selecting the first device whose bounding volume is intersected and use gestures to manipulate the appliances in the simulated environment. Coming back to the process initially modeled, the realization in our proof-of-concept system can be summarized as shown in Fig. 4.

Fig. 4. Passive identification and control process as realized in the current prototype

6 Limitations

The whole system is based on the prerequisites that the smart environment can reliably track the position and orientation of the interaction device and that the system has a virtual representation of the environment with all appliances modeled. This limits the system to static environments. If there are appliances in the environment that are moved regularly, strategies have to be employed that compensate for this limitation, e.g. providing simple interfaces that allow the user to reconfigure the environment.

The IMU uses magnetometers relying on the earth's magnetic field to detect yaw. In home environments there may be various other sources of electromagnetic or static magnetic fields that disturb these measurements and, in the worst case, prevent the method from working altogether. It may be viable to include a Hall sensor that can detect external magnetic fields in order to let the user know why the interaction is interrupted, for example through GUI notifications or audio signals; alternatively, it is possible to use on-the-fly calibration methods to allow for improved results in changed environments (see the sketch at the end of this section).

Currently, differences in body posture and height are not explicitly compensated in the calculation process. It is assumed that appropriately sized bounding volumes and clear visual feedback, such as highlighting the selected appliance, will allow the user to perform the interaction with sufficient precision. However, this will have to be confirmed in a planned future evaluation of a completely implemented system in a real smart environment.
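One simple safeguard, sketched below under the assumption of a three-axis magnetometer reporting values in microtesla, is to flag measurements whose magnitude falls outside the typical range of the earth's magnetic field (roughly 25 to 65 µT) and warn the user that yaw estimates may be unreliable; this only illustrates the idea and is not part of the current prototype:

```java
public final class MagneticDisturbanceDetector {

    private static final double EARTH_FIELD_MIN_UT = 25.0; // approximate lower bound of the earth field
    private static final double EARTH_FIELD_MAX_UT = 65.0; // approximate upper bound of the earth field

    /** Returns true if the measured field is unlikely to be the undisturbed earth field. */
    public static boolean isDisturbed(double mxUt, double myUt, double mzUt) {
        double magnitude = Math.sqrt(mxUt * mxUt + myUt * myUt + mzUt * mzUt);
        return magnitude < EARTH_FIELD_MIN_UT || magnitude > EARTH_FIELD_MAX_UT;
    }
}
```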

7 Conclusion and future work

By relying on a capable smart environment control unit and sensor-equipped interaction devices, the complex task of identifying the appliance the user points at is reduced to two components, allowing for a flexible solution that does not require the controlled appliances to feature active receivers or transmitters communicating with the interaction device. We have created a first realization of the sensor-equipped interaction device as a proof of concept, relying on a minimal IMU system and a demonstration virtual representation that is not connected to a home automation system.

As future work we intend to connect the system to our living lab, which features various appliances connected to a home automation system. In combination with active-floor indoor localization technologies, the whole process of passively identifying arbitrary devices can be realized in an actual smart environment. We will perform an evaluation of the complete system, focusing on the effects of different body parameters and pointing habits, in order to further improve the methodology. Other points of interest include testing commonly available devices that include IMU-like systems, e.g. smartphones and tablets. The main interest is to evaluate whether their integrated sensor units are sufficiently precise to perform the device identification with otherwise unchanged parameters. We also plan to test other methods of indoor localization, e.g. optical tracking of a marker attached to the interaction device or depth-imaging object recognition, which should allow tracking of both position and orientation of an interaction device, eventually allowing the user himself to act as the input device. These methods should be compared with regard to technical viability, performance and user acceptance. A final point of interest is the optimization of the appliance identification in the virtual representation using the BV modifications hinted at in section 3.3, which should improve the method regarding the identification of small or distant appliances, view-point occlusion and sparsely distributed devices in the smart environment.

8 References

1. Borodulkin, L., Ruser, H., Tränkler, H.-R.: 3D virtual smart home user interface. In: Proceedings of the IEEE International Symposium on Virtual and Intelligent Measurement Systems, Girdwood, AK, 2002.
2. Shirehjini, A.A.N.: A novel interaction metaphor for personal environment control: Direct manipulation of physical environment based on 3D visualization. Computers & Graphics, Special Issue on Pervasive Computing and Ambient Intelligence, Vol. 28, Elsevier Science, 2004, pp. 667–675.
3. Kamieth, F., Roeder, P., Hellenschmidt, M.: Accelerated User-Centred Design of AAL solutions using simulated environments. In: Proceedings Ambient Assisted Living, VDE Verlag GmbH, January 27-28, 2009, Berlin, Germany.
4. Lim, H., Kung, L., Hou, J.C., Luo, H.: Zero-Configuration, Robust Indoor Localization: Theory and Experimentation. In: Proceedings of IEEE INFOCOM, pp. 123-125, 2006.
5. Krumm, J., Harris, S., Meyers, B., Brumitt, B., Hale, M., Shafer, S.: Multi-camera multi-person tracking for EasyLiving. In: Proceedings of the Third IEEE International Workshop on Visual Surveillance, 2000.
6. Joint Project SensFloor, http://www.sensfloor.de/
7. Swindells, C., Inkpen, K., Dill, J., Tory, M.: That one there! Gesture for on-demand device identity. In: Proceedings UIST 2002, Paris.
8. Welch, G., Bishop, G., Vicci, L., Brumback, S., Keller, K., Colucci, D.: The HiBall tracker: High-performance wide-area tracking for virtual and augmented environments. In: Proceedings of the ACM Symposium on Virtual Reality Software and Technology, December 1999.
9. Taneja, S., Akcamete, A., Akinci, B., Garrett, J.H., East, E.W., Soibelman, L.: Analysis of three indoor localization technologies to support facility management field activities. In: Proceedings of the International Conference on Computing in Civil and Building Engineering, Nottingham, UK, 2010.
