Augmented Reality visualization interface for Biometric Wireless Sensor Networks

Débora Claros, Mario De Haro, Miguel Domínguez, Carmen de Trazegnies, Cristina Urdiales, and Francisco Sandoval

Grupo ISIS, Departamento de Tecnología Electrónica, E.T.S.I. Telecomunicación, Universidad de Málaga, Campus de Teatinos, 29071, Málaga, España
[email protected], [email protected], [email protected]

Abstract. Wireless sensor networks are being intensively used in health care environments to collect biometric signals from patients. This paper presents an augmented reality visual interface based on artificial markers, intended to be used by medical staff to monitor, in a fast and flexible way, real-time information from different kinds of sensors attached to patients in care centers. The system can be applied to any kind of information source. In this work, it has been tested with temperature and humidity sensors.

1 Introduction

Sensors play an important role in many aspects of everyday life. They can be found in a large number of systems: airplanes, cars, surgical instruments, buildings, etc. Many control and monitoring processes could not be carried out without them. Communication technologies are in constant evolution towards integration and wireless connectivity, and this has led to new concepts, such as "Ambient Intelligence" (AmI) [1][2], which refers to an environment where this technology is integrated and hidden in the background, but is also adaptive, non-invasive and capable of interacting with people or objects without explicit human supervision. In fact, AmI is a technology with an ambitious purpose: assisting users by creating a sensing and processing network where artificial intelligence resides in the distributed and relatively simple network elements, rather than in a central processing unit. It provides the basic criteria for intelligent environments where devices are "invisible" to us. Sensors and actuators with processing capacity are used for environment monitoring, human or object identification, health monitoring, etc.

In these terms, a Wireless Sensor Network (WSN) consists of a large number of sensors integrated into wireless nodes that communicate through multihop radio links. Each node contains a control element to process the received information and perform different tasks that can be distributed along the network depending on the application [3]. WSNs have become a solution to the need for an efficient way to process information collected from distributed sensor networks.

As these technologies are introduced into daily life, new user interfaces need to be proposed to fulfill new user requirements. AmI has opened new perspectives in human activities related to accessing information from systems integrated everywhere. In this field, the application of Augmented Reality (AR) to WSNs can provide the user with a visual interface to access the information from the network in a very handy way. AR techniques have developed quickly in the last decade, opening up the possibility of new computer graphics, vision and interfacing techniques [4]. From a technical point of view, AR is an intelligent combination of several technologies. It focuses on enriching the user's perception of the real world with additional information. This additional information, mostly visual, is presented to the user in an intuitive way, i.e. as contextual information completing or complementing the perceptual information from the real world [4]. By this means, the user is neither constrained to the data readings available from a personal computer, nor forced to select the data source, type or format. Information can simply be overlaid on the real world in textual form, as a virtual object, or even as an animated avatar [5][6][7].

This article presents an application that superimposes on the real world virtual images containing identification data and signal graphs related to the measurements taken by the network sensors. Thus, the user does not need to specify which data collection he will be using at every moment. The combination of AR and WSN technologies provides him with the information from sensor readings at the place and time where they will be used, by means of a friendly and intuitive visual interface. Intelligent systems require this kind of high-level cognitive interface to be usable by any potential user. Hence, users do not need a deep understanding of the low-level processes running in the sensor network, as the system supplies them with the information they need in a simple way. This new working philosophy frees the user from the need to select and collect data prior to its use. Hence, it not only provides a friendly working environment but also reduces the risk of information mess or loss.

This paper is structured as follows: in Section 2 the system and its components are described, in Section 3 the results of the tested prototype are analyzed, and finally, conclusions are presented in Section 4.

2 System overview

The developed application has two main components. First, a deployed WSN consisting of a set of nodes collecting biometric information and a single node acting as a sink, connected to a central computer where the information is saved in a database. Second, a set of AR tools capable of recognizing each sensor and presenting its readings in visual form.

2.1 The wireless sensor network application

The sensor network application has been implemented on a set of Tmote Sky modules from Moteiv Corporation [8]. These modules are low-power devices with integrated sensors, radio, antenna, microcontroller and programming capabilities. Humidity, temperature and solar radiation sensors are integrated in the module. Additionally, general-purpose inputs/outputs, analog/digital converters, and SPI, UART and I2C interfaces available in the microcontroller can be used to attach other non-invasive sensors when needed. The low-power characteristic of the device is due to the ultra-low-power MSP430 F1611 microcontroller from Texas Instruments. It works at 8 MHz and features 10 KB of RAM and 48 KB of flash memory. Additionally, an external 1 MB flash is integrated in the module. The radio chip integrated in the module is IEEE 802.15.4 compliant [10]. It provides the physical and some MAC layer functions and is controlled by the microcontroller through the SPI port and some digital input/output lines. The maximum bandwidth supported by the module is 250 kbps.

The deployed network runs a modified version of Delta, Moteiv's mesh networking application. Basically, it uses a multihop ad-hoc mesh networking protocol based on the shortest path to the sink, with spatial and temporal redundancy, and the "Sensornet Protocol" (SP) abstraction for sending and receiving messages [9]. Network nodes take measurements from their sensors at a configurable sampling rate and generate a message to the sink, which finally transfers the information to an application running on the central computer, where it is collected into a database. This database can be accessed by the AR tools and by other components that are part of the global system. The sampling rate of each sensor should be adjusted according to the nature of the collected signal.
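To make the data path concrete, below is a minimal sketch of the collector running on the central computer. It is not the actual Delta/Moteiv application: the serial frame format, port name and database schema are assumptions made for illustration only.

```python
# Hypothetical sink-side collector: reads sensor messages forwarded by the
# sink mote over USB serial and stores them in a local SQLite database.
import sqlite3
import time

import serial  # pyserial

conn = sqlite3.connect("sensors.db")
conn.execute("""CREATE TABLE IF NOT EXISTS readings (
                    node_id   INTEGER,
                    sensor    TEXT,
                    value     REAL,
                    timestamp REAL)""")

port = serial.Serial("/dev/ttyUSB0", 115200)  # sink mote; port name assumed

while True:
    line = port.readline().decode(errors="ignore").strip()
    # Assumed plain-text frame: "<node_id> <sensor> <value>", e.g. "3 temp 24.8"
    try:
        node_id, sensor, value = line.split()
        conn.execute("INSERT INTO readings VALUES (?, ?, ?, ?)",
                     (int(node_id), sensor, float(value), time.time()))
        conn.commit()
    except ValueError:
        continue  # skip malformed or partial frames
```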

2.2 Augmented reality tools

The visualization of the information provided by a WSN is not a trivial subject. It can be implemented with a graphical interface and localization software installed on the central computer that locates each node on a map. An alternative is the use of displays, but this requires either a display for each node or a single display connected sequentially to each node. A display for each node is an expensive solution, and the connection and disconnection of a display may affect the node. With the AR application, neither additional hardware nor connections are necessary at the node.

AR presents the information from the network in graphic form. The data provided by the sensors must be transformed into an image with graphical information that can be used by the AR application. These images are stored as OpenGL textures that ARToolkit uses to introduce virtual elements into the real image. The localization and orientation of the virtual graph in the image are calculated by means of the fiducial markers used by ARToolkit. Each sensor has a univocally associated marker. The internal patterns of the markers must be as different from each other as possible to avoid confusion. Markers must be positioned near their corresponding sensors so that they can be clearly associated with them. Thus, ARToolkit uses the markers to resolve the spatial orientation of the graphs, making them visible to the user.
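As an illustration of this step, the sketch below renders the most recent samples of a sensor into an RGBA pixel buffer that could then be uploaded as an OpenGL texture. It reuses the hypothetical `readings` table from the previous sketch; the figure size and sample count are arbitrary choices.

```python
# Hypothetical graph-to-texture step: plot the latest samples off-screen and
# return the pixels as an array ready for glTexImage2D(..., GL_RGBA, ...).
import sqlite3

import matplotlib
matplotlib.use("Agg")  # off-screen rendering, no window required
import matplotlib.pyplot as plt
import numpy as np

def graph_texture(node_id, sensor, db="sensors.db", n_samples=64):
    conn = sqlite3.connect(db)
    rows = conn.execute(
        "SELECT timestamp, value FROM readings "
        "WHERE node_id = ? AND sensor = ? "
        "ORDER BY timestamp DESC LIMIT ?",
        (node_id, sensor, n_samples)).fetchall()
    if not rows:
        return None  # no samples collected for this sensor yet
    t, v = zip(*reversed(rows))  # back into chronological order

    fig = plt.figure(figsize=(2, 2), dpi=64)  # yields a 128 x 128 texture
    plt.plot(t, v)
    plt.title(sensor)
    fig.canvas.draw()
    rgba = np.asarray(fig.canvas.buffer_rgba()).copy()  # H x W x 4, uint8
    plt.close(fig)
    return rgba
```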


A USB web camera with a resolution of 320 x 240 pixels was used to capture the environment images. Its maximum video speed is 30 frames per second, and it features automatic white balance, gain control and manually adjustable focus. The size of the markers used was 64 x 64 mm.

2.3 Integration

A complete system scheme is presented in Fig. 1. In this setup, a different ARToolkit marker is associated with each sensor in the mesh network. It is important to point out that several sensors can be integrated in the same node. Once a marker is recognized by the AR application running on the central computer, the application looks up in the database two kinds of information associated with that marker and, consequently, with a specific sensor: identification data and signal data. A minimal sketch of this lookup is given after Fig. 1.

Fig. 1. Complete system overview
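The lookup itself can be as simple as a table that maps each ARToolkit marker to a node, a sensor and a patient. The sketch below assumes a hypothetical `markers` table alongside the `readings` table used earlier; the schema is illustrative, not the one used in the actual system.

```python
# Hypothetical marker registry: maps a detected marker id to its sensor.
import sqlite3

def lookup_marker(marker_id, db="sensors.db"):
    conn = sqlite3.connect(db)
    conn.execute("""CREATE TABLE IF NOT EXISTS markers (
                        marker_id INTEGER PRIMARY KEY,
                        node_id   INTEGER,
                        sensor    TEXT,
                        patient   TEXT)""")
    row = conn.execute(
        "SELECT node_id, sensor, patient FROM markers WHERE marker_id = ?",
        (marker_id,)).fetchone()
    return row  # (node_id, sensor, patient), or None if unregistered
```

Registering a sensor is then a single row insert, e.g. `INSERT INTO markers VALUES (0, 3, 'temperature', 'John Doe')` for marker 0 on node 3 (names invented for the example).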

The identification data could include any characteristic that the medical staff may need or propose. In this work, we have used the type of signal the sensor is collecting and the name of the patient the sensor is attached to. The application also retrieves a graph image corresponding to the signal captured by the associated sensor. The time and date when the samples were collected are also recorded and visually presented to the user. The graph images are drawn from the most recent data samples collected by the central computer. Thus, the AR application can handle the graph in real time without waiting for its generation, avoiding delays in the user's visual experience.

Once the AR application has the identification and graphic data, it handles them as OpenGL textures and presents them to the user on top of the marker. The application works this way with every single marker it recognizes at any moment. The visual information from each sensor is updated every time its marker is detected in the image, as outlined in the sketch below.
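Putting the pieces together, the per-frame logic could look like the following sketch. Marker detection and pose estimation are performed by ARToolkit in the real system; here `detected_marker_ids` stands in for the result of that detection step, and the helper functions are the hypothetical ones sketched in the previous sections.

```python
# Hypothetical per-frame update: refresh and draw a graph texture for every
# marker that ARToolkit detected in the current camera image.
def render_frame(detected_marker_ids, texture_cache):
    for marker_id in detected_marker_ids:
        meta = lookup_marker(marker_id)       # see the registry sketch above
        if meta is None:
            continue                          # unregistered marker: ignore
        node_id, sensor, patient = meta
        tex = graph_texture(node_id, sensor)  # redraw with the newest samples
        if tex is not None:
            texture_cache[marker_id] = tex
        # The cached texture would then be rendered over the marker using
        # the pose (rotation/translation) that ARToolkit estimated for it.
```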

3 Results

The developed prototype was tested in different scenarios with several markers and sensors from a wireless network. Temperature and humidity sensors were used, configured with a sampling period of five seconds. Temperature and humidity are low-frequency signals, so this sampling rate is sufficient. Fig. 2 shows the user's visual experience, where virtual images composed of a graph and identification data are superimposed onto the real-world images the user perceives. In this case, a temperature sensor was used. Fig. 3 shows two markers associated with different sensors, temperature and humidity, attached to a single node. Fig. 4 shows a scenario where two markers were present, associated with two temperature sensors from different nodes.

Fig. 2. User visual experience with one marker

The use of the ARToolkit platform leads to some limitations in the final application. The system is not able to perform the virtual object positioning when partial or total marker occlusions occur. The same problem appears when the system cannot tell the difference between two markers, or when other objects in the environment are similar to the markers. In addition, the maximum speed of the tracked mobile object (in this case, the camera itself) that the system can handle is the one that still keeps the image sharp; the tracking limit is determined by the resolution, the image capture rate and the computing rate. Furthermore, vibration effects on the virtual images increase as the distance grows or the marker size is reduced, and they depend on the angle between the marker and the camera.


Fig. 3. User visual experience with two markers associated with sensors in a single node

Fig. 4. User visual experience with two markers associated with sensors in different nodes

The experiments showed that vibration effects appeared when the camera was 180 cm from the markers. The camera was not able to recognize the markers when they were 250 cm or more away, so the virtual images could not be applied. The environment illumination also affects the performance of the system, though this effect can be controlled to a certain extent with a configurable illumination threshold.

4 Conclusions

The application presented in this paper provides an intuitive visual interface for a wireless sensor network. The nodes of this network can be attached to the patient or located in the surrounding area, so biometric and ambient information can be monitored at the same time. One of the advantages of this visual and augmented method is that it provides a cheap and easy way to access the information offered by a WSN.


As explained in Section 2, data visualization is not an easy problem to solve; with the AR application, however, no connectors are involved, and only a laptop connected to the sink node and a webcam or AR goggles are needed. Another advantage is that different markers can be associated with different sensors attached to the node and patient, so each signal can be processed independently. Furthermore, a marker does not need to be attached to or placed near its corresponding node: the user could, for example, take different markers to another room where the nodes cannot be seen and still view the information that the associated sensors are collecting at every moment. This can be useful when the user needs to compare graphs from sensors in different rooms.

The use of AR technology presents some restrictions. Firstly, the room needs to be appropriately illuminated so that the application can recognize the markers. Secondly, in this prototype only visualization is implemented. In the future, interaction with the graphs could be made possible using vision tracking software. For instance, the user could select or deselect markers, or even modify the size of the virtual images in order to better organize scenes containing several markers. As a new feature, different sensors attached to the same patient could be associated with a single marker, so the user could interact with the visual interface, selecting at any time the information source he wants to monitor. Using digital image processing techniques, some visual pattern recognition could be implemented to handle the virtual graphs. These visual patterns could be hand gestures or additional markers.

Furthermore, the final visual results achieved by the system may change significantly when using different cameras with better performance characteristics. Certainly, the mentioned limitations, as well as the vibration, illumination and occlusion effects, could be reduced this way, improving the user's visual experience. Additionally, the distance at which the camera can recognize the markers would be increased.

Finally, the maximum bandwidth supported by the radio module integrated in the nodes leads to a limitation in the types of sensors that can be connected. Some biometric signals, such as electrocardiograms or electromyograms, need to be sampled at a high frequency to be meaningful, given the nature of the information they carry. In that case, a high traffic volume is generated. A node connected to such a sensor could reach an overflow state if the radio module has problems sending data messages due to high traffic density, poor coverage, environmental changes, etc. Thus, the system should be limited to biometric sensors generating data at a relatively low bitrate. A rough back-of-the-envelope calculation of this constraint is sketched below.
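The figures in the sketch below are assumptions, not measurements from the paper: an assumed three-channel ECG at 500 Hz and 12 bits per sample is compared against the 250 kbps nominal rate quoted in Section 2.1.

```python
# Back-of-the-envelope check with illustrative numbers only.
sample_rate_hz = 500   # assumed ECG sampling rate
bits_per_sample = 12   # assumed ADC resolution
channels = 3           # assumed ECG channels per node

payload_bps = sample_rate_hz * bits_per_sample * channels  # 18,000 bit/s
radio_bps = 250_000    # IEEE 802.15.4 nominal bit rate

# The nominal rate is shared by all nodes and is consumed again at every
# hop, so the real capacity is far below this optimistic upper bound.
print(f"{payload_bps} bit/s per node -> at most "
      f"{radio_bps // payload_bps} such nodes even at 100% efficiency")
```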

5 Acknowledgements

This work was partially supported by the European Union under the Sixth Framework Programme, Project No. FP6-2005-IST-5, STREP No. 045088, by the Spanish Ministerio de Educación y Ciencia and FEDER funds, Project No. TEC2006-11689, and by the Junta de Andalucía, Project No. TIC 249. Additionally, we would like to thank AT4 Wireless for kindly providing the Tmote Sky devices.


References

1. Rabaey, J.: Ultra-low Power Computation and Communication enables Ambient Intelligence. Technical Report (2003)
2. Remagnino, P., Foresti, G., Ellis, T.J. (eds.): Ambient Intelligence: A Novel Approach. Springer (2005)
3. Beigl, M., Decker, C., Krohn, A., Riedel, T., Zimmer, T.: µParts: Low Cost Sensor Networks at Scale. Technical Report (2005)
4. Azuma, R., Baillot, Y., Behringer, R., Feiner, S., Julier, S., MacIntyre, B.: Recent advances in augmented reality. IEEE Computer Graphics and Applications, 21(6), pp. 34-47 (2001)
5. Lee, C.H., Wetzel, J., Selker, T.: Enhancing Interface Design Using Attentive Interaction Design Toolkit. In: SIGGRAPH '06: ACM SIGGRAPH 2006 Educators Program. ACM Press, New York, USA (2006)
6. Liarokapis, F., White, M., Lister, P.F.: Augmented Reality Interface Toolkit. In: Proc. International Symposium on Augmented and Virtual Reality, London, pp. 761-767 (2004)
7. Papagiannakis, G., Schertenleib, S., O'Kennedy, B., Arevalo-Poizat, M., Magnenat-Thalmann, N., Stoddart, A., Thalmann, D.: Mixing Virtual and Real scenes in the site of ancient Pompeii. Computer Animation and Virtual Worlds, 16(1), pp. 11-24 (2005)
8. Moteiv Corporation: Tmote Sky Quick Start Guide (2006)
9. Polastre, J., Hui, J., Levis, P., Zhao, J., Culler, D., Shenker, S., Stoica, I.: A unifying link abstraction for wireless sensor networks. In: SenSys '05, November 2-4, 2005, San Diego, California, USA
10. IEEE 802.15.4 (2003): IEEE Standard for Information Technology, Part 15.4: Specifications for Low-Rate Wireless Personal Area Networks (LR-WPANs). ISBN 0-7381-3686-7