
International Journal of Advanced Robotic Systems

ARTICLE

Robotic Services at Home: An Initialization System Based on Robots’ Information and User Preferences in Unknown Environments
Regular Paper

Nur Safwati Mohd Nor1,* and Makoto Mizukawa1
1 Graduate School of Engineering, Shibaura Institute of Technology, Tokyo
* Corresponding author E-mail: [email protected]
Received 05 Jul 2013; Accepted 07 May 2014
DOI: 10.5772/58682
© 2014 The Author(s). Licensee InTech. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract One important issue in robotic services is the construction of the robotic system in the actual environment. In other words, robots must perform environment sensing or have information on real objects, such as location and 3D dimensions, in order to live together with humans. It is crucial to have a mechanism to create an actual robotic system (intelligent space), because without an initialization framework for the objects in the environment we must perform SLAM, object recognition and mapping to generate a useful environmental database. In intelligent space research, the objects are normally attached to various sensors in order to extract the necessary information. However, that approach depends heavily on sensor accuracy, and the robotic system is burdened if there are too many sensors in an environment. Therefore, in this paper we present a system in which a robot can obtain information about an object and even create the furniture layout map for an unknown environment. Our approach is intended to improve home-based robotic services by taking into account the user or individual preferences for the Intelligent Space (IS). With this information, we can create an informational map of the home-based environment for the realization of robot assistance of humans in their daily activities at home, especially for disabled people. The results show the system design and development of our approach using model-based systems engineering.

Keywords Intelligent Space, Human-robot Interaction, Space Sensing and Mapping, Robotic Services

1. Introduction

1.1 Related work

In order to provide robotic services at home, not only will humans need to live together with one or multiple robots, but more importantly the robots will have to adapt to the arrangement of the environment created by humans according to their lifestyle and preferences.



Besides creating the system abstraction and interface, we must populate our living space and create an informational environment database for the robots providing daily services to the human or humans. To address this problem, ubiquitous sensing using RFID technology has been widely reported for detecting the activities of candidates in a physical environment such as an intelligent space. As an example, in [1] an RFID-based object-tracking method was introduced to allow robots and humans to find ‘forgotten’ objects in everyday environments. As a robot equipped with an RFID reader moves around the house, an environment database is populated, making it possible to track lost objects. Another technique to sense and perceive an unknown environment was implemented in the domestic cleaning robot presented in [2]. In order for the cleaning robot to cover the accessible area and avoid leaving areas uncleaned, a floor map storing all the information on the robot’s working environment was required. A topological map was built from omnidirectional images and used by the trajectory controller to keep the robot on parallel lanes. This is different from [3], in which cell-based SLAM data were applied to create accurate maps for modelling changing environments in real-world applications. The ability of the robot to detect obstacles in space is also crucial for space mapping. The robot should be able to detect unexpected moving obstacles, presented for example by users’ movements, and static obstacles like doors and furniture [4], [5].

Information about the living environment according to user lifestyle and preference is essential whenever we want to apply the concept of intelligent space in the real world and thus generate more appropriate and intuitive robot services. Nowadays, research related to intelligent space mostly focuses on the user’s activities with objects, such as recognition, service generation and task localization [6], [7], [8]. RoboCup@Home is one example of bringing the world of RT (Robot Technology) into the actual home environment. Beyond that, not much research has dealt with registration or initialization in the actual home environment. Consequently, the process of attaching RT and robotic devices in an actual environment consumes much time and effort, and robots will not interact naturally with humans and environments. Previously, work has been done on developing a high-level system database for the distributed robotic system known as ‘Kukanchi’ Middleware (Kukanchi is a Japanese word meaning ‘Interactive Human-Space Design and Intelligence’). Kukanchi Middleware acts as a platform to integrate service-request inputs from the user interface and environmental data from sensors, and then sends task requests to the service robot [9]. However, limitations apply, as we assume that the objects are stored in smart cases known as RT boxes and that the user must provide the necessary information about the objects to the system.



These limitations arise because we do not have an initialization system in the actual environment to register everyday 3D objects into the high-level system database or abstraction. Therefore, in this paper we propose a method to generate an individual environment model for an actual living space. This is essential so that the robot, as the agent, becomes familiar enough with the unknown space to interact with the user and finally execute the appropriate daily-life service. Specifically, in an unknown 3D living space the robot becomes familiar with the environment by sensing each room and tracking its current position, thus building up the map. One well-known method used to build the map is Simultaneous Localization and Mapping (SLAM). Using this technology, robots are able to acquire the geometrical environment information needed for navigation while roving, and at the same time localize themselves relative to the map. In our research, the goal of applying SLAM technology is to create the corresponding floor plan of our developed intelligent space, known as Kukanchi. Our aim is that this process may provide a minimal infrastructure for a simple unknown home interior. From the floor plan, we seek to locate and identify the non-manipulatable objects (furniture such as a sofa, desk or TV stand) installed by the user.

1.2 Kukanchi system

In this research, we used the Kukanchi Framework as the reference 3D space, shown in Figure 1. This framework has been developed in our laboratory since 2007, focusing on sensing technology to construct environmental information using sensor networks and RT Middleware for integrating robots and sensors [10], [11], [12]. Significant work has been done to develop a user interface, system database and Middleware, as well as to realize functional interaction between human and robot. In addition, our research has aimed to support humans in their daily life by preparing structured and intelligent environments for robots to provide services requested by the user. Kukanchi is built on distributed sensor networks. To provide physical and informational robotic services, our Kukanchi system consists of several components, as follows.
1. RFID sensors are attached under the floor for the robot to navigate autonomously.
2. Ucode transmitters are installed on the ceiling to track objects in real time.
3. Manipulatable objects are given an RFID tag and are located in an RT box with an RFID reader.
4. Stereo cameras are used to capture and detect users’ pointing gestures.
5. Kukanchi Middleware is used to integrate RT modules and search for robot service algorithms.
6. The mobile robot is used as the agent to support the user in daily activities.

Figure 1. Kukanchi Experimental Area

However, in our current Kukanchi system there is no component that initializes 3D objects in the actual environment and thereby makes their information available and useful for certain robotic services. For instance, the position of furniture in the 3D space is predefined for the robotic services. Here we seek to solve this issue and develop a better robotic system. To imitate actual living-space activities, Ukai introduced the Tsuide task framework to provide related additional services to the user [13]. Meanwhile, Trung [14] developed a knowledge-based RT ontology for a home service robot applied to the Kukanchi system; this ontology deals with the problem of providing common-sense knowledge for understanding user intention. Another contribution was made by Tsukumori [15], who developed a PDA-based user interface for the intelligent space. This interfacing device can provide input data such as the object type and the task required to activate a service. This paper focuses on developing an initialization system for an actual intelligent space that provides the robot with knowledge of the individual environment, such as the room layout and furniture properties. This system will help robots understand the user’s service requests based on the user's environment preferences and service needs in the real living space. Current research in robotic services at home covers a range of activities, such as cleaning a room or bringing objects to the user. For demonstration, we choose the ‘bring something’ command because it is a very basic and straightforward robot service. The process is, however, difficult because the robot has to locate the object requested by the user in the actual 3D space, go to that location to get the object, and finally bring it to the user's actual location [16].

Object location and user location in the actual 3D space can be defined by furniture properties such as size and dimensions. One of the goals of our system framework is to improve robotic services by attaching information to the environment itself rather than to the robot [17]. By doing this, the task information is attached beforehand to the environment in which robots are present, and robots are thus able to easily recognize what to do. Elzabadani [18] proposed RFID-based furniture identification, which stores information such as the type of furniture, its dimensions and the area covered. Compared to his work on sensor-based space identification, our approach, which is based on an online furniture catalogue, may provide a more appropriate service with respect to preference and situation. The remainder of this paper is organized as follows. In Section 2, we discuss the first step of the initialization system, which is the creation of the furniture database. In the same section we explain the build-up of the Kukanchi experimental area layout from the SLAM data using Microsoft Office Visio. Next, in Section 3 we present our model-based system design and a screenshot of the interface created by the Visio plug-in; we also highlight our intelligent space with 3D furniture. Finally, we conclude by discussing the importance of our system.

2. Creating the furniture database from an online catalogue

Nowadays, it is common to check an online catalogue before buying a certain product. Normally, the online catalogue gives complete information about a specific product, including price, colour, size or dimensions, as well as assembly instructions. All this information is enough for a user to plan or create their own living space.



Figure 2. Database of Kukanchi system

Another advantage is that an online catalogue is kept up to date by the manufacturer; linking to it therefore allows our furniture database to be updated automatically. In this research, we propose to create the furniture database using layout software (Microsoft Office Visio 2010) and an online furniture catalogue (the IKEA online catalogue). Specifically for the application of robotic services in the intelligent space, we extracted the product information from the IKEA online catalogue, including the dimensions/size, image and catalogue number. This information determines the 2D/3D area that will be occupied by the furniture in the intelligent space. Such information is very useful and important for the service robot to locate and identify the objects in the actual environment.

2.1 System background

As explained in the introduction, our Kukanchi intelligent space currently has no system to initialize or build up the system in relation to the actual living environment. The only modification that we propose is the addition of the system components necessary for initialization in an actual 3D environment to identify furniture locations. Figure 2 describes the system database developed in Kukanchi. In this figure, the object referred to represents an everyday 3D object in an actual living space, such as a book, lunch box or TV remote.



Once the system detects a user’s pointing gesture indicating an object in the living space [19], information from the robot, sensor, HMI and goods (daily-life objects) components is gathered. Next, Kukanchi Middleware integrates these RT modules into a task request to the robot. The task request for delivery of daily robotic services at home is based on the Goods-Place relationship, the Goods-Activity relationship and the Goods-Goods relationship, known as the RT Ontology.

2.2 Adding new system components

We describe our approach in the following figures. Figure 3 illustrates the general overview of our system background. The goal of this system is to input information about large, static objects such as furniture semi-automatically or automatically into the intelligent environment. Based on the created furniture database imported into Microsoft Office Visio, the user may select and input a furniture object into the living-space floor layout. In addition, we created floor-plan layers to imitate the actual living environment. Once the system receives the user inputs, the Kukanchi system is updated to locate and identify the furniture. Our framework for the system-level design is shown in Figure 4. We divided our framework into three parts, integrating the three layers as shown: from the SLAM layer to the floor-plan layer and finally to the furniture layer.
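To make the catalogue-derived information concrete, the following is a minimal sketch of one furniture-database record built from the fields extracted from the online catalogue (dimensions/size, image and catalogue number). The FurnitureEntry class, field names and example values are illustrative assumptions, not our actual database schema.

```python
# Hedged sketch of a furniture-database record derived from an online
# catalogue; names and values are illustrative, not the actual schema.
from dataclasses import dataclass

@dataclass
class FurnitureEntry:
    name: str            # product name from the catalogue
    catalogue_no: str    # catalogue (article) number
    width_mm: int        # footprint used to compute the 2D/3D area
    depth_mm: int
    height_mm: int
    image_url: str       # product image from the catalogue

# Hypothetical entry; values are placeholders, not real catalogue data.
sofa = FurnitureEntry("Two-seat sofa", "000.000.00", 1790, 880, 880,
                      "https://www.ikea.com/...")
print(sofa.width_mm * sofa.depth_mm / 1e6, "m^2 floor area")
```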

Figure 3. Overview of locating and identifying furniture

Figure 5. Furniture specification algorithm in the application layer

2.3 Environment and system setup for robotic services

A mobile robot is a physical agent for supporting human beings in an intelligent space [20]. In our research, we use a TurtleBot 2 mounted with an Asus Xtion Pro Live depth camera to obtain the data required from the 3D space for this initialization of the robotic service environment.

Figure 4. Framework for system level design and analysis

To set up the home-based intelligent space, we have to specify the candidates through the application and User Interface (UI) layer. Next, we explain the object’s specification algorithm in the application layer. From the created floor plan shown in Figure 4 above, the object specification algorithm is designed based on a selected area that is of interest to the user on the floor plan itself. Choosing the area to place the interested furniture, the system matches the area’s dimensions with furniture of appropriate dimensions. Based on these matching dimension, the robotic system is able to locate and identify static objects like furniture at its initial position (Figure 5). To acquire information on the unknown environment, like the number of available rooms, walls and doors, we superimpose the SLAM data image onto the actual floor plan. We assume the floor plan is provided during construction. Otherwise, the physical layout or arrangement of the floor needs to be determined manually. Using the superimposing technique, we may not only confirm the layout structure, but at the same time are able to recognize distinct features, if there are any. From the SLAM data, the robot can detect the number of rooms and recognize walls in our Kukanchi space (Figure 6). Figure 7 shows the superimposition of the SLAM map on the actual building floor plan. The area highlighted in blue shows the Kukanchi experimental area (related to Figure 1).
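The dimension-matching step of this object-specification algorithm can be sketched as follows. This is a minimal illustration under assumed names (match_candidates, a 20% tolerance); it is not the implementation shown in Figure 5.

```python
# Minimal sketch of matching a user-selected floor area against
# catalogue footprints; the tolerance and record layout are assumptions.
def match_candidates(area_w_mm, area_d_mm, catalogue, tolerance=0.20):
    """Return catalogue items whose footprint fits the selected area
    within a relative tolerance (allowing either orientation)."""
    matches = []
    for item in catalogue:
        for w, d in ((item["width_mm"], item["depth_mm"]),
                     (item["depth_mm"], item["width_mm"])):
            w_err = abs(w - area_w_mm) / area_w_mm
            d_err = abs(d - area_d_mm) / area_d_mm
            if w_err <= tolerance and d_err <= tolerance:
                matches.append(item["name"])
                break
    return matches

# Hypothetical catalogue entries and a 2000 mm x 900 mm selected area.
catalogue = [
    {"name": "Two-seat sofa", "width_mm": 1790, "depth_mm": 880},
    {"name": "Desk",          "width_mm": 1200, "depth_mm": 600},
]
print(match_candidates(2000, 900, catalogue))  # -> ['Two-seat sofa']
```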

Firstly, we calibrated our depth camera against the real Kukanchi environment, consisting of the furniture arrangement and positions. To conduct this calibration process, we cleared the floor of any objects so that we could estimate the furniture ground plane and the floor plane. Secondly, using the open-source software OpenNI and OpenGL, we overlaid the 3D point-cloud data with the depth data from the depth camera. This procedure was done to extract the 3D points belonging to each furniture plane and the floor plane. Thirdly, the camera was programmed with a top-view projection to obtain the occupancy map of the floor. At this stage, we removed the floor-plane data so that the furniture positions could be easily detected. Finally, a segmentation algorithm was applied to recognize the 2D area of each piece of furniture. From this 2D furniture data, we were able to search the online catalogue database for appropriate candidates for a real 3D living space like Kukanchi. In this environment, there are four types of furniture to be initialized.
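As a rough illustration of this occupancy-map step (floor-plane removal, top-view projection and segmentation), the sketch below uses NumPy and SciPy rather than the OpenNI/OpenGL tool chain used in our setup; the cell size, floor tolerance and function name are assumptions for the sketch only.

```python
# Hedged sketch of the occupancy-map idea: drop floor points, project the
# remaining cloud top-down onto a grid, and segment connected regions as
# furniture footprints. Parameters and names are illustrative assumptions.
import numpy as np
from scipy import ndimage

def furniture_footprints(points_mm, floor_z_mm=0.0, floor_tol_mm=30.0,
                         cell_mm=50.0):
    """points_mm: (N, 3) array of calibrated points (x, y, z) in mm,
    with z measured upward from the estimated floor plane."""
    # 1. Remove points close to the floor plane.
    above_floor = points_mm[points_mm[:, 2] > floor_z_mm + floor_tol_mm]

    # 2. Top-view projection onto a 2D occupancy grid.
    xy = above_floor[:, :2]
    origin = xy.min(axis=0)
    cells = np.floor((xy - origin) / cell_mm).astype(int)
    occupancy = np.zeros(tuple(cells.max(axis=0) + 1), dtype=bool)
    occupancy[cells[:, 0], cells[:, 1]] = True

    # 3. Segment connected occupied regions; each is a furniture candidate.
    labels, _ = ndimage.label(occupancy)
    footprints = []
    for region in ndimage.find_objects(labels):
        w = (region[0].stop - region[0].start) * cell_mm
        d = (region[1].stop - region[1].start) * cell_mm
        footprints.append((w, d))   # 2D width x depth in mm
    return footprints
```

The resulting width-by-depth footprints can then be matched against the catalogue entries along the lines of the earlier dimension-matching sketch.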

Figure 6. Kukanchi space mapping using SLAM technology. One pixel is equal to 55 millimetres (mm).



Figure 7. Superimposition of the SLAM data onto the actual area map of the 3D space

Figure 8. The actual living environment to be initialized with furniture candidates

Based on the candidates’ information obtained by this initialization system, the robotic service environment can be realized without any further need to predefine furniture locations. Figure 8 illustrates the actual living environment used in the calibration procedure.

3. System design result

3.1 Model-based system design

Figure 9. System component design using SysML



To design our system components, we use a model-based systems engineering approach that describes the static structure of the system, including its software and hardware parts. In Figure 9, the User Interface component represents the hardware part, while the Database and Setup Subsystem components correspond to the software part. The hardware part consists of a touch-screen personal computer as a user input device and a Vuzix Eyewear wearable device as the information display. A more detailed design model is shown in Figure 10. The Database component includes the required information on the rooms and furniture for space-setup initialization. The system will identify how many rooms are available, as well as the area and furniture properties of each room. The main function of the User Interface component is to view and select furniture in a 3D view using the touch-screen interface and the Vuzix Eyewear. In the Setup Subsystem component, the system imports and updates the furniture database from the online IKEA catalogue, overlays all three layers (SLAM layer, Floor Plan layer and Furniture layer), and finally locates the furniture in the actual 3D living space.
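As a rough, purely illustrative mapping of these SysML components onto software interfaces (not generated from our model; all method names are assumptions), the responsibilities described above could be sketched as follows.

```python
# Illustrative mapping of the components in Figures 9-10 to interfaces;
# method names are assumptions for this sketch, not the SysML model.
from abc import ABC, abstractmethod

class Database(ABC):
    """Room and furniture information for space-setup initialization."""
    @abstractmethod
    def rooms(self): ...                   # available rooms and their areas
    @abstractmethod
    def furniture_in(self, room_id): ...   # furniture properties per room

class UserInterface(ABC):
    """Touch-screen PC plus Vuzix Eyewear display."""
    @abstractmethod
    def view_furniture_3d(self, item): ...
    @abstractmethod
    def select_furniture(self): ...

class SetupSubsystem(ABC):
    """Initialization logic of the setup subsystem."""
    @abstractmethod
    def update_from_catalogue(self): ...   # import/update IKEA catalogue data
    @abstractmethod
    def overlay_layers(self): ...          # SLAM + floor plan + furniture
    @abstractmethod
    def locate_furniture(self): ...        # place items in the 3D living space
```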

Figure 10. Relationship and connections between parts in system components

Figure 11. Creation of new plug-ins in Visio connecting to online catalogue, intelligent space layout and 3D layout viewer



3.2 System interface

We prepared the plug-ins for our application (the furniture initialization system in the intelligent space) using Microsoft Office Visio 2010 and its API. Three main plug-ins are needed in our system. Firstly, ‘ViXAM’, which allows 3D views of the actual living-space layout drawn in Microsoft Office Visio. Secondly, the IKEA website plug-in, which provides hyperlinks to online catalogues with specific lists like ‘modular sofa’, ‘dining sets’ and ‘coffee or side tables’. Lastly, the ‘Intelligent Space Setup’ plug-in, which gives access to the floor-plan layer, SLAM layer and furniture layer; in other words, this plug-in prepares all information and data for the three layers mentioned. Figure 11 illustrates the created plug-ins as menu commands in Microsoft Office Visio 2010. We created an IKEA custom stencil based on data from the IKEA online catalogue for building the furniture layer. Each IKEA stencil shape is drawn according to its actual dimensions using the ViXAM plug-in in Microsoft Office Visio. To determine the layout of furniture in the actual space, we drag the furniture shape from the IKEA custom stencil onto the ‘space layer’.

After that, we draw another layer, which shows the Kukanchi experimental area, as described previously in Figure 1 and Figure 7. Finally, we visualize the 2D layout as a 3D layout using the ViXAM viewer, as in Figure 12. From these figures, we are able to analyse the position and location of the furniture in our intelligent space, with the necessary information from the online catalogue embedded.
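For readers unfamiliar with programmatic shape placement in Visio, the snippet below is a heavily hedged sketch of dropping a stencil master onto a page via COM automation from Python. It is not our ViXAM or Intelligent Space Setup plug-in code; the stencil file name, master name and coordinates are hypothetical.

```python
# Hedged sketch: drop a furniture master from a custom stencil onto the
# active Visio page via COM automation. Not the plug-in code described
# above; file name, master name and coordinates are hypothetical.
import win32com.client  # requires pywin32 and a local Visio installation

visio = win32com.client.Dispatch("Visio.Application")
visio.Visible = True

drawing = visio.Documents.Add("")                  # blank drawing (space layer)
page = drawing.Pages.Item(1)

stencil = visio.Documents.Open("IKEA_custom.vss")  # hypothetical custom stencil
master = stencil.Masters.ItemU("Two-seat sofa")    # hypothetical master name

# Drop the shape at (x, y) in page units; its size comes from the stencil,
# which is drawn at the real-world furniture dimensions.
shape = page.Drop(master, 2.0, 4.0)
shape.Text = "Two-seat sofa"
```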

Figure 12. 3D layout of Kukanchi actual environment

Figure 13. Experimental results showing the datasets for all candidates in the 3D space


4. Experiment result

To validate our camera calibration as well as the methodology proposed in this research, an analysis was made based on several experimental results. The experiments and analysis aim to measure the percentage error between the experimental data and the actual 2D properties from the furniture catalogue database. For the experiments, we first took data from the furniture’s original positions. Secondly, we swapped the positions of the furniture in the 3D space to generate new experimental data to be analysed. Figure 13 shows the graphs which plot the 2D data for the four types of candidates in our 3D living space. In the dataset graphs, ‘Data 1’ on the x-axis represents the first value given by the depth camera, while ‘Data 2’ on the y-axis represents the second. ‘Data 1’ and ‘Data 2’ correspond to the width and length of the candidates in the online catalogue database. The area under the graph shows the occupancy area of the candidates on the floor. The blue rectangular area shows the actual 2D size of the candidates in the catalogue, while the other three 2D plots show the experimental data from the depth camera. In the candidate 1 dataset, the error of the experimental data varies from 5% to 50% for both ‘Data 1’ and ‘Data 2’, with ‘Data 2’ being much more accurate than ‘Data 1’. The candidate 2 dataset has the same accuracy as the candidate 1 dataset. In the candidate 3 dataset, the accuracy is much better, with the error varying from 2% to 23%. In the candidate 4 dataset, the percentage error varies from 2% to 20%; as with the candidate 1 dataset, ‘Data 2’ is more accurate than ‘Data 1’. Based on our previous experiment, in which the initialization system was able to list several candidates out of the total number in the catalogue database, the accuracy of these data is acceptable. In conclusion, regardless of the size of the online catalogue database, a robot may extract several candidates that match the information gathered on the 3D living environment.

5. Conclusion and future work

This paper has presented an initialization system for home-based robotic services. The presented system design, using model-based systems engineering, can be applied to locate and identify installed furniture in an actual living space. It is essential to have a setup system for all the objects involved as candidates in the intelligent space. It is therefore important for the user to enter information on their intelligent space, such as furniture selection and organization, prior to asking the service robot to ‘Bring me something’.

At present, the creation of each layer (from the SLAM layer to the furniture layer) has to be done manually by superimposing and comparing the SLAM data obtained by the robot sensor with the actual floor plan. However, we intend to improve and simplify such robotic services in real-world environments, which contain vast amounts of information that cannot be entirely understood by robots; a semi-automatic or automatic system would be a great advantage in this application. On the other hand, the interaction between human and robot can be made more intuitive and natural by providing enough information on the environment. Future work will first aim at improving the initialization system to automatically map the floor plan with a furniture database initiated from the SLAM data. Secondly, we will embed this initialization system into the available Kukanchi system in our laboratory, and hence provide a new and complete system. Thirdly, we intend to carry out experimental tests of this new system in an actual 3D space for further system analysis.

6. Acknowledgements

The authors would like to thank Miss Haeyeon Lee from the Division of Partner Robots, Toyota Motor Corporation, Japan, Mr Hiroyuki Nakamoto from Systems Engineering Consultants (SEC) Co., and Mr Shinji Tamura from Meister Corporation, Japan, for their guidance and help during the development of the system design process.

7. References

[1] Mamei M, Zambonelli F (2005). Spreading Pheromones in Everyday Environments Through RFID Technology. In: Proc. of the 2nd IEEE Symposium on Swarm Intelligence. pp. 281-288.
[2] Gerstmayr-Hillen L, Roben F, Krzykawski M, Kreft S, Venjakob D, Moller R (2013). Dense Topological Maps and Partial Pose Estimation for Visual Control of an Autonomous Cleaning Robot. Journal of Robotics and Autonomous Systems. Vol. 61, no. 5, pp. 497-516.
[3] Meyer-Delius D, Beinhofer M, Burgard W (2012). Occupancy Grid Models for Robot Mapping in Changing Environments. In: Proc. of the 26th AAAI Conference on Artificial Intelligence.
[4] Kuderer M, Kretzschmar H, Sprunk C, Burgard W (2012). Feature-based Prediction of Trajectories for Socially Compliant Navigation. In: Proc. of Robotics: Science and Systems VIII.
[5] Lau B, Sprunk C, Burgard W (2011). Incremental Updates of Configuration Space Representations for Non-circular Mobile Robots with 2D, 2.5D or 3D Obstacle Models. In: European Conference on Mobile Robotics (ECMR). pp. 49-54.



[6] Malchus K, Jaecks P, Damm O, Stenneken P, Meyer C, Wrede B (2013). The Role of Emotional Congruence in Human-Robot Interaction. Presented at Human-Robot Interaction (HRI 2013), Tokyo, Japan.
[7] Peltason J, Rieser H, Wachsmuth S (2013). ‘The hand is no banana!’ On communicating natural kind terms to a robot. In: Alignment in Communication: Towards a New Theory of Communication, in press.
[8] Lohse M, van Welbergen H (2012). Designing Appropriate Feedback for Virtual Agents and Robots. Position paper at RO-MAN 2012 Workshop: Robot Feedback in Human-Robot Interaction: How to Make a Robot Readable for a Human Interaction Partner.
[9] Ishiguro Y, Maeda Y, Trung N L, Sakamoto T, Mizukawa M, Yoshimi T, Ando Y (2011). Architecture of Kukanchi Middleware. In: Ubiquitous Robots and Ambient Intelligence (URAI). pp. 119-123.
[10] Mizukawa M (2007). Experimental Environment Design of Ubiquitous Robot Technology System Research Center for Context Monitoring of Human Daily Activities. In: Proc. SEATUC. pp. 11-16.
[11] Kato T, Ukai K, Tsukumori Y, Ando Y, Mizukawa M (2009). Investigation of Service Management System Design of RT-service. In: Proc. IEEE FUZZ-IEEE. pp. 1496-1500.
[12] Ukai K, Ando Y, Mizukawa M (2009). Investigation of User RT-service Generation System Design. In: Fuzzy Systems: FUZZ-IEEE International Conference. pp. 1486-1491.
[13] Ukai K, Yoshimi T, Mizukawa M, Ando Y (2011). Tsuide Task (Accompanied Task) Service Framework for User RT-service Generation System. In: IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM). pp. 1040-1045.


[14] Trung N L, Maeda Y, Lee H, Mizukawa M (2012). Ambiguous Command Understanding with Commonsense. In: RO-MAN. pp. 545-550.
[15] Tsukumori Y, Ando Y, Mizukawa M (2008). Development of Interface that Cooperates with Intelligent Space to Operate Daily Support Easily. In: System Integration Division Annual Conference (SICE SI2008). pp. 557-558.
[16] Nor N S M, Maeda Y, Mizukawa M (2013). Pointing Angle Estimation for Human-Robot Interface. Journal of Advances in Computer Networks. Vol. 1, no. 2, pp. 75-78.
[17] Ukai K, Ando Y, Mizukawa M (2009). Robot Technology Ontology Targeting Robot Technology Services in Kukanchi – ‘Interactive Human-space Design and Intelligence’. Journal of Robotics and Mechatronics. Vol. 21, no. 4, pp. 489-497.
[18] Elzabadani H E (2006). Self-sensing Spaces. Dissertation submitted for the degree of Doctor of Philosophy, University of Florida.
[19] Nor N S M, Trung N L, Maeda Y, Mizukawa M (2012). Tracking and Detection of Pointing Gesture in 3D Space. In: Ubiquitous Robots and Ambient Intelligence (URAI). pp. 234-235.
[20] Morioka K, Lee J-H, Hashimoto H (2008). Intelligent Space for Human Centered Robotics. In: Advances in Service Robotics, Ahn H S (Ed.), ISBN: 978-953-7619-02-2, InTech. Available from: http://www.intechopen.com/books/advances_in_service_robotics/intelligent_space_for_human_centered_robotics. Accessed on 19 Feb 2013.