Passive Radio Frequency Exteroception in Robot-Assisted Shopping for the Blind

Chaitanya Gharpure, Vladimir Kulyukin, Minghui Jiang, and Aliasgar Kutiyanawala

Computer Science Assistive Technology Laboratory, Utah State University, Logan, UT 84321, USA
Webpage: http://www.cs.usu.edu/vkulyukin/vkweb/research/sandee.html

Abstract. In 2004, the Computer Science Assistive Technology Laboratory (CSATL) of Utah State University (USU) started a project whose objective is to develop RoboCart, a robotic shopping assistant for the visually impaired. RoboCart is a continuation of our previous work on RG, a robotic guide for the visually impaired in structured indoor environments. The determinism provided by exteroception of passive RFID-enabled surfaces is desirable when dealing with dynamic and uncertain environments where probabilistic approaches like Monte Carlo Markov localization (MCL) may fail. We present the results of a pilot feasibility study with two visually impaired shoppers in Lee's MarketPlace, a supermarket in Logan, Utah.

1 Introduction

There are 11.4 million visually impaired people living in the U.S. [1]. Grocery shopping is an activity that presents a barrier to independence for many visually impaired people, who either do not go grocery shopping at all or rely on sighted guides, e.g., friends, spouses, and partners [2]. While some visually impaired people currently rely on store personnel to help them shop, they express two common concerns [3]: 1) such personnel may not be immediately available, which results in the shopper having to wait for assistance for a lengthy period of time, and 2) the shopper may not be comfortable shopping for gender-sensitive items with a stranger, e.g., purchasing items related to personal hygiene.

In our previous work, we investigated several technical aspects of robot-assisted navigation for the blind, such as RFID-based localization, greedy free space selection, and topological knowledge representation [4, 5, 2]. In this paper, we focus on how passive radio frequency (PRF) surfaces can assist a robotic shopping assistant in a grocery store. Many systems that operate in smart environments utilize proprioception (action is determined relative to an internal frame of reference) or exteroception (action is determined from a stimulus originating in the environment itself). RFID has become an exteroceptive technology of choice due to its low power requirements, low cost, and ease of installation.

This paper is organized as follows. In Section 2, we present related work. In Section 3, we explain the hardware and navigation algorithms of RoboCart. In Section 4, we describe two proof-of-concept experiments which demonstrate the advantages of using RFID mats and the practicality of a smart device like RoboCart. In Section 5, we give our conclusions.

2 Related Work

Smart environments have become a major focus of assistive technology research [6]. Researchers at the Smith-Kettlewell Eye Research Institute developed Talking Signs®, audio signage IR sensors for the visually impaired that associate audio signals with various signs in the environment [7]. Willis and Helal [8] propose an assisted navigation system in which an RFID reader is embedded into a blind navigator's shoe and passive RFID sensors are placed in the floor. Vorwerk [9], a German company, manufactures carpets containing integrated RFID technology for the intelligent navigation of service robots; however, they place RFID tags strictly in a rectangular grid format. Several research efforts in mobile robotics are similar to the research described in this paper insomuch as they use RFID technology for robot navigation. Kantor and Singh [10] use RFID tags for robot localization and mapping. They utilize time-of-arrival signals from known RFID tags to estimate distances to detected tags and localize the robot. Hahnel et al. [11] propose a probabilistic measurement model for RFID signals to analyze whether RFID can be used to improve the localization of mobile robots in office environments. They demonstrate how RFID can be used to improve the performance of laser-based localization.

3 Robot-Assisted Shopping

3.1 RoboCart's Hardware

The RoboCart hardware design is a modification of RG, our indoor robotic guide for the blind, which we built in 2003-2004 on top of another Pioneer 2DX base [12]. RoboCart is built on top of a Pioneer 2DX robotic platform from ActivMedia, Inc. RoboCart's wayfinding toolkit resides in a polyvinyl chloride (PVC) pipe structure securely attached to the platform (see Figure 1). The wayfinding toolkit consists of a Dell™ Ultralight X300 laptop connected to the platform's microcontroller, a SICK laser range finder, a TI-Series 2000 RFID reader from Texas Instruments, and a Logitech® camera facing vertically down. The RFID reader is attached to a 200mm x 200mm antenna. Unlike RG, which had its RFID antenna on the right side of the PVC structure approximately a meter and a half above the floor, RoboCart, as seen in Figure 1, has its RFID antenna close to the floor in front of the robot for reasons that will be explained later.

Fig. 1. RoboCart Hardware

Fig. 2. RFID mat

3.2 Navigation

Navigation in RoboCart is based on Kuipers' Spatial Semantic Hierarchy (SSH) [13]. The SSH is a model for representing spatial knowledge at five levels: sensory, control, causal, topological, and metric. The sensory level is the interface to the robot's sensory system. RoboCart's primary sensors are a laser range finder, a camera, and an RFID reader. The control level represents the environment in terms of control laws that have trigger and termination conditions associated with them. The causal level describes the environment in terms of views and actions. Views specify triggers; actions specify control laws. For example, follow-hall can be a control law triggered by start-of-hall and terminated by end-of-hall. The topological level of the SSH is a higher level of abstraction, consisting of places, paths, and regions, and their connectivity, order, and containment relationships. The metric level describes a global metric map of the environment within a single frame of reference.

To deal with large open spaces, we decided to use laser-based Monte Carlo Markov localization (MCL) [14], as it was already implemented in ActivMedia's Laser Mapping and Navigation software. After several field tests, we discovered some problems with MCL. First, the robot's ability to localize accurately deteriorated rapidly in the presence of heavy shopper traffic. Second, MCL sometimes failed due to wheel slippage on a wet floor or due to the blind shopper inadvertently pulling on the handle. Third, since MCL relies exclusively on odometry to localize along a long uniform hallway that lacks unique laser range signatures, the robot would frequently get lost in an aisle. Fourth, MCL localization frequently failed in the store lobby, because the lobby constantly changed its layout due to promotion displays, flower stands, and product boxes. Finally, once MCL fails, it either never recovers, or recovers only after a long drift.
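For concreteness, the MCL cycle can be sketched as a generic particle filter. The following is a minimal sketch, not ActivMedia's implementation: the motion noise value is illustrative and the sensor model `likelihood(pose, scan)` is left as a user-supplied placeholder.

```python
import random

def mcl_step(particles, odometry, scan, likelihood, motion_noise=0.05):
    """One Monte Carlo localization cycle over a list of (x, y, theta) particles.

    `likelihood(pose, scan)` is a user-supplied sensor model (e.g. a laser
    likelihood field); this sketch does not fix a particular one.
    """
    dx, dy, dtheta = odometry
    # 1. Predict: propagate each particle through a noisy odometric motion model.
    predicted = [(x + dx + random.gauss(0, motion_noise),
                  y + dy + random.gauss(0, motion_noise),
                  th + dtheta + random.gauss(0, motion_noise))
                 for (x, y, th) in particles]
    # 2. Update: weight each particle by how well the scan fits at that pose.
    weights = [likelihood(p, scan) for p in predicted]
    total = sum(weights)
    if total == 0:
        return predicted          # all particles inconsistent with the scan
    weights = [w / total for w in weights]
    # 3. Resample: draw a new set proportional to the weights. In a long uniform
    # aisle every pose along the aisle scores similarly, so the filter cannot
    # correct the drift accumulated from odometry -- the failure mode above.
    return random.choices(predicted, weights=weights, k=len(particles))
```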

3.3 RFID-based recalibration

We conjectured that MCL was a viable option if the robot could somehow recalibrate its position on the global map periodically and reliably. To allow for periodic and reliable MCL recalibration, we decided to turn the floor of the store into an RFID-enabled surface, where each RFID tag has known 2D coordinates. A literature search showed that some ubiquitous computing researchers had started thinking along the same lines [8]. The concept of the RFID-enabled surface was refined into the concept of recalibration areas, i.e., areas of the floor with embedded RFID tags. In our current implementation, recalibration areas are RFID mats: small carpets with embedded RFID tags. The mats are placed at specified locations in the store without causing any disruption to the indigenous business processes. The literature search also showed that RFID has been used to assist laser-based localization. For example, in [15], the authors demonstrate how RFID can be used to improve the performance of laser-based localization through a probabilistic measurement model for RFID readers. While this is certainly a valid approach, we think that one advantage of recalibration areas is deterministic localization: when the robot reaches a recalibration area, its location is known with certainty.

We built several RFID mats with RFID tags embedded in a hexagonal pattern. An RFID mat is shown in Figure 2. Every recalibration area is mapped to a corresponding rectangular region in the store's global metric map constructed using ActivMedia's metric map building software, Mapper3®. In the future, as larger recalibration areas are deployed, every RFID tag may have a unique ID so that a recalibration area may act as a topological region with its own coordinate system and frame of reference.
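As an illustration of the deterministic recalibration idea, the following sketch assumes a hypothetical table mapping tag IDs to map coordinates; the tag IDs, coordinates, and spread value are illustrative, not RoboCart's actual data. When a mat tag is read, the particle set is collapsed around the tag's known position.

```python
import random

# Hypothetical tag id -> (x, y) coordinates in the global map frame.
TAG_COORDS = {"tag_017": (12.40, 3.05), "tag_018": (12.40, 3.35)}

def recalibrate(particles, detected_tag, spread=0.10):
    """Collapse the particle set around the known coordinates of a detected tag."""
    if detected_tag not in TAG_COORDS:
        return particles                      # unknown tag: leave MCL untouched
    x, y = TAG_COORDS[detected_tag]
    # Keep each particle's heading, but snap its position to the tag with a small
    # spread reflecting the antenna's read zone over the mat.
    return [(x + random.gauss(0, spread), y + random.gauss(0, spread), th)
            for (_, _, th) in particles]
```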

3.4 Semi-automatic acquisition of topology and causality

A principal limitation of RG was that the topological and causal levels of the SSH had to be created manually for a given environment [5]. In RoboCart, several aspects of acquiring topological and causal knowledge were automated. The problem is to have the robot itself acquire the connectivity of landmarks and the maneuvers that can be executed at each landmark. Two types of representations of the environment must be acquired before RoboCart can navigate a grocery store: first, the metric level representation in the form of an occupancy grid map, which is used by the MCL algorithm, and second, the control/causal level representation, which is an abstraction of the metric map used for path planning. In RoboCart, the acquisition process has four steps. First, the robot is manually driven through the environment to acquire a global metric map with ActivMedia's Mapper3 laser-based software. Figure 3 shows the metric map for the area of Lee's MarketPlace used in the experiments. Second, a dark blue masking tape is placed on the floor. In Figure 3, the tape goes north from the robot's home location and turns west between the cash registers and the grocery aisles.

Third, the robot follows the tape to acquire the topological and causal knowledge. Fourth, the tape is removed. The robot uses a Logitech® web cam to capture floor images. Four actions are used: follow-tape, turn-left-90, turn-right-90, and turn-180. The two action triggers are tape intersections and ends of turns. An edge detection algorithm is used to follow the tape and to recognize three tape fiducials: straight-tape, intersection, and horizontal-tape. When a tape intersection is detected, the robot stops and presents a confirmation dialogue to the operator. The operator accepts the landmark if it is a true positive and rejects it if it is a false positive. Thus, the causal schemas ⟨View, Action, View⟩ are obtained through user interaction, where Action ∈ {follow-tape, turn-left-90, turn-right-90, turn-180} and View ∈ {tape-intersection, horizontal-tape, end-of-turn}. If a visual landmark is accepted, a fuzzy metric landmark is created: if the robot's global position is ⟨x, y⟩, the fuzzy landmark is a rectangular region from x_i to x_j and from y_k to y_m, where x_i = x − δ, x_j = x + δ, y_k = y − δ, y_m = y + δ, and δ is an integer constant. It took us 40 minutes to acquire the topological and causal knowledge of the area of Lee's MarketPlace shown in Figure 3: 10 minutes for the metric map acquisition, 10 minutes for deploying the tape, 15 minutes for running the robot, and 5 minutes for removing the tape. The acquired knowledge of the environment thus consists of three files: the global metric map file, a file with fuzzy metric landmarks, and a file with a fuzzy metric landmark connectivity graph that also contains the actions that can be executed at each landmark, described in the next section.
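A minimal sketch of the fuzzy metric landmark construction and membership test follows directly from the δ definition above; the value of δ here is illustrative, not the constant used in RoboCart.

```python
DELTA = 1  # half-width of the fuzzy rectangle, in map units (illustrative value)

def make_fuzzy_landmark(x, y, delta=DELTA):
    """Rectangular region [x - delta, x + delta] x [y - delta, y + delta]."""
    return (x - delta, x + delta, y - delta, y + delta)

def in_landmark(pose, landmark):
    """True if the robot's (x, y) position falls inside the fuzzy rectangle."""
    x, y, _theta = pose
    xi, xj, yk, ym = landmark
    return xi <= x <= xj and yk <= y <= ym
```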

Fig. 3. Fuzzy areas in the grocery store environment.

3.5 Actions

Path planning in RoboCart is done through a breadth-first search. A path plan is a sequence of fuzzy metric landmarks connected by actions. Whenever a landmark is reached, the appropriate action is triggered. The path actions are turn-into-left-aisle, turn-into-right-aisle, follow-maximum-empty-space, and track-target. The first three rely on finding and choosing the maximum empty space as the direction of travel. The track-target action relies on the current pose Pc and the destination pose Pd to compute the direction of travel. Other actions, which are not a part of the pre-planned path, are stop-and-recalibrate, beep-and-stop, and inform-and-stop. If there is an obstacle in the path, RoboCart emits a beep and stops for 8 seconds before starting to avoid the obstacle. When RoboCart reaches a destination, it informs the user about the destination and stops. The actions and their trigger and termination views are listed in Table 1.

Table 1. Actions and Views

Trigger      | Action                 | Termination
fuzzy-area   | track-target           | fuzzy-area
fuzzy-area   | turn-into-right-aisle  | fuzzy-area
fuzzy-area   | turn-into-left-aisle   | fuzzy-area
fuzzy-area   | maximum-empty-space    | fuzzy-area
RFID-tag     | stop-and-recalibrate   | pose=tag-x-y
Obstacle     | beep-and-stop          | no-obstacle
Obstacle     | beep-and-stop          | time-out
destination  | inform-and-stop        | none
cashier      | inform-and-stop        | resume
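The path planner itself is a standard breadth-first search over the landmark connectivity graph. The sketch below assumes the graph maps each landmark to (action, next-landmark) pairs; the landmark names and graph contents are illustrative, not taken from RoboCart's actual knowledge files.

```python
from collections import deque

def plan_path(graph, start, goal):
    """Return a list of (action, landmark) pairs from start to goal, or None."""
    queue = deque([start])
    came_from = {start: None}            # landmark -> (previous landmark, action)
    while queue:
        current = queue.popleft()
        if current == goal:
            # Reconstruct the plan by walking the predecessor links backwards.
            plan = []
            while came_from[current] is not None:
                prev, action = came_from[current]
                plan.append((action, current))
                current = prev
            return list(reversed(plan))
        for action, nxt in graph.get(current, []):
            if nxt not in came_from:
                came_from[nxt] = (current, action)
                queue.append(nxt)
    return None

# Example with a tiny illustrative graph:
graph = {
    "home": [("track-target", "lobby")],
    "lobby": [("maximum-empty-space", "aisle9-south")],
    "aisle9-south": [("turn-into-right-aisle", "aisle9-north")],
}
print(plan_path(graph, "home", "aisle9-north"))
```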

3.6 Human-Robot Interaction

The shopper communicates with RoboCart through a 10-key numeric keypad attached to the handle of the cart. A speech-enabled menu allows the shopper to perform tasks such as browsing the hierarchical product database, selecting products, navigating, pausing, and resuming. A wireless IT2020 barcode reader from Hand Held Products, Inc. is attached to the laptop. When the shopper reaches the desired product in the aisle, he or she picks up the barcode reader and scans the barcodes on the edge of the shelf. When a barcode is scanned, the reader beeps. If the scanned barcode is that of the desired item, the shopper hears the product title in the Bluetooth® headphones. The shopper can then carefully reach for the product above the scanned barcode and place it in the shopping basket installed on RoboCart.
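As a rough illustration of this scan-and-confirm loop, the following sketch uses hypothetical stand-ins (product_db, speak, read_barcode) for RoboCart's product database, speech output, and barcode reader interface; none of these names come from the actual system.

```python
product_db = {"0001234567890": "Creamy peanut butter, 16 oz"}  # illustrative entry

def speak(text):
    print(f"[TTS] {text}")               # placeholder for the speech synthesizer

def confirm_scan(read_barcode, target_barcode):
    """Keep scanning shelf barcodes until the desired product's barcode is found."""
    while True:
        code = read_barcode()            # blocks until the wireless reader beeps
        if code == target_barcode:
            speak(product_db.get(code, "Unknown product"))
            return code
        speak("Not the requested item, keep scanning")
```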

4 Proof-of-Concept Experiments

4.1 Experiment 1

Localization error samples were collected from two populations: Pnomat and Pmat, where Pnomat is the population of localization errors when no recalibration is done at RFID mats, and Pmat is the population of localization errors when recalibration is done. Localization error in centimeters was calculated from true and estimated poses as follows. A white masking tape was placed on a dark brown office floor, forming a rectangle with a 30-meter perimeter. At 24 selected locations, the tape was crossed by perpendicular stretches of the same white masking tape. The x and y coordinates of each intersection were recorded. Four new PRF mats were developed. Each mat consisted of a carpet surface, 1.2 meters long and 0.6 meters wide, instrumented with 12 RFID tags. The x-y regions of each mat were supplied to the robot. The mats were placed in the middle of each side of the rectangle. The robot used the vision-based tape following and tape intersection recognition algorithms to determine its true pose (ground truth). Thus, whenever a tape intersection was recognized, the robot recorded two readings: its true pose determined from vision and its estimated pose determined from MCL. The first 16 runs without recalibration produced 384 (24 landmarks x 16 runs) samples from Pnomat. Another 16 runs with recalibration produced 384 samples from Pmat.

Let H0: µ1 − µ2 = 0 be the null hypothesis, where µ1 and µ2 are the means of Pnomat and Pmat, respectively. The paired t-test at α = 0.001 was used to compute the t-statistic as t = (M1 − M2)/√(σ1²/n1 + σ2²/n2), where M1 and M2 are the sample means. From the data obtained in the experiments, the value of the t-statistic was 6.67, which was sufficient to reject H0 at the selected α. The use of PRF mats as recalibration areas showed a 20.23% reduction in localization error: from a mean localization error of 16.8 cm without recalibration to 13.4 cm with recalibration. Since the test was conducted in a simple office environment, the errors are small. It is expected that in a larger environment, e.g., a supermarket, the errors will be significantly larger. Negotiations with the store for conducting recalibration experiments are underway as this paper is being written.
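For reference, the t-statistic above can be computed directly from the two error samples. The sketch below uses small made-up arrays in place of the 384 recorded localization errors per condition.

```python
import math

def t_statistic(sample1, sample2):
    """Two-sample t-statistic t = (M1 - M2) / sqrt(s1^2/n1 + s2^2/n2)."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    var1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)   # sample variances
    var2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    return (m1 - m2) / math.sqrt(var1 / n1 + var2 / n2)

# Illustrative localization errors in cm (not the experimental data):
no_mat = [18.0, 16.5, 17.2, 15.9, 16.1]
with_mat = [13.1, 13.8, 12.9, 14.0, 13.2]
print(round(t_statistic(no_mat, with_mat), 2))
```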

4.2 Experiment 2

Upon entering the store, a visually impaired shopper must complete the following tasks: find RoboCart, use RoboCart to navigate to the shelf sections with the needed grocery items, find those items on the shelves, place them into RoboCart, navigate to the cash register, place the items on the conveyor belt, pay for the items, navigate to the exit, remove the shopping bags from RoboCart, and leave the store. The purpose of the second experiment was to test feasibility with respect to these tasks. In particular, we focused on two questions: 1) Can the shopper successfully retrieve a given set of products? and 2) Does the repeated use of RoboCart result in a reduction of the overall shopping time?

The sample product database consisted of products from aisles 9 and 10, with 8 products on the top shelf, 8 products on the third shelf from the bottom, and 8 products on the bottom shelf. The trials were run with two visually impaired shoppers from the local community over a period of six days. A single shopping trial consisted of the user picking up RoboCart from the docking area, navigating to three pre-selected products, and navigating back to the docking area through the cash register. Before the actual trials, the shopper was given 15 minutes of training on using the barcode reader to scan barcodes on the shelves. We ran 7 trials for three different sets of products. To make the shopping task realistic, for each trial one product was chosen from the top shelf, one from the third shelf, and one from the bottom shelf. Split timings for each of the ten tasks were recorded and graphed. Figure 3 shows the path taken by RoboCart during each trial. The RFID mats were placed at both ends of the aisles, as shown in Figure 3.

The shoppers successfully retrieved all products. The times for the seven shopping iterations for product sets 1, 2, and 3 were graphed. The time taken by the navigation tasks remained fairly constant over all shopping trials. The graph in Figure 4 shows that the time to find a product decreases after a few trials. The initially longer time to find a product is due to the fact that the user is not aware of the exact location of the product on the shelf. Eventually the user learns where to look for the barcode, and the product retrieval time decreases, stabilizing at an average of 20 to 30 seconds.

Fig. 4. Product Retrieval Performance for Participant 1

Fig. 5. Product Retrieval Performance for Participant 2

After conducting the experiments with the first participant, we felt the need to modify the structure of the barcode reader so that the user could more easily scan barcodes on the shelves. This led to a minor ergonomic modification that enabled the user to rest the reader on the shelves and scan barcodes with ease. This modification greatly improved performance, as can be seen from the results in Figure 5.

5 Conclusions

We presented a proof-of-concept prototype of RoboCart, a robotic shopping assistant for the visually impaired. We described how we approach the navigation problem in a grocery store, presented our approach to the semi-automatic acquisition of two levels of the SSH, and described our use of RFID mats to recalibrate MCL. We experimentally found that RFID-based recalibration reduced the MCL localization error by 20.23%. In the pilot experiments, we observed that the two visually impaired shoppers successfully retrieved all products and that the repeated use of RoboCart reduced the overall shopping time. The overall shopping time appears to be inversely related to the number of shopping trials and eventually stabilized. While the pilot feasibility study presented in this paper confirmed the practicality of a device like a smart shopping cart and gave valuable insights into the design of future experiments, our approach has limitations. Recalibration areas are currently placed in an ad hoc fashion; the system would greatly benefit if the placement of recalibration areas on the global metric map were done algorithmically. The automatic or semi-automatic construction of the product database with useful descriptions and handling instructions for all products has not been attempted.

6 Acknowledgements

This research has been supported, in part, through NSF CAREER grant (IIS-0346880) and two Community University Research Initiative (CURI) grants (CURI-04 and CURI-05) from the State of Utah awarded to Vladimir Kulyukin.

References

1. LaPlante, M.P., Carlson, D.: Disability in the United States: prevalence and causes. U.S. Department of Education, National Institute of Disability and Rehabilitation Research, Washington, DC (2000)
2. Kulyukin, V., Gharpure, C., Nicholson, J.: RoboCart: toward robot-assisted navigation of grocery stores by the visually impaired. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE/RSJ (2005)
3. Burrell, A.: Robot lends a seeing eye for blind shoppers. USA Today, July 11, 2005
4. Kulyukin, V., Gharpure, C.P., De Graw, N.: Human-computer interaction in a robotic guide for the visually impaired. In: Proceedings of the AAAI Spring Symposium, Palo Alto, CA (2004)
5. Kulyukin, V., Gharpure, C.P., Nicholson, J., Pavithran, S.: RFID in robot-assisted indoor navigation for the visually impaired. In: Proceedings of the 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004), Sendai, Japan (2004)
6. Zita Haigh, K., Kiff, L., Myers, J., Guralnik, V., Geib, C., Phelps, J., Wagner, T.: The Independent LifeStyle Assistant: AI lessons learned. In: Proceedings of the 2004 IAAI Conference, San Jose, CA, AAAI (2004)
7. Marston, J., Golledge, R.: Towards an accessible city: removing functional barriers for the blind and visually impaired: a case for auditory signs. Technical Report, Department of Geography, University of California at Santa Barbara (2000)
8. Willis, S., Helal, S.: A passive RFID information grid for location and proximity sensing for the blind user. University of Florida Technical Report TR04-009 (2004)
9. Vorwerk & Co.: Smart Carpet. http://www.vorwerk-teppich.de (2006)
10. Kantor, G., Singh, S.: Preliminary results in range-only localization and mapping. In: IEEE Conference on Robotics and Automation, Washington, D.C. (2002)
11. Hahnel, D., Burgard, W., Fox, D., Fishkin, K., Philipose, M.: Mapping and localization with RFID technology. Technical Report IRS-TR-03-014, Intel Research Institute, Seattle, WA (2003)
12. Kulyukin, V., Gharpure, C.P., Sute, P., DeGraw, N., Nicholson, J.: A robotic wayfinding system for the visually impaired. In: Proceedings of the Sixteenth Innovative Applications of Artificial Intelligence Conference, San Jose, CA (2004)
13. Kuipers, B.: The Spatial Semantic Hierarchy. Artificial Intelligence 119 (2000) 191-233
14. Fox, D.: Markov Localization: A Probabilistic Framework for Mobile Robot Localization and Navigation. PhD thesis, University of Bonn, Germany (1998)
15. Hahnel, D., Burgard, W., Fox, D., Fishkin, K., Philipose, M.: Mapping and localization with RFID technology. Technical Report IRS-TR-03-014, Intel Research Institute, Seattle, WA (2003)
