Position Tracking Using Sensor Fusion of a Wireless Network and a Laser Range Finder

Christopher Kirsch and Christof Röhrig
University of Applied Sciences and Arts Dortmund, Department of Computer Science, Emil-Figge-Str. 42, 44227 Dortmund
[email protected], [email protected]
Abstract—Automation of the internal material flow is used by many companies to reduce costs. Automation can be realized with Self Guided Vehicles (SGV), which transport goods between manufacturing plants or inside a warehouse. Wireless Sensor Networks (WSN) provide a lot of potential for logistics applications: they can be used to realize a decentralized stock database or to monitor the values of a cold chain. Furthermore, distance measurements from a mobile node to nodes with fixed and known positions can be used to localize the mobile node. The nanoLOC WSN measures the distance between two nodes using the signal propagation time and provides a secure communication method. This paper describes a sensor fusion of the nanoLOC WSN with a laser range finder. The laser range finder is used to detect landmarks and to provide accurate positioning for docking on a charging station. The distances of the WSN and the distances to the landmarks are combined in a Monte Carlo Particle Filter (MCP). The structure of the MCP and experimental results are presented. The results show that this sensor fusion provides accurate positioning for SGVs.
I. Introduction

Localization of mobile robots is an important issue for many applications and in current research activities. Localization is the process of estimating the position and orientation of a mobile robot in a Cartesian space. Automated Guided Vehicles (AGV) are a special kind of mobile robot. An AGV is a material handling system which is guided along defined pathways [1]. AGVs are suitable for automating the material flow in production processes and warehouses. The primary AGV applications in production and logistics are: 1) driverless train transport, 2) storage and distribution, 3) assembly line applications and 4) flexible manufacturing systems. The pathways which AGVs follow are mostly defined by embedded guide wires in the floor or paint strips on the ground. The costs for installing embedded guide wires are very high, and paint strips have to be renewed at regular intervals because of abrasion. The company Kiva Systems uses another technique to define pathways for AGVs: they put barcodes onto the floor which the AGVs detect with a camera [2]. These barcodes specify the pathways and guarantee an accurate localization. One problem of AGVs is that they are not very flexible because of the defined pathways, and that the installation of the pathways is often expensive. The advantage of AGVs is that they do not need
978-1-4244-7157-7/10/$26.00 ©2010 IEEE
complicated processes to localize themselves, because they are guided by defined pathways. Another kind of AGV is the Self Guided Vehicle (SGV). SGVs do not follow a defined pathway; they drive to their target, e.g. the charging station, by using path planning algorithms. SGVs therefore need to localize themselves through their own sensor measurements. Position tracking is of particular importance here, because SGVs always know their start points. The SGV which is developed at the Fraunhofer Institute for Material Flow and Logistics in collaboration with the University of Applied Sciences in Dortmund is a combination of an SGV with a handling system. This SGV has the capability to pull into a rack of a warehouse, take some products out and bring them to a target, e.g. to an order-picker. Between the rack and the target the SGV has to be able to localize itself accurately. An accurate localization is especially necessary for pulling into the rack. There are many techniques which can be used to localize an SGV, but it is always a combination of dead reckoning and one or more sensors. The most commonly used sensor is the laser range finder because of its ability to make high resolution measurements. Another sensor, which is used in different ways, is the Wireless Sensor Network (WSN). A WSN consists of spatially distributed autonomous sensor nodes for data acquisition. Besides military applications and monitoring physical or environmental conditions, WSNs can also be used for localization. To localize a mobile node, called a tag, a number of nodes with fixed and known positions are needed. These nodes are called beacons or anchors. WSNs have the advantages that they can be used in a wide field of applications and that they are inexpensive. An overview of applications for WSNs in logistics is given in [3]. In this paper a sensor fusion of a laser range finder and the nanoLOC WSN is presented.
To localize an SGV, the distance measurements of the laser to two landmarks are combined with the delay-time based distance measurements of the nanoLOC WSN. Both measurements are used by a Monte Carlo Particle Filter (MCP) together with dead reckoning information. This paper describes how the sensor fusion is implemented and which advantages the combination has. This paper is organized as follows: Section II presents related work. In Section III the characteristics of the nanoLOC WSN are described. An introduction into the
MCP is given in Section IV, which also presents the implementation of the sensor fusion. Here, the main points of interest are the structure of the nanoLOC measurement errors and the process for detecting landmarks in the environment. Experimental results are presented in Section V. Finally, the conclusion is given and further work is presented in Section VI.

II. Related Work

Localization is not only important for SGVs; it can also be important to localize pallets or containers in warehouses [3]. Today several sensors and methods for mobile robot localization are available. The methods for radio based technologies, especially for WSNs, can be divided into two groups: range-free and range-based methods. Range-free methods use the information of connections to known anchors for localization. This method is available in every wireless network, if the addresses of the anchors or the MAC addresses of the access points in a WLAN are known. The accuracy of the localization depends on the density of the anchors and on the range of the used technology. In [4] and [5] the connection information to one-hop and two-hop anchors of a WSN is used in an MCP for tag localization. Both papers focus on mobile anchors and a fixed, known connectivity range. Another option to localize a mobile robot is to use Radio Frequency Identification (RFID) technology. RFID tags can be applied at known positions in the environment, in order to obtain position information when they are in range. This information can be fused with data from other sensors (e.g. odometers) to improve the accuracy of localization [6]. The disadvantage of the RFID method is that the installation of the tags is expensive and complex. Using the Received Signal Strength (RSS) is another range-free method for wireless networks. This method combines the RSS information with a radio map to estimate the position of a tag.
This combination of RSS information and a radio map is called fingerprinting and is presented in [7]. Fingerprinting is divided into two phases: In the initial calibration phase, the radio map is built by moving around and storing RSS values at various predefined points of the environment. In the localization phase, the mobile device moves in the same environment while its position is estimated by comparing the current RSS values with the radio map. To compare the values, the Euclidean distance, a Bayesian algorithm or the Delaunay triangulation can be used. The general drawbacks of range-free methods are the low accuracy and the high costs for installation and configuration. The range-based methods use distances to known anchors to localize a tag. Common techniques for measuring the distance or angle between two nodes are:
• Angle of Arrival (AoA),
• Received Signal Strength (RSS),
• Time of Arrival (ToA),
• Round-trip Time of Flight (RToF),
• Time Difference of Arrival (TDoA).
AoA determines the position of the tag from the angle of arrival of a signal from fixed anchors using triangulation. It is necessary to estimate the angle to at least two anchors. The angle of arrival can be measured using antenna arrays or rotating beam antennas. Drawbacks of AoA based methods are the costs for the installation and the special hardware configuration. RSS can also be used as a range-based method, because the distance between sender and receiver can be obtained with the Log Distance Path Loss Model described in [8]. Unfortunately, the propagation model is sensitive to disturbances such as reflection, diffraction and multi-path effects. The signal propagation depends on the dimensions of the building, obstructions, partitioning materials and surrounding moving objects. Our own measurements show that these disturbances make the use of a propagation model for accurate localization in an indoor environment almost impossible [9]. ToA, RToF and TDoA estimate the range between two nodes by measuring the signal propagation delay. To estimate the position of a mobile device with trilateration, the distances to at least three anchors have to be measured. A special wireless technology is Ultra-Wideband (UWB), which can guarantee accurate distance measuring with ToA because of its large bandwidth (> 500 MHz). In [10] distance measurements to three UWB anchors are used together with a particle filter for tracking the position of a mobile robot. The particle filter is extended to handle LOS and NLOS measurements. A modification of RToF is utilized by the nanoLOC WSN of Nanotron Technologies. The features and modifications of the nanoLOC WSN are described in the next section. Two other popular methods for SGV localization are landmark based localization and map based localization. In both methods the most commonly used sensors are laser range finders, ultrasound sensors and cameras.
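The trilateration mentioned above can be sketched in a few lines. The following is a minimal 2D sketch (the function and variable names are ours, not from any cited system): the three circle equations are linearized by subtracting the first from the other two, and the resulting 2x2 linear system is solved with Cramer's rule.

```python
import math

def trilaterate(anchors, dists):
    """Estimate a 2D tag position from distances to three anchors.

    Linearizes the circle equations (x - xi)^2 + (y - yi)^2 = di^2
    by subtracting the first one, then solves the 2x2 system.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    # 2(xi - x1)x + 2(yi - y1)y = d1^2 - di^2 + xi^2 - x1^2 + yi^2 - y1^2
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-9:
        raise ValueError("anchors are collinear")
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# A tag at (3, 4) with three anchors at known positions:
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [5.0, math.sqrt(65.0), math.sqrt(45.0)]
print(trilaterate(anchors, dists))  # → (3.0, 4.0)
```

With noisy distances and more than three anchors, the same linearization yields an overdetermined system that can be solved by least squares instead.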
In landmark based localization the main task is to extract the relevant features from the sensor measurements in order to detect the landmark. A landmark is a distinguishable feature in the environment which can be detected. The distances or angles to the landmarks can be used to localize an SGV. An overview of landmark based localization is given in [11]. In map based localization the pose of a mobile robot is estimated by comparing current measurements with a map of the environment. The accuracy of this method depends mostly on the accuracy of the map.

III. The nanoLOC WSN

Nanotron Technologies has developed a WSN which can work as a Real-Time Location System (RTLS). The distance between two wireless nodes is determined by Symmetrical Double-Sided Two Way Ranging (SDS-TWR). SDS-TWR allows a distance measurement by means of the signal propagation delay as described in [12]. It estimates the distance between two nodes by measuring the RToF symmetrically from both sides. The wireless communication as well as the ranging methodology SDS-TWR are integrated in a single
chip, the nanoLOC TRX Transceiver [13]. The transceiver operates in the ISM band at 2.4 GHz and supports location-aware applications including Location Based Services (LBS) and asset tracking applications. The wireless communication is based on Nanotron's patented modulation technique Chirp Spread Spectrum (CSS) according to the wireless standard IEEE 802.15.4a. Data rates are selectable from 125 kbit/s to 2 Mbit/s. SDS-TWR is a technique that uses two delays which occur in signal transmission to determine the range between two nodes. This technique measures the round trip time and avoids the need to synchronize the clocks. The time measurement starts in node A by sending a packet. Node B starts its measurement when it receives this packet from node A and stops when the answer is sent back to the former transmitter. When node A receives the acknowledgment from node B, the accumulated time values in the received packet are used to calculate the distance between the two stations. The difference between the time measured by node A and the time measured by node B is twice the time of the signal propagation. To avoid the drawback of clock drift, the range measurement is performed twice and symmetrically. The signal propagation time t_d can be calculated as follows:

t_d = ((T_1 − T_2) + (T_3 − T_4)) / 4    (1)

where T_1 and T_4 are the delay times measured in node A during the first and second round trip, respectively, and T_2 and T_3 are the delay times measured in node B during the first and second round trip. This double-sided measurement zeros out the first order errors caused by clock drift [12]. Based on the nanoLOC TRX transceiver and the micro-controller ATmega 128L, the nanoLOC WSN can be used to develop location-aware and distance ranging wireless applications [14]. A mobile tag localizes itself by measuring the distances to a set of anchors used as reference points. The anchors are located at predefined positions within a Cartesian coordinate system. The tag position can be calculated by trilateration.
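Eq. 1 translates directly into code. The sketch below uses illustrative timestamp values of our own choosing (not from the nanoLOC datasheet) to show how the two symmetric round trips cancel the turnaround delay:

```python
C = 299_792_458.0  # speed of light in m/s

def sds_twr_distance(t1, t2, t3, t4):
    """Distance from SDS-TWR delays (in seconds), per Eq. 1:
    node A measures T1 and T4, node B measures T2 and T3;
    averaging both round trips cancels first-order clock drift."""
    td = ((t1 - t2) + (t3 - t4)) / 4.0
    return C * td

# Example: a true one-way flight time of 50 ns (about 15 m),
# with node B replying after a 200 us turnaround delay.
tof, reply = 50e-9, 200e-6
t1 = reply + 2 * tof   # first round trip measured by A
t2 = reply             # turnaround measured by B
t3 = reply + 2 * tof   # second, symmetric round trip
t4 = reply
print(round(sds_twr_distance(t1, t2, t3, t4), 2))  # → 14.99
```

Note that the large turnaround delay drops out entirely; only clock drift between the nodes remains as an error source, which the symmetric measurement suppresses.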
IV. Monte Carlo Particle Filter

The usage of raw distance measurements of a WSN has two crucial drawbacks: Firstly, because of noisy distance measurements there are always position leaps, even if the tag is not moving. The noisy distance measurements result from non-line-of-sight (NLOS) measurements and multipath fading. Secondly, the orientation of the tag cannot be estimated. The second disadvantage can be relaxed for position tracking by combining the distance measurements of the WSN with odometry data. Through this combination the position and orientation, called the pose, of the SGV can be estimated. However, the estimated pose can be erroneous because both sources of information are noisy. To estimate the SGV pose accurately from noisy measurements, methods based on the Bayesian filter are used. The Bayesian filter estimates the pose by using probability density functions which model the measurement errors. An introduction to the Bayesian filter is given in [15]. One method which is based on the Bayesian filter is the
Kalman Filter (KF). The KF has proven itself for position tracking in mobile robots. In [16] the Extended Kalman Filter (EKF) is used to track the position of a forklift truck. The EKF is an extension of the KF for non-linear systems. The EKF from [16] is enhanced in [17] to use distance measurements in NLOS environments. In both papers the distance measurements from a tag to anchors of a nanoLOC WSN are used. The KF and EKF rely on the assumption that motion and sensor errors are Gaussian and that the estimated position can be modeled by a Gaussian distribution. Because of this, KF and EKF cannot handle position ambiguities. Another method which is based on the Bayesian filter is the Particle Filter (PF). A PF can handle position ambiguities and does not rely on the assumption that motion and sensor errors are Gaussian; it can also cope with multimodal distributions. In a PF, a set S of N samples is distributed in the environment or at known places. A sample s is defined by Cartesian coordinates and an orientation. A widely used PF for mobile robot localization is the MCP, which is described in [18], [19] and [20]. The estimated pose of a mobile robot and the uncertainty about its correctness are represented by the samples. The MCP consists of two phases: the prediction phase and the update phase. In the prediction phase the motion information u_t is applied to each sample s[i]_{t−1} (1 ≤ i ≤ N). The prediction phase is also called the motion model. The result of the motion model is a new set of samples S_t which represents the positions where the mobile robot could be after executing the movement u_t. In the update phase, the set of distance measurements D_t is used to assign each sample an importance factor w. The importance factor corresponds to the probability p(D_t | s[i]_t, m), i.e. the probability of the distance measurements D_t at a point in the environment defined by sample s[i]_t and by using the information from the map m.
In m the positions of anchors and landmarks are stored. The result of the update phase, also called the measurement update, is the set of samples S_t of the prediction phase with the corresponding set of importance weights w[N]_t. Both sets together represent the current position likelihood of the mobile robot. After the update phase, the resampling step follows. In the resampling step, samples with a low importance weight are removed and samples with a high importance weight are duplicated. The result of the resampling is the set S_t of N samples which represents the current position of the mobile robot. In the next time step, the set S_t is used as S_{t−1}. There are two possibilities to extract the pose of the mobile robot from the sample set S_t: the first is to use the average of all samples and the second is to use the sample with the highest importance factor. The MCP flow chart is drafted in Fig. 1. The MCP has the advantage that it copes with both global localization (no a priori information) and position tracking (given a priori information). The running time and the memory efficiency are very good. Especially the sensor fusion with some dependencies and special cases can be
Fig. 1. MCP flow chart: starting from S_t = S_{t−1}, the prediction phase applies u_t, the update phase weights the resulting set S_t using d_nanoLOC,t and d_laser,t to obtain (S_t, w[N]_t), and the resampling step closes the loop.
implemented easily. Generally, the combination of sensor specific advantages and the compensation of sensor specific disadvantages is called sensor fusion. In this paper the MCP uses distance measurements from the nanoLOC WSN and a laser range finder. It is described how this sensor fusion is implemented. The main focus of attention is on the structure of the probability density function of the nanoLOC measurement error and on the landmark detection. In the next section experimental results of this MCP are presented.

A. NanoLOC Measurement Model

In the update phase the measurement model is used to calculate the importance factor w for each sample s. The measurement model is the probability density function p(d_nanoLOC,t | s[i]_t, m) which characterizes the measurement properties and errors. The measurement set d_nanoLOC,t contains distance measurements to A anchors. The density function depends on the used sensor and the environment. To estimate the density function for nanoLOC distance measurements, LOS measurements to four anchors were taken while a mobile robot drove a straight path between them. While the robot drove, its position was estimated by laser measurements to two walls. In Fig. 2, error histograms of measurements to four anchors are shown. The error is the difference between the measured distance d[a]_t and the Euclidean distance from the robot position to anchor a. The histograms show that all measured distances are too long; the average error is 107 cm. The error depends on the position of the anchor and on the environment. The median and standard deviation of the error distributions are different, but they all have a Gaussian structure. Because of that, it is possible to use a Gaussian distribution, Eq. 2, as the nanoLOC probability density function.

N(x, µ, σ²) = 1/√(2πσ²) · exp(−(x − µ)² / (2σ²))    (2)

To calculate the importance weight of sample s[i]_t, the Euclidean distance d^{a,i*} between this sample and the anchor a is necessary and calculated as

d^{a,i*} = √((x_i − x_a)² + (y_i − y_a)²).    (3)
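Eqs. 2 and 3 translate directly into code. A minimal sketch (the helper names are ours, not from the paper):

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian density N(x, mu, sigma^2) from Eq. 2;
    sigma is the standard deviation."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / math.sqrt(2.0 * math.pi * sigma ** 2)

def sample_anchor_distance(sample_xy, anchor_xy):
    """Euclidean distance d^{a,i*} between a sample and an anchor (Eq. 3)."""
    return math.hypot(sample_xy[0] - anchor_xy[0], sample_xy[1] - anchor_xy[1])

print(round(gaussian(0.0, 0.0, 1.0), 4))               # → 0.3989
print(sample_anchor_distance((3.0, 4.0), (0.0, 0.0)))  # → 5.0
```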
Fig. 2. Error histograms of nanoLOC distance measurements to four anchors. The x-axis shows the error in centimeters (−100 to 500) and the y-axis its frequency.
(x_a, y_a) is the position of the anchor a and (x_i, y_i) are the Cartesian coordinates of the sample s[i]_t. The Euclidean distance and the measured distance d[a]_t are used together with an anchor specific constant d[a]_const in a fixed Gaussian distribution, Eq. 4. The constant d[a]_const is the median of the distance errors shown in the histograms.

p(d[a]_nanoLOC,t | s[i]_t, m) = c₁ · N(d^{a,i*} − (d[a]_t − d[a]_const), 0, σ²)    (4)
The advantages of this fixed Gaussian distribution are that a normalization during the localization, to guarantee Σp = 1, is not necessary and that the domain can be restricted. The latter advantage can be used to evaluate whether the estimated position is good or not: if many samples have too large a difference, new samples can be drawn in the environment. This enables the MCP to re-localize the mobile robot, especially the SGV. The importance factor of a sample is calculated as in Eq. 5: it is the product of the probabilities of the measurements to A anchors and to two landmarks.

w[i]_t = ∏_{a=1}^{A} p(d[a]_nanoLOC,t | s[i]_t, m) · ∏_{l=1}^{2} p(d[l]_laser,t | s[i]_t, m)    (5)

The probability p(d[l]_laser,t | s[i]_t, m) is a fixed Gaussian with σ = 10 mm. How landmarks are detected is presented in the next subsection. If no landmarks are detected, the importance factor is simply the probability of the distance measurements to the A anchors.
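Putting Eqs. 2 to 5 together, the importance weight of a single sample might be computed as follows. This is a sketch under our own assumptions: the σ values, the constant c₁ and all names are illustrative, not taken from the paper.

```python
import math

def gaussian(x, mu, sigma):
    # Eq. 2; sigma is the standard deviation
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / math.sqrt(2.0 * math.pi * sigma ** 2)

def importance_weight(sample, anchors, d_nanoloc, d_const, landmarks, d_laser,
                      sigma_nano=0.5, sigma_laser=0.01, c1=1.0):
    """Importance weight per Eq. 5: the product over A anchor likelihoods
    (Eq. 4) and, if landmarks were detected, the two laser likelihoods.
    All distances are in meters; sigma values here are illustrative."""
    x, y = sample
    w = 1.0
    for (ax, ay), d_meas, d_c in zip(anchors, d_nanoloc, d_const):
        d_pred = math.hypot(x - ax, y - ay)                           # Eq. 3
        w *= c1 * gaussian(d_pred - (d_meas - d_c), 0.0, sigma_nano)  # Eq. 4
    for (lx, ly), d_meas in zip(landmarks, d_laser):  # empty if none detected
        d_pred = math.hypot(x - lx, y - ly)
        w *= gaussian(d_pred - d_meas, 0.0, sigma_laser)
    return w

# Two pose hypotheses: one at the true position, one 0.5 m off.
anchors = [(0.0, 0.0), (20.0, 0.0), (0.0, 20.0), (20.0, 20.0)]
truth = (5.0, 5.0)
bias = 1.07  # median measurement bias per anchor, as in the histograms
d_nano = [math.hypot(truth[0] - ax, truth[1] - ay) + bias for ax, ay in anchors]
d_const = [bias] * 4
w_good = importance_weight(truth, anchors, d_nano, d_const, [], [])
w_bad = importance_weight((5.5, 5.0), anchors, d_nano, d_const, [], [])
print(w_good > w_bad)  # → True
```

Because the Gaussian in Eq. 4 is fixed, the weights of different samples are directly comparable without renormalizing after every update, as noted above.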
B. Landmark Detection

Landmark detection is the process in which sensor data are checked for known characteristics. In the SGV's environment, two round columns stand in front of the charging stations and racks. These columns are used as landmarks. The used sensor is a laser range finder, so the process has to detect circles in the laser range data B. B is the set of L beams b,
which are characterized by a distance measurement b at a known angle α.

B = ( b₁ ... b_L )
    ( α₁ ... α_L )    (6)

Round columns are used because they are symmetric and therefore have the same characteristics from every angle of view. Two landmarks are used because distance measurements to both can guarantee an accurate localization. If landmarks are distinguishable, i.e. they have different radii, the position can be estimated uniquely. In SGV environments the columns are indistinguishable; using distance measurements to both landmarks narrows the position of the SGV down to one of two places. The MCP overcomes this disadvantage by using the nanoLOC WSN for co-localization. We assume that the localization through nanoLOC distance measurements is good enough that there are only samples near one of the possible places. The used method for detecting circles in laser range data is based on the Squared-Residual Voting Strategy (SRVS) described in [21]. This strategy calculates for each beam the mean of the squared residuals, which describes how well a set of beams fits a circle with known radius. A circle is detected if the mean squared-residual factor is lower than a given threshold. To calculate the mean of the squared residuals of beam b_l it is assumed that this beam points at the landmark centre. Then the number of beams is calculated which can also hit the landmark. This depends on the measured distance b_l of beam b_l, the landmark radius r and the resolution of the laser range finder ∆α. These beams are used to calculate the squared-residual factor of the current beam b_l. The geometric construction is shown in Fig. 3 and the formula to calculate the number of beams is given in Eq. 7.
n = (1/∆α) · arcsin(r / (r + b_l))    (7)

The set of beams which is used to calculate the mean of the squared residuals for beam b_l is given in Eq. 8. The number of elements in this set is always 2n+1: since n specifies the number of beams on one side, beams on the left and on the right of the current beam b_l are selected.

B_l = {b_{l−n}, ..., b_l, ..., b_{l+n}}    (8)

For each beam i of the set B_l the distance to the assumed centre is calculated by using the law of cosines in Eq. 9. α_l and b_l are the angle and distance of the beam which is analyzed and used as centre.

d_i = √((b_l + r)² + b_i² − 2(b_l + r)·b_i·cos(α_l − α_i))    (9)

The mean of the squared residuals for beam b_l is calculated as

d̄²_l = (1 / (2n+1)) · Σ_{i=1}^{2n+1} (d_i − r)².    (10)

Fig. 3. Geometric illustration of the squared-residual voting strategy: beam b_l points at the assumed centre C, a neighboring beam ends at distance d_i from C; r is the landmark radius and ∆α the angular resolution of the laser.

If d̄²_l is calculated for each beam, landmarks can be detected by comparing these values with a threshold. The SRVS is fast and can be implemented very easily. However, if columns with a radius ≤ 10 cm are used, corners such as door frames are often detected as round columns because they also get a small squared-residual factor. To overcome this disadvantage, we extend the method with two additional pieces of information about the landmark positions. The first is that the space beside the landmarks is free: laser beams which hit the boundary of a landmark have neighbors with much longer distances. Because of that, the squared-residual factor of such beams is much greater than that of beams which, together with all 2n+1 neighbors, hit a landmark or a wall. A landmark can then be detected by searching for a minimum between two maxima. The second piece of information is used if a set of landmarks is detected: the known distance d_LL between two landmarks is used to check which pair of landmarks belongs together. The landmark detection process is described as pseudocode in Fig. 4.

1:  B = getRangeData()
2:  d̄² = ∅
3:  B_max = B_min = ∅
4:  for all b_l ∈ B do
5:      n = (1/∆α) · arcsin(r / (r + b_l))
6:      for i = l − n to l + n do
7:          c = b_l + r
8:          d̄²_l += (1/(2n+1)) · (√(c² + b_i² − 2·c·b_i·cos(α_l − α_i)) − r)²
9:      end for
10:     if d̄²_l ≥ T_max then
11:         add b_l to B_max
12:     end if
13: end for
14: for k = 0 to |B_max| − 1 do
15:     if there is a minimum between B_max(k) and B_max(k+1) then
16:         add this minimum to B_min
17:     end if
18: end for
19: for all M ⊂ B_min with |M| == 2 do
20:     if distance between M(1) and M(2) ≈ d_LL then
21:         return distances to both landmarks
22:     end if
23: end for

Fig. 4. Landmark Detection Algorithm.
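The SRVS core (Eqs. 7-10) might be implemented as follows. This is a sketch under our own assumptions (a uniformly spaced scan, illustrative names); the thresholding, minimum-between-maxima and d_LL pairing steps of Fig. 4 are omitted:

```python
import math

def srvs_scores(beams, angles, radius):
    """Mean squared-residual (Eq. 10) for every beam of a laser scan,
    assuming each beam in turn points at a landmark centre (SRVS)."""
    L = len(beams)
    d_alpha = angles[1] - angles[0]   # angular resolution of the scan
    scores = [float("inf")] * L
    for l in range(L):
        n = int((1.0 / d_alpha) * math.asin(radius / (radius + beams[l])))  # Eq. 7
        if l - n < 0 or l + n >= L:
            continue                  # window would leave the scan
        c = beams[l] + radius         # assumed distance to the centre
        total = 0.0
        for i in range(l - n, l + n + 1):  # the 2n+1 beams of Eq. 8
            d_i = math.sqrt(c * c + beams[i] ** 2
                            - 2.0 * c * beams[i] * math.cos(angles[l] - angles[i]))  # Eq. 9
            total += (d_i - radius) ** 2
        scores[l] = total / (2 * n + 1)    # Eq. 10
    return scores

# Synthetic scan: a column of radius 0.1 m whose centre lies 2.1 m
# straight ahead, against a 10 m background; 0.5 deg resolution.
r, D = 0.1, 2.1
angles = [math.radians(-10 + 0.5 * k) for k in range(41)]
beams = []
for a in angles:
    s = D * math.sin(a)
    if abs(s) < r:
        beams.append(D * math.cos(a) - math.sqrt(r * r - s * s))  # ray hits the circle
    else:
        beams.append(10.0)                                        # background wall
scores = srvs_scores(beams, angles, r)
best = min(range(len(scores)), key=lambda k: scores[k])
print(best, scores[best] < 1e-9)  # → 20 True (the beam pointing at the centre)
```

As the paper notes, beams on a flat wall can also produce small residuals; this is exactly why the minimum-between-maxima search and the d_LL check are added on top of the raw scores.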
Fig. 5. Results of a local localization of a mobile robot in a LOS environment using the MCP sensor fusion. a) The driven path and the best weighted samples; the path is estimated from laser distance measurements to two walls. b) The sample clouds. c) The centers of the sample clouds. In all three plots the red line is the driven path and the black points are the results of the MCP; coordinates are in mm.

V. Experimental Results

To test the sensor fusion, a mobile robot drove a straight path in a corridor of the University of Applied Sciences and Arts Dortmund. The utilized robot was the Pioneer3AT from the company MobileRobots Inc., equipped with a SICK LMS200 laser range finder. While the robot drove, all information necessary for the MCP was stored: the odometry data, LOS distance measurements to four nanoLOC anchors and the laser range data. To verify the accuracy of the MCP, the driven path was extracted from the laser range data. The laser range finder used a resolution ∆α of 0.5°. The used landmarks have a radius of 5.2 cm and were positioned 18 m behind the start point. On a path of 20 m, 290 tuples of this information were stored. This information is used in an offline implementation of the MCP. The important task for SGVs is position tracking; because of this, the MCP starts with 1000 samples at a known position with correct orientation. The results of the position tracking are shown in Fig. 5. Fig. 5 a) shows the driven path and the best weighted samples. In Fig. 5 b) the sample clouds are shown. Fig. 5 c) shows the centers of the clouds. All three plots are the result of the MCP working on the stored data with the same parameters. In every plot the estimated positions follow the driven path with slight errors. Fig. 5 a) shows that using the best weighted sample as the SGV's pose is quite difficult, because sometimes there are heavy position leaps. Using the sample cloud centers shown in Fig. 5 c) overcomes this disadvantage: these positions follow the driven path much better.

Fig. 5 b) shows that there is less information about the x-component, visible in the wide sample clouds at the beginning of the localization. The landmark detection is visible in Fig. 5 b) as a very dense, round sample cloud at y = 13 500. In the other two plots the detection is visible as a much better localization result. The uncertainty in the positioning is much bigger if only the nanoLOC measurements are used. To assess the accuracy of the MCP, the cloud centers are compared with the positions estimated from the laser range data. The error, the Euclidean distance between these two positions in millimeters, is shown in Fig. 6.

Fig. 6. Error of the cloud centers shown in Fig. 5 c) relative to the driven path, in mm (x-axis: execution number of the Monte Carlo algorithm, 0-250; y-axis: error in mm, 0-1000).
The largest error when using only the nanoLOC distance measurements is 280 mm. If the two landmarks are detected, the error is between 20 mm and 50 mm. The landmark detection is very accurate: it detects the two landmarks for the first time at a distance of 5.20 m and for the last time at a distance of 40 cm. The results show that the assumption that there are only samples at one of the two possible places was correct. After the robot drove through the landmarks, the error climbs; this depends on the environment and on the positions of the anchors.

VI. Conclusion and Further Work

In this paper, position tracking of an SGV by using a sensor fusion is presented. An MCP is implemented and tested which uses the distance measurements to anchors of the nanoLOC WSN and laser measurements to two landmarks. The main focus of attention was on the structure of the measurement model and on the landmark detection process. Position tracking is important for SGVs between the place where they get goods and the target, e.g. another production area. The experimental results show that the sensor fusion is an accurate and flexible method for position tracking. The localization accuracy using the nanoLOC measurements is good enough for position tracking between these places. If two landmarks are detected, the position error is between 2 cm and 5 cm. This is good enough to control a docking operation, for example at a charging station. To use the presented MCP, two preparations have to be made. Firstly, the positions of anchors and landmarks have to be added to the map m, and secondly, the measurement model for nanoLOC distance measurements has to be checked, because it depends on the environment and on the anchor positions. In the next step, we want to use the estimated positions in a path planning algorithm. We also want to check how well this sensor fusion works for global localization and especially re-localization.
Path planning and the ability to re-localize itself are very important for an SGV, because both can guarantee an autonomous and secure operation.
References

[1] M. P. Groover, Automation, Production Systems, and Computer-Integrated Manufacturing. Prentice Hall, 2007.
[2] E. Guizzo, "Three Engineers, Hundreds of Robots, One Warehouse," IEEE Spectrum, vol. 7, pp. 27–34, 2008.
[3] L. Evers, M. J. J. Bijl, M. Marin-Perianu, R. S. Marin-Perianu, and P. J. M. Havinga, "Wireless Sensor Networks and Beyond: A Case Study on Transport and Logistics," Centre for Telematics and Information Technology, University of Twente, Enschede, Netherlands, Technical Report TR-CTIT-05-26, 2005.
[4] A. Baggio and K. Langendoen, "Monte-Carlo Localization for Mobile Wireless Sensor Networks," Technical Report PDS-2006-004, Delft University of Technology, 2006.
[5] D. Evans and L. Hu, "Localization for Mobile Sensor Networks," in Proceedings of the 10th Annual International Conference on Mobile Computing and Networking, 2004, pp. 45–57.
[6] J. Koch, J. Wettach, and E. Bloch, "Indoor Localisation of Humans, Objects, and Mobile Robots with RFID Infrastructure," in Proceedings of the 7th International Conference on Hybrid Intelligent Systems, Kaiserslautern, Germany, Sep. 2007, pp. 271–276.
[7] P. Bahl and V. N. Padmanabhan, "RADAR: An In-Building RF-based User Location and Tracking System," in Proceedings of the 19th Annual Joint Conference of the IEEE Computer and Communications Societies, vol. 2, Tel Aviv, Israel, Mar. 2000, pp. 775–784.
[8] N. Patwari, A. O. Hero, M. Perkins, N. S. Correal, and R. O'Dea, "Relative Location Estimation in Wireless Sensor Networks," IEEE Transactions on Signal Processing, vol. 51, no. 8, pp. 2137–2148, 2003.
[9] C. Röhrig and F. Künemund, "WLAN based Pose Estimation for Mobile Robots," in Proceedings of the 17th IFAC World Congress, Seoul, Korea, Jul. 2008, pp. 10433–10438.
[10] J. González, J. L. Blanco, C. Galindo, A. Ortiz-de Galisteo, J. A. Fernández-Madrigal, F. A. Moreno, and J. L. Martínez, "Mobile robot localization based on ultra-wide-band ranging: A particle filter approach," Robotics and Autonomous Systems, vol. 57, no. 5, pp. 496–507, 2009.
[11] M. Betke and L. Gurvits, "Mobile Robot Localization using Landmarks," IEEE Transactions on Robotics and Automation, vol. 13, no. 2, pp. 251–263, 1997.
[12] "Real Time Location Systems (RTLS)," Nanotron Technologies GmbH, Berlin, Germany, White paper NA-06-0248-0391-1.02, Apr. 2007.
[13] "nanoLOC TRX Transceiver (NA5TR1)," Nanotron Technologies GmbH, Berlin, Germany, Datasheet NA-06-0230-0388-2.00, Apr. 2008.
[14] "nanoLOC Development Kit User Guide," Nanotron Technologies GmbH, Berlin, Germany, Technical Report NA-06-0230-0402-1.03, Feb. 2007.
[15] D. Fox, J. Hightower, and L. Liao, "Bayesian filtering for location estimation," IEEE Pervasive Computing, vol. 2, no. 3, pp. 24–33, 2003.
[16] C. Röhrig and S. Spieker, "Tracking of Transport Vehicles for Warehouse Management using a Wireless Sensor Network," in 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2008), Nice, France, Sep. 2008, pp. 3260–3265.
[17] C. Röhrig and M. Müller, "Indoor Location Tracking in Non-line-of-Sight Environments Using a IEEE 802.15.4a Wireless Network," pp. 552–557, Oct. 2009. [Online]. Available: http://www.inf.fh-dortmund.de/personen/professoren/roehrig/papers/iros09.pdf
[18] F. Dellaert, D. Fox, W. Burgard, and S. Thrun, "Monte Carlo Localization for Mobile Robots," in IEEE International Conference on Robotics and Automation (ICRA99), May 1999.
[19] D. Fox, W. Burgard, F. Dellaert, and S. Thrun, "Monte Carlo Localization: Efficient Position Estimation for Mobile Robots," in Proceedings of the National Conference on Artificial Intelligence (AAAI), 1999, pp. 343–349.
[20] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. Cambridge, MA, USA: The MIT Press, 2006.
[21] J. Ryde and H. Hu, "Fast Circular Landmark Detection for Cooperative Localisation and Mapping," in Proceedings of the IEEE International Conference on Robotics and Automation, 2005, pp. 2756–2761.