International Journal of Control, Automation, and Systems (2010) 8(6):1232-1240 DOI 10.1007/s12555-010-0608-7
http://www.springer.com/12555
INS/vSLAM System Using Distributed Particle Filter

Dae Hee Won, Sebum Chun, Sangkyung Sung, Young Jae Lee, Jeongho Cho, Jungmin Joo, and Jungkeun Park*

Abstract: In implementing an INS/SLAM integrated navigation system based on a vision sensor, a suboptimal nonlinear filter is used to handle the nonlinear characteristics of the measurement and noise models. When a conventional centralized filter is used, however, the entire state vector must be reconfigured whenever the number of feature points changes, which makes it hard to isolate potential faults. Furthermore, any change in the number of feature points, and the resulting increase in the dimension of the state vector, may cause an exponential growth in the computational load. To address these issues, this paper presents a distributed particle filter approach for implementing a vision sensor based INS/SLAM system. The proposed system has several local filters whose number changes flexibly with the number of feature points, and it separates the state vector into sub-states for the vehicle dynamics and the feature points so that a minimum state vector can be estimated in the master filter. Simulation results show that the distributed particle filter performs competitively with the centralized particle filter while reducing the computational load.

Keywords: Distributed filter, inertial navigation system, particle filter, vision sensor.
1. INTRODUCTION

Due to the convergence of high-performance, low-power, low-priced processors and vision sensors, active research has recently been underway on navigation systems based on simultaneous localization and mapping (SLAM). SLAM is a technique that builds a map and identifies the vehicle's position within it. Unlike the global positioning system (GPS), it needs no additional signals from external systems, and it performs autonomous navigation without any prior information on the environment or the position [1-3]. SLAM has been explored primarily in the field of mobile robots carrying out missions in a two-dimensional environment [1,7]. Because it requires no prior information on the surroundings or the robot position, it can be used as a navigation method for unmanned aerial vehicles (UAVs) that perform surveillance missions for long periods over vast areas. A vision-based SLAM (vSLAM) system can be a great solution especially when the GPS system, which is widely used in UAVs, has outages owing to a lack of visible satellites or multipath errors in poor environments such as urban areas, indoor navigation or underground/space exploration. SLAM, however, is rarely used alone; it is mostly combined with dead reckoning (DR) or an inertial navigation system (INS), because it has a low update rate for providing a navigation solution [4-6].

In general, constructing a SLAM system requires a radar, a laser scanner, a vision sensor or another type of sensor that measures range and bearing. This research uses a single vision sensor, which is relatively low-priced but, being a single camera, offers bearing-only information and cannot directly provide range. Due to the nonlinear observation model of the vision sensor, the integration is performed using an extended Kalman filter (EKF), an unscented Kalman filter (UKF) or a particle filter. Kim [2,3] and Sukkarieh [5] realized an INS/SLAM integrated system using an EKF, while Bailey [1] implemented a SLAM system using a UKF and a particle filter. It is generally known that the particle filter requires more computation than the EKF and UKF while offering better estimation performance [8]. The SLAM techniques suggested by Kim and Bailey, however, have a centralized filter structure, so the state vector must be reconfigured whenever the observation environment of the feature points changes. The filter is realized with a state vector of relatively high order, so the computational load is sizable and it is hard to deal with faults [2,3].

To tackle these limitations of the centralized filter, this paper proposes a distributed particle filter. In this distributed filter, the individual local filters are designed to change flexibly with the number of feature points. The proposed method also separates the state vectors of the local filters into a vehicle-related part and a feature point-related part so that only the minimum state vector needs to be estimated in the master filter. Furthermore, it is easy to deal with faults by checking each local filter, and to add further sensors by adding local filters. Lastly, Monte Carlo simulations are performed to verify the stability of the filter, and its performance and computational load are compared with those of the extended Kalman filter and the centralized particle filter. The simulation results show that the suggested filter handles changes of the state vector better and reduces the computational load, while its estimation performance is similar to that of the centralized particle filter, which is known to be optimal. The EKF is the best in terms of computation speed, but its estimation result has the largest error because of its linearization; for this reason, it is not a proper filter for this system. The centralized particle filter needs the largest computation time, even though its estimation result is similar to that of the distributed one. Thus, the proposed filter obtains higher estimation performance and reduces the computation time by combining the particle filter with a distributed filter structure.

This paper is organized as follows. Section 2 explains the vision sensor based INS/vSLAM integrated system, and Section 3 proposes an INS/vSLAM integrated system using a distributed particle filter. Section 4 shows simulation results verifying the performance of the proposed algorithm, and Section 5 states our conclusions.

__________
Manuscript received April 29, 2009; revised January 31, 2010; accepted July 21, 2010. Recommended by Editorial Board member Hyo-Choong Bang under the direction of Editor Jae-Bok Song. This work was supported by Korea Research Council of Fundamental Science & Technology.
Dae Hee Won, Sangkyung Sung, Young Jae Lee, and Jungkeun Park are with the Department of Aerospace Information Engineering, Konkuk University, 1 Hwayang-dong, Gwangjin-gu, Seoul 143-701, Korea (e-mails: {smileday, sksung, younglee, parkjk}@konkuk.ac.kr).
Sebum Chun, Jeongho Cho, and Jungmin Joo are with the Satellite Navigation Department, Korea Aerospace Research Institute, 115 Gwahangno, Yuseong-gu, Daejeon 305-333, Korea (e-mails: {sbchun, jcho, jmjoo}@kari.re.kr).
* Corresponding author.
© ICROS, KIEE and Springer 2010

2. INS/VSLAM INTEGRATED SYSTEM

In this section, we first present the vision sensor based SLAM algorithm. After that, we explain the system and observation models of the INS/vSLAM integrated system.

2.1. vSLAM
To configure vSLAM, it is essential to extract easy-to-track feature points and track them within the image data obtained consecutively.
Algorithms generally used for extracting and tracking feature points include Kanade-Lucas-Tomasi (KLT) [9] and the Scale-Invariant Feature Transform (SIFT) [10]. When a feature point is assumed to be fixed in the local frame, the movements of the feature points are correlated only with changes in the attitude and position of the vision sensor. If the vision sensor is fixed to the vehicle, the movements of the feature points can therefore be used to estimate the position and attitude of the vehicle: comparing the estimated and observed feature point positions yields the navigation errors of the other sensors. Fig. 1 is an example of feature point tracking, where the square boxes indicate windows of a certain size that can serve as feature points within the respective images. The feature point selected in Frame 1 is matched to a window with the same characteristics in Frame 2; the vision sensor's movements can be calculated by analyzing the changes in the positions of these feature points.
Fig. 1. Examples of feature points and tracking results.
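The window-matching idea described above can be illustrated with a toy sum-of-squared-differences (SSD) search. This is a simplified sketch, not the KLT or SIFT algorithms cited in the text; the image is represented as a plain list of pixel rows, and the function names are illustrative only.

```python
def ssd(a, b):
    # Sum of squared differences between two equally sized pixel windows.
    return sum((pa - pb) ** 2 for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def track_window(template, frame, size):
    """Find the top-left corner in `frame` whose size x size window best
    matches `template` (lowest SSD), i.e., the tracked feature position."""
    h, w = len(frame), len(frame[0])
    best, best_pos = None, None
    for r in range(h - size + 1):
        for c in range(w - size + 1):
            window = [row[c:c + size] for row in frame[r:r + size]]
            score = ssd(template, window)
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos
```

Real trackers avoid this exhaustive search (KLT, for instance, uses image gradients to solve for the displacement directly), but the matching criterion is the same in spirit.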
Fig. 2. Conceptual diagram of vSLAM.

To generate the navigation solution and estimate errors, vSLAM requires the three-dimensional (3-D) positions of the feature points. The vision sensor measurement provides only two-dimensional (2-D) information on bearing and elevation, making it impossible to identify the 3-D position of a feature point from a single frame. Therefore, delayed initialization or other methods are applied to calculate the initial 3-D position of a feature point [13], which is then combined with the navigation solution and other feature points to refine the position [1,11]. Combined with DR and other methods, this vSLAM technique has been widely applied to indoor robots where GPS is not available; the technique is also useful in INS-based 3-D navigation. The initial positions of the feature points, however, are closely correlated with the vehicle's position, so any initial errors in the feature points may accumulate in the estimation and thereby degrade the navigation precision. Fig. 2 represents a conceptual diagram of vSLAM [2,3,14]. Owing to errors in the initial positions of feature points and the recursive calculation of the feature point positions, the navigation solution of vSLAM accumulates errors over time, but such errors are relatively smaller than those of low-grade INS-only navigation.

2.2. INS/vSLAM integrated model
A navigation filter that integrates different types of sensors can be configured as a direct filter or an indirect filter, depending on the complexity of the system and the types of sensors [12]. We adopt the indirect method, which is more efficient for integrating INS/vSLAM and other sensors that have different operational cycles. Fig. 3 shows the configuration of the INS/vSLAM filter. The vision sensor provides the bearing and elevation between the vehicle and the feature points, while the INS provides the computed position, velocity and attitude; the indirect filter estimates the navigation errors using the bearing and elevation.
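One simple way to picture why initialization must be delayed is that a single bearing fixes only a ray; intersecting two bearing rays taken from different vehicle positions pins down the feature. The helper below is an illustrative 2-D sketch of that idea, not the constrained initialization method of [13]; the function name and interface are hypothetical.

```python
import math

def triangulate_2d(p1, psi1, p2, psi2):
    """Intersect two bearing rays observed from 2-D positions p1 and p2.
    psi is the bearing angle measured from the x-axis. Returns the feature
    position, assuming the two rays are not parallel."""
    d1 = (math.cos(psi1), math.sin(psi1))
    d2 = (math.cos(psi2), math.sin(psi2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 via Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])
```

With noisy bearings and a short baseline the intersection is poorly conditioned, which is exactly why the initial feature covariance mentioned in the text matters.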
The three-dimensional positions of the feature points are estimated using the delayed initialization method, and information from the INS and the vision sensor is utilized to provide the initial estimates and covariance values [2,14].

Fig. 3. Configuration of INS/vSLAM.

2.2.1 Configuration of system model
The system model combines the vehicle's dynamics model with the feature point model. The state vectors of the vehicle-related model represent the position, velocity and attitude errors; those of the feature point-related model are the feature point positions.

x = [Vehicle-related state, Feature point-related state]^T
  = [Position error, Velocity error, Attitude error, Feature point position]^T    (1)

A linearized INS error model is used as the dynamic model for the vehicle, while each feature point is assumed to be fixed and its position is therefore modeled as a random constant. Equation (2) describes the system model used in INS/vSLAM. The IMU is assumed to be a low-grade IMU, and gravity compensation and the Earth's rotation are ignored [15]:

ẋ = Fx + Gw,

        | 0_3x3  I_3x3  0_3x3     0_3x3 |
    F = | 0_3x3  0_3x3  [f_b x]   0_3x3 |
        | 0_3x3  0_3x3  0_3x3     0_3x3 |
        | 0_3x3  0_3x3  0_3x3     0_3x3 |,

        | 0_3x3   0_3x3  |
    G = | C_b^n   0_3x3  |,        | w_fb   |
        | 0_3x3  -C_b^n  |    w =  | w_ib^b |,
        | 0_3x3   0_3x3  |

where x = [δx_v^T  δx_f1^T  ...  x_fi^T]^T,  δx_v = [δp_v^T  δv_v^T  δψ_v^T]^T,    (2)

δp_v: vehicle position error, δv_v: vehicle velocity error, δψ_v: vehicle platform tilting error, x_fi: i-th feature point position, f_b: specific force in the body axis, w_ib^b: angular velocity in the body axis, C_b^n: rotation matrix from the body to the navigation frame.

2.2.2 Configuration of observation model
The bearing and elevation of the feature points are used to configure the SLAM system; these values are determined by the attitude of the vehicle and the relative position between the vehicle and the feature points. The observation model is a nonlinear function [2,5]:

z(k) = h(x(k), v(k)),    (3)

where z, x and v represent the measurement, the state vector and the measurement noise. In (3), the bearing and elevation noises can be assumed to be simply additive, so that the equation can be rewritten as:

z(k) = h(x_v(k), x_fi(k)) + v(k),    (4)

where x_v and x_fi stand for the vehicle-related state and the position of the i-th feature point, respectively. The measurement noise, as described in (5), can be assumed to be normally distributed with covariance R:

v ~ N(0, R).    (5)

The initialized positions of the feature points are expressed in the navigation frame, but the obtained images are expressed in the sensor frame. Fig. 4 illustrates the relationship of the frames used to represent the position of a measured feature point. In Fig. 4, p_v^n is a vector that represents the vehicle's position in the navigation frame, while p^s is a vector that gives the vision sensor's position in the body frame. Using the transformation matrices between the individual frames, the position of a feature point can be expressed in the sensor frame, in which the image information is obtained, as follows:

x_fi^s = C_b^s ( C_n^b (x_fi^n − p_v^n) − p^s ),    (6)

where x_fi^s is the position of the i-th feature point in the sensor frame.

Fig. 4. Coordinate system for vSLAM.
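The frame transformation in (6) can be transcribed almost literally. The sketch below uses plain nested lists for the rotation matrices; the function and argument names are illustrative, not the authors' implementation.

```python
def mat_vec(M, v):
    # Multiply a 3x3 matrix (list of rows) by a 3-vector.
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def feature_in_sensor_frame(x_f_n, p_v_n, C_n_b, C_b_s, p_s):
    """Equation (6): x_fi^s = C_b^s ( C_n^b (x_fi^n - p_v^n) - p^s )."""
    rel_n = [x_f_n[i] - p_v_n[i] for i in range(3)]  # feature relative to vehicle, nav frame
    rel_b = mat_vec(C_n_b, rel_n)                    # rotate into the body frame
    rel_b = [rel_b[i] - p_s[i] for i in range(3)]    # subtract the sensor lever arm
    return mat_vec(C_b_s, rel_b)                     # rotate into the sensor frame
```

The result feeds directly into the bearing/elevation measurement of (7).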
In (6), C_b^s is the rotation matrix from the body frame to the sensor frame, and C_n^b is the rotation matrix from the navigation frame to the body frame.

The relationship between the feature point position, expressed in the sensor frame, and the measurement is described in (7):

z = [ tan^{-1}( y_fi / x_fi )
      tan^{-1}( z_fi / sqrt( x_fi^2 + y_fi^2 ) ) ],    (7)

where x_fi^s = [x_fi  y_fi  z_fi]^T, z = [ψ  θ]^T, ψ: bearing, θ: elevation.

Fig. 5. Structure of centralized filter.

Fig. 6. Structure of distributed filter.
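Equation (7) can be sketched in a few lines. One practical caveat, reflected below, is that `atan2` is preferable to a plain arctangent because it keeps the bearing in the correct quadrant when x_fi is negative; this choice is ours, not stated in the text.

```python
import math

def bearing_elevation(x_fi, y_fi, z_fi):
    """Equation (7): bearing psi and elevation theta from the
    sensor-frame feature position [x_fi, y_fi, z_fi]."""
    psi = math.atan2(y_fi, x_fi)                      # bearing
    theta = math.atan2(z_fi, math.hypot(x_fi, y_fi))  # elevation
    return psi, theta
```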
3. IMPLEMENTATION OF INS/VSLAM USING DISTRIBUTED PARTICLE FILTER

This section explains the structure of the INS/vSLAM system, to which the distributed filter is applied to cope with changes in the number of observable feature points and to use additional sensor information. The particle filter is utilized to deal with the nonlinearity of the vision sensor system.

3.1. Distributed filter
A conventional centralized filter provides the optimal solution and is the simplest and easiest form of filter to implement. Since a single filter estimates all the states, the computational load grows as the number of measurements increases, and it is hard to respond to changes in the dimension of the state vector. Furthermore, a centralized filter fails to guarantee robustness against possible faults [3,14,24]. A distributed filter, on the other hand, involves a more complicated process: the measurements are fed separately to different local filters, and the estimated results of the individual local filters must be transmitted to the master filter. Because the sensor measurements are processed in multiple local filters, the amount of sensor information to be processed by each local filter is reduced. The distributed filter has a smaller computational load than the centralized filter, as its local filters are configured in a simpler way and involve state vectors of lower dimension [3,24]. In the centralized filter, faults occurring in a handful of sensors may affect the entire filter, while the distributed filter can minimize the effects of such faults by detecting and isolating them in the independent local filters [17,18]. This implies that the distributed filter provides a more efficient structure for fault isolation, since each local filter can run a certain filter
based fault management algorithm. Fig. 6 describes the structure of a conventional distributed filter. Unlike the centralized filter, the distributed filter consists of multiple local filters that independently process individual measurements. The estimation results of each local filter are combined in the master filter, and the estimation results of one local filter are entirely independent of those of the others. Even if one local filter experiences a fault, that filter can easily be isolated without affecting the other local filters. It is assumed that an independent fault detection algorithm exists [14,17,18].

3.2. Configuration of INS/vSLAM system using distributed particle filter
Unlike the Kalman filter, a particle filter can be applied to nonlinear system models with non-Gaussian noise distributions; it is capable of estimating the state even when it is hard to configure a measurement model or when the distribution is not Gaussian [8,19-21]. The vision sensor based system is a nonlinear model, as illustrated in (7), so its estimation performance can be improved by using the particle filter. One of the biggest problems in configuring an INS/vSLAM filter is that the types and number of observable feature points, as well as the combination of available sensors, change with the circumstances, thereby modifying the configuration of the filter. Any change in the feature points, in particular, alters not just the measurements but also the dimension of the state vector of the entire filter. A centralized filter can also cope with changes in the dimension of the state vector, but this is a very complicated process. We apply a distributed filter structure to deal efficiently with such changes in the state vector [14]. The structure of the distributed filter configured in this paper to realize an INS/vSLAM system is described in Fig. 7, where "Feature Point Block" indicates the local filters assigned to each of the measured feature points.
Individual local filters are generated when their respective feature points are measured; they are eliminated when those feature points are no longer usable. "Landmark Block" is a local filter block that, when a landmark (i.e., a feature point whose position is identified) is measured, utilizes its precise position to estimate the state. As in the case of the feature points, the number of these local filters changes with the number of landmarks. "GPS Block" is used when precise position information is available from the GPS system. When additional sensors are used, changes in the number of measurements can thus be handled, without any change in the filter structure, simply by adding local filters.

Fig. 7. Structure of INS/GPS/vSLAM integrated filter.

The estimation results of the individual local filters are transmitted to the master filter, which estimates the navigation solution using the particles provided by each local filter. The master filter estimates only the vehicle-related state, so it is not necessary to pass all the state vectors of the local filters to the master filter. Therefore, the feature point-related state vectors are separated from the vehicle-related ones in the "Particle Separation" phase, and only the vehicle-related state vectors are delivered to the master filter. When estimating the state vector, the master filter applies a different weight to each local filter. The weight is set based on the innovation of each local filter and is calculated as follows:

w_i = (1 / Innovation_i) / Σ_{j=1}^{N} (1 / Innovation_j),    (8)

where i is the index of the local filters and N is their total number. The final estimation results, transmitted back to the local filters through the feedback process, are combined with the state vectors indicating the positions of the existing feature points. By running the individual local filters independently, this filter structure enables easy diagnosis and isolation of possible faults.

4. SIMULATION AND EVALUATION OF INS/VSLAM SYSTEM PERFORMANCE

We performed a simulation of the distributed particle filter based INS/vSLAM system to verify its performance. Fault detection and isolation were not considered in this simulation, even though the distributed filter structure has an advantage over other structures in this respect. It was assumed that the process of extracting and tracking feature points was carried out independently by an external module. For the performance comparison, results from the centralized EKF, the centralized particle filter and the distributed particle filter are presented together.

4.1. Simulation environment
With the flight settings of an unmanned aerial vehicle (UAV) in mind, the simulation environment had feature points distributed on the surface and at heights of up to 20 m. We assumed, as illustrated in Fig. 8, that 40 feature points were distributed at random positions within the rectangle. The flight trajectory was a combination of straight, turning, accelerating and climbing sections; the designed trajectory is shown in Fig. 9. With an initial velocity of 10 m/s toward the north, the simulated vehicle flew along the designed trajectory for 57.68 seconds in total.

Fig. 8. Arrangement of feature points in flight.

Fig. 9. Simulation trajectory.

The inertial sensor used in the simulation was a MEMS-grade IMU, as assumed earlier in the system model. The installed vision sensor was assumed to be tilted 45 degrees vertically downward. The maximum number of simultaneously observable feature points was set to seven. The specifications of the inertial and vision sensors used in simulating the INS/vSLAM integrated system are given in Table 1 [2,25]. The INS/SLAM algorithm estimated the position, velocity and attitude at an update rate of 100 Hz; using the feature points, it corrected the navigation errors at a rate of 10 Hz. To verify the performance of the distributed particle filter, we compared its estimation results with those of the INS-only case, the EKF and the centralized particle filter.
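The innovation-based weighting of (8) and the weighted combination of the vehicle-related sub-states can be sketched as below. This is a minimal illustration of the fusion step; the function names are ours, the innovations are assumed to be positive scalar magnitudes, and the per-filter state estimates are assumed to have already gone through particle separation.

```python
def master_filter_weights(innovations):
    """Equation (8): local filters whose predictions match the measurements
    closely (small innovation) receive larger weights; weights sum to one."""
    inv = [1.0 / e for e in innovations]  # assumes nonzero innovation magnitudes
    total = sum(inv)
    return [v / total for v in inv]

def fuse_vehicle_state(local_states, weights):
    # Weighted combination of the vehicle-related state estimates
    # delivered by the local filters after particle separation.
    dim = len(local_states[0])
    return [sum(w * s[i] for w, s in zip(weights, local_states)) for i in range(dim)]
```

A faulty local filter would typically produce a large innovation, so this weighting naturally de-emphasizes it even before an explicit fault detection algorithm isolates it.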
The other filters were set to have the same initial values but have structural differences. Table 2 describes the specification of the computer used for the simulation.

Table 1. Specification of IMU and vision sensor.
  Sensor                          | Performance element | Value
  Inertial Measurement Unit (IMU) | Gyroscope           | Standard deviation: 1 °/s
                                  | Accelerometer       | Standard deviation: 1 m/s^2
  Vision sensor                   | Field of view       | 50°
                                  | Bearing errors      | 0.5°

Table 2. Simulation computer specification.
  CPU      | Intel Core 2 Duo 3.00 GHz (L1 cache 32 kB x 2, L2 cache 4 MB)
  RAM      | 4 GB
  OS       | Windows 7
  Software | MATLAB R2008a

4.2. Simulation results
Fig. 10 shows the position estimated in the simulation environment described above using the INS alone, where "True Track" represents the true trajectory of the vehicle and "INS Only" represents the trajectory generated when navigation is carried out using the IMU only. As the figure demonstrates, when navigation is done using the IMU alone, the navigation errors are left uncompensated, so they accumulate and diverge over time: because no absolute position is provided to compensate the INS errors throughout the simulation, the results of INS-only navigation diverge from the true trajectory.

Fig. 10. Results from INS-only navigation.

Fig. 11 shows the averaged results of 100 Monte Carlo simulations using the proposed distributed particle filter (denoted 'DPF'). The divergence of the estimated positions is mitigated significantly compared to INS-only navigation. In SLAM based navigation, however, errors in the positions of the feature points increase the overall estimation error: the navigation errors accumulate over time, and since the estimated position affected by the accumulated errors is subsequently used to estimate the feature point positions, the navigation errors tend to increase gradually. Yet these errors are significantly smaller than those of INS-only navigation.

Fig. 11. Comparison of results from 100 Monte Carlo simulations and INS-only navigation.

The number of available feature points varies while the INS/vSLAM simulation is running, because the visual environment changes and each feature point needs an initialization period before it has a 3-D position in the navigation frame. Fig. 12 shows that the number varies from zero to seven. For this reason, the INS/vSLAM system has to cope with this change, and all the filters use a corresponding algorithm in this simulation.

Fig. 12. Change of the number of feature points.

Figs. 13 and 14 show the estimated position and the position error for each filter structure, respectively. For the purpose of comparison, the centralized particle filter (CPF) is implemented with a filter dimension of 9 + 3N, where N is the number of feature points at the time instant. For a fair comparison, the two particle filter based methods are assigned the same total number of particles. Specifically, when there are 5 feature points, the total number of particles for each method (CPF and DPF) is 1000. Consequently, 1000 particles are used for the estimation of the overall 24 states in the case of the CPF, while for the DPF 5 local filters are operated independently, each estimating 12 states (vehicle position, velocity, attitude, and feature point position) with 200 particles. In Fig. 13, it is observed that the EKF has the worst estimation performance because it employs a linearized model in spite of the model nonlinearity. The other two filters, which involve the particle
filter, i.e., the CPF and DPF, have similar performance. The distributed particle filter is, in theory, a suboptimal filter, as it may involve errors in the process of separating and recombining the particles. The simulation results, however, demonstrate that it achieves performance competitive with the centralized filter, which is optimal. As the dimension of the state vector of a particle filter increases, the number of particles should be increased in proportion to the square of the state dimension to maintain the estimation performance. However, the implemented CPF is constrained to a particle number of 200 times the number of feature points, so its total number of particles is the same as that of the DPF. As a result, the filter performance of the CPF is similar to that of the distributed implementation.

Fig. 13. Estimated position by navigation filters.

Fig. 14 presents the position errors of the navigation filters. When navigation is done using the INS alone, the errors are left uncorrected, leading to a constant increase in error. In the remaining cases, the errors diverge slowly as the vSLAM algorithm is added. The EKF performs better than the INS-only case but worse than the two particle filters. The INS model is used to compute the position and attitude, and the position errors are correlated with the attitude, as shown in (2). Fig. 15 presents the estimated attitude error: the centralized particle filter has the best performance and the EKF the worst.

Fig. 14. Estimated position error.

Fig. 15. Estimated attitude error.

Table 3 compares the computation times of the navigation filters; 200 particles are used per feature point in the particle filters.

Table 3. Comparison of computation time.
  Filter           | EKF   | Distributed PF | Centralized PF
  Computation time | 7 sec | 72 sec         | 388 sec

In the distributed particle filter, only the number of local filters varies with the number of feature points; each state vector is of 12th order, as it uses nine variables related to the vehicle (position, velocity and attitude) and three variables indicating the feature point position. We utilize seven feature points and therefore run 1,400 (200*7) particles, which corresponds to 16,800 (200*7*12) computation points. The centralized particle filter uses only one filter, but its state vector is of (9+3N)th order when the number of feature points is N. The centralized particle filter also uses 1,400 (200*7) particles, but it runs 42,000 (200*7*30) computation points, resulting in a greater computational load than the distributed particle filter. As Table 3 illustrates, the centralized particle filter requires approximately five times the computation time of the distributed particle filter. The simulation results show that the distributed particle filter reduces the computational load compared to the centralized one, while the two filters have similar navigation performance. Though the EKF is the best in terms of computation time, its estimation performance is worse than that of the two particle filters, so it is not a proper filter for this system. The distributed particle filter separates the state vector and operates several local filters; considering both the navigation performance and the computational load, it is the best filter for an INS/vSLAM system that uses several feature point measurements.

5. CONCLUSIONS

In this paper, we have applied a distributed particle filter to the configuration of an INS/vSLAM system. The individual local filters are designed to change flexibly with the number of feature points. The filter also separates the state vectors of the local filters so that only the minimum state vector needs to be estimated in the master filter. Furthermore, it is easy to deal with faults by checking each local filter, and to add further sensors by adding local filters. Monte Carlo simulations were performed to compare the performance and the computational load with those of INS-only navigation, the EKF and the centralized particle filter. The results show that INS-only navigation diverges from the true trajectory over time as the navigation errors accumulate. When the distributed particle filter is used, the system demonstrates improved performance, and the distributed filter is shown to have navigation performance similar to that of the centralized filter, which provides the optimal solution. Meanwhile, the centralized filter requires around five times the computation time of the distributed filter, as it increases the dimension of the state vector. Though the EKF performs best with respect to computation time, its estimation performance is worse than that of the two particle filters, so it is not a proper filter for this system. As a consequence, the proposed filter improves the estimation performance and reduces the computation time by combining a particle filter with a distributed structure.
[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8] [9]
[10]
[11]
[12] [13]
REFERENCES H. Durrant-Whyte and T. Bailey, “Simultaneous localization and mapping-tutorial I&II,” IEEE Robotics & Automation Magazine, vol. 13, no. 2&3, June 2006. J. Kim, Autonomous Navigation for Airborne Application, Ph.D. Thesis, Sydney University, May 2004. J. Kim and S. Sukkari, “SLAM aided GPS/INS navigation in GPS denied and unknown environments,” Proc. of International Symposium on GNSS /GPS, December 2004. R. D. Braun, H. S. Wright, and D. A. Spencer, “The mars airplane: a credible science platform,” Proc. of IEEE Aerospace Conference Proceedings, pp. 396-408, 2004. M. Bryson and S. Sukkarieh, “Building a robust implementation of bearing-only inertial SLAM for a UAV,” Journal of Field Robotics, vol. 24, no. 1, pp. 113-143, 2007. J. Kim and S. Sukkarieh, “Autonomous airborne navigation in unknown terrain environments,” IEEE Trans. on Aerospace and Electronic Systems, vol. 40, no. 3, pp. 1031-1045, 2004. G. Dissannayake, P. Newman, S. Clark, and H. F. Durrant-Whyte, “A solution to the simultaneous localization and map building (SLAM) problem,” IEEE Trans. on Robotics and Automation, vol. 17, no. 3, pp. 229-241, 2001. B. Ristic, Beyond the Kalman Filter, Artech House, 2004. C. Tomasi and T. Kanade, Detection and Tracking of Point Features, Technical Report CMU-CS-91132, April 1991. D. G. Lowe, “Object recognition from local scaleinvariant features,” Proc. of the International Conference on Computer Vision 2, pp. 1150-1157, 1999. X. Wang and H. Zhang, “Good image feature for bearing-only SLAM,” IEEE/RS International Conference on Intelligent Robots and Systems, pp. 2576-2581, 2006. P. S. Maybeck, Stochastic Models, Estimation, and Control, Navtech Book, 1994. T. Bailey, “Constrained initialization for bearingonly SLAM,” Proc. of IEEE International Conference on Robotics and Automation, Taipei, Taiwan, September 2003.
[14] S. Chun, Performance Improvement of INS/GPS Using Multiple Vision Sensors, Ph.D. Thesis, Konkuk University, December 2007.
[15] D. H. Titterton, Strapdown Inertial Navigation Technology, Peter Peregrinus Ltd., 1997.
[16] A. S. Bashi, “Distributed implementation of particle filters,” Proc. of the 6th International Conference on Information Fusion, vol. 2, pp. 1164-1171, 2003.
[17] S. A. Broatch, “An integrated navigation system manager using federated Kalman filtering,” Proc. of Aerospace and Electronics Conference, vol. 1, pp. 422-426, May 1991.
[18] N. A. Carlson, “Federated filter for fault-tolerant integrated navigation systems,” Proc. of IEEE Position Location and Navigation Symposium, vol. 4, pp. 110-119, Dec. 1988.
[19] P. M. Djuric, “Particle filtering,” IEEE Signal Processing Magazine, vol. 20, no. 5, pp. 19-38, 2003.
[20] R. van der Merwe, The Unscented Particle Filter, Technical Report CUED/F-INFENG/TR 380, Cambridge University Engineering Department, 2000.
[21] A. Doucet, Sequential Monte Carlo Methods in Practice, Springer, 2001.
[22] R. G. Brown, Introduction to Random Signals and Applied Kalman Filtering, 3rd Ed., John Wiley & Sons, 1997.
[23] Y. Gao, “Comparison and analysis of centralized, decentralized, and federated filters,” Navigation: Journal of the Institute of Navigation, vol. 40, no. 1, pp. 69-85, 1993.
[24] A. G. O. Mutambara, Decentralized Estimation and Control for Multisensor Systems, CRC Press, 1998.
[25] H. P. Henderson, Jr. and D. M. Bevly, “Relative position of UGVs in constrained environments using low cost IMU and GPS augmented with ultrasonic sensors,” Proc. of IEEE/ION Position, Location and Navigation Symposium, pp. 1269-1277, May 2008.

Dae Hee Won received his B.S. and M.S. degrees in Aerospace Information Engineering from Konkuk University, Korea, in 2006 and 2008, respectively. He is currently a Ph.D. candidate in the Department of Aerospace Information Engineering at Konkuk University. His research interests include GPS/INS/Vision system integration, GPS RTK, and nonlinear estimation.

Sebum Chun received his M.S. degree in Aerospace Engineering from Konkuk University, Korea, in 2002, and his Ph.D. degree in Aerospace Engineering from Konkuk University in 2008. His research interests include GPS/INS/Vision system integration, inertial navigation systems, and nonlinear estimation.
Sangkyung Sung received his B.S., M.S., and Ph.D. degrees in Electrical Engineering from Seoul National University, Seoul, Korea, in 1996, 1998, and 2003, respectively. From March 1996 to February 2003, he worked for the Automatic Control Research Center at Seoul National University. Currently, he is an Assistant Professor in the Department of Aerospace Information Engineering, Konkuk University, Korea. His research interests are avionic system hardware, inertial sensors, integrated navigation theory, and applications to unmanned systems.
Jungmin Joo received his B.S. degree in Electrical Engineering from Busan National University in 2002 and his M.S. degree in Electrical Engineering from the Korea Advanced Institute of Science and Technology in 2004. Currently, he is a senior researcher in the Department of Satellite Navigation at Korea Aerospace Research Institute and a Ph.D. candidate in the Department of Aerospace Engineering at KAIST. His research interests include GPS integrity monitoring, terrain referenced navigation, and aircraft and spacecraft navigation.
Young Jae Lee is a Professor in the Department of Aerospace Information Engineering at Konkuk University, Korea. He received his Ph.D. degree in Aerospace Engineering from the University of Texas at Austin in 1990. His research interests include integrity monitoring of GNSS signal, GBAS, RTK, attitude determination, orbit determination, and GNSS related engineering problems.
Jungkeun Park is currently an Assistant Professor in the Department of Aerospace Information Engineering at Konkuk University. He received his B.S. and M.S. degrees in Electrical Engineering from Seoul National University, Korea, in 1997 and 1999, respectively, and his Ph.D. in Electrical Engineering and Computer Science from Seoul National University in 2004. His current research interests include embedded real-time systems design, real-time operating systems, and UAV systems.
Jeongho Cho received his Ph.D. degree in Electrical and Computer Engineering from the University of Florida, Gainesville, FL, in 2004. He joined Samsung Electronics in 2006, after a one-year appointment as a postdoctoral research associate at the University of Florida. Since 2008, he has been a senior researcher at KARI, where he is investigating FDE approaches with applications to satellite navigation.