Evolutionary Particle Filter: Re-sampling from the Genetic Algorithm Perspective∗
N. M. Kwok1, Gu Fang2 and Weizhen Zhou1
1 ARC Centre of Excellence for Autonomous Systems, University of Technology, Sydney, Broadway, NSW 2007, Australia; {ngai.kwok,wzhou}@eng.uts.edu.au
2 School of Engineering and Industrial Design, University of Western Sydney, Penrith, NSW 2747, Australia
[email protected]
Abstract— The sample impoverishment problem in particle filters is investigated from the perspective of genetic algorithms. The contribution of this paper is the proposal of a hybrid technique to mitigate sample impoverishment so that the number of particles required, and hence the computational complexity, is reduced. The number of particles required is studied through the Chebyshev inequality. The relationship between the number of particles and the time to impoverishment is examined by considering the takeover phenomenon found in genetic algorithms. It is revealed that the sample impoverishment problem is caused by the re-sampling scheme used when implementing the particle filter with a finite number of particles; the use of uniform or roulette-wheel sampling also contributes to the problem. Crossover operators from genetic algorithms are adopted to tackle the finite-particle problem by re-defining or re-supplying impoverished particles during filter iterations. The effectiveness of the proposed approach is demonstrated by simulations of a monobot simultaneous localization and mapping application.
Index Terms— particle filter, re-sampling, genetic algorithms, selection.
I. INTRODUCTION

Particle filters [1] have been widely applied to estimation problems with non-linear system models and non-Gaussian noise. The particle filter (PF) is, in principle, a sample-based implementation of Bayesian estimation [2]. Applications of PFs include mobile robot localization and mapping [3], fault diagnosis in nonlinear stochastic systems [4], user detection in wireless telecommunications [5], speaker tracking in auditory applications [6] and many other areas. Although the application of PFs to non-linear/non-Gaussian systems has demonstrated satisfactory results, the implementation complexity is prohibitively high for systems with limited computing resources, especially for dynamical and high-dimensional systems. This difficulty arises because of the so-called sample impoverishment, i.e., the loss of the diversity the particles need to adequately represent the solution space.
∗ This work is supported by the ARC Centre of Excellence programme, funded by the Australian Research Council (ARC) and the New South Wales State Government.
An obvious solution to this problem is to use a large number of particles from the beginning of the filtering process; however, this increases the computational complexity. In order to reduce the impoverishment effect, or the number of particles required, several approaches have been proposed in the literature. In [7], it is suggested to include, as implementation measures, sample boosting, smoothing and prior editing. In sample boosting, the number of particles is increased by an order of magnitude during an intermediate calculation stage and then re-sampled to restore the original size; this, however, increases the computational complexity. In the smoothing technique, particles are perturbed as the virtual result of sampling from a continuous approximation of the discrete states represented by the particles. However, there may be difficulties in designing the continuous approximation when the system is highly non-linear or cannot be accurately modelled. In prior editing, the number of particles is increased in regions of high likelihood. This approach improves on sample boosting by concentrating on the promising solution regions, but the lack of knowledge of the locations and number of high-likelihood regions may hinder its success. Since sample impoverishment is mostly contributed by re-sampling, a test of the effective particle number can be performed before re-sampling is carried out, see [8]. This method partially avoids the loss of particle diversity, but there is still no recovery of particles once they have become impoverished.
It has been noted on many occasions in the literature that particle filters bear implementation characteristics similar to those of genetic algorithms (GA), [9] and [10]. In [11], the application of sampling algorithms is treated as the survival of the fittest, inspired by the theory of evolution from which the GA is developed. Furthermore, in [12], the connection between the PF and the GA was established from the Monte Carlo simulation viewpoint. On the other
hand, the incorporation of a Bayesian framework into evolutionary computation was proposed in [13], with performance improvements in function optimization. More recent work on the hybridization of the PF and the GA can be found in [14] and [15], where function optimization problems are addressed. These works indicate that hybridized GA/PF methods are an attractive research direction combining estimation and optimization. Within the mobile robotics research area, an application of the GA is found in [16], which adopted the GA to enhance the estimates from an extended Kalman filter (EKF), but the implementation of the GA was not specifically addressed. In [17], a GA with a simple fitness function design was applied to mobile robot localization and mapping, but insights into the algorithm were not reported either.
In PF implementations, uniform re-sampling or selection is frequently employed. This scheme unavoidably introduces the sample impoverishment problem, and an analysis is available in [18]. The scheme also has an unbounded error spread, as proved in [19], where stochastic universal sampling (SUS) was proposed to bound the sampling error. Apart from uniform re-sampling, alternative selection methods are available in the GA literature, including the tournament and truncation selection schemes [20]. It is also noted that the complexity of a PF depends critically on the number of particles required; this observation was considered in [21], where the choice of the number of particles was guided by the Chebyshev inequality. Another major problem in PF implementations is that particles are not supplied in the high-probability regions as needed. Although it may not be known a priori where such regions lie in the solution space, the GA approach re-supplies or re-defines particles via the crossover and mutation operators [22]. These techniques suggest an attractive hybridization combining the advantages of the PF and the GA.
In this paper, the operation of a particle filter is re-studied from the GA perspective. The major contributions of this paper are in characterizing the sample impoverishment problem and proposing an alternative re-sampling scheme. The rest of the paper is organized as follows. In Section II, the implementations of the particle filter and the genetic algorithm are briefly reviewed. The combination of the two techniques for mobile robot localization is developed in Section III. Simulation results are presented and discussed in Section IV. A conclusion is drawn in Section V.
II. PARTICLE FILTER AND GENETIC ALGORITHM

A. System Description

Assume a mobile robot deployed in its operation area. The robot moves along a straight line with two landmarks being observed1. In state space description, the robot transition is given by the process model,

x_{v,k+1} = x_{v,k} + v_k \Delta T + \eta_{v,k},   (1)

where x_v is the robot state, k is the time index, v is the velocity control, \Delta T is the discrete time interval and \eta_v is the process noise, assumed to be a stationary sequence with \eta_v \sim N(0, Q). The complete system state is

x_k = [x_{v,k}, x_{m1,k}, x_{m2,k}]^T.   (2)

Note that the landmarks are assumed stationary, so they are not included in the process model for presentation simplicity. While the robot moves, it observes or measures the distance (range) from the landmarks. The measurement model is

z_{i,k} = x_{mi,k} - x_{v,k} + \eta_{z,k},   (3)

where z_i is the range measurement to the i-th landmark x_{mi} and \eta_z is the measurement noise, assumed to be \eta_z \sim N(0, R).
1 This is a simple one-dimensional problem and the robot is termed a monobot.

B. Particle Filter

The particle filter is developed on the basis of Bayes' rule, which states that the posterior is proportional to the product of the likelihood and the prior, given as

p(x_k | z_{1:k}) \propto \frac{p(z_k | x_k)\, p(x_k | x_{k-1})\, p(x_{k-1} | z_{1:k-1})}{p(z_k | z_{1:k-1})},   (4)

where p(\cdot) is a probability density function (pdf), x_k is the current state to be estimated and z_{1:k} is the measurement up to time index k. The operation of the particle filter is as follows, see [1] and [2].

1) Initialize:
• Generate x_v^i = 0, for i = 1, ..., N, representing the initial location of the robot (assumed at the origin of the coordinate frame), where N is the number of particles.
• Generate uniformly distributed random numbers x_{m1,2}^i in [0, x_{max}] (a 2 × N matrix) representing the initially unknown locations of the landmarks, where x_{max} is the maximum operating space.

2) Measure:
• Make range measurements to the landmarks, giving z_{1,2}, which are corrupted by noise.
• Calculate the importance weights,

w_j^i = \exp(-0.5\, \nu_j^{iT} R^{-1} \nu_j^i),  j = 1, 2;   (5)

where \nu_j^i = z_j - (x_{mj}^i - x_v^i) is the innovation and the superscript T stands for transpose.
• Calculate the normalized overall importance weight2,

\tilde{w}^i = \prod_{j=1}^{2} w_j^i,   \bar{w}^i = \frac{\tilde{w}^i}{\sum_{i=1}^{N} \tilde{w}^i}.   (6)

2 Note that the product and summation are performed component-wise for the weights and there are 2 landmarks assumed.

3) Update:
• Perform re-sampling to form new particles \tilde{x}^i, such that the probability of selection is proportional to the weights \bar{w}^i.
• Calculate the estimate and the uncertainty covariance,

\hat{x} = \sum_{i=1}^{N} \bar{w}^i \tilde{x}^i,   P = \sum_{i=1}^{N} \bar{w}^i (\tilde{x}^i - \hat{x})(\tilde{x}^i - \hat{x})^T.   (7)
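As a concrete reference, the steps above can be condensed into a short bootstrap-filter sketch for the monobot model of eqs. (1)-(7). This is a minimal illustration only: the function and variable names, the noise values and the simple multinomial re-sampler are assumptions made here, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, x_max, dt = 500, 5.0, 1.0            # particles, operating range (m), time step (s)
Q, R = 0.03**2, 0.1**2                  # process and measurement noise variances

x_v = np.zeros(N)                       # robot particles, initialized at the origin
x_m = rng.uniform(0.0, x_max, (2, N))   # landmark particles, initially unknown locations

def pf_step(x_v, x_m, v, z):
    """One bootstrap PF iteration: predict with eq. (1), weight with eqs. (5)-(6),
    re-sample, and estimate as in eq. (7)."""
    x_v = x_v + v * dt + rng.normal(0.0, np.sqrt(Q), N)        # process model, eq. (1)
    nu = z[:, None] - (x_m - x_v)                              # innovations from eq. (3)
    w = np.exp(-0.5 * nu**2 / R).prod(axis=0)                  # per-landmark weights, eqs. (5)-(6)
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                           # simple multinomial re-sampling
    x_v, x_m = x_v[idx], x_m[:, idx]
    x_hat = np.concatenate(([x_v.mean()], x_m.mean(axis=1)))   # state estimate, cf. eq. (7)
    return x_v, x_m, x_hat
```

A call such as pf_step(x_v, x_m, 0.2, np.array([2.8, 3.9])) advances the filter by one time step; the multinomial re-sampler used here is exactly the kind of scheme whose impoverishment behaviour is analysed in Section III.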
C. Genetic Algorithm

The genetic algorithm, as a stochastic search algorithm, is widely treated as a function optimizer and was developed with inspiration from Darwin's theory of evolution. The GA simulates the evolution of individuals competing for survival. Fitter3 individuals cross-breed and produce better offspring, thereby promoting the fitness of the whole population. Moreover, mutation also occurs during the production of offspring. In computer implementations, the GA is governed by the Schema Theorem4, originally derived from a binary string representation of the genes of a chromosome within an individual5. The Schema Theorem can be expressed as (see [10] for details)

m(H, t+1) \geq m(H, t)\, \frac{f(H)}{\bar{f}} \left(1 - p_c \frac{\delta(H)}{L-1}\right) (1 - p_m)^{o(H)},   (8)

where H denotes a schema, m(H, t) is the number of instances of the schema at generation t, f(H) is the average fitness of chromosomes containing the schema, \bar{f} is the average fitness of the whole population, p_c is the crossover probability, \delta(H) is the defining length of the schema, L is the chromosome length, p_m is the mutation probability and o(H) is the order of the schema. The Schema Theorem states that schemata whose instances have above-average fitness receive an increasing number of copies over generations (iterations), so that the population finally converges towards the optimal solution. The implementation procedure of the GA is described in the following.

1) Initialize:
• Generate random numbers (chromosomes) describing the solution; their number corresponds to the size of the population and they are bounded within the solution space.

2) Iteration:
• Calculate the fitness of the chromosomes based on the measurements made (equivalent to eq. 6 in the PF implementation).
• Select chromosomes into an intermediate population according to their fitness.
• Perform crossover and mutation to mix/perturb the intermediate population.

3) Termination:
• If some termination condition is met, calculate the estimate (e.g., eq. 7); otherwise, repeat the iterations.

3 Fitness can be viewed as the closeness of a candidate solution to the optimum.
4 A schema is a defining characteristic of the encoding that contributes to the optimal solution.
5 A chromosome is an encoding of the solution to an optimization problem and is equivalent to a particle in the particle filter; the two terms are used interchangeably in the rest of the paper.
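The GA procedure listed above can be sketched compactly for a real-coded population. The operator details below (fitness-proportionate selection, arithmetic crossover, Gaussian mutation) and all names are illustrative assumptions; the evolutionary particle filter of Section III replaces the crossover with the repulsion-style operator of eqs. (16)-(17).

```python
import numpy as np

rng = np.random.default_rng(1)

def ga_generation(pop, fitness_fn, p_c=0.8, p_m=0.05, sigma_m=0.05):
    """One GA generation for a real-coded population `pop` of shape (N, d):
    evaluate fitness, select an intermediate population, then crossover/mutate."""
    n, d = pop.shape
    f = fitness_fn(pop)                                # non-negative fitness values
    f = f / f.sum()                                    # normalize (cf. eq. 6)
    inter = pop[rng.choice(n, size=n, p=f)]            # fitness-proportionate selection
    rng.shuffle(inter)
    for i in range(0, n - 1, 2):                       # pairwise arithmetic crossover
        if rng.random() < p_c:
            a = rng.random()
            c1, c2 = inter[i].copy(), inter[i + 1].copy()
            inter[i], inter[i + 1] = a * c1 + (1 - a) * c2, a * c2 + (1 - a) * c1
    mask = rng.random((n, d)) < p_m                    # Gaussian mutation
    inter[mask] += rng.normal(0.0, sigma_m, mask.sum())
    return inter
```

Placed side by side with the PF sketch above, the correspondence is evident: weights play the role of fitness, re-sampling plays the role of selection, and the PF simply lacks the crossover/mutation stage that re-introduces diversity.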
III. EVOLUTIONARY PARTICLE FILTER

Based on the similarities and differences between the PF and the GA, an evolutionary particle filter (EPF) is proposed. The algorithm combines the complementary advantages of the PF and the GA; in particular, it addresses the sample impoverishment problem found in the PF. The number of particles required, the cause of sample impoverishment and the time to impoverishment are investigated in the sequel from the perspective of the GA selection process.

A. Number of Particles Required

Consider, in the one-dimensional case, a particle x which is the one left after impoverishment. It was initialized by drawing a sample from a distribution with mean \mu and variance \sigma^2. The probability of the final estimation error is bounded by the Chebyshev inequality,

P(|x - \mu| \geq \epsilon) \leq \frac{\sigma^2}{\epsilon^2}.   (9)

When the particles are initialized to cover a certain range of the solution space, the chance that a particle falls in the vicinity of the true solution is increased by generating more samples. Now consider N particles initialized independently and identically distributed, and let \bar{x}_N denote their sample mean. The Chebyshev inequality then gives

P(|\bar{x}_N - \mu| \geq \epsilon) \leq \frac{\sigma^2}{N \epsilon^2}.   (10)

Hence, for a specified error \epsilon, the error probability is inversely proportional to the number of particles used. However, there is always a limitation on the computational resources, and the use of a small number of particles is very desirable.
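As a quick numeric illustration of eq. (10) (the numbers here are chosen for illustration only), suppose the particles are initialized with variance \sigma^2 = 0.2 and an error of \epsilon = 0.1 m is to be exceeded with probability at most 0.05. The bound then requires

N \geq \frac{\sigma^2}{0.05\,\epsilon^2} = \frac{0.2}{0.05 \times 0.01} = 400

particles, showing how quickly the required N grows as the tolerated error shrinks.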
B. Impoverishment from Re-sampling

The sample impoverishment phenomenon may be studied via the gambler's ruin problem. Consider two particles as gamblers A and B. When they were initialized, capitals c_A and c_B, respectively, were assigned according to their closeness to the true solution. In most particle filter re-sampling processes, a pointer is generated from a uniform distribution, emulating a spin of a roulette wheel. However, if a small number of particles is used, a true uniform distribution cannot be guaranteed in practice. In the gambler's view, this becomes an unfair game with p \neq q, where p is the winning probability for gambler A and q is the winning probability for B. Consider a particle being duplicated for gambler A and a particle being removed from B, corresponding to the winning and losing outcomes. The probability that, say,
gambler A eventually wins (sample impoverished) can be derived from the theory of random walks as

P_A = \frac{(q/p)^{c_A} - 1}{(q/p)^{c_A + c_B} - 1},   (11)
where c_A is the initial capital of gambler A and a similar expression applies to gambler B. It is evident that as long as the game is unfair, say in favour of gambler A, then A will ultimately win all the wealth of gambler B. For instance, with a modest bias of p = 0.55 against q = 0.45 and equal initial capitals c_A = c_B = 5, eq. (11) gives P_A \approx 0.73, so even a slight selection bias makes takeover by one particle much more likely than not. Since an exact uniform distribution for the selection pointer is not available in practice, this stochastic effect accelerates the impoverishment process. In the genetic algorithm literature [19], linearly spaced pointers are generated in the selection process called stochastic universal sampling (SUS). These pointers satisfy a uniform distribution and guarantee the same interval between pointers; hence, the selection bias and the adverse effect of impoverishment are reduced. The set of pointers is generated by a single roulette wheel spin as

P_t = \frac{1}{N}\big((1, 2, \ldots, N) - r\big),   (12)

where r \in [0, 1] is a random number.
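A sketch of SUS-based re-sampling following eq. (12) is given below; the function name and the NumPy-based implementation are assumptions made here for illustration.

```python
import numpy as np

def sus_resample(weights, rng=None):
    """Stochastic universal sampling: N equally spaced pointers generated from a
    single random offset r, as in eq. (12); returns indices of selected particles."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                       # normalize to a probability vector
    cum = np.cumsum(w)                                    # cumulative weights (the roulette wheel)
    n = len(w)
    pointers = (np.arange(1, n + 1) - rng.random()) / n   # P_t = ((1, ..., N) - r) / N
    return np.searchsorted(cum, pointers)                 # particle hit by each pointer
```

Because the pointers are equally spaced, every particle with weight above 1/N is guaranteed at least one copy, which is the bounded-error property argued in [19].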
C. Impoverishment Time

In the PF re-sampling procedure, particles are copied or removed according to their weights \bar{w}^i; the change in the number of copies is proportional to multiples of 1/N. If the weights are sorted and cumulatively summed, they can be approximated by a power law of the form u^c, where u \in [0, 1] is the normalized index resulting from the sorting and c is the power constant (see details in [18]). For example, Fig. 1(a) shows the normalized weights of particles in the space range ±2.5 m; the weights are generated from a Gaussian distribution with \mu = 0 and \sigma^2 = 0.2. The corresponding cumulative sum against the normalized index is depicted in Fig. 1(b), together with the power law approximation (here, c = 6.5 is determined experimentally).
Fig. 1. Power law approximation: (a) distribution of particles (weight vs. range in m); (b) cumulative sum of weights vs. ranking index.
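The paper does not detail how c = 6.5 was obtained; one plausible way to estimate such an exponent from a set of weights is a log-log least-squares fit of the sorted, normalized cumulative weights against the normalized ranking index, sketched below (the function name and the fitting choice are assumptions).

```python
import numpy as np

def fit_power_law_exponent(weights):
    """Estimate c in the approximation cum(u) ~ u**c, where cum is the sorted,
    normalized cumulative sum of the weights and u the normalized ranking index."""
    w = np.sort(np.asarray(weights, dtype=float))
    cum = np.cumsum(w) / w.sum()                    # normalized cumulative weights
    u = np.arange(1, len(w) + 1) / len(w)           # normalized ranking index in (0, 1]
    mask = cum > 0                                  # guard against zero weights
    c, _ = np.polyfit(np.log(u[mask]), np.log(cum[mask]), 1)  # slope of the log-log fit
    return c
```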
By considering a general range between u and u - 1/N, the proportion of particles at iteration k is

P_{u,k} = u^{ck+1} - (u - 1/N)^{ck+1}.   (13)

This expression indicates the growth of particles within the range around u. Sample impoverishment occurs when the particle with the highest weight dominates, i.e., u = 1. The trace of its growth becomes

P_{1,k} = 1 - \left(\frac{N-1}{N}\right)^{ck+1}.   (14)

Setting this proportion to (N - 1)/N, which represents the highest ranked weight, and after some manipulation, the impoverishment time k^* is approximately

k^* \approx c^{-1}(N \ln N - 1).   (15)
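For a rough sense of scale (taking the experimentally fitted c = 6.5 from Fig. 1 purely as an illustrative value for the selection pressure), eq. (15) gives, for N = 500 particles,

k^* \approx \frac{500 \ln 500 - 1}{6.5} \approx 478

iterations, i.e., impoverishment is delayed by a larger population but never avoided.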
For re-sampling or selection to be effective, the power law constant must satisfy c > 1, and a larger c imposes stronger selectivity. The above equation shows that in this case the impoverishment time k^* is finite and is extended by the number of particles in proportion to N ln N. Hence, impoverishment is inevitable when the re-sampling process is adopted in implementing a particle filter.

D. Proposed Approach

In order to mitigate the sample impoverishment problem when implementing a particle filter with re-sampling, an evolutionary approach is proposed in which re-sampling is conducted implicitly, thus avoiding the impoverishment. The algorithm hybridizes the particle filter and genetic algorithm procedures while combining the advantages of each. In the monobot scenario, the system state initially contains the robot location, and the landmark states are augmented when they are first observed to form the overall system state. The system state is partitioned such that floating point numbers6 are used to represent each individual state, and there are N copies, or chromosomes, forming a population. The states are modified by the genetic crossover operator depending on the range measurements made between the robot and the landmarks during each time step. The algorithm can be described as follows.

1) System initialization at time k = 0:
• Generate N chromosomes to represent the robot state, all set to zero, representing the origin of the coordinate frame.

2) Measurements:
• Make range measurements from the robot to the landmarks, giving z_i for each landmark.
• If the landmarks are seen for the first time, generate sets of chromosomes for each landmark7.
• Otherwise, calculate and normalize the fitness of each chromosome such that they sum to unity.

6 Floating point numbers are used in this work to gain a better resolution of the estimation.
7 The chromosomes are locations around the first range measurement with some arbitrary distribution, e.g., a uniform distribution.
3) Update at k > 0:
• Select chromosomes using stochastic universal sampling.
• Compute the estimates of the robot and landmark states.
• Re-normalize the fitness to f \in [0, 1].
• Loop through N times:
  - Randomly pick two chromosomes c_1 and c_2 with fitness f_1 and f_2.
  - If f_1 < \gamma, then set

    c_1 \leftarrow c_1 + r\,\Delta c,  \Delta c = c_1 - c_2,   (16)

  - If f_2 < \gamma, then set

    c_2 \leftarrow c_2 + r\,\Delta c,  \Delta c = c_2 - c_1,   (17)

  where \gamma \in [0, 1] is a fitness threshold (e.g., \gamma = 0.05), r \in [0, 1] is a random number and \Delta c is the distance between the chromosomes.
• Repeat from the measurement step until user-specified termination of the filtering process.
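A compact sketch of this update step is given below. The selection, re-normalization and estimation details are assumptions filled in for illustration; only the repulsion moves follow eqs. (16)-(17) directly.

```python
import numpy as np

def epf_update(chromosomes, fitness, gamma=0.05, rng=None):
    """One EPF update (step 3): SUS selection (eq. 12) followed by the
    repulsion-style crossover of eqs. (16)-(17).
    `chromosomes` is an (N, d) array of candidate states, `fitness` an (N,) array."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(chromosomes)
    w = fitness / fitness.sum()
    estimate = w @ chromosomes                          # weighted state estimate
    pointers = (np.arange(1, n + 1) - rng.random()) / n # SUS pointers, eq. (12)
    idx = np.searchsorted(np.cumsum(w), pointers)
    pop = chromosomes[idx].copy()
    f = w[idx] / w[idx].max()                           # re-normalize fitness to [0, 1]
    for _ in range(n):                                  # loop through N times
        i, j = rng.integers(0, n, size=2)
        r = rng.random()
        if f[i] < gamma:
            pop[i] = pop[i] + r * (pop[i] - pop[j])     # eq. (16)
        if f[j] < gamma:
            pop[j] = pop[j] + r * (pop[j] - pop[i])     # eq. (17)
    return pop, estimate
```

Low-fitness members are thus pushed away from their randomly chosen partners rather than being discarded, which is what preserves diversity in place of explicit re-sampling.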
E. Justifications

In the proposed approach, the initialization and measurement stages follow those of standard PF or GA implementations. In the update stage, chromosomes are selected according to their fitness. The resulting chromosomes need to be separated to prevent impoverishment. A pair of chromosomes is manipulated when the fitness of one chromosome in the pair is lower than some threshold \gamma. The distance between the two chromosomes is calculated, and the adjusted chromosome is then repelled from the one of higher fitness. Moreover, the adjustment is moderated by the distance and the weighting given by r. This technique may be viewed as re-supplying chromosomes, or particles, to locations in the solution space not yet explored, while preventing sample impoverishment.

Fig. 2. Particle distributions vs. weights (top: robot, bottom: landmarks): (a) PF re-sampling approach; (b) proposed approach.

IV. SIMULATION

Simulations were conducted for a monobot initially located at the coordinate origin that then moves repeatedly from left to right and back within a 5 m range. Two landmarks were placed at 3 m and 4 m respectively. The robot moves at 0.2 m/s; the odometer measures the velocity with an error of 0.03 m/s standard deviation, the range measurements to the landmarks carry an error of 0.1 m standard deviation, and the noises are assumed Gaussian. Two cases were simulated: 1) the standard particle filter implementation with re-sampling and 2) the proposed evolutionary approach. In both cases, a small number of particles, 500, and a relatively larger number, 5000, are tested. Fig. 2(a) plots the particles corresponding to the robot and landmarks and their associated weights in case 1. Due to sample impoverishment, the particles concentrate on discrete locations. The improvement from adopting the evolutionary approach is illustrated in Fig. 2(b) with the use of 500
particles. It is clear that the particles are able to represent the pdf, as can be noted from a trace of the envelope. A more concentrated region of particles is also observable, which indicates convergence to the solution. Time traces of the spread of the particles in case 1 are plotted in Figs. 3(a) and 3(b) respectively. The top trace is for the robot location error while the lower two are for the landmarks; the corresponding 3σ error bound is also shown. It is clearly seen that when 500 particles are used, the particles collapse to a single one at about 250 time steps, and the location estimates become unreliable. The sample impoverishment is still noticeable even when 5000 particles are used, see Fig. 3(b). Results from case 2, which adopts the proposed evolutionary particle filter approach, are depicted in Figs. 4(a) and 4(b). The use of 500 particles gives acceptable results, while the use of 5000 particles clearly removes the sample impoverishment problem.

V. CONCLUSION

In this paper, the sample impoverishment problem in a particle filter is resolved by hybridizing techniques from genetic algorithms. It has been shown by analysis and simulation that the proposed method produces better estimation results than the conventional particle filter, because the proposed method maintains the diversity of particles in the re-sampling process. Further work will be done on a real robot.
Fig. 3. Simulation results from the standard PF implementation (robot and landmark estimation errors in m, with 3σ bounds, over 500 time steps): (a) 500 particles; (b) 5000 particles.

Fig. 4. Simulation results from the evolutionary particle filter approach: (a) 500 particles; (b) 5000 particles.

REFERENCES
[1] N. J. Gordon, D. J. Salmond and A. F. M. Smith, "Novel approach to nonlinear/non-Gaussian Bayesian state estimation," IEE Proc.-F, Vol. 140, No. 2, April 1993, pp. 107-113.
[2] M. S. Arulampalam, S. Maskell, N. Gordon and T. Clapp, "A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking," IEEE Trans. on Signal Processing, Vol. 50, No. 2, February 2002, pp. 174-188.
[3] N. M. Kwok and G. Dissanayake, "Bearing-only SLAM in indoor environments using a modified particle filter," Proc. Australasian Conf. on Robotics and Automation, December 2003, CD-ROM.
[4] P. Li and V. Kadirkamanathan, "Particle filtering based likelihood ratio approach to fault diagnosis in nonlinear stochastic systems," IEEE Trans. on Systems, Man and Cybernetics - Part C, Vol. 31, No. 3, August 2001, pp. 337-343.
[5] E. Punskaya, C. Andrieu, A. Doucet and W. J. Fitzgerald, "Particle filtering for multi-user detection in fading CDMA channels," Proc. 11th IEEE Signal Processing Workshop on Statistical Signal Processing, August 2001, pp. 38-41.
[6] Y. Chen and Y. Rui, "Real-time speaker tracking using particle filter sensor fusion," Proc. of the IEEE, Vol. 92, No. 3, March 2004, pp. 485-494.
[7] N. J. Gordon, "Non-linear/non-Gaussian filtering and the bootstrap filter," IEE Colloquium on Non-linear Filters, May 1994, pp. 1-6.
[8] J. S. Liu and R. Chen, "Sequential Monte Carlo methods for dynamic systems," Journal of the American Statistical Association, Vol. 93, 1998, pp. 1032-1044.
[9] J. H. Holland, "Adaptation in Natural and Artificial Systems," Ann Arbor: The University of Michigan Press, 1975.
[10] D. E. Goldberg, "Genetic Algorithms in Search, Optimization and Machine Learning," Addison-Wesley Pub. Co., Massachusetts, 1989.
[11] K. Kanazawa, D. Koller and S. Russell, "Stochastic simulation algorithms for dynamic probabilistic networks," Proc. 11th Annual Conf. on Uncertainty in AI, 1995, pp. 346-351.
[12] T. Higuchi, "Monte Carlo filter using the genetic algorithm operators," Journal of Statist. Comput. Simul., Vol. 59, 1997, pp. 1-23.
[13] B. T. Zhang, "A Bayesian framework for evolutionary computation," Proc. of the 1999 Congress on Evolutionary Computation, 1999, pp. 722-728.
[14] K. Uosaki, Y. Kimura and T. Hatanaka, "Evolution strategies based particle filters for state and parameter estimation of nonlinear models," Proc. 2004 Congress on Evolutionary Computation, June 2004, pp. 884-890.
[15] M. M. Drugan and D. Thierens, "Evolutionary Markov chain Monte Carlo," in P. Liardet et al. (Eds.), EA 2003, LNCS 2936, Springer-Verlag, Berlin Heidelberg, 2004, pp. 63-76.
[16] L. Moreno et al., "A genetic algorithm for mobile robot localization using ultrasonic sensors," Journal of Intelligent and Robotic Systems, Vol. 34(2), June 2002, pp. 135-154.
[17] T. Duckett, "A genetic algorithm for simultaneous localization and mapping," Proc. 2003 IEEE Intl. Conf. on Robotics and Automation, September 2003, pp. 434-439.
[18] D. E. Goldberg and K. Deb, "A comparative analysis of selection schemes used in genetic algorithms," in Foundations of Genetic Algorithms, B. M. Spatz (Ed.), Morgan Kaufmann Publ., San Mateo, CA, 1991, pp. 69-93.
[19] J. E. Baker, "Reducing bias and inefficiency in the selection algorithm," Genetic Algorithms and Their Applications, MIT, Cambridge, MA, 1987, pp. 14-21.
[20] T. Blickle and L. Thiele, "A comparison of selection schemes used in genetic algorithms," Technical Report TIK-Report Nr. 11, Swiss Federal Institute of Technology, December 1995.
[21] Y. Boers, "On the number of samples to be drawn in particle filtering," IEE Colloquium on Target Tracking: Algorithms and Applications, November 1999, pp. 5/1-5/6.
[22] F. Herrera, M. Lozano and J. L. Verdegay, "Tackling real-coded genetic algorithms: operators and tools for behavioural analysis," Artificial Intelligence Review, Vol. 12(4), 1998, pp. 265-319.