Efficient Design of High Pass FIR Filter using Quantum-behaved Particle Swarm Optimization with Weighted Mean Best Position

Supriya Dhabal¹, Saptarshi Sengupta²
Department of Electronics and Communication Engineering, Netaji Subhash Engineering College, Kolkata, West Bengal, India
¹[email protected], ²[email protected]

Abstract—The quantum-behaved particle swarm optimization (QPSO) algorithm theoretically guarantees global convergence and has been applied to a wide suite of continuous optimization problems. In this paper, the nonlinear multimodal optimization problem of high pass FIR filter design is investigated using the weighted mean best QPSO algorithm (WQPSO). The results are compared with QPSO, keeping PSO and the Parks-McClellan (PM) method as references. WQPSO is seen to statistically outperform QPSO in both convergence characteristics and the ripple performance of the designed filter.

Keywords—QPSO; Quantum Behaviour; Swarm Intelligence; Global Optimization; FIR Filter

I. INTRODUCTION

In signal processing applications, digital filters find extensive use owing to their higher reliability, smaller physical dimensions and reduced sensitivity compared with their analog counterparts. Additionally, a software implementation enables finer control over the response, since only a finite number of register values need to be changed. Of the two major filter classes, FIR (Finite Impulse Response) filters are preferable to IIR (Infinite Impulse Response) filters because they guarantee stability of the system and allow finer adjustment of the response curve. Among the traditional design methods, the window method [1] is favoured by many researchers. In this scheme the ideal impulse response is multiplied by a suitable window function chosen from a library of such functions (Chebyshev, Kaiser, Hamming, etc.), depending on the design goals of minimum ripple in the pass-band (δp) and stop-band (δs) as well as a narrow transition width. Window functions truncate the infinite impulse response to a finite one but offer limited control over the design parameters. A more efficient basis for designing exact linear-phase responses was proposed by Parks and McClellan [2] as the Remez-exchange algorithm (1972). However, it does not allow direct selection of δp and δs, permitting only a specification of the ripple ratio δp/δs. It also generates floating point coefficients which require quantization before hardware based implementation strategies can be pursued.

978-1-4799-4445-3/15/$31.00 ©2015 IEEE

The need for evolutionary algorithms in this regard arises from several shortcomings of conventional gradient-based schemes. Gradient-based methods require cost functions that are continuous and differentiable over the search space. Moreover, in a relatively large search space the conventional algorithms are prone to reporting local minima near the point of initialization and revisiting them in subsequent stages of execution. Evolutionary algorithms are therefore better suited to reaching the global optimum. The literature contains several competing algorithms such as Simulated Annealing (SA) [3-4], the Genetic Algorithm (GA) [5] and Particle Swarm Optimization (PSO) [6-8]. Amongst these, PSO is a robust, stochastic, population-based search algorithm in which each point, termed a particle, is a potential solution to the problem. Each particle moves through the search space with a random velocity assigned to it. PSO works well with non-differentiable cost functions but has limitations in reaching the global best, as shown by Van den Bergh [9]. A large number of variants of the basic PSO algorithm have been proposed to date [10-12]. One such extension is inspired by the laws of quantum mechanics and represents a particle by a wave function ψ(x,t) instead of explicitly stating its position and velocity. This is the Quantum-behaved Particle Swarm Optimization (QPSO) [13-14], which theoretically guarantees globally optimal results.

QPSO is superior to traditional PSO for the following reasons: a) the former has many more states owing to state superposition; b) QPSO associates an uncertainty with each particle, so that it can appear anywhere in the search space, as opposed to the finite bound imposed on particles in traditional PSO; and c) QPSO employs a 'survival of the fittest' scheme through the mean best position, which more closely mimics the leader's behaviour in a swarm of birds or a school of fish. With QPSO the goals of FIR filter design are met more closely and convergence is faster [15-16]. This paper discusses high pass FIR filter design using a weighted mean best update in QPSO [17-18], which approximates filters of orders 20, 30 and 40 better in terms of ripple and attenuation at the cost of an acceptable tradeoff in transition width. Based on the set of optimal results obtained with this method, a compelling case for the superiority of the algorithm is made.

The rest of this paper is organized as follows: Section II presents the problem formulation. Section III gives a brief outline of the PSO, QPSO and WQPSO algorithms. Section IV states the parameter settings and Section V discusses the simulation results. Section VI concludes the paper.

II. PROBLEM FORMULATION

In FIR filter design the system transfer function H(z) can be written as:

H(z) = Σ_{n=0}^{N} h(n) z^{-n},  n = 0, 1, …, N    (1)

where N represents the order of the filter and h(n) is the set of filter coefficients that determine whether the system is a low pass, band pass or high pass type. There are (N+1) coefficients in an order-N filter given by the polynomial (1). Typically, in optimization problems a desired response is approximated, with varying degrees of success, by minimizing an error function of the approximated response with respect to the desired one. With the traditional window design approach the error varies greatly between regions near to and far from the discontinuities, which is undesirable in some applications. This motivated the design of equiripple filters, which minimize the error in both pass-band and stop-band through algorithmic approaches. The error function, written as the weighted difference of the ideal and approximated frequency responses over the pass-band and stop-band, is:

E(ω) = W(ω)·[H_d(e^{jω}) − H_a(e^{jω})]    (2)

where H_d(e^{jω}) and H_a(e^{jω}) are the desired and approximated frequency responses, ωp and ωs are the pass-band and stop-band edge frequencies, and the weight function W(ω) provides control over error minimization in the two frequency bands. Equiripple filter implementations therefore keep some parameters fixed while the rest are optimized. There have been investigations in which the filter order (N), the maximum stop-band ripple (δs), the maximum pass-band ripple (δp) and the normalized stop-band and pass-band edge frequencies (ωs and ωp) are treated as fixed parameters while the remaining parameters are set for minimization. Still others developed algorithms where only N, δp and δs are fixed. The algorithm proposed by Parks and McClellan is flexible and computationally efficient and has become the most popular one for equiripple optimum filter design. Equation (2) represents the Parks-McClellan error function E(ω). The design objective is to minimize the maximum bound of the error; the best approximation is thus obtained by minimizing max|E(ω)| over the requisite frequency bands. The limitation is that δp/δs is fixed. An improvement on the error function in (2) has been adopted in the literature [19]:

U = max_{ω ≥ ωp} (|E(ω)| − δp) + max_{ω ≤ ωs} (|E(ω)| − δs)    (3)

The ideal response of a high pass filter is given as:

H_d(e^{jω}) = 1,  ωp ≤ ω ≤ π
            = 0,  0 ≤ ω ≤ ωs    (4)

PM, PSO, QPSO and WQPSO are employed on this design problem, and based on the results the performance of the filter in the pass-band and stop-band regions is investigated. Using coefficient matching, the dimensionality of the problem is halved, since linear-phase FIR filters have symmetric impulse responses. This reduction in complexity provides a faster response and better computational accuracy. In this paper a mean-square-error based cost function is adopted:

φ = μ·E_p + (1 − μ)·E_s,  0 < μ ≤ 1    (5)

E_p and E_s are the errors in the pass band and stop band, given by:

E_p = (1/π) ∫_{ωp}^{π} (1 − H(ω))² dω    (6)

E_s = (1/π) ∫_{0}^{ωs} (H(ω))² dω    (7)

where H is the magnitude response of the prototype filter. The cost function φ is a weighted sum of the mean square errors E_p and E_s, which are minimized to obtain better performance in terms of AS, δp and δs.
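The composite cost function of (5)-(7) can be sketched numerically by approximating the band integrals with sums over a frequency grid. This is an illustrative sketch, not the authors' implementation: the 128-point grid matches the simulation setup reported later, while the truncated ideal high-pass prototype used as test input is an assumption for demonstration only.

```python
import numpy as np

def cost_phi(h, wp=0.45 * np.pi, ws=0.4 * np.pi, mu=0.5, K=128):
    """phi = mu*Ep + (1-mu)*Es (eqs. 5-7) for a high-pass FIR filter
    with coefficients h, using a rectangle-rule grid approximation."""
    w = np.linspace(0.0, np.pi, K)                   # frequency grid
    dw = w[1] - w[0]
    n = np.arange(len(h))
    H = np.abs(np.exp(-1j * np.outer(w, n)) @ h)     # |H(e^{jw})|
    pb = w >= wp                                     # pass-band: wp <= w <= pi
    sb = w <= ws                                     # stop-band: 0 <= w <= ws
    Ep = np.sum((1.0 - H[pb]) ** 2) * dw / np.pi     # eq. (6)
    Es = np.sum(H[sb] ** 2) * dw / np.pi             # eq. (7)
    return mu * Ep + (1.0 - mu) * Es

# illustrative order-20 prototype: rectangular-windowed ideal high pass
N = 20
m = np.arange(N + 1) - N / 2
wc = 0.425 * np.pi                                   # assumed cutoff mid-transition
h = -(wc / np.pi) * np.sinc(wc * m / np.pi)
h[N // 2] += 1.0
print(cost_phi(h))  # small residual error for the truncated prototype
```

A candidate coefficient vector with a lower φ is, by construction, a better high-pass approximation in the weighted mean-square sense, which is what the swarm algorithms below minimize.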

III. OVERVIEW OF ALGORITHMS USED

(A) Preliminary Concepts of PSO

Particle Swarm Optimization (PSO), introduced by Eberhart and Kennedy [6], is a population-based stochastic search technique capable of handling non-differentiable cost functions, unlike gradient based techniques. In PSO a population of particles is termed a swarm and each particle is a potential solution to the optimization problem at hand. PSO is less likely than GA or Simulated Annealing to get trapped in local optima, but it is not a global search technique, as shown by Van den Bergh using the convergence criteria set by Solis and Wets [20]. Eberhart and Kennedy [6], followed by Eberhart and Shi [7], modeled PSO on bird flocking in nature, where flocking emerges through successful optimization of a specific cost function. Each particle mimics a bird in flight and bases its cognitive and social updates on the best position it has found so far (pbest) and the global best position found by any particle in the swarm (gbest). Consequently, when promising new positions are found in the search space, the other particles swarm towards them, leading to a more thorough exploration of the neighbouring region.

In PSO the population vectors are initialized with the same dimensionality as the problem; in high pass filter design every particle has the number of dimensions that the order of the filter demands. Let M be the population size and d the dimensionality of the problem. Each particle i (1 ≤ i ≤ M) has a current position x_i = (x_i1, x_i2, …, x_id), a current velocity v_i = (v_i1, v_i2, …, v_id), and a personal best position p_i = (p_i1, p_i2, …, p_id), the position for which the fitness value calculated for the i-th particle is least. At the end of an iteration the particles update their velocities and positions according to:

v_ij(t+1) = w·v_ij(t) + c1·r1(t)·(p_ij(t) − x_ij(t)) + c2·r2(t)·(p_gj(t) − x_ij(t))    (8)

x_ij(t+1) = x_ij(t) + v_ij(t+1)    (9)

for all j ∈ {1, 2, …, d}, where v_ij represents the velocity of the i-th particle in its j-th dimension, c1 and c2 are the cognitive and social acceleration weights, r1 and r2 are two random numbers uniformly distributed in (0,1), and p_gj(t) is the group-wise best position of the swarm, i.e. the global best. The first term in (8) is the inertial motion, which explores new regions of the search space, whereas the second and third terms provide diversity by letting the particle draw on its own experience and on that of the rest of the swarm. The pbest update occurs as follows:

p_i(t+1) = p_i(t)      if f(x_i(t+1)) ≥ f(p_i(t))
         = x_i(t+1)    if f(x_i(t+1)) < f(p_i(t))    (10)

(B) Quantum-behaved Particle Swarm Optimization (QPSO)

In classical mechanics a particle's motion is described by its position and velocity at any instant, and conventional PSO draws on this Newtonian framework in formulating its update equations. In quantum mechanics, however, the simultaneous determination of position and velocity carries no meaning, by Heisenberg's Uncertainty Principle. Instead, the state of the system is described by Schrödinger's equation through a wave function ψ(x,t). The dynamic behaviour of a quantum-behaved particle system is widely different from the classical one, as each particle is formally associated with a probability of existence through its probability density function |ψ(x,t)|². In QPSO each particle has a position vector x_i and a personal best p_i, and the population has a global best, which has the best fitness value among the set of personal bests. Using the Monte Carlo method, the position update equation of QPSO can be written as:

x_ij(t+1) = P_ij(t) ± (1/2)·L_ij(t)·ln(1/u_ij(t))    (11)

The local attractor P_ij(t) is given by:

P_ij(t) = φ_ij(t)·p_ij(t) + (1 − φ_ij(t))·p_gj(t)    (12)

where φ_ij(t) and u_ij(t) are uniformly distributed random numbers in (0,1). The parameter L_ij(t) is evaluated as:

L_ij(t) = 2β·|P_ij(t) − x_ij(t)|    (13)

where β, called the contraction-expansion factor, controls the convergence speed. The complete position update equation is then:

x_ij(t+1) = P_ij(t) ± β·|P_ij(t) − x_ij(t)|·ln(1/u_ij(t))    (14)

L can be controlled to obtain varying degrees of performance and convergence speed from the algorithm. Working on this notion, an improvement called the "mainstream thought" has been introduced [14]. The mainstream thought, or mean best position, is the centre of the pbest positions of the particles:

mbest(t) = [mbest_1(t), mbest_2(t), …, mbest_d(t)]
         = [(1/M) Σ_{i=1}^{M} p_i1(t), (1/M) Σ_{i=1}^{M} p_i2(t), …, (1/M) Σ_{i=1}^{M} p_id(t)]    (15)

where M is the swarm size and p_id is the d-th dimension coefficient of the personal best of the i-th particle. Consequently L becomes:

L_ij(t) = 2β·|mbest_j(t) − x_ij(t)|    (16)

x_ij(t+1) = P_ij(t) ± β·|mbest_j(t) − x_ij(t)|·ln(1/u_ij(t))    (17)

The right-hand term in (17) is taken with the positive sign when rand(0,1) > 0.5. Equation (17) is the position update for QPSO and provides a search method with guaranteed convergence to the global optimum.

(C) QPSO with Weighted Mean Best Position Update

The equally weighted mean best is not representative of social behaviour, because particles of varying fitness contribute to it equally. In societal settings the elite contribute more towards qualitative improvements. This analogy is drawn into the mean best calculation by assigning larger weights to particles with better fitness and smaller weights to comparatively poorer performing particles. The weights are sorted in descending order and multiplied with the personal bests ranked in ascending order of fitness value (lower fitness values first), so that the weighted mean best contains a higher proportion of the better fitness values [17]. The mean best position in weighted QPSO is expressed as:

mbest(t) = [mbest_1(t), mbest_2(t), …, mbest_d(t)]
         = [(1/M) Σ_{i=1}^{M} α_i1 p_i1(t), (1/M) Σ_{i=1}^{M} α_i2 p_i2(t), …, (1/M) Σ_{i=1}^{M} α_id p_id(t)]    (18)

where α_i represents the weight coefficient array containing the d-dimensional weights of the i-th particle.
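A single iteration of this update, combining the local attractor, the weighted mean best, and the stochastic log-term jump, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the rank-based 1.5-to-0.5 weight assignment below is one simple choice consistent with the ranking scheme described above, and the toy sphere objective is assumed for demonstration.

```python
import numpy as np

rng = np.random.default_rng(7)

def wqpso_step(X, P, g, fitness, beta=0.75):
    """One WQPSO position update over the whole swarm.
    X: (M, d) positions, P: (M, d) personal bests, g: (d,) global best,
    fitness: (M,) values f(p_i), lower is better."""
    M, d = X.shape
    # rank-based weights: fittest particle gets the largest weight
    # (1.5 down to 0.5, unit mean) -- an assumed concrete scheme
    ranks = np.argsort(np.argsort(fitness))          # 0 = best
    alpha = np.linspace(1.5, 0.5, M)[ranks]
    alpha = alpha / alpha.mean()
    mbest = (alpha[:, None] * P).mean(axis=0)        # weighted mean best
    # local attractor between pbest and gbest
    phi = rng.random((M, d))
    attractor = phi * P + (1.0 - phi) * g
    # stochastic jump; '+' branch taken when rand > 0.5
    u = 1.0 - rng.random((M, d))                     # u in (0, 1]
    sign = np.where(rng.random((M, d)) > 0.5, 1.0, -1.0)
    return attractor + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)

# toy usage on a 4-dimensional sphere function
M, d = 30, 4
X = rng.uniform(-1.0, 1.0, (M, d))
P = X.copy()                                         # personal bests
fit = (P ** 2).sum(axis=1)                           # f(p_i)
g = P[fit.argmin()]                                  # global best
X_next = wqpso_step(X, P, g, fit)
print(X_next.shape)  # (30, 4)
```

In a full optimizer this step would be wrapped in a loop that re-evaluates fitness, refreshes the personal and global bests after each move, and decreases β over the iterations.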

IV. PARAMETER SETTINGS

The design parameters of the filter are a pass-band edge frequency (ωp) of 0.45 and a stop-band edge frequency (ωs) of 0.4 (normalized). The population size (M) is 120; c1 and c2 are taken as 2.05 each, and wmax and wmin are 0.9 and 0.4 respectively for PSO. The contraction-expansion factor β is decreased linearly from 1.0 to 0.5 over the course of the iterations of QPSO and WQPSO, using the update equation:

β = (1.0 − 0.5)·[(evals − evaluations) / evals] + 0.5    (19)

where evals is the maximum number of iterations and evaluations is the current iteration count. In WQPSO, the random weights are assigned values between 0.5 and 1.5 and then normalized. The value of μ in the cost function is varied from 0.1 to 1.0 over 30 successive runs of the algorithms. The best results over 9000 iterations are reported here.

V. RESULTS AND DISCUSSIONS

The algorithms are run in MATLAB with sampling frequency fs = 1 Hz and 128 sampling points to realize filter orders 20, 30 and 40. The performances are shown below.

Table 1. Comparison of design parameters for N = 20

Parameters       | PM     | PSO    | QPSO   | WQPSO
AS (dB)          | -15.60 | -20.78 | -25.78 | -26.17
δp (normalized)  | 0.166  | 0.134  | 0.042  | 0.045
δs (normalized)  | 0.166  | 0.091  | 0.051  | 0.049

Figure 1. Magnitude plot in dB for N = 20

Table 2. Comparison of design parameters for N = 30

Parameters       | PM     | PSO    | QPSO   | WQPSO
AS (dB)          | -20.50 | -23.75 | -30.53 | -32.48
δp (normalized)  | 0.095  | 0.073  | 0.017  | 0.015
δs (normalized)  | 0.095  | 0.065  | 0.029  | 0.024

Figure 2. Magnitude plot in dB for N = 30

Table 3. Comparison of design parameters for N = 40

Parameters       | PM     | PSO    | QPSO   | WQPSO
AS (dB)          | -25.08 | -29.13 | -34.44 | -37.26
δp (normalized)  | 0.056  | 0.130  | 0.036  | 0.031
δs (normalized)  | 0.035  | 0.056  | 0.019  | 0.014

Figure 3. Magnitude plot in dB for N = 40

Figures 1-3 show the magnitude (dB) plots for orders 20, 30 and 40 for each of the algorithms discussed. It is observed that as the filter order increases, the stop-band ripple obtained with WQPSO improves: 0.049 for order 20, 0.024 for order 30 and 0.014 for order 40. The stop-band attenuation for WQPSO exceeds that of QPSO, PSO and PM in every implementation.

Figure 4. Zoom plot of pass-band ripple for N = 30

Figure 5. Zoom plot of pass-band ripple for N = 40

The pass-band ripple performances are shown in figures 4-5. With the exception of order 20 (not shown), the maximum pass-band ripple decreases under WQPSO for both orders 30 and 40 (0.015 for order 30 and 0.031 for order 40).

Figures 6-7 below show the magnitude responses for N = 30 and N = 40.

Figure 6. Magnitude response for N = 30

Figure 7. Magnitude response for N = 40
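The quantities tabulated above (AS, δp, δs) can be reproduced from any coefficient vector along the following lines. This is a sketch under stated assumptions: the 128-point grid follows the simulation setup, while the example coefficients are an illustrative truncated ideal high-pass prototype, not the optimized filters of Tables 1-3.

```python
import numpy as np

def filter_metrics(h, wp=0.45, ws=0.4, K=128):
    """Stop-band attenuation AS (dB) and normalized maximum ripples
    for a high-pass FIR filter; wp, ws are normalized edge frequencies."""
    w = np.linspace(0.0, np.pi, K)
    n = np.arange(len(h))
    H = np.abs(np.exp(-1j * np.outer(w, n)) @ h)         # |H(e^{jw})|
    delta_p = np.max(np.abs(1.0 - H[w >= wp * np.pi]))   # pass-band ripple
    delta_s = np.max(H[w <= ws * np.pi])                 # stop-band ripple
    AS = 20.0 * np.log10(delta_s)                        # attenuation in dB
    return AS, delta_p, delta_s

# illustrative order-20 prototype: rectangular-windowed ideal high pass
N = 20
m = np.arange(N + 1) - N / 2
wc = 0.425 * np.pi                                       # assumed cutoff
h = -(wc / np.pi) * np.sinc(wc * m / np.pi)
h[N // 2] += 1.0
AS, dp, ds = filter_metrics(h)
print(AS, dp, ds)
```

Note that AS = 20·log10(δs) reproduces the relation visible in the tables, e.g. δs = 0.049 gives approximately -26 dB for the order-20 WQPSO design.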

Figure 8. Convergence profile for N = 40

Tables 1-3 list the performances of PM, PSO, QPSO and WQPSO on the high pass filter design problem. The stop-band attenuation of the 20th order filter using WQPSO is -26.17 dB, against -15.60 dB, -20.78 dB and -25.78 dB for Parks-McClellan, PSO and QPSO respectively. The normalized stop-band ripple obtained using WQPSO is 0.049. This indicates better performance at the cost of a very small increase in pass-band ripple with respect to QPSO (0.045 vs. 0.042). For filter order 30, the stop-band attenuation using WQPSO is -32.48 dB, in comparison to -30.53 dB, -23.75 dB and -20.50 dB for QPSO, PSO and PM. For WQPSO the normalized stop-band ripple is 0.024 and the pass-band ripple is 0.015, both improvements over the QPSO based results. Simulation results show that for order 40 the attenuation using WQPSO is -37.26 dB, with pass-band and stop-band ripples of 0.031 and 0.014. For the same order, the QPSO, PSO and PM algorithms yield -34.44 dB, -29.13 dB and -25.08 dB of attenuation, and their ripple performances are not as good. The stop-band ripple is improved over QPSO by 3.92%, 17.24% and 26.31% using WQPSO for orders 20, 30 and 40 respectively. From the tabulated data and the discussion it is inferred that filters designed using WQPSO perform better in the stop-band. With the exception of the lower order (20), the pass-band ripple performances are also improved, by 11.76% and 13.88% for the higher orders (30 and 40). The convergence profiles of WQPSO, QPSO and PSO for order 40 over 9000 iterations are shown in Fig. 8; WQPSO converges to a lower error fitness of 0.0131, against 0.0143 for QPSO and 0.0285 for PSO.

VI. CONCLUSIONS AND FUTURE SCOPE

From the results and discussion it is evident that WQPSO is a viable alternative for designing better performing FIR high pass filters, at the expense of slightly greater transition width. WQPSO can further be applied to digital IIR filter design and to the pattern synthesis of antenna arrays.

REFERENCES

[1] Ozbay, Y., Karlik, B., Kavsaoglu, A.R., "A windows-based digital filter design", Math. Comput. Appl., 8(3), 287–294, 2003.
[2] Parks, T.W., McClellan, J.H., "Chebyshev approximation for nonrecursive digital filters with linear phase", IEEE Trans. Circuit Theory, CT-19, 189–194, 1972.
[3] Chen, S., "IIR model identification using batch-recursive adaptive simulated annealing algorithm", Proceedings of the 6th Annual Chinese Automation and Computer Science Conference, pp. 151–155, 2000.
[4] Benvenuto, N., Marchesi, M., Uncini, A., "Applications of simulated annealing for the design of special digital filters", IEEE Transactions on Signal Processing, vol. 40, no. 2, pp. 323–332, 1992.
[5] Mastorakis, N.E., Gonos, I.F., Swamy, M.N.S., "Design of two dimensional recursive filters using genetic algorithms", IEEE Trans. Circ. Syst. I – Fundam. Theory Appl., 50, 634–639, 2003.
[6] Kennedy, J., Eberhart, R., "Particle swarm optimization", Proc. IEEE Int. Conf. Neural Networks, 1995.
[7] Eberhart, R., Shi, Y., "Comparison between genetic algorithms and particle swarm optimization", Proc. 7th Ann. Conf. on Evolutionary Computation, San Diego, 2000.
[8] Langlois, J.M.P., "Design of linear phase FIR filters using particle swarm optimization", Proceedings of the 22nd Biennial Symposium on Communications, Kingston, Ontario, Canada, 2004.
[9] Van den Bergh, F., "An analysis of particle swarm optimizers", Ph.D. Thesis, University of Pretoria, November 2001.
[10] Banks, A., Vincent, J., Anyakoha, C., "A review of particle swarm optimization. Part II: hybridization, combinatorial, multicriteria and constrained optimization, and indicative applications", Natural Computing, vol. 7(1), pp. 109–124, 2008.
[11] Van den Bergh, F., Engelbrecht, A.P., "A cooperative approach to particle swarm optimization", IEEE Transactions on Evolutionary Computation, 8, 225–239, 2004.
[12] Liang, J.J., Suganthan, P.N., "Dynamic multi-swarm particle swarm optimizer with a novel constraint-handling mechanism", IEEE Congress on Evolutionary Computation (CEC 2006), pp. 9–16, 2006.
[13] Sun, J., Feng, B., Xu, W.B., "Particle swarm optimization with particles having quantum behavior", IEEE Proceedings of the Congress on Evolutionary Computation, pp. 325–331, 2004.
[14] Sun, J., Xu, W.B., Feng, B., "A global search strategy of quantum-behaved particle swarm optimization", Proceedings of the 2004 IEEE Conference on Cybernetics and Intelligent Systems, pp. 111–116, 2004.
[15] Fang, W., Sun, J., Xu, W.B., Liu, J., "FIR digital filters design based on quantum-behaved particle swarm optimization", First International Conference on Innovative Computing, Information and Control, pp. 615–619, 2006.
[16] Fang, W., Sun, J., Xu, W.B., "FIR filter design based on adaptive quantum-behaved particle swarm optimization algorithm", Systems Engineering and Electronics, vol. 30(7), pp. 1378–1381, 2008.
[17] Xi, M., Sun, J., Xu, W.B., "An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position", Appl. Math. Comput., 205(2), 751–759, 2008.
[18] Xi, M., Sun, J., Xu, W.B., "Quantum-behaved particle swarm optimization with elitist mean best position", Complex Systems and Applications – Modeling, Control and Simulations, pp. 1643–1647, 2007.
[19] Sarangi, A., Mahapatra, R.K., Panigrahi, S.P., "DEPSO and PSO-QI in digital filter design", Expert Syst. Appl., 38(9), 10966–10973, 2011.
[20] Solis, F.J., Wets, R.J-B., "Minimization by random search techniques", Mathematics of Operations Research, 6, pp. 19–30, 1981.
