Algorithms for Solving the Method of Inequalities – A Comparative Study
J F Whidborne, D-W Gu, I Postlethwaite
Control Systems Research Department of Engineering University of Leicester Leicester LE1 7RH UK
Report No: 94-17 August 1994
Abstract

Recently, there has been a renewed interest in multi-objective control system design using parameter search techniques. One multi-objective approach is the method of inequalities, where the problem is formulated as a set of algebraic inequalities which must be satisfied for a successful design. In this report, three algorithms which can be used to solve the method of inequalities are described and compared. Two of the algorithms are based on hill-climbing techniques, whilst the third uses a genetic algorithm approach. The report also serves as an introduction and tutorial to hill-climbing and genetic algorithm techniques for multi-objective design.
1 Introduction
The majority of engineering design problems are multi-objective, in that there are several conflicting design aims which need to be achieved simultaneously. If these design aims are expressed quantitatively as a set of n design objective functions φi(p), i = 1, ..., n, where p denotes the design parameters chosen by the designer, the design problem can be formulated as a multi-objective optimisation problem:

    min over p ∈ P of {φi(p), i = 1, ..., n},    (1)
where P denotes the set of possible design parameters p. In most cases the objective functions are in conflict, so that the reduction of one objective function leads to an increase in another. Consequently, the result of the multi-objective optimisation is known as a Pareto-optimal solution [Par06]. A Pareto-optimal solution has the property that it is not possible to reduce any of the objective functions without increasing at least one of the other objective functions. A point p* ∈ P is defined as being Pareto-optimal if and only if there exists no other point p ∈ P such that

    a) φi(p) ≤ φi(p*) for all i = 1, ..., n, and
    b) φj(p) < φj(p*) for at least one j.    (2)

The problem with multi-objective optimisation is that there is generally a very large set of Pareto-optimal solutions. There is consequently a difficulty in representing the set of Pareto-optimal solutions and in choosing the solution which is the best design. To overcome this difficulty, the design problem can be formulated as the method of inequalities (MOI) [ZAN73] (see also [PM82, Mac89, WL93]). In the method of inequalities, the problem is expressed as a set of algebraic inequalities which need to be satisfied for a successful design. The design problem is expressed as

    φi(p) ≤ εi for i = 1, ..., n,    (3)

where the εi are real numbers, p ∈ P is a real vector (p1, p2, ..., pq) chosen from a given set P, and the φi are real functions of p. The design goals εi are chosen by the designer and represent the largest tolerable values of the objective functions φi. The aim of the design is to find a p that simultaneously satisfies the set of inequalities. For control system design, the functions φi(p) may be functionals of the system step response, for example the rise-time, overshoot or the integral absolute error, or functionals of the frequency response, such as the bandwidth. They can also represent measures of the system stability and robustness, such as the maximum real part of the closed-loop poles.
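The Pareto-optimality condition (2) amounts to a pairwise dominance test over candidate points. As an illustration, a minimal sketch in Python; the objective vectors are invented, and `dominates` and `pareto_optimal` are hypothetical helper names, not from the report:

```python
# Pairwise Pareto-dominance check corresponding to condition (2):
# phi_p dominates phi_pstar if it is no worse in every objective
# and strictly better in at least one.
def dominates(phi_p, phi_pstar):
    return (all(a <= b for a, b in zip(phi_p, phi_pstar))
            and any(a < b for a, b in zip(phi_p, phi_pstar)))

def pareto_optimal(candidates):
    """Return the non-dominated subset of a list of objective vectors."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Invented objective vectors: [2, 3] is dominated by [2, 2], the rest
# trade one objective off against the other.
front = pareto_optimal([[1, 3], [2, 2], [3, 1], [2, 3]])
```

Note that the non-dominated set returned here is exactly the "very large set of Pareto-optimal solutions" problem in miniature: three of the four candidates survive.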
Additional inequalities which arise from the physical constraints of the system can also be included, for example to restrict the maximum control signal. The design parameter vector p may parameterise a controller with a particular structure (e.g. [Whi92, Whi93]); for example, p = (p1, p2) could parameterise a PI controller p1 + p2/s. Alternatively, p may parameterise the weighting functions required by analytical optimisation methods [WPG94, WMGP95, PWMG94], to provide a mixed optimisation approach. The actual solution to the set of inequalities (3) may be obtained by means of numerical search algorithms; the purpose of this report is to investigate several of the algorithms that have been proposed for the solution of the MOI. Generally, the design process is interactive, with the computer providing information to the designer about conflicting design requirements, and the designer adjusting the inequalities to explore the various possible solutions to the problem. The designer can be supported in this role by various graphical displays [Ng89] which provide information about the progress of the search algorithm and about the conflicting design requirements.

The original algorithm, proposed by Zakian and Al-Naib [ZAN73], is known as the moving boundaries process (MBP). This algorithm uses Rosenbrock's hill-climbing [Ros60] to perform a local search to try to improve on at least one of the unsatisfied performance indices. The algorithm has worked well over the years; however, it does rely on a great deal of user interaction to provide assistance when local minima are found. The success of the algorithm is very dependent on being provided with a good starting point. This does have the advantage of forcing the user to analyse the problem carefully before the design is completed, and hence guards against 'unreasonable' solutions. The algorithm has a second problem with regard to the direction of search. This is detailed in [Ng89] and [Rut93], and is discussed in more detail in Section 2. To overcome this limitation, Ng [Ng89] has proposed another algorithm. This algorithm is also based on hill-climbing, namely Nelder and Mead's modified simplex method [NM65a, NM65b]. It also appears to work well, but is likewise very dependent on being provided with a good starting point. The final algorithm is a genetic algorithm (GA). It has been developed by Fonseca and Fleming [FF93b, FF93c, FF94], who have dubbed it the 'multi-objective genetic algorithm' (MOGA). Its design philosophy is slightly different, in that a set of simultaneous solutions is sought and the designer then selects the best solution from the set.

The purpose of this report is to investigate the performance of these three algorithms. In Sections 2, 3 and 4, the details of the algorithms are presented. The algorithms are tested on three problems; the results are presented in Section 5. Some concluding remarks are made in the final section.
2 Moving Boundaries Process
The method of inequalities can be solved using a numerical algorithm known as the moving boundaries process (MBP) [ZAN73]. The design problem (3) is restated here as the set of inequalities

    φi(p) ≤ εi,  i = 1, ..., n,    (4)

where the εi are real numbers, p represents the real vector (p1, p2, ..., pq), the elements of which are the controller parameters to be determined, and the φi are real functions of p. Included in the inequality set are the constraints on p which define the set P. The problem is to find a value of p which satisfies all the inequalities (4). Each inequality φi(p) ≤ εi of the set (4) defines a set Si of points in the q-dimensional space Rq such that

    Si = {p : φi(p) ≤ εi}.    (5)

The boundary of this set is defined by φi(p) = εi. A point p ∈ Rq is a solution to the set of inequalities (4) if and only if it lies inside every set Si, i = 1, 2, ..., n, and hence inside the set S which denotes the intersection of all the sets Si,

    S = S1 ∩ S2 ∩ ... ∩ Sn.    (6)
S is called the admissible set and any point p in S is called an admissible point. The objective is thus to find a point p such that p ∈ S. Such a point satisfies the set of inequalities (4) and is said to be a solution.

The MBP proceeds from an arbitrary initial point to an admissible point, i.e. any point in the set S, in an iterative way. Let pk denote the value of p at the kth move, and let Si^k be the set formed by the inequality φi(p) ≤ φi(pk), with boundary φi(p) = φi(pk). A step is taken from the point pk to a trial point p̃k. If, for every i = 1, 2, ..., n, the boundary defined by φi(p) = φi(p̃k) is closer to, or no further away from, the boundary of Si, then the point p̃k is accepted and becomes the new point pk+1. After a sufficient number of successful steps, the boundary of Si^k coincides with the boundary of Si for every i = 1, 2, ..., n, and the problem is solved. Thus

    S^k = S1^k ∩ S2^k ∩ ... ∩ Sn^k,    (7)

    Si^k = {p : φi(p) ≤ εi^k},  i = 1, 2, ..., n,    (8)

    εi^k = εi       if φi(pk) ≤ εi,
           φi(pk)   if φi(pk) > εi,      i = 1, 2, ..., n.    (9)

A step is taken from pk to a trial point p̃k. This point is a success, and we set pk+1 = p̃k, if and only if

    φi(p̃k) ≤ εi^k,  i = 1, 2, ..., n.    (10)

If any of the inequalities (10) do not hold, further trial points are generated from pk until a success occurs. When all the strict inequalities (10) hold, the boundaries of the set S^(k+1) will have moved closer to the boundaries of S. The process is terminated when, after a sufficient number of successful steps, the boundaries of S^k have converged to those of S, i.e. when

    εi^k = εi,  i = 1, 2, ..., n.    (11)

2.1 Trial point generation
A trial point p̃k is generated using a scheme provided by Rosenbrock [Ros60]. This is a simple but robust scheme which does not require the computation of gradients. Let Vj^r, j = 1, 2, ..., q, denote unit vectors that form an orthonormal basis for Rq. A set of vectors V1, V2, ..., Vq is orthonormal if both ||Vj|| = 1 and ⟨Vi, Vj⟩ = 0 for i ≠ j, where || · || denotes the length and ⟨·, ·⟩ the inner product. Each vector Vj^r defines a direction of search and is orthogonal to the others. A trial point p̃k is generated by

    p̃k = pk + ej Vj^r,    (12)

where ej, j = 1, 2, ..., q, are real numbers that determine the length of the step in each direction of search Vj^r. If p̃k is a success, ej is replaced by αej, where α > 1. If p̃k is not a success, ej is replaced by βej, where −1 < β < 0. In either case, j is replaced by j + 1 in (12). One iteration of the MBP is defined as q consecutive trials and hence, after the last trial (i.e. j = q) of the Lth iteration, we let j = 1 and L = L + 1 for the next iteration. As soon as one success followed by one failure has occurred for every j = 1, 2, ..., q, each vector Vj^r is replaced by Vj^(r+1), j = 1, 2, ..., q. The vectors Vj^(r+1) are computed as follows. Let dj be the sum of all successful values of ej during the rth stage, and let

    a1 = d1 V1^r + d2 V2^r + ... + dq Vq^r,
    a2 =           d2 V2^r + ... + dq Vq^r,
    ...
    aq =                           dq Vq^r.    (13)

The Gram-Schmidt procedure is used to orthogonalise the vectors aj:

    b1 = a1,                                        V1^(r+1) = b1/||b1||,
    b2 = a2 − ⟨a2, V1^(r+1)⟩ V1^(r+1),              V2^(r+1) = b2/||b2||,
    ...
    bq = aq − Σ(k=1..q−1) ⟨aq, Vk^(r+1)⟩ Vk^(r+1),  Vq^(r+1) = bq/||bq||.    (14)

Initially, at r = 0, the ej and Vj^r are chosen arbitrarily. However, as r increases, the rate of convergence of S^k towards S tends to improve, because V1^r becomes more orientated along the direction of rapid advance, V2^r along the direction normal to V1^r, and so on.
2.2 Algorithm
i) Data given as εi (i = 1, 2, ..., n). Specify the maximum number of iterations allowed, Nm. Set the parameters α and β.

ii) Set p = p0. Compute φi(p) (i = 1, 2, ..., n). If φi(p) ≤ εi for all i = 1, 2, ..., n, the initial point satisfies the inequalities and the problem is solved.

iii) Set e0j = 0.1|pj| if pj ≠ 0, and e0j = 0.01 if pj = 0 (j = 1, 2, ..., q). Set the initial search directions V1, ..., Vq to the coordinate unit vectors, i.e. Vj has a 1 in the jth position and zeros elsewhere. Set ε′i = φi(p) if φi(p) > εi, and ε′i = εi if φi(p) ≤ εi (i = 1, 2, ..., n). Set L = 0, r = 0.

iv) Set ej = e0j and dj = 0 (j = 1, 2, ..., q).

v) Start a new iteration. Set L = L + 1 and j = 1.

vi) Generate a trial point p̃ = p + ej Vj. Compute φi(p̃) (i = 1, 2, ..., n). Test whether φi(p̃) ≤ ε′i (i = 1, 2, ..., n). If success, go to (vii); if failure, go to (viii).

vii) Set p = p̃, dj = dj + ej and ej = α ej. Set ε′i = φi(p) if φi(p) > εi, and ε′i = εi if φi(p) ≤ εi (i = 1, 2, ..., n). Test whether ε′i = εi for all i = 1, 2, ..., n; if so, the problem is solved. Otherwise go to (ix).

viii) Discard p̃ and set ej = β ej. Did a success and a failure occur for each current Vj? If yes, re-initialise the vectors Vj (j = 1, 2, ..., q) using (13) and (14), set r = r + 1 and go to (iv). If no, go to (ix).

ix) If j = q, go to (v). If j < q, set j = j + 1 and go to (vi).

It has been found that suitable values for α and β are α = 3 and β = −0.5, and these are the values used in this study. Note that the order of calculation of the φi is important to save computational effort. The test p ∈ P should be evaluated first and, if successful, the stability functions should be evaluated prior to the performance evaluations in a control system design, since performance indices cannot generally be evaluated for an unstable system.
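The steps above can be sketched in code. The following is a simplified illustration, not the report's implementation: the toy objective functions, goals and helper names are invented, and the step-size reset of step (iv) after each rotation is omitted for brevity:

```python
import math

def rotate(V, d):
    """Rotate the search directions using eqns (13)-(14)."""
    q = len(V)
    # a_j = sum over k = j..q of d_k V_k   (13)
    a = [[sum(d[k] * V[k][i] for k in range(j, q)) for i in range(q)]
         for j in range(q)]
    new_V = []
    for j in range(q):
        b = a[j][:]
        for v in new_V:  # Gram-Schmidt orthogonalisation (14)
            dot = sum(x * y for x, y in zip(a[j], v))
            b = [bi - dot * vi for bi, vi in zip(b, v)]
        norm = math.sqrt(sum(bi * bi for bi in b)) or 1.0
        new_V.append([bi / norm for bi in b])
    return new_V

def mbp(phis, eps, p0, alpha=3.0, beta=-0.5, max_iter=200):
    q = len(p0)
    p = list(p0)
    V = [[1.0 if i == j else 0.0 for i in range(q)] for j in range(q)]
    e = [0.1 * abs(x) if x != 0 else 0.01 for x in p]  # step (iii)
    d = [0.0] * q
    succ, fail = [False] * q, [False] * q

    def goals(f):  # temporary goals eps', eqn (9)
        return [max(fi, ei) for fi, ei in zip(f, eps)]

    eps_k = goals([phi(p) for phi in phis])
    for _ in range(max_iter):
        if all(phi(p) <= ei for phi, ei in zip(phis, eps)):
            break  # admissible point found
        for j in range(q):
            trial = [p[i] + e[j] * V[j][i] for i in range(q)]
            f = [phi(trial) for phi in phis]
            if all(fi <= ek for fi, ek in zip(f, eps_k)):  # success test (10)
                p, eps_k = trial, goals(f)
                d[j] += e[j]
                e[j] *= alpha
                succ[j] = True
            else:
                e[j] *= beta
                fail[j] = True
        if all(succ) and all(fail):  # rotate once every direction saw both
            V = rotate(V, d)
            d = [0.0] * q
            succ, fail = [False] * q, [False] * q
    return p

# Invented toy problem: two parameters, two inequality indices.
phis = [lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2,
        lambda p: abs(p[0] - p[1])]
eps = [0.5, 1.0]
result = mbp(phis, eps, [5.0, 5.0])
```

As in the MBP proper, an unsatisfied index is never allowed to worsen, and a satisfied index is never allowed to exceed its goal.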
[Figure 1: Cone of descent directions. The diagram shows the search directions V1^r and V2^r at the point pk, together with a cone of descent directions that contains none of the search directions.]
2.3 Comments
The MBP is simple, robust and effective, and has been used successfully for solving the MOI for many years. Practical experience indicates that it has finite convergence for a large class of practical problems, but a characterisation of this class has not been established theoretically. Ng [Ng89] has identified a weakness in the algorithm. It is possible that there are no descent directions contained in the set of search vectors even though a cone of descent directions exists, as shown in Figure 1. In the figure, the cone of descent directions is the set {p : φi(p) ≤ φi(pk), i = 1, 2}, yet the search directions V1^r, V2^r, −V1^r and −V2^r are all ascent directions. Thus, in such cases, the algorithm will not find a descent direction, although one exists, unless the search directions are rotated. Rutland [Rut93] remarks that the search directions will eventually rotate once the search steps become small enough, but this reduces the efficiency of the algorithm.
3 Nelder Mead Dynamic Minimax Method
The Nelder Mead modified simplex method [NM65a, NM65b] is a well-known hill-climbing method (e.g. [Mat92, PFTV92, BSS93]). It is generally used for unconstrained minimisation problems; however, a scheme by Ng [Ng89] provides a dynamic minimax mechanism for using the Nelder Mead algorithm to solve the MOI. We will call this approach the Nelder Mead dynamic minimax method (NMDM).
3.1 Nelder Mead Algorithm
Consider first the minimisation of a function ψ(p), where p ∈ Rq. Define a current simplex p0, p1, ..., pq, where pi ∈ Rq, with ψi = ψ(pi). Define h as the subscript such that

    ψh = max over i of ψi,    (15)

and l as the subscript such that

    ψl = min over i of ψi.    (16)

Define p̄ as the centroid of the points pi, i ≠ h. Three possible operations are performed on ph at each iteration: reflection, expansion and contraction. Additionally, a multiple contraction can be performed on all the pi. These operations are described in the algorithm below and are illustrated in Figure 2.
[Figure 2: Operations on the simplex. i) simplex at the beginning of the step; ii) reflection to p(r); iii) expansion to p(e); iv) contraction to p(c); v) multiple contraction, replacing each pi by (pi + pl)/2.]
3.2 Algorithm
i) Define a starting point p0 = (p0,1, p0,2, ..., p0,q) and set up the initial simplex p0, ..., pq, where

    p1 = (1.1 p0,1, p0,2, ..., p0,q),
    p2 = (p0,1, 1.1 p0,2, ..., p0,q),
    ...
    pi = (p0,1, p0,2, ..., 1.1 p0,i, ..., p0,q),
    ...
    pq = (p0,1, p0,2, ..., 1.1 p0,q).    (17)-(23)

ii) Start an iteration. Compute ψ(pi) (i = 0, 1, ..., q). Determine ph, pl and p̄.

iii) Perform a reflection on ph, denoted by p(r), where

    p(r) = (1 + α)p̄ − αph,    (24)

and α is a positive constant, the reflection coefficient.

iv) If ψl < ψ(p(r)) < ψi for all i ≠ h, l, then ph is replaced by p(r); go to (ii) to start a new iteration.

v) If ψ(p(r)) < ψl, i.e. if reflection has produced a new minimum, perform an expansion on p(r), denoted by p(e), where

    p(e) = γp(r) + (1 − γ)p̄,    (25)

and γ is a constant greater than unity, the expansion coefficient. If ψ(p(e)) < ψl, replace ph by p(e) and go to (ii) to start a new iteration. If ψ(p(e)) > ψl, the expansion has failed; replace ph by p(r) and go to (ii) to start a new iteration.

vi) If, however, ψ(p(r)) > ψi for all i ≠ h, then if ψ(p(r)) < ψh, replace ph by p(r), but if ψ(p(r)) > ψh, do not replace ph. Then perform a contraction on ph, denoted by p(c), where

    p(c) = βph + (1 − β)p̄,    (26)

and 0 < β < 1 is the contraction coefficient. If ψ(p(c)) < ψh, replace ph by p(c) and go to (ii) to start a new iteration. If ψ(p(c)) > ψh, perform a multiple contraction, that is, replace every pi by (pi + pl)/2, and go to (ii) to start a new iteration.

It has been found that suitable values for α, β and γ are α = 1, β = 0.5 and γ = 2, and these are the values used in this study.
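The operations above can be sketched as follows. This is a simplified illustration rather than the exact acceptance logic of the report (the test function and tolerances are invented), using the stated coefficients α = 1, β = 0.5 and γ = 2:

```python
def nelder_mead(psi, p0, alpha=1.0, beta=0.5, gamma=2.0, iters=200):
    q = len(p0)
    # Initial simplex per step (i): perturb each coordinate by 10%
    # (an arbitrary 0.1 offset is used where a coordinate is zero).
    simplex = [list(p0)]
    for i in range(q):
        v = list(p0)
        v[i] = 1.1 * v[i] if v[i] != 0 else 0.1
        simplex.append(v)
    for _ in range(iters):
        simplex.sort(key=psi)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(v[i] for v in simplex[:-1]) / q for i in range(q)]
        # Reflection (24)
        pr = [(1 + alpha) * centroid[i] - alpha * worst[i] for i in range(q)]
        if psi(pr) < psi(best):
            # Expansion (25); accepted if it beats the current minimum
            pe = [gamma * pr[i] + (1 - gamma) * centroid[i] for i in range(q)]
            simplex[-1] = pe if psi(pe) < psi(best) else pr
        elif psi(pr) < psi(worst):
            simplex[-1] = pr
        else:
            # Contraction (26)
            pc = [beta * worst[i] + (1 - beta) * centroid[i] for i in range(q)]
            if psi(pc) < psi(worst):
                simplex[-1] = pc
            else:
                # Multiple contraction towards the best vertex
                simplex = [[(v[i] + best[i]) / 2 for i in range(q)]
                           for v in simplex]
    simplex.sort(key=psi)
    return simplex[0]

# Invented quadratic test function with minimum at (3, -1).
minimum = nelder_mead(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2, [0.0, 0.0])
```

Note that the expansion test here compares ψ(p(e)) with ψl, matching the acceptance test of [NM65a] discussed in Section 3.4.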
3.3 Dynamic Minimax Formulation
Ng [Ng89] has proposed a scheme to use the Nelder Mead algorithm to solve the method of inequalities. The following dynamic minimax formulation makes all indices with unsatisfied bounds equally active at the start of each iteration of the Nelder Mead algorithm; at the kth iteration, one step of the following minimax problem is solved:

    min over p of ψ(p),    (27)

where

    ψ(p) = max { φ̄i, i = 1, ..., n } ∪ { ḡj, j = 1, ..., q },    (28)

with

    φ̄i = (φi(p) − φgi) / (φbi − φgi),    (29)

and

    φgi = εi           if φi(pk) > εi,
          φi(pk) − δ   if φi(pk) ≤ εi,    (30)

    φbi = φi(pk)   if φi(pk) > εi,
          εi       if φi(pk) ≤ εi,    (31)

where δ is set to a small positive number; and where

    ḡj = max { (pj − pjub − δ)/δ, (pjlb − pj − δ)/δ },    (32)

where pjub and pjlb are the upper and lower bound values on each parameter pj, which define the set P such that pjlb < pj < pjub.
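Equations (28)-(32) can be sketched directly. The function name and all numerical values below are illustrative assumptions, not from the report:

```python
def dyn_minimax_cost(phi, phi_k, eps, p, p_lb, p_ub, delta=1e-4):
    """psi(p) per (28)-(32); phi_k holds the index values at the
    current iterate p^k."""
    terms = []
    for fi, fki, ei in zip(phi, phi_k, eps):
        if fki > ei:                     # bound unsatisfied at p^k
            phi_g, phi_b = ei, fki       # (30), (31)
        else:                            # bound satisfied at p^k
            phi_g, phi_b = fki - delta, ei
        terms.append((fi - phi_g) / (phi_b - phi_g))         # (29)
    for pj, lb, ub in zip(p, p_lb, p_ub):                    # (32)
        terms.append(max((pj - ub - delta) / delta,
                         (lb - pj - delta) / delta))
    return max(terms)                                        # (28)

# With a single index violating its goal (1.2 > 1.0), the normalised
# term (29) evaluates to exactly 1 at p^k itself, while the parameter
# bound terms are strongly negative for an interior point.
cost = dyn_minimax_cost([1.2], [1.2], [1.0], [0.5], [0.0], [1.0])
```

This illustrates why unsatisfied indices are "equally active": each is normalised so that its value at pk maps to 1 and its goal maps to 0.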
3.4 Comments
With the moving boundaries process, the search seeks to improve, at each iteration, all the indices with unsatisfied bounds while keeping the other bounds satisfied. With Ng's dynamic minimax approach, there is no guarantee that a functional with an unsatisfied bound will not get worse after an iteration, nor that a satisfied functional will not increase beyond its ε limit. This is not necessarily a bad thing, because it can allow the algorithm to escape from some local minima. Note that in step (v) of the algorithm, in the test for the acceptance of p(e), a comparison is made with the value ψl rather than with the value ψ(p(r)). This means that p(e) could be accepted even though p(r) is a better point than p(e). This feature is present in references [NM65a, Mat92, PFTV92]; however, in reference [BSS93], ψ(p(e)) is compared with ψ(p(r)). Both acceptance tests were tried during the evaluations, but neither was consistently more efficient.
4 Multi-Objective Genetic Algorithm
Genetic algorithms (GAs) are search procedures based on the evolutionary process in nature. They differ from the previous two approaches in that they use probabilistic rather than deterministic criteria for progressing the search. The idea is that the GA operates on a population of individuals, each individual representing a potential solution to the problem, and applies the principle of survival of the fittest to the population, so that the individuals evolve towards better solutions to the problem. The individuals are given a chromosomal representation, which corresponds to the genotype of an individual in nature. Three operations can be performed on individuals in the population: selection, cross-over and mutation. These correspond to the selection of individuals in nature for breeding, where the fitter members of a population breed and so pass on their genetic material; cross-over corresponds to the combination of genes by mating, and mutation to genetic mutation in nature. The selection is weighted so that the 'fittest' individuals are more likely to be selected for cross-over, the fitness being derived from the objective functions being minimised. By means of these operations, the population evolves towards a solution. Most GAs have been used for single-objective problems, although several multi-objective schemes have been proposed (e.g. [Sch85, WLK92]). In particular, Liu and Patton [LP93] have used a GA to solve the MOI by converting the problem to a minimax optimisation problem. Fonseca and Fleming
[FF93c, FF93b, FF94] have used an approach called the multi-objective genetic algorithm (MOGA), which is an extension of an idea by Goldberg [Gol89]. This formulation maintains the genuine multi-objective nature of the problem, and is essentially the scheme used here. The idea behind the MOGA is to develop a population of Pareto-optimal or near Pareto-optimal solutions. To restrict the size of the near Pareto-optimal set and to give a more practical setting to the MOGA, Fonseca and Fleming have formulated the problem in a similar way to the MOI. The aim is to find a set of solutions which are non-dominated and which satisfy a set of inequalities. An individual j with a set of objective functions φj = (φj1, ..., φjn) is said to be non-dominated if, for a population of N individuals, there are no other individuals k = 1, ..., N, k ≠ j, such that

    a) φki ≤ φji for all i = 1, ..., n, and
    b) φki < φji for at least one i.    (33)
The MOGA is set into a multi-objective context by means of the fitness function; the mechanism is described later. The MOGA problem for the MOI could be stated as:

Problem: Find a set of M admissible points pj, j = 1, ..., M, such that φji ≤ εi (j = 1, ..., M; i = 1, ..., n) and such that the φj (j = 1, ..., M) are non-dominated.
4.1 Algorithm
The algorithm used in this study is:

i) Create a chromosome population of N individuals.
ii) Decode chromosomes to obtain phenotypes pj ∈ P (j = 1, ..., N).
iii) Calculate index vectors φj (j = 1, ..., N).
iv) Rank individuals and calculate fitness functions fj (j = 1, ..., N).
v) Make a selection of N individuals based on fitness.
vi) Perform cross-over on selected individuals.
vii) Perform mutation on some individuals.
viii) With the new chromosome population, return to (ii).

The algorithm is terminated when M admissible points have been found. The MATLAB Genetic Algorithms Toolbox [CFPF94] has been used to implement the GA. Many different variations of the GA have been suggested, with different schemes for chromosome representation, ranking and fitness calculation, selection, cross-over and mutation. The rules for deciding which schemes to use are generally heuristic, as are the criteria for selecting the population size N and the various probability constants for the various schemes. The schemes chosen for this study are generally the simplest. The chromosome population size, N, was set to 100 for this study.
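The generational loop (i)-(viii) can be sketched on a toy problem. Here a simple bit-counting fitness stands in for the multi-objective ranking of Section 4.3, and all constants (population size, string length, generation count) are illustrative, not the report's:

```python
import random

random.seed(1)
N, LEN, GENS = 20, 16, 60      # population size, string length, generations

def fitness(chrom):            # toy fitness: count of 1-bits
    return chrom.count('1')

# i) create the initial chromosome population
pop = [''.join(random.choice('01') for _ in range(LEN)) for _ in range(N)]
for _ in range(GENS):
    # iv)-v) fitness-weighted (roulette wheel) selection
    weights = [fitness(c) + 1e-9 for c in pop]
    parents = random.choices(pop, weights=weights, k=N)
    # vi) single-point cross-over on adjacent pairs, probability 0.7
    children = []
    for a, b in zip(parents[::2], parents[1::2]):
        if random.random() < 0.7:
            x = random.randint(1, LEN - 1)
            a, b = a[:x] + b[x:], b[:x] + a[x:]
        children += [a, b]
    # vii) bit-flip mutation with probability 0.01 per bit
    pop = [''.join(bit if random.random() >= 0.01
                   else ('1' if bit == '0' else '0')
                   for bit in c) for c in children]

best = max(pop, key=fitness)
```

The probabilities 0.7 and 0.01 are the cross-over and mutation rates used later in this study (Sections 4.5 and 4.6).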
4.2 Chromosome Representation
The real parameter space P ⊂ Rq is discretised into a mesh of discrete points, each point assigned a code based on a binary representation of its real value. The binary codes are then concatenated into a single binary string representing the chromosome. An alternative scheme could use a Gray code. For this study, a 12-bit binary representation for each real-valued parameter has been used; however, it is also possible to perform the genetic operations on a real-valued chromosome representation, e.g. [LP93, CFPF94].
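The 12-bit coding can be sketched as follows; the helper names and the parameter values and ranges in the usage line are illustrative assumptions:

```python
BITS = 12   # bits per parameter, as used in this study

def encode(value, lo, hi):
    """Map a real value in [lo, hi] to a 12-bit binary string."""
    level = round((value - lo) / (hi - lo) * (2 ** BITS - 1))
    return format(level, '0{}b'.format(BITS))

def decode(bits, lo, hi):
    """Recover the (quantised) real value from a 12-bit string."""
    return lo + int(bits, 2) / (2 ** BITS - 1) * (hi - lo)

# A chromosome is the concatenation of the coded parameters, e.g. for
# an invented two-parameter point (12.34, 0.5) on invented ranges:
chromosome = encode(12.34, 0.0, 20.0) + encode(0.5, 0.01, 10.0)
```

The quantisation step for a range [lo, hi] is (hi − lo)/4095, which bounds the round-trip error of the coding.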
4.3 Ranking and Fitness
Fonseca and Fleming [FF93b] have proposed a fitness scheme for the MOGA to solve the MOI which maintains the genuine multi-objective nature of the problem. The scheme needs the concept of one individual being preferable to another.

Definition (Preferable): Consider two individuals a and b with objective function sets φa and φb respectively, and with a set of design goals ε = (ε1, ..., εn). Three possible cases can occur with respect to individual a:

i) Individual a satisfies none of the inequalities. That is, if φai > εi for all i = 1, ..., n, then individual a is preferable to individual b if and only if

    φai ≤ φbi for all i = 1, ..., n    (34)

and there exists at least one φai such that φai < φbi.

ii) Individual a satisfies some of the inequalities. That is, if there exists at least one φai such that φai ≤ εi and at least one φai such that φai > εi, then individual a is preferable to individual b if

    φai ≤ φbi for all i such that φai > εi    (35)

and there exists at least one φai > εi such that φai < φbi; or if

    φai = φbi for all i such that φai > εi,    (36)

then individual a is preferable to individual b if

    φai ≤ φbi for all i such that φai ≤ εi    (37)

and there exists at least one φai ≤ εi such that φai < φbi, or if there exists a φbi > εi for some i such that φai ≤ εi.    (38)

iii) Individual a satisfies all the inequalities. That is, if φai ≤ εi for all i = 1, ..., n, then individual a is preferable to individual b if

    φai ≤ φbi for all i = 1, ..., n    (39)

and there exists at least one φai such that φai < φbi; or if there exists a φbi > εi.
All the individuals are assigned a rank according to how many individuals are preferable to them. Thus, if the number of individuals in a generation that are preferable to individual j is kj, the rank of individual j is rj = 1 + kj. To calculate the fitness, the population is sorted according to rank. A fitness value fj (j = 1, ..., N), 0 ≤ fj ≤ 2, is assigned to each individual by linear interpolation from the best individual (fbest = 2) to the worst individual (fworst = 0). The fitness of all individuals with the same rank is then averaged, so that they are sampled at the same rate. Thus, if ngt is the number of individuals with a rank r > rj, and neq is the number of individuals with a rank r = rj, then

    fj = (2 ngt + neq − 1) / (N − 1).    (40)
Figure 3 shows an example for 10 individuals, where φ = (φ1, φ2). The goals ε1, ε2 are also shown. The preference relationship between each pair of individuals, and their resulting rank and fitness, is shown in Table 1.
[Figure 3: Example of multi-objective ranking. Ten individuals a-j are plotted in the (φ1, φ2) plane, with the goals ε1 and ε2 marked.]
                     a      b      c      d      e      f      g      h      i      j
a is preferable to   −      ×      ×      ×      ×      ×      ×      ×      ×      ×
b is preferable to   √      −      ×      ×      ×      ×      ×      ×      ×      ×
c is preferable to   √      ×      −      ×      ×      ×      ×      ×      ×      ×
d is preferable to   ×      ×      ×      −      ×      ×      ×      ×      ×      ×
e is preferable to   ×      ×      ×      √      −      ×      ×      ×      ×      √
f is preferable to   √      √      √      √      √      −      ×      ×      √      √
g is preferable to   √      √      √      √      √      √      −      ×      √      √
h is preferable to   √      √      √      √      √      ×      ×      −      √      √
i is preferable to   ×      ×      ×      √      √      ×      ×      ×      −      √
j is preferable to   ×      ×      ×      ×      ×      ×      ×      ×      ×      −
rank                 6      4      4      6      5      2      1      1      4      6
fitness f            0.222  1.111  1.111  0.222  0.666  1.555  1.888  1.888  1.111  0.222

Table 1: Preference relationship for ranking example
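The rank-based fitness rule (40) can be checked against Table 1. The function name below is an invented helper; the ranks are those of individuals a-j from the table (note that the table's fitness values are truncated to three figures, e.g. 17/9 = 1.888...):

```python
def moga_fitness(ranks):
    """Fitness per eqn (40): f_j = (2*n_gt + n_eq - 1) / (N - 1)."""
    N = len(ranks)
    out = []
    for r in ranks:
        n_gt = sum(1 for s in ranks if s > r)   # individuals ranked worse
        n_eq = sum(1 for s in ranks if s == r)  # individuals sharing the rank
        out.append((2 * n_gt + n_eq - 1) / (N - 1))
    return out

# Ranks of individuals a-j, as in Table 1.
fits = moga_fitness([6, 4, 4, 6, 5, 2, 1, 1, 4, 6])
```

With these ten ranks the rule reproduces the fitness row of Table 1: the rank-1 individuals g and h receive 17/9, the rank-4 group 10/9, and the worst (rank-6) group 2/9.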
Clearly, this scheme could be adapted, if desired, to provide some ordering of the importance of the indices and the inequalities, e.g. if some of the inequalities could be described as 'hard' constraints and others as 'soft'. An alternative, less computationally demanding, ranking scheme has been proposed in [LSII94].
4.4 Selection
Many different selection schemes have been proposed. The scheme used here is known as roulette wheel selection (RWS). Each individual is assigned a slot on the wheel in proportion to its fitness value. The wheel is then 'spun' and the winning individual selected. Thus the probability ps of an individual i being selected is

    ps = fi / Σ(j=1..N) fj,    (41)

where fj represents the fitness of individual j.
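Roulette wheel selection per (41) can be sketched as follows; the function name and the fitness values in the usage example are invented:

```python
import random

def roulette_select(fitnesses, rng):
    """Return the index of one individual chosen per eqn (41)."""
    total = sum(fitnesses)
    spin = rng.uniform(0.0, total)
    acc = 0.0
    for i, f in enumerate(fitnesses):
        acc += f
        if spin <= acc:
            return i
    return len(fitnesses) - 1   # guard against rounding at the wheel's edge

# With invented fitnesses 3 and 1, individual 0 should win about 75%
# of the spins, in line with (41).
rng = random.Random(0)
wins = sum(roulette_select([3.0, 1.0], rng) == 0 for _ in range(10000))
```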
4.5
Cross-over
The scheme used here is a simple single-point cross-over. Adjacent pairs of chromosomes are selected with a certain probability for mating (the population is randomly shuffled in the selection procedure). The cross-over point for each pair is then randomly selected, and the rightmost bits of the two individuals are exchanged to produce two offspring. For example, suppose the two chromosomes (a1, a2, a3, a4, a5) and (b1, b2, b3, b4, b5) are the 'parents' which have been randomly chosen. A cross-over point is chosen at random from the numbers 1, 2, 3 and 4. If the cross-over point is 3, then the 'offspring' will be (a1, a2, a3, b4, b5) and (b1, b2, b3, a4, a5), as illustrated in Figure 4. For this study, the cross-over probability was set to 0.7.
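The single-point cross-over just described can be sketched so as to reproduce the worked example with cross-over point 3 (the function name is an invented helper):

```python
def crossover(a, b, point):
    """Exchange the rightmost genes of two parents at the given point."""
    return a[:point] + b[point:], b[:point] + a[point:]

# Reproduce the worked example: parents (a1..a5) and (b1..b5), point 3.
child1, child2 = crossover(['a1', 'a2', 'a3', 'a4', 'a5'],
                           ['b1', 'b2', 'b3', 'b4', 'b5'], 3)
```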
[Figure 4: Chromosome cross-over. The parents (a1 a2 a3 a4 a5) and (b1 b2 b3 b4 b5) exchange bits after the cross-over point to produce (a1 a2 a3 b4 b5) and (b1 b2 b3 a4 a5).]
4.6 Mutation
The mutation operation provides the opportunity to reach parts of the search space which perhaps cannot be reached by cross-over alone. The mutation operation consists of simply flipping each bit in the chromosomes of each individual with a probability pm . For example, an individual with a chromosome (01110) would become (00111) if the 2nd and the 5th bits were chosen for mutation. For this study, pm = 0.01.
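The bit-flip mutation can be sketched so as to reproduce the worked example; here the flipped positions are fixed rather than drawn with probability pm, and the function name is an invented helper:

```python
def mutate(bits, flip_positions):
    """Flip the bits at the given 1-based positions."""
    return ''.join(('1' if b == '0' else '0') if (i + 1) in flip_positions
                   else b
                   for i, b in enumerate(bits))

# Reproduce the worked example: flip the 2nd and 5th bits of 01110.
mutant = mutate('01110', {2, 5})
```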
5 The Algorithm Performance Tests
The evaluation of each algorithm's performance is undertaken on three test problems. The first two tests are from the original paper on the MOI [ZAN73], and the searches are over the parameters of fixed-structure controllers. The final test is based on the benchmark problem for the 1991 CDC [Lim91] and on the 'mixed optimisation approach' [WPG94], where the search is over the parameters of the weighting functions required for H∞ optimisation. Ten admissible points were found for each problem using each method. These are shown in the tables in Appendix B, along with the starting points and the values of the objective functions φ. The performance is measured by the number of objective function evaluations required to find an admissible point. This measure is used because, for multi-objective control system design problems, the evaluation of the objective functions is the most computationally demanding part of the process. The starting points for the searches were generated at random.
5.1 Problem 1
This problem is a simple SISO system which has been investigated by several designers [DH66, BJ65, ZAN73]. The plant G(s) is

    G(s) = 10 / (s(s + 1)(s + 5)),    (42)

and the controller K(s, p) has the form

    K(s, p) = p1 (1 + p2 s) / (1 + p2 p3 s).    (43)
The aim is to obtain a good step response for the system. The design by Zakian and Al-Naib [ZAN73] had a very large control effort, so the additional proviso is included that the control effort must not be too large. The problem is thus defined as: find values for p such that

    φi(p) ≤ εi,  i = 1, ..., 4,    (44)
where φ1 is the abscissa of stability, φ2 is the rise-time index, φ3 is the overshoot and φ4 is the control effort index. The indices are defined in Appendix A.1. The design goals ε are

    ε = (−0.001, 1.5, 0.2, 10).    (45)

The design parameters are constrained such that 0.01 ≤ p1 ≤ 50, 0 ≤ p2 ≤ 20 and 0.01 ≤ p3 ≤ 10. The results are shown in Tables 5-7. The starting points were generated at random with an even distribution over the possible parameter range. The points which converged to a non-admissible local minimum were discarded; these represented a very large proportion of the attempts. The GA search results are shown in Table 7. The GA results were also very dependent upon the starting population. If the starting population was randomly chosen with an even distribution over the possible parameter space then, despite several attempts, the GA did not converge to an admissible solution. Thus, the parameter p3 was distributed as a log function over its possible range, which weighted the starting population towards low values of p3. The average number of objective function evaluations required to locate each admissible point for the successful attempts for each search method is shown in Table 2, along with the total number of failed search attempts. The MBP required the fewest objective function evaluations, but had far more failed search attempts than the NMDM method. The MOGA solutions tended to be close to one another.
Method | Average number of function evaluations | Number of failed attempts
MBP    | 77.5  | 589
NMDM   | 109.4 | 197
MOGA   | 130   | –

Table 2: Test 1 result summary
5.2 Problem 2
This problem is a 2-input 2-output system [Mun72, ZAN73]. The plant G(s) is given in state-space form by G(s) = C(sI − A)^{−1}B, where

A = [1.38 −0.2077 6.715 −5.676; −0.5814 −4.29 0 0.675; 1.067 4.273 −6.654 5.893; 0.048 4.273 1.343 −2.104],
B = [0 0; 5.679 0; 1.136 −3.146; 1.136 0],
C = [1 0 p1 −p1; 0 1 0 0],    (46)

and note that the measurement matrix C contains the design parameter p1. The controller K(s, p) has the form

K(s, p) = (1/s) [p2(1 + p3 s) 0; 0 p4(1 + p5 s)].    (47)
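For reference, the state-space data in (46) can be entered directly; the following sketch (with hypothetical helper names, using NumPy) also confirms that the plant is open-loop unstable, as noted in [Mun72]:

```python
import numpy as np

# Transcription of the state-space data in (46); the measurement matrix C
# depends on the design parameter p1, and G(s) = C(p1) (sI - A)^{-1} B.
A = np.array([[ 1.38,   -0.2077,  6.715, -5.676],
              [-0.5814, -4.29,    0.0,    0.675],
              [ 1.067,   4.273,  -6.654,  5.893],
              [ 0.048,   4.273,   1.343, -2.104]])
B = np.array([[0.0,    0.0],
              [5.679,  0.0],
              [1.136, -3.146],
              [1.136,  0.0]])

def C(p1):
    return np.array([[1.0, 0.0, p1, -p1],
                     [0.0, 1.0, 0.0, 0.0]])

# The plant is open-loop unstable: A has an eigenvalue in the right half-plane.
assert np.linalg.eigvals(A).real.max() > 0
```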
The aim is to obtain a good step response for the system with little cross-coupling. The design by Zakian and Al-Naib had a very large control effort, so the additional proviso is included that the control effort must not be too large. The indices φi, i = 1, . . . , 9 are defined in Appendix A.2. The design goals ε are

ε = (−0.001, 0.2, 0.1, 0.2, 0.1, 0.1, 0.1, 10, 10).    (48)

The design parameters are constrained such that 0.1 ≤ p1 ≤ 10, 1 ≤ p2 ≤ 100, 0.1 ≤ p3 ≤ 10, −20 ≤ p4 ≤ 20 and 0.1 ≤ p5 ≤ 10.
The results are shown in Tables 8 – 10 in Appendix B.2. The starting points, including those for the MOGA search, were generated at random with an even distribution over the possible parameter range. The average number of objective function evaluations for each search method is shown in Table 3. Again, the MBP required the fewest objective function evaluations, but this time it had far fewer failed search attempts than the NMDM method. Again, the MOGA solutions tended to be close to one another, and the number of objective function evaluations was considerably higher than for the hill-climbing methods.

Method | Average number of function evaluations | Number of failed attempts
MBP    | 97.9  | 25
NMDM   | 193.0 | 306
MOGA   | 350   | –

Table 3: Test 2 result summary
5.3 Problem 3
The proposed design method is used to design a control system for the high purity distillation column described in [Lim91]. The column is considered in just one of its configurations, the LV configuration, for which the following model is relevant:

GD = (1/(75s + 1)) [0.878 −0.864; 1.082 −1.096] [k1 e^{−τ1 s} 0; 0 k2 e^{−τ2 s}],    (49)

where 0.8 ≤ k1, k2 ≤ 1.2 and 0 ≤ τ1, τ2 ≤ 1, and all time units are in minutes. The time delay and actuator gain values used in the nominal model G are k1 = k2 = 1.0 and τ1 = τ2 = 0.5. The time delay element is approximated by a first-order Padé approximation. The original design specifications are very tight, and so have been relaxed for the purposes of this study. The specifications are to design a controller which guarantees closed-loop stability for all 0.8 ≤ k1, k2 ≤ 1.2 and 0 ≤ τ1, τ2 ≤ 1, and, for the nominal plant:

i) The output response to a step demand h(t)[1 0]′ satisfies y1(t) ≤ 1.1 for all t, y1(t) ≥ 0.9 for all t > 40 and y2(t) ≤ 0.5 for all t.

ii) The output response to a step demand h(t)[0 1]′ satisfies y1(t) ≤ 0.5 for all t, y2(t) ≤ 1.1 for all t and y2(t) ≥ 0.9 for all t > 40.

iii) Zero steady state error.

Performance functionals based on these specifications are defined in Appendix A.3. For this design, we use the mixed optimisation approach [WPG94, WMGP95, PWMG94], where the MOI is used to design the weighting functions required by McFarlane and Glover's loop-shaping design procedure (LSDP) [MG90, MG92].
[Figure: The LSDP design configuration. The reference is prefiltered by K(0)W2(0) and enters the loop at the input of the pre-compensator W1; the control signal u drives the plant G, and the output y is fed back through the post-compensator W2 and the controller K.]
The LSDP maximises robust stability to perturbations on the normalised coprime factors of a plant weighted by pre- and post-compensators W1 and W2, as shown in Figure 5.3. An explicit controller K for the optimal γ,

γ0 = inf over stabilising K of ‖ [W1 K; W2] (I − GK)^{−1} [W2^{−1}  GW1] ‖∞,

can be synthesised; the weights are then simply incorporated into the final controller Kf so that Kf = W1 K W2. The problem can be formulated as the MOI as follows:

Design Problem: Find W1, W2 and hence Kf such that

γ0(W1, W2) ≤ εγ,

and

φi(W1, W2) ≤ εi  for  i = 1 . . . n.
For the distillation problem, the pre-compensator W1(s) has the structure

W1 = (100/s) [p1 s + p2  0; 0  p3 s + p4],    (50)

and the post-compensator is fixed as W2 = I2. The goal for γ is set to εγ = 4, and the design goals ε are set to

ε = (0, 1.1, −0.9, 0.5, 0.5, 1.1, −0.9).    (51)
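As an illustration, the diagonal weight (50) can be built channel-by-channel with SciPy; W1_channels is a hypothetical helper name, and the off-diagonal entries are zero:

```python
from scipy import signal

def W1_channels(p):
    """Sketch of the diagonal pre-compensator (50):
    W1(s) = (100/s) * diag(p1*s + p2, p3*s + p4).
    Returns the two diagonal SISO terms; the off-diagonal terms are zero."""
    p1, p2, p3, p4 = p
    w11 = signal.TransferFunction([100.0 * p1, 100.0 * p2], [1.0, 0.0])
    w22 = signal.TransferFunction([100.0 * p3, 100.0 * p4], [1.0, 0.0])
    return w11, w22
```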
The design parameters are constrained such that 0.1 ≤ pi ≤ 10, i = 1, 2, 3, 4.
The results are shown in Tables 11 – 13 in Appendix B.3. For this test, all the starting points were generated at random with a log function distribution over the possible parameter range. The average number of objective function evaluations for each search method is shown in Table 4. Again, the MBP required the fewest objective function evaluations, although this time it had more failed search attempts than the NMDM method. The number of objective function evaluations for the MOGA was not much more than for the hill-climbing methods.

Method | Average number of function evaluations | Number of failed attempts
MBP    | 81.9 | 27
NMDM   | 87.0 | 11
MOGA   | 110  | –

Table 4: Test 3 result summary
6 Discussion and Conclusions
As with most comparative tests, it is very difficult to make a totally fair comparison of the performance of the algorithms. This is particularly so here, as the algorithms are all intended to be used interactively, with the designer making decisions which affect the course of the computations. Additionally, the MOGA operates in a discretised parameter space, whereas the MBP and NMDM search in a real parameter space.

As can be seen from the results, the success and efficiency of the algorithms are highly dependent upon the starting points; however, the algorithms do not necessarily all converge to an admissible solution from the same starting point.

The MOGA solutions tended to be close to one another in the parameter space. Fonseca and Fleming [FF93b] stated that this was likely, a phenomenon known as "genetic drift", and suggested a "niche formation" method to prevent it. Niche formation was not included in this study, but the results indicate that it is important to do so. The GA used here operated on a discretised parameter space; there are now quite a number of studies which use real-valued search spaces [Gol91, Wri91, MRM92, MSV93, LP93], and the latest version of the Genetic Algorithms Toolbox [CFPF94] includes facilities for real-valued search spaces. Genetic algorithms are implicitly parallel [Gol89], and as such are very amenable to implementation on parallel computers (e.g. [Ree93, MSB91]).

An interesting aspect of the MOGA developed by Fonseca and Fleming is that the motivation for the multi-objective formulation of the problem came from the "goal attainment method" [Gem74, GH75, FP86], which essentially converts the MOI formulation into a single objective optimisation problem

min over λ ∈ R, p ∈ P of λ,  subject to φi(p) − wi λ ≤ εi,    (52)

where wi ≥ 0 are weights specified by the designer. However, the MOI formulation is implicitly parallel, in that a simultaneous solution to a set of inequalities is sought, and the formulation of the MOGA returns to the implicit parallelism of the MOI formulation.

The parallel nature of the MOGA also aids in the exploration of feasible solutions. In the original implementation of the MOI, if an admissible point is easily found, the designer reduces some of the inequalities, reducing the size of the admissible set, and so explores the possible solutions. The MOGA, on the other hand, automatically explores a set of admissible, preferably Pareto-optimal, points, and thus provides the designer with more information about the possible design trade-offs. This would seem to be a conceptual advance on the original MOI which exploits the potential computing power of modern parallel processors.

Another interesting aspect of the MOGA is that the fitness of the individuals depends only on their relationship to the other individuals and to the goal, and not on the actual values of the objective functions. Whilst this makes the scheme more generic and less problem specific, more efficient solutions might be obtained by making use of the information in the values of the objective functions.

Whilst no algorithm in this study can be seen to be better than the others, the tests do show that all the algorithms are useful for interactive multi-objective design. The algorithms are currently being incorporated into a MATLAB Toolbox called MODCONS (Multi-Objective Design of Control Systems) [WGP94].

Genetic algorithms are a heuristic search approach, and other heuristic search approaches could be considered for solving the MOI. In particular, a simulated annealing based algorithm is currently being developed by the authors [WGP96]. Another option would be to combine GA search with hill-climbing [MSB91, FF93a]. The MBP and the NMDM are just two algorithms that have been developed for feasibility design problems. Other algorithms have been developed by Polak, Mayne and co-workers (see for example [BHM79]) for the solution of functional inequalities. Additionally, Boyd and Barratt [BB91] have developed algorithms for the feasibility problem for a restricted set of objective functions which can be cast as convex optimisation problems.
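For illustration, the goal attainment problem (52) can be posed directly to an off-the-shelf constrained optimiser. The sketch below uses two hypothetical scalar objectives (not from this report); an admissible point in the MOI sense exists exactly when the attained λ is non-positive, and here the two goals conflict, so λ comes out positive.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-objective illustration of the goal attainment problem (52):
#   minimise lambda over (p, lambda)  subject to  phi_i(p) - w_i*lambda <= eps_i.
phis = [lambda x: (x[0] - 1.0) ** 2,    # phi_1(p)
        lambda x: (x[0] + 1.0) ** 2]    # phi_2(p)
eps = [0.5, 0.5]                        # design goals
w = [1.0, 1.0]                          # designer-chosen weights

# x = [p, lambda]; each constraint is eps_i + w_i*lambda - phi_i(p) >= 0
cons = [{"type": "ineq",
         "fun": lambda x, i=i: eps[i] + w[i] * x[1] - phis[i](x)}
        for i in range(2)]
res = minimize(lambda x: x[1], x0=[0.0, 10.0], constraints=cons)
p_opt, lam = res.x
# lam <= 0 would indicate an admissible point for the MOI; these two goals
# conflict, so the attained lambda is positive (0.5 at p = 0).
```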
A The Objective Functions

A.1 Problem 1
The abscissa of stability φ1 is defined as

φ1 = max {Re(λ) : λ ∈ Λ},    (53)

where Λ denotes the set of all the finite poles λ of the closed-loop transfer functions; φ1 must be negative for stability. The rise-time index φ2 is defined as the least value of t such that

y(t, h) = 0.9 y(∞, h),    (54)

where y(t, h) is the step response of the plant output. The overshoot φ3 is defined as

φ3 = (ŷ(h) − |y(∞, h)|) / |y(∞, h)|   if ŷ(h) > |y(∞, h)|,    (55)

where

ŷ(h) = sup_{t≥0} |y(t, h)|.    (56)

The control effort index φ4 is defined as

φ4 = sup_{t≥0} |u(t, h)|,    (57)

where u(t, h) is the step response of the plant input.
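For instance, the abscissa of stability (53) is straightforward to evaluate from a closed-loop characteristic polynomial; the polynomial below is an illustrative example, not one of the test systems:

```python
import numpy as np

# Hypothetical illustration of the abscissa of stability (53): the largest
# real part over the closed-loop poles, here taken as the roots of an
# assumed closed-loop characteristic polynomial d(s).
d = [1.0, 6.0, 5.0, 10.0]          # d(s) = s^3 + 6 s^2 + 5 s + 10
phi1 = np.roots(d).real.max()      # abscissa of stability
assert phi1 < 0                    # stable: every pole in the left half-plane
```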
A.2 Problem 2
The abscissa of stability φ1 is defined as

φ1 = max {Re(λ) : λ ∈ Λ},    (58)

where Λ denotes the set of all the finite poles λ of the closed-loop transfer functions. Denoting the plant output response of the closed-loop system at a time t to a reference step demand h(t)[h1 h2]′ by yi(t, [h1 h2]′), i = 1, 2, the step response functional indices φ2, . . . , φ7 are defined as

φ2 = min {t such that y1(t, [1 0]′) = 0.9},    (59)
φ3 = sup_t y1(t, [1 0]′) − 1,    (60)
φ4 = min {t such that y2(t, [0 1]′) = 0.9},    (61)
φ5 = sup_t y2(t, [0 1]′) − 1,    (62)
φ6 = sup_t y1(t, [0 1]′),    (63)
φ7 = sup_t y2(t, [1 0]′).    (64)

Similarly denoting the controller output response of the closed-loop system by ui(t, [h1 h2]′), i = 1, 2, the control effort indices φ8 and φ9 are defined as

φ8 = sup_t |u1(t, [1 0]′)|,    (65)
φ9 = sup_t |u2(t, [0 1]′)|.    (66)
A.3 Problem 3
For the robustness requirement, the plant GD is related to the nominal plant G by [HHL91]

GD = G(I + Δ),    (68)

where

Δ = [k1 e^{(1/2−τ1)s} − 1  0; 0  k2 e^{(1/2−τ2)s} − 1],    (69)

which gives

σ̄(Δ(jω)) = max { |k1 e^{(1/2−τ1)jω} − 1|, |k2 e^{(1/2−τ2)jω} − 1| }    (70)
          ≤ |1.2 e^{−jω/2} − 1|    (71)
          = √(2.44 − 2.4 cos(ω/2)).    (72)

Thus, from the small gain theorem, for the system to remain stable,

σ̄((I − KG)^{−1}KG)(jω) − (2.44 − 2.4 cos(ω/2))^{−1/2} < 0  ∀ ω.    (73)

Thus, the robust stability index φ1 is defined as

φ1 = sup_ω { σ̄((I − KG)^{−1}KG)(jω) − (2.44 − 2.4 cos(ω/2))^{−1/2} }.    (74)
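The closed-form bound in (72) is easy to check numerically; the sketch below verifies that |1.2 e^{−jω/2} − 1| equals √(2.44 − 2.4 cos(ω/2)) on a frequency grid:

```python
import numpy as np

# Numerical check of (70)-(72): at the extreme gain k = 1.2 and delay
# mismatch 1/2, the uncertainty magnitude |1.2 exp(-j*w/2) - 1| has the
# closed form sqrt(2.44 - 2.4 cos(w/2)).
w = np.linspace(0.0, 50.0, 2001)
bound = np.abs(1.2 * np.exp(-1j * w / 2.0) - 1.0)
closed_form = np.sqrt(2.44 - 2.4 * np.cos(w / 2.0))
assert np.allclose(bound, closed_form)
```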
Performance functionals φ2 to φ7 are measures of the step response specifications. Denoting the output response of the closed-loop system with the nominal plant G at a time t to a reference step demand h(t)[h1 h2]′ by yi(t, [h1 h2]′), i = 1, 2, the step response functionals are taken as

φ2 = sup_t y1(t, [1 0]′),    (75)
φ3 = − inf_{t>40} y1(t, [1 0]′),    (76)
φ4 = sup_t y2(t, [1 0]′),    (77)
φ5 = sup_t y1(t, [0 1]′),    (78)
φ6 = sup_t y2(t, [0 1]′),    (79)
φ7 = − inf_{t>40} y2(t, [0 1]′).    (80)

The steady state specifications are satisfied automatically by the use of integral action.
B Test Details

B.1 Problem 1

initial p [p1, p2, p3] | p [p1, p2, p3] | φ [φ1, φ2, φ3, φ4] | fn evals
[12.92, 4.194, 0.089] | [0.7401, 0.9047, 0.2013] | [−1.227, 1.253, 0.08784, 3.676] | 53
[1.536, 1.337, 6.536] | [0.9919, 1.159, 0.1206] | [−0.8139, 0.8508, 0.1151, 8.222] | 61
[4.35, 0.1501, 1.206] | [1.155, 0.9389, 0.1215] | [−1.099, 0.8329, 0.1405, 9.507] | 27
[1.114, 1.639, 2.537] | [0.943, 1.12, 0.1722] | [−0.8516, 0.9344, 0.1445, 5.477] | 37
[48.34, 4.215, 8.294] | [0.1176, 18.74, 0.08446] | [−0.04448, 1.396, 0.1548, 1.393] | 169
[0.2011, 5.172, 2.987] | [0.4295, 2.727, 0.2277] | [−0.2982, 1.326, 0.09587, 1.886] | 39
[20.85, 0.2714, 0.8505] | [0.9393, 1.599, 0.1125] | [−0.56, 0.7604, 0.1611, 8.351] | 62
[29.74, 1.548, 1.227] | [0.1378, 19.7, 0.1226] | [−0.0439, 1.5, 0.1983, 1.125] | 184
[0.3648, 5.642, 0.06973] | [0.3465, 5.642, 0.06973] | [−0.1478, 0.8408, 0.1837, 4.97] | 4
[30.55, 4.954, 4.632] | [0.1009, 16.87, 0.06755] | [−0.04695, 1.445, 0.06368, 1.493] | 139

Table 5: Test 1 results - MBP search
initial p [p1, p2, p3] | p [p1, p2, p3] | φ [φ1, φ2, φ3, φ4] | fn evals
[11.6, 3.123, 7.388] | [0.3261, 5.697, 0.03697] | [−0.1442, 0.7407, 0.09637, 8.82] | 97
[9.877, 5.007, 6.695] | [0.248, 9.019, 0.2107] | [−0.09539, 1.476, 0.1961, 1.177] | 120
[0.7447, 0.3536, 1.87] | [0.8663, 0.5384, 0.1964] | [−0.8248, 1.29, 0.1717, 4.411] | 22
[11.42, 4.567, 7.317] | [0.0869, 15.64, 0.01433] | [−0.04759, 1.098, −0.005779, 6.065] | 160
[5.073, 1.847, 7.26] | [0.4232, 3.069, 0.09225] | [−0.2607, 1.006, 0.03524, 4.588] | 53
[6.637, 2.9, 8.756] | [0.2472, 5.906, 0.05706] | [−0.1327, 1.005, 0.02913, 4.333] | 90
[2.23, 4.662, 8.69] | [0.1638, 12.41, 0.01817] | [−0.06586, 0.7186, 0.1074, 9.015] | 145
[15.89, 2.903, 9.079] | [0.3346, 6.792, 0.03385] | [−0.1245, 0.6597, 0.1924, 9.884] | 206
[1.262, 19.14, 2.493] | [0.1021, 19.98, 0.01937] | [−0.0407, 0.8393, 0.1405, 5.275] | 172
[0.3256, 2.132, 5.601] | [0.3723, 2.857, 0.1469] | [−0.272, 1.3, 0.007011, 2.535] | 29

Table 6: Test 1 results - NMDM search

p [p1, p2, p3] | φ [φ1, φ2, φ3, φ4]
[0.303, 5.675, 0.07332] | [−0.1436, 0.9348, 0.1256, 4.133]
[0.303, 5.988, 0.1489] | [−0.1389, 1.186, 0.1755, 2.035]
[0.303, 5.363, 0.1129] | [−0.1516, 1.099, 0.1268, 2.683]
[0.303, 5.675, 0.1258] | [−0.1448, 1.125, 0.1537, 2.409]
[0.303, 5.988, 0.05858] | [−0.1369, 0.8577, 0.133, 5.172]
[0.303, 5.695, 0.1332] | [−0.1446, 1.149, 0.1565, 2.274]
[0.303, 5.675, 0.09022] | [−0.144, 0.9995, 0.139, 3.358]
[0.303, 5.67, 0.1125] | [−0.1446, 1.08, 0.1495, 2.692]
[0.303, 6.066, 0.09022] | [−0.1361, 0.9794, 0.1679, 3.358]
[0.303, 5.675, 0.05858] | [−0.1432, 0.8762, 0.1072, 5.172]

Table 7: Test 1 results - MOGA search
B.2 Problem 2

initial p [p1, p2, p3, p4, p5] | p [p1, p2, p3, p4, p5] | φ [φ1, . . . , φ9] | fn evals
[2.656, 21.76, 0.1783, 16.04, 6.767] | [1.346, 15.04, 0.5978, −19.78, 0.4971] | [−1.36, 0.05621, 0.09054, 0.05362, 0.004621, 0.007692, 0.07722, 9.834, 8.99] | 94
[7.218, 22.24, 3.94, 7.496, 2.553] | [1.454, 11.15, 0.8591, −14.79, 0.6102] | [−1.107, 0.05971, 0.09872, 0.05168, 0.002675, 0.008403, 0.09887, 9.025, 9.579] | 154
[9.022, 25.61, 2.106, 14.27, 5.536] | [1.915, 13.67, 0.1519, −17.51, 0.5572] | [−1.295, 0.04399, 0.07864, 0.1563, 0.0674, 0.01426, 0.0987, 9.757, 3.442] | 98
[4.374, 25.26, 0.8834, 18.95, 5.085] | [1.606, 13.38, 0.2265, −19.95, 0.4983] | [−1.287, 0.04914, 0.08116, 0.1283, 0.02601, 0.01333, 0.08304, 9.94, 3.041] | 85
[7.394, 75.08, 8.911, 15.42, 4.271] | [1.523, 22.9, 0.2753, −19.41, 0.4853] | [−1.252, 0.05382, 0.0881, 0.07614, 0.009963, 0.00733, 0.08397, 9.418, 6.306] | 111
[4.067, 36.73, 1.776, −15.63, 0.5797] | [1.423, 12.85, 0.6216, −17.19, 0.5797] | [−1.312, 0.0545, 0.08885, 0.06166, 0.004605, 0.008743, 0.08575, 9.965, 7.99] | 18
[6.839, 2.072, 9.988, −15.06, 7.922] | [1.779, 1.923, 2.998, −14.67, 0.6783] | [−0.3189, 0.04803, 0.08402, 0.147, 0.0, 0.02064, 0.09862, 9.948, 5.766] | 90
[6.72, 16.92, 3.111, −13.98, 8.388] | [1.301, 6.266, 0.9372, −12.87, 0.7616] | [−1.002, 0.06052, 0.09773, 0.1013, 0.004842, 0.01378, 0.09854, 9.8, 5.873] | 121
[4.937, 69.23, 8.091, 14.9, 5.069] | [1.369, 25.08, 0.1999, −18.52, 0.5029] | [−1.218, 0.05876, 0.09508, 0.08478, 0.02327, 0.007923, 0.08469, 9.315, 5.014] | 151
[4.376, 4.268, 8.22, 15.44, 4.417] | [1.925, 3.052, 3.101, −16.69, 0.5327] | [−0.3119, 0.04989, 0.08571, 0.06069, 0.0, 0.01332, 0.09961, 8.888, 9.463] | 57

Table 8: Test 2 results - MBP search
initial p [p1, p2, p3, p4, p5] | p [p1, p2, p3, p4, p5] | φ [φ1, . . . , φ9] | fn evals
[5.191, 66.19, 8.003, −19.69, 3.34] | [1.113, 29.79, 0.1021, −19.75, 0.5059] | [−1.184, 0.06224, 0.09998, 0.1178, 0.07886, 0.009633, 0.07498, 9.994, 3.042] | 384
[5.379, 17.01, 5.436, −6.36, 0.3117] | [1.631, 1.312, 4.548, −15.12, 0.5579] | [−1.2098, 0.06187, 0.09855, 0.1431, 0.0, 0.02105, 0.09887, 8.434, 5.965] | 140
[1.492, 22.65, 5.999, −16.48, 1.83] | [2.613, 1.053, 6.388, −19.37, 0.5045] | [−1.1434, 0.05203, 0.06692, 0.1089, 0.0, 0.02524, 0.09822, 9.774, 6.725] | 93
[1.219, 62.33, 0.8858, −17.29, 3.22] | [1.977, 59.67, 0.1448, −19.96, 0.496] | [−1.324, 0.04172, 0.07407, 0.04682, 0.03381, 0.003173, 0.08899, 9.898, 8.643] | 209
[5.225, 37.98, 1.494, −7.222, 0.8748] | [1.594, 7.666, 1.175, −18.26, 0.4376] | [−1.9281, 0.06248, 0.09981, 0.06094, 0.001068, 0.0101, 0.09146, 7.993, 9.01] | 129
[5.615, 39.67, 3.333, −12.76, 0.7837] | [1.242, 1.203, 4.904, −11.33, 0.8792] | [−1.1969, 0.06336, 0.09923, 0.1515, 0.0, 0.01978, 0.09903, 9.957, 5.9] | 255
[1.367, 69.29, 1.196, −17.98, 7.305] | [2.238, 65.27, 0.1451, −19.07, 0.5125] | [−1.317, 0.03807, 0.07211, 0.04132, 0.03179, 0.002747, 0.09775, 9.773, 9.472] | 124
[4.919, 5.187, 7.726, −10.49, 6.347] | [1.122, 13.15, 0.1825, −16.95, 0.5891] | [−1.209, 0.06318, 0.09998, 0.1411, 0.04676, 0.01657, 0.0831, 9.987, 2.558] | 224
[2.304, 20.56, 5.115, −8.747, 0.5045] | [1.203, 1.431, 4.82, −13.6, 0.7302] | [−1.2016, 0.06353, 0.09896, 0.1023, 0.0, 0.01678, 0.08922, 9.929, 6.899] | 126
[1.23, 77.06, 8.722, −13.52, 4.128] | [2.663, 2.159, 2.513, −19.45, 0.5034] | [−1.3527, 0.03653, 0.06615, 0.1627, 0.0, 0.02483, 0.09947, 9.793, 5.426] | 246

Table 9: Test 2 results - NMDM search

p [p1, p2, p3, p4, p5] | φ [φ1, . . . , φ9]
[1.294, 1.87, 3.124, −19.59, 0.5086] | [−0.3123, 0.05802, 0.09141, 0.1397, 0.0, 0.0185, 0.07211, 9.963, 5.844]
[1.275, 1.774, 3.12, −19.59, 0.5062] | [−0.3127, 0.0589, 0.09259, 0.1696, 0.0, 0.0194, 0.07167, 9.915, 5.533]
[1.294, 1.87, 3.124, −19.55, 0.5062] | [−0.3123, 0.05841, 0.09201, 0.1397, 0.0, 0.01851, 0.0724, 9.896, 5.844]
[1.294, 1.87, 3.124, −19.59, 0.5062] | [−0.3123, 0.05828, 0.09181, 0.1397, 0.0, 0.0185, 0.07225, 9.915, 5.844]
[1.333, 2.015, 3.124, −19.55, 0.5062] | [−0.3124, 0.05715, 0.09049, 0.1205, 0.0, 0.01738, 0.07342, 9.896, 6.297]
[1.285, 1.967, 3.124, −19.59, 0.5062] | [−0.3128, 0.05859, 0.09222, 0.1253, 0.0, 0.01761, 0.0723, 9.915, 6.146]
[1.292, 2.04, 3.124, −11.67, 0.8543] | [−0.3136, 0.06172, 0.09759, 0.1152, 0.0, 0.01731, 0.09998, 9.968, 6.372]
[1.292, 2.04, 3.124, −19.55, 0.5062] | [−0.313, 0.05848, 0.09214, 0.118, 0.0, 0.01705, 0.07277, 9.896, 6.372]
[1.27, 1.774, 3.743, −19.55, 0.5062] | [−0.2602, 0.06005, 0.09306, 0.1082, 0.0, 0.01682, 0.07237, 9.896, 6.639]
[1.294, 1.774, 3.743, −19.55, 0.4868] | [−0.26, 0.06104, 0.09547, 0.1058, 0.0, 0.01692, 0.07389, 9.518, 6.639]

Table 10: Test 2 results - MOGA search
B.3 Problem 3
initial p [p1, p2, p3, p4] | p [p1, p2, p3, p4] | γ, φ [γ, φ1, . . . , φ7] | fn evals
[0.1949, 2.727, 2.106, 0.7307] | [0.2842, 0.3772, 0.1717, 0.4819] | [2.814, −0.06696, 1.016, −0.9516, 0.478, 0.4698, 1.009, −0.9692] | 125
[0.78, 1.039, 1.138, 1.399] | [0.2494, 0.2772, 0.2501, 0.3668] | [2.589, −0.1868, 1.014, −0.9048, 0.472, 0.4678, 1.009, −0.9395] | 37
[0.5268, 0.4709, 0.2221, 0.1487] | [0.5265, 0.4853, 0.2165, 0.252] | [2.52, −0.002413, 1.013, −0.9087, 0.4731, 0.468, 1.008, −0.9404] | 63
[0.4373, 2.069, 0.1008, 1.811] | [0.2223, 0.5169, 0.1787, 0.4524] | [2.927, −0.01333, 1.016, −0.9697, 0.4766, 0.476, 1.01, −0.9805] | 41
[0.7728, 3.272, 0.2627, 1.59] | [0.3162, 0.6337, 0.1051, 0.2916] | [2.875, −0.003872, 1.016, −0.9313, 0.4812, 0.4793, 1.01, −0.9555] | 45
[0.2595, 0.5536, 0.1763, 6.548] | [0.2175, 0.2374, 0.2099, 0.6587] | [2.955, −0.002686, 1.018, −0.9001, 0.4832, 0.4757, 1.01, −0.9404] | 77
[1.119, 5.842, 0.2917, 2.302] | [0.1628, 0.6013, 0.1959, 0.2787] | [2.984, −0.02805, 1.016, −0.926, 0.478, 0.4828, 1.011, −0.9501] | 122
[0.536, 2.976, 0.1188, 3.882] | [0.2532, 0.3948, 0.1292, 0.5172] | [2.931, −0.03834, 1.017, −0.9587, 0.4798, 0.4722, 1.01, −0.9739] | 53
[0.1139, 2.791, 5.31, 1.963] | [0.1589, 0.4201, 0.3217, 0.5398] | [2.836, −0.01021, 1.015, −0.9661, 0.4721, 0.476, 1.01, −0.9787] | 106
[5.83, 7.38, 0.3991, 0.2511] | [0.5546, 0.4828, 0.1244, 0.2488] | [2.533, −0.00634, 1.013, −0.9043, 0.4816, 0.4665, 1.008, −0.937] | 150

Table 11: Test 3 results - MBP search
initial p [p1, p2, p3, p4] | p [p1, p2, p3, p4] | γ, φ [γ, φ1, . . . , φ7] | fn evals
[0.2041, 1.126, 0.2567, 0.191] | [0.2551, 0.5631, 0.3209, 0.2388] | [2.749, −0.05518, 1.014, −0.907, 0.4689, 0.48, 1.01, −0.9373] | 7
[0.7279, 0.6413, 0.4089, 2.054] | [0.2484, 0.4816, 0.2501, 0.4646] | [2.82, −0.02226, 1.015, −0.9674, 0.4733, 0.4737, 1.01, −0.9792] | 44
[0.4568, 0.119, 0.144, 0.1166] | [0.1934, 0.2788, 0.2469, 0.5661] | [2.838, −0.05354, 1.016, −0.9221, 0.4783, 0.4744, 1.01, −0.9515] | 48
[2.755, 2.018, 0.1845, 3.933] | [0.4223, 0.4563, 0.1505, 0.2636] | [2.582, −0.1117, 1.014, −0.9106, 0.4764, 0.4694, 1.009, −0.9417] | 184
[0.2682, 0.6646, 1.063, 0.1104] | [0.1452, 0.377, 0.6026, 0.2438] | [2.537, −0.01831, 1.012, −0.901, 0.4435, 0.4764, 1.009, −0.9367] | 95
[0.1912, 7.913, 1.104, 2.679] | [0.279, 0.2868, 0.2362, 0.5172] | [2.752, −0.07947, 1.015, −0.9261, 0.4784, 0.4686, 1.009, −0.9539] | 105
[0.1859, 1.843, 0.1459, 2.267] | [0.3153, 0.5276, 0.2019, 0.2556] | [2.711, −0.08273, 1.014, −0.9115, 0.4741, 0.4769, 1.01, −0.9422] | 32
[9.744, 0.3606, 0.4024, 0.1312] | [0.3352, 0.4137, 0.5265, 0.2307] | [2.479, −8.182e−05, 1.012, −0.9, 0.4491, 0.4778, 1.009, −0.9358] | 242
[3.12, 1.972, 0.2336, 1.01] | [0.1024, 0.4114, 0.596, 0.252] | [2.605, −0.006459, 1.012, −0.909, 0.4451, 0.4751, 1.01, −0.9408] | 68
[0.1072, 0.4854, 5.446, 0.3621] | [0.2122, 0.2561, 0.1359, 0.592] | [2.999, −0.03881, 1.018, −0.9092, 0.4831, 0.4752, 1.01, −0.9461] | 45

Table 12: Test 3 results - NMDM search

p [p1, p2, p3, p4] | γ, φ [γ, φ1, . . . , φ7]
[0.1393, 0.4228, 0.3294, 0.4437] | [2.784, −0.06408, 1.014, −0.956, 0.4693, 0.4762, 1.01, −0.9722]
[0.1997, 0.4285, 0.106, 0.4373] | [2.961, −0.06142, 1.017, −0.9533, 0.4791, 0.4753, 1.01, −0.9703]
[0.1997, 0.4271, 0.2288, 0.5294] | [2.894, −0.0179, 1.016, −0.9661, 0.4756, 0.475, 1.01, −0.9785]
[0.1831, 0.5044, 0.2657, 0.2957] | [2.807, −0.08802, 1.015, −0.9284, 0.4713, 0.4786, 1.01, −0.9528]
[0.282, 0.4368, 0.3069, 0.2957] | [2.614, −0.1213, 1.013, −0.9234, 0.4663, 0.4732, 1.009, −0.9506]
[0.1373, 0.431, 0.2702, 0.4373] | [2.845, −0.06635, 1.015, −0.9561, 0.472, 0.4769, 1.01, −0.972]
[0.282, 0.3665, 0.2094, 0.4153] | [2.709, −0.1107, 1.015, −0.9394, 0.4746, 0.4691, 1.009, −0.9613]
[0.1831, 0.4228, 0.2429, 0.4564] | [2.835, −0.06059, 1.015, −0.9571, 0.4735, 0.4752, 1.01, −0.9727]
[0.1831, 0.4883, 0.1041, 0.4699] | [3.049, −0.01278, 1.017, −0.9685, 0.4798, 0.4772, 1.011, −0.9799]
[0.282, 0.4228, 0.2429, 0.4699] | [2.763, −0.04721, 1.015, −0.9594, 0.474, 0.4708, 1.009, −0.974]

Table 13: Test 3 results - MOGA search
References

[BB91] S. Boyd and C. Barratt. Linear Controller Design: Limits of Performance. Prentice-Hall, Englewood Cliffs, N.J., 1991.

[BHM79] R.G. Becker, A.J. Heunis, and D.Q. Mayne. Computer-aided design of control systems via optimization. IEE Proc.-D, 126(6):573–578, 1979.

[BJ65] R.E. Bach Jun. A practical approach to control system optimization. In Proc. IFAC Symp. on Systems Engineering for Control-System Design, pages 129–135, Tokyo, 1965.

[BSS93] M.S. Bazaraa, H.D. Sherali, and C.M. Shetty. Nonlinear Programming: Theory and Algorithms. John Wiley, New York, 2nd edition, 1993.

[CFPF94] A. Chipperfield, P.J. Fleming, H. Pohlheim, and C.M. Fonseca. Genetic Algorithm Toolbox: User's Guide. Dept. Automatic Control and Systems Engineering, University of Sheffield, U.K., 1994.

[DH66] J.J. D'Azzo and C.H. Houpis. Feedback Control System Analysis and Synthesis. McGraw-Hill, 2nd edition, 1966.

[FF93a] P.J. Fleming and C.M. Fonseca. Genetic algorithms in control systems engineering: A brief introduction. In IEE Colloquium on Genetic Algorithms for Control Systems Engineering, number 1993/130, pages 1/1–1/5, London, England, 1993.

[FF93b] C.M. Fonseca and P.J. Fleming. Genetic algorithms for multiobjective optimization: formulation, discussion and generalization. In Genetic Algorithms: Proceedings of the Fifth International Conference, pages 416–423, San Mateo, CA, 1993.

[FF93c] C.M. Fonseca and P.J. Fleming. Multiobjective genetic algorithms. In IEE Colloquium on Genetic Algorithms for Control Systems Engineering, number 1993/130, pages 6/1–6/5, London, England, 1993.

[FF94] C.M. Fonseca and P.J. Fleming. Multiobjective optimal controller design with genetic algorithms. In Proc. Control 94, pages 745–749, Coventry, England, 1994.

[FP86] P.J. Fleming and A.P. Pashkevich. Application of multi-objective optimization to compensator design for SISO control systems. Electronics Letters, 22(5):258–259, 1986.

[Gem74] F.W. Gembicki. Vector optimization for control with performance and parameter sensitivity indices. PhD thesis, Case Western Reserve University, Cleveland, Ohio, 1974.

[GH75] F.W. Gembicki and Y.Y. Haimes. Approach to performance and sensitivity multiobjective optimization: the goal attainment method. IEEE Trans. Autom. Control, AC-20(8):821–830, 1975.

[Gol89] D.E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA., 1989.

[Gol91] D.E. Goldberg. Real-coded genetic algorithms, virtual alphabets and blocking. Complex Systems, 5:129–167, 1991.

[HHL91] D.J. Hoyle, R.A. Hyde, and D.J.N. Limebeer. An H∞ approach to two degree of freedom design. In Proc. 30th IEEE Conf. Decision Contr., pages 1579–1580, Brighton, England, 1991.

[Lim91] D.J.N. Limebeer. The specification and purpose of a controller design case study. In Proc. 30th IEEE Conf. Decision Contr., pages 1579–1580, Brighton, England, 1991.

[LP93] G.P. Liu and R.J. Patton. Multi-objective control system design using eigenstructure assignment and genetic algorithms. Technical report, Dept. Electronics, University of York, 1993.

[LSII94] T.K. Liu, T. Satoh, T. Ishihara, and H. Inooka. An application of genetic algorithms to control system design. In Proc. 1st Asian Contr. Conf., volume III, pages 701–704, Tokyo, 1994.

[Mac89] J.M. Maciejowski. Multivariable Feedback Design. Addison-Wesley, Wokingham, U.K., 1989.

[Mat92] The MathWorks, Inc., MA, USA. MATLAB: Reference Guide, 1992.

[MG90] D.C. McFarlane and K. Glover. Robust Controller Design Using Normalized Coprime Factor Plant Descriptions, volume 138 of Lect. Notes Control & Inf. Sci. Springer-Verlag, Berlin, 1990.

[MG92] D.C. McFarlane and K. Glover. A loop shaping design procedure using H∞ synthesis. IEEE Trans. Autom. Control, AC-37(6):759–769, 1992.

[MRM92] E. Michielssen, S. Ranjithan, and R. Mittra. Optimal multilayer filter design using real coded genetic algorithms. IEE Proc.-J, 139(6):413–420, 1992.

[MSB91] H. Mühlenbein, M. Schomisch, and J. Born. The parallel genetic algorithm as function optimizer. Parallel Computing, 17:619–632, 1991.

[MSV93] H. Mühlenbein and D. Schlierkamp-Voosen. Predictive models for the breeder genetic algorithm I. Continuous parameter optimisation. Evolutionary Computation, 1(1):25–49, 1993.

[Mun72] N. Munro. Design of controllers for open-loop unstable multivariable system using inverse Nyquist array. Proc. IEE, 119(9):1377–1382, 1972.

[Ng89] W.Y. Ng. Interactive Multi-Objective Programming as a Framework for Computer-Aided Control System Design, volume 132 of Lect. Notes Control & Inf. Sci. Springer-Verlag, Berlin, 1989.

[NM65a] J.A. Nelder and R. Mead. A simplex method for function minimization. Comp. J., 7(4):308–313, 1965.

[NM65b] J.A. Nelder and R. Mead. A simplex method for function minimization – errata. Comp. J., 8(1):27, 1965.

[Par06] V. Pareto. Manuale di Economia Politica. Societa Editrice Libraria, Milan, Italy, 1906.

[PFTV92] W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling. Numerical Recipes: The Art of Scientific Computing. C.U.P., Cambridge, England, 2nd edition, 1992.

[PM82] R.V. Patel and N. Munro. Multivariable Theory and Design. Pergamon Press, Oxford, England, 1982.

[PWMG94] I. Postlethwaite, J.F. Whidborne, G. Murad, and D.-W. Gu. Robust control of the benchmark problem using H∞ methods and numerical optimization techniques. Automatica, 30(4):615–619, 1994.

[Ree93] C.R. Reeves. Genetic algorithms. In C.R. Reeves, editor, Modern Heuristic Techniques for Combinatorial Problems, pages 151–196. Blackwell, Oxford, U.K., 1993.

[Ros60] H.H. Rosenbrock. An automatic method for finding the greatest or least value of a function. Comp. J., 3:175–184, 1960.

[Rut93] N.K. Rutland. The principle of matching and the method of inequalities: design of control systems. PhD thesis, University of Manchester Institute of Science and Technology, Manchester, 1993.

[Sch85] J.D. Schaffer. Multiple objective optimization with vector evaluated genetic algorithms. In J.J. Grefenstette, editor, Proc. First Int. Conf. on Genetic Algorithms, pages 93–100. Lawrence Erlbaum, 1985.

[WGP94] J.F. Whidborne, D.-W. Gu, and I. Postlethwaite. MODCONS – a MATLAB toolbox for multi-objective control system design. Technical Report 94-26, Leicester University Engineering Dept, Leicester, U.K., 1994.

[WGP96] J.F. Whidborne, D.-W. Gu, and I. Postlethwaite. Simulated annealing for multi-objective control system design. Technical Report 96- , Leicester University Engineering Department, Leicester, U.K., 1996. In preparation.

[Whi92] J.F. Whidborne. Performance in sampled data systems. IEE Proc.-D, 139(3):245–250, 1992.

[Whi93] J.F. Whidborne. EMS control system design for a maglev vehicle - A critical system. Automatica, 29(5):1345–1349, 1993.

[WL93] J.F. Whidborne and G.P. Liu. Critical Control Systems: Theory, Design and Applications. Research Studies Press, Taunton, U.K., 1993.

[WLK92] D. Wienke, C. Lucasius, and G. Kateman. Multicriteria target vector optimization of analytical procedures using a genetic algorithm. Part I. Theory, numerical simulations and application to atomic emission spectroscopy. Analytica Chimica Acta, 265(2):211–225, 1992.

[WMGP95] J.F. Whidborne, G. Murad, D.-W. Gu, and I. Postlethwaite. Robust control of an unknown plant – the IFAC 93 benchmark. Int. J. Control, 61(3):589–640, 1995.

[WPG94] J.F. Whidborne, I. Postlethwaite, and D.-W. Gu. Robust controller design using H∞ loop-shaping and the method of inequalities. IEEE Trans. on Contr. Syst. Technology, 2(4):455–461, 1994.

[Wri91] A.H. Wright. Genetic algorithms for real parameter optimization. In G.J.E. Rawlins, editor, Foundations of Genetic Algorithms, pages 205–218. Morgan Kaufmann, San Mateo, CA., 1991.

[ZAN73] V. Zakian and U. Al-Naib. Design of dynamical and control systems by the method of inequalities. Proc. IEE, 120(11):1421–1427, 1973.