Appendix B: MATLAB


The following codes are intended to demonstrate how each algorithm works, so they are relatively simple and we make no attempt to optimize them. In addition, most demonstrative cases are for 2D only, though in principle they can be extended to any higher dimension. They are not for general-purpose optimization, because there are much better programs available, both free and commercial. These codes should work in Matlab®.1 For Octave,2 slight modifications may be needed.

B.1 GENETIC ALGORITHMS

The following simple demo program of genetic algorithms tries to find the maximum of

f(x) = -cos(x) exp[-(x - π)^2],

which has the maximum f = 1 at x = π.

1. Matlab, www.mathworks.com
2. J. W. Eaton, GNU Octave Manual, Network Theory Ltd, www.gnu.org/software/octave, 2002.

Engineering Optimization: An Introduction with Metaheuristic Applications. By Xin-She Yang. Copyright © 2010 John Wiley & Sons, Inc.


% Genetic Algorithm (Simple Demo) Matlab/Octave Program
% Written by X S Yang (Cambridge University)
% Usage: gasimple or gasimple('x*exp(-x)');
function [bestsol, bestfun, count]=gasimple(funstr)
global solnew sol pop popnew fitness fitold f range;
if nargin<1,
  % Function with fmax=1 at x=pi
  funstr='-cos(x)*exp(-(x-pi)^2)';
end
range=[-10 10];   % Range/Domain
% Converting to an inline function
f=vectorize(inline(funstr));
% Initializing the parameters
rand('state',0);  % Reset the random generator
popsize=20;       % Population size
MaxGen=100;       % Maximal number of generations
count=0;          % counter
nsite=2;          % number of mutation sites
pc=0.95;          % Crossover probability
pm=0.05;          % Mutation probability
nsbit=16;         % String length (bits)
% Generating the initial population
popnew=init_gen(popsize,nsbit);
fitness=zeros(1,popsize); % fitness array
% Display the shape of the function
x=range(1):0.1:range(2); plot(x,f(x));
% Initialize solution <- initial population
for i=1:popsize,
  solnew(i)=bintodec(popnew(i,:));
end
% Start the evolution loop
for i=1:MaxGen,
  % Record as the history
  fitold=fitness; pop=popnew; sol=solnew;
  for j=1:popsize,
    % Choose a crossover pair
    ii=floor(popsize*rand)+1; jj=floor(popsize*rand)+1;
    if pc>rand,
      % Crossover
      [popnew(ii,:),popnew(jj,:)]=...
        crossover(pop(ii,:),pop(jj,:));
      % Evaluate the new pairs
      count=count+2;
      evolve(ii); evolve(jj);
    end
    % Mutation at n sites
    if pm>rand,
      kk=floor(popsize*rand)+1; count=count+1;
      popnew(kk,:)=mutate(pop(kk,:),nsite);
      evolve(kk);
    end
  end % end for j
  % Record the current best
  bestfun(i)=max(fitness);
  bestsol(i)=mean(sol(bestfun(i)==fitness));
end
% Display results
subplot(2,1,1); plot(bestsol); title('Best estimates');
subplot(2,1,2); plot(bestfun); title('Fitness');
% --- All the sub functions ---
% generation of the initial population
function pop=init_gen(np,nsbit)
% String length=nsbit+1 with pop(:,1) for the sign
pop=rand(np,nsbit+1)>0.5;
% Evolving the new generation
function evolve(j)
global solnew popnew fitness fitold pop sol f;
solnew(j)=bintodec(popnew(j,:));
fitness(j)=f(solnew(j));
if fitness(j)>fitold(j),
  pop(j,:)=popnew(j,:);
  sol(j)=solnew(j);
end
% Convert a binary string into a decimal number
function [dec]=bintodec(bin)
global range;
% Length of the string without sign
nn=length(bin)-1;
num=bin(2:end); % get the binary
% Sign=+1 if bin(1)=0; Sign=-1 if bin(1)=1.
Sign=1-2*bin(1);
dec=0;
% floating point/decimal place in a binary string
dp=floor(log2(max(abs(range))));
for i=1:nn,
  dec=dec+num(i)*2^(dp-i);
end
dec=dec*Sign;
% Crossover operator
function [c,d]=crossover(a,b)


nn=length(a)-1;
% generating a random crossover point
cpoint=floor(nn*rand)+1;
c=[a(1:cpoint) b(cpoint+1:end)];
d=[b(1:cpoint) a(cpoint+1:end)];
% Mutation operator
function anew=mutate(a,nsite)
nn=length(a); anew=a;
for i=1:nsite,
  j=floor(rand*nn)+1;
  anew(j)=mod(a(j)+1,2);
end
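The same binary-encoded genetic algorithm can be sketched in Python. This is a minimal sketch, not the book's code: the elitist selection rule (keeping the fittest of the two parents and the child) and the parameter values are illustrative choices.

```python
import math
import random

def f(x):
    # Objective with maximum f = 1 at x = pi
    return -math.cos(x) * math.exp(-(x - math.pi) ** 2)

def decode(bits, lo=-10.0, hi=10.0):
    # Map a bit string to a real number in [lo, hi]
    n = int("".join(map(str, bits)), 2)
    return lo + (hi - lo) * n / (2 ** len(bits) - 1)

def ga_max(pop_size=20, nbits=16, gens=100, pc=0.95, pm=0.05, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(nbits)] for _ in range(pop_size)]
    best_x, best_f = 0.0, -float("inf")
    for _ in range(gens):
        newpop = []
        for _ in range(pop_size):
            a = pop[rng.randrange(pop_size)]
            b = pop[rng.randrange(pop_size)]
            child = a[:]
            if rng.random() < pc:          # one-point crossover
                cp = rng.randrange(1, nbits)
                child = a[:cp] + b[cp:]
            if rng.random() < pm:          # single-site mutation
                j = rng.randrange(nbits)
                child[j] ^= 1
            # elitist choice: keep the fittest of both parents and the child
            keep = max([a, b, child], key=lambda c: f(decode(c)))
            newpop.append(keep)
        pop = newpop
        x = max((decode(c) for c in pop), key=f)
        if f(x) > best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

x, fx = ga_max()
print("best x:", x, "f(x):", fx)
```

The best estimate should approach x = π, where the objective reaches its maximum of 1.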

B.2 SIMULATED ANNEALING

The implemented simulated annealing intends to find the minimum of Rosenbrock's function

f(x,y) = (1 - x)^2 + 100(y - x^2)^2.

% Simulated Annealing (by X-S Yang, Cambridge University)
% Usage: sa_demo
disp('Simulating ... it will take a minute or so!');
% Rosenbrock's function with f*=0 at (1,1)
fstr='(1-x)^2+100*(y-x^2)^2';
% Convert into an inline function
f=vectorize(inline(fstr));
% Show the topography of the objective function
range=[-2 2 -2 2];
xgrid=range(1):0.1:range(2);
ygrid=range(3):0.1:range(4);
[x,y]=meshgrid(xgrid,ygrid);
surfc(x,y,f(x,y));
% Initializing parameters and settings
T_init = 1.0;     % Initial temperature
T_min = 1e-10;    % Final stopping temperature
F_min = -1e+100;  % Min value of the function
max_rej=2500;     % Maximum number of rejections
max_run=500;      % Maximum number of runs
max_accept = 15;  % Maximum number of accept
k = 1;            % Boltzmann constant
alpha=0.95;       % Cooling factor
Enorm=1e-8;       % Energy norm (eg, Enorm=1e-8)
guess=[2 2];      % Initial guess
% Initializing the counters i,j etc
i = 0; j = 0;
accept = 0; totaleval = 0;

% Initializing various values
T = T_init;
E_init = f(guess(1),guess(2));
E_old = E_init; E_new=E_old;
best=guess; % initially guessed values
% Starting the simulated annealing
while ((T > T_min) & (j <= max_rej) & E_new>F_min)
  i = i+1;
  % Check if max numbers of run/accept are met
  if (i >= max_run) | (accept >= max_accept)
    % Cooling according to a cooling schedule
    T = alpha*T;
    totaleval = totaleval + i;
    % reset the counters i, accept
    i = 1; accept = 1;
  end
  % Function evaluations at new locations
  ns=guess+rand(1,2)*randn;
  E_new = f(ns(1),ns(2));
  % Decide to accept the new solution
  DeltaE=E_new-E_old;
  % Accept if improved
  if (-DeltaE > Enorm)
    best = ns; E_old = E_new;
    accept=accept+1; j = 0;
  end
  % Accept with a small probability if not improved
  if (DeltaE<=Enorm & exp(-DeltaE/(k*T))>rand),
    best = ns; E_old = E_new;
    accept=accept+1;
  else
  end
  % Update the estimated optimal solution
  f_opt=E_old;
end
% Display the final results
disp(strcat('Obj function :',fstr));
disp(strcat('Evaluations :', num2str(totaleval)));
disp(strcat('Best solution:', num2str(best)));
disp(strcat('Best objective:', num2str(f_opt)));
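The same annealing loop can be sketched in Python. This is a minimal sketch rather than a translation of the listing: the Gaussian move scale and the fixed number of moves per temperature level are illustrative simplifications.

```python
import math
import random

def rosenbrock(x, y):
    # f* = 0 at (1, 1)
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def anneal(T=1.0, T_min=1e-10, alpha=0.95, moves_per_T=100, seed=2):
    rng = random.Random(seed)
    x, y = 2.0, 2.0                  # initial guess
    e_old = rosenbrock(x, y)
    best = (x, y, e_old)
    while T > T_min:
        for _ in range(moves_per_T):
            # random move around the current point
            nx = x + rng.gauss(0, 0.1)
            ny = y + rng.gauss(0, 0.1)
            e_new = rosenbrock(nx, ny)
            dE = e_new - e_old
            # accept improvements, or worse moves with probability exp(-dE/T)
            if dE < 0 or rng.random() < math.exp(-dE / T):
                x, y, e_old = nx, ny, e_new
                if e_old < best[2]:
                    best = (x, y, e_old)
        T *= alpha                   # geometric cooling schedule
    return best

bx, by, bf = anneal()
print("best:", (bx, by), "objective:", bf)
```

As the temperature decreases geometrically, the acceptance of uphill moves becomes increasingly rare and the search settles near the minimum at (1, 1).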


B.3 PARTICLE SWARM OPTIMIZATION

The following PSO program tries to find the global minimum of Michalewicz's 2D function

f(x,y) = -sin(x) [sin(x^2/π)]^20 - sin(y) [sin(2y^2/π)]^20.
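The PSO velocity and position updates can be sketched in Python. This is a minimal sketch, not the book's listing: the inertia weight w and the learning parameters alpha and beta are illustrative values, and positions are simply clipped to the search domain [0, π]^2.

```python
import math
import random

def michalewicz(x, y):
    # 2D Michalewicz function (m = 10); global minimum near (2.20, 1.57)
    return -(math.sin(x) * math.sin(x * x / math.pi) ** 20
             + math.sin(y) * math.sin(2 * y * y / math.pi) ** 20)

def pso(n=20, iters=200, alpha=2.0, beta=2.0, w=0.7, seed=3):
    rng = random.Random(seed)
    lo, hi = 0.0, math.pi
    xs = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(n)]
    vs = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in xs]                    # personal bests
    gbest = min(pbest, key=lambda p: michalewicz(*p))[:]  # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                # velocity: inertia + cognitive + social terms
                vs[i][d] = (w * vs[i][d]
                            + alpha * rng.random() * (pbest[i][d] - xs[i][d])
                            + beta * rng.random() * (gbest[d] - xs[i][d]))
                # move and clip to the domain
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            if michalewicz(*xs[i]) < michalewicz(*pbest[i]):
                pbest[i] = xs[i][:]
                if michalewicz(*pbest[i]) < michalewicz(*gbest):
                    gbest = pbest[i][:]
    return gbest, michalewicz(*gbest)

best, fbest = pso()
print("best:", best, "objective:", fbest)
```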



% Particle Swarm Optimization (by X-S Yang, Cambridge University)
% Usage: pso_demo(number_of_particles,Num_iterations)
% eg: best=pso_demo(20,15);
% where best=[xbest ybest zbest] %an n by 3 matrix
function [best]=pso_demo(n,Num_iterations)
% n=number of particles; Num_iterations=number of iterations
if nargin<2, Num_iterations=15; end
if nargin<1, n=20; end

B.4 HARMONY SEARCH

The following Harmony Search demo uses Rosenbrock's function as the default objective.

% Harmony Search (Simple Demo)
if nargin<1,
  % Rosenbrock's function with f*=0 at (1,1)
  funstr='(1-x1)^2+100*(x2-x1^2)^2';
end
% Converting to an inline function
f=vectorize(inline(funstr));
ndim=2; %Number of independent variables
% The range of the objective function
range(1,:)=[-5 5]; range(2,:)=[-5 5];
% Pitch range for pitch adjusting
pa_range=[100 100];
% Initial parameter setting
HS_size=20;        %Length of solution vector
HMacceptRate=0.95; %HM Accepting Rate
PArate=0.7;        %Pitch Adjusting rate
% Generating Initial Solution Vector
for i=1:HS_size,
  for j=1:ndim,
    x(j)=range(j,1)+(range(j,2)-range(j,1))*rand;
  end
  HM(i,:) = x;
  HMbest(i) = f(x(1), x(2));
end
%% Find the best in the initial harmony memory
HSmaxNum=1; HSmax=HMbest(1);
for i=2:HS_size,
  if HMbest(i)>HSmax,
    HSmaxNum = i;
    HSmax = HMbest(i);
  end
end

The core iteration of the conjugate gradient solver for a sparse linear system Au = b is

while max(abs(r))>delta
  alpha=r'*r/(d'*A*d);  % alpha_n
  K=r'*r;               % temporary value
  u=u+alpha*d;          % u_{n+1}
  r=r-alpha*A*d;        % r_{n+1}
  beta=r'*r/K;          % beta_n
  d=r+beta*d;           % d_{n+1}
  u'                    % display u on the screen
end

Initially, you can run the program using a small n = 50, which will show almost exact answers. Then, you can try n = 500 and 5000 as further numerical experiments.
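The same conjugate gradient iteration can be sketched in Python for a concrete system. This is a minimal sketch under the assumption that A is symmetric positive definite; the small tridiagonal matrix here is an illustrative choice, not the book's test problem.

```python
def conjugate_gradient(A, b, delta=1e-10):
    # Solve A u = b for symmetric positive-definite A by conjugate gradients
    n = len(b)
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    def dot(a, c):
        return sum(x * y for x, y in zip(a, c))
    u = [0.0] * n
    r = b[:]                 # r = b - A u = b, since u = 0
    d = r[:]
    while max(abs(x) for x in r) > delta:
        Ad = matvec(A, d)
        alpha = dot(r, r) / dot(d, Ad)                    # alpha_n
        K = dot(r, r)                                     # temporary value
        u = [ui + alpha * di for ui, di in zip(u, d)]     # u_{n+1}
        r = [ri - alpha * adi for ri, adi in zip(r, Ad)]  # r_{n+1}
        beta = dot(r, r) / K                              # beta_n
        d = [ri + beta * di for ri, di in zip(r, d)]      # d_{n+1}
    return u

# Example: tridiagonal system with 2 on the diagonal, -1 off-diagonal
n = 5
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0
      for j in range(n)] for i in range(n)]
b = [1.0] * n
u = conjugate_gradient(A, b)
print(u)
```

For a symmetric positive-definite system of size n, conjugate gradients converges in at most n iterations in exact arithmetic, which is why even a small n gives almost exact answers.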

B.7 NONLINEAR OPTIMIZATION

Writing our own codes is often time-consuming. In many cases, we can simply use existing, well-tested programs. For example, the optimizer fmincon of the Matlab optimization toolbox can be used to solve many optimization problems. Let us use two examples to demonstrate how to turn a nonlinear optimization problem into simple Matlab codes.

B.7.1 Spring Design

The original spring design optimization is

minimize f(x) = (2 + L) d w^2,   (B.8)

subject to

g1(x) = 1 - d^3 L/(71785 w^4) <= 0,

g2(x) = (4d^2 - wd)/(12566(dw^3 - w^4)) + 1/(5108 w^2) - 1 <= 0,
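One simple way to handle these constraints in code is a penalty method. The sketch below is illustrative only: it uses a crude random search instead of fmincon, and the penalty weight and the search ranges for the wire diameter w, coil diameter d, and number of coils L are assumed values typical for this benchmark, not taken from the text.

```python
import random

def spring_objective(w, d, L):
    # Spring weight: f = (2 + L) d w^2
    return (2 + L) * d * w ** 2

def constraints(w, d, L):
    # Constraints in the g_i(x) <= 0 form
    g1 = 1 - d ** 3 * L / (71785 * w ** 4)
    g2 = ((4 * d ** 2 - w * d) / (12566 * (d * w ** 3 - w ** 4))
          + 1 / (5108 * w ** 2) - 1)
    return [g1, g2]

def penalized(x, mu=1e6):
    # Quadratic penalty for constraint violations
    w, d, L = x
    p = sum(max(0.0, g) ** 2 for g in constraints(w, d, L))
    return spring_objective(w, d, L) + mu * p

def random_search(trials=20000, seed=4):
    rng = random.Random(seed)
    best, best_f = None, float("inf")
    for _ in range(trials):
        # illustrative design ranges for this benchmark
        cand = (rng.uniform(0.05, 0.2),   # wire diameter w
                rng.uniform(0.25, 1.3),   # coil diameter d
                rng.uniform(2.0, 15.0))   # number of coils L
        fc = penalized(cand)
        if fc < best_f:
            best, best_f = cand, fc
    return best, best_f

x, fx = random_search()
print("best design:", x, "penalized objective:", fx)
```

With a large penalty weight, any noticeable constraint violation dominates the objective, so the search is driven toward the feasible region; a serious implementation would replace the random search with a proper optimizer such as fmincon.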