2017 9th International Conference on Computational Intelligence and Communication Networks
Parameter Tuning in Modeling and Simulations by Using Swarm Intelligence Optimization Algorithms

Rabia KORKMAZ TAN
Department of Computer Engineering, Ege University, Izmir, Turkey
[email protected]

Şebnem BORA
Department of Computer Engineering, Ege University, Izmir, Turkey
[email protected]
Abstract—Modeling and simulation of real-world environments have been widely used in recent times. Environments that are difficult to examine directly become easier to study through a model. The modeled systems have many parameters with large ranges of possible values; tuning them manually is tedious, requires considerable effort, and often fails to produce the desired results. For this reason, the parameter space needs to be tuned systematically. A review of recent studies shows that little work has addressed the parameter tuning problem in modeling and simulation. In this study, the parameter tuning problem is solved with two swarm intelligence optimization algorithms: Particle Swarm Optimization (PSO) and the Firefly Algorithm (FA). The performance of these algorithms in the parameter tuning process was tested on two different agent-based models, and it was also verified by manually entering the parameter values found by the algorithms back into the models. According to the obtained results, the PSO algorithm works faster, while the Firefly Algorithm finds better parameter values. With this study, the parameter tuning problem of models from different fields was solved.

Keywords—Swarm Intelligence Optimization Algorithms, Particle Swarm Optimization, Firefly Algorithm, Modeling and Simulation, Parameter Tuning.

I. INTRODUCTION

Modeling and simulation are often used to represent real systems so that investigation and inspection become easier [1-8]. Building models that partially or completely reflect the actual system depends on the parameter values of the system, and tuning these values manually is long and tedious. For this reason, a systematic parameter tuning process is needed. Parameter tuning is an optimization problem; in this work it is addressed with swarm intelligence algorithms, which are frequently used in optimization problems and are inspired by phenomena observed in nature [9-14]. The Particle Swarm Optimization (PSO) algorithm is a swarm-based heuristic optimization algorithm. It was inspired by the movements of swarms, in particular flocks of birds: the way birds in a flock move according to the positions of those closest to the food source underlies the algorithm [15].

The Firefly Algorithm (FA) is also a swarm-based heuristic optimization algorithm. It was developed by taking fireflies' brightness-sensitive social behavior into consideration [16]. The swarm-based algorithms used in this study were tested on different models. According to the obtained results, the PSO algorithm is faster in every case, while the FA algorithm finds better parameter values. When modeling studies are examined, some problems require fast methods with acceptable parameter values, while in other problems it is important to find the best possible parameter values even if speed stays in the background. This study presents a solution for both situations. In the remainder of the paper, the algorithms are first outlined, and then they are compared using two different models; the success and performance of each algorithm on each model are compared.
II. METHODS

A. Particle Swarm Optimization Algorithm (PSO)

The PSO algorithm starts by creating a set of random solutions (a swarm of particles); the solution sets are updated in each iteration to find the optimum, and two important values are kept in memory [15]: pbest and gbest. The coordinates that provide the best solution found so far by an individual particle are kept in the pbest variable, and the coordinates that provide the best solution obtained so far over all solution sets in the population are kept in the gbest variable. The position and velocity of each particle are updated according to equations (1) and (2) at each iteration. The optimum value is sought by means of the fitness function of the model. The fitness function is based on the model and can differ according to the needs of the model designer or user; the particle with the best position is determined according to this value.

$$v_i^{k+1} = w \cdot v_i^k + c_1 \cdot rand_1^k (pbest_i^k - x_i^k) + c_2 \cdot rand_2^k (gbest^k - x_i^k) \qquad (1)$$

$$x_i^{k+1} = x_i^k + v_i^{k+1} \qquad (2)$$

The constants $c_1$ and $c_2$ are learning factors that pull each solution set towards the pbest and gbest values. Random values between 0 and 1 are assigned to $rand_1$ and $rand_2$ in the equations. Here $k$ is the iteration number, $i$ is the index of the selected solution set, $x_i^k$ is the position of the particle, and $v_i^k$ is its velocity. The inertia weight $w$ determines how much the previous velocity of a solution set contributes to its velocity at the current time step; the linear decreasing inertia weight strategy was used [17].
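A minimal Python sketch of the PSO update in equations (1) and (2), including a linear decreasing inertia weight, is given below; the coefficient values, bounds handling, and the fitness callable are illustrative assumptions, not the exact configuration used in this study.

```python
import random

def pso(fitness, bounds, swarm_size=10, max_iter=15,
        c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    """Minimal PSO sketch for parameter tuning (eqs. (1) and (2)).

    fitness: callable mapping a parameter vector to a value in [0, 1],
             where values closer to 0 are better (as in this study).
    bounds:  list of (low, high) tuples, one per model parameter.
    """
    dim = len(bounds)
    x = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(swarm_size)]
    v = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in x]
    pbest_val = [fitness(p) for p in x]
    g = min(range(swarm_size), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for k in range(max_iter):
        # Linear decreasing inertia weight strategy [17]
        w = w_max - (w_max - w_min) * k / max(1, max_iter - 1)
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])   # eq. (1)
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = x[i][d] + v[i][d]                       # eq. (2)
                # Clamp the candidate parameter to its allowed range
                x[i][d] = min(max(x[i][d], bounds[d][0]), bounds[d][1])
            f = fitness(x[i])
            if f < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], f
                if f < gbest_val:
                    gbest, gbest_val = x[i][:], f
    return gbest, gbest_val
```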
B. Firefly Algorithm (FA)

The Firefly Algorithm is based on the flashing behavior of fireflies [16]. The algorithm rests on three main principles:

• All fireflies can attract each other regardless of gender.
• The attractiveness of a firefly increases as its brightness increases; the less bright firefly moves towards the brighter one.
• The brightness of a firefly is determined by the value it obtains from the fitness function of the model, which can vary according to the expectations of the model designer or user.

In the FA algorithm, an initial population is created first; $x_i$ represents the candidate solution of the i-th firefly. Each $x_i$ has as many members as the problem dimension and is initialized with random values. The value returned by the optimization problem for each candidate solution $x_i$ corresponds to that firefly's light intensity (brightness) $I$, formulated as in equation (3):

$$I = I_0 e^{-\gamma r^2} \qquad (3)$$
$I_0$ is the initial light intensity, $\gamma$ is the light absorption coefficient, and $r$ is the distance between two fireflies, calculated as in equation (6). The attractiveness of a firefly depends on the brightness of the other firefly and the distance between them; therefore, the attractiveness $\beta$ is defined as in equation (4):

$$\beta = \beta_0 e^{-\gamma r^2} \qquad (4)$$

$\beta_0$ is a firefly's attractiveness when the distance $r$ between the two fireflies equals 0. $\beta_0$ can take values between 0 and 1; when $\beta_0$ approaches 1, the brightest firefly has a strong effect on the change of the locations of the other fireflies. Based on the $\beta$ value, the less attractive i-th firefly moves towards the more attractive firefly that has impressed it. This relocation occurs according to equation (5):

$$x_{i,p} = x_{i,p} + \beta (x_{j,p} - x_{i,p}) + \alpha \varepsilon_{i,p} \qquad (5)$$

where $x_i$ represents the candidate solution of the i-th firefly and $x_j$ that of the j-th firefly. The distance between two fireflies is the Cartesian distance:

$$r_{ij} = \|x_i - x_j\| = \sqrt{\sum_{p=1}^{d} (x_{i,p} - x_{j,p})^2} \qquad (6)$$

$\varepsilon_{i,p}$, included in equation (5), is a random number in the range [-0.5, 0.5]. As the formula shows, the selected i-th firefly moves according to the brightness value of the j-th firefly. $\alpha$ is a randomization coefficient and is recommended to take a constant value in the range [0, 1]; the greater the value of $\alpha$, the greater the effect of the randomly generated $\varepsilon_i$ value on the movement of the firefly. The $\beta_0$ value was taken as 1 in this study.
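Similarly, a minimal Python sketch of the firefly movement rule in equations (4)-(6) is given below; the light-absorption term of equation (3) is folded into the attractiveness of equation (4), as is common in implementations, and the coefficient values and bounds handling are illustrative assumptions rather than the study's exact configuration.

```python
import math
import random

def firefly(fitness, bounds, n_fireflies=10, max_iter=15,
            beta0=1.0, gamma=1.0, alpha=0.2):
    """Minimal Firefly Algorithm sketch (eqs. (4)-(6)); lower fitness is better."""
    dim = len(bounds)
    x = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_fireflies)]
    intensity = [fitness(xi) for xi in x]  # brightness derived from the fitness value

    for _ in range(max_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                # Firefly i moves towards j only if j is brighter (here: lower fitness)
                if intensity[j] < intensity[i]:
                    r = math.sqrt(sum((x[i][p] - x[j][p]) ** 2 for p in range(dim)))  # eq. (6)
                    beta = beta0 * math.exp(-gamma * r ** 2)                          # eq. (4)
                    for p in range(dim):
                        eps = random.random() - 0.5                                   # in [-0.5, 0.5]
                        x[i][p] += beta * (x[j][p] - x[i][p]) + alpha * eps           # eq. (5)
                        x[i][p] = min(max(x[i][p], bounds[p][0]), bounds[p][1])
                    intensity[i] = fitness(x[i])
    best = min(range(n_fireflies), key=lambda i: intensity[i])
    return x[best], intensity[best]
```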
III. EXPERIMENTAL STUDIES
The parameter tuning experiment using swarm intelligence optimization algorithms was tested on two different agent-based models [1] developed in the Repast Simphony 2.3.1 environment, and the results were compared. Users can choose the algorithm to be used for each model and can manually enter the parameters of the chosen algorithm. Moreover, the model parameters can optionally be entered manually; thus, manually entering the best parameter set produced by the algorithms allows the performance of this parameter set to be retested. A separate parameter tuning process is performed for each algorithm-model pair.

A. Used Models

a. Predator-Prey Model

In this simulation model, the predator-prey ecosystem was investigated. Parameters such as the relation of the predator and prey populations to each other and the influence of the food in the ecosystem on the populations were set through the proposed method. The goal is to ensure the continuity of the populations in the ecosystem with the most appropriate parameter values. There are three agents in the predator-prey simulation model: wolf, sheep, and grass. Sheep and wolf agents move randomly in the simulation environment and lose energy as long as they continue to move. Wolf and sheep agents are born with a certain initial energy; however, they must hunt to stay alive in the environment. If their energy drops to 0, they can no longer live in the environment. If a wolf meets a sheep at the same coordinate, it hunts it and increases its energy by the predefined parameter value (wolf gain from food). Similarly, if a sheep agent meets a grass agent, it eats it and increases its energy by the predefined parameter value (sheep gain from food). Grass agents exist at every coordinate in the environment. After grass agents are created, they grow until a specified value (grass regrow time) is reached. Grass agents die at the end of their lifetime even if they are not eaten during the life cycle of the model. During the life cycle of grass, it becomes edible again after the growth time has been exceeded, and its state is updated to alive [18]. These parameter values are adjusted with the proposed approach, and the best parameter set that allows the population continuity is reached.
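To make the role of the fitness function concrete, a hypothetical scoring function for the predator-prey model is sketched below in Python; the signature, the penalty for early extinction, and the reward for surviving populations are all illustrative assumptions rather than the study's actual fitness definition.

```python
def predator_prey_fitness(ticks_survived, wolves, sheep, max_ticks=500):
    """Hypothetical fitness for the predator-prey model: returns a value in
    [0, 1], where values closer to 0 are better, as in the paper.

    ticks_survived, wolves and sheep would come from a Repast Simphony run
    executed with the candidate parameter set (e.g. wolf gain from food,
    sheep gain from food, grass regrow time)."""
    if wolves == 0 or sheep == 0:
        # A species went extinct: the earlier the extinction, the worse the score.
        return 1.0 - ticks_survived / max_ticks
    # Both species survived the whole run: reward larger surviving populations.
    return 1.0 / (1.0 + min(wolves, sheep))
```

In the actual system, the score that plays this role is produced by running the Repast Simphony model with the candidate parameter set; the exact scoring used by the authors is not given in the paper, so the formula above is only illustrative.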
b. Flow Zombies Model

There are three agents in the Flow Zombies simulation model: zombie, human, and human energy. In this model, the zombies in the environment convert humans into zombies within a short period of time. The working logic of the model is as follows: humans move towards places where no zombie agents are found; when their energy is depleted, they stop, rest, and gather energy, and during this waiting period the chance of encountering a zombie is higher. Zombies move towards areas where humans are concentrated and convert the humans at the same coordinate into zombies. The aim of this model is to keep the lifespan of the human population as long as possible. With the developed approach, the zombie, human, and human energy parameters are optimized so that humans stay alive for as long as possible.

B. Results

a. Predator-Prey Model

Common operating conditions: maximum iterations: 15; population size: 10; tick count: 500.

Each algorithm was run 5 times under these conditions, and the averages of these runs were used to compare the algorithms.
TABLE I. THE PSO ALGORITHM RUN 5 TIMES IN THE PREDATOR-PREY SIMULATION IN THE PARAMETER TUNING OPERATION

| Run | Tick | Average fitness value | Initial best fitness value | Best fitness value | Improvement difference | Average local fitness value | Average general fitness value |
|---|---|---|---|---|---|---|---|
| PSO-1 | 57878.0 | 0.55 | 0.7 | 0.10 | 0.60 | 0.32 | 0.22 |
| PSO-2 | 47277.0 | 0.60 | 0.7 | 0.10 | 0.60 | 0.32 | 0.26 |
| PSO-3 | 61307.0 | 0.56 | 0.7 | 0.24 | 0.45 | 0.39 | 0.36 |
| PSO-4 | 59623.0 | 0.52 | 0.7 | 0.10 | 0.60 | 0.29 | 0.19 |
| PSO-5 | 46077.0 | 0.62 | 0.26 | 0.10 | 0.16 | 0.27 | 0.13 |
| Average | 54432.4 | 0.57 | 0.61 | 0.13 | 0.48 | 0.32 | 0.23 |
When Table I is examined, it is seen that the results are very close to each other. While the fitness value obtained from the initially randomly assigned parameter values is quite large, the best fitness value reached by the parameters obtained after the tuning process is much better, which shows that the PSO algorithm improves the parameter values. The fitness value lies in the range [0, 1]; a fitness value approaching 0 indicates that the parameter values are better, while a value approaching 1 indicates that the parameter values are far from the optimum. Furthermore, another finding from these results is that simulations with smaller average general fitness values last longer. The reason is that when a species is exhausted in the predator-prey model, the simulation is terminated early, which shortens the run; if no species is exhausted, the simulation continues until the maximum number of ticks, which prolongs the run time. Also, when solution sets with the same values are encountered, the fitness value is not recalculated; the fitness values of identical solution sets held in memory are reused, which also reduces the run time. For this reason, the run time can be smaller even when the number of ticks is larger. The same conditions apply to the FA algorithm.

TABLE II. THE RESULT OF THE FA ALGORITHM RUN 5 TIMES IN THE PREDATOR-PREY SIMULATION IN THE PARAMETER TUNING OPERATION
| Run | Tick | Average fitness value | Initial best fitness value | Best fitness value | Improvement difference | Average local fitness value | Average general fitness value |
|---|---|---|---|---|---|---|---|
| FA-1 | 70699.0 | 0.41 | 0.7 | 0.05 | 0.65 | 0.20 | 0.18 |
| FA-2 | 70604.0 | 0.32 | 0.26 | 0.06 | 0.20 | 0.14 | 0.12 |
| FA-3 | 67566.0 | 0.57 | 0.9 | 0.18 | 0.71 | 0.42 | 0.38 |
| FA-4 | 70401.0 | 0.36 | 0.28 | 0.05 | 0.23 | 0.13 | 0.09 |
| FA-5 | 71176.0 | 0.50 | 0.31 | 0.10 | 0.20 | 0.28 | 0.22 |
| Average | 70089.2 | 0.43 | 0.49 | 0.08 | 0.40 | 0.23 | 0.19 |
Tick: the average of the tick counts over the runs of the simulations.
Average fitness value: the average of the fitness values of the solution sets in each iteration.
Initial best fitness value: the average of the best fitness values of the randomly created solution sets in the first generation.
Best fitness value: the average of the fitness values of the best solution sets obtained after the simulations.
Improvement difference: the difference between the initial best fitness value and the best fitness value.
Average local fitness value: the average of the local values in each iteration of the simulations.
Average general fitness value: the average of the general values in each iteration of the simulations.
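As a rough illustration of how these per-run metrics could be computed from per-iteration fitness logs, a Python sketch is given below; it assumes that the "local" value of an iteration is the best fitness found within that iteration and that the "general" value is the best fitness found up to that iteration, an interpretation the paper does not state explicitly.

```python
def summarize_run(iteration_fitness):
    """iteration_fitness: list of lists, the fitness values of the solution sets
    in each iteration (lower is better). Returns the per-run metrics of
    Tables I-IV under the stated assumptions."""
    all_values = [f for it in iteration_fitness for f in it]
    average_fitness = sum(all_values) / len(all_values)
    initial_best = min(iteration_fitness[0])   # best of the random first generation
    best = min(all_values)                     # best fitness after the whole run
    improvement = initial_best - best          # improvement difference
    local_bests = [min(it) for it in iteration_fitness]
    general_bests = []
    so_far = float("inf")
    for lb in local_bests:                     # best value found up to each iteration
        so_far = min(so_far, lb)
        general_bests.append(so_far)
    average_local = sum(local_bests) / len(local_bests)
    average_general = sum(general_bests) / len(general_bests)
    return average_fitness, initial_best, best, improvement, average_local, average_general
```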
The values obtained as a result of parameter tuning with FA in the predator-prey model are shown in Table II. When the fitness values of the FA algorithm are examined, the improvement difference shows that the fitness values produced at the start were turned into even better results. When the average general fitness values are examined, it is observed that the simulations producing better solutions last longer; the reason, as explained for PSO, is that the species are not exhausted and each simulation runs until the determined maximum number of ticks. When the results and averages obtained from the PSO and FA algorithms are compared, the parameter values found by the FA are much better, but the PSO algorithm works much faster than the FA algorithm.
b. Flow Zombies Model

Common operating conditions: maximum iterations: 15; population size: 10; tick count: 2000.

TABLE III. THE RESULT OF THE PSO ALGORITHM RUN 5 TIMES IN THE HUMAN-ZOMBIE SIMULATION IN THE PARAMETER TUNING OPERATION
| Run | Tick | Average fitness value | Initial best fitness value | Best fitness value | Improvement difference | Average local fitness value | Average general fitness value |
|---|---|---|---|---|---|---|---|
| PSO-1 | 94950 | 0.052 | 0.294 | 0.005 | 0.289 | 0.0201 | 0.0196 |
| PSO-2 | 82414 | 0.063 | 0.286 | 0.0056 | 0.2804 | 0.00847 | 0.0076 |
| PSO-3 | 85424 | 0.054 | 0.435 | 0.005 | 0.4298 | 0.0114 | 0.0109 |
| PSO-4 | 98034 | 0.049 | 0.286 | 0.0052 | 0.2808 | 0.01438 | 0.0135 |
| PSO-5 | 90086 | 0.051 | 0.333 | 0.005 | 0.328 | 0.01668 | 0.0163 |
| Average | 90181.6 | 0.0538 | 0.3268 | 0.00516 | 0.3216 | 0.0142 | 0.01346 |
When Table III is examined, there is a clear difference between the initial randomly generated fitness value and the best fitness value (this fitness function takes values in [0, 1] and, as mentioned before, a value approaching zero means that the parameter values have approached the optimum). This shows the success of the algorithm in the parameter tuning process for the human-zombie model. Furthermore, another observed feature is that the average fitness value changes inversely with the tick count. As in the predator-prey model, this results from good parameter values keeping the simulation running longer, while worse parameter values cause it to terminate earlier. In this simulation, the aim is to ensure the continuation of the human generation for as long as possible, and the parameter values that achieve this are the best parameter values.

TABLE IV. THE FA ALGORITHM RUN 5 TIMES IN THE HUMAN-ZOMBIE SIMULATION IN THE PARAMETER TUNING OPERATION
| Run | Tick | Average fitness value | Initial best fitness value | Best fitness value | Improvement difference | Average local fitness value | Average general fitness value |
|---|---|---|---|---|---|---|---|
| FA-1 | 247382 | 0.132 | 0.313 | 0.007 | 0.306 | 0.033 | 0.008 |
| FA-2 | 126793 | 0.148 | 0.476 | 0.036 | 0.44 | 0.037 | 0.013 |
| FA-3 | 169377 | 0.136 | 0.313 | 0.0136 | 0.2994 | 0.024 | 0.012 |
| FA-4 | 102274 | 0.164 | 0.417 | 0.073 | 0.344 | 0.023 | 0.016 |
| FA-5 | 217748 | 0.134 | 0.238 | 0.013 | 0.225 | 0.028 | 0.017 |
| Average | 172714.8 | 0.143 | 0.3514 | 0.02852 | 0.32288 | 0.029 | 0.0132 |
When the FA results in Table IV are examined, findings similar to those of the PSO are observed. When the PSO and FA are compared, one of the findings observed in this model is that the FA algorithm is slower but finds better parameter values than the PSO algorithm.
Figure 2. The population distribution obtained from a run of the PSO algorithm during the parameter tuning process.

Figure 3. The graph obtained by manually entering the best parameter values found by the PSO algorithm in the parameter tuning process.

Figure 4. The graph obtained from the parameter tuning process of the FA algorithm.

Figure 5. The graph obtained by manually entering the best parameter values found by the FA algorithm in the parameter tuning process.
When Figure 2, Figure 3, Figure 4, and Figure 5 are examined, they show the wolf, sheep, and grass distributions in the predator-prey model for the PSO and FA algorithms, respectively. Figure 2 and Figure 4 show the graphs obtained while searching for appropriate parameter values for the predator-prey model; as seen in these graphs, simulation results of both good and bad solution sets are present. Figure 3 and Figure 5 show the graphs obtained by manually running the best parameter values obtained from the PSO and FA algorithms, respectively. When the graphs in Figure 3 and Figure 5 are analyzed, it is seen that both algorithms produce solution sets that provide the continuity of the simulation.

IV. CONCLUSION
In this study, a solution has been sought for the problem of tuning the parameters of modeled real systems by using swarm intelligence algorithms. Their success on the models was tested using the Particle Swarm Optimization and Firefly algorithms. The PSO algorithm finds a solution faster and with acceptable accuracy, while the FA algorithm finds much better parameter values despite being slower; these are among the results observed in this study. Additionally, this approach takes the burden of determining parameter values away from model users and gives it to the model itself. In cases where either the closest-to-optimal parameter values or speed comes to the forefront, the user can make the necessary adjustments to satisfy these expectations. Since each algorithm can be run independently on a model, it is possible to distinguish which algorithm yields better results on which model. Depending on the manually entered simulation duration and the maximum number of iterations, this approach makes it possible either to obtain acceptable parameters very quickly or to push the parameters as close as possible to the best. In the next steps of this work, it is planned to integrate algorithms that can provide solutions from different aspects to parameter tuning problems of broader modeling studies. Furthermore, the system to be developed will be tested on more models and problems.

REFERENCES
[1] C. Macal and M. North, "Tutorial on agent-based modeling and simulation part 2: how to model with agents," in WSC'06: Proceedings of the 38th Conference on Winter Simulation, California, Dec. 2006, pp. 73-83.
[2] G. Di Marzo Serugendo, M.-P. Gleizes, and A. Karageorgos, Eds., Self-organising Software: From Natural to Artificial Adaptation, Berlin, Heidelberg: Springer-Verlag, 2011, pp. 7-32.
[3] E. Kaddoum and J.-P. Georgé, "Collective self-tuning for complex product design (short paper)," in IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO), Lyon, Sept. 2012.
[4] D. Capera, M.-P. Gleizes, and P. Glize, "Mechanism type synthesis based on self-assembling agents," Journal of Applied Artificial Intelligence, vol. 18, Oct. 2004, pp. 921-936, doi: 10.1080/08839510490509090.
[5] K. Ottens, M.-P. Gleizes, and P. Glize, "A multi-agent system for building dynamic ontologies," in International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), Hawaii, ACM Press, May 2007, pp. 1278-1284, doi: 10.1145/1329125.1329399.
[6] C. Bernon, D. Capera, and J.-P. Mano, "Engineering self-modelling systems: application to biology," in Engineering Societies in the Agents World IX, vol. 5485, A. Artikis, G. Picard, and L. Vercouter, Eds., Berlin, Heidelberg: Springer, 2009, pp. 248-263.
[7] J.-P. Georgé, M.-P. Gleizes, P. Glize, and C. Régis, "Real-time simulation for flood forecast: an adaptive multi-agent system STAFF," in Symposium on Adaptive Agents and Multi-Agent Systems (AISB), 2003, pp. 109-114.
[8] S. Lemouzy, V. Camps, and P. Glize, "Real time learning of behaviour features for personalised interest assessment," in Advances in Practical Applications of Agents and Multiagent Systems, Advances in Soft Computing, vol. 70, Y. Demazeau, F. Dignum, J. Corchado, and J. Pérez, Eds., Berlin: Springer, 2010, pp. 5-14.
[9] B. Calvez and G. Hutzler, "Automatic tuning of agent-based models using genetic algorithms," in Proceedings of the 6th International Workshop on Multi-Agent Based Simulation (MABS'05), Utrecht, The Netherlands, Springer, 2005, pp. 41-57.
[10] C. Salwala, V. Kotrajara, and P. Horkaew, "Improving performance for emergent environments parameter tuning and simulation in games using GPU," in 3rd IEEE International Conference on Computer Science and Information Technology (ICCSIT), Chengdu, Sept. 2010, pp. 37-41, doi: 10.1109/ICCSIT.2010.5564465.
[11] F. Imbault and K. Lebart, "A stochastic optimization approach for parameter tuning of support vector machines," in Proceedings of the 17th International Conference on Pattern Recognition (ICPR), Cambridge, Aug. 2004, pp. 597-600, doi: 10.1109/ICPR.2004.1333843.
[12] D. S. Bolme, J. R. Beveridge, B. A. Draper, P. J. Phillips, and Y. M. Lui, "Automatically searching for optimal parameter settings using a genetic algorithm," in Computer Vision Systems - 8th International Conference (ICVS), vol. 6962, J. L. Crowley, B. A. Draper, and M. Thonnat, Eds., Sophia Antipolis, 2011, pp. 213-222.
[13] B. Calvez and G. Hutzler, "Adaptive dichotomic optimization: a new method for the calibration of agent-based models," in 21st Annual European Simulation and Modelling Conference (ESM 2007), Malta, 2007, pp. 415-419.
[14] F. Dobslaw, "A parameter tuning framework for metaheuristics based on design of experiments and artificial neural networks," in Proceedings of the International Conference on Computer Mathematics and Natural Computing, Rome, 2010.
[15] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proc. of the IEEE Int. Conference on Neural Networks, Western Australia, Nov. 1995, pp. 1942-1948, doi: 10.1109/ICNN.1995.488968.
[16] X. S. Yang, "Firefly algorithm," in Nature-Inspired Metaheuristic Algorithms, 2nd ed., UK, 2008, pp. 79-90.
[17] J. C. Bansal, P. K. Singh, and M. Saraswat, "Inertia weight strategies in particle swarm optimization," in 2011 Third World Congress on Nature and Biologically Inspired Computing, Salamanca, Oct. 2011, pp. 633-640, doi: 10.1109/NaBIC.2011.6089659.
[18] İ. Çakırlar, "Etmen Temelli Benzetimler İçin Test Güdümlü Bir Yaklaşım Geliştirilmesi" (Development of a Test-Driven Approach for Agent-Based Simulations), PhD Thesis, Computer Engineering Department, Ege University, İzmir, 2014.