Optimizing Risk Management Using NSGA-II

Marcos Álvares Barbosa Junior
University of Johannesburg
Johannesburg, South Africa
Email: [email protected]

Fernando Buarque de Lima Neto
University of Pernambuco
Pernambuco, Brazil
Email: [email protected]

Tshilidzi Marwala
University of Johannesburg
Johannesburg, South Africa
Email: [email protected]

Abstract—Companies are often susceptible to uncertainties which can disturb the achievement of their objectives. The effect of these uncertainties can be perceived as risk that will be taken. A healthy company has to anticipate undesired events by defining a process for managing risks. Risk management processes are responsible for identifying, analyzing and evaluating risky scenarios and for deciding whether they should undergo control in order to satisfy previously defined risk criteria. Risk specialists have to consider, at the same time, many operational aspects (decision variables) and objectives in order to decide which risk treatments have to be executed, and when. In line with that, most companies select risks to be treated by using the expertise of human specialists or simple sorting heuristics based on the believed impact. Companies have limited resources (e.g. human and financial resources), and risk treatments have costs which the selection process has to deal with. Aiming to balance the competition between risk and resource management, this paper proposes a new optimization step within the standard risk management methodology created by the International Organization for Standardization (ISO). To test the resulting methodology, experiments based on the Non-dominated Sorting Genetic Algorithm (more specifically NSGA-II) were performed, aiming to manage the risk and resources of a simulated company. Results show that the proposed approach can deal with multiple conflicting objectives, reducing the risk exposure time by selecting risks to be treated according to their impact and the available resources.

I. INTRODUCTION

All companies have elements that directly impact their business value. In financial accounting jargon, these elements are called "assets" and represent economic resources. Anything tangible or intangible that is capable of being owned or controlled to produce value, and that is held to have positive economic value, is considered an asset. Assets are susceptible to damage caused by undesired events. In line with that, risk management is the preventive process responsible for protecting companies against the effects of these undesired events; these effects are what specialists call "risk". Risk management processes identify, analyze, evaluate and treat risks aiming to protect assets. For each identified risk, an impact level, a likelihood and a treatment (or a set of treatments) are specified. Risk evaluation and treatment selection are processes that involve many strategic objectives according to the previously defined risk criteria, such as protection (reduction of the risk exposure time) or cost reduction. Deciding the best approach to treat risks, considering multiple objectives and limited operational resources (e.g. time, money, human resources), is a complex task.

The ISO standards 31000 [1] and 31010 [2] specify a general risk management process. These standards were built over an ideal scenario in which a company has enough resources for treating its risks over a period of time. Unlike this ideal scenario, real-world companies have limited operational resources. Depending on the amount of detected threats and on the available resources, the treatment process has to be iterated over several time slices. A new issue arises from this scenario: risk selection optimization. A risk manager has to determine the set of risks to be treated that best satisfies all objectives and resource restrictions of the company. Aiming to help asset managers and companies to protect their business, this work investigates the use of a state-of-the-art multi-objective genetic algorithm for selecting treatments. This treatment selection characterizes a combinatorial problem, to which we decided to apply a genetic algorithm. Nowadays, the state of the art for solving multi-objective combinatorial problems is the NSGA-II algorithm proposed by Deb et al. [3], [4]. The proposed approach has three main objectives: (i) select treatments that best fit in an execution time window, (ii) better use financial resources and (iii) minimize risk exposure. For testing purposes, a simulated environment was developed containing assets, risks and treatments. This simulated environment returns a list of evaluated risks to an external optimization module that decides which risks will be treated. Although the experiments performed in this paper select risks according to a few objectives (enough for most companies), the proposed approach can be extended to optimize risk selection considering several cooperative or competitive objectives. Preliminary results show that the proposed approach can successfully optimize the risk management process, reducing the risk exposure time through better use of operational resources.

This article is divided into seven main sections. The next three sections provide the required background knowledge: Sections II, III and IV cover the risk management process, information security risk management and the NSGA-II algorithm, respectively. Then, the proposed architecture is presented in Section V and validated in Section VI. Finally, results, conclusions and future work are commented upon in the last section.

II. RISK MANAGEMENT

The ISO risk management methodology defines steps for: (i) establishing the scope (internal and external contexts), (ii) assessing risks (identification, analysis and evaluation) and (iii) treating risks.

These steps can be graphically visualized in Figure 1. Successful risk management depends on effective communication and consultation with stakeholders¹. The entire process must have access to efficient communication mechanisms for alerting stakeholders about events that impact their assets. The risk management process also states that companies have to define their internal and external contexts before assessing risk. The context of the risk management process varies according to the needs of the analyzed company. This step can involve, but is not limited to: goals and objectives, responsibilities, scope, project relationships and the risk assessment methodology.

¹ A person or company that can affect, be affected by, or perceive themselves to be affected by a decision or activity.

Fig. 1: Risk management process according to ISO 31010 (establishing the context; risk assessment comprising risk identification, risk analysis and risk evaluation; risk treatment; supported by communication and consultation, and by monitoring and review).

The purpose of risk identification is finding, recognizing and recording risks. Risk analysis consists in determining the consequences and probabilities related to the identified risks. The consequences and their probabilities are then combined to determine a level for each identified risk. The last step of risk assessment involves comparing the analyzed risks with the risk criteria, in order to determine the significance and type of each risk. Risk assessment receives a scope as input and returns a list of evaluated risks. Finally, the risk treatment step involves selecting and agreeing to one or more relevant options for changing the probability of occurrence, the effect of the risk, or both, and implementing these options. Risk treatment involves a cyclical process of:
• assessing a risk treatment;
• deciding whether residual risk levels are tolerable;
• if not tolerable, generating a new risk treatment; and
• assessing the effectiveness of this treatment.
All these steps are embedded in a cycle that continuously re-evaluates the scope and the context aiming at identifying new risks.

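To make the output of risk assessment concrete, the sketch below (in Ruby, the language later used for the prototype in Section VI) shows one possible representation of an evaluated risk. It is only illustrative: the entity names mirror those used later in the simulation, and computing the level as impact times likelihood is an assumption of this example, since the standard leaves the combination method open.

    # Illustrative (not the authors' code): an evaluated risk as produced
    # by the assessment step. Combining consequence and probability by
    # multiplication is only one common convention.
    Treatment = Struct.new(:description, :cost, :time, :residual_risk)

    Risk = Struct.new(:name, :impact, :likelihood, :treatments) do
      def level
        impact * likelihood   # assumed combination rule
      end
    end

    backup_failure = Risk.new(
      "backup failure",
      8,     # consequence on an assumed 1..10 scale
      0.3,   # estimated probability of occurrence
      [Treatment.new("redundant backup site", 120, 16, 0.05)]
    )
    puts backup_failure.level   # => 2.4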
Currently the entire ISO risk management process does not specify any order or priority for the execution of risk treatments. Section V introduces an optimization step to the risk management process that deals with the priorities and order of risk treatments.

III. INFORMATION SECURITY RISK MANAGEMENT

Nowadays, some of the most valuable companies have information as their principal asset. Like any other kind of asset, information must be managed and can have associated risks, which need to be treated [5], [6]. The Information Security domain of security management incorporates the identification of information assets together with the development and implementation of policies, standards, guidelines and procedures. It addresses confidentiality, integrity and availability by identifying threats, classifying the assets and rating their vulnerabilities, so that effective security controls can be implemented. The process described in Section II can be used for identifying, analyzing and evaluating risks associated with information sources. For each identified risk a security policy (i.e. a treatment) is created. The concept of vulnerability is introduced here and denotes the absence of a safeguard. A minor threat has the potential to become a greater, or more frequent, threat because of a vulnerability. Combined with the terms asset and threat, vulnerability is the third part of an element that is called a triple in information risk management. Information security risks can be assessed using several techniques based on historical data or specialists' opinions [2]. The identified risks can be treated by mitigating vulnerabilities and implementing security policies. Similarly to the general risk management process, this specialized process does not specify priorities or an order for treating threats. Risk managers have to decide by themselves the best way to conduct the treatment process while balancing risk treatment and limited operational resources. Needless to stress, this is a fault-prone process, as it relies solely on human abilities to tackle threats, which could be too many.

IV. NON-DOMINATED SORTING GENETIC ALGORITHM – NSGA-II

NSGA-II is a fast and elitist multi-objective evolutionary algorithm developed in 2000 by Deb et al. at the Kanpur Genetic Algorithms Laboratory [3]. After ten years, this algorithm remains the state of the art for finding solutions to combinatorial and multi-objective problems. Over the years, NSGA-II has been used for finding solutions to problems in a large range of areas, such as economics and engineering [7], [8], [9], [10], [11], [12]. NSGA-II has features that set it apart from other techniques, such as:

• a fast non-dominated sorting procedure is implemented²;
• it implements elitism for multi-objective search, using an elitism-preserving approach;
• a parameter-less diversity preservation mechanism is adopted;
• the constraint handling method does not make use of penalty parameters; and
• it allows both continuous ("real-coded") and discrete ("binary-coded") design variables.

² This algorithm sorts the individuals of a given population according to their level of non-domination; the underlying dominance relation is sketched below.

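To illustrate the dominance relation behind the fast non-dominated sorting procedure, a didactic Ruby sketch is given below (Ruby is the language used for the prototype in Section VI). This is not the NSGA-II implementation used in the experiments; objective vectors are assumed to be arrays of values to be minimized.

    # A solution a dominates b when it is no worse in every objective and
    # strictly better in at least one (all objectives minimized here).
    def dominates?(a, b)
      a.zip(b).all? { |ai, bi| ai <= bi } &&
        a.zip(b).any? { |ai, bi| ai < bi }
    end

    # Naive extraction of the first (non-dominated) front of a population.
    def first_front(population)
      population.reject { |p| population.any? { |q| dominates?(q, p) } }
    end

    # Three-objective example; values are arbitrary.
    solutions = [[-10.0, 2.0, 5.0], [-8.0, 1.0, 4.0], [-10.0, 3.0, 6.0]]
    p first_front(solutions)   # => [[-10.0, 2.0, 5.0], [-8.0, 1.0, 4.0]]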
Fig. 2: Proposed optimization step inside the risk management architecture (establishing the context; risk assessment; risk optimization; risk treatment; supported by communication and consultation, and by monitoring and review).

NSGA-II is shown in Algorithm 1 and is composed of three sub-algorithms: (i) the fast non-dominated sort algorithm, (ii) the crowded distance assignment algorithm and (iii) the crowded comparison operator (see [3] for more details).

Algorithm 1 NSGA-II Algorithm
  Initialize randomly a population P with N individuals
  Evaluate all the individuals of the population
  Separate individuals in Pareto fronts using dominance
  repeat
    repeat
      Select parents using binary tournament
      Create a new individual using crossover and mutation
      Evaluate the fitness of the individual
      Add the solution to the population
    until N new individuals are created
    Separate individuals in Pareto fronts using dominance
    Evaluate the crowding distance of each individual
    Discard the worst individuals
  until the maximum number of iterations is reached

Like any other genetic algorithm, NSGA-II has mutation and crossover parameters; however, its fitness evaluation process is a composition of evaluations of the target objectives. The fast non-dominated sort algorithm sorts the solutions according to their level of non-domination. The next section presents the use of NSGA-II for automatically selecting risks to be treated through a new optimization step.

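As a complement, the crowded distance assignment used in Algorithm 1 can be sketched as follows. This is a simplified illustration under the same assumptions as the previous sketch (each individual is reduced to its vector of objective values); boundary solutions receive an infinite distance so that they are always preserved.

    # Crowding distance for one non-dominated front, given as an array of
    # objective vectors. Returns distances aligned with the input order.
    def crowding_distances(front)
      n = front.size
      dist = Array.new(n, 0.0)
      return dist if n == 0
      front.first.size.times do |k|
        order = (0...n).sort_by { |i| front[i][k] }
        dist[order.first] = dist[order.last] = Float::INFINITY
        span = (front[order.last][k] - front[order.first][k]).to_f
        next if span.zero?
        (1..(n - 2)).each do |j|
          dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / span
        end
      end
      dist
    end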
V. OPTIMIZING RISK ASSESSMENT OUTPUT

Given the above mentioned problem, we propose the inclusion in the ISO model of an optimization step between the risk assessment and risk treatment steps, aiming to automatically decide which risks have to be treated in order to satisfy the objectives and needs of companies. Figure 2 indicates the position of the proposed optimization step inside the traditional risk management architecture. Risk assessment returns a list of identified, analyzed and evaluated risks to the optimization step, which decides which risks have to be treated. This decision is suggested considering "secondary" objectives besides risk reduction, such as time and financial resources. In this paper, the proposed optimization step has three main objectives³:
• reducing total risk – selects a set of high-impact and high-likelihood risks to be treated;
• costs – selects a set of risks that best fits the available amount of financial resources for treating risks; and
• time – selects a set of risks that best fits the available amount of time for treating risks.

³ As stated before, the proposed architecture supports more than three objectives. The number of objectives can change according to the needs of the analyzed company.

Optimizing these three objectives leads to a set of risks to be treated that considers the limited operational resources and minimizes the company risk exposure time. The method used in the optimization step has to find risk treatments that promote reductions in the overall risk level and return a low level of residual risk. These objectives can be represented by the following three equations:

total_risk(R) = \sum_{r \in R} (risk_level_r - residual_risk_r)   (1)

time(R) = \sum_{r \in R} treatment_time_r - time_window   (2)

budget(R) = \sum_{r \in R} treatment_cost_r - budget_window   (3)

Where R is a set of risks selected in the search space (i.e. the output of the risk assessment step). The time_window constant represents the amount of time reserved for treating risks. This constant can be estimated by summing the work time of all workers. For instance, if the company has an operation team with 5 members and each worker works 6 hours per day, then one week (5 workdays) of work contains 150 hours, which can be used as the time_window constant. The budget_window constant represents the amount of money available to be spent on treating risks over the iterations. For accomplishing the proposed objectives, the above mentioned equations are to be optimized using:

\min_{R} [ total_risk(R), time(R), budget(R) ].   (4)

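For illustration, Equations (1)-(3) can be evaluated for a candidate set of risks with a few lines of Ruby. The hash keys (:level, :residual, :time, :cost) and the example numbers are assumptions of this sketch, not values taken from the experiments.

    # Evaluate the three objectives of Equations (1)-(3) for a candidate
    # set of risks R (here an array of hashes with assumed keys).
    def objectives(risks, time_window, budget_window)
      total_risk = risks.sum { |r| r[:level] - r[:residual] }
      time       = risks.sum { |r| r[:time] } - time_window
      budget     = risks.sum { |r| r[:cost] } - budget_window
      [total_risk, time, budget]
    end

    candidate = [
      { level: 2.5, residual: 0.5, time: 16, cost: 120 },
      { level: 1.75, residual: 0.25, time: 40, cost: 60 }
    ]
    p objectives(candidate, 150, 200)   # => [3.5, -94, -20]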
This minimization step can be solved by several methods [13], [14]. Other optimization algorithms based on metaheuristics are also plausible approaches [15], [16], [17] to tackle the above mentioned minimization problem due to their capabilities to deal with discrete and large search spaces. For this paper, a solution based on genetic algorithms was chosen for composing the proposed optimization step. The Non-Dominated Sorting Genetic Algorithm (see Section IV) was used for finding an optimal solution which satisfies the analyzed objectives, mainly because of its multi-objective capabilities. The next subsections contain the specificities and operators used for composing our genetic-algorithm-based optimization module.

A. Search Space

The search space is composed of the power set of all risks returned by the risk assessment, excluding the empty set. The mathematical representation of the search space is:

P = \mathcal{P}(S) \setminus \{\emptyset\}   (5)

n = 2^{|S|} - 1   (6)

Where S is the set of all risks returned by the risk assessment, P is the search space composed of all possible combinations of these risks and n is the size of the search space. For instance, if the risk assessment step identifies 3 risks, then the search space is composed of 7 elements. Assessing the risks of a medium-sized company can return thousands of risks, which produces a huge search space.

B. Crossover Operator

The selection mechanism used was binary tournament with multi-parent recombination [18], where three individuals are randomly chosen to generate the next-generation individual. Each individual is composed of a set of risks (i.e. its chromosome), and chromosomes may have different sizes. For defining the child size, the following heuristic was used:
• if all parents have the same size then the child will have the same size as the parents; and
• if one parent is smaller than the others then the child will have a random size between the bigger and smaller parents.
For defining the crossover "cut points" the following heuristic was used:
• if all parents have the same size then two cut points are randomly selected; and
• if one parent is smaller than the others then the child has the first "cut point" at the position corresponding to the size of the smaller parent and another "cut point" in the middle of the last part of the chromosome.
Different scenarios of crossover can be observed in Figure 3. In Figure 3.a, all the parents (Pn) have the same size, so the child (C) inherits the parents' size and the "cut points" are randomly selected. Figure 3.b shows a crossover between parents with different sizes: the child has a size between its bigger and smaller parents, and the "cut points" are selected by marking the position given by the smaller parent's size and dividing the last part into two parts. A simplified code sketch of this strategy is given after Figure 3.

Fig. 3: Graphical representation of the crossover strategy (panel (a): parents P1-P3 of equal size; panel (b): parents of different sizes).

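The crossover heuristic described above can be approximated by the short Ruby sketch below. It is a simplified reading of the strategy rather than the authors' operator: a chromosome is an array of risk identifiers, and the rule that maps each segment to a particular parent is an assumption, since the text does not fully specify it.

    # Simplified three-parent crossover following the size and cut-point
    # heuristics in the text. Segment-to-parent assignment is assumed.
    def crossover(parents)
      sizes = parents.map(&:size)
      same_size = sizes.uniq.size == 1

      child_size = same_size ? sizes.first : rand(sizes.min..sizes.max)

      cut1, cut2 =
        if same_size
          [rand(child_size), rand(child_size)].sort     # two random cut points
        else
          first = sizes.min                             # smaller parent boundary
          [first, first + (child_size - first) / 2].sort
        end

      (0...child_size).map do |i|
        donor = i < cut1 ? parents[0] : (i < cut2 ? parents[1] : parents[2])
        donor[i % donor.size]                           # wrap when a donor is shorter
      end
    end

    p crossover([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
    # e.g. [1, 2, 7, 12] (depends on the random cut points)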
C. Mutation Operator

For exploring the search space, the proposed algorithm takes a random risk (from the risk pool generated by the risk assessment step) and inserts it into the chromosome of the new individual according to the following strategy:
• if the total treatment time of the individual is less than the time window, then the new risk is added at the end of its chromosome;
• otherwise, the new risk replaces an existing risk in the chromosome.

D. Proposed algorithm dynamics

The output of the proposed optimization step is a set of risks to be treated. This means that the proposed risk management process does not treat all risks at once. Returned residual risks are merged with the risk pool, which will be analyzed and evaluated in the next cycle of the algorithm. Algorithm 2 represents the proposed risk management process including the optimization step.

Algorithm 2 Proposed Risk Management Algorithm
  Establish the context
  Assess risk and create an initial risk pool
  repeat
    Review internal and external contexts
    Identify new risks
    Merge new risks with the risk pool
    if there are changes in context then
      Analyze all risks in the risk pool
      Evaluate all risks in the risk pool
    else
      Analyze new risks
      Evaluate new risks
    end if
    Optimize risk treatment selection
    Treat selected risks
    Merge residual risks with the risk pool
  until total risk is acceptable

Real-world companies have a dynamic environment and the total risk level tends to increase over time. Therefore, the proposed algorithm must be executed continuously to ensure an acceptable risk level⁴ over time. As with ISO 31000 and ISO 31010, the proposed algorithm is generic and can be applied for managing risks of different categories of assets. In the next section the proposed algorithm is tested on managing risks associated with information assets.

⁴ The company's acceptable risk level is defined during the risk criteria creation.

VI. EXPERIMENT

For testing the proposed approach, the risks associated with information security were managed using the optimization module. This section is divided into three parts: (i) infrastructure and technologies, (ii) simulation description and (iii) experiment execution and analysis.

A. Infrastructure and technologies

Experiments were carried out on an Intel Core i7 870 CPU at 2.93 GHz with 4 GB of RAM at 1333 MHz (0.8 ns), running the Linux Ubuntu 11.10 operating system. The Ruby programming language was used for developing the simulator and the optimization module (i.e. NSGA-II).

B. Simulated environment

A simulated company was created; it contains all relevant characteristics for the performed experiments, such as assets, associated risks, treatments (and residual risks), vulnerabilities and operational resources. This simulated environment creates an initial setup according to a specified configuration. For each iteration, the simulator receives a set of risks to be treated as input and returns an updated pool of risks. The evolution of the total risk was observed using different strategies for selecting risks to be treated. The proposed simulation has parameters that allow fine adjustments to represent specific scenarios and to control the simulation dynamics. There are 10 main parameters that control the simulation and afford the model great flexibility to adhere to real-world companies:
• amount of assets: number of valuable objects related to the managed scope;
• amount of risks per asset: maximum number of risks for each asset;
• amount of treatments per risk: maximum number of possible treatments for each identified risk;
• vulnerabilities per asset: maximum number of found vulnerabilities per asset;
• residual risk probability: probability of generating a new risk after the treatment process;
• new risks rate: rate at which new risks appear;
• new vulnerabilities rate: rate at which new vulnerabilities appear;
• budget: amount of money for managing risks per iteration;
• time window size: amount of time for managing risks per iteration; and
• acceptable risk level: a threshold which represents a safe risk condition.
There are two categories of parameters: the first one is responsible for setting the initial state (e.g. number of assets or amount of risks per asset) and the second one is used for controlling the simulation dynamics (e.g. new risks rate or new vulnerabilities rate). Vulnerabilities affect risk levels and are mitigated once all the associated risks are treated. Table I shows the values used to set up the simulation for all experiments described in this section.

TABLE I: Simulated Company Configuration Values

Parameter                      Value
amount of assets               100
amount of risks per asset      4
amount of treatment per risk   2
vulnerabilities per asset      2
residual risk probability      0.02
new risks rate                 0.002
new vulnerabilities rate       0.001
money budget                   200
time window size               400
acceptable risk level          0.01

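For reference, the configuration of Table I maps naturally to a Ruby parameter hash. The hash below only mirrors the table; the Simulator constructor shown in the comment is hypothetical, as the authors' simulator interface is not published in the paper.

    # Parameter hash mirroring Table I (keys are descriptive, not the
    # authors' actual option names).
    CONFIG = {
      amount_of_assets: 100,
      risks_per_asset: 4,
      treatments_per_risk: 2,
      vulnerabilities_per_asset: 2,
      residual_risk_probability: 0.02,
      new_risks_rate: 0.002,
      new_vulnerabilities_rate: 0.001,
      money_budget: 200,
      time_window_size: 400,
      acceptable_risk_level: 0.01
    }
    # simulator = Simulator.new(**CONFIG)   # hypothetical constructor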
Most of these parameters were estimated using data collected from a real company in Brazil, making the simulation as plausible as possible. Unfortunately, the data set used to calibrate the simulation cannot be attached to this paper due to a non-disclosure agreement. The proposed simulator has five main entities: Simulator, Asset, Risk, Vulnerability and Treatment. These entities have relationships between themselves and follow specific roles during the simulation. The main entity is the Simulator, which represents the entire company and provides controlling mechanisms. The company has Assets, which have a value and a category. Assets can also have Risks and Vulnerabilities. Each Risk has an impact, a likelihood and Treatments. Vulnerabilities directly affect Risks and have an impact and an exploitation level. Treatments can reduce the Risk impact and likelihood and can return residual Risks. By using these five entities, the provided configuration and the described dynamics, a company scenario was composed and used for verifying the effectiveness of the proposed approach for risk treatment selection.

C. Experiments execution and analysis

The virtual environment was generated using the configuration shown in Table I. This initial environment has 100 assets and approximately 268 risks. Our objective is to automatically find a subset of risks that maximizes risk reduction and respects the company's operational resource limitations. All possible combinations of found risks are represented by the power set of the risks returned by the simulator. This means that the search space has 2^268 − 1 possibilities. Due to the size of the search space, the combinatorial nature of the treated problem and the multiple objectives dealt with at the same time (i.e. exposure time reduction, better utilization of operational time and better utilization of budget), the experiments performed in this section use the NSGA-II algorithm for finding optimized solutions. For each simulation iteration the NSGA-II algorithm is executed to find a new optimized solution for the updated simulated scenario (after risk treatment). A population of 100 individuals was used for exploring the search space. Individuals undergo crossover and mutation over the generations, guiding an evolutionary search process. At the end of the execution, the NSGA-II algorithm returns a Pareto front containing optimized solutions for the treated problem. One of these solutions is selected to be passed as input to the next iteration of the simulation. The simulation was executed over 150 iterations for all scenarios simulated in this section, and data about the risk evolution was collected. NSGA-II was configured to run over 1000 iterations. Finally, each experiment scenario was executed 30 times and an average was calculated for statistical validation.

The optimization module impacts the selection of risks to be treated and, directly, the total risk level. Along the experiments this influence was observed, analyzed and compared with a non-optimized selection approach, called simple selection throughout the experiments. Simple selection picks risks ordered by level until the time window or money budget is filled. The simple selection approach represents the most common approach to manage risks in companies, where risks are identified, ordered by level and treated sequentially while a time window or a security budget (i.e. operational resources) is available. Here, a solution inside the returned Pareto front is selected to feed the simulation according to the following criteria:
• select the best solutions for the time window fulfilling objective; then
• select the best solutions for the total risk minimization objective; then
• select the best solutions for the financial budget fulfilling objective.

Figure 4 shows the evolution of the amount of risks per iteration from three different perspectives: (i) without risk treatment, (ii) simple treatment selection and (iii) optimized treatment selection. Risks are mitigated faster by using the optimized selection process⁵. This simulated scenario promotes a reduction in the company risk exposure time. The "without selection" data series shows the simulation behavior without risk treatment (only the simulation). The "new risks rate" can be observed through the creation of new risks over the iterations. These new risks are composed of identified risks and residual risks, which are controlled by the "new risks rate" and "residual risk probability" parameters, respectively.

⁵ The initial number of risks is different because the experiment uses three simulation instances. Risks inside each instance are initially created using the "amount of risks per asset" parameter.

Fig. 4: Amount of risks over iterations (without selection, simple selection and optimized selection).

Figure 5 shows a comparison of the total risk level over iterations between the simple and optimized selection approaches. The optimization algorithm has to select critical risks aiming at the total risk level reduction of the analyzed scope.

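The simple selection baseline compared against in Figures 4 and 5 is essentially a greedy procedure. A hedged Ruby sketch of it is given below; the field names follow the earlier examples and are assumptions, not the authors' implementation.

    # Greedy "simple selection": take risks in decreasing order of level
    # while the remaining time window and budget still cover their treatment.
    def simple_selection(risks, time_window, budget)
      selected = []
      risks.sort_by { |r| -r[:level] }.each do |r|
        next if r[:time] > time_window || r[:cost] > budget
        selected << r
        time_window -= r[:time]
        budget -= r[:cost]
      end
      selected
    end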
Fig. 5: Total risk level over iterations (simple selection vs. optimized selection).

Risk treatment selection using the proposed approach reaches an acceptable risk level 20% faster than the simple selection approach. For example, supposing that the acceptable total risk level is 100, the optimized approach reaches this level using 20 fewer iterations than the simple selection approach. This improvement represents a huge gain of resources for the analyzed company. The resources saved along the iterations can be human resources, time or financial resources. Besides that, the company's total exposure time is reduced and the probability of occurrence of critical incidents on the analyzed scope is reduced. Figure 6 shows a comparison between the simple selection approach and the optimized approach regarding the instantaneous time spent over iterations. We can observe that the optimized selection better uses the time available per iteration. The proposed solutions are closer to the optimal solution (i.e. iteration time equal to the window size). Hence, the operational team is used more effectively and has less idle time.

Fig. 6: Exposed time over iterations (simple selection, optimized selection and optimum value).

Results shown in Figure 6 also support the affirmation that the proposed approach reduces the total risk exposure time. The best solutions provided by NSGA-II have an error margin of 5% when compared with the optimal solution. This error does not represent a big problem during a risk management process because the time window fulfilling objective is, most of the time, flexible, since a risk manager can make small adjustments to the size of the time window. Figure 7 shows a comparison between the simple selection approach and the optimized approach regarding the application of financial resources over iterations. The optimized selection better uses the financial resources available per iteration and proposes solutions closer to the optimal solution (i.e. the iteration money budget) than the simple selection. This objective is less flexible than the time window filling objective, since the budget for treating risks is, most of the time, controlled by non-technical departments inside the company (e.g. financial and accounting departments).

Fig. 7: Spent money over iterations (simple selection, optimized selection and optimum value).

Each simulated scenario invokes the NSGA-II algorithm using 1000 iterations and a population of 100 individuals along 150 simulation iterations. This setup takes 6.5 hours to perform all iterations using the specified hardware setup. On average, each simulation iteration takes 2.5 minutes to be executed. This amount of time is a very plausible delay for each risk management iteration. Table II shows a list of different configurations and the total time spent. The results indicate that the proposed approach can be scaled up for managing risks of larger scopes and companies⁶.

TABLE II: Execution Time for Different Simulation Setups

Population size   Amount of assets   Amount of risks   Total delay
100               100                253               6.3 hours
100               1000               2780              2.4 days
400               1000               2780              4.1 days

⁶ Four days is an acceptable amount of time for selecting risks of big companies with 1000 assets and more than 2700 identified risks.

All experiments were prototyped using the Ruby scripting language, which is interpreted and does not support native threads. This category of programming language, despite its expressiveness and ease of use for prototyping computational systems, has a drawback in terms of performance. Therefore, the results shown in Table II can be improved by using compiled programming languages with native thread support. Such a multithreaded approach can also take advantage of recent multi-core architectures for further increasing the performance. Another approach to enhance the performance is to use Graphics Processing Units, which provide a high level of parallelism.

VII. CONCLUSION

This paper proposes an optimization step for the traditional risk management process which is responsible for selecting risks to be treated, aiming at optimizing the organization's objectives. For validating the proposed approach, a simulated environment was created which has assets, generates a pool of risks and receives risks to be treated. An information security scenario was created for testing the proposed optimization module. Finally, due to the conflicting objectives and combinatorial nature of the treated problem, a multi-objective genetic algorithm was used. Experiments show that the optimized approach guides the simulated company to an acceptable risk level, on average, 20% faster than a traditional approach. This means that operational resources are better used and the company risk exposure time is reduced, protecting the treated scope against threats and eventual losses. The technical team was used more efficiently, since it had less idle time. Experiments also show that the proposed approach can be scaled for analyzing large scopes that contain more assets. Although the performed experiments managed risks related to information assets aiming to increase their security level, the proposed approach can be applied for managing risks related to any type of asset.

As future work, we are planning to verify the performance of the proposed approach using parallel computation on Graphics Processing Units. Different mutation strategies also have to be investigated for determining the best approach for mutating a newly generated individual after crossover. This investigation affects the capability of the algorithm to better explore the search space.

More scenario configurations must be investigated for validating the scalability and accuracy of the proposed optimized risk management process. Finally, we have to validate the proposed method in a real scenario, managing risks in a real company, in order to observe the emergence of new issues and objectives to be dealt with.

REFERENCES

[1] International Organization for Standardization, "Risk management - Principles and guidelines," ISO 31000, Oct. 2009.
[2] ——, "Risk management - Risk assessment techniques," ISO 31010, 2009.
[3] K. Deb, S. Agrawal, A. Pratap, and T. Meyarivan, "A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II," in Parallel Problem Solving from Nature PPSN VI. Springer, 2000, pp. 849–858.
[4] J. Durillo, A. Nebro, and F. Luna, "Convergence speed in multiobjective metaheuristics: Efficiency criteria and empirical study," International Journal, pp. 1344–1375, May 2010.
[5] G. Stoneburner and A. Goguen, "Risk management guide for information technology systems: Recommendations of the National Institute of Standards and Technology," NIST Special Publication, 2001.
[6] R. Krutz and R. Vines, The CISSP Prep Guide. John Wiley & Sons, 2002.
[7] R. T. F. A. King, H. C. S. Rughooputh, and K. Deb, "Solving the multiobjective environmental/economic dispatch problem with prohibited operating zones using NSGA-II," in Proceedings of 2011 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing. IEEE, Aug. 2011, pp. 298–303.
[8] X. Jiang, C. Yongqiang, W. Xiaoqing, Z. Minhui, and X. Liu, "Optimum design of antenna pattern for spaceborne SAR performance using improved NSGA-II," in 2007 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2007, pp. 615–618.
[9] J. Jia, J. Chen, G. Chang, J. Li, and Y. Jia, "Coverage optimization based on improved NSGA-II in wireless sensor network," in 2007 IEEE International Conference on Integration Technology. IEEE, Mar. 2007, pp. 614–618.
[10] K. K. Mishra, A. Kumar, and A. Misra, "A variant of NSGA-II for solving priority based optimization problems," in 2009 IEEE International Conference on Intelligent Computing and Intelligent Systems, vol. 1. IEEE, Nov. 2009, pp. 612–615.
[11] P. Murugan, S. Kannan, and S. Baskar, "Application of NSGA-II algorithm to single-objective transmission constrained generation expansion planning," IEEE Transactions on Power Systems, vol. 24, no. 4, pp. 1790–1797, Nov. 2009.
[12] S. Mishra, G. Panda, S. Meher, R. Majhi, and M. Singh, "Portfolio management assessment by four multiobjective optimization algorithms," in 2011 IEEE Recent Advances in Intelligent Computational Systems. IEEE, Sep. 2011, pp. 326–331.
[13] I. Das and J. E. Dennis, "Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems," SIAM Journal on Optimization, vol. 8, no. 3, p. 631, Jul. 1998.
[14] A. Messac, A. Ismail-Yahaya, and C. A. Mattson, "The normalized normal constraint method for generating the Pareto frontier," Structural and Multidisciplinary Optimization, vol. 25, no. 2, pp. 86–98, 2003.
[15] C. J. A. Bastos Filho, F. de Lima Neto, A. Lins, A. Nascimento, and M. Lima, "A novel search algorithm based on fish school behavior," in Systems, Man and Cybernetics, 2008. SMC 2008. IEEE International Conference on. IEEE, 2008, pp. 2646–2651.
[16] A. Colorni, M. Dorigo, and V. Maniezzo, "Distributed optimization by ant colonies," in Proceedings of the First European Conference on Artificial Life, 1991.
[17] C. Coello, G. Lamont, and D. Van Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems. Springer-Verlag New York, 2007, vol. 5.
[18] A. Eiben, P. Raué, and Z. Ruttkay, "Genetic algorithms with multi-parent recombination," pp. 78–87, 1994.
