Heuristic-Based Architecture Generation for Complex Computer System Optimisation

Cameron Maxwell, Artem Parakhine, John Leaney, Tim O’Neill, Mark Denford
Institute for Information and Communication Technologies
University of Technology, Sydney
[cmaxwell | aparakhi]@it.uts.edu.au
[John.Leaney | Tim.ONeill | Mark.Denford]@uts.edu.au

Abstract

Having come of age in the last decade, the use of architecture to describe complex systems, especially in software, is now maturing. With our current ability to describe, represent, analyse and evaluate architectures comes the next logical step in our application of architecture to system design and optimisation. Driven by the increasing scale and complexity of modern systems, designers have been forced to find new ways of managing the difficult and complex task of balancing the quality trade-offs inherent in all architectures. Architecture-based optimisation has the potential not only to provide designers with a practical method for approaching this task, but also to provide a generic mechanism for increasing the overall quality of system design. In this paper we explore the issues that surround the development of architectural optimisation and present an example of heuristic-based optimisation of a system with respect to concrete goals.

1. Introduction

A complex computer system is not defined purely by the extent of its functionality. In a comparison between two distinct systems created to perform an identical function, the one which demonstrates the greatest flexibility will prevail. Thus, it is imperative that the issues related to non-functional attributes are addressed in the design process through consideration of the system architecture.

The chosen methodology for addressing these attributes must take into account the fact that achieving the desired non-functional qualities may involve potentially conflicting architectural decisions. To avert this conflict, the design process must not be derailed by allowing any one non-functional quality to overwhelm and suppress all others. Furthermore, the design methodology must allow for the integration and management of a potentially high number of design goals.

We believe that in order to be practically applicable the methodology must possess three major facilities:

• it must allow for effective expression of the gross organisation of the complex computer system architecture, including all members and their dependencies;
• it must facilitate management and fulfilment of multiple, potentially conflicting, goals;
• once an intermediary or a final result has been achieved, there should be some way of communicating the current outcomes of the process to the designer.

The main purpose of introducing optimisation and refinement built upon architectural modelling into the complex computer systems design process is to advance the overall practice to a new level. Achieving architectural optimality at the design stage would translate into effective spending of the budgeted funds in order to attain the most desired qualities in the final product.

This paper provides a description of the issues associated with attempting to fulfil the aforementioned requirements. These issues are then demonstrated and explored further in an example of the activities associated with optimising an architecture with respect to its security and reliability characteristics.

1.1. Motivation

Our motivation in developing a method for optimising complex computer systems is that we are seeking a way in which to manage the task of quality trade-offs. More specifically, we are seeking a repeatable mechanism through which we can control, guide and ultimately automate the trade-off process.

It has been well demonstrated by Bosch, Lundberg and Häggander that architectural design qualities can have adverse effects on each other. In [12] they showed through empirical studies of large telecommunications software that there is indeed a trade-off between performance and maintainability. In conjunction with Bengtsson they have also suggested that the management of these trade-offs should be done through the use of design guidelines [11]. These guidelines are rules and suggestions on ways in which each quality attribute can be improved during system design, especially when in conflict with one another. These results, in conjunction with our own experiences with manual optimisation, have given us the confidence that a guideline, or heuristic, based approach to continuous improvement and optimisation of complex system architectures is indeed possible.

2. Previous Advances

2.1. Software Architecture Optimisation

Bosch & Bengtsson [1] conducted work aimed at determining the theoretical limits of architectural optimality. Specifically, they sought to use this to gauge not just how different architectures compare to each other, but how they compare to the ideal, or optimal, architecture. They demonstrated that by using change scenarios for a given architecture, it was possible to assess not only the maintenance effort required to maintain the architecture, but also what effort would be required for the optimal architecture. Although this work does not directly provide a mechanism for optimising software architecture, it does give us a useful way to measure the degree of optimality. Further, it provides a useful basis upon which we can investigate ways in which maintainability can be improved.

Kazman [8] prescribes a method for evaluating architecture optimality with respect to a multitude of competing quality drivers (security, performance, availability). The method is introduced as a means to allow us to “reason about an architectural decision” based on the effect its application has on the fitness of the architecture to the desired parameters. The main focus of the approach resides in facilitating a comprehensive definition of the elements of the architecture, followed by an analysis of the trade-offs by identifying key areas of concern. The final outcome of the approach should be the identification and resolution of conflicts between various quality attributes early in the process, when it can be accomplished at a relatively low cost.

Gokhale [5], on the other hand, has attempted to provide a practical basis for an optimisation framework aimed at resolving the conflict between the cost and reliability attributes of a computer based system at the architectural level. Gokhale showed that architecture-based analysis can be performed on several conflicting non-functional qualities. Such analysis may be possible because the trade-off between the desired qualities at the individual component level can be described as monotonic even though the general solution space is discontinuous [5]. Consequently, a number of well known optimisation techniques, such as evolutionary algorithms or other forms of numerical analysis, can be applied as means of traversing the monotonic space.

2.2. Multidisciplinary Analysis, Design & Optimisation

Multidisciplinary analysis and design arises from the difficulties inherent in the construction of complex systems. Aircraft, rockets, space shuttles and motor vehicles are all examples of multidisciplinary systems, as their design and construction requires the expertise of specialists from many different fields, or disciplines. The aerospace industry, led to a large degree by NASA, has pioneered the use of multidisciplinary design and analysis techniques. These techniques are based on the notion that each sub-problem should be dealt with by a specialist in that field, and that these specialists should work in cooperation to achieve a better final design. This separation of concerns allows each specialist to use those methods that he thinks are most appropriate for his analysis and design, before he re-presents his design to the other specialists.

The modern era of multidisciplinary design and optimisation emerged in 1984, when the 1st NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization was held. Since then this field of endeavour has been refined into what has become known as Multidisciplinary Design Optimisation (MDO). In the last decade the prevalence and applicability of MDO has increased in leaps and bounds. Today MDO is used in many other areas, such as motor vehicle design [9], integrated circuit design [10], and container ship design [4], to name but a few. This research is of particular interest to us with our focus on optimising complex computer systems, especially as the use of separate experts or specialists in analysing software architecture for design qualities is already an accepted practice [8].

3. Elements of Design Optimisation

3.1. Design Inputs

The process of system design, or re-design, is governed by various types of information representing the input parameters for which a suitable solution must be found. Effectively, this information determines both the overall design direction and scope. As such, the types of input present in the design process can be separated into two broad categories: Requirements & Goals, and Constraints.

The system Requirements, both functional and non-functional, serve as the basis for the original design and represent the main driving force behind the need for optimisation. Primarily this is caused by the presence of multiple points of consideration, each bound to a particular discipline. Such an arrangement means that a single requirement on the outcomes of the overall process may translate into a myriad of input requirements for each specialised optimiser. Furthermore, since the outcomes of the optimisation may involve a structural change to the system, the requirements must be comprehensive enough to cover all aspects of effecting such change within a complex computer system. As Bucci [2] has shown, even the relatively simple problem of locating a database within a spatially distributed organisation has many implicit factors which must be incorporated into the process of choosing an appropriate design. These factors may be quantitative requirements, such as the costs associated with maintaining a new structure of the system, as well as qualitative requirements, exemplified by a desire for flexibility to accommodate future changes.

Once the definitive list of requirements has been obtained, it must be integrated into the optimisation process in such a way as to be rigid enough to guide it and flexible enough to avoid inhibiting it. This can be facilitated by defining various types of optimisation Constraints. Inputs of this type serve two distinct purposes. Firstly, they bound the problem by limiting the possible solution space, thus often drastically simplifying the optimisation process. Secondly, they can provide criteria for determining when the process should be terminated.

3.2. Multi-objective Design

In any optimisation problem, as soon as we start to optimise with respect to more than one objective or goal, the complexity of the problem increases significantly. Specifically, as soon as we have at least two goals we introduce the possibility that these goals may be in contention, meaning that as we optimise towards one, we detrimentally affect the other. The management of this, to ensure that the final optimised system is the “best” overall system available, is the domain of multi-objective design and trade-off analysis. In software architectures the approach to this thus far has been through the use of specific methods developed around scenarios and possibilities [7, 8].

Other engineering disciplines have approached the problem of multi-objective design from a different angle. They deal with the problem in two ways. Firstly, they define an objective function which is used to evaluate and rank different solutions with a bias towards certain criteria (a concept we will deal with in more detail later). Secondly, they provide a number of simple mechanisms through which trade-off analysis can occur. Perhaps the most advanced method currently used for multi-objective design is Pareto optimisation [16]. Pareto optimisation uses vector-based positioning to determine which solutions are the better trade-offs between competing objectives. The position of these solutions, with respect to other solutions in the objective space, is used to determine which solutions are optimal.

Pareto optimisation has several distinct advantages over the other methods. Firstly, it treats all objectives equally; that is, it does not require any prior importance to be placed on them, which makes it very useful for initial design exploration. Secondly, Pareto optimisation does not produce just one solution, but several, giving the designer a number of choices as to which solution to use. This allows the designer to use domain-specific prior knowledge and expertise to make a considered judgement given a number of possibilities. The problem with Pareto optimisation, however, is that in comparison with the other methods available it is very computationally expensive. This is because a history of previous solutions needs to be maintained, both to maintain the Pareto front and to provide the designer with the collection of possible solutions. Pareto optimisation also suffers from the fact that once the number of objectives increases beyond two or three, it becomes conceptually difficult to understand the process, and may not be intuitive to the designer.
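To make the dominance test concrete, the sketch below is a minimal Python illustration of our own (not taken from [16]); it assumes both objectives are to be minimised and filters a set of candidate solutions down to its Pareto front:

    def dominates(a, b):
        # a dominates b if a is no worse in every objective and strictly
        # better in at least one (all objectives are minimised here).
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(solutions):
        # Keep only solutions that no other solution dominates; this is
        # the growing set a Pareto optimiser must maintain as history.
        return [s for s in solutions
                if not any(dominates(other, s) for other in solutions if other != s)]

    # Each tuple is (objective 1, objective 2), e.g. (1 - reliability, cost).
    candidates = [(0.07, 3200), (0.17, 1200), (0.07, 5200), (0.04, 7600)]
    print(pareto_front(candidates))  # [(0.07, 3200), (0.17, 1200), (0.04, 7600)]

Note that the history of candidates grows with every solution generated, which is the source of the computational expense described above.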

3.3. Objective Function

The search for the “best” solution can be based on a simple evaluation of whether a solution meets the requirements. Ideally, however, we would like to determine which of the outcomes is the “best” of all the correct solutions. To do this, an effective measure by which we can evaluate solutions, and then compare them, is required. For this measure we introduce the notion of an objective function, as is present in Multidisciplinary Design Optimisation (MDO). An objective function is a means by which a quantitative measure of the properties of a solution, and how well it meets its goals, can be calculated. Objective functions exist in many disciplines under different guises. In genetic algorithms, for example, the objective function is referred to as the “fitness” value. An objective function ideally represents the weighting of all goals for the problem.

The formulation of an objective function is by no means a trivial task and is, in fact, quite sensitive to errors. An incorrectly formulated objective function can lead the optimisation process in completely the wrong direction, and to the wrong final solution. It is therefore critical that the objective function is a true representation of the designer’s goals with respect to the system. Take, for example, the case where a designer chooses to leave a property out of the objective function because he feels it is not a priority. As the property is not considered in the objective function, it will be completely ignored by the optimisation process. The resulting system could therefore have an excellent objective value, based on the designer’s chosen priorities, but on a practical level be completely useless.

Further to this, the objective function can be a critical point at which trade-off determinations are made. In an optimisation process which uses an objective function for rating each possible solution, there is an implicit trade-off when the objective value is calculated. As the calculation is biased towards certain properties, as designated by the designer, solutions with these properties will have higher objective values and thus be given priority over other solutions via a positive trade-off. It is for these reasons that it has been argued that the formulation of the objective function is the most critical step in the optimisation process [13]. MDO frameworks do not directly address the issue of formulating the objective function. However, optimisation research has developed a number of methods by which this may be approached, including weighted-sum [6], physical programming [13] and utility theory [15].

3.4. Feasibility

Given the concept of optimising complex computer systems, it is essential for us to consider those factors that may affect the overall feasibility of the process. Whether we will be able to practically effect system quality optimisation is dependent on our ability to deal with these factors. At present we have identified three major factors which will need to be assessed to ensure the feasibility of our optimisation process: scalability, sensitivity and composite qualities.

3.4.1. Scalability

Managing the process of optimising system architecture is a particularly important task, and there are a number of ways in which the overall complexity of this task can be increased. These need to be addressed. The two most significant ways in which the process complexity increases are through growth in the size of the architecture under consideration, and through an increase in the number of qualities we wish to consider, or optimise, during the process. These factors can be addressed separately, and their solutions are in fact integral parts of the overall optimisation process.

To deal with the scalability issues associated with the size of the architecture being optimised, we break the system down into smaller systems, subsystems, or components. This approach has the added advantage of allowing us to specialise optimisations for specific subsystems, as particular qualities will be more important, and provide a greater benefit, if optimised within a specific subsystem.

The scalability issues introduced by increasing the number of architectural qualities under consideration are a more complicated matter, and their management is more implicitly inherent in the optimisation process. As the number of qualities under consideration increases, so does the difficulty of balancing their relative importance. The management of these relative importances is precisely the role of the objective function discussed earlier.

3.4.2. Sensitivity

Architectural sensitivity is a problem that is inherent to all architectures. Specifically, where the architecture is highly sensitive, any small change to it voids the architecture, rendering it useless. An architecture which is particularly sensitive in this way is often referred to as being brittle. This problem is particularly evident in existing, older systems which have been modified and “hacked” into continuing to work. They no longer represent the original design goals of the system, but rather whatever it took to get them working.

Brittleness is a distinct factor in our optimisation process, as it has the potential to be both a limiting factor in its use and an introduced side effect of the optimisation process. A brittle architecture will be less amenable to accepting optimising changes, which can make it virtually impossible to optimise without extreme refactoring. Additionally, and perhaps more of a concern, is the potential for the optimisation process to introduce brittleness into the architecture. As the optimisation process steers towards the optimal architecture given the designer’s goals, it is likely to make small changes to squeeze every last benefit out of the architecture, probably at the expense of increasing the brittleness of the architecture. To this end it is very important that the brittleness of the architecture is managed during the optimisation process, lest we end up optimising to an architecture that is un-evolvable. This problem, like all architectural problems, is best dealt with in the initial design phase by specifically designing the system not to be brittle. Therefore our optimisation process must ensure that brittleness is included as a system quality that can be factored into the design. This may be via direct positive action, that is, having heuristics which decrease the brittleness of a system, or via passive monitoring of the quality and invalidating architectures which do not meet minimum requirements.

3.4.3. Composite Qualities

When optimising a complex computer system, we use its architectural qualities as both our goal and the measure of our achievement of those goals. However, certain architectural qualities are not independent measures, but rather a composite of other architectural qualities. For instance, in the example which we will present later, the security quality is discussed in terms of probability of compromise (PoC). This is a measure of how likely it is for someone to compromise the system by breaking into it. What this does not take into account is the extent of the damage that could be done should someone compromise the system. If we were to do this, it would be clear that the architectural quality of security is a combination of compromisibility and pervertability: compromisibility being how easy the architecture is to compromise and gain access to, and pervertability being how far the system can be diverted from its “correct” operation after it has been compromised.

This is of interest because it raises the question of how we handle composite qualities during optimisation. We feel an integrated approach is to allow manipulation of the architecture to be performed only on the independent qualities, with composite qualities used only in a measurement capacity. This has the advantage of allowing us to strengthen one independent quality that is part of a composite quality to such a point that the values of the other independent qualities do not adversely affect the final composite quality.

3.5. Architecture Generation

In most fields optimisation is performed by manipulating design variables until the desired optimum is achieved. The problem with this approach is that these design variables usually form part of a continuous function, at least within a certain constraint range. Further to this, the parameterisation of complex computer systems is not a straightforward process, and applying it generically to system architecture is fraught with complications. Therefore, to effect change we define heuristics [14], or rules, that can be used to improve certain system qualities [11]. If our system is below our desired level for a given quality, we can apply a heuristic for that quality to improve the system. It must be remembered that these heuristics make no guarantee of improving the overall system; rather, they only seek to improve their associated quality. In fact, it is highly likely that they will adversely affect another quality. It is when this happens that trade-offs must occur. Heuristics should ideally be historical, empirical collections of previous observations. In this way they are like patterns, which exist not through deliberate creation, but by virtue of observation. A detailed examination of the elicitation and application of heuristics is presented in our example in the next section. This illustrates how we can elicit heuristics for reliability and security, and then apply them to generate potentially optimised architectures.

4. Example Application

The following example demonstrates how the current research can be applied to complex computer systems, and is used to explore the issues that arise. This example investigates the optimisation of a simple architecture for two non-functional qualities.

Take the architecture given in Figure 1. This could be a complete architecture, or the subset of an architecture that has been identified as being sub-standard. It is comprised of a single component A and two connectors. To maintain simplicity, we have chosen not to use a formal architectural description language (ADL) but rather a basic “box-and-line” diagram.

[Figure 1. Example Architecture - Baseline: a single component A with a connector on either side]

The two architectural qualities selected are reliability and security. We will also be considering cost as an implicit factor; it is important to note that cost can also be an explicit factor. Before we can begin to look at optimising this architecture, we must have a method for calculating, or measuring, these architectural qualities.

To model reliability we determine the likelihood that the system will be functioning, given the likelihood of failure, or combination of failures, of any component or connector. To assess this we have to split the architecture into those parts where processing is performed serially, and those parts where it is performed in parallel. The equations shown below allow us to evaluate overall reliability (1), serial reliability (2), parallel reliability over two paths (3), and parallel reliability over three paths (4).

reliability = SR \times PR                                                    (1)

SR = \prod_{k} S_k                                                            (2)

PR_2 = \sum_{i=1}^{2} \prod_{k=1}^{m} B_{ik} - \prod_{i=1}^{2} \prod_{k=1}^{m} B_{ik}    (3)

PR_3 = \sum_{i=1}^{3} \prod_{k=1}^{m} B_{ik}
       - \sum_{(i,j) \in \{(1,2),(1,3),(2,3)\}} \prod_{k=1}^{m} B_{ik} B_{jk}
       + \prod_{i=1}^{3} \prod_{k=1}^{m} B_{ik}                               (4)

where:

SR      = serial reliability
PR_t    = parallel reliability with t paths
S_k     = reliabilities for elements in serial
B_{ik}  = reliabilities for elements in parallel
i       = branch number
k       = serial element in branch
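To make the reliability model concrete, here is a minimal Python sketch of equations (1)-(4); it is our own illustration (helper names and sample values are ours), with the two- and three-path cases generalised by inclusion-exclusion over independent branches:

    from itertools import combinations
    from math import prod

    def serial_reliability(elements):
        # Equation (2): the product of the reliabilities S_k of the
        # elements in a serial chain.
        return prod(elements)

    def parallel_reliability(branches):
        # Equations (3) and (4) by inclusion-exclusion: the probability
        # that at least one branch (itself a serial chain of B_ik values)
        # is still functioning, assuming independent failures.
        n = len(branches)
        total = 0.0
        for r in range(1, n + 1):
            for subset in combinations(branches, r):
                term = prod(serial_reliability(b) for b in subset)
                total += (-1) ** (r + 1) * term
        return total

    # Equation (1): overall reliability = serial section x parallel section.
    sr = serial_reliability([0.99, 0.99])        # e.g. two 99% connectors
    pr = parallel_reliability([[0.85], [0.85]])  # two parallel 85% components
    print(round(sr * pr, 3))                     # 0.958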

We will use a very simple model to describe security. Current methods for analysing security work on the basis of identifying possible vulnerabilities, and then assessing both their likelihood of occurrence and the impact should they occur [3, 8]. We will therefore choose to model security as the probability of compromise (PoC), which can be considered to be a combination of possible vulnerabilities and likelihood of occurrence. This allows for a very simple calculation of overall system security, as shown in (5). Thus the measure of system security is now the positive probability of the “all is well” scenario. This simplistic modelling approach serves our purpose since it represents “security” as a non-functional property of a system which is in direct contention with the “reliability” property specified earlier.

security = \prod_{i} (1 - PoC_i)                                              (5)

Cost is an implicit property of computer systems, and for all practical systems a very central one. There are many methods for calculating cost, but for our purposes we will assign a cost value to all components and connectors. Thus total system cost becomes the sum of all the individual component and connector costs (CC). We recognise that in its current form this costing model does not take into consideration such factors as maintenance and installation expenditure, which severely limits its ability to reflect the real cost of the system. Nevertheless, it was chosen to be a part of this example for its simplicity and its relative independence of the other two properties involved in the optimisation process.

cost = \sum_{i} CC_i                                                          (6)
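In the same spirit, here is a short sketch of equations (5) and (6); the code is ours, and the sample values correspond to the baseline architecture whose element properties appear in Table 2 later in this section:

    from math import prod

    def security(pocs):
        # Equation (5): the probability that no element is compromised,
        # treating each element's probability of compromise as independent.
        return prod(1.0 - poc for poc in pocs)

    def total_cost(costs):
        # Equation (6): total cost is the sum of all component and
        # connector costs CC_i.
        return sum(costs)

    # Two connectors (PoC 1%, cost 100) around one component (PoC 10%, cost 1,000).
    print(round(security([0.01, 0.10, 0.01]), 3))  # 0.882
    print(total_cost([100, 1000, 100]))            # 1200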

Now that we know what we are trying to optimise and how to measure it, we can look at how we will generate the new architectures. For this demonstration we will elicit two heuristics each for reliability and security.

If we replace a component in a system with another, more reliable component, then it follows that we should increase the overall reliability of the system. This gives us our first heuristic, R1. Another means by which we can improve reliability is to parallelise processing. If we duplicate processing across two, or more, parallel components, then even if one fails the overall system can still complete its function. This is our second heuristic, R2.

Applying this second heuristic, R2, is a much more involved action than R1. As the change to the architecture is more significant, the heuristic also needs to include how to maintain architectural integrity during its application. In the case of R2, where we are parallelising components, we cannot just add extra connectors to the system and expect the other components to accommodate the new connections. Rather, we must explicitly add new components to deal with these extra connections and preserve the existing interfaces in the architecture. For parallelisation this means the addition of splitter and joiner components that manage the parallelisation in situ. Figure 2 illustrates this application.

[Figure 2. Parallelised Component: a splitter fans out to two copies of component A, which a joiner merges back together]

We now consider security. One means by which security can be improved is to take a system, or component, that is vulnerable to attack and replace it with one that is less vulnerable. We shall include this as our first security heuristic, S1. Although the solution prescribed by this heuristic is almost identical to R1, our first reliability heuristic, it must be treated as a separate heuristic. This is because not only does it apply in different circumstances, but the overall goal of the heuristic is different. These are important distinctions to make when dealing with heuristics, as we believe it is critical to the correct application of heuristics to focus on the effect of the change, that is, on the goal, rather than on the change itself.

The larger the system, with increasingly more components and connectors, the more susceptible it is to a security breach. This can clearly be seen from (5). Thus it follows that if we reduce the number of elements in the system we will reduce its potential for being compromised, which in turn will increase the overall security of the system. This will be our second security heuristic, S2. All these heuristics are summarised in Table 1.

Reliability
  R1  Replace components with more reliable components
  R2  Parallelise components

Security
  S1  Replace components with more secure components
  S2  Reduce vulnerabilities by reducing the number of components

Table 1. Optimising Heuristics

Figure 3 shows how these heuristics can be applied to generate potentially optimised architectures. This map shows all the architectures that can be reached by two applications of each heuristic. To allow for this demonstration we have assumed we have two components, B and C. These components are functionally equivalent to A, but have better reliability and security respectively. As the application of each heuristic must leave the architecture of the system functionally intact, the only use of S2 in this example has been to reduce multiple identical components; in our case this happens to be the exact opposite of heuristic R2.
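This generation step is mechanical, as the following sketch suggests. It is our own illustration rather than tooling from the paper: architectures are crudely modelled as multisets of component names (connectors, splitters and joiners are left implicit, and S2 is omitted for brevity), and revisited designs are deduplicated, which is what limits the growth of the search space:

    # Architectures are modelled, very crudely, as sorted tuples of
    # component names.
    def apply_substitution(arch, old, new):
        # R1/S1-style heuristic: replace one occurrence of a component
        # with a functionally equivalent alternative.
        if old not in arch:
            return None
        result = list(arch)
        result[result.index(old)] = new
        return tuple(sorted(result))

    def apply_parallelise(arch, target):
        # R2-style heuristic: duplicate a component into a parallel pair.
        if target not in arch:
            return None
        return tuple(sorted(arch + (target,)))

    def generate(baseline, heuristics, rounds):
        seen = {baseline}
        frontier = {baseline}
        for _ in range(rounds):
            nxt = set()
            for arch in frontier:
                for h in heuristics:
                    candidate = h(arch)
                    if candidate and candidate not in seen:
                        seen.add(candidate)
                        nxt.add(candidate)
            frontier = nxt
        return seen

    heuristics = [
        lambda a: apply_substitution(a, "A", "B"),  # R1: more reliable B
        lambda a: apply_substitution(a, "A", "C"),  # S1: more secure C
        lambda a: apply_parallelise(a, "A"),        # R2: parallelise A
    ]
    print(sorted(generate(("A",), heuristics, rounds=2)))  # 7 unique designs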


[Figure 3. Change Map: nodes are the architectures 0-8 built from components A, B and C; labelled arrows indicate which heuristic (R1, R2, S1 or S2) was applied at each step]

From Figure 3 it can be seen how, with only two applications of the heuristics, we have already begun to explore the potential design space for these architectures. It is important to note that different iterative combinations of heuristics can in fact lead to the same solution. Looking at the example, the application of R1 followed by S1 is the same as just applying S1. The importance of this is that it shows that with continued application of heuristics many of the design paths are likely to return to already visited solutions. This implies that the state explosion of the search space will be significantly limited.

We see this heuristic-based method of generating new, and optimised, architectures as very promising. It allows us to embody both the knowledge and experience of professional system designers and the theoretical and analytical results of research. The ease with which these heuristics can be leveraged to generate, and manipulate, system architectures promotes their use as a very practical solution.

4.1. Illustrative Results

To show that the methods presented here have the potential to provide practical improvement in system design, we demonstrate their application. Continuing the previous example, we will show how the architecture given in Figure 1 can be optimised for reliability, security and cost.

Using the heuristics summarised in Table 1 we perform two rounds of architecture generation, as illustrated in Figure 3. The next step is to assign the properties associated with each element used in the architectures. These are given in Table 2. It should be noted that in assigning properties to the replacement components B and C we have kept the alternate property the same. That is, Component B, which is the more reliable component, has the same security as the original Component A. Although in reality such a clean replacement for a component is highly unlikely, it does provide for a much clearer example of the process.

Element        Reliability   PoC   Cost
Connector      99%           1%    100
Component A    85%           10%   1,000
Component B    95%           10%   3,000
Component C    85%           3%    2,600
Splitter       99%           2%    1,500
Joiner         99%           2%    1,500

Table 2. Initial Example Properties

With these properties we can calculate the reliability, security and cost of each architecture, using (1), (5) and (6) respectively. Details of these results for each architecture are in Table 3.

Architecture             Reliability   Security   Cost
A0: Baseline, (R2-S2)    83%           88%        1,200
A1: R1, (S1-R1)          93%           88%        3,200
A2: R2                   93%           73%        3,600
A3: S1, (R1-S1)          83%           95%        2,800
A4: R1-R2                96%           73%        7,600
A5: S1-R2                93%           85%        6,800
A6: R2-R1                95%           73%        5,600
A7: R2-S1                93%           79%        5,200
A8: R2-R2                96%           64%        4,800

Table 3. Calculated Qualities
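As a sanity check of the three models, a short sketch of ours (using the per-element values from Table 2) reproduces the baseline row of Table 3 directly:

    from math import prod

    # Per-element (reliability, PoC, cost) values from Table 2.
    CONNECTOR = (0.99, 0.01, 100)
    COMPONENT_A = (0.85, 0.10, 1000)

    baseline = [CONNECTOR, COMPONENT_A, CONNECTOR]  # Figure 1: conn - A - conn

    reliability = prod(r for r, _, _ in baseline)       # purely serial, eqs. (1)-(2)
    security = prod(1 - poc for _, poc, _ in baseline)  # eq. (5)
    cost = sum(c for _, _, c in baseline)               # eq. (6)

    # Matches the A0 row of Table 3: 83%, 88%, 1,200.
    print(f"{reliability:.0%} {security:.0%} {cost}")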

To demonstrate the objective function we use a weighted-sum objective function. Equation (7) is a slightly modified version of the weighted-sum found in [13]. The modifications have been introduced to normalise the differences in scaling between quality values. This normalisation is performed about a chosen “good value” (G_i), which represents the target value we are trying to achieve. We have also used a variable weighting function (w_i), (8), to reflect that we are not necessarily optimising for a specific target, but a general one. We use this to finalise our evaluations for each architecture.

J_w = \sum_{i} w_i \left| \frac{\mu_i(x) - G_i}{G_i} \right|                  (7)

w_i = (\mu_i(x) - \mu_{i,min}) \frac{w_{i,max} - w_{i,min}}{\mu_{i,max} - \mu_{i,min}} + w_{i,min}    (8)

When choosing the weightings to use with this objective function, we decided that we are interested in reliability, security and cost, in that order. The weightings for these qualities in the objective function reflect this, as shown in Table 4. In this table \mu_{i,max} and \mu_{i,min} refer respectively to the maximum and minimum values we are willing to accept for each quality.

Quality        w_{i,max}   w_{i,min}   \mu_{i,max}   \mu_{i,min}   G_i
Reliability    0.6         2.4         100           70            95
Security       0.8         2.2         100           70            90
Cost           4.0         0.8         10,000        1,200         4,000

Table 4. Objective Weightings

With our weightings now chosen, we have all the information we require to evaluate each generated architecture in terms of these objectives. As our objective function is a scaled measure of the distance from our chosen good values (G_i), the lower the objective value (J_w) the better. Table 5 summarises these results and shows us that A1 is our ideal architecture.

Rank   Architecture    J_w
1st    A1: R1          0.35
2nd    A2: R2          0.56
3rd    A3: S1          0.67
4th    A0: Baseline    0.78
5th    A7: R2-S1       0.91
6th    A8: R2-R2       1.12
7th    A6: R2-R1       1.34
8th    A5: S1-R2       2.08
9th    A4: R1-R2       3.20

Table 5. Objective Values
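Equations (7) and (8) can be checked mechanically. The sketch below is our own implementation; the weights and bounds come from Table 4, the quality values from Table 3, and it reproduces the Table 5 figures to within rounding:

    QUALITIES = {  # name: (w_max, w_min, mu_max, mu_min, G), per Table 4
        "reliability": (0.6, 2.4, 100, 70, 95),
        "security": (0.8, 2.2, 100, 70, 90),
        "cost": (4.0, 0.8, 10_000, 1_200, 4_000),
    }

    def weight(mu, w_max, w_min, mu_max, mu_min):
        # Equation (8): the weight varies linearly between w_min and w_max
        # over the acceptable range [mu_min, mu_max] of the quality.
        return (mu - mu_min) * (w_max - w_min) / (mu_max - mu_min) + w_min

    def objective(arch):
        # Equation (7): weighted, normalised distance from the good
        # values G_i; lower is better.
        total = 0.0
        for name, mu in arch.items():
            w_max, w_min, mu_max, mu_min, good = QUALITIES[name]
            total += weight(mu, w_max, w_min, mu_max, mu_min) * abs((mu - good) / good)
        return total

    a0 = {"reliability": 83, "security": 88, "cost": 1_200}
    a1 = {"reliability": 93, "security": 88, "cost": 3_200}
    print(round(objective(a0), 2), round(objective(a1), 2))  # 0.79 0.36 (cf. 0.78, 0.35)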

4.2. Analysis

It is interesting to reflect on why architecture A1 is the best architecture, as it is perhaps not the most obvious result. The two primary factors driving this result are the return on investment from the solution and our security model. One of the primary reasons this is the best solution is that it provides the greatest benefit for the least cost. For the small outlay of 3,200 it provides a good increase in reliability, which is the quality we are most interested in improving in this example. Most importantly, however, it does this without detriment to the overall security of the architecture. The second factor that directly shapes the result is our security model, as expressed by (5). This model of security is biased towards architectures with fewer components. The result of this bias is that solutions which use parallelism to improve reliability are punished for increasing the number of components. In practice, a more sophisticated security model would express factors of the parallelism that improve security, and thus counter this bias. This result can be viewed as an example of how over-design does not always produce the ideal solution.

Looking at the overall results and how they have been achieved, it can clearly be seen that the outcomes are heavily dependent on a number of key factors. These factors are the size of the architecture in question, the number and type of heuristics used to generate the architectures, and the objective function with its relative weightings. In our example we showed only those architectures that could be reached through two applications of each heuristic. If we had applied a third round of heuristics we would have generated a further 19 architectures, including duplicates. With a little analysis it can be shown that the number of architectures generated by these heuristics increases at a geometric rate. This rate does not take into consideration the fact that many of these architectures are in fact duplicates of architectures already generated. These duplicates will reduce the number of potential architectural optimisation paths that need to be followed. The rate at which these duplicates will retard the growth of the architectures is unknown at this point.

The number and type of heuristics will also play a critical role in the number of architectures generated and the extent of the design space explored. As can be seen from the example with the parallelisation heuristic R2, the more invasive the heuristic, the more potential architectures can be generated from that architecture in the next iteration. Similarly, if we increase the number of different heuristics that we can apply, then the number of architectures that can potentially be generated at each iteration will increase with respect to how many times each heuristic can be applied. The actual rate of this increase is heavily dependent on the actual nature of the heuristics.


5. Conclusion

Design optimisation has long been a feature of other engineering and design domains. However, we believe that the methods we have presented in this paper may provide the foundations for its application to the design optimisation of complex computer systems. By applying the techniques developed for multidisciplinary design optimisation (MDO) to the optimisation of architectural qualities, we have made the optimisation of qualities at design time a practical reality. This approach allows for integrated trade-off management between competing qualities. The iterative utilisation of heuristic-based improvement allows us to explore the potential design space for an architecture and find the most appropriate solution given the trade-offs involved. This method is also flexible enough to operate with any number of qualities; all that is required is a model for evaluating each quality and a collection of heuristics to improve it. The extensibility and flexibility of this approach also make it ideal for automation. We believe it is possible that this approach can be developed into a tool that would allow the automation of architectural design optimisation with little input required from the designer. Further research will look at creating a formal methodology that utilises the approach presented in this paper.

Acknowledgements

This work was funded in part by a University of Technology, Sydney (UTS) scholarship for Cameron Maxwell. Funding was also provided in part by the Distributed Systems Technology Centre (DSTC) as a scholarship for Artem Parakhine.

References

[1] J. Bosch and P. Bengtsson. Assessing optimal software architecture maintainability. In Proceedings of the Fifth European Conference on Software Maintenance and Reengineering, page 168. IEEE Computer Society, 2001.
[2] G. Bucci and D. N. Streeter. A methodology for the design of distributed information systems. Communications of the ACM, 22(4):233-245, 1979.
[3] J. C. N. Payne. Using composition & refinement to support security architecture trade-off analysis. In Proceedings of the 22nd National Information Systems Security Conference, 1999.
[4] S. Ganguly. Algorithmic modifications to a multidisciplinary design optimization model of containerships. Master's thesis, Virginia Polytechnic Institute and State University, 2002.
[5] S. S. Gokhale. Cost constrained reliability maximization of software systems. In Reliability and Maintainability, 2004 Annual Symposium (RAMS), pages 195-200, January 26-29, 2004.
[6] C. L. Hwang and A. S. M. Masud. Multiple Objective Decision Making - Methods and Applications. Lecture Notes in Economics and Mathematical Systems, (164). Springer-Verlag, Berlin, 1979.
[7] R. Kazman, J. Asundi, and M. Klein. Quantifying the costs and benefits of architectural decisions. In Proceedings of the 23rd International Conference on Software Engineering, pages 297-306. IEEE Computer Society, 2001.
[8] R. Kazman, M. Klein, M. Barbacci, T. Longstaff, H. Lipson, and J. Carriere. The architecture tradeoff analysis method. In Fourth IEEE International Conference on Engineering of Complex Computer Systems (ICECCS '98), 1998.
[9] S. Kodiyalam and J. Sobieszczanski-Sobieski. Multidisciplinary design optimization - some formal methods, framework requirements, and application to vehicle design. International Journal of Vehicle Design (Special Issue), pages 3-22, 2001.
[10] A. N. Lokanathan, J. B. Brockman, and J. Renaud. A multidisciplinary optimisation approach to integrated circuit design. In Proceedings of Concurrent Engineering: A Global Perspective (CE95), pages 121-129, McLean, USA, August 23-25, 1995.
[11] L. Lundberg, J. Bosch, D. Häggander, and P.-O. Bengtsson. Quality attributes in software architecture design. In Proceedings of the IASTED 3rd International Conference on Software Engineering and Applications, pages 353-362, October 1999.
[12] L. Lundberg, D. Häggander, and W. Diestelkamp. Conflicts and trade-offs between software performance and maintainability. In Performance Engineering, State of the Art and Current Trends, pages 56-67. Springer-Verlag, 2001.
[13] A. Messac. From dubious construction of objective functions to the application of physical programming. AIAA Journal, 38(1):155-163, January 2000.
[14] E. Rechtin. The Art of Systems Architecting. CRC Press, 2nd edition, 2000.
[15] J. von Neumann and O. Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, New Jersey, 1947.
[16] E. Zitzler. Evolutionary algorithms for multiobjective optimization. In Evolutionary Methods for Design, Optimisation and Control. CIMNE, Barcelona, Spain, 2002.
