Exploiting Non-Functional Preferences in Architectural Adaptation for Self-Managed Systems

Daniel Sykes, William Heaven, Jeff Magee, Jeff Kramer
Department of Computing, Imperial College London
{daniel.sykes, william.heaven, j.magee, j.kramer}@imperial.ac.uk

ABSTRACT

Among the many challenges of engineering dependable, self-managed, component-based systems is their need to make informed decisions about adaptive reconfigurations in response to changing requirements or a changing environment. Such decisions may be made on the basis of non-functional or QoS aspects of reconfiguration in addition to the purely functional properties needed to meet a goal. We present a practical approach for using non-functional information to guide a procedure for assembling, and subsequently modifying, configurations of software components, and compare the performance of two variants of the approach. In addition, we outline a scheme for monitoring non-functional properties in the running system such that more accurate information can be utilised in the next adaptation.

Categories and Subject Descriptors
D.2.11 [Software Engineering]: Software Architectures

General Terms
Management, Design, Reliability

Keywords
Self-adaptive, software architecture, dynamic reconfiguration, autonomous systems, non-functional properties

1. INTRODUCTION

One of the distinguishing characteristics of an autonomous, selfmanaging system is that it is situated in the real world, subjecting it to unpredictable changes of circumstance. Adaptation is a mechanism that provides the system with greater dependability by enabling it to continue meeting its goals and requirements despite its runtime context changing from that envisaged at design time. This dynamic aspect places adaptation in contrast to typical methods of verification, which are—despite their importance in the design stage—rigidly bound to the brittle assumptions made by a designer.



In previous work [19, 20, 7] we have described a framework for engineering self-managed component-based systems via the instantiation of a three-layer conceptual model [8], building on experiences gained in the robotics field [6]. The layers in this model stratify the operation of a self-managed system in order of computational cost, and hence responsiveness. The lowest layer (the component layer) is concerned with near real-time reactive behaviours, the middle layer (change management layer) with the composition and sequencing of those behaviours, and the uppermost layer (goal management layer) with the deliberative synthesis of a system controller [12], in the form of a reactive plan [9]. The system controller prescribes the behaviour required of a system in satisfying its given functional goals.

Adaptation is provided for in each of the layers, with feedback passing from lower to upper layers, with the intent that the most expensive forms of adaptation occur the least often, as shown in Figure 1. Feedback passes from the component layer to the change management layer when a failure occurs in the component layer. If the change management layer is not able to adapt to the failure within the current plan, it requests a new plan from the goal management layer.

Taking an architectural approach in instantiating this model [8], one of the functions of the change management layer (the middle layer) is to construct and subsequently modify a configuration of software components which implement the application behaviour in the bottom layer. This relieves the designer of the onerous responsibility of explicitly determining the many configurations the system may use and the conditions under which they are to be used, such that the system may derive hitherto unforeseen solutions. Assembly takes place once before execution of the application begins (before the plan is begun) and recurs as necessary when components fail. Components may fail as a result of ordinary implementation bugs or because their assumptions about the execution environment (such as the availability of a certain service or some property of the physical world) have been broken.

The algorithm previously described [20] addresses the functional requirements of the plan and the correctness of the configuration with respect to the stated requirements of each component and the matching of interfaces against these requirements. Where there are several ways to meet a functional requirement, the algorithm makes an arbitrary choice. However, only the most dispassionate patron would regard two functionally equivalent structures as identical. Other concerns such as architectural style, performance and cost must surely guide the decision [16]. Accordingly, we have augmented our basic approach to take these concerns into account. In this paper, we focus on how such non-functional (including QoS) properties can be used to guide the assembly process, leading to more informed choices between alternative implementations.

Figure 1: Three-layer conceptual architecture.

Consider, for example, the components that might be required to implement a program to provide location-based information (such as restaurants, hotels and road traffic) on a modern smart phone (an example partly inspired by the presence of such non-functional properties in the Google Android API). There are two essential requirements: (i) a component to provide the user's current location and (ii) a component to provide information about services near to that location. The first of these can be satisfied by using GPS (which has high accuracy, but drains the battery) or by using the mobile network itself (with less accuracy but minimal battery drain). The second requirement can be satisfied using an online search (which is more up to date, but incurs a charge from the network), or by looking in a database stored on the phone (which is free, but possibly outdated). Already, in this small example, the alternative implementations give rise to four possible component configurations, making it onerous for the developer to specify the conditions under which each configuration is to be used. In any large system it would become impractical to specify every possible combination. Repair strategies [5] can mitigate the complexity by only specifying changes, but remain bound to the particular components available at design time, and must be rewritten if new components become available.

We propose an approach that allows the system to derive all possible configurations, even those not conceived of at design time (because they involve new components), and to choose between those configurations at runtime on the basis of their non-functional properties. When the system encounters a failure, it is able to switch to an alternate configuration and continue execution. Our approach makes use of whatever non-functional information the designer has provided a priori, regardless of its correctness. The assembly process treats this information as describing the designer's preferences, rather than strict constraints on the resulting configuration. To provide guarantees about the satisfaction of such requirements demands design-time analysis, which runs contrary to the overall objective of minimising user involvement in the adaptation of a self-managed system. Thus, we propose an approach that, while not providing guarantees of this sort, is fully automated, and able to take account of more reliable NF information as it is updated by a monitoring regime.

Many existing approaches operate in a "reactive" manner, monitoring the running system and using the violation of a non-functional requirement as a trigger for the adaptation process, without explicitly considering non-functional properties in the derivation of the initial configuration. In contrast, our approach aims to be "proactive", exploiting whatever information may be available for architectural assembly, and instead using runtime monitoring to mitigate the limited accuracy of this initial information.

In Section 2 we review some of the related work on using non-functional properties in self-managed systems. We briefly describe the background of architectural assembly in Section 3, before presenting and comparing two approaches for exploiting non-functional properties to guide assembly in Section 4. We discuss monitoring in Section 5 and then revisit the smart phone example in Section 6. Finally, in Section 7, we briefly highlight how non-functional preferences might also be exploited in our goal management layer. We conclude in Section 8.

2. RELATED WORK

There is much varied work studying the relationship between non-functional properties and adaptation (and architecture). We review some of the more relevant approaches here.

In the world of autonomic computing, Walsh et al. [21] use utility functions to adapt data centres in response to particular non-functional properties such as latency. Adaptation is achieved by reallocating resources (of which there is a fixed amount) across the data centre in order to maximise the utility.

Garlan et al. [5], in their work on Acme, enable architectural adaptation by writing repair strategies which work within an architectural style. Each component declares a number of non-functional properties to be monitored by the running system. If a constraint is violated, a repair strategy is invoked in response. Work on Rainbow, by Cheng et al. [3], takes this idea further by calculating a utility for each of the repair strategies so that the most appropriate response to the constraint violation can be selected. For example, each repair strategy has a cost, such as disruption of application execution, and a benefit, such as a gain in performance. This removes the reliance on the designer knowing which strategies are more appropriate; however, it does assume the designer's non-functional description of each strategy is correct. There is no mechanism for updating the information if, for example, one particular strategy causes far more disruption at runtime than expected.

Poladian et al. [17] describe a mechanism for predicting resource availability to enable pre-emptive adaptation, so that the resulting utility might be maximised. This is motivated by the observation that merely reactive adaptation (in response to changes in availability) leads to sub-optimal utility. In addition, frequent adaptation is discouraged by giving an increased utility to components that are already running.

In a recent paper, Epifani et al. [4] present an approach (KAMI) for updating non-functional properties provided a priori after monitoring their real values, in order to improve a model of the system. Non-functional properties in this case are probabilities in a discrete-time Markov chain describing the system behaviour. Furthermore, given some reliability requirements, KAMI can detect and even predict when the system will violate its requirements.

The MADAM project [2, 10] considers various approaches for finding component configurations which have the optimal combination of non-functional properties (with respect to resource constraints), noting that the solution space is exponential in the number of variation points. However, the authors do not consider the interaction between architectural constraints and non-functional properties beyond observing that the former (when pre-evaluated) can restrict the search space of the latter. Since we are not concerned with constrained resources (nor the complication of variable hardware), when exponential search through the space of NF properties occurs, it is only as a consequence of having to satisfy architectural constraints.

The MUSIC project [15] improves upon MADAM by dividing the configurations into packs of components which can adapt independently.


These packs can then be distributed such that adaptation in two packs may happen in parallel, effectively reducing the complexity of the problem.

The work which most strongly resembles ours comes from the area of web services [14, 22]. The Dino service broker, by Mukhija et al. [14], matches the functional and non-functional requirements of the service requester against the provisions of each service provider. Each provider states its expected value for a non-functional property, and its confidence that the value can be achieved. The property is then monitored by the broker, and the provider's utility is penalised (through the inclusion of a trustworthiness property) when the service agreement is violated, which is to say, when the provider's non-functional property diverges from the advertised value. This also triggers an adaptation to select an alternative provider. One difference between Dino and our approach is that in Dino each selection is made independently, disregarding global concerns that may require a choice which appears sub-optimal given only local information. Baresi et al. [1] likewise use monitors to ensure that web services do not violate their agreements.

Mokhtar et al. [13] propose an approach similar to Dino wherein web services are semantically matched against a state machine representing the user's task. Those which match are then compared by calculating the likely value of their non-functional properties, and checking that they do not exceed some constraint. The set of non-functional properties is restricted to those which can be calculated (such as latency), whereas in our approach no such restriction is necessary.

3. FUNCTIONAL ASSEMBLY

Before considering assembly and re-assembly using non-functional information, it is necessary to address functional assembly, since this determines the correctness of a component configuration independently of the non-functional properties. In fact, it is the observation that functional assembly often gives multiple alternative solutions that provides part of the motivation for the incorporation of non-functional information, in order to avoid making a "blind" choice.

The assembly process, then, is divided into three major steps. The first, called the dependency analysis, generates valid configurations with respect to the behavioural capabilities prescribed by the system's controller (encoded in the reactive plan provided by the goal management layer). This step interacts with a second, called constraint checking, which checks which of the valid configurations conform to any structural constraints provided by the user (such as whether the configuration conforms to an architectural style), and failing that, finds configurations which do meet the constraints. The final part of the process, called utility checking, is to incorporate the non-functional properties to make the best choice between configurations.

Thus, the procedure starts by identifying what components are required to execute the current reactive plan. As mentioned, the plan encodes a system controller, which, for a given system goal, prescribes an action towards that goal for each system state (from which it is reachable). This may be represented as a map from system states to actions:

Ctrl : States → Actions

The required actions, range(Ctrl), are then mapped to component interfaces using a predefined correspondence between action and implementation:

Implements : Actions → Interfaces

The initial input to the dependency analysis, then, is the image of Implements under range(Ctrl), that is, Implements[range(Ctrl)].


Figure 2: Generating a configuration for start_goto_X.

This is the set of interfaces for which implementations must be found in order to execute the plan. In other words, this set encodes the functional requirements of the plan.

The dependency analysis step is described in detail in [20], so we provide only a brief overview here. The algorithm takes the set of interfaces Implements[range(Ctrl)], and uses the dependencies between the required and provided interfaces of each component to generate candidate configurations. Each component has a number of (named) ports, and each port requires or provides a single named interface. A configuration is constructed by instantiating components and connecting required ports to the provided ports of another component where the interface type matches. A complete configuration contains no component with an unsatisfied requirement.

Figure 2 shows how a configuration would be generated to perform a start_goto_X action (which in the robotics domain could mean the robot should start moving towards a target). The system is aware of an interface provided by the GoToTask that implements the start_goto_X action. Then, to complete the configuration, it considers the requirements of the GoToTask (Location and Motors), and their requirements in turn. In this case there are two alternatives for the Location requirement. SLAM (simultaneous localisation and mapping) is used as the SkyCamera has previously experienced a failure. SLAM has a further requirement (Camera) satisfied by the Webcam. This process continues recursively until all requirements are satisfied.

We assume that in general there are multiple implementations of each interface, and so there are several candidate configurations which provide the same functionality. This is what provides opportunities for adaptation in the change management layer.

Each of the candidates is checked against any structural constraints that the user has provided. Any candidates which are not valid according to the constraints are vetoed. However, since the only way to satisfy the constraints may be to add a component which was not previously in any of the candidates (the simplest constraint would require the presence of an arbitrary component), each candidate which fails the constraint check is extended to create a number of new candidates. Each of the extended candidates is returned to the dependency analysis for completion to ensure the requirements of the new component are satisfied. Hence, for every vetoed candidate there are at most 2^n new candidates, where n is the number of components not present in the vetoed candidate. The consequences of this are different in each of the non-functional assembly approaches, as explained below.
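As a concrete illustration of the dependency analysis, the following Python sketch enumerates complete configurations by recursively satisfying required interfaces. It is a minimal sketch under simplified assumptions (components are modelled as flat sets of required and provided interfaces, without named ports), and the names are ours rather than the framework's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    provides: frozenset   # interfaces offered on provided ports
    requires: frozenset   # interfaces needed on required ports

def candidates(required, registry, chosen=frozenset()):
    """Yield complete configurations satisfying every interface in `required`.

    `registry` maps an interface name to the components providing it.
    A requirement already met by a chosen component is not re-satisfied,
    which also ensures termination when dependencies are cyclic.
    """
    unmet = {i for i in required
             if not any(i in c.provides for c in chosen)}
    if not unmet:
        yield chosen                              # no unsatisfied requirements
        return
    iface = next(iter(unmet))                     # pick one unmet requirement
    for provider in registry.get(iface, ()):      # try each alternative provider
        yield from candidates(unmet - {iface} | provider.requires,
                              registry, chosen | {provider})
```

Starting this search from the set Implements[range(Ctrl)] yields the candidate configurations that are then subjected to the constraint and utility checks.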

4. NON-FUNCTIONAL ASSEMBLY

To incorporate non-functional information into the assembly process, the designer defines a set of non-functional properties, NFProp. It is important to note that this set may be entirely domain-specific, since the particular meaning of each property is not necessary during assembly (though it is for monitoring). The user may be interested in the monetary cost of using a particular component, its performance, whether it uses a lot of battery life or consumes a lot of memory. Furthermore, the user can provide as little or as much information as is available, with the caveat that assembly falls back to arbitrary choice where no information is provided. Requiring the user to provide full and accurate information would necessitate detailed design-time analyses which may have to be repeated every time a new component becomes available. This places user interaction in the adaptation feedback loop, which is undesirable for a self-adaptive system. Moreover, it may not be possible to perform such analyses on commercial (COTS) components.

The NF information can be captured using annotations in an ADL description of a component. For example, assume there is a Darwin-like [11] (UML-compatible) description of two components C1 and C2 which both provide interface I:

component C1 [cpu=high, memory=5k] { prov a : I }
component C2 [cpu=low, memory=100M] { prov b : I }

Here each component is annotated with a set of pairs naming a non-functional property, and the value of the property for that component. The intended meaning may be that C1 uses a lot of CPU resources, but little memory, with the contrary holding for C2. We do not ascribe meaning to these natural-language annotations during assembly, and rely on the designer providing a utility function for each property to produce a value representing how useful that property value is. Operational meaning is, however, necessary for monitoring when, for example, it should be possible to update the annotation of C1 from memory=5k to memory=1M on the basis of data gathered at runtime.

For each property p in NFProp, the user defines a utility function u_p which maps a property value (which may be continuous or discrete) onto a real number between 0 and 1, where 1 represents the most useful value. For example, the utility function for memory may be defined as

u_mem(5k) = 0.9
u_mem(100M) = 0.1

with intermediate values linearly interpolated between these two. In addition, a relative weight between 0 and 1 is associated with each property, indicating its importance to the user. This way, the user can indicate their preference for candidates which offer maximal utility in particular dimensions, such as maximal reliability or minimal cost. Note that the set of all weights must sum to 1.

With this information, the total utility of a component can be calculated from the property values by taking a weighted sum, resulting in a value between 0 and 1, placing functionally equivalent components in a partial order. In other words, the utility U(c) of a component c is

U(c) = Σ_{p ∈ NFProp} w_p × u_p(c.p)

where c.p indicates the value of p for component c (if it is not provided then the utility is assumed to be 1), and w_p is the weight for property p. Hence, with the additional definition of u_cpu as

u_cpu(low) = 0.9
u_cpu(high) = 0.1

and two weights representing the user's preference between the cpu and memory properties,

w_cpu = 0.6
w_mem = 0.4

the utility of components C1 and C2 can be calculated as

U(C1) = 0.1 × w_cpu + 0.9 × w_mem = 0.42
U(C2) = 0.9 × w_cpu + 0.1 × w_mem = 0.58

Assuming that cpu and memory are the only properties of interest, C2 should be chosen over C1 to provide interface I. In general, the components with the highest utility should be selected, but as described in Section 4.2, making this decision with only local information may lead to a configuration that is globally sub-optimal.

Having defined the utility of a single component, it is now possible to find the aggregate utility of a configuration, in order to choose between candidates, by taking the average utility of all the components in the configuration:

U_agg(arch) = (Σ_{c ∈ arch} U(c)) / |arch|

However, this method for calculating the utility of a configuration masks a significant assumption, namely, that the component annotations are correct regardless of whether the component works in isolation or in a large configuration; in other words, that non-functional properties are compositional. It is trivial to conceive of a situation where this is not the case, such as a configuration involving large numbers of individually fast components, but exhibiting poor overall performance. More obscure situations can arise if components compete for resources such as a hard disk. We address (but do not solve) this problem by using monitoring to update the non-functional annotations, so that they more accurately reflect how the components actually behave at runtime. A general solution for accurate, compositional annotations requires significant further investigation.

And yet, with our simplifications, the task of finding an optimal configuration given non-functional information remains a difficult one. We now present two approaches for selecting configurations using non-functional properties. The first uses the aggregate utility to choose between a list of completed candidates. The second is a greedy algorithm which uses the utility of components to make local choices. In order to evaluate these approaches, we consider (i) whether the approach chooses the configuration with maximal aggregate utility and (ii) the overhead in terms of processing time to use the approach.
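Before describing the two selection approaches, the utility calculation above can be made concrete. A minimal sketch in Python, assuming (as stated) that a missing annotation defaults to a utility of 1; the table-driven utility functions stand in for the designer-supplied u_p, and linear interpolation between known values is omitted.

```python
# Designer-supplied utility functions and weights for the C1/C2 example.
utility_fns = {
    "cpu":    {"high": 0.1, "low": 0.9},
    "memory": {"5k": 0.9, "100M": 0.1},   # interpolation omitted for brevity
}
weights = {"cpu": 0.6, "memory": 0.4}     # weights must sum to 1

def component_utility(annotations):
    """U(c): weighted sum of per-property utilities; missing values count as 1."""
    return sum(w * utility_fns[p].get(annotations.get(p), 1.0)
               for p, w in weights.items())

def aggregate_utility(configuration):
    """U_agg: average component utility across a configuration."""
    return sum(map(component_utility, configuration)) / len(configuration)

C1 = {"cpu": "high", "memory": "5k"}
C2 = {"cpu": "low",  "memory": "100M"}
print(round(component_utility(C1), 2), round(component_utility(C2), 2))  # 0.42 0.58
```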

4.1 Aggregate Selection

Aggregate selection adds the third independent step to the assembly process, which is to compute the aggregate utility (as defined above) for all the complete, constraint-satisfying candidate configurations produced by the previous two steps. The candidate with the maximum utility is selected. If there are two candidates with the same utility, the smaller one (in terms of the number of components) is selected. Figure 3 shows an overview of the procedure.

Figure 3: Overview of aggregate selection.

For example, consider the components C1 to C8 below.

component C1 { req i:I; prov a:A }
component C2 { req j:J; prov i:I }
component C3 { req k:K; prov i:I }
component C4 { prov j:J }
component C5 { prov j:J }
component C6 { prov k:K }
component C7 { prov k:K }
component C8 { prov k:K }

Assume that the reactive plan requires interface A, necessitating the selection of C1. C1 then requires C2 or C3 via interface I, which in turn require C4 to C8 via interfaces J and K. This information is depicted in Figure 4, which also includes the utility of each component (for simplicity we omit the particular non-functional properties). Assume also that there is a structural constraint stating that valid configurations do not contain C8.

Figure 4: Example dependency graph with utilities. The tree structure is incidental; cycles are common.

Hence, with aggregate selection, there are 4 candidate configurations that are both complete (no requirements unsatisfied) and valid (constraints are satisfied). These are shown in the following table, with their aggregate utilities:

Candidate        U_agg
{C1, C2, C4}     0.6
{C1, C2, C5}     0.55
{C1, C3, C6}     0.5
{C1, C3, C7}     0.45

Given this list, it is a simple matter to choose the configuration with the highest utility, {C1, C2, C4}. However, in the process of deriving this list the configuration {C1, C3, C8} will have been generated and seen to fail the constraint check (viz. a configuration must not contain C8). Since the system is entirely ignorant of what needs to be added or removed to satisfy the constraint, it extends this candidate with further components (C8 cannot simply be removed since this would cause the candidate to be incomplete). In this case, 5 components (C2, C4, C5, C6 and C7) can be added to create a new candidate, giving a maximum of 31 extended candidates. Of these, 4 are not considered since they are not complete (that is, those that contain C2 without any of its dependencies), leaving 27 extended candidates such as {C1, C3, C8, C4, C6}, none of which pass the constraint check. Thus, while this method of selection has the advantage that it produces the solution with the maximum aggregate utility, it necessitates the generation of the full list of candidates, which (in the worst case) can be expensive.

4.2 Incremental Selection

Figure 5: Overview of incremental selection.

Incremental selection integrates the utility check into the dependency analysis, as shown in Figure 5. At each step of the dependency analysis, one required interface is under consideration, and all the providers of that interface are found. With incremental selection, the provider with the highest utility is selected (greedily), and the other alternatives are discarded (unless the constraint check vetoes the selection). In other words, this procedure produces a single candidate, which is the first to pass the constraint check.

If we again consider the components from the previous section, the dependency analysis starts with the incomplete configuration {C1} and finds C2 and C3 as the implementations of interface I. Since U(C3) > U(C2), C3 is added to the configuration to give {C1, C3}. Now the implementations of interface K are found, and since the maximum utility among these is U(C6) = 0.4, the next configuration is {C1, C3, C6}. Since this configuration is complete, and passes the constraint check, it is returned as the final configuration. However, notice that its aggregate utility is 0.5, which is less than the maximum. This is because the local choice between C2 and C3 does not take account of the low utility of the components implicated by choosing C3. In other words, a good choice now may be a bad choice later.

Thus, while this approach is much less costly, its solutions do not maximise the aggregate utility in many cases. In the next section, we quantify the differences between the two approaches by comparing their solutions and performance on a large number of components.
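Before turning to that comparison, the greedy step itself can be sketched. Since Figure 4 is not reproduced here, the individual utilities below are our own assumption, chosen to be consistent with the aggregates in the table (and with U(C3) > U(C2) and U(C6) = 0.4).

```python
# Assumed per-component utilities, consistent with the aggregates above.
utility = {"C1": 0.5, "C2": 0.55, "C3": 0.6, "C4": 0.75,
           "C5": 0.6, "C6": 0.4, "C7": 0.25, "C8": 0.3}
providers = {"I": ["C2", "C3"], "J": ["C4", "C5"], "K": ["C6", "C7", "C8"]}
requires = {"C1": ["I"], "C2": ["J"], "C3": ["K"],
            "C4": [], "C5": [], "C6": [], "C7": [], "C8": []}
vetoed = {"C8"}   # structural constraint: valid configurations exclude C8

def incremental(root):
    """Satisfy each requirement greedily with the highest-utility provider."""
    config, frontier = {root}, list(requires[root])
    while frontier:
        iface = frontier.pop()
        if any(c in providers[iface] for c in config):
            continue   # already satisfied by a chosen component
        best = max((c for c in providers[iface] if c not in vetoed),
                   key=utility.get)          # greedy, constraint-checked choice
        config.add(best)
        frontier.extend(requires[best])
    return config

cfg = incremental("C1")                           # {'C1', 'C3', 'C6'}
print(sum(utility[c] for c in cfg) / len(cfg))    # 0.5, below the optimum of 0.6
```

With these assumed values, enumerating all four valid candidates and averaging instead reproduces the aggregate-selection optimum of 0.6 for {C1, C2, C4}.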

4.3 Comparison

In order to compare the approaches we ran both on sets of randomly-generated components, increasing the size of the set each time. Each component provides or requires some number of interfaces (which are drawn from a small set), and is assigned a random utility.


Figure 6: Execution times for incremental and aggregate selection using randomly-generated components.

Figure 7: Execution times for aggregate selection against the number of valid candidates.

Figure 6 shows the time taken (in milliseconds) by each approach to generate a solution for sets containing between 40 and 120 components. In almost every case, incremental selection (Inc) takes less than 1ms to produce a solution. This is expected because its behaviour is not dependent on the number of components, but rather on the size of candidate configurations, which in these tests is normally less than 10 components. Aggregate selection, on the other hand, takes increasing lengths of time as the set of components grows. The results for aggregate selection do not form a smooth curve due to the varying number of candidate configurations permitted by each set of components (which is determined by the number of interfaces and the number of implementations of each interface, in addition to the size of the set). In fact, the time taken by aggregate selection increases linearly with the number of candidate configurations, as shown in Figure 7.

In addition to comparing the execution time, we compared the optimality of the chosen candidates with respect to the maximum aggregate utility. Since it is known that aggregate selection chooses the maximum aggregate utility, we can compare the candidate chosen by incremental selection using:

optimality(arch_inc) = U_agg(arch_inc) / U_agg(arch_agg)

where arch_inc is the candidate produced by incremental selection and arch_agg is that produced by aggregate selection. This allows us to say, for example, that given an aggregate utility of 0.4, produced with incremental selection, and a utility of 0.9, produced with aggregate selection, incremental selection achieves 44% of the maximum aggregate utility. Surprisingly, in all our random tests, incremental selection gave solutions between 90 and 100% of the maximum. In fact, the average over all runs in Figure 6 was 98.9%. This suggests that the computational cost incurred by aggregate selection considerably outweighs the benefit of optimality.

5. MONITORING

As indicated above, the success of the approach relies on the accuracy of the non-functional information provided for each component. There are two sources of inaccuracy: (i) the component never behaves as claimed and (ii) the component does not behave as claimed when placed in certain configurations (non-compositionality). In order to improve the accuracy of the annotations, the running system can be monitored to update the annotations with the observed values. However, we have deliberately avoided providing dedicated monitors for specific NF properties (however common they may be, such as performance), so that the system is general and flexible enough to support any non-functional property the designer may be interested in (such as the aesthetic quality of a GUI component). Thus, the designer is permitted to provide the semantics for each non-functional property by identifying a bespoke monitor component that observes the property's actual value at runtime. For example, a monitor for a performance property may use profiling to calculate the correct value of a component's NF annotation. Monitor components must implement (a subtype of) the NFMonitor interface, ensuring their inclusion in the generated component configuration. To get the latest values for each component, this interface is called by the configuration generator each time a reconfiguration is triggered.
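The shape such a monitor might take is sketched below. The NFMonitor interface is named in the text, but the paper does not give its signature, so the methods here (and the MemoryMonitor example) are our assumptions.

```python
from abc import ABC, abstractmethod

class NFMonitor(ABC):
    """Assumed shape of the NFMonitor interface described in the text."""

    @abstractmethod
    def property_name(self) -> str:
        """The non-functional property this monitor gives meaning to."""

    @abstractmethod
    def observed_value(self, component: str):
        """Latest observed value for `component`, or None if not yet seen."""

class MemoryMonitor(NFMonitor):
    """Bespoke monitor providing operational semantics for `memory`."""

    def __init__(self):
        self.peaks = {}   # component name -> peak observed memory use

    def record(self, component: str, bytes_used: int):
        self.peaks[component] = max(self.peaks.get(component, 0), bytes_used)

    def property_name(self) -> str:
        return "memory"

    def observed_value(self, component: str):
        return self.peaks.get(component)

# On each reconfiguration the configuration generator would query every
# deployed monitor and overwrite the stale annotation, e.g. revising
# memory=5k to memory=1M.
```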

6. EXAMPLE REVISITED

We now revisit the mobile phone example given in the introduction to show how configurations would be generated using our approach. Suppose there is a main component called ServiceFinder which encapsulates the logic for getting the user's current position, and requesting information about services near that location. ServiceFinder requires implementations of the interfaces Locator and ServiceLookup. The GPS and network locator are implementations of Locator, and online search and local database are implementations of ServiceLookup:

component GPSLocator [accuracy=10m, powerdrain=100mA] { prov gps:Locator }
component NetworkLocator [accuracy=100m, powerdrain=5mA] { prov network:Locator }
component OnlineServiceSearch [freshness=1, powerdrain=5mA, cost=£0.05] { prov search:ServiceLookup }
component ServiceDatabase [freshness=0.5] { prov db:ServiceLookup }

where "freshness" indicates how up-to-date the information is. These components would likely have further dependencies, but we omit them for simplicity. To calculate the utility of these components, utility functions for accuracy, power drain, cost, and freshness are required:

u_acc(10m) = 0.9, u_acc(100m) = 0.5
u_pow(100mA) = 0.2, u_pow(5mA) = 0.9
u_cost(£0) = 1, u_cost(£0.05) = 0.8
u_fresh(1) = 1, u_fresh(0.5) = 0.5

Suppose now that the weighting between these properties is determined either manually by the user or by observing the context (such as the battery level). Since the user cannot do without the phone, power drain is weighted very highly, with cost as a second important concern:

w_pow = 0.5, w_cost = 0.3, w_fresh = 0.1, w_acc = 0.1

Hence, the aggregate utility of the four candidate configurations is:

Candidate                                              U_agg
{ServiceFinder, GPSLocator, OnlineServiceSearch}       0.74
{ServiceFinder, GPSLocator, ServiceDatabase}           0.77
{ServiceFinder, NetworkLocator, OnlineServiceSearch}   0.895
{ServiceFinder, NetworkLocator, ServiceDatabase}       0.925

In this case both aggregate and incremental selection choose the candidate with the greatest utility, which is {ServiceFinder, NetworkLocator, ServiceDatabase}, reflecting the user's current preference to conserve money and power. Given different weightings, as the user's preferences change, different configurations are chosen. Likewise, if a selected component such as the ServiceDatabase were to fail, the configuration with the next highest utility – that including the OnlineServiceSearch – would be chosen.
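The table can be recomputed from the annotations and weights above. A small sketch, assuming missing annotations default to a utility of 1 as before; the reported values are reproduced when the average is taken over the two annotated components only, which suggests (an inference from the numbers, not stated in the text) that the unannotated ServiceFinder is not counted in the mean.

```python
utility_fns = {
    "accuracy":   {"10m": 0.9, "100m": 0.5},
    "powerdrain": {"100mA": 0.2, "5mA": 0.9},
    "cost":       {"£0": 1.0, "£0.05": 0.8},
    "freshness":  {1: 1.0, 0.5: 0.5},
}
weights = {"powerdrain": 0.5, "cost": 0.3, "freshness": 0.1, "accuracy": 0.1}

components = {
    "GPSLocator":          {"accuracy": "10m", "powerdrain": "100mA"},
    "NetworkLocator":      {"accuracy": "100m", "powerdrain": "5mA"},
    "OnlineServiceSearch": {"freshness": 1, "powerdrain": "5mA", "cost": "£0.05"},
    "ServiceDatabase":     {"freshness": 0.5},
}

def U(name):
    """Weighted-sum utility; unannotated properties default to utility 1."""
    ann = components[name]
    return sum(w * utility_fns[p].get(ann.get(p), 1.0) for p, w in weights.items())

for loc in ("GPSLocator", "NetworkLocator"):
    for svc in ("OnlineServiceSearch", "ServiceDatabase"):
        print(loc, svc, round((U(loc) + U(svc)) / 2, 3))
# GPSLocator OnlineServiceSearch 0.74
# GPSLocator ServiceDatabase 0.77
# NetworkLocator OnlineServiceSearch 0.895
# NetworkLocator ServiceDatabase 0.925
```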

7. PLANNING WITH NON-FUNCTIONAL GOALS

As outlined in the introduction, architectural assembly and reassembly is driven by reactive plans [9]. As we have seen, adaptation in the change management layer may be achieved by reconfiguring the runtime architecture to provide an alternative (functionally equivalent) means for the system to satisfy its goals, as prescribed by the current reactive plan. This paper focusses on how non-functional preferences may be exploited at this intermediate level of adaptation, i.e., when choosing between alternative component configurations. However, non-functional preferences could also be exploited in the goal management layer, during the synthesis of reactive plans. We will briefly sketch two ways they might usefully be incorporated into our existing process.

A detailed summary of our controller synthesis procedure is beyond the scope of this paper (see [19, 20, 7] for a full discussion), but it essentially involves taking a state machine representation of the full possible behaviour of the system and removing the states and transitions which neither satisfy, nor lead to the satisfaction of, the given system goals. We harness model-checking technology for part of this process. The current formulation of this procedure removes all but the shortest paths from each state to a goal state. The pruned state machine thus gives a reactive plan, since it prescribes just a single action for the system to execute in each possible state from which the goals are satisfiable.

While removing all but the shortest paths to the goal is sufficient from a purely functional point of view, it is somewhat arbitrary, and it is desirable to explore ways in which this pruning might be informed by non-functional preferences. One approach would be to provide utility values for transitions, in much the same way that we do for interface implementations above. The controller synthesis procedure would then look for paths with the highest overall utility value rather than the shortest length. A second approach is to explicitly express non-functional preferences as additional constraints when specifying system goals. For example, a preference for optimal temperature might be expressed as the constraint 0°C < temp < 100°C. In this case, a path including a transition with a utility value for temp outside this range would be discarded since it did not satisfy the given constraint. Rather than selecting paths with the highest overall utility value, the procedure would here look for paths satisfying a set of constraints.

We have yet to explore either of these options in detail. However, it would seem that exploiting non-functional preferences leads to more sophisticated strategies for self-management in both our change management and goal management layers.
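To make the first of the two options above concrete, the search it implies can be illustrated as follows. This is only a sketch of an idea the paper leaves unexplored, assuming transition utilities are summed along a path and that only simple (cycle-free) paths are considered.

```python
def best_path(transitions, start, goals):
    """Find the simple path of maximum total utility from `start` to a goal.

    `transitions` maps a state to (action, next_state, utility) triples.
    An exhaustive search, standing in for the pruning that controller
    synthesis would perform over the full state machine.
    """
    best_utility, best_actions = float("-inf"), None

    def dfs(state, visited, total, actions):
        nonlocal best_utility, best_actions
        if state in goals:
            if total > best_utility:
                best_utility, best_actions = total, actions
            return
        for action, nxt, u in transitions.get(state, ()):
            if nxt not in visited:               # keep paths cycle-free
                dfs(nxt, visited | {nxt}, total + u, actions + [action])

    dfs(start, {start}, 0.0, [])
    return best_utility, best_actions
```

The second option would amount to filtering transitions against the stated constraints (for example, discarding any transition whose temp value falls outside the 0°C to 100°C range) before searching for a satisfying path.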

8. CONCLUSIONS

This work describes a practical means of exploiting non-functional properties in an architectural assembly process with structural constraints, embedded within our overall model for engineering self-managed systems. Our approach makes use of whatever information is available a priori, and uses feedback from the running system in lieu of guarantees derived from design-time analysis. By not enforcing particular semantics for non-functional properties, our approach is general and flexible enough to support domain-specific non-functional properties.

Our experiments have demonstrated the trade-off to be made between aggregate selection and incremental selection, suggesting that the latter is a better choice, certainly for the robotics domain that we have presented previously [7] where computational power is limited. Incremental selection is also preferable in large-scale systems since its running time is normally (without many candidates being vetoed) linear in the size of candidates, whereas aggregate selection approaches exponential running time regardless of how many candidates are vetoed.

However, there remain several unresolved issues. Firstly, our approach considers non-functional properties local to each component, and assumes that the properties hold in all compositions. Unfortunately, this is often not the case as components contend for resources, indirectly interacting in ways not described at the architectural level. Work by other researchers such as [18] is likely to be relevant in overcoming this issue.

Another aspect the user is likely to be concerned about is the non-functional utility of the composition as a whole. Properties such as throughput are defined at the global level in terms of the properties of the components in the composition. The user may not be especially concerned about each component, but that the assembled configuration has good global properties. Indeed, the user may wish to give constraints (rather than preferences) on derived candidates, to be sure that non-functional requirements are being met, as in other approaches [5, 3, 2]. The violation of a non-functional requirement at runtime can be used as another impetus for reconfiguration. Moreover, it may be possible to combine non-functional constraints into the assembly process to reduce the search space by discarding violating candidates earlier.

An additional issue is that the current approach is centralised, requiring a single host to derive the full configuration and then, if the system is distributed, deploy parts of the configuration across the other hosts. A decentralised approach can remove the performance bottleneck of the central host by employing the otherwise unused processing power of peers to derive solutions, and can additionally make the system more tolerant of faults during adaptation. However, decentralisation introduces issues of consistency when hosts have only a limited view of the global system state. Resolving these issues is central to our ongoing research.

9. ACKNOWLEDGEMENTS

The work reported in this paper was funded by the Systems Engineering for Autonomous Systems (SEAS) Defence Technology Centre established by the UK Ministry of Defence.

10. REFERENCES

[1] L. Baresi, C. Ghezzi, and S. Guinea. Smart monitors for composed services. In ICSOC '04: Proceedings of the 2nd International Conference on Service Oriented Computing, pages 193–202, New York, NY, USA, 2004. ACM.
[2] G. Brataas, S. Hallsteinsen, R. Rouvoy, and F. Eliassen. Scalability of decision models for dynamic product lines. In International Workshop on Dynamic Software Product Lines (DSPL), pages 1–10, 2007.
[3] S. Cheng, D. Garlan, and B. Schmerl. Architecture-based self-adaptation in the presence of multiple objectives. In Proceedings of the 2006 International Workshop on Self-Adaptation and Self-Managing Systems, pages 2–8, 2006.
[4] I. Epifani, C. Ghezzi, R. Mirandola, and G. Tamburrelli. Model evolution by run-time parameter adaptation. In ICSE '09: Proceedings of the 2009 IEEE 31st International Conference on Software Engineering, pages 111–121, Washington, DC, USA, 2009. IEEE Computer Society.
[5] D. Garlan and B. Schmerl. Model-based adaptation for self-healing systems. In WOSS '02: Proceedings of the First Workshop on Self-Healing Systems, pages 27–32, New York, NY, USA, 2002. ACM Press.
[6] E. Gat. Three-layer architectures. In Artificial Intelligence and Mobile Robots: Case Studies of Successful Robot Systems, pages 195–210, 1998.
[7] W. Heaven, D. Sykes, J. Magee, and J. Kramer. A case study in goal-driven architectural adaptation. In Software Engineering for Self-Adaptive Systems, 2009.
[8] J. Kramer and J. Magee. Self-managed systems: an architectural challenge. In 2007 Future of Software Engineering, pages 259–268, 2007.
[9] M. Ghallab, D. Nau, and P. Traverso. Automated Planning: Theory and Practice. Morgan Kaufmann, 2005.
[10] MADAM. MADAM: Theory of Adaptation (technical report). http://www.intermedia.uio.no/display/madam/D2.2+-+Theory+of+Adaptation, 2006.
[11] J. Magee, N. Dulay, S. Eisenbach, and J. Kramer. Specifying distributed software architectures. In Proceedings of the 5th European Software Engineering Conference, pages 137–153, London, UK, 1995. Springer-Verlag.
[12] O. Maler, A. Pnueli, and J. Sifakis. On the synthesis of discrete controllers for timed systems (an extended abstract). In 12th Annual Symposium on Theoretical Aspects of Computer Science, volume 900, pages 229–242. Springer, 1995.
[13] S. Mokhtar, J. Liu, N. Georgantas, and V. Issarny. QoS-aware dynamic service composition in ambient intelligence environments. In Proceedings of the 20th IEEE/ACM International Conference on Automated Software Engineering, pages 317–320, New York, NY, USA, 2005. ACM.
[14] A. Mukhija, A. Dingwall-Smith, and D. Rosenblum. QoS-aware service composition in Dino. In Proceedings of the Fifth European Conference on Web Services (ECOWS '07), pages 3–12, Washington, DC, USA, 2007. IEEE Computer Society.
[15] MUSIC. MUSIC: Initial research results on mechanisms and planning algorithms for self-adaptation (technical report). http://www.ist-music.eu/MUSIC/docs/MUSIC-D12-AdaptationInitialResults.pdf/view, 2007.
[16] D. Perry and A. Wolf. Foundations for the study of software architecture. ACM SIGSOFT Software Engineering Notes, 17(4):40, 1992.
[17] V. Poladian, D. Garlan, M. Shaw, M. Satyanarayanan, B. Schmerl, and J. Sousa. Leveraging resource prediction for anticipatory dynamic configuration. In SASO '07: Proceedings of the First International Conference on Self-Adaptive and Self-Organizing Systems, pages 214–223, Washington, DC, USA, 2007. IEEE Computer Society.
[18] R. Staehli and F. Eliassen. Compositional quality of service semantics. In SAVCBS 2004: Specification and Verification of Component-Based Systems, page 62, 2004.
[19] D. Sykes, W. Heaven, J. Magee, and J. Kramer. Plan-directed architectural change for autonomous systems. In Proc. of ESEC/FSE Workshop on Specification and Verification of Component-Based Systems (SAVCBS), 2007.
[20] D. Sykes, W. Heaven, J. Magee, and J. Kramer. From goals to components: a combined approach to self-management. In Proc. of ICSE Workshop on Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 2008.
[21] W. Walsh, G. Tesauro, J. Kephart, and R. Das. Utility functions in autonomic systems. In Proceedings of the International Conference on Autonomic Computing, pages 70–77, 2004.
[22] X. Wang, T. Vitvar, M. Kerrigan, and I. Toma. A QoS-aware selection model for semantic web services. Lecture Notes in Computer Science, 4294:390, 2006.

